Sample records for linear scale model

  1. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    PubMed

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to the mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank (up to 26 mm); this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
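
    The non-linear transformation step in this record can be sketched with a radial-basis-function warp driven by paired bone landmarks. This is not the authors' implementation; the landmark counts, coordinates, and the use of scipy's RBFInterpolator are illustrative assumptions.

    ```python
    # Hypothetical sketch: non-linearly warp muscle path points using paired
    # bone landmarks (e.g., an SSM-reconstructed subject bone vs. the mean shape).
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    ref_landmarks = rng.uniform(0, 100, size=(50, 3))            # landmarks on reference (mean-shape) bone
    subj_landmarks = ref_landmarks + rng.normal(0, 2, (50, 3))   # same landmarks on the subject's bone

    # Thin-plate-spline warp mapping reference space -> subject space
    warp = RBFInterpolator(ref_landmarks, subj_landmarks, kernel="thin_plate_spline")

    muscle_path_ref = rng.uniform(0, 100, size=(12, 3))   # via/attachment points on the reference model
    muscle_path_subj = warp(muscle_path_ref)              # non-linearly scaled muscle path
    print(muscle_path_subj.shape)                         # (12, 3)
    ```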

  2. Consistency between hydrological models and field observations: Linking processes at the hillslope scale to hydrological responses at the watershed scale

    USGS Publications Warehouse

    Clark, M.P.; Rupp, D.E.; Woods, R.A.; Tromp-van Meerveld; Peters, N.E.; Freer, J.E.

    2009-01-01

    The purpose of this paper is to identify simple connections between observations of hydrological processes at the hillslope scale and observations of the response of watersheds following rainfall, with a view to building a parsimonious model of catchment processes. The focus is on the well-studied Panola Mountain Research Watershed (PMRW), Georgia, USA. Recession analysis of discharge Q shows that while the relationship between dQ/dt and Q is approximately consistent with a linear reservoir for the hillslope, there is a deviation from linearity that becomes progressively larger with increasing spatial scale. To account for these scale differences conceptual models of streamflow recession are defined at both the hillslope scale and the watershed scale, and an assessment made as to whether models at the hillslope scale can be aggregated to be consistent with models at the watershed scale. Results from this study show that a model with parallel linear reservoirs provides the most plausible explanation (of those tested) for both the linear hillslope response to rainfall and non-linear recession behaviour observed at the watershed outlet. In this model each linear reservoir is associated with a landscape type. The parallel reservoir model is consistent with both geochemical analyses of hydrological flow paths and water balance estimates of bedrock recharge. Overall, this study demonstrates that standard approaches of using recession analysis to identify the functional form of storage-discharge relationships identify model structures that are inconsistent with field evidence, and that recession analysis at multiple spatial scales can provide useful insights into catchment behaviour. Copyright © 2008 John Wiley & Sons, Ltd.
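
    The parallel-reservoir argument in this record can be reproduced numerically. A minimal sketch with two hypothetical linear stores (parameter values are illustrative, not PMRW calibrations), showing that the aggregate recession no longer behaves like a single linear reservoir:

    ```python
    import numpy as np

    # Two parallel linear reservoirs: dS_i/dt = -k_i * S_i, Q_i = k_i * S_i.
    k = np.array([0.5, 0.05])     # day^-1; fast (hillslope) and slow stores, illustrative
    S0 = np.array([10.0, 10.0])   # mm of storage at the start of recession

    t = np.linspace(0, 60, 601)   # days
    Q = (k * S0 * np.exp(-np.outer(t, k))).sum(axis=1)   # total outflow, mm/day

    # Recession analysis: slope of log(-dQ/dt) vs log(Q) is 1 for one linear reservoir
    dQdt = np.gradient(Q, t)
    b = np.polyfit(np.log(Q[1:-1]), np.log(-dQdt[1:-1]), 1)[0]
    print(f"apparent dQ/dt ~ Q^b with b = {b:.2f} (b = 1 for a single linear reservoir)")
    ```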

  3. The role of zonal flows in the saturation of multi-scale gyrokinetic turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staebler, G. M.; Candy, J.; Howard, N. T.

    2016-06-15

    The 2D spectrum of the saturated electric potential from gyrokinetic turbulence simulations that include both ion and electron scales (multi-scale) in axisymmetric tokamak geometry is analyzed. The paradigm that the turbulence is saturated when the zonal (axisymmetric) E×B flow shearing rate competes with linear growth is shown to not apply to the electron-scale turbulence. Instead, it is the mixing rate by the zonal E×B velocity spectrum with the turbulent distribution function that competes with linear growth. A model of this mechanism is shown to be able to capture the suppression of electron-scale turbulence by ion-scale turbulence and the threshold for the increase in electron-scale turbulence when the ion-scale turbulence is reduced. The model computes the strength of the zonal flow velocity and the saturated potential spectrum from the linear growth rate spectrum. The model for the saturated electric potential spectrum is applied to a quasilinear transport model and shown to accurately reproduce the electron and ion energy fluxes of the non-linear gyrokinetic multi-scale simulations. The zonal flow mixing saturation model is also shown to reproduce the non-linear upshift in the critical temperature gradient caused by zonal flows in ion-scale gyrokinetic simulations.

  4. The role of zonal flows in the saturation of multi-scale gyrokinetic turbulence

    DOE PAGES

    Staebler, Gary M.; Candy, John; Howard, Nathan T.; ...

    2016-06-29

    The 2D spectrum of the saturated electric potential from gyrokinetic turbulence simulations that include both ion and electron scales (multi-scale) in axisymmetric tokamak geometry is analyzed. The paradigm that the turbulence is saturated when the zonal (axisymmetric) E×B flow shearing rate competes with linear growth is shown to not apply to the electron-scale turbulence. Instead, it is the mixing rate by the zonal E×B velocity spectrum with the turbulent distribution function that competes with linear growth. A model of this mechanism is shown to be able to capture the suppression of electron-scale turbulence by ion-scale turbulence and the threshold for the increase in electron-scale turbulence when the ion-scale turbulence is reduced. The model computes the strength of the zonal flow velocity and the saturated potential spectrum from the linear growth rate spectrum. The model for the saturated electric potential spectrum is applied to a quasilinear transport model and shown to accurately reproduce the electron and ion energy fluxes of the non-linear gyrokinetic multi-scale simulations. Finally, the zonal flow mixing saturation model is also shown to reproduce the non-linear upshift in the critical temperature gradient caused by zonal flows in ion-scale gyrokinetic simulations.

  5. A Linearized Prognostic Cloud Scheme in NASA's Goddard Earth Observing System Data Assimilation Tools

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Errico, Ronald M.; Gelaro, Ronald; Kim, Jong G.; Mahajan, Rahul

    2015-01-01

    A linearized prognostic cloud scheme has been developed to accompany the linearized convection scheme recently implemented in NASA's Goddard Earth Observing System data assimilation tools. The linearization, developed from the nonlinear cloud scheme, treats cloud variables prognostically so they are subject to linearized advection, diffusion, generation, and evaporation. Four linearized cloud variables are modeled: the ice and water phases of clouds generated by large-scale condensation and, separately, by detraining convection. For each species the scheme models their sources, sublimation, evaporation, and autoconversion. Large-scale, anvil, and convective species of precipitation are modeled and evaporated. The cloud scheme exhibits linearity and realistic perturbation growth, except around the generation of clouds through large-scale condensation. Discontinuities and steep gradients are widely used in this part of the scheme, and severe problems occur in the calculation of cloud fraction. For data assimilation applications this poor behavior is controlled by replacing this part of the scheme with a perturbation model. For observation impacts, where efficiency is less of a concern, a filtering is developed that examines the Jacobian; the replacement scheme is only invoked if Jacobian elements or eigenvalues violate a series of tuned constants. The linearized prognostic cloud scheme is tested by comparing the linear and nonlinear perturbation trajectories for 6-, 12-, and 24-h forecast times. The tangent linear model performs well and perturbations of clouds are well captured for the lead times of interest.
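
    Why linearization fails around discontinuous cloud generation can be seen in a toy tangent-linear check. The functions below are illustrative stand-ins, not the GEOS scheme:

    ```python
    import numpy as np

    def smooth(x):       # smooth process: linearization works well
        return np.tanh(x)

    def switching(x):    # threshold process, like cloud generation onset
        return np.where(x > 0.0, x, 0.0)

    def tlm_check(f, x0, dx, eps=1e-6):
        """Compare the nonlinear perturbation f(x0+dx)-f(x0) with the
        tangent-linear estimate f'(x0)*dx (finite-difference Jacobian)."""
        jac = (f(x0 + eps) - f(x0 - eps)) / (2 * eps)
        return f(x0 + dx) - f(x0), jac * dx

    for f in (smooth, switching):
        nl, tl = tlm_check(f, x0=-0.05, dx=0.2)   # perturbation crosses the threshold
        print(f"{f.__name__:9s} nonlinear={nl:+.4f}  tangent-linear={tl:+.4f}")
    ```

    For the switching case the tangent-linear estimate is exactly zero while the nonlinear perturbation is not, which is the kind of behaviour the record controls with a replacement perturbation model.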

  6. Waveform Design for Wireless Power Transfer

    NASA Astrophysics Data System (ADS)

    Clerckx, Bruno; Bayguzina, Ekaterina

    2016-12-01

    Far-field Wireless Power Transfer (WPT) has attracted significant attention in recent years. Despite the rapid progress, the emphasis of the research community in the last decade has remained largely concentrated on improving the design of the energy harvester (the so-called rectenna) and has left aside the effect of transmitter design. In this paper, we study the design of the transmit waveform so as to enhance the DC power at the output of the rectenna. We derive a tractable model of the non-linearity of the rectenna and compare it with a linear model conventionally used in the literature. We then use those models to design novel multisine waveforms that are adaptive to the channel state information (CSI). Interestingly, while the linear model favours narrowband transmission with all the power allocated to a single frequency, the non-linear model favours a power allocation over multiple frequencies. Through realistic simulations, waveforms designed based on the non-linear model are shown to provide significant gains (in terms of harvested DC power) over those designed based on the linear model and over non-adaptive waveforms. We also compute analytically the theoretical scaling laws of the harvested energy for various waveforms as a function of the number of sinewaves and transmit antennas. Those scaling laws highlight the benefits of CSI knowledge at the transmitter in WPT and of a WPT design based on a non-linear rectenna model over a linear model. Results also motivate the study of a promising architecture relying on large-scale multisine multi-antenna waveforms for WPT. As a final note, results stress the importance of modeling and accounting for the non-linearity of the rectenna in any system design involving wireless power.
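
    The linear-versus-non-linear point of this record can be illustrated numerically: under a fixed average transmit power, a truncated fourth-order rectenna model rewards spreading power over more in-phase sinewaves. The diode coefficients a2, a4 and the tone spacing are hypothetical:

    ```python
    import numpy as np

    t = np.linspace(0, 1, 200_000, endpoint=False)   # one second, fine sampling
    a2, a4 = 1.0, 1.0                                # illustrative diode coefficients

    for N in (1, 2, 4, 8, 16):
        f = np.arange(1, N + 1) * 100.0              # N in-phase tones, 100 Hz spacing
        x = np.cos(2 * np.pi * np.outer(f, t)).sum(axis=0) * np.sqrt(2.0 / N)  # unit average power
        dc = a2 * np.mean(x**2) + a4 * np.mean(x**4)  # truncated non-linear rectenna output
        print(f"N={N:2d}  E[x^2]={np.mean(x**2):.3f}  DC proxy={dc:.2f}")
    ```

    The second-moment term is constant by construction, so the growth of the DC proxy with N comes entirely from the fourth-order term, which is what a purely linear rectenna model cannot capture.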

  7. Scale of association: hierarchical linear models and the measurement of ecological systems

    Treesearch

    Sean M. McMahon; Jeffrey M. Diez

    2007-01-01

    A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...
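
    A minimal hierarchical linear model sketch in the spirit of this record, fit with statsmodels on synthetic plot-within-site data; all column names and parameter values are hypothetical:

    ```python
    # Two-level HLM: plot-level response with a site-level random intercept.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_sites, n_plots = 20, 15
    site = np.repeat(np.arange(n_sites), n_plots)
    site_effect = rng.normal(0, 1.5, n_sites)[site]      # between-site variance
    x = rng.normal(0, 1, n_sites * n_plots)              # e.g., a soil covariate
    y = 2.0 + 0.8 * x + site_effect + rng.normal(0, 1, x.size)

    df = pd.DataFrame({"y": y, "x": x, "site": site})
    model = smf.mixedlm("y ~ x", df, groups=df["site"]).fit()
    print(model.summary())   # fixed slope for x plus a site-level variance component
    ```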

  8. A model of the extent and distribution of woody linear features in rural Great Britain.

    PubMed

    Scholefield, Paul; Morton, Dan; Rowland, Clare; Henrys, Peter; Howard, David; Norton, Lisa

    2016-12-01

    Hedges and lines of trees (woody linear features) are important boundaries that connect and enclose habitats, buffer the effects of land management, and enhance biodiversity in increasingly impoverished landscapes. Despite their acknowledged importance in the wider countryside, they are usually not considered in models of landscape function due to their linear nature and the difficulties of acquiring relevant data about their character, extent, and location. We present a model which uses national datasets to describe the distribution of woody linear features along boundaries in Great Britain. The method can be applied for other boundary types and in other locations around the world across a range of spatial scales where different types of linear feature can be separated using characteristics such as height or width. Satellite-derived Land Cover Map 2007 (LCM2007) provided the spatial framework for locating linear features and was used to screen out areas unsuitable for their occurrence, that is, offshore, urban, and forest areas. Similarly, Ordnance Survey Land-Form PANORAMA®, a digital terrain model, was used to screen out where they do not occur. The presence of woody linear features on boundaries was modelled using attributes from a canopy height dataset obtained by subtracting a digital terrain map (DTM) from a digital surface model (DSM). The performance of the model was evaluated against existing woody linear feature data in Countryside Survey across a range of scales. The results indicate that, despite some underestimation, this simple approach may provide valuable information on the extents and locations of woody linear features in the countryside at both local and national scales.

  9. Redshift-space distortions with the halo occupation distribution - II. Analytic model

    NASA Astrophysics Data System (ADS)

    Tinker, Jeremy L.

    2007-01-01

    We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. We demonstrate the ability of the model to separately constrain Ωm, σ8, and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.

  10. Agent based reasoning for the non-linear stochastic models of long-range memory

    NASA Astrophysics Data System (ADS)

    Kononovicius, A.; Gontis, V.

    2012-02-01

    We extend Kirman's model by introducing a variable event time scale. The proposed flexible time scale is equivalent to the variable trading activity observed in financial markets. The stochastic version of the extended Kirman agent-based model is compared to the non-linear stochastic models of long-range memory in financial markets. The agent-based model, which provides a matching macroscopic description, serves as a microscopic reasoning for the earlier proposed stochastic model exhibiting power-law statistics.
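
    For orientation, a sketch of the baseline (unextended) Kirman herding model; parameter values are illustrative and the paper's variable event time scale is not reproduced here:

    ```python
    import numpy as np

    # Baseline Kirman herding model: N agents in two states; an agent switches
    # spontaneously (rate sigma) or by recruitment (rate h per encounter).
    rng = np.random.default_rng(42)
    N, sigma, h, steps = 100, 0.002, 0.02, 200_000
    n = N // 2                 # agents currently in state 1
    frac = np.empty(steps)

    for t in range(steps):
        p_up = (N - n) * (sigma + h * n / (N - 1)) / N       # one 0 -> 1 event
        p_dn = n * (sigma + h * (N - n) / (N - 1)) / N       # one 1 -> 0 event
        u = rng.random()
        if u < p_up:
            n += 1
        elif u < p_up + p_dn:
            n -= 1
        frac[t] = n / N

    print(f"mean fraction {frac.mean():.2f}, std {frac.std():.2f}")
    ```

    With sigma/h well below 1, the fraction spends long periods near the extremes rather than fluctuating around 1/2, which is the herding behaviour the stochastic long-range-memory models coarse-grain.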

  11. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data.

    PubMed

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M; O'Halloran, Martin

    2017-02-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues.
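
    A hedged sketch of the comparison described here: fit a single-pole Cole-Cole model to synthetic permittivity data sampled on linear and logarithmic frequency grids. The parameter values and noise level are assumptions, not the paper's measurements:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def cole_cole_real(f, eps_inf, d_eps, tau, alpha):
        """Real part of a single-pole Cole-Cole model."""
        w = 2 * np.pi * f
        return (eps_inf + d_eps / (1 + (1j * w * tau) ** (1 - alpha))).real

    true = (4.0, 50.0, 1e-9, 0.1)                  # illustrative tissue-like parameters
    f_lin = np.linspace(1e6, 20e9, 101)            # linear frequency grid
    f_log = np.logspace(6, np.log10(20e9), 101)    # logarithmic frequency grid

    rng = np.random.default_rng(3)
    for name, f in (("linear", f_lin), ("log", f_log)):
        y = cole_cole_real(f, *true) * (1 + rng.normal(0, 0.01, f.size))
        popt, _ = curve_fit(cole_cole_real, f, y,
                            p0=(3.0, 40.0, 5e-10, 0.05), maxfev=20000)
        print(name, np.round(popt, 4))
    ```

    The linear grid concentrates almost all samples in the upper decade, while the log grid covers every decade of the dispersion, which is the sampling effect the paper argues improves the broadband fit.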

  12. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data

    PubMed Central

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M.; O’Halloran, Martin

    2016-01-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues. PMID:28191324

  13. Combining global and local approximations

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1991-01-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
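
    A minimal numerical sketch of the GLA idea, assuming toy stand-ins for the crude and refined models: the scaling factor beta(x) = f_refined(x)/f_crude(x) is linearized at a design point rather than held constant:

    ```python
    import numpy as np

    def f_crude(x):    # e.g., a coarse FEM beam model (illustrative stand-in)
        return x**2

    def f_refined(x):  # refined model (illustrative stand-in)
        return x**2 * (1 + 0.3 * np.sin(x))

    x0, h = 1.0, 1e-4
    beta0 = f_refined(x0) / f_crude(x0)
    dbeta = (f_refined(x0 + h) / f_crude(x0 + h) - beta0) / h   # d(beta)/dx at x0

    def gla(x):        # crude model scaled by the linearly varying factor
        return (beta0 + dbeta * (x - x0)) * f_crude(x)

    for x in (1.0, 1.3, 1.6):
        print(x, f_refined(x), gla(x), beta0 * f_crude(x))  # GLA vs constant scaling
    ```

    Away from the design point the linearly varying factor tracks the refined model noticeably better than the conventional constant scaling factor, which is the extension the record describes.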

  14. Large-scale linear programs in planning and prediction.

    DOT National Transportation Integrated Search

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

  15. Proposing an Educational Scaling-and-Diffusion Model for Inquiry-Based Learning Designs

    ERIC Educational Resources Information Center

    Hung, David; Lee, Shu-Shing

    2015-01-01

    Education cannot adopt the linear model of scaling used by the medical sciences. "Gold standards" cannot be replicated without considering process-in-learning, diversity, and student-variedness in classrooms. This article proposes a nuanced model of educational scaling-and-diffusion, describing the scaling (top-down supports) and…

  16. Genetic parameters for racing records in trotters using linear and generalized linear models.

    PubMed

    Suontama, M; van der Werf, J H J; Juga, J; Ojala, M

    2012-09-01

    Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, and logarithmic scale was used for racing time and fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, which ranged from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.
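
    The transformations mentioned here (logarithmic racing times, fourth-root earnings) are variance-stabilizing corrections for non-normality; a quick synthetic illustration of the fourth-root transform on a skewed earnings-like variable:

    ```python
    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(7)
    earnings = rng.lognormal(mean=5.0, sigma=1.5, size=10_000)  # heavily right-skewed

    print(f"raw skewness         {skew(earnings):6.2f}")
    print(f"fourth-root skewness {skew(earnings ** 0.25):6.2f}")  # much closer to 0
    ```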

  17. Mathematical model for the contribution of individual organs to non-zero y-intercepts in single and multi-compartment linear models of whole-body energy expenditure.

    PubMed

    Kaiyala, Karl J

    2014-01-01

    Mathematical models for the dependence of energy expenditure (EE) on body mass and composition are essential tools in metabolic phenotyping. EE scales over broad ranges of body mass as a non-linear allometric function. When considered within restricted ranges of body mass, however, allometric EE curves exhibit 'local linearity.' Indeed, modern EE analysis makes extensive use of linear models. Such models typically involve one or two body mass compartments (e.g., fat free mass and fat mass). Importantly, linear EE models typically involve a non-zero (usually positive) y-intercept term of uncertain origin, a recurring theme in discussions of EE analysis and a source of confounding in traditional ratio-based EE normalization. Emerging linear model approaches quantify whole-body resting EE (REE) in terms of individual organ masses (e.g., liver, kidneys, heart, brain). Proponents of individual organ REE modeling hypothesize that multi-organ linear models may eliminate non-zero y-intercepts. This could have advantages in adjusting REE for body mass and composition. Studies reveal that individual organ REE is an allometric function of total body mass. I exploit first-order Taylor linearization of individual organ REEs to model the manner in which individual organs contribute to whole-body REE and to the non-zero y-intercept in linear REE models. The model predicts that REE analysis at the individual organ-tissue level will not eliminate intercept terms. I demonstrate that the parameters of a linear EE equation can be transformed into the parameters of the underlying 'latent' allometric equation. This permits estimates of the allometric scaling of EE in a diverse variety of physiological states that are not represented in the allometric EE literature but are well represented by published linear EE analyses.
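
    The Taylor-linearization step described here can be written out explicitly. For one organ with allometric REE_i = a_i M^{b_i}, first-order expansion about a reference body mass M_0 gives a non-zero intercept unless b_i = 1:

    ```latex
    % First-order Taylor expansion of one organ's allometric REE term about M_0
    \[
    \begin{aligned}
    \mathrm{REE}_i(M) &\approx a_i M_0^{b_i} + a_i b_i M_0^{b_i - 1}\,(M - M_0) \\
                      &= \underbrace{a_i M_0^{b_i}\,(1 - b_i)}_{\text{intercept, } \neq 0 \text{ unless } b_i = 1}
                       + \underbrace{a_i b_i M_0^{b_i - 1}}_{\text{slope}}\, M .
    \end{aligned}
    \]
    ```

    Summing over organs, the intercepts add, so a multi-organ linear model still retains a non-zero y-intercept, as the abstract concludes.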

  18. Wave models for turbulent free shear flows

    NASA Technical Reports Server (NTRS)

    Liou, W. W.; Morris, P. J.

    1991-01-01

    New predictive closure models for turbulent free shear flows are presented. They are based on an instability wave description of the dominant large scale structures in these flows using a quasi-linear theory. Three model were developed to study the structural dynamics of turbulent motions of different scales in free shear flows. The local characteristics of the large scale motions are described using linear theory. Their amplitude is determined from an energy integral analysis. The models were applied to the study of an incompressible free mixing layer. In all cases, predictions are made for the development of the mean flow field. In the last model, predictions of the time dependent motion of the large scale structure of the mixing region are made. The predictions show good agreement with experimental observations.

  19. A Lagrangian effective field theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlah, Zvonimir; White, Martin; Aviles, Alejandro

    We have continued the development of Lagrangian, cosmological perturbation theory for the low-order correlators of the matter density field. We provide a new route to understanding how the effective field theory (EFT) of large-scale structure can be formulated in the Lagrangian framework and a new resummation scheme, comparing our results to earlier work and to a series of high-resolution N-body simulations in both Fourier and configuration space. The 'new' terms arising from EFT serve to tame the dependence of perturbation theory on small-scale physics and improve agreement with simulations (though with an additional free parameter). We find that all of our models fare well on scales larger than about two to three times the non-linear scale, but fail as the non-linear scale is approached. This is slightly less reach than has been seen previously. At low redshift the Lagrangian model fares as well as EFT in its Eulerian formulation, but at higher z the Eulerian EFT fits the data to smaller scales than resummed, Lagrangian EFT. Furthermore, all the perturbative models fare better than linear theory.

  20. A Lagrangian effective field theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlah, Zvonimir; White, Martin; Aviles, Alejandro, E-mail: zvlah@stanford.edu, E-mail: mwhite@berkeley.edu, E-mail: aviles@berkeley.edu

    We have continued the development of Lagrangian, cosmological perturbation theory for the low-order correlators of the matter density field. We provide a new route to understanding how the effective field theory (EFT) of large-scale structure can be formulated in the Lagrangian framework and a new resummation scheme, comparing our results to earlier work and to a series of high-resolution N-body simulations in both Fourier and configuration space. The 'new' terms arising from EFT serve to tame the dependence of perturbation theory on small-scale physics and improve agreement with simulations (though with an additional free parameter). We find that all of our models fare well on scales larger than about two to three times the non-linear scale, but fail as the non-linear scale is approached. This is slightly less reach than has been seen previously. At low redshift the Lagrangian model fares as well as EFT in its Eulerian formulation, but at higher z the Eulerian EFT fits the data to smaller scales than resummed, Lagrangian EFT. All the perturbative models fare better than linear theory.

  21. A Lagrangian effective field theory

    DOE PAGES

    Vlah, Zvonimir; White, Martin; Aviles, Alejandro

    2015-09-02

    We have continued the development of Lagrangian, cosmological perturbation theory for the low-order correlators of the matter density field. We provide a new route to understanding how the effective field theory (EFT) of large-scale structure can be formulated in the Lagrangian framework and a new resummation scheme, comparing our results to earlier work and to a series of high-resolution N-body simulations in both Fourier and configuration space. The 'new' terms arising from EFT serve to tame the dependence of perturbation theory on small-scale physics and improve agreement with simulations (though with an additional free parameter). We find that all of our models fare well on scales larger than about two to three times the non-linear scale, but fail as the non-linear scale is approached. This is slightly less reach than has been seen previously. At low redshift the Lagrangian model fares as well as EFT in its Eulerian formulation, but at higher z the Eulerian EFT fits the data to smaller scales than resummed, Lagrangian EFT. Furthermore, all the perturbative models fare better than linear theory.

  22. The cross-over to magnetostrophic convection in planetary dynamo systems

    PubMed Central

    Aurnou, J. M.; King, E. M.

    2017-01-01

    Global scale magnetostrophic balance, in which Lorentz and Coriolis forces comprise the leading-order force balance, has long been thought to describe the natural state of planetary dynamo systems. This argument arises from consideration of the linear theory of rotating magnetoconvection. Here we test this long-held tenet by directly comparing linear predictions against dynamo modelling results. This comparison shows that dynamo modelling results are not typically in the global magnetostrophic state predicted by linear theory. Then, in order to estimate at what scale (if any) magnetostrophic balance will arise in nonlinear dynamo systems, we carry out a simple scaling analysis of the Elsasser number Λ, yielding an improved estimate of the ratio of Lorentz and Coriolis forces. From this, we deduce that there is a magnetostrophic cross-over length scale, L_X ≈ (Λ_o²/Rm_o)D, where Λ_o is the linear (or traditional) Elsasser number, Rm_o is the system-scale magnetic Reynolds number and D is the length scale of the system. On scales well above L_X, magnetostrophic convection dynamics should not be possible. Only on scales smaller than L_X should it be possible for the convective behaviours to follow the predictions for the magnetostrophic branch of convection. Because L_X is significantly smaller than the system scale in most dynamo models, their large-scale flows should be quasi-geostrophic, as is confirmed in many dynamo simulations. Estimating Λ_o ≃ 1 and Rm_o ≃ 10³ in Earth's core, the cross-over scale is approximately 1/1000 that of the system scale, suggesting that magnetostrophic convection dynamics exists in the core only on small scales below those that can be characterized by geomagnetic observations. PMID:28413338
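
    The quoted cross-over estimate is simple arithmetic; a sketch using the abstract's values, with the core shell depth D supplied as an illustrative assumption:

    ```python
    # Cross-over length scale L_X = (Lambda_o**2 / Rm_o) * D from the record,
    # evaluated with the Earth's-core values quoted there.
    Lambda_o = 1.0    # linear (traditional) Elsasser number
    Rm_o = 1e3        # system-scale magnetic Reynolds number
    D = 2.26e6        # outer-core shell depth in metres (illustrative value)

    L_X = (Lambda_o**2 / Rm_o) * D
    print(f"L_X = {L_X:.0f} m = D/{D / L_X:.0f}")  # ~D/1000, as stated in the abstract
    ```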

  23. The cross-over to magnetostrophic convection in planetary dynamo systems.

    PubMed

    Aurnou, J M; King, E M

    2017-03-01

    Global scale magnetostrophic balance, in which Lorentz and Coriolis forces comprise the leading-order force balance, has long been thought to describe the natural state of planetary dynamo systems. This argument arises from consideration of the linear theory of rotating magnetoconvection. Here we test this long-held tenet by directly comparing linear predictions against dynamo modelling results. This comparison shows that dynamo modelling results are not typically in the global magnetostrophic state predicted by linear theory. Then, in order to estimate at what scale (if any) magnetostrophic balance will arise in nonlinear dynamo systems, we carry out a simple scaling analysis of the Elsasser number Λ, yielding an improved estimate of the ratio of Lorentz and Coriolis forces. From this, we deduce that there is a magnetostrophic cross-over length scale, L_X ≈ (Λ_o²/Rm_o)D, where Λ_o is the linear (or traditional) Elsasser number, Rm_o is the system-scale magnetic Reynolds number and D is the length scale of the system. On scales well above L_X, magnetostrophic convection dynamics should not be possible. Only on scales smaller than L_X should it be possible for the convective behaviours to follow the predictions for the magnetostrophic branch of convection. Because L_X is significantly smaller than the system scale in most dynamo models, their large-scale flows should be quasi-geostrophic, as is confirmed in many dynamo simulations. Estimating Λ_o ≃ 1 and Rm_o ≃ 10³ in Earth's core, the cross-over scale is approximately 1/1000 that of the system scale, suggesting that magnetostrophic convection dynamics exists in the core only on small scales below those that can be characterized by geomagnetic observations.

  24. Multiscale functions, scale dynamics, and applications to partial differential equations

    NASA Astrophysics Data System (ADS)

    Cresson, Jacky; Pierret, Frédéric

    2016-05-01

    Modeling phenomena from experimental data always begins with a choice of hypotheses on the observed dynamics such as determinism, randomness, and differentiability. Depending on these choices, different behaviors can be observed. The natural question associated with the modeling problem is the following: "With a finite set of data concerning a phenomenon, can we recover its underlying nature?" From this problem, we introduce in this paper the definitions of multi-scale functions, scale calculus, and scale dynamics based on the time scale calculus [see Bohner, M. and Peterson, A., Dynamic Equations on Time Scales: An Introduction with Applications (Springer Science & Business Media, 2001)], which is used to introduce the notion of scale equations. These definitions are illustrated on the multi-scale Okamoto functions. Scale equations are analysed using scale regimes and the notion of an asymptotic model for a scale equation under a particular scale regime. The introduced formalism explains why a single scale equation can produce distinct continuous models even if the equation is scale invariant. Typical examples of such equations are given by the scale Euler-Lagrange equation. We illustrate our results using the scale Newton's equation, which gives rise to a non-linear diffusion equation or a non-linear Schrödinger equation as asymptotic continuous models depending on the particular fractional scale regime which is considered.

  25. Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale

    NASA Astrophysics Data System (ADS)

    Barrios, M. I.

    2013-12-01

    Hydrological science requires the emergence of a consistent theoretical corpus driving the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult. Therefore, understanding scaling is a key issue in advancing this science. This work focuses on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at the point scale to a simplified, physically meaningful modeling approach at the grid-cell scale. Numerical simulations have the advantage over field experimentation of dealing with a wide range of boundary and initial conditions. The aim of the work was to show the utility of numerical simulations in discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium to teach the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at the point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at the point scale. The linkages between point-scale parameters and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and the averaging of the flow at the point scale. Results have shown numerical stability issues for particular conditions, have revealed the complex nature of the non-linear relationships between the models' parameters at both scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by the students to identify potential research questions on scale issues. Moreover, the implementation of this virtual lab improved their ability to understand the rationale of these processes and how to transfer the mathematical models to computational representations.
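
    A sketch of the point-scale component named in this record: ponded Green-Ampt cumulative infiltration solved from its implicit equation. Parameter values are illustrative, not the study's:

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Green-Ampt cumulative infiltration F(t) under ponded conditions:
    #   F - psi*dtheta*ln(1 + F/(psi*dtheta)) = K*t, solved implicitly for F.
    K = 1.0        # saturated hydraulic conductivity, cm/h
    psi = 16.7     # wetting-front suction head, cm
    dtheta = 0.3   # moisture deficit (porosity minus initial water content)

    def green_ampt_F(t):
        g = lambda F: F - psi * dtheta * np.log(1 + F / (psi * dtheta)) - K * t
        return brentq(g, 1e-9, 1e3)

    for t in (0.5, 1.0, 2.0):
        F = green_ampt_F(t)
        f_rate = K * (1 + psi * dtheta / F)   # infiltration capacity, cm/h
        print(f"t={t:.1f} h  F={F:.2f} cm  f={f_rate:.2f} cm/h")
    ```

    Averaging this non-linear point response over a heterogeneous parameter field is what makes the grid-cell-scale storage model's parameters deviate from simple means, as the abstract notes.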

  26. Application of a chromatography model with linear gradient elution experimental data to the rapid scale-up in ion-exchange process chromatography of proteins.

    PubMed

    Ishihara, Takashi; Kadoya, Toshihiko; Yamamoto, Shuichi

    2007-08-24

    We applied the model described in our previous paper to the rapid scale-up of ion-exchange chromatography of proteins, in which linear flow velocity, column length and gradient slope were changed. We carried out linear gradient elution experiments and obtained data for the peak salt concentration and peak width. From these data, the plate height (HETP) was calculated as a function of the mobile phase velocity, and the iso-resolution curve (the separation time and elution volume relationship for the same resolution) was calculated. The scale-up chromatography conditions were determined by the iso-resolution curve. The scale-up of the linear gradient elution from 5 mL to 100 mL and 2.5 L column sizes was performed both by the separation of beta-lactoglobulin A and beta-lactoglobulin B with anion-exchange chromatography and by the purification of a recombinant protein with cation-exchange chromatography. Resolution, recovery and purity were examined in order to verify the proposed method.

  27. Tackling non-linearities with the effective field theory of dark energy and modified gravity

    NASA Astrophysics Data System (ADS)

    Frusciante, Noemi; Papadomanolakis, Georgios

    2017-12-01

    We present the extension of the effective field theory framework to mildly non-linear scales. The effective field theory approach has been successfully applied to the late-time cosmic acceleration phenomenon and has been shown to be a powerful method for obtaining predictions about cosmological observables on linear scales. However, mildly non-linear scales need to be consistently considered when testing gravity theories because a large part of the data comes from those scales. Thus, non-linear corrections to predictions on observables coming from the linear analysis can help in discriminating among different gravity theories. We proceed firstly by identifying the necessary operators which need to be included in the effective field theory Lagrangian in order to go beyond the linear order in perturbations, and then we construct the corresponding non-linear action. Moreover, we present the complete recipe to map any single-field dark energy and modified gravity model into the non-linear effective field theory framework by considering a general action in the Arnowitt-Deser-Misner formalism. In order to illustrate this recipe we proceed to map the beyond-Horndeski theory and low-energy Hořava gravity into the effective field theory formalism. As a final step we derive the fourth-order action in terms of the curvature perturbation. This allows us to identify the non-linear contributions coming from the linear-order perturbations, which at the next order act like source terms. Moreover, we confirm that the stability requirements, ensuring the positivity of the kinetic term and the speed of propagation of the scalar mode, are automatically satisfied once the viability of the theory is demanded at the linear level. The approach we present here will allow the construction, in a model-independent way, of all the relevant predictions on observables at mildly non-linear scales.

  28. Fourier imaging of non-linear structure formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandbyge, Jacob; Hannestad, Steen, E-mail: jacobb@phys.au.dk, E-mail: sth@phys.au.dk

    We perform a Fourier space decomposition of the dynamics of non-linear cosmological structure formation in ΛCDM models. From N-body simulations involving only cold dark matter we calculate 3-dimensional non-linear density, velocity divergence and vorticity Fourier realizations, and use these to calculate the fully non-linear mode coupling integrals in the corresponding fluid equations. Our approach allows for a reconstruction of the amount of mode coupling between any two wavenumbers as a function of redshift. With our Fourier decomposition method we identify the transfer of power from larger to smaller scales, the stable clustering regime, the scale where vorticity becomes important, and the suppression of the non-linear divergence power spectrum as compared to linear theory. Our results can be used to improve and calibrate semi-analytical structure formation models.

  29. The morphing of geographical features by Fourier transformation.

    PubMed

    Li, Jingzhong; Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang

    2018-01-01

    This paper presents a morphing model of vector geographical data based on Fourier transformation. This model involves three main steps: conversion from vector data to a Fourier series, generation of an intermediate function by combining the two Fourier series concerning a large scale and a small scale, and reverse conversion from the combined function to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and can be used for continuous scale transformation of vector map features. The efficiency of this model is linearly related to the number of points on the shape boundary and the truncation value n of the Fourier expansion. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable.
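
    A hedged sketch of the three steps named in this record (shape to Fourier series, coefficient blending, inverse transform); the resampling density, harmonic cut-off, and test shapes are assumptions:

    ```python
    import numpy as np

    def resample_closed(pts, n=256):
        """Uniformly resample a closed polyline to n points (complex form)."""
        pts = np.asarray(pts, dtype=float)
        seg = np.r_[pts, pts[:1]]
        d = np.cumsum(np.r_[0, np.hypot(*np.diff(seg, axis=0).T)])
        s = np.linspace(0, d[-1], n, endpoint=False)
        x = np.interp(s, d, seg[:, 0]); y = np.interp(s, d, seg[:, 1])
        return x + 1j * y

    def morph(shape_a, shape_b, w, keep=32):
        """Blend Fourier coefficients: w=0 gives shape A, w=1 gives shape B."""
        A = np.fft.fft(resample_closed(shape_a))
        B = np.fft.fft(resample_closed(shape_b))
        C = (1 - w) * A + w * B
        C[keep: len(C) - keep] = 0          # truncate high harmonics (the 'n' in the record)
        z = np.fft.ifft(C)
        return np.c_[z.real, z.imag]

    square = [(0, 0), (4, 0), (4, 4), (0, 4)]
    blob = [(0, 0), (5, -1), (6, 3), (3, 6), (-1, 4)]
    print(morph(square, blob, w=0.5)[:3])   # a few points of the halfway shape
    ```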

  30. Linear discrete systems with memory: a generalization of the Langmuir model

    NASA Astrophysics Data System (ADS)

    Băleanu, Dumitru; Nigmatullin, Raoul R.

    2013-10-01

    In this manuscript we analyze a general solution of the linear nonlocal Langmuir model within time scale calculus. Several generalizations of the Langmuir model are presented together with their exact corresponding solutions. The physical meaning of the proposed models is investigated and their corresponding geometries are reported.

  31. Assignment of boundary conditions in embedded ground water flow models

    USGS Publications Warehouse

    Leake, S.A.

    1998-01-01

    Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
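
    The proposed bilinear interpolation of heads is straightforward to sketch; grid spacing and head values below are hypothetical:

    ```python
    import numpy as np

    # Bilinear interpolation of heads from a coarse model's cell centres to a
    # point on an embedded model's perimeter (sketch of the proposed method).
    def bilinear_head(xc, yc, H, x, y):
        """xc, yc: coordinates of coarse-grid cell centres (ascending);
        H[j, i]: head at (yc[j], xc[i]); returns head at interior point (x, y)."""
        i = np.searchsorted(xc, x) - 1
        j = np.searchsorted(yc, y) - 1
        tx = (x - xc[i]) / (xc[i + 1] - xc[i])
        ty = (y - yc[j]) / (yc[j + 1] - yc[j])
        return ((1 - tx) * (1 - ty) * H[j, i] + tx * (1 - ty) * H[j, i + 1]
                + (1 - tx) * ty * H[j + 1, i] + tx * ty * H[j + 1, i + 1])

    xc = np.array([50.0, 150.0, 250.0])   # 100 m cells, centres at 50, 150, 250
    yc = np.array([50.0, 150.0, 250.0])
    H = np.array([[10.0, 9.5, 9.0],
                  [9.8, 9.3, 8.8],
                  [9.6, 9.1, 8.6]])       # illustrative heads, metres

    print(bilinear_head(xc, yc, H, x=120.0, y=180.0))  # head at a perimeter point
    ```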

  32. Linear and nonlinear response in sheared soft spheres

    NASA Astrophysics Data System (ADS)

    Tighe, Brian

    2013-11-01

    Packings of soft spheres provide an idealized model of foams, emulsions, and grains, while also serving as the canonical example of a system undergoing a jamming transition. Packings' mechanical response has now been studied exhaustively in the context of "strict linear response," i.e. by linearizing about a stable static packing and solving the resulting equations of motion. Both because the system is close to a critical point and because the soft sphere pair potential is non-analytic at the point of contact, it is reasonable to ask under what circumstances strict linear response provides a good approximation to the actual response. We simulate sheared soft sphere packings close to jamming and identify two distinct strain scales: (i) the scale on which strict linear response fails, coinciding with a topological change in the packing's contact network; and (ii) the scale on which linear superposition of the averaged stress-strain curve breaks down. This latter scale provides a "weak linear response" criterion and is likely to be more experimentally relevant.

  33. Size effects in non-linear heat conduction with flux-limited behaviors

    NASA Astrophysics Data System (ADS)

    Li, Shu-Nan; Cao, Bing-Yang

    2017-11-01

    Size effects are discussed for several non-linear heat conduction models with flux-limited behaviors, including the phonon hydrodynamic, Lagrange multiplier, hierarchy moment, nonlinear phonon hydrodynamic, tempered diffusion, thermon gas and generalized nonlinear models. For the phonon hydrodynamic, Lagrange multiplier and tempered diffusion models, heat flux will not exist in problems with sufficiently small scale. The existence of heat flux requires the size of the heat conduction problem to be larger than a corresponding critical size, which is determined by the physical properties and boundary temperatures. The critical sizes can be regarded as the theoretical limits of the applicable ranges of these non-linear heat conduction models with flux-limited behaviors. For sufficiently small-scale heat conduction, the phonon hydrodynamic and Lagrange multiplier models can also predict the theoretical possibility of violating the second law and of multiplicity. Comparisons are also made between these non-Fourier models and non-linear Fourier heat conduction of the fast-diffusion type, which can also predict flux-limited behaviors.

  34. Wind-invariant saltation heights imply linear scaling of aeolian saltation flux with shear stress.

    PubMed

    Martin, Raleigh L; Kok, Jasper F

    2017-06-01

    Wind-driven sand transport generates atmospheric dust, forms dunes, and sculpts landscapes. However, it remains unclear how the flux of particles in aeolian saltation (the wind-driven transport of sand in hopping trajectories) scales with wind speed, largely because models do not agree on how particle speeds and trajectories change with wind shear velocity. We present comprehensive measurements, from three new field sites and three published studies, showing that characteristic saltation layer heights remain approximately constant with shear velocity, in agreement with recent wind tunnel studies. These results support the assumption of constant particle speeds in recent models predicting linear scaling of saltation flux with shear stress. In contrast, our results refute widely used older models that assume that particle speed increases with shear velocity, thereby predicting nonlinear 3/2 stress-flux scaling. This conclusion is further supported by direct field measurements of saltation flux versus shear stress. Our results thus argue for adoption of linear saltation flux laws and constant saltation trajectories for modeling saltation-driven aeolian processes on Earth, Mars, and other planetary surfaces.
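
    The two competing flux laws contrasted in this record differ only in whether particle speed, and hence the prefactor, grows with shear velocity. A minimal form-only comparison with illustrative coefficients and threshold:

    ```python
    import numpy as np

    # Linear law (constant particle speed):  Q = C_L * (tau - tau_it)
    # Older 3/2 law (speed grows with u*):   Q = C_N * u_star * (tau - tau_it)
    tau_it = 0.05                 # impact-threshold shear stress, Pa (illustrative)
    rho = 1.225                   # air density, kg/m^3
    tau = np.linspace(0.06, 0.4, 5)
    u_star = np.sqrt(tau / rho)   # shear velocity

    Q_lin = 1.0 * (tau - tau_it)            # arbitrary units
    Q_32 = 1.0 * u_star * (tau - tau_it)    # grows faster with shear stress

    for t, ql, qn in zip(tau, Q_lin, Q_32):
        print(f"tau={t:.2f}  linear={ql:.3f}  3/2-law={qn:.3f}")
    ```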

  35. Wind-invariant saltation heights imply linear scaling of aeolian saltation flux with shear stress

    PubMed Central

    Martin, Raleigh L.; Kok, Jasper F.

    2017-01-01

    Wind-driven sand transport generates atmospheric dust, forms dunes, and sculpts landscapes. However, it remains unclear how the flux of particles in aeolian saltation—the wind-driven transport of sand in hopping trajectories—scales with wind speed, largely because models do not agree on how particle speeds and trajectories change with wind shear velocity. We present comprehensive measurements, from three new field sites and three published studies, showing that characteristic saltation layer heights remain approximately constant with shear velocity, in agreement with recent wind tunnel studies. These results support the assumption of constant particle speeds in recent models predicting linear scaling of saltation flux with shear stress. In contrast, our results refute widely used older models that assume that particle speed increases with shear velocity, thereby predicting nonlinear 3/2 stress-flux scaling. This conclusion is further supported by direct field measurements of saltation flux versus shear stress. Our results thus argue for adoption of linear saltation flux laws and constant saltation trajectories for modeling saltation-driven aeolian processes on Earth, Mars, and other planetary surfaces. PMID:28630907

  36. On the Validity of the Streaming Model for the Redshift-Space Correlation Function in the Linear Regime

    NASA Astrophysics Data System (ADS)

    Fisher, Karl B.

    1995-08-01

    The relation between the galaxy correlation functions in real-space and redshift-space is derived in the linear regime by an appropriate averaging of the joint probability distribution of density and velocity. The derivation recovers the familiar linear theory result on large scales but has the advantage of clearly revealing the dependence of the redshift distortions on the underlying peculiar velocity field; streaming motions give rise to distortions of O(Ω^0.6/b) while variations in the anisotropic velocity dispersion yield terms of order O(Ω^1.2/b²). This probabilistic derivation of the redshift-space correlation function is similar in spirit to the derivation of the commonly used "streaming" model, in which the distortions are given by a convolution of the real-space correlation function with a velocity distribution function. The streaming model is often used to model the redshift-space correlation function on small, highly nonlinear, scales. There have been claims in the literature, however, that the streaming model is not valid in the linear regime. Our analysis confirms this claim, but we show that the streaming model can be made consistent with linear theory provided that the model for the streaming has the functional form predicted by linear theory and that the velocity distribution is chosen to be a Gaussian with the correct linear theory dispersion.
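
    The streaming model referred to here is commonly written as a convolution of the real-space correlation function with a pairwise line-of-sight velocity distribution f(v); a sketch of that standard form, with notation assumed rather than taken from the paper:

    ```latex
    % Standard streaming-model form: redshift-space correlation function as a
    % line-of-sight convolution of the real-space one with a pairwise velocity
    % distribution f(v). In the linear regime f is Gaussian with linear-theory
    % mean streaming and dispersion, as the record concludes.
    \[
    1 + \xi_s(r_\sigma, r_\pi) = \int_{-\infty}^{\infty}
      \left[ 1 + \xi_r\!\left(\sqrt{r_\sigma^2 + y^2}\,\right) \right]
      f\!\big( v = H_0 (r_\pi - y) \big)\, H_0 \,\mathrm{d}y
    \]
    ```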

  37. Mathematical Model for the Contribution of Individual Organs to Non-Zero Y-Intercepts in Single and Multi-Compartment Linear Models of Whole-Body Energy Expenditure

    PubMed Central

    Kaiyala, Karl J.

    2014-01-01

    Mathematical models for the dependence of energy expenditure (EE) on body mass and composition are essential tools in metabolic phenotyping. EE scales over broad ranges of body mass as a non-linear allometric function. When considered within restricted ranges of body mass, however, allometric EE curves exhibit ‘local linearity.’ Indeed, modern EE analysis makes extensive use of linear models. Such models typically involve one or two body mass compartments (e.g., fat free mass and fat mass). Importantly, linear EE models typically involve a non-zero (usually positive) y-intercept term of uncertain origin, a recurring theme in discussions of EE analysis and a source of confounding in traditional ratio-based EE normalization. Emerging linear model approaches quantify whole-body resting EE (REE) in terms of individual organ masses (e.g., liver, kidneys, heart, brain). Proponents of individual organ REE modeling hypothesize that multi-organ linear models may eliminate non-zero y-intercepts. This could have advantages in adjusting REE for body mass and composition. Studies reveal that individual organ REE is an allometric function of total body mass. I exploit first-order Taylor linearization of individual organ REEs to model the manner in which individual organs contribute to whole-body REE and to the non-zero y-intercept in linear REE models. The model predicts that REE analysis at the individual organ-tissue level will not eliminate intercept terms. I demonstrate that the parameters of a linear EE equation can be transformed into the parameters of the underlying ‘latent’ allometric equation. This permits estimates of the allometric scaling of EE in a diverse variety of physiological states that are not represented in the allometric EE literature but are well represented by published linear EE analyses. PMID:25068692

  38. Neutrino masses and cosmological parameters from a Euclid-like survey: Markov Chain Monte Carlo forecasts including theoretical errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audren, Benjamin; Lesgourgues, Julien; Bird, Simeon

    2013-01-01

    We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Markov Chain Monte Carlo (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors and assuming further conservatively that the uncorrelated error rises above 2% at k = 0.4 h/Mpc and z = 0.5, we find that a future Euclid-like cosmic shear/galaxy survey achieves a 1-σ error on M_ν close to 32 meV/25 meV, sufficient for detecting the total neutrino mass with good significance. If the residual uncorrelated error indeed rises rapidly towards smaller scales in the non-linear regime as we have assumed here, then the data on non-linear scales do not increase the sensitivity to the total neutrino mass. Assuming instead a ten times smaller theoretical error with the same scale dependence, the error on the total neutrino mass decreases moderately from σ(M_ν) = 18 meV to 14 meV when mildly non-linear scales with 0.1 h/Mpc < k < 0.6 h/Mpc are included in the analysis of the galaxy survey data.

  19. Testing higher-order Lagrangian perturbation theory against numerical simulations. 2: Hierarchical models

    NASA Technical Reports Server (NTRS)

    Melott, A. L.; Buchert, T.; Weiss, A. G.

    1995-01-01

    We present results showing an improvement of the accuracy of perturbation theory as applied to cosmological structure formation for a useful range of scales. The Lagrangian theory of gravitational instability of Friedmann-Lemaitre cosmogonies is compared with numerical simulations. We study the dynamics of hierarchical models as a second step; in the first step we analyzed the performance of the Lagrangian schemes for pancake models, in which the initial power spectrum is truncated. That work probed the quasi-linear and weakly non-linear regimes. We here explore whether the results found for pancake models carry over to hierarchical models which are evolved deeply into the non-linear regime. We smooth the initial data by using a variety of filter types and filter scales in order to determine the optimal performance of the analytical models, as has been done for the truncated 'Zel'dovich approximation' (hereafter TZA) in previous work. We find that for spectra with negative power-index the second-order scheme performs considerably better than TZA in terms of statistics which probe the dynamics, and slightly better in terms of low-order statistics like the power spectrum. However, in contrast to the results found for pancake models, where the higher-order schemes get worse than TZA at late non-linear stages and on small scales, we here find that the second-order model is as robust as TZA, retaining the improvement at later stages and on smaller scales. In view of these results we expect that the second-order truncated Lagrangian model is especially useful for the modelling of standard dark matter models such as Hot-, Cold-, and Mixed-Dark-Matter.

  20. Quantum criticality of the two-channel pseudogap Anderson model: universal scaling in linear and non-linear conductance.

    PubMed

    Wu, Tsan-Pei; Wang, Xiao-Qun; Guo, Guang-Yu; Anders, Frithjof; Chung, Chung-Hou

    2016-05-05

    The quantum criticality of the two-lead two-channel pseudogap Anderson impurity model is studied. Based on the non-crossing approximation (NCA) and numerical renormalization group (NRG) approaches, we calculate both the linear and nonlinear conductance of the model at finite temperatures with a voltage bias and a power-law vanishing conduction electron density of states, ρ_c(ω) ∝ |ω − μ_F|^r (0 < r < 1), near the Fermi energy μ_F. At a fixed lead-impurity hybridization, a quantum phase transition from the two-channel Kondo (2CK) to the local moment (LM) phase is observed with increasing r from r = 0 to r = r_c < 1. Surprisingly, in the 2CK phase, power-law scalings different from the well-known √T or √V forms are found. Moreover, novel power-law scalings in the conductance at the 2CK-LM quantum critical point are identified. Clear distinctions are found between the critical exponents of the linear and non-linear conductance at criticality. The implications of these two distinct quantum critical properties for non-equilibrium quantum criticality in general are discussed.

  1. Informativeness of Wind Data in Linear Madden-Julian Oscillation Prediction

    DTIC Science & Technology

    2016-08-15

    Linear inverse models (LIMs) are used to explore predictability and information content of the Madden–Julian Oscillation (MJO). Hindcast skill for ... mostly at the largest scales, adds 1–2 days of skill. Keywords: linear inverse modeling; Madden–Julian Oscillation; sub-seasonal prediction ... that may reflect on the MJO's incompletely understood dynamics. Cavanaugh et al. (2014, hereafter C14) explored the skill of linear inverse ...
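
    The LIM construction itself is compact: estimate the propagator from lagged covariances and recover the generator with a matrix logarithm. The snippet below is a generic LIM in this spirit, not C14's exact configuration; function names are hypothetical:

```python
import numpy as np
from scipy.linalg import expm, logm

def fit_lim(x, lag=1):
    """Fit dx/dt = L x + noise to a (time, variable) data matrix via the
    standard lag-covariance estimate L = logm(C(lag) @ inv(C(0))) / lag."""
    x = x - x.mean(axis=0)
    c0 = x[:-lag].T @ x[:-lag] / (len(x) - lag)   # zero-lag covariance
    ct = x[lag:].T @ x[:-lag] / (len(x) - lag)    # lag covariance
    return np.real(logm(ct @ np.linalg.inv(c0))) / lag

def lim_forecast(L, x0, steps):
    """Deterministic forecast x(t) = expm(L * t) @ x0 (t in lag units)."""
    return expm(L * steps) @ x0
```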

  2. TWO-STAGE FRAGMENTATION FOR CLUSTER FORMATION: ANALYTICAL MODEL AND OBSERVATIONAL CONSIDERATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, Nicole D.; Basu, Shantanu, E-mail: nwityk@uwo.ca, E-mail: basu@uwo.ca

    2012-12-10

    Linear analysis of the formation of protostellar cores in planar magnetic interstellar clouds shows that molecular clouds exhibit a preferred length scale for collapse that depends on the mass-to-flux ratio and neutral-ion collision time within the cloud. We extend this linear analysis to the context of clustered star formation. By combining the results of the linear analysis with a realistic ionization profile for the cloud, we find that a molecular cloud may evolve through two fragmentation events in the evolution toward the formation of stars. Our model suggests that the initial fragmentation into clumps occurs for a transcritical cloud on parsec scales while the second fragmentation can occur for transcritical and supercritical cores on subparsec scales. Comparison of our results with several star-forming regions (Perseus, Taurus, Pipe Nebula) shows support for a two-stage fragmentation model.

  3. Sea surface temperature anomalies, planetary waves, and air-sea feedback in the middle latitudes

    NASA Technical Reports Server (NTRS)

    Frankignoul, C.

    1985-01-01

    Current analytical models for large-scale air-sea interactions in the middle latitudes are reviewed in terms of known sea-surface temperature (SST) anomalies. The scales and strength of different atmospheric forcing mechanisms are discussed, along with the damping and feedback processes controlling the evolution of the SST. Difficulties with effective SST modeling are described in terms of the techniques and results of case studies, numerical simulations of mixed-layer variability and statistical modeling. The relationship between SST and diabatic heating anomalies is considered and a linear model is developed for the response of the stationary atmosphere to the air-sea feedback. The results obtained with linear wave models are compared with those of this linear model. Finally, sample data are presented from experiments with general circulation models into which specific SST anomaly data for the middle latitudes were introduced.

  4. The morphing of geographical features by Fourier transformation

    PubMed Central

    Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang

    2018-01-01

    This paper presents a morphing model for vector geographical data based on Fourier transformation. The model involves three main steps: conversion from vector data to a Fourier series, generation of an intermediate function by combining the two Fourier series describing a large-scale and a small-scale representation, and reverse conversion from the combined function back to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and can be used for continuous scale transformation of vector map features. The efficiency of the model is linearly related to the number of boundary points and to the truncation order n of the Fourier expansion. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable. PMID:29351344
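
    The three steps translate directly into code for closed boundaries: encode the points as complex numbers, blend the two FFTs, and invert. The sketch assumes both boundaries are resampled to the same number of ordered points; names and the truncation choice are illustrative:

```python
import numpy as np

def fourier_morph(shape_a, shape_b, t, n_keep=32):
    """Morph between two closed boundaries (N x 2 arrays with equal N):
    vector data -> Fourier series -> linear blend -> inverse transform.
    t = 0 returns shape_a, t = 1 returns shape_b."""
    za = shape_a[:, 0] + 1j * shape_a[:, 1]        # points as complex samples
    zb = shape_b[:, 0] + 1j * shape_b[:, 1]
    f = (1.0 - t) * np.fft.fft(za) + t * np.fft.fft(zb)  # blended coefficients
    trunc = np.zeros_like(f)
    trunc[:n_keep], trunc[-n_keep:] = f[:n_keep], f[-n_keep:]  # low harmonics only
    z = np.fft.ifft(trunc)
    return np.column_stack([z.real, z.imag])
```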

  5. Acoustic Treatment Design Scaling Methods. Volume 3; Test Plans, Hardware, Results, and Evaluation

    NASA Technical Reports Server (NTRS)

    Yu, J.; Kwan, H. W.; Echternach, D. K.; Kraft, R. E.; Syed, A. A.

    1999-01-01

    The ability to design, build, and test miniaturized acoustic treatment panels on scale-model fan rigs representative of the full-scale engine provides not only cost savings but also an opportunity to optimize the treatment by allowing tests of different designs. To be able to use scale-model treatment as a full-scale design tool, it is necessary that the designer be able to reliably translate the scale-model design and performance to an equivalent full-scale design. The primary objective of the study presented in this volume of the final report was to conduct laboratory tests to evaluate liner acoustic properties and validate advanced treatment impedance models. These laboratory tests include DC flow resistance measurements, normal-incidence impedance measurements, DC flow and impedance measurements in the presence of grazing flow, and in-duct liner attenuation as well as modal measurements. Test panels were fabricated at three different scale factors (full-scale, half-scale, and one-fifth scale) to support laboratory acoustic testing. The panel configurations include single-degree-of-freedom (SDOF) perforated sandwich panels, SDOF linear (wire mesh) liners, and double-degree-of-freedom (DDOF) linear acoustic panels.

  6. Lagrangian or Eulerian; real or Fourier? Not all approaches to large-scale structure are created equal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tassev, Svetlin, E-mail: tassev@astro.princeton.edu

    We present a pedagogical systematic investigation of the accuracy of Eulerian and Lagrangian perturbation theories of large-scale structure. We show that significant differences exist between them, especially when trying to model the Baryon Acoustic Oscillations (BAO). We find that the best available model of the BAO in real space is the Zel'dovich Approximation (ZA), giving an accuracy of ≲3% at redshift z = 0 in modelling the matter 2-pt function around the acoustic peak. All corrections to the ZA around the BAO scale are perfectly perturbative in real space. Any attempt to achieve better precision requires calibrating the theory to simulations because of the need to renormalize those corrections. In contrast, theories which do not fully preserve the ZA as their solution receive O(1) corrections around the acoustic peak in real space at z = 0, and are thus of suspicious convergence at low redshift around the BAO. As an example, we find that a similar accuracy of 3% for the acoustic peak is achieved by Eulerian Standard Perturbation Theory (SPT) at linear order only at z ≈ 4. Thus even when SPT is perturbative, one needs to include loop corrections for z ≲ 4 in real space. In Fourier space, all models perform similarly, and are controlled by the overdensity amplitude, thus recovering standard results. However, that comes at a price. Real space cleanly separates the BAO signal from non-linear dynamics. In contrast, Fourier space mixes signal from short mildly non-linear scales with the linear signal from the BAO, to the level that non-linear contributions from short scales dominate. Therefore, one has little hope of constructing a systematic theory for the BAO in Fourier space.

  7. Nonlinear spherical perturbations in quintessence models of dark energy

    NASA Astrophysics Data System (ADS)

    Pratap Rajvanshi, Manvendra; Bagla, J. S.

    2018-06-01

    Observations have confirmed the accelerated expansion of the universe. The accelerated expansion can be modelled by invoking a cosmological constant or a dynamical model of dark energy. A key difference between these models is that the equation of state parameter w for dark energy differs from −1 in dynamical dark energy (DDE) models. Further, the equation of state parameter is not constant for a general DDE model. Such differences can be probed using the variation of the scale factor with time by measuring distances. Another significant difference between the cosmological constant and DDE models is that the latter must cluster. Linear perturbation analysis indicates that perturbations in quintessence models of dark energy do not grow to have a significant amplitude at small length scales. In this paper we study the response of quintessence dark energy to non-linear perturbations in dark matter. We use a fully relativistic model for spherically symmetric perturbations. In this study we focus on thawing models. We find that in response to non-linear perturbations in dark matter, dark energy perturbations grow at a faster rate than expected in linear perturbation theory. We find that the dark energy perturbation remains localised and does not diffuse out to larger scales. The dominant drivers of the evolution of dark energy perturbations are the local Hubble flow and a suppression of gradients of the scalar field. We also find that the equation of state parameter w changes in response to perturbations in dark matter such that it also becomes a function of position. The variation of w in space is correlated with the density contrast for matter. Variation of w and perturbations in dark energy are more pronounced in response to large-scale perturbations in matter, while the dependence on the amplitude of matter perturbations is much weaker.

  8. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters numerous, solving the inverse problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply the method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
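
    The recycling strategy is easy to sketch for dense problems: build one Krylov basis for the Gauss-Newton normal equations, project once, then solve the small system for every damping parameter. This is a toy version of the idea, not the MADS/Julia implementation:

```python
import numpy as np

def krylov_subspace(matvec, b, m):
    """Orthonormal basis of span{b, Ab, ..., A**(m-1) b} (modified Gram-Schmidt)."""
    V = [b / np.linalg.norm(b)]
    for _ in range(m - 1):
        w = matvec(V[-1])
        for v in V:
            w = w - (v @ w) * v
        nw = np.linalg.norm(w)
        if nw < 1e-12:
            break
        V.append(w / nw)
    return np.array(V).T                       # shape (n, k)

def lm_trial_steps(J, r, lambdas, m=20):
    """Solve (J^T J + lam*I) d = -J^T r for many damping values lam, reusing
    one Krylov basis built from the gradient g = J^T r."""
    g = J.T @ r
    V = krylov_subspace(lambda v: J.T @ (J @ v), g, m)
    H = V.T @ (J.T @ (J @ V))                  # small projected Hessian
    gk = V.T @ g
    eye = np.eye(H.shape[0])
    return {lam: -V @ np.linalg.solve(H + lam * eye, gk) for lam in lambdas}
```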

  9. Coarse-grained description of cosmic structure from Szekeres models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sussman, Roberto A.; Gaspar, I. Delgado; Hidalgo, Juan Carlos, E-mail: sussman@nucleares.unam.mx, E-mail: ismael.delgadog@uaem.edu.mx, E-mail: hidalgo@fis.unam.mx

    2016-03-01

    We show that the full dynamical freedom of the well known Szekeres models allows for the description of elaborated 3-dimensional networks of cold dark matter structures (over-densities and/or density voids) undergoing 'pancake' collapse. By reducing Einstein's field equations to a set of evolution equations, which themselves reduce in the linear limit to evolution equations for linear perturbations, we determine the dynamics of such structures, with the spatial comoving location of each structure uniquely specified by standard early Universe initial conditions. By means of a representative example we examine in detail the density contrast, the Hubble flow and peculiar velocities of structures that evolved, from linear initial data at the last scattering surface, to fully non-linear 10–20 Mpc scale configurations today. To motivate further research, we provide a qualitative discussion on the connection of Szekeres models with linear perturbations and the pancake collapse of the Zeldovich approximation. This type of structure modelling provides a coarse-grained (but fully relativistic, non-linear and non-perturbative) description of evolving large-scale cosmic structures before their virialisation, and as such it has an enormous potential for applications in cosmological research.

  10. Testing higher-order Lagrangian perturbation theory against numerical simulation. 1: Pancake models

    NASA Technical Reports Server (NTRS)

    Buchert, T.; Melott, A. L.; Weiss, A. G.

    1993-01-01

    We present results showing an improvement of the accuracy of perturbation theory as applied to cosmological structure formation for a useful range of quasi-linear scales. The Lagrangian theory of gravitational instability of an Einstein-de Sitter dust cosmogony, investigated and solved up to the third order, is compared with numerical simulations. In this paper we study the dynamics of pancake models as a first step. In previous work the accuracy of several analytical approximations for the modeling of large-scale structure in the mildly non-linear regime was analyzed in the same way, allowing for direct comparison of the accuracy of various approximations. In particular, the Zel'dovich approximation (hereafter ZA) as a subclass of the first-order Lagrangian perturbation solutions was found to provide an excellent approximation to the density field in the mildly non-linear regime (i.e., up to a linear r.m.s. density contrast of σ ≈ 2). The performance of ZA in hierarchical clustering models can be greatly improved by truncating the initial power spectrum (smoothing the initial data). We here explore whether this approximation can be further improved with higher-order corrections in the displacement mapping from homogeneity. We study a single pancake model (truncated power spectrum with power-index n = -1) using cross-correlation statistics employed in previous work. We find that for all statistical methods used the higher-order corrections improve the results obtained for the first-order solution up to the stage when σ (linear theory) ≈ 1. While this improvement can be seen on all spatial scales, later stages retain this feature only above a certain scale which increases with time. However, third order offers little improvement over second order at any stage. The total breakdown of the perturbation approach is observed at the stage where σ (linear theory) ≈ 2, which corresponds to the onset of hierarchical clustering. This success is found at considerably higher non-linearity than is usual for perturbation theory. Whether a truncation of the initial power spectrum in hierarchical models retains this improvement will be analyzed in forthcoming work.
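
    The first-order member of this Lagrangian hierarchy, the (truncated) Zel'dovich approximation, is short enough to sketch in one dimension; the function name and the Gaussian filter used for truncation are illustrative choices:

```python
import numpy as np

def zeldovich_1d(delta0, boxsize, growth, r_smooth=0.0):
    """Zel'dovich positions x = q + D(t) * psi(q) on a periodic 1-D grid.
    The displacement psi follows from the linear density contrast through
    delta_k = -i k psi_k; r_smooth > 0 Gaussian-truncates the initial
    spectrum, giving the TZA variant."""
    n = delta0.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    dk = np.fft.fft(delta0) * np.exp(-0.5 * (k * r_smooth) ** 2)
    psi_k = np.zeros_like(dk)
    psi_k[k != 0] = 1j * dk[k != 0] / k[k != 0]
    psi = np.fft.ifft(psi_k).real
    q = np.arange(n) * boxsize / n
    return q + growth * psi
```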

  11. Imprints of dark energy on cosmic structure formation - I. Realistic quintessence models and the non-linear matter power spectrum

    NASA Astrophysics Data System (ADS)

    Alimi, J.-M.; Füzfa, A.; Boucher, V.; Rasera, Y.; Courtin, J.; Corasaniti, P.-S.

    2010-01-01

    Quintessence has been proposed to account for dark energy (DE) in the Universe. This component causes a typical modification of the background cosmic expansion, which, in addition to its clustering properties, can leave a potentially distinctive signature on large-scale structures. Many previous studies have investigated this topic, particularly in relation to the non-linear regime of structure formation. However, no careful pre-selection of viable quintessence models with high-precision cosmological data was performed. Here we show that this has led to a misinterpretation (and underestimation) of the imprint of quintessence on the distribution of large-scale structures. To this purpose, we perform a likelihood analysis of the combined Supernova Ia UNION data set and Wilkinson Microwave Anisotropy Probe 5-yr data to identify realistic quintessence models. These are specified by different model parameter values, but are still statistically indistinguishable from the vanilla Λ cold dark matter (ΛCDM) model. Differences are especially manifest in the predicted amplitude and shape of the linear matter power spectrum, though these remain within the uncertainties of the Sloan Digital Sky Survey data. We use these models as a benchmark for studying the clustering properties of dark matter haloes by performing a series of high-resolution N-body simulations. In this first paper, we specifically focus on the non-linear matter power spectrum. We find that realistic quintessence models allow for relevant differences of the dark matter distribution with respect to the ΛCDM scenario well into the non-linear regime, with deviations of up to 40 per cent in the non-linear power spectrum. Such differences are shown to depend on the nature of DE, as well as the scale and epoch considered. At small scales (k ~ 1-5 h Mpc^-1, depending on the redshift), the structure formation process is about 20 per cent more efficient than in ΛCDM. We show that these imprints are a specific record of the cosmic structure formation history in DE cosmologies and therefore cannot be accounted for in standard fitting functions of the non-linear matter power spectrum.

  12. Magnetotransport in a Model of a Disordered Strange Metal

    NASA Astrophysics Data System (ADS)

    Patel, Aavishkar A.; McGreevy, John; Arovas, Daniel P.; Sachdev, Subir

    2018-04-01

    Despite much theoretical effort, there is no complete theory of the "strange" metal state of the high-temperature superconductors and its linear-in-temperature (T) resistivity. Recent experiments showing an unexpected linear-in-field (B) magnetoresistivity have deepened the puzzle. We propose a simple model of itinerant electrons, interacting via random couplings, with electrons localized on a lattice of "quantum dots" or "islands." This model is solvable in a particular large-N limit and can reproduce observed behavior. The key feature of our model is that the electrons in each quantum dot are described by a Sachdev-Ye-Kitaev model describing electrons without quasiparticle excitations. For a particular choice of the interaction between the itinerant and localized electrons, this model realizes a controlled description of a diffusive marginal-Fermi liquid (MFL) without momentum conservation, which has a linear-in-T resistivity and a T ln T specific heat as T → 0. By tuning the strength of this interaction relative to the bandwidth of the itinerant electrons, we can additionally obtain a finite-T crossover to a fully incoherent regime that also has a linear-in-T resistivity. We describe the magnetotransport properties of this model and show that the MFL regime has conductivities that scale as a function of B/T; however, the magnetoresistance saturates at large B. We then consider a macroscopically disordered sample with domains of such MFLs with varying densities of electrons and islands. Using an effective-medium approximation, we obtain a macroscopic electrical resistance that scales linearly in the magnetic field B applied perpendicular to the plane of the sample, at large B. The resistance also scales linearly in T at small B, and as T·f(B/T) at intermediate B. We consider implications for recent experiments reporting linear transverse magnetoresistance in the strange metal phases of the pnictides and cuprates.

  13. The Use of Shrinkage Techniques in the Estimation of Attrition Rates for Large Scale Manpower Models

    DTIC Science & Technology

    1988-07-27

    auto regressive model combined with a linear program that solves for the coefficients using MAD. But this success has diminished with time (Rowe ... 'Harrison-Stevens Forecasting and the Multiprocess Dynamic Linear Model', The American Statistician, v. 40, pp. 129-135, 1986. 8. Box, G. E. P. and ... 1950. 40. McCullagh, P. and Nelder, J., Generalized Linear Models, Chapman and Hall, 1983. 41. McKenzie, E., General Exponential Smoothing and the ...

  14. Linear and non-linear perturbations in dark energy models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Escamilla-Rivera, Celia; Casarini, Luciano; Fabris, Júlio C.

    2016-11-01

    In this work we discuss observational aspects of three time-dependent parameterisations of the dark energy equation of state w(z). In order to determine the dynamics associated with these models, we calculate their background evolution and perturbations in a scalar field representation. After performing a complete treatment of linear perturbations, we also show that the non-linear contribution of the selected w(z) parameterisations to the matter power spectra is almost the same for all scales, with no significant difference from the predictions of the standard ΛCDM model.
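
    The abstract does not spell out the three parameterisations, so as a stand-in here is the background expansion for the widely used CPL form w(z) = w0 + wa·z/(1+z), whose dark-energy density integral has a closed form; parameter values are illustrative:

```python
import numpy as np

def hubble_ratio(z, om=0.3, w0=-0.9, wa=0.1):
    """H(z)/H0 for a flat universe with CPL dark energy. The dark-energy
    density evolves as rho/rho0 = (1+z)**(3*(1+w0+wa)) * exp(-3*wa*z/(1+z))."""
    de = (1.0 + z) ** (3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * z / (1.0 + z))
    return np.sqrt(om * (1.0 + z) ** 3 + (1.0 - om) * de)

print(hubble_ratio(np.array([0.0, 0.5, 1.0, 2.0])))
```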

  15. Optimal Scaling of Interaction Effects in Generalized Linear Models

    ERIC Educational Resources Information Center

    van Rosmalen, Joost; Koning, Alex J.; Groenen, Patrick J. F.

    2009-01-01

    Multiplicative interaction models, such as Goodman's (1981) RC(M) association models, can be a useful tool for analyzing the content of interaction effects. However, most models for interaction effects are suitable only for data sets with two or three predictor variables. Here, we discuss an optimal scaling model for analyzing the content of…

  16. Very low scale Coleman-Weinberg inflation with nonminimal coupling

    NASA Astrophysics Data System (ADS)

    Kaneta, Kunio; Seto, Osamu; Takahashi, Ryo

    2018-03-01

    We study viable small-field Coleman-Weinberg (CW) inflation models with the help of nonminimal coupling to gravity. The simplest small-field CW inflation model (with a low-scale potential minimum) is incompatible with the cosmological constraint on the scalar spectral index. However, there are possibilities to make the model realistic. First, we revisit the CW inflation model supplemented with a linear potential term. We next consider the CW inflation model with a logarithmic nonminimal coupling and illustrate that the model can open a new viable parameter space that includes the model with a linear potential term. We also show parameter spaces where the Hubble scale during inflation can be as small as 10^-4 GeV, 1 GeV, 10^4 GeV, and 10^8 GeV for numbers of e-folds of 40, 45, 50, and 55, respectively, with the other cosmological constraints being satisfied.

  17. Linear score tests for variance components in linear mixed models and applications to genetic association studies.

    PubMed

    Qu, Long; Guennel, Tobias; Marshall, Scott L

    2013-12-01

    Following the rapid development of genome-scale genotyping technologies, genetic association mapping has become a popular tool to detect genomic regions responsible for certain (disease) phenotypes, especially in early-phase pharmacogenomic studies with limited sample size. In response to such applications, a good association test needs to be (1) applicable to a wide range of possible genetic models, including, but not limited to, the presence of gene-by-environment or gene-by-gene interactions and non-linearity of a group of marker effects, (2) accurate in small samples, fast to compute on the genomic scale, and amenable to large scale multiple testing corrections, and (3) reasonably powerful to locate causal genomic regions. The kernel machine method represented in linear mixed models provides a viable solution by transforming the problem into testing the nullity of variance components. In this study, we consider score-based tests by choosing a statistic linear in the score function. When the model under the null hypothesis has only one error variance parameter, our test is exact in finite samples. When the null model has more than one variance parameter, we develop a new moment-based approximation that performs well in simulations. Through simulations and analysis of real data, we demonstrate that the new test possesses most of the aforementioned characteristics, especially when compared to existing quadratic score tests or restricted likelihood ratio tests. © 2013, The International Biometric Society.

  18. A cooperation and competition based simple cell receptive field model and study of feed-forward linear and nonlinear contributions to orientation selectivity.

    PubMed

    Bhaumik, Basabi; Mathur, Mona

    2003-01-01

    We present a model for the development of orientation selectivity in layer IV simple cells. Receptive field (RF) development in the model is determined by axonal growth and retraction in the geniculocortical pathway, guided by diffusive cooperation and resource-limited competition. The simulated cortical RFs resemble experimental RFs. The receptive field model is incorporated in a three-layer visual pathway model consisting of retina, LGN and cortex. We have studied the effect of activity-dependent synaptic scaling on the orientation tuning of cortical cells. The mean value of the hwhh (half-width at half the height of the maximum response) in simulated cortical cells is 58 degrees when we consider only the linear excitatory contribution from the LGN. We observe a mean improvement of 22.8 degrees in the tuning response due to non-linear spiking mechanisms that include the effects of the threshold voltage and the synaptic scaling factor.

  19. Simplifying and upscaling water resources systems models that combine natural and engineered components

    NASA Astrophysics Data System (ADS)

    McIntyre, N.; Keir, G.

    2014-12-01

    Water supply systems typically encompass components of both natural systems (e.g. catchment runoff, aquifer interception) and engineered systems (e.g. process equipment, water storages and transfers). Many physical processes of varying spatial and temporal scales are contained within these hybrid systems models. The need to aggregate and simplify system components has been recognised for reasons of parsimony and comprehensibility; and the use of probabilistic methods for modelling water-related risks also prompts the need to seek computationally efficient up-scaled conceptualisations. How to manage the up-scaling errors in such hybrid systems models has not been well-explored, compared to research in the hydrological process domain. Particular challenges include the non-linearity introduced by decision thresholds and non-linear relations between water use, water quality, and discharge strategies. Using a case study of a mining region, we explore the nature of up-scaling errors in water use, water quality and discharge, and we illustrate an approach to identification of a scale-adjusted model including an error model. Ways forward for efficient modelling of such complex, hybrid systems are discussed, including interactions with human, energy and carbon systems models.

  20. Fast and local non-linear evolution of steep wave-groups on deep water: A comparison of approximate models to fully non-linear simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adcock, T. A. A.; Taylor, P. H.

    2016-01-15

    The non-linear Schrödinger equation and its higher-order extensions are routinely used for analysis of extreme ocean waves. This paper compares the evolution of individual wave-packets modelled using non-linear Schrödinger type equations with packets modelled using fully non-linear potential flow models. The modified non-linear Schrödinger equation accurately models the relatively large-scale non-linear changes to the shape of wave-groups, with a dramatic contraction of the group along the mean propagation direction and a corresponding extension of the width of the wave-crests. In addition, as extreme waves form, there is a local non-linear contraction of the wave-group around the crest which leads to a localised broadening of the wave spectrum which the bandwidth-limited non-linear Schrödinger equations struggle to capture. This limitation occurs for waves of moderate steepness and a narrow underlying spectrum.
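
    A minimal split-step Fourier integrator for the normalized focusing NLSE, i·a_t + (1/2)·a_xx + |a|²·a = 0, shows the kind of wave-group evolution being compared; the normalization is an assumption and the higher-order (modified NLSE) terms are omitted:

```python
import numpy as np

def split_step_nlse(a0, box, dt, steps):
    """Integrate i a_t + 0.5 a_xx + |a|^2 a = 0 on a periodic grid using
    Strang splitting: half a linear step, a nonlinear step, half a linear step."""
    n = a0.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
    lin_half = np.exp(-0.25j * k**2 * dt)        # dispersive half step
    a = a0.astype(complex)
    for _ in range(steps):
        a = np.fft.ifft(lin_half * np.fft.fft(a))
        a *= np.exp(1j * np.abs(a) ** 2 * dt)    # nonlinear phase rotation
        a = np.fft.ifft(lin_half * np.fft.fft(a))
    return a

# The fundamental soliton a = sech(x) should keep its envelope unchanged:
x = np.linspace(-20, 20, 512, endpoint=False)
a = split_step_nlse(1 / np.cosh(x), box=40.0, dt=1e-3, steps=2000)
print(np.max(np.abs(a)))  # ~ 1.0
```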

  1. Non-Linear Cosmological Power Spectra in Real and Redshift Space

    NASA Technical Reports Server (NTRS)

    Taylor, A. N.; Hamilton, A. J. S.

    1996-01-01

    We present an expression for the non-linear evolution of the cosmological power spectrum based on Lagrangian trajectories. This is simplified using the Zel'dovich approximation to trace particle displacements, assuming Gaussian initial conditions. The model is found to exhibit the transfer of power from large to small scales expected in self-gravitating fields. Some exact solutions are found for power-law initial spectra. We have extended this analysis into redshift space and found a solution for the non-linear, anisotropic redshift-space power spectrum in the limit of plane-parallel redshift distortions. The quadrupole-to-monopole ratio is calculated for the case of power-law initial spectra. We find that the shape of this ratio depends on the shape of the initial spectrum, but when scaled to linear theory depends only weakly on the redshift-space distortion parameter β. The point of zero-crossing of the quadrupole, k_0, is found to obey a simple scaling relation and we calculate this scale in the Zel'dovich approximation. The model is found to be in good agreement with a series of N-body simulations on scales down to the zero-crossing of the quadrupole, although the wavenumber at zero-crossing is underestimated. These results are applied to the quadrupole-to-monopole ratio found in the merged QDOT plus 1.2-Jy IRAS redshift survey. Using a likelihood technique we estimate that the distortion parameter is constrained to be β > 0.5 at the 95 percent level. Our results are fairly insensitive to the local primordial spectral slope, but the likelihood analysis suggests n = -2 in the translinear regime. The zero-crossing scale of the quadrupole is k_0 = 0.5 ± 0.1 h Mpc^-1, and from this we infer that the amplitude of clustering is σ_8 = 0.7 ± 0.05. We suggest that the success of this model is due to non-linear redshift-space effects arising from infall onto caustics, and is not dominated by virialized cluster cores. The latter should start to dominate on scales below the zero-crossing of the quadrupole, where our model breaks down.
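
    The linear-theory benchmark for this ratio is the standard Kaiser expression, which is simple to invert numerically for β; the sketch covers the distortion model only, not the Zel'dovich-based non-linear extension developed above:

```python
import numpy as np
from scipy.optimize import brentq

def quad_mono_ratio(beta):
    """Kaiser linear-theory quadrupole-to-monopole power spectrum ratio."""
    return (4 * beta / 3 + 4 * beta**2 / 7) / (1 + 2 * beta / 3 + beta**2 / 5)

def beta_from_ratio(ratio):
    """Invert the ratio for beta; the ratio is monotonic on (0, 3)."""
    return brentq(lambda b: quad_mono_ratio(b) - ratio, 1e-6, 3.0)

print(beta_from_ratio(quad_mono_ratio(0.5)))  # 0.5
```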

  2. Optogenetic stimulation of a meso-scale human cortical model

    NASA Astrophysics Data System (ADS)

    Selvaraj, Prashanth; Szeri, Andrew; Sleigh, Jamie; Kirsch, Heidi

    2015-03-01

    Neurological phenomena like sleep and seizures depend not only on the activity of individual neurons, but on the dynamics of neuron populations as well. Meso-scale models of cortical activity provide a means to study neural dynamics at the level of neuron populations. Additionally, they offer a safe and economical way to test the effects and efficacy of stimulation techniques on the dynamics of the cortex. Here, we use a physiologically relevant meso-scale model of the cortex to study the hypersynchronous activity of neuron populations during epileptic seizures. The model consists of a set of stochastic, highly non-linear partial differential equations. Next, we use optogenetic stimulation to control seizures in a hyperexcited cortex, and to induce seizures in a normally functioning cortex. The high spatial and temporal resolution this method offers makes a strong case for the use of optogenetics in treating meso-scale cortical disorders such as epileptic seizures. We use bifurcation analysis to investigate the effect of optogenetic stimulation in the meso-scale model, and its efficacy in suppressing the non-linear dynamics of seizures.

  3. Tortuosity of lightning return stroke channels

    NASA Technical Reports Server (NTRS)

    Levine, D. M.; Gilson, B.

    1984-01-01

    Data obtained from photographs of lightning are presented on the tortuosity of return stroke channels. The data were obtained by making piecewise linear fits to the channels and recording the cartesian coordinates of the ends of each linear segment. The mean change between the ends of the segments was nearly zero in the horizontal direction and about eight meters in the vertical direction. Histograms of these changes are presented. These data were used to create model lightning channels and to predict the electric fields radiated during return strokes. This was done using a computer-generated random walk in which linear segments were placed end-to-end to form a piecewise linear representation of the channel. The computer selected random numbers for the ends of the segments assuming a normal distribution with the measured statistics. Once the channels were simulated, the electric fields radiated during a return stroke were predicted using a transmission line model on each segment. It was found that realistic channels are obtained with this procedure, but only if the model includes two scales of tortuosity: fine-scale irregularities corresponding to the local channel tortuosity, superimposed on large-scale horizontal drifts. The two scales of tortuosity are also necessary to obtain agreement between the electric fields computed mathematically from the simulated channels and the electric fields radiated from real return strokes. Without large-scale drifts, the computed electric fields do not have the undulations characteristic of the data.
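
    The channel-generation recipe, normal per-segment increments plus a slow large-scale horizontal drift, reduces to a few lines. The ~8 m mean vertical change is taken from the data above; every other value is an illustrative placeholder, not a fitted statistic:

```python
import numpy as np

def tortuous_channel(n_seg=400, sigma_h=8.0, mean_v=8.0, sigma_v=4.0,
                     drift_window=50, drift_amp=3.0, seed=0):
    """Piecewise-linear channel with two scales of tortuosity: fine-scale
    random segment endpoints plus a slowly varying horizontal drift."""
    rng = np.random.default_rng(seed)
    fine = rng.normal(0.0, sigma_h, n_seg)              # local tortuosity
    drift = drift_amp * np.convolve(rng.normal(0, 1, n_seg),
                                    np.ones(drift_window) / drift_window,
                                    mode="same")        # large-scale drift
    x = np.cumsum(fine + drift)                         # horizontal coordinate (m)
    z = np.cumsum(rng.normal(mean_v, sigma_v, n_seg))   # vertical coordinate (m)
    return x, z
```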

  5. A model of the human in a cognitive prediction task.

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.

    1973-01-01

    The human decision maker's behavior when predicting future states of discrete linear dynamic systems driven by zero-mean Gaussian processes is modeled. The task is on a slow enough time scale that physiological constraints are insignificant compared with cognitive limitations. The model is basically a linear regression system identifier with a limited memory and noisy observations. Experimental data are presented and compared to the model.
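
    One common reading of "a linear regression system identifier with a limited memory and noisy observations" is recursive least squares with exponential forgetting; the sketch below is that generic estimator under this assumption, not the paper's exact model:

```python
import numpy as np

def rls_forgetting(phi, y, lam=0.95, delta=100.0):
    """Recursive least squares with forgetting factor lam (memory limit).
    phi: (T, p) regressors; y: (T,) noisy observations. Returns estimate."""
    p = phi.shape[1]
    theta = np.zeros(p)
    P = delta * np.eye(p)                     # large initial covariance
    for x, yt in zip(phi, y):
        k = P @ x / (lam + x @ P @ x)         # gain
        theta = theta + k * (yt - x @ theta)  # innovation update
        P = (P - np.outer(k, x @ P)) / lam    # forgetting inflates covariance
    return theta
```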

  6. Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression

    DOE PAGES

    Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.; ...

    2017-01-18

    Currently, Constraint-Based Reconstruction and Analysis (COBRA) is the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Furthermore, standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We therefore developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging problems tested here. DQQ will enable extensive use of large linear and nonlinear models in systems biology and other applications involving multiscale data.

  8. A scalable, fully implicit algorithm for the reduced two-field low-β extended MHD model

    DOE PAGES

    Chacon, Luis; Stanier, Adam John

    2016-12-01

    Here, we demonstrate a scalable fully implicit algorithm for the two-field low-β extended MHD model. This reduced model describes plasma behavior in the presence of strong guide fields, and is of significant practical impact both in nature and in laboratory plasmas. The model displays strong hyperbolic behavior, as manifested by the presence of fast dispersive waves, which make a fully implicit treatment very challenging. In this study, we employ a Jacobian-free Newton–Krylov nonlinear solver, for which we propose a physics-based preconditioner that renders the linearized set of equations suitable for inversion with multigrid methods. As a result, the algorithm is shown to scale both algorithmically (i.e., the iteration count is insensitive to grid refinement and timestep size) and in parallel in a weak-scaling sense, with the wall-clock time scaling weakly with the number of cores for up to 4096 cores. For a 4096 × 4096 mesh, we demonstrate a wall-clock-time speedup of ~6700 with respect to explicit algorithms. The model is validated linearly (against linear theory predictions) and nonlinearly (against fully kinetic simulations), demonstrating excellent agreement.
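
    SciPy ships a generic Jacobian-free Newton-Krylov solver, which makes the core idea easy to demonstrate on a toy nonlinear boundary-value problem; the extended-MHD equations and the physics-based preconditioner are well beyond this sketch:

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    """Residual of -u'' + u**3 = 1 on a uniform grid with u(0) = u(1) = 0;
    the Jacobian is never formed, only residual evaluations are needed."""
    h = 1.0 / (u.size + 1)
    upad = np.concatenate([[0.0], u, [0.0]])
    lap = (upad[:-2] - 2 * upad[1:-1] + upad[2:]) / h**2
    return -lap + u**3 - 1.0

u = newton_krylov(residual, np.zeros(200), f_tol=1e-10)
print(u.max())
```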

  9. Assessing the scalability of dynamic field gradient focusing by linear modeling

    PubMed Central

    Tracy, Noah I.; Ivory, Cornelius F.

    2010-01-01

    Dynamic field gradient focusing (DFGF) separates and concentrates proteins in native buffers, where proteins are most soluble, using a computer-controlled electric field gradient which lets the operator adjust the pace and resolution of the separation in real time. The work in this paper assessed whether DFGF could be scaled up from microgram analytical-scale protein loads to milligram preparative-scale loads. Linear modeling of the electric potential, protein transport, and heat transfer simulated the performance of a preparative-scale DFGF instrument. The electric potential model showed where the electrodes should be placed to optimize the shape and strength of the electric field gradient. Results from the protein transport model suggested that in 10 min the device should separate 10 mg each of two proteins whose electrophoretic mobilities differ by 5×. Proteins with electrophoretic mobilities differing by only 5% should separate in 3 h. The heat transfer model showed that the preparative DFGF design could dissipate 1 kW of Joule heat while keeping the separation chamber at 25°C. Model results pointed to DFGF successfully scaling up by 1000× using the proposed instrument design. PMID:18196522

  10. Assessment of the relationship between chlorophyll fluorescence and photosynthesis across scales from measurements and simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Guanter, L.; Berry, J. A.; Tol, C. V. D.

    2016-12-01

    Solar-induced chlorophyll fluorescence (SIF) is a novel optical tool for the assessment of terrestrial photosynthesis (GPP). Recent work has shown a strong link between GPP and satellite retrievals of SIF at broad scales. However, critical gaps remain between short-term, small-scale mechanistic understanding and seasonal global observations. In this presentation, we provide a model-based analysis of the relationship between SIF and GPP across scales for diverse vegetation types and a range of meteorological conditions, with the ultimate focus on reproducing the environmental conditions during remote sensing measurements. The coupled fluorescence-photosynthesis model SCOPE is used to simulate GPP and SIF at both the leaf and canopy levels for 13 flux sites. Analyses were conducted to investigate the effects of temporal scaling, canopy structure, overpass time, and spectral domain on the relationship between SIF and GPP. The simulated SIF is highly non-linear with GPP at the leaf level and at instantaneous time scales, and tends to linearize when scaling to the canopy level and to daily and seasonal scales. These relationships are consistent across a wide range of vegetation types. The relationship between SIF and GPP is primarily driven by absorbed photosynthetically active radiation (APAR), especially at the seasonal scale, although photosynthetic efficiency also contributes to strengthening the link between them. The linearization of their relationship from leaf to canopy and under averaging over time occurs because the overall conditions of the canopy fall within the range of the linear responses of GPP and SIF to light and photosynthetic capacity. Our results further show that the top-of-canopy relationships between simulated SIF and GPP have similar linearity regardless of whether we use the morning or midday satellite overpass times. These findings are confirmed by field measurements. In addition, simulated red SIF at 685 nm has a similar relationship with GPP to that of far-red SIF at 740 nm at the canopy level.

  11. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  12. Optimizing BAO measurements with non-linear transformations of the Lyman-α forest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xinkang; Font-Ribera, Andreu; Seljak, Uroš, E-mail: xinkang.wang@berkeley.edu, E-mail: afont@lbl.gov, E-mail: useljak@berkeley.edu

    2015-04-01

    We explore the effect of applying a non-linear transformation to the Lyman-α forest transmitted flux F = e^(−τ) and the ability of analytic models to predict the resulting clustering amplitude. Both the large-scale bias of the transformed field (signal) and the amplitude of small-scale fluctuations (noise) can be arbitrarily modified, but we were unable to find a transformation that significantly increases the signal-to-noise ratio on large scales using Taylor expansion up to the third order. We do, however, achieve a 33% improvement in signal to noise for the Gaussianized field in the transverse direction. On the other hand, we explore an analytic model for the large-scale biasing of the Lyα forest, and present an extension of this model to describe the biasing of the transformed fields. Using hydrodynamic simulations we show that the model works best to describe the biasing with respect to velocity gradients, but is less successful in predicting the biasing with respect to large-scale density fluctuations, especially for very nonlinear transformations.

  13. Estimating Ω from Galaxy Redshifts: Linear Flow Distortions and Nonlinear Clustering

    NASA Astrophysics Data System (ADS)

    Bromley, B. C.; Warren, M. S.; Zurek, W. H.

    1997-02-01

    We propose a method to determine the cosmic mass density Ω from redshift-space distortions induced by large-scale flows in the presence of nonlinear clustering. Nonlinear structures in redshift space, such as fingers of God, can contaminate distortions from linear flows on scales as large as several times the small-scale pairwise velocity dispersion σ_v. Following Peacock & Dodds, we work in the Fourier domain and propose a model to describe the anisotropy in the redshift-space power spectrum; tests with high-resolution numerical data demonstrate that the model is robust for both mass and biased galaxy halos on translinear scales and above. On the basis of this model, we propose an estimator of the linear growth parameter β = Ω^0.6/b, where b measures bias, derived from sampling functions that are tuned to eliminate distortions from nonlinear clustering. The measure is tested on the numerical data and found to recover the true value of β to within ~10%. An analysis of IRAS 1.2 Jy galaxies yields β = 0.8 (+0.4, −0.3) at a scale of 1000 km s^-1, which is close to optimal given the shot noise and finite size of the survey. This measurement is consistent with dynamical estimates of β derived from both real-space and redshift-space information. The importance of the method presented here is that nonlinear clustering effects are removed to enable linear correlation anisotropy measurements on scales approaching the translinear regime. We discuss implications for analyses of forthcoming optical redshift surveys in which the dispersion is more than a factor of 2 greater than in the IRAS data.

  14. Linear scaling relationships and volcano plots in homogeneous catalysis - revisiting the Suzuki reaction.

    PubMed

    Busch, Michael; Wodrich, Matthew D; Corminboeuf, Clémence

    2015-12-01

    Linear free energy scaling relationships and volcano plots are common tools used to identify potential heterogeneous catalysts for myriad applications. Despite the striking simplicity and predictive power of volcano plots, they remain unknown in homogeneous catalysis. Here, we construct volcano plots to analyze a prototypical reaction from homogeneous catalysis, the Suzuki cross-coupling of olefins. Volcano plots succeed both in discriminating amongst different catalysts and reproducing experimentally known trends, which serves as validation of the model for this proof-of-principle example. These findings indicate that the combination of linear scaling relationships and volcano plots could serve as a valuable methodology for identifying homogeneous catalysts possessing a desired activity through a priori computational screening.

  15. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed-cost criteria. Preliminary results show that the ILP model is efficient in solving small- to moderate-sized problems. However, the ILP model becomes intractable for large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which significantly reduces solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
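
    A tiny fixed-cost GBLP can be posed as a covering ILP and handed to an off-the-shelf solver (scipy.optimize.milp, SciPy >= 1.9). This generic stand-in illustrates the model class; the abstract does not give the paper's exact formulation:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(1)
cells = np.array([(i, j) for i in range(10) for j in range(10)], float)  # grid
demand = rng.uniform(0, 9, size=(25, 2))            # points needing coverage
cost = rng.uniform(1, 5, size=len(cells))           # fixed cost per opened cell
covers = np.linalg.norm(demand[:, None] - cells[None, :], axis=2) <= 2.0

# minimize total fixed cost s.t. every demand point is covered at least once
res = milp(c=cost,
           constraints=LinearConstraint(covers.astype(float), lb=1.0),
           integrality=np.ones(len(cells)),
           bounds=Bounds(0, 1))
print(res.fun, int(res.x.round().sum()), "cells opened")
```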

  16. Voluntary EMG-to-force estimation with a multi-scale physiological muscle model

    PubMed Central

    2013-01-01

    Background: EMG-to-force estimation based on muscle models for voluntary contraction has many applications in human motion analysis. The so-called Hill model is recognized as a standard model for this practical use. However, it is a phenomenological model whereby muscle activation, force-length and force-velocity properties are considered independently. Perreault reported that Hill modeling errors were large for different firing frequencies, levels of activation and speeds of contraction, which may be due to the lack of coupling between activation and force-velocity properties. In this paper, we discuss EMG-force estimation with a multi-scale physiology-based model which has a link to the underlying crossbridge dynamics. Unlike the Hill model, the proposed method provides dual dynamics of recruitment and calcium activation. Methods: The ankle torque was measured for plantar flexion along with EMG measurements of the medial gastrocnemius (GAS) and soleus (SOL). In addition to Hill representation of the passive elements, three models of the contractile parts were compared. Using common EMG signals during isometric contraction in four able-bodied subjects, torque was estimated by the linear Hill model, the nonlinear Hill model and the multi-scale physiological model that refers to Huxley theory. The comparison was made on a normalized scale against the case of maximum voluntary contraction. Results: The estimation results obtained with the multi-scale model showed the best performance in both fast/short-term and slow/long-term contractions in randomized tests for all four subjects. The RMS errors improved with the nonlinear Hill model compared to the linear Hill model, but it showed limitations in accounting for different speeds of contraction. The average error was 16.9% with the linear Hill model and 9.3% with the modified Hill model. In contrast, the error in the multi-scale model was 6.1%, while maintaining uniform estimation performance in both fast and slow contraction schemes. Conclusions: We introduced a novel approach that allows EMG-force estimation based on a multi-scale physiology model integrating the Hill approach for the passive elements and microscopic cross-bridge representations for the contractile element. The experimental evaluation highlights estimation improvements, especially over a larger range of contraction conditions, through integration of the neural activation frequency property and the force-velocity relationship via cross-bridge dynamics. PMID:24007560

  17. Vanilla technicolor at linear colliders

    NASA Astrophysics Data System (ADS)

    Frandsen, Mads T.; Järvinen, Matti; Sannino, Francesco

    2011-08-01

    We analyze the reach of linear colliders for models of dynamical electroweak symmetry breaking. We show that linear colliders can efficiently test the compositeness scale, identified with the mass of the new spin-one resonances, up to the maximum center-of-mass energy of the colliding leptons. In particular we analyze the Drell-Yan processes involving spin-one intermediate heavy bosons decaying either leptonically or into two standard model gauge bosons. We also analyze light Higgs production in association with a standard model gauge boson, also stemming from an intermediate spin-one heavy vector.

  18. Consensus for linear multi-agent system with intermittent information transmissions using the time-scale theory

    NASA Astrophysics Data System (ADS)

    Taousser, Fatima; Defoort, Michael; Djemai, Mohamed

    2016-01-01

    This paper investigates the consensus problem for linear multi-agent system with fixed communication topology in the presence of intermittent communication using the time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens at a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to intermittent information transmissions. The time-scale theory provides a powerful tool to combine continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
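
    The mixed continuous/discrete picture can be seen in a toy simulation: agents integrate the consensus flow dx/dt = -L·x while communication is on and hold their states while it is off. The graph, window length, and step size below are arbitrary choices:

```python
import numpy as np

L_graph = np.array([[ 1., -1.,  0.],
                    [-1.,  2., -1.],
                    [ 0., -1.,  1.]])          # Laplacian of a 3-agent path graph
x = np.array([1.0, 5.0, -2.0])                 # initial states (average = 4/3)
dt, on = 0.01, True
for step in range(4000):
    if step % 200 == 0:
        on = not on                            # toggle communication window
    if on:
        x = x - dt * (L_graph @ x)             # Euler step of consensus dynamics
print(x)                                       # all components approach 4/3
```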

  19. Reduced linear noise approximation for biochemical reaction networks with time-scale separation: The stochastic tQSSA+

    NASA Astrophysics Data System (ADS)

    Herath, Narmada; Del Vecchio, Domitilla

    2018-03-01

    Biochemical reaction networks often involve reactions that take place on different time scales, giving rise to "slow" and "fast" system variables. This property is widely used in the analysis of systems to obtain dynamical models with reduced dimensions. In this paper, we consider stochastic dynamics of biochemical reaction networks modeled using the Linear Noise Approximation (LNA). Under time-scale separation conditions, we obtain a reduced-order LNA that approximates both the slow and fast variables in the system. We mathematically prove that the first and second moments of this reduced-order model converge to those of the full system as the time-scale separation becomes large. These mathematical results, in particular, provide a rigorous justification to the accuracy of LNA models derived using the stochastic total quasi-steady state approximation (tQSSA). Since, in contrast to the stochastic tQSSA, our reduced-order model also provides approximations for the fast variable stochastic properties, we term our method the "stochastic tQSSA+". Finally, we demonstrate the application of our approach on two biochemical network motifs found in gene-regulatory and signal transduction networks.

  20. A coupling method for a cardiovascular simulation model which includes the Kalman filter.

    PubMed

    Hasegawa, Yuki; Shimayoshi, Takao; Amano, Akira; Matsuda, Tetsuya

    2012-01-01

    Multi-scale models of the cardiovascular system provide new insight that was unavailable with in vivo and in vitro experiments. For the cardiovascular system, multi-scale simulations provide a valuable perspective in analyzing the interaction of three phenomena occurring at different spatial scales: circulatory hemodynamics, ventricular structural dynamics, and myocardial excitation-contraction. In order to simulate these interactions, multi-scale cardiovascular simulation systems couple models that simulate different phenomena. However, coupling methods require a significant amount of computation, since a system of non-linear equations must be solved at each timestep. We therefore propose a coupling method which decreases the amount of computation by using the Kalman filter. In our method, the Kalman filter calculates approximations of the solution to the system of non-linear equations at each timestep. These approximations are then used as initial values for solving the system of non-linear equations. The proposed method decreases the number of iterations required by 94.0% compared to the conventional strong coupling method. When compared with a smoothing-spline predictor, the proposed method required 49.4% fewer iterations.
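    The warm-starting idea can be illustrated on a scalar stand-in problem: solve one non-linear equation per timestep with Newton's method, initialized either from the previous solution or from a simple linear predictor playing the role of the Kalman filter estimate, and compare iteration counts. The residual function and all constants below are hypothetical.

        # Predictor-initialized Newton iterations per timestep (illustrative)
        import numpy as np
        from scipy.optimize import newton

        def residual(p, t):
            # hypothetical scalar stand-in for the coupled non-linear system
            return p**3 + p - (2.0 + np.sin(t))

        sols, cold_iters, warm_iters = [], 0, 0
        for t in np.linspace(0.0, 5.0, 200):
            guess_prev = sols[-1] if sols else 1.0                      # cold start
            guess_pred = 2 * sols[-1] - sols[-2] if len(sols) > 1 else guess_prev
            _, r_cold = newton(residual, guess_prev, args=(t,), full_output=True)
            root, r_warm = newton(residual, guess_pred, args=(t,), full_output=True)
            cold_iters += r_cold.iterations
            warm_iters += r_warm.iterations
            sols.append(root)
        print(f"total iterations: cold start {cold_iters}, predictor start {warm_iters}")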

  1. Madden-Julian Oscillation: Western Pacific and Indian Ocean

    NASA Astrophysics Data System (ADS)

    Fuchs, Z.; Raymond, D. J.

    2016-12-01

    The MJO has been and still remains a "holy grail" of today's atmospheric science research. Why does the MJO propagate eastward? What makes it unstable? What sets the scaling for the MJO, i.e. why does it prefer long wavelengths or planetary wavenumbers 1-3? The MJO has the strongest signal in the Indian Ocean and in the West Pacific, but the average vertical structure is very different in each of those basins. We examine the vertical structure of temperature and moisture, as well as the surface zonal winds, in the FNL analysis and ERAI reanalysis for the two ocean basins. We also examine data from DYNAMO and TOGA COARE in great detail (saturation fraction, temperature, entropy, surface zonal winds, gross moist stability, etc.). The findings from observations and field projects for the two ocean basins are then compared to a linear WISHE model on an equatorial beta plane. Though linear WISHE has long been discounted as a plausible model for the MJO, the version we have developed explains many of the observed features of this phenomenon, in particular the preference for large zonal scale, the eastward propagation, the westward group velocity, and the thermodynamic structure. There is no need to postulate large-scale negative gross moist stability, as destabilization occurs via WISHE at long wavelengths only. This differs from early WISHE models because we take a moisture adjustment time scale of order one day, in contrast to the much shorter time scales assumed in earlier models. Linear modeling cannot capture all of the features of the MJO, so we are in the process of adding nonlinearity.

  2. Linear bubble plume model for hypolimnetic oxygenation: Full-scale validation and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Singleton, V. L.; Gantzer, P.; Little, J. C.

    2007-02-01

    An existing linear bubble plume model was improved, and data collected from a full-scale diffuser installed in Spring Hollow Reservoir, Virginia, were used to validate the model. The depth of maximum plume rise was simulated well for two of the three diffuser tests. Temperature predictions deviated from measured profiles near the maximum plume rise height, but predicted dissolved oxygen profiles compared very well with observations. A sensitivity analysis was performed. The gas flow rate had the greatest effect on predicted plume rise height and induced water flow rate, both of which were directly proportional to gas flow rate. Oxygen transfer within the hypolimnion was independent of all parameters except initial bubble radius and was inversely proportional for radii greater than approximately 1 mm. The results of this work suggest that plume dynamics and oxygen transfer can successfully be predicted for linear bubble plumes using the discrete-bubble approach.

  3. Linear regression in astronomy. II

    NASA Technical Reports Server (NTRS)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are: (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. These can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
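    Class (1) above is straightforward to make concrete: fit an unweighted least-squares line and bootstrap the (x, y) pairs to estimate the uncertainties of slope and intercept. The data below are synthetic; this is a minimal sketch, not the paper's procedures.

        # Unweighted regression line with bootstrap resampling of pairs
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(0, 10, 50)
        y = 2.0 * x + 1.0 + rng.normal(0, 1.5, 50)     # synthetic data

        slope, intercept = np.polyfit(x, y, 1)
        boot = []
        for _ in range(2000):
            i = rng.integers(0, x.size, x.size)        # resample pairs with replacement
            boot.append(np.polyfit(x[i], y[i], 1))
        err_slope, err_intercept = np.std(boot, axis=0)
        print(f"slope {slope:.3f} +/- {err_slope:.3f}, "
              f"intercept {intercept:.3f} +/- {err_intercept:.3f}")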

  4. Linear scaling relationships and volcano plots in homogeneous catalysis – revisiting the Suzuki reaction

    PubMed Central

    Busch, Michael; Wodrich, Matthew D.

    2015-01-01

    Linear free energy scaling relationships and volcano plots are common tools used to identify potential heterogeneous catalysts for myriad applications. Despite the striking simplicity and predictive power of volcano plots, they remain unknown in homogeneous catalysis. Here, we construct volcano plots to analyze a prototypical reaction from homogeneous catalysis, the Suzuki cross-coupling of olefins. Volcano plots succeed both in discriminating amongst different catalysts and reproducing experimentally known trends, which serves as validation of the model for this proof-of-principle example. These findings indicate that the combination of linear scaling relationships and volcano plots could serve as a valuable methodology for identifying homogeneous catalysts possessing a desired activity through a priori computational screening. PMID:28757966

  5. A Linear Electromagnetic Piston Pump

    NASA Astrophysics Data System (ADS)

    Hogan, Paul H.

    Advancements in mobile hydraulics for human-scale applications have increased demand for a compact hydraulic power supply. Conventional designs couple a rotating electric motor to a hydraulic pump, which increases the package volume and requires several energy conversions. This thesis investigates the use of a free piston as the moving element in a linear motor to eliminate multiple energy conversions and decrease the overall package volume. A coupled model used a quasi-static magnetic equivalent circuit to calculate the motor inductance and the electromagnetic force acting on the piston. The force was an input to a time-domain model to evaluate the mechanical and pressure dynamics. The magnetic circuit model was validated with finite element analysis and an experimental prototype linear motor. The coupled model was optimized using a multi-objective genetic algorithm to explore the parameter space and maximize power density and efficiency. An experimental prototype linear pump coupled pistons to an off-the-shelf linear motor to validate the mechanical and pressure dynamics models. The magnetic circuit force calculation agreed within 3% of finite element analysis, and within 8% of experimental data from the unoptimized prototype linear motor. The optimized motor geometry also had good agreement with FEA; at zero piston displacement, the magnetic circuit calculates the optimized motor force within 10% of FEA in less than 1/1000th of the computational time, which makes it well suited to genetic optimization algorithms. The mechanical model agrees very well with the experimental piston pump position data when tuned for additional unmodeled mechanical friction. The optimized results suggest that an improvement of 400% over state-of-the-art power density is attainable with up to 85% net efficiency. This demonstrates that a linear electromagnetic piston pump has the potential to serve as a more compact and efficient supply of fluid power at the human scale.

  6. Experimental analysis of bidirectional reflectance distribution function cross section conversion term in direction cosine space.

    PubMed

    Butler, Samuel D; Nauyoks, Stephen E; Marciniak, Michael A

    2015-06-01

    Of the many classes of bidirectional reflectance distribution function (BRDF) models, two popular classes of models are the microfacet model and the linear systems diffraction model. The microfacet model has the benefit of speed and simplicity, as it uses geometric optics approximations, while linear systems theory uses a diffraction approach to compute the BRDF, at the expense of greater computational complexity. In this Letter, nongrazing BRDF measurements of rough and polished surface-reflecting materials at multiple incident angles are scaled by the microfacet cross section conversion term, but in the linear systems direction cosine space, resulting in great alignment of BRDF data at various incident angles in this space. This results in a predictive BRDF model for surface-reflecting materials at nongrazing angles, while avoiding some of the computational complexities in the linear systems diffraction model.

  7. Preface: Introductory Remarks: Linear Scaling Methods

    NASA Astrophysics Data System (ADS)

    Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.

    2008-07-01

    It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and the related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is in O(N), or linear scaling, DFT, in which the computer effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3] but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3-6 September 2007. A noteworthy feature of the workshop is that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and large scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is non-linear-scaling) methods; this highlights the important question of crossover—that is, at what size of system does it become more efficient to use a linear-scaling method? As well as fundamental algorithmic questions, this brings up implementation questions relating to parallelization (particularly with multi-core processors starting to dominate the market) and inherent scaling and basis sets (in both normal and linear scaling codes). For now, the answer seems to lie between 100-1,000 atoms, though this depends on the type of simulation used among other factors. Basis sets are still a problematic question in the area of electronic structure calculations. The linear scaling community has largely split into two camps: those using relatively small basis sets based on local atomic-like functions (where systematic convergence to the full basis set limit is hard to achieve); and those that use necessarily larger basis sets which allow convergence systematically and therefore are the localised equivalent of plane waves. Related to basis sets is the study of Wannier functions, on which some linear scaling methods are based and which give a good point of contact with traditional techniques; they are particularly interesting for modelling unoccupied states with linear scaling methods. There are, of course, as many approaches to linear scaling solution for the density matrix as there are groups in the area, though there are various broad areas: McWeeny-based methods, fragment-based methods, recursion methods, and combinations of these. While many ideas have been in development for several years, there are still improvements emerging, as shown by the rich variety of the talks below. 
Applications using O(N) DFT methods are now starting to emerge, though they are still clearly not trivial. Once systems to be simulated cross the 10,000 atom barrier, only linear scaling methods can be applied, even with the most efficient standard techniques. One of the most challenging problems remaining, now that ab initio methods can be applied to large systems, is the long timescale problem. Although much of the work presented was concerned with improving the performance of the codes, and applying them to scientifically important problems, there was another important theme: extending functionality. The search for greater accuracy has given an implementation of density functionals designed to model van der Waals interactions accurately as well as local correlation, TDDFT, QMC and GW methods which, while not explicitly O(N), take advantage of localisation. All speakers at the workshop were invited to contribute to this issue, but not all were able to do so. Hence it is useful to give a complete list of the talks presented, with the names of the sessions; however, many talks fell within more than one area. This is an exciting time for linear scaling methods, which are already starting to contribute significantly to important scientific problems.

Applications to nanostructures and biomolecules
- A DFT study on the structural stability of Ge 3D nanostructures on Si(001) using CONQUEST (Tsuyoshi Miyazaki, D R Bowler, M J Gillan, T Otsuka and T Ohno)
- Large scale electronic structure calculation theory and several applications (Takeo Fujiwara and Takeo Hoshi)
- ONETEP: Linear-scaling DFT with plane waves (Chris-Kriton Skylaris, Peter D Haynes, Arash A Mostofi, Mike C Payne)
- Maximally-localised Wannier functions as building blocks for large-scale electronic structure calculations (Arash A Mostofi and Nicola Marzari)
- A linear scaling three dimensional fragment method for ab initio calculations (Lin-Wang Wang, Zhengji Zhao, Juan Meza)
- Peta-scalable reactive molecular dynamics simulation of mechanochemical processes (Aiichiro Nakano, Rajiv K. Kalia, Ken-ichi Nomura, Fuyuki Shimojo and Priya Vashishta)
- Recent developments and applications of the real-space multigrid (RMG) method (Jerzy Bernholc, M Hodak, W Lu and F Ribeiro)

Energy minimisation functionals and algorithms
- CONQUEST: A linear scaling DFT code (David R Bowler, Tsuyoshi Miyazaki, Antonio Torralba, Veronika Brazdova, Milica Todorovic, Takao Otsuka and Mike Gillan)
- Kernel optimisation and the physical significance of optimised local orbitals in the ONETEP code (Peter Haynes, Chris-Kriton Skylaris, Arash Mostofi and Mike Payne)
- A miscellaneous overview of SIESTA algorithms (Jose M Soler)
- Wavelets as a basis set for electronic structure calculations and electrostatic problems (Stefan Goedecker)
- Wavelets as a basis set for linear scaling electronic structure calculations (Mark Rayson)
- O(N) Krylov subspace method for large-scale ab initio electronic structure calculations (Taisuke Ozaki)
- Linear scaling calculations with the divide-and-conquer approach and with non-orthogonal localized orbitals (Weitao Yang)
- Toward efficient wavefunction based linear scaling energy minimization (Valery Weber)
- Accurate O(N) first-principles DFT calculations using finite differences and confined orbitals (Jean-Luc Fattebert)

Linear-scaling methods in dynamics simulations or beyond DFT and ground state properties
- An O(N) time-domain algorithm for TDDFT (Guan Hua Chen)
- Local correlation theory and electronic delocalization (Joseph Subotnik)
- Ab initio molecular dynamics with linear scaling: foundations and applications (Eiji Tsuchida)
- Towards a linear scaling Car-Parrinello-like approach to Born-Oppenheimer molecular dynamics (Thomas Kühne, Michele Ceriotti, Matthias Krack and Michele Parrinello)
- Partial linear scaling for quantum Monte Carlo calculations on condensed matter (Mike Gillan)
- Exact embedding of local defects in crystals using maximally localized Wannier functions (Eric Cancès)
- Faster GW calculations in larger model structures using ultralocalized nonorthogonal Wannier functions (Paolo Umari)

Other approaches for linear-scaling, including methods for metals
- Partition-of-unity finite element method for large, accurate electronic-structure calculations of metals (John E Pask and Natarajan Sukumar)
- Semiclassical approach to density functional theory (Kieron Burke)
- Ab initio transport calculations in defected carbon nanotubes using O(N) techniques (Blanca Biel, F J Garcia-Vidal, A Rubio and F Flores)
- Large-scale calculations with the tight-binding (screened) KKR method (Rudolf Zeller)

Acknowledgments: We gratefully acknowledge funding for the workshop from the UK CCP9 network, CECAM and the ESF through the PsiK network. DRB, PDH and CKS are funded by the Royal Society.

References: [1] Car R and Parrinello M 1985 Phys. Rev. Lett. 55 2471. [2] Kühne T D, Krack M, Mohamed F R and Parrinello M 2007 Phys. Rev. Lett. 98 066401. [3] Goedecker S 1999 Rev. Mod. Phys. 71 1085.

  8. Hybrid Reynolds-Averaged/Large Eddy Simulation of a Cavity Flameholder; Assessment of Modeling Sensitivities

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.

    2015-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, there was no predictive improvement noted over the results obtained from the explicit Reynolds stress model. Fortunately, the numerical error assessment at most of the axial stations used to compare with measurements clearly indicated that the scale-resolving simulations were improving (i.e. approaching the measured values) as the grid was refined. Hence, unlike a Reynolds-averaged simulation, the hybrid approach provides a mechanism to the end-user for reducing model-form errors.

  9. Power spectrum estimation from peculiar velocity catalogues

    NASA Astrophysics Data System (ADS)

    Macaulay, E.; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.

    2012-09-01

    The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large-scale excess in the matter power spectrum and can appear to be in some tension with the Λ cold dark matter (ΛCDM) model. We use a composite catalogue of 4537 peculiar velocity measurements with a characteristic depth of 33 h-1 Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results by Macaulay et al., studying minimum variance moments of the velocity field, as calculated by Feldman, Watkins & Hudson. We find good agreement with the ΛCDM model on scales of k > 0.01 h Mpc-1. We find an excess of power on scales of k < 0.01 h Mpc-1 with a 1σ uncertainty which includes the ΛCDM model. We find that the uncertainty in excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and non-linear clustering in simulated peculiar velocity catalogues and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.

  10. Linear velocity fields in non-Gaussian models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields in two types of physically motivated non-Gaussian models are examined for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  11. A simplified method for power-law modelling of metabolic pathways from time-course data and steady-state flux profiles.

    PubMed

    Kitayama, Tomoya; Kinoshita, Ayako; Sugimoto, Masahiro; Nakayama, Yoichi; Tomita, Masaru

    2006-07-17

    In order to improve understanding of metabolic systems there have been attempts to construct S-system models from time courses. Conventionally, non-linear curve-fitting algorithms have been used for modelling, because of the non-linear properties of parameter estimation from time series. However, the huge iterative calculations required have hindered the development of large-scale metabolic pathway models. To solve this problem we propose a novel method involving power-law modelling of metabolic pathways from the Jacobian of the targeted system and the steady-state flux profiles by linearization of S-systems. The results of two case studies modelling a straight and a branched pathway, respectively, showed that our method reduced the number of unknown parameters needing to be estimated. The time-courses simulated by conventional kinetic models and those described by our method behaved similarly under a wide range of perturbations of metabolite concentrations. The proposed method reduces calculation complexity and facilitates the construction of large-scale S-system models of metabolic pathways, realizing a practical application of reverse engineering of dynamic simulation models from the Jacobian of the targeted system and steady-state flux profiles.
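    For readers unfamiliar with the S-system form, the sketch below integrates a toy two-variable S-system, dX_i/dt = alpha_i * prod_j X_j^g_ij - beta_i * prod_j X_j^h_ij. All rate constants and kinetic orders are invented for illustration; this is not the paper's model or its estimation method.

        # Toy S-system integration (illustrative parameters)
        import numpy as np
        from scipy.integrate import solve_ivp

        alpha, beta = np.array([2.0, 1.0]), np.array([1.0, 1.5])
        g = np.array([[0.0, -0.5], [0.5, 0.0]])   # production kinetic orders
        h = np.array([[0.5, 0.0], [0.0, 0.5]])    # degradation kinetic orders

        def ssystem(t, x):
            production = alpha * np.prod(x**g, axis=1)
            degradation = beta * np.prod(x**h, axis=1)
            return production - degradation

        sol = solve_ivp(ssystem, (0.0, 20.0), [0.5, 0.5])
        print(sol.y[:, -1])   # concentrations approach a steady state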

  12. The Linear Bias in the Zeldovich Approximation and a Relation between the Number Density and the Linear Bias of Dark Halos

    NASA Astrophysics Data System (ADS)

    Fan, Zuhui

    2000-01-01

    The linear bias of the dark halos from a model under the Zeldovich approximation is derived and compared with the fitting formula of simulation results. While qualitatively similar to the Press-Schechter formula, this model gives a better description for the linear bias around the turnaround point. This advantage, however, may be compromised by the large uncertainty of the actual behavior of the linear bias near the turnaround point. For a broad class of structure formation models in the cold dark matter framework, a general relation exists between the number density and the linear bias of dark halos. This relation can be readily tested by numerical simulations. Thus, instead of laboriously checking these models one by one, numerical simulation studies can falsify a whole category of models. The general validity of this relation is important in identifying key physical processes responsible for the large-scale structure formation in the universe.
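    For context, the Press-Schechter-style linear bias that serves as the comparison point above has the well-known closed form b(nu) = 1 + (nu^2 - 1)/delta_c (Mo & White); the peak heights below are illustrative values only.

        # Press-Schechter linear halo bias as a function of peak height
        import numpy as np

        delta_c = 1.686                       # critical linear overdensity
        nu = np.linspace(0.5, 3.0, 6)         # peak height nu = delta_c / sigma(M)
        bias = 1 + (nu**2 - 1) / delta_c
        print(bias)                           # rarer (high-nu) halos are more biased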

  13. HOLLOTRON switch for megawatt lightweight space inverters

    NASA Technical Reports Server (NTRS)

    Poeschel, R. L.; Goebel, D. M.; Schumacher, R. W.

    1991-01-01

    The feasibility of satisfying the switching requirements for a megawatt ultralight inverter system using HOLLOTRON switch technology was determined. The existing experimental switch hardware was modified to investigate a coaxial HOLLOTRON switch configuration and the results were compared with those obtained for a modified linear HOLLOTRON configuration. It was concluded that scaling the HOLLOTRON switch to the current and voltage specifications required for a megawatt converter system is indeed feasible using a modified linear configuration. The experimental HOLLOTRON switch operated at parameters comparable to the scaled coaxial HOLLOTRON. However, the linear HOLLOTRON data verified the capability for meeting all the design objectives simultaneously including current density (greater than 2 A/sq cm), voltage (5 kV), switching frequency (20 kHz), switching time (300 ns), and forward voltage drop (less than or equal to 20 V). Scaling relations were determined and a preliminary design was completed for an engineering model linear HOLLOTRON switch to meet the megawatt converter system specifications.

  14. Confined dynamics of grafted polymer chains in solutions of linear polymer

    DOE PAGES

    Poling-Skutvik, Ryan D.; Olafson, Katy N.; Narayanan, Suresh; ...

    2017-09-11

    Here, we measure the dynamics of high molecular weight polystyrene grafted to silica nanoparticles dispersed in semidilute solutions of linear polymer. Structurally, the linear free chains do not penetrate the grafted corona but increase the osmotic pressure of the solution, collapsing the grafted polymer and leading to eventual aggregation of the grafted particles at high matrix concentrations. Dynamically, the relaxations of the grafted polymer are controlled by the solvent viscosity according to the Zimm model on short time scales. On longer time scales, the grafted chains are confined by neighboring grafted chains, preventing full relaxation over the experimental time scale. Adding free linear polymer to the solution does not affect the initial Zimm relaxations of the grafted polymer but does increase the confinement of the grafted chains. Finally, our results elucidate the physics underlying the slow relaxations of grafted polymer.

  15. Steady induction effects in geomagnetism. Part 1B: Geomagnetic estimation of steady surficial core motions: A non-linear inverse problem

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1993-01-01

    The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.

  16. Downscaling modelling system for multi-scale air quality forecasting

    NASA Astrophysics Data System (ADS)

    Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.

    2010-09-01

    Urban modelling for real meteorological situations, in general, considers only a small part of the urban area in a micro-meteorological model, and urban heterogeneities outside the modelling domain affect micro-scale processes. Therefore, it is important to build a chain of models of different scales, with nesting of higher resolution models into larger scale, lower resolution models. Usually, the up-scaled city- or meso-scale models consider parameterisations of urban effects or statistical descriptions of the urban morphology, whereas the micro-scale (street canyon) models are obstacle-resolved and consider a detailed geometry of the buildings and the urban canopy. The developed system consists of meso-, urban- and street-scale models. First, it is the Numerical Weather Prediction (HIgh Resolution Limited Area Model) model combined with the Atmospheric Chemistry Transport (Comprehensive Air quality Model with extensions) model. Several levels of urban parameterisation are considered, chosen depending on the selected scales and resolutions. For the regional scale, the urban parameterisation is based on the roughness and flux corrections approach; for the urban scale, on the building effects parameterisation. Modern methods of computational fluid dynamics allow solving environmental problems connected with atmospheric transport of pollutants within the urban canopy in the presence of penetrable (vegetation) and impenetrable (buildings) obstacles. For local- and micro-scale nesting, the Micro-scale Model for Urban Environment is applied. This is a comprehensive obstacle-resolved urban wind-flow and dispersion model based on the Reynolds-averaged Navier-Stokes approach and several turbulence closures, i.e. the k-ε linear eddy-viscosity model, the k-ε non-linear eddy-viscosity model and the Reynolds stress model. Boundary and initial conditions for the micro-scale model are taken from the up-scaled models with corresponding mass-conserving interpolation. For the boundaries, a Dirichlet-type condition is chosen to provide values based on interpolation from the coarse to the fine grid. When the roughness approach is changed to the obstacle-resolved one in the nested model, the interpolation procedure will increase the computational time (due to additional iterations) for meteorological/chemical fields inside the urban sub-layer. In such situations, as a possible alternative, the perturbation approach can be applied. Here, the effects on the main meteorological variables and chemical species are considered as a sum of two components: background (large-scale) values, described by the coarse-resolution model, and perturbation (micro-scale) features, obtained from the nested fine-resolution model.
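    The interpolation step that supplies Dirichlet-type boundary values to a nested model can be sketched compactly: interpolate a coarse-grid field onto points along the fine-grid boundary. The grids and the analytic field below are assumptions for illustration, not the actual model chain.

        # Coarse-to-fine boundary interpolation for one-way nesting (illustrative)
        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        xc = yc = np.linspace(0.0, 1.0, 11)                     # coarse grid
        field = np.add.outer(np.sin(2 * np.pi * xc), np.cos(2 * np.pi * yc))
        interp = RegularGridInterpolator((xc, yc), field)

        xf = np.linspace(0.3, 0.7, 41)                          # fine-domain edge at y = 0.3
        boundary = interp(np.column_stack([xf, np.full_like(xf, 0.3)]))
        print(boundary[:4])   # Dirichlet values handed to the fine-grid model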

  17. Modelling climate change responses in tropical forests: similar productivity estimates across five models, but different mechanisms and responses

    NASA Astrophysics Data System (ADS)

    Rowland, L.; Harper, A.; Christoffersen, B. O.; Galbraith, D. R.; Imbuzeiro, H. M. A.; Powell, T. L.; Doughty, C.; Levine, N. M.; Malhi, Y.; Saleska, S. R.; Moorcroft, P. R.; Meir, P.; Williams, M.

    2014-11-01

    Accurately predicting the response of Amazonia to climate change is important for predicting changes across the globe. However, changes in multiple climatic factors simultaneously may result in complex non-linear responses, which are difficult to predict using vegetation models. Using leaf- and canopy-scale observations, this study evaluated the capability of five vegetation models (CLM3.5, ED2, JULES, SiB3, and SPA) to simulate the responses of canopy- and leaf-scale productivity to changes in temperature and drought in an Amazonian forest. The models did not agree as to whether gross primary productivity (GPP) was more sensitive to changes in temperature or precipitation. There was greater model-data consistency in the response of net ecosystem exchange to changes in temperature than in the response to temperature of leaf area index (LAI), net photosynthesis (An) and stomatal conductance (gs). Modelled canopy-scale fluxes are calculated by scaling leaf-scale fluxes to LAI, and therefore in this study similarities in modelled ecosystem-scale responses to drought and temperature were the result of inconsistent leaf-scale and LAI responses among models. Across the models, the response of An to temperature was more closely linked to stomatal behaviour than to biochemical processes. Consequently, all the models predicted that GPP would be higher if tropical forests were 5 °C colder, closer to the model optima for gs. There was, however, no model consistency in the response of the An-gs relationship when temperature changes and drought were introduced simultaneously. The inconsistencies in the An-gs relationships amongst models were caused by non-linear model responses induced by simultaneous drought and temperature change. To improve the reliability of simulations of the response of Amazonian rainforest to climate change, the mechanistic underpinnings of vegetation models need more complete validation to improve accuracy and consistency in the scaling of processes from leaf to canopy.

  18. Modelling non-linear effects of dark energy

    NASA Astrophysics Data System (ADS)

    Bose, Benjamin; Baldi, Marco; Pourtsidou, Alkistis

    2018-04-01

    We investigate the capabilities of perturbation theory in capturing non-linear effects of dark energy. We test constant and evolving w models, as well as models involving momentum exchange between dark energy and dark matter. Specifically, we compare perturbative predictions at the 1-loop level against N-body results for four non-standard equations of state as well as varying degrees of momentum exchange between dark energy and dark matter. The interaction is modelled phenomenologically using a time-dependent drag term in the Euler equation. We make comparisons at the level of the matter power spectrum and the redshift-space monopole and quadrupole. The multipoles are modelled using the Taruya, Nishimichi and Saito (TNS) redshift-space spectrum. We find perturbation theory does very well in capturing non-linear effects coming from the dark sector interaction. We isolate and quantify the 1-loop contribution coming from the interaction and from the non-standard equation of state. We find the interaction parameter ξ amplifies scale-dependent signatures in the range of scales considered. Non-standard equations of state also give scale-dependent signatures within this same regime. In redshift space the match with N-body is improved at smaller scales by the addition of the TNS free parameter σv. To quantify the importance of modelling the interaction, we create mock data sets for varying values of ξ using perturbation theory. These data are given errors typical of Stage IV surveys. We then perform a likelihood analysis on these sets using the first two multipoles and a ξ=0 modelling that ignores the interaction. We find the fiducial growth parameter f is generally recovered even for very large values of ξ, both at z=0.5 and z=1. The ξ=0 modelling is most biased in its estimation of f for the phantom w = -1.1 case.

  19. Emerging universe from scale invariance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Campo, Sergio; Herrera, Ramón; Guendelman, Eduardo I.

    2010-06-01

    We consider a scale invariant model which includes an R^2 term in the action and show that a stable "emerging universe" scenario is possible. The model belongs to the general class of theories where an integration measure independent of the metric is introduced. To implement scale invariance (S.I.), a dilaton field is introduced. The integration of the equations of motion associated with the new measure gives rise to the spontaneous symmetry breaking (S.S.B.) of S.I. After S.S.B. of S.I. in the model with the R^2 term (and with the first-order formalism applied), it is found that a non-trivial potential for the dilaton is generated. The dynamics of the scalar field become non-linear, and these non-linearities are instrumental in the stability of some of the emerging universe solutions, which exist for a parameter range of the theory.

  1. Parameterizing atmosphere-land surface exchange for climate models with satellite data: A case study for the Southern Great Plains CART site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, W.

    High-resolution satellite data provide detailed, quantitative descriptions of land surface characteristics over large areas, so that objective scale linkage becomes feasible. With the aid of satellite data, Sellers et al. and Wood and Lakshmi examined the linearity of processes scaled up from 30 m to 15 km. If the phenomenon is scale invariant, then the aggregated value of a function or flux is equivalent to the function computed from aggregated values of controlling variables. The linear relation may be realistic for limited land areas having no large surface contrasts to cause significant horizontal exchange. However, for areas with sharp surface contrasts, horizontal exchange and different dynamics in the atmospheric boundary layer may induce nonlinear interactions, such as at interfaces of land-water, forest-farmland, and irrigated crops-desert steppe. The linear approach, however, represents the simplest scenario and is useful for developing an effective scheme for incorporating subgrid land surface processes into large-scale models. Our studies focus on coupling satellite data and ground measurements with a satellite-data-driven land surface model to parameterize surface fluxes for large-scale climate models. In this case study, we used surface spectral reflectance data from satellite remote sensing to characterize spatial and temporal changes in vegetation and associated surface parameters in an area of about 350 × 400 km covering the Southern Great Plains (SGP) Cloud and Radiation Testbed (CART) site of the US Department of Energy's Atmospheric Radiation Measurement (ARM) Program.

  2. On the climate impacts from the volcanic and solar forcings

    NASA Astrophysics Data System (ADS)

    Varotsos, Costas A.; Lovejoy, Shaun

    2016-04-01

    Observed and modelled estimates show that the main forcings on the atmosphere are of volcanic and solar origin, which, however, act in opposite ways: the former can be very strong but decreases at short time scales, whereas the latter increases with time scale. The observed fluctuations in temperature likewise increase at long scales (e.g. centennial and millennial), as the solar forcings do. The common practice is to reduce forcings to radiative equivalents, assuming that their combination is linear. In order to clarify the linearity assumption and determine its range of validity, we systematically compare the statistical properties of solar-only, volcanic-only and combined solar and volcanic forcings over the range of time scales from one to 1000 years. Additionally, we investigate plausible reasons for the discrepancies observed between the measured and modelled anomalies of tropospheric temperatures in the tropics. For this purpose, we analyse tropospheric temperature anomalies for both the measured and modelled time series. The results show that the measured temperature fluctuations exhibit white-noise behaviour, while the modelled ones exhibit long-range power-law correlations. We suggest that the persistent signal should be removed from the modelled values in order to achieve better agreement with observations. Keywords: Scaling, Nonlinear variability, Climate system, Solar radiation
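    The fluctuation-versus-scale behaviour mentioned above can be illustrated with a toy Haar fluctuation analysis: for a white-noise series, the mean absolute difference between the means of adjacent half-windows decays roughly as the window size to the power -1/2, whereas a persistent (long-range correlated) series decays more slowly. The series and window sizes below are synthetic assumptions.

        # Haar fluctuations of a white-noise series at several time scales
        import numpy as np

        rng = np.random.default_rng(3)
        x = rng.normal(size=2**14)
        for lag in (2, 8, 32, 128):
            half = lag // 2
            segments = x[: (x.size // lag) * lag].reshape(-1, lag)
            haar = np.abs(segments[:, half:].mean(axis=1) - segments[:, :half].mean(axis=1))
            print(lag, haar.mean())   # falls off roughly as lag**(-1/2)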

  3. Scaling effects in a non-linear electromagnetic energy harvester for wearable sensors

    NASA Astrophysics Data System (ADS)

    Geisler, M.; Boisseau, S.; Perez, M.; Ait-Ali, I.; Perraud, S.

    2016-11-01

    In the field of inertial energy harvesters targeting human mechanical energy, the ergonomics of the solutions impose finding the best compromise between size reduction and electrical performance. In this paper, we study the properties of a non-linear electromagnetic generator at different scales, by performing simulations based on an experimentally validated model and real human acceleration recordings. The results show that the output power of the structure is roughly proportional to its scaling factor raised to the power of five, which indicates that this system is more relevant at lengths over a few centimetres.
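    The quoted fifth-power law is easy to spot-check: if output power scales as the fifth power of the linear dimension, halving the size leaves only about 3% of the power, which is why centimetre-scale devices are favoured.

        # Relative output power under the stated P ~ s**5 scaling
        for s in (1.0, 0.75, 0.5):
            print(f"scale factor {s:.2f}: relative power {s**5:.3f}")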

  4. Minimal model for a hydrodynamic fingering instability in microroller suspensions

    NASA Astrophysics Data System (ADS)

    Delmotte, Blaise; Donev, Aleksandar; Driscoll, Michelle; Chaikin, Paul

    2017-11-01

    We derive a minimal continuum model to investigate the hydrodynamic mechanism behind the fingering instability recently discovered in a suspension of microrollers near a floor [M. Driscoll et al., Nat. Phys. 13, 375 (2017), 10.1038/nphys3970]. Our model, consisting of two continuous lines of rotlets, exhibits a linear instability driven only by hydrodynamic interactions and reproduces the length-scale selection observed in large-scale particle simulations and in experiments. By adjusting only one parameter, the distance between the two lines, our dispersion relation exhibits quantitative agreement with the simulations and qualitative agreement with experimental measurements. Our linear stability analysis indicates that this instability is caused by the combination of the advective and transverse flows generated by the microrollers near a no-slip surface. Our simple model offers an interesting formalism to characterize other hydrodynamic instabilities that have not been well understood, such as size scale selection in suspensions of particles sedimenting adjacent to a wall, or the recently observed formations of traveling phonons in systems of confined driven particles.

  5. Dual linear structured support vector machine tracking method via scale correlation filter

    NASA Astrophysics Data System (ADS)

    Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen

    2018-01-01

    Adaptive tracking-by-detection methods based on structured support vector machine (SVM) performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy of object scale estimation, which limits the overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker comprised of a DLSSVM model and a scale correlation filter obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.

  6. A General Accelerated Degradation Model Based on the Wiener Process.

    PubMed

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-12-06

    Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.
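    A minimal simulation of such a process (assumed parameters, not the paper's estimator) is a drifted Wiener path under a time-scale transformation Lambda(t) = t^b, i.e. X(t) = mu*Lambda(t) + sigma*B(Lambda(t)), which reduces to a linear degradation path for b = 1 and a nonlinear one otherwise.

        # Wiener degradation path with a power-law time transformation
        import numpy as np

        rng = np.random.default_rng(1)
        mu, sigma, b = 0.5, 0.2, 1.3              # drift, diffusion, nonlinearity
        t = np.linspace(0.0, 10.0, 501)
        lam = t**b                                # transformed time Lambda(t)
        dW = rng.normal(0, 1, t.size - 1) * np.sqrt(np.diff(lam))
        path = np.concatenate([[0.0], np.cumsum(mu * np.diff(lam) + sigma * dW)])
        print(path[-1], mu * lam[-1])             # sample endpoint vs. expected value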

  7. The halo model in a massive neutrino cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Massara, Elena; Villaescusa-Navarro, Francisco; Viel, Matteo, E-mail: emassara@sissa.it, E-mail: villaescusa@oats.inaf.it, E-mail: viel@oats.inaf.it

    2014-12-01

    We provide a quantitative analysis of the halo model in the context of massive neutrino cosmologies. We discuss all the ingredients necessary to model the non-linear matter and cold dark matter power spectra and compare with the results of N-body simulations that incorporate massive neutrinos. Our neutrino halo model is able to capture the non-linear behavior of matter clustering with a ∼20% accuracy up to very non-linear scales of k = 10 h/Mpc (which would be affected by baryon physics). The largest discrepancies arise in the range k = 0.5 – 1 h/Mpc, where the 1-halo and 2-halo terms are comparable, and are also present in a massless neutrino cosmology. However, at scales k < 0.2 h/Mpc our neutrino halo model agrees with the results of N-body simulations at the level of 8% for total neutrino masses of < 0.3 eV. We also model the neutrino non-linear density field as a sum of a linear and a clustered component and predict the neutrino power spectrum and the cold dark matter-neutrino cross-power spectrum up to k = 1 h/Mpc with ∼30% accuracy. For masses below 0.15 eV the neutrino halo model captures the neutrino-induced suppression, cast in terms of matter power ratios between massive and massless scenarios, with a 2% agreement with the results of N-body/neutrino simulations. Finally, we provide a simple application of the halo model: the computation of the clustering of galaxies, in massless and massive neutrino cosmologies, using a simple Halo Occupation Distribution scheme and our halo model extension.

  8. Scale Interactions in the Tropics from a Simple Multi-Cloud Model

    NASA Astrophysics Data System (ADS)

    Niu, X.; Biello, J. A.

    2017-12-01

    Our lack of a complete understanding of the interaction between moisture convection and equatorial waves remains an impediment in the numerical simulation of large-scale organization, such as the Madden-Julian Oscillation (MJO). The aim of this project is to understand interactions across spatial scales in the tropics within a simplified framework for scale interactions, while using a simplified description of the basic features of moist convection. Using multiple asymptotic scales, Biello and Majda [1] derived a multi-scale model of moist tropical dynamics (the IMMD), which separates three regimes: the planetary scale climatology, the synoptic scale waves, and the planetary scale anomalies. The scales and strength of the observed MJO would place it in the regime of planetary scale anomalies, which are themselves forced by non-linear upscale fluxes from the synoptic scale waves. In order to close this model and determine whether it provides a self-consistent theory of the MJO, a model for diabatic heating due to moist convection must be implemented along with the IMMD. The multi-cloud parameterization is a model proposed by Khouider and Majda [2] to describe the three basic cloud types (congestus, deep and stratiform) that are most responsible for tropical diabatic heating. We implement a simplified version of the multi-cloud model that is based on results derived from large eddy simulations of convection [3]. We present this simplified multi-cloud model and show results of numerical experiments beginning with a variety of convective forcing states. Preliminary results on upscale fluxes, from synoptic scales to planetary scale anomalies, will be presented. [1] Biello J A, Majda A J. Intraseasonal multi-scale moist dynamics of the tropical atmosphere. Communications in Mathematical Sciences, 2010, 8(2): 519-540. [2] Khouider B, Majda A J. A simple multicloud parameterization for convectively coupled tropical waves. Part I: Linear analysis. Journal of the Atmospheric Sciences, 2006, 63(4): 1308-1323. [3] Dorrestijn J, Crommelin D T, Biello J A, et al. A data-driven multi-cloud model for stochastic parametrization of deep convection. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 2013, 371(1991): 20120374.

  9. Long-term forecasting of internet backbone traffic.

    PubMed

    Papagiannaki, Konstantina; Taft, Nina; Zhang, Zhi-Li; Diot, Christophe

    2005-09-01

    We introduce a methodology to predict when and where link additions/upgrades have to take place in an Internet protocol (IP) backbone network. Using simple network management protocol (SNMP) statistics, collected continuously since 1999, we compute aggregate demand between any two adjacent points of presence (PoPs) and look at its evolution at time scales larger than 1 h. We show that IP backbone traffic exhibits visible long term trends, strong periodicities, and variability at multiple time scales. Our methodology relies on the wavelet multiresolution analysis (MRA) and linear time series models. Using wavelet MRA, we smooth the collected measurements until we identify the overall long-term trend. The fluctuations around the obtained trend are further analyzed at multiple time scales. We show that the largest amount of variability in the original signal is due to its fluctuations at the 12-h time scale. We model inter-PoP aggregate demand as a multiple linear regression model, consisting of the two identified components. We show that this model accounts for 98% of the total energy in the original signal, while explaining 90% of its variance. Weekly approximations of those components can be accurately modeled with low-order autoregressive integrated moving average (ARIMA) models. We show that forecasting the long term trend and the fluctuations of the traffic at the 12-h time scale yields accurate estimates for at least 6 months in the future.
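    The two-component structure can be sketched as a plain multiple linear regression: an intercept, a long-term linear trend and a 12-hour sinusoid fitted to synthetic hourly demand, then extrapolated. The data and constants below are assumptions; the paper's pipeline uses wavelet MRA and ARIMA models rather than this toy fit.

        # Trend + 12-hour periodic component fitted by ordinary least squares
        import numpy as np

        hours = np.arange(24 * 7 * 8)                          # eight weeks, hourly
        rng = np.random.default_rng(2)
        demand = 100 + 0.05 * hours + 10 * np.sin(2 * np.pi * hours / 12)
        demand = demand + rng.normal(0, 2, hours.size)         # synthetic measurements

        def design(ts):
            # columns: intercept, linear trend, 12-hour sine and cosine
            return np.column_stack([np.ones(ts.size), ts,
                                    np.sin(2 * np.pi * ts / 12),
                                    np.cos(2 * np.pi * ts / 12)])

        coef, *_ = np.linalg.lstsq(design(hours), demand, rcond=None)
        future = np.arange(hours[-1] + 1, hours[-1] + 1 + 24 * 7)
        print(design(future) @ coef)                           # one-week-ahead forecast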

  11. The non-linear power spectrum of the Lyman alpha forest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arinyo-i-Prats, Andreu; Miralda-Escudé, Jordi; Viel, Matteo

    2015-12-01

    The Lyman alpha forest power spectrum has been measured on large scales by the BOSS survey in SDSS-III at z ∼ 2.3, has been shown to agree well with linear theory predictions, and has provided the first measurement of Baryon Acoustic Oscillations at this redshift. However, the power at small scales, affected by non-linearities, has not been well examined so far. We present results from a variety of hydrodynamic simulations to predict the redshift space non-linear power spectrum of the Lyα transmission for several models, testing the dependence on resolution and box size. A new fitting formula is introduced to facilitate the comparison of our simulation results with observations and other simulations. The non-linear power spectrum has a generic shape determined by a transition scale from linear to non-linear anisotropy, and a Jeans scale below which the power drops rapidly. In addition, we predict the two linear bias factors of the Lyα forest and provide a better physical interpretation of their values and redshift evolution. The dependence of these bias factors and the non-linear power on the amplitude and slope of the primordial fluctuations power spectrum, the temperature-density relation of the intergalactic medium, and the mean Lyα transmission, as well as the redshift evolution, is investigated and discussed in detail. A preliminary comparison to the observations shows that the predicted redshift distortion parameter is in good agreement with the recent determination of Blomqvist et al., but the density bias factor is lower than observed. We make all our results publicly available in the form of tables of the non-linear power spectrum that is directly obtained from all our simulations, and parameters of our fitting formula.
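
    A compact way to see the generic shape is the standard factorization of the flux power spectrum into linear theory times a non-linear correction; the schematic form below is the conventional one, while the paper's own fitting formula parametrizes D(k, μ) in more detail:

      % Schematic redshift-space Ly-alpha flux power spectrum: linear
      % theory (density bias b_delta, RSD parameter beta) times a
      % non-linear correction D(k, mu) with D -> 1 on large scales.
      P_F(k,\mu) = b_\delta^{2}\left(1 + \beta\,\mu^{2}\right)^{2} P_L(k)\, D(k,\mu),
      \qquad D(k,\mu) \to 1 \quad (k \to 0).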

  12. Non-linear analytic and coanalytic problems ( L_p-theory, Clifford analysis, examples)

    NASA Astrophysics Data System (ADS)

    Dubinskii, Yu A.; Osipenko, A. S.

    2000-02-01

    Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.

  13. Reconstruction and Validation of a Genome-Scale Metabolic Model for the Filamentous Fungus Neurospora crassa Using FARM

    PubMed Central

    Hood, Heather M.; Ocasio, Linda R.; Sachs, Matthew S.; Galagan, James E.

    2013-01-01

    The filamentous fungus Neurospora crassa played a central role in the development of twentieth-century genetics, biochemistry and molecular biology, and continues to serve as a model organism for eukaryotic biology. Here, we have reconstructed a genome-scale model of its metabolism. This model consists of 836 metabolic genes, 257 pathways, 6 cellular compartments, and is supported by extensive manual curation of 491 literature citations. To aid our reconstruction, we developed three optimization-based algorithms, which together comprise Fast Automated Reconstruction of Metabolism (FARM). These algorithms are: LInear MEtabolite Dilution Flux Balance Analysis (limed-FBA), which predicts flux while linearly accounting for metabolite dilution; One-step functional Pruning (OnePrune), which removes blocked reactions with a single compact linear program; and Consistent Reproduction Of growth/no-growth Phenotype (CROP), which reconciles differences between in silico and experimental gene essentiality faster than previous approaches. Against an independent test set of more than 300 essential/non-essential genes that were not used to train the model, the model displays 93% sensitivity and specificity. We also used the model to simulate the biochemical genetics experiments originally performed on Neurospora by comprehensively predicting nutrient rescue of essential genes and synthetic lethal interactions, and we provide detailed pathway-based mechanistic explanations of our predictions. Our model provides a reliable computational framework for the integration and interpretation of ongoing experimental efforts in Neurospora, and we anticipate that our methods will substantially reduce the manual effort required to develop high-quality genome-scale metabolic models for other organisms. PMID:23935467
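
    The linear program at the core of any FBA-type method is compact; the sketch below solves a toy three-reaction network with scipy. This is generic FBA only; limed-FBA's linear metabolite-dilution terms and the other FARM components are not reproduced here.

      # Minimal flux-balance analysis: maximize a biomass flux subject to
      # steady-state mass balance S v = 0 and flux bounds.
      import numpy as np
      from scipy.optimize import linprog

      # Stoichiometric matrix S (rows: metabolites A, B; columns: reactions)
      # R1: -> A,  R2: A -> B,  R3: B -> (biomass)
      S = np.array([[1.0, -1.0, 0.0],
                    [0.0, 1.0, -1.0]])
      lb = [0.0, 0.0, 0.0]
      ub = [10.0, 10.0, 10.0]
      c = np.array([0.0, 0.0, 1.0])  # objective: maximize v3 (biomass)

      # linprog minimizes, so negate the objective.
      res = linprog(-c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                    bounds=list(zip(lb, ub)), method="highs")
      print("optimal biomass flux:", -res.fun, "fluxes:", res.x)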

  14. Conformational free energy of melts of ring-linear polymer blends.

    PubMed

    Subramanian, Gopinath; Shanbhag, Sachin

    2009-10-01

    The conformational free energy of ring polymers in a blend of ring and linear polymers is investigated using the bond-fluctuation model. Previously established scaling relationships for the free energy of a ring polymer are shown to be valid only in the mean-field sense, and alternative functional forms are investigated. It is shown that it may be difficult to accurately express the total free energy of a ring polymer by a simple scaling argument, or in closed form.

  15. Modeling of non-ideal hard permanent magnets with an affine-linear model, illustrated for a bar and a horseshoe magnet

    NASA Astrophysics Data System (ADS)

    Glane, Sebastian; Reich, Felix A.; Müller, Wolfgang H.

    2017-11-01

    This study is dedicated to continuum-scale material modeling of isotropic permanent magnets. An affine-linear extension to the commonly used ideal hard model for permanent magnets is proposed, motivated, and detailed. In order to demonstrate the differences between these models, bar and horseshoe magnets are considered. The structure of the boundary value problem for the magnetic field and related solution techniques are discussed. For the ideal model, closed-form analytical solutions were obtained for both geometries. Magnetic fields of the boundary value problems for both models and differently shaped magnets were computed numerically by using the boundary element method. The results show that the character of the magnetic field is strongly influenced by the model that is used. Furthermore, it can be observed that the shape of an affine-linear magnet influences the near-field significantly. Qualitative comparisons with experiments suggest that both the ideal and the affine-linear models are relevant in practice, depending on the magnetic material employed. Mathematically speaking, the ideal magnetic model is a special case of the affine-linear one. Therefore, in applications where knowledge of the near-field is important, the affine-linear model can yield more accurate results—depending on the magnetic material.
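
    To make the contrast concrete, a minimal constitutive sketch (our illustrative notation, not the paper's exact equations) writes the magnetization law affinely in the field, with the ideal hard magnet recovered as the χ = 0 special case:

      % Affine-linear magnetization law (illustrative form): chi = 0
      % recovers the ideal hard magnet with fixed remanent magnetization.
      \mathbf{M}(\mathbf{H}) = \mathbf{M}_r + \chi\,\mathbf{H},
      \qquad \chi = 0 \;\Rightarrow\; \mathbf{M} = \mathbf{M}_r
      \ \text{(ideal hard model)}.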

  16. Identifiability of large-scale non-linear dynamic network models applied to the ADM1-case study.

    PubMed

    Nimmegeers, Philippe; Lauwers, Joost; Telen, Dries; Logist, Filip; Impe, Jan Van

    2017-06-01

    In this work, both the structural and practical identifiability of the Anaerobic Digestion Model no. 1 (ADM1) is investigated, which serves as a relevant case study of large non-linear dynamic network models. The structural identifiability is investigated using a probabilistic algorithm, adapted to deal with the specifics of the case study (i.e., a large-scale non-linear dynamic system of differential and algebraic equations). The practical identifiability is analyzed using a Monte Carlo parameter estimation procedure for a 'non-informative' and an 'informative' experiment, both heuristically designed. The model structure of ADM1 has been modified by replacing parameters with parameter combinations, to provide a generally locally structurally identifiable version of ADM1. This means that in an idealized theoretical situation, the parameters can be estimated accurately. Furthermore, the generally positive structural identifiability results can be explained by the large number of interconnections between the states in the network structure. This interconnectivity, however, is also reflected in correlations among the parameter estimates, making uncorrelated parameter estimation difficult in practice. Copyright © 2017. Published by Elsevier Inc.
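
    Practical non-identifiability of the kind described, strongly correlated estimates despite structural identifiability, can be illustrated with a deliberately tiny toy model in which only the product of two parameters enters the output (a sketch in the spirit of the Monte Carlo procedure, not the ADM1 itself):

      # Toy Monte Carlo identifiability check: in y = exp(-k1*k2*t) only
      # the product k1*k2 is identifiable, so estimates of k1 and k2 are
      # strongly (negatively) correlated across noise realizations.
      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(1)
      t = np.linspace(0.1, 5.0, 25)
      y_true = np.exp(-0.5 * 2.0 * t)  # true k1 = 0.5, k2 = 2.0

      estimates = []
      for _ in range(200):
          y_obs = y_true + rng.normal(0, 0.01, t.size)
          fit = least_squares(lambda p: np.exp(-p[0] * p[1] * t) - y_obs,
                              x0=[1.0, 1.0], bounds=([1e-6, 1e-6], [10.0, 10.0]))
          estimates.append(fit.x)

      est = np.array(estimates)
      print("corr(k1, k2) =", np.corrcoef(est[:, 0], est[:, 1])[0, 1])  # ~ -1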

  17. Generalized Swiss-cheese cosmologies: Mass scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grenon, Cedric; Lake, Kayll

    We generalize the Swiss-cheese cosmologies so as to include nonzero linear momenta of the associated boundary surfaces. The evolution of mass scales in these generalized cosmologies is studied for a variety of models for the background without having to specify any details within the local inhomogeneities. We find that the final effective gravitational mass and size of the evolving inhomogeneities depends on their linear momenta but these properties are essentially unaffected by the details of the background model.

  18. Critical Fluctuations in Cortical Models Near Instability

    PubMed Central

    Aburn, Matthew J.; Holmes, C. A.; Roberts, James A.; Boonstra, Tjeerd W.; Breakspear, Michael

    2012-01-01

    Computational studies often proceed from the premise that cortical dynamics operate in a linearly stable domain, where fluctuations dissipate quickly and show only short memory. Studies of human electroencephalography (EEG), however, have shown significant autocorrelation at time lags on the scale of minutes, indicating the need to consider regimes where non-linearities influence the dynamics. Statistical properties such as increased autocorrelation length, increased variance, power law scaling, and bistable switching have been suggested as generic indicators of the approach to bifurcation in non-linear dynamical systems. We study temporal fluctuations in a widely-employed computational model (the Jansen–Rit model) of cortical activity, examining the statistical signatures that accompany bifurcations. Approaching supercritical Hopf bifurcations through tuning of the background excitatory input, we find a dramatic increase in the autocorrelation length that depends sensitively on the direction in phase space of the input fluctuations and hence on which neuronal subpopulation is stochastically perturbed. Similar dependence on the input direction is found in the distribution of fluctuation size and duration, which show power law scaling that extends over four orders of magnitude at the Hopf bifurcation. We conjecture that the alignment in phase space between the input noise vector and the center manifold of the Hopf bifurcation is directly linked to these changes. These results are consistent with the possibility of statistical indicators of linear instability being detectable in real EEG time series. However, even in a simple cortical model, we find that these indicators may not necessarily be visible even when bifurcations are present because their expression can depend sensitively on the neuronal pathway of incoming fluctuations. PMID:22952464
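
    The autocorrelation signature can be illustrated with the simplest possible surrogate, a linear (Ornstein-Uhlenbeck) system approaching instability; this is a generic sketch of critical slowing down, not the Jansen-Rit model itself:

      # As the decay rate lam -> 0 (approach to instability), the
      # autocorrelation time 1/lam of dx = -lam*x dt + sigma dW grows,
      # inflating both variance and autocorrelation length.
      import numpy as np

      def ou_autocorr(lam, dt=1e-3, n=200_000, sigma=1.0, seed=0):
          rng = np.random.default_rng(seed)
          x = np.zeros(n)
          for i in range(1, n):
              x[i] = x[i-1] - lam * x[i-1] * dt + sigma * np.sqrt(dt) * rng.normal()
          lag = int(0.1 / dt)  # autocorrelation at a lag of 0.1 time units
          return np.corrcoef(x[:-lag], x[lag:])[0, 1]

      for lam in (10.0, 1.0, 0.1):  # progressively closer to instability
          print(f"lambda={lam:5.1f}  ACF(0.1) = {ou_autocorr(lam):.3f}")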

  19. Performance of linear and nonlinear texture measures in 2D and 3D for monitoring architectural changes in osteoporosis using computer-generated models of trabecular bone

    NASA Astrophysics Data System (ADS)

    Boehm, Holger F.; Link, Thomas M.; Monetti, Roberto A.; Mueller, Dirk; Rummeny, Ernst J.; Raeth, Christoph W.

    2005-04-01

    Osteoporosis is a metabolic bone disease leading to demineralization and increased risk of fracture. The two major factors that determine the biomechanical competence of bone are the degree of mineralization and the micro-architectural integrity. Today, modern imaging modalities (high-resolution MRI, micro-CT) are capable of depicting structural details of trabecular bone tissue. From the image data, structural properties are quantified and analysed with respect to the presence of osteoporotic fractures of the spine (in vivo) or correlated with biomechanical strength as derived from destructive testing (in vitro). Linear structural measures in 2D, originally adopted from standard histo-morphometry, are fairly well established. Recently, non-linear techniques in 2D and 3D based on the scaling index method (SIM), the standard Hough transform (SHT), and the Minkowski Functionals (MF) have been introduced, which show excellent performance in predicting bone strength and fracture risk. However, little is known about the performance of the various parameters with respect to monitoring structural changes due to progression of osteoporosis or as a result of medical treatment. In this contribution, we generate models of trabecular bone with pre-defined structural properties which are exposed to simulated osteoclastic activity. We apply linear and non-linear texture measures to the models and analyse their performance with respect to detecting architectural changes. This study demonstrates that the texture measures are capable of monitoring structural changes of complex model data. The diagnostic potential varies for the different parameters and is found to depend on the topological composition of the model and the initial "bone density". In our models, non-linear texture measures tend to react more sensitively to small structural changes than linear measures. Best performance is observed for the 3rd and 4th Minkowski Functionals and for the scaling index method.

  20. Should the SCOPA-COG be modified? A Rasch analysis perspective.

    PubMed

    Forjaz, M J; Frades-Payo, B; Rodriguez-Blazquez, C; Ayala, A; Martinez-Martin, P

    2010-02-01

    The SCales for Outcomes in PArkinson's disease-Cognition (SCOPA-COG) is a specific measure of cognitive function for Parkinson's disease (PD) patients. Previous studies, within the framework of classic test theory, indicate satisfactory psychometric properties. The Rasch model, an item response theory approach, provides new information about the scale and yields scores on a linear scale. This study aims at analysing the SCOPA-COG according to the Rasch model and, on the basis of the results, suggesting modifications to the SCOPA-COG. Fit to the Rasch model was analysed using a sample of 384 PD patients. A good fit was obtained after rescoring for disordered thresholds. The person separation index, a reliability measure, was 0.83. Differential item functioning was observed by age for three items and by gender for one item. The SCOPA-COG is a unidimensional measure of global cognitive function in PD patients, with good scale targeting and no empirical evidence for use of the subscale scores. Its adequate reliability and internal construct validity were supported. The SCOPA-COG, with the proposed scoring scheme, generates true linear interval scores.

  1. Linear estimation of coherent structures in wall-bounded turbulence at Re_τ = 2000

    NASA Astrophysics Data System (ADS)

    Oehler, S.; Garcia–Gutiérrez, A.; Illingworth, S.

    2018-04-01

    The estimation problem for a fully-developed turbulent channel flow at Re_τ = 2000 is considered. Specifically, a Kalman filter is designed using a Navier–Stokes-based linear model. The estimator uses time-resolved velocity measurements at a single wall-normal location (provided by DNS) to estimate the time-resolved velocity field at other wall-normal locations. The estimator is able to reproduce the largest scales with reasonable accuracy for a range of wavenumber pairs, measurement locations and estimation locations. Importantly, the linear model is also able to predict with reasonable accuracy the performance that will be achieved by the estimator when applied to the DNS. A more practical estimation scheme using the shear stress at the wall as measurement is also considered. The estimator is still able to estimate the largest scales with reasonable accuracy, although the estimator's performance is reduced.
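
    A minimal sketch of the estimator's core recursion follows (generic discrete-time Kalman filter; the A, C, Q, R below are toy placeholders, whereas the paper derives them from a Navier-Stokes-based linear model):

      # One predict/update step of a standard discrete Kalman filter.
      import numpy as np

      def kalman_step(x, P, y, A, C, Q, R):
          # Predict state and covariance forward one step.
          x_pred = A @ x
          P_pred = A @ P @ A.T + Q
          # Update with measurement y (e.g., velocity at one wall-normal
          # location, or wall shear stress in the practical scheme).
          S = C @ P_pred @ C.T + R
          K = P_pred @ C.T @ np.linalg.inv(S)
          x_new = x_pred + K @ (y - C @ x_pred)
          P_new = (np.eye(len(x)) - K @ C) @ P_pred
          return x_new, P_new

      # Toy 2-state system observed through its first state only.
      A = np.array([[0.95, 0.1], [0.0, 0.9]])
      C = np.array([[1.0, 0.0]])
      Q, R = 0.01 * np.eye(2), 0.1 * np.eye(1)
      x, P = np.zeros(2), np.eye(2)
      x, P = kalman_step(x, P, np.array([0.3]), A, C, Q, R)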

  2. Large-scale geomorphology: Classical concepts reconciled and integrated with contemporary ideas via a surface processes model

    NASA Astrophysics Data System (ADS)

    Kooi, Henk; Beaumont, Christopher

    1996-02-01

    Linear systems analysis is used to investigate the response of a surface processes model (SPM) to tectonic forcing. The SPM calculates subcontinental scale denudational landscape evolution on geological timescales (1 to hundreds of million years) as the result of simultaneous hillslope transport, modeled by diffusion, and fluvial transport, modeled by advection and reaction. The tectonically forced SPM accommodates the large-scale behavior envisaged in classical and contemporary conceptual geomorphic models and provides a framework for their integration and unification. The following three model scales are considered: micro-, meso-, and macroscale. The concepts of dynamic equilibrium and grade are quantified at the microscale for segments of uniform gradient subject to tectonic uplift. At the larger meso- and macroscales (which represent individual interfluves and landscapes including a number of drainage basins, respectively) the system response to tectonic forcing is linear for uplift geometries that are symmetric with respect to baselevel and which impose a fully integrated drainage to baselevel. For these linear models the response time and the transfer function as a function of scale characterize the model behavior. Numerical experiments show that the styles of landscape evolution depend critically on the timescales of the tectonic processes in relation to the response time of the landscape. When tectonic timescales are much longer than the landscape response time, the resulting dynamic equilibrium landscapes correspond to those envisaged by Hack (1960). When tectonic timescales are of the same order as the landscape response time and when tectonic variations take the form of pulses (much shorter than the response time), evolving landscapes conform to the Penck type (1972) and to the Davis (1889, 1899) and King (1953, 1962) type frameworks, respectively. The behavior of the SPM highlights the importance of phase shifts or delays of the landform response and sediment yield in relation to the tectonic forcing. Finally, nonlinear behavior resulting from more general uplift geometries is discussed. A number of model experiments illustrate the importance of "fundamental form," which is an expression of the conformity of antecedent topography with the current tectonic regime. Lack of conformity leads to models that exhibit internal thresholds and a complex response.
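
    The interplay of hillslope diffusion and fluvial transport under tectonic forcing can be sketched in one dimension as below; the grid, coefficients and the slope-proportional stand-in for the advection-reaction fluvial component are illustrative assumptions, not the SPM's actual formulation:

      # Toy 1D landscape evolution: diffusion (hillslopes) + slope-
      # proportional erosion (fluvial stand-in) + uniform tectonic uplift,
      # with fixed baselevel at both ends.
      import numpy as np

      nx, dx, dt = 200, 100.0, 50.0           # cells, m, yr
      kappa, c, uplift = 0.01, 0.002, 1e-4    # m^2/yr, m/yr, m/yr
      h = np.zeros(nx)                        # elevation

      for step in range(20_000):
          dhdx = np.gradient(h, dx)
          d2hdx2 = np.gradient(dhdx, dx)
          h += dt * (kappa * d2hdx2 - c * np.abs(dhdx) + uplift)
          h[0] = h[-1] = 0.0                  # baselevel boundary condition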

  3. Comparison of the Tangent Linear Properties of Tracer Transport Schemes Applied to Geophysical Problems.

    NASA Technical Reports Server (NTRS)

    Kent, James; Holdaway, Daniel

    2015-01-01

    A number of geophysical applications require the use of the linearized version of the full model. One such example is in numerical weather prediction, where the tangent linear and adjoint versions of the atmospheric model are required for the 4DVAR inverse problem. The part of the model that represents the resolved scale processes of the atmosphere is known as the dynamical core. Advection, or transport, is performed by the dynamical core. It is a central process in many geophysical applications and is a process that often has a quasi-linear underlying behavior. However, over the decades since the advent of numerical modelling, significant effort has gone into developing many flavors of high-order, shape preserving, nonoscillatory, positive definite advection schemes. These schemes are excellent in terms of transporting the quantities of interest in the dynamical core, but they introduce nonlinearity through the use of nonlinear limiters. The linearity of the transport schemes used in Goddard Earth Observing System version 5 (GEOS-5), as well as a number of other schemes, is analyzed using a simple 1D setup. The linearized version of GEOS-5 is then tested using a linear third order scheme in the tangent linear version.
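
    The linearity issue is easy to demonstrate numerically: for a linear scheme the perturbation response scales exactly with the perturbation amplitude, while a flux limiter breaks that scaling. The sketch below uses a toy 1D upwind/MUSCL step with a minmod limiter, our choice for illustration rather than the GEOS-5 schemes:

      # ||M(x + a*dx) - M(x)|| / a is constant in a for a linear scheme,
      # but varies once the (nonlinear) minmod limiter is switched on.
      import numpy as np

      def advect(u, cfl=0.5, limited=True):
          dm = u - np.roll(u, 1)
          dp = np.roll(u, -1) - u
          slope = np.where(dm * dp > 0,
                           np.sign(dm) * np.minimum(np.abs(dm), np.abs(dp)), 0.0)
          flux = u + (0.5 * (1 - cfl) * slope if limited else 0.0)
          return u - cfl * (flux - np.roll(flux, 1))

      x = np.linspace(0, 1, 64, endpoint=False)
      u0 = np.exp(-100 * (x - 0.5) ** 2)
      du = 1e-3 * np.sin(2 * np.pi * x)
      for a in (1.0, 0.5, 0.25):
          for limited in (False, True):
              diff = advect(u0 + a * du, limited=limited) - advect(u0, limited=limited)
              print(f"a={a:4.2f} limited={limited}: "
                    f"||M(x+a*dx)-M(x)||/a = {np.linalg.norm(diff) / a:.8f}")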

  4. Information Fusion from the Point of View of Communication Theory; Fusing Information to Trade-Off the Resolution of Assessments Against the Probability of Mis-Assessment

    DTIC Science & Technology

    2013-08-19

    excellence in linear models, 2010. She successfully defended her dissertation, Linear System Design for Fusion and Compression, on Aug 13, 2013. Her work was... measurements into canonical coordinates, scaling, and rotation; there is a water-filling interpretation; (3) the optimum design of a linear secondary channel of... measurements to fuse with a primary linear channel of measurements maximizes a generalized Rayleigh quotient; (4) the asymptotically optimum

  5. On the assimilation set-up of ASCAT soil moisture data for improving streamflow catchment simulation

    NASA Astrophysics Data System (ADS)

    Loizu, Javier; Massari, Christian; Álvarez-Mozos, Jesús; Tarpanelli, Angelica; Brocca, Luca; Casalí, Javier

    2018-01-01

    Assimilation of remotely sensed surface soil moisture (SSM) data into hydrological catchment models has been identified as a means to improve streamflow simulations, but reported results vary markedly depending on the particular model, catchment and assimilation procedure used. In this study, the influence of key aspects, such as the type of model, re-scaling technique and SSM observation error considered, was evaluated. For this aim, Advanced SCATterometer (ASCAT) SSM observations were assimilated through the ensemble Kalman filter into two hydrological models of different complexity (namely MISDc and TOPLATS) run on two Mediterranean catchments of similar size (750 km²). Three different re-scaling techniques were evaluated (linear re-scaling, variance matching and cumulative distribution function matching), and SSM observation error values ranging from 0.01% to 20% were considered. Four different efficiency measures were used for evaluating the results. Increases in Nash-Sutcliffe efficiency (0.03-0.15) and efficiency indices (10-45%) were obtained, especially when linear re-scaling and observation errors within 4-6% were considered. This study found that there is potential to improve streamflow prediction through data assimilation of remotely sensed SSM in catchments of different characteristics and with hydrological models of different conceptualization schemes, but a careful evaluation of the observation error and of the re-scaling technique set-up is required.
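
    The three re-scaling techniques have simple generic forms, sketched below on placeholder data (the operational ASCAT retrievals and model climatologies are not reproduced):

      # Map satellite SSM onto a model's SSM climatology three ways.
      import numpy as np

      def linear_rescale(obs, mod):
          # least-squares linear regression of model on observations
          a, b = np.polyfit(obs, mod, 1)
          return a * obs + b

      def variance_match(obs, mod):
          # match first and second moments
          return (obs - obs.mean()) / obs.std() * mod.std() + mod.mean()

      def cdf_match(obs, mod):
          # empirical quantile mapping: obs quantiles -> model quantiles
          ranks = np.searchsorted(np.sort(obs), obs, side="right") / obs.size
          return np.quantile(mod, np.clip(ranks, 0.0, 1.0))

      rng = np.random.default_rng(2)
      mod = rng.gamma(4.0, 0.08, 1000)              # model SSM climatology
      obs = 0.6 * mod + rng.normal(0, 0.03, 1000)   # biased, noisy satellite SSM
      for f in (linear_rescale, variance_match, cdf_match):
          print(f.__name__, np.mean(f(obs, mod)).round(3))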

  6. Sources of signal-dependent noise during isometric force production.

    PubMed

    Jones, Kelvin E; Hamilton, Antonia F; Wolpert, Daniel M

    2002-09-01

    It has been proposed that the invariant kinematics observed during goal-directed movements result from reducing the consequences of signal-dependent noise (SDN) on motor output. The purpose of this study was to investigate the presence of SDN during isometric force production and determine how central and peripheral components contribute to this feature of motor control. Peripheral and central components were distinguished experimentally by comparing voluntary contractions to those elicited by electrical stimulation of the extensor pollicis longus muscle. To determine other factors of motor-unit physiology that may contribute to SDN, a model was constructed and its output compared with the empirical data. SDN was evident in voluntary isometric contractions as a linear scaling of force variability (SD) with respect to the mean force level. However, during electrically stimulated contractions to the same force levels, the variability remained constant over the same range of mean forces. When the subjects were asked to combine voluntary with stimulation-induced contractions, the linear scaling relationship between the SD and mean force returned. The modeling results highlight that much of the basic physiological organization of the motor-unit pool, such as range of twitch amplitudes and range of recruitment thresholds, biases force output to exhibit linearly scaled SDN. This is in contrast to the square root scaling of variability with mean force present in any individual motor-unit of the pool. Orderly recruitment by twitch amplitude was a necessary condition for producing linearly scaled SDN. Surprisingly, the scaling of SDN was independent of the variability of motoneuron firing and therefore by inference, independent of presynaptic noise in the motor command. We conclude that the linear scaling of SDN during voluntary isometric contractions is a natural by-product of the organization of the motor-unit pool that does not depend on signal-dependent noise in the motor command. Synaptic noise in the motor command and common drive, which give rise to the variability and synchronization of motoneuron spiking, determine the magnitude of the force variability at a given level of mean force output.

  7. Guidance for the utility of linear models in meta-analysis of genetic association studies of binary phenotypes.

    PubMed

    Cook, James P; Mahajan, Anubha; Morris, Andrew P

    2017-02-01

    Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
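
    Both recommended schemes are short computations in practice. The sketch below applies them to placeholder per-study summaries; the sample-size weighting follows the convention used by tools such as METAL:

      # (i) effective-sample-size-weighted Z-scores and
      # (ii) inverse-variance-weighted effects on the log-odds scale.
      import numpy as np

      z = np.array([2.1, 1.4, 2.8])             # per-study Z-scores
      n_eff = np.array([4000., 2500., 6000.])   # effective sample sizes
      beta = np.array([0.12, 0.08, 0.15])       # log-odds effect sizes
      se = np.array([0.05, 0.06, 0.05])         # standard errors (log-odds)

      # (i) sample-size weighting: w_i = sqrt(N_i), so sum(w_i^2) = sum(N_i)
      z_meta = np.sum(np.sqrt(n_eff) * z) / np.sqrt(np.sum(n_eff))

      # (ii) inverse-variance weighting
      iv = 1.0 / se**2
      beta_meta = np.sum(iv * beta) / np.sum(iv)
      se_meta = 1.0 / np.sqrt(np.sum(iv))
      print(z_meta, beta_meta, se_meta)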

  8. Dynamics of supersymmetric chameleons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brax, Philippe; Davis, Anne-Christine; Sakstein, Jeremy, E-mail: Philippe.Brax@cea.fr, E-mail: A.C.Davis@damtp.cam.ac.uk, E-mail: J.A.Sakstein@damtp.cam.ac.uk

    2013-10-01

    We investigate the cosmological dynamics of a class of supersymmetric chameleon models coupled to cold dark matter fermions. The model includes a cosmological constant in the form of a Fayet-Iliopoulos term, which emerges at late times due to the coupling of the chameleon to two charged scalars. Supergravity corrections ensure that the supersymmetric chameleons are efficiently screened in all astrophysical objects of interest; however, this does not preclude the enhancement of gravity on linear cosmological scales. We solve the modified equations for the growth of cold dark matter density perturbations in closed form in the matter era. Using this, we go on to derive the modified linear power spectrum which is characterised by two scales, the horizon size at matter-radiation equality and at the redshift when the chameleon reaches the minimum of its effective potential. We analyse the deviations from the ΛCDM predictions in the linear regime. We find that there is generically a region in the model's parameter space where the model's background cosmology coincides with that of the ΛCDM model. Furthermore, we find that characteristic deviations from ΛCDM are present on the matter power spectrum providing a clear signature of supersymmetric chameleons.

  9. Massively parallel and linear-scaling algorithm for second-order Møller-Plesset perturbation theory applied to the study of supramolecular wires

    NASA Astrophysics Data System (ADS)

    Kjærgaard, Thomas; Baudin, Pablo; Bykov, Dmytro; Eriksen, Janus Juul; Ettenhuber, Patrick; Kristensen, Kasper; Larkin, Jeff; Liakh, Dmitry; Pawłowski, Filip; Vose, Aaron; Wang, Yang Min; Jørgensen, Poul

    2017-03-01

    We present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide-Expand-Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide-Expand-Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI+X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the "resolution of the identity second-order Møller-Plesset perturbation theory" (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.

  10. Toward an Educational View of Scaling: Sufficing Standard and Not a Gold Standard

    ERIC Educational Resources Information Center

    Hung, David; Lee, Shu-Shing; Wu, Longkai

    2015-01-01

    Educational innovations in Singapore have reached fruition. It is now important to consider different innovations and issues that enable innovations to scale and become widespread. This proposition paper outlines two views of scaling and its relation to education systems. We argue that a linear model used in the medical field stresses top-down…

  11. Comparison of linear and non-linear models for the adsorption of fluoride onto geo-material: limonite.

    PubMed

    Sahin, Rubina; Tapadia, Kavita

    2015-01-01

    The three widely used isotherms, Langmuir, Freundlich and Temkin, were examined in an experiment using fluoride (F⁻) ion adsorption on a geo-material (limonite) at four different temperatures by linear and non-linear models. A comparison of linear and non-linear regression models was carried out to select the optimum isotherm for the experimental results. The coefficient of determination, r², was used to select the best theoretical isotherm. The four Langmuir linear equations (1, 2, 3, and 4) are discussed. Langmuir isotherm parameters obtained from the four Langmuir linear equations using the linear model differed, but they were the same when using the non-linear model. Langmuir-2, one of the linear forms, had the highest coefficient of determination (r² = 0.99) compared to the other Langmuir linear equations (1, 3 and 4) in linear form, whereas, for the non-linear case, Langmuir-4 fitted best among all the isotherms because it had the highest coefficient of determination (r² = 0.99). The results showed that the non-linear model may be a better way to obtain the parameters. In the present work, the thermodynamic parameters show that the adsorption of fluoride onto limonite is both spontaneous (ΔG < 0) and endothermic (ΔH > 0). Scanning electron microscope and X-ray diffraction images also confirm the adsorption of F⁻ ion onto limonite. The isotherm and kinetic study reveals that limonite can be used as an adsorbent for fluoride removal. In the future, limonite could support large-scale fluoride-removal technology, as it is cost-effective, eco-friendly and easily available in the study area.
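
    The two fitting routes can be sketched as follows (synthetic data; the q_m and K_L values are placeholders, and the numbering of the four linearized Langmuir forms varies between papers):

      # Langmuir isotherm fitted two ways: non-linear least squares vs one
      # linearized form, Ce/qe = Ce/q_m + 1/(K_L*q_m).
      import numpy as np
      from scipy.optimize import curve_fit

      def langmuir(Ce, qm, KL):
          return qm * KL * Ce / (1.0 + KL * Ce)

      rng = np.random.default_rng(3)
      Ce = np.linspace(0.5, 20, 12)  # equilibrium concentration (mg/L)
      qe = langmuir(Ce, qm=4.0, KL=0.3) * (1 + rng.normal(0, 0.02, Ce.size))

      # non-linear fit
      (qm_nl, KL_nl), _ = curve_fit(langmuir, Ce, qe, p0=[1.0, 0.1])

      # linearized fit: slope = 1/q_m, intercept = 1/(K_L*q_m)
      slope, intercept = np.polyfit(Ce, Ce / qe, 1)
      qm_lin, KL_lin = 1.0 / slope, slope / intercept
      print(qm_nl, KL_nl, qm_lin, KL_lin)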

  12. A mathematical framework for yield (vs. rate) optimization in constraint-based modeling and applications in metabolic engineering.

    PubMed

    Klamt, Steffen; Müller, Stefan; Regensburger, Georg; Zanghellini, Jürgen

    2018-05-01

    The optimization of metabolic rates (as linear objective functions) represents the methodical core of flux-balance analysis techniques which have become a standard tool for the study of genome-scale metabolic models. Besides (growth and synthesis) rates, metabolic yields are key parameters for the characterization of biochemical transformation processes, especially in the context of biotechnological applications. However, yields are ratios of rates, and hence the optimization of yields (as nonlinear objective functions) under arbitrary linear constraints is not possible with current flux-balance analysis techniques. Despite the fundamental importance of yields in constraint-based modeling, a comprehensive mathematical framework for yield optimization is still missing. We present a mathematical theory that allows one to systematically compute and analyze yield-optimal solutions of metabolic models under arbitrary linear constraints. In particular, we formulate yield optimization as a linear-fractional program. For practical computations, we transform the linear-fractional yield optimization problem to a (higher-dimensional) linear problem. Its solutions determine the solutions of the original problem and can be used to predict yield-optimal flux distributions in genome-scale metabolic models. For the theoretical analysis, we consider the linear-fractional problem directly. Most importantly, we show that the yield-optimal solution set (like the rate-optimal solution set) is determined by (yield-optimal) elementary flux vectors of the underlying metabolic model. However, yield- and rate-optimal solutions may differ from each other, and hence optimal (biomass or product) yields are not necessarily obtained at solutions with optimal (growth or synthesis) rates. Moreover, we discuss phase planes/production envelopes and yield spaces, in particular, we prove that yield spaces are convex and provide algorithms for their computation. We illustrate our findings by a small example and demonstrate their relevance for metabolic engineering with realistic models of E. coli. We develop a comprehensive mathematical framework for yield optimization in metabolic models. Our theory is particularly useful for the study and rational modification of cell factories designed under given yield and/or rate requirements. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
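
    The key step, turning the yield ratio into a linear objective, follows the classical treatment of linear-fractional programs; the schematic below (with a simplified constraint set of our choosing) shows the substitution:

      % Yield optimization as a linear-fractional program and its
      % linearization (Charnes-Cooper-type substitution, schematic):
      \max_{v}\ \frac{c^{T}v}{d^{T}v}
      \quad\text{s.t.}\quad Sv = 0,\ Av \le b,\ d^{T}v > 0.
      % Substituting u = v/(d^T v) and e = 1/(d^T v) gives the LP
      \max_{u,e}\ c^{T}u
      \quad\text{s.t.}\quad Su = 0,\ Au \le b\,e,\ d^{T}u = 1,\ e > 0.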

  13. Rheological behavior of the crust and mantle in subduction zones in the time-scale range from earthquake (minute) to million years inferred from thermomechanical model and geodetic observations

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan; Muldashev, Iskander

    2016-04-01

    The key achievement of the geodynamic modelling community, greatly advanced by the work of Evgenii Burov and his students, is the application of "realistic", mineral-physics-based non-linear rheological models to simulate deformation processes in the crust and mantle. Subduction, a type example of such a process, is an essentially multi-scale phenomenon with time scales spanning from geological to earthquake scale, with the seismic cycle in between. In this study we test the possibility of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) with a single cross-scale thermomechanical model that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology and rate-and-state friction plasticity. First we generate a thermomechanical model of a subduction zone at the geological time scale, including a narrow subduction channel with "wet-quartz" visco-elasto-plastic rheology and low static friction. We next introduce into the same model the classic rate-and-state friction law in the subduction channel, leading to stick-slip instability. This model generates a spontaneous earthquake sequence. In order to follow the deformation process in detail during the entire seismic cycle and over multiple seismic cycles, we use an adaptive time-step algorithm, changing the step from 40 s during an earthquake to between a minute and 5 years during postseismic and interseismic processes. We observe many interesting deformation patterns and demonstrate that, contrary to conventional ideas, this model predicts that postseismic deformation is controlled by visco-elastic relaxation in the mantle wedge from as early as hours to days after great (M > 9) earthquakes. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake over the day-to-4-year time range.
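
    For reference, the classic rate-and-state friction law of the kind employed in the subduction channel has the form below (aging-law version; the parameters are generic, not the study's values):

      % Rate-and-state friction (Dieterich aging law): friction depends
      % on slip velocity V and a state variable theta with memory D_c.
      \mu = \mu_0 + a \ln\!\frac{V}{V_0} + b \ln\!\frac{V_0\,\theta}{D_c},
      \qquad
      \dot{\theta} = 1 - \frac{V\,\theta}{D_c}.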

  14. A unified stochastic formulation of dissipative quantum dynamics. II. Beyond linear response of spin baths

    NASA Astrophysics Data System (ADS)

    Hsieh, Chang-Yu; Cao, Jianshu

    2018-01-01

    We use the "generalized hierarchical equation of motion" proposed in Paper I [C.-Y. Hsieh and J. Cao, J. Chem. Phys. 148, 014103 (2018)] to study decoherence in a system coupled to a spin bath. The present methodology allows a systematic incorporation of higher-order anharmonic effects of the bath in dynamical calculations. We investigate the leading order corrections to the linear response approximations for spin bath models. Two kinds of spin-based environments are considered: (1) a bath of spins discretized from a continuous spectral density and (2) a bath of localized nuclear or electron spins. The main difference resides with how the bath frequency and the system-bath coupling parameters are distributed in an environment. When discretized from a continuous spectral density, the system-bath coupling typically scales as ˜1 /√{NB } where NB is the number of bath spins. This scaling suppresses the non-Gaussian characteristics of the spin bath and justifies the linear response approximations in the thermodynamic limit. For the nuclear/electron spin bath models, system-bath couplings are directly deduced from spin-spin interactions and do not necessarily obey the 1 /√{NB } scaling. It is not always possible to justify the linear response approximations in this case. Furthermore, if the spin-spin Hamiltonian is highly symmetrical, there exist additional constraints that generate highly non-Markovian and persistent dynamics that is beyond the linear response treatments.

  15. Frequency Preference Response to Oscillatory Inputs in Two-dimensional Neural Models: A Geometric Approach to Subthreshold Amplitude and Phase Resonance.

    PubMed

    Rotstein, Horacio G

    2014-01-01

    We investigate the dynamic mechanisms of generation of subthreshold and phase resonance in two-dimensional linear and linearized biophysical (conductance-based) models, and we extend our analysis to account for the effect of simple, but not necessarily weak, types of nonlinearities. Subthreshold resonance refers to the ability of neurons to exhibit a peak in their voltage amplitude response to oscillatory input currents at a preferred non-zero (resonant) frequency. Phase-resonance refers to the ability of neurons to exhibit a zero-phase (or zero-phase-shift) response to oscillatory input currents at a non-zero (phase-resonant) frequency. We adapt the classical phase-plane analysis approach to account for the dynamic effects of oscillatory inputs and develop a tool, the envelope-plane diagrams, that captures the role that conductances and time scales play in amplifying the voltage response at the resonant frequency band as compared to smaller and larger frequencies. We use envelope-plane diagrams in our analysis. We explain why the resonance phenomena do not necessarily arise from the presence of imaginary eigenvalues at rest, but rather they emerge from the interplay of the intrinsic and input time scales. We further explain why an increase in the time-scale separation causes an amplification of the voltage response in addition to shifting the resonant and phase-resonant frequencies. This is of fundamental importance for neural models since neurons typically exhibit a strong separation of time scales. We extend this approach to explain the effects of nonlinearities on both resonance and phase-resonance. We demonstrate that nonlinearities in the voltage equation cause amplifications of the voltage response and shifts in the resonant and phase-resonant frequencies that are not predicted by the corresponding linearized model. The differences between the nonlinear response and the linear prediction increase with increasing levels of the time scale separation between the voltage and the gating variable, and they almost disappear when both equations evolve at comparable rates. In contrast, voltage responses are almost insensitive to nonlinearities located in the gating variable equation. The method we develop provides a framework for the investigation of the preferred frequency responses in three-dimensional and nonlinear neuronal models as well as simple models of coupled neurons.

  16. Probing kinematics and fate of the Universe with linearly time-varying deceleration parameter

    NASA Astrophysics Data System (ADS)

    Akarsu, Özgür; Dereli, Tekin; Kumar, Suresh; Xu, Lixin

    2014-02-01

    The parametrizations q = q_0 + q_1 z and q = q_0 + q_1(1 - a/a_0) (Chevallier-Polarski-Linder parametrization) of the deceleration parameter, which are linear in cosmic redshift z and scale factor a, have been frequently utilized in the literature to study the kinematics of the Universe. In this paper, we follow a strategy that leads to these two well-known parametrizations of the deceleration parameter as well as an additional new parametrization, q = q_0 + q_1(1 - t/t_0), which is linear in cosmic time t. We study the features of this linearly time-varying deceleration parameter in contrast with the other two linear parametrizations. We investigate in detail the kinematics of the Universe by confronting the three models with the latest observational data. We further study the dynamics of the Universe by considering the linearly time-varying deceleration parameter model in comparison with the standard ΛCDM model. We also discuss the future of the Universe in the context of the models under consideration.

  17. The use of modified scaling factors in the design of high-power, non-linear, transmitting rod-core antennas

    NASA Astrophysics Data System (ADS)

    Jordan, Jared Williams; Dvorak, Steven L.; Sternberg, Ben K.

    2010-10-01

    In this paper, we develop a technique for designing high-power, non-linear, transmitting rod-core antennas by using simple modified scale factors rather than running labor-intensive numerical models. By using modified scale factors, a designer can predict changes in magnetic moment, inductance, core series loss resistance, etc. We define modified scale factors as the case when all physical dimensions of the rod antenna are scaled by p, except for the cross-sectional area of the individual wires or strips that are used to construct the core. This allows one to make measurements on a scaled-down version of the rod antenna using the same core material that will be used in the final antenna design. The modified scale factors were derived from prolate spheroidal analytical expressions for a finite-length rod antenna and were verified with experimental results. The modified scaling factors can only be used if the magnetic flux densities within the two scaled cores are the same. With the magnetic flux density constant, the two scaled cores will operate with the same complex permeability, thus changing the non-linear problem to a quasi-linear problem. We also demonstrate that by holding the number of turns times the drive current constant, while changing the number of turns, the inductance and core series loss resistance change by the number of turns squared. Experimental measurements were made on rod cores made from varying diameters of black oxide, low carbon steel wires and different widths of Metglas foil. Furthermore, we demonstrate that the modified scale factors work even in the presence of eddy currents within the core material.

  18. Linear separability in superordinate natural language concepts.

    PubMed

    Ruts, Wim; Storms, Gert; Hampton, James

    2004-01-01

    Two experiments are reported in which linear separability was investigated in superordinate natural language concept pairs (e.g., toiletry-sewing gear). Representations of the exemplars of semantically related concept pairs were derived in two to five dimensions using multidimensional scaling (MDS) of similarities based on possession of the concept features. Next, category membership, obtained from an exemplar generation study (in Experiment 1) and from a forced-choice classification task (in Experiment 2), was predicted from the coordinates of the MDS representation using log-linear analysis. The results showed that all natural kind concept pairs were perfectly linearly separable, whereas artifact concept pairs showed several violations. Clear linear separability of natural language concept pairs is in line with independent cue models. The violations in the artifact pairs, however, yield clear evidence against the independent cue models.

  19. Effects of body size and gender on the population pharmacokinetics of artesunate and its active metabolite dihydroartemisinin in pediatric malaria patients.

    PubMed

    Morris, Carrie A; Tan, Beesan; Duparc, Stephan; Borghini-Fuhrer, Isabelle; Jung, Donald; Shin, Chang-Sik; Fleckenstein, Lawrence

    2013-12-01

    Despite the important role of the antimalarial artesunate and its active metabolite dihydroartemisinin (DHA) in malaria treatment efforts, there are limited data on the pharmacokinetics of these agents in pediatric patients. This study evaluated the effects of body size and gender on the pharmacokinetics of artesunate-DHA using data from pediatric and adult malaria patients. Nonlinear mixed-effects modeling was used to obtain a base model consisting of first-order artesunate absorption and one-compartment models for artesunate and for DHA. Various methods of incorporating effects of body size descriptors on clearance and volume parameters were tested. An allometric scaling model for weight and a linear body surface area (BSA) model were deemed optimal. The apparent clearance and volume of distribution of DHA obtained with the allometric scaling model, normalized to a 38-kg patient, were 63.5 liters/h and 65.1 liters, respectively. Estimates for the linear BSA model were similar. The 95% confidence intervals for the estimated gender effects on clearance and volume parameters for artesunate fell outside the predefined no-relevant-clinical-effect interval of 0.75 to 1.25. However, the effect of gender on apparent DHA clearance was almost entirely contained within this interval, suggesting a lack of an influence of gender on this parameter. Overall, the pharmacokinetics of artesunate and DHA following oral artesunate administration can be described for pediatric patients using either an allometric scaling or linear BSA model. Both models predict that, for a given artesunate dose in mg/kg of body weight, younger children are expected to have lower DHA exposure than older children or adults.
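
    A minimal sketch of the weight-based model follows; the reference parameter values are taken from the abstract, while the exponents 0.75 for clearance and 1.0 for volume are the conventional allometric choices, assumed here rather than quoted from the paper:

      # Allometric weight scaling of DHA PK parameters, normalized to a
      # 38-kg reference patient (CL/F = 63.5 L/h, V/F = 65.1 L).
      def dha_parameters(weight_kg, cl_ref=63.5, v_ref=65.1, wt_ref=38.0):
          cl = cl_ref * (weight_kg / wt_ref) ** 0.75  # apparent clearance (L/h)
          v = v_ref * (weight_kg / wt_ref) ** 1.0     # apparent volume (L)
          return cl, v

      for wt in (12, 25, 38, 70):
          cl, v = dha_parameters(wt)
          print(f"{wt:3d} kg: CL/F = {cl:5.1f} L/h, V/F = {v:5.1f} L")

    Because a mg/kg dose scales linearly with weight while clearance scales with weight^0.75, clearance per kilogram is higher in smaller children, which is consistent with the lower predicted DHA exposure in younger children noted above.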

  20. The economics of leaf-gas exchange in a fluctuating environment and their upscaling to the canopy-level using turbulent transport theories

    NASA Astrophysics Data System (ADS)

    Katul, G. G.; Palmroth, S.; Manzoni, S.; Oren, R.

    2012-12-01

    Global climate models predict decreases in leaf stomatal conductance (gs) and transpiration due to increases in atmospheric CO2. The consequences of these reductions are increases in soil moisture availability and continental-scale run-off at decadal time-scales. Thus, a theory explaining the differential sensitivity of stomata to changing atmospheric CO2 and other environmental conditions such as soil moisture at the ecosystem scale must be identified. Here, these responses are investigated using an optimality theory applied to stomatal conductance. An analytical model for gs is first proposed based on (a) Fickian mass transfer of CO2 and H2O through stomata; (b) a biochemical photosynthesis model that relates intercellular CO2 to net photosynthesis; and (c) a stomatal model based on optimization for maximizing carbon gains when water losses represent a cost. The optimization theory produced three gas exchange responses that are consistent with observations across a wide range of species: (1) the sensitivity of gs to vapour pressure deficit (D) is similar to that obtained from a previous synthesis of more than 40 species, (2) the theory is consistent with the onset of an apparent 'feed-forward' mechanism in gs, and (3) the emergent non-linear relationship between the ratio of intercellular to atmospheric CO2 (ci/ca) and D agrees with the results available on this response. A simplified version of this leaf-scale approach recovers the linear relationship between stomatal conductance and leaf photosynthesis employed in numerous climate models that currently use a variant on the 'Ball-Berry' or the 'Leuning' approaches, provided the marginal water use efficiency increases linearly with atmospheric CO2. The model is then up-scaled to the canopy level using novel theories about the structure of turbulence inside vegetation. This up-scaling proved to be effective in resolving the complex (and two-way) interactions between leaves and their immediate micro-climate. Extensions of this optimality approach to drought and salt-stressed cases are briefly presented.
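
    The optimization at the heart of the leaf-scale model can be stated compactly (a schematic form of the carbon-gain/water-cost trade-off described above, with λ the marginal water use efficiency):

      % Stomata choose g_s to maximize carbon gain A minus the water
      % cost lambda*E, giving the marginal optimality condition.
      g_s^{\mathrm{opt}} = \arg\max_{g_s}\left[ A(g_s) - \lambda\,E(g_s) \right]
      \quad\Longrightarrow\quad
      \frac{\partial A}{\partial g_s} = \lambda\,\frac{\partial E}{\partial g_s}.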

  1. Progressive Mid-latitude Afforestation: Local and Remote Climate Impacts in the Framework of Two Coupled Earth System Models

    NASA Astrophysics Data System (ADS)

    Lague, Marysa

    Vegetation influences the atmosphere in complex and non-linear ways, such that large-scale changes in vegetation cover can drive changes in climate on both local and global scales. Large-scale land surface changes have been shown to introduce excess energy to one hemisphere, causing a shift in atmospheric circulation on a global scale. However, past work has not quantified how the climate response scales with the area of vegetation. Here, we systematically evaluate the response of climate to linearly increasing the area of forest cover over the northern mid-latitudes. We show that the magnitude of afforestation of the northern mid-latitudes determines the climate response in a non-linear fashion, and identify a threshold in vegetation-induced cloud feedbacks - a concept not previously addressed by large-scale vegetation manipulation experiments. Small increases in tree cover drive compensating cloud feedbacks, while latent heat fluxes reach a threshold after sufficiently large increases in tree cover, causing the troposphere to warm and dry, subsequently reducing cloud cover. Increased absorption of solar radiation at the surface is driven by both surface albedo changes and cloud feedbacks. We identify how vegetation-induced changes in cloud cover further feedback on changes in the global energy balance. We also show how atmospheric cross-equatorial energy transport changes as the area of afforestation is incrementally increased (a relationship which has not previously been demonstrated). This work demonstrates that while some climate effects (such as energy transport) of large scale mid-latitude afforestation scale roughly linearly across a wide range of afforestation areas, others (such as the local partitioning of the surface energy budget) are non-linear, and sensitive to the particular magnitude of mid-latitude forcing. Our results highlight the importance of considering both local and remote climate responses to large-scale vegetation change, and explore the scaling relationship between changes in vegetation cover and the resulting climate impacts.

  2. On the linearity of tracer bias around voids

    NASA Astrophysics Data System (ADS)

    Pollina, Giorgia; Hamaus, Nico; Dolag, Klaus; Weller, Jochen; Baldi, Marco; Moscardini, Lauro

    2017-07-01

    The large-scale structure of the Universe can be observed only via luminous tracers of the dark matter. However, the clustering statistics of tracers are biased and depend on various properties, such as their host-halo mass and assembly history. On very large scales, this tracer bias results in a constant offset in the clustering amplitude, known as linear bias. Towards smaller non-linear scales, this is no longer the case and tracer bias becomes a complicated function of scale and time. We focus on tracer bias centred on cosmic voids, i.e., depressions of the density field that spatially dominate the Universe. We consider three types of tracers: galaxies, galaxy clusters and active galactic nuclei, extracted from the hydrodynamical simulation Magneticum Pathfinder. In contrast to common clustering statistics that focus on auto-correlations of tracers, we find that void-tracer cross-correlations are successfully described by a linear bias relation. The tracer-density profile of voids can thus be related to their matter-density profile by a single number. We show that it coincides with the linear tracer bias extracted from the large-scale auto-correlation function and expectations from theory, if sufficiently large voids are considered. For smaller voids we observe a shift towards higher values. This has important consequences on cosmological parameter inference, as the problem of unknown tracer bias is alleviated up to a constant number. The smallest scales in existing data sets become accessible to simpler models, providing numerous modes of the density field that have been disregarded so far, but may help to further reduce statistical errors in constraining cosmology.
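
    The central relation is compact enough to state directly (a schematic form of the linear void-tracer bias described above):

      % Around void centres the tracer-density profile is proportional
      % to the matter-density profile through a single number b.
      \delta_t(r) = b\,\delta_m(r).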

  3. A two-scale scattering model with application to the JONSWAP '75 aircraft microwave scatterometer experiment

    NASA Technical Reports Server (NTRS)

    Wentz, F. J.

    1977-01-01

    The general problem of bistatic scattering from a two scale surface was evaluated. The treatment was entirely two-dimensional and in a vector formulation independent of any particular coordinate system. The two scale scattering model was then applied to backscattering from the sea surface. In particular, the model was used in conjunction with the JONSWAP 1975 aircraft scatterometer measurements to determine the sea surface's two scale roughness distributions, namely the probability density of the large scale surface slope and the capillary wavenumber spectrum. Best fits yield, on the average, a 0.7 dB rms difference between the model computations and the vertical polarization measurements of the normalized radar cross section. Correlations between the distribution parameters and the wind speed were established from linear, least squares regressions.

  4. Elongation cutoff technique armed with quantum fast multipole method for linear scaling.

    PubMed

    Korchowiec, Jacek; Lewandowski, Jakub; Makowski, Marcin; Gu, Feng Long; Aoki, Yuriko

    2009-11-30

    A linear-scaling implementation of the elongation cutoff technique (ELG/C) that speeds up Hartree-Fock (HF) self-consistent field calculations is presented. The cutoff method avoids the known bottleneck of the conventional HF scheme, that is, diagonalization, because it operates within the low dimension subspace of the whole atomic orbital space. The efficiency of ELG/C is illustrated for two model systems. The obtained results indicate that the ELG/C is a very efficient sparse matrix algebra scheme. Copyright 2009 Wiley Periodicals, Inc.

  5. The Mach number of the cosmic flow - A critical test for current theories

    NASA Technical Reports Server (NTRS)

    Ostriker, Jeremiah P.; Suto, Yasushi

    1990-01-01

    A new cosmological, self-contained test using the ratio of the mean velocity to the velocity dispersion in the mean-flow frame of a group of test objects is presented. To allow comparison with linear theory, the velocity field must first be smoothed on a suitable scale. In the context of linear perturbation theory, the Mach number M(R), which measures the ratio of power on scales larger than the patch size R to power on smaller scales, is independent of the perturbation amplitude and also of bias. An apparent inconsistency is found for standard values of power-law index n = 1 and cosmological density parameter Omega = 1, when comparing values of M(R) predicted by popular models with the tentative available observations. Nonstandard models based on adiabatic perturbations with either negative n or a small Omega value also fail, owing to the creation of unacceptably large microwave background fluctuations.
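
    In the notation suggested by the abstract (the symbols are ours), the test statistic is simply:

      % Cosmic Mach number: bulk flow of a patch of scale R divided by
      % the velocity dispersion in the mean-flow frame.
      \mathcal{M}(R) = \frac{\lvert\bar{v}(R)\rvert}{\sigma_v(R)}.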

  6. The Debye light scattering equation's scaling relation reveals the purity of synthetic dendrimers

    NASA Astrophysics Data System (ADS)

    Tseng, Hui-Yu; Chen, Hsiao-Ping; Tang, Yi-Hsuan; Chen, Hui-Ting; Kao, Chai-Lin; Wang, Shau-Chun

    2016-03-01

    Spherical dendrimer structures cannot be structurally modeled using conventional polymer models of random-coil or rod-like configurations during the calibration of the static light scattering (LS) detectors used to determine the molecular weight (M.W.) of a dendrimer or to directly assess the purity of a synthetic compound. In this paper, we used the Debye equation-based scaling relation, which predicts that the static LS intensity per unit concentration is linearly proportional to the M.W. of a synthetic dendrimer in a dilute solution, as a tool to examine the purity of high-generation compounds and to monitor the progress of dendrimer preparations. Without using expensive equipment, such as nuclear magnetic resonance or mass spectrometry, this method only requires an affordable flow-injection set-up with an LS detector. Solutions of the purified dendrimers, including the poly(amidoamine) (PAMAM) dendrimer and its fourth- to seventh-generation pyridine derivatives with sizes in the 5-9 nm range, were used to establish the scaling relation with high linearity. Artificially impure mixtures of the sixth or seventh generation revealed significant deviations from linearity. The raw synthesized products of the pyridine-modified PAMAM dendrimer, which included incompletely reacted dendrimers, were also examined to gauge the reaction progress. As a reaction toward a particular generational derivative of the PAMAM dendrimers proceeded over time, deviations from the linear scaling relation decreased. The difference between the polydispersity index of the incompletely converted products and that of the pure compounds was only about 0.01. The Debye equation-based scaling relation is therefore much more useful than the polydispersity index for monitoring conversion processes toward an indicated functionality number in a given preparation.

  7. OPLS statistical model versus linear regression to assess sonographic predictors of stroke prognosis.

    PubMed

    Vajargah, Kianoush Fathi; Sadeghi-Bazargani, Homayoun; Mehdizadeh-Esfanjani, Robab; Savadi-Oskouei, Daryoush; Farhoudi, Mehdi

    2012-01-01

    The objective of the present study was to assess the comparable applicability of the orthogonal projections to latent structures (OPLS) statistical model vs traditional linear regression in investigating the role of transcranial Doppler (TCD) sonography in predicting ischemic stroke prognosis. The study was conducted on 116 ischemic stroke patients admitted to a specialty neurology ward. The Unified Neurological Stroke Scale was used once for clinical evaluation during the first week of admission and again six months later. All data were primarily analyzed using simple linear regression and later considered for multivariate analysis using PLS/OPLS models through the SIMCA P+12 statistical software package. The linear regression analysis results used for the identification of TCD predictors of stroke prognosis were confirmed through the OPLS modeling technique. Moreover, in comparison to linear regression, the OPLS model appeared to have higher sensitivity in detecting the predictors of ischemic stroke prognosis and detected several more predictors. Applying the OPLS model made it possible to use both single TCD measures/indicators and arbitrarily dichotomized measures of TCD single-vessel involvement as well as the overall TCD result. In conclusion, the authors recommend PLS/OPLS methods as complementary rather than alternative to the available classical regression models such as linear regression.

  8. Pharmaceutical Raw Material Identification Using Miniature Near-Infrared (MicroNIR) Spectroscopy and Supervised Pattern Recognition Using Support Vector Machine

    PubMed Central

    Hsiung, Chang; Pederson, Christopher G.; Zou, Peng; Smith, Valton; von Gunten, Marc; O’Brien, Nada A.

    2016-01-01

    Near-infrared spectroscopy, as a rapid and non-destructive analytical technique, offers great advantages for pharmaceutical raw material identification (RMID) to fulfill the quality and safety requirements in the pharmaceutical industry. In this study, we demonstrated the use of portable miniature near-infrared (MicroNIR) spectrometers for NIR-based pharmaceutical RMID and solved two challenges in this area, model transferability and large-scale classification, with the aid of support vector machine (SVM) modeling. We used a set of 19 pharmaceutical compounds including various active pharmaceutical ingredients (APIs) and excipients, and six MicroNIR spectrometers, to test model transferability. For the test of large-scale classification, we used another set of 253 pharmaceutical compounds comprised of both chemically and physically different APIs and excipients. We compared SVM with conventional chemometric modeling techniques, including soft independent modeling of class analogy, partial least squares discriminant analysis, linear discriminant analysis, and quadratic discriminant analysis. Support vector machine modeling using a linear kernel, especially when combined with a hierarchical scheme, exhibited excellent performance in both model transferability and large-scale classification. Hence, ultra-compact, portable and robust MicroNIR spectrometers coupled with SVM modeling can make on-site and in situ pharmaceutical RMID for large-volume applications highly achievable. PMID:27029624
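
    A minimal sketch of the classifier named above, a linear-kernel SVM on standardized spectra (mock data shapes and preprocessing are assumptions, not the study's actual pipeline):

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(190, 125))        # mock spectra: 190 samples x 125 wavelengths
    y = np.repeat(np.arange(19), 10)       # 19 mock compound classes, 10 spectra each

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
    print(cross_val_score(clf, X, y, cv=5).mean())   # near chance level on random data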

  9. Scaling properties of Arctic sea ice deformation in high-resolution viscous-plastic sea ice models and satellite observations

    NASA Astrophysics Data System (ADS)

    Hutter, Nils; Losch, Martin; Menemenlis, Dimitris

    2017-04-01

    Sea ice models with the traditional viscous-plastic (VP) rheology and very high grid resolution can resolve leads and deformation rates that are localised along Linear Kinematic Features (LKF). In a 1-km pan-Arctic sea ice-ocean simulation, the small-scale sea-ice deformations in the Central Arctic are evaluated with a scaling analysis in relation to satellite observations of the Envisat Geophysical Processor System (EGPS). A new coupled scaling analysis for data on Eulerian grids determines the spatial and the temporal scaling as well as the coupling between temporal and spatial scales. The spatial scaling of the modelled sea ice deformation implies multi-fractality. The spatial scaling is also coupled to temporal scales and varies realistically by region and season. The agreement of the spatial scaling, and of its coupling to temporal scales, with satellite observations and with models using the modern elasto-brittle rheology challenges previous results from VP models at coarse resolution, where no such scaling was found. The temporal scaling analysis, however, shows that the VP model does not fully resolve the intermittency of sea ice deformation that is observed in satellite data.
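
    Spatial scaling analyses of this kind typically coarse-grain the deformation field over boxes of increasing side L and fit a power law to the mean deformation versus L. A hedged toy version (our simplification, not the study's method; a heavy-tailed signed field yields a positive exponent through cancellation under averaging):

    import numpy as np

    def mean_abs_at_scale(field, box):
        """Mean absolute value of the field coarse-grained over box-by-box cells."""
        n = field.shape[0] // box
        trimmed = field[:n * box, :n * box]
        coarse = trimmed.reshape(n, box, n, box).mean(axis=(1, 3))
        return np.abs(coarse).mean()

    rng = np.random.default_rng(2)
    # Mock signed divergence field with heavy tails, a stand-in for localized deformation:
    field = rng.pareto(3.0, size=(512, 512)) * rng.choice([-1.0, 1.0], size=(512, 512))

    scales = np.array([1, 2, 4, 8, 16, 32])
    means = [mean_abs_at_scale(field, int(b)) for b in scales]
    beta = -np.polyfit(np.log(scales), np.log(means), 1)[0]
    print(beta)                                  # exponent of <|eps|>(L) ~ L**(-beta)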

  10. Representation of fine scale atmospheric variability in a nudged limited area quasi-geostrophic model: application to regional climate modelling

    NASA Astrophysics Data System (ADS)

    Omrani, H.; Drobinski, P.; Dubos, T.

    2009-09-01

    In this work, we consider the effect of indiscriminate nudging time on the large and small scales of an idealized limited-area model simulation. The limited-area model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by its "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. Compared to a previous study by Salameh et al. (2009), who investigated the existence of an optimal nudging time minimizing the error on both large and small scales in a linear model, we here use a fully non-linear model which allows us to represent the chaotic nature of the atmosphere: given the perfect quasi-geostrophic model, errors in the initial conditions, concentrated mainly in the smaller scales of motion, amplify and cascade into the larger scales, eventually resulting in a prediction with low skill. To quantify the predictability of our quasi-geostrophic model, we measure the rate of divergence of the system trajectories in phase space (Lyapunov exponent) from a set of simulations initiated with a perturbation of a reference initial state. Predictability of the "global", periodic model is mostly controlled by the beta effect. In the LAM, predictability decreases as the domain size increases. Then, the effect of large-scale nudging is studied using the "perfect model" approach. Two sets of experiments were performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic LAM, where, in addition to the factors in the first set of simulations, the size of the LAM domain comes into play. In both sets of experiments, the best spatial correlation between the nudged simulation and the reference is observed with a nudging time close to the predictability time.
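
    The nudging itself amounts to adding a relaxation term to the model tendencies. A minimal sketch (the toy dynamics and parameter values are ours, not the study's quasi-geostrophic code):

    import numpy as np

    def nudged_step(x, x_ref, tendency, tau, dt):
        """One forward-Euler step of dx/dt = f(x) - (x - x_ref) / tau."""
        return x + dt * (tendency(x) - (x - x_ref) / tau)

    tendency = lambda x: -0.1 * x          # toy "model physics": weak damping
    x = np.ones(50)                        # limited-area state
    x_ref = np.zeros(50)                   # driving ("global") solution
    for _ in range(1000):
        x = nudged_step(x, x_ref, tendency, tau=20.0, dt=0.5)
    print(np.abs(x).max())                 # small: the nudging pins x to the driver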

  11. Combined chamber-tower approach: Using eddy covariance measurements to cross-validate carbon fluxes modeled from manual chamber campaigns

    NASA Astrophysics Data System (ADS)

    Brümmer, C.; Moffat, A. M.; Huth, V.; Augustin, J.; Herbst, M.; Kutsch, W. L.

    2016-12-01

    Manual carbon dioxide flux measurements with closed chambers at scheduled campaigns are a versatile method to study management effects at small scales in multiple-plot experiments. The eddy covariance technique has the advantage of quasi-continuous measurements but requires large homogeneous areas of a few hectares. To evaluate the uncertainties associated with interpolating from individual campaigns to the whole vegetation period, we installed both techniques at an agricultural site in Northern Germany. The presented comparison covers two cropping seasons, winter oilseed rape in 2012/13 and winter wheat in 2013/14. Modeling half-hourly carbon fluxes from campaigns is commonly performed using non-linear regressions for the light response and respiration. The daily averages of net CO2 modeled from chamber data deviated from eddy covariance measurements in the range of ±5 g C m-2 day-1. To understand the observed differences and to disentangle the effects, we performed four additional setups (expert versus default settings of the non-linear-regression-based algorithm, purely empirical modeling with artificial neural networks versus non-linear regressions, cross-validating using eddy covariance measurements as campaign fluxes, and weekly versus monthly scheduling of campaigns) to model the half-hourly carbon fluxes for the whole vegetation period. The good agreement of the seasonal course of net CO2 at plot and field scale for our agricultural site demonstrates that both techniques are robust and yield consistent results at the seasonal time scale, even for a managed ecosystem with high temporal dynamics in the fluxes. This allows combining the respective advantages of factorial experiments at plot scale with dense time series data at field scale. Furthermore, the information from the quasi-continuous eddy covariance measurements can be used to derive vegetation proxies to support the interpolation of carbon fluxes in-between the manual chamber campaigns.
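
    The campaign-based flux modeling described above can be illustrated with a commonly used light-response parameterisation, here a rectangular hyperbola fitted with scipy (the functional form and the mock data are assumptions, not necessarily the study's exact algorithm):

    import numpy as np
    from scipy.optimize import curve_fit

    def light_response(par, alpha, gp_max, r_eco):
        """NEE = -GPP + Reco, with GPP saturating in PAR (umol CO2 m-2 s-1)."""
        return -(alpha * par * gp_max) / (alpha * par + gp_max) + r_eco

    par = np.linspace(0.0, 1800.0, 60)                  # umol photons m-2 s-1
    rng = np.random.default_rng(3)
    nee = light_response(par, 0.05, 30.0, 4.0) + rng.normal(0.0, 1.0, par.size)

    popt, _ = curve_fit(light_response, par, nee, p0=(0.03, 20.0, 2.0))
    print(popt)                                         # recovered (alpha, GPmax, Reco)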

  12. Non-local damage rheology and size effect

    NASA Astrophysics Data System (ADS)

    Lyakhovsky, V.

    2011-12-01

    We study scaling relations controlling the onset of transiently-accelerating fracturing and the transition to dynamic rupture propagation in a non-local damage rheology model. The size effect is caused principally by growth of a fracture process zone, involving stress redistribution and energy release associated with a large fracture. This implies that rupture nucleation and the transition to dynamic propagation are inherently scale-dependent processes. Linear elastic fracture mechanics (LEFM) and local damage mechanics are formulated in terms of dimensionless strain components and thus do not allow introducing any space scaling, except linear relations between fracture length and displacements. A generalization of Weibull theory provides scaling relations between stress and crack length at the onset of failure. A powerful extension of the LEFM formulation is the displacement-weakening model, which postulates that yielding is complete when the crack wall displacement exceeds some critical value or slip-weakening distance Dc at which the transition to kinetic friction is complete. Scaling relations controlling the transition to dynamic rupture propagation in the slip-weakening formulation are widely accepted in earthquake physics. Strong micro-crack interaction in a process zone may be accounted for by adopting either integral- or gradient-type non-local damage models. We formulate a gradient-type model with free energy depending on the scalar damage parameter and its spatial derivative. The damage-gradient term leads to structural stresses in the constitutive stress-strain relations and a damage diffusion term in the kinetic equation for damage evolution. The damage diffusion eliminates the singular localization predicted by local models. The finite width of the localization zone provides a fundamental length scale that allows numerical simulations with the model to achieve the continuum limit. A diffusive term in the damage evolution gives rise to an additional damage-diffusive time scale associated with the structural length scale. The ratio between the two time scales associated with damage accumulation and diffusion, the damage diffusivity ratio, reflects the role of diffusion-controlled delocalization. We demonstrate that localized fracturing occurs at damage diffusivity ratios below a certain critical value, leading to a linear scaling between stress and crack length compatible with the size effect for failures at crack initiation. A subsequent quasi-static fracture growth is self-similar, with the size of the process zone increasing in proportion to the fracture length. At a certain stage, controlled by dynamic weakening, the self-similarity breaks down and the crack velocity significantly deviates from that predicted by the quasi-static regime, the size of the process zone decreases, and the rate of crack growth ceases to be controlled by the rate of damage increase. Furthermore, the crack speed approaches that predicted by the elasto-dynamic equation. The non-local damage rheology model predicts that the nucleation size of the dynamic fracture scales with the fault zone thickness, the distance of stress interaction.

  13. Scaling Properties of Arctic Sea Ice Deformation in a High‐Resolution Viscous‐Plastic Sea Ice Model and in Satellite Observations

    PubMed Central

    Losch, Martin; Menemenlis, Dimitris

    2018-01-01

    Sea ice models with the traditional viscous‐plastic (VP) rheology and very small horizontal grid spacing can resolve leads and deformation rates localized along Linear Kinematic Features (LKF). In a 1 km pan‐Arctic sea ice‐ocean simulation, the small‐scale sea ice deformations are evaluated with a scaling analysis in relation to satellite observations of the Envisat Geophysical Processor System (EGPS) in the Central Arctic. A new coupled scaling analysis for data on Eulerian grids is used to determine the spatial and temporal scaling and the coupling between temporal and spatial scales. The spatial scaling of the modeled sea ice deformation implies multifractality. It is also coupled to temporal scales and varies realistically by region and season. The agreement of the spatial scaling with satellite observations challenges previous results with VP models at coarser resolution, which did not reproduce the observed scaling. The temporal scaling analysis shows that the VP model, as configured in this 1 km simulation, does not fully resolve the intermittency of sea ice deformation that is observed in satellite data. PMID:29576996

  14. Scaling Properties of Arctic Sea Ice Deformation in a High-Resolution Viscous-Plastic Sea Ice Model and in Satellite Observations

    NASA Astrophysics Data System (ADS)

    Hutter, Nils; Losch, Martin; Menemenlis, Dimitris

    2018-01-01

    Sea ice models with the traditional viscous-plastic (VP) rheology and very small horizontal grid spacing can resolve leads and deformation rates localized along Linear Kinematic Features (LKF). In a 1 km pan-Arctic sea ice-ocean simulation, the small-scale sea ice deformations are evaluated with a scaling analysis in relation to satellite observations of the Envisat Geophysical Processor System (EGPS) in the Central Arctic. A new coupled scaling analysis for data on Eulerian grids is used to determine the spatial and temporal scaling and the coupling between temporal and spatial scales. The spatial scaling of the modeled sea ice deformation implies multifractality. It is also coupled to temporal scales and varies realistically by region and season. The agreement of the spatial scaling with satellite observations challenges previous results with VP models at coarser resolution, which did not reproduce the observed scaling. The temporal scaling analysis shows that the VP model, as configured in this 1 km simulation, does not fully resolve the intermittency of sea ice deformation that is observed in satellite data.

  15. Scaling Properties of Arctic Sea Ice Deformation in a High-Resolution Viscous-Plastic Sea Ice Model and in Satellite Observations.

    PubMed

    Hutter, Nils; Losch, Martin; Menemenlis, Dimitris

    2018-01-01

    Sea ice models with the traditional viscous-plastic (VP) rheology and very small horizontal grid spacing can resolve leads and deformation rates localized along Linear Kinematic Features (LKF). In a 1 km pan-Arctic sea ice-ocean simulation, the small-scale sea ice deformations are evaluated with a scaling analysis in relation to satellite observations of the Envisat Geophysical Processor System (EGPS) in the Central Arctic. A new coupled scaling analysis for data on Eulerian grids is used to determine the spatial and temporal scaling and the coupling between temporal and spatial scales. The spatial scaling of the modeled sea ice deformation implies multifractality. It is also coupled to temporal scales and varies realistically by region and season. The agreement of the spatial scaling with satellite observations challenges previous results with VP models at coarser resolution, which did not reproduce the observed scaling. The temporal scaling analysis shows that the VP model, as configured in this 1 km simulation, does not fully resolve the intermittency of sea ice deformation that is observed in satellite data.

  16. CMB hemispherical asymmetry from non-linear isocurvature perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Assadullahi, Hooshyar; Wands, David; Firouzjahi, Hassan

    2015-04-01

    We investigate whether non-adiabatic perturbations from inflation could produce an asymmetric distribution of temperature anisotropies on large angular scales in the cosmic microwave background (CMB). We use a generalised non-linear δN formalism to calculate the non-Gaussianity of the primordial density and isocurvature perturbations due to the presence of non-adiabatic, but approximately scale-invariant, field fluctuations during multi-field inflation. This local-type non-Gaussianity leads to a correlation between very long wavelength inhomogeneities, larger than our observable horizon, and smaller scale fluctuations in the radiation and matter density. Matter isocurvature perturbations contribute primarily to low CMB multipoles and hence can lead to a hemispherical asymmetry on large angular scales, with negligible asymmetry on smaller scales. In curvaton models, where the matter isocurvature perturbation is partly correlated with the primordial density perturbation, we are unable to obtain a significant asymmetry on large angular scales while respecting current observational constraints on the observed quadrupole. However, in the axion model, where the matter isocurvature and primordial density perturbations are uncorrelated, we find it may be possible to obtain a significant asymmetry due to isocurvature modes on large angular scales. Such an isocurvature origin for the hemispherical asymmetry would naturally give rise to a distinctive asymmetry in the CMB polarisation on large scales.

  17. Linear time-varying models can reveal non-linear interactions of biomolecular regulatory networks using multiple time-series data.

    PubMed

    Kim, Jongrae; Bates, Declan G; Postlethwaite, Ian; Heslop-Harrison, Pat; Cho, Kwang-Hyun

    2008-05-15

    Inherent non-linearities in biomolecular interactions make the identification of network interactions difficult. One of the principal problems is that all methods based on the use of linear time-invariant models will have fundamental limitations in their capability to infer certain non-linear network interactions. Another difficulty is the multiplicity of possible solutions, since, for a given dataset, there may be many different possible networks which generate the same time-series expression profiles. A novel algorithm for the inference of biomolecular interaction networks from temporal expression data is presented. Linear time-varying models, which can represent a much wider class of time-series data than linear time-invariant models, are employed in the algorithm. From time-series expression profiles, the model parameters are identified by solving a non-linear optimization problem. In order to systematically reduce the set of possible solutions for the optimization problem, a filtering process is performed using a phase-portrait analysis with random numerical perturbations. The proposed approach has the advantages of not requiring the system to be in a stable steady state, of using time-series profiles which have been generated by a single experiment, and of allowing non-linear network interactions to be identified. The ability of the proposed algorithm to correctly infer network interactions is illustrated by its application to three examples: a non-linear model for cAMP oscillations in Dictyostelium discoideum, the cell-cycle data for Saccharomyces cerevisiae and a large-scale non-linear model of a group of synchronized Dictyostelium cells. The software used in this article is available from http://sbie.kaist.ac.kr/software

  18. Halo modelling in chameleon theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lombriser, Lucas; Koyama, Kazuya; Li, Baojiu, E-mail: lucas.lombriser@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk, E-mail: baojiu.li@durham.ac.uk

    2014-03-01

    We analyse modelling techniques for the large-scale structure formed in scalar-tensor theories of constant Brans-Dicke parameter which match the concordance model background expansion history and produce a chameleon suppression of the gravitational modification in high-density regions. Thereby, we use a mass- and environment-dependent chameleon spherical collapse model, the Sheth-Tormen halo mass function and linear halo bias, the Navarro-Frenk-White halo density profile, and the halo model. Furthermore, using the spherical collapse model, we extrapolate a chameleon mass-concentration scaling relation from a ΛCDM prescription calibrated to N-body simulations. We also provide constraints on the model parameters to ensure viability on local scales. We test our description of the halo mass function and nonlinear matter power spectrum against the respective observables extracted from large-volume and high-resolution N-body simulations in the limiting case of f(R) gravity, corresponding to a vanishing Brans-Dicke parameter. We find good agreement between the two; the halo model provides a good qualitative description of the shape of the relative enhancement of the f(R) matter power spectrum with respect to ΛCDM caused by the extra attractive gravitational force, but fails to recover the correct amplitude. Introducing an effective linear power spectrum in the computation of the two-halo term to account for an underestimation of the chameleon suppression at intermediate scales in our approach, we accurately reproduce the measurements from the N-body simulations.

  19. A Theoretical Basis for the Scaling Law of Broadband Shock Noise Intensity in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Kandula, Max

    2011-01-01

    A theoretical basis for the scaling of broadband shock noise intensity in supersonic jets was formulated considering linear shock-shear wave interaction. Modeling of broadband shock noise with the aid of shock-turbulence interaction, with special reference to linear theories, is briefly reviewed. A hypothesis has been postulated that the peak angle of incidence (closer to the critical angle) for the shear wave primarily governs the generation of sound in the interaction process, with the noise-generation contribution from off-peak incident angles being relatively unimportant. The proposed hypothesis satisfactorily explains the well-known scaling law for the broadband shock-associated noise in supersonic jets.

  20. Statistical downscaling of precipitation using long short-term memory recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Misra, Saptarshi; Sarkar, Sudeshna; Mitra, Pabitra

    2017-11-01

    Hydrological impacts of global climate change on the regional scale are generally assessed by downscaling large-scale climatic variables, simulated by General Circulation Models (GCMs), to regional, small-scale hydrometeorological variables like precipitation and temperature. In this study, we propose a new statistical downscaling model based on a Recurrent Neural Network with Long Short-Term Memory which captures the spatio-temporal dependencies in local rainfall. Previous studies have used several other methods, such as linear regression, quantile regression, kernel regression, beta regression, and artificial neural networks. Deep neural networks and recurrent neural networks have been shown to be highly promising in modeling complex and highly non-linear relationships between input and output variables in different domains, and hence we investigated their performance in the task of statistical downscaling. We have tested this model on two datasets: one on precipitation in the Mahanadi basin in India and the second on precipitation in the Campbell River basin in Canada. Our autoencoder-coupled long short-term memory recurrent neural network model performs the best compared to other existing methods on both datasets with respect to temporal cross-correlation, mean squared error, and capturing the extremes.
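
    A minimal sketch of an LSTM-based downscaling regressor (the architecture, window length and data are illustrative; the paper's autoencoder coupling is omitted): a window of large-scale GCM predictors is mapped to local daily rainfall.

    import numpy as np
    import tensorflow as tf

    T, F = 30, 8                                        # 30-day window, 8 GCM predictors
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(T, F)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1, activation="relu"),    # rainfall is non-negative
    ])
    model.compile(optimizer="adam", loss="mse")

    rng = np.random.default_rng(4)
    X = rng.normal(size=(500, T, F)).astype("float32")                          # mock predictor windows
    y = np.maximum(rng.normal(2.0, 1.0, size=(500, 1)), 0.0).astype("float32")  # mock rainfall
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)
    print(model.predict(X[:3], verbose=0).ravel())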

  1. Wavelet regression model in forecasting crude oil price

    NASA Astrophysics Data System (ADS)

    Hamid, Mohd Helmie; Shabri, Ani

    2017-05-01

    This study presents the performance of the wavelet multiple linear regression (WMLR) technique in daily crude oil forecasting. The WMLR model was developed by integrating the discrete wavelet transform (DWT) and the multiple linear regression (MLR) model. The original time series was decomposed into sub-time series with different scales by wavelet theory. Correlation analysis was conducted to assist in the selection of optimal decomposed components as inputs for the WMLR model. The daily WTI crude oil price series has been used in this study to test the prediction capability of the proposed model. The forecasting performance of the WMLR model was also compared with regular multiple linear regression (MLR), the autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedasticity (GARCH) models using root mean square error (RMSE) and mean absolute error (MAE). Based on the experimental results, it appears that the WMLR model performs better than the other forecasting techniques tested in this study.
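
    A hedged sketch of the WMLR idea: decompose the series with a DWT, reconstruct one sub-series per level, and regress the next-day price on the current components (the wavelet choice, lag structure and names are assumptions, not the paper's exact setup):

    import numpy as np
    import pywt
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(5)
    price = np.cumsum(rng.normal(0.0, 1.0, 512)) + 60.0    # mock daily price series

    coeffs = pywt.wavedec(price, "db4", level=3)           # [A3, D3, D2, D1]
    components = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        components.append(pywt.waverec(kept, "db4")[:len(price)])  # one sub-series per level

    X = np.column_stack([comp[:-1] for comp in components])   # components on day t
    y = price[1:]                                             # price on day t + 1
    print(LinearRegression().fit(X, y).score(X, y))           # in-sample R^2 of the fit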

  2. Relating Stellar Cycle Periods to Dynamo Calculations

    NASA Technical Reports Server (NTRS)

    Tobias, S. M.

    1998-01-01

    Stellar magnetic activity in slowly rotating stars is often cyclic, with the period of the magnetic cycle depending critically on the rotation rate and the convective turnover time of the star. Here we show that the interpretation of this law from dynamo models is not a simple task. It is demonstrated that the period is (unsurprisingly) sensitive to the precise type of non-linearity employed. Moreover, the calculation of the wave-speed of plane-wave solutions does not (as was previously supposed) give an indication of the magnetic period in a more realistic dynamo model, as the changes in length-scale of solutions are not easily captured by this approach. Progress can be made, however, by considering a realistic two-dimensional model in which the radial length-scale of waves is included. We show that it is possible in this case to derive a more robust relation between cycle period and dynamo number. For all the non-linearities considered in the most realistic model, the magnetic cycle period is a decreasing function of |D| (the amplitude of the dynamo number). However, discriminating between different non-linearities is difficult in this case and care must therefore be taken before advancing explanations for the magnetic periods of stars.

  3. Structural and electron diffraction scaling of twisted graphene bilayers

    NASA Astrophysics Data System (ADS)

    Zhang, Kuan; Tadmor, Ellad B.

    2018-03-01

    Multiscale simulations are used to study the structural relaxation in twisted graphene bilayers and the associated electron diffraction patterns. The initial twist forms an incommensurate moiré pattern that relaxes to a commensurate microstructure comprised of a repeating pattern of alternating low-energy AB and BA domains surrounding a high-energy AA domain. The simulations show that the relaxation mechanism involves a localized rotation and shrinking of the AA domains that scales in two regimes with the imposed twist. For small twisting angles, the localized rotation tends to a constant; for large twist, the rotation scales linearly with it. This behavior is tied to the inverse scaling of the moiré pattern size with twist angle and is explained theoretically using a linear elasticity model. The results are validated experimentally through a simulated electron diffraction analysis of the relaxed structures. A complex electron diffraction pattern involving the appearance of weak satellite peaks is predicted for the small twist regime. This new diffraction pattern is explained using an analytical model in which the relaxation kinematics are described as an exponentially-decaying (Gaussian) rotation field centered on the AA domains. Both the angle-dependent scaling and diffraction patterns are in quantitative agreement with experimental observations. A Matlab program for extracting the Gaussian model parameters accompanies this paper.

  4. Sea-ice deformation in a coupled ocean-sea-ice model and in satellite remote sensing data

    NASA Astrophysics Data System (ADS)

    Spreen, Gunnar; Kwok, Ron; Menemenlis, Dimitris; Nguyen, An T.

    2017-07-01

    A realistic representation of sea-ice deformation in models is important for accurate simulation of the sea-ice mass balance. Simulated sea-ice deformation from numerical simulations with 4.5, 9, and 18 km horizontal grid spacing and a viscous-plastic (VP) sea-ice rheology are compared with synthetic aperture radar (SAR) satellite observations (RGPS, RADARSAT Geophysical Processor System) for the time period 1996-2008. All three simulations can reproduce the large-scale ice deformation patterns, but small-scale sea-ice deformations and linear kinematic features (LKFs) are not adequately reproduced. The mean sea-ice total deformation rate is about 40 % lower in all model solutions than in the satellite observations, especially in the seasonal sea-ice zone. A decrease in model grid spacing, however, produces a higher density and more localized ice deformation features. The 4.5 km simulation produces some linear kinematic features, but not with the right frequency. The dependence on length scale and the probability density functions (PDFs) of absolute divergence and shear for all three model solutions show a power-law scaling behavior similar to RGPS observations, contrary to what was found in some previous studies. Overall, the 4.5 km simulation produces the most realistic divergence, vorticity, and shear when compared with RGPS data. This study provides an evaluation of high- and coarse-resolution viscous-plastic sea-ice simulations based on spatial distribution, time series, and power-law scaling metrics.

  5. Unveiling Galaxy Bias via the Halo Model, KiDS and GAMA

    NASA Astrophysics Data System (ADS)

    Dvornik, Andrej; Hoekstra, Henk; Kuijken, Konrad; Schneider, Peter; Amon, Alexandra; Nakajima, Reiko; Viola, Massimo; Choi, Ami; Erben, Thomas; Farrow, Daniel J.; Heymans, Catherine; Hildebrandt, Hendrik; Sifón, Cristóbal; Wang, Lingyu

    2018-06-01

    We measure the projected galaxy clustering and galaxy-galaxy lensing signals using the Galaxy And Mass Assembly (GAMA) survey and the Kilo-Degree Survey (KiDS) to study galaxy bias. We use the concept of non-linear and stochastic galaxy biasing in the framework of halo occupation statistics to constrain the parameters of the halo occupation statistics and to unveil the origin of galaxy biasing. The bias function Γgm(rp), where rp is the projected comoving separation, is evaluated using the analytical halo model, from which the scale dependence of Γgm(rp) and the origin of the non-linearity and stochasticity in halo occupation models can be inferred. Our observations unveil the physical reason for the non-linearity and stochasticity, further explored using hydrodynamical simulations, with the stochasticity mostly originating from the non-Poissonian behaviour of satellite galaxies in the dark matter haloes and their spatial distribution, which does not follow the spatial distribution of dark matter in the halo. The observed non-linearity is mostly due to the presence of the central galaxies, as was noted from previous theoretical work on the same topic. We also see that, overall, more massive galaxies reveal a stronger scale dependence, and out to a larger radius. Our results show that a wealth of information about galaxy bias is hidden in halo occupation models. These models should therefore be used to determine the influence of galaxy bias in cosmological studies.

  6. Gravitational field of static p -branes in linearized ghost-free gravity

    NASA Astrophysics Data System (ADS)

    Boos, Jens; Frolov, Valeri P.; Zelnikov, Andrei

    2018-04-01

    We study the gravitational field of static p-branes in D-dimensional Minkowski space in the framework of linearized ghost-free (GF) gravity. The concrete models of GF gravity we consider are parametrized by the nonlocal form factors exp(-□/μ2) and exp(□2/μ4), where μ-1 is the scale of nonlocality. We show that the singular behavior of the gravitational field of p-branes in general relativity is cured by short-range modifications introduced by the nonlocalities, and we derive exact expressions for the regularized gravitational fields, whose geometry can be written as a warped metric. For large distances compared to the scale of nonlocality, μr → ∞, our solutions approach those found in linearized general relativity.
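
    For orientation, a hedged illustration (the familiar point-source limit in four dimensions, quoted here rather than the paper's general p-brane expressions): for the form factor exp(-□/μ2), the linearized potential of a point mass m is known to be

    \phi(r) = -\frac{Gm}{r}\,\operatorname{erf}\!\left(\frac{\mu r}{2}\right)
      \xrightarrow{\;\mu r \to \infty\;} -\frac{Gm}{r},
    \qquad
    \phi(0) = -\frac{Gm\,\mu}{\sqrt{\pi}},

    so the 1/r divergence is smoothed below the nonlocality scale μ-1, remaining finite at the origin, while the large-distance limit recovers linearized general relativity.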

  7. A Lagrangian stochastic model to demonstrate multi-scale interactions between convection and land surface heterogeneity in the atmospheric boundary layer

    NASA Astrophysics Data System (ADS)

    Parsakhoo, Zahra; Shao, Yaping

    2017-04-01

    Near-surface turbulent mixing has a considerable effect on surface fluxes, cloud formation and convection in the atmospheric boundary layer (ABL). Its quantification is, however, a modeling and computational challenge, since the small eddies are not fully resolved in Eulerian models directly. We have developed a Lagrangian stochastic model to demonstrate multi-scale interactions between convection and land-surface heterogeneity in the atmospheric boundary layer, based on the Ito Stochastic Differential Equation (SDE) for air parcels (particles). Due to the complexity of the mixing in the ABL, we find that a linear Ito SDE cannot represent convection properly. Three strategies have been tested to solve the problem: (1) making the deterministic term in the Ito equation non-linear; (2) making the random term in the Ito equation fractional; and (3) modifying the Ito equation by including Levy flights. We focus on the third strategy and interpret mixing as an interaction between at least two stochastic processes with different Lagrangian time scales. Work is in progress to include collisions among particles with different characteristics and to apply the 3D model to real cases. One application of the model is emphasized: land-surface patterns are generated and then coupled with a Large Eddy Simulation (LES).
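
    The basic numerical building block of such a model, an Ito SDE for a particle velocity integrated with the Euler-Maruyama scheme, can be sketched as follows (a minimal sketch; the toy drift, noise amplitude and parameters are ours, not the authors' model):

    import numpy as np

    def euler_maruyama(drift, diffusion, x0, dt, n_steps, rng):
        """Integrate dX = drift(X) dt + diffusion(X) dW with Euler-Maruyama."""
        x = np.empty(n_steps + 1)
        x[0] = x0
        for i in range(n_steps):
            dw = rng.normal(0.0, np.sqrt(dt))             # Wiener increment
            x[i + 1] = x[i] + drift(x[i]) * dt + diffusion(x[i]) * dw
        return x

    tau, sigma = 50.0, 0.3                                # Lagrangian time scale (s), velocity scale (m/s)
    drift = lambda w: -w / tau                            # linear relaxation toward zero mean
    diffusion = lambda w: np.sqrt(2.0 * sigma**2 / tau)   # constant noise amplitude
    w = euler_maruyama(drift, diffusion, 0.0, 1.0, 3600, np.random.default_rng(0))
    print(w.std())                                        # approaches sigma for t >> tau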

  8. Modeling and Testing Dark Energy and Gravity with Galaxy Cluster Data

    NASA Astrophysics Data System (ADS)

    Rapetti, David; Cataneo, Matteo; Heneka, Caroline; Mantz, Adam; Allen, Steven W.; Von Der Linden, Anja; Schmidt, Fabian; Lombriser, Lucas; Li, Baojiu; Applegate, Douglas; Kelly, Patrick; Morris, Glenn

    2018-06-01

    The abundance of galaxy clusters is a powerful probe for constraining the properties of dark energy and gravity at large scales. We employed a self-consistent analysis that includes survey data, observable-mass scaling relations and weak gravitational lensing data to obtain constraints on f(R) gravity, which are an order of magnitude tighter than the best previously achieved, as well as on cold dark energy of negligible sound speed. The latter implies clustering of the dark energy fluid at all scales, allowing us to measure the effects of dark energy perturbations at cluster scales. For this study, we recalibrated the halo mass function using the following non-linear characteristic quantities: the spherical collapse threshold, the virial overdensity and an additional mass contribution for cold dark energy. We also presented a new modeling of the f(R) gravity halo mass function that incorporates novel corrections to capture key non-linear effects of the chameleon screening mechanism, as found in high-resolution N-body simulations. All these results permit us to predict, as I will also exemplify, and eventually to obtain the next generation of cluster constraints on such models, and they provide frameworks that can also be applied to other proposed dark energy and modified gravity models using cluster abundance observations.

  9. Changing Mental Representations Using Related Physical Models: The Effects of Analyzing Number Lines on Learner Internal Scale of Numerical Magnitude

    ERIC Educational Resources Information Center

    Bengtson, Barbara J.

    2013-01-01

    Understanding the linear relationship of numbers is essential for doing practical and abstract mathematics throughout education and everyday life. There is evidence that number line activities increase learners' number sense, improving the linearity of mental number line representations (Siegler & Ramani, 2009). Mental representations of…

  10. Testing the consistency of three-point halo clustering in Fourier and configuration space

    NASA Astrophysics Data System (ADS)

    Hoffmann, K.; Gaztañaga, E.; Scoccimarro, R.; Crocce, M.

    2018-05-01

    We compare reduced three-point correlations Q of matter, haloes (as proxies for galaxies) and their cross-correlations, measured in a total simulated volume of ˜100 (h-1 Gpc)3, to predictions from leading-order perturbation theory on a large range of scales in configuration space. Predictions for haloes are based on the non-local bias model, employing linear (b1) and non-linear (c2, g2) bias parameters, which have been constrained previously from the bispectrum in Fourier space. We also study predictions from two other bias models, one local (g2 = 0) and one in which c2 and g2 are determined by b1 via approximately universal relations. Overall, measurements and predictions agree when Q is derived for triangles with (r1r2r3)1/3 ≳ 60 h-1 Mpc, where r1, r2 and r3 are the sizes of the triangle legs. Predictions for Qmatter, based on the linear power spectrum, show significant deviations from the measurements at the BAO scale (given our small measurement errors), which strongly decrease when adding a damping term or using the non-linear power spectrum, as expected. Predictions for Qhalo agree best with measurements at large scales when considering non-local contributions. The universal bias model works well for haloes and might therefore also be useful for tightening constraints on b1 from Q in galaxy surveys. Such constraints are independent of the amplitude of matter density fluctuations (σ8) and hence break the degeneracy between b1 and σ8 present in galaxy two-point correlations.

  11. The ABC model: a non-hydrostatic toy model for use in convective-scale data assimilation investigations

    NASA Astrophysics Data System (ADS)

    Petrie, Ruth Elizabeth; Bannister, Ross Noel; Cullen, Michael John Priestley

    2017-12-01

    In developing methods for convective-scale data assimilation (DA), it is necessary to consider the full range of motions governed by the compressible Navier-Stokes equations (including non-hydrostatic and ageostrophic flow). These equations describe motion on a wide range of timescales with non-linear coupling. For the purpose of developing new DA techniques that suit the convective-scale problem, it is helpful to use so-called toy models that are easy to run and contain the same types of motion as the full equation set. Such a model needs to permit hydrostatic and geostrophic balance at large scales but allow imbalance at small scales, and in particular, it needs to exhibit intermittent convection-like behaviour. Existing toy models are not always sufficient for investigating these issues. A simplified system of intermediate complexity derived from the Euler equations is presented, which supports dispersive gravity and acoustic modes. In this system, the separation of timescales can be greatly reduced by changing the physical parameters. Unlike in existing toy models, this allows the acoustic modes to be treated explicitly and hence inexpensively. In addition, the non-linear coupling induced by the equation of state is simplified. This means that the gravity and acoustic modes are less coupled than in conventional models. A vertical slice formulation is used which contains only dry dynamics. The model is shown to give physically reasonable results, and convective behaviour is generated by localised compressible effects. This model provides an affordable and flexible framework within which some of the complex issues of convective-scale DA can later be investigated. The model is called the ABC model after the three tunable parameters introduced: A (the pure gravity wave frequency), B (the modulation of the divergent term in the continuity equation), and C (defining the compressibility).

  12. Estimating cosmic velocity fields from density fields and tidal tensors

    NASA Astrophysics Data System (ADS)

    Kitaura, Francisco-Shu; Angulo, Raul E.; Hoffman, Yehuda; Gottlöber, Stefan

    2012-10-01

    In this work we investigate the non-linear and non-local relation between cosmological density and peculiar velocity fields. Our goal is to provide an algorithm for the reconstruction of the non-linear velocity field from the fully non-linear density. We find that including the gravitational tidal field tensor using second-order Lagrangian perturbation theory, based upon an estimate of the linear component of the non-linear density field, significantly improves the estimate of the cosmic flow in comparison to linear theory, not only in the low-density but also, and more dramatically, in the high-density regions. In particular we test two estimates of the linear component: the lognormal model and the iterative Lagrangian linearization. The present approach relies on a rigorous higher-order Lagrangian perturbation theory analysis which incorporates a non-local relation. It does not require additional fitting from simulations, being in this sense parameter-free; it is independent of statistical-geometrical optimization; and it is straightforward and efficient to compute. The method is demonstrated to yield an unbiased estimator of the velocity field on scales ≳5 h-1 Mpc with closely Gaussian distributed errors. Moreover, the statistics of the divergence of the peculiar velocity field are extremely well recovered, showing good agreement with the true one from N-body simulations. The typical errors of about 10 km s-1 (1σ confidence intervals) are reduced by more than 80 per cent with respect to linear theory in the scale range between 5 and 10 h-1 Mpc in high-density regions (δ > 2). We also find that the iterative Lagrangian linearization is significantly superior to the lognormal model in the low-density regime.

  13. Massively parallel and linear-scaling algorithm for second-order Moller–Plesset perturbation theory applied to the study of supramolecular wires

    DOE PAGES

    Kjaergaard, Thomas; Baudin, Pablo; Bykov, Dmytro; ...

    2016-11-16

    Here, we present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide–Expand–Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide–Expand–Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI+X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the "resolution of the identity second-order Moller–Plesset perturbation theory" (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.

  14. Non-linear modeling of RF in fusion grade plasmas

    NASA Astrophysics Data System (ADS)

    Austin, Travis; Smithe, David; Hakim, Ammar; Jenkins, Thomas

    2011-10-01

    We are seeking to model nonlinear effects, particularly the parametric decay instability in the vicinity of the edge plasma and RF launchers, which is thought to be a potential parasitic loss mechanism. We will use time-domain approaches which treat the full spectrum of modes. Two approaches are being tested for feasibility: a non-linear delta-f particle approach, and a higher-order many-fluid closure approach. Our particle approach builds on extensive previous work demonstrating the ability to model IBW waves (one of the PDI daughter waves) with a linear delta-f particle model. Here we report on the performance of such simulations when the linear constraint is relaxed, and in particular on the ability of the low-noise loading scheme, specially developed for RF and ion-time-scale physics, to operate and maintain low noise in the non-linear regime. Similarly, a novel high-order closure of the fluid equations is necessary to model the IBW and higher harmonics. We will report on the benchmarking of the fluid closure and its ability to model the anticipated pump and daughter waves in a PDI scenario. This research is supported by US DOE Grant # DE-SC0006242.

  15. A model for managing sources of groundwater pollution

    USGS Publications Warehouse

    Gorelick, Steven M.

    1982-01-01

    The waste disposal capacity of a groundwater system can be maximized while maintaining water quality at specified locations by using a groundwater pollutant source management model that is based upon linear programming and numerical simulation. The decision variables of the management model are solute waste disposal rates at various facilities distributed over space. A concentration response matrix is used in the management model to describe transient solute transport and is developed using the U.S. Geological Survey solute transport simulation model. The management model was applied to a complex hypothetical groundwater system. Large-scale management models were formulated as dual linear programming problems to reduce numerical difficulties and computation time. The linear programming problems were solved using a numerically stable, available code. Optimal solutions to problems with successively longer management time horizons indicated that disposal schedules at some sites are relatively independent of the number of disposal periods. Optimal waste disposal schedules exhibited pulsing rather than constant disposal rates. Sensitivity analysis using parametric linear programming showed that a sharp reduction in total waste disposal potential occurs if disposal rates at any site are increased beyond their optimal values.
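
    The structure of such a management model can be sketched as a small linear program (the response matrix and limits below are toy numbers, not the study's): maximize total disposal subject to water-quality constraints imposed through the response matrix.

    import numpy as np
    from scipy.optimize import linprog

    R = np.array([[0.8, 0.2, 0.1],       # response matrix: concentration increase at
                  [0.3, 0.6, 0.2],       # observation point i per unit disposal rate
                  [0.1, 0.3, 0.7]])      # at facility j (units folded in)
    c_max = np.array([10.0, 8.0, 12.0])  # water-quality limits at the observation points

    # linprog minimizes, so negate the objective to maximize total disposal sum(q).
    res = linprog(c=-np.ones(3), A_ub=R, b_ub=c_max, bounds=[(0.0, None)] * 3)
    print(res.x, -res.fun)               # optimal facility rates and total capacity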

  16. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape

    PubMed Central

    Coupé, Christophe

    2018-01-01

    As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is, however, being made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we assess a range of candidate distributions, including the Sichel, Delaporte, Box-Cox Green and Cole, and Box-Cox t distributions. We find that the Box-Cox t distribution, with appropriate modeling of its parameters, best fits the conditional distribution of phonemic inventory size. We finally discuss the specificities of phoneme counts, weak effects, and how GAMLSS should be considered for other linguistic variables. PMID:29713298

  17. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape.

    PubMed

    Coupé, Christophe

    2018-01-01

    As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is, however, being made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we assess a range of candidate distributions, including the Sichel, Delaporte, Box-Cox Green and Cole, and Box-Cox t distributions. We find that the Box-Cox t distribution, with appropriate modeling of its parameters, best fits the conditional distribution of phonemic inventory size. We finally discuss the specificities of phoneme counts, weak effects, and how GAMLSS should be considered for other linguistic variables.

  18. Generalized scaling relationships on transition metals: Influence of adsorbate-coadsorbate interactions

    NASA Astrophysics Data System (ADS)

    Majumdar, Paulami; Greeley, Jeffrey

    2018-04-01

    Linear scaling relations of adsorbate energies across a range of catalytic surfaces have emerged as a central interpretive paradigm in heterogeneous catalysis. They are, however, typically developed for low adsorbate coverages which are not always representative of realistic heterogeneous catalytic environments. Herein, we present generalized linear scaling relations on transition metals that explicitly consider adsorbate-coadsorbate interactions at variable coverages. The slopes of these scaling relations do not follow the simple bond counting principles that govern scaling on transition metals at lower coverages. The deviations from bond counting are explained using a pairwise interaction model wherein the interaction parameter determines the slope of the scaling relationship on a given metal at variable coadsorbate coverages, and the slope across different metals at fixed coadsorbate coverage is approximated by adding a coverage-dependent correction to the standard bond counting contribution. The analysis provides a compact explanation for coverage-dependent deviations from bond counting in scaling relationships and suggests a useful strategy for incorporation of coverage effects into catalytic trends studies.

  19. Linear Relationship between Resilience, Learning Approaches, and Coping Strategies to Predict Achievement in Undergraduate Students

    PubMed Central

    de la Fuente, Jesús; Fernández-Cabezas, María; Cambil, Matilde; Vera, Manuel M.; González-Torres, Maria Carmen; Artuch-Garde, Raquel

    2017-01-01

    The aim of the present research was to analyze the linear relationship between resilience (a meta-motivational variable), learning approaches (meta-cognitive variables), strategies for coping with academic stress (a meta-emotional variable) and academic achievement in the context of university academic stress. A total of 656 students from a southern university in Spain completed different questionnaires: a resiliency scale, a coping strategies scale, and a study process questionnaire. Correlations and structural modeling were used for data analyses. There was a positive and significant linear association in which resilience predicted the deep learning approach and problem-centered coping strategies. In a complementary way, these variables positively and significantly predicted the academic achievement of university students. These results established a consistent and differential linear relationship of association and prediction among the variables studied. Implications for future research are set out. PMID:28713298

  20. Investigation of the effects of external current systems on the MAGSAT data utilizing grid cell modeling techniques

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M. (Principal Investigator)

    1981-01-01

    Progress is reported in reading MAGSAT tapes and in the modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere. The modeling technique utilizes a linear current element representation of the large-scale space-current system.
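    The original MAGSAT code is not available here; a minimal sketch of the linear-current-element idea sums Biot-Savart contributions of straight segments at the satellite position (geometry and current values are illustrative):

```python
# Field of a polyline current approximated by summing Biot-Savart
# contributions of its straight segments at the observation point.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def field_of_segments(nodes, current, r_obs):
    """Sum Biot-Savart contributions of the polyline segments in `nodes`."""
    B = np.zeros(3)
    for a, b in zip(nodes[:-1], nodes[1:]):
        dl = b - a                       # segment vector
        mid = 0.5 * (a + b)              # evaluate at the segment midpoint
        r = r_obs - mid
        dist = np.linalg.norm(r)
        B += MU0 / (4 * np.pi) * current * np.cross(dl, r) / dist**3
    return B

# Illustrative east-west line current of 100 kA at 110 km altitude,
# observed by a satellite at 400 km altitude directly above its center.
nodes = np.stack([np.linspace(-2e6, 2e6, 401),
                  np.zeros(401),
                  np.full(401, 110e3)], axis=1)
B = field_of_segments(nodes, 1e5, np.array([0.0, 0.0, 400e3]))
print("B at orbit (nT):", B * 1e9)
```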

  1. Tests of peak flow scaling in simulated self-similar river networks

    USGS Publications Warehouse

    Menabde, M.; Veitzer, S.; Gupta, V.; Sivapalan, M.

    2001-01-01

    The effect of linear flow routing incorporating attenuation and network topology on peak flow scaling exponent is investigated for an instantaneously applied uniform runoff on simulated deterministic and random self-similar channel networks. The flow routing is modelled by a linear mass conservation equation for a discrete set of channel links connected in parallel and series, and having the same topology as the channel network. A quasi-analytical solution for the unit hydrograph is obtained in terms of recursion relations. The analysis of this solution shows that the peak flow has an asymptotically scaling dependence on the drainage area for deterministic Mandelbrot-Vicsek (MV) and Peano networks, as well as for a subclass of random self-similar channel networks. However, the scaling exponent is shown to be different from that predicted by the scaling properties of the maxima of the width functions. © 2001 Elsevier Science Ltd. All rights reserved.
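    The paper's quasi-analytical recursion for the network unit hydrograph is not reproduced here; as a toy illustration of linear routing with attenuation (assumed parameters), the sketch below passes an instantaneous unit input through a chain of equal linear reservoirs and records how the peak outflow attenuates with chain length:

```python
# Peak outflow of a unit impulse routed through n equal linear reservoirs
# in series; the peak decays as the chain (i.e., the routed path) lengthens.
import numpy as np

def cascade_peak(n_reservoirs, k=1.0, dt=0.001, t_max=30.0):
    t = np.arange(dt, t_max, dt)
    storages = np.zeros(n_reservoirs)
    storages[0] = 1.0                      # instantaneous unit input at the top
    peak = 0.0
    for _ in t:
        outflow = storages / k             # linear reservoir: Q = S / k
        storages -= outflow * dt
        storages[1:] += outflow[:-1] * dt  # each reservoir feeds the next
        peak = max(peak, outflow[-1])
    return peak

for n in (1, 2, 4, 8):
    print(n, f"{cascade_peak(n):.4f}")
```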

  2. f(R) gravity on non-linear scales: the post-Friedmann expansion and the vector potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, D.B.; Bruni, M.; Koyama, K.

    2015-07-01

    Many modified gravity theories are under consideration in cosmology as the source of the accelerated expansion of the universe, and linear perturbation theory, valid on the largest scales, has been examined in many of these models. However, smaller non-linear scales offer a richer phenomenology with which to constrain modified gravity theories. Here, we consider the Hu-Sawicki form of f(R) gravity and apply the post-Friedmann approach to derive the leading order equations for non-linear scales, i.e. the equations valid in the Newtonian-like regime. We reproduce the standard equations for the scalar field, gravitational slip and the modified Poisson equation in a coherent framework. In addition, we derive the equation for the leading order correction to the Newtonian regime, the vector potential. We measure this vector potential from f(R) N-body simulations at redshift zero and one, for two values of the f_R0 parameter. We find that the vector potential at redshift zero in f(R) gravity can be close to 50% larger than in GR on small scales for |f_R0| = 1.289 × 10^-5, although the difference is smaller for larger scales, earlier times and smaller values of the f_R0 parameter. As in GR, the small amplitude of this vector potential suggests that the Newtonian approximation is highly accurate for f(R) gravity, and also that the non-linear cosmological behaviour of f(R) gravity can be completely described by just the scalar potentials and the f(R) field.

  3. Reconstruction of real-space linear matter power spectrum from multipoles of BOSS DR12 results

    NASA Astrophysics Data System (ADS)

    Lee, Seokcheon

    2018-02-01

    Recently, the power spectrum (PS) multipoles of the Baryon Oscillation Spectroscopic Survey (BOSS) Data Release 12 (DR12) sample were analyzed [1]. The analysis is based on the so-called TNS quasi-linear model and provides the multipoles up to the hexadecapole [2]. Thus, one might be able to recover the real-space linear matter PS by using combinations of multipoles to investigate the cosmology [3]. We provide the analytic form of the ratios of the quadrupole (hexadecapole) to the monopole moments of the quasi-linear PS, including the Fingers-of-God (FoG) effect, in order to recover the real-space PS in the linear regime. One expects observed values of these ratios to be consistent with those of linear theory at large scales. Thus, we compare the ratios of multipoles of the linear theory, including the FoG effect, with the measured values. From these, we recover the linear matter power spectra in real space and find them consistent with the linear-theory predictions.
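    In the pure linear (Kaiser) limit, i.e. dropping the FoG damping the paper includes, the quadrupole- and hexadecapole-to-monopole ratios depend only on beta = f/b:

```python
# Linear-theory (Kaiser) multipole-to-monopole ratios, without FoG damping.
def kaiser_ratios(beta):
    p0 = 1.0 + (2.0 / 3.0) * beta + (1.0 / 5.0) * beta**2   # monopole factor
    p2 = (4.0 / 3.0) * beta + (4.0 / 7.0) * beta**2         # quadrupole factor
    p4 = (8.0 / 35.0) * beta**2                             # hexadecapole factor
    return p2 / p0, p4 / p0

for beta in (0.3, 0.4, 0.5):
    q, h = kaiser_ratios(beta)
    print(f"beta={beta}: P2/P0={q:.3f}, P4/P0={h:.3f}")
```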

  4. Efficient parallel simulation of CO2 geologic sequestration in saline aquifers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Keni; Doughty, Christine; Wu, Yu-Shu

    2007-01-01

    An efficient parallel simulator for large-scale, long-term CO2 geologic sequestration in saline aquifers has been developed. The parallel simulator is a three-dimensional, fully implicit model that solves large, sparse linear systems arising from discretization of the partial differential equations for mass and energy balance in porous and fractured media. The simulator is based on the ECO2N module of the TOUGH2 code and inherits all the process capabilities of the single-CPU TOUGH2 code, including a comprehensive description of the thermodynamics and thermophysical properties of H2O-NaCl-CO2 mixtures, modeling single and/or two-phase isothermal or non-isothermal flow processes, two-phase mixtures, fluid phases appearing or disappearing, as well as salt precipitation or dissolution. The new parallel simulator uses MPI for parallel implementation, the METIS software package for simulation domain partitioning, and the iterative parallel linear solver package Aztec for solving linear equations by multiple processors. In addition, the parallel simulator has been implemented with an efficient communication scheme. Test examples show that a linear or super-linear speedup can be obtained on Linux clusters as well as on supercomputers. Because of the significant improvement in both simulation time and memory requirement, the new simulator provides a powerful tool for tackling larger scale and more complex problems than can be solved by single-CPU codes. A high-resolution simulation example is presented that models buoyant convection, induced by a small increase in brine density caused by dissolution of CO2.

  5. Temporal Stability of Soil Moisture and Radar Backscatter Observed by the Advanced Synthetic Aperture Radar (ASAR)

    PubMed Central

    Wagner, Wolfgang; Pathe, Carsten; Doubkova, Marcela; Sabel, Daniel; Bartsch, Annett; Hasenauer, Stefan; Blöschl, Günter; Scipal, Klaus; Martínez-Fernández, José; Löw, Alexander

    2008-01-01

    The high spatio-temporal variability of soil moisture is the result of atmospheric forcing and redistribution processes related to terrain, soil, and vegetation characteristics. Despite this high variability, many field studies have shown that in the temporal domain soil moisture measured at specific locations is correlated to the mean soil moisture content over an area. Since the measurements taken by Synthetic Aperture Radar (SAR) instruments are very sensitive to soil moisture it is hypothesized that the temporally stable soil moisture patterns are reflected in the radar backscatter measurements. To verify this hypothesis 73 Wide Swath (WS) images have been acquired by the ENVISAT Advanced Synthetic Aperture Radar (ASAR) over the REMEDHUS soil moisture network located in the Duero basin, Spain. It is found that a time-invariant linear relationship is well suited for relating local scale (pixel) and regional scale (50 km) backscatter. The observed linear model coefficients can be estimated by considering the scattering properties of the terrain and vegetation and the soil moisture scaling properties. For both linear model coefficients, the relative error between observed and modelled values is less than 5% and the coefficient of determination (R²) is 86%. The results are of relevance for interpreting and downscaling coarse resolution soil moisture data retrieved from active (METOP ASCAT) and passive (SMOS, AMSR-E) instruments. PMID:27879759
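    A minimal sketch of the core statistical step, fitting a time-invariant linear relation between local (pixel) and regional-scale backscatter and reporting R²; the data below are synthetic, not REMEDHUS/ASAR measurements:

```python
# Least-squares fit of local = a * regional + b, plus R^2, on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
n_times = 73                                  # e.g., one value per acquisition
regional = -12.0 + 3.0 * rng.random(n_times)  # regional-scale backscatter (dB)
a_true, b_true = 1.1, 2.5
local = a_true * regional + b_true + 0.3 * rng.normal(size=n_times)

a, b = np.polyfit(regional, local, 1)
pred = a * regional + b
r2 = 1.0 - np.sum((local - pred) ** 2) / np.sum((local - local.mean()) ** 2)
print(f"a={a:.2f}, b={b:.2f}, R^2={r2:.2f}")
```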

  6. Infragravity waves on fringing reefs in the tropical Pacific: Dynamic setup

    NASA Astrophysics Data System (ADS)

    Becker, J. M.; Merrifield, M. A.; Yoon, H.

    2016-05-01

    Cross-shore pressure and current observations from four fringing reefs of lengths ranging from 135 to 420 m reveal energetic low-frequency (˜0.001-0.05 Hz) motions. The spatial structure and temporal amplitudes of an empirical orthogonal function analysis of the pressure measurements suggest the dominant low-frequency variability is modal. Incoming and outgoing linear flux estimates also support partially standing modes on the reef flat during energetic events. A cross-covariance analysis suggests that breakpoint forcing excites these partially standing modes, similar to previous findings at other steep reefs. The dynamics of Symonds et al. (1982) with damping are applied to a step reef, with forcing obtained by extending a point break model of Vetter et al. (2010) for breaking wave setup to the low-frequency band using the shoaled envelope of the incident free surface elevation. A one parameter, linear analytical model for the reef flat free surface elevation is presented, which describes between 75% and 97% of the variance of the observed low-frequency shoreline significant wave height for all reefs considered over a range of conditions. The linear model contains a single dimensionless parameter that is the ratio of the inertial to dissipative time scales, and the observations from this study exhibit more low-frequency variability when the dissipative time scale is greater than the inertial time scale for the steep reefs considered.

  7. Quantitative genetic properties of four measures of deformity in yellowtail kingfish Seriola lalandi Valenciennes, 1833.

    PubMed

    Nguyen, N H; Whatmore, P; Miller, A; Knibb, W

    2016-02-01

    The main aim of this study was to estimate the heritability of four measures of deformity and their genetic associations with growth (body weight and length), carcass (fillet weight and yield) and flesh-quality (fillet fat content) traits in yellowtail kingfish Seriola lalandi. The observed major deformities, recorded on 480 individuals from 22 families at Clean Seas Tuna Ltd, included lower jaw deformity, nasal erosion, deformed operculum and skinny fish. They were typically recorded as binary traits (presence or absence) and were analysed separately by both threshold generalized models and standard animal mixed models. Consistency of the models was evaluated by calculating simple Pearson correlations of breeding values of full-sib families for jaw deformity. Genetic and phenotypic correlations among traits were estimated using a multitrait linear mixed model in ASReml. Both threshold and linear mixed model analyses showed that there is additive genetic variation in the four measures of deformity, with heritability estimates from the former (threshold) models on the liability scale ranging from 0.14 to 0.66 (SE 0.32-0.56) and from the latter (linear animal and sire) models on the original (observed) scale from 0.01 to 0.23 (SE 0.03-0.16). When the estimates on the underlying liability were transformed to the observed (0, 1) scale, they were generally consistent between threshold and linear mixed models. Phenotypic correlations among deformity traits were weak (close to zero). The genetic correlations among deformity traits were not significantly different from zero. Body weight and fillet weight showed significant positive genetic correlations with jaw deformity (0.75 and 0.95, respectively). The genetic correlation between body weight and operculum deformity was negative (-0.51, P < 0.05). Estimates of genetic correlations between body and carcass traits and the other deformity measures were not significant due to their relatively high standard errors. Our results show that there are prospects for genetic selection to reduce deformity in yellowtail kingfish, and that measures of deformity should be included in the recording scheme, breeding objectives and selection index of practical selective breeding programmes due to the antagonistic genetic correlations of deformed jaws with body and carcass performance. © 2015 John Wiley & Sons Ltd.
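    The liability-to-observed-scale transformation mentioned above is not spelled out in the abstract; the standard Dempster-Lerner formula is one common choice, sketched here with illustrative inputs:

```python
# Dempster-Lerner transformation: heritability on the observed (0/1) scale
# from heritability on the liability scale, given the trait prevalence.
from scipy.stats import norm

def liability_to_observed(h2_liab, prevalence):
    threshold = norm.ppf(1.0 - prevalence)   # truncation point on the liability
    z = norm.pdf(threshold)                  # density height at the threshold
    return h2_liab * z**2 / (prevalence * (1.0 - prevalence))

# e.g., a liability-scale estimate of 0.40 for a deformity seen in 10% of fish
print(f"{liability_to_observed(0.40, 0.10):.3f}")
```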

  8. Design and analysis of tubular permanent magnet linear generator for small-scale wave energy converter

    NASA Astrophysics Data System (ADS)

    Kim, Jeong-Man; Koo, Min-Mo; Jeong, Jae-Hoon; Hong, Keyyong; Cho, Il-Hyoung; Choi, Jang-Young

    2017-05-01

    This paper reports the design and analysis of a tubular permanent magnet linear generator (TPMLG) for a small-scale wave-energy converter. The analytical field computation is performed by applying a magnetic vector potential and a 2-D analytical model to determine design parameters. Based on analytical solutions, parametric analysis is performed to meet the design specifications of a wave-energy converter (WEC). Then, 2-D FEA is employed to validate the analytical method. Finally, the experimental result confirms the predictions of the analytical and finite element analysis (FEA) methods under regular and irregular wave conditions.

  9. The relations between conscientiousness and mental health in a North-European and a West-Asian culture.

    PubMed

    Farahani, Mohammad-Naghy; Kormi-Nouri, Reza; De Raad, Boele

    2017-07-04

    The relationship between conscientiousness, mental health and mental illness has been an issue for the last two decades. Using a dual model of mental health, the present study examined a non-linear relationship between conscientiousness and healthy or non-healthy symptoms in two different cultures. Participants in this study were 296 Iranian and 310 Swedish university students (18-24 years of age). We used two different conscientiousness scales: the 12-item conscientiousness subscale of the NEO/FFI as an imported (etic) scale, and a 10-item Iranian conscientiousness scale as an indigenous (emic) and culture-dependent scale. For both conscientiousness scales, multivariate analysis of variance showed that conscientiousness differentiated among four mental health groups (languishing, troubled, symptomatic and flourishing), with languishing and troubled individuals being less conscientious than flourishing and symptomatic individuals. Furthermore, the non-healthy symptomatic individuals were more conscientious than flourishing individuals. The results showed no significant differences between the two cultures in terms of the four mental health categories. It was concluded that the relationship between conscientiousness and mental health/mental illness is more a non-linear relationship than a linear one.

  10. Scaling properties of ballistic nano-transistors

    PubMed Central

    2011-01-01

    Recently, we have suggested a scale-invariant model for a nano-transistor. In agreement with experiments a close-to-linear threshold trace was found in the calculated ID-VD traces separating the regimes of classically allowed transport and tunneling transport. In this conference contribution, the relevant physical quantities in our model and its range of applicability are discussed in more detail. Extending the temperature range of our studies it is shown that a close-to-linear threshold trace results at room temperatures as well. In qualitative agreement with the experiments the ID-VG traces for small drain voltages show thermally activated transport below the threshold gate voltage. In contrast, at large drain voltages the gate-voltage dependence is weaker. As can be expected in our relatively simple model, the theoretical drain current is larger than the experimental one by a little less than a decade. PMID:21711899

  11. Testing Linear Temporal Logic Formulae on Finite Execution Traces

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Rosu, Grigore; Norvig, Peter (Technical Monitor)

    2001-01-01

    We present an algorithm for efficiently testing Linear Temporal Logic (LTL) formulae on finite execution traces. The standard models of LTL are infinite traces, reflecting the behavior of reactive and concurrent systems which conceptually may be continuously alive. In most past applications of LTL, theorem provers and model checkers have been used to formally prove that down-scaled models satisfy such LTL specifications. Our goal is instead to use LTL for up-scaled testing of real software applications. Such tests correspond to analyzing the conformance of finite traces against LTL formulae. We first describe what it means for a finite trace to satisfy an LTL property. We then suggest an optimized algorithm based on transforming LTL formulae. The work is done using the Maude rewriting system, which turns out to provide a perfect notation and an efficient rewriting engine for performing these experiments.
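    A toy recursive evaluator for a few LTL operators under finite-trace semantics (a sketch in the spirit of the paper; the actual algorithm uses formula transformation in Maude):

```python
# Evaluate an LTL formula over trace[i:]; a trace is a list of sets of
# atomic propositions, and formulas are nested tuples.
def holds(formula, trace, i=0):
    op = formula[0]
    if op == "atom":
        return i < len(trace) and formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "X":                            # next (false past the end)
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == "F":                            # eventually
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "G":                            # globally (vacuously true at the end)
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "U":                            # until
        return any(holds(formula[2], trace, j) and
                   all(holds(formula[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

trace = [{"req"}, {"req"}, {"ack"}]
print(holds(("U", ("atom", "req"), ("atom", "ack")), trace))  # True
```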

  12. Expectation propagation for large scale Bayesian inference of non-linear molecular networks from perturbation data.

    PubMed

    Narimani, Zahra; Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger

    2017-01-01

    Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity and thus make large scale inference infeasible. This is specifically true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods.

  13. The Programming Language Python In Earth System Simulations

    NASA Astrophysics Data System (ADS)

    Gross, L.; Imranullah, A.; Mora, P.; Saez, E.; Smillie, J.; Wang, C.

    2004-12-01

    Mathematical models in earth sciences are based on the solution of systems of coupled, non-linear, time-dependent partial differential equations (PDEs). The spatial and temporal scales vary from planetary scale and millions of years for convection problems to 100 km and 10 years for fault-system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicholson), with the non-linearity (e.g. Newton-Raphson) and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM) and the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in earth sciences. Its main objective is to provide a programming language in which the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open, and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). Key concepts introduced are Data objects, which hold values on nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we show the basic concepts of escript and how escript is used to implement a simulation code for interacting fault systems. We show some results of large-scale, parallel simulations on an SGI Altix system. Acknowledgements: Project work is supported by the Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, the Queensland State Government Smart State Research Facility Fund, The University of Queensland and SGI.

  14. Physics with e+e- Linear Colliders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barklow, Timothy L

    2003-05-05

    We describe the physics potential of e+e- linear colliders in this report. These machines are planned to operate in the first phase at a center-of-mass energy of 500 GeV, before being scaled up to about 1 TeV. In the second phase of the operation, a final energy of about 2 TeV is expected. The machines will allow us to perform precision tests of the heavy particles in the Standard Model, the top quark and the electroweak bosons. They are ideal facilities for exploring the properties of Higgs particles, in particular in the intermediate mass range. New vector bosons and novel matter particles in extended gauge theories can be searched for and studied thoroughly. The machines provide unique opportunities for the discovery of particles in supersymmetric extensions of the Standard Model, the spectrum of Higgs particles, the supersymmetric partners of the electroweak gauge and Higgs bosons, and of the matter particles. High precision analyses of their properties and interactions will allow for extrapolations to energy scales close to the Planck scale where gravity becomes significant. In alternative scenarios, like compositeness models, novel matter particles and interactions can be discovered and investigated in the energy range above the existing colliders up to the TeV scale. Whatever scenario is realized in Nature, the discovery potential of e+e- linear colliders and the high precision with which the properties of particles and their interactions can be analyzed define an exciting physics programme complementary to hadron machines.

  15. A Sequential Ensemble Prediction System at Convection Permitting Scales

    NASA Astrophysics Data System (ADS)

    Milan, M.; Simmer, C.

    2012-04-01

    A Sequential Assimilation Method (SAM) following some aspects of particle filtering with resampling, also called SIR (Sequential Importance Resampling), is introduced and applied in the framework of an Ensemble Prediction System (EPS) for weather forecasting on convection-permitting scales, with a focus on precipitation forecasting. At this scale and beyond, the atmosphere increasingly exhibits chaotic behaviour and non-linear state-space evolution due to convectively driven processes. One way to take full account of non-linear state developments is particle filter methods; their basic idea is the representation of the model probability density function by a number of ensemble members weighted by their likelihood given the observations. In particular, particle filtering with resampling abandons ensemble members (particles) with low weights and restores the original number of particles by adding multiple copies of the members with high weights. In our SIR-like implementation we replace the likelihood-based definition of weights with a metric that quantifies the "distance" between the observed atmospheric state and the states simulated by the ensemble members. We also introduce a methodology to counteract filter degeneracy, i.e. the collapse of the simulated state space. To this end we propose a combination of resampling that takes account of simulated state-space clustering, and nudging. By keeping cluster representatives during resampling and filtering, the method maintains the potential for non-linear system state development. We assume that a particle cluster with initially low likelihood may evolve into a state space with higher likelihood at a subsequent filter time, thus mimicking non-linear system state developments (e.g. sudden convection initiation) and remedying timing errors for convection due to model errors and/or imperfect initial conditions. We apply a simplified version of the resampling: the particles with the highest weights in each cluster are duplicated, and of each particle pair one particle evolves using the forward model while the second is nudged towards the radar and satellite observations during its evolution based on the forward model.
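    A minimal sketch of the resampling step with distance-based weights (the clustering and nudging components of the proposed scheme are omitted here):

```python
# Multinomial resampling: particles close to the observation (low distance,
# hence high weight) are duplicated; distant ones tend to be abandoned.
import numpy as np

def resample(particles, distances, rng):
    weights = np.exp(-distances)             # turn distances into weights
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

rng = np.random.default_rng(3)
particles = rng.normal(size=(50, 2))         # 50 ensemble members, 2 state variables
observation = np.array([1.0, 0.0])
distances = np.linalg.norm(particles - observation, axis=1)
new_particles = resample(particles, distances, rng)
print("mean before:", particles.mean(axis=0), "after:", new_particles.mean(axis=0))
```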

  16. An Open-Source Galaxy Redshift Survey Simulator for next-generation Large Scale Structure Surveys

    NASA Astrophysics Data System (ADS)

    Seijak, Uros

    Galaxy redshift surveys produce three-dimensional maps of the galaxy distribution. On large scales these maps trace the underlying matter fluctuations in a relatively simple manner, so that the properties of the primordial fluctuations along with the overall expansion history and growth of perturbations can be extracted. The BAO standard ruler method to measure the expansion history of the universe using galaxy redshift surveys is thought to be robust to observational artifacts and understood theoretically with high precision. These same surveys can offer a host of additional information, including a measurement of the growth rate of large scale structure through redshift space distortions, the possibility of measuring the sum of neutrino masses, tighter constraints on the expansion history through the Alcock-Paczynski effect, and constraints on the scale-dependence and non-Gaussianity of the primordial fluctuations. Extracting this broadband clustering information hinges on both our ability to minimize and subtract observational systematics from the observed galaxy power spectrum, and our ability to model the broadband behavior of the observed galaxy power spectrum with exquisite precision. Rapid development on both fronts is required to capitalize on WFIRST's data set. We propose to develop an open-source computational toolbox that will propel development in both areas by connecting large scale structure modeling and instrument and survey modeling with the statistical inference process. We will use the proposed simulator both to tailor perturbation theory and fully non-linear models of the broadband clustering of WFIRST galaxies and to discover novel observables in the non-linear regime that are robust to observational systematics and able to distinguish between a wide range of spatial and dynamic biasing models for the WFIRST galaxy redshift survey sources. We have demonstrated the utility of this approach in a pilot study of the SDSS-III BOSS galaxies, in which we improved the redshift space distortion growth rate measurement precision by a factor of 2.5 using customized clustering statistics in the non-linear regime that were immunized against observational systematics. We look forward to addressing the unique challenges of modeling and empirically characterizing the WFIRST galaxies and observational systematics.

  17. A scalable variational inequality approach for flow through porous media models with pressure-dependent viscosity

    NASA Astrophysics Data System (ADS)

    Mapakshi, N. K.; Chang, J.; Nakshatrala, K. B.

    2018-04-01

    Mathematical models for flow through porous media typically enjoy the so-called maximum principles, which place bounds on the pressure field. It is highly desirable to preserve these bounds on the pressure field in predictive numerical simulations, that is, one needs to satisfy discrete maximum principles (DMP). Unfortunately, many of the existing formulations for flow through porous media models do not satisfy DMP. This paper presents a robust, scalable numerical formulation based on variational inequalities (VI), to model non-linear flows through heterogeneous, anisotropic porous media without violating DMP. VI is an optimization technique that places bounds on the numerical solutions of partial differential equations. To crystallize the ideas, a modification to Darcy equations by taking into account pressure-dependent viscosity will be discretized using the lowest-order Raviart-Thomas (RT0) and Variational Multi-scale (VMS) finite element formulations. It will be shown that these formulations violate DMP, and, in fact, these violations increase with an increase in anisotropy. It will be shown that the proposed VI-based formulation provides a viable route to enforce DMP. Moreover, it will be shown that the proposed formulation is scalable, and can work with any numerical discretization and weak form. A series of numerical benchmark problems are solved to demonstrate the effects of heterogeneity, anisotropy and non-linearity on DMP violations under the two chosen formulations (RT0 and VMS), and that of non-linearity on solver convergence for the proposed VI-based formulation. Parallel scalability on modern computational platforms will be illustrated through strong-scaling studies, which will prove the efficiency of the proposed formulation in a parallel setting. Algorithmic scalability as the problem size is scaled up will be demonstrated through novel static-scaling studies. The performed static-scaling studies can serve as a guide for users to be able to select an appropriate discretization for a given problem size.
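    As a loose, small-scale analogue of bound-enforcing optimization (not the paper's VI formulation or its RT0/VMS discretizations), SciPy's bounded least squares shows how box constraints keep a discrete solution inside prescribed bounds:

```python
# Bounded least squares: the constrained solve respects the imposed bounds
# the way a DMP-preserving scheme must keep pressures within physical limits.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(9)
A = rng.normal(size=(20, 5))
b = A @ rng.uniform(0.2, 0.8, size=5) + 0.3 * rng.normal(size=20)

unconstrained = lsq_linear(A, b)                     # may leave [0, 1]
constrained = lsq_linear(A, b, bounds=(0.0, 1.0))    # box constraints enforced
print("unconstrained:", np.round(unconstrained.x, 2))
print("constrained:  ", np.round(constrained.x, 2))
```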

  18. Boosting Bayesian parameter inference of stochastic differential equation models with methods from statistical physics

    NASA Astrophysics Data System (ADS)

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Measured time-series of both precipitation and runoff are known to exhibit highly non-trivial statistical properties. For making reliable probabilistic predictions in hydrology, it is therefore desirable to have stochastic models with output distributions that share these properties. When parameters of such models have to be inferred from data, we also need to quantify the associated parametric uncertainty. For non-trivial stochastic models, however, this latter step is typically very demanding, both conceptually and numerically, and almost never done in hydrology. Here, we demonstrate that methods developed in statistical physics make a large class of stochastic differential equation (SDE) models amenable to a full-fledged Bayesian parameter inference. For concreteness we demonstrate these methods by means of a simple yet non-trivial toy SDE model. We consider a natural catchment that, at the scale of observation, can be described by a linear reservoir. All the neglected processes are assumed to happen at much shorter time-scales and are therefore modeled with a Gaussian white noise term, the standard deviation of which is assumed to scale linearly with the system state (water volume in the catchment). Even for constant input, the outputs of this simple non-linear SDE model show a wealth of desirable statistical properties, such as fat-tailed distributions and long-range correlations. Standard algorithms for Bayesian inference fail for models of this kind, because their likelihood functions are extremely high-dimensional intractable integrals over all possible model realizations. The use of Kalman filters is precluded by the non-linearity of the model. Particle filters could be used but become increasingly inefficient with a growing number of data points. Hamiltonian Monte Carlo algorithms allow us to translate this inference problem into the problem of simulating the dynamics of a statistical mechanics system, and give us access to the most sophisticated methods developed in the statistical physics community over the last few decades. We demonstrate that such methods, along with automatic differentiation algorithms, allow us to perform a full-fledged Bayesian inference for a large class of SDE models in a highly efficient and largely automated manner. Furthermore, our algorithm is highly parallelizable. For our toy model, discretized with a few hundred points, a full Bayesian inference can be performed in a matter of seconds on a standard PC.
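    An Euler-Maruyama simulation of the toy model described above, with illustrative parameter values, shows the state-scaled noise producing skewed outflow fluctuations:

```python
# Linear reservoir driven by constant input, with Gaussian white noise whose
# standard deviation scales linearly with the stored volume (parameters invented).
import numpy as np

rng = np.random.default_rng(4)
k, rain, beta = 10.0, 1.0, 0.2     # retention time, constant input, noise scale
dt, n_steps = 0.01, 50_000

v = np.empty(n_steps)
v[0] = rain * k                    # start at the deterministic steady state
for i in range(1, n_steps):
    drift = rain - v[i - 1] / k
    v[i] = v[i - 1] + drift * dt + beta * v[i - 1] * np.sqrt(dt) * rng.normal()
    v[i] = max(v[i], 0.0)          # volumes stay non-negative

q = v / k                          # linear reservoir: outflow = V / k
skew = ((q - q.mean()) ** 3).mean() / q.std() ** 3
print(f"mean outflow {q.mean():.2f}, skewness of fluctuations {skew:.2f}")
```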

  19. A Search for Mountain Waves in MLS Stratospheric Limb Radiances from the Winter Northern Hemisphere: Data Analysis and Global Mountain Wave Modeling

    DTIC Science & Technology

    2004-02-11

    [Fragmentary abstract; only these excerpts survive extraction:] "Because the saturated radiances may depend slightly on tangent height as the limb path length decreases, a linear trend (described by parameters a and b) ..." "... track days and interpolated onto the same limb-track orbits. The color bar scale for radiance variance is linear. (b) Digital elevations of northern ..."

  20. Least Squares Metric, Unidimensional Scaling of Multivariate Linear Models.

    ERIC Educational Resources Information Center

    Poole, Keith T.

    1990-01-01

    A general approach to least-squares unidimensional scaling is presented. Ordering information contained in the parameters is used to transform the standard squared error loss function into a discrete rather than continuous form. Monte Carlo tests with 38,094 ratings of 261 senators and 1,258 representatives demonstrate the procedure's…

  1. Air entrainment and bubble statistics in three-dimensional breaking waves

    NASA Astrophysics Data System (ADS)

    Deike, L.; Popinet, S.; Melville, W. K.

    2016-02-01

    Wave breaking in the ocean is of fundamental importance for quantifying wave dissipation and air-sea interaction, including gas and momentum exchange, and for improving air-sea flux parametrizations for weather and climate models. Here we investigate air entrainment and bubble statistics in three-dimensional breaking waves through direct numerical simulations of the two-phase air-water flow using the open-source solver Gerris. As in previous 2D simulations, the dissipation due to breaking is found to be in good agreement with previous experimental observations and inertial-scaling arguments. For radii larger than the Hinze scale, the bubble size distribution is found to follow a power law of the radius, r^(-10/3), and to scale linearly with the time-dependent turbulent dissipation rate during the active breaking stage. The time-averaged bubble size distribution is found to follow the same power law of the radius and to scale linearly with the wave dissipation rate per unit length of breaking crest. We propose a phenomenological turbulent bubble break-up model that describes the numerical results and existing experimental results.
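    A small sketch of recovering the r^(-10/3) slope from a sampled bubble-size distribution (synthetic draws, not Gerris output):

```python
# Draw radii from N(r) ~ r^(-10/3) by inverse-CDF sampling, then refit the
# log-log slope from a histogram in logarithmically spaced bins.
import numpy as np

rng = np.random.default_rng(5)
r_min, r_max, slope_true = 1.0, 30.0, -10.0 / 3.0   # radii above the Hinze scale

u = rng.random(100_000)
a = slope_true + 1.0                                # exponent of the CDF
radii = (r_min**a + u * (r_max**a - r_min**a)) ** (1.0 / a)

bins = np.logspace(np.log10(r_min), np.log10(r_max), 25)
counts, edges = np.histogram(radii, bins=bins)
centers = np.sqrt(edges[:-1] * edges[1:])           # geometric bin centers
density = counts / np.diff(edges)                   # per-unit-radius density
mask = counts > 0
slope_fit = np.polyfit(np.log(centers[mask]), np.log(density[mask]), 1)[0]
print(f"fitted slope: {slope_fit:.2f} (target {slope_true:.2f})")
```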

  2. Modelling of Dictyostelium discoideum movement in a linear gradient of chemoattractant.

    PubMed

    Eidi, Zahra; Mohammad-Rafiee, Farshid; Khorrami, Mohammad; Gholami, Azam

    2017-11-15

    Chemotaxis is a ubiquitous biological phenomenon in which cells detect a spatial gradient of chemoattractant, and then move towards the source. Here we present a position-dependent advection-diffusion model that quantitatively describes the statistical features of the chemotactic motion of the social amoeba Dictyostelium discoideum in a linear gradient of cAMP (cyclic adenosine monophosphate). We fit the model to experimental trajectories that are recorded in a microfluidic setup with stationary cAMP gradients and extract the diffusion and drift coefficients in the gradient direction. Our analysis shows that for the majority of gradients, both coefficients decrease over time and become negative as the cells crawl up the gradient. The extracted model parameters also show that besides the expected drift in the direction of the chemoattractant gradient, we observe a nonlinear dependency of the corresponding variance on time, which can be explained by the model. Furthermore, the results of the model show that the non-linear term in the mean squared displacement of the cell trajectories can dominate the linear term on large time scales.

  3. The development of global GRAPES 4DVAR

    NASA Astrophysics Data System (ADS)

    Liu, Yongzhu

    2017-04-01

    Four-dimensional variational data assimilation (4DVAR) has contributed greatly to the improvement of NWP systems over the past twenty years. Our strategy has therefore been to develop an operational global 4DVAR system from the outset. The aim of this paper is to introduce the development of the global GRAPES four-dimensional variational data assimilation (4DVAR) system using incremental analysis schemes, and to present results of a comparison between 4DVAR, using a 6-hour assimilation window and simplified physics during the minimization, and three-dimensional variational data assimilation (3DVAR). The dynamical cores of the tangent-linear and adjoint models are developed directly from the non-hydrostatic forecast model, and the standard correctness checks have been performed. Besides the development of the adjoint code, most of our work has focused on improving computational efficiency, since the bulk of the computational cost of 4DVAR lies in the integration of the tangent-linear and adjoint models. For the tangent-linear model, the wall-clock time has been reduced to about 1.2 times that of the nonlinear model through optimization of the software framework. The significant computational savings in the adjoint model result from removing redundant recomputations of model trajectories; encouragingly, its wall-clock time is less than 1.5 times that of the nonlinear model. The current difficulty is that the numerical scheme used within the linear model is based strategically on the numerics of the corresponding nonlinear model; further computational acceleration should be expected from improvements to the nonlinear numerical algorithms. A series of linearized physical parameterization schemes has been developed to improve the representation of perturbed fields in the linear model, consisting of horizontal and vertical diffusion, sub-grid-scale orographic gravity wave drag, large-scale condensation and cumulus convection schemes. We also found that straightforward linearization of the nonlinear physical schemes can lead to significant growth of spurious unstable perturbations; it is essential to simplify the linear physics with respect to the non-linear schemes. The improvement in the perturbed fields of the tangent-linear model is visible with the linear physics included, especially at low levels. The GRAPES variational data assimilation system adopts the incremental approach. Work is ongoing to develop a pre-operational 4DVAR suite with 0.25° outer-loop resolution and multiple outer-loop configurations; one 4DVAR analysis using a 6-hour assimilation window can be finished within 40 minutes when using the available conventional and satellite data. In summary, the analyses over the northern and southern hemispheres, the tropics and East Asia from GRAPES 4DVAR performed better than those from GRAPES 3DVAR in one-month experiments. Moreover, the forecast results show that northern and southern extra-tropical scores for GRAPES 4DVAR are already better than for GRAPES 3DVAR, but the tropical performance needs further investigation. The subsequent main improvements will aim to enhance computational efficiency and accuracy in 2017, with the global GRAPES 4DVAR planned for operation in 2018.
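    The "standard correctness checks" mentioned above can be illustrated on a toy model (generic practice, not GRAPES code): the tangent-linear test compares a finite-difference perturbation against the TLM, and the adjoint test verifies the inner-product identity.

```python
# Tangent-linear test: ||M(x+eps*dx) - M(x)|| / ||eps * M'dx|| -> 1 as eps -> 0.
# Adjoint test: <M'dx, dy> must equal <dx, M'^T dy> to machine precision.
import numpy as np

def model(x):                      # toy non-linear "forecast model"
    return np.array([x[0] * x[1], x[0] + x[1] ** 2])

def jacobian(x):
    return np.array([[x[1], x[0]],
                     [1.0, 2.0 * x[1]]])

def tlm(x, dx):                    # tangent-linear model
    return jacobian(x) @ dx

def adj(x, dy):                    # adjoint model
    return jacobian(x).T @ dy

rng = np.random.default_rng(6)
x, dx, dy = rng.normal(size=(3, 2))

for eps in (1e-2, 1e-4, 1e-6):
    num = model(x + eps * dx) - model(x)
    ratio = np.linalg.norm(num) / np.linalg.norm(eps * tlm(x, dx))
    print(f"eps={eps:.0e}: TL ratio = {ratio:.8f}")      # approaches 1

print("adjoint residual:", abs(tlm(x, dx) @ dy - dx @ adj(x, dy)))
```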

  4. Tropical precipitation extremes: Response to SST-induced warming in aquaplanet simulations

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Ritthik; Bordoni, Simona; Teixeira, João.

    2017-04-01

    Scaling of tropical precipitation extremes in response to warming is studied in aquaplanet experiments using the global Weather Research and Forecasting (WRF) model. We show how the scaling of precipitation extremes is highly sensitive to spatial and temporal averaging: while instantaneous grid point extreme precipitation scales more strongly than the percentage increase (~7% K^-1) predicted by the Clausius-Clapeyron (CC) relationship, extremes for zonally and temporally averaged precipitation follow a slight sub-CC scaling, in agreement with results from Climate Model Intercomparison Project (CMIP) models. The scaling depends crucially on the employed convection parameterization. This is particularly true when grid point instantaneous extremes are considered. These results highlight how understanding the response of precipitation extremes to warming requires consideration of dynamic changes in addition to the thermodynamic response. Changes in grid-scale precipitation, unlike those in convective-scale precipitation, scale linearly with the resolved flow. Hence, dynamic changes include changes in both large-scale and convective-scale motions.
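    For reference, the ~7% K^-1 figure follows from the Clausius-Clapeyron relation, d ln e_s/dT = L_v/(R_v T^2); a two-line check:

```python
# Fractional increase of saturation vapour pressure per kelvin at typical
# near-surface temperatures, via Clausius-Clapeyron.
L_V = 2.5e6      # latent heat of vaporization (J/kg)
R_V = 461.5      # gas constant for water vapour (J/(kg*K))

for T in (280.0, 290.0, 300.0):
    rate = L_V / (R_V * T**2)        # d ln(e_s)/dT
    print(f"T={T:.0f} K: {100 * rate:.1f} %/K")
```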

  5. Magnetotransport of proton-irradiated BaFe2As2 and BaFe1.985Co0.015As2 single crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moseley, D. A.; Yates, K. A.; Peng, N.

    2015-02-17

    In this paper, we study the magnetotransport properties of the ferropnictide crystals BaFe2As2 and BaFe1.985Co0.015As2. These materials exhibit a high-field linear magnetoresistance that has been attributed to the quantum linear magnetoresistance model. In this model, the linear magnetoresistance is dependent on the concentration of scattering centers in the material. By using proton-beam irradiation to change the defect scattering density, we find that the dependence of the magnitude of the linear magnetoresistance on scattering quite clearly contravenes this prediction. Finally, a number of other scaling trends in the magnetoresistance and high-field Hall data are observed and discussed.

  6. Is the Jeffreys' scale a reliable tool for Bayesian model comparison in cosmology?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nesseris, Savvas; García-Bellido, Juan, E-mail: savvas.nesseris@uam.es, E-mail: juan.garciabellido@uam.es

    2013-08-01

    We are entering an era where progress in cosmology is driven by data, and alternative models will have to be compared and ruled out according to some consistent criterion. The most conservative and widely used approach is Bayesian model comparison. In this paper we explicitly calculate the Bayes factors for all models that are linear with respect to their parameters. We do this in order to test the so-called Jeffreys' scale and determine analytically how accurate its predictions are in a simple case where we fully understand and can calculate everything analytically. We also discuss the case of nested models, e.g. one with M_1 parameters and another with M_2 ⊃ M_1 parameters, and we derive analytic expressions for both the Bayes factor and the Figure of Merit, defined as the inverse area of the model parameters' confidence contours. With all this machinery and the use of an explicit example we demonstrate that the threshold nature of Jeffreys' scale is not a "one size fits all" reliable tool for model comparison and that it may lead to biased conclusions.
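    For models linear in their parameters with Gaussian noise and a Gaussian prior, the evidence is available in closed form, so Bayes factors can be computed exactly; a minimal sketch with illustrative priors (not the paper's choices):

```python
# Marginal likelihood of y = X w + noise with w ~ N(0, tau^2 I) and noise
# ~ N(0, sigma^2 I): y ~ N(0, sigma^2 I + tau^2 X X^T).
import numpy as np
from scipy.stats import multivariate_normal

def log_evidence(X, y, sigma=0.1, tau=1.0):
    cov = sigma**2 * np.eye(len(y)) + tau**2 * X @ X.T
    return multivariate_normal(mean=np.zeros(len(y)), cov=cov).logpdf(y)

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 40)
y = 0.8 * t + 0.1 * rng.normal(size=t.size)         # data generated by a line

X1 = np.column_stack([np.ones_like(t), t])          # M1: constant + linear
X2 = np.column_stack([np.ones_like(t), t, t**2])    # M2: adds a quadratic term
log_B12 = log_evidence(X1, y) - log_evidence(X2, y)
print(f"ln(Bayes factor M1 vs M2) = {log_B12:.2f}")  # >0 favours the simpler model
```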

  7. A consistent two-mutation model of bone cancer for two data sets of radium-injected beagles.

    PubMed

    Bijwaard, H; Brugmans, M J P; Leenhouts, H P

    2002-09-01

    A two-mutation carcinogenesis model has been applied to model osteosarcoma incidence in two data sets of beagles injected with 226Ra. Taking age-specific retention into account, the following results have been obtained: (1) a consistent and well-fitting solution for all age and dose groups, (2) mutation rates that are linearly dependent on dose rate, with an exponential decrease for the second mutation at high dose rates, (3) a linear-quadratic dose-effect relationship, which indicates that care should be taken when extrapolating linearly, (4) highest cumulative incidences for injection at young adult age, and highest risks for injection doses of a few kBq kg^-1 at these ages, and (5) when scaled appropriately, the beagle model compares fairly well with a description for radium dial painters, suggesting that a consistent model description of bone cancer induction in beagles and humans may be possible.

  8. Partially linearized external models to active-space coupled-cluster through connected hextuple excitations.

    PubMed

    Xu, Enhua; Ten-No, Seiichiro L

    2018-06-05

    Partially linearized external models to active-space coupled-cluster through hextuple excitations, for example, CC{SDtqph}_L, CCSD{tqph}_L, and CCSD{tqph}_hyb, are implemented and compared with the full active-space CCSDtqph. The computational scaling of CCSDtqph coincides with that of the standard coupled-cluster singles and doubles (CCSD), yet with a much larger prefactor. The approximate schemes that linearize the external excitations higher than doubles are significantly cheaper than the full CCSDtqph model. These models are applied to investigate the bond dissociation energies of diatomic molecules (HF, F2, CuH, and CuF), and the potential energy surfaces of the bond dissociation processes of HF, CuH, H2O, and C2H4. Among the approximate models, CCSD{tqph}_hyb provides very accurate descriptions compared with CCSDtqph for all of the tested systems. © 2018 Wiley Periodicals, Inc.

  9. The Lyα forest and the Cosmic Web

    NASA Astrophysics Data System (ADS)

    Meiksin, Avery

    2016-10-01

    The accurate description of the properties of the Lyman-α forest is a spectacular success of the Cold Dark Matter theory of cosmological structure formation. After a brief review of early models, it is shown how numerical simulations have demonstrated that the Lyman-α forest emerges from the cosmic web in the quasi-linear regime of overdensity. The quasi-linear nature of the structures allows accurate modeling, providing constraints on cosmological models over a unique range of scales and enabling the Lyman-α forest to serve as a bridge to the more complex problem of galaxy formation.

  10. Feedback linearization based control of a variable air volume air conditioning system for cooling applications.

    PubMed

    Thosar, Archana; Patra, Amit; Bhattacharyya, Souvik

    2008-07-01

    Design of a nonlinear control system for a Variable Air Volume Air Conditioning (VAVAC) plant through feedback linearization is presented in this article. VAVAC systems attempt to reduce building energy consumption while maintaining the primary role of air conditioning. The temperature of the space is maintained at a constant level by establishing a balance between the cooling load generated in the space and the air supply delivered to meet the load. The dynamic model of a VAVAC plant is derived and formulated as a MIMO bilinear system. Feedback linearization is applied for decoupling and linearization of the nonlinear model. Simulation results for a laboratory-scale plant are presented to demonstrate the potential of this methodology for maintaining comfort while achieving energy-optimal performance. Results obtained with a conventional PI controller and a feedback linearizing controller are compared, and the superiority of the proposed approach is clearly established.
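    The article's plant is a MIMO bilinear model; as a generic single-input sketch of the feedback-linearization idea (invented dynamics, not the VAVAC equations), choosing u to cancel the non-linearity leaves linear closed-loop error dynamics:

```python
# Plant: dx/dt = a*x^2 + b*u. The control law u = (v - a*x^2)/b cancels the
# non-linearity, so the closed loop obeys dx/dt = v = -k*(x - x_ref).
a, b = 0.5, 2.0                 # assumed plant coefficients
k, x_ref = 3.0, 1.0             # desired linear dynamics and setpoint
dt, n = 0.001, 5000

x = 0.0
for i in range(n):
    v = -k * (x - x_ref)        # outer linear controller on the new input v
    u = (v - a * x**2) / b      # feedback-linearizing control law
    x += (a * x**2 + b * u) * dt
    if i % 1000 == 0:
        print(f"t={i*dt:.1f}s  x={x:.3f}")   # exponential approach to x_ref
```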

  11. Wave kinetics of random fibre lasers

    PubMed Central

    Churkin, D V.; Kolokolov, I V.; Podivilov, E V.; Vatnik, I D.; Nikulin, M A.; Vergeles, S S.; Terekhov, I S.; Lebedev, V V.; Falkovich, G.; Babin, S A.; Turitsyn, S K.

    2015-01-01

    Traditional wave kinetics describes the slow evolution of systems with many degrees of freedom to equilibrium via numerous weak non-linear interactions, and fails for a very important class of dissipative (active) optical systems with cyclic gain and losses, such as lasers with non-linear intracavity dynamics. Here we introduce a conceptually new class of cyclic wave systems, characterized by non-uniform double-scale dynamics with strong periodic changes of the energy spectrum and slow evolution from cycle to cycle to a statistically steady state. Taking a practically important example, the random fibre laser, we show that a model describing such a system is close to the integrable non-linear Schrödinger equation and needs a new formalism of wave kinetics, developed here. We derive a non-linear kinetic theory of the laser spectrum, generalizing the seminal linear model of Schawlow and Townes. Experimental results agree with our theory. The work has implications for describing the kinetics of cyclical systems beyond photonics. PMID:25645177

  12. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level

    ERIC Educational Resources Information Center

    Savalei, Victoria; Rhemtulla, Mijke

    2017-01-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately…

  13. Explicit criteria for prioritization of cataract surgery

    PubMed Central

    Ma Quintana, José; Escobar, Antonio; Bilbao, Amaia

    2006-01-01

    Background: Consensus techniques have been used previously to create explicit criteria to prioritize cataract extraction; however, the appropriateness of the intervention was not included explicitly in previous studies. We developed a prioritization tool for cataract extraction according to the RAND method. Methods: Criteria were developed using a modified Delphi panel judgment process. A panel of 11 ophthalmologists was assembled. Ratings were analyzed regarding the level of agreement among panelists. We studied the effect of all variables on the final panel score using general linear and logistic regression models. Priority scoring systems were developed by means of optimal scaling and general linear models. The explicit criteria developed were summarized by means of regression tree analysis. Results: Eight variables were considered to create the indications. Of the 310 indications that the panel evaluated, 22.6% were considered high priority, 52.3% intermediate priority, and 25.2% low priority. Agreement was reached for 31.9% of the indications and disagreement for 0.3%. Logistic regression and general linear models showed that the preoperative visual acuity of the cataractous eye, visual function, and anticipated postoperative visual acuity were the most influential variables. Alternative and simple scoring systems were obtained by optimal scaling and general linear models, where the previous variables were also the most important. The decision tree also shows the importance of the previous variables and the appropriateness of the intervention. Conclusion: Our results showed acceptable validity as an evaluation and management tool for prioritizing cataract extraction. It also provides easy algorithms for use in clinical practice. PMID:16512893

  14. Assessment of online monitoring strategies for measuring N2O emissions from full-scale wastewater treatment systems.

    PubMed

    Marques, Ricardo; Rodriguez-Caballero, A; Oehmen, Adrian; Pijuan, Maite

    2016-08-01

    Clark-type nitrous oxide (N2O) sensors are routinely used to measure dissolved N2O concentrations in wastewater treatment plants (WWTPs), but have never before been applied to assess gas-phase N2O emissions in full-scale WWTPs. In this study, a full-scale N2O gas sensor was tested and validated for online gas measurements, and assessed with respect to its linearity, temperature dependence, signal saturation and drift prior to full-scale application. The sensor was linear at the concentrations tested (0-422.3, 0-50 and 0-10 ppmv N2O) and had a linear response up to 2750 ppmv N2O. An exponential correlation between temperature and sensor signal was described and predicted using a double exponential equation, while drift did not have a significant influence on the signal. The N2O gas sensor was used for online N2O monitoring in a full-scale sequencing batch reactor (SBR) treating domestic wastewater, and the results were compared with those obtained by a commercial online gas analyser. Emissions were successfully described by the sensor, which was even more accurate than the commercial analyser at N2O concentrations above 500 ppmv. Data from this gas N2O sensor were also used to validate two models for predicting N2O emissions from dissolved N2O measurements, one based on the oxygen transfer rate and the other on the superficial velocity of the gas bubbles. Using the first model, predictions of N2O emissions agreed to within 98.7% of those measured by the gas sensor, while 87.0% agreement was obtained with the second model. This is the first study showing a reliable estimation of gas emissions based on dissolved N2O online data in a full-scale wastewater treatment facility. Copyright © 2016 Elsevier Ltd. All rights reserved.
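    The abstract does not give the exact temperature model; assuming one plausible double-exponential form (the function `double_exp` below is hypothetical, and all numbers are synthetic), a compensation fit might look like this:

```python
# Fit a double-exponential signal-vs-temperature model, then divide it out
# to obtain a temperature-compensated (flat) signal.
import numpy as np
from scipy.optimize import curve_fit

def double_exp(T, a, b, c):
    return a * np.exp(b * np.exp(c * T))   # assumed signal-vs-temperature form

rng = np.random.default_rng(8)
T = np.linspace(10.0, 30.0, 40)                       # deg C
signal = double_exp(T, 2.0, 0.05, 0.1) + 0.01 * rng.normal(size=T.size)

params, _ = curve_fit(double_exp, T, signal, p0=(1.0, 0.1, 0.1))
corrected = signal / double_exp(T, *params)           # ~1 after compensation
print("fitted (a, b, c):", np.round(params, 3))
print("corrected signal spread:", f"{corrected.std():.4f}")
```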

  15. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    DOE PAGES

    Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian; ...

    2015-02-25

    The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL-LUEn was slightly but not significantly better than TL-LUE at half-hourly and daily scale, while the overall performance of both TL-LUEn and TL-LUE were significantly better (p < 0.0001) than MOD17 at the two temporal scales. The improvement of TL-LUEn over TL-LUE was relatively small in comparison with the improvement of TL-LUE over MOD17. However, the differences between TL-LUEn and MOD17, and TL-LUE and MOD17 became less distinct at the 8-day scale. As for different vegetation types, TL-LUEn and TL-LUE performed better than MOD17 for all vegetation types except crops at the half-hourly scale. At the daily and 8-day scales, both TL-LUEn and TL-LUE outperformed MOD17 for forests. However, TL-LUEn had a mixed performance for the three non-forest types while TL-LUE outperformed MOD17 slightly for all these non-forest types at daily and 8-day scales. The better performance of TL-LUEn and TL-LUE for forests was mainly achieved by the correction of the underestimation/overestimation of GPP simulated by MOD17 under low/high solar radiation and sky clearness conditions. TL-LUEn is more applicable at individual sites at the half-hourly scale while TL-LUE could be regionally used at half-hourly, daily and 8-day scales. MOD17 is also an applicable option regionally at the 8-day scale.

  16. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian

    The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL-LUEn was slightly but not significantly better than TL-LUE at the half-hourly and daily scales, while the overall performance of both TL-LUEn and TL-LUE was significantly better (p < 0.0001) than MOD17 at the two temporal scales. The improvement of TL-LUEn over TL-LUE was relatively small in comparison with the improvement of TL-LUE over MOD17. However, the differences between TL-LUEn and MOD17, and TL-LUE and MOD17, became less distinct at the 8-day scale. As for different vegetation types, TL-LUEn and TL-LUE performed better than MOD17 for all vegetation types except crops at the half-hourly scale. At the daily and 8-day scales, both TL-LUEn and TL-LUE outperformed MOD17 for forests. However, TL-LUEn had a mixed performance for the three non-forest types while TL-LUE outperformed MOD17 slightly for all these non-forest types at daily and 8-day scales. The better performance of TL-LUEn and TL-LUE for forests was mainly achieved by correcting the underestimation/overestimation of GPP simulated by MOD17 under low/high solar radiation and sky clearness conditions. TL-LUEn is more applicable at individual sites at the half-hourly scale, while TL-LUE could be used regionally at half-hourly, daily and 8-day scales. MOD17 is also an applicable option regionally at the 8-day scale.
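
    The core difference between the big-leaf and two-leaf formulations can be sketched in a few lines: the canopy is split into sunlit and shaded fractions, each with its own light use efficiency. The epsilon values and the environmental scalars below are placeholder assumptions, not the fitted TL-LUE parameters.

        # Illustrative maximal light use efficiencies (g C / MJ); values are assumptions.
        EPS_SUNLIT, EPS_SHADED = 1.7, 3.0

        def two_leaf_gpp(apar_sunlit, apar_shaded, temp_scalar, vpd_scalar):
            """Linear two-leaf LUE sketch in the spirit of TL-LUE:
            GPP = (eps_su * APAR_su + eps_sh * APAR_sh) * f(T) * f(VPD)."""
            return (EPS_SUNLIT * apar_sunlit + EPS_SHADED * apar_shaded) \
                   * temp_scalar * vpd_scalar

        # Shaded leaves use diffuse light more efficiently, hence the larger epsilon.
        print(two_leaf_gpp(apar_sunlit=6.0, apar_shaded=2.5, temp_scalar=0.9, vpd_scalar=0.8))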

  17. Enhanced Spectral Anisotropies Near the Proton-Cyclotron Scale: Possible Two-Component Structure in Hall-FLR MHD Turbulence Simulations

    NASA Technical Reports Server (NTRS)

    Ghosh, Sanjoy; Goldstein, Melvyn L.

    2011-01-01

    Recent analysis of the magnetic correlation function of solar wind fluctuations at 1 AU suggests the existence of two-component structure near the proton-cyclotron scale. Here we use two-and-one-half-dimensional and three-dimensional compressible MHD models to look for two-component structure near the proton-cyclotron scale. Our MHD system incorporates both Hall and Finite Larmor Radius (FLR) terms. We find that strong spectral anisotropies appear adjacent to the proton-cyclotron scales, depending on the choice of initial conditions and plasma beta. These anisotropies are enhancements on top of related anisotropies that appear in standard MHD turbulence in the presence of a mean magnetic field, and are suggestive of one turbulence component along the inertial scales and another component adjacent to the dissipative scales. We compute the relative strengths of linear and nonlinear accelerations on the velocity and magnetic fields to gauge the relative influence of terms that drive the system with wave-like (linear) versus turbulent (nonlinear) dynamics.

  18. Analytic Methods for Adjusting Subjective Rating Schemes.

    ERIC Educational Resources Information Center

    Cooper, Richard V. L.; Nelson, Gary R.

    Statistical and econometric techniques of correcting for supervisor bias in models of individual performance appraisal were developed, using a variant of the classical linear regression model. Location bias occurs when individual performance is systematically overestimated or underestimated, while scale bias results when raters either exaggerate…

  19. Mapping nonlinear receptive field structure in primate retina at single cone resolution

    PubMed Central

    Li, Peter H; Greschner, Martin; Gunning, Deborah E; Mathieson, Keith; Sher, Alexander; Litke, Alan M; Paninski, Liam

    2015-01-01

    The function of a neural circuit is shaped by the computations performed by its interneurons, which in many cases are not easily accessible to experimental investigation. Here, we elucidate the transformation of visual signals flowing from the input to the output of the primate retina, using a combination of large-scale multi-electrode recordings from an identified ganglion cell type, visual stimulation targeted at individual cone photoreceptors, and a hierarchical computational model. The results reveal nonlinear subunits in the circuitry of OFF midget ganglion cells, which subserve high-resolution vision. The model explains light responses to a variety of stimuli more accurately than a linear model, including stimuli targeted to cones within and across subunits. The recovered model components are consistent with the known anatomical organization of midget bipolar interneurons. These results reveal the spatial structure of linear and nonlinear encoding, at the resolution of single cells and at the scale of complete circuits. DOI: http://dx.doi.org/10.7554/eLife.05241.001 PMID:26517879

  20. A Scaling Model for the Anthropocene Climate Variability with Projections to 2100

    NASA Astrophysics Data System (ADS)

    Hébert, Raphael; Lovejoy, Shaun

    2017-04-01

    The determination of the climate sensitivity to radiative forcing is a fundamental climate science problem with important policy implications. We use a scaling model, with a limited set of parameters, which can directly calculate the forced globally-averaged surface air temperature response to anthropogenic and natural forcings. At timescales larger than an inner scale τ, which we determine as the ocean-atmosphere coupling scale at around 2 years, the global system responds approximately linearly, so that the variability may be decomposed into additive forced and internal components. The Ruelle response theory extends the classical linear response theory for small perturbations to systems far from equilibrium. Our model thus relates radiative forcings to a forced temperature response by convolution with a suitable Green's function, or climate response function. Motivated by scaling symmetries which allow for long-range dependence, we assume a general scaling form, a scaling climate response function (SCRF), which is able to produce a wide range of responses: a power law truncated at τ. This allows us to analytically calculate the climate sensitivity at different time scales, yielding a one-to-one relation from the transient climate response to the equilibrium climate sensitivity, which are estimated, respectively, as 1.6 (+0.3/-0.2) K and 2.4 (+1.3/-0.6) K at the 90% confidence level. The model parameters are estimated within a Bayesian framework, with a fractional Gaussian noise error model as the internal variability, from forcing series, instrumental surface temperature datasets and CMIP5 GCM Representative Concentration Pathway (RCP) scenario runs. This observation-based model is robust, and projections for the coming century are made following the RCP scenarios 2.6, 4.5 and 8.5, yielding in the year 2100, respectively: 1.5 (+0.3/-0.2) K, 2.3 ± 0.4 K and 4.0 ± 0.6 K at the 90% confidence level. For comparison, the associated projections from a CMIP5 multi-model ensemble (MME) (32 models) are: 1.7 ± 0.8 K, 2.6 ± 0.8 K and 4.8 ± 1.3 K. Therefore, our projection uncertainty is less than half the structural uncertainty of this CMIP5 MME.
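
    A hedged sketch of the convolution at the heart of such a model: the forced response is the forcing convolved with a power-law response function truncated below the inner scale τ. The exponent, τ and the normalisation below are illustrative choices, not the fitted values of the study.

        import numpy as np

        def scrf_response(forcing, H=0.5, tau=2.0, dt=0.25):
            """Forced temperature response T(t) = (G * F)(t) with a scaling
            climate response function G(t) ~ t**(H-1), regularized below tau.
            H, tau and the normalisation are illustrative, not fitted values."""
            t = np.arange(dt, 100.0, dt)
            green = np.where(t >= tau, t ** (H - 1.0), tau ** (H - 1.0))
            green /= green.sum() * dt                     # normalise the kernel
            return np.convolve(forcing, green)[: len(forcing)] * dt

        years = np.arange(0, 100, 0.25)
        forcing = 0.04 * years                            # toy linear forcing ramp (W/m^2)
        print(scrf_response(forcing)[-1])                 # response at the end of the century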

  1. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, because practical problems often involve large numbers of measurements and model parameters, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
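
    The key computational idea, solving the damped Levenberg-Marquardt subproblem with a Krylov method instead of factorizing J^T J, can be sketched as follows. This uses SciPy's LSQR as a stand-in and is not the Julia/MADS implementation; the subspace recycling across damping parameters is omitted.

        import numpy as np
        from scipy.sparse.linalg import lsqr

        def lm_step(jacobian, residual, damping):
            """One Levenberg-Marquardt step: solve the damped least-squares
            problem  min ||J d + r||^2 + damping * ||d||^2  with LSQR, a
            Krylov method, instead of forming J^T J explicitly."""
            sol = lsqr(jacobian, -residual, damp=np.sqrt(damping))
            return sol[0]                       # the parameter update d

        J = np.random.randn(200, 50)            # toy Jacobian: 200 observations, 50 parameters
        r = np.random.randn(200)                # toy residual vector
        print(np.linalg.norm(lm_step(J, r, damping=1e-2)))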

  2. Comparison of statistical models for analyzing wheat yield time series.

    PubMed

    Michel, Lucie; Makowski, David

    2013-01-01

    The world's population is predicted to exceed nine billion by 2050 and there is increasing concern about the capability of agriculture to feed such a large population. Foresight studies on food security are frequently based on crop yield trends estimated from yield time series provided by national and regional statistical agencies. Various types of statistical models have been proposed for the analysis of yield time series, but the predictive performances of these models have not yet been evaluated in detail. In this study, we present eight statistical models for analyzing yield time series and compare their ability to predict wheat yield at the national and regional scales, using data provided by the Food and Agriculture Organization of the United Nations and by the French Ministry of Agriculture. The Holt-Winters and dynamic linear models performed equally well, giving the most accurate predictions of wheat yield. However, dynamic linear models have two advantages over Holt-Winters models: they can be used to reconstruct past yield trends retrospectively and to analyze uncertainty. The results obtained with dynamic linear models indicated a stagnation of wheat yields in many countries, but the estimated rate of increase of wheat yield remained above 0.06 t ha⁻¹ year⁻¹ in several countries in Europe, Asia, Africa and America, and the estimated values were highly uncertain for several major wheat producing countries. The rate of yield increase differed considerably between French regions, suggesting that efforts to identify the main causes of yield stagnation should focus on a subnational scale.

  3. Investigation of the effects of external current systems on the MAGSAT data utilizing grid cell modeling techniques

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M. (Principal Investigator)

    1982-01-01

    The status of the initial testing of the modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere and magnetosphere is reported. The modeling technique utilizes a linear current element representation of the large scale space-current system.
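
    A linear-current-element representation reduces to summing Biot-Savart contributions from straight segments. The sketch below is a generic illustration of that idea with an arbitrary test geometry (a current ring), not an ionospheric or magnetospheric current system.

        import numpy as np

        MU0_OVER_4PI = 1e-7   # T*m/A

        def field_from_segments(nodes, current, obs):
            """Magnetic field at 'obs' from a polyline current: one straight
            Biot-Savart element per node pair, evaluated at its midpoint."""
            b = np.zeros(3)
            for p0, p1 in zip(nodes[:-1], nodes[1:]):
                dl = p1 - p0                     # element vector
                rvec = obs - 0.5 * (p0 + p1)     # midpoint-to-observer vector
                r3 = np.linalg.norm(rvec) ** 3
                b += MU0_OVER_4PI * current * np.cross(dl, rvec) / r3
            return b

        # Crude ring of current (radius 1 m), evaluated on its axis.
        theta = np.linspace(0, 2 * np.pi, 73)
        ring = np.c_[np.cos(theta), np.sin(theta), np.zeros_like(theta)]
        print(field_from_segments(ring, current=1e6, obs=np.array([0.0, 0.0, 2.0])))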

  4. Large-scale structure in mimetic Horndeski gravity

    NASA Astrophysics Data System (ADS)

    Arroja, Frederico; Okumura, Teppei; Bartolo, Nicola; Karmakar, Purnendu; Matarrese, Sabino

    2018-05-01

    In this paper, we propose to use the mimetic Horndeski model as a model for the dark universe. Both cold dark matter (CDM) and dark energy (DE) phenomena are described by a single component, the mimetic field. In linear theory, we show that this component effectively behaves like a perfect fluid with zero sound speed and clusters on all scales. For the simpler mimetic cubic Horndeski model, if the background expansion history is chosen to be identical to a perfect fluid DE (PFDE) then the mimetic model predicts the same power spectrum of the Newtonian potential as the PFDE model with zero sound speed. In particular, if the background is chosen to be the same as that of LCDM, then also in this case the power spectrum of the Newtonian potential in the mimetic model becomes indistinguishable from the power spectrum in LCDM on linear scales. A different conclusion may be found in the case of non-adiabatic perturbations. We also discuss the distinguishability, using power spectrum measurements from LCDM N-body simulations as a proxy for future observations, between these mimetic models and other popular models of DE. For instance, we find that if the background has an equation of state equal to ‑0.95 then we will be able to distinguish the mimetic model from the PFDE model with unity sound speed. On the other hand, it will be hard to do this distinction with respect to the LCDM model.

  5. Non-linear scale interactions in a forced turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Duvvuri, Subrahmanyam; McKeon, Beverley

    2015-11-01

    A strong phase-organizing influence exerted by a single synthetic large-scale spatio-temporal mode on directly-coupled (through triadic interactions) small scales in a turbulent boundary layer forced by a spatially-impulsive dynamic wall-roughness patch was previously demonstrated by the authors (J. Fluid Mech. 2015, vol. 767, R4). The experimental set-up was later enhanced to allow for simultaneous forcing of multiple scales in the flow. Results and analysis are presented from a new set of novel experiments where two distinct large scales are forced in the flow by a dynamic wall-roughness patch. The internal non-linear forcing of two other scales with triadic consistency to the artificially forced large scales, corresponding to sum and difference in wavenumbers, is dominated by the latter. This allows for a forcing-response (input-output) type analysis of the two triadic scales, and naturally lends itself to a resolvent operator based model (e.g. McKeon & Sharma, J. Fluid Mech. 2010, vol. 658, pp. 336-382) of the governing Navier-Stokes equations. The support of AFOSR (grant #FA 9550-12-1-0469, program manager D. Smith) is gratefully acknowledged.

  6. Validation of the Omni Scale of Perceived Exertion in a sample of Spanish-speaking youth from the USA.

    PubMed

    Suminski, Richard R; Robertson, Robert J; Goss, Fredric L; Olvera, Norma

    2008-08-01

    Whether the translation of verbal descriptors from English to Spanish affects the validity of the Children's OMNI Scale of Perceived Exertion is not known, so the validity of a Spanish version of the OMNI was examined with 32 boys and 36 girls (9 to 12 years old) for whom Spanish was the primary language. Oxygen consumption, ventilation, respiratory rate, respiratory exchange ratio, heart rate, and ratings of perceived exertion for the overall body (RPE-O) were measured during an incremental treadmill test. All response values displayed significant linear increases across test stages. The linear regression analyses indicated RPE-O values were distributed as positive linear functions of oxygen consumption, ventilation, respiratory rate, respiratory exchange ratio, heart rate, and percent of maximal oxygen consumption. All regression models were statistically significant. The Spanish OMNI Scale is valid for estimating exercise effort during walking and running amongst Hispanic youth whose primary language is Spanish.

  7. How Do Microphysical Processes Influence Large-Scale Precipitation Variability and Extremes?

    DOE PAGES

    Hagos, Samson; Ruby Leung, L.; Zhao, Chun; ...

    2018-02-10

    Convection-permitting simulations using the Model for Prediction Across Scales-Atmosphere (MPAS-A) are used to examine how microphysical processes affect large-scale precipitation variability and extremes. An episode of the Madden-Julian Oscillation is simulated using MPAS-A with a refined region at 4-km grid spacing over the Indian Ocean. It is shown that cloud microphysical processes regulate the precipitable water (PW) statistics. Because of the non-linear relationship between precipitation and PW, PW exceeding a certain critical value (PWcr) contributes disproportionately to precipitation variability. However, the frequency of PW exceeding PWcr decreases rapidly with PW, so changes in microphysical processes that shift the column PW statistics relative to PWcr even slightly have large impacts on precipitation variability. Furthermore, precipitation variance and extreme precipitation frequency are approximately linearly related to the difference between the mean and critical PW values. Thus observed precipitation statistics could be used to directly constrain model microphysical parameters, as this study demonstrates using radar observations from the DYNAMO field campaign.

  8. How Do Microphysical Processes Influence Large-Scale Precipitation Variability and Extremes?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagos, Samson; Ruby Leung, L.; Zhao, Chun

    Convection-permitting simulations using the Model for Prediction Across Scales-Atmosphere (MPAS-A) are used to examine how microphysical processes affect large-scale precipitation variability and extremes. An episode of the Madden-Julian Oscillation is simulated using MPAS-A with a refined region at 4-km grid spacing over the Indian Ocean. It is shown that cloud microphysical processes regulate the precipitable water (PW) statistics. Because of the non-linear relationship between precipitation and PW, PW exceeding a certain critical value (PWcr) contributes disproportionately to precipitation variability. However, the frequency of PW exceeding PWcr decreases rapidly with PW, so changes in microphysical processes that shift the column PW statistics relative to PWcr even slightly have large impacts on precipitation variability. Furthermore, precipitation variance and extreme precipitation frequency are approximately linearly related to the difference between the mean and critical PW values. Thus observed precipitation statistics could be used to directly constrain model microphysical parameters, as this study demonstrates using radar observations from the DYNAMO field campaign.
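
    The leverage of the critical value PWcr on precipitation variability can be illustrated with a toy pickup curve: small shifts of the PW distribution relative to PWcr produce large changes in precipitation variance. The distribution, threshold and pickup exponent below are all illustrative assumptions, not values from the simulations.

        import numpy as np

        rng = np.random.default_rng(0)
        pw = rng.gamma(shape=30.0, scale=1.6, size=100_000)   # toy column PW sample (mm)
        PW_CR = 60.0                                          # illustrative critical value (mm)

        for shift in (0.0, 2.0, 4.0):    # shift the PW distribution by a few mm
            # Toy pickup curve: precipitation ramps up sharply above PWcr.
            p = 0.5 * np.clip(pw + shift - PW_CR, 0.0, None) ** 1.5
            print(f"shift={shift:3.1f} mm  P(PW>PWcr)={np.mean(pw + shift > PW_CR):.3f}  "
                  f"precip variance={p.var():.3f}")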

  9. Development of control strategies for safe microburst penetration: A progress report

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.

    1987-01-01

    A single-engine, propeller-driven, general-aviation model was incorporated into the nonlinear simulation and into the linear analysis of root loci and frequency response. Full-scale wind tunnel data provided its aerodynamic model, and the thrust model included the airspeed-dependent effects of power and propeller efficiency. Also, the parameters of the Jet Transport model were changed to correspond more closely to the Boeing 727. In order to study their effects on the steady-state response to vertical wind inputs, altitude and total specific energy (air-relative and inertial) feedback capabilities were added to the nonlinear and linear models. Multiloop system design goals were defined, and attempts were made to develop controllers which achieved these goals.

  10. Non-linear, non-monotonic effect of nano-scale roughness on particle deposition in absence of an energy barrier: Experiments and modeling

    PubMed Central

    Jin, Chao; Glawdel, Tomasz; Ren, Carolyn L.; Emelko, Monica B.

    2015-01-01

    Deposition of colloidal- and nano-scale particles on surfaces is critical to numerous natural and engineered environmental, health, and industrial applications ranging from drinking water treatment to semi-conductor manufacturing. Nano-scale surface roughness-induced hydrodynamic impacts on particle deposition were evaluated in the absence of an energy barrier to deposition in a parallel plate system. A non-linear, non-monotonic relationship between deposition surface roughness and particle deposition flux was observed and a critical roughness size associated with minimum deposition flux or “sag effect” was identified. This effect was more significant for nanoparticles (<1 μm) than for colloids and was numerically simulated using a Convective-Diffusion model and experimentally validated. Inclusion of flow field and hydrodynamic retardation effects explained particle deposition profiles better than when only the Derjaguin-Landau-Verwey-Overbeek (DLVO) force was considered. This work provides 1) a first comprehensive framework for describing the hydrodynamic impacts of nano-scale surface roughness on particle deposition by unifying hydrodynamic forces (using the most current approaches for describing flow field profiles and hydrodynamic retardation effects) with appropriately modified expressions for DLVO interaction energies, and gravity forces in one model and 2) a foundation for further describing the impacts of more complicated scales of deposition surface roughness on particle deposition. PMID:26658159

  11. Non-linear, non-monotonic effect of nano-scale roughness on particle deposition in absence of an energy barrier: Experiments and modeling

    NASA Astrophysics Data System (ADS)

    Jin, Chao; Glawdel, Tomasz; Ren, Carolyn L.; Emelko, Monica B.

    2015-12-01

    Deposition of colloidal- and nano-scale particles on surfaces is critical to numerous natural and engineered environmental, health, and industrial applications ranging from drinking water treatment to semi-conductor manufacturing. Nano-scale surface roughness-induced hydrodynamic impacts on particle deposition were evaluated in the absence of an energy barrier to deposition in a parallel plate system. A non-linear, non-monotonic relationship between deposition surface roughness and particle deposition flux was observed and a critical roughness size associated with minimum deposition flux or “sag effect” was identified. This effect was more significant for nanoparticles (<1 μm) than for colloids and was numerically simulated using a Convective-Diffusion model and experimentally validated. Inclusion of flow field and hydrodynamic retardation effects explained particle deposition profiles better than when only the Derjaguin-Landau-Verwey-Overbeek (DLVO) force was considered. This work provides 1) a first comprehensive framework for describing the hydrodynamic impacts of nano-scale surface roughness on particle deposition by unifying hydrodynamic forces (using the most current approaches for describing flow field profiles and hydrodynamic retardation effects) with appropriately modified expressions for DLVO interaction energies, and gravity forces in one model and 2) a foundation for further describing the impacts of more complicated scales of deposition surface roughness on particle deposition.

  12. Counter-intuitive features of the dynamic topography unveiled by tectonically realistic 3D numerical models of mantle-lithosphere interactions

    NASA Astrophysics Data System (ADS)

    Burov, Evgueni; Gerya, Taras

    2013-04-01

    It has long been assumed that the dynamic topography associated with mantle-lithosphere interactions should be characterized by long-wavelength features (> 1000 km) correlating with the morphology of mantle flow and extending beyond the scale of tectonic processes. For example, debates on the existence of mantle plumes largely originate from interpretations of expected signatures of plume-induced topography that are compared to the predictions of analytical and numerical models of plume- or mantle-lithosphere interactions (MLI). Yet, most of the large-scale models treat the lithosphere as a homogeneous stagnant layer. We show that in continents, the dynamic topography is strongly affected by the rheological properties and layered structure of the lithosphere. To that end, we reconcile mantle- and tectonic-scale models by introducing a tectonically realistic continental plate model in a 3D large-scale plume-mantle-lithosphere interaction context. This model accounts for the stratified structure of the continental lithosphere, ductile and frictional (Mohr-Coulomb) plastic properties, and thermodynamically consistent density variations. The experiments reveal a number of important differences from the predictions of the conventional models. In particular, plate bending, mechanical decoupling of crustal and mantle layers and intra-plate tension-compression instabilities result in transient topographic signatures such as alternating small-scale surface features that could be misinterpreted in terms of regional tectonics. In fact, the thick ductile lower crustal layer absorbs most of the "direct" dynamic topography, and the features produced at the surface are mostly controlled by the mechanical instabilities in the upper and intermediate crustal layers produced by MLI-induced shear and bending at the Moho and LAB. Moreover, the 3D models predict an anisotropic response of the lithosphere even in the case of isotropic forcing by axisymmetric mantle upwellings such as plumes. In particular, in the presence of a small (i.e., insufficient on its own to produce any significant deformation) uniaxial extensional tectonic stress field, the plume-produced surface and LAB features have anisotropic linear shapes perpendicular to the far-field tectonic forces, typical of continental rifts. A compressional field results in singular sub-linear folds above the plume head, perpendicular to the direction of compression. Small bi-axial tectonic stress fields (compression in one direction and extension in the orthogonal direction) result in oblique, almost linear segmented normal or inverse faults with strike-slip components (or vice versa, strike-slip faults with normal or inverse components).

  13. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE PAGES

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...

    2017-09-21

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementational challenges exist in extending them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix-vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  14. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementational challenges exist in extending them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix-vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  15. Statistical Measures of Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Vogeley, Michael; Geller, Margaret; Huchra, John; Park, Changbom; Gott, J. Richard

    1993-12-01

    To quantify clustering in the large-scale distribution of galaxies and to test theories for the formation of structure in the universe, we apply statistical measures to the CfA Redshift Survey. This survey is complete to m_B(0) = 15.5 over two contiguous regions which cover one-quarter of the sky and include ~11,000 galaxies. The salient features of these data are voids with diameter 30-50 h^-1 Mpc and coherent dense structures with a scale ~100 h^-1 Mpc. Comparison with N-body simulations rules out the "standard" CDM model (Ω = 1, b = 1.5, σ8 = 1) at the 99% confidence level because this model has insufficient power on scales λ > 30 h^-1 Mpc. An unbiased open-universe CDM model (Ωh = 0.2) and a biased CDM model with non-zero cosmological constant (Ωh = 0.24, λ0 = 0.6) match the observed power spectrum. The amplitude of the power spectrum depends on the luminosity of galaxies in the sample; bright (L > L*) galaxies are more strongly clustered than faint galaxies. The paucity of bright galaxies in low-density regions may explain this dependence. To measure the topology of large-scale structure, we compute the genus of isodensity surfaces of the smoothed density field. On scales in the "non-linear" regime, <= 10 h^-1 Mpc, the high- and low-density regions are multiply-connected over a broad range of density threshold, as in a filamentary net. On smoothing scales > 10 h^-1 Mpc, the topology is consistent with statistics of a Gaussian random field. Simulations of CDM models fail to produce the observed coherence of structure on non-linear scales (>95% confidence level). The underdensity probability (the frequency of regions with density contrast δρ/ρ = -0.8) depends strongly on the luminosity of galaxies; underdense regions are significantly more common (>2σ) in bright (L > L*) galaxy samples than in samples which include fainter galaxies.

  16. Tensor scale: An analytic approach with efficient computation and applications

    PubMed Central

    Xu, Ziyue; Saha, Punam K.; Dasgupta, Soura

    2015-01-01

    Scale is a widely used notion in computer vision and image understanding that evolved in the form of scale-space theory, where the key idea is to represent and analyze an image at various resolutions. Recently, we introduced a notion of local morphometric scale referred to as "tensor scale" using an ellipsoidal model that yields a unified representation of structure size, orientation and anisotropy. In the previous work, tensor scale was described using a 2-D algorithmic approach and a precise analytic definition was missing. Also, the application of tensor scale in 3-D using the previous framework is not practical due to high computational complexity. In this paper, an analytic definition of tensor scale is formulated for n-dimensional (n-D) images that captures local structure size, orientation and anisotropy. Also, an efficient computational solution in 2- and 3-D using several novel differential geometric approaches is presented and the accuracy of results is experimentally examined. Also, a matrix representation of tensor scale is derived, facilitating several operations including tensor field smoothing to capture larger contextual knowledge. Finally, the applications of tensor scale in image filtering and n-linear interpolation are presented and their performance is examined in comparison with respective state-of-the-art methods. Specifically, the performance of tensor scale based image filtering is compared with gradient and Weickert's structure tensor based diffusive filtering algorithms. Also, the performance of tensor scale based n-linear interpolation is evaluated in comparison with standard n-linear and windowed-sinc interpolation methods. PMID:26236148
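
    The structure-tensor baseline that tensor scale is compared against can be sketched compactly: smoothed outer products of image gradients yield a per-pixel orientation and anisotropy. This is a generic illustration of that comparison method, not the paper's tensor scale algorithm.

        import numpy as np
        from scipy import ndimage

        def orientation_anisotropy(image, sigma=2.0):
            """Per-pixel orientation and anisotropy from the smoothed 2-D
            structure tensor -- a simple stand-in for the size/orientation/
            anisotropy triple that tensor scale encodes."""
            gy, gx = np.gradient(image.astype(float))
            jxx = ndimage.gaussian_filter(gx * gx, sigma)
            jxy = ndimage.gaussian_filter(gx * gy, sigma)
            jyy = ndimage.gaussian_filter(gy * gy, sigma)
            # Eigenvalues of the 2x2 tensor [[jxx, jxy], [jxy, jyy]].
            tr, det = jxx + jyy, jxx * jyy - jxy ** 2
            disc = np.sqrt(np.maximum(tr ** 2 / 4 - det, 0.0))
            lam1, lam2 = tr / 2 + disc, tr / 2 - disc
            anisotropy = (lam1 - lam2) / np.maximum(lam1 + lam2, 1e-12)
            orientation = 0.5 * np.arctan2(2 * jxy, jxx - jyy)
            return orientation, anisotropy

        img = np.tile(np.sin(np.linspace(0, 8 * np.pi, 128)), (128, 1))  # vertical stripes
        ori, ani = orientation_anisotropy(img)
        print(ani.mean())   # close to 1: a strongly anisotropic texture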

  17. Measuring the Power Spectrum with Peculiar Velocities

    NASA Astrophysics Data System (ADS)

    Macaulay, Edward; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.

    2012-01-01

    The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large-scale excess in the matter power spectrum, and can appear to be in some tension with the LCDM model. We use a composite catalogue of 4,537 peculiar velocity measurements with a characteristic depth of 33 h^-1 Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results from Macaulay et al. (2011), studying minimum variance moments of the velocity field, as calculated by Watkins, Feldman & Hudson (2009) and Feldman, Watkins & Hudson (2010). We find good agreement with the LCDM model on scales of k > 0.01 h Mpc^-1. We find an excess of power on scales of k < 0.01 h Mpc^-1, although with a 1σ uncertainty which includes the LCDM model. We find that the uncertainty in the excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and nonlinear clustering in simulated peculiar velocity catalogues, and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.

  18. Hints on the nature of dark matter from the properties of Milky Way satellites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderhalden, Donnino; Diemand, Juerg; Schneider, Aurel

    2013-03-01

    The nature of dark matter is still unknown and one of the most fundamental scientific mysteries. Although successfully describing large scales, the standard cold dark matter model (CDM) exhibits possible shortcomings on galactic and sub-galactic scales. It is exactly at these highly non-linear scales where strong astrophysical constraints can be set on the nature of the dark matter particle. While observations of the Lyman-α forest probe the matter power spectrum in the mildly non-linear regime, satellite galaxies of the Milky Way provide an excellent laboratory as a test of the underlying cosmology on much smaller scales. Here we present results from a set of high resolution simulations of a Milky Way sized dark matter halo in eight distinct cosmologies: CDM, warm dark matter (WDM) with a particle mass of 2 keV and six different cold plus warm dark matter (C+WDM) models, varying the fraction, f_wdm, and the mass, m_wdm, of the warm component. We used three different observational tests based on Milky Way satellite observations: the total satellite abundance, their radial distribution and their mass profile. We show that the requirement of simultaneously satisfying all three constraints sets very strong limits on the nature of dark matter. This shows the power of a multi-dimensional small scale approach in ruling out models which would still be allowed by large scale observations.

  19. A Comparative Study of a 1/4-Scale Gulfstream G550 Aircraft Nose Gear Model

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Neuhart, Dan H.; Zawodny, Nikolas S.; Liu, Fei; Yardibi, Tarik; Cattafesta, Louis; Van de Ven, Thomas

    2009-01-01

    A series of fluid dynamic and aeroacoustic wind tunnel experiments was performed at the University of Florida Aeroacoustic Flow Facility and the NASA Langley Basic Aerodynamic Research Tunnel on a high-fidelity 1/4-scale model of a Gulfstream G550 aircraft nose gear. The primary objectives of this study are to obtain a comprehensive aeroacoustic dataset for a nose landing gear and to provide a clearer understanding of landing gear contributions to the overall airframe noise of commercial aircraft in landing configuration. Data measurement and analysis consist of mean and fluctuating model surface pressure, noise source localization maps using a large-aperture microphone directional array, and the determination of far-field noise level spectra using a linear array of free-field microphones. A total of 24 test runs are performed, consisting of four model assembly configurations, each of which is subjected to three test section speeds, in two different test section orientations. The different model assembly configurations vary in complexity from a fully-dressed to a partially-dressed geometry. The two model orientations provide flyover and sideline views from the perspective of a phased acoustic array for noise source localization via beamforming. Results show that the torque arm section of the model exhibits the highest rms pressures for all model configurations, which is also evidenced in the sideline view noise source maps for the partially-dressed model geometries. Analysis of acoustic spectra data from the linear array microphones shows a slight decrease in sound pressure levels at mid to high frequencies for the partially-dressed, cavity-open model configuration. In addition, far-field sound pressure level spectra scale approximately with the 6th power of velocity and do not exhibit traditional Strouhal number scaling behavior.

  20. Time Hierarchies and Model Reduction in Canonical Non-linear Models

    PubMed Central

    Löwe, Hannes; Kremling, Andreas; Marin-Sanguino, Alberto

    2016-01-01

    The time-scale hierarchies of a very general class of models in differential equations is analyzed. Classical methods for model reduction and time-scale analysis have been adapted to this formalism and a complementary method is proposed. A unified theoretical treatment shows how the structure of the system can be much better understood by inspection of two sets of singular values: one related to the stoichiometric structure of the system and another to its kinetics. The methods are exemplified first through a toy model, then a large synthetic network and finally with numeric simulations of three classical benchmark models of real biological systems. PMID:27708665

  1. Scaling, soil moisture and evapotranspiration in runoff models

    NASA Technical Reports Server (NTRS)

    Wood, Eric F.

    1993-01-01

    The effects of small-scale heterogeneity in land surface characteristics on the large-scale fluxes of water and energy in the land-atmosphere system have become a central focus of many climatology research experiments. The acquisition of high-resolution land surface data through remote sensing and intensive land-climatology field experiments (like HAPEX and FIFE) has provided data to investigate the interactions between microscale land-atmosphere interactions and macroscale models. One essential research question is how to account for the small-scale heterogeneities and whether 'effective' parameters can be used in the macroscale models. To address this question of scaling, the probability distribution for evaporation is derived, which illustrates the conditions under which scaling should work. A correction algorithm that may be appropriate for the land parameterization of a GCM is derived using a second-order linearization scheme, and the performance of the algorithm is evaluated.
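
    The second-order correction idea can be stated in a few lines: for a nonlinear flux f of soil moisture S, the areal mean is approximated by f evaluated at the mean plus a curvature term proportional to the subgrid variance. The toy evaporation curve and moisture distribution below are assumptions for illustration, not the paper's parameterization.

        import numpy as np

        def areal_mean_flux(f, soil_moisture, d2f=None):
            """Second-order correction for subgrid heterogeneity:
            E[f(S)] ~ f(S_mean) + 0.5 * f''(S_mean) * Var(S)."""
            s_mean, s_var = soil_moisture.mean(), soil_moisture.var()
            if d2f is None:                      # crude numerical second derivative
                h = 1e-3
                d2f = lambda s: (f(s + h) - 2 * f(s) + f(s - h)) / h ** 2
            return f(s_mean) + 0.5 * d2f(s_mean) * s_var

        evap = lambda s: np.sqrt(np.clip(s, 0.0, 1.0))          # toy nonlinear evaporation curve
        s_field = np.random.default_rng(1).beta(2, 2, 10_000)   # heterogeneous moisture field
        # The corrected estimate tracks the true areal mean much better than f(S_mean).
        print(evap(s_field).mean(), areal_mean_flux(evap, s_field))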

  2. Large-scale linear rankSVM.

    PubMed

    Lee, Ching-Pei; Lin, Chih-Jen

    2014-04-01

    Linear rankSVM is one of the widely used methods for learning to rank. Although its performance may be inferior to nonlinear methods such as kernel rankSVM and gradient boosting decision trees, linear rankSVM is useful for quickly producing a baseline model. Furthermore, following its recent development for classification, linear rankSVM may give competitive performance for large and sparse data. A great deal of work has studied linear rankSVM, with a focus on computational efficiency when the number of preference pairs is large. In this letter, we systematically study existing works, discuss their advantages and disadvantages, and propose an efficient algorithm. We discuss different implementation issues and extensions with detailed experiments. Finally, we develop a robust linear rankSVM tool for public use.
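
    For intuition, a minimal (and deliberately naive) linear rankSVM can be written as stochastic subgradient descent on the pairwise hinge loss; efficient solvers such as the one proposed in the letter avoid enumerating preference pairs explicitly. Everything below is an illustrative sketch, not the letter's algorithm.

        import numpy as np

        def rank_svm(X, y, lam=0.1, lr=0.01, epochs=200, seed=0):
            """Tiny linear rankSVM sketch: subgradient descent on the pairwise
            hinge loss  sum max(0, 1 - w.(x_i - x_j))  over pairs with y_i > y_j,
            plus L2 regularisation."""
            rng = np.random.default_rng(seed)
            w = np.zeros(X.shape[1])
            pairs = [(i, j) for i in range(len(y)) for j in range(len(y)) if y[i] > y[j]]
            for _ in range(epochs):
                i, j = pairs[rng.integers(len(pairs))]   # sample one preference pair
                d = X[i] - X[j]
                grad = lam * w - (d if w @ d < 1 else 0)  # hinge subgradient
                w -= lr * grad
            return w

        X = np.random.default_rng(1).normal(size=(40, 3))
        y = (X @ np.array([1.0, -2.0, 0.5])).round(1)     # latent relevance scores
        w = rank_svm(X, y)
        print(np.corrcoef(X @ w, y)[0, 1])                # ranking direction recovered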

  3. Generalized linear mixed models with varying coefficients for longitudinal data.

    PubMed

    Zhang, Daowen

    2004-03-01

    The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B 61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.

  4. Assessing variance components in multilevel linear models using approximate Bayes factors: A case study of ethnic disparities in birthweight

    PubMed Central

    Saville, Benjamin R.; Herring, Amy H.; Kaufman, Jay S.

    2013-01-01

    Racial/ethnic disparities in birthweight are a large source of differential morbidity and mortality worldwide and have remained largely unexplained in epidemiologic models. We assess the impact of maternal ancestry and census tract residence on infant birth weights in New York City and the modifying effects of race and nativity by incorporating random effects in a multilevel linear model. Evaluating the significance of these predictors involves the test of whether the variances of the random effects are equal to zero. This is problematic because the null hypothesis lies on the boundary of the parameter space. We generalize an approach for assessing random effects in the two-level linear model to a broader class of multilevel linear models by scaling the random effects to the residual variance and introducing parameters that control the relative contribution of the random effects. After integrating over the random effects and variance components, the resulting integrals needed to calculate the Bayes factor can be efficiently approximated with Laplace’s method. PMID:24082430

  5. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    PubMed

    Nikoloulopoulos, Aristidis K

    2017-10-01

    A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement over the trivariate generalized linear mixed model in fit to data, and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite being three-dimensional.

  6. Measuring aging rates of mice subjected to caloric restriction and genetic disruption of growth hormone signaling

    PubMed Central

    Koopman, Jacob J.E.; van Heemst, Diana; van Bodegom, David; Bonkowski, Michael S.; Sun, Liou Y.; Bartke, Andrzej

    2016-01-01

    Caloric restriction and genetic disruption of growth hormone signaling have been shown to counteract aging in mice. The effects of these interventions on aging are examined through age-dependent survival or through the increase in age-dependent mortality rates on a logarithmic scale fitted to the Gompertz model. However, these methods have limitations that impede a fully comprehensive disclosure of these effects. Here we examine the effects of these interventions on murine aging through the increase in age-dependent mortality rates on a linear scale without fitting them to a model like the Gompertz model. Whereas these interventions negligibly and non-consistently affected the aging rates when examined through the age-dependent mortality rates on a logarithmic scale, they caused the aging rates to increase at higher ages and to higher levels when examined through the age-dependent mortality rates on a linear scale. These results add to the debate whether these interventions postpone or slow aging and to the understanding of the mechanisms by which they affect aging. Since different methods yield different results, it is worthwhile to compare their results in future research to obtain further insights into the effects of dietary, genetic, and other interventions on the aging of mice and other species. PMID:26959761

  7. Measuring aging rates of mice subjected to caloric restriction and genetic disruption of growth hormone signaling.

    PubMed

    Koopman, Jacob J E; van Heemst, Diana; van Bodegom, David; Bonkowski, Michael S; Sun, Liou Y; Bartke, Andrzej

    2016-03-01

    Caloric restriction and genetic disruption of growth hormone signaling have been shown to counteract aging in mice. The effects of these interventions on aging are examined through age-dependent survival or through the increase in age-dependent mortality rates on a logarithmic scale fitted to the Gompertz model. However, these methods have limitations that impede a fully comprehensive disclosure of these effects. Here we examine the effects of these interventions on murine aging through the increase in age-dependent mortality rates on a linear scale without fitting them to a model like the Gompertz model. Whereas these interventions negligibly and non-consistently affected the aging rates when examined through the age-dependent mortality rates on a logarithmic scale, they caused the aging rates to increase at higher ages and to higher levels when examined through the age-dependent mortality rates on a linear scale. These results add to the debate whether these interventions postpone or slow aging and to the understanding of the mechanisms by which they affect aging. Since different methods yield different results, it is worthwhile to compare their results in future research to obtain further insights into the effects of dietary, genetic, and other interventions on the aging of mice and other species.
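
    The distinction the authors draw can be reproduced on synthetic data: for a Gompertz cohort the aging rate is constant on a logarithmic scale but accelerates with age on a linear scale. The hazard parameters below are arbitrary toy values, not estimates from the mouse cohorts.

        import numpy as np

        # Toy Gompertz cohort: hazard h(t) = a * exp(b * t).
        a, b = 1e-4, 0.12
        age = np.arange(0.0, 40.0, 0.5)                      # age in arbitrary units
        hazard = a * np.exp(b * age)

        # On a log scale the aging rate is the constant Gompertz slope b ...
        log_slope = np.gradient(np.log(hazard), age)
        # ... while on a linear scale the rate of increase keeps accelerating.
        lin_slope = np.gradient(hazard, age)

        print(log_slope[5], log_slope[-5])   # ~0.12 at all ages
        print(lin_slope[5], lin_slope[-5])   # grows steeply with age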

  8. Comparison between a Weibull proportional hazards model and a linear model for predicting the genetic merit of US Jersey sires for daughter longevity.

    PubMed

    Caraviello, D Z; Weigel, K A; Gianola, D

    2004-05-01

    Predicted transmitting abilities (PTA) of US Jersey sires for daughter longevity were calculated using a Weibull proportional hazards sire model and compared with predictions from a conventional linear animal model. Culling data from 268,008 Jersey cows with first calving from 1981 to 2000 were used. The proportional hazards model included time-dependent effects of herd-year-season contemporary group and parity by stage of lactation interaction, as well as time-independent effects of sire and age at first calving. Sire variances and parameters of the Weibull distribution were estimated, providing heritability estimates of 4.7% on the log scale and 18.0% on the original scale. The PTA of each sire was expressed as the expected risk of culling relative to daughters of an average sire. Risk ratios (RR) ranged from 0.7 to 1.3, indicating that the risk of culling for daughters of the best sires was 30% lower than for daughters of average sires and nearly 50% lower than for daughters of the poorest sires. Sire PTA from the proportional hazards model were compared with PTA from a linear model similar to that used for routine national genetic evaluation of length of productive life (PL) using cross-validation in independent samples of herds. Models were compared using logistic regression of daughters' stayability to second, third, fourth, or fifth lactation on their sires' PTA values, with alternative approaches for weighting the contribution of each sire. Models were also compared using logistic regression of daughters' stayability to 36, 48, 60, 72, and 84 mo of life. The proportional hazards model generally yielded more accurate predictions according to these criteria, but differences in predictive ability between methods were smaller when using a Kullback-Leibler distance than with other approaches. Results of this study suggest that survival analysis methodology may provide more accurate predictions of genetic merit for longevity than conventional linear models.

  9. A parametrisation of modified gravity on nonlinear cosmological scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lombriser, Lucas, E-mail: llo@roe.ac.uk

    2016-11-01

    Viable modifications of gravity on cosmological scales predominantly rely on screening mechanisms to recover Einstein's Theory of General Relativity in the Solar System, where it has been well tested. A parametrisation of the effects of such modifications in the spherical collapse model is presented here for use in modelling the modified nonlinear cosmological structure. The formalism allows an embedding of the different screening mechanisms operating in scalar-tensor theories through large values of the gravitational potential or its first or second derivatives, as well as of linear suppression effects or more general transitions between modified and Einstein gravity limits. Each screening or suppression mechanism is parametrised by a time, mass, and environment dependent screening scale; an effective modified gravitational coupling in the fully unscreened limit that can be matched to linear theory; the exponent of a power-law radial profile of the screened coupling, determined by derivatives, symmetries, and potentials in the scalar field equation; and an interpolation rate between the screened and unscreened limits. Along with generalised perturbative methods, the parametrisation may be used to formulate a nonlinear extension to the linear parametrised post-Friedmannian framework to enable generalised tests of gravity with the wealth of observations from the nonlinear cosmological regime.

  10. Model Uncertainty Quantification Methods For Data Assimilation In Partially Observed Multi-Scale Systems

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; van Leeuwen, P. J.

    2017-12-01

    Model uncertainty quantification remains one of the central challenges of effective Data Assimilation (DA) in complex, partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger-scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A distinctive feature is that these realisations are binned conditional on the previous model state during the minimization process, allowing for the recovery of complex error structures. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.

  11. Cascade model for fluvial geomorphology

    NASA Technical Reports Server (NTRS)

    Newman, W. I.; Turcotte, D. L.

    1990-01-01

    Erosional landscapes are generally scale invariant and fractal. Spectral studies provide quantitative confirmation of this statement. Linear theories of erosion will not generate scale-invariant topography. In order to explain the fractal behavior of landscapes a modified Fourier series has been introduced that is the basis for a renormalization approach. A nonlinear dynamical model has been introduced for the decay of the modified Fourier series coefficients that yield a fractal spectra. It is argued that a physical basis for this approach is that a fractal (or nearly fractal) distribution of storms (floods) continually renews erosional features on all scales.
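
    The link between a power-law spectrum and scale-invariant topography is easy to demonstrate: assigning Fourier coefficients a power-law amplitude decay with random phases produces a fractal profile. This sketch illustrates the target spectral behaviour only; it is not the paper's nonlinear cascade dynamics, and the spectral exponent is an arbitrary choice.

        import numpy as np

        def synthetic_topography(n=256, beta=2.0, seed=0):
            """Scale-invariant 1-D 'topography' built by giving Fourier
            coefficients a power-law decay |c_k|^2 ~ k**(-beta) with
            random phases."""
            rng = np.random.default_rng(seed)
            k = np.fft.rfftfreq(n, d=1.0)[1:]                 # skip the mean (k = 0)
            amp = k ** (-beta / 2.0)                          # amplitude ~ sqrt(power)
            phases = rng.uniform(0, 2 * np.pi, size=k.size)
            spec = np.concatenate(([0.0], amp * np.exp(1j * phases)))
            return np.fft.irfft(spec, n=n)

        z = synthetic_topography()
        print(z.std())   # a rough, self-affine elevation profile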

  12. Large-scale structure in superfluid Chaplygin gas cosmology

    NASA Astrophysics Data System (ADS)

    Yang, Rongjia

    2014-03-01

    We investigate the growth of large-scale structure in the superfluid Chaplygin gas (SCG) model. Both linear and nonlinear growth, such as σ8 and the skewness S3, are discussed. We find the growth factor of SCG reduces to the Einstein-de Sitter case at early times, while it differs from the cosmological constant model (ΛCDM) case in the large-a limit. We also find there will be more structure growth on large scales in the SCG scenario than in ΛCDM, and that the variations of σ8 and S3 between SCG and ΛCDM cannot be discriminated.

  13. Variation objective analyses for cyclone studies

    NASA Technical Reports Server (NTRS)

    Achtemeier, G. L.; Kidder, S. Q.; Ochs, H. T.

    1985-01-01

    The objectives were to: (1) develop an objective analysis technique that will maximize the information content of data available from diverse sources, with particular emphasis on the incorporation of observations from satellites with those from more traditional immersion techniques; and (2) develop a diagnosis of the state of the synoptic-scale atmosphere on a much finer scale over a much broader region than is presently possible, to permit studies of the interactions and energy transfers between global, synoptic and regional scale atmospheric processes. The variational objective analysis model consists of the two horizontal momentum equations, the hydrostatic equation, and the integrated continuity equation for a dry hydrostatic atmosphere. Preliminary tests of the model with the SESAME I data set are underway for 12 GMT 10 April 1979. At this stage the purpose of the analysis is not the diagnosis of atmospheric structures but rather the validation of the model. Model runs with rawinsonde data, and with the precision modulus weights set to force most of the adjustment of the wind field to the mass field, have produced 90 to 95 percent reductions in the imbalance of the initial data after only 4 cycles through the Euler-Lagrange equations. Sensitivity tests for linear stability of the 11 Euler-Lagrange equations that make up the VASP Model 1 indicate that there will be a lower limit to the scales of motion that can be resolved by this method. Linear stability criteria are violated where there is large horizontal wind shear near the upper-tropospheric jet.

  14. Evaluation of electrical impedance ratio measurements in accuracy of electronic apex locators.

    PubMed

    Kim, Pil-Jong; Kim, Hong-Gee; Cho, Byeong-Hoon

    2015-05-01

    The aim of this paper was to evaluate, through a correlation analysis, the ratios of electrical impedance measurements reported in previous studies, in order to establish them as the contributing factor to the accuracy of electronic apex locators (EALs). The literature regarding electrical property measurements of EALs was screened using Medline and Embase. All data acquired were plotted to identify correlations between impedance and log-scaled frequency. The accuracy of the impedance ratio method used to detect the apical constriction (APC) in most EALs was evaluated using linear ramp function fitting. Changes of impedance ratios for various frequencies were evaluated for a variety of file positions. Among the ten papers selected in the search process, the first-order relationships between log-scaled frequency and impedance all had negative slopes. When the model for the ratios was assumed to be a linear ramp function, the ratio values decreased as the file went deeper, and the average ratio values of the left and right horizontal zones were significantly different in 8 out of 9 studies. The APC was located within the interval of linear relation between the left and right horizontal zones of the linear ramp model. Using the ratio method, the APC was located within a linear interval. Therefore, the ratio between electrical impedance measurements at different frequencies is a robust method for detection of the APC.
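    The linear ramp model can be fitted directly with non-linear least squares. A minimal sketch on hypothetical impedance-ratio data versus file depth (parameter names and values are illustrative, not those of the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def linear_ramp(x, left, right, x0, x1):
    """Constant 'left' for x < x0, linear transition on [x0, x1],
    constant 'right' for x > x1."""
    return np.where(x < x0, left,
           np.where(x > x1, right,
                    left + (right - left) * (x - x0) / (x1 - x0)))

# Hypothetical data: impedance ratio versus file position (mm from apex)
rng = np.random.default_rng(2)
depth = np.linspace(-3, 3, 61)
ratio = linear_ramp(depth, 3.0, 1.0, -1.0, 1.0) \
        + 0.05 * rng.normal(size=depth.size)

# A reasonable initial guess keeps the fit away from x0 == x1
popt, _ = curve_fit(linear_ramp, depth, ratio, p0=(3.0, 1.0, -1.0, 1.0))
left, right, x0, x1 = popt
print(f"linear interval: {x0:.2f} to {x1:.2f} mm")  # APC expected inside
```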

  15. Perturbation theory for cosmologies with nonlinear structure

    NASA Astrophysics Data System (ADS)

    Goldberg, Sophia R.; Gallagher, Christopher S.; Clifton, Timothy

    2017-11-01

    The next generation of cosmological surveys will operate over unprecedented scales, and will therefore provide exciting new opportunities for testing general relativity. The standard method for modelling the structures that these surveys will observe is to use cosmological perturbation theory for linear structures on horizon-sized scales, and Newtonian gravity for nonlinear structures on much smaller scales. We propose a two-parameter formalism that generalizes this approach, thereby allowing interactions between large and small scales to be studied in a self-consistent and well-defined way. This uses both post-Newtonian gravity and cosmological perturbation theory, and can be used to model realistic cosmological scenarios including matter, radiation and a cosmological constant. We find that the resulting field equations can be written as a hierarchical set of perturbation equations. At leading-order, these equations allow us to recover a standard set of Friedmann equations, as well as a Newton-Poisson equation for the inhomogeneous part of the Newtonian energy density in an expanding background. For the perturbations in the large-scale cosmology, however, we find that the field equations are sourced by both nonlinear and mode-mixing terms, due to the existence of small-scale structures. These extra terms should be expected to give rise to new gravitational effects, through the mixing of gravitational modes on small and large scales—effects that are beyond the scope of standard linear cosmological perturbation theory. We expect our formalism to be useful for accurately modeling gravitational physics in universes that contain nonlinear structures, and for investigating the effects of nonlinear gravity in the era of ultra-large-scale surveys.

  16. Normal forms for reduced stochastic climate models

    PubMed Central

    Majda, Andrew J.; Franzke, Christian; Crommelin, Daan

    2009-01-01

    The systematic development of reduced low-dimensional stochastic climate models from observations or comprehensive high-dimensional climate models is an important topic for atmospheric low-frequency variability, climate sensitivity, and improved extended range forecasting. Here techniques from applied mathematics are utilized to systematically derive normal forms for reduced stochastic climate models for low-frequency variables. The use of a few Empirical Orthogonal Functions (EOFs) (also known as Principal Component Analysis, Karhunen–Loève and Proper Orthogonal Decomposition) depending on observational data to span the low-frequency subspace requires the assessment of dyad interactions besides the more familiar triads in the interaction between the low- and high-frequency subspaces of the dynamics. It is shown below that the dyad and multiplicative triad interactions combine with the climatological linear operator interactions to simultaneously produce both strong nonlinear dissipation and Correlated Additive and Multiplicative (CAM) stochastic noise. For a single low-frequency variable the dyad interactions and climatological linear operator alone produce a normal form with CAM noise from advection of the large scales by the small scales and simultaneously strong cubic damping. These normal forms should prove useful for developing systematic strategies for the estimation of stochastic models from climate data. As an illustrative example the one-dimensional normal form is applied below to low-frequency patterns such as the North Atlantic Oscillation (NAO) in a climate model. The results here also illustrate the shortcomings of a recent linear scalar CAM noise model proposed elsewhere for low-frequency variability. PMID:19228943
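    A minimal Euler-Maruyama sketch of the one-dimensional normal form with cubic damping and CAM noise; the coefficients below are illustrative, not estimated from climate data:

```python
import numpy as np

# One-dimensional normal form with cubic damping and CAM noise:
#   dx = (F + a*x - c*x**3) dt + (sigma_A + sigma_M * x) dW
F, a, c = 0.0, 0.5, 1.0
sigma_A, sigma_M = 0.3, 0.2
dt, n_steps = 1e-3, 100_000

rng = np.random.default_rng(3)
x = np.empty(n_steps)
x[0] = 0.0
for i in range(1, n_steps):
    dW = np.sqrt(dt) * rng.normal()
    drift = F + a * x[i - 1] - c * x[i - 1] ** 3
    x[i] = x[i - 1] + drift * dt + (sigma_A + sigma_M * x[i - 1]) * dW

# The multiplicative component skews the stationary distribution
# relative to a purely additive noise model.
skew = ((x - x.mean()) ** 3).mean() / x.std() ** 3
print(f"mean = {x.mean():.3f}, skewness ~ {skew:.3f}")
```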

  17. The linear relationship between the Vulnerable Elders Survey-13 score and mortality in an Asian population of community-dwelling older persons.

    PubMed

    Wang, Jye; Lin, Wender; Chang, Ling-Hui

    2018-01-01

    The Vulnerable Elders Survey-13 (VES-13) has been used as a screening tool to identify vulnerable community-dwelling older persons for more in-depth assessment and targeted interventions. Although many studies supported its use in different populations, few have addressed Asian populations. The optimal scaling system for the VES-13 in predicting health outcomes also has not been adequately tested. This study (1) assesses the applicability of the VES-13 to predict the mortality of community-dwelling older persons in Taiwan, (2) identifies the best scaling system for the VES-13 in predicting mortality using generalized additive models (GAMs), and (3) determines whether including covariates, such as socio-demographic factors and common geriatric syndromes, improves model fitting. This retrospective longitudinal cohort study analyzed the data of 2184 community-dwelling persons 65 years old or older from the 2003 wave of the nationwide Taiwan Longitudinal Study on Aging. Cox proportional hazards models and GAMs were used. The VES-13 significantly predicted the mortality of Taiwan's community-dwelling elders. A one-point increase in the VES-13 score raised the risk of death by 26% (hazard ratio, 1.26; 95% confidence interval, 1.21-1.32). The hazard ratio of death increased linearly with each additional VES-13 score point, suggesting that using a continuous scale is appropriate. Inclusion of socio-demographic factors and geriatric syndromes improved the model fitting. The VES-13 is appropriate for an Asian population. VES-13 scores linearly predict the mortality of this population. Adjusting the weighting of the physical activity items may improve the performance of the VES-13. Copyright © 2017 Elsevier B.V. All rights reserved.
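    Because the log hazard is linear in the score, the reported per-point hazard ratio compounds multiplicatively, as in this small illustration:

```python
# A per-point hazard ratio of 1.26 on a linear log-hazard scale means a
# k-point score difference multiplies the hazard by 1.26**k.
hr_per_point = 1.26
for k in (1, 3, 5):
    print(f"{k}-point difference: hazard ratio = {hr_per_point ** k:.2f}")
# 3 points ~ 2.0x the hazard, 5 points ~ 3.2x
```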

  18. Network diffusion accurately models the relationship between structural and functional brain connectivity networks

    PubMed Central

    Abdelnour, Farras; Voss, Henning U.; Raj, Ashish

    2014-01-01

    The relationship between anatomic connectivity of large-scale brain networks and their functional connectivity is of immense importance and an area of active research. Previous attempts have required complex simulations which model the dynamics of each cortical region, and explore the coupling between regions as derived by anatomic connections. While much insight is gained from these non-linear simulations, they can be computationally taxing tools for predicting functional from anatomic connectivities. Little attention has been paid to linear models. Here we show that a properly designed linear model appears to be superior to previous non-linear approaches in capturing the brain’s long-range second order correlation structure that governs the relationship between anatomic and functional connectivities. We derive a linear network of brain dynamics based on graph diffusion, whereby the diffusing quantity undergoes a random walk on a graph. We test our model using subjects who underwent diffusion MRI and resting state fMRI. The network diffusion model applied to the structural networks largely predicts the correlation structures derived from their fMRI data, to a greater extent than other approaches. The utility of the proposed approach is that it can routinely be used to infer functional correlation from anatomic connectivity. And since it is linear, anatomic connectivity can also be inferred from functional data. The success of our model confirms the linearity of ensemble average signals in the brain, and implies that their long-range correlation structure may percolate within the brain via purely mechanistic processes enacted on its structural connectivity pathways. PMID:24384152
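    A minimal sketch of the graph-diffusion prediction described above, assuming a symmetric normalized Laplacian and an illustrative diffusion parameter beta (the 4-node network is hypothetical):

```python
import numpy as np
from scipy.linalg import expm

def functional_from_structural(C, beta=1.0):
    """Predict a functional connectivity matrix from a structural one
    via the network diffusion kernel exp(-beta * L)."""
    deg = C.sum(axis=1)
    # Symmetric normalized graph Laplacian L = I - D^(-1/2) C D^(-1/2)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(C.shape[0]) - d_inv_sqrt @ C @ d_inv_sqrt
    return expm(-beta * L)

# Hypothetical 4-region structural network (weighted, symmetric)
C = np.array([[0, 2, 1, 0],
              [2, 0, 0, 1],
              [1, 0, 0, 2],
              [0, 1, 2, 0]], dtype=float)
F_pred = functional_from_structural(C, beta=0.8)
```

    Because the map is linear in the diffusing quantity, the same kernel can in principle be inverted to infer structure from functional correlations, which is the symmetry the abstract points out.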

  19. Investigation of scale effects in the TRF determined by VLBI

    NASA Astrophysics Data System (ADS)

    Wahl, Daniel; Heinkelmann, Robert; Schuh, Harald

    2017-04-01

    The improvement of the International Terrestrial Reference Frame (ITRF) is of great significance for Earth sciences and one of the major tasks in geodesy. The translation, rotation and scale-factor, as well as their linear rates, are solved in a 14-parameter transformation between the individual frames of each space geodetic technique and the combined frame. In ITRF2008, as well as in the current release ITRF2014, the scale-factor is provided by Very Long Baseline Interferometry (VLBI) and Satellite Laser Ranging (SLR) in equal shares. Since VLBI measures extremely precise group delays that are transformed to baseline lengths via the speed of light, a natural constant, VLBI is the most suitable technique for providing the scale. The aim of the current work is to identify possible shortcomings in the VLBI scale contribution to ITRF2008. To develop recommendations for an enhanced estimation, scale effects in the Terrestrial Reference Frame (TRF) determined with VLBI are considered in detail and compared to ITRF2008. In contrast to station coordinates, where the scale is defined by a geocentric position vector pointing from the origin of the reference frame to the station, baselines are not related to the origin: they describe the absolute scale independently of the datum. The more accurately a baseline length, and consequently the scale, is estimated by VLBI, the better the scale contribution to the ITRF. Considering time series of baseline lengths between different stations, a non-linear periodic signal can clearly be recognized, caused by seasonal effects at the observation sites. Modeling these seasonal effects and subtracting them from the original data enhances the repeatability of single baselines significantly. Other effects that strongly influence the scale are jumps in the baseline-length time series, mainly caused by major earthquakes. Co- and post-seismic effects, which likewise have a non-linear character, can be identified in the data. Modeling this non-linear motion, or completely excluding affected stations, is another important step towards an improved scale determination. In addition to the investigation of single-baseline repeatabilities, the spatial transformation performed to determine the parameters of ITRF2008 is also considered. Since the reliability of the resulting transformation parameters increases with the number of identical points used in the transformation, an approach in which all possible stations are used as control points is understandable. Experiments that examine the scale-factor and its spatial behavior between control points in ITRF2008 and coordinates determined by VLBI alone showed that the network geometry also has a large influence on the outcome. If an unevenly distributed network is introduced for the datum configuration, the correlations between the translation parameters and the scale-factor can become remarkably high. Only a homogeneous spatial distribution of participating stations yields a maximally uncorrelated scale-factor that can be interpreted independently of the other parameters. In the current release of the ITRF, ITRF2014, non-linear effects in the station coordinate time series are taken into account for the first time. The present work confirms the importance of this modification to the ITRF calculation and identifies further improvements that lead to an enhanced scale determination.
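    A minimal sketch of the seasonal-signal removal step: fit and subtract an annual sinusoid from a baseline-length series and compare repeatabilities (the series below is synthetic; amplitudes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 500)                      # epochs in years
baseline = 6000e3 + 0.004 * np.sin(2 * np.pi * t + 0.7) \
           + 0.002 * rng.normal(size=t.size)     # metres: annual signal + noise

# Least-squares fit of offset + annual sine/cosine terms
A = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * t),
                     np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, baseline, rcond=None)
residual = baseline - A @ coef

print(f"repeatability before: {baseline.std():.4f} m, "
      f"after: {residual.std():.4f} m")
```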

  20. The Determination of the Large-Scale Circulation of the Pacific Ocean from Satellite Altimetry using Model Green's Functions

    NASA Technical Reports Server (NTRS)

    Stammer, Detlef; Wunsch, Carl

    1996-01-01

    A Green's function method for obtaining an estimate of the ocean circulation using both a general circulation model and altimetric data is demonstrated. The fundamental assumption is that the model is so accurate that the differences between the observations and the model-estimated fields obey a linear dynamics. In the present case, the calculations are demonstrated for model/data differences occurring on a very large scale, where the linearization hypothesis appears to be a good one. A semi-automatic linearization of the Bryan/Cox general circulation model is effected by calculating the model response to a series of isolated (in both space and time) geostrophically balanced vortices. The resulting impulse responses, or 'Green's functions', then provide the kernels for a linear inverse problem. The method is first demonstrated with a set of 'twin experiments' and then with real data spanning the entire model domain and a year of TOPEX/POSEIDON observations. Our present focus is on the estimate of the time-mean and annual cycle of the model. Residuals of the inversion/assimilation are largest in the western tropical Pacific and are believed to reflect primarily geoid error. Vertical resolution diminishes with depth with 1 year of data. The model mean is modified such that the subtropical gyre is weakened by about 1 cm/s and the center of the gyre is shifted southward by about 10 deg. Corrections to the flow field at the annual cycle suggest that the dynamical response is weak except in the tropics, where the estimated seasonal cycle of the low-latitude current system is of the order of 2 cm/s. The underestimation of observed fluctuations can be related to the inversion on the coarse spatial grid, which does not permit full resolution of the tropical physics. The methodology is easily extended to higher resolution, to the use of spatially correlated errors, and to other data types.
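    A minimal sketch of the linear inverse step: stack the impulse responses as columns of a kernel matrix and solve a regularized least-squares problem for the source amplitudes (all matrices below are synthetic stand-ins):

```python
import numpy as np

def invert_greens(G, d, alpha=1e-2):
    """Solve min ||G x - d||^2 + alpha ||x||^2 for amplitudes x.

    G : (n_obs, n_sources) matrix whose columns are model impulse
        responses ('Green's functions') sampled at observation points
    d : (n_obs,) vector of model/data differences
    """
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ d)

# Tiny synthetic check: recover known amplitudes from noisy observations
rng = np.random.default_rng(5)
G = rng.normal(size=(200, 10))
x_true = rng.normal(size=10)
d = G @ x_true + 0.01 * rng.normal(size=200)
x_est = invert_greens(G, d)
```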

  1. [Scale Relativity Theory in living beings' morphogenesis: fractal, determinism and chance].

    PubMed

    Chaline, J

    2012-10-01

    The Scale Relativity Theory has many biological applications, from linear to non-linear and from classical mechanics to quantum mechanics. Self-similar laws have been used as models for the description of a huge number of biological systems. These laws may explain the origin of basal life structures. Log-periodic behaviors of acceleration or deceleration can be applied to branching macroevolution and to the time sequences of major evolutionary leaps. The existence of such a law does not mean that the role of chance in evolution is reduced, but instead that randomness and contingency may occur within a framework which may itself be structured in a partly statistical way. The Scale Relativity Theory can open new perspectives in evolution. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  2. Scaling up stomatal conductance from leaf to canopy using a dual-leaf model for estimating crop evapotranspiration.

    PubMed

    Ding, Risheng; Kang, Shaozhong; Du, Taisheng; Hao, Xinmei; Zhang, Yanqun

    2014-01-01

    The dual-source Shuttleworth-Wallace model has been widely used to estimate and partition crop evapotranspiration (λET). Canopy stomatal conductance (Gsc), an essential parameter of the model, is often calculated by scaling up leaf stomatal conductance, considering the canopy as one single leaf in a so-called "big-leaf" model. However, Gsc can be overestimated or underestimated depending on leaf area index level in the big-leaf model, due to a non-linear stomatal response to light. A dual-leaf model, scaling up Gsc from leaf to canopy, was developed in this study. The non-linear stomata-light relationship was incorporated by dividing the canopy into sunlit and shaded fractions and calculating each fraction separately according to absorbed irradiances. The model includes: (1) the absorbed irradiance, determined by separately integrating the sunlit and shaded leaves with consideration of both beam and diffuse radiation; (2) leaf area for the sunlit and shaded fractions; and (3) a leaf conductance model that accounts for the response of stomata to PAR, vapor pressure deficit and available soil water. In contrast to the significant errors of Gsc in the big-leaf model, the predicted Gsc using the dual-leaf model had a high degree of data-model agreement; the slope of the linear regression between daytime predictions and measurements was 1.01 (R2 = 0.98), with RMSE of 0.6120 mm s-1 for four clear-sky days in different growth stages. The estimates of half-hourly λET using the dual-source dual-leaf model (DSDL) agreed well with measurements and the error was within 5% during two growing seasons of maize with differing hydrometeorological and management strategies. Moreover, the estimates of soil evaporation using the DSDL model closely matched actual measurements. Our results indicate that the DSDL model can produce more accurate estimation of Gsc and λET, compared to the big-leaf model, and thus is an effective alternative approach for estimating and partitioning λET.
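    A minimal sketch of the sunlit/shaded upscaling and of the big-leaf bias it corrects, using an illustrative saturating light response (the functional form and all numbers are assumptions, not the paper's conductance model):

```python
import numpy as np

def leaf_conductance(par, g_max=0.4, k_half=400.0):
    """Illustrative leaf stomatal conductance light response
    (mol m-2 s-1): rectangular hyperbola saturating in PAR."""
    return g_max * par / (par + k_half)

def dual_leaf_gsc(par_sun, par_shade, lai_sun, lai_shade):
    """Canopy conductance as the sum of sunlit and shaded contributions."""
    return (leaf_conductance(par_sun) * lai_sun
            + leaf_conductance(par_shade) * lai_shade)

# Hypothetical mid-day canopy: bright sunlit and dim shaded fractions
par_sun, par_shade = 1500.0, 200.0   # umol m-2 s-1
lai_sun, lai_shade = 1.2, 2.8        # m2 m-2

gsc_dual = dual_leaf_gsc(par_sun, par_shade, lai_sun, lai_shade)
# Big-leaf equivalent: one leaf exposed to the LAI-weighted mean PAR
par_mean = (par_sun * lai_sun + par_shade * lai_shade) / (lai_sun + lai_shade)
gsc_big = leaf_conductance(par_mean) * (lai_sun + lai_shade)
print(f"dual-leaf Gsc = {gsc_dual:.3f}, big-leaf Gsc = {gsc_big:.3f}")
```

    Because the light response is concave, applying it to the mean irradiance overestimates the canopy total here, which is the kind of big-leaf error the dual-leaf split avoids.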

  3. Scaling Up Stomatal Conductance from Leaf to Canopy Using a Dual-Leaf Model for Estimating Crop Evapotranspiration

    PubMed Central

    Ding, Risheng; Kang, Shaozhong; Du, Taisheng; Hao, Xinmei; Zhang, Yanqun

    2014-01-01

    The dual-source Shuttleworth-Wallace model has been widely used to estimate and partition crop evapotranspiration (λET). Canopy stomatal conductance (Gsc), an essential parameter of the model, is often calculated by scaling up leaf stomatal conductance, considering the canopy as one single leaf in a so-called “big-leaf” model. However, Gsc can be overestimated or underestimated depending on leaf area index level in the big-leaf model, due to a non-linear stomatal response to light. A dual-leaf model, scaling up Gsc from leaf to canopy, was developed in this study. The non-linear stomata-light relationship was incorporated by dividing the canopy into sunlit and shaded fractions and calculating each fraction separately according to absorbed irradiances. The model includes: (1) the absorbed irradiance, determined by separately integrating the sunlit and shaded leaves with consideration of both beam and diffuse radiation; (2) leaf area for the sunlit and shaded fractions; and (3) a leaf conductance model that accounts for the response of stomata to PAR, vapor pressure deficit and available soil water. In contrast to the significant errors of Gsc in the big-leaf model, the predicted Gsc using the dual-leaf model had a high degree of data-model agreement; the slope of the linear regression between daytime predictions and measurements was 1.01 (R2 = 0.98), with RMSE of 0.6120 mm s−1 for four clear-sky days in different growth stages. The estimates of half-hourly λET using the dual-source dual-leaf model (DSDL) agreed well with measurements and the error was within 5% during two growing seasons of maize with differing hydrometeorological and management strategies. Moreover, the estimates of soil evaporation using the DSDL model closely matched actual measurements. Our results indicate that the DSDL model can produce more accurate estimation of Gsc and λET, compared to the big-leaf model, and thus is an effective alternative approach for estimating and partitioning λET. PMID:24752329

  4. Newtonian self-gravitating system in a relativistic huge void universe model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishikawa, Ryusuke; Nakao, Ken-ichi; Yoo, Chul-Moon, E-mail: ryusuke@sci.osaka-cu.ac.jp, E-mail: knakao@sci.osaka-cu.ac.jp, E-mail: yoo@gravity.phys.nagoya-u.ac.jp

    We consider a test of the Copernican Principle through observations of the large-scale structures, and for this purpose we study the self-gravitating system in a relativistic huge void universe model which does not invoke the Copernican Principle. If we focus on the weakly self-gravitating and slowly evolving system whose spatial extent is much smaller than the scale of the cosmological horizon in the homogeneous and isotropic background universe model, the cosmological Newtonian approximation is available. Also in the huge void universe model, the same kind of approximation as the cosmological Newtonian approximation is available for the analysis of the perturbations contained in a region whose spatial size is much smaller than the scale of the huge void: the effects of the huge void are taken into account in a perturbative manner by using the Fermi normal coordinates. By using this approximation, we derive the equations of motion for the weakly self-gravitating perturbations whose elements have relative velocities much smaller than the speed of light, and show that the derived equations can be significantly different from those in the homogeneous and isotropic universe model, due to the anisotropic volume expansion in the huge void. We linearize the derived equations of motion and solve them. The solutions show that the behaviors of linear density perturbations are very different from those in the homogeneous and isotropic universe model.

  5. Plasmon mass scale and quantum fluctuations of classical fields on a real time lattice

    NASA Astrophysics Data System (ADS)

    Kurkela, Aleksi; Lappi, Tuomas; Peuron, Jarkko

    2018-03-01

    Classical real-time lattice simulations play an important role in understanding non-equilibrium phenomena in gauge theories and are used in particular to model the prethermal evolution of heavy-ion collisions. Above the Debye scale the classical Yang-Mills (CYM) theory can be matched smoothly to kinetic theory. First we study the limits of the quasiparticle picture of the CYM fields by determining the plasmon mass of the system using three different methods. Then we argue that one needs a numerical calculation of a system of classical gauge fields and small linearized fluctuations, which correspond to quantum fluctuations, in a way that keeps the separation between the two manifest. We demonstrate and test an implementation of an algorithm with the linearized fluctuations, showing that the linearization indeed works and that Gauss's law is conserved.

  6. Effect of ploidy on scale-cover pattern in linear ornamental (koi) common carp Cyprinus carpio.

    PubMed

    Gomelsky, B; Schneider, K J; Glennon, R P; Plouffe, D A

    2012-09-01

    The effect of ploidy on scale-cover pattern in linear ornamental (koi) common carp Cyprinus carpio was investigated. To obtain diploid and triploid linear fish, eggs taken from a leather C. carpio female (genotype ssNn) and sperm taken from a scaled C. carpio male (genotype SSnn) were used for the production of control (no shock) and heat-shocked progeny. In heat-shocked progeny, the 2 min heat shock (40° C) was applied 6 min after insemination. Diploid linear fish (genotype SsNn) demonstrated a scale-cover pattern typical for this category with one even row of scales along lateral line and few scales located near operculum and at bases of fins. The majority (97%) of triploid linear fish (genotype SssNnn) exhibited non-typical scale patterns which were characterized by the appearance of additional scales on the body. The extent of additional scales in triploid linear fish was variable; some fish had large scales, which covered almost the entire body. Apparently, the observed difference in scale-cover pattern between triploid and diploid linear fish was caused by different phenotypic expression of gene N/n. Due to incomplete dominance of allele N, triploids Nnn demonstrate less profound reduction of scale cover compared with diploids Nn. © 2012 The Authors. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.

  7. The impact of using area-averaged land surface properties — topography, vegetation condition, soil wetness — in calculations of intermediate scale (approximately 10 km²) surface-atmosphere heat and moisture fluxes

    NASA Astrophysics Data System (ADS)

    Sellers, Piers J.; Heiser, Mark D.; Hall, Forrest G.; Verma, Shashi B.; Desjardins, Raymond L.; Schuepp, Peter M.; Ian MacPherson, J.

    1997-03-01

    It is commonly assumed that biophysically based soil-vegetation-atmosphere transfer (SVAT) models are scale-invariant with respect to the initial boundary conditions of topography, vegetation condition and soil moisture. In practice, SVAT models that have been developed and tested at the local scale (a few meters or a few tens of meters) are applied almost unmodified within general circulation models (GCMs) of the atmosphere, which have grid areas of 50-500 km². This study, which draws much of its substantive material from the papers of Sellers et al. (1992c, J. Geophys. Res., 97(D17): 19033-19060) and Sellers et al. (1995, J. Geophys. Res., 100(D12): 25607-25629), explores the validity of doing this. The work makes use of the FIFE-89 data set, which was collected over a 2 km × 15 km grassland area in Kansas. The site was characterized by high variability in soil moisture and vegetation condition during the late growing season of 1989; the area also has moderate topography. The 2 km × 15 km 'testbed' area was divided into 68 × 501 pixels of 30 m × 30 m spatial resolution, each of which could be assigned topographic, vegetation condition and soil moisture parameters from satellite and in situ observations gathered in FIFE-89. One or more of these surface fields was area-averaged in a series of simulation runs to determine the impact of using large-area means of these initial or boundary conditions on the area-integrated (aggregated) surface fluxes. The results of the study can be summarized as follows. (1) Analyses and some of the simulations indicated that the relationships describing the effects of moderate topography on the surface radiation budget are near-linear and thus largely scale-invariant. The relationships linking the simple ratio vegetation index (SR), the canopy conductance parameter (∇F) and the canopy transpiration flux are also near-linear and similarly scale-invariant to first order. Because of this, it appears that simple area-averaging operations can be applied to these fields with relatively little impact on the calculated surface heat flux. (2) The relationships linking surface and root-zone soil wetness to the soil surface evaporation and canopy transpiration rates are non-linear. However, simulation results and observations indicate that soil moisture variability decreases significantly as an area dries out, which partially cancels out the effects of these non-linear functions. In conclusion, it appears that simple averages of topographic slope and vegetation parameters can be used to calculate surface energy and heat fluxes over a wide range of spatial scales, from a few meters up to many kilometers, at least for grassland sites and areas with moderate topography. Although the relationships between soil moisture and evapotranspiration are non-linear for intermediate soil wetnesses, the dynamics of soil drying act to progressively reduce soil moisture variability and thus the impacts of these non-linearities on the area-averaged surface fluxes. These findings indicate that we may be able to use mean values of topography, vegetation condition and soil moisture to calculate the surface-atmosphere fluxes of energy, heat and moisture at larger length scales, to within an acceptable accuracy for climate modeling work. However, further tests over areas with different vegetation types, soils and more extreme topography are required to improve our confidence in this approach.
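    A minimal sketch of why the soil-moisture non-linearity matters for aggregation, and why drying reduces its impact: compare the flux of the area-mean wetness with the area-mean flux as spatial variability shrinks (the response function and all numbers are illustrative):

```python
import numpy as np

def et_response(w):
    """Illustrative non-linear evapotranspiration response to soil
    wetness (dimensionless, saturating)."""
    return w ** 2 / (w ** 2 + 0.2 ** 2)

rng = np.random.default_rng(6)
for spread in (0.25, 0.10, 0.02):   # drying reduces spatial variability
    w = np.clip(rng.normal(0.4, spread, 10_000), 0.0, 1.0)
    flux_of_mean = et_response(w.mean())   # flux from area-averaged input
    mean_of_flux = et_response(w).mean()   # aggregated pixel-wise flux
    print(f"spread {spread:.2f}: f(mean) = {flux_of_mean:.3f}, "
          f"mean(f) = {mean_of_flux:.3f}")
```

    For a near-linear response the two quantities would coincide at any spread, which is why area-averaging the radiation and vegetation fields is comparatively harmless.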

  8. Non-Linear Modeling of Growth Prerequisites in a Finnish Polytechnic Institution of Higher Education

    ERIC Educational Resources Information Center

    Nokelainen, Petri; Ruohotie, Pekka

    2009-01-01

    Purpose: This study aims to examine the factors of growth-oriented atmosphere in a Finnish polytechnic institution of higher education with categorical exploratory factor analysis, multidimensional scaling and Bayesian unsupervised model-based visualization. Design/methodology/approach: This study was designed to examine employee perceptions of…

  9. Landscape scale mapping of forest inventory data by nearest neighbor classification

    Treesearch

    Andrew Lister

    2009-01-01

    One of the goals of the Forest Service, U.S. Department of Agriculture's Forest Inventory and Analysis (FIA) program is large-area mapping. FIA scientists have tried many methods in the past, including geostatistical methods, linear modeling, nonlinear modeling, and simple choropleth and dot maps. Mapping methods that require individual model-based maps to be...

  10. Nonadiabatic effects in ultracold molecules via anomalous linear and quadratic Zeeman shifts.

    PubMed

    McGuyer, B H; Osborn, C B; McDonald, M; Reinaudi, G; Skomorowski, W; Moszynski, R; Zelevinsky, T

    2013-12-13

    Anomalously large linear and quadratic Zeeman shifts are measured for weakly bound ultracold 88Sr2 molecules near the intercombination-line asymptote. Nonadiabatic Coriolis coupling and the nature of long-range molecular potentials explain how this effect arises and scales roughly cubically with the size of the molecule. The linear shifts yield nonadiabatic mixing angles of the molecular states. The quadratic shifts are sensitive to nearby opposite f-parity states and exhibit fourth-order corrections, providing a stringent test of a state-of-the-art ab initio model.

  11. Simple quasi-analytical holonomic homogenization model for the non-linear analysis of in-plane loaded masonry panels: Part 1, meso-scale

    NASA Astrophysics Data System (ADS)

    Milani, G.; Bertolesi, E.

    2017-07-01

    A simple quasi analytical holonomic homogenization approach for the non-linear analysis of masonry walls in-plane loaded is presented. The elementary cell (REV) is discretized with 24 triangular elastic constant stress elements (bricks) and non-linear interfaces (mortar). A holonomic behavior with softening is assumed for mortar. It is shown how the mechanical problem in the unit cell is characterized by very few displacement variables and how homogenized stress-strain behavior can be evaluated semi-analytically.

  12. Deformed Palmprint Matching Based on Stable Regions.

    PubMed

    Wu, Xiangqian; Zhao, Qiushi

    2015-12-01

    Palmprint recognition (PR) is an effective technology for personal recognition. A main problem, which deteriorates the performance of PR, is the deformations of palmprint images. This problem becomes more severe on contactless occasions, in which images are acquired without any guiding mechanisms, and hence critically limits the applications of PR. To solve the deformation problems, in this paper, a model for non-linearly deformed palmprint matching is derived by approximating non-linear deformed palmprint images with piecewise-linear deformed stable regions. Based on this model, a novel approach for deformed palmprint matching, named key point-based block growing (KPBG), is proposed. In KPBG, an iterative M-estimator sample consensus algorithm based on scale invariant feature transform features is devised to compute piecewise-linear transformations to approximate the non-linear deformations of palmprints, and then, the stable regions complying with the linear transformations are decided using a block growing algorithm. Palmprint feature extraction and matching are performed over these stable regions to compute matching scores for decision. Experiments on several public palmprint databases show that the proposed models and the KPBG approach can effectively solve the deformation problem in palmprint verification and outperform the state-of-the-art methods.

  13. Stable clustering and the resolution of dissipationless cosmological N-body simulations

    NASA Astrophysics Data System (ADS)

    Benhaiem, David; Joyce, Michael; Sylos Labini, Francesco

    2017-10-01

    The determination of the resolution of cosmological N-body simulations, i.e. the range of scales in which quantities measured in them accurately represent the continuum limit, is an important open question. We address it here using scale-free models, for which self-similarity provides a powerful tool to control resolution. Such models also provide a robust testing ground for the so-called stable clustering approximation, which gives simple predictions for them. Studying large N-body simulations of such models with different force smoothing, we find that these two issues are in fact very closely related: our conclusion is that the accuracy of two-point statistics in the non-linear regime starts to degrade strongly around the scale at which their behaviour deviates from that predicted by the stable clustering hypothesis. Physically, the association of the two scales is in fact simple to understand: stable clustering fails to be a good approximation when there are strong interactions of structures (in particular merging), and it is precisely such non-linear processes which are sensitive to fluctuations at the smaller scales affected by discretization. Resolution may be further degraded if the short-distance gravitational smoothing scale is larger than the scale to which stable clustering can propagate. We examine in detail the very different conclusions of studies by Smith et al. and Widrow et al. and find that the strong deviations from stable clustering reported by these works are the result of over-optimistic assumptions about the scales resolved accurately by the measured power spectra, and of the reliance on Fourier-space analysis. We emphasize the much poorer resolution obtained with the power spectrum compared to the two-point correlation function.

  14. Blob-Spring Model for the Dynamics of Ring Polymer in Obstacle Environment

    NASA Astrophysics Data System (ADS)

    Lele, Ashish K.; Iyer, Balaji V. S.; Juvekar, Vinay A.

    2008-07-01

    The dynamical behavior of cyclic macromolecules in a fixed obstacle (FO) environment is very different from that of linear chains in the same topological environment; while the latter relax by a snake-like reptational motion from their chain ends, the former can relax only by contour length fluctuations since they are endless. Duke, Obukhov and Rubinstein proposed a scaling model (the DOR model) to interpret the dynamical scaling exponents shown by Monte Carlo simulations of rings in a FO environment. We present a model (the blob-spring model) to describe the dynamics of a flexible and non-concatenated ring polymer in a FO environment, based on a theoretical formulation developed for the dynamics of an unentangled fractal polymer. We argue that the perpetual evolution of the ring perimeter by the motion of contour segments results in an extra frictional load. Our model predicts self-similar dynamics with scaling exponents for the molecular weight dependence of the diffusion coefficient and relaxation times that are in agreement with the scaling model proposed by Obukhov et al.

  15. Lorentz symmetry violation and UHECR experiments

    NASA Astrophysics Data System (ADS)

    Gonzalez-Mestres, L.

    2001-08-01

    Lorentz symmetry violation (LSV) at the Planck scale can be tested through ultra-high energy cosmic rays (UHECR). We discuss deformed Lorentz symmetry (DLS) and energy non-conservation (ENC) patterns where the effective LSV parameter varies like the square of the momentum scale (e.g. quadratically deformed relativistic kinematics, QDRK). In such patterns, a ≈ 10⁻⁶ LSV at the Planck scale would be enough to produce observable effects on the properties of cosmic rays at the ≈ 10²⁰ eV scale: absence of GZK cutoff, stability of unstable particles, lower interaction rates, kinematical failure of any parton model and of standard formulae for Lorentz contraction and time dilation... Its phenomenological implications are compatible with existing data. Precise signatures are discussed in several patterns. If the effective LSV or ENC parameter is taken to vary linearly with the momentum scale (e.g. linearly deformed relativistic kinematics, LDRK), contradictions seem to arise with UHECR data. Consequences are important for UHECR and high-energy gamma-ray experiments, as well as for high-energy cosmic rays and gravitational waves.

  16. Exploiting the atmosphere's memory for monthly, seasonal and interannual temperature forecasting using Scaling LInear Macroweather Model (SLIMM)

    NASA Astrophysics Data System (ADS)

    Del Rio Amador, Lenin; Lovejoy, Shaun

    2016-04-01

    Traditionally, most models for predicting the behavior of the atmosphere in the macroweather and climate regimes follow a deterministic approach. However, modern ensemble forecasting systems using stochastic parameterizations are in fact deterministic/stochastic hybrids that combine both elements to yield a statistical distribution of future atmospheric states. Nevertheless, the result is both highly complex (numerically and theoretically) and theoretically eclectic. In principle, it should be advantageous to exploit higher-level turbulence-type scaling laws. Concretely, in the case of Global Circulation Models (GCMs), due to sensitive dependence on initial conditions there is a deterministic predictability limit of the order of 10 days. When these models are coupled with ocean, cryosphere and other process models to make long-range climate forecasts, the high-frequency "weather" is treated as a driving noise in the integration of the modelling equations. Following Hasselmann (1976), this has led to stochastic models that directly generate the noise and model the low frequencies using systems of integer-ordered linear ordinary differential equations, the best known being the Linear Inverse Models (LIM). For annual global-scale forecasts they are somewhat superior to the GCMs and have been presented as a benchmark for surface temperature forecasts with horizons up to decades. A key limitation of the LIM approach is that it assumes the temperature has only short-range (exponential) decorrelations. In contrast, an increasing body of evidence shows that, as with the models, the atmosphere respects a scale-invariance symmetry leading to power laws with potentially enormous memories, so that LIM greatly underestimates the memory of the system. In this talk we show that, due to the relatively low macroweather intermittency, the simplest scaling models, fractional Gaussian noise, can be used for making greatly improved forecasts. The corresponding space-time model, the ScaLIng Macroweather Model (SLIMM), is thus only multifractal in space, where the spatial intermittency is associated with different climate zones. SLIMM exploits the power-law (scaling) behavior in time of the temperature field and uses the long historical memory of the temperature series to improve the skill. The only model parameter is the fluctuation scaling exponent, H (usually in the range -0.5 to 0), which is directly related to the skill and can be obtained from the data. The results predicted analytically by the model have been tested by performing actual hindcasts in different 5° × 5° regions covering the planet, using ERA-Interim, 20CRv2 and NCEP/NCAR reanalyses as reference datasets. We report maps of the theoretical skill predicted by the model and compare it with actual skill based on hindcasts at monthly, seasonal and annual resolutions. We also present maps of calibrated probability hindcasts with their respective validations. Comparisons between our results using SLIMM, another stochastic autoregressive model, and hindcasts from the Canadian Seasonal to Interannual Prediction System (CanSIPS) and the National Centers for Environmental Prediction (NCEP) model CFSv2 are also shown. For seasonal temperature forecasts, SLIMM outperforms the GCM-based forecasts over 90% of the earth's surface. SLIMM forecasts can be accessed online through the site: http://www.to_be_announced.mcgill.ca.
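    A minimal sketch of the kind of memory-exploiting predictor involved: the optimal one-step linear forecast from fractional Gaussian noise (fGn) autocovariances, obtained by solving the Toeplitz normal equations. The Hurst parameter h = 0.85 is illustrative only; relating it to the paper's fluctuation exponent H is left aside:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def fgn_acov(lags, h):
    """Autocovariance of unit-variance fGn with Hurst parameter h."""
    k = np.abs(lags).astype(float)
    return 0.5 * ((k + 1) ** (2 * h) - 2 * k ** (2 * h)
                  + np.abs(k - 1) ** (2 * h))

def predictor_weights(n_past, h):
    """Weights of the optimal linear one-step predictor from the last
    n_past values (most recent first), via the Toeplitz system."""
    gamma = fgn_acov(np.arange(n_past), h)        # first column of covariance
    rhs = fgn_acov(np.arange(1, n_past + 1), h)   # covariances with target
    return solve_toeplitz(gamma, rhs)

w = predictor_weights(n_past=50, h=0.85)  # strong long-range memory
# forecast = w @ past_values  with past_values ordered most recent first
print(f"weight on newest value: {w[0]:.3f}, on the oldest: {w[-1]:.4f}")
```

    The slowly decaying weights on old values are exactly the long memory that a short-range (exponentially decorrelated) LIM-type model cannot exploit.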

  17. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.

    2016-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to not only assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but to also begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  18. Robust large-scale parallel nonlinear solvers for simulations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step from a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes twice as long as Newton-GMRES to solve general problems because it solves two linear systems at each iteration. In this paper, we discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.
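    A minimal sketch of Broyden's "good" method with its rank-one secant update (a toy dense implementation, not the report's limited-memory production solver):

```python
import numpy as np

def broyden(f, x0, tol=1e-10, max_iter=100):
    """Solve f(x) = 0 with Broyden's 'good' method: the Jacobian is
    replaced by an approximation B updated by a rank-one secant
    correction, so f is never differentiated."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                   # initial Jacobian approximation
    fx = f(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -fx)      # quasi-Newton step
        x_new = x + s
        fx_new = f(x_new)
        if np.linalg.norm(fx_new) < tol:
            return x_new
        y = fx_new - fx
        B += np.outer(y - B @ s, s) / (s @ s)   # secant update
        x, fx = x_new, fx_new
    return x

# Example: a mildly nonlinear 2x2 system with root near (0.62, 1.96)
f = lambda v: np.array([v[0] + 0.1 * v[1] ** 2 - 1.0,
                        v[1] + 0.1 * v[0] ** 2 - 2.0])
print(broyden(f, [0.0, 0.0]))
```

    A limited-memory variant stores the update vectors instead of the dense matrix B, which is what makes the approach viable at large scale.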

  19. Patient and Societal Value Functions for the Testing Morbidities Index

    PubMed Central

    Swan, John Shannon; Kong, Chung Yin; Lee, Janie M.; Akinyemi, Omosalewa; Halpern, Elkan F.; Lee, Pablo; Vavinskiy, Sergey; Williams, Olubunmi; Zoltick, Emilie S.; Donelan, Karen

    2013-01-01

    Background We developed preference-based and summated scale scoring for the Testing Morbidities Index (TMI) classification, which addresses short-term effects on quality of life from diagnostic testing before, during and after a testing procedure. Methods The two TMI value functions utilize multiattribute value techniques; one is patient-based and the other has a societal perspective. 206 breast biopsy patients and 466 (societal) subjects informed the models. Due to a lack of standard short-term methods for this application, we utilized the visual analog scale (VAS). Waiting trade-off (WTO) tolls provided an additional option for linear transformation of the TMI. We randomized participants to one of three surveys: the first derived weights for generic testing morbidity attributes and levels of severity with the VAS; a second developed VAS values and WTO tolls for linear transformation of the TMI to a death-healthy scale; the third addressed initial validation in a specific test (breast biopsy). 188 patients and 425 community subjects participated in initial validation, comparing direct VAS and WTO values to the TMI. Alternative TMI scoring as a non-preference summated scale was included, given evidence of construct and content validity. Results The patient model can use an additive function, while the societal model is multiplicative. Direct VAS and the VAS-scaled TMI were correlated across modeling groups (r=0.45 to 0.62) and agreement was comparable to the value function validation of the Health Utilities Index 2. Mean Absolute Difference (MAD) calculations showed a range of 0.07–0.10 in patients and 0.11–0.17 in subjects. MAD for direct WTO tolls compared to the WTO-scaled TMI varied closely around one quality-adjusted life day. Conclusions The TMI shows initial promise in measuring short-term testing-related health states. PMID:23689044

  20. Power spectrum for the small-scale Universe

    NASA Astrophysics Data System (ADS)

    Widrow, Lawrence M.; Elahi, Pascal J.; Thacker, Robert J.; Richardson, Mark; Scannapieco, Evan

    2009-08-01

    The first objects to arise in a cold dark matter (CDM) universe present a daunting challenge for models of structure formation. In the ultra small-scale limit, CDM structures form nearly simultaneously across a wide range of scales. Hierarchical clustering no longer provides a guiding principle for theoretical analyses and the computation time required to carry out credible simulations becomes prohibitively high. To gain insight into this problem, we perform high-resolution (N = 720³-1584³) simulations of an Einstein-de Sitter cosmology where the initial power spectrum is P(k) ~ k^n, with -2.5 ≤ n ≤ -1. Self-similar scaling is established for n = -1 and -2 more convincingly than in previous, lower resolution simulations and, for the first time, self-similar scaling is established for an n = -2.25 simulation. However, finite box-size effects induce departures from self-similar scaling in our n = -2.5 simulation. We compare our results with the predictions for the power spectrum from (one-loop) perturbation theory and demonstrate that the renormalization group approach suggested by McDonald improves perturbation theory's ability to predict the power spectrum in the quasi-linear regime. In the non-linear regime, our power spectra differ significantly from the widely used fitting formulae of Peacock & Dodds and Smith et al. and a new fitting formula is presented. Implications of our results for the stable clustering hypothesis versus halo model debate are discussed. Our power spectra are inconsistent with predictions of the stable clustering hypothesis in the high-k limit and lend credence to the halo model. Nevertheless, the fitting formula advocated in this paper is purely empirical and not derived from a specific formulation of the halo model.

  1. Study of the observational compatibility of an inhomogeneous cosmology with linear expansion according to SNe Ia

    NASA Astrophysics Data System (ADS)

    Monjo, R.

    2017-11-01

    Most current cosmological theories are built by combining an isotropic and homogeneous manifold with a scale factor that depends on time. If one supposes a hyperconical universe with linear expansion, an inhomogeneous metric can be obtained by an appropriate transformation that preserves the proper time. This model locally tends to a flat Friedmann-Robertson-Walker metric with linear expansion. The objective of this work is to analyze the observational compatibility of the inhomogeneous metric considered. For this purpose, the corresponding luminosity distance was obtained and compared with the observations of 580 SNe Ia taken from the Supernova Cosmology Project. The best fit of the hyperconical model obtains χ₀² = 562, the same value as the standard ΛCDM model. Finally, a possible relationship is found between the two theories.

  2. Waste management under multiple complexities: Inexact piecewise-linearization-based fuzzy flexible programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun Wei; Huang, Guo H., E-mail: huang@iseis.org; Institute for Energy, Environment and Sustainable Communities, University of Regina, Regina, Saskatchewan, S4S 0A2

    2012-06-15

    Highlights: • Inexact piecewise-linearization-based fuzzy flexible programming is proposed. • It is the first application to waste management under multiple complexities. • It tackles nonlinear economies-of-scale effects in interval-parameter constraints. • It estimates costs more accurately than the linear-regression-based model. • Uncertainties are decreased and more satisfactory interval solutions are obtained. - Abstract: To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates, can be reflected; and the nonlinear EOS effects transformed from the objective function to the constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate IPFP's advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand sides of fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, IPFP2 may underestimate the net system costs while IPFP can estimate the costs more accurately. The comparison results between IPFP and IPFP3 indicate that their solutions would be significantly different. The decreased system uncertainties in IPFP's solutions demonstrate its effectiveness in providing more satisfactory interval solutions than IPFP3. Following its first application to waste management, the IPFP approach can potentially be applied to other environmental problems under multiple complexities.

  3. Stochastic Dynamic Mixed-Integer Programming (SD-MIP)

    DTIC Science & Technology

    2015-05-05

    stochastic linear programming (SLP) problems. By using a combination of ideas from cutting plane theory of deterministic MIP (especially disjunctive...developed to date. b) As part of this project, we have also developed tools for very large scale Stochastic Linear Programming (SLP). There are several reasons for this. First, SLP models continue to challenge many of the fastest computers to date, and many applications within the DoD (e.g

  4. The three-point function as a probe of models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Frieman, Joshua A.; Gaztanaga, Enrique

    1993-01-01

    The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R_p ≈ 20 h⁻¹ Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Domènech, Guillem; Hiramatsu, Takashi; Lin, Chunshan

    We consider a cosmological model in which the tensor mode becomes massive during inflation, and study the Cosmic Microwave Background (CMB) temperature and polarization bispectra arising from the mixing between the scalar mode and the massive tensor mode during inflation. The model assumes the existence of a preferred spatial frame during inflation. Local Lorentz invariance is already broken in cosmology due to the existence of a preferred rest frame. The existence of a preferred spatial frame further breaks the remaining local SO(3) invariance and in particular gives rise to a mass in the tensor mode. At the linear perturbation level, we minimize our model so that the vector mode remains non-dynamical, while the scalar mode is the same as in single-field slow-roll inflation. At the non-linear perturbation level, this inflationary massive-graviton phase leads to a sizeable scalar-scalar-tensor coupling, much greater than the scalar-scalar-scalar one, as opposed to the conventional case. This scalar-scalar-tensor interaction imprints a scale-dependent feature in the CMB temperature and polarization bispectra. Very intriguingly, we find a surprising similarity between the predicted scale dependence and the scale-dependent non-Gaussianities at low multipoles hinted at in the WMAP and Planck results.

  6. Rasch-built Overall Disability Scale (R-ODS) for immune-mediated peripheral neuropathies.

    PubMed

    van Nes, S I; Vanhoutte, E K; van Doorn, P A; Hermans, M; Bakkers, M; Kuitwaard, K; Faber, C G; Merkies, I S J

    2011-01-25

    To develop a patient-based, linearly weighted scale that captures activity and social participation limitations in patients with Guillain-Barré syndrome (GBS), chronic inflammatory demyelinating polyradiculoneuropathy (CIDP), and gammopathy-related polyneuropathy (MGUSP). A preliminary Rasch-built Overall Disability Scale (R-ODS) containing 146 activity and participation items was constructed, based on the WHO International Classification of Functioning, Disability and Health, literature search, and patient interviews. The preliminary R-ODS was assessed twice (interval: 2-4 weeks; test-retest reliability studies) in 294 patients who experienced GBS in the past (n = 174) or currently have stable CIDP (n = 80) or MGUSP (n = 40). Data were analyzed using the Rasch unidimensional measurement model (RUMM2020). The preliminary R-ODS did not meet the Rasch model expectations. Based on disordered thresholds, misfit statistics, item bias, and local dependency, items were systematically removed to improve the model fit, regularly controlling the class intervals and model statistics. Finally, we succeeded in constructing a 24-item scale that fulfilled all Rasch requirements. "Reading a newspaper/book" and "eating" were the 2 easiest items; "standing for hours" and "running" were the most difficult ones. Good validity and reliability were obtained. The R-ODS is a linearly weighted scale that specifically captures activity and social participation limitations in patients with GBS, CIDP, and MGUSP. Compared to the Overall Disability Sum Score, the R-ODS represents a wider range of item difficulties, thereby better targeting patients with different ability levels. If responsive, the R-ODS will be valuable for future clinical trials and follow-up studies in these conditions.
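
    For context, the dichotomous Rasch model underlying scales such as the R-ODS has a standard closed form; the notation below is the textbook convention rather than anything reproduced from the paper:

      P(X_{ni} = 1 \mid \theta_n, \delta_i) = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)}

    Here θ_n is the ability of person n and δ_i the difficulty of item i; "linearly weighted" refers to measurement on this common logit scale, on which person and item parameters combine additively.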

  7. Modeling Fire Occurrence at the City Scale: A Comparison between Geographically Weighted Regression and Global Linear Regression.

    PubMed

    Song, Chao; Kwan, Mei-Po; Zhu, Jiping

    2017-04-08

    An increasing number of fires are occurring with the rapid development of cities, resulting in increased risk for human beings and the environment. This study compares geographically weighted regression-based models, including geographically weighted regression (GWR) and geographically and temporally weighted regression (GTWR), which integrates spatial and temporal effects and global linear regression models (LM) for modeling fire risk at the city scale. The results show that the road density and the spatial distribution of enterprises have the strongest influences on fire risk, which implies that we should focus on areas where roads and enterprises are densely clustered. In addition, locations with a large number of enterprises have fewer fire ignition records, probably because of strict management and prevention measures. A changing number of significant variables across space indicate that heterogeneity mainly exists in the northern and eastern rural and suburban areas of Hefei city, where human-related facilities or road construction are only clustered in the city sub-centers. GTWR can capture small changes in the spatiotemporal heterogeneity of the variables while GWR and LM cannot. An approach that integrates space and time enables us to better understand the dynamic changes in fire risk. Thus governments can use the results to manage fire safety at the city scale.
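
    As a rough illustration of the GWR idea described above (a separate weighted least-squares fit at each location, with weights decaying with distance), a minimal numpy sketch follows; the Gaussian kernel, bandwidth, and variable names are illustrative assumptions, not the study's implementation:

      import numpy as np

      def gwr_coefficients(X, y, coords, bandwidth):
          """Fit a separate weighted least-squares model at each location.

          X: (n, p) design matrix (first column of ones for the intercept),
          y: (n,) response (e.g. fire counts per grid cell),
          coords: (n, 2) spatial coordinates, bandwidth: kernel width."""
          n, p = X.shape
          betas = np.empty((n, p))
          for i in range(n):
              d = np.linalg.norm(coords - coords[i], axis=1)  # distances to site i
              w = np.exp(-0.5 * (d / bandwidth) ** 2)         # Gaussian kernel weights
              W = np.diag(w)
              # Weighted least squares: beta_i = (X' W X)^{-1} X' W y
              betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
          return betas

    GTWR extends the same weighting to space-time distances, so observations nearby in time also receive higher weight.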

  8. Modeling Fire Occurrence at the City Scale: A Comparison between Geographically Weighted Regression and Global Linear Regression

    PubMed Central

    Song, Chao; Kwan, Mei-Po; Zhu, Jiping

    2017-01-01

    An increasing number of fires are occurring with the rapid development of cities, resulting in increased risk for human beings and the environment. This study compares geographically weighted regression-based models, including geographically weighted regression (GWR) and geographically and temporally weighted regression (GTWR), which integrates spatial and temporal effects and global linear regression models (LM) for modeling fire risk at the city scale. The results show that the road density and the spatial distribution of enterprises have the strongest influences on fire risk, which implies that we should focus on areas where roads and enterprises are densely clustered. In addition, locations with a large number of enterprises have fewer fire ignition records, probably because of strict management and prevention measures. A changing number of significant variables across space indicate that heterogeneity mainly exists in the northern and eastern rural and suburban areas of Hefei city, where human-related facilities or road construction are only clustered in the city sub-centers. GTWR can capture small changes in the spatiotemporal heterogeneity of the variables while GWR and LM cannot. An approach that integrates space and time enables us to better understand the dynamic changes in fire risk. Thus governments can use the results to manage fire safety at the city scale. PMID:28397745

  9. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
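
    The authors' solver is written in Julia within the MADS framework; purely as an illustration of the core idea (solving each Levenberg-Marquardt step in a Krylov subspace instead of with QR or SVD), a matrix-free Python sketch might look as follows, with all names and defaults assumed:

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, cg

      def lm_step(J, r, damping):
          """One Levenberg-Marquardt step, solved iteratively in a Krylov subspace.

          Solves (J^T J + damping * I) delta = -J^T r without ever forming J^T J,
          so only Jacobian-vector products are needed."""
          m, n = J.shape
          A = LinearOperator((n, n), matvec=lambda v: J.T @ (J @ v) + damping * v)
          delta, info = cg(A, -J.T @ r)   # conjugate gradients on the normal equations
          return delta

    The subspace-recycling idea in the abstract goes one step further: the Krylov basis built for the first damping parameter is reused for the subsequent damping values.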

  10. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  11. Linear study of the precessional fishbone instability

    NASA Astrophysics Data System (ADS)

    Idouakass, M.; Faganello, M.; Berk, H. L.; Garbet, X.; Benkadda, S.

    2016-10-01

    The precessional fishbone instability is an m = n = 1 internal kink mode destabilized by a population of trapped energetic particles. The linear phase of this instability is studied here, analytically and numerically, with a simplified model. This model uses the reduced magneto-hydrodynamics equations for the bulk plasma and the Vlasov equation for a population of energetic particles with a radially decreasing density. A threshold condition for the instability is found, as well as a linear growth rate and frequency. It is shown that the mode frequency is given by the precession frequency of the deeply trapped energetic particles at the position of strongest radial gradient. The growth rate is shown to scale with the energetic particle density and particle energy while it is decreased by continuum damping.

  12. Psychometric Testing of the FACES III with Rural Adolescents

    ERIC Educational Resources Information Center

    Ide, Bette; Dingmann, Colleen; Cuevas, Elizabeth; Meehan, Maurita

    2010-01-01

    This study tests the validity and reliability of the Family Adaptability and Cohesion Scale III (FACES III) in two samples of rural adolescents. The underlying theory is the linear 3-D circumplex model. The FACES III was administered to 1,632 adolescents in Grades 7 through 12 in two counties in a rural western state. The FACES III Scale and the…

  13. Design and Parametric Study of the Magnetic Sensor for Position Detection in Linear Motor Based on Nonlinear Parametric Model Order Reduction

    PubMed Central

    Paul, Sarbajit; Chang, Junghwan

    2017-01-01

This paper presents a design approach for a magnetic sensor module to detect mover position using the proper orthogonal decomposition-dynamic mode decomposition (POD-DMD)-based nonlinear parametric model order reduction (PMOR). The parameterization of the sensor module is achieved by using the multipolar moment matching method. Several geometric variables of the sensor module are considered while developing the parametric study. The operation of the sensor module is based on the principle of airgap flux density distribution detection by the Hall Effect IC. Therefore, the design objective is to achieve a peak flux density (PFD) greater than 0.1 T and total harmonic distortion (THD) less than 3%. To fulfill the constraint conditions, the specifications for the sensor module are achieved by using the POD-DMD based reduced model. The POD-DMD based reduced model provides a platform to analyze a large number of design models quickly, with less computational burden. Finally, with the final specifications, the experimental prototype is designed and tested. Two different modes, 90° and 120°, are used to obtain the position information of the linear motor mover. The position information thus obtained is compared with that of the linear scale data, used as a reference signal. The position information obtained using the 120° mode has a standard deviation of 0.10 mm from the reference linear scale signal, whereas the 90° mode position signal shows a deviation of 0.23 mm from the reference. The deviation in the output arises due to the mechanical tolerances introduced into the specification during the manufacturing process. This provides scope for coupling reliability-based design optimization into the design process as a future extension. PMID:28671580
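
    A generic numpy sketch of the POD step (extracting a reduced basis from snapshot data via the SVD) is given below; it illustrates only the dimensionality reduction, not the authors' full POD-DMD PMOR pipeline, and the energy threshold is an assumed parameter:

      import numpy as np

      def pod_basis(snapshots, energy=0.999):
          """Proper orthogonal decomposition of a snapshot matrix.

          snapshots: (n_dof, n_snap) field values, e.g. airgap flux density
          sampled over geometric design variations. Returns the mean field and
          the leading POD modes capturing the requested energy fraction."""
          mean = snapshots.mean(axis=1, keepdims=True)
          U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
          cum = np.cumsum(s**2) / np.sum(s**2)
          k = int(np.searchsorted(cum, energy)) + 1
          return mean, U[:, :k]   # project full fields onto U[:, :k] to reduce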

  14. Design and Parametric Study of the Magnetic Sensor for Position Detection in Linear Motor Based on Nonlinear Parametric Model Order Reduction.

    PubMed

    Paul, Sarbajit; Chang, Junghwan

    2017-07-01

This paper presents a design approach for a magnetic sensor module to detect mover position using the proper orthogonal decomposition-dynamic mode decomposition (POD-DMD)-based nonlinear parametric model order reduction (PMOR). The parameterization of the sensor module is achieved by using the multipolar moment matching method. Several geometric variables of the sensor module are considered while developing the parametric study. The operation of the sensor module is based on the principle of airgap flux density distribution detection by the Hall Effect IC. Therefore, the design objective is to achieve a peak flux density (PFD) greater than 0.1 T and total harmonic distortion (THD) less than 3%. To fulfill the constraint conditions, the specifications for the sensor module are achieved by using the POD-DMD based reduced model. The POD-DMD based reduced model provides a platform to analyze a large number of design models quickly, with less computational burden. Finally, with the final specifications, the experimental prototype is designed and tested. Two different modes, 90° and 120°, are used to obtain the position information of the linear motor mover. The position information thus obtained is compared with that of the linear scale data, used as a reference signal. The position information obtained using the 120° mode has a standard deviation of 0.10 mm from the reference linear scale signal, whereas the 90° mode position signal shows a deviation of 0.23 mm from the reference. The deviation in the output arises due to the mechanical tolerances introduced into the specification during the manufacturing process. This provides scope for coupling reliability-based design optimization into the design process as a future extension.

  15. Comparison of Statistical Models for Analyzing Wheat Yield Time Series

    PubMed Central

    Michel, Lucie; Makowski, David

    2013-01-01

    The world's population is predicted to exceed nine billion by 2050 and there is increasing concern about the capability of agriculture to feed such a large population. Foresight studies on food security are frequently based on crop yield trends estimated from yield time series provided by national and regional statistical agencies. Various types of statistical models have been proposed for the analysis of yield time series, but the predictive performances of these models have not yet been evaluated in detail. In this study, we present eight statistical models for analyzing yield time series and compare their ability to predict wheat yield at the national and regional scales, using data provided by the Food and Agriculture Organization of the United Nations and by the French Ministry of Agriculture. The Holt-Winters and dynamic linear models performed equally well, giving the most accurate predictions of wheat yield. However, dynamic linear models have two advantages over Holt-Winters models: they can be used to reconstruct past yield trends retrospectively and to analyze uncertainty. The results obtained with dynamic linear models indicated a stagnation of wheat yields in many countries, but the estimated rate of increase of wheat yield remained above 0.06 t ha−1 year−1 in several countries in Europe, Asia, Africa and America, and the estimated values were highly uncertain for several major wheat producing countries. The rate of yield increase differed considerably between French regions, suggesting that efforts to identify the main causes of yield stagnation should focus on a subnational scale. PMID:24205280
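
    As a sketch of the dynamic linear model favored above, a yield series can be cast as a local linear trend and filtered with a standard Kalman recursion; the hand-rolled version below uses illustrative variance parameters, not the authors' fitted values:

      import numpy as np

      def local_linear_trend_filter(y, var_obs=0.04, var_level=0.01, var_slope=0.001):
          """Kalman filter for the local linear trend DLM:
              y_t = level_t + noise,  level_t = level_{t-1} + slope_{t-1} + noise,
              slope_t = slope_{t-1} + noise.
          Returns the filtered [level, slope] at each time step."""
          F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
          H = np.array([[1.0, 0.0]])               # observation matrix
          Q = np.diag([var_level, var_slope])      # state noise covariance
          x, P, out = np.array([y[0], 0.0]), np.eye(2), []
          for obs in y:
              x, P = F @ x, F @ P @ F.T + Q                 # predict
              S = H @ P @ H.T + var_obs                     # innovation variance
              K = (P @ H.T) / S                             # Kalman gain
              x = x + (K * (obs - H @ x)).ravel()           # update state
              P = (np.eye(2) - K @ H) @ P
              out.append(x.copy())
          return np.array(out)

    The filtered slope is the quantity of interest here: a slope drifting towards zero is exactly the yield stagnation signal discussed in the abstract.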

  16. CMB and matter power spectra with non-linear dark-sector interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marttens, R.F. vom; Casarini, L.; Zimdahl, W.

    2017-01-01

    An interaction between dark matter and dark energy, proportional to the product of their energy densities, results in a scaling behavior of the ratio of these densities with respect to the scale factor of the Robertson-Walker metric. This gives rise to a class of cosmological models which deviate from the standard model in an analytically tractable way. In particular, it becomes possible to quantify the role of potential dark-energy perturbations. We investigate the impact of this interaction on the structure formation process. Using the (modified) CAMB code we obtain the CMB spectrum as well as the linear matter power spectrum.more » It is shown that the strong degeneracy in the parameter space present in the background analysis is considerably reduced by considering Planck data. Our analysis is compatible with the ΛCDM model at the 2σ confidence level with a slightly preferred direction of the energy flow from dark matter to dark energy.« less

  17. Development of WRF-CO2 4DVAR Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Zheng, T.; French, N. H. F.

    2016-12-01

Four-dimensional variational (4DVar) assimilation systems have been widely used for CO2 inverse modeling at the global scale. At the regional scale, however, 4DVar assimilation systems have been lacking. At present, most regional CO2 inverse models use Lagrangian particle backward trajectory tools to compute the influence function in an analytical/synthesis framework. To provide a 4DVar based alternative, we developed WRF-CO2 4DVAR based on Weather Research and Forecasting (WRF), its chemistry extension (WRF-Chem), and its data assimilation system (WRFDA/WRFPLUS). Different from WRFDA, WRF-CO2 4DVAR does not optimize the meteorology initial condition; instead it solves for the optimized CO2 surface fluxes (sources/sinks) constrained by atmospheric CO2 observations. Based on WRFPLUS, we developed tangent linear and adjoint code for CO2 emission, advection, vertical mixing in the boundary layer, and convective transport. Furthermore, we implemented an incremental algorithm to solve for optimized CO2 emission scaling factors by iteratively minimizing the cost function in a Bayesian framework. The model sensitivity (of atmospheric CO2 with respect to the emission scaling factor) calculated by the tangent linear and adjoint model agrees well with that calculated by finite difference, indicating the validity of the newly developed code. The effectiveness of WRF-CO2 4DVar for inverse modeling is tested using forward-model generated pseudo-observation data in two experiments: the first-guess CO2 fluxes have a 50% overestimation in the first case and a 50% underestimation in the second. In both cases, WRF-CO2 4DVar reduces the cost function to less than 10⁻⁴ of its initial value in fewer than 20 iterations and successfully recovers the true values of the emission scaling factors. We expect future applications of WRF-CO2 4DVar with satellite observations will provide insights for CO2 regional inverse modeling, including the impacts of model transport error in vertical mixing.
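
    The tangent-linear validation mentioned above (comparing model sensitivity against finite differences) has a standard generic form; the sketch below uses placeholder function names, not actual WRF-CO2 routines:

      import numpy as np

      def check_tangent_linear(model, tlm, x, dx, eps=1e-6):
          """Validate a tangent-linear operator against finite differences.

          model: nonlinear forward map (e.g. emission scaling factors -> CO2),
          tlm:   its tangent-linear operator evaluated at x.
          Returns a relative error that should be small for correct TLM code."""
          fd = (model(x + eps * dx) - model(x)) / eps   # finite-difference sensitivity
          tl = tlm(x, dx)                               # tangent-linear sensitivity
          return np.linalg.norm(fd - tl) / np.linalg.norm(tl)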

  18. Scaling effect of fraction of vegetation cover retrieved by algorithms based on linear mixture model

    NASA Astrophysics Data System (ADS)

    Obata, Kenta; Miura, Munenori; Yoshioka, Hiroki

    2010-08-01

Differences in spatial resolution among sensors have been a source of error among satellite data products, known as a scaling effect. This study investigates the mechanism of the scaling effect on the fraction of vegetation cover (FVC) retrieved by a linear mixture model which employs NDVI as one of the constraints. The scaling effect is induced by differences in texture and by differences between the true endmember spectra and the endmember spectra assumed during retrievals. The mechanism of the scaling effect was analyzed by focusing on the monotonic behavior of spatially averaged FVC as a function of spatial resolution. The number of endmembers is limited to two so that the investigation can proceed analytically. Although the spatially averaged NDVI varies monotonically with spatial resolution, the corresponding FVC values do not always vary monotonically. The conditions under which the averaged FVC varies monotonically for a certain sequence of spatial resolutions were derived analytically. The increasing or decreasing trend of the monotonic behavior can be predicted from the true and assumed endmember spectra of the vegetation and non-vegetation classes, regardless of the distribution of the vegetation class within a fixed area. The results imply that the scaling effect on FVC is more complicated than that on NDVI, since, unlike NDVI, FVC becomes non-monotonic under a certain condition determined by the true and assumed endmember spectra.
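
    For concreteness, the NDVI-constrained two-endmember linear mixture retrieval analyzed above reduces to a simple closed form; the endmember values in this sketch are illustrative assumptions, and it is precisely the mismatch between assumed and true endmembers that drives the scaling effect discussed:

      import numpy as np

      def fvc_from_ndvi(ndvi, ndvi_soil=0.05, ndvi_veg=0.85):
          """Fraction of vegetation cover from a two-endmember linear mixture:
              NDVI = fvc * NDVI_veg + (1 - fvc) * NDVI_soil
          solved for fvc, with assumed endmember NDVI values as constraints."""
          fvc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
          return np.clip(fvc, 0.0, 1.0)

    Because the retrieval is applied to resolution-dependent NDVI and then clipped, averaging and retrieval need not commute, which gives one intuition for the resolution dependence.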

  19. Concurrent validity of Persian version of Wechsler Intelligence Scale for Children - Fourth Edition and Cognitive Assessment System in patients with learning disorder.

    PubMed

    Rostami, Reza; Sadeghi, Vahid; Zarei, Jamileh; Haddadi, Parvaneh; Mohazzab-Torabi, Saman; Salamati, Payman

    2013-04-01

The aim of this study was to compare the Persian version of the Wechsler Intelligence Scale for Children - Fourth Edition (WISC-IV) and Cognitive Assessment System (CAS) tests, to determine the correlation between their scales and to evaluate the probable concurrent validity of these tests in patients with learning disorders. One hundred and sixty-two children with learning disorder who presented at Atieh Comprehensive Psychiatry Center were selected in a consecutive non-randomized order. All of the patients were assessed based on the WISC-IV and CAS score questionnaires. The Pearson correlation coefficient was used to analyze the correlation between the data and to assess the concurrent validity of the two tests. Linear regression was used for statistical modeling. The type I error was set at a maximum of 5%. There was a strong correlation between the total score of the WISC-IV test and the total score of the CAS test in the patients (r=0.75, P<0.001). The correlations among the other scales were mostly high and all of them were statistically significant (P<0.001). A linear regression model was obtained (α = 0.51, β = 0.81 and P<0.001). There is an acceptable correlation between the WISC-IV scales and the CAS test in children with learning disorders. A concurrent validity is established between the two tests and their scales.
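
    The statistical pipeline described (Pearson correlation plus simple linear regression between total scores) is straightforward to reproduce in outline; the data below are simulated purely for illustration, loosely shaped around the reported α = 0.51 and β = 0.81, and are not the patient data:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      cas_total = rng.normal(100.0, 15.0, size=162)            # illustrative CAS scores
      wisc_total = 0.51 + 0.81 * cas_total + rng.normal(0, 9, size=162)

      r, p = stats.pearsonr(wisc_total, cas_total)             # concurrent validity
      fit = stats.linregress(cas_total, wisc_total)            # WISC ~ alpha + beta * CAS
      print(f"r = {r:.2f} (p = {p:.3g}); alpha = {fit.intercept:.2f}, beta = {fit.slope:.2f}")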

  20. Concurrent Validity of Persian Version of Wechsler Intelligence Scale for Children - Fourth Edition and Cognitive Assessment System in Patients with Learning Disorder

    PubMed Central

    Rostami, Reza; Sadeghi, Vahid; Zarei, Jamileh; Haddadi, Parvaneh; Mohazzab-Torabi, Saman; Salamati, Payman

    2013-01-01

Objective The aim of this study was to compare the Persian version of the Wechsler Intelligence Scale for Children - Fourth Edition (WISC-IV) and Cognitive Assessment System (CAS) tests, to determine the correlation between their scales and to evaluate the probable concurrent validity of these tests in patients with learning disorders. Methods One hundred and sixty-two children with learning disorder who presented at Atieh Comprehensive Psychiatry Center were selected in a consecutive non-randomized order. All of the patients were assessed based on the WISC-IV and CAS score questionnaires. The Pearson correlation coefficient was used to analyze the correlation between the data and to assess the concurrent validity of the two tests. Linear regression was used for statistical modeling. The type I error was set at a maximum of 5%. Findings There was a strong correlation between the total score of the WISC-IV test and the total score of the CAS test in the patients (r=0.75, P<0.001). The correlations among the other scales were mostly high and all of them were statistically significant (P<0.001). A linear regression model was obtained (α = 0.51, β = 0.81 and P<0.001). Conclusion There is an acceptable correlation between the WISC-IV scales and the CAS test in children with learning disorders. A concurrent validity is established between the two tests and their scales. PMID:23724180

  1. Wavelet based free-form deformations for nonrigid registration

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang [1]. This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems [2], but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.
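
    To give a feel for the multiscale reparameterization described, the sketch below decomposes a deformation-field component with a biorthogonal (B-spline family) wavelet using PyWavelets; this is a generic stand-in, not the specific wavelet of Cai and Wang:

      import numpy as np
      import pywt

      # Illustrative 2D deformation component on a 64 x 64 control grid.
      field = np.random.default_rng(1).normal(size=(64, 64))

      # Decompose into one coarse approximation plus per-level detail corrections.
      coeffs = pywt.wavedec2(field, wavelet='bior3.3', level=3)
      coarse, details = coeffs[0], coeffs[1:]

      # Perfect reconstruction: the multiscale model spans the same space.
      recon = pywt.waverec2(coeffs, wavelet='bior3.3')
      assert np.allclose(recon[:64, :64], field)

    In a wavelet FFD, the optimizer adjusts these coarse and detail coefficients directly instead of single-scale B-spline control points.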

  2. Linear and non-linear bias: predictions versus measurements

    NASA Astrophysics Data System (ADS)

    Hoffmann, K.; Bel, J.; Gaztañaga, E.

    2017-02-01

We study the linear and non-linear bias parameters which determine the mapping between the distributions of galaxies and the full matter density fields, comparing different measurements and predictions. Associating galaxies with dark matter haloes in the Marenostrum Institut de Ciències de l'Espai (MICE) Grand Challenge N-body simulation, we directly measure the bias parameters by comparing the smoothed density fluctuations of haloes and matter in the same region at different positions as a function of smoothing scale. Alternatively, we measure the bias parameters by matching the probability distributions of halo and matter density fluctuations, which can be applied to observations. These direct bias measurements are compared to corresponding measurements from two-point and different third-order correlations, as well as predictions from the peak-background model, which we presented in previous papers using the same data. We find an overall variation of the linear bias measurements and predictions of ~5 per cent with respect to results from two-point correlations for different halo samples with masses between ~10¹² and ~10¹⁵ h⁻¹ M⊙ at the redshifts z = 0.0 and 0.5. Variations between the second- and third-order bias parameters from the different methods show larger variations, but with consistent trends in mass and redshift. The various bias measurements reveal a tight relation between the linear and the quadratic bias parameters, which is consistent with results from the literature based on simulations with different cosmologies. Such a universal relation might improve constraints on cosmological models, derived from second-order clustering statistics at small scales or higher order clustering statistics.
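
    The direct measurement described, comparing smoothed halo and matter fluctuations in the same cells, amounts to fitting a local bias expansion; a minimal numpy sketch, assuming the density contrasts have already been measured on a common grid:

      import numpy as np

      def measure_bias(delta_h, delta_m):
          """Least-squares fit of the local bias expansion
              delta_h = b1 * delta_m + (b2 / 2) * (delta_m**2 - <delta_m**2>)
          to halo and matter density fluctuations in the same smoothing cells."""
          dm2 = delta_m**2 - np.mean(delta_m**2)
          A = np.column_stack([delta_m, 0.5 * dm2])
          (b1, b2), *_ = np.linalg.lstsq(A, delta_h, rcond=None)
          return b1, b2

    Repeating the fit for a range of smoothing scales gives the scale dependence of b1 and b2 that the paper compares across methods.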

  3. Providing a Spatial Context for Crop Insurance in Ethiopia: Multiscale Comparisons of Vegetation Metrics in Tigray

    NASA Astrophysics Data System (ADS)

    Mann, B. F.; Small, C.

    2014-12-01

Weather-based index insurance projects are rapidly expanding across the developing world. Many of these projects use satellite-based observations to detect extreme weather events, which inform and trigger payouts to smallholder farmers. While most index insurance programs use precipitation measurements to determine payouts, the use of remotely sensed observations of vegetation is currently being explored. In order to use vegetation indices as a basis for payouts, it is necessary to establish a consistent relationship between the vegetation index and the health and abundance of agriculture on the ground. The accuracy with which remotely sensed vegetation indices can detect changes in agriculture depends on both the spatial scale of the agriculture and the spatial resolution of the sensor. This study analyzes the relationship between meter and decameter scale vegetation fraction estimates derived from linear spectral mixture models with a more commonly used vegetation index (NDVI, EVI) at hectometer spatial scales. In addition, the analysis incorporates land cover/land use field observations collected in Tigray, Ethiopia, in July 2013. It also tests the flexibility and utility of a standardized spectral mixture model in which land cover is represented as continuous fields of rock and soil substrate (S), vegetation (V) and dark surfaces (D; water, shadow). This analysis found strong linear relationships among vegetation metrics at 1.6-meter, 30-meter and 250-meter resolutions across spectrally diverse subsets of Tigray, Ethiopia, with significantly correlated relationships under Spearman's rho statistic. The observed linear scaling has positive implications for the future use of moderate resolution vegetation indices in similar landscapes, especially index insurance projects that are scaling up across the developing world using remotely sensed environmental information.

  4. Towards a Comprehensive Model of Jet Noise Using an Acoustic Analogy and Steady RANS Solutions

    NASA Technical Reports Server (NTRS)

    Miller, Steven A. E.

    2013-01-01

An acoustic analogy is developed to predict the noise from jet flows. It contains two source models that independently predict the noise from turbulence and shock wave shear layer interactions. The acoustic analogy is based on the Euler equations and separates the sources from propagation. Propagation effects are taken into account by calculating the vector Green's function of the linearized Euler equations. The sources are modeled following the work of Tam and Auriault, Morris and Boluriaan, and Morris and Miller. A statistical model of the two-point cross-correlation of the velocity fluctuations is used to describe the turbulence. The acoustic analogy attempts to take into account the correct scaling of the sources for a wide range of nozzle pressure and temperature ratios. It does not make assumptions regarding fine- or large-scale turbulent noise sources, self- or shear-noise, or convective amplification. The acoustic analogy is partially informed by three-dimensional steady Reynolds-Averaged Navier-Stokes solutions that include the nozzle geometry. The predictions are compared with experiments of jets operating from subsonic through supersonic conditions, at both unheated and heated temperatures. Predictions generally capture the scaling of both mixing noise and BBSAN for the conditions examined, but some discrepancies remain that are due to the accuracy of the steady RANS turbulence model closure, the equivalent sources, and the use of a simplified vector Green's function solver of the linearized Euler equations.

  5. The allometry of coarse root biomass: log-transformed linear regression or nonlinear regression?

    PubMed

    Lai, Jiangshan; Yang, Bo; Lin, Dunmei; Kerkhoff, Andrew J; Ma, Keping

    2013-01-01

    Precise estimation of root biomass is important for understanding carbon stocks and dynamics in forests. Traditionally, biomass estimates are based on allometric scaling relationships between stem diameter and coarse root biomass calculated using linear regression (LR) on log-transformed data. Recently, it has been suggested that nonlinear regression (NLR) is a preferable fitting method for scaling relationships. But while this claim has been contested on both theoretical and empirical grounds, and statistical methods have been developed to aid in choosing between the two methods in particular cases, few studies have examined the ramifications of erroneously applying NLR. Here, we use direct measurements of 159 trees belonging to three locally dominant species in east China to compare the LR and NLR models of diameter-root biomass allometry. We then contrast model predictions by estimating stand coarse root biomass based on census data from the nearby 24-ha Gutianshan forest plot and by testing the ability of the models to predict known root biomass values measured on multiple tropical species at the Pasoh Forest Reserve in Malaysia. Based on likelihood estimates for model error distributions, as well as the accuracy of extrapolative predictions, we find that LR on log-transformed data is superior to NLR for fitting diameter-root biomass scaling models. More importantly, inappropriately using NLR leads to grossly inaccurate stand biomass estimates, especially for stands dominated by smaller trees.
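
    The two fitting strategies being compared can be stated compactly: LR fits log M = log a + b log D, which assumes multiplicative (lognormal) error, while NLR fits M = a D^b directly, which assumes additive (normal) error. A minimal scipy sketch with illustrative variable names:

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.stats import linregress

      def fit_allometry(diameter, root_biomass):
          """Fit the allometry M = a * D^b two ways and return both (a, b) pairs."""
          # LR: ordinary least squares on log-transformed data.
          lr = linregress(np.log(diameter), np.log(root_biomass))
          a_lr, b_lr = np.exp(lr.intercept), lr.slope
          # NLR: direct nonlinear least squares on the raw data.
          power = lambda d, a, b: a * d**b
          (a_nlr, b_nlr), _ = curve_fit(power, diameter, root_biomass, p0=[a_lr, b_lr])
          return (a_lr, b_lr), (a_nlr, b_nlr)

    The choice matters because the error assumption, not the curve shape, is what differs: NLR lets the largest trees dominate the fit, which is one reason it can produce the grossly inaccurate stand estimates reported above.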

  6. Transport of Cryptosporidium parvum Oocysts in Charge Heterogeneous Porous Media: Microfluidics Experiment and Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Meng, X.; Guo, Z.; Zhang, C.; Nguyen, T. H.; Hu, D.; Ji, J.; Yang, X.

    2017-12-01

Colloidal attachment on charge heterogeneous grains has significant environmental implications for transport of hazardous colloids, such as pathogens, in the aquifer, where iron, manganese, and aluminium oxide minerals are the major source of surface charge heterogeneity of the aquifer grains. A patchwise surface charge model is often used to describe the surface charge heterogeneity of the grains. In the patchwise model, the colloidal attachment efficiency is linearly correlated with the fraction of the favorable patches (θ = λ(θf − θu) + θu). However, our previous microfluidic study showed that the attachment efficiency of oocysts of Cryptosporidium parvum, a waterborne protozoan parasite, was not linearly correlated with the fraction of the favorable patches (λ). In this study, we developed a pore scale model to simulate colloidal transport and attachment on charge heterogeneous grains. The flow field was simulated using the lattice Boltzmann method (LBM) and colloidal transport and attachment were simulated using the Lagrangian particle tracking method. The pore scale model was calibrated with experimental results of colloid and oocyst transport in microfluidic devices and was then used to simulate oocyst transport in charge heterogeneous porous media under a variety of environmentally relevant conditions, i.e., the fraction of favorable patches, ionic strength, and pH. The results of the pore scale simulations were used to evaluate the effect of surface charge heterogeneity on upscaling of oocyst transport from the pore to the continuum scale and to develop an applicable correlation between colloidal attachment efficiency and the fraction of the favorable patches.
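
    The patchwise expression quoted inline can be restated cleanly: with λ the favorable-patch fraction and θ_f, θ_u the favorable and unfavorable attachment efficiencies,

      \theta = \lambda\,\theta_f + (1 - \lambda)\,\theta_u = \lambda(\theta_f - \theta_u) + \theta_u ,

    i.e., a linear interpolation between the two limiting efficiencies; it is this linearity in λ that the authors' earlier microfluidic measurements of oocyst attachment departed from.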

  7. Neutron star dynamics under time-dependent external torques

    NASA Astrophysics Data System (ADS)

    Gügercinoǧlu, Erbil; Alpar, M. Ali

    2017-11-01

The two-component model describes neutron star dynamics incorporating the response of the superfluid interior. Conventional solutions and applications involve constant external torques, as appropriate for radio pulsars on dynamical time-scales. We present the general solution of two-component dynamics under arbitrary time-dependent external torques, with internal torques that are linear in the rotation rates, or with the extremely non-linear internal torques due to vortex creep. The two-component model incorporating the response of linear or non-linear internal torques can now be applied not only to radio pulsars but also to magnetars and to neutron stars in binary systems, with strong observed variability and noise in the spin-down or spin-up rates. Our results allow the extraction of the time-dependent external torques from the observed spin-down (or spin-up) time series, Ω̇(t). Applications are discussed.

  8. On the use of finite difference matrix-vector products in Newton-Krylov solvers for implicit climate dynamics with spectral elements

    DOE PAGES

    Woodward, Carol S.; Gardner, David J.; Evans, Katherine J.

    2015-01-01

Efficient solutions of global climate models require effectively handling disparate length and time scales. Implicit solution approaches allow time integration of the physical system with a step size governed by the accuracy of the processes of interest rather than by the stability of the fastest time scales present. Implicit approaches, however, require the solution of nonlinear systems within each time step. Usually, Newton's method is applied to solve these systems. Each iteration of Newton's method, in turn, requires the solution of a linear model of the nonlinear system. This model employs the Jacobian of the problem-defining nonlinear residual, but this Jacobian can be costly to form. If a Krylov linear solver is used for the solution of the linear system, the action of the Jacobian matrix on a given vector is required. In the case of spectral element methods, the Jacobian is not calculated but only implemented through matrix-vector products. The matrix-vector multiply can also be approximated by a finite difference approximation, which may introduce inaccuracy in the overall nonlinear solver. In this paper, we review the advantages and disadvantages of finite difference approximations of these matrix-vector products for climate dynamics within the spectral element shallow water dynamical core of the Community Atmosphere Model.
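
    The finite-difference matrix-vector product under review has a standard one-line form, J(u)v ≈ (F(u + εv) − F(u))/ε; a minimal sketch follows, with a commonly used (but here assumed) heuristic for choosing ε:

      import numpy as np

      def jacobian_vector_product(residual, u, v, eps=None):
          """Matrix-free approximation of J(u) @ v for a nonlinear residual F,
          as used inside Newton-Krylov solvers where J is never formed."""
          if eps is None:
              # Scale the perturbation to the iterate and direction norms.
              eps = (np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u))
                     / max(np.linalg.norm(v), 1e-30))
          return (residual(u + eps * v) - residual(u)) / eps

    The trade-off reviewed in the paper lives entirely in ε: too large and the secant error pollutes the Krylov solve, too small and floating-point cancellation does.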

  9. Sharp inflaton potentials and bi-spectra: effects of smoothening the discontinuity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Jérôme; Sriramkumar, L.; Hazra, Dhiraj Kumar, E-mail: jmartin@iap.fr, E-mail: sriram@physics.iitm.ac.in, E-mail: dhiraj@apctp.org

Sharp shapes in the inflaton potentials often lead to short departures from slow roll which, in turn, result in deviations from scale invariance in the scalar power spectrum. Typically, in such situations, the scalar power spectrum exhibits a burst of features associated with modes that leave the Hubble radius either immediately before or during the epoch of fast roll. Moreover, one also finds that the power spectrum turns scale invariant at smaller scales corresponding to modes that leave the Hubble radius at later stages, when slow roll has been restored. In other words, the imprints of brief departures from slow roll, arising out of sharp shapes in the inflaton potential, are usually of a finite width in the scalar power spectrum. Intuitively, one may imagine that the scalar bi-spectrum too may exhibit a similar behavior, i.e. a restoration of scale invariance at small scales, when slow roll has been reestablished. However, in the case of the Starobinsky model (viz. the model described by a linear inflaton potential with a sudden change in its slope) involving the canonical scalar field, it has been found that a rather sharp, though short, departure from slow roll can leave a lasting and significant imprint on the bi-spectrum. The bi-spectrum in this case is found to grow linearly with the wavenumber at small scales, a behavior which is clearly unphysical. In this work, we study the effects of smoothening the discontinuity in the Starobinsky model on the scalar bi-spectrum. Focusing on the equilateral limit, we analytically show that, for smoother potentials, the bi-spectrum indeed turns scale invariant at suitably large wavenumbers. We also confirm the analytical results numerically using our newly developed code BINGO. We conclude with a few comments on certain related points.

  10. Edge-defined film-fed growth of thin silicon sheets

    NASA Technical Reports Server (NTRS)

    Ettouney, H. M.; Kalejs, J. P.

    1984-01-01

Finite element analysis was used on two length scales to understand crystal growth of thin silicon sheets. Thermal-capillary models of entire ribbon growth systems were developed. Microscopic modeling of the morphological structure of melt/solid interfaces beyond the point of linear instability was carried out. The application to the silicon system is discussed.

  11. Estimating the impact of mineral aerosols on crop yields in food insecure regions using statistical crop models

    NASA Astrophysics Data System (ADS)

    Hoffman, A.; Forest, C. E.; Kemanian, A.

    2016-12-01

A significant number of food-insecure nations exist in regions of the world where dust plays a large role in the climate system. While the impacts of common climate variables (e.g. temperature, precipitation, ozone, and carbon dioxide) on crop yields are relatively well understood, the impact of mineral aerosols on yields has not yet been thoroughly investigated. This research aims to develop the data and tools to advance our understanding of mineral aerosol impacts on crop yields. Suspended dust affects crop yields by altering the amount and type of radiation reaching the plant and by modifying local temperature and precipitation, while dust events (i.e. dust storms) affect crop yields by depleting the soil of nutrients or by defoliation via particle abrasion. The impact of dust on yields is modeled statistically because we are uncertain which impacts will dominate the response at the national and regional scales considered in this study. Multiple linear regression is used in a number of large-scale statistical crop modeling studies to estimate yield responses to various climate variables. In alignment with previous work, we develop linear crop models, but build upon this simple method of regression with machine-learning techniques (e.g. random forests) to identify important statistical predictors and isolate how dust affects yields on the scales of interest. To perform this analysis, we develop a crop-climate dataset for maize, soybean, groundnut, sorghum, rice, and wheat for the regions of West Africa, East Africa, South Africa, and the Sahel. Random forest regression models consistently model historic crop yields better than the linear models. In several instances, the random forest models accurately capture the temperature and precipitation threshold behavior in crops. Additionally, improving agricultural technology has caused a well-documented positive trend that dominates time series of global and regional yields. This trend is often removed before regression with traditional crop models, but likely at the cost of removing climate information. Our random forest models consistently discover the positive trend without removing any additional data. The application of random forests as a statistical crop model provides insight into understanding the impact of dust on yields in marginal food producing regions.
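
    A minimal scikit-learn sketch of the model comparison described, with synthetic data standing in for the crop-climate dataset and feature names that are purely illustrative:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      # Columns stand in for e.g. temperature, precipitation, dust loading.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 3))
      y = (2.0 + 0.5 * X[:, 0]                    # linear climate response
           - 0.3 * np.maximum(X[:, 1], 0.0)       # threshold-type response
           + 0.2 * X[:, 2] + rng.normal(0.0, 0.1, size=200))

      for model in (LinearRegression(),
                    RandomForestRegressor(n_estimators=300, random_state=0)):
          r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
          print(type(model).__name__, round(r2, 3))

    The threshold term is the kind of nonlinearity the abstract credits the random forests with capturing and a purely linear model with missing.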

  12. Clipping the cosmos: the bias and bispectrum of large scale structure.

    PubMed

    Simpson, Fergus; James, J Berian; Heavens, Alan F; Heymans, Catherine

    2011-12-30

A large fraction of the information collected by cosmological surveys is simply discarded to avoid length scales which are difficult to model theoretically. We introduce a new technique which enables the extraction of useful information from the bispectrum of galaxies well beyond the conventional limits of perturbation theory. Our results strongly suggest that this method increases the range of scales where the relation between the bispectrum and power spectrum in tree-level perturbation theory may be applied, from k_max ∼ 0.1 to ∼0.7 h Mpc⁻¹. This leads to correspondingly large improvements in the determination of galaxy bias. Since the clipped matter power spectrum closely follows the linear power spectrum, there is the potential to use this technique to probe the growth rate of linear perturbations and confront theories of modified gravity with observation.

  13. A Novel Joint Problem of Routing, Scheduling, and Variable-Width Channel Allocation in WMNs

    PubMed Central

    Liu, Wan-Yu; Chou, Chun-Hung

    2014-01-01

    This paper investigates a novel joint problem of routing, scheduling, and channel allocation for single-radio multichannel wireless mesh networks in which multiple channel widths can be adjusted dynamically through a new software technology so that more concurrent transmissions and suppressed overlapping channel interference can be achieved. Although the previous works have studied this joint problem, their linear programming models for the problem were not incorporated with some delicate constraints. As a result, this paper first constructs a linear programming model with more practical concerns and then proposes a simulated annealing approach with a novel encoding mechanism, in which the configurations of multiple time slots are devised to characterize the dynamic transmission process. Experimental results show that our approach can find the same or similar solutions as the optimal solutions for smaller-scale problems and can efficiently find good-quality solutions for a variety of larger-scale problems. PMID:24982990

  14. A model-free characterization of recurrences in stationary time series

    NASA Astrophysics Data System (ADS)

    Chicheportiche, Rémy; Chakraborti, Anirban

    2017-05-01

Study of recurrences in earthquakes, climate, financial time series, etc. is crucial to better forecast disasters and limit their consequences. Most of the previous phenomenological studies of recurrences have involved only a long-ranged autocorrelation function, and ignored the multi-scaling properties induced by potential higher order dependencies. We argue that copulas are a natural model-free framework to study non-linear dependencies in time series and related concepts like recurrences. Consequently, we find that (i) non-linear dependences do impact both the statistics and dynamics of recurrence times, and (ii) the scaling arguments for the unconditional distribution may not be applicable. Hence, fitting and/or simulating the intertemporal distribution of recurrence intervals is very much system specific, and cannot actually benefit from universal features, in contrast to previous claims. This has important implications in epilepsy prognosis and financial risk management applications.

  15. Computational modes and the Machenhauer N.L.N.M.I. of the GLAS 4th order model. [NonLinear Normal Mode Initialization in numerical weather forecasting]

    NASA Technical Reports Server (NTRS)

    Navon, I. M.; Bloom, S.; Takacs, L. L.

    1985-01-01

    An attempt was made to use the GLAS global 4th order shallow water equations to perform a Machenhauer nonlinear normal mode initialization (NLNMI) for the external vertical mode. A new algorithm was defined for identifying and filtering out computational modes which affect the convergence of the Machenhauer iterative procedure. The computational modes and zonal waves were linearly initialized and gravitational modes were nonlinearly initialized. The Machenhauer NLNMI was insensitive to the absence of high zonal wave numbers. The effects of the Machenhauer scheme were evaluated by performing 24 hr integrations with nondissipative and dissipative explicit time integration models. The NLNMI was found to be inferior to the Rasch (1984) pseudo-secant technique for obtaining convergence when the time scales of nonlinear forcing were much smaller than the time scales expected from the natural frequency of the mode.

  16. Trajectory Reconstruction and Uncertainty Analysis Using Mars Science Laboratory Pre-Flight Scale Model Aeroballistic Testing

    NASA Technical Reports Server (NTRS)

    Lugo, Rafael A.; Tolson, Robert H.; Schoenenberger, Mark

    2013-01-01

As part of the Mars Science Laboratory (MSL) trajectory reconstruction effort at NASA Langley Research Center, free-flight aeroballistic experiments with instrumented MSL scale models were conducted at Aberdeen Proving Ground in Maryland. The models carried an inertial measurement unit (IMU) and a flush air data system (FADS) similar to the MSL Entry Atmospheric Data System (MEADS) that provided data types similar to those from the MSL entry. Multiple sources of redundant data were available, including tracking radar and on-board magnetometers. These experimental data enabled the testing and validation of the various tools and methodologies that will be used for MSL trajectory reconstruction. The aerodynamic parameters Mach number, angle of attack, and sideslip angle were estimated using minimum variance with a priori to combine the pressure data and pre-flight computational fluid dynamics (CFD) data. Both linear and non-linear pressure model terms were also estimated for each pressure transducer as a measure of the errors introduced by CFD and transducer calibration. Parameter uncertainties were estimated using a "consider parameters" approach.

  17. Growth trajectories of mathematics achievement: Longitudinal tracking of student academic progress.

    PubMed

    Mok, Magdalena M C; McInerney, Dennis M; Zhu, Jinxin; Or, Anthony

    2015-06-01

    A number of methods to investigate growth have been reported in the literature, including hierarchical linear modelling (HLM), latent growth modelling (LGM), and multidimensional scaling applied to longitudinal profile analysis (LPAMS). This study aimed at modelling the mathematics growth of students over a span of 6 years from Grade 3 to Grade 9. The sample comprised secondary longitudinal data collected in three waves from n = 866 Hong Kong students when they were in Grade 3, Grade 6, and Grade 9. Mathematics achievement was measured thrice on a vertical scale linked with anchor items. Linear and nonlinear latent growth models were used to assess students' growth. Gender differences were also examined. A nonlinear latent growth curve with a decelerated rate had a good fit to the data. Initial achievement and growth rate were negatively correlated. No gender difference was found. Mathematics growth from Grade 6 to Grade 9 was slower than that from Grade 3 to Grade 6. Students with lower initial achievement improved at a faster rate than those who started at a higher level. Gender did not affect growth rate. © 2014 The British Psychological Society.

  18. Nonlinear price impact from linear models

    NASA Astrophysics Data System (ADS)

    Patzelt, Felix; Bouchaud, Jean-Philippe

    2017-12-01

    The impact of trades on asset prices is a crucial aspect of market dynamics for academics, regulators, and practitioners alike. Recently, universal and highly nonlinear master curves were observed for price impacts aggregated on all intra-day scales (Patzelt and Bouchaud 2017 arXiv:1706.04163). Here we investigate how well these curves, their scaling, and the underlying return dynamics are captured by linear ‘propagator’ models. We find that the classification of trades as price-changing versus non-price-changing can explain the price impact nonlinearities and short-term return dynamics to a very high degree. The explanatory power provided by the change indicator in addition to the order sign history increases with increasing tick size. To obtain these results, several long-standing technical issues for model calibration and testing are addressed. We present new spectral estimators for two- and three-point cross-correlations, removing the need for previously used approximations. We also show when calibration is unbiased and how to accurately reveal previously overlooked biases. Therefore, our results contribute significantly to understanding both recent empirical results and the properties of a popular class of impact models.

  19. Performance limitations of bilateral force reflection imposed by operator dynamic characteristics

    NASA Technical Reports Server (NTRS)

    Chapel, Jim D.

    1989-01-01

    A linearized, single-axis model is presented for bilateral force reflection which facilitates investigation into the effects of manipulator, operator, and task dynamics, as well as time delay and gain scaling. Structural similarities are noted between this model and impedance control. Stability results based upon this model impose requirements upon operator dynamic characteristics as functions of system time delay and environmental stiffness. An experimental characterization reveals the limited capabilities of the human operator to meet these requirements. A procedure is presented for determining the force reflection gain scaling required to provide stability and acceptable operator workload. This procedure is applied to a system with dynamics typical of a space manipulator, and the required gain scaling is presented as a function of environmental stiffness.

  20. Numerical tests of local scale invariance in ageing q-state Potts models

    NASA Astrophysics Data System (ADS)

    Lorenz, E.; Janke, W.

    2007-01-01

Much effort has been spent over the last years to achieve a coherent theoretical description of ageing as a non-linear dynamical process. Long supposed to be a consequence of the slow dynamics of glassy systems only, ageing phenomena could also be identified in the phase-ordering kinetics of simple ferromagnets. As a phenomenological approach, Henkel et al. developed a group of local scale transformations under which two-time autocorrelation and response functions should transform covariantly. This work extends previous numerical tests of the predicted scaling functions for the Ising model by Monte Carlo simulations of two-dimensional q-state Potts models with q = 3 and 8, which, in equilibrium, undergo temperature-driven phase transitions of second and first order, respectively.

  1. Modelling Ocean Dissipation in Icy Satellites: A Comparison of Linear and Quadratic Friction

    NASA Astrophysics Data System (ADS)

    Hay, H.; Matsuyama, I.

    2015-12-01

Although subsurface oceans are confirmed for Europa, Ganymede, and Callisto, and strongly suspected for Enceladus and Titan, the exact mechanism required to heat and maintain these liquid reservoirs over Solar System history remains a mystery. Radiogenic heating can supply enough energy for large satellites, whereas tidal dissipation provides the best explanation for the presence of oceans in small icy satellites. The amount of thermal energy actually contributed to the interiors of these icy satellites through oceanic tidal dissipation is largely unquantified. Presented here is a numerical model that builds upon previous work for quantifying tidally dissipated energy in the subsurface oceans of the icy satellites. Recent semi-analytical models (Tyler, 2008 and Matsuyama, 2014) have solved the Laplace Tidal Equations to estimate the time-averaged energy flux over an orbital period in icy satellite oceans, neglecting the presence of a solid icy shell. These models are only able to consider linear Rayleigh friction. The numerical model presented here is compared to one of these semi-analytical models, finding excellent agreement between velocity and displacement solutions for all three terms of the tidal potential. The time-averaged energy flux is within 2-6% of the analytical values. Quadratic (bottom) friction is then incorporated into the model, replacing linear friction. This approach is commonly applied to terrestrial ocean dissipation studies, where dissipation scales nonlinearly with velocity. A suite of simulations is also run for the quadratic friction case and compared against recent scaling laws developed by Chen and Nimmo (2013).
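
    The contrast between the two drag laws can be made concrete with a toy forced-flow integration: linear (Rayleigh) friction -c*u versus quadratic (bottom) friction -c_d*|u|*u, whose dissipation scales nonlinearly with velocity. All parameters below are illustrative:

      import numpy as np

      def mean_dissipation(drag, n_cycles=50, dt=1e-3, omega=1.0, f0=1.0):
          """Integrate du/dt = f0*cos(omega*t) - drag(u) and return the
          time-averaged dissipation rate <u * drag(u)> over the later cycles."""
          t = np.arange(0.0, 2.0 * np.pi * n_cycles / omega, dt)
          u, diss = 0.0, []
          for ti in t:
              d = drag(u)
              diss.append(u * d)                        # instantaneous dissipation
              u += dt * (f0 * np.cos(omega * ti) - d)   # forward Euler step
          return np.mean(diss[len(diss) // 2:])

      linear = mean_dissipation(lambda u: 0.1 * u)              # Rayleigh friction
      quadratic = mean_dissipation(lambda u: 0.1 * abs(u) * u)  # bottom friction

    Because the quadratic law dissipates more where currents are fast, the time-averaged flux no longer scales linearly with forcing amplitude, which is why a separate suite of simulations and scaling laws is needed for that case.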

  2. Galaxy Clustering, Photometric Redshifts and Diagnosis of Systematics in the DES Science Verification Data

    DOE PAGES

    Crocce, M.

    2015-12-09

    We study the clustering of galaxies detected at i < 22.5 in the Science Verification observations of the Dark Energy Survey (DES). Two-point correlation functions are measured using 2.3 × 10⁶ galaxies over a contiguous 116 deg² region in five bins of photometric redshift width Δz = 0.2 in the range 0.2 < z < 1.2. The impact of photometric redshift errors is assessed by comparing results using a template-based photo-z algorithm (BPZ) to a machine-learning algorithm (TPZ). A companion paper presents maps of several observational variables (e.g. seeing, sky brightness) which could modulate the galaxy density. Here we characterize and mitigate systematic errors on the measured clustering which arise from these observational variables, in addition to others such as Galactic dust and stellar contamination. After correcting for systematic effects, we then measure galaxy bias over a broad range of linear scales relative to mass clustering predicted from the Planck Λ cold dark matter model, finding agreement with the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) measurements with χ² of 4.0 (8.7) with 5 degrees of freedom for the TPZ (BPZ) redshifts. Furthermore, we test a ‘linear bias’ model, in which the galaxy clustering is a fixed multiple of the predicted non-linear dark matter clustering. The precision of the data allows us to determine that the linear bias model describes the observed galaxy clustering to 2.5 percent accuracy down to scales at least 4–10 times smaller than those on which linear theory is expected to be sufficient.
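    A 'linear bias' fit of the kind described reduces to a one-parameter χ² minimization of w_gal = b² w_dm. The following sketch uses synthetic stand-in data (toy correlation function and diagonal covariance), not DES measurements.

    ```python
    import numpy as np

    # Hedged sketch of a linear-bias fit: model w_gal(theta) = b^2 * w_dm(theta)
    # and find b by chi^2 minimization. All data here are placeholders.
    theta = np.array([0.1, 0.2, 0.4, 0.8, 1.6])     # angular scales (deg)
    w_dm = 0.05 / theta                              # toy dark-matter prediction
    b_true = 1.4
    w_obs = b_true**2 * w_dm + np.random.default_rng(1).normal(0, 0.002, theta.size)
    cov = np.diag(np.full(theta.size, 0.002**2))    # toy diagonal covariance

    icov = np.linalg.inv(cov)
    def chi2(b):
        r = w_obs - b**2 * w_dm
        return r @ icov @ r

    bs = np.linspace(0.5, 2.5, 2001)
    best = bs[np.argmin([chi2(b) for b in bs])]
    print(f"best-fit bias b = {best:.3f}, chi^2 = {chi2(best):.2f} "
          f"({theta.size - 1} degrees of freedom)")
    ```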

  4. Theory of wavelet-based coarse-graining hierarchies for molecular dynamics.

    PubMed

    Rinderspacher, Berend Christopher; Bardhan, Jaydeep P; Ismail, Ahmed E

    2017-07-01

    We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as the bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse graining, particularly for applications to polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, that is, going beyond the two scales of conventional coarse-grained strategies; furthermore, the wavelet-based coarse-grained models explicitly link time and length scales. Finally, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.

  5. Robust control of combustion instabilities

    NASA Astrophysics Data System (ADS)

    Hong, Boe-Shong

    The combustion-fluid system is decomposed into several interacting dynamical subsystems, each with its own time scale and physical significance, to build robust feedback-controlled dynamics. On the fast time scale, the phenomenon of combustion instability corresponds to the internal feedback of two subsystems: acoustic dynamics and flame dynamics, which are parametrically dependent on the slow-time-scale mean-flow dynamics, controlled for global performance by a mean-flow controller. This dissertation constructs such a control system, through modeling, analysis and synthesis, to deal with model uncertainties, environmental noises and time-varying mean-flow operation. The conservation laws are decomposed into fast-time acoustic dynamics and slow-time mean-flow dynamics, which serve to synthesize an LPV (linear parameter varying) L2-gain robust control law, in which a robust observer is embedded for estimating and controlling the internal status, while achieving trade-offs among robustness, performance and operation. The robust controller is formulated as two LPV-type Linear Matrix Inequalities (LMIs), whose numerical solver is developed by the finite-element method. Some important issues related to physical understanding and engineering application are discussed in simulated results of the control system.
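    As a down-scaled illustration of the LMI machinery involved (not the dissertation's LPV synthesis), the sketch below checks the basic Lyapunov LMI, AᵀP + PA < 0 with P > 0, for a toy system using cvxpy; an SDP-capable solver (e.g. the one bundled with cvxpy) is assumed to be installed.

    ```python
    import cvxpy as cp
    import numpy as np

    # Feasibility of the Lyapunov LMI certifies stability of dx/dt = A x;
    # the LPV/L2-gain conditions mentioned in the abstract are LMIs of this kind.
    A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # toy stable system matrix (assumed)

    P = cp.Variable((2, 2), symmetric=True)
    eps = 1e-6
    constraints = [P >> eps * np.eye(2),                      # P positive definite
                   A.T @ P + P @ A << -eps * np.eye(2)]       # Lyapunov inequality
    prob = cp.Problem(cp.Minimize(0), constraints)            # pure feasibility problem
    prob.solve()
    print(prob.status, "\nP =\n", P.value)
    ```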

  6. Using CellML with OpenCMISS to Simulate Multi-Scale Physiology

    PubMed Central

    Nickerson, David P.; Ladd, David; Hussan, Jagir R.; Safaei, Soroush; Suresh, Vinod; Hunter, Peter J.; Bradley, Christopher P.

    2014-01-01

    OpenCMISS is an open-source modeling environment aimed, in particular, at the solution of bioengineering problems. OpenCMISS consists of two main parts: a computational library (OpenCMISS-Iron) and a field manipulation and visualization library (OpenCMISS-Zinc). OpenCMISS is designed for the solution of coupled multi-scale, multi-physics problems in a general-purpose parallel environment. CellML is an XML format designed to encode biophysically based systems of ordinary differential equations and both linear and non-linear algebraic equations. A primary design goal of CellML is to allow mathematical models to be encoded in a modular and reusable format to aid reproducibility and interoperability of modeling studies. In OpenCMISS, we make use of CellML models to enable users to configure various aspects of their multi-scale physiological models. This avoids the need for users to be familiar with the OpenCMISS internal code in order to perform customized computational experiments. Examples of this are: cellular electrophysiology models embedded in tissue electrical propagation models; material constitutive relationships for mechanical growth and deformation simulations; time-varying boundary conditions for various problem domains; and fluid constitutive relationships and lumped-parameter models. In this paper, we provide implementation details describing how CellML models are integrated into multi-scale physiological models in OpenCMISS. The external interface OpenCMISS presents to users is also described, including specific examples exemplifying the extensibility and usability these tools provide to the physiological modeling and simulation community. We conclude with some thoughts on future extension of OpenCMISS to make use of other community-developed information standards, such as FieldML, SED-ML, and BioSignalML. Plans for the integration of accelerator code (graphical processing unit and field programmable gate array) generated from CellML models are also discussed. PMID:25601911

  7. Resonant sterile neutrino dark matter in the local and high-z Universe

    NASA Astrophysics Data System (ADS)

    Bozek, Brandon; Boylan-Kolchin, Michael; Horiuchi, Shunsaku; Garrison-Kimmel, Shea; Abazajian, Kevork; Bullock, James S.

    2016-06-01

    Sterile neutrinos comprise an entire class of dark matter models that, depending on their production mechanism, can be hot, warm, or cold dark matter (CDM). We simulate the Local Group and representative volumes of the Universe in a variety of sterile neutrino models, all of which are consistent with the possible existence of a radiative decay line at ˜3.5 keV. We compare models of production via resonances in the presence of a lepton asymmetry (suggested by Shi & Fuller 1999) to `thermal' models. We find that properties in the highly non-linear regime - e.g. counts of satellites and internal properties of haloes and subhaloes - are insensitive to the precise fall-off in power with wavenumber, indicating that non-linear evolution essentially washes away differences in the initial (linear) matter power spectrum. In the quasi-linear regime at higher redshifts, however, quantitative differences in the 3D matter power spectra remain, raising the possibility that such models can be tested with future observations of the Lyman-α forest. While many of the sterile neutrino models largely eliminate multiple small-scale issues within the CDM paradigm, we show that these models may be ruled out in the near future via discoveries of additional dwarf satellites in the Local Group.

  8. Basin-scale estimates of oceanic primary production by remote sensing - The North Atlantic

    NASA Technical Reports Server (NTRS)

    Platt, Trevor; Caverhill, Carla; Sathyendranath, Shubha

    1991-01-01

    The monthly averaged CZCS data for 1979 are used to estimate annual primary production at ocean basin scales in the North Atlantic. The principal supplementary data used were 873 vertical profiles of chlorophyll and 248 sets of parameters derived from photosynthesis-light experiments. Four different procedures were tested for calculation of primary production. The spectral model with nonuniform biomass was taken as the benchmark for comparison against the other three models. The less complete models gave results that differed by as much as 50 percent from the benchmark. Vertically uniform models tended to underestimate primary production by about 20 percent compared to the nonuniform models. At the horizontal scale, the differences between spectral and nonspectral models were negligible. The linear correlation between biomass and estimated production was poor outside the tropics, suggesting caution against the indiscriminate use of biomass as a proxy variable for primary production.
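    The uniform-versus-nonuniform biomass comparison can be illustrated with a toy depth-integrated production calculation; the light-attenuation and photosynthesis parameters below are generic placeholders, and the P-I curve is a simple saturating form rather than the paper's spectral model.

    ```python
    import numpy as np

    # Illustrative comparison: water-column production with a vertically uniform
    # chlorophyll profile vs a Gaussian subsurface maximum of equal column total.
    z = np.linspace(0.0, 100.0, 1001)          # depth (m)
    I0, k_d = 1500.0, 0.05                     # surface irradiance, attenuation (assumed)
    I = I0 * np.exp(-k_d * z)                  # exponential light decay with depth
    PBmax, alpha = 4.0, 0.01                   # photosynthesis parameters (assumed)
    PI = PBmax * (1.0 - np.exp(-alpha * I / PBmax))   # saturating P-I response

    B_uniform = np.full_like(z, 0.5)           # mg chl m^-3
    B_gauss = 0.2 + 1.5 * np.exp(-((z - 30.0) / 10.0) ** 2)
    B_gauss *= np.trapz(B_uniform, z) / np.trapz(B_gauss, z)  # match column totals

    for name, B in [("uniform", B_uniform), ("nonuniform", B_gauss)]:
        P = np.trapz(B * PI, z)                # depth-integrated production
        print(f"{name:10s}: {P:.1f} (relative units)")
    ```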

  9. Comparing kinetic Monte Carlo and thin-film modeling of transversal instabilities of ridges on patterned substrates

    NASA Astrophysics Data System (ADS)

    Tewes, Walter; Buller, Oleg; Heuer, Andreas; Thiele, Uwe; Gurevich, Svetlana V.

    2017-03-01

    We employ kinetic Monte Carlo (KMC) simulations and a thin-film continuum model to comparatively study the transversal (i.e., Plateau-Rayleigh) instability of ridges formed by molecules on pre-patterned substrates. It is demonstrated that the evolution of the occurring instability qualitatively agrees between the two models for a single ridge as well as for two weakly interacting ridges. In particular, it is shown for both models that the instability occurs on well-defined length and time scales which are, for the KMC model, significantly larger than the intrinsic scales of thermodynamic fluctuations. This is further evidenced by the similarity of dispersion relations characterizing the linear instability modes.

  10. Nonlinearities of heart rate variability in animal models of impaired cardiac control: contribution of different time scales.

    PubMed

    Silva, Luiz Eduardo Virgilio; Lataro, Renata Maria; Castania, Jaci Airton; Silva, Carlos Alberto Aguiar; Salgado, Helio Cesar; Fazan, Rubens; Porta, Alberto

    2017-08-01

    Heart rate variability (HRV) has been extensively explored by traditional linear approaches (e.g., spectral analysis); however, several studies have pointed to the presence of nonlinear features in HRV, suggesting that linear tools might fail to account for the complexity of the HRV dynamics. Even though the prevalent notion is that HRV is nonlinear, the actual presence of nonlinear features is rarely verified. In this study, the presence of nonlinear dynamics was checked as a function of time scales in three experimental models of rats with different impairment of the cardiac control: namely, rats with heart failure (HF), spontaneously hypertensive rats (SHRs), and sinoaortic denervated (SAD) rats. Multiscale entropy (MSE) and refined MSE (RMSE) were chosen as the discriminating statistic for the surrogate test utilized to detect nonlinearity. Nonlinear dynamics is less present in HF animals at both short and long time scales compared with controls. A similar finding was found in SHR only at short time scales. SAD increased the presence of nonlinear dynamics exclusively at short time scales. Those findings suggest that a working baroreflex contributes to linearize HRV and to reduce the likelihood of observing nonlinear components of the cardiac control at short time scales. In addition, an increased sympathetic modulation seems to be a source of nonlinear dynamics at long time scales. Testing nonlinear dynamics as a function of the time scales can provide a characterization of the cardiac control complementary to more traditional markers in time, frequency, and information domains. NEW & NOTEWORTHY Although heart rate variability (HRV) dynamics is widely assumed to be nonlinear, nonlinearity tests are rarely used to check this hypothesis. By adopting multiscale entropy (MSE) and refined MSE (RMSE) as the discriminating statistic for the nonlinearity test, we show that nonlinear dynamics varies with time scale and the type of cardiac dysfunction. Moreover, as complexity metrics and nonlinearities provide complementary information, we strongly recommend using the test for nonlinearity as an additional index to characterize HRV. Copyright © 2017 the American Physiological Society.
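    For reference, a minimal multiscale entropy implementation (coarse-graining followed by sample entropy) is sketched below; it is the plain MSE, not the refined RMSE variant, and the m and r parameters follow the common convention (m = 2, r = 0.2·SD) rather than values from the study.

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=None):
        """Sample entropy with tolerance r (default 0.2 * standard deviation)."""
        if r is None:
            r = 0.2 * np.std(x)
        n = len(x)
        def count(mm):
            templates = np.array([x[i:i + mm] for i in range(n - mm)])
            c = 0
            for i in range(len(templates)):
                d = np.max(np.abs(templates - templates[i]), axis=1)
                c += np.sum(d <= r) - 1        # exclude the self-match
            return c
        B, A = count(m), count(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

    def mse(x, max_scale=5):
        """Coarse-grain the series at each scale, then compute sample entropy."""
        out = []
        for tau in range(1, max_scale + 1):
            n = len(x) // tau
            coarse = x[:n * tau].reshape(n, tau).mean(axis=1)
            out.append(sample_entropy(coarse))
        return out

    rng = np.random.default_rng(0)
    print(mse(rng.normal(size=2000)))   # white noise: entropy falls with scale
    ```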

  11. Comparison of the linear bias models in the light of the Dark Energy Survey

    NASA Astrophysics Data System (ADS)

    Papageorgiou, A.; Basilakos, S.; Plionis, M.

    2018-05-01

    The evolution of the linear and scale-independent bias, based on the most popular dark matter bias models within the Λ cold dark matter (ΛCDM) cosmology, is confronted with that of the Dark Energy Survey (DES) luminous red galaxies (LRGs). Applying a χ² minimization procedure between models and data, we find that all the considered linear bias models reproduce well the LRG bias data. The differences among the bias models are absorbed in the predicted mass of the dark-matter halo in which LRGs live, which ranges between ~6 × 10¹² and 1.4 × 10¹³ h⁻¹ M⊙ for the different bias models. Similar results, reaching however a maximum value of ~2 × 10¹³ h⁻¹ M⊙, are found by confronting the SDSS (2SLAQ) Luminous Red Galaxies clustering with theoretical clustering models, which also include the evolution of bias. This latter analysis also provides a value of Ωm = 0.30 ± 0.01, which is in excellent agreement with recent joint analyses of different cosmological probes and the reanalysis of the Planck data.

  12. Weak lensing shear and aperture mass from linear to non-linear scales

    NASA Astrophysics Data System (ADS)

    Munshi, Dipak; Valageas, Patrick; Barber, Andrew J.

    2004-05-01

    We describe the predictions for the smoothed weak lensing shear, γs, and aperture mass, Map, of two simple analytical models of the density field: the minimal tree model and the stellar model. Both models give identical results for the statistics of the three-dimensional density contrast smoothed over spherical cells and only differ by the detailed angular dependence of the many-body density correlations. We have shown in previous work that they also yield almost identical results for the probability distribution function (PDF) of the smoothed convergence, κs. We find that the two models give rather close results for both the shear and the positive tail of the aperture mass. However, we note that at small angular scales (θs ≲ 2 arcmin) the tail of the PDF for negative Map shows a strong variation between the two models, and the stellar model actually breaks down for θs ≲ 0.4 arcmin and Map < 0. This shows that the statistics of the aperture mass provides a very precise probe of the detailed structure of the density field, as it is sensitive to both the amplitude and the detailed angular behaviour of the many-body correlations. On the other hand, the minimal tree model shows good agreement with numerical simulations over all the scales and redshifts of interest, while both models provide a good description of the PDF of the smoothed shear components. Therefore, the shear and the aperture mass provide robust and complementary tools to measure the cosmological parameters as well as the detailed statistical properties of the density field.

  13. A Multi-Scale Integrated Approach to Representing Watershed Systems: Significance and Challenges

    NASA Astrophysics Data System (ADS)

    Kim, J.; Ivanov, V. Y.; Katopodes, N.

    2013-12-01

    A range of processes associated with supplying services and goods to human society originate at the watershed level. Predicting watershed response to forcing conditions has been of high interest for many practical societal problems; however, it remains challenging due to two significant properties of watershed systems, i.e., connectivity and non-linearity. Connectivity implies that disturbances arising at any larger scale will necessarily propagate and affect local-scale processes; their local effects consequently influence other processes, often through nonlinear relationships. Physically-based, process-scale modeling is needed to approach the understanding and proper assessment of non-linear effects between the watershed processes. We have developed an integrated model simulating hydrological processes, flow dynamics, erosion and sediment transport, tRIBS-OFM-HRM (Triangulated irregular network-based Real time Integrated Basin Simulator-Overland Flow Model-Hairsine and Rose Model). This coupled model offers the advantage of exploring the hydrological effects of watershed physical factors such as topography, vegetation, and soil, as well as their feedback mechanisms. Several examples investigating the effects of vegetation on flow movement, the role of the soil's substrate on sediment dynamics, and the driving role of topography on morphological processes are illustrated. We show how this comprehensive modeling tool can help understand interconnections and nonlinearities of the physical system, e.g., how vegetation affects hydraulic resistance depending on slope, vegetation cover fraction, discharge, and bed roughness condition; how the soil's substrate condition impacts erosion processes with a non-unique characteristic at the scale of a zero-order catchment; and how topographic changes affect spatial variations of morphologic variables. Due to the feedback and compensatory nature of mechanisms operating in different watershed compartments, our conclusion is that a key to representing watershed systems lies in an integrated, interdisciplinary approach, whereby a physically-based model is used for assessments/evaluations associated with future changes in land use, climate, and ecosystems.

  14. Customization of a generic 3D model of the distal femur using diagnostic radiographs.

    PubMed

    Schmutz, B; Reynolds, K J; Slavotinek, J P

    2008-01-01

    A method for the customization of a generic 3D model of the distal femur is presented. The customization method involves two steps: acquisition of calibrated orthogonal planar radiographs; and linear scaling of the generic model based on the width of a subject's femoral condyles as measured on the planar radiographs. Planar radiographs of seven intact lower cadaver limbs were obtained. The customized generic models were validated by comparing their surface geometry with that of CT-reconstructed reference models. The overall mean error was 1.2 mm. The results demonstrate that uniform scaling as a first step in the customization process produced a base model of accuracy comparable to other models reported in the literature.
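    The linear-scaling step itself is a one-line operation once the condylar widths are known; the sketch below scales a toy vertex array uniformly about its centroid, with both width measurements as assumed placeholder values rather than data from the study.

    ```python
    import numpy as np

    # Minimal sketch of the linear-scaling step: a generic surface mesh is scaled
    # uniformly by the ratio of the subject's condylar width (from calibrated
    # radiographs) to the generic model's width. Vertex data are placeholders.
    generic_vertices = np.random.default_rng(0).normal(size=(5000, 3))  # mm, toy mesh
    generic_condylar_width = 82.0    # mm, measured on the generic model (assumed)
    subject_condylar_width = 76.5    # mm, measured on the radiograph (assumed)

    s = subject_condylar_width / generic_condylar_width
    centroid = generic_vertices.mean(axis=0)
    customised = centroid + s * (generic_vertices - centroid)  # uniform scaling
    print(f"scale factor = {s:.3f}")
    ```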

  15. Interpreting linear support vector machine models with heat map molecule coloring

    PubMed Central

    2011-01-01

    Background Model-based virtual screening plays an important role in the early drug discovery stage. The outcomes of high-throughput screenings are a valuable source for machine learning algorithms to infer such models. Besides a strong performance, the interpretability of a machine learning model is a desired property to guide the optimization of a compound in later drug discovery stages. Linear support vector machines have been shown to achieve convincing performance on large-scale data sets. The goal of this study is to present a heat map molecule coloring technique to interpret linear support vector machine models. Based on the weights of a linear model, the visualization approach colors each atom and bond of a compound according to its importance for activity. Results We evaluated our approach on a toxicity data set, a chromosome aberration data set, and the maximum unbiased validation data sets. The experiments show that our method sensibly visualizes structure-property and structure-activity relationships of a linear support vector machine model. The coloring of ligands in the binding pocket of several crystal structures of a maximum unbiased validation data set target indicates that our approach assists to determine the correct ligand orientation in the binding pocket. Additionally, the heat map coloring enables the identification of substructures important for the binding of an inhibitor. Conclusions In combination with heat map coloring, linear support vector machine models can help to guide the modification of a compound in later stages of drug discovery. Particularly substructures identified as important by our method might be a starting point for optimization of a lead compound. The heat map coloring should be considered as complementary to structure based modeling approaches. As such, it helps to get a better understanding of the binding mode of an inhibitor. PMID:21439031
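    The core idea, mapping the weights of a linear SVM back onto atoms, can be sketched as follows. The fingerprints, labels, and bit-to-atom mapping are hypothetical placeholders; in practice the mapping comes from the fingerprinting software (e.g. RDKit), not from this toy dictionary.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    # Train a linear SVM on binary substructure fingerprints, then colour each
    # atom by summing the weights of the fingerprint bits whose substructures
    # contain it. All data below are synthetic stand-ins.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(200, 64))     # toy fingerprints (200 compounds)
    y = rng.integers(0, 2, size=200)           # toy activity labels
    clf = LinearSVC(C=1.0, max_iter=10_000).fit(X, y)
    w = clf.coef_.ravel()                      # one weight per fingerprint bit

    # Hypothetical mapping for a 6-atom molecule: bits_on_atom[a] lists the set
    # bits whose substructure includes atom a.
    bits_on_atom = {0: [3, 17], 1: [3], 2: [42], 3: [17, 42], 4: [], 5: [9]}
    atom_score = {a: float(np.sum(w[bits])) for a, bits in bits_on_atom.items()}
    print(atom_score)  # positive = contributes to activity, negative = against
    ```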

  16. Numerical Technology for Large-Scale Computational Electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharpe, R; Champagne, N; White, D

    The key bottleneck of implicit computational electromagnetics tools for large complex geometries is the solution of the resulting linear system of equations. The goal of this effort was to research and develop critical numerical technology that alleviates this bottleneck for large-scale computational electromagnetics (CEM). The mathematical operators and numerical formulations used in this arena of CEM yield linear equations that are complex valued, unstructured, and indefinite. Also, simultaneously applying multiple mathematical modeling formulations to different portions of a complex problem (hybrid formulations) results in a mixed structure linear system, further increasing the computational difficulty. Typically, these hybrid linear systems are solved using a direct solution method, which was acceptable for Cray-class machines but does not scale adequately for ASCI-class machines. Additionally, LLNL's previously existing linear solvers were not well suited for the linear systems that are created by hybrid implicit CEM codes. Hence, a new approach was required to make effective use of ASCI-class computing platforms and to enable the next generation design capabilities. Multiple approaches were investigated, including the latest sparse-direct methods developed by our ASCI collaborators. In addition, approaches that combine domain decomposition (or matrix partitioning) with general-purpose iterative methods and special purpose pre-conditioners were investigated. Special-purpose pre-conditioners that take advantage of the structure of the matrix were adapted and developed based on intimate knowledge of the matrix properties. Finally, new operator formulations were developed that radically improve the conditioning of the resulting linear systems thus greatly reducing solution time. The goal was to enable the solution of CEM problems that are 10 to 100 times larger than our previous capability.
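    A small-scale sketch of the general strategy (a general-purpose iterative method plus a preconditioner informed by the matrix) is shown below with SciPy's GMRES and an incomplete-LU preconditioner on a toy complex-valued sparse system; this illustrates the approach, not LLNL's solvers.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Toy complex, unstructured, indefinite system (assumed stand-in for a CEM matrix).
    n = 2000
    rng = np.random.default_rng(0)
    A = sp.random(n, n, density=5e-3, format="csc", random_state=0).astype(complex)
    A = A + A.conj().T + 4j * sp.identity(n, dtype=complex, format="csc")
    b = rng.normal(size=n) + 1j * rng.normal(size=n)

    ilu = spla.spilu(A.tocsc(), drop_tol=1e-4, fill_factor=10)   # incomplete LU
    M = spla.LinearOperator(A.shape, ilu.solve, dtype=complex)   # M ~ A^-1

    x, info = spla.gmres(A, b, M=M, restart=50, maxiter=200)
    print("gmres info:", info,
          "| relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
    ```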

  17. Resolving Dynamic Properties of Polymers through Coarse-Grained Computational Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salerno, K. Michael; Agrawal, Anupriya; Perahia, Dvora

    2016-02-05

    Coupled length and time scales determine the dynamic behavior of polymers and underlie their unique viscoelastic properties. To resolve the long-time dynamics it is imperative to determine which time and length scales must be correctly modeled. In this paper, we probe the degree of coarse graining required to simultaneously retain significant atomistic details and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using linear polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics. Iterative Boltzmann inversion is used to derive coarse-grained potentials with 2–6 methylene groups per coarse-grained bead from a fully atomistic melt simulation. We show that atomistic detail is critical to capturing large-scale dynamics. Finally, using these models we simulate polyethylene melts for times over 500 μs to study the viscoelastic properties of well-entangled polymer melts.
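    Iterative Boltzmann inversion, the method named above, updates the coarse-grained pair potential from the mismatch between the current and target radial distribution functions: U_{i+1}(r) = U_i(r) + kT ln[g_i(r)/g_target(r)]. A minimal sketch with synthetic RDFs follows; the damping factor is a common stabilization, assumed here rather than taken from the paper.

    ```python
    import numpy as np

    kT = 2.494  # kJ/mol at 300 K

    def ibi_update(U, g_current, g_target, damping=0.2):
        """One damped IBI step; damping stabilizes the iteration."""
        mask = (g_current > 1e-6) & (g_target > 1e-6)   # avoid log(0) at small r
        dU = np.zeros_like(U)
        dU[mask] = kT * np.log(g_current[mask] / g_target[mask])
        return U + damping * dU

    r = np.linspace(0.2, 1.5, 131)                               # nm
    g_target = 1.0 + 0.5 * np.exp(-((r - 0.45) / 0.08) ** 2)     # toy target RDF
    U = -kT * np.log(g_target)               # initial guess: potential of mean force
    # In practice g_current comes from a CG simulation run with the current U;
    # here it is a synthetic placeholder.
    g_current = 1.0 + 0.4 * np.exp(-((r - 0.47) / 0.09) ** 2)
    U = ibi_update(U, g_current, g_target)
    ```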

  18. A Flight Dynamics Model for a Multi-Actuated Flexible Rocket Vehicle

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.

    2011-01-01

    A comprehensive set of motion equations for a multi-actuated flight vehicle is presented. The dynamics are derived from a vector approach that generalizes the classical linear perturbation equations for flexible launch vehicles into a coupled three-dimensional model. The effects of nozzle and aerosurface inertial coupling, sloshing propellant, and elasticity are incorporated without restrictions on the position, orientation, or number of model elements. The present formulation is well suited to matrix implementation for large-scale linear stability and sensitivity analysis and is also shown to be extensible to nonlinear time-domain simulation through the application of a special form of Lagrange's equations in quasi-coordinates. The model is validated through frequency-domain response comparison with a high-fidelity planar implementation.

  19. Decomposition of Time Scales in Linear Systems and Markovian Decision Processes.

    DTIC Science & Technology

    1980-11-01

    (Abstract not available; only an OCR fragment of the report's front matter survives. The recoverable table of contents indicates chapters on eigenstructure, the ordering of state variables, an example 8th-order power system model, and, in Chapter 3, the time-scale decomposition of singularly perturbed systems.)

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, Benjamin; Koyama, Kazuya, E-mail: benjamin.bose@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk

    We develop a code to produce the power spectrum in redshift space based on standard perturbation theory (SPT) at 1-loop order. The code can be applied to a wide range of modified gravity and dark energy models using a recently proposed numerical method by A. Taruya to find the SPT kernels. This includes Horndeski's theory with a general potential, which accommodates both chameleon and Vainshtein screening mechanisms and provides a non-linear extension of the effective theory of dark energy up to the third order. Focus is on a recent non-linear model of the redshift space power spectrum which has been shown to model the anisotropy very well at relevant scales for the SPT framework, as well as capturing relevant non-linear effects typical of modified gravity theories. We provide consistency checks of the code against established results and elucidate its application in the light of upcoming high precision RSD data.

  1. LASSIM-A network inference toolbox for genome-wide mechanistic modeling.

    PubMed

    Magnusson, Rasmus; Mariotti, Guido Pio; Köpsén, Mattias; Lövfors, William; Gawel, Danuta R; Jörnsten, Rebecka; Linde, Jörg; Nordling, Torbjörn E M; Nyman, Elin; Schulze, Sylvie; Nestor, Colm E; Zhang, Huan; Cedersund, Gunnar; Benson, Mikael; Tjärnberg, Andreas; Gustafsson, Mika

    2017-06-01

    Recent technological advancements have made time-resolved, quantitative, multi-omics data available for many model systems, which could be integrated for systems pharmacokinetic use. Here, we present large-scale simulation modeling (LASSIM), which is a novel mathematical tool for performing large-scale inference using mechanistically defined ordinary differential equations (ODE) for gene regulatory networks (GRNs). LASSIM integrates structural knowledge about regulatory interactions and non-linear equations with multiple steady state and dynamic response expression datasets. The rationale behind LASSIM is that biological GRNs can be simplified using a limited subset of core genes that are assumed to regulate all other gene transcription events in the network. The LASSIM method is implemented as a general-purpose toolbox using the PyGMO Python package to make the most of multicore computers and high performance clusters, and is available at https://gitlab.com/Gustafsson-lab/lassim. As a method, LASSIM works in two steps, where it first infers a non-linear ODE system of the pre-specified core gene expression. Second, LASSIM in parallel optimizes the parameters that model the regulation of peripheral genes by core system genes. We showed the usefulness of this method by applying LASSIM to infer a large-scale non-linear model of naïve Th2 cell differentiation, made possible by integrating Th2-specific bindings and time-series data together with six public and six novel siRNA-mediated knock-down experiments. ChIP-seq showed significant overlap for all tested transcription factors. Next, we performed novel time-series measurements of total T-cells during differentiation towards Th2 and verified that our LASSIM model could monitor those data significantly better than comparable models that used the same Th2 bindings. In summary, the LASSIM toolbox opens the door to a new type of model-based data analysis that combines the strengths of reliable mechanistic models with truly systems-level data. We demonstrate the power of this approach by inferring a mechanistically motivated, genome-wide model of the Th2 transcription regulatory system, which plays an important role in several immune-related diseases.
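    The first step of the two-step idea, fitting a non-linear ODE system for core genes to expression data, can be illustrated in miniature. The sigmoidal regulation function, network size, and data below are toy assumptions, not LASSIM's actual formulation or optimizer.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    # Toy core-gene ODE: dx/dt = sigmoid(W x + b) - deg * x, fit by least squares.
    rng = np.random.default_rng(0)
    n, t_obs = 3, np.linspace(0, 10, 20)
    x_data = rng.uniform(0.2, 1.0, size=(t_obs.size, n))   # placeholder expression data

    def rhs(t, x, W, b, deg):
        return 1.0 / (1.0 + np.exp(-(W @ x + b))) - deg * x

    def residuals(theta):
        W = theta[:n * n].reshape(n, n)
        b, deg = theta[n * n:n * n + n], np.abs(theta[-n:])  # degradation kept positive
        sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), x_data[0], t_eval=t_obs,
                        args=(W, b, deg), rtol=1e-6)
        return (sol.y.T - x_data).ravel()

    theta0 = np.concatenate([rng.normal(0, 0.5, n * n), np.zeros(n), np.ones(n)])
    fit = least_squares(residuals, theta0, max_nfev=200)
    print("residual norm:", np.linalg.norm(fit.fun))
    ```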

  2. Large scale structures in the kinetic gravity braiding model that can be unbraided

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimura, Rampei; Yamamoto, Kazuhiro, E-mail: rampei@theo.phys.sci.hiroshima-u.ac.jp, E-mail: kazuhiro@hiroshima-u.ac.jp

    2011-04-01

    We study cosmological consequences of a kinetic gravity braiding model, which is proposed as an alternative to the dark energy model. The kinetic braiding model we study is characterized by a parameter n, which corresponds to the original galileon cosmological model for n = 1. We find that the background expansion of the universe of the kinetic braiding model is the same as that of the Dvali-Turner model, which reduces to that of the standard cold dark matter model with a cosmological constant (ΛCDM model) for n equal to infinity. We also find that the evolution of the linear cosmological perturbation in the kinetic braiding model reduces to that of the ΛCDM model for n = ∞. Then, we focus our study on the growth history of the linear density perturbation as well as the spherical collapse in the nonlinear regime of the density perturbations, which might be important in order to distinguish between the kinetic braiding model and the ΛCDM model when n is finite. The theoretical prediction for the large scale structure is confronted with the multipole power spectrum of the luminous red galaxy sample of the Sloan Digital Sky Survey. We also discuss future prospects of constraining the kinetic braiding model using a future redshift survey like the WFMOS/SuMIRe PFS survey as well as the cluster redshift distribution in the South Pole Telescope survey.

  3. Forced and intrinsic variability in the response to increased wind stress of an idealized Southern Ocean

    NASA Astrophysics Data System (ADS)

    Wilson, Chris; Hughes, Chris W.; Blundell, Jeffrey R.

    2015-01-01

    We use ensemble runs of a three-layer, quasi-geostrophic idealized Southern Ocean model to explore the roles of forced and intrinsic variability in response to a linear increase of wind stress imposed over a 30-year period. We find no increase of eastward circumpolar volume transport in response to the increased wind stress. A large part of the resulting time series can be explained by a response in which the eddy kinetic energy is linearly proportional to the wind stress with a possible time lag, but no statistically significant lag is found. However, this simple relationship is not the whole story: several intrinsic time scales also influence the response. We find an e-folding time scale for growth of small perturbations of 1-2 weeks. The energy budget for intrinsic variability at periods shorter than a year is dominated by exchange between kinetic and potential energy. At longer time scales, we find an intrinsic mode with period in the region of 15 years, which is dominated by changes in potential energy and frictional dissipation in a manner consistent with that seen by Hogg and Blundell (2006). A similar mode influences the response to changing wind stress. This influence, robust to perturbations, is different from the supposed linear relationship between wind stress and eddy kinetic energy, and persists for 5-10 years in this model, suggestive of a forced oscillatory mode with period of around 15 years. If present in the real ocean, such a mode would imply a degree of predictability of Southern Ocean dynamics on multiyear time scales.

  4. Networked dynamical systems with linear coupling: synchronisation patterns, coherence and other behaviours.

    PubMed

    Judd, Kevin

    2013-12-01

    Many physical and biochemical systems are well modelled as a network of identical non-linear dynamical elements with linear coupling between them. An important question is how network structure affects chaotic dynamics, for example, by patterns of synchronisation and coherence. It is shown that small networks can be characterised precisely into patterns of exact synchronisation and large networks characterised by partial synchronisation at the local and global scale. Exact synchronisation modes are explained using tools of symmetry groups and invariance, and partial synchronisation is explained by finite-time shadowing of exact synchronisation modes.

  5. Non-Susceptible Landslide Areas in Italy and in the Mediterranean Region

    NASA Astrophysics Data System (ADS)

    Alvioli, Massimiliano; Ardizzone, Francesca; Guzzetti, Fausto; Marchesini, Ivan; Rossi, Mauro

    2014-05-01

    Landslide susceptibility is the likelihood of a landslide occurring in a given area. Over the past three decades, researchers and planning and environmental organisations have worked to assess landslide susceptibility at different geographical scales, and to produce maps portraying landslide susceptibility zonation. Little effort was made to determine where landslides are not expected, where susceptibility is null or negligible. This is surprising because planners and decision makers are also interested in knowing where landslides are not foreseen, or cannot occur, in an area. We propose a method for the definition of non-susceptible landslide areas, at the synoptic scale. We applied the method in Italy and to the territory surrounding the Mediterranean Sea, and we produced two synoptic-scale maps showing areas where landslides are not expected in Italy and in the Mediterranean area. To construct the method we used digital terrain elevation and landslide information. The digital terrain data consisted of the 3-arc-second SRTM DEM; the landslide information was obtained for 13 areas in Italy where landslide inventory maps were available to us. We tested three different models to determine the non-susceptible landslide areas, including a linear model (LR), a quantile linear model (QLR), and a quantile non-linear model (QNL). Model performance was evaluated using independent landslide information represented by the Italian Landslide Inventory (Inventario Fenomeni Franosi in Italia - IFFI). Best results were obtained using the QNL model. The corresponding zonation of non-susceptible landslide areas was intersected in a GIS with geographical census data for Italy. The results show that 57.5% of the population of Italy (in 2001) was located in areas where landslide susceptibility was expected to be null or negligible, while the remaining 42.5% was located in areas where landslide susceptibility was significant or not negligible. We applied the QNL model to the landmasses surrounding the Mediterranean Sea, and we tested the synoptic non-susceptibility zonation using independent landslide information for three study areas in Spain. Results proved that the QNL model was capable of determining where landslide susceptibility is expected to be negligible in the Mediterranean area. We expect our results to be applicable in similar study areas, facilitating the identification of non-susceptible and susceptible landslide areas, at the synoptic scale.
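    A quantile non-linear (QNL) model of the kind named can be sketched with statsmodels' QuantReg; the predictors and the 95th-percentile choice below are illustrative stand-ins for the paper's terrain attributes and fitted quantile, and the data are synthetic, not the SRTM/IFFI datasets.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Toy quantile regression: fit a high-quantile envelope of relief vs slope
    # (with a quadratic term as the "non-linear" part of the QNL model).
    rng = np.random.default_rng(0)
    slope = rng.uniform(0, 40, 2000)                        # degrees
    relief = 2.0 * slope + rng.gamma(2.0, 5.0, slope.size)  # toy terrain response

    X = sm.add_constant(slope)
    exog = np.column_stack([X, slope**2])                   # constant, linear, quadratic
    res = sm.QuantReg(relief, exog).fit(q=0.95)             # 95th-percentile envelope
    print(res.params)   # intercept, linear, quadratic coefficients
    ```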

  6. Scalability of the muscular action in a parametric 3D model of the index finger.

    PubMed

    Sancho-Bru, Joaquín L; Vergara, Margarita; Rodríguez-Cervantes, Pablo-Jesús; Giurintano, David J; Pérez-González, Antonio

    2008-01-01

    A method for scaling the muscle action is proposed and used to achieve a 3D inverse dynamic model of the human finger with all its components scalable. This method is based on scaling the physiological cross-sectional area (PCSA) in a Hill muscle model. Different anthropometric parameters and maximal grip force data have been measured, and their correlations have been analyzed and used for scaling the PCSA of each muscle. A linear relationship between the normalized PCSA and the product of the length and breadth of the hand was finally used for scaling, with a slope of 0.01315 cm^-2, with the length and breadth of the hand expressed in centimeters. The parametric muscle model has been included in a parametric finger model previously developed by the authors, and it has been validated by reproducing the results of an experiment in which subjects from different population groups exerted maximal voluntary forces with their index finger in a controlled posture.
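    One plausible reading of the reported regression is sketched below; this is hedged, since the abstract does not spell out how the PCSA is normalized, and the example hand dimensions are invented.

    ```python
    # Assumed interpretation: subject PCSA = reference PCSA * slope * HL * HB,
    # with slope = 0.01315 cm^-2 and hand length/breadth (HL, HB) in cm.
    def scaled_pcsa(pcsa_ref_cm2, hand_length_cm, hand_breadth_cm, slope=0.01315):
        return pcsa_ref_cm2 * slope * hand_length_cm * hand_breadth_cm

    # Hypothetical example: 19 cm hand length, 8.5 cm hand breadth.
    print(scaled_pcsa(pcsa_ref_cm2=1.0, hand_length_cm=19.0, hand_breadth_cm=8.5))
    ```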

  7. One-equation near-wall turbulence modeling with the aid of direct simulation data

    NASA Technical Reports Server (NTRS)

    Rodi, W.; Mansour, N. N.

    1990-01-01

    The length scales appearing in the relations for the eddy viscosity and dissipation rate in one-equation models were evaluated from direct numerical simulation data for developed channel and boundary-layer flow at two Reynolds numbers each. To prepare the ground for the evaluation, the distribution of the most relevant mean-flow and turbulence quantities is presented and discussed with respect to Reynolds-number influence and to differences between channel and boundary-layer flow. An alternative model is also examined in which the rms wall-normal fluctuation (v'^2)^(1/2) is used as the velocity scale instead of k^(1/2). With this velocity scale, the length scales now appearing in the model follow very closely a linear relationship near the wall, so that no damping is necessary. For the determination of v'^2 in the context of a one-equation model, a correlation is provided between v'^2/k and u'v'/k.

  8. Power Laws, Scale Invariance and the Generalized Frobenius Series:

    NASA Astrophysics Data System (ADS)

    Visser, Matt; Yunes, Nicolas

    We present a self-contained formalism for calculating the background solution, the linearized solutions and a class of generalized Frobenius-like solutions to a system of scale-invariant differential equations. We first cast the scale-invariant model into its equidimensional and autonomous forms, find its fixed points, and then obtain power-law background solutions. After linearizing about these fixed points, we find a second linearized solution, which provides a distinct collection of power laws characterizing the deviations from the fixed point. We prove that generically there will be a region surrounding the fixed point in which the complete general solution can be represented as a generalized Frobenius-like power series with exponents that are integer multiples of the exponents arising in the linearized problem. While discussions of the linearized system are common, and one can often find a discussion of power series with integer exponents, power series with irrational (indeed complex) exponents are much rarer in the extant literature. The Frobenius-like series we encounter can be viewed as a variant of the rarely-discussed Liapunov expansion theorem (not to be confused with the more commonly encountered Liapunov functions and Liapunov exponents). As specific examples we apply these ideas to Newtonian and relativistic isothermal stars and construct two separate power series with overlapping radii of convergence. The second of these power series solutions represents an expansion around "spatial infinity," and in realistic models it is this second power series that gives information about the stellar core, and the damped oscillations in core mass and core radius as the central pressure goes to infinity. The power-series solutions we obtain extend classical results; as exemplified by the work of Lane, Emden, and Chandrasekhar in the Newtonian case, and that of Harrison, Thorne, Wakano, and Wheeler in the relativistic case. We also indicate how to extend these ideas to situations where fixed points may not exist — either due to "monotone" flow or due to the presence of limit cycles. Monotone flow generically leads to logarithmic deviations from scaling, while limit cycles generally lead to discrete self-similar solutions.
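    The fixed-point behaviour described above can be checked numerically for the Newtonian isothermal star: integrating the isothermal Lane-Emden equation shows the density approaching the scale-invariant power law ρ ∝ r⁻², i.e. x²ρ → 2. The sketch below is a standard textbook integration, not the authors' formalism.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Isothermal Lane-Emden equation: psi'' + (2/x) psi' = exp(-psi),
    # with psi(0) = psi'(0) = 0; at large x, exp(-psi) -> 2/x^2.
    def rhs(x, y):
        psi, dpsi = y
        return [dpsi, np.exp(-psi) - 2.0 * dpsi / x]

    x = np.linspace(1e-6, 100.0, 10_000)
    sol = solve_ivp(rhs, (x[0], x[-1]), [0.0, 0.0], t_eval=x, rtol=1e-9)
    rho = np.exp(-sol.y[0])                   # density in units of the central value
    print("x^2 * rho at large x ->", (x[-1] ** 2) * rho[-1],
          "(oscillates toward the fixed-point value 2)")
    ```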

  9. Unique attributes of cyanobacterial metabolism revealed by improved genome-scale metabolic modeling and essential gene analysis

    DOE PAGES

    Broddrick, Jared T.; Rubin, Benjamin E.; Welkie, David G.; ...

    2016-12-20

    The model cyanobacterium, Synechococcus elongatus PCC 7942, is a genetically tractable obligate phototroph that is being developed for the bioproduction of high-value chemicals. Genome-scale models (GEMs) have been successfully used to assess and engineer cellular metabolism; however, GEMs of phototrophic metabolism have been limited by the lack of experimental datasets for model validation and the challenges of incorporating photon uptake. In this paper, we develop a GEM of metabolism in S. elongatus using random barcode transposon site sequencing (RB-TnSeq) essential gene and physiological data specific to photoautotrophic metabolism. The model explicitly describes photon absorption and accounts for shading, resulting in the characteristic linear growth curve of photoautotrophs. GEM predictions of gene essentiality were compared with data obtained from recent dense-transposon mutagenesis experiments. This dataset allowed major improvements to the accuracy of the model. Furthermore, discrepancies between GEM predictions and the in vivo dataset revealed biological characteristics, such as the importance of a truncated, linear TCA pathway, low flux toward amino acid synthesis from photorespiration, and knowledge gaps within nucleotide metabolism. Finally, coupling of strong experimental support and photoautotrophic modeling methods resulted in a highly accurate model of S. elongatus metabolism that highlights previously unknown areas of S. elongatus biology.
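    GEM predictions of the kind described rest on flux balance analysis, a linear program over steady-state fluxes. A toy three-reaction sketch follows, with a capped photon-uptake flux standing in for light absorption; it is not the S. elongatus model, and the stoichiometry is invented.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Flux balance analysis: maximize growth subject to S v = 0 and flux bounds.
    #          v_photon  v_fix  v_growth
    S = np.array([[ 1.0,  -1.0,   0.0],    # "energy" balance
                  [ 0.0,   1.0,  -1.0]])   # "biomass precursor" balance
    c = np.array([0.0, 0.0, -1.0])         # linprog minimizes, so -growth
    bounds = [(0, 10.0),                   # photon uptake capped (light/shading limit)
              (0, None), (0, None)]

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print("max growth flux:", res.x[2])    # limited by the photon bound -> 10
    ```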

  12. Parametric resonance in the early Universe—a fitting analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueroa, Daniel G.; Torrentí, Francisco, E-mail: daniel.figueroa@cern.ch, E-mail: f.torrenti@csic.es

    Particle production via parametric resonance in the early Universe is a non-perturbative, non-linear and out-of-equilibrium phenomenon. Although it is a well-studied topic, whenever a new scenario exhibits parametric resonance, a full re-analysis is normally required. To avoid this tedious task, many works often present only a simplified linear treatment of the problem. In order to surpass this circumstance in the future, we provide a fitting analysis of parametric resonance through all its relevant stages: initial linear growth, non-linear evolution, and relaxation towards equilibrium. Using lattice simulations in an expanding grid in 3+1 dimensions, we parametrize the dynamics' outcome, scanning over the relevant ingredients: role of the oscillatory field, particle coupling strength, initial conditions, and background expansion rate. We emphasize the inaccuracy of the linear calculation of the decay time of the oscillatory field, and propose a more appropriate definition of this scale based on the subsequent non-linear dynamics. We provide simple fits to the relevant time scales and particle energy fractions at each stage. Our fits can be applied to post-inflationary preheating scenarios, where the oscillatory field is the inflaton, or to spectator-field scenarios, where the oscillatory field can be e.g. a curvaton, or the Standard Model Higgs.
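    The "initial linear growth" stage corresponds to Floquet instability of a Mathieu-like mode equation; the sketch below estimates an effective Floquet exponent for illustrative band parameters (not values from the paper). As the abstract stresses, this linear estimate says nothing about the later non-linear stages.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Mode equation chi'' + (kappa2 + q*cos(2t)) chi = 0; parameters chosen
    # inside the first resonance band (kappa2 near 1) for illustration.
    kappa2, q = 1.0, 0.5

    def rhs(t, y):
        chi, dchi = y
        return [dchi, -(kappa2 + q * np.cos(2 * t)) * chi]

    T = 40 * np.pi                            # 40 oscillation periods (period pi)
    sol = solve_ivp(rhs, (0, T), [1e-6, 0.0], rtol=1e-9, atol=1e-12)
    chi_end = np.hypot(*sol.y[:, -1])
    mu = np.log(chi_end / 1e-6) / T           # effective Floquet exponent
    print(f"Floquet exponent mu ~ {mu:.3f}")
    ```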

  13. Skin Friction Reduction Through Large-Scale Forcing

    NASA Astrophysics Data System (ADS)

    Bhatt, Shibani; Artham, Sravan; Gnanamanickam, Ebenezer

    2017-11-01

    Flow structures in a turbulent boundary layer larger than an integral length scale (δ), referred to as large-scales, interact with the finer scales in a non-linear manner. By targeting these large-scales and exploiting this non-linear interaction, wall shear stress (WSS) reduction of over 10% has been achieved. The plane wall jet (PWJ), a boundary layer which has highly energetic large-scales that become turbulent independently of the near-wall finer scales, is the chosen model flow field. Its unique configuration allows for the independent control of the large-scales through acoustic forcing. Perturbation wavelengths from about 1 δ to 14 δ were considered, with a reduction in WSS for all wavelengths considered. This reduction, over a large subset of the wavelengths, scales with both inner and outer variables, indicating a mixed scaling of the underlying physics, while also showing dependence on the PWJ global properties. A triple decomposition of the velocity fields shows an increase in coherence due to forcing, with a clear organization of the small-scale turbulence with respect to the introduced large-scale. The maximum reduction in WSS occurs when the introduced large-scale acts in a manner so as to reduce the turbulent activity in the very near-wall region. This material is based upon work supported by the Air Force Office of Scientific Research under Award Number FA9550-16-1-0194, monitored by Dr. Douglas Smith.

  14. Magnitude and sign of long-range correlated time series: Decomposition and surrogate signal generation.

    PubMed

    Gómez-Extremera, Manuel; Carpena, Pedro; Ivanov, Plamen Ch; Bernaola-Galván, Pedro A

    2016-04-01

    We systematically study the scaling properties of the magnitude and sign of the fluctuations in correlated time series, which is a simple and useful approach to distinguish between systems with different dynamical properties but the same linear correlations. First, we decompose artificial long-range power-law linearly correlated time series into magnitude and sign series derived from the consecutive increments in the original series, and we study their correlation properties. We find analytical expressions for the correlation exponent of the sign series as a function of the exponent of the original series. Such expressions are necessary for modeling surrogate time series with desired scaling properties. Next, we study linear and nonlinear correlation properties of series composed as products of independent magnitude and sign series. These surrogate series can be considered as a zero-order approximation to the analysis of the coupling of magnitude and sign in real data, a problem still open in many fields. We find analytical results for the scaling behavior of the composed series as a function of the correlation exponents of the magnitude and sign series used in the composition, and we determine the ranges of magnitude and sign correlation exponents leading to either single scaling or to crossover behaviors. Finally, we obtain how the linear and nonlinear properties of the composed series depend on the correlation exponents of their magnitude and sign series. Based on this information we propose a method to generate surrogate series with controlled correlation exponent and multifractal spectrum.
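    The decomposition itself is elementary; the sketch below splits a series' increments into sign and magnitude components and builds a zero-order surrogate as their independent product. Shuffling is used here as the simplest way to destroy the magnitude-sign coupling; the paper's surrogates control the correlation exponents instead.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.cumsum(rng.normal(size=10_000))   # toy series (random walk placeholder)

    dx = np.diff(x)
    sign = np.sign(dx)                       # sign series of the increments
    mag = np.abs(dx)                         # magnitude series of the increments

    # Zero-order surrogate: product of independently permuted magnitude and sign.
    surrogate = np.cumsum(rng.permutation(mag) * rng.permutation(sign))
    print(sign[:5], mag[:5])
    ```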

  15. General job stress: a unidimensional measure and its non-linear relations with outcome variables.

    PubMed

    Yankelevich, Maya; Broadfoot, Alison; Gillespie, Jennifer Z; Gillespie, Michael A; Guidroz, Ashley

    2012-04-01

    This article aims to examine the non-linear relations between a general measure of job stress [Stress in General (SIG)] and two outcome variables: intentions to quit and job satisfaction. In so doing, we also re-examine the factor structure of the SIG and determine that, as a two-factor scale, it obscures non-linear relations with outcomes. Thus, in this research, we not only test for non-linear relations between stress and outcome variables but also present an updated version of the SIG scale. Using two distinct samples of working adults (sample 1, N = 589; sample 2, N = 4322), results indicate that a more parsimonious eight-item SIG has better model-data fit than the 15-item two-factor SIG and that the eight-item SIG has non-linear relations with job satisfaction and intentions to quit. Specifically, the revised SIG has an inverted curvilinear J-shaped relation with job satisfaction such that job satisfaction drops precipitously after a certain level of stress; the SIG has a J-shaped curvilinear relation with intentions to quit such that turnover intentions increase exponentially after a certain level of stress. Copyright © 2011 John Wiley & Sons, Ltd.

  16. Topology of large-scale structure in seeded hot dark matter models

    NASA Technical Reports Server (NTRS)

    Beaky, Matthew M.; Scherrer, Robert J.; Villumsen, Jens V.

    1992-01-01

    The topology of the isodensity surfaces in seeded hot dark matter models, in which static seed masses provide the density perturbations in a universe dominated by massive neutrinos, is examined. When smoothed with a Gaussian window, the linear initial conditions in these models show no trace of non-Gaussian behavior for r0 equal to or greater than 5 Mpc (h = 1/2), except for very low seed densities, which show a shift toward isolated peaks. An approximate analytic expression is given for the genus curve expected in linear density fields from randomly distributed seed masses. The evolved models have a Gaussian topology for r0 = 10 Mpc, but show a shift toward a cellular topology with r0 = 5 Mpc; Gaussian models with an identical power spectrum show the same behavior.

  17. Scaling dimensions in spectroscopy of soil and vegetation

    NASA Astrophysics Data System (ADS)

    Malenovský, Zbyněk; Bartholomeus, Harm M.; Acerbi-Junior, Fausto W.; Schopfer, Jürg T.; Painter, Thomas H.; Epema, Gerrit F.; Bregt, Arnold K.

    2007-05-01

The paper revises and clarifies definitions of the term scale and scaling conversions for imaging spectroscopy of soil and vegetation. We demonstrate a new four-dimensional scale concept that includes not only the spatial but also the spectral, directional and temporal components. Three scaling remote sensing techniques are reviewed: (1) radiative transfer, (2) spectral (un)mixing, and (3) data fusion. Relevant case studies are given in the context of their up- and/or down-scaling abilities over soil/vegetation surfaces, and a multi-source approach is proposed for their integration. Radiative transfer (RT) models are described to show their capacity for spatial and spectral up-scaling, and directional down-scaling, within a heterogeneous environment. Spectral information and spectral derivatives, like vegetation indices (e.g. TCARI/OSAVI), can be scaled, and even tested, by means of these models. Radiative transfer of an experimental Norway spruce (Picea abies (L.) Karst.) research plot in the Czech Republic was simulated by the Discrete Anisotropic Radiative Transfer (DART) model to demonstrate the relevance of correct object optical properties scaled up to image data at two different spatial resolutions. The interconnection of the successive modelling levels in vegetation is shown. A future development in measurement and simulation of the leaf directional spectral properties is discussed. We describe linear and/or non-linear spectral mixing techniques and unmixing methods that demonstrate spatial down-scaling. The relevance of proper selection or acquisition of the spectral endmembers using spectral libraries, field measurements, and pure pixels of the hyperspectral image is highlighted. An extensive list of advanced unmixing techniques, a particular example of unmixing a reflective optics system imaging spectrometer (ROSIS) image from Spain, and examples of other mixture applications give insight into the present status of scaling capabilities. Simultaneous spatial and temporal down-scaling by means of a data fusion technique is described. A demonstrative example is given for the moderate resolution imaging spectroradiometer (MODIS) and LANDSAT Thematic Mapper (TM) data from Brazil. Corresponding spectral bands of both sensors were fused via a pyramidal wavelet transform in Fourier space. The new spectral and temporal information of the resultant image can be used for thematic classification or qualitative mapping. All three described scaling techniques can be integrated as relevant methodological steps within a complex multi-source approach. We present this concept of combining numerous optical remote sensing data and methods to generate inputs for ecosystem process models.

  18. Acoustic Treatment Design Scaling Methods. Volume 2; Advanced Treatment Impedance Models for High Frequency Ranges

    NASA Technical Reports Server (NTRS)

    Kraft, R. E.; Yu, J.; Kwan, H. W.

    1999-01-01

The primary purpose of this study is to develop improved models for the acoustic impedance of treatment panels at high frequencies, for application to subscale treatment designs. Effects that cause significant deviation of the impedance from simple geometric scaling are examined in detail, an improved high-frequency impedance model is developed, and the improved model is correlated with high-frequency impedance measurements. Only single-degree-of-freedom honeycomb sandwich resonator panels with either perforated sheet or "linear" wiremesh faceplates are considered. The objective is to understand those effects that cause the simple single-degree-of-freedom resonator panels to deviate at the higher-scaled frequency from the impedance that would be obtained at the corresponding full-scale frequency. This will allow the subscale panel to be designed to achieve a specified impedance spectrum over at least a limited range of frequencies. An advanced impedance prediction model has been developed that accounts for some of the known effects at high frequency that have previously been ignored as a small source of error for full-scale frequency ranges.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lott, P. Aaron; Woodward, Carol S.; Evans, Katherine J.

Performing accurate and efficient numerical simulation of global atmospheric climate models is challenging due to the disparate length and time scales over which physical processes interact. Implicit solvers enable the physical system to be integrated with a time step commensurate with the processes being studied. The dominant cost of an implicit time step is the ancillary linear system solves, so we have developed a preconditioner aimed at improving the efficiency of these linear system solves. Our preconditioner is based on an approximate block factorization of the linearized shallow-water equations and has been implemented within the spectral element dynamical core of the Community Atmospheric Model (CAM-SE). Furthermore, in this paper we discuss the development and scalability of the preconditioner for a suite of test cases with the implicit shallow-water solver within CAM-SE.
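
    The factorization itself is not given in the abstract, so the sketch below is only a generic illustration of the idea: a preconditioner built from the diagonal blocks of a 2x2 block system, applied inside a Krylov solve with SciPy. All matrices and sizes are invented placeholders, not the CAM-SE operators:

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 200  # toy problem size, standing in for the shallow-water Jacobian
        A = sp.diags([-0.5, 2.0, -0.5], [-1, 0, 1], shape=(n, n))
        B = 0.1 * sp.eye(n)
        D = sp.diags([3.0] * n)
        K = sp.bmat([[A, B], [B.T, D]]).tocsr()

        # Approximate block factorization: drop the off-diagonal coupling and
        # solve the two diagonal blocks exactly (a block-Jacobi approximation).
        A_lu, D_lu = spla.splu(A.tocsc()), spla.splu(D.tocsc())

        def apply_prec(r):
            return np.concatenate([A_lu.solve(r[:n]), D_lu.solve(r[n:])])

        M = spla.LinearOperator(K.shape, matvec=apply_prec)
        x, info = spla.gmres(K, np.ones(2 * n), M=M)
        assert info == 0  # converged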

  20. Models for short-wave instability in inviscid shear flows

    NASA Astrophysics Data System (ADS)

    Grimshaw, Roger

    1999-11-01

The generation of instability in an inviscid fluid occurs by a resonance between two wave modes, where the resonance occurs through a coincidence of phase speeds at a finite, non-zero wavenumber. We show that in the weakly nonlinear limit, the appropriate model consists of two coupled equations for the envelopes of the wave modes, in which the nonlinear terms are balanced by low-order cross-coupling linear dispersive terms rather than the more familiar high-order terms which arise in the nonlinear Schrödinger equation, for instance. We will show that this system may either support gap solitons as solutions in the linearly stable case, or wave breakdown in the linearly unstable case. In this latter circumstance, the system either exhibits wave collapse in finite time, or disintegration into fine-scale structures.

  1. Spatial and Temporal Scaling of Thermal Infrared Remote Sensing Data

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Goel, Narendra S.

    1995-01-01

Although remote sensing has a central role to play in the acquisition of synoptic data obtained at multiple spatial and temporal scales to facilitate our understanding of local and regional processes as they influence the global climate, the use of thermal infrared (TIR) remote sensing data in this capacity has received only minimal attention. This results from some fundamental challenges that are associated with employing TIR data collected at different space and time scales, either with the same or different sensing systems, and also from other problems that arise in applying a multiple scaled approach to the measurement of surface temperatures. In this paper, we describe some of the more important problems associated with using TIR remote sensing data obtained at different spatial and temporal scales, examine why these problems appear as impediments to using multiple scaled TIR data, and provide some suggestions for future research activities that may address these problems. We elucidate the fundamental concept of scale as it relates to remote sensing and explore how space and time relationships affect TIR data from a problem-dependency perspective. We also describe how linearity and non-linearity in observation-versus-parameter relationships affect the quantitative analysis of TIR data. Some insight is given on how the atmosphere between target and sensor influences the accurate measurement of surface temperatures and how these effects will be compounded in analyzing multiple scaled TIR data. Last, we describe some of the challenges in modeling TIR data obtained at different space and time scales and discuss how multiple scaled TIR data can be used to provide new and important information for measuring and modeling land-atmosphere energy balance processes.

  2. Inclusion of Linearized Moist Physics in Nasa's Goddard Earth Observing System Data Assimilation Tools

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Errico, Ronald; Gelaro, Ronaldo; Kim, Jong G.

    2013-01-01

    Inclusion of moist physics in the linearized version of a weather forecast model is beneficial in terms of variational data assimilation. Further, it improves the capability of important tools, such as adjoint-based observation impacts and sensitivity studies. A linearized version of the relaxed Arakawa-Schubert (RAS) convection scheme has been developed and tested in NASA's Goddard Earth Observing System data assimilation tools. A previous study of the RAS scheme showed it to exhibit reasonable linearity and stability. This motivates the development of a linearization of a near-exact version of the RAS scheme. Linearized large-scale condensation is included through simple conversion of supersaturation into precipitation. The linearization of moist physics is validated against the full nonlinear model for 6- and 24-h intervals, relevant to variational data assimilation and observation impacts, respectively. For a small number of profiles, sudden large growth in the perturbation trajectory is encountered. Efficient filtering of these profiles is achieved by diagnosis of steep gradients in a reduced version of the operator of the tangent linear model. With filtering turned on, the inclusion of linearized moist physics increases the correlation between the nonlinear perturbation trajectory and the linear approximation of the perturbation trajectory. A month-long observation impact experiment is performed and the effect of including moist physics on the impacts is discussed. Impacts from moist-sensitive instruments and channels are increased. The effect of including moist physics is examined for adjoint sensitivity studies. A case study examining an intensifying Northern Hemisphere Atlantic storm is presented. The results show a significant sensitivity with respect to moisture.

  3. Cosmological structure formation in Decaying Dark Matter models

    NASA Astrophysics Data System (ADS)

    Cheng, Dalong; Chu, M.-C.; Tang, Jiayu

    2015-07-01

The standard cold dark matter (CDM) model predicts too many and too dense small structures. We consider an alternative model in which the dark matter undergoes two-body decays, with cosmological lifetime τ, into only one type of massive daughter with non-relativistic recoil velocity Vk. This decaying dark matter model (DDM) can suppress structure formation below its free-streaming scale at time scales comparable to τ. Compared with warm dark matter (WDM), DDM can better reduce the small structures while remaining consistent with high redshift observations. We study cosmological structure formation in DDM by performing self-consistent N-body simulations and point out that cosmological simulations are necessary to understand the DDM structures, especially on non-linear scales. We propose empirical fitting functions for the DDM suppression of the mass function and the concentration-mass relation, which depend on the decay parameters: lifetime τ, recoil velocity Vk and redshift. The fitting functions lead to accurate reconstruction of the non-linear power transfer function of DDM to CDM in the framework of the halo model. Using these results, we set constraints on the DDM parameter space by demanding that DDM does not induce larger suppression than the Lyman-α constrained WDM models. We further generalize and constrain the DDM models to initial conditions with non-trivial mother fractions and show that the halo model predictions are still valid after considering a global decayed fraction. Finally, we point out that DDM is unlikely to resolve the disagreement on cluster numbers between the Planck primary CMB prediction and the Sunyaev-Zeldovich (SZ) effect number count for τ ~ H0^-1.

  4. How does non-linear dynamics affect the baryon acoustic oscillation?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugiyama, Naonori S.; Spergel, David N., E-mail: nao.s.sugiyama@gmail.com, E-mail: dns@astro.princeton.edu

    2014-02-01

We study the non-linear behavior of the baryon acoustic oscillation in the power spectrum and the correlation function by decomposing the dark matter perturbations into the short- and long-wavelength modes. The evolution of the dark matter fluctuations can be described as a global coordinate transformation caused by the long-wavelength displacement vector acting on short-wavelength matter perturbation undergoing non-linear growth. Using this feature, we investigate the well known cancellation of the high-k solutions in the standard perturbation theory. While the standard perturbation theory naturally satisfies the cancellation of the high-k solutions, some of the recently proposed improved perturbation theories do not guarantee the cancellation. We show that this cancellation clarifies the success of the standard perturbation theory at the 2-loop order in describing the amplitude of the non-linear power spectrum even at high-k regions. We propose an extension of the standard 2-loop level perturbation theory model of the non-linear power spectrum that more accurately models the non-linear evolution of the baryon acoustic oscillation than the standard perturbation theory. The model consists of simple and intuitive parts: the non-linear evolution of the smoothed power spectrum without the baryon acoustic oscillations and the non-linear evolution of the baryon acoustic oscillations due to the large-scale velocity of dark matter and due to the gravitational attraction between dark matter particles. Our extended model predicts the smoothing parameter of the baryon acoustic oscillation peak at z = 0.35 as ∼ 7.7 Mpc/h and describes the small non-linear shift in the peak position due to the galaxy random motions.

  5. Stochastic models for atomic clocks

    NASA Technical Reports Server (NTRS)

    Barnes, J. A.; Jones, R. H.; Tryon, P. V.; Allan, D. W.

    1983-01-01

    For the atomic clocks used in the National Bureau of Standards Time Scales, an adequate model is the superposition of white FM, random walk FM, and linear frequency drift for times longer than about one minute. The model was tested on several clocks using maximum likelihood techniques for parameter estimation and the residuals were acceptably random. Conventional diagnostics indicate that additional model elements contribute no significant improvement to the model even at the expense of the added model complexity.
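
    As a quick illustration of this three-component model, the snippet below (with illustrative amplitudes, not NBS-calibrated values) synthesizes a fractional-frequency series as the superposition of white FM, random walk FM, and a linear frequency drift:

        import numpy as np

        rng = np.random.default_rng(1)
        n, tau0 = 100_000, 1.0               # samples and sampling interval (s)

        white_fm = 1e-12 * rng.standard_normal(n)          # white frequency noise
        rw_fm = 1e-15 * np.cumsum(rng.standard_normal(n))  # random walk of frequency
        drift = 1e-16 * tau0 * np.arange(n)                # linear frequency drift
        y = white_fm + rw_fm + drift         # fractional frequency y(t)
        time_error = np.cumsum(y) * tau0     # accumulated clock error x(t), in s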

  6. Analog Microcontroller Model for an Energy Harvesting Round Counter

    DTIC Science & Technology

    2012-07-01

[Abstract garbled in the source; recoverable fragments:] An accurate round count is needed, but there is limited surface area available for mounting piezos on the gun system. The report presents an equivalent circuit model for a piezoelectric transducer; the circuit model for the linear I-V relationships is a parallel combination of six stages, each of which is comprised of a series combination of a resistor and a DC … (truncated).

  7. Dark energy and modified gravity in the Effective Field Theory of Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Cusin, Giulia; Lewandowski, Matthew; Vernizzi, Filippo

    2018-04-01

    We develop an approach to compute observables beyond the linear regime of dark matter perturbations for general dark energy and modified gravity models. We do so by combining the Effective Field Theory of Dark Energy and Effective Field Theory of Large-Scale Structure approaches. In particular, we parametrize the linear and nonlinear effects of dark energy on dark matter clustering in terms of the Lagrangian terms introduced in a companion paper [1], focusing on Horndeski theories and assuming the quasi-static approximation. The Euler equation for dark matter is sourced, via the Newtonian potential, by new nonlinear vertices due to modified gravity and, as in the pure dark matter case, by the effects of short-scale physics in the form of the divergence of an effective stress tensor. The effective fluid introduces a counterterm in the solution to the matter continuity and Euler equations, which allows a controlled expansion of clustering statistics on mildly nonlinear scales. We use this setup to compute the one-loop dark-matter power spectrum.

  8. Interpreting the g loadings of intelligence test composite scores in light of Spearman's law of diminishing returns.

    PubMed

    Reynolds, Matthew R

    2013-03-01

    The linear loadings of intelligence test composite scores on a general factor (g) have been investigated recently in factor analytic studies. Spearman's law of diminishing returns (SLODR), however, implies that the g loadings of test scores likely decrease in magnitude as g increases, or they are nonlinear. The purpose of this study was to (a) investigate whether the g loadings of composite scores from the Differential Ability Scales (2nd ed.) (DAS-II, C. D. Elliott, 2007a, Differential Ability Scales (2nd ed.). San Antonio, TX: Pearson) were nonlinear and (b) if they were nonlinear, to compare them with linear g loadings to demonstrate how SLODR alters the interpretation of these loadings. Linear and nonlinear confirmatory factor analysis (CFA) models were used to model Nonverbal Reasoning, Verbal Ability, Visual Spatial Ability, Working Memory, and Processing Speed composite scores in four age groups (5-6, 7-8, 9-13, and 14-17) from the DAS-II norming sample. The nonlinear CFA models provided better fit to the data than did the linear models. In support of SLODR, estimates obtained from the nonlinear CFAs indicated that g loadings decreased as g level increased. The nonlinear portion for the nonverbal reasoning loading, however, was not statistically significant across the age groups. Knowledge of general ability level informs composite score interpretation because g is less likely to produce differences, or is measured less, in those scores at higher g levels. One implication is that it may be more important to examine the pattern of specific abilities at higher general ability levels. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  9. Analysis of linear elasticity and non-linearity due to plasticity and material damage in woven and biaxial braided composites

    NASA Astrophysics Data System (ADS)

    Goyal, Deepak

Textile composites have a wide variety of applications in the aerospace, sports, automobile, marine and medical industries. Due to the availability of a variety of textile architectures and the numerous parameters associated with each, optimal design through extensive experimental testing is not practical. Predictive tools are needed to perform virtual experiments of the various options. The focus of this research is to develop a better understanding of the linear elastic response, plasticity and material damage induced nonlinear behavior, and the mechanics of load flow in textile composites. Textile composites exhibit multiple scales of complexity. The various textile behaviors are analyzed using two-scale finite element modeling. A framework to allow use of a wide variety of damage initiation and growth models is proposed. Plasticity induced non-linear behavior of 2x2 braided composites is investigated using a modeling approach based on Hill's yield function for orthotropic materials. The mechanics of load flow in textile composites is demonstrated using special non-standard postprocessing techniques that not only highlight the important details, but also transform the extensive amount of output data into comprehensible modes of behavior. The investigations show that the damage models differ from each other in terms of the amount of degradation as well as the properties to be degraded under a particular failure mode. When compared with experimental data, predictions of some models match well for glass/epoxy composites whereas others match well for carbon/epoxy composites. However, all the models predicted very similar responses when the damage factors were made similar, which shows that the magnitudes of the damage factors are very important. Full 3D as well as equivalent tape laminate predictions lie within the range of the experimental data for a wide variety of braided composites with different material systems, which validates the plasticity analysis. Conclusions about the effect of fiber type on the degree of plasticity induced non-linearity in a +/-25° braid depend on the measure of non-linearity. Investigations into the mechanics of load flow in textile composites bring new insights into textile behavior; for example, they explain the existence of transverse shear stress under uni-axial loading and the occurrence of stress concentrations at certain locations.

  10. Neutrino masses and leptogenesis in left-right symmetric models: a review from a model building perspective

    NASA Astrophysics Data System (ADS)

    Hati, Chandan; Patra, Sudhanwa; Pritimita, Prativa; Sarkar, Utpal

    2018-03-01

In this review, we present several variants of left-right symmetric models in the context of neutrino masses and leptogenesis. In particular, we discuss various low scale seesaw mechanisms like the linear seesaw, inverse seesaw and extended seesaw, and their implications for lepton number violating processes like neutrinoless double beta decay. We also visit an alternative framework of left-right models with the inclusion of vector-like fermions to analyze the aspects of universal seesaw. The breaking of left-right symmetry around the few-TeV scale predicts the existence of massive right-handed gauge bosons W_R and Z_R, which might be detected at the LHC in the near future. If such signals are detected at the LHC, there can be severe implications for leptogenesis, a mechanism to explain the observed baryon asymmetry of the Universe. We review the implications of TeV scale left-right symmetry breaking for leptogenesis.

  11. Timber markets and fuel treatments in the western US

    Treesearch

    Karen L. Abt; Jeffrey P. Prestemon

    2006-01-01

    We developed a model of interrelated timber markets in the U.S. West to assess the impacts of large-scale fuel reduction programs on these markets, and concomitant effects of the market on the fuel reduction programs. The linear programming spatial equilibrium model allows interstate and international trade with western Canada and the rest of the world, while...

  12. Left-Right Non-Linear Dynamical Higgs

    NASA Astrophysics Data System (ADS)

    Jing, Shu; Juan, Yepes

    2016-12-01

All the possible CP-conserving non-linear operators up to p^4-order in the Lagrangian expansion are analysed here for the left-right symmetric model in the non-linear electroweak chiral context coupled to a light dynamical Higgs. The low energy effects will be triggered by emerging new physics field content in nature, more specifically by spin-1 resonances sourced by the straightforward extension of the SM local gauge symmetry to the larger local group SU(2)L × SU(2)R × U(1)B-L. Low energy phenomenology will be altered by integrating out the resonances from the physical spectrum, manifested through induced corrections onto the left-handed operators. Such modifications are weighted by powers of the scale ratio implied by the symmetries of the model and will determine the size of the effective operator basis to be used. The recently observed diboson excess around the invariant mass 1.8 TeV-2 TeV entails a scale suppression that suggests encoding the low energy effects via a much smaller set of effective operators. J. Y. also acknowledges KITPC financial support during the completion of this work.

  13. Friction laws at the nanoscale.

    PubMed

    Mo, Yifei; Turner, Kevin T; Szlufarska, Izabela

    2009-02-26

    Macroscopic laws of friction do not generally apply to nanoscale contacts. Although continuum mechanics models have been predicted to break down at the nanoscale, they continue to be applied for lack of a better theory. An understanding of how friction force depends on applied load and contact area at these scales is essential for the design of miniaturized devices with optimal mechanical performance. Here we use large-scale molecular dynamics simulations with realistic force fields to establish friction laws in dry nanoscale contacts. We show that friction force depends linearly on the number of atoms that chemically interact across the contact. By defining the contact area as being proportional to this number of interacting atoms, we show that the macroscopically observed linear relationship between friction force and contact area can be extended to the nanoscale. Our model predicts that as the adhesion between the contacting surfaces is reduced, a transition takes place from nonlinear to linear dependence of friction force on load. This transition is consistent with the results of several nanoscale friction experiments. We demonstrate that the breakdown of continuum mechanics can be understood as a result of the rough (multi-asperity) nature of the contact, and show that roughness theories of friction can be applied at the nanoscale.

  14. Cosmographical Implications

    NASA Astrophysics Data System (ADS)

    Wright, E. L.

    1992-12-01

The COBE DMR observation of large scale anisotropy of the CMBR allows one to compare the gravitational potential measured using Delta T to the gravitational forces required to produce the observed clustering of galaxies. This comparison helps to define the allowed range of cosmological models. As shown by Wright et al. (1992), the COBE Delta T agrees quite well with the bulk flow velocity measured by Bertschinger et al. (1990) in a window of radius 6000 km/sec. This is the best evidence that the initial perturbation spectrum in fact followed the Harrison-Zeldovich (and inflationary) prediction that P(k) ~ k^n with n = 1. Assuming that n ~ 1, one can deduce information about the nature of the matter in the Universe: the first conclusion is that a large amount of non-baryonic dark matter is required. The second conclusion is that a linearly evolving model dominated by Cold Dark Matter produces too little structure on 2500 km/sec scales. However, mixed Cold Plus Hot Dark Matter models, vacuum dominated models, or the Couchman & Carlberg (1992) non-linear recipe for making galaxies out of CDM all seem to reproduce the observed structures on scales from 500-6,000 km/sec while connecting to the COBE results with the expected n ~ 1 slope. COBE is supported by NASA's Astrophysics Division. Goddard Space Flight Center (GSFC), under the scientific guidance of the COBE Science Working Group, is responsible for the development and operation of COBE.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capela, Fabio; Ramazanov, Sabir, E-mail: fc403@cam.ac.uk, E-mail: Sabir.Ramazanov@ulb.ac.be

At large scales and for sufficiently early times, dark matter is described as a pressureless perfect fluid (dust) non-interacting with Standard Model fields. These features are captured by a simple model with two scalars: a Lagrange multiplier and another playing the role of the velocity potential. That model arises naturally in some gravitational frameworks, e.g., the mimetic dark matter scenario. We consider an extension of the model by means of higher derivative terms, such that the dust solutions are preserved at the background level, but there is a non-zero sound speed at the linear level. We associate this Modified Dust with dark matter, and study the linear evolution of cosmological perturbations in that picture. The most prominent effect is the suppression of their power spectrum for sufficiently large cosmological momenta. This can be relevant in view of the problems that cold dark matter faces at sub-galactic scales, e.g., the missing satellites problem. At even shorter scales, however, perturbations of Modified Dust are enhanced compared to the predictions of more common particle dark matter scenarios. This is a peculiarity of their evolution in a radiation dominated background. We also briefly discuss clustering of Modified Dust. We write the system of equations in the Newtonian limit, and sketch a possible mechanism which could prevent the appearance of caustic singularities. The same mechanism may be relevant in light of the core-cusp problem.

  16. Joint scaling laws in functional and evolutionary categories in prokaryotic genomes

    PubMed Central

    Grilli, J.; Bassetti, B.; Maslov, S.; Cosentino Lagomarsino, M.

    2012-01-01

We propose and study a class-expansion/innovation/loss model of genome evolution taking into account the biological roles of genes and their constituent domains. In our model, the numbers of genes in different functional categories are coupled to each other. For example, an increase in the number of metabolic enzymes in a genome is usually accompanied by the addition of new transcription factors regulating these enzymes. Such coupling can be thought of as a proportional ‘recipe’ for genome composition of the type ‘a spoonful of sugar for each egg yolk’. The model jointly reproduces two known empirical laws: the distribution of family sizes and the non-linear scaling of the number of genes in certain functional categories (e.g. transcription factors) with genome size. In addition, it allows us to derive a novel relation between the exponents characterizing these two scaling laws, establishing a direct quantitative connection between evolutionary and functional categories. It predicts functional categories that grow faster than linearly with genome size to be characterized by flatter-than-average family size distributions. This relation is confirmed by our bioinformatics analysis of prokaryotic genomes. This proves that the joint quantitative trends of functional and evolutionary classes can be understood in terms of evolutionary growth with proportional recipes. PMID:21937509

  17. A Novel Fractional Order Model for the Dynamic Hysteresis of Piezoelectrically Actuated Fast Tool Servo

    PubMed Central

    Zhu, Zhiwei; Zhou, Xiaoqin

    2012-01-01

The main contribution of this paper is the development of a linearized model for describing the dynamic hysteresis behaviors of a piezoelectrically actuated fast tool servo (FTS). A linearized hysteresis force model is proposed and mathematically described by a fractional order differential equation. Combining this with the dynamic modeling of the FTS mechanism, a linearized fractional order dynamic hysteresis (LFDH) model for the piezoelectrically actuated FTS is established. The unique features of the LFDH model can be summarized as follows: (a) it describes the rate-dependent hysteresis well, owing to its intrinsic characteristics of frequency-dependent nonlinear phase shifts and amplitude modulations; (b) the linearization scheme of the LFDH model makes it easier to implement inverse dynamic control on piezoelectrically actuated micro-systems. To verify the effectiveness of the proposed model, a series of experiments is conducted. The toolpaths of the FTS for creating two typical micro-functional surfaces, involving various harmonic components with different frequencies and amplitudes, are scaled and employed as command signals for the piezoelectric actuator. The modeling errors in the steady state are less than ±2.5% within the full span range, which is much smaller than those of certain state-of-the-art modeling methods, demonstrating the efficiency and superiority of the proposed model for modeling dynamic hysteresis effects. Moreover, it indicates that piezoelectrically actuated micro-systems are more suitably described as fractional order dynamic systems.
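
    The LFDH model itself is not reproduced in the abstract, but the basic numerical machinery of fractional order models is easy to demonstrate. The sketch below integrates a generic fractional relaxation equation D^alpha x = -lam*x + u(t) using the Grünwald-Letnikov discretization; the scheme and all parameters are my choices, not the authors':

        import numpy as np

        def fractional_relaxation(alpha, lam, u, h):
            """Integrate D^alpha x = -lam*x + u on a uniform grid of step h."""
            n = len(u)
            c = np.ones(n)                   # Grunwald-Letnikov binomial weights
            for j in range(1, n):
                c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
            x = np.zeros(n)
            ha = h ** (-alpha)
            for k in range(1, n):
                history = np.dot(c[1:k + 1], x[k - 1::-1])  # sum_j c_j * x_{k-j}
                x[k] = (u[k] - ha * history) / (ha + lam)
            return x

        t = np.arange(0.0, 10.0, 0.01)
        x = fractional_relaxation(alpha=0.8, lam=1.0, u=np.ones_like(t), h=0.01)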

  18. Applications of Perron-Frobenius theory to population dynamics.

    PubMed

    Li, Chi-Kwong; Schneider, Hans

    2002-05-01

By the use of Perron-Frobenius theory, simple proofs are given of the Fundamental Theorem of Demography and of a theorem of Cushing and Yicang on the net reproductive rate occurring in matrix models of population dynamics. The latter result, which is closely related to the Stein-Rosenberg theorem in numerical linear algebra, is further refined with some additional nonnegative matrix theory. When the fertility matrix is scaled by the net reproductive rate, the growth rate of the model is 1. More generally, we show how to achieve a given growth rate for the model by scaling the fertility matrix. Demographic interpretations of the results are given.
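
    The fertility-scaling result quoted above is easy to verify numerically for a Leslie matrix. In the sketch below (illustrative rates, not data from the paper), the net reproductive rate R0 is the expected lifetime offspring of a newborn; dividing the fertility row by R0 makes the dominant eigenvalue, i.e. the growth rate, equal to 1:

        import numpy as np

        f = np.array([0.0, 1.5, 2.0, 1.0])  # age-specific fertilities (illustrative)
        s = np.array([0.6, 0.8, 0.5])       # survival to the next age class

        def leslie(f, s):
            L = np.zeros((len(f), len(f)))
            L[0, :] = f                                         # fertility row
            L[np.arange(1, len(f)), np.arange(len(f) - 1)] = s  # survival subdiagonal
            return L

        survivorship = np.concatenate(([1.0], np.cumprod(s)))
        R0 = np.sum(f * survivorship)       # net reproductive rate

        growth = np.max(np.abs(np.linalg.eigvals(leslie(f / R0, s))))
        print(R0, growth)                   # growth ~= 1 after scaling by R0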

  19. Multi-scale and Multi-physics Numerical Methods for Modeling Transport in Mesoscopic Systems

    DTIC Science & Technology

    2014-10-13

[Abstract garbled in the source; recoverable fragments:] The report covers wide band fast multipole methods for Hankel waves; a new linear scaling discontinuous Galerkin density functional theory; an inflow boundary condition for Wigner quantum transport equations; a book titled "Computational Methods for Electromagnetic Phenomena"; and a paper on equations in layered media with FMM for Bessel functions (Science China Mathematics, December 2013, p. 2561). TOTAL: 6 papers published in peer-reviewed journals.

  20. Scale-free networks as an epiphenomenon of memory

    NASA Astrophysics Data System (ADS)

    Caravelli, F.; Hamma, A.; Di Ventra, M.

    2015-01-01

Many realistic networks are scale free, with small characteristic path lengths, high clustering, and power-law degree distributions. They can be obtained from dynamical networks in which a preferential attachment process takes place. However, this mechanism is non-local, in the sense that it requires knowledge of the whole graph in order for the graph to be updated. Instead, if preferential attachment and realistic networks occur in physical systems, these features need to emerge from a local model. In this paper, we propose a local model and show that a possible ingredient (which is often underrated) for obtaining scale-free networks with local rules is memory. Such a model can be realised in solid-state circuits, using non-linear passive elements with memory such as memristors, and thus can be tested experimentally.

  1. Linear and Non-linear Information Flows In Rainfall Field

    NASA Astrophysics Data System (ADS)

    Molini, A.; La Barbera, P.; Lanza, L. G.

The rainfall process is the result of a complex framework of non-linear dynamical interactions between the different components of the atmosphere. It preserves the complexity and the intermittent features of the generating system in space and time, as well as the strong dependence of these properties on the scale of observation. The understanding and quantification of how the non-linearity of the generating process comes to influence single rain events constitute relevant research issues in the field of hydro-meteorology, especially in those applications where timely and effective forecasting of heavy rain events is able to reduce the risk of failure. This work focuses on the characterization of the non-linear properties of the observed rain process and on the influence of these features on hydrological models. Among the goals of such a survey are the search for regular structures in the rainfall phenomenon and the study of the information flows within the rain field. The research focuses on three basic evolution directions for the system: in time, in space and between the different scales. In fact, the information flows that force the system to evolve represent in general a connection between the different locations in space, the different instants in time and, unless the hypothesis of scale invariance is verified a priori, the different characteristic scales. A first phase of the analysis is carried out by means of classic statistical methods, then a survey of the information flows within the field is developed by means of techniques borrowed from Information Theory, and finally an analysis of the rain signal in the time and frequency domains is performed, with particular reference to its intermittent structure. The methods adopted in this last part of the work are both the classic techniques of statistical inference and a few procedures for the detection of non-linear and non-stationary features within the process starting from measured data.
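
    Of the information-theoretic tools mentioned, the lagged mutual information between two rain records is the simplest to sketch. The snippet below uses a plain histogram estimator; this is an illustrative choice, since the abstract does not specify an estimator:

        import numpy as np

        def lagged_mutual_information(x, y, lag, bins=16):
            """Histogram estimate (in nats) of I(x_t ; y_{t+lag}), lag >= 1."""
            x0, y0 = x[:-lag], y[lag:]
            pxy, _, _ = np.histogram2d(x0, y0, bins=bins)
            pxy /= pxy.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

        # Toy usage: series b is a delayed, noisy copy of series a, so the
        # mutual information peaks at the true delay of 3 steps.
        rng = np.random.default_rng(2)
        a = rng.random(5000)
        b = np.roll(a, 3) + 0.1 * rng.random(5000)
        mi = [lagged_mutual_information(a, b, k) for k in range(1, 8)]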

2. Extending the Coyote emulator to dark energy models with standard w_0-w_a parametrization of the equation of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casarini, L.; Bonometto, S.A.; Tessarotto, E.

    2016-08-01

We discuss an extension of the Coyote emulator to predict non-linear matter power spectra of dark energy (DE) models with a scale factor dependent equation of state of the form w = w_0 + (1 - a) w_a. The extension is based on the mapping rule between non-linear spectra of DE models with a constant equation of state and those with a time varying one, originally introduced in ref. [40]. Using a series of N-body simulations we show that the spectral equivalence is accurate to the sub-percent level across the same range of modes and redshift covered by the Coyote suite. Thus, the extended emulator provides a very efficient and accurate tool to predict non-linear power spectra for DE models with the w_0-w_a parametrization. Following the same criteria we have developed a numerical code, implemented in a dedicated module for the CAMB code, that can be used in combination with the Coyote Emulator in likelihood analyses of non-linear matter power spectrum measurements. All codes can be found at https://github.com/luciano-casarini/pkequal.
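
    The mapping itself is specified in ref. [40] rather than in this abstract. A common way to define an equivalent constant-w model is to match the comoving distance to the CMB between the two cosmologies; the sketch below illustrates that matching for a flat universe. Both the matching criterion and the parameter values are my assumptions here, not necessarily pkequal's exact prescription:

        import numpy as np
        from scipy.integrate import quad
        from scipy.optimize import brentq

        Om, z_cmb = 0.3, 1090.0   # illustrative flat-universe parameters

        def E(z, w0, wa=0.0):
            a = 1.0 / (1.0 + z)   # CPL dark energy density evolution
            de = (1.0 - Om) * a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))
            return np.sqrt(Om * (1.0 + z) ** 3 + de)

        def distance(w0, wa=0.0):  # comoving distance to the CMB, in units of c/H0
            return quad(lambda z: 1.0 / E(z, w0, wa), 0.0, z_cmb, limit=200)[0]

        def equivalent_constant_w(w0, wa):
            target = distance(w0, wa)
            return brentq(lambda w: distance(w) - target, -2.0, -0.3)

        w_eq = equivalent_constant_w(w0=-0.9, wa=0.3)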

  3. Can a minimalist model of wind forced baroclinic Rossby waves produce reasonable results?

    NASA Astrophysics Data System (ADS)

    Watanabe, Wandrey B.; Polito, Paulo S.; da Silveira, Ilson C. A.

    2016-04-01

The linear theory predicts that Rossby waves are the large scale mechanism of adjustment to perturbations of the geophysical fluid. Satellite measurements of sea level anomaly (SLA) provided sturdy evidence of the existence of these waves. Recent studies suggest that the variability in the altimeter records is mostly due to mesoscale nonlinear eddies, which challenges the original interpretation of westward propagating features as Rossby waves. The objective of this work is to test whether a classic linear dynamic model is a reasonable explanation for the observed SLA. A linear, reduced-gravity, non-dispersive Rossby wave model is used to estimate the SLA forced by direct and remote wind stress. Correlations between model results and observations are up to 0.88. The best agreement is in the tropical region of all ocean basins. These correlations decrease towards insignificance in mid-latitudes. The relative contributions of eastern boundary (remote) forcing and local wind forcing in the generation of Rossby waves are also estimated and suggest that the main wave forming mechanism is the remote forcing. Results suggest that linear long baroclinic Rossby wave dynamics explain a significant part of the SLA annual variability, at least in the tropical oceans.
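
    A minimal numerical version of such a model is easy to state: the long, non-dispersive wave satisfies h_t - c h_x = F(t), integrated westward from the eastern boundary. The sketch below uses a first-order upwind step with toy forcing and boundary signals; every number is a placeholder, not the paper's fitted fields:

        import numpy as np

        nx, nt = 400, 2000
        dx, dt = 50e3, 86400.0            # 50 km grid spacing, daily time step
        c = 0.05                          # westward phase speed (m/s); CFL ~ 0.09
        h = np.zeros((nt, nx))            # sea level anomaly

        year = 365.0 * 86400.0
        F = lambda t: 1e-9 * np.sin(2 * np.pi * t / year)    # toy wind forcing
        h_E = lambda t: 0.05 * np.sin(2 * np.pi * t / year)  # toy remote signal

        for n in range(1, nt):
            t = n * dt                    # upwind step for westward propagation
            h[n, :-1] = (h[n - 1, :-1]
                         + c * dt / dx * (h[n - 1, 1:] - h[n - 1, :-1])
                         + dt * F(t))
            h[n, -1] = h_E(t)             # eastern boundary (remote) condition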

  4. White noise analysis of Phycomyces light growth response system. I. Normal intensity range.

    PubMed Central

    Lipson, E D

    1975-01-01

The Wiener-Lee-Schetzen method for the identification of a nonlinear system through white gaussian noise stimulation was applied to the transient light growth response of the sporangiophore of Phycomyces. In order to cover a moderate dynamic range of light intensity I, the input variable was defined to be log I. The experiments were performed in the normal range of light intensity, centered about I0 = 10^-6 W/cm^2. The kernels of the Wiener functionals were computed up to second order. Within the range of a few decades the system is reasonably linear with log I. The main nonlinear feature of the second-order kernel corresponds to the property of rectification. Power spectral analysis reveals that the slow dynamics of the system are of at least fifth order. The system can be represented approximately by a linear transfer function, including a first-order high-pass (adaptation) filter with a 4 min time constant and an underdamped fourth-order low-pass filter. Accordingly a linear electronic circuit was constructed to simulate the small scale response characteristics. In terms of the adaptation model of Delbrück and Reichardt (1956, in Cellular Mechanisms in Differentiation and Growth, Princeton University Press), kernels were deduced for the dynamic dependence of the growth velocity (output) on the "subjective intensity", a presumed internal variable. Finally the linear electronic simulator above was generalized to accommodate the large scale nonlinearity of the adaptation model and to serve as a tool for a deeper test of the model. PMID:1203444
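
    The first-order step of the Wiener-Lee-Schetzen method reduces to a cross-correlation between the white noise stimulus and the response. A minimal sketch follows, checked on a toy linear system rather than the Phycomyces data:

        import numpy as np

        def first_order_kernel(x, y, max_lag):
            """Lee-Schetzen estimate k1[tau] = E[y(t) x(t - tau)] / var(x)."""
            y0, P, n = y - y.mean(), x.var(), len(x)
            return np.array([np.mean(y0[tau:] * x[:n - tau])
                             for tau in range(max_lag)]) / P

        # Check on a known linear system: the kernel recovers its impulse response.
        rng = np.random.default_rng(3)
        x = rng.standard_normal(200_000)
        true_k = np.exp(-np.arange(50) / 10.0)
        y = np.convolve(x, true_k)[:len(x)]
        k1 = first_order_kernel(x, y, 50)   # approximately equal to true_k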

  5. Non-linear hydrodynamic instability and turbulence in eccentric astrophysical discs with vertical structure

    NASA Astrophysics Data System (ADS)

    Wienkers, A. F.; Ogilvie, G. I.

    2018-07-01

    Non-linear evolution of the parametric instability of inertial waves inherent to eccentric discs is studied by way of a new local numerical model. Mode coupling of tidal deformation with the disc eccentricity is known to produce exponentially growing eccentricities at certain mean-motion resonances. However, the details of an efficient saturation mechanism balancing this growth still are not fully understood. This paper develops a local numerical model for an eccentric quasi-axisymmetric shearing box which generalizes the often-used Cartesian shearing box model. The numerical method is an overall second-order well-balanced finite volume method which maintains the stratified and oscillatory steady-state solution by construction. This implementation is employed to study the non-linear outcome of the parametric instability in eccentric discs with vertical structure. Stratification is found to constrain the perturbation energy near the mid-plane and localize the effective region of inertial wave breaking that sources turbulence. A saturated marginally sonic turbulent state results from the non-linear breaking of inertial waves and is subsequently unstable to large-scale axisymmetric zonal flow structures. This resulting limit-cycle behaviour reduces access to the eccentric energy source and prevents substantial transport of angular momentum radially through the disc. Still, the saturation of this parametric instability of inertial waves is shown to damp eccentricity on a time-scale of a thousand orbital periods. It may thus be a promising mechanism for intermittently regaining balance with the exponential growth of eccentricity from the eccentric Lindblad resonances and may also help explain the occurrence of 'bursty' dynamics such as the superhump phenomenon.

  6. A Thermodynamic Approach to Soil-Plant-Atmosphere Modeling: From Metabolic Biochemical Processes to Water-Carbon-Nitrogen Balance

    NASA Astrophysics Data System (ADS)

    Clavijo, H. W.

    2016-12-01

Modeling the soil-plant-atmosphere continuum has been a central part of understanding interrelationships among biogeochemical and hydrological processes. The theory behind couplings of Land Surface Models (LSMs) and Dynamical Global Vegetation Models (DGVMs) is based on physical and physiological processes connected mainly by input-output interactions. This modeling framework could be improved by the application of a non-equilibrium thermodynamic basis that could encompass the majority of biophysical processes in a standard fashion. This study presents an alternative model for plant-water-atmosphere based on energy-mass thermodynamics. The system of dynamic equations derived is based on the total entropy, the total energy balance for the plant, the biomass dynamics at the metabolic level, and the water-carbon-nitrogen fluxes and balances. One advantage of this formulation is the capability to describe the adaptation and evolution of plant dynamics as a bio-system coupled to the environment. Second, it opens a window for applications under specific conditions, from the individual plant scale, to the watershed scale, to the global scale. Third, it enhances the possibility of analyzing anthropogenic impacts on the system, benefiting from the mathematical formulation and its non-linearity. This non-linear model formulation is analyzed under the concepts of qualitative system dynamics theory, for different state-space phase portraits. The attractors and sources are pointed out together with their stability analysis. Possible bifurcations are explored and reported. Simulations of the system dynamics under different conditions are presented. These results show strong consistency and applicability, which validates the use of non-equilibrium thermodynamic theory.

  7. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fazio, A.; Henry, B.; Hood, D.

    1966-01-01

    Set of cards with scale divisions and a scale finder permits accurate reading of the coordinates of points on linear or logarithmic graphs plotted on rectangular grids. The set contains 34 different scales for linear plotting and 28 single cycle scales for log plots.

  8. Influence of landscape-scale factors in limiting brook trout populations in Pennsylvania streams

    USGS Publications Warehouse

    Kocovsky, P.M.; Carline, R.F.

    2006-01-01

Landscapes influence the capacity of streams to produce trout through their effect on water chemistry and other factors at the reach scale. Trout abundance also fluctuates over time; thus, to thoroughly understand how spatial factors at landscape scales affect trout populations, one must assess the changes in populations over time to provide a context for interpreting the importance of spatial factors. We used data from the Pennsylvania Fish and Boat Commission's fisheries management database to investigate spatial factors that affect the capacity of streams to support brook trout Salvelinus fontinalis and to provide models useful for their management. We assessed the relative importance of spatial and temporal variation by calculating variance components and comparing relative standard errors for spatial and temporal variation. We used binary logistic regression to predict the presence of harvestable-length brook trout and multiple linear regression to assess the mechanistic links between landscapes and trout populations and to predict population density. The variance in trout density among streams was equal to or greater than the temporal variation for several streams, indicating that differences among sites affect population density. Logistic regression models correctly predicted the absence of harvestable-length brook trout in 60% of validation samples. The r^2-value for the linear regression model predicting density was 0.3, indicating low predictive ability. Both logistic and linear regression models supported buffering capacity against acid episodes as an important mechanistic link between landscapes and trout populations. Although our models fail to predict trout densities precisely, their success at elucidating the mechanistic links between landscapes and trout populations, in concert with the importance of spatial variation, increases our understanding of factors affecting brook trout abundance and will help managers and private groups to protect and enhance populations of wild brook trout. © Copyright by the American Fisheries Society 2006.

  9. A polymer, random walk model for the size-distribution of large DNA fragments after high linear energy transfer radiation

    NASA Technical Reports Server (NTRS)

    Ponomarev, A. L.; Brenner, D.; Hlatky, L. R.; Sachs, R. K.

    2000-01-01

    DNA double-strand breaks (DSBs) produced by densely ionizing radiation are not located randomly in the genome: recent data indicate DSB clustering along chromosomes. Stochastic DSB clustering at large scales, from > 100 Mbp down to < 0.01 Mbp, is modeled using computer simulations and analytic equations. A random-walk, coarse-grained polymer model for chromatin is combined with a simple track structure model in Monte Carlo software called DNAbreak and is applied to data on alpha-particle irradiation of V-79 cells. The chromatin model neglects molecular details but systematically incorporates an increase in average spatial separation between two DNA loci as the number of base-pairs between the loci increases. Fragment-size distributions obtained using DNAbreak match data on large fragments about as well as distributions previously obtained with a less mechanistic approach. Dose-response relations, linear at small doses of high linear energy transfer (LET) radiation, are obtained. They are found to be non-linear when the dose becomes so large that there is a significant probability of overlapping or close juxtaposition, along one chromosome, for different DSB clusters from different tracks. The non-linearity is more evident for large fragments than for small. The DNAbreak results furnish an example of the RLC (randomly located clusters) analytic formalism, which generalizes the broken-stick fragment-size distribution of the random-breakage model that is often applied to low-LET data.

  10. An open-access CMIP5 pattern library for temperature and precipitation: Description and methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lynch, Cary D.; Hartin, Corinne A.; Bond-Lamberty, Benjamin

Pattern scaling is used to efficiently emulate general circulation models and explore uncertainty in climate projections under multiple forcing scenarios. Pattern scaling methods assume that local climate changes scale with a global mean temperature increase, allowing for spatial patterns to be generated for multiple models for any future emission scenario. For uncertainty quantification and probabilistic statistical analysis, a library of patterns with descriptive statistics for each file would be beneficial, but such a library does not presently exist. Of the possible techniques used to generate patterns, the two most prominent are the delta and least squared regression methods. We explore the differences and statistical significance between patterns generated by each method and assess performance of the generated patterns across methods and scenarios. Differences in patterns across seasons between methods and epochs were largest in high latitudes (60-90°N/S). Bias and mean errors between modeled and pattern predicted output from the linear regression method were smaller than patterns generated by the delta method. Across scenarios, differences in the linear regression method patterns were more statistically significant, especially at high latitudes. We found that pattern generation methodologies were able to approximate the forced signal of change to within ≤ 0.5°C, but choice of pattern generation methodology for pattern scaling purposes should be informed by user goals and criteria. As a result, this paper describes our library of least squared regression patterns from all CMIP5 models for temperature and precipitation on an annual and sub-annual basis, along with the code used to generate these patterns.

  11. An open-access CMIP5 pattern library for temperature and precipitation: Description and methodology

    DOE PAGES

    Lynch, Cary D.; Hartin, Corinne A.; Bond-Lamberty, Benjamin; ...

    2017-05-15

Pattern scaling is used to efficiently emulate general circulation models and explore uncertainty in climate projections under multiple forcing scenarios. Pattern scaling methods assume that local climate changes scale with a global mean temperature increase, allowing for spatial patterns to be generated for multiple models for any future emission scenario. For uncertainty quantification and probabilistic statistical analysis, a library of patterns with descriptive statistics for each file would be beneficial, but such a library does not presently exist. Of the possible techniques used to generate patterns, the two most prominent are the delta and least squared regression methods. We explore the differences and statistical significance between patterns generated by each method and assess performance of the generated patterns across methods and scenarios. Differences in patterns across seasons between methods and epochs were largest in high latitudes (60-90°N/S). Bias and mean errors between modeled and pattern predicted output from the linear regression method were smaller than patterns generated by the delta method. Across scenarios, differences in the linear regression method patterns were more statistically significant, especially at high latitudes. We found that pattern generation methodologies were able to approximate the forced signal of change to within ≤ 0.5°C, but choice of pattern generation methodology for pattern scaling purposes should be informed by user goals and criteria. As a result, this paper describes our library of least squared regression patterns from all CMIP5 models for temperature and precipitation on an annual and sub-annual basis, along with the code used to generate these patterns.
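
    Both pattern-generation methods named above reduce to a few lines of array algebra. The sketch below (synthetic stand-in data, not the released library code) computes a per-grid-cell least squared regression pattern and a delta-method pattern, each per kelvin of global mean warming:

        import numpy as np

        def regression_pattern(local_T, global_T):
            """Least-squares slope of local change on global mean temperature.
            local_T: (time, lat, lon); global_T: (time,) area-weighted mean."""
            g = global_T - global_T.mean()
            anom = local_T - local_T.mean(axis=0)
            return np.tensordot(g, anom, axes=(0, 0)) / np.sum(g * g)

        def delta_pattern(local_T, global_T, base, future):
            """Delta method: epoch-difference map per kelvin of global change."""
            dT = local_T[future].mean(axis=0) - local_T[base].mean(axis=0)
            return dT / (global_T[future].mean() - global_T[base].mean())

        # Usage with synthetic stand-ins (shapes only):
        rng = np.random.default_rng(4)
        gmt = np.linspace(0.0, 3.0, 100) + 0.1 * rng.standard_normal(100)
        field = 1.5 * gmt[:, None, None] + rng.standard_normal((100, 36, 72))
        pat = regression_pattern(field, gmt)                    # ~1.5 per cell
        pat_d = delta_pattern(field, gmt, slice(0, 20), slice(80, 100))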

  12. Magnetic Doppler imaging of Ap stars

    NASA Astrophysics Data System (ADS)

    Silvester, J.; Wade, G. A.; Kochukhov, O.; Landstreet, J. D.; Bagnulo, S.

    2008-04-01

    Historically, the magnetic field geometries of the chemically peculiar Ap stars were modelled in the context of a simple dipole field. However, with the acquisition of increasingly sophisticated diagnostic data, it has become clear that the large-scale field topologies exhibit important departures from this simple model. Recently, new high-resolution circular and linear polarisation spectroscopy has even hinted at the presence of strong, small-scale field structures, which were completely unexpected based on earlier modelling. This project investigates the detailed structure of these strong fossil magnetic fields, in particular the large-scale field geometry, as well as small scale magnetic structures, by mapping the magnetic and chemical surface structure of a selected sample of Ap stars. These maps will be used to investigate the relationship between the local field vector and local surface chemistry, looking for the influence the field may have on the various chemical transport mechanisms (i.e., diffusion, convection and mass loss). This will lead to better constraints on the origin and evolution, as well as refining the magnetic field model for Ap stars. Mapping will be performed using high resolution and signal-to-noise ratio time-series of spectra in both circular and linear polarisation obtained using the new-generation ESPaDOnS (CFHT, Mauna Kea, Hawaii) and NARVAL spectropolarimeters (Pic du Midi Observatory). With these data we will perform tomographic inversion of Doppler-broadened Stokes IQUV Zeeman profiles of a large variety of spectral lines using the INVERS10 magnetic Doppler imaging code, simultaneously recovering the detailed surface maps of the vector magnetic field and chemical abundances.

  13. Hysteresis, regime shifts, and non-stationarity in aquifer recharge-storage-discharge systems

    NASA Astrophysics Data System (ADS)

    Klammler, Harald; Jawitz, James; Annable, Michael; Hatfield, Kirk; Rao, Suresh

    2016-04-01

    Based on physical principles and geological information we develop a parsimonious aquifer model for Silver Springs, one of the largest karst springs in Florida. The model structure is linear and time-invariant with recharge, aquifer head (storage) and spring discharge as dynamic variables at the springshed (landscape) scale. Aquifer recharge is the hydrological driver with trends over a range of time scales from seasonal to multi-decadal. The freshwater-saltwater interaction is considered as a dynamic storage mechanism. Model results and observed time series show that aquifer storage causes significant rate-dependent hysteretic behavior between aquifer recharge and discharge. This leads to variable discharge per unit recharge over time scales up to decades, which may be interpreted as a gradual and cyclic regime shift in the aquifer drainage behavior. Based on field observations, we further amend the aquifer model by assuming vegetation growth in the spring run to be inversely proportional to stream velocity and to hinder stream flow. This simple modification introduces non-linearity into the dynamic system, for which we investigate the occurrence of rate-independent hysteresis and of different possible steady states with respective regime shifts between them. Results may contribute towards explaining observed non-stationary behavior potentially due to hydrological regime shifts (e.g., triggered by gradual, long-term changes in recharge or single extreme events) or long-term hysteresis (e.g., caused by aquifer storage). This improved understanding of the springshed hydrologic response dynamics is fundamental for managing the ecological, economic and social aspects at the landscape scale.
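
    The rate-dependent hysteresis attributed to storage falls out of even the simplest linear reservoir, as the toy simulation below shows (parameters are illustrative, not the calibrated Silver Springs model): discharge lags recharge, so plotting Q against R over a seasonal cycle traces a loop rather than a single-valued curve.

        import numpy as np

        dt, k = 1.0, 120.0                    # time step (d), residence time (d)
        t = np.arange(0.0, 4 * 365.0, dt)
        R = 1.0 + 0.8 * np.sin(2 * np.pi * t / 365.0)   # seasonal recharge (mm/d)

        S = np.zeros_like(t)                  # storage: dS/dt = R - S/k
        for i in range(1, len(t)):
            S[i] = S[i - 1] + dt * (R[i - 1] - S[i - 1] / k)
        Q = S / k                             # discharge lags recharge, so the
                                              # (R, Q) trajectory is a hysteresis loop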

  14. A statistical forecast model using the time-scale decomposition technique to predict rainfall during flood period over the middle and lower reaches of the Yangtze River Valley

    NASA Astrophysics Data System (ADS)

    Hu, Yijia; Zhong, Zhong; Zhu, Yimin; Ha, Yao

    2018-04-01

    In this paper, a statistical forecast model using the time-scale decomposition method is established to produce seasonal predictions of the rainfall during the flood period (FPR) over the middle and lower reaches of the Yangtze River Valley (MLYRV). This method decomposes the rainfall over the MLYRV into three time-scale components, namely, an interannual component with periods shorter than 8 years, an interdecadal component with periods from 8 to 30 years, and a multidecadal component with periods longer than 30 years. Then, predictors are selected for each of the three time-scale components of FPR through correlation analysis. Finally, a statistical forecast model is established using the multiple linear regression technique to predict the three time-scale components of the FPR, respectively. The results show that this forecast model can capture the interannual and interdecadal variation of FPR. The hindcast of FPR over the 14 years from 2001 to 2014 shows that the FPR can be predicted successfully in 11 out of the 14 years. This forecast model performs better than a model using the traditional scheme without time-scale decomposition. Therefore, the statistical forecast model using the time-scale decomposition technique has good skill and application value in the operational prediction of FPR over the MLYRV.
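
    A schematic of the forecasting chain helps fix ideas. The moving-average band-pass below is only a stand-in for whatever filter the authors actually used (the abstract does not specify it), and the predictor matrix for each component would come from the correlation analysis.

    ```python
    # Schematic: decompose rainfall into three time-scale bands, then fit a
    # multiple linear regression to each band. The filter is a placeholder.
    import numpy as np

    def moving_average(x, window):
        return np.convolve(x, np.ones(window) / window, mode="same")

    def decompose(rainfall):
        """Split a yearly series into <8 yr, 8-30 yr and >30 yr components."""
        lowpass30 = moving_average(rainfall, 30)     # periods > 30 yr
        lowpass8 = moving_average(rainfall, 8)
        return (rainfall - lowpass8,                 # interannual, < 8 yr
                lowpass8 - lowpass30,                # interdecadal, 8-30 yr
                lowpass30)                           # multidecadal, > 30 yr

    def fit_component(predictors, component):
        """Ordinary least-squares multiple linear regression for one band."""
        A = np.column_stack([np.ones(len(component)), predictors])
        coef, *_ = np.linalg.lstsq(A, component, rcond=None)
        return coef

    # The total FPR forecast is the sum of the three predicted components.
    ```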

  15. Multidecadal Variability in Surface Albedo Feedback Across CMIP5 Models

    NASA Astrophysics Data System (ADS)

    Schneider, Adam; Flanner, Mark; Perket, Justin

    2018-02-01

    Previous studies quantify surface albedo feedback (SAF) in climate change, but few assess its variability on decadal time scales. Using the Coupled Model Intercomparison Project Phase 5 (CMIP5) multimodel ensemble data set, we calculate time-evolving SAF in multiple decades from surface albedo and temperature linear regressions. Results are meaningful when temperature change exceeds 0.5 K. Decadal-scale SAF is strongly correlated with century-scale SAF during the 21st century. Throughout the 21st century, the multimodel ensemble mean SAF increases from 0.37 to 0.42 W m⁻² K⁻¹. These results suggest that models' mean decadal-scale SAFs are good estimates of their century-scale SAFs if there is at least 0.5 K temperature change. Persistent SAF into the late 21st century indicates ongoing capacity for Arctic albedo decline despite there being less sea ice. If the CMIP5 multimodel ensemble results are representative of the Earth, we cannot expect decreasing Arctic sea ice extent to suppress SAF in the 21st century.
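
    The regression at the heart of this calculation is simple to write down. A minimal version, assuming annual-mean temperature and albedo series are available as arrays and collapsing the radiative-kernel weighting into a single illustrative factor, might look like:

    ```python
    # Decadal-scale surface albedo feedback as a linear-regression slope.
    # The kernel factor that converts an albedo change into W m^-2 is reduced
    # here to a single illustrative constant.
    import numpy as np

    def albedo_feedback(temperature, albedo, kernel=1.0):
        """Slope of albedo against temperature, scaled by a radiative kernel."""
        slope = np.polyfit(temperature, albedo, 1)[0]   # d(albedo)/dT per K
        return kernel * slope

    def decadal_saf(temperature, albedo, window=10, min_dT=0.5, kernel=1.0):
        """Time-evolving SAF over sliding decades; windows with dT < 0.5 K are skipped."""
        out = []
        for i in range(len(temperature) - window + 1):
            T = temperature[i:i + window]
            if T.max() - T.min() < min_dT:              # result not meaningful
                out.append(np.nan)
            else:
                out.append(albedo_feedback(T, albedo[i:i + window], kernel))
        return np.array(out)
    ```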

  16. Constraints on a scale-dependent bias from galaxy clustering

    NASA Astrophysics Data System (ADS)

    Amendola, L.; Menegoni, E.; Di Porto, C.; Corsi, M.; Branchini, E.

    2017-01-01

    We forecast the future constraints on scale-dependent parametrizations of galaxy bias and their impact on the estimate of cosmological parameters from the power spectrum of galaxies measured in a spectroscopic redshift survey. For the latter we assume a wide survey at relatively large redshifts, similar to the planned Euclid survey, as the baseline for future experiments. To assess the impact of the bias we perform a Fisher matrix analysis, and we adopt two different parametrizations of scale-dependent bias. The fiducial models for galaxy bias are calibrated using mock catalogs of Hα-emitting galaxies mimicking the expected properties of the objects that will be targeted by the Euclid survey. In our analysis we have obtained two main results. First, allowing for a scale-dependent bias does not significantly increase the errors on the other cosmological parameters apart from the rms amplitude of density fluctuations, σ8, and the growth index γ, whose uncertainties increase by a factor of up to 2, depending on the bias model adopted. Second, we find that the linear bias parameter b0 can be estimated to within 1%-2% at various redshifts regardless of the fiducial model. The nonlinear bias parameters have significantly larger errors that depend on the model adopted. Despite this, in the more realistic scenarios departures from the simple linear bias prescription can be detected with ~2σ significance at each redshift explored. Finally, we use the Fisher matrix formalism to assess the impact of assuming an incorrect bias model and find that the systematic errors induced on the cosmological parameters are similar to or even larger than the statistical ones.
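
    For readers unfamiliar with the machinery, a generic Fisher-matrix forecast has the shape sketched below; the observable model, fiducial parameters and error bars are placeholders, not the Euclid-like survey configuration used in the paper.

    ```python
    # Generic Fisher-matrix forecast: numerical derivatives of an observable
    # P(k; theta) with respect to the parameters, weighted by measurement variance.
    # model() and its arguments are placeholders for the survey power spectrum.
    import numpy as np

    def fisher_matrix(model, theta0, k, sigma, eps=1e-4):
        """F_ij = sum_k dP/dtheta_i * dP/dtheta_j / sigma_k^2."""
        n = len(theta0)
        derivs = []
        for i in range(n):
            tp, tm = np.array(theta0, float), np.array(theta0, float)
            tp[i] += eps
            tm[i] -= eps
            derivs.append((model(tp, k) - model(tm, k)) / (2 * eps))
        F = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                F[i, j] = np.sum(derivs[i] * derivs[j] / sigma**2)
        return F

    # Marginalized 1-sigma errors are the square roots of the diagonal of F^-1:
    # errors = np.sqrt(np.diag(np.linalg.inv(F)))
    ```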

  17. Comparative study of sea ice dynamics simulations with a Maxwell elasto-brittle rheology and the elastic-viscous-plastic rheology in NEMO-LIM3

    NASA Astrophysics Data System (ADS)

    Raulier, Jonathan; Dansereau, Véronique; Fichefet, Thierry; Legat, Vincent; Weiss, Jérôme

    2017-04-01

    Sea ice is a highly dynamical environment characterized by a dense mesh of fractures or leads, constantly opening and closing over short time scales. This characteristic geomorphology is linked to the existence of linear kinematic features, which consist of quasi-linear patterns emerging from the observed strain rate field of sea ice. Standard rheologies used in most state-of-the-art sea ice models, like the well-known elastic-viscous-plastic rheology, are thought to misrepresent those linear kinematic features and the observed statistical distribution of deformation rates. Dedicated rheologies built to capture the processes known to drive the formation of leads have been developed, but still need to be evaluated at the global scale. One of them, based on a Maxwell elasto-brittle formulation, is being integrated into the NEMO-LIM3 global ocean-sea ice model (www.nemo-ocean.eu; www.elic.ucl.ac.be/lim). In the present study, we compare the results of the sea ice model LIM3 obtained with two different rheologies: the elastic-viscous-plastic rheology commonly used in LIM3 and a Maxwell elasto-brittle rheology. The comparison focuses on the statistical characteristics of the simulated deformation rates and on the ability of the model to reproduce the existence of leads within the ice pack. The impact of the lead representation on fluxes between ice, atmosphere and ocean is also assessed.

  18. Modeling small-scale dairy farms in central Mexico using multi-criteria programming.

    PubMed

    Val-Arreola, D; Kebreab, E; France, J

    2006-05-01

    Milk supply from Mexican dairy farms does not meet demand and small-scale farms can contribute toward closing the gap. Two multi-criteria programming techniques, goal programming and compromise programming, were used in a study of small-scale dairy farms in central Mexico. To build the goal and compromise programming models, 4 ordinary linear programming models were also developed, which had objective functions to maximize metabolizable energy for milk production, to maximize margin of income over feed costs, to maximize metabolizable protein for milk production, and to minimize purchased feedstuffs. Neither multi-criteria approach was significantly better than the other; however, by applying both models it was possible to perform a more comprehensive analysis of these small-scale dairy systems. The multi-criteria programming models affirm findings from previous work and suggest that a forage strategy based on alfalfa, ryegrass, and corn silage would meet nutrient requirements of the herd. Both models suggested that there is an economic advantage in rescheduling the calving season to the second and third calendar quarters to better synchronize higher demand for nutrients with the period of high forage availability.
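
    As a sketch of one of the four underlying single-objective LPs, the SciPy snippet below maximizes margin of income over feed costs subject to herd nutrient requirements. All coefficients and bounds are illustrative placeholders, not data from the Mexican farm study.

    ```python
    # One of the four ordinary LPs described above, in SciPy form: maximize
    # income-over-feed-cost margin subject to herd nutrient requirements.
    # Every number below is a made-up placeholder.
    import numpy as np
    from scipy.optimize import linprog

    # Decision variables: hectares of alfalfa, ryegrass, corn silage.
    margin = np.array([500.0, 350.0, 420.0])    # income minus cost per ha
    energy = np.array([30.0, 25.0, 45.0])       # ME yield per ha (GJ)
    protein = np.array([2.0, 1.5, 1.1])         # MP yield per ha (t)

    res = linprog(
        c=-margin,                              # linprog minimizes, so negate
        A_ub=-np.vstack([energy, protein]),     # ">= requirement" -> negate
        b_ub=-np.array([400.0, 25.0]),          # herd ME and MP requirements
        bounds=[(0, 10)] * 3,                   # land available per forage, ha
    )
    print(res.x, -res.fun)  # forage areas and the achieved margin
    ```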

  19. A generalized analog implementation of piecewise linear neuron models using CCII building blocks.

    PubMed

    Soleimani, Hamid; Ahmadi, Arash; Bavandpour, Mohammad; Sharifipoor, Ozra

    2014-03-01

    This paper presents a set of reconfigurable analog implementations of piecewise linear spiking neuron models using second generation current conveyor (CCII) building blocks. With the same topology and circuit elements, and without W/L modification (which is impossible after circuit fabrication), these circuits can produce different behaviors, similar to biological neurons, both for a single neuron and for a network of neurons, just by tuning reference current and voltage sources. The models are investigated in terms of analog implementation feasibility and cost, targeting large-scale hardware implementations. Results show that performance, area, and accuracy can be traded off against one another to reach the best compromise for a given application. Simulation results are presented for different neuron behaviors with CMOS 350 nm technology. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Dynamics of f(R) gravity models and asymmetry of time

    NASA Astrophysics Data System (ADS)

    Verma, Murli Manohar; Yadav, Bal Krishna

    We solve the field equations of modified gravity for an f(R) model in the metric formalism. Further, we obtain the fixed points of the dynamical system in a phase-space analysis of f(R) models, both with and without the effects of radiation. The stability of these points is studied against perturbations in a smooth spatial background by applying conditions on the eigenvalues of the matrix obtained from the linearized first-order differential equations. Following this, these fixed points are used for analyzing the dynamics of the system during the radiation-, matter- and acceleration-dominated phases of the universe. Certain linear and quadratic forms of f(R) are determined from geometrical and physical considerations, and the behavior of the scale factor is found for those forms. Further, we also determine the Hubble parameter H(t), the Ricci scalar R and the scale factor a(t) for these cosmic phases. We show the emergence of an asymmetry of time from the dynamics of the scalar field exclusively owing to the f(R) gravity in the Einstein frame, which may lead to an arrow of time at the classical level.
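
    The stability test described here is the standard linearized one: evaluate the Jacobian of the autonomous system at each fixed point and inspect the signs of the real parts of its eigenvalues. A generic numerical version, with a placeholder flow rather than the actual f(R) phase-space equations, is:

    ```python
    # Linearized stability of fixed points of x' = f(x): eigenvalues of the
    # Jacobian decide attractor / repeller / saddle. f is a placeholder flow.
    import numpy as np

    def jacobian(f, x0, eps=1e-6):
        """Numerical Jacobian of f at x0."""
        n = len(x0)
        J = np.empty((n, n))
        f0 = np.asarray(f(x0))
        for j in range(n):
            xp = np.array(x0, float)
            xp[j] += eps
            J[:, j] = (np.asarray(f(xp)) - f0) / eps
        return J

    def classify_fixed_point(f, x0):
        eig = np.linalg.eigvals(jacobian(f, x0))
        if np.all(eig.real < 0):
            return "stable (attractor)"
        if np.all(eig.real > 0):
            return "unstable (repeller)"
        return "saddle or marginal"

    # Example with a placeholder 2D flow whose fixed point is (0.5, 0.5):
    f = lambda x: np.array([x[0] * (1 - x[0]) - x[0] * x[1],
                            x[0] * x[1] - 0.5 * x[1]])
    print(classify_fixed_point(f, np.array([0.5, 0.5])))   # -> stable (attractor)
    ```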

  1. Application of a linear spectral model to the study of Amazonian squall lines during GTE/ABLE 2B

    NASA Technical Reports Server (NTRS)

    Silva Dias, Maria A. F.; Ferreira, Rosana N.

    1992-01-01

    A linear nonhydrostatic spectral model is run with the basic-state, or large-scale, vertical profiles of temperature and wind observed prior to convective development along the northern coast of South America during GTE/ABLE 2B. The model produces unstable modes with mesoscale wavelength and propagation speed comparable to observed Amazonian squall lines. Several tests with different vertical profiles of low-level winds lead to the conclusion that a shallow and/or weak low-level jet either does not produce a scale selection or, if it does, the selected mode is stationary, indicating the absence of a propagating disturbance. A 700-mbar jet of 13 m/s, with a 600-mbar wind speed greater than or equal to 10 m/s, is enough to produce unstable modes with propagating features resembling those of observed Amazonian squall lines. However, a deep layer of moderate winds (about 10 m/s) may produce similar results even in the absence of a low-level wind maximum. The implications for short-term weather forecasting are discussed.

  2. Fast Algorithms for Mining Co-evolving Time Series

    DTIC Science & Technology

    2011-09-01

    Keogh et al., 2001, 2004] and (b) forecasting, like an autoregressive integrated moving average model (ARIMA) and related methods [Box et al., 1994...computing hardware? We develop models to mine time series with missing values, to extract compact representations from time sequences, to segment the...sequences, and to do forecasting. For large scale data, we propose algorithms for learning time series models, in particular, including Linear Dynamical

  3. A perturbation analysis of a mechanical model for stable spatial patterning in embryology

    NASA Astrophysics Data System (ADS)

    Bentil, D. E.; Murray, J. D.

    1992-12-01

    We investigate a mechanical cell-traction mechanism that generates stationary spatial patterns. A linear analysis highlights the model's potential for these heterogeneous solutions. We use multiple-scale perturbation techniques to study the evolution of these solutions and compare our solutions with numerical simulations of the model system. We discuss some potential biological applications among which are the formation of ridge patterns, dermatoglyphs, and wound healing.

  4. Multiscale Constitutive Modeling of Asphalt Concrete

    NASA Astrophysics Data System (ADS)

    Underwood, Benjamin Shane

    Multiscale modeling of asphalt concrete has become a popular technique for gaining improved insight into the physical mechanisms that affect the material's behavior and ultimately its performance. This type of modeling considers asphalt concrete not as a homogeneous mass, but rather as an assemblage of materials at different characteristic length scales. For proper modeling these characteristic scales should be functionally definable and should have known properties. Thus far, research in this area has not focused significant attention on functionally defining what the characteristic scales within asphalt concrete should be. Instead, many have made assumptions about the characteristic scales, and even the characteristic behaviors of these scales, with little to no support. This research addresses these shortcomings by directly evaluating the microstructure of the material and uses these results to create materials of the different characteristic length scales as they exist within the asphalt concrete mixture. The objectives of this work are to: (1) develop mechanistic models for the linear viscoelastic (LVE) and damage behaviors of asphalt concrete at different length scales, and (2) develop a mechanistic, mechanistic/empirical, or phenomenological formulation to link the different length scales into a model capable of predicting the effects of microstructural changes on the linear viscoelastic behaviors of the asphalt concrete mixture, i.e., a microstructure association model for asphalt concrete mixture. Through the microstructural study it is found that asphalt concrete mixture can be considered as a build-up of three different phases: asphalt mastic, fine aggregate matrix (FAM), and the coarse aggregate particles. The asphalt mastic is found to exist as a homogeneous material throughout the mixture and the FAM, and the filler content within this material is consistent with the volumetrically averaged concentration, which can be calculated from the job mix formula. It is also found that the maximum aggregate size of the FAM is mixture dependent, but consistent with a gradation parameter from the Bailey Method of mixture design. Mechanistic modeling of these different length scales reveals that although many consider asphalt concrete to be an LVE material, it is in fact only quasi-LVE, because it shows some tendencies that are inconsistent with LVE theory. Asphalt FAM and asphalt mastic show similar nonlinear tendencies, although the exact magnitude of the effect differs. These tendencies can be ignored for damage modeling at the mixture and FAM scales as long as the effects are consistently ignored, but it is found that they must be accounted for in mastic and binder damage modeling. The viscoelastic continuum damage (VECD) model is used for damage modeling in this research. To aid in characterization and application of the VECD model for cyclic testing, a simplified version (S-VECD) is rigorously derived and verified. Through the modeling efforts at each scale, various factors affecting the fundamental and engineering properties at each scale are observed and documented. A microstructure association model that accounts for particle interaction through physico-chemical processes and for the effects of aggregate structuralization is developed to link the moduli at each scale. This model is shown to be capable of upscaling the mixture modulus from either the experimentally determined mastic modulus or the FAM modulus. Finally, an initial attempt at upscaling the damage and nonlinearity phenomena is shown.

  5. An iteratively reweighted least-squares approach to adaptive robust adjustment of parameters in linear regression models with autoregressive and t-distributed deviations

    NASA Astrophysics Data System (ADS)

    Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza

    2018-03-01

    In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degree of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
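
    Stripped of the AR colored-noise part and of the estimation of the degree of freedom, the reweighting loop at the core of such an ECME scheme reduces to a few lines. The sketch below assumes a fixed t-distribution degree of freedom and plain (uncorrelated) errors:

    ```python
    # Bare-bones iteratively reweighted least squares for regression with
    # scaled t-distributed errors; the AR model and the degree-of-freedom
    # update of the full ECME algorithm are deliberately omitted.
    import numpy as np

    def irls_t(X, y, nu=4.0, n_iter=50):
        """Robust ML fit of y = X b + e, e ~ scaled t with fixed dof nu."""
        b, *_ = np.linalg.lstsq(X, y, rcond=None)     # OLS start
        for _ in range(n_iter):
            r = y - X @ b
            sigma2 = np.mean(r**2)
            w = (nu + 1.0) / (nu + r**2 / sigma2)     # downweight outliers
            sw = np.sqrt(w)
            b, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
        return b
    ```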

  6. Approximate reduction of linear population models governed by stochastic differential equations: application to multiregional models.

    PubMed

    Sanz, Luis; Alonso, Juan Antonio

    2017-12-01

    In this work we develop approximate aggregation techniques in the context of slow-fast linear population models governed by stochastic differential equations, and apply the results to the treatment of populations with spatial heterogeneity. Approximate aggregation techniques allow one to transform a complex system, involving many coupled variables and processes with different time scales, into a simpler reduced model with fewer 'global' variables, in such a way that the dynamics of the former can be approximated by that of the latter. In our model we contemplate a linear fast deterministic process together with a linear slow process in which the parameters are affected by additive noise, and give conditions for the solutions corresponding to positive initial conditions to remain positive for all times. By letting the fast process reach equilibrium we build a reduced system with fewer variables, and provide results relating the asymptotic behaviour of the first- and second-order moments of the population vector for the original and the reduced system. The general technique is illustrated by analysing a multiregional stochastic system in which dispersal is deterministic and the growth rate of the population in each patch is affected by additive noise.

  7. The development of neutrino-driven convection in core-collapse supernovae: 2D vs 3D

    NASA Astrophysics Data System (ADS)

    Kazeroni, R.; Krueger, B. K.; Guilet, J.; Foglizzo, T.

    2017-12-01

    A toy model is used to study the non-linear conditions for the development of neutrino-driven convection in the post-shock region of core-collapse supernovae. Our numerical simulations show that a buoyant non-linear perturbation is able to trigger self-sustained convection only in cases where convection is not linearly stabilized by advection. Several arguments proposed to interpret the impact of the dimensionality on global core-collapse supernova simulations are discussed in the light of our model. The influence of the numerical resolution is also addressed. In 3D a strong mixing to small scales induces an increase of the neutrino heating efficiency in a runaway process. This phenomenon is absent in 2D and this may indicate that the tridimensional nature of the hydrodynamics could foster explosions.

  8. Improvements in mode-based waveform modeling and application to Eurasian velocity structure

    NASA Astrophysics Data System (ADS)

    Panning, M. P.; Marone, F.; Kim, A.; Capdeville, Y.; Cupillard, P.; Gung, Y.; Romanowicz, B.

    2006-12-01

    We introduce several recent improvements to mode-based 3D and asymptotic waveform modeling and examine how to integrate them with numerical approaches for an improved model of upper-mantle structure under eastern Eurasia. The first step in our approach is to create a large-scale starting model including shear anisotropy using Nonlinear Asymptotic Coupling Theory (NACT; Li and Romanowicz, 1995), which models the 2D sensitivity of the waveform to the great-circle path between source and receiver. We have recently improved this approach by implementing new crustal corrections which include a non-linear correction for the difference between the average structure of several large regions from the global model with further linear corrections to account for the local structure along the path between source and receiver (Marone and Romanowicz, 2006; Panning and Romanowicz, 2006). This model is further refined using a 3D implementation of Born scattering (Capdeville, 2005). We have made several recent improvements to this method, in particular introducing the ability to represent perturbations to discontinuities. While the approach treats all sensitivity as linear perturbations to the waveform, we have also experimented with a non-linear modification analogous to that used in the development of NACT. This allows us to treat large accumulated phase delays determined from a path-average approximation non-linearly, while still using the full 3D sensitivity of the Born approximation. Further refinement of shallow regions of the model is obtained using broadband forward finite-difference waveform modeling. We are also integrating a regional Spectral Element Method code into our tomographic modeling, allowing us to move beyond many assumptions inherent in the analytic mode-based approaches, while still taking advantage of their computational efficiency. Illustrations of the effects of these increasingly sophisticated steps will be presented.

  9. Energy conserving, linear scaling Born-Oppenheimer molecular dynamics.

    PubMed

    Cawkwell, M J; Niklasson, Anders M N

    2012-10-07

    Born-Oppenheimer molecular dynamics simulations with long-term conservation of the total energy and a computational cost that scales linearly with system size have been obtained simultaneously. Linear scaling with a low pre-factor is achieved using density matrix purification with sparse matrix algebra and a numerical threshold on matrix elements. The extended Lagrangian Born-Oppenheimer molecular dynamics formalism [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] yields microcanonical trajectories with the approximate forces obtained from the linear scaling method that exhibit no systematic drift over hundreds of picoseconds and which are indistinguishable from trajectories computed using exact forces.
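
    As an illustration of the linear scaling ingredients named here, the classic McWeeny purification iteration with element thresholding is shown below. The paper's production scheme may differ in detail, and the dense NumPy products shown would be replaced by sparse matrix algebra, which is what actually yields linear scaling for sufficiently local systems.

    ```python
    # McWeeny purification with a numerical threshold on matrix elements.
    # P0 would be built, e.g., from a shifted and scaled Hamiltonian so that
    # its eigenvalues lie in [0, 1]; dense products are used here only for
    # illustration (sparse algebra gives the linear scaling).
    import numpy as np

    def mcweeny_purify(P, tol=1e-6, threshold=1e-8, max_iter=100):
        """Drive a trial density matrix toward idempotency via P -> 3P^2 - 2P^3."""
        for _ in range(max_iter):
            P2 = P @ P
            if np.linalg.norm(P2 - P) < tol:      # idempotency reached: P^2 == P
                break
            P = 3.0 * P2 - 2.0 * (P2 @ P)
            P[np.abs(P) < threshold] = 0.0        # thresholding preserves sparsity
        return P
    ```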

  10. Schwarzschild and linear potentials in Mannheim's model of conformal gravity

    NASA Astrophysics Data System (ADS)

    Phillips, Peter R.

    2018-05-01

    We study the equations of conformal gravity, as given by Mannheim, in the weak field limit, so that a linear approximation is adequate. Specialising to static fields with spherical symmetry, we obtain a second-order equation for one of the metric functions. We obtain the Green function for this equation, and represent the metric function in the form of integrals over the source. Near a compact source such as the Sun the solution no longer has a form that is compatible with observations. We conclude that a solution of Mannheim type (a Schwarzschild term plus a linear potential of galactic scale) cannot exist for these field equations.

  11. Experimental Analysis of the Vorticity and Turbulent Flow Dynamics of a Pitching Airfoil at Realistic Flight Conditions

    DTIC Science & Technology

    2007-08-31

    Element type: Hex, independent meshing, Linear 3D stress; Hex, independent meshing, Linear 3D stress. English units were used in ABAQUS. The NACA...Flow Freestream Condition Instrumentation: Test section conditions were measured using a Druck DPI 203 digital pressure gage and an Omega Model 199...temperature gage. The Druck pressure gage measures the set dynamic pressure to within ±0.08% of full scale, and the Omega thermometer is accurate to

  12. COSMOS: STOCHASTIC BIAS FROM MEASUREMENTS OF WEAK LENSING AND GALAXY CLUSTERING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jullo, Eric; Rhodes, Jason; Kiessling, Alina

    2012-05-01

    In the theory of structure formation, galaxies are biased tracers of the underlying matter density field. The statistical relation between the galaxy and matter density fields is commonly referred to as galaxy bias. In this paper, we test the linear bias model with weak-lensing and galaxy clustering measurements in the 2 deg² COSMOS field. We estimate the bias of galaxies between redshifts z = 0.2 and z = 1 and over correlation scales between R = 0.2 h⁻¹ Mpc and R = 15 h⁻¹ Mpc. We focus on three galaxy samples, selected in flux (simultaneous cuts I_814W < 26.5 and K_s < 24) and in stellar mass (10⁹ < M_* < 10¹⁰ h⁻² M_⊙ and 10¹⁰ < M_* < 10¹¹ h⁻² M_⊙). At scales R > 2 h⁻¹ Mpc, our measurements support a model of bias increasing with redshift. The Tinker et al. fitting function provides a good fit to the data. We find the best-fit mass of the galaxy halos to be log(M_200/h⁻¹ M_⊙) = 11.7 (+0.6/-1.3) and log(M_200/h⁻¹ M_⊙) = 12.4 (+0.2/-2.9), respectively, for the low and high stellar-mass samples. In the halo model framework, bias is scale dependent with a change of slope at the transition scale between the one- and two-halo terms. We detect a scale dependence of bias with a turndown at scale R = 2.3 ± 1.5 h⁻¹ Mpc, in agreement with previous galaxy clustering studies. We find no significant amount of stochasticity, suggesting that a linear bias model is sufficient to describe our data. We use N-body simulations to quantify both the amount of cosmic variance and systematic errors in the measurement.

  13. Projection-Based Reduced Order Modeling for Spacecraft Thermal Analysis

    NASA Technical Reports Server (NTRS)

    Qian, Jing; Wang, Yi; Song, Hongjun; Pant, Kapil; Peabody, Hume; Ku, Jentung; Butler, Charles D.

    2015-01-01

    This paper presents a mathematically rigorous, subspace projection-based reduced order modeling (ROM) methodology and an integrated framework to automatically generate reduced order models for spacecraft thermal analysis. Two key steps in the reduced order modeling procedure are described: (1) the acquisition of a full-scale spacecraft model in the ordinary differential equation (ODE) and differential algebraic equation (DAE) form to resolve its dynamic thermal behavior; and (2) the ROM to markedly reduce the dimension of the full-scale model. Specifically, proper orthogonal decomposition (POD) in conjunction with discrete empirical interpolation method (DEIM) and trajectory piece-wise linear (TPWL) methods are developed to address the strong nonlinear thermal effects due to coupled conductive and radiative heat transfer in the spacecraft environment. Case studies using NASA-relevant satellite models are undertaken to verify the capability and to assess the computational performance of the ROM technique in terms of speed-up and error relative to the full-scale model. ROM exhibits excellent agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) along with salient computational acceleration (up to two orders of magnitude speed-up) over the full-scale analysis. These findings establish the feasibility of ROM to perform rational and computationally affordable thermal analysis, develop reliable thermal control strategies for spacecraft, and greatly reduce the development cycle times and costs.
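
    The POD step itself is compact: collect solution snapshots, extract a basis by singular value decomposition, and Galerkin-project the system operators. The sketch below handles only a linear(ized) system dT/dt = A T + b; the DEIM/TPWL treatment of the nonlinear radiative terms is omitted.

    ```python
    # Proper orthogonal decomposition and Galerkin projection for a linear
    # thermal system dT/dt = A T + b; a minimal sketch, not the paper's code.
    import numpy as np

    def pod_basis(snapshots, r):
        """snapshots: (n_dof, n_snapshots) array of full-state solutions."""
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        return U[:, :r]                        # r dominant spatial modes

    def project_linear_system(A, b, Phi):
        """Reduced operators: z' = (Phi^T A Phi) z + Phi^T b, with T ~ Phi z."""
        return Phi.T @ A @ Phi, Phi.T @ b
    ```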

  14. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    NASA Astrophysics Data System (ADS)

    Helman, E. Udi

    This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using a DC load flow approximation). Chapter 9 shows the price results. In contrast to prior market power simulations of these markets, much greater variability in price-cost margins is found when using a realistic model of hourly conditions on such a large network. Chapter 10 shows that the conventional concentration indices (HHIs) are poorly correlated with PCMs. Finally, Chapter 11 proposes that the simulation models are applied to merger analysis and provides two large-scale merger examples. (Abstract shortened by UMI.)

  15. Exercise Prescription Using a Group-Normalized Rating of Perceived Exertion in Adolescents and Adults With Spina Bifida.

    PubMed

    Crytzer, Theresa M; Keramati, Mariam; Anthony, Steven J; Cheng, Yu-Ting; Robertson, Robert J; Dicianno, Brad E

    2018-02-03

    People with spina bifida (SB) face personal and environmental barriers to exercise that contribute to physical inactivity, obesity, risk of cardiovascular disease, and poor aerobic fitness. The WHEEL rating of perceived exertion (RPE) Scale was validated in people with SB to monitor exercise intensity. However, the psycho-physiological link between RPE and the ventilatory breakpoint (Vpt), the group-normalized perceptual response, has not been determined; it would provide a starting point for aerobic exercise in this cohort. The primary objectives were to determine the group-normalized RPE equivalent to Vpt based on WHEEL and Borg Scale ratings and to develop a regression model to predict the Borg Scale (conditional metric) from the WHEEL Scale (criterion metric). The secondary objective was to create a table of interchangeable values between WHEEL and Borg Scale RPE for people with SB performing a load-incremented stress test. Design: cross-sectional observational study. Setting: university laboratory. Participants: 29 participants with SB. Participants completed a load-incremented arm ergometer exercise stress test. WHEEL and Borg Scale ratings were recorded during the last 15 seconds of each 1-minute test phase; metabolic measures (e.g., oxygen consumption, carbon dioxide production) were also collected. Vpt was determined via plots of oxygen consumption and carbon dioxide production against time. Nineteen of the 29 participants achieved Vpt (Group A). The mean ± standard deviation peak oxygen consumption at Vpt for Group A was 61.76 ± 16.26. The WHEEL and Borg Scale RPE at Vpt were 5.74 ± 2.58 (range 0-10) and 13.95 ± 3.50 (range 6-19), respectively. A significant linear regression model was developed (Borg Scale rating = 1.22 × WHEEL Scale rating + 7.14) and used to create a WHEEL-to-Borg Scale RPE conversion table. A significant linear regression model and table of interchangeable values were developed for participants with SB. The group-normalized RPE (WHEEL, 5.74; Borg, 13.95) can be used to prescribe and self-regulate arm ergometer exercise intensity approximating the Vpt. Level of evidence: II. Copyright © 2018. Published by Elsevier Inc.
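
    Because the regression is published in closed form, the conversion table can be reproduced directly; for example, the group-normalized WHEEL rating of 5.74 maps to 1.22 × 5.74 + 7.14 ≈ 14.1, close to the observed Borg rating of 13.95 at Vpt.

    ```python
    # WHEEL-to-Borg conversion from the published regression
    # (Borg = 1.22 * WHEEL + 7.14), tabulated over the 0-10 WHEEL range.
    for wheel in range(0, 11):
        print(f"WHEEL {wheel:2d}  ->  Borg {1.22 * wheel + 7.14:.1f}")
    ```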

  16. Using HLM to Explore the Effects of Perceptions of Learning Environments and Assessments on Students' Test Performance

    ERIC Educational Resources Information Center

    Chu, Man-Wai; Babenko, Oksana; Cui, Ying; Leighton, Jacqueline P.

    2014-01-01

    The study examines the role that perceptions or impressions of learning environments and assessments play in students' performance on a large-scale standardized test. Hierarchical linear modeling (HLM) was used to test aspects of the Learning Errors and Formative Feedback model to determine how much variation in students' performance was explained…

  17. Advancing Blade Concept (ABC) Technology Demonstrator

    DTIC Science & Technology

    1981-04-01

    simulated 40-knot full-scale speed were conducted in Phase 0 on the Princeton dynamic model track (Reference 7). Forward flight tests to a...laterally and longitudinally but also to control the thrust sharing between the rotors are presented in Figure 28. Phase II Tests: This model test phase...were rigged to the required values. Control system linearity and hysteresis tests were conducted to determine

  18. Heuristic Model Of The Composite Quality Index Of Environmental Assessment

    NASA Astrophysics Data System (ADS)

    Khabarov, A. N.; Knyaginin, A. A.; Bondarenko, D. V.; Shepet, I. P.; Korolkova, L. N.

    2017-01-01

    The goal of this paper is to present a heuristic model of a composite environmental quality index based on the integrated application of elements of utility theory, multidimensional scaling, expert evaluation and decision-making. The composite index is synthesized in linear-quadratic form; this form more adequately reflects the assessment preferences of experts and decision-makers.

  19. On the Scaling Law for Broadband Shock Noise Intensity in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Kandula, Max

    2009-01-01

    A theoretical model for the scaling of broadband shock noise intensity in supersonic jets was formulated on the basis of linear shock-shear wave interaction. A hypothesis has been postulated that the peak angle of incidence (closer to the critical angle) for the shear wave primarily governs the generation of sound in the interaction process, rather than the noise generation contribution from off-peak incident angles. The proposed theory satisfactorily explains the well-known scaling law for the broadband shock-associated noise in supersonic jets.

  20. HYDRORECESSION: A toolbox for streamflow recession analysis

    NASA Astrophysics Data System (ADS)

    Arciniega, S.

    2015-12-01

    Streamflow recession curves are hydrological signatures that allow one to study the relationship between groundwater storage and baseflow and/or low flows at the catchment scale. Recent studies have shown that streamflow recession analysis can be quite sensitive to the combination of different models, extraction techniques and parameter estimation methods. In order to better characterize streamflow recession curves, new methodologies combining multiple approaches have been recommended. The HYDRORECESSION toolbox, presented here, is a Matlab graphical user interface developed to analyse streamflow recession time series, with tools for parameterizing linear and nonlinear storage-outflow relationships through four of the most useful recession models (Maillet, Boussinesq, Coutagne and Wittenberg). The toolbox includes four parameter-fitting techniques (linear regression, lower envelope, data binning and mean squared error) and three different methods to extract hydrograph recession segments (Vogel, Brutsaert and Aksoy). In addition, the toolbox has a module that separates the baseflow component from the observed hydrograph using the inverse reservoir algorithm. Potential applications of HYDRORECESSION include model parameter analysis, hydrological regionalization and classification, baseflow index estimation, and catchment-scale recharge and low-flow modelling, among others. HYDRORECESSION is freely available for non-commercial and academic purposes.
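
    The core computation such a toolbox automates is easy to state: fit the storage-outflow relation -dQ/dt = aQ^b to an extracted recession segment. A minimal log-log regression version, one of several parameter-fitting techniques the toolbox offers, is:

    ```python
    # Fit -dQ/dt = a * Q^b to a single recession segment by linear regression
    # in log-log space; b = 1 corresponds to a linear (Maillet) reservoir.
    import numpy as np

    def fit_recession(q, dt=1.0):
        """q: discharge samples from one recession segment (monotone decreasing)."""
        dqdt = np.diff(q) / dt
        qm = 0.5 * (q[:-1] + q[1:])             # midpoint discharge
        mask = dqdt < 0                         # keep strictly receding points
        slope, intercept = np.polyfit(np.log(qm[mask]), np.log(-dqdt[mask]), 1)
        return np.exp(intercept), slope         # parameters a, b
    ```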

  1. The minimal axion minimal linear σ model

    NASA Astrophysics Data System (ADS)

    Merlo, L.; Pobbe, F.; Rigolin, S.

    2018-05-01

    The minimal SO(5)/SO(4) linear σ model is extended by including an additional complex scalar field, singlet under the global SO(5) and the Standard Model gauge symmetries. The presence of this scalar field creates the conditions to generate an axion à la KSVZ, providing a solution to the strong CP problem, or an axion-like particle. Different choices for the PQ charges are possible and lead to physically distinct Lagrangians. The internal consistency of each model necessarily requires the study of the scalar potential describing the SO(5) → SO(4), electroweak and PQ symmetry breaking. A single minimal scenario is identified and the associated scalar potential is minimised, including the counterterms needed to ensure one-loop renormalizability. In the allowed parameter space, phenomenological features of the scalar degrees of freedom, of the exotic fermions and of the axion are illustrated. Two distinct possibilities for the axion arise: either it is a QCD axion with an associated scale larger than ~10⁵ TeV, and therefore falling into the category of invisible axions; or it is a more massive axion-like particle, such as a 1 GeV axion with an associated scale of ~200 TeV, that may show up in collider searches.

  2. Atmospheric planetary wave response to external forcing

    NASA Technical Reports Server (NTRS)

    Stevens, D. E.; Reiter, E. R.

    1985-01-01

    The tools of observational analysis, complex general circulation modeling, and simpler modeling approaches were combined in order to attack problems on the largest spatial scales of the earth's atmosphere. Two different models were developed and applied. The first is a two-level, global spectral model which was designed primarily to test the effects of north-south sea surface temperature anomaly (SSTA) gradients between the equatorial and midlatitude north Pacific. The model is nonlinear, contains both radiation and a moisture budget with associated precipitation and surface evaporation, and utilizes a linear balance dynamical framework. Supporting observational analysis of atmospheric planetary waves is briefly summarized. More extensive general circulation models have also been used to consider the problem of the atmosphere's response, especially the horizontal propagation of planetary-scale waves, to SSTAs.

  3. Modeling turbulent energy behavior and sudden viscous dissipation in compressing plasma turbulence

    DOE PAGES

    Davidovits, Seth; Fisch, Nathaniel J.

    2017-12-21

    Here, we present a simple model for the turbulent kinetic energy behavior of subsonic plasma turbulence undergoing isotropic three-dimensional compression, which may exist in various inertial confinement fusion experiments or astrophysical settings. The plasma viscosity depends on both the temperature and the ionization state, for which many possible scalings with compression are possible. For example, in an adiabatic compression the temperature scales as 1/L², with L the linear compression ratio, but if thermal energy loss mechanisms are accounted for, the temperature scaling may be weaker. As such, the viscosity has a wide range of net dependencies on the compression. The model presented here, with no parameter changes, agrees well with numerical simulations for a range of these dependencies. This model permits the prediction of the partition of injected energy between thermal and turbulent energy in a compressing plasma.

  5. Tracking Electroencephalographic Changes Using Distributions of Linear Models: Application to Propofol-Based Depth of Anesthesia Monitoring.

    PubMed

    Kuhlmann, Levin; Manton, Jonathan H; Heyse, Bjorn; Vereecke, Hugo E M; Lipping, Tarmo; Struys, Michel M R F; Liley, David T J

    2017-04-01

    Tracking brain states with electrophysiological measurements often relies on short-term averages of extracted features, and this may not adequately capture the variability of brain dynamics. The objective is to assess the hypotheses that this can be overcome by tracking distributions of linear models using anesthesia data, and that the anesthetic brain state tracking performance of linear models is comparable to that of a high-performing depth of anesthesia monitoring feature. Individuals' brain states are classified by comparing the distribution of linear autoregressive moving average (ARMA) model parameters estimated from electroencephalographic (EEG) data obtained with a sliding window to distributions of linear model parameters for each brain state. The method is applied to frontal EEG data from 15 subjects undergoing propofol anesthesia and classified by the observer's assessment of alertness/sedation (OAA/S) scale. Classification of the OAA/S score was performed using distributions of either ARMA parameters or the benchmark feature, Higuchi fractal dimension. The highest average testing sensitivity of 59% (chance sensitivity: 17%) was found for ARMA(2,1) models, while Higuchi fractal dimension achieved 52%; however, no statistical difference was observed. For the same ARMA case, there was no statistical difference if medians are used instead of distributions (sensitivity: 56%). The model-based distribution approach is not necessarily more effective than a median/short-term average approach; however, it performs well compared with a distribution approach based on a high-performing anesthesia monitoring measure. These techniques hold potential for anesthesia monitoring and may be generally applicable for tracking brain states.
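
    The feature-extraction step can be sketched with standard tools. The statsmodels call below stands in for the paper's estimator, and the nearest-centroid rule is a deliberate simplification of comparing full parameter distributions:

    ```python
    # Sliding-window ARMA(2,1) feature extraction from an EEG channel, followed
    # by a simple nearest-centroid state assignment (a stand-in for the paper's
    # distribution comparison). Window lengths and centroids are hypothetical.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    def arma_features(eeg, fs, win_s=2.0, step_s=1.0, order=(2, 0, 1)):
        win, step = int(win_s * fs), int(step_s * fs)
        feats = []
        for start in range(0, len(eeg) - win, step):
            fit = ARIMA(eeg[start:start + win], order=order).fit()
            feats.append(fit.params)     # AR/MA coefficients plus noise variance
        return np.array(feats)

    def classify(feats, centroids):
        """centroids: dict mapping OAA/S state -> mean parameter vector."""
        states = list(centroids)
        d = [np.linalg.norm(feats - centroids[s], axis=1) for s in states]
        return [states[i] for i in np.argmin(np.vstack(d), axis=0)]
    ```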

  6. Scaling laws for mixing and dissipation in unforced rotating stratified turbulence

    NASA Astrophysics Data System (ADS)

    Pouquet, A.; Rosenberg, D.; Marino, R.; Herbert, C.

    2018-06-01

    We present a model for the scaling of mixing in weakly rotating stratified flows characterized by their Rossby, Froude and Reynolds numbers Ro, Fr, Re. It is based on quasi-equipartition between kinetic and potential modes, sub-dominant vertical velocity, and lessening of the energy transfer to small scales as measured by the ratio r_E of the kinetic energy dissipation to its dimensional expression. We determine their domains of validity in a numerical study of the unforced Boussinesq equations, mostly on grids of 1024³ points, with Ro/Fr > 2.5 and 1600 < Re < 1.9×10⁴; the Prandtl number is one, and initial conditions are either isotropic and at large scale for the velocity and zero for the temperature θ, or in geostrophic balance. Three regimes in Fr are observed: dominant waves, eddy-wave interactions and strong turbulence. A wave-turbulence balance for the transfer time leads to r_E growing linearly with Fr in the intermediate regime, with a saturation at ≈0.3 or more, depending on initial conditions, for larger Froude numbers. The Ellison scale is also found to scale linearly with Fr, and the flux Richardson number R_f transitions for roughly the same parameter values as well. Putting together the three relationships of the model allows for the prediction of mixing efficiency scaling as Fr⁻² ~ R_B⁻¹ in the low and intermediate regimes, whereas for higher Fr it scales as R_B^(-1/2), as already observed: as turbulence strengthens, r_E ≈ 1, the velocity is isotropic, and smaller buoyancy fluxes altogether correspond to a decoupling of velocity and temperature fluctuations, the latter becoming passive.

  7. Mountain plover population responses to black-tailed prairie dogs in Montana

    USGS Publications Warehouse

    Dinsmore, S.J.; White, Gary C.; Knopf, F.L.

    2005-01-01

    We studied a local population of mountain plovers (Charadrius montanus) in southern Phillips County, Montana, USA, from 1995 to 2000 to estimate annual rates of recruitment (f) and population change (λ). We used Pradel models, and we modeled λ as a constant across years, as a linear time trend, as year-specific, and with an additive effect of area occupied by prairie dogs (Cynomys ludovicianus). We modeled recruitment rate (f) as a function of area occupied by prairie dogs with the remaining model structure identical to the best model used to estimate λ. Our results indicated a strong negative effect of area occupied by prairie dogs on both λ (slope coefficient on a log scale was -0.11; 95% CI was -0.17, -0.05) and f (slope coefficient on a logit scale was -0.23; 95% CI was -0.36, -0.10). We also found good evidence for a negative time trend on λ; this model had substantial weight (w_i = 0.31), and the slope coefficient of the linear trend on a log scale was -0.10 (95% CI was -0.15, -0.05). Yearly estimates of λ were >1 in all years except 1999, indicating that the population initially increased and then stabilized in the last year of the study. We found weak evidence for year-specific estimates of λ; the best model with year-specific estimates had a low weight (w_i = 0.02), although the pattern of yearly estimates of λ closely matched those estimated with a linear time trend. In southern Phillips County, the population trend of mountain plovers closely matched the trend in the area occupied by black-tailed prairie dogs. Black-tailed prairie dogs declined sharply in the mid-1990s in response to an outbreak of sylvatic plague, but their numbers have steadily increased since 1996, in concert with increases in plovers. The results of this study (1) increase our understanding of the dynamics of this population and how they relate to the area occupied by prairie dogs, and (2) will be useful for planning plover conservation in a prairie dog ecosystem.

  8. Modes and emergent time scales of embayed beach dynamics

    NASA Astrophysics Data System (ADS)

    Ratliff, Katherine M.; Murray, A. Brad

    2014-10-01

    In this study, we use a simple numerical model (the Coastline Evolution Model) to explore alongshore transport-driven shoreline dynamics within generalized embayed beaches (neglecting cross-shore effects). Using principal component analysis (PCA), we identify two primary orthogonal modes of shoreline behavior that describe shoreline variation about its unchanging mean position: the rotation mode, which has been previously identified and describes changes in the mean shoreline orientation, and a newly identified breathing mode, which represents changes in shoreline curvature. Wavelet analysis of the PCA mode time series reveals characteristic time scales of these modes (typically years to decades) that emerge within even a statistically constant white-noise wave climate (without changes in external forcing), suggesting that these time scales can arise from internal system dynamics. The time scales of both modes increase linearly with shoreface depth, suggesting that the embayed beach sediment transport dynamics exhibit a diffusive scaling.
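
    The PCA step is a plain singular value decomposition of shoreline-position anomalies; the leading two empirical modes correspond to rotation and breathing, and their time series (columns of the amplitude matrix) are what the wavelet analysis is then applied to. A minimal version, assuming the shoreline is sampled at fixed alongshore locations:

    ```python
    # PCA of shoreline variability: rows are time snapshots, columns are
    # alongshore positions. The leading modes are candidate rotation and
    # breathing patterns; array shapes are hypothetical.
    import numpy as np

    def shoreline_modes(positions):
        """positions: (n_times, n_alongshore) array of shoreline coordinates."""
        anomalies = positions - positions.mean(axis=0)
        U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
        variance_frac = s**2 / np.sum(s**2)     # explained-variance fractions
        amplitudes = U * s                      # mode time series
        return Vt, variance_frac, amplitudes    # spatial modes come first
    ```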

  9. Cosmological tests of modified gravity.

    PubMed

    Koyama, Kazuya

    2016-04-01

    We review recent progress in the construction of modified gravity models as alternatives to dark energy as well as the development of cosmological tests of gravity. Einstein's theory of general relativity (GR) has been tested accurately within the local universe, i.e. the Solar System, but this leaves the possibility open that it is not a good description of gravity at the largest scales in the Universe. This being said, the standard model of cosmology assumes GR on all scales. In 1998, astronomers made the surprising discovery that the expansion of the Universe is accelerating, not slowing down. This late-time acceleration of the Universe has become the most challenging problem in theoretical physics. Within the framework of GR, the acceleration would originate from an unknown dark energy. Alternatively, it could be that there is no dark energy and GR itself is in error on cosmological scales. In this review, we first give an overview of recent developments in modified gravity theories including f(R) gravity, braneworld gravity, Horndeski theory and massive/bigravity theory. We then focus on common properties these models share, such as screening mechanisms they use to evade the stringent Solar System tests. Once armed with a theoretical knowledge of modified gravity models, we move on to discuss how we can test modifications of gravity on cosmological scales. We present tests of gravity using linear cosmological perturbations and review the latest constraints on deviations from the standard ΛCDM model. Since screening mechanisms leave distinct signatures in the non-linear structure formation, we also review novel astrophysical tests of gravity using clusters, dwarf galaxies and stars. The last decade has seen a number of new constraints placed on gravity from astrophysical to cosmological scales. Thanks to on-going and future surveys, cosmological tests of gravity will enjoy another, possibly even more, exciting ten years.

  10. Formation of large-scale structure from cosmic strings and massive neutrinos

    NASA Technical Reports Server (NTRS)

    Scherrer, Robert J.; Melott, Adrian L.; Bertschinger, Edmund

    1989-01-01

    Numerical simulations of large-scale structure formation from cosmic strings and massive neutrinos are described. The linear power spectrum in this model resembles the cold-dark-matter power spectrum. Galaxy formation begins early, and the final distribution consists of isolated density peaks embedded in a smooth background, leading to a natural bias in the distribution of luminous matter. The distribution of clustered matter has a filamentary appearance with large voids.

  11. Macroweather Predictions and Climate Projections using Scaling and Historical Observations

    NASA Astrophysics Data System (ADS)

    Hébert, R.; Lovejoy, S.; Del Rio Amador, L.

    2017-12-01

    There are two fundamental time scales that are pertinent to decadal forecasts and multidecadal projections. The first is the lifetime of planetary-scale structures, about 10 days (equal to the deterministic predictability limit), and the second is - in the anthropocene - the scale at which the forced anthropogenic variability exceeds the internal variability (around 16-18 years). These two time scales define three regimes of variability: weather, macroweather and climate, which are respectively characterized by increasing, decreasing and then increasing variability with scale. We discuss how macroweather temperature variability can be skilfully predicted to its theoretical stochastic predictability limits by exploiting its long-range memory with the Stochastic Seasonal and Interannual Prediction System (StocSIPS). At multi-decadal timescales, the temperature response to forcing is approximately linear, and this can be exploited to make projections with a Green's function, or Climate Response Function (CRF). To make the problem tractable, we exploit the temporal scaling symmetry and restrict our attention to the global mean forcing and temperature response, using a scaling CRF characterized by the scaling exponent H and an inner scale of linearity τ. An aerosol linear scaling factor α and a non-linear volcanic damping exponent ν were introduced to account for the large uncertainty in these forcings. We estimate the model and forcing parameters by Bayesian inference using historical data; these allow us to analytically calculate a median (and likely 66% range) for the transient climate response and for the equilibrium climate sensitivity: 1.6 K ([1.5, 1.8] K) and 2.4 K ([1.9, 3.4] K), respectively. Aerosol forcing typically has large uncertainty, and we find a modern (2005) forcing very likely (90%) range of [-1.0, -0.3] W m⁻², with the median at -0.7 W m⁻². Projecting to 2100, we find that to keep the warming below 1.5 K, future emissions must undergo cuts similar to Representative Concentration Pathway (RCP) 2.6, for which the probability of remaining under 1.5 K is 48%. RCP 4.5- and RCP 8.5-like futures overshoot with very high probability. This underscores that over the next century, the state of the environment will be strongly influenced by past, present and future economic policies.
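
    Schematically, a scaling-CRF projection is a convolution of the forcing history with a power-law kernel. The kernel form and all parameter values below are illustrative stand-ins, not the paper's calibrated CRF:

    ```python
    # Schematic temperature projection: convolve a forcing series with a
    # power-law response kernel regularized below the inner scale tau.
    # Kernel form, H, tau and the sensitivity are hypothetical placeholders.
    import numpy as np

    def scaling_crf(t, H=-0.5, tau=2.0):
        """Power-law response kernel on yearly time steps t."""
        g = ((t + tau) / tau) ** (H - 1.0)
        return g / g.sum()                    # unit integrated response

    def temperature_response(forcing, H=-0.5, tau=2.0, sensitivity=0.6):
        t = np.arange(len(forcing), dtype=float)
        g = scaling_crf(t, H, tau)
        return sensitivity * np.convolve(forcing, g)[: len(forcing)]  # K per W m^-2
    ```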

  12. Are Regional Habitat Models Useful at a Local-Scale? A Case Study of Threatened and Common Insectivorous Bats in South-Eastern Australia

    PubMed Central

    McConville, Anna; Law, Bradley S.; Mahony, Michael J.

    2013-01-01

    Habitat modelling and predictive mapping are important tools for conservation planning, particularly for lesser-known species such as many insectivorous bats. However, the scale at which modelling is undertaken can affect the predictive accuracy and restrict the use of the model at different scales. We assessed the validity of existing regional-scale habitat models at a local scale and contrasted the habitat use of two morphologically similar species with differing conservation status (Mormopterus norfolkensis and Mormopterus species 2). We used negative binomial generalised linear models created from indices of activity and environmental variables collected during systematic acoustic surveys. We found that habitat type (based on vegetation community) best explained the activity of both species, which were more active in floodplain areas, with most foraging activity recorded in the freshwater wetland habitat type. The threatened M. norfolkensis avoided urban areas, in contrast to M. species 2, which occurred frequently in urban bushland. We found that the broad habitat types predicted from local-scale models were generally consistent with those from regional-scale models. However, threshold-dependent accuracy measures indicated a poor fit, and we advise caution when using the regional models at a fine scale, particularly when the consequences of false negatives or positives are severe. Additionally, our study illustrates that habitat type classifications can be important predictors, and we suggest they are more practical for conservation than complex combinations of raw variables, as they are easily communicated to land managers. PMID:23977296
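
    In statsmodels form, the model family used here looks as follows; the covariate names and the placeholder data are hypothetical, not the study's survey data:

    ```python
    # Negative binomial GLM relating an acoustic activity index to habitat
    # covariates. Data below are random placeholders for illustration only.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "activity": rng.poisson(3, 50),                  # bat-pass counts per site
        "habitat_floodplain": rng.integers(0, 2, 50),    # 0/1 habitat indicator
        "urban": rng.integers(0, 2, 50),                 # 0/1 urban indicator
    })
    X = sm.add_constant(df[["habitat_floodplain", "urban"]])
    model = sm.GLM(df["activity"], X, family=sm.families.NegativeBinomial()).fit()
    print(model.summary())
    ```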

  13. Quantum Drude oscillator model of atoms and molecules: Many-body polarization and dispersion interactions for atomistic simulation

    NASA Astrophysics Data System (ADS)

    Jones, Andrew P.; Crain, Jason; Sokhan, Vlad P.; Whitfield, Troy W.; Martyna, Glenn J.

    2013-04-01

    Treating both many-body polarization and dispersion interactions is now recognized as a key element in achieving the level of atomistic modeling required to reveal novel physics in complex systems. The quantum Drude oscillator (QDO), a Gaussian-based, coarse-grained electronic structure model, captures both many-body polarization and dispersion and has computational complexity that scales linearly with system size, hence it is a leading candidate for next-generation simulation methods. Here, we investigate the extent to which the QDO treatment reproduces the desired long-range atomic and molecular properties. We present closed form expressions for leading order polarizabilities and dispersion coefficients and derive invariant (parameter-free) scaling relationships among multipole polarizability and many-body dispersion coefficients that arise due to the Gaussian nature of the model. We show that these “combining rules” hold to within a few percent for noble gas atoms, alkali metals, and simple (first-row hydride) molecules such as water; this is consistent with the surprising success that models with underlying Gaussian statistics often exhibit in physics. We present a diagrammatic Jastrow-type perturbation theory tailored to the QDO model that serves to illustrate the rich types of responses that the QDO approach engenders. QDO models for neon, argon, krypton, and xenon, designed to reproduce gas phase properties, are constructed and their condensed phase properties explored via linear scale diffusion Monte Carlo (DMC) and path integral molecular dynamics (PIMD) simulations. Good agreement with experimental data for structure, cohesive energy, and bulk modulus is found, demonstrating a degree of transferability that cannot be achieved using current empirical models or fully ab initio descriptions.
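    The record does not reproduce the paper's closed-form expressions, but the textbook Drude-oscillator relations convey the flavour: an oscillator of charge q, mass m, and frequency ω has dipole polarizability α1 = q²/(mω²), and a pair of identical oscillators attracts with C6 = (3/4)ħω α1², with London's formula covering the unlike pair. A small sketch under those standard relations (toy parameters, not fitted QDO values, and not the paper's specific invariant combining rules):

        import numpy as np

        # Atomic units (hbar = 1). Standard Drude-oscillator relations:
        #   alpha1 = q^2 / (m * omega^2)
        #   C6     = (3/4) * hbar * omega * alpha1^2        (identical pair)
        #   C6     = (3/2) * hbar * w1*w2/(w1+w2) * a1*a2   (London, unlike pair)

        def alpha1(q, m, omega):
            return q**2 / (m * omega**2)

        def c6_identical(q, m, omega, hbar=1.0):
            a = alpha1(q, m, omega)
            return 0.75 * hbar * omega * a**2

        def c6_london(a1, w1, a2, w2, hbar=1.0):
            return 1.5 * hbar * (w1 * w2 / (w1 + w2)) * a1 * a2

        q, m, w = 1.0, 1.0, 0.5                 # toy parameters
        a = alpha1(q, m, w)
        # London's formula reduces to the identical-pair result when w1 = w2:
        print(a, c6_identical(q, m, w), c6_london(a, w, a, w))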

  14. Fully coupled approach to modeling shallow water flow, sediment transport, and bed evolution in rivers

    NASA Astrophysics Data System (ADS)

    Li, Shuangcai; Duffy, Christopher J.

    2011-03-01

    Our ability to predict complex environmental fluid flow and transport hinges on accurate and efficient simulations of multiple physical phenomena operating simultaneously over a wide range of spatial and temporal scales, including overbank floods, coastal storm surge events, drying and wetting bed conditions, and simultaneous bed form evolution. This research implements a fully coupled strategy for solving shallow water hydrodynamics, sediment transport, and morphological bed evolution in rivers and floodplains (PIHM_Hydro) and applies the model to field and laboratory experiments that cover a wide range of spatial and temporal scales. The model uses a standard upwind finite volume method and Roe's approximate Riemann solver for unstructured grids. A multidimensional linear reconstruction and slope limiter are implemented, achieving second-order spatial accuracy. Model efficiency and stability are treated using an explicit-implicit method for temporal discretization with operator splitting. Laboratory- and field-scale experiments were compiled where coupled processes across a range of scales were observed and where higher-order spatial and temporal accuracy might be needed for accurate and efficient solutions. These experiments demonstrate the ability of the fully coupled strategy to capture the dynamics of field-scale flood waves and small-scale drying-wetting processes.
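    The second-order ingredient mentioned above - linear reconstruction with a slope limiter - is easiest to see in one dimension. A sketch of the standard minmod-limited piecewise-linear (MUSCL-type) reconstruction, a simplification of the multidimensional unstructured-grid version the model uses:

        import numpy as np

        def minmod(a, b):
            # Classic minmod limiter: take the smaller-magnitude slope when
            # the one-sided slopes agree in sign, zero otherwise (no new extrema).
            return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

        def limited_slopes(u, dx):
            dl = np.diff(u, prepend=u[0]) / dx    # backward differences
            dr = np.diff(u, append=u[-1]) / dx    # forward differences
            return minmod(dl, dr)

        def face_values(u, dx):
            # Piecewise-linear reconstruction: states at the right/left faces
            # of each cell, second-order accurate in smooth regions.
            s = limited_slopes(u, dx)
            return u + 0.5 * dx * s, u - 0.5 * dx * s

        u = np.array([1.0, 1.0, 2.0, 4.0, 4.0])   # cell averages
        print(face_values(u, 1.0))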

  15. nonlinMIP contribution to CMIP6: model intercomparison project for non-linear mechanisms: physical basis, experimental design and analysis principles (v1.0)

    NASA Astrophysics Data System (ADS)

    Good, Peter; Andrews, Timothy; Chadwick, Robin; Dufresne, Jean-Louis; Gregory, Jonathan M.; Lowe, Jason A.; Schaller, Nathalie; Shiogama, Hideo

    2016-11-01

    nonlinMIP provides experiments that account for state-dependent regional and global climate responses. The experiments have two main applications: (1) to focus understanding of responses to CO2 forcing on states relevant to specific policy or scientific questions (e.g. change under low-forcing scenarios, the benefits of mitigation, or from past cold climates to the present day), or (2) to understand the state dependence (non-linearity) of climate change - i.e. why doubling the forcing may not double the response. State dependence (non-linearity) of responses can be large at regional scales, with important implications for understanding mechanisms and for general circulation model (GCM) emulation techniques (e.g. energy balance models and pattern-scaling methods). However, these processes are hard to explore using traditional experiments, which explains why they have had so little attention in previous studies. Some single model studies have established novel analysis principles and some physical mechanisms. There is now a need to explore robustness and uncertainty in such mechanisms across a range of models (point 2 above), and, more broadly, to focus work on understanding the response to CO2 on climate states relevant to specific policy/science questions (point 1). nonlinMIP addresses this using a simple, small set of CO2-forced experiments that are able to separate linear and non-linear mechanisms cleanly, with a good signal-to-noise ratio - while being demonstrably traceable to realistic transient scenarios. The design builds on the CMIP5 (Coupled Model Intercomparison Project Phase 5) and CMIP6 DECK (Diagnostic, Evaluation and Characterization of Klima) protocols, and is centred around a suite of instantaneous atmospheric CO2 change experiments, with a ramp-up-ramp-down experiment to test traceability to gradual forcing scenarios. In all cases the models are intended to be used with CO2 concentrations rather than CO2 emissions as the input. The understanding gained will help interpret the spread in policy-relevant scenario projections. Here we outline the basic physical principles behind nonlinMIP, and the method of establishing traceability from abruptCO2 to gradual forcing experiments, before detailing the experimental design, and finally some analysis principles. The test of traceability from abruptCO2 to transient experiments is recommended as a standard analysis within the CMIP5 and CMIP6 DECK protocols.

  16. Linearized spectrum correlation analysis for line emission measurements

    NASA Astrophysics Data System (ADS)

    Nishizawa, T.; Nornberg, M. D.; Den Hartog, D. J.; Sarff, J. S.

    2017-08-01

    A new spectral analysis method, Linearized Spectrum Correlation Analysis (LSCA), for charge exchange and passive ion Doppler spectroscopy is introduced to provide a means of measuring fast spectral line shape changes associated with ion-scale micro-instabilities. This analysis method is designed to resolve the fluctuations in the emission line shape from a stationary ion-scale wave. The method linearizes the fluctuations around a time-averaged line shape (e.g., Gaussian) and subdivides the spectral output channels into two sets to reduce contributions from uncorrelated fluctuations without averaging over the fast time dynamics. In principle, small fluctuations in the parameters used for a line shape model can be measured by evaluating the cross spectrum between different channel groupings to isolate a particular fluctuating quantity. High-frequency ion velocity measurements (100-200 kHz) were made by using this method. We also conducted simulations to compare LSCA with a moment analysis technique under a low photon count condition. Both experimental and synthetic measurements demonstrate the effectiveness of LSCA.

  17. Multi-scale Quantitative Precipitation Forecasting Using Nonlinear and Nonstationary Teleconnection Signals and Artificial Neural Network Models

    EPA Science Inventory

    Global sea surface temperature (SST) anomalies can affect terrestrial precipitation via ocean-atmosphere interaction known as climate teleconnection. Non-stationary and non-linear characteristics of the ocean-atmosphere system make the identification of the teleconnection signals...

  18. Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations

    DOE PAGES

    Fierce, Laura; McGraw, Robert L.

    2017-07-26

    Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
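    The optimization described above can be sketched with an off-the-shelf LP solver: fix a grid of candidate abscissas, then seek nonnegative weights that reproduce a set of moments while minimizing an entropy-inspired cost. The grid, cost function, and reference distribution below are illustrative choices, not the authors' configuration.

        import numpy as np
        from scipy.optimize import linprog

        x = np.logspace(-2, 1, 60)                # candidate abscissas (e.g., diameters)
        orders = [0, 1, 2, 3]                     # moment orders to match
        A = np.vstack([x**k for k in orders])     # moment constraint matrix

        # Reference discrete distribution on the grid (lognormal-shaped), so
        # the target moments are feasible by construction.
        pdf = np.exp(-(np.log(x) + 1.0)**2 / (2 * 0.7**2)) / x
        p = pdf / pdf.sum()
        b = A @ p                                 # target moments

        c = np.log(x / np.exp(-1.0))**2           # entropy-inspired cost (illustrative)
        res = linprog(c, A_eq=A, b_eq=b, bounds=(0, None))
        w = res.x
        print("active nodes:", np.count_nonzero(w > 1e-12))  # LP vertex => sparse
        print("moment errors:", A @ w - b)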

  20. Methods of sequential estimation for determining initial data in numerical weather prediction. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Cohn, S. E.

    1982-01-01

    Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations, in which initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast-scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to result in a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method, and the optimal combined data assimilation-initialization method is a modified version of the KB filter.
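    For a linear stochastic dynamic model x_k = M x_{k-1} + w with observations y_k = H x_k + v, the discrete-time analogue of the KB filter alternates a forecast with an observational update. A minimal sketch with toy matrices (the dynamics, noise levels, and observation operator are illustrative, not an NWP configuration):

        import numpy as np

        def kalman_step(x, P, y, M, Q, H, R):
            # Forecast step
            xf = M @ x
            Pf = M @ P @ M.T + Q
            # Analysis step (update with observation y)
            S = H @ Pf @ H.T + R
            K = Pf @ H.T @ np.linalg.inv(S)          # Kalman gain
            xa = xf + K @ (y - H @ xf)
            Pa = (np.eye(len(x)) - K @ H) @ Pf
            return xa, Pa

        M = np.array([[1.0, 0.1], [0.0, 1.0]])       # toy dynamics
        Q = 0.01 * np.eye(2); R = np.array([[0.1]])  # toy noise covariances
        H = np.array([[1.0, 0.0]])                   # observe first component only
        x, P = np.zeros(2), np.eye(2)
        x, P = kalman_step(x, P, np.array([0.5]), M, Q, H, R)
        print(x, np.diag(P))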

  1. Energy harvesting with stacked dielectric elastomer transducers: Nonlinear theory, optimization, and linearized scaling law

    NASA Astrophysics Data System (ADS)

    Tutcuoglu, A.; Majidi, C.

    2014-12-01

    Using principles of damped harmonic oscillation with continuous media, we examine electrostatic energy harvesting with a "soft-matter" array of dielectric elastomer (DE) transducers. The array is composed of infinitely thin and deformable electrodes separated by layers of insulating elastomer. During vibration, it deforms longitudinally, resulting in a change in the capacitance and electrical enthalpy of the charged electrodes. Depending on the phase of electrostatic loading, the DE array can function as either an actuator that amplifies small vibrations or a generator that converts these external excitations into electrical power. Both cases are addressed with a comprehensive theory that accounts for the influence of viscoelasticity, dielectric breakdown, and electromechanical coupling induced by Maxwell stress. In the case of a linearized Kelvin-Voigt model of the dielectric, we obtain a closed-form estimate for the electrical power output and a scaling law for DE generator design. For the complete nonlinear model, we obtain the optimal electrostatic voltage input for maximum electrical power output.

  2. Age constraints on the evolution of the Quetico belt, Superior Province, Ontario

    NASA Technical Reports Server (NTRS)

    Percival, J. A.; Sullivan, R. W.

    1986-01-01

    Much attention has been focused on the nature of Archean tectonic processes and the extent to which they were different from modern rigid-plate tectonics. The Archean Superior Province has linear metavolcanic and metasediment-dominated subprovinces of similar scale to Cenozoic island arc-trench systems of the western Pacific, suggesting an origin by accreting arcs. Models of the evolution of metavolcanic belts in parts of the Superior Province suggest an arc setting, but the tectonic environment and evolution of the intervening metasedimentary belts are poorly understood. In addition to explaining the setting giving rise to a linear sedimentary basin, models must account for subsequent shortening and high-temperature, low-pressure metamorphism. Correlation of rock units and events in adjacent metavolcanic and metasedimentary belts is a first step toward understanding large-scale crustal interactions. To this end, zircon geochronology has been applied to metavolcanic belts of the western Superior Province; new age data for the Quetico metasedimentary belt are reported, permitting correlation with the adjacent Wabigoon and Wawa metavolcanic subprovinces.

  3. Advanced turbo-prop airplane interior noise reduction-source definition

    NASA Technical Reports Server (NTRS)

    Magliozzi, B.; Brooks, B. M.

    1979-01-01

    Acoustic pressure amplitudes and phases were measured in model scale on the surface of a rigid semicylinder mounted in an acoustically treated wind tunnel near a prop-fan (an advanced turboprop with many swept blades) model. Operating conditions during the test simulated those of a prop-fan at 0.8 Mach number cruise. Acoustic pressure amplitude and phase contours were defined on the semicylinder surface. Measurements obtained without the semicylinder in place were used to establish the magnitude of pressure doubling for an aircraft fuselage located near a prop-fan. Pressure doubling effects were found to be 6 dB at 90 deg incidence, decreasing to no effect at grazing incidence. Comparisons of measurements with predictions made using a recently developed prop-fan noise prediction theory, which includes linear and non-linear source terms, showed good agreement in phase and in peak noise amplitude. Predictions of noise amplitude and phase contours, including pressure doubling effects derived from test, are included for a full-scale prop-fan installation.

  4. Hierarchical stochastic modeling of large river ecosystems and fish growth across spatio-temporal scales and climate models: the Missouri River endangered pallid sturgeon example

    USGS Publications Warehouse

    Wildhaber, Mark L.; Wikle, Christopher K.; Moran, Edward H.; Anderson, Christopher J.; Franz, Kristie J.; Dey, Rima

    2017-01-01

    We present a hierarchical series of spatially decreasing and temporally increasing models to evaluate the uncertainty in the atmosphere-ocean global climate model (AOGCM) and the regional climate model (RCM) relative to the uncertainty in the somatic growth of the endangered pallid sturgeon (Scaphirhynchus albus). To assess effects on fish populations of riverine ecosystems, climate output simulated by coarse-resolution AOGCMs and RCMs must be downscaled through basins to river hydrology to population response. One needs to transfer the information from these climate simulations down to the individual scale in a way that minimizes extrapolation and can account for spatio-temporal variability in the intervening stages. The goal is a framework to determine whether, given uncertainties in the climate models and the biological response, meaningful inference can still be made. The non-linear downscaling of climate information to the river scale requires that one realistically account for spatial and temporal variability across scales. Our downscaling procedure includes the use of fixed/calibrated hydrological flow and temperature models coupled with a stochastically parameterized sturgeon bioenergetics model. We show that, although there is a large amount of uncertainty associated with both the climate model output and the fish growth process, one can establish significant differences in fish growth distributions between models, and between future and current climates for a given model.

  5. Water resources planning and management : A stochastic dual dynamic programming approach

    NASA Astrophysics Data System (ADS)

    Goor, Q.; Pinte, D.; Tilmant, A.

    2008-12-01

    Allocating water between different users and uses, including the environment, is one of the most challenging tasks facing water resources managers and has always been at the heart of Integrated Water Resources Management (IWRM). As water scarcity is expected to increase over time, allocation decisions among the different uses will have to be found taking into account the complex interactions between water and the economy. Hydro-economic optimization models can capture those interactions while prescribing efficient allocation policies. Many hydro-economic models found in the literature are formulated as large-scale non-linear optimization problems (NLP), seeking to maximize net benefits from the system operation while meeting operational and/or institutional constraints, and describing the main hydrological processes. However, those models rarely incorporate the uncertainty inherent to the availability of water, essentially because of the computational difficulties associated with stochastic formulations. The purpose of this presentation is to describe a stochastic programming model that can identify economically efficient allocation policies in large-scale multipurpose multireservoir systems. The model is based on stochastic dual dynamic programming (SDDP), an extension of traditional SDP that is not affected by the curse of dimensionality. SDDP identifies efficient allocation policies while considering the hydrologic uncertainty. The objective function includes the net benefits from the hydropower and irrigation sectors, as well as penalties for not meeting operational and/or institutional constraints. To implement the efficient decomposition scheme that removes the computational burden, the one-stage SDDP problem has to be a linear program. Recent developments improve the representation of the non-linear and mildly non-convex hydropower function through a convex hull approximation of the true hydropower function. The model is illustrated on a cascade of 14 reservoirs in the Nile River basin.

  6. Comparison of TOPEX/Poseidon Sea Level and Linear Model Results forced by Various Wind Products for the Tropical Pacific

    NASA Technical Reports Server (NTRS)

    Hackert, Eric C.; Busalacchi, Antonio J.

    1997-01-01

    The goal of this paper is to compare TOPEX/Poseidon (T/P) sea level with sea level results from linear ocean model experiments forced by several different wind products for the tropical Pacific. During the period of this study (October 1992 - October 1995), available wind products include satellite winds from the ERS-1 scatterometer product of [HALP 97] and the passive microwave analysis of SSMI winds produced using the variational analysis method (VAM) of [ATLA 91]. In addition, atmospheric GCM winds from the NCEP reanalysis [KALN 96], ECMWF analysis [ECMW94], and the Goddard EOS-1 (GEOS-1) reanalysis experiment [SCHU 93] are available for comparison. The observed ship wind analysis of FSU [STRI 92] is also included in this study. The linear model of [CANE 84] is used as a transfer function to test the quality of each of these wind products for the tropical Pacific. The various wind products are judged by comparing the wind-forced model sea level results against the T/P sea level anomalies. Correlation and RMS difference maps show how well each wind product does in reproducing the T/P sea level signal. These results are summarized in a table showing area average correlations and RMS differences. The large-scale low-frequency temporal signal is reproduced by all of the wind products. However, significant differences exist in both amplitude and phase on regional scales. In general, the model results forced by satellite winds do a better job reproducing the T/P signal (i.e. have a higher average correlation and lower RMS difference) than the results forced by atmospheric model winds.
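    The correlation and RMS-difference diagnostics used to rank the wind products are straightforward to compute once model and observed anomaly fields share a (time, lat, lon) grid. A sketch with synthetic arrays standing in for the model and T/P fields:

        import numpy as np

        def skill_scores(model, obs):
            # Pointwise temporal correlation and RMS difference between two
            # (time, lat, lon) fields; anomalies are formed in place here.
            ma = model - model.mean(axis=0)
            oa = obs - obs.mean(axis=0)
            corr = (ma * oa).sum(axis=0) / np.sqrt(
                (ma**2).sum(axis=0) * (oa**2).sum(axis=0))
            rms = np.sqrt(((model - obs)**2).mean(axis=0))
            return corr, rms

        rng = np.random.default_rng(1)
        obs = rng.standard_normal((36, 10, 20))        # 3 years of monthly maps (toy)
        model = obs + 0.5 * rng.standard_normal(obs.shape)
        corr, rms = skill_scores(model, obs)
        print(corr.mean(), rms.mean())                 # area-average summary values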

  7. Order-constrained linear optimization.

    PubMed

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
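    A deliberately naive two-stage search conveys the spirit of the approach - rank candidate weight vectors by Kendall's τ first and by squared error second - without reproducing the authors' OCLO algorithm, which builds on the maximum rank correlation estimator:

        import numpy as np
        from scipy.stats import kendalltau

        rng = np.random.default_rng(2)
        n, p = 200, 3
        X = rng.standard_normal((n, p))
        y = X @ np.array([1.0, 0.5, -0.25]) + rng.standard_normal(n)

        best = None
        for _ in range(2000):                  # crude random search over weights
            w = rng.standard_normal(p)
            w /= np.linalg.norm(w)             # scale does not affect rank order
            pred = X @ w
            tau, _ = kendalltau(pred, y)
            sse = ((y - pred)**2).sum()        # metric fit as the tie-breaker
            key = (-tau, sse)                  # ordinal fit dominates
            if best is None or key < best[0]:
                best = (key, w)
        print("tau:", -best[0][0], "weights:", best[1])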

  8. Turbulence closure for mixing length theories

    NASA Astrophysics Data System (ADS)

    Jermyn, Adam S.; Lesaffre, Pierre; Tout, Christopher A.; Chitre, Shashikumar M.

    2018-05-01

    We present an approach to turbulence closure based on mixing length theory with three-dimensional fluctuations against a two-dimensional background. This model is intended to be rapidly computable for implementation in stellar evolution software and to capture a wide range of relevant phenomena with just a single free parameter, namely the mixing length. We incorporate magnetic, rotational, baroclinic, and buoyancy effects exactly within the formalism of linear growth theories with non-linear decay. We treat differential rotation effects perturbatively in the corotating frame using a novel controlled approximation, which matches the time evolution of the reference frame to arbitrary order. We then implement this model in an efficient open source code and discuss the resulting turbulent stresses and transport coefficients. We demonstrate that this model exhibits convective, baroclinic, and shear instabilities as well as the magnetorotational instability. It also exhibits non-linear saturation behaviour, and we use this to extract the asymptotic scaling of various transport coefficients in physically interesting limits.

  9. Modeling and control design of a wind tunnel model support

    NASA Technical Reports Server (NTRS)

    Howe, David A.

    1990-01-01

    The 12-Foot Pressure Wind Tunnel at Ames Research Center is being restored. A major part of the restoration is the complete redesign of the aircraft model supports and their associated control systems. An accurate trajectory control servo system capable of positioning a model (with no measurable overshoot) is needed. Extremely small errors in scaled-model pitch angle can increase airline fuel costs for the final aircraft configuration by millions of dollars. In order to make a mechanism sufficiently accurate in pitch, a detailed structural and control-system model must be created and then simulated on a digital computer. The model must contain linear representations of the mechanical system, including masses, springs, and damping in order to determine system modes. Electrical components, both analog and digital, linear and nonlinear must also be simulated. The model of the entire closed-loop system must then be tuned to control the modes of the flexible model-support structure. The development of a system model, the control modal analysis, and the control-system design are discussed.

  10. Surface-wave amplitude analysis for array data with non-linear waveform fitting: Toward high-resolution attenuation models of the upper mantle

    NASA Astrophysics Data System (ADS)

    Hamada, K.; Yoshizawa, K.

    2013-12-01

    Anelastic attenuation of seismic waves provides us with valuable information on temperature and water content in the Earth's mantle. While seismic velocity models have been investigated by many researchers, anelastic attenuation (or Q) models have yet to be investigated in detail, mainly due to the intrinsic difficulties and uncertainties in the amplitude analysis of observed seismic waveforms. To increase the horizontal resolution of surface wave attenuation models on a regional scale, we have developed a new method of fully non-linear waveform fitting to measure inter-station phase velocities and amplitude ratios simultaneously, using the Neighborhood Algorithm (NA) as a global optimizer. Model parameter space (perturbations of phase speed and amplitude ratio) is explored to fit two observed waveforms on a common great-circle path by perturbing both the phase and amplitude of the fundamental-mode surface waves. This method has been applied to observed waveform data of the USArray from 2007 to 2008, and a large number of inter-station amplitude and phase speed data were collected in a period range from 20 to 200 seconds. We have constructed preliminary phase speed and attenuation models using the observed phase and amplitude data, with careful consideration of the effects of elastic focusing and station correction factors for amplitude data. The phase velocity models correlate well with conventional tomographic results in North America on a large scale, e.g., the significant slow velocity anomaly in volcanic regions of the western United States. The preliminary surface-wave attenuation results achieved a better variance reduction when the amplitude data were inverted for attenuation models in conjunction with corrections for receiver factors. We have also taken into account the amplitude correction for elastic focusing based on geometrical ray theory, but its effect on the final model is somewhat limited, and our attenuation model shows anti-correlation with the phase velocity models; i.e., lower attenuation is found in slower velocity areas, which cannot readily be explained by temperature effects alone. Some former global-scale studies (e.g., Dalton et al., JGR, 2006) indicated that ray-theoretical focusing corrections on amplitude data tend to eliminate such anti-correlation of phase speed and attenuation, but this seems not to work sufficiently well for our regional-scale model, which is affected by stronger velocity gradients than global-scale models. Thus, the elastic focusing effects estimated from ray theory may be underestimated in our regional-scale studies. More rigorous ways to estimate the focusing corrections, as well as data selection criteria for amplitude measurements, are required to achieve high-resolution attenuation models on regional scales in the future.

  11. Complexion as a Soft Biometric in Human-Robot Interaction

    DTIC Science & Technology

    2013-10-01

    model the effect of shadows by providing a linear scaling of each color channel: [R_c, G_c, B_c]^T = diag(a, a, a) [R_u, G_u, B_u]^T (1) The second ... [R_c, G_c, B_c]^T = diag(a, a, a) [R_u, G_u, B_u]^T + [o_1, o_1, o_1]^T (3) Note that in Eqs. (1)-(3) each channel is scaled by the same factor a or adjusted by the same amount o_1. Scaling ... the red channel can be scaled by a = 1.2, whereas the other channels are not scaled at all (b = c = 1.0). Such situations are common in white
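    In code, Eqs. (1) and (3) amount to a common gain and offset applied to all channels, while an illuminant change (the record's red-cast example) uses per-channel gains. A minimal sketch:

        import numpy as np

        # Shadow model from Eqs. (1)/(3): every channel gets the same gain a
        # and, optionally, the same offset o1.
        def shade(rgb, a=1.0, o1=0.0):
            return a * rgb + o1

        # Illuminant change: per-channel gains (a, b, c), e.g. a reddish
        # cast that scales only the red channel (a = 1.2, b = c = 1.0).
        def illuminant(rgb, gains=(1.2, 1.0, 1.0)):
            return np.diag(gains) @ rgb

        pixel = np.array([0.4, 0.5, 0.6])      # (R, G, B)
        print(shade(pixel, a=0.7), illuminant(pixel))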

  12. Simulation of double layers in a model auroral circuit with nonlinear impedance

    NASA Technical Reports Server (NTRS)

    Smith, R. A.

    1986-01-01

    A reduced circuit description of the U-shaped potential structure of a discrete auroral arc, consisting of the flank transmission line plus parallel-electric-field region, is used to provide the boundary condition for one-dimensional simulations of the double-layer evolution. The model yields asymptotic scalings of the double-layer potential as a function of an anomalous transport coefficient alpha and of the perpendicular length scale l_a of the arc. The arc potential phi_DL scales approximately linearly with alpha and, for fixed alpha, as a power of the arc width, phi_DL ∝ l_a^z. Using parameters appropriate to the auroral zone acceleration region, potentials of phi_DL ≈ 10 kV scale to projected ionospheric dimensions of about 1 km, with power flows of the order of magnitude of substorm dissipation rates.

  13. On the Role of Surface Friction in Tropical Intraseasonal Oscillation

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.; Chen, Baode

    1999-01-01

    The Madden-Julian oscillation (MJO), or the tropical intraseasonal oscillation, has attracted much attention ever since its discovery in the early seventies, for reasons of both scientific understanding and practical forecasting. Among the theoretical interpretations of the MJO, the wave-CISK (conditional instability of the second kind) mechanism is the most popular. The basic idea of the wave-CISK interpretation is that the cooperation between the low-level convergence associated with the eastward moving Kelvin wave and the cumulus convection generates an eastward moving Kelvin-wave-like mode. Later it was recognized that the MJO has an important Rossby-wave-like component. However, linear analysis and numerical simulations based on it (even when conditional heating is used) have revealed two problems with the wave-CISK interpretation, i.e., excessive speed and the most preferred scale being zero or grid scale. Chao (1995) presented a discussion of these problems and attributed them to the particular type of expression for the cumulus heating used in the linear analyses and numerical studies (i.e., convective heating proportional to low-level convergence, with a fixed vertical heating profile). It should be pointed out that in the relatively successful simulations of the MJO with general circulation models, the problem of grid scale being the most preferred scale does not appear, and the problem of excessive speed is not as severe as in the linear analysis.

  14. Reconstruction of halo power spectrum from redshift-space galaxy distribution: cylinder-grouping method and halo exclusion effect

    NASA Astrophysics Data System (ADS)

    Okumura, Teppei; Takada, Masahiro; More, Surhud; Masaki, Shogo

    2017-07-01

    The peculiar velocity field measured by redshift-space distortions (RSD) in galaxy surveys provides a unique probe of the growth of large-scale structure. However, systematic effects arise when including satellite galaxies in the clustering analysis. Since satellite galaxies tend to reside in massive haloes with a greater halo bias, the inclusion boosts the clustering power. In addition, virial motions of the satellite galaxies cause a significant suppression of the clustering power due to non-linear RSD effects. We develop a novel method to recover the redshift-space power spectrum of haloes from the observed galaxy distribution by minimizing the contamination of satellite galaxies. The cylinder-grouping method (CGM) we study effectively excludes satellite galaxies from a galaxy sample. However, we find that this technique produces apparent anisotropies in the reconstructed halo distribution over all the scales which mimic RSD. On small scales, the apparent anisotropic clustering is caused by exclusion of haloes within the anisotropic cylinder used by the CGM. On large scales, the misidentification of different haloes in the large-scale structures, aligned along the line of sight, into the same CGM group causes the apparent anisotropic clustering via their cross-correlation with the CGM haloes. We construct an empirical model for the CGM halo power spectrum, which includes correction terms derived using the CGM window function at small scales as well as the linear matter power spectrum multiplied by a simple anisotropic function at large scales. We apply this model to a mock galaxy catalogue at z = 0.5, designed to resemble Sloan Digital Sky Survey-III Baryon Oscillation Spectroscopic Survey (BOSS) CMASS galaxies, and find that our model can predict both the monopole and quadrupole power spectra of the host haloes up to k < 0.5 h Mpc^{-1} to within 5 per cent.

  15. Note on the initial conditions within the effective field theory approach of cosmic acceleration

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Wen; Hu, Bin; Zhang, Yi

    2017-12-01

    By using the effective field theory approach, we investigate the role of initial conditions for the dark energy or modified gravity models. In detail, we consider the constant and linear parametrization of the effective Newton constant models. First, under the adiabatic assumption, the correction from the extra scalar degree of freedom in the beyond-ΛCDM model is found to be negligible. The dominant ingredient in this setup is the primordial curvature perturbation originated from the inflation mechanism, and the energy budget of the matter components is not very crucial. Second, the isocurvature perturbation sourced by the extra scalar field is studied. For the constant and linear models of the effective Newton constant, no such kind of scalar mode exists. For the quadratic model, there is a nontrivial one. However, the amplitude of the scalar field is damped away very fast on all scales. Consequently, it could not support a reasonable structure formation. Finally, we study the importance of the setup of the scalar field starting time. By setting different turn-on times, namely, a = 10^{-2} and a = 10^{-7}, we compare the cosmic microwave background radiation temperature, lensing deflection angle autocorrelation function, and the matter power spectrum in the constant and linear models. We find there is an order of O(1%) difference in the observable spectra for the constant model, while for the linear model, it is smaller than O(0.1%).

  16. Scaling local species-habitat relations to the larger landscape with a hierarchical spatial count model

    USGS Publications Warehouse

    Thogmartin, W.E.; Knutson, M.G.

    2007-01-01

    Much of what is known about avian species-habitat relations has been derived from studies of birds at local scales. It is entirely unclear whether the relations observed at these scales translate to the larger landscape in a predictable linear fashion. We derived habitat models and mapped predicted abundances for three forest bird species of eastern North America using bird counts, environmental variables, and hierarchical models applied at three spatial scales. Our purpose was to understand habitat associations at multiple spatial scales and create predictive abundance maps for purposes of conservation planning at a landscape scale given the constraint that the variables used in this exercise were derived from local-level studies. Our models indicated a substantial influence of landscape context for all species, many of which were counter to reported associations at finer spatial extents. We found land cover composition provided the greatest contribution to the relative explained variance in counts for all three species; spatial structure was second in importance. No single spatial scale dominated any model, indicating that these species are responding to factors at multiple spatial scales. For purposes of conservation planning, areas of predicted high abundance should be investigated to evaluate the conservation potential of the landscape in their general vicinity. In addition, the models and spatial patterns of abundance among species suggest locations where conservation actions may benefit more than one species. © 2006 Springer Science+Business Media B.V.

  17. Scaling effects in the static and dynamic response of graphite-epoxy beam-columns. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.

    1990-01-01

    Scale model technology represents one method of investigating the behavior of advanced, weight-efficient composite structures under a variety of loading conditions. It is necessary, however, to understand the limitations involved in testing scale model structures before the technique can be fully utilized. These limitations, or scaling effects, are characterized in the large deflection response and failure of composite beams. Scale model beams were loaded with an eccentric axial compressive load designed to produce large bending deflections and global failure. A dimensional analysis was performed on the composite beam-column loading configuration to determine a model law governing the system response. An experimental program was developed to validate the model law under both static and dynamic loading conditions. Laminate stacking sequences including unidirectional, angle ply, cross ply, and quasi-isotropic were tested to examine a diversity of composite response and failure modes. The model beams were loaded under scaled test conditions until catastrophic failure. A large deflection beam solution was developed to compare with the static experimental results and to analyze beam failure. Also, the finite element code DYCAST (DYnamic Crash Analysis of STructures) was used to model both the static and impulsive beam response. Static test results indicate that the unidirectional and cross ply beam responses scale as predicted by the model law, even under severe deformations. In general, failure modes were consistent between scale models within a laminate family; however, a significant scale effect was observed in strength. The scale effect in strength which was evident in the static tests was also observed in the dynamic tests. Scaling of load and strain time histories between the scale model beams and the prototypes was excellent for the unidirectional beams, but inconsistent results were obtained for the angle ply, cross ply, and quasi-isotropic beams. Results show that valuable information can be obtained from testing on scale model composite structures, especially in the linear elastic response region. However, due to scaling effects in the strength behavior of composite laminates, caution must be used in extrapolating data taken from a scale model test when that test involves failure of the structure.

  18. Size Scaling in Western North Atlantic Loggerhead Turtles Permits Extrapolation between Regions, but Not Life Stages.

    PubMed

    Marn, Nina; Klanjscek, Tin; Stokes, Lesley; Jusup, Marko

    2015-01-01

    Sea turtles face threats globally and are protected by national and international laws. Allometry and scaling models greatly aid sea turtle conservation and research, and help to better understand the biology of sea turtles. Scaling, however, may differ between regions and/or life stages. We analyze differences between (i) two different regional subsets and (ii) three different life stage subsets of the western North Atlantic loggerhead turtles by comparing the relative growth of body width and depth in relation to body length, and discuss the implications. Results suggest that the differences between scaling relationships of different regional subsets are negligible, and models fitted on data from one region of the western North Atlantic can safely be used on data for the same life stage from another North Atlantic region. On the other hand, using models fitted on data for one life stage to describe other life stages is not recommended if accuracy is of paramount importance. In particular, young loggerhead turtles that have not recruited to neritic habitats should be studied and modeled separately whenever practical, while neritic juveniles and adults can be modeled together as one group. Even though morphometric scaling varies among life stages, a common model for all life stages can be used as a general description of scaling, and assuming isometric growth as a simplification is justified. In addition to linear models traditionally used for scaling on log-log axes, we test the performance of a saturating (curvilinear) model. The saturating model is statistically preferred in some cases, but the accuracy gained by the saturating model is marginal.
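    Relative-growth (allometric) comparisons of this kind reduce to fitting a line on log-log axes: width = c · length^b becomes log(width) = log(c) + b·log(length), with b = 1 indicating isometric growth. A sketch on synthetic data (not the study's measurements):

        import numpy as np

        # Toy allometric data: width ~ 0.8 * length^0.95 with lognormal noise.
        rng = np.random.default_rng(3)
        length = rng.uniform(5.0, 100.0, 80)          # e.g., carapace length (cm)
        width = 0.8 * length**0.95 * np.exp(0.05 * rng.standard_normal(80))

        # Linear fit on log-log axes; slope is the scaling exponent b.
        b, logc = np.polyfit(np.log(length), np.log(width), 1)
        print(f"exponent b = {b:.3f} (b = 1 would be isometric growth)")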

  19. A Lagrangian subgrid-scale model with dynamic estimation of Lagrangian time scale for large eddy simulation of complex flows

    NASA Astrophysics Data System (ADS)

    Verma, Aman; Mahesh, Krishnan

    2012-08-01

    The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The Lagrangian time scale is dynamically computed from the solution and does not need any adjustable parameter. The time scale used in the standard Lagrangian model contains an adjustable parameter θ. The dynamic time scale is computed based on a "surrogate-correlation" of the Germano-identity error (GIE). Also, a simple material derivative relation is used to approximate GIE at different events along a pathline instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity. The present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers and good agreement with previous computations and experiments is obtained. Noticeable improvement is obtained using the proposed model over the standard Lagrangian model. The improvement is attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.
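    The Lagrangian average itself is an exponentially weighted relaxation along a pathline. A single-point sketch (advection along the pathline is omitted, and the time scale T is prescribed here rather than computed dynamically from the Germano-identity error as in the paper):

        import numpy as np

        def lagrangian_average(I_old, phi_new, dt, T):
            # Exponentially weighted pathline average with relaxation time T;
            # in a full LES, I_old would be evaluated at the upstream point.
            eps = (dt / T) / (1.0 + dt / T)
            return eps * phi_new + (1.0 - eps) * I_old

        # Toy scalar history at one point: the average relaxes toward phi = 1.
        I, T, dt = 0.0, 5.0, 0.1
        for _ in range(100):
            I = lagrangian_average(I, 1.0, dt, T)
        print(I)   # approaches 1 with an e-folding time of order T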

  20. Small area estimation for semicontinuous data.

    PubMed

    Chandra, Hukum; Chambers, Ray

    2016-03-01

    Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
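    The two-part structure is easy to sketch with standard regression tools once the random effects are dropped: a logistic model for the probability of a nonzero response, a linear model for log(y) on the positive part, and a lognormal back-transform for the combined mean. Synthetic data and fixed-effects-only models below, so this illustrates the decomposition rather than the paper's mixed-model estimator:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n = 500
        x = rng.standard_normal(n)
        pos = rng.random(n) < 1.0 / (1.0 + np.exp(-(0.5 + x)))   # zero vs positive
        y = np.where(pos,
                     np.exp(1.0 + 0.8 * x + 0.3 * rng.standard_normal(n)),
                     0.0)

        X = sm.add_constant(x)
        # Part 1: probability of a strictly positive response.
        part1 = sm.GLM(pos.astype(float), X, family=sm.families.Binomial()).fit()
        # Part 2: linear model for log(y) on the positive observations.
        part2 = sm.OLS(np.log(y[pos]), X[pos]).fit()

        p_hat = part1.predict(X)
        sigma2 = part2.mse_resid
        mu_hat = np.exp(part2.predict(X) + 0.5 * sigma2)   # lognormal back-transform
        print("mean prediction for first unit:", (p_hat * mu_hat)[0])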

  1. Design of permanent magnet eddy current brake for a small scaled electromagnetic launch model

    NASA Astrophysics Data System (ADS)

    Zhou, Shigui; Yu, Haitao; Hu, Minqiang; Huang, Lei

    2012-04-01

    A variable pole-pitch double-sided permanent magnet (PM) linear eddy current brake (LECB) is proposed for a small scaled electromagnetic launch model. A two-dimensional (2D) analytical steady state model is presented for the double-sided PM-LECB, and the expression for the braking force is derived. Based on the analytical model, the material and eddy current skin effect of the conducting plate are analyzed. Moreover, a variable pole-pitch double-sided PM-LECB is proposed for the effective braking of the moving plate. In addition, the braking force is predicted by finite element (FE) analysis, and the simulated results are in good agreement with the analytical model. Finally, a prototype is presented to test the braking profile for validation of the proposed design.

  2. Dunes on Saturn’s moon Titan as revealed by the Cassini Mission

    NASA Astrophysics Data System (ADS)

    Radebaugh, Jani

    2013-12-01

    Dunes on Titan, a dominant landform comprising at least 15% of the surface, represent the end product of many physical processes acting in alien conditions. Winds in a nitrogen-rich atmosphere with Earth-like pressure transport sand that is likely to have been derived from complex organics produced in the atmosphere. These sands then accumulate into large, planet-encircling sand seas concentrated near the equator. Dunes on Titan are predominantly linear and similar in size and form to the large linear dunes of the Namib, Arabian and Saharan sand seas. They likely formed from wide bimodal winds and appear to undergo average sand transport to the east. Their singular form across the satellite indicates Titan’s dunes may be highly mature, and may reside in a condition of stability that permitted their growth and evolution over long time scales. The dunes are among the youngest surface features, as even river channels do not cut through them. However, reorganization time scales of large linear dunes on Titan are likely tens of thousands of years. Thus, Titan’s dune forms may be long-lived and yet be actively undergoing sand transport. This work is a summary of research on dunes on Titan after the Cassini Prime and Equinox Missions (2004-2010) and now during the Solstice Mission (to end in 2017). It discusses results of Cassini data analysis and modeling of conditions on Titan and it draws comparisons with observations and models of linear dune formation and evolution on Earth.

  3. Some effects of horizontal discretization on linear baroclinic and symmetric instabilities

    NASA Astrophysics Data System (ADS)

    Barham, William; Bachman, Scott; Grooms, Ian

    2018-05-01

    The effects of horizontal discretization on linear baroclinic and symmetric instabilities are investigated by analyzing the behavior of the hydrostatic Eady problem in ocean models on the B and C grids. On the C grid a spurious baroclinic instability appears at small wavelengths. This instability does not disappear as the grid scale decreases; instead, it simply moves to smaller horizontal scales. The peak growth rate of the spurious instability is independent of the grid scale as the latter decreases. It is equal to c f/√Ri, where Ri is the balanced Richardson number, f is the Coriolis parameter, and c is a nondimensional constant that depends on the Richardson number. As the Richardson number increases, c increases towards an upper bound of approximately 1/2; for large Richardson numbers the spurious instability is faster than the Eady instability. To suppress the spurious instability it is recommended to use fourth-order centered tracer advection along with biharmonic viscosity and diffusion with coefficients (Δx)^4 f/(32√Ri) or larger, where Δx is the grid scale. On the B grid, the growth rates of baroclinic and symmetric instabilities are too small, and converge upwards towards the correct values as the grid scale decreases; no spurious instabilities are observed. In B grid models at eddy-permitting resolution, the reduced growth rate of baroclinic instability may contribute to partially-resolved eddies being too weak. On the C grid the growth rate of symmetric instability is better (larger) than on the B grid, and converges upwards towards the correct value as the grid scale decreases.
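    The recommended dissipation is a one-line computation. A sketch evaluating the abstract's lower bound (Δx)^4 f/(32√Ri) for an illustrative grid (the grid spacing, Coriolis parameter, and Richardson number are example values):

        import numpy as np

        def biharmonic_coefficient(dx, f, Ri):
            # Lower bound on the biharmonic viscosity/diffusivity from the
            # abstract: A4 >= dx^4 * f / (32 * sqrt(Ri)).
            return dx**4 * f / (32.0 * np.sqrt(Ri))

        # Example: 1 km grid, mid-latitude f, balanced Richardson number 100.
        print(biharmonic_coefficient(1.0e3, 1.0e-4, 100.0))   # m^4 s^-1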

  4. Inlet Turbulence and Length Scale Measurements in a Large Scale Transonic Turbine Cascade

    NASA Technical Reports Server (NTRS)

    Thurman, Douglas; Flegel, Ashlie; Giel, Paul

    2014-01-01

    Constant temperature hotwire anemometry data were acquired to determine the inlet turbulence conditions of a transonic turbine blade linear cascade. Flow conditions and angles were investigated that corresponded to the take-off and cruise conditions of the Variable Speed Power Turbine (VSPT) project and to an Energy Efficient Engine (EEE) scaled rotor blade tip section. Mean and turbulent flowfield measurements including intensity, length scale, turbulence decay, and power spectra were determined for high and low turbulence intensity flows at various Reynolds numbers and spanwise locations. The experimental data will be useful for establishing the inlet boundary conditions needed to validate turbulence models in CFD codes.

  5. On Instability of Geostrophic Current with Linear Vertical Shear at Length Scales of Interleaving

    NASA Astrophysics Data System (ADS)

    Kuzmina, N. P.; Skorokhodov, S. L.; Zhurbas, N. V.; Lyzhkov, D. A.

    2018-01-01

    The instability of long-wave disturbances of a geostrophic current with linear velocity shear is studied with allowance for the diffusion of buoyancy. A detailed derivation of the model problem in dimensionless variables is presented, which is used for analyzing the dynamics of disturbances in a vertically bounded layer and for describing the formation of large-scale intrusions in the Arctic basin. The problem is solved numerically based on a high-precision method developed for solving fourth-order differential equations. It is established that there is an eigenvalue in the spectrum of eigenvalues that corresponds to unstable (growing with time) disturbances, which are characterized by a phase velocity exceeding the maximum velocity of the geostrophic flow. A discussion is presented to explain some features of the instability.

  6. Nonlinear effective theory of dark energy

    NASA Astrophysics Data System (ADS)

    Cusin, Giulia; Lewandowski, Matthew; Vernizzi, Filippo

    2018-04-01

    We develop an approach to parametrize cosmological perturbations beyond linear order for general dark energy and modified gravity models characterized by a single scalar degree of freedom. We derive the full nonlinear action, focusing on Horndeski theories. In the quasi-static, non-relativistic limit, there are a total of six independent relevant operators, three of which start at nonlinear order. The new nonlinear couplings modify, beyond linear order, the generalized Poisson equation relating the Newtonian potential to the matter density contrast. We derive this equation up to cubic order in perturbations and, in a companion article [1], we apply it to compute the one-loop matter power spectrum. Within this approach, we also discuss the Vainshtein regime around spherical sources and the relation between the Vainshtein scale and the nonlinear scale for structure formation.

  7. Nonlinear forcing in the resolvent analysis of wall-turbulence

    NASA Astrophysics Data System (ADS)

    Rosenberg, Kevin; Lozano Duran, Adrian; Towne, Aaron; McKeon, Beverley

    2016-11-01

    The resolvent analysis of McKeon and Sharma formulates the Navier-Stokes equations as an input/output system in which the nonlinearity is treated as a forcing that acts upon the linear dynamics to yield a velocity response across wavenumber/frequency space. DNS data for a low Reynolds number turbulent channel (Reτ = 180) is used to investigate the structure of the nonlinear forcing directly. Specifically, we explore the spatio-temporal scales where the forcing is active and analyze its interplay with the linear amplification mechanisms present in the resolvent operator. This work could provide insight into self-sustaining processes in wall-turbulence and inform the modeling of scale interactions in large eddy simulations. We gratefully acknowledge Stanford's Center for Turbulence Research for support of this work.

  8. A non-linear optimization programming model for air quality planning including co-benefits for GHG emissions.

    PubMed

    Turrini, Enrico; Carnevale, Claudio; Finzi, Giovanna; Volta, Marialuisa

    2018-04-15

    This paper introduces the MAQ (Multi-dimensional Air Quality) model aimed at defining cost-effective air quality plans at different scales (urban to national) and assessing the co-benefits for GHG emissions. The model implements and solves a non-linear multi-objective, multi-pollutant decision problem where the decision variables are the application levels of emission abatement measures, including reductions in energy consumption, end-of-pipe technologies, and fuel switch options. The objectives of the decision problem are the minimization of tropospheric secondary pollution exposure and of internal costs. The model assesses CO2-equivalent emissions in order to support decision makers in the selection of win-win policies. The methodology is tested on the Lombardy region, a heavily polluted area in northern Italy. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. The Physical Origin of Long Gas Depletion Times in Galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Semenov, Vadim A.; Kravtsov, Andrey V.; Gnedin, Nickolay Y.

    2017-08-18

    We present a model that elucidates why gas depletion times in galaxies are long compared to the time scales of the processes driving the evolution of the interstellar medium. We show that global depletion times are not set by any "bottleneck" in the process of gas evolution towards the star-forming state. Instead, depletion times are long because star-forming gas converts only a small fraction of its mass into stars before it is dispersed by dynamical and feedback processes. Thus, complete depletion requires that gas transitions between star-forming and non-star-forming states multiple times. Our model does not rely on the assumption of equilibrium and can be used to interpret trends of depletion times with the properties of observed galaxies and the parameters of star formation and feedback recipes in galaxy simulations. In particular, the model explains the mechanism by which feedback self-regulates star formation rate in simulations and makes it insensitive to the local star formation efficiency. We illustrate our model using the results of an isolated L*-sized disk galaxy simulation that reproduces the observed Kennicutt-Schmidt relation for both molecular and atomic gas. Interestingly, the relation for molecular gas is close to linear on kiloparsec scales, even though a non-linear relation is adopted in simulation cells. This difference is due to stellar feedback, which breaks the self-similar scaling of the gas density PDF with the average gas surface density.

  10. A new hybrid code (CHIEF) implementing the inertial electron fluid equation without approximation

    NASA Astrophysics Data System (ADS)

    Muñoz, P. A.; Jain, N.; Kilian, P.; Büchner, J.

    2018-03-01

    We present a new hybrid algorithm, implemented in the code CHIEF (Code Hybrid with Inertial Electron Fluid), for simulations of electron-ion plasmas. The algorithm treats the ions kinetically, modeled by the Particle-in-Cell (PIC) method, and the electrons as an inertial fluid, modeled by electron fluid equations without any of the approximations used in most other hybrid codes with an inertial electron fluid. This kind of code is appropriate for modeling a large variety of quasineutral plasma phenomena where electron inertia and/or ion kinetic effects are relevant. We present here the governing equations of the model, how these are discretized and implemented numerically, and a set of test problems to validate our numerical approach. Our chosen test problems, in which electron inertia and ion kinetic effects play an essential role, are: 0) excitation of parallel eigenmodes to check numerical convergence and stability, 1) parallel (to a background magnetic field) propagating electromagnetic waves, 2) perpendicular propagating electrostatic waves (ion Bernstein modes), 3) ion beam right-hand instability (resonant and non-resonant), 4) ion Landau damping, 5) ion firehose instability, and 6) 2D oblique ion firehose instability. Our results successfully reproduce the predictions of linear and non-linear theory for all these problems, validating our code. These properties make the hybrid code well suited to study multi-scale phenomena between electron and ion scales, such as collisionless shocks, magnetic reconnection and kinetic plasma turbulence in the dissipation range above the electron scales.

  11. Changes in the lower boundary condition of water fluxes in the NOAH land surface scheme

    NASA Astrophysics Data System (ADS)

    Lohmann, D.; Peters-Lidard, C. D.

    2002-05-01

    One problem with current land surface schemes (LSS) used in weather prediction and climate models is their inability to reproduce streamflow in large river basins. This can be attributed to the weak representation of their upper (infiltration) and lower (baseflow) boundary conditions in their water balance / transport equations. Operational (traditional) hydrological models, which operate on the same spatial scale as an LSS, are, on the other hand, able to reproduce streamflow time series. Their infiltration and baseflow equations are often empirically based and have therefore been neglected by the LSS community. We argue that a better representation of long time scales (as represented by groundwater and baseflow) must be included in current LSS to make valuable predictions of streamflow and water resources. This talk concentrates on the lower boundary condition of water fluxes within LSS. It briefly reviews previous attempts to incorporate groundwater and more realistic lower boundary conditions into LSS and summarizes their effect on runoff (baseflow) production time scales as compared to the lower boundary conditions currently used in LSS. The NOAH LSM in the LDAS and DMIP setting is used to introduce a simplified groundwater model, based on the linearized Boussinesq equation, and the TOPMODEL. The NOAH LSM will be coupled to a linear routing model to investigate the effects of the new lower boundary condition on the water balance (in particular, streamflow) in small to medium-sized catchments in the LDAS / DMIP domain.
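
    For reference, the simplest groundwater lower boundary of the kind discussed here is a single linear reservoir: storage S drains as Q = S/τ, giving an exponential baseflow recession. A minimal sketch with an assumed timescale and recharge pulse:

```python
import numpy as np

tau, dt = 45.0, 1.0                 # assumed e-folding time and step [days]
recharge = np.zeros(200)
recharge[:10] = 5.0                 # mm/day drainage pulse from the soil column

S, Q = 0.0, []
for R in recharge:
    S += (R - S / tau) * dt         # water balance of the groundwater store
    Q.append(S / tau)               # baseflow passed to the routing model

print(f"peak baseflow: {max(Q):.2f} mm/day; day-100 baseflow: {Q[99]:.3f} mm/day")
```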

  12. The non-linear, interactive effects of population density and climate drive the geographical patterns of waterfowl survival

    USGS Publications Warehouse

    Zhao, Qing; Boomer, G. Scott; Kendall, William L.

    2018-01-01

    Ongoing climate change has major impacts on ecological processes and patterns. Understanding the impacts of climate on the geographical patterns of survival can provide insights into how population dynamics respond to climate change and provide important information for the development of appropriate conservation strategies at regional scales. It is challenging to understand the impacts of climate on survival, however, because the non-linear relationship between survival and climate can be modified by density-dependent processes. In this study we extended the Brownie model to partition hunting and non-hunting mortalities and linked non-hunting survival to covariates. We applied this model to four decades (1972–2014) of waterfowl band-recovery, breeding population survey, and precipitation and temperature data covering multiple ecological regions to examine the non-linear, interactive effects of population density and climate on waterfowl non-hunting survival at a regional scale. Our results showed that the non-linear effect of temperature on waterfowl non-hunting survival was modified by breeding population density. The concave relationship between non-hunting survival and temperature suggested that the effects of warming on waterfowl survival might be multifaceted. Furthermore, the relationship between non-hunting survival and temperature was stronger when population density was higher, suggesting that high-density populations may be less buffered against warming than low-density populations. Our study revealed distinct relationships between waterfowl non-hunting survival and climate across and within ecological regions, highlighting the importance of considering different conservation strategies according to region-specific population and climate conditions. Our findings and associated novel modelling approach have wide implications for conservation practice.

  13. Thermodynamic scaling of dynamic properties of liquid crystals: Verifying the scaling parameters using a molecular model

    NASA Astrophysics Data System (ADS)

    Satoh, Katsuhiko

    2013-08-01

    The thermodynamic scaling of molecular dynamic properties of rotation and thermodynamic parameters in a nematic phase was investigated by molecular dynamics simulation using the Gay-Berne potential. A master curve for the relaxation time of flip-flop motion was obtained using thermodynamic scaling, and the dynamic property could be expressed solely as a function of TV^γτ, where T and V are the temperature and volume, respectively. The scaling parameter γτ was in excellent agreement with the thermodynamic parameter Γ, the slope of the line of log T against log V at constant order parameter P2. This line was fairly linear, as good as the corresponding line for p-azoxyanisole or for the highly ordered small cluster model. The equivalence relation between Γ and γτ was compared with results obtained from the highly ordered small cluster model. The possibility of adapting the molecular model to the thermodynamic scaling of other dynamic rotational properties was also explored. The rotational diffusion constant and rotational viscosity coefficients, calculated using established theoretical and experimental expressions, were rescaled onto master curves with the same scaling parameters. The simulation illustrates the universal nature of the equivalence relation for liquid crystals.
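
    To illustrate the collapse (synthetic data obeying an assumed exponent, not the Gay-Berne results): relaxation times measured at many (T, V) state points fall onto a single master curve when plotted against TV^γ, and γτ can be recovered by minimizing the scatter about that curve:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
gamma_true = 3.5                             # assumed scaling exponent
T = rng.uniform(0.8, 1.6, 200)
V = rng.uniform(0.9, 1.2, 200)
tau = (T * V**gamma_true) ** -4.0 * np.exp(rng.normal(0, 0.01, 200))

def scatter(g):                              # spread about a power-law trend
    lx, ly = np.log(T * V**g), np.log(tau)
    resid = ly - np.polyval(np.polyfit(lx, ly, 1), lx)
    return np.sum(resid**2)

best = minimize_scalar(scatter, bounds=(1.0, 6.0), method="bounded")
print(f"recovered gamma_tau: {best.x:.2f}")   # ~3.5
```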

  14. Spatio-temporal Bayesian model selection for disease mapping

    PubMed Central

    Carroll, R; Lawson, AB; Faes, C; Kirby, RS; Aregay, M; Watjou, K

    2016-01-01

    Spatio-temporal analysis of small area health data often involves choosing a fixed set of predictors prior to the final model fit. In this paper, we propose a spatio-temporal approach of Bayesian model selection to implement model selection for certain areas of the study region as well as certain years in the study timeline. Here, we examine the usefulness of this approach by way of a large-scale simulation study accompanied by a case study. Our results suggest that a special case of the model selection methods, a mixture model allowing a weight parameter to indicate whether the appropriate linear predictor is spatial, spatio-temporal, or a mixture of the two, offers the best option for fitting these spatio-temporal models. In addition, the case study illustrates the effectiveness of this mixture model within the model selection setting by easily accommodating lifestyle, socio-economic, and physical environmental variables to select a predominantly spatio-temporal linear predictor. PMID:28070156

  15. Psychological stress impairs short-term muscular recovery from resistance exercise.

    PubMed

    Stults-Kolehmainen, Matthew A; Bartholomew, John B

    2012-11-01

    The primary aim of this study was to determine whether chronic mental stress moderates recovery of muscular function, perceived energy, fatigue, and soreness in the first hour after a bout of strenuous resistance exercise. Thirty-one undergraduate resistance training students (age = 20.26 ± 1.34 yr) completed the Perceived Stress Scale and the Undergraduate Stress Questionnaire (USQ; a measure of life event stress) and underwent fitness testing. After 5 to 14 d of recovery, they performed an acute heavy-resistance exercise protocol (10-repetition maximum (RM) leg press test plus six sets: 80%-100% of 10 RM). Maximal isometric force (MIF) was assessed before exercise, after exercise, and at 20, 40, and 60 min postexercise. Participants also reported their levels of perceived energy, fatigue, and soreness. Recovery data were analyzed with hierarchical linear modeling growth curve analysis. Life event stress significantly moderated the linear (P = 0.013) and quadratic (P = 0.05) recovery of MIF. This relationship held even when the model was adjusted for fitness, workload, and training experience. Likewise, perceived stress moderated linear recovery of MIF (P = 0.023). Neither the USQ nor the Perceived Stress Scale significantly moderated changes in energy, fatigue, or soreness. Life event stress and perceived stress both moderated the recovery of muscular function, but not psychological responses, in the first hour after strenuous resistance exercise.
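
    A hedged sketch of this type of growth-curve analysis (synthetic data and illustrative variable names, not the study's dataset): repeated MIF measures are nested within participants, with a stress-by-time interaction testing moderation of recovery:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n, times = 31, np.array([0, 20, 40, 60])     # minutes postexercise
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), len(times)),
    "time": np.tile(times, n),
})
df["stress"] = np.repeat(rng.normal(size=n), len(times))
df["mif"] = (80 + 0.2 * df["time"] - 0.1 * df["stress"] * df["time"]
             + rng.normal(0, 2, len(df)))    # toy recovery trajectories

# Mixed model with random intercepts per subject; linear + quadratic time
model = smf.mixedlm("mif ~ time + I(time**2) + stress:time", df,
                    groups=df["subject"])
print(model.fit().summary())
```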

  16. Conceptual problems in detecting the evolution of dark energy when using distance measurements

    NASA Astrophysics Data System (ADS)

    Bolejko, K.

    2011-01-01

    Context. Dark energy is now one of the most important and topical problems in cosmology. The first step to reveal its nature is to detect the evolution of dark energy or to prove beyond doubt that the cosmological constant is indeed constant. However, in the standard approach to cosmology, the Universe is described by the homogeneous and isotropic Friedmann models. Aims: We aim to show that in a perturbed universe (even if the perturbations vanish when averaged over sufficiently large scales) the distance-redshift relation is not the same as in the unperturbed universe. This has serious consequences for studies of the nature of dark energy and, as shown here, can impair their analysis. Methods: The analysis is based on two methods: the linear lensing approximation and the non-linear Szekeres Swiss-Cheese model. The inhomogeneity scale is ~50 Mpc, and both models have the same density fluctuations along the line of sight. Results: The comparison between the linear and non-linear methods shows that non-linear corrections are not negligible. When inhomogeneities are present the distance changes by several percent. To show how this change influences the measurements of dark energy, ten future observations with 2% uncertainties are generated. It is shown that, using the standard methods (i.e. under the assumption of homogeneity), the systematics due to inhomogeneities can distort our analysis and may lead to the conclusion that dark energy evolves when in fact it is constant (or vice versa). Conclusions: Therefore, if future observations are analysed only within the homogeneous framework, the impact of inhomogeneities (such as voids and superclusters) can be mistaken for evolving dark energy. Since the robust distinction between the evolution and non-evolution of dark energy is the first step to understanding its nature, a proper handling of inhomogeneities is essential.

  17. Predicting trace organic compound breakthrough in granular activated carbon using fluorescence and UV absorbance as surrogates.

    PubMed

    Anumol, Tarun; Sgroi, Massimiliano; Park, Minkyu; Roccaro, Paolo; Snyder, Shane A

    2015-06-01

    This study investigated the applicability of bulk organic parameters such as dissolved organic carbon (DOC), UV absorbance at 254 nm (UV254), and total fluorescence (TF) as surrogates for predicting trace organic compound (TOrC) removal by granular activated carbon (GAC) in water reuse applications. Using rapid small-scale column testing, empirical linear correlations for thirteen TOrCs were determined with DOC, UV254, and TF in four wastewater effluents. Linear correlations (R(2) > 0.7) were obtained for eight TOrCs in each water quality in the UV254 model, while ten TOrCs had R(2) > 0.7 in the TF model. Conversely, DOC was shown to be a poor surrogate for TOrC breakthrough prediction. When the data from all four water qualities were combined, good linear correlations were still obtained, with TF having a higher R(2) than UV254, especially for TOrCs with log Dow > 1. An excellent linear relationship (R(2) > 0.9) between log Dow and the removal of TOrC at 0% surrogate removal (y-intercept) was obtained for the five neutral TOrCs tested in this study. Positively charged TOrCs had enhanced removals due to electrostatic interactions with the negatively charged GAC, which caused them to deviate from the removals that would be expected from their log Dow. Application of the empirical linear correlation models to full-scale samples provided good results for six of seven TOrCs (except meprobamate) when comparing predicted TOrC removal by UV254 and TF with actual removals for GAC in all five samples tested. Surrogate predictions using UV254 and TF provide valuable tools for rapid or on-line monitoring of GAC performance and can result in cost savings through extended GAC run times as compared to using DOC breakthrough to trigger regeneration or replacement. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Repopulation Kinetics and the Linear-Quadratic Model

    NASA Astrophysics Data System (ADS)

    O'Rourke, S. F. C.; McAneney, H.; Starrett, C.; O'Sullivan, J. M.

    2009-08-01

    The standard Linear-Quadratic (LQ) survival model for radiotherapy is used to investigate different schedules of radiation treatment planning for advanced head and neck cancer. We explore how these treatment protocols may be affected by different tumour repopulation kinetics between treatments. The laws for tumour cell repopulation include the logistic and Gompertz models, extending the work of Wheldon et al. [1], which was concerned with the case of exponential repopulation between treatments. Treatment schedules investigated include standardized and accelerated fractionation. Calculations based on the present work show that, even with growth laws scaled to ensure that the repopulation kinetics for advanced head and neck cancer are comparable, variation in the survival fraction of up to orders of magnitude emerged. Calculations show that application of the Gompertz model results in a significantly poorer prognosis for tumour eradication. Gaps in treatment also highlight the differences in the LQ model when the effect of repopulation kinetics is included.
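
    A minimal worked example of the calculation type described (generic LQ and Gompertz parameter values are assumed, not taken from the paper): each fraction kills cells by the LQ factor exp(-(αd + βd²)), and the population regrows between fractions following the closed-form Gompertz solution:

```python
import numpy as np

alpha, beta = 0.3, 0.03        # Gy^-1, Gy^-2 (typical assumed values)
d, n_frac, gap = 2.0, 35, 1.0  # 2 Gy per daily fraction, 1-day gaps
K, a = 1e9, 0.1                # Gompertz carrying capacity and rate (assumed)

N = 1e8                        # initial clonogenic cell number (assumed)
for _ in range(n_frac):
    N *= np.exp(-(alpha * d + beta * d**2))               # LQ cell kill
    N *= np.exp(np.log(K / N) * (1 - np.exp(-a * gap)))   # Gompertz regrowth

print(f"surviving cells after treatment: {N:.3g}")
```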

  19. Latest astronomical constraints on some non-linear parametric dark energy models

    NASA Astrophysics Data System (ADS)

    Yang, Weiqiang; Pan, Supriya; Paliathanasis, Andronikos

    2018-04-01

    We consider non-linear redshift-dependent equation of state parameters as dark energy models in a spatially flat Friedmann-Lemaître-Robertson-Walker universe. To depict the expansion history of the universe in such cosmological scenarios, we take into account the large-scale behaviour of these parametric models and fit them using a set of the latest observational data of distinct origins, including cosmic microwave background radiation, Type Ia Supernovae, baryon acoustic oscillations, redshift space distortion, weak gravitational lensing, Hubble parameter measurements from cosmic chronometers, and finally the local Hubble constant from the Hubble Space Telescope. The fitting employs the publicly available code Cosmological Monte Carlo (COSMOMC) to extract the cosmological information from these parametric dark energy models. From our analysis, it follows that these models can describe the late-time accelerating phase of the universe, while they are distinguished from the Λ-cosmology.

  20. Protein linear indices of the 'macromolecular pseudograph alpha-carbon atom adjacency matrix' in bioinformatics. Part 1: prediction of protein stability effects of a complete set of alanine substitutions in Arc repressor.

    PubMed

    Marrero-Ponce, Yovani; Medina-Marrero, Ricardo; Castillo-Garit, Juan A; Romero-Zaldivar, Vicente; Torrens, Francisco; Castro, Eduardo A

    2005-04-15

    A novel approach to bio-macromolecular design from a linear algebra point of view is introduced. A protein's total (whole protein) and local (one or more amino acid) linear indices are a new set of bio-macromolecular descriptors of relevance to protein QSAR/QSPR studies. These amino-acid level biochemical descriptors are based on the calculation of linear maps on R^n [f_k(x_mi): R^n → R^n] in canonical basis. These bio-macromolecular indices are calculated from the kth power of the macromolecular pseudograph alpha-carbon atom adjacency matrix. Total linear indices are linear functionals on R^n; that is, the kth total linear indices are linear maps from R^n to the scalars R [f_k(x_m): R^n → R]. Thus, the kth total linear indices are calculated by summing the amino-acid linear indices of all amino acids in the protein molecule. A study of the protein stability effects for a complete set of alanine substitutions in the Arc repressor illustrates this approach. A quantitative model that discriminates near wild-type stability alanine mutants from the reduced-stability ones in a training series was obtained. This model permitted the correct classification of 97.56% (40/41) and 91.67% (11/12) of proteins in the training and test sets, respectively. It shows a high Matthews correlation coefficient (MCC = 0.952) for the training set and MCC = 0.837 for the external prediction set. Additionally, canonical regression analysis corroborated the statistical quality of the classification model (Rcanc = 0.824). This analysis was also used to compute biological stability canonical scores for each Arc alanine mutant. On the other hand, the linear piecewise regression model compared favorably with the linear regression one in predicting the melting temperature (tm) of the Arc alanine mutants. The linear model explains almost 81% of the variance of the experimental tm (R = 0.90 and s = 4.29), and the LOO (leave-one-out) PRESS statistics evidenced its predictive ability (q2 = 0.72 and scv = 4.79). Moreover, the TOMOCOMD-CAMPS method produced a linear piecewise regression (R = 0.97) between protein backbone descriptors and tm values for alanine mutants of the Arc repressor. A break-point value of 51.87 °C characterized two mutant clusters and coincided perfectly with the experimental scale. For this reason, we can use the linear discriminant analysis and piecewise models in combination to classify and predict the stability of the mutant Arc homodimers. These models also permitted the interpretation of the driving forces of such a folding process, indicating that topologic/topographic protein backbone interactions control the stability profile of wild-type Arc and its alanine mutants.
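
    A toy sketch of the index construction (a 5-residue chain and made-up property weights, not the actual pseudograph or parameterization): local indices come from applying the kth power of the adjacency matrix to a property vector, and the total index is their sum:

```python
import numpy as np

# Alpha-carbon backbone adjacency of a 5-residue chain (illustrative)
M = np.zeros((5, 5), dtype=int)
for i in range(4):
    M[i, i + 1] = M[i + 1, i] = 1

x = np.array([0.1, 0.3, 0.2, 0.5, 0.4])   # assumed amino-acid property weights
for k in range(4):
    f_local = np.linalg.matrix_power(M, k) @ x   # local (amino acid) indices
    f_total = f_local.sum()                      # kth total linear index
    print(k, round(f_total, 3))
```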

  1. An open-access CMIP5 pattern library for temperature and precipitation: description and methodology

    NASA Astrophysics Data System (ADS)

    Lynch, Cary; Hartin, Corinne; Bond-Lamberty, Ben; Kravitz, Ben

    2017-05-01

    Pattern scaling is used to efficiently emulate general circulation models and explore uncertainty in climate projections under multiple forcing scenarios. Pattern scaling methods assume that local climate changes scale with the global mean temperature increase, allowing spatial patterns to be generated for multiple models for any future emission scenario. For uncertainty quantification and probabilistic statistical analysis, a library of patterns with descriptive statistics for each file would be beneficial, but such a library does not presently exist. Of the possible techniques used to generate patterns, the two most prominent are the delta and least squares regression methods. We explore the differences and statistical significance between patterns generated by each method and assess the performance of the generated patterns across methods and scenarios. Differences in patterns across seasons between methods and epochs were largest at high latitudes (60-90° N/S). Bias and mean errors between modeled and pattern-predicted output were smaller for the linear regression method than for the delta method. Across scenarios, differences in the linear regression method patterns were more statistically significant, especially at high latitudes. We found that the pattern generation methodologies were able to approximate the forced signal of change to within 0.5 °C, but the choice of pattern generation methodology for pattern scaling purposes should be informed by user goals and criteria. This paper describes our library of least squares regression patterns from all CMIP5 models for temperature and precipitation on an annual and sub-annual basis, along with the code used to generate these patterns. The dataset and netCDF data generation code are available at doi:10.5281/zenodo.495632.
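
    A compact sketch of least squares pattern generation (synthetic fields standing in for CMIP5 output): each grid cell's local temperature change is regressed against global mean temperature change across time, and the per-cell slopes form the pattern:

```python
import numpy as np

rng = np.random.default_rng(4)
n_years, n_lat, n_lon = 100, 12, 24
tglobal = np.linspace(0, 3, n_years) + rng.normal(0, 0.1, n_years)
pattern_true = rng.uniform(0.5, 2.0, (n_lat, n_lon))  # local °C per global °C
local = (pattern_true[None] * tglobal[:, None, None]
         + rng.normal(0, 0.2, (n_years, n_lat, n_lon)))

# Least squares slope per cell: cov(local, tglobal) / var(tglobal)
t_anom = tglobal - tglobal.mean()
slope = (np.tensordot(t_anom, local - local.mean(0), axes=(0, 0))
         / (t_anom**2).sum())
print("max abs pattern error:", np.abs(slope - pattern_true).max())
```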

  2. What does the Cantril Ladder measure in adolescence?

    PubMed

    Mazur, Joanna; Szkultecka-Dębek, Monika; Dzielska, Anna; Drozd, Mariola; Małkowska-Szkutnik, Agnieszka

    2018-01-01

    The Cantril Scale (CS) is a simple visual scale which makes it possible to assess general life satisfaction. The result may depend on health, living, and studying conditions, and on the quality of social relations. The objective of this study is to identify key factors influencing the CS score in Polish adolescents. The survey comprised 1,423 parent-child pairs (54% girls; age range: 10-17; 67.3% urban inhabitants; 89.4% of parents were mothers). Linear and logistic models were estimated; the latter used alternative divisions into "satisfied" and "dissatisfied" with life. In addition to age and gender, child-reported KIDSCREEN-52 quality of life indexes were taken into account, along with information provided by parents: child physical (CSHCN) and mental (SDQ) health, and family socio-economic conditions. According to the linear model, nine independent predictors, including six dimensions of KIDSCREEN-52, explain 47.2% of the variability of life satisfaction on the Cantril Scale. Self-perception was found to have a dominating influence (ΔR² = 0.301, p < 0.001). Important CS predictors also included Psychological Well-being (ΔR² = 0.088, p < 0.001) and Parent Relations (ΔR² = 0.041, p < 0.001). The impact of socioeconomic factors was more visible in boys and in older adolescents. According to the logistic models, the key factors enhancing the chance of higher life satisfaction are Moods and Emotions (cut-off point CS > 5) and School Environment (CS > 8 points). None of the models indicated a relationship between the CS and physical health. The Cantril Scale can be considered a useful measurement tool in a broad approach to psychosocial adolescent health.

  3. Development of Computational Aeroacoustics Code for Jet Noise and Flow Prediction

    NASA Astrophysics Data System (ADS)

    Keith, Theo G., Jr.; Hixon, Duane R.

    2002-07-01

    Accurate prediction of jet fan and exhaust plume flow and of noise generation and propagation is very important in developing advanced aircraft engines that will pass current and future noise regulations. In jet fan flows as well as exhaust plumes, two major sources of noise are present: large-scale coherent instabilities and small-scale turbulent eddies. In previous work for the NASA Glenn Research Center, three strategies have been explored in an effort to computationally predict the noise radiation from supersonic jet exhaust plumes. In order of increasing computational cost, these are: 1) Linearized Euler equations (LEE), 2) Very Large Eddy Simulations (VLES), and 3) Large Eddy Simulations (LES). The first method solves the linearized Euler equations (LEE). These equations are obtained by linearizing about a given mean flow and neglecting viscous effects. In this way, the noise from large-scale instabilities can be found for a given mean flow. The linearized Euler equations are computationally inexpensive and have produced good noise results for supersonic jets where the large-scale instability noise dominates, as well as for the tone noise from a jet engine blade row. However, these linear equations do not predict the absolute magnitude of the noise; only the relative magnitude is predicted. Also, the predicted disturbances do not modify the mean flow, removing a physical mechanism by which the amplitude of the disturbance may be controlled. Recent research on isolated airfoils indicates that this may not greatly affect the solution at low frequencies. The second method addresses some of the concerns raised by the LEE method. In this approach, called Very Large Eddy Simulation (VLES), the unsteady Reynolds-averaged Navier-Stokes equations are solved directly using a high-accuracy computational aeroacoustics numerical scheme. With the addition of a two-equation turbulence model and the use of a relatively coarse grid, the numerical solution is effectively filtered into a directly calculated mean flow, with the small-scale turbulence being modeled and an unsteady large-scale component that is also directly calculated. In this way, the unsteady disturbances are calculated in a nonlinear way, with a direct effect on the mean flow. This method is not as fast as the LEE approach but has many advantages to recommend it; however, like the LEE approach, only the effect of the largest unsteady structures is captured. An initial calculation was performed on a supersonic jet exhaust plume, with promising results, but the calculation was hampered by the explicit time marching scheme that was employed. This explicit scheme required a very small time step to resolve the nozzle boundary layer, which caused a long run time. Current work is focused on testing a lower-order implicit time marching method to combat this problem.

  4. Assessing Validity of Measurement in Learning Disabilities Using Hierarchical Generalized Linear Modeling: The Roles of Anxiety and Motivation

    ERIC Educational Resources Information Center

    Sideridis, Georgios D.

    2016-01-01

    The purpose of the present studies was to test the hypothesis that the psychometric characteristics of ability scales may be significantly distorted if one accounts for emotional factors during test taking. Specifically, the present studies evaluate the effects of anxiety and motivation on the item difficulties of the Rasch model. In Study 1, the…

  5. Recognition by Linear Combination of Models

    DTIC Science & Technology

    1989-08-01

    to the model (or to the viewed object) prior to, or during, the matching stage. Such an approach is used in [Chien & Aggarwal 1987, Faugeras & Hebert 1986, Fishler & Bolles 1981, Huttenlocher & Ullman 1987, Lowe 1985, Thompson & Mundy 1987, Ullman 1986]. Key problems that arise in any alignment... cludes 3-D rotation, translation and scaling, followed by an orthographic projection. The transformation is determined as in [Huttenlocher & Ullman 1987

  6. Bridging Scales: A Model-Based Assessment of the Technical Tidal-Stream Energy Resource off Massachusetts, USA

    NASA Astrophysics Data System (ADS)

    Cowles, G. W.; Hakim, A.; Churchill, J. H.

    2016-02-01

    Tidal in-stream energy conversion (TISEC) facilities provide a highly predictable and dependable source of energy. Given the economic and social incentives to migrate towards renewable energy sources, there has been tremendous interest in the technology. Key challenges in the design process stem from the wide range of problem scales, extending from device to array. In the present approach we apply a multi-model approach to bridge the scales of interest and select optimal device geometries to estimate the technical resource for several realistic sites in the coastal waters of Massachusetts, USA. The approach links two computational models. To establish flow conditions at site scales (~10 m), a barotropic setup of the unstructured grid ocean model FVCOM is employed. The model is validated using shipboard and fixed ADCP as well as pressure data. For the device scale, the structured multiblock flow solver SUmb is selected. A large ensemble of simulations of 2D cross-flow tidal turbines is used to construct a surrogate design model. The surrogate model is then queried using velocity profiles extracted from the tidal model to determine the optimal geometry for the conditions at each site. After device selection, the annual technical yield of the array is evaluated with FVCOM using a linear momentum actuator disk approach to model the turbines. Results for several key Massachusetts sites, including comparison with theoretical approaches, will be presented.

  7. Algorithmically scalable block preconditioner for fully implicit shallow-water equations in CAM-SE

    DOE PAGES

    Lott, P. Aaron; Woodward, Carol S.; Evans, Katherine J.

    2014-10-19

    Performing accurate and efficient numerical simulation of global atmospheric climate models is challenging due to the disparate length and time scales over which physical processes interact. Implicit solvers enable the physical system to be integrated with a time step commensurate with the processes being studied. The dominant cost of an implicit time step is the ancillary linear system solves, so we have developed a preconditioner aimed at improving the efficiency of these linear system solves. Our preconditioner is based on an approximate block factorization of the linearized shallow-water equations and has been implemented within the spectral element dynamical core of the Community Atmospheric Model (CAM-SE). Furthermore, in this paper we discuss the development and scalability of the preconditioner for a suite of test cases with the implicit shallow-water solver within CAM-SE.
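
    A hedged sketch of an approximate block-factorization preconditioner for a generic 2x2 block system [[A, B], [C, D]] (random sparse stand-ins, not the CAM-SE shallow-water blocks): the Schur complement S = D - C A⁻¹ B is replaced by a cheap approximation built from the diagonal of A, and the preconditioner applies a block forward/back substitution inside GMRES:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
A = (sp.eye(n) * 2.0 + sp.random(n, n, 0.01, random_state=0) * 0.1).tocsc()
B = sp.random(n, n, 0.01, random_state=1).tocsc()
C = sp.random(n, n, 0.01, random_state=2).tocsc()
D = (sp.eye(n) * 3.0).tocsc()
K = sp.bmat([[A, B], [C, D]]).tocsc()

S_hat = (D - C @ sp.diags(1.0 / A.diagonal()) @ B).tocsc()  # approx. Schur
solve_A = spla.factorized(A)
solve_S = spla.factorized(S_hat)

def apply_prec(r):                    # block forward/back substitution
    r1, r2 = r[:n], r[n:]
    y2 = solve_S(r2 - C @ solve_A(r1))
    y1 = solve_A(r1 - B @ y2)
    return np.concatenate([y1, y2])

P = spla.LinearOperator((2 * n, 2 * n), matvec=apply_prec)
x, info = spla.gmres(K, np.ones(2 * n), M=P)
print("GMRES converged:", info == 0)
```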

  8. Community-based comprehensive intervention for people with schizophrenia in Guangzhou, China: Effects on clinical symptoms, social functioning, internalized stigma and discrimination.

    PubMed

    Li, Jie; Huang, Yuan-Guang; Ran, Mao-Sheng; Fan, Yu; Chen, Wen; Evans-Lacko, Sara; Thornicroft, Graham

    2018-04-01

    Comprehensive interventions including components of stigma and discrimination reduction for schizophrenia in low- and middle-income countries (LMICs) are lacking. We developed a community-based comprehensive intervention and evaluated its effects on clinical symptoms, social functioning, internalized stigma and discrimination among patients with schizophrenia. A randomized controlled trial including an intervention group (n = 169) and a control group (n = 158) was performed. The intervention group received the comprehensive intervention (strategies against stigma and discrimination, psycho-education, social skills training and cognitive behavioral therapy) and the control group received face-to-face interviews. Both lasted for nine months. Participants were assessed at baseline, 6 months and 9 months using the Internalized Stigma of Mental Illness scale (ISMI), Discrimination and Stigma Scale (DISC-12), Global Assessment of Functioning (GAF), Schizophrenia Quality of Life Scale (SQLS), Self-Esteem Scale (SES), Brief Psychiatric Rating Scale (BPRS) and PANSS negative scale (PANSS-N). Insight and medication compliance were evaluated by senior psychiatrists. Data were analyzed by descriptive statistics, t-test, chi-square test or Fisher's exact test. Linear Mixed Models were used to assess intervention effectiveness on the scales. General Linear Mixed Models with a multinomial logistic link function were used to assess the effectiveness on medication compliance and insight. We found a significant reduction in anticipated discrimination, BPRS and PANSS-N total scores, and an increase in overcoming stigma and GAF in the intervention group after 9 months. These results suggest the intervention may be effective in reducing anticipated discrimination, increasing skills for overcoming stigma, and improving clinical symptoms and social functioning in Chinese patients with schizophrenia. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  9. On the use of a physically-based baseflow timescale in land surface models.

    NASA Astrophysics Data System (ADS)

    Jost, A.; Schneider, A. C.; Oudin, L.; Ducharne, A.

    2017-12-01

    Groundwater discharge is an important component of streamflow, and estimating its spatio-temporal variation in response to changes in recharge is of great value to water resource planning and essential for modelling an accurate large-scale water balance in land surface models (LSMs). A first-order representation of groundwater as a single linear storage element is frequently used in LSMs for the sake of simplicity, but it requires a suitable parametrization of the aquifer hydraulic behaviour in the form of the baseflow characteristic timescale (τ). Such a modelling approach can be hampered by the lack of available calibration data at global scale. Hydraulic groundwater theory provides an analytical framework to relate the baseflow characteristics to catchment descriptors. In this study, we use the long-time solution of the linearized Boussinesq equation to estimate τ at global scale, as a function of groundwater flow length and aquifer hydraulic diffusivity. Our goal is to evaluate the use of this spatially variable, physically-based τ in the ORCHIDEE land surface model in terms of simulated river discharges across large catchments. Aquifer transmissivity and drainable porosity stem from the high-resolution GLHYMPS datasets, whereas flow length is derived from an estimate of drainage density using the GRIN global river network. ORCHIDEE is run in offline mode and its results are compared to a reference simulation using an almost spatially constant, topography-dependent τ. We discuss the limits of our approach in terms of both the relevance and accuracy of global estimates of aquifer hydraulic properties and the extent to which the underlying assumptions of the analytical method are valid.
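
    As a rough numerical illustration of the scaling involved (the geometric prefactor is omitted and all values are assumed, so the paper's exact expression may differ): τ scales as L²/D, with flow length L set by drainage density and hydraulic diffusivity D = T/f:

```python
# Assumed inputs of the kind the study draws from gridded datasets
T_aquifer = 1e-3          # transmissivity [m^2/s] (GLHYMPS-like value)
f_porosity = 0.05         # drainable porosity [-]
drainage_density = 1e-3   # channel length per unit area [1/m]

L = 1.0 / (2.0 * drainage_density)   # mean flow length to the nearest stream
D = T_aquifer / f_porosity           # hydraulic diffusivity [m^2/s]
tau_seconds = L**2 / D               # baseflow timescale, up to an O(1) factor
print(f"tau ~ {tau_seconds / 86400:.0f} days")
```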

  10. Mapping the Dark Matter with 6dFGS

    NASA Astrophysics Data System (ADS)

    Mould, Jeremy R.; Magoulas, C.; Springob, C.; Colless, M.; Jones, H.; Lucey, J.; Erdogdu, P.; Campbell, L.

    2012-05-01

    Fundamental plane distances from the 6dF Galaxy Redshift Survey are fitted to a model of the density field within 200/h Mpc. The likelihood is maximized for a single value of the local galaxy density, as expected in linear theory for the relation between overdensity and peculiar velocity. The dipole of the inferred southern hemisphere early-type galaxy peculiar velocities is calculated within 150/h Mpc, before and after correction for the individual galaxy velocities predicted by the model. The former agrees with that obtained by other peculiar velocity studies (e.g. SFI++). The latter is only of order 150 km/s and consistent with the expectations of the standard cosmological model and recent forecasts of the cosmic Mach number, which show a linearly declining bulk flow with increasing scale.

  11. Spatial structure, sampling design and scale in remotely-sensed imagery of a California savanna woodland

    NASA Technical Reports Server (NTRS)

    Mcgwire, K.; Friedl, M.; Estes, J. E.

    1993-01-01

    This article describes research related to sampling techniques for establishing linear relations between land surface parameters and remotely-sensed data. Predictive relations are estimated between percentage tree cover in a savanna environment and a normalized difference vegetation index (NDVI) derived from the Thematic Mapper sensor. Spatial autocorrelation in original measurements and regression residuals is examined using semi-variogram analysis at several spatial resolutions. Sampling schemes are then tested to examine the effects of autocorrelation on predictive linear models in cases of small sample sizes. Regression models between image and ground data are affected by the spatial resolution of analysis. Reducing the influence of spatial autocorrelation by enforcing minimum distances between samples may also improve empirical models which relate ground parameters to satellite data.
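
    A minimal sketch of the semivariogram computation underpinning this kind of analysis (a synthetic autocorrelated transect, not the Thematic Mapper data): γ(h) is half the mean squared difference of values separated by lag h, and its growth with lag reveals the autocorrelation range:

```python
import numpy as np

rng = np.random.default_rng(5)
z = np.cumsum(rng.normal(size=500)) * 0.1     # spatially autocorrelated series

def semivariogram(z, max_lag):
    return [0.5 * np.mean((z[h:] - z[:-h])**2) for h in range(1, max_lag + 1)]

gamma = semivariogram(z, 50)
print("gamma at lag 1 vs lag 50:", round(gamma[0], 4), round(gamma[-1], 4))
```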

  12. System identification and the modeling of sailing yachts

    NASA Astrophysics Data System (ADS)

    Legursky, Katrina

    This research represents an exploration of sailing yacht dynamics with full-scale sailing motion data, physics-based models, and system identification techniques. The goal is to provide a method of obtaining and validating suitable physics-based dynamics models for use in control system design on autonomous sailing platforms, which have the capacity to serve as mobile, long-range, high-endurance autonomous ocean sensing platforms. The primary contributions of this study to the state of the art are the formulation of a five degree-of-freedom (DOF) linear multi-input multi-output (MIMO) state space model of sailing yacht dynamics, the process for identification of this model from full-scale data, a description of the maneuvers performed during on-water tests, and an analysis method to validate estimated models. The techniques and results described herein can be directly applied to and tested on existing autonomous sailing platforms. A full-scale experiment on a 23 ft monohull sailing yacht was developed to collect motion data for physics-based model identification. Measurements include three axes of accelerations, velocities, angular rates, and attitude angles, in addition to apparent wind speed and direction. The sailing yacht herein is treated as a dynamic system with two control inputs, the rudder angle, δR, and the mainsail angle, δB, which are also measured. Over 20 hours of full-scale sailing motion data were collected, representing three sail configurations corresponding to a range of wind speeds: the Full Main and Genoa (abbrev. Genoa) for lower wind speeds, the Full Main and Jib (abbrev. Jib) for mid-range wind speeds, and the Reefed Main and Jib (abbrev. Reef) for the highest wind speeds. The data also cover true wind angles from upwind through a beam reach. A physics-based non-linear model to describe sailing yacht motion is outlined, including descriptions of methods to model the aerodynamics and hydrodynamics of a sailing yacht in surge, sway, roll, and yaw. Existing aerodynamic models for sailing yachts are unsuitable for control system design as they do not include a physical description of the sails' dynamic effect on the system. A new aerodynamic model is developed and validated using the full-scale sailing data, which includes sail deflection as a control input to the system. The Maximum Likelihood Estimation (MLE) algorithm is used with non-linear simulation data to successfully estimate a set of hydrodynamic derivatives for a sailing yacht. It is shown that all sailing yacht models contain a second-order mode (referred to herein as Mode 1A.S or 4B.S) which depends on the trimmed roll angle: in one trim regime roll rate and roll angle are the dominant motion variables, while in the other surge velocity and yaw rate dominate. This second-order mode transitions from dynamic stability at higher values of the trimmed roll angle to instability at lower values. These conclusions align with other work which has also found roll angle to be a driving factor in the dynamic behavior of a tall ship (Johnson, Miles, Lasher, & Womack, 2009). It is also shown that all linear models contain a first-order mode (referred to herein as Mode 3A.F or 1B.F) which lies very close to the origin of the complex plane, indicating a long time constant. Measured models have indicated this mode can be stable or unstable.
The eigenvector analysis reveals that the mode is stable if the surge contribution is < 40% and the sway contribution is > 20%. The small set of maneuvers necessary for model identification, the quick OSLS estimation method, and the detailed modal analysis of estimated models outlined in this work are immediately applicable to existing autonomous monohull sailing yachts, and could readily be adapted for use with other wind-powered vessel configurations such as wing-sails, catamarans, and trimarans. (Abstract shortened by UMI.)
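
    A hedged sketch of the model form described above, x' = Ax + Bu with u = [rudder angle, mainsail angle] (the matrices below are illustrative placeholders, not the identified yacht dynamics); stability and modal structure come from the eigen-decomposition of A, mirroring the modal analysis of the estimated models:

```python
import numpy as np

A = np.array([[-0.1, 0.0, 0.2, 0.0, 0.0],     # placeholder 5-DOF dynamics
              [0.0, -0.3, 0.0, 0.1, 0.0],
              [0.0, 0.0, -0.2, 0.0, 1.0],
              [0.0, 0.1, 0.0, -0.4, 0.0],
              [0.0, 0.0, -0.5, 0.0, -0.05]])
B = np.array([[0.0, 0.1],                     # placeholder input matrix
              [0.2, 0.0],
              [0.0, 0.0],
              [0.5, 0.1],
              [0.0, 0.0]])

eigvals, eigvecs = np.linalg.eig(A)           # modes of the linear model
print("stable:", bool(np.all(eigvals.real < 0)))

# Forward-Euler simulation of a small rudder step input
x, dt, u = np.zeros(5), 0.05, np.array([np.deg2rad(5), 0.0])
for _ in range(400):
    x = x + dt * (A @ x + B @ u)
print("state after 20 s:", x.round(3))
```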

  13. A Model-Data Fusion Approach for Constraining Modeled GPP at Global Scales Using GOME2 SIF Data

    NASA Astrophysics Data System (ADS)

    MacBean, N.; Maignan, F.; Lewis, P.; Guanter, L.; Koehler, P.; Bacour, C.; Peylin, P.; Gomez-Dans, J.; Disney, M.; Chevallier, F.

    2015-12-01

    Predicting the fate of ecosystem carbon (C) stocks and their sensitivity to climate change relies heavily on our ability to accurately model the gross carbon fluxes, i.e. photosynthesis and respiration. However, there are large differences in the Gross Primary Productivity (GPP) simulated by different land surface models (LSMs), not only in terms of mean value, but also in terms of phase and amplitude when compared to independent data-based estimates. This strongly limits our ability to provide accurate predictions of carbon-climate feedbacks. One possible source of this uncertainty is inaccurate parameter values resulting from incomplete model calibration. Solar Induced Fluorescence (SIF) has been shown to have a linear relationship with GPP at the typical spatio-temporal scales used in LSMs (Guanter et al., 2011). New satellite-derived SIF datasets have the potential to constrain LSM parameters related to C uptake at global scales due to their coverage. Here we use SIF data derived from the GOME2 instrument (Köhler et al., 2014) to optimize parameters related to photosynthesis and leaf phenology of the ORCHIDEE LSM, as well as the linear relationship between SIF and GPP. We use a multi-site approach that combines many model grid cells covering a wide spatial distribution within the same optimization (e.g. Kuppel et al., 2014). The parameters are constrained per Plant Functional Type, as the linear relationship described above varies depending on vegetation structural properties. The relative skill of the optimization is compared to a case where only satellite-derived vegetation index data are used to constrain the model, and to a case where both data streams are used. We evaluate the results using an independent data-driven estimate derived from FLUXNET data (Jung et al., 2011) and with a new atmospheric tracer, carbonyl sulphide (OCS), following the approach of Launois et al. (ACPD, in review). We show that the optimization reduces the strong positive bias of the ORCHIDEE model and increases the correlation compared to independent estimates. Differences in spatial patterns and gradients between simulated GPP and observed SIF remain largely unchanged, however, suggesting that the underlying representation of vegetation type and/or structure and functioning in the model requires further investigation.

  14. Lifting primordial non-Gaussianity above the noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welling, Yvette; Woude, Drian van der; Pajer, Enrico, E-mail: welling@strw.leidenuniv.nl, E-mail: D.C.vanderWoude@uu.nl, E-mail: enrico.pajer@gmail.com

    2016-08-01

    Primordial non-Gaussianity (PNG) in Large Scale Structures is obfuscated by the many additional sources of non-linearity. Within the Effective Field Theory approach to Standard Perturbation Theory, we show that matter non-linearities in the bispectrum can be modeled sufficiently well to strengthen current bounds with near-future surveys, such as Euclid. We find that the EFT corrections are crucial to this improvement in sensitivity. Yet, our understanding of non-linearities is still insufficient to reach important theoretical benchmarks for equilateral PNG, while, for local PNG, our forecast is more optimistic. We consistently account for the theoretical error intrinsic to the perturbative approach and discuss the details of its implementation in Fisher forecasts.

  15. Distinguishing centrarchid genera by use of lateral line scales

    USGS Publications Warehouse

    Roberts, N.M.; Rabeni, C.F.; Stanovick, J.S.

    2007-01-01

    Predator-prey relations involving fishes are often evaluated using scales remaining in gut contents or feces. While several reliable keys help identify North American freshwater fish scales to the family level, none attempt to separate the family Centrarchidae to the genus level. Centrarchidae is of particular concern in the midwestern United States because it contains several popular sport fishes, such as smallmouth bass Micropterus dolomieu, largemouth bass M. salmoides, and rock bass Ambloplites rupestris, as well as less-sought-after species of sunfishes Lepomis spp. and crappies Pomoxis spp. Differentiating sport fish from non-sport fish has important management implications. Morphological characteristics of lateral line scales (n = 1,581) from known centrarchid fishes were analyzed. The variability of measurements within and between genera was examined to select the variables most useful for classifying unknown centrarchid scales. A linear discriminant analysis model was developed using 10 variables. Based on this model, 84.4% of Ambloplites scales, 81.2% of Lepomis scales, and 86.6% of Micropterus scales were classified correctly using a jackknife procedure. Copyright © 2007 by the American Fisheries Society.
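
    A sketch of this type of analysis (synthetic morphometric features standing in for the 10 scale measurements): LDA classifies genus, and the jackknife accuracy is estimated by leave-one-out cross-validation:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(6)
# Three toy genera with shifted means across 10 illustrative measurements
X = np.vstack([rng.normal(loc=m, size=(60, 10)) for m in (0.0, 1.0, 2.0)])
y = np.repeat(["Ambloplites", "Lepomis", "Micropterus"], 60)

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print(f"jackknifed classification accuracy: {acc.mean():.1%}")
```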

  16. A density matrix-based method for the linear-scaling calculation of dynamic second- and third-order properties at the Hartree-Fock and Kohn-Sham density functional theory levels.

    PubMed

    Kussmann, Jörg; Ochsenfeld, Christian

    2007-11-28

    A density matrix-based time-dependent self-consistent field (D-TDSCF) method for the calculation of dynamic polarizabilities and first hyperpolarizabilities using the Hartree-Fock and Kohn-Sham density functional theory approaches is presented. The D-TDSCF method allows us to reduce the asymptotic scaling behavior of the computational effort from cubic to linear for systems with a nonvanishing band gap. The linear scaling is achieved by combining a density matrix-based reformulation of the TDSCF equations with linear-scaling schemes for the formation of Fock- or Kohn-Sham-type matrices. In our reformulation only potentially linear-scaling matrices enter the formulation and efficient sparse algebra routines can be employed. Furthermore, the corresponding formulas for the first hyperpolarizabilities are given in terms of zeroth- and first-order one-particle reduced density matrices according to Wigner's (2n+1) rule. The scaling behavior of our method is illustrated for first exemplary calculations with systems of up to 1011 atoms and 8899 basis functions.

  17. Logarithmic violation of scaling in anisotropic kinematic dynamo model

    NASA Astrophysics Data System (ADS)

    Antonov, N. V.; Gulitskiy, N. M.

    2016-01-01

    Inertial-range asymptotic behavior of a vector (e.g., magnetic) field, passively advected by a strongly anisotropic turbulent flow, is studied by means of the field theoretic renormalization group and the operator product expansion. The advecting velocity field is Gaussian, not correlated in time, with a pair correlation function of the form ∝ δ(t − t′)/k⊥^(d−1+ξ), where k⊥ = |k⊥| and k⊥ is the component of the wave vector perpendicular to the distinguished direction. The stochastic advection-diffusion equation for the transverse (divergence-free) vector field includes, as special cases, the kinematic dynamo model for magnetohydrodynamic turbulence and the linearized Navier-Stokes equation. In contrast to the well-known isotropic Kraichnan model, where various correlation functions exhibit anomalous scaling behavior with infinite sets of anomalous exponents, here the dependence on the integral turbulence scale L has a logarithmic behavior: instead of power-like corrections to ordinary scaling, determined by naive (canonical) dimensions, the anomalies manifest themselves as polynomials of logarithms of L.

  18. Remote and Local Influences in Forecasting Pacific SST: a Linear Inverse Model and a Multimodel Ensemble Study

    NASA Astrophysics Data System (ADS)

    Faggiani Dias, D.; Subramanian, A. C.; Zanna, L.; Miller, A. J.

    2017-12-01

    Sea surface temperature (SST) in the Pacific sector is well known to vary on time scales from seasonal to decadal, and the ability to predict these SST fluctuations has many societal and economic benefits. We therefore use a suite of statistical linear inverse models (LIMs) to understand the remote and local SST variability that influences SST predictions over the North Pacific region, and to further improve our understanding of how the long observed SST record can help better guide multi-model ensemble forecasts. Observed monthly SST anomalies in the Pacific sector (between 15°S and 60°N) are used to construct different regional LIMs for seasonal to decadal prediction. The forecast skills of the LIMs are compared to those of two operational forecast systems in the North American Multi-Model Ensemble (NMME), revealing that the LIM has better skill in the Northeastern Pacific than the NMME models. The LIM is also found to have forecast skill for SST in the Tropical Pacific comparable to the NMME models. This skill, however, is highly dependent on the initialization month, with forecasts initialized during the summer having better skill than those initialized during the winter. The forecast skill of the LIM is also influenced by the verification period utilized to make the predictions, likely due to the changing character of El Niño in the 20th century. The North Pacific seems to be a source of predictability for the Tropics on seasonal to interannual time scales, while the Tropics act to worsen the skill of the forecast in the North Pacific. The data were also bandpassed into seasonal, interannual and decadal time scales to identify the relationships between time scales using the structure of the propagator matrix. For the decadal component, this coupling occurs the other way around: the Tropics seem to be a source of predictability for the Extratropics, but the Extratropics do not improve the predictability for the Tropics. These results indicate the importance of temporal scale interactions in improving predictability on decadal timescales. Hence, we show that LIMs are useful not only as benchmarks for estimates of statistical skill, but also to isolate contributions to forecast skill from different timescales, spatial scales, or even model components.
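
    The core LIM construction is standard (this sketch uses a synthetic two-variable system, not the authors' SST data): the dynamics matrix is estimated from the lag-τ0 and lag-0 covariances, B = ln(C(τ0) C(0)⁻¹)/τ0, and forecasts use the propagator G(τ) = exp(Bτ):

```python
import numpy as np
from scipy.linalg import logm, expm

rng = np.random.default_rng(7)
B_true = np.array([[-0.2, 0.1], [-0.1, -0.3]])
dt, nt = 0.1, 50_000
x = np.zeros((nt, 2))
for t in range(nt - 1):                      # integrate dx = Bx dt + noise
    x[t + 1] = x[t] + dt * (B_true @ x[t]) + np.sqrt(dt) * rng.normal(0, 0.3, 2)

lag = 10                                     # tau0 = 1.0 time unit
C0 = x.T @ x / nt
Ctau = x[lag:].T @ x[:-lag] / (nt - lag)
B_est = np.real(logm(Ctau @ np.linalg.inv(C0))) / (lag * dt)  # drop tiny imag parts

G = expm(B_est * 2.0)                        # 2-time-unit forecast propagator
print("estimated B:\n", B_est.round(2))
forecast = G @ x[-1]                         # forecast from the last state
```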

  19. Polarization and Compressibility of Oblique Kinetic Alfven Waves

    NASA Technical Reports Server (NTRS)

    Hunana, Peter; Goldstein, M. L.; Passot, T.; Sulem, P. L.; Laveder, D.; Zank, G. P.

    2012-01-01

    Even though the solar wind, as a collisionless plasma, is properly described by the kinetic Maxwell-Vlasov description, it can be argued that much of our understanding of solar wind observational data comes from interpretation and numerical modeling based on the fluid description of magnetohydrodynamics. In recent years, there has been significant interest in better understanding the importance of kinetic effects, i.e. the differences between the kinetic and usual fluid descriptions. Here we concentrate on the physical properties of oblique kinetic Alfvén waves (KAWs), which are often recognized as one of the key ingredients in the solar wind turbulence cascade. We use three different fluid models with various degrees of complexity and calculate the polarization and magnetic compressibility of oblique KAWs (propagation angle θ = 88°), which we compare to solutions derived from linear kinetic theory. We explore a wide range of possible proton plasma beta, β ∈ [0.1, 10.0], and a wide range of length scales, k⊥rL ∈ [0.001, 10.0]. It is shown that the classical isotropic two-fluid model is very compressible in comparison with kinetic theory and that the largest discrepancy occurs at scales larger than the proton gyroscale. We also show that the two-fluid model contains a large error in the polarization of the electric field, even at scales k⊥rL ≪ 1. Furthermore, to understand these discrepancies between the two-fluid model and the kinetic theory, we employ two versions of the Landau fluid model that incorporate linear low-frequency kinetic effects such as Landau damping and finite Larmor radius (FLR) corrections into the fluid description. It is shown that Landau damping significantly reduces the magnetic compressibility and that FLR corrections (i.e. nongyrotropic contributions) are required to correctly capture the polarization. We also show that, in addition to Landau damping, FLR corrections are necessary to accurately describe the damping rate of KAWs. We conclude that kinetic effects are important even at scales significantly larger than the proton gyroscale, k⊥rL ≪ 1.

  20. Petri Net controller synthesis based on decomposed manufacturing models.

    PubMed

    Dideban, Abbas; Zeraatkar, Hashem

    2018-06-01

    Applying supervisory control theory to real systems in modeling tools such as Petri Nets (PNs) has become challenging in recent years due to the large number of states in the automata models and the presence of uncontrollable events. Uncontrollable events give rise to forbidden states, which can be removed by enforcing linear constraints. Although many methods have been proposed to reduce these constraints, enforcing them on a large-scale system remains difficult and complicated. This paper proposes a new method for controller synthesis based on PN modeling. In this approach, the original PN model is decomposed into smaller models, for which the computational cost is reduced significantly. Using this method, it is easy to reduce the constraints and enforce them on a Petri net model. The results of our proposed method on PN models demonstrate effective controller synthesis for large-scale systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
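
    For context, the standard place-invariant construction for enforcing a linear marking constraint l·m ≤ b is shown below on a small illustrative net (not the paper's case study): a control place is added with incidence row Dc = -l·D and initial marking mc0 = b - l·m0:

```python
import numpy as np

# Incidence matrix of a toy 3-place, 3-transition net (rows = places)
D = np.array([[-1, 1, 0],
              [1, -1, 0],
              [0, 1, -1]])
m0 = np.array([1, 0, 0])          # initial marking

l = np.array([0, 1, 1])           # constraint: m2 + m3 <= 1
b = 1
Dc = -l @ D                       # incidence row of the control place
mc0 = b - l @ m0                  # its initial marking

D_controlled = np.vstack([D, Dc])
m0_controlled = np.append(m0, mc0)
print("controlled incidence matrix:\n", D_controlled)
print("controlled initial marking:", m0_controlled)
```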
