NASA Technical Reports Server (NTRS)
Genthon, Christophe; Le Treut, Herve; Sadourny, Robert; Jouzel, Jean
1990-01-01
A Charney-Branscome based parameterization has been tested as a way of representing the eddy sensible heat transports missing in a zonally averaged dynamic model (ZADM) of the atmosphere. The ZADM used is a zonally averaged version of a general circulation model (GCM). The parameterized transports in the ZADM are gauged against the corresponding fluxes explicitly simulated in the GCM, using the same zonally averaged boundary conditions in both models. The Charney-Branscome approach neglects stationary eddies and transient barotropic disturbances and relies on a set of simplifying assumptions, including the linear approximation, to describe growing transient baroclinic eddies. Nevertheless, fairly satisfactory results are obtained when the parameterization is performed interactively with the model. Compared with noninteractive tests, a very efficient restoring feedback between the modeled zonal-mean climate and the parameterized meridional eddy transport is identified.
Zonally averaged model of dynamics, chemistry and radiation for the atmosphere
NASA Technical Reports Server (NTRS)
Tung, K. K.
1985-01-01
A nongeostrophic theory of zonally averaged circulation is formulated using the nonlinear primitive equations on a sphere, taking advantage of the more direct relationship between the mean meridional circulation and diabatic heating rate which is available in isentropic coordinates. Possible differences between results of nongeostrophic theory and the commonly used geostrophic formulation are discussed concerning: (1) the role of eddy forcing of the diabatic circulation, and (2) the nonlinear nearly inviscid limit vs the geostrophic limit. Problems associated with the traditional Rossby number scaling in quasi-geostrophic formulations are pointed out and an alternate, more general scaling based on the smallness of mean meridional to zonal velocities for a rotating planet is suggested. Such a scaling recovers the geostrophic balanced wind relationship for the mean zonal flow but reveals that the mean meridional velocity is in general ageostrophic.
NASA Technical Reports Server (NTRS)
Plumb, R. A.
1985-01-01
Two-dimensional modeling has become an established technique for the simulation of the global structure of trace constituents. Such models are simpler to formulate and cheaper to run than three-dimensional general circulation models, while avoiding some of the gross simplifications of one-dimensional models. Nevertheless, the parameterization of eddy fluxes required in a 2-D model is not a trivial problem. This fact has apparently led some to interpret the shortcomings of existing 2-D models as indicating that the parameterization procedure is wrong in principle. There are grounds to believe that these shortcomings result primarily from incorrect implementations of the predictions of eddy transport theory and that a properly based parameterization may provide a good basis for atmospheric modeling. The availability of eddy transport coefficients derived from a general circulation model (GCM) affords an unprecedented opportunity to test the validity of the flux-gradient parameterization. To this end, a zonally averaged (2-D) model was developed, using these coefficients in the transport parameterization. Results from this model for a number of contrived tracer experiments were compared with the parent GCM. The generally good agreement substantially validates the flux-gradient parameterization, and thus the basic principle of 2-D modeling.
Version 8 SBUV Ozone Profile Trends Compared with Trends from a Zonally Averaged Chemical Model
NASA Technical Reports Server (NTRS)
Rosenfield, Joan E.; Frith, Stacey; Stolarski, Richard
2004-01-01
Linear regression trends for the years 1979-2003 were computed using the new Version 8 merged Solar Backscatter Ultraviolet (SBUV) data set of ozone profiles. These trends were compared to trends computed using ozone profiles from the Goddard Space Flight Center (GSFC) zonally averaged coupled model. Observed and modeled annual trends between 50 N and 50 S were a maximum in the higher latitudes of the upper stratosphere, with southern hemisphere (SH) trends greater than northern hemisphere (NH) trends. The observed upper stratospheric maximum annual trend is -5.5 +/- 0.9 % per decade (1 sigma) at 47.5 S and -3.8 +/- 0.5 % per decade at 47.5 N, to be compared with the modeled trends of -4.5 +/- 0.3 % per decade in the SH and -4.0 +/- 0.2% per decade in the NH. Both observed and modeled trends are most negative in winter and least negative in summer, although the modeled seasonal difference is less than observed. Model trends are shown to be greatest in winter due to a repartitioning of chlorine species and the increasing abundance of chlorine with time. The model results show that trend differences can occur depending on whether ozone profiles are in mixing ratio or number density coordinates, and on whether they are recorded on pressure or altitude levels.
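The trend numbers quoted in this abstract come from ordinary least-squares linear fits to the ozone record, expressed in percent per decade. A minimal sketch of that calculation, using a synthetic anomaly series (the series, its noise level, and the -5.5 %/decade slope built into it are assumptions for illustration, not the SBUV data):

```python
import numpy as np

# Synthetic annual-mean ozone series over 1979-2003 with a prescribed
# downward trend; values and noise level are illustrative assumptions.
rng = np.random.default_rng(0)
years = np.arange(1979, 2004)
true_trend = -5.5 / 100 / 10                  # fractional change per year
ozone = 1.0 + true_trend * (years - years[0]) + rng.normal(0, 0.01, years.size)

# Fit O3(t) = a + b*t and convert the slope to % per decade
# relative to the period-mean ozone.
b, a = np.polyfit(years, ozone, 1)
trend_pct_per_decade = 100.0 * b * 10.0 / ozone.mean()
print(f"fitted trend: {trend_pct_per_decade:.1f} % per decade")
```

The 1-sigma uncertainties quoted in the abstract would come from the standard error of the fitted slope, which `np.polyfit` can also return via its covariance output.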
Zonally averaged thermal balance and stability models for nitrogen polar caps on Triton
NASA Technical Reports Server (NTRS)
Stansberry, John A.; Lunine, J. I.; Porco, C. C.; McEwen, A. S.
1990-01-01
Voyager four-color imaging data of Triton are analyzed to calculate the bolometric hemispheric albedo as a function of latitude and longitude. Zonal averages of these data have been incorporated into a thermal balance model involving insolation, reradiation, and latent heat of sublimation of N2 ice for the surface. The current average bolometric albedo of Triton's polar caps is 0.8, implying an effective temperature of 34.2 K and a surface pressure of N2 of 1.6 microbar for unit emissivity. This pressure is an order of magnitude lower than the surface pressure of 18 microbar inferred from Voyager data (Broadfoot et al., 1989; Conrath et al., 1989), a discrepancy that can be reconciled if the emissivity of the N2 on Triton's surface is 0.66. The model predicts that Triton's surface north of 15 deg N latitude is experiencing deposition of N2 frosts, as are the bright portions of the south polar cap near the equator. This result explains why the south cap covers nearly the entire southern hemisphere of Triton.
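The effective temperature quoted above follows from a global radiative balance between absorbed sunlight and thermal emission, (1 - A) S / 4 = ε σ T⁴, with the latent-heat term small in equilibrium. A minimal sketch (the solar constant at Triton's ~30.1 AU heliocentric distance is an assumption here, so the result only approximately reproduces the 34.2 K in the abstract):

```python
# Global-mean radiative equilibrium for an N2-covered surface:
#   absorbed sunlight = emitted thermal radiation,
#   (1 - A) * S / 4 = eps * sigma * T**4
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
S_EARTH = 1361.0          # solar constant at 1 AU, W m^-2
D_AU = 30.1               # Neptune/Triton heliocentric distance (assumed)
S = S_EARTH / D_AU**2     # ~1.5 W m^-2 at Triton

def t_eff(albedo, emissivity):
    """Equilibrium temperature of a globally mixed N2 surface."""
    return ((1.0 - albedo) * S / (4.0 * emissivity * SIGMA)) ** 0.25

print(t_eff(0.8, 1.0))    # ~34 K for unit emissivity, as in the abstract
print(t_eff(0.8, 0.66))   # lowering emissivity warms the surface
```

Because T scales as ε^(-1/4), dropping the emissivity from 1 to 0.66 raises the equilibrium temperature by roughly 10 percent, and the steep N2 vapor-pressure curve then raises the surface pressure by about the order of magnitude needed to reconcile the two Voyager estimates.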
A zonally averaged, three-basin ocean circulation model for climate studies
NASA Astrophysics Data System (ADS)
Hovine, S.; Fichefet, T.
1994-09-01
A two-dimensional, three-basin ocean model suitable for long-term climate studies is developed. The model is based on the zonally averaged form of the primitive equations written in spherical coordinates. The east-west density difference which arises upon averaging the momentum equations is taken to be proportional to the meridional density gradient. Lateral exchanges of heat and salt between the basins are explicitly resolved. Moreover, the model includes bottom topography and has representations of the Arctic Ocean and of the Weddell and Ross seas. Under realistic restoring boundary conditions, the model reproduces the global conveyor belt: deep water is formed in the Atlantic between 60 and 70°N at a rate of about 17 Sv (1 Sv = 10^6 m^3 s^-1) and in the vicinity of the Antarctic continent, while the Indian and Pacific basins show broad upwelling. Superimposed on this thermohaline circulation are vigorous wind-driven cells in the upper thermocline. The simulated temperature and salinity fields and the computed meridional heat transport compare reasonably well with the observational estimates. When mixed boundary conditions (i.e., a restoring condition on sea-surface temperature and flux condition on sea-surface salinity) are applied, the model exhibits an irregular behavior before reaching a steady state characterized by self-sustained oscillations of 8.5-y period. The conveyor-belt circulation always results at this stage. A series of perturbation experiments illustrates the ability of the model to reproduce different steady-state circulations under mixed boundary conditions. Finally, the model sensitivity to various factors is examined. This sensitivity study reveals that the bottom topography and the presence of a submarine meridional ridge in the zone of the Drake Passage play a crucial role in determining the properties of the model bottom-water masses. The importance of the seasonality of the surface forcing is also stressed.
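The key closure in this class of zonally averaged ocean models is the one named in the abstract: the east-west density difference lost in the averaging is taken proportional to the meridional density gradient. A toy sketch of that parameterization (the proportionality constant, basin width, and the analytic density profile are all illustrative assumptions, not values from the paper):

```python
import numpy as np

# Closure sketch: parameterize the unknown east-west density difference as
#   delta_rho_EW(y) ~ eps * L * d(rho_bar)/dy
# eps (dimensionless) and the basin zonal width L are tunable assumptions.
lat = np.deg2rad(np.linspace(-80, 80, 81))   # latitude grid
R_EARTH = 6.371e6                            # Earth radius, m
y = R_EARTH * lat                            # meridional coordinate, m
rho_bar = 1027.0 + 1.5 * np.sin(lat)**2      # toy zonal-mean density, kg m^-3

eps = 0.3                                    # closure constant (assumed)
L = 5.0e6                                    # basin width, m (assumed)
drho_dy = np.gradient(rho_bar, y)
delta_rho_ew = eps * L * drho_dy             # parameterized E-W difference
print(delta_rho_ew.max())                    # largest density contrast
```

In the momentum balance this east-west contrast is what drives the zonally averaged meridional flow, closing the system without resolving longitude.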
2007-08-28
Solar-QBO interaction and its impact on stratospheric ozone in a zonally averaged photochemical transport model of the middle atmosphere / J. P. ...
... investigate the solar cycle modulation of the quasi-biennial oscillation (QBO) in stratospheric zonal winds and its impact on stratospheric ozone with an updated version of the zonally averaged CHEM2D middle atmosphere model. We find that the duration of the westerly QBO phase at solar maximum is 3 months ...
NASA Technical Reports Server (NTRS)
Remsberg, Ellis E.; Bhatt, Praful P.
1994-01-01
Comparisons of satellite-derived temperatures with correlative temperatures indicate that the LIMS temperatures are accurate and contain more of the vertical resolution needed for calculating a residual mean circulation for transporting tracer-like species. Generally, the LIMS temperatures are accurate to at least 2 K. Other satellite data sets consist of temperatures with coarser vertical resolution, leading to biases that occur with an error pattern characteristic of their resolution. Their biases exceed 2 K at some altitudes. Retrievals of species using an infrared limb emission technique are sensitive to any temperature bias. Generally, the LIMS comparisons with other data sets for ozone and water vapor agree to better than 20 percent; this represents an independent confirmation of the quality of the LIMS species and temperature data. Zonal mean comparisons between LIMS and SAMS temperatures also indicate agreement to better than 2 K from about 7 to 2 hPa. Therefore, we are confident that SAMS N2O and CH4 are relatively free of temperature bias in that region. These factors support the generally good agreement in G90 between model N2O transported using a LIMS-derived RMC and the N2O contours from SAMS.
NASA Technical Reports Server (NTRS)
Stone, Peter H.; Yao, Mao-Sung
1990-01-01
A number of perpetual January simulations are carried out with a two-dimensional zonally averaged model employing various parameterizations of the eddy fluxes of heat (potential temperature) and moisture. The parameterizations are evaluated by comparing these results with the eddy fluxes calculated in a parallel simulation using a three-dimensional general circulation model with zonally symmetric forcing. The three-dimensional model's performance in turn is evaluated by comparing its results using realistic (nonsymmetric) boundary conditions with observations. Branscome's parameterization of the meridional eddy flux of heat and Leovy's parameterization of the meridional eddy flux of moisture simulate the seasonal and latitudinal variations of these fluxes reasonably well, while somewhat underestimating their magnitudes. New parameterizations of the vertical eddy fluxes are developed that take into account the enhancement of the eddy mixing slope in a growing baroclinic wave due to condensation, and also the effect of eddy fluctuations in relative humidity. The new parameterizations, when tested in the two-dimensional model, simulate the seasonal, latitudinal, and vertical variations of the vertical eddy fluxes quite well, when compared with the three-dimensional model, and only underestimate the magnitude of the fluxes by 10 to 20 percent.
NASA Astrophysics Data System (ADS)
Hulot, G.; Khokhlov, A.
2007-12-01
We recently introduced a method to rigorously test the statistical compatibility of combined time-averaged (TAF) and paleosecular variation (PSV) field models against any lava flow paleomagnetic database (Khokhlov et al., 2001, 2006). Applying this method to test (TAF+PSV) models against synthetic data produced from those models shows that the method is very efficient at discriminating models, and very sensitive, provided data errors are properly taken into account. This prompted us to test a variety of published combined (TAF+PSV) models against a test Brunhes stable-polarity data set extracted from the Quidelleur et al. (1994) database. Not surprisingly, ignoring data errors leads all models to be rejected. But taking data errors into account leads to the stimulating conclusion that at least one (TAF+PSV) model appears to be compatible with the selected data set, this model being purely axisymmetric. This result shows that in practice also, and with the databases currently available, the method can discriminate various candidate models and decide which actually best fits a given data set. But it also shows that the likely non-zonal signatures of non-homogeneous boundary conditions imposed by the mantle are difficult to identify as statistically robust from paleomagnetic directional data sets. In the present paper, we discuss the possibility that such signatures could eventually be identified as robust with the help of more recent data sets (such as the one put together under the collaborative "TAFI" effort; see, e.g., Johnson et al., abstract #GP21A-0013, AGU Fall Meeting, 2005) or by taking additional information into account (such as the possible coincidence of non-zonal time-averaged field patterns with analogous patterns in the modern field).
Nongeostrophic theory of zonally averaged circulation. I - Formulation
NASA Technical Reports Server (NTRS)
Tung, Ka Kit
1986-01-01
A nongeostrophic theory of zonally averaged circulation is formulated using the nonlinear primitive equations (mass conservation, thermodynamics, and zonal momentum) on a sphere. The relationship between the mean meridional circulation and diabatic heating rate is studied. Differences between results of nongeostropic theory and the geostrophic formulation concerning the role of eddy forcing of the diabatic circulation and the nonlinear nearly inviscid limit versus the geostrophic limit are discussed. Consideration is given to the Eliassen-Palm flux divergence, the Eliassen-Palm pseudodivergence, the nonacceleration theorem, and the nonlinear nongeostrophic Taylor relationship.
NASA Technical Reports Server (NTRS)
Remsberg, Ellis E.; Bhatt, Praful P.; Miles, Thomas
1994-01-01
Determinations of the zonally averaged and diabatically derived residual mean circulation (RMC) are particularly sensitive to the assumed zonal mean temperature distribution used as input. Several different middle atmosphere satellite temperature distributions have been employed in models and are compared here: a 4-year (late 1978 to early 1982) National Meteorological Center (NMC) climatology, the Barnett and Corney (or BC) climatology, and the 7 months of Nimbus 7 limb infrared monitor of the stratosphere (LIMS) temperatures. All three climatologies are generally accurate below the 10 hPa level, but there are systematic differences between them of up to +/-5 K in the upper stratosphere and lower mesosphere. The NMC/LIMS differences are evaluated using time series of rocketsonde and reconstructed satellite temperatures at station locations. Much of this bias can be explained by the differing vertical resolutions of the satellite-derived temperatures; the time series of reconstructed LIMS profiles have higher resolution and are more accurate. Because the LIMS temperatures are limited to just two full seasons, one cannot obtain monthly RMCs from them for an annual model calculation. Two alternate monthly climatologies are examined briefly: the 4-year Nimbus 7 stratospheric and mesospheric sounder (SAMS) temperatures and, for the mesosphere, the distribution from the Solar Mesosphere Explorer (SME), both of which are limb viewers with medium vertical resolution. There are also differences of the order of +/-5 K for those data sets. It is concluded that a major source of error in the determination of diabatic RMCs is a persistent pattern of temperature bias whose characteristics vary according to the vertical resolution of each individual climatology.
Jet and storm track variability and change: adiabatic QG zonal averages and beyond... (Invited)
NASA Astrophysics Data System (ADS)
Robinson, W. A.
2013-12-01
The zonally averaged structures of extratropical jets and storm tracks, their slow variations, and their responses to climate change are all tightly constrained on the one hand by thermal wind balance and the necessary application of eddy torques to produce zonally averaged meridional motion, and, on the other hand, by the necessity that eddies propagate upshear to extract energy from the mean flow. Combining these constraints with the well-developed theory of linear Rossby-wave propagation on zonally symmetric basic states has led to a large and growing number of plausible mechanisms to explain observed and modeled jet/storm track variability and responses to climate change and idealized forcing. Hidden within zonal averages is the reality that most baroclinic eddy activity is destroyed at the same latitude at which it is generated: from one end to the other of the fixed storm tracks in the Northern Hemisphere and of baroclinic wave packets in the Southern Hemisphere. Ignored within adiabatic QG theory is the reality that baroclinic eddies gain significant energy from latent heating that involves sub-synoptic scale structures and dynamics. Here we use results from high-resolution regional and global simulations of the Northern Hemisphere storm tracks to explore the importance of non-zonal and diabatic dynamics in influencing jet change and variability and their influence on the much-studied zonal means.
NASA Technical Reports Server (NTRS)
North, G. R.; Bell, T. L.; Cahalan, R. F.; Moeng, F. J.
1982-01-01
Geometric characteristics of the spherical earth are shown to be responsible for the increase with latitude of the variance of zonally averaged meteorological statistics. An analytic model is constructed to display the effect of spherical geometry on zonal averages, representing a real stochastic field on the sphere by an expansion in complex spherical harmonics. The variance of a zonally averaged field is found to be expressible in terms of the spherical harmonic spectrum of the field. A maximum variance is then located at the poles, and the ratio of this variance to the zonally averaged grid-point variance, weighted by the cosine of the latitude, yields the zonal correlation typical of the latitude. An example is provided for the 500 mb level in the Northern Hemisphere compared to 15 years of data. Variance is determined to increase north of 60 deg latitude.
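The mechanism in this abstract can be made concrete: zonal averaging annihilates every spherical-harmonic mode with m ≠ 0, so the variance of the zonal mean at colatitude θ reduces to Σ_l S_l |Y_l0(θ)|² with Y_l0 = sqrt((2l+1)/4π) P_l(cos θ). Since |P_l(±1)| = 1 is the maximum of each mode, the sum peaks at the poles. A short numerical check (the flat spectrum S_l is an assumption for illustration; any positive spectrum gives a polar maximum):

```python
import numpy as np
from scipy.special import eval_legendre

# Variance of a zonally averaged random field, mode by mode:
#   var(theta) = sum_l S_l * |Y_l0(theta)|^2,
#   Y_l0(theta) = sqrt((2l+1)/(4*pi)) * P_l(cos theta).
theta = np.deg2rad(np.linspace(0.0, 90.0, 91))   # colatitude: 0 = pole
lmax = 20
S_l = np.ones(lmax + 1)                          # toy variance spectrum (assumed)

var = np.zeros_like(theta)
for l in range(lmax + 1):
    Y_l0 = np.sqrt((2 * l + 1) / (4 * np.pi)) * eval_legendre(l, np.cos(theta))
    var += S_l[l] * Y_l0**2

# Each mode attains its maximum |P_l| = 1 at the pole, so var peaks there.
print(var[0], var[-1])                           # polar vs equatorial variance
```

The polar value Σ_l (2l+1)/4π greatly exceeds the equatorial one, reproducing the poleward increase of variance the abstract attributes to spherical geometry.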
Zonal average earth radiation budget measurements from satellites for climate studies
NASA Technical Reports Server (NTRS)
Ellis, J. S.; Haar, T. H. V.
1976-01-01
Data from 29 months of satellite radiation budget measurements, taken intermittently over the period 1964 through 1971, are composited into mean monthly, seasonal, and annual zonally averaged meridional profiles. The individual months comprising the 29-month set were selected as representing the best available total flux data for compositing into large-scale statistics for climate studies. A discussion of the spatial resolution of the measurements is presented, along with an error analysis that includes both the uncertainty and the standard error of the mean.
NASA Astrophysics Data System (ADS)
Zerefos, Christos; Kapsomenakis, John; Eleftheratos, Kostas; Tourpali, Kleareti; Petropavlovskikh, Irina; Hubert, Daan; Godin-Beekmann, Sophie; Steinbrecht, Wolfgang; Frith, Stacey; Sofieva, Viktoria; Hassler, Birgit
2018-05-01
This paper focuses on the representativeness of single lidar stations for zonally averaged ozone profile variations over the middle and upper stratosphere. From the lower to the upper stratosphere, ozone profiles from single or grouped lidar stations correlate well with zonal means calculated from the Solar Backscatter Ultraviolet Radiometer (SBUV) satellite overpasses. The best representativeness, with significant correlation coefficients, is found within ±15° of latitude north or south of any lidar station. This paper also includes a multivariate linear regression (MLR) analysis of the relative importance of proxy time series for explaining variations in the vertical ozone profiles. The studied proxies represent variability due to influences outside of the earth system (solar cycle) and within the earth system, i.e. dynamic processes (the Quasi-Biennial Oscillation, QBO; the Arctic Oscillation, AO; the Antarctic Oscillation, AAO; the El Niño Southern Oscillation, ENSO), volcanic aerosol (aerosol optical depth, AOD), tropopause height changes (including global warming), and anthropogenic contributions to atmospheric chemistry (equivalent effective stratospheric chlorine, EESC). Ozone trends are estimated, with and without removal of proxies, from the total available 1980 to 2015 SBUV record. Except for the chemistry-related proxy (EESC) and its orthogonal function, the removal of the other proxies does not alter the significance of the estimated long-term trends. At heights above 15 hPa an inflection point between 1997 and 1999 marks the end of significant negative ozone trends, followed by a recent period between 1998 and 2015 with positive ozone trends. At heights between 15 and 40 hPa the pre-1998 negative ozone trends tend to become less significant as we move towards 2015; below these heights the lower stratospheric ozone decline continues, in agreement with findings of recent literature.
A simple inertial model for Neptune's zonal circulation
NASA Technical Reports Server (NTRS)
Allison, Michael; Lumetta, James T.
1990-01-01
Voyager imaging observations of zonal cloud-tracked winds on Neptune revealed a strongly subrotational equatorial jet with a speed approaching 500 m/s and generally decreasing retrograde motion toward the poles. The wind data are interpreted with a speculative but revealingly simple model based on steady gradient flow balance and an assumed global homogenization of potential vorticity for shallow layer motion. The prescribed model flow profile relates the equatorial velocity to the mid-latitude shear, in reasonable agreement with the available data, and implies a global horizontal deformation scale L(D) of about 3000 km.
Application of zonal model on indoor air sensor network design
NASA Astrophysics Data System (ADS)
Chen, Y. Lisa; Wen, Jin
2007-04-01
Growing concerns over the safety of the indoor environment have made the use of sensors ubiquitous. Sensors that detect chemical and biological warfare (CBW) agents can offer early warning of dangerous contaminants. However, current sensor system design is informed more by intuition and experience than by systematic design. To develop a sensor system design methodology, a proper indoor airflow modeling approach is needed. Various indoor airflow modeling techniques, from complicated computational fluid dynamics approaches to simplified multi-zone approaches, exist in the literature. In this study, the effects of two airflow modeling techniques, the multi-zone modeling technique and the zonal modeling technique, on indoor air protection sensor system design are discussed. Common building attack scenarios, using a typical CBW agent, are simulated. Both multi-zone and zonal models are used to predict airflows and contaminant dispersion. A genetic algorithm is then applied to optimize the sensor locations and quantity. Differences in the sensor system design resulting from the two airflow models are discussed for a typical office environment and a large hall environment.
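The optimization step described above pairs an airflow model with a genetic algorithm that searches over candidate sensor placements. A toy sketch of that search, with the building abstracted to a list of zones and the zone detection scores, population sizes, and fitness function all illustrative assumptions rather than anything from the paper:

```python
import random

# Toy GA for sensor placement: a chromosome is a list of K_SENSORS zone
# indices; fitness is the summed importance of the distinct zones covered.
random.seed(1)
N_ZONES, K_SENSORS = 20, 3
score = [random.random() for _ in range(N_ZONES)]      # assumed zone importance

def fitness(chromosome):
    return sum(score[z] for z in set(chromosome))      # no double counting

def mutate(chromosome):
    c = list(chromosome)
    c[random.randrange(K_SENSORS)] = random.randrange(N_ZONES)
    return c

# Elitist evolution: keep the 10 fittest, refill with mutants of survivors.
pop = [[random.randrange(N_ZONES) for _ in range(K_SENSORS)] for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(pop, key=fitness)
print(sorted(set(best)), round(fitness(best), 3))
```

In the study itself the fitness of a placement would instead be evaluated against the contaminant-dispersion fields predicted by the multi-zone or zonal airflow model, which is exactly why the two airflow models can yield different sensor layouts.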
NASA Technical Reports Server (NTRS)
Sohn, Byung-Ju; Smith, Eric A.
1993-01-01
The maximum entropy production principle suggested by Paltridge (1975) is applied to separating the satellite-determined required total transports into atmospheric and oceanic components. Instead of using the excessively restrictive equal energy dissipation hypothesis as a deterministic tool for separating transports between the atmosphere and ocean fluids, the satellite-inferred required 2D energy transports are imposed on Paltridge's energy balance model, which is then solved as a variational problem using the equal energy dissipation hypothesis only to provide an initial guess field. It is suggested that Southern Ocean transports are weaker than previously reported. It is argued that a maximum entropy production principle can serve as a governing rule on macroscale global climate, and, in conjunction with conventional satellite measurements of the net radiation balance, provides a means to decompose atmosphere and ocean transports from the total transport field.
Fluid simulation of tokamak ion temperature gradient turbulence with zonal flow closure model
NASA Astrophysics Data System (ADS)
Yamagishi, Osamu; Sugama, Hideo
2016-03-01
Nonlinear fluid simulation of turbulence driven by ion temperature gradient modes in the tokamak fluxtube configuration is performed by combining two different closure models. One model is a gyrofluid model by Beer and Hammett [Phys. Plasmas 3, 4046 (1996)], and the other is a closure model to reproduce the kinetic zonal flow response [Sugama et al., Phys. Plasmas 14, 022502 (2007)]. By including the zonal flow closure, generation of zonal flows, significant reduction in energy transport, reproduction of the gyrokinetic transport level, and nonlinear upshift on the critical value of gradient scale length are observed.
Zonal flow as pattern formation
Parker, Jeffrey B.; Krommes, John A.
2013-10-15
Zonal flows are well known to arise spontaneously out of turbulence. We show that for statistically averaged equations of the stochastically forced generalized Hasegawa-Mima model, steady-state zonal flows, and inhomogeneous turbulence fit into the framework of pattern formation. There are many implications. First, the wavelength of the zonal flows is not unique. Indeed, in an idealized, infinite system, any wavelength within a certain continuous band corresponds to a solution. Second, of these wavelengths, only those within a smaller subband are linearly stable. Unstable wavelengths must evolve to reach a stable wavelength; this process manifests as merging jets.
Two- and three-dimensional natural and mixed convection simulation using modular zonal models
Wurtz, E.; Nataf, J.M.; Winkelmann, F.
We demonstrate the use of the zonal model approach, which is a simplified method for calculating natural and mixed convection in rooms. Zonal models use a coarse grid and use balance equations, state equations, hydrostatic pressure drop equations, and power-law equations of the form m = C(ΔP)^n. The advantage of the zonal approach and its modular implementation are discussed. The zonal model resolution of nonlinear equation systems is demonstrated for three cases: a 2-D room, a 3-D room, and a pair of 3-D rooms separated by a partition with an opening. A sensitivity analysis with respect to physical parameters and grid coarseness is presented. Results are compared to computational fluid dynamics (CFD) calculations and experimental data.
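The power-law flow equation named in this abstract is the working element of a zonal model: adjacent coarse-grid cells exchange mass at a rate set by the pressure difference across their interface, with a hydrostatic pressure drop within each cell. A minimal sketch (the coefficient C and exponent n below are typical textbook-style values assumed for illustration, not the paper's calibration):

```python
# Interzonal flow law: mass flow rate m = C * |dP|**n, signed so that
# air moves from the high-pressure zone to the low-pressure zone.
def mass_flow(dP, C=0.83, n=0.5):
    """Mass flow rate (kg/s) through an interface for pressure drop dP (Pa)."""
    return (1 if dP >= 0 else -1) * C * abs(dP) ** n

# The hydrostatic relation closes each zone vertically:
#   P_bottom - P_top = rho * g * dz.
rho, g, dz = 1.2, 9.81, 2.5          # air density, gravity, zone height (assumed)
print(mass_flow(4.0))                # flow for a 4 Pa interface drop
print(rho * g * dz)                  # hydrostatic drop across one zone
```

Writing one such flow equation per interface and one mass balance per zone yields the nonlinear system whose resolution the abstract demonstrates for the 2-D and 3-D room cases.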
Model test of anchoring effect on zonal disintegration in deep surrounding rock masses.
Chen, Xu-Guang; Zhang, Qiang-Yong; Wang, Yuan; Liu, De-Jun; Zhang, Ning
2013-01-01
Deep rock masses show mechanical behavior different from that of shallow rock masses: during excavation they separate into alternating fractured and intact zones, a phenomenon known as zonal disintegration. Zonal disintegration is a serious hazard and calls for different excavation and anchoring methods. In this study, a 3D geomechanical model test was conducted to investigate the effect of anchoring on zonal disintegration. The model was anchored in one half and unanchored in the other so that the two halves could be compared. Optical extensometers and optical sensors were used to measure the evolution of displacement and strain during the test. The displacement of the deep surrounding rock was found to vary nonmonotonically with distance from the tunnel periphery. Zonal disintegration occurred in the unanchored region but not in the anchored region; this contrast reveals the ability of anchoring to restrain zonal disintegration and constrains the conditions under which it forms. During tunnel excavation, the anchor strain alternated between tension and compression, indicating that anchors behave nonmonotonically while suppressing zonal disintegration.
Dynamic Stall Computations Using a Zonal Navier-Stokes Model
1988-06-01
Naval Postgraduate School, Monterey, California. Master's thesis by Jack H. Conroyd, Jr., June 1988. Thesis co-advisors: M. F. Platzer and Lawrence W. Carr. Approved for public release; distribution is unlimited. The views expressed in this thesis are those of the author and do not reflect the ...
Model averaging in linkage analysis.
Matthysse, Steven
2006-06-05
Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. (c) 2006 Wiley-Liss, Inc.
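The device described in this abstract — start the chain with an exact draw from the target obtained by rejection sampling, then run a kernel satisfying detailed balance, so every subsequent state is also equilibrium-distributed — can be sketched directly. The 1-D bimodal target below is an arbitrary stand-in for the likelihood surface over parameter space, chosen purely for illustration:

```python
import math
import random

random.seed(0)

def target(x):
    """Unnormalized target density on [0, 1] (illustrative stand-in)."""
    return math.exp(-8.0 * (x - 0.3) ** 2) + 0.5 * math.exp(-8.0 * (x - 0.8) ** 2)

M = 1.2  # envelope constant: target(x) <= M on [0, 1]

def rejection_sample():
    """Exact draw from the target: no burn-in argument needed afterwards."""
    while True:
        x, u = random.random(), random.random()
        if u * M <= target(x):
            return x

def metropolis_step(x, step=0.1):
    """Symmetric-proposal Metropolis kernel; satisfies detailed balance."""
    y = x + random.uniform(-step, step)
    if not 0.0 <= y <= 1.0:
        return x                      # reject proposals outside the support
    return y if random.random() <= target(y) / target(x) else x

x = rejection_sample()                # first point already in equilibrium
chain = [x]
for _ in range(1000):
    x = metropolis_step(x)
    chain.append(x)
print(len(chain))
```

By detailed balance, if the first state is distributed according to the target then so is each later state, which is exactly the guarantee the abstract uses to sidestep convergence diagnostics.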
Averaging Models: Parameters Estimation with the R-Average Procedure
ERIC Educational Resources Information Center
Vidotto, G.; Massidda, D.; Noventa, S.
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…
NASA Astrophysics Data System (ADS)
Li, King-Fai; Yao, Kaixuan; Taketa, Cameron; Zhang, Xi; Liang, Mao-Chang; Jiang, Xun; Newman, Claire; Tung, Ka-Kit; Yung, Yuk L.
2016-04-01
With the advance of modern computers, studies of planetary atmospheres have heavily relied on general circulation models (GCMs). Because these GCMs are usually very complicated, the simulations are sometimes difficult to understand. Here we develop a semi-analytic zonally averaged, cyclostrophic residual Eulerian model to illustrate how some of the large-scale structures of the middle atmospheric circulation can be explained qualitatively in terms of simple thermal (e.g. solar heating) and mechanical (the Eliassen-Palm flux divergence) forcings. This model is a generalization of that for fast-rotating planets such as the Earth, where geostrophy dominates (Andrews and McIntyre 1987). The solution to this semi-analytic model consists of a set of modified Hough functions of the generalized Laplace's tidal equation with the cyclostrophic terms. As an example, we apply this model to Titan. We show that the seasonal variations of the temperature and the circulation of these slowly-rotating planets can be well reproduced by adjusting only three parameters in the model: the Brunt-Väisälä buoyancy frequency, the Newtonian radiative cooling rate, and the Rayleigh friction damping rate. We will also discuss an application of this model to study the meridional transport of photochemically produced tracers that can be observed by space instruments.
NASA Astrophysics Data System (ADS)
Li, K. F.; Yao, K.; Taketa, C.; Zhang, X.; Liang, M. C.; Jiang, X.; Newman, C. E.; Tung, K. K.; Yung, Y. L.
2015-12-01
With the advance of modern computers, studies of planetary atmospheres have heavily relied on general circulation models (GCMs). Because these GCMs are usually very complicated, the simulations are sometimes difficult to understand. Here we develop a semi-analytic zonally averaged, cyclostrophic residual Eulerian model to illustrate how some of the large-scale structures of the middle atmospheric circulation can be explained qualitatively in terms of simple thermal (e.g. solar heating) and mechanical (the Eliassen-Palm flux divergence) forcings. This model is a generalization of that for fast-rotating planets such as the Earth, where geostrophy dominates (Andrews and McIntyre 1987). The solution to this semi-analytic model consists of a set of modified Hough functions of the generalized Laplace's tidal equation with the cyclostrophic terms. As examples, we apply this model to Titan and Venus. We show that the seasonal variations of the temperature and the circulation of these slowly-rotating planets can be well reproduced by adjusting only three parameters in the model: the Brunt-Väisälä buoyancy frequency, the Newtonian radiative cooling rate, and the Rayleigh friction damping rate. We will also discuss the application of this model to study the meridional transport of photochemically produced tracers that can be observed by space instruments.
A zonal method for modeling powered-lift aircraft flow fields
NASA Technical Reports Server (NTRS)
Roberts, D. W.
1989-01-01
A zonal method for modeling powered-lift aircraft flow fields is based on the coupling of a three-dimensional Navier-Stokes code to a potential flow code. By minimizing the extent of the viscous Navier-Stokes zones, the zonal method can be a cost-effective flow analysis tool. The successful coupling of the zonal solutions provides the viscous/inviscid interactions that are necessary to achieve convergent and unique overall solutions. The feasibility of coupling the two vastly different codes is demonstrated. The interzone boundaries were overlapped to facilitate the passing of boundary condition information between the codes. Routines were developed to extract the normal velocity boundary conditions for the potential flow zone from the viscous zone solution. Similarly, the velocity vector direction along with the total conditions were obtained from the potential flow solution to provide boundary conditions for the Navier-Stokes solution. Studies were conducted to determine the influence of the overlap of the interzone boundaries and the convergence of the zonal solutions on the convergence of the overall solution. The zonal method was applied to a jet impingement problem to model the suckdown effect that results from the entrainment of the inviscid zone flow by the viscous zone jet. The resultant potential flow solution created a lower pressure on the base of the vehicle, which produces the suckdown load. The feasibility of the zonal method was demonstrated. By enhancing the Navier-Stokes code for powered-lift flow fields and optimizing the convergence of the coupled analysis, a practical flow analysis tool will result.
Comparative analysis of zonal systems for macro-level crash modeling.
Cai, Qing; Abdel-Aty, Mohamed; Lee, Jaeyoung; Eluru, Naveen
2017-06-01
Macro-level traffic safety analysis has been undertaken at different spatial configurations. However, clear guidelines for the appropriate zonal system selection for safety analysis are unavailable. In this study, a comparative analysis was conducted to determine the optimal zonal system for macroscopic crash modeling considering census tracts (CTs), state-wide traffic analysis zones (STAZs), and a newly developed traffic-related zone system labeled traffic analysis districts (TADs). Poisson lognormal models for three crash types (i.e., total, severe, and non-motorized mode crashes) are developed based on the three zonal systems, without and with consideration of spatial autocorrelation. The study proposes a method to compare the modeling performance of the three types of geographic units at different spatial configurations through a grid-based framework. Specifically, the study region is partitioned into grids of various sizes, and the model prediction accuracy of the various macro models is considered within these grids. The model comparison results for all crash types indicated that the models based on TADs consistently offer a better performance compared to the others. In addition, the models considering spatial autocorrelation outperform the ones that do not. Based on the modeling results and the motivation for developing the different zonal systems, it is recommended to use CTs for socio-demographic data collection, TAZs for transportation demand forecasting, and TADs for transportation safety planning. The findings from this study can help practitioners select appropriate zonal systems for traffic crash modeling, which supports the development of more efficient policies to enhance transportation safety. Copyright © 2017 Elsevier Ltd and National Safety Council. All rights reserved.
A Model Study of Zonal Forcing in the Equatorial Stratosphere by Convectively Induced Gravity Waves
NASA Technical Reports Server (NTRS)
Alexander, M. J.; Holton, James R.
1997-01-01
A two-dimensional cloud-resolving model is used to examine the possible role of gravity waves generated by a simulated tropical squall line in forcing the quasi-biennial oscillation (QBO) of the zonal winds in the equatorial stratosphere. A simulation with constant background stratospheric winds is compared to simulations with background winds characteristic of the westerly and easterly QBO phases, respectively. In all three cases a broad spectrum of both eastward and westward propagating gravity waves is excited. In the constant background wind case the vertical momentum flux is nearly constant with height in the stratosphere, after correction for waves leaving the model domain. In the easterly and westerly shear cases, however, westward and eastward propagating waves, respectively, are strongly damped as they approach their critical levels, owing to the strongly scale-dependent vertical diffusion in the model. The profiles of zonal forcing induced by this wave damping are similar to profiles given by critical level absorption, but displaced slightly downward. The magnitude of the zonal forcing is of order 5 m/s/day. It is estimated that if 2% of the area of the Tropics were occupied by storms of similar magnitude, mesoscale gravity waves could provide nearly 1/4 of the zonal forcing required for the QBO.
Results of a zonally truncated three-dimensional model of the Venus middle atmosphere
NASA Technical Reports Server (NTRS)
Newman, M.
1992-01-01
Although the equatorial rotational speed of the solid surface of Venus is only 4 m s⁻¹, the atmospheric rotational speed reaches a maximum of approximately 100 m s⁻¹ near the equatorial cloud top level (65 to 70 km). This phenomenon, known as superrotation, is the central dynamical problem of the Venus atmosphere. We report here the results of numerical simulations aimed at clarifying the mechanism for maintaining the equatorial cloud top rotation. Maintenance of an equatorial rotational speed maximum above the surface requires waves or eddies that systematically transport angular momentum against its zonal mean gradient. The zonally symmetric Hadley circulation is driven thermally and acts to reduce the rotational speed at the equatorial cloud top level; thus wave or eddy transport must counter this tendency as well as friction. Planetary waves arising from horizontal shear instability of the zonal flow (barotropic instability) could maintain the equatorial rotation by transporting angular momentum horizontally from midlatitudes toward the equator. Alternatively, vertically propagating waves could provide the required momentum source. The relative motion between the rotating atmosphere and the pattern of solar heating, which has a maximum where solar radiation is absorbed near the cloud tops, drives diurnal and semidiurnal thermal tides that propagate vertically away from the cloud top level. The effect of this wave propagation is to transport momentum toward the cloud top level at low latitudes and accelerate the mean zonal flow there. We employ a semispectral primitive equation model with a zonal mean flow and zonal wavenumbers 1 and 2. These waves correspond to the diurnal and semidiurnal tides, but they can also be excited by barotropic or baroclinic instability. Waves of higher wavenumbers and interactions between the waves are neglected. Symmetry about the equator is assumed, so the model applies to one hemisphere and covers the altitude range 30 to
Frequentist Model Averaging in Structural Equation Modelling.
Jin, Shaobo; Ankargren, Sebastian
2018-06-04
Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.
NASA Astrophysics Data System (ADS)
Méchi, Rachid; Farhat, Habib; Said, Rachid
2016-01-01
Nongray radiation calculations are carried out for a case problem available in the literature. The problem is a non-isothermal and inhomogeneous CO2-H2O-N2 gas mixture confined within an axisymmetric cylindrical furnace. The numerical procedure is based on the zonal method associated with the weighted sum of gray gases (WSGG) model. The effect of the wall emissivity on the heat flux losses is discussed. It is shown that this property strongly affects the furnace efficiency and that the most important heat fluxes are those leaving through the circumferential boundary. The numerical procedure adopted in this work is found to be effective and may be relied on to simulate coupled turbulent combustion-radiation in fired furnaces.
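The WSGG closure named above represents the real gas by a few gray gases plus one clear gas; the total emissivity is a weighted sum of the gray-gas contributions over the pressure-path-length. A minimal sketch follows; the absorption coefficients and weights below are placeholders, not a fitted CO2-H2O WSGG set (real coefficients come from published correlations):

```python
import math

# Placeholder gray-gas parameters (NOT a fitted WSGG set):
K = [0.4, 7.0, 120.0]     # absorption coefficients, 1/(atm*m)
A = [0.35, 0.25, 0.12]    # gray-gas weights; the clear gas carries the rest

def wsgg_emissivity(p_atm, path_m):
    """Total gas emissivity over a mean beam length path_m at partial
    pressure p_atm: sum_i a_i * (1 - exp(-k_i * p * L))."""
    return sum(a * (1.0 - math.exp(-k * p_atm * path_m))
               for a, k in zip(A, K))

# Emissivity grows with pressure-path-length and saturates below sum(A),
# because the clear-gas fraction never emits.
print(wsgg_emissivity(0.2, 1.0), wsgg_emissivity(0.2, 10.0))
```

In the zonal method proper, the same gray-gas decomposition is applied to every surface-gas and gas-gas exchange area, and the fluxes are summed over the gray gases.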
A Zonal Climate Model for the 1-D Mars Evolution Code: Explaining Meridiani Planum.
NASA Astrophysics Data System (ADS)
Manning, C. V.; McKay, C. P.; Zahnle, K. J.
2005-12-01
Recent MER Opportunity observations suggest there existed an extensive body of shallow water in the present Meridiani Planum during the late Noachian [1]. Observations of roughly contemporaneous valley networks show little net erosion [2]. Hypsometric analysis [3] finds that martian drainage basins are similar to terrestrial drainage basins in very arid regions. The immaturity of martian drainage basins suggests they were formed by infrequent fluvial action. If similar fluvial discharges are responsible for the laminations in the salt-bearing outcrops of Meridiani Planum, their explanation may require a climate model based on surface thermal equilibrium with diurnally averaged temperatures greater than freezing. In the context of Mars' chaotic obliquity, invoking a moderately thick atmosphere with seasonal insolation patterns may uncover the conditions under which the outcrops formed. We constructed a 1-D model of the evolution of Mars' inventories of CO2 over its lifetime, called the Mars Evolution Code (MEC) [4]. We are assembling a zonal climate model that includes meridional heat transport, heat conduction to/from the regolith, latent heat deposition, and an albedo distribution based on the depositional patterns of ices. Since water vapor is an important greenhouse gas whose ice also affects the albedo, we must include a full hydrological cycle. This requires a thermal model of the regolith to model diffusion of water vapor to/from a permafrost layer. Our model carries obliquity and eccentricity distributions consistent with Laskar et al. [5], so we will be able to model the movement of the ice cap with changes in obliquity. The climate model will be used to investigate the conditions under which ponded water could have occurred in the late Noachian, thus supplying a constraint on the free inventory of CO2 at that time. Our evolution code can then investigate Hesperian and Amazonian climates. The model could also be used to understand evidence of recent climate
Rossby and drift wave turbulence and zonal flows: The Charney-Hasegawa-Mima model and its extensions
NASA Astrophysics Data System (ADS)
Connaughton, Colm; Nazarenko, Sergey; Quinn, Brenda
2015-12-01
A detailed study of the Charney-Hasegawa-Mima model and its extensions is presented. These simple nonlinear partial differential equations, suggested for both Rossby waves in the atmosphere and drift waves in a magnetically-confined plasma, exhibit some remarkable and nontrivial properties, which in their qualitative form survive in more realistic and complicated models. As such, they form a conceptual basis for understanding the turbulence and zonal flow dynamics in real plasma and geophysical systems. Two idealised scenarios of generation of zonal flows by small-scale turbulence are explored: a modulational instability and turbulent cascades. A detailed study of the generation of zonal flows by the modulational instability reveals that the dynamics of this zonal flow generation mechanism differ widely depending on the initial degree of nonlinearity. The jets in the strongly nonlinear case further roll up into vortex streets and saturate, while for the weaker nonlinearities, the growth of the unstable mode reverses and the system oscillates between a dominant jet, which is slightly inclined to the zonal direction, and a dominant primary wave. A numerical proof is provided for the extra invariant in Rossby and drift wave turbulence, the zonostrophy. While the theoretical derivations of this invariant stem from the wave kinetic equation, which assumes weak wave amplitudes, it is shown to be relatively well conserved for higher nonlinearities also. Together with the energy and enstrophy, these three invariants cascade into anisotropic sectors in the k-space as predicted by the Fjørtoft argument. The cascades are characterised by the zonostrophy pushing the energy to the zonal scales. A small-scale instability forcing applied to the model has demonstrated the well-known drift wave-zonal flow feedback loop. The drift wave turbulence is generated from this primary instability. The zonal flows are then excited by either one of the generation mechanisms, extracting energy from
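For reference, the Charney-Hasegawa-Mima equation studied here can be written in standard quasi-geostrophic form, with ψ the streamfunction, L_d the deformation radius, β the planetary vorticity gradient and J the Jacobian bracket (sign and normalization conventions differ between the geophysical and plasma literatures):

```latex
\partial_t\!\left(\nabla^2\psi - \frac{\psi}{L_d^2}\right)
  + \beta\,\partial_x\psi
  + J\!\left(\psi,\nabla^2\psi\right) = 0,
\qquad
J(a,b) = \partial_x a\,\partial_y b - \partial_y a\,\partial_x b .
```

In the plasma (drift-wave) reading, ψ is the electrostatic potential, L_d plays the role of the ion sound radius, and β encodes the background density gradient.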
Anderson, J.; Miki, K.; Uzawa, K.
2006-11-30
During the past years, understanding of multi-scale interaction problems has increased significantly. However, at present there exists a flora of different analytical models for investigating multi-scale interactions, and hardly any specific comparisons have been performed among these models. In this work, two different models for the generation of zonal flows from ion-temperature-gradient (ITG) background turbulence are discussed and compared. The methods used are the coherent mode coupling model and the wave kinetic equation (WKE) model. It is shown that the two models give qualitatively the same results, even though the assumption on the spectral difference is used in the WKE approach.
Zonal harmonic model of Saturn's magnetic field from Voyager 1 and 2 observations
NASA Technical Reports Server (NTRS)
Connerney, J. E. P.; Ness, N. F.; Acuna, M. H.
1982-01-01
An analysis of the magnetic field of Saturn is presented which takes into account both the Voyager 1 and 2 vector magnetic field observations. The analysis is based on the traditional spherical harmonic expansion of a scalar potential to derive the magnetic field within 8 Saturn radii. A third-order zonal harmonic model fitted to Voyager 1 and 2 observations is found to be capable of predicting the magnetic field characteristics at one encounter based on those observed at another, unlike models including dipole and quadrupole terms only. The third-order model is noted to lead to significantly enhanced polar surface field intensities with respect to dipole models, and probably represents the axisymmetric part of a complex dynamo field.
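For illustration, a third-order zonal (axisymmetric) harmonic model of the kind described can be evaluated directly from the scalar potential. The coefficient values below are rounded, illustrative stand-ins of the magnitude reported for Voyager-era Saturn models, not the paper's fitted Z3 coefficients:

```python
import math

# Illustrative zonal coefficients in nT (rounded stand-ins, not the fitted
# values -- consult the paper for the actual Z3 model).
G = {1: 21500.0, 2: 1600.0, 3: 2700.0}
A = 60268.0  # Saturn equatorial radius, km

# Legendre polynomials P_n(u) and derivatives P_n'(u) for n = 1..3.
P  = {1: lambda u: u,
      2: lambda u: 0.5 * (3 * u * u - 1),
      3: lambda u: 0.5 * (5 * u ** 3 - 3 * u)}
dP = {1: lambda u: 1.0,
      2: lambda u: 3.0 * u,
      3: lambda u: 0.5 * (15 * u * u - 3)}

def zonal_field(r_km, colat_rad):
    """B_r, B_theta (nT) of the axisymmetric internal field derived from
    V = A * sum_n (A/r)^(n+1) * g_n * P_n(cos(theta))."""
    u = math.cos(colat_rad)
    br = sum((n + 1) * (A / r_km) ** (n + 2) * G[n] * P[n](u) for n in G)
    bt = sum((A / r_km) ** (n + 2) * G[n] * math.sin(colat_rad) * dP[n](u)
             for n in G)
    return br, bt

# At the pole every P_n(1) = 1, so the n = 2, 3 terms add to the dipole
# term -- the "enhanced polar surface field" noted in the abstract.
br_pole, _ = zonal_field(A, 0.0)
dipole_only = 2 * G[1]
print(br_pole, dipole_only)
```

With these stand-in coefficients the polar radial field exceeds the pure-dipole value by roughly a third, which is the qualitative effect the abstract attributes to the third-order term.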
NASA Astrophysics Data System (ADS)
Elkins, J. W.; Nance, J. D.; Dutton, G. S.; Montzka, S. A.; Hall, B. D.; Miller, B.; Butler, J. H.; Mondeel, D. J.; Siso, C.; Moore, F. L.; Hintsa, E. J.; Wofsy, S. C.; Rigby, M. L.
2015-12-01
The Halocarbons and other Atmospheric Trace Species (HATS) group of NOAA's Global Monitoring Division started measurements of the major chlorofluorocarbons and nitrous oxide in 1977 from flask samples collected at five remote sites around the world. Our program has expanded to over 40 compounds at twelve sites, which includes six in situ instruments and twelve flask sites. The Montreal Protocol on Substances that Deplete the Ozone Layer and its subsequent amendments have helped to decrease the concentrations of many of the ozone-depleting compounds in the atmosphere. In this presentation, our goal is to provide zonal emission estimates for these trace gases, derived from multi-box models and their estimated atmospheric lifetimes, and to make the emission values available on our web site. We plan to use our airborne measurements to calibrate the exchange times between the boxes for 5-box and 12-box models using sulfur hexafluoride, whose emissions are better understood.
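A toy two-box (hemispheric) version of the multi-box approach described above illustrates how prescribed emissions, an interhemispheric exchange time (the quantity calibrated with tracers such as SF6), and an atmospheric lifetime together set the burdens a network would observe. All parameter values here are assumed for illustration only:

```python
# Toy 2-box model: cn, cs are NH/SH burdens in arbitrary units; e_n, e_s
# are emissions per year; tau_ex is the interhemispheric exchange time
# (years); tau_life is the atmospheric lifetime (years).  Illustrative
# values, not a calibrated HATS configuration.
def step(cn, cs, e_n, e_s, tau_ex=1.0, tau_life=50.0, dt=0.05):
    """Advance both boxes one explicit-Euler time step of dt years."""
    dcn = e_n - (cn - cs) / tau_ex - cn / tau_life
    dcs = e_s + (cn - cs) / tau_ex - cs / tau_life
    return cn + dt * dcn, cs + dt * dcs

# Emissions concentrated in the northern hemisphere produce the familiar
# north-south gradient; the total burden approaches emissions * lifetime.
cn = cs = 0.0
for _ in range(2000):          # integrate 100 model years
    cn, cs = step(cn, cs, e_n=0.9, e_s=0.1)
print(cn, cs, cn - cs)
```

Inverting this relationship, i.e. solving for the emissions that reproduce observed box concentrations given calibrated exchange times and lifetimes, is the emission-estimation step the abstract describes.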
Bayesian Model Averaging for Propensity Score Analysis
ERIC Educational Resources Information Center
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
NASA Astrophysics Data System (ADS)
Guervilly, C.; Cardin, P.
2017-10-01
We study rapidly rotating Boussinesq convection driven by internal heating in a full sphere. We use a numerical model based on the quasi-geostrophic approximation for the velocity field, whereas the temperature field is 3-D. This approximation allows us to perform simulations for Ekman numbers down to 10⁻⁸, Prandtl numbers relevant for liquid metals (~10⁻¹) and Reynolds numbers up to 3 × 10⁴. Persistent zonal flows composed of multiple jets form as a result of the mixing of potential vorticity. For the largest Rayleigh numbers computed, the zonal velocity is larger than the convective velocity despite the presence of boundary friction. The convective structures and the zonal jets widen when the thermal forcing increases. Prograde and retrograde zonal jets are dynamically different: in the prograde jets (which correspond to weak potential vorticity gradients) the convection transports heat efficiently and the mean temperature tends to be homogenized; by contrast, in the cores of the retrograde jets (which correspond to steep gradients of potential vorticity) the dynamics is dominated by the propagation of Rossby waves, resulting in the formation of steep mean temperature gradients and the dominance of conduction in the heat transfer process. Consequently, in quasi-geostrophic systems, the width of the retrograde zonal jets controls the efficiency of the heat transfer.
Potter, G.L.; MacCracken, M.C.; Ellsaesser, H.W.
1975-08-01
Recent interest in the cause of the sub-Sahara drought has initiated several investigations implying possible anthropogenic origin through increased surface albedo due to reduced plant cover from overgrazing. Results of two integrations of the Zonal Atmospheric Model (ZAM2) are presented, differing only in the prescribed surface albedo for the subtropical land masses of the northern hemisphere. These studies were initiated to determine whether an albedo change alone can bring about such dramatic impacts on local precipitation rates as have been implied. Preliminary results indicate that an albedo change can affect the climate, not just at the latitude of change but also at other latitudes due to various atmospheric feedback mechanisms. (auth)
Model averaging and muddled multimodel inferences.
Cade, Brian S
2015-09-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effect size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t
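As a minimal sketch of the quantities discussed: Akaike weights are formed from AIC differences, and averaging model *predictions* by these weights (unlike averaging coefficients) remains well defined even for models that are nonlinear in the parameters. All numbers below are hypothetical:

```python
import math

# Hypothetical AIC values for four candidate regression models.
aic = [100.0, 101.5, 104.0, 110.0]

# Akaike weights: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
# where delta_i = AIC_i - min(AIC).  These weight MODELS, not predictors.
delta = [a - min(aic) for a in aic]
weights = [math.exp(-d / 2) for d in delta]
total = sum(weights)
weights = [w / total for w in weights]

# Model-averaged PREDICTION of the response (per-model predictions are
# hypothetical): valid even when coefficients are not comparable across
# models, which is the distinction the abstract draws.
preds = [2.10, 2.25, 1.90, 2.60]
avg_pred = sum(w * p for w, p in zip(weights, preds))
print([round(w, 3) for w in weights], round(avg_pred, 3))
```

The abstract's warning applies to the next step people often take: summing these weights over models containing a given predictor, or averaging the predictors' coefficients directly, neither of which is justified under multicollinearity.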
Model averaging, optimal inference, and habit formation
FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.
2014-01-01
Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724
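The averaging rule described, weighting each model's prediction by its posterior probability (proportional to its evidence under a uniform model prior), can be sketched as follows; the model names and numbers are illustrative only:

```python
import math

# Hypothetical log model evidences log p(D|m) for three candidate models
# of the environment (illustrative values only).
log_evidence = {"model_A": -10.2, "model_B": -11.0, "model_C": -14.5}

# Posterior model probabilities under a uniform prior over models:
# subtract the max log evidence before exponentiating for stability.
mx = max(log_evidence.values())
unnorm = {m: math.exp(le - mx) for m, le in log_evidence.items()}
z = sum(unnorm.values())
post = {m: v / z for m, v in unnorm.items()}

# Each model's prediction of some behavioral quantity (hypothetical).
pred = {"model_A": 0.8, "model_B": 0.5, "model_C": 0.1}

# Bayesian model average: evidence already trades accuracy against
# complexity, so overly complex models are automatically down-weighted.
bma = sum(post[m] * pred[m] for m in post)
print({m: round(p, 3) for m, p in post.items()}, round(bma, 3))
```

Note how the weighting is dominated by, but not exclusive to, the best model; the residual influence of the runner-up is exactly the kind of graded compromise the authors use to account for habitual behavior.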
Vaporization and Zonal Mixing in Performance Modeling of Advanced LOX-Methane Rockets
NASA Technical Reports Server (NTRS)
Williams, George J., Jr.; Stiegemeier, Benjamin R.
2013-01-01
Initial modeling of LOX-Methane reaction control engine (RCE) 100 lbf thrusters and larger, 5500 lbf thrusters with the TDK/VIPER code has shown good agreement with sea-level and altitude test data. However, the vaporization and zonal mixing upstream of the compressible flow stage of the models leveraged empirical trends to match the sea-level data. This was necessary in part because the codes are designed primarily to handle the compressible part of the flow (i.e., contraction through expansion) and in part because there was limited data on the thrusters themselves on which to base a rigorous model. A more rigorous model has been developed which includes detailed vaporization trends based on element type and geometry, radial variations in mixture ratio within each of the "zones" associated with elements and not just between zones of different element types, and, to the extent possible, updated kinetic rates. The Spray Combustion Analysis Program (SCAP) was leveraged to support assumptions in the vaporization trends. Data for both thrusters are revisited, and the model maintains a good predictive capability while addressing some of the major limitations of the previous version.
Initial Conditions in the Averaging Cognitive Model
ERIC Educational Resources Information Center
Noventa, S.; Massidda, D.; Vidotto, G.
2010-01-01
The initial state parameters s₀ and w₀ are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…
Shape, zonal winds and gravitational field of Jupiter: a fully self-consistent, multi-layered model
NASA Astrophysics Data System (ADS)
Schubert, Gerald; Kong, Dali; Zhang, Keke
2016-10-01
We construct a three-dimensional, finite-element, fully self-consistent, multi-layered, non-spheroidal model of Jupiter consisting of an inner core, a metallic electrically conducting dynamo region and an outer molecular electrically insulating envelope. We assume that the Jovian zonal winds are on cylinders parallel to the rotation axis but, due to the effect of magnetic braking, are confined within the outer molecular envelope. Two related calculations are carried out. The first provides an accurate description of the shape and internal density profile of Jupiter; the effect of rotational distortion is not treated as a small perturbation on a spherically symmetric state. This calculation determines the density, size and shape of the inner core, the irregular shape of the 1-bar pressure level, and the internal structure of Jupiter; the full effect of rotational distortion, without the influence of the zonal winds, is accounted for. Our multi-layered model is able to produce the known mass, the known equatorial and polar radii, and the known zonal gravitational coefficient J2 of Jupiter within their error bars; it also yields the coefficients J4 and J6 within about 5% accuracy, and a core equatorial radius of 0.09 RJ containing 3.73 Earth masses. The second calculation determines the variation of the gravitational field caused solely by the effect of the zonal winds on the rotationally distorted non-spheroidal Jupiter. Four different cases, ranging from a deep wind profile to a very shallow profile, are considered and implications for accurate interpretation of the zonal gravitational coefficients expected from the Juno mission are discussed.
Zonal NePhRO scoring system: a superior renal tumor complexity classification model.
Hakky, Tariq S; Baumgarten, Adam S; Allen, Bryan; Lin, Hui-Yi; Ercole, Cesar E; Sexton, Wade J; Spiess, Philippe E
2014-02-01
Since the advent of the first standardized renal tumor complexity system, many subsequent scoring systems have been introduced, many of which are complicated and can make it difficult to accurately measure data end points. In light of these limitations, we introduce the new zonal NePhRO scoring system. The zonal NePhRO score is based on 4 anatomical components that are assigned a score of 1, 2, or 3, and their sum is used to classify renal tumors. The zonal NePhRO scoring system is made up of the (Ne)arness to collecting system, (Ph)ysical location of the tumor in the kidney, (R)adius of the tumor, and (O)rganization of the tumor. In this retrospective study, we evaluated patients exhibiting clinical stage T1a or T1b who underwent open partial nephrectomy performed by 2 genitourinary surgeons. Each renal unit was assigned both a zonal NePhRO score and a RENAL (radius, exophytic/endophytic properties, nearness of tumor to the collecting system or sinus in millimeters, anterior/posterior, location relative to polar lines) score, and a blinded reviewer used the same preoperative imaging study to obtain both scores. Additional data points gathered included age, clamp time, complication rate, urine leak rate, intraoperative blood loss, and pathologic tumor size. One hundred sixty-six patients underwent open partial nephrectomy. There were 37 perioperative complications quantitated using the validated Clavien-Dindo system; their occurrence was predicted by the NePhRO score on both univariate and multivariate analyses (P = .0008). Clinical stage, intraoperative blood loss, and tumor diameter were all correlated with the zonal NePhRO score on univariate analysis only. The zonal NePhRO scoring system is a simpler tool that accurately predicts the surgical complexity of a renal lesion. Copyright © 2014 Elsevier Inc. All rights reserved.
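The four-component sum described above lends itself to a compact sketch. This is an illustrative Python rendering only: the component names follow the abstract, but the score validation and the complexity bands are hypothetical assumptions, not the published zonal NePhRO criteria.

```python
# Hypothetical sketch of the zonal NePhRO sum: four anatomical components
# ((Ne)arness to collecting system, (Ph)ysical location, (R)adius,
# (O)rganization), each scored 1, 2, or 3, summed to classify complexity.

def nephro_score(nearness: int, physical: int, radius: int, organization: int) -> int:
    """Sum the four zonal NePhRO components (each must be 1, 2, or 3)."""
    components = (nearness, physical, radius, organization)
    if any(c not in (1, 2, 3) for c in components):
        raise ValueError("each NePhRO component must score 1, 2, or 3")
    return sum(components)

def complexity_class(score: int) -> str:
    """Map a total score (4-12) onto low/moderate/high complexity bands.
    The band boundaries here are assumptions for illustration."""
    if score <= 6:
        return "low"
    if score <= 9:
        return "moderate"
    return "high"

print(nephro_score(1, 2, 1, 2))  # total score 6
print(complexity_class(nephro_score(1, 2, 1, 2)))  # "low"
```
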
A zonally symmetric model for the monsoon-Hadley circulation with stochastic convective forcing
NASA Astrophysics Data System (ADS)
De La Chevrotière, Michèle; Khouider, Boualem
2017-02-01
Idealized models of reduced complexity are important tools to understand key processes underlying a complex system. In climate science in particular, they are important for helping the community improve its ability to predict the effect of climate change on the earth system. Climate models are large computer codes based on the discretization of the fluid dynamics equations on grids with horizontal resolution on the order of 100 km, whereas unresolved processes are handled by subgrid models. For instance, simple models are routinely used to help understand the interactions between small-scale processes due to atmospheric moist convection and large-scale circulation patterns. Here, a zonally symmetric model for the monsoon circulation is presented and solved numerically. The model is based on the Galerkin projection of the primitive equations of atmospheric synoptic dynamics onto the first modes of vertical structure to represent free tropospheric circulation and is coupled to a bulk atmospheric boundary layer (ABL) model. The model carries bulk equations for water vapor in both the free troposphere and the ABL, while the processes of convection and precipitation are represented through a stochastic model for clouds. The model equations are coupled through advective nonlinearities, and the resulting system is not conservative and not necessarily hyperbolic. This makes the design of a numerical method for the solution of this system particularly difficult. Here, we develop a numerical scheme based on the operator time-splitting strategy, which decomposes the system into three pieces: a conservative part and two purely advective parts, each of which is solved iteratively using an appropriate method. The conservative system is solved via a central scheme, which does not require hyperbolicity since it avoids the Riemann problem by design. One of the advective parts is a hyperbolic diagonal matrix, which is easily handled by classical methods for hyperbolic equations, while
NASA Astrophysics Data System (ADS)
Park, George Ilhwan; Moin, Parviz
2016-01-01
This paper focuses on numerical and practical aspects associated with a parallel implementation of a two-layer zonal wall model for large-eddy simulation (LES) of compressible wall-bounded turbulent flows on unstructured meshes. A zonal wall model based on the solution of unsteady three-dimensional Reynolds-averaged Navier-Stokes (RANS) equations on a separate near-wall grid is implemented in an unstructured, cell-centered finite-volume LES solver. The main challenge in its implementation is to couple two parallel, unstructured flow solvers for efficient boundary data communication and simultaneous time integrations. A coupling strategy with good load balancing and low processor underutilization is identified. Face mapping and interpolation procedures at the coupling interface are explained in detail. The method of manufactured solution is used for verifying the correct implementation of solver coupling, and parallel performance of the combined wall-modeled LES (WMLES) solver is investigated. The method has successfully been applied to several attached and separated flows, including a transitional flow over a flat plate and a separated flow over an airfoil at an angle of attack.
NASA Astrophysics Data System (ADS)
Cohen, Bruce; Umansky, Maxim; Joseph, Ilon
2015-11-01
Progress is reported on including self-consistent zonal flows in simulations of drift-resistive ballooning turbulence using the BOUT++ framework. Previous published work addressed the simulation of L-mode edge turbulence in realistic single-null tokamak geometry using the BOUT three-dimensional fluid code that solves Braginskii-based fluid equations. The effects of imposed sheared ExB poloidal rotation were included, with a static radial electric field fitted to experimental data. In new work our goal is to include the self-consistent effects on the radial electric field driven by the microturbulence, which contributes to the sheared ExB poloidal rotation (zonal flow generation). We describe a model for including self-consistent zonal flows and an algorithm for maintaining underlying plasma profiles to enable the simulation of steady-state turbulence. We examine the role of Braginskii viscous forces in providing necessary dissipation when including axisymmetric perturbations. We also report on some of the numerical difficulties associated with including the axisymmetric component of the fluctuating fields. This work was performed under the auspices of the U.S. Department of Energy under contract DE-AC52-07NA27344 at the Lawrence Livermore National Laboratory (LLNL-ABS-674950).
NASA Astrophysics Data System (ADS)
Petropavlovskikh, I. V.; Zerefos, C. S.; Kapsomenakis, J. N.; Eleftheratos, K.; Tourpali, K.; Hubert, D.; Godin-Beekmann, S.; Steinbrecht, W.; Frith, S. M.; Sofieva, V.
2017-12-01
This paper focuses on the representativeness of single lidar stations and SBUV overpasses in searching for trends in vertical ozone profiles. It was found that from the lower to the upper stratosphere, single or grouped stations correlate well with zonal means calculated from SBUV overpasses in a global perspective. The best representativeness in vertical ozone profiles is found within 5 degrees of latitude north or south of any lidar station or SBUV overpass, and this latitude range expands as we move to the upper stratospheric layers. The paper includes a detailed analysis ranking proxy footprints in the vertical ozone profiles. The major proxies studied are of different kinds: those external to the earth system (the solar cycle), those that represent dynamic processes (the QBO, AO, AAO and ENSO), the volcanic aerosol component (AOD), and the manmade contribution to chemistry (EESC). Trends have been studied after removal of proxies from the total available SBUV records during the period 1980-2015. As seen in the detailed contributions of the proxies, the major contributions come from chemistry, the solar cycle and AOD. It appears that in particular years proxies that individually contribute small amplitudes to ozone can, when composited, produce anomalies that may influence the long-term change or trend in the ozone profiles. Notable periods of synergistic negative anomalies can be seen in 1983, 1985, 1988, 1992, 1993, 1995, 1997, 1999, 2002, 2004, 2006, 2008, 2011 and 2013. During all these years ozone at about 24 km dropped below -6% of the mean.
The so-called "inflection point" between 1997 and 1999 marks the large reduction of the significant negative ozone trends. It is followed by a recent period of positive ozone change (1998-2015) observed above 15 hPa, whose significance remains to be established because this interval is short compared with the full 36-year record of ozone profiles (1980-2015).
Model averaging techniques for quantifying conceptual model uncertainty.
Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg
2010-01-01
In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
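The criterion-based branch of the techniques surveyed above (AIC-, BIC- and KIC-based averaging) shares one mechanical core: converting information-criterion values into posterior-like model weights. A minimal sketch, assuming illustrative AIC values and predictions rather than any real groundwater models:

```python
# Criterion-based model averaging: each candidate model gets a weight
# proportional to exp(-delta_IC / 2), where delta_IC is its information
# criterion minus the minimum over all candidates.
import math

def ic_weights(criterion_values):
    """Convert information-criterion values (lower = better) to model weights."""
    best = min(criterion_values)
    raw = [math.exp(-(c - best) / 2.0) for c in criterion_values]
    total = sum(raw)
    return [r / total for r in raw]

# Three hypothetical conceptual models with illustrative AIC values:
aics = [100.0, 102.0, 110.0]
w = ic_weights(aics)
print([round(x, 3) for x in w])  # → [0.727, 0.268, 0.005]

# Model-averaged prediction: weighted sum of the individual model predictions.
preds = [1.2, 1.5, 2.0]  # illustrative predictions from each model
print(sum(wi * p for wi, p in zip(w, preds)))
```

Note how quickly the weights concentrate on the best-scoring model: a criterion difference of 10 reduces a model's weight to under one percent, which is one reason the abstract's different techniques can produce very different relative model ranks.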
Wheeler, Matthew W; Bailer, A John
2007-06-01
Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD dose estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO2 dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.
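The averaged-dose-response approach described above can be sketched as follows. The model forms, parameters, and weights are illustrative assumptions, and the bootstrap step for the BMDL is omitted; only the mechanism follows the abstract: average the dose-response curves first, then invert the extra-risk function to find the BMD.

```python
# Sketch: BMD from a model-averaged dose-response curve (illustrative only).
import math

def p_logistic(d, a=-2.0, b=0.5):
    """Quantal logistic dose-response (parameters are illustrative)."""
    return 1.0 / (1.0 + math.exp(-(a + b * d)))

def p_weibull(d, g=0.1, b=0.2, k=1.0):
    """Quantal Weibull dose-response (parameters are illustrative)."""
    return g + (1.0 - g) * (1.0 - math.exp(-b * d ** k))

def extra_risk(p, d):
    """Extra risk relative to background: (P(d) - P(0)) / (1 - P(0))."""
    return (p(d) - p(0.0)) / (1.0 - p(0.0))

def averaged_bmd(weights, models, bmr=0.10, hi=100.0):
    """Bisect for the dose where the model-averaged extra risk equals bmr."""
    def er(d):
        p_avg = lambda x: sum(w * m(x) for w, m in zip(weights, models))
        return extra_risk(p_avg, d)
    lo = 0.0
    for _ in range(200):  # bisection; er is monotone increasing in dose
        mid = 0.5 * (lo + hi)
        if er(mid) < bmr:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

bmd = averaged_bmd([0.6, 0.4], [p_logistic, p_weibull])
print(round(bmd, 3))
```

In the full method the BMDL would then come from repeating this estimate over bootstrap resamples of the dose-response data and taking a lower percentile.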
Empirical wind model for the middle and lower atmosphere. Part 1: Local time average
NASA Technical Reports Server (NTRS)
Hedin, A. E.; Fleming, E. L.; Manson, A. H.; Schmidlin, F. J.; Avery, S. K.; Franke, S. J.
1993-01-01
The HWM90 thermospheric wind model was revised in the lower thermosphere and extended into the mesosphere and lower atmosphere to provide a single analytic model for calculating zonal and meridional wind profiles representative of the climatological average for various geophysical conditions. Gradient winds from CIRA-86 plus rocket soundings, incoherent scatter radar, MF radar, and meteor radar provide the data base and are supplemented by previous data driven model summaries. Low-order spherical harmonics and Fourier series are used to describe the major variations throughout the atmosphere including latitude, annual, semiannual, and longitude (stationary wave 1). The model represents a smoothed compromise between the data sources. Although agreement between various data sources is generally good, some systematic differences are noted, particularly near the mesopause. Root mean square differences between data and model are on the order of 15 m/s in the mesosphere and 10 m/s in the stratosphere for zonal wind, and 10 m/s and 4 m/s, respectively, for meridional wind.
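The low-order spherical-harmonic and Fourier construction described above can be illustrated with a toy evaluator. The basis truncation and coefficient values below are assumptions for illustration, not HWM90's actual coefficients:

```python
# Toy evaluator for a zonal-mean wind expansion: Legendre polynomials in
# sin(latitude) times annual/semiannual Fourier harmonics in day-of-year.
import math

def legendre(n, x):
    """First few Legendre polynomials P_n(x), enough for a low-order fit."""
    if n == 0:
        return 1.0
    if n == 1:
        return x
    if n == 2:
        return 0.5 * (3 * x * x - 1)
    raise ValueError("only n = 0, 1, 2 implemented in this sketch")

def zonal_wind(lat_deg, day_of_year, coeffs):
    """Evaluate u(lat, t) = sum over (n, m, phase) of amplitude times
    P_n(sin(lat)) * {1, cos, sin}(2*pi*m*t/365).
    coeffs maps (n, m, 'c'|'s') -> amplitude in m/s (values illustrative)."""
    x = math.sin(math.radians(lat_deg))
    u = 0.0
    for (n, m, phase), a in coeffs.items():
        if m == 0:
            seasonal = 1.0
        elif phase == 'c':
            seasonal = math.cos(2 * math.pi * m * day_of_year / 365.0)
        else:
            seasonal = math.sin(2 * math.pi * m * day_of_year / 365.0)
        u += a * legendre(n, x) * seasonal
    return u

# Illustrative coefficients: a mean term plus annual and semiannual cycles.
coeffs = {(2, 0, 'c'): -20.0, (1, 1, 'c'): 15.0, (2, 2, 'c'): 5.0}
print(round(zonal_wind(45.0, 15, coeffs), 2))
```
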
Laboratory modeling of multiple zonal jets on the polar beta-plane
NASA Astrophysics Data System (ADS)
Afanasyev, Y.
2011-12-01
Zonal jets observed in the oceans and atmospheres of planets are studied in a laboratory rotating tank. The fluid layer in the rotating tank has a parabolic free surface and dynamically simulates the polar beta-plane, where the Coriolis parameter varies quadratically with distance from the pole. Velocity and surface elevation fields are measured with an optical altimetry method (Afanasyev et al., Exps Fluids 2009). The flows are induced by a localized buoyancy source along the radial direction. The baroclinic flow, consisting of a field of eddies, propagates westward away from the source and forms zonal jets (Fig. 1). Barotropic jets ahead of the baroclinic flow are formed by radiation of beta plumes. Inside the baroclinic flow the jets flow between the chains of eddies. Experimental evidence of so-called noodles (a baroclinic instability mode with motions in the radial, north-south direction), theoretically predicted by Berloff et al. (JFM, JPO 2009), was found in our experiments. The beta-plume radiation mechanism and the mechanism associated with the instability of noodles are likely to contribute to the formation of jets in the baroclinic flow.
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
NASA Astrophysics Data System (ADS)
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
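The within-/between-model variance segregation in the HBMA tree follows the law of total variance applied recursively: at each node, total variance is the probability-weighted mean of child variances plus the variance of child means. A minimal sketch with hypothetical posterior probabilities and predictions (two uncertain components, two propositions each):

```python
# Law-of-total-variance combination for one node of a BMA tree.
def bma_moments(weights, means, variances):
    """Combine child-model means/variances into the parent node's mean,
    within-model variance, between-model variance, and total variance."""
    mean = sum(w * m for w, m in zip(weights, means))
    within = sum(w * v for w, v in zip(weights, variances))
    between = sum(w * (m - mean) ** 2 for w, m in zip(weights, means))
    return mean, within, between, within + between

# Hypothetical tree: two geological-architecture propositions, each averaging
# two boundary-condition propositions (probabilities and values illustrative).
arch_a = bma_moments([0.5, 0.5], [10.0, 12.0], [1.0, 1.0])
arch_b = bma_moments([0.7, 0.3], [15.0, 11.0], [2.0, 2.0])
top = bma_moments([0.6, 0.4], [arch_a[0], arch_b[0]], [arch_a[3], arch_b[3]])
print(top)  # (mean, within, between, total) at the tree root
```

Comparing the between-model variance at each level is what lets the method rank which uncertain component contributes most to the overall model uncertainty.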
Two-Stage Bayesian Model Averaging in Endogenous Variable Models*
Lenkoski, Alex; Eicher, Theo S.; Raftery, Adrian E.
2013-01-01
Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model-averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471
Averaged model to study long-term dynamics of a probe about Mercury
NASA Astrophysics Data System (ADS)
Tresaco, Eva; Carvalho, Jean Paulo S.; Prado, Antonio F. B. A.; Elipe, Antonio; de Moraes, Rodolpho Vilhena
2018-02-01
This paper provides a method for finding initial conditions of frozen orbits for a probe around Mercury. Frozen orbits are those whose orbital elements remain constant on average. Thus, at the same point in each orbit, the satellite always passes at the same altitude. This is very interesting for scientific missions that require close inspection of any celestial body. The orbital dynamics of an artificial satellite about Mercury is governed by the potential attraction of the main body. Besides the Keplerian attraction, we consider the inhomogeneities of the potential of the central body. We include secondary terms of Mercury's gravity field from J_2 up to J_6, and the tesseral harmonic \overline{C}_{22}, which is of the same magnitude as the zonal J_2. In the case of science missions about Mercury, it is also important to consider third-body perturbation (Sun). The circular restricted three-body problem cannot be applied to the Mercury-Sun system due to Mercury's non-negligible orbital eccentricity. Besides the harmonic coefficients of Mercury's gravitational potential and the Sun's gravitational perturbation, our averaged model also includes solar radiation pressure. This simplified model captures the majority of the dynamics of low and high orbits about Mercury. In order to capture the dominant characteristics of the dynamics, short-period terms of the system are removed by applying a double-averaging technique. This algorithm is a two-fold process which firstly averages over the period of the satellite, and secondly averages with respect to the period of the third body. This simplified Hamiltonian model is introduced in the Lagrange planetary equations. Thus, frozen orbits are characterized by a surface depending on three variables: the orbital semimajor axis, eccentricity and inclination. We find frozen orbits for an average altitude of 400 and 1000 km, which are the predicted values for the BepiColombo mission. Finally, the paper delves into the orbital stability of frozen
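The double-averaging step described above, first over the satellite's period and then over the third body's, can be sketched numerically. The perturbation function below is a hypothetical stand-in with fast, slow, and secular parts, not Mercury's actual disturbing potential:

```python
# Numerical double averaging over two mean anomalies (fast: satellite M,
# slow: third-body Ms), as in the two-fold process described above.
import math

def single_average(f, samples=360):
    """Average f over one full period of its angle, sampled uniformly."""
    return sum(f(2 * math.pi * k / samples) for k in range(samples)) / samples

def double_average(g, samples=90):
    """First average over the satellite's mean anomaly M, then over the
    third body's mean anomaly Ms."""
    return single_average(
        lambda Ms: single_average(lambda M: g(M, Ms), samples), samples)

# Illustrative perturbation: a secular term plus fast and slow oscillations.
# Double averaging removes both periodic parts and keeps only the secular 1.0.
g = lambda M, Ms: 1.0 + 0.3 * math.cos(M) + 0.1 * math.cos(2 * Ms)
print(round(double_average(g), 6))  # → 1.0
```

In the actual method this averaging is done analytically on the Hamiltonian before it is inserted into the Lagrange planetary equations, but the effect is the same: periodic terms drop out and only secular dynamics remain.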
Average inactivity time model, associated orderings and reliability properties
NASA Astrophysics Data System (ADS)
Kayid, M.; Izadkhah, S.; Abouammoh, A. M.
2018-02-01
In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable for handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the concept of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.
Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie
2018-02-01
There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, little research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in the three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.
Pirdavani, Ali; Brijs, Tom; Bellemans, Tom; Kochan, Bruno; Wets, Geert
2013-01-01
Travel demand management (TDM) consists of a variety of policy measures that affect the transportation system's effectiveness by changing travel behavior. The primary objective to implement such TDM strategies is not to improve traffic safety, although their impact on traffic safety should not be neglected. The main purpose of this study is to evaluate the traffic safety impact of conducting a fuel-cost increase scenario (i.e. increasing the fuel price by 20%) in Flanders, Belgium. Since TDM strategies are usually conducted at an aggregate level, crash prediction models (CPMs) should also be developed at a geographically aggregated level. Therefore zonal crash prediction models (ZCPMs) are considered to present the association between observed crashes in each zone and a set of predictor variables. To this end, an activity-based transportation model framework is applied to produce exposure metrics which will be used in prediction models. This allows us to conduct a more detailed and reliable assessment while TDM strategies are inherently modeled in the activity-based models unlike traditional models in which the impact of TDM strategies are assumed. The crash data used in this study consist of fatal and injury crashes observed between 2004 and 2007. The network and socio-demographic variables are also collected from other sources. In this study, different ZCPMs are developed to predict the number of injury crashes (NOCs) (disaggregated by different severity levels and crash types) for both the null and the fuel-cost increase scenario. The results show a considerable traffic safety benefit of conducting the fuel-cost increase scenario apart from its impact on the reduction of the total vehicle kilometers traveled (VKT). A 20% increase in fuel price is predicted to reduce the annual VKT by 5.02 billion (11.57% of the total annual VKT in Flanders), which causes the total NOCs to decline by 2.83%. Copyright © 2012 Elsevier Ltd. All rights reserved.
Dynamics of zonal flows in helical systems.
Sugama, H; Watanabe, T-H
2005-03-25
A theory for describing collisionless long-time behavior of zonal flows in helical systems is presented and its validity is verified by gyrokinetic-Vlasov simulation. It is shown that, under the influence of particles trapped in helical ripples, the response of zonal flows to a given source becomes weaker for lower radial wave numbers and deeper helical ripples while a high-level zonal-flow response, which is not affected by helical-ripple-trapped particles, can be maintained for a longer time by reducing their bounce-averaged radial drift velocity. This implies a possibility that helical configurations optimized for reducing neoclassical ripple transport can simultaneously enhance zonal flows which lower anomalous transport.
Ghosh, Pranay; Vahedipour, Kaveh; Lin, Min; Vogel, Jens H; Haynes, Charles A; von Lieres, Eric
2013-01-01
The zonal rate model (ZRM) has previously been applied for analyzing the performance of axial flow membrane chromatography capsules by independently determining the impacts of flow and binding related non-idealities on measured breakthrough curves. In the present study, the ZRM is extended to radial flow configurations, which are commonly used at larger scales. The axial flow XT5 capsule and the radial flow XT140 capsule from Pall are rigorously analyzed under binding and non-binding conditions with bovine serum albumin (BSA) as test molecule. The binding data of this molecule are much better reproduced by the spreading model, which hypothesizes different binding orientations, than by the well-known Langmuir model. Moreover, a revised cleaning protocol with NaCl instead of NaOH and minimized storage time has been identified as most critical for quantitatively reproducing the measured breakthrough curves. The internal geometry of both capsules is visualized by magnetic resonance imaging (MRI). The flow in the external hold-up volumes of the XT140 capsule was found to be more homogeneous than in the previously studied XT5 capsule. An attempt at model-based scale-up was apparently impeded by irregular pleat structures in the used XT140 capsule, which might lead to local variations in the linear velocity through the membrane stack. However, the presented approach is universal and can be applied to different capsules. The ZRM is shown to potentially help save valuable material and time, as the experiments required for model calibration are much cheaper than the predicted large-scale experiment at binding conditions. Biotechnol. Bioeng. 2013; 110: 1129–1141. © 2012 Wiley Periodicals, Inc. PMID:23097218
Robust Model-Based Fault Diagnosis for DC Zonal Electrical Distribution System
2007-06-01
Multi-model ensemble hydrologic prediction using Bayesian model averaging
NASA Astrophysics Data System (ADS)
Duan, Qingyun; Ajami, Newsha K.; Gao, Xiaogang; Sorooshian, Soroosh
2007-05-01
Multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of the Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models. BMA is a statistical procedure that infers consensus predictions by weighing individual predictions based on their probabilistic likelihood measures, with the better performing predictions receiving higher weights than the worse performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows). Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using the Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split-sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme has the advantage of generating more skillful and equally reliable probabilistic predictions than the original ensemble. The performance of the expected BMA predictions in terms of
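The combination of Box-Cox transformation and likelihood-based weighting described above can be sketched as follows. This is not the expectation-maximization procedure typically used to fit BMA weights and variances; the weights here come from a single-pass Gaussian likelihood with an assumed error scale `sigma` and transformation parameter `lam`, purely for illustration:

```python
# Sketch: BMA weights from Gaussian likelihoods of Box-Cox-transformed errors.
import math

def box_cox(y, lam):
    """Box-Cox transform used to make prediction errors approximately Gaussian."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

def bma_weights(obs, member_preds, sigma=1.0, lam=0.3):
    """Weights proportional to each ensemble member's Gaussian likelihood of
    the Box-Cox-transformed observations (sigma and lam are assumed here)."""
    z_obs = [box_cox(y, lam) for y in obs]
    raw = []
    for preds in member_preds:
        z = [box_cox(p, lam) for p in preds]
        ll = sum(-0.5 * ((zo - zp) / sigma) ** 2 for zo, zp in zip(z_obs, z))
        raw.append(math.exp(ll))
    total = sum(raw)
    return [r / total for r in raw]

# Illustrative streamflow observations and three ensemble members' predictions:
obs = [2.0, 5.0, 3.0]
members = [[2.1, 4.8, 3.2], [3.0, 6.0, 2.0], [2.0, 5.0, 3.1]]
w = bma_weights(obs, members)
print([round(x, 3) for x in w])

# BMA consensus prediction for a new time step (in transformed space):
new = [2.5, 3.5, 2.6]
print(sum(wi * box_cox(p, 0.3) for wi, p in zip(w, new)))
```

The member whose transformed predictions sit closest to the transformed observations receives the largest weight, which is the "better performing predictions receive higher weights" behavior the abstract describes.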
Amplification of warming due to intensification of zonal circulation in the mid-latitudes
NASA Astrophysics Data System (ADS)
Alekseev, Genrikh; Ivanov, Nikolai; Kharlanenkova, Natalia; Kuzmina, Svetlana
2015-04-01
We propose a new index to evaluate the impact of atmospheric zonal transport oscillations on the inter-annual variability and trends of average air temperature in the mid-latitudes, the Northern Hemisphere and the globe. A simple model of a mid-latitude "ocean-land-atmosphere" channel was used to produce the analytic relationship between the zonal circulation and the land-ocean temperature contrast, which was used as the basis for the index. An inverse relationship was found between the indexes and the average mid-latitude, hemispheric and global temperatures during the cold half of the year, and an opposite one in summer. These relationships hold up to the 400 mb level. In winter the relationship describes up to 70, 50 and 40% of the surface air temperature inter-annual variability of these averages, respectively. The contribution of the zonal circulation to the increase in the average surface air temperature during the warming period 1969-2008 reaches 75% in the mid-latitudes and 40% in the Northern Hemisphere. The proposed mid-latitude index correlates negatively with surface air temperature in the Arctic except in summer. ECHAM4 projections with the A1B scenario show that the increase of zonal circulation accounts for more than 74% of the warming in the Northern Hemisphere for 2001-2100. Our analysis confirms that the proposed index is an effective indicator of climate change caused by variations of the zonal circulation that arise due to anthropogenic and/or natural global forcing mechanisms.
NASA Astrophysics Data System (ADS)
Konecky, B.; Russell, J. M.; Vuille, M.; Rodysill, J. R.; Cohen, L. R.; Chuman, A. F.; Huang, Y.
2011-12-01
We present new evidence for multi-decadal to millennial-scale hydroclimatic change in the continental Indian Ocean region over the past two millennia. We assess regional hydrological variability using new records of the δD of terrestrial plant waxes from the sediments of several lakes in tropical East Africa and Indonesia. We compare these new data to previous δ18O and δD records from the region and interpret the results in light of an isotope-enabled climate model simulation of the past 130 years. Long-term trends in our data support a southward migration of the mean position of the Intertropical Convergence Zone (ITCZ) over the past millennium, bringing progressively wetter conditions and D-depleted waxes to our southernmost site (~8°S) starting around 950 C.E. while maintaining overall wet conditions at our northernmost site (~0°N) until the end of the 19th century. Superimposed on this long-term trend is a series of pronounced, multi-decadal to centennial-scale isotopic excursions that are coincident in timing but opposite in direction on the two sides of the Indian Ocean. These zonally asymmetric isotopic fluctuations become progressively more pronounced beginning around 1400 C.E., with the onset of the Little Ice Age cool conditions recorded in sea surface temperature reconstructions from the Northern Hemisphere and the Indo-Pacific Warm Pool (IPWP). Previous work in the IPWP region suggests cooler SSTs, reduced boreal-summer Asian monsoon intensity, and less ENSO-like activity during the Little Ice Age [Oppo et al., 2009, Nature 460:1113, and references therein], although recent paleolimnological reconstructions from Java indicate punctuated droughts during this time [Rodysill et al., 2010, Eos Trans. AGU, 91(52), Fall Meet. Suppl., Abstract PP51B-04]. Our records suggest that multi-decadal to centennial precipitation variability was in fact enhanced during this period in parts of equatorial East Africa and western Indonesia. The direction of isotopic
An improved switching converter model using discrete and average techniques
NASA Technical Reports Server (NTRS)
Shortt, D. J.; Lee, F. C.
1982-01-01
The nonlinear modeling and analysis of dc-dc converters has been done by averaging and discrete-sampling techniques. The averaging technique is simple, but inaccurate as the modulation frequencies approach the theoretical limit of one-half the switching frequency. The discrete technique is accurate even at high frequencies, but is very complex and cumbersome. An improved model is developed by combining the aforementioned techniques. This new model is easy to implement in circuit and state variable forms and is accurate to the theoretical limit.
Time Series ARIMA Models of Undergraduate Grade Point Average.
ERIC Educational Resources Information Center
Rogers, Bruce G.
The Auto-Regressive Integrated Moving Average (ARIMA) models, often referred to as Box-Jenkins models, are regression methods for analyzing sequences of dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation, and diagnosis, was used to select the most appropriate…
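As a minimal illustration of the first two Box-Jenkins stages, the sketch below computes a sample autocorrelation (identification) and fits the simplest autoregressive model, an AR(1), by conditional least squares (estimation). The series, its length, and the true coefficient are invented for the example:

```python
import random

def acf(x, lag):
    """Sample autocorrelation at a given lag (identification stage)."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x) / n
    ck = sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag)) / n
    return ck / c0

def fit_ar1(x):
    """Conditional least-squares estimate of phi in x_t = phi * x_{t-1} + e_t
    (estimation stage for the simplest Box-Jenkins model)."""
    num = sum(x[t - 1] * x[t] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

# Simulate a zero-mean AR(1) series with a known coefficient.
random.seed(0)
x, phi = [0.0], 0.7
for _ in range(2000):
    x.append(phi * x[-1] + random.gauss(0, 1))
print(round(fit_ar1(x), 2))  # close to the true value 0.7
```

For an AR(1) process the lag-1 autocorrelation itself estimates phi, which is the kind of diagnostic the identification stage relies on.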
Indian Ocean zonal mode activity in 20th century observations and simulations
NASA Astrophysics Data System (ADS)
Sendelbeck, Anja; Mölg, Thomas
2016-04-01
The Indian Ocean zonal mode (IOZM) is a coupled ocean-atmosphere system with anomalous cooling in the east, warming in the west, and easterly wind anomalies, resulting in a complete reversal of the climatological zonal sea surface temperature (SST) gradient. The IOZM has a strong influence on East African climate by causing anomalously strong October-December (OND) precipitation. Using observational data and historical CMIP5 (Coupled Model Intercomparison Project phase 5) model output, the September-November (SON) dipole mode index (DMI), OND East African precipitation, and the SON zonal wind index (ZWI) are calculated. We pay particular attention to detrending SSTs when calculating the DMI, a step that seems to have been neglected in some published research. The ZWI is defined as the area-averaged zonal wind component at 850 hPa over the central Indian Ocean. Regression analysis is used to evaluate the models' capability to represent the IOZM and its impact on East African climate between 1948 and 2005. Simple correlations are calculated between SST, zonal wind, and precipitation to show their interdependence. A high correlation in a model implies a good representation of the influence of the IOZM on East African climate variability, and our goal is to identify the models with the highest correlation coefficients. In future research, these model data might be used to investigate the impact of the IOZM on East African climate variability in the late 20th century with regard to anthropogenic causes and internal variability.
NASA Astrophysics Data System (ADS)
Haoxiang, Chen; Qi, Chengzhi; Peng, Liu; Kairui, Li; Aifantis, Elias C.
2015-12-01
The occurrence of alternating damage zones surrounding underground openings (commonly known as zonal disintegration) is treated as a "far from thermodynamic equilibrium" dynamical process or a nonlinear continuous phase transition phenomenon. The approach of internal variable gradient theory with diffusive transport, which may be viewed as a subclass of Landau's phase transition theory, is adopted. The order parameter is identified with an irreversible strain quantity, the gradient of which enters into the expression for the free energy of the rock system. The gradient term stabilizes the material behavior in the post-softening regime, where zonal disintegration occurs. The results of a simplified linearized analysis are confirmed by the numerical solution of the nonlinear problem.
Kumaraswamy autoregressive moving average models for double bounded environmental data
NASA Astrophysics Data System (ADS)
Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme
2017-12-01
In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), a dynamic class of models for time series taking values in the doubly bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields; classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters, and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing, diagnostic analysis, and forecasting. In particular, we provide closed-form expressions for the conditional score vector and the conditional Fisher information matrix. An application to real environmental data is presented and discussed.
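For reference, the Kumaraswamy density and its closed-form median — the quantity that KARMA equips with a dynamic ARMA-type structure through a link function — can be written down directly. This is a generic sketch on the standard (0, 1) interval, not code from the paper:

```python
def kw_pdf(x, a, b):
    """Kumaraswamy density on (0, 1): f(x) = a*b*x^(a-1)*(1 - x^a)^(b-1),
    with shape parameters a > 0 and b > 0."""
    return a * b * x ** (a - 1) * (1 - x ** a) ** (b - 1)

def kw_median(a, b):
    """Closed-form median of the Kumaraswamy distribution:
    (1 - 2^(-1/b))^(1/a)."""
    return (1 - 2 ** (-1.0 / b)) ** (1.0 / a)

# For a = b = 1 the distribution is uniform on (0, 1), so the median is 0.5.
print(kw_median(1.0, 1.0))
```

The availability of a simple closed-form median (unlike the mean, which involves beta functions) is one reason a median-based dynamic model is convenient for this family.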
NASA Astrophysics Data System (ADS)
Erfanian, A.; Fomenko, L.; Wang, G.
2016-12-01
The multi-model ensemble (MME) average is considered the most reliable approach for simulating both present-day and future climates, and it has been a primary reference for conclusions in major coordinated studies such as the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes at tremendous computational cost, which is especially limiting for regional climate modeling since model uncertainties can originate from both the RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in the ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This gives the new method a theoretical advantage in addition to its reduced computational cost: the ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: multi-model ensemble, ensemble analysis, ERF, regional climate modeling
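The core of the ERF construction — one averaged set of IBCs feeding a single RCM run, instead of one run per GCM — reduces, for each boundary field, to an elementwise mean across the driving models. A minimal sketch (the flattened lists standing in for gridded GCM boundary fields are hypothetical):

```python
def average_ibcs(fields):
    """Elementwise average of the same IBC field (e.g., boundary temperature,
    flattened to a list of grid-point values) taken from several driving GCMs.
    This single averaged field then drives one RCM run in the ERF approach."""
    n = len(fields)
    return [sum(vals) / n for vals in zip(*fields)]

# Two hypothetical GCMs providing the same 4-point boundary field.
gcm_a = [300.0, 301.0, 299.5, 298.0]
gcm_b = [302.0, 300.0, 300.5, 299.0]
print(average_ibcs([gcm_a, gcm_b]))
```

Averaging the forcings before the run (rather than the outputs after N runs, as in MME) is what cuts the cost from N RCM integrations to one.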
Computational problems in autoregressive moving average (ARMA) models
NASA Technical Reports Server (NTRS)
Agarwal, G. C.; Goodarzi, S. M.; O'Neill, W. D.; Gottlieb, G. L.
1981-01-01
The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
Maximum likelihood estimation for periodic autoregressive moving average models
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
Sukhovol'skiĭ, V G; Ovchinnikova, T M; Baboĭ, S D
2014-01-01
A model of ecological second-order phase transitions is proposed to describe the altitudinal-belt zonality of wood vegetation. The objects of study are the forest cenoses of the northern slope of the Kulumyss Ridge (the Sayan Mountains), and the results comprise altitude profiles of wood vegetation. An ecological phase transition can be considered as the transition of cenoses at different altitudes from the state of presence of a certain tree species within the studied territory to the state of its absence. By analogy with the physical model of second-order phase transitions, an order parameter is introduced (the area fraction occupied by a single tree species at a certain altitude) as well as a control variable (the altitude of the wood vegetation belt). As the formal relation between them, an analog of Landau's equation for phase transitions in physical systems is obtained. The model is shown to be in good accordance with the empirical data. It can thus be used to estimate the upper and lower boundaries of the altitude belts of individual tree species (such as birch, aspen, Siberian fir, and Siberian pine) as well as the breadth of their ecological niches with regard to altitude. The model also includes parameters that numerically describe the interactions between different species of wood vegetation. The versatility of the approach simplifies the description and modeling of the altitudinal zonality of wood vegetation and enables assessment of the response of vegetation cenoses to climatic change.
The Role of Monsoon-Like Zonally Asymmetric Heating in Interhemispheric Transport
NASA Technical Reports Server (NTRS)
Chen, Gang; Orbe, Clara; Waugh, Darryn
2017-01-01
While the importance of the seasonal migration of the zonally averaged Hadley circulation for the interhemispheric transport of trace gases has been recognized, few studies have examined the role of the zonally asymmetric monsoonal circulation. This study investigates the role of monsoon-like zonally asymmetric heating in interhemispheric transport using a dry atmospheric model forced by idealized Newtonian relaxation to a prescribed radiative-equilibrium temperature. When only the seasonal cycle of zonally symmetric heating is considered, the mean age of Southern Hemisphere air since last contact with the Northern Hemisphere midlatitude boundary layer is much larger than observed. The introduction of monsoon-like zonally asymmetric heating not only reduces the mean age of tropospheric air to more realistic values, but also produces an upper-tropospheric cross-equatorial transport pathway in boreal summer that resembles the pathway simulated in the NASA Global Modeling Initiative (GMI) Chemistry Transport Model driven with MERRA meteorological fields. These results highlight that the monsoon-induced eddy circulation plays an important role in the interhemispheric transport of long-lived chemical constituents.
Bayesian block-diagonal variable selection and model averaging
Papaspiliopoulos, O.; Rossell, D.
2018-01-01
We propose a scalable algorithmic framework for exact Bayesian variable selection and model averaging in linear models under the assumption that the Gram matrix is block-diagonal, and as a heuristic for exploring the model space for general designs. In block-diagonal designs our approach returns the most probable model of any given size without resorting to numerical integration. The algorithm also provides a novel and efficient solution to the frequentist best subset selection problem for block-diagonal designs. Posterior probabilities for any number of models are obtained by evaluating a single one-dimensional integral, and other quantities of interest, such as variable inclusion probabilities and model-averaged regression estimates, are obtained by an adaptive, deterministic one-dimensional numerical integration. The overall computational cost scales linearly with the number of blocks, which can be processed in parallel, and exponentially with the block size, rendering the method most adequate in situations where predictors are organized in many moderately sized blocks. For general designs, we approximate the Gram matrix by a block-diagonal matrix using spectral clustering and propose an iterative algorithm that capitalizes on the block-diagonal algorithms to explore the model space efficiently. All methods proposed in this paper are implemented in the R library mombf. PMID:29861501
Zonal Acoustic Velocimetry in 30-cm, 60-cm, and 3-m Laboratory Models of the Outer Core
NASA Astrophysics Data System (ADS)
Rojas, R.; Doan, M. N.; Adams, M. M.; Mautino, A. R.; Stone, D.; Lekic, V.; Lathrop, D. P.
2016-12-01
Knowledge of zonal flows and shear is key to understanding magnetic field dynamics in the Earth and in laboratory experiments with Earth-like geometries. Traditional techniques for measuring fluid flow using visualization and particle tracking are not well suited to liquid metal flows. This has led us to develop a flow measurement technique based on acoustic mode velocimetry adapted from helioseismology. As a first step prior to measurements in the liquid sodium experiments, we implement this technique in our 60-cm diameter spherical Couette experiment in air. To account for a more realistic experimental geometry, including deviations from spherical symmetry, we compute predicted frequencies of acoustic normal modes using the finite element method. The higher accuracy of the predicted frequencies allows the identification of over a dozen acoustic modes, and mode identification is further aided by the use of multiple microphones and by analyzing spectra together with those obtained at a variety of nearby Rossby numbers. Differences between the predicted and observed mode frequencies are caused by differences in the flow patterns present in the experiment. We compare acoustic mode frequency splittings with theoretical predictions for stationary-fluid and solid-body flow conditions, with excellent agreement. We also use this technique to estimate the zonal shear in those experiments across a range of Rossby numbers. Finally, we report on initial attempts to apply the technique to liquid sodium in the 3-meter diameter experiment and on parallel experiments performed in water in the 30-cm diameter experiment.
Application Bayesian Model Averaging method for ensemble system for Poland
NASA Astrophysics Data System (ADS)
Guzikowski, Jakub; Czerwinska, Agnieszka
2014-05-01
The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting (WRF) model and calibrating these data by means of a Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution short-range ensemble forecasts using meteorological data (temperature) generated by nine WRF model configurations, each with 35 vertical levels and 2.5 km x 2.5 km horizontal resolution. The main point is that the ensemble members use different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive probability density function (PDF) is a weighted average of the predictive PDFs associated with the individual ensemble members, with weights that reflect each member's relative skill. As a test we chose a case with a heat wave and convective weather conditions over Poland from 23 July to 1 August 2013. From 23 July to 29 July 2013 temperatures oscillated around 30 degrees Celsius at many meteorological stations and new temperature records were set. During this time an increase in hospitalized patients with cardiovascular problems was registered. On 29 July 2013 an advection of moist tropical air masses over Poland caused a strong convection event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to transport infrastructure, destroyed buildings and trees, injuries, and a direct threat to life. The meteorological data from the ensemble system are compared with data recorded at 74 weather stations in Poland. We prepare a set of model-observation pairs; then the data from single ensemble members and the median from the WRF BMA system are evaluated using the deterministic error statistics root mean square error (RMSE) and mean absolute error (MAE). To evaluation
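The two deterministic scores used in the evaluation, RMSE and MAE, are standard and easy to state in code. The sketch below is generic: it assumes paired lists of observed and forecast values (the numbers are illustrative, not station data from the study):

```python
import math

def rmse(obs, pred):
    """Root mean square error between paired observations and predictions."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean absolute error between paired observations and predictions."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

# Hypothetical model-observation pairs (temperatures in degrees Celsius).
obs = [29.5, 31.0, 30.2, 28.8]
pred = [30.0, 30.5, 31.0, 28.0]
print(rmse(obs, pred), mae(obs, pred))
```

Because RMSE squares the errors it penalizes large misses more heavily than MAE, which is why the two are usually reported together.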
Enhancing Flood Prediction Reliability Using Bayesian Model Averaging
NASA Astrophysics Data System (ADS)
Liu, Z.; Merwade, V.
2017-12-01
Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to relying on the prediction from a single model simulation, using an ensemble of predictions that accounts for uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri, combining multi-model simulations to obtain reliable deterministic water stage predictions and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness, and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and the BMA prediction ability is validated on another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global BMA (BMA_G) prediction, which in turn is superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
Using Bayes Model Averaging for Wind Power Forecasts
NASA Astrophysics Data System (ADS)
Preede Revheim, Pål; Beyer, Hans Georg
2014-05-01
For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas are often of interest. A naive approach is to make forecasts for each individual site and sum them up to obtain the group forecast. It is however well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might nevertheless be the case that some sites tend to reflect the total output of the region more accurately, either in general or for certain wind directions, and it is then of interest to give these sites a greater influence on the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect each member's contribution to overall forecasting skill over a training period. In Revheim and Beyer [2], the BMA procedure used by Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power it resulted either in problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or in severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data
NASA Astrophysics Data System (ADS)
Khokhlov, A.; Hulot, G.; Johnson, C. L.
2013-12-01
It is well known that the geometry of the recent time-averaged paleomagnetic field (TAF) is very close to that of a geocentric axial dipole (GAD). However, many TAF models recovered from averaging lava flow paleomagnetic directional data (the most numerous and reliable of all data) suggest that significant additional terms, in particular quadrupolar (G20) and octupolar (G30) zonal terms, likely contribute. The traditional way in which most such TAF models are recovered uses an empirical estimate for paleosecular variation (PSV) that is subject to limitations imposed by the limited age information available for such data. In this presentation, we will report on a new way to recover the TAF, using an inverse modeling approach based on the so-called Giant Gaussian Process (GGP) description of the TAF and PSV, and various statistical tools we recently made available (see Khokhlov and Hulot, Geophysical Journal International, 2013, doi: 10.1093/gji/ggs118). First results based on high quality data published from the Time-Averaged Field Investigations project (see Johnson et al., G-cubed, 2008, doi:10.1029/2007GC001696) clearly show that both the G20 and G30 terms are very well constrained, and that optimum values fully consistent with the data can be found. These promising results lay the groundwork for use of the method with more extensive data sets, to search for possible additional non-zonal departures of the TAF from the GAD.
New model of the average neutron and proton pairing gaps
NASA Astrophysics Data System (ADS)
Madland, David G.; Nix, J. Rayford
1988-01-01
By use of the BCS approximation applied to a distribution of dense, equally spaced levels, we derive new expressions for the average neutron pairing gap Δ̄n and average proton pairing gap Δ̄p. These expressions, which contain exponential terms, take into account the dependencies of Δ̄n and Δ̄p upon both the relative neutron excess and shape of the nucleus. The three constants that appear are determined by a least-squares adjustment to experimental pairing gaps obtained by use of fourth-order differences of measured masses. For this purpose we use the 1986 Audi-Wapstra mid-stream mass evaluation and take into account experimental uncertainties. Our new model explains not only the dependencies of Δ̄n and Δ̄p upon relative neutron excess and nuclear shape, but also the experimental result that for medium and heavy nuclei Δ̄n is generally smaller than Δ̄p. We also introduce a new expression for the average residual neutron-proton interaction energy δ̄ that appears in the masses of odd-odd nuclei, and determine the constant that appears by an analogous least-squares adjustment to experimental mass differences. Our new expressions for Δ̄n, Δ̄p and δ̄ should permit extrapolation of these quantities to heavier nuclei and to nuclei farther removed from the valley of β stability than do previous parameterizations.
The dynamics of multimodal integration: The averaging diffusion model.
Turner, Brandon M; Gao, Juan; Koenig, Scott; Palfy, Dylan; McClelland, James L
2017-12-01
We combine extant theories of evidence accumulation and multimodal integration to develop an integrated framework for modeling multimodal integration as a process that unfolds in real time. Many studies have formulated sensory processing as a dynamic process in which noisy samples of evidence are accumulated until a decision is made, but these studies are often limited to a single sensory modality. Studies of multimodal stimulus integration have focused on how best to combine different sources of information to elicit a judgment, but are often limited to a single time point, typically after the integration process has occurred. We address these limitations by combining the two approaches. Experimentally, we present data that allow us to study the time course of evidence accumulation within each of the visual and auditory domains as well as in a bimodal condition. Theoretically, we develop a new Averaging Diffusion Model in which the decision variable is the mean rather than the sum of the evidence samples, and use it as a basis for comparing three alternative models of multimodal integration, allowing us to assess the optimality of the integration. The outcome reveals rich individual differences in multimodal integration: while some subjects' data are consistent with adaptive optimal integration, reweighting sources of evidence as their relative reliability changes during evidence integration, others exhibit patterns inconsistent with optimality.
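A minimal simulation of the averaging idea — tracking the mean rather than the sum of evidence samples — might look as follows. The drift, noise level, bound, and deadline are invented illustration values, not parameters from the paper:

```python
import random

def averaging_diffusion(drift, noise_sd, threshold, max_steps, rng):
    """One simulated trial. Noisy evidence samples are accumulated, but the
    decision variable is their running MEAN (not the sum, as in a standard
    diffusion model); the trial ends when the mean leaves the decision bounds
    [-threshold, +threshold], or at the deadline with no decision (0)."""
    total = 0.0
    for t in range(1, max_steps + 1):
        total += rng.gauss(drift, noise_sd)
        if abs(total / t) >= threshold:
            return (1 if total > 0 else -1), t
    return 0, max_steps

# With positive drift, most trials should terminate with the +1 response.
rng = random.Random(42)
choices = [averaging_diffusion(0.5, 1.0, 0.3, 200, rng)[0] for _ in range(500)]
print(sum(c == 1 for c in choices) / len(choices))  # mostly correct choices
```

Unlike a summed accumulator, the running mean converges toward the drift rate, so late in a trial the decision variable stabilizes rather than drifting without bound — the property that distinguishes the averaging model.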
The Relationship Between the Zonal Mean ITCZ and Regional Precipitation during the mid-Holocene
NASA Astrophysics Data System (ADS)
Niezgoda, K.; Noone, D.; Konecky, B.
2017-12-01
Characteristics of the zonal mean Tropical Rain Belt (TRB, i.e., the ITCZ plus the land-based monsoons) are often inferred from individual proxy records of precipitation or other hydroclimatic variables. However, these inferences can be misleading. Here, an isotope-enabled climate model simulation is used to evaluate metrics of the zonal mean ITCZ against regional hydrological characteristics during the mid-Holocene (MH, 6 kya). The MH provides a unique perspective on the relationship between the ITCZ and regional hydrology because of large, orbitally driven shifts in tropical precipitation as well as a critical mass of proxy records. By using a climate model with simulated water isotopes, characteristics of atmospheric circulation and water transport processes can be inferred, and comparison with isotope proxies can be made more directly. We find that estimates of the zonal-mean ITCZ are insufficient for evaluating the regional responses of hydrological cycles to forcing changes. For example, one approximation of a 1.5-degree northward shift in the zonal-mean ITCZ position during the MH corresponded well with northward shifts in maximum rainfall in tropical Africa, but did not match southward shifts in the tropical Pacific or longitudinal shifts in the Indian monsoon region. In many regions, the spatial distribution of water vapor isotopes suggests that changes in moisture source and atmospheric circulation were a greater influence on precipitation distribution, intensity, and isotope ratio than the average northward shift in ITCZ latitude. These findings reinforce the idea that using tropical hydrological proxy records to infer zonal-mean characteristics of the ITCZ may be misleading. Rather, tropical proxy records of precipitation, particularly those that record precipitation isotopes, serve as a guideline for regional hydrological changes, while model simulations can put them in the context of zonal mean tropical convergence.
Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs
NASA Astrophysics Data System (ADS)
Chitsazan, N.; Tsai, F. T.
2012-12-01
Groundwater remediation designs rely heavily on simulation models, which are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effects of these uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer framework, where each layer targets one source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights at different hierarchy levels and assessing the relative importance of models at each level. To account for uncertainty, we employ chance-constrained (CC) programming for stochastic remediation design. Chance-constrained programming has traditionally been used to account for parameter uncertainty; recently, many studies have suggested that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance-constrained programming along with HBMA can therefore provide a rigorous tool for groundwater remediation design under uncertainty. In this research, HBMA-CC was applied to a remediation design in a synthetic aquifer. The design was to develop a scavenger-well approach to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation, and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated the prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to a significant error in evaluating prediction variances for two reasons. First, considering only the single best model, variances that stem from uncertainty in the model structure are ignored. Second, considering the best model with non
Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2006-12-01
Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis on the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
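The within/between variance decomposition described above can be sketched as follows. The numbers are hypothetical; in the paper the weights come from NLSE-based posterior probabilities:

```python
import numpy as np

# BMA combination over K parameterization methods (illustrative values).
def bma_combine(means, variances, weights):
    """Return the BMA mean and total variance.

    Total variance = within-parameterization variance (weighted average
    of each method's variance) + between-parameterization variance
    (spread of the per-method means around the BMA mean).
    """
    means = np.asarray(means, float)
    variances = np.asarray(variances, float)
    w = np.asarray(weights, float)
    w = w / w.sum()                            # normalize model weights
    mean = np.dot(w, means)                    # BMA conditional mean
    within = np.dot(w, variances)              # within-parameterization
    between = np.dot(w, (means - mean) ** 2)   # between-parameterization
    return mean, within + between

# Example: three parameterizations estimating log-conductivity at a point
m, v = bma_combine(means=[2.0, 2.4, 1.8],
                   variances=[0.10, 0.15, 0.12],
                   weights=[0.5, 0.3, 0.2])
```

The between-parameterization term is exactly what is lost when a single "best" parameterization is used on its own.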
Incorporation of fragmentation into a volume average solidification model
NASA Astrophysics Data System (ADS)
Zheng, Y.; Wu, M.; Kharicha, A.; Ludwig, A.
2018-01-01
In this study, a volume average solidification model was extended to consider fragmentation as a source of equiaxed crystals during mixed columnar-equiaxed solidification. The formulation suggested for fragmentation is based on two hypotheses: the solute-driven remelting is the dominant mechanism; and the transport of solute-enriched melt through an interdendritic flow in the columnar growth direction is favorable for solute-driven remelting and is the necessary condition for fragment transportation. Furthermore, a test case with Sn-10 wt%Pb melt solidifying vertically downward in a 2D domain (50 × 60 mm2) was calculated to demonstrate the model’s features. Solidification started from the top boundary, and a columnar structure developed initially with its tip growing downward. Furthermore, thermo-solutal convection led to fragmentation in the mushy zone near the columnar tip front. The fragments transported out of the columnar region continued to grow and sink, and finally settled down and piled up in the bottom domain. The growing columnar structure from the top and pile-up of equiaxed crystals from the bottom finally led to a mixed columnar-equiaxed structure, in turn leading to a columnar-to-equiaxed transition (CET). A special macrosegregation pattern was also predicted, in which negative segregation occurred in both columnar and equiaxed regions and a relatively strong positive segregation occurred in the middle domain near the CET line. A parameter study was performed to verify the model capability, and the uncertainty of the model assumption and parameter was discussed.
Convectively driven decadal zonal accelerations in Earth's fluid core
NASA Astrophysics Data System (ADS)
More, Colin; Dumberry, Mathieu
2018-04-01
Azimuthal accelerations of cylindrical surfaces co-axial with the rotation axis have been inferred to exist in Earth's fluid core on the basis of magnetic field observations and changes in the length-of-day. These accelerations have a typical timescale of decades. However, the physical mechanism causing the accelerations is not well understood. Scaling arguments suggest that the leading order torque averaged over cylindrical surfaces should arise from the Lorentz force. Decadal fluctuations in the magnetic field inside the core, driven by convective flows, could then force decadal changes in the Lorentz torque and generate zonal accelerations. We test this hypothesis by constructing a quasi-geostrophic model of magnetoconvection, with thermally driven flows perturbing a steady, imposed background magnetic field. We show that when the Alfvén number in our model is similar to that in Earth's fluid core, temporal fluctuations in the torque balance are dominated by the Lorentz torque, with the latter generating mean zonal accelerations. Our model reproduces both fast, free Alfvén waves and slow, forced accelerations, with ratios of relative strength and relative timescale similar to those inferred for the Earth's core. The temporal changes in the magnetic field which drive the time-varying Lorentz torque are produced by the underlying convective flows, shearing and advecting the magnetic field on a timescale associated with convective eddies. Our results support the hypothesis that temporal changes in the magnetic field deep inside Earth's fluid core drive the observed decadal zonal accelerations of cylindrical surfaces through the Lorentz torque.
NASA Astrophysics Data System (ADS)
Jia, Song; Xu, Tian-he; Sun, Zhang-zhen; Li, Jia-jing
2017-02-01
UT1-UTC is an important part of the Earth Orientation Parameters (EOP). High-precision predictions of UT1-UTC play a key role in practical applications such as deep space exploration, spacecraft tracking, and satellite navigation and positioning. In this paper, a new prediction method combining the Gray Model (GM(1,1)) and the Autoregressive Integrated Moving Average (ARIMA) model is developed. The main idea is as follows. First, the UT1-UTC data are preprocessed by removing the leap seconds and the Earth's zonal harmonic tidal terms to obtain UT1R-TAI data. Periodic terms are estimated and removed by least squares to obtain UT2R-TAI. Then the linear terms of the UT2R-TAI data are modeled by GM(1,1), and the residual terms are modeled by ARIMA. Finally, the UT2R-TAI prediction is performed based on the combined GM(1,1) and ARIMA model, and the UT1-UTC predictions are obtained by adding back the corresponding periodic terms, the leap second correction and the Earth's zonal harmonic tidal correction. The results show that the proposed model can predict UT1-UTC effectively, with higher middle- and long-term (32 to 360 days) accuracy than LS + AR, LS + MAR and WLS + MAR.
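A minimal sketch of the GM(1,1) trend-modeling step, following the standard grey-model formulation; the input series and forecast horizon below are illustrative, not the paper's UT2R-TAI data:

```python
import numpy as np

# Standard GM(1,1): fit dx1/dt + a*x1 = b on the accumulated series,
# then difference the fitted accumulated forecast back to the original.
def gm11_forecast(x0, steps):
    x0 = np.asarray(x0, float)
    x1 = np.cumsum(x0)                          # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])               # background (mean) values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # grey parameters
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # accumulated forecast
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # restore series
    return x0_hat[n:]

# Usage: forecast two steps of a slowly growing series
pred = gm11_forecast([2.0, 2.2, 2.42, 2.66, 2.93], steps=2)
```

In the paper's pipeline this trend forecast would be combined with an ARIMA model of the residuals before restoring the periodic and tidal terms.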
On the Variation of Zonal Gravity Coefficients of a Giant Planet Caused by Its Deep Zonal Flows
NASA Astrophysics Data System (ADS)
Kong, Dali; Zhang, Keke; Schubert, Gerald
2012-04-01
Rapidly rotating giant planets are usually marked by the existence of strong zonal flows at the cloud level. If the zonal flow is sufficiently deep and strong, it can produce hydrostatic-related gravitational anomalies through distortion of the planet's shape. This paper determines the zonal gravity coefficients J_2n, n = 1, 2, 3, ..., via an analytical method taking into account rotation-induced shape changes, by assuming that a planet has an effective uniform density and that the zonal flows arise from deep convection and extend along cylinders parallel to the rotation axis. Two different but related hydrostatic models are considered. When a giant planet is in rigid-body rotation, the exact solution of the problem using oblate spheroidal coordinates is derived, allowing us to compute the value of its zonal gravity coefficients J̄_2n, n = 1, 2, 3, ..., without making any approximation. When the deep zonal flow is sufficiently strong, we develop a general perturbation theory for estimating the variation of the zonal gravity coefficients, ΔJ_2n = J_2n − J̄_2n, n = 1, 2, 3, ..., caused by the effect of the deep zonal flows for an arbitrarily rapidly rotating planet. Applying the general theory to Jupiter, we find that the deep zonal flow could contribute up to 0.3% of the J_2 coefficient and 0.7% of J_4. It is also found that the shape-driven harmonics become dominant at the 10th zonal gravity coefficient, i.e., ΔJ_2n ≥ J̄_2n for n ≥ 5.
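For reference, the zonal coefficients J_2n discussed above enter the standard spherical-harmonic expansion of the external gravitational potential of an axisymmetric planet (the usual textbook form, not an equation reproduced from this paper):

```latex
V(r,\theta) = -\frac{GM}{r}\left[\,1 - \sum_{n=1}^{\infty} J_{2n}\left(\frac{R_e}{r}\right)^{2n} P_{2n}(\cos\theta)\right]
```

where R_e is the equatorial radius, θ the colatitude, and P_2n the Legendre polynomials; odd harmonics vanish for north-south symmetric density distributions.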
Transport in zonal flows in analogous geophysical and plasma systems
NASA Astrophysics Data System (ADS)
del-Castillo-Negrete, Diego
1999-11-01
Zonal flows occur naturally in the oceans and the atmosphere of planets. Important examples include the zonal flows in Jupiter, the stratospheric polar jet in Antarctica, and oceanic jets like the Gulf Stream. These zonal flows create transport barriers that have a crucial influence on mixing and confinement (e.g. the ozone depletion in Antarctica). Zonal flows also give rise to long-lasting vortices (e.g. the Jupiter red spot) by shear instability. Because of this, the formation and stability of zonal flows and their role on transport have been problems of great interest in geophysical fluid dynamics. On the other hand, zonal flows have also been observed in fusion plasmas and their impact on the reduction of transport has been widely recognized. Based on the well-known analogy between Rossby waves in quasigeostrophic flows and drift waves in magnetically confined plasmas, I will discuss the relevance to fusion plasmas of models and experiments recently developed in geophysical fluid dynamics. Also, the potential application of plasma physics ideas to geophysical flows will be discussed. The role of shear in the suppression of transport and the effect of zonal flows on the statistics of transport will be studied using simplified models. It will be shown how zonal flows induce large particle displacements that can be characterized as Lévy flights, and that the trapping effect of vortices combined with the zonal flows gives rise to anomalous diffusion and Lévy (non-Gaussian) statistics. The models will be compared with laboratory experiments and with atmospheric and oceanographic qualitative observations.
A Bayesian model averaging method for improving SMT phrase table
NASA Astrophysics Data System (ADS)
Duan, Nan
2013-03-01
Previous methods for improving translation quality by employing multiple SMT models are usually carried out as a second-pass decision procedure on hypotheses from multiple systems, using extra features instead of exploiting the features of existing models in more depth. In this paper, we propose translation model generalization (TMG), an approach that updates probability feature values for the translation model being used, based on the model itself and a set of auxiliary models, aiming to alleviate the over-estimation problem and enhance translation quality in the first-pass decoding phase. We validate our approach for translation models based on auxiliary models built in two different ways. We also introduce novel probability variance features into the log-linear models for further improvements. We conclude that our approach can be developed independently and integrated directly into the current SMT pipeline. We demonstrate BLEU improvements on the NIST Chinese-to-English MT tasks for single-system decoding.
NASA Astrophysics Data System (ADS)
Makowski, J.; Chambers, D. P.; Bonin, J. A.
2012-12-01
Previous studies have suggested that ocean bottom pressure (OBP) can be used to measure the transport variability of the Antarctic Circumpolar Current (ACC). Using OBP data from the JPL ECCO model and the Gravity Recovery and Climate Experiment (GRACE), we examine the zonal transport variability of the ACC integrated between the major fronts from 2003 to 2010. The JPL ECCO data are used to determine average front positions for the time period studied, as well as where transport is mainly zonal. Statistical analysis will be conducted to determine the uncertainty of the GRACE observations using a simulated data set. We will also begin looking at low-frequency changes and how coherent transport variability is from region to region of the ACC. Correlations between bottom pressure south of the ACC and the average basin transports will also be calculated to determine the feasibility of using bottom pressure south of the ACC as a means of describing ACC dynamics and transport.
NASA Astrophysics Data System (ADS)
Arruda, Daniela C. S.; Sobral, J. H. A.; Abdu, M. A.; Castilho, Vivian M.; Takahashi, H.; Medeiros, A. F.; Buriti, R. A.
2006-01-01
This work presents equatorial ionospheric plasma bubble zonal drift velocity observations and their comparison with model calculations. The bubble zonal velocities were measured using airglow OI 630 nm all-sky digital images, and the model calculations were performed taking into account the flux-tube-integrated Pedersen conductivity and conductivity-weighted neutral zonal winds. The digital images were obtained from an all-sky imaging system operated over the low-latitude station Cachoeira Paulista (Geogr. 22.5°S, 45°W, dip angle 31.5°S) during the period from October 1998 to August 2000. Out of the 138 nights of imager observation, 29 nights with the presence of plasma bubbles are used in this study. These 29 nights correspond to geomagnetically rather quiet days (ΣKp < 24+) and were grouped according to season. During the early night hours, the calculated zonal drift velocities were found to be larger than the experimental values. The best match between the calculated and observed zonal velocities was seen for a few hours around midnight. The model calculation showed two humps, around 20 LT and 24 LT, that were not present in the data. Average decelerations obtained from linear regression between 20 LT and 24 LT were found to be: (a) Spring 1998, -8.61 m s⁻¹ h⁻¹; (b) Summer 1999, -0.59 m s⁻¹ h⁻¹; (c) Spring 1999, -11.72 m s⁻¹ h⁻¹; and (d) Summer 2000, -8.59 m s⁻¹ h⁻¹. Note that Summer and Winter here refer to the southern hemisphere seasons, not those of the northern hemisphere.
Accounting for uncertainty in health economic decision models by using model averaging.
Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D
2009-04-01
Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.
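The criterion-based weighting described above reduces to exponentiating half the difference in information criteria between each model and the best one. This is a generic sketch; the criterion values are hypothetical:

```python
import math

# Information-criterion model-averaging weights: w_i ∝ exp(-0.5 * ΔIC_i).
# With BIC this approximates posterior model probabilities; with AIC it
# weights models by estimated predictive ability.
def ic_weights(ic_values):
    best = min(ic_values)
    raw = [math.exp(-0.5 * (ic - best)) for ic in ic_values]
    total = sum(raw)
    return [r / total for r in raw]

# Example: three candidate model structures with criterion values
# 100.0, 102.0 and 110.0 (smaller is better)
w = ic_weights([100.0, 102.0, 110.0])
```

Note how quickly the weights decay: a model 10 criterion units behind the best receives a near-zero weight, which is why the choice between AIC- and BIC-type criteria can materially change the averaged prediction.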
AIR QUALITY SIMULATION MODEL PERFORMANCE FOR ONE-HOUR AVERAGES
If a one-hour standard for sulfur dioxide were promulgated, air quality dispersion modeling in the vicinity of major point sources would be an important air quality management tool. Would currently available dispersion models be suitable for use in demonstrating attainment of suc...
Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.
Onorante, Luca; Raftery, Adrian E
2016-01-01
Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
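One step of a DMA-style weight recursion with a dynamic Occam's window might be sketched as follows; the forgetting factor, cutoff and likelihood values are illustrative assumptions, not the authors' settings:

```python
# One DMA update: forgetting, Bayesian update, then an Occam's window
# that drops models whose weight falls far below the current best.
def dma_step(weights, likelihoods, alpha=0.95, window=0.01):
    # Forgetting: raise previous weights to alpha < 1, flattening them
    pred = [w ** alpha for w in weights]
    s = sum(pred)
    pred = [p / s for p in pred]
    # Update with each model's predictive likelihood for the new datum
    post = [p * l for p, l in zip(pred, likelihoods)]
    s = sum(post)
    post = [p / s for p in post]
    # Dynamic Occam's window: discard models below a fraction of the best
    best = max(post)
    kept = [p if p >= window * best else 0.0 for p in post]
    s = sum(kept)
    return [p / s for p in kept]

# Example: three models; the third fits the new observation very poorly
w = dma_step([0.6, 0.3, 0.1], likelihoods=[0.2, 0.5, 0.001])
```

Dropped models free up computation for candidates re-entering the window at later time points, which is what lets the method search a large model space without ever holding it all at once.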
Corporate Average Fuel Economy Compliance and Effects Modeling System Documentation
DOT National Transportation Integrated Search
2009-04-01
The Volpe National Transportation Systems Center (Volpe Center) of the United States Department of Transportation's Research and Innovative Technology Administration has developed a modeling system to assist the National Highway Traffic Safety Admini...
Exhumed Blueschists and Eclogites: Hotter Than the Average Model
NASA Astrophysics Data System (ADS)
Penniston-Dorland, S.; Kohn, M. J.; Manning, C. E.
2014-12-01
The maximum-pressure P-T conditions (Pmax-T) of exhumed subduction-related metamorphic rocks are compared to estimates of P-T conditions predicted by computational thermal models of subduction systems. While the range of proposed models encompasses most Pmax-T, most models are 200-400°C too cold. In general, discrepancies are greatest for Pmax < 2 GPa where only a few of the highest-T modeled paths overlap petrologic observations. Comparison among published models suggests several possible explanations for these differences. Variables that affect temperatures within the subduction zone include the timing of subduction initiation relative to metamorphism, age of the subducting oceanic crust, rate of convergence, and the dip angle of the subducting plate. An additional factor is whether subducted material is constrained to move coherently with the incoming plate or whether it convects within the plate interface. Higher temperatures are predicted for relatively young subducting crust, slow convergence rates, and shallow subduction dips. Simulations in which material from the subducted slab decouples from the slab and rises buoyantly into an overlying weak layer (e.g. hydrated mantle) also result in higher temperatures for exhumed oceanic crust. Our compilation and comparison suggest either that most models are missing one or more important controls on heat sources and heat transfer or that exhumed blueschists and eclogites are more buoyant than typical subducted rocks.
A simple depth-averaged model for dry granular flow
NASA Astrophysics Data System (ADS)
Hung, Chi-Yao; Stark, Colin P.; Capart, Herve
Granular flow over an erodible bed is an important phenomenon in both industrial and geophysical settings. Here we develop a depth-averaged theory for dry erosive flows using balance equations for mass, momentum and (crucially) kinetic energy. We assume a linearized GDR-Midi rheology for granular deformation and Coulomb friction along the sidewalls. The theory predicts the kinematic behavior of channelized flows under a variety of conditions, which we test in two sets of experiments: (1) a linear chute, where abrupt changes in tilt drive unsteady uniform flows; (2) a rotating drum, to explore steady non-uniform flow. The theoretical predictions match the experimental results well in all cases, without the need to tune parameters or invoke an ad hoc equation for entrainment at the base of the flow. Here we focus on the drum problem. A dimensionless rotation rate (related to Froude number) characterizes flow geometry and accounts not just for spin rate, drum radius and gravity, but also for grain size, wall friction and channel width. By incorporating Coriolis force the theory can treat behavior under centrifuge-induced enhanced gravity. We identify asymptotic flow regimes at low and high dimensionless rotation rates that exhibit distinct power-law scaling behaviors.
Bayesian averaging over Decision Tree models for trauma severity scoring.
Schetinin, V; Jakaite, L; Krzanowski, W
2018-01-01
Health care practitioners analyse possible risks of misleading decisions and need to estimate and quantify uncertainty in their predictions. We have examined the "gold" standard for screening a patient's condition to predict survival probability, based on logistic regression modelling, which is used in trauma care for clinical purposes and quality audit. This methodology is based on theoretical assumptions about the data and uncertainties. Models induced within such an approach have exposed a number of problems, including unexplained fluctuation of predicted survival and low accuracy in estimating the uncertainty intervals within which predictions are made. The Bayesian method, which in theory is capable of providing accurate predictions and uncertainty estimates, has been adopted in our study using Decision Tree models. Our approach has been tested on a large set of patients registered in the US National Trauma Data Bank and has outperformed the standard method in terms of prediction accuracy, thereby providing practitioners with accurate estimates of the predictive posterior densities of interest that are required for making risk-aware decisions. Copyright © 2017 Elsevier B.V. All rights reserved.
Averaging principle for second-order approximation of heterogeneous models with homogeneous models
Fibich, Gadi; Gavious, Arieh; Solan, Eilon
2012-01-01
Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ε²) equivalent to the outcome of the corresponding homogeneous model, where ε is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing). PMID:23150569
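The O(ε²) equivalence can be checked numerically with a toy outcome function of my own choosing (a smooth, symmetric stand-in, not one of the paper's queuing or auction models): halving the heterogeneity level should roughly quarter the discrepancy.

```python
# Toy check of the averaging principle: a two-type population with
# parameters 1 ± eps (so the average parameter is exactly 1) and a
# smooth per-type outcome g. The heterogeneous outcome averages g over
# types; the homogeneous outcome evaluates g at the average parameter.
def heterogeneous_outcome(eps):
    g = lambda x: 1.0 / x
    return 0.5 * (g(1.0 + eps) + g(1.0 - eps))

homogeneous = 1.0  # g evaluated at the average parameter, g(1) = 1
err1 = abs(heterogeneous_outcome(0.10) - homogeneous)
err2 = abs(heterogeneous_outcome(0.05) - homogeneous)
ratio = err1 / err2  # close to 4 when the error scales like eps**2
```

For this g the discrepancy is exactly ε²/(1 − ε²), so the ratio of errors at ε = 0.10 versus ε = 0.05 is just above 4, consistent with second-order equivalence.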
Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation
NASA Astrophysics Data System (ADS)
Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in the model input as well as from non-uniqueness in selecting different AI methods. Using a single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of relying on a single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of the AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), an artificial neural network (ANN) and a neurofuzzy (NF) method to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model was nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance, although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when using a single AI model.
On radiating baroclinic instability of zonally varying flow
NASA Technical Reports Server (NTRS)
Finley, Catherine A.; Nathan, Terrence R.
1993-01-01
A quasi-geostrophic, two-layer, beta-plane model is used to study the baroclinic instability characteristics of a zonally inhomogeneous flow. It is assumed that the disturbance varies slowly in the cross-stream direction, and the stability problem is formulated as a one-dimensional initial-value problem. Emphasis is placed on determining how the vertically averaged wind, the local maximum in vertical wind shear, and the length of the locally supercritical region combine to yield local instabilities. Analysis of the local disturbance energetics reveals that, for slowly varying basic states, the baroclinic energy conversion predominates within the locally unstable region. Using calculations of the basic-state tendencies, it is shown that the net effect of the local instabilities is to redistribute energy from the baroclinic to the barotropic component of the basic-state flow.
Future Effects of Southern Hemisphere Stratospheric Zonal Asymmetries on Climate
NASA Astrophysics Data System (ADS)
Stone, K.; Solomon, S.; Kinnison, D. E.; Fyfe, J. C.
2017-12-01
Stratospheric zonal asymmetries in the Southern Hemisphere have been shown to have significant influences on both stratospheric and tropospheric dynamics and climate. Accurate representation of stratospheric ozone in particular is important for realistic simulation of the polar vortex strength and temperature trends. It is therefore also important for the effect of stratospheric ozone changes on the troposphere, both through modulation of the Southern Annular Mode (SAM) and through more localized climate impacts. Here, we characterize the impact of future changes in Southern Hemisphere zonal asymmetry on tropospheric climate, including changes to future tropospheric temperature and precipitation. The separate impacts of increasing GHGs and ozone recovery on the zonally asymmetric influence on the surface are also investigated. For this purpose, we use a variety of models, including Chemistry Climate Model Initiative simulations from the Community Earth System Model, version 1, with the Whole Atmosphere Community Climate Model (CESM1(WACCM)) and the Australian Community Climate and Earth System Simulator-Chemistry Climate Model (ACCESS-CCM). These models have interactive chemistry and can therefore more accurately represent the zonally asymmetric nature of the stratosphere. The CESM1(WACCM) and ACCESS-CCM models are also compared to simulations from the Canadian CanESM2 model and the CESM Large Ensemble Project (LENS), which have prescribed ozone, to further investigate the importance of simulating stratospheric zonal asymmetry.
Drouot, T.; Gravier, E.; Reveille, T.
This paper presents a study of zonal flows generated by trapped-electron-mode and trapped-ion-mode microturbulence as a function of two plasma parameters: banana width and electron temperature. For this purpose, a gyrokinetic code considering only trapped particles is used. First, an analytical equation giving the predicted level of zonal flows is derived from the quasi-neutrality equation of our model, as a function of the density fluctuation levels and the banana widths. Then, the influence of the banana width on the number of zonal flows occurring in the system is studied using the gyrokinetic code. Finally, the impact of the temperature ratio T_e/T_i on the reduction of zonal flows is shown, and a close link is highlighted between this reduction and the different gyro-and-bounce-averaged ion and electron density fluctuation levels. The reduction is found to be due to the amplitudes of the gyro-and-bounce-averaged density perturbations n_e and n_i gradually becoming closer, which is in agreement with the analytical results given by the quasi-neutrality equation.
Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition
Fraley, Chris; Percival, Daniel
2014-01-01
Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001
Subsurface Zonal and Meridional Flows from SDO/HMI
NASA Astrophysics Data System (ADS)
Komm, Rudolf; Howe, Rachel; Hill, Frank
2016-10-01
We study the solar-cycle variation of the zonal and meridional flows in the near-surface layers of the solar convection zone, from the surface to a depth of about 16 Mm. The flows are determined from SDO/HMI Dopplergrams using the HMI ring-diagram pipeline. The zonal and meridional flows vary with the solar cycle. Bands of faster-than-average zonal flows, together with more-poleward-than-average meridional flows, move from mid-latitudes toward the equator during the solar cycle and are mainly located on the equatorward side of the mean latitude of solar magnetic activity. Similarly, bands of slower-than-average zonal flows, together with less-poleward-than-average meridional flows, are located on the poleward side of the mean latitude of activity. Here, we will focus on the variation of these flows at high latitudes (poleward of 50 degrees) that are now accessible using HMI data. We will present the latest results.
A time-averaged regional model of the Hermean magnetic field
NASA Astrophysics Data System (ADS)
Thébault, E.; Langlais, B.; Oliveira, J. S.; Amit, H.; Leclercq, L.
2018-03-01
This paper presents the first regional model of the magnetic field of Mercury developed with mathematical continuous functions. The model has a horizontal spatial resolution of about 830 km at the surface of the planet, and it is derived without any a priori information about the geometry of the internal and external fields and without regularization. It relies on an extensive dataset of MESSENGER measurements selected over its entire orbital lifetime between 2011 and 2015. A first-order separation between the internal and external fields over the Northern hemisphere is achieved under the assumption that the magnetic field measurements are acquired in a source-free region within the magnetospheric cavity. When downward continued to the core-mantle boundary, the model confirms some of the general structures observed in previous studies, such as the dominance of the zonal field, the location of the north magnetic pole, and the global absence of significant small-scale structures. The transformation of the regional model into a global spherical harmonic one provides an estimate for the axial quadrupole to axial dipole ratio of about g_2^0/g_1^0 = 0.27. This is much lower than previous estimates of about 0.40. We note that it is possible to obtain a similar ratio provided that more weight is put on the location of the magnetic equator and less elsewhere.
A model-averaging method for assessing groundwater conceptual model uncertainty.
Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M
2010-01-01
This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that the contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has a more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
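The split between parametric (within-model) and model (between-model) uncertainty described above follows the standard model-averaging variance decomposition. A minimal sketch, with hypothetical weights, per-model mean predictions, and within-model variances standing in for the Monte Carlo results:

```python
# Hypothetical model-averaging weights and per-model Monte Carlo summaries
# of a predicted hydraulic head (values are illustrative, not DVRFS results)
weights = [0.5, 0.3, 0.2]
model_means = [102.0, 105.0, 110.0]   # mean prediction of each model (m)
model_vars = [1.0, 1.5, 2.0]          # parametric (within-model) variance

avg_mean = sum(w * m for w, m in zip(weights, model_means))

# Within-model term: weighted average of the parametric variances
within = sum(w * v for w, v in zip(weights, model_vars))

# Between-model term: weighted spread of the model means around the average
between = sum(w * (m - avg_mean) ** 2 for w, m in zip(weights, model_means))

total = within + between
print(avg_mean, within, between, total)
```

With model means this far apart, the between-model term dominates, mirroring the study's finding that model uncertainty contributes more to predictive uncertainty than parametric uncertainty.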
NASA Astrophysics Data System (ADS)
Nakada, Masao; Okuno, Jun'ichi
2017-06-01
Secular variations in the zonal harmonics of Earth's geopotential based on satellite laser ranging observations, \dot{J}_n, contain important information about the Earth's deformation due to glacial isostatic adjustment (GIA) and recent melting of glaciers and the Greenland and Antarctic ice sheets. Here, we examine the GIA-induced rates \dot{J}_n^{GIA} (2 ≤ n ≤ 6), derived from the available geopotential zonal secular rates and recent melting taken from the IPCC 2013 Report (AR5), to explore the possibility of additional information on the depth-dependent lower-mantle viscosity and the GIA ice model inferred from analyses of \dot{J}_2^{GIA} and relative sea-level changes. The sensitivities of \dot{J}_n^{GIA} to lower-mantle viscosity and the GIA ice model with a globally averaged eustatic sea level (ESL) of ∼130 m indicate that the secular rates for n = 3 and 4 are mainly caused by the viscous response of the lower mantle to the melting of the Antarctic ice sheet, regardless of the GIA ice models adopted in this study. Also, the analyses of \dot{J}_n^{GIA} based on the available geopotential zonal secular rates indicate that a permissible lower-mantle viscosity structure satisfying the even zonal secular rates of n = 2, 4 and 6 is obtained for the GIA ice model with an Antarctic ESL component of ∼20 or ∼30 m, but there is no viscosity solution satisfying the \dot{J}_3^{GIA} and \dot{J}_5^{GIA} values. Moreover, the inference model for the lower-mantle viscosity and GIA ice model from each odd zonal secular rate is distinctly different from that satisfying the GIA-induced even zonal secular rates. The discrepancy between the inference models for the even and odd zonal secular rates may partly be attributed to uncertainties in the geopotential zonal secular rates for n > 2, particularly those for the odd zonal secular rates, due to weakness in the orbital geometry. If this problem is overcome at least for the secular rates of n < 5, then the analyses of
NASA Astrophysics Data System (ADS)
Li, T.; Ban, C.; Fang, X.; Li, J.; Wu, Z.; Xiong, J.; Feng, W.; Plane, J. M. C.
2017-12-01
The University of Science and Technology of China narrowband sodium temperature/wind lidar, located in Hefei, China (32°N, 117°E), was installed in November 2011 and has made routine nighttime measurements since January 2012. We obtained 154 nights (∼1400 hours) of vertical profiles of temperature, sodium density, and zonal wind, and 83 nights (∼800 hours) of the vertical flux of gravity wave (GW) zonal momentum in the mesopause region (80-105 km) during the period 2012 to 2016. In temperature, it is likely that the diurnal tide dominates below 100 km in spring, while the semidiurnal tide dominates above 100 km throughout the year. A clear semiannual variation in temperature is revealed near 90 km, likely related to the tropical mesospheric semiannual oscillation (MSAO). The variability of sodium density is positively correlated with temperature, suggesting that in addition to dynamics, chemistry may also play an important role in the formation of sodium atoms. The observed sodium peak density is ∼1000 cm^-3 higher than that simulated by the model. In zonal wind, the diurnal tide dominates in both spring and fall, while the semidiurnal tide dominates in winter. The observed semiannual variation in zonal wind near 90 km is out of phase with that in temperature, consistent with the tropical MSAO. The GW zonal momentum flux is mostly westward in fall and winter, anti-correlated with the eastward zonal wind. The annual mean flux averaged over 87-97 km is -0.3 m^2/s^2 (westward), anti-correlated with an eastward zonal wind of ∼10 m/s. Comparisons of the lidar results with those observed by satellite and nearby radar, and simulated by the model, show generally good agreement.
Lu, Dan; Ye, Ming; Curtis, Gary P.
2015-08-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive log score results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Finally
Curtis, Gary P.; Lu, Dan; Ye, Ming
2015-01-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive log score results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the
Predictability of Zonal Means During Boreal Summer
NASA Technical Reports Server (NTRS)
Schubert, Siegfried; Suarez, Max J.; Pegion, Philip J.; Kistler, Michael A.; Kumar, Arun; Einaudi, Franco (Technical Monitor)
2001-01-01
This study examines the predictability of seasonal means during boreal summer. The results are based on ensembles of June-July-August (JJA) simulations (started in mid May) carried out with the NASA Seasonal-to-Interannual Prediction Project (NSIPP-1) atmospheric general circulation model (AGCM) forced with observed sea surface temperatures (SSTs) and sea ice for the years 1980-1999. We find that the predictability of the JJA extra-tropical height field is primarily in the zonal mean component of the response to the SST anomalies. This contrasts with the cold season (January-February-March), when the predictability of seasonal means in the boreal extratropics is primarily in the wave component of the El Niño/Southern Oscillation (ENSO) response. Two patterns dominate the interannual variability of the ensemble mean JJA zonal mean height field. One has maximum variance in the tropical/subtropical upper troposphere, while the other has substantial variance in middle latitudes of both hemispheres. Both are symmetric with respect to the equator. A regression analysis suggests that the tropical/subtropical pattern is associated with SST anomalies in the far eastern tropical Pacific and the Indian Ocean, while the middle latitude pattern is forced by SST anomalies in the tropical Pacific just east of the dateline. The two leading zonal height patterns are reproduced in model runs forced with the two leading JJA SST patterns of variability. A comparison with observations shows a signature of the middle latitude pattern that is consistent with the occurrence of dry and wet summers over the United States. We hypothesize that both patterns, while imposing only weak constraints on extratropical warm season continental-scale climates, may play a role in the predilection for drought or pluvial conditions.
The Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD) Tool
Provides quantal response models, which are also used in the U.S. EPA benchmark dose software suite, and generates a model-averaged dose-response model from which benchmark dose and benchmark dose lower bound estimates are computed.
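A model-averaged quantal benchmark dose can be illustrated as follows. The two dose-response curves, their parameters, and the averaging weights below are hypothetical stand-ins, not values or code from the EPA tool:

```python
import math

# Two hypothetical fitted quantal dose-response models (parameters assumed)
def logistic(d, a=-3.0, b=0.5):
    return 1.0 / (1.0 + math.exp(-(a + b * d)))

def weibull(d, g=0.05, b=0.02, k=1.2):
    return g + (1 - g) * (1 - math.exp(-b * d ** k))

weights = [0.6, 0.4]   # e.g. information-criterion-based averaging weights

def averaged_risk(d):
    """Model-averaged probability of response at dose d."""
    return weights[0] * logistic(d) + weights[1] * weibull(d)

def extra_risk(d):
    """Extra risk relative to background response."""
    p0 = averaged_risk(0.0)
    return (averaged_risk(d) - p0) / (1.0 - p0)

def bmd(bmr=0.10, lo=0.0, hi=100.0):
    """Benchmark dose: solve extra_risk(d) = bmr by bisection."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if extra_risk(mid) < bmr:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

d_bmd = bmd()
print(d_bmd)
```

The averaged curve, not either single model, defines the BMD; a lower bound (BMDL) would additionally require a profile-likelihood or bootstrap step not shown here.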
A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, Thomas L.
2003-01-01
A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics, designed to capture this property, predicts power-law scaling behavior for the second-moment statistics of area-averaged rain rate on the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.
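A power-law dependence of the second-moment statistics on the averaging scale, var(L) ∝ L^(-γ), can be checked with a log-log regression. The variance values below are synthetic illustration data, not TOGA COARE statistics:

```python
import math

# Hypothetical variance of area-averaged rain rate at several averaging
# length scales L (km), roughly following var ~ c * L**(-gamma)
L = [2.0, 4.0, 8.0, 16.0, 32.0]
var = [5.1, 3.4, 2.3, 1.55, 1.05]

# Least-squares slope of log(var) against log(L)
logL = [math.log(l) for l in L]
logv = [math.log(v) for v in var]
n = len(L)
xbar, ybar = sum(logL) / n, sum(logv) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(logL, logv)) / \
        sum((x - xbar) ** 2 for x in logL)
gamma = -slope   # scaling exponent
print(gamma)
```

A straight line in log-log coordinates (constant γ across scales) is the signature the spectral model predicts as L → 0.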
NASA Astrophysics Data System (ADS)
Zeng, X.
2015-12-01
A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through a model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for the alternative conceptual models through a numerical experiment with a synthetic groundwater model. BMA predictions depend on the model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating the conceptual models' marginal likelihoods, and the BMA-TIE prediction has better predictive performance than the other BMA predictions. TIE is also highly stable: repeated TIE estimates of a conceptual model's marginal likelihood show significantly less variability than those obtained with the other estimators. In addition, the SG surrogates efficiently facilitate BMA predictions, especially for BMA-TIE. The number of model executions needed for building the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
Application of Bayesian model averaging to measurements of the primordial power spectrum
NASA Astrophysics Data System (ADS)
Parkinson, David; Liddle, Andrew R.
2010-11-01
Cosmological parameter uncertainties are often stated assuming a particular model, neglecting the model uncertainty, even when Bayesian model selection is unable to identify a conclusive best model. Bayesian model averaging is a method for assessing parameter uncertainties in situations where there is also uncertainty in the underlying model. We apply model averaging to the estimation of the parameters associated with the primordial power spectra of curvature and tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave data from WMAP, ACBAR, BOOMERanG, and CBI, plus large-scale structure data from the SDSS DR7. We find that the model-averaged 95% credible interval for the spectral index using all of the data is 0.940
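Evidence-weighted averaging of per-model posteriors can be sketched as below. The log-evidences and Gaussian posterior draws are hypothetical placeholders, not CosmoNest/MultiNest outputs:

```python
import math, random

random.seed(0)

# Hypothetical posterior draws of the spectral index n_s under two models,
# with assumed log-evidences standing in for nested-sampling results
logZ = {"power_law": -10.0, "running": -11.5}
draws = {
    "power_law": [random.gauss(0.963, 0.012) for _ in range(5000)],
    "running":   [random.gauss(0.950, 0.020) for _ in range(5000)],
}

# Posterior model probabilities from the evidences (equal model priors)
zmax = max(logZ.values())
w = {m: math.exp(z - zmax) for m, z in logZ.items()}
tot = sum(w.values())
w = {m: wi / tot for m, wi in w.items()}

# Model-averaged posterior: resample each model's draws in proportion to its weight
pooled = []
for m, d in draws.items():
    pooled += random.choices(d, k=int(round(w[m] * 10000)))
pooled.sort()
lo = pooled[int(0.025 * len(pooled))]
hi = pooled[int(0.975 * len(pooled))]
print(w, (lo, hi))
```

The resulting 95% credible interval reflects both within-model spread and the disagreement between models, which is exactly what quoting a single best-fit model's interval misses.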
Numerical simulation on zonal disintegration in deep surrounding rock mass.
Chen, Xuguang; Wang, Yuan; Mei, Yu; Zhang, Xin
2014-01-01
Zonal disintegration has been discovered in many underground tunnels as embedment depth increases. The formation mechanism of this phenomenon is difficult to explain within the framework of traditional rock mechanics, and the fractured shape and forming conditions are unclear. A numerical simulation was carried out to investigate the generating conditions and forming process of zonal disintegration. By comparing the results with a geomechanical model test, the zonal disintegration phenomenon was confirmed and its mechanism revealed. It is found to be the result of circular fractures that develop within the surrounding rock mass under high geostress. The fractured shape of the zonal disintegration was determined, and the radii of the fractured zones were found to follow a geometric progression. The numerical results were in accordance with the model test findings. The mechanism of the zonal disintegration was revealed by theoretical analysis based on fracture mechanics. The fractured zones are circular and concentric to the cavern. Each fracture zone ruptured at the elastic-plastic boundary of the surrounding rocks and then coalesced into a circular form. The geometric progression ratio was found to be related to the mechanical parameters and the ground stress of the surrounding rocks.
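The geometric-progression check on the fracture-zone radii is straightforward to express. The radii below are hypothetical illustration values, not measurements from the paper:

```python
# Hypothetical radii of successive fractured zones (metres); the claim is
# that r_{i+1} / r_i is approximately a constant ratio q
measured = [6.5, 8.4, 11.0, 14.2]

# Successive ratios, and the common-ratio estimate (geometric mean of ratios)
ratios = [b / a for a, b in zip(measured, measured[1:])]
q_est = (measured[-1] / measured[0]) ** (1 / (len(measured) - 1))

# Radii predicted by an exact geometric progression with that ratio
predicted = [measured[0] * q_est ** i for i in range(len(measured))]
print(ratios, q_est, predicted)
```

Near-constant successive ratios are the signature of the reported geometric progression; the fitted q would then be compared against the value predicted from the rock's mechanical parameters and the ground stress.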
Zonal Flows and Turbulence in Fluids and Plasmas
Parker, Jeffrey
2014-09-01
In geophysical and plasma contexts, zonal flows are well known to arise out of turbulence. We elucidate the transition from statistically homogeneous turbulence without zonal flows to statistically inhomogeneous turbulence with steady zonal flows. Starting from the Hasegawa--Mima equation, we employ both the quasilinear approximation and a statistical average, which retains a great deal of the qualitative behavior of the full system. Within the resulting framework known as CE2, we extend recent understanding of the symmetry-breaking 'zonostrophic instability'. Zonostrophic instability can be understood in a very general way as the instability of some turbulent background spectrum to a zonally symmetric coherent mode. As a special case, the background spectrum can consist of only a single mode. We find that in this case the dispersion relation of zonostrophic instability from the CE2 formalism reduces exactly to that of the 4-mode truncation of generalized modulational instability. We then show that zonal flows constitute pattern formation amid a turbulent bath. Zonostrophic instability is an example of a Type I_s instability of pattern-forming systems. The broken symmetry is statistical homogeneity. Near the bifurcation point, the slow dynamics of CE2 are governed by a well-known amplitude equation, the real Ginzburg-Landau equation. The important features of this amplitude equation, and therefore of the CE2 system, are multiple. First, the zonal flow wavelength is not unique. In an idealized, infinite system, there is a continuous band of zonal flow wavelengths that allow a nonlinear equilibrium. Second, of these wavelengths, only those within a smaller subband are stable. Unstable wavelengths must evolve to reach a stable wavelength; this process manifests as merging jets. These behaviors are shown numerically to hold in the CE2 system, and we calculate a stability diagram. The stability diagram is in agreement with direct numerical simulations of the
Waif goodbye! Average-size female models promote positive body image and appeal to consumers.
Diedrichs, Phillippa C; Lee, Christina
2011-10-01
Despite consensus that exposure to media images of thin fashion models is associated with poor body image and disordered eating behaviours, few attempts have been made to enact change in the media. This study sought to investigate an effective alternative to current media imagery by exploring the advertising effectiveness of average-size female fashion models and their impact on the body image of both women and men. A sample of 171 women and 120 men was assigned to one of three advertisement conditions: no models, thin models, and average-size models. Women and men rated average-size models as equally effective in advertisements as thin and no models. For women with average and high levels of internalisation of cultural beauty ideals, exposure to average-size female models was associated with a significantly more positive body image state in comparison to exposure to thin models and no models. For men reporting high levels of internalisation, exposure to average-size models was also associated with a more positive body image state in comparison to viewing thin models. These findings suggest that average-size female models can promote positive body image and appeal to consumers.
An Approach to Average Modeling and Simulation of Switch-Mode Systems
ERIC Educational Resources Information Center
Abramovitz, A.
2011-01-01
This paper suggests a pedagogical approach to teaching the subject of average modeling of PWM switch-mode power electronics systems through simulation by general-purpose electronic circuit simulators. The paper discusses the derivation of PSPICE/ORCAD-compatible average models of the switch-mode power stages, their software implementation, and…
The role of zonal flows in disc gravito-turbulence
NASA Astrophysics Data System (ADS)
Vanon, R.
2018-07-01
The work presented here focuses on the role of zonal flows in the self-sustenance of gravito-turbulence in accretion discs. The numerical analysis is conducted using a bespoke pseudo-spectral code in fully compressible, non-linear conditions. The disc in question, which is modelled using the shearing sheet approximation, is assumed to be self-gravitating, viscous, and thermally diffusive; a constant cooling time-scale is also considered. Zonal flows are found to emerge at the onset of gravito-turbulence and they remain closely linked to the turbulent state. A cycle of zonal flow formation and destruction is established, mediated by a slow mode instability (which allows zonal flows to grow) and a non-axisymmetric instability (which disrupts the zonal flow), which is found to repeat numerous times. It is in fact the disruptive action of the non-axisymmetric instability that forms new leading and trailing shearing waves, allowing energy to be extracted from the background flow and ensuring the self-sustenance of the gravito-turbulent regime.
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, C_E, of measurement errors to estimate the negative log-likelihood function common to all the model selection criteria. The problem can be resolved by using the covariance matrix, C_Ek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown C_Ek from the residuals during model calibration. The inferred C_Ek was then used in the evaluation of the model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using C_Ek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using C_Ek
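The weighting at issue can be sketched directly: information-criterion weights are proportional to exp(-ΔIC/2), so the large IC gaps typical of a measurement-error-only covariance concentrate essentially all weight on the best model, while the smaller gaps obtained with a total-error covariance spread it. The IC values below are hypothetical:

```python
import math

def ic_weights(ic_values):
    """Model-averaging weights w_k proportional to exp(-delta_IC_k / 2)."""
    best = min(ic_values)
    raw = [math.exp(-0.5 * (v - best)) for v in ic_values]
    s = sum(raw)
    return [r / s for r in raw]

# Large IC differences (hypothetical), as when only measurement-error
# covariance is used: the best model takes essentially all the weight
w_diag = ic_weights([1000.0, 1032.0, 1050.0])

# Smaller differences (hypothetical), as when correlated total errors
# are accounted for: the weight is spread across models
w_corr = ic_weights([1000.0, 1003.0, 1005.0])
print(w_diag, w_corr)
```

Because the weights depend exponentially on ΔIC, even modest changes in the assumed error covariance can move the averaging from "winner takes all" to a genuine multi-model average.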
NASA Astrophysics Data System (ADS)
Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry
2012-05-01
Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to the posterior probabilities of the models generating the forecasts and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described with a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
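The BMA predictive pdf described above is a weighted mixture of the members' conditional pdfs. A minimal Gaussian-mixture sketch, with hypothetical forecasts, spreads, and weights in place of trained values:

```python
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical ensemble of three bias-corrected discharge forecasts (m^3/s)
forecasts = [120.0, 135.0, 150.0]
sigmas = [10.0, 12.0, 15.0]     # per-member predictive spread
weights = [0.5, 0.3, 0.2]       # BMA weights from a training period

def bma_pdf(x):
    """BMA predictive density: weighted average of the member densities."""
    return sum(w * gauss_pdf(x, m, s)
               for w, m, s in zip(weights, forecasts, sigmas))

# The BMA predictive mean is the weight-weighted mean of the member forecasts
mean = sum(w * m for w, m in zip(weights, forecasts))
print(mean, bma_pdf(mean))
```

The mixture's variance exceeds any single member's because it adds the between-member spread to the within-member spread, which is how BMA repairs underdispersive ensembles.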
Diedrichs, Phillippa C; Lee, Christina
2010-06-01
Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects. These findings suggest that average-size male models can promote positive body image and appeal to consumers.
NASA Astrophysics Data System (ADS)
Castiglioni, S.; Toth, E.
2009-04-01
In the calibration procedure of continuously-simulating models, the hydrologist has to choose which part of the observed hydrograph is most important to fit, either implicitly, through the visual agreement in manual calibration, or explicitly, through the choice of the objective function(s). By changing the objective function it is in fact possible to emphasise different kinds of errors, giving them more weight in the calibration phase. The objective functions used for calibrating hydrological models are generally of the quadratic type (mean squared error, correlation coefficient, coefficient of determination, etc.) and are therefore oversensitive to high and extreme error values, which typically correspond to high and extreme streamflow values. This is appropriate when, as in the majority of streamflow forecasting applications, the focus is on the ability to reproduce potentially dangerous flood events; on the contrary, if the aim of the modelling is the reproduction of low and average flows, as is the case in water resource management problems, this may result in a deterioration of the forecasting performance. This contribution presents the results of a series of automatic calibration experiments of a continuously-simulating rainfall-runoff model applied over several real-world case studies, where the objective function is chosen so as to highlight the fit of average and low flows. In this work a simple conceptual model will be used, of the lumped type, with a relatively low number of parameters to be calibrated. The experiments will be carried out for a set of case-study watersheds in Central Italy, covering an extremely wide range of geo-morphologic conditions and for which at least five years of contemporary daily series of streamflow, precipitation and evapotranspiration estimates are available. Different objective functions will be tested in calibration and the results will be compared, over validation data, against those obtained with traditional squared
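The contrast between a quadratic objective and a low-flow-oriented one can be made concrete with a log-transformed variant, a common choice in the literature (the specific objective functions tested in the study are not named in the abstract, so this is illustrative):

```python
import numpy as np

def mse(obs, sim):
    """Classic quadratic objective: dominated by errors on high flows."""
    return np.mean((obs - sim) ** 2)

def mse_log(obs, sim, eps=1e-6):
    """Log-transformed variant: compresses peaks, so low and average
    flows carry relatively more weight during calibration."""
    return np.mean((np.log(obs + eps) - np.log(sim + eps)) ** 2)

obs = np.array([1.0, 2.0, 1.5, 50.0])   # three low flows and one flood peak
sim = np.array([2.0, 1.0, 2.5, 40.0])   # errors on both low and high flows

print(mse(obs, sim))      # the single 10-unit peak error dominates
print(mse_log(obs, sim))  # the peak error contributes far less
```

Here the peak error contributes about 97% of the quadratic objective but only a small fraction of the log-space one, which is exactly why quadratic calibration tends to sacrifice low-flow accuracy.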
NASA Astrophysics Data System (ADS)
Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc
2015-10-01
This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of nine averaging methods: the simple arithmetic mean (SAM), Akaike information criterion averaging (AICA), Bates-Granger averaging (BGA), Bayes information criterion averaging (BICA), Bayesian model averaging (BMA), Granger-Ramanathan averaging variants A, B and C (GRA, GRB and GRC) and averaging by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was computed between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging from these four methods was superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
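Granger-Ramanathan averaging, in the variant commonly labelled A, amounts to ordinary least-squares regression of the observations on the member simulations, with no intercept and no sum-to-one constraint on the weights. A minimal sketch with two synthetic members (the data are fabricated for illustration):

```python
import numpy as np

def gra_weights(sims, obs):
    """Granger-Ramanathan variant A (as commonly described):
    unconstrained least-squares weights, no intercept."""
    w, *_ = np.linalg.lstsq(sims, obs, rcond=None)
    return w

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 5.0, size=200)          # synthetic "observed" flows
# Two hypothetical member simulations: biased, noisy versions of obs
sims = np.column_stack([0.8 * obs + rng.normal(0, 1, 200),
                        1.3 * obs + rng.normal(0, 1, 200)])

w = gra_weights(sims, obs)
combined = sims @ w
print(w, np.mean((combined - obs) ** 2))
```

Because each member alone is a feasible weight vector, the in-sample error of the least-squares combination can never exceed that of the best member; the question the study addresses is whether that advantage survives in validation.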
NASA Astrophysics Data System (ADS)
Drew, G. H.; Smith, R.; Gerard, V.; Burge, C.; Lowe, M.; Kinnersley, R.; Sneath, R.; Longhurst, P. J.
Odour emissions are episodic, characterised by periods of high emission rates interspersed with periods of low emissions. It is frequently the short-term, high-concentration peaks that result in annoyance in the surrounding population. Dispersion modelling is accepted as a useful tool for odour impact assessment, and two approaches can be adopted. The first, modelling the hourly average concentration, can underestimate peak odour concentrations, resulting in annoyance and complaints. The second involves the use of short averaging times. This study assesses the appropriateness of using different averaging times to model the dispersion of odour from a landfill site. We also examine perception of odour in the community in conjunction with the modelled odour dispersal, by using community monitors to record incidents of odour. The results show that with the shorter averaging times, the modelled pattern of dispersal reflects the pattern of observed odour incidents recorded in the community monitoring database, with the modelled odour dispersing further in a north-easterly direction. Therefore, the current regulatory method of dispersion modelling, using hourly averaging times, is less successful at capturing peak concentrations, and does not capture the pattern of odour emission as indicated by the community monitoring database. The use of short averaging times is therefore of greater value in predicting the likely nuisance impact of an odour source and in framing appropriate regulatory controls.
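Short averaging times are often handled in odour modelling with a power-law peak-to-mean correction that rescales an hourly-mean concentration to a shorter time base. The abstract does not state which correction the study used, so the form and exponent below are purely illustrative:

```python
def peak_concentration(c_hourly, t_avg_s, u=0.2):
    """Rescale an hourly-mean concentration to a shorter averaging time
    using the common power-law peak-to-mean relation
    C_p = C_m * (t_m / t_p)**u.  The exponent u depends on atmospheric
    stability; u = 0.2 here is illustrative only."""
    return c_hourly * (3600.0 / t_avg_s) ** u

# Hourly mean of 1 odour unit/m^3 rescaled to a 5-second "sniff" scale
print(peak_concentration(1.0, 5.0))
```

With these illustrative numbers the few-second peak is several times the hourly mean, which is why hourly-average modelling can miss the peaks that actually drive complaints.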
Rethinking wave-kinetic theory applied to zonal flows
NASA Astrophysics Data System (ADS)
Parker, Jeffrey
2017-10-01
Over the past two decades, a number of studies have employed a wave-kinetic theory to describe fluctuations interacting with zonal flows. Recent work has uncovered a defect in this wave-kinetic formulation: the system is dominated by the growth of (arbitrarily) small-scale zonal structures. Theoretical calculations of linear growth rates suggest, and nonlinear simulations confirm, that this system leads to the concentration of zonal flow energy in the smallest resolved scales, irrespective of the numerical resolution. This behavior results from the assumption that zonal flows are extremely long wavelength, leading to the neglect of key terms responsible for conservation of enstrophy. A corrected theory, CE2-GO, is presented; it is free of these errors yet preserves the intuitive phase-space mathematical structure. CE2-GO properly conserves enstrophy as well as energy, and yields accurate growth rates of zonal flow. Numerical simulations are shown to be well-behaved and not dependent on box size. The steady-state limit simplifies into an exact wave-kinetic form which offers the promise of deeper insight into the behavior of wavepackets. The CE2-GO theory takes its place in a hierarchy of models as the geometrical-optics reduction of the more complete cumulant-expansion statistical theory CE2. The new theory represents the minimal statistical description, enabling an intuitive phase-space formulation and an accurate description of turbulence-zonal flow dynamics. This work was supported by an NSF Graduate Research Fellowship, a US DOE Fusion Energy Sciences Fellowship, and US DOE Contract Nos. DE-AC52-07NA27344 and DE-AC02-09CH11466.
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Hock-Eam, Lim
2012-09-01
This paper investigates the forecasting ability of Mallows Model Averaging (MMA) by conducting an empirical analysis of the GDP growth rates of five Asian countries: Malaysia, Thailand, the Philippines, Indonesia and China. Results reveal that MMA shows no noticeable difference in predictive ability compared to the general autoregressive fractionally integrated moving average (ARFIMA) model, and its predictive ability is sensitive to the effect of financial crises. MMA could be an alternative forecasting method for samples without recent outliers such as financial crises.
SPARSE—A subgrid particle averaged Reynolds stress equivalent model: testing with a priori closure
Davis, Sean L.; Jacobs, Gustaaf B.; Sen, Oishik; Udaykumar, H. S.
2017-01-01
A Lagrangian particle cloud model is proposed that accounts for the effects of Reynolds-averaged particle and turbulent stresses and the averaged carrier-phase velocity of the subparticle cloud scale on the averaged motion and velocity of the cloud. The SPARSE (subgrid particle averaged Reynolds stress equivalent) model is based on a combination of a truncated Taylor expansion of a drag correction function and Reynolds averaging. It reduces the required number of computational parcels to trace a cloud of particles in Eulerian–Lagrangian methods for the simulation of particle-laden flow. Closure is performed in an a priori manner using a reference simulation where all particles in the cloud are traced individually with a point-particle model. Comparison of a first-order model and SPARSE with the reference simulation in one dimension shows that both the stress and the averaging of the carrier-phase velocity on the cloud subscale affect the averaged motion of the particle. A three-dimensional isotropic turbulence computation shows that only one computational parcel is sufficient to accurately trace a cloud of tens of thousands of particles. PMID:28413341
Free-free opacity in dense plasmas with an average atom model
Shaffer, Nathaniel R.; Ferris, Natalie G.; Colgan, James Patrick; ...
2017-02-28
A model for the free-free opacity of dense plasmas is presented. The model uses a previously developed average atom model together with the Kubo-Greenwood model for optical conductivity, which in turn is used to calculate the opacity with the Kramers-Kronig dispersion relations. Furthermore, comparisons for dense deuterium show excellent agreement with DFT-MD simulations, and reasonable agreement with a simple Yukawa screening model corrected to satisfy the conductivity sum rule.
A Comparison of Averaged and Full Models to Study the Third-Body Perturbation
Solórzano, Carlos Renato Huaura; Prado, Antonio Fernando Bertachini de Almeida
2013-01-01
The effects of a third-body travelling in a circular orbit around a main body on a massless satellite that is orbiting the same main body are studied under two averaged models, single and double, where expansions of the disturbing function are made, and the full restricted circular three-body problem. The goal is to compare the behavior of these two averaged models against the full problem for long-term effects, in order to have some knowledge of their differences. The single averaged model eliminates the terms due to the short period of the spacecraft. The double average is taken over the mean motion of the satellite and the mean motion of the disturbing body, so removing both short period terms. As an example of the methods, an artificial satellite around the Earth perturbed by the Moon is used. A detailed study of the effects of different initial conditions in the orbit of the spacecraft is made. PMID:24319348
A model for closing the inviscid form of the average-passage equation system
NASA Technical Reports Server (NTRS)
Adamczyk, J. J.; Mulac, R. A.; Celestina, M. L.
1985-01-01
A mathematical model is proposed for closing, or mathematically completing, the system of equations which describes the time-averaged flow field through the blade passages of multistage turbomachinery. These equations, referred to as the average-passage equation system, govern a conceptual model which has proven useful in turbomachinery aerodynamic design and analysis. The closure model is developed so as to ensure consistency between these equations and the axisymmetric through-flow equations. The closure model was incorporated into a computer code for use in simulating the flow field about a high-speed counter-rotating propeller and a high-speed fan stage. Results from these simulations are presented.
Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model
NASA Astrophysics Data System (ADS)
Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato
2018-02-01
This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise, but it also decreases the number of images available for analysis. We formulated a process of generating image data using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation with a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
Impact of Stratospheric Ozone Zonal Asymmetries on the Tropospheric Circulation
NASA Technical Reports Server (NTRS)
Tweedy, Olga; Waugh, Darryn; Li, Feng; Oman, Luke
2015-01-01
The depletion and recovery of Antarctic ozone plays a major role in changes of Southern Hemisphere (SH) tropospheric climate. Recent studies indicate that the lack of polar ozone asymmetries in chemistry climate models (CCM) leads to a weaker and warmer Antarctic vortex, and smaller trends in the tropospheric mid-latitude jet and the surface pressure. However, the tropospheric response to ozone asymmetries is not well understood. In this study we report on a series of integrations of the Goddard Earth Observing System Chemistry Climate Model (GEOS CCM) to further examine the effect of zonal asymmetries on the state of the stratosphere and troposphere. Integrations with the full, interactive stratospheric chemistry are compared against identical simulations using the same CCM except that (1) the monthly mean zonal mean stratospheric ozone from first simulation is prescribed and (2) ozone is relaxed to the monthly mean zonal mean ozone on a three day time scale. To analyze the tropospheric response to ozone asymmetries, we examine trends and quantify the differences in temperatures, zonal wind and surface pressure among the integrations.
Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder
NASA Technical Reports Server (NTRS)
Baurle, R. A.
2016-01-01
Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations, including both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken not only to assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but also to begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations, prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.
Saturn’s gravitational field induced by its equatorially antisymmetric zonal winds
NASA Astrophysics Data System (ADS)
Kong, Dali; Zhang, Keke; Schubert, Gerald; Anderson, John D.
2018-05-01
The cloud-level zonal winds of Saturn are marked by a substantial equatorially antisymmetric component with a speed of about 50 m s^-1 which, if the winds are sufficiently deep, can produce measurable odd zonal gravitational coefficients ΔJ_{2k+1}, k = 1, 2, 3, 4. This study, based on solutions of the thermal-gravitational wind equation, provides a theoretical basis for interpreting the odd gravitational coefficients of Saturn in terms of its equatorially antisymmetric zonal flow. We adopt a Saturnian model comprising an ice-rock core, a metallic dynamo region and an outer molecular envelope. We use an equatorially antisymmetric zonal flow that is parameterized, confined in the molecular envelope and satisfies the solvability condition required for the thermal-gravitational wind equation. The structure and amplitude of the zonal flow at the cloud level are chosen to be consistent with observations of Saturn. We calculate the odd zonal gravitational coefficients ΔJ_{2k+1}, k = 1, 2, 3, 4 by regarding the depth of the equatorially antisymmetric winds as a parameter. It is found that ΔJ_3 is −4.197 × 10^−8 if the zonal winds extend about 13 000 km downward from the cloud tops, while it is −0.765 × 10^−8 if the depth is about 4000 km. The depth/profile of the equatorially antisymmetric zonal winds can eventually be estimated when the high-precision measurements of the Cassini Grand Finale become available.
Cycle-averaged dynamics of a periodically driven, closed-loop circulation model
NASA Technical Reports Server (NTRS)
Heldt, T.; Chang, J. L.; Chen, J. J. S.; Verghese, G. C.; Mark, R. G.
2005-01-01
Time-varying elastance models have been used extensively in the past to simulate the pulsatile nature of cardiovascular waveforms. Frequently, however, one is interested in dynamics that occur over longer time scales, in which case a detailed simulation of each cardiac contraction becomes computationally burdensome. In this paper, we apply circuit-averaging techniques to a periodically driven, closed-loop, three-compartment recirculation model. The resultant cycle-averaged model is linear and time invariant, and greatly reduces the computational burden. It is also amenable to systematic order reduction methods that lead to further efficiencies. Despite its simplicity, the averaged model captures the dynamics relevant to the representation of a range of cardiovascular reflex mechanisms.
An averaging battery model for a lead-acid battery operating in an electric car
NASA Technical Reports Server (NTRS)
Bozek, J. M.
1979-01-01
A battery model is developed based on time averaging the current or power, and is shown to be an effective means of predicting the performance of a lead-acid battery. The effectiveness of this battery model was tested on battery discharge profiles expected during the operation of an electric vehicle following the various SAE J227a driving schedules. The averaging model predicts the performance of a battery that is periodically charged (regenerated) if the regeneration energy is assumed to be converted to retrievable electrochemical energy on a one-to-one basis.
Roberts, Steven; Martin, Michael A
2010-01-01
Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. We propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality, and compare double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States are used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
NASA Technical Reports Server (NTRS)
Baurle, R. A.
2015-01-01
Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, there was no predictive improvement noted over the results obtained from the explicit
An improved car-following model with two preceding cars' average speed
NASA Astrophysics Data System (ADS)
Yu, Shao-Wei; Shi, Zhong-Ke
2015-01-01
To better describe cooperative car-following behaviors under intelligent transportation circumstances and to increase roadway traffic mobility, data on three successive following cars at a signalized intersection in Jinan, China were obtained and employed to explore the linkage between two preceding cars' average speed and car-following behaviors. The results indicate that the two preceding cars' average velocity has significant effects on the following car's motion. An improved car-following model considering two preceding cars' average velocity was then proposed and calibrated based on the full velocity difference model, and numerical simulations were carried out to study how two preceding cars' average speed affects the starting process and the evolution of traffic flow under an initial small disturbance. The results indicate that the improved car-following model can qualitatively describe the impacts of two preceding cars' average velocity on traffic flow, and that taking this average velocity into account in designing the control strategy for a cooperative adaptive cruise control system can improve the stability of traffic flow, suppress the appearance of traffic jams and increase the capacity of signalized intersections.
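One plausible way to extend the full velocity difference (FVD) model with the two leaders' average speed is to add a relaxation term toward that average. The abstract does not give the authors' exact functional form, so the acceleration law and all parameter values below are illustrative assumptions:

```python
import numpy as np

def optimal_velocity(dx, v_max=30.0, d_safe=25.0):
    """Illustrative tanh-form optimal-velocity function, a common
    choice in the car-following literature."""
    return 0.5 * v_max * (np.tanh(dx / d_safe - 1.0) + np.tanh(1.0))

def accel(dx, v, v_avg_two_leaders, kappa=0.4, lam=0.3):
    """Hypothetical FVD extension: relax toward the optimal velocity
    for the current gap dx, plus a term pulling the follower's speed
    toward the average speed of its two preceding cars."""
    return kappa * (optimal_velocity(dx) - v) + lam * (v_avg_two_leaders - v)

# Follower at a 20 m gap doing 10 m/s, two leaders averaging 12 m/s
print(accel(20.0, 10.0, 12.0))
```

The extra term acts as a smoothing feedback: a follower that is slower than its two leaders' average accelerates slightly even when the immediate gap alone would not call for it, which is the mechanism the paper credits with suppressing jams.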
Eastern Tropical Pacific Precipitation Response to Zonal SPCZ events
NASA Astrophysics Data System (ADS)
Durán-Quesada, A. M.; Lintner, B. R.
2014-12-01
Extreme El Niño events and warming conditions in the eastern tropical Pacific have been linked to pronounced spatial displacements of the South Pacific Convergence Zone known as "zonal SPCZ" events. Using a global dataset of Lagrangian back trajectories computed with the FLEXPART model for the period 1980-2013, a comprehensive analysis of the 3D circulation characteristics associated with the SPCZ is undertaken. Ten-day histories of along-trajectory specific humidity, potential vorticity and temperature are reconstructed for zonal SPCZ events as well as other states, with differences related to El Niño intensity and development stage as well as the state of the Western Hemisphere Warm Pool. How zonal events influence precipitation over the eastern tropical Pacific is examined using back trajectories, reanalysis, TRMM precipitation, and additional satellite-derived cloud information. It is found that SPCZ displacements are associated with enhanced convection over the eastern tropical Pacific, in good agreement with prior work. The connection between intensification of precipitation over the eastern tropical Pacific during zonal events and suppression of rainfall over the Maritime Continent is also described.
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, related work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
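For the scalar-weighted case, the optimal average in this formulation reduces to an eigenvalue problem: the average quaternion is the dominant eigenvector of the weighted sum of outer products. A minimal sketch (the example quaternions are fabricated):

```python
import numpy as np

def average_quaternion(quats, weights):
    """Scalar-weighted quaternion average: the eigenvector of the
    weighted outer-product matrix M = sum_i w_i q_i q_i^T associated
    with its largest eigenvalue.  Using q q^T makes the result
    invariant to the sign ambiguity q <-> -q."""
    M = sum(w * np.outer(q, q) for w, q in zip(weights, quats))
    vals, vecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    return vecs[:, -1]               # dominant eigenvector, unit norm

q1 = np.array([1.0, 0.0, 0.0, 0.0])
q2 = -q1                             # same attitude, opposite sign
q_avg = average_quaternion([q1, q2], [0.5, 0.5])
print(np.abs(q_avg))                 # -> [1. 0. 0. 0.] (up to sign)
```

Note how naive component-wise averaging of q1 and q2 would give the zero vector, while the eigenvector formulation correctly recovers the shared attitude; the matrix-weighted case in the Note is more involved and is not sketched here.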
Partially-Averaged Navier Stokes Model for Turbulence: Implementation and Validation
NASA Technical Reports Server (NTRS)
Girimaji, Sharath S.; Abdol-Hamid, Khaled S.
2005-01-01
Partially-averaged Navier Stokes (PANS) is a suite of turbulence closure models of various modeled-to-resolved scale ratios ranging from Reynolds-averaged Navier Stokes (RANS) to Navier-Stokes (direct numerical simulations). The objective of PANS, like hybrid models, is to resolve large scale structures at reasonable computational expense. The modeled-to-resolved scale ratio or the level of physical resolution in PANS is quantified by two parameters: the unresolved-to-total ratios of kinetic energy (f(sub k)) and dissipation (f(sub epsilon)). The unresolved-scale stress is modeled with the Boussinesq approximation and modeled transport equations are solved for the unresolved kinetic energy and dissipation. In this paper, we first present a brief discussion of the PANS philosophy followed by a description of the implementation procedure and finally perform preliminary evaluation in benchmark problems.
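The two resolution parameters enter the closure chiefly through the modified destruction coefficient of the unresolved-dissipation equation, C*ε2 = Cε1 + (f(sub k)/f(sub ε))(Cε2 − Cε1), together with the Boussinesq unresolved eddy viscosity ν(sub u) = Cμ k(sub u)²/ε(sub u); with f(sub k) = f(sub ε) = 1 the parent RANS model is recovered. A minimal sketch (function names are illustrative; coefficient values are the standard k-ε constants, assumed here):

```python
def pans_ceps2_star(f_k, f_eps, c_eps1=1.44, c_eps2=1.92):
    """PANS-modified destruction coefficient for the unresolved-dissipation
    equation; f_k = f_eps = 1 recovers the parent RANS coefficient C_eps2."""
    return c_eps1 + (f_k / f_eps) * (c_eps2 - c_eps1)

def unresolved_eddy_viscosity(k_u, eps_u, c_mu=0.09):
    """Boussinesq closure for the unresolved-scale stress:
    nu_u = C_mu * k_u**2 / eps_u."""
    return c_mu * k_u ** 2 / eps_u
```

Lowering f_k at fixed f_ε reduces C*ε2 and hence the modeled eddy viscosity, which is what allows more of the turbulent spectrum to be resolved.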
NASA Astrophysics Data System (ADS)
Armstrong, Hannah; Boese, Matthew; Carmichael, Cody; Dimich, Hannah; Seay, Dylan; Sheppard, Nathan; Beekman, Matt
2017-01-01
Maximum thermoelectric energy conversion efficiencies are calculated using the conventional "constant property" model and the recently proposed "cumulative/average property" model (Kim et al. in Proc Natl Acad Sci USA 112:8205, 2015) for 18 high-performance thermoelectric materials. We find that the constant property model generally predicts higher energy conversion efficiency for nearly all materials and temperature differences studied. Although significant deviations are observed in some cases, on average the constant property model predicts an efficiency that is a factor of 1.16 larger than that predicted by the average property model, with even lower deviations for temperature differences typical of energy harvesting applications. Based on our analysis, we conclude that the conventional dimensionless figure of merit ZT obtained from the constant property model, while not applicable for some materials with strongly temperature-dependent thermoelectric properties, remains a simple yet useful metric for initial evaluation and/or comparison of thermoelectric materials, provided the ZT at the average temperature of projected operation, not the peak ZT, is used.
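The constant property model referred to above evaluates the familiar closed-form maximum efficiency η = (ΔT/T_h) · (√(1+ZT) − 1)/(√(1+ZT) + T_c/T_h), with ZT taken at the average temperature of operation. A minimal sketch (names are illustrative):

```python
import math

def max_efficiency(t_hot, t_cold, zT):
    """Constant-property maximum thermoelectric conversion efficiency.

    t_hot, t_cold: hot- and cold-side temperatures in kelvin;
    zT: dimensionless figure of merit at the average temperature.
    """
    carnot = (t_hot - t_cold) / t_hot          # Carnot limit
    m = math.sqrt(1.0 + zT)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)
```

For example, with T_h = 600 K, T_c = 300 K, and ZT = 1 the expression gives roughly 11%, about a fifth of the Carnot limit for that temperature difference.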
Evaluation of column-averaged methane in models and TCCON with a focus on the stratosphere
NASA Astrophysics Data System (ADS)
Ostler, Andreas; Sussmann, Ralf; Patra, Prabir K.; Houweling, Sander; De Bruine, Marko; Stiller, Gabriele P.; Haenel, Florian J.; Plieninger, Johannes; Bousquet, Philippe; Yin, Yi; Saunois, Marielle; Walker, Kaley A.; Deutscher, Nicholas M.; Griffith, David W. T.; Blumenstock, Thomas; Hase, Frank; Warneke, Thorsten; Wang, Zhiting; Kivi, Rigel; Robinson, John
2016-09-01
The distribution of methane (CH4) in the stratosphere can be a major driver of spatial variability in the dry-air column-averaged CH4 mixing ratio (XCH4), which is being measured increasingly for the assessment of CH4 surface emissions. Chemistry-transport models (CTMs) therefore need to simulate the tropospheric and stratospheric fractional columns of XCH4 accurately for estimating surface emissions from XCH4. Simulations from three CTMs are tested against XCH4 observations from the Total Carbon Column Network (TCCON). We analyze how the model-TCCON agreement in XCH4 depends on the model representation of stratospheric CH4 distributions. Model equivalents of TCCON XCH4 are computed with stratospheric CH4 fields from both the model simulations and from satellite-based CH4 distributions from MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) and MIPAS CH4 fields adjusted to ACE-FTS (Atmospheric Chemistry Experiment Fourier Transform Spectrometer) observations. Using MIPAS-based stratospheric CH4 fields in place of model simulations improves the model-TCCON XCH4 agreement for all models. For the Atmospheric Chemistry Transport Model (ACTM) the average XCH4 bias is significantly reduced from 38.1 to 13.7 ppb, whereas small improvements are found for the models TM5 (Transport Model, version 5; from 8.7 to 4.3 ppb) and LMDz (Laboratoire de Météorologie Dynamique model with zooming capability; from 6.8 to 4.3 ppb). Replacing model simulations with MIPAS stratospheric CH4 fields adjusted to ACE-FTS reduces the average XCH4 bias for ACTM (3.3 ppb), but increases the average XCH4 bias for TM5 (10.8 ppb) and LMDz (20.0 ppb). These findings imply that model errors in simulating stratospheric CH4 contribute to model biases. Current satellite instruments cannot definitively measure stratospheric CH4 to sufficient accuracy to eliminate these biases. Applying transport diagnostics to the models indicates that model-to-model differences in the simulation of
Mean-field velocity difference model considering the average effect of multi-vehicle interaction
NASA Astrophysics Data System (ADS)
Guo, Yan; Xue, Yu; Shi, Yin; Wei, Fang-ping; Lü, Liang-zhong; He, Hong-di
2018-06-01
In this paper, a mean-field velocity difference model (MFVD) is proposed to describe the average effect of multi-vehicle interactions on the whole road. By stability analysis, the stability condition of the traffic system is obtained. Its stability is compared with that of the full velocity difference (FVD) model, and the completeness of the MFVD model is discussed. The mKdV equation is derived from the MFVD model through nonlinear analysis to reveal traffic jams in the form of the kink-antikink density wave. Numerical simulation is then performed, and the results illustrate that the average effect of multi-vehicle interactions plays an important role in effectively suppressing traffic jams. Increasing the strength of the mean-field velocity difference in the MFVD model rapidly reduces traffic jams and enhances the stability of the traffic system.
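The mean-field idea can be illustrated with an FVD-type acceleration law in which the velocity-difference term is the average over several leading vehicles rather than the immediate leader only. The sketch below uses the common Helbing-Tilch calibration of the optimal-velocity function; the parameter values and function name are illustrative, not taken from this paper:

```python
import math

def mfvd_acceleration(headway, v, mean_dv, a=0.41, lam=0.3):
    """FVD-type acceleration with a mean-field velocity-difference term.

    headway: gap to the leading vehicle (m); v: own speed (m/s);
    mean_dv: mean velocity difference over the leading vehicles (m/s);
    a: sensitivity; lam: velocity-difference coupling strength.
    Optimal-velocity function: Helbing-Tilch calibration (assumed).
    """
    v_opt = 6.75 + 7.91 * math.tanh(0.13 * (headway - 5.0) - 1.57)
    return a * (v_opt - v) + lam * mean_dv
```

At the equilibrium speed v = V(headway) with vanishing mean velocity difference the acceleration is zero, which is the uniform-flow state whose stability the paper analyzes.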
Validation of a mixture-averaged thermal diffusion model for premixed lean hydrogen flames
NASA Astrophysics Data System (ADS)
Schlup, Jason; Blanquart, Guillaume
2018-03-01
The mixture-averaged thermal diffusion model originally proposed by Chapman and Cowling is validated using multiple flame configurations. Simulations using detailed hydrogen chemistry are done on one-, two-, and three-dimensional flames. The analysis spans flat and stretched, steady and unsteady, and laminar and turbulent flames. Quantitative and qualitative results using the thermal diffusion model compare very well with the more complex multicomponent diffusion model. Comparisons are made using flame speeds, surface areas, species profiles, and chemical source terms. Once validated, this model is applied to three-dimensional laminar and turbulent flames. For these cases, thermal diffusion causes an increase in the propagation speed of the flames as well as increased product chemical source terms in regions of high positive curvature. The results illustrate the necessity for including thermal diffusion, and the accuracy and computational efficiency of the mixture-averaged thermal diffusion model.
NASA Astrophysics Data System (ADS)
Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.
2018-04-01
The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all the years, whereas the minimum temperature series does not, so the two series are modelled separately. The candidate SARIMA model has been chosen by inspecting the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)12 model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are obtained using the maximum-likelihood method, together with the standard errors of the residuals. The adequacy of the selected model is assessed using correlation diagnostics (ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals) and normality diagnostics (kernel and normal density curves over the histogram, and a Q-Q plot). Finally, monthly maximum and minimum temperature patterns of India for the next 3 years are forecast with the selected model.
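The selected SARIMA (1, 0, 0) × (0, 1, 1)12 structure, (1 − φB)(1 − B¹²)x_t = (1 + ΘB¹²)e_t, expands to x_t = x_{t−12} + φ(x_{t−1} − x_{t−13}) + e_t + Θe_{t−12}, so a one-step forecast sets e_t = 0. A minimal sketch of that recursion (the function name and the illustrative φ, Θ values are assumptions, not the paper's fitted estimates):

```python
def sarima_one_step(x, resid, phi, theta, s=12):
    """One-step-ahead forecast for SARIMA(1,0,0)x(0,1,1)_s.

    x: observed series (list); resid: one-step residuals e_t aligned with x.
    Model: (1 - phi B)(1 - B^s) x_t = (1 + theta B^s) e_t, i.e.
    x_t = x_{t-s} + phi*(x_{t-1} - x_{t-1-s}) + e_t + theta*e_{t-s};
    the forecast replaces the unknown e_t by zero.
    """
    return x[-s] + phi * (x[-1] - x[-1 - s]) + theta * resid[-s]
```

In practice the parameters would come from a maximum-likelihood fit (e.g. a SARIMAX routine); the recursion above only shows how a fitted model produces forecasts.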
On the fast zonal transport of the STS-121 space shuttle exhaust plume in the lower thermosphere
NASA Astrophysics Data System (ADS)
Yue, Jia; Liu, Han-Li; Meier, R. R.; Chang, Loren; Gu, Sheng-Yang; Russell, James, III
2013-03-01
Meier et al. (2011) reported rapid eastward transport of the STS-121 space shuttle (launch: July 4, 2006) main engine plume in the lower thermosphere, observed in hydrogen Lyman α images by the GUVI instrument onboard the TIMED satellite. In order to study the mechanism of the rapid zonal transport, diagnostic tracer calculations are performed using winds from the Thermosphere Ionosphere Mesosphere Electrodynamics General Circulation Model (TIME-GCM) simulation of July, 2006. It is found that the strong eastward jet at heights of 100-110 km, where the exhaust plume was deposited, results in a persistent eastward tracer motion with an average velocity of 45 m/s. This is generally consistent with, though faster than, the prevailing eastward shuttle plume movement with daily mean velocity of 30 m/s deduced from the STS-121 GUVI observation. The quasi-two-day wave (QTDW) was not included in the numerical simulation because it was found not to be large. Its absence, however, might be partially responsible for insufficient meridional transport to move the tracers away from the fast jet in the simulation. The current study and our model results from Yue and Liu (2010) explain two very different shuttle plume transport scenarios (STS-121 and STS-107 (launch: January 16, 2003), respectively): we conclude that lower thermospheric dynamics is sufficient to account for both very fast zonal motion (zonal jet in the case of STS-121) and very fast meridional motion to polar regions (large QTDW in the case of STS-107).
Creating "Intelligent" Climate Model Ensemble Averages Using a Process-Based Framework
NASA Astrophysics Data System (ADS)
Baker, N. C.; Taylor, P. C.
2014-12-01
The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is often used to add value to model projections: consensus projections have been shown to consistently outperform individual models. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, certain models reproduce climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean-state metrics. Which metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequally weighting multi-model ensembles. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables, e.g., outgoing longwave radiation and surface temperature. Metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and the Earth's Radiant Energy System (CERES) instrument and surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing weighted and unweighted model ensembles. For example, one tested metric weights the ensemble by how well models reproduce the time-series probability distribution of the cloud forcing component of reflected shortwave radiation. The weighted ensemble for this metric indicates lower simulated precipitation (up to 0.7 mm/day) in tropical regions than the unweighted ensemble: since CMIP5 models have been shown to
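A skill-weighted ensemble mean of the kind described above can be sketched as follows; this is a generic normalized weighting, not the project's specific process-based metric:

```python
def weighted_ensemble(projections, skill):
    """Skill-weighted multi-model mean.

    projections: one list of projected values per model (same length each);
    skill: one non-negative skill score per model. Weights are the skill
    scores normalized to sum to 1; equal skills reduce to the plain mean.
    """
    total = float(sum(skill))
    weights = [s / total for s in skill]
    n = len(projections[0])
    return [sum(w * model[j] for w, model in zip(weights, projections))
            for j in range(n)]
```

The open question the project addresses is what the skill scores should measure; the arithmetic of combining them is the easy part.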
Bounded relative motion under zonal harmonics perturbations
NASA Astrophysics Data System (ADS)
Baresi, Nicola; Scheeres, Daniel J.
2017-04-01
The problem of finding natural bounded relative trajectories between the different units of a distributed space system is of great interest to the astrodynamics community. This is because most popular initialization methods still fail to establish long-term bounded relative motion when gravitational perturbations are involved. Recent numerical searches based on dynamical systems theory and ergodic maps have demonstrated that bounded relative trajectories not only exist but may extend up to hundreds of kilometers, i.e., well beyond the reach of currently available techniques. To remedy this, we introduce a novel approach that relies on neither linearized equations nor mean-to-osculating orbit element mappings. The proposed algorithm applies to rotationally symmetric bodies and is based on a numerical method for computing quasi-periodic invariant tori via stroboscopic maps, including extra constraints to fix the average of the nodal period and RAAN drift between two consecutive equatorial plane crossings of the quasi-periodic solutions. In this way, bounded relative trajectories of arbitrary size can be found with great accuracy as long as these are allowed by the natural dynamics and the physical constraints of the system (e.g., the surface of the gravitational attractor). This holds under any number of zonal harmonics perturbations and for arbitrary time intervals as demonstrated by numerical simulations about an Earth-like planet and the highly oblate primary of the binary asteroid (66391) 1999 KW4.
A Temperature-Based Model for Estimating Monthly Average Daily Global Solar Radiation in China
Li, Huashan; Cao, Fei; Wang, Xianlong; Ma, Weibin
2014-01-01
Since air temperature records are readily available around the world, the models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated by comparing with the HS model and its two modifications (Samani model and Chen model) against the measured data at 65 meteorological stations in China. Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas where only air temperature data are available in China. PMID:24605046
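For reference, the baseline Hargreaves-Samani estimate that the new model builds on relates global radiation to extraterrestrial radiation Ra and the diurnal temperature range as Rs = k_rs · Ra · √(Tmax − Tmin). A minimal sketch (the default empirical coefficient 0.16, typical for interior locations, is an assumption, not a value from this paper):

```python
import math

def hargreaves_samani(ra, tmax, tmin, k_rs=0.16):
    """Hargreaves-Samani estimate of monthly average daily global solar
    radiation, in the same units as the extraterrestrial radiation ra.

    tmax, tmin: monthly average daily max/min air temperature (deg C);
    k_rs: empirical coefficient (~0.16 interior, ~0.19 coastal, assumed).
    """
    return k_rs * ra * math.sqrt(tmax - tmin)
```

The appeal of this family of models is exactly what the abstract notes: only air temperature records are required, and the modifications evaluated in the paper adjust how the temperature range enters the formula.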
The Performance of Multilevel Growth Curve Models under an Autoregressive Moving Average Process
ERIC Educational Resources Information Center
Murphy, Daniel L.; Pituch, Keenan A.
2009-01-01
The authors examined the robustness of multilevel linear growth curve modeling to misspecification of an autoregressive moving average process. As previous research has shown (J. Ferron, R. Dailey, & Q. Yi, 2002; O. Kwok, S. G. West, & S. B. Green, 2007; S. Sivo, X. Fan, & L. Witta, 2005), estimates of the fixed effects were unbiased, and Type I…
DEVELOPMENT AND EVALUATION OF A MODEL FOR ESTIMATING LONG-TERM AVERAGE OZONE EXPOSURES TO CHILDREN
Long-term average exposures of school-age children can be modelled using longitudinal measurements collected during the Harvard Southern California Chronic Ozone Exposure Study over a 12-month period: June, 1995-May, 1996. The data base contains over 200 young children with perso...
Properties of bright solitons in averaged and unaveraged models for SDG fibres
NASA Astrophysics Data System (ADS)
Kumar, Ajit; Kumar, Atul
1996-04-01
Using the slowly varying envelope approximation and averaging over the fibre cross-section the evolution equation for optical pulses in semiconductor-doped glass (SDG) fibres is derived from the nonlinear wave equation. Bright soliton solutions of this equation are obtained numerically and their properties are studied and compared with those of the bright solitons in the unaveraged model.
Modeling of Density-Dependent Flow based on the Thermodynamically Constrained Averaging Theory
NASA Astrophysics Data System (ADS)
Weigand, T. M.; Schultz, P. B.; Kelley, C. T.; Miller, C. T.; Gray, W. G.
2016-12-01
The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for density-dependent flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as diffusion arising from gradients associated with pressure and activity; and the ability to describe both high- and low-concentration displacement. The TCAT model is presented, closure relations for the TCAT model are postulated based on microscale averages, and a parameter estimation is performed on a subset of the experimental data. Due to the sharpness of the fronts, an adaptive moving mesh technique was used to ensure grid-independent solutions within the run-time constraints. The optimized parameters are then used for forward simulations and compared to the set of experimental data not used for the parameter estimation.
NASA Astrophysics Data System (ADS)
Yamazaki, Y. H.; Skeet, D. R.; Read, P. L.
2004-04-01
We have been developing a new three-dimensional general circulation model for the stratosphere and troposphere of Jupiter based on the dynamical core of a portable version of the Unified Model of the UK Meteorological Office. Being one of the leading terrestrial GCMs, employed for operational weather forecasting and climate research, the Unified Model has been thoroughly tested and performance-tuned for both vector and parallel computers. It is formulated as a generalized form of the standard primitive equations to handle a thick atmosphere, using a scaled pressure as the vertical coordinate. It is able to accurately simulate the dynamics of a three-dimensional, fully compressible atmosphere on the whole or a part of a spherical shell at high spatial resolution in all three directions. Using the current version of the GCM, we examine the characteristics of the Jovian winds in idealized configurations based on the observed vertical structure of temperature. Our initial focus is on the evolution of isolated eddies in the mid-latitudes. Following a brief theoretical investigation of the vertical structure of the atmosphere, limited-area cyclic channel domains are used to numerically investigate the nonlinear evolution of the mid-latitude winds. First, the evolution of deep and shallow cyclones and anticyclones is tested in an atmosphere at rest to identify a preferred horizontal and vertical structure of the vortices. Then, the dependence of the migration characteristics of the vortices on the modelling parameters is investigated, and it is found to be most sensitive to the horizontal diffusion. We also examine the hydrodynamical stability of observed subtropical jets in both northern and southern hemispheres in the three-dimensional nonlinear model as initial value problems. In both cases, it was found that the prominent jets are unstable at various scales and that vortices of various sizes are generated, including those comparable to the White Ovals and the Great Red Spot.
1990-11-01
(Q + aa')^(-1) = Q^(-1) - Q^(-1)aa'Q^(-1) / (1 + a'Q^(-1)a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and... [contents fragments: 2. The First-Order Moving Average Model; 3. Some Approaches to the Iterative...] ...the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and...
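The rank-one case of Woodbury's formula referred to here, (Q + aa')^(-1) = Q^(-1) − Q^(-1)aa'Q^(-1) / (1 + a'Q^(-1)a) (also known as the Sherman-Morrison identity), can be checked numerically with a quick script:

```python
import numpy as np

rng = np.random.default_rng(0)
Q = np.diag(rng.uniform(1.0, 2.0, 4))   # well-conditioned base matrix
a = rng.standard_normal((4, 1))          # rank-one update vector

Qinv = np.linalg.inv(Q)
lhs = np.linalg.inv(Q + a @ a.T)
# Sherman-Morrison (rank-one Woodbury) identity:
rhs = Qinv - (Qinv @ a @ a.T @ Qinv) / (1.0 + (a.T @ Qinv @ a).item())
assert np.allclose(lhs, rhs)
```

This identity is exactly what makes the moving-average likelihood computations mentioned here cheap: the inverse of a low-rank update of a covariance matrix can be obtained without refactoring the full matrix.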
NASA Astrophysics Data System (ADS)
Olson, R.; Evans, J. P.; Fan, Y.
2015-12-01
NARCliM (NSW/ACT Regional Climate Modelling Project) is a regional climate project for Australia and the surrounding region. It dynamically downscales 4 General Circulation Models (GCMs) using three Regional Climate Models (RCMs) to provide climate projections for the CORDEX-AustralAsia region at 50 km resolution, and for south-east Australia at 10 km resolution. The project differs from previous work in the level of sophistication of the model selection. Specifically, the selection process for GCMs included (i) conducting a literature review to evaluate model performance, (ii) analysing model independence, and (iii) selecting models that span the future temperature and precipitation change space. RCMs for downscaling the GCMs were chosen based on their performance for several precipitation events over south-east Australia, and on model independence. Bayesian Model Averaging (BMA) provides a statistically consistent framework for weighting the models based on their likelihood given the available observations. These weights are used to provide probability distribution functions (pdfs) for model projections. We develop a BMA framework for constructing probabilistic climate projections for spatially averaged variables from the NARCliM project. The first step in the procedure is smoothing the model output in order to exclude the influence of internal climate variability. Our statistical model for the model-observation residuals is a homoskedastic i.i.d. process. Model weights are determined through Monte Carlo integration by comparing RCM output with Australian Water Availability Project (AWAP) observations. Posterior pdfs of the statistical parameters of the model-data residuals are obtained using Markov chain Monte Carlo. The uncertainty in the properties of the model-data residuals is fully accounted for when constructing the projections. We present the preliminary results of the BMA analysis for yearly maximum temperature for New South Wales state planning regions for the period 2060-2079.
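At its core, converting model likelihoods into BMA weights is a normalized-exponential step. The sketch below assumes a uniform prior over models and works from log-likelihoods for numerical stability; it is a schematic of the weighting step only, not NARCliM's full Monte Carlo procedure:

```python
import math

def bma_weights(log_likelihoods):
    """Posterior model weights from log-likelihoods, uniform model prior.

    w_i proportional to exp(logL_i); the max is subtracted before
    exponentiating to avoid overflow, then weights are normalized.
    """
    m = max(log_likelihoods)
    unnorm = [math.exp(l - m) for l in log_likelihoods]
    total = sum(unnorm)
    return [u / total for u in unnorm]
```

The resulting weights multiply each model's projection pdf, so models that fit the observations poorly contribute little to the combined projection.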
NASA Technical Reports Server (NTRS)
2000-01-01
This movie is a manipulated sequence showing motions in Jupiter's atmosphere over the course of five days beginning Oct. 1, 2000, as seen by a camera on NASA's Cassini spacecraft, using a blue filter.
Beginning with seven images taken at uneven time intervals, this sequence was made by using information on wind speeds derived from actual Jupiter images to create evenly spaced time steps throughout. The final result is a smooth movie sequence consisting of both real and false frames.
The view is of the opposite side of the planet from Jupiter's Great Red Spot. The region shown reaches from 50 degrees north to 50 degrees south of Jupiter's equator, and extends 100 degrees east-to-west, about one-quarter of Jupiter's circumference. The smallest features are about 500 kilometers (about 300 miles) across.
Towards the end of the sequence, a shadow appears from one of Jupiter's moons, Europa.
The movie shows the remains of a historic merger that began several years ago, when three white oval storms that had existed for 60 years merged into two, then one. The resulting oval is visible in the lower left portion of the movie.
The movie also shows zonal jets that circle the planet on constant latitudes. Winds seen moving toward the left (westward) correspond to features that are rotating a little slower than Jupiter's magnetic field, and winds moving the opposite direction correspond to features that are rotating a little faster than the magnetic field. Since Jupiter has no solid surface, the rotation of the magnetic field is the point of reference for the rotation of the planet.
Cassini is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Cassini mission for NASA's Office of Space Science, Washington, D.C.
Detectability of planetary characteristics in disk-averaged spectra. I: The Earth model.
Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Fishbein, Evan; Turnbull, Margaret; Bibring, Jean-Pierre
2006-02-01
Over the next 2 decades, NASA and ESA are planning a series of space-based observatories to detect and characterize extrasolar planets. This first generation of observatories will not be able to spatially resolve the terrestrial planets detected. Instead, these planets will be characterized by disk-averaged spectroscopy. To assess the detectability of planetary characteristics in disk-averaged spectra, we have developed a spatially and spectrally resolved model of the Earth. This model uses atmospheric and surface properties from existing observations and modeling studies as input, and generates spatially resolved high-resolution synthetic spectra using the Spectral Mapping Atmospheric Radiative Transfer model. Synthetic spectra were generated for a variety of conditions, including cloud coverage, illumination fraction, and viewing angle geometry, over a wavelength range extending from the ultraviolet to the far-infrared. Here we describe the model and validate it against disk-averaged visible to infrared observations of the Earth taken by the Mars Global Surveyor Thermal Emission Spectrometer, the ESA Mars Express OMEGA instrument, and ground-based observations of earthshine reflected from the unilluminated portion of the Moon. The comparison between the data and model indicates that several atmospheric species can be identified in disk-averaged Earth spectra, and potentially detected depending on the wavelength range and resolving power of the instrument. At visible wavelengths (0.4-0.9 µm) O3, H2O, O2, and oxygen dimer [(O2)2] are clearly apparent. In the mid-infrared (5-20 µm) CO2, O3, and H2O are present. CH4, N2O, CO2, O3, and H2O are visible in the near-infrared (1-5 µm). A comprehensive three-dimensional model of the Earth is needed to produce a good fit with the observations.
NASA Astrophysics Data System (ADS)
Yuksel, Heba; Davis, Christopher C.
2006-09-01
Intensity fluctuations at the receiver in free space optical (FSO) communication links lead to a received power variance that depends on the size of the receiver aperture. Increasing the size of the receiver aperture reduces the power variance. This effect of the receiver size on power variance is called aperture averaging. If there were no aperture size limitation at the receiver, then there would be no turbulence-induced scintillation. In practice, there is always a tradeoff between aperture size, transceiver weight, and potential transceiver agility for pointing, acquisition and tracking (PAT) of FSO communication links. We have developed a geometrical simulation model to predict the aperture averaging factor. This model is used to simulate the aperture averaging effect at a given range by using a large number of rays, Gaussian as well as uniformly distributed, propagating through simulated turbulence into a circular receiver of varying aperture size. Turbulence is simulated by filling the propagation path with spherical bubbles of varying sizes and refractive index discontinuities statistically distributed according to various models. For each statistical representation of the atmosphere, the three-dimensional trajectory of each ray is analyzed using geometrical optics. These Monte Carlo techniques have proved capable of assessing the aperture averaging effect, in particular, the quantitative expected reduction in intensity fluctuations with increasing aperture diameter. In addition, beam wander results have demonstrated the range-cubed dependence of mean-squared beam wander. An effective turbulence parameter can also be determined by correlating beam wander behavior with the path length.
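As a point of comparison for such simulations, a commonly quoted analytic approximation (due to Andrews, for a plane wave in weak turbulence) for the aperture-averaging factor is A = [1 + 1.062 kD²/(4L)]^(−7/6), where k = 2π/λ, D is the aperture diameter, and L is the path length. A minimal sketch of that formula (not the authors' geometrical Monte Carlo model):

```python
import math

def aperture_averaging_factor(d, wavelength, path_length):
    """Andrews' plane-wave approximation for the aperture-averaging factor.

    Returns A, the ratio of aperture-averaged scintillation to the
    point-receiver value; A -> 1 as the aperture diameter d -> 0.
    d, wavelength, path_length in meters.
    """
    k = 2.0 * math.pi / wavelength
    return (1.0 + 1.062 * k * d ** 2 / (4.0 * path_length)) ** (-7.0 / 6.0)
```

For a 1550 nm link over 1 km, growing the aperture from 1 cm to 10 cm drops A by more than an order of magnitude, which is the quantitative trend the ray-based simulation is designed to reproduce.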
Analyzing average and conditional effects with multigroup multilevel structural equation models
Mayer, Axel; Nagengast, Benjamin; Fletcher, John; Steyer, Rolf
2014-01-01
Conventionally, multilevel analysis of covariance (ML-ANCOVA) has been the recommended approach for analyzing treatment effects in quasi-experimental multilevel designs with treatment application at the cluster-level. In this paper, we introduce the generalized ML-ANCOVA with linear effect functions that identifies average and conditional treatment effects in the presence of treatment-covariate interactions. We show how the generalized ML-ANCOVA model can be estimated with multigroup multilevel structural equation models that offer considerable advantages compared to traditional ML-ANCOVA. The proposed model takes into account measurement error in the covariates, sampling error in contextual covariates, treatment-covariate interactions, and stochastic predictors. We illustrate the implementation of ML-ANCOVA with an example from educational effectiveness research where we estimate average and conditional effects of early transition to secondary schooling on reading comprehension. PMID:24795668
Variability in daily, zonal mean lower-stratospheric temperatures
NASA Technical Reports Server (NTRS)
Christy, John R.; Drouilhet, S. James, Jr.
1994-01-01
Satellite data from the microwave sounding unit (MSU) channel 4, when carefully merged, provide daily zonal anomalies of lower-stratosphere temperature with a level of precision between 0.01 and 0.08 C per 2.5 deg latitude band. Global averages of these daily zonal anomalies reveal the prominent warming events due to volcanic aerosol in 1982 (El Chichon) and 1991 (Mt. Pinatubo), which are on the order of 1 C. The quasi-biennial oscillation (QBO) may be extracted from these zonal data by applying a spatial filter between 15 deg N and 15 deg S latitude which resembles the meridional curvature. Previously published relationships between the QBO and north polar stratospheric temperatures during northern winter are examined but are not reproduced in the MSU4 data. Sudden stratospheric warmings in the north polar region are represented in the MSU4 data for latitudes poleward of 70 deg N. In the Southern Hemisphere, there appears to be a moderate relationship between total ozone concentration and MSU4 temperatures, though it has been less apparent in 1991 and 1992. In terms of empirical modes of variability, the authors find a strong tendency in EOF 1 (39.2% of the variance) for anomalies in the Northern Hemisphere polar regions to be counterbalanced by anomalies equatorward of 40 deg N and 40 deg S latitudes. In addition, most of the modes revealed significant power in the 15-20 day period band.
Reynolds-Averaged Navier-Stokes Analysis of Zero Efflux Flow Control over a Hump Model
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.
2006-01-01
The unsteady flow over a hump model with zero efflux oscillatory flow control is modeled computationally using the unsteady Reynolds-averaged Navier-Stokes equations. Three different turbulence models produce similar results, and do a reasonably good job predicting the general character of the unsteady surface pressure coefficients during the forced cycle. However, the turbulent shear stresses are underpredicted in magnitude inside the separation bubble, and the computed results predict too large a (mean) separation bubble compared with experiment. These missed predictions are consistent with earlier steady-state results using no-flow-control and steady suction, from a 2004 CFD validation workshop for synthetic jets.
Turbulence, transport, and zonal flows in the Madison symmetric torus reversed-field pinch
NASA Astrophysics Data System (ADS)
Williams, Z. R.; Pueschel, M. J.; Terry, P. W.; Hauff, T.
2017-12-01
The robustness and the effect of zonal flows in trapped electron mode (TEM) turbulence and ion temperature gradient (ITG) turbulence in the reversed-field pinch (RFP) are investigated from numerical solutions of the gyrokinetic equations with and without external magnetic perturbations introduced to model tearing modes. For simulations without external magnetic field perturbations, zonal flows produce a much larger reduction of transport for the density-gradient-driven TEM turbulence than they do for the ITG turbulence. Zonal flows are studied in detail to understand the nature of their strong excitation in the RFP and to gain insight into the key differences between the TEM- and ITG-driven regimes. The zonal flow residuals are significantly larger in the RFP than in tokamak geometry due to the low safety factor. Collisionality is seen to play a significant role in the TEM zonal flow regulation through the different responses of the linear growth rate and the size of the Dimits shift to collisionality, while affecting the ITG only minimally. A secondary instability analysis reveals that the TEM turbulence drives zonal flows at a rate twice that of the ITG turbulence. In addition to interfering with zonal flows, the magnetic perturbations are found to obviate an energy scaling relation for fast particles.
Spatial averaging of a dissipative particle dynamics model for active suspensions
NASA Astrophysics Data System (ADS)
Panchenko, Alexander; Hinz, Denis F.; Fried, Eliot
2018-03-01
Starting from a fine-scale dissipative particle dynamics (DPD) model of self-motile point particles, we derive meso-scale continuum equations by applying a spatial averaging version of the Irving-Kirkwood-Noll procedure. Since the method does not rely on kinetic theory, the derivation is valid for highly concentrated particle systems. Spatial averaging yields stochastic continuum equations similar to those of Toner and Tu. However, our theory also involves a constitutive equation for the average fluctuation force. According to this equation, both the strength and the probability distribution vary with time and position through the effective mass density. The statistics of the fluctuation force also depend on the fine scale dissipative force equation, the physical temperature, and two additional parameters which characterize fluctuation strengths. Although the self-propulsion force entering our DPD model contains no explicit mechanism for aligning the velocities of neighboring particles, our averaged coarse-scale equations include the commonly encountered cubically nonlinear (internal) body force density.
A Stochastic Model of Space-Time Variability of Tropical Rainfall: I. Statistics of Spatial Averages
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)
2002-01-01
Global maps of rainfall are of great importance in connection with modeling of the Earth's climate. Comparison between the maps of rainfall predicted by computer-generated climate models and observations provides a sensitive test for these models. To make such a comparison, one typically needs the total precipitation amount over a large area, which could be hundreds of kilometers in size, over extended periods of time of order days or months. This presents a difficult problem since rain varies greatly from place to place as well as in time. Remote sensing methods using ground radar or satellites detect rain over a large area by essentially taking a series of snapshots at infrequent intervals and indirectly deriving the average rain intensity within a collection of "pixels", usually several kilometers in size. They measure the area average of rain at a particular instant. Rain gauges, on the other hand, record rain accumulation continuously in time but only over a very small area tens of centimeters across, say, the size of a dinner plate. They measure only a time average at a single location. In making use of either method one needs to fill in the gaps in the observation - either the gaps in the area covered or the gaps in time of observation. This involves using statistical models to obtain information about the rain that is missed from what is actually detected. This paper investigates such a statistical model and validates it with rain data collected over the tropical western Pacific from shipborne radars during TOGA COARE (Tropical Oceans Global Atmosphere Coupled Ocean-Atmosphere Response Experiment). The model incorporates a number of commonly observed features of rain. While rain varies rapidly with location and time, the variability diminishes when averaged over larger areas or longer periods of time. Moreover, rain is patchy in nature - at any instant on the average only a certain fraction of the observed pixels contain rain. The fraction of area covered by
Weighed scalar averaging in LTB dust models: part II. A formalism of exact perturbations
NASA Astrophysics Data System (ADS)
Sussman, Roberto A.
2013-03-01
We examine the exact perturbations that arise from the q-average formalism that was applied in the preceding article (part I) to Lemaître-Tolman-Bondi (LTB) models. By introducing an initial value parametrization, we show that all LTB scalars that take an FLRW ‘look-alike’ form (frequently used in the literature dealing with LTB models) follow as q-averages of covariant scalars that are common to FLRW models. These q-scalars determine for every averaging domain a unique FLRW background state through Darmois matching conditions at the domain boundary, though the definition of this background does not require an actual matching with an FLRW region (Swiss cheese-type models). Local perturbations describe the deviation from the FLRW background state through the local gradients of covariant scalars at the boundary of every comoving domain, while non-local perturbations do so in terms of the intuitive notion of a ‘contrast’ of local scalars with respect to FLRW reference values that emerge from q-averages assigned to the whole domain or the whole time slice in the asymptotic limit. We derive fluid flow evolution equations that completely determine the dynamics of the models in terms of the q-scalars and both types of perturbations. A rigorous formalism of exact spherical nonlinear perturbations is defined over the FLRW background state associated with the q-scalars, recovering the standard results of linear perturbation theory in the appropriate limit. We examine the notion of the amplitude and illustrate the differences between local and non-local perturbations by qualitative diagrams and through an example of a cosmic density void that follows from the numeric solution of the evolution equations.
Validation of numerical model for cook stove using Reynolds averaged Navier-Stokes based solver
NASA Astrophysics Data System (ADS)
Islam, Md. Moinul; Hasan, Md. Abdullah Al; Rahman, Md. Mominur; Rahaman, Md. Mashiur
2017-12-01
Biomass-fired cook stoves have, for many years, been the main cooking appliance for the rural people of developing countries. Several studies have been carried out to find efficient stoves. In the present study, a numerical model of an improved household cook stove is developed to analyze the heat transfer and flow behavior of gas during operation. The numerical model is validated against experimental results. Computation of the numerical model is executed using the non-premixed combustion model. The Reynolds-averaged Navier-Stokes (RANS) equations along with the κ-ɛ model govern the turbulent flow within the computed domain. The computational results are in good agreement with the experiment. The developed numerical model can be used to predict the effect of different biomasses on the efficiency of the cook stove.
Extra compressibility terms for Favre-averaged two-equation models of inhomogeneous turbulent flows
NASA Technical Reports Server (NTRS)
Rubesin, Morris W.
1990-01-01
Forms of extra-compressibility terms that result from the use of Favre averaging of the turbulence transport equations for kinetic energy and dissipation are derived. These forms introduce three new modeling constants: a polytropic coefficient that defines the interrelationships of the pressure, density, and enthalpy fluctuations, and two constants in the dissipation equation that account for the non-zero pressure-dilatation and mean pressure gradients.
Modeling of structural uncertainties in Reynolds-averaged Navier-Stokes closures
NASA Astrophysics Data System (ADS)
Emory, Michael; Larsson, Johan; Iaccarino, Gianluca
2013-11-01
Estimation of the uncertainty in numerical predictions by Reynolds-averaged Navier-Stokes closures is a vital step in building confidence in such predictions. An approach to model-form uncertainty quantification that does not assume the eddy-viscosity hypothesis to be exact is proposed. The methodology for estimation of uncertainty is demonstrated for plane channel flow, for a duct with secondary flows, and for the shock/boundary-layer interaction over a transonic bump.
On the tertiary instability formalism of zonal flows in magnetized plasmas
NASA Astrophysics Data System (ADS)
Rath, F.; Peeters, A. G.; Buchholz, R.; Grosshauser, S. R.; Seiferling, F.; Weikl, A.
2018-05-01
This paper investigates the so-called tertiary instabilities driven by the zonal flow in gyro-kinetic tokamak core turbulence. The Kelvin-Helmholtz instability is first considered within a 2D fluid model and a threshold in the zonal flow wave vector kZF > kZF,c for instability is found. This critical scale is related to the breaking of the rotational symmetry by flux surfaces, which is incorporated into the modified adiabatic electron response. The stability of undamped Rosenbluth-Hinton zonal flows is then investigated in gyro-kinetic simulations. Absolute instability, in the sense that the threshold zonal flow amplitude tends towards zero, is found above a zonal flow wave vector kZF,c ρi ≈ 1.3 (ρi is the ion thermal Larmor radius), which is comparable to the 2D fluid results. Large scale zonal flows with kZF
Zhao, Kaiguang; Valle, Denis; Popescu, Sorin
2013-05-15
Model specification remains challenging in spectroscopy of plant biochemistry, as exemplified by the availability of various spectral indices or band combinations for estimating the same biochemical. This lack of consensus in model choice across applications argues for a paradigm shift in hyperspectral methods to address model uncertainty and misspecification. We demonstrated one such method using Bayesian model averaging (BMA), which performs variable/band selection and quantifies the relative merits of many candidate models to synthesize a weighted average model with improved predictive performance. The utility of BMA was examined using a portfolio of 27 foliage spectral-chemical datasets representing over 80 species across the globe to estimate multiple biochemical properties, including nitrogen, hydrogen, carbon, cellulose, lignin, chlorophyll (a or b), carotenoid, polar and nonpolar extractives, leaf mass per area, and equivalent water thickness. We also compared BMA with partial least squares (PLS) and stepwise multiple regression (SMR). Results showed that all the biochemicals except carotenoid were accurately estimated from hyperspectral data, with R2 values > 0.80.
A Tidally Averaged Sediment-Transport Model for San Francisco Bay, California
Lionberger, Megan A.; Schoellhamer, David H.
2009-01-01
A tidally averaged sediment-transport model of San Francisco Bay was incorporated into a tidally averaged salinity box model previously developed and calibrated using salinity, a conservative tracer (Uncles and Peterson, 1995; Knowles, 1996). The Bay is represented in the model by 50 segments composed of two layers: one representing the channel (>5-meter depth) and the other the shallows (0- to 5-meter depth). Calculations are made using a daily time step, and simulations can be made on the decadal time scale. The sediment-transport model includes an erosion-deposition algorithm, a bed-sediment algorithm, and sediment boundary conditions. Erosion and deposition of bed sediments are calculated explicitly, and suspended sediment is transported by implicitly solving the advection-dispersion equation. The bed-sediment model simulates the increase in bed strength with depth, owing to consolidation of the fine sediments that make up San Francisco Bay mud. The model is calibrated to either net sedimentation calculated from bathymetric-change data or measured suspended-sediment concentration. Specified boundary conditions are the tributary fluxes of suspended sediment and the suspended-sediment concentration in the Pacific Ocean. Results of model calibration and validation show that the model simulates well the trends in suspended-sediment concentration associated with tidal fluctuations, residual velocity, and wind stress, although the spring-neap tidal suspended-sediment concentration variability was consistently underestimated. Model validation also showed poor simulation of seasonal sediment pulses from the Sacramento-San Joaquin River Delta at Point San Pablo because the pulses enter the Bay over only a few days and their fate is determined by intra-tidal deposition and resuspension that are not included in this tidally averaged model. The model was calibrated to net-basin sedimentation to calculate budgets of sediment and sediment-associated contaminants. While
Hui, Shisheng; Chen, Lizhang; Liu, Fuqiang; Ouyang, Yanhao
2015-12-01
To establish a multiple seasonal autoregressive integrated moving average (ARIMA) model of mumps incidence in Hunan province, and to predict the mumps incidence from May 2015 to April 2016 in Hunan province with the model. The data were downloaded from the "Disease Surveillance Information Reporting Management System" in the China Information System for Disease Control and Prevention. The monthly incidence of mumps in Hunan province was collected from January 2004 to April 2015 according to the onset date, including clinically diagnosed and laboratory-confirmed cases. The predictive analysis used the ARIMA model in SPSS 18.0: the model was established on the monthly incidence from January 2004 to April 2014, the data from May 2014 to April 2015 served as the testing sample, and the Box-Ljung Q test was used to test the residuals of the selected model. Finally, the monthly incidence of mumps from May 2015 to April 2016 was predicted by the model. During January 2004 to April 2014 in Hunan province, the peak months of mumps incidence were May to July each year, with a secondary peak from November to January of the following year. After the data sequence was made stationary and the model was identified, established, and diagnosed, ARIMA(2,1,1)×(0,1,1)12 was established. The Box-Ljung Q test gave Q=8.40, P=0.868: the residual sequence was white noise, the established model extracted the information in the data completely, and the model was reasonable. The R(2) value of the model fit was 0.871 and the BIC value was -1.646, while the average absolute error between predicted and actual values was 0.025/100 000 and the average relative error was 13.004%. The relative error of the model's predictions of mumps incidence in Hunan province was small, and the predictions were reliable. The ARIMA(2,1,1)×(0,1,1)12 model can be used to predict the mumps incidence from May 2015 to April 2016 in
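A minimal numpy sketch of the differencing-plus-autoregression core of a seasonal model like ARIMA(2,1,1)×(0,1,1)12. The MA terms and the SPSS fitting procedure of the study are omitted, and the series below is synthetic, not the Hunan incidence data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly incidence series with a mild trend and an annual
# cycle (a stand-in for the real data, which are not reproduced here).
t = np.arange(160)
y = 2.0 + 0.01 * t + np.sin(2 * np.pi * t / 12) + 0.1 * rng.normal(size=t.size)

def difference(x, lag=1):
    return x[lag:] - x[:-lag]

# The d=1, D=1, s=12 part of ARIMA(2,1,1)x(0,1,1)12: a first difference
# followed by a seasonal difference removes the trend and annual cycle.
z = difference(difference(y, 1), 12)

# Fit the AR(2) part by ordinary least squares; the MA(1) and seasonal
# MA(1) terms of the full model are omitted in this sketch.
X = np.column_stack([z[1:-1], z[:-2]])   # z_{t-1}, z_{t-2}
phi, *_ = np.linalg.lstsq(X, z[2:], rcond=None)

# One-step-ahead forecast on the differenced scale, then invert the
# differencing: z_t = (y_t - y_{t-1}) - (y_{t-12} - y_{t-13}).
z_next = phi[0] * z[-1] + phi[1] * z[-2]
y_next = z_next + y[-1] + y[-12] - y[-13]
print(round(float(y_next), 2))
```

In practice one would fit the full seasonal model with a dedicated package (e.g. statsmodels' SARIMAX) and check residual whiteness with a Ljung-Box-type test, as the study does.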
Estimating the Probability of Rare Events Occurring Using a Local Model Averaging.
Chen, Jin-Hua; Chen, Chun-Shu; Huang, Meng-Fan; Lin, Hung-Chih
2016-10-01
In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased and hence estimating the probability of rare events occurring based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback-Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed. © 2016 Society for Risk Analysis.
Dynamic Average-Value Modeling of Doubly-Fed Induction Generator Wind Energy Conversion Systems
NASA Astrophysics Data System (ADS)
Shahab, Azin
In a Doubly-fed Induction Generator (DFIG) wind energy conversion system, the rotor of a wound-rotor induction generator is connected to the grid via a partial-scale ac/ac power electronic converter, which controls the rotor frequency and speed. In this research, detailed models of the DFIG wind energy conversion system with Sinusoidal Pulse-Width Modulation (SPWM) and Optimal Pulse-Width Modulation (OPWM) schemes for the power electronic converter are developed in PSCAD/EMTDC. As simulation using the detailed models tends to be computationally intensive, time-consuming, and sometimes impractical in terms of speed, two modified approaches (switching-function modeling and average-value modeling) are proposed to reduce the simulation execution time. The results demonstrate that both proposed approaches reduce the simulation execution time while their results remain close to those obtained from the detailed model simulation.
A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction
NASA Astrophysics Data System (ADS)
Danandeh Mehr, Ali; Kahya, Ercan
2017-06-01
Genetic programming (GP) is able to systematically explore alternative model structures of differing accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives remains an open question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach, which take us from observed data to a validated model, are: (1) data pre-processing, (2) system identification, and (3) system simplification. The data pre-processing component uses a simple moving average filter to diminish the lagged-prediction effect of stand-alone data-driven models. The multigene component tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP, and the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using daily streamflow records from a station on Senoz Stream, Turkey. Compared with the efficiency of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model yields a parsimonious solution of noteworthy practical value. In addition, the approach allows the user to bring human insight into the problem, examine evolved models, and pick the best-performing programs for further analysis.
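The moving-average pre-processing step described above can be sketched in a few lines; the window length and the streamflow values below are illustrative, not those of the Senoz Stream study:

```python
import numpy as np

def moving_average(x, window=3):
    """Simple moving average filter of the kind used as the
    pre-processing step of the MA-MGGP approach: smoothing the input
    series reduces the lagged (naive-persistence) behavior of
    stand-alone data-driven predictors."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

# Toy daily streamflow series (hypothetical values, illustration only).
flow = np.array([5.0, 7.0, 6.0, 9.0, 12.0, 10.0, 8.0])
print(moving_average(flow, 3))
```

With mode="valid" the output is shorter than the input by window-1 samples; alignment of the smoothed series with the prediction target is a design choice left to the modeler.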
Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane
2017-07-12
The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale-up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section-averaged quantities. Successive grid refinement would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
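The Richardson extrapolation and grid convergence index (GCI) mentioned above follow standard three-grid formulas; here is a sketch with hypothetical solution values (not taken from the paper), using f(h) = 1 + 0.5 h² sampled on grids h = 0.1, 0.2, 0.4 so the true order is 2:

```python
import math

def richardson_gci(f1, f2, f3, r=2.0, Fs=1.25):
    """Observed order of accuracy, Richardson-extrapolated value, and
    fine-grid grid convergence index (GCI) from solutions on three
    grids: f1 (fine), f2 (medium), f3 (coarse), constant refinement
    ratio r, safety factor Fs."""
    p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)  # observed order
    f_exact = f1 + (f1 - f2) / (r**p - 1)                    # extrapolated value
    gci_fine = Fs * abs((f1 - f2) / f1) / (r**p - 1)         # uncertainty estimate
    return p, f_exact, gci_fine

# Hypothetical grid study: f(h) = 1 + 0.5*h^2 at h = 0.1, 0.2, 0.4.
p, f_ex, gci = richardson_gci(1.005, 1.02, 1.08, r=2.0)
print(round(p, 3), round(f_ex, 3), round(gci, 5))  # order ~2, extrapolates to ~1.0
```

As the abstract notes, this procedure assumes the grids are in the asymptotic range; on coarse industrial-scale grids the observed order p can come out far from the formal order, signaling that the extrapolation is unreliable.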
Phase averaging method for the modeling of the multiprobe and cutaneous cryosurgery
NASA Astrophysics Data System (ADS)
E Shilnikov, K.; Kudryashov, N. A.; Y Gaiur, I.
2017-12-01
In this paper we consider the problem of planning and optimizing cutaneous and multiprobe cryosurgery operations. An explicit scheme based on a finite volume approximation of the phase-averaged Pennes bioheat transfer model is applied. The flux relaxation method is used to improve the stability of the scheme. Skin tissue is treated as a strongly inhomogeneous medium. The computerized planning tool is tested on model cryotip-based and cutaneous cryosurgery problems. For cutaneous cryosurgery, mounting an additional freezing element is studied as an approach to optimizing the propagation of the cellular necrosis front.
Geomagnetic field model for the last 5 My: time-averaged field and secular variation
NASA Astrophysics Data System (ADS)
Hatakeyama, Tadahiro; Kono, Masaru
2002-11-01
The structure of the geomagnetic field has been studied using paleomagnetic direction data of the last 5 million years obtained from lava flows. The method we used is the nonlinear version, similar to the works of Gubbins and Kelly [Nature 365 (1993) 829], Johnson and Constable [Geophys. J. Int. 122 (1995) 488; Geophys. J. Int. 131 (1997) 643], and Kelly and Gubbins [Geophys. J. Int. 128 (1997) 315], but we determined the time-averaged field (TAF) and the paleosecular variation (PSV) simultaneously. As pointed out in our previous work [Earth Planet. Space 53 (2001) 31], the observed mean field directions are affected by the fluctuation of the field, as described by the PSV model. This effect is not excessively large, but cannot be neglected when considering the mean field. We propose that the new TAF+PSV model is a better representation of the ancient magnetic field, since both the average and the fluctuation of the field are consistently explained. In the inversion procedure, we used direction cosines instead of inclinations and declinations, as the latter quantities show singular or unstable behavior at high latitudes. The obtained model gives a reasonably good fit to the observed means and variances of direction cosines. In the TAF model, the geocentric axial dipole term (g10) is the dominant component; it is much more pronounced than in the present magnetic field. The equatorial dipole component is quite small after averaging over time. The model shows a very smooth spatial variation; the nondipole components also seem to be averaged out quite effectively over time. Among the other coefficients, the geocentric axial quadrupole term (g20) is significantly larger than the other components. On the other hand, the axial octupole term (g30) is much smaller than in a TAF model excluding the PSV effect. It is likely that the effect of PSV is most clearly seen in this term, which is consistent with the conclusion reached in our previous work. The PSV
Time series forecasting using ERNN and QR based on Bayesian model averaging
NASA Astrophysics Data System (ADS)
Pwasong, Augustine; Sathasivam, Saratha
2017-08-01
The Bayesian model averaging technique is a multi-model combination technique. It was employed here to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique, producing a hybrid known as the ERNN-QR technique. The forecasting potential of the hybrid technique is compared with the forecasting capabilities of the individual ERNN and QR techniques. The outcome reveals that the hybrid technique is superior to the individual techniques in the mean square error sense.
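One common way to form BMA-style combination weights from the hold-out residuals of the component models, via a BIC-like approximation; this is a simplified stand-in for the paper's procedure, and the residual values below are invented:

```python
import numpy as np

def bma_weights(errors_by_model):
    """BMA-style weights from per-model residuals using a BIC-like
    approximation (complexity penalty omitted): w_k is proportional to
    exp(-0.5 * n * log(mse_k)), so more accurate models get more weight."""
    errors = np.asarray(errors_by_model, dtype=float)
    n = errors.shape[1]
    mse = (errors ** 2).mean(axis=1)
    log_w = -0.5 * n * np.log(mse)
    log_w -= log_w.max()          # subtract max for numerical stability
    w = np.exp(log_w)
    return w / w.sum()

# Hypothetical hold-out residuals of two component models
# (stand-ins for ERNN and QR; not real results from the study).
resid_a = np.array([0.1, -0.2, 0.15, 0.05])
resid_b = np.array([0.4, -0.5, 0.45, 0.3])
w = bma_weights([resid_a, resid_b])
print(w[0] > w[1])  # the lower-MSE model dominates the combination
```

The combined forecast is then the weighted sum of the component forecasts, w[0]*f_a + w[1]*f_b, which is what makes the hybrid at least as good as its best member on the calibration data.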
PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.
Vecchia, A.V.
1985-01-01
Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter-space restrictions and difference equations for the periodic autocorrelations. A close approximation to the likelihood function for Gaussian PARMA processes yields efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are included sequentially, and a selection criterion is given for determining the optimal number of harmonics. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
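The periodic-parameter idea can be illustrated with the simplest PARMA member, a periodic AR(1) fitted month by month via least squares; the data are synthetic, and the full PARMA likelihood machinery of the paper is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a periodic AR(1): x_t = phi[m] * x_{t-1} + e_t, where the AR
# coefficient phi[m] varies with the calendar month m = t mod 12.
phi_true = 0.3 + 0.4 * np.sin(2 * np.pi * np.arange(12) / 12)
n_years = 200
x = np.zeros(12 * n_years)
for t in range(1, x.size):
    x[t] = phi_true[t % 12] * x[t - 1] + rng.normal()

# Estimate one AR coefficient per month: regress x_t on x_{t-1}
# separately for each month. This is the simplest instance of PARMA
# parameter estimation (no moving-average terms).
phi_hat = np.empty(12)
for m in range(12):
    t_idx = np.arange(m, x.size, 12)
    t_idx = t_idx[t_idx >= 1]     # month 0 of year 0 has no predecessor
    num = np.sum(x[t_idx] * x[t_idx - 1])
    den = np.sum(x[t_idx - 1] ** 2)
    phi_hat[m] = num / den
print(np.round(phi_hat, 2))
```

In the paper's approach one would additionally expand phi over a small number of Fourier harmonics of the season and select that number by an information criterion, rather than fitting 12 free coefficients.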
Statistical properties of Charney-Hasegawa-Mima zonal flows
Anderson, Johan, E-mail: anderson.johan@gmail.com; Botha, G. J. J.
2015-05-15
A theoretical interpretation of numerically generated probability density functions (PDFs) of intermittent plasma transport events in unforced zonal flows is provided within the Charney-Hasegawa-Mima (CHM) model. The governing equation is solved numerically with various prescribed density gradients that are designed to produce different configurations of parallel and anti-parallel streams. Long-lasting vortices form whose flow is governed by the zonal streams. It is found that the numerically generated PDFs can be matched with analytical predictions of PDFs based on the instanton method by removing the autocorrelations from the time series. In many instances, the statistics generated by the CHM dynamics relaxes to Gaussian distributions for both the electrostatic and vorticity perturbations, whereas in areas with strong nonlinear interactions it is found that the PDFs are exponentially distributed.
General analytic results on averaging Lemaître-Tolman-Bondi models
NASA Astrophysics Data System (ADS)
Sussman, Roberto A.
2010-12-01
An effective acceleration, which mimics the effect of dark energy, may arise in the context of Buchert's scalar averaging formalism. We examine the conditions for such an acceleration to occur in the asymptotic radial range in generic spherically symmetric Lemaître-Tolman-Bondi (LTB) dust models. By looking at the behavior of covariant scalars along space slices orthogonal to the 4-velocity, we show that this effective acceleration occurs in a class of models with negative spatial curvature that are asymptotically convergent to sections of Minkowski spacetime. As a consequence, the boundary conditions that favor LTB models with an effective acceleration are not a void inhomogeneity embedded in a homogeneous FLRW background (Swiss cheese models), but a local void or clump embedded in a large cosmic void region represented by asymptotically Minkowski conditions.
Sampaio, Luis Rafael L; Borges, Lucas T N; Silva, Joyse M F; de Andrade, Francisca Roselin O; Barbosa, Talita M; Oliveira, Tatiana Q; Macedo, Danielle; Lima, Ricardo F; Dantas, Leonardo P; Patrocinio, Manoel Cláudio A; do Vale, Otoni C; Vasconcelos, Silvânia M M
2018-02-01
The use of ketamine (Ket) as a pharmacological model of schizophrenia is an important tool for understanding the main mechanisms of glutamatergically regulated neural oscillations. Thus, the aim of the current study was to evaluate Ket-induced changes in average spectral power using hippocampal quantitative electroencephalography (QEEG). To this end, male Wistar rats underwent stereotactic surgery for the implantation of an electrode in the right hippocampus. After three days, the animals were divided into four groups that were treated for 10 consecutive days with Ket (10, 50, or 100 mg/kg). Brainwaves were captured on the 1st or 10th day, corresponding to acute or repeated treatment, respectively. The administration of Ket (10, 50, or 100 mg/kg), compared with controls, induced changes in the hippocampal average spectral power of delta, theta, alpha, and low or high gamma waves after acute or repeated treatments. Therefore, based on the alterations in the average spectral power of hippocampal waves induced by Ket, our findings might provide a basis for the use of hippocampal QEEG in animal models of schizophrenia. © 2017 Société Française de Pharmacologie et de Thérapeutique.
Gong, Qi; Schaubel, Douglas E
2017-03-01
Treatments are frequently evaluated in terms of their effect on patient survival. In settings where randomization of treatment is not feasible, observational data are employed, necessitating correction for covariate imbalances. Treatments are usually compared using a hazard ratio. Most existing methods that quantify the treatment effect through the survival function are applicable to treatments assigned at time 0. In the data structure of interest here, subjects typically begin follow-up untreated; time until treatment and the pretreatment death hazard are both heavily influenced by longitudinal covariates; and subjects may experience periods of treatment ineligibility. We propose semiparametric methods for estimating the average difference in restricted mean survival time attributable to a time-dependent treatment; that is, the average effect of treatment among the treated under current treatment assignment patterns. The pre- and posttreatment models are partly conditional, in that they use the covariate history up to the time of treatment. The pretreatment model is estimated through recently developed landmark analysis methods. For each treated patient, fitted pre- and posttreatment survival curves are projected out, then averaged in a manner which accounts for the censoring of treatment times. Asymptotic properties are derived and evaluated through simulation. The proposed methods are applied to liver transplant data in order to estimate the effect of liver transplantation on survival among transplant recipients under current practice patterns. © 2016, The International Biometric Society.
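The restricted mean survival time (RMST) contrast described above is, at its core, the area between two survival curves up to a horizon. A minimal numerical sketch, using hypothetical exponential survival curves rather than the paper's semiparametric landmark estimator:

```python
import numpy as np

def rmst(times, surv, tau):
    """Restricted mean survival time: area under the survival curve S(t) up to tau."""
    mask = times <= tau
    t = np.append(times[mask], tau)
    s = np.append(surv[mask], np.interp(tau, times, surv))
    # trapezoid rule, written out explicitly
    return float(np.sum((s[1:] + s[:-1]) * np.diff(t)) / 2.0)

# hypothetical post- and pre-treatment survival curves on a 10-year grid
t = np.linspace(0.0, 10.0, 101)
s_treated = np.exp(-0.05 * t)   # assumed hazard 0.05/yr after treatment
s_control = np.exp(-0.10 * t)   # assumed hazard 0.10/yr without treatment

# average gain in survival time over the 10-year horizon
delta = rmst(t, s_treated, 10.0) - rmst(t, s_control, 10.0)
```

For an exponential curve the RMST has the closed form (1 - e^(-lambda*tau))/lambda, which the trapezoid estimate recovers closely on this grid.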
Ensemble averaging and stacking of ARIMA and GSTAR model for rainfall forecasting
NASA Astrophysics Data System (ADS)
Anggraeni, D.; Kurnia, I. F.; Hadi, A. F.
2018-04-01
Unpredictable rainfall changes can affect human activities such as agriculture, aviation, and shipping, which depend on weather forecasts. Therefore, we need forecasting tools with high accuracy for predicting future rainfall. This research focuses on local forecasting of rainfall at Jember from 2005 until 2016, using data from 77 rainfall stations. Rainfall here is related not only to previous occurrences at each station but also to those at other stations; this is called the spatial effect. The aim of this research is to apply the GSTAR model to determine whether there are spatial correlations between stations. The GSTAR model is an expansion of the space-time model that combines time-related effects, time series effects across locations (stations), and the effect of the locations themselves. The GSTAR model is also compared to the ARIMA model, which completely ignores the spatial relationships. The forecast values of the ARIMA and GSTAR models are then combined using ensemble forecasting techniques. The averaging and stacking methods of ensemble forecasting provide the best model, with higher accuracy and a smaller RMSE (Root Mean Square Error) value. Finally, with the best model we can offer better local rainfall forecasting for Jember in the future.
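The two ensemble combinations mentioned can be sketched in a few lines. The forecasts and observations below are hypothetical, and stacking is shown here as simple least-squares weighting on a validation set, an assumed stand-in for the paper's exact scheme:

```python
import numpy as np

# hypothetical validation-period forecasts from the two models and observed rainfall (mm)
obs     = np.array([120., 80., 60., 150., 90., 110.])
f_arima = np.array([110., 90., 55., 140., 100., 105.])
f_gstar = np.array([130., 70., 70., 155., 85., 120.])

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

# ensemble averaging: equal weights for both members
f_avg = (f_arima + f_gstar) / 2.0

# stacking: least-squares weights learned from the validation set
X = np.column_stack([f_arima, f_gstar])
w, *_ = np.linalg.lstsq(X, obs, rcond=None)
f_stack = X @ w
```

Because the equal-weight average and each individual forecast are special cases of the linear combination that stacking optimizes, the stacked RMSE on the training set can never exceed theirs.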
Simulation of Two-Phase Flow Based on a Thermodynamically Constrained Averaging Theory Flow Model
NASA Astrophysics Data System (ADS)
Weigand, T. M.; Dye, A. L.; McClure, J. E.; Farthing, M. W.; Gray, W. G.; Miller, C. T.
2014-12-01
The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for two-fluid-phase flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as interfacial areas, contact angles, interfacial tension, and curvatures; and dynamics of interface movement and relaxation to an equilibrium state. In order to render the TCAT model solvable, certain closure relations are needed to relate fluid pressure, interfacial areas, curvatures, and relaxation rates. In this work, we formulate and solve a TCAT-based two-fluid-phase flow model. We detail the formulation of the model, which is a specific instance from a hierarchy of two-fluid-phase flow models that emerge from the theory. We show the closure problem that must be solved. Using recent results from high-resolution microscale simulations, we advance a set of closure relations that produce a closed model. Lastly, we use locally conservative spatial discretization and higher order temporal discretization methods to approximate the solution to this new model and compare the solution to the traditional model.
NASA Astrophysics Data System (ADS)
Fijani, E.; Chitsazan, N.; Nadiri, A.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
Artificial Neural Networks (ANNs) have been widely used to estimate concentrations of chemicals in groundwater systems. However, estimation uncertainty is rarely discussed in the literature. Uncertainty in ANN output stems from three sources: ANN inputs, ANN parameters (weights and biases), and ANN structures. Uncertainty in ANN inputs may come from input data selection and/or input data error. ANN parameters are inherently uncertain because they are maximum-likelihood estimated. ANN structure is also uncertain because there is no unique ANN model for a given case. Therefore, multiple plausible ANN models generally result for a study. One might ask why good models have to be ignored in favor of the best model in traditional estimation. What is the ANN estimation variance? How do the variances from different ANN models accumulate into the total estimation variance? To answer these questions we propose a Hierarchical Bayesian Model Averaging (HBMA) framework. Instead of choosing one ANN model (the best ANN model) for estimation, HBMA averages the outputs of all plausible ANN models, with model weights based on the evidence of the data. HBMA therefore avoids overconfidence in the single best ANN model. In addition, HBMA is able to analyze uncertainty propagation through aggregation of ANN models in a hierarchical framework. The method is applied to the estimation of fluoride concentration in the Poldasht plain and the Bazargan plain in Iran, where unusually high fluoride concentrations have had negative effects on public health. Management of this anomaly requires estimation of the fluoride concentration distribution in the area. The results show that the HBMA provides a knowledge-decision-based framework that facilitates analyzing and quantifying ANN estimation uncertainties from different sources. In addition, HBMA allows comparative evaluation of the realizations for each source of uncertainty by segregating the uncertainty sources in
NASA Astrophysics Data System (ADS)
Weigand, T. M.; Miller, C. T.; Dye, A. L.; Gray, W. G.; McClure, J. E.; Rybak, I.
2015-12-01
The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for two-fluid-phase flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as interfacial areas, contact angles, interfacial tension, and curvatures; and dynamics of interface movement and relaxation to an equilibrium state. In order to render the TCAT model solvable, certain closure relations are needed to relate fluid pressure, interfacial areas, curvatures, and relaxation rates. In this work, we formulate and solve a TCAT-based two-fluid-phase flow model. We detail the formulation of the model, which is a specific instance from a hierarchy of two-fluid-phase flow models that emerge from the theory. We show the closure problem that must be solved. Using recent results from high-resolution microscale simulations, we advance a set of closure relations that produce a closed model. Lastly, we solve the model using a locally conservative numerical scheme and compare the TCAT model to the traditional model.
Zhang, Xujun; Pang, Yuanyuan; Cui, Mengjing; Stallones, Lorann; Xiang, Huiyun
2015-02-01
Road traffic injuries have become a major public health problem in China. This study aimed to develop statistical models for predicting road traffic deaths and to analyze the seasonality of deaths in China. A seasonal autoregressive integrated moving average (SARIMA) model was used to fit the data from 2000 to 2011. The Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were used to evaluate the constructed models. The autocorrelation function and partial autocorrelation function of residuals and the Ljung-Box test were used to compare the goodness-of-fit between the different models. The SARIMA model was used to forecast monthly road traffic deaths in 2012. The seasonal pattern of the road traffic mortality data was statistically significant in China. The SARIMA (1, 1, 1)(0, 1, 1)₁₂ model was the best fitting model among the various candidates; its Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were -483.679, -475.053, and 4.937, respectively. Goodness-of-fit testing showed no autocorrelation in the residuals of the model (Ljung-Box test, Q = 4.86, P = .993). The fitted deaths using the SARIMA (1, 1, 1)(0, 1, 1)₁₂ model for years 2000 to 2011 closely followed the observed number of road traffic deaths for the same years. The predicted and observed deaths were also very close for 2012. This study suggests that accurate forecasting of road traffic death incidence is possible using the SARIMA model. The SARIMA model applied to historical road traffic death data could provide important evidence of the burden of road traffic injuries in China. Copyright © 2015 Elsevier Inc. All rights reserved.
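The seasonal part of a SARIMA (1, 1, 1)(0, 1, 1)_12 model rests on one regular and one seasonal difference that together remove trend and annual cycle before the ARMA terms are fit. A minimal sketch of that differencing step, applied to a synthetic monthly series (not the Chinese mortality data):

```python
import numpy as np

def seasonal_difference(y, d=1, D=1, s=12):
    """Apply the (1 - B)^d (1 - B^s)^D differencing used by a
    SARIMA (p, d, q)(P, D, Q)_s model, where B is the backshift operator."""
    for _ in range(D):
        y = y[s:] - y[:-s]     # seasonal difference (1 - B^s)
    for _ in range(d):
        y = y[1:] - y[:-1]     # regular difference (1 - B)
    return y

# hypothetical monthly deaths: linear trend plus an annual cycle
months = np.arange(144)
y = 500.0 + 2.0 * months + 50.0 * np.sin(2 * np.pi * months / 12)

# the stationary residual that the ARMA part of the model is then fit to
z = seasonal_difference(y)
```

On this purely trend-plus-seasonal series the combined differencing cancels everything, illustrating why the ARMA terms only need to model what remains.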
Reynolds averaged turbulence modelling using deep neural networks with embedded invariance
Ling, Julia; Kurzawski, Andrew; Templeton, Jeremy
2016-10-18
There exists significant demand for improved Reynolds-averaged Navier–Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. Furthermore, the Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.
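The invariance-embedding idea is to expand the anisotropy tensor in an integrity basis built from the mean strain and rotation tensors, so that any linear combination is symmetric, traceless, and Galilean invariant by construction. A sketch of the first two basis tensors; the velocity gradient is random and the coefficients are arbitrary stand-ins for the network's learned outputs:

```python
import numpy as np

rng = np.random.default_rng(1)

# an assumed mean velocity gradient, split into strain S and rotation R
grad_u = rng.standard_normal((3, 3))
S = 0.5 * (grad_u + grad_u.T)
S -= np.eye(3) * np.trace(S) / 3.0     # deviatoric (traceless) part
R = 0.5 * (grad_u - grad_u.T)

# first two tensors of the integrity basis
T1 = S
T2 = S @ R - R @ S

# the multiplicative output layer: b = sum_n g_n * T_n, with g_n here
# chosen arbitrarily in place of the network's invariant-dependent outputs
g = [0.3, -0.1]
b = g[0] * T1 + g[1] * T2
```

Whatever values the coefficients take, the predicted anisotropy inherits symmetry and zero trace from the basis tensors, which is the structural guarantee the architecture provides.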
DOT National Transportation Integrated Search
1997-04-18
Section 32902(a) of title 49, United States Code, requires the Secretary of Transportation to prescribe by regulation, at least 18 months in advance of each model year, average fuel economy standards (known as "Corporate Average Fuel Economy" or "CAF...
A Bayesian model averaging method for the derivation of reservoir operating rules
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Liu, Pan; Wang, Hao; Lei, Xiaohui; Zhou, Yanlai
2015-09-01
Because the intrinsic dynamics among optimal decision making, inflow processes, and reservoir characteristics are complex, the functional forms of reservoir operating rules are always determined subjectively. As a result, the uncertainty involved in selecting the form and/or model of reservoir operating rules must be analyzed and evaluated. In this study, we analyze the uncertainty of reservoir operating rules using the Bayesian model averaging (BMA) model. Three popular operating rules, namely piecewise linear regression, surface fitting, and a least-squares support vector machine, are established based on the optimal deterministic reservoir operation. These individual models provide three-member decisions for the BMA combination, enabling the 90% release interval to be estimated by Markov chain Monte Carlo simulation. A case study of China's Baise reservoir shows that: (1) the optimal deterministic reservoir operation, superior to any reservoir operating rules, is used as the sample set from which to derive the rules; (2) the least-squares support vector machine model is more effective than both piecewise linear regression and surface fitting; (3) BMA outperforms any individual model of operating rules based on the optimal trajectories. It is revealed that the proposed model can reduce the uncertainty of operating rules, which is of great potential benefit in evaluating the confidence interval of decisions.
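The three-member BMA combination can be sketched as a likelihood-weighted average of the rules' release decisions. Everything below is hypothetical: the releases are invented, and the Gaussian-likelihood weighting is an assumed simplification of the paper's actual weight estimation:

```python
import numpy as np

# hypothetical release decisions (e.g. m^3/s) from three rule models,
# plus the optimal deterministic releases used as the reference
optimal = np.array([50., 62., 58., 70., 66.])
preds = {
    "piecewise": np.array([48., 60., 59., 73., 64.]),
    "surface":   np.array([55., 58., 54., 68., 70.]),
    "ls-svm":    np.array([50., 63., 57., 71., 66.]),
}

sigma = 2.0  # assumed error scale for the Gaussian likelihood

# posterior weight of each member, proportional to its likelihood
logL = {k: -0.5 * np.sum(((p - optimal) / sigma) ** 2) for k, p in preds.items()}
m = max(logL.values())
unnorm = {k: np.exp(v - m) for k, v in logL.items()}   # shift for stability
Z = sum(unnorm.values())
weights = {k: v / Z for k, v in unnorm.items()}

# BMA combined release: weighted average of the three members
bma = sum(weights[k] * preds[k] for k in preds)
```

The member that tracks the optimal trajectory most closely (here the ls-svm stand-in) automatically receives the largest weight, which mirrors the paper's finding that BMA leans on the best individual rule while still borrowing from the others.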
Hossain, Ahmed; Beyene, Joseph
2014-01-01
This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures as the outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate of the methods at identifying the known single-nucleotide polymorphism, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
Nonstationary Gravity Wave Forcing of the Stratospheric Zonal Mean Wind
NASA Technical Reports Server (NTRS)
Alexander, M. J.; Rosenlof, K. H.
1996-01-01
The role of gravity wave forcing in the zonal mean circulation of the stratosphere is discussed. Starting from some very simple assumptions about the momentum flux spectrum of nonstationary (non-zero phase speed) waves at forcing levels in the troposphere, a linear model is used to calculate wave propagation through climatological zonal mean winds at solstice seasons. As the wave amplitudes exceed their stable limits, a saturation criterion is imposed to account for nonlinear wave breakdown effects, and the resulting vertical gradient in the wave momentum flux is then used to estimate the mean flow forcing per unit mass. Evidence from global, assimilated data sets are used to constrain these forcing estimates. The results suggest the gravity-wave-driven force is accelerative (has the same sign as the mean wind) throughout most of the stratosphere above 20 km. The sense of the gravity wave forcing in the stratosphere is thus opposite to that in the mesosphere, where gravity wave drag is widely believed to play a principal role in decelerating the mesospheric jets. The forcing estimates are further compared to existing gravity wave parameterizations for the same climatological zonal mean conditions. Substantial disagreement is evident in the stratosphere, and we discuss the reasons for the disagreement. The results suggest limits on typical gravity wave amplitudes near source levels in the troposphere at solstice seasons. The gravity wave forcing in the stratosphere appears to have a substantial effect on lower stratospheric temperatures during southern hemisphere summer and thus may be relevant to climate.
NASA Astrophysics Data System (ADS)
Olson, R.; An, S. I.
2016-12-01
Atlantic Meridional Overturning Circulation (AMOC) in the ocean might slow down in the future, which could lead to a host of climatic effects in the North Atlantic and throughout the world. Despite improvements in climate models and the availability of new observations, AMOC projections remain uncertain. Here we constrain CMIP5 multi-model ensemble output with observations of a recently developed AMOC index to provide improved Bayesian predictions of future AMOC. Specifically, we first calculate a yearly AMOC index loosely based on Rahmstorf et al. (2015) for the years 1880-2004 for both the observations and the CMIP5 models for which relevant output is available. We then assign a weight to each model based on a Bayesian Model Averaging method that accounts for differential model skill in terms of both mean state and variability. We include the temporal autocorrelation in climate model errors and account for the uncertainty in the parameters of our statistical model. We use the weights to provide weighted projections of future AMOC and compare them to unweighted ones. Our projections use bootstrapping to account for uncertainty in internal AMOC variability. We also perform spectral and other statistical analyses to show that AMOC index variability, both in models and in observations, is consistent with red noise. Our results improve on and complement previous work by using a new ensemble of climate models, a different observational metric, and an improved Bayesian weighting method that accounts for differential model skill at reproducing internal variability. Reference: Rahmstorf, S., Box, J. E., Feulner, G., Mann, M. E., Robinson, A., Rutherford, S., & Schaffernicht, E. J. (2015). Exceptional twentieth-century slowdown in Atlantic Ocean overturning circulation. Nature Climate Change, 5(5), 475-480. doi:10.1038/nclimate2554
Contribution of zonal harmonics to gravitational moment
NASA Technical Reports Server (NTRS)
Roithmayr, Carlos M.
1991-01-01
It is presently demonstrated that a recursive vector-dyadic expression for the contribution of a zonal harmonic of degree n to the gravitational moment about a small body's center-of-mass is obtainable with a procedure that involves twice differentiating a celestial body's gravitational potential with respect to a vector. The recursive property proceeds from taking advantage of a recursion relation for Legendre polynomials which appear in the gravitational potential. The contribution of the zonal harmonic of degree 2 is consistent with the gravitational moment exerted by an oblate spheroid.
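The recursion relation for Legendre polynomials that underlies the recursive zonal-harmonic expression is Bonnet's recursion, (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x). A minimal sketch:

```python
def legendre(n, x):
    """Legendre polynomial P_n(x) via Bonnet's recursion
    (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x)."""
    p_prev, p = 1.0, x          # P_0 and P_1
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

# P_2(x) = (3x^2 - 1)/2 is the polynomial entering the degree-2
# (oblateness) term, consistent with the oblate-spheroid moment
```

Each higher-degree polynomial is obtained from the two below it, which is exactly the property that lets the contribution of the degree-n zonal harmonic be built recursively.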
Model Uncertainty and Bayesian Model Averaged Benchmark Dose Estimation for Continuous Data
The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approa...
NASA Astrophysics Data System (ADS)
Inatsu, Masaru; Mukougawa, Hitoshi; Xie, Shang-Ping
2003-10-01
Midwinter storm track response to zonal variations in midlatitude sea surface temperatures (SSTs) has been investigated using an atmospheric general circulation model under aquaplanet and perpetual-January conditions. Zonal wavenumber-1 SST variations with a meridionally confined structure are placed at various latitudes. Having these SST variations centered at 30°N leads to a zonally localized storm track, while the storm track becomes nearly zonally uniform when the same SST forcing is moved farther north to 40° and 50°N. Large (small) baroclinic energy conversion north of the warm (cold) SST anomaly near the axis of the storm track (near 40°N) is responsible for the large (small) storm growth. The equatorward transfer of eddy kinetic energy by the ageostrophic motion and the mechanical damping are important in diminishing the storm track activity in the zonal direction. Significant stationary eddies form in the upper troposphere, with a ridge (trough) northeast of the warm (cold) SST anomaly at 30°N. Heat and vorticity budget analyses indicate that zonally localized condensational heating in the storm track is the major cause of these stationary eddies, which in turn exert a positive feedback to maintain the localized storm track by strengthening the vertical shear near the surface. These results indicate an active role of synoptic eddies in inducing a deep, tropospheric-scale response to midlatitude SST variations. Finally, the application of the model results to the real atmosphere is discussed.
NASA Astrophysics Data System (ADS)
Die Moran, Andres; El kadi Abderrezzak, Kamal; Tassi, Pablo; Herouvet, Jean-Michel
2014-05-01
Bank erosion is a key process that may cause a large number of economic and environmental problems (e.g. land loss, damage to structures and aquatic habitat). Stream bank erosion (toe erosion and mass failure) represents an important form of channel morphology change and a significant source of sediment. With the advances made in computational techniques, two-dimensional (2-D) numerical models have become valuable tools for investigating flow and sediment transport in open channels at large temporal and spatial scales. However, the implementation of the mass failure process in 2-D numerical models is still a challenging task. In this paper, a simple, innovative algorithm is implemented in the Telemac-Mascaret modeling platform to handle bank failure: failure occurs when the actual slope of a given bed element exceeds the internal friction angle. The unstable bed elements are rotated around an appropriate axis, ensuring mass conservation. Mass failure of a bank due to slope instability is applied at the end of each sediment transport evolution iteration, once the bed evolution due to bed load (and/or suspended load) has been computed, but before the global sediment mass balance is verified. This bank failure algorithm is successfully tested using two laboratory experimental cases. Then, bank failure in a 1:40 scale physical model of the Rhine River composed of non-uniform material is simulated. The main features of the bank erosion and failure are correctly reproduced in the numerical simulations, namely the mass wasting at the bank toe, followed by failure at the bank head, and subsequent transport of the mobilised material in an aggradation front. The volumes of eroded material obtained are of the same order of magnitude as the volumes measured during the laboratory tests.
Zonal subdivision of marine sequences: achievements and discrepancies
NASA Astrophysics Data System (ADS)
Gladenkov, Yuri
2010-05-01
It was 150 years ago that the notion of a zone was introduced into stratigraphy. By the present time, zonal units with an average duration of 0.3-3.0 M.y. have been established for virtually all systems and stages of the Phanerozoic, and their number has reached 300. It is no accident that zonal stratigraphy is considered one of the most significant achievements of modern geology. There are different interpretations of the essence and goals of zonal stratigraphy, of techniques for the separation of zones, and of the evaluation of zones as stratigraphic units. In particular, this is reflected in the International Stratigraphic Guide (Murphy, Salvador, 1999), the Russian Stratigraphic Code (Zhamoida, 2006), and a number of stratigraphic reports of recent years. It concerns different approaches to: (a) establishment of different types of zones (biostratigraphic zones and chronozones, oppel-zones and biohorizons, etc.); (b) assessment of the spatial distribution of zones (global or provincial) and the role of sedimentological factors; (c) definition of zones as stratigraphic units (relationships with geostratigraphic units of the standard and regional scales). The latest publications show that, because of the different interpretations of zones, authors should explain their usage of a certain type of zone (for example, when they use the terms "interval-zone" or "assemblage-zone", what limitations stem from application of datum levels, and so on). It is common opinion that the biostratigraphic zones used widely by paleontologists and stratigraphers cannot be the final goal of stratigraphy, although they provide a basis for the solution of many important problems (definition of certain stratigraphic levels, correlation of different biofacies, and others). At the same time, the most important stratigraphic units are chronozones, which correspond to stages or phases of the geological evolution of basins and are marked by distinct fossil assemblages and other properties (magnetic and other characteristics) in the type sections
Statistical Model Analysis of (n,p) Cross Sections and Average Energy For Fission Neutron Spectrum
Odsuren, M.; Khuukhenkhuu, G.
2011-06-28
Investigation of charged-particle emission reaction cross sections for fast neutrons is important to both nuclear reactor technology and the understanding of nuclear reaction mechanisms. In particular, the study of (n,p) cross sections is necessary to estimate radiation damage due to hydrogen production, nuclear heating, and transmutations in the structural materials of fission and fusion reactors. On the other hand, it is often necessary in practice to evaluate the neutron cross sections of nuclides for which no experimental data are available. Because of this, we carried out a systematic analysis of known experimental (n,p) and (n,α) cross sections for fast neutrons and observed a systematic regularity over the wide energy interval of 6-20 MeV and for a broad mass range of target nuclei. To explain this effect using the compound, pre-equilibrium, and direct reaction mechanisms, some formulae were deduced. In this paper, known experimental (n,p) cross sections averaged over the thermal fission neutron spectrum of U-235 are analyzed in the framework of the statistical model. It was shown that the experimental data are satisfactorily described by the statistical model. Also, in the case of (n,p) cross sections, the effective average neutron energy for the fission spectrum of U-235 was found to be around 3 MeV.
A modeling study of the time-averaged electric currents in the vicinity of isolated thunderstorms
NASA Technical Reports Server (NTRS)
Driscoll, Kevin T.; Blakeslee, Richard J.; Baginski, Michael E.
1992-01-01
A thorough examination of the results of a time-dependent computer model of a dipole thunderstorm revealed that there are numerous similarities between the time-averaged electrical properties and the steady-state properties of an active thunderstorm. Thus, the electrical behavior of the atmosphere in the vicinity of a thunderstorm can be determined with a formulation similar to what was first described by Holzer and Saxon (1952). From the Maxwell continuity equation of electric current, a simple analytical equation was derived that expresses a thunderstorm's average current contribution to the global electric circuit in terms of the generator current within the thundercloud, the intracloud lightning current, the cloud-to-ground lightning current, the altitudes of the charge centers, and the conductivity profile of the atmosphere. This equation was found to be nearly as accurate as the more computationally expensive numerical model, even when it is applied to a thunderstorm with a reduced conductivity thundercloud, a time-varying generator current, a varying flash rate, and a changing lightning mix.
Lagrangian-averaged model for magnetohydrodynamic turbulence and the absence of bottlenecks.
Pietarila Graham, Jonathan; Mininni, Pablo D; Pouquet, Annick
2009-07-01
We demonstrate that, for the case of quasiequipartition between the velocity and the magnetic field, the Lagrangian-averaged magnetohydrodynamics (LAMHD) alpha model reproduces well both the large-scale and the small-scale properties of turbulent flows; in particular, it displays no increased (superfilter) bottleneck effect with its ensuing enhanced energy spectrum at the onset of the subfilter scales. This is in contrast to the case of the neutral fluid, in which the Lagrangian-averaged Navier-Stokes alpha model is somewhat limited in its applications because of the formation of spatial regions with no internal degrees of freedom and subsequent contamination of superfilter-scale spectral properties. We argue that, as the Lorentz force breaks the conservation of circulation and enables spectrally nonlocal energy transfer (associated with Alfvén waves), it is responsible for the absence of a viscous bottleneck in magnetohydrodynamics (MHD), as compared to the fluid case. As LAMHD preserves Alfvén waves and the circulation properties of MHD, there is also no (superfilter) bottleneck found in LAMHD, making this method capable of large reductions in required numerical degrees of freedom; specifically, we find a reduction factor of approximately 200 when compared to a direct numerical simulation on a large grid of 1536^3 points at the same Reynolds number.
Analytical network-averaging of the tube model: Strain-induced crystallization in natural rubber
NASA Astrophysics Data System (ADS)
Khiêm, Vu Ngoc; Itskov, Mikhail
2018-07-01
In this contribution, we extend the analytical network-averaging concept (Khiêm and Itskov, 2016) to phase transition during strain-induced crystallization of natural rubber. To this end, a physically-based constitutive model describing the nonisothermal strain-induced crystallization is proposed. Accordingly, the spatial arrangement of polymer subnetworks is driven by crystallization nucleation and consequently alters the mesoscopic deformation measures. The crystallization growth is elucidated by diffusion of chain segments into crystal nuclei. The crystallization results in a change of temperature and an evolution of heat source. By this means, not only the crystallization kinetics but also the Gough-Joule effect are thoroughly described. The predictive capability of the constitutive model is illustrated by comparison with experimental data for natural rubbers undergoing strain-induced crystallization. All measurable values such as stress, crystallinity and heat source are utilized for the comparison.
NASA Astrophysics Data System (ADS)
Schöniger, Anneli; Wöhling, Thomas; Nowak, Wolfgang
2014-05-01
Bayesian model averaging ranks the predictive capabilities of alternative conceptual models based on Bayes' theorem. The individual models are weighted with their posterior probability to be the best one in the considered set of models. Finally, their predictions are combined into a robust weighted average and the predictive uncertainty can be quantified. This rigorous procedure does, however, not yet account for possible instabilities due to measurement noise in the calibration data set. This is a major drawback, since posterior model weights may suffer a lack of robustness related to the uncertainty in noisy data, which may compromise the reliability of model ranking. We present a new statistical concept to account for measurement noise as source of uncertainty for the weights in Bayesian model averaging. Our suggested upgrade reflects the limited information content of data for the purpose of model selection. It allows us to assess the significance of the determined posterior model weights, the confidence in model selection, and the accuracy of the quantified predictive uncertainty. Our approach rests on a brute-force Monte Carlo framework. We determine the robustness of model weights against measurement noise by repeatedly perturbing the observed data with random realizations of measurement error. Then, we analyze the induced variability in posterior model weights and introduce this "weighting variance" as an additional term into the overall prediction uncertainty analysis scheme. We further determine the theoretical upper limit in performance of the model set which is imposed by measurement noise. As an extension to the merely relative model ranking, this analysis provides a measure of absolute model performance. To finally decide, whether better data or longer time series are needed to ensure a robust basis for model selection, we resample the measurement time series and assess the convergence of model weights for increasing time series length. We illustrate
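The perturbation idea described above can be sketched in a few lines. The following Python toy (not the authors' code; the models, data, and noise level are invented) computes posterior model weights from Gaussian likelihoods, then measures how much those weights wander when the calibration data are repeatedly perturbed with fresh noise realizations, i.e. the "weighting variance":

```python
import math
import random

def bma_weights(y_obs, preds, sigma):
    """Posterior model weights from Gaussian likelihoods (equal model priors)."""
    log_l = []
    for yhat in preds:
        rss = sum((yo - yp) ** 2 for yo, yp in zip(y_obs, yhat))
        log_l.append(-0.5 * rss / sigma ** 2)
    top = max(log_l)                      # subtract max for numerical stability
    w = [math.exp(l - top) for l in log_l]
    s = sum(w)
    return [wi / s for wi in w]

def weighting_variance(y_obs, preds, sigma, n_rep=500, seed=0):
    """Perturb the data with repeated noise realizations and return the
    mean and standard deviation of each model's weight across replicates."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_rep):
        y_pert = [yo + rng.gauss(0.0, sigma) for yo in y_obs]
        samples.append(bma_weights(y_pert, preds, sigma))
    k = len(preds)
    means = [sum(s[j] for s in samples) / n_rep for j in range(k)]
    stds = [math.sqrt(sum((s[j] - means[j]) ** 2 for s in samples) / n_rep)
            for j in range(k)]
    return means, stds

# Toy setup: the truth is quadratic; model A is linear, model B is correct.
xs = [i / 10 for i in range(21)]
truth = [1.0 + 0.5 * x + 0.8 * x * x for x in xs]
sigma = 0.3
rng = random.Random(42)
y_obs = [t + rng.gauss(0.0, sigma) for t in truth]
pred_a = [1.0 + 1.3 * x for x in xs]    # hypothetical competing linear model
pred_b = truth                          # the correct model's predictions
w = bma_weights(y_obs, [pred_a, pred_b], sigma)
means, stds = weighting_variance(y_obs, [pred_a, pred_b], sigma)
```

A small std for a weight indicates the ranking is robust to measurement noise; a large one signals that the data's information content is too low to discriminate between the models.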
Posada, David; Buckley, Thomas R
2004-10-01
Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical points of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
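As a toy illustration of the AIC-based averaging mentioned above, here is a short Python sketch (not from the paper; the AIC scores and per-model estimates are made up) that turns AIC scores into Akaike weights and combines the models' estimates of a shared parameter into a model-averaged estimate:

```python
import math

def akaike_weights(aics):
    """Akaike weights: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    where delta_i = AIC_i - min(AIC)."""
    best = min(aics)
    rel = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

def model_averaged(estimates, aics):
    """Multimodel point estimate of a parameter shared across models."""
    return sum(w * e for w, e in zip(akaike_weights(aics), estimates))

# Hypothetical AIC scores for three substitution models, and the
# estimate of some parameter of interest under each model.
aics = [1002.3, 1000.0, 1005.9]
w = akaike_weights(aics)                 # the second model gets the largest weight
avg = model_averaged([0.21, 0.18, 0.25], aics)
```

The weights sum to one, so the averaged estimate always lies within the range of the per-model estimates, shrunk toward the best-supported model.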
A Discrete-Time Average Model Based Predictive Control for Quasi-Z-Source Inverter
Liu, Yushan; Abu-Rub, Haitham; Xue, Yaosuo; ...
2017-12-25
A discrete-time average model-based predictive control (DTA-MPC) is proposed for a quasi-Z-source inverter (qZSI). As a single-stage inverter topology, the qZSI regulates the dc-link voltage and the ac output voltage through the shoot-through (ST) duty cycle and the modulation index. Several feedback strategies have been dedicated to produce these two control variables, among which the most popular are the proportional–integral (PI)-based control and the conventional model-predictive control (MPC). However, in the former, there are tradeoffs between fast response and stability; the latter is robust, but at the cost of high calculation burden and variable switching frequency. Moreover, they require an elaborate design or fine tuning of controller parameters. The proposed DTA-MPC predicts future behaviors of the ST duty cycle and modulation signals, based on the established discrete-time average model of the quasi-Z-source (qZS) inductor current, the qZS capacitor voltage, and load currents. The prediction actions are applied to the qZSI modulator in the next sampling instant, without the need to design additional controller parameters. A constant switching frequency and significantly reduced computations are achieved with high performance. Transient responses and steady-state accuracy of the qZSI system under the proposed DTA-MPC are investigated and compared with the PI-based control and the conventional MPC. Simulation and experimental results verify the effectiveness of the proposed approach for the qZSI.
NASA Astrophysics Data System (ADS)
Vassiliev, Oleg N.; Kry, Stephen F.; Grosshans, David R.; Mohan, Radhe
2018-03-01
This study concerns calculation of the average electronic stopping power for photon and electron sources. It addresses two problems that have not yet been fully resolved. The first is defining the electron spectrum used for averaging in a way that is most suitable for radiobiological modeling. We define it as the spectrum of electrons entering the radiation-sensitive volume (SV) within the cell nucleus, at the moment they enter the SV. For this spectrum we derive a formula that linearly combines the fluence spectrum and the source spectrum. The latter is the distribution of initial energies of electrons produced by a source. Previous studies used either the fluence or the source spectrum, but not both, thereby neglecting a part of the complete spectrum. Our derived formula reduces to these two prior methods in the cases of high- and low-energy sources, respectively. The second problem is extending electron spectra to low energies. Previous studies used an energy cut-off on the order of 1 keV. However, as we show, even for high-energy sources, such as 60Co, electrons with energies below 1 keV contribute about 30% to the dose. In this study all the spectra were calculated with the Geant4-DNA code and a cut-off energy of only 11 eV. We present formulas for calculating frequency- and dose-average stopping powers, numerical results for several important electron and photon sources, and tables with all the data needed to use our formulas for arbitrary electron and photon sources producing electrons with initial energies up to ∼1 MeV.
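The two kinds of averages named above have simple discrete forms. The sketch below (Python, with an invented fluence shape and an S ∝ 1/E stopping-power curve; these are illustrative assumptions, not the paper's Geant4-DNA data) uses one common convention: the frequency average weights each electron equally, while the dose average weights each electron by the dose it deposits, i.e. by φ(E)S(E):

```python
def frequency_average(fluence, stopping):
    """Frequency-average stopping power on a uniform energy grid:
    sum(phi * S) / sum(phi)."""
    return (sum(f * s for f, s in zip(fluence, stopping))
            / sum(fluence))

def dose_average(fluence, stopping):
    """Dose-average stopping power: sum(phi * S^2) / sum(phi * S),
    i.e. each electron is weighted by the dose it deposits."""
    num = sum(f * s * s for f, s in zip(fluence, stopping))
    den = sum(f * s for f, s in zip(fluence, stopping))
    return num / den

# Invented spectrum: stopping power rises steeply toward low energy.
energies = [0.001 * (i + 1) for i in range(1000)]   # MeV grid up to 1 MeV
phi = [e ** 0.5 for e in energies]                  # hypothetical fluence shape
s_of_e = [0.02 / e for e in energies]               # hypothetical S(E)
s_freq = frequency_average(phi, s_of_e)
s_dose = dose_average(phi, s_of_e)
```

By the Cauchy-Schwarz inequality the dose average is never smaller than the frequency average; the gap grows with the spread of S over the spectrum, which is why the low-energy tail below 1 keV matters.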
SU-F-R-44: Modeling Lung SBRT Tumor Response Using Bayesian Network Averaging
Diamant, A; Ybarra, N; Seuntjens, J
2016-06-15
Purpose: The prediction of tumor control after a patient receives lung SBRT (stereotactic body radiation therapy) has proven to be challenging, due to the complex interactions between an individual’s biology and dose-volume metrics. Many of these variables have predictive power when combined, a feature that we exploit using a graph modeling approach based on Bayesian networks. This provides a probabilistic framework that allows for accurate and visually intuitive predictive modeling. The aim of this study is to uncover possible interactions between an individual patient’s characteristics and generate a robust model capable of predicting said patient’s treatment outcome. Methods: We investigated a cohort of 32 prospective patients from multiple institutions who had received curative SBRT to the lung. The number of patients exhibiting tumor failure was observed to be 7 (event rate of 22%). The serum concentration of 5 biomarkers previously associated with NSCLC (non-small cell lung cancer) was measured pre-treatment. A total of 21 variables were analyzed, including dose-volume metrics with BED (biologically effective dose) correction and clinical variables. A Markov Chain Monte Carlo technique estimated the posterior probability distribution of the potential graphical structures. The probability of tumor failure was then estimated by averaging the top 100 graphs and applying Bayes’ rule. Results: The optimal Bayesian model generated throughout this study incorporated the PTV volume, the serum concentration of the biomarker EGFR (epidermal growth factor receptor) and prescription BED. This predictive model recorded an area under the receiver operating characteristic curve of 0.94(1), providing better performance compared to competing methods in other literature. Conclusion: The use of biomarkers in conjunction with dose-volume metrics allows for the generation of a robust predictive model. The preliminary results of this report demonstrate that it is
A Bayesian model averaging approach with non-informative priors for cost-effectiveness analyses.
Conigliani, Caterina
2010-07-20
We consider the problem of assessing new and existing technologies for their cost-effectiveness in the case where data on both costs and effects are available from a clinical trial, and we address it by means of the cost-effectiveness acceptability curve. The main difficulty in these analyses is that cost data usually exhibit highly skew and heavy-tailed distributions, so that it can be extremely difficult to produce realistic probabilistic models for the underlying population distribution. Here, in order to integrate the uncertainty about the model into the analysis of cost data and into cost-effectiveness analyses, we consider an approach based on Bayesian model averaging (BMA) in the particular case of weak prior information about the unknown parameters of the different models involved in the procedure. The main consequence of this assumption is that the marginal densities required by BMA are undetermined. However, in accordance with the theory of partial Bayes factors and in particular of fractional Bayes factors, we suggest replacing each marginal density with a ratio of integrals that can be efficiently computed via path sampling. Copyright (c) 2010 John Wiley & Sons, Ltd.
Reynolds-Averaged Turbulence Model Assessment for a Highly Back-Pressured Isolator Flowfield
NASA Technical Reports Server (NTRS)
Baurle, Robert A.; Middleton, Troy F.; Wilson, L. G.
2012-01-01
The use of computational fluid dynamics in scramjet engine component development is widespread in the existing literature. Unfortunately, the quantification of model-form uncertainties is rarely addressed with anything other than sensitivity studies, requiring that the computational results be intimately tied to and calibrated against existing test data. This practice must be replaced with a formal uncertainty quantification process for computational fluid dynamics to play an expanded role in the system design, development, and flight certification process. Due to ground test facility limitations, this expanded role is believed to be a requirement by some in the test and evaluation community if scramjet engines are to be given serious consideration as a viable propulsion device. An effort has been initiated at the NASA Langley Research Center to validate several turbulence closure models used for Reynolds-averaged simulations of scramjet isolator flows. The turbulence models considered were the Menter BSL, Menter SST, Wilcox 1998, Wilcox 2006, and the Gatski-Speziale explicit algebraic Reynolds stress models. The simulations were carried out using the VULCAN computational fluid dynamics package developed at the NASA Langley Research Center. A procedure to quantify the numerical errors was developed to account for discretization errors in the validation process. This procedure utilized the grid convergence index defined by Roache as a bounding estimate for the numerical error. The validation data was collected from a mechanically back-pressured constant area (1 × 2 inch) isolator model with an isolator entrance Mach number of 2.5. As expected, the model-form uncertainty was substantial for the shock-dominated, massively separated flowfield within the isolator as evidenced by a 6 duct height variation in shock train length depending on the turbulence model employed. Generally speaking, the turbulence models that did not include an explicit stress limiter more closely
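The grid convergence index mentioned above has a compact closed form. The following Python sketch (with invented grid solutions, not the VULCAN data) shows the usual Roache procedure: estimate the observed order of accuracy from three systematically refined grids, then bound the fine-grid discretization error:

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed order of accuracy from three grid solutions
    (f1 finest, f3 coarsest, constant refinement ratio r):
    p = ln((f3 - f2) / (f2 - f1)) / ln(r)."""
    return math.log((f3 - f2) / (f2 - f1)) / math.log(r)

def gci_fine(f1, f2, r, p, fs=1.25):
    """Roache's grid convergence index on the fine grid, in percent:
    GCI = Fs * |(f2 - f1) / f1| / (r^p - 1), with safety factor Fs."""
    eps = abs((f2 - f1) / f1)
    return 100.0 * fs * eps / (r ** p - 1.0)

# Hypothetical shock-train lengths from three grids, refinement ratio 2.
f1, f2, f3 = 10.02, 10.10, 10.42        # fine, medium, coarse solutions
p = observed_order(f1, f2, f3, r=2.0)   # ~2 for a second-order scheme
gci = gci_fine(f1, f2, r=2.0, p=p)      # bounding error estimate, percent
```

Fs = 1.25 is the conventional safety factor when three grids are available; with only two grids a more conservative Fs = 3 is typical.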
Shao, Kan; Small, Mitchell J
2011-10-01
A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose-response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose-response models (logistic and quantal-linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5-10%. The results demonstrate that dose selection for studies that subsequently inform dose-response models can benefit from consideration of how these models will be fit, combined, and interpreted. © 2011 Society for Risk Analysis.
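To make the averaging step concrete, here is a hedged Python sketch of combining BMDs from the two model forms named above. The parameters and weights are invented, and the real analysis estimates posterior weights by MCMC-based BMA rather than fixing them; this only illustrates the mechanics:

```python
import math

def bmd_quantal_linear(beta, bmr=0.10):
    """BMD for the quantal-linear model P(d) = g + (1 - g)(1 - exp(-b d)):
    extra risk 1 - exp(-b d) = BMR  =>  d = -ln(1 - BMR) / b."""
    return -math.log(1.0 - bmr) / beta

def bmd_logistic(alpha, beta, bmr=0.10):
    """BMD for the logistic model P(d) = 1 / (1 + exp(-(a + b d))),
    solved for extra risk = BMR by bisection."""
    p0 = 1.0 / (1.0 + math.exp(-alpha))
    def extra_risk(d):
        p = 1.0 / (1.0 + math.exp(-(alpha + beta * d)))
        return (p - p0) / (1.0 - p0)
    lo, hi = 0.0, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if extra_risk(mid) < bmr:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical fitted parameters and posterior model weights.
bmd_ql = bmd_quantal_linear(beta=0.05)
bmd_lg = bmd_logistic(alpha=-3.0, beta=0.04)
w_ql, w_lg = 0.6, 0.4                  # stand-ins for BMA posterior weights
bmd_bma = w_ql * bmd_ql + w_lg * bmd_lg
```

The point of MA is visible even in this toy: the two adequate-fitting models imply very different BMDs at low risk, and the averaged value hedges between them in proportion to their posterior support.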
NASA Astrophysics Data System (ADS)
Duc, Hiep Nguyen; Rivett, Kelly; MacSween, Katrina; Le-Anh, Linh
2017-01-01
Rainfall in New South Wales (NSW), located in the southeast of the Australian continent, is known to be influenced by four major climate drivers: the El Niño/Southern Oscillation (ENSO), the Interdecadal Pacific Oscillation (IPO), the Southern Annular Mode (SAM) and the Indian Ocean Dipole (IOD). Many studies have shown the influences of ENSO, IPO modulation, SAM and IOD on rainfall in Australia and on southeast Australia in particular. However, only limited work has been undertaken using a multiple regression framework to examine the extent of the combined effect of these climate drivers on rainfall. This paper analysed the role of these combined climate drivers and their interaction on the rainfall in NSW using Bayesian Model Averaging (BMA) to account for model uncertainty by considering each of the linear models across the whole model space, which is equal to the set of all possible combinations of predictors, to find the model posterior probabilities and their expected predictor coefficients. Using BMA for linear regression models, we are able to corroborate and confirm the results from many previous studies. In addition, the method gives the ranking order of importance and the probability of the association of each of the climate drivers and their interaction on the rainfall at a site. The ability to quantify the relative contribution of the climate drivers offers the key to understanding the complex interaction of drivers on rainfall, or lack of rainfall, in a region, such as the three big droughts in southeastern Australia, whose causes have recently been the subject of discussion and debate.
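The "whole model space" idea, where every subset of predictors receives a posterior weight, can be sketched as follows. This is an illustrative Python toy (two fake drivers, BIC-approximated posterior weights, ordinary least squares in pure Python) rather than the authors' BMA implementation:

```python
import itertools
import math
import random

def ols_rss(X, y):
    """Residual sum of squares from ordinary least squares
    (normal equations solved by Gaussian elimination with pivoting)."""
    n, p = len(y), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)]
         for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for k in range(c, p):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    beta = [0.0] * p
    for c in reversed(range(p)):
        beta[c] = (b[c] - sum(A[c][k] * beta[k] for k in range(c + 1, p))) / A[c][c]
    return sum((y[i] - sum(X[i][j] * beta[j] for j in range(p))) ** 2
               for i in range(n))

def bma_over_subsets(predictors, y):
    """BIC-approximated posterior probability for every subset of predictors
    (an intercept is always included)."""
    n, names = len(y), sorted(predictors)
    results = []
    for r in range(len(names) + 1):
        for subset in itertools.combinations(names, r):
            X = [[1.0] + [predictors[nm][i] for nm in subset] for i in range(n)]
            rss = ols_rss(X, y)
            bic = n * math.log(rss / n) + (len(subset) + 1) * math.log(n)
            results.append((subset, bic))
    bmin = min(b for _, b in results)
    w = [math.exp(-(b - bmin) / 2.0) for _, b in results]
    s = sum(w)
    return {subset: wi / s for (subset, _), wi in zip(results, w)}

# Toy data: "rainfall" driven by ENSO only; IOD is pure noise.
rng = random.Random(1)
enso = [rng.gauss(0, 1) for _ in range(80)]
iod = [rng.gauss(0, 1) for _ in range(80)]
rain = [2.0 + 1.5 * e + rng.gauss(0, 0.5) for e in enso]
post = bma_over_subsets({"ENSO": enso, "IOD": iod}, rain)
```

Summing the posterior weights of all subsets containing a given driver yields its inclusion probability, which is the "probability of association" the abstract refers to.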
Atomic structure data based on average-atom model for opacity calculations in astrophysical plasmas
NASA Astrophysics Data System (ADS)
Trzhaskovskaya, M. B.; Nikulin, V. K.
2018-03-01
The influence of plasma parameters on the electron structure of ions in astrophysical plasmas is studied on the basis of the average-atom model in the local thermodynamic equilibrium approximation. The relativistic Dirac-Slater method is used for the electron density estimation. The emphasis is on the investigation of the impact of the plasma temperature and density on the ionization stages required for calculations of the plasma opacities. The level population distributions and level energy spectra are calculated and analyzed for all ions with 6 ≤ Z ≤ 32 occurring in astrophysical plasmas. The plasma temperature range 2 - 200 eV and the density range 2 - 100 mg/cm3 are considered. The validity of the method used is supported by good agreement between our values of ionization stages for a number of ions, from oxygen up to uranium, and results obtained earlier by various methods, among which are more complicated procedures.
Kubo–Greenwood approach to conductivity in dense plasmas with average atom models
Starrett, C. E.
2016-04-13
In this study, a new formulation of the Kubo–Greenwood conductivity for average atom models is given. The new formulation improves upon previous treatments by explicitly including the ionic-structure factor. Calculations based on this new expression lead to much improved agreement with ab initio results for DC conductivity of warm dense hydrogen and beryllium, and for thermal conductivity of hydrogen. We also give and test a slightly modified Ziman–Evans formula for the resistivity that includes a non-free electron density of states, thus removing an ambiguity in the original Ziman–Evans formula. Again, results based on this expression are in good agreement with ab initio simulations for warm dense beryllium and hydrogen. However, for both these expressions, calculations of the electrical conductivity of warm dense aluminum lead to poor agreement at low temperatures compared to ab initio simulations.
MAIN software for density averaging, model building, structure refinement and validation
Turk, Dušan
2013-01-01
MAIN is software that has been designed to interactively perform the complex tasks of macromolecular crystal structure determination and validation. Using MAIN, it is possible to perform density modification, manual and semi-automated or automated model building and rebuilding, real- and reciprocal-space structure optimization and refinement, map calculations and various types of molecular structure validation. The prompt availability of various analytical tools and the immediate visualization of molecular and map objects allow a user to efficiently progress towards the completed refined structure. The extraordinary depth perception of molecular objects in three dimensions that is provided by MAIN is achieved by the clarity and contrast of colours and the smooth rotation of the displayed objects. MAIN allows simultaneous work on several molecular models and various crystal forms. The strength of MAIN lies in its manipulation of averaged density maps and molecular models when noncrystallographic symmetry (NCS) is present. Using MAIN, it is possible to optimize NCS parameters and envelopes and to refine the structure in single or multiple crystal forms. PMID:23897458
Large deviations of a long-time average in the Ehrenfest urn model
NASA Astrophysics Data System (ADS)
Meerson, Baruch; Zilber, Pini
2018-05-01
Since its inception in 1907, the Ehrenfest urn model (EUM) has served as a test bed of key concepts of statistical mechanics. Here we employ this model to study large deviations of a time-additive quantity. We consider two continuous-time versions of the EUM with K urns and N balls: with and without interactions between the balls in the same urn. We evaluate the probability distribution that the average number of balls in one urn over time T takes any specified value aN, where 0 ≤ a ≤ 1. For long observation time, T → ∞, a Donsker–Varadhan large deviation principle holds: −ln P ≃ T I(a, N, …), where … denote additional parameters of the model. We calculate the rate function I exactly by two different methods due to Donsker and Varadhan and compare the exact results with those obtained with a variant of WKB approximation (after Wentzel, Kramers and Brillouin). In the absence of interactions the WKB prediction for I is exact for any N. In the presence of interactions the WKB method gives asymptotically exact results for N ≫ 1. The WKB method also uncovers the (very simple) time history of the system which dominates the contribution of different time histories to P.
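As a sanity check of the long-time averaging studied above, here is a minimal Python simulation of the non-interacting case in a discrete-time variant (an illustrative assumption; the paper treats continuous-time dynamics): the time-averaged occupancy of one urn concentrates near N/K, with atypical values of a becoming exponentially rare as T grows:

```python
import random

def ehrenfest_time_average(n_balls, n_urns, n_steps, seed=0):
    """Discrete-time Ehrenfest dynamics without interactions: at each step a
    uniformly chosen ball moves to a uniformly chosen *other* urn.  Returns
    the time-averaged number of balls in urn 0."""
    rng = random.Random(seed)
    urn_of = [rng.randrange(n_urns) for _ in range(n_balls)]
    count0 = sum(1 for u in urn_of if u == 0)
    total = 0
    for _ in range(n_steps):
        ball = rng.randrange(n_balls)
        old = urn_of[ball]
        new = rng.randrange(n_urns - 1)    # pick one of the other urns
        if new >= old:
            new += 1
        if old == 0:
            count0 -= 1
        if new == 0:
            count0 += 1
        urn_of[ball] = new
        total += count0
    return total / n_steps

# The long-time average concentrates near N/K (here 100/2 = 50 balls).
avg = ehrenfest_time_average(n_balls=100, n_urns=2, n_steps=200_000, seed=3)
```

Estimating the full large-deviation tail by direct simulation is hopeless precisely because those fluctuations are exponentially suppressed in T, which is what makes the exact rate function and the WKB analysis valuable.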
NASA Astrophysics Data System (ADS)
Yoon, Y.; Kim, N.; Puria, S.; Steele, C. R.
2009-02-01
In this work, basilar membrane velocity (VBM), scala tympani intracochlear pressure (PST), and cochlear input impedances (Zc) for gerbil and chinchilla are computed with a three-dimensional hydrodynamic cochlear model using 1) a time-averaged Lagrangian, 2) a push-pull mechanism in the active case, and 3) the complex anatomy of the cochlear scalae obtained by micro computed tomography (μCT) scanning and 3-D reconstruction of gerbil and chinchilla temporal bones. The objective of this work is to compare the calculations of the present model with physiological measurements of the gerbil and chinchilla cochleae, such as VBM (Ren and Nuttall [1]), PST (Olson [2]), and ZC (Decraemer et al. [3], Songer and Rosowski [4], Ruggero et al. [5]). A WKB asymptotic method combined with Fourier series expansions is used to provide an efficient simulation. The VBM and PST simulation results for the gerbil cochlea show good agreement with the physiological measurements in both magnitude and phase, without large phase excursions. The ZC simulations from the gerbil and chinchilla models show reasonably good agreement with measurement.
NASA Astrophysics Data System (ADS)
Soltanzadeh, I.; Azadi, M.; Vakili, G. A.
2011-07-01
Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited area models (WRF, MM5 and HRM), with WRF used with five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days, using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attributes diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.
2018-01-01
Natural hazards (events that may cause actual disasters) are established in the literature as major causes of various massive and destructive problems worldwide. The occurrences of earthquakes, floods and heat waves affect millions of people through several impacts. These include cases of hospitalisation, loss of lives and economic challenges. The focus of this study was on reducing the risk of disasters that occur because of extremely high temperatures and heat waves. Modelling average maximum daily temperature (AMDT) guards against disaster risk and may also help countries prepare for extreme heat. This study discusses the use of the r largest order statistics approach of extreme value theory towards modelling AMDT over the period of 11 years, that is, 2000–2010. A generalised extreme value distribution for r largest order statistics is fitted to the annual maxima in an effort to study the behaviour of the r largest order statistics. The method of maximum likelihood is used in estimating the target parameters, and the frequency of occurrence of the hottest days is assessed. The study presents a case study of South Africa in which the data for the non-winter season (September–April of each year) are used. The meteorological data used are the AMDT collected by the South African Weather Service and provided by Eskom. The estimation of the shape parameter reveals evidence of a Weibull class as an appropriate distribution for modelling AMDT in South Africa. The extreme quantiles for specified return periods are estimated using the quantile function, and the best model is chosen through the use of the deviance statistic with the support of graphical diagnostic tools. The Entropy Difference Test (EDT) is used as a specification test for diagnosing the fit of the models to the data.
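The quantile-function step described above has a standard closed form for the GEV. The sketch below (Python, with invented parameter values, not the fitted South African ones) computes return levels for a Weibull-class fit, whose negative shape parameter implies a finite upper temperature bound:

```python
import math

def gev_return_level(mu, sigma, xi, return_period):
    """Return level z_T from a GEV fitted to annual maxima:
    z_T = mu - (sigma/xi) * (1 - y_T**(-xi)),  y_T = -ln(1 - 1/T).
    xi < 0 is the Weibull class, with finite upper endpoint mu - sigma/xi."""
    y = -math.log(1.0 - 1.0 / return_period)
    if abs(xi) < 1e-12:                    # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu - (sigma / xi) * (1.0 - y ** (-xi))

# Hypothetical Weibull-class fit to AMDT annual maxima (degrees Celsius).
mu, sigma, xi = 34.0, 1.8, -0.15
z10 = gev_return_level(mu, sigma, xi, 10)    # 10-year return level
z50 = gev_return_level(mu, sigma, xi, 50)    # 50-year return level
upper = mu - sigma / xi                      # finite endpoint for xi < 0
```

Return levels grow with the return period but, for a Weibull-class fit, can never exceed the finite endpoint, which is exactly the feature that makes this class attractive for bounded quantities like daily temperature.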
NASA Astrophysics Data System (ADS)
Zilberter, Ilya Alexandrovich
In this work, a hybrid Large Eddy Simulation / Reynolds-Averaged Navier Stokes (LES/RANS) turbulence model is applied to simulate two flows relevant to directed energy applications. The flow solver blends the Menter Baseline turbulence closure near solid boundaries with a Lenormand-type subgrid model in the free-stream with a blending function that employs the ratio of estimated inner and outer turbulent length scales. A Mach 2.2 mixing nozzle/diffuser system representative of a gas laser is simulated under a range of exit pressures to assess the ability of the model to predict the dynamics of the shock train. The simulation captures the location of the shock train responsible for pressure recovery but under-predicts the rate of pressure increase. Predicted turbulence production at the wall is found to be highly sensitive to the behavior of the RANS turbulence model. A Mach 2.3, high-Reynolds number, three-dimensional cavity flow is also simulated in order to compute the wavefront aberrations of an optical beam passing through the cavity. The cavity geometry is modeled using an immersed boundary method, and an auxiliary flat plate simulation is performed to replicate the effects of the wind-tunnel boundary layer on the computed optical path difference. Pressure spectra extracted on the cavity walls agree with empirical predictions based on Rossiter's formula. Proper orthogonal modes of the wavefront aberrations in a beam originating from the cavity center agree well with experimental data despite uncertainty about inflow turbulence levels and boundary layer thicknesses over the wind tunnel window. Dynamic mode decomposition of a planar wavefront spanning the cavity reveals that wavefront distortions are driven by shear layer oscillations at the Rossiter frequencies; these disturbances create eddy shocklets that propagate into the free-stream, creating additional optical wavefront distortion.
Kim, Steven B; Kodell, Ralph L; Moon, Hojin
2014-03-01
In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may be able to fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can be substantially different from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and diversity surrounding the truth. We propose a diversity index (DI) to balance between these two characteristics in model space selection. It addresses a collective property of a model space rather than individual performance of each model. Tuning parameters in the DI control the size of the model space for MA. © 2013 Society for Risk Analysis.
The role of zonal flows in the saturation of multi-scale gyrokinetic turbulence
Staebler, G. M.; Candy, J.; Howard, N. T.
2016-06-15
The 2D spectrum of the saturated electric potential from gyrokinetic turbulence simulations that include both ion and electron scales (multi-scale) in axisymmetric tokamak geometry is analyzed. The paradigm that the turbulence is saturated when the zonal (axisymmetic) ExB flow shearing rate competes with linear growth is shown to not apply to the electron scale turbulence. Instead, it is the mixing rate by the zonal ExB velocity spectrum with the turbulent distribution function that competes with linear growth. A model of this mechanism is shown to be able to capture the suppression of electron-scale turbulence by ion-scale turbulence and the threshold for the increase in electron scale turbulence when the ion-scale turbulence is reduced. The model computes the strength of the zonal flow velocity and the saturated potential spectrum from the linear growth rate spectrum. The model for the saturated electric potential spectrum is applied to a quasilinear transport model and shown to accurately reproduce the electron and ion energy fluxes of the non-linear gyrokinetic multi-scale simulations. The zonal flow mixing saturation model is also shown to reproduce the non-linear upshift in the critical temperature gradient caused by zonal flows in ion-scale gyrokinetic simulations.
Longitudinal variability in Jupiter's zonal winds derived from multi-wavelength HST observations
NASA Astrophysics Data System (ADS)
Johnson, Perianne E.; Morales-Juberías, Raúl; Simon, Amy; Gaulme, Patrick; Wong, Michael H.; Cosentino, Richard G.
2018-06-01
Multi-wavelength Hubble Space Telescope (HST) images of Jupiter from the Outer Planets Atmospheres Legacy (OPAL) and Wide Field Coverage for Juno (WFCJ) programs in 2015, 2016, and 2017 are used to derive wind profiles as a function of latitude and longitude. Wind profiles are typically zonally averaged to reduce measurement uncertainties. However, doing this destroys any variation of the zonal component of the winds in the longitudinal direction. Here, we present the results derived from using a "sliding-window" correlation method. This method adds longitudinal specificity and allows for the detection of spatial variations in the zonal winds. Spatial variations are identified in two jets: one at 17°N, the location of a prominent westward jet, and the other at 7°S, the location of the chevrons. Temporal and spatial variations at the 24°N jet and the 5-μm hot spots are also examined.
Warren, Joshua L.; Schuck-Paim, Cynthia; Lustig, Roger; Lewnard, Joseph A.; Fuentes, Rodrigo; Bruhn, Christian A. W.; Taylor, Robert J.; Simonsen, Lone; Weinberger, Daniel M.
2017-01-01
Background: Pneumococcal conjugate vaccines (PCVs) prevent invasive pneumococcal disease and pneumonia. However, some low- and middle-income countries have yet to introduce PCV into their immunization programs due, in part, to lack of certainty about the potential impact. Assessing PCV benefits is challenging because specific data on pneumococcal disease are often lacking, and it can be difficult to separate the effects of factors other than the vaccine that could also affect pneumococcal disease rates. Methods: We assess PCV impact by combining Bayesian model averaging with change-point models to estimate the timing and magnitude of vaccine-associated changes, while controlling for seasonality and other covariates. We applied our approach to monthly time series of age-stratified hospitalizations related to pneumococcal infection in children younger than 5 years of age in the United States, Brazil, and Chile. Results: Our method accurately detected changes in data in which we knew true and noteworthy changes occurred, i.e., in simulated data and for invasive pneumococcal disease. Moreover, 24 months after the vaccine introduction, we detected reductions of 14%, 9%, and 9% in the United States, Brazil, and Chile, respectively, in all-cause pneumonia (ACP) hospitalizations for the age group 0 to <1 years. Conclusions: Our approach provides a flexible and sensitive method to detect changes in disease incidence that occur after the introduction of a vaccine or other intervention, while avoiding biases that exist in current approaches to time-trend analyses. PMID:28767518
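The change-point averaging idea can be sketched as follows, assuming a two-mean Gaussian model with known noise and equal priors (so plain likelihood weights stand in for BIC weights); the synthetic series and all parameter choices are illustrative, not the authors' hierarchical model.

```python
import math

# Sketch: averaging over change-point models with likelihood-based weights.
# Synthetic monthly counts with a 15% level drop at month 24.
series = [100.0] * 24 + [85.0] * 24

def gauss_loglik(xs, mu, sigma=5.0):
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

def cp_loglik(xs, t):
    """Log likelihood of a two-mean model with a change point at index t."""
    a, b = xs[:t], xs[t:]
    return gauss_loglik(a, sum(a) / len(a)) + gauss_loglik(b, sum(b) / len(b))

cands = range(6, len(series) - 6)
ll = {t: cp_loglik(series, t) for t in cands}
m = max(ll.values())
raw = {t: math.exp(ll[t] - m) for t in cands}    # equal priors
z = sum(raw.values())
post = {t: raw[t] / z for t in cands}            # posterior over change timing
best = max(post, key=post.get)                   # most probable change point
```

The posterior over candidate timings is the "model averaging" part: instead of committing to one change point, downstream estimates of the change magnitude can be weighted by `post`.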
Energy gain calculations in Penning fusion systems using a bounce-averaged Fokker-Planck model
NASA Astrophysics Data System (ADS)
Chacón, L.; Miley, G. H.; Barnes, D. C.; Knoll, D. A.
2000-11-01
In spherical Penning fusion devices, a spherical cloud of electrons, confined in a Penning-like trap, creates the ion-confining electrostatic well. Fusion energy gains for these systems have been calculated in optimistic conditions (i.e., spherically uniform electrostatic well, no collisional ion-electron interactions, single ion species) using a bounce-averaged Fokker-Planck (BAFP) model. Results show that steady-state distributions in which the Maxwellian ion population is dominant correspond to lowest ion recirculation powers (and hence highest fusion energy gains). It is also shown that realistic parabolic-like wells result in better energy gains than square wells, particularly at large well depths (>100 kV). Operating regimes with fusion power to ion input power ratios (Q-value) >100 have been identified. The effect of electron losses on the Q-value has been addressed heuristically using a semianalytic model, indicating that large Q-values are still possible provided that electron particle losses are kept small and well depths are large.
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Nikjoo, Hooshang; Dicello, John F.; Pisacane, Vincent; Cucinotta, Francis A.
2007-01-01
The purpose of this work is to test our theoretical model for the interpretation of radiation data measured in space. During space missions astronauts are exposed to a complex field of radiation types and kinetic energies from galactic cosmic rays (GCR), trapped protons, and sometimes solar particle events (SPEs). The tissue equivalent proportional counter (TEPC) provides a simple time-dependent approach to radiation monitoring for astronauts on board the International Space Station. Another, newer approach to microdosimetry is the use of silicon-on-insulator (SOI) technology launched on the MidSTAR-1 mission in low Earth orbit (LEO). In radiation protection practice, the average quality factor of a radiation field is defined as a function of linear energy transfer (LET), Q(sub ave)(LET). However, TEPC measures the average quality factor as a function of the lineal energy y, Q(sub ave)(y), defined as the average energy deposition in a volume divided by the average chord length of the volume. Lineal energy, y, deviates from LET due to energy straggling, delta-ray escape or entry, and nuclear fragments produced in the detector volume. Monte Carlo track structure simulation was employed to obtain the response of a TEPC irradiated with charged particles for an equivalent site diameter of 1 micron of a wall-less counter. The calculated data of the energy absorption in the wall-less counter were compiled for various y values for several ion types at various discrete projectile energy levels. For the simulation of the TEPC response from the mixed radiation environments inside a spacecraft, such as the Space Shuttle and the International Space Station, the complete microdosimetric TEPC response, f(y, E, Z), was calculated with the Monte Carlo theoretical results by using first-order Lagrangian interpolation for a monovariate function at a given y value (y = 0.1 keV/micron to 5000 keV/micron) at any projectile energy level (E = 0.01 MeV/u to 50,000 MeV/u) of each specific
NASA Astrophysics Data System (ADS)
Rahardiantoro, S.; Sartono, B.; Kurnia, A.
2017-03-01
In recent years, DNA methylation has been a special focus in revealing the patterns of many human diseases. Huge amounts of data are an inescapable phenomenon in this case. In addition, some researchers are interested in making predictions based on these huge data sets, especially using regression analysis. Classical approaches fail at this task. Model averaging by Ando and Li [1] is an alternative approach to this problem. This research applied model averaging to obtain the best prediction from high-dimensional data. In practice, the case study of Vargas et al [3], with data on exposure to aflatoxin B1 (AFB1) and DNA methylation in white blood cells of infants in The Gambia, served as the implementation of model averaging. The best ensemble model was selected based on the minimum MAPE, MAE, and MSE of the predictions. The result is an ensemble model obtained by model averaging with 15 predictors in each candidate model.
Zonal wavefront sensing with enhanced spatial resolution.
Pathak, Biswajit; Boruah, Bosanta R
2016-12-01
In this Letter, we introduce a scheme to enhance the spatial resolution of a zonal wavefront sensor. The zonal wavefront sensor comprises an array of binary gratings implemented by a ferroelectric spatial light modulator (FLCSLM) followed by a lens, in lieu of the array of lenses in the Shack-Hartmann wavefront sensor. We show that the fast response of the FLCSLM device facilitates quick display of several laterally shifted binary grating patterns, and the programmability of the device enables simultaneous capturing of each focal spot array. This eventually leads to a wavefront estimation with an enhanced spatial resolution without much sacrifice on the sensor frame rate, thus making the scheme suitable for high spatial resolution measurement of transient wavefronts. We present experimental and numerical simulation results to demonstrate the importance of the proposed wavefront sensing scheme.
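The zonal principle that the grating-based scheme refines can be sketched in a few lines: each subaperture's focal-spot centroid shift gives a local wavefront slope s = dx / f, and the slopes are integrated across subapertures. The focal length, pitch, and shift values below are arbitrary illustrative numbers; the FLCSLM grating display itself is not modeled.

```python
# Minimal zonal wavefront reconstruction from focal-spot centroid shifts.
f = 0.1                                        # focal length, metres (toy)
pitch = 0.5e-3                                 # subaperture pitch, metres (toy)
spot_shifts = [0.0, 1e-6, 2e-6, 2e-6, 1e-6]    # centroid shifts, metres (toy)

slopes = [dx / f for dx in spot_shifts]        # local slopes (radians)
wavefront = [0.0]
for s in slopes:
    wavefront.append(wavefront[-1] + s * pitch)   # simple zonal integration
```

Displaying laterally shifted grating patterns, as in the Letter, effectively increases the number of `spot_shifts` samples per unit aperture, which is what enhances the spatial resolution of the reconstruction.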
Lifetime maps for orbits around Callisto using a double-averaged model
NASA Astrophysics Data System (ADS)
Cardoso dos Santos, Josué; Carvalho, Jean P. S.; Prado, Antônio F. B. A.; Vilhena de Moraes, Rodolpho
2017-12-01
The present paper studies the lifetime of orbits around a moon that is in orbit around its parent planet. In the context of the inner restricted three-body problem, the dynamical model considered in the present study uses the double-averaged dynamics of a spacecraft moving around a moon under the gravitational pull of a disturbing third body in an elliptical orbit. The non-uniform distribution of the mass of the moon is also considered. Applications are performed using numerical experiments for the Callisto-spacecraft-Jupiter system, and lifetime maps for different values of the eccentricity of the disturbing body (Jupiter) are presented, in order to investigate the role of this parameter in these maps. The idea is to simulate a system with the same physical parameters as the Jupiter-Callisto system, but with larger eccentricities. These maps are also useful for validating and improving the results available in the literature, such as finding conditions that extend the available time for a massless orbiting body to remain in highly inclined orbits under gravitational disturbances from the other bodies of the system.
Peng, Xian; Yuan, Han; Chen, Wufan; Ding, Lei
2017-01-01
Continuous loop averaging deconvolution (CLAD) is one of the proven methods for recovering transient auditory evoked potentials (AEPs) in rapid stimulation paradigms, which requires an elaborate stimulus sequence design to attenuate impacts from noise in the data. The present study aimed to develop a new metric for gauging a CLAD sequence in terms of the noise gain factor (NGF), which has been proposed previously but is less effective in the presence of pink (1/f) noise. We derived the new metric by explicitly introducing the 1/f model into the proposed time-continuous sequence. We selected several representative CLAD sequences to test their noise properties on typical electroencephalogram (EEG) recordings, as well as on five real CLAD EEG recordings, to retrieve the middle latency responses. We also demonstrated the merit of the new metric in generating and quantifying optimized sequences using a classic genetic algorithm. The new metric shows evident improvements in measuring actual noise gains at different frequencies, and better performance than the original NGF in various aspects. The new metric is a generalized NGF measurement that can better quantify the performance of a CLAD sequence, and provides a more efficient means of generating CLAD sequences via incorporation with optimization algorithms. The present study can facilitate the specific application of the CLAD paradigm with desired sequences in the clinic. PMID:28414803
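A rough sketch of why a 1/f weighting changes the metric, assuming the usual view that deconvolution amplifies noise by 1/|S(f)| at each frequency bin: the binary sequence below and the pink-noise weighting form are illustrative only, not the paper's exact NGF definition.

```python
import cmath
import math

# Deconvolving a CLAD sequence amplifies noise by 1/|S(f)| per frequency;
# weighting bins by a pink (1/f) power spectrum emphasizes the
# low-frequency noise floor that a flat (white) NGF underrates.
def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

seq = [1, 0, 0, 1, 0, 1, 0, 0]            # toy binary stimulus sequence
S = dft(seq)
bins = range(1, len(S))                   # skip the DC bin
ngf_white = sum(1 / abs(S[k]) ** 2 for k in bins) / len(bins)
pink = [1 / k for k in bins]              # 1/f noise power weighting
ngf_pink = (sum(p / abs(S[k]) ** 2 for p, k in zip(pink, bins))
            / sum(pink))
```

For this toy sequence the pink-weighted gain exceeds the flat-weighted one, because the sequence's spectrum happens to be weakest at low frequencies, exactly the case where a 1/f-aware metric and the original NGF disagree.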
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.
2018-02-01
Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity, and other phenomena in the atmosphere. In fact, extreme weather due to global warming can lead to drought, flood, hurricanes, and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict weather with distinctive output, particularly a mapping process based on GIS with information about the current weather status at certain coordinates of each region and the capability to forecast seven days ahead. Data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is calculated as the mean square error (MSE). The error value for minimum temperature is 0.28 and for maximum temperature 0.15. Meanwhile, the error value for minimum humidity is 0.38 and for maximum humidity 0.04. Finally, the forecasting error for wind speed is 0.076. The lower the forecasting error rate, the better the accuracy.
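The combination step of BMA can be sketched with a generic Gaussian-likelihood weighting over a training window; the temperatures, member forecasts, and sigma parameter below are illustrative stand-ins, not the configuration used in the study.

```python
import math

# Toy BMA-style forecast combination: members are weighted by how well
# they fit a training window, then their next-day forecasts are pooled.
obs      = [30.1, 29.8, 30.4, 31.0, 30.6]   # observed temperatures (toy)
member_a = [30.0, 29.9, 30.5, 30.8, 30.7]   # member close to observations
member_b = [28.0, 28.5, 29.0, 29.5, 28.8]   # biased member

def mse(pred, truth):
    return sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)

def bma_weights(members, truth, sigma=1.0):
    lik = [math.exp(-mse(m, truth) / (2 * sigma ** 2)) for m in members]
    total = sum(lik)
    return [l / total for l in lik]

wa, wb = bma_weights([member_a, member_b], obs)
forecast = wa * 31.2 + wb * 29.0            # pooled next-day forecast
```

The biased member is not discarded outright; it simply receives a small weight, which is what lets BMA moderate the worst errors of any single member.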
NASA Astrophysics Data System (ADS)
Pokhrel, Prafulla; Wang, Q. J.; Robertson, David E.
2013-10-01
Seasonal streamflow forecasts are valuable for planning and allocation of water resources. In Australia, the Bureau of Meteorology employs a statistical method to forecast seasonal streamflows. The method uses predictors that are related to catchment wetness at the start of a forecast period and to climate during the forecast period. For the latter, a predictor is selected among a number of lagged climate indices as candidates to give the "best" model in terms of model performance in cross validation. This study investigates two strategies for further improvement in seasonal streamflow forecasts. The first is to combine, through Bayesian model averaging, multiple candidate models with different lagged climate indices as predictors, to take advantage of different predictive strengths of the multiple models. The second strategy is to introduce additional candidate models, using rainfall and sea surface temperature predictions from a global climate model as predictors. This is to take advantage of the direct simulations of various dynamic processes. The results show that combining forecasts from multiple statistical models generally yields more skillful forecasts than using only the best model and appears to moderate the worst forecast errors. The use of rainfall predictions from the dynamical climate model marginally improves the streamflow forecasts when viewed over all the study catchments and seasons, but the use of sea surface temperature predictions provides little additional benefit.
Nonlinear saturation of the slab ITG instability and zonal flow generation with fully kinetic ions
NASA Astrophysics Data System (ADS)
Miecnikowski, Matthew T.; Sturdevant, Benjamin J.; Chen, Yang; Parker, Scott E.
2018-05-01
Fully kinetic turbulence models are of interest for their potential to validate or replace gyrokinetic models in plasma regimes where the gyrokinetic expansion parameters are marginal. Here, we demonstrate fully kinetic ion capability by simulating the growth and nonlinear saturation of the ion-temperature-gradient instability in shearless slab geometry assuming adiabatic electrons and including zonal flow dynamics. The ion trajectories are integrated using the Lorentz force, and the cyclotron motion is fully resolved. Linear growth and nonlinear saturation characteristics show excellent agreement with analogous gyrokinetic simulations across a wide range of parameters. The fully kinetic simulation accurately reproduces the nonlinearly generated zonal flow. This work demonstrates nonlinear capability, resolution of weak gradient drive, and zonal flow physics, which are critical aspects of modeling plasma turbulence with full ion dynamics.
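Integrating ion trajectories with the Lorentz force while resolving cyclotron motion is commonly done with the Boris scheme; the sketch below (which may differ from the integrator actually used in these simulations) shows the push and its exact speed conservation in a pure magnetic field.

```python
import math

# Boris particle push: a standard volume-preserving integrator for the
# Lorentz force that fully resolves cyclotron motion.
def boris_push(v, E, B, qm, dt):
    """Advance velocity v one step under fields E, B (3-vectors); qm = q/m."""
    def add(a, b): return [a[i] + b[i] for i in range(3)]
    def scale(a, s): return [x * s for x in a]
    def cross(a, b): return [a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0]]
    v_minus = add(v, scale(E, qm * dt / 2))           # half electric kick
    t = scale(B, qm * dt / 2)                         # rotation vector
    s = scale(t, 2 / (1 + sum(x * x for x in t)))
    v_prime = add(v_minus, cross(v_minus, t))
    v_plus = add(v_minus, cross(v_prime, s))          # magnetic rotation
    return add(v_plus, scale(E, qm * dt / 2))         # half electric kick

# gyrate for roughly one period in B = (0, 0, 1) with no electric field
v, dt, qm = [1.0, 0.0, 0.0], 0.01, 1.0
for _ in range(int(2 * math.pi / dt)):
    v = boris_push(v, [0.0, 0.0, 0.0], [0.0, 0.0, 1.0], qm, dt)
speed = math.sqrt(sum(x * x for x in v))
```

With E = 0 the push is a pure rotation, so |v| is conserved to machine precision over arbitrarily many steps, which is one reason Lorentz-force ("fully kinetic") ion models remain stable over the long times needed for turbulence saturation studies.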
Taylor, Brian A.; Hwang, Ken-Pin; Hazle, John D.; Stafford, R. Jason
2009-01-01
The authors investigated the performance of the iterative Steiglitz–McBride (SM) algorithm on an autoregressive moving average (ARMA) model of signals from a fast, sparsely sampled, multiecho, chemical shift imaging (CSI) acquisition using simulation, phantom, ex vivo, and in vivo experiments with a focus on its potential usage in magnetic resonance (MR)-guided interventions. The ARMA signal model facilitated a rapid calculation of the chemical shift, apparent spin-spin relaxation time (T2*), and complex amplitudes of a multipeak system from a limited number of echoes (≤16). Numerical simulations of one- and two-peak systems were used to assess the accuracy and uncertainty in the calculated spectral parameters as a function of acquisition and tissue parameters. The measured uncertainties from simulation were compared to the theoretical Cramer–Rao lower bound (CRLB) for the acquisition. Measurements made in phantoms were used to validate the T2* estimates and to validate uncertainty estimates made from the CRLB. We demonstrated application to real-time MR-guided interventions ex vivo by using the technique to monitor a percutaneous ethanol injection into a bovine liver and in vivo to monitor a laser-induced thermal therapy treatment in a canine brain. Simulation results showed that the chemical shift and amplitude uncertainties reached their respective CRLB at a signal-to-noise ratio (SNR) ≥ 5 for echo train lengths (ETLs) ≥ 4 using a fixed echo spacing of 3.3 ms. T2* estimates from the signal model possessed higher uncertainties but reached the CRLB at larger SNRs and/or ETLs. Highly accurate estimates for the chemical shift (<0.01 ppm) and amplitude (<1.0%) were obtained with ≥4 echoes and for T2* (<1.0%) with ≥7 echoes. We conclude that, over a reasonable range of SNR, the SM algorithm is a robust estimator of spectral parameters from fast CSI acquisitions that acquire ≤16 echoes for one- and two-peak systems. Preliminary ex vivo and in vivo
NASA Astrophysics Data System (ADS)
Wöhling, T.; Schöniger, A.; Geiges, A.; Nowak, W.; Gayler, S.
2013-12-01
The objective selection of appropriate models for realistic simulations of coupled soil-plant processes is a challenging task since the processes are complex, not fully understood at larger scales, and highly non-linear. Also, comprehensive data sets are scarce, and measurements are uncertain. In the past decades, a variety of different models have been developed that exhibit a wide range of complexity regarding their approximation of processes in the coupled model compartments. We present a method for evaluating experimental design for maximum confidence in the model selection task. The method considers uncertainty in parameters, measurements and model structures. Advancing the ideas behind Bayesian Model Averaging (BMA), we analyze the changes in posterior model weights and posterior model choice uncertainty when more data are made available. This allows assessing the power of different data types, data densities and data locations in identifying the best model structure from among a suite of plausible models. The models considered in this study are the crop models CERES, SUCROS, GECROS and SPASS, which are coupled to identical routines for simulating soil processes within the modelling framework Expert-N. The four models considerably differ in the degree of detail at which crop growth and root water uptake are represented. Monte-Carlo simulations were conducted for each of these models considering their uncertainty in soil hydraulic properties and selected crop model parameters. Using a Bootstrap Filter (BF), the models were then conditioned on field measurements of soil moisture, matric potential, leaf-area index, and evapotranspiration rates (from eddy-covariance measurements) during a vegetation period of winter wheat at a field site at the Swabian Alb in Southwestern Germany. Following our new method, we derived model weights when using all data or different subsets thereof. We discuss to which degree the posterior mean outperforms the prior mean and all
Dynamics of zonal shear collapse with hydrodynamic electrons
NASA Astrophysics Data System (ADS)
Hajjar, R. J.; Diamond, P. H.; Malkov, M. A.
2018-06-01
This paper presents a theory for the collapse of the edge zonal shear layer, as observed at the density limit at low β. This paper investigates the scaling of the transport and mean profiles with the adiabaticity parameter α, with special emphasis on fluxes relevant to zonal flow (ZF) generation. We show that the adiabaticity parameter characterizes the strength of production of zonal flows and so determines the state of turbulence. A 1D reduced model that self-consistently describes the spatiotemporal evolution of the mean density n̄, the azimuthal flow v̄_y, and the turbulent potential enstrophy ε = ⟨(ñ - ∇²φ̃)²/2⟩, related to the fluctuation intensity, is presented. Quasi-linear analysis determines how the particle flux Γ_n and the vorticity flux Π = -χ_y ∇²v_y + Π_res scale with α, in both hydrodynamic and adiabatic regimes. As the plasma response passes from adiabatic (α > 1) to hydrodynamic (α < 1), the particle flux Γ_n is enhanced and the turbulent viscosity χ_y increases. However, the residual flux Π_res, which drives the flow, drops with α. As a result, the mean vorticity gradient ∇²v̄_y = Π_res/χ_y, representative of the strength of the shear, also drops. The shear layer then collapses and turbulence is enhanced. The collapse is due to a decrease in ZF production, not an increase in damping. A physical picture for the onset of collapse is presented. The findings of this paper are used to motivate an explanation of the phenomenology of the low-β density limit evolution. A change from adiabatic (α = k_z²v_th²/(|ω|ν_ei) > 1) to hydrodynamic (α < 1) electron dynamics is associated with the density limit.
Implementing Multidisciplinary and Multi-Zonal Applications Using MPI
NASA Technical Reports Server (NTRS)
Fineberg, Samuel A.
1995-01-01
Multidisciplinary and multi-zonal applications are an important class of applications in the area of Computational Aerosciences. In these codes, two or more distinct parallel programs or copies of a single program are utilized to model a single problem. To support such applications, it is common to use a programming model where a program is divided into several single program multiple data stream (SPMD) applications, each of which solves the equations for a single physical discipline or grid zone. These SPMD applications are then bound together to form a single multidisciplinary or multi-zonal program in which the constituent parts communicate via point-to-point message passing routines. Unfortunately, simple message passing models, like Intel's NX library, only allow point-to-point and global communication within a single system-defined partition. This makes implementation of these applications quite difficult, if not impossible. In this report it is shown that the new Message Passing Interface (MPI) standard is a viable portable library for implementing the message passing portion of multidisciplinary applications. Further, with the extension of a portable loader, fully portable multidisciplinary application programs can be developed. Finally, the performance of MPI is compared to that of some native message passing libraries. This comparison shows that MPI can be implemented to deliver performance commensurate with native message libraries.
A model for late Archean chemical weathering and world average river water
NASA Astrophysics Data System (ADS)
Hao, Jihua; Sverjensky, Dimitri A.; Hazen, Robert M.
2017-01-01
Interpretations of the geologic record of late Archean near-surface environments depend very strongly on an understanding of weathering and resultant riverine transport to the oceans. The late Archean atmosphere is widely recognized to be anoxic (pO2,g = 10^-5 to 10^-13 bars; pH2,g = 10^-3 to 10^-5 bars). Detrital siderite (FeCO3), pyrite (FeS2), and uraninite (UO2) in late Archean sedimentary rocks also suggest anoxic conditions. However, whether the observed detrital minerals could have been thermodynamically stable during weathering and riverine transport under such an atmosphere remains untested. Similarly, interpretations of fluctuations recorded by trace metals and isotopes are hampered by a lack of knowledge of the chemical linkages between the atmosphere, weathering, riverine transport, and the mineralogical record. In this study, we used theoretical reaction path models to simulate the chemistry involved in rainwater and weathering processes under present-day and hypothetical Archean atmospheric boundary conditions. We included new estimates of the thermodynamic properties of Fe(II)-smectites as well as smectite and calcite solid solutions. Simulation of present-day weathering of basalt + calcite by world-average rainwater produced hematite, kaolinite, Na-Mg-saponite, and chalcedony after 10^-4 moles of reactant minerals kg^-1 H2O were destroyed. Combination of the resultant water chemistry with results for granitic weathering produced a water composition comparable to present-day world average river water (WARW). In contrast, under late Archean atmospheric conditions (pCO2,g = 10^-1.5 and pH2,g = 10^-5.0 bars), weathering of olivine basalt + calcite to the same degree of reaction produced kaolinite, chalcedony, and Na-Fe(II)-rich-saponite. Late Archean weathering of tonalite-trondhjemite-granodiorite (TTG) formed Fe(II)-rich beidellite and chalcedony. Combining the waters from olivine basalt and TTG weathering resulted in a model for late Archean WARW with the
Ensemble learning and model averaging for material identification in hyperspectral imagery
NASA Astrophysics Data System (ADS)
Basener, William F.
2017-05-01
In this paper we present a method for identifying the material contained in a pixel or region of pixels in a hyperspectral image. An identification process can be performed on a spectrum from image pixels that have been pre-determined to be of interest, generally by comparing the spectrum from the image to spectra in an identification library. The metric for comparison used in this paper is a Bayesian probability for each material. This probability can be computed either from Bayes' theorem applied to normal distributions for each library spectrum or using model averaging. Using probabilities has the advantage that they can be summed over the spectra of any material class to obtain a class probability. For example, the probability that the spectrum of interest is a fabric is equal to the sum of all probabilities for fabric spectra in the library. We can do the same to determine the probability for a specific type of fabric, or any level of specificity contained in our library. Probabilities not only tell us which material is most likely, they tell us how confident we can be in the material's presence; a probability close to 1 indicates near certainty of the presence of a material in the given class, and a probability close to 0.5 indicates that we cannot know whether the material is present at the given level of specificity. This is much more informative than a detection score from a target detection algorithm or a label from a classification algorithm. In this paper we present results in the form of a hierarchical tree with probabilities for each node. We use Forest Radiance imagery with 159 bands.
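The probability bookkeeping described above can be sketched as follows, assuming independent-band Gaussian likelihoods and equal priors; the library spectra, class labels, pixel values, and sigma are toy values, not the Forest Radiance data.

```python
import math

# Per-library-spectrum Gaussian likelihoods -> Bayes posteriors ->
# class probability by summing posteriors over class members.
library = {
    ("fabric", "cotton"): [0.30, 0.55, 0.20],
    ("fabric", "nylon"):  [0.32, 0.50, 0.25],
    ("paint",  "green"):  [0.10, 0.70, 0.15],
}

def loglik(x, mu, sigma=0.05):
    """Independent-band Gaussian log likelihood (simplified covariance)."""
    return sum(-0.5 * ((xi - mi) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi))
               for xi, mi in zip(x, mu))

pixel = [0.31, 0.53, 0.22]                    # observed spectrum (toy)
ll = {k: loglik(pixel, mu) for k, mu in library.items()}
m = max(ll.values())
raw = {k: math.exp(v - m) for k, v in ll.items()}   # equal priors
z = sum(raw.values())
post = {k: r / z for k, r in raw.items()}
# class probability = sum of member posteriors, here for "fabric"
p_fabric = sum(p for (cls, _), p in post.items() if cls == "fabric")
```

Note the behavior the abstract emphasizes: the class-level probability (`p_fabric`) can be near 1 even when the two fabric members split the posterior between them, so confidence at a coarse node of the hierarchy can exceed confidence at any leaf.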
NASA Technical Reports Server (NTRS)
Cappelli, Daniele; Mansour, Nagi N.
2012-01-01
Separation can be seen in most aerodynamic flows, but accurate prediction of separated flows is still a challenging problem for computational fluid dynamics (CFD) tools. The behavior of several Reynolds Averaged Navier-Stokes (RANS) models in predicting the separated flow over a wall-mounted hump is studied. The strengths and weaknesses of the most popular RANS models (Spalart-Allmaras, k-epsilon, k-omega, k-omega-SST) are evaluated using the open source software OpenFOAM. The hump flow modeled in this work has been documented in the 2004 CFD Validation Workshop on Synthetic Jets and Turbulent Separation Control. Only the baseline case is treated; the slot flow control cases are not considered in this paper. Particular attention is given to predicting the size of the recirculation bubble, the position of the reattachment point, and the velocity profiles downstream of the hump.
Disturbance zonal and vertical plasma drifts in the Peruvian sector during solar minimum phases
NASA Astrophysics Data System (ADS)
Santos, A. M.; Abdu, M. A.; Souza, J. R.; Sobral, J. H. A.; Batista, I. S.
2016-03-01
In the present work, we investigate the behavior of the equatorial F region zonal plasma drifts over the Peruvian region under magnetically disturbed conditions during two solar minimum epochs, one of them being the recent prolonged solar activity minimum. The study utilizes the vertical and zonal components of the plasma drifts measured by the Jicamarca (11.95°S; 76.87°W) incoherent scatter radar during two events that occurred on 10 April 1997 and 24 June 2008 and model calculations of the zonal drift in a realistic ionosphere simulated by the Sheffield University Plasmasphere-Ionosphere Model-INPE. Two main points are focused on: (1) the connection between electric fields and plasma drifts under a prompt penetration electric field during disturbed periods and (2) the anomalous behavior of the daytime zonal drift in the absence of any magnetic storm. A perfect anticorrelation between vertical and zonal drifts was observed during the night and in the initial and growth phases of the magnetic storm. For the first time, based on a realistic low-latitude ionosphere, we show, on a detailed quantitative basis, that this anticorrelation is driven mainly by a vertical Hall electric field induced by the primary zonal electric field in the presence of an enhanced nighttime E region ionization. It is shown that an increase in the field line-integrated Hall-to-Pedersen conductivity ratio (∑H/∑P), which can arise from precipitation of energetic particles in the region of the South American Magnetic Anomaly, is capable of explaining the observed anticorrelation between the vertical and zonal plasma drifts. Evidence for the particle ionization is provided by the occurrence of anomalous sporadic E layers over the low-latitude station Cachoeira Paulista (22.67°S; 44.9°W), Brazil. It is also shown that the zonal plasma drift reversal to eastward in the afternoon, two hours earlier than its reference quiet-time pattern, is possibly caused by weakening of the zonal wind.
Zonal flow generation and its feedback on turbulence production in drift wave turbulence
Pushkarev, Andrey V.; Bos, Wouter J. T.; Nazarenko, Sergey V.
2013-04-15
Plasma turbulence described by the Hasegawa-Wakatani equations is simulated numerically for different models and values of the adiabaticity parameter C. It is found that for low values of C turbulence remains isotropic, zonal flows are not generated, and there is no suppression of the meridional drift waves and particle transport. For high values of C, turbulence evolves towards highly anisotropic states with a dominant contribution of the zonal sector to the kinetic energy. This anisotropic flow leads to a decrease of turbulence production in the meridional sector and limits the particle transport across the mean isopycnal surfaces. This behavior allows one to consider the Hasegawa-Wakatani equations a minimal PDE model which contains the drift-wave/zonal-flow feedback loop mechanism.
Triple Cascade Behavior in Quasigeostrophic and Drift Turbulence and Generation of Zonal Jets
Nazarenko, Sergey; Quinn, Brenda
2009-09-11
We study quasigeostrophic (QG) and plasma drift turbulence within the Charney-Hasegawa-Mima (CHM) model. We focus on the zonostrophy, an extra invariant in the CHM model, and on its role in the formation of zonal jets. We use a generalized Fjoertoft argument for the energy, enstrophy, and zonostrophy and show that they cascade anisotropically into nonintersecting sectors in k space with the energy cascading towards large zonal scales. Using direct numerical simulations of the CHM equation, we show that zonostrophy is well conserved, and the three invariants cascade as predicted by the Fjoertoft argument.
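For reference, the CHM equation and two of its three quadratic invariants, energy and potential enstrophy, in a standard normalization (this is a sketch and may differ from the paper's exact conventions; the third invariant, zonostrophy, has a more involved spectral density and is not reproduced here):

```latex
\partial_t\!\left(\nabla^2\phi - F\phi\right) + \beta\,\partial_x\phi + J\!\left(\phi,\nabla^2\phi\right) = 0,
\qquad
E = \tfrac{1}{2}\int \left(|\nabla\phi|^2 + F\phi^2\right) d\mathbf{x},
\qquad
\Omega = \tfrac{1}{2}\int \left[(\nabla^2\phi)^2 + F\,|\nabla\phi|^2\right] d\mathbf{x}.
```

Because each invariant has a different spectral density, a Fjørtoft-style budget argument forces their fluxes into distinct, non-overlapping sectors of k space, with the energy flux directed towards large zonal scales.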
Residual zonal flows in tokamaks and stellarators at arbitrary wavelengths
NASA Astrophysics Data System (ADS)
Monreal, Pedro; Calvo, Iván; Sánchez, Edilberto; Parra, Félix I.; Bustos, Andrés; Könies, Axel; Kleiber, Ralf; Görler, Tobias
2016-04-01
In the linear collisionless limit, a zonal potential perturbation in a toroidal plasma relaxes, in general, to a non-zero residual value. Expressions for the residual value in tokamak and stellarator geometries, and for arbitrary wavelengths, are derived. These expressions involve averages over the lowest order particle trajectories, which typically cannot be evaluated analytically. In this work, an efficient numerical method for the evaluation of such expressions is reported. It is shown that this method is faster than direct gyrokinetic simulations performed with the Gene and EUTERPE codes. Calculations of the residual value in stellarators are provided for much shorter wavelengths than previously available in the literature. Electrons must be treated kinetically in stellarators because, unlike in tokamaks, kinetic electrons modify the residual value even at long wavelengths. This effect, which had already been predicted theoretically, is confirmed by gyrokinetic simulations.
Drift-wave turbulence and zonal flow generation.
Balescu, R
2003-10-01
Drift-wave turbulence in a plasma is analyzed on the basis of the wave Liouville equation, describing the evolution of the distribution function of wave packets (quasiparticles) characterized by position x and wave vector k. A closed kinetic equation is derived for the ensemble-averaged part of this function by the methods of nonequilibrium statistical mechanics. It has the form of a non-Markovian advection-diffusion equation describing coupled diffusion processes in x and k spaces. General forms of the diffusion coefficients are obtained in terms of Lagrangian velocity correlations. The latter are calculated in the decorrelation trajectory approximation, a method recently developed for an accurate measure of the important trapping phenomena of particles in the rugged electrostatic potential. The analysis of individual decorrelation trajectories provides an illustration of the fragmentation of drift-wave structures in the radial direction and the generation of long-wavelength structures in the poloidal direction that are identified as zonal flows.
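The wave Liouville equation referred to above, and the schematic form of the averaged equation, can be written as follows (a sketch: the equation actually derived in the paper is non-Markovian, so the simple diffusion form shown second should be read as its Markovian caricature):

```latex
\frac{\partial N}{\partial t}
+ \frac{\partial \omega}{\partial \mathbf{k}} \cdot \frac{\partial N}{\partial \mathbf{x}}
- \frac{\partial \omega}{\partial \mathbf{x}} \cdot \frac{\partial N}{\partial \mathbf{k}} = 0,
\qquad N = N(\mathbf{x}, \mathbf{k}, t),
```

```latex
\frac{\partial \langle N \rangle}{\partial t}
\;\approx\;
\frac{\partial}{\partial \mathbf{x}} \cdot \left( \mathsf{D}_{xx} \frac{\partial \langle N \rangle}{\partial \mathbf{x}} \right)
+ \frac{\partial}{\partial \mathbf{k}} \cdot \left( \mathsf{D}_{kk} \frac{\partial \langle N \rangle}{\partial \mathbf{k}} \right),
```

where ω(x, k) is the local drift-wave frequency and the diffusion tensors are expressed through Lagrangian velocity correlations of the quasiparticle trajectories.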
The role of zonal flows in reactive fluid closures
NASA Astrophysics Data System (ADS)
Weiland, Jan
2018-07-01
We will give an overview of results obtained by our reactive fluid model. It is characterised as a fluid model in which all moments with sources in the experiment are kept. Furthermore, full account is taken of the highest moments appearing in unexpanded denominators, also including full toroidicity. It has been demonstrated that the strength of zonal flows is dramatically larger in reactive fluid closures than in those which involve dissipation. This gives a direct connection between the fluid closure and the level of excitation of turbulence. This is because zonal flows are needed to absorb the inverse cascade in quasi 2D turbulence. This also explains the similarity in structure of the transport coefficients in our model, which has a reactive closure in the energy equation, and models which have a reactive closure because of zero ion temperature, such as the Hasegawa-Wakatani model. Our exact reactive closure unifies several well-known features of tokamak experiments such as the L-H transition, internal transport barriers and the nonlinear Dimits upshift of the critical gradient for onset of transport. It also gives transport of the same level as that in nonlinear gyrokinetic codes. Since these include the kinetic resonance, this confirms the validity of the thermodynamic properties of our model. Furthermore, we can show that while a strongly nonlinear model is needed in kinetic theory, a quasilinear model is sufficient in the fluid description. Thus our quasilinear fluid model will be adequate for treating all relevant problems in bulk transport. This is finally confirmed by the model's reproduction of the experimental power scaling of the confinement time τ_E ∼ P^(-2/3), which confirms the validity of our reactive fluid model. This also gives credibility to our ITER simulations including the H-mode barrier. A new result here is that alpha heating strongly reduces the slope of the H-mode barrier. This should significantly reduce the effects of ELMs.
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, T. L.; Lau, William K. M. (Technical Monitor)
2002-01-01
A characteristic feature of rainfall statistics is that they in general depend on the space and time scales over which rain data are averaged. As part of an earlier effort to determine the sampling error of satellite rain averages, a space-time model of rainfall statistics was developed to describe the statistics of gridded rain observed in GATE. The model allows one to compute the second moment statistics of space- and time-averaged rain rate, which can be fitted to satellite or rain gauge data to determine the four model parameters appearing in the precipitation spectrum: an overall strength parameter, a characteristic length separating the long and short wavelength regimes, a characteristic relaxation time for decay of the autocorrelation of the instantaneous local rain rate, and a certain 'fractal' power law exponent. For area-averaged instantaneous rain rate, this exponent governs the power law dependence of these statistics on the averaging length scale $L$ predicted by the model in the limit of small $L$. In particular, the variance of rain rate averaged over an $L \times L$ area exhibits a power law singularity as $L \rightarrow 0$. In the present work the model is used to investigate how the statistics of area-averaged rain rate over the tropical Western Pacific, measured with ship borne radar during TOGA COARE (Tropical Ocean Global Atmosphere Coupled Ocean Atmospheric Response Experiment) and gridded on a 2 km grid, depend on the size of the spatial averaging scale. Good agreement is found between the data and predictions from the model over a wide range of averaging length scales.
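The qualitative dependence of the variance of area-averaged rain rate on the averaging scale L can be illustrated numerically. The sketch below uses a generic synthetic field with an assumed power-law spectrum, not the GATE-fitted model or its parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# Synthetic rain-like field with power-law spatial correlations, built by
# filtering white noise in Fourier space (illustrative spectrum only).
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k = np.sqrt(kx**2 + ky**2)
k[0, 0] = 1.0                      # avoid division by zero at the mean mode
spectrum = k**-1.5                 # assumed power-law slope
field = np.fft.ifft2(np.fft.fft2(rng.normal(size=(n, n))) * spectrum).real

def block_variance(f, L):
    """Variance of the field after averaging over non-overlapping LxL blocks."""
    m = f.shape[0] // L * L
    blocks = f[:m, :m].reshape(m // L, L, m // L, L).mean(axis=(1, 3))
    return blocks.var()

sizes = [1, 2, 4, 8, 16, 32]
variances = [block_variance(field, L) for L in sizes]
# For a scale-free field the variance of the LxL average falls off
# approximately as a power law in L.
print(variances)
```

The singular behavior as L → 0 in the paper's model corresponds to the variance growing without bound as the averaging area shrinks relative to the correlation length.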
Compensation for use of monthly-averaged winds in numerical modeling
NASA Technical Reports Server (NTRS)
Parkinson, C. L.
1981-01-01
Ratios R of the monthly averaged wind speeds to the magnitudes of the monthly averaged wind vectors are presented over a 41 x 41 grid covering the Southern Ocean and the Antarctic continent. The ratio is found to vary from 1 to over 1000, with an average value of 1.86. These ratios R are relevant for converting sensible and latent heat fluxes calculated with mean monthly data to those calculated with 12-hourly data. The corresponding ratios alpha for wind stress, along with the angle deviations involved, are also presented over the same 41 x 41 grid. The values of alpha generally exceed those for R and average 2.66. Regions in zones of variable wind directions have larger R and alpha ratios, which over the ice-covered portions of the Southern Ocean average 2.74 and 4.35 respectively. Thus adjustments to compensate for the use of mean monthly wind velocities should be stronger for wind stress than for turbulent heat fluxes, and stronger over ice-covered regions than over regions with more persistent wind directions, e.g., those in the belt of mid-latitude westerlies.
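The ratio R is straightforward to compute from sub-monthly wind components. A minimal sketch with synthetic 12-hourly winds (the grid values above come from the paper's data, not from this illustration):

```python
import numpy as np

def wind_ratio(u, v):
    """R = (mean wind speed) / (magnitude of the mean wind vector).

    u, v: arrays of zonal/meridional wind components, e.g. 12-hourly samples
    over a month. R >= 1 always, with R = 1 only for a perfectly steady
    wind direction; variable directions partially cancel in the vector mean.
    """
    mean_speed = np.mean(np.hypot(u, v))
    speed_of_mean = np.hypot(np.mean(u), np.mean(v))
    return mean_speed / speed_of_mean

# Steady westerly: direction never changes, so R = 1.
u = np.full(60, 8.0)
v = np.zeros(60)
print(wind_ratio(u, v))          # -> 1.0

# Same speeds but direction swinging +/-60 degrees: vector mean shrinks, R > 1.
angles = np.linspace(-np.pi / 3, np.pi / 3, 60)
u2, v2 = 8.0 * np.cos(angles), 8.0 * np.sin(angles)
print(wind_ratio(u2, v2))
```

The stress ratio alpha behaves analogously but with quadratic weighting of the speed, which is why its values exceed those of R.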
Frequency-dependent behavior of the barotropic and baroclinic modes of zonal jet variability
NASA Astrophysics Data System (ADS)
Sheshadri, A.; Plumb, R. A.
2016-12-01
Stratosphere-troposphere interactions are frequently described in terms of the leading modes of variability, i.e. the annular modes. An idealized dynamical core model is used to explore the differences between the low- and high-frequency (periods greater and less than 30 days) behavior of the first two principal components of zonal mean zonal wind and eddy kinetic energy, i.e., the barotropic/baroclinic annular modes of variability of the extratropical circulation. The modes show similar spatial characteristics in the different frequency ranges considered; however, the ranking of the modes switches in some cases from one range to the other. There is some cancellation in the signatures of eddy heat flux and eddy kinetic energy in the leading low-pass and high-pass filtered zonal wind mode, partly explaining their small signature in the total. At low frequencies, the first zonal wind mode describes latitudinal shifts of both the midlatitude jet and its associated storm tracks, and the persistence of zonal wind anomalies appears to be sustained primarily by a baroclinic, rather than a barotropic, feedback. On shorter time scales, the behavior is more complicated and transient.
Zonal wind observations during a geomagnetic storm
NASA Technical Reports Server (NTRS)
Miller, N. J.; Spencer, N. W.
1986-01-01
In situ measurements taken by the Wind and Temperature Spectrometer (WATS) onboard the Dynamics Explorer 2 spacecraft during a geomagnetic storm display zonal wind velocities that are reduced in the corotational direction as the storm intensifies. The data were taken within the altitudes 275 to 475 km in the dusk local time sector equatorward of the auroral region. Characteristic variations in the value of the Dst index of horizontal geomagnetic field strength are used to monitor the storm evolution. The detected global rise in atmospheric gas temperature indicates the development of thermospheric heating. Concurrent with that heating, reductions in corotational wind velocities were measured equatorward of the auroral region. Just after the sudden commencement, while thermospheric heating is intense in both hemispheres, eastward wind velocities in the northern hemisphere show reductions ranging from 500 m/s over high latitudes to 30 m/s over the geomagnetic equator. After 10 hours of storm time, while northern thermospheric heating is diminishing, wind velocity reductions, distinct from those initially observed, begin to develop over southern latitudes. In the latter case, velocity reductions range from 300 m/s over the highest southern latitudes to 150 m/s over the geomagnetic equator and extend into the Northern Hemisphere. The observations highlight the interhemispheric asymmetry in the development of storm effects detected as enhanced gas temperatures and reduced eastward wind velocities. Zonal wind reductions over high latitudes can be attributed to the storm-induced equatorward spread of westward polar cap plasma convection and the resulting plasma-neutral collisions. However, those collisions are less significant over low latitudes; so zonal wind reductions over low latitudes must be attributed to an equatorward extension of a thermospheric circulation pattern disrupted by high latitude collisions between neutrals transported via eastward winds and ions
Convection driven zonal flows and vortices in the major planets.
Busse, F. H.
1994-06-01
The dynamical properties of convection in rotating cylindrical annuli and spherical shells are reviewed. Simple theoretical models and experimental simulations of planetary convection through the use of the centrifugal force in the laboratory are emphasized. The model of columnar convection in a cylindrical annulus not only serves as a guide to the dynamical properties of convection in rotating spheres; it is also of interest as a basic physical system that exhibits several dynamical properties in their most simple form. The generation of zonal mean flows is discussed in some detail and examples of recent numerical computations are presented. The exploration of the parameter space for the annulus model is not yet complete, and the theoretical exploration of convection in rotating spheres is still in the beginning phase. Quantitative comparisons with the observations of the dynamics of planetary atmospheres will have to await the inclusion in the models of the effects of magnetic fields and the deviations from the Boussinesq approximation.
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1977-01-01
Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
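In the fluctuating plume picture referenced above, a time-averaged crosswind profile can be viewed as the instantaneous (narrow) plume profile smeared by the probability distribution of the meandering plume centroid. The sketch below illustrates this with Gaussian shapes and assumed parameter values; it is a generic illustration, not the paper's parameterization:

```python
import numpy as np

# Crosswind coordinate (m)
y = np.linspace(-200, 200, 2001)
dy = y[1] - y[0]

sigma_inst = 10.0     # assumed spread of the instantaneous plume (m)
sigma_meander = 30.0  # assumed std of the meandering centroid position (m)

def gaussian(x, s):
    return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2 * np.pi))

inst = gaussian(y, sigma_inst)          # instantaneous crosswind profile
centroid_pdf = gaussian(y, sigma_meander)

# Time-averaged profile = convolution of the instantaneous profile with
# the centroid PDF (numerical convolution on the same grid).
avg = np.convolve(inst, centroid_pdf, mode="same") * dy

# For Gaussians the averaged profile is again Gaussian with
# sigma_avg^2 = sigma_inst^2 + sigma_meander^2 ~ (31.6 m)^2 here.
sigma_avg = np.sqrt(np.sum(avg * y**2) / np.sum(avg))
print(sigma_avg)
```

This variance decomposition is the reason instantaneous and time-average plume parameters require separate models in the estimation state vector.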
Price, Malcolm J; Welton, Nicky J; Briggs, Andrew H; Ades, A E
2011-01-01
Standard approaches to estimation of Markov models with data from randomized controlled trials tend either to make a judgment about which transition(s) treatments act on or to assume that treatment has a separate effect on every transition. An alternative is to fit a series of models that assume that treatment acts on specific transitions. Investigators can then choose among alternative models using goodness-of-fit statistics. However, structural uncertainty about any chosen parameterization will remain, and this may have implications for the resulting decision and the need for further research. We describe a Bayesian approach to model estimation and model selection. Structural uncertainty about which parameterization to use is accounted for using model averaging, and we developed a formula for calculating the expected value of perfect information (EVPI) in averaged models. Marginal posterior distributions are generated for each of the cost-effectiveness parameters using Markov Chain Monte Carlo simulation in WinBUGS, or Monte Carlo simulation in Excel (Microsoft Corp., Redmond, WA). We illustrate the approach with an example of treatments for asthma using aggregate-level data from a connected network of four treatments compared in three pair-wise randomized controlled trials. The standard errors of incremental net benefit using structured models are reduced by up to eight- or ninefold compared to the unstructured models, and the expected loss attaching to decision uncertainty by factors of several hundreds. Model averaging had considerable influence on the EVPI. Alternative structural assumptions can alter the treatment decision and have an overwhelming effect on model uncertainty and expected value of information. Structural uncertainty can be accounted for by model averaging, and the EVPI can be calculated for averaged models. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
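The basic EVPI calculation underlying the abstract can be sketched with simulated posterior draws: EVPI is the expected net benefit of choosing the best option per draw minus the net benefit of the option that is best on average. The numbers below are toy values standing in for MCMC output from an averaged model, not the asthma example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated posterior draws of net benefit for two treatments
# (illustrative means/spreads only).
n_draws = 100_000
nb = np.column_stack([
    rng.normal(1000.0, 400.0, n_draws),   # treatment A
    rng.normal(1050.0, 400.0, n_draws),   # treatment B
])

# Decision under current information: highest expected net benefit.
best_expected = nb.mean(axis=0).max()

# With perfect information we could pick the best option in every draw.
expected_best = nb.max(axis=1).mean()

evpi = expected_best - best_expected
print(evpi)   # positive whenever the preferred option varies across draws
```

In a model-averaged analysis the draws themselves would come from the mixture over structural models, so the EVPI automatically reflects structural as well as parameter uncertainty.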
Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H
2016-08-01
The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index or use the individual characteristics, which can result in a variety of estimated pollution-health effects. Other aspects of model choice may also affect the pollution-health estimate, such as the estimation of pollution concentrations and the choice of spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
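The pooling step in such an analysis can be sketched as follows. This is a generic Bayesian-model-averaging illustration with invented effect estimates and BIC-based weights (one common approximation to posterior model probabilities), not the paper's actual models or data:

```python
import numpy as np

# Effect estimates (e.g. log relative risk per unit NO2) and standard
# errors from several candidate models -- illustrative numbers only.
estimates = np.array([0.012, 0.018, 0.010, 0.015])
ses       = np.array([0.004, 0.006, 0.005, 0.004])
bic       = np.array([2104.2, 2101.8, 2106.5, 2102.9])   # assumed fits

# BMA-style weights from BIC differences: w_m proportional to exp(-dBIC/2).
delta = bic - bic.min()
w = np.exp(-delta / 2)
w /= w.sum()

# Model-averaged estimate; the averaged variance combines within-model
# variance and between-model spread of the estimates.
avg = np.sum(w * estimates)
var = np.sum(w * (ses**2 + (estimates - avg) ** 2))
print(avg, np.sqrt(var))
```

The between-model term is what makes the averaged uncertainty honest: models that disagree inflate the pooled standard error even if each is individually precise.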
Modelling lidar volume-averaging and its significance to wind turbine wake measurements
NASA Astrophysics Data System (ADS)
Meyer Forsting, A. R.; Troldborg, N.; Borraccino, A.
2017-05-01
Lidar velocity measurements need to be interpreted differently from conventional in-situ readings. A commonly ignored factor is "volume-averaging", which refers to the fact that a lidar does not sample at a single, distinct point but along its entire beam length. Its effect can be detrimental, especially in regions with large velocity gradients such as the rotor wake. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which considers both pulsed and continuous-wave lidar weighting functions. The flow field around a 2.3 MW turbine is simulated using Detached Eddy Simulation in combination with an actuator line to test the algorithm and investigate the potential impact of volume-averaging. Volume-averaging is captured accurately even with very few points discretising the lidar beam. The difference between a lidar and a point measurement is greatest at the wake edges and increases from 30% one rotor diameter (D) downstream of the rotor to 60% at 3D.
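The effect of volume-averaging can be illustrated with a one-dimensional sketch: weight the velocity along the beam by a range-weighting function and compare with the point value at the focus. A Lorentzian weighting is a common model for continuous-wave lidars; the width parameter and wake profile below are assumptions, not values from the paper:

```python
import numpy as np

# Range along the lidar beam (m), centred on the focus distance.
s = np.linspace(-100, 100, 4001)
ds = s[1] - s[0]

# Assumed Lorentzian range weighting for a continuous-wave lidar,
# with z_R a Rayleigh-length-like width parameter.
z_R = 15.0
w = (z_R / np.pi) / (s**2 + z_R**2)
w /= np.sum(w) * ds            # normalise numerically on the truncated grid

# Wake-like velocity along the beam: 8 m/s free stream with a
# sharp-edged 40% deficit over an assumed 80 m wide region.
u = np.where(np.abs(s) < 40.0, 4.8, 8.0)

u_point = u[len(s) // 2]           # true velocity at the focus
u_lidar = np.sum(w * u) * ds       # volume-averaged lidar estimate
print(u_point, u_lidar)
```

Because the weighting tails reach into the free stream, the lidar estimate inside the deficit is biased high, and the bias is largest where the along-beam gradient is strongest, i.e. near the wake edges.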
NASA Technical Reports Server (NTRS)
Kurzeja, R. J.; Haggard, K. V.; Grose, W. L.
1981-01-01
Three experiments have been performed using a three-dimensional, spectral quasi-geostrophic model in order to investigate the sensitivity of ozone transport to tropospheric orographic and thermal effects and to the zonal wind distribution. In the first experiment, the ozone distribution averaged over the last 30 days of a 60 day transport simulation was determined; in the second experiment, the transport simulation was repeated, but nonzonal orographic and thermal forcing was omitted; and in the final experiment, the simulation was conducted with the intensity and position of the stratospheric jets altered by addition of a Newtonian cooling term to the zonal-mean diabatic heating rate. Results of the three experiments are summarized by comparing the zonal-mean ozone distribution, the amplitude of eddy geopotential height, the zonal winds, and zonal-mean diabatic heating.
NASA Astrophysics Data System (ADS)
Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.
2014-11-01
Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth by empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of the model structural vs. model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty is far more important than model parametric uncertainty to estimate irrigation water requirement. Using the Reliability Ensemble Averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit due to water right of 400 mm, would be less frequently exceeded in case of the REA ensemble average (45%) in comparison to the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
NASA Astrophysics Data System (ADS)
Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.
2015-04-01
Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth by empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of the model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty among reference ET is far more important than model parametric uncertainty introduced by crop coefficients. These crop coefficients are used to estimate irrigation water requirement following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit due to water right of 400 mm, would be less frequently exceeded in case of the REA ensemble average (45%) in comparison to the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
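The weighting idea behind reliability ensemble averaging can be sketched in a few lines. The version below uses only the bias criterion (distance from a reference value); the full REA method of Giorgi and Mearns also includes a convergence criterion, and all numbers here are illustrative, not from the Murray-Darling study:

```python
import numpy as np

# Annual irrigation water requirement (mm) predicted by an ensemble of
# models, plus a reference value -- purely illustrative numbers.
predictions = np.array([380.0, 420.0, 455.0, 400.0, 510.0, 365.0])
reference = 405.0              # e.g. an observation-based estimate

# REA-style reliability factor from the bias criterion only.
eps = 25.0                     # assumed natural-variability scale (mm)
bias = np.abs(predictions - reference)
r = np.minimum(1.0, eps / np.maximum(bias, 1e-9))

weights = r / r.sum()
rea_average = np.sum(weights * predictions)
equal_average = predictions.mean()
print(equal_average, rea_average)
```

Down-weighting the outlying model pulls the ensemble estimate towards the reference, which is the mechanism by which REA reduced the predictive uncertainty and shifted the exceedance probabilities reported above.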
Representing the performance of cattle finished on an all forage diet in process-based whole farm system models has presented a challenge. To address this challenge, a study was done to evaluate average daily gain (ADG) predictions of the Integrated Farm System Model (IFSM) for steers consuming all-...
NASA Astrophysics Data System (ADS)
O'Brien, Enda; McKinstry, Alastair; Ralph, Adam
2015-04-01
Building on previous work presented at EGU 2013 (http://www.sciencedirect.com/science/article/pii/S1876610213016068 ), more results are available now from a different wind-farm in complex terrain in southwest Ireland. The basic approach is to interpolate wind-speed forecasts from an operational weather forecast model (i.e., HARMONIE in the case of Ireland) to the precise location of each wind-turbine, and then use Bayesian model averaging (BMA; with statistical information collected from a prior training-period of e.g., 25 days) to remove systematic biases. Bias-corrected wind-speed forecasts (and associated power-generation forecasts) are then provided twice daily (at 5am and 5pm) out to 30 hours, with each forecast validation fed back to BMA for future learning. 30-hr forecasts from the operational Met Éireann HARMONIE model at 2.5km resolution have been validated against turbine SCADA observations since Jan. 2014. An extra high-resolution (0.5km grid-spacing) HARMONIE configuration has been run since Nov. 2014 as an extra member of the forecast "ensemble". A new version of HARMONIE with extra filters designed to stabilize high-resolution configurations has been run since Jan. 2015. Measures of forecast skill and forecast errors will be provided, and the contributions made by the various physical and computational enhancements to HARMONIE will be quantified.
A Stochastic Kinematic Model of Class Averaging in Single-Particle Electron Microscopy
Park, Wooram; Midgett, Charles R.; Madden, Dean R.; Chirikjian, Gregory S.
2011-01-01
Single-particle electron microscopy is an experimental technique that is used to determine the 3D structure of biological macromolecules and the complexes that they form. In general, image processing techniques and reconstruction algorithms are applied to micrographs, which are two-dimensional (2D) images taken by electron microscopes. Each of these planar images can be thought of as a projection of the macromolecular structure of interest from an a priori unknown direction. A class is defined as a collection of projection images with a high degree of similarity, presumably resulting from taking projections along similar directions. In practice, micrographs are very noisy and those in each class are aligned and averaged in order to reduce the background noise. Errors in the alignment process are inevitable due to noise in the electron micrographs. This error results in blurry averaged images. In this paper, we investigate how blurring parameters are related to the properties of the background noise in the case when the alignment is achieved by matching the mass centers and the principal axes of the experimental images. We observe that the background noise in micrographs can be treated as Gaussian. Using the mean and variance of the background Gaussian noise, we derive equations for the mean and variance of translational and rotational misalignments in the class averaging process. This defines a Gaussian probability density on the Euclidean motion group of the plane. Our formulation is validated by convolving the derived blurring function representing the stochasticity of the image alignments with the underlying noiseless projection and comparing with the original blurry image. PMID:21660125
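The alignment step whose errors the paper models can be sketched for the translational part: estimate each image's intensity centroid, shift it to the image centre, and average. The example below is a toy construction (a noisy disc standing in for a projection); rotational alignment via principal axes is analogous and omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_image(shift, noise=0.1, n=64):
    """Noisy 'projection': a bright disc translated by an unknown shift."""
    y, x = np.mgrid[:n, :n]
    img = ((x - n / 2 - shift[0]) ** 2 + (y - n / 2 - shift[1]) ** 2 < 64).astype(float)
    return img + rng.normal(0.0, noise, (n, n))

def center_by_mass(img, thresh=0.5):
    """Translational alignment: move the centroid of above-threshold
    intensity to the image centre (rotational part omitted in this sketch)."""
    n = img.shape[0]
    y, x = np.mgrid[:n, :n]
    mask = (img > thresh).astype(float)
    cx = (mask * x).sum() / mask.sum()
    cy = (mask * y).sum() / mask.sum()
    return np.roll(img, (int(np.rint(n / 2 - cy)), int(np.rint(n / 2 - cx))),
                   axis=(0, 1))

images = [make_image(rng.integers(-8, 9, size=2)) for _ in range(50)]

raw_avg = np.mean(images, axis=0)                          # blurred by misalignment
aligned_avg = np.mean([center_by_mass(i) for i in images], axis=0)

# The aligned average is sharper: its peak approaches the disc value 1,
# while the unaligned average is smeared over the range of shifts.
print(raw_avg.max(), aligned_avg.max())
```

With realistic micrograph noise the centroid and principal-axis estimates themselves become random, which is exactly the residual misalignment that the paper's Gaussian blurring model on the planar motion group describes.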
A hybrid Reynolds averaged/PDF closure model for supersonic turbulent combustion
NASA Technical Reports Server (NTRS)
Frankel, Steven H.; Hassan, H. A.; Drummond, J. Philip
1990-01-01
A hybrid Reynolds averaged/assumed pdf approach has been developed and applied to the study of turbulent combustion in a supersonic mixing layer. This approach is used to address the 'laminar-like' treatment of the thermochemical terms that appear in the conservation equations. Calculations were carried out for two experiments involving H2-air supersonic turbulent mixing. Two different forms of the pdf were implemented. In general, the results show modest improvement from previous calculations. Moreover, the results appear to be somewhat independent of the form of the assumed pdf.
Application of the Hilbert space average method on heat conduction models.
Michel, Mathias; Gemmer, Jochen; Mahler, Günter
2006-01-01
We analyze closed one-dimensional chains of weakly coupled many level systems, by means of the so-called Hilbert space average method (HAM). Subject to some concrete conditions on the Hamiltonian of the system, our theory predicts energy diffusion with respect to a coarse-grained description for almost all initial states. Close to the respective equilibrium, we investigate this behavior in terms of heat transport and derive the heat conduction coefficient. Thus, we are able to show that both heat (energy) diffusive behavior as well as Fourier's law follows from and is compatible with a reversible Schrödinger dynamics on the complete level of description.
Huang, Lei
2015-01-01
To solve the problem in which the conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using a robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of observation noise are used to achieve the estimated mean and variance of the observation noise. Using the robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of a rapid convergence and high accuracy. Thus, the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409
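The core idea of treating model parameters as Kalman filter state can be shown in a simplified scalar analogue: an AR(1) process whose coefficient is estimated recursively, with the lagged observation acting as the measurement matrix. This sketch assumes known observation noise and omits the robust time-varying noise estimation described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated "gyro noise": an AR(1) process y_t = a * y_{t-1} + e_t.
a_true, sigma_e = 0.8, 0.05
n = 2000
y = np.zeros(n)
for t in range(1, n):
    y[t] = a_true * y[t - 1] + rng.normal(0.0, sigma_e)

# Kalman filter with the AR coefficient as the (nearly static) state:
#   state:       a_t = a_{t-1}              (random walk, tiny q)
#   observation: y_t = y_{t-1} * a_t + e_t  (H_t = y_{t-1})
a_hat, P = 0.0, 1.0        # initial estimate and variance
q, r = 1e-8, sigma_e**2    # process / observation noise (r assumed known)
for t in range(1, n):
    P += q                              # predict
    H = y[t - 1]
    S = H * P * H + r                   # innovation variance
    K = P * H / S                       # Kalman gain
    a_hat += K * (y[t] - H * a_hat)     # update
    P *= (1.0 - K * H)

print(a_hat)   # converges towards the true coefficient 0.8
```

The recursive update is what gives the method its fast convergence relative to batch ARMA fitting: each new sample refines the parameter estimate immediately, so far fewer samples are needed.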
Zonal structure and variability of the Western Pacific dynamic warm pool edge in CMIP5
NASA Astrophysics Data System (ADS)
Brown, Jaclyn N.; Langlais, Clothilde; Maes, Christophe
2014-06-01
The equatorial edge of the Western Pacific Warm Pool is operationally identified by one isotherm ranging between 28° and 29 °C, chosen to align with the interannual variability of strong zonal salinity gradients and the convergence of zonal ocean currents. The simulation of this edge is examined in 19 models from the World Climate Research Program Coupled Model Intercomparison Project Phase 5 (CMIP5), over the historical period from 1950 to 2000. The dynamic warm pool edge (DWPE), where the zonal currents converge, is difficult to determine from limited observations and biased models. A new analysis technique is introduced where a proxy for DWPE is determined by the isotherm that most closely correlates with the movements of the strong salinity gradient. It can therefore be a different isotherm in each model. The DWPE is simulated much closer to observations than if a direct temperature-only comparison is made. Aspects of the DWPE remain difficult for coupled models to simulate, including the mean longitude, the interannual excursions, and the zonal convergence of ocean currents. Some models have only very weak salinity gradients trapped to the western side of the basin, making it difficult to even identify a DWPE. The models' DWPE are generally 1-2 °C cooler than observed. In line with theory, the magnitude of the zonal migrations of the DWPE is strongly related to the amplitude of the Nino3.4 SST index. Nevertheless, a better simulation of the mean location of the DWPE does not necessarily improve the amplitude of a model's ENSO. It is also found that in a few models (CSIROMk3.6, inmcm and inmcm4-esm) the warm pool displacements result from a net heating or cooling rather than a zonal advection of warm water. The simulation of the DWPE has implications for ENSO dynamics when considering ENSO paradigms such as the delayed action oscillator mechanism, the Advective-Reflective oscillator, and the zonal-advective feedback. These are also discussed in the context
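The proxy-selection step described above, picking the isotherm whose displacements best track the salinity front, reduces to a correlation search. A minimal sketch with synthetic longitude time series (the coupling structure below is invented for illustration; it is not CMIP5 output):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic interannual time series of longitudes (degrees E) for the
# salinity-front position and several candidate isotherms.
n_years = 50
front = 180.0 + 10.0 * rng.normal(size=n_years)

candidates = {}
for temp in [27.5, 28.0, 28.5, 29.0]:
    # Toy construction: assume the 28.5 C isotherm tracks the front
    # closely while the others drift more independently.
    coupling = 0.9 if temp == 28.5 else 0.3
    candidates[temp] = (180.0 + coupling * (front - 180.0)
                        + 5.0 * rng.normal(size=n_years))

def best_isotherm(front, candidates):
    """Isotherm whose displacements correlate best with the salinity front;
    this defines the dynamic warm pool edge proxy for a given model."""
    corr = {t: np.corrcoef(front, lon)[0, 1] for t, lon in candidates.items()}
    return max(corr, key=corr.get)

print(best_isotherm(front, candidates))
```

Because the selected isotherm can differ between models, the comparison is made in terms of the dynamically defined edge rather than a fixed temperature, which is what brings the simulations closer to observations.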
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.
2009-01-01
In current practice, it is often difficult to draw firm conclusions about turbulence model accuracy when performing multi-code CFD studies ostensibly using the same model because of inconsistencies in model formulation or implementation in different codes. This paper describes an effort to improve the consistency, verification, and validation of turbulence models within the aerospace community through a website database of verification and validation cases. Some of the variants of two widely-used turbulence models are described, and two independent computer codes (one structured and one unstructured) are used in conjunction with two specific versions of these models to demonstrate consistency with grid refinement for several representative problems. Naming conventions, implementation consistency, and thorough grid resolution studies are key factors necessary for success.
Khangaonkar, Tarang; Yang, Zhaoqing; Kim, Tae Yun
2011-07-20
Through extensive field data collection and analysis efforts conducted since the 1950s, researchers have established an understanding of the characteristic features of circulation in Puget Sound. The pattern ranges from the classic fjordal behavior in some basins, with shallow brackish outflow and compensating inflow immediately below, to the typical two-layer flow observed in many partially mixed estuaries with saline inflow at depth. An attempt at reproducing this behavior by fitting an analytical formulation to past data is presented, followed by the application of a three-dimensional circulation and transport numerical model. The analytical treatment helped identify key physical processes and parameters, but quickly reconfirmed that the response is complex and would require site-specific parameterization to include effects of sills and interconnected basins. The numerical model of Puget Sound, developed using an unstructured-grid finite volume method, allowed resolution of the sub-basin geometric features, including the presence of major islands, and site-specific strong advective vertical mixing created by bathymetry and multiple sills. The model was calibrated using available recent short-term oceanographic time series data sets from different parts of the Puget Sound basin. The results are compared against (1) recent velocity and salinity data collected in Puget Sound from 2006 and (2) a composite data set from previously analyzed historical records, mostly from the 1970s. The results highlight the ability of the model to reproduce velocity and salinity profile characteristics, their variations among Puget Sound subbasins, and tidally averaged circulation. Sensitivity of residual circulation to variations in freshwater inflow and the resulting salinity gradient in fjordal sub-basins of Puget Sound is examined.
NASA Astrophysics Data System (ADS)
Umut Caglar, Mehmet; Pal, Ranadip
2010-10-01
The central dogma of molecular biology states that "information cannot be transferred back from protein to either protein or nucleic acid." However, this statement does not hold exactly in many cases. There are many feedback loops and interactions between different levels of the system. These types of interactions are hard to analyze due to the lack of data at the cellular level and the probabilistic nature of the interactions. Probabilistic models like the Stochastic Master Equation (SME) or deterministic models like differential equations (DE) can be used to analyze these types of interactions. SME models based on the chemical master equation (CME) can provide a detailed representation of a genetic regulatory system, but their use is restricted by large data requirements and high computational cost. Differential equation models, on the other hand, have low computational costs and are much better suited to generating control procedures for the system, but they are not adequate for investigating the probabilistic nature of the interactions. In this work the success of the mapping between SME and DE models is analyzed, and the success of a control policy generated by the DE model with respect to the SME model is examined. Index Terms: Stochastic Master Equation models, Differential Equation models, Control Policy Design, Systems Biology
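The trade-off described above can be made concrete on the simplest gene-expression model, a birth-death process. The sketch below (illustrative rates, not taken from the paper) compares the deterministic DE steady state with samples from an exact stochastic simulation of the CME:

```python
import random

# Birth-death gene-expression model (illustrative rates): production at
# constant rate k, degradation at rate gamma per molecule.
k, gamma = 10.0, 1.0

def ode_steady_state():
    """Deterministic DE model: dx/dt = k - gamma*x has fixed point k/gamma."""
    return k / gamma

def gillespie_sample(t_end, seed):
    """State at time t_end from an exact stochastic simulation of the CME."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    while True:
        birth, death = k, gamma * x
        total = birth + death
        t += rng.expovariate(total)   # waiting time to the next reaction
        if t > t_end:
            return x
        if rng.random() * total < birth:
            x += 1
        else:
            x -= 1

samples = [gillespie_sample(20.0, seed) for seed in range(200)]
sme_mean = sum(samples) / len(samples)
de_mean = ode_steady_state()
```

The DE model reproduces the mean cheaply, while the stochastic samples also expose the distribution around it, which is the information lost in the deterministic mapping.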
Text extraction via an edge-bounded averaging and a parametric character model
NASA Astrophysics Data System (ADS)
Fan, Jian
2003-01-01
We present a deterministic text extraction algorithm that relies on three basic assumptions: color/luminance uniformity of the interior region, closed boundaries of sharp edges and the consistency of local contrast. The algorithm is basically independent of the character alphabet, text layout, font size and orientation. The heart of this algorithm is an edge-bounded averaging for the classification of smooth regions that enhances robustness against noise without sacrificing boundary accuracy. We have also developed a verification process to clean up the residue of incoherent segmentation. Our framework provides a symmetric treatment for both regular and inverse text. We have proposed three heuristics for identifying the type of text from a cluster consisting of two types of pixel aggregates. Finally, we have demonstrated the advantages of the proposed algorithm over adaptive thresholding and block-based clustering methods in terms of boundary accuracy, segmentation coherency, and capability to identify inverse text and separate characters from background patches.
Shekarchi, Sayedali; Hallam, John; Christensen-Dalsgaard, Jakob
2013-11-01
Head-related transfer functions (HRTFs) are generally large datasets, which can be an important constraint for embedded real-time applications. A method is proposed here to reduce redundancy and compress the datasets. In this method, HRTFs are first compressed by conversion into autoregressive-moving-average (ARMA) filters whose coefficients are calculated using Prony's method. Such filters are specified by a few coefficients which can generate the full head-related impulse responses (HRIRs). Next, Legendre polynomials (LPs) are used to compress the ARMA filter coefficients. LPs are derived on the sphere and form an orthonormal basis set for spherical functions. Higher-order LPs capture increasingly fine spatial details. The number of LPs needed to represent an HRTF, therefore, is indicative of its spatial complexity. The results indicate that compression ratios can exceed 98% while maintaining a spectral error of less than 4 dB in the recovered HRTFs.
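Prony's method reduces the ARMA fit to two linear least-squares problems: one for the denominator from the linear-prediction equations on the tail of the impulse response, and one for the numerator from its first samples. A minimal NumPy sketch (illustrative filter orders, not the authors' implementation) recovers a known one-pole filter exactly:

```python
import numpy as np

def prony(h, p, q):
    """Fit ARMA numerator b (length q+1) and denominator a (length p+1)
    to an impulse response h via Prony's method (least squares)."""
    h = np.asarray(h, dtype=float)
    N = len(h)
    # Denominator from h[n] = -sum_{k=1..p} a[k] h[n-k] for n > q.
    rows = np.array([[h[n - k] for k in range(1, p + 1)] for n in range(q + 1, N)])
    rhs = np.array([-h[n] for n in range(q + 1, N)])
    a_tail, *_ = np.linalg.lstsq(rows, rhs, rcond=None)
    a = np.concatenate(([1.0], a_tail))
    # Numerator from b[n] = sum_k a[k] h[n-k] for n = 0..q.
    b = np.array([sum(a[k] * h[n - k] for k in range(min(n, p) + 1))
                  for n in range(q + 1)])
    return b, a

def impulse_response(b, a, n):
    """First n samples of the impulse response of the filter (b, a)."""
    h = np.zeros(n)
    for i in range(n):
        acc = b[i] if i < len(b) else 0.0
        acc -= sum(a[k] * h[i - k] for k in range(1, min(i, len(a) - 1) + 1))
        h[i] = acc
    return h

h_true = impulse_response([1.0, 0.5], [1.0, -0.9], 50)   # known ARMA(1,1)
b_fit, a_fit = prony(h_true, p=1, q=1)
err = np.max(np.abs(impulse_response(b_fit, a_fit, 50) - h_true))
```

Here two fitted coefficients regenerate a 50-sample response, which is the compression mechanism the abstract describes before the Legendre-polynomial stage.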
Acute Zonal Cone Photoreceptor Outer Segment Loss
Sandhu, Harpal S.; Serrano, Leona W.; Traband, Anastasia; Lau, Marisa K.; Adamus, Grazyna; Avery, Robert A.
2017-01-01
Importance: The diagnostic path presented narrows down the cause of acute vision loss to the cone photoreceptor outer segment and will refocus the search for the cause of similar currently idiopathic conditions. Objective: To describe the structural and functional associations found in a patient with acute zonal occult photoreceptor loss. Design, Setting, and Participants: A case report of an adolescent boy with acute visual field loss despite a normal fundus examination performed at a university teaching hospital. Main Outcomes and Measures: Results of a complete ophthalmic examination, full-field flash electroretinography (ERG) and multifocal ERG, light-adapted achromatic and 2-color dark-adapted perimetry, and microperimetry. Imaging was performed with spectral-domain optical coherence tomography (SD-OCT), near-infrared (NIR) and short-wavelength (SW) fundus autofluorescence (FAF), and NIR reflectance (REF). Results: The patient was evaluated within a week of the onset of a scotoma in the nasal field of his left eye. Visual acuity was 20/20 OU, and color vision was normal in both eyes. Results of the fundus examination and of SW-FAF and NIR-FAF imaging were normal in both eyes, whereas NIR-REF imaging showed a region of hyporeflectance temporal to the fovea that corresponded with a dense relative scotoma noted on light-adapted static perimetry in the left eye. Loss in the photoreceptor outer segment detected by SD-OCT co-localized with an area of dense cone dysfunction detected on light-adapted perimetry and multifocal ERG but with near-normal rod-mediated vision according to results of 2-color dark-adapted perimetry. Full-field flash ERG findings were normal in both eyes. The outer nuclear layer and inner retinal thicknesses were normal. Conclusions and Relevance: Localized, isolated cone dysfunction may represent the earliest photoreceptor abnormality or a distinct entity within the acute zonal occult outer retinopathy complex. Acute zonal occult outer retinopathy
Acute Zonal Cone Photoreceptor Outer Segment Loss.
Aleman, Tomas S; Sandhu, Harpal S; Serrano, Leona W; Traband, Anastasia; Lau, Marisa K; Adamus, Grazyna; Avery, Robert A
2017-05-01
The diagnostic path presented narrows down the cause of acute vision loss to the cone photoreceptor outer segment and will refocus the search for the cause of similar currently idiopathic conditions. To describe the structural and functional associations found in a patient with acute zonal occult photoreceptor loss. A case report of an adolescent boy with acute visual field loss despite a normal fundus examination performed at a university teaching hospital. Results of a complete ophthalmic examination, full-field flash electroretinography (ERG) and multifocal ERG, light-adapted achromatic and 2-color dark-adapted perimetry, and microperimetry. Imaging was performed with spectral-domain optical coherence tomography (SD-OCT), near-infrared (NIR) and short-wavelength (SW) fundus autofluorescence (FAF), and NIR reflectance (REF). The patient was evaluated within a week of the onset of a scotoma in the nasal field of his left eye. Visual acuity was 20/20 OU, and color vision was normal in both eyes. Results of the fundus examination and of SW-FAF and NIR-FAF imaging were normal in both eyes, whereas NIR-REF imaging showed a region of hyporeflectance temporal to the fovea that corresponded with a dense relative scotoma noted on light-adapted static perimetry in the left eye. Loss in the photoreceptor outer segment detected by SD-OCT co-localized with an area of dense cone dysfunction detected on light-adapted perimetry and multifocal ERG but with near-normal rod-mediated vision according to results of 2-color dark-adapted perimetry. Full-field flash ERG findings were normal in both eyes. The outer nuclear layer and inner retinal thicknesses were normal. Localized, isolated cone dysfunction may represent the earliest photoreceptor abnormality or a distinct entity within the acute zonal occult outer retinopathy complex. Acute zonal occult outer retinopathy should be considered in patients with acute vision loss and abnormalities on NIR-REF imaging, especially if
2012 - 2016 Corporate Average Fuel Economy compliance and effects modeling system documentation
DOT National Transportation Integrated Search
2010-03-01
The Volpe National Transportation Systems Center (Volpe Center) of the United States Department of Transportation's Research and Innovative Technology Administration has developed a modeling system to assist the National Highway Traffic Safety Admini...
2017 - 2025 Corporate Average Fuel Economy Compliance and Effects Modeling System Documentation.
DOT National Transportation Integrated Search
2012-08-31
The Volpe National Transportation Systems Center (Volpe Center) of the United States Department of Transportation's Research and Innovative Technology Administration has developed a modeling system to assist the National Highway Traffic Safety Admi...
A data-driven model for estimating industry average numbers of hospital security staff.
Vellani, Karim H; Emery, Robert J; Reingle Gonzalez, Jennifer M
2015-01-01
In this article the authors report the results of an expanded survey, financed by the International Healthcare Security and Safety Foundation (IHSSF), applied to the development of a model for determining the number of security officers required by a hospital.
NASA Astrophysics Data System (ADS)
Mortensen, Mikael; Langtangen, Hans Petter; Wells, Garth N.
2011-09-01
Finding an appropriate turbulence model for a given flow case usually calls for extensive experimentation with both models and numerical solution methods. This work presents the design and implementation of a flexible, programmable software framework for assisting with numerical experiments in computational turbulence. The framework targets Reynolds-averaged Navier-Stokes models, discretized by finite element methods. The novel implementation makes use of Python and the FEniCS package, the combination of which leads to compact and reusable code, where model- and solver-specific code resemble closely the mathematical formulation of equations and algorithms. The presented ideas and programming techniques are also applicable to other fields that involve systems of nonlinear partial differential equations. We demonstrate the framework in two applications and investigate the impact of various linearizations on the convergence properties of nonlinear solvers for a Reynolds-averaged Navier-Stokes model.
2014-04-01
as a function of the pulse duty cycle PDC is [1]: ΔC/N0 = 20 log(1 − PDC) (1), with PDC ≜ PW × PRF (2), where PW represents the pulse width (sec) and PRF is ... corresponding degradation in C/N0 should now be modeled as ΔC/N0 = 20 log(1 − PDC_LIM) (3), with PDC_LIM ≜ PDC × τ_obs/T_TC (4). The degradation model of Eqns. (3) and (4) ... cycle that is the product of the duty cycle of the pulsed waveform (PDC) and the duty cycle of the gating waveform (τ_obs/T_TC). While such a model
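The quoted degradation formulas are straightforward to evaluate. The helper below assumes base-10 logarithms (the dB convention) and illustrative pulse parameters:

```python
import math

def cn0_degradation_db(pdc):
    """C/N0 degradation (dB) for pulse duty cycle pdc, per Eqn. (1);
    a base-10 logarithm is assumed, as is conventional for dB."""
    return 20.0 * math.log10(1.0 - pdc)

def cn0_degradation_limited_db(pw, prf, tau_obs, t_tc):
    """Eqns. (3)-(4): effective duty cycle when blanking applies only during
    an observation window tau_obs out of every t_tc seconds."""
    pdc_lim = (pw * prf) * (tau_obs / t_tc)
    return 20.0 * math.log10(1.0 - pdc_lim)

loss_half = cn0_degradation_db(0.5)   # 50% duty cycle: about -6.02 dB
# Illustrative parameters: 1 ms pulses at 100 Hz, gated 20% of the time.
loss_lim = cn0_degradation_limited_db(pw=1e-3, prf=100.0, tau_obs=0.2, t_tc=1.0)
```

As expected, limiting the blanking to the observation window shrinks the effective duty cycle and hence the C/N0 loss.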
NASA Astrophysics Data System (ADS)
Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2015-08-01
The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to
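The underlying model, that a measured profile equals the true profile convolved with the detector response function, can be sketched with a top-hat response. The edge shape, 2 mm intrinsic penumbra, and 6 mm aperture below are illustrative assumptions, not the CC13 specifics:

```python
import math
import numpy as np

dx = 0.1
x = np.arange(-30.0, 30.0, dx)        # off-axis distance, mm
# Hypothetical "true" field edge at x = 0 with a ~2 mm intrinsic penumbra.
real = np.array([0.5 * (1.0 - math.erf(xi / 2.0)) for xi in x])

# Detector response: top-hat over an assumed 6 mm chamber diameter.
n_k = int(6.0 / dx)
kernel = np.ones(n_k) / n_k
# Edge-pad before convolving so boundary effects do not distort the profile.
measured = np.convolve(np.pad(real, n_k, mode="edge"), kernel, mode="same")[n_k:-n_k]

def penumbra_width(profile):
    """Distance between the 80% and 20% crossings of the edge."""
    x80 = x[np.argmin(np.abs(profile - 0.8))]
    x20 = x[np.argmin(np.abs(profile - 0.2))]
    return abs(x20 - x80)

w_real = penumbra_width(real)       # intrinsic penumbra
w_meas = penumbra_width(measured)   # broadened by volume averaging
```

The broadened penumbra of `measured` is what the chamber reports; the reoptimization strategy in the abstract matches TPS profiles to measurement only after applying this same convolution to both.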
Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.
Shinzato, Takashi
2015-01-01
In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach.
Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model
Shinzato, Takashi
2015-01-01
In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach. PMID:26225761
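The object of study, the minimal investment risk of the mean-variance model, is the optimum of a quadratic program; under only a budget constraint it has the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A sketch with synthetic return data (illustrative, not the paper's replica analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic return history: 200 periods for N = 10 assets (illustrative).
returns = rng.normal(0.0, 1.0, size=(200, 10))
sigma = np.cov(returns, rowvar=False)

# Minimal-risk weights under the budget constraint sum(w) = 1:
#   w* = Sigma^{-1} 1 / (1' Sigma^{-1} 1),  minimal risk = w*' Sigma w*.
ones = np.ones(10)
s_inv_1 = np.linalg.solve(sigma, ones)
w = s_inv_1 / (ones @ s_inv_1)
minimal_risk = w @ sigma @ w

# Any other feasible portfolio (e.g. equal weights) carries at least this risk.
ew = ones / 10.0
ew_risk = ew @ sigma @ ew
```

The paper's self-averaging analysis concerns how this minimal risk concentrates around its expectation as the number of assets grows; the snippet only fixes the quantity being analyzed.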
NASA Astrophysics Data System (ADS)
Skorobogatiy, Maksim; Sadasivan, Jayesh; Guerboukha, Hichem
2018-05-01
In this paper, we first discuss the main types of noise in a typical pump-probe system, and then focus specifically on terahertz time domain spectroscopy (THz-TDS) setups. We then introduce four statistical models for the noisy pulses obtained in such systems, and detail rigorous mathematical algorithms to de-noise such traces, find the proper averages and characterise various types of experimental noise. Finally, we perform a comparative analysis of the performance, advantages and limitations of the algorithms by testing them on the experimental data collected using a particular THz-TDS system available in our laboratories. We conclude that using advanced statistical models for trace averaging results in the fitting errors that are significantly smaller than those obtained when only a simple statistical average is used.
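As a baseline for comparison, the "simple statistical average" of repeated traces suppresses white noise by roughly the square root of the number of traces; the paper's point is that model-based averaging improves on this baseline. Pulse shape and noise level below are illustrative:

```python
import random
import statistics

rng = random.Random(42)

# A repeatable "pulse" plus white noise, re-measured n_traces times
# (pulse shape and noise level are illustrative).
pulse = [1.0 if 40 <= i < 60 else 0.0 for i in range(100)]

def measure():
    return [p + rng.gauss(0.0, 0.5) for p in pulse]

n_traces = 100
traces = [measure() for _ in range(n_traces)]
averaged = [sum(t[i] for t in traces) / n_traces for i in range(100)]

# Residual noise on the flat baseline (first 40 samples): the simple
# average suppresses white noise by roughly sqrt(n_traces).
noise_single = statistics.stdev(traces[0][:40])
noise_avg = statistics.stdev(averaged[:40])
```

Correlated noise sources such as laser drift and timing jitter do not average down this way, which is what motivates the statistical pulse models in the paper.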
Optimal averaging of soil moisture predictions from ensemble land surface model simulations
The correct interpretation of ensemble information obtained from the parallel implementation of multiple land surface models (LSMs) requires information concerning the LSM ensemble’s mutual error covariance. Here we propose a new technique for obtaining such information using an instrumental variabl...
Optimal averaging of soil moisture predictions from ensemble land surface model simulations
The correct interpretation of ensemble soil moisture information obtained from the parallel implementation of multiple land surface models (LSMs) requires information concerning the LSM ensemble’s mutual error covariance. Here we propose a new technique for obtaining such information using an inst...
Michael J. Erickson; Brian A. Colle; Joseph J. Charney
2012-01-01
The performance of a multimodel ensemble over the northeast United States is evaluated before and after applying bias correction and Bayesian model averaging (BMA). The 13-member Stony Brook University (SBU) ensemble at 0000 UTC is combined with the 21-member National Centers for Environmental Prediction (NCEP) Short-Range Ensemble Forecast (SREF) system at 2100 UTC....
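The two post-processing steps can be sketched as follows. The inverse-MSE weighting below is a crude stand-in for the EM-estimated BMA weights, and all numbers are invented:

```python
import statistics

# Toy verification set: observations and three members with different biases.
obs = [10.0, 12.0, 11.0, 13.0, 12.5, 11.5]
members = {
    "m1": [12.1, 14.0, 13.2, 15.1, 14.4, 13.6],   # warm bias ~ +2
    "m2": [9.4, 11.6, 10.5, 12.4, 12.1, 11.0],    # small bias
    "m3": [7.9, 10.1, 9.2, 11.0, 10.4, 9.6],      # cold bias ~ -2
}

# Step 1: bias correction (remove each member's mean error).
corrected = {}
for name, fc in members.items():
    bias = statistics.mean(f - o for f, o in zip(fc, obs))
    corrected[name] = [f - bias for f in fc]

# Step 2: weights inversely proportional to post-correction MSE
# (a simplified stand-in for the EM-estimated BMA weights).
mse = {n: statistics.mean((f - o) ** 2 for f, o in zip(fc, obs))
       for n, fc in corrected.items()}
inv = {n: 1.0 / m for n, m in mse.items()}
total = sum(inv.values())
weights = {n: v / total for n, v in inv.items()}

blend = [sum(weights[n] * corrected[n][i] for n in members)
         for i in range(len(obs))]
```

Full BMA additionally yields a predictive distribution, not just a weighted point forecast, which is its main advantage over simple weighting schemes like this one.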
ERIC Educational Resources Information Center
Doerann-George, Judith
The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations; normal,…
Numerous urban canopy schemes have recently been developed for mesoscale models in order to approximate the drag and turbulent production effects of a city on the air flow. However, little data exists by which to evaluate the efficacy of the schemes since "area-averaged"...
Briët, Olivier J T; Amerasinghe, Priyanie H; Vounatsou, Penelope
2013-01-01
With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions' impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during "consolidation" and "pre-elimination" phases. Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non-Gaussian, non-stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low.
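The kind of series these models target can be sketched with a negative-binomial observation-driven recursion in the GARMA spirit (coefficients and seasonality below are illustrative, not fitted to the Sri Lanka data). The overdispersion that breaks Gaussian approximations is easy to verify:

```python
import math
import random

rng = random.Random(7)

def neg_binomial(mean, k):
    """Negative-binomial draw via its Gamma-Poisson mixture
    (dispersion k, so var = mean + mean**2 / k)."""
    lam = rng.gammavariate(k, mean / k)
    # Poisson draw by inversion (adequate for the modest means used here).
    threshold, p, n = math.exp(-lam), 1.0, 0
    while True:
        p *= rng.random()
        if p < threshold:
            return n
        n += 1

# Observation-driven recursion: the conditional mean carries an annual
# season plus feedback from the previous observed count.
beta0, amp, phi, k = 1.0, 0.8, 0.4, 2.0
counts = [1]
for t in range(1, 600):
    season = amp * math.sin(2.0 * math.pi * t / 12.0)
    mu = math.exp(beta0 + season) * (1 + counts[-1]) ** phi
    counts.append(neg_binomial(mu, k))

mean_c = sum(counts) / len(counts)
var_c = sum((c - mean_c) ** 2 for c in counts) / len(counts)
```

The sample variance far exceeds the mean, which is exactly the regime where a Gaussian approximation misstates prediction intervals for low counts.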
About the coupling of turbulence closure models with averaged Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Vandromme, D.; Ha Minh, H.
1986-01-01
The MacCormack implicit predictor-corrector model (1981) for numerical solution of the coupled Navier-Stokes equations for turbulent flows is extended to nonconservative multiequation turbulence models, as well as the inclusion of second-order Reynolds stress turbulence closure. A scalar effective pressure turbulent contribution to the pressure field is defined to approximate the effects of the Reynolds stress in strongly sheared flows. The Jacobian matrices of the transport equations are diagonalized to reduce the required computer memory and run time. Techniques are defined for including turbulence in the diagonalization. Application of the method is demonstrated with solutions generated for transonic nozzle flow and for the interaction between a supersonic flat plate boundary layer and a 12 deg compression-expansion ramp.
Modeling the average shortest-path length in growth of word-adjacency networks
NASA Astrophysics Data System (ADS)
Kulig, Andrzej; Drożdż, Stanisław; Kwapień, Jarosław; Oświęcimka, Paweł
2015-03-01
We investigate properties of evolving linguistic networks defined by the word-adjacency relation. Such networks belong to the category of networks with accelerated growth, but their shortest-path length appears to reveal a network-size dependence of a different functional form than those known so far. We thus compare the networks created from literary texts with their artificial substitutes based on different variants of the Dorogovtsev-Mendes model and observe that none of them is able to properly simulate the novel asymptotics of the shortest-path length. Then, we identify the local chainlike linear growth induced by grammar and style as a missing element in this model and extend it by incorporating such effects. It is in this way that a satisfactory agreement with the empirical result is obtained.
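The quantities involved are easy to reproduce on toy texts. The stdlib BFS sketch below builds the word-adjacency graph and its average shortest-path length (it assumes a single connected component); note how consecutive distinct words produce exactly the chainlike substructure the abstract identifies:

```python
from collections import deque

def word_adjacency_graph(text):
    """Undirected graph linking words that occur next to each other."""
    words = text.lower().split()
    adj = {}
    for u, v in zip(words, words[1:]):
        if u != v:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    return adj

def average_shortest_path(adj):
    """Mean BFS distance over all ordered pairs (assumes one component)."""
    total = pairs = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

L_tri = average_shortest_path(word_adjacency_graph("a b c a"))    # triangle
L_chain = average_shortest_path(word_adjacency_graph("a b c d"))  # 4-word chain
```

A text that never reuses words yields a pure chain (average distance (n+1)/3 for n words), while word reuse adds shortcuts; real texts sit between these extremes.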
Tsyganenko, Nikolai A.; Johnson, Catherine L.; Philpott, Lydia C.; Anderson, Brian J.; Al Asad, Manar M.; Solomon, Sean C.; McNutt, Ralph L.
2015-01-01
Accurate knowledge of Mercury's magnetospheric magnetic field is required to understand the sources of the planet's internal field. We present the first model of Mercury's magnetospheric magnetic field confined within a magnetopause shape derived from Magnetometer observations by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging spacecraft. The field of internal origin is approximated by a dipole of magnitude 190 nT R_M^3, where R_M is Mercury's radius, offset northward by 479 km along the spin axis. External field sources include currents flowing on the magnetopause boundary and in the cross‐tail current sheet. The cross‐tail current is described by a disk‐shaped current near the planet and a sheet current at larger (≳5 R_M) antisunward distances. The tail currents are constrained by minimizing the root‐mean‐square (RMS) residual between the model and the magnetic field observed within the magnetosphere. The magnetopause current contributions are derived by shielding the field of each module external to the magnetopause by minimizing the RMS normal component of the magnetic field at the magnetopause. The new model yields improvements over the previously developed paraboloid model in regions that are close to the magnetopause and the nightside magnetic equatorial plane. Magnetic field residuals remain that are distributed systematically over large areas and vary monotonically with magnetic activity. Further advances in empirical descriptions of Mercury's magnetospheric external field will need to account for the dependence of the tail and magnetopause currents on magnetic activity and additional sources within the magnetosphere associated with Birkeland currents and plasma distributions near the dayside magnetopause. PMID:27656335
Korth, Haje; Tsyganenko, Nikolai A; Johnson, Catherine L; Philpott, Lydia C; Anderson, Brian J; Al Asad, Manar M; Solomon, Sean C; McNutt, Ralph L
2015-06-01
Accurate knowledge of Mercury's magnetospheric magnetic field is required to understand the sources of the planet's internal field. We present the first model of Mercury's magnetospheric magnetic field confined within a magnetopause shape derived from Magnetometer observations by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging spacecraft. The field of internal origin is approximated by a dipole of magnitude 190 nT R_M^3, where R_M is Mercury's radius, offset northward by 479 km along the spin axis. External field sources include currents flowing on the magnetopause boundary and in the cross-tail current sheet. The cross-tail current is described by a disk-shaped current near the planet and a sheet current at larger (≳5 R_M) antisunward distances. The tail currents are constrained by minimizing the root-mean-square (RMS) residual between the model and the magnetic field observed within the magnetosphere. The magnetopause current contributions are derived by shielding the field of each module external to the magnetopause by minimizing the RMS normal component of the magnetic field at the magnetopause. The new model yields improvements over the previously developed paraboloid model in regions that are close to the magnetopause and the nightside magnetic equatorial plane. Magnetic field residuals remain that are distributed systematically over large areas and vary monotonically with magnetic activity. Further advances in empirical descriptions of Mercury's magnetospheric external field will need to account for the dependence of the tail and magnetopause currents on magnetic activity and additional sources within the magnetosphere associated with Birkeland currents and plasma distributions near the dayside magnetopause.
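The internal-field term alone is simple to evaluate. The sketch below computes the offset-dipole field magnitude using the quoted moment (190 nT R_M^3) and northward offset (479 km ≈ 0.196 R_M), neglecting all external current systems; the dipole orientation sign does not affect the magnitudes:

```python
import numpy as np

M = 190.0             # dipole moment magnitude, nT * R_M^3
z0 = 479.0 / 2440.0   # northward offset in Mercury radii (R_M = 2440 km)

def dipole_field(pos):
    """Field (nT) at pos (in R_M) of a z-aligned dipole offset north by z0;
    external (magnetopause and tail) sources are neglected."""
    r_vec = np.asarray(pos, dtype=float) - np.array([0.0, 0.0, z0])
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    m = np.array([0.0, 0.0, M])
    return (3.0 * rhat * (m @ rhat) - m) / r**3

b_north = np.linalg.norm(dipole_field([0.0, 0.0, 1.0]))    # north pole, surface
b_south = np.linalg.norm(dipole_field([0.0, 0.0, -1.0]))   # south pole
b_equator = np.linalg.norm(dipole_field([1.0, 0.0, 0.0]))  # equator
```

The northward offset makes the surface field at the north pole several times stronger than at the south pole, a well-known asymmetry of Mercury's field that any external-field model must be built on top of.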
Cernicchiaro, N; Renter, D G; Xiang, S; White, B J; Bello, N M
2013-06-01
Variability in ADG of feedlot cattle can affect profits, thus making overall returns more unstable. Hence, knowledge of the factors that contribute to heterogeneity of variances in animal performance can help feedlot managers evaluate risks and minimize profit volatility when making managerial and economic decisions in commercial feedlots. The objectives of the present study were to evaluate heteroskedasticity, defined as heterogeneity of variances, in ADG of cohorts of commercial feedlot cattle, and to identify cattle demographic factors at feedlot arrival as potential sources of variance heterogeneity, accounting for cohort- and feedlot-level information in the data structure. An operational dataset compiled from 24,050 cohorts from 25 U. S. commercial feedlots in 2005 and 2006 was used for this study. Inference was based on a hierarchical Bayesian model implemented with Markov chain Monte Carlo, whereby cohorts were modeled at the residual level and feedlot-year clusters were modeled as random effects. Forward model selection based on deviance information criteria was used to screen potentially important explanatory variables for heteroskedasticity at cohort- and feedlot-year levels. The Bayesian modeling framework was preferred as it naturally accommodates the inherently hierarchical structure of feedlot data whereby cohorts are nested within feedlot-year clusters. Evidence for heterogeneity of variance components of ADG was substantial and primarily concentrated at the cohort level. Feedlot-year specific effects were, by far, the greatest contributors to ADG heteroskedasticity among cohorts, with an estimated ∼12-fold change in dispersion between most and least extreme feedlot-year clusters. In addition, identifiable demographic factors associated with greater heterogeneity of cohort-level variance included smaller cohort sizes, fewer days on feed, and greater arrival BW, as well as feedlot arrival during summer months. These results support that
NASA Astrophysics Data System (ADS)
Hu, Shujuan; Cheng, Jianbo; Xu, Ming; Chou, Jifan
2018-04-01
The three-pattern decomposition of global atmospheric circulation (TPDGAC) partitions three-dimensional (3D) atmospheric circulation into horizontal, meridional and zonal components to study the 3D structures of global atmospheric circulation. This paper incorporates the three-pattern decomposition model (TPDM) into primitive equations of atmospheric dynamics and establishes a new set of dynamical equations of the horizontal, meridional and zonal circulations in which the operator properties are studied and energy conservation laws are preserved, as in the primitive equations. The physical significance of the newly established equations is demonstrated. Our findings reveal that the new equations are essentially the 3D vorticity equations of atmosphere and that the time evolution rules of the horizontal, meridional and zonal circulations can be described from the perspective of 3D vorticity evolution. The new set of dynamical equations includes decomposed expressions that can be used to explore the source terms of large-scale atmospheric circulation variations. A simplified model is presented to demonstrate the potential applications of the new equations for studying the dynamics of the Rossby, Hadley and Walker circulations. The model shows that the horizontal air temperature anomaly gradient (ATAG) induces changes in meridional and zonal circulations and promotes the baroclinic evolution of the horizontal circulation. The simplified model also indicates that the absolute vorticity of the horizontal circulation is not conserved, and its changes can be described by changes in the vertical vorticities of the meridional and zonal circulations. Moreover, the thermodynamic equation shows that the induced meridional and zonal circulations and advection transport by the horizontal circulation in turn cause a redistribution of the air temperature. The simplified model reveals the fundamental rules between the evolution of the air temperature and the horizontal, meridional
NASA Technical Reports Server (NTRS)
Kuhn, Gary D.
1988-01-01
Turbulent flows subjected to various kinds of unsteady disturbances were simulated using a large-eddy-simulation computer code for flow in a channel. The disturbances were: a normal velocity expressed as a traveling wave on one wall of the channel; staggered blowing and suction distributions on the opposite walls of the channel; and oscillations of the mean flow through the channel. The wall boundary conditions were designed to simulate the effects of wakes of a stator stage passing through a rotor channel in a turbine. The oscillating flow simulated the effects of a pressure pulse moving over the rotor blade boundary layer. The objective of the simulations was to provide a better understanding of the effects of time-dependent disturbances on the turbulence of a boundary layer and of the underlying physical phenomena regarding the basic interaction between the turbulence and external disturbances of the type found in turbomachinery. Results showed that turbulence is sensitive to certain ranges of frequencies of disturbances. However, no direct connection was found between the frequency of imposed disturbances and the characteristic burst frequency of turbulence. New insight into the nature of turbulence at high frequencies was found. Viscous phenomena near solid walls were found to be the dominant influence for high-frequency perturbations. At high frequencies, the turbulence was found to be undisturbed, remaining the same as for the steady mean flow. A transition range exists between the high-frequency range and the low, or quasi-steady, range in which the turbulence is not predictable by either quasi-steady models or the steady flow model. The limiting lowest frequency for use of the steady flow turbulence model is that for which the viscous Stokes layer based on the blade passing frequency is thicker than the laminar sublayer.
Influence of Boussinesq coefficient on depth-averaged modelling of rapid flows
NASA Astrophysics Data System (ADS)
Yang, Fan; Liang, Dongfang; Xiao, Yang
2018-04-01
The traditional Alternating Direction Implicit (ADI) scheme has been proven to be incapable of modelling trans-critical flows. Its inherent lack of shock-capturing capability often results in spurious oscillations and computational instabilities. However, the ADI scheme is still widely adopted in flood modelling software, and various special treatments have been designed to stabilise the computation. Modification of the Boussinesq coefficient to adjust the amount of fluid inertia is a numerical treatment that allows the ADI scheme to be applicable to rapid flows. This study comprehensively examines the impact of this numerical treatment over a range of flow conditions. A shock-capturing TVD-MacCormack model is used to provide reference results. For unsteady flows over a frictionless bed, such as idealised dam-break floods, the results suggest that an increase in the value of the Boussinesq coefficient reduces the amplitude of the spurious oscillations. The opposite is observed for steady rapid flows over a frictional bed. Finally, a two-dimensional urban flooding phenomenon is presented, involving unsteady flow over a frictional bed. The results show that increasing the value of the Boussinesq coefficient can significantly reduce the numerical oscillations and reduce the predicted area of inundation. In order to stabilise the ADI computations, the Boussinesq coefficient could be judiciously raised or lowered depending on whether the rapid flow is steady or unsteady and whether the bed is frictional or frictionless. An increase in the Boussinesq coefficient generally leads to overprediction of the propagating speed of the flood wave over a frictionless bed, but the opposite is true when bed friction is significant.
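The Boussinesq coefficient discussed above is the momentum correction coefficient of depth-averaged modelling; in standard notation (depth h, depth-averaged velocity U, local velocity u), it is commonly defined as:

```latex
\beta \;=\; \frac{1}{h\,U^{2}}\int_{0}^{h} u^{2}\,\mathrm{d}z,
\qquad
U \;=\; \frac{1}{h}\int_{0}^{h} u\,\mathrm{d}z .
```

Thus β = 1 for a uniform velocity profile and β > 1 for any sheared profile; adjusting β changes the effective inertia in the depth-averaged momentum equations, which is the numerical treatment examined in the study.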
Following a trend with an exponential moving average: Analytical results for a Gaussian model
NASA Astrophysics Data System (ADS)
Grebenkov, Denis S.; Serror, Jeremy
2014-01-01
We investigate how price variations of a stock are transformed into profits and losses (P&Ls) of a trend following strategy. In the frame of a Gaussian model, we derive the probability distribution of P&Ls and analyze its moments (mean, variance, skewness and kurtosis) and asymptotic behavior (quantiles). We show that the asymmetry of the distribution (with often small losses and less frequent but significant profits) is characteristic of trend following strategies and only weakly dependent on the peculiarities of price variations. At short times, trend following strategies admit larger losses than one may anticipate from standard Gaussian estimates, while smaller losses are ensured at longer times. Simple explicit formulas characterizing the distribution of P&Ls illustrate the basic mechanisms of momentum trading, while general matrix representations can be applied to arbitrary Gaussian models. We also compute explicitly the annualized risk-adjusted P&L and strategy turnover to account for transaction costs. We deduce the optimal trend following timescale and its dependence on both the autocorrelation level and transaction costs. Theoretical results are illustrated on the Dow Jones index.
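A minimal sketch of the strategy class analyzed here: the position is an exponential moving average (EMA) of past returns, and each period's P&L increment is the position times the next return. The function names and parameter values are illustrative assumptions, not the authors' calibration:

```python
import random

def ema_update(ema, x, alpha):
    """One step of the exponential moving average with smoothing factor alpha."""
    return (1.0 - alpha) * ema + alpha * x

def trend_following_pnl(returns, alpha):
    """P&L series of a strategy whose position is the EMA of past returns.

    The position is taken before each return is observed, so the first
    P&L increment is zero (the EMA starts at zero)."""
    ema, pnl = 0.0, []
    for r in returns:
        pnl.append(ema * r)          # position * realized return
        ema = ema_update(ema, r, alpha)
    return pnl

# Synthetic Gaussian returns, for illustration only.
random.seed(0)
rets = [random.gauss(0.0, 0.01) for _ in range(1000)]
pnl = trend_following_pnl(rets, alpha=0.1)
total = sum(pnl)
```

Sampling the empirical skewness of `pnl` over many such paths would reproduce, qualitatively, the asymmetry (frequent small losses, rarer large gains) discussed in the abstract.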
Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon
2015-01-01
The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, the support reactions, deflections at the supports, and magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating the maximum stress, deflections at supports, support reactions, and magnitudes of distributed loads. PMID:25831087
NASA Astrophysics Data System (ADS)
Smolenskaya, N. M.; Smolenskii, V. V.
2018-01-01
The paper presents models for calculating the average propagation velocity of the flame front, obtained from the results of experimental studies. The experiments were carried out on a single-cylinder UIT-85 gasoline engine with hydrogen additions of up to 6% of the fuel mass. The article shows the influence of the hydrogen addition on the average propagation velocity of the flame front in the main combustion phase. Dependences of the turbulent propagation velocity of the flame front in the second combustion phase on the mixture composition and operating modes are obtained. The article also shows the influence of the normal combustion rate on the average flame propagation velocity in the third combustion phase.
NASA Astrophysics Data System (ADS)
Li, Zhiyong; Hoagg, Jesse B.; Martin, Alexandre; Bailey, Sean C. C.
2018-03-01
This paper presents a data-driven computational model for simulating unsteady turbulent flows where sparse measurement data are available. The model uses the retrospective cost adaptation (RCA) algorithm to automatically adjust the closure coefficients of the Reynolds-averaged Navier-Stokes (RANS) k-ω turbulence equations to improve agreement between the simulated flow and the measurements. The RCA-RANS k-ω model is verified for steady flow using a pipe-flow test case and for unsteady flow using a surface-mounted-cube test case. Measurements used for adaptation of the verification cases are obtained from baseline simulations with known closure coefficients. These verification test cases demonstrate that the RCA-RANS k-ω model can successfully adapt the closure coefficients to improve agreement between the simulated flow field and a set of sparse flow-field measurements. Furthermore, the RCA-RANS k-ω model improves agreement between the simulated flow and the baseline flow at locations where measurements do not exist. The RCA-RANS k-ω model is also validated with experimental data from two test cases: steady pipe flow and unsteady flow past a square cylinder. In both test cases, the adaptation improves agreement with the experimental data in comparison to the results from a non-adaptive RANS k-ω model that uses the standard values of the k-ω closure coefficients. For the steady pipe flow, adaptation is driven by mean streamwise velocity measurements at 24 locations along the pipe radius. The RCA-RANS k-ω model reduces the average velocity error at these locations by over 35%. For the unsteady flow over a square cylinder, adaptation is driven by time-varying surface pressure measurements at two locations on the square cylinder. The RCA-RANS k-ω model reduces the average surface-pressure error at these locations by 88.8%.
NASA Astrophysics Data System (ADS)
Han, Fengshan; Wu, Xinli; Li, Xia; Zhu, Dekang
2018-02-01
Zonal disintegration has been observed in the rock surrounding deep mining roadways. It seriously affects the safety of mining and underground engineering and may lead to the occurrence of natural disasters. In deep roadway rock masses, the tectonic horizontal stress is much greater than the vertical stress; when the direction of the maximum principal stress is parallel to the roadway axis, this is the main cause of the zonal disintegration phenomenon. Using ABAQUS software, the three-dimensional rupture formation process of the roadway was numerically simulated in a systematic way. The study shows that when the maximum principal stress in deep underground mining is directed along the roadway axis, the zonal disintegration phenomenon is successfully reproduced by the numerical simulation. The simulations also reproduce the formation process of damage in the surrounding rock, which has important practical engineering significance.
Graham, Jonathan Pietarila; Mininni, Pablo D; Pouquet, Annick
2005-10-01
We present direct numerical simulations and Lagrangian averaged (also known as alpha model) simulations of forced and freely decaying magnetohydrodynamic turbulence in two dimensions. The statistics of sign cancellations of the current at small scales are studied using both the cancellation exponent and the fractal dimension of the structures. The alpha model is found to have the same scaling behavior between positive and negative contributions as the direct numerical simulations. The alpha model is also able to reproduce the time evolution of these quantities in freely decaying turbulence. At large Reynolds numbers, the cancellation exponent is observed to be independent of the Reynolds number.
Zonal wavefront reconstruction in quadrilateral geometry for phase measuring deflectometry
Huang, Lei; Xue, Junpeng; Gao, Bo
2017-06-14
There are wide applications for zonal reconstruction methods in slope-based metrology due to their good capability of reconstructing local details of a surface profile. It has been noticed in the literature that large reconstruction errors occur when zonal reconstruction methods designed for rectangular geometry are used to process slopes in a quadrilateral geometry, which is the more general geometry encountered in phase measuring deflectometry. In this paper, we present a new idea for zonal methods in quadrilateral geometry. Instead of employing the intermediate slopes to set up height-slope equations, we consider the height increment as a more general connector to establish the height-slope relations for least-squares regression. The classical zonal methods and interpolation-assisted zonal methods are compared with our proposal. Results of both simulation and experiment demonstrate the effectiveness of the proposed idea. In implementation, the modification of the classical zonal methods is addressed. The new methods preserve many good aspects of the classical ones, such as the ability to handle a large incomplete slope dataset over an arbitrary aperture, and a computational complexity comparable to that of the classical zonal method. Moreover, the accuracy of the new methods is much higher when integrating slopes in quadrilateral geometry.
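The height-increment idea can be illustrated in one dimension, where the trapezoidal increment h[i+1] − h[i] ≈ Δx·(s[i] + s[i+1])/2 connects neighbouring heights to measured slopes; the 2-D quadrilateral case assembles many such relations into a least-squares system. A 1-D sketch of the increment relation (not the paper's algorithm):

```python
def integrate_slopes(slopes, dx, h0=0.0):
    """Reconstruct heights from slopes using trapezoidal height increments:
    h[i+1] - h[i] = dx * (s[i] + s[i+1]) / 2 (1-D analogue of zonal reconstruction)."""
    h = [h0]
    for i in range(len(slopes) - 1):
        h.append(h[-1] + dx * 0.5 * (slopes[i] + slopes[i + 1]))
    return h

# Check on h(x) = x^2, slope s(x) = 2x: the trapezoid rule is exact for a
# linear slope, so the reconstruction telescopes to the exact heights.
dx = 0.1
xs = [i * dx for i in range(11)]
heights = integrate_slopes([2.0 * x for x in xs], dx)
```

In 2-D the same increments are written along both grid directions and solved jointly in least squares, which is what allows an arbitrary, incomplete aperture.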
Wang, Kewei; Song, Wentao; Li, Jinping; Lu, Wu; Yu, Jiangang; Han, Xiaofeng
2016-05-01
The aim of this study was to forecast the incidence of bacillary dysentery with a prediction model. We collected annual and monthly laboratory data of confirmed cases from January 2004 to December 2014. In this study, we applied an autoregressive integrated moving average (ARIMA) model to forecast bacillary dysentery incidence in Jiangsu, China. The ARIMA (1, 1, 1) × (1, 1, 2)_12 model (seasonal period of 12 months) fitted the number of cases during January 2004 to December 2014 well. The fitted model was then used to predict bacillary dysentery incidence during the period January to August 2015, and the observed number of cases fell within the model's confidence interval for the predicted number of cases during January-August 2015. This study shows that the ARIMA model fits the fluctuations in bacillary dysentery frequency, and it can be used for future forecasting when applied to bacillary dysentery prevention and control. © 2016 APJPH.
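The seasonal notation ARIMA(1, 1, 1) × (1, 1, 2)_12 implies one ordinary difference and one seasonal difference at lag 12 before the ARMA terms are fitted. A sketch of just that differencing step on a synthetic monthly series (fitting the ARMA part would require a statistics package such as statsmodels):

```python
def difference(series, lag=1):
    """Difference a series at the given lag: y[t] - y[t-lag]."""
    return [series[i] - series[i - lag] for i in range(lag, len(series))]

# Synthetic monthly counts with a linear trend plus a period-12 spike,
# for illustration only (not the Jiangsu data).
monthly = [100 + 2 * t + (10 if t % 12 == 0 else 0) for t in range(48)]

# One ordinary difference removes the trend; one seasonal (lag-12)
# difference removes the periodic pattern.
stationary = difference(difference(monthly, lag=1), lag=12)
```

For this synthetic series the combined differencing removes both the trend and the seasonality entirely, leaving a constant-zero series, which is the stationarity the "I" terms of the model are designed to achieve.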
NASA Technical Reports Server (NTRS)
Bui, Trong T.
1993-01-01
New turbulence modeling options recently implemented for the 3-D version of Proteus, a Reynolds-averaged compressible Navier-Stokes code, are described. The implemented turbulence models include: the Baldwin-Lomax algebraic model, the Baldwin-Barth one-equation model, the Chien k-epsilon model, and the Launder-Sharma k-epsilon model. Features of this turbulence modeling package include: well-documented and easy-to-use turbulence modeling options, uniform integration of turbulence models from different classes, automatic initialization of turbulence variables for calculations using one- or two-equation turbulence models, treatment of multiple solid boundaries, and a fully vectorized L-U solver for one- and two-equation models. Validation test cases include the incompressible and compressible flat plate turbulent boundary layers, turbulent developing S-duct flow, and glancing shock wave/turbulent boundary layer interaction. Good agreement is obtained between the computational results and experimental data. The sensitivity of the compressible turbulent solutions to the method of y+ computation, the turbulent length scale correction, and some compressibility corrections is examined in detail. The test cases show that the highly optimized one- and two-equation turbulence models can be used in routine 3-D Navier-Stokes computations with no significant increase in CPU time as compared with the Baldwin-Lomax algebraic model.
NASA Astrophysics Data System (ADS)
Dimbylow, Peter
2005-09-01
Finite-difference time-domain (FDTD) calculations have been performed of the whole-body averaged specific energy absorption rate (SAR) in a female voxel model, NAOMI, under isolated and grounded conditions from 10 MHz to 3 GHz. The 2 mm resolution voxel model, NAOMI, was scaled to a height of 1.63 m and a mass of 60 kg, the dimensions of the ICRP reference adult female. Comparison was made with SAR values from a reference male voxel model, NORMAN. A broad SAR resonance in the NAOMI values was found around 900 MHz and a resulting enhancement, up to 25%, over the values for the male voxel model, NORMAN. This latter result confirmed previously reported higher values in a female model. The effect of differences in anatomy was investigated by comparing values for 10-, 5- and 1-year-old phantoms rescaled to the ICRP reference values of height and mass which are the same for both sexes. The broad resonance in the NAOMI child values around 1 GHz is still a strong feature. A comparison has been made with ICNIRP guidelines. The ICNIRP occupational reference level provides a conservative estimate of the whole-body averaged SAR restriction. The linear scaling of the adult phantom using different factors in longitudinal and transverse directions, in order to match the ICRP stature and weight, does not exactly reproduce the anatomy of children. However, for public exposure the calculations with scaled child models indicate that the ICNIRP reference level may not provide a conservative estimate of the whole-body averaged SAR restriction, above 1.2 GHz for scaled 5- and 1-year-old female models, although any underestimate is by less than 20%.
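The anisotropic scaling described above can be made concrete: assuming the phantom's density is unchanged, mass scales as the longitudinal factor times the square of the transverse factor, which fixes both factors from the target height and mass. A sketch with illustrative input values (not NAOMI's original dimensions):

```python
import math

def scaling_factors(h_model, m_model, h_ref, m_ref):
    """Longitudinal and transverse scale factors that map a voxel phantom of
    height h_model and mass m_model onto a reference height and mass,
    assuming uniform density so that mass scales as s_long * s_trans**2."""
    s_long = h_ref / h_model
    s_trans = math.sqrt((m_ref / m_model) / s_long)
    return s_long, s_trans

# Illustrative: scale a 1.60 m, 58 kg phantom to the ICRP reference
# adult female dimensions quoted in the abstract (1.63 m, 60 kg).
s_long, s_trans = scaling_factors(1.60, 58.0, 1.63, 60.0)
```

As the abstract notes, such linear rescaling matches stature and mass but does not exactly reproduce a child's anatomy, since real growth is not a uniform affine stretch.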
Briët, Olivier J. T.; Amerasinghe, Priyanie H.; Vounatsou, Penelope
2013-01-01
Introduction With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring the impact of interventions, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during "consolidation" and "pre-elimination" phases. Methods Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non-Gaussian, non-stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. Results The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected, based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. Conclusions G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low. PMID:23785448
2013-05-22
This plot shows the concentration of carbon dioxide in Earth's mid-troposphere at various latitudes, as measured by NASA's Aqua satellite. The colored lines represent different latitude bands that circle Earth, called zones.
The Mass and Angular Momentum Balance of the Zonally-Averaged Global Circulation.
1981-01-01
… eddies, and transient circulations, respectively. Figures 12 and 13 display the vertical and meridional distributions of relative angular momentum transport … the transient component is the dominant mode of angular momentum transport in January. It is poleward at virtually all latitudes in each hemisphere.
Global atmospheric circulation statistics: Four year averages
NASA Technical Reports Server (NTRS)
Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.
1987-01-01
Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982, supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using the hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using the thermodynamic and continuity equations. Fields presented in this report are the zonally averaged temperature; the zonal, meridional, and vertical winds; and the amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies, along with their wavenumber decomposition, are also given, as are the Eliassen-Palm flux propagation vectors and divergences for the standing and transient eddies, with their wavenumber decomposition. Large interhemispheric differences and year-to-year variations are found to originate in changes in the planetary wave activity.
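The geostrophic construction mentioned above is the standard one: with geopotential height Z, Coriolis parameter f, and gravitational acceleration g, the geostrophic wind components on a pressure surface are:

```latex
u_{g} \;=\; -\frac{g}{f}\,\frac{\partial Z}{\partial y},
\qquad
v_{g} \;=\; \frac{g}{f}\,\frac{\partial Z}{\partial x}.
```

Evaluating these derivatives on the gridded daily height maps yields the wind fields from which the zonal means and eddy fluxes in the report are built.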
NASA Astrophysics Data System (ADS)
Santos, Ângela M.; Abdu, Mangalathayil A.; Souza, Jonas R.; Batista, Inez S.; Sobral, José H. A.
2017-11-01
The influence of the recent deep and prolonged solar minimum on the quiet-time daytime zonal and vertical plasma drift velocities is investigated in this work. Analyzing data obtained from the incoherent scatter radar at Jicamarca (11.95° S, 76.87° W), we observe an anomalous behavior of the zonal plasma drift during June 2008, characterized by a weaker than usual daytime westward drift and its early afternoon reversal to eastward. As a case study, the zonal drift observed on 24 June 2008 is modeled using a realistic low-latitude ionosphere simulated by the Sheffield University Plasmasphere-Ionosphere Model-INPE (SUPIM-INPE). The results show that an anomalously weak zonal wind was mainly responsible for the observed anomalous behavior of the zonal drift. A comparative study of the vertical plasma drifts obtained from magnetometer data for periods of the solar maximum (2000-2002) and minimum (1998, 2008, 2010) phases reveals a considerable decrease in the E-region conductivity and the dynamo electric field during 2008. However, we believe that the contribution of these characteristics to the unusual behavior of the zonal plasma drift is significantly smaller than that arising from the anomalously weak zonal wind. The SUPIM-INPE result for the critical frequency of the F layer (foF2) over Jicamarca suggested a lower radiation flux than that predicted by the solar irradiance model (SOLAR2000) for June 2008.
Zhang, Peng; Parenteau, Chantal; Wang, Lu; Holcombe, Sven; Kohoyda-Inglis, Carla; Sullivan, June; Wang, Stewart
2013-11-01
This study resulted in a model-averaging methodology that predicts crash injury risk using vehicle, demographic, and morphomic variables and assesses the importance of individual predictors. The effectiveness of this methodology was illustrated through an analysis of occupant chest injuries in frontal vehicle crashes. The crash data were obtained from the International Center for Automotive Medicine (ICAM) database for calendar years 1996 to 2012. The morphomic data are quantitative measurements of variations in human body 3-dimensional anatomy. Morphomics are obtained from imaging records. In this study, morphomics were obtained from chest, abdomen, and spine CT using novel patented algorithms. A NASS-trained crash investigator with over thirty years of experience collected the in-depth crash data. There were 226 cases available with occupants involved in frontal crashes and morphomic measurements. Only cases with complete recorded data were retained for statistical analysis. Logistic regression models were fitted using all possible configurations of vehicle, demographic, and morphomic variables. Different models were ranked by the Akaike Information Criterion (AIC). An averaged logistic regression model approach was used due to the limited sample size relative to the number of variables. This approach is helpful when addressing variable selection, building prediction models, and assessing the importance of individual variables. The final predictive results were developed using this approach, based on the top 100 models in the AIC ranking. Model averaging minimized model uncertainty, decreased the overall prediction variance, and provided an approach to evaluating the importance of individual variables. There were 17 variables investigated: four vehicle, four demographic, and nine morphomic. More than 130,000 logistic models were investigated in total. The models were characterized into four scenarios to assess the contribution of individual variables to injury risk.
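A common way to average models ranked by AIC is to weight each model by its Akaike weight. This is a generic sketch of that technique, not the study's exact procedure, and the AIC values and per-model risk predictions below are made up for illustration:

```python
import math

def akaike_weights(aic_values):
    """Akaike weights: relative likelihood exp(-delta_AIC/2) of each model,
    normalised so the weights sum to 1."""
    aic_min = min(aic_values)
    rel = [math.exp(-0.5 * (a - aic_min)) for a in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

def averaged_prediction(predictions, aic_values):
    """Model-averaged prediction: AIC-weighted mean of per-model predictions."""
    w = akaike_weights(aic_values)
    return sum(wi * p for wi, p in zip(w, predictions))

# Illustrative: three candidate injury-risk models (values assumed).
risk = averaged_prediction([0.20, 0.25, 0.40], [100.0, 101.0, 110.0])
```

Because the weights favour low-AIC models while still spreading belief across near-ties, averaging over the top-ranked models (the top 100 here) reduces the variance that would come from committing to a single "best" model.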
Generation of zonal flows through symmetry breaking of statistical homogeneity
NASA Astrophysics Data System (ADS)
Parker, Jeffrey B.; Krommes, John A.
2014-03-01
In geophysical and plasma contexts, zonal flows (ZFs) are well known to arise out of turbulence. We elucidate the transition from homogeneous turbulence without ZFs to inhomogeneous turbulence with steady ZFs. Starting from the equation for barotropic flow on a β plane, we employ both the quasilinear approximation and a statistical average, which retains a great deal of the qualitative behavior of the full system. Within the resulting framework known as CE2, we extend recent understanding of the symmetry-breaking zonostrophic instability and show that it is an example of a Type I_s instability within the pattern formation literature. The broken symmetry is statistical homogeneity. Near the bifurcation point, the slow dynamics of CE2 are governed by a well-known amplitude equation. The important features of this amplitude equation, and therefore of the CE2 system, are multiple. First, the ZF wavelength is not unique. In an idealized, infinite system, there is a continuous band of ZF wavelengths that allow a nonlinear equilibrium. Second, of these wavelengths, only those within a smaller subband are stable. Unstable wavelengths must evolve to reach a stable wavelength; this process manifests as merging jets. These behaviors are shown numerically to hold in the CE2 system. We also conclude that the stability of the equilibria near the bifurcation point, which is governed by the Eckhaus instability, is independent of the Rayleigh-Kuo criterion.
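The amplitude equation alluded to is, in its standard rescaled form, the real Ginzburg-Landau equation; this normalized textbook form (not the paper's specific coefficients) already exhibits the band of equilibrium wavenumbers and the narrower Eckhaus-stable subband:

```latex
\partial_{T}A \;=\; \mu A + \partial_{X}^{2}A - |A|^{2}A,
\qquad
A_{q} = \sqrt{\mu - q^{2}}\,e^{iqX} \ \ (\text{existence: } q^{2} < \mu),
\qquad
\text{Eckhaus stability: } q^{2} < \mu/3 .
```

Equilibria with q² between μ/3 and μ exist but are Eckhaus-unstable, which is the mechanism behind the jet merging described above.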
NASA Astrophysics Data System (ADS)
Hartland, Tucker; Schilling, Oleg
2017-11-01
Analytical self-similar solutions to several families of single- and two-scale, eddy viscosity and Reynolds stress turbulence models are presented for Rayleigh-Taylor, Richtmyer-Meshkov, and Kelvin-Helmholtz instability-induced turbulent mixing. The use of algebraic relationships between model coefficients and physical observables (e.g., experimental growth rates) following from the self-similar solutions to calibrate a member of a given family of turbulence models is shown. It is demonstrated numerically that the algebraic relations accurately predict the value and variation of physical outputs of a Reynolds-averaged simulation in flow regimes that are consistent with the simplifying assumptions used to derive the solutions. The use of experimental and numerical simulation data on Reynolds stress anisotropy ratios to calibrate a Reynolds stress model is briefly illustrated. The implications of the analytical solutions for future Reynolds-averaged modeling of hydrodynamic instability-induced mixing are briefly discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Fang, Xin; Li, Runkui; Kan, Haidong; Bottai, Matteo; Fang, Fang; Cao, Yang
2016-08-16
To demonstrate an application of Bayesian model averaging (BMA) with generalised additive mixed models (GAMM) and provide a novel modelling technique to assess the association between inhalable coarse particles (PM10) and respiratory mortality in time-series studies. A time-series study using a regional death registry between 2009 and 2010. 8 districts in a large metropolitan area in Northern China. 9559 permanent residents of the 8 districts who died of respiratory diseases between 2009 and 2010. Per cent increase in daily respiratory mortality rate (MR) per interquartile range (IQR) increase of PM10 concentration and corresponding 95% confidence interval (CI) in single-pollutant and multipollutant (including NOx, CO) models. The Bayesian model averaged GAMM (GAMM+BMA) and the optimal GAMM of PM10, multipollutants and principal components (PCs) of multipollutants showed comparable results for the effect of PM10 on daily respiratory MR, that is, one IQR increase in PM10 concentration corresponded to 1.38% vs 1.39%, 1.81% vs 1.83% and 0.87% vs 0.88% increases, respectively, in daily respiratory MR. However, GAMM+BMA gave slightly but noticeably wider CIs for the single-pollutant model (-1.09 to 4.28 vs -1.08 to 3.93) and the PCs-based model (-2.23 to 4.07 vs -2.03 to 3.88). The CIs of the multiple-pollutant model from the two methods are similar, that is, -1.12 to 4.85 versus -1.11 to 4.83. The BMA method may represent a useful tool for modelling uncertainty in time-series studies when evaluating the effect of air pollution on fatal health outcomes. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Characteristics and Mechanisms of Zonal Oscillation of Western Pacific Subtropical High in Summer
NASA Astrophysics Data System (ADS)
Guan, W.; Ren, X.; Hu, H.
2017-12-01
The zonal oscillation of the western Pacific subtropical high (WPSH) significantly influences the weather and climate over East Asia. This study investigates the features and mechanisms of the zonal oscillation of the WPSH during summer on subseasonal time scales. The zonal oscillation index of the WPSH is defined by the normalized subseasonal geopotential height anomaly at 500 hPa averaged over the WPSH's western edge (110°-140°E, 10°-30°N). The index shows a predominant oscillation with a period of 10-40 days. A large positive index indicates a strong anticyclonic anomaly over East Asia and its coastal region south of 30°N at both 850 hPa and 500 hPa. The WPSH stretches more westward, accompanied by warmer SST anomalies beneath its western edge. Meanwhile, above-normal precipitation is seen over the Yangtze-Huaihe river basin and below-normal precipitation south of the Yangtze River. A negative index suggests a more eastward position of the WPSH. The anomalies in circulation and SST for the negative index are almost the mirror image of those for the positive index. In early summer, the zonal shift of the WPSH is affected by both the East Asia/Pacific (EAP) teleconnection pattern and the Silk Road pattern (SRP). The positive (negative) phase of the EAP pattern is characterized by a low-level anticyclonic (cyclonic) anomaly over the subtropical western Pacific, indicating the western extension (eastward retreat) of the WPSH. Compared with the EAP pattern, the SRP forms an upper-level anticyclonic (cyclonic) anomaly in the mid-latitudes of East Asia, and then leads to the westward (eastward) movement of the WPSH. In late summer, the zonal shift of the WPSH is mainly affected by the EAP pattern, because the EAP pattern in late summer is stronger than that in early summer. The zonal shift of the WPSH is also influenced by local subseasonal air-sea interaction. During the early stage of the WPSH's westward stretch, the local SST anomaly in late summer is
Li, Jian; Wu, Huan-Yu; Li, Yan-Ting; Jin, Hui-Ming; Gu, Bao-Ke; Yuan, Zheng-An
2010-01-01
To explore the feasibility of establishing and applying an autoregressive integrated moving average (ARIMA) model to predict the incidence rate of dysentery in Shanghai, so as to provide a theoretical basis for the prevention and control of dysentery. An ARIMA model was established based on the monthly incidence rate of dysentery in Shanghai from 1990 to 2007. The parameters of the model were estimated through the unconditional least squares method, the structure was determined according to the criterion of residual uncorrelation, and the goodness-of-fit was assessed through the Akaike information criterion (AIC) and Schwarz Bayesian criterion (SBC). The constructed optimal model was applied to predict the incidence rate of dysentery in Shanghai in 2008, and the validity of the model was evaluated by comparing the predicted incidence rate with the actual one. The incidence rate in 2010 was then predicted by the ARIMA model based on the incidence rate from January 1990 to June 2009. The model ARIMA(1,1,1)(0,1,2)12 fitted the incidence rate well, with the autoregressive coefficient (AR1 = 0.443), moving average coefficient (MA1 = 0.806) and seasonal moving average coefficients (SMA1 = 0.543, SMA2 = 0.321) all statistically significant (P < 0.01). AIC and SBC were 2.878 and 16.131, respectively, and the prediction error was white noise. The fitted model was (1 - 0.443B)(1 - B)(1 - B^12)Z(t) = (1 - 0.806B)(1 - 0.543B^12)(1 - 0.321B^(2x12))mu(t). The predicted incidence rate in 2008 was consistent with the actual one, with a relative error of 6.78%. The predicted incidence rate of dysentery in 2010, based on the incidence rate from January 1990 to June 2009, would be 9.390 per 100 thousand. The ARIMA model can be used to fit the changes in the incidence rate of dysentery and to forecast the future incidence rate in Shanghai. It is a high-precision model for short-term forecasting.
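The (1 - B)(1 - B^12) part of such a seasonal ARIMA structure is just regular plus seasonal differencing, which can be sketched directly; the toy series below is illustrative, not the Shanghai data.

```python
def difference(series, lag=1):
    """Apply the backshift operator (1 - B^lag): y_t - y_{t-lag}."""
    return [series[t] - series[t - lag] for t in range(lag, len(series))]

# (1 - B)(1 - B^12) z_t: first a regular difference, then a seasonal one,
# matching the d=1, D=1 (period 12) structure of ARIMA(1,1,1)(0,1,2)12.
monthly = [float(i % 12) + 0.1 * i for i in range(36)]  # toy seasonal + trend
stationary = difference(difference(monthly, lag=1), lag=12)
```

For this toy series, trend and seasonality are removed exactly, leaving a (numerically) zero residual; on real incidence data the residual is what the ARMA terms then model.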
Ding, Z; Wang, K; Li, J; Cong, X
2001-12-01
The oscillatory shear index (OSI) was developed based on the hypothesis that intimal hyperplasia was correlated with oscillatory shear stresses. However, the validity of the OSI was in question since the correlation between intimal thickness and the OSI at the side walls of the sinus in the Y-shaped model of the average human carotid bifurcation (Y-AHCB) was weak. The objectives of this paper are to examine whether the reason for the weak correlation lies in the deviation in geometry of Y-AHCB from real human carotid bifurcation, and whether this correlation is clearly improved in the tuning-fork-shaped model of the average human carotid bifurcation (TF-AHCB). The geometry of the TF-AHCB model was based on observation and statistical analysis of specimens from 74 cadavers. The flow fields in both models were studied and compared by using flow visualization methods under steady flow conditions and by using laser Doppler anemometer (LDA) under pulsatile flow conditions. The TF-shaped geometry leads to a more complex flow field than the Y-shaped geometry. This added complexity includes strengthened helical movements in the sinus, new flow separation zone, and directional changes in the secondary flow patterns. The results show that the OSI-values at the side walls of the sinus in the TF-shaped model were more than two times as large as those in the Y-shaped model. This study confirmed the stronger correlation between the OSI and intimal thickness in the tuning-fork geometry of human carotid bifurcation, and the TF-AHCB model is a significant improvement over the traditional Y-shaped model.
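The OSI discussed above has a widely used definition based on time-averaged wall shear stress (WSS) over the cardiac cycle; a minimal sketch of that common formulation, assuming a single shear component sampled uniformly in time:

```python
def oscillatory_shear_index(tau):
    """OSI = 0.5 * (1 - |time-avg WSS| / time-avg |WSS|).

    tau: samples of one wall shear stress component over a cardiac cycle
    (assumed not identically zero). OSI = 0 for purely unidirectional
    shear and 0.5 for fully oscillatory (zero-mean) shear.
    """
    mean_signed = sum(tau) / len(tau)
    mean_abs = sum(abs(t) for t in tau) / len(tau)
    return 0.5 * (1.0 - abs(mean_signed) / mean_abs)
```

High OSI marks sites such as the sinus side walls where the shear direction reverses during the cycle, which is why the index is compared against intimal thickness.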
Zonally Asymmetric Ozone and the Morphology of the Planetary Waveguide
2011-07-15
Geophysical Research Letters, July 15, 2011. ... that zonally asymmetric ozone (ZAO) profoundly changes the morphology of the Northern Hemisphere planetary waveguide (PWG). ZAO causes the PWG to
Iverson, Richard M.; George, David L.
2014-01-01
To simulate debris-flow behaviour from initiation to deposition, we derive a depth-averaged, two-phase model that combines concepts of critical-state soil mechanics, grain-flow mechanics and fluid mechanics. The model's balance equations describe coupled evolution of the solid volume fraction, m, basal pore-fluid pressure, flow thickness and two components of flow velocity. Basal friction is evaluated using a generalized Coulomb rule, and fluid motion is evaluated in a frame of reference that translates with the velocity of the granular phase, v_s. Source terms in each of the depth-averaged balance equations account for the influence of the granular dilation rate, defined as the depth integral of ∇·v_s. Calculation of the dilation rate involves the effects of an elastic compressibility and an inelastic dilatancy angle proportional to m − m_eq, where m_eq is the value of m in equilibrium with the ambient stress state and flow rate. Normalization of the model equations shows that predicted debris-flow behaviour depends principally on the initial value of m − m_eq and on the ratio of two fundamental timescales. One of these timescales governs downslope debris-flow motion, and the other governs pore-pressure relaxation that modifies Coulomb friction and regulates evolution of m. A companion paper presents a suite of model predictions and tests.
Fernandez, F.G.A.; Camacho, F.G.; Perez, J.A.S.
1997-09-05
A mathematical model to estimate the solar irradiance profile and average light intensity inside a tubular photobioreactor under outdoor conditions is proposed, requiring only geographic, geometric, and solar position parameters. First, the length of the path into the culture traveled by any direct or disperse ray of light was calculated as a function of three variables: day of year, solar hour, and geographic latitude. Then, the phenomenon of light attenuation by biomass was studied considering Lambert-Beer's law (considering only absorption) and the monodimensional model of Cornet et al. (1900) (considering absorption and scattering phenomena). Due to the existence of differential wavelength absorption, none of the literature models are useful for explaining light attenuation by the biomass. Therefore, an empirical hyperbolic expression is proposed. The equations to calculate light path length were substituted in the proposed hyperbolic expression, reproducing light intensity data obtained in the center of the loop tubes. The proposed model was also able to estimate the irradiance accurately at any point inside the culture. Calculation of the local intensity was thus extended to the full culture volume in order to obtain the average irradiance, showing how the higher biomass productivities in a Phaeodactylum tricornutum UTEX 640 outdoor chemostat culture could be maintained by delaying light limitation.
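For the absorption-only Lambert-Beer baseline mentioned above, local and path-averaged irradiance can be sketched as follows. This is a simplified illustration, not the paper's empirical hyperbolic model; the extinction parameters and biomass concentration are hypothetical.

```python
import math

def local_irradiance(i0, ka, cb, p):
    """Lambert-Beer attenuation (absorption only): I(p) = I0 * exp(-Ka*Cb*p).

    i0: incident irradiance; ka: extinction coefficient; cb: biomass
    concentration; p: light path length into the culture.
    """
    return i0 * math.exp(-ka * cb * p)

def average_irradiance(i0, ka, cb, path_length, n=1000):
    """Average irradiance along a path [0, L], by midpoint-rule averaging
    of the local irradiance profile."""
    return sum(local_irradiance(i0, ka, cb, (k + 0.5) * path_length / n)
               for k in range(n)) / n
```

The numerical average matches the analytic value I0*(1 - exp(-Ka*Cb*L))/(Ka*Cb*L); in the paper this volume-averaging step is what links the local profile to the average irradiance seen by the culture.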
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-01
...EPA and NHTSA, on behalf of the Department of Transportation, are issuing this joint proposal to further reduce greenhouse gas emissions and improve fuel economy for light-duty vehicles for model years 2017-2025. This proposal extends the National Program beyond the greenhouse gas and corporate average fuel economy standards set for model years 2012-2016. On May 21, 2010, President Obama issued a Presidential Memorandum requesting that NHTSA and EPA develop through notice and comment rulemaking a coordinated National Program to reduce greenhouse gas emissions of light-duty vehicles for model years 2017-2025. This proposal, consistent with the President's request, responds to the country's critical need to address global climate change and to reduce oil consumption. NHTSA is proposing Corporate Average Fuel Economy standards under the Energy Policy and Conservation Act, as amended by the Energy Independence and Security Act, and EPA is proposing greenhouse gas emissions standards under the Clean Air Act. These standards apply to passenger cars, light-duty trucks, and medium-duty passenger vehicles, and represent a continued harmonized and consistent National Program. Under the National Program for model years 2017-2025, automobile manufacturers would be able to continue building a single light-duty national fleet that satisfies all requirements under both programs while ensuring that consumers still have a full range of vehicle choices. EPA is also proposing a minor change to the regulations applicable to MY 2012-2016, with respect to air conditioner performance and measurement of nitrous oxide.
Mapping potential vorticity dynamics on saturn: Zonal mean circulation from Cassini and Voyager data
NASA Astrophysics Data System (ADS)
Read, P. L.; Conrath, B. J.; Fletcher, L. N.; Gierasch, P. J.; Simon-Miller, A. A.; Zuchowski, L. C.
2009-12-01
Maps of Ertel potential vorticity on isentropic surfaces (IPV) and quasi-geostrophic potential vorticity (QGPV) are well established in dynamical meteorology as powerful sources of insight into dynamical processes involving 'balanced' flow (i.e. geostrophic or similar). Here we derive maps of zonal mean IPV and QGPV in Saturn's upper troposphere and lower stratosphere by making use of a combination of velocity measurements, derived from the combined tracking of cloud features in images from the Voyager and Cassini missions, and thermal measurements from the Cassini Composite Infrared Spectrometer (CIRS) instrument. IPV and QGPV are mapped and compared for the entire globe between latitudes 89°S-82°N. As on Jupiter, profiles of zonally averaged PV show evidence for a step-like "stair-case" pattern suggestive of local PV homogenisation, separated by strong PV gradients in association with eastward jets. The northward gradient of PV (IPV or QGPV) is found to change sign in several places in each hemisphere, however, even when baroclinic contributions are taken into account. The stability criterion with respect to Arnol'd's second stability theorem may be violated near the peaks of westward jets. Visible, near-IR and thermal-IR Cassini observations have shown that these regions exhibit many prominent, large-scale eddies and waves, e.g. including 'storm alley'. This suggests the possibility that at least some of these features originate from instabilities of the background zonal flow.
NASA Astrophysics Data System (ADS)
Ottewill, J. R.; Ruszczyk, A.; Broda, D.
2017-02-01
Time-varying transmission paths and inaccessibility can increase the difficulty in both acquiring and processing vibration signals for the purpose of monitoring epicyclic gearboxes. Recent work has shown that the synchronous signal averaging approach may be applied to measured motor currents in order to diagnose tooth faults in parallel shaft gearboxes. In this paper we further develop the approach so that it may also be applied to monitor tooth faults in epicyclic gearboxes. A low-degree-of-freedom model of an epicyclic gearbox, which incorporates the possibility of simulating tooth faults as well as any subsequent tooth contact loss due to these faults, is introduced. By combining this model with a simple space-phasor model of an induction motor it is possible to show that, in theory, tooth faults in epicyclic gearboxes may be identified from motor currents. Applying the synchronous averaging approach to experimentally recorded motor currents and angular displacements from a shaft-mounted encoder validates this finding. Comparisons between experiments and theory highlight the influence of operating conditions, backlash and shaft couplings on the transient response excited in the currents by the tooth fault. The results obtained suggest that the method may be a viable alternative or complement to more traditional methods for monitoring gearboxes. However, general observations also indicate that further investigations into the sensitivity and robustness of the method would be beneficial.
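The core of synchronous signal averaging is easy to sketch: segment the signal into complete rotation cycles and average them, so cycle-synchronous components reinforce while asynchronous noise averages toward zero. This toy version assumes angle-synchronous (resampled) data with an integer number of samples per cycle, which in practice is what the encoder signal provides.

```python
def synchronous_average(signal, samples_per_cycle):
    """Average a signal over complete cycles of length samples_per_cycle.

    Components locked to the cycle (e.g. a tooth-fault signature) survive;
    components asynchronous with the cycle tend toward zero.
    """
    n_cycles = len(signal) // samples_per_cycle
    avg = [0.0] * samples_per_cycle
    for c in range(n_cycles):
        for k in range(samples_per_cycle):
            avg[k] += signal[c * samples_per_cycle + k]
    return [a / n_cycles for a in avg]
```

Applied to motor currents, the same averaging is done after demodulating to the relevant gear-mesh or carrier order rather than on the raw waveform.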
NASA Astrophysics Data System (ADS)
Matsunaga, Y.; Sugita, Y.
2018-06-01
A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.
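The forward model behind an MSM is population propagation by a row-stochastic transition matrix; a minimal sketch with a hypothetical two-state model (the refinement against experimental data described above would adjust the matrix entries):

```python
def propagate(populations, transition_matrix, n_steps=1):
    """Propagate state populations p_{t+1} = p_t @ T for a row-stochastic
    transition matrix T (T[i][j] = probability of moving from i to j per
    lag time)."""
    p = list(populations)
    n = len(p)
    for _ in range(n_steps):
        p = [sum(p[i] * transition_matrix[i][j] for i in range(n))
             for j in range(n)]
    return p

# Hypothetical two-state MSM; repeated propagation relaxes any start
# population toward the stationary (equilibrium) distribution.
T = [[0.9, 0.1],
     [0.2, 0.8]]
```

For this T the stationary distribution is (2/3, 1/3), illustrating how equilibrium populations and transition probabilities are two views of the same fitted model.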
NASA Astrophysics Data System (ADS)
Marsooli, R.; Orton, P. M.; Georgas, N.; Blumberg, A. F.
2016-02-01
The Stevens Institute of Technology Estuarine and Coastal Ocean Model (sECOM) has been coupled with a more advanced surface wave model to simulate wave-current interaction, and results have been validated in estuarine and nearshore waters. sECOM is a three-dimensional, hydrostatic, free surface, primitive equation model. It solves the Navier-Stokes equations and the conservation equations for temperature and salinity using a finite-difference method on an Arakawa C-grid with a terrain-following (sigma) vertical coordinate and orthogonal curvilinear horizontal coordinate system. The model is coupled with the surface wave model developed by Mellor et al. (2008), which solves the spectral equation and takes into account depth and current refraction, and deep and shallow water. The wave model parameterizes the energy distribution in frequency space and the wave-wave interaction process by using a specified spectrum shape. The coupled wave-hydrodynamic model considers the wave-current interaction through wave-induced bottom stress, depth-dependent radiation stress, and wave effects on wind-induced surface stress. The model is validated using the data collected at a natural sandy beach at Duck, North Carolina, during the DUCK94 experiment. This test case reveals the capability of the model to simulate the wave-current interaction in nearshore coastal systems. The model is further validated using the data collected in Jamaica Bay, a semi-enclosed body of water located in the New York City region. This test reveals the applicability of the model to estuarine systems. These validations of the model and comparisons to its prior wave model, the Great Lakes Environmental Research Laboratory (GLERL) wave model (Donelan 1977), are presented and discussed. References: G.L. Mellor, M.A. Donelan, and L.-Y. Oey, 2008, A Surface Wave Model for Coupling with Numerical Ocean Circulation Models. J. Atmos. Oceanic Technol., 25, 1785-1807. Donelan, M.A., 1977. A
Zonal and tesseral harmonic coefficients for the geopotential function, from zero to 18th order
NASA Technical Reports Server (NTRS)
Kirkpatrick, J. C.
1976-01-01
Zonal and tesseral harmonic coefficients for the geopotential function are usually tabulated in normalized form to provide immediate information as to the relative significance of the coefficients in the gravity model. The normalized form of the geopotential coefficients cannot be used for computational purposes unless the gravity model has been modified to receive them. This modification is usually not done because the absolute or unnormalized form of the coefficients can be obtained from the simple mathematical relationship that relates the two forms. This computation can be quite tedious for hand calculation, especially for the higher order terms, and can be costly in terms of storage and execution time for machine computation. In this report, zonal and tesseral harmonic coefficients for the geopotential function are tabulated in absolute or unnormalized form. The report is designed to be used as a ready reference for both hand and machine calculation to save the user time and effort.
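The "simple mathematical relationship" between the two forms is the standard full-normalization factor; a sketch assuming that convention (the tabulated report values themselves are not reproduced here):

```python
import math

def denormalize(l, m, cbar):
    """Convert a fully normalized geopotential coefficient to its
    unnormalized (absolute) form:

        C_lm = Cbar_lm * sqrt((2 - delta_0m) * (2l + 1) * (l - m)! / (l + m)!)

    where delta_0m = 1 for m = 0 (zonal terms) and 0 otherwise.
    """
    delta = 1 if m == 0 else 0
    factor = math.sqrt((2 - delta) * (2 * l + 1)
                       * math.factorial(l - m) / math.factorial(l + m))
    return cbar * factor
```

For zonal terms the factor reduces to sqrt(2l + 1); the factorial ratio is what makes hand calculation tedious for high-order tesseral terms, motivating the pre-tabulated unnormalized values.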
Mattucci, Stephen F E; Cronin, Duane S
2015-01-01
Experimental testing on cervical spine ligaments provides important data for advanced numerical modeling and injury prediction; however, accurate characterization of individual ligament response and determination of average mechanical properties for specific ligaments have not been adequately addressed in the literature. Existing methods are limited by a number of arbitrary choices made during curve fits that often misrepresent the characteristic shape response of the ligaments, which is important for incorporation into numerical models to produce a biofidelic response. A method was developed to represent the mechanical properties of individual ligaments using a piecewise curve fit with first-derivative continuity between adjacent regions. The method was applied to published data for cervical spine ligaments and preserved the shape response (toe, linear, and traumatic regions) up to failure, for strain rates of 0.5 s^-1, 20 s^-1, and 150-250 s^-1, to determine the average force-displacement curves. Individual ligament coefficients of determination were 0.989 to 1.000, demonstrating excellent fit. This study produced a novel method in which a set of experimental ligament material property data exhibiting scatter was fit using a characteristic curve approach with toe, linear, and traumatic regions, as often observed in ligaments and tendons, and the method could be applied to other biological material data with a similar characteristic shape. The resultant average cervical spine ligament curves provide an accurate representation of the raw test data and the expected material property effects corresponding to varying deformation rates.
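The idea of first-derivative continuity between adjacent regions can be sketched with the simplest case: a quadratic toe region joined to a linear region whose slope matches the toe's slope at the transition. This is an illustration of the continuity constraint only, with hypothetical coefficients; the published method fits more regions (toe, linear, and traumatic) to the data.

```python
def toe_linear_force(d, a, d0):
    """Piecewise force-displacement curve: quadratic toe region f = a*d^2
    for d <= d0, then a linear region with matched value and slope at d0,
    giving C1 (first-derivative) continuity at the transition."""
    if d <= d0:
        return a * d * d
    slope = 2.0 * a * d0          # toe-region derivative at d0
    return a * d0 * d0 + slope * (d - d0)
```

Because both the value and slope match at d0, the fitted curve has no kink at the region boundary, which is the property the paper enforces so the averaged curves remain physically shaped.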
Stock, Eileen M.; Kimbrel, Nathan A.; Meyer, Eric C.; Copeland, Laurel A.; Monte, Ralph; Zeber, John E.; Gulliver, Suzy Bird; Morissette, Sandra B.
2016-01-01
Many Veterans from the conflicts in Iraq and Afghanistan return home with physical and psychological impairments that impact their ability to enjoy normal life activities and diminish their quality of life (QoL). The present research aimed to identify predictors of QoL over an 8-month period using Bayesian model averaging (BMA), which is a statistical technique useful for maximizing power with smaller sample sizes. A sample of 117 Iraq and Afghanistan Veterans receiving care in a southwestern healthcare system was recruited, and BMA examined the impact of key demographics (e.g., age, gender), diagnoses (e.g., depression), and treatment modalities (e.g., individual therapy, medication) on QoL over time. Multiple imputation based on Gibbs sampling was employed for incomplete data (6.4% missingness). Average follow-up QoL scores were significantly lower than at baseline (73.2 initial vs 69.5 4-month and 68.3 8-month). Employment was associated with increased QoL during each follow-up, while posttraumatic stress disorder and black race were inversely related. Additionally, predictive models indicated that depression, income, treatment for a medical condition, and group psychotherapy were strong negative predictors of 4-month QoL but not 8-month QoL. PMID:24942672
Mao, Qiang; Zhang, Kai; Yan, Wu; Cheng, Chaonan
2018-05-02
The aims of this study were to develop a forecasting model for the incidence of tuberculosis (TB) and analyze the seasonality of infections in China, and to provide a useful tool for formulating intervention programs and allocating medical resources. Data for the monthly incidence of TB from January 2004 to December 2015 were obtained from the National Scientific Data Sharing Platform for Population and Health (China). The Box-Jenkins method was applied to fit a seasonal auto-regressive integrated moving average (SARIMA) model to forecast the incidence of TB over the subsequent six months. During the study period of 144 months, 12,321,559 TB cases were reported in China, with an average monthly incidence of 6.4426 per 100,000 of the population. The monthly incidence of TB showed a clear 12-month cycle, and a seasonality with two peaks occurring in January and March and a trough in December. The best-fit model was SARIMA(1,0,0)(0,1,1)12, which demonstrated adequate information extraction (white noise test, p>0.05). Based on the analysis, the predicted incidence rates of TB from January to June 2016 were 6.6335, 4.7208, 5.8193, 5.5474, 5.2202 and 4.9156 per 100,000 of the population, respectively. Given the seasonal pattern of TB incidence in China, the SARIMA model is proposed as a useful tool for monitoring epidemics.
NASA Astrophysics Data System (ADS)
Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris
2018-03-01
Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
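Two ingredients of the approach can be sketched directly: the law-of-total-probability expectation over ensemble members, and the Nash-Sutcliffe efficiency used for assessment. The weights and discharge values below are hypothetical; in e-Bay the weights are posterior probabilities that vary with discharge magnitude and timing.

```python
def expected_discharge(discharges, weights):
    """Law of total probability: E[Q] = sum_i w_i * Q_i, where w_i is the
    probability that ensemble member i is correct (weights sum to 1)."""
    return sum(w * q for w, q in zip(weights, discharges))

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).

    NSE = 1 is a perfect fit; NSE = 0 means no better than the mean of
    the observations.
    """
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den
```

The reported NSE values of 0.97 (training) and 0.85 (validation) would be computed exactly as in `nash_sutcliffe`, with the e-Bay expected discharges as the simulated series.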
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Greenblatt, David
2007-01-01
This is an expanded version of a limited-length paper that appeared at the 5th International Symposium on Turbulence and Shear Flow Phenomena by the same authors. A computational study was performed for steady and oscillatory flow control over a hump model with flow separation to assess how well the steady and unsteady Reynolds-averaged Navier-Stokes equations predict trends due to Reynolds number, control magnitude, and control frequency. As demonstrated in earlier studies, the hump model case is useful because it clearly demonstrates a failing in all known turbulence models: they under-predict the turbulent shear stress in the separated region and consequently reattachment occurs too far downstream. In spite of this known failing, three different turbulence models were employed to determine if trends can be captured even though absolute levels are not. Overall the three turbulence models showed very similar trends as experiment for steady suction, but only agreed qualitatively with some of the trends for oscillatory control.
Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi
2011-01-01
This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. Autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with mean absolute percentage of error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume. PMID:22203886
Zhai, Binxu; Chen, Jianguo
2018-04-18
A stacked ensemble model is developed for forecasting and analyzing the daily average concentrations of fine particulate matter (PM2.5) in Beijing, China. Special feature extraction procedures, including those of simplification, polynomial, transformation and combination, are conducted before modeling to identify potentially significant features based on an exploratory data analysis. Stability feature selection and tree-based feature selection methods are applied to select important variables and evaluate the degrees of feature importance. Single models including LASSO, AdaBoost, XGBoost and a multi-layer perceptron optimized by the genetic algorithm (GA-MLP) are established in the level-0 space and are then integrated by support vector regression (SVR) in the level-1 space via stacked generalization. A feature importance analysis reveals that nitrogen dioxide (NO2) and carbon monoxide (CO) concentrations measured in the city of Zhangjiakou are the most important pollution factors for forecasting PM2.5 concentrations. Local extreme wind speeds and maximal wind speeds are considered to exert the strongest meteorological effects on the cross-regional transportation of contaminants. Pollutants found in the cities of Zhangjiakou and Chengde have a stronger impact on air quality in Beijing than other surrounding factors. Our model evaluation shows that the ensemble model generally performs better than a single nonlinear forecasting model when applied to new data, with a coefficient of determination (R^2) of 0.90 and a root mean squared error (RMSE) of 23.69 μg/m^3. For single pollutant grade recognition, the proposed model performs better when applied to days characterized by good air quality than when applied to days registering high levels of pollution. The overall classification accuracy level is 73.93%, with most misclassifications made among adjacent categories. The results demonstrate the interpretability and generalizability of
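Stacked generalization itself can be illustrated with a dependency-free sketch: level-0 models produce predictions, and a level-1 learner is fit on those predictions. The paper's level-1 learner is SVR; the ordinary-least-squares combiner below is a stand-in for illustration, and the prediction/target values are toy numbers. In a proper stack the level-1 training inputs would be out-of-fold level-0 predictions.

```python
def fit_level1_weights(preds_a, preds_b, targets):
    """Fit a two-model level-1 linear combiner y ~ w_a*p_a + w_b*p_b by
    ordinary least squares (closed-form 2x2 normal equations)."""
    saa = sum(a * a for a in preds_a)
    sbb = sum(b * b for b in preds_b)
    sab = sum(a * b for a, b in zip(preds_a, preds_b))
    say = sum(a * y for a, y in zip(preds_a, targets))
    sby = sum(b * y for b, y in zip(preds_b, targets))
    det = saa * sbb - sab * sab          # assumes predictors not collinear
    wa = (say * sbb - sby * sab) / det
    wb = (sby * saa - say * sab) / det
    return wa, wb
```

The level-1 learner thus learns how much to trust each level-0 model; replacing OLS with SVR, as in the paper, lets the combination be nonlinear.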
Ghizzo, A., E-mail: alain.ghizzo@univ-lorraine.fr; Palermo, F.
We address the mechanisms underlying low-frequency zonal flow generation in a turbulent system and the associated intermittent regime of ion-temperature-gradient (ITG) turbulence. This model is in connection with the recent observation of quasi-periodic zonal flow oscillation at a frequency close to 2 kHz, at the low-high transition, observed in the ASDEX Upgrade [Conway et al., Phys. Rev. Lett. 106, 065001 (2011)] and EAST tokamak [Xu et al., Phys. Rev. Lett. 107, 125001 (2011)]. Turbulent bursts caused by the coupling of Kelvin-Helmholtz (KH) driven shear flows with trapped ion modes (TIMs) were investigated by means of reduced gyrokinetic simulations. It was found that ITG turbulence can be regulated by low-frequency meso-scale zonal flows driven by resonant collisionless trapped ion modes (CTIMs), through parametric-type scattering, a process in competition with the usual KH instability.
NASA Astrophysics Data System (ADS)
Chaynikov, S.; Porta, G.; Riva, M.; Guadagnini, A.
2012-04-01
We focus on a theoretical analysis of nonreactive solute transport in porous media through the volume averaging technique. Darcy-scale transport models based on continuum formulations typically include large-scale dispersive processes which are embedded in a pore-scale advection diffusion equation through a Fickian analogy. This formulation has been extensively questioned in the literature due to its inability to depict observed solute breakthrough curves in diverse settings, ranging from the laboratory to the field scales. The heterogeneity of the pore-scale velocity field is one of the key sources of uncertainties giving rise to anomalous (non-Fickian) dispersion in macro-scale porous systems. Some of the models which are employed to interpret observed non-Fickian solute behavior make use of a continuum formulation of the porous system which assumes a two-region description and includes a bimodal velocity distribution. A first class of these models comprises the so-called "mobile-immobile" conceptualization, where convective and dispersive transport mechanisms are considered to dominate within a high velocity region (mobile zone), while convective effects are neglected in a low velocity region (immobile zone). The mass exchange between these two regions is assumed to be controlled by a diffusive process and is macroscopically described by first-order kinetics. An extension of these ideas is the two-equation "mobile-mobile" model, where both transport mechanisms are taken into account in each region and a first-order mass exchange between regions is employed. Here, we provide an analytical derivation of two-region "mobile-mobile" meso-scale models through a rigorous upscaling of the pore-scale advection diffusion equation. Among the available upscaling methodologies, we employ the Volume Averaging technique. In this approach, the heterogeneous porous medium is supposed to be pseudo-periodic, and can be represented through a (spatially) periodic unit cell.
NASA Astrophysics Data System (ADS)
Cheng, Zhen; Chauchat, Julien; Hsu, Tian-Jian; Calantoni, Joseph
2018-01-01
A Reynolds-averaged Euler-Lagrange sediment transport model (CFDEM-EIM) was developed for steady sheet flow, where the inter-granular interactions were resolved and the flow turbulence was modeled with a low-Reynolds-number-corrected k-ω turbulence closure modified for two-phase flows. To model the effect of turbulence on sediment suspension, the interaction between the turbulent eddies and particles was simulated with an eddy interaction model (EIM). The EIM was first calibrated with measurements from dilute suspension experiments. We demonstrated that the eddy-interaction model was able to reproduce the well-known Rouse profile for suspended sediment concentration. The model results were found to be sensitive to the choice of the coefficient C0 associated with the turbulence-sediment interaction time. A value C0 = 3 was suggested to match the measured concentration in the dilute suspension. The calibrated CFDEM-EIM was used to model a steady sheet flow experiment with lightweight coarse particles and yielded reasonable agreement with measured velocity, concentration and turbulence kinetic energy profiles. Further numerical experiments for sheet flow suggested that when C0 was decreased to C0 < 3, the simulation under-predicted the amount of suspended sediment in the dilute region and over-predicted the Schmidt number (Sc > 1.0). Additional simulations for a range of Shields parameters between 0.3 and 1.2 confirmed that CFDEM-EIM was capable of predicting sediment transport rates similar to empirical formulations. Based on the analysis of sediment transport rate and transport layer thickness, the EIM and the resulting suspended load were shown to be important when the fall parameter is less than 1.25.
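The Rouse profile that the eddy-interaction model is benchmarked against has a simple closed form: C(z)/C(a) = [(h-z)/z · a/(h-a)]^P with Rouse number P = w_s/(Sc·κ·u*). The sketch below evaluates it with NumPy; the depth, reference height, settling velocity and shear velocity are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative (assumed) parameters; not taken from the experiments in the paper.
h = 0.2        # water depth (m)
a = 0.01       # reference height above the bed (m)
w_s = 0.02     # particle settling velocity (m/s)
u_star = 0.05  # shear velocity (m/s)
kappa = 0.41   # von Karman constant
Sc = 1.0       # Schmidt number (the paper reports Sc > 1.0 when C0 < 3)

# Rouse number and the classical suspended-sediment concentration profile,
# normalized by the concentration at the reference height a.
P = w_s / (Sc * kappa * u_star)
z = np.linspace(a, 0.99 * h, 100)
C_over_Ca = ((h - z) / z * a / (h - a)) ** P
```

The profile equals 1 at the reference height and decays monotonically toward the surface; a larger Rouse number (stronger settling relative to turbulent mixing) concentrates sediment nearer the bed.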
Taghvaei, Sajjad; Jahanandish, Mohammad Hasan; Kosuge, Kazuhiro
2017-01-01
Population aging of societies requires providing the elderly with safe and dependable assistive technologies for daily life activities. Improving fall detection algorithms can play a major role in achieving this goal. This article proposes a real-time fall prediction algorithm based on visual data of a user of a walking assistive system, acquired from a depth sensor. In the absence of a coupled dynamic model of the human and the assistive walker, a hybrid "system identification-machine learning" approach is used. An autoregressive-moving-average (ARMA) model is fitted to the time-series walking data to forecast the upcoming states, and a hidden Markov model (HMM) based classifier is built on top of the ARMA model to predict falling in the upcoming time frames. The performance of the algorithm is evaluated through experiments with four subjects, including an experienced physiotherapist, using a walker robot in five different falling scenarios: fall forward, fall down, fall back, fall left, and fall right. The algorithm successfully predicts falls at a rate of 84.72%.
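The time-series forecasting step can be illustrated with a plain autoregressive fit (the moving-average part and the HMM classifier layered on top are omitted). The series below is a synthetic stand-in for one tracked body coordinate from the depth sensor; all values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for one tracked body coordinate from the depth sensor.
t = np.arange(300)
x = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)

# Fit an AR(2) model  x[n] ~ a1*x[n-1] + a2*x[n-2] + c  by least squares.
p = 2
A = np.column_stack([x[1:-1], x[:-2], np.ones(len(x) - p)])
coef, *_ = np.linalg.lstsq(A, x[p:], rcond=None)

# One-step-ahead forecast of the upcoming state, the quantity the fall
# classifier would consume.
x_next = coef @ np.array([x[-1], x[-2], 1.0])
residual_std = np.std(x[p:] - A @ coef)
```

In the paper's pipeline, forecasts like `x_next` over several upcoming frames would be fed to the HMM-based classifier to decide between normal walking and an imminent fall.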
Yoon, Yongjin; Puria, Sunil; Steele, Charles R
2009-09-05
In our previous work, the basilar membrane velocity V(BM) for a gerbil cochlea was calculated and compared with physiological measurements. The calculated V(BM) showed excessive phase excursion and, in the active case, a best-frequency place shift of approximately two fifths of an octave higher. Here we introduce a refined model that uses the time-averaged Lagrangian for the conservative system to resolve the phase excursion issues. To improve the overestimated best-frequency place found in the previous feed-forward active model, we implement in the new model a push-pull mechanism from the outer hair cells and phalangeal process. Using this new model, the V(BM) for the gerbil cochlea was calculated and compared with animal measurements. The results show excellent agreement for mapping the location of the maximum response to frequency, while the agreement for the response at a fixed point as a function of frequency is excellent for the amplitude and good for the phase.
YOON, YONGJIN; PURIA, SUNIL; STEELE, CHARLES R.
2010-01-01
In our previous work, the basilar membrane velocity VBM for a gerbil cochlea was calculated and compared with physiological measurements. The calculated VBM showed excessive phase excursion and, in the active case, a best-frequency place shift of approximately two fifths of an octave higher. Here we introduce a refined model that uses the time-averaged Lagrangian for the conservative system to resolve the phase excursion issues. To improve the overestimated best-frequency place found in the previous feed-forward active model, we implement in the new model a push-pull mechanism from the outer hair cells and phalangeal process. Using this new model, the VBM for the gerbil cochlea was calculated and compared with animal measurements. The results show excellent agreement for mapping the location of the maximum response to frequency, while the agreement for the response at a fixed point as a function of frequency is excellent for the amplitude and good for the phase. PMID:20485540
Comments on "extended zonal dislocations mediating {11-22} twinning in titanium"
NASA Astrophysics Data System (ADS)
El Kadiri, Haitham; Barrett, Christopher D.
2013-09-01
In a recent paper, Li et al. (Philos. Mag. 92 (2012) p.1006) used results of atomistic simulations to advance a growth mechanism of {11-22} twinning in titanium based on the concept of two elementary twinning dislocations which nucleate and glide in pairs but separately and sequentially on two neighbouring planes. This Comment was stimulated after A. Serra, D.J. Bacon and R.C. Pond privately raised concerns about this growth model to one of the present authors, H. El Kadiri, who was a co-author of the original paper. We repeated the simulations and obtained nearly the same simulation results as Li et al. However, after re-analysing these results, we have concluded that the extended extrinsic zonal dislocation mechanism claimed for twin growth in titanium is in fact incorrect, confirming the assessment by Serra et al. that the results of Li and co-authors were misinterpreted.
A statistical study of gyro-averaging effects in a reduced model of drift-wave transport
Fonseca, Julio; Del-Castillo-Negrete, Diego B.; Sokolov, Igor M.
2016-08-25
Here, a statistical study of finite Larmor radius (FLR) effects on transport driven by electrostatic drift waves is presented. The study is based on a reduced discrete Hamiltonian dynamical system known as the gyro-averaged standard map (GSM). In this system, FLR effects are incorporated through the gyro-averaging of a simplified weak-turbulence model of electrostatic fluctuations. Formally, the GSM is a modified version of the standard map in which the perturbation amplitude, K0, becomes K0 J0(p̂), where J0 is the zeroth-order Bessel function and p̂ is the Larmor radius. Assuming a Maxwellian probability density function (pdf) for p̂, we compute analytically and numerically the pdf and the cumulative distribution function of the effective drift-wave perturbation amplitude K0 J0(p̂). Using these results, we compute the probability of loss of confinement (i.e., global chaos), Pc, and the probability of trapping, Pt. It is shown that Pc provides an upper bound for the escape rate, and that Pt provides a good estimate of the particle trapping rate. Lastly, the analytical results are compared with direct numerical Monte-Carlo simulations of particle transport.
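Given the definition above, the GSM is straightforward to iterate numerically. The sketch below is an illustrative implementation, with J0 evaluated from its integral representation to stay dependency-free; the values of K0, the Larmor radius, and the initial condition are assumptions, not parameters from the paper.

```python
import numpy as np

def bessel_j0(x, n=4000):
    # J0(x) = (1/pi) * integral_0^pi cos(x*sin t) dt, evaluated with the
    # midpoint rule (avoids a scipy dependency).
    t = np.linspace(0.0, np.pi, n, endpoint=False) + np.pi / (2 * n)
    return float(np.mean(np.cos(x * np.sin(t))))

def gsm_step(theta, p, K_eff):
    # One iteration of the gyro-averaged standard map.
    p = p + K_eff * np.sin(theta)
    theta = (theta + p) % (2.0 * np.pi)
    return theta, p

# Assumed parameters: bare amplitude K0 and a fixed Larmor radius rho.
K0, rho = 1.5, 2.0
K_eff = K0 * bessel_j0(rho)   # FLR-reduced effective kick amplitude

theta, p = 1.0, 0.5           # assumed initial condition
for _ in range(500):
    theta, p = gsm_step(theta, p, K_eff)
```

Because J0 decays and oscillates, increasing the Larmor radius generally weakens (and can even reverse the sign of) the effective perturbation, which is the origin of the FLR suppression of chaotic transport studied in the paper.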
MPIRUN: A Portable Loader for Multidisciplinary and Multi-Zonal Applications
NASA Technical Reports Server (NTRS)
Fineberg, Samuel A.; Woodrow, Thomas S. (Technical Monitor)
1994-01-01
Multidisciplinary and multi-zonal applications are an important class of applications in the area of Computational Aerosciences. In these codes, two or more distinct parallel programs or copies of a single program are utilized to model a single problem. To support such applications, it is common to use a programming model where a program is divided into several single program multiple data stream (SPMD) applications, each of which solves the equations for a single physical discipline or grid zone. These SPMD applications are then bound together to form a single multidisciplinary or multi-zonal program in which the constituent parts communicate via point-to-point message passing routines. One method for implementing the message passing portion of these codes is with the new Message Passing Interface (MPI) standard. Unfortunately, this standard only specifies the message passing portion of an application, but does not specify any portable mechanisms for loading an application. MPIRUN was developed to provide a portable means for loading MPI programs, and was specifically targeted at multidisciplinary and multi-zonal applications. Programs using MPIRUN for loading and MPI for message passing are then portable between all machines supported by MPIRUN. MPIRUN is currently implemented for the Intel iPSC/860, TMC CM5, IBM SP-1 and SP-2, Intel Paragon, and workstation clusters. Further, MPIRUN is designed to be simple enough to port easily to any system supporting MPI.
Wave kinetics of drift-wave turbulence and zonal flows beyond the ray approximation
NASA Astrophysics Data System (ADS)
Zhu, Hongxuan; Zhou, Yao; Ruiz, D. E.; Dodin, I. Y.
2018-05-01
Inhomogeneous drift-wave turbulence can be modeled as an effective plasma where drift waves act as quantumlike particles and the zonal-flow velocity serves as a collective field through which they interact. This effective plasma can be described by a Wigner-Moyal equation (WME), which generalizes the quasilinear wave-kinetic equation (WKE) to the full-wave regime, i.e., resolves the wavelength scale. Unlike waves governed by manifestly quantumlike equations, whose WMEs can be borrowed from quantum mechanics and are commonly known, drift waves have Hamiltonians very different from those of conventional quantum particles. This causes unusual phase-space dynamics that is typically not captured by the WKE. We demonstrate how to correctly model this dynamics with the WME instead. Specifically, we report full-wave phase-space simulations of the zonal-flow formation (zonostrophic instability), deterioration (tertiary instability), and the so-called predator-prey oscillations. We also show how the WME facilitates analysis of these phenomena, namely, (i) we show that full-wave effects critically affect the zonostrophic instability, particularly its nonlinear stage and saturation; (ii) we derive the tertiary-instability growth rate; and (iii) we demonstrate that, with full-wave effects retained, the predator-prey oscillations do not require zonal-flow collisional damping, contrary to previous studies. We also show how the famous Rayleigh-Kuo criterion, which has been missing in wave-kinetic theories of drift-wave turbulence, emerges from the WME.
Hirata, Akimasa; Yanase, Kazuya; Laakso, Ilkka; Chan, Kwok Hung; Fujiwara, Osamu; Nagaoka, Tomoaki; Watanabe, Soichi; Conil, Emmanuelle; Wiart, Joe
2012-12-21
According to the international guidelines, the whole-body averaged specific absorption rate (WBA-SAR) is used as a metric of basic restriction for radio-frequency whole-body exposure. It is well known that the WBA-SAR largely depends on the frequency of the incident wave for a given incident power density. The frequency at which the WBA-SAR becomes maximal is called the 'resonance frequency'. Our previous study proposed a scheme for estimating the WBA-SAR at this resonance frequency based on an analogy between the power absorption characteristic of human models in free space and that of a dipole antenna. However, a scheme for estimating the WBA-SAR in a grounded human has not been discussed sufficiently, even though the WBA-SAR in a grounded human is larger than that in an ungrounded human. In this study, with the use of the finite-difference time-domain method, the grounded condition is confirmed to be the worst-case exposure for human body models in a standing posture. Then, WBA-SARs in grounded human models are calculated at their respective resonant frequencies. A formula for estimating the WBA-SAR of a human standing on the ground is proposed based on an analogy with a quarter-wavelength monopole antenna. First, homogenized human body models are shown to provide the conservative WBA-SAR as compared with anatomically based models. Based on the formula proposed here, the WBA-SARs in grounded human models are approximately 10% larger than those in free space. The variability of the WBA-SAR was shown to be ±30% even for humans of the same age, which is caused by the body shape.
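The quarter-wavelength monopole analogy gives a quick back-of-the-envelope estimate of the grounded resonance frequency: resonance occurs roughly where the body height equals a quarter of the free-space wavelength. The height below is an assumed illustrative value, not one from the paper.

```python
# Quarter-wavelength monopole analogy for a grounded human in a standing
# posture. The body height is an assumed illustrative value.
c = 2.998e8    # speed of light (m/s)
height = 1.75  # standing body height (m)

f_res = c / (4.0 * height)  # resonance frequency estimate (Hz), roughly 43 MHz
```

This is consistent with the well-known behavior that grounding roughly halves the free-space whole-body resonance frequency (a quarter-wave monopole versus a half-wave dipole).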
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-13
...EPA and NHTSA are announcing a 14-day extension of the comment period for the joint proposed rules ``2017 and Later Model Year Light- Duty Vehicle Greenhouse Gas Emissions and Corporate Average Fuel Economy Standards,'' published in the Federal Register on December 1, 2011 (76 FR 74854). The comment period was to end on January 30, 2012 (60 days after publication of the proposals in the Federal Register). This document extends the comment period to February 13, 2012. This extension of the comment period is provided to allow the public additional time to comment on the proposed rule. The extension of the comment period does not apply to NHTSA's Draft Environmental Impact Statement (Draft EIS), available on NHTSA's Web site at www.nhtsa.gov/fuel-economy. The comment period for NHTSA's Draft EIS closes on January 31, 2012.
NASA Astrophysics Data System (ADS)
Ma, Yingzhao; Hong, Yang; Chen, Yang; Yang, Yuan; Tang, Guoqiang; Yao, Yunjun; Long, Di; Li, Changmin; Han, Zhongying; Liu, Ronghua
2018-01-01
Accurate estimation of precipitation from satellites at high spatiotemporal scales over the Tibetan Plateau (TP) remains a challenge. In this study, we proposed a general framework for blending multiple satellite precipitation datasets using the dynamic Bayesian model averaging (BMA) algorithm. The blending experiment was performed at a daily 0.25° grid scale for 2007-2012 among Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42RT and 3B42V7, the Climate Prediction Center MORPHing technique (CMORPH), and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR). First, the BMA weights were optimized using the expectation-maximization (EM) method for each member on each day at 200 calibration sites and then interpolated to the entire plateau using the ordinary kriging (OK) approach. Thus, the merged data were produced as weighted sums of the individual members over the plateau. The dynamic BMA approach showed better performance, with a smaller root-mean-square error (RMSE) of 6.77 mm/day, a higher correlation coefficient of 0.592, and a closer Euclid value of 0.833, compared to the individual members at 15 validation sites. Moreover, BMA proved to be more robust in terms of seasonality, topography, and other parameters than traditional ensemble methods including simple model averaging (SMA) and one-outlier-removed (OOR). Error analysis against the state-of-the-art IMERG in the summer of 2014 further proved that the performance of BMA was superior with respect to multisatellite precipitation data merging. This study demonstrates that BMA provides a new solution for blending multiple satellite datasets in regions with limited gauges.
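The EM optimization of BMA weights can be sketched for a Gaussian conditional model. The two synthetic "members" below are hypothetical stand-ins for the satellite products; the update follows the standard Gaussian-mixture EM form, which may differ in detail from the exact variant used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical daily gauge "truth" and two synthetic satellite members:
# member 0 tracks the gauges closely, member 1 is biased and noisy.
n = 365
y = rng.gamma(2.0, 2.0, size=n)                 # observed precipitation (mm/day)
f = np.stack([y + rng.normal(0.0, 0.5, n),      # accurate member
              y + rng.normal(2.0, 3.0, n)])     # biased, noisy member

# EM for the BMA weights of a Gaussian mixture centred on the member forecasts.
K = f.shape[0]
w = np.full(K, 1.0 / K)
sigma2 = 1.0
for _ in range(100):
    # E-step: responsibility of each member for each day.
    dens = np.exp(-0.5 * (y - f) ** 2 / sigma2) / np.sqrt(2.0 * np.pi * sigma2)
    z = w[:, None] * dens
    z /= z.sum(axis=0, keepdims=True)
    # M-step: update the weights and the common forecast variance.
    w = z.mean(axis=1)
    sigma2 = np.sum(z * (y - f) ** 2) / n

y_bma = w @ f  # merged estimate: weighted sum of the members
```

The EM run drives the weight of the accurate member toward one, so the merged series outperforms the biased member, mirroring how the study's daily weights favor whichever product agrees best with the calibration gauges.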
2016-01-01
OBJECTIVES The aims of this study were to highlight some epidemiological aspects of scorpion envenomations, to analyse and interpret the available data for Biskra province, Algeria, and to develop a forecasting model for scorpion sting cases in Biskra province, which records the highest number of scorpion stings in Algeria. METHODS In addition to analysing the epidemiological profile of scorpion stings that occurred throughout the year 2013, we used the Box-Jenkins approach to fit a seasonal autoregressive integrated moving average (SARIMA) model to the monthly recorded scorpion sting cases in Biskra from 2000 to 2012. RESULTS The epidemiological analysis revealed that scorpion stings were reported continuously throughout the year, with peaks in the summer months. The most affected age group was 15 to 49 years old, with a male predominance. The most prone human body areas were the upper and lower limbs. The majority of cases (95.9%) were classified as mild envenomations. The time series analysis showed that a (5,1,0)×(0,1,1)12 SARIMA model offered the best fit to the scorpion sting surveillance data. This model was used to predict scorpion sting cases for the year 2013, and the fitted data showed considerable agreement with the actual data. CONCLUSIONS SARIMA models are useful for monitoring scorpion sting cases, and provide an estimate of the variability to be expected in future scorpion sting cases. This knowledge is helpful in predicting whether an unusual situation is developing or not, and could therefore assist decision-makers in strengthening the province’s prevention and control measures and in initiating rapid response measures. PMID:27866407
Selmane, Schehrazad; L'Hadj, Mohamed
2016-01-01
The aims of this study were to highlight some epidemiological aspects of scorpion envenomations, to analyse and interpret the available data for Biskra province, Algeria, and to develop a forecasting model for scorpion sting cases in Biskra province, which records the highest number of scorpion stings in Algeria. In addition to analysing the epidemiological profile of scorpion stings that occurred throughout the year 2013, we used the Box-Jenkins approach to fit a seasonal autoregressive integrated moving average (SARIMA) model to the monthly recorded scorpion sting cases in Biskra from 2000 to 2012. The epidemiological analysis revealed that scorpion stings were reported continuously throughout the year, with peaks in the summer months. The most affected age group was 15 to 49 years old, with a male predominance. The most prone human body areas were the upper and lower limbs. The majority of cases (95.9%) were classified as mild envenomations. The time series analysis showed that a (5,1,0)×(0,1,1) 12 SARIMA model offered the best fit to the scorpion sting surveillance data. This model was used to predict scorpion sting cases for the year 2013, and the fitted data showed considerable agreement with the actual data. SARIMA models are useful for monitoring scorpion sting cases, and provide an estimate of the variability to be expected in future scorpion sting cases. This knowledge is helpful in predicting whether an unusual situation is developing or not, and could therefore assist decision-makers in strengthening the province's prevention and control measures and in initiating rapid response measures.
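The differencing implied by the (5,1,0)×(0,1,1)12 specification, one regular difference (d = 1) and one seasonal difference at lag 12 (D = 1), can be illustrated directly. The synthetic monthly series below is a hypothetical stand-in for the Biskra counts, not the actual surveillance data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic monthly series mimicking the sting counts: an upward trend plus a
# summer peak with period 12. All values are illustrative.
months = np.arange(12 * 13)   # 2000-2012, monthly
series = (200.0 + 0.5 * months
          + 150.0 * np.maximum(0.0, np.sin(2.0 * np.pi * months / 12.0))
          + rng.normal(0.0, 10.0, months.size))

# The d = 1, D = 1, s = 12 part of the SARIMA specification:
d1 = np.diff(series)        # regular (lag-1) difference
d12 = d1[12:] - d1[:-12]    # seasonal difference at lag 12

# After both differences the trend and the seasonal cycle are removed,
# leaving an approximately stationary series for the ARMA part to model.
```

In practice one would fit the full model with a dedicated library (e.g. `SARIMAX(series, order=(5, 1, 0), seasonal_order=(0, 1, 1, 12))` in statsmodels) rather than differencing by hand; the sketch only shows why d = D = 1 with s = 12 suits a trending series with an annual cycle.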
NASA Technical Reports Server (NTRS)
Chaderjian, N. M.
1986-01-01
A computer code is under development whereby the thin-layer Reynolds-averaged Navier-Stokes equations are to be applied to realistic fighter-aircraft configurations. This transonic Navier-Stokes code (TNS) utilizes a zonal approach in order to treat complex geometries and satisfy in-core computer memory constraints. The zonal approach has been applied to isolated wing geometries in order to facilitate code development. Part 1 of this paper addresses the TNS finite-difference algorithm, zonal methodology, and code validation with experimental data. Part 2 of this paper addresses some numerical issues such as code robustness, efficiency, and accuracy at high angles of attack. Special free-stream-preserving metrics proved an effective way to treat H-mesh singularities over a large range of severe flow conditions, including strong leading-edge flow gradients, massive shock-induced separation, and stall. Furthermore, lift and drag coefficients have been computed for a wing up through CLmax. Numerical oil flow patterns and particle trajectories are presented both for subcritical and transonic flow. These flow simulations are rich with complex separated flow physics and demonstrate the efficiency and robustness of the zonal approach.
Pietarila Graham, Jonathan; Holm, Darryl D; Mininni, Pablo D; Pouquet, Annick
2007-11-01
We compute solutions of the Lagrangian-averaged Navier-Stokes-α (LANS-α) model for significantly higher Reynolds numbers (up to Re ≈ 8300) than have previously been accomplished. This allows sufficient separation of scales to observe a Navier-Stokes inertial range followed by a second inertial range specific to the LANS-α model. Both fully helical and nonhelical flows are examined, up to Reynolds numbers of approximately 1300. Analysis of the third-order structure function scaling supports the predicted l³ scaling; it corresponds to a k⁻¹ scaling of the energy spectrum for scales smaller than α. The energy spectrum itself shows a different scaling, which goes as k⁺¹. This latter spectrum is consistent with the absence of stretching in the subfilter scales due to the Taylor frozen-in hypothesis employed as a closure in the derivation of the LANS-α model. These two scalings are conjectured to coexist in different spatial portions of the flow. The l³ [E(k) ~ k⁻¹] scaling is subdominant to k⁺¹ in the energy spectrum, but the l³ scaling is responsible for the direct energy cascade, as no cascade can result from motions with no internal degrees of freedom. We demonstrate verification of the prediction for the size of the LANS-α attractor resulting from this scaling. From this, we give a methodology either for arriving at grid-independent solutions for the LANS-α model, or for obtaining a formulation of large eddy simulation optimal in the context of the α models. The fully converged grid-independent LANS-α model may not be the best approximation to a direct numerical simulation of the Navier-Stokes equations, since the minimum error is a balance between truncation errors and the approximation error due to using the LANS-α model instead of the primitive equations. Furthermore, the small-scale behavior of the LANS-α model contributes to a reduction of flux at constant energy, leading to a shallower energy spectrum.
A theory of self-organized zonal flow with fine radial structure in tokamak
NASA Astrophysics Data System (ADS)
Zhang, Y. Z.; Liu, Z. Y.; Xie, T.; Mahajan, S. M.; Liu, J.
2017-12-01
The (low frequency) zonal flow-ion temperature gradient (ITG) wave system, constructed on Braginskii's fluid model in tokamak, is shown to be a reaction-diffusion-advection system; it is derived by making use of a multiple spatiotemporal scale technique and two-dimensional (2D) ballooning theory. For real regular group velocities of ITG waves, two distinct temporal processes, sharing a very similar meso-scale radial structure, are identified in the nonlinear self-organized stage. The stationary and quasi-stationary structures reflect a particular feature of the poloidal group velocity. The equation set, posed as an initial value problem, is numerically solved for JET low-mode parameters; the results are presented in several figures and two movies that show the spatiotemporal evolutions as well as the spectrum analysis: frequency-wavenumber spectrum, auto power spectrum, and Lissajous diagram. This approach reveals that the zonal flow in tokamak is a local traveling wave. For the quasi-stationary process, the cycle of ITG wave energy is composed of two consecutive phases in distinct spatiotemporal structures: a pair of Cavitons growing and breathing slowly without long-range propagation, followed by a sudden decay into many Instantons that carry negative wave energy rapidly to infinity. A spotlight onto the motion of Instantons for a given radial position reproduces a Blob-Hole temporal structure; the occurrence as well as the rapid decay of a Caviton into Instantons is triggered by zero-crossing of the radial group velocity. A sample of the radial profile of zonal flow contributed from 31 nonlinearly coupled rational surfaces near the plasma edge is found to be very similar to that observed in the JET Ohmic phase [J. C. Hillesheim et al., Phys. Rev. Lett. 116, 165002 (2016)]. The theory predicts an interior asymmetric dipole structure associated with the zonal flow that is driven by the gradients of ITG turbulence intensity.
NASA Astrophysics Data System (ADS)
Hillesheim, Jon
2015-11-01
High spatial resolution measurements with Doppler backscattering in JET have provided new insights into the development of the edge radial electric field during pedestal formation. The characteristics of Er have been studied as a function of density at 2.5 MA plasma current and 3 T toroidal magnetic field. We observe fine-scale spatial structure in the edge Er well prior to the LH transition, consistent with stationary zonal flows. Zonal flows are a fundamental mechanism for the saturation of turbulence and this is the first direct evidence of stationary zonal flows in a tokamak. The radial wavelength of the zonal flows systematically decreases with density. The zonal flows are clearest in Ohmic conditions, weaker in L-mode, and absent in H-mode. Measurements also show that after neutral beam heating is applied, the edge Er builds up at a constant gradient into the core during L-mode, at radii where Er is mainly due to toroidal velocity. The local stability of velocity shear driven turbulence, such as the parallel velocity gradient mode, will be assessed with gyrokinetic simulations. This critical Er shear persists across the LH transition into H-mode. Surprisingly, a reduction in the apparent magnitude of the Er well depth is observed directly following the LH transition at high densities. Establishing the physics basis for the LH transition is important for projecting scalings to ITER and these observations challenge existing models based on increased Er shear or strong zonal flows as the trigger for the transition. This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under grant agreement No 633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission.
Zonal Flows and Long-lived Axisymmetric Pressure Bumps in Magnetorotational Turbulence
NASA Astrophysics Data System (ADS)
Johansen, A.; Youdin, A.; Klahr, H.
2009-06-01
We study the behavior of magnetorotational turbulence in shearing box simulations with a radial and azimuthal extent up to 10 scale heights. Maxwell and Reynolds stresses are found to increase by more than a factor of 2 when increasing the box size beyond two scale heights in the radial direction. Further increase of the box size has little or no effect on the statistical properties of the turbulence. An inverse cascade excites magnetic field structures at the largest scales of the box. The corresponding 10% variation in the Maxwell stress launches a zonal flow of alternating sub- and super-Keplerian velocity. This, in turn, generates a banded density structure in geostrophic balance between pressure and Coriolis forces. We present a simplified model for the appearance of zonal flows, in which stochastic forcing by the magnetic tension on short timescales creates zonal flow structures with lifetimes of several tens of orbits. We experiment with various improved shearing box algorithms to reduce the numerical diffusivity introduced by the supersonic shear flow. While a standard finite difference advection scheme shows signs of a suppression of turbulent activity near the edges of the box, this problem is eliminated by a new method where the Keplerian shear advection is advanced in time by interpolation in Fourier space.
Computation of transonic separated wing flows using an Euler/Navier-Stokes zonal approach
NASA Technical Reports Server (NTRS)
Kaynak, Uenver; Holst, Terry L.; Cantwell, Brian J.
1986-01-01
A computer program called Transonic Navier Stokes (TNS) has been developed which solves the Euler/Navier-Stokes equations around wings using a zonal grid approach. In the present zonal scheme, the physical domain of interest is divided into several subdomains called zones and the governing equations are solved interactively. The advantages of the Zonal Grid approach are as follows: (1) the grid for any subdomain can be generated easily; (2) grids can be, in a sense, adapted to the solution; (3) different equation sets can be used in different zones; and, (4) this approach allows for a convenient data base organization scheme. Using this code, separated flows on a NACA 0012 section wing and on the NASA Ames WING C have been computed. First, the effects of turbulence and artificial dissipation models incorporated into the code are assessed by comparing the TNS results with other CFD codes and experiments. Then a series of flow cases is described where data are available. The computed results, including cases with shock-induced separation, are in good agreement with experimental data. Finally, some futuristic cases are presented to demonstrate the abilities of the code for massively separated cases which do not have experimental data.
The climatology of low-latitude ionospheric densities and zonal drifts from IMAGE-FUV.
NASA Astrophysics Data System (ADS)
Immel, T. J.; Sagawa, E.; Frey, H. U.; Mende, S. B.; Patel, J.
2004-12-01
The IMAGE satellite was the first dedicated to magnetospheric imaging, but has also provided numerous images of the nightside ionosphere with its Far-Ultraviolet (FUV) spectrographic imager. Nightside emissions of O I at 135.6 nm originating away from the aurora are due to recombination of ionospheric O+, and vary in intensity as the square of the O+ density. IMAGE-FUV, operating in a highly elliptical orbit with apogee at middle latitudes and >7 Re altitude, measures this emission globally with 100-km resolution. During each 14.5-hour orbit, IMAGE-FUV is able to monitor nightside ionospheric densities for up to 6-7 hours. Hundreds of low-latitude ionospheric bubbles, their development and drift speed, and a variety of other dynamical variations in brightness and morphology of the equatorial anomalies have been observed during this mission. Furthermore, the average global distribution of low-latitude ionospheric plasma densities can be determined in 3 days. Imaging data collected from February through June of 2002 are used to compile a dataset containing a variety of parameters (e.g., latitude and brightness of peak plasma density, zonal bubble drift speed) which can be drawn from for climatological studies. Recent results indicate that the average ground speed of low-latitude zonal plasma drifts varies with longitude by up to 50%, and that a periodic variation in ionospheric densities with longitude suggests the influence of a lower-thermospheric non-migrating tide with wavenumber 4 on ionospheric densities. An excellent correlation between zonal drift speed and the magnetic storm index Dst is also found.
Fulford, Janice M.
2003-01-01
A numerical computer model, Transient Inundation Model for Rivers -- 2 Dimensional (TrimR2D), that solves the two-dimensional depth-averaged flow equations is documented and discussed. The model uses a semi-implicit, semi-Lagrangian finite-difference method. It is a variant of the Trim model and has been used successfully in estuarine environments such as San Francisco Bay. The abilities of the model are documented for three scenarios: uniform depth flows, laboratory dam-break flows, and large-scale riverine flows. The model can start computations from a "dry" bed and converge to accurate solutions. Inflows are expressed as source terms, which limits the use of the model to sufficiently long reaches where the flow reaches equilibrium with the channel. The data sets used by the investigation demonstrate that the model accurately propagates flood waves through long river reaches and simulates dam breaks with abrupt water-surface changes.
NASA Astrophysics Data System (ADS)
Griessbach, Sabine; Hoffmann, Lars; Höpfner, Michael; Riese, Martin; Spang, Reinhold
2013-09-01
The viability of a spectrally averaging model to perform radiative transfer calculations in the infrared including scattering by atmospheric particles is examined for the application of infrared limb remote sensing measurements. Here we focus on the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) aboard the European Space Agency's Envisat. Various spectra for clear air and cloudy conditions were simulated with a spectrally averaging radiative transfer model and a line-by-line radiative transfer model for three atmospheric window regions (825-830, 946-951, 1224-1228 cm-1) and compared to each other. The results are rated in terms of the MIPAS noise equivalent spectral radiance (NESR). The clear air simulations generally agree within one NESR. The cloud simulations neglecting the scattering source term agree within two NESR. The differences between the cloud simulations including the scattering source term are generally below three and always below four NESR. We conclude that the spectrally averaging approach is well suited for fast and accurate infrared radiative transfer simulations including scattering by clouds. We found that the main source for the differences between the cloud simulations of both models is the cloud edge sampling. Furthermore we reasoned that this model comparison for clouds is also valid for atmospheric aerosol in general.
NASA Technical Reports Server (NTRS)
Gao, Shou-Ting; Ping, Fan; Li, Xiao-Fan; Tao, Wei-Kuo
2004-01-01
Although dry/moist potential vorticity is a useful physical quantity for meteorological analysis, it cannot be applied to the analysis of 2D simulations. A convective vorticity vector (CVV) is introduced in this study to analyze 2D cloud-resolving simulation data associated with 2D tropical convection. The cloud model is forced by the vertical velocity, zonal wind, horizontal advection, and sea surface temperature obtained from the TOGA COARE, and is integrated for a selected 10-day period. The CVV has zonal and vertical components in the 2D x-z frame. Analysis of zonally-averaged and mass-integrated quantities shows that the correlation coefficient between the vertical component of the CVV and the sum of the cloud hydrometeor mixing ratios is 0.81, whereas the correlation coefficient between the zonal component and the sum of the mixing ratios is only 0.18. This indicates that the vertical component of the CVV is closely associated with tropical convection. The tendency equation for the vertical component of the CVV is derived and the zonally-averaged and mass-integrated tendency budgets are analyzed. The tendency of the vertical component of the CVV is determined by the interaction between the vorticity and the zonal gradient of cloud heating. The results demonstrate that the vertical component of the CVV is a cloud-linked parameter and can be used to study tropical convection.
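The diagnostic used above is a standard Pearson correlation between two zonally averaged, mass-integrated time series. A minimal sketch with synthetic stand-in data (the series and their coupling are invented for illustration, not taken from the TOGA COARE simulation):

```python
import numpy as np

def pearson_corr(x, y):
    """Pearson correlation coefficient between two series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

# Synthetic stand-ins: "hydrometeors" (e.g. sum of mixing ratios), a
# diagnostic tightly coupled to it, and one that is unrelated.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 240)
hydrometeors = np.abs(np.sin(t)) + 0.1 * rng.standard_normal(t.size)
cvv_vertical = 2.0 * hydrometeors + 0.2 * rng.standard_normal(t.size)  # coupled
cvv_zonal = rng.standard_normal(t.size)                                # unrelated

r_vertical = pearson_corr(cvv_vertical, hydrometeors)
r_zonal = pearson_corr(cvv_zonal, hydrometeors)
```

A strongly coupled pair gives a coefficient near 1, an unrelated pair a coefficient near 0, mirroring the 0.81 versus 0.18 contrast reported above.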
NASA Astrophysics Data System (ADS)
Ma, Yingzhao; Yang, Yuan; Han, Zhongying; Tang, Guoqiang; Maguire, Lane; Chu, Zhigang; Hong, Yang
2018-01-01
The objective of this study is to comprehensively evaluate the new Ensemble Multi-Satellite Precipitation Dataset using the Dynamic Bayesian Model Averaging scheme (EMSPD-DBMA) at daily and 0.25° scales from 2001 to 2015 over the Tibetan Plateau (TP). Error analysis against gauge observations revealed that EMSPD-DBMA captured the spatiotemporal pattern of daily precipitation with an acceptable Correlation Coefficient (CC) of 0.53 and a Relative Bias (RB) of -8.28%. Moreover, EMSPD-DBMA outperformed IMERG and GSMaP-MVK in almost all metrics in the summers of 2014 and 2015, with the lowest RB and Root Mean Square Error (RMSE) values of -2.88% and 8.01 mm/d, respectively. It also better reproduced the Probability Density Function (PDF) of daily rainfall amounts and estimated moderate and heavy rainfall better than both IMERG and GSMaP-MVK. Further, hydrological evaluation with the Coupled Routing and Excess STorage (CREST) model in the Upper Yangtze River region indicated that the EMSPD-DBMA forced simulation showed satisfactory hydrological performance in terms of streamflow prediction, with Nash-Sutcliffe coefficient of Efficiency (NSE) values of 0.82 and 0.58, compared to the gauge forced simulation (0.88 and 0.60) for the calibration and validation periods, respectively. EMSPD-DBMA also showed a better fit for peak flow simulation than the new Multi-Source Weighted-Ensemble Precipitation Version 2 (MSWEP V2) product, indicating a promising prospect of hydrological utility for ensemble satellite precipitation data. This study is among the first comprehensive evaluations of blended multi-satellite precipitation data across the TP, and should help improve the DBMA algorithm in regions with complex terrain.
Gravitational Anomalies Caused by Zonal Winds in Jupiter
NASA Astrophysics Data System (ADS)
Schubert, G.; Kong, D.; Zhang, K.
2012-12-01
We present an accurate three-dimensional non-spherical numerical calculation of the gravitational anomalies caused by zonal winds in Jupiter. The calculation is based on a three-dimensional finite element method and accounts for the full effect of the significant departure from spherical geometry caused by rapid rotation. Since the speeds of Jupiter's zonal winds are much smaller than that of its rigid-body rotation, our numerical calculation is carried out in two stages. First, we compute the equilibrium non-spherical distributions of density and pressure within Jupiter via a hybrid inverse approach, by determining an a priori unknown coefficient in the polytropic equation of state that results in a match to the observed shape of Jupiter. Second, by assuming that Jupiter's zonal winds extend throughout the interior along cylinders parallel to the rotation axis, we compute the gravitational anomalies produced by the wind-related density anomalies, providing an upper bound on the gravitational anomalies caused by the Jovian zonal winds.
Mihaescu, Mihai; Murugappan, Shanmugam; Kalra, Maninder; Khosla, Sid; Gutmark, Ephraim
2008-07-19
Computational fluid dynamics techniques employing primarily steady Reynolds-Averaged Navier-Stokes (RANS) methodology have been recently used to characterize the transitional/turbulent flow field in human airways. The use of RANS implies that flow phenomena are averaged over time, the flow dynamics not being captured. Further, RANS uses two-equation turbulence models that are not adequate for predicting anisotropic flows, flows with high streamline curvature, or flows where separation occurs. A more accurate approach for such flow situations that occur in the human airway is Large Eddy Simulation (LES). The paper considers flow modeling in a pharyngeal airway model reconstructed from cross-sectional magnetic resonance scans of a patient with obstructive sleep apnea. The airway model is characterized by a maximum narrowing at the site of retropalatal pharynx. Two flow-modeling strategies are employed: steady RANS and the LES approach. In the RANS modeling framework both k-epsilon and k-omega turbulence models are used. The paper discusses the differences between the airflow characteristics obtained from the RANS and LES calculations. The largest discrepancies were found in the axial velocity distributions downstream of the minimum cross-sectional area. This region is characterized by flow separation and large radial velocity gradients across the developed shear layers. The largest difference in static pressure distributions on the airway walls was found between the LES and the k-epsilon data at the site of maximum narrowing in the retropalatal pharynx.
Zonal PANS: evaluation of different treatments of the RANS-LES interface
NASA Astrophysics Data System (ADS)
Davidson, L.
2016-03-01
The partially Reynolds-averaged Navier-Stokes (PANS) model can be used to simulate turbulent flows either as RANS, large eddy simulation (LES) or DNS. Its main parameter is fk, whose physical meaning is the ratio of the modelled to the total turbulent kinetic energy. In RANS fk = 1, in DNS fk = 0 and in LES fk takes values between 0 and 1. Three different ways of prescribing fk are evaluated for decaying grid turbulence and fully developed channel flow: fk = 0.4, fk based on the turbulent length scale ktot^(3/2)/ɛ and, from its definition, fk = k/ktot, where ktot is the sum of the modelled, k, and resolved, kres, turbulent kinetic energy. It is found that fk = 0.4 gives the best results. In Girimaji and Wallin, a method was proposed to include the effect of the gradient of fk. This approach is used at the RANS-LES interface in the present study. Four different interface models are evaluated in fully developed channel flow and embedded LES of channel flow: in both cases, PANS is used as a zonal model with fk = 1 in the unsteady RANS (URANS) region and fk = 0.4 in the LES region. In fully developed channel flow, the RANS-LES interface is parallel to the wall (horizontal) and in embedded LES, it is parallel to the inlet (vertical). The importance of the location of the horizontal interface in fully developed channel flow is also investigated. It is found that the location - and the choice of the treatment at the interface - may be critical at low Reynolds number or if the interface is placed too close to the wall. The reason is that the modelled turbulent shear stress at the interface is large and hence the relative strength of the resolved turbulence is small. In RANS, the turbulent viscosity - and consequently also the modelled Reynolds shear stress - is only weakly dependent on Reynolds number. It is found in the present work that this also applies in the URANS region.
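The defining ratio fk = k/ktot is straightforward to evaluate wherever the modelled and resolved turbulent kinetic energies are available. A minimal sketch, assuming only the definition given in the abstract (the fallback of fk = 1 where ktot vanishes is an illustrative choice, not something prescribed by the model):

```python
import numpy as np

def f_k(k_model, k_resolved):
    """PANS parameter fk = k/ktot: ratio of modelled to total turbulent
    kinetic energy. fk = 1 recovers RANS; fk -> 0 approaches DNS.
    Where ktot vanishes (no turbulence) we fall back to fk = 1 (RANS),
    an illustrative choice for laminar cells."""
    k_tot = k_model + k_resolved
    return np.where(k_tot > 0.0, k_model / np.maximum(k_tot, 1e-30), 1.0)

# URANS region (all energy modelled), LES-like region, fully resolved limit:
fk = f_k(np.array([1.0, 0.4, 0.0]), np.array([0.0, 0.6, 1.0]))
```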
A new paradigm for predicting zonal-mean climate and climate change
NASA Astrophysics Data System (ADS)
Armour, K.; Roe, G.; Donohoe, A.; Siler, N.; Markle, B. R.; Liu, X.; Feldl, N.; Battisti, D. S.; Frierson, D. M.
2016-12-01
How will the pole-to-equator temperature gradient, or large-scale patterns of precipitation, change under global warming? Answering such questions typically involves numerical simulations with comprehensive general circulation models (GCMs) that represent the complexities of climate forcing, radiative feedbacks, and atmosphere and ocean dynamics. Yet, our understanding of these predictions hinges on our ability to explain them through the lens of simple models and physical theories. Here we present evidence that zonal-mean climate, and its changes, can be understood in terms of a moist energy balance model that represents atmospheric heat transport as a simple diffusion of latent and sensible heat (as a down-gradient transport of moist static energy, with a diffusivity coefficient that is nearly constant with latitude). We show that the theoretical underpinnings of this model derive from the principle of maximum entropy production; that its predictions are empirically supported by atmospheric reanalyses; and that it successfully predicts the behavior of a hierarchy of climate models - from a gray radiation aquaplanet moist GCM, to comprehensive GCMs participating in CMIP5. As an example of the power of this paradigm, we show that, given only patterns of local radiative feedbacks and climate forcing, the moist energy balance model accurately predicts the evolution of zonal-mean temperature and atmospheric heat transport as simulated by the CMIP5 ensemble. These results suggest that, despite all of its dynamical complexity, the atmosphere essentially responds to energy imbalances by simply diffusing latent and sensible heat down-gradient; this principle appears to explain zonal-mean climate and its changes under global warming.
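The diffusive moist energy balance model described above can be sketched in a few lines: temperature relaxes toward local radiative equilibrium while a nearly constant diffusivity transports moist static energy down-gradient. All parameter values below (insolation fit, albedo, OLR coefficients, diffusivity, and the crude latent-heat term) are textbook-style assumptions for illustration, not those calibrated in the study:

```python
import numpy as np

# 1D diffusive moist energy balance model on x = sin(latitude).
n = 45
x = np.linspace(-0.99, 0.99, n)
dx = x[1] - x[0]

S = 340.0 * (1.0 - 0.48 * (3.0 * x**2 - 1.0) / 2.0)  # annual-mean insolation, W/m^2
alpha, A, B = 0.30, 210.0, 2.0   # planetary albedo; OLR = A + B*T (T in deg C)
D, C = 0.3, 1.0                  # diffusivity (W m^-2 K^-1); heat capacity

def moist_static_energy(T):
    # temperature plus a latent term growing roughly like Clausius-Clapeyron
    return T + 1.2 * np.exp(0.06 * T)

T = np.zeros(n)
dt = 5.0e-4
for _ in range(40000):  # explicit time stepping to equilibrium
    h = moist_static_energy(T)
    xm = 0.5 * (x[1:] + x[:-1])
    flux = -D * (1.0 - xm**2) * np.diff(h) / dx   # down-gradient MSE transport
    flux = np.concatenate(([0.0], flux, [0.0]))   # no flux through the poles
    T += dt * (S * (1.0 - alpha) - (A + B * T) - np.diff(flux) / dx) / C
```

At equilibrium the global-mean temperature is set purely by the radiation terms (the diffusion integrates to zero), while the diffusion of moist static energy flattens the pole-to-equator gradient relative to radiative equilibrium.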
Zonally Symmetric Oscillations of the Thermosphere at Planetary Wave Periods
NASA Astrophysics Data System (ADS)
Forbes, Jeffrey M.; Zhang, Xiaoli; Maute, Astrid; Hagan, Maura E.
2018-05-01
New mechanisms for imposing planetary wave (PW) variability on the ionosphere-thermosphere system are discovered in numerical experiments conducted with the National Center for Atmospheric Research thermosphere-ionosphere-electrodynamics general circulation model. First, it is demonstrated that a tidal spectrum modulated at PW periods (3-20 days) entering the ionosphere-thermosphere system near 100 km is responsible for producing ±40 m/s and ±10-15 K PW period oscillations between 110 and 150 km at low to middle latitudes. The dominant response is broadband and zonally symmetric (i.e., "S0") over a range of periods and is attributable to tidal dissipation; essentially, the ionosphere-thermosphere system "vacillates" in response to dissipation of the PW-modulated tidal spectrum. In addition, some specific westward propagating PWs such as the quasi-6-day wave are amplified by the presence of the tidal spectrum; the underlying mechanism is hypothesized to be a second-stage nonlinear interaction. The S0 total neutral mass density (ρ) response at 325 km consists of PW period fluctuations of order ±3-4%, roughly equivalent to the day-to-day variability associated with low-level geomagnetic activity. The variability in ρ over short periods (≲9 days) correlates with temperature changes, indicating a response of hydrostatic origin. Over longer periods ρ is also controlled by composition and mean molecular mass. While the upper-thermosphere impacts are modest, they do translate to more significant changes in the F region ionosphere.
David, Ingrid; Sánchez, Juan-Pablo; Piles, Miriam
2018-05-10
Indirect genetic effects (IGE) are important components of various traits in several species. Although the intensity of social interactions between partners likely vary over time, very few genetic studies have investigated how IGE vary over time for traits under selection in livestock species. To overcome this issue, our aim was: (1) to analyze longitudinal records of average daily gain (ADG) in rabbits subjected to a 5-week period of feed restriction using a structured antedependence (SAD) model that includes IGE and (2) to evaluate, by simulation, the response to selection when IGE are present and genetic evaluation is based on a SAD model that includes IGE or not. The direct genetic variance for ADG (g/d) increased from week 1 to 3 [from 8.03 to 13.47 (g/d)²] and then decreased [6.20 (g/d)² at week 5], while the indirect genetic variance decreased from week 1 to 4 [from 0.43 to 0.22 (g/d)²]. The correlation between the direct genetic effects of different weeks was moderate to high (ranging from 0.46 to 0.86) and tended to decrease with time interval between measurements. The same trend was observed for IGE for weeks 2 to 5 (correlations ranging from 0.62 to 0.91). Estimates of the correlation between IGE of week 1 and IGE of the other weeks did not follow the same pattern and correlations were lower. Estimates of correlations between direct and indirect effects were negative at all times. After seven generations of simulated selection, the increase in ADG from selection on EBV from a SAD model that included IGE was higher (~30%) than when those effects were omitted. Indirect genetic effects are larger just after mixing animals at weaning than later in the fattening period, probably because of the establishment of social hierarchy that is generally observed at that time. Accounting for IGE in the selection criterion maximizes genetic progress.
NASA Astrophysics Data System (ADS)
Rahaman, S. Abdul; Aruchamy, S.; Jegankumar, R.; Ajeez, S. Abdul
2015-10-01
Soil erosion is a widespread environmental challenge faced in the Kallar watershed nowadays. Erosion is defined as the movement of soil by water and wind, and it occurs in the Kallar watershed under a wide range of land uses. Erosion by water can be dramatic during storm events, resulting in wash-outs and gullies. It can also be insidious, occurring as sheet and rill erosion during heavy rains. Most of the soil lost by water erosion is by the processes of sheet and rill erosion. Land degradation and subsequent soil erosion and sedimentation play a significant role in impairing water resources within sub watersheds, watersheds and basins. Using conventional methods to assess soil erosion risk is expensive and time consuming. A comprehensive methodology that integrates Remote sensing and Geographic Information Systems (GIS), coupled with the use of an empirical model (Revised Universal Soil Loss Equation - RUSLE) to assess risk, can identify and assess soil erosion potential and estimate the value of soil loss. GIS data layers including rainfall erosivity (R), soil erodibility (K), slope length and steepness (LS), cover management (C) and conservation practice (P) factors were computed to determine their effects on average annual soil loss in the study area. The final map of annual soil erosion shows a maximum soil loss of 398.58 t ha⁻¹ y⁻¹. Based on this result, soil erosion was classified into a severity map with five classes: very low, low, moderate, high and critical. Further, the RUSLE factors were combined into two products: soil erosion susceptibility (A = RKLS) and soil erosion hazard (A = RKLSCP). It is understood that C and P are the factors that can be controlled and thus can greatly reduce soil loss through management and conservation measures.
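The RUSLE calculation described above is a cell-wise product of the five factor rasters. A minimal sketch; all factor values and the severity class breaks below are invented for illustration:

```python
import numpy as np

# Five RUSLE factor layers on a toy 2x2 grid (all values invented).
R  = np.array([[600.0, 650.0], [700.0, 620.0]])   # rainfall erosivity
K  = np.array([[0.20,  0.30],  [0.25,  0.15]])    # soil erodibility
LS = np.array([[0.40,  2.00],  [5.00,  1.00]])    # slope length-steepness
C  = np.array([[0.05,  0.10],  [0.30,  0.20]])    # cover management
P  = np.array([[1.00,  0.70],  [1.00,  0.60]])    # conservation practice

A_susceptibility = R * K * LS            # A = RKLS (C and P ignored)
A_hazard = A_susceptibility * C * P      # A = RKLSCP, annual soil loss, t/ha/yr

# Classify into the five severity classes named in the study;
# the class breaks themselves are assumed for illustration.
breaks = [5.0, 15.0, 30.0, 60.0]
labels = np.array(["very low", "low", "moderate", "high", "critical"])
severity = labels[np.digitize(A_hazard, breaks)]
```

The susceptibility product isolates the uncontrollable site factors, while the hazard product folds in the manageable C and P factors, which is why land-cover and conservation interventions act directly on the final estimate.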
DOT National Transportation Integrated Search
2008-06-01
The National Highway Traffic Safety Administration (NHTSA) has prepared this Draft Environmental Impact Statement (DEIS) to disclose and analyze the potential environmental impacts of the proposed new Corporate Average Fuel Economy (CAFE) standards a...
Zonal management of arsenic contaminated ground water in Northwestern Bangladesh.
Hill, Jason; Hossain, Faisal; Bagtzoglou, Amvrossios C
2009-09-01
This paper used ordinary kriging to spatially map arsenic contamination in shallow aquifers of Northwestern Bangladesh (total area approximately 35,000 km²). The Northwestern region was selected because it represents a relatively safer source of large-scale and affordable water supply for the rest of Bangladesh, which is currently faced with extensive arsenic contamination in drinking water (such as in the Southern regions). Hence, the work appropriately explored sustainability issues by building upon a previously published study (Hossain et al., 2007; Water Resources Management, vol. 21: 1245-1261) where a more general nation-wide assessment afforded by kriging was identified. The arsenic database for reference comprised the nation-wide survey (of 3534 drinking wells) completed in 1999 by the British Geological Survey (BGS) in collaboration with the Department of Public Health Engineering (DPHE) of Bangladesh. Randomly sampled networks of zones from this reference database were used to develop an empirical variogram and develop maps of zonal arsenic concentration for the Northwestern region. The remaining non-sampled zones from the reference database were used to assess the accuracy of the kriged maps. Two additional criteria were explored: (1) the ability of geostatistical interpolators such as kriging to extrapolate information on the spatial structure of arsenic contamination beyond small-scale exploratory domains; and (2) the impact of a priori knowledge of anisotropic variability on the effectiveness of geostatistically based management. On average, the kriging method was found to have a 90% probability of successful prediction of safe zones according to the WHO safe limit of 10 ppb, while for the Bangladesh safe limit of 50 ppb, the safe zone prediction probability was 97%. Compared to the previous study by Hossain et al. (2007) over the rest of the contaminated country side, the probability of successful detection of safe zones in the Northwest is observed to be about 25
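Ordinary kriging, the interpolator used above, estimates a value as a weighted sum of nearby observations, with weights obtained by solving a variogram-based linear system under the constraint that the weights sum to one. A minimal sketch; the well coordinates, concentrations, and exponential variogram parameters are illustrative assumptions, not the BGS/DPHE data:

```python
import numpy as np

def variogram(h, sill=1.0, vrange=20.0):
    """Exponential semivariogram (sill and range are assumed values)."""
    return sill * (1.0 - np.exp(-3.0 * h / vrange))

def ordinary_krige(xy_obs, z_obs, xy0):
    """Ordinary kriging estimate at xy0 from observations (xy_obs, z_obs)."""
    n = len(z_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=2)
    # Kriging system: semivariogram matrix bordered by the Lagrange
    # row/column that enforces sum(weights) = 1 (unbiasedness).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy_obs - xy0, axis=1))
    w = np.linalg.solve(A, b)[:n]
    return float(w @ z_obs)

xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # wells (km)
z = np.array([5.0, 40.0, 12.0, 60.0])                                # arsenic, ppb (made up)
estimate = ordinary_krige(xy, z, np.array([5.0, 5.0]))
```

Because kriging is an exact interpolator, evaluating at an observed well returns that well's value, and at the symmetric center of this square the weights are equal, so the estimate is the plain mean of the four observations.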
NASA Astrophysics Data System (ADS)
Sidorik, Vadim; Miulgauzen, Daria
2017-04-01
Ecosystems of East Fennoscandia have been affected by intensive anthropogenic influence that has resulted in their significant transformation. Studying ecosystems in the framework of vegetation vertical zonality disturbance, as well as its recovery, makes it possible to understand the trends of anthropogenically induced changes. The aim of the present research is the comparative analysis of the vegetation vertical zonality of two uplands in East Fennoscandia which may be considered as unaffected and affected by anthropogenic impact. The objects of key studies carried out in the north-west of the Kola Peninsula in the vicinity of the Pechenganikel Mining and Metallurgical Plant are represented by ecosystems of the Kalkupya (h 357 m) and Hangaslachdenvara (h 284 m) uplands. They are characterized by a similar sequence of altitudinal belts due to their position on the northern taiga - forest-tundra boundary. Plant communities of the Kalkupya upland have no visible signs of anthropogenic influence; therefore, they can be considered as model ecosystems of the area. The sequence of altitudinal belts is the following: - up to 200 m - pine subshrub and green moss ("zonal") forest replaced by mixed pine and birch forest near the upper boundary; - 200-300 m - birch crooked subshrub wood; - above 300 m - tundra subshrub and lichen communities. Ecosystems of the Hangaslachdenvara upland have been damaged by air pollution (SO2, Ni, Cu emissions) from the Pechenganikel Plant. This impact has led to plant community suppression and the formation of barren lands. In addition, the soil cover, especially the upper horizons, was significantly disturbed. Burying of soil profiles, represented by Podzols (WRB, 2015), also manifested itself in the exploited part of the area. The vegetation cover of the Hangaslachdenvara upland is the following: - up to 130 m - birch and aspen subshrub and grass forest instead of pine forest ("zonal"); - 130-200 m - barren lands instead of pine forest ("zonal"); - above 200 m - barren lands instead of
Low-latitude zonal and vertical ion drifts seen by DE 2
NASA Technical Reports Server (NTRS)
Coley, W. R.; Heelis, R. A.
1989-01-01
Horizontal and vertical ion drift data from the DE 2 spacecraft have been used to determine average zonal and vertical plasma flow (electric field) characteristics in the +/- 26-deg dip latitude region during a time of high solar activity. The 'average data' local time profile for an apex height bin centered at 400 km indicates westward plasma flow from 0600 to 1900 solar local time (SLT), with a maximum westward velocity of 80 m/s in the early afternoon. There is a sharp change to eastward flow at approximately 1900 hours with an early evening peak of 170 m/s. A secondary nighttime maximum exists at 0430 SLT preceding the reversal to westward flow. This profile is in good agreement with Jicamarca, Peru, radar measurements made under similar solar maximum conditions. Harmonic analysis indicates a net superrotation which is strongest at lower apex altitudes. The diurnal term is dominant, but higher order terms through the quarterdiurnal are significant.
An, Ji-Yong; You, Zhu-Hong; Meng, Fan-Rong; Xu, Shu-Juan; Wang, Yin
2016-05-18
Protein-Protein Interactions (PPIs) play essential roles in most cellular processes. Knowledge of PPIs is becoming increasingly more important, which has prompted the development of technologies that are capable of discovering large-scale PPIs. Although many high-throughput biological technologies have been proposed to detect PPIs, there are unavoidable shortcomings, including cost, time intensity, and inherently high false positive and false negative rates. For these reasons, in silico methods are attracting much attention due to their good performance in predicting PPIs. In this paper, we propose a novel computational method known as RVM-AB that combines the Relevance Vector Machine (RVM) model and Average Blocks (AB) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the AB feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We performed five-fold cross-validation experiments on yeast and Helicobacter pylori datasets, and achieved very high accuracies of 92.98% and 95.58% respectively, which is significantly better than previous works. In addition, we also obtained good prediction accuracies of 88.31%, 89.46%, 91.08%, 91.55%, and 94.81% on five other independent datasets: C. elegans, M. musculus, H. sapiens, H. pylori, and E. coli, for cross-species prediction. To further evaluate the proposed method, we compared it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-AB method clearly outperforms the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool. To facilitate extensive studies for future proteomics research, we developed a freely
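The Average Blocks descriptor mentioned above turns a variable-length PSSM into a fixed-length feature vector by block-wise averaging of its rows. A minimal sketch, assuming a block count of four (the paper's exact block layout may differ):

```python
import numpy as np

def average_blocks(pssm, n_blocks=4):
    """Average Blocks: split the L x 20 PSSM into n_blocks row blocks and
    average each block's rows, yielding a length-independent vector of
    size n_blocks * 20 regardless of sequence length L."""
    blocks = np.array_split(pssm, n_blocks, axis=0)
    return np.concatenate([b.mean(axis=0) for b in blocks])

rng = np.random.default_rng(1)
pssm = rng.normal(size=(137, 20))   # stand-in PSSM for a 137-residue protein
feat = average_blocks(pssm)          # fixed 80-dimensional descriptor
```

The fixed output size is what allows proteins of different lengths to feed a single downstream classifier (here, after PCA, the RVM).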
Influence of large-scale zonal flows on the evolution of stellar and planetary magnetic fields
NASA Astrophysics Data System (ADS)
Petitdemange, Ludovic; Schrinner, Martin; Dormy, Emmanuel; ENS Collaboration
2011-10-01
Zonal flows and magnetic fields are present in various objects such as accretion discs, stars and planets. Observations show a huge variety of stellar and planetary magnetic fields. Of particular interest is the understanding of cyclic field variations, as known from the sun. They are often explained by an important Ω-effect, i.e., by the stretching of field lines because of strong differential rotation. We computed the dynamo coefficients for an oscillatory dynamo model with the help of the test-field method. We argue that this model is of α²Ω-type and that here the Ω-effect alone is not responsible for its cyclic time variation. More general conditions which lead to dynamo waves in global direct numerical simulations are presented. Zonal flows driven by convection in planetary interiors may lead to secondary instabilities. We showed that a simple, modified version of the MagnetoRotational Instability, i.e., the MS-MRI, can develop in planetary interiors. The weak shear yields an instability by its constructive interaction with the much larger rotation rate of planets. We present results from 3D simulations and show that 3D MS-MRI modes can generate wave patterns at the surface of the spherical numerical domain.
Global variations of zonal mean ozone during stratospheric warming events
NASA Technical Reports Server (NTRS)
Randel, William J.
1993-01-01
Eight years of Solar Backscatter Ultraviolet (SBUV) ozone data are examined to study zonal mean variations associated with stratospheric planetary wave (warming) events. These fluctuations are found to be nearly global in extent, with relatively large variations in the tropics, and coherent signatures reaching up to 50 deg in the opposite (summer) hemisphere. These ozone variations are a manifestation of the global circulation cells associated with stratospheric warming events; the ozone responds dynamically in the lower stratosphere to transport, and photochemically in the upper stratosphere to the circulation-induced temperature changes. The observed ozone variations in the tropics are of particular interest because transport is dominated by zonal-mean vertical motions (eddy flux divergences and mean meridional transports are negligible), and hence, substantial simplifications to the governing equations occur. The response of the atmosphere to these impulsive circulation changes provides a situation for robust estimates of the ozone-temperature sensitivity in the upper stratosphere.
Diffusion of Zonal Variables Using Node-Centered Diffusion Solver
Yang, T B
2007-08-06
Tom Kaiser [1] has done some preliminary work to use the node-centered diffusion solver (originally developed by T. Palmer [2]) in Kull for diffusion of zonal variables such as electron temperature. To avoid numerical diffusion, Tom used a scheme developed by Shestakov et al. [3] and found their scheme could, in the vicinity of steep gradients, decouple nearest-neighbor zonal sub-meshes, leading to 'alternating-zone' (red-black mode) errors. Tom extended their scheme to couple the sub-meshes with appropriately chosen artificial diffusion and thereby solved the 'alternating-zone' problem. Because the choice of the artificial diffusion coefficient can be very delicate, it is desirable to use a scheme that does not require artificial diffusion but is still able to avoid both numerical diffusion and the 'alternating-zone' problem. In this document we present such a scheme.
Baquero, Oswaldo Santos; Santana, Lidia Maria Reis; Chiaravalloti-Neto, Francisco
2018-01-01
Globally, the number of dengue cases has been on the increase since 1990 and this trend has also been found in Brazil and its most populated city, São Paulo. Surveillance systems based on predictions allow for timely decision making processes, and in turn, timely and efficient interventions to reduce the burden of the disease. We conducted a comparative study of dengue predictions in São Paulo city to test the performance of trained seasonal autoregressive integrated moving average models, generalized additive models and artificial neural networks. We also used a naïve model as a benchmark. A generalized additive model with lags of the number of cases and meteorological variables had the best performance, predicted epidemics of unprecedented magnitude, and its performance was 3.16 times higher than that of the benchmark and 1.47 times higher than that of the next best performing model. The predictive models captured the seasonal patterns but differed in their capacity to anticipate large epidemics, and all outperformed the benchmark. In addition to being able to predict epidemics of unprecedented magnitude, the best model had computational advantages, since its training and tuning were straightforward and required seconds or at most a few minutes. These are desired characteristics to provide timely results for decision makers. However, it should be noted that predictions are made just one month ahead and this is a limitation that future studies could try to reduce.
Zonal flow dynamics and control of turbulent transport in stellarators.
Xanthopoulos, P; Mischchenko, A; Helander, P; Sugama, H; Watanabe, T-H
2011-12-09
The relation between magnetic geometry and the level of ion-temperature-gradient (ITG) driven turbulence in stellarators is explored through gyrokinetic theory and direct linear and nonlinear simulations. It is found that the ITG radial heat flux is sensitive to details of the magnetic configuration that can be understood in terms of the linear behavior of zonal flows. The results throw light on the question of how the optimization of neoclassical confinement is related to the reduction of turbulence.
Wave kinetics of drift-wave turbulence and zonal flows beyond the ray approximation
Zhu, Hongxuan; Zhou, Yao; Ruiz, D. E.
Inhomogeneous drift-wave turbulence can be modeled as an effective plasma where drift waves act as quantumlike particles and the zonal-flow velocity serves as a collective field through which they interact. This effective plasma can be described by a Wigner-Moyal equation (WME), which generalizes the quasilinear wave-kinetic equation (WKE) to the full-wave regime, i.e., resolves the wavelength scale. Unlike waves governed by manifestly quantumlike equations, whose WMEs can be borrowed from quantum mechanics and are commonly known, drift waves have Hamiltonians very different from those of conventional quantum particles. This causes unusual phase-space dynamics that is typically not captured by the WKE. We demonstrate how to correctly model this dynamics with the WME instead. Specifically, we report full-wave phase-space simulations of the zonal-flow formation (zonostrophic instability), deterioration (tertiary instability), and the so-called predator-prey oscillations. We also show how the WME facilitates analysis of these phenomena, namely, (i) we show that full-wave effects critically affect the zonostrophic instability, particularly its nonlinear stage and saturation; (ii) we derive the tertiary-instability growth rate; and (iii) we demonstrate that, with full-wave effects retained, the predator-prey oscillations do not require zonal-flow collisional damping, contrary to previous studies. In conclusion, we also show how the famous Rayleigh-Kuo criterion, which has been missing in wave-kinetic theories of drift-wave turbulence, emerges from the WME.
Saturn Ring Mass and Zonal Gravitational Harmonics Estimate at the End of the Cassini "Grand Finale"
NASA Astrophysics Data System (ADS)
Brozovic, M.; Jacobson, R. A.; Roth, D. C.
2015-12-01
The "Solstice" mission is the 7-year extension of the Cassini-Huygens spacecraft exploration of the Saturn system that will culminate with the "Grand Finale". Beginning in mid-2017, the spacecraft is scheduled to execute 22 orbits that have their periapses between the innermost D-ring and the upper layers of Saturn's atmosphere. These orbits will be perturbed by the gravitational field of Saturn as well as by the rings. We present an analysis of simulated "Grand Finale" radiometric data, and we investigate their sensitivity to the ring mass and higher zonal gravitational harmonics of the planet. We model the data quantity with respect to the available coverage of the tracking stations on Earth, and we account for the times when the spacecraft is occulted either by Saturn or the rings. We also use different data weights to simulate changes in the data quality. The dynamical model of the spacecraft motion includes both gravitational and non-gravitational forces, such as the daily momentum management due to the Reaction Wheel Assembly and radioisotope thermoelectric generator accelerations. We solve the equations of motion and use a weighted least-squares fit to obtain the spacecraft's state vector, mass(es) of the ring or the individual rings, zonal harmonics, and non-gravitational accelerations. We also investigate some a priori values of the A- and B-ring masses from the Tiscareno et al. (2007) and Hedman et al. (2015) analyses. The preliminary results suggest that the "Grand Finale" orbits should remain sensitive to the ring mass even for GM_ring < 2 km^3/s^2 and that they will also provide high accuracy estimates of the zonal harmonics J8, J10, and J12.
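The weighted least-squares estimation step described above can be sketched, in heavily simplified form, as a linear toy problem (the real orbit-determination fit is nonlinear and iterative; all names and values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear problem: A is the design matrix (partials of observables with
# respect to the solved-for parameters), x_true the "true" parameter vector.
A = rng.normal(size=(100, 3))
x_true = np.array([2.0, -1.0, 0.5])
sigma = 0.1                              # assumed per-observation noise level
obs = A @ x_true + rng.normal(0, sigma, 100)

# Data weights = inverse observation variances; solve the normal equations.
W = np.eye(100) / sigma**2
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ obs)
print(np.round(x_hat, 2))
```

With uniform weights this reduces to ordinary least squares; unequal weights are what let the fit reflect different tracking-data qualities, as in the simulated study.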
Trends in the Zonal Winds over the Southern Ocean from the NCEP/NCAR Reanalysis and Scatterometers
NASA Astrophysics Data System (ADS)
Richman, J. G.
2002-12-01
The winds over the Southern Ocean for the entire 54-year (1948-2001) period of the NCEP/NCAR Reanalysis have been decomposed into Principal Components (Empirical Orthogonal Functions). The first EOF describes 83 percent of the variance in the zonal wind. The loading of the EOF shows the predominantly westerly surface flow with strongest winds in the Indian sector of the Southern Ocean. The structure of this EOF is similar to the Southern Annular Mode (SAM) identified by Thompson et al. (2000). The amplitude of this EOF reveals a large trend of 4.42 cm/s/yr in the strength of the zonal wind, corresponding to a nearly 50 percent increase in the wind stress over the Southern Ocean. Such a trend, if real, would be important for the dynamics of the Antarctic Circumpolar Current (ACC). Recent studies by Gille et al. (2001), Olbers and Ivchenko (2001) and Gent et al. (2001) have shown that the transport of the ACC is correlated to the variability in the zonal wind, with a monotonic increase in the transport with increasing zonal wind strength. However, errors in the data assimilation scheme for surface pressure observations on the Antarctic continent appear to have caused a spurious trend in the sea level pressure south of 40S of -0.2 hPa/yr (Hines et al., 2000; Marshall, 2002). The sea level pressure difference between 40S and 60S has risen by 8 hPa over the same period. This sea level pressure difference is used as a proxy for the strength of the zonal winds. Thus, the trend in the zonal wind EOF amplitude may be an artifact of model errors in the NCEP Reanalysis. To check this trend, we analyzed scatterometer winds over the Southern Ocean from the SEASAT, ERS (1 and 2), NSCAT and QuikScat satellites. The scatterometer data are not used in the NCEP Reanalysis and thus provide an independent estimate of the winds. The SEASAT Scatterometer (SASS) operated for 90 days in July-September, 1978, while the ERS, NSCAT and QuikScat scatterometers provide a continuous dataset from
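The EOF (principal component) decomposition used above can be sketched via a singular value decomposition of the anomaly field; the wind field here is synthetic, not the NCEP/NCAR reanalysis:

```python
import numpy as np

rng = np.random.default_rng(1)
ntime, nspace = 200, 50

# Synthetic zonal-wind anomalies: one dominant spatial pattern plus noise.
pattern = np.sin(np.linspace(0, np.pi, nspace))
amplitude = rng.normal(0, 3, ntime)
field = np.outer(amplitude, pattern) + rng.normal(0, 0.5, (ntime, nspace))

anom = field - field.mean(axis=0)            # remove the time mean
u, s, vt = np.linalg.svd(anom, full_matrices=False)
variance_frac = s**2 / np.sum(s**2)          # variance explained by each EOF
eof1 = vt[0]                                 # leading spatial loading pattern
pc1 = u[:, 0] * s[0]                         # its time-varying amplitude
print(round(float(variance_frac[0]), 2))
```

The fraction explained by the leading mode (83 percent in the reanalysis study) falls directly out of the singular values, and a trend analysis would then be applied to pc1.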
Tan, Ting; Chen, Lizhang; Liu, Fuqiang
2014-11-01
To establish a multiple seasonal autoregressive integrated moving average (ARIMA) model of hand-foot-mouth disease incidence in Changsha, and to explore the feasibility of the multiple seasonal ARIMA for predicting hand-foot-mouth disease incidence. EVIEWS 6.0 was used to establish a multiple seasonal ARIMA from the hand-foot-mouth disease incidence from May 2008 to August 2013 in Changsha; the incidence data from September 2013 to February 2014 served as the test sample for the model, and the forecast errors were compared with the observed values. Finally, the incidence of hand-foot-mouth disease from March 2014 to August 2014 was predicted by the model. After the data sequence was processed by stationarity transformation, model identification and model diagnosis, the multiple seasonal ARIMA (1, 0, 1)×(0, 1, 1)12 was established. The R2 value of the model fit was 0.81, the root mean square prediction error was 8.29 and the mean absolute error was 5.83. The multiple seasonal ARIMA is a good prediction model with a good fit, and it can provide a reference for prevention and control work on hand-foot-mouth disease.
NASA Astrophysics Data System (ADS)
Rüfenacht, R.; Kämpfer, N.; Murk, A.
2012-12-01
Today, the wind data for the upper stratosphere and lower mesosphere are commonly extrapolated using models or calculated from measurements of the temperature field, but are not measured directly. Still, such measurements would allow direct observations of dynamic processes and thus provide a better understanding of the circulation in this altitude region where the zonal wind speed reaches a maximum. Observations of middle-atmospheric winds are also expected to provide deeper insight into the coupling between the upper and the lower atmosphere, especially in the case of sudden stratospheric warming events. Furthermore, as the local chemical composition of the middle atmosphere can be measured with high accuracy, wind data could be beneficial for the interpretation of the associated transport processes. In the future, middle-atmospheric wind measurements could help to improve atmospheric circulation models. Aiming to contribute to the closing of this data gap, the Institute of Applied Physics of the University of Bern built a new ground-based 142 GHz Doppler-spectro-radiometer with the acronym WIRA (WInd RAdiometer), specifically designed for the measurement of middle-atmospheric wind. Currently, wind speeds at five levels between 30 and 79 km can be retrieved, making WIRA the first instrument to continuously measure profiles of horizontal wind in this altitude range. At the altitude levels where our measurement can be compared to ECMWF, very good agreement has been found in the long-term statistics, with WIRA = (0.98±0.02) × ECMWF + (0.44±0.91) m/s on average, as well as in short time structures with a duration of a few days. WIRA uses a passive double sideband heterodyne receiver together with a digital Fourier transform spectrometer for the data acquisition. A big advantage of the radiometric approach is that such instruments can also operate under adverse weather conditions and thus provide a continuous time series for the given location. The optics enables the
NASA Astrophysics Data System (ADS)
Rüfenacht, Rolf; Kämpfer, Niklaus; Murk, Axel
2013-04-01
Today, the wind data for the upper stratosphere and lower mesosphere are commonly extrapolated using models or calculated from measurements of the temperature field, but are not measured directly. Still, such measurements would allow direct observations of dynamic processes and thus provide a better understanding of the circulation in this altitude region where the zonal wind speed reaches a maximum. Observations of middle-atmospheric winds are also expected to provide deeper insight into the coupling between the upper and the lower atmosphere, especially in the case of sudden stratospheric warming events. Furthermore, as the local chemical composition of the middle atmosphere can be measured with high accuracy, wind data could be beneficial for the interpretation of the associated transport processes. In the future, middle-atmospheric wind measurements could help to improve atmospheric circulation models. Aiming to contribute to the closing of this data gap, the Institute of Applied Physics of the University of Bern built a new ground-based 142 GHz Doppler-spectro-radiometer with the acronym WIRA (WInd RAdiometer), specifically designed for the measurement of middle-atmospheric wind. So far, wind speeds at five levels between 30 and 79 km can be retrieved, making WIRA the first instrument to continuously measure profiles of horizontal wind in this altitude range. At the altitude levels where our measurement can be compared to ECMWF, very good agreement has been found in the long-term statistics, with WIRA = (0.98±0.02) × ECMWF + (0.44±0.91) m/s on average, as well as in short time structures with a duration of a few days. WIRA uses a passive heterodyne receiver together with a digital Fourier transform spectrometer for the data acquisition. A big advantage of the radiometric approach is that such instruments can also operate under adverse weather conditions and thus provide a continuous time series for the given location. The optics enables the instrument to scan a
Another look at zonal flows: Resonance, shearing, and frictionless saturation
NASA Astrophysics Data System (ADS)
Li, J. C.; Diamond, P. H.
2018-04-01
We show that shear is not the exclusive parameter that represents all aspects of flow structure effects on turbulence. Rather, wave-flow resonance enters turbulence regulation, both linearly and nonlinearly. Resonance suppresses the linear instability by wave absorption. Flow shear can weaken the resonance, and thus destabilize drift waves, in contrast to the near-universal conventional shear suppression paradigm. Furthermore, consideration of wave-flow resonance resolves the long-standing problem of how zonal flows (ZFs) saturate in the limit of weak or zero frictional drag, and also determines the ZF scale. We show that resonant vorticity mixing, which conserves potential enstrophy, enables ZF saturation in the absence of drag, and so is effective at regulating the Dimits up-shift regime. Vorticity mixing is incorporated as a nonlinear, self-regulation effect in an extended 0D predator-prey model of drift-ZF turbulence. This analysis determines the saturated ZF shear and shows that the mesoscopic ZF width scales as L_ZF ~ f^(3/16) (1-f)^(1/8) ρ_s^(5/8) l_0^(3/8) in the (relevant) adiabatic limit (i.e., τ_ck k_∥^2 D_∥ ≫ 1). Here f is the fraction of turbulence energy coupled to the ZF and l_0 is the base state mixing length, absent ZF shears. We calculate and compare the stationary flow and turbulence level in frictionless, weakly frictional, and strongly frictional regimes. In the frictionless limit, the results differ significantly from conventionally quoted scalings derived for frictional regimes. To leading order, the flow is independent of turbulence intensity. The turbulence level scales as E ~ (γ_L/ε_c)^2, which indicates the extent of the "near-marginal" regime to be γ_L < ε_c, for the case of avalanche-induced profile variability. Here, ε_c is the rate of dissipation of potential enstrophy and γ_L is the characteristic linear growth rate of fluctuations. The implications for dynamics near marginality of the strong scaling of saturated E with γ_L are discussed.
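The 0D predator-prey framework referred to above is, in its standard (unextended) form, a pair of coupled ODEs for turbulence energy E and zonal-flow energy U. Below is a minimal forward-Euler sketch with illustrative parameters; the paper's extension adds a resonant vorticity-mixing term that is not modeled here:

```python
import numpy as np

# Standard drift-wave / zonal-flow predator-prey model:
#   dE/dt = g*E - a*E*U - d*E**2     (linear growth, ZF shearing, self-saturation)
#   dU/dt = a*E*U - mu*U             (ZF drive by turbulence, frictional drag mu)
g, a, d, mu = 1.0, 1.0, 0.1, 0.2     # illustrative parameter values

E, U = 0.01, 0.01                    # small seed amplitudes
dt, nsteps = 0.01, 5000
for _ in range(nsteps):
    dE = g * E - a * E * U - d * E**2
    dU = a * E * U - mu * U
    E, U = E + dt * dE, U + dt * dU

# Nontrivial fixed point of this system: E* = mu/a, U* = (g - d*mu/a)/a,
# i.e. the saturated turbulence level is set by the drag mu -- which is
# exactly why the frictionless (mu -> 0) limit requires the extra physics
# (vorticity mixing) discussed in the abstract.
print(E > 0 and U > 0)
```

Setting mu = 0 in this standard model drives E toward zero, illustrating the "frictionless saturation" problem the paper resolves.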
This paper presents a depth-averaged two-dimensional shallow water model for simulating long waves in vegetated water bodies under breaking and non-breaking conditions. The effects of rigid vegetation are modelled in the form of drag and inertia forces as sink terms in the momentum equations. The dr...
NASA Astrophysics Data System (ADS)
Berdichevsky, D. B.; Lepping, R. P.; Wu, C. C.
2016-12-01
We examine the average magnetic field magnitude (|B|) within magnetic clouds (MCs) observed over the period of 1995 to July of 2015, to understand the difference between this field magnitude and the ideal (field magnitude) |B|-profiles expected from using a static, constant-α, force-free, cylindrically symmetric model for MCs (Lepping et al. 1990, denoted as the LJB model here) in general. We classify all MCs according to an objectively assigned quality, Qo (=1,2,3, for excellent, good, and poor). There are a total of 209 MCs and 124 if only Qo=1,2 cases are considered. Average normalized field with respect to closest approach (CA) is stressed where we separate cases into four CA sectors centered at 12.5%, 37.5%, 62.5%, and 87.5% of the average radius; the averaging is done on a percent-duration basis to put all cases on the same footing. By normalized field we mean that, before averaging, the |B| for each MC at each point is divided by the field magnitude estimated for the MC's axis (Bo) as determined by the LJB model. The actual averages for the 209 and 124 MC sets are compared separately to the LJB model, after an adjustment for MC expansion, which is estimated from long-term average conditions of MCs at 1 AU using a typical speed difference of 40 km/s across the average MC. The comparison is a direct difference (average observations - model) vs. time for the four sets separately. These four difference-relationships are fitted with four quadratic curves, which have very small sigmas for the fits. Interpretation of these relationships (called Quad formulae) should provide a comprehensive view of the variation of the normalized field-magnitude throughout the average MC where we expect both front and rear compression (due to solar wind interaction) to be part of its explanation. These formulae are also being considered for modifying the LJB model. This modification is expected to be used for assistance in a scheme used for forecasting the timing and
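Fitting a quadratic "Quad formula" to a difference-vs-time relationship, as done per closest-approach sector, can be sketched with numpy's polyfit (the data below are synthetic; the actual coefficients come from the averaged magnetic-cloud observations):

```python
import numpy as np

rng = np.random.default_rng(2)

# Percent-duration through the magnetic cloud, mapped to [0, 1].
t = np.linspace(0, 1, 50)
# Synthetic (observation - model) field difference with a quadratic shape,
# mimicking front/rear compression, plus small scatter.
diff = 0.3 - 1.2 * t + 1.1 * t**2 + rng.normal(0, 0.01, 50)

# Quadratic fit; np.polyfit returns coefficients highest degree first.
c2, c1, c0 = np.polyfit(t, diff, 2)
residual_sigma = float(np.std(diff - np.polyval([c2, c1, c0], t)))
print(round(c2, 1), residual_sigma < 0.05)
```

One such fit per closest-approach sector yields the four Quad formulae, and the small residual sigma corresponds to the "very small sigmas for the fits" noted in the abstract.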
Zonal wind indices to reconstruct United States winter precipitation during El Niño
NASA Astrophysics Data System (ADS)
Farnham, D. J.; Steinschneider, S.; Lall, U.
2017-12-01
The highly discussed 2015/16 El Niño event, which many likened to the similarly strong 1997/98 El Niño event, led to precipitation impacts over the continental United States (CONUS) inconsistent with general expectations based on past events and model-based forecasts. This presents a challenge for regional water managers and other users of seasonal precipitation forecasts, who previously viewed El Niño events as times of enhanced confidence in seasonal water availability and flood risk forecasts. It is therefore useful to understand the extent to which wintertime CONUS precipitation during El Niño events can be explained by seasonal sea surface temperature heating patterns, and the extent to which it is a product of natural variability. In this work, we define two seasonal indices based on the zonal wind field spanning from the eastern Pacific to the western Atlantic over CONUS that can explain the spatial variation of El Niño precipitation throughout CONUS over 11 historic El Niño events from 1950 to 2016. The indices reconstruct El Niño event wintertime (Jan-Mar) gridded precipitation over CONUS through cross-validated regression much better than the traditional ENSO sea surface temperature indices or other known modes of variability. Lastly, we show strong relationships between sea surface temperature patterns and the phases of the zonal wind indices, which in turn suggests that some of the disparate CONUS precipitation during El Niño events can be explained by different heating patterns. The primary contribution of this work is the identification of intermediate variables (in the form of zonal wind indices) that can facilitate further studies into the distinct hydroclimatic responses to specific El Niño events.
Unsteady Airfoil Flow Solutions on Moving Zonal Grids
1992-12-17
for the angle-of-attack of 15.5°, the comparisons diverge. This happens because of the different turbulence models used. At this angle-of-attack, the ... downstream in the wake. This vortex shedding phenomenon alters the chordwise pressure distribution on the upper surface of the airfoil, resulting in higher ... interest, turbulence modeling is used. Turbulence models are implemented with the time-averaged forms of the Navier-Stokes equations. Two widely
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2018-02-01
The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shaped statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.
NASA Astrophysics Data System (ADS)
Balch, W. M.; Poulton, A. J.; Drapeau, D. T.; Bowler, B. C.; Windecker, L. A.; Booth, E. S.
2011-03-01
Primary production (P_prim) and calcification (C_calc) were measured in the eastern and central Equatorial Pacific during December 2004 and September 2005, between 110°W and 140°W. The design of the field sampling allowed partitioning of P_prim and total chlorophyll a (B) between large (>3 μm) and small (0.45-3 μm) phytoplankton cells. The station locations allowed discrimination of meridional and zonal patterns. The cruises coincided with a warm El Niño Southern Oscillation (ENSO) phase and an ENSO-neutral phase, respectively, which proved to be the major factors relating to the patterns of productivity. Production and biomass of large phytoplankton generally covaried with that of small cells; large cells typically accounted for 20-30% of B and 20% of P_prim. Elevated biomass and primary production of all size fractions were highest along the equator as well as at the convergence zone between the North Equatorial Counter Current and the South Equatorial Current. C_calc by >0.4 μm cells was 2-3% of P_prim by the same size fraction, for both cruises. Biomass-normalized P_prim values were, on average, slightly higher during the warm-phase ENSO period, inconsistent with a "bottom-up" control mechanism (such as nutrient supply). Another source of variability along the equator was Tropical Instability Waves (TIWs). Zonal variance in integrated phytoplankton biomass (along the equator, between 110° and 140°) was almost the same as the meridional variance across it (between 4°N and 4°S). However, the zonal variance in integrated P_prim was half the variance observed meridionally. The variance in integrated C_calc along the equator was half that seen meridionally during the warm ENSO phase cruise, whereas during the ENSO-neutral period, it was identical. No relation could be observed between the patterns of integrated carbon fixation (P_prim or C_calc) and integrated nutrients (nitrate, ammonium, silicate or dissolved iron). This suggests that the factors
NASA Astrophysics Data System (ADS)
Dutta, S.; Tassi, P.; Fischer, P.; Wang, D.; Garcia, M. H.
2016-12-01
Diversions are a subset of asymmetric bifurcations, where one of the channels after the bifurcation continues along the direction of the original channel, often referred to as the main channel. Diversions are not only built for river-engineering purposes, e.g. navigational canals or channels to divert water and sediment to rebuild deltas; they can also form naturally, e.g. as chute cutoffs. Correct prediction of the hydrodynamics and sediment transport at a diversion is therefore essential. One of the first extensive studies on diversions was conducted by Bulle [1926], where it was found that, compared to the discharge of water, a disproportionately higher amount of bed-load sediment entered the lateral channel at the diversion. Hence, this phenomenon is known as the Bulle effect. Recent studies have used high-resolution Large Eddy Simulation (LES) [Dutta et al., 2016a] and Reynolds-Averaged Navier-Stokes (RANS) based three-dimensional hydrodynamics models [Dutta et al., 2016b] to unravel the mechanism behind this non-linear phenomenon. Such studies have shown that the Bulle effect is caused by a stark difference between the flow structure near the bottom of the channel and near the top of the channel. These findings hint at the possible failure of 2D shallow-water-based numerical models to correctly simulate the hydrodynamics and sediment transport at a diversion. The current study analyzes the hydrodynamics and sediment transport at a 90-degree diversion across five models of increasing complexity, from a 2D depth-averaged hydrodynamics model to a high-resolution LES. This comparative study will provide a clear indication of the minimum complexity a model must incorporate in order to capture the Bulle effect reasonably well. Bulle (1926), Untersuchungen über die Geschiebeableitung bei der Spaltung von Wasserläufen, Technical Report, V.D.I. Verlag, Berlin, Germany. Dutta et al. (2016), Large Eddy Simulation (LES) of flow and
Direct flux measurements of NH3 are expensive, time consuming, and require detailed supporting measurements of soil, vegetation, and atmospheric chemistry for interpretation and model parameterization. It is therefore often necessary to infer fluxes by combining measurements of...
Chakravorty, Arghya; Jia, Zhe; Li, Lin; Zhao, Shan; Alexov, Emil
2018-02-13
Typically, the ensemble-average polar component of the solvation energy (ΔG_solv^polar) of a macromolecule is computed by using molecular dynamics (MD) or Monte Carlo (MC) simulations to generate a conformational ensemble and then performing a single/rigid-conformation solvation energy calculation on each snapshot. The primary objective of this work is to demonstrate that a Poisson-Boltzmann (PB)-based approach using a Gaussian-based smooth dielectric function for macromolecular modeling, previously developed by us (Li et al. J. Chem. Theory Comput. 2013, 9 (4), 2126-2136), can reproduce the ensemble average ⟨ΔG_solv^polar⟩ of a protein from a single structure. We show that the Gaussian-based dielectric model reproduces ⟨ΔG_solv^polar⟩ from an energy-minimized structure of a protein regardless of the minimization environment (structure minimized in vacuo, in implicit or explicit waters, or the crystal structure); the best case, however, is when it is paired with an in vacuo-minimized structure. In the other minimization environments (implicit or explicit waters, or crystal structure), the traditional two-dielectric model can still be used to produce correct solvation energies. Our observations from this work reflect how the ability to appropriately mimic the motion of residues, especially salt-bridge residues, influences a dielectric model's ability to reproduce the ensemble-average polar solvation free energy from a single in vacuo-minimized structure.
Sabourin, Jeremy; Nobel, Andrew B.; Valdar, William
2014-01-01
Genomewide association studies sometimes identify loci at which both the number and identities of the underlying causal variants are ambiguous. In such cases, statistical methods that model effects of multiple SNPs simultaneously can help disentangle the observed patterns of association and provide information about how those SNPs could be prioritized for follow-up studies. Current multi-SNP methods, however, tend to assume that SNP effects are well captured by additive genetics; yet when genetic dominance is present, this assumption translates to reduced power and faulty prioritizations. We describe a statistical procedure for prioritizing SNPs at GWAS loci that efficiently models both additive and dominance effects. Our method, LLARRMA-dawg, combines a group LASSO procedure for sparse modeling of multiple SNP effects with a resampling procedure based on fractional observation weights; it estimates for each SNP the robustness of association with the phenotype both to sampling variation and to competing explanations from other SNPs. In producing a SNP prioritization that best identifies underlying true signals, we show that: our method easily outperforms a single marker analysis; when additive-only signals are present, our joint model for additive and dominance is equivalent to or only slightly less powerful than modeling additive-only effects; and, when dominance signals are present, even in combination with substantial additive effects, our joint model is unequivocally more powerful than a model assuming additivity. We also describe how performance can be improved through calibrated randomized penalization, and discuss how dominance in ungenotyped SNPs can be incorporated through either heterozygote dosage or multiple imputation. PMID:25417853
Temporal Variability and Latitudinal Jets in Venus's Zonal Wind Profiles
NASA Astrophysics Data System (ADS)
Young, Eliot F.; Bullock, M. A.; Tavenner, T.; Coyote, S.; Murphy, J. R.
2008-09-01
We have observed Venus's night hemisphere from NASA's IRTF (Infrared Telescope Facility) during each inferior conjunction since 2001 to quantify the motion of features in Venus's lower and middle cloud decks. We now present latitudinal profiles from 11 nights, obtained in May and July 2004, February 2006 and September 2007. On about 7 of the 11 nights there are zonal jets near 45N and/or 50S, with speed differentials of 5 to 15 m/s relative to the adjacent equatorward latitude bands. These jets may be evidence of episodic Hadley cell-type circulation. About half of the nights show relatively constant velocity profiles between the latitudes of 50N and 50S, suggesting that considerable mixing is taking place between latitudes. Our most remarkable result is the temporal variability in the median zonal speeds from day to day. For example, the median velocity near the equator increased from 53 to 65 m/s over the period July 11-13, 2004, and from 65 to 82 m/s over the period September 9-11, 2007. These velocity changes are too great to be due to the tracking of clouds in the middle vs. lower cloud deck, nor can they be caused by clouds that occupy different altitudes; a velocity variation of 25% corresponds to an altitude difference of 15 km, based on vertical profiles of zonal windspeeds from tracking of Pioneer Venus and Venera descent probes. Fifteen km is greater than the expected variation in either cloud base. VIRTIS observations of Venus's southern hemisphere were also obtained in September 2007 and should be able to corroborate or contradict the observed variations. This work was supported by NASA's Planetary Astronomy and Atmospheres programs.
Magnetic flux concentration and zonal flows in magnetorotational instability turbulence
Bai, Xue-Ning; Stone, James M., E-mail: xbai@cfa.harvard.edu
2014-11-20
Accretion disks are likely threaded by external vertical magnetic flux, which enhances the level of turbulence via the magnetorotational instability (MRI). Using shearing-box simulations, we find that such external magnetic flux also strongly enhances the amplitude of banded radial density variations known as zonal flows. Moreover, we report that vertical magnetic flux is strongly concentrated toward low-density regions of the zonal flow. Mean vertical magnetic field can be more than doubled in low-density regions, and reduced to nearly zero in high-density regions in some cases. In ideal MHD, the scale on which magnetic flux concentrates can reach a few disk scale heights. In the non-ideal MHD regime with strong ambipolar diffusion, magnetic flux is concentrated into thin axisymmetric shells at some enhanced level, whose size is typically less than half a scale height. We show that magnetic flux concentration is closely related to the fact that the turbulent diffusivity of the MRI turbulence is anisotropic. In addition to a conventional Ohmic-like turbulent resistivity, we find that there is a correlation between the vertical velocity and horizontal magnetic field fluctuations that produces a mean electric field that acts to anti-diffuse the vertical magnetic flux. The anisotropic turbulent diffusivity has analogies to the Hall effect, and may have important implications for magnetic flux transport in accretion disks. The physical origin of magnetic flux concentration may be related to the development of channel flows followed by magnetic reconnection, which acts to decrease the mass-to-flux ratio in localized regions. The association of enhanced zonal flows with magnetic flux concentration may lead to global pressure bumps in protoplanetary disks that helps trap dust particles and facilitates planet formation.
Zonal flow generation in inertial confinement fusion implosions
Peterson, J. L.; Humbird, K. D.; Field, J. E.; ...
2017-03-06
A supervised machine learning algorithm trained on a multi-petabyte dataset of inertial confinement fusion simulations has identified a class of implosions that robustly achieve high yield, even in the presence of drive variations and hydrodynamic perturbations. These implosions are purposefully driven with a time-varying asymmetry, such that coherent flow generation during hotspot stagnation forces the capsule to self-organize into an ovoid, a shape that appears to be more resilient to shell perturbations than spherical designs. Here this new class of implosions, whose configurations are reminiscent of zonal flows in magnetic fusion devices, may offer a path to robust inertial fusion.
On the long-term variability of Jupiter and Saturn zonal winds
NASA Astrophysics Data System (ADS)
Sanchez-Lavega, A.; Garcia-Melendo, E.; Hueso, R.; Barrado-Izagirre, N.; Legarreta, J.; Rojas, J. F.
2012-12-01
We present an analysis of the long-term variability of Jupiter and Saturn zonal wind profiles at their upper cloud level as retrieved from cloud motion tracking on images obtained at ground-based observatories and with different spacecraft missions since 1979, encompassing about three Jovian years and one Saturn year. We study the sensitivity and variability of the zonal wind profile in both planets to major planetary-scale disturbances and to seasonal forcing. We finally discuss the implications that these results have for current model efforts to explain the global tropospheric circulation in these planets. Acknowledgements: This work has been funded by Spanish MICIIN AYA2009-10701 with FEDER support, Grupos Gobierno Vasco IT-464-07 and UPV/EHU UFI11/55.
NASA Technical Reports Server (NTRS)
Huang, Frank T.; Mayr, Hans; Russell, James; Mlynczak, Marty; Reber, Carl A.
2005-01-01
In the Numerical Spectral Model (NSM, Mayr et al., 2003), small-scale gravity waves propagating in the north/south direction can generate zonal mean (m = 0) meridional wind oscillations with periods between 2 and 4 months. These oscillations tend to be confined to low latitudes and have been interpreted to be the meridional counterpart of the wave-driven Quasi-Biennial Oscillation in the zonal circulation. Wave-driven meridional winds across the equator should generate, due to dynamical heating and cooling, temperature oscillations with opposite phase in the two hemispheres. We have analyzed SABER temperature measurements in the altitude range between 55 and 95 km to investigate the existence of such variations. Because there are also strong tidal signatures (up to approximately 20 K) in the data, our algorithm estimates both mean values and tides together from the data. Based on SABER temperature data, the intra-annual variations with periods between 2 and 4 months can have amplitudes up to 5 K or more, depending on the altitude. Their amplitudes are in qualitative agreement with those inferred from UARS data (from different years). The SABER temperature variations also reveal pronounced hemispheric asymmetries, which are qualitatively consistent with wave-driven meridional wind oscillations across the equator. Oscillations with similar periods have been seen in the meridional winds based on UARS data (Huang and Reber, 2003).
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-15
... ENVIRONMENTAL PROTECTION AGENCY 40 CFR Parts 85, 86, and 600 DEPARTMENT OF TRANSPORTATION National Highway Traffic Safety Administration 49 CFR Parts 523, 531, 533, 536, and 537 [EPA-HQ-OAR-2010-0799; FRL-9706-5; NHTSA-2010-0131] RIN 2060-AQ54; RIN 2127-AK79 2017 and Later Model Year Light-Duty Vehicle Greenhouse Gas Emissions and Corporate...
Erbe, Malena; Gredler, Birgit; Seefried, Franz Reinhold; Bapst, Beat; Simianer, Henner
2013-01-01
Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design which are based on training set size, reliability of phenotypes, and the number of independent chromosome segments ([Formula: see text]). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5'698 Holstein Friesian bulls genotyped with 50 K SNPs and 1'332 Brown Swiss bulls genotyped with 50 K SNPs and imputed to ∼600 K SNPs were available. Different k-fold (k = 2-10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on the assumption that the maximum achievable accuracy is [Formula: see text]. The proportion of genetic variance captured by the complete SNP sets ([Formula: see text]) was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific and was found to be reached with ∼20'000 SNPs in the Brown Swiss population studied.
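The deterministic prediction equation discussed above can be sketched in a few lines. The functional form below is a commonly cited Daetwyler-style expression with the study's multiplicative weighting factor w attached; the exact modified form fitted in the paper is not reproduced here, and all parameter values are illustrative assumptions, not the paper's estimates.

```python
import math

# Hedged sketch of a Daetwyler et al. (2010)-style accuracy equation,
# augmented by a weighting factor w (illustrative, not the fitted form).
def expected_accuracy(n_train, h2, m_e, w=1.0):
    """n_train: training set size; h2: reliability of phenotypes;
    m_e: number of independent chromosome segments; w: weighting factor."""
    return w * math.sqrt(n_train * h2 / (n_train * h2 + m_e))

r = expected_accuracy(n_train=5698, h2=0.9, m_e=1000.0, w=0.9)
# Accuracy grows with training set size and saturates near w.
```

In this form, w plays the role of a ceiling on achievable accuracy, which is how the study motivates its marker-density-dependent weighting.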
ERIC Educational Resources Information Center
Malloch, Douglas C.; Michael, William B.
1981-01-01
This study was designed to determine whether an unweighted linear combination of community college students' scores on standardized achievement tests and a measure of motivational constructs derived from Vroom's expectance theory model of motivation was predictive of academic success (grade point average earned during one quarter of an academic…
Bobb, Jennifer F; Dominici, Francesca; Peng, Roger D
2011-12-01
Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. © 2011, The International Biometric Society.
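A minimal sketch of the model-averaging step itself, assuming equal prior model probabilities and BIC approximations to the marginal likelihoods (the BIC values below are made up; the paper's actual BMA is over a flexible class of time series models):

```python
import math

# Posterior model weights from BIC approximations (equal model priors).
def bma_weights(bics):
    best = min(bics)                        # subtract for numerical stability
    w = [math.exp(-0.5 * (b - best)) for b in bics]
    total = sum(w)
    return [wi / total for wi in w]

weights = bma_weights([1002.3, 1000.1, 1005.8])
# A model-averaged risk estimate is then sum_m weights[m] * risk_m,
# rather than the risk from a single selected model.
```

This is why the posterior variance under BMA exceeds that of inference conditional on one selected model: the weights spread probability across competing models.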
NASA Technical Reports Server (NTRS)
McGuire, Tim
1998-01-01
In this paper, we report the results of our recent research on the application of a multiprocessor Cray T916 supercomputer in modeling super-thermal electron transport in the earth's magnetic field. In general, this mathematical model requires numerical solution of a system of partial differential equations. The code we use for this model is moderately vectorized. By using Amdahl's Law for vector processors, it can be verified that the code is about 60% vectorized on a Cray computer. Speedup factors on the order of 2.5 were obtained compared to the unvectorized code. In the following sections, we discuss the methodology of improving the code. In addition to our goal of optimizing the code for solution on the Cray computer, we had the goal of scalability in mind. Scalability combines the concepts of portability with near-linear speedup. Specifically, a scalable program is one whose performance is portable across many different architectures with differing numbers of processors for many different problem sizes. Though we have access to a Cray at this time, the goal was to also have code which would run well on a variety of architectures.
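The 60% vectorization figure and the reported speedup of about 2.5 are consistent with Amdahl's Law, which can be checked directly:

```python
def amdahl_speedup(f_vector, s_vector):
    """Overall speedup when a fraction f_vector of the runtime
    is accelerated by a factor s_vector (Amdahl's Law)."""
    return 1.0 / ((1.0 - f_vector) + f_vector / s_vector)

# With 60% of the code vectorized, the speedup is capped at
# 1 / (1 - 0.6) = 2.5 no matter how fast the vector units are,
# matching the factor of ~2.5 reported for the Cray run.
limit = amdahl_speedup(0.60, 1e12)
```

The same formula explains the scalability goal: increasing the accelerated fraction matters more than accelerating the already-fast part further.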
A model of stratospheric chemistry and transport on an isentropic surface
NASA Technical Reports Server (NTRS)
Austin, John; Holton, James R.
1990-01-01
This paper presents a new photochemical transport model designed to simulate the behavior of stratospheric trace species in the middle stratosphere. The model has an Eulerian grid with the latitude and longitude coordinates on a single isentropic surface (hemispheric or global), in which both the dynamical and the photochemical processes can be accurately represented. The model is integrated for 12 days with winds and temperatures supplied by three-dimensional integration of an idealized wavenumber-one disturbance. The results for long-lived tracers such as N2O showed excellent correlation with the potential vorticity distribution, validating the transport scheme. Calculations with zonally averaged wind and temperature fields showed that discrepancies in the calculation of the zonal mean were less than 10 percent for O3 and HNO3, compared with the zonal mean of the previous results.
Forest response to rising CO2 drives zonally asymmetric rainfall change over tropical land
NASA Astrophysics Data System (ADS)
Kooperman, Gabriel J.; Chen, Yang; Hoffman, Forrest M.; Koven, Charles D.; Lindsay, Keith; Pritchard, Michael S.; Swann, Abigail L. S.; Randerson, James T.
2018-05-01
Understanding how anthropogenic CO2 emissions will influence future precipitation is critical for sustainably managing ecosystems, particularly for drought-sensitive tropical forests. Although tropical precipitation change remains uncertain, nearly all models from the Coupled Model Intercomparison Project Phase 5 predict a strengthening zonal precipitation asymmetry by 2100, with relative increases over Asian and African tropical forests and decreases over South American forests. Here we show that the plant physiological response to increasing CO2 is a primary mechanism responsible for this pattern. Applying a simulation design in the Community Earth System Model in which CO2 increases are isolated over individual continents, we demonstrate that different circulation, moisture and stability changes arise over each continent due to declines in stomatal conductance and transpiration. The sum of local atmospheric responses over individual continents explains the pan-tropical precipitation asymmetry. Our analysis suggests that South American forests may be more vulnerable to rising CO2 than Asian or African forests.
NASA Astrophysics Data System (ADS)
Magic, Z.; Collet, R.; Hayek, W.; Asplund, M.
2013-12-01
Aims: We study the implications of averaging methods with different reference depth scales for 3D hydrodynamical model atmospheres computed with the Stagger-code. The temporally and spatially averaged (hereafter denoted as ⟨3D⟩) models are explored in the light of local thermodynamic equilibrium (LTE) spectral line formation by comparing spectrum calculations using full 3D atmosphere structures with those from ⟨3D⟩ averages. Methods: We explored methods for computing mean ⟨3D⟩ stratifications from the Stagger-grid time-dependent 3D radiative hydrodynamical atmosphere models by considering four different reference depth scales (geometrical depth, column-mass density, and two optical depth scales). Furthermore, we investigated the influence of alternative averages (logarithmic, enforced hydrostatic equilibrium, flux-weighted temperatures). For the line formation we computed curves of growth for Fe i and Fe ii lines in LTE. Results: The resulting ⟨3D⟩ stratifications for the four reference depth scales can be very different. We typically find that in the upper atmosphere and in the superadiabatic region just below the optical surface, where the temperature and density fluctuations are highest, the differences become considerable and increase for higher Teff, lower log g, and lower [Fe / H]. The differential comparison of spectral line formation shows distinctive differences depending on which ⟨3D⟩ model is applied. The averages over layers of constant column-mass density yield the best mean ⟨3D⟩ representation of the full 3D models for LTE line formation, while the averages on layers at constant geometrical height are the least appropriate. Unexpectedly, the usually preferred averages over layers of constant optical depth are prone to increasing interference by reversed granulation towards higher effective temperature, in particular at low metallicity. Appendix A is available in electronic form at http://www.aanda.org
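A minimal sketch of the averaging procedure on one of the reference depth scales compared above, here constant column-mass density: each column is interpolated onto a common column-mass scale and then averaged horizontally. The data cube below is synthetic; the actual Stagger-grid models are far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)
nz, ny, nx = 64, 16, 16
# Column mass increases monotonically with depth in every column,
# with some horizontal scatter (synthetic stand-in for a 3D snapshot).
colmass = np.cumsum(np.exp(0.1 * rng.normal(size=(nz, ny, nx))), axis=0)
temp = 4000.0 + 50.0 * colmass + rng.normal(size=(nz, ny, nx))

ref = np.geomspace(colmass.min(), colmass.max(), 40)  # reference depth scale
profiles = np.empty((ref.size, ny, nx))
for j in range(ny):
    for i in range(nx):
        # Interpolate each column onto the common column-mass scale,
        # then average horizontally to obtain the <3D> stratification.
        profiles[:, j, i] = np.interp(ref, colmass[:, j, i], temp[:, j, i])
mean_3d = profiles.mean(axis=(1, 2))
```

Swapping `colmass` for geometrical depth or optical depth changes the reference scale and, as the abstract stresses, can change the resulting ⟨3D⟩ stratification considerably.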
A zonal wavefront sensor with multiple detector planes
NASA Astrophysics Data System (ADS)
Pathak, Biswajit; Boruah, Bosanta R.
2018-03-01
A conventional zonal wavefront sensor estimates the wavefront from the data captured in a single detector plane using a single camera. In this paper, we introduce a zonal wavefront sensor which comprises multiple detector planes instead of a single detector plane. The proposed sensor is based on an array of custom designed plane diffraction gratings followed by a single focusing lens. The laser beam whose wavefront is to be estimated is incident on the grating array and one of the diffracted orders from each grating is focused on the detector plane. The setup, by employing a beam splitter arrangement, facilitates focusing of the diffracted beams on multiple detector planes where multiple cameras can be placed. The use of multiple cameras in the sensor can offer several advantages in the wavefront estimation. For instance, the proposed sensor can provide superior inherent centroid detection accuracy that can not be achieved by the conventional system. It can also provide enhanced dynamic range and reduced crosstalk performance. We present here the results from a proof of principle experimental arrangement that demonstrate the advantages of the proposed wavefront sensing scheme.
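The centroid detection step whose accuracy the multi-plane design is meant to improve reduces, for each focal spot, to a center-of-mass computation; the local wavefront slope is proportional to the centroid shift from a reference position. A minimal sketch on a synthetic Gaussian spot (all sizes and positions are illustrative):

```python
import numpy as np

def centroid(img):
    """Intensity-weighted center of mass (row, column) of a spot image."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

# Synthetic 32x32 focal spot centered at (row, col) = (12.7, 18.4).
y, x = np.mgrid[0:32, 0:32]
spot = np.exp(-((x - 18.4) ** 2 + (y - 12.7) ** 2) / (2 * 3.0 ** 2))
cy, cx = centroid(spot)  # recovers approximately (12.7, 18.4)
```

In a real sensor, noise, crosstalk from neighboring spots, and finite dynamic range degrade this estimate, which is the motivation the abstract gives for using multiple detector planes.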
Logsdon, Benjamin A.; Carty, Cara L.; Reiner, Alexander P.; Dai, James Y.; Kooperberg, Charles
2012-01-01
Motivation: For many complex traits, including height, the majority of variants identified by genome-wide association studies (GWAS) have small effects, leaving a significant proportion of the heritable variation unexplained. Although many penalized multiple regression methodologies have been proposed to increase the power to detect associations for complex genetic architectures, they generally lack mechanisms for false-positive control and diagnostics for model over-fitting. Our methodology is the first penalized multiple regression approach that explicitly controls Type I error rates and provides model over-fitting diagnostics through a novel normally distributed statistic defined for every marker within the GWAS, based on results from a variational Bayes spike regression algorithm. Results: We compare the performance of our method to the lasso and single marker analysis on simulated data and demonstrate that our approach has superior performance in terms of power and Type I error control. In addition, using the Women's Health Initiative (WHI) SNP Health Association Resource (SHARe) GWAS of African-Americans, we show that our method has the power to detect additional novel associations with body height. These findings were replicated by reaching a stringent cutoff of marginal association in a larger cohort. Availability: An R-package, including an implementation of our variational Bayes spike regression (vBsr) algorithm, is available at http://kooperberg.fhcrc.org/soft.html. Contact: blogsdon@fhcrc.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22563072
NASA Astrophysics Data System (ADS)
Bouchet, F.; Laurie, J.; Zaboronski, O.
2012-12-01
We describe transitions between attractors with either one, two or more zonal jets in models of turbulent atmosphere dynamics. Those transitions are extremely rare, and occur over time scales of centuries or millennia. They are extremely hard to observe in direct numerical simulations, because they require on one hand an extremely good resolution in order to simulate the turbulence accurately, and on the other hand simulations performed over an extremely long time. Those conditions are usually not met together in any realistic model. However, many examples of transitions between turbulent attractors in geophysical flows are known to exist (paths of the Kuroshio, Earth's magnetic field reversal, atmospheric flows, and so on). Their study through numerical computations is inaccessible using conventional means. We present an alternative approach, based on instanton theory and large deviations. Instanton theory provides a way to compute (both numerically and theoretically) extremely rare transitions between turbulent attractors. This tool, developed in field theory and justified in some cases through large deviation theory in mathematics, can be applied to models of turbulent atmosphere dynamics. It provides both new theoretical insights and new types of numerical algorithms. Those algorithms can predict transition histories and transition rates using numerical simulations run over only hundreds of typical model dynamical times, which is several orders of magnitude lower than the typical transition time. We illustrate the power of those tools in the framework of quasi-geostrophic models. We show regimes where two or more attractors coexist. Those attractors correspond to turbulent flows dominated by either one or more zonal jets similar to midlatitude atmospheric jets. Among the trajectories connecting two non-equilibrium attractors, we determine the most probable ones. Moreover, we also determine the transition rates, which are several orders of magnitude larger than a
NASA Astrophysics Data System (ADS)
Goodson, Matthew D.; Heitsch, Fabian; Eklund, Karl; Williams, Virginia A.
2017-07-01
Turbulence models attempt to account for unresolved dynamics and diffusion in hydrodynamical simulations. We develop a common framework for two-equation Reynolds-averaged Navier-Stokes turbulence models, and we implement six models in the athena code. We verify each implementation with the standard subsonic mixing layer, although the level of agreement depends on the definition of the mixing layer width. We then test the validity of each model into the supersonic regime, showing that compressibility corrections can improve agreement with experiment. For models with buoyancy effects, we also verify our implementation via the growth of the Rayleigh-Taylor instability in a stratified medium. The models are then applied to the ubiquitous astrophysical shock-cloud interaction in three dimensions. We focus on the mixing of shock and cloud material, comparing results from turbulence models to high-resolution simulations (up to 200 cells per cloud radius) and ensemble-averaged simulations. We find that the turbulence models lead to increased spreading and mixing of the cloud, although no two models predict the same result. Increased mixing is also observed in inviscid simulations at resolutions greater than 100 cells per radius, which suggests that the turbulent mixing begins to be resolved.
Chen, Bihua; Yu, Tao; Ristagno, Giuseppe; Quan, Weilun; Li, Yongqin
2014-10-01
Defibrillation current has been shown to be a clinically more relevant dosing unit than energy. However, the effects of average and peak current in determining shock outcome are still undetermined. The aim of this study was to investigate the relationship between average current, peak current and defibrillation success when different biphasic waveforms were employed. Ventricular fibrillation (VF) was electrically induced in 22 domestic male pigs. Animals were then randomized to receive defibrillation using one of two different biphasic waveforms. A grouped up-and-down defibrillation threshold-testing protocol was used to maintain the average success rate in the neighborhood of 50%. In 14 animals (Study A), defibrillations were accomplished with either biphasic truncated exponential (BTE) or rectilinear biphasic waveforms. In eight animals (Study B), shocks were delivered using two BTE waveforms that had identical peak current but different waveform durations. Both average and peak currents were associated with defibrillation success when BTE and rectilinear waveforms were investigated. However, when pathway impedance was less than 90Ω for the BTE waveform, the bivariate correlation coefficient was 0.36 (p=0.001) for the average current, but only 0.21 (p=0.06) for the peak current in Study A. In Study B, a higher defibrillation success rate (67.9% vs. 38.8%, p<0.001) was observed when the waveform delivered more average current (14.9±2.1A vs. 13.5±1.7A, p<0.001) while keeping the peak current unchanged. In this porcine model of VF, average current was a more adequate parameter than peak current to describe the therapeutic dosage when biphasic defibrillation waveforms were used. The institutional protocol number: P0805. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Wang, Fumin; Gonsamo, Alemu; Chen, Jing M; Black, T Andrew; Zhou, Bin
2014-11-01
Daily canopy photosynthesis is usually temporally upscaled from instantaneous (i.e., seconds) photosynthesis rate. The nonlinear response of photosynthesis to meteorological variables makes the temporal scaling a significant challenge. In this study, two temporal upscaling schemes of daily photosynthesis, the integrated daily model (IDM) and the segmented daily model (SDM), are presented by considering the diurnal variations of meteorological variables based on a coupled photosynthesis-stomatal conductance model. The two models, as well as a simple average daily model (SADM) with daily average meteorological inputs, were validated using the tower-derived gross primary production (GPP) to assess their abilities in simulating daily photosynthesis. The results showed IDM closely followed the seasonal trend of the tower-derived GPP with an average RMSE of 1.63 g C m(-2) day(-1), and an average Nash-Sutcliffe model efficiency coefficient (E) of 0.87. SDM performed similarly to IDM in GPP simulation but decreased the computation time by >66%. SADM overestimated daily GPP by about 15% during the growing season compared to IDM. Both IDM and SDM greatly decreased the overestimation by SADM, and improved the simulation of daily GPP by reducing the RMSE by 34 and 30%, respectively. The results indicated that IDM and SDM are useful temporal upscaling approaches, and both are superior to SADM in daily GPP simulation because they take into account the diurnally varying responses of photosynthesis to meteorological variables. SDM is computationally more efficient, and therefore more suitable for long-term and large-scale GPP simulations.
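The overestimation by SADM is a direct consequence of Jensen's inequality: photosynthesis responds concavely to light, so the photosynthesis of the daily-average input exceeds the average of the instantaneous photosynthesis. A toy demonstration with a hypothetical rectangular-hyperbola light response (all function forms and parameter values are illustrative, not the study's coupled model):

```python
import numpy as np

# Illustrative concave light response (a_max, k are made-up constants).
def photosynthesis(par, a_max=30.0, k=500.0):
    return a_max * par / (par + k)

hours = np.linspace(0.0, 24.0, 241)
# Synthetic diurnal PAR course: sinusoidal daylight, zero at night.
par = np.maximum(0.0, 1500.0 * np.sin(np.pi * (hours - 6.0) / 12.0))

gpp_integrated = photosynthesis(par).mean()   # integrate, then average (IDM-like)
gpp_mean_input = photosynthesis(par.mean())   # average inputs first (SADM-like)
# gpp_mean_input comes out larger, mirroring the overestimation noted above.
```

Accounting for the diurnal cycle, as IDM and SDM do, removes exactly this bias.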
Lau, T W; Fang, C; Leung, F
2017-03-01
After the implementation of the multidisciplinary geriatric hip fracture clinical pathway in 2007, the hospital length of stay shortened and the clinical outcomes improved. Moreover, the cost of manpower for each hip fracture decreased, showing that this care model is cost-effective. The objective of this study is to compare the clinical outcomes and the cost of manpower before and after the implementation of the multidisciplinary geriatric hip fracture clinical pathway (GHFCP). The hip fracture data from 2006 were compared with the data of four consecutive years since 2008. The efficiency of the program is assessed using the hospital length of stay. The clinical outcomes, including mortality and complication rates, are compared. Cost of manpower was also analysed. After the implementation of the GHFCP, the preoperative length of stay shortened significantly from 5.8 days in 2006 to 1.3 days in 2011. The total lengths of stay in the acute and rehabilitation hospitals were also shortened by 6.1 days and 14.2 days, respectively. The postoperative pneumonia rate also decreased from 1.25 to 0.25%. The short- and long-term mortalities also showed a general improvement. Although allied health manpower was increased to meet the increased workload, the shortened length of stay accounted for a marked decrease in the cost of manpower per hip fracture case. This study shows that the GHFCP shortened geriatric hip fracture patients' length of stay and improved the clinical outcomes, while also being cost-effective: better care proved less costly.
Haufe, Stefan; Huang, Yu; Parra, Lucas C
2015-08-01
In electroencephalographic (EEG) source imaging as well as in transcranial current stimulation (TCS), it is common to model the head using either three-shell boundary element (BEM) or more accurate finite element (FEM) volume conductor models. Since building FEMs is computationally demanding and labor intensive, they are often extensively reused as templates even for subjects with mismatching anatomies. BEMs can in principle be used to efficiently build individual volume conductor models; however, the limiting factor for such individualization is the high acquisition cost of structural magnetic resonance images. Here, we build a highly detailed (0.5 mm(3) resolution, 6 tissue type segmentation, 231 electrodes) FEM based on the ICBM152 template, a nonlinear average of 152 adult human heads, which we call ICBM-NY. We show that, through more realistic electrical modeling, our model is similarly accurate as individual BEMs. Moreover, through using an unbiased population average, our model is also more accurate than FEMs built from mismatching individual anatomies. Our model is made available in Matlab format.
NASA Astrophysics Data System (ADS)
Miyaguchi, Tomoshige
2017-10-01
There have been increasing reports that the diffusion coefficient of macromolecules depends on time and fluctuates randomly. Here a method is developed to elucidate this fluctuating diffusivity from trajectory data. Time-averaged mean-square displacement (MSD), a common tool in single-particle-tracking (SPT) experiments, is generalized to a second-order tensor with which both magnitude and orientation fluctuations of the diffusivity can be clearly detected. This method is used to analyze the center-of-mass motion of four fundamental polymer models: the Rouse model, the Zimm model, a reptation model, and a rigid rodlike polymer. It is found that these models exhibit distinctly different types of magnitude and orientation fluctuations of diffusivity. This is an advantage of the present method over previous ones, such as the ergodicity-breaking parameter and a non-Gaussian parameter, because with either of these parameters it is difficult to distinguish the dynamics of the four polymer models. Also, the present method of a time-averaged MSD tensor could be used to analyze trajectory data obtained in SPT experiments.
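The time-averaged MSD tensor described above is straightforward to compute from trajectory data: the lag-Δ displacement outer products are averaged over time, so both the magnitude and the orientation of the diffusivity can be read off. A minimal sketch for a single 2D trajectory (a plain isotropic random walk, not one of the four polymer models analyzed in the paper):

```python
import numpy as np

def tamsd_tensor(traj, lag):
    """traj: (T, d) positions; returns the d x d time-averaged MSD tensor,
    whose trace is the ordinary time-averaged MSD at this lag."""
    disp = traj[lag:] - traj[:-lag]                # displacements at this lag
    return np.einsum('ti,tj->ij', disp, disp) / len(disp)

rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(size=(10000, 2)), axis=0)  # isotropic random walk
M = tamsd_tensor(traj, lag=10)
# For isotropic diffusion M is nearly proportional to the identity,
# with trace(M) close to 2 * lag for unit-variance steps.
```

Anisotropic or fluctuating diffusivity would show up as unequal eigenvalues of M, or as eigenvalues and eigenvectors that vary when the tensor is computed over sliding windows.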
NASA Technical Reports Server (NTRS)
Harrison, Phil; LaVerde, Bruce; Teague, David
2009-01-01
Although applications for Statistical Energy Analysis (SEA) techniques are more widely used in the aerospace industry today, opportunities to anchor the response predictions using measured data from a flight-like launch vehicle structure are still quite valuable. Response and excitation data from a ground acoustic test at the Marshall Space Flight Center permitted the authors to compare and evaluate several modeling techniques available in the SEA module of the commercial code VA One. This paper provides an example of vibration response estimates developed using different modeling approaches to both approximate and bound the response of a flight-like vehicle panel. Since both vibration response and acoustic levels near the panel were available from the ground test, the evaluation provided an opportunity to learn how well the different modeling options can match band-averaged spectra developed from the test data. Additional work was performed to understand the spatial averaging of the measurements across the panel from measured data. Finally an evaluation/comparison of two conversion approaches from the statistical average response results that are output from an SEA analysis to a more useful envelope of response spectra appropriate to specify design and test vibration levels for a new vehicle.
NASA Technical Reports Server (NTRS)
Sellers, Piers J.; Shuttleworth, W. James; Dorman, Jeff L.; Dalcher, Amnon; Roberts, John M.
1989-01-01
Using meteorological and hydrological measurements taken in and above the central-Amazon-basin tropical forest, the calibration of the Sellers et al. (1986) simple biosphere (SiB) model is described. The SiB model is a one-dimensional soil-vegetation-atmosphere model designed for use within GCMs, representing the vegetation cover by analogy with processes operating within a single representative plant. The experimental systems and the procedures used to obtain field data are described, together with the specification of the physiological parameterization required to provide an average description of the data. It was found that some of the existing literature on stomatal behavior for tropical species is inconsistent with the observed behavior of the complete canopy in Amazonia, and that the rainfall interception store of the canopy is considerably smaller than originally specified in the SiB model.
NASA Astrophysics Data System (ADS)
Ibrahim, Ahmad; Steffler, Peter; She, Yuntong
2018-02-01
The interaction between surface water and groundwater through the hyporheic zone is recognized to be important as it impacts the water quantity and quality in both flow systems. Three-dimensional (3D) modeling is the most complete representation of a real-world hyporheic zone. However, 3D modeling requires extreme computational power and effort; the sophistication is often significantly compromised by not being able to obtain the required input data accurately. Simplifications are therefore often needed. The objective of this study was to assess the accuracy of the vertically-averaged approximation compared to a more complete vertically-resolved model of the hyporheic zone. The groundwater flow was modeled by either a simple one-dimensional (1D) Dupuit approach or a two-dimensional (2D) horizontal/vertical model in boundary-fitted coordinates, with the latter considered as a reference model. Both groundwater models were coupled with a 1D surface water model via the surface water depth. Applying the two models to an idealized pool-riffle sequence showed that the 1D Dupuit approximation gave results comparable to the reference model in determining the characteristics of the hyporheic zone when the stratum thickness is not very large compared to the surface water depth. Conditions under which the 1D model can provide reliable estimates of the seepage discharge, upwelling/downwelling discharges and locations, the hyporheic flow, and the residence time were determined.
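The 1D Dupuit approach referred to above assumes hydrostatic pressure and essentially horizontal groundwater flow, which yields the classical per-unit-width discharge between two water levels. A minimal sketch with illustrative values (not taken from the study):

```python
# Dupuit approximation for steady unconfined flow between water levels
# h1 and h2 a distance L apart, in a stratum of hydraulic conductivity K.
def dupuit_discharge(K, h1, h2, L):
    """Per-unit-width discharge, q = K * (h1**2 - h2**2) / (2 * L)."""
    return K * (h1 ** 2 - h2 ** 2) / (2.0 * L)

# Illustrative values: K in m/s, heads and spacing in m.
q = dupuit_discharge(K=1e-4, h1=2.0, h2=1.5, L=10.0)  # m^2/s per unit width
```

The quadratic dependence on head is what makes the approximation attractive: the free surface never has to be resolved vertically, which is exactly the simplification the study evaluates against the 2D reference model.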
Decomposition method for zonal resource allocation problems in telecommunication networks
NASA Astrophysics Data System (ADS)
Konnov, I. V.; Kashuba, A. Yu
2016-11-01
We consider problems of optimal resource allocation in telecommunication networks. We first give an optimization formulation for the case where the network manager aims to distribute some homogeneous resource (bandwidth) among users of one region with quadratic charge and fee functions and present simple and efficient solution methods. Next, we consider a more general problem for a provider of a wireless communication network divided into zones (clusters) with common capacity constraints. We obtain a convex quadratic optimization problem involving capacity and balance constraints. By using the dual Lagrangian method
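The dual Lagrangian approach the abstract alludes to can be illustrated on a toy single-zone instance: maximize concave quadratic utilities sum_i (a_i x_i - 0.5 b_i x_i^2) subject to x_i >= 0 and a total capacity constraint, solving the dual by bisection on the capacity multiplier. All coefficients below are illustrative, not from the paper.

```python
def allocate(a, b, capacity, tol=1e-10):
    """Bandwidth split maximizing sum_i (a[i]*x - 0.5*b[i]*x**2)
    subject to x >= 0 and sum(x) <= capacity, via the dual multiplier."""
    def x_of(lam):  # per-user optimum for a given multiplier lam
        return [max(0.0, (ai - lam) / bi) for ai, bi in zip(a, b)]
    if sum(x_of(0.0)) <= capacity:      # capacity slack: multiplier is zero
        return x_of(0.0)
    lo, hi = 0.0, max(a)                # total allocation decreases in lam
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(x_of(mid)) > capacity:
            lo = mid
        else:
            hi = mid
    return x_of(hi)

x = allocate(a=[4.0, 3.0, 2.0], b=[1.0, 1.0, 2.0], capacity=3.0)
# Users with higher marginal utility a_i receive more; the capacity binds.
```

The zonal problem in the paper adds per-zone balance constraints, but the one-multiplier-per-constraint structure of the dual is the same.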