NASA Technical Reports Server (NTRS)
Vonderhaar, T. H.; Stephens, G. L.; Campbell, G. G.
1980-01-01
The annual and seasonally averaged Earth-atmosphere radiation budgets derived from the most complete set of satellite observations available are presented. The budgets were derived from a composite of 48 monthly mean radiation budget maps. Annually and seasonally averaged radiation budgets are presented as global averages and zonal averages. The geographic distribution of the various radiation budget quantities is described. The annual cycle of the radiation budget was analyzed, and the annual variability of net flux was shown to be dominated largely by the regular semiannual and annual cycles forced by external Earth-Sun geometry variations. Radiative transfer calculations were compared to the observed budget quantities, and surface budgets were additionally computed, with particular emphasis on discrepancies that exist between the present computations and previous surface budget estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Jaiyul; Durrer, Ruth, E-mail: jyoo@physik.uzh.ch, E-mail: ruth.durrer@unige.ch
Theoretical descriptions of observable quantities in cosmological perturbation theory should be independent of coordinate systems. This statement is often referred to as gauge invariance of observable quantities, and the sanity of a theoretical description is verified by checking its gauge invariance. We argue that cosmological observables are invariant scalars under diffeomorphisms, and that their theoretical description is gauge-invariant only at linear order in perturbations. Beyond linear order, they are usually not gauge-invariant, and we provide the general gauge-transformation law that the perturbation part of an observable obeys. We apply this finding to derive the second-order expression for the observational light-cone average in cosmology and demonstrate that our expression is indeed invariant under diffeomorphisms.
NASA Technical Reports Server (NTRS)
Racusin, J. L.; Oates, S. R.; De Pasquale, M.; Kocevski, D.
2016-01-01
We present a correlation between the average temporal decay (alpha_X,avg, measured at t > 200 s) and the early-time luminosity (L_X,200 s) of X-ray afterglows of gamma-ray bursts as observed by the Swift X-ray Telescope. Both quantities are measured relative to a rest-frame time of 200 s after the gamma-ray trigger. The luminosity-average decay correlation does not depend on specific temporal behavior and contains one scale-independent quantity, minimizing the role of selection effects. This correlation is complementary to that discovered by Oates et al. in the optical light curves observed by the Swift Ultraviolet/Optical Telescope. The correlation indicates that, on average, more luminous X-ray afterglows decay faster than less luminous ones, indicating some relative mechanism for energy dissipation. The X-ray and optical correlations are entirely consistent once corrections are applied and contamination is removed. We explore the possible biases introduced by different light-curve morphologies and observational selection effects, and how either geometrical effects or intrinsic properties of the central engine and jet could explain the observed correlation.
Cosmological ensemble and directional averages of observables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonvin, Camille; Clarkson, Chris; Durrer, Ruth
We show that at second order, ensemble averages of observables and directional averages do not commute due to gravitational lensing: observing the same thing in many directions over the sky is not the same as taking an ensemble average. In principle this non-commutativity is significant for a variety of quantities that we often use as observables and can lead to a bias in parameter estimation. We derive the relation between the ensemble average and the directional average of an observable, at second order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focusing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance in a given observed direction is increased by gravitational lensing, whereas the directional average of the distance is decreased. For a generic observable, there exists a particular function of the observable that is not affected by second-order lensing perturbations. We also show that standard areas have an advantage over standard rulers, and we discuss the subtleties involved in averaging in the case of supernova observations.
NASA Technical Reports Server (NTRS)
Chelton, Dudley B.; Schlax, Michael G.
1991-01-01
A formalism is presented to quantify the sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location. The method is applied to satellite observations of chlorophyll from the Coastal Zone Color Scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes the suboptimal estimate a viable practical alternative to the composite-average method generally employed at present.
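The contrast between the composite average and a Gauss-Markov (optimal) estimate can be sketched as follows. The exponential signal covariance, the variance values, and the 30-day window are illustrative assumptions, not the parameters used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Irregularly spaced observation times within a 30-day averaging window
# (hypothetical sampling, for illustration only).
t = np.sort(rng.uniform(0.0, 30.0, 12))

# Assumed second-order statistics: exponentially correlated signal plus
# white measurement noise. These are NOT the CZCS values.
signal_var, noise_var, tau = 1.0, 0.5, 5.0

def composite_average(y):
    """Composite average: simple mean of all observations in the window."""
    return y.mean()

# Optimal (Gauss-Markov) estimate: weights minimize the mean squared error
# of the estimated time average, given the covariances above.
C = signal_var * np.exp(-np.abs(t[:, None] - t[None, :]) / tau) \
    + noise_var * np.eye(len(t))                 # obs-obs covariance
tg = np.linspace(0.0, 30.0, 301)                 # fine grid over the window
c = (signal_var * np.exp(-np.abs(t[:, None] - tg[None, :]) / tau)).mean(axis=1)
w = np.linalg.solve(C, c)                        # optimal weights

y = rng.standard_normal(len(t))                  # stand-in observations
optimal_estimate = w @ y
```

A "suboptimal" estimate in the paper's sense would use the same weight formula with incorrect signal/noise parameters; the composite average corresponds to setting every weight to 1/n.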
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishida, Hideshi, E-mail: ishida@me.es.osaka-u.ac.jp
2014-06-15
In this study, a family of local quantities defined on each partition, and their averages over a macroscopically small region (a site), are defined on a multibaker chain system. For the averaged quantities, a law of order estimation in the bulk system is proved, making it possible to estimate the order of the quantities with respect to the representative partition scale parameter Δ. Moreover, the form of the leading-order terms of the averaged quantities is obtained; this form enables us to obtain the macroscopic quantity in the continuum limit Δ → 0 and to confirm its independence of the partitioning. These deliverables fully explain the numerical results obtained by Ishida, consistent with irreversible thermodynamics.
Ergodic Theory, Interpretations of Probability and the Foundations of Statistical Mechanics
NASA Astrophysics Data System (ADS)
van Lith, Janneke
The traditional use of ergodic theory in the foundations of equilibrium statistical mechanics is that it provides a link between thermodynamic observables and microcanonical probabilities. First of all, the ergodic theorem demonstrates the equality of microcanonical phase averages and infinite time averages (albeit for a special class of systems, and up to a measure zero set of exceptions). Secondly, one argues that actual measurements of thermodynamic quantities yield time averaged quantities, since measurements take a long time. The combination of these two points is held to be an explanation why calculating microcanonical phase averages is a successful algorithm for predicting the values of thermodynamic observables. It is also well known that this account is problematic. This survey intends to show that ergodic theory nevertheless may have important roles to play, and it explores three other uses of ergodic theory. Particular attention is paid, firstly, to the relevance of specific interpretations of probability, and secondly, to the way in which the concern with systems in thermal equilibrium is translated into probabilistic language. With respect to the latter point, it is argued that equilibrium should not be represented as a stationary probability distribution as is standardly done; instead, a weaker definition is presented.
NASA Technical Reports Server (NTRS)
Liu, W. T.
1984-01-01
The average wind speeds from the scatterometer (SASS) on the ocean observing satellite SEASAT are found to be generally higher than the average wind speeds from ship reports. In this study, two factors, sea surface temperature and atmospheric stability, are identified which affect microwave scatter and, therefore, wave development. The problem of relating satellite observations to a fictitious quantity, such as the neutral wind, that has to be derived from in situ observations with models is examined. The study also demonstrates the dependence of SASS winds on sea surface temperature at low wind speeds, possibly due to temperature-dependent factors, such as water viscosity, which affect wave development.
Weak ergodicity breaking, irreproducibility, and ageing in anomalous diffusion processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metzler, Ralf
2014-01-14
Single-particle traces are standardly evaluated in terms of time averages of the second moment of the position time series r(t). For ergodic processes, one can interpret such results in terms of the known theories for the corresponding ensemble-averaged quantities. In anomalous diffusion processes, which are widely observed in nature over many orders of magnitude, the equivalence between (long) time and ensemble averages may be broken (weak ergodicity breaking), and these time averages may no longer be interpreted in terms of ensemble theories. Here we detail some recent results on weakly non-ergodic systems with respect to the time-averaged mean squared displacement, the inherent irreproducibility of individual measurements, and methods to determine the exact underlying stochastic process. We also address the phenomenon of ageing, the dependence of physical observables on the time span between initial preparation of the system and the start of the measurement.
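The distinction between the two averages can be illustrated with a minimal sketch for ordinary Brownian motion, where the time and ensemble averages agree; the trajectory model and parameter values are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

def time_averaged_msd(x, lag):
    """Time-averaged mean squared displacement of one trajectory at a given lag."""
    return np.mean((x[lag:] - x[:-lag]) ** 2)

# Ensemble of ordinary Brownian trajectories (the ergodic reference case);
# the parameters are arbitrary illustration values.
n_traj, n_steps, dt = 200, 1000, 1.0
steps = rng.standard_normal((n_traj, n_steps)) * np.sqrt(dt)
x = np.cumsum(steps, axis=1)

lag = 10
ta_msd = np.array([time_averaged_msd(xi, lag) for xi in x])  # one value per trajectory
ens_msd = np.mean((x[:, lag] - x[:, 0]) ** 2)                # ensemble-averaged MSD

# For Brownian motion both converge to lag * dt. For weakly non-ergodic
# processes (e.g., continuous-time random walks with diverging mean waiting
# times), ta_msd scatters between trajectories and its mean deviates from
# the ensemble-averaged MSD.
```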
Comparative climatology of four marine stratocumulus regimes
NASA Technical Reports Server (NTRS)
Hanson, Howard P.
1990-01-01
The climatology of marine stratocumulus (MSc) cloud regimes off the west coasts of California, Peru, Morocco, and Angola are examined. Long-term, annual averages are presented for several quantities of interest in the four MSc regimes. The climatologies were constructed using the Comprehensive Ocean-Atmosphere Data Set (COADS). A 40 year time series of observations was extracted for 32 x 32 deg analysis domains. The data were taken from the monthly-averaged, 2 deg product. The resolution of the analysis is therefore limited to scales of greater than 200 km with submonthly variability not resolved. The averages of total cloud cover, sea surface temperature, and surface pressure are presented.
Rotation and anisotropy of galaxies revisited
NASA Astrophysics Data System (ADS)
Binney, James
2005-11-01
The use of the tensor virial theorem (TVT) as a diagnostic of anisotropic velocity distributions in galaxies is revisited. The TVT provides a rigorous global link between velocity anisotropy, rotation and shape, but the quantities appearing in it are not easily estimated observationally. Traditionally, use has been made of a centrally averaged velocity dispersion and the peak rotation velocity. Although this procedure cannot be rigorously justified, tests on model galaxies show that it works surprisingly well. With the advent of integral-field spectroscopy it is now possible to establish a rigorous connection between the TVT and observations. The TVT is reformulated in terms of sky-averages, and the new formulation is tested on model galaxies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perkins, R. J., E-mail: rperkins@pppl.gov; Bellan, P. M.
Action integrals are often used to average a system over fast oscillations and obtain reduced dynamics. It is not surprising, then, that action integrals play a central role in the Hellmann-Feynman theorem of classical mechanics, which furnishes the values of certain quantities averaged over one period of rapid oscillation. This paper revisits the classical Hellmann-Feynman theorem, rederiving it in connection with an analogous theorem involving the time-averaged evolution of canonical coordinates. We then apply a modified version of the Hellmann-Feynman theorem to obtain a new result: the magnetic flux enclosed by one period of gyro-motion of a charged particle in a non-uniform magnetic field. These results further demonstrate the utility of the action integral in obtaining orbit-averaged quantities and the usefulness of this formalism in characterizing charged particle motion.
THE CELESTIAL REFERENCE FRAME AT 24 AND 43 GHz. II. IMAGING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charlot, P.; Boboltz, D. A.; Fey, A. L.
2010-05-15
We have measured the submilliarcsecond structure of 274 extragalactic sources at 24 and 43 GHz in order to assess their astrometric suitability for use in a high-frequency celestial reference frame (CRF). Ten sessions of observations with the Very Long Baseline Array have been conducted over the course of ~5 years, with a total of 1339 images produced for the 274 sources. There are several quantities that can be used to characterize the impact of intrinsic source structure on astrometric observations, including the source flux density, the flux density variability, the source structure index, the source compactness, and the compactness variability. A detailed analysis of these imaging quantities shows that (1) our selection of compact sources from 8.4 GHz catalogs yielded sources with flux densities, averaged over the sessions in which each source was observed, of about 1 Jy at both 24 and 43 GHz, (2) on average the source flux densities at 24 GHz varied by 20%-25% relative to their mean values, with variations in the session-to-session flux density scale being less than 10%, (3) sources were found to be more compact with less intrinsic structure at higher frequencies, and (4) variations of the core radio emission relative to the total flux density of the source are less than 8% on average at 24 GHz. We conclude that the reduction in the effects due to source structure gained by observing at higher frequencies will result in an improved CRF and a pool of high-quality fiducial reference points for use in spacecraft navigation over the next decade.
Do Social Conditions Affect Capuchin Monkeys' (Cebus apella) Choices in a Quantity Judgment Task?
Beran, Michael J; Perdue, Bonnie M; Parrish, Audrey E; Evans, Theodore A
2012-01-01
Beran et al. (2012) reported that capuchin monkeys closely matched the performance of humans in a quantity judgment test in which information was incomplete but a judgment still had to be made. In each test session, subjects first made quantity judgments between two known options. Then, they made choices where only one option was visible. Both humans and capuchin monkeys were guided by past outcomes, as they shifted from selecting a known option to selecting an unknown option at the point at which the known option went from being more than the average rate of return to less than the average rate of return from earlier choices in the test session. Here, we expanded this assessment of what guides quantity judgment choice behavior in the face of incomplete information to include manipulations to the unselected quantity. We manipulated the unchosen set in two ways: first, we showed the monkeys what they did not get (the unchosen set), anticipating that "losses" would weigh heavily on subsequent trials in which the same known quantity was presented. Second, we sometimes gave the unchosen set to another monkey, anticipating that this social manipulation might influence the risk-taking responses of the focal monkey when faced with incomplete information. However, neither manipulation caused difficulty for the monkeys who instead continued to use the rational strategy of choosing known sets when they were as large as or larger than the average rate of return in the session, and choosing the unknown (riskier) set when the known set was not sufficiently large. As in past experiments, this was true across a variety of daily ranges of quantities, indicating that monkeys were not using some absolute quantity as a threshold for selecting (or not) the known set, but instead continued to use the daily average rate of return to determine when to choose the known versus the unknown quantity.
Will, Kipling W.; Gill, Aman S.; Lee, Hyeunjoo; Attygalle, Athula B.
2010-01-01
This study is the first to measure the quantity of pygidial gland secretions released defensively by carabid beetles (Coleoptera: Carabidae) and to accurately measure the relative quantity of formic acid contained in their pygidial gland reservoirs and spray emissions. Individuals of three typical formic acid producing species were induced to spray repeatedly, ultimately exhausting their chemical compound reserves. Beetles were subjected to faux attacks using forceps and weighed before and after each ejection of chemicals. Platynus brunneomarginatus (Mannerheim) (Platynini), P. ovipennis (Mannerheim) (Platynini) and Calathus ruficollis Dejean (Sphodrini) sprayed average quantities (with standard error) of 0.313 ± 0.172 mg, 0.337 ± 0.230 mg, and 0.197 ± 0.117 mg per spray event, respectively. The quantity an individual beetle released when induced to spray tended to decrease with each subsequent spray event. The quantity emitted in a single spray was correlated with the quantity held in the reservoirs at the time of spraying for beetles whose reserves were greater than the average amount emitted in a spray event; for beetles with less than the average sprayed amount in reserve, there was no significant correlation. For beetles comparable in terms of size, physiological condition and gland reservoir fullness, the shape of the gland reservoirs and musculature determined that a similar effort at each spray event would mechanically meter out the release, so that a greater amount was emitted when more was available in the reservoir. The average percentage of formic acid was established for these species as 34.2%, 73.5% and 34.1% for P. brunneomarginatus, P. ovipennis and C. ruficollis, respectively. The average quantities of formic acid released by individuals of these species were less than two-thirds the amount shown to be lethal to ants in previously published experiments.
However, the total quantity from multiple spray events from a single individual could aggregate to quantities at or above the lethal level, and lesser quantities are known to act as ant alarm pheromones. Using a model, one directed spray of the formic acid and hydrocarbon mix could spread to an area of 5–8 cm diameter and persisted for 9–22 seconds at a threshold level known to induce alarm behaviors in ants. These results show that carabid defensive secretions may act as a potent and relatively prolonged defense against ants or similar predators even at a sub-lethal dose. PMID:20575743
Observability of market daily volatility
NASA Astrophysics Data System (ADS)
Petroni, Filippo; Serva, Maurizio
2016-02-01
We study the price dynamics of 65 stocks from the Dow Jones Composite Average from 1973 to 2014. We show that it is possible to define a daily market volatility σ(t) which is directly observable from data. This quantity is usually defined indirectly by r(t) = σ(t) ω(t), where the r(t) are the daily returns of the market index and the ω(t) are i.i.d. random variables with vanishing average and unit variance. The relation r(t) = σ(t) ω(t) alone is unable to give an operative definition of the index volatility, which remains unobservable. On the contrary, we show that, using the whole information available in the market, the index volatility can be operatively defined and detected.
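One way to make σ(t) operatively observable is to average over many stocks rather than over time. The following sketch assumes a simple cross-sectional estimator and synthetic data; the estimator actually used by the authors may differ:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in data: 65 stocks over 250 trading days, sharing a
# slowly varying common volatility (illustrative, not the DJCA data).
n_stocks, n_days = 65, 250
true_sigma = 0.01 * (1.0 + 0.5 * np.sin(np.linspace(0.0, 6.0, n_days)))
omega = rng.standard_normal((n_stocks, n_days))  # i.i.d., zero mean, unit variance
r = true_sigma[None, :] * omega                  # r_i(t) = sigma(t) * omega_i(t)

# Cross-sectional estimator: averaging r_i(t)^2 over stocks at fixed t
# makes sigma(t) observable day by day, with no time averaging needed.
sigma_hat = np.sqrt(np.mean(r ** 2, axis=0))
```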
Verbeke, J. M.; Petit, O.
2016-06-01
From nuclear safeguards to homeland security applications, the need for better modeling of nuclear interactions has grown over the past decades. Current Monte Carlo radiation transport codes compute average quantities with great accuracy and performance; however, performance and averaging come at the price of limited interaction-by-interaction modeling. These codes often lack the capability of modeling interactions exactly: for a given collision, energy is not conserved, energies of emitted particles are uncorrelated, and multiplicities of prompt fission neutrons and photons are uncorrelated. Many modern applications require more exclusive quantities than averages, such as the fluctuations in certain observables (e.g., the neutron multiplicity) and correlations between neutrons and photons. In an effort to meet this need, the radiation transport Monte Carlo code TRIPOLI-4® was modified to provide a specific mode that models nuclear interactions in a fully analog way, replicating as much as possible the underlying physical process. Furthermore, the computational model FREYA (Fission Reaction Event Yield Algorithm) was coupled with TRIPOLI-4 to model complete fission events. As a result, FREYA automatically includes fluctuations as well as correlations resulting from conservation of energy and momentum.
Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane
2017-07-12
The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale-up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section-averaged quantities, but would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids; we present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
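The Richardson extrapolation and grid convergence index (GCI) procedure mentioned above can be sketched as follows; the manufactured error model f(h) = 1 + 0.5 h^2 and the safety factor of 1.25 are illustrative assumptions, not the paper's TFM results:

```python
import numpy as np

def richardson(f_fine, f_med, f_coarse, r):
    """Observed order p, extrapolated value, and grid convergence index (GCI)
    from solutions on three grids with constant refinement ratio r."""
    p = np.log((f_coarse - f_med) / (f_med - f_fine)) / np.log(r)
    f_extrap = f_fine + (f_fine - f_med) / (r**p - 1.0)
    gci = 1.25 * abs((f_fine - f_med) / f_fine) / (r**p - 1.0)  # safety factor 1.25
    return p, f_extrap, gci

# Manufactured example with a purely second-order error, f(h) = 1 + 0.5*h^2,
# on grids h = 0.1, 0.2, 0.4 (illustrative values only).
p, f_extrap, gci = richardson(1.0 + 0.5 * 0.1**2,
                              1.0 + 0.5 * 0.2**2,
                              1.0 + 0.5 * 0.4**2, r=2.0)
# Recovers the order p = 2 and the grid-free value f = 1, with the GCI
# bounding the relative discretization error of the fine-grid solution.
```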
The Quality vs. the Quantity of Schooling: What Drives Economic Growth?
ERIC Educational Resources Information Center
Breton, Theodore R.
2011-01-01
This paper challenges Hanushek and Woessmann's (2008) contention that the quality and not the quantity of schooling determines a nation's rate of economic growth. I first show that their statistical analysis is flawed. I then show that when a nation's average test scores and average schooling attainment are included in a national income model,…
Non-local thermodynamic equilibrium 1.5D modeling of red giant stars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, Mitchell E.; Short, C. Ian, E-mail: myoung@ap.smu.ca
Spectra for two-dimensional (2D) stars in the 1.5D approximation are created from synthetic spectra of one-dimensional (1D) non-local thermodynamic equilibrium (NLTE) spherical model atmospheres produced by the PHOENIX code. The 1.5D stars have the spatially averaged Rayleigh-Jeans flux of a K3-4 III star while varying the temperature difference between the two 1D component models (ΔT_1.5D) and the relative surface area covered. Synthetic observable quantities from the 1.5D stars are fitted with quantities from NLTE and local thermodynamic equilibrium (LTE) 1D models to assess the errors in inferred T_eff values from assuming horizontal homogeneity and LTE. Five different quantities are fit to determine the T_eff of the 1.5D stars: UBVRI photometric colors, absolute surface flux spectral energy distributions (SEDs), relative SEDs, continuum-normalized spectra, and TiO band profiles. In all cases except the TiO band profiles, the inferred T_eff value increases with increasing ΔT_1.5D. In all cases, the inferred T_eff value from fitting 1D LTE quantities is higher than from fitting 1D NLTE quantities and is approximately constant as a function of ΔT_1.5D within each case. The difference between LTE and NLTE for the TiO bands is caused indirectly by the NLTE temperature structure of the upper atmosphere, as the bands are computed in LTE. We conclude that the difference between T_eff values derived from NLTE and LTE modeling is relatively insensitive to the degree of horizontal inhomogeneity of the star being modeled and largely depends on the observable quantity being fit.
Foliar nutrient status of young red spruce and balsam fir in a fertilized stand
Miroslaw M. Czapowskyj; L. O. Safford; Russell D. Briggs
1980-01-01
Average dry weight and nutrient levels in current foliage from red spruce and balsam fir seedlings and saplings in the understory of a 25-year old aspen and birch stand were observed 3 years after N, P, and lime treatments were applied. Elemental concentrations were plotted as a function of needle weight and quantity of element per needle. This allows interpretation of...
Unsteady characteristics of low-Re flow past two tandem cylinders
NASA Astrophysics Data System (ADS)
Zhang, Wei; Dou, Hua-Shu; Zhu, Zuchao; Li, Yi
2018-06-01
This study investigated the two-dimensional flow past two tandem circular or square cylinders at Re = 100 and D/d = 4-10, where D is the center-to-center distance and d is the cylinder diameter. Numerical simulation was performed to comparatively study the effect of cylinder geometry and spacing on the aerodynamic characteristics, unsteady flow patterns, time-averaged flow characteristics and flow unsteadiness. We also provide the first global linear stability analysis and sensitivity analysis of this flow for the potential application of flow control. The objective of this work is to quantitatively identify the effect of the cylinder geometry and spacing on the characteristic quantities. Numerical results reveal that there is a wake-flow transition for both geometries depending on the spacing. The characteristic quantities, including the time-averaged and fluctuating streamwise velocity and pressure coefficient, are quite similar to those of the single-cylinder case for the upstream cylinder, while an entirely different variation pattern is observed for the downstream cylinder. The global linear stability analysis shows that the spatial structure of the perturbation is mainly observed in the wake of the downstream cylinder for small spacing, while it moves upstream with reduced size and is also observed behind the upstream cylinder for large spacing. The sensitivity analysis reflects that the temporal growth rate of the perturbation is most sensitive to the near-wake flow of the downstream cylinder for small spacing and of the upstream cylinder for large spacing.
26 CFR 1.263A-4 - Rules for property produced in a farming business.
Code of Federal Regulations, 2013 CFR
2013-04-01
... period of plants grown in commercial quantities in the United States is based on the nationwide weighted... plants grown in commercial quantities in the United States, the nationwide weighted average preproductive... crop or yield. The plants are grown in commercial quantities in the United States. Farmer A acquires 1...
Tran, Irene; Clark, B. Ruth
2013-01-01
We measured the quantity and intensity of physical activity in 106 urban public school students during recess outdoors, recess indoors in the gym, and recess indoors in the classroom. Students in grades 2 through 5 wore accelerometer pedometers for an average of 6.2 (standard deviation [SD], 1.4) recess periods over 8 weeks; a subsample of 26 also wore heart rate monitors. We determined, on the basis of 655 recess observations, that outdoor recess enabled more total steps per recess period (P < .0001), more steps in moderate-to-vigorous physical activity (P < .0001), and higher heart rates than recess in the gym or classroom. To maximize physical activity quantity and intensity, school policies should promote outdoor recess. PMID:24262028
Graham, Jonathan Pietarila; Mininni, Pablo D; Pouquet, Annick
2005-10-01
We present direct numerical simulations and Lagrangian averaged (also known as alpha model) simulations of forced and free decaying magnetohydrodynamic turbulence in two dimensions. The statistics of sign cancellations of the current at small scales is studied using both the cancellation exponent and the fractal dimension of the structures. The alpha model is found to have the same scaling behavior between positive and negative contributions as the direct numerical simulations. The alpha model is also able to reproduce the time evolution of these quantities in free decaying turbulence. At large Reynolds numbers, an independence of the cancellation exponent with the Reynolds numbers is observed.
NASA Astrophysics Data System (ADS)
Rak, Rafał; Drożdż, Stanisław; Kwapień, Jarosław; Oświȩcimka, Paweł
2015-11-01
We consider a few quantities that characterize trading on a stock market in a fixed time interval: logarithmic returns, volatility, trading activity (i.e., the number of transactions), and volume traded. We search for the power-law cross-correlations among these quantities aggregated over different time units from 1 min to 10 min. Our study is based on empirical data from the American stock market consisting of tick-by-tick recordings of 31 stocks listed in Dow Jones Industrial Average during the years 2008-2011. Since all the considered quantities except the returns show strong daily patterns related to the variable trading activity in different parts of a day, which are the most evident in the autocorrelation function, we remove these patterns by detrending before we proceed further with our study. We apply the multifractal detrended cross-correlation analysis with sign preserving (MFCCA) and show that the strongest power-law cross-correlations exist between trading activity and volume traded, while the weakest ones exist (or even do not exist) between the returns and the remaining quantities. We also show that the strongest cross-correlations are carried by those parts of the signals that are characterized by large and medium variance. Our observation that the most convincing power-law cross-correlations occur between trading activity and volume traded reveals the existence of strong fractal-like coupling between these quantities.
NASA Technical Reports Server (NTRS)
Langel, R. A.
1974-01-01
The maximum disturbances from the positive and negative regions of delta B (Bp and Bn, respectively) are investigated with respect to their correlation with (1) the average N-S component, Bz, (2) the average angle with respect to the solar magnetospheric equatorial plane, theta, (3) the variance, sigma sub i, and (4) the magnitude, Bi, of the interplanetary magnetic field. These quantities were averaged over a period, T, ranging from 20 min. to 8 hours prior to the measurement of Bp or Bn. Variations (i.e., disturbances) in total magnetic field magnitude were studied utilizing data from the Polar Orbiting Geophysical Observatory satellites (OGO 2, 4, and 6), unofficially referred to as POGO.
NASA Astrophysics Data System (ADS)
Wang, Yanan; Méndez, Mariano; Altamirano, Diego; Court, James; Beri, Aru; Cheng, Zheng
2018-05-01
We present simultaneous NuSTAR and Swift observations of the black hole transient IGR J17091-3642 during its 2016 outburst. By jointly fitting six NuSTAR and four Swift spectra, we found that during this outburst the source evolves from the hard to the hard/soft intermediate and back to the hard state, similar to the 2011 outburst. Unlike in the previous outburst, in this case we observed both a broad emission and a moderately broad absorption line in our observations. Our fits favour an accretion disc with an inclination angle of ˜45° with respect to the line of sight and a high iron abundance of 3.5 ± 0.3 in units of the solar abundance. We also observed heartbeat variability in one NuSTAR observation. We fitted the phase-resolved spectra of this observation and found that the reflected emission varies independently from the direct emission, whereas in the fits to the average spectra these two quantities are strongly correlated. Assuming that in IGR J17091-3642 the inner radius of the disc both in the average and the phase-resolved spectra is located at the radius of the innermost stable circular orbit, with 90% confidence the spin parameter of the black hole in this system is -0.13 ≤ a* ≤ 0.27.
Modulation of a methane Bunsen flame by upstream perturbations
NASA Astrophysics Data System (ADS)
de Souza, T. Cardoso; Bastiaans, R. J. M.; De Goey, L. P. H.; Geurts, B. J.
2017-04-01
In this paper the effects of an upstream spatially periodic modulation acting on a turbulent Bunsen flame are investigated using direct numerical simulations of the Navier-Stokes equations coupled with the flamelet generated manifold (FGM) method to parameterise the chemistry. The premixed Bunsen flame is spatially agitated with a set of coherent large-scale structures of specific wave-number, K. The response of the premixed flame to the external modulation is characterised in terms of time-averaged properties, e.g. the average flame height ⟨H⟩ and the flame surface wrinkling ⟨W⟩. Results show that the flame response is notably selective to the size of the length scales used for agitation. For example, both flame quantities ⟨H⟩ and ⟨W⟩ present an optimal response, in comparison with an unmodulated flame, when the modulation scale is set to relatively low wave-numbers, 4π/L ≲ K ≲ 6π/L, where L is a characteristic scale. At the agitation scales where the optimal response is observed, the average flame height, ⟨H⟩, takes a clearly defined minimal value while the surface wrinkling, ⟨W⟩, presents an increase by more than a factor of 2 in comparison with the unmodulated reference case. Combined, these two response quantities indicate that there is an optimal scale for flame agitation and intensification of combustion rates in turbulent Bunsen flames.
Corona graphs as a model of small-world networks
NASA Astrophysics Data System (ADS)
Lv, Qian; Yi, Yuhao; Zhang, Zhongzhi
2015-11-01
We introduce recursive corona graphs as a model of small-world networks. We investigate analytically the critical characteristics of the model, including order and size, degree distribution, average path length, clustering coefficient, and the number of spanning trees, as well as the Kirchhoff index. Furthermore, we study the spectra of the adjacency matrix and the Laplacian matrix for the model. We obtain explicit results for all these quantities of the recursive corona graphs, which are similar to those observed in real-life networks.
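The corona product underlying such constructions is easy to state operationally: G∘H takes one copy of G plus |V(G)| copies of H and joins the i-th vertex of G to every vertex of the i-th copy. A small pure-Python sketch follows; the seed graph K3 and the choice H = K1 are illustrative assumptions, not necessarily the paper's construction. It also checks the order and size recursions |V(G∘H)| = |V(G)|(1 + |V(H)|) and |E(G∘H)| = |E(G)| + |V(G)|(|E(H)| + |V(H)|):

```python
def corona(G, H):
    """Corona product of two graphs given as {vertex: set-of-neighbours}
    with vertices labelled 0..n-1. Returns the product in the same form."""
    nG, nH = len(G), len(H)
    adj = {v: set(G[v]) for v in range(nG)}
    for i in range(nG):
        base = nG + i * nH          # vertices of the i-th copy of H
        for u in range(nH):
            adj[base + u] = {base + w for w in H[u]} | {i}
            adj[i].add(base + u)    # join hub i to every copy vertex
    return adj

K3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
K1 = {0: set()}

G = K3
orders, sizes = [], []
for _ in range(3):                  # three recursive corona steps
    G = corona(G, K1)
    orders.append(len(G))
    sizes.append(sum(len(nbrs) for nbrs in G.values()) // 2)
```

With H = K1 both order and size double at each step, so orders and sizes both follow 3·2^n, consistent with the recursions above.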
Galambos, Nancy L; Vargas Lascano, Dayuma I; Howard, Andrea L; Maggs, Jennifer L
2013-01-01
This study tracked change over time in sleep quantity, disturbance, and timing, and sleep's covariations with living situation, stress, social support, alcohol use, and grade point average (GPA) across four years of university in 186 Canadian students. Women slept longer as they moved through university, and men slept less; rise times were later each year. Students reported sleeping fewer hours, more sleep disturbances, and later rise times during years with higher stress. In years when students lived away from home, they reported more sleep disturbances, later bedtimes, and later rise times. Living on campus was associated with later bedtimes and rise times. Alcohol use was higher and GPA was lower when bedtimes were later. The implications of these observed patterns for understanding the correlates and consequences of university students' sleep are discussed.
Cox, Melissa D; Myerscough, Mary R
2003-07-21
This paper develops and explores a model of foraging in honey bee colonies. The model may be applied to forage sources with various properties, and to colonies with different foraging-related parameters. In particular, we examine the effect of five foraging-related parameters on the foraging response and consequent nectar intake of a homogeneous colony. The parameters investigated affect different quantities critical to the foraging cycle: visit rate (affected by g), probability of dancing (mpd and bpd), duration of dancing (mcirc), or probability of abandonment (A). We show that one parameter, A, affects nectar intake in a nonlinear way. Further, we show that colonies with a midrange value of any foraging parameter perform better than the average of colonies with high- and low-range values, when profitable sources are available. Together these observations suggest that a heterogeneous colony, in which a range of parameter values is present, may perform better than a homogeneous colony. We modify the model to represent heterogeneous colonies and use it to show that the most important effect of heterogeneous foraging behaviour within the colony is to reduce the variance in the average quantity of nectar collected.
Cosmic reionization on computers. II. Reionization history and its back-reaction on early galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gnedin, Nickolay Y.; Kaurov, Alexander A., E-mail: gnedin@fnal.gov, E-mail: kaurov@uchicago.edu
We compare the results from several sets of cosmological simulations of cosmic reionization, produced under the Cosmic Reionization On Computers project, with existing observational data on the high-redshift Lyα forest and the abundance of Lyα emitters. We find good consistency with the observational measurements and previous simulation work. By virtue of having several independent realizations for each set of numerical parameters, we are able to explore the effect of cosmic variance on observable quantities. One unexpected conclusion we are forced into is that cosmic variance is unusually large at z > 6, with both our simulations and, most likely, observational measurements still not fully converged for even such basic quantities as the average Gunn-Peterson optical depth or the volume-weighted neutral fraction. We also find that reionization has little effect on the early galaxies or on global cosmic star formation history, because galaxies whose gas content is affected by photoionization contain no molecular (i.e., star-forming) gas in the first place. In particular, measurements of the faint end of the galaxy luminosity function by the James Webb Space Telescope are unlikely to provide a useful constraint on reionization.
Quantification and classification of ship scraping waste at Alang-Sosiya, India.
Srinivasa Reddy, M; Basha, Shaik; Sravan Kumar, V G; Joshi, H V; Ghosh, P K
2003-12-01
Alang-Sosiya, located on the western coast of the Gulf of Cambay, is the largest ship recycling yard in the world. Every year, on average, 365 ships with a mean weight of 2.10×10^6 ± 7.82×10^5 LDT are scrapped. This industry generates a huge quantity of solid waste in the form of broken wood, rubber, insulation materials, paper, metals, glass and ceramics, plastics, leather, textiles, food waste, chemicals, paints, thermocol, sponge, ash, oil-mixed sponges, and miscellaneous combustible and non-combustible materials. The quantity and composition of solid waste were recorded over a period of three months and the average values are presented in this work. Sosiya had more waste (15.63 kg/m^2) than Alang (10.19 kg/m^2). Combustible solid waste made up around 83.0% of the total solid waste at the yard, an average of 9.807 kg/m^2, whereas non-combustible waste averaged 1.933 kg/m^2. There is little difference between the average total solid waste calculated from the sampling data (96.71 MT/day) and the figure provided by the port authorities (96.8 MT/day).
The cost of medical education in an ambulatory neurology clinic.
Abramovitch, Anna; Newman, William; Padaliya, Bimal; Gill, Chandler; Charles, P. David
2005-01-01
Decreased revenue from clinical services has required academic hospitals and physicians to improve productivity. Medical student education may be a significant hindrance to increased productivity and income. This study quantifies the amount of time spent by faculty members teaching medical students in an ambulatory neurology clinic as well as the amount of time students occupied rooms when seeing patients on their own. Over a three-week period in an ambulatory neurology clinic, an observer noted these quantities of time, and the opportunity costs of both amounts of time were determined. Attending physicians spent an average of 19.6 minutes per medical student per half-day teaching, which translates to an average cost of $20.78 per half-day clinic. Students spent an average of 49.9 minutes per half-day seeing patients in the absence of an attending physician, an opportunity cost to the clinic of $142.50 per student per half-day. PMID:16296220
Improvements in sub-grid, microphysics averages using quadrature based approaches
NASA Astrophysics Data System (ADS)
Chowdhary, K.; Debusschere, B.; Larson, V. E.
2013-12-01
Sub-grid variability in microphysical processes plays a critical role in atmospheric climate models. In order to account for this sub-grid variability, Larson and Schanen (2013) propose placing a probability density function on the sub-grid cloud microphysics quantities, e.g. autoconversion rate, essentially interpreting the cloud microphysics quantities as a random variable in each grid box. Random sampling techniques, e.g. Monte Carlo and Latin Hypercube, can be used to calculate statistics, e.g. averages, on the microphysics quantities, which then feed back into the model dynamics on the coarse scale. We propose an alternate approach using numerical quadrature methods based on deterministic sampling points to compute the statistical moments of microphysics quantities in each grid box. We have performed a preliminary test on the Kessler autoconversion formula, and, upon comparison with Latin Hypercube sampling, our approach shows an increased level of accuracy with a reduction in sample size by almost two orders of magnitude. Application to other microphysics processes is the subject of ongoing research.
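The contrast between random and deterministic sampling described above can be illustrated with a toy version of the setup: a Kessler-type autoconversion rate (linear above a critical cloud-water mixing ratio) averaged over an assumed lognormal sub-grid distribution. All numbers below (rate constant, threshold, distribution parameters) are illustrative assumptions, and Gauss-Hermite quadrature stands in for the paper's quadrature scheme:

```python
import numpy as np

K_AUTO, Q_CRIT = 1e-3, 0.5e-3        # assumed rate constant and threshold

def kessler(q):
    """Kessler-type autoconversion: zero below Q_CRIT, linear above it."""
    return K_AUTO * np.maximum(q - Q_CRIT, 0.0)

# Sub-grid cloud water modelled as lognormal: q = exp(mu + sigma*z), z~N(0,1)
mu, sigma = np.log(0.6e-3), 0.5

# Deterministic sampling: 64-point Gauss-Hermite quadrature for E[f(q)]
x, w = np.polynomial.hermite.hermgauss(64)
quad_avg = np.sum(w * kessler(np.exp(mu + sigma * np.sqrt(2.0) * x))) / np.sqrt(np.pi)

# Random sampling reference: brute-force Monte Carlo with many draws
rng = np.random.default_rng(2)
mc_avg = kessler(np.exp(mu + sigma * rng.standard_normal(2_000_000))).mean()
```

The quadrature average uses 64 deterministic evaluations, whereas the Monte Carlo reference needs millions of draws to get a comparably stable answer; that sample-size gap is the point being made in the abstract, though the exact factor depends on the microphysics formula and PDF used.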
NASA Technical Reports Server (NTRS)
Creamean, J. M.; Ault, A. P.; White, A. B.; Neiman, P. J.; Ralph, F. M.; Minnis, Patrick; Prather, K. A.
2014-01-01
Aerosols that serve as cloud condensation nuclei (CCN) and ice nuclei (IN) have the potential to profoundly influence precipitation processes. Furthermore, changes in orographic precipitation have broad implications for reservoir storage and flood risks. As part of the CalWater I field campaign (2009-2011), the impacts of aerosol sources on precipitation were investigated in the California Sierra Nevada. In 2009, the precipitation collected on the ground was influenced by both local biomass burning (up to 79% of the insoluble residues found in precipitation) and long-range transported dust and biological particles (up to 80% combined); in 2010, mostly by local sources of biomass burning and pollution (30-79% combined); and in 2011, mostly by long-range transport from distant sources (up to 100% dust and biological). Although vast differences in the sources of residues were observed from year to year, dust and biological residues were omnipresent (on average, 55% of the total residues combined) and were associated with storms consisting of deep convective cloud systems and larger quantities of precipitation initiated in the ice phase. Further, biological residues were dominant during storms with relatively warm cloud temperatures (up to -15 °C), suggesting these particles were more efficient IN compared to mineral dust. On the other hand, lower percentages of residues from local biomass burning and pollution were observed (on average 31% and 9%, respectively), yet these residues potentially served as CCN at the base of shallow cloud systems when precipitation quantities were low. The direct connection of the source of aerosols within clouds and precipitation type and quantity can be used in models to better assess how local emissions versus long-range transported dust and biological aerosols play a role in impacting regional weather and climate, ultimately with the goal of more accurate predictive weather forecast models and water resource management.
NASA Astrophysics Data System (ADS)
Creamean, J.; Ault, A. P.; White, A. B.; Neiman, P. J.; Minnis, P.; Prather, K. A.
2014-12-01
Aerosols that serve as cloud condensation nuclei (CCN) and ice nuclei (IN) have the potential to profoundly influence precipitation processes. Furthermore, changes in orographic precipitation have broad implications for reservoir storage and flood risks. As part of the CalWater I field campaign (2009-2011), the impacts of aerosol sources on precipitation were investigated in the California Sierra Nevada Mountains. In 2009, the precipitation collected on the ground was influenced by both local biomass burning and long-range transported dust and biological particles, while in 2010, by mostly local sources of biomass burning and pollution, and in 2011 by mostly long-range transport of dust and biological particles from distant sources. Although vast differences in the sources of residues were observed from year-to-year, dust and biological residues were omnipresent (on average, 55% of the total residues combined) and were associated with storms consisting of deep convective cloud systems and larger quantities of precipitation initiated in the ice phase. Further, biological residues were dominant during storms with relatively warm cloud temperatures (up to -15°C), suggesting biological components were more efficient IN than mineral dust. On the other hand, when precipitation quantities were lower, local biomass burning and pollution residues were observed (on average 31% and 9%, respectively), suggesting these residues potentially served as CCN at the base of shallow cloud systems and that lower level polluted clouds of storm systems produced less precipitation than non-polluted (i.e., marine) clouds. 
The direct connection of the sources of aerosols within clouds and precipitation type and quantity can be used in models to better assess how local emissions versus long-range transported dust and biological aerosols play a role in impacting regional weather and climate, ultimately with the goal of more accurate predictive weather forecast models and water resource management.
Acceleration of plates using non-conventional explosives heavily-loaded with inert materials
NASA Astrophysics Data System (ADS)
Loiseau, J.; Petel, O. E.; Huneault, J.; Serge, M.; Frost, D. L.; Higgins, A. J.
2014-05-01
The detonation behavior of high explosives containing quantities of dense additives has been previously investigated, with the observation that such systems depart dramatically from the approximately "gamma law" behavior typical of conventional explosives due to momentum transfer and thermalization between particles and detonation products. However, the influence of this non-ideal detonation behavior on the divergence speed of plates has been less thoroughly studied, and the existing literature suggests that the effect of dense additives cannot be explained solely through the straightforward application of the Gurney method with energy and density averaging of the explosive. In the current study, the acceleration history and terminal velocity of aluminum flyers launched by packed beds of granular material saturated by amine-sensitized nitromethane are reported. It was observed that terminal flyer velocity scales primarily with the ratio of flyer mass to mass of the explosive component, a fundamental feature of the Gurney method. The velocity decrement from the addition of particles was only 20%-30% relative to the velocity that would result from an equivalent quantity of neat explosive.
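The Gurney scaling invoked here can be made concrete with the standard open-faced sandwich formula, in which the flyer velocity depends on the flyer-to-charge mass ratio M/C and the Gurney energy E of the explosive. The √(2E) value below is a commonly quoted figure for nitromethane and is an assumption for illustration, not a number from the study:

```python
import math

def gurney_open_faced(m_over_c, sqrt_2e):
    """Open-faced sandwich Gurney velocity:
    V = sqrt(2E) / sqrt(((1 + 2n)^3 + 1) / (6(1 + n)) + n), with n = M/C."""
    n = m_over_c
    return sqrt_2e / math.sqrt(((1 + 2 * n) ** 3 + 1) / (6 * (1 + n)) + n)

SQRT_2E = 2.41   # km/s, commonly quoted Gurney velocity for nitromethane (assumed)

ratios = [0.25, 0.5, 1.0, 2.0, 4.0]       # flyer mass / explosive mass
velocities = [gurney_open_faced(n, SQRT_2E) for n in ratios]
```

The monotone fall of velocity with M/C is the "fundamental feature" the abstract refers to: once the additives are accounted for, the flyer speed is controlled mainly by this mass ratio rather than by the details of the non-ideal detonation.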
Drinking patterns and adherence to "low-risk" guidelines among community-residing older adults.
Lewis, Ben; Garcia, Christian C; Nixon, Sara Jo
2018-06-01
Older adults constitute a rapidly expanding proportion of the U.S. population. Contemporary studies note the increasing prevalence of alcohol consumption in this group. Thus, understanding alcohol effects, consumption patterns, and associated risks in aging populations constitutes a critical area of study with increasing public health relevance. Participants (n = 643; 292 women; ages 21-70) were community-residing adult volunteers. Primary measures of interest included four patterns of alcohol consumption: average (oz./day), typical quantity (oz./occasion), frequency (% drinking days), and maximal quantity (oz.). Regression analyses explored associations between these measures, age, and relevant covariates. Subsequent between-group analyses investigated differences between two groups of older adults and a comparator group of younger adults, their adherence to "low-risk" guidelines, and whether alcohol-associated risks differed by age and adherence pattern. Average consumption did not vary by age or differ between age groups. In contrast, markedly higher frequencies and lower quantities of consumption were observed with increasing age. These differences persisted across adherence categories and were evident even in the oldest age group. Exceeding "low-risk" guidelines was associated with greater risk for alcohol-related problems among the older groups. These results emphasize the utility of considering underlying constituent patterns of consumption in older drinkers. Findings highlight difficulties in identifying problem drinking among older adults and contribute to the few characterizations of "risky" drinking patterns in this group. Taken together, our data contribute to literatures of import for the design and enhancement of screening, prevention, and education initiatives directed toward aging adults. Copyright © 2018. Published by Elsevier B.V.
Atmospheric deposition effects on the chemistry of a stream in Northeastern Georgia
Buell, G.R.; Peters, N.E.
1988-01-01
The quantity and quality of precipitation and streamwater were measured from August 1985 through September 1986 in the Brier Creek watershed, a 440-ha drainage in the Southern Blue Ridge Province of northeastern Georgia, to determine stream sensitivity to acidic deposition. Precipitation samples collected at 2 sites had a volume-weighted average pH of 4.40, whereas stream samples collected near the mouth of Brier Creek had a discharge-weighted average pH of 6.70. Computed solute fluxes through the watershed and observed changes in streamwater chemistry during stormflow suggest that cation exchange, mineral weathering, SO4^2- adsorption by the soil, and groundwater discharge to the stream are probable factors affecting neutralization of precipitation acidity. Net solute fluxes for the watershed indicate that, of the precipitation input, > 99% of the H^+, 93% of the NH4^+ and NO3^-, and 77% of the SO4^2- were retained. Sources within the watershed yielded base cations, Cl^-, and HCO3^-, accounting for 84, 47, and 100% of the net transport, respectively. Although streamwater SO4^2- and NO3^- concentrations increased during stormflow, peak concentrations of these anions were much less than average concentrations in the precipitation. This suggests that retention of these solutes occurs even when water residence time is short.
Derivation and precision of mean field electrodynamics with mesoscale fluctuations
NASA Astrophysics Data System (ADS)
Zhou, Hongzhe; Blackman, Eric G.
2018-06-01
Mean field electrodynamics (MFE) facilitates practical modelling of secular, large scale properties of astrophysical or laboratory systems with fluctuations. Practitioners commonly assume wide scale separation between mean and fluctuating quantities, to justify equality of ensemble and spatial or temporal averages. Often however, real systems do not exhibit such scale separation. This raises two questions: (I) What are the appropriate generalized equations of MFE in the presence of mesoscale fluctuations? (II) How precise are theoretical predictions from MFE? We address both by first deriving the equations of MFE for different types of averaging, along with mesoscale correction terms that depend on the ratio of averaging scale to variation scale of the mean. We then show that even if these terms are small, predictions of MFE can still have a significant precision error. This error has an intrinsic contribution from the dynamo input parameters and a filtering contribution from differences in the way observations and theory are projected through the measurement kernel. Minimizing the sum of these contributions can produce an optimal scale of averaging that makes the theory maximally precise. The precision error is important to quantify when comparing to observations because it quantifies the resolution of predictive power. We exemplify these principles for galactic dynamos, comment on broader implications, and identify possibilities for further work.
Line transport in turbulent atmosphere
NASA Astrophysics Data System (ADS)
Nikoghossian, Artur
We consider the spectral line transfer in turbulent atmospheres with a spatially correlated velocity field. Both finite and semi-infinite media are treated. In finding the observed intensities we first deal with the problem of determining the mean intensity of radiation emerging from the medium for a fixed value of the turbulent velocity at its boundary. A new approach proposed for solving this problem is based on the invariant imbedding technique, which yields the solution of the proper problems for a family of media of different optical thicknesses and allows tackling different kinds of inhomogeneous problems. The dependence of the line profile, integral intensity, and line width on the mean correlation length and the average value of the hydrodynamic velocity is studied. It is shown that the transition from a micro-turbulent regime to a macro-turbulent one occurs within a comparatively narrow range of variation in the correlation length. The diffuse reflection of the line radiation from a one-dimensional semi-infinite turbulent atmosphere is examined. In addition to the observed spectral line profile, statistical averages describing the diffusion process in the atmosphere (mean number of scattering events, average time spent by a diffusing photon in the medium) are determined. The dependence of these quantities on the average hydrodynamic velocity and correlation coefficient is studied.
Line Transport in Turbulent Atmospheres
NASA Astrophysics Data System (ADS)
Nikoghossian, A. G.
2017-07-01
The spectral line transfer in turbulent atmospheres with a spatially correlated velocity field is examined. Both finite and semi-infinite media are treated. In finding the observed intensities we first deal with the problem of determining the mean intensity of radiation emerging from the medium for a fixed value of the turbulent velocity at its boundary. A new approach proposed for solving this problem is based on the invariant imbedding technique, which yields the solution of the proper problems for a family of media of different optical thicknesses and allows tackling different kinds of inhomogeneous problems. The dependence of the line profile, integral intensity, and line width on the mean correlation length and the average value of the hydrodynamic velocity is studied. It is shown that the transition from a micro-turbulent regime to a macro-turbulent one occurs within a comparatively narrow range of variation in the correlation length. Ambartsumian's principle of invariance is used to solve the problem of diffuse reflection of the line radiation from a one-dimensional semi-infinite turbulent atmosphere. In addition to the observed spectral line profile, statistical averages describing the diffusion process in the atmosphere (mean number of scattering events, average time spent by a diffusing photon in the medium) are determined. The dependence of these quantities on the average hydrodynamic velocity and correlation coefficient is studied.
This paper addresses the general problem of estimating at arbitrary locations the value of an unobserved quantity that varies over space, such as ozone concentration in air or nitrate concentrations in surface groundwater, on the basis of approximate measurements of the quantity ...
Calculating Time-Integral Quantities in Depletion Calculations
Isotalo, Aarno
2016-06-02
A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as a part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletion algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation without a need for a reference solution.
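The idea of carrying time integrals as extra unknowns in the depletion solve can be sketched on a one-nuclide toy problem: augment the Bateman matrix with a row that accumulates ∫ λN dt, and the same matrix-exponential step that produces end-of-step densities produces the integral for free. This is a schematic illustration of the augmented-system idea under assumed toy data, not the paper's implementation (which targets CRAM and weighted sums over many nuclides):

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Matrix exponential by scaling-and-squaring with a Taylor series
    (adequate for small, well-scaled matrices like this toy example)."""
    s = max(0, int(np.ceil(np.log2(max(1e-16, np.abs(M).sum())))))
    A = M / 2 ** s
    E, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    for _ in range(s):
        E = E @ E
    return E

# Single decaying nuclide augmented with a "tally nuclide" I:
#   dN/dt = -lam*N,  dI/dt = lam*N   =>   I(t) = N0*(1 - exp(-lam*t))
lam, N0, t = 0.3, 1.0, 5.0
M = np.array([[-lam, 0.0],
              [ lam, 0.0]])          # last row accumulates the integral

N_end, I_end = expm_taylor(M * t) @ np.array([N0, 0.0])
# Step-average density follows directly: (1/t) * integral of N dt
step_avg_N = I_end / (lam * t)
```

Because the tally row has zero diagonal, the augmented matrix is solved by exactly the same machinery as the original system, which is why the integrals inherit the accuracy of the chosen depletion algorithm.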
NASA Technical Reports Server (NTRS)
Khurana, Krishan K.; Kivelson, Margaret G.
1993-01-01
The averaged angular velocity of plasma from magnetic observations is evaluated using the plasma outflow rate as a parameter. New techniques are developed to calculate the normal and azimuthal components of the magnetic field in and near the plasma sheet in a plasma sheet coordinate system. The revised field components differ substantially from the quantities used in previous analyses. With the revised field values, it appears that during the Voyager 2 flyby, for an outflow rate of 2.5 x 10 exp 29 amu/s, the observed magnetic torque may be sufficient to keep the plasma in corotation to radial distances of 50 Rj in the postmidnight quadrant.
24 CFR Appendix I to Subpart B of... - Definition of Acoustical Quantities
Code of Federal Regulations, 2010 CFR
2010-04-01
... micropascals. Day-night average sound level, abbreviated as DNL, and symbolized mathematically as Ldn is... day-night average sound level produced by the loud impulsive sounds shall have 8 decibels added to it...
An inverse problem of determining the implied volatility in option pricing
NASA Astrophysics Data System (ADS)
Deng, Zui-Cha; Yu, Jian-Ning; Yang, Liu
2008-04-01
In the Black-Scholes world there is the important quantity of volatility which cannot be observed directly but has a major impact on the option value. In practice, traders usually work with what is known as implied volatility which is implied by option prices observed in the market. In this paper, we use an optimal control framework to discuss an inverse problem of determining the implied volatility when the average option premium, namely the average value of option premium corresponding with a fixed strike price and all possible maturities from the current time to a chosen future time, is known. The issue is converted into a terminal control problem by Green function method. The existence and uniqueness of the minimum of the control functional are addressed by the optimal control method, and the necessary condition which must be satisfied by the minimum is also given. The results obtained in the paper may be useful for those who engage in risk management or volatility trading.
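For orientation, the forward problem this inverse problem builds on can be sketched with the market-practice point-wise inversion: the Black-Scholes price is monotone in σ, so a single option's implied volatility follows from a root-find. This is the textbook construction, not the paper's optimal-control treatment of the averaged premium; the market parameters below are illustrative:

```python
import math

def bs_call(S, K, r, sigma, T):
    """Black-Scholes European call price."""
    sqT = sigma * math.sqrt(T)
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / sqT
    d2 = d1 - sqT
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # normal CDF
    return S * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

def implied_vol(price, S, K, r, T, lo=1e-6, hi=5.0, tol=1e-12):
    """Invert the call price for sigma by bisection (price is monotone
    increasing in sigma, so bisection is guaranteed to converge)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, r, mid, T) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check with assumed market parameters
true_sigma = 0.25
p = bs_call(100.0, 95.0, 0.03, true_sigma, 0.5)
sigma_hat = implied_vol(p, 100.0, 95.0, 0.03, 0.5)
```

The paper's contribution is precisely that it replaces this one-price-at-a-time inversion with a well-posed reconstruction from the premium averaged over all maturities, which regularizes the problem.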
Jeon, Jae-Hyung; Metzler, Ralf
2010-02-01
Motivated by subdiffusive motion of biomolecules observed in living cells, we study the stochastic properties of a non-Brownian particle whose motion is governed by either fractional Brownian motion or the fractional Langevin equation and restricted to a finite domain. We investigate by analytic calculations and simulations how time-averaged observables (e.g., the time-averaged mean-squared displacement and displacement correlation) are affected by spatial confinement and dimensionality. In particular, we study the degree of weak ergodicity breaking and scatter between different single trajectories for this confined motion in the subdiffusive domain. The general trend is that deviations from ergodicity are decreased with decreasing size of the movement volume and with increasing dimensionality. We define the displacement correlation function and find that this quantity shows distinct features for fractional Brownian motion, fractional Langevin equation, and continuous time subdiffusion, such that it appears an efficient measure to distinguish these different processes based on single-particle trajectory data.
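The time-averaged mean-squared displacement used here is defined for a single trajectory as δ²(Δ) = (1/(T−Δ)) ∫ [x(t+Δ) − x(t)]² dt. A minimal discrete-time sketch on an ordinary Brownian trajectory (for which the scaling exponent is 1; the subdiffusive processes studied in the paper would give an exponent below 1):

```python
import numpy as np

def time_averaged_msd(x, lags):
    """Time-averaged MSD of a single trajectory:
    delta2(lag) = mean over t of [x(t+lag) - x(t)]^2."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

# Ordinary Brownian trajectory with unit-variance increments
rng = np.random.default_rng(3)
x = np.cumsum(rng.standard_normal(100_000))

lags = np.array([1, 2, 4, 8, 16, 32])
msd = time_averaged_msd(x, lags)

# Scaling exponent alpha from delta2(lag) ~ lag^alpha; Brownian motion: 1
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]
```

For ergodic processes such as Brownian motion and fractional Langevin dynamics, this time average converges to the ensemble MSD; the scatter of δ² between individual trajectories is exactly the weak-ergodicity-breaking diagnostic the abstract discusses.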
Evaluation of Ultra High Pressure (UHP) Firefighting in a Room-and-Contents Fire
2017-03-15
[List of figures from the report: Burn Room and Hangar Temperature Prior to Ignition; Effect of Temperature on Normalized...; Maximum Average Temperature and Heat Flux; Effect of Maximum Average Aspirated Ceiling Temperature; Effect of Maximum Average Floor Heat Flux on Extinguishment Quantity]
42 CFR 414.904 - Average sales price as the basis for payment.
Code of Federal Regulations, 2014 CFR
2014-10-01
... subsection (c), the term billing unit means the identifiable quantity associated with a billing and payment code, as established by CMS. (c) Single source drugs—(1) Average sales price. The average sales price... report as required by section 623(c) of the Medicare Prescription Drug, Improvement, and Modernization...
NASA Astrophysics Data System (ADS)
Crocombette, Jean-Paul; Van Brutzel, Laurent; Simeone, David; Luneville, Laurence
2016-06-01
Displacement cascades have been calculated in two ordered alloys (Ni3Al and UO2) in the molecular dynamics framework using the CMDC (Cell Molecular Dynamics for Cascade) code (J.-P. Crocombette and T. Jourdan, Nucl. Instrum. Meth. B 352, 9 (2015)) for energies ranging between 0.1 and 580 keV. The defect production has been compared to the prediction of the NRT (Norgett, Robinson and Torrens) standard. One observes a decrease with energy of the number of defects relative to the NRT prediction at intermediate energies but, unlike what is commonly observed in elemental solids, the number of produced defects does not always settle into a linear variation with ballistic energy at high energies. The fragmentation of the cascade into subcascades has been studied through the analysis of surviving defect pockets. It appears that the commonly assumed equivalence between linearity of defect production and division into subcascades does not hold in general for alloys. We calculate the average number of subcascades and the average number of defects per subcascade as functions of ballistic energy, and find an unexpected variety of behaviors for these two average quantities above the threshold for subcascade formation.
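The NRT standard referred to here is a closed-form estimate of the number of stable Frenkel pairs produced by a given damage energy. A minimal sketch, assuming an illustrative displacement threshold energy of 40 eV (the actual value is material-dependent):

```python
def nrt_defects(damage_energy_ev, threshold_ev=40.0):
    """Number of Frenkel pairs predicted by the NRT standard.

    N = 0.8 * T_dam / (2 * E_d) above ~2.5 * E_d, one pair in the
    window E_d <= T_dam < 2.5 * E_d, and zero below the threshold E_d.
    """
    if damage_energy_ev < threshold_ev:
        return 0
    if damage_energy_ev < 2.5 * threshold_ev:
        return 1
    return 0.8 * damage_energy_ev / (2.0 * threshold_ev)

# 100 keV of damage energy with E_d = 40 eV gives 1000 NRT displacements
n = nrt_defects(100_000.0)
```

Comparing MD-surviving defect counts against this linear-in-energy baseline is what reveals the efficiency dips and the non-linear high-energy behavior described in the abstract.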
Hahn, Carole J. [Univ. of Colorado, Boulder, CO (United States). Cooperative Inst. for Research in Environmental Sciences (CIRES); Warren, Stephen G. [Department of Atmospheric Sciences, University of Colorado, Boulder, CO; London, Julius [Department of Astrophysical, Planetary, and Atmospheric Sciences, University of Colorado, Boulder, CO
1994-01-01
Routine, synoptic surface weather reports from ships and land stations over the entire globe, for the 10-year period December 1981 through November 1991, were processed for total cloud cover and the frequencies of occurrence of clear sky, sky obscured due to precipitation, and sky obscured due to fog. Archived data, consisting of various annual, seasonal, and monthly averages, are provided in grid boxes that are typically 2.5° × 2.5° for land and 5° × 5° for ocean. Daytime and nighttime averages are also given separately for each season. Several derived quantities, such as interannual variations and annual and diurnal harmonics, are provided as well. This data set incorporates an improved representation of nighttime cloudiness by utilizing only those nighttime observations for which the illuminance due to moonlight exceeds a specified threshold. This reduction in the night-detection bias increases the computed global average total cloud cover by about 2%. The impact on computed diurnal cycles is even greater, particularly over the oceans, where it is found (in contrast to previous surface-based climatologies) that cloudiness is often greater at night than during the day.
NASA Technical Reports Server (NTRS)
Golub, L.; Krieger, A. S.; Vaiana, G. S.
1976-01-01
Observations of X-ray bright points (XBP) over a six-month interval in 1973 show significant variations in both the number density of XBP as a function of heliographic longitude and in the full-sun average number of XBP from one rotation to the next. The observed increases in XBP emergence are estimated to be equivalent to several large active regions emerging per day for several months. The number of XBP emerging at high latitudes varies in phase with the low-latitude variation and reaches a maximum approximately simultaneous with a major outbreak of active regions. The quantity of magnetic flux emerging in the form of XBP at high latitudes alone is estimated to be as large as the contribution from all active regions.
Equilibration and analysis of first-principles molecular dynamics simulations of water
NASA Astrophysics Data System (ADS)
Dawson, William; Gygi, François
2018-03-01
First-principles molecular dynamics (FPMD) simulations based on density functional theory are becoming increasingly popular for the description of liquids. In view of the high computational cost of these simulations, the choice of an appropriate equilibration protocol is critical. We assess two methods of estimation of equilibration times using a large dataset of first-principles molecular dynamics simulations of water. The Gelman-Rubin potential scale reduction factor [A. Gelman and D. B. Rubin, Stat. Sci. 7, 457 (1992)] and the marginal standard error rule heuristic proposed by White [Simulation 69, 323 (1997)] are evaluated on a set of 32 independent 64-molecule simulations of 58 ps each, amounting to a combined cumulative time of 1.85 ns. The availability of multiple independent simulations also allows for an estimation of the variance of averaged quantities, both within MD runs and between runs. We analyze atomic trajectories, focusing on correlations of the Kohn-Sham energy, pair correlation functions, number of hydrogen bonds, and diffusion coefficient. The observed variability across samples provides a measure of the uncertainty associated with these quantities, thus facilitating meaningful comparisons of different approximations used in the simulations. We find that the computed diffusion coefficient and average number of hydrogen bonds are affected by a significant uncertainty in spite of the large size of the dataset used. A comparison with classical simulations using the TIP4P/2005 model confirms that the variability of the diffusivity is also observed after long equilibration times. Complete atomic trajectories and simulation output files are available online for further analysis.
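The Gelman-Rubin potential scale reduction factor used here compares the between-chain and within-chain variances of a quantity across independent runs; values close to 1 indicate equilibration. A minimal Python sketch of the basic statistic (the cited reference defines refinements, such as degrees-of-freedom corrections, that are omitted here):

```python
import random
import statistics

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for m equal-length chains."""
    n = len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    w = statistics.fmean(statistics.variance(c) for c in chains)  # within-chain variance
    b = n * statistics.variance(means)                            # between-chain variance
    var_plus = (n - 1) / n * w + b / n  # pooled estimate of the target variance
    return (var_plus / w) ** 0.5

# Example: four well-mixed "chains" drawn from the same distribution
rng = random.Random(42)
chains = [[rng.gauss(0.0, 1.0) for _ in range(1000)] for _ in range(4)]
r_hat = gelman_rubin(chains)  # close to 1 for converged chains
```

In an FPMD setting, each chain would be the time series of a scalar observable (e.g., the Kohn-Sham energy) from one independent simulation, and equilibration would be declared once R-hat falls below a chosen cutoff.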
[Response of forest bird communities to forest gap in winter in southwestern China].
Zhao, Dong-Dong; Wu, Ying-Huan; Lu, Zhou; Jiang, Guang-Wei; Zhou, Fang
2013-06-01
Although forest gap ecology is an important field of study, research remains limited. Using plot surveys and point-count observations, we studied the response of birds to forest gaps in winter, as well as bird distribution patterns in forest gaps and intact canopies, in a northern tropical monsoon forest of southwestern China from November 2011 to February 2012 in the Fangcheng Golden Camellia National Nature Reserve, Guangxi. The regression equation of bird species diversity on habitat factors was Y1 = 0.611 + 0.002X13 + 0.043X2 + 0.002X5 - 0.003X8 + 0.006X10 + 0.008X1, and the regression equation of the bird species dominance index on habitat factors was Y3 = 0.533 + 0.001X13 + 0.019X2 + 0.002X3 - 0.017X4 + 0.002X1. There were 45 bird species (2 orders and 13 families) recorded in forest gaps, accounting for 84.9% of all birds (n = 45), with an average of 9.6 species (range: 2-22). Thirty-nine bird species (5 orders and 14 families) were recorded in non-gap areas, accounting for 73.6% of all birds (n = 39), with an average of 5.3 species (range: 1-12). These results suggested that gap size, average arbor height (within 10 m of the gap margin), arbor quantity (within 10 m of the gap margin), shrub quantity (within 10 m of the gap margin), average herb coverage (within 1 m of the gap margin), and bare-land ratio were the key forest gap factors influencing bird diversity. On the whole, bird diversity in forest gaps was greater than in the intact canopy. Spatial distribution patterns within the forest gaps were also observed in the bird community: most birds foraged in the middle and canopy layers of the vertical stratification, and in the horizontal stratification more birds were found in the zones nearer the gap margin. Feeding niche differentiation was suggested as the main reason for these distribution patterns.
Use of bimodal carbon distribution in compacts for producing metallic iron nodules
Iwasaki, Iwao
2012-10-16
A method for use in production of metallic iron nodules comprising providing a reducible mixture into a hearth furnace for the production of metallic iron nodules, where the reducible mixture comprises a quantity of reducible iron bearing material, a quantity of first carbonaceous reducing material of a size less than about 28 mesh in an amount between about 65 percent and about 95 percent of a stoichiometric amount necessary for complete iron reduction of the reducible iron bearing material, and a quantity of second carbonaceous reducing material, with an average particle size greater than the average particle size of the first carbonaceous reducing material and a size between about 3 mesh and about 48 mesh, in an amount between about 20 percent and about 60 percent of a stoichiometric amount necessary for complete iron reduction of the reducible iron bearing material.
40 CFR 61.356 - Recordkeeping requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., annual average flow-weighted benzene concentration, and annual benzene quantity. (2) For each waste... measurements, calculations, and other documentation used to determine that the continuous flow of process... benzene concentrations in the waste, the annual average flow-weighted benzene concentration of the waste...
Nucleon spin-averaged forward virtual Compton tensor at large Q 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, Richard J.; Paz, Gil
The nucleon spin-averaged forward virtual Compton tensor determines important physical quantities such as electromagnetically-induced mass differences of nucleons and two-photon exchange contributions in hydrogen spectroscopy. It depends on two kinematic variables.
Witt, Emitt C; Wronkiewicz, David J; Shi, Honglan
2013-01-01
Fugitive road dust collection for chemical analysis and interpretation has been limited by the quantity and representativeness of samples. Traditional methods of fugitive dust collection generally focus on point-collections that limit data interpretation to a small area or require the investigator to make gross assumptions about the origin of the sample collected. These collection methods often produce a limited quantity of sample that may hinder efforts to characterize the samples by multiple geochemical techniques, preserve a reference archive, and provide a spatially integrated characterization of the road dust health hazard. To achieve a "better sampling" for fugitive road dust studies, a cyclonic fugitive dust (CFD) sampler was constructed and tested. Through repeated and identical sample collection routes at two collection heights (50.8 and 88.9 cm above the road surface), the products of the CFD sampler were characterized using particle size and chemical analysis. The average particle size collected by the cyclone was 17.9 μm, whereas particles collected by a secondary filter were 0.625 μm. No significant difference was observed between the two sample heights tested and duplicates collected at the same height; however, greater sample quantity was achieved at 50.8 cm above the road surface than at 88.9 cm. The cyclone effectively removed 94% of the particles >1 μm, which substantially reduced the loading on the secondary filter used to collect the finer particles; therefore, suction is maintained for longer periods of time, allowing for an average sample collection rate of about 2 g mi⁻¹. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
NASA Astrophysics Data System (ADS)
Sanders, J. S.; Fabian, A. C.; Russell, H. R.; Walker, S. A.
2018-02-01
We analyse Chandra X-ray Observatory observations of a set of galaxy clusters selected by the South Pole Telescope using a new publicly available forward-modelling projection code, MBPROJ2, assuming hydrostatic equilibrium. By fitting a power law plus constant entropy model we find no evidence for a central entropy floor in the lowest entropy systems. A model of the underlying central entropy distribution shows a narrow peak close to zero entropy which accounts for 60 per cent of the systems, and a second broader peak around 130 keV cm². We look for evolution over the 0.28-1.2 redshift range of the sample in density, pressure, entropy and cooling time at 0.015 R500 and at 10 kpc radius. By modelling the evolution of the central quantities with a simple model, we find no evidence for a non-zero slope with redshift. In addition, a non-parametric sliding median shows no significant change. The fraction of cool-core clusters with central cooling times below 2 Gyr is consistent above and below z = 0.6 (~30-40 per cent). Both by comparing the median thermodynamic profiles, centrally biased towards cool cores, in two redshift bins, and by modelling the evolution of the unbiased average profile as a function of redshift, we find no significant evolution beyond self-similar scaling in any of our examined quantities. Our average modelled radial density, entropy and cooling-time profiles appear as power laws with breaks around 0.2 R500. The dispersion in these quantities rises inwards of this radius to around 0.4 dex, although some of this scatter can be fitted by a bimodal model.
24 CFR Appendix I to Subpart B of... - Definition of Acoustical Quantities
Code of Federal Regulations, 2011 CFR
2011-04-01
... National Standard Specification for Type 1 Sound Level Meters S1.4-1971. Fast time-averaging and A...), somewhat as is the ear. With fast time averaging the sound level meter responds particularly to recent... (iii) The maximum sound level obtained with fast averaging time of a sound level meter exceeds the...
Scaling laws and fluctuations in the statistics of word frequencies
NASA Astrophysics Data System (ADS)
Gerlach, Martin; Altmann, Eduardo G.
2014-11-01
In this paper, we combine statistical analysis of written texts and simple stochastic models to explain the appearance of scaling laws in the statistics of word frequencies. The average vocabulary of an ensemble of fixed-length texts is known to scale sublinearly with the total number of words (Heaps' law). Analyzing the fluctuations around this average in three large databases (Google-ngram, English Wikipedia, and a collection of scientific articles), we find that the standard deviation scales linearly with the average (Taylor's law), in contrast to the prediction of decaying fluctuations obtained using simple sampling arguments. We explain both scaling laws (Heaps' and Taylor's) by modeling the usage of words as a Poisson process with a fat-tailed distribution of word frequencies (Zipf's law) and topic-dependent frequencies of individual words (as in topic models). Considering topical variations leads to quenched averages, turns the vocabulary size into a non-self-averaging quantity, and explains the empirical observations. For the numerous practical applications relying on estimations of vocabulary size, our results show that uncertainties remain large even for long texts. We show how to account for these uncertainties in measurements of the lexical richness of texts with different lengths.
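The sampling argument underlying Heaps' law can be illustrated by drawing words independently from a Zipf-distributed frequency list and counting distinct types. The sketch below uses illustrative parameters and omits the topic-dependent frequencies that the paper adds; it shows the sublinear growth of vocabulary with text length:

```python
import random

def zipf_weights(n_types, alpha=1.0):
    """Normalized Zipf frequencies for a ranked vocabulary of n_types words."""
    w = [1.0 / k ** alpha for k in range(1, n_types + 1)]
    s = sum(w)
    return [x / s for x in w]

def vocabulary_size(text_length, weights, rng):
    """Number of distinct word types in a text sampled i.i.d. from the weights."""
    types = range(len(weights))
    return len(set(rng.choices(types, weights=weights, k=text_length)))

rng = random.Random(0)
w = zipf_weights(5000)
sizes = [vocabulary_size(n, w, rng) for n in (1000, 10000)]
# Heaps' law: the 10x longer text has far fewer than 10x as many distinct types
```

Repeating the sampling many times and measuring the standard deviation of the vocabulary size across repetitions would probe the fluctuation (Taylor's law) side of the analysis.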
ERIC Educational Resources Information Center
Galambos, Nancy L.; Howard, Andrea L.; Maggs, Jennifer L.
2011-01-01
Covariations of self-reported sleep quantity (duration) and quality (disturbances) with affective, stressful, academic, and social experiences across the first year of university in 187 Canadian students (M age=18.4) were examined with multilevel models. Female students reported sleeping fewer hours on average than did male students. In months…
ERIC Educational Resources Information Center
Beddard, Godfrey S.
2011-01-01
Thermodynamic quantities such as the average energy, heat capacity, and entropy are calculated using a Monte Carlo method based on the Metropolis algorithm. This method is illustrated with reference to the harmonic oscillator but is particularly useful when the partition function cannot be evaluated; an example using a one-dimensional spin system…
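A minimal version of the approach described, for the classical one-dimensional harmonic oscillator where the exact answer is known from equipartition, might look like the following (step size and sample count are illustrative):

```python
import math
import random

def metropolis_average_energy(beta, steps=200_000, step_size=1.0, seed=0):
    """Metropolis estimate of the average potential energy <U> of a classical
    1D harmonic oscillator with U(x) = x^2 / 2 at inverse temperature beta.
    Equipartition gives the exact answer <U> = 1 / (2 * beta)."""
    rng = random.Random(seed)
    x, u = 0.0, 0.0
    total = 0.0
    for _ in range(steps):
        x_new = x + rng.uniform(-step_size, step_size)
        u_new = 0.5 * x_new * x_new
        # accept the move with probability min(1, exp(-beta * dU))
        if u_new <= u or rng.random() < math.exp(-beta * (u_new - u)):
            x, u = x_new, u_new
        total += u
    return total / steps

e_avg = metropolis_average_energy(beta=2.0)  # exact value: 0.25
```

The same sampling loop works when the partition function cannot be evaluated, which is the point emphasized in the abstract: only energy differences enter the acceptance rule.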
Alcohol Consumption and Long-Term Labor Market Outcomes.
Böckerman, Petri; Hyytinen, Ari; Maczulskij, Terhi
2017-03-01
This paper examines whether alcohol consumption is related to long-term labor market outcomes. We use twin data for Finnish men and women matched to register-based individual information on employment and earnings. The twin data allow us to account for shared environmental and genetic factors. The quantity of alcohol consumption was measured by weekly average consumption using self-reported data from three surveys (1975, 1981 and 1990). The average of an individual's employment months and earnings was measured in adulthood over the period 1990-2009. The models that account for the shared environmental and genetic factors reveal that former drinkers and heavy drinkers both have almost 20% lower earnings compared with moderate drinkers. On average, former drinkers work annually approx. 1 month less over the 20-year observation period. These associations are robust to the use of covariates, such as education, pre-existing health endowment and smoking. Copyright © 2015 John Wiley & Sons, Ltd.
A mechanism for the production of ultrafine particles from concrete fracture.
Jabbour, Nassib; Rohan Jayaratne, E; Johnson, Graham R; Alroe, Joel; Uhde, Erik; Salthammer, Tunga; Cravigan, Luke; Faghihi, Ehsan Majd; Kumar, Prashant; Morawska, Lidia
2017-03-01
While the crushing of concrete gives rise to large quantities of coarse dust, it is not widely recognized that this process also emits significant quantities of ultrafine particles. These particles affect not just environments near construction activities but entire urban areas. The origin of these ultrafine particles is uncertain, as existing theories do not support their production by mechanical processes. We propose a hypothesis for this observation based on the volatilisation of materials at the concrete fracture interface. The results from this study confirm that mechanical methods can produce ultrafine particles (UFP) from concrete and that the particles are volatile. The ultrafine mode was only observed during concrete fracture, producing particle size distributions with average count median diameters of 27, 39 and 49 nm for the three tested concrete samples. Further volatility measurements found that the particles were highly volatile, showing a 60-95% reduction in the volume fraction remaining by 125 °C. An analysis of the volatile fraction remaining found that different volatile materials are responsible for particle production in the different samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
Marine debris contamination along undeveloped tropical beaches from northeast Brazil.
Santos, Isaac R; Friedrich, Ana Cláudia; Ivar do Sul, Juliana Assunção
2009-01-01
We hypothesize that floating debris leaving polluted coastal bays accumulates on nearby pristine beaches. We examined the composition, quantities, and distribution of marine debris along approximately 150 km of relatively undeveloped tropical beaches in Costa do Dendê (Bahia, Brazil). The study site is located south of Salvador City, the largest urban settlement in NE Brazil. Strong spatial variations were observed. Plastics accounted for 76% of the sampled items, followed by styrofoam (14%). Small plastic fragments resulting from the breakdown of larger items were ubiquitous throughout the area. Because the dominant littoral drift in Bahia is southward, average beach debris densities (9.1 items/m) along Costa do Dendê were threefold higher than densities previously observed north of Salvador City. River-dominated and stable beaches had higher debris quantities than unstable, erosional beaches. Areas immediately south of the major regional embayments (Camamu and Todos os Santos) were the preferential accumulation sites, indicating that rivers draining populous areas are the major source of debris to the study site. Our results provide baseline information for future assessments. Management actions should focus on input prevention at the hydrographic-basin level rather than on beach cleaning services.
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator); Baum, Bryan A.; Charlock, Thomas P.; Green, Richard N.; Lee, Robert B., III; Minnis, Patrick; Smith, G. Louis; Coakley, J. A.; Randall, David R.
1995-01-01
The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and the Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 4 details the advanced CERES techniques for computing surface and atmospheric radiative fluxes (using the coincident CERES cloud property and top-of-the-atmosphere (TOA) flux products) and for averaging the cloud properties and TOA, atmospheric, and surface radiative fluxes over various temporal and spatial scales. CERES attempts to match the observed TOA fluxes with radiative transfer calculations that use as input the CERES cloud products and NOAA National Meteorological Center analyses of temperature and humidity. Slight adjustments in the cloud products are made to obtain agreement of the calculated and observed TOA fluxes. The computed products include shortwave and longwave fluxes from the surface to the TOA. The CERES instantaneous products are averaged on a 1.25-deg latitude-longitude grid, then interpolated to produce global, synoptic maps of TOA fluxes and cloud properties by using 3-hourly, normalized radiances from geostationary meteorological satellites. Surface and atmospheric fluxes are computed by using these interpolated quantities. Clear-sky and total fluxes and cloud properties are then averaged over various scales.
Quantification and characterization of leakage errors
NASA Astrophysics Data System (ADS)
Wood, Christopher J.; Gambetta, Jay M.
2018-03-01
We present a general framework for the quantification and characterization of leakage errors that result when a quantum system is encoded in the subspace of a larger system. To do this we introduce metrics for quantifying the coherent and incoherent properties of the resulting errors and we illustrate this framework with several examples relevant to superconducting qubits. In particular, we propose two quantities, the leakage and seepage rates, which together with average gate fidelity allow for characterizing the average performance of quantum gates in the presence of leakage and show how the randomized benchmarking protocol can be modified to enable the robust estimation of all three quantities for a Clifford gate set.
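As a rough illustration of the leakage idea, the sketch below computes the average population transferred out of a two-dimensional computational subspace by a unitary acting on a three-level system, averaging over computational basis states. This is a simplified proxy only; the paper's definitions average over the subspace and characterize seepage and incoherent contributions separately.

```python
import math

def leakage_rate(U, dim_comp):
    """Average population moved out of the computational subspace by the
    unitary U (a matrix given as nested lists), averaged over the
    computational basis states |0>, ..., |dim_comp - 1>."""
    dim_full = len(U)
    total = 0.0
    for j in range(dim_comp):  # input basis state |j> inside the subspace
        total += sum(abs(U[i][j]) ** 2 for i in range(dim_comp, dim_full))
    return total / dim_comp

# Qutrit example: a rotation by eps between |1> and the leakage level |2>
eps = 0.1
c, s = math.cos(eps), math.sin(eps)
U = [[1.0, 0.0, 0.0],
     [0.0, c, -s],
     [0.0, s, c]]
leak = leakage_rate(U, 2)  # sin(eps)^2 leaked from |1>, averaged over two states
```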
Relationship between dynamical entropy and energy dissipation far from thermodynamic equilibrium.
Green, Jason R; Costa, Anthony B; Grzybowski, Bartosz A; Szleifer, Igal
2013-10-08
Connections between microscopic dynamical observables and macroscopic nonequilibrium (NE) properties have been pursued in statistical physics since Boltzmann, Gibbs, and Maxwell. The simulations we describe here establish a relationship between the Kolmogorov-Sinai entropy and the energy dissipated as heat from a NE system to its environment. First, we show that the Kolmogorov-Sinai or dynamical entropy can be separated into system and bath components and that the entropy of the system characterizes the dynamics of energy dissipation. Second, we find that the average change in the system dynamical entropy is linearly related to the average change in the energy dissipated to the bath. The constant energy and time scales of the bath fix the dynamical relationship between these two quantities. These results provide a link between microscopic dynamical variables and the macroscopic energetics of NE processes.
The influence of price-related point-of-sale promotions on bottle shop purchases of young adults.
Jones, Sandra C; Barrie, Lance; Gregory, Parri; Allsop, Steve; Chikritzhs, Tanya
2015-03-01
To investigate the impact of point-of-sale promotions on product choice, brand choice and purchase quantity of young adults purchasing alcohol for off-premise consumption in Australia. A cross-sectional, interviewer-completed survey was conducted at 24 bottle shops (liquor stores), 12 each in the capital cities of Sydney, New South Wales, and Perth, Western Australia. Participants were 509 adults (18 and over) exiting bottle shops having purchased alcohol. When prompted, 26.5% indicated that there was a special offer, price discount, or special promotion connected with a product they had purchased. Those who participated in point-of-sale promotions purchased a greater quantity of alcohol than those who did not: ready-to-drink products, an average of 11.5 standard drinks (SD) compared with 8.9 SD (t = 1.320, P = 0.190); beer, an average of 26.8 SD compared with 16.4 SD; wine, an average of 16.1 SD compared with 13.8 SD (t = 0.924, P = 0.357). Participation in point-of-sale promotions may be associated with increased purchase quantities, not solely with shifting between brands. There is a need for further research to explore changes in purchase and consumption patterns as a result of the availability of price-based promotions. The results of this study, combined with previous research, suggest that regulators, and marketers, should consider the immediate and cumulative effects of point-of-sale promotions on drinking patterns, particularly those of younger drinkers. © 2014 Australasian Professional Society on Alcohol and other Drugs.
Quantities for assessing high photon doses to the body: a calculational approach.
Eakins, Jonathan S; Ainsbury, Elizabeth A
2018-06-01
Tissue reactions are the most clinically significant consequences of high-dose exposures to ionising radiation. However, currently there is no universally recognized dose quantity that can be used to assess and report generalised risks to individuals following whole body exposures in the high-dose range. In this work, a number of potential dose quantities are presented and discussed, with mathematical modelling techniques employed to compare them and explore when their differences are most or least manifest. The results are interpreted to propose the average (D GRB ) of the absorbed doses to the stomach, small intestine, red bone marrow, and brain as the optimum quantity for informing assessments of risk. A second, maximally conservative dose quantity (D Max ) is also suggested, which places limits on any under-estimates resulting from the adoption of D GRB . The primary aim of this work is to spark debate, with further work required to refine the final choice of quantity or quantities most appropriate for the full range of different potential exposure scenarios.
Epidemic spreading induced by diversity of agents' mobility.
Zhou, Jie; Chung, Ning Ning; Chew, Lock Yue; Lai, Choy Heng
2012-08-01
In this paper, we study the impact of the preference of an individual for public transport on the spread of infectious disease, through a quantity known as the public mobility. Our theoretical and numerical results based on a constructed model reveal that if the average public mobility of the agents is fixed, an increase in the diversity of the agents' public mobility reduces the epidemic threshold, beyond which an enhancement in the rate of infection is observed. Our findings provide an approach to improve the resistance of a society against infectious disease, while preserving the utilization rate of the public transportation system.
Hart, George W.; Kern, Jr., Edward C.
1987-06-09
An apparatus and method are provided for monitoring a plurality of analog AC circuits by sampling the voltage and current waveforms in each circuit at predetermined intervals, converting the analog current and voltage samples to digital format, storing the digitized samples, and using the stored digitized samples to calculate a variety of electrical parameters, some of which are derived from the stored samples. The non-derived quantities are repeatedly calculated and stored over many separate cycles and then averaged; the derived quantities are then calculated at the end of an averaging period. This produces a more accurate reading, especially when averaging over a period in which the power varies over a wide dynamic range. Frequency is measured by timing three cycles of the voltage waveform, using the upward zero-crossing point as the starting point for a digital timer.
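The frequency-measurement step described above (timing three cycles of the waveform between upward zero crossings) can be sketched in a few lines; the sampling rate and the 60 Hz test signal are illustrative assumptions, not values from the patent:

```python
import numpy as np

fs = 10000.0                        # sampling rate, Hz (assumption)
t = np.arange(0, 0.2, 1 / fs)
v = np.sin(2 * np.pi * 60.0 * t)    # 60 Hz test voltage waveform

# indices where the sampled signal crosses zero going upward
up = np.where((v[:-1] < 0) & (v[1:] >= 0))[0]

elapsed = (up[3] - up[0]) / fs      # time spanned by three full cycles
freq = 3.0 / elapsed
print(round(freq))                  # → 60
```

Timing several cycles rather than one reduces the relative error introduced by the one-sample uncertainty at each crossing.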
Annealed importance sampling with constant cooling rate
NASA Astrophysics Data System (ADS)
Giovannelli, Edoardo; Cardini, Gianni; Gellini, Cristina; Pietraperzia, Giangaetano; Chelli, Riccardo
2015-02-01
Annealed importance sampling is a simulation method devised by Neal [Stat. Comput. 11, 125 (2001)] to assign weights to configurations generated by simulated-annealing trajectories. In particular, the equilibrium average of a generic physical quantity can be computed as a weighted average exploiting the weights and the estimates of this quantity associated with the final configurations of the annealed trajectories. Here, we review annealed importance sampling from the perspective of nonequilibrium path-ensemble averages [G. E. Crooks, Phys. Rev. E 61, 2361 (2000)]. The equivalence of Neal's and Crooks' treatments highlights the generality of the method, which goes beyond merely thermal protocols. Furthermore, we show that a temperature schedule based on a constant cooling rate outperforms stepwise cooling schedules and that, for a given elapsed computer time, the performance of annealed importance sampling is, in general, improved by increasing the number of intermediate temperatures.
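As a rough illustration of the weighted-average estimator reviewed here, the sketch below runs toy annealed-importance-sampling trajectories between two 1D Gaussians and estimates the target mean from Neal-style weights; the densities, schedule and Metropolis kernel are all illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy densities: initial N(0, 3^2) at beta = 0, target N(2, 1) at beta = 1 (assumptions)
def logp_init(x):   return -0.5 * x ** 2 / 9.0
def logp_target(x): return -0.5 * (x - 2.0) ** 2
def logp(x, b):     return (1 - b) * logp_init(x) + b * logp_target(x)

betas = np.linspace(0.0, 1.0, 101)   # annealing schedule
x = rng.normal(0.0, 3.0, 2000)       # exact draws from the initial density
logw = np.zeros_like(x)              # accumulated importance log-weights

for b_prev, b in zip(betas[:-1], betas[1:]):
    logw += logp(x, b) - logp(x, b_prev)          # Neal's weight increment
    for _ in range(3):                            # a few Metropolis moves at beta = b
        prop = x + rng.normal(0.0, 1.0, x.size)
        acc = np.log(rng.random(x.size)) < logp(prop, b) - logp(x, b)
        x = np.where(acc, prop, x)

w = np.exp(logw - logw.max())
est_mean = np.sum(w * x) / np.sum(w)  # weighted average of the observable Q(x) = x
print(est_mean)                       # close to the target mean, 2.0
```

The weighted average over the final configurations is consistent for the target-distribution expectation even though the trajectories never fully equilibrate at each temperature.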
Influence of wind speed averaging on estimates of dimethylsulfide emission fluxes
Chapman, E. G.; Shaw, W. J.; Easter, R. C.; ...
2002-12-03
The effect of various wind-speed-averaging periods on calculated DMS emission fluxes is quantitatively assessed. Here, a global climate model and an emission flux module were run in stand-alone mode for a full year. Twenty-minute instantaneous surface wind speeds and related variables generated by the climate model were archived, and corresponding 1-hour-, 6-hour-, daily-, and monthly-averaged quantities calculated. These various time-averaged, model-derived quantities were used as inputs to the emission flux module, and DMS emissions were calculated using two expressions for the mass transfer velocity commonly used in atmospheric models. Results indicate that the time period selected for averaging wind speeds can affect the magnitude of calculated DMS emission fluxes. A number of individual marine cells within the global grid show DMS emission fluxes that are 10-60% higher when emissions are calculated using 20-minute instantaneous model time step winds rather than monthly-averaged wind speeds, and at some locations the differences exceed 200%. Many of these cells are located in the southern hemisphere, where anthropogenic sulfur emissions are low and changes in oceanic DMS emissions may significantly affect calculated aerosol concentrations and aerosol radiative forcing.
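The core averaging effect can be reproduced in a few lines: because common mass-transfer-velocity expressions are nonlinear in wind speed (here a Wanninkhof-style quadratic, used purely for illustration), a flux computed from a time-averaged wind differs from the time-averaged flux, by Jensen's inequality:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 20-minute wind speeds for one month (Weibull shape is an assumption), m/s
u = rng.weibull(2.0, 30 * 72) * 8.0

# Hypothetical quadratic mass-transfer parameterization, k ∝ u^2 (illustration only)
k = lambda w: 0.31 * w ** 2

flux_inst = k(u).mean()      # flux from instantaneous winds, then time-averaged
flux_mean = k(u.mean())      # flux from the time-averaged (e.g. monthly-mean) wind

# Since mean(u^2) > mean(u)^2 whenever the wind varies, averaging the winds
# first always underestimates the flux for a convex k(u).
print(flux_inst > flux_mean)
```

The size of the gap grows with wind-speed variability, consistent with the largest differences occurring in regions of gusty winds.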
Quasi 1-D Analysis of a Circular, Compressible, Turbulent Jet Laden with Water Droplets. Appendix C
NASA Technical Reports Server (NTRS)
2001-01-01
Recent experimental studies indicate that the presence of a small amount of liquid droplets reduces the Overall Sound Pressure Level (OASPL) of a jet. The present study numerically investigates the effect of liquid particles on the overall flow quantities of a heated, compressible round jet. The jet is assumed to be perfectly expanded. A quasi-1D model that uses area-averaged quantities satisfying integral conservation equations was developed for this purpose. Special attention is given to representing the early development region, since it is acoustically important. Approximate velocity and temperature profiles were assumed in this region to evaluate the entrainment rate, and experimental correlations were used to obtain the spreading rate of the shear layer. The base flow thus obtained is then laden with water droplets at the exit of the nozzle. Mass, momentum and energy coupling between the two phases is represented using empirical relations. Droplet size and mass loading are varied to observe their effects on the flow variables.
Unexpected seasonality in quantity and composition of Amazon rainforest air reactivity
Nölscher, A. C.; Yañez-Serrano, A. M.; Wolff, S.; de Araujo, A. Carioca; Lavrič, J. V.; Kesselmeier, J.; Williams, J.
2016-01-01
The hydroxyl radical (OH) removes most atmospheric pollutants from air. The loss frequency of OH radicals due to the combined effect of all gas-phase OH-reactive species is a measurable quantity termed total OH reactivity. Here we present total OH reactivity observations in pristine Amazon rainforest air as a function of season, time of day and height (0–80 m). Total OH reactivity is low during the wet season (10 s−1) and high during the dry season (62 s−1). Comparison with individually measured trace gases reveals strong variation in unaccounted-for OH reactivity, from 5 to 15% missing in wet-season afternoons to mostly unknown (average 79%) during the dry season. During dry-season afternoons isoprene, considered the dominant reagent with OH in rainforests, accounts for only ∼20% of the total OH reactivity. Vertical profiles of OH reactivity are shaped by biogenic emissions, photochemistry and turbulent mixing. The rainforest floor was identified as a significant but poorly characterized source of OH reactivity. PMID:26797390
Atlas of Seasonal Means Simulated by the NSIPP 1 Atmospheric GCM. Volume 17
NASA Technical Reports Server (NTRS)
Suarez, Max J. (Editor); Bacmeister, Julio; Pegion, Philip J.; Schubert, Siegfried D.; Busalacchi, Antonio J. (Technical Monitor)
2000-01-01
This atlas documents the climate characteristics of version 1 of the NASA Seasonal-to-Interannual Prediction Project (NSIPP) Atmospheric General Circulation Model (AGCM). The AGCM includes an interactive land model (the Mosaic scheme) and is part of the NSIPP coupled atmosphere-land-ocean model. The results presented here are based on a 20-year (December 1979-November 1999) "AMIP-style" integration of the AGCM in which the monthly mean sea surface temperature and sea ice are specified from observations. The climate characteristics of the AGCM are compared with the National Centers for Environmental Prediction (NCEP) and European Centre for Medium-Range Weather Forecasts (ECMWF) reanalyses. Other verification data include Special Sensor Microwave/Imager (SSM/I) total precipitable water, the Xie-Arkin estimates of precipitation, and Earth Radiation Budget Experiment (ERBE) measurements of shortwave and longwave radiation. The atlas is organized by season. The basic quantities include seasonal mean global maps and zonal and vertical averages of circulation, variance/covariance statistics, and selected physics quantities.
NASA Astrophysics Data System (ADS)
Lan, Xuemei; Chai, Yuwei; Li, Rui; Li, Bowen; Cheng, Hongbo; Chang, Lei; Chai, Shouxi
2018-01-01
To explore how sowing quantity affects soil temperature and yield of winter wheat under straw-mulching conventional drilling in Northwest China, this study used the cultivar Lantian 26 grown under whole corn-straw mulching with conventional drilling in Changhe town and Pingxiang town. Three sowing quantities were set up, 270 kg/ha (SSMC1), 324 kg/ha (SSMC2) and 405 kg/ha (SSMC3), to examine differences in soil temperature during the growth period of winter wheat and their correlation with yield components. Results showed that the average 0-25 cm soil temperature over the whole growth period changed significantly in both ecological zones as sowing quantity increased, and excessive seeding caused a sharp drop in soil temperature. In Changhe town the highest temperature occurred at the middle sowing quantity (SSMC2), whereas in Pingxiang town it occurred at the lowest sowing quantity (SSMC1). Diurnal variation of soil temperature at all growth stages showed that, with increasing sowing quantity, temperature increased with soil depth in the morning and decreased with depth at noon and in the evening. The average soil temperature under SSMC2 was higher than under the other treatments in both ecological zones over the whole growth period. The maximum diurnal temperature difference for each treatment occurred at noon. Yield responses to increased sowing quantity differed between the two ecological zones: the local conventional sowing quantity of 270 kg/ha (SSMC1) gave the highest yield in Changhe town, while the middle sowing quantity of 324 kg/ha (SSMC2) gave the highest yield in Pingxiang town. Differences in grain number per spike were the main cause of the yield differences among the three treatments.
Correlation analysis showed that the relationships among yield, yield components, growth indices and soil temperature varied between ecological zones. Thousand-kernel weight and grain number per ear were very significantly positively correlated with yield in Changhe town (r = 0.964** and 0.891**), and significantly positively correlated with yield in Pingxiang town (r = 0.708* and 0.718*). In Changhe town there was also a significant positive correlation between harvest index and 10 cm soil temperature (r = 0.763*). In Pingxiang town, grain number per ear showed a significant positive correlation with 15 cm soil temperature (r = 0.671*); 15 cm soil temperature and the average 0-25 cm soil temperature over the whole growth period were significantly negatively correlated with the number of panicles per unit area (r = -0.687* and -0.698*); and there was a very significant negative correlation between plant height and the average 0-25 cm soil temperature over the whole growth period (r = -0.906**). Thus, changes in soil temperature under different sowing quantities indirectly affect the yield of winter wheat.
NASA Astrophysics Data System (ADS)
Shim, J. S.; Rastätter, L.; Kuznetsova, M.; Bilitza, D.; Codrescu, M.; Coster, A. J.; Emery, B. A.; Fedrizzi, M.; Förster, M.; Fuller-Rowell, T. J.; Gardner, L. C.; Goncharenko, L.; Huba, J.; McDonald, S. E.; Mannucci, A. J.; Namgaladze, A. A.; Pi, X.; Prokhorov, B. E.; Ridley, A. J.; Scherliess, L.; Schunk, R. W.; Sojka, J. J.; Zhu, L.
2017-10-01
In order to assess current modeling capability of reproducing storm impacts on total electron content (TEC), we considered quantities such as TEC, TEC changes compared to quiet-time values, and the maximum values of TEC and of the TEC changes during a storm. We compared the quantities obtained from ionospheric models against ground-based GPS TEC measurements during the 2006 AGU storm event (14-15 December 2006) in eight selected longitude sectors. We used 15 simulations obtained from eight ionospheric models, including empirical, physics-based, coupled ionosphere-thermosphere, and data assimilation models. To quantitatively evaluate the performance of the models in TEC prediction during the storm, we calculated skill scores such as RMS error (RMSE), normalized RMS error (NRMSE), the ratio of the modeled to the observed maximum increase (Yield), and the difference between the modeled and observed peak times. Furthermore, to investigate the latitudinal dependence of model performance, the skill scores were calculated for five latitude regions. Our study shows that the RMSE of TEC and of TEC changes for the model simulations ranges from about 3 TECU (total electron content unit, 1 TECU = 10^16 el m^-2) at high latitudes to about 13 TECU at low latitudes, which is larger than the latitudinally averaged GPS TEC error of about 2 TECU. Most model simulations predict TEC better than TEC changes in terms of NRMSE and the difference in peak time, while the opposite holds in terms of Yield. Model performance depends strongly on the quantities considered, the type of metrics used, and the latitude considered.
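The skill scores named above can be written down compactly. The definitions below follow common usage (RMSE; NRMSE; Yield as the ratio of maximum increases; peak-time difference), and the synthetic TEC curves are assumptions for illustration, not data from the study:

```python
import numpy as np

def tec_skill(model, obs, times):
    """Simple skill scores for a modeled vs. observed TEC time series."""
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    nrmse = rmse / np.sqrt(np.mean(obs ** 2))                      # normalized RMS error
    yield_ratio = (model.max() - model[0]) / (obs.max() - obs[0])  # modeled/observed max increase
    dt_peak = times[model.argmax()] - times[obs.argmax()]          # peak-time difference
    return rmse, nrmse, yield_ratio, dt_peak

times = np.arange(24.0)                               # hours
obs = 10 + 8 * np.exp(-((times - 12) ** 2) / 8)       # synthetic storm-time TEC, TECU
model = 10 + 6 * np.exp(-((times - 14) ** 2) / 8)     # model peaks later and lower

rmse, nrmse, yr, dt = tec_skill(model, obs, times)
print(round(yr, 2), dt)   # → 0.75 2.0
```

Here the toy model recovers 75% of the observed maximum increase and peaks two hours late, the two failure modes the Yield and peak-time metrics are designed to separate.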
Abbeddou, Souheila; Hess, Sonja Y; Yakes Jimenez, Elizabeth; Somé, Jérôme W; Vosti, Stephen A; Guissou, Rosemonde M; Ouédraogo, Jean-Bosco; Brown, Kenneth H
2015-12-01
Adherence to supplementation provided during an intervention trial can affect interpretation of study outcomes. We compared different approaches for estimating adherence to small-quantity lipid-based nutrient supplements (SQ-LNS) and dispersible tablets in a randomised clinical trial in Burkina Faso. A total of 2435 children (9-18 months) were randomly assigned to receive daily 20 g SQ-LNS with varying contents of zinc and a dispersible tablet containing 0 or 5 mg zinc. Adherence to SQ-LNS and tablets was assessed for all children through weekly caregiver interviews, and disappearance rate was calculated based on empty and unused packages returned during home visits. Additional adherence data were collected in different randomly selected subgroups of children: 12-h home observations were completed for children 11 and 16 months of age (n = 192) to assess consumption of SQ-LNS and dispersible tablets, and plasma zinc concentration was measured at baseline and 18 months (n = 310). Apparent adherence to SQ-LNS and dispersible tablets differed according to the assessment method used. Average daily caregiver-reported adherence to both SQ-LNS and dispersible tablets was 97 ± 6%. Disappearance rates showed similarly high average weekly adherence (98 ± 4%). In contrast, only 63% and 54% of children at 11 and 16 months, respectively, received SQ-LNS during the 12-h home observation periods, and fewer (32% and 27%) received a tablet. The lack of change in plasma zinc concentration after 9 months of supplementation suggests low adherence to the zinc tablet. Better methods are needed to assess adherence in community-based supplementation trials. © 2014 John Wiley & Sons Ltd.
Magneto-acupuncture stimuli effects on ultraweak photon emission from hands of healthy persons.
Park, Sang-Hyun; Kim, Jungdae; Koo, Tae-Hoi
2009-03-01
We investigated ultraweak photon emission from the hands of 45 healthy persons before and after magneto-acupuncture stimuli. Photon emission was measured using two photomultiplier tubes in the UV and visible spectral range. Several statistical quantities, such as the average intensity, the standard deviation, the delta value, and the degree of asymmetry, were calculated from the measurements of photon emission before and after the magneto-acupuncture stimuli. The distributions of these quantities for the group receiving magneto-acupuncture stimuli were more clearly differentiable than those for the groups without any stimuli and with sham magnets. We also analyzed the effects of magneto-acupuncture stimuli on photon emission through a year-long series of measurements on two subjects. Compared with the group study, the subjects' individual characteristics increased the differences in photon emission before and after magnetic stimuli. For the magnet group, changes in the ultraweak photon emission rates of the hands were detected most conclusively in the averages and standard deviations.
Electric Power Monthly, June 1990
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1990-09-13
The EPM is prepared by the Electric Power Division, Office of Coal, Nuclear, Electric and Alternate Fuels, Energy Information Administration (EIA), Department of Energy. This publication provides monthly statistics at the national, Census division, and State levels for net generation, fuel consumption, fuel stocks, quantity and quality of fuel, electricity sales, and average revenue per kilowatthour of electricity sold. Data on net generation are also displayed at the North American Electric Reliability Council (NERC) region level. Additionally, company- and plant-level information is published in the EPM on the capability of new plants, net generation, fuel consumption, fuel stocks, quantity and quality of fuel, and cost of fuel. Quantity, quality, and cost of fuel data lag the net generation, fuel consumption, fuel stocks, electricity sales, and average revenue per kilowatthour data by 1 month. This difference in reporting appears in the national, Census division, and State level tables. However, at the plant level, all statistics presented are for the earlier month for the purpose of comparison. 40 tabs.
Discretionary salt use in airline meal service.
Wallace, S; Wellman, N S; Dierkes, K E; Johnson, P M
1987-02-01
Salt use in airline meal service was studied through observation of returned meal trays of 932 passengers. Observation and weighing of salt packets on returned trays revealed that 64% of passengers did not salt their airline dinner, while 6% used the entire salt packet, 0.92 gm NaCl (362 mg Na). Average discretionary salt use among the 234 passengers (25%) who added salt was 0.57 gm NaCl (232 mg Na). Estimates of total sodium in the four airline dinners averaged 2.0 gm NaCl (786 mg Na). Laboratory assays of menu items produced by the airline foodservice differed 3% to 19% from estimated values. Sodium content of the four airline dinner menus was similar and did not affect salt use. Discretionary salt use was related to the total amount of entrée consumed but was not affected by the amount of salad consumed. It is postulated that salt use in the "captive" airline situation is predicated on consistent, habitual practices. Lowering sodium consumption in this setting may require alteration in both food preparation methods and quantity of salt presented in the packets.
Three-dimensional scanning force/tunneling spectroscopy at room temperature.
Sugimoto, Yoshiaki; Ueda, Keiichi; Abe, Masayuki; Morita, Seizo
2012-02-29
We simultaneously measured the force and tunneling current in three-dimensional (3D) space on the Si(111)-(7 × 7) surface using scanning force/tunneling microscopy at room temperature. The observables, the frequency shift and the time-averaged tunneling current, were converted to the physical quantities of interest, i.e. the interaction force and the instantaneous tunneling current. Using the same tip, the local density of states (LDOS) was mapped over the same surface area at constant height by measuring the time-averaged tunneling current as a function of the bias voltage at every lateral position. LDOS images at negative sample voltages indicate that the tip apex is covered with Si atoms, which is consistent with the Si-Si covalent bonding mechanism for AFM imaging. A measurement technique for 3D force/current mapping and LDOS imaging over the same surface area using the same tip was thus demonstrated.
NASA Astrophysics Data System (ADS)
Aubert, Dominique; Teyssier, Romain
2010-11-01
We present a set of cosmological simulations with radiative transfer in order to model the reionization history of the universe from z = 18 down to z = 6. Galaxy formation and the associated star formation are followed self-consistently with gas and dark matter dynamics using the RAMSES code, while radiative transfer is performed as a post-processing step using a moment-based method with the M1 closure relation in the ATON code. The latter has been ported to a multiple Graphics Processing Unit (GPU) architecture using the CUDA language together with the MPI library, resulting in an overall acceleration that allows us to tackle radiative transfer problems at a significantly higher resolution than previously reported: 1024^3 + 2 levels of refinement for the hydrodynamic adaptive grid and 1024^3 for the radiative transfer Cartesian grid. We reach a typical acceleration factor close to 100× when compared to the CPU version, allowing us to perform a quarter of a million time steps in less than 3000 GPU hours. We observe good convergence between our different resolution runs for various volume- and mass-averaged quantities such as the neutral fraction, the UV background, and the Thomson optical depth, as long as the effects of finite resolution on the star formation history are properly taken into account. We also show that the neutral fraction depends on the total mass density in a way close to the predictions of photoionization equilibrium, as long as the effects of self-shielding are included in the background radiation model. Although our simulation suite has reached unprecedented mass and spatial resolution, we still fail to reproduce the z ~ 6 constraints on the neutral fraction of hydrogen and the intensity of the UV background. In order to account for unresolved density fluctuations, we have modified our chemistry solver with a simple clumping factor model.
Using our most spatially resolved simulation (12.5 Mpc h^-1 with 1024^3 particles) to calibrate our subgrid model, we have resimulated our largest box (100 Mpc h^-1 with 1024^3 particles) with the modified chemistry, successfully reproducing the observed level of neutral hydrogen in the spectra of high-redshift quasars. We did not, however, reproduce the average photoionization rate inferred from the same observations. We argue that this discrepancy could be partly explained by the fact that the average radiation intensity and the average neutral fraction depend on different regions of the gas density distribution, so that one quantity cannot simply be deduced from the other.
Reliability of a store observation tool in measuring availability of alcohol and selected foods.
Cohen, Deborah A; Schoeff, Diane; Farley, Thomas A; Bluthenthal, Ricky; Scribner, Richard; Overton, Adrian
2007-11-01
Alcohol and food items can compromise or contribute to health, depending on the quantity and frequency with which they are consumed. How much people consume may be influenced by product availability and promotion in local retail stores. We developed and tested an observational tool to objectively measure in-store availability and promotion of alcoholic beverages and selected food items that have an impact on health. Trained observers visited 51 alcohol outlets in Los Angeles and southeastern Louisiana. Using a standardized instrument, two independent observations were conducted documenting the type of outlet, the availability and shelf space for alcoholic beverages and selected food items, the purchase price of standard brands, the placement of beer and malt liquor, and the amount of in-store alcohol advertising. Reliability of the instrument was excellent for measures of item availability, shelf space, and placement of malt liquor. Reliability was lower for alcohol advertising, beer placement, and items that measured the "least price" of apples and oranges. The average kappa was 0.87 for categorical items and the average intraclass correlation coefficient was 0.83 for continuous items. Overall, systematic observation of the availability and promotion of alcoholic beverages and food items was feasible, acceptable, and reliable. Measurement tools such as the one we evaluated should be useful in studies of the impact of availability of food and beverages on consumption and on health outcomes.
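For reference, the inter-rater agreement statistics reported here can be computed with a standard Cohen's kappa. This is the textbook formula applied to a made-up example, not the authors' analysis code:

```python
def cohen_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels on the same items."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n                   # observed agreement
    cats = set(a) | set(b)
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

# Perfect agreement gives kappa = 1; agreement exactly at chance level gives 0.
print(cohen_kappa([1, 1, 0, 0], [1, 1, 0, 0]))  # → 1.0
print(cohen_kappa([1, 0, 1, 0], [1, 1, 0, 0]))  # → 0.0
```

Kappa corrects raw percentage agreement for the agreement two raters would reach by chance, which is why it is preferred over simple agreement for categorical store-observation items.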
Forecasting seeing and parameters of long-exposure images by means of ARIMA
NASA Astrophysics Data System (ADS)
Kornilov, Matwey V.
2016-02-01
Atmospheric turbulence is one of the major limiting factors for ground-based astronomical observations. In this paper, the problem of short-term forecasting of seeing is discussed. Real data obtained from atmospheric optical turbulence (OT) measurements above Mount Shatdzhatmaz in 2007-2013 are analysed. Linear auto-regressive integrated moving average (ARIMA) models are used for the forecasting. A new procedure is proposed for forecasting the image characteristics of direct astronomical observations (central image intensity, full width at half maximum, and the radius encircling 80% of the energy). The probability density functions of the forecasts of these quantities are 1.5-2 times narrower than the respective unconditional probability density functions. Overall, this study found that the described technique can adequately describe temporal stochastic variations of the OT power.
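The gain from conditional forecasting can be sketched with a synthetic persistent series: fitting even the simplest autoregressive model makes the one-step forecast distribution markedly narrower than the unconditional one. The persistence and noise values below are assumptions, and a full ARIMA fit (e.g. via statsmodels) would replace the least-squares AR(1) step:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "seeing-like" series: a persistent AR(1) process (assumption)
phi_true, n = 0.9, 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal(0, 0.3)

# Fit AR(1) by least squares: x[t] ≈ phi * x[t-1]
phi = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)

forecast_err = x[1:] - phi * x[:-1]   # one-step conditional forecast errors
ratio = x.std() / forecast_err.std()  # unconditional vs. conditional spread
print(ratio)                          # roughly 1/sqrt(1 - phi^2), i.e. > 2 here
```

For phi = 0.9 the theoretical ratio is 1/sqrt(1 - 0.81) ≈ 2.3, of the same order as the 1.5-2x narrowing reported for the seeing forecasts.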
NASA Astrophysics Data System (ADS)
Muschinski, A.; Hu, K.; Root, L. M.; Tichkule, S.; Wijesundara, S. N.
2010-12-01
Mean values and fluctuations of angles-of-arrival (AOAs) of light emitted from astronomical or terrestrial sources and observed through a telescope equipped with a CCD camera carry quantitative information about certain statistics of the wind and temperature field, integrated along the propagation path. While scintillometry (i.e., the retrieval of atmospheric quantities from light intensity fluctuations) has been a popular technique among micrometeorologists for many years, there have been relatively few attempts to utilize AOA observations to probe the atmospheric surface layer (ASL). Here we report results from a field experiment that we conducted at the Boulder Atmospheric Observatory (BAO) site near Erie, CO, in June 2010. During the night of 15/16 June, the ASL was characterized by intermittent turbulence and intermittent gravity-wave events. We measured temperature and wind with 12 sonics (R.M. Young, Model 81000, sampling rate 31 Hz) mounted on two portable towers at altitudes between 1.45 m and 4.84 m AGL; air pressure with two quartz-crystal barometers (Paroscientific, 10 Hz); and AOAs by means of a CCD camera (Lumenera, Model 075M, thirty 640x480 frames per second) attached to a 14-inch, Schmidt-Cassegrain telescope (Meade, Model LX200GPS) pointing at a rectangular array of four test lights (LEDs, vertical spacing 8 cm, horizontal spacing 10 cm) located at a distance of 182 m. The optical path was horizontal and 1.7 m above flat ground. The two towers were located 2 m away from the optical path. In our presentation, we focus on AOA retrievals of the following quantities: temporal fluctuations of the path-averaged, vertical temperature gradient; mean values and fluctuations of the path-averaged, lateral wind velocity; and mean values and fluctuations of the path-averaged temperature turbulence structure parameter. 
We compare the AOA retrievals with the collocated and simultaneous point measurements obtained with the sonics, and we analyze our observations in the framework of the Monin-Obukhov theory. The AOA techniques enable us to detect temporal fluctuations of the path-averaged vertical temperature gradient (estimated over a height increment defined by the telescope's aperture diameter) down to a few millikelvins per meter, which probably cannot be achieved with sonics. Extremely small wind velocities can also be resolved. Therefore, AOA techniques are well suited for observations of the nocturnal surface layer under quiet conditions. AOA retrieval techniques have major advantages over scintillometric techniques because AOAs can be understood within the framework of the weak-scattering theory or even geometrical optics (the eikonal-fluctuation theory), while the well-known "saturation effect" makes the weak-scattering theory invalid for intensity fluctuations in the majority of cases of practical relevance.
Changes in water and solute fluxes in the vadose zone after switching crops
NASA Astrophysics Data System (ADS)
Turkeltaub, Tuvia; Dahan, Ofer; Kurtzman, Daniel
2015-04-01
Switching crop type, and therefore changing irrigation and fertilization regimes, alters deep percolation and the concentrations of solutes in pore water. Changes in the fluxes of water, chloride and nitrate under a commercial greenhouse due to a switch from tomatoes to green spices were observed. The site, located above a coastal aquifer, has been monitored for the last four years. A vadose-zone monitoring system (VMS) was implemented under the greenhouse and provided continuous data on both the temporal variation in water content and the chemical composition of pore water at multiple depths in the deep vadose zone (~20 m). Chloride and nitrate profiles before and after the crop switch indicate a clear change in soil water solute concentrations. Before the switch, the average chloride profile ranged from ~130 to ~210 mg L-1, while afterwards it ranged from ~34 to ~203 mg L-1, a 22% reduction in chloride mass. The opposite trend was observed for nitrate: the average nitrate profile ranged from ~11 to ~44 mg L-1 before the switch and from ~500 to ~75 mg L-1 afterwards, a 400% increase in nitrate mass. A one-dimensional unsaturated water flow and chloride transport model was calibrated to the transient deep vadose zone data. A comparison of the simulation results under the surface boundary conditions of the vegetable and spice cultivation regimes clearly shows a distinct change in the quantity and quality of groundwater recharge.
Preliminary studies of the effect of thinning techniques over muon production profiles
NASA Astrophysics Data System (ADS)
Tomishiyo, G.; Souza, V.
2017-06-01
In the context of air shower simulations, thinning techniques are employed to reduce computational time and storage requirements. These techniques are designed to preserve local mean quantities during shower development, such as the average number of particles in a given atmospheric layer, without inducing systematic shifts in shower observables, such as the depth of shower maximum. In this work we investigate the effects of thinning on the determination of the depth at which muon production is maximum, X^μ_max. We show preliminary results indicating that the thinning factor and the maximum thinning weight might influence the determination of X^μ_max.
Fermionic vacuum polarization in a higher-dimensional global monopole spacetime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bezerra de Mello, E. R.
2007-12-15
In this paper we analyze the vacuum polarization effects associated with a massless fermionic field in a higher-dimensional global monopole spacetime in the 'braneworld' scenario. In this context we admit that our Universe, the bulk, is represented by a flat (n-1)-dimensional brane having a global monopole in an extra transverse three-dimensional submanifold. We explicitly calculate the renormalized vacuum average of the energy-momentum tensor,
Estimation of Bid Curves in Power Exchanges using Time-varying Simultaneous-Equations Models
NASA Astrophysics Data System (ADS)
Ofuji, Kenta; Yamaguchi, Nobuyuki
A simultaneous-equations model (SEM) is generally used in economics to estimate interdependent endogenous variables, such as price and quantity in a competitive equilibrium market. In this paper, we apply an SEM to the JEPX (Japan Electric Power eXchange) spot market, a single-price auction market, using the publicly available data on selling and buying bid volumes, the system price and the traded quantity. The aim of this analysis is to understand the magnitude of the influences of the selling and buying bids on the auctioned price and quantity, rather than to forecast prices and quantities for risk management purposes. In contrast to Ordinary Least Squares (OLS) estimation, whose results represent time-independent average values, we employ a time-varying simultaneous-equations model (TV-SEM) to capture structural changes inherent in those influences, using state-space models with stepwise Kalman-filter estimation. The results show that the buying bid volume has the highest magnitude of influence among the factors considered, exhibiting time-dependent changes ranging as broadly as about 240% of its average. The slope of the supply curve also varies over time, implying elasticity in the supplied commodity, while the demand curve remains comparatively inelastic and stable over time.
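The core of a time-varying coefficient model is a Kalman filter in which the regression coefficient itself is the hidden state. A minimal one-regressor sketch (synthetic data and illustrative noise variances, not the JEPX series or the paper's full TV-SEM) looks like this:

```python
import numpy as np

# Minimal sketch of a time-varying regression estimated with a Kalman
# filter: y_t = beta_t * x_t + e_t, with beta_t following a random walk.
# Data and variances are illustrative assumptions.
rng = np.random.default_rng(0)
T = 200
x = rng.uniform(1.0, 2.0, T)                           # regressor (e.g. bid volume)
beta_true = 1.0 + 0.5 * np.sin(np.linspace(0, 3, T))   # slowly drifting slope
y = beta_true * x + rng.normal(0, 0.05, T)

def kalman_tvp(y, x, q=1e-3, r=0.05**2):
    """Filtered estimates of a random-walk coefficient beta_t."""
    beta, P = 0.0, 1.0          # diffuse-ish prior
    out = np.empty(len(y))
    for t in range(len(y)):
        P = P + q               # predict: beta_t = beta_{t-1} + w_t
        S = x[t] * P * x[t] + r # innovation variance
        K = P * x[t] / S        # Kalman gain
        beta = beta + K * (y[t] - x[t] * beta)
        P = (1.0 - K * x[t]) * P
        out[t] = beta
    return out

beta_hat = kalman_tvp(y, x)     # tracks the drifting slope over time
```

The OLS counterpart would return a single constant slope; the filtered `beta_hat` instead traces the structural drift the abstract describes.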
USDA-ARS?s Scientific Manuscript database
The US Environmental Protection Agency’s 2004 Dioxin Reassessment included a characterization of background exposures to dioxin-like compounds, including an estimate of an average background intake dose and an average background body burden. These quantities were derived from data generated in the m...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Haitao, E-mail: liaoht@cae.ac.cn
Efficient sensitivity analysis method for chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Liao, Haitao
2016-05-01
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
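The recasting step described above can be illustrated with a minimal sketch: append the running integral J of the averaged quantity as an extra state, so that the time average is simply J(T)/T at the end of the integration. The example below uses the standard Lorenz system and illustrative step sizes; it shows only the augmentation, not the shadowing-based sensitivity analysis itself.

```python
import numpy as np

# Sketch of the key idea: recast a time-averaged quantity as an extra
# state of the ODE itself, so <z> = J(T)/T with dJ/dt = z.
def lorenz_aug(u, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z, J = u
    return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z, z])

def time_avg_z(u0, dt=0.002, T=40.0):
    u = np.array([*u0, 0.0])      # augment the state with J(0) = 0
    for _ in range(int(T/dt)):    # classical 4th-order Runge-Kutta
        k1 = lorenz_aug(u)
        k2 = lorenz_aug(u + 0.5*dt*k1)
        k3 = lorenz_aug(u + 0.5*dt*k2)
        k4 = lorenz_aug(u + dt*k3)
        u = u + dt/6.0*(k1 + 2*k2 + 2*k3 + k4)
    return u[3] / T               # J(T)/T = time average of z

avg = time_avg_z([1.0, 1.0, 1.0])
```

Differentiating these augmented equations with respect to a parameter (e.g. rho) is what the direct differentiation and shadowing formulations in the abstract then operate on.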
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr
We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.
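The ensemble-versus-time-averaging idea can be demonstrated on any ergodic chaotic system. The toy sketch below substitutes the chaotic logistic map for a turbulent flow (an assumption made for brevity): M de-correlated members, each run for T/M steps, yield a statistic comparable to one member run for T steps.

```python
import numpy as np

# Toy illustration of ensemble versus ergodic time averaging, with the
# chaotic logistic map (r = 4) standing in for a turbulent flow.
def logistic_stats(x0, n, burn=100):
    x = x0
    for _ in range(burn):             # discard the initial transient
        x = 4.0 * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        acc += x
    return acc / n                    # time average of x

rng = np.random.default_rng(1)
T = 100_000
time_avg = logistic_stats(0.3, T)                    # single long run
M = 16                                               # ensemble members
ens_avg = np.mean([logistic_stats(x0, T // M)        # perturbed starts
                   for x0 in rng.uniform(0.1, 0.9, M)])
# Both estimates approach the invariant-measure mean, 0.5, but the
# ensemble members can run concurrently, cutting the wall-clock time.
```

The wall-clock gain is exactly the point of the abstract: the M runs are independent and parallelize perfectly, past the strong-scaling limit of a single run.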
Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr; ...
2017-06-07
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Guangsheng; Tan, Zhenyu, E-mail: tzy@sdu.edu.cn; Pan, Jie
In this work, a comparative study of the frequency effects on the electrical characteristics of pulsed dielectric barrier discharges in He/O{sub 2} and in Ar/O{sub 2} at atmospheric pressure has been performed by means of numerical simulation based on a 1-D fluid model at frequencies below 100 kHz. The frequency dependences of the characteristic quantities of the discharges in the two gases have been systematically calculated and analyzed for oxygen concentrations below 2%. The characteristic quantities include the discharge current density, the averaged electron density, the electric field, and the averaged electron temperature. In particular, the frequency effects on the averaged particle densities of the reactive species have also been calculated. This work gives the following significant results. For both gases, there are two bipolar discharges in one period of the applied voltage pulse over the considered frequency range and oxygen concentrations, as occurs in the pure noble gases. The frequency affects both discharges in He/O{sub 2}, but in Ar/O{sub 2} it induces a strong effect only on the first discharge. For the first discharge in each gas, there is a characteristic frequency at which the characteristic quantities reach their respective minima, and this characteristic frequency is lower for Ar/O{sub 2}. For the second discharge in Ar/O{sub 2}, the averaged electron density varies only slightly with frequency. In addition, the discharge in Ar/O{sub 2} is stronger and its averaged electron temperature lower than in He/O{sub 2}. The total averaged particle density of the reactive species in Ar/O{sub 2} is larger than that in He/O{sub 2} by about one order of magnitude.
Flow-covariate prediction of stream pesticide concentrations.
Mosquin, Paul L; Aldworth, Jeremy; Chen, Wenlin
2018-01-01
Potential peak functions (e.g., maximum rolling averages over a given duration) of annual pesticide concentrations in the aquatic environment are important exposure parameters (or target quantities) for ecological risk assessments. These target quantities require accurate concentration estimates on nonsampled days in a monitoring program. We examined stream flow as a covariate via universal kriging to improve predictions of maximum m-day (m = 1, 7, 14, 30, 60) rolling averages and the 95th percentiles of atrazine concentration in streams where data were collected every 7 or 14 d. The universal kriging predictions were evaluated against the target quantities calculated directly from the daily (or near daily) measured atrazine concentration at 32 sites (89 site-yr) as part of the Atrazine Ecological Monitoring Program in the US corn belt region (2008-2013) and 4 sites (62 site-yr) in Ohio by the National Center for Water Quality Research (1993-2008). Because stream flow data are strongly skewed to the right, 3 transformations of the flow covariate were considered: log transformation, short-term flow anomaly, and normalized Box-Cox transformation. The normalized Box-Cox transformation resulted in predictions of the target quantities that were comparable to those obtained from log-linear interpolation (i.e., linear interpolation on the log scale) for 7-d sampling. However, the predictions appeared to be negatively affected by variability in regression coefficient estimates across different sample realizations of the concentration time series. Therefore, revised models incorporating seasonal covariates and partially or fully constrained regression parameters were investigated, and they were found to provide much improved predictions in comparison with those from log-linear interpolation for all rolling average measures. Environ Toxicol Chem 2018;37:260-273. © 2017 SETAC.
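The target quantities themselves are easy to state concretely: a maximum m-day rolling average of a daily concentration series. A short sketch with a synthetic series standing in for daily atrazine measurements (this shows the target computation only, not the kriging):

```python
import numpy as np
import pandas as pd

# Sketch of the target quantities: maximum m-day rolling averages of a
# daily concentration series. The lognormal series below is synthetic,
# standing in for daily atrazine measurements.
rng = np.random.default_rng(0)
days = pd.date_range("2010-04-01", periods=180, freq="D")
conc = pd.Series(np.exp(rng.normal(0.0, 1.0, len(days))), index=days)

def max_rolling_avg(series, m):
    """Maximum m-day rolling average (the exposure target quantity)."""
    return series.rolling(window=m, min_periods=m).mean().max()

targets = {m: max_rolling_avg(conc, m) for m in (1, 7, 14, 30, 60)}
# Longer windows smooth out peaks; e.g. the 7-day maximum can never
# exceed the 1-day maximum, and the 60-day maximum never exceeds the
# 30-day maximum.
```

In the study, the series on nonsampled days must first be predicted (via universal kriging with a flow covariate) before these maxima can be computed.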
The effect of urban green on small-area (healthy) life expectancy.
Jonker, M F; van Lenthe, F J; Donkers, B; Mackenbach, J P; Burdorf, A
2014-10-01
Several epidemiological studies have investigated the effect of the quantity of green space on health outcomes such as self-rated health, morbidity and mortality ratios. These studies have consistently found positive associations between the quantity of green and health. However, the impact of other aspects, such as the perceived quality and average distance to public green, and the effect of urban green on population health are still largely unknown. Linear regression models were used to investigate the impact of three different measures of urban green on small-area life expectancy (LE) and healthy life expectancy (HLE) in The Netherlands. All regressions corrected for average neighbourhood household income, accommodated spatial autocorrelation, and took measurement uncertainty of LE, HLE as well as the quality of urban green into account. Both the quantity and the perceived quality of urban green are modestly related to small-area LE and HLE: an increase of 1 SD in the percentage of urban green space is associated with a 0.1-year higher LE, and, in the case of quality of green, with an approximately 0.3-year higher LE and HLE. The average distance to the nearest public green is unrelated to population health. The quantity and particularly quality of urban green are positively associated with small-area LE and HLE. This concurs with a growing body of evidence that urban green reduces stress, stimulates physical activity, improves the microclimate and reduces ambient air pollution. Accordingly, urban green development deserves a more prominent place in urban regeneration and neighbourhood renewal programmes. Published by the BMJ Publishing Group Limited.
Alcohol-impaired driving: average quantity consumed and frequency of drinking do matter.
Birdsall, William C; Reed, Beth Glover; Huq, Syeda S; Wheeler, Laura; Rush, Sarah
2012-01-01
The objective of this article is to estimate and validate a logistic model of alcohol-impaired driving using previously ignored alcohol consumption behaviors, other risky behaviors, and demographic characteristics as independent variables. The determinants of impaired driving are estimated using the US Centers for Disease Control and Prevention's (CDC) Behavioral Risk Factor Surveillance System (BRFSS) surveys. Variables used in a logistic model to explain alcohol-impaired driving are not only standard sociodemographic variables and bingeing but also frequency of drinking and average quantity consumed, as well as other risky behaviors. We use interactions to understand how being female and being young affect impaired driving. Having estimated our model using the 1997 survey, we validated our model using the BRFSS data for 1999. Drinking 9 or more times in the past month doubled the odds of impaired driving. The greater the average consumption of alcohol per session, the greater the odds of driving impaired, especially for persons in the highest quartile of alcohol consumed. Bingeing has the greatest effect on impaired driving. Seat belt use is the one risky behavior found to be related to such driving. Sociodemographic effects are consistent with earlier research. Being young (18-30) interacts with two of the alcohol consumption variables and being a woman interacts with always wearing a seat belt. Our model was robust in the validation analysis. All 3 dimensions of drinking behavior are important determinants of alcohol-impaired driving, including frequency and average quantity consumed. Including these factors in regressions improves the estimates of the effects of all variables.
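The modeling setup described, a logistic regression with main effects plus an age-by-consumption interaction, can be sketched on synthetic data. Everything below (coefficients, variable distributions) is invented for illustration; only the model form mirrors the study.

```python
import numpy as np

# Illustrative logistic model in the spirit of the study: probability of
# impaired driving from drinking frequency, average quantity per session,
# and a young x frequency interaction. All data are synthetic.
rng = np.random.default_rng(0)
n = 5000
freq = rng.poisson(4, n)                  # drinking sessions last month
qty = rng.gamma(2.0, 1.5, n)              # average drinks per session
young = rng.integers(0, 2, n)             # age 18-30 indicator
logit = -4.0 + 0.15*freq + 0.3*qty + 0.1*young*freq
y = rng.binomial(1, 1/(1 + np.exp(-logit)))

def fit_logistic(X, y, iters=25):
    """Newton-Raphson (IRLS) maximum-likelihood fit of a logistic model."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1/(1 + np.exp(-X @ w))
        W = p * (1 - p)                   # observation weights
        H = X.T @ (X * W[:, None])        # Hessian of the log-likelihood
        w += np.linalg.solve(H, X.T @ (y - p))
    return w

X = np.column_stack([np.ones(n), freq, qty, young*freq])
w = fit_logistic(X, y)   # recovers roughly [-4, 0.15, 0.3, 0.1]
```

With enough observations the fitted coefficients recover the generating values, which is the sense in which such a model "explains" the odds ratios (e.g. a doubling of odds per unit of a covariate corresponds to a coefficient of ln 2).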
NASA Astrophysics Data System (ADS)
Raju, K. P.
2018-05-01
The Calcium K spectroheliograms of the Sun from Kodaikanal have a data span of about 100 years and cover more than 9 solar cycles. The Ca K line is a strong chromospheric line dominated by the chromospheric network and plages, which are good indicators of solar activity. Length-scales and relative intensities of the chromospheric network have been obtained for solar latitudes from 50 degrees N to 50 degrees S from the spectroheliograms. The length-scale was obtained from the half-width of the two-dimensional autocorrelation of the latitude strip, which gives a measure of the width of the network boundary. As reported earlier for the transition-region extreme ultraviolet (EUV) network, the relative intensity and width of the chromospheric network boundary are found to depend on the solar cycle. A varying phase difference has been noticed in these quantities at different solar latitudes. A cross-correlation analysis of the quantities at other latitudes with those at ±30 degrees latitude revealed an interesting phase-difference pattern indicating flux transfer. Evidence of equatorward flux transfer has been observed; the average equatorward flux transfer speed was estimated to be 5.8 m s-1. Possible reasons for the drift could be meridional circulation, torsional oscillations, or bright-point migration. Cross-correlation of intensity and length-scale from the same latitude showed increasing phase difference with increasing latitude. We have also obtained the cross-correlation of the quantities across the equator to look for phase lags between the two hemispheres. Signatures of lags are seen in the length-scales of the southern hemisphere near the equatorial latitudes, but no such lags are observed in the intensity. The results have important implications for flux transfer over the solar surface and hence for solar activity and the dynamo.
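A phase lag between two latitude time series can be estimated as the lag that maximizes their cross-correlation. A minimal sketch, using synthetic solar-cycle-like signals with a built-in 6-month offset rather than the Kodaikanal data:

```python
import numpy as np

# Sketch of the cross-correlation lag analysis: the phase difference
# between two latitudes is the lag maximizing the Pearson correlation.
# Synthetic 11-year sinusoids with a known 6-month offset plus noise.
rng = np.random.default_rng(0)
t = np.arange(100 * 12)                                   # monthly samples
a = np.sin(2*np.pi*t/(11*12)) + 0.1*rng.normal(size=t.size)
b = np.sin(2*np.pi*(t - 6)/(11*12)) + 0.1*rng.normal(size=t.size)

def xcorr_at(x, y, k):
    """Pearson correlation of x[t] with y[t+k]."""
    if k >= 0:
        return np.corrcoef(x[:len(x) - k], y[k:])[0, 1]
    return np.corrcoef(x[-k:], y[:len(y) + k])[0, 1]

lag = max(range(-24, 25), key=lambda k: xcorr_at(a, b, k))
# b trails a by 6 samples, so the recovered lag should be close to +6.
```

Applied pairwise across latitudes, the sign and magnitude of such lags trace the direction and speed of the flux transfer discussed above.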
Green strips or vegetative fuel breaks
Loren St. John; Dan Ogle
2009-01-01
According to the National Interagency Fire Center, between 1998 and 2008 there were on average 65,581 fires per year and an average of 6,114,135 acres burned each year in the United States. Rangelands in the western United States have been invaded by many annual weed species including cheatgrass, an introduced winter annual grass that produces large quantities of...
Geology and ground-water resources of the Cockfield Formation in western Tennessee
Parks, W.S.; Carmichael, J.K.
1990-01-01
The Cockfield Formation of the Claiborne Group of Tertiary age underlies approximately 4,000 sq mi in western Tennessee. The formation consists primarily of lenticular beds of very fine to coarse sand, silt, clay, and lignite. The Cockfield Formation has been extensively eroded, and the original thickness is preserved only in a few areas where the formation ranges from 235 to 270 ft in thickness. Recharge to the Cockfield aquifer is from precipitation on sparse outcrops or by downward infiltration of water from the overlying fluvial deposits of Tertiary and Quaternary age and alluvium of Quaternary age or, where present, the overlying Jackson Formation of Tertiary age. Data from two observation wells indicate that water levels have risen at average rates of about 0.5 and 0.7 ft/year during the period 1980-85. Water from the Cockfield aquifer is a calcium bicarbonate type that contains low concentrations of most major constituents, and generally is suitable for most uses. Dissolved-solids concentrations range from 44 to 218 mg/L. Data from two aquifer tests indicate transmissivities of 2,500 and 6,000 sq ft/day and storage coefficients of 0.0003 and 0.0007, respectively. The Cockfield aquifer presently provides small to moderate quantities of water for several public and industrial water supplies and small quantities to numerous domestic and farm wells. Withdrawals for public and industrial supplies in 1983 averaged about 3.3 million gal/day. (USGS)
Measurement of surface water runoff from plots of two different sizes
NASA Astrophysics Data System (ADS)
Joel, Abraham; Messing, Ingmar; Seguel, Oscar; Casanova, Manuel
2002-05-01
Intensities and amounts of water infiltration and runoff on sloping land are governed by the rainfall pattern and soil hydraulic conductivity, as well as by the microtopography and soil surface conditions. These components are closely interrelated and occur simultaneously, and their particular contribution may change during a rainfall event, or their effects may vary at different field scales. The scale effect on the process of infiltration/runoff was studied under natural field and rainfall conditions for two plot sizes: small plots of 0.25 m2 and large plots of 50 m2. The measurements were carried out in the central region of Chile in a piedmont most recently used as natural pastureland. Three blocks, each having one large plot and five small plots, were established. Cumulative rainfall and runoff quantities were sampled every 5 min. Significant variations in runoff responses to rainfall rates were found for the two plot sizes. On average, large plots yielded only 40% of runoff quantities produced on small plots per unit area. This difference between plot sizes was observed even during periods of continuous runoff.
NASA Astrophysics Data System (ADS)
Miyazaki, Jun
2013-10-01
We present an analytical method for quantifying exciton hopping in an energetically disordered system with quenching sites. The method is subsequently used to provide a quantitative understanding of exciton hopping in a quantum dot (QD) array. Several statistical quantities that characterize the dynamics (survival probability, average number of distinct sites visited, average hopping distance, and average hopping rate in the initial stage) are obtained experimentally by measuring time-resolved fluorescence intensities at various temperatures. The time evolution of these quantities suggests in a quantitative way that at low temperature an exciton tends to be trapped at a local low-energy site, while at room temperature, exciton hopping occurs repeatedly, leading to a large hopping distance. This method will serve to facilitate highly efficient optoelectronic devices using QDs such as photovoltaic cells and light-emitting diodes, since exciton hopping is considered to strongly influence their operational parameters. The presence of a dark QD (quenching site) that exhibits fast decay is also quantified.
Gamberg, Leonard; Metz, Andreas; Pitonyak, Daniel; ...
2018-03-15
Here, we extend the improved Collins–Soper–Sterman (iCSS) W+Y construction recently presented in [1] to the case of polarized observables, where we focus in particular on the Sivers effect in semi-inclusive deep-inelastic scattering. We further show how one recovers the expected leading-order collinear twist-3 result from a (weighted) qT-integral of the differential cross section. We are also able to demonstrate the validity of the well-known relation between the (TMD) Sivers function and the (collinear twist-3) Qiu–Sterman function within the iCSS framework. This relation allows for their interpretation as functions yielding the average transverse momentum of unpolarized quarks in a transversely polarized spin-1/2 target. We further outline how this study can be generalized to other polarized quantities.
NASA Astrophysics Data System (ADS)
Gamberg, Leonard; Metz, Andreas; Pitonyak, Daniel; Prokudin, Alexei
2018-06-01
We extend the improved Collins-Soper-Sterman (iCSS) W + Y construction recently presented in [1] to the case of polarized observables, where we focus in particular on the Sivers effect in semi-inclusive deep-inelastic scattering. We further show how one recovers the expected leading-order collinear twist-3 result from a (weighted) qT-integral of the differential cross section. We are also able to demonstrate the validity of the well-known relation between the (TMD) Sivers function and the (collinear twist-3) Qiu-Sterman function within the iCSS framework. This relation allows for their interpretation as functions yielding the average transverse momentum of unpolarized quarks in a transversely polarized spin-1/2 target. We further outline how this study can be generalized to other polarized quantities.
Müller, M; Meyer, H; Stummer, H
2011-07-01
In the extramural setting, general practitioners serve as gatekeepers and therefore control the demand for medical treatment and pharmaceuticals. As a result, their prescription habits are of major interest. The aim of the present study is to identify sample characteristics in the prescription behaviour of general practitioners that allow one to differentiate between the individual and the basic population. The prescription behaviour of 4 231 general practitioners was operationalised by means of the two variables "quantity" and "price". Outliers in those categories, indicating a doctor who prescribes too many or too expensive drugs, were identified using Chebyshev's inequality. We found a statistically significant linear relationship between the individual characteristics of the medical doctors and their prescription behaviour (0.54 ≤ r ≤ 0.89), as well as between the variables "quantity" and "price" (r=0.86). Particularly notable is the correlation between the number of consultations and the quantity of prescribed drugs. The average prescription amounts to approximately 1.8 pharmaceuticals per consultation. The quantity of drugs prescribed correlates with the demand for the physician's service. Only a few general practitioners deviate from this coherence. The tendency to prescribe disproportionately expensive drugs (average costs amount to € 18.4 per drug) applies especially to those general practitioners who, in addition to their occupation as a physician, are allowed to dispense pharmaceuticals directly to the patient within their privately owned pharmacies ("Hausapotheke"). In addition to this attribute, the variables "number of patients" and "number of consultations" intensify the effect. The risk of being identified as an outlier is 7 times higher within the group of general practitioners who own a "Hausapotheke" and account for an above-average number of consultations than within the group that does not share those characteristics.
The strong coherence between the quantity and the demand is inherent to the health-care system and explains 79% of the variance of the prescribed quantities. Only 21% of the variance is determined by outside influences such as state of health of the patients. Physicians who have a monetary benefit from also distributing the drugs, however, enhance the prescription of high priced pharmaceuticals. © Georg Thieme Verlag KG Stuttgart · New York.
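Chebyshev's inequality makes a distribution-free outlier criterion: for any distribution, at most 1/k² of values lie k or more standard deviations from the mean. A minimal sketch with synthetic prescription data (not the study's records):

```python
import numpy as np

# Sketch of distribution-free outlier flagging via Chebyshev's inequality:
# at most 1/k^2 of any distribution lies >= k standard deviations from the
# mean, so |z| >= k is a conservative outlier criterion. Data are synthetic.
rng = np.random.default_rng(0)
quantity = rng.normal(1.8, 0.4, 4231)      # drugs per consultation
quantity[:5] += 3.0                        # plant a few extreme prescribers

def chebyshev_outliers(x, k=3.0):
    """Indices with |x - mean| >= k*std; their share is bounded by 1/k^2."""
    z = (x - x.mean()) / x.std()
    return np.flatnonzero(np.abs(z) >= k)

out = chebyshev_outliers(quantity)
# At k = 3, Chebyshev guarantees at most 1/9 of doctors can be flagged,
# regardless of the shape of the quantity distribution.
```

Because the bound holds for any distribution, the criterion avoids assuming normality of prescription quantities or prices, which is presumably why the study chose it.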
Kumar, Ajay; Sharma, Sumit; Mehra, Rohit; Narang, Saurabh; Mishra, Rosaline
2017-07-01
Background The inhalation doses resulting from exposure to radon, thoron, and their progeny are important quantities in estimating the radiation risk for epidemiological studies, as the average global annual effective dose due to radon and its progeny is 1.3 mSv, compared to 2.4 mSv due to all other natural sources of ionizing radiation. Objectives The annual inhalation dose has been assessed with the aim of investigating the health risk to the inhabitants of the studied region. Methods Time-integrated, deposition-based 222Rn/220Rn sensors have been used to measure concentrations in 146 dwellings of Udhampur district, Jammu and Kashmir. An active smart RnDuo monitor has also been used for comparison purposes. Results The indoor radon and thoron concentrations are found to vary from 11 to 58 Bq m-3 with an average value of 29 ± 9 Bq m-3, and from 25 to 185 Bq m-3 with an average value of 83 ± 32 Bq m-3, respectively. About 10.7% of the dwellings have values higher than the world average of 40 Bq m-3 prescribed by UNSCEAR. The relationships of indoor radon and thoron levels with different seasons, ventilation conditions, and different geological formations are discussed. Conclusions The observed concentrations and average annual effective doses due to radon, thoron, and their progeny in the study area have been found to be below the recommended level of ICRP. The concentrations of 222Rn and 220Rn measured with active and passive techniques are found to be in good agreement.
Pang, Chong-guang; Yu, Wei; Yang, Yang
2010-03-01
In July 2008, under natural seawater conditions, laser in-situ scattering and transmissometry (LISST-100X Type C) was used to measure the grain size distribution spectrum and volume concentration of total suspended matter, including flocs, at different layers of 24 sampling stations in the Changjiang Estuary and its adjacent sea. The characteristics of the grain size distribution of total suspended matter and its forming mechanism were analyzed based on the LISST-100X Type C observations, combined with the temperature, salinity, and turbidity of the sea water observed simultaneously by an Alec AAQ1183. The observations showed that the average median grain size of total suspended matter was about 4.69 phi over the whole measured sea area, and that the grain size distribution was relatively poorly sorted, of wide kurtosis, and basically symmetrical. Vertically averaged volume concentration decreased with distance from the coastline, while median grain size tended to increase with distance; for example, at the 31.0 degrees N section, the depth-averaged median grain size increased from 11 microm up to 60 microm. With increasing distance from the coast, the concentration of fine suspended sediment decreased distinctly, while relatively large organic particles or large flocs appeared in quantity, raising the grain size. The observations indicated that the effective density ranged from 246 kg/m3 to 1334 kg/m3, with an average of 613 kg/m3. When the concentration of total suspended matter was relatively high, the median grain size of total suspended matter increased with water depth while the effective density decreased with depth, because large flocs have a faster settling velocity and a lower effective density than small flocs. For stations 37 and 44, the correlation coefficients between effective density and median grain size were larger than 0.9.
Exploring end of life priorities in Saudi males: usefulness of Q-methodology.
Hammami, Muhammad M; Al Gaai, Eman; Hammami, Safa; Attala, Sahar
2015-11-26
Quality end-of-life care depends on understanding patients' end-of-life choices. Individuals and cultures may hold end-of-life priorities at different hierarchy. Forced ranking rather than independent rating, and by-person factor analysis rather than averaging may reveal otherwise masked typologies. We explored Saudi males' forced-ranked, end-of-life priorities and dis-priorities. Respondents (n = 120) rank-ordered 47 opinion statements on end-of-life care following a 9-category symmetrical distribution. Statements' scores were analyzed by averaging analysis and factor analysis (Q-methodology). Respondents' mean age was 32.1 years (range, 18-65); 52% reported average religiosity, 88 and 83% ≥ very good health and life-quality, respectively, and 100% ≥ high school education. Averaging analysis revealed that the extreme five end-of-life priorities were to, be at peace with God, be able to say the statement of faith, maintain dignity, resolve conflicts, and have religious death rituals respected, respectively. The extreme five dis-priorities were to, die in the hospital, not receive intensive care if in coma, die at peak of life, be informed about impending death by family/friends rather than doctor, and keep medical status confidential from family/friends, respectively. Q-methodology classified 67% of respondents into five highly transcendent opinion types. Type-I (rituals-averse, family-caring, monitoring-coping, life-quality-concerned) and Type-V (rituals-apt, family-centered, neutral-coping, life-quantity-concerned) reported the lowest and highest religiosity, respectively. Type-II (rituals-apt, family-dependent, monitoring-coping, life-quantity-concerned) and Type-III (rituals-silent, self/family-neutral, avoidance-coping, life-quality & quantity-concerned) reported the best and worst life-quality, respectively. 
Type-I respondents were the oldest with the lowest general health, in contrast to Type-IV (rituals-apt, self-centered, monitoring-coping, life-quality/quantity-neutral). Of the extreme 14 priorities/dis-priorities for the five types, 29, 14, 14, 50, and 36%, respectively, were not among the extreme 20 priorities/dis-priorities identified by averaging analysis for the entire cohort. 1) Transcendence was the extreme end-of-life priority, and dying in the hospital was the extreme dis-priority. 2) Quality of life was conceptualized differently, with less emphasis on its physiological aspects. 3) Disclosure of terminal illness to family/close friends was preferred as long as it is through the patient. 4) Q-methodology identified five types of constellations of end-of-life priorities and dis-priorities that may be related to respondents' demographics and are partially masked by averaging analysis.
In-situ study of discontinuous precipitation in Al-15 at.% Zn
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdou, S.; El-Boragy, M.; Solorzano, G.
1996-05-01
In the present study, attention was focused on in-situ work on discontinuous precipitation in Al-15.0 at.% Zn in a high-voltage electron microscope using a hot stage and a video system. The microscope was an AEI instrument with a maximum voltage of 1.25 MV; the voltage used was 500 kV. The scope of the present study was to check whether grain boundary migration in the discontinuous precipitation reaction proceeds in a stop-and-go fashion. From all the observations reported here it can be concluded that the stop-and-go type of grain boundary migration seems to be a very general one, but in many cases it cannot easily be proved experimentally. In the case of discontinuous precipitation in Al-15.0 at.% Zn, in-situ observations in a high-voltage electron microscope clearly demonstrated that the reaction front migration occurs in a stop-and-go fashion. Consequently, there is a drastic difference between the average velocity and the instantaneous velocity. The only quantity that can be determined in traditional experiments is the average velocity, for which the Petermann-Hornbogen equation is adequate.
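The gap between the two velocities mentioned above can be made concrete with a small sketch. All numbers here are hypothetical (not from the study): the point is only that intermittent "stop-and-go" motion makes the time-averaged velocity much smaller than the instantaneous velocity during the "go" phases.

```python
# Hypothetical illustration of stop-and-go boundary migration: the
# time-averaged velocity (the only quantity a traditional experiment
# yields) underestimates the instantaneous "go"-phase velocity.

def velocities(steps):
    """steps: list of (displacement_nm, duration_s) for go/stop intervals."""
    total_disp = sum(d for d, _ in steps)
    total_time = sum(t for _, t in steps)
    go_disp = sum(d for d, t in steps if d > 0)
    go_time = sum(t for d, t in steps if d > 0)
    v_avg = total_disp / total_time   # what a traditional experiment measures
    v_inst = go_disp / go_time        # velocity during the "go" phases only
    return v_avg, v_inst

# Three "go" bursts of 100 nm in 10 s each, separated by 40 s stops:
v_avg, v_inst = velocities([(100, 10), (0, 40), (100, 10), (0, 40), (100, 10)])
print(v_avg, v_inst)  # 300/110 ≈ 2.73 nm/s vs 10 nm/s
```

The longer the stop intervals, the larger the discrepancy, which is why the in-situ video observation was needed to reveal the instantaneous behavior.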
Daley, Kiley; Castleden, Heather; Jamieson, Rob; Furgal, Chris; Ell, Lorna
2014-01-01
Background Access to adequate quantities of water has a protective effect on human health and well-being. Despite this, public health research and interventions are frequently focused solely on water quality, and international standards for domestic water supply minimums are often overlooked or unspecified. This trend is evident in Inuit and other Arctic communities even though numerous transmissible diseases and bacterial infections associated with inadequate domestic water quantities are prevalent. Objectives Our objective was to explore the pathways by which the trucked water distribution systems being used in remote northern communities are impacting health at the household level, with consideration given to the underlying social and environmental determinants shaping health in the region. Methods Using a qualitative case study design, we conducted 37 interviews (28 residents, 9 key informants) and a review of government water documents to investigate water usage practices and perspectives. These data were thematically analysed to understand potential health risks in Arctic communities and households. Results Each resident receives an average of 110 litres of municipal water per day. Fifteen of 28 households reported experiencing water shortages at least once per month. Of those 15, most were larger households (5 people or more) with standard-sized water storage tanks. Water shortages and service interruptions limit the ability of some households to adhere to public health advice. The households most resilient, or able to cope with domestic water supply shortages, were those capable of retrieving their own drinking water directly from lake and river sources. Residents with extended family and neighbours, whom they can rely on during shortages, were also less vulnerable to municipal water delays. Conclusions The relatively low in-home water quantities observed in Coral Harbour, Nunavut, appear adequate for some families. 
Those living in overcrowded households, however, are accessing water in quantities more typically seen in water insecure developing countries. We recommend several practical interventions and revisions to municipal water supply systems. PMID:24765615
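The study's observation that larger households with standard tanks run short can be illustrated with simple supply arithmetic. The tank size below is a hypothetical placeholder (the abstract does not give one); only the 110 L per-capita figure comes from the study.

```python
# Rough arithmetic sketch: how long an assumed standard household tank
# lasts at the reported average of 110 L per person per day.
PER_CAPITA_L = 110     # reported average municipal delivery (study figure)
TANK_L = 1500          # hypothetical standard household tank size (assumed)

def days_of_supply(tank_l, people, per_capita_l=PER_CAPITA_L):
    return tank_l / (people * per_capita_l)

small = days_of_supply(TANK_L, 3)   # a 3-person household
large = days_of_supply(TANK_L, 6)   # a 6-person household runs short sooner
print(round(small, 1), round(large, 1))
```

Under these assumed numbers, doubling the household size halves the days of supply between deliveries, which matches the qualitative finding that overcrowded households face more frequent shortages.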
NASA Astrophysics Data System (ADS)
Zheng, L.; Weisberg, R. H.
2016-02-01
A 3D numerical circulation model, with high resolution (20 m) at important mass conveyances (inlets, rivers, etc.), is developed to estimate the bulk residence time and diagnose the salt balances and salt fluxes for the Tampa Bay estuary. These analyses are justified via quantitative comparisons between the simulation and observations of sea level, velocity and salinity. The non-tidal circulation is the primary agent for the flushing of Tampa Bay; tides alone have a minor effect. Exceptions pertain to within a tidal excursion from the bay mouth and to regions with multiple inlets, where different tide phases aid in flushing. The fully 3D salt flux divergences (SFD) and fluxes vary spatially throughout the estuary. Averaged over the three-month experiment duration, the total advective SFD is balanced primarily by the vertical diffusive SFD, except near the bottom of the channel where the horizontal diffusive SFD is also important. Instantaneously, the local rate of salinity change is controlled primarily by the advective SFD, with a secondary contribution by the vertical diffusive SFD everywhere and by the horizontal diffusive SFD near the channel bottom. After decomposing the advective salt fluxes and their divergences into mean quantities and tidal pumping, the horizontal and vertical advective SFDs by the mean quantities are large and counterbalance, with their sum being a small but significant residual. The horizontal and vertical advective SFDs by tidal pumping are relatively small (when compared with the mean quantities) and counterbalance; but, when summed, their residual is comparable in magnitude to that by the mean quantities. So whereas the salt fluxes by tidal pumping are of secondary importance to the salt fluxes by the mean quantities, their total flux divergences are of comparable importance. The salt flux 3D components vary along the Tampa Bay axis, and these findings may be typical of coastal plain estuaries given their geometrical complexities.
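The mean-versus-tidal-pumping decomposition used above is the standard Reynolds-style split of the time-averaged advective salt flux: <u s> = <u><s> + <u's'>, where primes are deviations from the time mean. A minimal sketch on synthetic velocity and salinity series (illustrative numbers only, not Tampa Bay values):

```python
# Sketch of the advective salt flux decomposition: the time-averaged flux
# <u*s> splits into a mean part <u><s> and a tidal-pumping (eddy) part
# <u's'>. Synthetic tidal series; all amplitudes are illustrative.
import math

N = 1000
t = [2 * math.pi * i / N for i in range(N)]           # one full tidal cycle
u = [0.1 + 0.5 * math.cos(x) for x in t]              # mean flow + tidal velocity
s = [30.0 + 2.0 * math.cos(x) for x in t]             # mean salinity + tidal signal

def mean(xs):
    return sum(xs) / len(xs)

total_flux = mean([ui * si for ui, si in zip(u, s)])  # <u s>
mean_flux = mean(u) * mean(s)                         # <u><s>
pumping_flux = total_flux - mean_flux                 # <u's'>, the tidal pumping
print(round(mean_flux, 3), round(pumping_flux, 3))    # 3.0 and 0.5
```

Because the tidal parts of u and s are in phase here, the pumping term is positive; its size relative to the mean term is what the abstract's comparison of the two contributions quantifies.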
NASA Technical Reports Server (NTRS)
Turner, J. W. (Inventor)
1973-01-01
A measurement system is described for providing an indication of a varying physical quantity represented by or converted to a variable frequency signal. Timing pulses are obtained marking the duration of a fixed number, or set, of cycles of the sampled signal and these timing pulses are employed to control the period of counting of cycles of a higher fixed and known frequency source. The counts of cycles obtained from the fixed frequency source provide a precise measurement of the average frequency of each set of cycles sampled, and thus successive discrete values of the quantity being measured. The frequency of the known frequency source is made such that each measurement is presented as a direct digital representation of the quantity measured.
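The scheme described above is reciprocal counting: time a fixed set of N signal cycles against a known reference clock, then recover the average frequency of that set. A minimal sketch with illustrative numbers (the patent does not specify particular frequencies):

```python
# Sketch of the reciprocal-counting measurement: timing pulses gate a
# reference-clock counter over N cycles of the unknown signal, and the
# average frequency of the set is N divided by the gate time.
def measured_frequency(n_signal_cycles, ref_counts, f_ref):
    # gate_time = ref_counts / f_ref; f_avg = n_signal_cycles / gate_time,
    # computed in one step to avoid intermediate rounding:
    return n_signal_cycles * f_ref / ref_counts

# 100 cycles of an unknown signal gated 400,000 counts of a 10 MHz reference:
f = measured_frequency(100, 400_000, 10_000_000)
print(f)  # 2500.0 Hz
```

Choosing f_ref so that the count is a direct digital representation of the measured quantity, as the abstract notes, amounts to picking f_ref and N so the scale factor is a power of ten.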
An airborne sensor for the avoidance of clear air turbulence
NASA Technical Reports Server (NTRS)
Gary, B. L.
1981-01-01
This paper describes an airborne microwave radiometer that may be able to provide altitude guidance away from layers containing clear air turbulence, CAT. The sensor may also be able to predict upper limits for the severity of upcoming CAT. The 55 GHz radiometer is passive, not radar, and it measures the temperature of oxygen molecules in the viewing direction (averaged along a several-kilometer path). A small computer directs the viewing direction through elevation angle scans, and converts observed quantities to an 'altitude temperature profile'. The principle for CAT avoidance is that CAT is found statistically more often within inversion layers and at the tropopause, both of which are easily located from sensor-generated altitude temperature profiles.
Increasing market efficiency in the stock markets
NASA Astrophysics Data System (ADS)
Yang, Jae-Suk; Kwak, Wooseop; Kaizoji, Taisei; Kim, In-Mook
2008-01-01
We study the temporal evolutions of three stock markets: the Standard and Poor's 500 index, the Nikkei 225 Stock Average, and the Korea Composite Stock Price Index. We observe that the probability density function of the log-return has a fat tail, but the tail index has been increasing continuously in recent years. We have also found that the variance of the autocorrelation function, the scaling exponent of the standard deviation, and the statistical complexity decrease, while the entropy density increases, over time. We introduce a modified microscopic spin model and simulate it to confirm such increasing and decreasing tendencies in statistical quantities. These findings indicate that these three stock markets are becoming more efficient.
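The tail index mentioned above is commonly estimated with the Hill estimator. As a hedged sketch (not the paper's data or exact method), here it is applied to synthetic Pareto-distributed "returns" with a known tail index, generated by inverse-CDF sampling:

```python
# Hill estimator sketch on synthetic Pareto(alpha = 3) data; a larger
# estimated alpha means a thinner tail, consistent with the abstract's
# "tail index has been increasing" reading. Illustrative only.
import math, random

random.seed(42)
alpha = 3.0
samples = [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(20000)]

def hill_estimator(data, k):
    """Hill estimate of the tail index from the k largest observations."""
    tail = sorted(data)[-k:]
    x_k = tail[0]                     # smallest of the top-k order statistics
    return k / sum(math.log(x / x_k) for x in tail)

est = hill_estimator(samples, 1000)
print(round(est, 2))  # close to the true alpha = 3
```

In practice k must be chosen in a region where the estimate is stable (a "Hill plot"); too small a k is noisy, too large a k mixes in the non-tail bulk of the distribution.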
Phase transitions in the first-passage time of scale-invariant correlated processes
Carretero-Campos, Concepción; Bernaola-Galván, Pedro; Ch. Ivanov, Plamen
2012-01-01
A key quantity describing the dynamics of complex systems is the first-passage time (FPT). The statistical properties of FPT depend on the specifics of the underlying system dynamics. We present a unified approach to account for the diversity of statistical behaviors of FPT observed in real-world systems. We find three distinct regimes, separated by two transition points, with fundamentally different behavior for FPT as a function of increasing strength of the correlations in the system dynamics: stretched exponential, power-law, and saturation regimes. In the saturation regime, the average length of FPT diverges proportionally to the system size, with important implications for understanding electronic delocalization in one-dimensional correlated-disordered systems. PMID:22400544
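The first-passage time itself is easy to demonstrate by Monte Carlo. The paper's subject is correlated processes; the sketch below shows only the basic uncorrelated case, where the mean FPT to a threshold L scales diffusively like L squared (numbers illustrative):

```python
# Monte Carlo sketch of the mean first-passage time (FPT) of an
# uncorrelated Gaussian random walk to a fixed threshold |x| = L.
import random

random.seed(1)

def first_passage_time(threshold, max_steps=100_000):
    x, t = 0.0, 0
    while abs(x) < threshold and t < max_steps:
        x += random.gauss(0.0, 1.0)   # unit-variance uncorrelated steps
        t += 1
    return t

fpts = [first_passage_time(10.0) for _ in range(500)]
mean_fpt = sum(fpts) / len(fpts)
print(round(mean_fpt, 1))  # diffusive scaling: roughly threshold**2 = 100
```

Introducing long-range correlations in the step sequence is what moves the FPT statistics through the stretched-exponential, power-law, and saturation regimes described in the abstract.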
Azevedo, Maria Helena Ferreira; Paula, Tarcízio Antônio Rego; Balarini, Maytê Koch; Matta, Sérgio Luiz Pinto; Peixoto, Juliano Vogas; Guião Leite, Flaviana Lima; Rossi, João Luis; da Costa, Eduardo Paulino
2008-12-01
The endocrine portion of the mammalian testis is represented by Leydig cells which, together with connective cells, leukocytes, and blood and lymphatic vessels, form the intertubular space. The arrangement and proportion of these components vary among mammalian species and form mechanisms that keep the testosterone level--the main product of the Leydig cell--two to three times higher in the interstitial fluid than in the testicular blood vessels, and 40-250 times higher in these than in the peripheral blood. Marked differences are observed among animal species regarding the abundance of Leydig cells, loose connective tissue, the degree of development and location of the lymphatic vessels, and their topographical relationship with the seminiferous tubules. In the jaguar, about 13% of the testicular parenchyma is occupied by Leydig cells, 8.3% by connective tissue and 0.3% by lymphatic vessels. Although included in standard II, as described in the literature, concerning the arrangement of the intertubular space, the jaguar has grouped rather than isolated lymphatic vessels in the intertubular space. In the jaguar the average volume of the Leydig cell was 2386 microm3 and its average nuclear diameter was 7.7 microm. A great quantity of lipid droplets of 2.3 microm diameter was observed in the Leydig cell cytoplasm of the jaguar. The Leydig cells in the jaguar occupy an average of 0.0036% of the body weight, and their average number per gram of testis was within the range for most mammals: between 20 and 40 million.
Strategies to Prevent MRSA Transmission in Community-Based Nursing Homes: A Cost Analysis.
Roghmann, Mary-Claire; Lydecker, Alison; Mody, Lona; Mullins, C Daniel; Onukwugha, Eberechukwu
2016-08-01
OBJECTIVE To estimate the costs of 3 MRSA transmission prevention scenarios compared with standard precautions in community-based nursing homes. DESIGN Cost analysis of data collected from a prospective, observational study. SETTING AND PARTICIPANTS Care activity data from 401 residents from 13 nursing homes in 2 states. METHODS Cost components included the quantities of gowns and gloves, time to don and doff gown and gloves, and unit costs. Unit costs were combined with information regarding the type and frequency of care provided over a 28-day observation period. For each scenario, the estimated costs associated with each type of care were summed across all residents to calculate an average cost and standard deviation for the full sample and for subgroups. RESULTS The average cost for standard precautions was $100 (standard deviation [SD], $77) per resident over a 28-day period. If gown and glove use for high-risk care was restricted to those with MRSA colonization or chronic skin breakdown, average costs increased to $137 (SD, $120) and $125 (SD, $109), respectively. If gowns and gloves were used for high-risk care for all residents in addition to standard precautions, the average cost per resident increased substantially to $223 (SD, $127). CONCLUSIONS The use of gowns and gloves for high-risk activities with all residents increased the estimated cost by 123% compared with standard precautions. This increase was ameliorated if specific subsets (eg, those with MRSA colonization or chronic skin breakdown) were targeted for gown and glove use for high-risk activities. Infect Control Hosp Epidemiol 2016;37:962-966.
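The cost build-up described in the Methods (quantities of gowns and gloves, don/doff time, unit costs) can be sketched as a simple per-resident calculation. All unit costs, times, and use counts below are hypothetical placeholders, not the study's figures:

```python
# Hedged sketch of a per-resident precaution cost over an observation
# period: supply costs plus don/doff labor time valued at a wage rate.
# Every number here is assumed for illustration.
def cost_per_resident(n_gown_glove_uses, gown_cost, glove_pair_cost,
                      don_doff_minutes, hourly_wage):
    supplies = n_gown_glove_uses * (gown_cost + glove_pair_cost)
    labor = n_gown_glove_uses * (don_doff_minutes / 60.0) * hourly_wage
    return supplies + labor

# e.g. 120 gown/glove uses over 28 days at assumed unit costs:
c = cost_per_resident(120, 0.45, 0.10, 1.5, 15.0)
print(round(c, 2))  # 111.0 under these assumed inputs
```

Restricting gown/glove use to high-risk care for targeted subgroups, as the study's scenarios do, reduces `n_gown_glove_uses` and hence scales the cost down roughly linearly.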
On the Lagrangian description of dissipative systems
NASA Astrophysics Data System (ADS)
Martínez-Pérez, N. E.; Ramírez, C.
2018-03-01
We consider the Lagrangian formulation with duplicated variables of dissipative mechanical systems. The application of the Noether theorem leads to physical observable quantities which are not conserved, like energy and angular momentum, and to conserved quantities, like the Hamiltonian, that generate symmetry transformations and do not correspond to observables. We show that there are simple relations among the equations satisfied by these two types of quantities. In the case of the damped harmonic oscillator, the algebra of Feshbach and Tikochinsky follows from the quantities obtained by the Noether theorem. Furthermore, if we consider the whole dynamics, the degrees of freedom separate into a physical and an unphysical sector. We analyze several cases, with linear and nonlinear dissipative forces; the physical consistency of the solutions is ensured, and we observe that the unphysical sector always has the trivial solution.
Estimates of average annual tributary inflow to the lower Colorado River, Hoover Dam to Mexico
Owen-Joyce, Sandra J.
1987-01-01
Estimates of tributary inflow by basin or area and by surface water or groundwater are presented in this report and itemized by subreaches in tabular form. Total estimated average annual tributary inflow to the Colorado River between Hoover Dam and Mexico, excluding the measured tributaries, is 96,000 acre-ft or about 1% of the 7.5 million acre-ft/yr of Colorado River water apportioned to the States in the lower Colorado River basin. About 62% of the tributary inflow originates in Arizona, 30% in California, and 8% in Nevada. Tributary inflow is a small component in the water budget for the river. Most of the quantities of unmeasured tributary inflow were estimated in previous studies and were based on mean annual precipitation for 1931-60. Because mean annual precipitation for 1951-80 did not differ significantly from that of 1931-60, these tributary inflow estimates are assumed to be valid for use in 1984. Measured average annual runoff per unit drainage area on the Bill Williams River has remained the same. Surface water inflow from unmeasured tributaries is infrequent and is not captured in surface reservoirs in any of the States; it flows to the Colorado River. Average annual runoff can be used in a water budget, although in wet years runoff may be large enough to affect the calculation of consumptive use and to be estimated from hydrographs for the Colorado River gaging stations. Estimates of groundwater inflow to the Colorado River valley are based on groundwater recharge estimates in the bordering areas, which have not significantly changed through time. In most areas adjacent to the Colorado River valley, groundwater pumpage is small and pumping has not significantly affected the quantity of groundwater discharged to the Colorado River valley. 
In some areas where groundwater pumpage exceeds the quantity of groundwater discharge and water levels have declined, the quantity of discharge probably has decreased and groundwater inflow to the Colorado River valley will eventually be reduced if not stopped completely. Groundwater discharged at springs below Hoover Dam is unused and flows directly to the Colorado River. (Lantz-PTT)
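The budget figures quoted above can be checked with simple arithmetic; the numbers below are the ones stated in the abstract itself:

```python
# Arithmetic check of the quoted water-budget figures: unmeasured tributary
# inflow as a share of the apportioned flow, and the state shares.
tributary_inflow_af = 96_000      # acre-ft/yr, estimated unmeasured inflow
apportioned_af = 7_500_000        # acre-ft/yr apportioned to lower-basin states

share = tributary_inflow_af / apportioned_af * 100
print(round(share, 2))            # ≈ 1.28%, i.e. "about 1%"

state_shares = {"Arizona": 62, "California": 30, "Nevada": 8}  # percent
assert sum(state_shares.values()) == 100
```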
Observational Learning of Quantity Conservation and Piagetian Generalization Tasks
ERIC Educational Resources Information Center
Charbonneau, Claude; And Others
1976-01-01
Twenty first-graders observed an adult model perform a quantity conservation task. The children were then tested on a series of generalization tasks immediately, after one week, and after three months. The results suggested that the social experience of observation appeared to activate a cognitive restructuring of the children's mental operations.…
Week Long Topography Study of Young Adults Using Electronic Cigarettes in Their Natural Environment.
Robinson, R J; Hensel, E C; Roundtree, K A; Difrancesco, A G; Nonnemaker, J M; Lee, Y O
2016-01-01
Results of an observational, descriptive study quantifying topography characteristics of twenty first-generation electronic nicotine delivery system (ENDS) users in their natural environment for a one-week observation period are presented. The study quantifies inter-participant variation in puffing topography between users and the intra-participant variation for each user observed during one week of use in their natural environment. Puff topography characteristics presented for each user include mean puff duration, flow rate and volume for each participant, along with descriptive statistics of each quantity. Exposure characteristics including the number of vaping sessions, total number of puffs, and cumulative volume of aerosol generated from ENDS use (e-liquid aerosol) are reported for each participant for a one-week exposure period and an effective daily average exposure. Significant inter-participant and intra-participant variation in puff topography was observed. The observed range of natural use environment characteristics is used to propose a set of topography protocols for use as command inputs to drive machine-puffed electronic nicotine delivery systems in a controlled laboratory environment.
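The per-participant summaries described above reduce to descriptive statistics over recorded puffs, with cumulative aerosol volume as the sum of flow times duration per puff. A minimal sketch on synthetic puff records (the data below are invented, not the study's):

```python
# Sketch of per-participant puff topography summaries: mean and spread of
# puff duration, mean flow rate, and cumulative volume. Data are synthetic.
from statistics import mean, stdev

# (duration_s, flow_ml_per_s) per recorded puff for one hypothetical user
puffs = [(2.0, 25.0), (3.5, 20.0), (1.5, 30.0), (2.5, 22.0)]

durations = [d for d, _ in puffs]
flows = [f for _, f in puffs]
volumes = [d * f for d, f in puffs]          # per-puff volume, mL

print(round(mean(durations), 2), round(stdev(durations), 2))
print(round(mean(flows), 1), round(sum(volumes), 1))   # cumulative exposure
```

Comparing these summaries across users gives the inter-participant variation, while the day-to-day spread within one user's records gives the intra-participant variation reported in the study.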
Non-sky-averaged sensitivity curves for space-based gravitational-wave observatories
NASA Astrophysics Data System (ADS)
Vallisneri, Michele; Galley, Chad R.
2012-06-01
The signal-to-noise ratio (SNR) is used in gravitational-wave observations as the basic figure of merit for detection confidence and, together with the Fisher matrix, for the amount of physical information that can be extracted from a detected signal. SNRs are usually computed from a sensitivity curve, which describes the gravitational-wave amplitude needed by a monochromatic source of given frequency to achieve a threshold SNR. Although the term ‘sensitivity’ is used loosely to refer to the detector’s noise spectral density, the two quantities are not the same: the sensitivity includes also the frequency- and orientation-dependent response of the detector to gravitational waves and takes into account the duration of observation. For interferometric space-based detectors similar to LISA, which are sensitive to long-lived signals and have constantly changing position and orientation, exact SNRs need to be computed on a source-by-source basis. For convenience, most authors prefer to work with sky-averaged sensitivities, accepting inaccurate SNRs for individual sources and giving up control over the statistical distribution of SNRs for source populations. In this paper, we describe a straightforward end-to-end recipe to compute the non-sky-averaged sensitivity of interferometric space-based detectors of any geometry. This recipe includes the effects of spacecraft motion and of seasonal variations in the partially subtracted confusion foreground from Galactic binaries, and it can be used to generate a sampling distribution of sensitivities for a given source population. In effect, we derive error bars for the sky-averaged sensitivity curve, which provide a stringent statistical interpretation for previously unqualified statements about sky-averaged SNRs. As a worked-out example, we consider isotropic and Galactic-disk populations of monochromatic sources, as observed with the ‘classic LISA’ configuration. 
We confirm that the (standard) inverse-rms average sensitivity for the isotropic population remains the same whether or not the LISA orbits are included in the computation. However, detector motion tightens the distribution of sensitivities, so for 50% of sources the sensitivity is within 30% of its average. For the Galactic-disk population, the average and the distribution of the sensitivity for a moving detector turn out to be similar to the isotropic case.
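The way a sensitivity curve is used can be stated in one line: if h_sens(f) is the amplitude a monochromatic source needs at frequency f to reach a threshold SNR rho_thr, then a source of amplitude h has SNR = rho_thr * h / h_sens(f). A sketch with placeholder numbers (not actual LISA values):

```python
# Sketch of SNR from a sensitivity curve: the sensitivity is the amplitude
# needed for a threshold SNR, so SNR scales linearly with source amplitude.
# The threshold and amplitudes below are placeholders, not LISA values.
RHO_THRESHOLD = 5.0   # assumed detection threshold SNR

def snr(h_source, h_sensitivity, rho_thr=RHO_THRESHOLD):
    return rho_thr * h_source / h_sensitivity

rho = snr(4e-21, 1e-21)
print(rho)  # 20.0: four times the threshold amplitude gives four times rho_thr
```

The paper's point is that h_sens itself is not sky-averaged: it depends on source position and orientation through the detector response, so rho carries a statistical distribution over a source population rather than a single value.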
Leiva-Valenzuela, Gabriel A; Quilaqueo, Marcela; Lagos, Daniela; Estay, Danilo; Pedreschi, Franco
2018-04-01
The aim of this research was to determine the effect of composition (dietary fiber = DF, fat = F, and gluten = G) and baking time on the target microstructural parameters observed in images of potato and wheat starch biscuits. Microstructures were studied using a scanning electron microscope (SEM). Non-enzymatic browning (NEB) was assessed using color image analysis. Texture and moisture analyses were performed to better understand the baking process. Analysis of images revealed that the starch granules retained their native form at the end of baking, suggesting their incomplete gelatinization. Granule size was similar at several different baking times, with an average equivalent diameter of 9 and 27 µm for wheat and potato starch, respectively. However, samples with different levels of DF and G increased in circularity by more than 30% during baking, also increasing in hardness. NEB developed during baking, with the maximum increase observed between 13 and 19 min. This was reflected in decreased luminosity (L*) values due to a decrease in moisture levels. After 19 min, luminosity did not vary significantly. The ingredients used, as well as their quantities, can affect sample L* values. Therefore, choosing the correct ingredients and quantities can lead to different microstructures in the biscuits, with varying amounts of NEB products.
Budget of Turbulent Kinetic Energy in a Shock Wave Boundary-Layer Interaction
NASA Technical Reports Server (NTRS)
Vyas, Manan; Waindim, Mbu; Gaitonde, Datta
2016-01-01
Implicit large-eddy simulation (ILES) of a shock wave boundary-layer interaction (SBLI) was performed. Quantities present in the exact transport equation of the turbulent kinetic energy (TKE) were accumulated. These quantities will be used to calculate the components of the TKE budget, such as production, dissipation, transport, and dilatation. Correlations of these terms will be presented to study the growth of and interaction between various terms. A comparison with its RANS (Reynolds-Averaged Navier-Stokes) counterpart will also be presented.
Turbulent thermal superstructures in Rayleigh-Bénard convection
NASA Astrophysics Data System (ADS)
Stevens, Richard J. A. M.; Blass, Alexander; Zhu, Xiaojue; Verzicco, Roberto; Lohse, Detlef
2018-04-01
We report the observation of superstructures, i.e., very large-scale and long-lived coherent structures in highly turbulent Rayleigh-Bénard convection up to Rayleigh number Ra = 10^9. We perform direct numerical simulations in horizontally periodic domains with aspect ratios up to Γ = 128. In the considered Ra number regime the thermal superstructures have a horizontal extent of six to seven times the height of the domain and their size is independent of Ra. Many laboratory experiments and numerical simulations have focused on small-aspect-ratio cells in order to achieve the highest possible Ra. However, here we show that for very high Ra, integral quantities such as the Nusselt number and the volume-averaged Reynolds number only converge to the large-aspect-ratio limit around Γ ≈ 4, while horizontally averaged statistics such as the standard deviation and kurtosis converge around Γ ≈ 8, the integral scale converges around Γ ≈ 32, and the peak positions of the temperature variance and turbulent kinetic energy spectra only converge around Γ ≈ 64.
NASA Technical Reports Server (NTRS)
Hickey, M. P.
1988-01-01
This paper examines the effect of inclusion of Coriolis force and eddy dissipation in the gravity wave dynamics theory of Walterscheid et al. (1987). It was found that the values of the ratio 'eta' (where eta is a complex quantity describing the relationship between the intensity oscillation about the time-averaged intensity and the temperature oscillation about the time-averaged temperature) strongly depend on the wave period and the horizontal wavelength; thus, if comparisons are to be made between observations and theory, horizontal wavelengths will need to be measured in conjunction with the OH nightglow measurements. For waves with horizontal wavelengths up to 1000 km, the eddy dissipation was found to dominate over the Coriolis force in the gravity wave dynamics and also in the associated values of eta. However, for waves with horizontal wavelengths of 10,000 km or more, the Coriolis force cannot be neglected; it has to be taken into account along with the eddy dissipation.
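Because eta is a complex ratio of two oscillations, it carries both an amplitude ratio and a phase lag; this is naturally represented with complex phasors. A sketch with illustrative input values (not numbers from the paper):

```python
# Sketch of the complex ratio eta: the fractional intensity oscillation
# divided by the fractional temperature oscillation, as complex phasors.
# Amplitudes and phases below are illustrative only.
import cmath

dI_over_I = 0.04 * cmath.exp(1j * 0.5)   # fractional intensity oscillation
dT_over_T = 0.02 * cmath.exp(1j * 0.1)   # fractional temperature oscillation

eta = dI_over_I / dT_over_T
print(round(abs(eta), 2), round(cmath.phase(eta), 2))  # magnitude 2.0, phase 0.4 rad
```

The dependence of both |eta| and its phase on wave period and horizontal wavelength is what makes simultaneous wavelength measurements necessary when comparing nightglow observations with the theory.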
Effects of pan cooking on micropollutants in meat.
Planche, Christelle; Ratel, Jérémy; Blinet, Patrick; Mercier, Frédéric; Angénieux, Magaly; Chafey, Claude; Zinck, Julie; Marchond, Nathalie; Chevolleau, Sylvie; Marchand, Philippe; Dervilly-Pinel, Gaud; Guérin, Thierry; Debrauwer, Laurent; Engel, Erwan
2017-10-01
This work presents the effects of pan cooking on PCBs, PCDD/Fs, pesticides and trace elements in meat from a risk assessment perspective. Three different realistic cooking intensities were studied. A GC×GC-TOF/MS method was set up for the multiresidue analysis of 189 PCBs, 17 PCDD/Fs and 16 pesticides whereas Cd, As, Pb and Hg were assayed by ICP-MS. In terms of quantity, average PCB losses after cooking were 18±5% for rare, 30±3% for medium, and 48±2% for well-done meat. In contrast, average PCDD/F losses were not significant. For pesticides, no loss occurred for aldrin, lindane, DDE or DDD, whereas losses exceeding 80% were found for dieldrin, sulfotep or phorate. Losses close to the margin of error were observed for trace elements. These results are discussed in light of the physicochemical properties of the micropollutants as well as of water and fat losses into cooking juice.
Energy Drinks and Binge Drinking Predict College Students' Sleep Quantity, Quality, and Tiredness.
Patrick, Megan E; Griffin, Jamie; Huntley, Edward D; Maggs, Jennifer L
2018-01-01
This study examines whether energy drink use and binge drinking predict sleep quantity, sleep quality, and next-day tiredness among college students. Web-based daily data on substance use and sleep were collected across four semesters in 2009 and 2010 from 667 individuals for up to 56 days each, yielding information on 25,616 person-days. Controlling for average levels of energy drink use and binge drinking (i.e., 4+ drinks for women, 5+ drinks for men), on days when students consumed energy drinks, they reported lower sleep quantity and quality that night, and greater next-day tiredness, compared to days they did not use energy drinks. Similarly, on days when students binge drank, they reported lower sleep quantity and quality that night, and greater next-day tiredness, compared to days they did not binge drink. There was no significant interaction effect between binge drinking and energy drink use on the outcomes.
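The design described above, day-level effects "controlling for average levels", is typically implemented by within-person centering: subtracting each person's mean use from their daily value, so between-person and within-person variation are separated. A minimal sketch on synthetic person-day records:

```python
# Sketch of within-person centering for daily diary data: each day-level
# predictor is split into a person mean and a daily deviation. Records
# below are synthetic, for illustration only.
days = [  # (person_id, energy_drink_day, hours_slept)
    (1, 1, 6.0), (1, 0, 7.5), (1, 0, 7.0),
    (2, 1, 5.5), (2, 1, 6.0), (2, 0, 8.0),
]

use_by_person = {}
for pid, use, _ in days:
    use_by_person.setdefault(pid, []).append(use)
person_mean = {pid: sum(v) / len(v) for pid, v in use_by_person.items()}

# centered predictor: daily deviation from that person's average use
centered = [(pid, use - person_mean[pid], sleep) for pid, use, sleep in days]
for row in centered:
    print(row)
```

In a multilevel model, the person mean captures "heavier users sleep differently" while the centered deviation captures "on days this person used energy drinks, they slept differently", which is the study's day-level effect.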
NASA Astrophysics Data System (ADS)
Hetényi, Balázs
2014-03-01
The Drude weight, the quantity which distinguishes metals from insulators, is proportional to the second derivative of the ground state energy with respect to a flux at zero flux. The same expression also appears in the definition of the Meissner weight, the quantity which indicates superconductivity, as well as in the definition of non-classical rotational inertia of bosonic superfluids. It is shown that the difference between these quantities depends on the interpretation of the average momentum term, which can be understood as the expectation value of the total momentum (Drude weight), the sum of the expectation values of single momenta (rotational inertia of a superfluid), or the sum over expectation values of momentum pairs (Meissner weight). This distinction appears naturally when the current from which the particular transport quantity is derived is cast in terms of shift operators.
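The flux-derivative definition above can be illustrated numerically. The following is a minimal sketch, not the paper's calculation: it estimates the second derivative of the ground-state energy with respect to a threading flux by central finite differences, for non-interacting fermions on a tight-binding ring (an assumed toy model with hopping t = 1; convention-dependent prefactors are omitted).

```python
import numpy as np

def ground_state_energy(L, n_electrons, flux, t=1.0):
    """Ground-state energy of free fermions on an L-site ring
    threaded by a flux (Peierls phase flux/L per bond)."""
    k = 2.0 * np.pi * np.arange(L) / L
    eps = -2.0 * t * np.cos(k + flux / L)   # single-particle energies
    return np.sort(eps)[:n_electrons].sum()

def flux_second_derivative(L, n_electrons, h=1e-3):
    """Central-difference estimate of L * d^2 E0 / d(flux)^2 at zero
    flux; up to convention-dependent factors, the Drude weight."""
    e_p = ground_state_energy(L, n_electrons, +h)
    e_0 = ground_state_energy(L, n_electrons, 0.0)
    e_m = ground_state_energy(L, n_electrons, -h)
    return L * (e_p - 2.0 * e_0 + e_m) / h**2

# An odd electron count gives a closed shell (no degeneracy at zero flux).
D = flux_second_derivative(L=101, n_electrons=51)
print(D)  # close to 2*sin(k_F)/pi ~ 0.64 near half filling
```

For this metallic toy model the value stays finite as L grows; for an insulator the same finite difference would vanish with system size, which is the distinction the abstract refers to.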
Complexity study on the Cournot-Bertrand mixed duopoly game model with market share preference
NASA Astrophysics Data System (ADS)
Ma, Junhai; Sun, Lijian; Hou, Shunqi; Zhan, Xueli
2018-02-01
In this paper, a Cournot-Bertrand duopoly model with market share preference is established. It is assumed that there is a degree of product differentiation between the two firms, where one firm takes price as its decision variable and the other takes quantity. Both firms are boundedly rational, with linear cost and demand functions. The stability of the equilibrium points is analyzed, and the effects of some parameters (α, β, d and v1) on model stability are studied. Basins of attraction are investigated, and the evolution process is shown as the output adjustment speed increases. The simulation results show that instability increases the average utility of the quantity-setting firm and reduces the average utility of the price-setting firm.
Alslaibi, Tamer M; Abustan, Ismail; Mogheir, Yunes K; Afifi, Samir
2013-01-01
Landfills are a source of groundwater pollution in the Gaza Strip. This study focused on the Deir Al Balah landfill, which is unique among landfill sites in the Gaza Strip in that it has a lining system and a leachate recirculation system. The objective of this article is to assess the quantity of leachate generated and its percolation to the groundwater aquifer at this site, using (i) the Hydrologic Evaluation of Landfill Performance (HELP) model and (ii) the water balance method (WBM). The results show that, using the HELP model, the average volume of leachate discharged from the Deir Al Balah landfill during the period 1997 to 2007 was around 6800 m3/year, while the average volume of leachate percolating through the clay layer was 550 m3/year, around 8% of the generated leachate. The WBM indicated that the average volume of leachate discharged during the same period was around 7660 m3/year, about half of which comes from the moisture content of the waste, while the remainder comes from the infiltration of precipitation and re-circulated leachate. The quantities of leachate estimated by the two methods were therefore very close. However, compared with the measured leachate quantity, these results were overestimates, and they indicate a serious threat to the groundwater aquifer, as there was no separation between municipal, hazardous and industrial wastes in the area.
A comparison of water vapor quantities from model short-range forecasts and ARM observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hnilo, J J
2006-03-17
Model evolution and improvement is complicated by the lack of high quality observational data. To address a major limitation of these measurements the Atmospheric Radiation Measurement (ARM) program was formed. For the second quarter ARM metric we will make use of new water vapor data that has become available, called the 'Merged-sounding' value-added product (referred to as OBS within the text), at three sites: the North Slope of Alaska (NSA), Darwin, Australia (DAR) and the Southern Great Plains (SGP), and compare these observations to model forecast data. Two time periods will be analyzed: March 2000 for the SGP and October 2004 for both DAR and NSA. The merged-sounding data have been interpolated to 37 pressure levels (from 1000 hPa to 100 hPa at 25 hPa increments) and time averaged to 3-hourly data for direct comparison to our model output.
NASA Astrophysics Data System (ADS)
Heinze, Rieke; Moseley, Christopher; Böske, Lennart Nils; Muppa, Shravan Kumar; Maurer, Vera; Raasch, Siegfried; Stevens, Bjorn
2017-06-01
Large-eddy simulations (LESs) of a multi-week period during the HD(CP)2 (High-Definition Clouds and Precipitation for advancing Climate Prediction) Observational Prototype Experiment (HOPE) conducted in Germany are evaluated with respect to mean boundary layer quantities and turbulence statistics. Two LES models are used in a semi-idealized setup, forced with mesoscale model output to account for the synoptic-scale conditions. Evaluation is performed against the HOPE observations. Mean boundary layer characteristics such as the boundary layer depth are in general agreement with observations, while simulating shallow-cumulus layers in agreement with the measurements poses a challenge for both LES models. Variance profiles agree satisfactorily with lidar measurements. The results depend on how the forcing data stemming from the mesoscale model output are constructed; the mean boundary layer characteristics become less sensitive when the averaging domain for the forcing is large enough to filter out mesoscale fluctuations.
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)
2000-01-01
Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
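The sampling-error component discussed above can be demonstrated with a small Monte Carlo sketch (synthetic data, not the SSM/I analysis): a month of intermittent "true" rain is subsampled at random overpass times, and the RMS error of the resulting grid-box mean is estimated as a function of the number of overpasses.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampling_rms(series, n_samples, n_trials=2000):
    """RMS error of estimating the series mean from n_samples
    randomly timed observations (a crude stand-in for the
    intermittent sampling of a low-orbiting satellite)."""
    true_mean = series.mean()
    errs = np.empty(n_trials)
    for i in range(n_trials):
        idx = rng.choice(series.size, size=n_samples, replace=False)
        errs[i] = series[idx].mean() - true_mean
    return np.sqrt(np.mean(errs ** 2))

# Synthetic intermittent rain: dry 90% of the time, exponential rain
# rates when raining (assumed illustrative statistics, hourly, 1 month).
hours = 720
wet = rng.random(hours) < 0.1
rain = np.where(wet, rng.exponential(2.0, hours), 0.0)

rms_30 = sampling_rms(rain, 30)    # roughly one overpass per day
rms_120 = sampling_rms(rain, 120)  # four times as many overpasses
print(rms_30, rms_120)
```

Quadrupling the sample count roughly halves the RMS error (the familiar 1/sqrt(N) behavior), and rescaling the mean rain rate scales the error with it: the kind of simple dependence on local average rain rate the abstract refers to.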
Wehrenberg, C. E.; Comley, A. J.; Barton, N. R.; ...
2015-09-29
We report direct lattice level measurements of plastic relaxation kinetics through time-resolved, in-situ Laue diffraction of shock-compressed single-crystal [001] Ta at pressures of 27-210 GPa. For a 50 GPa shock, a range of shear strains is observed extending up to the uniaxial limit for early data points (<0.6 ns) and the average shear strain relaxes to a near steady state over ~1 ns. For 80 and 125 GPa shocks, the measured shear strains are fully relaxed already at 200 ps, consistent with rapid relaxation associated with the predicted threshold for homogeneous nucleation of dislocations occurring at shock pressure ~65 GPa. The relaxation rate and shear stresses are used to estimate the dislocation density, and these quantities are compared to the Livermore Multiscale Strength model as well as various molecular dynamics simulations.
NASA Astrophysics Data System (ADS)
Rawlins, M. A.; Adam, J. C.; Vorosmarty, C. J.; Serreze, M. C.; Hinzman, L. D.; Holland, M.; Shiklomanov, A.
2007-12-01
It is expected that a warming climate will be attended by an intensification of the global hydrological cycle. While there are signs of positive trends in several hydrological quantities emerging at the global scale, the scope, character, and quantitative significance of these changes are not well established. In particular, long-term increases in river discharge across Arctic Eurasia are assumed to represent such an intensification and have received considerable attention, yet no long-term change in annual precipitation across the region can be linked to the discharge trend. Given linkages and feedbacks between the arctic and global climate systems, a more complete understanding of observed changes across northern high latitudes is needed. We present a working definition of an accelerated or intensified hydrological cycle and a synthesis of long-term (nominally 50-year) trends in observed freshwater stocks and fluxes across the arctic land-atmosphere-ocean system. Trend and significance measures from observed data are described alongside expectations of intensification based on GCM simulations of contemporary and future climate. Our domain of interest includes the terrestrial arctic drainage (including all of Alaska and drainage to Hudson Bay), the Arctic Ocean, and the atmosphere over the land and ocean domains. For the terrestrial Arctic, time series of spatial averages derived from station data and atmospheric reanalysis are available. Reconstructed datasets are used for quantities such as Arctic Ocean ice and liquid freshwater transports. Study goals include a comprehensive survey of past changes in freshwater across the pan-arctic, a set of benchmarks for expected changes based on an ensemble of GCM simulations, and identification of potential mechanistic linkages which may be examined with contemporary remote sensing datasets.
[Reproductive biology and artificial propagation of Acipenser sinensis below Gezhouba Dam].
Liu, Jian-yi; Wei, Qi-wei; Chen, Xi-hua; Yang, De-guo; Du, Hao; Zhu, Yong-jiu
2007-06-01
A total of 36 females and 21 males of Chinese sturgeon Acipenser sinensis were caught in 1998-2004 excluding 2002 to study the characteristics of their reproductive biology and the effect of their artificial propagation. The results showed that the body length (BL), body mass (BM) and age of the females were 240-320 cm, 140-432 kg, and 15-30 years, and those of the males were 153-284 cm, 70-244 kg and 12-26 years, respectively. The inducing rate was 93.1% for females and 100% for males, and the ova had 7 different colors. The absolute fecundity was 200,000-590,000 eggs, with an average of 358,000 eggs, and the relative fecundity to BM was 820-3,020 eggs per kg, with an average of 1,590 eggs per kg. The sperm had 4 different colors. The absolute sperm quantity obtained from one male was 1,000-5,952 ml, with an average of 2,597.8 ml, and the relative sperm quantity to BM was 1.25-31.24 ml . kg(-1), with an average of 13.3 ml . kg(-1). During the study period, the average fertilization rate in artificial propagation was 63.7%, and the hatching rate was 48.1%, with 4,762,000 fry obtained. Compared with the data in 1976, the natural reproductive capacity of the Chinese sturgeon broodstocks declined greatly.
Thomas, T K; Ritter, T; Bruden, D; Bruce, M; Byrd, K; Goldberger, R; Dobson, J; Hickel, K; Smith, J; Hennessy, T
2016-02-01
Approximately 20% of rural Alaskan homes lack in-home piped water; residents haul water to their homes. The limited quantity of water impacts the ability to meet basic hygiene needs. We assessed rates of infections impacted by water quality (waterborne, e.g. gastrointestinal infections) and quantity (water-washed, e.g. skin and respiratory infections) in communities transitioning to in-home piped water. Residents of four communities consented to a review of medical records 3 years before and after their community received piped water. We selected health encounters with ICD-9CM codes for respiratory, skin and gastrointestinal infections. We calculated annual illness episodes for each infection category after adjusting for age. We obtained 5,477 person-years of observation from 1,032 individuals. There were 9,840 illness episodes with at least one ICD-9CM code of interest: 8,155 (83%) respiratory, 1,666 (17%) skin, 241 (2%) gastrointestinal. Water use increased from an average 1.5 gallons/capita/day (g/c/d) to 25.7 g/c/d. There were significant (P-value < 0.05) declines in respiratory (16%, 95% confidence interval (CI): 11-21%), skin (20%, 95% CI: 10-30%), and gastrointestinal infections (38%, 95% CI: 13-55%). We demonstrated significant declines in respiratory, skin and gastrointestinal infections among individuals who received in-home piped water. This study reinforces the importance of adequate quantities of water for health.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isotalo, Aarno
A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletion algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation, without a need for a reference solution.
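The core idea can be sketched as an augmented linear system (a simplified illustration of the approach, not the paper's implementation): since the depletion equations are linear, appending a tally variable with dT/dt = w·n lets the same matrix-exponential solve return the exact step integral of the weighted density alongside the end-of-step densities.

```python
import numpy as np
from scipy.linalg import expm

def step_integral(A, n0, w, dt):
    """End-of-step densities and the step integral of w.n(t),
    where dn/dt = A n.  A tally row with dT/dt = w.n is appended,
    so a single matrix exponential yields both at once."""
    m = A.shape[0]
    aug = np.zeros((m + 1, m + 1))
    aug[:m, :m] = A
    aug[m, :m] = w                   # tally row accumulates w.n
    state = expm(aug * dt) @ np.append(n0, 0.0)
    return state[:m], state[m]

# Check on a single decaying nuclide: the integral of n0*exp(-lam*t)
lam, dens0, dt = 0.3, 5.0, 2.0
A = np.array([[-lam]])
n_end, integral = step_integral(A, np.array([dens0]), np.array([1.0]), dt)
exact = dens0 * (1.0 - np.exp(-lam * dt)) / lam
print(integral, exact)  # agree to floating-point precision
```

Dividing the returned integral by the step length gives the step average; with w set to energy-release weights it would give the energy released during the step.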
Equation of state in the presence of gravity
NASA Astrophysics Data System (ADS)
Kim, Hyeong-Chan; Kang, Gungwon
2016-11-01
We investigate how an equation of state for matter is affected when gravity is present. For this purpose, we consider a box of ideal gas in the presence of Newtonian gravity. In addition to the ordinary thermodynamic quantities, a characteristic variable that represents the weight per unit area relative to the average pressure is required to describe a macroscopic state of the gas. Although the density and the pressure are not uniform due to the presence of gravity, the ideal gas law itself is satisfied for the thermodynamic quantities when averaged over the system. Assuming further that the system follows an adiabatic process, we obtain a new relation between the averaged pressure and density, which differs from the conventional equation of state for the ideal gas in the absence of gravity. Applying our results to a small volume in a Newtonian star, however, we find that the conventional equation of state is reliable for most astrophysical situations when the characteristic scale is small. On the other hand, gravity effects become significant near the surface of a Newtonian star.
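The statement that the averaged quantities still satisfy the ideal gas law can be checked numerically. A minimal sketch (isothermal column with illustrative nitrogen-like values; the paper treats the adiabatic case more generally): because p = rho*R*T/M holds pointwise, the same relation survives any common averaging of p and rho.

```python
import numpy as np

# Isothermal ideal-gas column of height H in uniform gravity.
# Illustrative SI values for a nitrogen-like gas (assumed).
R, M, T, g, H = 8.314, 0.028, 300.0, 9.81, 5.0e4
p0 = 1.0e5                        # pressure at the bottom of the box

z = np.linspace(0.0, H, 100_001)
scale = R * T / (M * g)           # barometric scale height
p = p0 * np.exp(-z / scale)       # local pressure profile
rho = p * M / (R * T)             # local density (ideal gas, pointwise)

p_avg = p.mean()                  # height-averaged pressure
rho_avg = rho.mean()              # height-averaged density

# The averages still obey the ideal gas law, despite the nonuniformity:
print(p_avg, rho_avg * R * T / M)
```

The nonuniformity itself, e.g. the pressure drop across the box relative to p_avg, is roughly the kind of extra weight-per-area variable the abstract says is needed to specify the macroscopic state.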
An analysis of the equivalent dose calculation for the remainder tissues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zankl, M.; Drexler, G.
1995-09-01
In the 1990 Recommendations of the International Commission on Radiological Protection, the risk-weighted quantity "effective dose equivalent" was replaced by a similar quantity, "effective dose." Among other alterations, the selection of the organs and tissues contributing to the risk-weighted quantity and their respective weighting factors were changed, including a modified definition of the so-called "remainder." Close consideration of this latter definition shows that it causes certain ambiguities and unexpected effects, which are dealt with in the following. For several geometries of external photon irradiation, the numerical differences between two possible methods of evaluating the remainder dose from the doses to ten single organs, namely as an arithmetic mean or as a mass-weighted average, are assessed. It is shown that deviations from these averaging procedures, as prescribed for those cases where a remainder organ receives a higher dose than an organ with a specified weighting factor, cause discontinuities in the energy dependence of the remainder dose and, consequently, non-additivity of this quantity. These problems are discussed, and it is shown that, although the numerical consequences for the calculation of the effective dose are small, this unsatisfactory situation needs clarification. One approach might be to abolish some of the ICRP guidance relating to the appropriate tissue weighting factors for the remainder tissues and organs and to make other guidance more precise. 14 refs., 12 figs., 2 tabs.
Hahn, C. J. [University of Arizona]; Warren, S. G. [University of Washington]
2007-01-01
Surface synoptic weather reports from ships and land stations worldwide were processed to produce a global cloud climatology which includes: total cloud cover, the amount and frequency of occurrence of nine cloud types within three levels of the troposphere, the frequency of occurrence of clear sky and of precipitation, the base heights of low clouds, and the non-overlapped amounts of middle and high clouds. Synoptic weather reports are made every three hours; the cloud information in a report is obtained visually by human observers. The reports used here cover the period 1971-96 for land and 1954-2008 for ocean. This digital archive provides multi-year monthly, seasonal, and annual averages in 5x5-degree grid boxes (or 10x10-degree boxes for some quantities over the ocean). Daytime and nighttime averages, as well as the diurnal average (average of day and night), are given. Nighttime averages were computed using only those reports that met an "illuminance criterion" (i.e., made under adequate moonlight or twilight), thus minimizing the "night-detection bias" and making possible the determination of diurnal cycles and nighttime trends for cloud types. The phase and amplitude of the first harmonic of both the diurnal cycle and the annual cycle are given for the various cloud types. Cloud averages for individual years are also given for the ocean for each of 4 seasons, and for each of the 12 months (daytime-only averages for the months). [Individual years for land are not gridded, but are given for individual stations in a companion data set (CDIAC's NDP-026D).] This analysis used 185 million reports from 5388 weather stations on continents and islands, and 50 million reports from ships; these reports passed a series of quality-control checks. This analysis updates (and in most ways supersedes) the previous cloud climatology constructed by the authors in the 1980s.
Many of the long-term averages described here are mapped on the University of Washington, Department of Atmospheric Sciences Web site. The Online Cloud Atlas containing NDP-026E data is available via the University of Washington.
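The phase and amplitude of a first harmonic, as archived for the diurnal and annual cycles, can be extracted from equally spaced averages with a single Fourier coefficient. A minimal sketch on synthetic three-hourly cloud amounts (assumed values, not NDP-026E data):

```python
import numpy as np

def first_harmonic(samples):
    """Amplitude and phase (hour of maximum) of the first harmonic
    of a diurnal cycle sampled at equal intervals over 24 h."""
    n = len(samples)
    c = np.fft.rfft(samples)[1]            # first Fourier coefficient
    amplitude = 2.0 * np.abs(c) / n
    phase_hours = (-np.angle(c)) * 24.0 / (2.0 * np.pi) % 24.0
    return amplitude, phase_hours

# Synthetic cloud-amount cycle peaking at 15:00 local time (assumed).
hours = np.arange(0, 24, 3)                # synoptic reporting times
cloud = 50.0 + 10.0 * np.cos(2 * np.pi * (hours - 15.0) / 24.0)
amp, peak = first_harmonic(cloud)
print(amp, peak)   # recovers amplitude 10 and peak hour 15
```

The same code applied to twelve monthly means, with 24 hours replaced by 12 months in the phase conversion, yields the annual-cycle harmonic.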
Predicting Culex pipiens/restuans population dynamics by interval lagged weather data
2013-01-01
Background: Culex pipiens/restuans mosquitoes are important vectors for a variety of arthropod-borne viral infections. In this study, the associations between 20 years of mosquito capture data and the time-lagged environmental quantities daytime length, temperature, precipitation, relative humidity and wind speed were used to generate a predictive model for the population dynamics of this vector species. Methods: The mosquito population in the study area was represented by averaged time series of mosquito counts captured at 6 sites in Cook County (Illinois, USA). Cross-correlation maps (CCMs) were compiled to investigate the association between mosquito abundances and environmental quantities. The results obtained from the CCMs were incorporated into a Poisson regression to generate a predictive model. To optimize the predictive model, the time lags obtained from the CCMs were adjusted using a genetic algorithm. Results: CCMs for weekly data showed a highly positive correlation of mosquito abundances with daytime length 4 to 5 weeks prior to capture (quantified by a Spearman rank-order correlation of rS = 0.898) and with temperature during the 2 weeks prior to capture (rS = 0.870). Maximal negative correlations were found for wind speed averaged over the 3 weeks prior to capture (rS = −0.621). Cx. pipiens/restuans population dynamics was predicted by integrating the CCM results in Poisson regression models, which were used to simulate the average seasonal cycle of mosquito abundance. Verification with observations resulted in a correlation of rS = 0.899 for daily and rS = 0.917 for weekly data. Applying the optimized models to the entire 20-year time series also resulted in a suitable fit, with rS = 0.876 for daily and rS = 0.899 for weekly data. Conclusions: The study demonstrates the use of interval-lagged weather data to predict mosquito abundances with reasonable accuracy, especially for weekly Cx. pipiens/restuans populations.
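The interval-lagged association underlying a cross-correlation map can be sketched as follows (synthetic data; the illustrative response and all values are assumptions, not the study's series): each CCM cell is a Spearman correlation between weekly counts and an environmental quantity averaged over a lag window.

```python
import numpy as np
from scipy.stats import spearmanr

def interval_lagged_corr(counts, env, lag_start, lag_end):
    """Spearman correlation between counts[t] and env averaged over
    the interval [t - lag_end, t - lag_start] (in weeks): one cell
    of a cross-correlation map (CCM)."""
    lagged = np.array([
        env[t - lag_end : t - lag_start + 1].mean()
        for t in range(lag_end, len(counts))
    ])
    r, _ = spearmanr(counts[lag_end:], lagged)
    return r

# Synthetic example: counts respond to temperature two weeks earlier.
rng = np.random.default_rng(1)
weeks = np.arange(200)
temp = 15 + 10 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 1, 200)
counts = np.maximum(0, 5 * np.roll(temp, 2) + rng.normal(0, 5, 200))

r = interval_lagged_corr(counts, temp, 1, 3)
print(r)  # strongly positive: the lag window covers the true lag
```

Sweeping lag_start and lag_end over a grid and colouring each cell by r reproduces the map structure described in the abstract.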
Observed Budgets for the Global Climate
NASA Astrophysics Data System (ADS)
Kottek, M.; Haimberger, L.; Rubel, F.; Hantel, M.
2003-04-01
A global dataset of selected budget quantities specifying the present climate for the period 1991-1995 has been compiled. This dataset is an essential component of the new climate volume within the series Landolt-Börnstein - Numerical Data and Functional Relationships in Science and Technology, to be published this year. Budget quantities are those that appear in a budget equation. Emphasis in this collection is placed on observational data, both in situ and remotely sensed. The fields are presented as monthly means with a uniform spatial resolution of one degree. The main focus is on climatologically relevant state and flux quantities at the earth's surface and at the top of the atmosphere. Some secondary and complex climate elements are also presented (e.g. tornado frequency). The advance of this collection over other climate datasets is, apart from the quality of the input data, that all fields are presented in standardized form as far as possible. Further, visualization loops of the global fields in various projections will be available to the user in the eventual book. For some budget quantities, e.g. precipitation, it has been necessary to merge data from different sources; insufficiently observed parameters have been supplemented with the ECMWF ERA-40 reanalyses. If all quantities of a budget have been evaluated, the gross residual provides an estimate of data quality. For example, the global water budget residual is found to be up to 30%, depending on the data used. This suggests that the observation of global climate parameters needs further improvement.
Chamberlain, Mike; Gräfe, James L; Aslam; Byun, Soo-Hyun; Chettle, David R; Egden, Lesley M; Webber, Colin E; McNeill, Fiona E
2012-03-01
Humans can be exposed to fluorine (F) through their diet, occupation, environment and oral dental care products. Fluorine, at proper dosages, is believed to have positive effects by reducing the incidence of dental caries, but fluorine toxicity can occur when people are exposed to excessive quantities of fluorine. In this paper we present the results of a small pilot in vivo study on 33 participants living in Southwestern Ontario, Canada. The mean age of participants was 45 ± 18 years, with a range of 20-87 years. The observed calcium-normalized hand-bone-fluorine concentrations in this small pilot study ranged from 1.1 to 8.8 mg F/g Ca. Every person measured in this study had levels of fluorine in bone above the detection limit of the system. The average fluorine concentration in bone was found to be 3.5 ± 0.4 mg F/g Ca. No difference was observed in average concentration for men and women. In addition, a significant correlation (r^2 = 0.55, p < 0.001) was observed between hand-bone-fluorine content and age. The amount of fluorine was found to increase at a rate of 0.084 ± 0.014 mg F/g Ca per year. There was no significant difference observed in this small group of subjects between the accumulation rates in men and women. To the best of our knowledge, this is the first time data from in vivo measurement of fluorine content in humans by neutron activation analysis have been presented. The data determined by this technique were found to be consistent with results from ex vivo studies from other countries. We suggest that the data demonstrate that this low-risk, non-invasive diagnostic technique will permit the routine assessment of bone-fluorine content, with potential application in the study of clinical bone-related diseases. This small study demonstrated that people in Southern Ontario are exposed to fluoride in measurable quantities, and that fluoride can be seen to accumulate in bone with age.
However, all volunteers were found to have levels below those expected with clinical fluorosis, and only one older subject was found to have levels comparable with preclinical exposure.
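The reported accumulation rate and r^2 come from a simple linear fit of bone fluorine against age. A sketch on synthetic data generated to mimic the reported numbers (illustrative only, not the study's measurements):

```python
import numpy as np

def rate_and_r2(age, conc):
    """Least-squares accumulation rate (slope), intercept and r^2."""
    slope, intercept = np.polyfit(age, conc, 1)
    pred = slope * age + intercept
    ss_res = np.sum((conc - pred) ** 2)
    ss_tot = np.sum((conc - conc.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Synthetic cohort mimicking the reported rate (assumed noise level).
rng = np.random.default_rng(2)
age = rng.uniform(20, 87, 33)                  # 33 participants
conc = 0.084 * age + rng.normal(0, 1.0, 33)    # mg F / g Ca

slope, intercept, r2 = rate_and_r2(age, conc)
print(slope, r2)
```

With a noise level of this order, a 33-person cohort recovers a slope near 0.084 mg F/g Ca per year with an r^2 broadly comparable to the reported 0.55.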
Dushimirimana, Severin; Hance, Thierry; Damiens, David
2012-01-01
Summary: The sterile insect technique (SIT) is increasingly used to control pest insect populations. The success of SIT control programs depends on the ability to release sterile males and on the capacity of sterile males to compete with wild males to inseminate wild females. In this study, we evaluated the mating performance of Schistocerca gregaria (Försk.) males irradiated with 4 Gray. We compared reproductive traits, such as duration of precopulation time, mating duration, quantity of sperm stored by females after copulation, number of females mated successively and postmating competition of irradiated males with non-irradiated males. Irradiated males were able to mate, but the resulting number of offspring was dramatically reduced compared to the average number of offspring observed during a regular mating. During a single copulation, irradiated males transferred fewer sperm than regular males but, theoretically, this quantity is enough to fertilize all the eggs produced by a female during its reproductive life. Irradiated males also had the ability to remove sperm from a previous mating with non-irradiated males. This new information on mating strategies helps explain the post-copulation guarding behaviour of S. gregaria.
Two-walker discrete-time quantum walks on the line with percolation
NASA Astrophysics Data System (ADS)
Rigovacca, L.; di Franco, C.
2016-02-01
One goal of quantum-walk research is the exploitation of the intrinsic quantum nature of multiple walkers, in order to achieve the full computational power of the model. Here we study the behaviour of two non-interacting particles performing a quantum walk on the line when the possibility of lattice imperfections, in the form of missing links, is considered. We investigate two regimes, static and dynamical percolation, which correspond to different time scales for the evolution of the imperfections with respect to the quantum-walk one. By studying the qualitative behaviour of three two-particle quantities for different probabilities of having missing bonds, we argue that the chosen symmetry under particle exchange of the input state strongly affects the output of the walk, even in noisy and highly non-ideal regimes. We provide evidence against the possibility of gathering information about the walkers' indistinguishability from the observation of bunching phenomena in the output distribution, in all those situations that require a comparison between averaged quantities. Although the spread of the walk is not substantially changed by the addition of a second particle, we show that the presence of multiple walkers can be beneficial for a procedure to estimate the probability of having a broken link.
2017-01-01
Provides monthly statistics at the state, Census division, and U.S. levels for net generation, fossil fuel consumption and stocks, quantity and quality of fossil fuels, cost of fossil fuels, electricity sales, revenue, and average revenue per kilowatthour of electricity sold.
Levels of Heavy Metals in Popular Cigarette Brands and Exposure to These Metals via Smoking
Ashraf, Muhammad Waqar
2012-01-01
The levels of selected heavy metals in popular cigarette brands sold and/or produced in Saudi Arabia were determined by graphite furnace atomic absorption spectrometry (GFAAS). Average concentrations of cadmium and lead in different cigarette brands were 1.81 and 2.46 μg g−1 (dry weight), respectively. The results obtained in this study estimate the average quantity of Cd inhaled from smoking one packet of 20 cigarettes to be in the range of 0.22–0.78 μg; the corresponding quantity of Pb is estimated to be 0.97–2.64 μg. The concentrations of Cd and Pb differed significantly between the cigarette brands tested. The results of the present study were compared with those of other regional and international studies.
Evaluation of Turbulence-Model Performance as Applied to Jet-Noise Prediction
NASA Technical Reports Server (NTRS)
Woodruff, S. L.; Seiner, J. M.; Hussaini, M. Y.; Erlebacher, G.
1998-01-01
The accurate prediction of jet noise is possible only if the jet flow field can be predicted accurately. Predictions for the mean velocity and turbulence quantities in the jet flowfield are typically the product of a Reynolds-averaged Navier-Stokes solver coupled with a turbulence model. To evaluate the effectiveness of solvers and turbulence models in predicting those quantities most important to jet noise prediction, two CFD codes and several turbulence models were applied to a jet configuration over a range of jet temperatures for which experimental data is available.
Harris, Jeff R.; Lance, Blake W.; Smith, Barton L.
2015-08-10
We present a computational fluid dynamics (CFD) validation dataset for turbulent forced convection on a vertical plate. The design of the apparatus is based on recent validation literature and provides a means to simultaneously measure boundary conditions (BCs) and system response quantities (SRQs). Important inflow quantities for Reynolds-Averaged Navier-Stokes (RANS) CFD are also measured. Data are acquired at two heating conditions and cover the range 40,000 < Re x < 300,000, 357 < Re δ2 < 813, and 0.02 < Gr/Re 2 < 0.232.
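The quoted parameter ranges follow from the standard definitions Re_x = Ux/ν and Gr = gβΔT·x³/ν². A sketch with assumed air properties and a plausible operating point (illustrative values, not the experiment's actual conditions):

```python
# Assumed air properties near room temperature (illustrative).
nu = 1.6e-5        # kinematic viscosity, m^2/s
beta = 3.2e-3      # thermal expansion coefficient, 1/K
g = 9.81           # gravitational acceleration, m/s^2

# Plausible operating point (assumed, not the experiment's values).
U, x, dT = 4.0, 1.0, 20.0   # freestream speed, position, wall-air dT

Re_x = U * x / nu                    # streamwise Reynolds number
Gr = g * beta * dT * x**3 / nu**2    # Grashof number
mixed = Gr / Re_x**2                 # mixed-convection parameter

print(Re_x, mixed)
```

This operating point gives Re_x = 2.5e5 and Gr/Re_x² ≈ 0.04, inside the quoted ranges; small values of Gr/Re² indicate that forced convection dominates buoyancy.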
Solar Resource Assessment for Sri Lanka and Maldives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renne, D.; George, R.; Marion, B.
2003-08-01
The countries of Sri Lanka and the Maldives lie within the equatorial belt, a region where substantial solar energy resources exist throughout much of the year in adequate quantities for many applications, including solar water heating, solar electricity, and desalination. The extent of solar resources in Sri Lanka has been estimated in the past based on a study of the daily total direct sunshine hours recorded at a number of weather and agricultural stations throughout the country. These data have been applied to the well-known Angstrom relationship in order to obtain an estimate of the distribution of monthly average daily total solar resources at these stations. This study is an effort to improve on these estimates in two ways: (1) to apply a gridded cloud cover database at a 40-km resolution to produce updated monthly average daily total estimates of all solar resources (global horizontal, DNI, and diffuse) for the country, and (2) to input hourly or three-hourly cloud cover observations made at nine weather stations in Sri Lanka and two in the Maldives into a solar model that produces estimates of hourly solar radiation values of the direct normal, global, and diffuse resource covering the length of the observational period. Details and results of these studies are summarized in this report.
Average rainwater pH, concepts of atmospheric acidity, and buffering in open systems
NASA Astrophysics Data System (ADS)
Liljestrand, Howard M.
The system of water equilibrated with a constant partial pressure of CO2, as a reference point for pH acidity-alkalinity relationships, has nonvolatile acidity and alkalinity components as conservative quantities, but not [H+]. Simple algorithms are presented for the determination of the average pH for combinations of samples both above and below pH 5.6. Averaging the nonconservative quantity [H+] yields erroneously low mean pH values. To extend the open CO2 system to include other volatile atmospheric acids and bases distributed among the gas, liquid, and particulate matter phases, a theoretical framework for atmospheric acidity is presented. Within certain oxidation-reduction limitations, the total atmospheric acidity (but not free acidity) is a conservative quantity. The concept of atmospheric acidity is applied to air-water systems approximating aerosols, fogwater, cloudwater, and rainwater. The buffer intensity in hydrometeors is described as a function of net strong acidity, partial pressures of acid and base gases, and the water-to-air ratio. For high liquid-to-air volume ratios, the equilibrium partial pressures of trace acid and base gases are set by the pH or net acidity controlled by the nonvolatile acid and base concentrations. For low water-to-air volume ratios, as well as stationary-state systems such as precipitation scavenging with continuous emissions, the partial pressures of trace gases (NH3, HCl, HNO3, SO2, and CH3COOH) appear to be of greater or equal importance to carbonate species as buffers in the aqueous phase.
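The pitfall the abstract describes can be shown numerically. This sketch is not the paper's algorithm (which works from conservative acidity/alkalinity components); it simply illustrates, with two made-up samples, why averaging [H+] directly biases the mean pH low.

```python
import math

# Two illustrative rain samples, pH 4.0 and 6.0. Averaging pH values
# directly, versus averaging [H+] and converting back, give different
# answers: the [H+] average is dominated by the acidic sample and
# yields an erroneously low "mean" pH.
ph_samples = [4.0, 6.0]
h_conc = [10.0 ** -ph for ph in ph_samples]         # mol/L

mean_ph_direct = sum(ph_samples) / len(ph_samples)   # arithmetic mean of pH
mean_ph_from_h = -math.log10(sum(h_conc) / len(h_conc))
print(mean_ph_direct, round(mean_ph_from_h, 3))
```

The [H+]-averaged result sits close to the pH of the acidic sample alone, which is exactly the bias the abstract warns about.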
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The Coal and Electric Data and Renewables Division, Office of Coal, Nuclear, Electric and Alternate Fuels, Energy Information Administration (EIA), Department of Energy prepares the EPM. This publication provides monthly statistics at the State, Census division, and U.S. levels for net generation, fossil fuel consumption and stocks, quantity and quality of fossil fuels, cost of fossil fuels, electricity retail sales, associated revenue, and average revenue per kilowatthour of electricity sold. In addition, data on net generation, fuel consumption, fuel stocks, and quantity and cost of fossil fuels are also displayed for the North American Electric Reliability Council (NERC) regions. The EIA publishes statistics in the EPM on net generation by energy source; consumption, stocks, quantity, quality, and cost of fossil fuels; and capability of new generating units by company and plant.
Lagrangian averages, averaged Lagrangians, and the mean effects of fluctuations in fluid dynamics.
Holm, Darryl D.
2002-06-01
We begin by placing the generalized Lagrangian mean (GLM) equations for a compressible adiabatic fluid into the Euler-Poincare (EP) variational framework of fluid dynamics, for an averaged Lagrangian. This is the Lagrangian averaged Euler-Poincare (LAEP) theorem. Next, we derive a set of approximate small-amplitude GLM equations (glm equations) at second order in the fluctuating displacement of a Lagrangian trajectory from its mean position. These equations express the linear and nonlinear back-reaction effects on the Eulerian mean fluid quantities by the fluctuating displacements of the Lagrangian trajectories in terms of their Eulerian second moments. The derivation of the glm equations uses the linearized relations between Eulerian and Lagrangian fluctuations, in the tradition of Lagrangian stability analysis for fluids. The glm derivation also uses the method of averaged Lagrangians, in the tradition of wave-mean flow interaction. Next, the new glm EP motion equations for incompressible ideal fluids are compared with the Euler-alpha turbulence closure equations. An alpha model is a GLM (or glm) fluid theory with a Taylor hypothesis closure. Such closures are based on the linearized fluctuation relations that determine the dynamics of the Lagrangian statistical quantities in the Euler-alpha equations. Thus, by using the LAEP theorem, we bridge the GLM equations and the Euler-alpha closure equations through the small-amplitude glm approximation in the EP variational framework. We conclude by highlighting a new application of the GLM, glm, and alpha-model results for Lagrangian averaged ideal magnetohydrodynamics. (c) 2002 American Institute of Physics.
NASA Technical Reports Server (NTRS)
Robertson, Franklin R.; Miller, T. L.; Bosilovich, M. G.; Chen, J.
2010-01-01
Retrospective analyses (reanalyses) use a fixed assimilation model to take diverse observations and synthesize consistent, time-dependent fields of state variables and fluxes (e.g., temperature, moisture, momentum, turbulent and radiative fluxes). Because they offer data sets of these quantities at regular space/time intervals, atmospheric reanalyses have become a mainstay of the climate community for diagnostic purposes and for driving offline ocean and land models. Of course, one weakness of these data sets is the susceptibility of the flux products to uncertainties because of shortcomings in parameterized model physics. Another issue, perhaps less appreciated, is the fact that discrete changes in the evolving observational system, particularly from satellite sensors, may also introduce artifacts into the time series of quantities. In this paper we examine the ability of the NASA MERRA (Modern Era Retrospective Analysis for Research and Applications) and other recent reanalyses to determine variability in the climate system over the satellite record (the last 30 years). In particular we highlight the effect on reanalyses of discontinuities at the junctures of the onset of passive microwave imaging (Special Sensor Microwave Imager) in late 1987 as well as improved sounding and imaging with the Advanced Microwave Sounding Unit, AMSU-A, in 1998. We examine these data sets from two perspectives. The first is the ability to capture modes of variability that have coherent spatial structure (e.g., ENSO events and near-decadal coupling to SST changes) and how these modes are contained within trends in near-global averages of key quantities. Second, we consider diagnostics that measure the consistency of energetic scaling in the hydrologic cycle, particularly the fractional changes in column-integrated water vapor versus precipitation as they are coupled to radiative flux constraints.
These results will be discussed in the context of implications for science objectives and priorities of the NASA Energy and Water Cycle Study, NEWS.
Effective depth of spectral line formation in planetary atmospheres
NASA Technical Reports Server (NTRS)
Lestrade, J. P.; Chamberlain, J. W.
1980-01-01
The effective level of line formation for spectroscopic absorption lines has long been regarded as a useful parameter for determining average atmospheric values of the quantities involved in line formation. The identity of this parameter was recently disputed. The dependence of this parameter on the average depth where photons are absorbed in a semi-infinite atmosphere is established. It is shown that the mean depths derived by others are similar in nature and behavior.
Anisotropic Developments for Homogeneous Shear Flows
NASA Technical Reports Server (NTRS)
Cambon, Claude; Rubinstein, Robert
2006-01-01
The general decomposition of the spectral correlation tensor R(sub ij)(k) by Cambon et al. (J. Fluid Mech., 202, 295; J. Fluid Mech., 337, 303) into directional and polarization components is applied to the representation of R(sub ij)(k) by spherically averaged quantities. The decomposition splits the deviatoric part H(sub ij)(k) of the spherical average of R(sub ij)(k) into directional and polarization components H(sub ij)(sup e)(k) and H(sub ij)(sup z)(k). A self-consistent representation of the spectral tensor in the limit of weak anisotropy is constructed in terms of these spherically averaged quantities. The directional and polarization components must be treated independently: models that attempt the same representation of the spectral tensor using the spherical average H(sub ij)(k) alone prove to be inconsistent with Navier-Stokes dynamics. In particular, a spectral tensor consistent with a prescribed Reynolds stress is not unique. The degree of anisotropy permitted by this theory is restricted by realizability requirements. Since these requirements will be less severe in a more accurate theory, a preliminary account is given of how to generalize the formalism of spherical averages to higher-order expansions of the spectral tensor. Directionality is described by a conventional expansion in spherical harmonics, but polarization requires an expansion in tensorial spherical harmonics generated by irreducible representations of the spatial rotation group SO(3). These expansions are considered in more detail in the special case of axial symmetry.
Flow of Red Blood Cells in Stenosed Microvessels.
Vahidkhah, Koohyar; Balogh, Peter; Bagchi, Prosenjit
2016-06-20
A computational study is presented on the flow of deformable red blood cells in stenosed microvessels. It is observed that the Fahraeus-Lindqvist effect is significantly enhanced due to the presence of a stenosis. The apparent viscosity of blood is observed to increase severalfold when compared to non-stenosed vessels. An asymmetric distribution of the red blood cells, caused by geometric focusing in stenosed vessels, is observed to play a major role in the enhancement. The asymmetry in cell distribution also results in an asymmetry in average velocity and wall shear stress along the length of the stenosis. The discrete motion of the cells causes large time-dependent fluctuations in flow properties. The root-mean-square of flow rate fluctuations can be an order of magnitude higher than that in non-stenosed vessels. A severalfold increase in Eulerian velocity fluctuation is also observed in the vicinity of the stenosis. Surprisingly, a transient flow reversal is observed upstream of a stenosis but not downstream. The asymmetry and fluctuations in flow quantities and the flow reversal would not occur in the absence of the cells. It is concluded that the flow physics and its physiological consequences are significantly different in micro- versus macrovascular stenosis.
Augustine, David J; Springer, Tim L
2013-06-01
Potential competition between native and domestic herbivores is a major consideration influencing the management and conservation of native herbivores in rangeland ecosystems. In grasslands of the North American Great Plains, black-tailed prairie dogs (Cynomys ludovicianus) are widely viewed as competitors with cattle but are also important for biodiversity conservation due to their role in creating habitat for other native species. We examined spatiotemporal variation in prairie dog effects on growing-season forage quality and quantity using measurements from three colony complexes in Colorado and South Dakota and from a previous study of a fourth complex in Montana. At two complexes experiencing below-average precipitation, forage availability both on and off colonies was so low (12-54 g/m2) that daily forage intake rates of cattle were likely constrained by instantaneous intake rates and daily foraging time. Under these dry conditions, prairie dogs (1) substantially reduced forage availability, thus further limiting cattle daily intake rates, and (2) had either no or a small positive effect on forage digestibility. Under such conditions, prairie dogs are likely to compete with cattle in direct proportion to their abundance. For two complexes experiencing above-average precipitation, forage quantity on and off colonies (77-208 g/m2) was sufficient for daily forage intake of cattle to be limited by digestion rather than instantaneous forage intake. At one complex where prairie dogs enhanced forage digestibility and [N] while having no effect on forage quantity, prairie dogs are predicted to facilitate cattle mass gains regardless of prairie dog abundance. At the second complex where prairie dogs enhanced digestibility and [N] but reduced forage quantity, effects on cattle can vary from competition to facilitation depending on prairie dog abundance. 
Our findings show that the high spatiotemporal variation in vegetation dynamics characteristic of semiarid grasslands is paralleled by variability in the magnitude of competition between native and domestic grazers. Competitive interactions evident during dry periods may be partially or wholly offset by facilitation during periods when forage digestibility is enhanced and forage quantity does not limit the daily intake rate of cattle.
Investigation of the Discharge Rate of a Fuel-injection System
NASA Technical Reports Server (NTRS)
Gerrish, Harold C; Voss, Fred
1931-01-01
In connection with the development of a method for analyzing indicator cards taken from high-speed compression-ignition engines, this investigation was undertaken to determine the average quantity of fuel discharged during each crank degree of injection period.
Active Subspaces of Airfoil Shape Parameterizations
NASA Astrophysics Data System (ADS)
Grey, Zachary J.; Constantine, Paul G.
2018-05-01
Design and optimization benefit from understanding the dependence of a quantity of interest (e.g., a design objective or constraint function) on the design variables. A low-dimensional active subspace, when present, identifies important directions in the space of design variables; perturbing a design along the active subspace associated with a particular quantity of interest changes that quantity more, on average, than perturbing the design orthogonally to the active subspace. This low-dimensional structure provides insights that characterize the dependence of quantities of interest on design variables. Airfoil design in a transonic flow field with a parameterized geometry is a popular test problem for design methodologies. We examine two particular airfoil shape parameterizations, PARSEC and CST, and study the active subspaces present in two common design quantities of interest, transonic lift and drag coefficients, under each shape parameterization. We mathematically relate the two parameterizations with a common polynomial series. The active subspaces enable low-dimensional approximations of lift and drag that relate to physical airfoil properties. In particular, we obtain and interpret a two-dimensional approximation of both transonic lift and drag, and we show how these approximations inform a multi-objective design problem.
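The active-subspace idea can be sketched on a toy problem. This is not the airfoil computation: the quantity of interest below is an analytic stand-in f(x) = exp(0.7 x1 + 0.3 x2), chosen because its gradients all point along one direction, so the standard recipe (estimate C = E[∇f ∇fᵀ] from sampled gradients, then take dominant eigenvectors) should recover a one-dimensional active subspace.

```python
import numpy as np

# Active-subspace discovery for a toy quantity of interest
# f(x) = exp(0.7*x1 + 0.3*x2), whose gradient always points along
# (0.7, 0.3): a one-dimensional active subspace by construction.
rng = np.random.default_rng(0)

def grad_f(x):
    return np.exp(0.7 * x[0] + 0.3 * x[1]) * np.array([0.7, 0.3])

# Monte Carlo estimate of C = E[grad f grad f^T] over the design box
xs = rng.uniform(-1, 1, size=(500, 2))
C = np.mean([np.outer(grad_f(x), grad_f(x)) for x in xs], axis=0)
eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order

w_active = eigvecs[:, -1]              # dominant (active) direction
# f varies along w_active and is flat orthogonal to it
```

Perturbing a design along `w_active` changes f; perturbing orthogonally leaves it unchanged, which is the property the abstract exploits for lift and drag.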
The quantum measurement of time
NASA Technical Reports Server (NTRS)
Shepard, Scott R.
1994-01-01
Traditionally, in non-relativistic Quantum Mechanics, time is considered to be a parameter, rather than an observable quantity like space. In relativistic Quantum Field Theory, space and time are treated equally by reducing space to also be a parameter. Herein, after a brief review of other measurements, we describe a third possibility, which is to treat time as a directly observable quantity.
Martínez Maldonado, Raúl; Pedrão, Luiz Jorge; Alonso Castillo, María Magdalena; López García, Karla Selene; Oliva Rodríguez, Nora Nely
2008-01-01
This study aimed to identify differences, if any, in the consumption of tobacco and alcohol among adolescents from urban and rural areas, and to determine whether self-esteem and self-efficacy are related to consumption in these two groups of adolescents from secondary schools in urban and rural areas of Nuevo León, México, from January to June 2006. The study was based on the theoretical concepts of self-esteem, perceived self-efficacy, and consumption of alcohol and tobacco. The design was descriptive and correlational, with a sample of 359 students. A substantial difference was found in the consumption of tobacco between secondary students from urban and rural areas (U = 7513.50, p = .03); average consumption in the urban area was higher (mean = .35) than in the rural area (mean = .14). A negative and significant relation was found between self-esteem and the quantity of drinks consumed on a typical day (rs = -.23, p < .001), as well as the quantity of cigarettes consumed on a typical day (rs = -.20, p < .001).
Highly-resolved numerical simulations of bed-load transport in a turbulent open-channel flow
NASA Astrophysics Data System (ADS)
Vowinckel, Bernhard; Kempe, Tobias; Nikora, Vladimir; Jain, Ramandeep; Fröhlich, Jochen
2015-11-01
The study presents the analysis of phase-resolving Direct Numerical Simulations of a horizontal turbulent open-channel flow laden with a large number of spherical particles. These particles have a mobility close to their threshold of incipient motion and are transported in bed-load mode. The coupling of the fluid phase with the particles is realized by an Immersed Boundary Method. The Double-Averaging Methodology is applied for the first time, convoluting the data into a handy set of quantities averaged in time and space to describe the most prominent flow features. In addition, a systematic study elucidates the impact of mobility and sediment supply on the pattern formation of particle clusters in a very large computational domain. A detailed description of fluid quantities links the developed particle patterns to the enhancement of turbulence and to a modified hydraulic resistance. Conditional averaging is applied to erosion events, providing the processes involved in incipient particle motion. Furthermore, the detection of moving particle clusters as well as their surrounding flow field is addressed by a moving-frame analysis. Funded by German Research Foundation (DFG), project FR 1593/5-2; computational time provided by ZIH Dresden, Germany, and JSC Juelich, Germany.
Analysis of Mass Averaged Tissue Doses in CAM, CAF, MAX, and FAX
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Qualls, Garry D.; Clowdsley, Martha S.; Blattnig, Steve R.; Simonsen, Lisa C.; Walker, Steven A.; Singleterry, Robert C.
2009-01-01
To estimate astronaut health risk due to space radiation, one must have the ability to calculate exposure-related quantities averaged over specific organs and tissue types. In this study, we first examine the anatomical properties of the Computerized Anatomical Man (CAM), Computerized Anatomical Female (CAF), Male Adult voXel (MAX), and Female Adult voXel (FAX) models by comparing the masses of various tissues to the reference values specified by the International Commission on Radiological Protection (ICRP). Major discrepancies are found between the CAM and CAF tissue masses and the ICRP reference data for almost all of the tissues. We next examine the distribution of target points used with the deterministic transport code HZETRN to compute mass-averaged exposure quantities. A numerical algorithm is used to generate multiple point distributions for many of the effective dose tissues identified in CAM, CAF, MAX, and FAX. It is concluded that the previously published CAM and CAF point distributions were under-sampled, that the set of point distributions presented here should be adequate for future studies involving CAM, CAF, MAX, or FAX, and that MAX and FAX are more accurate than CAM and CAF for space radiation analyses.
Effect of a "pill mill" law on opioid prescribing and utilization: The case of Texas.
Lyapustina, Tatyana; Rutkow, Lainie; Chang, Hsien-Yen; Daubresse, Matthew; Ramji, Alim F; Faul, Mark; Stuart, Elizabeth A; Alexander, G Caleb
2016-02-01
States have attempted to reduce prescription opioid abuse through strengthening the regulation of pain management clinics; however, the effect of such measures remains unclear. We quantified the impact of Texas's September 2010 "pill mill" law on opioid prescribing and utilization. We used the IMS Health LRx LifeLink database to examine anonymized, patient-level pharmacy claims for a closed cohort of individuals filling prescription opioids in Texas between September 2009 and August 2011. Our primary outcomes were derived at a monthly level and included: (1) average morphine equivalent dose (MED) per transaction; (2) aggregate opioid volume; (3) number of opioid prescriptions; and (4) quantity of opioid pills dispensed. We compared observed values with the counterfactual, which we estimated from pre-intervention levels and trends. Texas's pill mill law was associated with declines in average MED per transaction (-0.57 mg/month, 95% confidence interval [CI] -1.09, -0.057), monthly opioid volume (-9.99 kg/month, CI -12.86, -7.11), monthly number of opioid prescriptions (-12,200 prescriptions/month, CI -15,300, -9,150) and monthly quantity of opioid pills dispensed (-714,000 pills/month, CI -877,000, -550,000). These reductions reflected decreases of 8.1-24.3% across the outcomes at one year compared with the counterfactual, and they were concentrated among prescribers and patients with the highest opioid prescribing and utilization at baseline. Following the implementation of Texas's 2010 pill mill law, there were clinically significant reductions in opioid dose, volume, prescriptions and pills dispensed within the state, which were limited to individuals with higher levels of baseline opioid prescribing and utilization. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
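The "observed versus counterfactual" comparison used in the Texas study can be sketched as fitting a linear trend to the pre-intervention months, extrapolating it over the post-intervention period, and measuring the shortfall. The monthly series below is synthetic, not the IMS LifeLink data, and a simple least-squares trend stands in for whatever model the authors actually fitted.

```python
import numpy as np

# Interrupted time-series sketch: fit pre-intervention level and
# trend, project the counterfactual forward, and compare with the
# observed post-intervention series. All numbers are synthetic.
pre = np.array([100, 101, 103, 102, 104, 105, 107, 106, 108, 109, 111, 110.0])
post_observed = np.array([108, 106, 104, 103, 101, 100.0])

months_pre = np.arange(len(pre))
slope, intercept = np.polyfit(months_pre, pre, 1)   # pre-period trend

months_post = np.arange(len(pre), len(pre) + len(post_observed))
counterfactual = intercept + slope * months_post
monthly_effect = post_observed - counterfactual      # negative = decline
print(np.round(monthly_effect.mean(), 1))
```

Here the observed series falls further below the projected counterfactual each month, which is the shape of effect the study reports for opioid dose, volume, prescriptions, and pills dispensed.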
Enhanced ionization efficiency in TIMS analyses of plutonium and americium using porous ion emitters
Baruzzini, Matthew L.; Hall, Howard L.; Watrous, Matthew G.; ...
2016-12-05
Investigations of enhanced sample utilization in thermal ionization mass spectrometry (TIMS) using porous ion emitter (PIE) techniques for the analyses of trace quantities of americium and plutonium were performed. Repeated measurements of ionization efficiency (i.e., the ratio of ions detected to atoms loaded on the filament) were conducted on sample sizes ranging from 10-100 pg for americium and 1-100 pg for plutonium using PIE and traditional (i.e., a single, zone-refined rhenium, flat filament ribbon with a carbon ionization enhancer) TIMS filament sources. When compared to traditional filaments, PIEs exhibited an average boost in ionization efficiency of ~550% for plutonium and ~1100% for americium. A maximum average efficiency of 1.09% was observed at a 1 pg plutonium sample loading using PIEs. Supplementary trials were conducted using newly developed platinum PIEs to analyze 10 pg mass loadings of plutonium. As a result, platinum PIEs exhibited an additional ~134% boost in ion yield over standard PIEs and ~736% over traditional filaments at the same sample loading level.
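To make the efficiency figure concrete: converting a picogram loading to an atom count is a one-line Avogadro calculation. The choice of isotope (239Pu) is an assumption for illustration; the abstract does not state which isotope the 1 pg loading refers to.

```python
# Ionization efficiency = ions detected / atoms loaded. Sketch for a
# 1 pg loading of 239Pu (isotope chosen for illustration only).
AVOGADRO = 6.02214076e23       # atoms per mole
MOLAR_MASS_PU239 = 239.05      # g/mol, approximate

atoms_loaded = 1e-12 / MOLAR_MASS_PU239 * AVOGADRO  # atoms in 1 pg
ions_detected = 0.0109 * atoms_loaded               # at 1.09% efficiency
print(f"{atoms_loaded:.2e} atoms loaded, {ions_detected:.2e} ions detected")
```

So even at the best reported efficiency, roughly 99% of the loaded atoms never reach the detector, which is why the severalfold PIE gains matter for trace analyses.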
Power flow prediction in vibrating systems via model reduction
NASA Astrophysics Data System (ADS)
Li, Xianhui
This dissertation focuses on power flow prediction in vibrating systems. Reduced order models (ROMs) are built based on rational Krylov model reduction which preserve power flow information in the original systems over a specified frequency band. Stiffness and mass matrices of the ROMs are obtained by projecting the original system matrices onto the subspaces spanned by forced responses. A matrix-free algorithm is designed to construct ROMs directly from the power quantities at selected interpolation frequencies. Strategies for parallel implementation of the algorithm via message passing interface are proposed. The quality of ROMs is iteratively refined according to the error estimate based on residual norms. Band capacity is proposed to provide a priori estimate of the sizes of good quality ROMs. Frequency averaging is recast as ensemble averaging and Cauchy distribution is used to simplify the computation. Besides model reduction for deterministic systems, details of constructing ROMs for parametric and nonparametric random systems are also presented. Case studies have been conducted on testbeds from Harwell-Boeing collections. Input and coupling power flow are computed for the original systems and the ROMs. Good agreement is observed in all cases.
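The core mechanism of the dissertation's ROM construction (projecting stiffness and mass matrices onto a subspace spanned by forced responses) can be sketched in a few lines. This is a generic illustration on a small random system, not the dissertation's matrix-free parallel algorithm: it shows the interpolation property that a Galerkin ROM built from full-order responses reproduces the response, and hence power-flow quantities, exactly at the interpolation frequencies.

```python
import numpy as np

# Projection-based model reduction from forced responses:
# the ROM (V^T K V, V^T M V) built from full responses
# u(w) = (K - w^2 M)^{-1} f matches the full model exactly at the
# interpolation frequencies. Small random SPD system, illustrative.
rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)              # stiffness, SPD
M = np.diag(rng.uniform(1.0, 2.0, n))    # diagonal mass matrix
f = rng.standard_normal(n)               # force vector

freqs = [1.0, 2.0, 3.0]                  # interpolation frequencies
V = np.column_stack([np.linalg.solve(K - w**2 * M, f) for w in freqs])
V, _ = np.linalg.qr(V)                   # orthonormal basis, 3 columns

Kr, Mr, fr = V.T @ K @ V, V.T @ M @ V, V.T @ f

w = 2.0                                  # at an interpolation frequency...
u_full = np.linalg.solve(K - w**2 * M, f)
u_rom = V @ np.linalg.solve(Kr - w**2 * Mr, fr)
# ...the 3-DOF ROM reproduces the 50-DOF response to machine precision
```

Between interpolation points the ROM only approximates the response, which is why the dissertation iteratively refines the basis using residual-based error estimates.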
Vitrinite reflectance of sinkhole coals, east central Missouri fire clay district
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laudon, R.C.
1993-03-01
East central Missouri contains numerous sinkholes, many of which are filled with commercial quantities of fire clay; some contain small amounts of coal. Vitrinite reflectance averages from 513 samples taken from eleven of these coals ranged from 0.71 to 0.78. The data were remarkably consistent, and no local trends were observed. Using the Barker and Goldstein (1990) and Barker and Pawlewicz (1986) temperature correlations, these measurements suggest that the coals have been heated to temperatures on the order of 108 C to 128 C (average = 116 C). These temperatures are considered anomalously high when compared against known geothermal gradients and burial depths for these rocks, and they suggest that the sinkhole coals have been heated by some thermal event, possibly associated with Mississippi Valley-type mineralization. The temperatures are consistent with regional trends in the state. These data, when combined with other vitrinite reflectance and fluid inclusion data, suggest that southwest Missouri (Tristate) and southeast Missouri (Viburnum Trend) were hot spots, and that temperatures decrease regionally away from these two areas.
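The reflectance-to-temperature conversion can be sketched with commonly cited forms of the two correlations named above. Treat the exact coefficients below as an assumption of this sketch rather than values quoted from the report; with them, the measured 0.71-0.78 reflectance range maps onto roughly the 108-128 C interval the abstract reports.

```python
import math

# Commonly cited peak-temperature correlations from mean vitrinite
# reflectance Rm (coefficients are ASSUMED forms, not from the report):
#   Barker & Pawlewicz (1986): T = (ln Rm + 1.68) / 0.0124
#   Barker & Goldstein  (1990): T = (ln Rm + 1.26) / 0.00811

def t_bp86(rm):
    return (math.log(rm) + 1.68) / 0.0124

def t_bg90(rm):
    return (math.log(rm) + 1.26) / 0.00811

for rm in (0.71, 0.78):
    print(rm, round(t_bp86(rm)), round(t_bg90(rm)))
```

The two correlations bracket each other, which is presumably why the report quotes a temperature range rather than a single value per sample.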
The role of temperature in the onset of the Olea europaea L. pollen season in southwestern Spain
NASA Astrophysics Data System (ADS)
Galán, C.; García-Mozo, H.; Cariñanos, P.; Alcázar, P.; Domínguez-Vilches, E.
Temperature is one of the main factors affecting the flowering of Mediterranean trees. In the case of Olea europaea L., a low-temperature period prior to bud development is essential to interrupt dormancy. After that, and once a base temperature is reached, the plant accumulates heat until flowering starts. Different methods of obtaining the best forecast model for the onset date of the O. europaea pollen season, using temperature as the predictive parameter, are proposed in this paper. An 18-year pollen and climatic data series (1982-1999) from Cordoba (Spain) was used to perform the study. First, a multiple-regression analysis using 15-day average temperatures from the period prior to flowering time was tested. Second, three heat-summation methods were used, based respectively on: heat units (HU), the accumulated daily mean temperature after deducting a threshold; growing degree-days (GDD), proposed by Snyder [J Agric Meteorol 35:353-358 (1985)] as a measure of physiological time; and accumulated maximum temperature. In the first two, the optimum base temperature selected for heat accumulation was 12.5°C. The multiple-regression equation for 1999 gives a 7-day delay from the observed date. The most accurate results were obtained with the GDD method, with a difference of only 4.7 days between predicted and observed dates. The average heat accumulation expressed as GDD was 209.9°C days. The HU method also gives good results, with no significant statistical differences between predictions and observations.
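The GDD forecast logic can be sketched directly: accumulate daily heat above the 12.5 °C base and predict flowering onset when the cumulative total reaches the fitted requirement of about 209.9 °C·days. The daily mean temperatures below are synthetic, chosen only to exercise the accumulation loop.

```python
# Growing degree-day (GDD) accumulation sketch. Heat above a base
# temperature is summed daily; flowering onset is predicted when the
# running total reaches the fitted requirement. Temperatures are
# synthetic, not the Cordoba series.
T_BASE = 12.5          # deg C, base temperature from the study
GDD_REQUIRED = 209.9   # deg C * days, fitted requirement from the study

daily_mean_temps = [10.0, 14.0, 16.5, 18.0, 19.5, 21.0, 22.5] * 7  # 49 days

cumulative, onset_day = 0.0, None
for day, t in enumerate(daily_mean_temps, start=1):
    cumulative += max(0.0, t - T_BASE)   # days below base contribute nothing
    if onset_day is None and cumulative >= GDD_REQUIRED:
        onset_day = day
print(onset_day, round(cumulative, 1))
```

Note that cool days contribute zero rather than negative heat, which is what distinguishes GDD accumulation from a plain temperature average.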
Assessing DNA recovery from chewing gum.
Eychner, Alison M; Schott, Kelly M; Elkins, Kelly M
2017-01-01
The purpose of this study was to evaluate which DNA extraction method yields the highest quantity of DNA from chewing gum. In this study, several popular extraction methods were tested, including Chelex-100, phenol-chloroform-isoamyl alcohol (PCIA), DNA IQ, PrepFiler, and QIAamp Investigator, and the quantity of DNA recovered from chewing gum was determined using real-time polymerase chain reaction with Quantifiler. Chewed gum control samples were submitted by anonymous healthy adult donors, and discarded environmental chewing gum samples simulating forensic evidence were collected from outside public areas (e.g., campus bus stops, streets, and sidewalks). As expected, results indicate that all methods tested yielded sufficient amplifiable human DNA from chewing gum using the wet-swab method. The QIAamp performed best when DNA was extracted from whole pieces of control gum (142.7 ng on average), and the DNA IQ method performed best on the environmental whole gum samples (29.0 ng on average). On average, the QIAamp kit also recovered the most DNA from saliva swabs. The PCIA method demonstrated the highest yield with wet swabs of the environmental gum (26.4 ng of DNA on average). However, this method should be avoided with whole gum samples (no DNA yield) due to the action of the organic reagents in dissolving and softening the gum and inhibiting DNA recovery during the extraction.
NASA Technical Reports Server (NTRS)
Dong, Xiquan; Wielicki, Bruce A.; Xi, Baike; Hu, Yongxiang; Mace, Gerald G.; Benson, Sally; Rose, Fred; Kato, Seiji; Charlock, Thomas; Minnis, Patrick
2008-01-01
Atmospheric column absorption of solar radiation A(sub col) is a fundamental part of the Earth's energy cycle but is an extremely difficult quantity to measure directly. To investigate A(sub col), we have collocated satellite-surface observations for the optically thick Deep Convective Systems (DCS) at the Department of Energy Atmosphere Radiation Measurement (ARM) Tropical Western Pacific (TWP) and Southern Great Plains (SGP) sites during the period March 2000 to December 2004. The surface data were averaged over a 2-h interval centered at the time of the satellite overpass, and the satellite data were averaged within a 1 deg x 1 deg area centered on the ARM sites. In the DCS, cloud particle size is important for top-of-atmosphere (TOA) albedo and A(sub col), although the surface absorption is independent of cloud particle size. In this study, we find that the A(sub col) in the tropics is approximately 0.011 more than that in the middle latitudes. This difference, however, disappears, i.e., the A(sub col) values at both regions converge to the same value (approximately 0.27 of the total incoming solar radiation) in the optically thick limit (tau greater than 80). Comparing the observations with the NASA Langley modified Fu-Liou 2-stream radiative transfer model for optically thick cases, the difference between observed and model-calculated surface absorption, on average, is less than 0.01, but the model-calculated TOA albedo and A(sub col) differ by 0.01 to 0.04, depending primarily on the cloud particle size observation used. The model versus observation discrepancies found are smaller than in many previous studies and are just within the estimated error bounds. We did not find evidence for a large cloud absorption anomaly for the optically thick limit of extensive ice cloud layers. A more modest cloud absorption difference of 0.01 to 0.04 cannot yet be ruled out.
The remaining uncertainty could be reduced with additional cases, and by reducing the current uncertainty in cloud particle size.
Turbulent fluid motion IV: Averages, Reynolds decomposition, and the closure problem
NASA Technical Reports Server (NTRS)
Deissler, Robert G.
1992-01-01
Ensemble, time, and space averages as applied to turbulent quantities are discussed, and pertinent properties of the averages are obtained. Those properties, together with Reynolds decomposition, are used to derive the averaged equations of motion and the one- and two-point moment or correlation equations. The terms in the various equations are interpreted. The closure problem of the averaged equations is discussed, and possible closure schemes are considered. Those schemes usually require an input of supplemental information unless the averaged equations are closed by calculating their terms by a numerical solution of the original unaveraged equations. The law of the wall for velocities and temperatures, the velocity- and temperature-defect laws, and the logarithmic laws for velocities and temperatures are derived. Various notions of randomness and their relation to turbulence are considered in light of ergodic theory.
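The averaging identities discussed in this report can be illustrated with a toy periodic signal (our own example, not taken from the report): a velocity is decomposed into a mean part and a fluctuation whose time average vanishes, and a one-point second moment (a "Reynolds stress" term) is formed from the fluctuations. All amplitudes and phases below are arbitrary choices for illustration.

```python
import math

# Reynolds decomposition of synthetic periodic velocity signals:
# u(t) = U + u'(t), with the time average taken over one full period.
U_MEAN, V_MEAN = 2.0, 1.0              # chosen mean parts (illustrative)
AMP_U, AMP_V, PHASE = 1.0, 0.5, math.pi / 3

def u(t):
    return U_MEAN + AMP_U * math.sin(t)

def v(t):
    return V_MEAN + AMP_V * math.sin(t + PHASE)

def time_average(f, period=2 * math.pi, n=20000):
    # midpoint rule over one full period
    dt = period / n
    return sum(f((k + 0.5) * dt) for k in range(n)) * dt / period

u_bar = time_average(u)                                # mean part: 2.0
u_fluct_bar = time_average(lambda t: u(t) - u_bar)     # fluctuation averages to 0
# One-point second moment of the fluctuations (Reynolds-stress-like term);
# analytically 0.5 * AMP_U * AMP_V * cos(PHASE) = 0.125 here:
uv_stress = time_average(lambda t: (u(t) - u_bar) * (v(t) - V_MEAN))
```

The closure problem arises precisely because averaging the equations of motion produces moments like `uv_stress` that are not determined by the mean fields alone.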
Measured Plume Dispersion Parameters Over Water. Volume 1.
1984-09-01
meteorological parameters were continuously monitored at various locations. Tracer gas concentrations were measured by a variety of methods at...addition, this step added a header to the data set containing a variety of averaged meteorological quantities. The basic procedure in this step was
The Momentum Distribution of Liquid ⁴He
Prisk, T. R.; Bryan, M. S.; Sokol, P. E.; ...
2017-07-24
We report a high-resolution neutron Compton scattering study of liquid ⁴He under milli-Kelvin temperature control. To interpret the scattering data, we performed Quantum Monte Carlo calculations of the atomic momentum distribution and final state effects for the conditions of temperature and density considered in the experiment. There is excellent agreement between the observed scattering and ab initio calculations of its lineshape at all temperatures. We also used model fit functions to obtain from the scattering data empirical estimates of the average atomic kinetic energy and Bose condensate fraction. These quantities are also in excellent agreement with ab initio calculations. We conclude that contemporary Quantum Monte Carlo methods can furnish accurate predictions for the properties of Bose liquids, including the condensate fraction, close to the superfluid transition temperature.
Chaotic Bohmian trajectories for stationary states
NASA Astrophysics Data System (ADS)
Cesa, Alexandre; Martin, John; Struyve, Ward
2016-09-01
In Bohmian mechanics, the nodes of the wave function play an important role in the generation of chaos. However, so far, most of the attention has been on moving nodes; little is known about the possibility of chaos in the case of stationary nodes. We address this question by considering stationary states, which provide the simplest examples of wave functions with stationary nodes. We provide examples of stationary wave functions for which there is chaos, as demonstrated by numerical computations, for one particle moving in three spatial dimensions and for two and three entangled particles in two dimensions. Our conclusion is that the motion of the nodes is not necessary for the generation of chaos. What is important is the overall complexity of the wave function. That is, if the wave function, or rather its phase, has a complex spatial variation, it will lead to complex Bohmian trajectories and hence to chaos. Another aspect of our work concerns the average Lyapunov exponent, which quantifies the overall amount of chaos. Since the average Lyapunov exponent is very hard to evaluate analytically and is usually computed numerically, it is useful to have simple quantities that agree well with it. We investigate possible correlations with quantities such as the participation ratio and different measures of entanglement, for different systems and different families of stationary wave functions. We find that these quantities often tend to correlate with the amount of chaos. However, the correlation is not perfect, because, in particular, these measures do not depend on the form of the basis states used to expand the wave function, while the amount of chaos does.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krawiec, F.; Thomas, T.; Jackson, F.
1980-11-01
An examination is made of current and future energy demands, uses, and costs to characterize typical applications and resulting services in the industrial sectors of 15 selected US states. Volume III presents tables containing data on selected states' manufacturing subsector energy consumption, functional uses, and cost in 1974 and 1976. Alabama, California, Illinois, Indiana, Louisiana, Michigan, Missouri, New Jersey, New York, Ohio, Oregon, Pennsylvania, Texas, West Virginia, and Wisconsin were chosen as having the greatest potential for replacing conventional fuel with solar energy. Basic data on the quantities, cost, and types of fuel and electric energy purchased by industry for heat and power were obtained from the 1974 and 1976 Annual Survey of Manufactures. The specific industrial energy service characteristics developed for each selected state include: 1974 and 1976 manufacturing subsector fuels and electricity consumption by 2-, 3-, and 4-digit SIC and primary fuel (quantity and relative share); 1974 and 1976 manufacturing subsector fuel consumption by 2-, 3-, and 4-digit SIC and primary fuel (quantity and relative share); 1974 and 1976 manufacturing subsector average cost of purchased fuels and electricity per million Btu by 2-, 3-, and 4-digit SIC and primary fuel (in 1976 dollars); 1974 and 1976 manufacturing subsector fuels and electric energy intensity by 2-, 3-, and 4-digit SIC and primary fuel (in 1976 dollars); manufacturing subsector average annual growth rates of (1) fuels and electricity consumption, (2) fuels and electric energy intensity, and (3) average cost of purchased fuels and electricity (1974 to 1976). Data are compiled on purchased fuels, distillate fuel oil, residual fuel oil, coal, coke, and breeze, and natural gas. (MCW)
NASA Technical Reports Server (NTRS)
Takeuchi, Yoshimi R.; Frantz, Peter P.; Hilton, Michael R.
2014-01-01
The performance and life of precision ball bearings are critically dependent on maintaining a quantity of oil at the ball/race interface that is sufficient to support a robust protective film. In space applications, where parched conditions are intentionally the norm, harsh operating conditions can displace the small reserves of oil, resulting in reduced film thickness and premature wear. In the past, these effects have proven difficult to model or to measure experimentally. This paper describes a study addressing this challenge, where bearing thermal conductance measurements are employed to infer changes in lubricant quantity at the critical rolling interfaces. In the first part of the paper, we explain how the lubricant's presence and its quantity impacts bearing thermal conductance measurements. For a stationary bearing, we show that conductance is directly related to the lubricant quantity in the ball/race contacts. Hence, aspects of bearing performance related to oil quantity can be understood and insights improved with thermal conductance data. For a moving bearing, a different mechanism of heat transfer dominates and is dependent on lubricant film thickness on the ball. In the second part of the report, we discuss lubricant quantity observations based on bearing thermal conductance measurements. Lubricant quantity, and thus bearing thermal conductance, depends on various initial and operating conditions and is impacted further by the run-in process. A significant effect of maximum run-in speed was also observed, with less oil remaining after obtaining higher speeds. Finally, we show that some of the lubricant that is displaced between the ball and race during run-in operation can be recovered during rest, and we measure the rate of recovery for one example.
Analysis of a water-moderated critical assembly with ANISN-Vitamin C
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, L.
1979-03-01
A tightly packed water moderated ²³³UO₂–ThO₂ critical assembly was analyzed with the Vitamin C library and the 1-D Sₙ code ANISN (S₈, P₃). The purpose of the study was to provide validation of this calculational model as applied to water-cooled hybrid fusion blankets. The quantities compared were the core eigenvalue and various activation shapes. The calculated eigenvalue was 1.02 ± 0.01. The ²³³U fission and ²³²Th capture shapes were found to be in good agreement (±5%) with experiment, except near water-metal boundaries where differences up to 24% were observed. No such error peaking was observed in the ²³²Th fast fission shape. We conclude that the model provides good volume averaged reaction rates in water-cooled systems. However, care must be exercised near water boundaries where thermally dependent reaction rates are significantly underestimated.
Global Ocean Integrals and Means, with Trend Implications.
Wunsch, Carl
2016-01-01
Understanding the ocean requires determining and explaining global integrals and equivalent average values of temperature (heat), salinity (freshwater and salt content), sea level, energy, and other properties. Attempts to determine means, integrals, and climatologies have been hindered by thinly and poorly distributed historical observations in a system in which both signals and background noise are spatially very inhomogeneous, leading to potentially large temporal bias errors that must be corrected at the 1% level or better. With the exception of the upper ocean in the current altimetric-Argo era, no clear documentation exists on the best methods for estimating means and their changes for quantities such as heat and freshwater at the levels required for anthropogenic signals. Underestimates of trends are as likely as overestimates; for example, recent inferences that multidecadal oceanic heat uptake has been greatly underestimated are plausible. For new or augmented observing systems, calculating the accuracies and precisions of global, multidecadal sampling densities for the full water column is necessary to avoid the irrecoverable loss of scientifically essential information.
Plasma properties of driver gas following interplanetary shocks observed by ISEE-3
NASA Technical Reports Server (NTRS)
Zwickl, R. D.; Asbridge, J. R.; Bame, S. J.; Feldman, W. C.; Gosling, J. T.; Smith, E. J.
1983-01-01
Plasma fluid parameters calculated from solar wind and magnetic field data to determine the characteristic properties of driver gas following a select subset of interplanetary shocks were studied. Of 54 shocks observed from August 1978 to February 1980, 9 contained a well defined driver gas that was clearly identifiable by a discontinuous decrease in the average proton temperature. While helium enhancements were present downstream of the shock in all 9 of these events, only about half of them contained simultaneous changes in the two quantities. Simultaneous with the drop in proton temperature the helium and electron temperature decreased abruptly. In some cases the proton temperature depression was accompanied by a moderate increase in magnetic field magnitude with an unusually low variance, by a small decrease in the variance of the bulk velocity, and by an increase in the ratio of parallel to perpendicular temperature. The cold driver gas usually displayed a bidirectional flow of suprathermal solar wind electrons at higher energies.
Plasma properties of driver gas following interplanetary shocks observed by ISEE-3
NASA Technical Reports Server (NTRS)
Zwickl, R. D.; Asbridge, J. R.; Bame, S. J.; Feldman, W. C.; Gosling, J. T.; Smith, E. J.
1982-01-01
Plasma fluid parameters calculated from solar wind and magnetic field data obtained on ISEE 3 were studied. The characteristic properties of driver gas following interplanetary shocks were determined. Of 54 shocks observed from August 1978 to February 1980, nine contained a well defined driver gas that was clearly identifiable by a discontinuous decrease in the average proton temperature across a tangential discontinuity. While helium enhancements were present in all nine of these events, only about half of them contained simultaneous changes in the two quantities. Often the He/H ratio changed over a period of minutes. Simultaneous with the drop in proton temperature, the helium and electron temperatures decreased abruptly. In some cases the proton temperature depression was accompanied by a moderate increase in magnetic field magnitude with an unusually low variance and by an increase in the ratio of parallel to perpendicular temperature. The driver gas usually displayed a bidirectional flow of suprathermal solar wind electrons at higher energies.
Mining influence on underground water resources in arid and semiarid regions
NASA Astrophysics Data System (ADS)
Luo, A. K.; Hou, Y.; Hu, X. Y.
2018-02-01
The coordinated exploitation of coal and water resources in arid and semiarid regions has become a focal issue. Taking the Energy and Chemical Base in Northern Shaanxi as an example, this research statistically analyzes coal yield and drainage volume from several large-scale mines in the mining area, determines the average drainage volume per ton of coal, and calculates the drainage volume for four typical years under different mining intensities. Combining two decades of precipitation observations with water-level data from observation wells, the groundwater table, precipitation infiltration recharge, and evaporation capacity during mine drainage are then calculated, and the transformation relationships among surface water, mine water, and groundwater are analyzed. The results show that the massive mine drainage caused by large-scale coal mining in the study area is the main driver of both the reduction in water resources and the changed transformation relationships among surface water, groundwater, and mine water.
On the Statistical Properties of the Lower Main Sequence
NASA Astrophysics Data System (ADS)
Angelou, George C.; Bellinger, Earl P.; Hekker, Saskia; Basu, Sarbani
2017-04-01
Astronomy is in an era where all-sky surveys are mapping the Galaxy. The plethora of photometric, spectroscopic, asteroseismic, and astrometric data allows us to characterize the comprising stars in detail. Here we quantify to what extent precise stellar observations reveal information about the properties of a star, including properties that are unobserved, or even unobservable. We analyze the diagnostic potential of classical and asteroseismic observations for inferring stellar parameters such as age, mass, and radius from evolutionary tracks of solar-like oscillators on the lower main sequence. We perform rank correlation tests in order to determine the capacity of each observable quantity to probe structural components of stars and infer their evolutionary histories. We also analyze the principal components of classic and asteroseismic observables to highlight the degree of redundancy present in the measured quantities and demonstrate the extent to which information of the model parameters can be extracted. We perform multiple regression using combinations of observable quantities in a grid of evolutionary simulations and appraise the predictive utility of each combination in determining the properties of stars. We identify the combinations that are useful and provide limits to where each type of observable quantity can reveal information about a star. We investigate the accuracy with which targets in the upcoming TESS and PLATO missions can be characterized. We demonstrate that the combination of observations from GAIA and PLATO will allow us to tightly constrain stellar masses, ages, and radii with machine learning for the purposes of Galactic and planetary studies.
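The rank correlation tests described above can be sketched in a few lines of pure Python. The stellar numbers below are illustrative placeholders, not data from the paper: a hypothetical evolutionary track in which central hydrogen falls monotonically with age, so the Spearman coefficient should come out exactly -1.

```python
def rank(xs):
    # Assign average ranks (1-based), handling tie groups.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman rho = Pearson correlation of the ranks.
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical track: central hydrogen fraction decreases monotonically
# with age, giving a perfect negative rank correlation.
age = [0.5, 1.0, 2.0, 4.0, 8.0]        # Gyr (illustrative values)
x_c = [0.68, 0.62, 0.51, 0.33, 0.05]   # central H mass fraction (illustrative)
rho = spearman(age, x_c)               # -1.0 for a monotone decrease
```

Because it uses ranks rather than values, this statistic measures exactly the monotone "probing capacity" of an observable for a model parameter, independent of the functional form.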
43 CFR 3480.0-5 - Definitions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... more Federal leases and may include intervening or adjacent lands in which the United States does not... Secretary for Land and Water Resources means the Assistant Secretary for Land and Water Resources... average amount of not less than commercial quantities of recoverable coal reserves per continued operation...
NASA Astrophysics Data System (ADS)
Lv, Xizhi; Zuo, Zhongguo; Xiao, Peiqing
2017-06-01
With increasing demand for water resources and frequently a general deterioration of local water resources, water conservation by forests has received considerable attention in recent years. To evaluate water conservation capacities of different forest ecosystems in mountainous areas of Loess Plateau, the landscape of forests was divided into 18 types in Loess Plateau. Under the consideration of the factors such as climate, topography, plant, soil and land use, the water conservation of the forest ecosystems was estimated by means of InVEST model. The result showed that 486417.7 hm2 forests in typical mountain areas were divided into 18 forest types, and the total water conservation quantity was 1.64×1012m3, equaling an average of water conversation quantity of 9.09×1010m3. There is a great difference in average water conversation capacity among various forest types. The water conservation function and its evaluation is crucial and complicated issues in the study of ecological service function in modern times.
Observational determination of albedo decrease caused by vanishing Arctic sea ice.
Pistone, Kristina; Eisenman, Ian; Ramanathan, V
2014-03-04
The decline of Arctic sea ice has been documented in over 30 y of satellite passive microwave observations. The resulting darkening of the Arctic and its amplification of global warming was hypothesized almost 50 y ago but has yet to be verified with direct observations. This study uses satellite radiation budget measurements along with satellite microwave sea ice data to document the Arctic-wide decrease in planetary albedo and its amplifying effect on the warming. The analysis reveals a striking relationship between planetary albedo and sea ice cover, quantities inferred from two independent satellite instruments. We find that the Arctic planetary albedo has decreased from 0.52 to 0.48 between 1979 and 2011, corresponding to an additional 6.4 ± 0.9 W/m² of solar energy input into the Arctic Ocean region since 1979. Averaged over the globe, this albedo decrease corresponds to a forcing that is 25% as large as that due to the change in CO2 during this period, considerably larger than expectations from models and other less direct recent estimates. Changes in cloudiness appear to play a negligible role in observed Arctic darkening, thus reducing the possibility of Arctic cloud albedo feedbacks mitigating future Arctic warming.
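The stated comparison can be checked with back-of-envelope arithmetic under loudly labeled assumptions of our own (the "Arctic Ocean region" approximated as the spherical cap north of 66.5°N, CO2 rising from roughly 337 to 392 ppm over 1979-2011, and the simplified logarithmic CO2 forcing expression), so the result is only a rough cross-check, not the paper's calculation.

```python
import math

# Spread the reported regional forcing over the globe and compare it with
# an approximate CO2 forcing for 1979-2011. All region and CO2 numbers
# below are our assumptions, not values from the paper.
F_ARCTIC = 6.4                                           # W/m^2, regional value reported above
cap_fraction = (1 - math.sin(math.radians(66.5))) / 2    # cap area / Earth area, ~0.04
F_global = F_ARCTIC * cap_fraction                       # regional forcing, global average

# Simplified logarithmic CO2 forcing, with assumed 1979 and 2011 concentrations:
F_co2 = 5.35 * math.log(392.0 / 337.0)                   # ~0.8 W/m^2

ratio = F_global / F_co2   # paper reports ~25%; this crude cap gives roughly a third
```

The cap approximation overshoots slightly because the paper's region is ocean-only, but the order of magnitude of the 25% figure is reproduced.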
Simulated effects of irrigation on salinity in the Arkansas River Valley in Colorado
Goff, K.; Lewis, M.E.; Person, M.A.; Konikow, Leonard F.
1998-01-01
Agricultural irrigation has a substantial impact on water quantity and quality in the lower Arkansas River valley of southeastern Colorado. A two-dimensional flow and solute transport model was used to evaluate the potential effects of changes in irrigation on the quantity and quality of water in the alluvial aquifer and in the Arkansas River along a 17.7 km reach of the river. The model was calibrated to aquifer water level and dissolved solids concentration data collected throughout the 24-year study period (1971–95). Two categories of irrigation management were simulated with the calibrated model: (1) a decrease in ground water withdrawals for irrigation; and (2) cessation of all irrigation from ground water and surface water sources. In the modeled category of decreased irrigation from ground water pumping, there was a resulting 6.9% decrease in the average monthly ground water salinity, a 0.6% decrease in average monthly river salinity, and an 11.1% increase in ground water return flows to the river. In the modeled category of the cessation of all irrigation, average monthly ground water salinity decreased by 25%; average monthly river salinity decreased by 4.4%; and ground water return flows to the river decreased by an average of 64%. In all scenarios, simulated ground water salinity decreased relative to historical conditions for about 12 years before reaching a new dynamic equilibrium condition. Aquifer water levels were not sensitive to any of the modeled scenarios. These potential changes in salinity could result in improved water quality for irrigation purposes downstream from the affected area.
Fryer, Luke K; Vermunt, Jan D
2018-03-01
Contemporary models of student learning within higher education are often inclusive of processing and regulation strategies. Considerable research has examined their use over time and their (person-centred) convergence. The longitudinal stability/variability of learning strategy use, however, is poorly understood, but essential to supporting student learning across university experiences. Develop and test a person-centred longitudinal model of learning strategies across the first-year university experience. Japanese university students (n = 933) completed surveys (deep and surface approaches to learning; self, external, and lack of regulation) at the beginning and end of their first year. Following invariance and cross-sectional tests, latent profile transition analysis (LPTA) was undertaken. Initial difference testing supported small but significant differences for self-/external regulation. Fit indices supported a four-group model, consistent across both measurement points. These subgroups were labelled Low Quality (low deep approaches and self-regulation), Low Quantity (low strategy use generally), Average (moderate strategy use), and High Quantity (intense use of all strategies) strategies. The stability of these groups ranged from stable to variable: Average (93% stayers), Low Quality (90% stayers), High Quantity (72% stayers), and Low Quantity (40% stayers). The three largest transitions presented joint shifts in processing/regulation strategy preference across the year, from adaptive to maladaptive and vice versa. Person-centred longitudinal findings presented patterns of learning transitions that different students experience during their first year at university. Stability/variability of students' strategy use was linked to the nature of initial subgroup membership. Findings also indicated strong connections between processing and regulation strategy changes across first-year university experiences. Implications for theory and practice are discussed. 
© 2017 The British Psychological Society.
Photoprotection by sunscreen depends on time spent on application.
Heerfordt, Ida M; Torsnes, Linnea R; Philipsen, Peter A; Wulf, Hans Christian
2018-03-01
To be effective, sunscreens must be applied in a sufficient quantity, and reapplication is recommended. No previous study has investigated whether time spent on sunscreen application is important for the achieved photoprotection. To determine whether time spent on sunscreen application is related to the amount of sunscreen used during a first and second application. Thirty-one volunteers wearing swimwear applied sunscreen twice in a laboratory environment. Time spent and the amount of sunscreen used during each application were measured. Subjects' body surface area accessible for sunscreen application (BSA) was estimated from their height, weight, and swimwear worn. The average applied quantity of sunscreen after each application was calculated. Subjects spent on average 4 minutes and 15 seconds on the first application and approximately 85% of that time on the second application. There was a linear relationship between time spent on application and amount of sunscreen used during both the first and the second application (P < .0001). Participants applied 2.21 grams of sunscreen per minute during both applications. After the first application, subjects had applied a mean quantity of sunscreen of 0.71 mg/cm² on the BSA, and after the second application, a mean total quantity of 1.27 mg/cm² had been applied. We found that participants applied a constant amount of sunscreen per minute during both a first and a second application. Measurement of time spent on application of sunscreen on different body sites may be useful in investigating the distribution of sunscreen in real-life settings. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
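The reported averages hang together arithmetically, which can be checked with a small sketch (our own consistency estimate: individual subjects and BSAs varied, and the implied BSA below is derived, not measured).

```python
# Cross-check of the mean values reported in the abstract.
RATE = 2.21                    # g of sunscreen applied per minute (reported)
T_FIRST = 4 + 15 / 60          # first application: 4 min 15 s (reported)
T_SECOND = 0.85 * T_FIRST      # second application: ~85% of that time (reported)

g_first = RATE * T_FIRST                 # ~9.4 g in the first application
g_total = g_first + RATE * T_SECOND      # ~17.4 g over both applications

# Implied mean accessible BSA from the first-application coverage of 0.71 mg/cm^2:
bsa_cm2 = g_first * 1000 / 0.71          # ~13,000 cm^2, i.e. ~1.3 m^2

# Cumulative coverage implied for that BSA; the abstract reports 1.27 mg/cm^2:
total_coverage = g_total * 1000 / bsa_cm2
```

The implied cumulative coverage (~1.31 mg/cm²) agrees with the reported 1.27 mg/cm² to within the rounding of the published means.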
NASA Technical Reports Server (NTRS)
Nisenson, P.; Papaliolios, C.
1983-01-01
An analysis of the effects of photon noise on astronomical speckle image reconstruction using the Knox-Thompson algorithm is presented. It is shown that the quantities resulting from the speckle average are biased, but that the biases are easily estimated and compensated. Calculations are also made of the convergence rate for the speckle average as a function of the source brightness. An illustration of the effects of photon noise on the image recovery process is included.
Cosmic radiation dose measurements from the RaD-X flight campaign
NASA Astrophysics Data System (ADS)
Mertens, Christopher J.; Gronoff, Guillaume P.; Norman, Ryan B.; Hayes, Bryan M.; Lusby, Terry C.; Straume, Tore; Tobiska, W. Kent; Hands, Alex; Ryden, Keith; Benton, Eric; Wiley, Scott; Gersey, Brad; Wilkins, Richard; Xu, Xiaojing
2016-10-01
The NASA Radiation Dosimetry Experiment (RaD-X) stratospheric balloon flight mission obtained measurements for improving the understanding of cosmic radiation transport in the atmosphere and human exposure to this ionizing radiation field in the aircraft environment. The value of dosimetric measurements from the balloon platform is that they can be used to characterize cosmic ray primaries, the ultimate source of aviation radiation exposure. In addition, radiation detectors were flown to assess their potential application to long-term, continuous monitoring of the aircraft radiation environment. The RaD-X balloon was successfully launched from Fort Sumner, New Mexico (34.5°N, 104.2°W) on 25 September 2015. Over 18 h of flight data were obtained from each of the four different science instruments at altitudes above 20 km. The RaD-X balloon flight was supplemented by contemporaneous aircraft measurements. Flight-averaged dosimetric quantities are reported at seven altitudes to provide benchmark measurements for improving aviation radiation models. The altitude range of the flight data extends from commercial aircraft altitudes to above the Pfotzer maximum where the dosimetric quantities are influenced by cosmic ray primaries. The RaD-X balloon flight observed an absence of the Pfotzer maximum in the measurements of dose equivalent rate.
Evaluation of the Analysis Influence on Transport in Reanalysis Regional Water Cycles
NASA Technical Reports Server (NTRS)
Bosilovich, M. G.; Chen, J.; Robertson, F. R.
2011-01-01
Regional water cycles of reanalyses do not follow theoretical assumptions applicable to pure simulated budgets. The data analysis changes the wind, temperature, and moisture, perturbing the theoretical balance. Of course, the analysis is correcting the model forecast error, so that the state fields should be more aligned with observations. Recently, it has been reported that the moisture convergence over continental regions, even those with significant quantities of radiosonde profiles present, can produce long-term values not consistent with theoretical bounds. Specifically, long averages over continents produce some regions of moisture divergence. This implies that the observational analysis leads to a source of water in the region. One such region is the United States Great Plains, where many radiosonde and lidar wind observations are assimilated. We will utilize a new ancillary data set from the MERRA reanalysis called the Gridded Innovations and Observations (GIO), which provides the assimilated observations on MERRA's native grid, allowing more thorough consideration of their impact on regional and global climatology. Included with the GIO data are the observation minus forecast (OmF) and observation minus analysis (OmA) statistics. Using OmF and OmA, we can identify the bias of the analysis against each observing system and gain a better understanding of the observations that are controlling the regional analysis. In this study we will focus on the wind and moisture assimilation.
Mass Function of Galaxy Clusters in Relativistic Inhomogeneous Cosmology
NASA Astrophysics Data System (ADS)
Ostrowski, Jan J.; Buchert, Thomas; Roukema, Boudewijn F.
The current cosmological model (ΛCDM) with the underlying FLRW metric relies on the assumption of local isotropy, hence homogeneity of the Universe. Difficulties arise when one attempts to justify this model as an average description of the Universe from first principles of general relativity, since in general, the Einstein tensor built from the averaged metric is not equal to the averaged stress-energy tensor. In this context, the discrepancy between these quantities is called "cosmological backreaction" and has been the subject of scientific debate among cosmologists and relativists for more than 20 years. Here we present one of the methods to tackle this problem, i.e. averaging the scalar parts of the Einstein equations, together with its application, the cosmological mass function of galaxy clusters.
Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems
NASA Astrophysics Data System (ADS)
Tobasco, Ian; Goluskin, David; Doering, Charles R.
2018-02-01
For any quantity of interest in a system governed by ordinary differential equations, it is natural to seek the largest (or smallest) long-time average among solution trajectories, as well as the extremal trajectories themselves. Upper bounds on time averages can be proved a priori using auxiliary functions, the optimal choice of which is a convex optimization problem. We prove that the problems of finding maximal trajectories and minimal auxiliary functions are strongly dual. Thus, auxiliary functions provide arbitrarily sharp upper bounds on time averages. Moreover, any nearly minimal auxiliary function provides phase space volumes in which all nearly maximal trajectories are guaranteed to lie. For polynomial equations, auxiliary functions can be constructed by semidefinite programming, which we illustrate using the Lorenz system.
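A minimal worked instance of the auxiliary-function bound, using our own toy ODE rather than the Lorenz system discussed in the abstract: for dx/dt = f(x) with f(x) = x - x³, the time average of f(x)V'(x) vanishes along any bounded trajectory for smooth V, so the average of a quantity Φ(x) is bounded by max over x of [Φ(x) + f(x)V'(x)]. Taking Φ(x) = x² and the auxiliary function V(x) = x²/2 gives Φ + fV' = 2x² - x⁴ ≤ 1, so avg(x²) ≤ 1, and this bound is sharp because trajectories settle at x = ±1.

```python
# Toy illustration of an a priori time-average bound via an auxiliary function.
# ODE: dx/dt = x - x**3; quantity of interest: Phi(x) = x**2.
# Auxiliary function V(x) = x**2/2 proves avg(x**2) <= 1 (sharp).

def f(x):
    return x - x ** 3

def rk4_step(x, dt):
    # classical fourth-order Runge-Kutta step
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def time_average_of_square(x0, dt=0.01, t_end=200.0):
    x, total, steps = x0, 0.0, int(t_end / dt)
    for _ in range(steps):
        x = rk4_step(x, dt)
        total += x ** 2
    return total / steps

# Starting inside the basin of x = 1, the long-time average of x**2
# approaches the proven bound of 1 from below.
avg = time_average_of_square(0.5)
```

The optimization over V that the paper formulates as a convex (semidefinite) program reduces, in this one-dimensional polynomial case, to the hand computation above.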
Quantitative Criteria to Screen for Cannabis Use Disorder.
Casajuana, Cristina; López-Pelayo, Hugo; Miquel, Laia; Balcells-Oliveró, María Mercedes; Colom, Joan; Gual, Antoni
2018-06-27
The Standard Joint Unit (1 SJU = 7 mg of Δ9-tetrahydrocannabinol) simplifies the exploration of risky patterns of cannabis use. This study proposes a preliminary quantitative cutoff criterion to screen for cannabis use disorder (CUD). Sociodemographic data and information on cannabis quantities, frequency of use, and risk for CUD (measured with the Cannabis Abuse Screening Test, CAST) were collected from cannabis users recruited in Barcelona (from February 2015 to June 2016). CAST scores were categorized into low, moderate, and high risk for CUD, based on the SJU consumed and frequency. Receiver operating characteristic (ROC) analysis related daily SJU to CUD. Participants (n = 473) were on average 29 years old (SD = 10), men (77.1%), and single (74.6%). With an average of 4 joints per smoking day, 82.5% consumed cannabis almost every day. Risk for CUD (9.40% low, 23.72% moderate, 66.88% high) increased significantly with greater frequency and quantities consumed. The ROC analyses suggest 1.2 SJU per day as a cutoff criterion to screen for at least moderate risk for CUD (sensitivity 69.4%, specificity 63.6%). Frequency and quantity should be considered when exploring cannabis risks. One SJU per day is proposed as a preliminary quantitative criterion to screen users with at least a moderate risk for CUD. © 2018 S. Karger AG, Basel.
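A minimal sketch of the cutoff-selection idea on synthetic data: the gamma distributions, sample sizes, and all parameters below are hypothetical (not from the study), and Youden's J is used as one common way to pick a ROC cutoff.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily-SJU samples for users below / at-or-above moderate CUD risk.
sju_low = rng.gamma(shape=2.0, scale=0.4, size=300)   # lower-consumption group
sju_risk = rng.gamma(shape=3.0, scale=0.7, size=300)  # higher-consumption group
scores = np.concatenate([sju_low, sju_risk])
labels = np.concatenate([np.zeros(300), np.ones(300)])

def best_cutoff(scores, labels):
    """Scan candidate cutoffs; keep the one maximizing Youden's J = sens + spec - 1."""
    best_c, best_j = None, -1.0
    for c in np.unique(scores):
        pred = scores >= c                     # "positive" = at/above the cutoff
        sens = np.mean(pred[labels == 1])      # true-positive rate
        spec = np.mean(~pred[labels == 0])     # true-negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j

cutoff, j = best_cutoff(scores, labels)
```

With real CAST-categorized data, the same scan would reproduce a sensitivity/specificity pair like the 69.4%/63.6% reported at 1.2 SJU per day.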
Determination of groundwater abstractions by means of GRACE data and Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Gemitzi, Alexandra; Tsagkarakis, Konstantinos; Lakshmi, Venkat
2017-04-01
The EU Water Framework Directive (WFD) requires, for each groundwater body, the determination of annual average rates of abstraction from all points providing more than 10 m3 per day, as well as groundwater level monitoring, so as to ensure that the available groundwater resource is not exceeded by the long-term annual average rate of abstraction. In order to acquire such information, in situ observation networks are necessary. However, there are cases, e.g., Greece, where the WFD monitoring programme has not yet become operational due to bureaucratic, socioeconomic, and often political constraints. The present study aims at determining groundwater use at the aquifer scale by using Gravity Recovery and Climate Experiment (GRACE) satellite data coupled with readily available meteorological data. Traditionally, GRACE data have been used at the global and regional scales due to their coarse resolution and the difficulties in disaggregating the various Total Water Storage (TWS) components. Previous works have evaluated the subsurface anomalies (ΔGW) using supplementary data sets and hydrologic modeling results in order to disaggregate GRACE TWS anomalies into their various components. Recent works, however, have shown that changes in groundwater storage dominate GRACE TWS changes; it was therefore thought reasonable to use changes in GRACE-derived TWS to quantify abstractions from a groundwater body. Statistical downscaling was performed using an Artificial Neural Network in the form of a Multilayer Perceptron model, in conjunction with local meteorological data. An ensemble of 100 ANNs provided a means of quantifying uncertainty and improving generalization. The methodology was applied in the Rhodope area (NE Greece) and proved to be an efficient way of downscaling GRACE data in order to estimate the monthly quantity of water extracted from a certain aquifer.
Although our methodology does not aim at estimating abstractions at single points, it manages to capture the total monthly abstracted quantities from a groundwater body. The approach developed herein offers a practical advantage to water managers, who will be able to acquire information on groundwater uses without having to rely on costly in situ observations.
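The ensemble-with-uncertainty idea can be sketched with a bootstrap of simple linear models standing in for the paper's ensemble of 100 ANNs; all data, predictors, and coefficients below are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical monthly predictors: a GRACE TWS anomaly and local precipitation,
# with a synthetic linear "true" abstraction signal plus noise.
n = 120
tws = rng.normal(0.0, 5.0, n)
precip = rng.normal(50.0, 15.0, n)
abstraction = 10.0 - 0.8 * tws - 0.05 * precip + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), tws, precip])  # design matrix with intercept

def fit_ols(X, y):
    """Ordinary least squares via numpy's least-squares solver."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Bootstrap ensemble of 100 members: resample months, refit, collect coefficients.
coefs = np.array([fit_ols(X[idx], abstraction[idx])
                  for idx in (rng.integers(0, n, n) for _ in range(100))])

x_new = np.array([1.0, -3.0, 40.0])            # predictors for one new month
preds = coefs @ x_new                          # one prediction per ensemble member
mean_pred, spread = preds.mean(), preds.std()  # central estimate and uncertainty
```

The ensemble spread plays the role of the uncertainty quantification described above; with ANNs each member would simply be a retrained network instead of a refit regression.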
Quantity Stickiness versus Stackelberg Leadership
NASA Astrophysics Data System (ADS)
Ferreira, F. A.
2008-10-01
We study the endogenous Stackelberg relations in a dynamic market. We analyze a twice-repeated duopoly where, in the beginning, each firm chooses either a quantity-sticky production mode or a quantity-flexible production mode. The size of the market becomes observable after the first period. In the second period, a firm can adjust its quantity if, and only if, it has adopted the flexible mode. Hence, if one firm chooses the sticky mode whilst the other chooses the flexible mode, then they respectively play the roles of a Stackelberg leader and a Stackelberg follower in the second marketing period. We compute the supply quantities at equilibrium and the corresponding expected profits of the firms. We also analyze the effect of the slope parameter of the demand curve on the expected supply quantities and on the profits.
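For intuition, the closed-form equilibria of the textbook linear-demand, zero-cost case can be computed directly. The parameters below are illustrative, and the paper's model additionally involves market-size uncertainty resolved between the two periods.

```python
# Linear inverse demand P = a - b*(q1 + q2) with zero marginal cost.
a, b = 10.0, 1.0

# Both firms flexible (or both sticky): simultaneous Cournot play.
q_cournot = a / (3 * b)
pi_cournot = (a - b * 2 * q_cournot) * q_cournot

# One sticky, one flexible: the sticky firm's quantity is committed first
# (Stackelberg leader); the flexible firm observes it and best-responds (follower).
q_leader = a / (2 * b)
q_follower = (a - b * q_leader) / (2 * b)
price = a - b * (q_leader + q_follower)
pi_leader = price * q_leader      # 12.5: leading beats Cournot (100/9 ~ 11.1)
pi_follower = price * q_follower  # 6.25: following is worse than Cournot
```

The profit ordering (leader > Cournot > follower) is what makes the endogenous choice between sticky and flexible modes a genuine strategic trade-off.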
Goldman, Howard H; Barry, Colleen L; Normand, Sharon-Lise T; Azzone, Vanessa; Busch, Alisa B; Huskamp, Haiden A
2012-02-01
The impact of parity coverage on the quantity of behavioral health services used by enrollees and on the prices of these services was examined in a set of Federal Employees Health Benefit (FEHB) Program plans. After parity implementation, the quantity of services used in the FEHB plans declined in five service categories, compared with plans that did not have parity coverage. The decline was significant for all service types except inpatient care. Because a previous study of the FEHB Program found that total spending on behavioral health services did not increase after parity implementation, it can be inferred that average prices must have increased over the period. The finding of a decline in service use and increase in prices provides an empirical window on what might be expected after implementation of the federal parity law and the parity requirement under the health care reform law.
A Reconstruction Approach to High-Order Schemes Including Discontinuous Galerkin for Diffusion
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2009-01-01
We introduce a new approach to high-order accuracy for the numerical solution of diffusion problems by solving the equations in differential form using a reconstruction technique. The approach has the advantages of simplicity and economy. It results in several new high-order methods including a simplified version of discontinuous Galerkin (DG). It also leads to new definitions of common value and common gradient quantities at each interface shared by the two adjacent cells. In addition, the new approach clarifies the relations among the various choices of new and existing common quantities. Fourier stability and accuracy analyses are carried out for the resulting schemes. Extensions to the case of quadrilateral meshes are obtained via tensor products. For the two-point boundary value problem (steady state), it is shown that these schemes, which include most popular DG methods, yield exact common interface quantities as well as exact cell average solutions for nearly all cases.
NASA Technical Reports Server (NTRS)
Abbas, M. M.; Michelsen, H. A.; Gunson, M. R.; Abrams, M. C.; Newchurch, M. J.; Salawitch, R. J.; Chang, A. Y.; Goldman, A.; Irion, F. W.; Manney, G. L.;
1996-01-01
Stratospheric measurements of H2O and CH4 by the Atmospheric Trace Molecule Spectroscopy (ATMOS) Fourier transform spectrometer on the ATLAS-3 shuttle flight in November 1994 have been examined to investigate the altitude and geographic variability of H2O and the quantity H = (H2O + 2CH4) in the tropics and at mid-latitudes (8 to 49 deg N) in the northern hemisphere. The measurements indicate an average value of 7.24 +/- 0.44 ppmv for H between altitudes of about 18 to 35 km, corresponding to an annual average water vapor mixing ratio of 3.85 +/- 0.29 ppmv entering the stratosphere. The H2O vertical distribution in the tropics exhibits a wave-like structure in the 16- to 25-km altitude range, suggestive of seasonal variations in the water vapor transported from the troposphere to the stratosphere. The hygropause appears to be nearly coincident with the tropopause at the time of observations. This is consistent with the phase of the seasonal cycle of H2O in the lower stratosphere, since the ATMOS observations were made in November when the H2O content of air injected into the stratosphere from the troposphere is decreasing from its seasonal peak in July-August.
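The near-conservation of H = H2O + 2CH4 lets the entry-level water vapor be inferred by simple arithmetic. The CH4 mixing ratio below is an assumed illustrative value, not taken from the abstract.

```python
# H = H2O + 2*CH4 is approximately conserved in the stratosphere because each
# oxidized CH4 molecule ultimately yields about two H2O molecules.
H = 7.24          # ppmv, stratospheric average of H from the ATMOS measurements
ch4_entry = 1.70  # ppmv, assumed CH4 mixing ratio at entry (illustrative value)
h2o_entry = H - 2.0 * ch4_entry  # implied H2O entering the stratosphere (ppmv)
# h2o_entry lands near the 3.85 +/- 0.29 ppmv quoted in the abstract
```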
NASA Astrophysics Data System (ADS)
Krčma, F.; Kozáková, Z.; Mazánková, V.; Horák, J.; Dostál, L.; Obradović, B.; Nikiforov, A.; Belmonte, T.
2018-06-01
A recently presented novel plasma source generating discharge in liquids, based on the pin-hole discharge configuration, is characterized in detail. The system is supplied by DC non-pulsing high voltage of both polarities in NaCl water solutions over a conductivity range of 100–15 000 μS/cm. The discharge itself shows self-pulsing operation. Discharge ignition is observed in microbubbles as a transient discharge followed, in positive polarity at lower conductivities, by a glow discharge propagating inside the bubbles. At high conductivities, the glow regime is partially replaced by a more energetic sequence of transient discharges followed by a shorter glow mode operation. The transient regime probability and its intensity are higher in the negative discharge polarity. The transient discharge produces acoustic and shock waves, which are observed at the moment of bubble cavitation. The average gas temperature of 700–1500 K was calculated from the lowest OH (A-X) 0-0 band transitions. The average electron concentrations of 10^20–10^23 m^-3 were calculated from Hα and Hβ line profiles. Finally, the production of chemically active species is determined from hydrogen peroxide energy yields related to the energy consumption of the whole interelectrode system. All these quantities depend on the solution conductivity, the discharge polarity, and the applied power.
Perugia, Giulia; Rodríguez-Martín, Daniel; Boladeras, Marta Díaz; Mallofré, Andreu Català; Barakova, Emilia; Rauterberg, Matthias
2017-01-01
Engagement in activities is crucial to improve quality of life in dementia. Yet, its measurement relies exclusively on behavior observation, and the influence that behavioral and psychological symptoms of dementia (BPSD) have on it is overlooked. This study investigated whether quantity of movement, gauged with a wrist-worn accelerometer, could be a sound measure of engagement and whether apathy and depression negatively affected engagement. Fourteen participants with dementia took part in 6 sessions of activities: 3 of cognitive games (e.g., jigsaw puzzles) and 3 of robot play (Pleo). Results highlighted significant correlations between quantity of movement and observational scales of engagement, and a strong negative influence of apathy and depression on engagement. Overall, these findings suggest that quantity of movement could be used as an ancillary measure of engagement and underline the need to profile people with dementia according to their concurrent BPSD to better understand their engagement in activities. PMID:29148293
Matthews, M E; Waldvogel, C F; Mahaffey, M J; Zemel, P C
1978-06-01
Preparation procedures of standardized quantity formulas were analyzed for similarities and differences in production activities, and three entrée classifications were developed, based on these activities. Two formulas from each classification were selected, preparation procedures were divided into elements of production, and the MSD Quantity Food Production Code was applied. Macro elements not included in the existing Code were simulated, coded, assigned associated Time Measurement Units, and added to the MSD Quantity Food Production Code. Repeated occurrence of similar elements within production methods indicated that macro elements could be synthesized for use within one or more entrée classifications. Basic elements were grouped, simulated, and macro elements were derived. Macro elements were applied in the simulated production of 100 portions of each entrée formula. Total production time for each formula and average production time for each entrée classification were calculated. Application of macro elements indicated that this method of predetermining production time was feasible and could be adapted by quantity foodservice managers as a decision technique used to evaluate menu mix, production personnel schedules, and allocation of equipment usage. These macro elements could serve as a basis for further development and refinement of other macro elements which could be applied to a variety of menu item formulas.
VISUALIZATION AND SIMULATION OF NON-AQUEOUS PHASE LIQUIDS SOLUBILIZATION IN PORE NETWORKS
The design of in-situ remediation of contaminated soils is mostly based on a description at the macroscopic scale using averaged quantities. These cannot address issues at the pore and pore-network scales. In this paper, visualization experiments and numerical simulations in ...
Code of Federal Regulations, 2013 CFR
2013-07-01
... any form of solid, liquid, or gaseous fuel derived from such material. Fossil fuel-fired means the... average quantity of fossil fuel consumed by a unit, measured in millions of British Thermal Units... high relative to the reference value. Boiler means an enclosed fossil or other fuel-fired combustion...
Code of Federal Regulations, 2011 CFR
2011-07-01
... any form of solid, liquid, or gaseous fuel derived from such material. Fossil fuel-fired means the... average quantity of fossil fuel consumed by a unit, measured in millions of British Thermal Units... high relative to the reference value. Boiler means an enclosed fossil or other fuel-fired combustion...
Code of Federal Regulations, 2012 CFR
2012-07-01
... any form of solid, liquid, or gaseous fuel derived from such material. Fossil fuel-fired means the... average quantity of fossil fuel consumed by a unit, measured in millions of British Thermal Units... high relative to the reference value. Boiler means an enclosed fossil or other fuel-fired combustion...
Code of Federal Regulations, 2010 CFR
2010-07-01
... component failure or condition. Fossil fuel means natural gas, petroleum, coal, or any form of solid, liquid... average quantity of fossil fuel consumed by a unit, measured in millions of British Thermal Units... high relative to the reference value. Boiler means an enclosed fossil or other fuel-fired combustion...
Code of Federal Regulations, 2014 CFR
2014-07-01
... any form of solid, liquid, or gaseous fuel derived from such material. Fossil fuel-fired means the... average quantity of fossil fuel consumed by a unit, measured in millions of British Thermal Units... high relative to the reference value. Boiler means an enclosed fossil or other fuel-fired combustion...
EVALUATION AND ANALYSIS OF MICROSCALE FLOW AND TRANSPORT DURING REMEDIATION
The design of in-situ remediation is currently based on a description at the macroscopic scale. Phenomena at the pore and pore-network scales are typically lumped in terms of averaged quantities, using empirical or ad hoc expressions. These models cannot address fundamental rem...
FRBCAT: The Fast Radio Burst Catalogue
NASA Astrophysics Data System (ADS)
Petroff, E.; Barr, E. D.; Jameson, A.; Keane, E. F.; Bailes, M.; Kramer, M.; Morello, V.; Tabbara, D.; van Straten, W.
2016-09-01
Here, we present a catalogue of known Fast Radio Burst sources in the form of an online catalogue, FRBCAT. The catalogue includes information about the instrumentation used for the observations for each detected burst, the measured quantities from each observation, and model-dependent quantities derived from observed quantities. To aid in consistent comparisons of burst properties such as width and signal-to-noise ratios, we have re-processed all the bursts for which we have access to the raw data, with software which we make available. The originally derived properties are also listed for comparison. The catalogue is hosted online as a MySQL database which can also be downloaded in tabular or plain text format for off-line use. This database will be maintained for use by the community for studies of the Fast Radio Burst population as it grows.
Abdominal auscultation does not provide clear clinical diagnoses.
Durup-Dickenson, Maja; Christensen, Marie Kirk; Gade, John
2013-05-01
Abdominal auscultation is a part of the clinical examination of patients, but the determining factors in bowel sound evaluation are poorly described. The aim of this study was to assess inter- and intra-observer agreement in physicians' evaluation of pitch, intensity, and quantity in abdominal auscultation. A total of 100 physicians were presented with 20 bowel sound recordings in a blinded set-up. Recordings had been made in a mix of healthy volunteers and emergency patients. The physicians evaluated pitch, intensity, and quantity of bowel sounds in a questionnaire with three, three, and four categories of answers, respectively. Fleiss' multi-rater kappa (κ) coefficients were calculated for inter-observer agreement; for intra-observer agreement, the probability of agreement was calculated. Inter-observer agreement regarding pitch, intensity, and quantity yielded κ-values of 0.19 (p < 0.0001), 0.30 (p < 0.0001), and 0.24 (p < 0.0001), respectively, corresponding to slight, fair, and fair agreement. Regarding intra-observer agreement, the probability of agreement was 0.55 (95% confidence interval (CI): 0.51-0.59), 0.45 (95% CI: 0.42-0.49), and 0.41 (95% CI: 0.38-0.45) for pitch, intensity, and quantity, respectively. Although relatively poor, observer agreement was slight to fair and thus better than expected by chance. Since the diagnostic value of auscultation increases with the addition of history and clinical findings, and may be further improved by systematic training, it should still be used in the examination of patients with acute abdominal pain.
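Fleiss' multi-rater kappa, the inter-observer statistic used above, can be computed from a subjects-by-categories count matrix; the ratings below are hypothetical, not the study's data.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a (subjects x categories) matrix of rating counts,
    each row summing to the same number of raters r."""
    counts = np.asarray(counts, dtype=float)
    n, _ = counts.shape
    r = counts[0].sum()
    p_j = counts.sum(axis=0) / (n * r)                      # category proportions
    P_i = (np.sum(counts**2, axis=1) - r) / (r * (r - 1))   # per-subject agreement
    P_bar, P_e = P_i.mean(), np.sum(p_j**2)                 # observed vs chance
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 4 recordings, 3 pitch categories, 5 raters each.
ratings = [[5, 0, 0],
           [2, 2, 1],
           [0, 4, 1],
           [1, 1, 3]]
kappa = fleiss_kappa(ratings)  # about 0.27, "fair" on the usual scale
```

Values near 0.19-0.30, as reported above, correspond to slight-to-fair agreement on the conventional Landis-Koch scale.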
Inventory of interbasin transfers of water in the western conterminous United States
Petsch, H.E.
1989-01-01
Information is presented on the quantity of water transferred from one river basin to another in the western conterminous United States. The information is needed by water system managers and planners to develop water budgets for major river basins, to examine the relative extent of existing interbasin transfers, and to define the importance of transferring water to meet regional water demands. All or parts of 11 major water resources regions and 111 complete subregions comprise the study area; water is exported from 39 of these subregions. The average quantity of water exported annually during 1973-82 was about 12 million acre-feet. (USGS)
Acoustic Sensing of Ocean Turbulence
1991-12-01
quantities and of fast-varying quantities, requiring high spatial resolution, fast-response sensors, and stable observation platforms. A classical approach to... with this type of sensor. Moum et al. [Ref. 10] performed upper-ocean observations with this instrument, where they were able to characterize the fine... platform orientation using the 3-axis accelerometer as tiltmeters. E. NON-ACOUSTIC DATA: The non-acoustic channels on the CDV package are: 3-component
Heishman, Aaron D; Curtis, Michael A; Saliba, Ethan N; Hornett, Robert J; Malin, Steven K; Weltman, Arthur L
2017-06-01
Time of day is a key factor that influences the optimization of athletic performance. Intercollegiate coaches oftentimes hold early morning strength training sessions for a variety of reasons, including convenience. However, few studies have specifically investigated the effect of early morning vs. late afternoon strength training on performance indices of fatigue. This is athletically important because circadian and/or ultradian rhythms and alterations in sleep patterns can affect training ability. Therefore, the purpose of the present study was to examine the effects of morning vs. afternoon strength training on an acute performance index of fatigue (countermovement jump height, CMJ), player readiness (Omegawave), and self-reported sleep quantity. We hypothesized that afternoon training sessions would be associated with increased levels of performance, readiness, and self-reported sleep. A retrospective analysis was performed on data collected over the course of the preseason on 10 elite National Collegiate Athletic Association Division 1 male basketball players. All basketball-related activities were performed in the afternoon, with strength and conditioning activities performed either in the morning or in the afternoon. The average values for CMJ, power output (Power), self-reported sleep quantity (sleep), and player readiness were examined. When player load and duration were matched, CMJ (58.8 ± 1.3 vs. 61.9 ± 1.6 cm, p = 0.009), Power (6,378.0 ± 131.2 vs. 6,622.1 ± 172.0 W, p = 0.009), and self-reported sleep duration (6.6 ± 0.4 vs. 7.4 ± 0.25, p = 0.016) were significantly higher with afternoon strength and conditioning training, with no differences observed in player readiness values. We conclude that performance is suppressed with morning training and is associated with a decrease in self-reported quantity of sleep.
Alcohol-Related Negative Consequences among Drinkers around the World
Graham, Kathryn; Bernards, Sharon; Knibbe, Ronald; Kairouz, Sylvia; Kuntche, Sandra; Wilsnack, Sharon C.; Greenfield, Thomas K.; Dietze, Paul; Obot, Isidore; Gmel, Gerhard
2013-01-01
Aims: This paper examines (1) gender and country differences in negative consequences related to drinking; (2) relative rates of different consequences; (3) country-level predictors of consequences. Design, setting and participants: Multi-level analyses used survey data from the GENACIS collaboration. Measurements: Measures included 17 negative consequences grouped into (a) high-endorsement acute, (b) personal, and (c) social. Country-level measures included average frequency and quantity of drinking, percent current drinkers, Gross Domestic Product (GDP), and Human Development Index (HDI). Findings: Overall, the three groupings of consequences were reported by 44%, 12%, and 7% of men and by 31%, 6%, and 3% of women, respectively. More men than women endorsed all consequences, but gender differences were greatest for consequences associated with chronic drinking and social consequences related to male roles. The highest prevalence of consequences was in Uganda, the lowest in Uruguay. Personal and social consequences were more likely in countries with higher usual quantity, fewer current drinkers, and lower scores on GDP and HDI. However, significant interactions with individual-level quantity indicated a stronger relationship between consequences and usual quantity among drinkers in countries with lower quantity, more current drinkers, and higher scores on GDP and HDI. Conclusions: Both gender and country need to be taken into consideration when assessing adverse drinking consequences. Individual measures of alcohol consumption and country-level variables are associated with experiencing such consequences. Additionally, country-level variables affect the strength of the relationship between usual quantity consumed by individuals and adverse consequences. PMID:21395893
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petitpas, Guillaume; Whitesides, Russel
UQHCCI_2 propagates the uncertainties of mass-average quantities (temperature, heat capacity ratio) and of the output performance metrics (IMEP, heat release, CA50, and RI) of an HCCI engine test bench, using the pressure trace, the intake and exhaust molar fractions, and the IVC temperature distributions as inputs (those inputs may be computed using another code, UQHCCI_2, or entered independently).
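The propagation step can be sketched as a Monte Carlo draw through a derived quantity. The ideal-gas trapped-mass relation below, and all distribution parameters, are illustrative stand-ins for the code's actual inputs and outputs, not its implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10_000  # number of Monte Carlo samples

# Illustrative input distributions (stand-ins for pressure-trace, composition,
# and IVC-temperature inputs).
p_ivc = rng.normal(1.0e5, 2.0e3, N)  # Pa, pressure at intake valve closing
T_ivc = rng.normal(350.0, 5.0, N)    # K, temperature at intake valve closing
V_ivc = 5.0e-4                       # m^3, cylinder volume at IVC (held fixed)
R = 287.0                            # J/(kg K), specific gas constant of air

# Propagate every sample through the ideal-gas relation to a derived quantity.
mass = p_ivc * V_ivc / (R * T_ivc)   # kg of trapped charge, one value per sample

mean_mass, std_mass = mass.mean(), mass.std()  # central estimate and uncertainty
```

The same pattern, with the engine-performance relations in place of the ideal-gas law, turns input distributions into output distributions for quantities like IMEP or CA50.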
40 CFR 86.1837-01 - Rounding of emission measurements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... additional significant figure, in accordance with 40 CFR 1065.20. (b) Fleet average NOX value calculations... calculating credits generated or needed as follows: manufacturers must round to the same number of significant figures that are contained in the quantity of vehicles in the denominator of the equation used to compute...
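Rounding to a fixed number of significant figures, as the fleet-average calculation requires, can be sketched as follows; this is an illustrative helper only, and the binding procedure is the one specified in 40 CFR 1065.20.

```python
from math import floor, log10

def round_sig(x, sig):
    """Round x to `sig` significant figures (illustrative, not the regulatory
    rounding procedure)."""
    if x == 0:
        return 0.0
    # Shift the rounding position by the magnitude of x so that `sig` digits
    # survive regardless of scale.
    return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

# Leading zeros do not count as significant figures.
small = round_sig(0.012346, 3)  # 0.0123
large = round_sig(98765, 2)     # 99000
```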
36 CFR 9.9 - Plan of operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... describing the quantity, quality, and any previous production of the deposit; (6) A mining reclamation plan... disturbed, proof, including production records for the years 1973, 1974, and 1975, that new disturbance is necessary to maintain an average annual rate of production not to exceed that of the years 1973, 1974, and...
40 CFR 1065.546 - Validation of minimum dilution ratio for PM batch sampling.
Code of Federal Regulations, 2010 CFR
2010-07-01
... flows and/or tracer gas concentrations for transient and ramped modal cycles to validate the minimum... mode-average values instead of continuous measurements for discrete mode steady-state duty cycles... molar flow data. This involves determination of at least two of the following three quantities: Raw...
Code of Federal Regulations, 2013 CFR
2013-01-01
... history yield means the average of the actual production history yields for each insurable or noninsurable..., excluding value loss crops, the product obtained by multiplying: (i) 100 percent of the per unit price for... established price for the crop, times (ii) The relevant per unit quantity of the crop produced on the farm...
Comprehensive overview of the Point-by-Point model of prompt emission in fission
NASA Astrophysics Data System (ADS)
Tudora, A.; Hambsch, F.-J.
2017-08-01
The investigation of prompt emission in fission is very important for understanding the fission process and for improving the quality of evaluated nuclear data required for new applications. In the last decade, remarkable efforts have been made both in the development of prompt emission models and in the experimental investigation of the properties of fission fragments and of prompt neutron and γ-ray emission. The accurate experimental data concerning the prompt neutron multiplicity as a function of fragment mass and total kinetic energy for 252Cf(SF) and 235U(n, f), recently measured at JRC-Geel (as well as other various prompt emission data), allow a consistent and very detailed validation of the Point-by-Point (PbP) deterministic model of prompt emission. The PbP model results describe very well a large variety of experimental data, starting from the multi-parametric matrices of prompt neutron multiplicity ν(A, TKE) and γ-ray energy Eγ(A, TKE), which validate the model itself, passing through different average prompt emission quantities as a function of A (e.g., ν(A), Eγ(A), ⟨ε⟩(A), etc.) and as a function of TKE (e.g., ν(TKE), Eγ(TKE)), up to the prompt neutron distribution P(ν) and the total average prompt neutron spectrum. The PbP model does not use free or adjustable parameters. To calculate the multi-parametric matrices it needs only data included in the Reference Input Parameter Library (RIPL) of the IAEA. To provide average prompt emission quantities as a function of A and of TKE, as well as total average quantities, the multi-parametric matrices are averaged over reliable experimental fragment distributions. The PbP results are also in agreement with the results of the Monte Carlo prompt emission codes FIFRELIN, CGMF and FREYA.
The good description of a large variety of experimental data proves the capability of the PbP model to be used in nuclear data evaluations and its reliability to predict prompt emission data for fissioning nuclei and incident energies for which the experimental information is completely missing. The PbP treatment can also provide input parameters of the improved Los Alamos model with non-equal residual temperature distributions recently reported by Madland and Kahler, especially for fissioning nuclei without any experimental information concerning the prompt emission.
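The averaging of multi-parametric matrices over fragment distributions can be sketched with toy grids: the ν(A, TKE) surface and yield Y(A, TKE) below are invented for illustration and are not PbP output.

```python
import numpy as np

# Hypothetical toy grids: fragment mass A, total kinetic energy TKE, a
# multi-parametric nu(A, TKE) matrix, and a normalized yield Y(A, TKE).
A = np.arange(80, 171)               # fragment mass numbers
TKE = np.linspace(140.0, 200.0, 31)  # MeV
AA, TT = np.meshgrid(A, TKE, indexing="ij")

nu = 2.0 + 0.01 * (AA - 125) - 0.02 * (TT - 170.0)  # toy multiplicity surface
# Two-humped toy yield mimicking asymmetric fission fragment masses.
Y = np.exp(-((AA - 140) / 8.0) ** 2 - ((TT - 170.0) / 10.0) ** 2)
Y += np.exp(-((AA - 96) / 8.0) ** 2 - ((TT - 170.0) / 10.0) ** 2)
Y /= Y.sum()                         # normalize the yield distribution

# Average over TKE at fixed A to get nu(A); average over everything for nu-bar.
nu_A = (nu * Y).sum(axis=1) / np.maximum(Y.sum(axis=1), 1e-300)
nu_bar = (nu * Y).sum()              # total average prompt neutron multiplicity
```

In the PbP model the same yield-weighted contractions take ν(A, TKE) and Eγ(A, TKE) down to the one-dimensional and total average quantities compared against experiment.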
Commercial turbofan engine exhaust nozzle flow analyses using PAB3D
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Uenishi, K.; Carlson, John R.; Keith, B. D.
1992-01-01
Recent developments of a three-dimensional (PAB3D) code have paved the way for a computational investigation of complex aircraft aerodynamic components. The PAB3D code was developed for solving the simplified Reynolds Averaged Navier-Stokes equations in a three-dimensional multiblock/multizone structured mesh domain. The present analysis was applied to commercial turbofan exhaust flow systems. Solution sensitivity to grid density is presented. Laminar flow solutions were developed for all grids and two-equation k-epsilon solutions were developed for selected grids. Static pressure distributions, mass flow and thrust quantities were calculated for on-design engine operating conditions. Good agreement between predicted surface static pressures and experimental data was observed at different locations. Mass flow was predicted within 0.2 percent of experimental data. Thrust forces were typically within 0.4 percent of experimental data.
Remote sensing for oceanography: Past, present, future
NASA Technical Reports Server (NTRS)
Mcgoldrick, L. F.
1984-01-01
Oceanic dynamics has traditionally been investigated by sampling from instruments in situ, yielding quantitative measurements that are intermittent in both space and time; the ocean is undersampled. The need to obtain proper sampling of the averaged quantities treated in analytical and numerical models is at present the most significant limitation on advances in physical oceanography. Within the past decade, many electromagnetic techniques for the study of the Earth and planets have been applied to the study of the ocean. Now satellites promise nearly total coverage of the world's oceans using only a few days to a few weeks of observations. A review of early and present techniques applied to satellite oceanography and a description of some future systems to be launched into orbit during the remainder of this century are presented. Both scientific and technological capabilities are discussed.
VizieR Online Data Catalog: BVI photometry of LMC bar variables (Di Fabrizio+, 2005)
NASA Astrophysics Data System (ADS)
di Fabrizio, L.; Clementini, G.; Maio, M.; Bragaglia, A.; Carretta, E.; Gratton, R.; Montegriffo, P.; Zoccali, M.
2005-01-01
We present the Johnson-Cousins B,V and I time series data obtained for 162 variable stars (135 RR Lyrae, 4 candidate Anomalous Cepheids, 11 Classical Cepheids, 11 eclipsing binaries and 1 delta Scuti star) in two 13x13 square arcmin areas close to the bar of the Large Magellanic Cloud. The photometric observations presented in this paper were carried out at the 1.54m Danish telescope located in La Silla, Chile, on the nights 4-7 January 1999, UT, and 23-24 January 2001, UT, respectively. In the paper we give coordinates, finding charts, periods, epochs, amplitudes, and mean quantities (intensity- and magnitude-averaged luminosities) of the variables with full coverage of the light variations, along with a discussion of the pulsation properties of the RR Lyrae stars in the sample. (8 data files).
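The distinction between intensity-averaged and magnitude-averaged mean luminosities, both of which the catalogue reports, can be shown on a toy light curve; the magnitudes below are hypothetical.

```python
import numpy as np

# Toy light curve: magnitudes sampled over one pulsation cycle (hypothetical).
mags = np.array([19.6, 19.4, 19.1, 18.9, 19.0, 19.2, 19.4, 19.5])

mag_avg = mags.mean()                  # magnitude-averaged mean

flux = 10.0 ** (-0.4 * mags)           # convert magnitudes to relative intensity
int_avg = -2.5 * np.log10(flux.mean()) # intensity-averaged mean, back to magnitudes
```

Because the magnitude scale is logarithmic, the intensity-averaged mean is always at least as bright (numerically smaller) as the magnitude-averaged mean, which is why both quantities are tabulated separately for pulsating variables.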
Resolution of Probabilistic Weather Forecasts with Application in Disease Management.
Hughes, G; McRoberts, N; Burnett, F J
2017-02-01
Predictive systems in disease management often incorporate weather data among the disease risk factors, and sometimes this comes in the form of forecast weather data rather than observed weather data. In such cases, it is useful to have an evaluation of the operational weather forecast, in addition to the evaluation of the disease forecasts provided by the predictive system. Typically, weather forecasts and disease forecasts are evaluated using different methodologies. However, the information theoretic quantity expected mutual information provides a basis for evaluating both kinds of forecast. Expected mutual information is an appropriate metric for the average performance of a predictive system over a set of forecasts. Both relative entropy (a divergence, measuring information gain) and specific information (an entropy difference, measuring change in uncertainty) provide a basis for the assessment of individual forecasts.
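Expected mutual information can be computed directly from a joint forecast/outcome probability table; the 2×2 table below is hypothetical.

```python
import numpy as np

def mutual_information(joint):
    """Expected mutual information (in nats) of a joint forecast/outcome table."""
    p = np.asarray(joint, dtype=float)
    p = p / p.sum()                          # normalize to a probability table
    px = p.sum(axis=1, keepdims=True)        # marginal over forecasts
    py = p.sum(axis=0, keepdims=True)        # marginal over outcomes
    mask = p > 0                             # skip zero cells (0*log0 = 0)
    return float(np.sum(p[mask] * np.log(p[mask] / (px @ py)[mask])))

# Hypothetical 2x2 table: rows = forecast (event / no event), cols = outcome.
joint = [[0.30, 0.10],
         [0.15, 0.45]]
mi = mutual_information(joint)  # about 0.126 nats
```

A perfect forecast table (all mass on the diagonal) would attain the outcome entropy, and an uninformative one (independent rows and columns) would give zero, which is what makes this metric usable for both weather and disease forecasts.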
NASA Astrophysics Data System (ADS)
Stackhouse, Paul; Wong, Takmeng; Kratz, David; Gupta, Shashi; Wilber, Anne; Edwards, Anne
2010-05-01
The FLASHFlux (Fast Longwave and Shortwave radiative Fluxes from CERES and MODIS) project derives daily averaged gridded top-of-atmosphere (TOA) and surface radiative fluxes within one week of observation. Production of CERES-based TOA and surface fluxes is achieved by using the latest CERES calibration, which is assumed constant in time, and by making simplifying assumptions in the computation of time- and space-averaged quantities. Together, these assumptions result in approximately a 1% increase in the uncertainty of FLASHFlux products over CERES. Analysis has clearly demonstrated that the global-annual mean outgoing longwave radiation decreased by ~0.75 Wm-2 from 2007 to 2008, while the global-annual mean reflected shortwave radiation decreased by 0.14 Wm-2 over the same period. Thus, the combined longwave and shortwave changes resulted in an increase of ~0.89 Wm-2 in net radiation into the Earth climate system in 2008. A time series of TOA fluxes was constructed from CERES EBAF, CERES ERBE-like and FLASHFlux. Relative to this multi-dataset average from 2001 to 2008, the 2008 global-annual mean anomalies are -0.54/-0.26/+0.80 Wm-2, respectively, for the longwave/shortwave/net radiation. These flux values, which were published in the NOAA 2008 State of the Climate Report, are within their corresponding 2-sigma interannual variabilities for this period. This paper extends these results through 2009, where the net flux is observed to recover. The TOA LW variability is also compared to AIRS OLR, showing excellent agreement in the anomalies. The variability appears very well correlated with the 2007-2009 La Nina/El Nino cycles, which altered the global distribution of clouds, total column water vapor and temperature. Reassessments of these results are expected when newer Clouds and the Earth's Radiant Energy System (CERES) data are released.
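The anomaly-versus-2-sigma comparison described above is simple arithmetic: an anomaly is the departure of one year from the multi-year mean, and the 2-sigma interannual variability is twice the sample standard deviation of the series. A minimal sketch, with a hypothetical set of global-annual-mean OLR values (the real values are not listed in the abstract):

```python
def anomaly_and_2sigma(values, target_index):
    """Anomaly of one entry relative to the multi-year mean, plus the
    2-sigma interannual variability (twice the sample std of the series)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return values[target_index] - mean, 2 * var ** 0.5

# Hypothetical global-annual-mean OLR values (W m-2) for 2001-2008
olr = [239.9, 240.1, 240.0, 240.2, 240.1, 240.3, 240.2, 239.6]
anom, two_sigma = anomaly_and_2sigma(olr, -1)   # anomaly of the last year
print(round(anom, 2), round(two_sigma, 2))  # → -0.45 0.44
```

An anomaly lying inside the 2-sigma band, as reported in the abstract, indicates a fluctuation consistent with ordinary interannual variability rather than a trend.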
NASA Astrophysics Data System (ADS)
Fujiyama, Kazunari; Kimachi, Hirohisa; Tsuboi, Toshiki; Hagiwara, Hiroyuki; Ogino, Shotaro; Mizutani, Yoshiki
EBSD (Electron Backscatter Diffraction) analyses were conducted to study quantitative microstructural metrics of creep and creep-fatigue damage for austenitic SUS304HTB boiler tube steel and ferritic Mod.9Cr piping steel. KAM (Kernel Average Misorientation) and GOS (Grain Orientation Spread) maps were obtained for these samples, and the area-averaged values KAMave and GOSave were computed. While these misorientation metrics increased with damage for SUS304HTB steel, they decreased for damaged Mod.9Cr steel, which shows extensive recovery of its subgrain structure. To establish a more universal parameter representing the accumulation of damage that reconciles these opposite trends, EBSD strain parameters were introduced to convert the misorientation changes into quantities representing the permanent strains accumulated during the creep and creep-fatigue damage process. Because KAM values depend on the pixel size (inversely proportional to the observation magnification), and because the permanent strain can be expressed as a shear strain given by the product of dislocation density, Burgers vector, and dislocation movement distance, two KAM strain parameters, MεKAMnet and MεδKAMave, were introduced as the sum of the product of the noise-subtracted KAMnet, or of the absolute change from the initial value δKAMave, with the dislocation movement distance divided by the pixel size. The MεδKAMave parameter showed the better relationship both with creep strain in creep tests and with accumulated creep strain range in creep-fatigue tests. This parameter can be used for strain-based damage evaluation and as a detector of final failure.
Intraarterial infusion chemotherapy with lipiodol-CDDP suspension for hepatocellular carcinoma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, Kazuhiro; Shimizu, Tadafumi; Narabayashi, Isamu
Purpose: To quantitatively evaluate the usefulness of lipiodol-CDDP suspension (LCS) chemotherapy in hepatocellular carcinoma (HCC). Methods: CDDP (cis-diamminedichloroplatinum) powder was prepared by removing the water and NaCl from aqueous CDDP. Two quantities of prepared CDDP powder, 10 mg and 20 mg, were mixed with 1 ml each of iopamidol 300 mgI/ml (IP300) and lipiodol (LPD) using a high-pressure pumping method, thus producing LCS. Thirty-two patients with HCC, who had good renal function [creatinine clearance (Ccr) 50 ml/min or more], received additional intraarterial infusion chemotherapy with LCS or LCS alone. Results: The most frequently observed CDDP powder sizes were 5.95-10.90 {mu}m (average: 11.59 {mu}m). The LCS obtained demonstrated a suspension of 2-12 {mu}m (average 3.69 {mu}m) immediately after mixing, and no significant changes were observed in LCS particle sizes 3 hr after mixing. Moreover, sustained release with LCS was observed for up to 3 hr. Meanwhile, the peripheral free platinum concentration between intraarterial infusion chemotherapy with LCS and intraarterial infusion with the aqueous solution of CDDP, with respect to variance of residence time (VRT), showed a significant difference, with a p value of 0.0382. The survival rate was 89.84% at 1 year, 73.78% at 2 years, and 68.51% at 3 years. Furthermore, the platinum concentration in the tumor was 25-95 times the concentration in the surrounding liver parenchyma. Conclusion: Good clinical results can be expected by applying LCS to HCC.
Markus, Marcello Ricardo Paulista; Lieb, Wolfgang; Stritzke, Jan; Siewert, Ulrike; Troitzsch, Paulina; Koch, Manja; Dörr, Marcus; Felix, Stephan Burkhard; Völzke, Henry; Schunkert, Heribert; Baumeister, Sebastian Edgar
2015-05-01
In developed countries, sclerotic and calcific degeneration of the aortic valve is a common disorder showing pathophysiologic similarities with atherothrombotic coronary disease. Light to moderate alcohol consumption has been associated with a lower risk of atherothrombotic coronary disease and mortality. Whether alcohol consumption affects the development of aortic valve sclerosis (AVS) is not well known. In the present study, we aimed to analyze the cross-sectional association between average daily alcohol consumption and AVS in the general population. We analyzed cross-sectional data from 2022 men and women, aged 45 to 81 years, from the population-based Study of Health in Pomerania. We used a computer-assisted interview that included beverage-specific questions about the quantity and frequency of alcohol consumption over the last 30 days to calculate the average quantity of alcohol consumed (in grams of ethanol per day). AVS was ascertained by echocardiography. The prevalence of AVS was 32.3%. Average daily alcohol intake displayed a J-type relation with AVS (fully adjusted P value: 0.005). Compared with individuals with an average consumption of 10 g of alcohol per day, multivariable-adjusted odds ratios were 1.60 (95% confidence interval, 1.19-2.14) among current abstainers and 1.56 (95% confidence interval, 1.01-2.41) among individuals with an average consumption of 60 g per day. Our findings indicate that light to moderate alcohol consumption was associated with lower odds of having AVS. Prospective data need to address whether alcohol consumption and related changes over time in several biological markers affect the progression of AVS. © 2015 American Heart Association, Inc.
NASA Astrophysics Data System (ADS)
Xia, Yi
Fractures and associated bone fragility induced by osteoporosis and osteopenia are a widespread health threat in today's society. Early detection of fracture risk associated with bone quantity and quality is important for both the prevention and treatment of osteoporosis and consequent complications. Quantitative ultrasound (QUS) is an engineering technology for monitoring the bone quantity and quality of humans on Earth and of astronauts subjected to long-duration microgravity. Factors currently limiting the acceptance of QUS technology include precision, accuracy, reliance on a single index, and standardization. The objective of this study was to improve the accuracy and precision of an image-based QUS technique for non-invasive evaluation of trabecular bone quantity and quality by developing new techniques and understanding ultrasound/tissue interaction. Several new techniques were developed in this dissertation study, including automatic identification of an irregular region of interest (iROI) in bone, surface topology mapping (STM), and mean scattering spacing (MSS) estimation for evaluating trabecular bone structure. In vitro results showed that (1) the inter- and intra-observer errors in QUS measurement were reduced two- to five-fold by iROI compared to previous results; (2) the accuracy of QUS parameters, e.g., ultrasound velocity (UV) through bone, was improved by 16% with STM; and (3) the averaged trabecular spacing can be estimated by the MSS technique (r2=0.72, p<0.01). The measurement errors of BUA and UV introduced by soft tissue and cortical shells in vivo can be quantified by the foot model and the simplified cortical-trabecular-cortical sandwich model developed here, which were verified by the experimental results. The mechanisms of the errors induced by the cortical and soft tissues were revealed by the model.
With the newly developed techniques and understanding of sound-tissue interaction, an in vivo clinical trial and a bed-rest study were performed to evaluate the performance of QUS in clinical applications. It was demonstrated that QUS has performance similar to the current gold-standard method, DXA, for in vivo bone density measurement, while additional information is obtained by QUS for predicting fracture risk by monitoring bone quality. The developed QUS imaging technique can be used to assess bone quantity and quality with improved accuracy and precision.
Observable quantities for electrodiffusion processes in membranes.
Garrido, Javier
2008-03-13
Electrically driven ion transport processes in a membrane system are analyzed in terms of observable quantities, such as the apparent volume flow, the time dependence of the electrolyte concentration in one cell compartment, and the electrical potential difference between the electrodes. The relations between the fluxes and these observable quantities are rigorously deduced from balances for constituent mass and solution volume. These relations improve the results for the transport coefficients up to 25% with respect to those obtained using simplified expressions common in the literature. Given the practical importance of ionic transport numbers and the solvent transference number in the phenomenological description of electrically driven processes, the transport equations are presented using the electrolyte concentration difference and the electric current as the drivers of the different constituents. Because various electric potential differences can be used in this traditional irreversible thermodynamics approach, the advantages of the formulation of the transport equations in terms of concentration difference and electric current are emphasized.
1987-04-24
…eliminated. Averaging the mass spectra from only 500 laser shots (50 seconds with this system) resulted in a detection limit of ~15 ppb. The…resolution. Fluctuations in laser pulse energy from shot to shot appear as noise in the interleaved data, but averaging of several such traces gives a good…ranging from 0 to 120 µW m-2. The quantity of material volatilized was proportional to the number of laser shots. A simple time-of-flight mass spectrometer was…
NASA Astrophysics Data System (ADS)
Dasenbrock-Gammon, Nathan; Zacate, Matthew O.
2017-05-01
Baker et al. derived time-dependent expressions for calculating the average number of jumps per encounter and the displacement probabilities for vacancy diffusion in crystal lattice systems with infinitesimal vacancy concentrations. As shown in this work, their formulation is readily extended to include finite vacancy concentrations, which allows calculation of concentration-dependent, time-averaged quantities. This is useful because it provides a computationally efficient method to express lineshapes of nuclear spectroscopic techniques through the use of stochastic fluctuation models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beck, C.; Fabbian, D.; Rezaei, R.
2017-06-10
Before using three-dimensional (3D) magnetohydrodynamical (MHD) simulations of the solar photosphere in the determination of elemental abundances, one has to ensure that the correct amount of magnetic flux is present in the simulations. The presence of magnetic flux modifies the thermal structure of the solar photosphere, which affects abundance determinations and the solar spectral irradiance. The amount of magnetic flux in the solar photosphere also constrains any possible heating in the outer solar atmosphere through magnetic reconnection. We compare the polarization signals in disk-center observations of the solar photosphere in quiet-Sun regions with those in Stokes spectra computed on the basis of 3D MHD simulations having average magnetic flux densities of about 20, 56, 112, and 224 G. This approach allows us to find the simulation run that best matches the observations. The observations were taken with the Hinode SpectroPolarimeter (SP), the Tenerife Infrared Polarimeter (TIP), the Polarimetric Littrow Spectrograph (POLIS), and the GREGOR Fabry–Pérot Interferometer (GFPI), respectively. We determine characteristic quantities of full Stokes profiles in a few photospheric spectral lines in the visible (630 nm) and near-infrared (1083 and 1565 nm). We find that the appearance of abnormal granulation in intensity maps of degraded simulations can be traced back to an initially regular granulation pattern with numerous bright points in the intergranular lanes before the spatial degradation. The linear polarization signals in the simulations are almost exclusively related to canopies of strong magnetic flux concentrations and not to transient events of magnetic flux emergence. We find that the average vertical magnetic flux density in the simulation should be less than 50 G to reproduce the observed polarization signals in the quiet-Sun internetwork. A value of about 35 G gives the best match across the SP, TIP, POLIS, and GFPI observations.
Majak, W; Hall, J W; Rode, L M; Kalnin, C M
1986-06-01
Ruminal chlorophyll and rates of passage of two water-soluble markers were simultaneously determined in cattle with different susceptibilities to alfalfa bloat. The markers showed a slower rate of passage from the rumens of the more susceptible cattle, where the average half-lives of cobalt-ethylenediaminetetraacetic acid and chromium-ethylenediaminetetraacetic acid were 12 to 17 h. The average half-life of the markers was 8 h in the rumens of the less susceptible animals. In agreement, chloroplast particles in the liquid phase of rumen contents showed greater accumulation in animals susceptible to bloat, but many more observations were required to detect differences in chlorophyll among animals. This was partly due to the inhomogeneous dispersion of chloroplast fragments in the reticulorumen compared with the uniform distribution of the inert markers. Differences in rumen volumes (estimated from the quantity of marker administered and its initial concentration) were detected among animals, but these did not show a relationship to bloat susceptibility. In vitro studies indicated that alfalfa chloroplast particles were not readily degraded by rumen microorganisms. Our results support earlier conclusions of slower rates of salivation for cattle that bloat compared with those that do not.
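Marker half-lives like those above are conventionally obtained by assuming first-order (exponential) disappearance of the marker from the rumen and fitting the log-transformed concentrations. A minimal sketch, with invented sampling times and concentrations constructed to have an 8 h half-life:

```python
import math

def half_life(times_h, conc):
    """Half-life (h) assuming first-order disappearance C(t) = C0*exp(-k*t):
    least-squares fit of ln C against t, then t1/2 = ln 2 / k."""
    y = [math.log(c) for c in conc]
    n = len(times_h)
    tbar = sum(times_h) / n
    ybar = sum(y) / n
    slope = sum((t - tbar) * (yi - ybar) for t, yi in zip(times_h, y)) / \
            sum((t - tbar) ** 2 for t in times_h)
    return math.log(2) / -slope          # slope is -k for a decaying marker

# Hypothetical marker concentrations sampled over 24 h, decaying with t1/2 = 8 h
times = [0, 4, 8, 12, 24]
conc = [100 * 0.5 ** (t / 8) for t in times]
print(round(half_life(times, conc), 2))  # → 8.0
```

Rumen volume, as noted in the abstract, follows from the same model: dose divided by the fitted initial concentration C0.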
Large deviations of a long-time average in the Ehrenfest urn model
NASA Astrophysics Data System (ADS)
Meerson, Baruch; Zilber, Pini
2018-05-01
Since its inception in 1907, the Ehrenfest urn model (EUM) has served as a test bed for key concepts of statistical mechanics. Here we employ this model to study large deviations of a time-additive quantity. We consider two continuous-time versions of the EUM with K urns and N balls: with and without interactions between the balls in the same urn. We evaluate the probability distribution that the average number of balls in one urn over time T takes any specified value aN, where 0 ≤ a ≤ 1. For a long observation time, T → ∞, a Donsker–Varadhan large deviation principle holds: the probability decays exponentially in T with a rate function I(a, N, K, …), where … denote additional parameters of the model. We calculate the rate function exactly by two different methods due to Donsker and Varadhan and compare the exact results with those obtained with a variant of the WKB approximation (after Wentzel, Kramers and Brillouin). In the absence of interactions the WKB prediction for the rate function is exact for any N. In the presence of interactions the WKB method gives asymptotically exact results for N ≫ 1. The WKB method also uncovers the (very simple) time history of the system which dominates the contributions of different time histories to the probability.
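The time-additive quantity studied above — the time average of one urn's occupancy — is easy to simulate for the non-interacting version: each ball independently jumps at unit rate to a uniformly chosen other urn. The sketch below (function name, parameters, and the choice T = 2000 are ours) accumulates that time average, which concentrates around the typical value N/K about which the large deviations are measured:

```python
import random

def time_average_urn1(N=50, K=2, T=2000.0, seed=1):
    """Continuous-time Ehrenfest urn without interactions: each of N balls
    jumps at unit rate to a uniformly chosen other urn. Returns the time
    average of the fraction of balls in urn 0 over [0, T]."""
    rng = random.Random(seed)
    urns = [rng.randrange(K) for _ in range(N)]
    n1 = urns.count(0)
    t, acc = 0.0, 0.0
    while t < T:
        dt = min(rng.expovariate(N), T - t)   # total jump rate is N
        acc += n1 * dt                        # weight occupancy by dwell time
        t += dt
        if t >= T:
            break
        b = rng.randrange(N)                  # the jumping ball
        new = rng.randrange(K - 1)
        if new >= urns[b]:
            new += 1                          # destination differs from origin
        if urns[b] == 0:
            n1 -= 1
        if new == 0:
            n1 += 1
        urns[b] = new
    return acc / (T * N)

print(round(time_average_urn1(), 3))  # close to 1/K = 0.5 for K = 2
```

Probing the tails of the distribution (the rate function itself) requires rare-event techniques or the analytical methods discussed in the paper; direct simulation only sees typical fluctuations.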
Large eddy simulation for aerodynamics: status and perspectives.
Sagaut, Pierre; Deck, Sébastien
2009-07-28
The present paper provides an up-to-date survey of the use of large eddy simulation (LES) and its sequels for engineering applications related to aerodynamics. The most recent landmark achievements are presented. Two categories of problem may be distinguished, according to whether or not the location of separation is triggered by the geometry. In the first case, LES can be considered a mature technique, and recent hybrid Reynolds-averaged Navier-Stokes (RANS)-LES methods do not allow for a significant increase in geometrical complexity and/or Reynolds number with respect to classical LES. When attached boundary layers have a significant impact on the global flow dynamics, the use of hybrid RANS-LES remains the principal strategy to reduce computational cost compared to LES. Another striking observation is that the level of validation is most of the time restricted to time-averaged global quantities, a detailed analysis of the flow unsteadiness being missing. Therefore, a clear need for detailed validation in the near future is identified. To this end, new issues, such as uncertainty and error quantification and modelling, will be of major importance. First results dealing with uncertainty modelling in unsteady turbulent flow simulation are presented.
Evaluation of scaling invariance embedded in short time series.
Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping
2014-01-01
Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities, and the consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series, with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that by using the standard central moving average de-trending procedure this method can evaluate the scaling exponents for short time series with negligible bias (≤0.03) and a sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of the scaling exponents are close, their evolutionary behaviors display rich patterns. The method has potential use in analyzing physiological signals, detecting early warning signals, and so on. As an emphasis, our core contribution is that by means of the proposed method one can precisely estimate Shannon entropy from limited records.
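The paper's balanced diffusion-entropy estimator is considerably more elaborate, but the basic idea of "evaluating a scaling exponent" can be illustrated with a simple variance-scaling (Hurst-type) estimate: aggregate the series over windows of increasing size and fit the slope of log-std versus log-scale. Everything below (names, scales, series length) is our own illustration, not the paper's method:

```python
import math, random

def scaling_exponent(series, scales=(2, 4, 8, 16)):
    """Crude scaling-exponent estimate: slope of log std(window sum) vs
    log window size, using non-overlapping windows. Illustrative only."""
    xs, ys = [], []
    for s in scales:
        sums = [sum(series[i:i + s]) for i in range(0, len(series) - s + 1, s)]
        m = sum(sums) / len(sums)
        var = sum((v - m) ** 2 for v in sums) / len(sums)
        xs.append(math.log(s))
        ys.append(0.5 * math.log(var))       # log of the standard deviation
    xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
           sum((x - xbar) ** 2 for x in xs)

rng = random.Random(0)
white = [rng.gauss(0, 1) for _ in range(1000)]   # uncorrelated noise: H ≈ 0.5
print(round(scaling_exponent(white), 2))
```

For uncorrelated noise the window sums grow like the square root of the window size, so the estimate should come out near 0.5; the large scatter of such naive estimators on short series is exactly the problem the paper's balanced estimator addresses.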
Test of quantum thermalization in the two-dimensional transverse-field Ising model
Blaß, Benjamin; Rieger, Heiko
2016-01-01
We study the quantum relaxation of the two-dimensional transverse-field Ising model after global quenches with a real-time variational Monte Carlo method and address the question whether this non-integrable, two-dimensional system thermalizes or not. We consider both interaction quenches in the paramagnetic phase and field quenches in the ferromagnetic phase and compare the time-averaged probability distributions of non-conserved quantities like magnetization and correlation functions to the thermal distributions according to the canonical Gibbs ensemble obtained with quantum Monte Carlo simulations at temperatures defined by the excess energy in the system. We find that the occurrence of thermalization crucially depends on the quench parameters: While after the interaction quenches in the paramagnetic phase thermalization can be observed, our results for the field quenches in the ferromagnetic phase show clear deviations from the thermal system. These deviations increase with the quench strength and become especially clear comparing the shape of the thermal and the time-averaged distributions, the latter ones indicating that the system does not completely lose the memory of its initial state even for strong quenches. We discuss our results with respect to a recently formulated theorem on generalized thermalization in quantum systems. PMID:27905523
Direct numerical simulation of turbulent H2-O2 combustion using reduced chemistry
NASA Technical Reports Server (NTRS)
Montgomery, Christopher J.; Kosaly, George; Riley, James J.
1993-01-01
Results of direct numerical simulations of hydrogen-oxygen combustion using a partial-equilibrium chemistry scheme in constant-density, decaying, isotropic turbulence are reported. The simulations qualitatively reproduce many features of experimental results, such as superequilibrium radical species mole fractions, with temperature and major species mole fractions closer to chemical equilibrium. It was also observed that the peak reaction rates occur in narrow zones where the stoichiometric surface intersects regions of high scalar dissipation, as might be expected for combustion conditions close to chemical equilibrium. Another finding was that high OH mole fractions correspond more closely to the stoichiometric surface than to areas of high reaction rate for the conditions of the simulations. Simulation results were compared to predictions of the Conditional Moment Closure model. This model was found to give good results for all quantities of interest when the conditionally averaged scalar dissipation was used in the prediction. When the nonconditioned average dissipation was used, the predictions compared well to the simulations for most of the species and temperature, but not for the reaction rate. The comparison would be expected to improve for higher Reynolds number flows, however.
On the Statistical Properties of the Lower Main Sequence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angelou, George C.; Bellinger, Earl P.; Hekker, Saskia
Astronomy is in an era where all-sky surveys are mapping the Galaxy. The plethora of photometric, spectroscopic, asteroseismic, and astrometric data allows us to characterize its constituent stars in detail. Here we quantify to what extent precise stellar observations reveal information about the properties of a star, including properties that are unobserved, or even unobservable. We analyze the diagnostic potential of classical and asteroseismic observations for inferring stellar parameters such as age, mass, and radius from evolutionary tracks of solar-like oscillators on the lower main sequence. We perform rank correlation tests in order to determine the capacity of each observable quantity to probe structural components of stars and infer their evolutionary histories. We also analyze the principal components of classical and asteroseismic observables to highlight the degree of redundancy present in the measured quantities and demonstrate the extent to which information about the model parameters can be extracted. We perform multiple regression using combinations of observable quantities in a grid of evolutionary simulations and appraise the predictive utility of each combination in determining the properties of stars. We identify the combinations that are useful and provide limits to where each type of observable quantity can reveal information about a star. We investigate the accuracy with which targets in the upcoming TESS and PLATO missions can be characterized. We demonstrate that the combination of observations from Gaia and PLATO will allow us to tightly constrain stellar masses, ages, and radii with machine learning for the purposes of Galactic and planetary studies.
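The rank correlation tests mentioned above measure how monotonically an observable tracks a stellar parameter, without assuming linearity. A self-contained Spearman coefficient (the no-ties shortcut formula) is enough to illustrate; the "age" and diagnostic values below are invented for the example, not from the paper's grid:

```python
def spearman(x, y):
    """Spearman rank correlation for tie-free data, via the shortcut
    rho = 1 - 6*sum(d^2)/(n*(n^2-1)), where d are rank differences."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical: stellar age vs. a monotonically increasing diagnostic quantity
age = [1.0, 2.5, 4.0, 6.5, 9.0]
diag = [0.2, 0.3, 0.5, 0.8, 1.3]
print(spearman(age, diag))  # perfectly monotonic → 1.0
```

A coefficient near ±1 identifies an observable with strong diagnostic capacity for that parameter; values near 0 flag observables that are redundant or uninformative, which is what the paper's principal-component analysis then quantifies across the full observable set.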
Simulation of a Synthetic Jet in Quiescent Air Using TLNS3D Flow Code
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Turkel, Eli
2007-01-01
Although the actuator geometry is highly three-dimensional, the outer flowfield is nominally two-dimensional because of the high aspect ratio of the rectangular slot. For the present study, this configuration is modeled as a two-dimensional problem. A multi-block structured grid available at the CFDVAL2004 website is used as the baseline grid. The periodic motion of the diaphragm is simulated by specifying a sinusoidal velocity at the diaphragm surface with a frequency of 450 Hz, corresponding to the experimental setup. The amplitude is chosen so that the maximum Mach number at the jet exit is approximately 0.1, to replicate the experimental conditions. At the solid walls, zero-slip, zero-injection, adiabatic-temperature, and zero-pressure-gradient conditions are imposed. In the external region, symmetry conditions are imposed on the side (vertical) boundaries and far-field conditions on the top boundary. A nominal free-stream Mach number of 0.001 is imposed in the free stream to simulate incompressible flow conditions in the TLNS3D code, which solves the compressible flow equations. The code was run in unsteady (URANS) mode until periodicity was established. The time-mean quantities were then obtained by running the code for at least another 15 periods and averaging the flow quantities over these periods. The phase-locked average of the flow quantities was assumed to coincide with their values during the last full time period.
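The two averages described above — the time mean over many periods and the phase-locked average — can be sketched for any periodically forced signal. The code below is our own illustration (the signal is a clean sinusoid standing in for a sampled flow quantity; the 450 Hz frequency echoes the actuation frequency in the text):

```python
import math

def phase_locked_average(t, u, period, nbins=8):
    """Bin samples u(t) by phase (t mod period) and average within each bin
    (phase-locked average); also return the overall time mean."""
    sums, counts = [0.0] * nbins, [0] * nbins
    for ti, ui in zip(t, u):
        b = int((ti % period) / period * nbins) % nbins
        sums[b] += ui
        counts[b] += 1
    phase_avg = [s / c for s, c in zip(sums, counts)]
    return phase_avg, sum(u) / len(u)

# Hypothetical periodic signal sampled over 15 full periods
f = 450.0
period = 1.0 / f
t = [i * (period / 80) for i in range(80 * 15)]          # 80 samples/period
u = [2.0 + math.sin(2 * math.pi * f * ti) for ti in t]   # mean 2, amplitude 1
phase_avg, mean = phase_locked_average(t, u, period)
print(round(mean, 2))  # time mean over whole periods → 2.0
```

Averaging over an integer number of periods removes the oscillation and leaves the mean, while the phase-locked average recovers the shape of the cycle; for a strictly periodic URANS solution the phase-locked average is simply the last converged period, as the abstract assumes.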
Magnetospheric-ionospheric Poynting flux
NASA Technical Reports Server (NTRS)
Thayer, Jeffrey P.
1994-01-01
Over the past three years of funding, SRI, in collaboration with the University of Texas at Dallas, has been involved in determining the total electromagnetic energy flux into the upper atmosphere from DE-B electric and magnetic field measurements and in modeling the electromagnetic energy flux at high latitudes, taking into account the coupled magnetosphere-ionosphere system. This effort has been very successful in establishing the DC Poynting flux as a fundamental quantity for describing the coupling of electromagnetic energy between the magnetosphere and ionosphere. The DE-B satellite electric and magnetic field measurements were carefully scrutinized to provide, for the first time, a large data set of DC, field-aligned Poynting flux measurements. Investigations describing the field-aligned Poynting flux observations from DE-B orbits under specific geomagnetic conditions and from many orbits were conducted to provide a statistical average of the Poynting flux distribution over the polar cap. The theoretical modeling effort has provided insight into the observations by formulating the connection between Poynting's theorem and the electromagnetic energy conversion processes that occur in the ionosphere. Modeling and evaluation of these processes has helped interpret the satellite observations of the DC Poynting flux and improved our understanding of the coupling between the ionosphere and magnetosphere.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malgin, A. S., E-mail: malgin@lngs.infn.it
Characteristics of cosmogenic neutrons, such as the yield, production rate, and flux, were determined for a standard rock. The dependences of these quantities on the standard-rock depth and on the average muon energy were obtained. These properties and dependences make it possible to easily estimate the muon-induced neutron background in underground laboratories for various chemical compositions of rock.
40 CFR 63.498 - Back-end process provisions-recordkeeping.
Code of Federal Regulations, 2010 CFR
2010-07-01
... be the crumb rubber dry weight of the rubber leaving the stripper. (iv) The organic HAP content of... stripper. (B) For solution processes, this quantity shall be the crumb rubber dry weight of the crumb rubber leaving the stripper. (iii) The hourly average of all stripper parameter results; (iv) If one or...
Vaganan, M Mayil; Sarumathi, S; Nandakumar, A; Ravi, I; Mustaffa, M M
2015-02-01
Four protocols, viz. trichloroacetic acid-acetone (TCA), phenol-ammonium acetate (PAA), phenol/SDS-ammonium acetate (PSA) and trisbase-acetone (TBA), were evaluated with modifications for protein extraction from banana (Grand Naine) roots, considered recalcitrant tissues for proteomic analysis. Proteins separated by two-dimensional electrophoresis (2-DE) were compared based on protein yield, number of resolved proteins, sum of spot quantity, average spot intensity and proteins resolved in the 4-7 pI range. The PAA protocol yielded more protein (0.89 mg/g of tissue) and more protein spots (584) in 2-DE gels than the TCA and other protocols. The PAA protocol was also superior in terms of sum of total spot quantity and average spot intensity, suggesting that phenol as extractant and ammonium acetate as precipitant of proteins were the most suitable for banana root proteomic analysis by 2-DE. In addition, a 1:3 ratio of root tissue to extraction buffer and overnight protein precipitation were the most efficient for obtaining maximum protein yield.
Zhang, Heng; Yang, Sheng-Long; Meng, Hai-Xing
2012-06-01
Based on four surveys of eggs and larvae in the Yangtze estuary in 2005 (April and November) and 2006 (April and September), combined with historical data for the wetland from 1990 (September) and 1991 (March), we analyzed seasonal changes in fish species composition and quantity of ichthyoplankton. Thirty-six species of eggs and larvae were collected, and marine fish species were the most highly represented ecological guild. Average fish species and average abundance in spring were lower than in autumn for every survey. The total number of eggs in brackish water was higher than in fresh water, but the total number of larvae and juveniles in brackish water was lower. The abundance of eggs and larvae from 2005 to 2006 in both spring and autumn was higher than from 1990 to 1991. Obvious differences in species composition in September between 1990 and 2006 were found, especially for Erythroculter ilishaeformis and Neosalanx taihuensis. Fish species composition and quantity within the ichthyoplankton community have changed markedly in the Yangtze estuary over the last 20 years.
Economy-wide material input/output and dematerialization analysis of Jilin Province (China).
Li, MingSheng; Zhang, HuiMin; Li, Zhi; Tong, LianJun
2010-06-01
In this paper, both direct material input (DMI) and domestic processed output (DPO) of Jilin Province in 1990-2006 were calculated, and a dematerialization model was then established based on these two indexes. The main results are summarized as follows: (1) both direct material input and domestic processed output increased at a steady rate during 1990-2006, with average annual growth rates of 4.19% and 2.77%, respectively. (2) The average contribution rate of material input to economic growth is 44%, indicating that the economic growth is visibly extensive. (3) During the studied period, the cumulative quantity of material input dematerialization is 11,543 x 10(4) t and the quantity of waste dematerialization is 5,987 x 10(4) t. Moreover, the dematerialization gaps are positive, suggesting that the potential of dematerialization has been well fulfilled. (4) In most years of the analyzed period, especially 2003-2006, the economic system of Jilin Province represents an unsustainable state. The accelerated economic growth has relied mostly on excessive resource consumption since the Revitalization Strategy of Northeast China was launched.
Potentiometric sensors for the selective determination of sulbutiamine.
Ahmed, M A; Elbeshlawy, M M
1999-11-01
Five novel polyvinyl chloride (PVC) matrix membrane sensors for the selective determination of the sulbutiamine (SBA) cation are described. These sensors are based on molybdate, tetraphenylborate, reineckate, phosphotungstate and phosphomolybdate as possible ion-pairing agents. The sensors display rapid, near-Nernstian, stable responses over a relatively wide sulbutiamine concentration range (1x10(-2)-1x10(-6) M), with calibration slopes of 28-32.6 mV decade(-1) over a reasonable pH range (2-6). The proposed sensors proved to have good selectivity for SBA over some inorganic and organic cations. The five potentiometric sensors were applied successfully to the determination of SBA in a pharmaceutical preparation (arcalion-200) using both direct potentiometry and potentiometric titration. Direct potentiometric determination of microgram quantities of SBA gave average recoveries of 99.4% and 99.3%, with mean standard deviations of 0.7 and 0.3, for pure SBA and the arcalion-200 formulation, respectively. Potentiometric titration of milligram quantities of SBA gave average recoveries of 99.3% and 98.7%, with mean standard deviations of 0.7 and 1.2, for pure SBA and the arcalion-200 formulation, respectively.
Analogy between the Navier-Stokes equations and Maxwell's equations: Application to turbulence
NASA Astrophysics Data System (ADS)
Marmanis, Haralambos
1998-06-01
A new theory of turbulence is initiated, based on the analogy between electromagnetism and turbulent hydrodynamics, for the purpose of describing the dynamical behavior of averaged flow quantities in incompressible fluid flows of high Reynolds numbers. The starting point is the recognition that the vorticity (w=∇×u) and the Lamb vector (l=w×u) should be taken as the kernel of a dynamical theory of turbulence. The governing equations for these fields can be obtained from the Navier-Stokes equations, which underlie the whole evolution. All terms that are not explicitly expressible as functions of w or l alone are then gathered and treated as source terms, by introducing the concepts of turbulent charge and turbulent current. Thus we are led to a closed set of linear equations for the averaged field quantities. The premise is that the sources so introduced will be amenable to modeling, in the sense that their distribution will depend only on the geometry and the total energetics of the flow. The dynamics described in this manner is what we call metafluid dynamics.
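A sketch of the analogy described above follows from standard Lamb-form manipulations of the incompressible Navier-Stokes equations; the particular definition of the turbulent charge shown in the last line is our assumption for illustration, not necessarily the paper's convention:

```latex
\begin{aligned}
&\mathbf{w} = \nabla\times\mathbf{u}, \qquad
 \mathbf{l} = \mathbf{w}\times\mathbf{u}, \qquad
 \phi = \tfrac{p}{\rho} + \tfrac{1}{2}\,|\mathbf{u}|^{2}
 && \text{(definitions)} \\
&\partial_t \mathbf{u} + \mathbf{l} = -\nabla\phi + \nu\nabla^{2}\mathbf{u}
 && \text{(Navier--Stokes in Lamb form)} \\
&\partial_t \mathbf{w} + \nabla\times\mathbf{l} = \nu\nabla^{2}\mathbf{w},
 \qquad \nabla\cdot\mathbf{w} = 0
 && \text{(curl: a Faraday-like induction law, } \mathbf{l}\leftrightarrow\mathbf{E},\ \mathbf{w}\leftrightarrow\mathbf{B}\text{)} \\
&\nabla\cdot\mathbf{l} = -\nabla^{2}\phi \equiv n
 && \text{(divergence: a Gauss-like law; } n \text{ acts as a turbulent charge)}
\end{aligned}
```

Averaging these linear-looking equations and modeling $n$ (and the associated turbulent current) as geometry-dependent sources is what yields the closed set referred to in the abstract.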
[Green space vegetation quantity in workshop area of Wuhan Iron and Steel Company].
Chen, Fang; Zhou, Zhixiang; Wang, Pengcheng; Li, Haifang; Zhong, Yingfei
2006-04-01
Aimed at the complex community structure and high fragmentation of urban green space, and based on an investigation of synusia structure and its coverage, this paper studied the vegetation quantity of ornamental green space in the workshop area of Wuhan Iron and Steel Company with the help of GIS. The results showed that different life forms of ornamental plants in this area differed greatly in single leaf area and leaf area index (LAI), and that the LAI depended not only on single leaf area but also on the shape of the tree crown and the density of branches and leaves. The total vegetation quantity was 1 694.2 hm2, with an average LAI of 7.75, and the vegetation quantity of arbor-shrub-herb and arbor-shrub communities accounted for 79.7% and 92.3% of the total, respectively, reflecting that the green space structure was dominated by arbor species and by arbor-shrub-herb and arbor-shrub community types. Single-layer-structured lawn accounted for a smaller percentage, while the vegetation quantity of the herb synusia accounted for 22.9% of the total, suggesting an afforestation characteristic of "making use of every bit of space" in the workshop area. The vegetation quantity of urban ornamental green space depended on the area of green space, its synusia structure, and the LAI and coverage of ornamental plants. In enlarging urban green space, ornamental plant species with high LAI should be selected, and community structure should be improved, to achieve a higher vegetation quantity in urban areas. To quantify the vegetation quantity of urban ornamental green space more accurately, the synusia should be taken as the unit for measuring the LAI of typical species, and the synusia structure and coverage of different community types should be investigated with the help of remote sensing images and GIS.
Wardell, Jeffrey D.; Read, Jennifer P.
2012-01-01
Social learning mechanisms, such as descriptive norms for drinking behavior (norms) and positive alcohol expectancies (PAEs), play a major role in college student alcohol use. According to the principle of reciprocal determinism (Bandura, 1977), norms and PAEs should be reciprocally associated with alcohol use, each influencing one another over time. However, the nature of these prospective relationships for college students is in need of further investigation. This study provided the first examination of the unique reciprocal associations among norms, PAEs, and drinking together in a single model. PAEs become more stable with age, whereas norms are likely to be more dynamic upon college entry. Thus, we hypothesized that alcohol use would show stronger reciprocal associations with norms than with PAEs for college students. Students (N=557; 67% female) completed online measures of PAEs, norms and quantity and frequency of alcohol use in September of their first (T1), second (T2), and third (T3) years of college. Reciprocal associations were analyzed using a cross-lagged panel design. PAEs had unidirectional influences on frequency and quantity of alcohol use, with no prospective effects from alcohol use to PAEs. Reciprocal associations were observed between norms and alcohol use, but only for quantity and not frequency. Specifically, drinking quantity prospectively predicted quantity norms and quantity norms prospectively predicted drinking quantity. This effect was observed across both years in the model. These findings support the reciprocal determinism hypothesis for norms but not for PAEs in college students, and may help to inform norm-based interventions. PMID:23088403
Wardell, Jeffrey D; Read, Jennifer P
2013-03-01
Social learning mechanisms, such as descriptive norms for drinking behavior (norms) and positive alcohol expectancies (PAEs), play a major role in college student alcohol use. According to the principle of reciprocal determinism (Bandura, 1977), norms and PAEs should be reciprocally associated with alcohol use, each influencing one another over time. However, the nature of these prospective relationships for college students is in need of further investigation. This study provided the first examination of the unique reciprocal associations among norms, PAEs, and drinking together in a single model. PAEs become more stable with age, whereas norms are likely to be more dynamic upon college entry. Thus, we hypothesized that alcohol use would show stronger reciprocal associations with norms than with PAEs for college students. Students (N = 557; 67% women) completed online measures of PAEs, norms, and quantity and frequency of alcohol use in September of their first (T1), second (T2), and third (T3) years of college. Reciprocal associations were analyzed using a cross-lagged panel design. PAEs had unidirectional influences on frequency and quantity of alcohol use, with no prospective effects from alcohol use to PAEs. Reciprocal associations were observed between norms and alcohol use, but only for quantity and not for frequency. Specifically, drinking quantity prospectively predicted quantity norms and quantity norms prospectively predicted drinking quantity. This effect was observed across both years in the model. These findings support the reciprocal determinism hypothesis for norms but not for PAEs in college students and may help to inform norm-based interventions.
Zhang, Xin; Wu, Qunhong; Liu, Guoxiang; Li, Ye; Gao, Lijun; Guo, Bin; Fu, Wenqi; Hao, Yanhua; Cui, Yu; Huang, Weidong; Coyte, Peter C
2014-01-01
Objectives: The government of China has introduced a National Essential Medicines Policy (NEMP) in the new round of health system reform. The objective of this paper is to analyse whether the NEMP can play a role in curbing the rise of medical expenditures without disrupting the availability of healthcare services at township hospitals in China.
Design: This study adopted a pre-post treatment-control study design. A difference-in-differences method and a fixed-effects model for panel data were employed to estimate the effect of the NEMP.
Setting: Chongqing, Jiangsu and Henan Province, China, in 2009 and 2010.
Participants: 296 township health centres.
Outcome measures: Outcomes for health expenditures were average outpatient drug expenses per visit, average inpatient drug expenses per discharged patient, average outpatient expenses per visit and average inpatient expenses per discharged patient. Outcomes for care delivery were the number of visits per certified doctor per day and the number of hospitalised patients per certified doctor per day.
Results: The township health centres enrolled in the NEMP reported 26% (p<0.01) lower drug expenditures for inpatient care. An 11% (p<0.05) decrease in average inpatient expenditures per discharged patient was found following the implementation of the NEMP. The impacts of the NEMP on average outpatient expenditures and outpatient drug expenditures were not statistically significant at the 5% level. No statistically significant associations were found between the NEMP and a reduction in the quantity of health service delivery.
Conclusions: The NEMP had a significant effect in reducing inpatient medication and health service expenditures. This study shows no evidence that the quantity of healthcare service declined significantly after introduction of the NEMP over the study period, which suggests that if appropriate matching policies are introduced, the side effects of the NEMP can be counteracted to some degree.
Further research including a long-term follow-up study is needed. PMID:25534214
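The difference-in-differences estimator used in the study above compares the pre-to-post change in treated units with the contemporaneous change in controls. A minimal sketch with hypothetical expense figures (the numbers below are illustrative, not the study's data):

```python
import statistics

def did_effect(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: the change in the treated group's mean
    minus the change in the control group's mean over the same period."""
    treated_change = statistics.mean(treat_post) - statistics.mean(treat_pre)
    control_change = statistics.mean(ctrl_post) - statistics.mean(ctrl_pre)
    return treated_change - control_change

# Hypothetical inpatient drug expenses per discharged patient for three
# NEMP centres and three non-NEMP centres, before (2009) and after (2010).
treat_2009, treat_2010 = [100, 110, 105], [80, 85, 90]
ctrl_2009, ctrl_2010 = [100, 105, 110], [102, 108, 112]

# A negative estimate indicates the policy lowered expenses relative to trend.
effect = did_effect(treat_2009, treat_2010, ctrl_2009, ctrl_2010)
```

The published analysis additionally conditions on centre fixed effects; the two-by-two contrast above is only the core identification logic.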
Alcohol-related negative consequences among drinkers around the world.
Graham, Kathryn; Bernards, Sharon; Knibbe, Ronald; Kairouz, Sylvia; Kuntsche, Sandra; Wilsnack, Sharon C; Greenfield, Thomas K; Dietze, Paul; Obot, Isidore; Gmel, Gerhard
2011-08-01
This paper examines (i) gender and country differences in negative consequences related to drinking; (ii) relative rates of different consequences; and (iii) country-level predictors of consequences. Multi-level analyses used survey data from the Gender, Alcohol, and Culture: An International Study (GENACIS) collaboration. Measures included 17 negative consequences grouped into (i) high endorsement acute, (ii) personal and (iii) social. Country-level measures included average frequency and quantity of drinking, percentage who were current drinkers, gross domestic product (GDP) and Human Development Index (HDI). Overall, the three groupings of consequences were reported by 44%, 12% and 7% of men and by 31%, 6% and 3% of women, respectively. More men than women endorsed all consequences, but gender differences were greatest for consequences associated with chronic drinking and social consequences related to male roles. The highest prevalence of consequences was in Uganda and lowest in Uruguay. Personal and social consequences were more likely in countries with higher usual quantity, fewer current drinkers and lower scores on GDP and HDI. However, significant interactions with individual-level quantity indicated a stronger relationship between consequences and usual quantity among drinkers in countries with lower quantity, more current drinkers and higher scores on GDP and HDI. Both gender and country need to be taken into consideration when assessing adverse drinking consequences. Individual measures of alcohol consumption and country-level variables are associated with experiencing such consequences. Additionally, country-level variables affect the strength of the relationship between usual quantity consumed by individuals and adverse consequences.
Barrientos, Zaidett
2012-09-01
Little is known about how restoration strategies affect aspects such as leaf litter quantity, depth and humidity. I analyzed yearly patterns of leaf litter quantity, depth and humidity in a primary tropical lower montane wet forest and in two restored areas: a 15-year-old secondary forest (unassisted restoration) and a 40-year-old Cupressus lusitanica plantation (natural understory). The three habitats are located in the Rio Macho Forest Reserve, Costa Rica. Twenty litter samples were taken every three months (April 2009-April 2010) in each habitat; humidity was measured in samples averaging 439 g, and depth and quantity were measured at five points inside 50x50 cm plots. Neither restoration strategy reproduced the primary forest's yearly patterns of leaf litter humidity, depth and quantity. Primary forest leaf litter humidity was higher and more stable (mean=73.2), followed by secondary forest (mean=63.3) and cypress plantation (mean=52.9) (Kruskall-Wallis=77.93, n=232, p=0.00). In the primary (Kruskal-Wallis=31.63, n=78, p<0.001) and secondary (Kruskal-Wallis=11.79, n=75, p=0.008) forests, litter accumulation was higher during April due to strong winds. In the primary forest (Kruskal-Wallis=21.83, n=78, p<0.001) and the cypress plantation (Kruskal-Wallis=39.99, n=80, p<0.001), leaf litter depth was shallow in October because heavy rains compacted it. Depth patterns were different from quantity patterns and described the leaf litter's structure in the different ecosystems through the year.
Solar Cycle Variation and Application to the Space Radiation Environment
NASA Technical Reports Server (NTRS)
Wilson, John W.; Kim, Myung-Hee Y.; Shinn, Judy L.; Tai, Hsiang; Cucinotta, Francis A.; Badhwar, Gautam D.; Badavi, Francis F.; Atwell, William
1999-01-01
The interplanetary plasma and fields are affected by the degree of disturbance that is related to the number and types of sunspots on the solar surface. Sunspot observations were improved with the introduction of the telescope in the seventeenth century, allowing observations that cover many centuries. A single quantity (sunspot number) was defined by Wolf in 1848 that is now known to be well correlated with many space observable quantities and is used herein to represent variations caused in the space radiation environment. The resultant environmental models are intended for future aircraft and space-travel-related exposure estimates.
Modeling the heliolatitudinal gradient of the solar wind parameters with exact MHD solutions
NASA Technical Reports Server (NTRS)
Lima, J. J. G.; Tsinganos, K.
1995-01-01
The heliolatitudinal dependence of observations of solar wind macroscopic quantities, such as the averaged proton speed, density, and the mass and momentum flux, is modeled. The published observations, covering the last two and a half solar cycles, are obtained either via the technique of interplanetary scintillations for the last two solar cycles (1970-1990) or from the plasma experiment aboard the ULYSSES spacecraft for the recent period 1990-1994. Exact, two-dimensional solutions of the full set of steady MHD equations are used, obtained through a nonlinear separation of the variables in the MHD equations. The three parameters emerging from the solutions are fixed from these observations, as well as from observations of the solar rotation. It is found that near solar maximum the solar wind speed is uniformly low, around 400 km/s over a wide range of latitudes. On the other hand, during solar minimum and the declining phase of the solar activity cycle, there is a strong heliolatitudinal gradient in proton speed, from about 400 to 800 km/s from equator to pole. This modeling also agrees with previous findings that the gradient in wind speed with latitude is offset by a gradient in density, such that the mass and momentum flux vary relatively little.
An Ex Vivo Comparison of 2 Cyanoacrylate Skin Protectants.
Gibson, Daniel J
The purpose of these experiments was to compare 2 commercially available skin protectants with different chemical compositions. Two materially different skin protectants were applied to ex vivo pig skin, subjected to stresses, and the resulting skin was observed and analyzed. Using ex vivo pig skin, we sought to better understand the physical differences between a cyanoacrylate-based and a mixed cyanoacrylate/acrylic polymer-based skin protectant. A combination of imaging techniques and microscopic analyses was used to observe and quantify differences in layer thickness and the degree of steadfastness of the layers to liquid stresses. The experiments revealed that the solely cyanoacrylate-based protectant created a layer that was, on average, 5.1 times thicker than the mixed polymer product (p= 1.8 × 10). Observation via electron microscopy also revealed that the extent of coverage varied between the 2 products. In a final experiment, we observed that the mixed polymer product maintained a high degree of adhesiveness, which led to the removal of sheets of epithelium upon gentle blotting. The experiments revealed that while the 2 skin protectants share a common ingredient, both the quantity of that ingredient and the inclusion of other materials in one of them lead to substantially different properties when tested in the research setting.
Observational determination of albedo decrease caused by vanishing Arctic sea ice
Pistone, Kristina; Eisenman, Ian; Ramanathan, V.
2014-01-01
The decline of Arctic sea ice has been documented in over 30 y of satellite passive microwave observations. The resulting darkening of the Arctic and its amplification of global warming was hypothesized almost 50 y ago but has yet to be verified with direct observations. This study uses satellite radiation budget measurements along with satellite microwave sea ice data to document the Arctic-wide decrease in planetary albedo and its amplifying effect on the warming. The analysis reveals a striking relationship between planetary albedo and sea ice cover, quantities inferred from two independent satellite instruments. We find that the Arctic planetary albedo has decreased from 0.52 to 0.48 between 1979 and 2011, corresponding to an additional 6.4 ± 0.9 W/m2 of solar energy input into the Arctic Ocean region since 1979. Averaged over the globe, this albedo decrease corresponds to a forcing that is 25% as large as that due to the change in CO2 during this period, considerably larger than expectations from models and other less direct recent estimates. Changes in cloudiness appear to play a negligible role in observed Arctic darkening, thus reducing the possibility of Arctic cloud albedo feedbacks mitigating future Arctic warming. PMID:24550469
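The forcing comparison in the abstract above can be checked with a rough back-of-envelope sketch. Apart from the 0.52 to 0.48 albedo change, every input below (regional insolation, region area, CO2 concentrations, and the simplified CO2 forcing expression) is an illustrative assumption, not a value taken from the paper:

```python
import math

delta_albedo = 0.52 - 0.48       # observed Arctic planetary albedo change, 1979-2011
arctic_insolation = 180.0        # W/m^2, assumed annual-mean TOA insolation over the region
arctic_area = 1.0e13             # m^2, assumed area of the study region
earth_area = 5.1e14              # m^2, surface area of the Earth

# Extra solar energy absorbed per unit area of the region...
regional_forcing = delta_albedo * arctic_insolation
# ...and the same energy spread over the whole globe.
global_forcing = regional_forcing * arctic_area / earth_area

# CO2 forcing growth over roughly the same period, via the common
# simplified expression dF = 5.35 * ln(C/C0), with assumed concentrations.
co2_forcing = 5.35 * math.log(392.0 / 337.0)

ratio = global_forcing / co2_forcing  # paper reports ~25%; this crude sketch lands nearby
```

The point of the sketch is only that a few-percent regional albedo change, once globally averaged, is the same order as the CO2 forcing increment, consistent with the 25% figure reported.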
Change of spatial information under rescaling: A case study using multi-resolution image series
NASA Astrophysics Data System (ADS)
Chen, Weirong; Henebry, Geoffrey M.
Spatial structure in imagery depends on a complicated interaction between the observational regime and the types and arrangements of entities within the scene that the image portrays. Although block averaging of pixels has commonly been used to simulate coarser resolution imagery, relatively little attention has been paid to the effects of simple rescaling on spatial structure, to explaining those effects, or to possible remedies. Yet, if there are significant differences in spatial variance between rescaled and observed images, the reliability of retrieved biogeophysical quantities may be affected. To investigate these issues, a nested series of high spatial resolution digital imagery was collected at a research site in eastern Nebraska in 2001. An airborne Kodak DCS420IR camera acquired imagery at three altitudes, yielding nominal spatial resolutions ranging from 0.187 m to 1 m. The red and near infrared (NIR) bands of the co-registered image series were normalized using pseudo-invariant features, and the normalized difference vegetation index (NDVI) was calculated. Plots of grain sorghum planted in orthogonal crop row orientations were extracted from the image series. The finest spatial resolution data were then rescaled by averaging blocks of pixels to produce a rescaled image series that closely matched the spatial resolution of the observed image series. Spatial structures of the observed and rescaled image series were characterized using semivariogram analysis. Results for NDVI and its component bands show, as expected, that decreasing spatial resolution leads to decreasing spatial variability and increasing spatial dependence. However, compared to the observed data, the rescaled images contain more persistent spatial structure that exhibits limited variation in both spatial dependence and spatial heterogeneity. Rescaling via simple block averaging fails to consider the effect of scene object shape and extent on spatial information.
As the features portrayed by pixels are equally weighted regardless of the shape and extent of the underlying scene objects, the rescaled image retains more of the original spatial information than would occur through direct observation at a coarser sensor spatial resolution. In contrast, for the observed images, due to the effect of the modulation transfer function (MTF) of the imaging system, high frequency features like edges are blurred or lost as the pixel size increases, resulting in greater variation in spatial structure. Successive applications of a low-pass spatial convolution filter are shown to mimic a MTF. Accordingly, it is recommended that such a procedure be applied prior to rescaling by simple block averaging, if insufficient image metadata exist to replicate the net MTF of the imaging system, as might be expected in land cover change analysis studies using historical imagery.
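The recommended procedure, low-pass filtering to mimic the net MTF followed by simple block averaging, can be sketched as follows. The 3x3 box kernel, the number of passes, and the synthetic data are illustrative assumptions; the paper does not prescribe a specific kernel:

```python
import numpy as np

def block_average(img, factor):
    """Rescale to a coarser resolution by averaging non-overlapping blocks."""
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor   # trim so dimensions divide evenly
    trimmed = img[:h2, :w2]
    return trimmed.reshape(h2 // factor, factor, w2 // factor, factor).mean(axis=(1, 3))

def box_filter(img, passes=3):
    """Successive 3x3 box (low-pass) filters: a crude stand-in for the
    net modulation transfer function of an imaging system."""
    out = img.astype(float)
    for _ in range(passes):
        padded = np.pad(out, 1, mode="edge")
        out = sum(
            padded[i:i + out.shape[0], j:j + out.shape[1]] / 9.0
            for i in range(3) for j in range(3)
        )
    return out

# Synthetic stand-in for a fine-resolution (e.g. 0.187 m) NDVI field,
# rescaled by a factor of ~5 to approximate 1 m pixels.
fine = np.random.default_rng(0).random((100, 100))
coarse = block_average(box_filter(fine), 5)   # filter first, then block-average
```

Filtering before block averaging blurs high-frequency features (edges) the way a real coarser sensor would, so the rescaled image no longer retains the excess spatial structure the authors observed with plain block averaging.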
Wang, Cong; Liu, Qiang; Guo, Gang; Huo, WenJie; Ma, Le; Zhang, YanLi; Pei, CaiXia; Zhang, ShuanLin; Wang, Hao
2016-12-01
The present experiment was undertaken to determine the effects of dietary addition of rumen-protected folic acid (RPFA) on ruminal fermentation, nutrient degradability, enzyme activity and the relative quantity of ruminal cellulolytic bacteria in growing beef steers. Eight rumen-cannulated Jinnan beef steers averaging 2.5 years of age and 419 ± 1.9 kg body weight were used in a replicated 4 × 4 Latin square design. The four treatments comprised supplementation levels of 0 (Control), 70, 140 and 210 mg RPFA/kg dietary dry matter (DM). On DM basis, the ration consisted of 50% corn silage, 47% concentrate and 3% soybean oil. The DM intake (averaged 8.5 kg/d) was restricted to 95% of ad libitum intake. The intake of DM, crude protein (CP) and net energy for growth was not affected by treatments. In contrast, increasing RPFA supplementation increased average daily gain and the concentration of total volatile fatty acid and reduced ruminal pH linearly. Furthermore, increasing RPFA supplementation enhanced the acetate to propionate ratio and reduced the ruminal ammonia N content linearly. The ruminal effective degradability of neutral detergent fibre from corn silage and CP from concentrate improved linearly and was highest for the highest supplementation levels. The activities of cellobiase, xylanase, pectinase and α-amylase linearly increased, but carboxymethyl-cellulase and protease were not affected by the addition of RPFA. The relative quantities of Butyrivibrio fibrisolvens, Ruminococcus albus, Ruminococcus flavefaciens and Fibrobacter succinogenes increased linearly. With increasing RPFA supplementation levels, the excretion of urinary purine derivatives was also increased linearly. The present results indicated that the supplementation of RPFA improved ruminal fermentation, nutrient degradability, activities of microbial enzymes and the relative quantity of the ruminal cellulolytic bacteria in a dose-dependent manner. 
According to the conditions of this experiment, the optimum supplementation level of RPFA was 140 mg/kg DM.
Avallone, Sylvie; Brault, Sophie; Mouquet, Claire; Treche, Serge
2007-03-01
The diet of 200 randomly selected 1-year-old to 5-year-old children was studied in a rural area of Burkina Faso. The mothers took part in a questionnaire survey and a 24-h dietary recall to index the type and the ingested quantities of the food consumed by the child the previous day. The average percentages of the Recommended Nutrient Intake met by the consumption of a dish component per meal did not exceed 25% for energy, iron, zinc and vitamin A. With respect to their initial composition and the quantities ingested, several dish components such as starchy-based products (millet-based-tô) or sauces (red sorrel leaves, dried okra) were good sources of micronutrients in the children's diets. Several dish components were selected and their preparation observed in six households to obtain precise details of the recipe. Several ingredients (42) and unit operations (nine) were used to prepare the local foods. Cooking in water (boiling), which was the main unit operation, did not exceed 43 min and the temperature used was under 100 degrees C. Several ingredients were subjected to two or three thermal treatments and the duration of cooking reached 56 min in groundnut sauce. The most at-risk unit operations likely to decrease the nutritional quality were cooking in water followed by draining or cooking for a long time.
Estimating the usage of allograft in the treatment of major burns.
Horner, C W M; Atkins, J; Simpson, L; Philp, B; Shelley, O; Dziewulski, P
2011-06-01
To assess the amount of allograft used in the past treatment of major burns and to calculate a figure to guide estimation of the quantity of allograft required for future patients and aid resource planning. A retrospective observational study. Records of 143 patients treated for major burns at a regional centre from January 2004 to November 2008 were accessed, with biometric data and quantity of allograft used being recorded. These data were used to calculate an allograft index (AI; cm² allograft used/burn surface area (cm²)) for each patient. 112 of the 143 patients had complete sets of data; of the 112, 89 patients survived the initial stay in hospital. For all data, the average AI was 1.077 ± 0.090. AI varied according to burn area, with burns < 40% requiring 0.490 cm² allograft/cm² burn and increasing in a logarithmic fashion (R²=0.995) for burn areas > 40%. The ability to estimate deceased donor skin requirements based on % body surface area affected is important in care planning for patients with major burns. Our findings of 0.5 cm² allograft/cm² burn for injuries less than 40% TBSA, increasing to 1.82 cm² allograft/cm² burn for injuries up to 80% TBSA, can be used for planning purposes by individual services and for burn disaster planning.
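The reported dose-response (an AI of roughly 0.5 below 40% TBSA, rising logarithmically to roughly 1.82 at 80% TBSA) can be turned into a rough planning sketch. The logarithmic coefficients below are fitted here from those two endpoints only, and the adult body surface area is an assumption; neither is the paper's published model:

```python
import math

# Fit AI = A*ln(TBSA) + C through the two reported endpoints (40%, 0.5)
# and (80%, 1.82). Illustrative reconstruction, not the study's fit.
A = (1.82 - 0.5) / math.log(80 / 40)
C = 0.5 - A * math.log(40)

def allograft_index(tbsa_percent):
    """cm^2 of allograft needed per cm^2 of burn, as a function of % TBSA."""
    if tbsa_percent <= 40:
        return 0.5
    return A * math.log(tbsa_percent) + C

def allograft_needed_cm2(tbsa_percent, body_surface_cm2=18000):
    """Estimated total allograft for a burn covering tbsa_percent of the
    body, assuming an adult body surface area of ~1.8 m^2."""
    burn_area = body_surface_cm2 * tbsa_percent / 100
    return allograft_index(tbsa_percent) * burn_area
```

For resource planning the useful property is that requirements grow faster than linearly with burn size: doubling the burn area from 40% to 80% TBSA more than triples the estimated allograft.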
Does quality of drinking water matter in kidney stone disease: A study in West Bengal, India.
Mitra, Pubali; Pal, Dilip Kumar; Das, Madhusudan
2018-05-01
The combined interaction of epidemiology, environmental exposure, dietary habits, and genetic factors causes kidney stone disease (KSD), a common public health problem worldwide. Because a high water intake (>3 L daily) is widely recommended by physicians to prevent KSD, the present study evaluated whether the quantity of water that people consume daily is associated with KSD and whether the quality of drinking water has any effect on disease prevalence. Information regarding residential address, daily volume of water consumption, and source of drinking water was collected from 1,266 patients with kidney stones in West Bengal, India. Drinking water was collected using standard methods from case (high stone prevalence) and control (zero stone prevalence) areas thrice yearly. Water samples were analyzed for pH, alkalinity, hardness, total dissolved solutes, electrical conductivity, and salinity. Average values of the studied parameters were compared to determine whether there were any statistically significant differences between the case and control areas. We observed that as many as 53.6% of the patients consumed <3 L of water daily. Analysis of drinking water samples from the case and control areas, however, did not show any statistically significant differences in the studied parameters. All water samples were found to be suitable for consumption. It is not the quality but rather the quantity of water consumed that matters most in the occurrence of KSD.
Habitual tea drinking associated with a lower risk of type 2 diabetes in Vietnamese adults.
Nguyen, Chung Thanh; Lee, Andy H; Pham, Ngoc Minh; Do, Vuong Van; Ngu, Nghia Duy; Tran, Binh Quang; Binns, Colin
2018-01-01
The association between tea consumption and type 2 diabetes risk remains inconsistent in Asian populations. This case-control study investigated the association between habitual tea consumption and the risk of type 2 diabetes among Vietnamese adults. A hospital-based case-control study was conducted during 2013-2015 in Vietnam. A total of 599 newly diagnosed diabetic cases (aged 40-65 years) and 599 hospital-based controls, frequency matched by age and sex, were recruited. Information about frequency, quantity, and duration of tea drinking, together with demographics, habitual diet and lifestyle characteristics, was obtained from direct interviews using a validated and reliable questionnaire. Unconditional logistic regression analyses were performed to assess the association between different metrics of tea consumption and the type 2 diabetes risk. Control subjects reported higher tea consumption levels than the cases in terms of duration, frequency, and quantity of tea drunk. After accounting for confounding factors, increasing tea consumption was found to be associated with a reduced risk of type 2 diabetes; the adjusted odds ratio (95% confidence interval) was 0.66 (0.49, 0.89) for participants drinking >2 cups/day, relative to those drinking <1 cup/day. Significant inverse dose-response relationships were also observed for average number of cups consumed daily and years of tea drinking (p<0.01). Habitual tea consumption is associated with a reduced risk of type 2 diabetes among Vietnamese adults.
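The odds-ratio machinery behind a case-control comparison like this one can be sketched briefly. This is a hedged illustration of the crude (unadjusted) Woolf log method; the 2×2 counts in the test are hypothetical, and the study itself reports adjusted odds ratios from multivariable logistic regression.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% CI (Woolf log method) from a 2x2 table:
    a/b = exposed/unexposed cases, c/d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

An odds ratio below 1 with a confidence interval excluding 1, as reported here (0.66; 0.49-0.89), indicates a statistically significant inverse association.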
Carbon Storage in US Wetlands. | Science Inventory | US EPA
Background/Question/Methods Wetland soils contain some of the highest stores of soil carbon in the biosphere. However, there is little understanding of the quantity and distribution of carbon stored in US wetlands or of the potential effects of human disturbance on these stocks. We provide unbiased estimates of soil carbon stocks for wetlands at regional and national scales and describe how soil carbon stocks vary by anthropogenic disturbance to the wetland. To estimate the quantity and distribution of carbon stocks in wetlands of the conterminous US, we used data gathered in the field as part of the 2011 National Wetland Condition Assessment (NWCA) conducted by USEPA. During the growing season, field crews collected soil samples by horizon from 120-cm deep soil pits at 967 randomly selected wetland sites. Soil samples were analyzed for bulk density and organic carbon. We applied site carbon stock averages by soil depth back to the national population of wetlands and to several subpopulations, including five geographic areas and anthropogenic disturbance level. Disturbance levels were categorized by the NWCA as least, intermediately, or most disturbed using a priori defined physical, chemical, and biological indicators that were observable at the time of the site visit. Results/Conclusions We find that wetlands in the conterminous US store a total of 11.52 PgC – roughly equivalent to four years of annual carbon emissions by the US, with the greatest soil ca
Zinc and copper mineralization of the Vazante area, Minas Gerais, Brazil
Moore, Samuel L.
1956-01-01
A large body of zinc and copper mineralization is exposed in a line of low hills about 5 kilometers east of the small village of Vazante in the northwestern part of the state of Minas Gerais, Brazil. The Vazante area can be reached by roads leading north from the State of Sao Paulo, via Araxa; west from Belo Horizonte, Minas Gerais; and south from Paracatu, Minas Gerais. The deposit is in branching, sub-parallel fault breccia zones. Calamine (H2Zn2SiO5) and willemite (Zn2SiO4), along with small quantities of smithsonite (ZnCO3), form the matrix of the fault breccia. The zinc mineralization is cut by narrow veins of chalcocite in platy crystal aggregates thought to be pseudomorphous after covellite. The chalcocite veins contain small quantities of sphalerite, galena, covellite, and calamine. Faults that contain breccia zones displace shale and dolomite. The sedimentary rocks are thought to be Silurian in age. The fault breccia zones have a regional trend of N 40 degrees E and crop out over a strike length of more than four kilometers. The mineralization of the fault zones was observed to continue to the north for an additional four kilometers. The mineralized fault breccia zones range from a few meters to 60 meters in width. A large ore body is indicated that, from available samples, may average 35 percent zinc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharif, M., E-mail: msharif.math@pu.edu.pk; Nawazish, I., E-mail: iqranawazish07@gmail.com
We attempt to find exact solutions of the Bianchi I model in f(R) gravity using the Noether symmetry approach. For this purpose, we take a perfect fluid and formulate conserved quantities for the power-law f(R) model. We discuss some cosmological parameters for the resulting solution which are responsible for expanding behavior of the universe. We also explore Noether gauge symmetry and the corresponding conserved quantity. It is concluded that symmetry generators as well as conserved quantities exist in all cases and the behavior of cosmological parameters shows consistency with recent observational data.
Sleep quantity, quality and optimism in children
Lemola, Sakari; Räikkönen, Katri; Scheier, Michael F.; Matthews, Karen A.; Pesonen, Anu-Katriina; Heinonen, Kati; Lahti, Jari; Komsi, Niina; Paavonen, E. Juulia; Kajantie, Eero
2014-01-01
We tested the relationship of objectively measured sleep quantity and quality with positive characteristics of the child. Sleep duration, sleep latency, and sleep efficiency were measured by actigraph for an average of seven (range = 3 to 14) consecutive nights in 291 eight-year-old children (SD = 0.3 years). Children's optimism, self-esteem, and social competence were rated by parents and/or teachers. Sleep duration showed a non-linear, reverse J-shaped relationship with optimism (P = 0.02), such that children with sleep duration in the middle of the distribution scored higher in optimism than children who slept relatively little. Shorter sleep latency was related to higher optimism (P = 0.01). The associations remained after adjusting for the child's age, sex, body mass index and parental level of education; the effects of sleep on optimism were also unchanged when the parents' own optimism was controlled. In conclusion, sufficient sleep quantity and good sleep quality are associated with positive characteristics of the child, further underlining their importance in promoting well-being in children. PMID:20561178
Migratory behavior of eastern North Pacific gray whales tracked using a hydrophone array
Helble, Tyler A.; D’Spain, Gerald L.; Weller, David W.; Wiggins, Sean M.; Hildebrand, John A.
2017-01-01
Eastern North Pacific gray whales make one of the longest annual migrations of any mammal, traveling from their summer feeding areas in the Bering and Chukchi Seas to their wintering areas in the lagoons of Baja California, Mexico. Although a significant body of knowledge on gray whale biology and behavior exists, little is known about their vocal behavior while migrating. In this study, we used a sparse hydrophone array deployed offshore of central California to investigate how gray whales behave and use sound while migrating. We detected, localized, and tracked whales for one full migration season, a first for gray whales. We verified and localized 10,644 gray whale M3 calls and grouped them into 280 tracks. Results confirm that gray whales are acoustically active while migrating and their swimming and acoustic behavior changes on daily and seasonal time scales. The seasonal timing of the calls verifies the gray whale migration timing determined using other methods such as counts conducted by visual observers. The total number of calls and the percentage of calls that were part of a track changed significantly over both seasonal and daily time scales. An average calling rate of 5.7 calls/whale/day was observed, which is significantly greater than previously reported migration calling rates. We measured a mean speed of 1.6 m/s and quantified heading, direction, and water depth where tracks were located. Mean speed and water depth remained constant between night and day, but these quantities had greater variation at night. Gray whales produce M3 calls with a root mean square source level of 156.9 dB re 1 μPa at 1 m. Quantities describing call characteristics were variable and dependent on site-specific propagation characteristics. PMID:29084266
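The quoted root-mean-square source level (156.9 dB re 1 µPa at 1 m) is the kind of quantity that is back-calculated from a received level plus a transmission-loss estimate. The sketch below is a hedged illustration assuming simple spherical spreading (TL = 20 log₁₀ r); the abstract itself notes that site-specific propagation characteristics matter, and the received level and range used in the example are made up.

```python
import math

def source_level_db(received_level_db, range_m):
    """RMS source level (dB re 1 uPa at 1 m) assuming spherical spreading:
    SL = RL + 20 * log10(r). Real studies use site-specific propagation."""
    transmission_loss = 20 * math.log10(range_m)
    return received_level_db + transmission_loss
```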
Comparison between two models of energy balance in coronal loops
NASA Astrophysics Data System (ADS)
Mac Cormack, C.; López Fuentes, M.; Vásquez, A. M.; Nuevo, F. A.; Frazin, R. A.; Landi, E.
2017-10-01
In this work we compare two models to analyze the energy balance along coronal magnetic loops. For the first, stationary model, we derive an expression for the energy balance along the loops in terms of quantities provided by combining differential emission measure tomography (DEMT), applied to EUV image time series, with potential extrapolations of the coronal magnetic field. The second model is a 0D hydrodynamic model that provides the evolution of the average properties of the coronal plasma along the loops, using as input parameters the loop length and the heating rate obtained with the first model. We compare the models for two Carrington rotations (CR) corresponding to different periods of activity: CR 2081, a period of minimum activity observed with the Extreme Ultraviolet Imager (EUVI) on board the Solar Terrestrial Relations Observatory (STEREO), and CR 2099, a period of increasing activity observed with the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO). The results of the models are consistent for both rotations.
Predicting ICME properties at 1AU
NASA Astrophysics Data System (ADS)
Lago, A.; Braga, C. R.; Mesquita, A. L.; De Mendonça, R. R. S.
2017-12-01
Coronal mass ejections (CMEs) are among the main origins of geomagnetic disturbances. They change the properties of the near-Earth interplanetary medium, enhancing key parameters such as the southward interplanetary magnetic field and the solar wind speed. Both quantities are known to be related to the energy transfer from the solar wind to the Earth's magnetosphere via the magnetic reconnection process. Many attempts have been made to predict the magnetic field and the solar wind speed from coronagraph observations. However, we still have much to learn about the dynamic evolution of ICMEs as they propagate through interplanetary space, and increased observation capability is probably needed. Among the several attempts to establish correlations between CME and ICME properties, it was found that the average CME propagation speed to 1 AU is highly correlated with the ICME peak speed (Dal Lago et al., 2004). In this work, we present an extended study of that correlation, which confirms the results of our previous study. Some suggestions on how to use these results for space weather estimates are explored.
Estimated dietary exposure to principal food mycotoxins from the first French Total Diet Study.
Leblanc, J-C; Tard, A; Volatier, J-L; Verger, P
2005-07-01
This study reports dietary exposure estimates from the first French Total Diet Study (FTDS) and compares them with both the existing tolerable daily intakes for these toxins and the intakes calculated in previous French studies. To estimate the dietary exposure of the French population to the principal mycotoxins in the French diet (as consumed), 456 composite samples were prepared from 2280 individual samples and analysed for aflatoxins, ochratoxin A, trichothecenes, zearalenone, fumonisins and patulin. Average and high-percentile intakes were calculated taking account of the different eating patterns of adults, children and vegetarians. The results showed that contaminant levels observed in the foods examined 'as consumed' complied fully with current European legislation. However, particular attention needs to be paid to the exposure of specific population groups, such as children and vegans/macrobiotics, who could be exposed to certain mycotoxins in quantities that exceed the tolerable daily or weekly intake levels. This observation is particularly relevant with respect to ochratoxin A, deoxynivalenol and zearalenone. For these mycotoxins, cereals and cereal products were the main contributors to high exposure.
Hoang, V; Delatolla, R; Abujamel, T; Mottawea, W; Gadbois, A; Laflamme, E; Stintzi, A
2014-02-01
This study aims to investigate moving bed biofilm reactor (MBBR) nitrification rates, nitrifying biofilm morphology, biomass viability and bacterial community shifts during long-term exposure to 1 °C. Long-term exposure to 1 °C is the key operational condition for potential ammonia removal upgrade units at numerous northern-region treatment systems. The average laboratory MBBR ammonia removal rate after long-term exposure to 1 °C was measured to be 18 ± 5.1% of the average removal rate at 20 °C. Biofilm morphology, and specifically biofilm thickness, along with biomass viability at various depths in the biofilm, were investigated using variable pressure scanning electron microscope (VPSEM) imaging and confocal laser scanning microscope (CLSM) imaging in combination with live/dead viability staining. The biofilm thickness and the number of viable cells showed significant increases after long-term exposure to 1 °C. Hence, this study observed nitrifying bacteria with higher activities at warm temperatures and a slightly greater quantity of nitrifying bacteria with lower activities at cold temperatures in nitrifying MBBR biofilms. Using DNA sequencing analysis, Nitrosomonas and Nitrosospira (ammonia oxidizers) as well as Nitrospira (nitrite oxidizer) were identified, and no population shift was observed between 20 °C and long-term exposure to 1 °C. Copyright © 2013 Elsevier Ltd. All rights reserved.
Jaiswal, Abhishek; Egami, Takeshi; Zhang, Yang
2015-04-01
The phase behavior of multi-component metallic liquids is exceedingly complex because of the convoluted many-body and many-elemental interactions. Herein, we present systematic studies of the dynamic aspects of such a model ternary metallic liquid, Cu40Zr51Al9, using molecular dynamics simulation with the embedded atom method. We observed a dynamical crossover from Arrhenius to super-Arrhenius behavior in the transport properties (diffusion coefficient, relaxation times, and shear viscosity) bordered at Tx ~ 1300 K. Unlike in many molecular and macromolecular liquids, this crossover phenomenon occurs in the equilibrium liquid state well above the melting temperature of the system (Tm ~ 900 K), and the crossover temperature is roughly twice the glass-transition temperature (Tg). Below Tx, we found the elemental dynamics decoupled and the Stokes-Einstein relation broke down, indicating the onset of heterogeneous, spatially correlated dynamics in the system mediated by dynamic communications among local configurational excitations. To directly characterize and visualize the correlated dynamics, we employed a non-parametric, unsupervised machine learning technique and identified dynamical clusters of atoms with similar atomic mobility. The revealed average dynamical cluster size shows an accelerated increase below Tx and mimics the trend observed in other ensemble-averaged quantities that are commonly used to quantify spatially heterogeneous dynamics, such as the non-Gaussian parameter and the four-point correlation function.
Tropospheric and lower stratospheric vertical profiles of ethane and acetylene
NASA Technical Reports Server (NTRS)
Cronn, D.; Robinson, E.
1979-01-01
The first known vertical distributions of ethane and acetylene extending into the lower stratosphere are reported. The average upper tropospheric concentrations, between 20,000 ft and 35,000 ft, near 37 deg N-123 deg W were 1.2 micrograms/cu m (1.0 ppb) for ethane and 0.24 micrograms/cu m (0.23 ppb) for acetylene, while the values near 9 deg N-80 deg W were 0.95 micrograms/cu m (0.77 ppb) and 0.09 micrograms/cu m (0.09 ppb), respectively. Detectable quantities of both ethane and acetylene are present in the lower stratosphere. There is a sharp decrease in the levels of these two compounds as one crosses the tropopause and ascends into the lower stratosphere. The observed levels of ethane and acetylene may have some impact on the background chemistry of the troposphere and stratosphere.
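The paired mass and mixing-ratio values above are consistent with the standard conversion ppb = (µg/m³) × 24.45 / M at 25 °C and 1 atm, where M is the molar mass in g/mol. The sketch below is illustrative; the molar masses are standard values, and small differences from the quoted numbers reflect rounding and the actual sampling conditions (the conversion factor changes with temperature and pressure, and hence with altitude).

```python
# Standard molar masses (g/mol); 24.45 L/mol is the molar volume at 25 C, 1 atm.
MOLAR_MASS = {"ethane": 30.07, "acetylene": 26.04}

def ug_m3_to_ppb(conc_ug_m3, species):
    """Convert a mass concentration to a volume mixing ratio at 25 C, 1 atm."""
    return conc_ug_m3 * 24.45 / MOLAR_MASS[species]
```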
NASA Astrophysics Data System (ADS)
Moritz, R. E.
2005-12-01
The properties, distribution and temporal variation of sea-ice are reviewed for application to problems of ice-atmosphere chemical processes. Typical vertical structure of sea-ice is presented for different ice types, including young ice, first-year ice and multi-year ice, emphasizing factors relevant to surface chemistry and gas exchange. Time average annual cycles of large scale variables are presented, including ice concentration, ice extent, ice thickness and ice age. Spatial and temporal variability of these large scale quantities is considered on time scales of 1-50 years, emphasizing recent and projected changes in the Arctic pack ice. The amount and time evolution of open water and thin ice are important factors that influence ocean-ice-atmosphere chemical processes. Observations and modeling of the sea-ice thickness distribution function are presented to characterize the range of variability in open water and thin ice.
Electrostatic Charging of the Pathfinder Rover
NASA Technical Reports Server (NTRS)
Siebert, Mark W.; Kolecki, Joseph C.
1996-01-01
The Mars Pathfinder mission will send a lander and a rover to the martian surface. Because of the extremely dry conditions on Mars, electrostatic charging of the rover is expected to occur as it moves about. Charge accumulation may result in high electrical potentials and discharge through the martian atmosphere. Such discharge could interfere with the operation of electrical elements on the rover. A strategy was therefore sought to mitigate this charge accumulation as a precautionary measure. Ground tests were performed to demonstrate charging in laboratory conditions simulating the surface conditions expected at Mars. Tests showed that a rover wheel, driven at typical rover speeds, will accumulate electrical charge and develop significant electrical potentials (average observed, 110 volts). Measurements were made of wheel electrical potential and wheel capacitance, and from these quantities the amount of accumulated charge was estimated. An engineering solution to mitigate charge accumulation was developed, recommended, and implemented on the actual rover.
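Estimating charge from potential and capacitance, as described above, is the elementary relation Q = C·V. The sketch below is a hedged illustration: the 110 V average is from the abstract, but the 10 pF capacitance is a placeholder, since the abstract does not quote the measured value.

```python
def charge_coulombs(capacitance_f, potential_v):
    """Accumulated charge from capacitance (F) and potential (V): Q = C * V."""
    return capacitance_f * potential_v

# Hypothetical 10 pF wheel at the observed 110 V average:
q = charge_coulombs(10e-12, 110)  # on the order of a nanocoulomb
```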
Biopolymer dynamics driven by helical flagella
NASA Astrophysics Data System (ADS)
Balin, Andrew K.; Zöttl, Andreas; Yeomans, Julia M.; Shendruk, Tyler N.
2017-11-01
Microbial flagellates typically inhabit complex suspensions of polymeric material which can impact the swimming speed of motile microbes, filter feeding of sessile cells, and the generation of biofilms. There is currently a need to better understand how the fundamental dynamics of polymers near active cells or flagella impacts these various phenomena, in particular, the hydrodynamic and steric influence of a rotating helical filament on suspended polymers. Our Stokesian dynamics simulations show that as a stationary rotating helix pumps fluid along its long axis, polymers migrate radially inward while being elongated. We observe that the actuation of the helix tends to increase the probability of finding polymeric material within its pervaded volume. This accumulation of polymers within the vicinity of the helix is stronger for longer polymers. We further analyze the stochastic work performed by the helix on the polymers and show that this quantity is positive on average and increases with polymer contour length.
Anderson-Cook, Christine Michaela
2017-03-01
Here, one of the substantial improvements to the practice of data analysis in recent decades is the change from reporting just a point estimate for a parameter or characteristic to also including a summary of uncertainty for that estimate. Understanding the precision of the estimate for the quantity of interest provides a better sense of what to expect and of how well we are able to predict future behavior of the process. For example, when we report a sample average as an estimate of the population mean, it is good practice to also provide a confidence interval (or credible interval, if you are doing a Bayesian analysis) to accompany that summary. This helps to calibrate what ranges of values are reasonable given the variability observed in the sample and the amount of data used to produce the summary.
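The practice described above can be sketched in a few lines. This is a minimal illustration using a normal critical value (1.96) for simplicity; for small samples a t quantile is the more appropriate choice, and the sample data are made up.

```python
import math
import statistics

def mean_with_ci(sample, z=1.96):
    """Sample mean with an approximate 95% confidence interval:
    mean +/- z * s / sqrt(n)."""
    n = len(sample)
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    return m, (m - z * se, m + z * se)
```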
Microstructure of Turbulence in the Stably Stratified Boundary Layer
NASA Astrophysics Data System (ADS)
Sorbjan, Zbigniew; Balsley, Ben B.
2008-11-01
The microstructure of a stably stratified boundary layer with a significant low-level nocturnal jet is investigated based on observations from the CASES-99 campaign in Kansas, U.S.A. The reported high-resolution vertical profiles of temperature, wind speed, wind direction, pressure, and turbulent dissipation rate were collected under nocturnal conditions on October 14, 1999, using the CIRES Tethered Lifting System. Two methods for evaluating instantaneous (1-s) background profiles are applied to the raw data. The background potential temperature is calculated using the “bubble sort” algorithm to produce a potential temperature that increases monotonically with height. Other scalar quantities are smoothed using a running vertical average. The behaviour of the background flow, buoyant overturns, turbulent fluctuations, and their respective histograms is presented. Ratios of the considered length scales to the Ozmidov scale are nearly constant with height, a fact that can be applied in practice for estimating instantaneous profiles of the dissipation rate.
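The two background-profile methods above can be sketched directly: sorting the potential-temperature profile so that it increases monotonically with height (the "bubble sort" rearrangement), and smoothing other scalars with a running vertical average. Window size and data below are illustrative.

```python
def background_theta(theta_profile):
    """Monotonic background potential temperature: the profile values
    rearranged so the smallest is at the bottom, as produced by the
    'bubble sort' method (any sort gives the same result)."""
    return sorted(theta_profile)

def running_mean(values, window=3):
    """Centered running vertical average; the ends use a shrunken window."""
    half = window // 2
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```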
Pore Water Transport of Enterococci out of Beach Sediments
Phillips, Matthew C.; Solo-Gabriele, Helena M.; Reniers, Adrianus J. H. M.; Wang, John D.; Kiger, Russell T.; Abdel-Mottaleb, Noha
2011-01-01
Enterococci are used to evaluate the safety of beach waters and studies have identified beach sands as a source of these bacteria. In order to study and quantify the release of microbes from beach sediments, flow column systems were built to evaluate flow of pore water out of beach sediments. Results show a peak in enterococci (average of 10% of the total microbes in core) released from the sand core within one pore water volume followed by a marked decline to below detection. These results indicate that few enterococci are easily removed and that factors other than simple pore water flow control the release of the majority of enterococci within beach sediments. A significantly larger quantity and release of enterococci were observed in cores collected after a significant rain event suggesting the influx of fresh water can alter the release pattern as compared to cores with no antecedent rainfall. PMID:21945015
Densities and temperatures in the polar thermosphere
NASA Technical Reports Server (NTRS)
Gardner, L. J.
1977-01-01
The atomic oxygen density at 120 km, the 630 nm airglow temperature, the helium density at 300 km and the molecular nitrogen density near 400 km were examined as functions of geomagnetic latitude, geomagnetic time, season and magnetic activity level. The long-term averages of these quantities were examined so as to provide a baseline of these thermospheric parameters from which future studies may be made for comparison. The hours around magnetic noon are characterized by low temperatures, high 0 and He densities, and median nitrogen densities. The pre-midnight hours exhibit high temperatures, high He density, low nitrogen density and median 0 densities. The post-midnight sector shows low 0 and He densities, median temperatures and high nitrogen densities. These results are compared to recent models and observations and are discussed with respect to their causes due to divergence of the wind field and energy deposition in the thermosphere.
NASA Technical Reports Server (NTRS)
Kalnay, Eugenia; Dalcher, Amnon
1987-01-01
It is shown that it is possible to predict the skill of numerical weather forecasts - a quantity which is variable from day to day and region to region. This has been accomplished using as predictor the dispersion (measured by the average correlation) between members of an ensemble of forecasts started from five different analyses. The analyses had been previously derived for satellite-data-impact studies and included, in the Northern Hemisphere, moderate perturbations associated with the use of different observing systems. When the Northern Hemisphere was used as a verification region, the prediction of skill was rather poor. This is due to the fact that such a large area usually contains regions with excellent forecasts as well as regions with poor forecasts, and does not allow for discrimination between them. However, when regional verifications were used, the ensemble forecast dispersion provided a very good prediction of the quality of the individual forecasts.
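The skill predictor described above — ensemble dispersion measured by the average correlation between member forecasts over a verification region — can be sketched as follows. The forecast fields are flattened to 1-D lists, and the data in the test are illustrative, not from the study.

```python
import math
from itertools import combinations

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ensemble_dispersion(members):
    """Average correlation over all member pairs; high agreement suggests
    (per the study) a more skilful forecast in that region."""
    pairs = list(combinations(members, 2))
    return sum(pearson(x, y) for x, y in pairs) / len(pairs)
```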
DOE Office of Scientific and Technical Information (OSTI.GOV)
C Flynn; AS Koontz; JH Mather
The uncertainties in current estimates of anthropogenic radiative forcing are dominated by the effects of aerosols, both in relation to the direct absorption and scattering of radiation by aerosols and also with respect to aerosol-related changes in cloud formation, longevity, and microphysics (See Figure 1; Intergovernmental Panel on Climate Change, Assessment Report 4, 2008). Moreover, the Arctic region in particular is especially sensitive to changes in climate with the magnitude of temperature changes (both observed and predicted) being several times larger than global averages (Kaufman et al. 2009). Recent studies confirm that aerosol-cloud interactions in the arctic generate climatologically significantmore » radiative effects equivalent in magnitude to that of green house gases (Lubin and Vogelmann 2006, 2007). The aerosol optical depth is the most immediate representation of the aerosol direct effect and is also important for consideration of aerosol-cloud interactions, and thus this quantity is essential for studies of aerosol radiative forcing.« less
Influence of tyre-road contact model on vehicle vibration response
NASA Astrophysics Data System (ADS)
Múčka, Peter; Gagnon, Louis
2015-09-01
The influence of the tyre-road contact model on the simulated vertical vibration response was analysed. Three contact models were compared: a tyre-road point contact model, a moving averaged profile, and a tyre-enveloping model. In total, 1600 real asphalt concrete and Portland cement concrete longitudinal road profiles were processed. A linear planar automobile model with 12 degrees of freedom (DOF) was used. Five vibration responses serving as measures of ride comfort, ride safety and dynamic load of cargo were investigated. The results were calculated as a function of vibration response, vehicle velocity, road quality and road surface type. Marked differences in the dynamic tyre forces and negligible differences in the ride comfort quantities were observed among the tyre-road contact models. The seat acceleration response for the three contact models and a 331-DOF multibody model of a truck semi-trailer was compared with the measured response for a known profile of a test section.
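Of the three contact models named above, the moving averaged profile is the simplest to sketch: the road elevation is averaged over a window comparable to the tyre contact-patch length before being fed to the point-contact vehicle model, filtering out short-wavelength roughness the tyre cannot follow. The window length and profile values below are illustrative.

```python
def moving_averaged_profile(elevations, window_pts):
    """Smooth a sampled road elevation profile by averaging each point
    over a trailing window of window_pts samples (a sketch of the
    'moving averaged profile' contact model)."""
    out = []
    for i in range(len(elevations)):
        chunk = elevations[max(0, i - window_pts + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```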
Zou, Yuming; Li, Quan; Xu, Weidong
2016-09-16
Orthopaedics-related diseases and conditions are a significant burden worldwide. In this study, we aimed to compare the quantity and quality of research output in the field of orthopaedics from Mainland China (MC), USA, UK, Japan and Germany. The USA, UK, Japan, Germany and MC. We selected orthopaedics journals from the subject category 'orthopedics' from the Science Citation Index Expanded (SCIE). The number of publications, the number of publications in the surveyed publication types, impact factor (IF) and citations from the corresponding country from 2005 to 2014 were collected for quantity and quality comparisons. A total of 128 895 articles were published worldwide in orthopaedics-related journals from 2005 to 2014. The USA contributed the largest proportion (31 190 (24.20%)), followed by the UK (6703 (5.20%)), Japan (5718 (4.41%)), Germany (4701 (3.66%)) and MC (3389 (2.63%)). Publications from MC represented the fewest, but this quantity is rapidly increasing. The quantity of annual publications from MC has exceeded that of Germany since 2012. The USA plays a predominant role in all kinds of publication types under investigation in the study, except in the category of meta-analysis. MC was in the last place for cumulative IFs, and the average IF actually decreased from the beginning of the study. For total and average citations, MC still lags behind the other countries in the study. The USA has occupied the dominant place in orthopaedics-related research for the last 10 years. Although MC has made great progress in the number of published works in the field of orthopaedics over the last 10 years, the quality of these publishing efforts needs further improvement. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
General framework for fluctuating dynamic density functional theory
NASA Astrophysics Data System (ADS)
Durán-Olivencia, Miguel A.; Yatsyshin, Peter; Goddard, Benjamin D.; Kalliadasis, Serafim
2017-12-01
We introduce a versatile bottom-up derivation of a formal theoretical framework to describe (passive) soft-matter systems out of equilibrium subject to fluctuations. We provide a unique connection between the constituent-particle dynamics of real systems and the time evolution equation of their measurable (coarse-grained) quantities, such as local density and velocity. The starting point is the full Hamiltonian description of a system of colloidal particles immersed in a fluid of identical bath particles. Then, we average out the bath via Zwanzig’s projection-operator techniques and obtain the stochastic Langevin equations governing the colloidal-particle dynamics. Introducing the appropriate definition of the local number and momentum density fields yields a generalisation of the Dean-Kawasaki (DK) model, which resembles the stochastic Navier-Stokes description of a fluid. Nevertheless, the DK equation still contains all the microscopic information and, for that reason, does not represent the dynamical law of observable quantities. We address this controversial feature of the DK description by carrying out a nonequilibrium ensemble average. Adopting a natural decomposition into local-equilibrium and nonequilibrium contributions, where the former is related to a generalised version of the canonical distribution, we finally obtain the fluctuating-hydrodynamic equation governing the time evolution of the mesoscopic density and momentum fields. Along the way, we outline the connection between the ad hoc energy functional introduced in previous DK derivations and the free-energy functional from classical density-functional theory. The resultant equation has the structure of a dynamical density-functional theory (DDFT) with an additional fluctuating force coming from the random interactions with the bath. We show that our fluctuating DDFT formalism corresponds to a particular version of the fluctuating Navier-Stokes equations, originally derived by Landau and Lifshitz.
Our framework thus provides the formal apparatus for ab initio derivations of fluctuating DDFT equations capable of describing the dynamics of soft-matter systems in and out of equilibrium.
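The stochastic Langevin equations that the derivation above averages over can be illustrated with a minimal numerical sketch. This is not the authors' formalism: it is a generic Euler-Maruyama integrator for non-interacting overdamped particles in a harmonic trap, with every parameter (`k_spring`, `gamma`, `kT`, `dt`) chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def overdamped_langevin(x0, force, gamma, kT, dt, n_steps):
    """Euler-Maruyama integration of overdamped Langevin dynamics:
    dx = force(x)/gamma * dt + sqrt(2*kT*dt/gamma) * xi,
    with xi a standard normal increment."""
    x = np.array(x0, dtype=float)
    traj = np.empty((n_steps, x.size))
    noise_amp = np.sqrt(2.0 * kT * dt / gamma)
    for i in range(n_steps):
        x += force(x) * dt / gamma + noise_amp * rng.normal(size=x.size)
        traj[i] = x
    return traj

# 200 independent particles in a harmonic trap; at equilibrium the
# position variance should approach kT / k_spring = 0.5.
k_spring = 1.0
traj = overdamped_langevin(np.zeros(200), lambda x: -k_spring * x,
                           gamma=1.0, kT=0.5, dt=1e-3, n_steps=20_000)
```

Checking that the late-time variance matches the equipartition value kT/k is a quick sanity test of the fluctuation-dissipation balance in the noise amplitude.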
Correlation consistent basis sets for lanthanides: The atoms La–Lu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Qing; Peterson, Kirk A., E-mail: kipeters@wsu.edu
Using the 3rd-order Douglas-Kroll-Hess (DKH3) Hamiltonian, all-electron correlation consistent basis sets of double-, triple-, and quadruple-zeta quality have been developed for the lanthanide elements La through Lu. Basis sets designed for the recovery of valence correlation (defined here as 4f5s5p5d6s), cc-pVnZ-DK3, and outer-core correlation (valence + 4s4p4d), cc-pwCVnZ-DK3, are reported (n = D, T, and Q). Systematic convergence of both Hartree-Fock and correlation energies towards their respective complete basis set (CBS) limits is observed. Benchmark calculations of the first three ionization potentials (IPs) of La through Lu are reported at the DKH3 coupled cluster singles and doubles with perturbative triples [CCSD(T)] level of theory, including effects of correlation down through the 4s electrons. Spin-orbit coupling is treated at the 2-component HF level. After extrapolation to the CBS limit, the average errors with respect to experiment were just 0.52, 1.14, and 4.24 kcal/mol for the 1st, 2nd, and 3rd IPs, respectively, compared to the average experimental uncertainties of 0.03, 1.78, and 2.65 kcal/mol, respectively. The new basis sets are also used in CCSD(T) benchmark calculations of the equilibrium geometries, atomization energies, and heats of formation for Gd{sub 2}, GdF, and GdF{sub 3}. Except for the equilibrium geometry and harmonic frequency of GdF, which are accurately known from experiment, all other calculated quantities represent significant improvements compared to the existing experimental quantities. With estimated uncertainties of about ±3 kcal/mol, the 0 K atomization energies (298 K heats of formation) are calculated to be (all in kcal/mol): 33.2 (160.1) for Gd{sub 2}, 151.7 (−36.6) for GdF, and 447.1 (−295.2) for GdF{sub 3}.
Varieties of quantity estimation in children.
Sella, Francesco; Berteletti, Ilaria; Lucangeli, Daniela; Zorzi, Marco
2015-06-01
In the number-to-position task, with increasing age and numerical expertise, children's pattern of estimates shifts from a biased (nonlinear) to a formal (linear) mapping. This widely replicated finding concerns symbolic numbers, whereas less is known about other types of quantity estimation. In Experiment 1, Preschool, Grade 1, and Grade 3 children were asked to map continuous quantities, discrete nonsymbolic quantities (numerosities), and symbolic (Arabic) numbers onto a visual line. Numerical quantity was matched for the symbolic and discrete nonsymbolic conditions, whereas cumulative surface area was matched for the continuous and discrete quantity conditions. Crucially, in the discrete condition children's estimation could rely either on the cumulative area or numerosity. All children showed a linear mapping for continuous quantities, whereas a developmental shift from a logarithmic to a linear mapping was observed for both nonsymbolic and symbolic numerical quantities. Analyses on individual estimates suggested the presence of two distinct strategies in estimating discrete nonsymbolic quantities: one based on numerosity and the other based on spatial extent. In Experiment 2, a non-spatial continuous quantity (shades of gray) and new discrete nonsymbolic conditions were added to the set used in Experiment 1. Results confirmed the linear patterns for the continuous tasks, as well as the presence of a subset of children relying on numerosity for the discrete nonsymbolic numerosity conditions despite the availability of continuous visual cues. Overall, our findings demonstrate that estimation of numerical and non-numerical quantities is based on different processing strategies and follows different developmental trajectories.
Statistical errors in molecular dynamics averages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schiferl, S.K.; Wallace, D.C.
1985-11-15
A molecular dynamics calculation produces a time-dependent fluctuating signal whose average is a thermodynamic quantity of interest. The average of the kinetic energy, for example, is proportional to the temperature. A procedure is described for determining when the molecular dynamics system is in equilibrium with respect to a given variable, according to the condition that the mean and the bandwidth of the signal should be sensibly constant in time. Confidence limits for the mean are obtained from an analysis of a finite length of the equilibrium signal. The role of serial correlation in this analysis is discussed. The occurrence of unstable behavior in molecular dynamics data is noted, and a statistical test for a level shift is described.
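The serial-correlation issue discussed above can be illustrated with a block-averaging estimate of the standard error of a correlated signal's mean. This is a generic sketch, not the authors' procedure; the AR(1) process is a toy stand-in for a fluctuating MD observable, and the block count is an arbitrary choice.

```python
import numpy as np

def block_standard_error(signal, n_blocks=20):
    """Standard error of the mean of a serially correlated signal,
    estimated from the scatter of coarse block means."""
    usable = len(signal) - len(signal) % n_blocks
    block_means = signal[:usable].reshape(n_blocks, -1).mean(axis=1)
    # Block means are roughly independent once each block is much
    # longer than the signal's correlation time.
    return block_means.std(ddof=1) / np.sqrt(n_blocks)

rng = np.random.default_rng(0)
# AR(1) toy signal with correlation coefficient 0.9 per step.
eps = rng.normal(size=100_000)
x = np.empty(100_000)
x[0] = eps[0]
for i in range(1, len(x)):
    x[i] = 0.9 * x[i - 1] + eps[i]

naive_se = x.std(ddof=1) / np.sqrt(len(x))  # ignores correlation
block_se = block_standard_error(x)          # accounts for it
```

For a correlated signal the naive formula underestimates the true uncertainty of the mean by roughly the square root of the number of correlated steps, which is why the blocked estimate comes out several times larger.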
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pritychenko, B.; Mughabghab, S.F.
We present calculations of neutron thermal cross sections, Westcott factors, resonance integrals, Maxwellian-averaged cross sections and astrophysical reaction rates for 843 ENDF materials using data from the major evaluated nuclear libraries and the European activation file. Extensive analysis of newly-evaluated neutron reaction cross sections, neutron covariances, and improvements in data processing techniques motivated us to calculate nuclear industry and neutron physics quantities, produce s-process Maxwellian-averaged cross sections and astrophysical reaction rates, systematically calculate uncertainties, and provide additional insights on currently available neutron-induced reaction data. Nuclear reaction calculations are discussed and new results are presented. Due to space limitations, the present paper contains only calculated Maxwellian-averaged cross sections and their uncertainties. The complete data sets for all results are published in the Brookhaven National Laboratory report.
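A Maxwellian-averaged cross section is conventionally defined as ⟨σ⟩ = (2/√π)(kT)⁻² ∫ σ(E) E e^(−E/kT) dE. The sketch below evaluates that definition by simple quadrature for a hypothetical cross section; it illustrates the quantity only and is not the authors' processing pipeline. For an energy-independent σ the average reduces to (2/√π)σ, which makes a convenient check.

```python
import numpy as np

def macs(sigma, kT, n=200_001, emax_factor=40.0):
    """Maxwellian-averaged cross section
    <sigma> = (2/sqrt(pi)) * (kT)**-2 * integral of sigma(E)*E*exp(-E/kT),
    evaluated by trapezoidal quadrature on a uniform energy grid
    truncated at emax_factor * kT (the tail beyond is negligible)."""
    E = np.linspace(0.0, emax_factor * kT, n)
    f = sigma(E) * E * np.exp(-E / kT)
    integral = np.sum(0.5 * (f[1:] + f[:-1])) * (E[1] - E[0])
    return (2.0 / np.sqrt(np.pi)) * integral / kT**2

kT = 0.030  # MeV: a typical s-process thermal energy
# Energy-independent (1-barn) cross section as a check case.
flat = macs(lambda E: np.ones_like(E), kT)
```

Because ∫ E e^(−E/kT) dE = (kT)², the flat-cross-section result should equal 2/√π ≈ 1.128 barn, independent of kT.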
NASA Astrophysics Data System (ADS)
Castillo, Richard; Castillo, Edward; Fuentes, David; Ahmad, Moiz; Wood, Abbie M.; Ludwig, Michelle S.; Guerrero, Thomas
2013-05-01
Landmark point-pairs provide a strategy to assess deformable image registration (DIR) accuracy in terms of the spatial registration of the underlying anatomy depicted in medical images. In this study, we propose to augment a publicly available database (www.dir-lab.com) of medical images with large sets of manually identified anatomic feature pairs between breath-hold computed tomography (BH-CT) images for DIR spatial accuracy evaluation. Ten BH-CT image pairs were randomly selected from the COPDgene study cases. Each patient had received CT imaging of the entire thorax in the supine position at one-fourth dose normal expiration and maximum effort full dose inspiration. Using dedicated in-house software, an imaging expert manually identified large sets of anatomic feature pairs between images. Estimates of inter- and intra-observer spatial variation in feature localization were determined by repeat measurements of multiple observers over subsets of randomly selected features. 7298 anatomic landmark features were manually paired between the 10 sets of images. The number of feature pairs per case ranged from 447 to 1172. Average 3D Euclidean landmark displacements varied substantially among cases, ranging from 12.29 (SD: 6.39) to 30.90 (SD: 14.05) mm. Repeat registration of uniformly sampled subsets of 150 landmarks for each case yielded estimates of observer localization error, whose per-case averages ranged from 0.58 (SD: 0.87) to 1.06 (SD: 2.38) mm. The additions to the online web database (www.dir-lab.com) described in this work will broaden the applicability of the reference data, providing a freely available common dataset for targeted critical evaluation of DIR spatial accuracy performance in multiple clinical settings. Estimates of observer variance in feature localization suggest consistent spatial accuracy for all observers across both four-dimensional CT and COPDgene patient cohorts.
Tuition at PhD-Granting Institutions: A Supply and Demand Model.
ERIC Educational Resources Information Center
Koshal, Rajindar K.; And Others
1994-01-01
Builds and estimates a model that explains educational supply and demand behavior at PhD-granting institutions in the United States. The statistical analysis based on 1988-89 data suggests that student quantity, educational costs, average SAT score, class size, percentage of faculty with a PhD, graduation rate, ranking, and existence of a medical…
NASA Astrophysics Data System (ADS)
Jones, Emlyn M.; Baird, Mark E.; Mongin, Mathieu; Parslow, John; Skerratt, Jenny; Lovell, Jenny; Margvelashvili, Nugzar; Matear, Richard J.; Wild-Allen, Karen; Robson, Barbara; Rizwi, Farhan; Oke, Peter; King, Edward; Schroeder, Thomas; Steven, Andy; Taylor, John
2016-12-01
Skillful marine biogeochemical (BGC) models are required to understand a range of coastal and global phenomena such as changes in nitrogen and carbon cycles. The refinement of BGC models through the assimilation of variables calculated from observed in-water inherent optical properties (IOPs), such as phytoplankton absorption, is problematic. Empirically derived relationships between IOPs and variables such as chlorophyll-a concentration (Chl a), total suspended solids (TSS) and coloured dissolved organic matter (CDOM) have been shown to have errors that can exceed 100 % of the observed quantity. These errors are greatest in shallow coastal regions, such as the Great Barrier Reef (GBR), due to the additional signal from bottom reflectance. Rather than assimilate quantities calculated using IOP algorithms, this study demonstrates the advantages of assimilating quantities calculated directly from the less error-prone satellite remote-sensing reflectance (RSR). To assimilate the observed RSR, we use an in-water optical model to produce an equivalent simulated RSR and calculate the mismatch between the observed and simulated quantities to constrain the BGC model with a deterministic ensemble Kalman filter (DEnKF). The traditional assumption that simulated surface Chl a is equivalent to the remotely sensed OC3M estimate of Chl a resulted in a forecast error of approximately 75 %. We show this error can be halved by instead using simulated RSR to constrain the model via the assimilation system. When the analysis and forecast fields from the RSR-based assimilation system are compared with the non-assimilating model, a comparison against independent in situ observations of Chl a, TSS and dissolved inorganic nutrients (NO3, NH4 and DIP) showed that errors are reduced by up to 90 %. In all cases, the assimilation system improves the simulation compared to the non-assimilating model. 
Our approach allows for the incorporation of vast quantities of remote-sensing observations that have in the past been discarded due to shallow water and/or artefacts introduced by terrestrially derived TSS and CDOM or the lack of a calibrated regional IOP algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pritychenko, B., E-mail: pritychenko@bnl.gov
Nuclear astrophysics and californium fission neutron spectrum averaged cross sections and their uncertainties for ENDF materials have been calculated. Absolute values were deduced with Maxwellian and Mannhart spectra, while uncertainties are based on ENDF/B-VII.1, JEFF-3.1.2, JENDL-4.0 and Low-Fidelity covariances. These quantities are compared with available data, independent benchmarks, EXFOR library, and analyzed for a wide range of cases. Recommendations for neutron cross section covariances are given and implications are discussed.
NASA Astrophysics Data System (ADS)
Stolz, Douglas C.; Rutledge, Steven A.; Pierce, Jeffrey R.; van den Heever, Susan C.
2017-07-01
The objective of this study is to determine the relative contributions of normalized convective available potential energy (NCAPE), cloud condensation nuclei (CCN) concentrations, warm cloud depth (WCD), vertical wind shear (SHEAR), and environmental relative humidity (RH) to the variability of lightning and radar reflectivity within convective features (CFs) observed by the Tropical Rainfall Measuring Mission (TRMM) satellite. Our approach incorporates multidimensional binned representations of observations of CFs and modeled thermodynamics, kinematics, and CCN as inputs to develop approximations for total lightning density (TLD) and the average height of 30 dBZ radar reflectivity (AVGHT30). The results suggest that TLD and AVGHT30 increase with increasing NCAPE, increasing CCN, decreasing WCD, increasing SHEAR, and decreasing RH. Multiple-linear approximations for lightning and radar quantities using the aforementioned predictors account for significant portions of the variance in the binned data set (R2 ≈ 0.69-0.81). The standardized weights attributed to CCN, NCAPE, and WCD are largest, the standardized weight of RH varies relative to other predictors, while the standardized weight for SHEAR is comparatively small. We investigate these statistical relationships for collections of CFs within various geographic areas and compare the aerosol (CCN) and thermodynamic (NCAPE and WCD) contributions to variations in the CF population in a partial sensitivity analysis based on multiple-linear regression approximations computed herein. A global lightning parameterization is developed; the average difference between predicted and observed TLD decreases from +21.6 to +11.6% when using a hybrid approach to combine separate approximations over continents and oceans, thus highlighting the need for regionally targeted investigations in the future.
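The "standardized weights" compared above are the coefficients of a multiple-linear regression fitted after standardizing predictors and response, which puts them on a common scale. The sketch below shows the mechanics on synthetic data; the predictors are hypothetical stand-ins for NCAPE, CCN and WCD, and the coefficients are invented, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
# Three synthetic predictors (stand-ins for e.g. NCAPE, CCN, WCD).
X = rng.normal(size=(n, 3))
# Toy response: rises with the first two predictors, falls with the
# third, plus noise. All coefficients are illustrative only.
y = (1.5 * X[:, 0] + 1.0 * X[:, 1] - 0.8 * X[:, 2]
     + rng.normal(scale=0.5, size=n))

# Standardize predictors and response so the fitted coefficients
# become directly comparable "standardized weights".
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
design = np.column_stack([np.ones(n), Xs])
beta, *_ = np.linalg.lstsq(design, ys, rcond=None)
```

After standardization, the magnitude of each element of `beta[1:]` measures how many standard deviations the response moves per standard deviation of that predictor, so the largest-magnitude weights identify the dominant predictors.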
NASA Astrophysics Data System (ADS)
Singh, Trailokyanath; Mishra, Pandit Jagatananda; Pattanayak, Hadibandhu
2017-12-01
In this paper, an economic order quantity (EOQ) inventory model for a deteriorating item is developed with the following characteristics: (i) The demand rate is deterministic and two-staged, i.e., it is constant in the first part of the cycle and a linear function of time in the second part. (ii) The deterioration rate is time-proportional. (iii) Shortages are not allowed to occur. The optimal cycle time and the optimal order quantity have been derived by minimizing the total average cost. A simple solution procedure is provided to illustrate the proposed model. The article concludes with a numerical example and sensitivity analysis of various parameters as illustrations of the theoretical results.
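The idea of choosing the order quantity that minimizes total average cost is easiest to see in the classic constant-demand EOQ special case, sketched below. This is a deliberately simplified illustration, not the paper's two-staged-demand, deteriorating-item model; the parameter values are invented.

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    """Classic economic order quantity Q* = sqrt(2*D*K/h), the
    minimizer of the total average cost K*D/Q + h*Q/2."""
    return math.sqrt(2.0 * demand_rate * order_cost / holding_cost)

def average_cost(q, demand_rate, order_cost, holding_cost):
    """Ordering cost per unit time plus average holding cost."""
    return order_cost * demand_rate / q + holding_cost * q / 2.0

# Illustrative parameters: 1200 units/year demand, 50 per order,
# 6 per unit-year holding cost.
q_star = eoq(1200.0, 50.0, 6.0)
```

At the optimum the two cost components are equal, so perturbing the quantity in either direction raises the average cost, which is a quick way to verify the formula numerically.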
3D RISM theory with fast reciprocal-space electrostatics.
Heil, Jochen; Kast, Stefan M
2015-03-21
The calculation of electrostatic solute-solvent interactions in 3D RISM ("three-dimensional reference interaction site model") integral equation theory is recast in a form that allows for a computational treatment analogous to the "particle-mesh Ewald" formalism as used for molecular simulations. In addition, relations that connect 3D RISM correlation functions and interaction potentials with thermodynamic quantities such as the chemical potential and average solute-solvent interaction energy are reformulated in a way that calculations of expensive real-space electrostatic terms on the 3D grid are completely avoided. These methodical enhancements allow for both, a significant speedup particularly for large solute systems and a smoother convergence of predicted thermodynamic quantities with respect to box size, as illustrated for several benchmark systems.
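The appeal of reciprocal-space electrostatics is that the Poisson equation becomes algebraic in Fourier space, where the Laplacian is diagonal. The 1D periodic sketch below illustrates only that core idea; it is far simpler than the particle-mesh Ewald-style machinery the paper adapts to 3D RISM.

```python
import numpy as np

def poisson_periodic(rho, length):
    """Solve d^2 phi / dx^2 = -rho on a periodic 1D grid via the FFT:
    in reciprocal space, phi_k = rho_k / k^2 (zero-mean convention,
    so the k = 0 mode is dropped)."""
    n = len(rho)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    nz = k != 0.0
    phi_k[nz] = rho_k[nz] / k[nz] ** 2
    return np.fft.ifft(phi_k).real

L = 2.0 * np.pi
x = np.linspace(0.0, L, 128, endpoint=False)
# Single Fourier mode as a check: the analytic answer is cos(3x)/9.
phi = poisson_periodic(np.cos(3.0 * x), L)
```

Because the source is a single Fourier mode, the FFT solve is exact up to round-off, making the comparison with cos(3x)/9 a clean correctness check.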
Number versus Continuous Quantity in Numerosity Judgments by Fish
ERIC Educational Resources Information Center
Agrillo, Christian; Piffer, Laura; Bisazza, Angelo
2011-01-01
In quantity discrimination tasks, adults, infants and animals have been sometimes observed to process number only after all continuous variables, such as area or density, have been controlled for. This has been taken as evidence that processing number may be more cognitively demanding than processing continuous variables. We tested this hypothesis…
Improving the assessment of prescribing: use of a 'substitution index'.
Kunisawa, Susumu; Otsubo, Tetsuya; Lee, Jason; Imanaka, Yuichi
2013-07-01
To analyse the current and potential utilization of generic drugs in Japan, to examine the maximum possible cost savings from generic drug use and to develop a fairer measure to assess the level of generic drug substitution. We conducted a cross-sectional retrospective analysis of nine million dispensing records during January to March 2010 in Kyoto Prefecture. Maximum potential quantity-based shares were defined as the quantity of generic drugs used plus the quantity of branded drugs that could have been replaced by generic drugs divided by the quantity of all drugs dispensed. We developed a 'substitution index', defined as the proportion of generic drugs out of the total drugs substitutable with generic drugs (based on quantity rather than cost). Generic drugs had a quantity-based share of 17.9%, a cost-based share of 8.9% and a maximum potential quantity-based share of 50.1%, which is lower than the actual generic drug shares of some other countries. The maximum possible cost savings as a result of generic drug substitution was 16.5%. We also observed wide variations in maximum potential quantity-based shares between health care sectors and health care institutions. Simple comparisons based on quantity-based shares may misrepresent the actual generic drug use. A substitution index that takes into account the maximum potential quantity-based share of generic drugs as a fairer measure may promote more realistic goals and encourage generic drug usage.
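The three measures defined above (quantity-based share, maximum potential share, and substitution index) follow directly from three dispensing quantities. The sketch below implements those definitions; the input quantities are hypothetical, chosen only to echo the reported 17.9% share and 50.1% maximum potential share.

```python
def substitution_measures(generic_qty, substitutable_branded_qty, total_qty):
    """Quantity-based generic share, maximum potential quantity-based
    share, and the substitution index
    = generics / (generics + substitutable branded)."""
    share = generic_qty / total_qty
    max_potential = (generic_qty + substitutable_branded_qty) / total_qty
    index = generic_qty / (generic_qty + substitutable_branded_qty)
    return share, max_potential, index

# Hypothetical quantities: 179 generic units, 322 substitutable
# branded units, 1000 units dispensed in total.
share, max_potential, index = substitution_measures(179.0, 322.0, 1000.0)
```

The point of the index is visible here: a facility can have a low raw share simply because little of what it dispenses is substitutable, while the index judges it only on the substitutions actually available to it.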
Ring-averaged ion velocity distribution function probe for laboratory magnetized plasma experiment
NASA Astrophysics Data System (ADS)
Kawamori, Eiichirou; Chen, Jinting; Lin, Chiahsuan; Lee, Zongmau
2017-10-01
The ring-averaged velocity distribution function of ions at a fixed guiding-center position is a fundamental quantity in gyrokinetic plasma physics. We have developed a diagnostic tool for the ring-averaged velocity distribution function of ions for laboratory plasma experiments, named the ring-averaged ion distribution function probe (RIDFP). The RIDFP is a set of ion collectors for different velocities. It is designed to be immersed in magnetized plasmas and achieves momentum selection of incoming ions through the selection of their Larmor radii. To nullify the influence of the sheath potential surrounding the RIDFP on the orbits of the incoming ions, the electrostatic potential of the RIDFP body is automatically adjusted to coincide with the space potential of the target plasma using an emissive probe and a voltage follower. The developed RIDFP successfully measured the equilibrium ring-averaged velocity distribution function of a laboratory magnetized plasma, which was consistent with a Maxwellian distribution with an ion temperature of 0.2 eV.
NASA Technical Reports Server (NTRS)
North, G. R.; Bell, T. L.; Cahalan, R. F.; Moeng, F. J.
1982-01-01
Geometric characteristics of the spherical earth are shown to be responsible for the increase of variance with latitude of zonally averaged meteorological statistics. An analytic model is constructed to display the effect of a spherical geometry on zonal averages, employing a sphere labeled with radial unit vectors in a real, stochastic field expanded in complex spherical harmonics. The variance of a zonally averaged field is found to be expressible in terms of the spectrum of the vector field of the spherical harmonics. A maximum variance is then located at the poles, and the ratio of the variance to the zonally averaged grid-point variance, weighted by the cosine of the latitude, yields the zonal correlation typical of the latitude. An example is provided for the 500 mb level in the Northern Hemisphere compared to 15 years of data. Variance is determined to increase north of 60 deg latitude.
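The geometric effect described above (latitude circles shrink poleward, so a fixed angular grid samples fewer effectively independent values of a spatially correlated field) can be illustrated with a toy Monte Carlo. The AR(1) longitude model and the assumed correlation length are stand-ins for the paper's spherical-harmonic analysis, not a reproduction of it.

```python
import numpy as np

rng = np.random.default_rng(3)
N_LON, N_REAL = 360, 2000
CORR_LEN = 2000.0   # km: assumed spatial correlation length of the field
R_EARTH = 6371.0    # km

def zonal_mean_variance(lat_deg):
    """Variance (across realizations) of the zonal mean of a random
    field with fixed spatial correlation, along one latitude circle."""
    # Physical spacing of adjacent longitude points shrinks poleward.
    dx = np.cos(np.radians(lat_deg)) * (2.0 * np.pi * R_EARTH / N_LON)
    rho = np.exp(-dx / CORR_LEN)  # lag-1 correlation of an AR(1) chain
    field = np.empty((N_REAL, N_LON))
    field[:, 0] = rng.normal(size=N_REAL)
    innov = rng.normal(size=(N_REAL, N_LON)) * np.sqrt(1.0 - rho**2)
    for j in range(1, N_LON):
        field[:, j] = rho * field[:, j - 1] + innov[:, j]
    return field.mean(axis=1).var()

var_pole = zonal_mean_variance(80.0)  # near-polar latitude circle
var_eq = zonal_mean_variance(0.0)     # equator
```

Because the same angular grid spans far less physical distance at 80° than at the equator, adjacent points are more strongly correlated there and the zonal mean averages over fewer independent samples, so its variance is several times larger near the pole.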
NASA Astrophysics Data System (ADS)
Parry, Louise; Neely, Ryan, III; Bennett, Lindsay; Collier, Chris; Dufton, David
2017-04-01
The Scottish Environment Protection Agency (SEPA) has a statutory responsibility to provide flood warning across Scotland. It achieves this through an operational partnership with the UK Met Office wherein meteorological forecasts are applied to a national distributed hydrological model, Grid-to-Grid (G2G), and catchment-specific lumped PDM models. Both of these model types rely on observed precipitation input for model development and calibration, and operationally for historical runs to generate initial conditions. Scotland has an average annual precipitation of 1430 mm (1971-2000), but the spatial variability in totals is high, predominantly in relation to the topography and prevailing winds, which poses different challenges to both radar and point measurement methods of observation. In addition, the high elevations mean that in winter a significant proportion of precipitation falls as snow. For the operational forecasting models, observed rainfall data is provided in Near Real Time (NRT) from SEPA's network of approximately 260 telemetered TBR gauges and 4 UK Met Office C-band radars. Both data sources have their strengths and weaknesses, particularly in relation to the orography and spatial representativeness, but estimates of rainfall from the two methods can vary greatly. Northern Scotland, particularly near Inverness, is a comparatively sparse part of the radar network. Rainfall totals and distribution in this area are determined by the Northwest Highlands and Cairngorms mountain ranges, which also have a negative impact on radar observations. In recognition of this issue, the NCAS mobile X-band weather radar (MXWR) was deployed in this area between February and August 2016. This study presents a comparison of rainfall estimates for the Inverness and Moray Firth region generated from the operational radar network, the TBR network, and the MXWR.
Quantitative precipitation estimates (QPEs) from both sources of radar data were compared to point estimates of precipitation as well as catchment average estimates generated using different spatial averaging methods, including the operationally applied Thiessen polygons. In addition, the QPEs were applied to operational PDM models to compare the effect on the simulated runoff. The results highlight the hydrological significance of uncertainty in observed rainfall. Recommendations for future investigations are to improve radar QPEs through better correction for orography and for different precipitation types, as well as to analyse the benefits of the UK Met Office radar-raingauge merged product. In addition, we need to quantify the cost-benefit of deploying more radars in Scotland in light of the problems posed by the orography.
Finite coupling corrections to holographic predictions for hot QCD
Waeber, Sebastian; Schafer, Andreas; Vuorinen, Aleksi; ...
2015-11-13
Finite ’t Hooft coupling corrections to multiple physical observables in strongly coupled N=4 supersymmetric Yang-Mills plasma are examined, in an attempt to assess the stability of the expansion in inverse powers of the ’t Hooft coupling λ. Observables considered include thermodynamic quantities, transport coefficients, and quasinormal mode frequencies. Large-λ expansions for quasinormal mode frequencies are notably less well behaved than the expansions of other quantities; we find that a partial resummation of higher-order corrections can significantly reduce the sensitivity of the results to the value of λ.
Norrie, John; Davidson, Kate; Tata, Philip; Gumley, Andrew
2013-01-01
Objectives We investigated the treatment effects reported from a high-quality randomized controlled trial of cognitive behavioural therapy (CBT) for 106 people with borderline personality disorder attending community-based clinics in the UK National Health Service – the BOSCOT trial. Specifically, we examined whether the amount of therapy and therapist competence had an impact on our primary outcome, the number of suicidal acts†, using instrumental variables regression modelling. Design Randomized controlled trial. Participants from across three sites (London, Glasgow, and Ayrshire/Arran) were randomized equally to CBT for personality disorders (CBTpd) plus Treatment as Usual or to Treatment as Usual. Treatment as Usual varied between sites and individuals, but was consistent with routine treatment in the UK National Health Service at the time. CBTpd comprised an average 16 sessions (range 0–35) over 12 months. Method We used instrumental variable regression modelling to estimate the impact of quantity and quality of therapy received (recording activities and behaviours that took place after randomization) on number of suicidal acts and inpatient psychiatric hospitalization. Results A total of 101 participants provided full outcome data at 2 years post randomization. The previously reported intention-to-treat (ITT) results showed on average a reduction of 0.91 (95% confidence interval 0.15–1.67) suicidal acts over 2 years for those randomized to CBT. By incorporating the influence of quantity of therapy and therapist competence, we show that this estimate of the effect of CBTpd could be approximately two to three times greater for those receiving the right amount of therapy from a competent therapist. Conclusions Trials should routinely control for and collect data on both quantity of therapy and therapist competence, which can be used, via instrumental variable regression modelling, to estimate treatment effects for optimal delivery of therapy. 
Such estimates complement rather than replace the ITT results, which are properly the principal analysis results from such trials. Practitioner points: Assessing the impact of the quantity and quality of therapy (competence of therapists) is complex. More competent therapists, trained in CBTpd, may significantly reduce the number of suicidal acts in patients with borderline personality disorder. PMID:23420622
HIV prevention costs and their predictors: evidence from the ORPHEA Project in Kenya
Galárraga, Omar; Wamai, Richard G; Sosa-Rubí, Sandra G; Mugo, Mercy G; Contreras-Loya, David; Bautista-Arredondo, Sergio; Nyakundi, Helen; Wang’ombe, Joseph K
2017-01-01
Abstract We estimate costs and their predictors for three HIV prevention interventions in Kenya: HIV testing and counselling (HTC), prevention of mother-to-child transmission (PMTCT) and voluntary medical male circumcision (VMMC). As part of the ‘Optimizing the Response of Prevention: HIV Efficiency in Africa’ (ORPHEA) project, we collected retrospective data from government and non-governmental health facilities for 2011–12. We used multi-stage sampling to determine a sample of health facilities by type, ownership, size and interventions offered totalling 144 sites in 78 health facilities in 33 districts across Kenya. Data sources included key informants, registers and time-motion observation methods. Total costs of production were computed using both quantity and unit price of each input. Average cost was estimated by dividing total cost per intervention by number of clients accessing the intervention. Multivariate regression methods were used to analyse predictors of log-transformed average costs. Average costs were $7 and $79 per HTC and PMTCT client tested, respectively; and $66 per VMMC procedure. Results show evidence of economies of scale for PMTCT and VMMC: increasing the number of clients per year by 100% was associated with cost reductions of 50% for PMTCT, and 45% for VMMC. Task shifting was associated with reduced costs for both PMTCT (59%) and VMMC (54%). Costs in hospitals were higher for PMTCT (56%) in comparison to non-hospitals. Facilities that performed testing based on risk factors as opposed to universal screening had higher HTC average costs (79%). Lower VMMC costs were associated with availability of male reproductive health services (59%) and presence of community advisory board (52%). Aside from increasing production scale, HIV prevention costs may be contained by using task shifting, non-hospital sites, service integration and community supervision. PMID:29029086
Levitt, Steven D.; List, John A.; Neckermann, Susanne; Nelson, David
2016-01-01
We report on a natural field experiment on quantity discounts involving more than 14 million consumers. Implementing price reductions ranging from 9–70% for large purchases, we found remarkably little impact on revenue, either positively or negatively. There was virtually no increase in the quantity of customers making a purchase; all the observed changes occurred for customers who already were buyers. We found evidence that infrequent purchasers are more responsive to discounts than frequent purchasers. There was some evidence of habit formation when prices returned to pre-experiment levels. There also was some evidence that consumers contemplating small purchases are discouraged by the presence of extreme quantity discounts for large purchases. PMID:27382146
Mutagens and carcinogens in foods. Epidemiologic review.
Hislop, T. G.
1993-01-01
Evidence that diet contributes to the development of cancer is strengthening. This paper examines mutagens and carcinogens, such as naturally occurring substances, products of cooking and food processing, intentional and unintentional additives, and contaminants, found in foods. Such substances are present in minute quantities in the diets of average Canadians. Indication of health risk is largely limited to experimental laboratory evidence. PMID:8499796
Szilard, L.
1963-09-10
A breeder reactor is described, including a mass of fissionable material that is less than critical with respect to unmoderated neutrons and greater than critical with respect to neutrons of average energies substantially greater than thermal, a coolant selected from sodium or sodium--potassium alloys, a control liquid selected from lead or lead--bismuth alloys, and means for varying the quantity of control liquid in the reactor. (AEC)
Middle Atmosphere Program. Handbook for MAP, Volume 5
NASA Technical Reports Server (NTRS)
Sechrist, C. F., Jr. (Editor)
1982-01-01
The variability of the stratosphere during the winter in the Northern Hemisphere is considered. Long term monthly mean 30-mbar maps are presented that include geopotential heights, temperatures, and standard deviations of 15 year averages. Latitudinal profiles of mean zonal winds and temperatures are given along with meridional time sections of derived quantities for the winters 1965/66 to 1980/81.
ERIC Educational Resources Information Center
Ayalon, Michal; Watson, Anne; Lerman, Steve
2016-01-01
Identifying and expressing relations between quantities is a key aspect of understanding and using functions. We aim to understand the development of students' understanding of functions throughout the school years in Israel. A survey instrument was developed with teachers and given to 20 high- and average-achieving students from each of years 7-11 and to 10…
Multi-window PIV measurements around a breathing manikin
NASA Astrophysics Data System (ADS)
Marr, David
2005-11-01
The presented work includes multi-scale measurements via a stereoscopic Particle Image Velocimetry (PIV) system that views a pair of two-component windows of dissimilar scale using varied focal lengths. These measurements are taken in the breathing zone of an isothermal breathing manikin (exhaling from the mouth) in an environmental chamber of average office-cubicle dimensions without ventilation; the exhalation is analogous to an oscillatory jet. From these phase-averaged measurements we extract information on length scales, turbulence quantities and low-dimensional structure, both to determine correlations between data at different length scales and to continue research in exposure assessment for the indoor environment. In this talk we present these turbulence quantities and interpret their influence on the breathing zone. While the largest scale is that of the room itself, we find that the relevant spatial scales associated with the breathing zone are much smaller in magnitude. In future experiments, we will expand the multi-window PIV technique to include a PIV window configured to capture scales of the order of the cubicle simultaneously with those of the breathing zone. This will aid our understanding of the combined impact of these multiple scales on occupant exposure in the indoor environment.
NASA Astrophysics Data System (ADS)
Adzima, Ashley; Tireman, William; C-Gen Collaboration
The electric form factor is an important quantity for furthering the understanding of the atom and its constituent parts. The C-GEN collaboration at Jefferson National Laboratory plans to measure this fundamental quantity using recoil polarimetry. An efficient neutron polarimeter is essential for the collection of precise data, which requires maximizing the ratio of elastic to inelastic events identified. The ratio of elastic to inelastic neutron events was simulated using GEANT-4 for 5 cm, 10 cm, and 15 cm thick detectors. Specific requirements were set by C-GEN to determine what marks an elastic event. Plots of neutron scattering events versus detector thickness were analyzed, and the ratio of elastic to inelastic events was extracted for each section per vertical slice, as well as an average ratio. The average ratios of elastic to inelastic events were 0.2206, 0.1706, and 0.1507 for the 5 cm, 10 cm, and 15 cm detectors, respectively. The impact of these ratios on the statistics and cost of altering the polarimeter's original 10 cm detector design will be discussed further. U.S. Department of Education - TRIO McNair Scholars Program.
Lv, Yipeng; Tang, Bihan; Liu, Xu; Xue, Chen; Liu, Yuan; Kang, Peng; Zhang, Lulu
2015-01-01
In this study, we aimed to compare the quantity and quality of publications in health care sciences and services journals from the Chinese mainland, Taiwan, Japan, and India. Journals in this category of the Science Citation Index Expanded were included in the study. Scientific papers were retrieved from the Web of Science online database. Quality was measured according to impact factor, citation of articles, number of articles published in top 10 journals, and the 10 most popular journals by country (area). In the field of health care sciences and services, the annual incremental rates of scientific articles published from 2007 to 2014 were higher than rates of published scientific articles in all fields. Researchers from the Chinese mainland published the most original articles and reviews and had the highest accumulated impact factors, highest total article citations, and highest average citation. Publications from India had the highest average impact factor. In the field of health care sciences and services, China has made remarkable progress during the past eight years in the annual number and percentage of scientific publications. Yet, there is room for improvement in the quantity and quality of such articles. PMID:26712774
Applications of Ergodic Theory to Coverage Analysis
NASA Technical Reports Server (NTRS)
Lo, Martin W.
2003-01-01
The study of differential equations, or dynamical systems in general, has two fundamentally different approaches. We are most familiar with the construction of solutions to differential equations. Another approach is to study the statistical behavior of the solutions. Ergodic Theory is one of the most developed methods for studying the statistical behavior of the solutions of differential equations. In the theory of satellite orbits, the statistical behavior of the orbits is used to produce 'Coverage Analysis', or how often a spacecraft is in view of a site on the ground. In this paper, we consider the use of Ergodic Theory for Coverage Analysis. This allows us to greatly simplify the computation of quantities such as the total time for which a ground station can see a satellite, without ever integrating the trajectory; see Lo [1,2]. Moreover, for any quantity which is an integrable function of the ground track, its average may be computed similarly without integration of the trajectory. For example, the data rate for a simple telecom system is a function of the distance between the satellite and the ground station. We show that such a function may be averaged using the Ergodic Theorem.
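The time-average-equals-space-average idea in this abstract can be demonstrated with a toy model. In the sketch below (a hypothetical geometry, not Lo's actual formulation), the sub-satellite longitude advances by an irrational rotation each step and therefore equidistributes over the circle, so the long-run fraction of time a station sees the satellite matches the fraction of the circle inside the visibility arc, with no bookkeeping of in-view intervals along the trajectory.

```python
import math

# Toy illustration of the ergodic idea above: an irrational rotation of the
# sub-satellite longitude equidistributes over [0, 360), so the time average
# of "in view" equals the spatial measure of the visibility arc.
# The geometry and numbers are hypothetical.
half_width = 20.0                          # visibility half-arc in longitude (deg)
step = 360.0 * (math.sqrt(2) - 1.0)        # irrational rotation => equidistribution
lon, in_view, steps = 0.0, 0, 200_000
for _ in range(steps):
    lon = (lon + step) % 360.0
    if min(lon, 360.0 - lon) < half_width:  # station centred at longitude 0
        in_view += 1
time_average = in_view / steps             # fraction of time in view
space_average = 2.0 * half_width / 360.0   # ergodic (spatial) prediction
```

The two averages agree to well under a percent, which is the point of the method: the spatial average is available in closed form while the time average would otherwise require propagating the orbit.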
NASA Astrophysics Data System (ADS)
Cannon, Bradford E.; Smith, Charles W.; Isenberg, Philip A.; Vasquez, Bernard J.; Murphy, Neil; Nuno, Raquel G.
2014-04-01
We have examined Ulysses magnetic field data using dynamic spectrogram techniques that compute wave amplitude, polarization, and direction of propagation over a broad range of frequencies and times. Events were identified that showed a strong polarization signature and an enhancement of power above the local proton gyrofrequency. We perform a statistical study of 502 wave events in an effort to determine when, where, and why they are observed. Most notably, we find that waves arising from newborn interstellar pickup ions are relatively rare and difficult to find. The quantities normally employed in theories of wave growth are the neutral atom density and quantities related to its ionization and the subsequent dynamics, such as wind speed, solar wind flux, and magnetic field orientation. We find the wave observations to be largely uncorrelated with these quantities, except for the mean field direction, where quasi-radial magnetic fields are favored, and the solar wind proton flux, where wave observations appear to be favored by low-flux conditions; the latter runs contrary to theoretical expectations of wave generation. It would appear that an explanation based on source physics and instability growth rates alone is not adequate to account for the times when these waves are seen.
Testing Photoionization Calculations Using Chandra X-ray Spectra
NASA Technical Reports Server (NTRS)
Kallman, Tim
2008-01-01
A great deal of work has been devoted to the accumulation of accurate quantities describing atomic processes for use in the analysis of astrophysical spectra. But in many situations of interest the interpretation of an observed quantity, such as a line flux, depends on the results of a modeling or spectrum-synthesis code. The results of such a code depend in turn on many atomic rates or cross sections, and the sensitivity of the observable quantity to the various rates and cross sections may be non-linear, in which case it cannot easily be derived analytically. In such cases the most practical approach to understanding the sensitivity of observables to atomic cross sections is to perform numerical experiments, calculating models with various rates perturbed by random (but known) factors. In addition, it is useful to compare the results of such experiments with some sample observations, in order to focus attention on the rates which are of the greatest relevance to real observations. In this paper I will present some attempts to carry out this program, focusing on two sample datasets taken with the Chandra HETG. I will discuss the sensitivity of synthetic spectra to atomic data affecting the ionization balance, temperature, and line opacity or emissivity, and discuss the implications for the ultimate goal of inferring astrophysical parameters.
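The numerical experiment described in this abstract, perturbing rates by random but known factors and watching the response of an observable, can be sketched in a few lines. The "model" below is a deliberately hypothetical two-state ionization balance, not a real spectrum-synthesis code; the baseline rates and the ±30% perturbation range are illustrative.

```python
import random

# Minimal numerical experiment of the kind described above: perturb each
# atomic rate by a known random factor and record the response of a derived
# observable. The "model" is a hypothetical two-state ionization balance:
# ion fraction x = I / (I + R), where I and R are ionization and
# recombination rates.
def ion_fraction(ion_rate, rec_rate):
    return ion_rate / (ion_rate + rec_rate)

random.seed(0)
base_ion, base_rec = 1.0e-8, 3.0e-8        # hypothetical baseline rates
results = []
for _ in range(1000):
    f_ion = 1.0 + random.uniform(-0.3, 0.3)   # known +/-30% perturbations
    f_rec = 1.0 + random.uniform(-0.3, 0.3)
    results.append((f_ion, f_rec,
                    ion_fraction(base_ion * f_ion, base_rec * f_rec)))

# The spread of the observable across runs measures its sensitivity to the
# perturbed rates; regressing on (f_ion, f_rec) would rank their influence.
spread = max(x for _, _, x in results) - min(x for _, _, x in results)
```

Because the perturbation factors are recorded alongside each output, the non-linear sensitivity can be mapped empirically even when no analytic derivative is available.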
Sensitivity Analysis Applied to Atomic Data Used for X-ray Spectrum Synthesis
NASA Technical Reports Server (NTRS)
Kallman, Tim
2006-01-01
A great deal of work has been devoted to the accumulation of accurate quantities describing atomic processes for use in the analysis of astrophysical spectra. But in many situations of interest the interpretation of an observed quantity, such as a line flux, depends on the results of a modeling or spectrum-synthesis code. The results of such a code depend in turn on many atomic rates or cross sections, and the sensitivity of the observable quantity to the various rates and cross sections may be non-linear, in which case it cannot easily be derived analytically. In such cases the most practical approach to understanding the sensitivity of observables to atomic cross sections is to perform numerical experiments, calculating models with various rates perturbed by random (but known) factors. In addition, it is useful to compare the results of such experiments with some sample observations, in order to focus attention on the rates which are of the greatest relevance to real observations. In this paper I will present some attempts to carry out this program, focusing on two sample datasets taken with the Chandra HETG. I will discuss the sensitivity of synthetic spectra to atomic data affecting the ionization balance, temperature, and line opacity or emissivity, and discuss the implications for the ultimate goal of inferring astrophysical parameters.
NASA Astrophysics Data System (ADS)
Guidi, Giovanni; Scannapieco, Cecilia; Walcher, C. Jakob
2015-12-01
We study the sources of biases and systematics in the derivation of galaxy properties from observational studies, focusing on stellar masses, star formation rates, gas and stellar metallicities, stellar ages, magnitudes and colours. We use hydrodynamical cosmological simulations of galaxy formation, for which the true quantities are known, and apply observational techniques to derive the observables. We also analyse biases that are relevant for a proper comparison between simulations and observations. For our study, we post-process the simulation outputs to calculate the galaxies' spectral energy distributions (SEDs) using stellar population synthesis models, and also generate fully consistent far-UV to submillimetre SEDs with the radiative transfer code SUNRISE. We compare the direct results of the simulations with the observationally derived quantities obtained in various ways, and find that systematic differences appear in all studied galaxy properties, caused by: (1) purely observational biases; (2) the use of mass-weighted and luminosity-weighted quantities, with preferential sampling of more massive and luminous regions; (3) the different ways of constructing the template of models when a fit to the spectra is performed; and (4) variations due to different calibrations, most notably for gas metallicities and star formation rates. Our results show that large differences can appear depending on the technique used to derive galaxy properties. Understanding these differences is of primary importance both for simulators, to allow a better judgement of similarities and differences with observations, and for observers, to allow a proper interpretation of the data.
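Bias (2) in this abstract, the difference between mass-weighted and luminosity-weighted quantities, is easy to see in a toy example. The sketch below uses two hypothetical stellar populations; because young stars are far more luminous per unit mass, the luminosity-weighted mean age is pulled strongly toward the young population even when it contributes little mass.

```python
# Illustration of the mass- vs luminosity-weighting bias described above.
# A toy galaxy with two populations; all numbers are hypothetical.
ages = [10.0, 1.0]      # population ages in Gyr (old, young)
masses = [9.0, 1.0]     # mass weights: the young population is 10% of the mass
lums = [1.0, 5.0]       # luminosity weights: young stars outshine their mass

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

age_mass_weighted = weighted_mean(ages, masses)   # dominated by old stars
age_lum_weighted = weighted_mean(ages, lums)      # biased toward young stars
```

Here the mass-weighted age is 9.1 Gyr while the luminosity-weighted age is 2.5 Gyr, which is why a spectrum-derived (luminosity-weighted) age and a simulation's true (mass-weighted) age cannot be compared directly.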
Jagals, P
2006-01-01
The concept of safe water is defined by three principles: the health-related quality must be suitable, the supply/source must be accessible, and the water must be constantly available in quantities sufficient for the intended use. If any one (or more) of these three elements is missing from a water-services improvement programme, safe water has not been successfully provided. A study in a deep rural area of South Africa showed that providing small communities, previously using untreated river water as their only source, with good-quality water through a piped distribution system accessible at communal taps did not satisfy our parameters of safe water. The parameters for measuring the three principles were: absence of Escherichia coli in drinking-water samples; accessibility, by reducing tap distances to within 200 m of each household; and availability, by assessing whether households have at least 25 L per person per day. Results show that although E. coli levels were reduced significantly, households were still consuming water with E. coli numbers at non-compliant levels. Access was improved from an average distance of 750 m between households and the river source to an average of 120 m to the new on-tap source points. This did not result in significant increases in household quantities, which on average remained around 18 L per person per day.
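The three criteria in this abstract combine into a single conjunctive check: all must hold for water to count as safe. A minimal sketch, using the thresholds stated in the study (zero E. coli, taps within 200 m, at least 25 L per person per day):

```python
# The three safe-water principles above as one compliance check; thresholds
# are those stated in the study.
def is_safe_water(e_coli_per_sample, distance_m, litres_per_person_day):
    quality = e_coli_per_sample == 0
    access = distance_m <= 200
    availability = litres_per_person_day >= 25
    return quality and access and availability

# The study's endpoint: even with access improved to 120 m, availability
# stayed near 18 L per person per day, so the overall criterion still fails.
after_intervention = is_safe_water(0, 120, 18)
```

The conjunction is the study's point: improving one or two criteria (here, access) is not sufficient for the water to qualify as safe.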
NASA Astrophysics Data System (ADS)
Takano, Yukinori; Hirata, Akimasa; Fujiwara, Osamu
Human exposure to electric and/or magnetic fields at low frequencies may cause direct effects such as nerve stimulation and excitation. Therefore, the basic restriction is specified in terms of induced current density in the ICNIRP guidelines and of in-situ electric field in the IEEE standard. An external electric or magnetic field which does not produce induced quantities exceeding the basic restriction is used as a reference level. The relationship between the basic restriction and the reference level for low-frequency electric and magnetic fields has been investigated using European anatomic models, but only to a limited extent for Japanese models, especially for electric-field exposures. In addition, that relationship has not been well discussed. In the present study, we calculated the induced quantities in anatomic Japanese male and female models exposed to electric and magnetic fields at the reference level. A quasi-static finite-difference time-domain (FDTD) method was applied to analyze this problem. As a result, the spatially averaged induced current density was found to be more sensitive to the averaging algorithm than the in-situ electric field. For electric and magnetic field exposure at the ICNIRP reference level, the maximum values of the induced current density for the different averaging algorithms were smaller than the basic restriction in most cases. For exposures at the reference level in the IEEE standard, the maximum electric fields in the brain were larger than the basic restriction for the brain, while those for the spinal cord and heart were smaller.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishimoto, M.; Meng, C.; Romick, G.J.
The UV spectra over the southern hemisphere nightside auroral oval have been obtained from an AFGL spectral/photometric experiment on board the low-altitude polar-orbiting S3-4 satellite. A detailed analysis of nightside auroral spectra from seven orbits between mid-May and June 1978 was performed to estimate the average energy and total energy flux of incident electrons. This study was based on observations of the N{sub 2} LBH (3-10) (1928 A) band and the N{sub 2} VK (0-5) (2604 A) band emission intensities and the application of model calculations. Comparison of the estimated quantities with the statistical satellite measurements of incident particles indicates that the LBH (3-10) band emission intensity can be used to estimate the total energy flux of incident electrons, similar to the N{sub 2}(+) 1N (0-0) (3914 A) band emission intensity in the visible region. In addition, the ratio of the LBH (3-10) to the VK (0-5) band emission intensities indicates the average energy of incident auroral electrons in much the same way that the ratio of the N{sub 2}(+) 1N (0-0) and O I (6300 A) emissions does in the visible region. This study demonstrates the use of different constituent emissions, model calculations, and synthetic spectra to illustrate the inherent possibilities in these types of studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishimoto, M.; Meng, C.; Romick, G.J.
The UV spectra over the southern hemisphere nightside auroral oval have been obtained from an AFGL spectral/photometric experiment on board the low-altitude polar-orbiting S3-4 satellite. A detailed analysis of nightside auroral spectra from seven orbits between mid-May and June 1978 was performed to estimate the average energy and total energy flux of incident electrons. This study was based on observations of the N/sub 2/ LBH (3-10) (1928 A) band and the N/sub 2/ VK (0-5) (2604 A) band emission intensities and the application of model calculations by Strickland et al. (1983) and Daniell and Strickland (1986). Comparison of the estimated quantities with the statistical satellite measurements of incident particles by Hardy et al. (1985) indicates that the LBH (3-10) band emission intensity can be used to estimate the total energy flux of incident electrons, similar to the N/sub 2//sup +/ 1N (0-0) (3914 A) band emission intensity in the visible region. In addition, the ratio of the LBH (3-10) to the VK (0-5) band emission intensities indicates the average energy of incident auroral electrons in much the same way that the ratio of the N/sub 2//sup +/ 1N (0-0) and O I (6300 A) emissions does in the visible region.
A Numerical Study of the Effect of Wake Passing on Turbine Blade Film Cooling
NASA Technical Reports Server (NTRS)
Heidmann, James D.
1995-01-01
Time-accurate and steady three-dimensional viscous turbulent numerical simulations were performed to study the effect of upstream blade wake passing unsteadiness on the performance of film cooling on a downstream axial turbine blade. The simulations modeled the blade as spanwise periodic and of infinite span. Both aerodynamic and heat transfer quantities were explored. A showerhead film cooling arrangement typical of modern gas turbine engines was employed. Showerhead cooling was studied because of its anticipated strong sensitivity to upstream flow fluctuations. The wake was modeled as a region of zero axial velocity on the upstream computational boundary which translated with each iteration. This model is compatible with a planned companion experiment in which the wakes will be produced by a rotating row of cylindrical rods upstream of an annular turbine cascade. It was determined that a steady solution with appropriate upstream swirl and stagnation pressure predicted the span-average film effectiveness quite well. The major difference is a 2 to 3 percent overprediction of span-average film effectiveness by the steady simulation on the pressure surface and in the showerhead region. Local overpredictions of up to 8 percent were observed in the showerhead region. These differences can be explained by the periodic relative lifting of the boundary layer and enhanced mixing in the unsteady simulations.
[Navigation in implantology: Accuracy assessment regarding the literature].
Barrak, Ibrahim Ádám; Varga, Endre; Piffko, József
2016-06-01
Our objective was to assess the literature regarding the accuracy of the different static guided systems. An electronic literature search yielded 661 articles. After reviewing 139 articles, the authors chose 52 articles for full-text evaluation; 24 studies involved accuracy measurements. Fourteen of the selected references were clinical and ten were in vitro (model or cadaver). Variance analysis (Tukey's post-hoc test; p < 0.05) was conducted to summarize the selected publications. Across 2819 results, the average mean error at the entry point was 0.98 mm. At the level of the apex the average deviation was 1.29 mm, while the mean angular deviation was 3.96 degrees. A significant difference could be observed between the two methods of implant placement (partially and fully guided sequences) in terms of deviation at the entry point, at the apex, and in angular deviation. Different levels of quality and quantity of evidence were available for assessing the accuracy of the different computer-assisted implant placement systems. The rapidly evolving field of digital dentistry and new developments will further improve the accuracy of guided implant placement. To be able to draw dependable conclusions and to further evaluate the parameters used for accuracy measurements, randomized, controlled single- or multi-centre clinical trials are necessary.
Plasmaspheric H+, He+, O+, He++, and O++ Densities and Temperatures
NASA Technical Reports Server (NTRS)
Gallagher, D. L.; Craven, P. D.; Comfort H.
2013-01-01
Thermal plasmaspheric densities and temperatures for five ion species have recently become available, even though these quantities were derived some time ago from the Retarding Ion Mass Spectrometer onboard the Dynamics Explorer 1 satellite over the years 1981-1984. The quantitative properties will be presented. Densities are found to have one behavior, with lesser statistical variation, below about L=2 and another, with much greater variability, above that L-shell. Temperatures also behave differently between low and higher L-values. The density ratio He++/H+ is the best behaved, with values of about 0.2% that slightly increase with increasing L. Unlike the He+/H+ density ratio, which on average decreases with increasing L-value, the O+/H+ and O++/H+ density ratios have decreasing values below about L=2 and increasing average ratios at higher L-values. Hydrogen ion temperatures range from about 0.2 eV to several tens of eV for a few measurements, although the bulk of the observations are of temperatures below 3 eV, again increasing with L-value. The temperature ratios of He+/H+ are tightly ordered around 1.0 except in the middle plasmasphere between L=3.5 and 4.5, where He+ temperatures can be significantly higher. The temperatures of He++, O+, and O++ are consistently higher than those of H+.
NASA Technical Reports Server (NTRS)
Brown, Robert B.; Klaus, D.; Todd, P.
2002-01-01
Cultures of Escherichia coli grown in space reached a 25% higher average final cell population than those in comparably matched ground controls (p<0.05). However, both groups consumed the same quantity of glucose, which suggests that space flight not only stimulated bacterial growth as has been previously reported, but also resulted in a 25% more efficient utilization of the available nutrients. Supporting experiments performed in "simulated weightlessness" under clinorotation produced similar trends of increased growth and efficiency, but to a lesser extent in absolute values. These experiments resulted in increases of 12% and 9% in average final cell population (p<0.05), while the efficiency of substrate utilization improved by 6% and 9% relative to static controls (p=0.12 and p<0.05, respectively). In contrast, hypergravity, produced by centrifugation, predictably resulted in the opposite effect--a decrease of 33% to 40% in final cell numbers with corresponding 29% to 40% lower net growth efficiencies (p<0.01). Collectively, these findings support the hypothesis that the increased bacterial growth observed in weightlessness is a result of reduced extracellular mass transport that occurs in the absence of sedimentation and buoyancy-driven convection, which consequently also improves substrate utilization efficiency in suspended cultures.
NASA Astrophysics Data System (ADS)
Hartland, Tucker A.; Schilling, Oleg
2016-11-01
Analytical self-similar solutions corresponding to Rayleigh-Taylor, Richtmyer-Meshkov and Kelvin-Helmholtz instabilities are combined with observed values of the growth parameters in these instabilities to derive coefficient sets for K-ε and K-L-a Reynolds-averaged turbulence models. It is shown that full numerical solutions of the model equations give mixing-layer widths, fields, and budgets in good agreement with the corresponding self-similar quantities at small Atwood number. Both models are then applied to Rayleigh-Taylor instability with increasing density contrasts to estimate the Atwood number above which the self-similar solutions become invalid. The models are also applied to a reshocked Richtmyer-Meshkov instability, and the predictions are compared with data. The expressions for the growth parameters obtained from the similarity analysis are used to develop estimates of the sensitivity of their values to changes in important model coefficients. Numerical simulations using these modified coefficient values are then performed to provide bounds on the model predictions associated with uncertainties in these coefficient values. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. This work was supported by the 2016 LLNL High-Energy-Density Physics Summer Student Program.
A multiscale strength model for tantalum over an extended range of strain rates
NASA Astrophysics Data System (ADS)
Barton, N. R.; Rhee, M.
2013-09-01
A strength model for tantalum is developed and exercised across a range of conditions relevant to various types of experimental observations. The model is based on previous multiscale modeling work combined with experimental observations. As such, the model's parameterization includes a hybrid of quantities that arise directly from predictive sub-scale physics models and quantities that are adjusted to align the model with experimental observations. Given current computing and experimental limitations, the response regions for sub-scale physics simulations and detailed experimental observations have been largely disjoint. In formulating the new model and presenting results here, attention is paid to integrated experimental observations that probe strength response at the elevated strain rates where a previous version of the model has generally been successful in predicting experimental data [Barton et al., J. Appl. Phys. 109(7), 073501 (2011)].
Recent changes and drivers of the atmospheric evaporative demand in the Canary Islands
NASA Astrophysics Data System (ADS)
Vicente-Serrano, Sergio M.; Azorin-Molina, Cesar; Sanchez-Lorenzo, Arturo; El Kenawy, Ahmed; Martín-Hernández, Natalia; Peña-Gallardo, Marina; Beguería, Santiago; Tomas-Burguera, Miquel
2016-08-01
We analysed the recent evolution and meteorological drivers of the atmospheric evaporative demand (AED) in the Canary Islands for the period 1961-2013. We employed long and high-quality time series of meteorological variables to analyse current AED changes in this region and found that AED increased during the investigated period. Overall, the annual ETo, estimated by means of the FAO-56 Penman-Monteith equation, increased significantly by 18.2 mm decade⁻¹ on average, with the strongest seasonal trend in summer (6.7 mm decade⁻¹). In this study we analysed the contribution of (i) the aerodynamic component (related to the water vapour that a parcel of air can store) and (ii) the radiative component (related to the energy available to evaporate a quantity of water) to the decadal variability and trends of ETo. More than 90 % of the observed ETo variability at the seasonal and annual scales can be associated with variability in the aerodynamic component. The variable that recorded the most significant changes in the Canary Islands was relative humidity, and among the different meteorological factors used to calculate ETo, relative humidity was the main driver of the observed ETo trends. The observed trend could have negative consequences for a number of water-dependent sectors if it continues in the future.
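The FAO-56 Penman-Monteith reference evapotranspiration used above can be sketched as a minimal daily implementation. The input values below are illustrative, not data from the study:

```python
import math

def fao56_eto(t_mean, rn, u2, rh_mean, g=0.0, pressure=101.3):
    """Daily FAO-56 Penman-Monteith reference evapotranspiration (mm day^-1).

    t_mean: mean air temperature (deg C), rn: net radiation (MJ m-2 day-1),
    u2: wind speed at 2 m (m s-1), rh_mean: mean relative humidity (%),
    g: soil heat flux (MJ m-2 day-1), pressure: air pressure (kPa).
    """
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))  # saturation vapour pressure (kPa)
    ea = es * rh_mean / 100.0                                  # actual vapour pressure (kPa)
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                # slope of vapour pressure curve
    gamma = 0.000665 * pressure                                # psychrometric constant (kPa degC-1)
    num = 0.408 * delta * (rn - g) + gamma * (900.0 / (t_mean + 273.0)) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

# A warm, moderately dry, moderately windy day:
print(round(fao56_eto(t_mean=25.0, rn=15.0, u2=2.0, rh_mean=50.0), 2))  # ~5.96 mm/day
```

The aerodynamic contribution discussed in the abstract enters through the `u2 * (es - ea)` term, which is why trends in relative humidity (via `ea`) dominate the ETo trend.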
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonin, Timothy A.; Newman, Jennifer F.; Klein, Petra M.
Since turbulence measurements from Doppler lidars are being increasingly used within wind energy and boundary-layer meteorology, it is important to assess and improve the accuracy of these observations. While turbulent quantities are measured by Doppler lidars in several different ways, the simplest and most frequently used statistic is the vertical velocity variance (w′²) from zenith stares. However, the competing effects of signal noise and resolution volume limitations, which respectively increase and decrease w′², reduce the accuracy of these measurements. Herein, an established method that utilises the autocovariance of the signal to remove noise is evaluated, and its skill in correcting for volume-averaging effects in the calculation of w′² is also assessed. Additionally, this autocovariance technique is further refined by defining the amount of lag time to use for the most accurate estimates of w′². Through comparison of observations from two Doppler lidars and sonic anemometers on a 300 m tower, the autocovariance technique is shown to generally improve estimates of w′². After the autocovariance technique is applied, values of w′² from the Doppler lidars are generally in close agreement (R² ≈ 0.95-0.98) with those calculated from sonic anemometer measurements.
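The core of the autocovariance noise-removal idea is that uncorrelated instrument noise contributes to the autocovariance function only at lag 0, so extrapolating the autocovariance from nonzero lags back to lag 0 recovers a noise-free variance estimate. The sketch below uses a simple linear extrapolation for illustration; published lidar work typically fits an inertial-subrange power-law form instead:

```python
import numpy as np

def noise_corrected_variance(w, max_lag=5):
    """Estimate the noise-free variance of w by extrapolating the autocovariance
    at lags 1..max_lag back to lag 0 (white noise only affects lag 0)."""
    w = np.asarray(w, dtype=float) - np.mean(w)
    n = len(w)
    lags = np.arange(1, max_lag + 1)
    acov = np.array([np.dot(w[:-k], w[k:]) / n for k in lags])
    slope, intercept = np.polyfit(lags, acov, 1)  # linear fit vs. lag
    return intercept                              # extrapolated autocovariance at lag 0

rng = np.random.default_rng(0)
n = 200_000
# Correlated "atmospheric" signal (AR(1)) plus uncorrelated instrument noise.
e = rng.standard_normal(n)
sig = np.empty(n)
sig[0] = e[0]
for i in range(1, n):
    sig[i] = 0.99 * sig[i - 1] + e[i]
noisy = sig + rng.standard_normal(n)   # added white noise with unit variance

raw = np.var(noisy)                    # biased high by the noise variance
corrected = noise_corrected_variance(noisy)
print(raw - np.var(sig), corrected - np.var(sig))  # corrected error is much smaller
```

Choosing `max_lag` is exactly the lag-time refinement the abstract mentions: too few lags makes the fit noisy, too many reaches beyond the near-linear part of the autocovariance.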
Improved observations of turbulence dissipation rates from wind profiling radars
McCaffrey, Katherine; Bianco, Laura; Wilczak, James M.
2017-07-20
Observations of turbulence dissipation rates in the planetary boundary layer are crucial for validation of parameterizations in numerical weather prediction models. However, because dissipation rates are difficult to obtain, they are infrequently measured through the depth of the boundary layer. For this reason, demonstrating the ability of commonly used wind profiling radars (WPRs) to estimate this quantity would be greatly beneficial. During the XPIA field campaign at the Boulder Atmospheric Observatory, two WPRs operated in an optimized configuration, using high spectral resolution for increased accuracy of Doppler spectral width, specifically chosen to estimate turbulence from a vertically pointing beam. Multiple post-processing techniques, including different numbers of spectral averages and peak processing algorithms for calculating spectral moments, were evaluated to determine the most accurate procedures for estimating turbulence dissipation rates from the information contained in the Doppler spectral width, using sonic anemometers mounted on a 300 m tower for validation. Furthermore, the optimal settings were determined, producing a low bias, which was later corrected. The resulting estimates of turbulence dissipation rates correlated well (R² = 0.54 and 0.41) with the sonic anemometers, and profiles up to 2 km from the 449 MHz WPR and 1 km from the 915 MHz WPR were observed.
Bonin, Timothy A.; Newman, Jennifer F.; Klein, Petra M.; ...
2016-12-06
Since turbulence measurements from Doppler lidars are being increasingly used within wind energy and boundary-layer meteorology, it is important to assess and improve the accuracy of these observations. While turbulent quantities are measured by Doppler lidars in several different ways, the simplest and most frequently used statistic is the vertical velocity variance (w′²) from zenith stares. However, the competing effects of signal noise and resolution volume limitations, which respectively increase and decrease w′², reduce the accuracy of these measurements. Herein, an established method that utilises the autocovariance of the signal to remove noise is evaluated, and its skill in correcting for volume-averaging effects in the calculation of w′² is also assessed. In addition, this autocovariance technique is further refined by defining the amount of lag time to use for the most accurate estimates of w′². Through comparison of observations from two Doppler lidars and sonic anemometers on a 300 m tower, the autocovariance technique is shown to generally improve estimates of w′². After the autocovariance technique is applied, values of w′² from the Doppler lidars are generally in close agreement (R² ≈ 0.95-0.98) with those calculated from sonic anemometer measurements.
Shock Interaction with Random Spherical Particle Beds
NASA Astrophysics Data System (ADS)
Neal, Chris; Mehta, Yash; Salari, Kambiz; Jackson, Thomas L.; Balachandar, S. "Bala"; Thakur, Siddharth
2016-11-01
In this talk we present results from fully resolved simulations of shock interaction with a randomly distributed bed of particles. Multiple simulations were carried out, varying the number of particles to isolate the effect of volume fraction. The major focus of these simulations was to understand 1) the effect of the shockwave and volume fraction on the forces experienced by the particles, 2) the effect of the particles on the shock wave, and 3) fluid-mediated particle-particle interactions. The peak drag force on particles at different volume fractions shows a downward trend as the depth of the bed increases, which can be attributed to dissipation of energy as the shockwave travels through the bed of particles. One of the fascinating observations from these simulations was the fluctuation in different quantities due to the presence of multiple particles and their random distribution. These are large simulations with hundreds of particles, resulting in a large amount of data. We present statistical analysis of the data and make relevant observations. The average pressure in the computational domain is computed to characterize the strengths of the reflected and transmitted waves. We also present flow field contour plots to support our observations. U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.
Investigation of cadmium pollution in the spruce saplings near the metal production factory.
Hashemi, Seyed Armin; Farajpour, Ghasem
2016-02-01
Toxic metals such as lead and cadmium are among the pollutants that are created by metal production factories and disseminated in nature. In order to study the quantity of cadmium pollution in the environment of metal production factories, 50 saplings of the spruce species at the peripheries of the factories were examined, and samples of the leaves, roots, and stems of saplings planted around the factory, along with the soil of the factory environment, were analysed for cadmium pollution. They were compared with the soil and spruce saplings planted outside the factory, which served as the control (observer) region. The results showed that the quantities of pollution in the leaves, stems, and roots of the trees planted inside the factory environment were estimated at 1.1, 1.5, and 2.5 mg/kg, respectively, a significant difference from the observer region (p < 0.05). The quantity of cadmium in the soil at the peripheries of the metal production factory was estimated at 6.8 mg/kg at a depth of 0-10 cm beneath the soil surface. Root length averaged 11 cm in the saplings planted around the factory versus 14.5 cm in the observer region, a significant difference (p < 0.05). The cadmium pollution of the soil resources and the spruce species in the region has been influenced by the production processes in the factory. © The Author(s) 2013.
Heishman, Aaron D.; Curtis, Michael A.; Saliba, Ethan N.; Hornett, Robert J.; Malin, Steven K.
2017-01-01
Abstract Heishman, AD, Curtis, MA, Saliba, EN, Hornett, RJ, Malin, SK, and Weltman, AL. Comparing performance during morning vs. afternoon training sessions in intercollegiate basketball players. J Strength Cond Res 31(6): 1557–1562, 2017—Time of day is a key factor that influences the optimization of athletic performance. Intercollegiate coaches oftentimes hold early morning strength training sessions for a variety of factors including convenience. However, few studies have specifically investigated the effect of early morning vs. late afternoon strength training on performance indices of fatigue. This is athletically important because circadian and/or ultradian rhythms and alterations in sleep patterns can affect training ability. Therefore, the purpose of the present study was to examine the effects of morning vs. afternoon strength training on an acute performance index of fatigue (countermovement jump height, CMJ), player readiness (Omegawave), and self-reported sleep quantity. We hypothesized that afternoon training sessions would be associated with increased levels of performance, readiness, and self-reported sleep. A retrospective analysis was performed on data collected over the course of the preseason on 10 elite National Collegiate Athletic Association Division 1 male basketball players. All basketball-related activities were performed in the afternoon, with strength and conditioning activities performed either in the morning or in the afternoon. The average values for CMJ, power output (Power), self-reported sleep quantity (sleep), and player readiness were examined. When player load and duration were matched, CMJ (58.8 ± 1.3 vs. 61.9 ± 1.6 cm, p = 0.009), Power (6,378.0 ± 131.2 vs. 6,622.1 ± 172.0 W, p = 0.009), and self-reported sleep duration (6.6 ± 0.4 vs. 7.4 ± 0.25, p = 0.016) were significantly higher with afternoon strength and conditioning training, with no differences observed in player readiness values.
We conclude that performance is suppressed with morning training and is associated with a decrease in self-reported quantity of sleep. PMID:28538305
Susong, David D.
1995-01-01
Ground-water recharge to basin-fill aquifers from unconsumed irrigation water in the western United States is being reduced as irrigators convert to more efficient irrigation systems. In some areas, these changes in irrigation methods may be contributing to ground-water-level declines and reducing the quantity of water available to downgradient users. The components of the water budget were measured or calculated for each field for the 1992 and 1993 irrigation seasons. Precipitation was about 6.5 cm (2.6 inches) both years. The flood-irrigated field received 182 and 156 centimeters (71.6 and 61.4 inches) of irrigation water in 1992 and 1993, and the sprinkler-irrigated field received 52.8 and 87.2 centimeters (20.8 and 34.3 inches) of water, respectively. Evapotranspiration for alfalfa was calculated using the Penman-Monteith combination equation and was 95.4 and 84.3 centimeters (37.2 and 33.2 inches) for 1992 and 1993, respectively. No runoff and no significant change in soil moisture in storage were observed for either field. Recharge to the aquifer from the flood-irrigated field was 93.3 and 78.1 centimeters (36.7 and 30.7 inches) in 1992 and 1993 and from the sprinkler-irrigated field was -35.9 and 9.3 centimeters (-14.1 and 3.7 inches), respectively. The daily water budget and soil-moisture profiles in the upper 6.4 meters (21 feet) of the unsaturated zone were simulated with an unsaturated flow model for average climate conditions. Simulated recharge was 57.4 and 50.5 percent of the quantity of irrigation water applied to the flood-irrigated field during 1992 and 1993, respectively, and was 8.7 and 13.8 percent of the quantity of irrigation water applied to the sprinkler-irrigated field.
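The field-scale water budget used above reduces to simple bookkeeping: recharge is what remains of precipitation plus irrigation after evapotranspiration, runoff, and any change in soil-moisture storage. A minimal sketch using the 1992 flood-irrigated values from the abstract (the small residual versus the reported 93.3 cm reflects rounding in the published numbers):

```python
def recharge_cm(precip, irrigation, et, runoff=0.0, d_storage=0.0):
    """Seasonal recharge (cm of water) from a field water budget:
    R = P + I - ET - RO - dS, with all terms in cm."""
    return precip + irrigation - et - runoff - d_storage

# 1992 flood-irrigated field (values from the abstract; runoff and
# storage change were observed to be negligible):
r = recharge_cm(precip=6.5, irrigation=182.0, et=95.4)
print(round(r, 1))  # ~93.1 cm, vs. the reported 93.3 cm
```

The same bookkeeping with the sprinkler-irrigated 1992 values yields a negative recharge, consistent with the reported -35.9 cm (i.e., the crop drew on stored soil water).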
Lee, Jane J; Pedley, Alison; Hoffmann, Udo; Massaro, Joseph M; Fox, Caroline S
2016-10-04
Subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) are associated with adverse cardiometabolic risk profiles. This study explored the degree to which changes in abdominal fat quantity and quality are associated with changes in cardiovascular disease (CVD) risk factors. Study participants (n = 1,106; 44.1% women; mean baseline age 45.1 years) were drawn from the Framingham Heart Study Third Generation cohort who participated in the computed tomography (CT) substudy Exams 1 and 2. Participants were followed for 6.1 years on average. Abdominal adipose tissue volume in cm³ and attenuation in Hounsfield units (HU) were determined by CT-acquired abdominal scans. The mean fat volume change was an increase of 602 cm³ for SAT and an increase of 703 cm³ for VAT; the mean fat attenuation change was a decrease of 5.5 HU for SAT and an increase of 0.07 HU for VAT. An increase in fat volume and decrease in fat attenuation were associated with adverse changes in CVD risk factors. An additional 500 cm³ increase in fat volume was associated with incident hypertension (odds ratio [OR]: 1.21 for SAT; OR: 1.30 for VAT), hypertriglyceridemia (OR: 1.15 for SAT; OR: 1.56 for VAT), and metabolic syndrome (OR: 1.43 for SAT; OR: 1.82 for VAT; all p < 0.05). Similar trends were observed for each additional 5 HU decrease in abdominal adipose tissue attenuation. Most associations remained significant even after further accounting for body mass index change, waist circumference change, or respective abdominal adipose tissue volumes. Increasing accumulation of fat quantity and decreasing fat attenuation are associated with worsening of CVD risk factors beyond the associations with generalized adiposity, central adiposity, or respective adipose tissue volumes. Published by Elsevier Inc.
Satellite-based high-resolution mapping of rainfall over southern Africa
NASA Astrophysics Data System (ADS)
Meyer, Hanna; Drönner, Johannes; Nauss, Thomas
2017-06-01
A spatially explicit mapping of rainfall is necessary for southern Africa for eco-climatological studies or nowcasting, but accurate estimates remain a challenging task. This study presents a method to estimate hourly rainfall based on data from the Meteosat Second Generation (MSG) Spinning Enhanced Visible and Infrared Imager (SEVIRI). Rainfall measurements from about 350 weather stations from 2010-2014 served as ground truth for calibration and validation. SEVIRI and weather station data were used to train neural networks that allowed the estimation of rainfall area and rainfall quantities over all times of the day. The results revealed that 60 % of recorded rainfall events were correctly classified by the model (probability of detection, POD). However, the false alarm ratio (FAR) was high (0.80), leading to a Heidke skill score (HSS) of 0.18. Hourly rainfall quantities were estimated with an average hourly correlation of ρ = 0.33 and a root mean square error (RMSE) of 0.72. The correlation increased with temporal aggregation to 0.52 (daily), 0.67 (weekly) and 0.71 (monthly). The main weakness was the overestimation of rainfall events. The model results were compared to the Integrated Multi-satellitE Retrievals for GPM (IMERG) of the Global Precipitation Measurement (GPM) mission. Despite being a comparably simple approach, the presented MSG-based rainfall retrieval outperformed GPM IMERG in terms of rainfall area detection: GPM IMERG had a considerably lower POD. The HSS was not significantly different compared to the MSG-based retrieval due to a lower FAR of GPM IMERG. There were no further significant differences between the MSG-based retrieval and GPM IMERG in terms of correlation with the observed rainfall quantities. The MSG-based retrieval, however, provides rainfall in a higher spatial resolution.
Though estimating rainfall from satellite data remains challenging, especially at high temporal resolutions, this study showed promising results towards improved spatio-temporal estimates of rainfall over southern Africa.
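The verification scores quoted above (POD, FAR, HSS) all derive from a 2x2 contingency table of predicted versus observed rain occurrence. A minimal sketch with illustrative counts (not the study's data) chosen to reproduce POD = 0.6 and FAR = 0.8:

```python
def skill_scores(hits, false_alarms, misses, correct_negatives):
    """POD, FAR and Heidke skill score from a 2x2 contingency table."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    pod = a / (a + c)                       # probability of detection
    far = b / (a + b)                       # false alarm ratio
    hss = 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return pod, far, hss

pod, far, hss = skill_scores(hits=30, false_alarms=120, misses=20,
                             correct_negatives=830)
print(round(pod, 2), round(far, 2), round(hss, 2))  # 0.6 0.8 0.24
```

Note how a high FAR drags the HSS toward zero even when the POD is respectable, which is exactly the pattern the abstract reports.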
Space-Time Data fusion for Remote Sensing Applications
NASA Technical Reports Server (NTRS)
Braverman, Amy; Nguyen, H.; Cressie, N.
2011-01-01
NASA has been collecting massive amounts of remote sensing data about Earth's systems for more than a decade. Missions are selected to be complementary in quantities measured, retrieval techniques, and sampling characteristics, so these datasets are highly synergistic. To fully exploit this, a rigorous methodology for combining data with heterogeneous sampling characteristics is required. For scientific purposes, the methodology must also provide quantitative measures of uncertainty that propagate input-data uncertainty appropriately. We view this as a statistical inference problem. The true but not directly observed quantities form a vector-valued field continuous in space and time. Our goal is to infer those true values or some function of them, and to provide uncertainty quantification for those inferences. We use a spatiotemporal statistical model that relates the unobserved quantities of interest at point level to the spatially aggregated, observed data. We describe and illustrate our method using CO2 data from two NASA data sets.
Is gross moist stability a useful quantity for studying the moisture mode theory?
NASA Astrophysics Data System (ADS)
Inoue, K.; Back, L. E.
2016-12-01
The idea that the Madden-Julian Oscillation (MJO) is a moisture mode is growing in acceptance. Along with the emergence of the moisture mode theory, a conceptual quantity called the gross moist stability (GMS) has gained increasing attention. However, the GMS is a vexing quantity because it can be interpreted in different ways, depending on the size of the spatial domain over which the GMS is computed and on the computation methodology. We present a few different illustrations of the GMS using satellite observations. We first show GMS variability as a phase transition on a phase plane that we refer to as the GMS plane. Second, we demonstrate that GMS variability shown as a time series, as presented in much of the past literature, is most likely not relevant to the moisture mode theory. In this talk, we present a protocol for moisture-mode-oriented GMS analyses with satellite observations.
Impact of fog processing on water soluble organic aerosols.
NASA Astrophysics Data System (ADS)
Tripathi, S. N.; Chakraborty, A.; Gupta, T.
2017-12-01
Fog is a natural meteorological phenomenon that occurs all around the world and contains a substantial quantity of liquid water. Fog is generally seen as a natural cleansing agent but can also form secondary organic aerosols (SOA) via aqueous processing of ambient organics. A few field studies have reported elevated O/C ratios and SOA mass during or after fog events. However, the mechanism behind aqueous SOA formation and its contribution to total organic aerosols (OA) still remains unclear. In this study we explored the impact of fog/aqueous processing on the characteristics of water soluble organic aerosols (WSOA), which to our knowledge has not been studied before. To assess this, both online (using HR-ToF-AMS) and offline (using a medium volume PM2.5 sampler and quartz filters) aerosol sampling were carried out at Kanpur, India from 15 December 2014 to 10 February 2015. Further, offline AMS analysis of the aqueous extracts of the collected filters was carried out to characterize the WSOA. Seventeen fog events occurred during the campaign, and high concentrations of OA (151 ± 68 µg/m³) and WSOA (47 ± 19 µg/m³) were observed. WSOA/OA ratios were similar during fog (0.36 ± 0.14) and non-fog (0.34 ± 0.15) periods. WSOA concentrations were also similar, though slightly higher, during foggy (49 ± 18 µg/m³) than non-foggy periods (46 ± 20 µg/m³), in spite of fog scavenging. However, WSOA was more oxidized during foggy periods (average O/C = 0.81) than non-foggy periods (average O/C = 0.70). Like WSOA, OA was also more oxidized during foggy periods (average O/C = 0.64) than non-foggy periods (average O/C = 0.53). During fog, WSOA to WIOA (water insoluble OA) ratios were higher (0.65 ± 0.16) than during non-foggy periods (0.56 ± 0.15). These observations clearly show that WSOA becomes more dominant and more processed during fog events, possibly due to the presence of fog droplets.
This study highlights that fog processing of soluble organics can affect the overall chemical characteristics of the entire aerosol population.
Uncertainties in Surface Layer Modeling
NASA Astrophysics Data System (ADS)
Pendergrass, W.
2015-12-01
A central problem for micrometeorologists has been relating air-surface exchange rates of momentum and heat to quantities that can be predicted with confidence. The flux-gradient profile developed through Monin-Obukhov Similarity Theory (MOST) provides an integration of the dimensionless wind shear expression ϕM(ζ) = (κz/u*) ∂U/∂z, where ϕM is an empirically derived function of the stability parameter ζ = z/L for stable and unstable atmospheric conditions. Such empirically derived expressions are far from universally accepted (Garratt, 1992, Table A5). Regardless of which form of these relationships might be used, their significance over any short period of time is questionable, since all of these relationships between fluxes and gradients apply to averages that might rarely occur. It is well accepted that the assumptions of stationarity and homogeneity do not reflect the true chaotic nature of the processes that control the variables considered in these relationships, with the net consequence that the levels of predictability theoretically attainable might never be realized in practice. This matter is of direct relevance to modern prognostic models, which construct forecasts by assuming the universal applicability of relationships among averages for the lower atmosphere, which rarely maintains an average state. Under a Cooperative Research and Development Agreement between NOAA and Duke Energy Generation, NOAA/ATDD conducted atmospheric boundary layer (ABL) research using Duke renewable energy sites as research testbeds. One aspect of this research has been the evaluation of legacy flux-gradient formulations (the ϕ functions; see Monin and Obukhov, 1954) for the exchange of heat and momentum. At the Duke Energy Ocotillo site, NOAA/ATDD installed sonic anemometers reporting wind and temperature fluctuations at 10 Hz at eight elevations. From these observations, ϕM and ϕH were derived from a two-year database of mean and turbulent wind and temperature observations.
From this extensive measurement database, using a methodology proposed by Kanenasu, Wesely and Hicks (1979), the overall dependence of ϕM and ϕH on ζ = z/L is characterized. Results indicate considerable scatter, with the familiar relationships, such as Paulson (1970), best describing the averages; however, it is the scatter that largely defines the attainable levels of predictability.
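The ϕM and ϕH functions being evaluated are commonly taken in the Businger-Dyer forms (as in Paulson, 1970). A sketch of one widely used parameterization follows; the coefficients 16 (unstable) and 5 (stable) are conventional literature choices, not values derived from the Ocotillo dataset:

```python
def phi_m(zeta):
    """Dimensionless wind shear, Businger-Dyer form (zeta = z/L)."""
    if zeta < 0:                        # unstable: enhanced mixing reduces shear
        return (1.0 - 16.0 * zeta) ** -0.25
    return 1.0 + 5.0 * zeta             # stable: suppressed mixing increases shear

def phi_h(zeta):
    """Dimensionless temperature gradient (zeta = z/L)."""
    if zeta < 0:
        return (1.0 - 16.0 * zeta) ** -0.5
    return 1.0 + 5.0 * zeta

# Neutral, stable, and unstable examples:
print(phi_m(0.0), phi_m(0.1), round(phi_m(-1.0), 3))
```

Comparing binned observed ϕM, ϕH values against curves like these is essentially the evaluation described in the abstract; the scatter about the curves is what limits predictability.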
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kos, L.; Tskhakaya, D. D.; Jelić, N.
2015-09-15
Recent decades have seen research into the conditions necessary for the formation of the monotonic potential shape in the sheath that appears at plasma boundaries such as walls, treated separately in the fluid and kinetic approximations. Although either of these approaches yields a formulation commonly known as the much-acclaimed Bohm criterion (BC), the respective results involve essentially different physical quantities that describe the ion gas behavior. In the fluid approach, such a quantity is clearly identified as the ion directional velocity. In the kinetic approach, the ion behavior is formulated via a quantity (the squared inverse velocity averaged over the ion distribution function) without any clear physical significance, which is, moreover, impractical. In the present paper, we try to explain this difference by deriving a condition called here the Unified Bohm Criterion, which combines an advanced fluid model with an upgraded explicit kinetic formula in a new form of the BC. By introducing a generalized polytropic coefficient function, the unified BC can be interpreted in a form that holds irrespective of whether the ions are described kinetically or in the fluid approximation.
Sandoval, S; Torres, A; Pawlowsky-Reusing, E; Riechel, M; Caradot, N
2013-01-01
The present study aims to explore the relationship between rainfall variables and water quality/quantity characteristics of combined sewer overflows (CSOs), by the use of multivariate statistical methods and online measurements at a principal CSO outlet in Berlin (Germany). Canonical correlation results showed that the maximum and average rainfall intensities are the most influential variables to describe CSO water quantity and pollutant loads whereas the duration of the rainfall event and the rain depth seem to be the most influential variables to describe CSO pollutant concentrations. The analysis of partial least squares (PLS) regression models confirms the findings of the canonical correlation and highlights three main influences of rainfall on CSO characteristics: (i) CSO water quantity characteristics are mainly influenced by the maximal rainfall intensities, (ii) CSO pollutant concentrations were found to be mostly associated with duration of the rainfall and (iii) pollutant loads seemed to be principally influenced by dry weather duration before the rainfall event. The prediction quality of PLS models is rather low (R² < 0.6) but results can be useful to explore qualitatively the influence of rainfall on CSO characteristics.
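The regression step described above can be illustrated with a minimal numpy stand-in. This sketch uses ordinary least squares rather than PLS (scikit-learn's PLSRegression would be the closer tool when rainfall predictors are collinear), and the variable names and synthetic data are hypothetical, not the Berlin monitoring data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
# Hypothetical rainfall predictors: max intensity, mean intensity, duration, depth.
X = rng.random((n, 4))
# Synthetic CSO overflow volume driven mostly by the intensity columns, plus noise,
# mimicking the finding that intensities best explain water quantity.
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.1 * rng.standard_normal(n)

Xd = np.column_stack([np.ones(n), X])        # add intercept column
coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ coef
r2 = 1.0 - resid.var() / y.var()             # coefficient of determination
print(np.round(coef[1:], 1), round(r2, 2))   # intensity coefficients dominate
```

In the study's setting the analogous diagnostic is which rainfall variables carry the largest weights for each CSO characteristic, with the overall R² indicating (here, unlike the synthetic case, rather low) predictive quality.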
Panchromatic spectral energy distributions of simulated galaxies: results at redshift z = 0
NASA Astrophysics Data System (ADS)
Goz, David; Monaco, Pierluigi; Granato, Gian Luigi; Murante, Giuseppe; Domínguez-Tenreiro, Rosa; Obreja, Aura; Annunziatella, Marianna; Tescari, Edoardo
2017-08-01
We present predictions of spectral energy distributions (SEDs), from the UV to the FIR, of simulated galaxies at z = 0. These were obtained by post-processing the results of an N-body+hydro simulation of a cosmological box of side 25 Mpc, which uses the Multi-Phase Particle Integrator (MUPPI) for star formation and stellar feedback, with the grasil-3d radiative transfer code that includes reprocessing of UV light by dust. Physical properties of our sample of ˜500 galaxies resemble observed ones, though with some tension at small and large stellar masses. Comparing predicted SEDs of simulated galaxies with different samples of local galaxies, we find that these resemble observed ones when normalized at 3.6 μm. A comparison with the Herschel Reference Survey shows that the average SEDs of galaxies, divided into bins of star formation rate (SFR), are reproduced in shape and absolute normalization to within a factor of ˜2, while average SEDs of galaxies divided into bins of stellar mass show tensions that are an effect of the difference between simulated and observed galaxies in the stellar mass-SFR plane. We use our sample to investigate the correlation of IR luminosity in Spitzer and Herschel bands with several galaxy properties. SFR is the quantity that best correlates with IR light up to 160 μm, while at longer wavelengths better correlations are found with molecular mass and, at 500 μm, with dust mass. However, using the position of the FIR peak as a proxy for cold dust temperature, we find that heating of cold dust is mostly determined by SFR, with stellar mass giving only a minor contribution. We finally show how our sample of simulated galaxies can be used as a guide to understand the physical properties and selection biases of observed samples.
Integrability versus Thermalizability in Isolated Quantum Systems
NASA Astrophysics Data System (ADS)
Olshanii, Maxim
2012-02-01
The purpose of this presentation is to assess the status of our understanding of the transition from integrability to thermalizability in isolated quantum systems. In Classical Mechanics, the boundary stripe between the two is relatively sharp: its integrability edge is marked by the appearance of finite Lyapunov exponents that further converge to a unique value when the ergodicity edge is reached. Classical ergodicity is a universal property: if a system is ergodic, then every observable attains its microcanonical value in the infinite-time average over the trajectory. On the contrary, in Quantum Mechanics, Lyapunov exponents are always zero. Furthermore, since quantum dynamics necessarily invokes coherent superpositions of eigenstates of different energy, projectors onto the eigenstates become more relevant; those in turn never thermalize. All of the above indicates that in quantum many-body systems, (a) the integrability-thermalizability transition is smooth, and (b) the degree of thermalizability is not absolute as in classical mechanics, but is relative to the class of observables of interest. In accordance with these observations, we propose a concrete measure of the degree of quantum thermalizability, consistent with its expected empirical manifestations. As a practical application of this measure, we devise a unified recipe for choosing an optimal set of conserved quantities to govern the after-relaxation values of observables, both in integrable quantum systems and in quantum systems in between integrable and thermalizable.
Brownian systems with spatially inhomogeneous activity
NASA Astrophysics Data System (ADS)
Sharma, A.; Brader, J. M.
2017-09-01
We generalize the Green-Kubo approach, previously applied to bulk systems of spherically symmetric active particles [J. Chem. Phys. 145, 161101 (2016), 10.1063/1.4966153], to include spatially inhomogeneous activity. The method is applied to predict the spatial dependence of the average orientation per particle and of the density. The average orientation is given by an integral over the self part of the Van Hove function, and a simple Gaussian approximation to this quantity yields an accurate analytical expression. Taking this analytical result as input, a dynamic density functional theory reproduces the spatial dependence of the density in good agreement with simulation data. All theoretical predictions are validated using Brownian dynamics simulations.
Distribution of G concurrence of random pure states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cappellini, Valerio; Sommers, Hans-Juergen; Zyczkowski, Karol
2006-12-15
The average entanglement of random pure states of an N × N composite system is analyzed. We compute the average value of the determinant D of the reduced state, which forms an entanglement monotone. Calculating higher moments of the determinant, we characterize the probability distribution P(D). Similar results are obtained for the rescaled Nth root of the determinant, called the G concurrence. We show that in the limit N → ∞ this quantity becomes concentrated at a single point G* = 1/e. The position of the concentration point changes if one considers an arbitrary N × K bipartite system, in the joint limit N, K → ∞ with K/N fixed.
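The concentration of the G concurrence near 1/e can be checked numerically by sampling Haar-random pure states; the sample sizes and dimensions below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def g_concurrence(N, samples=200):
    """Average G concurrence of Haar-random pure states on an N x N system."""
    vals = []
    for _ in range(samples):
        # A Haar-random pure state is a normalized complex N x N matrix of
        # amplitudes; its reduced state is rho = psi psi^dagger.
        psi = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
        psi /= np.linalg.norm(psi)
        rho = psi @ psi.conj().T
        # G = N * (det rho)^(1/N), computed from eigenvalues for stability.
        ev = np.clip(np.linalg.eigvalsh(rho), 1e-300, None)
        vals.append(N * np.exp(np.log(ev).sum() / N))
    return float(np.mean(vals))

g2, g24 = g_concurrence(2), g_concurrence(24)
print(g2, g24)  # the N = 24 average sits near the concentration point 1/e
```

Since the total state is normalized, the reduced eigenvalues sum to one and the AM-GM inequality bounds G by one; the larger system's average approaches the 1/e limit quoted in the abstract.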
Validation of Ocean Color Remote Sensing Reflectance Using Autonomous Floats
NASA Technical Reports Server (NTRS)
Gerbi, Gregory P.; Boss, Emanuel; Werdell, P. Jeremy; Proctor, Christopher W.; Haentjens, Nils; Lewis, Marlon R.; Brown, Keith; Sorrentino, Diego; Zaneveld, J. Ronald V.; Barnard, Andrew H.;
2016-01-01
The use of autonomous profiling floats for observational estimates of radiometric quantities in the ocean is explored, and the use of this platform for validation of satellite-based estimates of remote sensing reflectance in the ocean is examined. This effort includes comparing quantities estimated from float and satellite data at nominal wavelengths of 412, 443, 488, and 555 nm, and examining sources and magnitudes of uncertainty in the float estimates. This study had 65 occurrences of coincident high-quality observations from floats and MODIS Aqua and 15 occurrences of coincident high-quality observations from floats and the Visible Infrared Imaging Radiometer Suite (VIIRS). The float estimates of remote sensing reflectance are similar to the satellite estimates, with disagreement of a few percent in most wavelengths. The variability of the float-satellite comparisons is similar to the variability of in situ-satellite comparisons using a validation dataset from the Marine Optical Buoy (MOBY). This, combined with the agreement of float-based and satellite-based quantities, suggests that floats are likely a good platform for validation of satellite-based estimates of remote sensing reflectance.
Identification of tower-wake distortions using sonic anemometer and lidar measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaffrey, Katherine; Quelet, Paul T.; Choukulkar, Aditya
The eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) field campaign took place in March through May 2015 at the Boulder Atmospheric Observatory, utilizing its 300 m meteorological tower, instrumented with two sonic anemometers mounted on opposite sides of the tower at six heights. This allowed for at least one sonic anemometer at each level to be upstream of the tower at all times and for identification of the times when a sonic anemometer is in the wake of the tower frame. Other instrumentation, including profiling and scanning lidars, aided in the identification of the tower wake. Here we compare pairs of sonic anemometers at the same heights to identify the range of directions that are affected by the tower for each of the opposing booms. The mean velocity and turbulent kinetic energy are used to quantify the wake impact on these first- and second-order wind measurements, showing up to a 50% reduction in wind speed and an order of magnitude increase in turbulent kinetic energy. Comparisons of wind speeds from profiling and scanning lidars confirmed the extent of the tower wake, with the same reduction in wind speed observed in the tower wake and a speed-up effect around the wake boundaries. Wind direction differences between pairs of sonic anemometers, and between sonic anemometers and lidars, can also be significant, as the flow is deflected by the tower structure. Comparisons of lengths of averaging intervals showed a decrease in wind speed deficit with longer averages, but the flow deflection remains constant over longer averages. Furthermore, asymmetry exists in the tower effects due to the geometry and placement of the booms on the triangular tower. An analysis of the percentage of observations in the wake that must be removed from 2 min mean wind speed and 20 min turbulence values showed that removing even small portions of the time interval due to wakes impacts these two quantities. However, the vast majority of intervals have no observations in the tower wake, so removing the full 2 or 20 min intervals does not diminish the XPIA dataset.
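The direction-binned comparison of paired anemometers described above can be sketched as follows; all numbers (wake sector, 50% deficit, bin width, threshold) are hypothetical stand-ins rather than XPIA values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic illustration: two sonic anemometers on opposite booms; the one
# downwind of the tower sees a speed deficit and extra turbulence over a
# hypothetical 150-180 degree waked sector.
n = 20000
wdir = rng.uniform(0.0, 360.0, n)             # wind direction, degrees
u_free = 8.0 + rng.normal(0.0, 1.0, n)        # unwaked reference speed, m/s
waked = (wdir > 150.0) & (wdir < 180.0)       # boom shadowed by the tower
u_boom = u_free * np.where(waked, 0.5, 1.0)   # up to 50% speed reduction
u_boom += np.where(waked, rng.normal(0.0, 1.0, n), 0.0)  # inflated turbulence

def wake_sectors(wdir, u_ref, u_test, bin_width=10.0, deficit_threshold=0.2):
    """Flag direction bins where the test anemometer shows a mean speed
    deficit beyond the threshold relative to the reference anemometer."""
    edges = np.arange(0.0, 360.0 + bin_width, bin_width)
    flagged_bins = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (wdir >= lo) & (wdir < hi)
        if sel.sum() < 10:
            continue
        deficit = 1.0 - u_test[sel].mean() / u_ref[sel].mean()
        if deficit > deficit_threshold:
            flagged_bins.append((lo, hi))
    return flagged_bins

flagged = wake_sectors(wdir, u_free, u_boom)
print(flagged)  # the three 10-degree bins inside the waked sector
```

The flagged sectors are the ones a quality-control step would exclude when forming the 2 min and 20 min averages discussed above.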
Identification of tower-wake distortions using sonic anemometer and lidar measurements
McCaffrey, Katherine; Quelet, Paul T.; Choukulkar, Aditya; ...
2017-02-02
Effects of 27-day averaged tidal forcing on the thermosphere-ionosphere as examined by the TIEGCM
NASA Astrophysics Data System (ADS)
Maute, A. I.; Forbes, J. M.; Hagan, M. E.
2016-12-01
The variability of the ionosphere and thermosphere is influenced by solar and geomagnetic forcing and by lower atmosphere coupling. During the last solar minimum, low- and mid-latitude ionospheric observations showed strong longitudinal signals associated with upward propagating tides. Progress has been made in explaining observed ionospheric and thermospheric variations by investigating possible coupling mechanisms, e.g., the wind dynamo, propagation of tides into the upper thermosphere, global circulation changes, and compositional effects. However, a comprehensive set of simultaneous measurements of the key quantities needed to fully understand this vertical coupling is still missing. The Ionospheric Connection (ICON) explorer will provide such a data set, and the data interpretation will be supported by numerical modeling to investigate the lower-to-upper atmosphere coupling. Due to ICON's orbit, 27 days of measurements are needed to cover all longitudes and local times and to be able to derive tidal components. In this presentation we employ the Thermosphere Ionosphere Electrodynamics General Circulation Model (TIEGCM) to evaluate the influence of the 27-day processing window on the ionosphere and thermosphere state. Specifically, we compare TIEGCM simulations that are forced at the 97 km lower boundary by daily tidal fields from 2009 MERRA-forced TIME-GCM output [Häusler et al., 2015] with simulations forced by the corresponding 27-day mean tidal fields. Apart from the expected reduced day-to-day variability when using 27-day averaged tidal forcing, the simulations indicate net NmF2 changes at low latitudes, which vary with season. First results indicate that compositional effects may influence the NmF2 modifications. We will quantify the effect of using a 27-day averaged diurnal tidal forcing versus daily forcing on the equatorial vertical drift, low- and mid-latitude NmF2 and hmF2, the global circulation, and composition. The possible causes for the simulated changes will be examined.
The results of this study will be important for the comparison of the ICON observations with the accompanying ICON-TIEGCM simulations and will guide the model-data interpretation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnard, James C.; Flynn, Donna M.
2002-10-08
The ability of the SBDART radiative transfer model to predict clear-sky diffuse and direct normal broadband shortwave irradiances is investigated. Model calculations of these quantities are compared with data from the Atmospheric Radiation Measurement (ARM) program’s Southern Great Plains (SGP) and North Slope of Alaska (NSA) sites. The model tends to consistently underestimate the direct normal irradiances at both sites by about 1%. Regarding clear-sky diffuse irradiance, the model overestimates this quantity at the SGP site in a manner similar to what has been observed in other studies (Halthore and Schwartz, 2000). The difference between the diffuse SBDART calculations and Halthore and Schwartz’s MODTRAN calculations is very small, demonstrating that SBDART performs similarly to MODTRAN. SBDART is then applied to the NSA site, where the discrepancy between the model calculations and corrected diffuse measurements (corrected for daytime offsets; Dutton et al., 2001) is 0.4 W/m2 when averaged over the 12 cases considered here. Two cases of diffuse measurements from a shaded “black and white” pyranometer are also compared with the calculations, and the discrepancy is again minimal. Thus, it appears that the “diffuse discrepancy” that exists at the SGP site does not exist at the NSA site. We cannot yet explain why the model predicts diffuse radiation well at one site but not at the other.
Redox chemistry in the phosphorus biogeochemical cycle
NASA Astrophysics Data System (ADS)
Pasek, Matthew A.; Sampson, Jacqueline M.; Atlas, Zachary
2014-10-01
The element phosphorus (P) controls growth in many ecosystems as the limiting nutrient, where it is broadly considered to reside as pentavalent P in phosphate minerals and organic esters. Exceptions to pentavalent P include phosphine (PH3), a trace atmospheric gas, and phosphite and hypophosphite, P anions that have recently been detected in lightning strikes, eutrophic lakes, geothermal springs, and termite hindguts. Reduced oxidation state P compounds also include the phosphonates, characterized by C-P bonds, which constitute up to 25% of total dissolved organic phosphorus. Reduced P compounds have been considered to be rare; however, the microbial ability to use reduced P compounds as sole P sources is ubiquitous. Here we show that, on average, between 10% and 20% of dissolved P bears a redox state of less than +5 in water samples from central Florida, with some samples bearing almost as much reduced P as phosphate. If the quantity of reduced P observed in the water samples from Florida studied here is broadly characteristic of similar environments on the global scale, it accounts well for the concentration of atmospheric phosphine and provides a rationale for the ubiquity of phosphite utilization genes in nature. Phosphine is generated at a quantity consistent with thermodynamic equilibrium established by the disproportionation reaction of reduced P species. Comprising 10-20% of the total dissolved P inventory in Florida environments, reduced P compounds could hence be a critical part of the phosphorus biogeochemical cycle, and in turn may impact global carbon cycling and methanogenesis.
Does quality of drinking water matter in kidney stone disease: A study in West Bengal, India
Mitra, Pubali; Pal, Dilip Kumar
2018-01-01
Purpose: The combined interaction of epidemiology, environmental exposure, dietary habits, and genetic factors causes kidney stone disease (KSD), a common public health problem worldwide. Because a high water intake (>3 L daily) is widely recommended by physicians to prevent KSD, the present study evaluated whether the quantity of water that people consume daily is associated with KSD and whether the quality of drinking water has any effect on disease prevalence. Materials and Methods: Information regarding residential address, daily volume of water consumption, and source of drinking water was collected from 1,266 patients with kidney stones in West Bengal, India. Drinking water was collected by use of proper methods from case (high stone prevalence) and control (zero stone prevalence) areas thrice yearly. Water samples were analyzed for pH, alkalinity, hardness, total dissolved solutes, electrical conductivity, and salinity. Average values of the studied parameters were compared to determine whether there were any statistically significant differences between the case and control areas. Results: We observed that as many as 53.6% of the patients consumed <3 L of water daily. Analysis of drinking water samples from case and control areas, however, did not show any statistically significant differences in the studied parameters. All water samples were found to be suitable for consumption. Conclusions: It is not the quality but rather the quantity of water consumed that matters most in the occurrence of KSD. PMID:29744472
Evolution of the social network of scientific collaborations
NASA Astrophysics Data System (ADS)
Barabási, A. L.; Jeong, H.; Néda, Z.; Ravasz, E.; Schubert, A.; Vicsek, T.
2002-08-01
The co-authorship network of scientists represents a prototype of complex evolving networks. In addition, it offers one of the most extensive databases to date on social networks. By mapping the electronic database containing all relevant journals in mathematics and neuroscience for an 8-year period (1991-98), we infer the dynamical and structural mechanisms that govern the evolution and topology of this complex system. Three complementary approaches allow us to obtain a detailed characterization. First, empirical measurements allow us to uncover the topological measures that characterize the network at a given moment, as well as the time evolution of these quantities. The results indicate that the network is scale-free and that the network evolution is governed by preferential attachment, affecting both internal and external links. However, in contrast with most model predictions, the average degree increases in time, and the node separation decreases. Second, we propose a simple model that captures the network's time evolution. In some limits the model can be solved analytically, predicting a two-regime scaling in agreement with the measurements. Third, numerical simulations are used to uncover the behavior of quantities that could not be predicted analytically. The combined numerical and analytical results underline the important role internal links play in determining the observed scaling behavior and network topology. The results and methodologies developed in the context of the co-authorship network could be useful for a systematic study of other complex evolving networks as well, such as the World Wide Web, the Internet, or other social networks.
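A minimal growth model with preferential attachment plus internal links, in the spirit of the mechanisms described above (the parameter values are illustrative, not those fitted to the co-authorship data), can be simulated in a few lines:

```python
import random

random.seed(3)

def grow_network(steps, m=2, internal=1):
    """Toy evolving network: each step adds one node with m preferentially
    attached external links, plus `internal` links between existing nodes,
    also chosen preferentially (hypothetical parameters)."""
    degree = {0: 1, 1: 1}
    ends = [0, 1]                        # edge-end list: node repeated per degree
    for t in range(2, steps + 2):
        targets = set()
        while len(targets) < min(m, len(degree)):
            targets.add(random.choice(ends))   # preferential attachment
        degree[t] = 0
        for u in targets:
            degree[t] += 1
            degree[u] += 1
            ends.extend([t, u])
        for _ in range(internal):        # internal links densify the network
            a, b = random.choice(ends), random.choice(ends)
            if a != b:
                degree[a] += 1
                degree[b] += 1
                ends.extend([a, b])
    return degree

deg = grow_network(5000)
avg = sum(deg.values()) / len(deg)
print(avg)                # above 2*m: internal links raise the average degree
print(max(deg.values()))  # hubs far above the average, a scale-free signature
```

The internal links reproduce, in caricature, the paper's observation that the average degree grows beyond what external attachment alone would give.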
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.
1995-01-01
This paper describes an integrated aerodynamic/dynamic/structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general-purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffness, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic designs are performed at a global level and the structural design is carried out at a detailed level with considerable dialog and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several examples.
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.
1994-01-01
This paper describes an integrated aerodynamic, dynamic, and structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffnesses, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic design is performed at a global level and the structural design is carried out at a detailed level with considerable dialogue and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several cases.
Spreading gossip in social networks.
Lind, Pedro G; da Silva, Luciano R; Andrade, José S; Herrmann, Hans J
2007-09-01
We study a simple model of information propagation in social networks, where two quantities are introduced: the spread factor, which measures the average maximal reachability of the neighbors of a given node that interchange information among each other, and the spreading time needed for the information to reach such a fraction of nodes. When the information refers to a particular node at which both quantities are measured, the model can be taken as a model for gossip propagation. In this context, we apply the model to real empirical networks of social acquaintances and compare the underlying spreading dynamics with different types of scale-free and small-world networks. We find that the number of friendship connections strongly influences the probability of being gossiped. Finally, we discuss how the spread factor can be applied to other situations.
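A sketch of the spread-factor computation, under the assumption that gossip about a victim starts at one neighbour and propagates only among the victim's neighbours (the small acquaintance graph is invented for illustration):

```python
from collections import deque

def spread_factor(adj, victim):
    """Spread factor of the gossip model: gossip about `victim` starts at a
    single neighbour and propagates only among the victim's neighbours; the
    factor averages the fraction of neighbours reached over all starters."""
    nbrs = set(adj[victim])
    if not nbrs:
        return 0.0
    total = 0
    for start in nbrs:
        reached = {start}
        queue = deque([start])
        while queue:                       # BFS restricted to the neighbourhood
            u = queue.popleft()
            for v in adj[u]:
                if v in nbrs and v not in reached:
                    reached.add(v)
                    queue.append(v)
        total += len(reached)
    return total / len(nbrs) ** 2

# Tiny invented acquaintance graph: node 0's neighbours split into two
# cliques {1, 2} and {3, 4}, so gossip about 0 reaches half of them.
adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0, 4, 5], 4: [0, 3], 5: [3]}
print(spread_factor(adj, 0))  # 0.5
```

The factor is one when the victim's neighbourhood is fully connected and drops as the neighbourhood fragments, which is why the number of friendship connections controls the probability of being gossiped.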
Spreading gossip in social networks
NASA Astrophysics Data System (ADS)
Lind, Pedro G.; da Silva, Luciano R.; Andrade, José S., Jr.; Herrmann, Hans J.
2007-09-01
Generalized ensemble theory with non-extensive statistics
NASA Astrophysics Data System (ADS)
Shen, Ke-Ming; Zhang, Ben-Wei; Wang, En-Ke
2017-12-01
The non-extensive canonical ensemble theory is reconsidered with the method of Lagrange multipliers by maximizing the Tsallis entropy, with the constraint that the normalization term of Tsallis' q-average of physical quantities, the sum ∑j pj^q, is independent of the probability pi for Tsallis parameter q. The self-referential problem in the deduced probability and thermal quantities in non-extensive statistics is thus avoided, and thermodynamical relationships are obtained in a consistent and natural way. We also extend the study to the non-extensive grand canonical ensemble theory and obtain the q-deformed Bose-Einstein distribution as well as the q-deformed Fermi-Dirac distribution. The theory is further applied to the generalized Planck law to demonstrate the distinct behaviors of the various generalized q-distribution functions discussed in the literature.
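The q-exponential underlying these q-deformed distributions reduces to the ordinary exponential as q → 1. The occupation-number formula below uses one common convention and is an illustrative sketch, not necessarily the paper's exact expression:

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

def q_bose_einstein(eps, beta, mu, q):
    """q-deformed Bose-Einstein occupation in one common convention,
    n = 1 / (e_q(beta*(eps - mu)) - 1); a hypothetical illustration."""
    return 1.0 / (q_exp(beta * (eps - mu), q) - 1.0)

# As q -> 1 the q-deformed occupation approaches the standard
# Bose-Einstein result for the same energy, temperature, and potential.
standard = 1.0 / (math.exp(1.0 * (2.0 - 0.5)) - 1.0)
print(q_bose_einstein(2.0, 1.0, 0.5, 1.000001))
print(standard)
```

Sweeping q away from one shows the distinct high-energy tails that the abstract attributes to the generalized Planck law.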
Groundspeed filtering for CTAS
NASA Technical Reports Server (NTRS)
Slater, Gary L.
1994-01-01
Groundspeed is one of the radar observables obtained along with position and heading from the NASA Ames Center radar. Within the Center TRACON Automation System (CTAS), groundspeed is converted into airspeed using the wind speeds that CTAS obtains from the NOAA weather grid. This airspeed is then used in the trajectory synthesis logic, which computes the trajectory for each individual aircraft. The time history of typical radar groundspeed data is generally quite noisy, with high-frequency variations on the order of five knots and occasional 'outliers' that can differ significantly from the probable true speed. To smooth out these speeds and make the ETA estimate less erratic, filtering of the groundspeed is done within CTAS. In its base form, the CTAS filter is a 'moving average' filter that averages the last ten radar values. In addition, there is separate logic to detect and correct for 'outliers', and acceleration logic that limits the groundspeed change between adjacent time samples. As will be shown, these additional modifications cause significant changes in the actual groundspeed filter output. The conclusion is that the current groundspeed filter logic is unable to track accurately the speed variations observed on many aircraft. Kalman filter logic, however, appears to be an improvement over the current algorithm used to smooth groundspeed variations, while being simpler and more efficient to implement. Additional logic to test for true 'outliers' can easily be added by comparing the a priori and a posteriori Kalman estimates and not updating if the difference in these quantities is too large.
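The contrast between a ten-point moving average and a Kalman filter with innovation-based outlier gating can be illustrated on a synthetic groundspeed track; the noise levels, outlier sizes, and filter tunings below are hypothetical, not the CTAS values:

```python
import random

random.seed(5)

# Synthetic groundspeed track (hypothetical values): true speed ramps slowly,
# radar adds ~5 kt noise plus occasional large outliers.
truth = [250.0 + 0.05 * k for k in range(300)]
meas = [s + random.gauss(0.0, 5.0) for s in truth]
for k in range(20, 300, 50):
    meas[k] += 60.0                       # injected radar outliers

def moving_average(z, window=10):
    out = []
    for k in range(len(z)):
        lo = max(0, k - window + 1)
        out.append(sum(z[lo:k + 1]) / (k + 1 - lo))
    return out

def kalman(z, q=0.05, r=25.0, gate=4.0):
    """Scalar random-walk Kalman filter; innovations beyond `gate` standard
    deviations are treated as outliers and skipped (a sketch, not CTAS code)."""
    x, p = z[0], r
    out = []
    for zk in z:
        p += q                            # predict: process noise inflates variance
        innov = zk - x
        s = p + r                         # innovation variance
        if innov * innov <= gate * gate * s:
            k_gain = p / s
            x += k_gain * innov           # update with the gated measurement
            p *= (1.0 - k_gain)
        out.append(x)
    return out

ma, kf = moving_average(meas), kalman(meas)
err = lambda est: max(abs(e - t) for e, t in zip(est[50:], truth[50:]))
print(err(ma), err(kf))  # the gated Kalman estimate tracks more tightly
```

The moving average smears each outlier across ten samples, while the innovation gate simply skips it, which mirrors the outlier test on a priori versus a posteriori estimates suggested above.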
Jensen, Jørgen Dejgård; Poulsen, Sanne Kellebjerg
2013-12-02
Several studies suggest that a healthy diet with high emphasis on nutritious, low-energy components such as fruits, vegetables, and seafood tends to be more costly for consumers. Derived from the ideas of the New Nordic Cuisine and inspired by the Mediterranean diet, the New Nordic Diet (NND) has been developed as a palatable, healthy and sustainable diet based on products from the Nordic region. The objective of the study is to investigate economic consequences for the consumers of the NND, compared with an Average Danish Diet (ADD). We combine quantity data from a randomized controlled ad libitum dietary 6 month intervention for centrally obese adults (18-65 years) with market retail price data for the products consumed in the intervention, and adjust consumed quantities to market price incentives using econometrically estimated price elasticities. Average daily food expenditure of the ADD as represented in the unadjusted intervention (ADD-i) amounted to 36.02 DKK for the participants. The daily food expenditure in the unadjusted New Nordic Diet (NND-i) is 44.80 DKK per day per head, and is hence about 25% more expensive than the Average Danish Diet (or about 17% when adjusting for energy content of the diet). Adjusting for price incentives in a real market setting, the estimated cost of the Average Danish Diet is reduced by 2.50 DKK (ADD-m), compared to the unadjusted ADD-i diet, whereas the adjusted cost of the New Nordic Diet (NND-m) is reduced by about 3.50 DKK, compared to the unadjusted NND-i. The distribution of food cost is, however, much more heterogeneous among consumers within the NND than within the ADD. On average, the New Nordic Diet is 24-25 per cent more expensive than an Average Danish Diet at the current market prices in Denmark (and 16-17 per cent, when adjusting for energy content).
The relatively large heterogeneity in food costs in the NND suggests that it is possible to compose an NND where the cost exceeds that of ADD by less than the 24-25 per cent.
Zhang, Xin; Wu, Qunhong; Liu, Guoxiang; Li, Ye; Gao, Lijun; Guo, Bin; Fu, Wenqi; Hao, Yanhua; Cui, Yu; Huang, Weidong; Coyte, Peter C
2014-12-22
The government of China has introduced a National Essential Medicines Policy (NEMP) in the new round of health system reform. The objective of this paper is to analyse whether the NEMP can play a role in curbing the rise of medical expenditures without disrupting the availability of healthcare services at township hospitals in China. This study adopted a pre-post treatment-control study design; a difference-in-differences method and a fixed-effects model for panel data were employed to estimate the effect of the NEMP. The setting was Chongqing, Jiangsu and Henan Provinces, China, in 2009 and 2010, covering 296 township health centres. Outcomes for health expenditures were average outpatient drug expenses per visit, average inpatient drug expenses per discharged patient, average outpatient expenses per visit and average inpatient expenses per discharged patient. Outcomes for care delivery were the number of visits per certified doctor per day and the number of hospitalised patients per certified doctor per day. The township health centres that were enrolled in the NEMP reported 26% (p<0.01) lower drug expenditures for inpatient care. An 11% (p<0.05) decrease in average inpatient expenditures per discharged patient was found following the implementation of the NEMP. The impacts of the NEMP on average outpatient expenditures and outpatient drug expenditures were not statistically significant at the 5% level. No statistically significant associations were found between the NEMP and a reduction in the quantity of health service delivery. The NEMP was thus significant in reducing inpatient medication and health service expenditures. This study shows no evidence that the quantity of healthcare services declined significantly after introduction of the NEMP over the study period, which suggests that if appropriate matching policies are introduced, the side effects of the NEMP can be counteracted to some degree. Further research, including a long-term follow-up study, is needed.
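The difference-in-differences estimator at the heart of this study design reduces, in its simplest form, to a difference of group-mean changes; the toy panel below uses invented expenditure figures, not the study's data:

```python
# Difference-in-differences on a toy panel (hypothetical numbers): the policy
# effect is the change in treated-group means minus the change in controls,
# which nets out shared time trends.
pre_treated  = [100.0, 104.0, 98.0]    # drug expenses, treated centres, before
post_treated = [78.0, 82.0, 74.0]      # after the policy
pre_control  = [102.0, 99.0, 105.0]    # control centres, before
post_control = [101.0, 97.0, 103.0]    # after

mean = lambda xs: sum(xs) / len(xs)

did = (mean(post_treated) - mean(pre_treated)) \
      - (mean(post_control) - mean(pre_control))
print(did)  # negative: expenditures fell more where the policy applied
```

In the paper this contrast is estimated within a fixed-effects panel regression rather than by raw means, which additionally absorbs centre-level heterogeneity.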
Published by the BMJ Publishing Group Limited.
2013-01-01
Background Several studies suggest that a healthy diet with high emphasis on nutritious, low-energy components such as fruits, vegetables, and seafood tends to be more costly for consumers. Derived from the ideas from the New Nordic Cuisine – and inspired by the Mediterranean diet, the New Nordic Diet (NND) has been developed as a palatable, healthy and sustainable diet based on products from the Nordic region. The objective of the study is to investigate economic consequences for the consumers of the NND, compared with an Average Danish Diet (ADD). Methods Combine quantity data from a randomized controlled ad libitum dietary 6 month intervention for central obese adults (18–65 years) and market retail price data of the products consumed in the intervention. Adjust consumed quantities to market price incentives using econometrically estimated price elasticities. Results Average daily food expenditure of the ADD as represented in the unadjusted intervention (ADD-i) amounted to 36.02 DKK for the participants. The daily food expenditure in the unadjusted New Nordic Diet (NND-i) costs 44.80 DKK per day per head, and is hence about 25% more expensive than the Average Danish Diet (or about 17% when adjusting for energy content of the diet). Adjusting for price incentives in a real market setting, the estimated cost of the Average Danish Diet is reduced by 2.50 DKK (ADD-m), compared to the unadjusted ADD-i diet, whereas the adjusted cost of the New Nordic Diet (NND-m) is reduced by about 3.50 DKK, compared to the unadjusted NND-i. The distribution of food cost is however much more heterogeneous among consumers within the NND than within the ADD. Conclusion On average, the New Nordic Diet is 24–25 per cent more expensive than an Average Danish Diet at the current market prices in Denmark (and 16–17 per cent, when adjusting for energy content). 
The relatively large heterogeneity in food costs within the NND suggests that it is possible to compose an NND whose cost exceeds that of the ADD by less than 24–25 per cent. PMID:24294977
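The headline cost premium can be reproduced directly from the reported daily expenditures (a minimal sketch using only the figures above; the energy-adjusted figure would additionally require the diets' energy contents, which are not given here):

```python
def cost_premium(base_cost, diet_cost):
    """Percent by which diet_cost exceeds base_cost."""
    return (diet_cost / base_cost - 1.0) * 100.0

# Unadjusted daily intervention costs reported above (DKK per head per day)
add_i = 36.02  # Average Danish Diet (ADD-i)
nnd_i = 44.80  # New Nordic Diet (NND-i)
premium = cost_premium(add_i, nnd_i)  # ~24.4%, i.e. "about 25% more expensive"
```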
Bürgi, Alfred; Scanferla, Damiano; Lehmann, Hugo
2014-01-01
Models for exposure assessment of high-frequency electromagnetic fields from mobile phone base stations need the technical data of the base stations as input. One of these parameters, the Equivalent Radiated Power (ERP), is a time-varying quantity that depends on communication traffic. In order to determine temporal averages of the exposure, corresponding averages of the ERP have to be available. These can be determined as duty factors: the ratios of the time-averaged power to the maximum output power according to the transmitter setting. We determine duty factors for UMTS from the data of 37 base stations in the Swisscom network. The UMTS base station sample contains sites from different regions of Switzerland and of different site types (rural/suburban/urban/hotspot). Averaged over all regions and site types, a UMTS duty factor F ≈ 0.32 ± 0.08 for the 24-h average is obtained, i.e., the average output power corresponds to about a third of the maximum power. We also give duty factors for GSM based on simple approximations, and a lower limit for LTE estimated from the base load on the signalling channels. PMID:25105551
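The duty factor described above is simply the time-averaged output power divided by the maximum configured power. A minimal sketch (the hourly power samples are invented for illustration, not Swisscom data):

```python
def duty_factor(power_samples, p_max):
    """Ratio of time-averaged transmitted power to maximum output power."""
    if p_max <= 0:
        raise ValueError("p_max must be positive")
    return sum(power_samples) / (len(power_samples) * p_max)

# Hypothetical 24 hourly ERP averages (watts) for a station with P_max = 20 W
samples = [2, 2, 3, 4, 5, 7, 9, 10, 9, 8, 8, 9,
           10, 9, 8, 8, 9, 10, 9, 7, 5, 4, 3, 2]
f = duty_factor(samples, 20.0)  # ~0.33 here, close to the reported F ~= 0.32
```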
NASA Astrophysics Data System (ADS)
Hoyos, Isabel; Baquero-Bernal, Astrid; Hagemann, Stefan
2013-09-01
In Colombia, access to climate-related observational data is restricted and their quantity is limited. Yet information about the current climate is fundamental for studies on present and future climate changes and their impacts. This information is especially important over the Colombian Caribbean Catchment Basin (CCCB), which comprises over 80 % of the population of Colombia and produces about 85 % of its GDP. Consequently, an ensemble of several datasets has been evaluated and compared with respect to their capability to represent the climate over the CCCB. The comparison includes observations, reconstructed data (CPC, Delaware), reanalyses (ERA-40, NCEP/NCAR), and simulated data produced with the regional climate model REMO. The capabilities to represent the average annual state, the seasonal cycle, and the interannual variability are investigated. The analyses focus on surface air temperature and precipitation as well as on surface water and energy balances. On the one hand, the characteristics of the CCCB pose some difficulties for the datasets, as the basin includes a mountainous region with three mountain ranges, where the dynamical cores of models and model parameterizations can fail. On the other hand, it has the densest network of stations, with the longest records, in the country. The results can be summarised as follows: all of the datasets show a cold bias in the average temperature of the CCCB. However, the variability of the average temperature of the CCCB is most poorly represented by the NCEP/NCAR dataset. The average precipitation in the CCCB is overestimated by all datasets. For the ERA-40, NCEP/NCAR, and REMO datasets, the amplitude of the annual cycle is extremely high. The variability of the average precipitation in the CCCB is better represented by the reconstructed data of CPC and Delaware, as well as by NCEP/NCAR.
Regarding the capability to represent the spatial behaviour of CCCB, temperature is better represented by Delaware and REMO, while precipitation is better represented by Delaware. Among the three datasets that permit an analysis of surface water and energy balances (REMO, ERA-40, and NCEP/NCAR), REMO best demonstrates the closure property of the surface water balance within the basin, while NCEP/NCAR does not demonstrate this property well. The three datasets represent the energy balance fairly well, although some inconsistencies were found in the individual balance components for NCEP/NCAR.
ERIC Educational Resources Information Center
Galambos, Nancy L.; Dalton, Andrea L.; Maggs, Jennifer L.
2009-01-01
Daily covariation of sleep quantity and quality with affective, stressful, academic, and social experiences was observed in a sample of Canadian 17-19-year-olds in their first year of university. Participants (N = 191) completed web-based checklists for 14 consecutive days during their first semester. Multilevel models predicting sleep quantity…
Code of Federal Regulations, 2010 CFR
2010-01-01
... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Minimum Acceptable Values for the Quantity A Defined in the Retroreflective Tire and Rim Test Procedure 3 Table 3 to Part 1512 Commercial Practices... Retroreflective Tire and Rim Test Procedure Observation angle (degrees) Entrance angle (degrees) Minimum...
A New Eddy Dissipation Rate Formulation for the Terminal Area PBL Prediction System(TAPPS)
NASA Technical Reports Server (NTRS)
Charney, Joseph J.; Kaplan, Michael L.; Lin, Yuh-Lang; Pfeiffer, Karl D.
2000-01-01
The TAPPS employs the MASS model to produce mesoscale atmospheric simulations in support of the Wake Vortex project at Dallas Fort Worth International Airport (DFW). A post-processing scheme uses the simulated three-dimensional atmospheric characteristics in the planetary boundary layer (PBL) to calculate the turbulence quantities most important to the dissipation of vortices: turbulent kinetic energy and eddy dissipation rate. TAPPS will ultimately be employed to enhance terminal area productivity by providing weather forecasts for the Aircraft Vortex Spacing System (AVOSS). The post-processing scheme utilizes experimental data and similarity theory to determine the turbulence quantities from the simulated horizontal wind field and stability characteristics of the atmosphere. Characteristic PBL quantities important to these calculations are determined using formulations from the Blackadar PBL parameterization, which is regularly employed in the MASS model to account for PBL processes in mesoscale simulations. The TAPPS forecasts are verified against high-resolution observations of the horizontal winds at DFW. Statistical assessments of the error in the wind forecasts suggest that TAPPS captures the essential features of the horizontal winds with considerable skill. Additionally, the turbulence quantities produced by the post-processor are shown to compare favorably with corresponding tower observations.
Schnell, Sebastian; Altrell, Dan; Ståhl, Göran; Kleinn, Christoph
2015-01-01
In contrast to forest trees, trees outside forests (TOF) often are not included in the national monitoring of tree resources. Consequently, data about this particular resource are scarce, and the available information is typically fragmented across the different institutions and stakeholders that deal with one or more of the various TOF types. Thus, even if information is available, it is difficult to aggregate it into overall national statistics. However, the National Forest Monitoring and Assessment (NFMA) programme of FAO offers a unique possibility to study TOF resources because TOF are integrated by default into the NFMA inventory design. We have analysed NFMA data from 11 countries across three continents. For six countries, we found that more than 10% of the national above-ground tree biomass was actually accumulated outside forests. The highest value (73%) was observed for Bangladesh (total forest cover 8.1%, average biomass per hectare in forest 33.4 t ha(-1)) and the lowest (3%) for Zambia (total forest cover 63.9%, average biomass per hectare in forest 32 t ha(-1)). Average TOF biomass stocks were estimated to be smaller than 10 t ha(-1). However, given the large extent of non-forest areas, these stocks sum to considerable quantities in many countries. There are good reasons to overcome sectoral boundaries and to extend national forest monitoring programmes on a more systematic basis that includes TOF. Such an approach, for example, would generate a more complete picture of the national tree biomass. In the context of climate change mitigation and adaptation, international climate mitigation programmes (e.g. the Clean Development Mechanism and Reduced Emissions from Deforestation and Degradation) focus on forest trees without considering the impact of TOF, a consideration this study finds crucial if accurate measurements of national tree biomass and carbon pools are required.
NASA Astrophysics Data System (ADS)
Ventosa, Sergi; Romanowicz, Barbara
2015-11-01
Resolving the topography of the core-mantle boundary (CMB) and the structure and composition of the D″ region is key to improving our understanding of the interaction between the Earth's mantle and core. Observations of traveltimes and amplitudes of short-period teleseismic body waves sensitive to the lowermost mantle provide essential constraints on the properties of this region. The major challenges are the low signal-to-noise ratio of the target phases and their interference with other mantle phases. In a previous paper (Part I), we introduced the slant-stacklet transform to enhance the signal of the core-reflected (PcP) phase and to isolate it from stronger signals in the coda of the P wave. We then minimized a linear misfit between P and PcP waveforms to improve the quality of PcP-P traveltime difference measurements compared to standard cross-correlation methods. This method significantly increases the quantity and quality of PcP-P traveltime observations available for modelling structure near the CMB. Here we illustrate our approach in a series of regional studies of the CMB and D″ using PcP-P observations with unprecedented resolution from high-quality dense arrays located in North America and Japan, for events with magnitude Mw>5.4 and distances up to 80°. In this process, we carefully analyse various sources of errors and show that mantle heterogeneity is the most significant. We find and correct bias due to mantle heterogeneities that is as large as 1 s in traveltime, comparable to the largest lateral PcP-P traveltime variations observed. We illustrate the importance of accurate mantle corrections and the need for higher-resolution mantle models in future studies. After optimal mantle corrections, the main signal left is of relatively long wavelength in the regions sampled, except at the border of the Pacific large-low shear velocity province (LLSVP).
We detect the northwest border of the Pacific LLSVP in the western Pacific from array observations in Japan, and observe higher than average P velocities, or depressed CMB, in Central America, and slightly lower than average P velocities under Alaska/western Canada.
Zone clearance in an infinite TASEP with a step initial condition
NASA Astrophysics Data System (ADS)
Cividini, Julien; Appert-Rolland, Cécile
2017-06-01
The TASEP is a paradigmatic model of out-of-equilibrium statistical physics, for which many quantities have been computed, either exactly or by approximate methods. In this work we study two new kinds of observables that have some relevance in biological or traffic models. They represent the probability for a given clearance zone of the lattice to be empty (for the first time) at a given time, starting from a step density profile. Exact expressions are obtained for single-time quantities, while more involved history-dependent observables are studied by Monte Carlo simulation, and partially predicted by a phenomenological approach.
Finite Volume Algorithms for Heat Conduction
2010-05-01
scalar quantity). Although (3) is relatively easy to discretize by using finite differences, its form in generalized coordinates is not. Later, we...familiar with the finite difference method for discretizing differential equations. In fact, the Newton divided difference is the numerical analog for a...expression (8) for the average derivative matches the Newton divided difference formula, so for uniform one-dimensional meshes, the finite volume and
Code of Federal Regulations, 2011 CFR
2011-07-01
... processing in the amount of 10 metric tons (11 short tons); (b) Bilge water containing oily mixtures in the... average, whichever quantity is greater; (c) Ballast water containing oily mixtures in the amount of 30% of... FACILITIES FOR OIL, NOXIOUS LIQUID SUBSTANCES, AND GARBAGE Criteria for Reception Facilities: Oily Mixtures...
Code of Federal Regulations, 2012 CFR
2012-07-01
... processing in the amount of 10 metric tons (11 short tons); (b) Bilge water containing oily mixtures in the... average, whichever quantity is greater; (c) Ballast water containing oily mixtures in the amount of 30% of... FACILITIES FOR OIL, NOXIOUS LIQUID SUBSTANCES, AND GARBAGE Criteria for Reception Facilities: Oily Mixtures...
Code of Federal Regulations, 2014 CFR
2014-07-01
... processing in the amount of 10 metric tons (11 short tons); (b) Bilge water containing oily mixtures in the... average, whichever quantity is greater; (c) Ballast water containing oily mixtures in the amount of 30% of... FACILITIES FOR OIL, NOXIOUS LIQUID SUBSTANCES, AND GARBAGE Criteria for Reception Facilities: Oily Mixtures...
Code of Federal Regulations, 2010 CFR
2010-07-01
... processing in the amount of 10 metric tons (11 short tons); (b) Bilge water containing oily mixtures in the... average, whichever quantity is greater; (c) Ballast water containing oily mixtures in the amount of 30% of... FACILITIES FOR OIL, NOXIOUS LIQUID SUBSTANCES, AND GARBAGE Criteria for Reception Facilities: Oily Mixtures...
Code of Federal Regulations, 2013 CFR
2013-07-01
... processing in the amount of 10 metric tons (11 short tons); (b) Bilge water containing oily mixtures in the... average, whichever quantity is greater; (c) Ballast water containing oily mixtures in the amount of 30% of... FACILITIES FOR OIL, NOXIOUS LIQUID SUBSTANCES, AND GARBAGE Criteria for Reception Facilities: Oily Mixtures...
The U.S. EPA established a National Dioxin Air Monitoring Network (NDAMN) to determine the temporal and geographical variability of atmospheric CDDs, CDFs and coplanar PCBs throughout the United States. Currently operating at 33 stations, NDAMN has, as one of its tasks, the dete...
Hangar Fire Suppression Utilizing Novec 1230
2018-01-01
...fuel fires in aircraft hangars. A 30×30×8-ft concrete-and-steel test structure was constructed for this test series. Four discharge assemblies...structure. System discharge parameters---discharge time, discharge rate, and quantity of agent discharged---were adjusted to produce the desired Novec 1230
Estimated use of water in the United States, 1965
Murray, Charles Richard
1968-01-01
Estimates of water use in the United States for 1965 indicate that an average of about 310 bgd (billion gallons per day) was withdrawn for public-supply, rural domestic and livestock, irrigation, and industrial (including thermoelectric power) uses - that is, about 1,600 gallons per capita per day. This represents an increase of 15 percent over the withdrawal of 270 bgd reported for 1960. Fresh water withdrawals for thermoelectric power generation increased nearly 25 percent during the 5 years, and saline water withdrawals increased 33 percent. An additional 2,300 bgd was used for hydroelectric power generation (waterpower), which also represented a 15-percent increase in 5 years. The quantity of water consumed - that is, water made unavailable for further possible withdrawal because of evaporation, incorporation in manufactured products, and other causes - was estimated to average 78 bgd for 1965, an increase of about 28 percent since 1960. Estimates of the quantities of water withdrawn from surface and ground-water sources indicate withdrawals of 61 bgd of ground water, of which nearly 0.5 bgd was saline, and 250 bgd of surface water, of which 44 bgd was saline. The estimated amount of saline water used by industry increased 36 percent from 1960 to 1965. In addition to surface and ground water sources, reclaimed sewage supplied two-thirds of a billion gallons per day, mainly to irrigation and industry. The average annual streamflow in the United States is approximately 1,200 bgd, about four times the amount withdrawn for all purposes (except hydroelectric power) in 1965, and more than 15 times the estimated quantity of water consumed.
However, comparisons of supply and demand in many river basins show that repeated use of the water is made, and at times in some basins all the available supply is consumed. In addition to tabulations of water-use data by States and by the water-use regions previously used, water-use tables are also given for the regions recently defined by the Water Resources Council for its national assessment.
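The per-capita figure in the abstract follows directly from total withdrawals and population. A minimal sketch; the 1965 U.S. population of roughly 194 million is an assumption inferred from the reported numbers, not stated in the text:

```python
def per_capita_gpd(total_withdrawal_bgd, population):
    """Gallons per capita per day, from withdrawals in billion gallons per day."""
    return total_withdrawal_bgd * 1e9 / population

# 310 bgd total withdrawals; assumed 1965 U.S. population ~194 million
gpcd = per_capita_gpd(310, 194e6)  # ~1598, i.e. "about 1,600 gallons per capita per day"
```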
NASA Astrophysics Data System (ADS)
Yu, Sen; Lu, Hongwei
2018-04-01
Under the effects of global change, water crisis ranks as the top global risk of the coming decade, and water conflict in transboundary river basins, together with the geostrategic competition it drives, is of particular concern. This study presents an innovative integrated PPMGWO model for the optimal allocation of water resources in a transboundary river basin, combining the projection pursuit model (PPM) with the Grey Wolf Optimization (GWO) method. Using the Songhua River basin and its 25 control units as an example, the proposed PPMGWO model is adopted to allocate the water quantity. Taking water consumption in all control units of the Songhua River basin in 2015 as the reference for comparing the optimization allocation results of the firefly algorithm (FA), Particle Swarm Optimization (PSO), and the PPMGWO model, the average differences between the corresponding allocation results and the reference values are 0.195 billion m3, 0.151 billion m3, and 0.085 billion m3, respectively. The average difference of the PPMGWO model is clearly the lowest and its allocation result is closest to reality, which further confirms the reasonableness, feasibility, and accuracy of the model. The PPMGWO model is then adopted to simulate the allocation of available water quantity in the Songhua River basin in 2018, 2020, and 2030. The simulation results show that the water quantity that can be allocated to the control units demonstrates an overall increasing trend under reasonable and equitable exploitation and utilization of water resources in the basin. This study thus offers a useful reference for comprehensive management and water resources allocation in other transboundary river basins.
Observations of ELM stabilization during neutral beam injection in DIII-D
NASA Astrophysics Data System (ADS)
Bortolon, Alessandro; Kramer, Gerrit; Diallo, Ahmed; Knolker, Matthias; Maingi, Rajesh; Nazikian, Raffi; Degrassie, John; Osborne, Thomas
2017-10-01
Edge localized modes (ELMs) are generally interpreted as peeling-ballooning instabilities, driven by the pedestal current and pressure gradient, with other subdominant effects possibly relevant close to marginal stability. We report observations of transient stabilization of type-I ELMs during neutral beam injection (NBI), emerging from a combined dataset of DIII-D ELMy H-mode plasmas with moderate heating obtained through pulsed NBI waveforms. Statistical analysis of ELM onset times indicates that, in the selected dataset, the likelihood of onset of an ELM is significantly lowered during NBI modulation pulses, with the stronger correlation found for counter-current NBI. The effect is also found in rf-heated H-modes, where ELMs appear inhibited when isolated diagnostic beam pulses are applied. Coherent average analysis is used to determine how plasma density, temperature, and rotation as well as beam ion quantities evolve during a NB modulation cycle, finding relatively small changes (about 3%) in pedestal Te and ne and toroidal and poloidal rotation variations of up to 5 km/s. The effect of these changes on pedestal stability will be discussed. Work supported by US DOE under DE-FC02-04ER54698 and DE-AC02-09CH11466.
The MIPAS2D: 2-D analysis of MIPAS observations of ESA target molecules and minor species
NASA Astrophysics Data System (ADS)
Arnone, E.; Brizzi, G.; Carlotti, M.; Dinelli, B. M.; Magnani, L.; Papandrea, E.; Ridolfi, M.
2008-12-01
Measurements from the MIPAS instrument onboard the ENVISAT satellite were analyzed with the Geofit Multi-Target Retrieval (GMTR) system to obtain 2-dimensional fields of pressure, temperature and volume mixing ratios of H2O, O3, HNO3, CH4, N2O, and NO2. Secondary target species relevant to stratospheric chemistry were also analysed, and robust mixing ratios of N2O5, ClONO2, F11, F12, F14 and F22 were obtained. Other minor species with high uncertainties were not included in the database and will be the object of further studies. The analysis covers the original nominal observation mode from July 2002 to March 2004 and is currently being extended to the ongoing reduced-resolution mission. The GMTR algorithm was operated on a fixed 5-degree latitudinal grid in order to ease comparison with model calculations and climatological datasets. The generated database of atmospheric fields can be used directly for analyses based on averaging processes, with no need for further interpolation. Samples of the obtained products are presented and discussed. The database of the retrieved quantities is made available to the scientific community.
Ultrasensitive investigations of biological systems by fluorescence correlation spectroscopy.
Haustein, Elke; Schwille, Petra
2003-02-01
Fluorescence correlation spectroscopy (FCS) extracts information about molecular dynamics from the tiny fluctuations that can be observed in the emission of small ensembles of fluorescent molecules in thermodynamic equilibrium. Employing a confocal setup in conjunction with highly dilute samples, the average number of fluorescent particles simultaneously within the measurement volume (approximately 1 fl) is minimized. Among the multitude of chemical and physical parameters accessible by FCS are local concentrations, mobility coefficients, rate constants for association and dissociation processes, and even enzyme kinetics. As any reaction causing an alteration of the primary measurement parameters, such as fluorescence brightness or mobility, can be monitored, the application of this noninvasive method to unravel processes in living cells is straightforward. Due to the high spatial resolution of less than 0.5 μm, selective measurements in cellular compartments, e.g., to probe receptor-ligand interactions on cell membranes, are feasible. Moreover, the observation of local molecular dynamics provides access to environmental parameters such as local oxygen concentrations, pH, or viscosity. This versatile technique is therefore particularly attractive to researchers striving for quantitative assessment of interactions and dynamics of small molecular quantities in biologically relevant systems.
Statistical and sampling issues when using multiple particle tracking
NASA Astrophysics Data System (ADS)
Savin, Thierry; Doyle, Patrick S.
2007-08-01
Video microscopy can be used to simultaneously track several microparticles embedded in a complex material. The trajectories are used to extract a sample of displacements at random locations in the material. From this sample, averaged quantities characterizing the dynamics of the probes are calculated to evaluate structural and/or mechanical properties of the assessed material. However, the sampling of measured displacements in heterogeneous systems is singular because the volume of observation with video microscopy is finite. By carefully characterizing the sampling design in the experimental output of the multiple particle tracking technique, we derive estimators for the mean and variance of the probes’ dynamics that are independent of the peculiar statistical characteristics. We expose stringent tests of these estimators using simulated and experimental complex systems with a known heterogeneous structure. Up to a certain fundamental limitation, which we characterize through a material degree of sampling by the embedded probe tracking, these estimators can be applied to quantify the heterogeneity of a material, providing an original and intelligible kind of information on complex fluid properties. More generally, we show that the precise assessment of the statistics in the multiple particle tracking output sample of observations is essential in order to provide accurate unbiased measurements.
The impact of generic reference pricing in Italy, a decade on.
Ghislandi, Simone; Armeni, Patrizio; Jommi, Claudio
2013-12-01
The generic reference price (GRP) was introduced in Italy in 2001. The main purposes of this paper are (a) to produce evidence regarding the effect of GRP on prices and (b) to test the hypothesis that there is a reallocation of demand from the genericated (and reference-priced) molecules to patent-protected products that have the same therapeutic indication. The analysis used a unique dataset of quantities and revenues for six therapeutic groups observed for more than a decade. Difference-in-differences analysis is applied. Prices are adjusted for all the regulatory interventions in the ten years of observations, to control for the confounding impact of these interventions. On average, prices dropped 13% more in groups to which GRP was applied than in other groups. Moreover, each entry of a new generic was associated with a price drop of around 2.8%. On the other hand, GRP did not induce any significant switching towards in-patent molecules. We provide the first empirical results on the impact of GRP on prices in Italy and evidence that GRP cannot be held solely responsible for the often-reported demand reallocation towards new and in-patent molecules.
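The difference-in-differences logic used here compares the price change in genericated (treated) groups with the change in untreated groups over the same period. A minimal two-period sketch with invented price indices (not the study's data):

```python
import math

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Two-period difference-in-differences on log prices (log-points x 100)."""
    return ((math.log(treated_post) - math.log(treated_pre))
            - (math.log(control_post) - math.log(control_pre))) * 100.0

# Hypothetical average price indices before/after GRP introduction:
# treated prices fell ~18%, control prices ~5%; DiD isolates the extra drop
effect = did_estimate(100.0, 82.0, 100.0, 95.0)
```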
NASA Astrophysics Data System (ADS)
Haiducek, John D.; Welling, Daniel T.; Ganushkina, Natalia Y.; Morley, Steven K.; Ozturk, Dogacan Su
2017-12-01
We simulated the entire month of January 2005 using the Space Weather Modeling Framework (SWMF) with observed solar wind data as input. We conducted this simulation with and without an inner magnetosphere model and tested two different grid resolutions. We evaluated the model's accuracy in predicting Kp, SYM-H, AL, and cross-polar cap potential (CPCP). We find that the model does an excellent job of predicting the SYM-H index, with a root-mean-square error (RMSE) of 17-18 nT. Kp is predicted well during storm time conditions but overpredicted during quiet times by a margin of 1 to 1.7 Kp units. AL is predicted reasonably well on average, with an RMSE of 230-270 nT. However, the model reaches the largest negative AL values significantly less often than the observations. The model tended to overpredict CPCP, with RMSE values on the order of 46-48 kV. We found the results to be insensitive to grid resolution, with the exception of the rate of occurrence for strongly negative AL values. The use of the inner magnetosphere component, however, affected results significantly, with all quantities except CPCP improved notably when the inner magnetosphere model was on.
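The root-mean-square error used for these skill scores is the standard definition. A minimal sketch with invented values (not SWMF output):

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between two equal-length series."""
    if len(predicted) != len(observed):
        raise ValueError("series must have equal length")
    n = len(predicted)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)

# Hypothetical hourly SYM-H values in nT (invented for illustration):
pred = [-10.0, -25.0, -40.0, -30.0, -15.0]
obs = [-12.0, -20.0, -55.0, -28.0, -10.0]
err = rmse(pred, obs)  # dominated by the one large storm-time miss
```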
Direct numerical simulation of cellular-scale blood flow in microvascular networks
NASA Astrophysics Data System (ADS)
Balogh, Peter; Bagchi, Prosenjit
2017-11-01
A direct numerical simulation method is developed to study cellular-scale blood flow in physiologically realistic microvascular networks that are constructed in silico following published in vivo images and data, and are comprised of bifurcating, merging, and winding vessels. The model resolves large deformation of individual red blood cells (RBC) flowing in such complex networks. The vascular walls and deformable interfaces of the RBCs are modeled using the immersed-boundary methods. Time-averaged hemodynamic quantities obtained from the simulations agree quite well with published in vivo data. Our simulations reveal that in several vessels the flow rates and pressure drops could be negatively correlated. The flow resistance and hematocrit are also found to be negatively correlated in some vessels. These observations suggest a deviation from the classical Poiseuille's law in such vessels. The cells are observed to frequently jam at vascular bifurcations resulting in reductions in hematocrit and flow rate in the daughter and mother vessels. We find that RBC jamming results in several orders of magnitude increase in hemodynamic resistance, and thus provides an additional mechanism of increased in vivo blood viscosity as compared to that determined in vitro. Funded by NSF CBET 1604308.
Evidence for a second-order phase transition around 350 K in Ce3Rh4Sn13
NASA Astrophysics Data System (ADS)
Kuo, C. N.; Chen, W. T.; Tseng, C. W.; Hsu, C. J.; Huang, R. Y.; Chou, F. C.; Kuo, Y. K.; Lue, C. S.
2018-03-01
We report the observation of a phase transition in Ce3Rh4Sn13 with transition temperature T*≃350 K by means of synchrotron x-ray powder diffraction, specific heat, electrical resistivity, Seebeck coefficient, thermal conductivity, as well as 119Sn nuclear magnetic resonance (NMR) measurements. The phase transition is characterized by marked features near T* in all measured physical quantities. The lack of thermal hysteresis in the specific heat indicates that the transition is second order in nature. From the NMR analysis, the change in the transferred hyperfine coupling constant for two tin sites has been resolved. This result has been associated with the reduction in the averaged interatomic distance between Ce and Sn atoms, particularly for the Sn2 atoms. It indicates that the movement of the Sn2 atoms, which deforms the high-temperature structure, shortens the Ce-Sn2 bond length at low temperatures. We therefore provide a concise picture in which the observed second-order phase transition at T* in Ce3Rh4Sn13 is characterized by a structural modulation, essentially due to lattice distortions arising from phonon instability.
Phase Averaged Measurements of the Coherent Structure of a Mach Number 0.6 Jet. M.S. Thesis
NASA Technical Reports Server (NTRS)
Emami, S.
1983-01-01
The existence of a large scale structure in a Mach number 0.6, axisymmetric jet of cold air was proven. In order to further characterize the coherent structure, phase averaged measurements of the axial mass velocity, radial velocity, and the product of the two were made. These measurements yield information about the percent of the total fluctuations contained in the coherent structure. These measured values were compared to the total fluctuation levels for each quantity and the result expressed as a percent of the total fluctuation level contained in the organized structure at a given frequency. These measurements were performed for five frequencies (St=0.16, 0.32, 0.474, 0.95, and 1.26). All of the phase averaged measurements required that the jet be artificially excited.
Solution for a bipartite Euclidean traveling-salesman problem in one dimension
NASA Astrophysics Data System (ADS)
Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M.
2018-05-01
The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.
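The final bound has a simple combinatorial core: the 2N edges of an alternating bipartite cycle decompose into two perfect matchings between the red and blue points, each of which costs at least the optimal assignment. A brute-force sketch verifying the inequality on small random instances (illustrative only, not the authors' derivation):

```python
import itertools
import random

def cost(x, y):
    """Squared Euclidean distance in one dimension."""
    return (x - y) ** 2

def optimal_assignment(red, blue):
    """Brute-force minimum-cost perfect matching between the two point sets."""
    n = len(red)
    return min(sum(cost(red[i], blue[s[i]]) for i in range(n))
               for s in itertools.permutations(range(n)))

def optimal_cycle(red, blue):
    """Brute-force minimum-cost alternating (bipartite) Hamiltonian cycle."""
    n = len(red)
    best = float("inf")
    # Fix red[0] as the starting point; enumerate the rest of the tour.
    for p in itertools.permutations(range(1, n)):
        order = (0,) + p
        for q in itertools.permutations(range(n)):
            c = sum(cost(red[order[i]], blue[q[i]])
                    + cost(blue[q[i]], red[order[(i + 1) % n]])
                    for i in range(n))
            best = min(best, c)
    return best

# The cycle's 2n edges split into two perfect matchings, so its cost can
# never be less than twice the optimal assignment cost.
random.seed(0)
for _ in range(20):
    red = [random.random() for _ in range(4)]
    blue = [random.random() for _ in range(4)]
    assert optimal_cycle(red, blue) >= 2 * optimal_assignment(red, blue) - 1e-12
```

For a single red-blue pair the bound is saturated exactly: the cycle traverses the same edge twice, costing precisely twice the assignment.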
Norrie, John; Davidson, Kate; Tata, Philip; Gumley, Andrew
2013-09-01
We investigated the treatment effects reported from a high-quality randomized controlled trial of cognitive behavioural therapy (CBT) for 106 people with borderline personality disorder attending community-based clinics in the UK National Health Service - the BOSCOT trial. Specifically, we examined whether the amount of therapy and therapist competence had an impact on our primary outcome, the number of suicidal acts, using instrumental variables regression modelling. Randomized controlled trial. Participants from across three sites (London, Glasgow, and Ayrshire/Arran) were randomized equally to CBT for personality disorders (CBTpd) plus Treatment as Usual or to Treatment as Usual. Treatment as Usual varied between sites and individuals, but was consistent with routine treatment in the UK National Health Service at the time. CBTpd comprised an average 16 sessions (range 0-35) over 12 months. We used instrumental variable regression modelling to estimate the impact of quantity and quality of therapy received (recording activities and behaviours that took place after randomization) on number of suicidal acts and inpatient psychiatric hospitalization. A total of 101 participants provided full outcome data at 2 years post randomization. The previously reported intention-to-treat (ITT) results showed on average a reduction of 0.91 (95% confidence interval 0.15-1.67) suicidal acts over 2 years for those randomized to CBT. By incorporating the influence of quantity of therapy and therapist competence, we show that this estimate of the effect of CBTpd could be approximately two to three times greater for those receiving the right amount of therapy from a competent therapist. Trials should routinely control for and collect data on both quantity of therapy and therapist competence, which can be used, via instrumental variable regression modelling, to estimate treatment effects for optimal delivery of therapy. 
Such estimates complement rather than replace the ITT results, which are properly the principal analysis results from such trials. © 2013 The British Psychological Society.
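The instrumental-variable idea used here, randomization as an instrument for the quantity of therapy actually received, can be sketched with two-stage least squares. All numbers below are invented for illustration, not BOSCOT data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
z = rng.integers(0, 2, n).astype(float)        # instrument: randomized arm
u = rng.normal(size=n)                          # unobserved confounder
x = 8 * z + 2 * u + rng.normal(size=n)          # sessions received (endogenous)
y = -0.1 * x + 1.5 * u + rng.normal(size=n)     # outcome; true effect is -0.1

def ols(X, y):
    # least-squares coefficients
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
naive = ols(np.column_stack([ones, x]), y)[1]   # biased: x correlates with u
# stage 1: project the endogenous regressor onto the instrument
x_hat = np.column_stack([ones, z]) @ ols(np.column_stack([ones, z]), x)
# stage 2: regress the outcome on the fitted values; close to -0.1
iv = ols(np.column_stack([ones, x_hat]), y)[1]
```

The naive regression is pulled toward the confounded association, while the instrumented estimate recovers the causal slope, which is the sense in which the trial's per-protocol effect can exceed the ITT estimate.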
Leitman, Patricia L.; Hall, D.W.; Langland, M.J.; Chichester, D.C.; Ward, J.R.
1996-01-01
Surface-runoff and ground-water quantity and quality of a 22.1-acre field site were characterized from January 1983 through September 1984, before implementation of terracing and nutrient-management practices. The site, underlain by carbonate rock, was cropland used primarily for the production of corn and alfalfa. Average annual application of nutrients to the 14.4 acres of cornfields was 410 pounds of nitrogen and 110 pounds of phosphorus. About three times more nutrients were applied during the 1984 water year than during the 1983 water year. During the 1984 water year, 714,000 cubic feet of runoff transported 244 tons of suspended sediment, 300 pounds of nitrogen, and 170 pounds of phosphorus. Runoff from storms on frozen ground produced the highest loads of nitrogen. Regression analyses indicate that runoff rates and quantities were controlled by precipitation intensities or quantities and the amount of crop cover, and that mean concentrations of nitrogen for runoff events increased with increased surface-nitrogen applications made prior to runoff. Ground-water levels responded quickly to recharge, with peaks occurring several hours to a day after precipitation. Median concentrations of dissolved nitrate in ground water ranged from 9.2 to 13 milligrams per liter as nitrogen. A lag time of 1 to 3 months was observed between the time that nitrogen was applied to the land surface and the time that local maximums in nitrate concentrations were detected in ground water unaffected by recharge events. About 3 million cubic feet of ground water and an associated 2,200 pounds of nitrate-nitrogen discharged from the site during the study period. For the study period, 42 percent of the precipitation recharged the ground water, 10 percent became runoff, and 48 percent was lost to evapotranspiration. Inputs of nitrogen to the study area were estimated to be 93 percent from manure, 5 percent from commercial fertilizer, and 2 percent from precipitation. 
Nitrogen outputs from the system were estimated to be 38 percent to crop uptake, 39 percent to volatilization, 20 percent to ground- water discharge, and 3 percent to surface runoff.
Bharti, Omesh Kumar; Madhusudana, Shampur Narayan; Gaunta, Pyare Lal; Belludi, Ashwin Yajaman
2016-01-01
Presently the dose of rabies immunoglobulin (RIG), which is an integral part of rabies post-exposure prophylaxis (PEP), is calculated based on body weight, though the recommendation is to infiltrate the wound(s). This practice demands large quantities of RIG, which may be unaffordable to many patients. Against this background, we conducted this study to determine whether the quantity and cost of RIG can be reduced by restricting passive immunization to local infiltration alone and avoiding systemic intramuscular administration, based on the available scientific evidence. Two hundred and sixty-nine category III patients bitten by suspect or confirmed rabid dogs/animals were infiltrated with equine rabies immunoglobulin (ERIG) in and around the wound. The quantity of ERIG used was proportionate to the size and number of wounds, irrespective of body weight. The patients then received a regular course of rabies vaccination by the intradermal route. As against the 363 vials of RIG required for all these cases per the current body-weight-based recommendation, they required only 42 vials of 5 ml RIG. The minimum dose of RIG given was 0.25 ml and the maximum dose was 8 ml. On average, 1.26 ml of RIG was required per patient, at a cost of Rs. 150 ($3). All the patients were followed for 9 months and were healthy and normal at the end of the observation period. With local infiltration, which required small quantities of RIG, RIG could be made available to all patients in times of short supply in the market. A total of 30 (11%) serum samples of patients were tested for rabies virus neutralizing antibodies by the rapid fluorescent focus inhibition test (RFFIT), and all showed antibody titers >0.5 IU/mL by day 14. In no case was the dose higher than that required based on body weight, and no immunosuppression resulted. To conclude, this pilot study shows that local infiltration of RIG should be considered in times of non-availability in the market or unaffordability by poor patients. 
This preliminary study needs to be repeated on a larger scale in other centers, with long-term follow-up, to substantiate our results. PMID:26317441
Observation model and parameter partials for the JPL geodetic (GPS) modeling software 'GPSOMC'
NASA Technical Reports Server (NTRS)
Sovers, O. J.
1990-01-01
The physical models employed in GPSOMC, the modeling module of the GIPSY software system developed at JPL for analysis of geodetic Global Positioning System (GPS) measurements, are described. Details of the various contributions to range and phase observables are given, as well as the partial derivatives of the observed quantities with respect to model parameters. A glossary of parameters is provided to enable persons doing data analysis to identify quantities with their counterparts in the computer programs. The present version is the second revision of the original document, which it supersedes. The modeling is expanded to provide the option of using Cartesian station coordinates; parameters for the time rates of change of universal time and polar motion are also introduced.
Data assimilation problems in glaciology
NASA Astrophysics Data System (ADS)
Shapero, Daniel
Rising sea levels due to mass loss from Greenland and Antarctica threaten to inundate coastal areas the world over. For the purposes of urban planning and hazard mitigation, policy makers would like to know how much sea-level rise can be anticipated in the next century. To make these predictions, glaciologists use mathematical models of ice sheet flow, together with remotely-sensed observations of the current state of the ice sheets. The quantities that are observable over large spatial scales are the ice surface elevation and speed, and the elevation of the underlying bedrock. There are other quantities, such as the viscosity within the ice and the friction coefficient for sliding over the bed, that are just as important in dictating how fast the glacier flows, but that are not observable at large scales using current methods. These quantities can be inferred from observations by using data assimilation methods, applied to a model of glacier flow. In this dissertation, I will describe my work on data assimilation problems in glaciology. My main contributions so far have been: computing the bed stress underneath the three biggest Greenland outlet glaciers; developing additional tools for glacier modeling and data assimilation in the form of the open-source library icepack; and improving the statistical methodology through the use of total variation priors.
NASA Astrophysics Data System (ADS)
Diesch, J.-M.; Drewnick, F.; Klimach, T.; Borrmann, S.
2013-04-01
Measurements of the ambient aerosol, various trace gases and meteorological quantities using a mobile laboratory (MoLa) were performed on the banks of the Lower Elbe in an emission control area (ECA) which is passed by numerous private and commercial marine vessels reaching and leaving the port of Hamburg, Germany. From 25-29 April 2011 a total of 178 vessels were probed at a distance of about 0.8-1.2 km with high temporal resolution. 139 ship emission plumes were of sufficient quality to be analyzed further and to determine emission factors (EFs). Concentrations of aerosol number and mass as well as polycyclic aromatic hydrocarbons (PAH) and black carbon were measured in PM1 and size distribution instruments covered the diameter range from 6 nm up to 32 μm. The chemical composition of the non-refractory submicron aerosol was measured by means of an Aerosol Mass Spectrometer (Aerodyne HR-ToF-AMS). Gas phase species analyzers monitored various trace gases (O3, SO2, NO, NO2, CO2) in the air and a weather station provided wind, precipitation, solar radiation data and other quantities. Together with ship information for each vessel obtained from Automatic Identification System (AIS) broadcasts a detailed characterization of the individual ship types and of features affecting gas and particulate emissions is provided. Particle number EFs (average 2.6 × 10¹⁶ particles kg⁻¹) and PM1 mass EFs (average 2.4 g kg⁻¹) tend to increase with the fuel sulfur content. Observed PM1 composition of the vessel emissions was dominated by organic matter (72%), sulfate (22%) and black carbon (6%) while PAHs only account for 0.2% of the submicron aerosol mass. Measurements of gaseous components showed an increase of SO2 (average EF: 7.7 g kg⁻¹) and NOx (average EF: 53 g kg⁻¹) while O3 decreased when a ship plume reached the sampling site. 
The particle number size distributions of the vessels are generally characterized by a bimodal size distribution, with the nucleation mode in the 10-20 nm diameter range and a combustion aerosol mode centered at about 35 nm while particles > 1 μm were not found. "High particle number emitters" are characterized by a dominant nucleation mode. By contrast, increased particle concentrations around 150 nm primarily occurred for "high black carbon emitters". Classifying the vessels according to their gross tonnage shows a decrease of the number, black carbon and PAH EFs while EFs of SO2, NO, NO2, NOx, AMS species (particulate organics, sulfate) and PM1 mass concentration increase with increasing gross tonnages.
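Fuel-based emission factors of this kind are commonly derived by a carbon balance: the background-subtracted plume enhancement of a species is ratioed to the CO2 enhancement and scaled by the CO2 emitted per kg of fuel. A sketch consistent with the quantities reported above, though not necessarily the authors' exact procedure; the fuel carbon fraction and the example plume ratio are assumptions:

```python
# carbon-balance emission factors from background-subtracted plume enhancements
F_C = 0.87                                # assumed carbon mass fraction of fuel
EF_CO2 = F_C * (44.0 / 12.0) * 1000.0     # g CO2 emitted per kg fuel (~3190)

def emission_factor(delta_x, delta_co2):
    """EF of species X (mass of X per kg fuel), given plume-integrated
    enhancements delta_x and delta_co2 in the same mass-concentration units."""
    return delta_x / delta_co2 * EF_CO2

# hypothetical plume: SO2 enhancement is 0.24% of the CO2 enhancement
ef_so2 = emission_factor(0.0024, 1.0)     # ~7.7 g SO2 per kg fuel
```

The approach assumes essentially complete combustion of fuel carbon to CO2, which is why CO2 serves as the fuel-burn tracer.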
NASA Astrophysics Data System (ADS)
Pritychenko, B.; Mughabghab, S. F.
2012-12-01
We present calculations of neutron thermal cross sections, Westcott factors, resonance integrals, Maxwellian-averaged cross sections and astrophysical reaction rates for 843 ENDF materials using data from the major evaluated nuclear libraries and the European Activation File. Extensive analysis of newly-evaluated neutron reaction cross sections, neutron covariances, and improvements in data processing techniques motivated us to calculate nuclear industry and neutron physics quantities, produce s-process Maxwellian-averaged cross sections and astrophysical reaction rates, systematically calculate uncertainties, and provide additional insights on currently available neutron-induced reaction data. Nuclear reaction calculations are discussed and new results are presented. Due to space limitations, the present paper contains only calculated Maxwellian-averaged cross sections and their uncertainties. The complete data sets for all results are published in the Brookhaven National Laboratory report.
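The Maxwellian-averaged cross section has a standard definition, ⟨σ⟩ = (2/√π)(kT)⁻² ∫ σ(E) E e^(−E/kT) dE. A minimal numerical sketch using simple midpoint quadrature (illustrative only, not the evaluated-library processing used in the paper):

```python
import math

def macs(sigma, kT, n=20000, emax_factor=40.0):
    """Maxwellian-averaged cross section (illustrative sketch):
    <sigma> = (2/sqrt(pi)) * (kT)**-2 * integral sigma(E)*E*exp(-E/kT) dE,
    computed with the midpoint rule on [0, emax_factor*kT]."""
    h = emax_factor * kT / n
    total = 0.0
    for i in range(n):
        E = (i + 0.5) * h          # midpoints avoid the E = 0 endpoint
        total += sigma(E) * E * math.exp(-E / kT)
    total *= h
    return 2.0 / math.sqrt(math.pi) * total / kT ** 2
```

Two sanity checks follow from the definition: a constant cross section gives ⟨σ⟩ = (2/√π)σ, and a 1/v cross section gives ⟨σ⟩ equal to the cross section evaluated at E = kT.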
NASA Astrophysics Data System (ADS)
Barry, Jeremy A.; Robichaud, Guillaume; Bokhart, Mark T.; Thompson, Corbin; Sykes, Craig; Kashuba, Angela D. M.; Muddiman, David C.
2014-12-01
This work describes the coupling of the IR-MALDESI imaging source with the Q Exactive mass spectrometer. IR-MALDESI MSI was used to elucidate the spatial distribution of several HIV drugs in cervical tissues that had been incubated in either a low or high concentration. Serial sections of those analyzed by IR-MALDESI MSI were homogenized and analyzed by LC-MS/MS to quantify the amount of each drug present in the tissue. By comparing the two techniques, an agreement between the average intensities from the imaging experiment and the absolute quantities for each drug was observed. This correlation between these two techniques serves as a prerequisite to quantitative IR-MALDESI MSI. In addition, a targeted MS2 imaging experiment was also conducted to demonstrate the capabilities of the Q Exactive and to highlight the added selectivity that can be obtained with SRM or MRM imaging experiments.
Self-organized dynamics in local load-sharing fiber bundle models.
Biswas, Soumyajyoti; Chakrabarti, Bikas K
2013-10-01
We study the dynamics of a local load-sharing fiber bundle model in two dimensions under an external load (which increases with time at a fixed slow rate) applied at a single point. Due to the local load-sharing nature, the redistributed load remains localized along the boundary of the broken patch. The system then goes to a self-organized state with a stationary average value of load per fiber along the (increasing) boundary of the broken patch (damaged region) and a scale-free distribution of avalanche sizes and other related quantities are observed. In particular, when the load redistribution is only among nearest surviving fiber(s), the numerical estimates of the exponent values are comparable with those of the Manna model. When the load redistribution is uniform along the patch boundary, the model shows a simple mean-field limit of this self-organizing critical behavior, for which we give analytical estimates of the saturation load per fiber values and avalanche size distribution exponent. These are in good agreement with numerical simulation results.
NASA Technical Reports Server (NTRS)
Harrison, G.; Mackenzie, W.
1973-01-01
The lungs of rats exposed to OF2 were examined by light and electron microscopy. The exposures were for 30 to 60 minutes to an average of 4.5 ppm OF2, the minimal lethal dose. Animals were sacrificed after 30 (group 1) and 60 minutes (group 2) exposure and 1 (group 3) and 2 (group 4) hours following 60 minutes exposure. Mild gross changes were observed in groups 3 and 4, but no light microscopic lesions were found. Alterations were noted in all four groups using electron microscopy. These were mostly indicative of fluid change and consisted of blebbing of the endothelial and epithelial layers of the alveolocapillary wall and rarefaction of the cytoplasm of these cells. The lamellar bodies of the Type II cells showed an increasing and consistent loss of matrix structure and density. These fine structural changes increased in quantity and severity as time of exposure or post-exposure period increased. (Modified author abstract)
Spectroscopic Evidence of Formation of Small Polarons in Doped Manganites
NASA Astrophysics Data System (ADS)
Moritomo, Yutaka; Machida, Akihiko; Nakamura, Arao
1998-03-01
Temperature dependence of absorption spectra for thin films of doped manganites R_0.6Sr_0.4MnO_3, where R is a rare-earth atom, has been investigated, systematically changing the averaged ionic radius ⟨r_A⟩ of the perovskite A-site. We have observed a specific absorption band at ~1.5 eV due to optical excitations from small polarons (SP) (Machida et al., submitted). The spectral weight of the SP band increases with decreasing temperature and eventually disappears at the insulator-metal (IM) transition, indicating that SP in the paramagnetic state (T >= T_C) changes into bare electrons (or large polarons) in the ferromagnetic state due to the enhanced one-electron bandwidth W. We further derived important physical quantities, i.e., W, the on-site exchange interaction J and the binding energy Ep of SP, and discuss the material dependence of the stability of SP. This work was supported by a Grant-In-Aid for Scientific Research from the Ministry of Education, Science, Sports and Culture and from PRESTO, Japan Science and Technology Corporation (JST), Japan.
Validation Study of Unnotched Charpy and Taylor-Anvil Impact Experiments using Kayenta
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamojjala, Krishna; Lacy, Jeffrey; Chu, Henry S.
2015-03-01
Validation of a single computational model with multiple available strain-to-failure fracture theories is presented through experimental tests and numerical simulations of the standardized unnotched Charpy and Taylor-anvil impact tests, both run using the same material model (Kayenta). Unnotched Charpy tests are performed on rolled homogeneous armor steel. The fracture patterns using Kayenta’s various failure options that include aleatory uncertainty and scale effects are compared against the experiments. Other quantities of interest include the average value of the absorbed energy and bend angle of the specimen. Taylor-anvil impact tests are performed on Ti6Al4V titanium alloy. The impact speeds of the specimen are 321 m/s and 393 m/s. The goal of the numerical work is to reproduce the damage patterns observed in the laboratory. For the numerical study, the Johnson-Cook failure model is used as the ductile fracture criterion, and aleatory uncertainty is applied to rate-dependence parameters to explore its effect on the fracture patterns.
Determining Near-Bottom Fluxes of Passive Tracers in Aquatic Environments
NASA Astrophysics Data System (ADS)
Bluteau, Cynthia E.; Ivey, Gregory N.; Donis, Daphne; McGinnis, Daniel F.
2018-03-01
In aquatic systems, the eddy correlation method (ECM) provides vertical flux measurements near the sediment-water interface. The ECM independently measures the turbulent vertical velocities w' and the turbulent tracer concentration c' at a high sampling rate (> 1 Hz) to obtain the vertical flux ⟨w'c'⟩ from their time-averaged covariance. This method requires identifying and resolving all the flow-dependent time (and length) scales contributing to ⟨w'c'⟩. With increasingly energetic flows, we demonstrate that the ECM's current technology precludes resolving the smallest flux-contributing scales. To avoid these difficulties, we show that for passive tracers such as dissolved oxygen, ⟨w'c'⟩ can be measured from estimates of two scalar quantities: the rate of turbulent kinetic energy dissipation ɛ and the rate of tracer variance dissipation χc. Applying this approach to both laboratory and field observations demonstrates that ⟨w'c'⟩ is well resolved by the new method and can provide flux estimates in more energetic flows where the ECM cannot be used.
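The ECM's core quantity, the time-averaged covariance of vertical-velocity and tracer fluctuations, reduces to a few lines once the Reynolds decomposition is written out. A generic sketch on synthetic series (not the authors' processing chain, which also handles coordinate rotation and despiking):

```python
import numpy as np

def eddy_flux(w, c):
    """Time-averaged covariance <w'c'> of vertical velocity and tracer
    fluctuations (Reynolds decomposition about the record mean)."""
    wp = w - np.mean(w)
    cp = c - np.mean(c)
    return float(np.mean(wp * cp))

# synthetic example: a tracer perfectly correlated with w plus a mean offset
t = np.arange(1000)
w = np.sin(2 * np.pi * t / 100)      # zero-mean "vertical velocity"
c = 5.0 + 2.0 * w                    # mean concentration 5, fluctuation 2w'
flux = eddy_flux(w, c)               # expect 2 * mean(w**2) = 1.0
```

The paper's point is that this estimator is only unbiased when the sampling resolves all flux-carrying scales, which fails in energetic flows; their alternative infers the flux from the dissipation rates ɛ and χc instead.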
NASA Astrophysics Data System (ADS)
Kemp, Paul F.
1986-10-01
Strandings of cnidaria occur commonly on exposed shorelines. In some years, large numbers of the chondrophoran Velella velella (L.) are stranded on Pacific beaches of North America. The quantity of organic material deposited on an Oregon beach by one of three mass strandings in 1984 was measured. An average of 2573 g ash-free dry weight (AFDW) was deposited per meter of shoreline, representing 1223 g m⁻¹ of carbon and 347 g m⁻¹ of nitrogen. No significant reduction in AFDW m⁻¹ of the decomposing material was observed in the first three days. The drying mat of stranded material was broken apart by wave action after nine days and most of the material was absent after twelve days. Measurement of microbial and primary production in the period following a stranding may help to determine how long nutrients derived from the stranded material are retained in the beach and surf system.
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros
2004-01-01
The MODIS Level-3 optical thickness and effective radius cloud product is a gridded 1 deg. x 1 deg. dataset that is derived from aggregation and subsampling at 5 km of the 1 km resolution Level-2 orbital swath data (Level-2 granules). This study examines the impact of the 5 km subsampling on the mean, standard deviation and inhomogeneity parameter statistics of optical thickness and effective radius. The methodology is simple and consists of estimating mean errors for a large collection of Terra and Aqua Level-2 granules by taking the difference of the statistics at the original and subsampled resolutions. It is shown that the Level-3 sampling does not affect the various quantities investigated to the same degree, with second order moments suffering greater subsampling errors, as expected. Mean errors drop dramatically when averages over a sufficient number of regions (e.g., monthly and/or latitudinal averages) are taken, pointing to a dominance of errors that are of random nature. When histograms built from subsampled data with the same binning rules as in the Level-3 dataset are used to reconstruct the quantities of interest, the mean errors do not deteriorate significantly. The results in this paper provide guidance to users of MODIS Level-3 optical thickness and effective radius cloud products on the range of errors due to subsampling they should expect, and perhaps account for, in scientific work with this dataset. In general, subsampling errors should not be a serious concern when moderate temporal and/or spatial averaging is performed.
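The effect of keeping one 1 km pixel in 25 can be illustrated on a synthetic field (the gamma-distributed "optical thickness" below is an arbitrary stand-in, not MODIS data): the per-grid-box error is essentially random, so it shrinks when many boxes are averaged, mirroring the study's finding:

```python
import numpy as np

rng = np.random.default_rng(1)

def box_error(field):
    # error in the grid-box mean from keeping 1 pixel in 25 ("5 km" sampling)
    return field[::5, ::5].mean() - field.mean()

# one hypothetical grid box of 1 km "optical thickness" pixels
tau = rng.gamma(shape=2.0, scale=5.0, size=(100, 100))
single_box_err = box_error(tau)

# averaging over many independent boxes: random errors largely cancel
many = [box_error(rng.gamma(2.0, 5.0, size=(100, 100))) for _ in range(100)]
avg_err = float(np.mean(many))
```

Second-order statistics (standard deviation, inhomogeneity) computed from the subsample suffer larger relative errors than the mean, consistent with the paper's remark about second moments.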
Current databases on biological variation: pros, cons and progress.
Ricós, C; Alvarez, V; Cava, F; García-Lario, J V; Hernández, A; Jiménez, C V; Minchinela, J; Perich, C; Simón, M
1999-11-01
A database with reliable information to derive definitive analytical quality specifications for a large number of clinical laboratory tests was prepared in this work. This was achieved by comparing and correlating descriptive data and relevant observations with the biological variation information, an approach that had not been used in the previous efforts of this type. The material compiled in the database was obtained from published articles referenced in BIOS, CURRENT CONTENTS, EMBASE and MEDLINE using "biological variation & laboratory medicine" as key words, as well as books and doctoral theses provided by their authors. The database covers 316 quantities and reviews 191 articles, fewer than 10 of which had to be rejected. The within- and between-subject coefficients of variation and the subsequent desirable quality specifications for precision, bias and total error for all the quantities accepted are presented. Sex-related stratification of results was justified for only four quantities and, in these cases, quality specifications were derived from the group with lower within-subject variation. For certain quantities, biological variation in pathological states was higher than in the healthy state. In these cases, quality specifications were derived only from the healthy population (most stringent). Several quantities (particularly hormones) have been treated in very few articles and the results found are highly discrepant. Therefore, professionals in laboratory medicine should be strongly encouraged to study the quantities for which results are discrepant, the 90 quantities described in only one paper and the numerous quantities that have not been the subject of study.
Statistics of Statisticians: Critical Mass of Statistics and Operational Research Groups
NASA Astrophysics Data System (ADS)
Kenna, Ralph; Berche, Bertrand
Using a recently developed model, inspired by mean field theory in statistical physics, and data from the UK's Research Assessment Exercise, we analyse the relationship between the qualities of statistics and operational research groups and the quantities of researchers in them. Similar to other academic disciplines, we provide evidence for a linear dependency of quality on quantity up to an upper critical mass, which is interpreted as the average maximum number of colleagues with whom a researcher can communicate meaningfully within a research group. The model also predicts a lower critical mass, which research groups should strive to achieve to avoid extinction. For statistics and operational research, the lower critical mass is estimated to be 9 ± 3. The upper critical mass, beyond which research quality does not significantly depend on group size, is 17 ± 6.
Numerical investigation of supersonic turbulent boundary layers with high wall temperature
NASA Technical Reports Server (NTRS)
Guo, Y.; Adams, N. A.
1994-01-01
A direct numerical approach has been developed to simulate supersonic turbulent boundary layers. The mean flow quantities are obtained by solving the parabolized Reynolds-averaged Navier-Stokes equations (globally). Fluctuating quantities are computed locally with a temporal direct numerical simulation approach, in which nonparallel effects of boundary layers are partially modeled. Preliminary numerical results obtained at the free-stream Mach numbers 3, 4.5, and 6 with hot-wall conditions are presented. Approximately 5 million grid points are used in all three cases. The numerical results indicate that compressibility effects on turbulent kinetic energy, in terms of dilatational dissipation and pressure-dilatation correlation, are small. Due to the hot-wall conditions the results show significant low Reynolds number effects and large streamwise streaks. Further simulations with a bigger computational box or a cold-wall condition are desirable.
Hyperbolicity measures democracy in real-world networks
NASA Astrophysics Data System (ADS)
Borassi, Michele; Chessa, Alessandro; Caldarelli, Guido
2015-09-01
In this work, we analyze the hyperbolicity of real-world networks, a geometric quantity that measures if a space is negatively curved. We provide two improvements in our understanding of this quantity: first of all, in our interpretation, a hyperbolic network is "aristocratic", since few elements "connect" the system, while a non-hyperbolic network has a more "democratic" structure with a larger number of crucial elements. The second contribution is the introduction of the average hyperbolicity of the neighbors of a given node. Through this definition, we outline an "influence area" for the vertices in the graph. We show that in real networks the influence area of the highest degree vertex is small in what we define "local" networks (i.e., social or peer-to-peer networks), and large in "global" networks (i.e., power grid, metabolic networks, or autonomous system networks).
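Gromov hyperbolicity is usually computed from the four-point condition: for every quadruple of vertices, sort the three pairwise distance sums and take half the difference of the two largest; the network's δ is the maximum over quadruples. A brute-force sketch (far slower than the algorithms such studies rely on, but fine for small graphs):

```python
import itertools
from collections import deque

def bfs_dist(adj, s):
    # single-source shortest paths in an unweighted graph
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def hyperbolicity(adj):
    # four-point condition, brute force over all quadruples
    dist = {u: bfs_dist(adj, u) for u in adj}
    delta = 0.0
    for w, x, y, z in itertools.combinations(adj, 4):
        s = sorted([dist[w][x] + dist[y][z],
                    dist[w][y] + dist[x][z],
                    dist[w][z] + dist[x][y]])
        delta = max(delta, (s[2] - s[1]) / 2.0)
    return delta

path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}         # a tree: delta = 0
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # 4-cycle: delta = 1
```

Trees are 0-hyperbolic ("aristocratic" in the paper's reading, since removing a central vertex disconnects much of the graph), whereas cycles and grids have large δ relative to their diameter.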
Quantifying the European Strategic Airlift Gap
2013-06-01
Lindstrom, 2007: 41). There is a reason a vast majority of freight is moved via sea and/or land world-wide. Even with relatively slow average speeds of...Some areas of operation are landlocked, severely hampering the relevance of sealift (Lindstrom, 2007: 41). Operations in Kosovo and Afghanistan...Manufacturer Lockheed Martin Quantity in NATO Nations B model: Greece (5), Romania (4) and Turkey (6); E model: Canada (10), Poland (5), Turkey
Grizelle Gonzalez; Y. Li; X. Zou
2007-01-01
Hurricanes are a common disturbance in the Caribbean, striking the island of Puerto Rico on average every 21 years. Hurricane Hugo (1989) distributed the canopy litter onto the forest floor changing the chemistry and quantity of litter inputs to the soil. In this study, we determined the effect of inorganic fertilization on earthworm abundance, biomass, and species...
Measuring Concentrations of Particulate 140La in the Air
Okada, Colin E.; Kernan, Warnick J.; Keillor, Martin E.; ...
2016-05-01
Air sampling systems were deployed to measure the concentration of radioactive material in the air during the Full-Scale Radiological Dispersal Device experiments. The air samplers were positioned 100-600 meters downwind of the release point. The filters were collected immediately and analyzed in the field. Quantities for total activity collected on the air filters are reported along with additional information to compute the average or integrated air concentrations.
Nucleation of Crystals From Solution in Microgravity (USML-1 Glovebox (GBX) Investigation)
NASA Technical Reports Server (NTRS)
Kroes, Roger L.; Reiss, Donald A.; Lehoczky, Sandor L.
1994-01-01
A new method for initiating nucleation from solutions in microgravity which avoids nucleation on container walls and other surfaces is described. This method consists of injecting a small quantity of highly concentrated, heated solution into the interior of a slightly supersaturated, cooler host growth solution. It was tested successfully on USML-1, producing a large number of LAP crystals whose longest dimension averaged 1 mm.
Determination of picomole quantities of acetylcholine and choline in physiologic salt solutions.
Gilberstadt, M L; Russell, J A
1984-04-01
An assay capable of detecting tens-of-picomole quantities of choline and acetylcholine in milliliter volumes of a physiological salt solution has been developed. Silica column chromatography was used to bind and separate 10-3000 pmol [14C]choline and [14C]acetylcholine standards made up in 3 ml of a bicarbonate-buffered Krebs-Ringer solution. The silica columns bound 95-98% of both choline and acetylcholine. Of the bound choline 84-87% was eluted in 1.5 ml of 0.075 N HCl, whereas 95-98% of the bound acetylcholine was eluted in a subsequent wash with 1.5 ml of 0.030 N HCl in 10% 2-butanone. Vacuum centrifugation of the eluants yielded small white pellets with losses of choline and acetylcholine of only 1%. Dried pellets of unlabeled choline and acetylcholine standards were assayed radioenzymatically using [gamma-32P]ATP, choline kinase, and acetylcholinesterase. The net disintegrations per minute of choline[32P]phosphate product was proportional to both the acetylcholine (10-3000 pmol) and choline (30-3000 pmol) standards. The "limit sensitivity" was 8.5 pmol for acetylcholine and 11.4 pmol for choline. Cross-contamination of the choline assay by acetylcholine averaged 1.3%, whereas contamination of the acetylcholine assay by choline averaged 3.1%.
NASA Astrophysics Data System (ADS)
Tsuda, Shin-Ichi; Nakano, Yuta; Watanabe, Satoshi
2017-11-01
Recently, several studies using Molecular Dynamics (MD) simulation have investigated Ostwald ripening of cavitation bubbles in a finite space. The previous studies focused on a characteristic length of bubbles as a spatially-averaged quantity, but the behavior of individual bubbles was not investigated in detail. The objective of this study is to clarify the characteristics of individual bubble behavior during Ostwald ripening, and we conducted MD simulation of a Lennard-Jones fluid in a semi-confined space. As a result, the time dependence of the characteristic bubble length as a spatially-averaged quantity suggested that the driving force of the Ostwald ripening is Evaporation/Condensation (EC) across the liquid-vapor surface, the same result as in the previous works. The radius change of the relatively larger bubbles also showed the same tendency as a classical EC model. However, bubbles sufficiently smaller than the critical size, e.g., bubbles just before collapsing, showed a characteristic different from the classical EC model. Those smaller bubbles tend to be limited by mechanical non-equilibrium, in which the viscosity of the liquid is dominant, rather than by EC across the liquid-vapor surface. This work was supported by JSPS KAKENHI Grant Number JP16K06085.
Bee pollination increases yield quantity and quality of cash crops in Burkina Faso, West Africa.
Stein, Katharina; Coulibaly, Drissa; Stenchly, Kathrin; Goetze, Dethardt; Porembski, Stefan; Lindner, André; Konaté, Souleymane; Linsenmair, Eduard K
2017-12-18
Mutualistic biotic interactions, such as those among flowering plants and their animal pollinators, are a key component of biodiversity. Pollination, especially by insects, is a key element in ecosystem functioning and hence constitutes an ecosystem service of global importance. Not only is the sexual reproduction of plants ensured, but yields are also stabilized and the genetic variability of crops maintained, counteracting inbreeding depression and facilitating system resilience. Amid rapid environmental change, the demand for food and income security is increasing, especially in sub-Saharan communities that are highly dependent on small-scale agriculture. By combining exclusion experiments, pollinator surveys and field manipulations, this study quantifies for the first time the contribution of bee pollinators to smallholders' production of the major cash crops, cotton and sesame, in Burkina Faso. Pollination by honeybees and wild bees significantly increased yield quantity and quality, on average by up to 62%, while exclusion of pollinators caused average yield gaps of 37% in cotton and 59% in sesame. Self-pollination revealed inbreeding depression effects on fruit set and low germination rates in the F1 generation. Our results highlight potential negative consequences of any pollinator decline, provoking risks to agriculture and compromising crop yields in sub-Saharan West Africa.
Auvinen, Juha P; Tammelin, Tuija H; Taimela, Simo P; Zitting, Paavo J; Järvelin, Marjo-Riitta; Taanila, Anja M; Karppinen, Jaro I
2010-04-01
The quantity and quality of adolescents' sleep may have changed due to new technologies. At the same time, the prevalence of neck, shoulder and low back pain has increased. However, only a few studies have investigated insufficient quantity and quality of sleep as possible risk factors for musculoskeletal pain among adolescents. The aim of the study was to assess whether insufficient quantity and quality of sleep are risk factors for neck (NP), shoulder (SP) and low back pain (LBP). A 2-year follow-up survey among adolescents aged 15-19 years was carried out (2001-2003) in a subcohort of the Northern Finland Birth Cohort 1986 (n = 1,773). The outcome measures were 6-month period prevalences of NP, SP and LBP. The quantity and quality of sleep were categorized as sufficient, intermediate or insufficient, based on average hours spent sleeping and on whether or not the subject suffered from nightmares, tiredness and sleeping problems. The odds ratios (OR) and 95% confidence intervals (CI) for having musculoskeletal pain were obtained through logistic regression analysis, adjusted for previously suggested risk factors and finally adjusted for specific pain status at 16 years. The 6-month period prevalences of neck, shoulder and low back pain were higher at the age of 18 than at 16 years. Insufficient quantity or quality of sleep at 16 years predicted NP in both girls (OR 4.4; CI 2.2-9.0) and boys (2.2; 1.2-4.1). Similarly, insufficient sleep at 16 years predicted LBP in both girls (2.9; 1.7-5.2) and boys (2.4; 1.3-4.5), but SP only in girls (2.3; 1.2-4.4). After adjustment for pain status, insufficient sleep at 16 years significantly predicted only NP (3.2; 1.5-6.7) and LBP (2.4; 1.3-4.3) in girls. Insufficient sleep quantity or quality was an independent risk factor for NP and LBP among girls. Future studies should test whether interventions aimed at improving sleep characteristics are effective in the prevention and treatment of musculoskeletal pain.
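The reported ORs and CIs live on the log-odds scale of the logistic model: OR = exp(beta) and the 95% CI is exp(beta ± 1.96·SE). As a hedged illustration, the implied coefficient and standard error can be recovered from one of the reported intervals (girls' NP, OR 4.4, CI 2.2-9.0); small round-off discrepancies are expected:

```python
import math

# Reported figure for girls' neck pain: OR 4.4 with 95% CI 2.2-9.0.
OR, lo, hi = 4.4, 2.2, 9.0

beta = math.log(OR)                               # log-odds coefficient
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # implied standard error

# Reconstruct the interval from beta and se; it should match up to rounding.
ci = (math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se))
```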
Modeling of turbulent supersonic H2-air combustion with a multivariate beta PDF
NASA Technical Reports Server (NTRS)
Baurle, R. A.; Hassan, H. A.
1993-01-01
Recent calculations of turbulent supersonic reacting shear flows using an assumed multivariate beta PDF (probability density function) resulted in reduced production rates and a delay in the onset of combustion. This result is not consistent with available measurements. The present research explores two possible reasons for this behavior: use of PDF's that do not yield Favre averaged quantities, and the gradient diffusion assumption. A new multivariate beta PDF involving species densities is introduced which makes it possible to compute Favre averaged mass fractions. However, using this PDF did not improve comparisons with experiment. A countergradient diffusion model is then introduced. Preliminary calculations suggest this to be the cause of the discrepancy.
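For context, the Favre (density-weighted) average that the new species-density PDF is designed to recover is defined as f_tilde = ⟨ρf⟩/⟨ρ⟩, which differs from the plain Reynolds average whenever f correlates with density. A small synthetic illustration (the sample values are invented, not drawn from the calculations above):

```python
import numpy as np

# Synthetic samples: a mass fraction Y positively correlated with density rho
# (all values invented for illustration).
rng = np.random.default_rng(3)
rho = rng.uniform(0.5, 1.5, 10_000)
Y = 0.2 + 0.1 * rho + rng.normal(0.0, 0.01, rho.size)

favre = np.mean(rho * Y) / np.mean(rho)   # Favre (density-weighted) average
reynolds = np.mean(Y)                      # plain Reynolds average

# With positive rho-Y correlation, the Favre average exceeds the Reynolds one.
```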
Decoherence-induced conductivity in the one-dimensional Anderson model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stegmann, Thomas; Wolf, Dietrich E.; Ujsághy, Orsolya
We study the effect of decoherence on electron transport in the one-dimensional Anderson model by means of a statistical model [1, 2, 3, 4, 5]. In this model decoherence bonds, at which the electron phase is randomized completely, are randomly distributed within the system. Afterwards, the transport quantity of interest (e.g. resistance or conductance) is ensemble averaged over the decoherence configurations. When the resistance of the sample is averaged, the calculation can be performed analytically. In the thermodynamic limit, we find a decoherence-driven transition from the quantum-coherent localized regime to the Ohmic regime at a critical decoherence density, which is determined by the second-order generalized Lyapunov exponent (GLE) [4].
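A toy numerical version of the statistical model can illustrate the averaging procedure (a hedged sketch with assumed formulas, not the authors' exact model): decoherence bonds are drawn with density p, the resistance of each coherent segment of length L grows exponentially as expm1(L/ξ) with localization length ξ, and segments add ohmically before ensemble averaging:

```python
import numpy as np

# Toy sketch (assumed formulas, not the authors' exact model).
rng = np.random.default_rng(42)

def sample_resistance(N=1000, p=0.05, xi=20.0):
    """Total resistance for one random configuration of decoherence bonds."""
    bonds = np.flatnonzero(rng.random(N) < p)     # bond positions, density p
    edges = np.concatenate(([0], bonds, [N]))
    lengths = np.diff(edges)                      # coherent segment lengths
    # Localized segments: resistance grows exponentially with length;
    # full phase randomization at each bond means segments add ohmically.
    return float(np.sum(np.expm1(lengths / xi)))

# Ensemble average over decoherence configurations, as in the paper.
R_avg = np.mean([sample_resistance() for _ in range(2000)])
```

At high bond density the average resistance grows linearly with system size (Ohmic regime); at low density the exponential segment resistances dominate (localized regime), which is the transition the abstract describes.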
Yang, Bin; Xue, Quan-hong; Chen, Zhan-quan; Guo, Zhi-ying; Zhang, Xiao-lu; Zhou, Yong-qiang; Xu, Ying-jun; Sun, De-fu
2008-08-01
In order to probe the effects of artificial vegetation rehabilitation on soil actinomycetes, dilution plate and agar block methods were used to investigate the ecological distribution and antimicrobial effects of actinomycetes in sandy soil in the Shazhuyu area of Qinghai after artificial vegetation restoration. The results showed that with vegetation rehabilitation and the improvement of vegetation coverage on alpine sandy dry land, the quantity of soil actinomycetes increased significantly, being 145.4% higher in the grassland transferred from farmland than in sandy land. The quantity of soil Micromonospora in grassland transferred from farmland was about six times that in sandy land. The average selection rate of antimicrobial actinomycetes increased greatly, with the antimicrobial actinomycetes in the soil of grassland transferred from farmland, the antibacterial actinomycetes in the soil of natural grassland, and the pathogenic-fungus-resistant actinomycetes in the soil of forestland being approximately 2, 3.2 and 1.5 times those in the soil of sandy land, respectively. Vegetation coverage and soil nutrients had great influence on the quantities of actinomycetes and antimicrobial actinomycetes. The contents of soil organic matter and alkali-hydrolyzable nitrogen and the yield of fresh grasses had significant correlations with the quantities of actinomycetes (P < 0.01), and the content of soil organic matter and the yield of fresh grasses significantly correlated with the strain numbers of antimicrobial actinomycetes (P < 0.01). Furthermore, vegetation coverage and the contents of soil total nitrogen, total phosphorus, total potassium, total salt, and available potassium had significant correlations with the total quantities of actinomycetes, Streptomycetes, and Micromonospora (P < 0.05).
NASA Astrophysics Data System (ADS)
Zender, J. J.; Kariyappa, R.; Giono, G.; Bergmann, M.; Delouille, V.; Damé, L.; Hochedez, J.-F.; Kumara, S. T.
2017-09-01
Context. The magnetic field plays a dominant role in the solar irradiance variability. Determining the contribution of various magnetic features to this variability is important in the context of heliospheric studies and Sun-Earth connection. Aims: We studied the solar irradiance variability and its association with the underlying magnetic field for a period of five years (January 2011-January 2016). We used observations from the Large Yield Radiometer (LYRA), the Sun Watcher with Active Pixel System detector and Image Processing (SWAP) on board PROBA2, the Atmospheric Imaging Assembly (AIA), and the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO). Methods: The Spatial Possibilistic Clustering Algorithm (SPoCA) is applied to the extreme ultraviolet (EUV) observations obtained from the AIA to segregate coronal features by creating segmentation maps of active regions (ARs), coronal holes (CHs) and the quiet sun (QS). Further, these maps are applied to the full-disk SWAP intensity images and the full-disk (FD) HMI line-of-sight (LOS) magnetograms to isolate the SWAP coronal features and photospheric magnetic counterparts, respectively. We then computed full-disk and feature-wise averages of EUV intensity and line of sight (LOS) magnetic flux density over ARs/CHs/QS/FD. The variability in these quantities is compared with that of LYRA irradiance values. Results: Variations in the quantities resulting from the segmentation, namely the integrated intensity and the total magnetic flux density of ARs/CHs/QS/FD regions, are compared with the LYRA irradiance variations. We find that the EUV intensity over ARs/CHs/QS/FD is well correlated with the underlying magnetic field. In addition, variations in the full-disk integrated intensity and magnetic flux density values are correlated with the LYRA irradiance variations. 
Conclusions: This study demonstrates the use of segmented coronal features observed in the EUV wavelengths as proxies to isolate the underlying magnetic structures. Sophisticated feature identification and segmentation tools are important in providing more insight into the role of various magnetic features in both the short- and long-term changes in the solar irradiance. The movie associated with Fig. 2 is available at http://www.aanda.org
Newtonian Gravity Reformulated
NASA Astrophysics Data System (ADS)
Dehnen, H.
2018-01-01
With reference to MOND we propose a reformulation of Newton's theory of gravity in the sense of static electrodynamics, introducing a "material" quantity in analogy to the dielectric "constant". We propose that this quantity is induced by vacuum polarizations generated by the gravitational field itself. In this framework the flat rotation curves of the spiral galaxies can be explained, and the observed high velocities near the center of the galaxy should be reconsidered.
Pseudo-radar algorithms with two extremely wet months of disdrometer data in the Paris area
NASA Astrophysics Data System (ADS)
Gires, A.; Tchiguirinskaia, I.; Schertzer, D.
2018-05-01
Disdrometer data collected during the two extremely wet months of May and June 2016 at the Ecole des Ponts ParisTech are used to gain insight into radar algorithms. The rain rate and pseudo-radar quantities (horizontal and vertical reflectivity, specific differential phase shift) are all estimated over several durations with the help of drop size distributions (DSD) collected at 30 s time steps. The pseudo-radar quantities are defined with simplifying hypotheses, in particular on the DSD homogeneity. First, it appears that the parameters of the standard radar relations Zh - R, R - Kdp and R - Zh - Zdr for these pseudo-radar quantities exhibit strong variability between events and even within an event. Second, an innovative methodology is implemented that relies on checking the ability of a given algorithm to reproduce the scale-invariant multifractal behaviour (on scales from 30 s to a few hours) observed on rainfall time series. In this framework, the classical hybrid model (Zh - R for low rain rates and R - Kdp for high ones) performs best, as do the local estimates of the radar relations' parameters. However, we emphasise that, owing to the hypotheses on which they rely, these observations cannot be straightforwardly extended to real radar quantities.
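For reference, a Zh - R relation of the kind whose parameters the study finds so variable is the power law Zh = a R^b; the sketch below inverts it using the classic Marshall-Palmer values a = 200, b = 1.6 (an assumption for illustration, not the locally estimated parameters):

```python
import math

def rain_rate_from_reflectivity(dbz, a=200.0, b=1.6):
    """Invert Zh = a * R**b; dbz in dBZ, returns rain rate R in mm/h."""
    z_lin = 10.0 ** (dbz / 10.0)   # dBZ -> linear reflectivity (mm^6 m^-3)
    return (z_lin / a) ** (1.0 / b)

# 30 dBZ corresponds to light-to-moderate rain (~2.7 mm/h with these a, b).
r30 = rain_rate_from_reflectivity(30.0)
```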
Seashols-Williams, Sarah; Green, Raquel; Wohlfahrt, Denise; Brand, Angela; Tan-Torres, Antonio Limjuco; Nogales, Francy; Brooks, J Paul; Singh, Baneshwar
2018-05-17
Sequencing and classification of microbial taxa within forensically relevant biological fluids has the potential for applications in the forensic science and biomedical fields. The quantity of bacterial DNA from human samples is currently estimated based on quantity of total DNA isolated. This method can miscalculate bacterial DNA quantity due to the mixed nature of the sample, and consequently library preparation is often unreliable. We developed an assay that can accurately and specifically quantify bacterial DNA within a mixed sample for reliable 16S ribosomal DNA (16S rDNA) library preparation and high throughput sequencing (HTS). A qPCR method was optimized using universal 16S rDNA primers, and a commercially available bacterial community DNA standard was used to develop a precise standard curve. Following qPCR optimization, 16S rDNA libraries from saliva, vaginal and menstrual secretions, urine, and fecal matter were amplified and evaluated at various DNA concentrations; successful HTS data were generated with as low as 20 pg of bacterial DNA. Changes in bacterial DNA quantity did not impact observed relative abundances of major bacterial taxa, but relative abundance changes of minor taxa were observed. Accurate quantification of microbial DNA resulted in consistent, successful library preparations for HTS analysis. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Latitudinal variation of the solar limb-darkening function
NASA Astrophysics Data System (ADS)
Kroll, Ronald J.
1994-06-01
In an effort to monitor solar limb-darkening variability, the continuum radiation intensity at 550 nm over the outermost 32 arcseconds of the limb is measured at various solar latitudes. Using the Finite Fourier Transform Definition, the edge location of the Sun is determined for a series of scan amplitudes at each of the observed positions. The differential radius is the difference between edge locations for a fixed pair of scan amplitudes, and is a quantity which characterizes the slope of the solar limb-darkening function. Utilizing the differential radius, such observations offer the possibility of revealing a latitudinal variation of the photospheric temperature gradient and could provide clues to the mechanisms and variability of energy transport out of the Sun. These observations began in 1988 with measurements at 24 separate limb positions and include observations since 1990 when 36 positions were observed. The daily differential radius measurements for each position that is free of contamination from solar active regions are weighted according to the corresponding daily variance and averaged to obtain an overall value at each position for the observing season. The results indicate that during the 1991 observing season, there were regions near 20 deg N latitude and 30 deg S latitude on the Sun where the differential radius values were significantly greater than surrounding regions. This suggests that perturbations to the temperature gradient occur in latitudinally localized regions and persist for at least several months. It is shown that this phenomenon could have the same origin as the observed latitudinal variations of surface temperature and could also speak to the question of a lag time between the cycles of irradiation and magnetic variation.
The Maximum Free Magnetic Energy Allowed in a Solar Active Region
NASA Technical Reports Server (NTRS)
Moore, Ronald L.; Falconer, David A.
2009-01-01
Two whole-active-region magnetic quantities that can be measured from a line-of-sight magnetogram are ^L WL_SG, a gauge of the total free energy in an active region's magnetic field, and ^L Phi, a measure of the active region's total magnetic flux. From these two quantities measured from 1865 SOHO/MDI magnetograms that tracked 44 sunspot active regions across the 0.5 R_Sun central disk, together with each active region's observed production of CMEs, X flares, and M flares, Falconer et al. (2009, ApJ, submitted) found that (1) active regions have a maximum attainable free magnetic energy that increases with the magnetic size ^L Phi of the active region, (2) in (log ^L WL_SG, log ^L Phi) space, CME/flare-productive active regions are concentrated in a straight-line main sequence along which the free magnetic energy is near its upper limit, and (3) X and M flares are restricted to large active regions. Here, from (a) these results, (b) the observation that even the greatest X flares produce at most only subtle changes in active region magnetograms, and (c) measurements from MSFC vector magnetograms and from MDI line-of-sight magnetograms showing that practically all sunspot active regions have nearly the same area-averaged magnetic field strength, <B> = Phi/A approximately equal to 300 G, where Phi is the active region's total photospheric flux of field stronger than 100 G and A is the area of that flux, we infer that (1) the maximum allowed ratio of an active region's free magnetic energy to its potential-field energy is 1, and (2) any one CME/flare eruption releases no more than a small fraction (less than 10%) of the active region's free magnetic energy. This work was funded by NASA's Heliophysics Division and NSF's Division of Atmospheric Sciences.
NASA Astrophysics Data System (ADS)
Bitew, M. M.; Jackson, C. R.; Vache, K. B.; Griffiths, N.; Starr, G.; McDonnell, J.; Rau, B.; Younger, S. E.; Fouts, K.
2016-12-01
Intensively managed loblolly pine is a candidate species for biofuel feedstock production in the southeastern Coastal Plain of the United States. However, the water quantity and quality effects of high-intensity, short-rotation silviculture are largely unknown. Here we evaluate the potential hydrologic and water quality impacts of biofuel-induced land use changes based on model scenarios developed using existing forest BMPs and industry-wide experience. We quantified the effect of bioenergy production scenarios on each of the water balance components by applying an integrated, physically based, distributed watershed modeling system and multi-objective assessment functions that accurately describe the flow regimes, water quality, and isotopic observations from three experimental headwater watersheds of Fourmile Creek at the Savannah River Site, SC. The model incorporates optimized travel times of groundwater flowpaths and flow control processes in the riparian region, allowing water quality analysis of groundwater-dominated watershed systems. We compared five different short-rotation pine management scenarios ranging from 35-year (low intensity) to 10-year (high intensity) rotations and a mixture of forestry and agriculture/pasture production practices. Simulation results, based on long-term climate records, revealed that complete conversion to short-rotation woody crops would have a negligible effect on water budget components: <2% decrease in streamflow, <1.5% increase in actual evapotranspiration, an average 0.5 m fall in the groundwater table, and no change in subsurface flow due to biofuel production. Simulation results for mixed 50% agriculture/pasture and 50% short-rotation woody crops showed the largest deviation in water budget components compared to the reference condition. Analysis of extreme stream flows showed that the largest effect occurred in the low intensity mixed land use scenario. 
The smallest effect was in the low intensity biomass production scenario with a 0.5% increase in a 100 year return event.
NASA Astrophysics Data System (ADS)
Rai, R. K.; Berg, L. K.; Kosovic, B.; Mirocha, J. D.; Pekour, M. S.; Shaw, W. J.
2015-12-01
Resolving the finest turbulent scales present in the lower atmosphere using numerical simulations helps to study the processes that occur in the atmospheric boundary layer, such as the turbulent inflow condition to a wind plant and the generation of the wake behind wind turbines. This work employs several nested domains in the WRF-LES framework to simulate conditions in a convectively driven, cloud-free boundary layer at an instrumented field site in complex terrain. The innermost LES domain (30 m spatial resolution) receives the boundary forcing from two other coarser-resolution LES outer domains, which in turn receive boundary conditions from two WRF mesoscale domains. Wind and temperature records from sonic anemometers mounted at two vertical levels (30 m and 60 m) are compared with the LES results in terms of first and second statistical moments as well as power spectra and distributions of wind velocity. Of the two widely used boundary layer parameterizations (MYNN and YSU) tested in the WRF mesoscale domains, the MYNN scheme shows slightly better agreement with the observations for some quantities, such as time-averaged velocity and Turbulent Kinetic Energy (TKE). However, LES driven by WRF mesoscale simulations using either parameterization yields similar velocity spectra and distributions of velocity. For each component of the wind velocity, WRF-LES power spectra are found to be comparable to the spectra derived from the measured data (for the frequencies that are accurately represented by WRF-LES). Furthermore, the analysis of LES results shows a noticeable variability of the mean and variance even over small horizontal distances that would be considered sub-grid scale in mesoscale simulations. This observed statistical variability in space and time can be utilized to further analyze the turbulence quantities over a heterogeneous surface and to improve the turbulence parameterization in the mesoscale model.
Exotic plant invasion alters nitrogen dynamics in an arid grassland
Evans, R.D.; Rimer, R.; Sperry, L.; Belnap, J.
2001-01-01
The introduction of nonnative plant species may decrease ecosystem stability by altering the availability of nitrogen (N) for plant growth. Invasive species can impact N availability by changing litter quantity and quality, rates of N2-fixation, or rates of N loss. We quantified the effects of invasion by the annual grass Bromus tectorum on N cycling in an arid grassland on the Colorado Plateau (USA). The invasion occurred in 1994 in two community types in an undisturbed grassland. This natural experiment allowed us to measure the immediate responses following invasion without the confounding effects of previous disturbance. Litter biomass and the C:N and lignin:N ratios were measured to determine the effects on litter dynamics. Long-term soil incubations (415 d) were used to measure potential microbial respiration and net N mineralization. Plant-available N was quantified for two years in situ with ion-exchange resin bags, and potential changes in rates of gaseous N loss were estimated by measuring denitrification enzyme activity. Bromus invasion significantly increased litter biomass, and Bromus litter had significantly greater C:N and lignin:N ratios than did native species. The change in litter quantity and chemistry decreased potential rates of net N mineralization in sites with Bromus by decreasing nitrogen available for microbial activity. Inorganic N was 50% lower on Hilaria sites with Bromus during the spring of 1997, but no differences were observed during 1998. The contrasting differences between years are likely due to moisture availability; spring precipitation was 15% greater than average during 1997, but 52% below average during spring of 1998. Bromus may cause a short-term decrease in N loss by decreasing substrate availability and denitrification enzyme activity, but N loss is likely to be greater in invaded sites in the long term because of increased fire frequency and greater N volatilization during fire. 
We hypothesize that the introduction of Bromus in conjunction with land-use change has established a series of positive feedbacks that will decrease N availability and alter species composition.
Effect of a gymnastics program on sleep characteristics in pregnant women.
Kocsis, Ildikó; Szilágyi, Tibor; Turos, János; Bakó, Aliz; Frigy, Attila
2017-04-01
The quality and quantity of sleep represent important health issues in pregnant women. Sleep disturbances could be associated, beyond alteration of quality of life, with poor pregnancy outcome. Our aim was to investigate the effect of a regular, specific, medium-term physical training program on sleep characteristics in healthy pregnant women. A total of 132 healthy pregnant women, with gestational age between 18 weeks and 22 weeks, were enrolled in a prospective study. They were allocated into two groups; the first group involved 79 women (average age, 29.4 years) who performed a specific gymnastics program of 10 weeks, and the second group involved 53 pregnant women (average age, 27.9 years) who did not perform gymnastics. All participants completed a comprehensive questionnaire at baseline and after 10 weeks concerning general data, sleep characteristics, and psycho-emotional status. The changes arising within a diverse set of characteristics were followed and compared for the two groups using parametric and nonparametric statistics. In the control group, we observed significant worsening of 12 out of the 14 studied parameters during the 10-week period. In comparison with the women who did not perform gymnastics, women who performed specific gymnastics showed the following characteristics: (1) significantly less deterioration of psycho-emotional status (stress and anxiety levels); (2) the same general pattern of decrease in sleep quality, which is related to the progression of pregnancy; and (3) a significant attenuation of the worsening of several sleep characteristics, such as restless sleep, snoring, diurnal tiredness, and excessive daytime sleepiness. Nocturnal and diurnal sleep quantity increased significantly in both groups. The 10-week training program designed for pregnant women has an overall beneficial effect on sleep characteristics, not by improving them but by attenuating their general deterioration related to the progression of pregnancy. 
Our data strengthen the general recommendation regarding participation of pregnant women in specific exercise programs, mainly for maintaining their psycho-emotional and general well-being. Copyright © 2017. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Colucci, Simone; de'Michieli Vitturi, Mattia; Landi, Patrizia
2016-04-01
It is well known that nucleation and growth of crystals play a fundamental role in controlling magma ascent dynamics and eruptive behavior. The size and shape distribution of crystal populations can affect mixture viscosity, potentially causing transitions between effusive and explosive eruptions. Furthermore, volcanic samples are usually characterized in terms of their Crystal Size Distribution (CSD), which provides valuable insight into the physical processes that led to the observed distributions. For example, a large average size can be representative of a slow magma ascent, and a bimodal CSD may indicate two events of nucleation, determined by two degassing events within the conduit. The Method of Moments (MoM), well established in the field of chemical engineering, represents a mesoscopic modeling approach that rigorously tracks the polydispersity by considering the evolution in time and space of integral parameters characterizing the distribution, the moments, by solving their transport differential-integral equations. One important advantage of this approach is that the moments of the distribution correspond to quantities that have meaningful physical interpretations and are directly measurable in natural eruptive products, as well as in experimental samples. For example, when the CSD is defined by the number of particles of size D per unit volume of the magmatic mixture, the zeroth moment gives the total number of crystals, the third moment gives the crystal volume fraction in the magmatic mixture, and ratios between successive moments provide different ways to evaluate the average crystal length. Tracking these quantities, instead of the volume fraction only, will allow using, for example, more accurate viscosity models in numerical codes for magma ascent. Here we adopted, for the first time, a quadrature-based method of moments to track the temporal evolution of the CSD in a magmatic mixture, and we verified and calibrated the model against experimental data. 
We also show how the equations and the tool developed can be integrated into a magma ascent numerical model, with application to eruptive events that occurred at Stromboli volcano (Italy).
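The physical readings of the moments described above can be checked directly on an assumed CSD; here a lognormal-shaped n(D) is invented for illustration, with m0 the total number density, (pi/6)·m3 the crystal volume fraction, and m1/m0 one moment-ratio estimate of the average size:

```python
import numpy as np

# Assumed lognormal-shaped CSD n(D): number per unit volume per unit size.
D = np.linspace(1e-7, 1e-3, 20_000)                             # size grid [m]
n = np.exp(-0.5 * ((np.log(D) - np.log(1e-5)) / 0.5) ** 2) / D  # illustrative

def moment(k):
    """k-th moment m_k = integral of n(D) * D^k dD (trapezoidal rule)."""
    f = n * D ** k
    return float(np.sum((f[:-1] + f[1:]) * np.diff(D)) / 2.0)

m0, m1, m3 = moment(0), moment(1), moment(3)
number_density = m0                    # total number of crystals per volume
volume_fraction = (np.pi / 6.0) * m3   # crystal volume fraction
mean_size = m1 / m0                    # one moment-ratio average length
```

For this distribution (median size 1e-5 m, log-sigma 0.5) the moment ratio m1/m0 recovers the analytical lognormal mean, about 1.13e-5 m.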
Geology and ground-water resources of the Memphis Sand in western Tennessee
Parks, William Scott; Carmichael, J.K.
1990-01-01
The Memphis Sand of the Claiborne Group of Tertiary age underlies approximately 7,400 square miles in western Tennessee. The formation primarily consists of a thick body of very fine to very coarse sand that includes subordinate lenses or beds of clay and silt at various horizons. The Memphis Sand ranges from 0 to about 900 feet in thickness, but where the original thickness is preserved, it is about 400 to 900 feet thick. The Memphis Sand yields water to wells in most of the area of occurrence in western Tennessee and, where saturated, makes up the Memphis aquifer. Recharge to the Memphis aquifer is from precipitation on the outcrop, which is a broad belt across western Tennessee, or by downward infiltration of water from the overlying fluvial deposits of Tertiary(?) and Quaternary age and alluvium of Quaternary age. Long-term data from five observation wells indicate that water levels have declined at average rates ranging from less than 0.1 to 1.3 feet per year during the period 1928-83. The largest declines have been in the Memphis area. Water from the Memphis aquifer generally is a calcium bicarbonate type, but locally is a sodium bicarbonate or mixed type. The water contains low concentrations of most major constituents and generally is suitable for most uses. Dissolved-solids concentrations range from 19 to 333 milligrams per liter. The results from 76 aquifer tests made in the Memphis area and western Tennessee during the period 1949-62 indicate that transmissivities range from 2,700 to 53,500 feet squared per day, and storage coefficients range from 0.0001 to 0.003. The Memphis aquifer provides moderate to large quantities of water for many public and industrial water supplies in western Tennessee and small quantities to numerous domestic and farm wells. Withdrawals for public and industrial supplies in 1983 averaged about 227 million gallons per day, of which 183 million gallons per day were in the Memphis area. 
The Memphis aquifer has much potential for future use, particularly at places outside the Memphis area.
Observation model and parameter partials for the JPL geodetic GPS modeling software GPSOMC
NASA Technical Reports Server (NTRS)
Sovers, O. J.; Border, J. S.
1988-01-01
The physical models employed in GPSOMC and the modeling module of the GIPSY software system developed at JPL for analysis of geodetic Global Positioning System (GPS) measurements are described. Details of the various contributions to range and phase observables are given, as well as the partial derivatives of the observed quantities with respect to model parameters. A glossary of parameters is provided to enable persons doing data analysis to identify quantities in the current report with their counterparts in the computer programs. There are no basic model revisions, with the exceptions of an improved ocean loading model and some new options for handling clock parametrization. Misprints discovered in earlier versions have been corrected. Further revisions include modeling improvements and assurances that the model description is in accord with the current software.
NASA Astrophysics Data System (ADS)
Wood, Brian; He, Xiaoliang; Apte, Sourabh
2017-11-01
Turbulent flows through porous media are encountered in a number of natural and engineered systems. Many attempts to close the Navier-Stokes equations for this type of flow have been made, for example using RANS models and double averaging. Whitaker (1996), on the other hand, applied the volume averaging theorem to close the macroscopic N-S equation for low-Re flow. In this work, the volume averaging theory is extended into the turbulent flow regime to posit a relationship between the macroscale velocities and the spatial velocity statistics in terms of the spatially averaged velocity only. Rather than developing a Reynolds stress model, we propose a simple algebraic closure, consistent with generalized effective viscosity models (Pope 1975), to represent the spatially fluctuating velocity and pressure, respectively. The coefficients (one 1st-order, two 2nd-order and one 3rd-order tensor) of the linear functions depend on the averaged velocity and its gradient. With the data set from DNS, performed for inertial and turbulent flows (pore Re of 300, 500 and 1000) through a periodic face-centered cubic (FCC) unit cell, all the unknown coefficients can be computed and the closure is complete. The macroscopic quantities calculated from the averaging are then compared with DNS data to verify the upscaling. NSF Project Numbers 1336983, 1133363.
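As a minimal illustration of the averaging operators involved, the superficial and intrinsic volume averages and the spatial fluctuation can be sketched as follows. The grid, solid mask, and velocity field below are synthetic stand-ins, not the authors' DNS data:

```python
import numpy as np

# Synthetic unit cell: one velocity component on a grid with a solid inclusion
# (a crude stand-in for the FCC sphere packing used in the paper).
rng = np.random.default_rng(0)
n = 32
u = rng.normal(size=(n, n, n))            # velocity component on the grid
fluid = np.ones((n, n, n), dtype=bool)    # fluid-phase indicator function
fluid[10:22, 10:22, 10:22] = False        # solid inclusion

porosity = fluid.mean()
superficial = (u * fluid).sum() / u.size      # average over the whole cell
intrinsic = (u * fluid).sum() / fluid.sum()   # average over the fluid phase only

# Spatial fluctuation about the intrinsic average (defined on fluid cells)
u_fluct = np.where(fluid, u - intrinsic, 0.0)
```

The two averages are linked through the porosity (superficial = porosity × intrinsic), and the spatial fluctuation averages to zero over the fluid phase, which is the decomposition the closure models act on.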
Lynch, Robert Francis
2016-05-01
How to optimally allocate time, energy and investment in an effort to maximize one's reproductive success is a fundamental problem faced by all organisms. This effort is complicated when the production of each additional offspring dilutes the total resources available for parental investment. Although a quantity-quality trade-off between producing and investing in offspring has long been assumed in evolutionary biology, testing it directly in humans is difficult, partly owing to the long generation time of our species. Using data from an Icelandic genealogy (Íslendingabók) over two centuries, I address this issue and analyse the quantity-quality trade-off in humans. I demonstrate that the primary impact of parents on the fitness of their children is the result of resources and/or investment, but not genes. This effect changes significantly across time, in response to environmental conditions. Overall, increasing reproduction has negative fitness consequences on offspring, such that each additional sibling reduces an individual's average lifespan and lifetime reproductive success. This analysis provides insights into the evolutionary conflict between producing and investing in children while also shedding light on some of the causes of the demographic transition.
[Application of improved regional citrate anticoagulation in continuous hemofiltration in children].
Bai, K; Liu, C J; Fu, Y Q; Xu, F
2017-05-04
Objective: To investigate the application of regional citrate anticoagulation with calcium-containing hemofiltration basic solution in continuous hemofiltration in children. Method: The clinical data of 18 pediatric patients treated with citrate anticoagulation during continuous hemofiltration, excluding hepatic failure and septic shock cases, were analyzed retrospectively, from September 2015 to August 2016 in the Intensive Care Unit of the Children's Hospital of Chongqing Medical University. The commercial calcium-containing hemofiltration basic solution was used as the displacement liquid. The blood gas analysis, electrolytes, four coagulation tests during the treatment and the corresponding relations of the quantity of blood flow (QB), quantity of citrate flow (QCi), quantity of sodium bicarbonate flow (QSB), quantity of calcium flow (QCa), and quantity of filtered solution flow (Qf) were monitored. Meanwhile, the blood gas analysis, electrolytes, four coagulation tests, useful life of the filter, and internal and external bleeding and clotting events before, during and after the treatments were monitored. The common complications of citrate anticoagulation, such as hypocalcaemia, metabolic alkalosis, citrate accumulation and hypernatremia, were also observed. Result: Continuous hemofiltration was applied in 18 patients for 734.5 hours, and the average useful life of the filter was (25±11) h. There was no obvious clotting event. A total of 168 sets of data on the blood gas analysis, electrolytes, four coagulation tests during the treatment and the relationships of QB, QCi, QSB, QCa and Qf were collected. The relationships of the initial parameter settings were concluded as QCi=1.8×QB, QCa=0.12×QB, QSB=0.01×Qf. Extracorporeal ionized calcium (iCa(E)(2+)) reached the anticoagulation target 150 times (89.3%), and intracorporeal ionized calcium (iCa(I)(2+)) 162 times (96.4%).
The comparisons of Na(+) ((136.2±4.1) vs. (138.2±2.4) vs. (138.5±3.9) mmol/L), iCa(2+) ((1.07±0.11) vs. (1.21±0.12) vs. (1.17±0.09) mmol/L) and HCO(3)(-) ((22±4) vs. (28±5) vs. (26±4) mmol/L) among before, during and after treatment all showed significant differences (F=6.414, 18.950, 19.151; P=0.002, <0.001, <0.001), yet each mean parameter was within the nearly normal range, except that HCO(3)(-) increased slightly. High HCO(3)(-) was the most common complication, occurring 87 times (51.8%) during the treatment and in 11 cases (37.9%) after the treatment. No patient had refractory hypocalcemia or a total ionized calcium to ionized calcium ratio (TCa(2+)/iCa(2+)) above 2.5, which would indicate citrate accumulation. Conclusion: The commercialized calcium-containing displacement liquid can be used safely and simply for RCA-CHF in children.
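The initial pump-rate relationships quoted in the abstract (QCi=1.8×QB, QCa=0.12×QB, QSB=0.01×Qf) amount to simple proportionalities; a sketch, with assumed units and variable names:

```python
# Sketch of the reported initial pump-rate relationships for regional citrate
# anticoagulation. The function name and the units (mL/min for blood flow,
# mL/h for filtrate flow) are assumptions for illustration.

def initial_settings(qb, qf):
    """Return (QCi, QCa, QSB) from blood flow QB and filtered-solution flow Qf.

    QCi = 1.8  * QB   (citrate flow tracks blood flow)
    QCa = 0.12 * QB   (calcium flow tracks blood flow)
    QSB = 0.01 * Qf   (sodium bicarbonate flow tracks filtrate flow)
    """
    return 1.8 * qb, 0.12 * qb, 0.01 * qf

# Hypothetical example: QB = 50 mL/min, Qf = 2000 mL/h
qci, qca, qsb = initial_settings(50.0, 2000.0)
```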
Bellomo, Guido; Bosyk, Gustavo M; Holik, Federico; Zozor, Steeve
2017-11-07
Based on the problem of lossless quantum data compression, we present here an operational interpretation for the family of quantum Rényi entropies. In order to do this, we appeal to a very general quantum encoding scheme that satisfies a quantum version of the Kraft-McMillan inequality. In the standard situation, where one aims to minimize the usual average length of the quantum codewords, we recover the known result that the von Neumann entropy of the source bounds the average length of the optimal codes. Otherwise, we show that by invoking an exponential average length, related to an exponential penalization over large codewords, the quantum Rényi entropies arise as the natural quantities relating the optimal encoding schemes with the source description, playing a role analogous to that of the von Neumann entropy.
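A classical analogue of the quantities involved can be sketched numerically: for a diagonal (classical) source the von Neumann entropy reduces to the Shannon entropy, and the exponential (Campbell) average length with penalization t is bounded below by a Rényi entropy of order α = 1/(1+t). The code below is an illustrative classical sketch, not the paper's quantum encoding scheme:

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Renyi entropy (in bits) of a probability vector; alpha -> 1 recovers
    the Shannon entropy (the von Neumann entropy of a diagonal state)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return float(-np.sum(p * np.log2(p)))
    return float(np.log2(np.sum(p ** alpha)) / (1.0 - alpha))

def exponential_average_length(p, lengths, t):
    """Campbell's exponential average length: (1/t) * log2(sum_i p_i 2^(t*l_i))."""
    p = np.asarray(p, dtype=float)
    lengths = np.asarray(lengths, dtype=float)
    return float(np.log2(np.sum(p * 2.0 ** (t * lengths))) / t)

p = [0.5, 0.25, 0.125, 0.125]   # eigenvalues of a diagonal source state
lengths = [1, 2, 3, 3]          # codeword lengths of an optimal prefix code for p
t = 1.0                         # penalization strength for long codewords
alpha = 1.0 / (1.0 + t)         # Renyi order that bounds the t-average
```

For this dyadic example the ordinary (t → 0) average length equals the Shannon entropy, while the exponential average exceeds it and is bounded below by the order-1/2 Rényi entropy, mirroring the role the quantum Rényi entropies play in the paper.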
NASA Astrophysics Data System (ADS)
Nakayama, Tomoko; Takayama, Yoshihisa; Fujikawa, Chiemi; Watanabe, Eriko; Kodate, Kashiko
2014-09-01
In recent years, there has been considerable interest in satellite-ground laser communication due to an increase in the quantity of data exchanged between satellites and the ground. However, improving the quality of this data communication is necessary because laser communication is vulnerable to atmospheric fluctuations. We first verify the spatial and temporal averaging effects using light-beam intensity images acquired from mid-range transmission experiments between two ground positions and the superposition of these images in simulations. Based on these results, we propose a compact and lightweight optical duplicate system as a multi-beam generation device with which it is easy to apply the spatial averaging effect. Although an optical duplicate system is already used for optical correlation operations, we present optimum design solutions, design a compact optical duplicate system for satellite-ground laser communications, and demonstrate the efficacy of this system using simulations.
NASA Astrophysics Data System (ADS)
Seligman, D.; Petrie, G. J. D.; Komm, R.
2014-11-01
We compare the average photospheric current helicity Hc, the photospheric twist parameter α (a well-known proxy for the full relative magnetic helicity), and the subsurface kinetic helicity Hk for 194 active regions observed between 2006 and 2013. We use 2440 Hinode photospheric vector magnetograms, and the corresponding subsurface fluid velocity data derived from GONG (2006-2012) and Helioseismic and Magnetic Imager (2010-2013) dopplergrams. We find a significant hemispheric bias in all three parameters. The subsurface kinetic helicity is preferentially positive in the southern hemisphere and negative in the northern hemisphere. The photospheric current helicity and the α parameter have the same bias for strong fields (|B| > 1000 G) and no significant bias for weak fields (100 G < |B| < 500 G). We find no significant region-by-region correlation between the subsurface kinetic helicity and either the strong-field current helicity or α. Subsurface fluid motions of a given handedness correspond to photospheric helicities of both signs in approximately equal numbers. However, common variations appear in annual averages of these quantities over all regions. Furthermore, in a subset of 77 regions, we find significant correlations between the temporal profiles of the subsurface and photospheric helicities. In these cases, the sign of the linear correlation coefficient matches the sign relationship between the helicities, indicating that the photospheric magnetic field twist is sensitive to the twisting motions below the surface.
NASA Astrophysics Data System (ADS)
Zhang, Wenjun; Deng, Weibing; Li, Wei
2018-07-01
Node properties and node importance identification in networks have been extensively studied in recent decades. In this work, we instead analyze the properties of links, taking the Worldwide Marine Transport Network (WMTN) as an example: statistical properties of the shipping lines of the WMTN are investigated in several respects. Firstly, we study the feature of loops in the shipping lines by defining the line saturability. It is found that the line saturability decays exponentially with increasing line length. Secondly, to detect the geographical community structure of shipping lines, the Label Propagation Algorithm with compression of Flow (LPAF) and the Multi-Dimensional Scaling (MDS) method are employed, which yield rather consistent communities. Lastly, to analyze the redundancy of shipping lines across different marine companies, multilayer networks are constructed by aggregating the shipping lines of different companies. It is observed that topological quantities, such as the average degree and average clustering coefficient, increase smoothly when marine companies are merged at random (randomly choose two marine companies, then merge their shipping lines), while the relative entropy decreases when the merging sequence is determined by the Jensen-Shannon distance (choose the two marine companies whose Jensen-Shannon distance is lowest). This indicates low redundancy of shipping lines among different marine companies.
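The Jensen-Shannon distance used to guide the merging sequence can be sketched as follows; the carrier names and port-visit distributions are hypothetical toy vectors, whereas the real analysis compares shipping-line structure across company layers:

```python
import numpy as np

def js_distance(p, q):
    """Jensen-Shannon distance (square root of the JS divergence, base-2 logs)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))
    return float(np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m)))

# Hypothetical port-visit distributions for three carriers over the same ports
companies = {
    "A": np.array([0.50, 0.30, 0.20, 0.00]),
    "B": np.array([0.45, 0.35, 0.20, 0.00]),   # routes similar to A
    "C": np.array([0.00, 0.10, 0.20, 0.70]),   # very different routes
}

# Similarity-guided merging: pick the pair with the smallest JS distance first
names = list(companies)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
closest = min(pairs, key=lambda ab: js_distance(companies[ab[0]], companies[ab[1]]))
```

Merging the closest pair first is what drives the relative entropy down, since layers with nearly identical distributions add little new information when combined.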
Soft-sphere simulations of a planar shock interaction with a granular bed
NASA Astrophysics Data System (ADS)
Stewart, Cameron; Balachandar, S.; McGrath, Thomas P.
2018-03-01
Here we consider the problem of shock propagation through a layer of spherical particles. A point-particle force model is used to capture the shock-induced aerodynamic force acting upon the particles. The discrete element method (DEM) code LIGGGHTS is used to implement the shock-induced force as well as to capture the collisional forces within the system. A volume-fraction-dependent drag correction is applied using Voronoi tessellation to calculate the volume of fluid around each individual particle. A statistically stationary frame is chosen so that spatial and temporal averaging can be performed to calculate ensemble-averaged macroscopic quantities, such as the granular temperature. A parametric study is carried out by varying the coefficient of restitution for three sets of multiphase shock conditions. A self-similar profile is obtained for the granular temperature that depends on the coefficient of restitution. A traveling wave structure is observed in the particle concentration downstream of the shock; this instability arises from the volume-fraction-dependent drag force. The intensity of the traveling wave increases significantly as inelastic collisions are introduced. Downstream of the shock, the variance in Voronoi volume fraction is shown to have a strong dependence upon the coefficient of restitution, indicating clustering of particles induced by collisional dissipation. Statistics of the Voronoi volume are computed upstream and downstream of the shock and compared to theoretical results for randomly distributed hard spheres.
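A common definition of the granular temperature (assumed here; the paper's specific averaging frame may differ) is one third of the mean squared particle velocity fluctuation about the local mean, which can be sketched as:

```python
import numpy as np

# Synthetic particle velocities: a mean streamwise drift plus isotropic
# fluctuations (stand-in data, not the DEM output).
rng = np.random.default_rng(1)
v = rng.normal(loc=[5.0, 0.0, 0.0], scale=0.3, size=(10000, 3))

v_mean = v.mean(axis=0)              # ensemble-averaged velocity
v_fluct = v - v_mean                 # fluctuation about the mean
T_g = (v_fluct ** 2).sum(axis=1).mean() / 3.0   # granular temperature
```

With a fluctuation scale of 0.3 per component, the granular temperature should come out near the per-component variance of 0.09; in the paper this quantity is accumulated in the statistically stationary shock frame rather than over synthetic samples.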
Racial and ethnic disparities in the financial burden of prescription drugs among older Americans.
Xu, K Tom; Borders, Tyrone F
2007-01-01
This study examines racial and ethnic disparities in the financial burden of prescription drugs among older Americans using a market model and an egalitarian model. A nationally representative data set, the Medical Expenditure Panel Survey 2002, was used. The financial burden of prescription drugs was measured by the out-of-pocket expenditure and proportion. In the market model (utilization adjustment), utilization was measured at the annual aggregate level by the total number of prescription drugs, the average refills and the average quantity per prescription drug. In the egalitarian model (need adjustment), health was measured by 15 chronic and costly diseases and the SF-12. Individuals 65 years or older were included. Nationally representative estimates were calculated. Raw racial and ethnic disparities were observed in the bivariate analyses between non-Hispanic whites and Hispanics in the out-of-pocket expenditure and proportion, and between non-Hispanic whites and non-Hispanic blacks in the out-of-pocket proportion. However, these disparities disappeared after controlling for utilization or health needs. Insurance status contributed the most to the disparities in the financial burden of prescription drugs. In conclusion, the disparities in the financial burden of prescription drugs between non-Hispanic elderly whites and Hispanics may be attributable to differences in utilization patterns. However, whether health disparities contribute to disparities in the financial burden of prescription drugs requires studies of specific diseases.
Quantity quotient reporting. A proposal for a standardized presentation of laboratory results.
Haeckel, Rainer; Wosniok, Werner
2009-01-01
Laboratory results are reported in different units (despite international recommendations for SI units) together with different reference limits, of which several exist for many quantities. It is proposed to adopt the concept of the intelligence quotient and to report quantitative results as a quantity quotient (QQ) in laboratory medicine. This quotient is essentially the difference (measured result minus the mean or mode value of the reference interval) divided by the observed biological variation CV(o). Thus, all quantities are reported in the same unit system with the same reference limits (for convenience shifted to, e.g., 80-120). The critical difference can also be included in this standardization concept. In this way, the information of the reference intervals and the original result is integrated into one combined value, which has the same format for all quantities suited for quotient reporting (QR). The proposal of QR does not conflict with the current concepts of traceability, SI units or method standardization. This proposal represents a further step towards harmonization of reporting. It provides simple values which can be interpreted easily by physicians and their patients.
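Under one illustrative reading of the proposal, the quotient and its shift onto a common reporting scale might look like the following (the exact shift and scale are assumptions for illustration, not the authors' published transform):

```python
def quantity_quotient(result, ref_mean, cv_o):
    """Raw quotient: (measured result - reference mean) divided by the observed
    biological variation CV_o, taken here in the same units as the result.
    This is an illustrative reading of the abstract's definition."""
    return (result - ref_mean) / cv_o

def rescaled_qq(result, ref_mean, cv_o, center=100.0, span=20.0):
    """Shift onto a common reporting scale (e.g. roughly 80-120); the center
    and span values are assumptions, not the authors' transform."""
    return center + span * quantity_quotient(result, ref_mean, cv_o)

# Hypothetical example: serum sodium 142 mmol/L against a reference mean of
# 140 mmol/L and an observed biological variation of 2 mmol/L.
raw = quantity_quotient(142.0, 140.0, 2.0)
scaled = rescaled_qq(142.0, 140.0, 2.0)
```

The point of the construction is that very different analytes, once expressed this way, share one scale and one set of reference limits.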
Comparison of satellite derived dynamical quantities in the stratosphere of the Southern Hemisphere
NASA Technical Reports Server (NTRS)
Miles, Thomas (Editor); Oneill, Alan (Editor)
1989-01-01
The proceedings are summarized from a pre-MASH planning workshop on the intercomparison of Southern Hemisphere observations, analyses and derived dynamical quantities held in Williamsburg, Virginia during April 1986. The aims of this workshop were primarily twofold: (1) comparison of Southern Hemisphere dynamical quantities derived from various satellite data archives (e.g., from limb scanners and nadir sounders); and (2) assessing the impact of different base-level height information on such derived quantities. These tasks are viewed as especially important in the Southern Hemisphere because of the paucity of conventional measurements. A further strong impetus for the MASH program comes from the recent discovery of the springtime ozone hole over Antarctica. Insight gained from validation studies such as the one reported here will contribute to an improved understanding of the role of meteorology in the development and evolution of the hole, in its interannual variability, and in its interhemispheric differences. The dynamical quantities examined in this workshop included geopotential height, zonal wind, potential vorticity, eddy heat and momentum fluxes, and Eliassen-Palm fluxes. The time periods and data sources constituting the MASH comparisons are summarized.
NASA Astrophysics Data System (ADS)
Bonin, Timothy A.; Newman, Jennifer F.; Klein, Petra M.; Chilson, Phillip B.; Wharton, Sonia
2016-12-01
Since turbulence measurements from Doppler lidars are being increasingly used within wind energy and boundary-layer meteorology, it is important to assess and improve the accuracy of these observations. While turbulent quantities are measured by Doppler lidars in several different ways, the simplest and most frequently used statistic is the vertical velocity variance (w'²) from zenith stares. However, the competing effects of signal noise and resolution-volume limitations, which respectively increase and decrease w'², reduce the accuracy of these measurements. Herein, an established method that utilises the autocovariance of the signal to remove noise is evaluated, and its skill in correcting for volume-averaging effects in the calculation of w'² is also assessed. Additionally, this autocovariance technique is further refined by defining the amount of lag time to use for the most accurate estimates of w'². Through comparison of observations from two Doppler lidars and sonic anemometers on a 300 m tower, the autocovariance technique is shown to generally improve estimates of w'². After the autocovariance technique is applied, values of w'² from the Doppler lidars are generally in close agreement (R² ≈ 0.95-0.98) with those calculated from sonic anemometer measurements.
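The noise-removal idea can be sketched on synthetic data: uncorrelated instrument noise inflates only the lag-0 autocovariance, so extrapolating the autocovariance at small non-zero lags back to lag 0 recovers an estimate of the noise-free variance. The AR(1) stand-in series and the linear extrapolation below are illustrative choices, not the paper's exact refinement of the technique:

```python
import numpy as np

# Synthetic "vertical velocity": an AR(1) process of known variance plus
# uncorrelated noise, standing in for a lidar zenith-stare time series.
rng = np.random.default_rng(2)
n, phi, true_var, noise_var = 20000, 0.95, 0.5, 0.3
w = np.zeros(n)
eps = rng.normal(scale=np.sqrt(true_var * (1.0 - phi ** 2)), size=n)
for i in range(1, n):
    w[i] = phi * w[i - 1] + eps[i]
w_obs = w + rng.normal(scale=np.sqrt(noise_var), size=n)

def autocov(x, lag):
    """Biased sample autocovariance at the given lag."""
    x = x - x.mean()
    if lag == 0:
        return float(np.mean(x * x))
    return float(np.mean(x[:-lag] * x[lag:]))

# Noise is uncorrelated between samples, so it contaminates only lag 0:
# fit the autocovariance over small non-zero lags and extrapolate to lag 0.
lags = np.arange(1, 6)
acov = np.array([autocov(w_obs, k) for k in lags])
slope, intercept = np.polyfit(lags, acov, 1)
var_raw = autocov(w_obs, 0)     # noise-inflated variance
var_corrected = intercept       # noise-removed variance estimate
```

The difference between the raw and corrected values estimates the noise variance itself, which is how such methods also serve as a noise diagnostic.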
Non-parametric estimation of population size changes from the site frequency spectrum.
Waltoft, Berit Lindum; Hobolth, Asger
2018-06-11
The change in population size over time is a useful quantity for understanding the evolutionary history of a species. Genetic variation within a species can be summarized by the site frequency spectrum (SFS). For a sample of size n, the SFS is a vector of length n - 1 where entry i is the number of sites where the mutant base appears i times and the ancestral base appears n - i times. We present a new method, CubSFS, for estimating the changes in population size of a panmictic population from an observed SFS. First, we provide a straightforward proof for the expression of the expected site frequency spectrum depending only on the population size. Our derivation is based on an eigenvalue decomposition of the instantaneous coalescent rate matrix. Second, we solve the inverse problem of determining the changes in population size from an observed SFS. Our solution is based on a cubic spline for the population size. The cubic spline is determined by minimizing the weighted average of two terms, namely (i) the goodness of fit to the observed SFS, and (ii) a penalty term based on the smoothness of the changes. The weight is determined by cross-validation. The new method is validated on simulated demographic histories and applied on unfolded and folded SFS from 26 different human populations from the 1000 Genomes Project.
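The SFS itself is straightforward to compute from data; a minimal sketch with a toy genotype matrix (this illustrates the input summary statistic, not the CubSFS estimator):

```python
import numpy as np

# Toy 0/1 genotype matrix: rows = haplotypes, columns = segregating sites,
# 1 = mutant base. These data are invented for illustration.
geno = np.array([
    [0, 1, 0, 1, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 1, 0, 1],
    [0, 1, 0, 0, 0],
])
n = geno.shape[0]   # sample size (number of haplotypes)

counts = geno.sum(axis=0)   # mutant count at each site
# Unfolded SFS: entry i (for i = 1..n-1) is the number of sites where the
# mutant base appears exactly i times.
sfs = np.array([(counts == i).sum() for i in range(1, n)])

# Under constant population size, E[xi_i] is proportional to 1/i
expected_shape = 1.0 / np.arange(1, n)
```

CubSFS then inverts this summary: it seeks a cubic-spline population-size trajectory whose expected SFS best fits the observed vector, subject to the smoothness penalty.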
NASA Astrophysics Data System (ADS)
Monchaux, R.; Dejoan, A.
2017-10-01
The settling velocity of inertial particles falling in homogeneous turbulence is investigated by making use of direct numerical simulations (DNS) at moderate Reynolds number that include momentum exchange between the two phases (two-way coupling approach). Effects of particle volume fraction, particle inertia, and gravity are presented for flow and particle parameters similar to the experiments of Aliseda et al. [J. Fluid Mech. 468, 77 (2002), 10.1017/S0022112002001593]. A good agreement is obtained between the DNS and the experiments for the settling velocity statistics, both when averaged overall and when conditioned on the local particle concentration. Both DNS and experiments show that the settling velocity increases further with increasing volume fraction and local concentration. At the considered particle loading, the effects of two-way coupling are negligible on the mean statistics of turbulence. Nevertheless, the DNS results show that fluid quantities are locally altered by the particles. In particular, the conditional average of the slip velocity on the local particle concentration shows that the main contribution to the settling enhancement results from the increase of the fluid velocity surrounding the particles along the gravitational direction, induced by the collective particle back-reaction force. Particles and the surrounding fluid are observed to fall together, which in turn results in an amplification of the sampling of particles in the downward fluid motion. Effects of two-way coupling on preferential concentration are also reported. Increasing both the volume fraction and gravity is shown to lower the preferential concentration of small-inertia particles, while a reverse tendency is observed for large-inertia particles. This behavior is found to be related to an attenuation of the centrifuge effect and to an increase of particle accumulation along the gravity direction as particle loading and gravity become large.
Aqueous geochemistry and diagenesis in the eastern Snake River Plain aquifer system, Idaho
Wood, Warren W.; Low, Walton H.
1986-01-01
Water budget and isotopic analyses of water in the eastern Snake River Plain aquifer system confirm that most, if not all, of the water is local meteoric in origin. Solute mass-balance arguments suggest that ∼5 × 10⁹ moles of calcite and 2.6 × 10⁹ moles of silica are precipitated annually in the aquifer. Isotopic evaluations of calcite and petrographic observation of silica support the low-temperature origin of these deposits. Approximately 2.8 × 10⁹ moles of chloride, 4.5 × 10⁹ moles of sodium, 1.4 × 10⁹ moles of sulfate, and 2 × 10⁹ moles of magnesium are removed annually from the aquifer framework by solution. Proposed weathering reactions are shown to be consistent with mass balance, carbon isotopes, observed mineralogy, and chemical thermodynamics. Large quantities of sodium, chloride, and sulfate are being removed from the system relative to their abundances in the rock. Sedimentary interbeds, which are estimated to compose <10% of the aquifer volume, may yield as much as 20% of the solutes generated within the aquifer. Weathering rate of the aquifer framework of the eastern Snake River Plain is 14 (Mg/km²)/yr, or less than half the average of the North American continent. This contrasts with the rate for the eastern Snake River basin, 34 (Mg/km²)/yr, which is almost identical to the average for the North American continent. Identification and quantification of reactions controlling solute concentrations in ground water in the eastern plain indicate that the aquifer is not an “inert bathtub” that simply stores and transmits water and solutes but is undergoing active diagenesis and is both a source and sink for solutes.
Selim, Alfredo; Rogers, William; Qian, Shirley; Rothendler, James A; Kent, Erin E; Kazis, Lewis E
2018-04-19
To develop bridging algorithms to score the Veterans RAND-12 (VR-12) scales for comparability to those of the SF-36® for facilitating multi-cohort studies using data from the National Cancer Institute Surveillance, Epidemiology, and End Results Program (SEER) linked to the Medicare Health Outcomes Survey (MHOS), and to provide a model for minimizing non-statistical error in pooled analyses stemming from changes to survey instruments over time. Observational study of MHOS cohorts 1-12 (1998-2011). We modeled 2-year follow-up SF-36 scale scores from cohorts 1-6 based on baseline SF-36 scores, age, and gender, yielding 100 clusters using Classification and Regression Trees. Within each cluster, we averaged follow-up SF-36 scores. Using the same cluster specifications, expected follow-up SF-36 scores, based on cohorts 1-6, were computed for cohorts 7-8 (where the VR-12 was the follow-up survey). We created a new criterion validity measure, termed "extensibility," calculated from the square root of the mean square difference between expected SF-36 scale averages and observed VR-12 scores from cohorts 7-8, weighted by cluster size. VR-12 items were rescored to minimize this quantity. Extensibility of rescored VR-12 items and scales was considerably improved over the "simple" scoring method for comparability to the SF-36 scales. The algorithms are appropriate across a wide range of potential subsamples within the MHOS and provide robust application for future studies that span the SF-36 and VR-12 eras. It is possible that these surveys in a different setting outside the MHOS, especially in younger age groups, could produce somewhat different results.
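The "extensibility" criterion, as described, is a cluster-size-weighted root-mean-square difference between expected and observed scores; a sketch under that reading (function and argument names are assumptions, and the scores below are invented):

```python
import numpy as np

def extensibility(expected, observed, cluster_sizes):
    """Sketch of the 'extensibility' criterion: the square root of the
    cluster-size-weighted mean squared difference between expected SF-36
    scale averages and observed (rescored) VR-12 averages. The weighting
    detail is an assumption based on the abstract."""
    expected = np.asarray(expected, dtype=float)
    observed = np.asarray(observed, dtype=float)
    w = np.asarray(cluster_sizes, dtype=float)
    return float(np.sqrt(np.sum(w * (expected - observed) ** 2) / w.sum()))

# Hypothetical cluster-level averages for three clusters
score = extensibility(
    expected=[50.0, 45.0, 60.0],
    observed=[49.0, 46.0, 58.0],
    cluster_sizes=[100, 50, 25],
)
```

Rescoring the VR-12 items amounts to searching for item scores that drive this quantity toward zero, so that the rescored instrument extends the SF-36 era cleanly.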
The mechanical and chemical equations of motion of muscle contraction
NASA Astrophysics Data System (ADS)
Shiner, J. S.; Sieniutycz, Stanislaw
1997-11-01
Up to now, no formulation of muscle contraction has provided both the chemical kinetic equations for the reactions responsible for the contraction and the mechanical equation of motion for the muscle. This has most likely been due to the lack of general formalisms for nonlinear systems with chemical-nonchemical coupling valid under the far-from-equilibrium conditions under which muscle operates physiologically. We have recently developed such formalisms and apply them here to the formulation of muscle contraction to obtain both the chemical and the mechanical equations. The standard formulation has yielded only the dynamic equations for the chemical variables and has considered these to be functions of both time and an appropriate mechanical variable; the macroscopically observable quantities were then obtained by averaging over the mechanical variable. When one attempts to derive the dynamic equations for both the chemistry and the mechanics, this choice of variables leads to conflicting results for the mechanical equation of motion when two different general formalisms are applied. The conflict can be resolved by choosing the variables such that both the chemical and the mechanical variables are considered to be functions of time alone. This adds one equation to the set of differential equations to be solved, but it is actually a simplification of the problem: the equations are now ordinary differential equations rather than the partial differential equations of the standard formulation, and since in this choice of variables the variables themselves are the macroscopic observables, the procedure of averaging over the mechanical variable is eliminated. Furthermore, the parameters occurring in the equations at this level of description should be accessible to direct experimental determination.
Multi-phenomenology Observation Network Evaluation Tool (MONET)
NASA Astrophysics Data System (ADS)
Oltrogge, D.; North, P.; Vallado, D.
2014-09-01
Evaluating the overall performance of an SSA "system-of-systems" observational network collecting against thousands of Resident Space Objects (RSOs) is very difficult for typical tasking- or scheduling-based analysis tools. This is further complicated by networks that contain a wide variety of sensor types and phenomenologies, including optical, radar and passive RF types, each having unique resource, ops tempo, competing-customer and detectability constraints. We present details of the Multi-phenomenology Observation Network Evaluation Tool (MONET), which circumvents these difficulties by assessing the ideal performance of such a network via a digitized supply-vs-demand approach. Cells of each sensor's supply time are distributed among RSO targets of interest to determine the average performance of the network against that set of RSO targets. Orbit determination heuristics are invoked to represent the observation quantity and geometry notionally required to obtain the desired orbit estimation quality. To feed this approach, we derive detectability and collection-rate performance from the physical and performance characteristics of optical, radar and passive RF sensors. We then prioritize the selected RSO targets according to object size, active/inactive status, orbit regime, and/or other considerations. Finally, the OD-derived tracking demands of each RSO of interest are levied against the remaining sensor supply until either (a) all sensor time is exhausted or (b) the list of RSO targets is exhausted. The outputs from MONET include overall network performance metrics delineated by sensor type, objects and orbits tracked, along with the likely orbit accuracies that might result from the conglomerate network tracking.
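The supply-vs-demand bookkeeping described above can be sketched as a greedy draw-down of discretized sensor time against a prioritized target list (all sensor names, target names, and cell counts below are hypothetical, and real MONET demands come from the OD heuristics):

```python
# Hypothetical sensor supply, in discretized time cells
sensor_supply = {"optical_1": 40, "radar_1": 60, "rf_1": 25}

# (priority, target, demanded cells); lower number = higher priority
demands = sorted([
    (1, "GEO_sat_A", 30),
    (2, "LEO_debris_B", 50),
    (3, "MEO_sat_C", 60),
])

# Levy each target's demand against the remaining supply, highest priority
# first, until sensors or targets are exhausted.
allocation = {}
for _, target, need in demands:
    got = 0
    for sensor, avail in sensor_supply.items():
        take = min(avail, need - got)
        sensor_supply[sensor] -= take
        got += take
        if got == need:
            break
    allocation[target] = got
```

In this toy run the lowest-priority target is only partially served once the 125 available cells run out, which is exactly the shortfall signal the network-level metrics are built from.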