Time-weighted average SPME analysis for in planta determination of cVOCs.
Sheehan, Emily M; Limmer, Matt A; Mayer, Philipp; Karlson, Ulrich Gosewinkel; Burken, Joel G
2012-03-20
The potential of phytoscreening for plume delineation at contaminated sites has promoted interest in innovative, sensitive contaminant sampling techniques. Solid-phase microextraction (SPME) methods have been developed, offering quick, undemanding, noninvasive sampling without the use of solvents. In this study, time-weighted average SPME (TWA-SPME) sampling was evaluated for in planta quantification of chlorinated solvents. TWA-SPME was found to have increased sensitivity over headspace and equilibrium SPME sampling. Using a variety of chlorinated solvents and a polydimethylsiloxane/carboxen (PDMS/CAR) SPME fiber, most compounds exhibited near linear or linear uptake over the sampling period. Smaller, less hydrophobic compounds exhibited more nonlinearity than larger, more hydrophobic molecules. Using a specifically designed in planta sampler, field sampling was conducted at a site contaminated with chlorinated solvents. Sampling with TWA-SPME produced instrument responses ranging from 5 to over 200 times higher than headspace tree core sampling. This work demonstrates that TWA-SPME can be used for in planta detection of a broad range of chlorinated solvents and methods can likely be applied to other volatile and semivolatile organic compounds. PMID:22332592
Yasugi, T; Kawai, T; Mizunuma, K; Horiguchi, S; Iguchi, H; Ikeda, M
1992-01-01
A diffusive sampling method with water as absorbent was examined in comparison with 3 conventional methods of diffusive sampling with carbon cloth as absorbent, pumping through National Institute of Occupational Safety and Health (NIOSH) charcoal tubes, and pumping through NIOSH silica gel tubes to measure time-weighted average concentration of dimethylformamide (DMF). DMF vapors of constant concentrations at 3-110 ppm were generated by bubbling air at constant velocities through liquid DMF followed by dilution with fresh air. Both types of diffusive samplers could either absorb or adsorb DMF in proportion to time (0.25-8 h) and concentration (3-58 ppm), except that the DMF adsorbed was below the measurable amount when carbon cloth samplers were exposed at 3 ppm for less than 1 h. When both diffusive samplers were loaded with DMF and kept in fresh air, the DMF in water samplers stayed unchanged for at least 12 h. The DMF in carbon cloth samplers showed a decay with a half-time of 14.3 h. When the carbon cloth was taken out immediately after termination of DMF exposure, wrapped in aluminum foil, and kept refrigerated, however, there was no measurable decrease in DMF for at least 3 weeks. When the air was drawn at 0.2 l/min, a breakthrough of the silica gel tube took place at about 4,000 ppm.min (as the lower 95% confidence limit), whereas charcoal tubes could tolerate even heavier exposures, suggesting that both tubes are fit to measure the 8-h time-weighted average of DMF at 10 ppm. PMID:1577523
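The 8-h time-weighted average that these samplers target can be illustrated with a short sketch; the interval data below are hypothetical, not from the study.

```python
# Time-weighted average (TWA) concentration over a work shift:
# TWA = sum(t_i * C_i) / sum(t_i), here for hypothetical DMF readings.

def twa(intervals):
    """intervals: list of (duration_h, concentration_ppm) pairs."""
    total_time = sum(t for t, _ in intervals)
    return sum(t * c for t, c in intervals) / total_time

# e.g. an 8-h shift split into periods of differing DMF concentration
shift = [(2.0, 3.0), (4.0, 12.0), (2.0, 8.0)]  # (hours, ppm)
print(round(twa(shift), 2))  # -> 8.75 ppm, below a 10 ppm standard
```

The same arithmetic applies whether the intervals come from pumped tubes or from sequential diffusive samples.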
Woolcock, Patrick J; Koziel, Jacek A; Cai, Lingshuang; Johnston, Patrick A; Brown, Robert C
2013-03-15
Time-weighted average (TWA) passive sampling using solid-phase microextraction (SPME) and gas chromatography was investigated as a new method of collecting, identifying and quantifying contaminants in process gas streams. Unlike previous TWA-SPME techniques using the retracted fiber configuration (fiber within needle) to monitor ambient conditions or relatively stagnant gases, this method was developed for fast-moving process gas streams at temperatures approaching 300 °C. The goal was to develop a consistent and reliable method of analyzing low concentrations of contaminants in hot gas streams without performing time-consuming exhaustive extraction with a slipstream. This work in particular aims to quantify trace tar compounds found in a syngas stream generated from biomass gasification. This paper evaluates the concept of retracted SPME at high temperatures by testing the three essential requirements for TWA passive sampling: (1) zero-sink assumption, (2) consistent and reliable response by the sampling device to changing concentrations, and (3) equal concentrations in the bulk gas stream relative to the face of the fiber syringe opening. Results indicated the method can accurately predict gas stream concentrations at elevated temperatures. Evidence was also discovered to validate the existence of a second boundary layer within the fiber during the adsorption/absorption process. This limits the technique to operating within reasonable mass loadings and loading rates, established by appropriate sampling depths and times for concentrations of interest. A limit of quantification for the benzene model tar system was estimated at 0.02 g m(-3) (8 ppm) with a limit of detection of 0.5 mg m(-3) (200 ppb). Using the appropriate conditions, the technique was applied to a pilot-scale fluidized-bed gasifier to verify its feasibility. Results from this test were in good agreement with literature and prior pilot plant operation, indicating the new method can measure low
Shih, H C; Tsai, S W; Kuo, C H
2012-01-01
A solid-phase microextraction (SPME) device was used as a diffusive sampler for airborne propylene glycol ethers (PGEs), including propylene glycol monomethyl ether (PGME), propylene glycol monomethyl ether acetate (PGMEA), and dipropylene glycol monomethyl ether (DPGME). Carboxen-polydimethylsiloxane (CAR/PDMS) SPME fiber was selected for this study. A polytetrafluoroethylene (PTFE) tubing was used as the holder, and the SPME fiber assembly was inserted into the tubing as a diffusive sampler. The diffusion path length and area of the sampler were 0.3 cm and 0.00086 cm(2), respectively. The theoretical sampling constants at 30°C and 1 atm for PGME, PGMEA, and DPGME were 1.50 × 10(-2), 1.23 × 10(-2) and 1.14 × 10(-2) cm(3) min(-1), respectively. For evaluations, known concentrations of PGEs around the threshold limit values/time-weighted average with specific relative humidities (10% and 80%) were generated both by the air bag method and the dynamic generation system, while 15, 30, 60, 120, and 240 min were selected as the time periods for vapor exposures. Comparisons of the SPME diffusive sampling method to Occupational Safety and Health Administration (OSHA) organic Method 99 were performed side-by-side in an exposure chamber at 30°C for PGME. A gas chromatography/flame ionization detector (GC/FID) was used for sample analysis. The experimental sampling constants of the sampler at 30°C were (6.93 ± 0.12) × 10(-1), (4.72 ± 0.03) × 10(-1), and (3.29 ± 0.20) × 10(-1) cm(3) min(-1) for PGME, PGMEA, and DPGME, respectively. The adsorption of chemicals on the stainless steel needle of the SPME fiber was suspected to be one of the reasons why significant differences between theoretical and experimental sampling rates were observed. Correlations between the results for PGME from both SPME device and OSHA organic Method 99 were linear (r = 0.9984) and consistent (slope = 0.97 ± 0.03). Face velocity (0-0.18 m/s) also proved to have no effects on the sampler
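The theoretical sampling constant of a tube-type diffusive sampler follows Fick's first law, SR = D·A/L. The geometry below is taken from the abstract; the diffusion coefficient is a back-calculated illustrative value, not a quantity reported in the paper.

```python
# Theoretical sampling constant of a diffusive sampler: SR = D * A / L.

def sampling_constant(D_cm2_min, area_cm2, path_cm):
    """Sampling rate in cm^3 min^-1 from Fick's first law."""
    return D_cm2_min * area_cm2 / path_cm

A = 0.00086    # diffusion path cross-section, cm^2 (from the abstract)
L = 0.3        # diffusion path length, cm (from the abstract)
D_pgme = 5.23  # cm^2 min^-1, assumed value consistent with SR = 1.50e-2

print(f"SR = {sampling_constant(D_pgme, A, L):.2e} cm^3/min")
```

The large gap between this theoretical rate and the measured rates in the abstract is exactly the discrepancy the authors attribute in part to adsorption on the fiber's stainless steel needle.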
Baimatova, Nassiba; Koziel, Jacek A; Kenessov, Bulat
2015-05-11
A new and simple method for benzene, toluene, ethylbenzene and o-xylene (BTEX) quantification in vehicle exhaust was developed based on diffusion-controlled extraction onto a retracted solid-phase microextraction (SPME) fiber coating. The rationale was to develop a method based on existing and proven SPME technology that is feasible for field adaptation in developing countries. Passive sampling with the SPME fiber retracted into the needle extracted nearly two orders of magnitude less mass (n) compared with the exposed fiber (outside of needle), and sampling was in a time-weighted averaging (TWA) mode. Both the sampling time (t) and fiber retraction depth (Z) were adjusted to quantify a wider range of Cgas. Extraction and quantification are conducted in a non-equilibrium mode. Effects of Cgas, t, Z and T were tested. In addition, the contribution of n extracted by metallic surfaces of the needle assembly without SPME coating was studied, as was the effect of sample storage time on n loss. Retracted TWA-SPME extractions followed the theoretical model. Extracted n of BTEX was proportional to Cgas, t, Dg, T and inversely proportional to Z. Method detection limits were 1.8, 2.7, 2.1 and 5.2 mg m(-3) (0.51, 0.83, 0.66 and 1.62 ppm) for BTEX, respectively. The contribution of extraction onto metallic surfaces was reproducible and influenced by Cgas and t, and less so by T and Z. The new method was applied to measure BTEX in the exhaust gas of a 1995 Ford Crown Victoria and compared with a whole gas and direct injection method. PMID:25911428
[Evaluation of +Gz tolerance following simulation of 8-hr flight].
Khomenko, M N; Bukhtiiarov, I V; Malashchuk, L S
2005-01-01
Tolerance of +Gz (head-to-pelvis) centrifugation in pilots was evaluated following simulation of a long flight in a single-seat fighter. The experiment involved 5 test subjects who were exposed to +Gz loads before and after a simulated 8-hr flight, with an onset rate of 0.1 G/s, without anti-g suits and with muscles relaxed; in addition, limiting tolerance of complex-profile +Gz loads of 2.0 to 9.0 G with an onset rate of 1.0 G/s was determined for test subjects wearing anti-g suits (AGS) with a change-over pressure valve in the peak mode and using muscle straining and breathing maneuvers. To counteract the negative effects of extended flight, various seat configurations were tested: a back inclination of 30 degrees to the +Gz vector, and a changeable-geometry seat with a back inclination of 55 degrees to the vector. The other countermeasures applied were a cool air shower, suit ventilation, physical exercises, lower body massage with the AGS, electrostimulation of the back and lumbar region, profiling of the supporting and soft parts of the seat, and a 30-s exposure to +5 Gz. Hemodynamic and respiration parameters as well as body temperature were measured over the course of the 8-hr flight and during and shortly after centrifugation. According to the results of the investigation, seat inclination at 55 degrees to the +Gz vector together with the tested system of countermeasures prevented degradation of tolerance of large (9 G) loads following 8-hr flight simulation with the use of modern anti-g gear. PMID:16353624
Reynolds, Steven D; Blanchard, Charles L; Ziman, Stephen D
2004-11-01
Analyses of ozone (O3) measurements in conjunction with photochemical modeling were used to assess the feasibility of attaining the federal 8-hr O3 standard in the eastern United States. Various combinations of volatile organic compound (VOC) and oxides of nitrogen (NOx) emission reductions were effective in lowering modeled peak 1-hr O3 concentrations. VOC emissions reductions alone had only a modest impact on modeled peak 8-hr O3 concentrations. Anthropogenic NOx emissions reductions of 46-86% of 1996 base case values were needed to reach the level of the 8-hr standard in some areas. As NOx emissions are reduced, O3 production efficiency increases, which accounts for the less than proportional response of calculated 8-hr O3 levels. Such increases in O3 production efficiency also were noted in previous modeling work for central California. O3 production in some urban core areas, such as New York City and Chicago, IL, was found to be VOC-limited. In these areas, moderate NOx emissions reductions may be accompanied by increases in peak 8-hr O3 levels. The findings help to explain differences in historical trends in 1- and 8-hr O3 levels and have serious implications for the feasibility of attaining the 8-hr O3 standard in several areas of the eastern United States. PMID:15587557
A ∼ 3.8 hr PERIODICITY FROM AN ULTRASOFT ACTIVE GALACTIC NUCLEUS CANDIDATE
Lin, Dacheng; Irwin, Jimmy A.; Godet, Olivier; Webb, Natalie A.; Barret, Didier
2013-10-10
Very few galactic nuclei are found to show significant X-ray quasi-periodic oscillations (QPOs). After carefully modeling the noise continuum, we find that the ∼3.8 hr QPO in the ultrasoft active galactic nucleus candidate 2XMM J123103.2+110648 was significantly detected (∼5σ) in two XMM-Newton observations in 2005, but not in the one in 2003. The QPO root mean square (rms) is very high and increases from ∼25% in 0.2-0.5 keV to ∼50% in 1-2 keV. The QPO probably corresponds to the low-frequency type in Galactic black hole X-ray binaries, considering its large rms and the probably low mass (∼10^5 M_☉) of the black hole in the nucleus. We also fit the soft X-ray spectra from the three XMM-Newton observations and find that they can be described with either pure thermal disk emission or optically thick low-temperature Comptonization. We see no clear X-ray emission from the two Swift observations in 2013, indicating lower source fluxes than those in XMM-Newton observations.
Hill, R Jedd; Smith, Philip A
2015-01-01
Carbon dioxide (CO2) makes up a relatively small percentage of atmospheric gases, yet when used or produced in large quantities as a gas, a liquid, or a solid (dry ice), substantial airborne exposures may occur. Exposure to elevated CO2 concentrations may elicit toxicity, even with oxygen concentrations that are not considered dangerous per se. Full-shift sampling approaches to measure 8-hr time weighted average (TWA) CO2 exposures are used in many facilities where CO2 gas may be present. The need to assess rapidly fluctuating CO2 levels that may approach immediately dangerous to life or health (IDLH) conditions should also be a concern, and several methods for doing so using fast responding measurement tools are discussed in this paper. Colorimetric detector tubes, a non-dispersive infrared (NDIR) detector, and a portable Fourier transform infrared (FTIR) spectroscopy instrument were evaluated in a laboratory environment using a flow-through standard generation system and were found to provide suitable accuracy and precision for assessing rapid fluctuations in CO2 concentration, with a possible effect related to humidity noted only for the detector tubes. These tools were used in the field to select locations and times for grab sampling and personal full-shift sampling, which provided laboratory analysis data to confirm IDLH conditions and 8-hr TWA exposure information. Fluctuating CO2 exposures are exemplified through field work results from several workplaces. In a brewery, brief CO2 exposures above the IDLH value occurred when large volumes of CO2-containing liquid were released for disposal, but 8-hr TWA exposures were not found to exceed the permissible level. In a frozen food production facility nearly constant exposure to CO2 concentrations above the permissible 8-hr TWA value were seen, as well as brief exposures above the IDLH concentration which were associated with specific tasks where liquid CO2 was used. In a poultry processing facility the use of dry
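The two-tier assessment described above, fast-response readings screened against the IDLH value alongside a full-shift TWA screened against the permissible limit, can be sketched as follows. The OSHA 8-hr TWA PEL for CO2 is 5,000 ppm and the NIOSH IDLH value is 40,000 ppm; the logger readings are hypothetical.

```python
# Screen fast-response CO2 readings against the IDLH value and compute
# the 8-h TWA for comparison with the PEL, mirroring the brewery case:
# brief excursions above IDLH, but a TWA below the permissible level.

PEL_TWA_PPM = 5_000   # OSHA 8-h TWA permissible exposure limit
IDLH_PPM = 40_000     # NIOSH IDLH value for CO2

def assess(readings_ppm, interval_min=1.0, shift_min=480.0):
    """Return (8-h TWA in ppm, True if any reading exceeded IDLH)."""
    twa = sum(readings_ppm) * interval_min / shift_min
    return twa, any(r > IDLH_PPM for r in readings_ppm)

# hypothetical shift: low background with one 10-min liquid-CO2 release
readings = [1_500] * 470 + [45_000] * 10
twa, idlh_hit = assess(readings)
print(f"TWA = {twa:.0f} ppm, IDLH exceeded: {idlh_hit}")
```

Here the TWA (~2,400 ppm) stays below the PEL even though the excursion exceeds IDLH, which is why grab sampling guided by a fast-responding instrument matters in addition to full-shift sampling.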
Pate, William; Charlton, Michael; Wellington, Carl
2013-01-01
Occupational noise exposure is a recognized hazard for employees working near equipment and processes that generate high levels of sound pressure. High sound pressure levels have the potential to result in temporary or permanent alteration in hearing perception. The cleaning of cages used to house laboratory research animals is a process that uses equipment capable of generating high sound pressure levels. The purpose of this research study was to assess occupational exposure to sound pressure levels for employees operating cage decontamination equipment. This study reveals the potential for overexposure to hazardous noise as defined by the Occupational Safety and Health Administration (OSHA) permissible exposure limit and consistent surpassing of the OSHA action level. These results emphasize the importance of evaluating equipment and room design when acquiring new cage decontamination equipment in order to minimize employee exposure to potentially hazardous noise pressure levels. PMID:23566325
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, related work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
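For the scalar-weighted case, the optimal average quaternion is the eigenvector associated with the largest eigenvalue of the weighted outer-product matrix of the input quaternions; this formulation handles the q/−q sign ambiguity automatically, since q and −q contribute the same outer product. A minimal NumPy sketch:

```python
# Optimal scalar-weighted quaternion average: the eigenvector of
# M = sum_i w_i * q_i q_i^T with the largest eigenvalue. Quaternions
# are unit 4-vectors; q and -q represent the same attitude.
import numpy as np

def average_quaternion(quats, weights):
    """Return the optimal average of unit quaternions (rows of quats)."""
    M = sum(w * np.outer(q, q) for q, w in zip(quats, weights))
    eigvals, eigvecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    return eigvecs[:, -1]                 # eigenvector of largest eigenvalue

qs = np.array([[1.0, 0.0, 0.0, 0.0],
               [-1.0, 0.0, 0.0, 0.0]])   # same attitude, opposite signs
q_avg = average_quaternion(qs, [0.5, 0.5])
print(np.abs(q_avg))  # -> [1. 0. 0. 0.]
```

A naive component-wise mean of these two rows would give the zero vector; the eigenvector formulation recovers the common attitude.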
Development of accumulated heat stress index based on time-weighted function
NASA Astrophysics Data System (ADS)
Lee, Ji-Sun; Byun, Hi-Ryong; Kim, Do-Woo
2016-05-01
Heat stress accumulates in the human body when a person is exposed to a thermal condition for a long time. Considering this fact, we have defined the accumulated heat stress (AH) and have developed the accumulated heat stress index (AHI) to quantify the strength of heat stress. AH represents the heat stress accumulated in a 72-h period calculated by the use of a time-weighted function, and the AHI is a standardized index developed by the use of an equiprobability transformation (from a fitted Weibull distribution to the standard normal distribution). To verify the advantage offered by the AHI, it was compared with four thermal indices used by national governments: the humidex, the heat index, the wet-bulb globe temperature, and the perceived temperature. AH and the AHI were found to provide better detection of thermal danger and were more useful than other indices. In particular, AH and the AHI detect deaths that were caused not only by extremely hot and humid weather, but also by the persistence of moderately hot and humid weather (for example, consecutive daily maximum temperatures of 28-32 °C), which the other indices fail to detect.
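The equiprobability transformation used to standardize the index maps a value through the fitted Weibull CDF and then through the inverse standard-normal CDF. The Weibull parameters below are assumed for illustration, not fitted values from the study.

```python
# Equiprobability transform: AH -> Weibull CDF -> inverse normal CDF.
# Weibull shape/scale are hypothetical placeholders.
import math
from scipy.stats import weibull_min, norm

def standardize(ah, shape, scale):
    """Map an accumulated-heat-stress value to a standard-normal index."""
    p = weibull_min.cdf(ah, c=shape, scale=scale)
    return norm.ppf(p)

shape, scale = 2.0, 100.0
median_ah = scale * math.log(2) ** (1 / shape)  # Weibull median
print(round(standardize(median_ah, shape, scale), 6))  # -> 0.0
```

By construction, the median AH maps to an index of 0, and unusually high AH values map to large positive z-scores, which is what makes the AHI comparable across climates.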
Chrien, R.E.
1986-10-01
The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.
Bradley, Paul M; Journey, Celeste A; Brigham, Mark E; Burns, Douglas A; Button, Daniel T; Riva-Murray, Karen
2013-01-01
To assess inter-comparability of fluvial mercury (Hg) observations at substantially different scales, Hg concentrations, yields, and bivariate-relations were evaluated at nested-basin locations in the Edisto River, South Carolina and Hudson River, New York. Differences between scales were observed for filtered methylmercury (FMeHg) in the Edisto (attributed to wetland coverage differences) but not in the Hudson. Total mercury (THg) concentrations and bivariate-relationships did not vary substantially with scale in either basin. Combining results of this and a previously published multi-basin study, fish Hg correlated strongly with sampled water FMeHg concentration (ρ = 0.78; p = 0.003) and annual FMeHg basin yield (ρ = 0.66; p = 0.026). Improved correlation (ρ = 0.88; p < 0.0001) was achieved with time-weighted mean annual FMeHg concentrations estimated from basin-specific LOADEST models and daily streamflow. Results suggest reasonable scalability and inter-comparability for different basin sizes if wetland area or related MeHg-source-area metrics are considered. PMID:22982552
NASA Astrophysics Data System (ADS)
Rung-Arunwan, Tawat; Siripunvaraporn, Weerachai; Utada, Hisashi
2016-04-01
Through a large number of magnetotelluric (MT) observations conducted in a study area, one can obtain regional one-dimensional (1-D) features of the subsurface electrical conductivity structure simply by taking the geometric average of determinant invariants of observed impedances. This method was proposed by Berdichevsky and coworkers, which is based on the expectation that distortion effects due to near-surface electrical heterogeneities will be statistically smoothed out. A good estimation of a regional mean 1-D model is useful, especially in recent years, to be used as a priori (or a starting) model in 3-D inversion. However, the original theory was derived before the establishment of the present knowledge on galvanic distortion. This paper, therefore, reexamines the meaning of the Berdichevsky average by using the conventional formulation of galvanic distortion. A simple derivation shows that the determinant invariant of distorted impedance and its Berdichevsky average is always downward biased by the distortion parameters of shear and splitting. This means that the regional mean 1-D model obtained from the Berdichevsky average tends to be more conductive. As an alternative rotational invariant, the sum of the squared elements (ssq) invariant is found to be less affected by bias from distortion parameters; thus, we conclude that its geometric average would be more suitable for estimating the regional structure. We find that the combination of determinant and ssq invariants provides parameters useful in dealing with a set of distorted MT impedances.
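The two rotational invariants compared above can be written down directly for a 2×2 impedance tensor: the determinant invariant √|det Z| and the ssq invariant √(Σ|Zij|²/2), with the Berdichevsky average being the geometric mean over sites. A sketch with hypothetical impedances:

```python
# Rotational invariants of a 2x2 MT impedance tensor and their
# geometric (Berdichevsky-style) average over sites.
import numpy as np

def det_invariant(Z):
    return np.sqrt(np.abs(np.linalg.det(Z)))

def ssq_invariant(Z):
    return np.sqrt(np.sum(np.abs(Z) ** 2) / 2)

def geometric_average(values):
    return np.exp(np.mean(np.log(values)))

# an ideal undistorted 1-D impedance has the form [[0, z], [-z, 0]]
z = 1.0 + 1.0j
Z1d = np.array([[0, z], [-z, 0]])
print(det_invariant(Z1d), ssq_invariant(Z1d))  # both equal |z|

sites = [Z1d, 2 * Z1d]  # two hypothetical sites
print(geometric_average([det_invariant(Z) for Z in sites]))
```

For an undistorted 1-D tensor the two invariants coincide; the paper's point is that under galvanic distortion the determinant invariant is biased downward by shear and splitting, while the ssq invariant is less affected.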
Averaging the inhomogeneous universe
NASA Astrophysics Data System (ADS)
Paranjape, Aseem
2012-03-01
A basic assumption of modern cosmology is that the universe is homogeneous and isotropic on the largest observable scales. This greatly simplifies Einstein's general relativistic field equations applied at these large scales, and allows a straightforward comparison between theoretical models and observed data. However, Einstein's equations should ideally be imposed at length scales comparable to, say, the solar system, since this is where these equations have been tested. We know that at these scales the universe is highly inhomogeneous. It is therefore essential to perform an explicit averaging of the field equations in order to apply them at large scales. It has long been known that due to the nonlinear nature of Einstein's equations, any explicit averaging scheme will necessarily lead to corrections in the equations applied at large scales. Estimating the magnitude and behavior of these corrections is a challenging task, due to difficulties associated with defining averages in the context of general relativity (GR). It has recently become possible to estimate these effects in a rigorous manner, and we will review some of the averaging schemes that have been proposed in the literature. A tantalizing possibility explored by several authors is that the corrections due to averaging may in fact account for the apparent acceleration of the expansion of the universe. We will explore this idea, reviewing some of the work done in the literature to date. We will argue however, that this rather attractive idea is in fact not viable as a solution of the dark energy problem, when confronted with observational constraints.
Covariant approximation averaging
NASA Astrophysics Data System (ADS)
Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2015-06-01
We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
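The estimator structure behind AMA can be shown with synthetic numbers: an expensive "exact" observable measured on a few samples is corrected by a cheap, strongly correlated approximation measured on many, and the bias of the approximation cancels exactly. This is a schematic illustration with fabricated data, not a lattice computation.

```python
# AMA-style unbiased estimator:
#   O_AMA = mean(O_exact - O_approx, few) + mean(O_approx, many)
# The correction term removes the approximation's bias; the variance is
# set by the cheap, high-statistics term plus the small difference term.
import numpy as np

rng = np.random.default_rng(0)
n_exact, n_approx = 20, 400
truth = 1.0
noise = rng.normal(0, 0.5, n_approx)   # per-sample fluctuation
approx = truth + noise - 0.05          # cheap observable, slightly biased
exact = truth + noise[:n_exact]        # expensive observable, unbiased

ama = np.mean(exact - approx[:n_exact]) + np.mean(approx)
print(round(ama, 3))  # close to 1.0, with the 0.05 bias cancelled
```

Because exact and approx share the same fluctuations here, the difference term has tiny variance, which is the source of the cost savings the abstract reports.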
Bonnor, W.B.
1987-05-01
The Einstein-Straus (1945) vacuole is here used to represent a bound cluster of galaxies embedded in a standard pressure-free cosmological model, and the average density of the cluster is compared with the density of the surrounding cosmic fluid. The two are nearly but not quite equal, and the more condensed the cluster, the greater the difference. A theoretical consequence of the discrepancy between the two densities is discussed. 25 references.
Americans' Average Radiation Exposure
NA
2000-08-11
We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.
NASA Astrophysics Data System (ADS)
Samuvel, K.; Ramachandran, K.
2016-05-01
BaTi0.5Co0.5O3 (BTCO) nanoparticles were prepared by the solid-state reaction technique using different starting materials, and the microstructure was examined by XRD, FESEM, BDS and VSM. X-ray diffraction and electron diffraction patterns showed that the nanoparticles were the tetragonal BTCO phase. The BTCO nanoparticles prepared from as-prepared titanium oxide, cobalt oxide and barium carbonate starting materials have spherical grain morphology, an average size of 65 nm and a fairly narrow size distribution. The nanoscale presence and the formation of the tetragonal perovskite phase as well as the crystallinity were detected using the mentioned techniques. Dielectric properties of the samples were measured at different frequencies. Broadband dielectric spectroscopy was applied to investigate the electrical properties of the disordered perovskite-like ceramics over a wide temperature range. The doped BTCO samples exhibited low loss factors at 1 kHz and 1 MHz frequencies, respectively.
Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average
ERIC Educational Resources Information Center
DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.
2007-01-01
Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…
2011-01-01
Background Approximately one third of New Zealand children and young people are overweight or obese. A similar proportion (33%) do not meet recommendations for physical activity, and 70% do not meet recommendations for screen time. Increased time being sedentary is positively associated with being overweight. There are few family-based interventions aimed at reducing sedentary behavior in children. The aim of this trial is to determine the effects of a 24 week home-based, family oriented intervention to reduce sedentary screen time on children's body composition, sedentary behavior, physical activity, and diet. Methods/Design The study design is a pragmatic two-arm parallel randomized controlled trial. Two hundred and seventy overweight children aged 9-12 years and primary caregivers are being recruited. Participants are randomized to intervention (family-based screen time intervention) or control (no change). At the end of the study, the control group is offered the intervention content. Data collection is undertaken at baseline and 24 weeks. The primary trial outcome is child body mass index (BMI) and standardized body mass index (zBMI). Secondary outcomes are change from baseline to 24 weeks in child percentage body fat; waist circumference; self-reported average daily time spent in physical and sedentary activities; dietary intake; and enjoyment of physical activity and sedentary behavior. Secondary outcomes for the primary caregiver include change in BMI and self-reported physical activity. Discussion This study provides an excellent example of a theory-based, pragmatic, community-based trial targeting sedentary behavior in overweight children. The study has been specifically designed to allow for estimation of the consistency of effects on body composition for Māori (indigenous), Pacific and non-Māori/non-Pacific ethnic groups. If effective, this intervention is imminently scalable and could be integrated within existing weight management programs. Trial
Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Schuman, Joel S.
2016-01-01
Purpose Developing a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Methods Twenty-one eyes of 21 healthy volunteers were scanned with a noneye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to nonframe-averaged images with voxel resampling and adding amplitude deviation with 15 repetitions. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and the distance between the end of the visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. Results All virtually averaged nonframe-averaged images showed notable improvement and clear resemblance to active eye-tracking frame-averaged images. SNR and CNR were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of the visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that became nonsignificant after processing. Conclusion The virtual averaging method successfully improved nontracking nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Translational Relevance Virtual averaging may enable detailed retinal structure studies on images acquired with a mixture of nonframe-averaged and frame-averaged OCT devices without concern about systematic differences in either qualitative or quantitative aspects. PMID:26835180
Averaging Models: Parameters Estimation with the R-Average Procedure
ERIC Educational Resources Information Center
Vidotto, G.; Massidda, D.; Noventa, S.
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…
Averaging Internal Consistency Reliability Coefficients
ERIC Educational Resources Information Center
Feldt, Leonard S.; Charter, Richard A.
2006-01-01
Seven approaches to averaging reliability coefficients are presented. Each approach starts with a unique definition of the concept of "average," and no approach is more correct than the others. Six of the approaches are applicable to internal consistency coefficients. The seventh approach is specific to alternate-forms coefficients. Although the…
The Average of Rates and the Average Rate.
ERIC Educational Resources Information Center
Lindstrom, Peter
1988-01-01
Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
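The distinction this entry draws can be made concrete with a short computation: for rates defined over equal distances, the true average rate is the harmonic mean of the individual rates, not their arithmetic mean. A minimal sketch (the speeds are chosen purely for illustration):

```python
def arithmetic_mean(values):
    return sum(values) / len(values)

def harmonic_mean(values):
    # Reciprocal of the mean reciprocal; the appropriate average when each
    # rate applies over an equal distance (or other equal numerator).
    return len(values) / sum(1.0 / v for v in values)

# Two legs of equal distance driven at 30 mph and 60 mph:
# 60 miles total takes 1 h + 0.5 h = 1.5 h, so the true average is 40 mph.
speeds = [30.0, 60.0]
print(arithmetic_mean(speeds))   # 45.0: overstates the true average rate
print(harmonic_mean(speeds))     # ~40.0: matches total distance / total time
```

The same reasoning underlies fuel-economy averaging in mpg, where the harmonic mean applies whenever the distances driven are equal.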
The Averaging Problem in Cosmology
NASA Astrophysics Data System (ADS)
Paranjape, Aseem
2009-06-01
This thesis deals with the averaging problem in cosmology, which has gained considerable interest in recent years, and is concerned with correction terms (after averaging inhomogeneities) that appear in the Einstein equations when working on the large scales appropriate for cosmology. It has been claimed in the literature that these terms may account for the phenomenon of dark energy which causes the late time universe to accelerate. We investigate the nature of these terms by using averaging schemes available in the literature and further developed to be applicable to the problem at hand. We show that the effect of these terms, when calculated carefully, remains negligible and cannot explain the late time acceleration.
High average power Pockels cell
Daly, Thomas P.
1991-01-01
A high average power Pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The Pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.
Vocal attractiveness increases by averaging.
Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal
2010-01-26
Vocal attractiveness has a profound influence on listeners, a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1], with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increase by averaging, analogous to the well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. PMID:20129047
Determining GPS average performance metrics
NASA Technical Reports Server (NTRS)
Moore, G. V.
1995-01-01
Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego as over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include the average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location, and understand performance trends among various users.
Evaluations of average level spacings
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of ¹⁶⁸Er data. 19 figures, 2 tables.
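The Porter-Thomas distribution mentioned here is a chi-squared distribution with one degree of freedom for the reduced neutron widths; fitting its truncated form is what lets one correct for weak levels lost below the detection threshold. A sketch of that truncation effect (the threshold and sample size are invented for illustration):

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Porter-Thomas: reduced neutron widths follow a chi-squared distribution
# with one degree of freedom, i.e. the square of a standard normal,
# scaled by the average reduced width.
mean_width = 1.0
widths = mean_width * rng.standard_normal(100_000) ** 2

# A finite detection threshold truncates the weak-width tail
# (the 5% figure is an assumption for this sketch):
threshold = 0.05 * mean_width
observed = widths[widths >= threshold]

# Fraction of levels missed below threshold:
# P(chi2_1 < t) = erf(sqrt(t / 2)) for the unit-mean case.
missed_analytic = math.erf(math.sqrt(threshold / (2.0 * mean_width)))
missed_empirical = 1.0 - observed.size / widths.size
```

Even a threshold at 5% of the mean width hides a substantial fraction of levels (roughly 18% here), which is why the observed spacing must be corrected by a fit to the truncated distribution.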
On generalized averaged Gaussian formulas
NASA Astrophysics Data System (ADS)
Spalevic, Miodrag M.
2007-09-01
We present a simple numerical method for constructing the optimal (generalized) averaged Gaussian quadrature formulas which are the optimal stratified extensions of Gauss quadrature formulas. These extensions exist in many cases in which real positive Kronrod formulas do not exist. For the Jacobi weight functions w(x) ≡ w^{(α,β)}(x) = (1-x)^α(1+x)^β (α, β > -1) we give a necessary and sufficient condition on the parameters α and β such that the optimal averaged Gaussian quadrature formulas are internal.
Polyhedral Painting with Group Averaging
ERIC Educational Resources Information Center
Farris, Frank A.; Tsao, Ryan
2016-01-01
The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…
Averaged Electroencephalic Audiometry in Infants
ERIC Educational Resources Information Center
Lentz, William E.; McCandless, Geary A.
1971-01-01
Normal, preterm, and high-risk infants were tested at 1, 3, 6, and 12 months of age using averaged electroencephalic audiometry (AEA) to determine the usefulness of AEA as a measurement technique for assessing auditory acuity in infants, and to delineate some of the procedural and technical problems often encountered. (KW)
Averaging inhomogeneous cosmologies - a dialogue.
NASA Astrophysics Data System (ADS)
Buchert, T.
The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.
Averaging facial expression over time
Haberman, Jason; Harp, Tom; Whitney, David
2010-01-01
The visual system groups similar features, objects, and motion (e.g., Gestalt grouping). Recent work suggests that the computation underlying perceptual grouping may be one of summary statistical representation. Summary representation occurs for low-level features, such as size, motion, and position, and even for high level stimuli, including faces; for example, observers accurately perceive the average expression in a group of faces (J. Haberman & D. Whitney, 2007, 2009). The purpose of the present experiments was to characterize the time-course of this facial integration mechanism. In a series of three experiments, we measured observers’ abilities to recognize the average expression of a temporal sequence of distinct faces. Faces were presented in sets of 4, 12, or 20, at temporal frequencies ranging from 1.6 to 21.3 Hz. The results revealed that observers perceived the average expression in a temporal sequence of different faces as precisely as they perceived a single face presented repeatedly. The facial averaging was independent of temporal frequency or set size, but depended on the total duration of exposed faces, with a time constant of ~800 ms. These experiments provide evidence that the visual system is sensitive to the ensemble characteristics of complex objects presented over time. PMID:20053064
Average Cost of Common Schools.
ERIC Educational Resources Information Center
White, Fred; Tweeten, Luther
The paper shows costs of elementary and secondary schools applicable to Oklahoma rural areas, including the long-run average cost curve which indicates the minimum per student cost for educating various numbers of students and the application of the cost curves determining the optimum school district size. In a stratified sample, the school…
Exact averaging of laminar dispersion
NASA Astrophysics Data System (ADS)
Ratnakar, Ram R.; Balakotaiah, Vemuri
2011-02-01
We use the Liapunov-Schmidt (LS) technique of bifurcation theory to derive a low-dimensional model for laminar dispersion of a nonreactive solute in a tube. The LS formalism leads to an exact averaged model, consisting of the governing equation for the cross-section averaged concentration, along with the initial and inlet conditions, to all orders in the transverse diffusion time. We use the averaged model to analyze the temporal evolution of the spatial moments of the solute and show that they do not have the centroid displacement or variance deficit predicted by the coarse-grained models derived by other methods. We also present a detailed analysis of the first three spatial moments for short and long times as a function of the radial Peclet number and identify three clearly defined time intervals for the evolution of the solute concentration profile. By examining the skewness in some detail, we show that the skewness increases initially, attains a maximum for time scales of the order of transverse diffusion time, and the solute concentration profile never attains the Gaussian shape at any finite time. Finally, we reason that there is a fundamental physical inconsistency in representing laminar (Taylor) dispersion phenomena using truncated averaged models in terms of a single cross-section averaged concentration and its large scale gradient. Our approach evaluates the dispersion flux using a local gradient between the dominant diffusive and convective modes. We present and analyze a truncated regularized hyperbolic model in terms of the cup-mixing concentration for the classical Taylor-Aris dispersion that has a larger domain of validity compared to the traditional parabolic model. By analyzing the temporal moments, we show that the hyperbolic model has no physical inconsistencies that are associated with the parabolic model and can describe the dispersion process to first order accuracy in the transverse diffusion time.
Averaging Robertson-Walker cosmologies
NASA Astrophysics Data System (ADS)
Brown, Iain A.; Robbers, Georg; Behrend, Juliane
2009-04-01
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ωeff0 approx 4 × 10-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state weff < -1/3 can be found for strongly phantom models.
Ensemble averaging of acoustic data
NASA Technical Reports Server (NTRS)
Stefanski, P. K.
1982-01-01
A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System, consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.
Flexible time domain averaging technique
NASA Astrophysics Data System (ADS)
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed, but they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation, and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
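The comb-filter behaviour of conventional TDA that this abstract takes as its starting point can be sketched as ordinary synchronous averaging; the FTDA itself is not reproduced here. In this sketch the period is assumed to be an exact integer number of samples (a non-integer period is precisely what produces the period cutting error):

```python
import numpy as np

def time_domain_average(signal, period):
    # Split the record into consecutive segments of one nominal period
    # and average them; components that are not harmonics of 1/period
    # are attenuated (the comb-filter behaviour).
    n_segments = len(signal) // period
    segments = signal[:n_segments * period].reshape(n_segments, period)
    return segments.mean(axis=0)

# A periodic component buried in noise (all parameters illustrative):
rng = np.random.default_rng(0)
period = 64
t = np.arange(period * 200)
clean = np.sin(2 * np.pi * t / period)
noisy = clean + rng.normal(scale=1.0, size=t.size)

avg = time_domain_average(noisy, period)
# Averaging 200 segments suppresses the noise power by a factor of ~200.
```

When the true period is not an integer number of samples, each segment is cut slightly short or long, and the accumulated misalignment smears the reconstructed waveform, which is the error the FTDA is designed to avoid.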
Long 3 x 8 hr dialysis: a three-decade summary.
Charra, Bernard; Chazot, Charles; Jean, Guillaume; Hurot, Jean-Marc; Vanel, Thierry; Terrat, Jean-Claude; VoVan, Cyril
2003-01-01
A long hemodialysis (HD), 3 x 8 hours/week, has been used without significant modification in Tassin for 35 years with excellent morbidity and mortality results. It can be performed during the day or overnight. The relatively good survival is mainly due to a lower cardiovascular mortality than usually reported in dialysis patients. This in turn is mainly due to the good control of blood pressure (BP) including drug-free hypertension control and low incidence of intradialytic hypotension. This control of BP is probably the result of the tight extracellular volume normalization (dry weight), although one cannot exclude the effect of other factors such as serum phosphorus control well achieved using long dialysis. The high dose of small and even more of middle molecules is another essential virtue of long dialysis, leading to good nutrition, correction of anemia, control of serum phosphate and potassium with low doses of medications and providing a very cost-effective treatment. In 2002 one must aim at optimal rather than just adequate dialysis. Optimal dialysis needs to correct as perfectly as possible each and every abnormality due to renal failure. It can be achieved using longer (or more frequent) sessions. Overnight dialysis is the most logical way of implementing long HD with the lowest possible hindrance on patient's life. Due to the change in case mix a decreasing number of patients are able or willing to go on overnight dialysis, education to be autonomous is more difficult, but the benefit is still there. PMID:14733303
Circadian Activity Rhythms and Sleep in Nurses Working Fixed 8-hr Shifts.
Kang, Jiunn-Horng; Miao, Nae-Fang; Tseng, Ing-Jy; Sithole, Trevor; Chung, Min-Huey
2015-05-01
Shift work is associated with adverse health outcomes. The aim of this study was to explore the effects of shift work on circadian activity rhythms (CARs) and on objective and subjective sleep quality in nurses. Female day-shift (n = 16), evening-shift (n = 6), and night-shift (n = 13) nurses wore a wrist actigraph to monitor activity. We used cosinor analysis and time-frequency analysis to study CARs. Night-shift nurses exhibited the lowest values of circadian rhythm amplitude, acrophase, autocorrelation, and mean of the circadian relative power (CRP), whereas evening-shift workers exhibited the greatest standard deviation of the CRP among the three shift groups. That is, night-shift nurses had less robust CARs, and evening-shift nurses had greater variations in CARs, compared with nurses who worked other shifts. Our results highlight the importance of assessing CARs to prevent the adverse effects of shift work on nurses' health. PMID:25332463
ERIC Educational Resources Information Center
Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri
2006-01-01
Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
RHIC BPM system average orbit calculations
Michnoff, R.; Cerniglia, P.; Degen, C.; Hulsart, R.; et al.
2009-05-04
RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this has provided an average orbit signal quality improvement, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and performance with beam.
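The benefit of averaging over whole periods of a ~10 Hz perturbation, rather than over a fixed 10000 turns, can be illustrated with a toy model. All numbers below (revolution frequency, perturbation amplitude, orbit offset) are invented for illustration and are not the actual RHIC parameters:

```python
import numpy as np

# A position signal sampled once per turn: a fixed closed-orbit offset
# plus a ~10 Hz perturbation. All numbers are assumptions for this sketch.
f_rev = 78_000.0              # assumed revolution frequency, Hz
f_pert = 10.0                 # perturbation frequency, Hz
closed_orbit = 1.25           # mm, the quantity to recover
turns = np.arange(400_000)
position = closed_orbit + 0.5 * np.sin(2 * np.pi * f_pert * turns / f_rev)

# Fixed 10000-turn average: 10000 turns is not a whole number of 10 Hz
# periods, so the perturbation does not cancel.
fixed_avg = position[:10_000].mean()

# Averaging over an integer number of perturbation periods cancels the
# oscillation and recovers the closed orbit.
turns_per_period = int(round(f_rev / f_pert))
n_periods = len(position) // turns_per_period
periodic_avg = position[:n_periods * turns_per_period].mean()
```

The residual error of the fixed-length average depends on where the window falls within the perturbation cycle, which is the orbit fluctuation the programmable turn count was introduced to suppress.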
Spectral averaging techniques for Jacobi matrices
Rio, Rafael del; Martinez, Carmen; Schulz-Baldes, Hermann
2008-02-15
Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner-type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.
Averaging and Adding in Children's Worth Judgements
ERIC Educational Resources Information Center
Schlottmann, Anne; Harman, Rachel M.; Paine, Julie
2012-01-01
Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…
Averaging procedures for flow within vegetation canopies
NASA Astrophysics Data System (ADS)
Raupach, M. R.; Shaw, R. H.
1982-01-01
Most one-dimensional models of flow within vegetation canopies are based on horizontally averaged flow variables. This paper formalizes the horizontal averaging operation. Two averaging schemes are considered: pure horizontal averaging at a single instant, and time averaging followed by horizontal averaging. These schemes produce different forms for the mean and turbulent kinetic energy balances, and especially for the ‘wake production’ term describing the transfer of energy from large-scale motion to wake turbulence by form drag. The differences are primarily due to the appearance, in the covariances produced by the second scheme, of dispersive components arising from the spatial correlation of time-averaged flow variables. The two schemes are shown to coincide if these dispersive fluxes vanish.
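The dispersive components described here can be demonstrated on synthetic data: time averaging followed by horizontal averaging produces, in addition to the usual turbulent covariance, a dispersive flux built from the spatial correlation of the time-averaged fields. A toy decomposition (all fields and magnitudes are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic velocity fields: axis 0 is time, axis 1 is horizontal position.
# A standing spatial pattern (as behind canopy elements) plus random turbulence.
n_t, n_x = 2000, 64
pattern = np.sin(2 * np.pi * np.arange(n_x) / n_x)
u = 2.0 + 0.3 * pattern + rng.normal(size=(n_t, n_x))
w = 0.0 + 0.3 * pattern + rng.normal(size=(n_t, n_x))

# Scheme 2: time average at each position, then average horizontally.
u_bar, w_bar = u.mean(axis=0), w.mean(axis=0)   # time means at each x
U, W = u_bar.mean(), w_bar.mean()               # horizontal means

# Turbulent flux: horizontal average of the time covariance.
turb_flux = ((u - u_bar) * (w - w_bar)).mean()

# Dispersive flux: spatial covariance of the time-averaged fields,
# the extra term that appears in the double-averaged equations.
disp_flux = ((u_bar - U) * (w_bar - W)).mean()
```

Here the turbulence is spatially uncorrelated, so the turbulent flux is near zero while the dispersive flux reflects the persistent spatial pattern; if the time-averaged fields were horizontally uniform, the dispersive flux would vanish and the two averaging schemes would coincide, as the paper notes.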
Youngstedt, Shawn D.; Jean-Louis, Girardin; Bootzin, Richard R.; Kripke, Daniel F.; Cooper, Jonnifer; Dean, Lauren R.; Catao, Fabio; James, Shelli; Vining, Caitlyn; Williams, Natasha J.; Irwin, Michael R.
2013-01-01
Epidemiologic studies have consistently shown that sleeping < 7 hr or ≥ 8 hr is associated with increased mortality and morbidity. The risks of short sleep may be consistent with results from experimental sleep deprivation studies. However, there has been little study of chronic moderate sleep restriction, and no evaluation of older adults, who might be more vulnerable to negative effects of sleep restriction given their age-related morbidities. Moreover, the risks of long sleep have scarcely been examined experimentally. Moderate sleep restriction might benefit older long sleepers, who often spend excessive time in bed (TIB), in contrast to older adults with average sleep patterns. Our aims are: (1) to examine the ability of older long sleepers and older average sleepers to adhere to 60 min TIB restriction; and (2) to contrast effects of chronic TIB restriction in older long vs. average sleepers. Older adults (n=100) (60–80 yr) who sleep 8–9 hr per night and 100 older adults who sleep 6–7.25 hr per night will be examined at 4 sites over 5 years. Following a 2-week baseline, participants will be randomized to one of two 12-week treatments: (1) a sleep restriction involving a fixed sleep-wake schedule, in which TIB is reduced 60 min below each participant's baseline TIB; (2) a control treatment involving no sleep restriction, but a fixed sleep schedule. Sleep will be assessed with actigraphy and a diary. Measures will include glucose tolerance, sleepiness, depressive symptoms, quality of life, cognitive performance, incidence of illness or accident, and inflammation. PMID:23811325
Average-cost based robust structural control
NASA Technical Reports Server (NTRS)
Hagood, Nesbitt W.
1993-01-01
A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.
Averaging of Backscatter Intensities in Compounds
Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.
2002-01-01
Low uncertainty measurements on pure element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass fraction averaging, based on the number of electrons or protons, termed "electron fraction," which predicts backscatter yield better than mass fraction averaging.
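The proposed electron-fraction rule weights each element's backscatter coefficient by its share of the compound's electrons, proportional to w_i·Z_i/A_i, instead of by the mass fraction w_i alone. A sketch of the two rules (the compound is PbS; the elemental backscatter coefficients are rough assumed values, for demonstration only):

```python
def mass_fraction_average(mass_fracs, etas):
    # Traditional rule: weight elemental backscatter coefficients by mass fraction.
    return sum(w * e for w, e in zip(mass_fracs, etas))

def electron_fraction_average(mass_fracs, Z, A, etas):
    # Alternative rule: weight by each element's share of the compound's
    # electrons, proportional to w_i * Z_i / A_i.
    shares = [w * z / a for w, z, a in zip(mass_fracs, Z, A)]
    total = sum(shares)
    return sum((s / total) * e for s, e in zip(shares, etas))

# PbS; eta values here are rough assumptions, not measured coefficients.
w = [0.866, 0.134]          # mass fractions of Pb, S
Z = [82, 16]                # atomic numbers
A = [207.2, 32.06]          # atomic weights
eta = [0.50, 0.16]          # assumed elemental backscatter coefficients

eta_mass = mass_fraction_average(w, eta)
eta_electron = electron_fraction_average(w, Z, A, eta)
# Sulfur holds a larger share of electrons than of mass (Z/A ~ 0.50 vs.
# Pb's ~ 0.40), so the electron-fraction average comes out slightly lower.
```

The divergence between the two averages grows with the Z/A contrast between the constituents, which is where the experimental test between the models is most sensitive.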
Neutron resonance averaging with filtered beams
Chrien, R.E.
1985-01-01
Neutron resonance averaging using filtered beams from a reactor source has proven to be an effective nuclear structure tool within certain limitations. These limitations are imposed by the nature of the averaging process, which produces fluctuations in radiative intensities. The fluctuations have been studied quantitatively. Resonance averaging also gives us information about initial or capture state parameters, in particular the photon strength function. Suitable modifications of the filtered beams are suggested for the enhancement of non-resonant processes.
Spatial limitations in averaging social cues.
Florey, Joseph; Clifford, Colin W G; Dakin, Steven; Mareschal, Isabelle
2016-01-01
The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers' ability to average social cues with their averaging of a non-social cue. Estimates of observers' internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging is examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589
Spectral and parametric averaging for integrable systems
NASA Astrophysics Data System (ADS)
Ma, Tao; Serota, R. A.
2015-05-01
We analyze two theoretical approaches to ensemble averaging for integrable systems in quantum chaos, spectral averaging (SA) and parametric averaging (PA). For SA, we introduce a new procedure, namely, rescaled spectral averaging (RSA). Unlike traditional SA, it can describe the correlation function of spectral staircase (CFSS) and produce persistent oscillations of the interval level number variance (IV). PA, while not as accurate as RSA for the CFSS and IV, can also produce persistent oscillations of the global level number variance (GV) and better describes saturation level rigidity as a function of the running energy. Overall, it is the most reliable method for a wide range of statistics.
Statistics of time averaged atmospheric scintillation
Stroud, P.
1994-02-01
A formulation has been constructed to recover the statistics of the moving average of the scintillation Strehl from a discrete set of measurements. A program of airborne atmospheric propagation measurements was analyzed to find the correlation function of the relative intensity over displaced propagation paths. The variance in continuous moving averages of the relative intensity was then found in terms of the correlation functions. An empirical formulation of the variance of the continuous moving average of the scintillation Strehl has been constructed. The resulting characterization of the variance of the finite time averaged Strehl ratios is being used to assess the performance of an airborne laser system.
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical applications.
Whatever Happened to the Average Student?
ERIC Educational Resources Information Center
Krause, Tom
2005-01-01
Mandated state testing, college entrance exams and their perceived need for higher and higher grade point averages have raised the anxiety levels felt by many of the average students. Too much focus is placed on state test scores and college entrance standards with not enough focus on the true level of the students. The author contends that…
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2014 CFR
2014-07-01
... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2012 CFR
2012-07-01
... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General provisions. In lieu of complying with the...
A note on generalized averaged Gaussian formulas
NASA Astrophysics Data System (ADS)
Spalevic, Miodrag
2007-11-01
We have recently proposed a very simple numerical method for constructing the averaged Gaussian quadrature formulas. These formulas exist in many more cases than the real positive Gauss-Kronrod formulas. In this note we try to answer whether the averaged Gaussian formulas are an adequate alternative to the corresponding Gauss-Kronrod quadrature formulas, to estimate the remainder term of a Gaussian rule.
Determinants of College Grade Point Averages
ERIC Educational Resources Information Center
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even continued enrollment at a university. However, GPAs are determined not only by student ability but also by…
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
Average Transmission Probability of a Random Stack
ERIC Educational Resources Information Center
Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg
2010-01-01
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
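The distinction the abstract draws, between averaging ln T and averaging T itself, can be illustrated with a toy Monte Carlo. This is not the paper's recurrence relation, and the per-slab transmission distribution is an arbitrary assumption:

```python
import math
import random

# Toy Monte Carlo (not the paper's recurrence relation): for a stack whose
# per-slab transmissions are independent random numbers, the direct average
# <T> exceeds exp(<ln T>), the quantity implied by averaging the logarithm.
# The uniform(0.5, 1.0) per-slab distribution is a hypothetical choice.
random.seed(0)

def stack_transmission(n_slabs):
    t = 1.0
    for _ in range(n_slabs):
        t *= random.uniform(0.5, 1.0)
    return t

samples = [stack_transmission(10) for _ in range(20000)]
avg_T = sum(samples) / len(samples)
geo_T = math.exp(sum(math.log(t) for t in samples) / len(samples))
# avg_T (~0.056) exceeds geo_T (~0.046): the two averages genuinely differ.
```

The gap between the two estimates is why the choice of which quantity to average, highlighted in the abstract, matters.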
New results on averaging theory and applications
NASA Astrophysics Data System (ADS)
Cândido, Murilo R.; Llibre, Jaume
2016-08-01
The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations to finding the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function at it is zero, the classical averaging theory provides no information about the periodic solution associated with a non-simple zero. Here we provide sufficient conditions under which the averaging theory can also be applied to non-simple zeros for studying their associated periodic solutions. Additionally, we give two applications of this new result, studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.
The Hubble rate in averaged cosmology
Umeh, Obinna; Larena, Julien; Clarkson, Chris
2011-03-01
The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H₀, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate and the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions.
Short-Term Auditory Memory of Above-Average and Below-Average Grade Three Readers.
ERIC Educational Resources Information Center
Caruk, Joan Marie
To determine if performance on short term auditory memory tasks is influenced by reading ability or sex differences, 62 third grade reading students (16 above average boys, 16 above average girls, 16 below average boys, and 14 below average girls) were administered four memory tests--memory for consonant names, memory for words, memory for…
Clarifying the Relationship between Average Excesses and Average Effects of Allele Substitutions.
Alvarez-Castro, José M; Yang, Rong-Cai
2012-01-01
Fisher's concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance. PMID:22509178
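Fisher's average excess at one biallelic locus can be sketched in a few lines. Under random mating (Hardy-Weinberg proportions) it coincides with the average effect; the allele frequency and genotypic values below are made-up illustrative numbers, not from the paper:

```python
# One-locus, two-allele sketch of Fisher's average excess under random mating
# (Hardy-Weinberg proportions), where it coincides with the average effect.
# Allele frequency and genotypic values are made-up illustrative numbers.
p = 0.3                                   # frequency of allele A1
q = 1.0 - p
G11, G12, G22 = 2.0, 1.5, 0.0             # genotypic values

pop_mean = p*p*G11 + 2*p*q*G12 + q*q*G22

# Average excess of an allele: mean genotypic value of individuals carrying
# a randomly drawn copy of that allele, minus the population mean.
excess_A1 = (p*G11 + q*G12) - pop_mean
excess_A2 = (p*G12 + q*G22) - pop_mean

# Frequency-weighted excesses sum to zero, mirroring orthogonal contrasts.
balance = p*excess_A1 + q*excess_A2       # ~0.0
```

The zero-sum constraint shown at the end is the one-locus analogue of the orthogonality that the paper's generalized contrasts are built on.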
Light propagation in the averaged universe
Bagheri, Samae; Schwarz, Dominik J.
2014-10-01
Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.
Physics of the spatially averaged snowmelt process
NASA Astrophysics Data System (ADS)
Horne, Federico E.; Kavvas, M. Levent
1997-04-01
It has been recognized that the snowmelt models developed in the past do not fully meet current prediction requirements. Part of the reason is that they do not account for the spatial variation in the dynamics of the spatially heterogeneous snowmelt process. Most of the current physics-based distributed snowmelt models utilize point-location-scale conservation equations which do not represent the spatially varying snowmelt dynamics over a grid area that surrounds a computational node. In this study, to account for the spatial heterogeneity of the snowmelt dynamics, areally averaged mass and energy conservation equations for the snowmelt process are developed. As a first step, energy and mass conservation equations that govern the snowmelt dynamics at a point location are averaged over the snowpack depth, resulting in depth averaged equations (DAE). In this averaging, it is assumed that the snowpack has two layers. Then, the point location DAE are averaged over the snowcover area. To develop the areally averaged equations of the snowmelt physics, we make the fundamental assumption that snowmelt process is spatially ergodic. The snow temperature and the snow density are considered as the stochastic variables. The areally averaged snowmelt equations are obtained in terms of their corresponding ensemble averages. Only the first two moments are considered. A numerical solution scheme (Runge-Kutta) is then applied to solve the resulting system of ordinary differential equations. This equation system is solved for the areal mean and areal variance of snow temperature and of snow density, for the areal mean of snowmelt, and for the areal covariance of snow temperature and snow density. The developed model is tested using Scott Valley (Siskiyou County, California) snowmelt and meteorological data. The performance of the model in simulating the observed areally averaged snowmelt is satisfactory.
Cosmic Inhomogeneities and Averaged Cosmological Dynamics
NASA Astrophysics Data System (ADS)
Paranjape, Aseem; Singh, T. P.
2008-10-01
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a “dark energy.” However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be “no.” Averaging effects negligibly influence the cosmological dynamics.
Average shape of transport-limited aggregates.
Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z
2005-08-12
We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry. PMID:16196793
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
Code of Federal Regulations, 2013 CFR
2013-07-01
... offset by positive credits from engine families below the applicable emission standard, as allowed under the provisions of this subpart. Averaging of credits in this manner is used to determine...
Orbit-averaged implicit particle codes
NASA Astrophysics Data System (ADS)
Cohen, B. I.; Freis, R. P.; Thomas, V.
1982-03-01
The merging of orbit-averaged particle code techniques with recently developed implicit methods to perform numerically stable and accurate particle simulations is reported. Implicitness and orbit averaging can extend the applicability of particle codes to the simulation of long time-scale plasma physics phenomena by relaxing time-step and statistical constraints. Difference equations for an electrostatic model are presented, and analyses of the numerical stability of each scheme are given. Simulation examples are presented for a one-dimensional electrostatic model. Schemes are constructed that are stable at large time step, require fewer particles, and, hence, reduce input-output and memory requirements. Orbit averaging, however, in the unmagnetized electrostatic models tested so far is not as successful as in cases where there is a magnetic field. Methods are suggested in which orbit averaging should achieve more significant improvements in code efficiency.
Code of Federal Regulations, 2010 CFR
2010-07-01
... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...
Code of Federal Regulations, 2011 CFR
2011-07-01
... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...
Code of Federal Regulations, 2013 CFR
2013-07-01
... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...
Code of Federal Regulations, 2012 CFR
2012-07-01
... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...
Code of Federal Regulations, 2014 CFR
2014-07-01
... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...
Total-pressure averaging in pulsating flows.
NASA Technical Reports Server (NTRS)
Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.
1972-01-01
A number of total-pressure tubes were tested in a nonsteady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered with the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles.
Stochastic Averaging of Duhem Hysteretic Systems
NASA Astrophysics Data System (ADS)
YING, Z. G.; ZHU, W. Q.; NI, Y. Q.; KO, J. M.
2002-06-01
The response of Duhem hysteretic system to externally and/or parametrically non-white random excitations is investigated by using the stochastic averaging method. A class of integrable Duhem hysteresis models covering many existing hysteresis models is identified and the potential energy and dissipated energy of Duhem hysteretic component are determined. The Duhem hysteretic system under random excitations is replaced equivalently by a non-hysteretic non-linear random system. The averaged Ito's stochastic differential equation for the total energy is derived and the Fokker-Planck-Kolmogorov equation associated with the averaged Ito's equation is solved to yield stationary probability density of total energy, from which the statistics of system response can be evaluated. It is observed that the numerical results obtained by using the stochastic averaging method are in good agreement with those from digital simulation.
Geologic analysis of averaged magnetic satellite anomalies
NASA Technical Reports Server (NTRS)
Goyal, H. K.; Vonfrese, R. R. B.; Ridgway, J. R.; Hinze, W. J.
1985-01-01
To investigate relative advantages and limitations for quantitative geologic analysis of magnetic satellite scalar anomalies derived from arithmetic averaging of orbital profiles within equal-angle or equal-area parallelograms, the anomaly averaging process was simulated by orbital profiles computed from spherical-earth crustal magnetic anomaly modeling experiments using Gauss-Legendre quadrature integration. The results indicate that averaging can provide reasonable values at satellite elevations, where contributing error factors within a given parallelogram include the elevation distribution of the data, and orbital noise and geomagnetic field attributes. Various inversion schemes including the use of equivalent point dipoles are also investigated as an alternative to arithmetic averaging. Although inversion can provide improved spherical grid anomaly estimates, these procedures are problematic in practice where computer scaling difficulties frequently arise due to a combination of factors including large source-to-observation distances ( 400 km), high geographic latitudes, and low geomagnetic field inclinations.
Spacetime Average Density (SAD) cosmological measures
Page, Don N.
2014-11-01
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the SAD of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.
Total pressure averaging in pulsating flows
NASA Technical Reports Server (NTRS)
Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.
1972-01-01
A number of total-pressure tubes were tested in a non-steady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered within the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure, and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles. The experiments were performed at a pressure level of 1 bar, for Mach number up to near 1, and frequencies up to 3 kHz.
Monthly average polar sea-ice concentration
Schweitzer, Peter N.
1995-01-01
The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.
Heuristic approach to capillary pressures averaging
Coca, B.P.
1980-10-01
Several methods are available for averaging capillary pressure curves. Among these are the J-curve method and regression equations of the wetting-fluid saturation in porosity and permeability (capillary pressure held constant). While the regression equations seem completely empirical, the J-curve method seems theoretically sound because it is based on a relation between the average capillary radius and the permeability-porosity ratio. An analysis is given of each of these methods.
Instrument to average 100 data sets
NASA Technical Reports Server (NTRS)
Tuma, G. B.; Birchenough, A. G.; Rice, W. J.
1977-01-01
An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points which are displayed on an oscilloscope screen to facilitate recording and is available in real time. Input can be any parameter which is expressed as a ±10-volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. This instrument has been used successfully on a 1975 Chevrolet V8 engine, and on a Continental 6-cylinder aircraft engine. While this instrument was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
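The point-by-point cycle averaging the instrument performs can be sketched in software. The noisy sine standing in for an engine parameter, and its noise level, are invented; only the 2048-point, 100-cycle scheme comes from the abstract:

```python
import math
import random

# Software sketch of the described curve averaging: each cycle is sampled at
# 2048 discrete points and 100 cycles are averaged point by point.
# The noisy sine standing in for an engine parameter is a made-up signal.
random.seed(1)
N_POINTS, N_CYCLES = 2048, 100

def one_cycle():
    return [math.sin(2.0 * math.pi * i / N_POINTS) + random.gauss(0.0, 0.3)
            for i in range(N_POINTS)]

avg_curve = [0.0] * N_POINTS
for _ in range(N_CYCLES):
    for i, v in enumerate(one_cycle()):
        avg_curve[i] += v / N_CYCLES

# Cycle-to-cycle noise is suppressed by ~1/sqrt(100) relative to one cycle.
rms_err = math.sqrt(sum((avg_curve[i] - math.sin(2.0 * math.pi * i / N_POINTS))**2
                        for i in range(N_POINTS)) / N_POINTS)
```

Averaging 100 cycles cuts the random cycle-to-cycle component by roughly a factor of ten, which is the point of the instrument.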
Average luminosity distance in inhomogeneous universes
Kostov, Valentin
2010-04-01
Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer rather than over all possible observers (cosmic averaging), and is thus more directly applicable to our observations. In contrast to previous studies, the averaging is exact, non-perturbative, and includes all non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. Voids aligned along a certain direction give rise to a distance modulus correction which increases with redshift and is caused by cumulative gravitational lensing. That correction is present even for small voids and depends on their density contrast, not on their radius. Averaging over all directions destroys the cumulative lensing correction even in a non-randomized simple cubic lattice of voids. At low redshifts, the average distance modulus correction does not vanish due to the peculiar velocities, despite the photon flux conservation argument. A formula for the maximal possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (a) have approximately constant densities in their interior and walls; and (b) are not in a deep nonlinear regime. The average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximal one after a single void diameter. That is traced to cancellations between the corrections from the fronts and backs of different voids. The results obtained
Explicit cosmological coarse graining via spatial averaging
NASA Astrophysics Data System (ADS)
Paranjape, Aseem; Singh, T. P.
2008-01-01
The present matter density of the Universe, while highly inhomogeneous on small scales, displays approximate homogeneity on large scales. We propose that whereas it is justified to use the Friedmann-Lemaître-Robertson-Walker (FLRW) line element (which describes an exactly homogeneous and isotropic universe) as a template to construct luminosity distances in order to compare observations with theory, the evolution of the scale factor in such a construction must be governed not by the standard Einstein equations for the FLRW metric, but by the modified Friedmann equations derived by Buchert (Gen Relat Gravit 32:105, 2000; 33:1381, 2001) in the context of spatial averaging in Cosmology. Furthermore, we argue that this scale factor, defined in the spatially averaged cosmology, will correspond to the effective FLRW metric provided the size of the averaging domain coincides with the scale at which cosmological homogeneity arises. This allows us, in principle, to compare predictions of a spatially averaged cosmology with observations, in the standard manner, for instance by computing the luminosity distance versus redshift relation. The predictions of the spatially averaged cosmology would in general differ from standard FLRW cosmology, because the scale factor now obeys the modified FLRW equations. This could help determine, by comparing with observations, whether or not cosmological inhomogeneities are an alternative explanation for the observed cosmic acceleration.
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20(th) century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
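The two ingredients of this correlation, the economic misery index and its trailing decade-long moving average, are simple to compute; a minimal sketch with hypothetical annual rates (not the paper's data, and a 3-year window rather than the 11-year one reported):

```python
def misery_index(inflation, unemployment):
    """Economic misery index: annual inflation rate plus unemployment rate."""
    return [i + u for i, u in zip(inflation, unemployment)]

def trailing_average(series, window):
    """Moving average over the preceding `window` values, current year inclusive."""
    out = []
    for t in range(len(series)):
        chunk = series[max(0, t - window + 1):t + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical annual rates in percent (illustrative only)
inflation = [2.0, 3.5, 4.1, 1.2, 0.8, 2.9]
unemployment = [5.0, 6.2, 7.5, 6.8, 5.5, 4.9]
misery = misery_index(inflation, unemployment)
smoothed = trailing_average(misery, window=3)
```

The paper's analysis then correlates a literary misery index against `smoothed` shifted by the lag that maximizes goodness of fit.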
High Average Power Yb:YAG Laser
Zapata, L E; Beach, R J; Payne, S A
2001-05-23
We are working on a composite thin-disk laser design that can be scaled as a source of high brightness laser power for tactical engagement and other high average power applications. The key component is a diffusion-bonded composite comprising a thin gain-medium and thicker cladding that is strikingly robust and resolves prior difficulties with high average power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high power rods or slabs, the one-dimensional nature of the cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress this last year included: extraction with high beam quality using a telescopic resonator, a heterogeneous thin film coating prescription that meets the unusual requirements demanded by this laser architecture, thermal management with our first generation cooler. Progress was also made in design of a second-generation laser.
Attractors and Time Averages for Random Maps
NASA Astrophysics Data System (ADS)
Araujo, Vitor
2006-07-01
Considering random noise in finite dimensional parameterized families of diffeomorphisms of a compact finite dimensional boundaryless manifold M, we show the existence of time averages for almost every orbit of each point of M, imposing mild conditions on the families. Moreover these averages are given by a finite number of physical absolutely continuous stationary probability measures. We use this result to deduce that situations with infinitely many sinks and Henon-like attractors are not stable under random perturbations, e.g., Newhouse's and Colli's phenomena in the generic unfolding of a quadratic homoclinic tangency by a one-parameter family of diffeomorphisms.
An improved moving average technical trading rule
NASA Astrophysics Data System (ADS)
Papailias, Fotis; Thomakos, Dimitrios D.
2015-06-01
This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
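A minimal 'long only' sketch of such a rule, combining a moving-average cross-over 'buy' signal with a trailing stop (the price series and the fixed-percentage stop used here are illustrative assumptions, not the authors' exact dynamic threshold):

```python
def moving_average(prices, n):
    """Simple trailing moving average over up to n periods."""
    return [sum(prices[max(0, t - n + 1):t + 1]) / len(prices[max(0, t - n + 1):t + 1])
            for t in range(len(prices))]

def crossover_with_trailing_stop(prices, n=3, stop_frac=0.05):
    """Buy when price crosses above its n-period MA; sell when price falls
    stop_frac below the highest price seen since entry (trailing stop)."""
    ma = moving_average(prices, n)
    in_market, high_water, trades = False, 0.0, []
    for t in range(1, len(prices)):
        if not in_market and prices[t] > ma[t] and prices[t - 1] <= ma[t - 1]:
            in_market, high_water = True, prices[t]
            trades.append(('buy', t))
        elif in_market:
            high_water = max(high_water, prices[t])
            if prices[t] < high_water * (1 - stop_frac):
                in_market = False
                trades.append(('sell', t))
    return trades

# Hypothetical price path: decline, cross-over buy, run-up, stop-out
prices = [10, 9, 8, 9, 10, 11, 12, 11, 11.5, 10]
trades = crossover_with_trailing_stop(prices)
```

The dynamic threshold is what differentiates the modified strategy from a plain cross-over exit: it locks in gains during the run-up instead of waiting for the price to fall back through the moving average.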
The modulated average structure of mullite.
Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X
2015-06-01
Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in case of 3:2 mullite. In any of these, the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3, a weak average modulation results with slightly varying average occupation factors for the tetrahedral units. As a result, the real
Polarized electron beams at milliampere average current
Poelker, Matthew
2013-11-01
This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~ 200 uA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltage > 350 kV.
Average: the juxtaposition of procedure and context
NASA Astrophysics Data System (ADS)
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
Mean Element Propagations Using Numerical Averaging
NASA Technical Reports Server (NTRS)
Ely, Todd A.
2009-01-01
The long-term evolution characteristics (and stability) of an orbit are best characterized using a mean element propagation of the perturbed two body variational equations of motion. The averaging process eliminates short period terms leaving only secular and long period effects. In this study, a non-traditional approach is taken that averages the variational equations using adaptive numerical techniques and then numerically integrating the resulting EOMs. Doing this avoids the Fourier series expansions and truncations required by the traditional analytic methods. The resultant numerical techniques can be easily adapted to propagations at most solar system bodies.
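The central operation, averaging a perturbation over the fast angle by numerical quadrature instead of a truncated Fourier expansion, can be sketched as follows (fixed-step quadrature shown for simplicity; the paper uses adaptive techniques):

```python
import math

def orbit_average(f, n=360):
    """Numerically average f over one full cycle of the fast angle
    (e.g., mean anomaly) using uniform quadrature. An adaptive scheme,
    as in the paper, would refine the node placement; this sketch does not."""
    return sum(f(2.0 * math.pi * k / n) for k in range(n)) / n
```

Applied to each component of the variational equations, this yields averaged equations of motion free of short-period terms, which can then be integrated numerically with large time steps.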
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the new FEL. Manufacturers must test the motorcycles according to 40 CFR part 1051, subpart D...) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 Averaging provisions. (a) This section describes...
A Functional Measurement Study on Averaging Numerosity
ERIC Educational Resources Information Center
Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio
2014-01-01
In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…
Cryo-Electron Tomography and Subtomogram Averaging.
Wan, W; Briggs, J A G
2016-01-01
Cryo-electron tomography (cryo-ET) allows 3D volumes to be reconstructed from a set of 2D projection images of a tilted biological sample. It allows densities to be resolved in 3D that would otherwise overlap in 2D projection images. Cryo-ET can be applied to resolve structural features in complex native environments, such as within the cell. Analogous to single-particle reconstruction in cryo-electron microscopy, structures present in multiple copies within tomograms can be extracted, aligned, and averaged, thus increasing the signal-to-noise ratio and resolution. This reconstruction approach, termed subtomogram averaging, can be used to determine protein structures in situ. It can also be applied to facilitate more conventional 2D image analysis approaches. In this chapter, we provide an introduction to cryo-ET and subtomogram averaging. We describe the overall workflow, including tomographic data collection, preprocessing, tomogram reconstruction, subtomogram alignment and averaging, classification, and postprocessing. We consider theoretical issues and practical considerations for each step in the workflow, along with descriptions of recent methodological advances and remaining limitations. PMID:27572733
Initial Conditions in the Averaging Cognitive Model
ERIC Educational Resources Information Center
Noventa, S.; Massidda, D.; Vidotto, G.
2010-01-01
The initial state parameters s[subscript 0] and w[subscript 0] are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…
Averaging on Earth-Crossing Orbits
NASA Astrophysics Data System (ADS)
Gronchi, G. F.; Milani, A.
The orbits of planet-crossing asteroids (and comets) can undergo close approaches and collisions with some major planet. This introduces a singularity in the N-body Hamiltonian, and the averaging of the equations of motion, traditionally used to compute secular perturbations, is undefined. We show that it is possible to define in a rigorous way some generalised averaged equations of motion, in such a way that the generalised solutions are unique and piecewise smooth. This is obtained, both in the planar and in the three-dimensional case, by means of the method of extraction of the singularities by Kantorovich. The modified distance used to approximate the singularity is the one used by Wetherill in his method to compute probability of collision. Some examples of averaged dynamics have been computed; a systematic exploration of the averaged phase space to locate the secular resonances should be the next step. 'Alice sighed wearily. "I think you might do something better with the time," she said, "than waste it asking riddles with no answers"' (Alice in Wonderland, L. Carroll)
Averaging models for linear piezostructural systems
NASA Astrophysics Data System (ADS)
Kim, W.; Kurdila, A. J.; Stepanyan, V.; Inman, D. J.; Vignola, J.
2009-03-01
In this paper, we consider a linear piezoelectric structure which employs a fast-switched, capacitively shunted subsystem to yield a tunable vibration absorber or energy harvester. The dynamics of the system is modeled as a hybrid system, where the switching law is considered as a control input and the ambient vibration is regarded as an external disturbance. It is shown that under mild assumptions of existence and uniqueness of the solution of this hybrid system, averaging theory can be applied, provided that the original system dynamics is periodic. The resulting averaged system is controlled by the duty cycle of a driven pulse-width modulated signal. The response of the averaged system approximates the performance of the original fast-switched linear piezoelectric system. It is analytically shown that the averaging approximation can be used to predict the electromechanically coupled system modal response as a function of the duty cycle of the input switching signal. This prediction is experimentally validated for the system consisting of a piezoelectric bimorph connected to an electromagnetic exciter. Experimental results show that the analytical predictions are observed in practice over a fixed "effective range" of switching frequencies. The same experiments show that the response of the switched system is insensitive to an increase in switching frequency above the effective frequency range.
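The duty-cycle-weighted averaging described above is, in its simplest form, classical state-space averaging for fast-switched systems; a minimal sketch (the matrix names and values are illustrative, not the paper's notation or its hybrid-system derivation):

```python
def averaged_matrix(A_on, A_off, duty):
    """Classical state-space averaging: weight the system matrix of each
    switched configuration by the fraction of the switching period spent
    in it (duty in the 'on' state, 1 - duty in the 'off' state)."""
    return [[duty * a + (1.0 - duty) * b for a, b in zip(row_on, row_off)]
            for row_on, row_off in zip(A_on, A_off)]

# Illustrative 2x2 state matrices for the shunted and open configurations
A_on = [[0.0, 1.0], [-4.0, -0.1]]
A_off = [[0.0, 1.0], [-1.0, -0.1]]
A_avg = averaged_matrix(A_on, A_off, duty=0.25)
```

Sweeping `duty` between 0 and 1 interpolates the averaged stiffness term between the two switched configurations, which is the mechanism behind the tunable absorber response described in the abstract.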
A Measure of the Average Intercorrelation
ERIC Educational Resources Information Center
Meyer, Edward P.
1975-01-01
Bounds are obtained for a coefficient proposed by Kaiser as a measure of average correlation and the coefficient is given an interpretation in the context of reliability theory. It is suggested that the root-mean-square intercorrelation may be a more appropriate measure of degree of relationships among a group of variables. (Author)
HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.
BEN-ZVI, ILAN; DAYRAN, D.; LITVINENKO, V.
2005-08-21
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.
Measuring Time-Averaged Blood Pressure
NASA Technical Reports Server (NTRS)
Rothman, Neil S.
1988-01-01
Device measures time-averaged component of absolute blood pressure in artery. Includes compliant cuff around artery and external monitoring unit. Ceramic construction in monitoring unit suppresses ebb and flow of pressure-transmitting fluid in sensor chamber. Transducer measures only static component of blood pressure.
Reformulation of Ensemble Averages via Coordinate Mapping.
Schultz, Andrew J; Moustafa, Sabry G; Lin, Weisong; Weinstein, Steven J; Kofke, David A
2016-04-12
A general framework is established for reformulation of the ensemble averages commonly encountered in statistical mechanics. This "mapped-averaging" scheme allows approximate theoretical results that have been derived from statistical mechanics to be reintroduced into the underlying formalism, yielding new ensemble averages that represent exactly the error in the theory. The result represents a distinct alternative to perturbation theory for methodically employing tractable systems as a starting point for describing complex systems. Molecular simulation is shown to provide one appealing route to exploit this advance. Calculation of the reformulated averages by molecular simulation can proceed without contamination by noise produced by behavior that has already been captured by the approximate theory. Consequently, accurate and precise values of properties can be obtained while using less computational effort, in favorable cases, many orders of magnitude less. The treatment is demonstrated using three examples: (1) calculation of the heat capacity of an embedded-atom model of iron, (2) calculation of the dielectric constant of the Stockmayer model of dipolar molecules, and (3) calculation of the pressure of a Lennard-Jones fluid. It is observed that improvement in computational efficiency is related to the appropriateness of the underlying theory for the condition being simulated; the accuracy of the result is however not impacted by this. The framework opens many avenues for further development, both as a means to improve simulation methodology and as a new basis to develop theories for thermophysical properties. PMID:26950263
Bayesian Model Averaging for Propensity Score Analysis
ERIC Educational Resources Information Center
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... the new FEL. Manufacturers must test the motorcycles according to 40 CFR part 1051, subpart D...) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 Averaging provisions. (a) This section describes...
Average configuration of the induced venus magnetotail
McComas, D.J.; Spence, H.E.; Russell, C.T.
1985-01-01
In this paper we discuss the interaction of the solar wind flow with Venus and describe the morphology of magnetic field line draping in the Venus magnetotail. In particular, we describe the importance of the interplanetary magnetic field (IMF) X-component in controlling the configuration of field draping in this induced magnetotail, and using the results of a recently developed technique, we examine the average magnetic configuration of this magnetotail. The derived J × B forces must balance the average, steady state acceleration of, and pressure gradients in, the tail plasma. From this relation the average tail plasma velocity, lobe and current sheet densities, and average ion temperature have been derived. In this study we extend these results by making a connection between the derived consistent plasma flow speed and density, and the observational energy/charge range and sensitivity of the Pioneer Venus Orbiter (PVO) plasma analyzer, and demonstrate that if the tail is principally composed of O+, the bulk of the plasma should not be observable much of the time that the PVO is within the tail. Finally, we examine the importance of solar wind slowing upstream of the obstacle and its implications for the temperature of pick-up planetary ions, compare the derived ion temperatures with their theoretical maximum values, and discuss the implications of this process for comets and AMPTE-type releases.
Glenzinski, D. (Fermilab)
2008-01-01
This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.
Why Johnny Can Be Average Today.
ERIC Educational Resources Information Center
Sturrock, Alan
1997-01-01
During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…
Orbit Averaging in Perturbed Planetary Rings
NASA Astrophysics Data System (ADS)
Stewart, Glen R.
2015-11-01
The orbital period is typically much shorter than the time scale for dynamical evolution of large-scale structures in planetary rings. This large separation in time scales motivates the derivation of reduced models by averaging the equations of motion over the local orbit period (Borderies et al. 1985, Shu et al. 1985). A more systematic procedure for carrying out the orbit averaging is to use Lie transform perturbation theory to remove the dependence on the fast angle variable from the problem order-by-order in epsilon, where the small parameter epsilon is proportional to the fractional radial distance from exact resonance. This powerful technique has been developed and refined over the past thirty years in the context of gyrokinetic theory in plasma physics (Brizard and Hahm, Rev. Mod. Phys. 79, 2007). When the Lie transform method is applied to resonantly forced rings near a mean motion resonance with a satellite, the resulting orbit-averaged equations contain the nonlinear terms found previously, but also contain additional nonlinear self-gravity terms of the same order that were missed by Borderies et al. and by Shu et al. The additional terms result from the fact that the self-consistent gravitational potential of the perturbed rings modifies the orbit-averaging transformation at nonlinear order. These additional terms are the gravitational analog of electrostatic ponderomotive forces caused by large amplitude waves in plasma physics. The revised orbit-averaged equations are shown to modify the behavior of nonlinear density waves in planetary rings compared to the previously published theory. This research was supported by NASA's Outer Planets Research program.
Lidar uncertainty and beam averaging correction
NASA Astrophysics Data System (ADS)
Giyanani, A.; Bierbooms, W.; van Bussel, G.
2015-05-01
Remote sensing of the atmospheric variables with the use of Lidar is a relatively new technology field for wind resource assessment in wind energy. A review of the draft version of an international guideline (CD IEC 61400-12-1 Ed.2) used for wind energy purposes is performed and some extra atmospheric variables are taken into account for proper representation of the site. A measurement campaign with two Leosphere vertical scanning WindCube Lidars and metmast measurements is used for comparison of the uncertainty in wind speed measurements using the CD IEC 61400-12-1 Ed.2. The comparison revealed higher but realistic uncertainties. A simple model for Lidar beam averaging correction is demonstrated for understanding deviation in the measurements. It can be further applied for beam averaging uncertainty calculations in flat and complex terrain.
Rigid shape matching by segmentation averaging.
Wang, Hongzhi; Oliensis, John
2010-04-01
We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking. PMID:20224119
Apparent and average accelerations of the Universe
Bolejko, Krzysztof; Andersson, Lars
2008-10-15
In this paper we consider the relation between the volume deceleration parameter obtained within the Buchert averaging scheme and the deceleration parameter derived from supernova observation. This work was motivated by recent findings that showed that there are models which despite having Λ = 0 have volume deceleration parameter q^vol < 0. This opens the possibility that back-reaction and averaging effects may be used as an interesting alternative explanation to the dark energy phenomenon. We have calculated q^vol in some Lemaître-Tolman models. For those models which are chosen to be realistic and which fit the supernova data, we find that q^vol > 0, while those models which we have been able to find which exhibit q^vol < 0 turn out to be unrealistic. This indicates that care must be exercised in relating the deceleration parameter to observations.
Emissions averaging top option for HON compliance
Kapoor, S.
1993-05-01
In one of its first major rule-setting directives under the CAA Amendments, EPA recently proposed tough new emissions controls for nearly two-thirds of the commercial chemical substances produced by the synthetic organic chemical manufacturing industry (SOCMI). However, the Hazardous Organic National Emission Standards for Hazardous Air Pollutants (HON) also affects several non-SOCMI processes. The author discusses proposed compliance deadlines, emissions averaging, and basic operating and administrative requirements.
The Average Velocity in a Queue
ERIC Educational Resources Information Center
Frette, Vidar
2009-01-01
A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…
Stochastic Games with Average Payoff Criterion
Ghosh, M. K.; Bagchi, A.
1998-11-15
We study two-person stochastic games on a Polish state and compact action spaces and with average payoff criterion under a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of Nash equilibrium in stationary strategies is established under certain separability conditions.
Average Annual Rainfall over the Globe
ERIC Educational Resources Information Center
Agrawal, D. C.
2013-01-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…
Representation of average drop sizes in sprays
NASA Astrophysics Data System (ADS)
Dodge, Lee G.
1987-06-01
Procedures are presented for processing drop-size measurements to obtain average drop sizes that represent overall spray characteristics. These procedures are not currently in general use, but they would represent an improvement over current practice. Clear distinctions are made between processing data for spatial- and temporal-type measurements. The conversion between spatial and temporal measurements is discussed. The application of these procedures is demonstrated by processing measurements of the same spray by two different types of instruments.
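The representative average drop sizes referred to here are conventionally the generalized mean diameters; for a discrete sample with n_i drops counted in size class D_i they take the standard form (a textbook definition, not quoted from the paper):

```latex
D_{pq} = \left( \frac{\sum_i n_i D_i^{\,p}}{\sum_i n_i D_i^{\,q}} \right)^{1/(p-q)}
```

Here D_{10} is the arithmetic mean diameter and D_{32} is the Sauter mean diameter, which weights drops by their volume-to-surface-area ratio and is the usual choice for characterizing spray combustion and evaporation. The spatial-versus-temporal distinction in the abstract matters because the two sampling types weight each size class differently (temporal counts are flux-weighted), so the same spray yields different n_i and hence different D_{pq}.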
Modern average global sea-surface temperature
Schweitzer, Peter N.
1993-01-01
The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
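The month-by-month averaging across years described above (all Januaries averaged together, and so on), with invalid grid cells excluded so that gaps are reduced, can be sketched as follows; the tiny grids and values here are hypothetical, not the AVHRR data:

```python
def monthly_climatology(images, missing=None):
    """Average same-month images across years, cell by cell, skipping
    cells flagged as missing; a cell with no valid data stays missing."""
    by_month = {}
    for (year, month), grid in images.items():
        by_month.setdefault(month, []).append(grid)
    clim = {}
    for month, grids in by_month.items():
        rows, cols = len(grids[0]), len(grids[0][0])
        out = []
        for r in range(rows):
            row = []
            for c in range(cols):
                vals = [g[r][c] for g in grids if g[r][c] is not missing]
                row.append(sum(vals) / len(vals) if vals else missing)
            out.append(row)
        clim[month] = out
    return clim

# Hypothetical 1x2 SST grids (deg C) for January of two years; None = no valid data
images = {(1982, 1): [[20.0, None]], (1983, 1): [[22.0, 18.0]]}
clim = monthly_climatology(images)
```

Note how the second cell, missing in 1982, is still filled from the 1983 image: this is the gap-reduction effect of averaging across years mentioned in the abstract, and the same averaging also suppresses interannual variability.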
Digital Averaging Phasemeter for Heterodyne Interferometry
NASA Technical Reports Server (NTRS)
Johnson, Donald; Spero, Robert; Shaklan, Stuart; Halverson, Peter; Kuhnert, Andreas
2004-01-01
A digital averaging phasemeter has been built for measuring the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. This phasemeter performs well enough to enable interferometric measurements of distance with accuracy of the order of 100 pm and with the ability to track distance as it changes at a speed of as much as 50 cm/s. This phasemeter is unique in that it is a single, integral system capable of performing three major functions that, heretofore, have been performed by separate systems: (1) measurement of the fractional-cycle phase difference, (2) counting of multiple cycles of phase change, and (3) averaging of phase measurements over multiple cycles for improved resolution. This phasemeter also offers the advantage of making repeated measurements at a high rate: the phase is measured on every heterodyne cycle. Thus, for example, in measuring the relative phase of two signals having a heterodyne frequency of 10 kHz, the phasemeter would accumulate 10,000 measurements per second. At this high measurement rate, an accurate average phase determination can be made more quickly than is possible at a lower rate.
Disk-averaged synthetic spectra of Mars
NASA Technical Reports Server (NTRS)
Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and the European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise ratio. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor Thermal Emission Spectrometer and the Mariner 9 Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise ratio, and integration time for both TPF-C and TPF-I/Darwin.
Disk-averaged synthetic spectra of Mars.
Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-08-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and the European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise ratio. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor Thermal Emission Spectrometer and the Mariner 9 Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise ratio, and integration time for both TPF-C and TPF-I/Darwin. PMID:16078866
Viewpoint: observations on scaled average bioequivalence.
Patterson, Scott D; Jones, Byron
2012-01-01
The two one-sided test procedure (TOST) has been used for average bioequivalence testing since 1992 and is required when marketing new formulations of an approved drug. TOST is known to require comparatively large numbers of subjects to demonstrate bioequivalence for highly variable drugs, defined as those drugs having intra-subject coefficients of variation greater than 30%. However, TOST has been shown to protect public health when multiple generic formulations enter the marketplace following patent expiration. Recently, scaled average bioequivalence (SABE) has been proposed as an alternative statistical analysis procedure for such products by multiple regulatory agencies. SABE testing requires that a three-period partial replicate cross-over or full replicate cross-over design be used. Following a brief summary of SABE analysis methods applied to existing data, we will consider three statistical ramifications of the proposed additional decision rules and the potential impact of implementation of scaled average bioequivalence in the marketplace using simulation. It is found that a constraint being applied is biased, that bias may also result from the common problem of missing data and that the SABE methods allow for much greater changes in exposure when generic-generic switching occurs in the marketplace. PMID:22162308
Improving Reading Abilities of Average and Below Average Readers through Peer Tutoring.
ERIC Educational Resources Information Center
Galezio, Marne; And Others
A program was designed to improve the progress of average and below average readers in a first-grade, a second-grade, and a sixth-grade classroom in a multicultural, socioeconomically diverse district located in a three-county area northwest of Chicago, Illinois. Classroom teachers noted that students were having difficulty making adequate progress in…
Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.
ERIC Educational Resources Information Center
Dirks, Jean; And Others
1983-01-01
Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)
A Green's function quantum average atom model
Starrett, Charles Edward
2015-05-21
A quantum average atom model is reformulated using Green's functions. This allows integrals along the real energy axis to be deformed into the complex plane. The advantage is that sharp features such as resonances and bound states are broadened by a Lorentzian with a half-width chosen for numerical convenience. An implementation of this method therefore avoids numerically challenging resonance tracking and the search for weakly bound states, without changing the physical content or results of the model. A straightforward implementation results in up to a factor of 5 speed-up relative to an optimized orbital based code.
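As a schematic illustration of why the contour deformation helps (not taken from the paper): evaluating the Green's function a distance δ above the real axis turns a sharp level at ε₀ into a Lorentzian of half-width δ, so no resonance tracking is needed:

```latex
-\frac{1}{\pi}\,\operatorname{Im} G(\varepsilon + i\delta)
   = \frac{1}{\pi}\,\frac{\delta}{(\varepsilon - \varepsilon_0)^2 + \delta^2}
   \qquad \text{for } G(z) = \frac{1}{z - \varepsilon_0}.
```

The half-width δ is a numerical convenience parameter, exactly as described in the abstract.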
Average shape of fluctuations for subdiffusive walks
NASA Astrophysics Data System (ADS)
Yuste, S. B.; Acedo, L.
2004-03-01
We study the average shape of fluctuations for subdiffusive processes, i.e., processes with uncorrelated increments but where the waiting time distribution has a broad power-law tail. This shape is obtained analytically by means of a fractional diffusion approach. We find that, in contrast with processes where the waiting time between increments has finite variance, the fluctuation shape is no longer a semicircle: it tends to adopt a tablelike form as the subdiffusive character of the process increases. The theoretical predictions are compared with numerical simulation results.
The averaging method in applied problems
NASA Astrophysics Data System (ADS)
Grebenikov, E. A.
1986-04-01
The book presents the body of methods, known in the literature as the "averaging method", for studying complicated nonlinear oscillating systems. The author describes the constructive part of the method, that is, concrete forms and corresponding algorithms, using mathematical models that are sufficiently general yet built on concrete problems. The book is written so that a reader interested in the techniques and algorithms of the asymptotic theory of ordinary differential equations can solve such problems independently. For specialists in applied mathematics and mechanics.
Auto-exploratory average reward reinforcement learning
Ok, DoKyeong; Tadepalli, P.
1996-12-31
We introduce a model-based average reward Reinforcement Learning method called H-learning and compare it with its discounted counterpart, Adaptive Real-Time Dynamic Programming, in a simulated robot scheduling task. We also introduce an extension to H-learning, which automatically explores the unexplored parts of the state space, while always choosing greedy actions with respect to the current value function. We show that this "Auto-exploratory H-learning" performs better than the original H-learning under previously studied exploration methods such as random, recency-based, or counter-based exploration.
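H-learning itself is model-based (it learns transition and reward models), which is beyond a short sketch. As a minimal illustration of the average-reward idea it builds on, here is a tabular update in the style of R-learning, a related model-free scheme, on a toy two-state cycle; all names and values are illustrative, not from the paper.

```python
def average_reward_demo(steps=20000, alpha=0.1, beta=0.05):
    """Tabular average-reward updates on a deterministic two-state cycle.

    The agent alternates between states 0 and 1 and receives rewards
    0 and 2, so the true average reward (gain) is 1. h holds relative
    (bias) values; rho tracks the average-reward estimate.
    """
    reward = [0.0, 2.0]   # reward received on leaving state 0 / state 1
    h = [0.0, 0.0]        # relative state values
    rho = 0.0             # average-reward estimate
    s = 0
    for _ in range(steps):
        s_next = 1 - s                              # deterministic cycle
        td = reward[s] - rho + h[s_next] - h[s]     # average-reward TD error
        h[s] += alpha * td                          # value update
        rho += beta * td                            # gain update
        s = s_next
    return rho, h
```

At the fixed point the TD error vanishes in both states, which forces rho = 1 and h[1] - h[0] = 1; unlike discounted methods, no discount factor appears anywhere.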
Average observational quantities in the timescape cosmology
Wiltshire, David L.
2009-12-15
We examine the properties of a recently proposed observationally viable alternative to homogeneous cosmology with smooth dark energy, the timescape cosmology. In the timescape model cosmic acceleration is realized as an apparent effect related to the calibration of clocks and rods of observers in bound systems relative to volume-average observers in an inhomogeneous geometry in ordinary general relativity. The model is based on an exact solution to a Buchert average of the Einstein equations with backreaction. The present paper examines a number of observational tests which will enable the timescape model to be distinguished from homogeneous cosmologies with a cosmological constant or other smooth dark energy, in current and future generations of dark energy experiments. Predictions are presented for comoving distance measures; H(z); the equivalent of the dark energy equation of state, w(z); the Om(z) measure of Sahni, Shafieloo, and Starobinsky; the Alcock-Paczynski test; the baryon acoustic oscillation measure, D_V; the inhomogeneity test of Clarkson, Bassett, and Lu; and the time drift of cosmological redshifts. Where possible, the predictions are compared to recent independent studies of similar measures in homogeneous cosmologies with dark energy. Three separate tests with indications of results in possible tension with the ΛCDM model are found to be consistent with the expectations of the timescape cosmology.
MACHINE PROTECTION FOR HIGH AVERAGE CURRENT LINACS
Jordan, Kevin; Allison, Trent; Evans, Richard; Coleman, James; Grippo, Albert
2003-05-01
A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths is programmed into gate arrays; the output of the matrix is an allowable maximum average power limit. This power limit is enforced by the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron path) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.
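The 8 × 8 mode matrix described above can be sketched as a simple lookup table. The entries and the fill rule below are purely illustrative placeholders (the actual Jefferson Lab limits are not given in this abstract); only the structure, a machine-mode × beam-mode matrix whose entries are power limits, with unsafe combinations inhibiting the beam, reflects the text.

```python
MACHINE_MODES = 8   # electron beam paths
BEAM_MODES = 8      # average power limit classes

# Hypothetical limit matrix in watts; 0 means the combination is
# unsafe and the beam is inhibited. Illustrative rule: higher beam
# modes allow more power, but only on machine modes that support them.
power_limit = [[0] * BEAM_MODES for _ in range(MACHINE_MODES)]
for mm in range(MACHINE_MODES):
    for bm in range(BEAM_MODES):
        if mm >= bm:                      # placeholder commissioning rule
            power_limit[mm][bm] = 10 ** bm  # 1 W ... 10 MW, placeholder scale

def allowed_power(machine_mode, beam_mode):
    """Return the enforced average power limit in watts (0 = beam inhibited)."""
    if not (0 <= machine_mode < MACHINE_MODES and 0 <= beam_mode < BEAM_MODES):
        return 0  # out-of-range requests inhibit the beam
    return power_limit[machine_mode][beam_mode]
```

In the real system this lookup lives in gate arrays and its output throttles the photocathode drive laser; the fail-safe convention (anything outside the matrix yields zero power) mirrors "combinations outside of this matrix are unsafe and the beam is inhibited."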
Climatology of globally averaged thermospheric mass density
NASA Astrophysics Data System (ADS)
Emmert, J. T.; Picone, J. M.
2010-09-01
We present a climatological analysis of daily globally averaged density data, derived from orbit data and covering the years 1967-2007, along with an empirical Global Average Mass Density Model (GAMDM) that encapsulates the 1986-2007 data. The model represents density as a function of the F10.7 solar radio flux index, the day of year, and the Kp geomagnetic activity index. We discuss in detail the dependence of the data on each of the input variables, and demonstrate that all of the terms in the model represent consistent variations in both the 1986-2007 data (on which the model is based) and the independent 1967-1985 data. We also analyze the uncertainty in the results, and quantify how the variance in the data is apportioned among the model terms. We investigate the annual and semiannual variations of the data and quantify the amplitude, height dependence, solar cycle dependence, and interannual variability of these oscillatory modes. The auxiliary material includes Fortran 90 code for evaluating GAMDM.
Global atmospheric circulation statistics: Four year averages
NASA Technical Reports Server (NTRS)
Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.
1987-01-01
Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.
Average Gait Differential Image Based Human Recognition
Chen, Jinyan; Liu, Jiansheng
2014-01-01
The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and the static information of walking. Compared to the gait energy image (GEI), AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that AGDI has better identification and verification performance than GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition. PMID:24895648
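The core accumulation step can be sketched as follows; this is a minimal reading of the AGDI idea (mean absolute silhouette difference between adjacent frames) and omits the alignment and normalization preprocessing a real gait pipeline needs.

```python
import numpy as np

def average_gait_differential_image(frames):
    """Accumulate absolute silhouette differences between adjacent frames.

    frames: array-like of shape (T, H, W), binary silhouettes (0/1).
    Returns the mean absolute frame-to-frame difference, shape (H, W):
    pixels that change often during walking get high values (kinetic
    information), stable pixels stay near zero (static information).
    """
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))   # (T-1, H, W) adjacent differences
    return diffs.mean(axis=0)
```

The resulting feature image would then be fed to 2DPCA for dimensionality reduction, as the abstract describes.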
Quetelet, the average man and medical knowledge.
Caponi, Sandra
2013-01-01
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine. PMID:23970171
Average power laser experiment (APLE) design
NASA Astrophysics Data System (ADS)
Parazzoli, C. G.; Rodenburg, R. E.; Dowell, D. H.; Greegor, R. B.; Kennedy, R. C.; Romero, J. B.; Siciliano, J. A.; Tong, K.-O.; Vetter, A. M.; Adamski, J. L.; Pistoresi, D. J.; Shoffstall, D. R.; Quimby, D. C.
1992-07-01
We describe the details and the design requirements for the 100 kW CW radio frequency free electron laser at 10 μm to be built at Boeing Aerospace and Electronics Division in Seattle with the collaboration of Los Alamos National Laboratory. APLE is a single-accelerator master-oscillator and power-amplifier (SAMOPA) device. The goal of this experiment is to demonstrate a fully operational RF-FEL at 10 μm with an average power of 100 kW. The approach and wavelength were chosen on the basis of maximum cost effectiveness, including utilization of existing hardware and reasonable risk, and potential for future applications. Current plans call for an initial oscillator power demonstration in the fall of 1994 and full SAMOPA operation by December 1995.
Asymmetric network connectivity using weighted harmonic averages
NASA Astrophysics Data System (ADS)
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, yielding a real-valued Generalized Erdős Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdős numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the Netflix Prize, and find a significant improvement using our method over a baseline.
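The aggregation primitive behind the measure, a weighted harmonic average, can be sketched as below. This shows only the core operation, not the paper's full self-consistent GEN recursion over the graph.

```python
def weighted_harmonic_average(values, weights):
    """Weighted harmonic mean: sum(w) / sum(w / x).

    The harmonic mean is dominated by the smallest values, which makes
    it a natural "closeness" aggregator: one strong (short) connection
    keeps the average small even when many weak (long) ones exist.
    """
    values, weights = list(values), list(weights)
    if len(values) != len(weights) or not values:
        raise ValueError("values and weights must be non-empty and equal length")
    return sum(weights) / sum(w / x for x, w in zip(values, weights))
```

Because the weighted harmonic average is not symmetric under the paper's per-node weighting, the resulting closeness between two authors can differ by direction, which is exactly the asymmetry the abstract highlights.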
Average deployments versus missile and defender parameters
Canavan, G.H.
1991-03-01
This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with twofold lower effectiveness. Removing the next 10% could double constellation sizes. 5 refs., 7 figs.
Average prime-pair counting formula
NASA Astrophysics Data System (ADS)
Korevaar, Jaap; Riele, Herman Te
2010-04-01
Taking r > 0, let π_{2r}(x) denote the number of prime pairs (p, p + 2r) with p ≤ x. The prime-pair conjecture of Hardy and Littlewood (1923) asserts that π_{2r}(x) ~ 2C_{2r} li_2(x) with an explicit constant C_{2r} > 0. There seems to be no good conjecture for the remainders ω_{2r}(x) = π_{2r}(x) − 2C_{2r} li_2(x) that corresponds to Riemann's formula for π(x) − li(x). However, there is a heuristic approximate formula for averages of the remainders ω_{2r}(x) which is supported by numerical results.
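The counting function π_{2r}(x) itself is straightforward to compute with a sieve, which is how the numerical support mentioned above is typically gathered (this sketch only counts pairs; it does not evaluate the Hardy-Littlewood constants C_{2r}):

```python
def prime_pair_count(x, r):
    """pi_{2r}(x): number of primes p <= x such that p + 2r is also prime."""
    limit = x + 2 * r                       # sieve far enough for p + 2r
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0] = is_prime[1] = 0
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # mark all multiples of n starting at n*n as composite
            is_prime[n * n :: n] = bytearray(len(is_prime[n * n :: n]))
    return sum(1 for p in range(2, x + 1) if is_prime[p] and is_prime[p + 2 * r])
```

For example, r = 1 gives the twin-prime count: the eight pairs up to 100 are (3,5), (5,7), (11,13), (17,19), (29,31), (41,43), (59,61), (71,73).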
The balanced survivor average causal effect.
Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken
2013-01-01
Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure. PMID:23658214
Averaged implicit hydrodynamic model of semiflexible filaments.
Chandran, Preethi L; Mofrad, Mohammad R K
2010-03-01
We introduce a method to incorporate hydrodynamic interaction in a model of semiflexible filament dynamics. Hydrodynamic screening and other hydrodynamic interaction effects lead to nonuniform drag along even a rigid filament, and cause bending fluctuations in semiflexible filaments, in addition to the nonuniform Brownian forces. We develop our hydrodynamics model from a string-of-beads idealization of filaments, and capture hydrodynamic interaction by Stokes superposition of the solvent flow around beads. However, instead of the commonly used first-order Stokes superposition, we do an equivalent of infinite-order superposition by solving for the true relative velocity or hydrodynamic velocity of the beads implicitly. We also avoid the computational cost of the string-of-beads idealization by assuming a single normal, parallel and angular hydrodynamic velocity over sections of beads, excluding the beads at the filament ends. We do not include the end beads in the averaging and solve for them separately instead, in order to better resolve the drag profiles along the filament. A large part of the hydrodynamic drag is typically concentrated at the filament ends. The averaged implicit hydrodynamics method can be easily incorporated into a string-of-rods idealization of semiflexible filaments that was developed earlier by the authors. The earlier model was used to solve the Brownian dynamics of semiflexible filaments, but without hydrodynamic interactions incorporated. We validate our current model at each stage of development, and reproduce experimental observations on the mean-squared displacement of fluctuating actin filaments. We also show how hydrodynamic interaction confines a fluctuating actin filament between two stationary lateral filaments. Finally, preliminary examinations suggest that a large part of the observed velocity in the interior segments of a fluctuating filament can be attributed to induced solvent flow or hydrodynamic screening. PMID:20365783
The entropy in finite N-unit nonextensive systems: The normal average and q-average
NASA Astrophysics Data System (ADS)
Hasegawa, Hideo
2010-09-01
We discuss the Tsallis entropy in finite N-unit nonextensive systems by using the multivariate q-Gaussian probability distribution functions (PDFs) derived by the maximum entropy methods with the normal average and the q-average (q: the entropic index). The Tsallis entropy obtained by the q-average has an exponential N dependence: S_q(N)/N ≃ e^{(1−q)N S_1(1)} for large N (≫ 1/(1−q) > 0). In contrast, the Tsallis entropy obtained by the normal average is given by S_q(N)/N ≃ 1/[(q−1)N] for large N (≫ 1/(q−1) > 0). The N dependences of the Tsallis entropy obtained by the q-average and the normal average are generally quite different, although both results are in fairly good agreement for |q−1| ≪ 1.0. The validity of the factorization approximation (FA) to PDFs, which has been commonly adopted in the literature, has been examined. We have calculated correlations defined by C_m = ⟨(δx_i δx_j)^m⟩ − ⟨(δx_i)^m⟩⟨(δx_j)^m⟩ for i ≠ j, where δx_i = x_i − ⟨x_i⟩ and the bracket ⟨·⟩ stands for the normal and q-averages. The first-order correlation (m = 1) expresses the intrinsic correlation, and higher-order correlations with m ≥ 2 include nonextensivity-induced correlation, whose physical origin is elucidated in the superstatistics.
Flux-Averaged and Volume-Averaged Concentrations in Continuum Approaches to Solute Transport
NASA Astrophysics Data System (ADS)
Parker, J. C.; van Genuchten, M. Th.
1984-07-01
Transformations between volume-averaged pore fluid concentrations and flux-averaged concentrations are presented which show that both modes of concentration obey convective-dispersive transport equations of identical mathematical form for nonreactive solutes. The pertinent boundary conditions for the two modes, however, do not transform identically. Solutions of the convection-dispersion equation for a semi-infinite system during steady flow subject to a first-type inlet boundary condition is shown to yield flux concentrations, while solutions subject to a third-type boundary condition yield volume-averaged concentrations. These solutions may be applied with reasonable impunity to finite as well as semi-infinite media if back mixing at the exit is precluded. Implications of the distinction between resident and flux concentrations to laboratory and field studies of solute transport are discussed. It is suggested that perceived limitations of the convection-dispersion model for media with large variations in pore water velocities may in certain cases be attributable to a failure to distinguish between volume-averaged and flux-averaged concentrations.
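For steady one-dimensional flow, the transformation between the two concentration modes referred to above can be written compactly (standard form; notation assumed):

```latex
c_f(x,t) \;=\; c_r(x,t) \;-\; \frac{D}{v}\,\frac{\partial c_r(x,t)}{\partial x},
```

where c_r is the resident (volume-averaged) concentration, c_f the flux-averaged concentration, D the dispersion coefficient, and v the pore-water velocity. Both c_r and c_f then satisfy convection-dispersion equations of identical form, but, as the abstract notes, the boundary conditions do not transform identically: a first-type inlet condition yields c_f, a third-type condition yields c_r.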
Optimizing Average Precision Using Weakly Supervised Data.
Behl, Aseem; Mohapatra, Pritish; Jawahar, C V; Kumar, M Pawan
2015-12-01
Many tasks in computer vision, such as action classification and object detection, require us to rank a set of samples according to their relevance to a particular visual category. The performance of such tasks is often measured in terms of the average precision (AP). Yet it is common practice to employ the support vector machine (SVM) classifier, which optimizes a surrogate 0-1 loss. The popularity of SVM can be attributed to its empirical performance. Specifically, in fully supervised settings, SVM tends to provide similar accuracy to AP-SVM, which directly optimizes an AP-based loss. However, we hypothesize that in the significantly more challenging and practically useful setting of weakly supervised learning, it becomes crucial to optimize the right accuracy measure. In order to test this hypothesis, we propose a novel latent AP-SVM that minimizes a carefully designed upper bound on the AP-based loss function over weakly supervised samples. Using publicly available datasets, we demonstrate the advantage of our approach over standard loss-based learning frameworks on three challenging problems: action classification, character recognition and object detection. PMID:26539857
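The evaluation measure at the center of this abstract, average precision for a ranked list, is easy to state in code: AP is the mean of precision@k over the ranks k at which relevant samples appear. A minimal sketch (binary relevance, no tie handling):

```python
def average_precision(ranked_labels):
    """AP for a ranked list of binary relevance labels (1 = relevant).

    Walk down the ranking; each time a relevant sample appears at rank k,
    record precision@k = (#relevant so far) / k. AP is the mean of these.
    """
    hits, precisions = 0, []
    for k, label in enumerate(ranked_labels, start=1):
        if label:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(precisions) if precisions else 0.0
```

Because AP depends on the positions of positives in the ranking rather than on per-sample classification error, optimizing it directly (as AP-SVM does) differs from optimizing a surrogate 0-1 loss.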
Calculating Free Energies Using Average Force
NASA Technical Reports Server (NTRS)
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
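The central relation can be sketched schematically as follows (the paper's general formula includes additional terms for generalized coordinates; this is the simplest form, with notation assumed):

```latex
\frac{\mathrm{d}A}{\mathrm{d}\xi} \;=\; -\,\bigl\langle F_{\xi} \bigr\rangle_{\xi},
```

that is, the derivative of the free energy A along the selected coordinate ξ equals minus the average of the instantaneous force F_ξ acting on that coordinate, conditioned on the value of ξ. Integrating this average force along ξ then yields the free energy profile, which is the basis of the unconstrained method tested in the two examples.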
Average oxidation state of carbon in proteins
Dick, Jeffrey M.
2014-01-01
The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (ZC) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation–reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between ZC and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in ZC in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower ZC tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales. PMID:25165594
Optimal estimation of the diffusion coefficient from non-averaged and averaged noisy magnitude data
NASA Astrophysics Data System (ADS)
Kristoffersen, Anders
2007-08-01
The magnitude operation changes the signal distribution in MRI images from Gaussian to Rician. This introduces a bias that must be taken into account when estimating the apparent diffusion coefficient. Several estimators are known in the literature. In the present paper, two novel schemes are proposed. Both are based on simple least squares fitting of the measured signal, either to the median (MD) or to the maximum probability (MP) value of the Probability Density Function (PDF). Fitting to the mean (MN) or a high signal-to-noise ratio approximation to the mean (HS) is also possible. Special attention is paid to the case of averaged magnitude images. The PDF, which cannot be expressed in closed form, is analyzed numerically. A scheme for performing maximum likelihood (ML) estimation from averaged magnitude images is proposed. The performance of several estimators is evaluated by Monte Carlo (MC) simulations. We focus on typical clinical situations, where the number of acquisitions is limited. For non-averaged data the optimal choice is found to be MP or HS, whereas uncorrected schemes and the power image (PI) method should be avoided. For averaged data MD and ML perform equally well, whereas uncorrected schemes and HS are inadequate. MD provides easier implementation and higher computational efficiency than ML. Unbiased estimation of the diffusion coefficient allows high resolution diffusion tensor imaging (DTI) and may therefore help solving the problem of crossing fibers encountered in white matter tractography.
Code of Federal Regulations, 2011 CFR
2011-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...
Code of Federal Regulations, 2014 CFR
2014-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...
Code of Federal Regulations, 2013 CFR
2013-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...
Determining average path length and average trapping time on generalized dual dendrimer
NASA Astrophysics Data System (ADS)
Li, Ling; Guan, Jihong
2015-03-01
Dendrimers have a wide range of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases: the trap placed on a central node, and the trap uniformly distributed over all nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. We also discuss the influence of the coordination number on trapping efficiency.
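The average path length discussed above is defined as the mean shortest-path distance over all node pairs. As a generic illustration (a brute-force BFS on a toy graph, not the closed-form dendrimer derivation in the paper):

```python
# Average path length (APL) of an unweighted graph via breadth-first
# search from every node; O(n * (n + m)) brute force for illustration.
from collections import deque

def average_path_length(adj):
    """adj: adjacency list, e.g. {0: [1, 2], 1: [0], 2: [0]}."""
    n = len(adj)
    total = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))   # mean over ordered node pairs

# 4-node star: center 0 linked to leaves 1, 2, 3
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(average_path_length(star))   # 1.5
```

For the star, leaf-to-leaf distances are 2 and center-to-leaf distances are 1, giving a mean of 1.5; a logarithmic growth of this quantity with network size is what signals the small-world effect noted in the abstract.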
Instantaneous, phase-averaged, and time-averaged pressure from particle image velocimetry
NASA Astrophysics Data System (ADS)
de Kat, Roeland
2015-11-01
Recent work on pressure determination using velocity data from particle image velocimetry (PIV) resulted in approaches that allow for instantaneous and volumetric pressure determination. However, applying these approaches is not always feasible (e.g. due to resolution, access, or other constraints) or desired. In those cases pressure determination approaches using phase-averaged or time-averaged velocity provide an alternative. To assess the performance of these different pressure determination approaches against one another, they are applied to a single data set and their results are compared with each other and with surface pressure measurements. For this assessment, the data set of a flow around a square cylinder (de Kat & van Oudheusden, 2012, Exp. Fluids 52:1089-1106) is used. RdK is supported by a Leverhulme Trust Early Career Fellowship.
40 CFR 80.67 - Compliance on average.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Compliance on average. 80.67 Section...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.67 Compliance on average. The requirements... with one or more of the requirements of § 80.41 is determined on average (“averaged gasoline”)....
20 CFR 226.62 - Computing average monthly compensation.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computing average monthly compensation. 226... RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation...
20 CFR 226.62 - Computing average monthly compensation.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Computing average monthly compensation. 226... RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation...
20 CFR 226.62 - Computing average monthly compensation.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Computing average monthly compensation. 226.62... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation is...
20 CFR 226.62 - Computing average monthly compensation.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Computing average monthly compensation. 226.62... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation is...
20 CFR 226.62 - Computing average monthly compensation.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Computing average monthly compensation. 226... RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation...
Arithmetic averaging: A versatile technique for smoothing and trend removal
Clark, E.L.
1993-12-31
Arithmetic averaging is simple, stable, and can be very effective in attenuating the undesirable components in a complex signal, thereby providing smoothing or trend removal. An arithmetic average is easy to calculate. However, the resulting modifications to the data, in both the time and frequency domains, are not well understood by many experimentalists. This paper discusses the following aspects of averaging: (1) types of averages -- simple, cumulative, and moving; and (2) time and frequency domain effects of the averaging process.
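The three average types the paper enumerates can be sketched directly (a minimal stdlib-only illustration; variable names and the example data are mine, not the paper's):

```python
# The three averages discussed: simple, cumulative, and moving.
def simple_average(x):
    return sum(x) / len(x)

def cumulative_average(x):
    # running mean updated after each new sample
    out, total = [], 0.0
    for i, v in enumerate(x, 1):
        total += v
        out.append(total / i)
    return out

def moving_average(x, window):
    # unweighted moving average: acts as a low-pass filter, attenuating
    # signal components with periods shorter than the window (smoothing)
    return [sum(x[i:i + window]) / window for i in range(len(x) - window + 1)]

data = [2.0, 4.0, 6.0, 8.0]
print(simple_average(data))        # 5.0
print(cumulative_average(data))    # [2.0, 3.0, 4.0, 5.0]
print(moving_average(data, 2))     # [3.0, 5.0, 7.0]
```

The frequency-domain effect the paper analyzes follows from the moving average being a convolution with a rectangular window, whose transfer function has nulls at multiples of 1/window.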
NASA Astrophysics Data System (ADS)
Wu, Zikai; Hou, Baoyu; Zhang, Hongjuan; Jin, Feng
2014-04-01
Deterministic network models have been attractive media for discussing how dynamical processes depend on network structural features. On the other hand, the heterogeneity of weights affects dynamical processes taking place on networks. In this paper, we present a family of weighted expanded Koch networks based on Koch networks. They originate from an r-polygon, and in each subsequent evolutionary step every node of the current generation produces m r-polygons that include the node and whose weighted edges are scaled by a factor w. We derive closed-form expressions for the average weighted shortest path length (AWSP). In large networks, the AWSP stays bounded as the network order grows (0 < w < 1). We then focus on a special random walk and a trapping issue on the networks. In more detail, we calculate exactly the average receiving time (ART). The ART exhibits a sub-linear dependence on network order (0 < w < 1), which implies that nontrivial weighted expanded Koch networks are more efficient than unweighted expanded Koch networks in receiving information. Besides, the efficiency of receiving information at hub nodes also depends on the parameters m and r. These findings may pave the way for controlling information transportation on general weighted networks.
Cost averaging techniques for robust control of flexible structural systems
NASA Technical Reports Server (NTRS)
Hagood, Nesbitt W.; Crawley, Edward F.
1991-01-01
Viewgraphs on cost averaging techniques for robust control of flexible structural systems are presented. Topics covered include: modeling of parameterized systems; average cost analysis; reduction of parameterized systems; and static and dynamic controller synthesis.
Sample Size Bias in Judgments of Perceptual Averages
ERIC Educational Resources Information Center
Price, Paul C.; Kimura, Nicole M.; Smith, Andrew R.; Marshall, Lindsay D.
2014-01-01
Previous research has shown that people exhibit a sample size bias when judging the average of a set of stimuli on a single dimension. The more stimuli there are in the set, the greater people judge the average to be. This effect has been demonstrated reliably for judgments of the average likelihood that groups of people will experience negative,…
Averaging in SU(2) open quantum random walk
NASA Astrophysics Data System (ADS)
Clement, Ampadu
2014-03-01
We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.
76 FR 57081 - Annual Determination of Average Cost of Incarceration
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-15
... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2010 was $28,284. The average annual cost to confine an inmate in a Community Corrections...
78 FR 16711 - Annual Determination of Average Cost of Incarceration
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-18
... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2011 was $28,893.40. The average annual cost to confine an inmate in a Community...
76 FR 6161 - Annual Determination of Average Cost of Incarceration
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-03
... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2009 was $25,251. The average annual cost to confine an inmate in a Community Corrections...
47 CFR 1.959 - Computation of average terrain elevation.
Code of Federal Regulations, 2012 CFR
2012-10-01
...) Radial average terrain elevation is calculated as the average of the elevation along a straight line path... radial path extends over foreign territory or water, such portion must not be included in the computation of average elevation unless the radial path again passes over United States land between 16 and...
7 CFR 760.640 - National average market price.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 7 2014-01-01 2014-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...
7 CFR 760.640 - National average market price.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 7 2012-01-01 2012-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...
7 CFR 760.640 - National average market price.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 7 2011-01-01 2011-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...
7 CFR 760.640 - National average market price.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 7 2013-01-01 2013-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...
7 CFR 760.640 - National average market price.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...
20 CFR 404.221 - Computing your average monthly wage.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...
20 CFR 404.221 - Computing your average monthly wage.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...
20 CFR 404.221 - Computing your average monthly wage.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...
20 CFR 404.221 - Computing your average monthly wage.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...
20 CFR 404.221 - Computing your average monthly wage.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...
27 CFR 19.37 - Average effective tax rate.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Average effective tax rate... effective tax rate. (a) The proprietor may establish an average effective tax rate for any eligible... recompute the average effective tax rate so as to include only the immediately preceding 6-month period....
Robust Morphological Averages in Three Dimensions for Anatomical Atlas Construction
NASA Astrophysics Data System (ADS)
Márquez, Jorge; Bloch, Isabelle; Schmitt, Francis
2004-09-01
We present original methods for obtaining robust, anatomical shape-based averages of features of the human head anatomy from a normal population. Our goals are computerized atlas construction with representative anatomical features and morphometry for specific populations. A method for true morphological averaging is proposed, consisting of a suitable blend of shape-related information from N objects to obtain a progressive average. It is made robust by penalizing, in a morphological sense, the contributions of features less similar to the current average. Morphological error and similarity, as well as the penalization, are based on the same paradigm as the morphological averaging.
Calculating High Speed Centrifugal Compressor Performance from Averaged Measurements
NASA Astrophysics Data System (ADS)
Lou, Fangyuan; Fleming, Ryan; Key, Nicole L.
2012-12-01
To improve the understanding of high performance centrifugal compressors found in modern aircraft engines, the aerodynamics through these machines must be experimentally studied. To accurately capture the complex flow phenomena through these devices, research facilities that can accurately simulate these flows are necessary. One such facility has been recently developed, and it is used in this paper to explore the effects of averaging total pressure and total temperature measurements to calculate compressor performance. Different averaging techniques (including area averaging, mass averaging, and work averaging) have been applied to the data. Results show that there is a negligible difference in both the calculated total pressure ratio and efficiency for the different techniques employed. However, the uncertainty in the performance parameters calculated with the different averaging techniques is significantly different, with area averaging providing the least uncertainty.
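Two of the averaging techniques compared above reduce to the same weighted-mean formula with different weights. A minimal sketch (the probe values, areas, and mass fluxes below are invented illustration data, not measurements from the paper):

```python
# Area-weighted vs. mass-weighted averaging of a rake of total-pressure
# probes. Both are instances of a generic weighted mean; only the
# weighting quantity differs.
def weighted_average(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

p_total = [101.0, 103.0, 104.0, 102.0]   # kPa at four probe locations
areas   = [1.0, 1.0, 1.0, 1.0]           # equal annulus areas -> area average
mdots   = [0.8, 1.2, 1.3, 0.7]           # local mass flow -> mass average

print(weighted_average(p_total, areas))  # 102.5 (area average)
print(weighted_average(p_total, mdots))  # 102.75 (mass average)
```

The mass average weights high-flow regions more heavily; the paper's finding is that the resulting performance numbers differ negligibly, while the propagated measurement uncertainty differs substantially between weighting schemes.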
Average g-Factors of Anisotropic Polycrystalline Samples
Fishman, Randy Scott; Miller, Joel S.
2010-01-01
Due to the lack of suitable single crystals, the average g-factor of anisotropic polycrystalline samples is commonly estimated from either the Curie-Weiss susceptibility or the saturation magnetization. We show that the average g-factor obtained from the Curie constant is always greater than or equal to the average g-factor obtained from the saturation magnetization. The average g-factors are equal only for a single crystal or an isotropic polycrystal. We review experimental results for several compounds containing the anisotropic cation [Fe(C5Me5)2]+ and propose an experiment to test this inequality using a compound with a spinless anion.
Aberration averaging using point spread function for scanning projection systems
NASA Astrophysics Data System (ADS)
Ooki, Hiroshi; Noda, Tomoya; Matsumoto, Koichi
2000-07-01
Scanning projection systems play a leading role in current DUV optical lithography. It is frequently pointed out that mechanically induced distortion and field curvature degrade image quality after scanning. On the other hand, the aberration of the projection lens is averaged along the scanning direction, and this averaging effect reduces the residual aberration significantly. This paper describes aberration averaging based on the point spread function and a phase retrieval technique to estimate the effective wavefront aberration after scanning. Our averaging method is tested using a specified wavefront aberration, and its accuracy is discussed based on the measured wavefront aberration of a recent Nikon projection lens.
Thermodynamic properties of average-atom interatomic potentials for alloys
NASA Astrophysics Data System (ADS)
Nöhring, Wolfram Georg; Curtin, William Arthur
2016-05-01
The atomistic mechanisms of deformation in multicomponent random alloys are challenging to model because of their extensive structural and compositional disorder. For embedded-atom-method (EAM) interatomic potentials, a formal averaging procedure can generate an average-atom EAM potential, and this average-atom potential has recently been shown to accurately predict many zero-temperature properties of the true random alloy. Here, the finite-temperature thermodynamic properties of the average-atom potential are investigated to determine whether it can represent the true random alloy's Helmholtz free energy as well as important finite-temperature properties. Using a thermodynamic integration approach, the average-atom system is found to have an entropy difference of at most 0.05 kB/atom relative to the true random alloy over a wide temperature range, as demonstrated on FeNiCr and Ni85Al15 model alloys. Lattice constants, and thus thermal expansion, and elastic constants are also well predicted (within a few percent) by the average-atom potential over a wide temperature range. The largest differences between the average-atom and true random alloys are found in the zero-temperature properties, which reflect the role of local structural disorder in the true random alloy. Thus, the average-atom potential is a valuable strategy for modeling alloys at finite temperatures.
Phase averaging of image ensembles by using cepstral gradients
Swan, H.W.
1983-11-01
The direct Fourier phase averaging of an ensemble of randomly blurred images has long been thought to be too difficult a problem to undertake realistically owing to the necessity of proper phase unwrapping. It is shown that it is nevertheless possible to average the Fourier phase information in an image ensemble without calculating phases by using the technique of cepstral gradients.
78 FR 49770 - Annual Determination of Average Cost of Incarceration
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-15
... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal... annual cost to confine an inmate in a Community Corrections Center for Fiscal Year 2012 was $27,003...
20 CFR 404.220 - Average-monthly-wage method.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 404.220 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You...
Delineating the Average Rate of Change in Longitudinal Models
ERIC Educational Resources Information Center
Kelley, Ken; Maxwell, Scott E.
2008-01-01
The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…
Interpreting Bivariate Regression Coefficients: Going beyond the Average
ERIC Educational Resources Information Center
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
Using Multiple Representations To Improve Conceptions of Average Speed.
ERIC Educational Resources Information Center
Reed, Stephen K.; Jazo, Linda
2002-01-01
Discusses improving mathematical reasoning through the design of computer microworlds and evaluates a computer-based learning environment that uses multiple representations to improve undergraduate students' conception of average speed. Describes improvement of students' estimates of average speed by using visual feedback from a simulation.…
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2012 CFR
2012-10-01
... each MA-PD plan described in section 1851(a)(2)(A)(i) of the Act. The calculation does not include bids... section 1876(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid... defined in § 422.258(c)(1) of this chapter) and the denominator equal to the total number of Part...
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2011 CFR
2011-10-01
... bid amounts for each prescription drug plan (not including fallbacks) and for each MA-PD plan...(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid amount is a....258(c)(1) of this chapter) and the denominator equal to the total number of Part D...
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2014 CFR
2014-10-01
... each MA-PD plan described in section 1851(a)(2)(A)(i) of the Act. The calculation does not include bids... section 1876(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid... defined in § 422.258(c)(1) of this chapter) and the denominator equal to the total number of Part...
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2010 CFR
2010-10-01
... bid amounts for each prescription drug plan (not including fallbacks) and for each MA-PD plan...(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid amount is a....258(c)(1) of this chapter) and the denominator equal to the total number of Part D...
Average refractive powers of an alexandrite laser rod
NASA Astrophysics Data System (ADS)
Driedger, K. P.; Krause, W.; Weber, H.
1986-04-01
The average refractive powers (average inverse focal lengths) of the thermal lens produced by an alexandrite laser rod optically pumped at repetition rates between 0.4 and 10 Hz and with electrical flashlamp input pulse energies up to 500 J have been measured. The measuring setup is described and the measurement results are discussed.
Hadley circulations for zonally averaged heating centered off the equator
NASA Technical Reports Server (NTRS)
Lindzen, Richard S.; Hou, Arthur Y.
1988-01-01
Consistent with observations, it is found that moving peak heating even 2 deg off the equator leads to profound asymmetries in the Hadley circulation, with the winter cell amplifying greatly and the summer cell becoming negligible. It is found that the annually averaged Hadley circulation is much larger than the circulation forced by the annually averaged heating.
47 CFR 80.759 - Average terrain elevation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 5 2013-10-01 2013-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw...
47 CFR 80.759 - Average terrain elevation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 5 2011-10-01 2011-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw...
Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?
Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.
2013-06-17
Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
On various definitions of shadowing with average error in tracing
NASA Astrophysics Data System (ADS)
Wu, Xinxing; Oprocha, Piotr; Chen, Guanrong
2016-07-01
When computing a trajectory of a dynamical system, influence of noise can lead to large perturbations which can appear, however, with small probability. Then when calculating approximate trajectories, it makes sense to consider errors small on average, since controlling them in each iteration may be impossible. Demand to relate approximate trajectories with genuine orbits leads to various notions of shadowing (on average) which we consider in the paper. As the main tools in our studies we provide a few equivalent characterizations of the average shadowing property, which also partly apply to other notions of shadowing. We prove that almost specification on the whole space induces this property on the measure center which in turn implies the average shadowing property. Finally, we study connections among sensitivity, transitivity, equicontinuity and (average) shadowing.
LANDSAT-4 horizon scanner full orbit data averages
NASA Technical Reports Server (NTRS)
Stanley, J. P.; Bilanow, S.
1983-01-01
Averages taken over full-orbit data spans of the pitch and roll residual measurement errors of the two conical Earth sensors operating on the LANDSAT 4 spacecraft are described. The variability of these full-orbit averages over representative data throughout the year is analyzed to demonstrate the long-term stability of the sensor measurements. The data analyzed consist of 23 segments of sensor measurements made at 2- to 4-week intervals; each segment is roughly 24 hours in length. The variation of the full-orbit average is examined both as a function of orbit within a day and as a function of day of year. The dependence on day of year is based on associating the start date of each segment with the mean full-orbit average for that segment. The peak-to-peak and standard-deviation values of the averages for each data segment are computed, and their variation with day of year is also examined.
Some series of intuitionistic fuzzy interactive averaging aggregation operators.
Garg, Harish
2016-01-01
In this paper, some series of new intuitionistic fuzzy averaging aggregation operators are presented under the intuitionistic fuzzy set environment. For this, some shortcomings of the existing operators are first highlighted, and a new operational law, which accounts for the hesitation degree between the membership functions, is proposed to overcome them. Based on these new operational laws, new averaging aggregation operators, namely the intuitionistic fuzzy Hamacher interactive weighted averaging, ordered weighted averaging, and hybrid weighted averaging operators, labeled IFHIWA, IFHIOWA, and IFHIHWA respectively, are proposed. Furthermore, some desirable properties such as idempotency, boundedness, and homogeneity are studied. Finally, a multi-criteria decision-making method based on the proposed operators is presented for selecting the best alternative. A detailed comparison between the proposed operators and the existing operators is provided. PMID:27441128
Do diurnal aerosol changes affect daily average radiative forcing?
NASA Astrophysics Data System (ADS)
Kassianov, Evgueni; Barnard, James; Pekour, Mikhail; Berg, Larry K.; Michalsky, Joseph; Lantz, Kathy; Hodges, Gary
2013-06-01
The diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
ERIC Educational Resources Information Center
Mask, Nan; Bowen, Charles E.
1984-01-01
Compared the Wechsler Intelligence Scale for Children (Revised) (WISC-R) and the Leiter International Performance Scale with 40 average and above average students. Results indicated a curvilinear relationship between the WISC-R and the Leiter, which correlates higher at the mean and deviates as the Full Scale varies from the mean. (JAC)
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 false How is the annual refinery or importer average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Gasoline...
ERIC Educational Resources Information Center
Saleh, Mohammad; Lazonder, Ard W.; Jong, Ton de
2007-01-01
Average-ability students often do not take full advantage of learning in mixed-ability groups because they hardly engage in the group interaction. This study examined whether structuring collaboration by group roles and ground rules for helping behavior might help overcome this participatory inequality. In a plant biology course, heterogeneously…
Programmable noise bandwidth reduction by means of digital averaging
NASA Technical Reports Server (NTRS)
Poklemba, John J. (Inventor)
1993-01-01
Predetection noise bandwidth reduction is effected by a pre-averager capable of digitally averaging the samples of an input data signal over two or more symbols, the averaging interval being defined by the input sampling rate divided by the output sampling rate. Because the averaged samples are clocked to the detector at a much slower rate than the input signal sampling rate, the noise bandwidth at the input to the detector is reduced, the detector input has an improved signal-to-noise ratio as a result of the averaging, and the rate at which subsequent processing must operate is correspondingly reduced. The pre-averager forms a data filter having an output sampling rate of one sample per symbol of received data. More specifically, selected ones of a plurality of samples accumulated over two or more symbol intervals are output in response to clock signals at a rate of one sample per symbol interval. The pre-averager includes circuitry for weighting digitized signal samples using stored finite impulse response (FIR) filter coefficients. A method according to the present invention is also disclosed.
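In its simplest form (uniform weights rather than stored FIR coefficients), the pre-averager block-averages N input samples per symbol and emits one output sample per symbol. A minimal sketch, with illustrative names not taken from the patent:

```python
import numpy as np

def pre_average(samples, samples_per_symbol):
    """Boxcar pre-averager: average consecutive blocks of
    `samples_per_symbol` input samples, emitting one output sample per
    symbol interval (equivalent to a uniform-weight FIR filter followed
    by decimation). Trailing samples that do not fill a block are dropped."""
    n = len(samples) // samples_per_symbol * samples_per_symbol
    blocks = np.asarray(samples[:n], dtype=float).reshape(-1, samples_per_symbol)
    return blocks.mean(axis=1)

# One output sample per symbol; averaging N samples of white noise
# reduces its variance by a factor of N, which is the predetection
# noise bandwidth reduction described above.
out = pre_average([1, 1, 1, 1, 2, 2, 2, 2], 4)
```

Replacing the uniform weights with stored FIR coefficients (as in the patent) turns the same structure into a general polyphase decimating data filter.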
The causal meaning of Fisher’s average effect
Lee, James J.; Chow, Carson C.
2013-01-01
In order to formulate the Fundamental Theorem of Natural Selection, Fisher defined the average excess and average effect of a gene substitution. Finding these notions to be somewhat opaque, some authors have recommended reformulating Fisher’s ideas in terms of covariance and regression, which are classical concepts of statistics. We argue that Fisher intended his two averages to express a distinction between correlation and causation. On this view, the average effect is a specific weighted average of the actual phenotypic changes that result from physically changing the allelic states of homologous genes. We show that the statistical and causal conceptions of the average effect, perceived as inconsistent by Falconer, can be reconciled if certain relationships between the genotype frequencies and non-additive residuals are conserved. There are certain theory-internal considerations favouring Fisher’s original formulation in terms of causality; for example, the frequency-weighted mean of the average effects equaling zero at each locus becomes a derivable consequence rather than an arbitrary constraint. More broadly, Fisher’s distinction between correlation and causation is of critical importance to gene-trait mapping studies and the foundations of evolutionary biology. PMID:23938113
Phase-compensated averaging for analyzing electroencephalography and magnetoencephalography epochs.
Matani, Ayumu; Naruse, Yasushi; Terazono, Yasushi; Iwasaki, Taro; Fujimaki, Norio; Murata, Tsutomu
2010-05-01
Stimulus-locked averaging for electroencephalography and/or magnetoencephalography (EEG/MEG) epochs cancels out ongoing spontaneous activities by treating them as noise. However, such spontaneous activities are the object of interest for EEG/MEG researchers who study phase-related phenomena, e.g., long-distance synchronization, phase-reset, and event-related synchronization/desynchronization (ERD/ERS). We propose a complex-weighted averaging method, called phase-compensated averaging, to investigate phase-related phenomena. In this method, any EEG/MEG channel is used as a trigger for averaging by setting the instantaneous phases at the trigger timings to 0 so that cross-channel averages are obtained. First, we evaluated the fundamental characteristics of this method by performing simulations. The results showed that this method could selectively average ongoing spontaneous activity phase-locked in each channel; that is, it evaluates the directional phase-synchronizing relationship between channels. We then analyzed flash evoked potentials. This method clarified the directional phase-synchronizing relationship from the frontal to occipital channels and recovered another piece of information, perhaps regarding the sequence of experiments, which is lost when using only conventional averaging. This method can also be used to reconstruct EEG/MEG time series to visualize long-distance synchronization and phase-reset directly, and on the basis of the potentials, ERS/ERD can be explained as a side effect of phase-reset. PMID:20172813
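The core operation, rotating each complex epoch so that a trigger channel's instantaneous phase at the trigger sample is zero before averaging, can be sketched as follows. This is a minimal reading of the abstract, not the authors' implementation; the analytic signal is computed with a plain FFT construction, and all names are illustrative.

```python
import numpy as np

def analytic(x):
    """Discrete analytic signal of a real sequence via the FFT
    (suppress negative frequencies, double positive ones)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def phase_compensated_average(epochs, trigger_epochs, t0):
    """Complex-weighted (phase-compensated) average: rotate each epoch
    by minus the trigger channel's instantaneous phase at sample t0,
    so activity phase-locked to the trigger survives averaging."""
    acc = np.zeros(epochs.shape[1], dtype=complex)
    for ep, trig in zip(epochs, trigger_epochs):
        phi = np.angle(analytic(trig)[t0])
        acc += analytic(ep) * np.exp(-1j * phi)
    return acc / len(epochs)
```

With epochs whose phases are uniformly scattered, conventional stimulus-locked averaging cancels the oscillation, while the phase-compensated average preserves it at full amplitude.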
Incoherent averaging of phase singularities in speckle-shearing interferometry.
Mantel, Klaus; Nercissian, Vanusch; Lindlein, Norbert
2014-08-01
Interferometric speckle techniques are plagued by the omnipresence of phase singularities, impairing the phase unwrapping process. To reduce the number of phase singularities by physical means, an incoherent averaging of multiple speckle fields may be applied. It turns out, however, that the results may strongly deviate from the expected √N behavior. Using speckle-shearing interferometry as an example, we investigate the mechanism behind the reduction of phase singularities, both by calculations and by computer simulations. Key to an understanding of the reduction mechanism during incoherent averaging is the representation of the physical averaging process in terms of certain vector fields associated with each speckle field. PMID:25078215
Time average vibration fringe analysis using Hilbert transformation
Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad
2010-10-20
Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
Bounce-averaged Kinetic Equations and Neoclassical Polarization Density
Fong, B. H.; Hahm, T. S.
1998-07-01
The rigorous formulation of the bounce-averaged equations is presented based upon the Poincaré-Cartan one-form and Lie perturbation methods. The resulting bounce-averaged Vlasov equation is Hamiltonian, thus suitable for the self-consistent simulation of low-frequency electrostatic turbulence in the trapped ion mode regime. In the bounce-kinetic Poisson equation, the "neoclassical polarization density" arises from the difference between bounce-averaged banana center and real trapped particle densities across a field line. This representation of the neoclassical polarization drift as a shielding term provides a systematic way to study the long-term behavior of the turbulence-driven E x B flow.
Optimization of high average power FEL beam for EUV lithography
NASA Astrophysics Data System (ADS)
Endo, Akira
2015-05-01
Extreme Ultraviolet Lithography (EUVL) is entering the high volume manufacturing (HVM) stage, with a high average power (250 W) EUV source from laser-produced plasma at 13.5 nm. The semiconductor industry roadmap indicates scaling of the source technology to more than 1 kW average power by a high repetition rate FEL. This paper discusses the lowest-risk approach to constructing a prototype based on a superconducting linac and a normal-conducting undulator, to demonstrate a high average power 13.5 nm FEL equipped with optimized optical components and solid state lasers, and to study FEL applications in EUV lithography.
Definition of average path and relativity parameter computation in CASA
NASA Astrophysics Data System (ADS)
Wu, Dawei; Huang, Yan; Chen, Xiaohua; Yu, Chang
2001-09-01
CASA (computer-assisted semen analysis) is a medical application system that measures sperm motility and its parameters using image-processing methods. However, no authoritative administration or academic organization has yet issued a set of criteria for CASA, which reduces the effectiveness of comparisons between labs and researchers. The average path and the parameters derived from it, such as average path velocity, amplitude of lateral head displacement, and beat-cross frequency, are often not comparable between systems because different algorithms are used. This paper presents a new algorithm that defines the average path uniquely and computes these three parameters quickly and conveniently from any real path.
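One common (non-unique) way CASA systems define the average path is a centered moving average of the raw head-centroid track, with the average path velocity (VAP) taken as the smoothed path length per unit time. The paper's algorithm differs; this sketch only illustrates the quantities involved, and all names and the window size are assumptions.

```python
import numpy as np

def average_path(points, window=5):
    """Smooth a raw (x, y) head-centroid track with a centered moving
    average; this is one conventional definition of the average path."""
    pts = np.asarray(points, dtype=float)
    kernel = np.ones(window) / window
    return np.column_stack([np.convolve(pts[:, i], kernel, mode='valid')
                            for i in range(pts.shape[1])])

def vap(smooth, dt):
    """Average path velocity: length of the smoothed path divided by
    the elapsed time (dt = frame interval)."""
    seg = np.diff(smooth, axis=0)
    return np.hypot(seg[:, 0], seg[:, 1]).sum() / (dt * len(seg))
```

Because the result depends on the smoothing window and edge handling, two systems running this same recipe with different settings already disagree, which is the comparability problem the abstract describes.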
Averaging underwater noise levels for environmental assessment of shipping.
Merchant, Nathan D; Blondel, Philippe; Dakin, D Tom; Dorocicz, John
2012-10-01
Rising underwater noise levels from shipping have raised concerns regarding chronic impacts to marine fauna. However, there is a lack of consensus over how to average local shipping noise levels for environmental impact assessment. This paper addresses this issue using 110 days of continuous data recorded in the Strait of Georgia, Canada. Probability densities of ~10^7 1-s samples in selected 1/3 octave bands were approximately stationary across one-month subsamples. Median and mode levels varied with averaging time. Mean sound pressure levels averaged in linear space, though susceptible to strong bias from outliers, are most relevant to cumulative impact assessment metrics. PMID:23039575
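The distinction drawn above, mean sound pressure level averaged in linear (power) space versus a plain arithmetic mean of dB values, can be sketched as follows (function name illustrative):

```python
import numpy as np

def mean_spl(levels_db):
    """Mean sound pressure level computed in linear (power) space:
    convert dB -> power, average, convert back to dB."""
    return 10 * np.log10(np.mean(10 ** (np.asarray(levels_db) / 10)))

# For equal levels the two averages agree, but a single loud outlier
# dominates the linear-space mean: mean_spl([90, 110]) is ~107 dB,
# far above the 100 dB arithmetic mean of the dB values.
```

This dominance by high-level events is exactly the "strong bias from outliers" noted in the abstract, and also why the linear-space mean tracks cumulative acoustic energy.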
Distribution of time-averaged observables for weak ergodicity breaking.
Rebenshtok, A; Barkai, E
2007-11-23
We find a general formula for the distribution of time-averaged observables for systems modeled according to the subdiffusive continuous time random walk. For Gaussian random walks coupled to a thermal bath we recover ergodicity and Boltzmann's statistics, while for the anomalous subdiffusive case a weakly nonergodic statistical mechanical framework is constructed, which is based on Lévy's generalized central limit theorem. As an example we calculate the distribution of X, the time average of the position of the particle, for unbiased and uniformly biased particles, and show that X exhibits large fluctuations compared with the ensemble average.
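A minimal simulation sketch of the time-averaged position X̄ = (1/T)∫₀ᵀ x(t) dt for an unbiased continuous time random walk: the walker waits a heavy-tailed time at each site, then takes a ±1 jump. Waiting times are drawn here as shifted Pareto variables with tail index alpha (so alpha < 1 gives the subdiffusive, weakly nonergodic regime); these modeling choices are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_averaged_position(alpha, T):
    """Time average of position for one unbiased CTRW trajectory with
    power-law waiting times psi(t) ~ t^(-1-alpha) (shifted Pareto).
    The occupation time at each site weights its contribution to X̄."""
    t, x, acc = 0.0, 0.0, 0.0
    while True:
        tau = rng.pareto(alpha) + 1.0      # waiting time at site x, >= 1
        if t + tau >= T:
            acc += x * (T - t)             # truncate last sojourn at T
            return acc / T
        acc += x * tau
        t += tau
        x += rng.choice((-1.0, 1.0))
```

For alpha > 1 repeated runs of X̄ cluster near the ensemble average (zero); for alpha < 1 a single waiting time can occupy a finite fraction of T, so X̄ remains random from trajectory to trajectory, which is the large-fluctuation behavior described above.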
Average waiting time in FDDI networks with local priorities
NASA Technical Reports Server (NTRS)
Gercek, Gokhan
1994-01-01
A method is introduced to compute the average queuing delay experienced by different priority group messages in an FDDI node. It is assumed that no FDDI MAC layer priorities are used. Instead, a priority structure is introduced to the messages locally at a higher protocol layer (e.g., the network layer). Such a method was planned for use in the Space Station Freedom FDDI network. Conservation of the average waiting time is used as the key concept in computing average queuing delays. It is shown that local priority assignments are feasible, especially when the traffic distribution is asymmetric in the FDDI network.
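The abstract does not give its exact formulation, but one standard form of the waiting-time conservation idea is Kleinrock's conservation law for a nonpreemptive head-of-line priority queue: the ρ-weighted sum of the per-class mean waits is invariant under the priority assignment. A sketch using Cobham's M/G/1 priority formulas (classes listed highest priority first; names illustrative):

```python
def priority_waits(lams, mean_s, mean_s2):
    """Mean queuing delays per class in a nonpreemptive M/G/1
    head-of-line priority queue (Cobham's formula).

    lams    -- arrival rates per class, highest priority first
    mean_s  -- mean service times E[S] per class
    mean_s2 -- second moments E[S^2] per class
    """
    w0 = 0.5 * sum(l * s2 for l, s2 in zip(lams, mean_s2))  # residual work
    rhos = [l * s for l, s in zip(lams, mean_s)]
    waits, sigma_prev = [], 0.0
    for rho_k in rhos:
        sigma_k = sigma_prev + rho_k
        waits.append(w0 / ((1 - sigma_prev) * (1 - sigma_k)))
        sigma_prev = sigma_k
    return waits

# Conservation law: sum(rho_k * W_k) = rho * W0 / (1 - rho),
# independent of how priorities are assigned.
```

The telescoping identity ρ_k/((1-σ_{k-1})(1-σ_k)) = 1/(1-σ_k) - 1/(1-σ_{k-1}) makes the conservation law an exact algebraic consequence of Cobham's formula, which is what lets per-class delays be computed once the aggregate delay is known.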
Direct Statistical Simulation: Ensemble Averaging and Basis Reduction
NASA Astrophysics Data System (ADS)
Allawala, Altan; Marston, Brad
2015-11-01
Low-order statistics of models of geophysical fluids may be directly accessed by solving the equations of motion for the equal-time cumulants themselves. We investigate a variant of the second-order cumulant expansion (CE2) in which zonal averaging is replaced by ensemble averaging. Proper orthogonal decomposition (POD) of the second cumulant is used to reduce the dimensionality of the problem. The approach is tested on a quasi-geostrophic 2-layer baroclinic model of planetary atmospheres by comparison to the traditional approach of accumulating statistics via numerical simulation, and to zonally averaged CE2. Supported in part by NSF DMR-1306806 and NSF CCF-1048701.
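The POD step mentioned above can be sketched with a thin SVD of a snapshot matrix: the leading left singular vectors form the reduced basis, and the singular value spectrum indicates how many modes carry the variance. The snapshot layout and names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def pod_basis(snapshots, r):
    """Leading-r proper orthogonal decomposition basis of a snapshot
    matrix (each column one snapshot, e.g. of the second cumulant),
    computed via a thin SVD. Returns the basis and singular values."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s[:r]

# Projecting the dynamics onto U reduces the dimensionality of the
# cumulant equations while retaining the dominant variance.
```

When the underlying field is effectively low-rank, truncating at small r loses little: the discarded singular values are near zero, so the projection U Uᵀ reproduces the snapshots almost exactly.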
Average local ionization energy generalized to correlated wavefunctions
Ryabinkin, Ilya G.; Staroverov, Viktor N.
2014-08-28
The average local ionization energy function introduced by Politzer and co-workers [Can. J. Chem. 68, 1440 (1990)] as a descriptor of chemical reactivity has a limited utility because it is defined only for one-determinantal self-consistent-field methods such as the Hartree–Fock theory and the Kohn–Sham density-functional scheme. We reinterpret the negative of the average local ionization energy as the average total energy of an electron at a given point and, by rewriting this quantity in terms of reduced density matrices, arrive at its natural generalization to correlated wavefunctions. The generalized average local electron energy turns out to be the diagonal part of the coordinate representation of the generalized Fock operator divided by the electron density; it reduces to the original definition in terms of canonical orbitals and their eigenvalues for one-determinantal wavefunctions. The discussion is illustrated with calculations on selected atoms and molecules at various levels of theory.
Effects of spatial variability and scale on areal-average evapotranspiration
NASA Technical Reports Server (NTRS)
Famiglietti, J. S.; Wood, Eric F.
1993-01-01
This paper explores the effect of spatial variability and scale on areally-averaged evapotranspiration. A spatially-distributed water and energy balance model is employed to determine the effect of explicit patterns of model parameters and atmospheric forcing on modeled areally-averaged evapotranspiration over a range of increasing spatial scales. The analysis is performed from the local scale to the catchment scale. The study area is King's Creek catchment, an 11.7 sq km watershed located on the native tallgrass prairie of Kansas. The dominant controls on the scaling behavior of catchment-average evapotranspiration are investigated by simulation, as is the existence of a threshold scale for evapotranspiration modeling, with implications for explicit versus statistical representation of important process controls. It appears that some of our findings are fairly general, and will therefore provide a framework for understanding the scaling behavior of areally-averaged evapotranspiration at the catchment and larger scales.
Average lifespan of radioelectronic equipment with allowance for resource limitations
NASA Astrophysics Data System (ADS)
Davydov, A. N.
2011-12-01
One of the reliability parameters of radioelectronic equipment is its average life span. The number of incidents during the operation of different items that make up the component base of radioelectronic equipment follows an exponential distribution. In general, the average life span for an exponential distribution is T_mean = 1/λ, where λ is the incident rate of a component per hour. This estimate is valid when considering the life span of radioelectronic equipment from zero to infinity. In reality, component base items and, correspondingly, radioelectronic equipment have resource limitations caused by the properties of their constituent materials and manufacturing technique. The average life span of radioelectronic equipment will therefore differ from this ideal value. This paper is aimed at calculating the average life span of radioelectronic equipment with allowance for the resource limitations of its constituent electronic component base items.
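One natural reading of the abstract: if the failure time X is exponential with rate λ but the resource limit truncates service life at T_r, the realized life is min(X, T_r), with mean E[min(X, T_r)] = ∫₀^{T_r} e^{-λt} dt = (1 - e^{-λT_r})/λ, which reduces to 1/λ as T_r → ∞ and to ≈ T_r when λT_r ≪ 1. A sketch under that assumption (the paper's actual model may differ):

```python
from math import exp

def mean_life(lam, t_res):
    """Average service life for an exponential failure rate `lam`
    (per hour) when a resource limit truncates life at `t_res` hours:
    E[min(X, t_res)] = (1 - exp(-lam * t_res)) / lam."""
    return (1.0 - exp(-lam * t_res)) / lam
```

The truncated mean is always below the ideal 1/λ, so ignoring resource limits systematically overstates the average life span.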
Does subduction zone magmatism produce average continental crust
NASA Technical Reports Server (NTRS)
Ellam, R. M.; Hawkesworth, C. J.
1988-01-01
The question of whether present day subduction zone magmatism produces material of average continental crust composition, which perhaps most would agree is andesitic, is addressed. It was argued that modern andesitic to dacitic rocks in Andean-type settings are produced by plagioclase fractionation of mantle derived basalts, leaving a complementary residue with low Rb/Sr and a positive Eu anomaly. This residue must be removed, for example by delamination, if the average crust produced in these settings is andesitic. The author argued against this, pointing out the absence of evidence for such a signature in the mantle. Either the average crust is not andesitic, a conclusion the author was not entirely comfortable with, or other crust forming processes must be sought. One possibility is that during the Archean, direct slab melting of basaltic or eclogitic oceanic crust produced felsic melts, which together with about 65 percent mafic material, yielded an average crust of andesitic composition.
Ensemble vs. time averages in financial time series analysis
NASA Astrophysics Data System (ADS)
Seemann, Lars; Hua, Jia-Chen; McCauley, Joseph L.; Gunaratne, Gemunu H.
2012-12-01
Empirical analysis of financial time series suggests that the underlying stochastic dynamics are not only non-stationary, but also exhibit non-stationary increments. However, financial time series are commonly analyzed using the sliding-interval technique, which assumes stationary increments. We propose an alternative approach that is based on an ensemble over trading days. To determine the effects of time averaging techniques on analysis outcomes, we create an intraday activity model that exhibits periodic variable diffusion dynamics, and we assess the model data using both ensemble and time averaging techniques. We find that ensemble averaging techniques detect the underlying dynamics correctly, whereas sliding-interval approaches fail. As many traded assets exhibit characteristic intraday volatility patterns, our work implies that ensemble-average approaches will yield new insight into the study of financial markets’ dynamics.
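A toy version of the comparison described above, under an assumed sinusoidal intraday diffusivity (the model details here are illustrative, not the paper's): ensemble statistics taken across days at fixed time-of-day resolve the diffusivity pattern, while pooling all increments into one time average collapses it to a single number.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_days(n_days=2000, n_t=100):
    """Intraday model with deterministic time-of-day diffusivity:
    increment at time-of-day t is sigma(t) * Gaussian noise."""
    t = np.arange(n_t)
    sigma = 1.0 + 0.8 * np.sin(2 * np.pi * t / n_t)   # intraday pattern
    incs = rng.normal(size=(n_days, n_t)) * sigma
    return sigma, incs

sigma, incs = simulate_days()
ensemble_var = incs.var(axis=0)   # ensemble over days: recovers sigma(t)^2
pooled_var = incs.var()           # time/pooled average: pattern lost
```

`ensemble_var` tracks sigma(t)² point by point, whereas `pooled_var` only returns the day-averaged variance, which is how sliding-interval analysis misses the non-stationary increments.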
Total-pressure-tube averaging in pulsating flows.
NASA Technical Reports Server (NTRS)
Krause, L. N.
1973-01-01
A number of total-pressure tubes were tested in a nonsteady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. The tests were performed at a pressure level of 1 bar, for Mach numbers up to near 1, and frequencies up to 3 kHz. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonances which further increased the indicated pressure were encountered within the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure, and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles.
Modelling and designing digital control systems with averaged measurements
NASA Technical Reports Server (NTRS)
Polites, Michael E.; Beale, Guy O.
1988-01-01
An account is given of the control systems engineering methods applicable to the design of digital feedback controllers for aerospace deterministic systems in which the output, rather than being an instantaneous measure of the system at the sampling instants, instead represents an average measure of the system over the time interval between samples. The averaging effect can be included during the modeling of the plant, thereby obviating the iteration of design/simulation phases.
A precise measurement of the average b hadron lifetime
NASA Astrophysics Data System (ADS)
Buskulic, D.; de Bonis, I.; Casper, D.; Decamp, D.; Ghez, P.; Goy, C.; Lees, J.-P.; Lucotte, A.; Minard, M.-N.; Odier, P.; Pietrzyk, B.; Ariztizabal, F.; Chmeissani, M.; Crespo, J. M.; Efthymiopoulos, I.; Fernandez, E.; Fernandez-Bosman, M.; Gaitan, V.; Garrido, Ll.; Martinez, M.; Orteu, S.; Pacheco, A.; Padilla, C.; Palla, F.; Pascual, A.; Perlas, J. A.; Sanchez, F.; Teubert, F.; Colaleo, A.; Creanza, D.; de Palma, M.; Farilla, A.; Gelao, G.; Girone, M.; Iaselli, G.; Maggi, G.; Maggi, M.; Marinelli, N.; Natali, S.; Nuzzo, S.; Ranieri, A.; Raso, G.; Romano, F.; Ruggieri, F.; Selvaggi, G.; Silvestris, L.; Tempesta, P.; Zito, G.; Huang, X.; Lin, J.; Ouyang, Q.; Wang, T.; Xie, Y.; Xu, R.; Xue, S.; Zhang, J.; Zhang, L.; Zhao, W.; Bonvicini, G.; Cattaneo, M.; Comas, P.; Coyle, P.; Drevermann, H.; Forty, R. W.; Frank, M.; Hagelberg, R.; Harvey, J.; Jacobsen, R.; Janot, P.; Jost, B.; Knobloch, J.; Lehraus, I.; Markou, C.; Martin, E. B.; Mato, P.; Minten, A.; Miquel, R.; Oest, T.; Palazzi, P.; Pater, J. R.; Pusztaszeri, J.-F.; Ranjard, F.; Rensing, P.; Rolandi, L.; Schlatter, D.; Schmelling, M.; Schneider, O.; Tejessy, W.; Tomalin, I. R.; Venturi, A.; Wachsmuth, H.; Wiedenmann, W.; Wildish, T.; Witzeling, W.; Wotschack, J.; Ajaltouni, Z.; Bardadin-Otwinowska, M.; Barrès, A.; Boyer, C.; Falvard, A.; Gay, P.; Guicheney, C.; Henrard, P.; Jousset, J.; Michel, B.; Monteil, S.; Montret, J.-C.; Pallin, D.; Perret, P.; Podlyski, F.; Proriol, J.; Rossignol, J.-M.; Saadi, F.; Fearnley, T.; Hansen, J. B.; Hansen, J. D.; Hansen, J. R.; Hansen, P. H.; Nilsson, B. S.; Kyriakis, A.; Simopoulou, E.; Siotis, I.; Vayaki, A.; Zachariadou, K.; Blondel, A.; Bonneaud, G.; Brient, J. C.; Bourdon, P.; Passalacqua, L.; Rougé, A.; Rumpf, M.; Tanaka, R.; Valassi, A.; Verderi, M.; Videau, H.; Candlin, D. J.; Parsons, M. I.; Focardi, E.; Parrini, G.; Corden, M.; Delfino, M.; Georgiopoulos, C.; Jaffe, D. 
E.; Antonelli, A.; Bencivenni, G.; Bologna, G.; Bossi, F.; Campana, P.; Capon, G.; Chiarella, V.; Felici, G.; Laurelli, P.; Mannocchi, G.; Murtas, F.; Murtas, G. P.; Pepe-Altarelli, M.; Dorris, S. J.; Halley, A. W.; Ten Have, I.; Knowles, I. G.; Lynch, J. G.; Morton, W. T.; O'Shea, V.; Raine, C.; Reeves, P.; Scarr, J. M.; Smith, K.; Smith, M. G.; Thompson, A. S.; Thomson, F.; Thorn, S.; Turnbull, R. M.; Becker, U.; Braun, O.; Geweniger, C.; Graefe, G.; Hanke, P.; Hepp, V.; Kluge, E. E.; Putzer, A.; Rensch, B.; Schmidt, M.; Sommer, J.; Stenzel, H.; Tittel, K.; Werner, S.; Wunsch, M.; Abbaneo, D.; Beuselinck, R.; Binnie, D. M.; Cameron, W.; Colling, D. J.; Dornan, P. J.; Konstantinidis, N.; Moneta, L.; Moutoussi, A.; Nash, J.; San Martin, G.; Sedgbeer, J. K.; Stacey, A. M.; Dissertori, G.; Girtler, P.; Kneringer, E.; Kuhn, D.; Rudolph, G.; Bowdery, C. K.; Brodbeck, T. J.; Colrain, P.; Crawford, G.; Finch, A. J.; Foster, F.; Hughes, G.; Sloan, T.; Whelan, E. P.; Williams, M. I.; Galla, A.; Greene, A. M.; Kleinknecht, K.; Quast, G.; Raab, J.; Renk, B.; Sander, H.-G.; van Gemmeren, P.; Wanke, R.; Zeitnitz, C.; Aubert, J. J.; Bencheikh, A. M.; Benchouk, C.; Bonissent, A.; Bujosa, G.; Calvet, D.; Carr, J.; Diaconu, C.; Etienne, F.; Nicod, D.; Payre, P.; Rousseau, D.; Talby, M.; Thulasidas, M.; Abt, I.; Assmann, R.; Bauer, C.; Blum, W.; Brown, D.; Dietl, H.; Dydak, F.; Ganis, G.; Gotzhein, C.; Jakobs, K.; Kroha, H.; Lütjens, G.; Lutz, G.; Männer, W.; Moser, H.-G.; Richter, R.; Rosado-Schlosser, A.; Schael, S.; Settles, R.; Seywerd, H.; Stierlin, U.; Denis, R. St.; Wolf, G.; Alemany, R.; Boucrot, J.; Callot, O.; Cordier, A.; Courault, F.; Davier, M.; Duflot, L.; Grivaz, J.-F.; Heusse, Ph.; Jacquet, M.; Kim, D. W.; Le Diberder, F.; Lefrançois, J.; Lutz, A.-M.; Musolino, G.; Nikolic, I.; Park, H. J.; Park, I. C.; Schune, M.-H.; Simion, S.; Veillet, J.-J.; Videau, I.; Azzurri, P.; Bagliesi, G.; Batignani, G.; Bettarini, S.; Bozzi, C.; Calderini, G.; Carpinelli, M.; Ciocci, M. 
A.; Ciulli, V.; Dell'Orso, R.; Fantechi, R.; Ferrante, I.; Foà, L.; Forti, F.; Giassi, A.; Giorgi, M. A.; Gregorio, A.; Ligabue, F.; Lusiani, A.; Marrocchesi, P. S.; Messineo, A.; Rizzo, G.; Sanguinetti, G.; Sciabà, A.; Spagnolo, P.; Steinberger, J.; Tenchini, R.; Tonelli, G.; Triggiani, G.; Vannini, C.; Verdini, P. G.; Walsh, J.; Betteridge, A. P.; Blair, G. A.; Bryant, L. M.; Cerutti, F.; Gao, Y.; Green, M. G.; Johnson, D. L.; Medcalf, T.; Mir, Ll. M.; Perrodo, P.; Strong, J. A.; Bertin, V.; Botterill, D. R.; Clifft, R. W.; Edgecock, T. R.; Haywood, S.; Edwards, M.; Maley, P.; Norton, P. R.; Thompson, J. C.; Bloch-Devaux, B.; Colas, P.; Duarte, H.; Emery, S.; Kozanecki, W.; Lançon, E.; Lemaire, M. C.; Locci, E.; Marx, B.; Perez, P.; Rander, J.; Renardy, J.-F.; Rosowsky, A.; Roussarie, A.; Schuller, J.-P.; Schwindling, J.; Si Mohand, D.; Trabelsi, A.; Vallage, B.; Johnson, R. P.; Kim, H. Y.; Litke, A. M.; McNeil, M. A.; Taylor, G.; Beddall, A.; Booth, C. N.; Boswell, R.; Cartwright, S.; Combley, F.; Dawson, I.; Koksal, A.; Letho, M.; Newton, W. M.; Rankin, C.; Thompson, L. F.; Böhrer, A.; Brandt, S.; Cowan, G.; Feigl, E.; Grupen, C.; Lutters, G.; Minguet-Rodriguez, J.; Rivera, F.; Saraiva, P.; Smolik, L.; Stephan, F.; Apollonio, M.; Bosisio, L.; Della Marina, R.; Giannini, G.; Gobbo, B.; Ragusa, F.; Rothberg, J.; Wasserbaech, S.; Armstrong, S. R.; Bellantoni, L.; Elmer, P.; Feng, Z.; Ferguson, D. P. S.; Gao, Y. S.; González, S.; Grahl, J.; Harton, J. L.; Hayes, O. J.; Hu, H.; McNamara, P. A.; Nachtman, J. M.; Orejudos, W.; Pan, Y. B.; Saadi, Y.; Schmitt, M.; Scott, I. J.; Sharma, V.; Turk, J. D.; Walsh, A. M.; Sau Lan Wu; Wu, X.; Yamartino, J. M.; Zheng, M.; Zobernig, G.; Aleph Collaboration
1996-02-01
An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.
Updated measurement of the average b hadron lifetime
NASA Astrophysics Data System (ADS)
Buskulic, D.; Decamp, D.; Goy, C.; Lees, J.-P.; Minard, M.-N.; Mours, B.; Alemany, R.; Ariztizabal, F.; Comas, P.; Crespo, J. M.; Delfino, M.; Fernandez, E.; Gaitan, V.; Garrido, Ll.; Mattison, T.; Pacheco, A.; Pascual, A.; Creanza, D.; de Palma, M.; Farilla, A.; Iaselli, G.; Maggi, G.; Maggi, M.; Natali, S.; Nuzzo, S.; Quattromini, M.; Ranieri, A.; Raso, G.; Romano, F.; Ruggieri, F.; Selvaggi, G.; Silvestris, L.; Tempesta, P.; Zito, G.; Hu, H.; Huang, D.; Huang, X.; Lin, J.; Lou, J.; Qiao, C.; Wang, T.; Xie, Y.; Xu, D.; Xu, R.; Zhang, J.; Zhao, W.; Bauerdick, L. A. T.; Blucher, E.; Bonvicini, G.; Bossi, F.; Boudreau, J.; Casper, D.; Drevermann, H.; Forty, R. W.; Ganis, G.; Gay, C.; Hagelberg, R.; Harvey, J.; Haywood, S.; Hilgart, J.; Jacobsen, R.; Jost, B.; Knobloch, J.; Lançon, E.; Lehraus, I.; Lohse, T.; Lusiani, A.; Martinez, M.; Mato, P.; Meinhard, H.; Minten, A.; Miquel, R.; Moser, H.-G.; Palazzi, P.; Perlas, J. A.; Pusztaszeri, J.-F.; Ranjard, F.; Redlinger, G.; Rolandi, L.; Rothberg, J.; Ruan, T.; Saich, M.; Schlatter, D.; Schmelling, M.; Sefkow, F.; Tejessy, W.; Wachsmuth, H.; Wiedenmann, W.; Wildish, T.; Witzeling, W.; Wotschack, J.; Ajaltouni, Z.; Badaud, F.; Bardadin-Otwinowska, M.; Bencheikh, A. M.; El Fellous, R.; Falvard, A.; Gay, P.; Guicheney, C.; Henrad, P.; Jousset, J.; Michel, B.; Montret, J.-C.; Pallin, D.; Perret, P.; Pietrzyk, B.; Proriol, J.; Prulhière, F.; Stimpfl, G.; Fearnley, T.; Hansen, J. D.; Hansen, J. R.; Hansen, P. H.; Møllerud, R.; Nilsson, B. S.; Efthymiopoulos, I.; Kyriakis, A.; Simopoulou, E.; Vayaki, A.; Zachariadou, K.; Badier, J.; Blondel, A.; Bonneaud, G.; Brient, J. C.; Fouque, G.; Orteu, S.; Rosowsky, A.; Rougé, A.; Rumpf, M.; Tanaka, R.; Verderi, M.; Videau, H.; Candlin, D. J.; Parsons, M. 
I.; Veitch, E.; Moneta, L.; Parrini, G.; Corden, M.; Georgiopoulos, C.; Ikeda, M.; Lannutti, J.; Levinthal, D.; Mermikides, M.; Sawyer, L.; Wasserbaech, S.; Antonelli, A.; Baldini, R.; Bencivenni, G.; Bologna, G.; Campana, P.; Capon, G.; Cerutti, F.; Chiarella, V.; D'Ettorre-Piazzoli, B.; Felici, G.; Laurelli, P.; Mannocchi, G.; Murtas, F.; Murtas, G. P.; Passalacqua, L.; Pepe-Altarelli, M.; Picchi, P.; Altoon, B.; Boyle, O.; Colrain, P.; Ten Have, I.; Lynch, J. G.; Maitland, W.; Morton, W. T.; Raine, C.; Scarr, J. M.; Smith, K.; Thompson, A. S.; Turnbull, R. M.; Brandl, B.; Braun, O.; Geweniger, C.; Hanke, P.; Hepp, V.; Kluge, E. E.; Maumary, Y.; Putzer, A.; Rensch, B.; Stahl, A.; Tittel, K.; Wunsch, M.; Belk, A. T.; Beuselinck, R.; Binnie, D. M.; Cameron, W.; Cattaneo, M.; Colling, D. J.; Dornan, P. J.; Dugeay, S.; Greene, A. M.; Hassard, J. F.; Lieske, N. M.; Nash, J.; Patton, S. J.; Payne, D. G.; Phillips, M. J.; Sedgbeer, J. K.; Tomalin, I. R.; Wright, A. G.; Kneringer, E.; Kuhn, D.; Rudolph, G.; Bowdery, C. K.; Brodbeck, T. J.; Finch, A. J.; Foster, F.; Hughes, G.; Jackson, D.; Keemer, N. R.; Nuttall, M.; Patel, A.; Sloan, T.; Snow, S. W.; Whelan, E. P.; Kleinknecht, K.; Raab, J.; Renk, B.; Sander, H.-G.; Schmidt, H.; Steeg, F.; Walther, S. M.; Wolf, B.; Aubert, J.-J.; Benchouk, C.; Bonissent, A.; Carr, J.; Coyle, P.; Drinkard, J.; Etienne, F.; Papalexiou, S.; Payre, P.; Qian, Z.; Roos, L.; Rousseau, D.; Schwemling, P.; Talby, M.; Adlung, S.; Bauer, C.; Blum, W.; Brown, D.; Cattaneo, P.; Cowan, G.; Dehning, B.; Dietl, H.; Dydak, F.; Fernandez-Bosman, M.; Frank, M.; Halley, A. W.; Lauber, J.; Lütjens, G.; Lutz, G.; Männer, W.; Richter, R.; Rotscheidt, H.; Schröder, J.; Schwarz, A. S.; Settles, R.; Seywerd, H.; Stierlin, U.; Stiegler, U.; Denis, R. St.; Takashima, M.; Thomas, J.; Wolf, G.; Boucrot, J.; Callot, O.; Cordier, A.; Davier, M.; Grivaz, J.-F.; Heusse, Ph.; Jaffe, D. E.; Janot, P.; Kim, D. 
W.; Le Diberder, F.; Lefrançois, J.; Lutz, A.-M.; Schune, M.-H.; Veillet, J.-J.; Videau, I.; Zhang, Z.; Abbaneo, D.; Amendolia, S. R.; Bagliesi, G.; Batignani, G.; Bosisio, L.; Bottigli, U.; Bozzi, C.; Bradaschia, C.; Carpinelli, M.; Ciocci, M. A.; Dell'Orso, R.; Ferrante, I.; Fidecaro, F.; Foà, L.; Focardi, E.; Forti, F.; Giassi, A.; Giorgi, M. A.; Ligabue, F.; Mannelli, E. B.; Marrocchesi, P. S.; Messineo, A.; Palla, F.; Rizzo, G.; Sanguinetti, G.; Spagnolo, P.; Steinberger, J.; Tenchini, R.; Tonelli, G.; Triggiani, G.; Vannini, C.; Venturi, A.; Verdini, P. G.; Walsh, J.; Carter, J. M.; Green, M. G.; March, P. V.; Mir, Ll. M.; Medcalf, T.; Quazi, I. S.; Strong, J. A.; West, L. R.; Botterill, D. R.; Clifft, R. W.; Edgecock, T. R.; Edwards, M.; Fisher, S. M.; Jones, T. J.; Norton, P. R.; Salmon, D. P.; Thompson, J. C.; Bloch-Devaux, B.; Colas, P.; Duarte, H.; Kozanecki, W.; Lemaire, M. C.; Locci, E.; Loucatos, S.; Monnier, E.; Perez, P.; Perrier, F.; Rander, J.; Renardy, J.-F.; Roussarie, A.; Schuller, J.-P.; Schwindling, J.; Si Mohand, D.; Vallage, B.; Johnson, R. P.; Litke, A. M.; Taylor, G.; Wear, J.; Ashman, J. G.; Babbage, W.; Booth, C. N.; Buttar, C.; Carney, R. E.; Cartwright, S.; Combley, F.; Hatfield, F.; Reeves, P.; Thompson, L. F.; Barberio, E.; Böhrer, A.; Brandt, S.; Grupen, C.; Mirabito, L.; Rivera, F.; Schäfer, U.; Giannini, G.; Gobbo, B.; Ragusa, F.; Bellantoni, L.; Chen, W.; Cinabro, D.; Conway, J. S.; Cowen, D. F.; Feng, Z.; Ferguson, D. P. S.; Gao, Y. S.; Grahl, J.; Harton, J. L.; Jared, R. C.; Leclaire, B. W.; Lishka, C.; Pan, Y. B.; Pater, J. R.; Saadi, Y.; Sharma, V.; Schmitt, M.; Shi, Z. H.; Walsh, A. M.; Weber, F. V.; Whitney, M. H.; Sau Lan Wu; Wu, X.; Zobernig, G.; Aleph Collaboration
1992-11-01
An improved measurement of the average lifetime of b hadrons has been performed with the ALEPH detector. From a sample of 260 000 hadronic Z0 decays, recorded during the 1991 LEP run with the silicon vertex detector fully operational, a fit to the impact parameter distribution of lepton tracks coming from semileptonic decays yields an average b hadron lifetime of 1.49 ± 0.03 ± 0.06 ps.
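The ALEPH result comes from a fit to lepton impact parameters, which is well beyond a short sketch. As a much-simplified illustration of the underlying idea of extracting an average lifetime from a fit, the snippet below performs a maximum-likelihood lifetime estimate on simulated exponential decay times (all values illustrative; this is not the paper's impact-parameter method):

```python
import math
import random

random.seed(1)
TRUE_TAU = 1.49  # ps, illustrative value chosen to match the quoted result
times = [random.expovariate(1.0 / TRUE_TAU) for _ in range(10000)]

# For an exponential distribution the maximum-likelihood estimate of the
# lifetime is simply the sample mean of the decay times.
tau_hat = sum(times) / len(times)
# Statistical uncertainty of the MLE scales as tau / sqrt(N).
tau_err = tau_hat / math.sqrt(len(times))
print(f"tau = {tau_hat:.3f} +/- {tau_err:.3f} ps")
```

With 10 000 simulated decays the statistical uncertainty is about 0.015 ps, comparable in spirit to the quoted ±0.03 ps statistical error from the real fit.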
Characterization of mirror-based modulation-averaging structures.
Komljenovic, Tin; Babić, Dubravko; Sipus, Zvonimir
2013-05-10
Modulation-averaging reflectors have recently been proposed as a means for improving the link margin in self-seeded wavelength-division multiplexing in passive optical networks. In this work, we describe simple methods for determining key parameters of such structures and use them to predict their averaging efficiency. We characterize several reflectors built by arraying fiber-Bragg gratings along a segment of an optical fiber and show very good agreement between experiments and theoretical models. PMID:23669835
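The averaging efficiency of such reflectors can be illustrated numerically: summing many delayed replicas of a modulated bit stream suppresses the residual modulation roughly as the square root of the number of replicas. The toy model below (independent random bits and idealized delayed copies; not the paper's fiber-Bragg grating model) shows the effect:

```python
import random

random.seed(0)
N_BITS = 4096
bits = [random.choice([0.0, 1.0]) for _ in range(N_BITS)]

def residual_modulation(signal):
    """Standard deviation of the signal, a simple measure of leftover modulation."""
    mean = sum(signal) / len(signal)
    return (sum((s - mean) ** 2 for s in signal) / len(signal)) ** 0.5

def average_delayed_copies(signal, n_copies, delay):
    """Average n_copies of the signal, each shifted by a multiple of `delay`
    bits, mimicking reflections returned from an array of gratings."""
    out = []
    for i in range(len(signal) - n_copies * delay):
        out.append(sum(signal[i + k * delay] for k in range(n_copies)) / n_copies)
    return out

single = residual_modulation(bits)
averaged = residual_modulation(average_delayed_copies(bits, 16, 7))
print(single, averaged)  # averaging strongly suppresses the modulation
```

For 16 statistically independent replicas the residual modulation drops by about a factor of four relative to the unaveraged signal.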
Flavor Physics Data from the Heavy Flavor Averaging Group (HFAG)
The Heavy Flavor Averaging Group (HFAG) was established at the May 2002 Flavor Physics and CP Violation Conference in Philadelphia, and continues the LEP Heavy Flavor Steering Group's tradition of providing regular updates to the world averages of heavy flavor quantities. Data are provided by six subgroups, each focusing on a different set of heavy flavor measurements: B lifetimes and oscillation parameters, semileptonic B decays, rare B decays, unitarity triangle parameters, B decays to charm final states, and charm physics.
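The workhorse behind world averages of independent measurements is inverse-variance weighting. A minimal sketch (the measurement values are hypothetical, not HFAG numbers):

```python
def weighted_average(measurements):
    """Combine independent measurements (value, sigma) by inverse-variance
    weighting, the standard procedure behind world averages."""
    weights = [1.0 / s ** 2 for _, s in measurements]
    total = sum(weights)
    avg = sum(w * x for w, (x, _) in zip(weights, measurements)) / total
    err = total ** -0.5
    return avg, err

# Hypothetical lifetime measurements in ps (illustrative numbers only)
avg, err = weighted_average([(1.49, 0.07), (1.52, 0.05), (1.46, 0.09)])
print(f"{avg:.3f} +/- {err:.3f} ps")  # -> 1.501 +/- 0.037 ps
```

The combined uncertainty is always smaller than the best single measurement's, which is the point of maintaining such averages.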
Geodesic estimation for large deformation anatomical shape averaging and interpolation.
Avants, Brian; Gee, James C
2004-01-01
The goal of this research is to promote variational methods for anatomical averaging that operate within the space of the underlying image registration problem. This approach is effective when using the large deformation viscous framework, where linear averaging is not valid, or in the elastic case. The theory behind this novel atlas building algorithm is similar to the traditional pairwise registration problem, but with single image forces replaced by average forces. These group forces drive an average transport ordinary differential equation allowing one to estimate the geodesic that moves an image toward the mean shape configuration. This model gives large deformation atlases that are optimal with respect to the shape manifold as defined by the data and the image registration assumptions. We use the techniques in the large deformation context here, but they also pertain to small deformation atlas construction. Furthermore, a natural, inherently inverse consistent image registration is gained for free, as is a tool for constant arc length geodesic shape interpolation. The geodesic atlas creation algorithm is quantitatively compared to the Euclidean anatomical average to elucidate the need for optimized atlases. The procedures generate improved average representations of highly variable anatomy from distinct populations. PMID:15501083
Average Soil Water Retention Curves Measured by Neutron Radiography
Cheng, Chu-Lin; Perfect, Edmund; Kang, Misun; Voisin, Sophie; Bilheux, Hassina Z; Horita, Juske; Hussey, Dan
2011-01-01
Water retention curves are essential for understanding the hydrologic behavior of partially saturated porous media and for modeling flow and transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel-by-pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents, the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
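The per-pixel conversion from transmitted intensity to water content via Beer-Lambert's law, and the normalization to relative saturation, can be sketched as follows (the attenuation coefficient, thickness, and count values are assumed for illustration; the paper's beam-hardening and geometric corrections are omitted):

```python
import math

# Assumed illustrative values, not from the paper.
MU_W = 3.5    # cm^-1, effective neutron attenuation coefficient of water
THICK = 1.0   # cm, column thickness along the beam

def water_content(I, I_dry):
    """Per-pixel volumetric water content theta from Beer-Lambert's law:
    I = I_dry * exp(-MU_W * theta * THICK)."""
    return -math.log(I / I_dry) / (MU_W * THICK)

def relative_saturation(I, I_dry, I_sat):
    """Normalize by the fully saturated image; note the attenuation
    coefficient cancels in the ratio, which helps suppress scattering bias."""
    return water_content(I, I_dry) / water_content(I_sat, I_dry)

# One "pixel": dry transmission 1000 counts, current 800, saturated 700.
print(relative_saturation(800.0, 1000.0, 700.0))  # ~0.626
```

In practice this computation is applied to every pixel of the 2048 x 2048 radiographs and the results averaged to form one point on the retention curve per imposed matric potential.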
Exact Averaging of Stochastic Equations for Flow in Porous Media
Karasaki, Kenzi; Shvidler, Mark; Karasaki, Kenzi
2008-03-15
To date, exact averaging of the equations for flow and transport in random porous media has been achieved only for a limited class of special fields. Moreover, approximate averaging methods are not well studied (for example, the convergence behavior and accuracy of truncated perturbation series), and the calculation of high-order perturbations is very complicated. These problems have long motivated the question of whether exact and sufficiently general forms of averaged equations exist. Here, we present an approach for finding the general, exactly averaged system of basic equations for steady flow with sources in unbounded, stochastically homogeneous fields. We do this by using (1) the existence and some general properties of Green's functions for the appropriate stochastic problem, and (2) some information about the random conductivity field. This approach enables us to find the form of the averaged equations without directly solving the stochastic equations or invoking the usual small-parameter assumptions. For the common case of a stochastically homogeneous conductivity field, we present a new, exactly averaged, nonlocal basic equation with a unique kernel-vector. We show that, when some type of global symmetry holds (isotropy, transversal isotropy, or orthotropy), the exact averaged nonlocal equations with a unique kernel-tensor can be derived in the same way for both three-dimensional and two-dimensional flow. When global symmetry does not exist, the nonlocal equation with a kernel-tensor becomes more complicated and leads to an ill-posed problem.
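The structure of such a nonlocal averaged equation can be sketched schematically. As an illustration only (the symbols and the specific form below are assumed, not quoted from the paper), the exactly averaged counterpart of the local Darcy law replaces the pointwise conductivity by a convolution with a kernel determined by the conductivity statistics:

```latex
% Local Darcy law for one realization of the random conductivity k(x):
%   q_i(x) = -k(x)\, \partial h / \partial x_i
% A nonlocal averaged counterpart (schematic; kernel K_{ij} encodes the
% statistics of the conductivity field):
\langle q_i(x) \rangle \;=\; -\int_{\mathbb{R}^d} K_{ij}(x - x')\,
    \frac{\partial \langle h \rangle}{\partial x'_j}(x')\, \mathrm{d}x'
```

Under global symmetry the kernel tensor reduces to simpler forms (e.g. isotropic \(K_{ij} = K \,\delta_{ij}\)), which is why symmetry is what makes the averaged problem well posed.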
The average longitudinal air shower profile: exploring the shape information
NASA Astrophysics Data System (ADS)
Conceição, R.; Andringa, S.; Diogo, F.; Pimenta, M.
2015-08-01
The shape of the extensive air shower (EAS) longitudinal profile carries information about the nature of the primary cosmic ray. With current detection capabilities, however, assessing this quantity on an event-by-event basis remains very challenging. In this work we show that the average longitudinal profile can be used to characterise the average behaviour of high-energy cosmic rays. Using the concept of the universal shower profile, the shape of the average profile can be described in terms of two variables that can already be measured by current experiments. These variables are sensitive both to the average primary mass composition and to the hadronic interaction properties governing shower development. We demonstrate that the shape of the average muon production depth profile can be explored in the same way as the electromagnetic profile, with a higher discrimination power for state-of-the-art hadronic interaction models. Combining the shape variables of the two profiles provides a powerful new test of existing hadronic interaction models, and may also provide important hints about multi-particle production at the highest energies.
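In the universal-shower-profile literature, the longitudinal profile is commonly written as a Gaisser-Hillas function re-parameterized by a width-like variable L and an asymmetry-like variable R. The form below is a sketch of that parameterization (notation assumed here, not quoted from the abstract):

```latex
% Number of particles N as a function of slant depth X, with X' = X - X_max:
\frac{N(X)}{N_{\max}} \;=\;
  \left(1 + \frac{R\,(X - X_{\max})}{L}\right)^{1/R^{2}}
  \exp\!\left(-\frac{X - X_{\max}}{R\,L}\right)
```

Because L and R describe only the shape around the maximum, they can be extracted from averaged profiles even when individual events are too noisy, which is the strategy the abstract describes.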
Spectral Approach to Optimal Estimation of the Global Average Temperature.
NASA Astrophysics Data System (ADS)
Shen, Samuel S. P.; North, Gerald R.; Kim, Kwang-Y.
1994-12-01
Making use of EOF analysis and statistical optimal averaging techniques, the problem of random sampling error in estimating the global average temperature with a network of surface stations has been investigated. The EOF representation makes it unnecessary to use simplified empirical models of the correlation structure of temperature anomalies. If an adjustable weight is assigned to each station according to the criterion of minimum mean-square error, a formula for this error can be derived that consists of a sum of contributions from successive EOF modes. The EOFs were calculated from both observed data and a noise-forced energy balance model (EBM) for the problems of one-year and five-year averages. The mean-square statistical sampling error depends on the spatial distribution of the stations, the length of the averaging interval, and the choice of weight for each station data stream. Examples used here include four symmetric configurations of 4 × 4, 6 × 4, 9 × 7, and 20 × 10 stations and the Angell-Korshover configuration. Comparisons with the 100-yr U.K. dataset show that the time series of the global temperature anomaly average computed from this study's sparse configurations correlate strongly with that from the full dataset. For example, the 63-station Angell-Korshover network with uniform weighting explains 92.7% of the total variance, whereas the same network with optimal weighting explains 97.8% of the total variance of the U.K. dataset.
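The gain from minimum-mean-square-error weighting over uniform weighting can be demonstrated in a drastically simplified setting: each station observes the global anomaly plus independent noise of station-specific variance (no EOFs, no spatial correlation; all numbers assumed for illustration):

```python
import random

random.seed(42)

# Assumed station noise levels (deg C); independent-noise toy model only.
SIGMAS = [0.2, 0.5, 1.0, 1.0]

def mse(weights, n_trials=20000):
    """Monte-Carlo mean-square error of the weighted estimate of the
    true global anomaly T."""
    err2 = 0.0
    for _ in range(n_trials):
        T = random.gauss(0.0, 1.0)                     # true global anomaly
        obs = [T + random.gauss(0.0, s) for s in SIGMAS]
        est = sum(w * x for w, x in zip(weights, obs))
        err2 += (est - T) ** 2
    return err2 / n_trials

uniform = [1.0 / len(SIGMAS)] * len(SIGMAS)
inv_var = [1.0 / s ** 2 for s in SIGMAS]
optimal = [w / sum(inv_var) for w in inv_var]  # minimum-MSE weights in this setting

print(mse(uniform), mse(optimal))  # optimal weighting gives the smaller error
```

Analytically, the uniform-weight error here is about 0.143 while the optimal-weight error is 1/31 ≈ 0.032; the paper's optimal averaging plays the same role, but with the full spatial covariance supplied by the EOF modes.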