Science.gov

Sample records for 8-hr time-weighted average

  1. Time-weighted average SPME analysis for in planta determination of cVOCs.

    PubMed

    Sheehan, Emily M; Limmer, Matt A; Mayer, Philipp; Karlson, Ulrich Gosewinkel; Burken, Joel G

    2012-03-20

    The potential of phytoscreening for plume delineation at contaminated sites has promoted interest in innovative, sensitive contaminant sampling techniques. Solid-phase microextraction (SPME) methods have been developed, offering quick, undemanding, noninvasive sampling without the use of solvents. In this study, time-weighted average SPME (TWA-SPME) sampling was evaluated for in planta quantification of chlorinated solvents. TWA-SPME was found to have increased sensitivity over headspace and equilibrium SPME sampling. Using a variety of chlorinated solvents and a polydimethylsiloxane/carboxen (PDMS/CAR) SPME fiber, most compounds exhibited near linear or linear uptake over the sampling period. Smaller, less hydrophobic compounds exhibited more nonlinearity than larger, more hydrophobic molecules. Using a specifically designed in planta sampler, field sampling was conducted at a site contaminated with chlorinated solvents. Sampling with TWA-SPME produced instrument responses ranging from 5 to over 200 times higher than headspace tree core sampling. This work demonstrates that TWA-SPME can be used for in planta detection of a broad range of chlorinated solvents and methods can likely be applied to other volatile and semivolatile organic compounds. PMID:22332592

  2. Occupational dimethylformamide exposure. 1. Diffusive sampling of dimethylformamide vapor for determination of time-weighted average concentration in air.

    PubMed

    Yasugi, T; Kawai, T; Mizunuma, K; Horiguchi, S; Iguchi, H; Ikeda, M

    1992-01-01

    A diffusive sampling method with water as the absorbent was examined in comparison with 3 conventional methods: diffusive sampling with carbon cloth as the absorbent, pumping through National Institute for Occupational Safety and Health (NIOSH) charcoal tubes, and pumping through NIOSH silica gel tubes, to measure time-weighted average concentrations of dimethylformamide (DMF). DMF vapors of constant concentrations at 3-110 ppm were generated by bubbling air at constant velocities through liquid DMF, followed by dilution with fresh air. Both types of diffusive samplers could either absorb or adsorb DMF in proportion to time (0.25-8 h) and concentration (3-58 ppm), except that the DMF adsorbed was below the measurable amount when carbon cloth samplers were exposed at 3 ppm for less than 1 h. When both diffusive samplers were loaded with DMF and kept in fresh air, the DMF in water samplers stayed unchanged for at least 12 h, whereas the DMF in carbon cloth samplers decayed with a half-time of 14.3 h. When the carbon cloth was taken out immediately after termination of DMF exposure, wrapped in aluminum foil, and kept refrigerated, however, there was no measurable decrease in DMF for at least 3 weeks. When air was drawn at 0.2 l/min, breakthrough of the silica gel tube took place at about 4,000 ppm·min (as the lower 95% confidence limit), whereas charcoal tubes tolerated even heavier exposures, suggesting that both tubes are suitable for measuring the 8-h time-weighted average of DMF at 10 ppm. PMID:1577523

  3. Analysis of trace contaminants in hot gas streams using time-weighted average solid-phase microextraction: proof of concept.

    PubMed

    Woolcock, Patrick J; Koziel, Jacek A; Cai, Lingshuang; Johnston, Patrick A; Brown, Robert C

    2013-03-15

    Time-weighted average (TWA) passive sampling using solid-phase microextraction (SPME) and gas chromatography was investigated as a new method of collecting, identifying and quantifying contaminants in process gas streams. Unlike previous TWA-SPME techniques using the retracted fiber configuration (fiber within needle) to monitor ambient conditions or relatively stagnant gases, this method was developed for fast-moving process gas streams at temperatures approaching 300 °C. The goal was to develop a consistent and reliable method of analyzing low concentrations of contaminants in hot gas streams without performing time-consuming exhaustive extraction with a slipstream. This work in particular aims to quantify trace tar compounds found in a syngas stream generated from biomass gasification. This paper evaluates the concept of retracted SPME at high temperatures by testing the three essential requirements for TWA passive sampling: (1) zero-sink assumption, (2) consistent and reliable response by the sampling device to changing concentrations, and (3) equal concentrations in the bulk gas stream relative to the face of the fiber syringe opening. Results indicated the method can accurately predict gas stream concentrations at elevated temperatures. Evidence was also discovered to validate the existence of a second boundary layer within the fiber during the adsorption/absorption process. This limits the technique to operating within reasonable mass loadings and loading rates, established by appropriate sampling depths and times for concentrations of interest. A limit of quantification for the benzene model tar system was estimated at 0.02 g m⁻³ (8 ppm) with a limit of detection of 0.5 mg m⁻³ (200 ppb). Using the appropriate conditions, the technique was applied to a pilot-scale fluidized-bed gasifier to verify its feasibility. Results from this test were in good agreement with literature and prior pilot plant operation, indicating the new method can measure low concentrations of contaminants in hot process gas streams.
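
    The ppm figures quoted alongside the mass-based limits follow from the ideal-gas conversion between mass concentration and volume mixing ratio. A minimal sketch, assuming benzene's molar mass and a 25 °C, 1 atm reference state (the abstract does not state its reference conditions):

```python
# Ideal-gas conversion between mass concentration (mg/m^3) and volume
# mixing ratio (ppmv): ppm = C * R * T / (P * M) * 1000, with C in
# mg/m^3 and M in g/mol. Reference T and P are assumptions.

R = 8.314462  # J/(mol*K)

def mg_m3_to_ppm(c_mg_m3: float, molar_mass_g_mol: float,
                 temp_k: float = 298.15, pressure_pa: float = 101325.0) -> float:
    return c_mg_m3 * R * temp_k / (pressure_pa * molar_mass_g_mol) * 1000.0

# Benzene (M = 78.11 g/mol) at 25 degC: 0.5 mg/m^3 is ~0.157 ppmv.
print(mg_m3_to_ppm(0.5, 78.11))
```

    Note that the converted value scales with the assumed temperature; at gasifier temperatures the molar volume, and hence the ppm equivalent of a given mass concentration, is larger.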

  4. Time-weighted average sampling of airborne propylene glycol ethers by a solid-phase microextraction device.

    PubMed

    Shih, H C; Tsai, S W; Kuo, C H

    2012-01-01

    A solid-phase microextraction (SPME) device was used as a diffusive sampler for airborne propylene glycol ethers (PGEs), including propylene glycol monomethyl ether (PGME), propylene glycol monomethyl ether acetate (PGMEA), and dipropylene glycol monomethyl ether (DPGME). A Carboxen-polydimethylsiloxane (CAR/PDMS) SPME fiber was selected for this study. Polytetrafluoroethylene (PTFE) tubing was used as the holder, and the SPME fiber assembly was inserted into the tubing as a diffusive sampler. The diffusion path length and area of the sampler were 0.3 cm and 0.00086 cm², respectively. The theoretical sampling constants at 30°C and 1 atm for PGME, PGMEA, and DPGME were 1.50 × 10⁻², 1.23 × 10⁻² and 1.14 × 10⁻² cm³ min⁻¹, respectively. For evaluations, known concentrations of PGEs around the threshold limit values/time-weighted average with specific relative humidities (10% and 80%) were generated both by the air bag method and the dynamic generation system, while 15, 30, 60, 120, and 240 min were selected as the time periods for vapor exposures. Comparisons of the SPME diffusive sampling method to Occupational Safety and Health Administration (OSHA) organic Method 99 were performed side-by-side in an exposure chamber at 30°C for PGME. A gas chromatography/flame ionization detector (GC/FID) was used for sample analysis. The experimental sampling constants of the sampler at 30°C were (6.93 ± 0.12) × 10⁻¹, (4.72 ± 0.03) × 10⁻¹, and (3.29 ± 0.20) × 10⁻¹ cm³ min⁻¹ for PGME, PGMEA, and DPGME, respectively. The adsorption of chemicals on the stainless steel needle of the SPME fiber was suspected to be one of the reasons why significant differences between theoretical and experimental sampling rates were observed. Correlations between the results for PGME from both the SPME device and OSHA organic Method 99 were linear (r = 0.9984) and consistent (slope = 0.97 ± 0.03). Face velocity (0-0.18 m/s) also proved to have no effect on the sampler.
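
    The theoretical sampling constants above come from Fick's first law for a diffusive sampler, SR = D·A/L, with A = 0.00086 cm² and L = 0.3 cm as quoted. A minimal sketch; the diffusion coefficient used below is a back-calculated assumption, not a value given in the abstract:

```python
# Fick's-law sampling constant for a diffusive sampler: SR = D * A / L.
# A (0.00086 cm^2) and L (0.3 cm) are the values quoted in the abstract;
# the diffusion coefficient D below is a back-calculated assumption.

def sampling_constant(d_cm2_min: float, area_cm2: float, path_cm: float) -> float:
    """Theoretical sampling rate, cm^3/min."""
    return d_cm2_min * area_cm2 / path_cm

def twa_concentration(mass_ng: float, sr_cm3_min: float, minutes: float) -> float:
    """TWA concentration (ng/cm^3) from the mass collected over a sampling period."""
    return mass_ng / (sr_cm3_min * minutes)

# PGME with D ~ 5.2 cm^2/min (assumed) reproduces the order of the
# quoted theoretical constant, 1.50e-2 cm^3/min:
print(sampling_constant(5.2, 0.00086, 0.3))
```

    With the experimentally calibrated constant substituted for the theoretical one, twa_concentration gives the field result directly from collected mass and exposure time.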

  5. Quantification of benzene, toluene, ethylbenzene and o-xylene in internal combustion engine exhaust with time-weighted average solid phase microextraction and gas chromatography mass spectrometry.

    PubMed

    Baimatova, Nassiba; Koziel, Jacek A; Kenessov, Bulat

    2015-05-11

    A new and simple method for benzene, toluene, ethylbenzene and o-xylene (BTEX) quantification in vehicle exhaust was developed based on diffusion-controlled extraction onto a retracted solid-phase microextraction (SPME) fiber coating. The rationale was to develop a method based on existing and proven SPME technology that is feasible for field adaptation in developing countries. Passive sampling with the SPME fiber retracted into the needle extracted nearly two orders of magnitude less mass (n) compared with the exposed fiber (outside of the needle), and sampling was in a time-weighted averaging (TWA) mode. Both the sampling time (t) and fiber retraction depth (Z) were adjusted to quantify a wider range of gas concentrations (Cgas). Extraction and quantification were conducted in a non-equilibrium mode. Effects of Cgas, t, Z and temperature (T) were tested. In addition, the contribution to n of extraction by the metallic surfaces of the needle assembly without the SPME coating was studied, as were the effects of sample storage time on loss of n. Retracted TWA-SPME extractions followed the theoretical model: the extracted n of BTEX was proportional to Cgas, t, the gas-phase diffusion coefficient (Dg) and T, and inversely proportional to Z. Method detection limits were 1.8, 2.7, 2.1 and 5.2 mg m⁻³ (0.51, 0.83, 0.66 and 1.62 ppm) for BTEX, respectively. The contribution of extraction onto metallic surfaces was reproducible and influenced by Cgas and t, and less so by T and Z. The new method was applied to measure BTEX in the exhaust gas of a 1995 Ford Crown Victoria and compared with a whole-gas direct-injection method. PMID:25911428
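
    The proportionalities above amount to Fick's first law across the gas gap inside the needle, n = Dg·A·Cgas·t/Z, so the TWA concentration can be backed out from the extracted mass. A sketch under that assumption; the needle cross-sectional area A below is hypothetical, not a value from the abstract:

```python
# Retracted-fiber TWA-SPME model stated in the abstract:
# n = Dg * A * Cgas * t / Z (Fick's first law over the gas gap of
# depth Z inside the needle). The cross-sectional area A is a
# hypothetical illustration value.

def extracted_mass(c_gas, d_g, area, t_min, z_cm):
    """Mass extracted by the retracted coating over sampling time t."""
    return d_g * area * c_gas * t_min / z_cm

def c_gas_from_mass(n, d_g, area, t_min, z_cm):
    """Invert the model: TWA gas concentration from extracted mass."""
    return n * z_cm / (d_g * area * t_min)
```

    Doubling the retraction depth Z halves the extracted mass, which is how adjusting Z (alongside t) extends the method's working range to higher Cgas.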

  6. [Evaluation of +Gz tolerance following simulation of 8-hr flight].

    PubMed

    Khomenko, M N; Bukhtiiarov, I V; Malashchuk, L S

    2005-01-01

    Tolerance of +Gz (head-to-pelvis) acceleration during centrifugation was evaluated in pilots following simulation of a long flight in a single-seat fighter. The experiment involved 5 test subjects who were exposed to +Gz before and after a simulated 8-hr flight, with an onset rate of 0.1 G/s, without anti-g suits and with muscles relaxed; in addition, limiting tolerance of intricate-profile +Gz loads of 2.0 to 9.0 G with an onset rate of 1.0 G/s was measured with the test subjects wearing anti-g suits (AGS) with a change-over pressure valve in the peak mode and using muscle straining and breathing maneuvers. To counteract the negative effects of extended flight, various seat configurations were tested: a back inclination of 30 degrees to the +Gz vector, and a changeable-geometry seat with a back inclination of 55 degrees to the vector. The other countermeasures applied were a cool air shower, suit ventilation, physical exercises, lower body massage with the AGS, electrostimulation of the back and lumbar region, profiling of the supporting and soft parts of the seat, and 30-s exposures to +5 Gz. Hemodynamic and respiration parameters as well as body temperature were measured over the course of the 8-hr flight and during and shortly after centrifugation. According to the results of the investigation, seat inclination at 55 degrees to the +Gz vector and the tested system of countermeasures prevented degradation of tolerance of large (9 G) loads following the 8-hr flight simulation with the use of modern anti-g gear. PMID:16353624

  7. Understanding the effectiveness of precursor reductions in lowering 8-hr ozone concentrations--Part II. The eastern United States.

    PubMed

    Reynolds, Steven D; Blanchard, Charles L; Ziman, Stephen D

    2004-11-01

    Analyses of ozone (O3) measurements in conjunction with photochemical modeling were used to assess the feasibility of attaining the federal 8-hr O3 standard in the eastern United States. Various combinations of volatile organic compound (VOC) and oxides of nitrogen (NOx) emission reductions were effective in lowering modeled peak 1-hr O3 concentrations. VOC emissions reductions alone had only a modest impact on modeled peak 8-hr O3 concentrations. Anthropogenic NOx emissions reductions of 46-86% of 1996 base case values were needed to reach the level of the 8-hr standard in some areas. As NOx emissions are reduced, O3 production efficiency increases, which accounts for the less than proportional response of calculated 8-hr O3 levels. Such increases in O3 production efficiency also were noted in previous modeling work for central California. O3 production in some urban core areas, such as New York City and Chicago, IL, was found to be VOC-limited. In these areas, moderate NOx emissions reductions may be accompanied by increases in peak 8-hr O3 levels. The findings help to explain differences in historical trends in 1- and 8-hr O3 levels and have serious implications for the feasibility of attaining the 8-hr O3 standard in several areas of the eastern United States. PMID:15587557

  8. A ∼ 3.8 hr PERIODICITY FROM AN ULTRASOFT ACTIVE GALACTIC NUCLEUS CANDIDATE

    SciTech Connect

    Lin, Dacheng; Irwin, Jimmy A.; Godet, Olivier; Webb, Natalie A.; Barret, Didier

    2013-10-10

    Very few galactic nuclei are found to show significant X-ray quasi-periodic oscillations (QPOs). After carefully modeling the noise continuum, we find that the ∼3.8 hr QPO in the ultrasoft active galactic nucleus candidate 2XMM J123103.2+110648 was significantly detected (∼5σ) in two XMM-Newton observations in 2005, but not in the one in 2003. The QPO root mean square (rms) is very high and increases from ∼25% in 0.2-0.5 keV to ∼50% in 1-2 keV. The QPO probably corresponds to the low-frequency type in Galactic black hole X-ray binaries, considering its large rms and the probably low mass (∼10^5 M☉) of the black hole in the nucleus. We also fit the soft X-ray spectra from the three XMM-Newton observations and find that they can be described with either pure thermal disk emission or optically thick low-temperature Comptonization. We see no clear X-ray emission from the two Swift observations in 2013, indicating lower source fluxes than those in the XMM-Newton observations.

  9. Exposure Assessment for Carbon Dioxide Gas: Full Shift Average and Short-Term Measurement Approaches.

    PubMed

    Hill, R Jedd; Smith, Philip A

    2015-01-01

    Carbon dioxide (CO2) makes up a relatively small percentage of atmospheric gases, yet when used or produced in large quantities as a gas, a liquid, or a solid (dry ice), substantial airborne exposures may occur. Exposure to elevated CO2 concentrations may elicit toxicity, even with oxygen concentrations that are not considered dangerous per se. Full-shift sampling approaches to measure 8-hr time-weighted average (TWA) CO2 exposures are used in many facilities where CO2 gas may be present. The need to assess rapidly fluctuating CO2 levels that may approach immediately dangerous to life or health (IDLH) conditions should also be a concern, and several methods for doing so using fast-responding measurement tools are discussed in this paper. Colorimetric detector tubes, a non-dispersive infrared (NDIR) detector, and a portable Fourier transform infrared (FTIR) spectroscopy instrument were evaluated in a laboratory environment using a flow-through standard generation system and were found to provide suitable accuracy and precision for assessing rapid fluctuations in CO2 concentration, with a possible effect related to humidity noted only for the detector tubes. These tools were used in the field to select locations and times for grab sampling and personal full-shift sampling, which provided laboratory analysis data to confirm IDLH conditions and 8-hr TWA exposure information. Fluctuating CO2 exposures are exemplified through field work results from several workplaces. In a brewery, brief CO2 exposures above the IDLH value occurred when large volumes of CO2-containing liquid were released for disposal, but 8-hr TWA exposures were not found to exceed the permissible level. In a frozen food production facility, nearly constant exposure to CO2 concentrations above the permissible 8-hr TWA value was seen, as well as brief exposures above the IDLH concentration, which were associated with specific tasks where liquid CO2 was used. In a poultry processing facility the use of dry
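
    The full-shift averaging referred to above is the standard 8-hr TWA: the duration-weighted mean of the measured concentrations over the shift. A minimal sketch with illustrative numbers (not the study's data):

```python
# 8-hr TWA from a sequence of (concentration, duration) measurements:
# TWA = sum(Ci * ti) / 8. The exposure profile below is illustrative.

def twa_8hr(segments):
    """segments: iterable of (concentration_ppm, hours). Returns the 8-hr TWA."""
    return sum(c * t for c, t in segments) / 8.0

# e.g. 2 h at 15000 ppm CO2 plus 6 h at 3000 ppm -> 6000 ppm TWA,
# above the 5000 ppm OSHA permissible limit for CO2.
print(twa_8hr([(15000, 2), (3000, 6)]))  # 6000.0
```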

  10. Measurement and analysis of 8-hour time-weighted average sound pressure levels in a vivarium decontamination facility.

    PubMed

    Pate, William; Charlton, Michael; Wellington, Carl

    2013-01-01

    Occupational noise exposure is a recognized hazard for employees working near equipment and processes that generate high levels of sound pressure. High sound pressure levels have the potential to result in temporary or permanent alteration in hearing perception. The cleaning of cages used to house laboratory research animals is a process that uses equipment capable of generating high sound pressure levels. The purpose of this research study was to assess occupational exposure to sound pressure levels for employees operating cage decontamination equipment. This study reveals the potential for overexposure to hazardous noise as defined by the Occupational Safety and Health Administration (OSHA) permissible exposure limit and consistent surpassing of the OSHA action level. These results emphasize the importance of evaluating equipment and room design when acquiring new cage decontamination equipment in order to minimize employee exposure to potentially hazardous sound pressure levels. PMID:23566325
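
    The OSHA quantities referenced above are defined in 29 CFR 1910.95: the permissible duration at level L dBA is T = 8 / 2^((L − 90)/5), the noise dose is D = 100·Σ(Ci/Ti), and the 8-hr TWA sound level is TWA = 16.61·log10(D/100) + 90. A sketch with an illustrative exposure profile (not the study's measurements):

```python
import math

# OSHA noise dose and 8-hr TWA sound level (29 CFR 1910.95).
# A dose above 100% exceeds the permissible exposure limit (90 dBA TWA);
# the action level corresponds to a 50% dose (85 dBA TWA).

def permissible_hours(level_dba: float) -> float:
    return 8.0 / 2.0 ** ((level_dba - 90.0) / 5.0)

def noise_dose(segments) -> float:
    """segments: iterable of (level_dBA, hours). Returns dose in percent."""
    return 100.0 * sum(hours / permissible_hours(level) for level, hours in segments)

def twa_dba(dose_percent: float) -> float:
    return 16.61 * math.log10(dose_percent / 100.0) + 90.0

# 4 h at 95 dBA (T = 4 h) plus 4 h at 85 dBA (T = 16 h):
d = noise_dose([(95, 4), (85, 4)])
print(d, twa_dba(d))  # 125.0 %, ~91.6 dBA
```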

  11. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Another approach, focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, computes the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
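
    For scalar weights, the optimal average described above reduces to a well-known construction: the average quaternion is the eigenvector, associated with the largest eigenvalue, of the weighted sum of outer products of the input quaternions. A minimal NumPy sketch of that construction:

```python
import numpy as np

# Average quaternion as the dominant eigenvector of the accumulator
# M = sum_i w_i * q_i q_i^T. Because q and -q represent the same
# rotation, M is sign-invariant, which a naive component-wise mean
# is not.

def average_quaternion(quats, weights=None):
    """quats: (n, 4) array-like of unit quaternions; returns a unit quaternion."""
    q = np.asarray(quats, dtype=float)
    w = np.ones(len(q)) if weights is None else np.asarray(weights, dtype=float)
    m = (w[:, None] * q).T @ q       # 4x4 symmetric accumulator matrix
    vals, vecs = np.linalg.eigh(m)   # eigenvalues in ascending order
    return vecs[:, -1]               # eigenvector of the largest eigenvalue
```

    Mixing q and -q samples of the same attitude still recovers that attitude (up to an overall sign), which is the behavior the Note's cost function demands.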

  12. Development of accumulated heat stress index based on time-weighted function

    NASA Astrophysics Data System (ADS)

    Lee, Ji-Sun; Byun, Hi-Ryong; Kim, Do-Woo

    2016-05-01

    Heat stress accumulates in the human body when a person is exposed to a thermal condition for a long time. Considering this fact, we have defined the accumulated heat stress (AH) and have developed the accumulated heat stress index (AHI) to quantify the strength of heat stress. AH represents the heat stress accumulated in a 72-h period calculated by the use of a time-weighted function, and the AHI is a standardized index developed by the use of an equiprobability transformation (from a fitted Weibull distribution to the standard normal distribution). To verify the advantage offered by the AHI, it was compared with four thermal indices used by national governments: the humidex, the heat index, the wet-bulb globe temperature, and the perceived temperature. AH and the AHI were found to provide better detection of thermal danger and were more useful than the other indices. In particular, AH and the AHI detect deaths that were caused not only by extremely hot and humid weather, but also by the persistence of moderately hot and humid weather (for example, consecutive daily maximum temperatures of 28-32 °C), which the other indices fail to detect.
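
    The two steps described above, time-weighted accumulation over 72 h followed by an equiprobability transform, can be sketched as follows. The linear-decay weights and the Weibull parameters are illustrative assumptions; the paper's actual weight function and fitted parameters are not given here:

```python
import math
from statistics import NormalDist

# Step 1: accumulate heat stress over the trailing 72 h with a
# time-decaying weight (linear decay here, newest hour weighted most).
# Step 2: equiprobability transform of AH through a fitted Weibull CDF
# into a standard normal score (the AHI).

def accumulated_heat(hourly_values, window=72):
    """Time-weighted sum over the last `window` hourly thermal values."""
    recent = hourly_values[-window:]
    n = len(recent)
    return sum(v * (k + 1) / n for k, v in enumerate(recent))

def ahi(ah_value, weib_shape, weib_scale):
    """AHI = Phi^-1(F_Weibull(AH)); Weibull parameters are illustrative."""
    cdf = 1.0 - math.exp(-((ah_value / weib_scale) ** weib_shape))
    return NormalDist().inv_cdf(min(max(cdf, 1e-12), 1.0 - 1e-12))
```

    By construction, an AH at the median of the fitted Weibull maps to an AHI of 0, and persistent moderate heat raises AH (and hence the AHI) even when no single hour is extreme.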

  13. Development of accumulated heat stress index based on time-weighted function

    NASA Astrophysics Data System (ADS)

    Lee, Ji-Sun; Byun, Hi-Ryong; Kim, Do-Woo

    2015-04-01

    Heat stress accumulates in the human body when a person is exposed to a thermal condition for a long time. Considering this fact, we have defined the accumulated heat stress (AH) and have developed the accumulated heat stress index (AHI) to quantify the strength of heat stress. AH represents the heat stress accumulated in a 72-h period calculated by the use of a time-weighted function, and the AHI is a standardized index developed by the use of an equiprobability transformation (from a fitted Weibull distribution to the standard normal distribution). To verify the advantage offered by the AHI, it was compared with four thermal indices used by national governments: the humidex, the heat index, the wet-bulb globe temperature, and the perceived temperature. AH and the AHI were found to provide better detection of thermal danger and were more useful than the other indices. In particular, AH and the AHI detect deaths that were caused not only by extremely hot and humid weather, but also by the persistence of moderately hot and humid weather (for example, consecutive daily maximum temperatures of 28-32 °C), which the other indices fail to detect.

  14. Neutron resonance averaging

    SciTech Connect

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.

  15. Intra- and inter-basin mercury comparisons: Importance of basin scale and time-weighted methylmercury estimates.

    PubMed

    Bradley, Paul M; Journey, Celeste A; Brigham, Mark E; Burns, Douglas A; Button, Daniel T; Riva-Murray, Karen

    2013-01-01

    To assess inter-comparability of fluvial mercury (Hg) observations at substantially different scales, Hg concentrations, yields, and bivariate-relations were evaluated at nested-basin locations in the Edisto River, South Carolina and Hudson River, New York. Differences between scales were observed for filtered methylmercury (FMeHg) in the Edisto (attributed to wetland coverage differences) but not in the Hudson. Total mercury (THg) concentrations and bivariate-relationships did not vary substantially with scale in either basin. Combining results of this and a previously published multi-basin study, fish Hg correlated strongly with sampled water FMeHg concentration (ρ = 0.78; p = 0.003) and annual FMeHg basin yield (ρ = 0.66; p = 0.026). Improved correlation (ρ = 0.88; p < 0.0001) was achieved with time-weighted mean annual FMeHg concentrations estimated from basin-specific LOADEST models and daily streamflow. Results suggest reasonable scalability and inter-comparability for different basin sizes if wetland area or related MeHg-source-area metrics are considered. PMID:22982552

  16. Intra- and inter-basin mercury comparisons: Importance of basin scale and time-weighted methylmercury estimates

    USGS Publications Warehouse

    Bradley, Paul M.; Journey, Celeste A.; Brigham, Mark E.; Burns, Douglas A.; Button, Daniel T.; Riva-Murray, Karen

    2013-01-01

    To assess inter-comparability of fluvial mercury (Hg) observations at substantially different scales, Hg concentrations, yields, and bivariate-relations were evaluated at nested-basin locations in the Edisto River, South Carolina and Hudson River, New York. Differences between scales were observed for filtered methylmercury (FMeHg) in the Edisto (attributed to wetland coverage differences) but not in the Hudson. Total mercury (THg) concentrations and bivariate-relationships did not vary substantially with scale in either basin. Combining results of this and a previously published multi-basin study, fish Hg correlated strongly with sampled water FMeHg concentration (ρ = 0.78; p = 0.003) and annual FMeHg basin yield (ρ = 0.66; p = 0.026). Improved correlation (ρ = 0.88; p < 0.0001) was achieved with time-weighted mean annual FMeHg concentrations estimated from basin-specific LOADEST models and daily streamflow. Results suggest reasonable scalability and inter-comparability for different basin sizes if wetland area or related MeHg-source-area metrics are considered.

  17. On the Berdichevsky average

    NASA Astrophysics Data System (ADS)

    Rung-Arunwan, Tawat; Siripunvaraporn, Weerachai; Utada, Hisashi

    2016-04-01

    Through a large number of magnetotelluric (MT) observations conducted in a study area, one can obtain regional one-dimensional (1-D) features of the subsurface electrical conductivity structure simply by taking the geometric average of determinant invariants of observed impedances. This method was proposed by Berdichevsky and coworkers, which is based on the expectation that distortion effects due to near-surface electrical heterogeneities will be statistically smoothed out. A good estimation of a regional mean 1-D model is useful, especially in recent years, to be used as a priori (or a starting) model in 3-D inversion. However, the original theory was derived before the establishment of the present knowledge on galvanic distortion. This paper, therefore, reexamines the meaning of the Berdichevsky average by using the conventional formulation of galvanic distortion. A simple derivation shows that the determinant invariant of distorted impedance and its Berdichevsky average is always downward biased by the distortion parameters of shear and splitting. This means that the regional mean 1-D model obtained from the Berdichevsky average tends to be more conductive. As an alternative rotational invariant, the sum of the squared elements (ssq) invariant is found to be less affected by bias from distortion parameters; thus, we conclude that its geometric average would be more suitable for estimating the regional structure. We find that the combination of determinant and ssq invariants provides parameters useful in dealing with a set of distorted MT impedances.
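
    The invariants discussed above can be sketched directly. The definitions below use common conventions (determinant invariant as the square root of det Z; ssq with a 1/2 normalization so both reduce to Zxy for an ideal 1-D tensor); treat them as assumptions rather than the paper's exact notation:

```python
import cmath

# Rotational invariants of a 2x2 complex MT impedance tensor
# Z = [[Zxx, Zxy], [Zyx, Zyy]], and the geometric (Berdichevsky-style)
# average over sites. For an ideal 1-D tensor [[0, Z1], [-Z1, 0]],
# both invariants reduce to Z1.

def det_invariant(z):
    (zxx, zxy), (zyx, zyy) = z
    return cmath.sqrt(zxx * zyy - zxy * zyx)

def ssq_invariant(z):
    (zxx, zxy), (zyx, zyy) = z
    return cmath.sqrt((zxx**2 + zxy**2 + zyx**2 + zyy**2) / 2)

def geometric_average(values):
    """exp(mean(log v)) over a collection of complex invariants."""
    return cmath.exp(sum(cmath.log(v) for v in values) / len(values))
```

    The paper's point, in these terms: galvanic distortion biases det_invariant downward while leaving ssq_invariant comparatively unaffected, so the geometric average of the latter is the better regional 1-D estimator.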

  18. Averaging the inhomogeneous universe

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem

    2012-03-01

    A basic assumption of modern cosmology is that the universe is homogeneous and isotropic on the largest observable scales. This greatly simplifies Einstein's general relativistic field equations applied at these large scales, and allows a straightforward comparison between theoretical models and observed data. However, Einstein's equations should ideally be imposed at length scales comparable to, say, the solar system, since this is where these equations have been tested. We know that at these scales the universe is highly inhomogeneous. It is therefore essential to perform an explicit averaging of the field equations in order to apply them at large scales. It has long been known that due to the nonlinear nature of Einstein's equations, any explicit averaging scheme will necessarily lead to corrections in the equations applied at large scales. Estimating the magnitude and behavior of these corrections is a challenging task, due to difficulties associated with defining averages in the context of general relativity (GR). It has recently become possible to estimate these effects in a rigorous manner, and we will review some of the averaging schemes that have been proposed in the literature. A tantalizing possibility explored by several authors is that the corrections due to averaging may in fact account for the apparent acceleration of the expansion of the universe. We will explore this idea, reviewing some of the work done in the literature to date. We will argue however, that this rather attractive idea is in fact not viable as a solution of the dark energy problem, when confronted with observational constraints.

  19. Covariant approximation averaging

    NASA Astrophysics Data System (ADS)

    Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2015-06-01

    We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf = 2+1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
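
    Stripped of the lattice-QCD specifics, the estimator underlying AMA combines many cheap approximate measurements with an exact correction measured on a small subset, which removes the approximation bias while keeping the variance of the cheap term. A toy sketch (the observable and its approximation are stand-ins, not correlators, and the subset here is deterministic rather than symmetry-averaged):

```python
# Bias-corrected estimator in the spirit of all-mode averaging:
#   estimate = mean(approx over all samples)
#            + mean(exact - approx over a small subset).
# The correction term restores unbiasedness; the variance stays low
# as long as approx tracks exact closely.

def ama_estimate(samples, exact, approx, n_exact):
    cheap = sum(approx(x) for x in samples) / len(samples)
    subset = samples[:n_exact]
    correction = sum(exact(x) - approx(x) for x in subset) / len(subset)
    return cheap + correction

# With a constant-bias approximation the correction is exact:
print(ama_estimate(list(range(10)), lambda x: x, lambda x: x - 3, 2))  # 4.5
```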

  20. Average density in cosmology

    SciTech Connect

    Bonnor, W.B.

    1987-05-01

    The Einstein-Straus (1945) vacuole is here used to represent a bound cluster of galaxies embedded in a standard pressure-free cosmological model, and the average density of the cluster is compared with the density of the surrounding cosmic fluid. The two are nearly but not quite equal, and the more condensed the cluster, the greater the difference. A theoretical consequence of the discrepancy between the two densities is discussed. 25 references.

  1. Americans' Average Radiation Exposure

    SciTech Connect

    NA

    2000-08-11

    We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  2. Effect of annealing time, weight pressure and cobalt doping on the electrical and magnetic behavior of barium titanate

    NASA Astrophysics Data System (ADS)

    Samuvel, K.; Ramachandran, K.

    2016-05-01

    BaTi0.5Co0.5O3 (BTCO) nanoparticles were prepared by the solid state reaction technique using different starting materials, and the microstructure was examined by XRD, FESEM, BDS and VSM. X-ray diffraction and electron diffraction patterns showed that the nanoparticles were the tetragonal BTCO phase. The BTCO nanoparticles prepared from the starting materials of as-prepared titanium oxide, cobalt oxide and barium carbonate have spherical grain morphology, an average size of 65 nm and a fairly narrow size distribution. The nano-scale presence and the formation of the tetragonal perovskite phase as well as the crystallinity were detected using the mentioned techniques. Dielectric properties of the samples were measured at different frequencies. Broadband dielectric spectroscopy was applied to investigate the electrical properties of the disordered perovskite-like ceramics over a wide temperature range. The doped BTCO samples exhibited low loss factors at 1 kHz and 1 MHz frequencies, respectively.

  3. Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average

    ERIC Educational Resources Information Center

    DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.

    2007-01-01

    Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…

  4. Screen-time Weight-loss Intervention Targeting Children at Home (SWITCH): A randomized controlled trial study protocol

    PubMed Central

    2011-01-01

    Background Approximately one third of New Zealand children and young people are overweight or obese. A similar proportion (33%) do not meet recommendations for physical activity, and 70% do not meet recommendations for screen time. Increased time being sedentary is positively associated with being overweight. There are few family-based interventions aimed at reducing sedentary behavior in children. The aim of this trial is to determine the effects of a 24 week home-based, family oriented intervention to reduce sedentary screen time on children's body composition, sedentary behavior, physical activity, and diet. Methods/Design The study design is a pragmatic two-arm parallel randomized controlled trial. Two hundred and seventy overweight children aged 9-12 years and primary caregivers are being recruited. Participants are randomized to intervention (family-based screen time intervention) or control (no change). At the end of the study, the control group is offered the intervention content. Data collection is undertaken at baseline and 24 weeks. The primary trial outcome is child body mass index (BMI) and standardized body mass index (zBMI). Secondary outcomes are change from baseline to 24 weeks in child percentage body fat; waist circumference; self-reported average daily time spent in physical and sedentary activities; dietary intake; and enjoyment of physical activity and sedentary behavior. Secondary outcomes for the primary caregiver include change in BMI and self-reported physical activity. Discussion This study provides an excellent example of a theory-based, pragmatic, community-based trial targeting sedentary behavior in overweight children. The study has been specifically designed to allow for estimation of the consistency of effects on body composition for Māori (indigenous), Pacific and non-Māori/non-Pacific ethnic groups. If effective, this intervention is imminently scalable and could be integrated within existing weight management programs. Trial

  5. Virtual Averaging Making Nonframe-Averaged Optical Coherence Tomography Images Comparable to Frame-Averaged Images

    PubMed Central

    Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Schuman, Joel S.

    2016-01-01

    Purpose To develop a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Methods Twenty-one eyes of 21 healthy volunteers were scanned with a non-eye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to nonframe-averaged images with voxel resampling and amplitude-deviation addition repeated 15 times. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and the distance between the end of the visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. Results All virtual-averaged nonframe-averaged images showed notable improvement and clear resemblance to active eye-tracking frame-averaged images. SNR and CNR were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of the visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that were no longer significant after processing. Conclusion The virtual averaging method successfully improved nontracking nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Translational Relevance Virtual averaging may enable detailed retinal structure studies on images acquired using a mixture of nonframe-averaged and frame-averaged OCT devices without concern about systematic differences in either qualitative or quantitative aspects. PMID:26835180
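
The signal-to-noise benefit that underlies this kind of repeated-sampling enhancement can be illustrated with a toy sketch (this is not the authors' virtual-averaging algorithm; the additive-Gaussian-noise model and all numbers are assumptions for illustration):

```python
import math
import random

# Toy illustration of the statistical effect behind averaging repeated
# noisy scans: averaging N independent noise realizations reduces the
# noise power by a factor of N, about 10*log10(N) dB of SNR gain.
# This is NOT the paper's virtual-averaging algorithm; the additive
# Gaussian noise model and all numbers are illustrative assumptions.

random.seed(0)
signal = 1.0        # arbitrary "true" reflectivity value
noise_sd = 0.5      # assumed per-scan noise standard deviation
n_repeats = 15      # the paper repeats its averaging step 15 times

def one_noisy_sample():
    return signal + random.gauss(0.0, noise_sd)

averaged = sum(one_noisy_sample() for _ in range(n_repeats)) / n_repeats
expected_gain_db = 10 * math.log10(n_repeats)
print(round(expected_gain_db, 1))  # theoretical SNR gain in dB (~11.8)
```

For the 15 repetitions used above the theoretical gain is about 11.8 dB, which is the statistical effect a repetition-based averaging scheme exploits.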

  6. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  7. Averaging Internal Consistency Reliability Coefficients

    ERIC Educational Resources Information Center

    Feldt, Leonard S.; Charter, Richard A.

    2006-01-01

    Seven approaches to averaging reliability coefficients are presented. Each approach starts with a unique definition of the concept of "average," and no approach is more correct than the others. Six of the approaches are applicable to internal consistency coefficients. The seventh approach is specific to alternate-forms coefficients. Although the…

  8. The Average of Rates and the Average Rate.

    ERIC Educational Resources Information Center

    Lindstrom, Peter

    1988-01-01

    Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
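
The fuel-economy application can be made concrete with a short sketch (illustrative numbers, not from the article):

```python
# Average rate vs. average of rates (illustrative numbers):
# a car drives two equal 100-mile legs at 20 mpg and 40 mpg.
# The true average rate is total distance / total fuel, which equals
# the harmonic mean of the leg rates, not their arithmetic mean.

def arithmetic_mean(rates):
    return sum(rates) / len(rates)

def harmonic_mean(rates):
    return len(rates) / sum(1.0 / r for r in rates)

rates = [20.0, 40.0]            # mpg on each equal-distance leg
fuel = 100 / 20 + 100 / 40      # 7.5 gallons for the 200-mile trip

print(arithmetic_mean(rates))   # 30.0 -- overstates the economy
print(harmonic_mean(rates))     # ~26.67 mpg
print(200 / fuel)               # same value: distance / fuel
```

When the legs cover unequal distances, the weighted harmonic mean (with distances as weights) gives the correct average rate.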

  9. The Averaging Problem in Cosmology

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem

    2009-06-01

    This thesis deals with the averaging problem in cosmology, which has gained considerable interest in recent years, and is concerned with the correction terms (after averaging inhomogeneities) that appear in the Einstein equations when working on the large scales appropriate for cosmology. It has been claimed in the literature that these terms may account for the phenomenon of dark energy, which causes the late-time universe to accelerate. We investigate the nature of these terms by using averaging schemes available in the literature, further developed to be applicable to the problem at hand. We show that the effect of these terms, when calculated carefully, remains negligible and cannot explain the late-time acceleration.

  10. High average power pockels cell

    DOEpatents

    Daly, Thomas P.

    1991-01-01

    A high average power Pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The Pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.

  11. Vocal attractiveness increases by averaging.

    PubMed

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

    Vocal attractiveness has a profound influence on listeners-a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]-with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4]-e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. PMID:20129047

  12. Determining GPS average performance metrics

    NASA Technical Reports Server (NTRS)

    Moore, G. V.

    1995-01-01

    Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego as over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.
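
The dwell-time weighting idea can be sketched as a simple weighted average (a generic sketch; the belt metrics and dwell fractions below are invented for illustration, not actual GPS data):

```python
# Sketch of a dwell-time-weighted average of a per-latitude-belt metric.
# The belt metrics and dwell fractions below are invented for
# illustration; they are not actual GPS constellation data.

def dwell_weighted_average(metric_by_belt, dwell_by_belt):
    """Weight each latitude belt's metric by the fraction of time
    satellites dwell in that belt, then normalize."""
    total = sum(dwell_by_belt)
    return sum(m * d for m, d in zip(metric_by_belt, dwell_by_belt)) / total

visible_by_belt = [7.2, 8.1, 9.4]   # hypothetical mean satellites visible
dwell_fraction = [0.2, 0.3, 0.5]    # hypothetical dwell-time allocation

print(dwell_weighted_average(visible_by_belt, dwell_fraction))
```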

  13. Evaluations of average level spacings

    SciTech Connect

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of ¹⁶⁸Er data. 19 figures, 2 tables.

  14. On generalized averaged Gaussian formulas

    NASA Astrophysics Data System (ADS)

    Spalevic, Miodrag M.

    2007-09-01

    We present a simple numerical method for constructing the optimal (generalized) averaged Gaussian quadrature formulas which are the optimal stratified extensions of Gauss quadrature formulas. These extensions exist in many cases in which real positive Kronrod formulas do not exist. For the Jacobi weight functions w(x) ≡ w^(α,β)(x) = (1−x)^α (1+x)^β (α, β > −1) we give a necessary and sufficient condition on the parameters α and β such that the optimal averaged Gaussian quadrature formulas are internal.

  15. Polyhedral Painting with Group Averaging

    ERIC Educational Resources Information Center

    Farris, Frank A.; Tsao, Ryan

    2016-01-01

    The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…

  16. Averaged Electroencephalic Audiometry in Infants

    ERIC Educational Resources Information Center

    Lentz, William E.; McCandless, Geary A.

    1971-01-01

    Normal, preterm, and high-risk infants were tested at 1, 3, 6, and 12 months of age using averaged electroencephalic audiometry (AEA) to determine the usefulness of AEA as a measurement technique for assessing auditory acuity in infants, and to delineate some of the procedural and technical problems often encountered. (KW)

  17. Averaging inhomogeneous cosmologies - a dialogue.

    NASA Astrophysics Data System (ADS)

    Buchert, T.

    The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.

  19. Averaging facial expression over time

    PubMed Central

    Haberman, Jason; Harp, Tom; Whitney, David

    2010-01-01

    The visual system groups similar features, objects, and motion (e.g., Gestalt grouping). Recent work suggests that the computation underlying perceptual grouping may be one of summary statistical representation. Summary representation occurs for low-level features, such as size, motion, and position, and even for high level stimuli, including faces; for example, observers accurately perceive the average expression in a group of faces (J. Haberman & D. Whitney, 2007, 2009). The purpose of the present experiments was to characterize the time-course of this facial integration mechanism. In a series of three experiments, we measured observers’ abilities to recognize the average expression of a temporal sequence of distinct faces. Faces were presented in sets of 4, 12, or 20, at temporal frequencies ranging from 1.6 to 21.3 Hz. The results revealed that observers perceived the average expression in a temporal sequence of different faces as precisely as they perceived a single face presented repeatedly. The facial averaging was independent of temporal frequency or set size, but depended on the total duration of exposed faces, with a time constant of ~800 ms. These experiments provide evidence that the visual system is sensitive to the ensemble characteristics of complex objects presented over time. PMID:20053064

  20. Average Cost of Common Schools.

    ERIC Educational Resources Information Center

    White, Fred; Tweeten, Luther

    The paper shows costs of elementary and secondary schools applicable to Oklahoma rural areas, including the long-run average cost curve, which indicates the minimum per-student cost for educating various numbers of students, and the application of the cost curves in determining the optimum school district size. In a stratified sample, the school…

  1. Exact averaging of laminar dispersion

    NASA Astrophysics Data System (ADS)

    Ratnakar, Ram R.; Balakotaiah, Vemuri

    2011-02-01

    We use the Liapunov-Schmidt (LS) technique of bifurcation theory to derive a low-dimensional model for laminar dispersion of a nonreactive solute in a tube. The LS formalism leads to an exact averaged model, consisting of the governing equation for the cross-section averaged concentration, along with the initial and inlet conditions, to all orders in the transverse diffusion time. We use the averaged model to analyze the temporal evolution of the spatial moments of the solute and show that they do not have the centroid displacement or variance deficit predicted by the coarse-grained models derived by other methods. We also present a detailed analysis of the first three spatial moments for short and long times as a function of the radial Peclet number and identify three clearly defined time intervals for the evolution of the solute concentration profile. By examining the skewness in some detail, we show that the skewness increases initially, attains a maximum for time scales of the order of transverse diffusion time, and the solute concentration profile never attains the Gaussian shape at any finite time. Finally, we reason that there is a fundamental physical inconsistency in representing laminar (Taylor) dispersion phenomena using truncated averaged models in terms of a single cross-section averaged concentration and its large scale gradient. Our approach evaluates the dispersion flux using a local gradient between the dominant diffusive and convective modes. We present and analyze a truncated regularized hyperbolic model in terms of the cup-mixing concentration for the classical Taylor-Aris dispersion that has a larger domain of validity compared to the traditional parabolic model. By analyzing the temporal moments, we show that the hyperbolic model has no physical inconsistencies that are associated with the parabolic model and can describe the dispersion process to first order accuracy in the transverse diffusion time.
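
For reference, the classical Taylor-Aris result that the truncated coarse-grained models approximate can be evaluated directly; the sketch below uses the standard textbook expression for the effective axial dispersion coefficient, not the exact LS-averaged model of the paper:

```python
# Classical Taylor-Aris effective axial dispersion coefficient for
# laminar solute transport in a circular tube (standard textbook
# result, not the exact LS-averaged model derived in the paper):
#     D_eff = D + U**2 * a**2 / (48 * D)
# with molecular diffusivity D, mean velocity U, and tube radius a.

def taylor_aris_deff(D, U, a):
    return D + (U ** 2) * (a ** 2) / (48.0 * D)

# Illustrative SI values: D = 1e-9 m^2/s, U = 1e-3 m/s, a = 1e-3 m,
# i.e. a radial Peclet number U*a/D = 1000.
print(taylor_aris_deff(1e-9, 1e-3, 1e-3))  # dominated by the shear term
```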

  2. Averaging Robertson-Walker cosmologies

    NASA Astrophysics Data System (ADS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-04-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ωeff0 approx 4 × 10-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state weff < -1/3 can be found for strongly phantom models.

  4. Ensemble averaging of acoustic data

    NASA Technical Reports Server (NTRS)

    Stefanski, P. K.

    1982-01-01

    A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions for the program. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System, consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.
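
The core operation of such a program, ensemble averaging of repeated synchronized records, can be sketched as follows (a generic illustration with synthetic data, not the documented software):

```python
import random

# Generic sketch of ensemble averaging of repeated, synchronized
# records (the principle behind the documented program, not its code):
# averaging M records keeps the coherent waveform and suppresses
# uncorrelated noise. The waveform and noise level are made up.

random.seed(1)
CLEAN = [1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0, 0.0]

def noisy_record():
    return [c + random.gauss(0.0, 0.3) for c in CLEAN]

def ensemble_average(records):
    m = len(records)
    return [sum(samples) / m for samples in zip(*records)]

avg = ensemble_average([noisy_record() for _ in range(100)])
print([round(x, 2) for x in avg])  # close to CLEAN
```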

  5. Flexible time domain averaging technique

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency-domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of the impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
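
For contrast with FTDA, conventional TDA can be sketched in a few lines (a generic implementation that assumes the period is known exactly and divides the record evenly, precisely the situation in which PCE does not arise):

```python
import math
import random

# Plain time domain averaging (TDA), the conventional method the paper
# improves on: slice the signal into segments one period long and
# average them. This passes the periodic component (a comb filter) and
# attenuates noise. It assumes the period is known exactly and divides
# the record evenly; the signal below is synthetic.

random.seed(2)
PERIOD = 64        # samples per rotation (assumed known exactly)
N_PERIODS = 200

def periodic_component(i):
    return math.sin(2 * math.pi * i / PERIOD) + 0.5 * math.sin(4 * math.pi * i / PERIOD)

signal = [periodic_component(i) + random.gauss(0.0, 1.0)
          for i in range(PERIOD * N_PERIODS)]

def time_domain_average(x, period):
    n = len(x) // period
    return [sum(x[k * period + i] for k in range(n)) / n for i in range(period)]

avg = time_domain_average(signal, PERIOD)
# avg now approximates one clean period of the periodic component,
# with the noise standard deviation reduced by sqrt(N_PERIODS).
```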

  6. Long 3 x 8 hr dialysis: a three-decade summary.

    PubMed

    Charra, Bernard; Chazot, Charles; Jean, Guillaume; Hurot, Jean-Marc; Vanel, Thierry; Terrat, Jean-Claude; VoVan, Cyril

    2003-01-01

    A long hemodialysis (HD) schedule, 3 x 8 hours/week, has been used without significant modification in Tassin for 35 years, with excellent morbidity and mortality results. It can be performed during the day or overnight. The relatively good survival is mainly due to a lower cardiovascular mortality than is usually reported in dialysis patients. This in turn is mainly due to the good control of blood pressure (BP), including drug-free hypertension control and a low incidence of intradialytic hypotension. This control of BP is probably the result of tight extracellular volume normalization (dry weight), although one cannot exclude the effect of other factors, such as the serum phosphorus control well achieved with long dialysis. The high dose of small and, even more so, of middle molecules is another essential virtue of long dialysis, leading to good nutrition, correction of anemia, and control of serum phosphate and potassium with low doses of medications, providing a very cost-effective treatment. In 2002 one must aim at optimal rather than merely adequate dialysis. Optimal dialysis needs to correct as completely as possible each and every abnormality due to renal failure. It can be achieved using longer (or more frequent) sessions. Overnight dialysis is the most logical way of implementing long HD with the lowest possible hindrance to the patient's life. Owing to the change in case mix, a decreasing number of patients are able or willing to go on overnight dialysis, and education for autonomy is more difficult, but the benefit is still there. PMID:14733303

  7. Circadian Activity Rhythms and Sleep in Nurses Working Fixed 8-hr Shifts.

    PubMed

    Kang, Jiunn-Horng; Miao, Nae-Fang; Tseng, Ing-Jy; Sithole, Trevor; Chung, Min-Huey

    2015-05-01

    Shift work is associated with adverse health outcomes. The aim of this study was to explore the effects of shift work on circadian activity rhythms (CARs) and objective and subjective sleep quality in nurses. Female day-shift (n = 16), evening-shift (n = 6), and night-shift (n = 13) nurses wore a wrist actigraph to monitor the activity. We used cosinor analysis and time-frequency analysis to study CARs. Night-shift nurses exhibited the lowest values of circadian rhythm amplitude, acrophase, autocorrelation, and mean of the circadian relative power (CRP), whereas evening-shift workers exhibited the greatest standard deviation of the CRP among the three shift groups. That is, night-shift nurses had less robust CARs and evening-shift nurses had greater variations in CARs compared with nurses who worked other shifts. Our results highlight the importance of assessing CARs to prevent the adverse effects of shift work on nurses' health. PMID:25332463
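
The single-cosinor fit used in such analyses can be sketched for evenly spaced actigraphy epochs covering whole 24-h cycles (a generic illustration with synthetic data, assuming a fixed 24-h period; not the authors' pipeline):

```python
import math

# Minimal single-cosinor fit, y(t) = M + A*cos(w*t - phi) with a fixed
# 24-h period, for evenly spaced epochs covering whole cycles (so the
# cos/sin regressors are orthogonal and least squares reduces to sums).
# A generic sketch of cosinor analysis, not the authors' pipeline;
# the activity data are synthetic.

def cosinor(times_h, y):
    w = 2 * math.pi / 24.0
    n = len(y)
    mesor = sum(y) / n                                   # rhythm-adjusted mean
    beta = 2.0 / n * sum(v * math.cos(w * t) for t, v in zip(times_h, y))
    gamma = 2.0 / n * sum(v * math.sin(w * t) for t, v in zip(times_h, y))
    amplitude = math.hypot(beta, gamma)
    acrophase = math.atan2(gamma, beta)                  # radians
    return mesor, amplitude, acrophase

# Synthetic actigraphy: mesor 50, amplitude 20, peak at 15:00,
# sampled every 30 min for two days.
ts = [i * 0.5 for i in range(96)]
ys = [50 + 20 * math.cos(2 * math.pi / 24 * (t - 15)) for t in ts]
M, A, phi = cosinor(ts, ys)
peak_h = (phi * 24 / (2 * math.pi)) % 24
print(round(M, 1), round(A, 1), round(peak_h, 1))  # 50.0 20.0 15.0
```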

  8. Below-Average, Average, and Above-Average Readers Engage Different and Similar Brain Regions while Reading

    ERIC Educational Resources Information Center

    Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri

    2006-01-01

    Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…

  9. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  10. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  11. RHIC BPM system average orbit calculations

    SciTech Connect

    Michnoff, R.; Cerniglia, P.; Degen, C.; Hulsart, R.; et al.

    2009-05-04

    RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this has provided an average orbit signal quality improvement, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and performance with beam.
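
The motivation for averaging over many ~10 Hz periods can be sketched with synthetic turn-by-turn data (the revolution frequency and orbit numbers below are illustrative assumptions; this is not the RHIC implementation):

```python
import math

# Generic running average of turn-by-turn beam position, sketching why
# averaging over many ~10 Hz oscillation periods recovers the closed
# orbit. The revolution frequency and orbit numbers are illustrative
# assumptions; this is not the RHIC implementation.

F_REV = 78e3         # assumed revolution frequency, Hz
F_OSC = 10.0         # ~10 Hz orbit perturbation
CLOSED_ORBIT = 1.5   # mm, assumed true average orbit

def position(turn):
    t = turn / F_REV
    return CLOSED_ORBIT + 0.2 * math.sin(2 * math.pi * F_OSC * t)  # mm

def running_average(n_turns):
    total = 0.0
    for turn in range(n_turns):
        total += position(turn)
    return total / n_turns

# Averaging over 10 s (~100 full 10 Hz periods) removes the ripple,
# leaving the mean orbit near 1.5 mm.
print(round(running_average(780000), 3))
```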

  12. Spectral averaging techniques for Jacobi matrices

    SciTech Connect

    Rio, Rafael del; Martinez, Carmen; Schulz-Baldes, Hermann

    2008-02-15

    Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner-type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.

  13. Averaging and Adding in Children's Worth Judgements

    ERIC Educational Resources Information Center

    Schlottmann, Anne; Harman, Rachel M.; Paine, Julie

    2012-01-01

    Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…

  14. Averaging procedures for flow within vegetation canopies

    NASA Astrophysics Data System (ADS)

    Raupach, M. R.; Shaw, R. H.

    1982-01-01

    Most one-dimensional models of flow within vegetation canopies are based on horizontally averaged flow variables. This paper formalizes the horizontal averaging operation. Two averaging schemes are considered: pure horizontal averaging at a single instant, and time averaging followed by horizontal averaging. These schemes produce different forms for the mean and turbulent kinetic energy balances, and especially for the ‘wake production’ term describing the transfer of energy from large-scale motion to wake turbulence by form drag. The differences are primarily due to the appearance, in the covariances produced by the second scheme, of dispersive components arising from the spatial correlation of time-averaged flow variables. The two schemes are shown to coincide if these dispersive fluxes vanish.
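
The difference between the two schemes can be stated compactly. Writing an overbar for the time average, angle brackets for the horizontal average, primes for turbulent fluctuations, and double primes for departures of time-averaged quantities from their horizontal means, the momentum flux under the second scheme decomposes as (standard double-averaging notation, consistent with the abstract):

```latex
\langle \overline{uw} \rangle
  = \langle \bar{u} \rangle \langle \bar{w} \rangle
  + \underbrace{\langle \bar{u}''\,\bar{w}'' \rangle}_{\text{dispersive flux}}
  + \underbrace{\langle \overline{u'w'} \rangle}_{\text{turbulent flux}},
\qquad
\bar{u}'' \equiv \bar{u} - \langle \bar{u} \rangle .
```

The dispersive term appears only in the time-then-horizontal scheme; the two schemes coincide exactly when it vanishes.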

  15. Chronic Moderate Sleep Restriction in Older Long Sleepers and Older Average Duration Sleepers: A Randomized Controlled Trial

    PubMed Central

    Youngstedt, Shawn D.; Jean-Louis, Girardin; Bootzin, Richard R.; Kripke, Daniel F.; Cooper, Jonnifer; Dean, Lauren R.; Catao, Fabio; James, Shelli; Vining, Caitlyn; Williams, Natasha J.; Irwin, Michael R.

    2013-01-01

    Epidemiologic studies have consistently shown that sleeping < 7 hr and ≥ 8 hr is associated with increased mortality and morbidity. The risks of short sleep may be consistent with results from experimental sleep deprivation studies. However, there has been little study of chronic moderate sleep restriction and no evaluation of older adults who might be more vulnerable to negative effects of sleep restriction, given their age-related morbidities. Moreover, the risks of long sleep have scarcely been examined experimentally. Moderate sleep restriction might benefit older long sleepers who often spend excessive time in bed (TIB), in contrast to older adults with average sleep patterns. Our aims are: (1) to examine the ability of older long sleepers and older average sleepers to adhere to 60 min TIB restriction; and (2) to contrast effects of chronic TIB restriction in older long vs. average sleepers. Older adults (n=100) (60–80 yr) who sleep 8–9 hr per night and 100 older adults who sleep 6–7.25 hr per night will be examined at 4 sites over 5 years. Following a 2-week baseline, participants will be randomized to one of two 12-week treatments: (1) a sleep restriction involving a fixed sleep-wake schedule, in which TIB is reduced 60 min below each participant’s baseline TIB; (2) a control treatment involving no sleep restriction, but a fixed sleep schedule. Sleep will be assessed with actigraphy and a diary. Measures will include glucose tolerance, sleepiness, depressive symptoms, quality of life, cognitive performance, incidence of illness or accident, and inflammation. PMID:23811325

  16. Average-cost based robust structural control

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.

    1993-01-01

    A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.

  17. Averaging of Backscatter Intensities in Compounds

    PubMed Central

    Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.

    2002-01-01

    Low uncertainty measurements on pure element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass fraction averaging, based on the number of electrons or protons, termed “electron fraction,” which predicts backscatter yield better than mass fraction averaging.

  18. Neutron resonance averaging with filtered beams

    SciTech Connect

    Chrien, R.E.

    1985-01-01

    Neutron resonance averaging using filtered beams from a reactor source has proven to be an effective nuclear structure tool within certain limitations. These limitations are imposed by the nature of the averaging process, which produces fluctuations in radiative intensities. The fluctuations have been studied quantitatively. Resonance averaging also gives us information about initial or capture state parameters, in particular the photon strength function. Suitable modifications of the filtered beams are suggested for the enhancement of non-resonant processes.

  19. Spatial limitations in averaging social cues.

    PubMed

    Florey, Joseph; Clifford, Colin W G; Dakin, Steven; Mareschal, Isabelle

    2016-01-01

    The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers' ability to average social cues with their averaging of a non-social cue. Estimates of observers' internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging is examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589

  20. Spectral and parametric averaging for integrable systems

    NASA Astrophysics Data System (ADS)

    Ma, Tao; Serota, R. A.

    2015-05-01

    We analyze two theoretical approaches to ensemble averaging for integrable systems in quantum chaos, spectral averaging (SA) and parametric averaging (PA). For SA, we introduce a new procedure, namely, rescaled spectral averaging (RSA). Unlike traditional SA, it can describe the correlation function of spectral staircase (CFSS) and produce persistent oscillations of the interval level number variance (IV). PA, while not as accurate as RSA for the CFSS and IV, can also produce persistent oscillations of the global level number variance (GV) and better describes saturation level rigidity as a function of the running energy. Overall, it is the most reliable method for a wide range of statistics.

  1. Statistics of time averaged atmospheric scintillation

    SciTech Connect

    Stroud, P.

    1994-02-01

    A formulation has been constructed to recover the statistics of the moving average of the scintillation Strehl from a discrete set of measurements. A program of airborne atmospheric propagation measurements was analyzed to find the correlation function of the relative intensity over displaced propagation paths. The variance in continuous moving averages of the relative intensity was then found in terms of the correlation functions. An empirical formulation of the variance of the continuous moving average of the scintillation Strehl has been constructed. The resulting characterization of the variance of the finite time averaged Strehl ratios is being used to assess the performance of an airborne laser system.
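The step from a correlation function to the variance of a continuous moving average can be written explicitly: for a window of length T, Var = (2/T) ∫₀ᵀ (1 − τ/T) C(τ) dτ. A sketch assuming an exponential correlation model (an illustrative stand-in, not the airborne measurement data of the paper):

```python
import numpy as np

sigma2, tau_c, T = 1.0, 0.3, 2.0       # variance, correlation time, window length

def corr(tau):                          # assumed exponential correlation model
    return sigma2 * np.exp(-tau / tau_c)

# Var of the T-window moving average: (2/T) * int_0^T (1 - tau/T) C(tau) dtau
n = 200_000
dtau = T / n
tau = (np.arange(n) + 0.5) * dtau       # midpoint rule
var_num = (2.0 / T) * np.sum((1 - tau / T) * corr(tau)) * dtau

# closed form for the exponential model
var_exact = 2 * sigma2 * (tau_c / T)**2 * (T / tau_c - 1 + np.exp(-T / tau_c))

assert np.isclose(var_num, var_exact, rtol=1e-5)
```

As T grows relative to the correlation time, the variance of the moving average falls off roughly as 2·sigma2·tau_c/T, the familiar averaging gain.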

  3. Dynamic Multiscale Averaging (DMA) of Turbulent Flow

    SciTech Connect

    Richard W. Johnson

    2012-09-01

    A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
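The bookkeeping of the two averaging operations can be sketched in a few lines: time-average fine-grid fields, block-average them onto a coarser grid, and form the coupling correlation that is passed to the coarse computation as a source term. This toy (NumPy, random fields) illustrates only the averaging algebra, not the DNS stage or the multi-stage solver:

```python
import numpy as np

rng = np.random.default_rng(1)
n_fine, block, nt = 32, 4, 200          # fine cells, coarsening factor, time steps
u = rng.normal(size=(n_fine, nt))
v = rng.normal(size=(n_fine, nt)) + 0.3 * u

# step 1: running time average on the fine grid
ut, vt, uvt = u.mean(axis=1), v.mean(axis=1), (u * v).mean(axis=1)

# step 2: volume (block) average onto the coarser grid
def coarsen(f):
    return f.reshape(-1, block).mean(axis=1)

U, V = coarsen(ut), coarsen(vt)
# coupling correlation: what the coarse-grid product misses; in DMA this is
# computed from the fine flow field and added as a source term on the coarse mesh
source = coarsen(uvt) - U * V

# the coarse product plus the source recovers the fine-scale average exactly
assert np.allclose(U * V + source, coarsen(uvt))
```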

  4. Whatever Happened to the Average Student?

    ERIC Educational Resources Information Center

    Krause, Tom

    2005-01-01

    Mandated state testing, college entrance exams and their perceived need for higher and higher grade point averages have raised the anxiety levels felt by many of the average students. Too much focus is placed on state test scores and college entrance standards with not enough focus on the true level of the students. The author contends that…

  5. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...

  6. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...

  7. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General provisions. In lieu of complying with the...

  8. A note on generalized averaged Gaussian formulas

    NASA Astrophysics Data System (ADS)

    Spalevic, Miodrag

    2007-11-01

    We have recently proposed a very simple numerical method for constructing the averaged Gaussian quadrature formulas. These formulas exist in many more cases than the real positive Gauss-Kronrod formulas. In this note we try to answer whether the averaged Gaussian formulas are an adequate alternative to the corresponding Gauss-Kronrod quadrature formulas, to estimate the remainder term of a Gaussian rule.
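The task the note addresses, estimating the remainder term of a Gaussian rule, can be illustrated with a much simpler stand-in: comparing a Gauss rule against a higher-order Gauss rule (this is not the averaged Gaussian or Gauss-Kronrod construction itself, just the underlying idea of a nested error estimate). A sketch assuming NumPy:

```python
import numpy as np

def gauss_legendre(f, n):
    # n-point Gauss-Legendre approximation of the integral of f over [-1, 1]
    x, w = np.polynomial.legendre.leggauss(n)
    return float(np.sum(w * f(x)))

f = lambda x: np.exp(x) * np.cos(3 * x)

G5  = gauss_legendre(f, 5)
G10 = gauss_legendre(f, 10)
ref = gauss_legendre(f, 50)            # stands in for the exact integral

err_est  = abs(G10 - G5)               # remainder estimate for the 5-point rule
err_true = abs(G5 - ref)
assert np.isclose(err_est, err_true, rtol=1e-3, atol=1e-10)
```

The appeal of averaged Gaussian (and Gauss-Kronrod) formulas is that they reuse the Gauss nodes, so the error estimate comes at far lower cost than an independent higher-order rule.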

  9. Determinants of College Grade Point Averages

    ERIC Educational Resources Information Center

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even continued enrollment at a university. However, GPAs are determined not only by student ability but also by…

  10. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  11. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  12. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  13. Average Transmission Probability of a Random Stack

    ERIC Educational Resources Information Center

    Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

    2010-01-01

    The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
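The contrast between the two averages can be reproduced in a toy model. The sketch below is not the authors' recurrence relation; it Monte Carlo-samples a transfer-matrix stack of identical delta-function scatterers (standing in for the slabs), separated by gaps of random phase, and compares the average of the transmission probability itself with the usual log-averaged ("typical") value:

```python
import numpy as np

rng = np.random.default_rng(2)

def transmission(phases, beta=0.5):
    # T through identical scatterers separated by gaps with phase shifts k*d;
    # the 2x2 matrix is the standard delta-barrier transfer matrix (det = 1)
    slab = np.array([[1 + 1j*beta, 1j*beta],
                     [-1j*beta, 1 - 1j*beta]])
    M = np.eye(2, dtype=complex)
    for phi in phases:
        gap = np.diag([np.exp(1j*phi), np.exp(-1j*phi)])
        M = slab @ gap @ M
    return 1.0 / abs(M[0, 0])**2

n_slabs, n_samples = 20, 2000
T = np.array([transmission(rng.uniform(0, 2*np.pi, n_slabs))
              for _ in range(n_samples)])

mean_T = T.mean()                     # average of the transmission probability
typ_T  = np.exp(np.log(T).mean())     # the usual log-averaged value
assert typ_T < mean_T                 # the two averages differ (Jensen/AM-GM)
```

The geometric mean is always below the arithmetic mean, and for strongly fluctuating (localized) transmission the gap between them is large, which is why averaging T itself needs separate treatment.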

  14. New results on averaging theory and applications

    NASA Astrophysics Data System (ADS)

    Cândido, Murilo R.; Llibre, Jaume

    2016-08-01

    The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations, to find the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function in it is zero, the classical averaging theory does not provide information about the periodic solution associated to a non-simple zero. Here we provide sufficient conditions in order that the averaging theory can be applied also to non-simple zeros for studying their associated periodic solutions. Additionally, we do two applications of this new result for studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.
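First-order averaging with a simple zero can be checked on the classical van der Pol oscillator x'' + x = ε(1 − x²)x' (a textbook example, not the zero-Hopf setting of the paper): the averaged radial function is f₁(r) = (r/2)(1 − r²/4), whose simple zero at r = 2 gives the amplitude of the limit cycle. A numerical sketch:

```python
import numpy as np

# averaged radial function for x'' + x = eps*(1 - x^2)*x' (van der Pol):
# f1(r) = (1/2pi) * int_0^{2pi} r sin^2(th) * (1 - r^2 cos^2(th)) dth
def f1(r, n=64):
    th = 2 * np.pi * np.arange(n) / n     # uniform rule: exact for trig polys
    return np.mean(r * np.sin(th)**2 * (1 - r**2 * np.cos(th)**2))

# a simple zero of f1 signals a periodic solution; bisect on [1, 3]
a, b = 1.0, 3.0
for _ in range(60):
    m = 0.5 * (a + b)
    if f1(a) * f1(m) <= 0:
        b = m
    else:
        a = m

assert abs(0.5 * (a + b) - 2.0) < 1e-3    # limit cycle of amplitude ~ 2
```

The non-simple-zero case treated in the paper is precisely where this picture fails: when f₁'(r) vanishes at the zero, the classical theorem gives no conclusion and the authors' extended conditions are needed.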

  15. The Hubble rate in averaged cosmology

    SciTech Connect

    Umeh, Obinna; Larena, Julien; Clarkson, Chris E-mail: julien.larena@gmail.com

    2011-03-01

    The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H₀, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate and the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions.

  16. Short-Term Auditory Memory of Above-Average and Below-Average Grade Three Readers.

    ERIC Educational Resources Information Center

    Caruk, Joan Marie

    To determine if performance on short term auditory memory tasks is influenced by reading ability or sex differences, 62 third grade reading students (16 above average boys, 16 above average girls, 16 below average boys, and 14 below average girls) were administered four memory tests--memory for consonant names, memory for words, memory for…

  17. Clarifying the Relationship between Average Excesses and Average Effects of Allele Substitutions.

    PubMed

    Alvarez-Castro, José M; Yang, Rong-Cai

    2012-01-01

    Fisher's concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance. PMID:22509178
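For the baseline biallelic case under random mating (Hardy-Weinberg proportions), the average effect of allele substitution is the regression slope of genotypic value on gene content, α = a + d(q − p), and it coincides with the average excess. A sketch of this classical case (the paper's contribution is the generalization beyond it):

```python
import numpy as np

# biallelic locus under Hardy-Weinberg: genotypic values a, d, -a
p, a, d = 0.3, 1.0, 0.4
q = 1 - p
freq = np.array([p*p, 2*p*q, q*q])   # A1A1, A1A2, A2A2 frequencies
G    = np.array([a, d, -a])          # genotypic values
x    = np.array([2.0, 1.0, 0.0])     # gene content of allele A1

# average effect of allele substitution = slope of G on gene content
mx, mG = freq @ x, freq @ G
alpha = (freq @ (G * x) - mx * mG) / (freq @ (x * x) - mx**2)

assert np.isclose(alpha, a + d * (q - p))   # Fisher's classical result
```

Under inbreeding or other departures from Hardy-Weinberg the average excess and the average effect separate, which is the situation the effective-gene-content contrasts are built to handle.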

  18. Light propagation in the averaged universe

    SciTech Connect

    Bagheri, Samae; Schwarz, Dominik J. E-mail: dschwarz@physik.uni-bielefeld.de

    2014-10-01

    Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.

  19. Physics of the spatially averaged snowmelt process

    NASA Astrophysics Data System (ADS)

    Horne, Federico E.; Kavvas, M. Levent

    1997-04-01

    It has been recognized that the snowmelt models developed in the past do not fully meet current prediction requirements. Part of the reason is that they do not account for the spatial variation in the dynamics of the spatially heterogeneous snowmelt process. Most of the current physics-based distributed snowmelt models utilize point-location-scale conservation equations which do not represent the spatially varying snowmelt dynamics over a grid area that surrounds a computational node. In this study, to account for the spatial heterogeneity of the snowmelt dynamics, areally averaged mass and energy conservation equations for the snowmelt process are developed. As a first step, energy and mass conservation equations that govern the snowmelt dynamics at a point location are averaged over the snowpack depth, resulting in depth averaged equations (DAE). In this averaging, it is assumed that the snowpack has two layers. Then, the point location DAE are averaged over the snowcover area. To develop the areally averaged equations of the snowmelt physics, we make the fundamental assumption that snowmelt process is spatially ergodic. The snow temperature and the snow density are considered as the stochastic variables. The areally averaged snowmelt equations are obtained in terms of their corresponding ensemble averages. Only the first two moments are considered. A numerical solution scheme (Runge-Kutta) is then applied to solve the resulting system of ordinary differential equations. This equation system is solved for the areal mean and areal variance of snow temperature and of snow density, for the areal mean of snowmelt, and for the areal covariance of snow temperature and snow density. The developed model is tested using Scott Valley (Siskiyou County, California) snowmelt and meteorological data. The performance of the model in simulating the observed areally averaged snowmelt is satisfactory.

  20. Cosmic Inhomogeneities and Averaged Cosmological Dynamics

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem; Singh, T. P.

    2008-10-01

    If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a “dark energy.” However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be “no.” Averaging effects negligibly influence the cosmological dynamics.

  1. Average shape of transport-limited aggregates.

    PubMed

    Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z

    2005-08-12

    We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry. PMID:16196793

  3. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  4. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  5. 40 CFR 91.204 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... offset by positive credits from engine families below the applicable emission standard, as allowed under the provisions of this subpart. Averaging of credits in this manner is used to determine...

  6. Orbit-averaged implicit particle codes

    NASA Astrophysics Data System (ADS)

    Cohen, B. I.; Freis, R. P.; Thomas, V.

    1982-03-01

    The merging of orbit-averaged particle code techniques with recently developed implicit methods to perform numerically stable and accurate particle simulations is reported. Implicitness and orbit averaging can extend the applicability of particle codes to the simulation of long time-scale plasma physics phenomena by relaxing time-step and statistical constraints. Difference equations for an electrostatic model are presented, and analyses of the numerical stability of each scheme are given. Simulation examples are presented for a one-dimensional electrostatic model. Schemes are constructed that are stable at large time step, require fewer particles, and, hence, reduce input-output and memory requirements. Orbit averaging, however, in the unmagnetized electrostatic models tested so far is not as successful as in cases where there is a magnetic field. Methods are suggested in which orbit averaging should achieve more significant improvements in code efficiency.
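The stability gain that implicitness buys can be seen in a toy analog far simpler than a particle code: implicit midpoint integration of a harmonic oscillator stays bounded at time steps where an explicit scheme blows up. A sketch (not the paper's electrostatic PIC scheme):

```python
import numpy as np

def step_explicit(x, v, w, dt):       # explicit (forward) Euler: unstable
    return x + dt * v, v - dt * w * w * x

def step_implicit(x, v, w, dt):       # implicit midpoint: unconditionally stable
    # solve the 2x2 linear system for the midpoint values (xm, vm)
    A = np.array([[1.0, -0.5 * dt], [0.5 * dt * w * w, 1.0]])
    xm, vm = np.linalg.solve(A, np.array([x, v]))
    return 2 * xm - x, 2 * vm - v

w, dt = 1.0, 3.0                      # w*dt far beyond the explicit stability limit
xe, ve = 1.0, 0.0
xi, vi = 1.0, 0.0
for _ in range(200):
    xe, ve = step_explicit(xe, ve, w, dt)
    xi, vi = step_implicit(xi, vi, w, dt)

energy_i = xi * xi + vi * vi
assert abs(xe) > 1e6                  # explicit scheme has blown up
assert energy_i < 10.0                # implicit midpoint stays bounded
```

Orbit averaging then stacks on top of such a stable large-step scheme, averaging particle contributions over many sub-orbits per field update to relax the statistical constraints as well.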

  7. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...

  8. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...

  9. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...

  10. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...

  11. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...

  12. Total-pressure averaging in pulsating flows.

    NASA Technical Reports Server (NTRS)

    Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.

    1972-01-01

    A number of total-pressure tubes were tested in a nonsteady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered with the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles.

  13. Stochastic Averaging of Duhem Hysteretic Systems

    NASA Astrophysics Data System (ADS)

    YING, Z. G.; ZHU, W. Q.; NI, Y. Q.; KO, J. M.

    2002-06-01

    The response of the Duhem hysteretic system to externally and/or parametrically non-white random excitations is investigated by using the stochastic averaging method. A class of integrable Duhem hysteresis models covering many existing hysteresis models is identified, and the potential energy and dissipated energy of the Duhem hysteretic component are determined. The Duhem hysteretic system under random excitations is replaced equivalently by a non-hysteretic non-linear random system. The averaged Ito's stochastic differential equation for the total energy is derived, and the Fokker-Planck-Kolmogorov equation associated with the averaged Ito's equation is solved to yield the stationary probability density of total energy, from which the statistics of system response can be evaluated. The numerical results obtained by using the stochastic averaging method are in good agreement with those from digital simulation.

  14. Geologic analysis of averaged magnetic satellite anomalies

    NASA Technical Reports Server (NTRS)

    Goyal, H. K.; Vonfrese, R. R. B.; Ridgway, J. R.; Hinze, W. J.

    1985-01-01

    To investigate relative advantages and limitations for quantitative geologic analysis of magnetic satellite scalar anomalies derived from arithmetic averaging of orbital profiles within equal-angle or equal-area parallelograms, the anomaly averaging process was simulated by orbital profiles computed from spherical-earth crustal magnetic anomaly modeling experiments using Gauss-Legendre quadrature integration. The results indicate that averaging can provide reasonable values at satellite elevations, where contributing error factors within a given parallelogram include the elevation distribution of the data, and orbital noise and geomagnetic field attributes. Various inversion schemes including the use of equivalent point dipoles are also investigated as an alternative to arithmetic averaging. Although inversion can provide improved spherical grid anomaly estimates, these procedures are problematic in practice where computer scaling difficulties frequently arise due to a combination of factors including large source-to-observation distances ( 400 km), high geographic latitudes, and low geomagnetic field inclinations.

  15. Spacetime Average Density (SAD) cosmological measures

    SciTech Connect

    Page, Don N.

    2014-11-01

    The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.

  16. Total pressure averaging in pulsating flows

    NASA Technical Reports Server (NTRS)

    Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.

    1972-01-01

    A number of total-pressure tubes were tested in a non-steady flow generator in which the fraction of the period during which pressure is at a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure higher than the true average. Organ-pipe resonance, which further increased the indicated pressure, was encountered within the tubes at discrete frequencies. No combination of tube diameter, length, or geometry variation used in the tests resulted in negligible averaging error. A pneumatic-type probe was found to measure the true average pressure and is suggested as a comparison instrument for determining whether nonlinear averaging effects are serious in unknown pulsation profiles. The experiments were performed at a pressure level of 1 bar, for Mach numbers up to nearly 1, and at frequencies up to 3 kHz.
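    The averaging error described in this record arises because a probe's response to dynamic pressure is nonlinear (quadratic in velocity): the time average of ½ρu² exceeds ½ρū² whenever the flow pulsates. A minimal numerical sketch of this bias (illustrative only, not from the paper; the sinusoidal waveform and all values are assumptions):

    ```python
    import math

    def nonlinear_averaging_bias(u_mean, u_amp, rho=1.2, n=10000):
        """Compare the true time-averaged dynamic pressure with the value
        implied by the average velocity, for u(t) = u_mean + u_amp*sin(wt)."""
        # Sample one full period of the pulsating velocity.
        us = [u_mean + u_amp * math.sin(2 * math.pi * k / n) for k in range(n)]
        true_avg_q = sum(0.5 * rho * u * u for u in us) / n  # average of 1/2*rho*u^2
        q_of_avg_u = 0.5 * rho * u_mean ** 2                 # 1/2*rho*(average u)^2
        return true_avg_q, q_of_avg_u

    true_q, naive_q = nonlinear_averaging_bias(u_mean=10.0, u_amp=5.0)
    # The excess is 1/2*rho*(u_amp^2)/2, since sin^2 averages to 1/2 over a period.
    ```

    The mismatch between `true_q` and `naive_q` is the kind of nonlinear averaging effect a comparison instrument would be used to detect.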

  17. Monthly average polar sea-ice concentration

    USGS Publications Warehouse

    Schweitzer, Peter N.

    1995-01-01

    The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.

  18. Heuristic approach to capillary pressures averaging

    SciTech Connect

    Coca, B.P.

    1980-10-01

    Several methods are available for averaging capillary pressure curves. Among these are the J-curve and regression equations relating wetting-fluid saturation to porosity and permeability (with capillary pressure held constant). While the regression equations seem entirely empirical, the J-curve method appears theoretically sound because it is based on a relation between the average capillary radius and the permeability-porosity ratio. An analysis of each of these methods is given.
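    The J-curve method mentioned above is commonly associated with the Leverett J-function, which normalizes capillary pressure by interfacial tension and the square root of the permeability-porosity ratio so that curves from different samples collapse together. A hedged sketch of that standard normalization (the function name, argument units, and sample values are illustrative, not from this abstract):

    ```python
    import math

    def leverett_j(pc, k, phi, sigma, theta=0.0):
        """Leverett J-function: J = Pc * sqrt(k/phi) / (sigma * cos(theta)).
        pc: capillary pressure [Pa], k: permeability [m^2], phi: porosity [-],
        sigma: interfacial tension [N/m], theta: contact angle [rad]."""
        return pc * math.sqrt(k / phi) / (sigma * math.cos(theta))

    # Samples with different k and phi but similar pore geometry collapse
    # onto a single J(Sw) curve, which is the basis for averaging.
    j = leverett_j(pc=5000.0, k=1e-13, phi=0.2, sigma=0.03)
    ```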

  19. Instrument to average 100 data sets

    NASA Technical Reports Server (NTRS)

    Tuma, G. B.; Birchenough, A. G.; Rice, W. J.

    1977-01-01

    An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points, which are displayed on an oscilloscope screen to facilitate recording, and is available in real time. Input can be any parameter expressed as a ±10 V signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a corresponding decrease in resolution. This provides the means to correlate, and perhaps interrelate, the phenomena occurring in an internal combustion engine. The instrument has been used successfully on a 1975 Chevrolet V8 engine and on a Continental 6-cylinder aircraft engine. While it was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
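    The point-by-point cycle averaging the instrument performs can be sketched in a few lines. This is a digital analogue of the hardware described above, offered only as an illustration (the function name and list-of-lists representation are assumptions):

    ```python
    def average_curve(cycles):
        """Average a set of engine-parameter cycles point by point.
        Each cycle is a sequence sampled at the same number of points
        (2048 in the instrument described above); the result is the
        average curve across all cycles."""
        n_points = len(cycles[0])
        n_cycles = len(cycles)
        return [sum(c[i] for c in cycles) / n_cycles for i in range(n_points)]
    ```

    Averaging 100 such cycles suppresses the cycle-to-cycle variation while preserving the common waveform shape.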

  20. Average luminosity distance in inhomogeneous universes

    SciTech Connect

    Kostov, Valentin

    2010-04-01

    Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic), and is thus more directly applicable to our observations. In contrast to previous studies, the averaging is exact, non-perturbative, and includes all non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese, which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. Voids aligned along a certain direction give rise to a distance modulus correction which increases with redshift and is caused by cumulative gravitational lensing. That correction is present even for small voids and depends on their density contrast, not on their radius. Averaging over all directions destroys the cumulative lensing correction even in a non-randomized simple cubic lattice of voids. At low redshifts, the average distance modulus correction does not vanish, due to the peculiar velocities, despite the photon-flux conservation argument. A formula for the maximal possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that (a) have approximately constant densities in their interior and walls, and (b) are not in a deep nonlinear regime. The average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximal one after a single void diameter. That is traced to cancellations between the corrections from the fronts and backs of different voids.

  1. Explicit cosmological coarse graining via spatial averaging

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem; Singh, T. P.

    2008-01-01

    The present matter density of the Universe, while highly inhomogeneous on small scales, displays approximate homogeneity on large scales. We propose that whereas it is justified to use the Friedmann-Lemaître-Robertson-Walker (FLRW) line element (which describes an exactly homogeneous and isotropic universe) as a template to construct luminosity distances in order to compare observations with theory, the evolution of the scale factor in such a construction must be governed not by the standard Einstein equations for the FLRW metric, but by the modified Friedmann equations derived by Buchert (Gen Relat Gravit 32:105, 2000; 33:1381, 2001) in the context of spatial averaging in Cosmology. Furthermore, we argue that this scale factor, defined in the spatially averaged cosmology, will correspond to the effective FLRW metric provided the size of the averaging domain coincides with the scale at which cosmological homogeneity arises. This allows us, in principle, to compare predictions of a spatially averaged cosmology with observations, in the standard manner, for instance by computing the luminosity distance versus redshift relation. The predictions of the spatially averaged cosmology would in general differ from standard FLRW cosmology, because the scale factor now obeys the modified FLRW equations. This could help determine, by comparing with observations, whether or not cosmological inhomogeneities are an alternative explanation for the observed cosmic acceleration.

  2. Books average previous decade of economic misery.

    PubMed

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
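    The "previous decade" smoothing described above is a trailing moving average: each year's value averages the preceding window of annual index values. A minimal sketch (the function name and window handling are illustrative assumptions; the paper's peak fit used an 11-year window):

    ```python
    def trailing_average(series, window):
        """Trailing moving average: the value at position i averages the
        previous `window` entries (indices i-window .. i-1), mirroring the
        'previous decade of economic misery' construction."""
        return [sum(series[i - window:i]) / window
                for i in range(window, len(series) + 1)]
    ```

    Each output point depends only on past values, so the smoothed economic series can be correlated against the literary index of the current year without look-ahead.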

  3. High Average Power Yb:YAG Laser

    SciTech Connect

    Zapata, L E; Beach, R J; Payne, S A

    2001-05-23

    We are working on a composite thin-disk laser design that can be scaled as a source of high-brightness laser power for tactical engagement and other high-average-power applications. The key component is a diffusion-bonded composite comprising a thin gain medium and thicker cladding that is strikingly robust and resolves prior difficulties with high-average-power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high-power rods or slabs, the one-dimensional cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress this last year included extraction with high beam quality using a telescopic resonator; a heterogeneous thin-film coating prescription that meets the unusual requirements demanded by this laser architecture; and thermal management with our first-generation cooler. Progress was also made in the design of a second-generation laser.

  4. Books Average Previous Decade of Economic Misery

    PubMed Central

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  5. Attractors and Time Averages for Random Maps

    NASA Astrophysics Data System (ADS)

    Araujo, Vitor

    2006-07-01

    Considering random noise in finite dimensional parameterized families of diffeomorphisms of a compact finite dimensional boundaryless manifold M, we show the existence of time averages for almost every orbit of each point of M, imposing mild conditions on the families. Moreover these averages are given by a finite number of physical absolutely continuous stationary probability measures. We use this result to deduce that situations with infinitely many sinks and Henon-like attractors are not stable under random perturbations, e.g., Newhouse's and Colli's phenomena in the generic unfolding of a quadratic homoclinic tangency by a one-parameter family of diffeomorphisms.

  6. An improved moving average technical trading rule

    NASA Astrophysics Data System (ADS)

    Papailias, Fotis; Thomakos, Dimitrios D.

    2015-06-01

    This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance of this modified strategy differ from those of the standard approach; results show that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting a smaller maximum drawdown and shorter drawdown duration than the standard strategy.
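    One plausible reading of the strategy described above can be sketched as code: enter long on a cross-over 'buy' signal, then exit when price falls below a trailing threshold tied to the running maximum since entry. The entry/exit logic and the `stop_frac` parameter are assumptions for illustration; the paper's actual dynamic threshold may be defined differently:

    ```python
    def crossover_with_trailing_stop(prices, window=3, stop_frac=0.05):
        """Long-only sketch: buy when price crosses above its trailing moving
        average; sell when price drops below a dynamic trailing stop set a
        fixed fraction below the running peak since entry."""
        position, peak, signals = False, 0.0, []
        for i in range(window, len(prices)):
            ma = sum(prices[i - window:i]) / window  # trailing moving average
            p = prices[i]
            if not position and p > ma:
                position, peak = True, p
                signals.append((i, "buy"))
            elif position:
                peak = max(peak, p)
                if p < peak * (1 - stop_frac):  # dynamic trailing threshold
                    position = False
                    signals.append((i, "sell"))
        return signals
    ```

    Because the stop ratchets up with the running peak, the exit adapts to the trend rather than waiting for a cross back below the moving average.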

  7. The modulated average structure of mullite.

    PubMed

    Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X

    2015-06-01

    Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency toward disorder in mullite may be understood from considerations of hypothetical superstructures, which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in the case of 3:2 mullite. In any of these the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small, and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by any one of the modulation vectors q1, q2 or q3, a weak average modulation results, with slightly varying average occupation factors for the tetrahedral units.

  8. Polarized electron beams at milliampere average current

    SciTech Connect

    Poelker, Matthew

    2013-11-01

    This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 µA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms and to strategies for constructing a photogun that operates reliably at bias voltages > 350 kV.

  9. Average: the juxtaposition of procedure and context

    NASA Astrophysics Data System (ADS)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  10. Mean Element Propagations Using Numerical Averaging

    NASA Technical Reports Server (NTRS)

    Ely, Todd A.

    2009-01-01

    The long-term evolution characteristics (and stability) of an orbit are best characterized using a mean element propagation of the perturbed two-body variational equations of motion. The averaging process eliminates short-period terms, leaving only secular and long-period effects. In this study, a non-traditional approach is taken that averages the variational equations using adaptive numerical techniques and then numerically integrates the resulting equations of motion. Doing this avoids the Fourier series expansions and truncations required by the traditional analytic methods. The resultant numerical techniques can be easily adapted to propagations at most solar system bodies.
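    The key idea, averaging the variational equations over the fast angle to eliminate short-period terms, can be illustrated numerically. In this sketch a simple uniform-step quadrature stands in for the adaptive techniques used in the study, and the example perturbation is an assumption chosen to show the effect:

    ```python
    import math

    def orbit_average(f, n=2048):
        """Numerically average a perturbation f(M) over one full period of
        the fast angle M (e.g. mean anomaly). Periodic (short-period) terms
        cancel, leaving only the secular contribution."""
        return sum(f(2 * math.pi * k / n) for k in range(n)) / n

    # A perturbation with a secular part plus short-period oscillations:
    secular = orbit_average(lambda M: 0.3 + 0.5 * math.sin(M) + 0.2 * math.cos(2 * M))
    # Averaging removes the sin(M) and cos(2M) terms, leaving the secular rate.
    ```

    In a mean-element propagator, such averaged rates would then be integrated over long time spans without resolving the fast orbital motion.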

  11. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the new FEL. Manufacturers must test the motorcycles according to 40 CFR part 1051, subpart D...) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 Averaging provisions. (a) This section describes...

  12. A Functional Measurement Study on Averaging Numerosity

    ERIC Educational Resources Information Center

    Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio

    2014-01-01

    In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…

  13. Cryo-Electron Tomography and Subtomogram Averaging.

    PubMed

    Wan, W; Briggs, J A G

    2016-01-01

    Cryo-electron tomography (cryo-ET) allows 3D volumes to be reconstructed from a set of 2D projection images of a tilted biological sample. It allows densities to be resolved in 3D that would otherwise overlap in 2D projection images. Cryo-ET can be applied to resolve structural features in complex native environments, such as within the cell. Analogous to single-particle reconstruction in cryo-electron microscopy, structures present in multiple copies within tomograms can be extracted, aligned, and averaged, thus increasing the signal-to-noise ratio and resolution. This reconstruction approach, termed subtomogram averaging, can be used to determine protein structures in situ. It can also be applied to facilitate more conventional 2D image analysis approaches. In this chapter, we provide an introduction to cryo-ET and subtomogram averaging. We describe the overall workflow, including tomographic data collection, preprocessing, tomogram reconstruction, subtomogram alignment and averaging, classification, and postprocessing. We consider theoretical issues and practical considerations for each step in the workflow, along with descriptions of recent methodological advances and remaining limitations. PMID:27572733

  14. Initial Conditions in the Averaging Cognitive Model

    ERIC Educational Resources Information Center

    Noventa, S.; Massidda, D.; Vidotto, G.

    2010-01-01

    The initial state parameters s[subscript 0] and w[subscript 0] are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…

  15. Averaging on Earth-Crossing Orbits

    NASA Astrophysics Data System (ADS)

    Gronchi, G. F.; Milani, A.

    The orbits of planet-crossing asteroids (and comets) can undergo close approaches and collisions with some major planet. This introduces a singularity in the N-body Hamiltonian, and the averaging of the equations of motion, traditionally used to compute secular perturbations, is undefined. We show that it is possible to define in a rigorous way some generalised averaged equations of motion, in such a way that the generalised solutions are unique and piecewise smooth. This is obtained, both in the planar and in the three-dimensional case, by means of the method of extraction of the singularities by Kantorovich. The modified distance used to approximate the singularity is the one used by Wetherill in his method to compute probability of collision. Some examples of averaged dynamics have been computed; a systematic exploration of the averaged phase space to locate the secular resonances should be the next step. 'Alice sighed wearily. "I think you might do something better with the time," she said, "than waste it asking riddles with no answers"' (Alice in Wonderland, L. Carroll)

  16. Averaging models for linear piezostructural systems

    NASA Astrophysics Data System (ADS)

    Kim, W.; Kurdila, A. J.; Stepanyan, V.; Inman, D. J.; Vignola, J.

    2009-03-01

    In this paper, we consider a linear piezoelectric structure which employs a fast-switched, capacitively shunted subsystem to yield a tunable vibration absorber or energy harvester. The dynamics of the system is modeled as a hybrid system, where the switching law is considered as a control input and the ambient vibration is regarded as an external disturbance. It is shown that under mild assumptions of existence and uniqueness of the solution of this hybrid system, averaging theory can be applied, provided that the original system dynamics is periodic. The resulting averaged system is controlled by the duty cycle of a driven pulse-width modulated signal. The response of the averaged system approximates the performance of the original fast-switched linear piezoelectric system. It is analytically shown that the averaging approximation can be used to predict the electromechanically coupled system modal response as a function of the duty cycle of the input switching signal. This prediction is experimentally validated for the system consisting of a piezoelectric bimorph connected to an electromagnetic exciter. Experimental results show that the analytical predictions are observed in practice over a fixed "effective range" of switching frequencies. The same experiments show that the response of the switched system is insensitive to an increase in switching frequency above the effective frequency range.

  17. A Measure of the Average Intercorrelation

    ERIC Educational Resources Information Center

    Meyer, Edward P.

    1975-01-01

    Bounds are obtained for a coefficient proposed by Kaiser as a measure of average correlation and the coefficient is given an interpretation in the context of reliability theory. It is suggested that the root-mean-square intercorrelation may be a more appropriate measure of degree of relationships among a group of variables. (Author)

  18. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.

    SciTech Connect

    BEN-ZVI, ILAN; DAYRAN, D.; LITVINENKO, V.

    2005-08-21

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron-beam power, high brightness, and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average-power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high-average-power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  19. Measuring Time-Averaged Blood Pressure

    NASA Technical Reports Server (NTRS)

    Rothman, Neil S.

    1988-01-01

    Device measures time-averaged component of absolute blood pressure in artery. Includes compliant cuff around artery and external monitoring unit. Ceramic construction in monitoring unit suppresses ebb and flow of pressure-transmitting fluid in sensor chamber. Transducer measures only static component of blood pressure.

  20. Reformulation of Ensemble Averages via Coordinate Mapping.

    PubMed

    Schultz, Andrew J; Moustafa, Sabry G; Lin, Weisong; Weinstein, Steven J; Kofke, David A

    2016-04-12

    A general framework is established for reformulation of the ensemble averages commonly encountered in statistical mechanics. This "mapped-averaging" scheme allows approximate theoretical results that have been derived from statistical mechanics to be reintroduced into the underlying formalism, yielding new ensemble averages that represent exactly the error in the theory. The result represents a distinct alternative to perturbation theory for methodically employing tractable systems as a starting point for describing complex systems. Molecular simulation is shown to provide one appealing route to exploit this advance. Calculation of the reformulated averages by molecular simulation can proceed without contamination by noise produced by behavior that has already been captured by the approximate theory. Consequently, accurate and precise values of properties can be obtained while using less computational effort, in favorable cases, many orders of magnitude less. The treatment is demonstrated using three examples: (1) calculation of the heat capacity of an embedded-atom model of iron, (2) calculation of the dielectric constant of the Stockmayer model of dipolar molecules, and (3) calculation of the pressure of a Lennard-Jones fluid. It is observed that improvement in computational efficiency is related to the appropriateness of the underlying theory for the condition being simulated; the accuracy of the result is however not impacted by this. The framework opens many avenues for further development, both as a means to improve simulation methodology and as a new basis to develop theories for thermophysical properties. PMID:26950263

  1. Bayesian Model Averaging for Propensity Score Analysis

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  2. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the new FEL. Manufacturers must test the motorcycles according to 40 CFR part 1051, subpart D...) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 Averaging provisions. (a) This section describes...

  3. Average configuration of the induced venus magnetotail

    SciTech Connect

    McComas, D.J.; Spence, H.E.; Russell, C.T.

    1985-01-01

    In this paper we discuss the interaction of the solar wind flow with Venus and describe the morphology of magnetic field line draping in the Venus magnetotail. In particular, we describe the importance of the interplanetary magnetic field (IMF) X-component in controlling the configuration of field draping in this induced magnetotail, and using the results of a recently developed technique, we examine the average magnetic configuration of this magnetotail. The derived J × B forces must balance the average, steady state acceleration of, and pressure gradients in, the tail plasma. From this relation the average tail plasma velocity, lobe and current sheet densities, and average ion temperature have been derived. In this study we extend these results by making a connection between the derived consistent plasma flow speed and density, and the observational energy/charge range and sensitivity of the Pioneer Venus Orbiter (PVO) plasma analyzer, and demonstrate that if the tail is principally composed of O+, the bulk of the plasma should not be observable much of the time that the PVO is within the tail. Finally, we examine the importance of solar wind slowing upstream of the obstacle and its implications for the temperature of pick-up planetary ions, compare the derived ion temperatures with their theoretical maximum values, and discuss the implications of this process for comets and AMPTE-type releases.

  4. World average top-quark mass

    SciTech Connect

    Glenzinski, D.; /Fermilab

    2008-01-01

    This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.

  5. Why Johnny Can Be Average Today.

    ERIC Educational Resources Information Center

    Sturrock, Alan

    1997-01-01

    During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…

  6. Orbit Averaging in Perturbed Planetary Rings

    NASA Astrophysics Data System (ADS)

    Stewart, Glen R.

    2015-11-01

    The orbital period is typically much shorter than the time scale for dynamical evolution of large-scale structures in planetary rings. This large separation in time scales motivates the derivation of reduced models by averaging the equations of motion over the local orbit period (Borderies et al. 1985, Shu et al. 1985). A more systematic procedure for carrying out the orbit averaging is to use Lie transform perturbation theory to remove the dependence on the fast angle variable from the problem order-by-order in epsilon, where the small parameter epsilon is proportional to the fractional radial distance from exact resonance. This powerful technique has been developed and refined over the past thirty years in the context of gyrokinetic theory in plasma physics (Brizard and Hahm, Rev. Mod. Phys. 79, 2007). When the Lie transform method is applied to resonantly forced rings near a mean motion resonance with a satellite, the resulting orbit-averaged equations contain the nonlinear terms found previously, but also contain additional nonlinear self-gravity terms of the same order that were missed by Borderies et al. and by Shu et al. The additional terms result from the fact that the self-consistent gravitational potential of the perturbed rings modifies the orbit-averaging transformation at nonlinear order. These additional terms are the gravitational analog of electrostatic ponderomotive forces caused by large amplitude waves in plasma physics. The revised orbit-averaged equations are shown to modify the behavior of nonlinear density waves in planetary rings compared to the previously published theory. This research was supported by NASA's Outer Planets Research program.

  7. Lidar uncertainty and beam averaging correction

    NASA Astrophysics Data System (ADS)

    Giyanani, A.; Bierbooms, W.; van Bussel, G.

    2015-05-01

    Remote sensing of atmospheric variables with lidar is a relatively new technology for wind resource assessment in wind energy. A review of the draft version of an international guideline (CD IEC 61400-12-1 Ed.2) used for wind energy purposes is performed, and some additional atmospheric variables are taken into account for a proper representation of the site. A measurement campaign with two Leosphere vertical scanning WindCube lidars and met-mast measurements is used to compare the uncertainty in wind speed measurements under CD IEC 61400-12-1 Ed.2. The comparison revealed higher but realistic uncertainties. A simple model for lidar beam averaging correction is demonstrated for understanding deviations in the measurements. It can be further applied to beam averaging uncertainty calculations in flat and complex terrain.

  8. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

    We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking. PMID:20224119

  9. Polarized electron beams at milliampere average current

    SciTech Connect

    Poelker, M.

    2013-11-07

    This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ∼200 µA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltages > 350 kV.

  10. Apparent and average accelerations of the Universe

    SciTech Connect

    Bolejko, Krzysztof; Andersson, Lars E-mail: larsa@math.miami.edu

    2008-10-15

    In this paper we consider the relation between the volume deceleration parameter obtained within the Buchert averaging scheme and the deceleration parameter derived from supernova observation. This work was motivated by recent findings that showed that there are models which despite having Λ = 0 have volume deceleration parameter q^vol < 0. This opens the possibility that back-reaction and averaging effects may be used as an interesting alternative explanation to the dark energy phenomenon. We have calculated q^vol in some Lemaître-Tolman models. For those models which are chosen to be realistic and which fit the supernova data, we find that q^vol > 0, while those models which we have been able to find which exhibit q^vol < 0 turn out to be unrealistic. This indicates that care must be exercised in relating the deceleration parameter to observations.

  11. Emissions averaging top option for HON compliance

    SciTech Connect

    Kapoor, S.

    1993-05-01

    In one of its first major rule-setting directives under the CAA Amendments, EPA recently proposed tough new emissions controls for nearly two-thirds of the commercial chemical substances produced by the synthetic organic chemical manufacturing industry (SOCMI). However, the Hazardous Organic National Emission Standards for Hazardous Air Pollutants (HON) also affects several non-SOCMI processes. The author discusses proposed compliance deadlines, emissions averaging, and basic operating and administrative requirements.

  12. The Average Velocity in a Queue

    ERIC Educational Resources Information Center

    Frette, Vidar

    2009-01-01

    A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…

  13. Stochastic Games with Average Payoff Criterion

    SciTech Connect

    Ghosh, M. K.; Bagchi, A.

    1998-11-15

    We study two-person stochastic games with a Polish state space and compact action spaces, under the average payoff criterion and a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of a Nash equilibrium in stationary strategies is established under certain separability conditions.

  14. Average Annual Rainfall over the Globe

    ERIC Educational Resources Information Center

    Agrawal, D. C.

    2013-01-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…

  15. Representation of average drop sizes in sprays

    NASA Astrophysics Data System (ADS)

    Dodge, Lee G.

    1987-06-01

    Procedures are presented for processing drop-size measurements to obtain average drop sizes that represent overall spray characteristics. These procedures are not currently in general use, but they would represent an improvement over current practice. Clear distinctions are made between processing data for spatial- and temporal-type measurements. The conversion between spatial and temporal measurements is discussed. The application of these procedures is demonstrated by processing measurements of the same spray by two different types of instruments.

  16. Modern average global sea-surface temperature

    USGS Publications Warehouse

    Schweitzer, Peter N.

    1993-01-01

    The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
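    The averaging described here (grouping all images from the same calendar month across years and excluding invalid cells, so the climatology has fewer data gaps than any single year) can be sketched as follows. The array layout, fill value, and function name are illustrative assumptions, not the data set's actual format:

    ```python
    import numpy as np

    def monthly_climatology(sst, months, fill_value=-999.0):
        """Average SST images by calendar month across years.

        sst    : array (n_images, ny, nx) of SST grids
        months : array (n_images,) giving each image's calendar month (1-12)
        Cells equal to fill_value are treated as missing and excluded, so
        the averaged image has fewer invalid cells than any single year.
        """
        sst = np.asarray(sst, dtype=float)
        months = np.asarray(months)
        out = np.full((12,) + sst.shape[1:], np.nan)
        for m in range(1, 13):
            stack = sst[months == m]          # e.g. every January image
            if stack.size == 0:
                continue
            masked = np.ma.masked_equal(stack, fill_value)
            out[m - 1] = masked.mean(axis=0).filled(np.nan)
        return out
    ```

    The same grouping applied to week-of-year indices would yield the 52 weekly composites mentioned in the record.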

  17. Digital Averaging Phasemeter for Heterodyne Interferometry

    NASA Technical Reports Server (NTRS)

    Johnson, Donald; Spero, Robert; Shaklan, Stuart; Halverson, Peter; Kuhnert, Andreas

    2004-01-01

    A digital averaging phasemeter has been built for measuring the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. This phasemeter performs well enough to enable interferometric measurements of distance with accuracy of the order of 100 pm and with the ability to track distance as it changes at a speed of as much as 50 cm/s. This phasemeter is unique in that it is a single, integral system capable of performing three major functions that, heretofore, have been performed by separate systems: (1) measurement of the fractional-cycle phase difference, (2) counting of multiple cycles of phase change, and (3) averaging of phase measurements over multiple cycles for improved resolution. This phasemeter also offers the advantage of making repeated measurements at a high rate: the phase is measured on every heterodyne cycle. Thus, for example, in measuring the relative phase of two signals having a heterodyne frequency of 10 kHz, the phasemeter would accumulate 10,000 measurements per second. At this high measurement rate, an accurate average phase determination can be made more quickly than is possible at a lower rate.
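    The three functions the phasemeter integrates (fractional-cycle phase measurement, whole-cycle counting, and multi-cycle averaging) can be mimicked numerically. This is a toy sketch, not the instrument's actual design; the function name and block size are invented for illustration:

    ```python
    def track_phase(fractional_phases, block=4):
        """Toy model of the phasemeter's three functions: (1) a fractional-cycle
        phase reading per heterodyne cycle, (2) counting whole cycles of phase
        change by unwrapping, (3) averaging over `block` cycles for resolution.

        fractional_phases : per-cycle readings in cycles, each in [0, 1)
        Returns block-averaged unwrapped phase, in cycles.
        """
        unwrapped = [fractional_phases[0]]
        for f in fractional_phases[1:]:
            # add the integer number of cycles that keeps the step small (unwrap)
            k = round(unwrapped[-1] - f)
            unwrapped.append(f + k)
        # average consecutive blocks of cycles for improved resolution
        return [sum(unwrapped[i:i + block]) / block
                for i in range(0, len(unwrapped) - block + 1, block)]
    ```

    At the 10 kHz heterodyne frequency quoted above, such averaging would operate on 10,000 readings per second.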

  18. Disk-averaged synthetic spectra of Mars

    NASA Technical Reports Server (NTRS)

    Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor Thermal Emission Spectrometer and the Mariner 9 Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.

  19. Disk-averaged synthetic spectra of Mars.

    PubMed

    Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-08-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor Thermal Emission Spectrometer and the Mariner 9 Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin. PMID:16078866

  20. Viewpoint: observations on scaled average bioequivalence.

    PubMed

    Patterson, Scott D; Jones, Byron

    2012-01-01

    The two one-sided test procedure (TOST) has been used for average bioequivalence testing since 1992 and is required when marketing new formulations of an approved drug. TOST is known to require comparatively large numbers of subjects to demonstrate bioequivalence for highly variable drugs, defined as those drugs having intra-subject coefficients of variation greater than 30%. However, TOST has been shown to protect public health when multiple generic formulations enter the marketplace following patent expiration. Recently, scaled average bioequivalence (SABE) has been proposed as an alternative statistical analysis procedure for such products by multiple regulatory agencies. SABE testing requires that a three-period partial replicate cross-over or full replicate cross-over design be used. Following a brief summary of SABE analysis methods applied to existing data, we will consider three statistical ramifications of the proposed additional decision rules and the potential impact of implementation of scaled average bioequivalence in the marketplace using simulation. It is found that a constraint being applied is biased, that bias may also result from the common problem of missing data and that the SABE methods allow for much greater changes in exposure when generic-generic switching occurs in the marketplace. PMID:22162308

  1. Improving Reading Abilities of Average and Below Average Readers through Peer Tutoring.

    ERIC Educational Resources Information Center

    Galezio, Marne; And Others

    A program was designed to improve the progress of average and below average readers in a first-grade, a second-grade, and a sixth-grade classroom in a multicultural, multi-social economic district located in a three-county area northwest of Chicago, Illinois. Classroom teachers noted that students were having difficulty making adequate progress in…

  2. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    ERIC Educational Resources Information Center

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  3. A Green's function quantum average atom model

    DOE PAGES Beta

    Starrett, Charles Edward

    2015-05-21

    A quantum average atom model is reformulated using Green's functions. This allows integrals along the real energy axis to be deformed into the complex plane, with the advantage that sharp features such as resonances and bound states are broadened by a Lorentzian with a half-width chosen for numerical convenience. An implementation of this method therefore avoids numerically challenging resonance tracking and the search for weakly bound states, without changing the physical content or results of the model. A straightforward implementation results in up to a factor of 5 speed-up relative to an optimized orbital-based code.

  4. Average shape of fluctuations for subdiffusive walks

    NASA Astrophysics Data System (ADS)

    Yuste, S. B.; Acedo, L.

    2004-03-01

    We study the average shape of fluctuations for subdiffusive processes, i.e., processes with uncorrelated increments but where the waiting time distribution has a broad power-law tail. This shape is obtained analytically by means of a fractional diffusion approach. We find that, in contrast with processes where the waiting time between increments has finite variance, the fluctuation shape is no longer a semicircle: it tends to adopt a tablelike form as the subdiffusive character of the process increases. The theoretical predictions are compared with numerical simulation results.
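    A subdiffusive process of the kind studied here, uncorrelated ±1 increments separated by waiting times with a broad power-law tail, can be simulated directly. This is a minimal illustrative sketch (Pareto-distributed waits with exponent alpha are an assumption for concreteness), not the paper's fractional-diffusion calculation:

    ```python
    import random

    def ctrw_position(t_max, alpha=0.5, rng=random):
        """Position at time t_max of a continuous-time random walk with
        uncorrelated +/-1 increments and heavy-tailed waiting times
        psi(t) ~ t^(-(1+alpha)), subdiffusive for 0 < alpha < 1."""
        t, x = 0.0, 0
        while True:
            # Pareto(alpha) waiting time with minimum 1: infinite mean for alpha < 1
            wait = rng.random() ** (-1.0 / alpha)
            if t + wait > t_max:
                return x
            t += wait
            x += rng.choice((-1, 1))
    ```

    Averaging x**2 over many such walks shows the mean-squared displacement growing like t**alpha rather than t, the hallmark of subdiffusion.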

  5. The averaging method in applied problems

    NASA Astrophysics Data System (ADS)

    Grebenikov, E. A.

    1986-04-01

    The book presents the totality of methods for analyzing complicated nonlinear oscillating systems known in the literature as the "averaging method". The author describes the constructive part of this method, that is, its concrete forms and corresponding algorithms, on mathematical models that are sufficiently general but built from concrete problems. The book is written so that a reader interested in the techniques and algorithms of the asymptotic theory of ordinary differential equations can solve such problems independently. It is intended for specialists in applied mathematics and mechanics.

  6. Auto-exploratory average reward reinforcement learning

    SciTech Connect

    Ok, DoKyeong; Tadepalli, P.

    1996-12-31

    We introduce a model-based average reward Reinforcement Learning method called H-learning and compare it with its discounted counterpart, Adaptive Real-Time Dynamic Programming, in a simulated robot scheduling task. We also introduce an extension to H-learning, which automatically explores the unexplored parts of the state space, while always choosing greedy actions with respect to the current value function. We show that this "Auto-exploratory H-learning" performs better than the original H-learning under previously studied exploration methods such as random, recency-based, or counter-based exploration.

  7. Average observational quantities in the timescape cosmology

    SciTech Connect

    Wiltshire, David L.

    2009-12-15

    We examine the properties of a recently proposed observationally viable alternative to homogeneous cosmology with smooth dark energy, the timescape cosmology. In the timescape model cosmic acceleration is realized as an apparent effect related to the calibration of clocks and rods of observers in bound systems relative to volume-average observers in an inhomogeneous geometry in ordinary general relativity. The model is based on an exact solution to a Buchert average of the Einstein equations with backreaction. The present paper examines a number of observational tests which will enable the timescape model to be distinguished from homogeneous cosmologies with a cosmological constant or other smooth dark energy, in current and future generations of dark energy experiments. Predictions are presented for comoving distance measures; H(z); the equivalent of the dark energy equation of state, w(z); the Om(z) measure of Sahni, Shafieloo, and Starobinsky; the Alcock-Paczynski test; the baryon acoustic oscillation measure, D_V; the inhomogeneity test of Clarkson, Bassett, and Lu; and the time drift of cosmological redshifts. Where possible, the predictions are compared to recent independent studies of similar measures in homogeneous cosmologies with dark energy. Three separate tests with indications of results in possible tension with the ΛCDM model are found to be consistent with the expectations of the timescape cosmology.

  8. MACHINE PROTECTION FOR HIGH AVERAGE CURRENT LINACS

    SciTech Connect

    Jordan, Kevin; Allison, Trent; Evans, Richard; Coleman, James; Grippo, Albert

    2003-05-01

    A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths is programmed into gate arrays; the output of the matrix is an allowable maximum average power limit. This power limit is enforced by the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron path) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.

  9. Climatology of globally averaged thermospheric mass density

    NASA Astrophysics Data System (ADS)

    Emmert, J. T.; Picone, J. M.

    2010-09-01

    We present a climatological analysis of daily globally averaged density data, derived from orbit data and covering the years 1967-2007, along with an empirical Global Average Mass Density Model (GAMDM) that encapsulates the 1986-2007 data. The model represents density as a function of the F10.7 solar radio flux index, the day of year, and the Kp geomagnetic activity index. We discuss in detail the dependence of the data on each of the input variables, and demonstrate that all of the terms in the model represent consistent variations in both the 1986-2007 data (on which the model is based) and the independent 1967-1985 data. We also analyze the uncertainty in the results, and quantify how the variance in the data is apportioned among the model terms. We investigate the annual and semiannual variations of the data and quantify the amplitude, height dependence, solar cycle dependence, and interannual variability of these oscillatory modes. The auxiliary material includes Fortran 90 code for evaluating GAMDM.

  10. Global atmospheric circulation statistics: Four year averages

    NASA Technical Reports Server (NTRS)

    Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.

    1987-01-01

    Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.

  11. Average Gait Differential Image Based Human Recognition

    PubMed Central

    Chen, Jinyan; Liu, Jiansheng

    2014-01-01

    The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by the accumulation of the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition. PMID:24895648
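    The AGDI construction as described, accumulating the silhouette differences between adjacent frames into a single feature image, reduces to a couple of array operations. The absolute-difference and mean normalization below are our assumptions about details the abstract leaves open:

    ```python
    import numpy as np

    def average_gait_differential_image(silhouettes):
        """AGDI sketch: accumulate frame-to-frame silhouette differences and
        normalize, keeping both kinetic information (where change happens)
        and static information (where it never does) in one feature image.

        silhouettes : array (n_frames, height, width) of binary masks
        """
        s = np.asarray(silhouettes, dtype=float)
        diffs = np.abs(s[1:] - s[:-1])   # differences between adjacent frames
        return diffs.mean(axis=0)        # accumulate and normalize
    ```

    Feature vectors for recognition would then be extracted from this image, e.g. with 2DPCA as in the paper.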

  12. Quetelet, the average man and medical knowledge.

    PubMed

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du système social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine. PMID:23970171

  13. Average power laser experiment (APLE) design

    NASA Astrophysics Data System (ADS)

    Parazzoli, C. G.; Rodenburg, R. E.; Dowell, D. H.; Greegor, R. B.; Kennedy, R. C.; Romero, J. B.; Siciliano, J. A.; Tong, K.-O.; Vetter, A. M.; Adamski, J. L.; Pistoresi, D. J.; Shoffstall, D. R.; Quimby, D. C.

    1992-07-01

    We describe the details and the design requirements for the 100 kW CW radio frequency free electron laser at 10 μm to be built at Boeing Aerospace and Electronics Division in Seattle with the collaboration of Los Alamos National Laboratory. APLE is a single-accelerator master-oscillator and power-amplifier (SAMOPA) device. The goal of this experiment is to demonstrate a fully operational RF-FEL at 10 μm with an average power of 100 kW. The approach and wavelength were chosen on the basis of maximum cost effectiveness, including utilization of existing hardware and reasonable risk, and potential for future applications. Current plans call for an initial oscillator power demonstration in the fall of 1994 and full SAMOPA operation by December 1995.

  14. Asymmetric network connectivity using weighted harmonic averages

    NASA Astrophysics Data System (ADS)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity: a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the Netflix Prize, and find a significant improvement using our method over a baseline.
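    The combination rule named in the abstract, a weighted harmonic average, is easy to state in code. Note this is only the building block, not the paper's full recursive GEN definition, and the function name is ours:

    ```python
    def weighted_harmonic_average(values, weights):
        """Weighted harmonic average  H = (sum w_i) / (sum w_i / x_i).
        Because H is dominated by the smallest x_i, a single strong (close)
        connection keeps two nodes close regardless of many weak ones."""
        return sum(weights) / sum(w / x for w, x in zip(weights, values))
    ```

    This small-value dominance is what lets a harmonic-average-based closeness distinguish topologies that shortest-path distance treats as identical.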

  15. Average deployments versus missile and defender parameters

    SciTech Connect

    Canavan, G.H.

    1991-03-01

    This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with two-fold less effectiveness. The next 10% could double constellation sizes. 5 refs., 7 figs.

  16. Average prime-pair counting formula

    NASA Astrophysics Data System (ADS)

    Korevaar, Jaap; Riele, Herman Te

    2010-04-01

    Taking r > 0, let π_{2r}(x) denote the number of prime pairs (p, p+2r) with p ≤ x. The prime-pair conjecture of Hardy and Littlewood (1923) asserts that π_{2r}(x) ~ 2C_{2r} li_2(x) with an explicit constant C_{2r} > 0. There seems to be no good conjecture for the remainders ω_{2r}(x) = π_{2r}(x) − 2C_{2r} li_2(x) that corresponds to Riemann's formula for π(x) − li(x). However, there is a heuristic approximate formula for averages of the remainders ω_{2r}(x) which is supported by numerical results.
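    The counting function π_{2r}(x) defined here is straightforward to evaluate for small x with a sieve; a minimal sketch:

    ```python
    def prime_pair_count(x, r=1):
        """pi_{2r}(x): the number of prime pairs (p, p + 2r) with p <= x,
        computed with a sieve of Eratosthenes up to x + 2r."""
        n = x + 2 * r
        sieve = [True] * (n + 1)
        sieve[0] = sieve[1] = False
        for i in range(2, int(n ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i::i] = [False] * len(sieve[i * i::i])
        return sum(1 for p in range(2, x + 1) if sieve[p] and sieve[p + 2 * r])
    ```

    Comparing such counts against 2C_{2r} li_2(x) over many values of r is the kind of numerical check that supports the averaged remainder formula.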

  17. The balanced survivor average causal effect.

    PubMed

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-01-01

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure. PMID:23658214
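    The estimator proposed above, comparing the longitudinal outcome between equivalent fractions of the longest-surviving patients in each arm, can be sketched directly. The variable names and the 50% default fraction are illustrative, and the paper's bias expressions and bootstrap inference are omitted:

    ```python
    def balanced_sace_estimate(surv_trt, y_trt, surv_ctl, y_ctl, frac=0.5):
        """Sketch of the balanced-SACE estimator: mean longitudinal outcome
        among the top `frac` longest survivors in the treatment arm minus
        the same quantity in the control arm (no monotonicity assumption)."""
        def top_fraction_mean(surv, y, frac):
            # sort patients by survival time, longest first, and keep top frac
            order = sorted(range(len(surv)), key=lambda i: surv[i], reverse=True)
            k = max(1, round(frac * len(surv)))
            return sum(y[i] for i in order[:k]) / k
        return (top_fraction_mean(surv_trt, y_trt, frac)
                - top_fraction_mean(surv_ctl, y_ctl, frac))
    ```

    In practice the fraction would be chosen so the retained subsets are balanced with respect to potential survival times, which is the point of the estimand.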

  18. Averaged implicit hydrodynamic model of semiflexible filaments.

    PubMed

    Chandran, Preethi L; Mofrad, Mohammad R K

    2010-03-01

    We introduce a method to incorporate hydrodynamic interaction in a model of semiflexible filament dynamics. Hydrodynamic screening and other hydrodynamic interaction effects lead to nonuniform drag along even a rigid filament, and cause bending fluctuations in semiflexible filaments, in addition to the nonuniform Brownian forces. We develop our hydrodynamics model from a string-of-beads idealization of filaments, and capture hydrodynamic interaction by Stokes superposition of the solvent flow around beads. However, instead of the commonly used first-order Stokes superposition, we do an equivalent of infinite-order superposition by solving for the true relative velocity or hydrodynamic velocity of the beads implicitly. We also avoid the computational cost of the string-of-beads idealization by assuming a single normal, parallel and angular hydrodynamic velocity over sections of beads, excluding the beads at the filament ends. We do not include the end beads in the averaging and solve for them separately instead, in order to better resolve the drag profiles along the filament. A large part of the hydrodynamic drag is typically concentrated at the filament ends. The averaged implicit hydrodynamics method can be easily incorporated into a string-of-rods idealization of semiflexible filaments that was developed earlier by the authors. The earlier model was used to solve the Brownian dynamics of semiflexible filaments, but without hydrodynamic interactions incorporated. We validate our current model at each stage of development, and reproduce experimental observations on the mean-squared displacement of fluctuating actin filaments. We also show how hydrodynamic interaction confines a fluctuating actin filament between two stationary lateral filaments. Finally, preliminary examinations suggest that a large part of the observed velocity in the interior segments of a fluctuating filament can be attributed to induced solvent flow or hydrodynamic screening. PMID:20365783

  19. The entropy in finite N-unit nonextensive systems: The normal average and q-average

    NASA Astrophysics Data System (ADS)

    Hasegawa, Hideo

    2010-09-01

    We discuss the Tsallis entropy in finite N-unit nonextensive systems by using the multivariate q-Gaussian probability distribution functions (PDFs) derived by the maximum entropy methods with the normal average and the q-average (q: the entropic index). The Tsallis entropy obtained by the q-average has an exponential N dependence: Sq(N)/N ≃ e^{(1-q)N S1(1)} for large N (≫1/(1-q)>0). In contrast, the Tsallis entropy obtained by the normal average is given by Sq(N)/N ≃ 1/[(q-1)N] for large N (≫1/(q-1)>0). The N dependences of the Tsallis entropy obtained by the q- and normal averages are generally quite different, although both results are in fairly good agreement for |q-1|≪1. The validity of the factorization approximation (FA) to PDFs, which has been commonly adopted in the literature, has been examined. We have calculated correlations defined by Cm = ⟨(δxi δxj)^m⟩ - ⟨(δxi)^m⟩⟨(δxj)^m⟩ for i≠j, where δxi = xi - ⟨xi⟩, and the bracket ⟨·⟩ stands for the normal and q-averages. The first-order correlation (m=1) expresses the intrinsic correlation, and higher-order correlations with m≥2 include nonextensivity-induced correlation, whose physical origin is elucidated in the superstatistics.
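    The Tsallis entropy whose N dependence is analyzed above is, for any discrete distribution, a one-line formula. A minimal sketch of that definition (not the paper's multivariate q-Gaussian machinery) is:

    ```python
    import numpy as np

    def tsallis_entropy(p, q):
        """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1).

        Reduces to the Boltzmann-Gibbs-Shannon entropy in the limit q -> 1.
        """
        p = np.asarray(p, dtype=float)
        p = p[p > 0]  # zero-probability states contribute nothing
        if abs(q - 1.0) < 1e-12:
            return float(-np.sum(p * np.log(p)))  # q -> 1 limit
        return float((1.0 - np.sum(p ** q)) / (q - 1.0))

    # Uniform distribution over W states: S_q = (1 - W**(1-q)) / (q - 1)
    W, q = 4, 2.0
    print(tsallis_entropy(np.ones(W) / W, q))  # (1 - 1/4) / 1 = 0.75
    ```

    For q close to 1 the value approaches the Shannon entropy, consistent with the abstract's observation that the two averaging schemes agree for |q-1| ≪ 1.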

  20. Flux-Averaged and Volume-Averaged Concentrations in Continuum Approaches to Solute Transport

    NASA Astrophysics Data System (ADS)

    Parker, J. C.; van Genuchten, M. Th.

    1984-07-01

    Transformations between volume-averaged pore fluid concentrations and flux-averaged concentrations are presented which show that both modes of concentration obey convective-dispersive transport equations of identical mathematical form for nonreactive solutes. The pertinent boundary conditions for the two modes, however, do not transform identically. Solutions of the convection-dispersion equation for a semi-infinite system during steady flow subject to a first-type inlet boundary condition are shown to yield flux-averaged concentrations, while solutions subject to a third-type boundary condition yield volume-averaged concentrations. These solutions may be applied with reasonable impunity to finite as well as semi-infinite media if back mixing at the exit is precluded. Implications of the distinction between resident and flux concentrations for laboratory and field studies of solute transport are discussed. It is suggested that perceived limitations of the convection-dispersion model for media with large variations in pore water velocities may in certain cases be attributable to a failure to distinguish between volume-averaged and flux-averaged concentrations.
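    The first-type (constant-concentration) inlet case referred to above has the classical Ogata-Banks closed form. A minimal sketch of that solution, with illustrative parameter values that are not from the paper:

    ```python
    import math

    def cde_first_type(x, t, v, D, c0=1.0):
        """Ogata-Banks solution of dC/dt = D d2C/dx2 - v dC/dx on a
        semi-infinite column with C(0, t) = c0 (first-type inlet BC).
        Per the abstract, this profile is interpreted as a flux-averaged
        concentration."""
        a = (x - v * t) / (2.0 * math.sqrt(D * t))
        b = (x + v * t) / (2.0 * math.sqrt(D * t))
        # The exponential term can overflow at high Peclet number even though
        # its product with erfc(b) is negligible; guard crudely.
        try:
            second = math.exp(v * x / D) * math.erfc(b)
        except OverflowError:
            second = 0.0
        return 0.5 * c0 * (math.erfc(a) + second)

    # Concentration profile along a 1 m column after 1 day (illustrative units)
    v, D = 0.5, 0.05  # pore velocity [m/day], dispersion coefficient [m^2/day]
    profile = [cde_first_type(x / 10.0, 1.0, v, D) for x in range(1, 11)]
    ```

    The third-type (flux) inlet solution, which the abstract associates with volume-averaged concentrations, has a longer closed form and is omitted here.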

  1. Optimizing Average Precision Using Weakly Supervised Data.

    PubMed

    Behl, Aseem; Mohapatra, Pritish; Jawahar, C V; Kumar, M Pawan

    2015-12-01

    Many tasks in computer vision, such as action classification and object detection, require us to rank a set of samples according to their relevance to a particular visual category. The performance of such tasks is often measured in terms of the average precision (AP). Yet it is common practice to employ the support vector machine (SVM) classifier, which optimizes a surrogate 0-1 loss. The popularity of SVM can be attributed to its empirical performance. Specifically, in fully supervised settings, SVM tends to provide similar accuracy to AP-SVM, which directly optimizes an AP-based loss. However, we hypothesize that in the significantly more challenging and practically useful setting of weakly supervised learning, it becomes crucial to optimize the right accuracy measure. In order to test this hypothesis, we propose a novel latent AP-SVM that minimizes a carefully designed upper bound on the AP-based loss function over weakly supervised samples. Using publicly available datasets, we demonstrate the advantage of our approach over standard loss-based learning frameworks on three challenging problems: action classification, character recognition and object detection. PMID:26539857
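    The AP loss that AP-SVM variants optimize is built on the standard ranking definition of average precision. A minimal reference computation (not the authors' learning code):

    ```python
    def average_precision(ranked_labels):
        """AP for a ranked list of binary relevance labels (1 = relevant):
        the mean of precision@k taken at each relevant position k."""
        hits, precisions = 0, []
        for k, rel in enumerate(ranked_labels, start=1):
            if rel:
                hits += 1
                precisions.append(hits / k)
        return sum(precisions) / len(precisions) if precisions else 0.0

    # Relevant samples at ranks 1 and 3: AP = (1/1 + 2/3) / 2
    print(average_precision([1, 0, 1, 0]))  # 0.8333...
    ```

    Because AP depends on the positions of all relevant samples jointly, it is not decomposable over individual samples, which is why optimizing it requires a structured surrogate rather than a per-sample loss.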

  2. Calculating Free Energies Using Average Force

    NASA Technical Reports Server (NTRS)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
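    The unconstrained route described above ultimately integrates the negative of the average force along the selected coordinate, dA/dξ = -⟨F_ξ⟩. A minimal numerical sketch with the trapezoid rule; the harmonic mean-force profile is synthetic and illustrative, not from the paper:

    ```python
    def free_energy_from_mean_force(xi, mean_force):
        """Integrate dA/dxi = -<F_xi> with the trapezoid rule; A(xi[0]) = 0."""
        A = [0.0]
        for i in range(1, len(xi)):
            dxi = xi[i] - xi[i - 1]
            A.append(A[-1] - 0.5 * (mean_force[i] + mean_force[i - 1]) * dxi)
        return A

    # Synthetic check: a harmonic well F = -k*xi should give A(xi) = k*xi^2/2
    k = 2.0
    xi = [i * 0.01 for i in range(101)]  # coordinate grid 0.0 .. 1.0
    force = [-k * x for x in xi]         # average force at each grid point
    A = free_energy_from_mean_force(xi, force)
    print(A[-1])  # ~ k * 1.0**2 / 2 = 1.0
    ```

    In practice the ⟨F_ξ⟩ values would come from averaging the instantaneous force over a molecular dynamics trajectory at each value of the coordinate.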

  3. Average oxidation state of carbon in proteins

    PubMed Central

    Dick, Jeffrey M.

    2014-01-01

    The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (ZC) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation–reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between ZC and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in ZC in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower ZC tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales. PMID:25165594
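    The elemental-ratio calculation of ZC described above can be sketched directly. The coefficients below follow the usual formal-charge bookkeeping (H +1, N -3, O -2, S -2, with carbon absorbing the balance of the net charge), which is an assumption of this sketch rather than a quotation of the paper:

    ```python
    def zc(c, h, n=0, o=0, s=0, z=0):
        """Average oxidation state of carbon for a molecule C_c H_h N_n O_o S_s
        with net charge z, computed as an elemental ratio:
        ZC = (z - h + 3n + 2o + 2s) / c."""
        if c == 0:
            raise ValueError("molecule contains no carbon")
        return (z - h + 3 * n + 2 * o + 2 * s) / c

    print(zc(c=1, h=4))            # methane CH4: -4.0 (fully reduced carbon)
    print(zc(c=1, o=2))            # carbon dioxide CO2: +4.0 (fully oxidized)
    print(zc(c=2, h=5, n=1, o=2))  # glycine C2H5NO2: +1.0
    ```

    The glycine value can be checked by hand: the methylene carbon is -1 and the carboxyl carbon +3, averaging to +1.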

  4. Optimal estimation of the diffusion coefficient from non-averaged and averaged noisy magnitude data

    NASA Astrophysics Data System (ADS)

    Kristoffersen, Anders

    2007-08-01

    The magnitude operation changes the signal distribution in MRI images from Gaussian to Rician. This introduces a bias that must be taken into account when estimating the apparent diffusion coefficient. Several estimators are known in the literature. In the present paper, two novel schemes are proposed. Both are based on simple least squares fitting of the measured signal, either to the median (MD) or to the maximum probability (MP) value of the Probability Density Function (PDF). Fitting to the mean (MN) or a high signal-to-noise ratio approximation to the mean (HS) is also possible. Special attention is paid to the case of averaged magnitude images. The PDF, which cannot be expressed in closed form, is analyzed numerically. A scheme for performing maximum likelihood (ML) estimation from averaged magnitude images is proposed. The performance of several estimators is evaluated by Monte Carlo (MC) simulations. We focus on typical clinical situations, where the number of acquisitions is limited. For non-averaged data the optimal choice is found to be MP or HS, whereas uncorrected schemes and the power image (PI) method should be avoided. For averaged data MD and ML perform equally well, whereas uncorrected schemes and HS are inadequate. MD provides easier implementation and higher computational efficiency than ML. Unbiased estimation of the diffusion coefficient allows high resolution diffusion tensor imaging (DTI) and may therefore help solving the problem of crossing fibers encountered in white matter tractography.

  5. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  6. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  7. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  8. Determining average path length and average trapping time on generalized dual dendrimer

    NASA Astrophysics Data System (ADS)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., the trap placed on a central node and the trap uniformly distributed over all nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. We also discuss the influence of the coordination number on trapping efficiency.
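    The average path length computed exactly for these networks can, for any finite unweighted network, also be obtained numerically by breadth-first search from every node. A generic sketch (not specialized to the dendrimer or Husimi-cactus structures studied in the paper):

    ```python
    from collections import deque

    def average_path_length(adj):
        """APL of a connected unweighted graph given as {node: set(neighbors)}:
        the mean shortest-path distance over all node pairs, via BFS from
        each node."""
        nodes = list(adj)
        total, pairs = 0, 0
        for src in nodes:
            dist = {src: 0}
            queue = deque([src])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            total += sum(dist.values())  # distances to all other nodes
            pairs += len(nodes) - 1      # ordered pairs from this source
        # averaging over ordered pairs equals averaging over unordered pairs
        return total / pairs

    # Star on 4 nodes: three center-leaf pairs at distance 1,
    # three leaf-leaf pairs at distance 2 -> APL = (3*1 + 3*2) / 6 = 1.5
    star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
    print(average_path_length(star))  # 1.5
    ```

    This brute-force approach costs O(N * E) time, which is why papers like this one derive closed-form APL expressions instead of simulating large networks.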

  9. Instantaneous, phase-averaged, and time-averaged pressure from particle image velocimetry

    NASA Astrophysics Data System (ADS)

    de Kat, Roeland

    2015-11-01

    Recent work on pressure determination using velocity data from particle image velocimetry (PIV) resulted in approaches that allow for instantaneous and volumetric pressure determination. However, applying these approaches is not always feasible (e.g. due to resolution, access, or other constraints) or desired. In those cases pressure determination approaches using phase-averaged or time-averaged velocity provide an alternative. To assess the performance of these different pressure determination approaches against one another, they are applied to a single data set and their results are compared with each other and with surface pressure measurements. For this assessment, the data set of a flow around a square cylinder (de Kat & van Oudheusden, 2012, Exp. Fluids 52:1089-1106) is used. RdK is supported by a Leverhulme Trust Early Career Fellowship.

  10. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Compliance on average. 80.67 Section...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.67 Compliance on average. The requirements... with one or more of the requirements of § 80.41 is determined on average (“averaged gasoline”)....

  11. 20 CFR 226.62 - Computing average monthly compensation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computing average monthly compensation. 226... RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation...

  12. 20 CFR 226.62 - Computing average monthly compensation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Computing average monthly compensation. 226... RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation...

  13. 20 CFR 226.62 - Computing average monthly compensation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Computing average monthly compensation. 226.62... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation is...

  14. 20 CFR 226.62 - Computing average monthly compensation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Computing average monthly compensation. 226.62... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation is...

  15. 20 CFR 226.62 - Computing average monthly compensation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Computing average monthly compensation. 226... RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation...

  16. Arithmetic averaging: A versatile technique for smoothing and trend removal

    SciTech Connect

    Clark, E.L.

    1993-12-31

    Arithmetic averaging is simple, stable, and can be very effective in attenuating the undesirable components in a complex signal, thereby providing smoothing or trend removal. An arithmetic average is easy to calculate. However, the resulting modifications to the data, in both the time and frequency domains, are not well understood by many experimentalists. This paper discusses the following aspects of averaging: (1) types of averages -- simple, cumulative, and moving; and (2) time and frequency domain effects of the averaging process.
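    The three average types the paper distinguishes are each a few lines of code. A minimal sketch (the example signal is illustrative):

    ```python
    def simple_average(x):
        """Simple (block) average: one number summarizing the whole record."""
        return sum(x) / len(x)

    def cumulative_average(x):
        """Cumulative average: mean of all samples up to each point; converges
        toward the simple average as more samples are included."""
        out, total = [], 0.0
        for i, v in enumerate(x, start=1):
            total += v
            out.append(total / i)
        return out

    def moving_average(x, window):
        """Moving average of fixed window length: a low-pass operation that
        attenuates components with periods shorter than the window.  Trend
        removal subtracts this smoothed series from the original."""
        return [sum(x[i:i + window]) / window
                for i in range(len(x) - window + 1)]

    signal = [1.0, 3.0, 2.0, 4.0, 3.0, 5.0]
    print(simple_average(signal))      # 3.0
    print(cumulative_average(signal))  # [1.0, 2.0, 2.0, 2.5, 2.6, 3.0]
    print(moving_average(signal, 3))   # [2.0, 3.0, 3.0, 4.0]
    ```

    Note the frequency-domain effect the paper discusses: the moving average is a convolution with a rectangular window, so its response has nulls at frequencies whose period equals the window length.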

  17. Scaling of average weighted shortest path and average receiving time on weighted expanded Koch networks

    NASA Astrophysics Data System (ADS)

    Wu, Zikai; Hou, Baoyu; Zhang, Hongjuan; Jin, Feng

    2014-04-01

    Deterministic network models have been attractive media for discussing how dynamical processes depend on network structural features. On the other hand, the heterogeneity of weights affects dynamical processes taking place on networks. In this paper, we present a family of weighted expanded Koch networks based on Koch networks. They originate from an r-polygon, and in each subsequent evolutionary step every node of the current generation produces m r-polygons that include the node, with weighted edges scaled by a factor w. We derive closed-form expressions for the average weighted shortest path length (AWSP). In large networks, the AWSP stays bounded as the network order grows (0 < w < 1). Then, we focus on a special random walk and the trapping issue on the networks. In more detail, we calculate exactly the average receiving time (ART). The ART exhibits a sub-linear dependence on network order (0 < w < 1), which implies that nontrivial weighted expanded Koch networks are more efficient than un-weighted expanded Koch networks in receiving information. Besides, the efficiency of receiving information at hub nodes also depends on the parameters m and r. These findings may pave the way for controlling information transportation on general weighted networks.

  18. Cost averaging techniques for robust control of flexible structural systems

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.; Crawley, Edward F.

    1991-01-01

    Viewgraphs on cost averaging techniques for robust control of flexible structural systems are presented. Topics covered include: modeling of parameterized systems; average cost analysis; reduction of parameterized systems; and static and dynamic controller synthesis.

  19. Sample Size Bias in Judgments of Perceptual Averages

    ERIC Educational Resources Information Center

    Price, Paul C.; Kimura, Nicole M.; Smith, Andrew R.; Marshall, Lindsay D.

    2014-01-01

    Previous research has shown that people exhibit a sample size bias when judging the average of a set of stimuli on a single dimension. The more stimuli there are in the set, the greater people judge the average to be. This effect has been demonstrated reliably for judgments of the average likelihood that groups of people will experience negative,…

  20. Averaging in SU(2) open quantum random walk

    NASA Astrophysics Data System (ADS)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  1. 76 FR 57081 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-15

    ... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2010 was $28,284. The average annual cost to confine an inmate in a Community Corrections...

  2. 78 FR 16711 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-18

    ... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2011 was $28,893.40. The average annual cost to confine an inmate in a Community...

  3. 76 FR 6161 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-03

    ... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2009 was $25,251. The average annual cost to confine an inmate in a Community Corrections...

  4. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) Radial average terrain elevation is calculated as the average of the elevation along a straight line path... radial path extends over foreign territory or water, such portion must not be included in the computation of average elevation unless the radial path again passes over United States land between 16 and...

  5. 7 CFR 760.640 - National average market price.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 7 2014-01-01 2014-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...

  6. 7 CFR 760.640 - National average market price.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 7 2012-01-01 2012-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...

  7. 7 CFR 760.640 - National average market price.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...

  8. 7 CFR 760.640 - National average market price.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 7 2013-01-01 2013-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...

  9. 7 CFR 760.640 - National average market price.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...

  10. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...

  11. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...

  12. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...

  13. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...

  14. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...

  15. 27 CFR 19.37 - Average effective tax rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Average effective tax rate... effective tax rate. (a) The proprietor may establish an average effective tax rate for any eligible... recompute the average effective tax rate so as to include only the immediately preceding 6-month period....

  16. Robust Morphological Averages in Three Dimensions for Anatomical Atlas Construction

    NASA Astrophysics Data System (ADS)

    Márquez, Jorge; Bloch, Isabelle; Schmitt, Francis

    2004-09-01

    We present original methods for obtaining robust, anatomical shape-based averages of features of the human head anatomy from a normal population. Our goals are computerized atlas construction with representative anatomical features and morphometry for specific populations. A method for true morphological averaging is proposed, consisting of a suitable blend of shape-related information for N objects to obtain a progressive average. It is made robust by penalizing, in a morphological sense, the contributions of features less similar to the current average. Morphological error and similarity, as well as penalization, are based on the same paradigm as the morphological averaging.

  17. Calculating High Speed Centrifugal Compressor Performance from Averaged Measurements

    NASA Astrophysics Data System (ADS)

    Lou, Fangyuan; Fleming, Ryan; Key, Nicole L.

    2012-12-01

    To improve the understanding of high performance centrifugal compressors found in modern aircraft engines, the aerodynamics through these machines must be experimentally studied. To accurately capture the complex flow phenomena through these devices, research facilities that can accurately simulate these flows are necessary. One such facility has been recently developed, and it is used in this paper to explore the effects of averaging total pressure and total temperature measurements to calculate compressor performance. Different averaging techniques (including area averaging, mass averaging, and work averaging) have been applied to the data. Results show that there is a negligible difference in both the calculated total pressure ratio and efficiency for the different techniques employed. However, the uncertainty in the performance parameters calculated with the different averaging techniques is significantly different, with area averaging providing the least uncertainty.
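    Two of the averaging techniques compared above reduce to simple weighted sums over a measurement rake. A minimal sketch (probe values and equal-area assumption are illustrative; work averaging, which weights by the energy flux, is omitted):

    ```python
    def area_average(values, areas):
        """Area-weighted average: sum(q_i * dA_i) / sum(dA_i)."""
        return sum(q * a for q, a in zip(values, areas)) / sum(areas)

    def mass_average(values, mass_fluxes, areas):
        """Mass-weighted average: sum(q_i * (rho*u)_i * dA_i) /
        sum((rho*u)_i * dA_i).  Weights each probe by the local mass flow."""
        num = sum(q * m * a for q, m, a in zip(values, mass_fluxes, areas))
        den = sum(m * a for m, a in zip(mass_fluxes, areas))
        return num / den

    # Rake of 3 total-pressure probes over equal areas (illustrative values)
    p0 = [200.0e3, 210.0e3, 220.0e3]  # total pressure [Pa]
    rho_u = [50.0, 80.0, 110.0]       # local mass flux rho*u [kg/(m^2 s)]
    dA = [1.0, 1.0, 1.0]              # element areas [m^2]
    print(area_average(p0, dA))        # 210000.0
    print(mass_average(p0, rho_u, dA)) # 212500.0, pulled toward high-flux probe
    ```

    The two averages differ whenever the measured quantity correlates with the local mass flux, which is exactly the situation in a compressor exit flow with wakes and passage vortices.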

  18. Average g-Factors of Anisotropic Polycrystalline Samples

    SciTech Connect

    Fishman, Randy Scott; Miller, Joel S.

    2010-01-01

    Due to the lack of suitable single crystals, the average g-factor of anisotropic polycrystalline samples is commonly estimated from either the Curie-Weiss susceptibility or the saturation magnetization. We show that the average g-factor obtained from the Curie constant is always greater than or equal to the average g-factor obtained from the saturation magnetization. The average g-factors are equal only for a single crystal or an isotropic polycrystal. We review experimental results for several compounds containing the anisotropic cation [Fe(C5Me5)2]+ and propose an experiment to test this inequality using a compound with a spinless anion.
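    The inequality stated above follows from the Curie constant sampling g² while the saturation magnetization samples g, so the two estimates are an RMS and an arithmetic powder average respectively (that identification is the assumption of this sketch, not a quotation of the paper):

    ```python
    import math

    def g_from_curie(gx, gy, gz):
        """Powder g-factor inferred from the Curie constant (C ~ <g^2>):
        the RMS average over the principal values."""
        return math.sqrt((gx**2 + gy**2 + gz**2) / 3.0)

    def g_from_saturation(gx, gy, gz):
        """Powder g-factor inferred from the saturation magnetization:
        the arithmetic average over the principal values."""
        return (gx + gy + gz) / 3.0

    # Strongly anisotropic example (principal values are illustrative)
    g = (1.3, 1.3, 4.0)
    print(g_from_curie(*g), g_from_saturation(*g))
    # RMS >= arithmetic mean always, equal only when gx == gy == gz
    ```

    The gap between the two numbers grows with the anisotropy, which is why comparing them is a usable experimental probe of g-tensor anisotropy in a powder.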

  19. Aberration averaging using point spread function for scanning projection systems

    NASA Astrophysics Data System (ADS)

    Ooki, Hiroshi; Noda, Tomoya; Matsumoto, Koichi

    2000-07-01

    Scanning projection system plays a leading part in current DUV optical lithography. It is frequently pointed out that the mechanically induced distortion and field curvature degrade image quality after scanning. On the other hand, the aberration of the projection lens is averaged along the scanning direction. This averaging effect reduces the residual aberration significantly. The aberration averaging based on the point spread function and phase retrieval technique in order to estimate the effective wavefront aberration after scanning is described in this paper. Our averaging method is tested using specified wavefront aberration, and its accuracy is discussed based on the measured wavefront aberration of recent Nikon projection lens.

  20. Thermodynamic properties of average-atom interatomic potentials for alloys

    NASA Astrophysics Data System (ADS)

    Nöhring, Wolfram Georg; Curtin, William Arthur

    2016-05-01

    The atomistic mechanisms of deformation in multicomponent random alloys are challenging to model because of their extensive structural and compositional disorder. For embedded-atom-method (EAM) interatomic potentials, a formal averaging procedure can generate an average-atom EAM potential, and this average-atom potential has recently been shown to accurately predict many zero-temperature properties of the true random alloy. Here, the finite-temperature thermodynamic properties of the average-atom potential are investigated to determine whether the average-atom potential can represent the true random alloy Helmholtz free energy as well as important finite-temperature properties. Using a thermodynamic integration approach, the average-atom system is found to have an entropy difference of at most 0.05 kB/atom relative to the true random alloy over a wide temperature range, as demonstrated on FeNiCr and Ni85Al15 model alloys. Lattice constants, and thus thermal expansion, and elastic constants are also well predicted (within a few percent) by the average-atom potential over a wide temperature range. The largest differences between the average-atom and true random alloy are found in the zero-temperature properties, which reflect the role of local structural disorder in the true random alloy. Thus, the average-atom potential is a valuable strategy for modeling alloys at finite temperatures.

  1. Phase averaging of image ensembles by using cepstral gradients

    SciTech Connect

    Swan, H.W.

    1983-11-01

    The direct Fourier phase averaging of an ensemble of randomly blurred images has long been thought to be too difficult a problem to undertake realistically owing to the necessity of proper phase unwrapping. It is shown that it is nevertheless possible to average the Fourier phase information in an image ensemble without calculating phases by using the technique of cepstral gradients.

  2. 78 FR 49770 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-15

    ... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal... annual cost to confine an inmate in a Community Corrections Center for Fiscal Year 2012 was $27,003...

  3. 20 CFR 404.220 - Average-monthly-wage method.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 404.220 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You...

  4. Delineating the Average Rate of Change in Longitudinal Models

    ERIC Educational Resources Information Center

    Kelley, Ken; Maxwell, Scott E.

    2008-01-01

    The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…
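The distinction the article draws can be sketched numerically (illustrative data, not the authors' code): the average rate of change is the endpoint difference divided by elapsed time, while the straight-line slope is a least-squares fit over all occasions, and the two need not agree for nonlinear change.

```python
# Average rate of change (endpoint difference over elapsed time) versus the
# fitted straight-line slope, on toy longitudinal data.

def avg_rate_of_change(ts, ys):
    return (ys[-1] - ys[0]) / (ts[-1] - ts[0])

def ols_slope(ts, ys):
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    den = sum((t - tbar) ** 2 for t in ts)
    return num / den

ts = [0, 1, 2, 3, 4]
ys = [t ** 3 for t in ts]          # nonlinear growth: 0, 1, 8, 27, 64

print(avg_rate_of_change(ts, ys))  # 16.0
print(ols_slope(ts, ys))           # 15.4 -- differs from the average rate
```

For truly straight-line change the two quantities coincide, which is where the interpretive confusion described in the abstract arises.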

  5. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    ERIC Educational Resources Information Center

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
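The note's core idea can be sketched as follows (a minimal illustration under the standard transformations, not the authors' code): an intercept-only regression of y recovers the arithmetic mean, of log y the log of the geometric mean, and of 1/y the reciprocal of the harmonic mean.

```python
import math

def intercept_only_fit(ys):
    """OLS with only an intercept: the fitted constant is the sample mean."""
    return sum(ys) / len(ys)

ys = [1.0, 2.0, 4.0]

arithmetic = intercept_only_fit(ys)
geometric = math.exp(intercept_only_fit([math.log(y) for y in ys]))
harmonic = 1.0 / intercept_only_fit([1.0 / y for y in ys])

print(arithmetic)  # 2.333...
print(geometric)   # 2.0
print(harmonic)    # 1.714...
```

Weighted averages follow the same pattern with weighted least squares in place of OLS.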

  6. Using Multiple Representations To Improve Conceptions of Average Speed.

    ERIC Educational Resources Information Center

    Reed, Stephen K.; Jazo, Linda

    2002-01-01

    Discusses improving mathematical reasoning through the design of computer microworlds and evaluates a computer-based learning environment that uses multiple representations to improve undergraduate students' conception of average speed. Describes improvement of students' estimates of average speed by using visual feedback from a simulation.…

  7. 42 CFR 423.279 - National average monthly bid amount.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... each MA-PD plan described in section 1851(a)(2)(A)(i) of the Act. The calculation does not include bids... section 1876(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid... defined in § 422.258(c)(1) of this chapter) and the denominator equal to the total number of Part...

  8. 42 CFR 423.279 - National average monthly bid amount.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... bid amounts for each prescription drug plan (not including fallbacks) and for each MA-PD plan...(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid amount is a....258(c)(1) of this chapter) and the denominator equal to the total number of Part D...

  9. 42 CFR 423.279 - National average monthly bid amount.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... each MA-PD plan described in section 1851(a)(2)(A)(i) of the Act. The calculation does not include bids... section 1876(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid... defined in § 422.258(c)(1) of this chapter) and the denominator equal to the total number of Part...

  10. 42 CFR 423.279 - National average monthly bid amount.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... bid amounts for each prescription drug plan (not including fallbacks) and for each MA-PD plan...(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid amount is a....258(c)(1) of this chapter) and the denominator equal to the total number of Part D...

  11. Average refractive powers of an alexandrite laser rod

    NASA Astrophysics Data System (ADS)

    Driedger, K. P.; Krause, W.; Weber, H.

    1986-04-01

    The average refractive powers (average inverse focal lengths) of the thermal lens produced by an alexandrite laser rod optically pumped at repetition rates between 0.4 and 10 Hz and with electrical flashlamp input pulse energies up to 500 J have been measured. The measuring setup is described and the measurement results are discussed.

  12. Hadley circulations for zonally averaged heating centered off the equator

    NASA Technical Reports Server (NTRS)

    Lindzen, Richard S.; Hou, Arthur Y.

    1988-01-01

    Consistent with observations, it is found that moving peak heating even 2 deg off the equator leads to profound asymmetries in the Hadley circulation, with the winter cell amplifying greatly and the summer cell becoming negligible. It is found that the annually averaged Hadley circulation is much larger than the circulation forced by the annually averaged heating.

  13. 47 CFR 80.759 - Average terrain elevation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 5 2013-10-01 2013-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw...

  14. 47 CFR 80.759 - Average terrain elevation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 5 2011-10-01 2011-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw...

  15. Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?

    SciTech Connect

    Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.

    2013-06-17

    Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
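The sampling effect described here can be illustrated with a toy diurnal cycle (hypothetical numbers, not TCAP data): a dense 24-h average of aerosol loading differs from what a single sparse sample would suggest, by an amount set by the diurnal swing.

```python
import math

# Toy diurnal cycle of aerosol optical depth (AOD) with a 20% swing;
# illustrative numbers only, not TCAP measurements.
def aod(hour):
    return 0.10 * (1.0 + 0.2 * math.sin(2 * math.pi * hour / 24.0))

hours = range(24)
daily_mean = sum(aod(h) for h in hours) / 24.0   # dense hourly sampling
sparse = aod(9)                                  # one mid-morning sample

error_pct = 100.0 * (sparse - daily_mean) / daily_mean
print(round(daily_mean, 4))  # 0.1
print(round(error_pct, 1))   # 14.1 -- error from sparse sampling, percent
```

With a full sinusoidal cycle the dense average recovers the mean exactly, while the single sample is biased by wherever it lands on the cycle, mirroring the abstract's point about sparse temporal sampling.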

  16. On various definitions of shadowing with average error in tracing

    NASA Astrophysics Data System (ADS)

    Wu, Xinxing; Oprocha, Piotr; Chen, Guanrong

    2016-07-01

    When computing a trajectory of a dynamical system, the influence of noise can lead to large perturbations, which appear, however, with small probability. Then, when calculating approximate trajectories, it makes sense to consider errors that are small on average, since controlling them in each iteration may be impossible. The demand to relate approximate trajectories to genuine orbits leads to various notions of shadowing (on average), which we consider in this paper. As the main tools in our studies, we provide a few equivalent characterizations of the average shadowing property, which also partly apply to other notions of shadowing. We prove that almost specification on the whole space induces this property on the measure center, which in turn implies the average shadowing property. Finally, we study connections among sensitivity, transitivity, equicontinuity, and (average) shadowing.

  17. LANDSAT-4 horizon scanner full orbit data averages

    NASA Technical Reports Server (NTRS)

    Stanley, J. P.; Bilanow, S.

    1983-01-01

    Averages taken over full orbit data spans of the pitch and roll residual measurement errors of the two conical Earth sensors operating on the LANDSAT 4 spacecraft are described. The variability of these full orbit averages over representative data throughout the year is analyzed to demonstrate the long-term stability of the sensor measurements. The data analyzed consist of 23 segments of sensor measurements made at 2- to 4-week intervals, each segment roughly 24 hours in length. The variation of the full orbit average is examined both as a function of orbit within a day and as a function of day of year. The dependence on day of year is based on associating the start date of each segment with the mean full orbit average for that segment. The peak-to-peak and standard deviation values of the averages for each data segment are computed, and their variation with day of year is also examined.

  18. Some series of intuitionistic fuzzy interactive averaging aggregation operators.

    PubMed

    Garg, Harish

    2016-01-01

    In this paper, some series of new intuitionistic fuzzy averaging aggregation operators are presented under the intuitionistic fuzzy set environment. For this, some shortcomings of the existing operators are first highlighted, and a new operational law, which considers the hesitation degree between the membership functions, is proposed to overcome them. Based on these new operation laws, some new averaging aggregation operators, namely the intuitionistic fuzzy Hamacher interactive weighted averaging, ordered weighted averaging, and hybrid weighted averaging operators, labeled IFHIWA, IFHIOWA, and IFHIHWA respectively, are proposed. Furthermore, some desirable properties such as idempotency, boundedness, and homogeneity are studied. Finally, a multi-criteria decision-making method based on the proposed operators is presented for selecting the best alternative. A comparison between the proposed operators and the existing operators is investigated in detail. PMID:27441128

  19. Do diurnal aerosol changes affect daily average radiative forcing?

    NASA Astrophysics Data System (ADS)

    Kassianov, Evgueni; Barnard, James; Pekour, Mikhail; Berg, Larry K.; Michalsky, Joseph; Lantz, Kathy; Hodges, Gary

    2013-06-01

    Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.

  20. Comparison of the WISC-R and the Leiter International Performance Scale with Average and Above-Average Students.

    ERIC Educational Resources Information Center

    Mask, Nan; Bowen, Charles E.

    1984-01-01

    Compared the Wechsler Intelligence Scale for Children (Revised) (WISC-R) and the Leiter International Performance Scale with 40 average and above average students. Results indicated a curvilinear relationship between the WISC-R and the Leiter, which correlates higher at the mean and deviates as the Full Scale varies from the mean. (JAC)

  1. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false How is the annual refinery or importer average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Gasoline...

  2. Structuring Collaboration in Mixed-Ability Groups to Promote Verbal Interaction, Learning, and Motivation of Average-Ability Students

    ERIC Educational Resources Information Center

    Saleh, Mohammad; Lazonder, Ard W.; Jong, Ton de

    2007-01-01

    Average-ability students often do not take full advantage of learning in mixed-ability groups because they hardly engage in the group interaction. This study examined whether structuring collaboration by group roles and ground rules for helping behavior might help overcome this participatory inequality. In a plant biology course, heterogeneously…

  3. Programmable noise bandwidth reduction by means of digital averaging

    NASA Technical Reports Server (NTRS)

    Poklemba, John J. (Inventor)

    1993-01-01

    Predetection noise bandwidth reduction is effected by a pre-averager capable of digitally averaging the samples of an input data signal over two or more symbols, the averaging interval being defined by the input sampling rate divided by the output sampling rate. As the averaged sample is clocked to a suitable detector at a much slower rate than the input signal sampling rate, the noise bandwidth at the input to the detector is reduced, the input to the detector having an improved signal-to-noise ratio as a result of the averaging process, and the rate at which such subsequent processing must operate is correspondingly reduced. The pre-averager forms a data filter having an output sampling rate of one sample per symbol of received data. More specifically, selected ones of a plurality of samples accumulated over two or more symbol intervals are output in response to clock signals at a rate of one sample per symbol interval. The pre-averager includes circuitry for weighting digitized signal samples using stored finite impulse response (FIR) filter coefficients. A method according to the present invention is also disclosed.
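The pre-averaging principle can be sketched with equal-weight (boxcar) averaging standing in for the patent's stored FIR coefficients; the signal level, noise level, and decimation ratio below are illustrative assumptions.

```python
import random
import statistics

# Sketch of a pre-averager: boxcar-average `ratio` input samples into one
# output sample (decimation), reducing noise bandwidth; the noise standard
# deviation shrinks roughly as 1/sqrt(ratio). Equal weights stand in for
# the patent's FIR filter coefficients.

def pre_average(samples, ratio):
    """Average non-overlapping blocks of `ratio` samples."""
    return [sum(samples[i:i + ratio]) / ratio
            for i in range(0, len(samples) - ratio + 1, ratio)]

random.seed(42)
signal = 1.0                        # constant symbol level (toy example)
noisy = [signal + random.gauss(0.0, 0.5) for _ in range(4000)]
averaged = pre_average(noisy, 16)   # 16 input samples -> 1 output sample

print(statistics.stdev(noisy))      # noise before averaging
print(statistics.stdev(averaged))   # noise after averaging (much smaller)
```

A real implementation would weight the samples with designed FIR coefficients and align the averaging interval to symbol boundaries; the bandwidth-reduction mechanism is the same.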

  4. The causal meaning of Fisher’s average effect

    PubMed Central

    LEE, JAMES J.; CHOW, CARSON C.

    2013-01-01

    Summary In order to formulate the Fundamental Theorem of Natural Selection, Fisher defined the average excess and average effect of a gene substitution. Finding these notions to be somewhat opaque, some authors have recommended reformulating Fisher’s ideas in terms of covariance and regression, which are classical concepts of statistics. We argue that Fisher intended his two averages to express a distinction between correlation and causation. On this view, the average effect is a specific weighted average of the actual phenotypic changes that result from physically changing the allelic states of homologous genes. We show that the statistical and causal conceptions of the average effect, perceived as inconsistent by Falconer, can be reconciled if certain relationships between the genotype frequencies and non-additive residuals are conserved. There are certain theory-internal considerations favouring Fisher’s original formulation in terms of causality; for example, the frequency-weighted mean of the average effects equaling zero at each locus becomes a derivable consequence rather than an arbitrary constraint. More broadly, Fisher’s distinction between correlation and causation is of critical importance to gene-trait mapping studies and the foundations of evolutionary biology. PMID:23938113

  5. Phase-compensated averaging for analyzing electroencephalography and magnetoencephalography epochs.

    PubMed

    Matani, Ayumu; Naruse, Yasushi; Terazono, Yasushi; Iwasaki, Taro; Fujimaki, Norio; Murata, Tsutomu

    2010-05-01

    Stimulus-locked averaging for electroencephalography and/or magnetoencephalography (EEG/MEG) epochs cancels out ongoing spontaneous activities by treating them as noise. However, such spontaneous activities are the object of interest for EEG/MEG researchers who study phase-related phenomena, e.g., long-distance synchronization, phase-reset, and event-related synchronization/desynchronization (ERD/ERS). We propose a complex-weighted averaging method, called phase-compensated averaging, to investigate phase-related phenomena. In this method, any EEG/MEG channel is used as a trigger for averaging by setting the instantaneous phases at the trigger timings to 0 so that cross-channel averages are obtained. First, we evaluated the fundamental characteristics of this method by performing simulations. The results showed that this method could selectively average ongoing spontaneous activity phase-locked in each channel; that is, it evaluates the directional phase-synchronizing relationship between channels. We then analyzed flash evoked potentials. This method clarified the directional phase-synchronizing relationship from the frontal to occipital channels and recovered another piece of information, perhaps regarding the sequence of experiments, which is lost when using only conventional averaging. This method can also be used to reconstruct EEG/MEG time series to visualize long-distance synchronization and phase-reset directly, and on the basis of the potentials, ERS/ERD can be explained as a side effect of phase-reset. PMID:20172813

  6. Incoherent averaging of phase singularities in speckle-shearing interferometry.

    PubMed

    Mantel, Klaus; Nercissian, Vanusch; Lindlein, Norbert

    2014-08-01

    Interferometric speckle techniques are plagued by the omnipresence of phase singularities, impairing the phase unwrapping process. To reduce the number of phase singularities by physical means, an incoherent averaging of multiple speckle fields may be applied. It turns out, however, that the results may strongly deviate from the expected √N behavior. Using speckle-shearing interferometry as an example, we investigate the mechanism behind the reduction of phase singularities, both by calculations and by computer simulations. Key to an understanding of the reduction mechanism during incoherent averaging is the representation of the physical averaging process in terms of certain vector fields associated with each speckle field. PMID:25078215

  7. Time average vibration fringe analysis using Hilbert transformation

    SciTech Connect

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-10-20

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.

  8. Bounce-averaged Kinetic Equations and Neoclassical Polarization Density

    SciTech Connect

    Fong, B. H.; Hahm, T. S.

    1998-07-01

    The rigorous formulation of the bounce-averaged equations is presented based upon the Poincare-Cartan one-form and Lie perturbation methods. The resulting bounce-averaged Vlasov equation is Hamiltonian, thus suitable for the self-consistent simulation of low-frequency electrostatic turbulence in the trapped ion mode regime. In the bounce-kinetic Poisson equation, the "neoclassical polarization density" arises from the difference between bounce-averaged banana center and real trapped particle densities across a field line. This representation of the neoclassical polarization drift as a shielding term provides a systematic way to study the long-term behavior of the turbulence-driven E x B flow.

  9. Optimization of high average power FEL beam for EUV lithography

    NASA Astrophysics Data System (ADS)

    Endo, Akira

    2015-05-01

    Extreme ultraviolet lithography (EUVL) is entering the high-volume manufacturing (HVM) stage, with a high average power (250 W) EUV source from laser-produced plasma at 13.5 nm. The semiconductor industry road map indicates a scaling of the source technology to more than 1 kW average power via a high-repetition-rate FEL. This paper discusses the lowest-risk approach to constructing a prototype based on a superconducting linac and a normal-conducting undulator, to demonstrate a high average power 13.5 nm FEL equipped with optimized optical components and solid-state lasers, and to study FEL applications in EUV lithography.

  10. Definition of average path and relativity parameter computation in CASA

    NASA Astrophysics Data System (ADS)

    Wu, Dawei; Huang, Yan; Chen, Xiaohua; Yu, Chang

    2001-09-01

    System CASA (computer-assisted semen analysis) is a medical application that measures sperm motility and its parameters using image processing methods. However, no authoritative administration or academic organization has yet issued a set of criteria for CASA, which hinders effective comparison of work between labs and researchers. The average path and the parameters relative to it, such as average path velocity, amplitude of lateral head displacement, and beat cross frequency, often cannot be compared between systems because of differing algorithms. This paper presents a new algorithm that defines the average path uniquely and computes the three parameters above quickly and conveniently from any real path.

  11. Averaging underwater noise levels for environmental assessment of shipping.

    PubMed

    Merchant, Nathan D; Blondel, Philippe; Dakin, D Tom; Dorocicz, John

    2012-10-01

    Rising underwater noise levels from shipping have raised concerns regarding chronic impacts to marine fauna. However, there is a lack of consensus over how to average local shipping noise levels for environmental impact assessment. This paper addresses this issue using 110 days of continuous data recorded in the Strait of Georgia, Canada. Probability densities of ~10^7 1-s samples in selected 1/3 octave bands were approximately stationary across one-month subsamples. Median and mode levels varied with averaging time. Mean sound pressure levels averaged in linear space, though susceptible to strong bias from outliers, are most relevant to cumulative impact assessment metrics. PMID:23039575
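The "averaged in linear space" point can be made concrete with a short sketch (illustrative levels, not the Strait of Georgia data): sound pressure levels in dB are converted to linear intensities, averaged, and converted back, which weights loud events far more heavily than a plain mean of the dB values.

```python
import math

# Averaging sound pressure levels (dB) in linear space versus taking the
# arithmetic mean of the dB values themselves; illustrative levels only.

def mean_spl_linear(levels_db):
    """Mean of linear intensities, expressed back in dB."""
    linear = [10 ** (l / 10.0) for l in levels_db]
    return 10.0 * math.log10(sum(linear) / len(linear))

levels = [60.0, 70.0, 80.0]       # 1-s SPL samples in dB

print(mean_spl_linear(levels))    # ~75.68 dB, dominated by the loudest sample
print(sum(levels) / len(levels))  # 70.0 dB, the plain dB mean
```

The gap between the two averages is exactly the outlier sensitivity the abstract notes: a single loud passage can dominate the linear-space mean.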

  12. Distribution of time-averaged observables for weak ergodicity breaking.

    PubMed

    Rebenshtok, A; Barkai, E

    2007-11-23

    We find a general formula for the distribution of time-averaged observables for systems modeled according to the subdiffusive continuous time random walk. For Gaussian random walks coupled to a thermal bath we recover ergodicity and Boltzmann's statistics, while for the anomalous subdiffusive case a weakly nonergodic statistical mechanical framework is constructed, which is based on Lévy's generalized central limit theorem. As an example we calculate the distribution of X, the time average of the position of the particle, for unbiased and uniformly biased particles, and show that X exhibits large fluctuations compared with the ensemble average. PMID:18233203

  13. Average waiting time in FDDI networks with local priorities

    NASA Technical Reports Server (NTRS)

    Gercek, Gokhan

    1994-01-01

    A method is introduced to compute the average queuing delay experienced by different priority group messages in an FDDI node. It is assumed that no FDDI MAC layer priorities are used. Instead, a priority structure is introduced to the messages at a higher protocol layer (e.g. network layer) locally. Such a method was planned to be used in Space Station Freedom FDDI network. Conservation of the average waiting time is used as the key concept in computing average queuing delays. It is shown that local priority assignments are feasable specially when the traffic distribution is asymmetric in the FDDI network.

  14. Direct Statistical Simulation: Ensemble Averaging and Basis Reduction

    NASA Astrophysics Data System (ADS)

    Allawala, Altan; Marston, Brad

    2015-11-01

    Low-order statistics of models of geophysical fluids may be directly accessed by solving the equations of motion for the equal-time cumulants themselves. We investigate a variant of the second-order cumulant expansion (CE2) in which zonal averaging is replaced by ensemble averaging. Proper orthogonal decomposition (POD) of the second cumulant is used to reduce the dimensionality of the problem. The approach is tested on a quasi-geostrophic 2-layer baroclinic model of planetary atmospheres by comparison to the traditional approach of accumulating statistics via numerical simulation, and to zonally averaged CE2. Supported in part by NSF DMR-1306806 and NSF CCF-1048701.

  15. Average local ionization energy generalized to correlated wavefunctions

    SciTech Connect

    Ryabinkin, Ilya G.; Staroverov, Viktor N.

    2014-08-28

    The average local ionization energy function introduced by Politzer and co-workers [Can. J. Chem. 68, 1440 (1990)] as a descriptor of chemical reactivity has a limited utility because it is defined only for one-determinantal self-consistent-field methods such as the Hartree–Fock theory and the Kohn–Sham density-functional scheme. We reinterpret the negative of the average local ionization energy as the average total energy of an electron at a given point and, by rewriting this quantity in terms of reduced density matrices, arrive at its natural generalization to correlated wavefunctions. The generalized average local electron energy turns out to be the diagonal part of the coordinate representation of the generalized Fock operator divided by the electron density; it reduces to the original definition in terms of canonical orbitals and their eigenvalues for one-determinantal wavefunctions. The discussion is illustrated with calculations on selected atoms and molecules at various levels of theory.

  16. Effects of spatial variability and scale on areal-average evapotranspiration

    NASA Technical Reports Server (NTRS)

    Famiglietti, J. S.; Wood, Eric F.

    1993-01-01

    This paper explores the effect of spatial variability and scale on areally-averaged evapotranspiration. A spatially-distributed water and energy balance model is employed to determine the effect of explicit patterns of model parameters and atmospheric forcing on modeled areally-averaged evapotranspiration over a range of increasing spatial scales. The analysis is performed from the local scale to the catchment scale. The study area is King's Creek catchment, an 11.7 sq km watershed located on the native tallgrass prairie of Kansas. The dominant controls on the scaling behavior of catchment-average evapotranspiration are investigated by simulation, as is the existence of a threshold scale for evapotranspiration modeling, with implications for explicit versus statistical representation of important process controls. It appears that some of our findings are fairly general, and will therefore provide a framework for understanding the scaling behavior of areally-averaged evapotranspiration at the catchment and larger scales.

  17. Average lifespan of radioelectronic equipment with allowance for resource limitations

    NASA Astrophysics Data System (ADS)

    Davydov, A. N.

    2011-12-01

    One of the reliability parameters of radioelectronic equipment is its average life span. The number of incidents during the operation of different items that make up the component base of radioelectronic equipment follows an exponential distribution. In general, the average life span for an exponential distribution is T mean = 1/λ, where λ is the rate of base incidents in a component per hour. This estimate is valid when considering the life span of radioelectronic equipment from zero to infinity. In reality, component base items and, correspondingly, radioelectronic equipment have resource limitations caused by the properties of their composing materials and manufacturing technique. The average life span of radioelectronic equipment will be different from the ideal life span of the equipment. This paper is aimed at calculating the average life span of radioelectronic equipment with allowance for resource limitations of constituent electronic component base items.

  18. Does subduction zone magmatism produce average continental crust

    NASA Technical Reports Server (NTRS)

    Ellam, R. M.; Hawkesworth, C. J.

    1988-01-01

    The question of whether present day subduction zone magmatism produces material of average continental crust composition, which perhaps most would agree is andesitic, is addressed. It was argued that modern andesitic to dacitic rocks in Andean-type settings are produced by plagioclase fractionation of mantle derived basalts, leaving a complementary residue with low Rb/Sr and a positive Eu anomaly. This residue must be removed, for example by delamination, if the average crust produced in these settings is andesitic. The author argued against this, pointing out the absence of evidence for such a signature in the mantle. Either the average crust is not andesitic, a conclusion the author was not entirely comfortable with, or other crust forming processes must be sought. One possibility is that during the Archean, direct slab melting of basaltic or eclogitic oceanic crust produced felsic melts, which together with about 65 percent mafic material, yielded an average crust of andesitic composition.

  19. Ensemble vs. time averages in financial time series analysis

    NASA Astrophysics Data System (ADS)

    Seemann, Lars; Hua, Jia-Chen; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2012-12-01

    Empirical analysis of financial time series suggests that the underlying stochastic dynamics are not only non-stationary, but also exhibit non-stationary increments. However, financial time series are commonly analyzed using the sliding interval technique that assumes stationary increments. We propose an alternative approach that is based on an ensemble over trading days. To determine the effects of time averaging techniques on analysis outcomes, we create an intraday activity model that exhibits periodic variable diffusion dynamics and we assess the model data using both ensemble and time averaging techniques. We find that ensemble averaging techniques detect the underlying dynamics correctly, whereas sliding intervals approaches fail. As many traded assets exhibit characteristic intraday volatility patterns, our work implies that ensemble averages approaches will yield new insight into the study of financial markets’ dynamics.
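The contrast between the two averaging approaches can be sketched on a toy intraday pattern (hypothetical model, not market data): an ensemble average across days at a fixed intraday time preserves the pattern, while a time average over the whole day washes it out.

```python
import math

# Toy process with the same deterministic intraday shape every day
# (390 trading minutes); purely illustrative, no market data.
def intraday_value(day, minute):
    return 1.0 + 0.5 * math.sin(2 * math.pi * minute / 390.0)

days, minutes = 100, 390

# Ensemble average across days at one fixed intraday time (minute 97).
ensemble = sum(intraday_value(d, 97) for d in range(days)) / days

# Time average over all minutes of a single day (sliding-interval analogue).
time_avg = sum(intraday_value(0, m) for m in range(minutes)) / minutes

print(round(ensemble, 3))  # ~1.5, the pattern at that intraday time survives
print(round(time_avg, 3))  # 1.0, the intraday pattern is averaged away
```

With noise added per day the ensemble average would still converge to the intraday pattern, which is the sense in which ensemble techniques "detect the underlying dynamics correctly" in the abstract.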

  20. Total-pressure-tube averaging in pulsating flows.

    NASA Technical Reports Server (NTRS)

    Krause, L. N.

    1973-01-01

    A number of total-pressure tubes were tested in a nonsteady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. The tests were performed at a pressure level of 1 bar, for Mach numbers up to near 1, and frequencies up to 3 kHz. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonances which further increased the indicated pressure were encountered within the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure, and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles.

  1. Modelling and designing digital control systems with averaged measurements

    NASA Technical Reports Server (NTRS)

    Polites, Michael E.; Beale, Guy O.

    1988-01-01

    We describe control systems engineering methods for designing digital feedback controllers for deterministic aerospace systems in which the output, rather than being an instantaneous measure of the system at the sampling instants, represents an average measure of the system over the time interval between samples. The averaging effect can be included in the model of the plant, thereby obviating iteration between the design and simulation phases.

  2. A precise measurement of the average b hadron lifetime

    NASA Astrophysics Data System (ADS)

    Buskulic, D.; de Bonis, I.; Casper, D.; Decamp, D.; Ghez, P.; Goy, C.; Lees, J.-P.; Lucotte, A.; Minard, M.-N.; Odier, P.; Pietrzyk, B.; Ariztizabal, F.; Chmeissani, M.; Crespo, J. M.; Efthymiopoulos, I.; Fernandez, E.; Fernandez-Bosman, M.; Gaitan, V.; Garrido, Ll.; Martinez, M.; Orteu, S.; Pacheco, A.; Padilla, C.; Palla, F.; Pascual, A.; Perlas, J. A.; Sanchez, F.; Teubert, F.; Colaleo, A.; Creanza, D.; de Palma, M.; Farilla, A.; Gelao, G.; Girone, M.; Iaselli, G.; Maggi, G.; Maggi, M.; Marinelli, N.; Natali, S.; Nuzzo, S.; Ranieri, A.; Raso, G.; Romano, F.; Ruggieri, F.; Selvaggi, G.; Silvestris, L.; Tempesta, P.; Zito, G.; Huang, X.; Lin, J.; Ouyang, Q.; Wang, T.; Xie, Y.; Xu, R.; Xue, S.; Zhang, J.; Zhang, L.; Zhao, W.; Bonvicini, G.; Cattaneo, M.; Comas, P.; Coyle, P.; Drevermann, H.; Forty, R. W.; Frank, M.; Hagelberg, R.; Harvey, J.; Jacobsen, R.; Janot, P.; Jost, B.; Knobloch, J.; Lehraus, I.; Markou, C.; Martin, E. B.; Mato, P.; Minten, A.; Miquel, R.; Oest, T.; Palazzi, P.; Pater, J. R.; Pusztaszeri, J.-F.; Ranjard, F.; Rensing, P.; Rolandi, L.; Schlatter, D.; Schmelling, M.; Schneider, O.; Tejessy, W.; Tomalin, I. R.; Venturi, A.; Wachsmuth, H.; Wiedenmann, W.; Wildish, T.; Witzeling, W.; Wotschack, J.; Ajaltouni, Z.; Bardadin-Otwinowska, M.; Barrès, A.; Boyer, C.; Falvard, A.; Gay, P.; Guicheney, C.; Henrard, P.; Jousset, J.; Michel, B.; Monteil, S.; Montret, J.-C.; Pallin, D.; Perret, P.; Podlyski, F.; Proriol, J.; Rossignol, J.-M.; Saadi, F.; Fearnley, T.; Hansen, J. B.; Hansen, J. D.; Hansen, J. R.; Hansen, P. H.; Nilsson, B. S.; Kyriakis, A.; Simopoulou, E.; Siotis, I.; Vayaki, A.; Zachariadou, K.; Blondel, A.; Bonneaud, G.; Brient, J. C.; Bourdon, P.; Passalacqua, L.; Rougé, A.; Rumpf, M.; Tanaka, R.; Valassi, A.; Verderi, M.; Videau, H.; Candlin, D. J.; Parsons, M. I.; Focardi, E.; Parrini, G.; Corden, M.; Delfino, M.; Georgiopoulos, C.; Jaffe, D. 
E.; Antonelli, A.; Bencivenni, G.; Bologna, G.; Bossi, F.; Campana, P.; Capon, G.; Chiarella, V.; Felici, G.; Laurelli, P.; Mannocchi, G.; Murtas, F.; Murtas, G. P.; Pepe-Altarelli, M.; Dorris, S. J.; Halley, A. W.; Ten Have, I.; Knowles, I. G.; Lynch, J. G.; Morton, W. T.; O'Shea, V.; Raine, C.; Reeves, P.; Scarr, J. M.; Smith, K.; Smith, M. G.; Thompson, A. S.; Thomson, F.; Thorn, S.; Turnbull, R. M.; Becker, U.; Braun, O.; Geweniger, C.; Graefe, G.; Hanke, P.; Hepp, V.; Kluge, E. E.; Putzer, A.; Rensch, B.; Schmidt, M.; Sommer, J.; Stenzel, H.; Tittel, K.; Werner, S.; Wunsch, M.; Abbaneo, D.; Beuselinck, R.; Binnie, D. M.; Cameron, W.; Colling, D. J.; Dornan, P. J.; Konstantinidis, N.; Moneta, L.; Moutoussi, A.; Nash, J.; San Martin, G.; Sedgbeer, J. K.; Stacey, A. M.; Dissertori, G.; Girtler, P.; Kneringer, E.; Kuhn, D.; Rudolph, G.; Bowdery, C. K.; Brodbeck, T. J.; Colrain, P.; Crawford, G.; Finch, A. J.; Foster, F.; Hughes, G.; Sloan, T.; Whelan, E. P.; Williams, M. I.; Galla, A.; Greene, A. M.; Kleinknecht, K.; Quast, G.; Raab, J.; Renk, B.; Sander, H.-G.; van Gemmeren, P.; Wanke, R.; Zeitnitz, C.; Aubert, J. J.; Bencheikh, A. M.; Benchouk, C.; Bonissent, A.; Bujosa, G.; Calvet, D.; Carr, J.; Diaconu, C.; Etienne, F.; Nicod, D.; Payre, P.; Rousseau, D.; Talby, M.; Thulasidas, M.; Abt, I.; Assmann, R.; Bauer, C.; Blum, W.; Brown, D.; Dietl, H.; Dydak, F.; Ganis, G.; Gotzhein, C.; Jakobs, K.; Kroha, H.; Lütjens, G.; Lutz, G.; Männer, W.; Moser, H.-G.; Richter, R.; Rosado-Schlosser, A.; Schael, S.; Settles, R.; Seywerd, H.; Stierlin, U.; Denis, R. St.; Wolf, G.; Alemany, R.; Boucrot, J.; Callot, O.; Cordier, A.; Courault, F.; Davier, M.; Duflot, L.; Grivaz, J.-F.; Heusse, Ph.; Jacquet, M.; Kim, D. W.; Le Diberder, F.; Lefrançois, J.; Lutz, A.-M.; Musolino, G.; Nikolic, I.; Park, H. J.; Park, I. C.; Schune, M.-H.; Simion, S.; Veillet, J.-J.; Videau, I.; Azzurri, P.; Bagliesi, G.; Batignani, G.; Bettarini, S.; Bozzi, C.; Calderini, G.; Carpinelli, M.; Ciocci, M. 
A.; Ciulli, V.; Dell'Orso, R.; Fantechi, R.; Ferrante, I.; Foà, L.; Forti, F.; Giassi, A.; Giorgi, M. A.; Gregorio, A.; Ligabue, F.; Lusiani, A.; Marrocchesi, P. S.; Messineo, A.; Rizzo, G.; Sanguinetti, G.; Sciabà, A.; Spagnolo, P.; Steinberger, J.; Tenchini, R.; Tonelli, G.; Triggiani, G.; Vannini, C.; Verdini, P. G.; Walsh, J.; Betteridge, A. P.; Blair, G. A.; Bryant, L. M.; Cerutti, F.; Gao, Y.; Green, M. G.; Johnson, D. L.; Medcalf, T.; Mir, Ll. M.; Perrodo, P.; Strong, J. A.; Bertin, V.; Botterill, D. R.; Clifft, R. W.; Edgecock, T. R.; Haywood, S.; Edwards, M.; Maley, P.; Norton, P. R.; Thompson, J. C.; Bloch-Devaux, B.; Colas, P.; Duarte, H.; Emery, S.; Kozanecki, W.; Lançon, E.; Lemaire, M. C.; Locci, E.; Marx, B.; Perez, P.; Rander, J.; Renardy, J.-F.; Rosowsky, A.; Roussarie, A.; Schuller, J.-P.; Schwindling, J.; Si Mohand, D.; Trabelsi, A.; Vallage, B.; Johnson, R. P.; Kim, H. Y.; Litke, A. M.; McNeil, M. A.; Taylor, G.; Beddall, A.; Booth, C. N.; Boswell, R.; Cartwright, S.; Combley, F.; Dawson, I.; Koksal, A.; Letho, M.; Newton, W. M.; Rankin, C.; Thompson, L. F.; Böhrer, A.; Brandt, S.; Cowan, G.; Feigl, E.; Grupen, C.; Lutters, G.; Minguet-Rodriguez, J.; Rivera, F.; Saraiva, P.; Smolik, L.; Stephan, F.; Apollonio, M.; Bosisio, L.; Della Marina, R.; Giannini, G.; Gobbo, B.; Ragusa, F.; Rothberg, J.; Wasserbaech, S.; Armstrong, S. R.; Bellantoni, L.; Elmer, P.; Feng, Z.; Ferguson, D. P. S.; Gao, Y. S.; González, S.; Grahl, J.; Harton, J. L.; Hayes, O. J.; Hu, H.; McNamara, P. A.; Nachtman, J. M.; Orejudos, W.; Pan, Y. B.; Saadi, Y.; Schmitt, M.; Scott, I. J.; Sharma, V.; Turk, J. D.; Walsh, A. M.; Sau Lan Wu; Wu, X.; Yamartino, J. M.; Zheng, M.; Zobernig, G.; Aleph Collaboration

    1996-02-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.

  3. Updated measurement of the average b hadron lifetime

    NASA Astrophysics Data System (ADS)

    Buskulic, D.; Decamp, D.; Goy, C.; Lees, J.-P.; Minard, M.-N.; Mours, B.; Alemany, R.; Ariztizabal, F.; Comas, P.; Crespo, J. M.; Delfino, M.; Fernandez, E.; Gaitan, V.; Garrido, Ll.; Mattison, T.; Pacheco, A.; Pascual, A.; Creanza, D.; de Palma, M.; Farilla, A.; Iaselli, G.; Maggi, G.; Maggi, M.; Natali, S.; Nuzzo, S.; Quattromini, M.; Ranieri, A.; Raso, G.; Romano, F.; Ruggieri, F.; Selvaggi, G.; Silvestris, L.; Tempesta, P.; Zito, G.; Hu, H.; Huang, D.; Huang, X.; Lin, J.; Lou, J.; Qiao, C.; Wang, T.; Xie, Y.; Xu, D.; Xu, R.; Zhang, J.; Zhao, W.; Bauerdick, L. A. T.; Blucher, E.; Bonvicini, G.; Bossi, F.; Boudreau, J.; Casper, D.; Drevermann, H.; Forty, R. W.; Ganis, G.; Gay, C.; Hagelberg, R.; Harvey, J.; Haywood, S.; Hilgart, J.; Jacobsen, R.; Jost, B.; Knobloch, J.; Lançon, E.; Lehraus, I.; Lohse, T.; Lusiani, A.; Martinez, M.; Mato, P.; Meinhard, H.; Minten, A.; Miquel, R.; Moser, H.-G.; Palazzi, P.; Perlas, J. A.; Pusztaszeri, J.-F.; Ranjard, F.; Redlinger, G.; Rolandi, L.; Rothberg, J.; Ruan, T.; Saich, M.; Schlatter, D.; Schmelling, M.; Sefkow, F.; Tejessy, W.; Wachsmuth, H.; Wiedenmann, W.; Wildish, T.; Witzeling, W.; Wotschack, J.; Ajaltouni, Z.; Badaud, F.; Bardadin-Otwinowska, M.; Bencheikh, A. M.; El Fellous, R.; Falvard, A.; Gay, P.; Guicheney, C.; Henrad, P.; Jousset, J.; Michel, B.; Montret, J.-C.; Pallin, D.; Perret, P.; Pietrzyk, B.; Proriol, J.; Prulhière, F.; Stimpfl, G.; Fearnley, T.; Hansen, J. D.; Hansen, J. R.; Hansen, P. H.; Møllerud, R.; Nilsson, B. S.; Efthymiopoulos, I.; Kyriakis, A.; Simopoulou, E.; Vayaki, A.; Zachariadou, K.; Badier, J.; Blondel, A.; Bonneaud, G.; Brient, J. C.; Fouque, G.; Orteu, S.; Rosowsky, A.; Rougé, A.; Rumpf, M.; Tanaka, R.; Verderi, M.; Videau, H.; Candlin, D. J.; Parsons, M. 
I.; Veitch, E.; Moneta, L.; Parrini, G.; Corden, M.; Georgiopoulos, C.; Ikeda, M.; Lannutti, J.; Levinthal, D.; Mermikides, M.; Sawyer, L.; Wasserbaech, S.; Antonelli, A.; Baldini, R.; Bencivenni, G.; Bologna, G.; Campana, P.; Capon, G.; Cerutti, F.; Chiarella, V.; D'Ettorre-Piazzoli, B.; Felici, G.; Laurelli, P.; Mannocchi, G.; Murtas, F.; Murtas, G. P.; Passalacqua, L.; Pepe-Altarelli, M.; Picchi, P.; Altoon, B.; Boyle, O.; Colrain, P.; Ten Have, I.; Lynch, J. G.; Maitland, W.; Morton, W. T.; Raine, C.; Scarr, J. M.; Smith, K.; Thompson, A. S.; Turnbull, R. M.; Brandl, B.; Braun, O.; Geweniger, C.; Hanke, P.; Hepp, V.; Kluge, E. E.; Maumary, Y.; Putzer, A.; Rensch, B.; Stahl, A.; Tittel, K.; Wunsch, M.; Belk, A. T.; Beuselinck, R.; Binnie, D. M.; Cameron, W.; Cattaneo, M.; Colling, D. J.; Dornan, P. J.; Dugeay, S.; Greene, A. M.; Hassard, J. F.; Lieske, N. M.; Nash, J.; Patton, S. J.; Payne, D. G.; Phillips, M. J.; Sedgbeer, J. K.; Tomalin, I. R.; Wright, A. G.; Kneringer, E.; Kuhn, D.; Rudolph, G.; Bowdery, C. K.; Brodbeck, T. J.; Finch, A. J.; Foster, F.; Hughes, G.; Jackson, D.; Keemer, N. R.; Nuttall, M.; Patel, A.; Sloan, T.; Snow, S. W.; Whelan, E. P.; Kleinknecht, K.; Raab, J.; Renk, B.; Sander, H.-G.; Schmidt, H.; Steeg, F.; Walther, S. M.; Wolf, B.; Aubert, J.-J.; Benchouk, C.; Bonissent, A.; Carr, J.; Coyle, P.; Drinkard, J.; Etienne, F.; Papalexiou, S.; Payre, P.; Qian, Z.; Roos, L.; Rousseau, D.; Schwemling, P.; Talby, M.; Adlung, S.; Bauer, C.; Blum, W.; Brown, D.; Cattaneo, P.; Cowan, G.; Dehning, B.; Dietl, H.; Dydak, F.; Fernandez-Bosman, M.; Frank, M.; Halley, A. W.; Lauber, J.; Lütjens, G.; Lutz, G.; Männer, W.; Richter, R.; Rotscheidt, H.; Schröder, J.; Schwarz, A. S.; Settles, R.; Seywerd, H.; Stierlin, U.; Stiegler, U.; Denis, R. St.; Takashima, M.; Thomas, J.; Wolf, G.; Boucrot, J.; Callot, O.; Cordier, A.; Davier, M.; Grivaz, J.-F.; Heusse, Ph.; Jaffe, D. E.; Janot, P.; Kim, D. 
W.; Le Diberder, F.; Lefrançois, J.; Lutz, A.-M.; Schune, M.-H.; Veillet, J.-J.; Videau, I.; Zhang, Z.; Abbaneo, D.; Amendolia, S. R.; Bagliesi, G.; Batignani, G.; Bosisio, L.; Bottigli, U.; Bozzi, C.; Bradaschia, C.; Carpinelli, M.; Ciocci, M. A.; Dell'Orso, R.; Ferrante, I.; Fidecaro, F.; Foà, L.; Focardi, E.; Forti, F.; Giassi, A.; Giorgi, M. A.; Ligabue, F.; Mannelli, E. B.; Marrocchesi, P. S.; Messineo, A.; Palla, F.; Rizzo, G.; Sanguinetti, G.; Spagnolo, P.; Steinberger, J.; Tenchini, R.; Tonelli, G.; Triggiani, G.; Vannini, C.; Venturi, A.; Verdini, P. G.; Walsh, J.; Carter, J. M.; Green, M. G.; March, P. V.; Mir, Ll. M.; Medcalf, T.; Quazi, I. S.; Strong, J. A.; West, L. R.; Botterill, D. R.; Clifft, R. W.; Edgecock, T. R.; Edwards, M.; Fisher, S. M.; Jones, T. J.; Norton, P. R.; Salmon, D. P.; Thompson, J. C.; Bloch-Devaux, B.; Colas, P.; Duarte, H.; Kozanecki, W.; Lemaire, M. C.; Locci, E.; Loucatos, S.; Monnier, E.; Perez, P.; Perrier, F.; Rander, J.; Renardy, J.-F.; Roussarie, A.; Schuller, J.-P.; Schwindling, J.; Si Mohand, D.; Vallage, B.; Johnson, R. P.; Litke, A. M.; Taylor, G.; Wear, J.; Ashman, J. G.; Babbage, W.; Booth, C. N.; Buttar, C.; Carney, R. E.; Cartwright, S.; Combley, F.; Hatfield, F.; Reeves, P.; Thompson, L. F.; Barberio, E.; Böhrer, A.; Brandt, S.; Grupen, C.; Mirabito, L.; Rivera, F.; Schäfer, U.; Giannini, G.; Gobbo, B.; Ragusa, F.; Bellantoni, L.; Chen, W.; Cinabro, D.; Conway, J. S.; Cowen, D. F.; Feng, Z.; Ferguson, D. P. S.; Gao, Y. S.; Grahl, J.; Harton, J. L.; Jared, R. C.; Leclaire, B. W.; Lishka, C.; Pan, Y. B.; Pater, J. R.; Saadi, Y.; Sharma, V.; Schmitt, M.; Shi, Z. H.; Walsh, A. M.; Weber, F. V.; Whitney, M. H.; Sau Lan Wu; Wu, X.; Zobernig, G.; Aleph Collaboration

    1992-11-01

    An improved measurement of the average lifetime of b hadrons has been performed with the ALEPH detector. From a sample of 260 000 hadronic Z0 decays, recorded during the 1991 LEP run with the silicon vertex detector fully operational, a fit to the impact parameter distribution of lepton tracks coming from semileptonic decays yields an average b hadron lifetime of 1.49 ± 0.03 ± 0.06 ps.

  4. Characterization of mirror-based modulation-averaging structures.

    PubMed

    Komljenovic, Tin; Babić, Dubravko; Sipus, Zvonimir

    2013-05-10

    Modulation-averaging reflectors have recently been proposed as a means for improving the link margin in self-seeded wavelength-division multiplexing in passive optical networks. In this work, we describe simple methods for determining key parameters of such structures and use them to predict their averaging efficiency. We characterize several reflectors built by arraying fiber-Bragg gratings along a segment of an optical fiber and show very good agreement between experiments and theoretical models. PMID:23669835

  5. Flavor Physics Data from the Heavy Flavor Averaging Group (HFAG)

    DOE Data Explorer

    The Heavy Flavor Averaging Group (HFAG) was established at the May 2002 Flavor Physics and CP Violation Conference in Philadelphia, and continues the LEP Heavy Flavor Steering Group's tradition of providing regular updates to the world averages of heavy flavor quantities. Data are provided by six subgroups that each focus on a different set of heavy flavor measurements: B lifetimes and oscillation parameters, Semi-leptonic B decays, Rare B decays, Unitarity triangle parameters, B decays to charm final states, and Charm Physics.

  6. Geodesic estimation for large deformation anatomical shape averaging and interpolation.

    PubMed

    Avants, Brian; Gee, James C

    2004-01-01

    The goal of this research is to promote variational methods for anatomical averaging that operate within the space of the underlying image registration problem. This approach is effective when using the large deformation viscous framework, where linear averaging is not valid, or in the elastic case. The theory behind this novel atlas building algorithm is similar to the traditional pairwise registration problem, but with single image forces replaced by average forces. These group forces drive an average transport ordinary differential equation allowing one to estimate the geodesic that moves an image toward the mean shape configuration. This model gives large deformation atlases that are optimal with respect to the shape manifold as defined by the data and the image registration assumptions. We use the techniques in the large deformation context here, but they also pertain to small deformation atlas construction. Furthermore, a natural, inherently inverse consistent image registration is gained for free, as is a tool for constant arc length geodesic shape interpolation. The geodesic atlas creation algorithm is quantitatively compared to the Euclidean anatomical average to elucidate the need for optimized atlases. The procedures generate improved average representations of highly variable anatomy from distinct populations. PMID:15501083

  7. Average Soil Water Retention Curves Measured by Neutron Radiography

    SciTech Connect

    Cheng, Chu-Lin; Perfect, Edmund; Kang, Misun; Voisin, Sophie; Bilheux, Hassina Z; Horita, Juske; Hussey, Dan

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel-by-pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
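The per-pixel Beer-Lambert inversion described above can be sketched as follows, assuming the beam-hardening and geometric corrections have already been applied; the attenuation coefficient, image size, and water thicknesses below are hypothetical values chosen only to make the normalization step concrete.

```python
import numpy as np

# hypothetical attenuation coefficient for water and synthetic images
mu_w = 3.5                                  # cm^-1 (assumed)
rng = np.random.default_rng(1)

I_dry = np.full((4, 4), 1000.0)             # transmitted flux, dry column
w_true = rng.uniform(0.0, 0.3, (4, 4))      # cm of water along each pixel's path
I_wet = I_dry * np.exp(-mu_w * w_true)      # synthetic image at some matric potential
I_sat = I_dry * np.exp(-mu_w * 0.3)         # image of the fully saturated column

# dividing by the saturated image (in log space) yields relative saturation,
# cancelling multiplicative factors common to both images
S = np.log(I_wet / I_dry) / np.log(I_sat / I_dry)
print(S.min(), S.max())                     # relative saturation in [0, 1]
```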

  8. Exact Averaging of Stochastic Equations for Flow in Porous Media

    SciTech Connect

    Karasaki, Kenzi; Shvidler, Mark; Karasaki, Kenzi

    2008-03-15

    It is well known that exact averaging of the equations for flow and transport in random porous media has so far been achieved only for a limited class of special fields. Moreover, approximate averaging methods (for example, the convergence behavior and accuracy of truncated perturbation series) are not well studied, and calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do exact and sufficiently general forms of the averaged equations exist? Here, we present an approach for finding the general exactly averaged system of basic equations for steady flow with sources in unbounded stochastically homogeneous fields. We do this by using (1) the existence and some general properties of Green's functions for the appropriate stochastic problem, and (2) some information about the random conductivity field. This approach enables us to find the form of the averaged equations without directly solving the stochastic equations or invoking the usual small-parameter assumptions. For the common case of a stochastically homogeneous conductivity field we present a new, exactly averaged, nonlocal basic equation with a unique kernel-vector. We show that in the case of some type of global symmetry (isotropy, transversal isotropy, or orthotropy), the exact averaged nonlocal equations with a unique kernel-tensor can be derived in the same way for three-dimensional and two-dimensional flow. When global symmetry does not exist, the nonlocal equation with a kernel-tensor involves complications and leads to an ill-posed problem.

  9. The average longitudinal air shower profile: exploring the shape information

    NASA Astrophysics Data System (ADS)

    Conceição, R.; Andringa, S.; Diogo, F.; Pimenta, M.

    2015-08-01

    The shape of the extensive air shower (EAS) longitudinal profile contains information about the nature of the primary cosmic ray. However, with current detection capabilities, assessing this quantity on an event-by-event basis is still very challenging. In this work we show that the average longitudinal profile can be used to characterise the average behaviour of high-energy cosmic rays. Using the concept of a universal shower profile, the shape of the average profile can be described in terms of two variables that are already measurable by current experiments. These variables are sensitive both to the average primary mass composition and to hadronic interaction properties in shower development. We demonstrate that the shape of the average muon production depth profile can be explored in the same way as the electromagnetic profile, with a higher discrimination power for state-of-the-art hadronic interaction models. The combination of the shape variables of both profiles provides a powerful new test of existing hadronic interaction models, and may also provide important hints about multi-particle production at the highest energies.

  10. Spectral Approach to Optimal Estimation of the Global Average Temperature.

    NASA Astrophysics Data System (ADS)

    Shen, Samuel S. P.; North, Gerald R.; Kim, Kwang-Y.

    1994-12-01

    Making use of EOF analysis and statistical optimal averaging techniques, the problem of random sampling error in estimating the global average temperature by a network of surface stations has been investigated. The EOF representation makes it unnecessary to use simplified empirical models of the correlation structure of temperature anomalies. If an adjustable weight is assigned to each station according to the criterion of minimum mean-square error, a formula for this error can be derived that consists of a sum of contributions from successive EOF modes. The EOFs were calculated from both observed data and a noise-forced EBM for the problem of one-year and five-year averages. The mean square statistical sampling error depends on the spatial distribution of the stations, length of the averaging interval, and the choice of the weight for each station data stream. Examples used here include four symmetric configurations of 4 × 4, 6 × 4, 9 × 7, and 20 × 10 stations and the Angell-Korshover configuration. Comparisons with the 100-yr U.K. dataset show that correlations for the time series of the global temperature anomaly average between the full dataset and this study's sparse configurations are rather high. For example, the 63-station Angell-Korshover network with uniform weighting explains 92.7% of the total variance, whereas the same network with optimal weighting can lead to 97.8% explained total variance of the U.K. dataset.
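The minimum mean-square-error weighting criterion can be reduced to a small worked example. The sketch below is a simplified stand-in for the EOF-based derivation in the abstract: given a (hypothetical) error covariance C among three stations, it minimizes the variance w^T C w of the weighted network average subject to the weights summing to one, via the standard Lagrange-multiplier solution w ∝ C⁻¹1.

```python
import numpy as np

# hypothetical 3x3 station error covariance (positive definite)
C = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.3],
              [0.2, 0.3, 0.5]])

ones = np.ones(3)
w = np.linalg.solve(C, ones)
w /= w.sum()                        # Lagrange solution: w proportional to C^{-1} 1

uniform_var = ones @ C @ ones / 9   # mean-square error of the unweighted mean
optimal_var = w @ C @ w             # mean-square error of the optimal average
print(w, uniform_var, optimal_var)  # optimal weighting never does worse
```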

  11. Spectral approach to optimal estimation of the global average temperature

    SciTech Connect

    Shen, S.S.P.; North, G.R.; Kim, K.Y.

    1994-12-01

    Making use of EOF analysis and statistical optimal averaging techniques, the problem of random sampling error in estimating the global average temperature by a network of surface stations has been investigated. The EOF representation makes it unnecessary to use simplified empirical models of the correlation structure of temperature anomalies. If an adjustable weight is assigned to each station according to the criterion of minimum mean-square error, a formula for this error can be derived that consists of a sum of contributions from successive EOF modes. The EOFs were calculated from both observed data and a noise-forced EBM for the problem of one-year and five-year averages. The mean square statistical sampling error depends on the spatial distribution of the stations, length of the averaging interval, and the choice of the weight for each station data stream. Examples used here include four symmetric configurations of 4 X 4, 6 X 4, 9 X 7, and 20 X 10 stations and the Angell-Korshover configuration. Comparisons with the 100-yr U.K. dataset show that correlations for the time series of the global temperature anomaly average between the full dataset and this study's sparse configurations are rather high. For example, the 63-station Angell-Korshover network with uniform weighting explains 92.7% of the total variance, whereas the same network with optimal weighting can lead to 97.8% explained total variance of the U.K. dataset. 27 refs., 5 figs., 4 tabs.

  12. Model Averaging for Improving Inference from Causal Diagrams

    PubMed Central

    Hamra, Ghassan B.; Kaufman, Jay S.; Vahratian, Anjel

    2015-01-01

    Model selection is an integral, yet contentious, component of epidemiologic research. Unfortunately, there remains no consensus on how to identify a single best model among multiple candidate models. Researchers may be prone to selecting the model that best supports their a priori preferred result, a phenomenon referred to as “wish bias”. Directed acyclic graphs (DAGs), based on background causal and substantive knowledge, are a useful tool for specifying a subset of adjustment variables to obtain a causal effect estimate. In many cases, however, a DAG will support multiple sufficient or minimally sufficient adjustment sets. Even though all of these may theoretically produce unbiased effect estimates, they may in practice yield somewhat distinct values, and the need to select between these models once again makes the research enterprise vulnerable to wish bias. In this work, we suggest combining adjustment sets with model averaging techniques to obtain causal estimates based on multiple, theoretically unbiased models. We use three techniques for averaging the results among multiple candidate models: information criteria weighting, inverse variance weighting, and bootstrapping. We illustrate these approaches with an example from the Pregnancy, Infection, and Nutrition (PIN) study. We show that each averaging technique returns similar model-averaged causal estimates. An a priori strategy of model averaging provides a means of integrating uncertainty in selection among candidate causal models, while also avoiding the temptation to report the most attractive estimate from a suite of equally valid alternatives. PMID:26270672
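One of the three averaging schemes named above, information-criterion weighting, can be sketched in a few lines. The effect estimates and AIC values below are hypothetical placeholders for the output of three candidate adjustment sets, not numbers from the PIN study.

```python
import numpy as np

# hypothetical effect estimates and AIC values from three candidate models
estimates = np.array([1.10, 1.25, 1.18])     # effect estimate from each model
aic = np.array([230.4, 231.1, 233.8])        # fit criterion for each model

delta = aic - aic.min()                      # AIC differences from the best model
w = np.exp(-0.5 * delta)
w /= w.sum()                                 # Akaike weights, summing to 1

beta_avg = float(w @ estimates)              # model-averaged causal estimate
print(w.round(3), round(beta_avg, 3))
```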

  13. Do conservative solutes migrate at average pore-water velocity?

    PubMed

    Rovey, Charles W; Niemann, William L

    2005-01-01

    According to common understanding, the advective velocity of a conservative solute equals the average linear pore-water velocity. Yet direct monitoring indicates that the two velocities may be different in heterogeneous media. For example, at the Camp Dodge, Iowa, site the advective velocity of discrete Cl- plumes was less than one tenth of the average pore-water velocity calculated from Darcy's law using the measured hydraulic gradient, effective porosity, and hydraulic conductivity (K) from large-scale three-dimensional (3D) techniques, e.g., pumping tests. Possibly, this difference reflects the influence of different pore systems, if the K relevant to transient solute flux is influenced more by lower-K heterogeneity than a steady or quasi-steady water flux. To test this idea, tracer tests were conducted under controlled laboratory conditions. Under one-dimensional flow conditions, the advective velocity of discrete conservative solutes equaled the average pore-water velocity determined from volumetric flow rates and Darcy's law. In a larger 3D flow system, however, the same solutes migrated at approximately 65% of the average pore-water velocity. These results, coupled with direct observation of dye tracers and their velocities as they migrated through both homogeneous and heterogeneous sections of the same model, demonstrate that heterogeneity can slow the advective velocity of discrete solute plumes relative to the average pore-water velocity within heterogeneous 3D flow systems. PMID:15726924
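The baseline quantity in this comparison, the average linear pore-water velocity from Darcy's law, is a one-line calculation. The values of K, i, and n_e below are hypothetical, chosen only to make the units concrete.

```python
# average linear pore-water velocity from Darcy's law, v = K * i / n_e,
# with assumed values for the quantities named in the abstract
K = 1.0e-4        # hydraulic conductivity, m/s
i = 0.005         # hydraulic gradient, dimensionless
n_e = 0.25        # effective porosity, dimensionless

q = K * i                  # Darcy flux (specific discharge), m/s
v_avg = q / n_e            # average linear pore-water velocity, m/s
print(v_avg, v_avg * 86400)  # m/s and m/day
```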

  14. Discrete Models of Fluids: Spatial Averaging, Closure, and Model Reduction

    SciTech Connect

    Panchenko, Alexander; Tartakovsky, Alexandre; Cooper, Kevin

    2014-03-06

    The main question addressed in the paper is how to obtain closed form continuum equations governing spatially averaged dynamics of semi-discrete ODE models of fluid flow. In the presence of multiple small scale heterogeneities, the size of these ODE systems can be very large. Spatial averaging is then a useful tool for reducing computational complexity of the problem. The averages satisfy balance equations of mass, momentum and energy. These equations are exact, but they do not form a continuum model in the true sense of the word because calculation of stress and heat flux requires solving the underlying ODE system. To produce continuum equations that can be simulated without resolving micro-scale dynamics, we developed a closure method based on the use of regularized deconvolutions. We mostly deal with non-linear averaging suitable for Lagrangian particle solvers, but consider Eulerian linear averaging where appropriate. The results of numerical experiments show good agreement between our closed form flux approximations and their exact counterparts.

  15. Despeckling vs averaging of retinal UHROCT tomograms: advantages and limitations

    NASA Astrophysics Data System (ADS)

    Eichel, Justin A.; Lee, Donghyun D.; Wong, Alexander; Fieguth, Paul W.; Clausi, David A.; Bizheva, Kostadinka K.

    2011-03-01

    Imaging time can be reduced using despeckled tomograms, which have similar image metrics to those obtained by averaging several low speed tomograms or many high speed tomograms. Quantitative analysis was used to compare the performance of two speckle denoising approaches, algorithmic despeckling and frame averaging, as applied to retinal OCT images. Human retinal tomograms were acquired from healthy subjects with a research grade 1060nm spectral domain UHROCT system with 5μm axial resolution in the retina. Single cross-sectional retinal tomograms were processed with a novel speckle denoising algorithm and compared with frame averaged retinal images acquired at the same location. Image quality metrics such as the image SNR and contrast-to-noise ratio (CNR) were evaluated for both cases.

  16. Rare events and the convergence of exponentially averaged work values

    NASA Astrophysics Data System (ADS)

    Jarzynski, Christopher

    2006-04-01

    Equilibrium free energy differences are given by exponential averages of nonequilibrium work values; such averages, however, often converge poorly, as they are dominated by rare realizations. I show that there is a simple and intuitively appealing description of these rare but dominant realizations. This description is expressed as a duality between “forward” and “reverse” processes, and provides both heuristic insights and quantitative estimates regarding the number of realizations needed for convergence of the exponential average. Analogous results apply to the equilibrium perturbation method of estimating free energy differences. The pedagogical example of a piston and gas [R.C. Lua and A.Y. Grosberg, J. Phys. Chem. B 109, 6805 (2005)] is used to illustrate the general discussion.
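The slow convergence being described can be demonstrated with a toy Gaussian work distribution (an assumption for illustration; for a Gaussian, the Jarzynski average ΔF = -kT ln⟨exp(-W/kT)⟩ has the closed-form answer mean(W) - var(W)/(2kT)). Small samples rarely contain the dominant low-work realizations, so they overestimate ΔF.

```python
import numpy as np

# hypothetical Gaussian work distribution; exact free energy difference
# is mean(W) - var(W)/(2*kT) in this special case
rng = np.random.default_rng(42)
kT = 1.0
mean_w, var_w = 5.0, 4.0
exact = mean_w - var_w / (2 * kT)            # = 3.0 here

for n in (10, 1_000, 100_000):
    W = rng.normal(mean_w, np.sqrt(var_w), n)
    dF = -kT * np.log(np.mean(np.exp(-W / kT)))
    # the exponential average is dominated by rare realizations with
    # W near mean(W) - var(W)/kT, hence the slow convergence
    print(n, round(dF, 3), "exact:", exact)
```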

  17. Time-average TV holography for vibration fringe analysis

    SciTech Connect

    Kumar, Upputuri Paul; Kalyani, Yanam; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2009-06-01

    Time-average TV holography is a widely used method for vibration measurement. The method generates speckle-correlation time-averaged J0 fringes that can be used for full-field qualitative visualization of mode shapes at resonant frequencies of an object under harmonic excitation. To map the amplitudes of vibration, quantitative evaluation of the time-averaged fringe pattern is desired. A quantitative evaluation procedure based on the phase-shifting technique used in two-beam interferometry has been adopted for this application with some modification. The existing procedure requires a large number of frames to be recorded. We propose a procedure that reduces the number of frames required for the analysis. The TV holographic system used and the experimental results obtained with it on an edge-clamped, sinusoidally excited square aluminium plate are discussed.
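The J0 fringe function mentioned above can be sketched numerically: for out-of-plane harmonic vibration with normal illumination and observation, the time-averaged fringe brightness varies as J0(Ω)², with Ω = (4π/λ)·a for amplitude a. The wavelength and amplitude range below are illustrative assumptions; J0 is evaluated from its integral form so the sketch needs only NumPy.

```python
import numpy as np

def bessel_j0(x):
    """J0 via its integral form, (1/pi) * int_0^pi cos(x sin t) dt (midpoint rule)."""
    t = (np.arange(4000) + 0.5) * np.pi / 4000
    return np.cos(np.multiply.outer(x, np.sin(t))).mean(axis=-1)

lam = 632.8e-9                         # He-Ne laser wavelength, m (illustrative)
a = np.linspace(0.0, 0.5e-6, 1001)     # vibration amplitudes up to 0.5 um
fringe = bessel_j0(4 * np.pi * a / lam) ** 2   # time-averaged fringe brightness

# dark fringes fall at the zeros of J0 (2.405, 5.520, ...), so the first
# dark fringe maps to an amplitude of about lam * 2.405 / (4 pi)
a_dark1 = 2.405 * lam / (4 * np.pi)
print(a_dark1)                          # ~0.12 micrometres for this wavelength
```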

  18. Spatial and frequency averaging techniques for a polarimetric scatterometer system

    SciTech Connect

Monakov, A.A.; Stjernman, A.S.; Nystroem, A.K.; Vivekanandan, J.

    1994-01-01

    An accurate estimation of backscattering coefficients for various types of rough surfaces is the main theme of remote sensing. Radar scattering signals from distributed targets exhibit fading due to interference associated with coherent scattering from individual scatterers within the resolution volume. Uncertainty in radar measurements which arises as a result of fading is reduced by averaging independent samples. Independent samples are obtained by collecting the radar returns from nonoverlapping footprints (spatial averaging) and/or nonoverlapping frequencies (frequency agility techniques). An improved formulation of fading characteristics for the spatial averaging and frequency agility technique is derived by taking into account the rough surface scattering process. Kirchhoff's approximation is used to describe rough surface scattering. Expressions for fading decorrelation distance and decorrelation bandwidth are derived. Rough surface scattering measurements are performed between L and X bands. Measured frequency and spatial correlation coefficients show good agreement with theoretical results.

  19. The average chemical composition of the lunar surface

    NASA Technical Reports Server (NTRS)

    Turkevich, A. L.

    1973-01-01

    The available analytical data from twelve locations on the moon are used to estimate the average amounts of the principal chemical elements (O, Na, Mg, Al, Si, Ca, Ti, and Fe) in the mare, the terra, and the average lunar surface regolith. These chemical elements comprise about 99% of the atoms on the lunar surface. The relatively small variability in the amounts of these elements at different mare (or terra) sites, and the evidence from the orbital measurements of Apollo 15 and 16, suggest that the lunar surface is much more homogeneous than the surface of the earth. The average chemical composition of the lunar surface may now be known as well as, if not better than, that of the solid part of the earth's surface.

  20. Motion artifacts reduction from PPG using cyclic moving average filter.

    PubMed

    Lee, Junyeon

    2014-01-01

The photoplethysmogram (PPG) is an extremely useful medical diagnostic tool. However, PPG signals are highly susceptible to motion artifacts. In this paper, we propose a cyclic moving average filter that exploits the cycle-to-cycle similarity of the photoplethysmogram. The method separates the continuous PPG signal into individual cycles, adjusts the number of samples so that every cycle has the same length, and then arranges the cycles in two dimensions and averages the corresponding samples across all cycles recorded so far. In this way, motion artifacts can be eliminated without damaging the underlying PPG signal. PMID:24704660
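The segment-align-average idea described in this abstract can be sketched as follows. The function names and the synthetic test signal are hypothetical, and linear resampling is only one simple way to equalize cycle lengths:

```python
import math
import random

def resample(cycle, length):
    # Linearly interpolate one cycle onto a fixed number of samples
    n = len(cycle)
    out = []
    for i in range(length):
        pos = i * (n - 1) / (length - 1)
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(cycle[lo] * (1 - frac) + cycle[hi] * frac)
    return out

def cyclic_moving_average(cycles, length=100):
    # Align every cycle to the same length, then average sample-by-sample
    # across cycles (the "2-dimensional" arrangement in the abstract).
    aligned = [resample(c, length) for c in cycles]
    return [sum(col) / len(col) for col in zip(*aligned)]

# Synthetic PPG-like cycles with slightly varying length and additive noise
random.seed(1)
cycles = []
for n in (95, 100, 105, 98):
    cycles.append([math.sin(math.pi * i / (n - 1)) + random.gauss(0, 0.05)
                   for i in range(n)])
template = cyclic_moving_average(cycles)
```

Averaging samples that occupy the same phase position within their cycle suppresses uncorrelated motion noise while preserving the repeating pulse waveform.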

  1. Neutron average cross sections of {sup 237}Np

    SciTech Connect

    Noguere, G.

    2010-04-15

This work reports {sup 237}Np neutron resonance parameters obtained from the simultaneous analysis of time-of-flight data measured at the GELINA, ORELA, KURRI, and LANSCE facilities. A statistical analysis of these resonances relying on average R-matrix and optical model calculations was used to establish consistent l-dependent average resonance parameters involved in the description of the unresolved resonance range of the {sup 237}Np neutron cross sections. For neutron orbital angular momentum l=0, we obtained an average radiation width {Gamma}{sub {gamma}}=39.3+-1.0 meV, a neutron strength function 10{sup 4}S{sub 0}=1.02+-0.14, a mean level spacing D{sub 0}=0.60+-0.03 eV, and a potential scattering length R{sup '}=9.8+-0.1 fm.

  2. An Advanced Time Averaging Modelling Technique for Power Electronic Circuits

    NASA Astrophysics Data System (ADS)

    Jankuloski, Goce

    For stable and efficient performance of power converters, a good mathematical model is needed. This thesis presents a new modelling technique for DC/DC and DC/AC Pulse Width Modulated (PWM) converters. The new model is more accurate than the existing modelling techniques such as State Space Averaging (SSA) and Discrete Time Modelling. Unlike the SSA model, the new modelling technique, the Advanced Time Averaging Model (ATAM) includes the averaging dynamics of the converter's output. In addition to offering enhanced model accuracy, application of linearization techniques to the ATAM enables the use of conventional linear control design tools. A controller design application demonstrates that a controller designed based on the ATAM outperforms one designed using the ubiquitous SSA model. Unlike the SSA model, ATAM for DC/AC augments the system's dynamics with the dynamics needed for subcycle fundamental contribution (SFC) calculation. This allows for controller design that is based on an exact model.

  3. Trapping ultracold atoms in a time-averaged adiabatic potential

    SciTech Connect

    Gildemeister, M.; Nugent, E.; Sherlock, B. E.; Kubasik, M.; Sheard, B. T.; Foot, C. J.

    2010-03-15

    We report an experimental realization of ultracold atoms confined in a time-averaged, adiabatic potential (TAAP). This trapping technique involves using a slowly oscillating ({approx}kHz) bias field to time-average the instantaneous potential given by dressing a bare magnetic potential with a high-frequency ({approx}MHz) magnetic field. The resultant potentials provide a convenient route to a variety of trapping geometries with tunable parameters. We demonstrate the TAAP trap in a standard time-averaged orbiting potential trap with additional Helmholtz coils for the introduction of the radio frequency dressing field. We have evaporatively cooled 5x10{sup 4} atoms of {sup 87}Rb to quantum degeneracy and observed condensate lifetimes of longer than 3 s.

  4. Time-averaged photon-counting digital holography.

    PubMed

    Demoli, Nazif; Skenderović, Hrvoje; Stipčević, Mario

    2015-09-15

Time-averaged holography has recorded holograms on photo-emulsions in its early stages and on digital photo-sensitive arrays later. We extend the recording possibilities by utilizing a photon-counting camera, and we further investigate the possibility of obtaining accurate hologram reconstructions in rather severe experimental conditions. To achieve this, we derived an expression for the fringe function comprising the main parameters affecting the hologram recording. The influence of the main parameters, namely the exposure time and the number of averaged holograms, is analyzed by simulations and experiments. It is demonstrated that taking long exposure times can be avoided by averaging over many holograms with exposure times much shorter than the vibration cycle. Conditions in which the signal-to-noise ratio in reconstructed holograms can be substantially increased are provided. PMID:26371907

  5. The Health Effects of Income Inequality: Averages and Disparities.

    PubMed

    Truesdale, Beth C; Jencks, Christopher

    2016-03-18

    Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health. PMID:26735427

  6. Genuine non-self-averaging and ultraslow convergence in gelation

    NASA Astrophysics Data System (ADS)

    Cho, Y. S.; Mazza, M. G.; Kahng, B.; Nagler, J.

    2016-08-01

    In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation.

  7. Genuine non-self-averaging and ultraslow convergence in gelation.

    PubMed

    Cho, Y S; Mazza, M G; Kahng, B; Nagler, J

    2016-08-01

    In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation. PMID:27627355

  8. Size and emotion averaging: costs of dividing attention after all.

    PubMed

    Brand, John; Oriet, Chris; Tottenham, Laurie Sykes

    2012-03-01

    Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention. PMID:22390476

  9. Optimum orientation versus orientation averaging description of cluster radioactivity

    NASA Astrophysics Data System (ADS)

    Seif, W. M.; Ismail, M.; Refaie, A. I.; Amer, Laila H.

    2016-07-01

While the optimum-orientation concept is frequently used in studies on cluster decays involving deformed nuclei, the orientation-averaging concept is used in most alpha decay studies. We investigate the different decay stages in both the optimum-orientation and the orientation-averaging pictures of the cluster decay process. For decays of 232,233,234U and 236,238Pu isotopes, the quantum knocking frequency and penetration probability based on the Wentzel–Kramers–Brillouin approximation are used to find the decay width. The obtained decay width and the experimental half-life are employed to estimate the clusters' preformation probability. We found that the orientation-averaged decay width is one or two orders of magnitude less than its value along the non-compact optimum orientation. Correspondingly, the extracted preformation probability based on the averaged decay width increases with the same orders of magnitude compared to its value obtained considering the optimum orientation. The cluster preformation probabilities estimated by the two considered schemes are in more or less comparable agreement with the Blendowske–Walliser (BW) formula based on the α preformation probability S{sub α}{sup ave} obtained from the orientation-averaging scheme. All the results, including the optimum-orientation ones, deviate substantially from the BW law based on S{sub α}{sup opt} that was estimated from the optimum-orientation scheme. To account for the nuclear deformations, it is more relevant to calculate the decay width by averaging over the different possible orientations of the participating deformed nuclei, rather than considering the corresponding non-compact optimum orientation.

  10. Creating "Intelligent" Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, Noel; Taylor, Patrick

    2014-05-01

    The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequal weighting multi-model ensembles. The intention is to produce improved ("intelligent") unequal-weight ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing the equal-weighted ensemble average and an ensemble weighted using the process-based metric. Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations including a non-forced preindustrial control experiment, historical simulations, and
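The equal- versus unequal-weighting contrast at the heart of this record can be illustrated with a toy sketch. The projection values, metric errors, and inverse-error weighting rule below are all hypothetical, not the study's actual process-based scheme:

```python
def ensemble_averages(projections, metric_errors):
    # Equal-weight mean vs. skill-weighted mean (weight = 1/error, normalized).
    equal = sum(projections) / len(projections)
    weights = [1.0 / e for e in metric_errors]
    total = sum(weights)
    weighted = sum(w * p for w, p in zip(weights, projections)) / total
    return equal, weighted

# Hypothetical warming projections (K) from four models, and each model's
# error against a process-based metric (smaller = better agreement)
projections = [2.1, 3.4, 2.8, 4.0]
errors = [0.2, 0.8, 0.3, 1.0]
equal, weighted = ensemble_averages(projections, errors)
```

Because the two best-scoring models project the least warming here, the skill-weighted mean falls below the equal-weighted one, which is exactly the kind of regional divergence between the two averages that the study reports.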

  11. A preliminary measurement of the average B hadron lifetime

    SciTech Connect

    Manly, S.L.; SLD Collaboration

    1994-09-01

    The average B hadron lifetime was measured using data collected with the SLD detector at the SLC in 1993. From a sample of {approximately}50,000 Z{sup 0} events, a sample enriched in Z{sup 0} {yields} b{bar b} was selected by applying an impact parameter tag. The lifetime was extracted from the decay length distribution of inclusive vertices reconstructed in three dimensions. A binned maximum likelihood method yielded an average B hadron lifetime of {tau}{sub B} = 1.577{plus_minus}0.032(stat.){plus_minus}0.046(syst.) ps.

  12. A preliminary, precise measurement of the average B hadron lifetime

    SciTech Connect

    SLD Collaboration

    1994-07-01

    The average B hadron lifetime was measured using data collected with the SLD detector at the SLC in 1993. From a sample of {approximately}50,000 Z{sup 0} events, a sample enriched in Z{sup 0} {yields} b{bar b} was selected by applying an impact parameter tag. The lifetime was extracted from the decay length distribution of inclusive vertices reconstructed in three dimensions. A binned maximum likelihood method yielded an average B hadron lifetime of {tau}{sub B} = 1.577 {plus_minus} 0.032(stat.) {plus_minus} 0.046(syst.) ps.

  13. High average power scaleable thin-disk laser

    DOEpatents

    Beach, Raymond J.; Honea, Eric C.; Bibeau, Camille; Payne, Stephen A.; Powell, Howard; Krupke, William F.; Sutton, Steven B.

    2002-01-01

    Using a thin disk laser gain element with an undoped cap layer enables the scaling of lasers to extremely high average output power values. Ordinarily, the power scaling of such thin disk lasers is limited by the deleterious effects of amplified spontaneous emission. By using an undoped cap layer diffusion bonded to the thin disk, the onset of amplified spontaneous emission does not occur as readily as if no cap layer is used, and much larger transverse thin disks can be effectively used as laser gain elements. This invention can be used as a high average power laser for material processing applications as well as for weapon and air defense applications.

  14. Bounce-averaged Fokker-Planck code for stellarator transport

    SciTech Connect

    Mynick, H.E.; Hitchon, W.N.G.

    1985-07-01

    A computer code for solving the bounce-averaged Fokker-Planck equation appropriate to stellarator transport has been developed, and its first applications made. The code is much faster than the bounce-averaged Monte-Carlo codes, which up to now have provided the most efficient numerical means for studying stellarator transport. Moreover, because the connection to analytic kinetic theory of the Fokker-Planck approach is more direct than for the Monte-Carlo approach, a comparison of theory and numerical experiment is now possible at a considerably more detailed level than previously.

  15. Average patterns and coherent phenomena in wide aperture lasers

    NASA Astrophysics Data System (ADS)

    D'Alessandro, G.; Papoff, F.; Louvergneaux, E.; Glorieux, P.

    2004-06-01

Using a realistic model of wide aperture, weakly astigmatic lasers we develop a framework to analyze experimental average intensity patterns. We use the model to explain the appearance of patterns in terms of the modes of the cavity and to show that the breaking of the symmetry of the average intensity patterns is caused by overlaps in the frequency spectra of nonvanishing modes with different parity. This result can be used even in systems with very fast dynamics to detect experimentally overlaps of frequency spectra of modes.

  16. The ground-state average structure of methyl isocyanide

    NASA Astrophysics Data System (ADS)

    Mackenzie, M. W.; Duncan, J. L.

    The use of recently determined highly precise inertial data for various isotopic modifications of methyl isocyanide has enabled the ground-state average, or rz, structure to be determined to within very narrow limits. Harmonic corrections to ground-state rotational constants have been calculated using a high-quality, experimentally determined harmonic force field. The derived zero-point inertial constants are sufficiently accurate to enable changes in the CH bond length and NCH bond angle on deuteration to be determined. The present rz structure determination is believed to be a physically realistic estimate of the ground-state average geometry of methyl isocyanide.

  17. The ground-state average structure of methyl isocyanide

    NASA Astrophysics Data System (ADS)

    Mackenzie, M. W.; Duncan, J. L.

    1982-11-01

    The use of recently determined highly precise inertial data for various isotopic modifications of methyl isocyanide has enabled the ground-state average, or rz, structure to be determined to within very narrow limits. Harmonic corrections to ground-state rotational constants have been calculated using a high-quality, experimentally determined harmonic force field. The derived zero-point inertial constants are sufficiently accurate to enable changes in the CH bond length and NCH bond angle on deuteration to be determined. The present rz structure determination is believed to be a physically realistic estimate of the ground-state average geometry of methyl isocyanide.

  18. Average Weighted Receiving Time of Weighted Tetrahedron Koch Networks

    NASA Astrophysics Data System (ADS)

    Dai, Meifeng; Zhang, Danping; Ye, Dandan; Zhang, Cheng; Li, Lei

    2015-07-01

We introduce weighted tetrahedron Koch networks with infinite weight factors, which generalize the finite ones. The notion of weighted time is first defined here. The mean weighted first-passing time (MWFPT) and the average weighted receiving time (AWRT) are defined accordingly in terms of weighted time. We study the AWRT under a weight-dependent walk. Results show that the AWRT for a nontrivial weight factor sequence grows sublinearly with the network order. To investigate the origin of this sublinearity, the average receiving time (ART) is discussed for four cases.

  19. The expanding role of signal-averaged electrocardiography.

    PubMed

    Gant, R H; Henkin, R; Morton, P G

    1999-10-01

    Signal-averaged electrocardiography is a valuable diagnostic tool for determining which patients recovering from myocardial infarction are at risk of sudden death due to ventricular arrhythmias. Additionally, the value of this technique in determining which patients with ischemic heart disease and unexplained syncope are likely to have inducible sustained ventricular tachycardia has been established. This noninvasive screening procedure has shown promise in other clinical situations, but more investigation is needed before definitive recommendation can be made. Critical care nurses can help promote the success of signal-averaged electrocardiography by educating patients, promoting acquisition of a quality recording, helping allay patients' concerns, and participating in research activities. PMID:10808814

  20. AMPERE AVERAGE CURRENT PHOTOINJECTOR AND ENERGY RECOVERY LINAC.

    SciTech Connect

    BEN-ZVI,I.; BURRILL,A.; CALAGA,R.; ET AL.

    2004-08-17

High-power Free-Electron Lasers were made possible by advances in superconducting linacs operated in an energy-recovery mode. In order to reach much higher power levels, say a fraction of a megawatt average power, many technological barriers are yet to be broken. We describe work on CW, high-current and high-brightness electron beams. This will include a description of a superconducting, laser-photocathode RF gun employing a new secondary-emission multiplying cathode and an accelerator cavity, both capable of producing on the order of one ampere average current, and plans for an ERL based on these units.

  1. Analytical solution of average path length for Apollonian networks

    NASA Astrophysics Data System (ADS)

    Zhang, Zhongzhi; Chen, Lichao; Zhou, Shuigeng; Fang, Lujun; Guan, Jihong; Zou, Tao

    2008-01-01

With the help of recursion relations derived from the self-similar structure, we obtain the solution of the average path length, d̄_t, for Apollonian networks. In contrast to the well-known numerical result d̄_t ∝ (ln N_t)^(3/4) [J. S. Andrade, Jr., Phys. Rev. Lett. 94, 018702 (2005)], our rigorous solution shows that the average path length grows logarithmically as d̄_t ∝ ln N_t in the infinite limit of network size N_t. The extensive numerical calculations completely agree with our closed-form solution.
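The object of study can be reproduced with a brute-force sketch: build the Apollonian network by recursively inserting a node into every triangular face, then compute the average path length by BFS over all node pairs. This is plain enumeration on small generations, not the paper's analytical recursion:

```python
from collections import deque

def apollonian(generations):
    # Start from a triangle; each generation inserts a node into every face,
    # connects it to the face's three corners, and splits the face in three.
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
    faces = [(0, 1, 2)]
    for _ in range(generations):
        new_faces = []
        for (a, b, c) in faces:
            v = len(adj)
            adj[v] = {a, b, c}
            for u in (a, b, c):
                adj[u].add(v)
            new_faces += [(a, b, v), (b, c, v), (a, c, v)]
        faces = new_faces
    return adj

def average_path_length(adj):
    # Mean shortest-path distance over all ordered node pairs (BFS per node)
    n = len(adj)
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
    return total / (n * (n - 1))
```

For generation 1 the network is the complete graph K4 (average path length exactly 1); growing the generation count shows the slow, logarithmic-like increase that the closed-form solution makes rigorous.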

  2. Characterizing average permeability in oil and gas formations

    SciTech Connect

    Rollins, J.B. ); Holditch, S.A.; Lee, W.J. )

    1992-03-01

This paper reports that permeability in a formation frequently follows a unimodal probability distribution. In many formations, particularly sedimentary ones, the permeability distribution is similar to the log-normal distribution. Theoretical considerations, field cases, and a reservoir simulation example show that the median, rather than the arithmetic mean, is the appropriate measure of central tendency or average value of the permeability distribution in a formation. Use of the correct estimate of average permeability is of particular importance in the classification of tight gas formations under statutes in the 1978 Natural Gas Policy Act (NGPA).
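The median-versus-mean distinction for a log-normal permeability distribution is easy to demonstrate numerically. The distribution parameters below are hypothetical, chosen only to show the effect:

```python
import math
import random
import statistics

random.seed(0)
# Synthetic log-normal permeability sample; median of the underlying
# distribution is exp(mu) = 10 md, while its mean is exp(mu + sigma^2/2).
mu, sigma = math.log(10.0), 1.0
perms = [random.lognormvariate(mu, sigma) for _ in range(100000)]

mean = statistics.fmean(perms)     # pulled upward by the long right tail
median = statistics.median(perms)  # ~exp(mu): the "typical" rock
print(f"mean = {mean:.1f} md, median = {median:.1f} md")
```

The arithmetic mean lands well above the median because a few high-permeability samples dominate the sum, which is why the median is the better single-number characterization of such a formation.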

  3. Spatial average ambiguity function for array radar with stochastic signals

    NASA Astrophysics Data System (ADS)

    Zha, Guofeng; Wang, Hongqiang; Cheng, Yongqiang; Qin, Yuliang

    2016-03-01

    For analyzing the spatial resolving performance of multi-transmitter single-receiver (MTSR) array radar with stochastic signals, the spatial average ambiguity function (SAAF) is introduced based on the statistical average theory. The analytic expression of SAAF and the corresponding resolutions in vertical range and in horizontal range are derived. Since spatial resolving performance is impacted by many parameters including signal modulation schemes, signal bandwidth, array aperture's size and target's spatial position, comparisons are implemented to analyze these influences. Simulation results are presented to validate the whole analysis.

  4. 152 W average power Tm-doped fiber CPA system.

    PubMed

    Stutzki, Fabian; Gaida, Christian; Gebhardt, Martin; Jansen, Florian; Wienke, Andreas; Zeitner, Uwe; Fuchs, Frank; Jauregui, Cesar; Wandt, Dieter; Kracht, Dietmar; Limpert, Jens; Tünnermann, Andreas

    2014-08-15

    A high-power thulium (Tm)-doped fiber chirped-pulse amplification system emitting a record compressed average output power of 152 W and 4 MW peak power is demonstrated. This result is enabled by utilizing Tm-doped photonic crystal fibers with mode-field diameters of 35 μm, which mitigate detrimental nonlinearities, exhibit slope efficiencies of more than 50%, and allow for reaching a pump-power-limited average output power of 241 W. The high-compression efficiency has been achieved by using multilayer dielectric gratings with diffraction efficiencies higher than 98%. PMID:25121845

  5. An averaging analysis of discrete-time indirect adaptive control

    NASA Technical Reports Server (NTRS)

    Phillips, Stephen M.; Kosut, Robert L.; Franklin, Gene F.

    1988-01-01

    An averaging analysis of indirect, discrete-time, adaptive control systems is presented. The analysis results in a signal-dependent stability condition and accounts for unmodeled plant dynamics as well as exogenous disturbances. This analysis is applied to two discrete-time adaptive algorithms: an unnormalized gradient algorithm and a recursive least-squares (RLS) algorithm with resetting. Since linearization and averaging are used for the gradient analysis, a local stability result valid for small adaptation gains is found. For RLS with resetting, the assumption is that there is a long time between resets. The results for the two algorithms are virtually identical, emphasizing their similarities in adaptive control.

  6. Average wave function method for gas-surface scattering

    NASA Astrophysics Data System (ADS)

    Singh, Harjinder; Dacol, Dalcio K.; Rabitz, Herschel

    1986-02-01

    The average wave function method (AWM) is applied to scattering of a gas off a solid surface. The formalism is developed for both periodic as well as disordered surfaces. For an ordered lattice an explicit relation is derived for the Bragg peaks along with a numerical illustration. Numerical results are presented for atomic clusters on a flat hard wall with a Gaussian-like potential at each atomic scattering site. The effect of relative lateral displacement of two clusters upon the scattering pattern is shown. The ability of AWM to accommodate disorder through statistical averaging over cluster configurations is illustrated. Enhanced uniform backscattering is observed with increasing roughness on the surface.

  7. Averaging processes in granular flows driven by gravity

    NASA Astrophysics Data System (ADS)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

One of the more promising theoretical frames to analyse two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over several realizations, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental

  8. Dynamic consensus estimation of weighted average on directed graphs

    NASA Astrophysics Data System (ADS)

    Li, Shuai; Guo, Yi

    2015-07-01

Recent applications call for distributed weighted-average estimation over sensor networks, where sensor measurement accuracy or environmental conditions need to be taken into consideration in the final consensused group decision. In this paper, we propose a new dynamic consensus filter design for distributed estimation of the weighted average of sensors' inputs on directed graphs. Based on recent advances in the field, we modify the existing proportional-integral consensus filter protocol to remove the requirement of bi-directional gain exchange between neighbouring sensors, so that the algorithm works for directed graphs where bi-directional communications are not possible. To compensate for the asymmetric structure of the system introduced by such a removal, sufficient gain conditions are obtained for the filter protocols to guarantee convergence. It is rigorously proved that the proposed filter protocol converges to the weighted average of constant inputs asymptotically, and to the weighted average of time-varying inputs with a bounded error. Simulations verify the effectiveness of the proposed protocols.
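The target quantity, the weighted average Σᵢ wᵢuᵢ / Σᵢ wᵢ, can be illustrated with a much simpler synchronous "ratio consensus" sketch on an undirected graph. This is not the paper's proportional-integral protocol for directed graphs, just a minimal demonstration of how a network can agree on a weighted average using only neighbour-to-neighbour exchanges:

```python
def weighted_average_consensus(inputs, weights, edges, iters=200, eps=0.2):
    # Each node runs plain average consensus on two states in parallel:
    #   y_i -> network average of w_j * u_j,   z_i -> network average of w_j
    # so the ratio y_i / z_i converges to sum(w*u) / sum(w) at every node.
    n = len(inputs)
    neigh = [[] for _ in range(n)]
    for a, b in edges:
        neigh[a].append(b)
        neigh[b].append(a)
    y = [w * u for w, u in zip(weights, inputs)]
    z = list(weights)
    for _ in range(iters):
        y = [y[i] + eps * sum(y[j] - y[i] for j in neigh[i]) for i in range(n)]
        z = [z[i] + eps * sum(z[j] - z[i] for j in neigh[i]) for i in range(n)]
    return [yi / zi for yi, zi in zip(y, z)]

# Four sensors on a line graph; weights reflect measurement accuracy
inputs = [1.0, 2.0, 4.0, 3.0]
weights = [4.0, 1.0, 1.0, 2.0]
edges = [(0, 1), (1, 2), (2, 3)]
estimates = weighted_average_consensus(inputs, weights, edges)
```

Here the exact weighted average is 16/8 = 2.0, and every node's estimate converges to it; the step size eps must stay below 1/(max degree) for this simple diffusion update to be stable.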

  9. Evaluating Methods for Constructing Average High-Density Electrode Positions

    PubMed Central

    Richards, John E.; Boswell, Corey; Stevens, Michael; Vendemia, Jennifer M.C.

    2014-01-01

Accurate analysis of scalp-recorded electrical activity requires the identification of electrode locations in 3D space. For example, source analysis of EEG/ERP (electroencephalogram, EEG; event-related-potentials, ERP) with realistic head models requires the identification of electrode locations on the head model derived from structural MRI recordings. Electrode systems must cover the entire scalp in sufficient density to discriminate EEG activity on the scalp and to complete accurate source analysis. The current study compares techniques for averaging electrode locations from 86 participants with the 128 channel “Geodesic Sensor Net” (GSN; EGI, Inc.), 38 participants with the 128 channel “Hydrocel Geodesic Sensor Net” (HGSN; EGI, Inc.), and 174 participants with the 81 channels in the 10-10 configurations. A point-set registration between the participants and an average MRI template resulted in an average configuration showing small standard errors, which could be transformed back accurately into the participants’ original electrode space. Average electrode locations are available for the GSN (86 participants), Hydrocel-GSN (38 participants), and 10-10 and 10-5 systems (174 participants). PMID:25234713

  10. Average vs. Normal IQ: An Empirical Follow-Up.

    ERIC Educational Resources Information Center

    Hess, Harrie F.; Worgull, Norman

    The broad hypothesis that children whose histories imply the possible presence of an intellect-reducing condition will score lower on an IQ test than children whose histories do not imply such conditions was tested in a study designed to illustrate the distinction between average and normal IQ. Health and behavior histories of a sample of second…

  11. Social Reasoning, Anxiety, and Collaboration with Rejected and Average Children.

    ERIC Educational Resources Information Center

    Crosby, Kimberly A.; Rose, Marcy D.; Fireman, Gary D.

    The current study examined peer nominated non-aggressive rejected children on their levels of social reasoning, anxiety, goals and perceptions of self-efficacy, and communication styles when collaborating with another peer. Sociometric measures were used to identify 15 average and 10 non-aggressive rejected 5th and 6th grade children. Pre- and…

  12. Speckle averaging system for laser raster-scan image projection

    DOEpatents

    Tiszauer, D.H.; Hackel, L.A.

    1998-03-17

    The viewers' perception of laser speckle in a laser-scanned image projection system is modified or eliminated by the addition of an optical deflection system that effectively presents a new speckle realization at each point on the viewing screen to each viewer for every scan across the field. The speckle averaging is accomplished without introduction of spurious imaging artifacts. 5 figs.

  13. Speckle averaging system for laser raster-scan image projection

    DOEpatents

    Tiszauer, Detlev H.; Hackel, Lloyd A.

    1998-03-17

    The viewers' perception of laser speckle in a laser-scanned image projection system is modified or eliminated by the addition of an optical deflection system that effectively presents a new speckle realization at each point on the viewing screen to each viewer for every scan across the field. The speckle averaging is accomplished without introduction of spurious imaging artifacts.

  14. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... credits to the extent such a transfer would result in the transferor having a negative credit balance at... through the transfer of benzene credits provided that: (i) The credits were generated in the same averaging period as they are used; (ii) The credit transfer takes place no later than fifteen working...

  15. Averaging analysis of state-switched piezoelectric structural systems

    NASA Astrophysics Data System (ADS)

    Kurdila, A. J.; Lesieutre, G. A.; Zhang, X.; Prazenica, C.; Niezrecki, C.

    2005-05-01

    This paper develops an averaging analysis for qualitative and quantitative study of switched piezostructural systems. The study of piezostructural systems including passive and active shunt circuits has been carried out for some time. Far less is known regarding analytical methods for the study of switched piezostructural systems. The technique developed in this paper is motivated by the success of averaging methods for the analysis of switched power supplies. In this paper it is shown that averaging analysis provides a means of determining time domain as well as frequency domain response characteristics of switched piezostructural systems that include switched capacitive shunt circuits. The time domain and frequency domain performance of a tunable piezoceramic vibration absorber is derived via averaging in this paper. The proposed switching architecture provides an essentially continuous range of tunable notch frequencies, in contrast to a finite and fixed collection of discrete notch frequencies available in some implementations of capacitively shunted piezostructures. The technique for analysis appears promising for the study of vibration damping and energy harvesting piezostructures whose underlying operating principle is similar.
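
    A first-order illustration of why fast switching yields essentially continuous tuning (this is textbook state-space averaging applied to an assumed LC shunt with hypothetical component values, not the paper's piezostructural derivation): under fast PWM switching between two shunt capacitances, the averaged capacitance, and hence the notch frequency, varies continuously with the duty cycle.

```python
import math

def averaged_capacitance(c1, c2, duty):
    """First-order average of a shunt switched between C1 and C2 (duty in [0, 1])."""
    return duty * c1 + (1.0 - duty) * c2

def notch_frequency(l_henry, c_farad):
    """Resonant (notch) frequency of an idealized LC shunt, in Hz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

# Sweeping the duty cycle sweeps the notch continuously between the two
# fixed-capacitor endpoints -- the "essentially continuous range" in the text.
L, C1, C2 = 0.1, 10e-9, 40e-9  # hypothetical inductance and capacitances
freqs = [notch_frequency(L, averaged_capacitance(C1, C2, d / 10))
         for d in range(11)]
```

    With C2 = 4·C1, the notch at full duty is exactly twice the notch at zero duty, and every intermediate frequency is reachable by choice of duty cycle.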

  16. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    PubMed

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-01-01

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two. PMID:26041580

  17. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging

    PubMed Central

    Brezis, Noam; Bronfman, Zohar Z.; Usher, Marius

    2015-01-01

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two. PMID:26041580

  18. HIGH AVERAGE POWER UV FREE ELECTRON LASER EXPERIMENTS AT JLAB

    SciTech Connect

    Douglas, David; Evtushenko, Pavel; Gubeli, Joseph; Hernandez-Garcia, Carlos; Legg, Robert; Neil, George; Powers, Thomas; Shinn, Michelle D; Tennant, Christopher; Williams, Gwyn

    2012-07-01

    Having produced 14 kW of average power at ~2 microns, JLab has shifted its focus to the ultraviolet portion of the spectrum. This presentation will describe the JLab UV Demo FEL, present specifics of its driver ERL, and discuss the latest experimental results from FEL experiments and machine operations.

  19. Homological equations for tensor fields and periodic averaging

    NASA Astrophysics Data System (ADS)

    Avendaño Camacho, M.; Vorobiev, Y. M.

    2011-09-01

    Homological equations of tensor type associated to periodic flows on a manifold are studied. The Cushman intrinsic formula [4] is generalized to the case of multivector fields and differential forms. Some applications to normal forms and the averaging method for perturbed Hamiltonian systems on slow-fast phase spaces are given.

  20. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... elevation is calculated as the average of the elevation along a straight line path from 3 to 16 kilometers (2 and 10 miles) extending radially from the antenna site. If a portion of the radial path extends... elevation unless the radial path again passes over United States land between 16 and 134 kilometers (10...

  1. Factors Influencing Grade Point Averages at a Community College.

    ERIC Educational Resources Information Center

    Johnson, Marvin L.; Walberg, Herbert J.

    1989-01-01

    Examines the applicability of Walberg's model of educational productivity to a community college setting. Finds that prior achievement, use of out-of-school time, motivation, social context of the classroom, and age have positive effects on grade point average, while quantity of instruction and emphasis on education at home have negative effects.…

  2. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... methodology functionalization. 301.7 Section 301.7 Conservation of Power and Water Resources FEDERAL ENERGY... SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER ACT § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each...

  3. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... methodology functionalization. 301.7 Section 301.7 Conservation of Power and Water Resources FEDERAL ENERGY... SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER ACT § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each...

  4. Stochastic averaging of energy envelope of Preisach hysteretic systems

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Ying, Z. G.; Zhu, W. Q.

    2009-04-01

    A new stochastic averaging technique for analyzing the response of a single-degree-of-freedom Preisach hysteretic system with nonlocal memory under stationary Gaussian stochastic excitation is proposed. An equivalent nonhysteretic nonlinear system with amplitude-envelope-dependent damping and stiffness is first obtained from the given system by using the generalized harmonic balance technique. The relationship between the amplitude envelope and the energy envelope is then established, and the equivalent damping and stiffness coefficients are expressed as functions of the energy envelope. The available range of the yielding force of the system is extended, and the strong nonlinear stiffness of the system is incorporated, so as to improve the response prediction. Finally, an averaged Itô stochastic differential equation for the energy envelope of the system as a one-dimensional diffusion process is derived by using the stochastic averaging method of energy envelope, and the Fokker-Planck-Kolmogorov equation associated with the averaged Itô equation is solved to obtain stationary probability densities of the energy envelope and amplitude envelope. The approximate solutions are validated by using Monte Carlo simulation.

  5. Time Series ARIMA Models of Undergraduate Grade Point Average.

    ERIC Educational Resources Information Center

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
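
    As a minimal sketch of the estimation stage of the Box-Jenkins procedure (assuming a pure AR(1) model, which is far simpler than anything fitted in the report), lag-1 least squares recovers the autoregressive coefficient from simulated data:

```python
import random

def simulate_ar1(phi, n, seed=0):
    """Generate an AR(1) series x_t = phi * x_{t-1} + e_t with Gaussian noise."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0.0, 1.0))
    return x

def estimate_phi(x):
    """Least-squares estimate of phi from the lag-1 regression (no intercept)."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

series = simulate_ar1(phi=0.7, n=20000)
phi_hat = estimate_phi(series)  # close to 0.7 for a long series
```

    In practice the identification and diagnosis stages (autocorrelation inspection, residual checks) surround this estimation step.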

  6. Touching Epistemologies: Meanings of Average and Variation in Nursing Practice.

    ERIC Educational Resources Information Center

    Noss, Richard; Pozzi, Stefano; Hoyles, Celia

    1999-01-01

    Presents a study on the meanings of average and variation displayed by pediatric nurses. Traces how these meanings shape and are shaped by nurses' interpretations of trends in patient and population data. Suggests a theoretical framework for making sense of the data that compares and contrasts nurses' epistemology with that of official…

  7. 26 CFR 1.1301-1 - Averaging of farm income.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... January 1, 2003, rental income based on a share of a tenant's production determined under an unwritten... 26 Internal Revenue 11 2010-04-01 2010-04-01 true Averaging of farm income. 1.1301-1 Section 1.1301-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME...

  8. Reducing Noise by Repetition: Introduction to Signal Averaging

    ERIC Educational Resources Information Center

    Hassan, Umer; Anwar, Muhammad Sabieh

    2010-01-01

    This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
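
    The core effect the paper teaches, that averaging N repetitions of a repeatable signal reduces uncorrelated noise by roughly sqrt(N), can be sketched as follows; the sine signal and Gaussian noise level are assumptions for illustration:

```python
import math
import random

def averaged_signal(signal, n_trials, noise_sd, seed=0):
    """Average n_trials noisy repetitions of the same underlying signal."""
    rng = random.Random(seed)
    acc = [0.0] * len(signal)
    for _ in range(n_trials):
        for i, s in enumerate(signal):
            acc[i] += s + rng.gauss(0.0, noise_sd)
    return [a / n_trials for a in acc]

def rms_error(est, truth):
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth))

truth = [math.sin(2 * math.pi * i / 100) for i in range(100)]
err_1 = rms_error(averaged_signal(truth, 1, 1.0), truth)
err_100 = rms_error(averaged_signal(truth, 100, 1.0), truth)
# Averaging 100 repetitions cuts the residual noise by roughly sqrt(100) = 10x.
```

    The same mechanism underlies the ERP and evoked-potential averaging used in the physiological measurements the paper describes.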

  9. 40 CFR 63.1332 - Emissions averaging provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... Wastewater streams treated in biological treatment units cannot be used to generate credits. These two types... 40 Protection of Environment 12 2012-07-01 2011-07-01 true Emissions averaging provisions. 63.1332... based on either organic HAP or TOC. (3) For the purposes of these provisions, whenever Method 18, 40...

  10. 40 CFR 63.652 - Emissions averaging provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 10 2010-07-01 2010-07-01 false Emissions averaging provisions. 63.652 Section 63.652 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous...

  11. 42 CFR 423.279 - National average monthly bid amount.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 3 2013-10-01 2013-10-01 false National average monthly bid amount. 423.279 Section 423.279 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICARE PROGRAM (CONTINUED) VOLUNTARY MEDICARE PRESCRIPTION DRUG BENEFIT Submission of Bids and Monthly Beneficiary Premiums;...

  12. Fuel optimum low-thrust elliptic transfer using numerical averaging

    NASA Astrophysics Data System (ADS)

    Tarzi, Zahi; Speyer, Jason; Wirz, Richard

    2013-05-01

    Low-thrust electric propulsion is increasingly being used for spacecraft missions primarily due to its high propellant efficiency. As a result, a simple and fast method for low-thrust trajectory optimization is of great value for preliminary mission planning. However, few low-thrust trajectory tools are appropriate for preliminary mission design studies. The method presented in this paper provides quick and accurate solutions for a wide range of transfers by using numerical orbital averaging to improve solution convergence and include orbital perturbations. Thus, preliminary trajectories can be obtained for transfers which involve many revolutions about the primary body. This method considers minimum fuel transfers using first-order averaging to obtain the fuel optimum rates of change of the equinoctial orbital elements in terms of each other and the Lagrange multipliers. Constraints on thrust and power, as well as minimum periapsis, are implemented and the equations are averaged numerically using a Gaussian quadrature. The use of numerical averaging allows for more complex orbital perturbations to be added in the future without great difficulty. The effects of zonal gravity harmonics, solar radiation pressure, and thrust limitations due to shadowing are included in this study. The solution to a transfer which minimizes the square of the thrust magnitude is used as a preliminary guess for the minimum fuel problem, thus allowing for faster convergence to a wider range of problems. Results from this model are shown to provide a reduction in propellant mass required over previous minimum fuel solutions.
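
    The numerical-averaging step can be sketched independently of the optimal-control equations: the secular rate of an orbital element is the average of its osculating rate over one revolution, evaluated here with Gauss-Legendre quadrature. The integrand below is a toy function with a known average, not an equinoctial-element rate:

```python
import math
import numpy as np

def revolution_average(f, n=16):
    """(1/2pi) * integral of f over [0, 2pi), via n-point Gauss-Legendre."""
    # leggauss gives nodes/weights on [-1, 1]; map the interval to [0, 2pi].
    nodes, weights = np.polynomial.legendre.leggauss(n)
    theta = math.pi * (nodes + 1.0)                 # [-1, 1] -> [0, 2pi]
    # (1/2pi) * pi * sum(w * f) = 0.5 * sum(w * f)
    return 0.5 * sum(w * f(t) for w, t in zip(weights, theta))

# Sanity check against a rate with a known secular average:
avg = revolution_average(lambda th: math.cos(th) ** 2)  # exact average is 1/2
```

    In the actual method the integrand would be each fuel-optimal element rate, and the node count trades accuracy against speed for the many-revolution propagation.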

  13. Pollutant roses for daily averaged ambient air pollutant concentrations

    NASA Astrophysics Data System (ADS)

    Cosemans, Guido; Kretzschmar, Jan; Mensink, Clemens

    Pollutant roses are indispensable tools to identify unknown (fugitive) sources of heavy metals at industrial sites whose current impact exceeds the target values imposed for the year 2012 by the European Air Quality Daughter Directive 2004/207/EC. As most of the measured concentrations of heavy metals in ambient air are daily averaged values, a method to obtain high quality pollutant roses from such data is of practical interest for cost-effective air quality management. A computational scheme is presented to obtain, from daily averaged concentrations, 10° angular resolution pollutant roses, called PRP roses, that are in many aspects comparable to pollutant roses made with half-hourly concentrations. The computational scheme is a ridge regression, based on three building blocks: ordinary least squares regression; outlier handling by weighting based on expected values of the higher percentiles in a lognormal distribution; weighted averages whereby observed values, raised to a power m, and daily wind rose frequencies are used as weights. Distance measures are used to find the optimal value for m. The performance of the computational scheme is illustrated by comparing the pollutant roses, constructed with measured half-hourly SO2 data for 10 monitoring sites in the Antwerp harbour, with the PRP roses made with the corresponding daily averaged SO2 concentrations. A miniature dataset, made up of 7 daily concentrations and of half-hourly wind directions assigned to 4 wind sectors, is used to illustrate the formulas and their results.
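
    A hedged sketch of the first building block only (least squares with a ridge term; the outlier weighting and power-m weighting of the full scheme are omitted, and the sector model, wind frequencies, and concentrations below are synthetic): each daily mean is modeled as a wind-frequency-weighted mix of unknown per-sector concentrations.

```python
import numpy as np

def sector_concentrations(F, c_daily, ridge=1e-6):
    """Recover per-wind-sector concentrations c from daily means c_daily ~ F @ c.

    F[d, s] = fraction of day d with wind from sector s (rows sum to 1).
    The ridge term stabilizes the solve when some sectors are rarely visited.
    """
    A = F.T @ F + ridge * np.eye(F.shape[1])
    return np.linalg.solve(A, F.T @ c_daily)

rng = np.random.default_rng(0)
true_c = np.array([5.0, 20.0, 8.0, 60.0])      # hypothetical sector levels
F = rng.dirichlet(np.ones(4), size=30)         # 30 days of wind-sector fractions
c_daily = F @ true_c                           # noise-free daily averages
recovered = sector_concentrations(F, c_daily)  # approximately true_c
```

    The high-concentration sector (60 here) plays the role of the fugitive source that the pollutant rose is meant to expose.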

  14. AVERAGE ANNUAL SOLAR UV DOSE OF THE CONTINENTAL US CITIZEN

    EPA Science Inventory

    The average annual solar UV dose of US citizens is not known, but is required for relative risk assessments of skin cancer from UV-emitting devices. We solved this problem using a novel approach. The EPA's "National Human Activity Pattern Survey" recorded the daily ou...

  15. All above Average: Secondary School Improvement as an Impossible Endeavour

    ERIC Educational Resources Information Center

    Taylor, Phil

    2015-01-01

    This article argues that secondary school improvement in England, when viewed as a system, has become an impossible endeavour. This arises from the conflation of improvement with effectiveness, judged by a narrow range of outcome measures and driven by demands that all schools should somehow be above average. The expectation of comparable…

  16. Bounding quantum gate error rate based on reported average fidelity

    NASA Astrophysics Data System (ADS)

    Sanders, Yuval R.; Wallman, Joel J.; Sanders, Barry C.

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates.
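
    One exact ingredient behind such bounds is the standard identity relating average gate fidelity to process (entanglement) fidelity, F_pro = ((d + 1) F_avg - 1) / d, for a d-dimensional system. The sketch below applies it to the abstract's headline numbers; it is a conversion between fidelity measures, not the paper's tight diamond-norm error-rate bound.

```python
def process_infidelity(f_avg, d):
    """Exact conversion from average gate fidelity to process infidelity.

    Uses the standard identity F_pro = ((d + 1) * F_avg - 1) / d, so that
    1 - F_pro = (d + 1) * (1 - F_avg) / d.  This is one ingredient of
    error-rate bounds, not the full diamond-norm bound from the paper.
    """
    return (d + 1) * (1.0 - f_avg) / d

# Headline numbers from the abstract: 99.9% single-qubit, 99% two-qubit fidelity.
r1 = process_infidelity(0.999, d=2)  # single qubit, d = 2
r2 = process_infidelity(0.99, d=4)   # two qubits,  d = 4
```

    Even this exact conversion shows the reported fidelities understate the infidelity by a dimension-dependent factor of (d + 1)/d, before any worst-case (diamond-norm) correction is applied.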

  17. Bounding quantum gate error rate based on reported average fidelity

    NASA Astrophysics Data System (ADS)

    Sanders, Yuval; Wallman, Joel; Sanders, Barry

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli-distance as a measure of this deviation, and we show that knowledge of the Pauli-distance enables tighter estimates of the error rate of quantum gates.

  18. Evaluating methods for constructing average high-density electrode positions.

    PubMed

    Richards, John E; Boswell, Corey; Stevens, Michael; Vendemia, Jennifer M C

    2015-01-01

    Accurate analysis of scalp-recorded electrical activity requires the identification of electrode locations in 3D space. For example, source analysis of EEG/ERP (electroencephalogram, EEG; event-related-potentials, ERP) with realistic head models requires the identification of electrode locations on the head model derived from structural MRI recordings. Electrode systems must cover the entire scalp in sufficient density to discriminate EEG activity on the scalp and to complete accurate source analysis. The current study compares techniques for averaging electrode locations from 86 participants with the 128 channel "Geodesic Sensor Net" (GSN; EGI, Inc.), 38 participants with the 128 channel "Hydrocel Geodesic Sensor Net" (HGSN; EGI, Inc.), and 174 participants with the 81 channels in the 10-10 configurations. A point-set registration between the participants and an average MRI template resulted in an average configuration showing small standard errors, which could be transformed back accurately into the participants' original electrode space. Average electrode locations are available for the GSN (86 participants), Hydrocel-GSN (38 participants), and 10-10 and 10-5 systems (174 participants). PMID:25234713

  19. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... kilometers (10 and 83 miles) away from the station. At least 50 evenly spaced data points for each radial... second point or better topographic data file. The file must be identified. If a 30 second point data file...; otherwise, the nearest point may be used. In cases of dispute, average terrain elevation determinations...

  20. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... kilometers (10 and 83 miles) away from the station. At least 50 evenly spaced data points for each radial... second point or better topographic data file. The file must be identified. If a 30 second point data file...; otherwise, the nearest point may be used. In cases of dispute, average terrain elevation determinations...

  1. Designing a Response Scale to Improve Average Group Response Reliability

    ERIC Educational Resources Information Center

    Davies, Randall

    2008-01-01

    Creating surveys is a common task in evaluation research; however, designing a survey instrument to gather average group response data that can be interpreted in a meaningful way over time can be challenging. When surveying groups of people for the purpose of longitudinal analysis, the reliability of the result is often determined by the response…

  2. Punching Wholes into Parts, or Beating the Percentile Averages.

    ERIC Educational Resources Information Center

    Carwile, Nancy R.

    1990-01-01

    Presents a facetious, ingenious resolution to the percentile dilemma concerning above- and below-average test scores. If schools enrolled the same number of pigs as students and tested both groups, the pigs would fill up the bottom half and all children would rank in the top 50 percent. However, some wrinkles need to be ironed out! (MLH)

  3. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Computation of average terrain elevation. 1.959 Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.959...

  4. Development of Cognitive Averaging: When Light and Light Make Dark.

    ERIC Educational Resources Information Center

    Jager, Stephan; Wilkening, Friedrich

    2001-01-01

    Two experiments examined developmental changes in reasoning about intensive quantities--predicting mixture intensity of pairs of liquids with different intensities of red color. Results showed that cognitive averaging in this domain developed late and slowly. Predominating up to 12 years was an extensivity bias, a strong tendency to use rules that…

  5. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements.

    PubMed

    Hourdakis, C J

    2011-04-01

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, Ū(P), the average, Ū, the effective, U(eff), or the maximum peak, U(P), tube voltage. This work proposed a method for determining the PPV from measurements with a kV-meter that measures the average, Ū, or the average peak, Ū(p), voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak k(PPV,kVp) and the average k(PPV,Uav) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference PPV values and those calculated according to the proposed method were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from Ū(p) and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements. PMID:21403184
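
    The distinct kV-meter quantities the abstract contrasts can be illustrated on a synthetic waveform. The sketch below computes only the average, the maximum peak, and the percent ripple; it does not implement the IEC air-kerma-weighted PPV definition itself, and the 100 kV, 10%-ripple waveform is a hypothetical example.

```python
import math

def waveform_stats(samples):
    """Average, maximum peak, and percent ripple of a sampled voltage waveform."""
    u_max, u_min = max(samples), min(samples)
    u_avg = sum(samples) / len(samples)
    ripple = 100.0 * (u_max - u_min) / u_max
    return u_avg, u_max, ripple

# Hypothetical 100 kV supply with 10% sinusoidal ripple, one full period sampled.
u0, r = 100.0, 0.10
wave = [u0 * (1.0 - r / 2 + (r / 2) * math.cos(2 * math.pi * t / 1000))
        for t in range(1000)]
u_avg, u_max, ripple = waveform_stats(wave)  # u_avg = u0 * (1 - r/2) = 95 kV
```

    The gap between Ū (95 kV here) and U(P) (100 kV) is exactly what the ripple-dependent conversion factors in the paper are fitted to bridge.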

  6. The Average Quality Factors by TEPC for Charged Particles

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Nikjoo, Hooshang; Cucinotta, Francis A.

    2004-01-01

    The quality factor used in radiation protection is defined as a function of LET, Q(sub ave)(LET). However, tissue equivalent proportional counters (TEPC) measure the average quality factors as a function of lineal energy (y), Q(sub ave)(y). A model of the TEPC response for charged particles considers energy deposition as a function of impact parameter from the ion's path to the volume, and describes the escape of energy out of the sensitive volume by delta-rays and the entry of delta-rays from the high-density wall into the low-density gas volume. A common goal for operational detectors is to measure the average radiation quality to within an accuracy of 25%. Using our TEPC response model and the NASA space radiation transport model, we show that this accuracy is obtained by a properly calibrated TEPC. However, when the individual contributions from trapped protons and galactic cosmic rays (GCR) are considered, the average quality factor obtained by TEPC is overestimated for trapped protons and underestimated for GCR by about 30%, i.e., a compensating error. Using TEPC's values for trapped protons for Q(sub ave)(y), we obtained average quality factors in the 2.07-2.32 range. However, Q(sub ave)(LET) ranges from 1.5-1.65 as spacecraft shielding depth increases. The average quality factors for trapped protons on STS-89 demonstrate that the model of the TEPC response is in good agreement with flight TEPC data for Q(sub ave)(y), and thus Q(sub ave)(LET) for trapped protons is overestimated by TEPC. Preliminary comparisons for the complete GCR spectra show that Q(sub ave)(LET) for GCR is approximately 3.2-4.1, while TEPC measures 2.9-3.4 for Q(sub ave)(y), indicating that Q(sub ave)(LET) for GCR is underestimated by TEPC.
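
    The Q(L) relationship behind Q(sub ave)(LET) is, in current radiation-protection practice, the piecewise function of ICRP Publication 60. The abstract does not spell it out, so the sketch below assumes that convention, together with a toy two-component LET spectrum:

```python
import math

def icrp60_quality_factor(let):
    """ICRP Publication 60 quality factor Q as a function of LET (keV/um)."""
    if let < 10.0:
        return 1.0
    if let <= 100.0:
        return 0.32 * let - 2.2
    return 300.0 / math.sqrt(let)

def dose_average_q(doses, lets):
    """Dose-averaged quality factor over a discrete LET spectrum."""
    total = sum(doses)
    return sum(d * icrp60_quality_factor(l)
               for d, l in zip(doses, lets)) / total

# Toy example: equal doses at 5 and 400 keV/um give Q_ave = (1 + 15) / 2 = 8.
q_ave = dose_average_q([1.0, 1.0], [5.0, 400.0])
```

    A TEPC approximates this dose-weighted average with lineal energy y substituted for LET, which is the source of the systematic offsets the abstract reports.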

  7. High-average-power diode-pumped Yb: YAG lasers

    SciTech Connect

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-10-01

    A scalable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high average power Yb:YAG lasers that utilize a rod configured gain element. Previously, this rod configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High beam quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual rod configuration consisting of two, 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) Hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes. (2) Compound laser rods with flanged-nonabsorbing-endcaps fabricated by diffusion bonding. (3) Techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods.

  8. Discrete models of fluids: spatial averaging, closure and model reduction

    SciTech Connect

    Panchenko, Alexander; Tartakovsky, Alexandre M.; Cooper, Kevin

    2014-04-15

    We consider semidiscrete ODE models of single-phase fluids and two-fluid mixtures. In the presence of multiple fine-scale heterogeneities, the size of these ODE systems can be very large. Spatial averaging is then a useful tool for reducing computational complexity of the problem. The averages satisfy exact balance equations of mass, momentum, and energy. These equations do not form a satisfactory continuum model because evaluation of stress and heat flux requires solving the underlying ODEs. To produce continuum equations that can be simulated without resolving microscale dynamics, we recently proposed a closure method based on the use of regularized deconvolution. Here we continue the investigation of deconvolution closure with the long term objective of developing consistent computational upscaling for multiphase particle methods. The structure of the fine-scale particle solvers is reminiscent of molecular dynamics. For this reason we use nonlinear averaging introduced for atomistic systems by Noll, Hardy, and Murdoch-Bedeaux. We also consider a simpler linear averaging originally developed in large eddy simulation of turbulence. We present several simple but representative examples of spatially averaged ODEs, where the closure error can be analyzed. Based on this analysis we suggest a general strategy for reducing the relative error of approximate closure. For problems with periodic highly oscillatory material parameters we propose a spectral boosting technique that augments the standard deconvolution and helps to correctly account for dispersion effects. We also conduct several numerical experiments, one of which is a complete mesoscale simulation of a stratified two-fluid flow in a channel. In this simulation, the operation count per coarse time step scales sublinearly with the number of particles.
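
    The Hardy-type averages mentioned replace point particles with kernel-weighted fields. A minimal 1D sketch of that idea, assuming a Gaussian kernel and hypothetical particle data (the actual Hardy/Murdoch-Bedeaux framework also defines averaged stresses and fluxes, omitted here):

```python
import math

def hardy_density(x, positions, masses, h=0.5):
    """Kernel-averaged 1D density rho(x) = sum_i m_i * psi(x - x_i)."""
    norm = 1.0 / (h * math.sqrt(2.0 * math.pi))  # unit-integral Gaussian kernel
    return sum(m * norm * math.exp(-0.5 * ((x - xi) / h) ** 2)
               for xi, m in zip(positions, masses))

# Mass conservation check: integrating the averaged field recovers sum(masses),
# the exact mass balance that the averaged equations satisfy by construction.
positions = [0.0, 1.0, 2.5]   # hypothetical particle positions
masses = [1.0, 2.0, 0.5]      # hypothetical particle masses
dx = 0.01
total_mass = sum(hardy_density(-10.0 + k * dx, positions, masses) * dx
                 for k in range(2500))  # Riemann sum over [-10, 15)
```

    Closure is then the separate problem of expressing the averaged stress and heat flux in terms of such fields alone, which is where the paper's deconvolution method enters.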

  9. High average power diode pumped solid state lasers for CALIOPE

    SciTech Connect

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers.

  10. Condition monitoring of gearboxes using synchronously averaged electric motor signals

    NASA Astrophysics Data System (ADS)

    Ottewill, J. R.; Orkisz, M.

    2013-07-01

    Due to their prevalence in rotating machinery, the condition monitoring of gearboxes is extremely important in the minimization of potentially dangerous and expensive failures. Traditionally, gearbox condition monitoring has been conducted using measurements obtained from casing-mounted vibration transducers such as accelerometers. A well-established technique for analyzing such signals is the synchronous signal average, where vibration signals are synchronized to a measured angular position and then averaged from rotation to rotation. Driven, in part, by improvements in control methodologies based upon methods of estimating rotor speed and torque, induction machines are used increasingly in industry to drive rotating machinery. As a result, attempts have been made to diagnose defects using measured terminal currents and voltages. In this paper, the application of the synchronous signal averaging methodology to electric drive signals, by synchronizing stator current signals with a shaft position estimated from current and voltage measurements is proposed. Initially, a test-rig is introduced based on an induction motor driving a two-stage reduction gearbox which is loaded by a DC motor. It is shown that a defect seeded into the gearbox may be located using signals acquired from casing-mounted accelerometers and shaft mounted encoders. Using simple models of an induction motor and a gearbox, it is shown that it should be possible to observe gearbox defects in the measured stator current signal. A robust method of extracting the average speed of a machine from the current frequency spectrum, based on the location of sidebands of the power supply frequency due to rotor eccentricity, is presented. The synchronous signal averaging method is applied to the resulting estimations of rotor position and torsional vibration. Experimental results show that the method is extremely adept at locating gear tooth defects. Further results, considering different loads and different
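The synchronous signal average described above is straightforward to sketch. The following is a minimal illustration (not the authors' code; the signal, sampling setup, and function name are invented for the example): each rotation is resampled onto a common angular grid and the rotations are averaged, so components synchronous with the shaft survive while asynchronous noise averages away.

```python
import numpy as np

def synchronous_average(signal, angle, n_bins=256):
    """Synchronous signal average: resample each complete shaft rotation
    onto a common angular grid and average over rotations.  `angle` is the
    unwrapped shaft angle in radians at each sample instant."""
    revs = angle / (2 * np.pi)                 # angle in units of rotations
    n_revs = int(np.floor(revs[-1]))           # complete rotations available
    grid = np.linspace(0.0, 1.0, n_bins, endpoint=False)
    cycles = [np.interp(k + grid, revs, signal) for k in range(n_revs)]
    return np.mean(cycles, axis=0)

# Toy demonstration: a shaft-synchronous waveform buried in broadband noise.
rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2 * np.pi * 200, 200 * 256, endpoint=False)
x = np.sin(theta) + 0.1 * np.sin(5 * theta) + rng.normal(0.0, 1.0, theta.size)
avg = synchronous_average(x, theta)            # noise suppressed ~ 1/sqrt(200)
```

The same averaging applies unchanged whether `signal` is an accelerometer trace or, as proposed in the paper, a stator current synchronized to an estimated rotor position.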

  11. Experimental Investigation of the Differences Between Reynolds-Averaged and Favre-Averaged Velocity in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Panda, J.; Seasholtz, R. G.

    2005-01-01

    Recent advancement in the molecular Rayleigh scattering based technique allowed for simultaneous measurement of velocity and density fluctuations with high sampling rates. The technique was used to investigate unheated high subsonic and supersonic fully expanded free jets in the Mach number range of 0.8 to 1.8. The difference between the Favre averaged and Reynolds averaged axial velocity and axial component of the turbulent kinetic energy is found to be small. Estimates based on the Morkovin's "Strong Reynolds Analogy" were found to provide lower values of turbulent density fluctuations than the measured data.

  12. 40 CFR 62.15210 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method... dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use EPA Reference...

  13. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... part, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in... potential sulfur dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use...

  14. 40 CFR 62.15210 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method... dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use EPA Reference...

  15. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... part, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in... potential sulfur dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use...

  16. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... part, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in... potential sulfur dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use...

  17. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... part, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in... potential sulfur dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use...

  18. 40 CFR 62.15210 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method... dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use EPA Reference...

  19. 40 CFR 62.15210 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method... dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use EPA Reference...

  20. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... excluded from both the calculation of the fleet average standard for a manufacturer under 40 CFR 86.1818-12... conditioning efficiency credits for the applicable vehicle category, in megagrams, from 40 CFR 86.1868-12(c... vehicle category, in megagrams, from 40 CFR 86.1869-12(e), and rounded to the nearest whole number;...

  1. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... excluded from both the calculation of the fleet average standard for a manufacturer under 40 CFR 86.1818-12... megagrams, from 40 CFR 86.1868-12(c), and rounded to the nearest whole number; VLM = vehicle lifetime miles... technology credits for the applicable vehicle category, in megagrams, from 40 CFR 86.1869-12(e), and...

  2. Inferring average generation via division-linked labeling.

    PubMed

    Weber, Tom S; Perié, Leïla; Duffy, Ken R

    2016-08-01

    For proliferating cells subject to both division and death, how can one estimate the average generation number of the living population without continuous observation or a division-diluting dye? In this paper we provide a method for cell systems such that at each division there is an unlikely, heritable one-way label change that has no impact other than to serve as a distinguishing marker. If the probability of label change per cell generation can be determined and the proportion of labeled cells at a given time point can be measured, we establish that the average generation number of living cells can be estimated. Crucially, the estimator does not depend on knowledge of the statistics of cell cycle, death rates or total cell numbers. We explore the estimator's features through comparison with physiologically parameterized stochastic simulations and extrapolations from published data, using it to suggest new experimental designs. PMID:26733310
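The logic of the estimator can be illustrated with a toy simulation (my own sketch, not the paper's code; it assumes synchronous divisions and no death, unlike the general setting the paper treats): with a one-way label switch of probability p per division, a cell that has divided g times is still unlabeled with probability (1 − p)^g, so the generation number can be read off from the unlabeled fraction.

```python
import numpy as np

def estimate_generation(unlabeled_fraction, p):
    """Invert (1 - p)**g = unlabeled fraction for the generation number g."""
    return np.log(unlabeled_fraction) / np.log(1.0 - p)

rng = np.random.default_rng(1)
p, generations, n_cells = 0.05, 10, 200_000
# Each cell experiences `generations` independent chances to switch label.
switched = rng.random((n_cells, generations)) < p
unlabeled = ~switched.any(axis=1)
f_unlabeled = unlabeled.mean()                 # close to 0.95**10
g_hat = estimate_generation(f_unlabeled, p)    # recovers ~10 generations
```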

  3. EVENT BY EVENT AVERAGES IN HEAVY ION COLLISIONS.

    SciTech Connect

    TANNENBAUM,M.J.; MITCHELL,J.T.

    2002-03-16

NA49 (Pb+Pb, CERN), PHENIX and STAR (Au+Au, BNL) have presented measurements of the event-by-event average p_T (denoted M_pT) in relativistic heavy ion collisions. Event-by-event averages are most useful to resolve the case of two or several classes of events with e.g. different temperature parameters. The distribution of M_pT is discussed, with emphasis on the case of statistically independent emission according to the semi-inclusive p_T and charged multiplicity distributions. Deviations from statistically independent emission are quantified in terms of a simple two component model, with the individual components being Gamma distributions.

  4. Refined similarity hypothesis using three-dimensional local averages

    NASA Astrophysics Data System (ADS)

    Iyer, Kartik P.; Sreenivasan, Katepalli R.; Yeung, P. K.

    2015-12-01

The refined similarity hypothesis of Kolmogorov, regarded as an important ingredient of intermittent turbulence, has been tested in the past using one-dimensional data and plausible surrogates of energy dissipation. We employ data from direct numerical simulations, at the microscale Reynolds number Rλ ∼ 650, on a periodic box of 4096^3 grid points to test the hypothesis using three-dimensional averages. In particular, we study the small-scale properties of the stochastic variable V = Δu(r)/(r εr)^(1/3), where Δu(r) is the longitudinal velocity increment and εr is the dissipation rate averaged over a three-dimensional volume of linear size r. We show that V is universal in the inertial subrange. In the dissipation range, the statistics of V are shown to depend solely on a local Reynolds number.

  5. Refined similarity hypothesis using three-dimensional local averages.

    PubMed

    Iyer, Kartik P; Sreenivasan, Katepalli R; Yeung, P K

    2015-12-01

The refined similarity hypothesis of Kolmogorov, regarded as an important ingredient of intermittent turbulence, has been tested in the past using one-dimensional data and plausible surrogates of energy dissipation. We employ data from direct numerical simulations, at the microscale Reynolds number R(λ) ∼ 650, on a periodic box of 4096^3 grid points to test the hypothesis using three-dimensional averages. In particular, we study the small-scale properties of the stochastic variable V = Δu(r)/(rε(r))^(1/3), where Δu(r) is the longitudinal velocity increment and ε(r) is the dissipation rate averaged over a three-dimensional volume of linear size r. We show that V is universal in the inertial subrange. In the dissipation range, the statistics of V are shown to depend solely on a local Reynolds number. PMID:26764821
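The three-dimensional local averaging at the heart of the test can be sketched on a small synthetic periodic field (the field, grid size, and function names are invented for illustration; the paper uses 4096^3 DNS data): box-average the dissipation over cubes of side r and form V from the velocity increment over the same separation.

```python
import numpy as np

def box_average(field, r):
    """Average `field` over periodic windows of r grid points along each
    axis (a separable running mean on the periodic box)."""
    out = field.astype(float)
    for axis in range(field.ndim):
        out = sum(np.roll(out, s, axis=axis) for s in range(r)) / r
    return out

n, r = 32, 4
rng = np.random.default_rng(2)
u = rng.normal(size=(n, n, n))                            # stand-in velocity field
eps = rng.lognormal(mean=0.0, sigma=0.5, size=(n, n, n))  # stand-in dissipation
eps_r = box_average(eps, r)                # locally averaged dissipation rate
du = np.roll(u, -r, axis=0) - u            # longitudinal increment over scale r
V = du / (r * eps_r) ** (1.0 / 3.0)        # refined-similarity variable
```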

  6. Detrending moving average algorithm: Frequency response and scaling performances.

    PubMed

    Carbone, Anna; Kiyono, Ken

    2016-06-01

    The Detrending Moving Average (DMA) algorithm has been widely used in its several variants for characterizing long-range correlations of random signals and sets (one-dimensional sequences or high-dimensional arrays) over either time or space. In this paper, mainly based on analytical arguments, the scaling performances of the centered DMA, including higher-order ones, are investigated by means of a continuous time approximation and a frequency response approach. Our results are also confirmed by numerical tests. The study is carried out for higher-order DMA operating with moving average polynomials of different degree. In particular, detrending power degree, frequency response, asymptotic scaling, upper limit of the detectable scaling exponent, and finite scale range behavior will be discussed. PMID:27415389
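A minimal version of the lowest-order centered DMA can be sketched as follows (an illustrative implementation, not the authors' code; higher-order variants replace the moving average with a moving polynomial fit):

```python
import numpy as np

def dma_fluctuation(x, window_sizes):
    """Centered detrending moving average (lowest order): subtract a
    centered moving average of length n from the integrated signal and
    return the root-mean-square fluctuation sigma(n) for each window n."""
    y = np.cumsum(x - np.mean(x))            # integrated (profile) signal
    sigmas = []
    for n in window_sizes:
        half = n // 2
        trend = np.convolve(y, np.ones(n) / n, mode="same")
        resid = (y - trend)[half : len(y) - half]   # discard edge effects
        sigmas.append(np.sqrt(np.mean(resid ** 2)))
    return np.array(sigmas)

# White noise has uncorrelated profile increments, so sigma(n) ~ n^0.5.
rng = np.random.default_rng(3)
x = rng.normal(size=100_000)
ns = np.array([16, 32, 64, 128, 256])
sig = dma_fluctuation(x, ns)
slope = np.polyfit(np.log(ns), np.log(sig), 1)[0]   # scaling exponent, ~0.5
```

The log-log slope of sigma(n) versus n is the scaling exponent whose detectable range and frequency response the paper analyzes.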

  7. Doubly robust estimation of the local average treatment effect curve

    PubMed Central

    Ogburn, Elizabeth L.; Rotnitzky, Andrea; Robins, James M.

    2014-01-01

    Summary We consider estimation of the causal effect of a binary treatment on an outcome, conditionally on covariates, from observational studies or natural experiments in which there is a binary instrument for treatment. We describe a doubly robust, locally efficient estimator of the parameters indexing a model for the local average treatment effect conditionally on covariates V when randomization of the instrument is only true conditionally on a high dimensional vector of covariates X, possibly bigger than V. We discuss the surprising result that inference is identical to inference for the parameters of a model for an additive treatment effect on the treated conditionally on V that assumes no treatment–instrument interaction. We illustrate our methods with the estimation of the local average effect of participating in 401(k) retirement programs on savings by using data from the US Census Bureau's 1991 Survey of Income and Program Participation. PMID:25663814

  8. Detrending moving average algorithm: Frequency response and scaling performances

    NASA Astrophysics Data System (ADS)

    Carbone, Anna; Kiyono, Ken

    2016-06-01

    The Detrending Moving Average (DMA) algorithm has been widely used in its several variants for characterizing long-range correlations of random signals and sets (one-dimensional sequences or high-dimensional arrays) over either time or space. In this paper, mainly based on analytical arguments, the scaling performances of the centered DMA, including higher-order ones, are investigated by means of a continuous time approximation and a frequency response approach. Our results are also confirmed by numerical tests. The study is carried out for higher-order DMA operating with moving average polynomials of different degree. In particular, detrending power degree, frequency response, asymptotic scaling, upper limit of the detectable scaling exponent, and finite scale range behavior will be discussed.

  9. Partial Averaged Navier-Stokes approach for cavitating flow

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Zhang, Y. N.

    2015-01-01

Partial Averaged Navier-Stokes (PANS) is a numerical approach developed for studying practical engineering problems (e.g. cavitating flow inside hydroturbines) with a reasonable cost and accuracy. One of the advantages of PANS is that it is suitable for any filter width, providing a bridging method from traditional Reynolds-Averaged Navier-Stokes (RANS) to direct numerical simulation through the choice of appropriate parameters. Compared with RANS, the PANS model inherits much of the physics of its parent RANS model but resolves more scales of motion in greater detail, making PANS superior to RANS. As an important step in the PANS approach, one needs to identify appropriate physical filter-width control parameters, e.g. the ratios of unresolved-to-total kinetic energy and dissipation. In the present paper, recent studies of cavitating flow based on the PANS approach are introduced with a focus on the influence of the filter-width control parameters on the simulation results.

  10. Modeling an Application's Theoretical Minimum and Average Transactional Response Times

    SciTech Connect

    Paiz, Mary Rose

    2015-04-01

The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are results of unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
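As a rough illustration of the lower-threshold idea, here is a deliberately simplified stand-in (block minima with an empirical quantile rather than the non-stationary GEV fit the report describes; all names and numbers are invented):

```python
import numpy as np

def lower_threshold(response_times, block=1440, q=0.01):
    """Collect one minimum per block (e.g. per day of per-minute samples)
    and return a low quantile of those block minima as the lower alarm
    threshold for the expected minimum response time."""
    n_blocks = len(response_times) // block
    daily_min = (response_times[: n_blocks * block]
                 .reshape(n_blocks, block)
                 .min(axis=1))
    return np.quantile(daily_min, q)

rng = np.random.default_rng(4)
# 60 synthetic "days" of per-minute transaction times (ms): a hard 50 ms
# floor plus lognormally distributed load-dependent latency.
rt = 50.0 + rng.lognormal(mean=3.0, sigma=0.6, size=60 * 1440)
thr = lower_threshold(rt)   # responses below thr would flag an anomaly
```

A GEV fit to the block minima, as in the report, would extrapolate the threshold beyond the observed days and allow it to vary with covariates.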

  11. Correct averaging in transmission radiography: Analysis of the inverse problem

    NASA Astrophysics Data System (ADS)

    Wagner, Michael; Hampel, Uwe; Bieberle, Martina

    2016-05-01

    Transmission radiometry is frequently used in industrial measurement processes as a means to assess the thickness or composition of a material. A common problem encountered in such applications is the so-called dynamic bias error, which results from averaging beam intensities over time while the material distribution changes. We recently reported on a method to overcome the associated measurement error by solving an inverse problem, which in principle restores the exact average attenuation by considering the Poisson statistics of the underlying particle or photon emission process. In this paper we present a detailed analysis of the inverse problem and its optimal regularized numerical solution. As a result we derive an optimal parameter configuration for the inverse problem.

  12. Estimates of Random Error in Satellite Rainfall Averages

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.

    2003-01-01

    Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.

  13. Microchannel heatsinks for high average power laser diode arrays

    SciTech Connect

    Beach, R.; Benett, B.; Freitas, B.; Ciarlo, D.; Sperry, V.; Comaskey, B.; Emanuel, M.; Solarz, R.; Mundinger, D.

    1992-01-01

Detailed performance results and fabrication techniques for an efficient and low thermal impedance laser diode array heatsink are presented. High duty factor or even CW operation of fully filled laser diode arrays is enabled at high average power. Low thermal impedance is achieved using a liquid coolant and laminar flow through microchannels. The microchannels are fabricated in silicon using a photolithographic pattern definition procedure followed by anisotropic chemical etching. A modular rack-and-stack architecture is adopted for the heatsink design, allowing arbitrarily large two-dimensional arrays to be fabricated and easily maintained. The excellent thermal control of the microchannel cooled heatsinks is ideally suited to pump array requirements for high average power crystalline lasers because of the stringent temperature demands that result from coupling the diode light to the several-nanometer-wide absorption features characteristic of lasing ions in crystals.

  14. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-01-01

We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  15. A K-fold Averaging Cross-validation Procedure

    PubMed Central

    Jung, Yoonsuh; Hu, Jianhua

    2015-01-01

Cross-validation methods have been widely used to facilitate model estimation and variable selection. In this work, we suggest a new K-fold cross-validation procedure to select a candidate ‘optimal’ model from each hold-out fold and average the K candidate ‘optimal’ models to obtain the ultimate model. Due to the averaging effect, the variance of the proposed estimates can be significantly reduced. This new procedure results in more stable and efficient parameter estimation than the classical K-fold cross-validation procedure. In addition, we show the asymptotic equivalence between the proposed and classical cross-validation procedures in the linear regression setting. We also demonstrate the broad applicability of the proposed procedure via two examples of parameter sparsity regularization and quantile smoothing splines modeling. We illustrate the promise of the proposed method through simulations and a real data example.
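The averaging idea can be sketched for ridge regression (an illustrative example, not the authors' implementation; the penalty grid and data are invented): each fold selects its own 'optimal' penalty on its hold-out data, and the K candidate coefficient vectors are then averaged.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression coefficients."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def kfold_averaged_ridge(X, y, lams, K=5, seed=0):
    """K-fold *averaging* CV: for each fold, pick the penalty with the best
    hold-out error, refit on the training part, then average the K candidate
    coefficient vectors (instead of refitting once with a single penalty)."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), K)
    betas = []
    for k in range(K):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        errs = [np.mean((y[test] - X[test] @ ridge_fit(X[train], y[train], lam)) ** 2)
                for lam in lams]
        best = lams[int(np.argmin(errs))]
        betas.append(ridge_fit(X[train], y[train], best))
    return np.mean(betas, axis=0)

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 4))
beta_true = np.array([1.0, -2.0, 0.0, 0.5])
y = X @ beta_true + rng.normal(0.0, 0.5, size=300)
beta_hat = kfold_averaged_ridge(X, y, lams=[0.01, 0.1, 1.0, 10.0])
```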

  16. Thermal effects in high average power optical parametric amplifiers.

    PubMed

    Rothhardt, Jan; Demmler, Stefan; Hädrich, Steffen; Peschel, Thomas; Limpert, Jens; Tünnermann, Andreas

    2013-03-01

    Optical parametric amplifiers (OPAs) have the reputation of being average power scalable due to the instantaneous nature of the parametric process (zero quantum defect). This Letter reveals serious challenges originating from thermal load in the nonlinear crystal caused by absorption. We investigate these thermal effects in high average power OPAs based on beta barium borate. Absorption of both pump and idler waves is identified to contribute significantly to heating of the nonlinear crystal. A temperature increase of up to 148 K with respect to the environment is observed and mechanical tensile stress up to 40 MPa is found, indicating a high risk of crystal fracture under such conditions. By restricting the idler to a wavelength range far from absorption bands and removing the crystal coating we reduce the peak temperature and the resulting temperature gradient significantly. Guidelines for further power scaling of OPAs and other nonlinear devices are given. PMID:23455291

  17. Ampere Average Current Photoinjector and Energy Recovery Linac

    SciTech Connect

    Ilan Ben-Zvi; A. Burrill; R. Calaga; P. Cameron; X. Chang; D. Gassner; H. Hahn; A. Hershcovitch; H.C. Hseuh; P. Johnson; D. Kayran; J. Kewisch; R. Lambiase; Vladimir N. Litvinenko; G. McIntyre; A. Nicoletti; J. Rank; T. Roser; J. Scaduto; K. Smith; T. Srinivasan-Rao; K.-C. Wu; A. Zaltsman; Y. Zhao; H. Bluem; A. Burger; Mike Cole; A. Favale; D. Holmes; John Rathke; Tom Schultheiss; A. Todd; J. Delayen; W. Funk; L. Phillips; Joe Preble

    2004-08-01

High-power Free-Electron Lasers were made possible by advances in superconducting linacs operated in an energy-recovery mode, as demonstrated by the spectacular success of the Jefferson Laboratory IR-Demo. In order to get to much higher power levels, say a fraction of a megawatt average power, many technological barriers are yet to be broken. BNL's Collider-Accelerator Department is pursuing some of these technologies for a different application, that of electron cooling of high-energy hadron beams. I will describe work on CW, high-current and high-brightness electron beams. This will include a description of a superconducting, laser-photocathode RF gun employing a new secondary-emission multiplying cathode and an accelerator cavity, both capable of producing on the order of one ampere of average current.

  18. The B-dot Earth Average Magnetic Field

    NASA Technical Reports Server (NTRS)

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

The average Earth's magnetic field is solved for with complex mathematical models based on a mean-square integral. Depending on the selection of the Earth magnetic model, the average Earth's magnetic field can have different solutions. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and does not depend on the Earth magnetic model; it does, however, depend on the magnetic torquers of the satellite, which are not taken into consideration in the known mathematical models. The solution of this new technique can be implemented so easily that the flight software can be updated during flight, and the control system can use current gains for the magnetic torquers. Finally, this technique is verified and validated using flight data from a satellite that has been in orbit for three years.
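For context, the b-dot control law whose damping behavior the technique exploits can be sketched in a few lines (a generic textbook form, not the paper's flight software; the gain and field values are invented):

```python
import numpy as np

def bdot_dipole(B_now, B_prev, dt, k=1e4):
    """Classic b-dot detumbling law: command a magnetic dipole opposing the
    measured rate of change of the body-frame field, m = -k * dB/dt.
    The resulting torque m x B damps the satellite's angular rate without
    requiring any Earth magnetic field model."""
    return -k * (B_now - B_prev) / dt

# Example: the body-frame field component along x is decreasing, so the
# commanded dipole points along +x to oppose the change.
m = bdot_dipole(np.array([1.0e-5, 0.0, 0.0]),
                np.array([2.0e-5, 0.0, 0.0]), dt=1.0, k=1.0)
```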

  19. A collisional-radiative average atom model for hot plasmas

    SciTech Connect

    Rozsnyai, B.F.

    1996-10-17

A collisional-radiative 'average atom' (AA) model is presented for the calculation of opacities of hot plasmas not in the condition of local thermodynamic equilibrium (LTE). The electron impact and radiative rate constants are calculated using the dipole oscillator strengths of the average atom. A key element of the model is the photon escape probability, which at present is calculated for a semi-infinite slab. The Fermi statistics render the rate equation for the AA level occupancies nonlinear, which requires iteration until the steady-state AA level occupancies are found. Detailed electronic configurations are built into the model after the self-consistent non-LTE AA state is found. The model shows a continuous transition from the non-LTE to the LTE state depending on the optical thickness of the plasma. 22 refs., 13 figs., 1 tab.

  20. High average power supercontinuum generation in a fluoroindate fiber

    NASA Astrophysics Data System (ADS)

    Swiderski, J.; Théberge, F.; Michalska, M.; Mathieu, P.; Vincent, D.

    2014-01-01

We report the first demonstration of Watt-level supercontinuum (SC) generation in a step-index fluoroindate (InF3) fiber pumped by a 1.55 μm fiber master-oscillator power amplifier (MOPA) system. The SC is generated in two steps: first, ∼1 ns amplified laser diode pulses are broken up into soliton-like sub-pulses, leading to an initial spectrum extension, and are then launched into a fluoride fiber to obtain further spectral broadening. The pump MOPA system can operate at a variable repetition frequency, delivering up to 19.2 W of average power at 2 MHz. When the 8-m long InF3 fiber was pumped with 7.54 W at 420 kHz, an output average SC power as high as 2.09 W with a 27.8% slope efficiency was recorded. The achieved SC spectrum spread from 1 to 3.05 μm.

  1. Average diagonal entropy in nonequilibrium isolated quantum systems.

    PubMed

    Giraud, Olivier; García-Mata, Ignacio

    2016-07-01

    The diagonal entropy was introduced as a good entropy candidate especially for isolated quantum systems out of equilibrium. Here we present an analytical calculation of the average diagonal entropy for systems undergoing unitary evolution and an external perturbation in the form of a cyclic quench. We compare our analytical findings with numerical simulations of various quantum systems. Our calculations elucidate various heuristic relations proposed recently in the literature. PMID:27575092

  2. Self-averaging in complex brain neuron signals

    NASA Astrophysics Data System (ADS)

    Bershadskii, A.; Dremencov, E.; Fukayama, D.; Yadid, G.

    2002-12-01

Nonlinear statistical properties of the Ventral Tegmental Area (VTA) of the limbic brain are studied in vivo. The VTA plays a key role in the generation of pleasure and in the development of psychological drug addiction. It is shown that spiking time series of the VTA dopaminergic neurons exhibit long-range correlations with self-averaging behavior. This specific VTA phenomenon has no relation to the VTA rewarding function. This last result reveals the complex role of the VTA in the limbic brain.

  3. Industry-grade high average power femtosecond light source

    NASA Astrophysics Data System (ADS)

    Heckl, O. H.; Weiler, S.; Fleischhaker, R.; Gebs, R.; Budnicki, A.; Wolf, M.; Kleinbauer, J.; Russ, S.; Kumkar, M.; Sutter, D. H.

    2014-03-01

Ultrashort pulses are capable of processing practically any material with a negligible heat-affected zone. Typical pulse durations for industrial applications are situated in the low-picosecond regime. Pulse durations of 5 ps or below are a well-established compromise between the electron-phonon interaction time of most materials and the need for pulses long enough to suppress detrimental effects such as nonlinear interaction with the ablated plasma plume. However, sub-picosecond pulses can further increase the ablation efficiency for certain materials, depending on the available average power, pulse energy and peak fluence. Based on the well-established TruMicro 5000 platform (first release in 2007, third generation in 2011), an Yb:YAG disk amplifier in combination with a broadband seed laser was used to scale the output power of industrial femtosecond light sources: we report on a sub-picosecond amplifier that delivers a maximum of 160 W of average output power at pulse durations of 750 fs. Optimizing the system for maximum peak power allowed for pulse energies of 850 μJ at pulse durations of 650 fs. Based on this study and the approved design of the TruMicro 5000 product series, industry-grade, high-average-power femtosecond light sources are now available for 24/7 operation. Since their release in May 2013 we were able to increase the average output power of the TruMicro 5000 FemtoEdition from 40 W to 80 W while maintaining pulse durations around 800 fs. First studies on metals reveal a drastic increase in processing speed for some micro-processing applications.

  4. Averaging analysis for discrete time and sampled data adaptive systems

    NASA Technical Reports Server (NTRS)

    Fu, Li-Chen; Bai, Er-Wei; Sastry, Shankar S.

    1986-01-01

    Earlier continuous time averaging theorems are extended to the nonlinear discrete time case. These theorems are used for the convergence analysis of discrete time adaptive identification and control systems. Instability theorems are also derived and used to study the robust stability and instability of adaptive control schemes applied to sampled data systems. As a by-product, the effects of sampling on unmodeled dynamics in continuous time systems are also studied.

  5. Average dynamics of a finite set of coupled phase oscillators

    SciTech Connect

    Dima, Germán C.; Mindlin, Gabriel B.

    2014-06-15

    We study the solutions of a dynamical system describing the average activity of an infinitely large set of driven coupled excitable units. We compare their topological organization with that reconstructed from the numerical integration of finite sets. In this way, we present a strategy to establish the pertinence of approximating the dynamics of finite sets of coupled nonlinear units by the dynamics of their infinitely large surrogate.

  6. Separability criteria with angular and Hilbert space averages

    NASA Astrophysics Data System (ADS)

    Fujikawa, Kazuo; Oh, C. H.; Umetsu, Koichiro; Yu, Sixia

    2016-05-01

    The practically useful criteria of separable states ρ = Σ_k w_k ρ_k in d = 2 × 2 are discussed. The equality G(a, b) = 4[⟨ψ|P(a) ⊗ P(b)|ψ⟩ − ⟨ψ|P(a) ⊗ 1|ψ⟩⟨ψ|1 ⊗ P(b)|ψ⟩] = 0 for any two projection operators P(a) and P(b) provides a necessary and sufficient separability criterion in the case of a separable pure state ρ = |ψ⟩⟨ψ|. We propose separability criteria of mixed states, given by Tr ρ{a·σ ⊗ b·σ} = (1/3)C cos φ for two spin-1/2 systems and 4 Tr ρ{P(a) ⊗ P(b)} = 1 + (1/2)C cos 2φ for two-photon systems, respectively, after taking a geometrical angular average of a and b with fixed cos φ = a·b. Here −1 ≤ C ≤ 1, and the difference in the numerical coefficients 1/2 and 1/3 arises from the different rotational properties of the spinor and the transverse photon. If one instead takes an average over the states in the d = 2 Hilbert space, the criterion for two-photon systems is replaced by 4 Tr ρ{P(a) ⊗ P(b)} = 1 + (1/3)C cos 2φ. These separability criteria are shown to be very efficient using the existing experimental data of Aspect et al. in 1981 and Sakai et al. in 2006. When the Werner state is applied to two-photon systems, it is shown that the Hilbert space average can judge its inseparability but the geometrical angular average cannot.
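
    The pure-state criterion is straightforward to check numerically. The sketch below (an illustrative check under the usual conventions, not code from the paper; the states and sampled directions are hypothetical) builds P(a) = (1 + a·σ)/2 from the Pauli matrices and confirms that G(a, b) vanishes for a product state while the singlet gives G(a, b) = −a·b:

```python
import numpy as np

# Pauli matrices and 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def proj(a):
    """P(a) = (1 + a.sigma)/2: projector onto spin-up along unit vector a."""
    return 0.5 * (I2 + a[0] * sx + a[1] * sy + a[2] * sz)

def G(psi, a, b):
    """G(a,b) = 4[<P(a) x P(b)> - <P(a) x 1><1 x P(b)>] in the pure state psi."""
    ev = lambda O: np.vdot(psi, O @ psi).real
    return 4 * (ev(np.kron(proj(a), proj(b)))
                - ev(np.kron(proj(a), I2)) * ev(np.kron(I2, proj(b))))

rng = np.random.default_rng(0)
def unit():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

# Separable product state: G(a,b) = 0 for every pair of directions
psi_prod = np.kron([1, 0], [1, 1]) / np.sqrt(2)
print(max(abs(G(psi_prod, unit(), unit())) for _ in range(100)))  # ~0 (float error)

# Entangled singlet state: G(a,b) = -a.b, i.e. ~ -1 for a = b = z
psi_singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
z = np.array([0.0, 0.0, 1.0])
print(G(psi_singlet, z, z))
```

    For any product state the expectation value of P(a) ⊗ P(b) factorizes, which is exactly why G vanishes there.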

  7. Spatiotemporal averaging of perceived brightness along an apparent motion trajectory.

    PubMed

    Nagai, Takehiro; Beer, R Dirk; Krizay, Erin A; Macleod, Donald I A

    2011-01-01

    Objects are critical functional units for many aspects of visual perception and recognition. Many psychophysical experiments support the concept of an "object file" consisting of characteristics attributed to a single object on the basis of successive views of it, but there has been little evidence that object identity influences apparent brightness and color. In this study, we investigated whether the perceptual identification of successive flashed stimuli as views of a single moving object could affect brightness perception. Our target stimulus was composed of eight wedge-shaped sectors. The sectors were presented successively at different inter-flash intervals along an annular trajectory. At inter-flash intervals of around 100 ms, the impression was of a single moving object undergoing long-range apparent motion. By modulating the luminance between successive views, we measured the perception of luminance modulation along the trajectory of this long-range apparent motion. At the inter-flash intervals where the motion perception was strongest, the luminance difference was perceptually underestimated, and forced-choice luminance discrimination thresholds were elevated. Moreover, under such conditions, it became difficult for the observer to correctly associate or "bind" spatial positions and wedge luminances. These results indicate that the different luminances of wedges that were perceived as a single object were averaged along its apparent motion trajectory. The large spatial step size of our stimulus makes it unlikely that the results could be explained by averaging in a low-level mechanism that has a compact spatiotemporal receptive field (such as V1 and V2 neurons); higher level global motion or object mechanisms must be invoked to account for the averaging effect. The luminance averaging and the ambiguity of position-luminance "binding" suggest that the visual system may evade some of the costs of rapidly computing apparent brightness by adopting the

  8. Non-self-averaging in Ising spin glasses and hyperuniversality

    NASA Astrophysics Data System (ADS)

    Lundow, P. H.; Campbell, I. A.

    2016-01-01

    Ising spin glasses with bimodal and Gaussian near-neighbor interaction distributions are studied through numerical simulations. The non-self-averaging (normalized intersample variance) parameter U_22(T, L) for the spin glass susceptibility [and for higher moments U_nn(T, L)] is reported for dimensions 2, 3, 4, 5, and 7. In each dimension d the non-self-averaging parameters in the paramagnetic regime vary with the sample size L and the correlation length ξ(T, L) as U_nn(β, L) = [K_d ξ(T, L)/L]^d and so follow a renormalization group law due to Aharony and Harris [Phys. Rev. Lett. 77, 3700 (1996), 10.1103/PhysRevLett.77.3700]. Empirically, it is found that the K_d values are independent of d to within the statistics. The maximum values [U_nn(T, L)]_max are almost independent of L in each dimension, and remarkably the estimated thermodynamic-limit critical [U_nn(T, L)]_max peak values are also practically dimension-independent to within the statistics and so are "hyperuniversal." These results show that the form of the spin-spin correlation function distribution at criticality in the large-L limit is independent of dimension within the ISG family. Inspection of published non-self-averaging data for three-dimensional Heisenberg and XY spin glasses in the light of the Ising spin glass non-self-averaging results shows behavior which appears to be compatible with that expected on a chiral-driven ordering interpretation but incompatible with a spin-driven ordering scenario.
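
    The non-self-averaging parameter itself is just the normalized intersample variance of the susceptibility over disorder realizations. A minimal sketch, using synthetic sample-to-sample values rather than actual spin-glass data, of how U_22 is computed and how it decays for a self-averaging observable:

```python
import numpy as np

def u22(chi_samples):
    """Normalized intersample variance U_22 = (<chi^2> - <chi>^2) / <chi>^2
    over disorder samples: zero for a perfectly self-averaging observable."""
    chi = np.asarray(chi_samples, dtype=float)
    return chi.var() / chi.mean() ** 2

rng = np.random.default_rng(1)

# Synthetic "disorder samples": sample-to-sample scatter shrinking with system
# size L, as for a self-averaging quantity; U_22 then decays toward zero with L.
for L in (8, 16, 32):
    chi = rng.normal(loc=1.0, scale=1.0 / L, size=20000)
    print(L, u22(chi))
```

    A genuinely non-self-averaging observable is one for which this ratio does not vanish as L grows, which is what the renormalization-group law above quantifies.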

  9. Light-cone averages in a Swiss-cheese universe

    SciTech Connect

    Marra, Valerio; Kolb, Edward W.; Matarrese, Sabino

    2008-01-15

    We analyze a toy Swiss-cheese cosmological model to study the averaging problem. In our Swiss-cheese model, the cheese is a spatially flat, matter only, Friedmann-Robertson-Walker solution (i.e., the Einstein-de Sitter model), and the holes are constructed from a Lemaitre-Tolman-Bondi solution of Einstein's equations. We study the propagation of photons in the Swiss-cheese model, and find a phenomenological homogeneous model to describe observables. Following a fitting procedure based on light-cone averages, we find that the expansion scalar is unaffected by the inhomogeneities (i.e., the phenomenological homogeneous model is the cheese model). This is because of the spherical symmetry of the model; it is unclear whether the expansion scalar will be affected by nonspherical voids. However, the light-cone average of the density as a function of redshift is affected by inhomogeneities. The effect arises because, as the universe evolves, a photon spends more and more time in the (large) voids than in the (thin) high-density structures. The phenomenological homogeneous model describing the light-cone average of the density is similar to the ΛCDM concordance model. It is interesting that, although the sole source in the Swiss-cheese model is matter, the phenomenological homogeneous model behaves as if it has a dark-energy component. Finally, we study how the equation of state of the phenomenological homogeneous model depends on the size of the inhomogeneities, and find that the equation-of-state parameters w_0 and w_a follow a power-law dependence with a scaling exponent equal to unity. That is, the equation of state depends linearly on the distance the photon travels through voids. We conclude that, within our toy model, the holes must have a present size of about 250 Mpc to be able to mimic the concordance model.

  10. Effects of velocity averaging on the shapes of absorption lines

    NASA Technical Reports Server (NTRS)

    Pickett, H. M.

    1980-01-01

    The velocity averaging of collision cross sections produces non-Lorentz line shapes, even at densities where Doppler broadening is not apparent. The magnitude of the effects will be described using a model in which the collision broadening depends on a simple velocity power law. The effect of the modified profile on experimental measures of linewidth, shift and amplitude will be examined and an improved approximate line shape will be derived.

  11. Targeted Cancer Screening in Average-Risk Individuals.

    PubMed

    Marcus, Pamela M; Freedman, Andrew N; Khoury, Muin J

    2015-11-01

    Targeted cancer screening refers to use of disease risk information to identify those most likely to benefit from screening. Researchers have begun to explore the possibility of refining screening regimens for average-risk individuals using genetic and non-genetic risk factors and previous screening experience. Average-risk individuals are those not known to be at substantially elevated risk, including those without known inherited predisposition, without comorbidities known to increase cancer risk, and without previous diagnosis of cancer or pre-cancer. In this paper, we describe the goals of targeted cancer screening in average-risk individuals, present factors on which cancer screening has been targeted, discuss inclusion of targeting in screening guidelines issued by major U.S. professional organizations, and present evidence to support or question such inclusion. Screening guidelines for average-risk individuals currently target age; smoking (lung cancer only); and, in some instances, race; family history of cancer; and previous negative screening history (cervical cancer only). No guidelines include common genomic polymorphisms. RCTs suggest that targeting certain ages and smoking histories reduces disease-specific cancer mortality, although some guidelines extend ages and smoking histories based on statistical modeling. Guidelines that are based on modestly elevated disease risk typically have either no or little evidence of an ability to effect a mortality benefit. In time, targeted cancer screening is likely to include genetic factors and past screening experience as well as non-genetic factors other than age, smoking, and race, but it is of utmost importance that clinical implementation be evidence-based. PMID:26165196

  12. The role of the harmonic vector average in motion integration

    PubMed Central

    Johnston, Alan; Scarfe, Peter

    2013-01-01

    The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show perceived direction and speed fall close to the IOC direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the HVA. PMID:24155716
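
    One common way to realize a harmonic vector average is through vector inversion: invert each local velocity (v → v/|v|²), take the ordinary vector mean, and invert the result back. The sketch below is an illustration under that assumption with hypothetical numbers, not the authors' code; it shows such an HVA recovering the true global velocity from the component (normal) velocities of a translating contour, while the plain vector average underestimates the speed:

```python
import numpy as np

def component_velocities(V, thetas):
    """Normal (component) velocities seen locally on a contour moving with
    global velocity V: the projection of V onto each unit normal n(theta)."""
    n = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    return (n @ V)[:, None] * n          # v_i = (V . n_i) n_i, the cosine rule

def harmonic_vector_average(vs):
    """HVA via vector inversion: v -> v/|v|^2, ordinary mean, invert back."""
    inv = vs / (vs ** 2).sum(axis=1, keepdims=True)
    m = inv.mean(axis=0)
    return m / (m @ m)

V = np.array([2.0, 0.0])                 # true global velocity (hypothetical)
thetas = np.linspace(-1.2, 1.2, 9)       # unbiased, symmetric sample of normals
vs = component_velocities(V, thetas)

print(harmonic_vector_average(vs))       # ~[2, 0]: correct speed and direction
print(vs.mean(axis=0))                   # vector average: right direction, too slow
```

    The vector average shrinks the speed by the mean of cos²θ over the sampled normals, which is exactly the underestimate the abstract describes.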

  13. The role of the harmonic vector average in motion integration.

    PubMed

    Johnston, Alan; Scarfe, Peter

    2013-01-01

    The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show perceived direction and speed fall close to the IOC direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the HVA. PMID:24155716

  15. Characterizing individual painDETECT symptoms by average pain severity

    PubMed Central

    Sadosky, Alesia; Koduru, Vijaya; Bienen, E Jay; Cappelleri, Joseph C

    2016-01-01

    Background painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results A probability >50% for a better outcome (less severe pain) was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness for mild vs moderate pain) and highest probability was 76.4% (on cold/heat for mild vs severe pain). The pain radiation item was significant (P<0.05) and consistent with pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ. Conclusion painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner. Pain-severity levels can serve as proxies to determine treatment effects, thus indicating probabilities for more favorable outcomes on pain symptoms. PMID:27555789
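
    The ridit-style statistic reported here is the probability that a randomly chosen subject from one severity level has a more favorable (lower) item score than one from another level, with ties counted half. A minimal sketch with hypothetical item scores (not study data):

```python
import numpy as np

def prob_better(group_a, group_b):
    """Probability that a random subject from group_a has a lower (more
    favorable) item score than one from group_b, counting ties as 1/2 --
    the quantity a ridit analysis reports."""
    a = np.asarray(group_a)[:, None]
    b = np.asarray(group_b)[None, :]
    return (a < b).mean() + 0.5 * (a == b).mean()

# Hypothetical 0-5 severity scores on a single painDETECT sensory item
mild = [0, 1, 1, 2, 2, 3]
moderate = [1, 2, 3, 3, 4, 5]
print(prob_better(mild, moderate))   # ~0.806: the mild group usually scores lower
```

    A value above 50%, as observed for every pain symptom item in the study, means the lower average-pain group tends to report the less severe item score.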

  16. Averaging cross section data so we can fit it

    SciTech Connect

    Brown, D.

    2014-10-23

    The 56Fe cross sections we are interested in have a lot of fluctuations. We would like to fit the average of the cross section with cross sections calculated within EMPIRE. EMPIRE, a Hauser-Feshbach-theory-based nuclear reaction code, requires the cross sections to be smoothed using a Lorentzian profile. The plan is to fit EMPIRE to these cross sections in the fast region (say, above 500 keV).
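
    Lorentzian smoothing of a fluctuating cross section amounts to convolving it with a normalized Lorentzian kernel. A minimal sketch on a synthetic cross section (the energy grid, fluctuation model, and width are hypothetical stand-ins, not the 56Fe evaluation):

```python
import numpy as np

def lorentzian_average(E, sigma, gamma):
    """Smooth sigma(E) with a Lorentzian profile of half-width gamma,
    renormalizing the kernel on the finite energy grid."""
    out = np.empty_like(sigma, dtype=float)
    for i, e0 in enumerate(E):
        w = (gamma / np.pi) / ((E - e0) ** 2 + gamma ** 2)  # Lorentzian kernel
        out[i] = (w * sigma).sum() / w.sum()                # weighted average
    return out

E = np.linspace(0.5, 2.0, 2000)              # MeV grid (hypothetical)
sigma = 1.0 + 0.3 * np.sin(200.0 * E)        # synthetic fluctuating cross section
smooth = lorentzian_average(E, sigma, gamma=0.05)

print(sigma.std(), smooth.std())             # fluctuations largely averaged away
```

    Once the fluctuations are suppressed this way, the smooth curve is the quantity a Hauser-Feshbach calculation can sensibly be fit against.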

  17. Local and average behaviour in inhomogeneous superdiffusive media

    NASA Astrophysics Data System (ADS)

    Vezzani, Alessandro; Burioni, Raffaella; Caniparoli, Luca; Lepri, Stefano

    2011-05-01

    We consider a random walk on one-dimensional inhomogeneous graphs built from Cantor fractals. Our study is motivated by recent experiments that demonstrated superdiffusion of light in complex disordered materials, thereby termed Lévy glasses. We introduce a geometric parameter α which plays a role analogous to the exponent characterising the step length distribution in random systems. We study the large-time behaviour of both local and average observables; for the latter case, we distinguish two different types of averages, respectively over the set of all initial sites and over the scattering sites only. The "single long-jump approximation" is applied to analytically determine the different asymptotic behaviour as a function of α and to understand their origin. We also discuss the possibility that the root of the mean square displacement and the characteristic length of the walker distribution may grow according to different power laws; this anomalous behaviour is typical of processes characterised by Lévy statistics and here, in particular, it is shown to influence average quantities.

  18. High Average Power, High Energy Short Pulse Fiber Laser System

    SciTech Connect

    Messerly, M J

    2007-11-13

    Recently, continuous wave fiber laser systems with output powers in excess of 500 W with good beam quality have been demonstrated [1]. High energy, ultrafast, chirped pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of high efficiency, compact, robust systems that are turnkey. Applications such as cutting, drilling and materials processing, front end systems for high energy pulsed lasers (such as petawatts) and laser-based sources of high spatial coherence, high flux x-rays all require high energy short pulses, and two of these three applications also require high average power. The challenge in creating a high energy chirped pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high energy, high average power fiber laser system. This work included exploring designs of large mode area optical fiber amplifiers for high energy systems as well as understanding the issues associated with chirped pulse amplification in optical fiber amplifier systems.

  19. Role of spatial averaging in multicellular gradient sensing

    NASA Astrophysics Data System (ADS)

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-06-01

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation–global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation–global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.

  20. Role of spatial averaging in multicellular gradient sensing.

    PubMed

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-01-01

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations. PMID:27203129

  1. Averaged equilibrium and stability in low-aspect-ratio stellarators

    SciTech Connect

    Garcia, L.; Carreras, B.A.; Dominguez, N.

    1989-01-01

    The MHD equilibrium and stability calculations for stellarators are complex because of the intrinsic three-dimensional (3-D) character of these configurations. The stellarator expansion simplifies the equilibrium calculation by reducing it to a two-dimensional (2-D) problem. The classical stellarator expansion includes terms up to order ε², and the vacuum magnetic field is also included up to this order. For large-aspect-ratio configurations, the results of the stellarator expansion agree well with 3-D numerical equilibrium results. But for low-aspect-ratio configurations, there are significant discrepancies with 3-D equilibrium calculations. The main reason for these discrepancies is the approximation of the vacuum field contributions. This problem can be avoided by applying the average method in a vacuum flux coordinate system. In this way, the exact vacuum magnetic field contribution is included and the results agree well with 3-D equilibrium calculations even for low-aspect-ratio configurations. Using the average method in a vacuum flux coordinate system also permits the accurate calculation of local stability properties with the Mercier criterion. The main improvement is in the accurate calculation of the geodesic curvature term. In this paper, we discuss the application of the average method in flux coordinates to the calculation of the Mercier criterion for low-aspect-ratio stellarator configurations. 12 refs., 3 figs.

  2. Kilowatt average-power laser for subpicosecond materials processing

    NASA Astrophysics Data System (ADS)

    Benson, Stephen V.; Neil, George R.; Bohn, Courtlandt L.; Biallas, George; Douglas, David; Dylla, H. Frederick; Fugitt, Jock; Jordan, Kevin; Krafft, Geoffrey; Merminga, Lia; Preble, Joe; Shinn, Michelle D.; Siggins, Tim; Walker, Richard; Yunn, Byung

    2000-04-01

    The performance of laser pulses in the sub-picosecond range for materials processing is substantially enhanced over similar fluences delivered in longer pulses. Recent advances in the development of solid state lasers have progressed significantly toward the higher average powers potentially useful for many applications. Nonetheless, prospects remain distant for multi-kilowatt sub-picosecond solid state systems such as would be required for industrial scale surface processing of metals and polymers. We present operation results from the world's first kilowatt scale ultra-fast materials processing laser. A Free Electron Laser (FEL) called the IR Demo is operational as a User Facility at Thomas Jefferson National Accelerator Facility in Newport News, Virginia, USA. In its initial operation at high average power it is capable of wavelengths in the 2 to 6 micron range and can produce approximately 0.7 ps pulses in a continuous train at approximately 75 MHz. This pulse length has been shown to be nearly optimal for deposition of energy in materials at the surface. Upgrades in the near future will extend operation beyond 10 kW CW average power in the near IR and kilowatt levels of power at wavelengths from 0.3 to 60 microns. This paper will cover the design and performance of this groundbreaking laser and operational aspects of the User Facility.

  3. Probability density function transformation using seeded localized averaging

    SciTech Connect

    Dimitrov, N. B.; Jordanov, V. T.

    2011-07-01

    Seeded Localized Averaging (SLA) is a spectrum acquisition method that averages pulse-heights in dynamic windows. SLA sharpens peaks in the acquired spectra. This work investigates the transformation of the original probability density function (PDF) in the process of applying the SLA procedure. We derive an analytical expression for the resulting probability density function after an application of SLA. In addition, we prove the following properties: 1) For symmetric distributions, SLA preserves both the mean and symmetry. 2) For uni-modal symmetric distributions, SLA reduces variance, sharpening the distribution's peak. Our results are the first to prove these properties, reinforcing past experimental observations. Specifically, our results imply that in the typical case of a spectral peak with Gaussian PDF the full width at half maximum (FWHM) of the transformed peak becomes narrower even with averaging of only two pulse-heights. While the Gaussian shape is no longer preserved, our results include an analytical expression for the resulting distribution. Examples of the transformation of other PDFs are presented. (authors)
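
    The variance-reduction property is easy to see with plain pair averaging, used here as a simplified stand-in for SLA's dynamic windows (the real SLA seeds its windows on the incoming pulse-heights; the peak centroid and width below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Raw pulse-heights from a Gaussian spectral peak (hypothetical centroid/width)
pulses = rng.normal(loc=662.0, scale=10.0, size=100_000)

# Average pulse-heights in pairs: enough to exhibit the two proven properties,
# namely that the mean is preserved and the peak is sharpened
averaged = pulses.reshape(-1, 2).mean(axis=1)

fwhm = lambda x: 2.3548 * x.std()   # FWHM = 2 sqrt(2 ln 2) sigma for a Gaussian
print(fwhm(pulses), fwhm(averaged))       # narrows by ~sqrt(2) even for n = 2
print(pulses.mean() - averaged.mean())    # ~0: the mean is preserved
```

    Averaging n independent pulse-heights divides the variance by n, so even the minimal n = 2 case narrows the FWHM by about √2, consistent with the result proved in the paper.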

  4. Rolling bearing feature frequency extraction using extreme average envelope decomposition

    NASA Astrophysics Data System (ADS)

    Shi, Kunju; Liu, Shulin; Jiang, Chao; Zhang, Hongli

    2015-12-01

    The vibration signal contains a wealth of sensitive information which reflects the running status of the equipment. Decomposing the signal and extracting the effective information properly is one of the most important steps for precise diagnosis. Traditional adaptive signal decomposition methods, such as EMD, suffer from problems of mode mixing, low decomposition accuracy, etc. Aiming at those problems, the EAED (extreme average envelope decomposition) method is presented based on EMD. The EAED method has three advantages. Firstly, it is completed through a midpoint envelope method rather than using the maximum and minimum envelopes separately as in EMD. Therefore, the average variability of the signal can be described accurately. Secondly, in order to reduce the envelope errors during the signal decomposition, a strategy of replacing two envelopes with one envelope is presented. Thirdly, the similar-triangle principle is utilized to calculate the times of the extreme average points accurately. Thus, the influence of sampling frequency on the calculation results can be significantly reduced. Experimental results show that EAED can separate out single-frequency components from a complex signal gradually. EAED can not only isolate three kinds of typical bearing-fault characteristic vibration frequency components but also requires fewer decomposition layers. EAED replaces quadratic enveloping with a single envelope, ensuring that the fault characteristic frequency can be isolated with fewer decomposition layers. Therefore, the precision of signal decomposition is improved.
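
    The midpoint-envelope idea can be sketched as follows: locate the local extrema, take the midpoints of adjacent maximum/minimum pairs, and interpolate a single average envelope. This is an illustrative simplification, not the published EAED algorithm (which additionally applies the similar-triangle timing correction); the test signal is hypothetical:

```python
import numpy as np

def extreme_average_envelope(t, x):
    """Illustrative EAED-style average envelope: midpoints of adjacent
    maximum/minimum pairs, linearly interpolated onto the full time axis."""
    dx = np.diff(x)
    idx = np.where(np.sign(dx[:-1]) != np.sign(dx[1:]))[0] + 1  # local extrema
    mid_t = 0.5 * (t[idx[:-1]] + t[idx[1:]])                    # midpoint times
    mid_x = 0.5 * (x[idx[:-1]] + x[idx[1:]])                    # midpoint values
    return np.interp(t, mid_t, mid_x)

t = np.linspace(0.0, 1.0, 4000)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)  # fast + slow

env = extreme_average_envelope(t, x)
slow = 0.5 * np.sin(2 * np.pi * 5 * t)

# Away from the boundaries the average envelope tracks the slow component,
# so x - env isolates the fast oscillation in a single sifting-like step
print(np.abs(env - slow)[200:-200].max())
```

    Using one average envelope in place of separate upper and lower envelopes is exactly the "replacing two envelopes with one" strategy described above, here reduced to its simplest form.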

  5. Noise reduction of video imagery through simple averaging

    NASA Astrophysics Data System (ADS)

    Vorder Bruegge, Richard W.

    1999-02-01

    Examiners in the Special Photographic Unit of the Federal Bureau of Investigation Laboratory Division conduct examinations of questioned photographic evidence of all types, including surveillance imagery recorded on film and video tape. A primary type of examination includes side-by-side comparisons, in which unknown objects or people depicted in the questioned images are compared with known objects recovered from suspects or with photographs of suspects themselves. Most imagery received in the SPU for such comparisons originates from time-lapse video or film systems. In such circumstances, the delay between sequential images is so great that standard image summing and/or averaging techniques are useless as a means of improving image detail in questioned subjects or objects without also resorting to processing-intensive pattern reconstruction algorithms. Occasionally, however, real-time video imagery is received that includes a questioned object at rest. In such cases, it is possible to use relatively simple image averaging techniques as a means of reducing transient noise in the images, without further compromising the already-poor resolution inherent in most video surveillance images. This paper presents an example of one such case in which multiple images were averaged to reduce the transient noise to a sufficient degree to permit the positive identification of a vehicle based upon the presence of scrape marks and dents on the side of the vehicle.
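
    The underlying arithmetic is simple frame averaging: for a static scene, averaging N frames leaves the scene untouched while reducing zero-mean transient noise by roughly √N. A minimal sketch with a synthetic scene and noise model (hypothetical sizes and levels, not case imagery):

```python
import numpy as np

rng = np.random.default_rng(3)

scene = rng.integers(0, 200, size=(48, 64)).astype(float)  # static object at rest

# 16 real-time frames of the same scene, each with zero-mean transient noise
frames = scene + rng.normal(0.0, 20.0, size=(16, 48, 64))

avg = frames.mean(axis=0)

# Noise drops by ~sqrt(16) = 4; the underlying scene detail is untouched
print(np.std(frames[0] - scene), np.std(avg - scene))
```

    This is why the technique only works when the questioned object is at rest: any motion between frames would blur the scene term instead of cancelling the noise term.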

  6. CD SEM metrology macro CD technology: beyond the average

    NASA Astrophysics Data System (ADS)

    Bunday, Benjamin D.; Michelson, Di K.; Allgair, John A.; Tam, Aviram; Chase-Colin, David; Dajczman, Asaf; Adan, Ofer; Har-Zvi, Michael

    2005-05-01

    Downscaling of semiconductor fabrication technology requires an ever-tighter control of the production process. CD-SEM, being the major image-based critical dimension metrology tool, is constantly being improved in order to fulfill these requirements. One of the methods used for increasing precision is averaging over several or many (ideally identical) features, usually referred to as "Macro CD". In this paper, we show that there is much more to Macro CD technology (metrics characterizing an arbitrary array of similar features within a single SEM image) than just the average. A large amount of data is accumulated from a single scan of a SEM image, providing informative and statistically valid local process characterization. As opposed to other technologies, Macro CD not only provides extremely precise average metrics, but also allows for the reporting of full information on each of the measured features and of various statistics (such as the variability) on all currently reported CD SEM metrics. We present the mathematical background behind Macro CD technology and the opportunity for reducing the number of sites for SPC, along with providing enhanced-sensitivity CD metrics.

  7. Average wavefunction method for multiple scattering theory and applications

    SciTech Connect

    Singh, H.

    1985-01-01

    A general approximation scheme, the average wavefunction method (AWM), applicable to scattering of atoms and molecules off multi-center targets, is proposed. The total potential is replaced by a sum of nonlocal, separable interactions. Each term in the sum projects the wave function onto a weighted average in the vicinity of a given scattering center. The resultant solution is an infinite-order approximation to the true solution, and choosing the weighting function as the zeroth-order solution guarantees agreement with the Born approximation to second order. In addition, the approximation becomes increasingly more accurate in the low-energy, long-wavelength limit. A nonlinear, nonperturbative iterative scheme for the wave function is proposed. An extension of the scheme to multichannel scattering, suitable for treating inelastic scattering, is also presented. The method is applied to elastic scattering of a gas off a solid surface. The formalism is developed for both periodic and disordered surfaces. Numerical results are presented for atomic clusters on a flat hard wall with a Gaussian-like potential at each atomic scattering site. The effect of relative lateral displacement of two clusters upon the scattering pattern is shown. The ability of the AWM to accommodate disorder through statistical averaging over cluster configurations is illustrated. Enhanced uniform backscattering is observed with increasing roughness of the surface. Finally, the AWM is applied to atom-molecule scattering.

  8. How to Address Measurement Noise in Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Schöniger, A.; Wöhling, T.; Nowak, W.

    2014-12-01

    When confronted with the challenge of selecting one out of several competing conceptual models for a specific modeling task, Bayesian model averaging is a rigorous choice. It ranks the plausibility of models based on Bayes' theorem, which yields an optimal trade-off between performance and complexity. With the resulting posterior model probabilities, the individual model predictions are combined into a robust weighted average and the overall predictive uncertainty (including conceptual uncertainty) can be quantified. This rigorous framework does not, however, yet explicitly consider the statistical significance of measurement noise in the calibration data set. This is a major drawback, because model weights might be unstable due to the uncertainty in noisy data, which may compromise the reliability of model ranking. We present a new extension to the Bayesian model averaging framework that explicitly accounts for measurement noise as a source of uncertainty for the weights. This enables modelers to assess the reliability of model ranking for a specific application and a given calibration data set. Also, the impact of measurement noise on the overall prediction uncertainty can be determined. Technically, our extension is built within a Monte Carlo framework. We repeatedly perturb the observed data with random realizations of measurement error. Then, we determine the robustness of the resulting model weights against measurement noise. We quantify the variability of posterior model weights as weighting variance. We add this new variance term to the overall prediction uncertainty analysis within the Bayesian model averaging framework to make uncertainty quantification more realistic and "complete". We illustrate the importance of our suggested extension with an application to soil-plant model selection, based on studies by Wöhling et al. (2013, 2014). Results confirm that noise in leaf area index or evaporation rate observations produces a significant amount of weighting
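    The Monte Carlo idea can be sketched under strong simplifying assumptions (two synthetic models, independent Gaussian errors, equal prior model probabilities); the names `posterior_weights` and the data values below are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two competing model predictions of the same 10 observations (synthetic)
y_true = np.linspace(0.0, 1.0, 10)
preds = {"model_A": y_true + 0.05, "model_B": y_true - 0.10}
sigma = 0.1                                # assumed measurement-error std
y_obs = y_true + rng.normal(0.0, sigma, 10)

def posterior_weights(y, preds, sigma):
    """BMA weights from Gaussian likelihoods with equal model priors."""
    logL = np.array([-0.5 * np.sum((y - p) ** 2) / sigma**2
                     for p in preds.values()])
    w = np.exp(logL - logL.max())          # stabilize before normalizing
    return w / w.sum()

# Repeatedly perturb the data with random measurement-error realizations
# and record how the posterior model weights move.
weights = np.array([
    posterior_weights(y_obs + rng.normal(0.0, sigma, 10), preds, sigma)
    for _ in range(500)
])
weighting_variance = weights.var(axis=0)   # stability of each model's weight
```

A large `weighting_variance` would signal that the model ranking is not robust against the noise level assumed for the calibration data.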

  9. 40 CFR Figure 1 to Subpart Qqq of... - Data Summary Sheet for Determination of Average Opacity

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    [Form column headings recoverable from the figure: Clock time; Number of converters blowing; Converter aisle activity; Average opacity for 1-minute interval (percent); Average opacity for 1-minute interval, blowing without visible emission interferences (percent).]

  10. Endotoxin exposure-response in a fiberglass manufacturing facility.

    PubMed

    Milton, D K; Wypij, D; Kriebel, D; Walters, M D; Hammond, S K; Evans, J S

    1996-01-01

    Peak expiratory flow (PEF) and workplace exposure to endotoxin, phenolic resin, and formaldehyde were measured to investigate asthma symptoms and medication use among employees in a fiberglass wool manufacturing plant. Self-recorded PEF was obtained from 37 workers, for a total of 181 days off work and 187 days at work with concurrent personal exposure monitoring. Pre- and post-shift spirometry were obtained on at least 2 days. The 8 hr time-weighted average personal exposure ranges were: endotoxin, 0.4-759 ng/m3; phenolic resin, 5.7-327 micrograms/m3; and formaldehyde, 1.2-265 micrograms/m3. Amplitude percent mean peak flow was associated with years since starting regular work in the highest endotoxin exposure area, although current assignment in that area was associated with reduced amplitude, which is evidence for a healthy worker effect. Exposure-response was analyzed by regression of lung function change on exposure using generalized estimating equations with robust variance estimates. Endotoxin exposure above 4 ng/m3 (8 hr time-weighted average) was associated with a decline in lung function across the work shift, and with drops in lung function 16-20 hr after exposure. Phenolic resin exposure was not consistently associated with decrements in lung function, and formaldehyde exposure was not associated with decrements. PMID:8808037
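    The 8 hr time-weighted average used here is the duration-weighted mean concentration over an 8 hr shift, TWA = sum(C_i * t_i) / 8. A minimal sketch; the function name and the zero-exposure convention for unsampled time are assumptions for illustration:

```python
def twa_8hr(samples):
    """8-hr time-weighted average from (concentration, duration_hr) pairs.

    TWA = sum(C_i * t_i) / 8; time not covered by a sample is treated as
    zero exposure (an illustrative convention, not from the paper).
    """
    total = sum(c * t for c, t in samples)
    return total / 8.0

# e.g. 2 hr at 12 ng/m3, 4 hr at 2 ng/m3, 2 hr at 0 ng/m3
exposure = twa_8hr([(12, 2), (2, 4), (0, 2)])
# (24 + 8 + 0) / 8 = 4 ng/m3, exactly at the 4 ng/m3 level the study
# associated with cross-shift lung function decline
```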

  11. New model of the average neutron and proton pairing gaps

    NASA Astrophysics Data System (ADS)

    Madland, David G.; Nix, J. Rayford

    1988-01-01

    By use of the BCS approximation applied to a distribution of dense, equally spaced levels, we derive new expressions for the average neutron pairing gap Δ̄n and the average proton pairing gap Δ̄p. These expressions, which contain exponential terms, take into account the dependencies of Δ̄n and Δ̄p upon both the relative neutron excess and the shape of the nucleus. The three constants that appear are determined by a least-squares adjustment to experimental pairing gaps obtained by use of fourth-order differences of measured masses. For this purpose we use the 1986 Audi-Wapstra mid-stream mass evaluation and take into account experimental uncertainties. Our new model explains not only the dependencies of Δ̄n and Δ̄p upon relative neutron excess and nuclear shape, but also the experimental result that for medium and heavy nuclei Δ̄n is generally smaller than Δ̄p. We also introduce a new expression for the average residual neutron-proton interaction energy δ̄ that appears in the masses of odd-odd nuclei, and determine the constant that appears by an analogous least-squares adjustment to experimental mass differences. Our new expressions for Δ̄n, Δ̄p, and δ̄ should permit extrapolation of these quantities to heavier nuclei and to nuclei farther removed from the valley of β stability than do previous parameterizations.

  12. Human facial beauty : Averageness, symmetry, and parasite resistance.

    PubMed

    Thornhill, R; Gangestad, S W

    1993-09-01

    It is hypothesized that human faces judged to be attractive by people possess two features, averageness and symmetry, that promoted adaptive mate selection in human evolutionary history by way of production of offspring with parasite resistance. Facial composites made by combining individual faces are judged to be attractive, and more attractive than the majority of individual faces. The composites possess both symmetry and averageness of features. Facial averageness may reflect high individual protein heterozygosity and thus an array of proteins to which parasites must adapt. Heterozygosity may be an important defense of long-lived hosts against parasites when it occurs in portions of the genome that do not code for the essential features of complex adaptations. In this case heterozygosity can create a hostile microenvironment for parasites without disrupting adaptation. Facial bilateral symmetry is hypothesized to affect positive beauty judgments because symmetry is a certification of overall phenotypic quality and developmental health, which may be importantly influenced by parasites. Certain secondary sexual traits are influenced by testosterone, a hormone that reduces immunocompetence. Symmetry and size of the secondary sexual traits of the face (e.g., cheek bones) are expected to correlate positively and advertise immunocompetence honestly and therefore to affect positive beauty judgments. Facial attractiveness is predicted to correlate with attractive, nonfacial secondary sexual traits; other predictions from the view that parasite-driven selection led to the evolution of psychological adaptations of human beauty perception are discussed. The view that human physical attractiveness and judgments about human physical attractiveness evolved in the context of parasite-driven selection leads to the hypothesis that both adults and children have a species-typical adaptation to the problem of identifying and favoring healthy individuals and avoiding parasite

  13. Understanding coastal morphodynamic patterns from depth-averaged sediment concentration

    NASA Astrophysics Data System (ADS)

    Ribas, F.; Falqués, A.; de Swart, H. E.; Dodd, N.; Garnier, R.; Calvete, D.

    2015-06-01

    This review highlights the important role of the depth-averaged sediment concentration (DASC) to understand the formation of a number of coastal morphodynamic features that have an alongshore rhythmic pattern: beach cusps, surf zone transverse and crescentic bars, and shoreface-connected sand ridges. We present a formulation and methodology, based on the knowledge of the DASC (which equals the sediment load divided by the water depth), that has been successfully used to understand the characteristics of these features. These sand bodies, relevant for coastal engineering and other disciplines, are located in different parts of the coastal zone and are characterized by different spatial and temporal scales, but the same technique can be used to understand them. Since the sand bodies occur in the presence of depth-averaged currents, the sediment transport approximately equals a sediment load times the current. Moreover, it is assumed that waves essentially mobilize the sediment, and the current increases this mobilization and advects the sediment. In such conditions, knowing the spatial distribution of the DASC and the depth-averaged currents induced by the forcing (waves, wind, and pressure gradients) over the patterns allows inferring the convergence/divergence of sediment transport. Deposition (erosion) occurs where the current flows from areas of high to low (low to high) values of DASC. The formulation and methodology are especially useful to understand the positive feedback mechanisms between flow and morphology leading to the formation of those morphological features, but the physical mechanisms for their migration, their finite-amplitude behavior and their decay can also be explored.

  14. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    NASA Astrophysics Data System (ADS)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.

  15. Signal averaging x-ray streak camera with picosecond jitter

    NASA Astrophysics Data System (ADS)

    Maksimchuk, A.; Kim, M.; Workman, J.; Korn, G.; Squier, J.; Du, D.; Umstadter, D.; Mourou, G.; Bouvier, M.

    1996-03-01

    We have developed an averaging picosecond x-ray streak camera using a dc-biased photoconductive switch as a generator of a high-voltage ramp. The streak camera is operated at a sweep speed of up to 8 ps/mm, with shot-to-shot jitter of less than ±1 ps. The streak camera has been used to measure the time history of broadband x-ray emission from an ultrashort-pulse laser-produced plasma. Accumulation of the streaked x-ray signals significantly improved the signal-to-noise ratio of the data obtained.

  16. Fingerprinting Codes for Multimedia Data against Averaging Attack

    NASA Astrophysics Data System (ADS)

    Yagi, Hideki; Matsushima, Toshiyasu; Hirasawa, Shigeichi

    Code construction for digital fingerprinting, which is a copyright protection technique for multimedia, is considered. Digital fingerprinting should deter collusion attacks, in which several fingerprinted copies of the same content are mixed to disturb their fingerprints. In this paper, we consider the averaging attack, which is known to be effective against multimedia fingerprinting with the spread spectrum technique. We propose new methods for constructing fingerprinting codes that increase the coding rate of conventional fingerprinting codes while guaranteeing identification of the same number of colluders. With the new fingerprinting codes, the system can accommodate a larger number of users when supplying digital content.
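    The averaging attack itself is easy to demonstrate against spread-spectrum marks. A sketch under simple assumptions (antipodal ±0.1 fingerprints, correlation detection against the extracted residual); this illustrates the attack, not the codes proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

content = rng.normal(0.0, 1.0, 1000)                     # host signal
marks = rng.choice([-1.0, 1.0], size=(5, 1000)) * 0.1    # 5 users' marks
copies = content + marks                                 # fingerprinted copies

forged = copies.mean(axis=0)    # 5 colluders average their copies

# Correlation detector scores against the extracted residual; averaging
# attenuates each colluder's score by roughly the number of colluders.
scores = marks @ (forged - content)
single_copy_score = marks[0] @ marks[0]  # score without collusion (= 10)
```

Each colluder's score drops from about 10 to about 2 here, which is why fingerprinting codes must be designed so that even attenuated traces still identify the colluders.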

  17. Computational problems in autoregressive moving average (ARMA) models

    NASA Technical Reports Server (NTRS)

    Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.

    1981-01-01

    The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
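    One routine step behind such modeling, fitting the autoregressive part by least squares, can be sketched on simulated data. This is a generic AR(2) illustration, not the authors' constrained-parameter procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a stationary AR(2) process: x_t = a1*x_{t-1} + a2*x_{t-2} + e_t
a1, a2 = 0.6, -0.3
n = 5000
x = np.zeros(n)
e = rng.normal(0.0, 1.0, n)
for t in range(2, n):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + e[t]

# Least-squares estimation of the AR coefficients (the AR part of ARMA):
# regress x_t on its first two lags.
X = np.column_stack([x[1:-1], x[:-2]])  # lag-1 and lag-2 columns
y = x[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# coef should recover roughly [0.6, -0.3]
```

Full ARMA estimation also requires the moving-average part (a nonlinear fit), and model-order selection, which is exactly the issue the abstract raises.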

  18. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    NASA Technical Reports Server (NTRS)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

    In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherent narrow frequency band performance, the problem is exacerbated due to modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve the problem. An iterative design procedure was developed by starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide, and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full wave method of moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced for the aperture distribution due to modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. Separable Taylor distribution with nbar=4 and 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E

  19. Boundedness of generalized Cesaro averaging operators on certain function spaces

    NASA Astrophysics Data System (ADS)

    Agrawal, M. R.; Howlett, P. G.; Lucas, S. K.; Naik, S.; Ponnusamy, S.

    2005-08-01

    We define a two-parameter family of Cesaro averaging operators , where , is analytic on the unit disc [Delta], and F(a,b;c;z) is the classical hypergeometric function. In the present article the boundedness of , , on various function spaces such as Hardy, BMOA and a-Bloch spaces is proved. In the special case b=1+[alpha] and c=1, becomes the [alpha]-Cesaro operator , . Thus, our results connect the special functions in a natural way and extend and improve several well-known results of Hardy-Littlewood, Miao, Stempak and Xiao.

  20. Glycogen with short average chain length enhances bacterial durability

    NASA Astrophysics Data System (ADS)

    Wang, Liang; Wise, Michael J.

    2011-09-01

    Glycogen is conventionally viewed as an energy reserve that can be rapidly mobilized for ATP production in higher organisms. However, several studies have noted that glycogen with short average chain length in some bacteria is degraded very slowly. In addition, slow utilization of glycogen is correlated with bacterial viability, that is, the slower the glycogen breakdown rate, the longer the bacterial survival time in the external environment under starvation conditions. We call this a durable energy storage mechanism (DESM). In this review, evidence from microbiology, biochemistry, and molecular biology will be assembled to support the hypothesis of glycogen as a durable energy storage compound. One method for testing the DESM hypothesis is proposed.

  1. Constructing the Average Natural History of HIV-1 Infection

    NASA Astrophysics Data System (ADS)

    Diambra, L.; Capurro, A.; Malta, C. P.

    2007-05-01

    Many aspects of the natural course of the HIV-1 infection remain unclear, despite important efforts towards understanding its long-term dynamics. Using a scaling approach that places progression markers (viral load, CD4+, CD8+) of many individuals on a single average natural course of disease progression, we introduce the concepts of inter-individual scaling and time scaling. Our quantitative assessment of the natural course of HIV-1 infection indicates that the dynamics of the evolution for individuals who developed AIDS (opportunistic infections) differ from those of individuals who did not develop AIDS. This means that the rate of progression is not relevant for the infection evolution.

  2. The average configuration of the induced Venus magnetotail

    NASA Technical Reports Server (NTRS)

    Mccomas, D. J.; Spence, H. E.; Russell, C. T.

    1987-01-01

    The interaction of the solar-wind flow with Venus is discussed as well as the morphology of magnetic-field-line draping in the Venus magnetotail. Emphasis is placed on the importance of the interplanetary magnetic field X-component in controlling the configuration of field draping in this induced magnetotail. The average magnetic configuration of this magnetotail is studied. A connection is made between the derived consistent plasma flow speed and density and the observational energy/charge range and sensitivity of the Pioneer Venus Orbiter plasma analyzer.

  3. Aerodynamic Surface Stress Intermittency and Conditionally Averaged Turbulence Statistics

    NASA Astrophysics Data System (ADS)

    Anderson, W.

    2015-12-01

    Aeolian erosion of dry, flat, semi-arid landscapes is induced (and sustained) by kinetic energy fluxes in the aloft atmospheric surface layer. During saltation, the mechanism responsible for surface fluxes of dust and sediment, briefly suspended sediment grains undergo a ballistic trajectory before impacting and "splashing" smaller-diameter (dust) particles vertically. Conceptual models typically indicate that sediment flux, q (via saltation or drift), scales with imposed aerodynamic (basal) stress raised to some exponent, n, where n > 1. Since basal stress (in fully rough, inertia-dominated flows) scales with the incoming velocity squared, u^2, it follows that q ~ u^(2n) (where u is some relevant component of the above flow field, u(x,t)). Thus, even small (turbulent) deviations of u from its time-averaged value may play an enormously important role in aeolian activity on flat, dry landscapes. The importance of this argument is further augmented given that turbulence in the atmospheric surface layer exhibits maximum Reynolds stresses in the fluid immediately above the landscape. In order to illustrate the importance of surface stress intermittency, we have used conditional averaging predicated on aerodynamic surface stress during large-eddy simulation of atmospheric boundary layer flow over a flat landscape with momentum roughness length appropriate for the Llano Estacado in west Texas (a flat agricultural region that is notorious for dust transport). By using data from a field campaign to measure diurnal variability of aeolian activity and prevailing winds on the Llano Estacado, we have retrieved the threshold friction velocity (which can be used to compute the threshold surface stress under the geostrophic balance with the Monin-Obukhov similarity theory). This averaging procedure provides an ensemble-mean visualization of flow structures responsible for erosion "events".
Preliminary evidence indicates that surface stress peaks are associated with the passage of
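    The claim that small turbulent deviations matter when q ~ u^(2n) can be checked numerically: by Jensen's inequality, averaging the instantaneous flux exceeds the flux computed from the time-averaged wind alone. The numbers below are arbitrary and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Mean wind of 5 (arbitrary units) with Gaussian turbulent fluctuations
u = 5.0 + rng.normal(0.0, 1.0, 100_000)

n = 2  # flux exponent: q ~ u^(2n), here u^4

flux_mean_wind = np.mean(u) ** (2 * n)  # using only the time-averaged wind
flux_true = np.mean(u ** (2 * n))       # averaging the instantaneous flux

# flux_true > flux_mean_wind: fluctuations systematically raise the
# average flux, so intermittency cannot be ignored in q estimates.
```

For a Gaussian wind with mean 5 and unit variance, E[u^4] = 625 + 150 + 3 = 778 versus (E[u])^4 = 625, roughly a 24% enhancement from fluctuations alone.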

  4. Status of Average-x from Lattice QCD

    SciTech Connect

    Dru Renner

    2011-09-01

    As algorithms and computing power have advanced, lattice QCD has become a precision technique for many QCD observables. However, the calculation of nucleon matrix elements remains an open challenge. I summarize the status of the lattice effort by examining one observable that has come to represent this challenge, average-x: the fraction of the nucleon's momentum carried by its quark constituents. Recent results confirm a long standing tendency to overshoot the experimentally measured value. Understanding this puzzle is essential to not only the lattice calculation of nucleon properties but also the broader effort to determine hadron structure from QCD.

  5. Optical Parametric Amplification for High Peak and Average Power

    SciTech Connect

    Jovanovic, I

    2001-11-26

    Optical parametric amplification is an established broadband amplification technology based on a second-order nonlinear process of difference-frequency generation (DFG). When used in chirped pulse amplification (CPA), the technology has been termed optical parametric chirped pulse amplification (OPCPA). OPCPA holds a potential for producing unprecedented levels of peak and average power in optical pulses through its scalable ultrashort pulse amplification capability and the absence of quantum defect, respectively. The theory of three-wave parametric interactions is presented, followed by a description of the numerical model developed for nanosecond pulses. Spectral, temperature, and angular characteristics of OPCPA are calculated, with an estimate of pulse contrast. An OPCPA system centered at 1054 nm, based on a commercial tabletop Q-switched pump laser, was developed as the front end for a large Nd-glass petawatt-class short-pulse laser. The system does not utilize electro-optic modulators or multi-pass amplification. The obtained overall 6% efficiency is the highest to date in OPCPA that uses a tabletop commercial pump laser. The first compression of pulses amplified in highly nondegenerate OPCPA is reported, with the obtained pulse width of 60 fs. This represents the shortest pulse to date produced in OPCPA. Optical parametric amplification in β-barium borate was combined with laser amplification in Ti:sapphire to produce the first hybrid CPA system, with an overall conversion efficiency of 15%. Hybrid CPA combines the benefits of high gain in OPCPA with high conversion efficiency in Ti:sapphire to allow significant simplification of future tabletop multi-terawatt sources. Preliminary modeling of average power limits in OPCPA and pump laser design are presented, and an approach based on cascaded DFG is proposed to increase the average power beyond the single-crystal limit. Angular and beam quality effects in optical parametric amplification are modeled

  6. 100 W average power femtosecond laser at 343 nm.

    PubMed

    Rothhardt, Jan; Rothhardt, Carolin; Müller, Michael; Klenke, Arno; Kienel, Marco; Demmler, Stefan; Elsmann, Tino; Rothhardt, Manfred; Limpert, Jens; Tünnermann, Andreas

    2016-04-15

    We present a femtosecond laser system delivering up to 100 W of average power at 343 nm. The laser system employs a Yb-based femtosecond fiber laser and subsequent second- and third-harmonic generation in beta barium borate (BBO) crystals. Thermal gradients within these BBO crystals are mitigated by sapphire heat spreaders directly bonded to the front and back surfaces of the crystals. Thus, a nearly diffraction-limited beam quality (M2 < 1.4) is achieved, despite the high thermal load on the nonlinear crystals. This laser source is expected to advance many industrial and scientific applications in the future. PMID:27082370

  7. Weighted Average Consensus-Based Unscented Kalman Filtering.

    PubMed

    Li, Wangyan; Wei, Guoliang; Han, Fei; Liu, Yurong

    2016-02-01

    In this paper, we investigate consensus-based distributed state estimation problems for a class of sensor networks within the unscented Kalman filter (UKF) framework. The communication status among sensors is represented by a connected undirected graph. Moreover, a weighted average consensus-based UKF algorithm is developed for the purpose of estimating the true state of interest, and its estimation error is proven to be bounded in mean square. Finally, the effectiveness of the proposed consensus-based UKF algorithm is validated through a simulation example. PMID:26168453
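    The consensus ingredient can be illustrated in isolation: repeated weighted averaging over a connected graph drives the nodes' local values to a common value. The 4-node weight matrix below is invented for illustration (the paper's UKF machinery is omitted entirely):

```python
import numpy as np

# Each sensor holds a local scalar estimate of the same quantity
estimates = np.array([1.0, 4.0, 2.0, 5.0])

# Symmetric, doubly stochastic weight matrix for a 4-node path graph:
# each node averages with its graph neighbors only.
W = np.array([[0.50, 0.50, 0.00, 0.00],
              [0.50, 0.25, 0.25, 0.00],
              [0.00, 0.25, 0.25, 0.50],
              [0.00, 0.00, 0.50, 0.50]])

x = estimates.copy()
for _ in range(200):
    x = W @ x  # one consensus iteration: exchange with neighbors, average

# For a symmetric, doubly stochastic W on a connected graph, all nodes
# converge to the network-wide average of the initial estimates.
```

In the weighted average consensus-based UKF, the averaged quantities are the sensors' local state estimates and information matrices rather than raw scalars, but the contraction-to-agreement mechanism is the same.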

  8. MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  9. Control of average spacing of OMCVD grown gold nanoparticles

    NASA Astrophysics Data System (ADS)

    Rezaee, Asad

    Metallic nanostructures and their applications constitute a rapidly expanding field. Noble metals such as silver and gold have historically been used to demonstrate plasmon effects due to their strong resonances, which occur in the visible part of the electromagnetic spectrum. Localized surface plasmon resonance (LSPR) produces an enhanced electromagnetic field at the interface between a gold nanoparticle (Au NP) and the surrounding dielectric. This enhanced field can be used for metal-dielectric interface-sensitive optical interactions that form a powerful basis for optical sensing. In addition to the surrounding material, the LSPR spectral position and width depend on the size, shape, and average spacing between these particles. Au NP LSPR based sensors exhibit their highest sensitivity with optimized parameters and usually operate by investigating absorption peak shifts. The absorption peak of randomly deposited Au NPs on surfaces is mostly broad. As a result, the absorption peak shifts upon binding of a material onto Au NPs might not be very clear for further analysis. Therefore, novel methods based on three well-known techniques, self-assembly, ion irradiation, and organometallic chemical vapour deposition (OMCVD), are introduced to control the average spacing between Au NPs. In addition to covalent binding and other advantages of OMCVD grown Au NPs, interesting optical features due to their non-spherical shapes are presented. The first step towards average-spacing control is to uniformly form self-assembled monolayers (SAMs) of octadecyltrichlorosilane (OTS) as resists for OMCVD Au NPs. The formation and optimization of the OTS SAMs are extensively studied. The optimized resist SAMs are ion-irradiated by a focused ion beam (FIB) and by ions generated by a Tandem accelerator. The irradiated areas are refilled with 3-mercaptopropyl-trimethoxysilane (MPTS) to provide nucleation sites for the OMCVD Au NP growth.
Each step during sample preparation is monitored by

  10. Yearly average performance of the principal solar collector types

    NASA Astrophysics Data System (ADS)

    Rabl, A.

    1981-01-01

    The results of hour-by-hour simulations for 26 meteorological stations were used to derive universal correlations for the yearly total energy that can be delivered by the principal solar collector types: flat plate, evacuated tubes, CPC, single- and dual-axis tracking collectors, and central receiver. The correlations are first- and second-order polynomials in yearly average insolation, latitude, and threshold (= heat loss/optical efficiency). With these correlations, the yearly collectible energy can be found by multiplying the coordinates of a single graph by the collector parameters, which reproduces the results of hour-by-hour simulations with an accuracy (rms error) of 2% for flat plates and 2% to 4% for concentrators.
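The correlation described above amounts to evaluating a low-order polynomial in yearly average insolation, latitude, and threshold. A minimal sketch in Python; the coefficients below are purely hypothetical placeholders, not the paper's fitted values:

```python
def yearly_collectible_energy(insolation, latitude_deg, threshold, coeffs):
    """Evaluate a second-order polynomial correlation of the general form
    Q = sum over (i, j, k) of c_ijk * I**i * L**j * X**k, where I is yearly
    average insolation, L is latitude, and X is the threshold
    (= heat loss / optical efficiency). `coeffs` maps exponent tuples
    (i, j, k) to coefficients; the values used here are hypothetical."""
    return sum(c * insolation**i * latitude_deg**j * threshold**k
               for (i, j, k), c in coeffs.items())

# Hypothetical coefficients for illustration only.
coeffs = {(0, 0, 0): 0.1, (1, 0, 0): 0.8, (0, 1, 0): -0.005,
          (0, 0, 1): -1.2, (2, 0, 0): 0.01, (0, 0, 2): 0.3}
q = yearly_collectible_energy(5.0, 35.0, 0.2, coeffs)
```

Multiplying such a correlation value by the collector parameters then gives the yearly collectible energy, as the abstract describes.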

  11. Advanced distortion-invariant minimum average correlation energy (MACE) filters.

    PubMed

    Casasent, D; Ravichandran, G

    1992-03-10

    The original minimum average correlation energy (MACE) filter is addressed by using a new database (strategic relocatable objects, missile launchers) and including noise performance, depression angle, and resolution effects on the number of training set images that are required. Major attention is given to our new MACE filter algorithms for distortion-invariant pattern recognition: shifted-MACE filters (to suppress large false correlation peaks), minimum variance-MACE filters (for improved noise performance), multiple symbolic encoded filters (to reduce the effect of false correlation peaks), and Gaussian-MACE filters (to improve noise performance and intraclass recognition and reduce the training set size). PMID:20720728
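The baseline MACE synthesis underlying these extensions has a standard closed form in the frequency domain, h = D⁻¹X(Xᴴ D⁻¹X)⁻¹u, where X stacks the training-image DFTs as columns and D is the diagonal of their average power spectrum. A sketch of that textbook baseline (not the authors' shifted, minimum-variance, or Gaussian variants):

```python
import numpy as np

def mace_filter(images, u=None):
    """Baseline MACE filter synthesis in the frequency domain:
    h = D^-1 X (X^H D^-1 X)^-1 u. By construction the correlation peak at
    the origin equals u_i for each training image while the average
    correlation-plane energy is minimized (textbook formulation)."""
    N = len(images)
    # d x N matrix of flattened 2D DFTs of the training images.
    X = np.stack([np.fft.fft2(im).ravel() for im in images], axis=1)
    D = np.mean(np.abs(X) ** 2, axis=1)        # average power spectrum (diag of D)
    if u is None:
        u = np.ones(N)                         # constrain all peaks to 1
    Xd = X / D[:, None]                        # D^-1 X
    A = X.conj().T @ Xd                        # X^H D^-1 X  (N x N)
    h = Xd @ np.linalg.solve(A, u)
    return h.reshape(images[0].shape)

rng = np.random.default_rng(0)
imgs = [rng.standard_normal((8, 8)) for _ in range(3)]
h = mace_filter(imgs)
```

The peak constraint Xᴴh = u can be checked directly, which is a quick sanity test of any implementation.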

  12. Advanced distortion-invariant minimum average correlation energy (MACE) filters

    NASA Astrophysics Data System (ADS)

    Casasent, David; Ravichandran, Gopalan

    1992-03-01

    The original minimum average correlation energy (MACE) filter is addressed by using a new database (strategic relocatable objects and missile launchers) and including noise performance, depression angle, and resolution effects on the number of training set images that are required. Major attention is given to new MACE filter algorithms for distortion-invariant pattern recognition: shifted-MACE filters to suppress large false correlation peaks, minimum variance-MACE filters for improved noise performance, multiple symbolic encoded filters to reduce the effect of false correlation peaks, and Gaussian-MACE filters to improve noise performance and intraclass recognition and reduce the training set size.

  13. Atom-molecule scattering with the average wavefunction method

    NASA Astrophysics Data System (ADS)

    Singh, Harjinder; Dacol, Dalcio K.; Rabitz, Herschel

    1987-08-01

    The average wavefunction method (AWM) is applied to atom-molecule scattering. In its simplest form the labor involved in solving the AWM equations is equivalent to that involved for elastic scattering in the same formulation. As an initial illustration, explicit expressions for the T-matrix are derived for the scattering of an atom and a rigid rotor. Results are presented for low-energy scattering and corrections to the Born approximation are clearly evident. In general, the AWM is particularly suited to polyatom scattering due to its reduction of the potential in terms of a separable atom-atom potential.

  14. Thermal management in high average power pulsed compression systems

    SciTech Connect

    Wavrik, R.W.; Reed, K.W.; Harjes, H.C.; Weber, G.J.; Butler, M.; Penn, K.J.; Neau, E.L.

    1992-08-01

    High average power repetitively pulsed compression systems offer a potential source of electron beams which may be applied to sterilization of wastes, treatment of food products, and other environmental and consumer applications. At Sandia National Laboratories, the Repetitive High Energy Pulsed Power (RHEPP) program is developing a 7-stage magnetic pulse compressor driving a linear induction voltage adder with an electron beam diode load. The RHEPP machine is being designed to deliver 350 kW of average power to the diode in 60 ns FWHM, 2.5 MV, 3 kJ pulses at a repetition rate of 120 Hz. In addition to the electrical design considerations, the repetition rate requires thermal management of the electrical losses. Steady-state temperatures must be kept below the material degradation temperatures to maximize reliability and component life. The optimum design is a trade-off between thermal management, maximizing overall electrical performance of the system, reliability, and cost effectiveness. Cooling requirements and configurations were developed for each of the subsystems of RHEPP. Finite element models that combine fluid flow and heat transfer were used to screen design concepts. The analysis includes one-, two-, and three-dimensional heat transfer using surface heat transfer coefficients and boundary layer models. Experiments were conducted to verify the models as well as to evaluate cooling channel fabrication materials and techniques in Metglas wound cores. 10 refs.

  15. Thermal management in high average power pulsed compression systems

    SciTech Connect

    Wavrik, R.W.; Reed, K.W.; Harjes, H.C.; Weber, G.J.; Butler, M.; Penn, K.J.; Neau, E.L.

    1992-01-01

    High average power repetitively pulsed compression systems offer a potential source of electron beams which may be applied to sterilization of wastes, treatment of food products, and other environmental and consumer applications. At Sandia National Laboratories, the Repetitive High Energy Pulsed Power (RHEPP) program is developing a 7-stage magnetic pulse compressor driving a linear induction voltage adder with an electron beam diode load. The RHEPP machine is being designed to deliver 350 kW of average power to the diode in 60 ns FWHM, 2.5 MV, 3 kJ pulses at a repetition rate of 120 Hz. In addition to the electrical design considerations, the repetition rate requires thermal management of the electrical losses. Steady-state temperatures must be kept below the material degradation temperatures to maximize reliability and component life. The optimum design is a trade-off between thermal management, maximizing overall electrical performance of the system, reliability, and cost effectiveness. Cooling requirements and configurations were developed for each of the subsystems of RHEPP. Finite element models that combine fluid flow and heat transfer were used to screen design concepts. The analysis includes one-, two-, and three-dimensional heat transfer using surface heat transfer coefficients and boundary layer models. Experiments were conducted to verify the models as well as to evaluate cooling channel fabrication materials and techniques in Metglas wound cores. 10 refs.

  16. Global Rotation Estimation Using Weighted Iterative Lie Algebraic Averaging

    NASA Astrophysics Data System (ADS)

    Reich, M.; Heipke, C.

    2015-08-01

    In this paper we present an approach for weighted rotation averaging to estimate absolute rotations from the relative rotations between pairs of images in a set of multiple overlapping images. The solution does not depend on initial values for the unknown parameters and is robust against outliers. Our approach is one part of a solution for global image orientation. Because relative rotations are often not free from outliers, we use the redundancy in the available pairwise relative rotations and present a novel graph-based algorithm to detect and eliminate inconsistent rotations. The remaining relative rotations are input to a weighted least squares adjustment performed in the Lie algebra of the rotation manifold SO(3) to obtain absolute orientation parameters for each image. Weights are determined using prior information derived from the estimation of the relative rotations. Because we use the Lie algebra of SO(3) for averaging, no subsequent adaptation of the results is required apart from the lossless projection back to the manifold. We evaluate our approach on synthetic and real data. Our approach is often able to detect and eliminate all outliers from the relative rotations, even when very high outlier rates are present. We show that we improve the quality of the estimated absolute rotations by introducing individual weights for the relative rotations based on various indicators. In comparison with the state of the art in recent publications on global image orientation, we achieve the best results on the examined datasets.
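Averaging in the Lie algebra of SO(3) can be illustrated with a weighted Karcher (geodesic) mean: residuals are mapped into the tangent space with the matrix logarithm, averaged there, and projected back losslessly with the exponential map. This is a minimal sketch of that idea, not the authors' full graph-based, outlier-filtering pipeline:

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector (so(3) element)."""
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Exponential map so(3) -> SO(3) via the Rodrigues formula."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * K @ K

def log_so3(R):
    """Logarithm map SO(3) -> so(3) (axis-angle vector)."""
    th = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if th < 1e-12:
        return np.zeros(3)
    return th / (2 * np.sin(th)) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def weighted_rotation_average(rotations, weights, iters=20):
    """Weighted Karcher mean on SO(3): average log-mapped residuals in the
    tangent space at the current estimate, re-project with exp, repeat."""
    R = rotations[0]
    w = np.asarray(weights, float)
    w = w / w.sum()
    for _ in range(iters):
        delta = sum(wi * log_so3(R.T @ Ri) for wi, Ri in zip(w, rotations))
        R = R @ exp_so3(delta)
        if np.linalg.norm(delta) < 1e-10:
            break
    return R
```

For rotations about a common axis the tangent-space average reduces to the arithmetic mean of the angles, which makes the behavior easy to verify.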

  17. Parameter Estimation and Parameterization Uncertainty Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2007-12-01

    This study proposes Bayesian model averaging (BMA) to address parameter estimation uncertainty arising from non-uniqueness in parameterization methods. BMA provides a means of incorporating multiple parameterization methods for prediction through the law of total probability, with which an ensemble average of the hydraulic conductivity distribution is obtained. Estimation uncertainty is described by the BMA variances, which contain variances within and between parameterization methods. BMA shows that considering more parameterization methods tends to increase estimation uncertainty and that estimation uncertainty is always underestimated when a single parameterization method is used. Two major problems in applying BMA to hydraulic conductivity estimation using a groundwater inverse method are discussed in this study. The first problem is the use of posterior probabilities in BMA, which tends to single out one best method and discard other good methods. This problem arises from Occam's window, which accepts only models in a very narrow range. We propose a variance window to replace Occam's window to cope with this problem. The second problem is the use of the Kashyap information criterion (KIC), which makes BMA tend to prefer highly uncertain parameterization methods because it considers the Fisher information matrix. We found that the Bayesian information criterion (BIC) is a good approximation to KIC and is able to avoid controversial results. We applied BMA to hydraulic conductivity estimation in the 1,500-foot sand aquifer in East Baton Rouge Parish, Louisiana.
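The BMA variance decomposition mentioned above follows the law of total variance: the total variance is the posterior-weighted within-method variance plus the between-method spread of the method means. A generic sketch of that decomposition (not tied to any particular groundwater inverse method):

```python
import numpy as np

def bma_mean_variance(means, variances, posteriors):
    """BMA ensemble mean and variance decomposition. Given each method's
    estimate mean, estimate variance, and posterior model probability,
    return (mean, total variance, within-method part, between-method part),
    where total = within + between (law of total variance)."""
    p = np.asarray(posteriors, float)
    p = p / p.sum()                       # normalize posterior weights
    mu = np.asarray(means, float)
    var = np.asarray(variances, float)
    mean = np.sum(p * mu)                 # ensemble-average estimate
    within = np.sum(p * var)              # variance within methods
    between = np.sum(p * (mu - mean) ** 2)  # variance between methods
    return mean, within + between, within, between
```

The between-method term is exactly what a single-parameterization analysis drops, which is why the abstract notes that a single method always underestimates the uncertainty.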

  18. Microstructural effects on the average properties in porous battery electrodes

    NASA Astrophysics Data System (ADS)

    García-García, Ramiro; García, R. Edwin

    2016-03-01

    A theoretical framework is formulated to analytically quantify the effects of the microstructure on the average properties of porous electrodes, including reactive area density and the through-thickness tortuosity as observed in experimentally determined tomographic sections. The proposed formulation includes the microstructural non-idealities but also captures the well-known perfectly spherical limit. Results demonstrate that in the absence of any particle alignment, the through-thickness Bruggeman exponent α reaches an asymptotic value of α ∼ 2/3 as the shape of the particles becomes increasingly prolate (needle- or fiber-like). In contrast, the Bruggeman exponent diverges as the shape of the particles becomes increasingly oblate, regardless of the degree of particle alignment. For aligned particles, tortuosity can be dramatically suppressed, e.g., α → 1/10 for ra → 1/10 and MRD ∼ 40. Particle size polydispersity impacts the porosity-tortuosity relation when the average particle size is comparable to the thickness of the electrode layers. Electrode reactivity density can be arbitrarily increased as the particles become increasingly oblate, but asymptotically reaches a minimum value as the particles become increasingly prolate. In the limit of a porous electrode comprised of fiber-like particles, the area density decreases by 24% with respect to a distribution of perfectly spherical particles.
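Bruggeman-type porosity-tortuosity relations are often written as τ = ε^(1−α), so the exponent directly controls how fast transport degrades as porosity drops. The sketch below uses that common convention; the exponent definition in this particular paper may differ, so treat the form as an assumption:

```python
def bruggeman_tortuosity(porosity, alpha):
    """Tortuosity from porosity in one common Bruggeman convention,
    tau = eps ** (1 - alpha). With the classic alpha = 1.5 for spheres
    this gives tau = eps ** -0.5; exponent conventions vary between
    papers, so this is an illustrative form, not the paper's exact one."""
    if not 0.0 < porosity <= 1.0:
        raise ValueError("porosity must lie in (0, 1]")
    return porosity ** (1.0 - alpha)
```

For example, under this convention a porosity of 0.25 with the classic spherical exponent α = 1.5 gives a tortuosity of 2.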

  19. Spectral attenuation and backscattering as indicators of average particle size.

    PubMed

    Slade, Wayne Homer; Boss, Emmanuel

    2015-08-20

    Measurements of the particulate beam attenuation coefficient at multiple wavelengths in the ocean typically exhibit a power law dependence on wavelength, and the slope of that power law has been related to the slope of the particle size distribution (PSD), when assumed to be a power law function of particle size. Recently, spectral backscattering coefficient measurements have been made using sensors deployed at moored observatories, on autonomous underwater vehicles, and even retrieved from space-based measurements of remote sensing reflectance. It has been suggested that these backscattering measurements may also be used to obtain information about the shape of the PSD. In this work, we directly compared field-measured PSDs with multispectral beam attenuation and backscattering coefficients in a coastal bottom boundary layer. The results of this comparison demonstrated that (1) the beam attenuation spectral slope correlates with the average particle size, as suggested by theory for idealized particles and PSDs; and (2) measurements of spectral backscattering also contain information reflective of the average particle size, in spite of large deviations of the PSD from a spectral power law shape. PMID:26368762
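The attenuation spectral slope is typically obtained by fitting a power law c(λ) ∝ λ^(−γ) in log-log space; for an idealized Junge-type PSD the hyperbolic PSD slope ξ is then often approximated by the textbook rule ξ ≈ γ + 3. A sketch under those idealized assumptions (not the exact relations evaluated in this study):

```python
import numpy as np

def spectral_slope(wavelengths, attenuation):
    """Least-squares power-law slope gamma of c(lambda) ~ lambda**(-gamma),
    fit as a straight line in log-log space."""
    x = np.log(np.asarray(wavelengths, float))
    y = np.log(np.asarray(attenuation, float))
    return -np.polyfit(x, y, 1)[0]

def junge_psd_slope(gamma):
    """Idealized Junge-PSD approximation xi ~ gamma + 3 (a rule of thumb
    for power-law PSDs, assumed here for illustration)."""
    return gamma + 3.0
```

A steeper attenuation spectrum (larger γ) thus maps to a steeper PSD, i.e., a smaller average particle size, which is the correlation the abstract reports.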

  20. Yearly average performance of the principal solar collector types

    SciTech Connect

    Rabl, A.

    1981-01-01

    The results of hour-by-hour simulations for 26 meteorological stations are used to derive universal correlations for the yearly total energy that can be delivered by the principal solar collector types: flat plate, evacuated tubes, CPC, single- and dual-axis tracking collectors, and central receiver. The correlations are first- and second-order polynomials in yearly average insolation, latitude, and threshold (= heat loss/optical efficiency). With these correlations, the yearly collectible energy can be found by multiplying the coordinates of a single graph by the collector parameters, which reproduces the results of hour-by-hour simulations with an accuracy (rms error) of 2% for flat plates and 2% to 4% for concentrators. This method can be applied to collectors that operate year-round in such a way that no collected energy is discarded, including photovoltaic systems, solar-augmented industrial process heat systems, and solar thermal power systems. The method is also recommended for rating collectors of different types or manufacturers by yearly average performance, evaluating the effects of collector degradation, the benefits of collector cleaning, and the gains from collector improvements (due to enhanced optical efficiency or decreased heat loss per absorber surface). For most of these applications, the method is accurate enough to replace a system simulation.

  1. Cause of the exceptionally high AE average for 2003

    NASA Astrophysics Data System (ADS)

    Prestes, A.

    2012-04-01

    In this work we focus on the year 2003, when the AE index was extremely high (yearly average AE = 341 nT, with peak intensity of more than 2200 nT), a value almost 100 nT higher than in other years of solar cycle 23. Interplanetary magnetic field (IMF) and plasma data are compared with the geomagnetic AE and Dst indices to determine the causes of the exceptionally high AE average value. Analyzing the solar wind parameters, we found that the annual average speed was extremely high, approximately 542 km/s (peak value ~1074 km/s). These values were due to recurrent high-speed solar streams from large coronal holes, which stretch to the solar equator, and low-latitude coronal holes, which exist for many solar rotations. AE was found to increase with increasing solar wind speed and to decrease when the solar wind speed decreases. The cause of the high AE activity during 2003 is the presence of high-speed corotating streams that contain large-amplitude Alfvén waves throughout the streams, which resulted in a large number of HILDCAA events. When the solar wind plasma and field impinge on Earth's magnetosphere, the southward field turnings associated with the wave fluctuations cause magnetic reconnection and consequently high levels of AE activity and very long recovery phases in Dst, sometimes lasting until the next stream arrives.

  2. Predictive RANS simulations via Bayesian Model-Scenario Averaging

    SciTech Connect

    Edeling, W.N.; Cinnella, P.; Dwight, R.P.

    2014-10-15

    The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier–Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary layers subject to various pressure gradients. For all considered prediction scenarios the standard deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.

  3. Colorectal Cancer Screening in Average Risk Populations: Evidence Summary.

    PubMed

    Tinmouth, Jill; Vella, Emily T; Baxter, Nancy N; Dubé, Catherine; Gould, Michael; Hey, Amanda; Ismaila, Nofisat; McCurdy, Bronwen R; Paszat, Lawrence

    2016-01-01

    Introduction. The objectives of this systematic review were to evaluate the evidence for different CRC screening tests and to determine the most appropriate ages of initiation and cessation for CRC screening and the most appropriate screening intervals for selected CRC screening tests in people at average risk for CRC. Methods. Electronic databases were searched for studies that addressed the research objectives. Meta-analyses were conducted with clinically homogenous trials. A working group reviewed the evidence to develop conclusions. Results. Thirty RCTs and 29 observational studies were included. Flexible sigmoidoscopy (FS) prevented CRC and led to the largest reduction in CRC mortality with a smaller but significant reduction in CRC mortality with the use of guaiac fecal occult blood tests (gFOBTs). There was insufficient or low quality evidence to support the use of other screening tests, including colonoscopy, as well as changing the ages of initiation and cessation for CRC screening with gFOBTs in Ontario. Either annual or biennial screening using gFOBT reduces CRC-related mortality. Conclusion. The evidentiary base supports the use of FS or FOBT (either annual or biennial) to screen patients at average risk for CRC. This work will guide the development of the provincial CRC screening program. PMID:27597935

  4. Face averages enhance user recognition for smartphone security.

    PubMed

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings. PMID:25807251
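In its simplest form, the 'face-average' representation is a pixel-wise mean over aligned images of the same person. Practical systems additionally warp each image to a common shape before averaging the texture; that alignment step is omitted in this sketch:

```python
import numpy as np

def face_average(aligned_faces):
    """Pixel-wise mean of a stack of pre-aligned, same-size face images.
    Averaging many snapshots of one person washes out image-specific
    variation (lighting, pose, expression) while retaining identity --
    the intuition behind the face-average representation. Assumes the
    images are already registered to a common coordinate frame."""
    stack = np.stack([np.asarray(f, float) for f in aligned_faces])
    return stack.mean(axis=0)
```

Enrolling such an average instead of a single photo is what gave the reported gain in unlock reliability across viewing conditions.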

  5. Average and individual B hadron lifetimes at CDF

    SciTech Connect

    Schneider, O.; CDF Collaboration

    1993-09-01

    Bottom hadron lifetime measurements have been performed using B → J/ψ X, J/ψ → μ+μ− decays recorded with the Collider Detector at Fermilab (CDF) during the first half of the 1992-1993 Tevatron collider run. These decays have been reconstructed in a silicon vertex detector. Using 5344 ± 73 inclusive J/ψ events, the average lifetime of all bottom hadrons produced in 1.8 TeV pp̄ collisions and decaying into a J/ψ is found to be 1.46 ± 0.06(stat) ± 0.06(sys) ps. The charged and neutral B meson lifetimes have been measured separately using 75 ± 10 (charged) and 61 ± 9 (neutral) fully reconstructed decays; preliminary results are τ± = 1.63 ± 0.21(stat) ± 0.16(sys) ± 0.10(sys) ps, yielding a lifetime ratio of τ±/τ0 = 1.06 ± 0.20(stat) ± 0.12(sys).

  6. Vibrationally averaged dipole moments of methane and benzene isotopologues

    NASA Astrophysics Data System (ADS)

    Arapiraca, A. F. C.; Mohallem, J. R.

    2016-04-01

    DFT-B3LYP post-Born-Oppenheimer (finite-nuclear-mass-correction (FNMC)) calculations of vibrationally averaged isotopic dipole moments of methane and benzene, which compare well with experimental values, are reported. For methane, in addition to the principal vibrational contribution to the molecular asymmetry, FNMC accounts for the surprisingly large Born-Oppenheimer error of about 34% to the dipole moments. This unexpected result is explained in terms of concurrent electronic and vibrational contributions. The calculated dipole moment of C6H3D3 is about twice as large as the measured dipole moment of C6H5D. Computational progress is advanced concerning applications to larger systems and the choice of appropriate basis sets. The simpler procedure of performing vibrational averaging on the Born-Oppenheimer level and then adding the FNMC contribution evaluated at the equilibrium distance is shown to be appropriate. Also, the basis set choice is made by heuristic analysis of the physical behavior of the systems, instead of by comparison with experiments.

  7. Rainfall Estimation Over Tropical Oceans. 1; Area Average Rain Rate

    NASA Technical Reports Server (NTRS)

    Cuddapah, Prabhakara; Cadeddu, Maria; Meneghini, R.; Short, David A.; Yoo, Jung-Moon; Dalu, G.; Schols, J. L.; Weinman, J. A.

    1997-01-01

    Multichannel dual polarization microwave radiometer SSM/I observations over oceans do not contain sufficient information to differentiate quantitatively the rain from other hydrometeors on a scale comparable to the radiometer field of view (approx. 30 km). For this reason we have developed a method to retrieve average rain rate over a mesoscale grid box of approx. 300 x 300 sq km area over the TOGA COARE region where simultaneous radiometer and radar observations are available for four months (Nov. 92 to Feb. 93). The rain area in the grid box, inferred from the scattering depression due to hydrometeors in the 85 GHz brightness temperature, constitutes a key parameter in this method. Then the spectral and polarization information contained in all the channels of the SSM/I is utilized to deduce a second parameter. This is the ratio S/E of the scattering index S and the emission index E calculated from the SSM/I data. The rain rate retrieved from this method over the mesoscale area can reproduce the radar-observed rain rate with a correlation coefficient of about 0.85. Furthermore, monthly total rainfall estimated from this method over that area has an average error of about 15%.
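The S/E parameter can be illustrated with hypothetical index definitions: a scattering index S as the 85 GHz brightness-temperature depression below a clear-sky baseline, and an emission index E as low-frequency warming above its baseline. The functional forms below are illustrative stand-ins, not the paper's actual index definitions:

```python
def scattering_emission_ratio(tb85, tb_low, tb85_clear, tb_low_clear):
    """Illustrative S/E ratio from SSM/I-style brightness temperatures (K).
    S = depression of the 85 GHz channel below a clear-sky baseline
        (hydrometeor scattering cools the scene);
    E = warming of a low-frequency channel above its baseline
        (rain emission warms the scene over the radiometrically cold ocean).
    Both definitions are hypothetical simplifications for illustration."""
    s = max(tb85_clear - tb85, 0.0)
    e = max(tb_low - tb_low_clear, 0.0)
    return s / e if e > 0 else float("inf")
```

Combined with the rain-area fraction inferred from the 85 GHz depression, a ratio of this kind supplies the second parameter of the retrieval described above.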

  8. The visual system discounts emotional deviants when extracting average expression

    PubMed Central

    Haberman, Jason; Whitney, David

    2011-01-01

    There has been a recent surge in the study of ensemble coding, the idea that the visual system represents a set of similar items using summary statistics (Alvarez & Oliva, 2008; Ariely, 2001; Chong & Treisman, 2003; Parkes, Lund, Angelucci, Solomon, & Morgan, 2001). We previously demonstrated that this ability extends to faces and thus requires a high level of object processing (Haberman & Whitney, 2007, 2009). Recent debate has centered on the nature of the summary representation of size (e.g., Myczek & Simons, 2008) and whether the perceived average simply reflects the sampling of a very small subset of the items in a set. In the present study, we explored this further in the context of faces, asking observers to judge the average expressions of sets of faces containing emotional outliers. Our results suggest that the visual system implicitly and unintentionally discounts the emotional outliers, thereby computing a summary representation that encompasses the vast majority of the information present. Additional computational modeling and behavioral results reveal that an intentional, cognitive sampling strategy does not accurately capture observer performance. Observers derive precise ensemble information given a 250-msec exposure, suggesting a rapid and flexible system not bound by the limits of serial attention. PMID:20952781
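The implicit discounting of emotional outliers can be mimicked with a robust average that excludes items far from the median before averaging. This MAD-based rule is a simple computational stand-in for illustration, not the authors' model of ensemble coding:

```python
import numpy as np

def outlier_discounting_average(values, z=2.5):
    """Average that discounts deviant items: values more than z robust
    standard deviations (median absolute deviation scaled by 1.4826)
    from the median are excluded before taking the mean. The threshold
    z and the MAD rule are illustrative choices."""
    v = np.asarray(values, float)
    med = np.median(v)
    mad = np.median(np.abs(v - med))
    if mad == 0:
        return float(med)          # no spread: the median is the average
    keep = np.abs(v - med) / (1.4826 * mad) <= z
    return float(v[keep].mean())
```

As in the behavioral result, a single extreme item barely moves this summary, while the bulk of the set determines it.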

  9. Numerical Study of Fractional Ensemble Average Transport Equations

    NASA Astrophysics Data System (ADS)

    Kim, S.; Park, Y.; Gyeong, C. B.; Lee, O.

    2014-12-01

    In this presentation, a newly developed theory is applied to the case of stationary and non-stationary stochastic advective flow fields, and a numerical solution method is presented for the resulting fractional Fokker-Planck equation (fFPE), which describes the evolution of the probability density function (PDF) of contaminant concentration. The derived fFPE is evaluated in three different forms: 1) the purely advective form, 2) the second-order moment form, and 3) the second-order cumulant form. A Monte Carlo analysis of the fractional governing equation is then performed in a stochastic flow field, generated by a fractional Brownian motion for the stationary and non-stationary stochastic advection, in order to provide a benchmark for the results obtained from the fFPEs. When compared to the Monte Carlo simulation based PDFs and their ensemble average, the second-order cumulant form gives a good fit in terms of the shape and mode of the PDF of the contaminant concentration. Therefore, it is quite promising that the non-Fickian transport behavior can be modeled by the derived fractional ensemble average transport equations, either by means of the long memory in the underlying stochastic flow, by means of the time-space non-stationarity of the underlying stochastic flow, or by means of the time and space fractional derivatives of the transport equations. This work is supported by the Korea Ministry of Environment as "The Eco Innovation Project: Non-point source pollution control research group".
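A Monte Carlo benchmark of this kind needs sample paths of fractional Brownian motion, which for modest path lengths can be drawn exactly by a Cholesky factorization of the fBm covariance. A sketch assuming the standard covariance Cov(B_s, B_t) = ½(s^2H + t^2H − |t−s|^2H); the details of the paper's own flow generator are not reproduced here:

```python
import numpy as np

def fbm(n, hurst, seed=0):
    """Sample one fractional Brownian motion path at times t = 1..n via
    Cholesky factorization of the fBm covariance matrix. Exact but O(n^3),
    so suitable only for small benchmark problems. hurst in (0, 1);
    hurst > 0.5 gives the long-memory (persistent) increments that drive
    non-Fickian transport."""
    t = np.arange(1, n + 1, dtype=float)
    s, tt = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * hurst) + tt ** (2 * hurst)
                 - np.abs(tt - s) ** (2 * hurst))
    L = np.linalg.cholesky(cov)            # cov is positive definite for 0 < H < 1
    rng = np.random.default_rng(seed)
    return L @ rng.standard_normal(n)      # correlated Gaussian path
```

Faster O(n log n) generators (e.g., circulant embedding) exist for long paths; the Cholesky route is simply the easiest to verify.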

  10. A local average distance descriptor for flexible protein structure comparison

    PubMed Central

    2014-01-01

    Background. Protein structures are flexible and often show conformational changes upon binding to other molecules to exert biological functions. As protein structures correlate with characteristic functions, structure comparison allows classification and prediction of proteins of undefined functions. However, most comparison methods treat proteins as rigid bodies and cannot retrieve similarities of proteins with large conformational changes effectively. Results. In this paper, we propose a novel descriptor, local average distance (LAD), based on either the geodesic distances (GDs) or Euclidean distances (EDs) for pairwise flexible protein structure comparison. The proposed method was compared with 7 structural alignment methods and 7 shape descriptors on two datasets comprising hinge bending motions from the MolMovDB, and the results have shown that our method outperformed all other methods regarding retrieving similar structures in terms of precision-recall curve, retrieval success rate, R-precision, mean average precision and F1-measure. Conclusions. Both ED- and GD-based LAD descriptors are effective to search deformed structures and overcome the problems of self-connection caused by a large bending motion. We have also demonstrated that the ED-based LAD is more robust than the GD-based descriptor. The proposed algorithm provides an alternative approach for blasting structure database, discovering previously unknown conformational relationships, and reorganizing protein structure classification. PMID:24694083
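An ED-based local average distance can be sketched as the mean Euclidean distance from each point to its sequence neighbors within a window. The window size, and the substitution of simple coordinates for residue positions, are assumptions for illustration; the paper's exact definition (and its GD variant) may differ:

```python
import numpy as np

def local_average_distance(coords, window=5):
    """ED-based LAD sketch: for each point i, the mean Euclidean distance
    to the other points in the sequence neighborhood [i-window, i+window].
    Because the descriptor is built from local distances only, it changes
    little under a hinge bending that moves whole domains rigidly."""
    coords = np.asarray(coords, float)
    n = len(coords)
    desc = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        nbrs = np.delete(np.arange(lo, hi), i - lo)   # drop the point itself
        desc[i] = np.linalg.norm(coords[nbrs] - coords[i], axis=1).mean()
    return desc
```

Comparing two structures then reduces to comparing their descriptor profiles, which is what makes the approach tolerant of large conformational changes.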

  11. Colorectal Cancer Screening in Average Risk Populations: Evidence Summary

    PubMed Central

    Baxter, Nancy N.; Dubé, Catherine; Hey, Amanda

    2016-01-01

    Introduction. The objectives of this systematic review were to evaluate the evidence for different CRC screening tests and to determine the most appropriate ages of initiation and cessation for CRC screening and the most appropriate screening intervals for selected CRC screening tests in people at average risk for CRC. Methods. Electronic databases were searched for studies that addressed the research objectives. Meta-analyses were conducted with clinically homogenous trials. A working group reviewed the evidence to develop conclusions. Results. Thirty RCTs and 29 observational studies were included. Flexible sigmoidoscopy (FS) prevented CRC and led to the largest reduction in CRC mortality with a smaller but significant reduction in CRC mortality with the use of guaiac fecal occult blood tests (gFOBTs). There was insufficient or low quality evidence to support the use of other screening tests, including colonoscopy, as well as changing the ages of initiation and cessation for CRC screening with gFOBTs in Ontario. Either annual or biennial screening using gFOBT reduces CRC-related mortality. Conclusion. The evidentiary base supports the use of FS or FOBT (either annual or biennial) to screen patients at average risk for CRC. This work will guide the development of the provincial CRC screening program. PMID:27597935

  12. On averaging multiview relations for 3D scan registration.

    PubMed

    Govindu, Venu Madhav; Pooja, A

    2014-03-01

    In this paper, we present an extension of the iterative closest point (ICP) algorithm that simultaneously registers multiple 3D scans. While ICP fails to utilize the multiview constraints available, our method exploits the information redundancy in a set of 3D scans by using the averaging of relative motions. This averaging method utilizes the Lie group structure of motions, resulting in a 3D registration method that is both efficient and accurate. In addition, we present two variants of our approach, i.e., a method that solves for multiview 3D registration while obeying causality and a transitive correspondence variant that efficiently solves the correspondence problem across multiple scans. We present experimental results to characterize our method and explain its behavior as well as those of some other multiview registration methods in the literature. We establish the superior accuracy of our method in comparison to these multiview methods with registration results on a set of well-known real datasets of 3D scans. PMID:23412615

  13. Probabilistic climate change predictions applying Bayesian model averaging.

    PubMed

    Min, Seung-Ki; Simonis, Daniel; Hense, Andreas

    2007-08-15

    This study explores the sensitivity of probabilistic predictions of twenty-first century surface air temperature (SAT) changes to different multi-model averaging methods using available simulations from the Intergovernmental Panel on Climate Change fourth assessment report. A way of observationally constrained prediction is provided by training multi-model simulations for the second half of the twentieth century with respect to long-term components. Bayesian model averaging (BMA) produces weighted probability density functions (PDFs), and we compare two methods of estimating the weighting factors: the Bayes factor and the expectation-maximization algorithm. It is shown that Bayesian-weighted PDFs for the global mean SAT changes are characterized by multi-modal structures from the middle of the twenty-first century onward, which are not clearly seen in the arithmetic ensemble mean (AEM). This occurs because BMA tends to select a few high-skill models and down-weight the others. Additionally, Bayesian results exhibit larger means and broader PDFs in the global mean predictions than the unweighted AEM. Multi-modality is more pronounced in the continental analysis using 30-year mean (2070-2099) SATs, while Bayesian weighting has only a small effect on the 5-95% range. These results indicate that this approach to observationally constrained probabilistic predictions can be highly sensitive to the method of training, particularly for the latter half of the twenty-first century, and that a more comprehensive approach combining different regions and/or variables is required. PMID:17569647
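
    The BMA predictive PDF described here is simply a weighted mixture of the members' PDFs; with two high-weight members and one down-weighted member the mixture becomes bimodal, while the arithmetic ensemble mean collapses to a single number. A small sketch assuming Gaussian member PDFs with made-up means, spreads, and weights:

```python
import numpy as np

def bma_pdf(x, means, sigmas, weights):
    """BMA predictive PDF: weighted average of Gaussian member PDFs."""
    x = np.asarray(x)[:, None]
    comp = np.exp(-0.5 * ((x - means) / sigmas) ** 2) / (sigmas * np.sqrt(2.0 * np.pi))
    return comp @ weights

means = np.array([1.5, 3.5, 2.5])       # member forecasts (illustrative)
sigmas = np.array([0.3, 0.3, 1.0])
weights = np.array([0.45, 0.45, 0.10])  # posterior model probabilities, sum to 1

grid = np.linspace(-2.0, 7.0, 2001)
pdf = bma_pdf(grid, means, sigmas, weights)  # bimodal: two high-weight members
aem = float(means.mean())                    # arithmetic ensemble mean: one number
```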

  14. The partially averaged field approach to cosmic ray diffusion

    NASA Technical Reports Server (NTRS)

    Jones, F. C.; Birmingham, T. J.; Kaiser, T. B.

    1976-01-01

    The kinetic equation for particles interacting with turbulent fluctuations is derived by a new nonlinear technique which successfully corrects the difficulties associated with quasilinear theory. In this new method the effects of the fluctuations are evaluated along particle orbits which themselves include the effects of a statistically averaged subset of the possible configurations of the turbulence. The new method is illustrated by calculating the pitch angle diffusion coefficient D_μμ for particles interacting with slab model magnetic turbulence, i.e., magnetic fluctuations linearly polarized transverse to a mean magnetic field. Results are compared with those of quasilinear theory and also with those of Monte Carlo calculations. The major effect of the nonlinear treatment in this illustration is the determination of D_μμ in the vicinity of 90 deg pitch angles where quasilinear theory breaks down. The spatial diffusion coefficient parallel to a mean magnetic field is evaluated using D_μμ as calculated by this technique. It is argued that the partially averaged field method is not limited to small amplitude fluctuating fields and is hence not a perturbation theory.

  15. Loss of lifetime due to radiation exposure-averaging problems.

    PubMed

    Raicević, J J; Merkle, M; Ehrhardt, J; Ninković, M M

    1997-04-01

    A new method is presented for assessing the years of life lost (YLL) due to stochastic effects caused by exposure to ionizing radiation. The widely accepted method in the literature uses a ratio of means of two quantities, in effect defining the loss of life as a derived quantity. We instead start from the genuinely stochastic nature of the quantity YLL, which enables us to obtain its mean values in a consistent way, using standard averaging procedures based on the corresponding joint probability density functions. Our method is mathematically different and produces lower values of average YLL. In this paper we also find certain similarities with the concept of loss of life expectancy among exposure-induced deaths (LLE-EID), which is adopted in the recently published UNSCEAR report, where the same quantity is defined as years of life lost per radiation-induced case (YLC). Using the same database, the YLL and the LLE-EID are calculated and compared for the simplest exposure case, a discrete exposure at age a. It is found that LLE-EID overestimates the YLL, and that the magnitude of this overestimation can exceed 15%, depending on the effect under consideration. PMID:9119679
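
    The paper's central point, that a ratio of means is not the same as the mean of a stochastic ratio, can be seen in two lines of numpy (the numbers are purely illustrative):

```python
import numpy as np

# Paired samples of two stochastic quantities (invented values), e.g.
# collective years of life lost (x) and number of induced cases (y)
x = np.array([1.0, 4.0])
y = np.array([1.0, 2.0])

ratio_of_means = float(x.mean() / y.mean())  # "derived quantity" definition
mean_of_ratios = float((x / y).mean())       # averaging the stochastic ratio itself
# 2.5/1.5 ≈ 1.667 versus 1.5: the two orders of averaging do not commute
```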

  16. A high average power electro-optic switch using KTP

    SciTech Connect

    Ebbers, C.A.; Cook, W.M.; Velsko, S.P.

    1994-04-01

    High damage threshold, high thermal conductivity, and small thermo-optic coefficients make KTiOPO4 (KTP) an attractive material for use in a high average power Q-switch. However, electro-chromic damage and refractive index inhomogeneity have prevented the use of KTP in such a device in the past. This work shows that electro-chromic damage is effectively suppressed using capacitive coupling, and a KTP crystal can be Q-switched for 1.5 × 10^9 shots without any detectable electro-chromic damage. In addition, KTP with the high uniformity and large aperture size needed for a KTP electro-optic Q-switch can be obtained from flux crystals grown at constant temperature. A thermally compensated, dual-crystal KTP Q-switch, which successfully produced 50 mJ pulses with a pulse width of 8 ns (FWHM), has been constructed. In addition, in off-line testing the Q-switch showed less than 7% depolarization at an average power loading of 3.2 kW/cm^2.

  17. Phase-averaged measurements of perturbations introduced into boundary layers

    NASA Technical Reports Server (NTRS)

    Watmuff, Jonathan H.

    1991-01-01

    Large-scale structures in turbulent and transitional wall-bounded flows make a significant contribution to the Reynolds stress and turbulent energy. The behavior of these structures is examined. Small perturbations are introduced into a laminar and a turbulent boundary layer to trigger the formation of large-scale features. Both flows use the same inlet unit Reynolds number, and they experience the same pressure gradient history, i.e. a favorable pressure gradient (FPG) followed by an adverse pressure gradient (APG). The perturbation consists of a small, short-duration flow repetitively introduced through a hole in the wall located at the C_p minimum. Hot-wire data are averaged on the basis of the phase of the disturbance, and automation of the experiment was used to obtain measurements on large spatially dense grids. In the turbulent boundary layer, the perturbation evolves into a vortex loop which retains its identity for a considerable streamwise distance. In the laminar layer, the perturbation decays to a very small magnitude before growing rapidly and triggering the transition process in the APG. The 'time-like' animations of the phase-averaged data are used to gain insight into the naturally occurring physical mechanisms in each flow.

  18. Interpreting multiple risk scales for sex offenders: evidence for averaging.

    PubMed

    Lehmann, Robert J B; Hanson, R Karl; Babchishin, Kelly M; Gallasch-Nemitz, Franziska; Biedermann, Jürgen; Dahle, Klaus-Peter

    2013-09-01

    This study tested 3 decision rules for combining actuarial risk instruments for sex offenders into an overall evaluation of risk. Based on a 9-year follow-up of 940 adult male sex offenders, we found that Rapid Risk Assessment for Sex Offender Recidivism (RRASOR), Static-99R, and Static-2002R predicted sexual, violent, and general recidivism and provided incremental information for the prediction of all 3 outcomes. Consistent with previous findings, the incremental effect of RRASOR was positive for sexual recidivism but negative for violent and general recidivism. Averaging risk ratios was a promising approach to combining these risk scales, showing good calibration between predicted (E) and observed (O) recidivism rates (E/O index = 0.93, 95% CI [0.79, 1.09]) and good discrimination (area under the curve = 0.73, 95% CI [0.69, 0.77]) for sexual recidivism. As expected, choosing the lowest (least risky) risk tool resulted in underestimated sexual recidivism rates (E/O = 0.67, 95% CI [0.57, 0.79]) and choosing the highest (riskiest) resulted in overestimated risk (E/O = 1.37, 95% CI [1.17, 1.60]). For the prediction of violent and general recidivism, the combination rules provided similar or lower discrimination compared with relying solely on the Static-99R or Static-2002R. The current results support an averaging approach and underscore the importance of understanding the constructs assessed by violence risk measures. PMID:23730829
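
    The averaging rule studied here can be sketched in a few lines: convert each instrument's score to a risk ratio against the base rate, average the ratios, and multiply back to obtain the expected (E) rate. The base rate and risk ratios below are invented for illustration, not values from the study:

```python
import numpy as np

base_rate = 0.10                 # assumed base recidivism rate (illustrative)
rr = np.array([1.8, 2.4, 2.1])   # hypothetical risk ratios from three scales

combined = float(rr.mean())          # averaging rule
predicted = base_rate * combined     # expected (E) recidivism probability

lowest = base_rate * float(rr.min())   # "least risky" rule -> underestimates
highest = base_rate * float(rr.max())  # "riskiest" rule -> overestimates
# An E/O index would then compare such predicted rates with observed ones
```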

  19. Two-Stage Bayesian Model Averaging in Endogenous Variable Models.

    PubMed

    Lenkoski, Alex; Eicher, Theo S; Raftery, Adrian E

    2014-01-01

    Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471
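
    2SBMA builds on the classic 2SLS estimator. A self-contained simulation of plain 2SLS (not the Bayesian extension) shows how instrumenting removes the endogeneity bias that OLS suffers; all coefficients and the data-generating process here are invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)                  # instrument: moves x, unrelated to u
u = rng.normal(size=n)                  # unobserved confounder -> endogeneity
x = 0.8 * z + u + rng.normal(size=n)    # endogenous regressor
y = 2.0 * x + u + rng.normal(size=n)    # structural equation, true effect = 2.0

# Stage 1: project x on the instrument; Stage 2: regress y on the fitted values
x_hat = z * (z @ x) / (z @ z)
beta_2sls = float((x_hat @ y) / (x_hat @ x_hat))

beta_ols = float((x @ y) / (x @ x))     # biased upward by the confounder u
```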

  20. Vibrationally averaged dipole moments of methane and benzene isotopologues.

    PubMed

    Arapiraca, A F C; Mohallem, J R

    2016-04-14

    DFT-B3LYP post-Born-Oppenheimer (finite-nuclear-mass-correction (FNMC)) calculations of vibrationally averaged isotopic dipole moments of methane and benzene, which compare well with experimental values, are reported. For methane, in addition to the principal vibrational contribution to the molecular asymmetry, FNMC accounts for the surprisingly large Born-Oppenheimer error of about 34% to the dipole moments. This unexpected result is explained in terms of concurrent electronic and vibrational contributions. The calculated dipole moment of C6H3D3 is about twice as large as the measured dipole moment of C6H5D. Computational progress is advanced concerning applications to larger systems and the choice of appropriate basis sets. The simpler procedure of performing vibrational averaging on the Born-Oppenheimer level and then adding the FNMC contribution evaluated at the equilibrium distance is shown to be appropriate. Also, the basis set choice is made by heuristic analysis of the physical behavior of the systems, instead of by comparison with experiments. PMID:27083715

  1. Using Bayes Model Averaging for Wind Power Forecasts

    NASA Astrophysics Data System (ADS)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas are often of interest. A naive approach is to make forecasts for each individual site and sum them to obtain the group forecast. It is, however, well documented that a better choice is a model that also takes advantage of spatial smoothing effects. It may also be the case that some sites tend to reflect the total output of the region more accurately, either in general or for certain wind directions, and it is then of interest to give these sites greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect each member's contribution to overall forecasting skill over a training period. In Revheim and Beyer [2], the BMA procedure of Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power, it resulted either in problems with parameter estimation (mainly caused by longer consecutive periods of no power production) or in severe underestimation (mainly caused by difficulty reflecting the power curve). In this paper these problems are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. 
This solves the problem with longer consecutive periods where the input data

  2. Ultra-low noise miniaturized neural amplifier with hardware averaging

    NASA Astrophysics Data System (ADS)

    Dweiri, Yazan M.; Eggers, Thomas; McCallum, Grant; Durand, Dominique M.

    2015-08-01

    Objective. Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small (<3 μVrms, 700 Hz-7 kHz), requiring a baseline noise below 1 μVrms for a useful signal-to-noise ratio (SNR). Flat interface nerve electrode (FINE) contacts alone generate thermal noise of at least 0.5 μVrms; the amplifier should therefore add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Approach. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. Main results. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating <1.5 μVrms total recording baseline noise when connected to a FINE placed on the sciatic nerve of an awake animal. An algorithm was introduced to find the value of N that minimizes both the power consumption and the noise in order to design a miniaturized ultralow-noise neural amplifier. Significance. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes, and can accommodate the
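
    The 1/√N noise reduction from hardware averaging is easy to reproduce numerically: averaging the outputs of N amplifiers with independent input-referred noise divides the noise standard deviation by √N (the ideal case; the paper reports smaller gains depending on source resistance). The noise figure below is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 200_000
sigma_amp = 2.0   # per-amplifier input-referred noise, μVrms (illustrative)

stds = {}
for N in (1, 2, 4, 8):
    # N parallel amplifiers see the same input; their independent noises
    # average out while the signal does not
    noise = rng.normal(0.0, sigma_amp, size=(N, n_samples)).mean(axis=0)
    stds[N] = float(noise.std())
# stds[N] ≈ sigma_amp / sqrt(N)
```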

  3. Application Bayesian Model Averaging method for ensemble system for Poland

    NASA Astrophysics Data System (ADS)

    Guzikowski, Jakub; Czerwinska, Agnieszka

    2014-05-01

    The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from The Weather Research & Forecasting (WRF) Model and calibrating these data by means of a Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution short-range ensemble forecasts of temperature generated by nine WRF model configurations, each with 35 vertical levels and 2.5 km x 2.5 km horizontal resolution. The key feature is that the ensemble members use different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive probability density function (PDF) is a weighted average of the predictive PDFs associated with the individual ensemble members, with weights that reflect each member's relative skill. As a test case we chose a period of heat-wave and convective weather conditions over Poland from 23 July to 1 August 2013. From 23 to 29 July 2013, temperatures fluctuated around 30 degrees Celsius at many meteorological stations and new temperature records were set. During this time an increase in hospitalizations for cardiovascular problems was recorded. On 29 July 2013 an advection of moist tropical air masses over Poland caused a strong convective event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to transport infrastructure, destroyed buildings and trees, and produced injuries and a direct threat to life. The meteorological data from the ensemble system are compared with data recorded at 74 weather stations in Poland. We prepare a set of model-observation pairs, and the data from the single ensemble members and the median of the WRF BMA system are then evaluated using the deterministic error statistics root mean square error (RMSE) and mean absolute error (MAE). 
To evaluation
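
    The two deterministic verification scores named in the abstract, RMSE and MAE, can be computed directly; the station values below are invented for illustration:

```python
import numpy as np

def rmse(forecast, obs):
    """Root mean square error between forecast and observations."""
    return float(np.sqrt(np.mean((forecast - obs) ** 2)))

def mae(forecast, obs):
    """Mean absolute error between forecast and observations."""
    return float(np.mean(np.abs(forecast - obs)))

obs = np.array([30.1, 31.4, 32.0, 29.5])  # station temperatures, °C (invented)
fc = np.array([29.0, 31.0, 33.0, 30.0])   # ensemble-median forecast (invented)
# mae(fc, obs) = 0.75; rmse(fc, obs) ≈ 0.81
```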

  4. Simple average expression for shear-stress relaxation modulus

    NASA Astrophysics Data System (ADS)

    Wittmer, J. P.; Xu, H.; Baschnagel, J.

    2016-01-01

    Focusing on isotropic elastic networks we propose a simple-average expression G(t) = μ_A - h(t) for the computational determination of the shear-stress relaxation modulus G(t) of a classical elastic solid or fluid. Here, μ_A = G(0) characterizes the shear transformation of the system at t = 0 and h(t) the (rescaled) mean-square displacement of the instantaneous shear stress τ̂(t) as a function of time t. We discuss sampling time and ensemble effects and emphasize possible pitfalls of alternative expressions using the shear-stress autocorrelation function. We argue finally that our key relation may be readily adapted for more general linear response functions.
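
    A sketch of this estimator, assuming h(t) takes the form (βV/2)⟨[τ̂(t₀+t) - τ̂(t₀)]²⟩ averaged over time origins t₀ (the prefactor convention is an assumption here, not stated in the abstract):

```python
import numpy as np

def relaxation_modulus(tau, mu_A, beta_V):
    """G(t) = mu_A - h(t), with h(t) = (beta_V / 2) * <(tau(t0+t) - tau(t0))^2>
    averaged over all available time origins t0 (assumed prefactor)."""
    T = len(tau)
    G = np.empty(T)
    for t in range(T):
        d = tau[t:] - tau[:T - t]
        G[t] = mu_A - 0.5 * beta_V * np.mean(d * d)
    return G

# Sanity check: a perfectly static stress gives h(t) = 0, so G(t) = mu_A
G = relaxation_modulus(np.ones(16), mu_A=5.0, beta_V=1.0)
```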

  5. Higher in status, (Even) better-than-average

    PubMed Central

    Varnum, Michael E. W.

    2015-01-01

    In 5 studies (total N = 1357) conducted online using Amazon's MTurk, the relationship between socioeconomic status (SES) and the better-than-average effect (BTAE) was tested. Across the studies, subjective measures of SES were positively correlated with the magnitude of the BTAE. Effects of objective measures (income and education) were weaker and less consistent. Measures of childhood SES (both objective and subjective) were positively correlated with BTAE magnitude, though less strongly and less consistently than measures of current subjective SES. Meta-analysis revealed that all measures of chronic SES (with the exception of education) were significantly correlated with the BTAE. However, manipulations of SES in terms of subjective status (Study 2), power (Study 3), and dominance (Study 4) did not have strong effects on BTAE magnitude (d's ranging from −0.04 to −0.14). Taken together, the results suggest that chronic, but not temporary, status may be linked with a stronger tendency to overestimate one's abilities and positive traits. PMID:25972824

  6. Highly flexible ultrafast laser system with 260W average power

    NASA Astrophysics Data System (ADS)

    Mans, Tl; Dolkemeyer, Jan; Russbüldt, P.; Schnitzler, Claus

    2011-02-01

    A flexible ultrafast laser amplifier system based on Ytterbium Innoslab technology with an average power exceeding 200W is presented. The pulse duration of the system can be continuously tuned between 500fs and 6ps, limited only by the amplification bandwidth of Yb:YAG and the stretcher of the seed source. The repetition rate can be varied from 26.6MHz down to 1MHz. For the ps-regime more than 200μJ and for the fs-regime more than 50μJ are demonstrated without the need for temporal compression of the high-power beam after the amplifier. The spectral bandwidth is close to the transform limit of the shortest measured pulses. Beam quality is measured to be near the diffraction limit (M2<1.3).

  7. A high-average-power FEL for industrial applications

    SciTech Connect

    Dylla, H.F.; Benson, S.; Bisognano, J.

    1995-12-31

    CEBAF has developed a comprehensive conceptual design of an industrial user facility based on a kilowatt UV (150-1000 nm) and IR (2-25 micron) FEL driven by a recirculating, energy-recovering 200 MeV superconducting radio-frequency (SRF) accelerator. FEL users (CEBAF's partners in the Laser Processing Consortium, including AT&T, DuPont, IBM, Northrop-Grumman, 3M, and Xerox) plan to develop applications such as polymer surface processing, metals and ceramics micromachining, and metal surface processing, with the overall effort leading to later scale-up to industrial systems at 50-100 kW. Representative applications are described. The proposed high-average-power FEL overcomes limitations of conventional laser sources in available power, cost-effectiveness, tunability, and pulse structure. 4 refs., 3 figs., 2 tabs.

  8. Auto-Parametric Resonance in Cylindrical Shells Using Geometric Averaging

    NASA Astrophysics Data System (ADS)

    MCROBIE, F. A.; POPOV, A. A.; THOMPSON, J. M. T.

    1999-10-01

    A study is presented of internal auto-parametric instabilities in the free non-linear vibrations of a cylindrical shell, focussed on two modes (a concertina mode and a chequerboard mode) whose non-linear interaction breaks the in-out symmetry of the linear vibration theory: the two-mode interaction leads to preferred vibration patterns with larger deflection inwards than outwards, and at internal resonance, significant energy transfer occurs between the modes. A Rayleigh-Ritz discretization of the von Kármán-Donnell equations leads to the Hamiltonian, and transformation into action-angle co-ordinates followed by averaging readily provides a geometric description of the modal interaction. It was established that the interaction should be most pronounced when there are slightly fewer than 2√N square chequerboard panels circumferentially, where N is the ratio of shell radius to thickness.

  9. Average System Cost Methodology : Administrator's Record of Decision.

    SciTech Connect

    United States. Bonneville Power Administration.

    1984-06-01

    Significant features of the average system cost (ASC) methodology adopted are: retention of the jurisdictional approach, where retail rate orders of regulatory agencies provide the primary data for computing the ASC for utilities participating in the residential exchange; inclusion of transmission costs; exclusion of construction work in progress; use of a utility's weighted cost of debt securities; exclusion of income taxes; simplification of procedures for separating subsidized generation and transmission accounts from other accounts; clarification of ASC methodology rules; a more generous review timetable for individual filings; phase-in of the reformed methodology; and a requirement that each exchanging utility file under the new methodology within 20 days of implementation by the Federal Energy Regulatory Commission. Of the ten major participating utilities, the revised ASC will substantially affect only three. (PSB)

  10. Voter dynamics on an adaptive network with finite average connectivity

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Abhishek; Schmittmann, Beate

    2009-03-01

    We study a simple model for voter dynamics in a two-party system. The opinion formation process is implemented in a random network of agents in which interactions are not restricted by geographical distance. In addition, we incorporate the rapidly changing nature of the interpersonal relations in the model. At each time step, agents can update their relationships, so that there is no history dependence in the model. This update is determined by their own opinion, and by their preference to make connections with individuals sharing the same opinion and with opponents. Using simulations and analytic arguments, we determine the final steady states and the relaxation into these states for different system sizes. In contrast to earlier studies, the average connectivity ("degree") of each agent is constant here, independent of the system size. This has significant consequences for the long-time behavior of the model.

  11. Moving average rules as a source of market instability

    NASA Astrophysics Data System (ADS)

    Chiarella, Carl; He, Xue-Zhong; Hommes, Cars

    2006-10-01

    Despite the pervasiveness of the efficient markets paradigm in the academic finance literature, the use of various moving average (MA) trading rules remains popular with financial market practitioners. This paper proposes a stochastic dynamic financial market model in which demand for traded assets has both a fundamentalist and a chartist component. The chartist demand is governed by the difference between current price and a (long-run) MA. Our simulations show that the MA is a source of market instability, and the interaction of the MA and market noises can lead to the tendency for the market price to take long excursions away from the fundamental. The model reveals various market price phenomena, the coexistence of apparent market efficiency and a large chartist component, price resistance levels, long memory and skewness and kurtosis of returns.
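
    The chartist component described above can be sketched as demand proportional to the gap between the current price and a trailing moving average; the functional form and the strength parameter below are illustrative, not the paper's calibrated model:

```python
import numpy as np

def chartist_demand(prices, window, strength=1.0):
    """Chartist demand proportional to current price minus its trailing
    moving average (illustrative functional form)."""
    prices = np.asarray(prices, dtype=float)
    ma = np.convolve(prices, np.ones(window) / window, mode="valid")
    return strength * (prices[window - 1:] - ma)

demand = chartist_demand([10.0, 10.0, 10.0, 13.0], window=3)
# 3-period MAs are [10.0, 11.0]; demand is [0.0, 2.0]: price above MA -> buy
```

    In the paper's model this demand is coupled with a fundamentalist component and noise, which is what produces the long excursions away from the fundamental.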

  12. Simple average expression for shear-stress relaxation modulus.

    PubMed

    Wittmer, J P; Xu, H; Baschnagel, J

    2016-01-01

    Focusing on isotropic elastic networks we propose a simple-average expression G(t) = μ_A - h(t) for the computational determination of the shear-stress relaxation modulus G(t) of a classical elastic solid or fluid. Here, μ_A = G(0) characterizes the shear transformation of the system at t = 0 and h(t) the (rescaled) mean-square displacement of the instantaneous shear stress τ̂(t) as a function of time t. We discuss sampling time and ensemble effects and emphasize possible pitfalls of alternative expressions using the shear-stress autocorrelation function. We argue finally that our key relation may be readily adapted for more general linear response functions. PMID:26871020

  13. Average electric field behavior in the ionosphere above Arecibo

    NASA Technical Reports Server (NTRS)

    Ganguly, Suman; Behnke, Richard A.; Emery, Barbara A.

    1987-01-01

    Plasma drift measurements taken at Arecibo during the solar minimum period of 1974-1977 are examined to determine their average behavior in the E, F1, and F regions. The drifts are generally diurnal in the E region and semidiurnal in the F1 region. These lower thermospheric drifts are set up by polarization fields generated by propagating and in situ atmospheric tides. In the F region the diurnal component is more pronounced, especially in the zonal direction. The magnitude of the drifts is of the order of 25-30 m/s (or 1 mV/m). Enhanced geomagnetic activity appears to increase the westward component of the drift in agreement with the theory of the ionospheric disturbance dynamo (Blanc and Richmond, 1980). Nighttime drifts appear to be at least partly explained in terms of polarization fields.

  14. Combining remotely sensed and other measurements for hydrologic areal averages

    NASA Technical Reports Server (NTRS)

    Johnson, E. R.; Peck, E. L.; Keefer, T. N.

    1982-01-01

    A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.
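
    In spirit, the correlation area method reduces to a weighted mean whose weights are proportional to the basin area each measurement best represents; deriving those areas from the spatial statistics of the variable is the substance of the method and is not reproduced here. A toy sketch with invented values:

```python
import numpy as np

def areal_average(values, rep_areas):
    """Weighted mean areal value; weights proportional to the basin area each
    measurement best represents (the real method derives these areas from the
    spatial statistics of the hydrologic variable)."""
    w = np.asarray(rep_areas, dtype=float)
    return float(np.dot(w / w.sum(), values))

# Gauge, flight-line, and satellite estimates of snow water equivalent (mm)
values = [120.0, 100.0, 90.0]   # hypothetical point, line, and areal measurements
areas = [10.0, 30.0, 60.0]      # km^2 best represented by each measurement
# areal_average(values, areas) -> 0.1*120 + 0.3*100 + 0.6*90 = 96.0
```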

  15. A vertically averaged spectral model for tidal circulation in estuaries

    USGS Publications Warehouse

    Burau, J.R.; Cheng, R.T.

    1989-01-01

    A frequency dependent computer model based on the two-dimensional vertically averaged shallow-water equations is described for general purpose application in tidally dominated embayments. This model simulates the response of both tides and tidal currents to user-specified geometries and boundary conditions. The mathematical formulation and practical application of the model are discussed in detail. Salient features of the model include the ability to specify: (1) stage at the open boundaries as well as within the model grid, (2) velocities on open boundaries (river inflows and so forth), (3) spatially variable wind stress, and (4) spatially variable bottom friction. Using harmonically analyzed field data as boundary conditions, this model can be used to make real time predictions of tides and tidal currents. (USGS)

  16. Average annual precipitation and runoff for Arkansas, 1951-1980

    USGS Publications Warehouse

    Freiwald, David A.

    1984-01-01

    Ten intercomparison studies to determine the accuracy of pH and specific-conductance measurements, using dilute-nitric acid solutions, were managed by the U.S. Geological Survey for the National Atmospheric Deposition Program and the National Trends Network precipitation networks. These precipitation networks set quality-control goals for site-operator measurements of pH and specific conductance. The accuracy goal for pH is plus or minus 0.1 pH unit; the accuracy goal for specific conductance is plus or minus 4 microsiemens per centimeter at 25 degrees Celsius. These intercomparison studies indicated that an average of 65 percent of the site-operator pH measurements and 79 percent of the site-operator specific-conductance measurements met the quality-control goal. A statistical approach that is resistant to outliers was used to evaluate and illustrate the results obtained from these intercomparisons. (USGS)

  17. Data Point Averaging for Computational Fluid Dynamics Data

    NASA Technical Reports Server (NTRS)

    Norman, Jr., David (Inventor)

    2016-01-01

    A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
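
    The averaging step described in the patent abstract reduces to grouping CFD points by sub-area and taking the mean of each group; a minimal sketch with hypothetical heat-flux values:

```python
import numpy as np

def sub_area_averages(values, assign):
    """Average the CFD values of the points assigned to each sub-area."""
    values = np.asarray(values, dtype=float)
    assign = np.asarray(assign)
    return {int(a): float(values[assign == a].mean()) for a in np.unique(assign)}

flux = [300.0, 310.0, 500.0, 520.0]   # hypothetical per-point heat-flux samples
area_id = [0, 0, 1, 1]                # sub-area each surface point falls in
# sub_area_averages(flux, area_id) -> {0: 305.0, 1: 510.0}
```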

  18. Improving Protein Expression Prediction Using Extra Features and Ensemble Averaging

    PubMed Central

    Fernandes, Armando; Vinga, Susana

    2016-01-01

    The focus of this article is the improvement of machine learning models capable of predicting protein expression levels from their codon encoding. Support vector regression (SVR) and partial least squares (PLS) were used to create the models, with SVR yielding predictions that surpass those of PLS. It is shown that the models' predictive ability can be improved by using two additional input features, codon identification number and codon count, alongside the previously used codon bias and minimum free energy. In addition, applying ensemble averaging to the SVR or PLS models improves the results even further. The present work motivates the testing of different ensembles and features with the aim of improving prediction models whose correlation coefficients are still far from perfect. These results are relevant for the optimization of codon usage and the enhancement of protein expression levels in synthetic biology problems. PMID:26934190
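    The ensemble-averaging step can be illustrated generically: train several regressors on bootstrap resamples and average their predictions. A sketch with a toy linear dataset standing in for expression levels (hypothetical data and simple polynomial fits, not the article's SVR/PLS models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy linear relation standing in for expression levels
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.3, size=x.size)

# Train an ensemble of simple regressors on bootstrap resamples
n_models = 20
preds = []
for _ in range(n_models):
    idx = rng.integers(0, x.size, size=x.size)   # bootstrap sample
    coef = np.polyfit(x[idx], y[idx], deg=1)     # fit one ensemble member
    preds.append(np.polyval(coef, x))

ensemble_pred = np.mean(preds, axis=0)           # ensemble averaging

# Compare one member and the ensemble against the noise-free truth
single_err = np.mean((preds[0] - (2 * x + 1)) ** 2)
ens_err = np.mean((ensemble_pred - (2 * x + 1)) ** 2)
```

Averaging tends to cancel the member-to-member variance introduced by resampling, which is the effect the article exploits.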

  19. Data Point Averaging for Computational Fluid Dynamics Data

    NASA Technical Reports Server (NTRS)

    Norman, David, Jr. (Inventor)

    2014-01-01

    A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.

  20. STREMR: Numerical model for depth-averaged incompressible flow

    NASA Astrophysics Data System (ADS)

    Roberts, Bernard

    1993-09-01

    The STREMR computer code is a two-dimensional model for depth-averaged incompressible flow. It accommodates irregular boundaries and nonuniform bathymetry, and it includes empirical corrections for turbulence and secondary flow. Although STREMR uses a rigid-lid surface approximation, the resulting pressure is equivalent to the displacement of a free surface. Thus, the code can be used to model free-surface flow wherever the local Froude number is 0.5 or less. STREMR uses a finite-volume scheme to discretize and solve the governing equations for primary flow, secondary flow, and turbulence energy and dissipation rate. The turbulence equations are taken from the standard k-Epsilon turbulence model, and the equation for secondary flow is developed herein. Appendices to this report summarize the principal equations, as well as the procedures used for their discrete solution.

  1. A Multichannel Averaging Phasemeter for Picometer Precision Laser Metrology

    NASA Technical Reports Server (NTRS)

    Halverson, Peter G.; Johnson, Donald R.; Kuhnert, Andreas; Shaklan, Stuart B.; Sero, Robert

    1999-01-01

    The Micro-Arcsecond Metrology (MAM) team at the Jet Propulsion Laboratory has developed a precision phasemeter for the Space Interferometry Mission (SIM). The current version of the phasemeter is well-suited for picometer accuracy distance measurements and tracks at speeds up to 50 cm/sec, when coupled to SIM's 1.3 micron wavelength heterodyne laser metrology gauges. Since the phasemeter is implemented with industry standard FPGA chips, other accuracy/speed trade-off points can be programmed for applications such as metrology for earth-based long-baseline astronomical interferometry (planet finding), and industrial applications such as translation stage and machine tool positioning. The phasemeter is a standard VME module, supports 6 metrology gauges, a 128 MHz clock, has programmable hardware averaging, and a maximum range of 2^32 cycles (2000 meters at 1.3 microns).

  2. REVISITING THE SOLAR TACHOCLINE: AVERAGE PROPERTIES AND TEMPORAL VARIATIONS

    SciTech Connect

    Antia, H. M.; Basu, Sarbani E-mail: sarbani.basu@yale.edu

    2011-07-10

    The tachocline is believed to be the region where the solar dynamo operates. With over a solar cycle's worth of data available from the Michelson Doppler Imager and Global Oscillation Network Group instruments, we are in a position to investigate not merely the average structure of the solar tachocline, but also its time variations. We determine the properties of the tachocline as a function of time by fitting a two-dimensional model that takes latitudinal variations of the tachocline properties into account. We confirm that if we consider the central position of the tachocline, it is prolate. Our results show that the tachocline is thicker at high latitudes than at the equator, making the overall shape of the tachocline more complex. Of the tachocline properties examined, the transition of the rotation rate across the tachocline, and to some extent the position of the tachocline, show some temporal variations.

  3. Absolute surface metrology by rotational averaging in oblique incidence interferometry.

    PubMed

    Lin, Weihao; He, Yumei; Song, Li; Luo, Hongxin; Wang, Jie

    2014-06-01

    A modified method for measuring the absolute figure of a large optical flat surface in synchrotron radiation by a small aperture interferometer is presented. The method consists of two procedures: the first step is oblique incidence measurement; the second is multiple rotating measurements. This simple method is described in terms of functions that are symmetric or antisymmetric with respect to reflections at the vertical axis. Absolute deviations of a large flat surface can be obtained when mirror antisymmetric errors are removed by N-position rotational averaging. Formulas are derived for measuring the absolute surface errors of a rectangular flat, and experiments on high-accuracy rectangular flats are performed to verify the method. Finally, uncertainty analysis is carried out in detail. PMID:24922410

  4. THE FIRST LUNAR MAP OF THE AVERAGE SOIL ATOMIC MASS

    SciTech Connect

    O. GASNAULT; W. FELDMAN; ET AL

    2001-01-01

    Measurements of indexes of lunar surface composition were successfully made during the Lunar Prospector (LP) mission using the Neutron Spectrometers (NS) [1]. This capability is demonstrated for fast neutrons in Plate 1 of Maurice et al. [2] (similar to Figure 2 here). Inspection shows a clear distinction between mare basalt (bright) and highland terranes [2]. Fast neutron simulations demonstrate the sensitivity of the fast neutron leakage flux to the presence of iron and titanium in the soil [3]. A dependence of the flux on a third element (calcium or aluminum) was also suspected [4]. In this study we expand our previous work by estimating fast neutron leakage fluxes for a more comprehensive set of assumed lunar compositions. We find a strong relationship between the fast neutron fluxes and the average soil atomic mass; this relation can be inverted to provide a map of the average soil atomic mass from the measured map of fast neutrons from the Moon.

  5. Graph-balancing algorithms for average consensus over directed networks

    NASA Astrophysics Data System (ADS)

    Fan, Yuan; Han, Runzhe; Qiu, Jianbin

    2016-01-01

    Consensus strategies find extensive applications in the coordination of robot groups and the decision-making of agents. Since balanced graphs play an important role in the average consensus problem and many other coordination problems for directed communication networks, this work explores conditions and algorithms for the digraph balancing problem. Based on an analysis of graph cycles, we prove that a digraph can be balanced if and only if the null space of its incidence matrix contains a positive vector. On the basis of this result and the corresponding analysis, two weight-balance algorithms are proposed, together with conditions for obtaining a unique balanced solution and a set of analytical results on weight-balance problems. We then point out the relationship between the weight-balance problem and the features of the corresponding underlying Markov chain. Finally, two numerical examples are presented to verify the proposed algorithms.
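    The balance criterion quoted above (a positive vector in the null space of the incidence matrix) can be checked directly as a feasibility linear program. A sketch using `scipy.optimize.linprog`; the function name `balance_weights` is illustrative and not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def balance_weights(n_nodes, edges):
    """Find positive edge weights making the digraph balanced, if possible.

    edges : list of (tail, head) pairs. Returns a weight per edge such that
    weighted in-degree equals weighted out-degree at every node, or None.
    """
    B = np.zeros((n_nodes, len(edges)))           # node-edge incidence matrix
    for j, (u, v) in enumerate(edges):
        B[u, j] -= 1.0                            # edge leaves u
        B[v, j] += 1.0                            # edge enters v
    # Feasibility LP: B w = 0 with w >= 1 (a strictly positive null vector)
    res = linprog(c=np.zeros(len(edges)), A_eq=B, b_eq=np.zeros(n_nodes),
                  bounds=[(1, None)] * len(edges))
    return res.x if res.success else None

# A directed 3-cycle is balanceable (uniform weights work) ...
w = balance_weights(3, [(0, 1), (1, 2), (2, 0)])
# ... but a single directed edge is not (no positive null vector exists)
w2 = balance_weights(2, [(0, 1)])
```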

  6. Note on scaling arguments in the effective average action formalism

    NASA Astrophysics Data System (ADS)

    Pagani, Carlo

    2016-08-01

    The effective average action (EAA) is a scale-dependent effective action where a scale k is introduced via an infrared regulator. The k dependence of the EAA is governed by an exact flow equation to which one associates a boundary condition at a scale μ . We show that the μ dependence of the EAA is controlled by an equation fully analogous to the Callan-Symanzik equation which allows one to define scaling quantities straightforwardly. Particular attention is paid to composite operators which are introduced along with new sources. We discuss some simple solutions to the flow equation for composite operators and comment on their implications in the case of a local potential approximation.

  7. Average dimension and magnetic structure of the distant Venus magnetotail

    NASA Technical Reports Server (NTRS)

    Saunders, M. A.; Russell, C. T.

    1986-01-01

    The first major statistical investigation of the far wake of an unmagnetized object embedded in the solar wind is reported. The investigation is based on Pioneer Venus Orbiter magnetometer data from 70 crossings of the Venus wake at altitudes between 5 and 11 Venus radii during reasonably steady IMF conditions. It is found that Venus has a well-developed tail, flaring with altitude and possibly broader in the direction parallel to the IMF cross-flow component. Tail lobe field polarities and the direction of the cross-tail field are consistent with tail accretion from the solar wind. Average values for the cross-tail field (2 nT) and the distant tail flux (3 MWb) indicate that most distant tail field lines close across the center of the tail and are not rooted in the Venus ionosphere. The findings are illustrated in a three-dimensional schematic.

  8. The average ionospheric electrodynamics for the different substorm phases

    SciTech Connect

    Kamide, Y.; Sun, W.; Akasofu, S.I.

    1996-01-01

    The average patterns of the electrostatic potential, current vectors, and Joule heating in the polar ionosphere, as well as the associated field-aligned currents, are determined for a quiet time, the growth phase, the expansion phase, the peak epoch, and the recovery phase of substorms. For this purpose, the Kamide-Richmond-Matsushita magnetogram-inversion algorithm is applied to a data set (for March 17, 18, and 19, 1978) from the six meridian magnetometer chains (the total number of magnetometer stations being 71) which were operated during the period of the International Magnetospheric Study (IMS). This is the first attempt at obtaining, on the basis of individual substorms, the average pattern of substorm quantities in the polar ionosphere for the different epochs. The main results are as follows: (1) The substorm-time current patterns over the entire polar region consist of two components. The first one is related to the two-cell convection pattern, and the second one is the westward electrojet in the dark sector which is related to the wedge current. (2) Time variations of the two components for the four substorm epochs are shown to be considerably different. (3) The dependence of these differences on the ionospheric electric field and the conductivities (Hall and Pedersen) is identified. (4) It is shown that the large-scale two-cell pattern in the electric potential is dominant during the growth phase of substorms. (5) The expansion phase is characterized by the appearance of a strong westward electrojet, which is added to the two-cell pattern. (6) The large-scale potential pattern becomes complicated during the recovery phase of substorms, but the two-cell pattern appears to be relatively dominant again during their late recovery as the wedge current subsides. These and many other earlier results are consistent with the present ones, which are more quantitatively and comprehensively demonstrated in this global study. 39 refs., 9 figs., 1 tab.

  9. Ultrafast green laser exceeding 400 W of average power

    NASA Astrophysics Data System (ADS)

    Gronloh, Bastian; Russbueldt, Peter; Jungbluth, Bernd; Hoffmann, Hans-Dieter

    2014-05-01

    We present the world's first laser at 515 nm with sub-picosecond pulses and an average power of 445 W. To realize this beam source we utilize an Yb:YAG-based infrared laser consisting of a fiber MOPA system as a seed source, a rod-type pre-amplifier and two Innoslab power amplifier stages. The infrared system delivers up to 930 W of average power at repetition rates between 10 and 50 MHz and with pulse durations around 800 fs. The beam quality in the infrared is M² = 1.1 and 1.5 in fast and slow axis. As a frequency doubler we chose a Type-I critically phase-matched Lithium Triborate (LBO) crystal in a single-pass configuration. To preserve the infrared beam quality and pulse duration, the conversion was carefully modeled using numerical calculations. These take dispersion-related and thermal effects into account, thus enabling us to provide precise predictions of the properties of the frequency-doubled beam. To be able to model the influence of thermal dephasing correctly and to choose appropriate crystals accordingly, we performed extensive absorption measurements of all crystals used for conversion experiments. These measurements provide the input data for the thermal FEM analysis and calculation. We used a Photothermal Common-path Interferometer (PCI) to obtain space-resolved absorption data in the bulk and at the surfaces of the LBO crystals. The absorption was measured at 1030 nm as well as at 515 nm in order to take into account the different absorption behavior at both occurring wavelengths.

  10. Molecular dynamics averaging of Xe chemical shifts in liquids

    NASA Astrophysics Data System (ADS)

    Jameson, Cynthia J.; Sears, Devin N.; Murad, Sohail

    2004-11-01

    The Xe nuclear magnetic resonance chemical shift differences that afford the discrimination between various biological environments are of current interest for biosensor applications and medical diagnostic purposes. In many such environments the Xe signal appears close to that in water. We calculate average Xe chemical shifts (relative to the free Xe atom) in solution in eleven liquids: water, isobutane, perfluoro-isobutane, n-butane, n-pentane, neopentane, perfluoroneopentane, n-hexane, n-octane, n-perfluorooctane, and perfluorooctyl bromide. The latter is a liquid used for intravenous Xe delivery. We calculate quantum mechanically the Xe shielding response in Xe-molecule van der Waals complexes, from which calculations we develop Xe (atomic site) interpolating functions that reproduce the ab initio Xe shielding response in the complex. By assuming additivity, these Xe-site shielding functions can be used to calculate the shielding for any configuration of such molecules around Xe. The averaging over configurations is done via molecular dynamics (MD). The simulations were carried out using a MD technique that one of us had developed previously for the simulation of Henry's constants of gases dissolved in liquids. It is based on separating a gaseous compartment in the MD system from the solvent using a semipermeable membrane that is permeable only to the gas molecules. We reproduce the experimental trends in the Xe chemical shifts in n-alkanes with increasing number of carbons and the large chemical shift difference between Xe in water and in perfluorooctyl bromide. We also reproduce the trend for a given solvent of decreasing Xe chemical shift with increasing temperature. We predict chemical shift differences between Xe in alkanes vs their perfluoro counterparts.

  11. The importance of ensemble averaging in enzyme kinetics.

    PubMed

    Masgrau, Laura; Truhlar, Donald G

    2015-02-17

    CONSPECTUS: The active site of an enzyme is surrounded by a fluctuating environment of protein and solvent conformational states, and a realistic calculation of chemical reaction rates and kinetic isotope effects of enzyme-catalyzed reactions must take account of this environmental diversity. Ensemble-averaged variational transition state theory with multidimensional tunneling (EA-VTST/MT) was developed as a way to carry out such calculations. This theory incorporates ensemble averaging, quantized vibrational energies, tunneling, and recrossing of transition state dividing surfaces in a systematic way. It has been applied successfully to a number of hydrogen-, proton-, and hydride-transfer reactions. The theory also exposes the set of effects that should be considered in reliable rate-constant calculations. We first review the basic theory and the steps in the calculation. A key role is played by the generalized free energy of activation profile, which is obtained by quantizing the classical potential of mean force as a function of a reaction coordinate because the one-way flux through the transition state dividing surface can be written in terms of the generalized free energy of activation. A recrossing transmission coefficient accounts for the difference between the one-way flux through the chosen transition state dividing surface and the net flux, and a tunneling transmission coefficient converts classical motion along the reaction coordinate to quantum mechanical motion. The tunneling calculation is multidimensional, accounting for the change in vibrational frequencies along the tunneling path and the shortening of the tunneling path with respect to the minimum energy path (MEP), as promoted by reaction-path curvature. The generalized free energy of activation and the transmission coefficients both involve averaging over an ensemble of reaction paths and conformations, and this includes the coupling of protein motions to the rearrangement of chemical bonds.
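    The quantities named above combine in the usual EA-VTST/MT form. As a sketch in generic VTST notation (the symbols follow common usage and are not necessarily the authors' exact ones), the recrossing transmission coefficient Γ and the tunneling transmission coefficient κ multiply the quasithermodynamic factor built from the generalized free energy of activation:

```latex
k(T) \;=\; \Gamma(T)\,\kappa(T)\,\frac{k_{\mathrm{B}}T}{h}\,
\exp\!\big(-\Delta G^{\mathrm{GT}}_{\mathrm{act}}(T)/RT\big)
```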

  12. The Importance of Ensemble Averaging in Enzyme Kinetics

    PubMed Central

    2015-01-01

    Conspectus: The active site of an enzyme is surrounded by a fluctuating environment of protein and solvent conformational states, and a realistic calculation of chemical reaction rates and kinetic isotope effects of enzyme-catalyzed reactions must take account of this environmental diversity. Ensemble-averaged variational transition state theory with multidimensional tunneling (EA-VTST/MT) was developed as a way to carry out such calculations. This theory incorporates ensemble averaging, quantized vibrational energies, tunneling, and recrossing of transition state dividing surfaces in a systematic way. It has been applied successfully to a number of hydrogen-, proton-, and hydride-transfer reactions. The theory also exposes the set of effects that should be considered in reliable rate-constant calculations. We first review the basic theory and the steps in the calculation. A key role is played by the generalized free energy of activation profile, which is obtained by quantizing the classical potential of mean force as a function of a reaction coordinate because the one-way flux through the transition state dividing surface can be written in terms of the generalized free energy of activation. A recrossing transmission coefficient accounts for the difference between the one-way flux through the chosen transition state dividing surface and the net flux, and a tunneling transmission coefficient converts classical motion along the reaction coordinate to quantum mechanical motion. The tunneling calculation is multidimensional, accounting for the change in vibrational frequencies along the tunneling path and the shortening of the tunneling path with respect to the minimum energy path (MEP), as promoted by reaction-path curvature. The generalized free energy of activation and the transmission coefficients both involve averaging over an ensemble of reaction paths and conformations, and this includes the coupling of protein motions to the rearrangement of chemical bonds.

  13. Average radiation exposure values for three diagnostic radiographic examinations

    SciTech Connect

    Rueter, F.G.; Conway, B.J.; McCrohan, J.L.; Suleiman, O.H.

    1990-11-01

    National surveys of more than 600 facilities that performed chest, lumbosacral spine, and abdominal examinations were conducted as a part of the Nationwide Evaluation of X-Ray Trends program. Radiation exposures were measured with use of a set of standard phantoms developed by the Center for Devices and Radiological Health of the Food and Drug Administration, U.S. Public Health Service. X-ray equipment parameters, film processing data, and data regarding techniques used were collected. There were no differences in overall posteroanterior chest exposures between hospitals and private practices. Seventy-six percent of hospitals used grids, compared with 33% of private practices. In general, hospitals favored a high tube voltage technique, and private facilities favored a low tube voltage technique. Forty-one percent of private practices and 17% of hospitals underprocessed their film. Underprocessing in hospitals increased from 17% in 1984 to 33% in 1987. Average exposure values for these examinations may be useful as guidelines in meeting some of the new requirements of the Joint Commission on Accreditation of Healthcare Organizations.

  14. Resolution improvement by 3D particle averaging in localization microscopy

    PubMed Central

    Broeken, Jordi; Johnson, Hannah; Lidke, Diane S.; Liu, Sheng; Nieuwenhuizen, Robert P.J.; Stallinga, Sjoerd; Lidke, Keith A.; Rieger, Bernd

    2015-01-01

    Inspired by recent developments in localization microscopy that applied averaging of identical particles in 2D for increasing the resolution even further, we discuss considerations for alignment (registration) methods for particles in general and for 3D in particular. We detail that traditional techniques for particle registration from cryo electron microscopy based on cross-correlation are not suitable, as the underlying image formation process is fundamentally different. We argue that only localizations, i.e. a set of coordinates with associated uncertainties, are recorded and not a continuous intensity distribution. We present a method that owes to this fact and that is inspired by the field of statistical pattern recognition. In particular we suggest to use an adapted version of the Bhattacharyya distance as a merit function for registration. We evaluate the method in simulations and demonstrate it on three-dimensional super-resolution data of Alexa 647 labelled to the Nup133 protein in the nuclear pore complex of Hela cells. From the simulations we find suggestions that for successful registration the localization uncertainty must be smaller than the distance between labeling sites on a particle. These suggestions are supported by theoretical considerations concerning the attainable resolution in localization microscopy and its scaling behavior as a function of labeling density and localization precision. PMID:25866640
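    For intuition, the classical Bhattacharyya distance between two Gaussians (a stand-in for localization clouds with associated uncertainties) has a closed form; the paper uses an adapted version, so this is only a sketch:

```python
import numpy as np

def bhattacharyya_gauss(mu1, cov1, mu2, cov2):
    """Closed-form Bhattacharyya distance between two Gaussian densities."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov = (np.asarray(cov1, float) + np.asarray(cov2, float)) / 2.0
    diff = mu1 - mu2
    # Mahalanobis-like term plus a covariance-mismatch term
    term1 = diff @ np.linalg.solve(cov, diff) / 8.0
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

# Identical localization clouds give distance 0 ...
d0 = bhattacharyya_gauss([0, 0, 0], np.eye(3), [0, 0, 0], np.eye(3))
# ... and the distance grows monotonically as the clouds separate
d1 = bhattacharyya_gauss([0, 0, 0], np.eye(3), [1, 0, 0], np.eye(3))
d2 = bhattacharyya_gauss([0, 0, 0], np.eye(3), [2, 0, 0], np.eye(3))
```

Minimizing such a distance over rigid transformations is the registration idea the abstract describes.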

  15. The Kernel Adaptive Autoregressive-Moving-Average Algorithm.

    PubMed

    Li, Kan; Príncipe, José C

    2016-02-01

    In this paper, we present a novel kernel adaptive recurrent filtering algorithm based on the autoregressive-moving-average (ARMA) model, which is trained with recurrent stochastic gradient descent in the reproducing kernel Hilbert spaces. This kernelized recurrent system, the kernel adaptive ARMA (KAARMA) algorithm, brings together the theories of adaptive signal processing and recurrent neural networks (RNNs), extending the current theory of kernel adaptive filtering (KAF) using the representer theorem to include feedback. Compared with classical feedforward KAF methods, the KAARMA algorithm provides general nonlinear solutions for complex dynamical systems in a state-space representation, with a deferred teacher signal, by propagating forward the hidden states. We demonstrate its capabilities to provide exact solutions with compact structures by solving a set of benchmark nondeterministic polynomial-complete problems involving grammatical inference. Simulation results show that the KAARMA algorithm outperforms equivalent input-space recurrent architectures using first- and second-order RNNs, demonstrating its potential as an effective learning solution for the identification and synthesis of deterministic finite automata. PMID:25935049

  16. Spatially-Averaged Diffusivities for Pollutant Transport in Vegetated Flows

    NASA Astrophysics Data System (ADS)

    Huang, Jun; Zhang, Xiaofeng; Chua, Vivien P.

    2016-06-01

    Vegetation in wetlands can create complicated flow patterns and may provide many environmental benefits including water purification, flood protection and shoreline stabilization. The interaction between vegetation and flow has significant impacts on the transport of pollutants, nutrients and sediments. In this paper, we investigate pollutant transport in vegetated flows using the Delft3D-FLOW hydrodynamic software. The model simulates the transport of pollutants with the continuous release of a passive tracer at mid-depth and mid-width in the region where the flow is fully developed. The theoretical Gaussian plume profile is fitted to experimental data, and the lateral and vertical diffusivities are computed using the least squares method. In previous tracer studies conducted in the laboratory, the measurements were obtained at a single cross-section as experimental data is typically collected at one location. These diffusivities are then used to represent spatially-averaged values. With the numerical model, sensitivity analysis of lateral and vertical diffusivities along the longitudinal direction was performed at 8 cross-sections. Our results show that the lateral and vertical diffusivities increase with longitudinal distance from the injection point, due to the larger size of the dye cloud further downstream. A new method is proposed to compute diffusivities using a global minimum least squares method, which provides a more reliable estimate than the values obtained using the conventional method.
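    The conventional single-cross-section fit can be sketched as follows: generate a Gaussian lateral profile, fit it by least squares, and invert σ_y² = 2·D_y·x/u for the lateral diffusivity. All numbers here are hypothetical; this is not the Delft3D-FLOW setup:

```python
import numpy as np
from scipy.optimize import curve_fit

# Steady point-source plume: the lateral profile at distance x is Gaussian
# with variance sigma_y^2 = 2 * D_y * x / u (u = mean velocity).
u, x_section, D_true = 0.2, 5.0, 1e-3            # m/s, m, m^2/s

y = np.linspace(-1.0, 1.0, 41)
sigma2 = 2 * D_true * x_section / u
rng = np.random.default_rng(1)
conc = np.exp(-y**2 / (2 * sigma2)) + rng.normal(0, 0.01, y.size)  # noisy data

def gauss(y, amp, s2):
    return amp * np.exp(-y**2 / (2 * s2))

# Least-squares fit of the Gaussian plume profile at one cross-section
(amp_fit, s2_fit), _ = curve_fit(gauss, y, conc, p0=[1.0, 0.1])
D_fit = s2_fit * u / (2 * x_section)             # invert the sigma_y^2 relation
```

Repeating this fit at several cross-sections and minimizing the combined residual is, in spirit, the global least-squares approach the abstract proposes.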

  17. The average size and temperature profile of quasar accretion disks

    SciTech Connect

    Jiménez-Vicente, J.; Mediavilla, E.; Muñoz, J. A.; Motta, V.; Falco, E.

    2014-03-01

    We use multi-wavelength microlensing measurements of a sample of 10 image pairs from 8 lensed quasars to study the structure of their accretion disks. By using spectroscopy or narrowband photometry, we have been able to remove contamination from the weakly microlensed broad emission lines, extinction, and any uncertainties in the large-scale macro magnification of the lens model. We determine a maximum likelihood estimate for the exponent of the size versus wavelength scaling (r_s ∝ λ^p, corresponding to a disk temperature profile of T ∝ r^(−1/p)) of p = 0.75 (+0.2/−0.2) and a Bayesian estimate of p = 0.8 ± 0.2, which are significantly smaller than the prediction of thin disk theory (p = 4/3). We have also obtained a maximum likelihood estimate for the average quasar accretion disk size of r_s = 4.5 (+1.5/−1.2) lt-day at a rest frame wavelength of λ = 1026 Å for microlenses with a mean mass of M = 1 M_☉, in agreement with previous results, and larger than expected from thin disk theory.

  18. An averaging theorem for a perturbed KdV equation

    NASA Astrophysics Data System (ADS)

    Guan, Huang

    2013-06-01

    We consider a perturbed KdV equation: u_t + u_xxx − 6uu_x = εf(x, u(·)), x ∈ T, with ∫_T u dx = 0. For any periodic function u(x), let I(u) = (I_1(u), I_2(u), …) ∈ R_+^∞ be the vector formed by the KdV integrals of motion, calculated for the potential u(x). Assuming that the perturbation εf(x, u(x)) defines a smoothing mapping u(x) ↦ f(x, u(x)) (e.g. it is a smooth function εf(x), independent of u), and that solutions of the perturbed equation satisfy some mild a priori assumptions, we prove that for solutions u(t, x) with typical initial data and for 0 ⩽ t ≲ ε⁻¹, the vector I(u(t)) may be well approximated by a solution of the averaged equation.

  19. Urban noise functional stratification for estimating average annual sound level.

    PubMed

    Rey Gozalo, Guillermo; Barrigón Morillas, Juan Miguel; Prieto Gajardo, Carlos

    2015-06-01

    Road traffic noise causes many health problems and the deterioration of the quality of urban life; thus, adequate spatial and temporal noise assessment methods are required. Different methods have been proposed for the spatial evaluation of noise in cities, including the categorization method. Until now, this method has only been applied to the study of spatial variability with measurements taken over a week. In this work, continuous measurements over 1 year carried out at 21 different locations in Madrid (Spain), which has more than three million inhabitants, were analyzed. The annual average sound levels and the temporal variability were studied in the proposed categories. The results show that the three proposed categories highlight the spatial noise stratification of the studied city in each period of the day (day, evening, and night) and in the overall indicators (L_Adn, L_Aden, and L_A24). Also, significant differences between the diurnal and nocturnal sound levels show functional stratification in these categories. This functional stratification therefore offers advantages from both spatial and temporal perspectives by reducing the number of sampling points and the measurement time. PMID:26093410
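    For reference, the overall day-evening-night indicator is an energetic 24-hour average with evening and night penalties. A sketch following the EU Environmental Noise Directive definition of L_den (the default 12/4/8 h period lengths are assumed here and may differ from the study's):

```python
import math

def l_den(l_day, l_evening, l_night):
    """Day-evening-night level per the EU Environmental Noise Directive:
    energetic 24 h average with +5 dB evening and +10 dB night penalties
    (12 h day, 4 h evening, 8 h night)."""
    return 10 * math.log10(
        (12 * 10 ** (l_day / 10)
         + 4 * 10 ** ((l_evening + 5) / 10)
         + 8 * 10 ** ((l_night + 10) / 10)) / 24)

# Equal period levels of 60 dB yield an L_den a few dB above 60
# because of the evening and night penalties
value = l_den(60, 60, 60)
```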

  20. Metal deep engraving with high average power femtosecond lasers

    NASA Astrophysics Data System (ADS)

    Faucon, M.; Mincuzzi, G.; Morin, F.; Hönninger, C.; Mottay, E.; Kling, R.

    2015-03-01

    Deep engraving of 3D textures is a very demanding process for the creation of master tools, e.g. molds, forming tools, or coining dies. As these masters are used for the reproduction of 3D patterns, the tool materials are typically hard and brittle and thus difficult to machine. The new generation of industrial femtosecond lasers provides both high-accuracy engraving results and high ablation rates at the same time. When operating at pulse energies of typically 40 μJ and repetition rates in the MHz range, the detrimental effect of heat accumulation has to be avoided. Therefore, high scanning speeds are required to reduce the pulse overlap below 90%. As a consequence, scan speeds in the range of 25-50 m/s are needed, which is beyond the capability of galvo scanners. In this paper we present results using a combination of a polygon scanner with a high average power femtosecond laser and compare them to results with conventional scanners. The effects of pulse energy and scan speed on geometrical accuracy are discussed. The quality of the obtained structures is analyzed by means of a 3D surface metrology microscope as well as SEM images.

  1. Average annual precipitation classes to characterize watersheds in North Carolina

    USGS Publications Warehouse

    Terziotti, Silvia; Eimers, Jo Leslie

    2001-01-01

    This web site contains the Federal Geographic Data Committee-compliant metadata (documentation) for digital data produced for the North Carolina, Department of Environment and Natural Resources, Public Water Supply Section, Source Water Assessment Program. The metadata are for 11 individual Geographic Information System data sets. An overlay and indexing method was used with the data to derive a rating for unsaturated zone and watershed characteristics for use by the State of North Carolina in assessing more than 11,000 public water-supply wells and approximately 245 public surface-water intakes for susceptibility to contamination. For ground-water supplies, the digital data sets used in the assessment included unsaturated zone rating, vertical series hydraulic conductance, land-surface slope, and land cover. For assessment of public surface-water intakes, the data sets included watershed characteristics rating, average annual precipitation, land-surface slope, land cover, and ground-water contribution. Documentation for the land-use data set applies to both the unsaturated zone and watershed characteristics ratings. Documentation for the estimated depth-to-water map used in the calculation of the vertical series hydraulic conductance also is included.

  2. Application of the moving averaging technique in surplus production models

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Liu, Qun

    2014-08-01

    Surplus production models are the simplest analytical methods effective for fish stock assessment and fisheries management. In this paper, eight surplus production estimators (three estimation procedures) were tested on Schaefer and Fox type simulated data in three simulated fisheries (declining, well-managed, and restoring fisheries) at two white noise levels. Monte Carlo simulation was conducted to verify the utility of moving averaging (MA), an important technique for reducing the effect of noise in the data used by these models. The relative estimation error (REE) of maximum sustainable yield (MSY) was used as an indicator for the analysis, and one-way ANOVA was applied to test the significance of the REE calculated at four levels of MA. Simulation results suggested that increasing the value of MA could significantly improve the performance of the surplus production model (low REE) in all cases when the white noise level was low (coefficient of variation (CV) = 0.02). However, when the white noise level increased (CV = 0.25), increasing the value of MA could still significantly enhance the performance of most models. Our results indicated that the best model performance occurred frequently when MA was equal to 3; however, some exceptions were observed when MA was higher.
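
    The smoothing technique being tested is a plain moving average over the input series. A minimal sketch (the CPUE-style series, noise level, and window choice here are hypothetical illustrations, not the paper's data):

    ```python
    import numpy as np

    def moving_average(x, window):
        """Trailing moving average of `x` with the given window size."""
        x = np.asarray(x, dtype=float)
        kernel = np.ones(window) / window
        # 'valid' mode keeps only positions where the full window fits
        return np.convolve(x, kernel, mode="valid")

    # Hypothetical noisy index of abundance for a declining fishery
    rng = np.random.default_rng(42)
    true_cpue = np.linspace(10.0, 4.0, 30)
    observed = true_cpue * (1 + 0.25 * rng.standard_normal(30))

    # MA = 3, the level the simulations most often favored
    smoothed = moving_average(observed, window=3)
    ```

    Feeding the smoothed rather than raw series into the production-model fit is what reduces the REE of the MSY estimate in the simulations.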

  3. A simple depth-averaged model for dry granular flow

    NASA Astrophysics Data System (ADS)

    Hung, Chi-Yao; Stark, Colin P.; Capart, Herve

    Granular flow over an erodible bed is an important phenomenon in both industrial and geophysical settings. Here we develop a depth-averaged theory for dry erosive flows using balance equations for mass, momentum and (crucially) kinetic energy. We assume a linearized GDR-Midi rheology for granular deformation and Coulomb friction along the sidewalls. The theory predicts the kinematic behavior of channelized flows under a variety of conditions, which we test in two sets of experiments: (1) a linear chute, where abrupt changes in tilt drive unsteady uniform flows; (2) a rotating drum, to explore steady non-uniform flow. The theoretical predictions match the experimental results well in all cases, without the need to tune parameters or invoke an ad hoc equation for entrainment at the base of the flow. Here we focus on the drum problem. A dimensionless rotation rate (related to Froude number) characterizes flow geometry and accounts not just for spin rate, drum radius and gravity, but also for grain size, wall friction and channel width. By incorporating Coriolis force the theory can treat behavior under centrifuge-induced enhanced gravity. We identify asymptotic flow regimes at low and high dimensionless rotation rates that exhibit distinct power-law scaling behaviors.

  4. Face Averages Enhance User Recognition for Smartphone Security

    PubMed Central

    Robertson, David J.; Kramer, Robin S. S.; Burton, A. Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual’s ‘face-average’ – a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user’s face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings. PMID:25807251

  5. Coherent and stochastic averaging in solid-state NMR

    NASA Astrophysics Data System (ADS)

    Nevzorov, Alexander A.

    2014-12-01

    A new approach for calculating solid-state NMR lineshapes of uniaxially rotating membrane proteins under the magic-angle spinning conditions is presented. The use of stochastic Liouville equation (SLE) allows one to account for both coherent sample rotation and stochastic motional averaging of the spherical dipolar powder patterns by uniaxial diffusion of the spin-bearing molecules. The method is illustrated via simulations of the dipolar powder patterns of rigid samples under the MAS conditions, as well as the recent method of rotational alignment in the presence of both MAS and rotational diffusion under the conditions of dipolar recoupling. It has been found that it is computationally more advantageous to employ direct integration over a spherical grid rather than to use a full angular basis set for the SLE solution. Accuracy estimates for the bond angles measured from the recoupled amide ¹H-¹⁵N dipolar powder patterns have been obtained at various rotational diffusion coefficients. It has been shown that the rotational alignment method is applicable to membrane proteins approximated as cylinders with radii of approximately 20 Å, for which uniaxial rotational diffusion within the bilayer is sufficiently fast and exceeds the rate 2 × 10⁵ s⁻¹.

  6. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    SciTech Connect

    Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
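
    The EM baseline the paper compares against can be sketched for the simplest Gaussian case (equal-variance kernels over bias-corrected forecasts). This is a toy illustration on hypothetical data, not the authors' DREAM implementation:

    ```python
    import numpy as np

    def bma_em(forecasts, obs, iters=200):
        """Toy EM estimate of BMA weights and a common Gaussian kernel variance.

        forecasts: (n_obs, n_models) bias-corrected ensemble forecasts
        obs:       (n_obs,) verifying observations
        """
        n, k = forecasts.shape
        w = np.full(k, 1.0 / k)
        var = np.var(obs - forecasts.mean(axis=1))
        for _ in range(iters):
            # E-step: responsibility of model m for observation t
            dens = (np.exp(-0.5 * (obs[:, None] - forecasts) ** 2 / var)
                    / np.sqrt(2 * np.pi * var))
            z = w * dens
            z /= z.sum(axis=1, keepdims=True)
            # M-step: update mixture weights and the shared variance
            w = z.mean(axis=0)
            var = np.sum(z * (obs[:, None] - forecasts) ** 2) / n
        return w, var

    # Hypothetical two-model ensemble: model 0 tracks the observations closely,
    # model 1 is badly biased, so nearly all weight should land on model 0.
    rng = np.random.default_rng(1)
    obs = rng.normal(0.0, 1.0, 500)
    ens = np.column_stack([obs + rng.normal(0.0, 0.1, 500),
                           rng.normal(5.0, 1.0, 500)])
    weights, variance = bma_em(ens, obs)
    ```

    MCMC approaches such as DREAM sample the same likelihood instead of maximizing it, which is what yields the uncertainty estimates on the weights that EM cannot provide.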

  7. Potential of high-average-power solid state lasers

    SciTech Connect

    Emmett, J.L.; Krupke, W.F.; Sooy, W.R.

    1984-09-25

    We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels.

  8. Effects of Polynomial Trends on Detrending Moving Average Analysis

    NASA Astrophysics Data System (ADS)

    Shao, Ying-Hui; Gu, Gao-Feng; Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2015-07-01

    The detrending moving average (DMA) algorithm is one of the best performing methods to quantify the long-term correlations in nonstationary time series. As many long-term correlated time series in real systems contain various trends, we investigate the effects of polynomial trends on the scaling behaviors and the performances of three widely used DMA methods including backward algorithm (BDMA), centered algorithm (CDMA) and forward algorithm (FDMA). We derive a general framework for polynomial trends and obtain analytical results for constant shifts and linear trends. We find that the behavior of the CDMA method is not influenced by constant shifts. In contrast, linear trends cause a crossover in the CDMA fluctuation functions. We also find that constant shifts and linear trends cause crossovers in the fluctuation functions obtained from the BDMA and FDMA methods. When a crossover exists, the scaling behavior at small scales comes from the intrinsic time series while that at large scales is dominated by the constant shifts or linear trends. We also derive analytically the expressions of crossover scales and show that the crossover scale depends on the strength of the polynomial trends, the Hurst index, and in some cases (linear trends for BDMA and FDMA) the length of the time series. In all cases, the BDMA and the FDMA behave almost the same under the influence of constant shifts or linear trends. Extensive numerical experiments confirm excellently the analytical derivations. We conclude that the CDMA method outperforms the BDMA and FDMA methods in the presence of polynomial trends.
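
    The centered variant (CDMA) is compact enough to sketch: detrend the profile of the series with a centered moving average and measure the residual fluctuation as a function of window size. A minimal, hedged illustration (not the authors' implementation):

    ```python
    import numpy as np

    def cdma_fluctuation(x, window):
        """Centered DMA fluctuation F(n) of series x for an odd window n."""
        y = np.cumsum(x - np.mean(x))                # profile of the series
        kernel = np.ones(window) / window
        trend = np.convolve(y, kernel, mode="same")  # centered moving average
        half = window // 2
        resid = y[half:-half] - trend[half:-half]    # drop edge effects
        return np.sqrt(np.mean(resid ** 2))

    # Scaling check on white noise: F(n) ~ n^alpha with alpha near 0.5
    rng = np.random.default_rng(0)
    x = rng.standard_normal(2 ** 14)
    sizes = np.array([5, 9, 17, 33, 65])
    F = np.array([cdma_fluctuation(x, n) for n in sizes])
    alpha = np.polyfit(np.log(sizes), np.log(F), 1)[0]
    ```

    Superposing a constant shift or linear trend on `x` would produce the crossovers in F(n) that the paper analyzes; for CDMA a constant shift leaves the result unchanged, as stated in the abstract.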

  9. Statistical properties of the gyro-averaged standard map

    NASA Astrophysics Data System (ADS)

    da Fonseca, Julio D.; Sokolov, Igor M.; Del-Castillo-Negrete, Diego; Caldas, Ibere L.

    2015-11-01

    A statistical study of the gyro-averaged standard map (GSM) is presented. The GSM is an area preserving map model proposed as a simplified description of finite Larmor radius (FLR) effects on E×B chaotic transport in magnetized plasmas with zonal flows perturbed by drift waves. The GSM's effective perturbation parameter, gamma, is proportional to the zero-order Bessel function of the particle's Larmor radius. In the limit of zero Larmor radius, the GSM reduces to the standard, Chirikov-Taylor map. We consider plasmas in thermal equilibrium and assume a probability density function (pdf) for the Larmor radius resulting from a Maxwell-Boltzmann distribution. Since the particles have in general different Larmor radii, each orbit is computed using a different perturbation parameter, gamma. We present analytical and numerical computations of the pdf of gamma for a Maxwellian distribution. We also compute the pdf of global chaos, which gives the probability that a particle with a given Larmor radius exhibits global chaos, i.e. the probability that Kolmogorov-Arnold-Moser (KAM) transport barriers do not exist.
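
    The map itself is simple enough to sketch. Below is a minimal illustration, assuming the Chirikov-Taylor update with the kick strength scaled by the zero-order Bessel function of a dimensionless Larmor radius; the function and variable names are ours, not from the paper:

    ```python
    import numpy as np
    from scipy.special import j0

    def gyro_averaged_standard_map(x0, p0, K, rho, steps):
        """Iterate a gyro-averaged standard map: the Chirikov-Taylor map with
        its perturbation parameter scaled by J0 of the Larmor radius."""
        gamma = K * j0(rho)                  # FLR-modified effective perturbation
        x, p = x0, p0
        xs, ps = [x], [p]
        for _ in range(steps):
            p = p + gamma * np.sin(x)        # kick
            x = (x + p) % (2.0 * np.pi)      # rotation
            xs.append(x)
            ps.append(p)
        return np.array(xs), np.array(ps)

    # rho = 0 recovers the standard map (J0(0) = 1); near a zero of J0
    # (rho ~ 2.405) the effective perturbation, and hence chaos, is suppressed.
    xs, ps = gyro_averaged_standard_map(1.0, 0.5, 1.5, 0.0, 1000)
    ```

    Because gamma is an oscillating function of the Larmor radius, particles drawn from a Maxwellian see a broad distribution of effective perturbations, which is what the statistical analysis in the paper exploits.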

  10. Gains in accuracy from averaging ratings of abnormality

    NASA Astrophysics Data System (ADS)

    Swensson, Richard G.; King, Jill L.; Gur, David; Good, Walter F.

    1999-05-01

    Six radiologists used continuous scales to rate 529 chest-film cases for likelihood of five separate types of abnormalities (interstitial disease, nodules, pneumothorax, alveolar infiltrates and rib fractures) in each of six replicated readings, yielding 36 separate ratings of each case for the five abnormalities. Analyses for each type of abnormality estimated the relative gains in accuracy (area below the ROC curve) obtained by averaging the case-ratings across: (1) six independent replications by each reader (30% gain), (2) six different readers within each replication (39% gain) or (3) all 36 readings (58% gain). Although accuracy differed among both readers and abnormalities, ROC curves for the median ratings showed similar relative gains in accuracy. From a latent-variable model for these gains, we estimate that about 51% of a reader's total decision variance consisted of random (within-reader) errors that were uncorrelated between replications, another 14% came from that reader's consistent (but idiosyncratic) responses to different cases, and only about 35% could be attributed to systematic variations among the sampled cases that were consistent across different readers.
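
    The reported gains follow from simple variance bookkeeping: averaging replications shrinks only the random within-reader component of the decision variable. A toy simulation of that decomposition (the variance split loosely follows the abstract's estimates; the case counts and effect size are hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def auc(pos, neg):
        """Empirical area under the ROC curve via all pairwise comparisons."""
        diff = pos[:, None] - neg[None, :]
        return (diff > 0).mean() + 0.5 * (diff == 0).mean()

    n_cases, n_reps = 2000, 6
    # Decision-variance split modeled on the abstract's estimates:
    # ~35% consistent case effect, ~14% reader-idiosyncratic, ~51% random error
    case_effect = np.sqrt(0.35) * rng.standard_normal(n_cases)
    reader_effect = np.sqrt(0.14) * rng.standard_normal(n_cases)
    noise = np.sqrt(0.51) * rng.standard_normal((n_cases, n_reps))

    truth = rng.random(n_cases) < 0.5       # half the cases are abnormal
    signal = np.where(truth, 1.0, 0.0)      # abnormality shifts the rating

    # One reader's ratings over n_reps replicated readings of every case
    ratings = (signal[:, None] + case_effect[:, None]
               + reader_effect[:, None] + noise)

    auc_single = auc(ratings[truth, 0], ratings[~truth, 0])
    auc_avg = auc(ratings[truth].mean(axis=1), ratings[~truth].mean(axis=1))
    ```

    Averaging the six replications divides the random-error variance by six while leaving the case and idiosyncratic components untouched, so the averaged ratings yield a visibly higher AUC, mirroring the ~30% gain the study reports for within-reader averaging.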

  11. Lagrange and average interpolation over 3D anisotropic elements

    NASA Astrophysics Data System (ADS)

    Acosta, Gabriel

    2001-10-01

    An average interpolation is introduced for 3-rectangles and tetrahedra, and optimal order error estimates in the H1 norm are proved. The constant in the estimate depends "weakly" (improving the results given in Durán (Math. Comp. 68 (1999) 187-199) on the uniformity of the mesh in each direction. For tetrahedra, the constant also depends on the maximum angle of the element. On the other hand, merging several known results (Acosta and Durán, SIAM J. Numer. Anal. 37 (1999) 18-36; Durán, Math. Comp. 68 (1999) 187-199; Krízek, SIAM J. Numer. Anal. 29 (1992) 513-520; Al Shenk, Math. Comp. 63 (1994) 105-119), we prove optimal order error for the -Lagrange interpolation in W1,p, p>2, with a constant depending on p as well as the maximum angle of the element. Again, under the maximum angle condition, optimal order error estimates are obtained in the H1 norm for higher degree interpolations.

  12. S-index: Measuring significant, not average, citation performance

    NASA Astrophysics Data System (ADS)

    Antonoyiannakis, Manolis

    2009-03-01

    We recently [1] introduced the "citation density curve" (or cumulative impact factor curve) that captures the full citation performance of a journal: its size, impact factor, the maximum number of citations per paper, the relative size of the different-cited portions of the journal, etc. The citation density curve displays a universal behavior across journals. We exploit this universality to extract a simple metric (the "S-index") to characterize the citation impact of "significant" papers in each journal. In doing so, we go beyond the journal impact factor, which only measures the impact of the average paper. The conventional wisdom of ranking journals according to their impact factors is thus challenged. Having shown the utility and robustness of the S-index in comparing and ranking journals of different sizes but within the same field, we explore the concept further, going beyond a single field, and beyond journals. Can we compare different scientific fields, departments, or universities? And how should one generalize the citation density curve and the S-index to address these questions? [1] M. Antonoyiannakis and S. Mitra, "Is PRL too large to have an 'impact'?", Editorial, Physical Review Letters, December 2008.

  13. Lack of self-averaging and family trees

    NASA Astrophysics Data System (ADS)

    Serva, Maurizio

    2004-02-01

    We consider a large population of asexually reproducing individuals in the absence of selective pressure. The population size is maintained constant by the environment. We find that distances between individuals (time from the last common ancestor) exhibit highly non-trivial properties. In particular, their distribution in a single population is random even in the thermodynamical limit, i.e., there is lack of self-averaging. As a result, not only are distances different for different pairs of individuals, but the mean distance of the individuals of a given population is also different at different times. All computed quantities are parameter-free and scale linearly with the population size. Results in this paper may have some relevance in the ‘Out of Africa/Multi-regional’ debate about the origin of modern man. In fact, the recovery of mitochondrial DNA from Neandertal fossils in three different loci: Feldhofer (Germany), Mezmaiskaya (Northern Caucasus), Vindija (Croatia), permitted comparison of Neandertal/Neandertal distances with Neandertal/modern and modern/modern ones.

  14. High Average Power Nd:YAG Slab Laser

    NASA Astrophysics Data System (ADS)

    Kasai, Takeshi; Sindo, Yoshihiko; Haga, Keiji

    1989-07-01

    A slab geometry Nd:YAG laser with a zigzag optical path is described. The dimensions of the Nd:YAG slab are 5.6 x 18.4 x 153.9 mm, and the Nd³⁺ ion concentration is 1.1 at.%. Two krypton flashlamps, one located on each side of the YAG slab, are used for pumping. The conditions for normal pulsed operation were as follows: the repetition rate was from 5 to 27 pps, and the pulse durations were 4 and 9.9 ms. With the above conditions, a maximum average output power of 500 W was obtained with an efficiency of 2%, the slope efficiency being 2.4%. The beam divergence was estimated to be 10 x 25 mrad. The stability of the laser output power was about ±1.5%. Another oscillator, which includes intra-cavity cylindrical lenses, was also designed. Using this resonator configuration reduced the beam divergence to about 7.6 x 8.2 mrad. A preliminary laser processing experiment was attempted using this laser oscillator.

  15. The development of a high average power glass laser source

    NASA Astrophysics Data System (ADS)

    Myers, J. D.

    1984-05-01

    The subject contract has as its objective the development of a high average power glass laser by systematically improving the factors which influence the ability of a laser glass to handle large power levels. Based upon the availability of the thermal laser glass composition Q-100, the rationale used was toward the improvement of the efficiency of a glass laser by developing methods to increase the pumping efficiency and toward the improvement of the power handling capability of the glass laser rod itself. These incremental developments were broken down as follows: (1) Characterization of Q-100 laser glass: the measurement of its thermo-physical and thermo-optical properties to better define its engineering design parameters. (2) Improve pumping efficiency of Q-100: primarily by cladding Q-100 with a matching cladding glass which would act as a lens and improve the transfer of pumping energy from the flashlamp. (3) Reduce thermal loading of Q-100 by selective filtering of the flashlamp radiation and/or use of energy transfer schemes to increase that portion of the flashlamp radiation corresponding to the neodymium pump bands. (4) Increase the rupture strength of Q-100 to directly increase its power-handling capability. (5) Investigate alternate pump sources to improve efficiency.

  16. Attention Disengagement Difficulties among Average Weight Women Who Binge Eat.

    PubMed

    Lyu, Zhenyong; Zheng, Panpan; Jackson, Todd

    2016-07-01

    In this study, we assessed biases in attention disengagement among average-weight women with binge-eating (n = 33) and non-eating disordered controls (n = 31). Participants engaged in a spatial cueing paradigm task wherein they first observed high-calorie food, low-calorie food, or neutral images and then had to quickly locate targets in either the same or a different location. Within both groups, reaction times (RTs) were longer to valid-cued trials (i.e. target appearing in location of preceding cue) than to invalid-cued trials (i.e. targets appearing in location different from initial location), reflecting a general inhibition of return (IOR) effect. However, RT findings also indicated that women with BE had significantly more difficulty disengaging from high-calorie food images than did controls, even though neither group had disengagement problems related to other image types. Selective attention disengagement difficulties related to high-calorie food images suggested that increased reward sensitivity to such cues is related to binge eating risk. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association. PMID:26856539

  17. Analytic continuation by averaging Padé approximants

    NASA Astrophysics Data System (ADS)

    Schött, Johan; Locht, Inka L. M.; Lundin, Elin; Grânäs, Oscar; Eriksson, Olle; Di Marco, Igor

    2016-02-01

    The ill-posed analytic continuation problem for Green's functions and self-energies is investigated by revisiting the Padé approximants technique. We propose to remedy the well-known problems of the Padé approximants by performing an average of several continuations, obtained by varying the number of fitted input points and Padé coefficients independently. The suggested approach is then applied to several test cases, including Sm and Pr atomic self-energies, the Green's functions of the Hubbard model for a Bethe lattice and of the Haldane model for a nanoribbon, as well as two special test functions. The sensitivity to numerical noise and the dependence on the precision of the numerical libraries are analyzed in detail. The present approach is compared to a number of other techniques, i.e., the nonnegative least-squares method, the nonnegative Tikhonov method, and the maximum entropy method, and is shown to perform well for the chosen test cases. This conclusion holds even when the noise on the input data is increased to reach values typical for quantum Monte Carlo simulations. The ability of the algorithm to resolve fine structures is finally illustrated for two relevant test functions.
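
    The core idea, averaging continuations obtained by varying the number of input coefficients and the Padé order independently, can be sketched with SciPy's Taylor-to-Padé helper on a function whose continuation is known exactly (exp(x) here serves as a toy stand-in for a Green's function; the order choices are hypothetical):

    ```python
    import numpy as np
    from math import factorial
    from scipy.interpolate import pade

    # Taylor coefficients of exp(x) about 0, playing the role of the input data
    an = [1.0 / factorial(k) for k in range(12)]

    def pade_eval(coeffs, m, x):
        """Evaluate the Padé approximant with denominator order m at x."""
        p, q = pade(coeffs, m)      # scipy returns numerator/denominator poly1d
        return p(x) / q(x)

    # Average several continuations, varying the number of fitted input
    # coefficients and the Padé order independently, as the paper proposes
    estimates = [pade_eval(an[:n_coeff], m, 1.0)
                 for n_coeff in (8, 9, 10, 11)
                 for m in (2, 3, 4)]
    avg = np.mean(estimates)
    ```

    For noisy input (as from quantum Monte Carlo), individual approximants scatter much more strongly than in this clean example, and it is the averaging over the ensemble of continuations that restores a stable result.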

  18. MHD stability of torsatrons using the average method

    SciTech Connect

    Holmes, J.A.; Carreras, B.A.; Charlton, L.A.; Garcia, L.; Hender, T.C.; Hicks, H.R.; Lynch, V.E.

    1985-01-01

    The stability of torsatrons is studied using the average method, or stellarator expansion. Attention is focused upon the Advanced Toroidal Fusion Device (ATF), an l = 2, 12 field period, moderate aspect ratio configuration which, through a combination of shear and toroidally induced magnetic well, is stable to ideal modes. Using the vertical field (VF) coil system of ATF it is possible to enhance this stability by shaping the plasma to control the rotational transform. The VF coils are also useful tools for exploring the stability boundaries of ATF. By shifting the plasma inward along the major radius, the magnetic well can be removed, leading to three types of long wavelength instabilities: (1) A free boundary "edge mode" occurs when the rotational transform at the plasma edge is just less than unity. This mode is stabilized by the placement of a conducting wall at 1.5 times the plasma radius. (2) A free boundary global kink mode is observed at high β. When either β is lowered or a conducting wall is placed at the plasma boundary, the global mode is suppressed, and (3) an interchange mode is observed instead. For this interchange mode, calculations of the second, third, etc., most unstable modes are used to understand the nature of the degeneracy breaking induced by toroidal effects. Thus, the ATF configuration is well chosen for the study of torsatron stability limits.

  19. Reach-averaged sediment routing model of a canyon river

    USGS Publications Warehouse

    Wiele, S.M.; Wilcock, P.R.; Grams, P.E.

    2007-01-01

    Spatial complexity in channel geometry indicates that accurate prediction of sediment transport requires modeling in at least two dimensions. However, a one-dimensional model may be the only practical or possible alternative, especially for longer river reaches of practical concern in river management or landscape modeling. We have developed a one-dimensional model of the Colorado River through upper Grand Canyon that addresses this problem by reach averaging the channel properties and predicting changes in sand storage using separate source and sink functions coupled to the sand routing model. The model incorporates results from the application of a two-dimensional model of flow, sand transport, and bed evolution, and a new algorithm for setting the near-bed sand boundary condition for sand transported over an exposed bouldery bed. Model predictions were compared to measurements of sand discharge during intermittent tributary inputs and varying discharges controlled by dam releases. The model predictions generally agree well with the timing and magnitude of measured sand discharges but tend to overpredict sand discharge during the early stages of a high release designed to redistribute sand to higher-elevation deposits.

  20. Dosimetry in Mammography: Average Glandular Dose Based on Homogeneous Phantom

    NASA Astrophysics Data System (ADS)

    Benevides, Luis A.; Hintenlang, David E.

    2011-05-01

    The objective of this study was to demonstrate that a clinical dosimetry protocol that utilizes a dosimetric breast phantom series based on population anthropometric measurements can reliably predict the average glandular dose (AGD) imparted to the patient during a routine screening mammogram. AGD was calculated using entrance skin exposure and dose conversion factors based on fibroglandular content, compressed breast thickness, mammography unit parameters and modifying parameters for homogeneous phantom (phantom factor), compressed breast lateral dimensions (volume factor) and anatomical features (anatomical factor). The patient fibroglandular content was evaluated using a calibrated modified breast tissue equivalent homogeneous phantom series (BRTES-MOD) designed from anthropomorphic measurements of a screening mammography population and whose elemental composition was referenced to International Commission on Radiation Units and Measurements Report 44 and 46 tissues. The patient fibroglandular content, compressed breast thickness along with unit parameters and spectrum half-value layer were used to derive the currently used dose conversion factor (DgN). The study showed that the use of a homogeneous phantom, patient compressed breast lateral dimensions and patient anatomical features can affect AGD by as much as 12%, 3% and 1%, respectively. The protocol was found to be superior to existing methodologies. The clinical dosimetry protocol developed in this study can reliably predict the AGD imparted to an individual patient during a routine screening mammogram.