Science.gov

Sample records for 8-hr time-weighted average

  1. Occupational dimethylformamide exposure. 1. Diffusive sampling of dimethylformamide vapor for determination of time-weighted average concentration in air.

    PubMed

    Yasugi, T; Kawai, T; Mizunuma, K; Horiguchi, S; Iguchi, H; Ikeda, M

    1992-01-01

    A diffusive sampling method with water as the absorbent was examined in comparison with 3 conventional methods (diffusive sampling with carbon cloth as the absorbent, pumping through National Institute for Occupational Safety and Health (NIOSH) charcoal tubes, and pumping through NIOSH silica gel tubes) for measuring the time-weighted average concentration of dimethylformamide (DMF) in air. DMF vapors at constant concentrations of 3-110 ppm were generated by bubbling air at constant velocities through liquid DMF, followed by dilution with fresh air. Both types of diffusive samplers absorbed or adsorbed DMF in proportion to time (0.25-8 h) and concentration (3-58 ppm), except that the DMF adsorbed was below the measurable amount when carbon cloth samplers were exposed at 3 ppm for less than 1 h. When both diffusive samplers were loaded with DMF and kept in fresh air, the DMF in water samplers stayed unchanged for at least 12 h, whereas the DMF in carbon cloth samplers decayed with a half-time of 14.3 h. When the carbon cloth was taken out immediately after termination of DMF exposure, wrapped in aluminum foil, and kept refrigerated, however, there was no measurable decrease in DMF for at least 3 weeks. When air was drawn at 0.2 l/min, breakthrough of the silica gel tube took place at about 4,000 ppm·min (as the lower 95% confidence limit), whereas charcoal tubes tolerated even heavier exposures, suggesting that both tubes are suitable for measuring the 8-h time-weighted average of DMF at 10 ppm. PMID:1577523
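
    For reference, the 8-h TWA quantity that all of these sampler comparisons target is simply the exposure-duration-weighted mean concentration over the shift. A minimal sketch in Python (the segment durations and concentrations below are illustrative, not data from the study):

      # 8-h TWA from (duration_h, concentration_ppm) segments of a work shift
      segments = [(2.0, 12.0), (4.0, 8.0), (2.0, 15.0)]   # hypothetical exposure profile
      twa = sum(d * c for d, c in segments) / 8.0          # normalize by the full 8-h shift
      print(f"8-h TWA = {twa:.2f} ppm")                    # -> 8-h TWA = 10.75 ppm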

  2. Quantification of benzene, toluene, ethylbenzene and o-xylene in internal combustion engine exhaust with time-weighted average solid phase microextraction and gas chromatography mass spectrometry.

    PubMed

    Baimatova, Nassiba; Koziel, Jacek A; Kenessov, Bulat

    2015-05-11

    A new and simple method for benzene, toluene, ethylbenzene and o-xylene (BTEX) quantification in vehicle exhaust was developed, based on diffusion-controlled extraction onto a retracted solid-phase microextraction (SPME) fiber coating. The rationale was to develop a method based on existing and proven SPME technology that is feasible for field adaptation in developing countries. Passive sampling with the SPME fiber retracted into the needle extracted nearly two orders of magnitude less mass (n) than an exposed fiber (outside the needle), and sampling was in a time-weighted averaging (TWA) mode. Both the sampling time (t) and the fiber retraction depth (Z) were adjusted to quantify a wider range of gas concentrations (Cgas). Extraction and quantification are conducted in a non-equilibrium mode. The effects of Cgas, t, Z and temperature (T) were tested. In addition, the contribution of n extracted by the metallic surfaces of the needle assembly without the SPME coating was studied, as was the effect of sample storage time on loss of n. Retracted TWA-SPME extractions followed the theoretical model: the extracted n of BTEX was proportional to Cgas, t, the gas-phase diffusion coefficient (Dg) and T, and inversely proportional to Z. Method detection limits were 1.8, 2.7, 2.1 and 5.2 mg m⁻³ (0.51, 0.83, 0.66 and 1.62 ppm) for BTEX, respectively. The contribution of extraction onto metallic surfaces was reproducible and influenced by Cgas and t, and less so by T and Z. The new method was applied to measure BTEX in the exhaust gas of a 1995 Ford Crown Victoria and compared with a whole-gas sampling and direct injection method. PMID:25911428
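
    The "theoretical model" referred to in the abstract is Fick's first law of diffusion applied to the static air gap inside the needle: the extracted mass is n = Dg·A·Cgas·t/Z, where A is the cross-sectional area of the needle opening, so Cgas can be recovered from the mass measured by GC-MS. A minimal sketch, assuming illustrative values for Dg, A, Z, t and n (none are taken from the paper):

      # TWA-SPME with retracted fiber: n = Dg * A * Cgas * t / Z (Fick's first law)
      Dg = 8.8e-6    # diffusion coefficient of benzene in air, m^2/s (approximate)
      A  = 2.6e-7    # needle opening cross-section, m^2 (hypothetical)
      Z  = 5.0e-3    # fiber retraction depth, m
      t  = 600.0     # sampling time, s
      n  = 5.0e-12   # extracted mass from GC-MS, kg (hypothetical)
      C_gas = n * Z / (Dg * A * t)                 # kg/m^3
      print(f"C_gas = {C_gas * 1e6:.1f} mg/m^3")   # -> C_gas = 18.2 mg/m^3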

  3. [Evaluation of +Gz tolerance following simulation of 8-hr flight].

    PubMed

    Khomenko, M N; Bukhtiiarov, I V; Malashchuk, L S

    2005-01-01

    Pilots' tolerance of +Gz (head-to-pelvis) centrifugation was evaluated following simulation of a long flight in a single-seat fighter. The experiment involved 5 test subjects who were exposed to +Gz before and after a simulated 8-hr flight, with a growth gradient of 0.1 u/s, without anti-g suits and with muscles relaxed; in addition, limiting tolerance of intricate-profile +Gz loads of 2.0 to 9.0 units with a growth gradient of 1.0 u/s was tested with the subjects wearing anti-g suits (AGS) with a change-over pressure valve in the peak mode and using muscle straining and breathing maneuvers. To counteract the negative effects of extended flight, various seat configurations were used: a back inclination of 30 degrees to the +Gz vector, and changeable geometry with a back inclination of 55 degrees to the vector. The other countermeasures applied were a cool air shower, suit ventilation, physical exercises, lower body massage with the AGS, electrostimulation of the back and lumbar region, profiling of the supporting and soft parts of the seat, and 30-s exposure to +5 Gz. Hemodynamic and respiration parameters as well as body temperature were measured over the course of the 8-hr flight and during and shortly after centrifugation. According to the results of the investigation, seat inclination at 55 degrees to the +Gz vector and the tested system of countermeasures prevent degradation of tolerance of large (9 u.) loads following 8-hr flight simulation with the use of modern anti-g gear. PMID:16353624

  4. Understanding the effectiveness of precursor reductions in lowering 8-hr ozone concentrations--Part II. The eastern United States.

    PubMed

    Reynolds, Steven D; Blanchard, Charles L; Ziman, Stephen D

    2004-11-01

    Analyses of ozone (O3) measurements in conjunction with photochemical modeling were used to assess the feasibility of attaining the federal 8-hr O3 standard in the eastern United States. Various combinations of volatile organic compound (VOC) and oxides of nitrogen (NOx) emission reductions were effective in lowering modeled peak 1-hr O3 concentrations. VOC emissions reductions alone had only a modest impact on modeled peak 8-hr O3 concentrations. Anthropogenic NOx emissions reductions of 46-86% of 1996 base case values were needed to reach the level of the 8-hr standard in some areas. As NOx emissions are reduced, O3 production efficiency increases, which accounts for the less-than-proportional response of calculated 8-hr O3 levels. Such increases in O3 production efficiency also were noted in previous modeling work for central California. O3 production in some urban core areas, such as New York City and Chicago, IL, was found to be VOC-limited. In these areas, moderate NOx emissions reductions may be accompanied by increases in peak 8-hr O3 levels. The findings help to explain differences in historical trends in 1- and 8-hr O3 levels and have serious implications for the feasibility of attaining the 8-hr O3 standard in several areas of the eastern United States. PMID:15587557

  5. A ∼3.8 hr periodicity from an ultrasoft active galactic nucleus candidate

    SciTech Connect

    Lin, Dacheng; Irwin, Jimmy A.; Godet, Olivier; Webb, Natalie A.; Barret, Didier

    2013-10-10

    Very few galactic nuclei are found to show significant X-ray quasi-periodic oscillations (QPOs). After carefully modeling the noise continuum, we find that the ∼3.8 hr QPO in the ultrasoft active galactic nucleus candidate 2XMM J123103.2+110648 was significantly detected (∼5σ) in two XMM-Newton observations in 2005, but not in the one in 2003. The QPO root mean square (rms) is very high, increasing from ∼25% in 0.2-0.5 keV to ∼50% in 1-2 keV. The QPO probably corresponds to the low-frequency type in Galactic black hole X-ray binaries, considering its large rms and the probably low mass (∼10⁵ M☉) of the black hole in the nucleus. We also fit the soft X-ray spectra from the three XMM-Newton observations and find that they can be described with either pure thermal disk emission or optically thick low-temperature Comptonization. We see no clear X-ray emission in the two Swift observations in 2013, indicating lower source fluxes than in the XMM-Newton observations.

  6. Exposure Assessment for Carbon Dioxide Gas: Full Shift Average and Short-Term Measurement Approaches.

    PubMed

    Hill, R Jedd; Smith, Philip A

    2015-01-01

    Carbon dioxide (CO2) makes up a relatively small percentage of atmospheric gases, yet when used or produced in large quantities as a gas, a liquid, or a solid (dry ice), substantial airborne exposures may occur. Exposure to elevated CO2 concentrations may elicit toxicity even with oxygen concentrations that are not considered dangerous per se. Full-shift sampling approaches to measure 8-hr time-weighted average (TWA) CO2 exposures are used in many facilities where CO2 gas may be present. The need to assess rapidly fluctuating CO2 levels that may approach immediately dangerous to life or health (IDLH) conditions should also be a concern, and several methods for doing so using fast-responding measurement tools are discussed in this paper. Colorimetric detector tubes, a non-dispersive infrared (NDIR) detector, and a portable Fourier transform infrared (FTIR) spectroscopy instrument were evaluated in a laboratory environment using a flow-through standard generation system and were found to provide suitable accuracy and precision for assessing rapid fluctuations in CO2 concentration, with a possible humidity effect noted only for the detector tubes. These tools were used in the field to select locations and times for grab sampling and personal full-shift sampling, which provided laboratory analysis data to confirm IDLH conditions and 8-hr TWA exposure information. Fluctuating CO2 exposures are exemplified through field work results from several workplaces. In a brewery, brief CO2 exposures above the IDLH value occurred when large volumes of CO2-containing liquid were released for disposal, but 8-hr TWA exposures were not found to exceed the permissible level. In a frozen food production facility, nearly constant exposure to CO2 concentrations above the permissible 8-hr TWA value was seen, as well as brief exposures above the IDLH concentration associated with specific tasks where liquid CO2 was used. In a poultry processing facility the use of dry

  7. Time-weighted accumulations ap(τ) and Kp(τ)

    SciTech Connect

    Wrenn, G. L.

    1987-09-01

    The planetary geomagnetic indices Kp and ap are widely used in space geophysics. They provide an estimate of the maximum magnetic perturbation within a 3-hour period. Many geophysical properties are clearly related to the indices, through energy transfer from a common disturbance source, but direct correlation is often lacking because of poor matching between the frequency of sampling and the physical response functions. The index ap(τ) is a simple accumulation of the linear ap calculated with an attenuation factor τ included to take account of natural temporal relaxation. The case for ap(τ) and the related Kp(τ) is made using applications to the variability of the plasma environment in the ionosphere and inner magnetosphere. These examples of improved correlation suggest that time-weighted integration might profitably be applied to other indices.
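
    The abstract does not spell out the accumulation, but a time-weighted accumulation of this kind is commonly written ap(τ) = (1 - τ) Σ τⁿ·ap(-n), summing backward over the 3-hourly values so that the attenuation factor τ (0 ≤ τ < 1) controls how quickly past activity is forgotten. A minimal sketch under that assumption (the ap series and τ = 0.8 are illustrative):

      # Attenuated accumulation of 3-hourly ap values; the (1 - tau) factor
      # normalizes the weights so a constant series is returned unchanged.
      def ap_tau(ap_series, tau=0.8):
          total = 0.0
          for n, ap in enumerate(reversed(ap_series)):   # newest value weighted most
              total += (1.0 - tau) * tau ** n * ap
          return total

      print(ap_tau([4, 7, 15, 27, 48, 32, 18, 12]))      # hypothetical ap history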

  8. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, earlier work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method and, focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
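
    The scalar-weighted case of this kind of algorithm has a compact linear-algebra form: the optimal average is the eigenvector, belonging to the largest eigenvalue, of the weighted accumulator matrix M = Σ wᵢ qᵢ qᵢᵀ. A minimal sketch of that computation (the two test quaternions and equal weights are illustrative):

      import numpy as np

      def average_quaternion(quats, weights=None):
          # Scalar-weighted quaternion average: eigenvector of M = sum_i w_i q_i q_i^T
          # with the largest eigenvalue, which handles the q / -q sign ambiguity.
          q = np.asarray(quats, dtype=float)            # shape (N, 4), unit quaternions
          w = np.ones(len(q)) if weights is None else np.asarray(weights, dtype=float)
          M = (w[:, None] * q).T @ q                    # 4x4 symmetric accumulator
          eigvals, eigvecs = np.linalg.eigh(M)          # eigenvalues in ascending order
          avg = eigvecs[:, -1]                          # eigenvector of largest eigenvalue
          return avg if avg[0] >= 0 else -avg           # fix the overall sign

      q1 = np.array([1.0, 0.0, 0.0, 0.0])
      q2 = np.array([0.9998, 0.02, 0.0, 0.0]); q2 /= np.linalg.norm(q2)
      print(average_quaternion([q1, q2]))               # lies "between" q1 and q2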

  9. To Compare Time-Weighted Graphs to Evaluate the Inclination of the Acetabular Component of Patients Who Had Total Hip Replacement Surgery

    PubMed Central

    Tomak, Leman; Bek, Yuksel; Tomak, Yılmaz

    2015-01-01

    Time-weighted graphs are used to detect small shifts in statistical process control. The aim of this study is to evaluate the inclination of the acetabular component with the CUmulative SUM (CUSUM) chart, the Moving Average (MA) chart, and the Exponentially Weighted Moving Average (EWMA) chart. The data were obtained directly from thirty patients who had undergone total hip replacement surgery at Ondokuz Mayis University, Faculty of Medicine, and the inclination of the acetabular component was evaluated after surgery. CUSUM, Moving Average, and Exponentially Weighted Moving Average charts were used to evaluate the quality control process of acetabular component inclination; MINITAB Statistical Software 15.0 was used to generate these control charts. The assessment done with the time-weighted charts revealed that the acetabular inclination angles settled within the control limits and the process was under control, and that the variation within the control limits had a random pattern. This study shows that time-weighted quality control charts, which are used mostly in industry, can also be used in medicine, providing a faster visual decision. PMID:26413501
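
    Of the three charts, the EWMA chart is the most explicitly time-weighted: each plotted point is a geometrically weighted average of all past observations, with variance-based control limits. A minimal sketch (λ = 0.2 and L = 3 are conventional choices; the inclination angles are illustrative, not the study's data):

      import numpy as np

      def ewma_chart(x, lam=0.2, L=3.0):
          # EWMA control chart: z_i = lam*x_i + (1 - lam)*z_{i-1}, with control
          # limits mu +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)**(2i))).
          x = np.asarray(x, dtype=float)
          mu, sigma = x.mean(), x.std(ddof=1)   # in practice, from a reference sample
          z = np.empty_like(x)
          z[0] = lam * x[0] + (1 - lam) * mu
          for i in range(1, len(x)):
              z[i] = lam * x[i] + (1 - lam) * z[i - 1]
          i = np.arange(1, len(x) + 1)
          half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
          return z, mu - half, mu + half

      angles = [42.1, 44.8, 43.5, 45.9, 41.7, 44.2, 46.0, 43.1]   # degrees, hypothetical
      z, lcl, ucl = ewma_chart(angles)
      print(np.any((z < lcl) | (z > ucl)))   # False -> process within control limits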

  10. Development of accumulated heat stress index based on time-weighted function

    NASA Astrophysics Data System (ADS)

    Lee, Ji-Sun; Byun, Hi-Ryong; Kim, Do-Woo

    2016-05-01

    Heat stress accumulates in the human body when a person is exposed to a thermal condition for a long time. Considering this fact, we have defined the accumulated heat stress (AH) and have developed the accumulated heat stress index (AHI) to quantify the strength of heat stress. AH represents the heat stress accumulated over a 72-h period, calculated by the use of a time-weighted function, and the AHI is a standardized index developed by the use of an equiprobability transformation (from a fitted Weibull distribution to the standard normal distribution). To verify the advantage offered by the AHI, it was compared with four thermal indices used by national governments: the humidex, the heat index, the wet-bulb globe temperature, and the perceived temperature. AH and the AHI were found to provide better detection of thermal danger and to be more useful than the other indices. In particular, AH and the AHI detect deaths that were caused not only by extremely hot and humid weather, but also by the persistence of moderately hot and humid weather (for example, consecutive daily maximum temperatures of 28-32 °C), which the other indices fail to detect.
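
    The abstract gives the ingredients of AH and the AHI but not the exact formulas, so the following is only a plausible reading: a weight that decays with elapsed time over the last 72 hourly heat-stress values for AH, and a Weibull-to-normal equiprobability transform for the AHI. The weight function, the decay constant, and the synthetic data are all assumptions:

      import numpy as np
      from scipy import stats

      def accumulated_heat(hourly_stress, e=0.97):
          # Time-weighted 72-h accumulation; the most recent hour gets weight 1
          # and earlier hours decay geometrically (hypothetical weight function).
          h = np.asarray(hourly_stress[-72:], dtype=float)
          w = e ** np.arange(len(h))[::-1]
          return float(np.sum(w * h))

      def ahi(ah_value, ah_history):
          # Equiprobability transform: fitted Weibull CDF -> standard normal quantile.
          c, loc, scale = stats.weibull_min.fit(ah_history, floc=0.0)
          p = stats.weibull_min.cdf(ah_value, c, loc=loc, scale=scale)
          return float(stats.norm.ppf(p))

      rng = np.random.default_rng(0)
      history = [accumulated_heat(20 + 10 * rng.random(72)) for _ in range(200)]
      print(ahi(history[0], history))   # standardized index for one 72-h window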

  11. Neutron resonance averaging

    SciTech Connect

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.

  12. Paradoxes in Averages.

    ERIC Educational Resources Information Center

    Mitchem, John

    1989-01-01

    Examples used to illustrate Simpson's paradox for secondary students include probabilities, university admissions, batting averages, student-faculty ratios, and average and expected class sizes. Each result is explained. (DC)

  13. The average enzyme principle.

    PubMed

    Reznik, Ed; Chaudhary, Osman; Segrè, Daniel

    2013-09-01

    The Michaelis-Menten equation for an irreversible enzymatic reaction depends linearly on the enzyme concentration. Even if the enzyme concentration changes in time, this linearity implies that the amount of substrate depleted during a given time interval depends only on the average enzyme concentration. Here, we use a time re-scaling approach to generalize this result to a broad category of multi-reaction systems, whose constituent enzymes have the same dependence on time, e.g. they belong to the same regulon. This "average enzyme principle" provides a natural methodology for jointly studying metabolism and its regulation.
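
    The principle is easy to check numerically for a single Michaelis-Menten reaction: because dS/dt = -kcat·E(t)·S/(Km + S) is linear in E, rescaling time by u(t) = ∫E dt shows that the substrate remaining at time T depends on E(t) only through its time average. A minimal sketch with illustrative parameters (kcat, Km and the oscillating E(t) are not from the paper):

      import numpy as np
      from scipy.integrate import solve_ivp

      kcat, Km, T = 1.0, 0.5, 10.0
      E_osc = lambda t: 1.0 + 0.8 * np.sin(2 * np.pi * t / T)   # averages to 1.0 on [0, T]
      rhs = lambda t, S, enz: [-kcat * enz(t) * S[0] / (Km + S[0])]
      S_osc = solve_ivp(rhs, (0, T), [2.0], args=(E_osc,), rtol=1e-10, atol=1e-12).y[0, -1]
      S_avg = solve_ivp(rhs, (0, T), [2.0], args=(lambda t: 1.0,), rtol=1e-10, atol=1e-12).y[0, -1]
      print(S_osc, S_avg)   # equal to integration tolerance: only the average E matters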

  14. Intra- and inter-basin mercury comparisons: Importance of basin scale and time-weighted methylmercury estimates

    USGS Publications Warehouse

    Bradley, Paul M.; Journey, Celeste A.; Brigham, Mark E.; Burns, Douglas A.; Button, Daniel T.; Riva-Murray, Karen

    2013-01-01

    To assess inter-comparability of fluvial mercury (Hg) observations at substantially different scales, Hg concentrations, yields, and bivariate-relations were evaluated at nested-basin locations in the Edisto River, South Carolina and Hudson River, New York. Differences between scales were observed for filtered methylmercury (FMeHg) in the Edisto (attributed to wetland coverage differences) but not in the Hudson. Total mercury (THg) concentrations and bivariate-relationships did not vary substantially with scale in either basin. Combining results of this and a previously published multi-basin study, fish Hg correlated strongly with sampled water FMeHg concentration (ρ = 0.78; p = 0.003) and annual FMeHg basin yield (ρ = 0.66; p = 0.026). Improved correlation (ρ = 0.88; p < 0.0001) was achieved with time-weighted mean annual FMeHg concentrations estimated from basin-specific LOADEST models and daily streamflow. Results suggest reasonable scalability and inter-comparability for different basin sizes if wetland area or related MeHg-source-area metrics are considered.

  15. Intra- and inter-basin mercury comparisons: Importance of basin scale and time-weighted methylmercury estimates.

    PubMed

    Bradley, Paul M; Journey, Celeste A; Brigham, Mark E; Burns, Douglas A; Button, Daniel T; Riva-Murray, Karen

    2013-01-01

    To assess inter-comparability of fluvial mercury (Hg) observations at substantially different scales, Hg concentrations, yields, and bivariate-relations were evaluated at nested-basin locations in the Edisto River, South Carolina and Hudson River, New York. Differences between scales were observed for filtered methylmercury (FMeHg) in the Edisto (attributed to wetland coverage differences) but not in the Hudson. Total mercury (THg) concentrations and bivariate-relationships did not vary substantially with scale in either basin. Combining results of this and a previously published multi-basin study, fish Hg correlated strongly with sampled water FMeHg concentration (ρ = 0.78; p = 0.003) and annual FMeHg basin yield (ρ = 0.66; p = 0.026). Improved correlation (ρ = 0.88; p < 0.0001) was achieved with time-weighted mean annual FMeHg concentrations estimated from basin-specific LOADEST models and daily streamflow. Results suggest reasonable scalability and inter-comparability for different basin sizes if wetland area or related MeHg-source-area metrics are considered. PMID:22982552

  16. Average density in cosmology

    SciTech Connect

    Bonnor, W.B.

    1987-05-01

    The Einstein-Straus (1945) vacuole is here used to represent a bound cluster of galaxies embedded in a standard pressure-free cosmological model, and the average density of the cluster is compared with the density of the surrounding cosmic fluid. The two are nearly but not quite equal, and the more condensed the cluster, the greater the difference. A theoretical consequence of the discrepancy between the two densities is discussed. 25 references.

  17. Americans' Average Radiation Exposure

    SciTech Connect

    NA

    2000-08-11

    We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  18. Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average

    ERIC Educational Resources Information Center

    DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.

    2007-01-01

    Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…

  19. Effect of annealing time, weight pressure and cobalt doping on the electrical and magnetic behavior of barium titanate

    NASA Astrophysics Data System (ADS)

    Samuvel, K.; Ramachandran, K.

    2016-05-01

    BaTi0.5Co0.5O3 (BTCO) nanoparticles were prepared by the solid-state reaction technique using different starting materials, and the microstructure was examined by XRD, FESEM, BDS and VSM. X-ray diffraction and electron diffraction patterns showed that the nanoparticles were the tetragonal BTCO phase. The BTCO nanoparticles prepared from as-prepared titanium oxide, cobalt oxide and barium carbonate starting materials have spherical grain morphology, an average size of 65 nm and a fairly narrow size distribution. The nanoscale presence and the formation of the tetragonal perovskite phase, as well as the crystallinity, were detected using the mentioned techniques. Dielectric properties of the samples were measured at different frequencies. Broadband dielectric spectroscopy was applied to investigate the electrical properties of the disordered perovskite-like ceramics over a wide temperature range. The doped BTCO samples exhibited a low loss factor at 1 kHz and 1 MHz.

  20. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  21. Averaging Internal Consistency Reliability Coefficients

    ERIC Educational Resources Information Center

    Feldt, Leonard S.; Charter, Richard A.

    2006-01-01

    Seven approaches to averaging reliability coefficients are presented. Each approach starts with a unique definition of the concept of "average," and no approach is more correct than the others. Six of the approaches are applicable to internal consistency coefficients. The seventh approach is specific to alternate-forms coefficients. Although the…

  22. The Average of Rates and the Average Rate.

    ERIC Educational Resources Information Center

    Lindstrom, Peter

    1988-01-01

    Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
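
    The classic example behind these definitions is driving equal distances at two different speeds: the average rate of the whole trip is the harmonic mean of the rates, not their arithmetic mean. A short check (the speeds are illustrative):

      # Driving a fixed distance at 30 mph, then the same distance at 60 mph.
      r1, r2 = 30.0, 60.0
      arithmetic = (r1 + r2) / 2            # 45.0 -- the average of the rates
      harmonic = 2 / (1 / r1 + 1 / r2)      # 40.0 -- the average rate of the trip
      # Direct check: 2 miles take 1/30 + 1/60 = 1/20 hr, so 2 / (1/20) = 40 mph.
      print(arithmetic, harmonic)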

  23. High average power pockels cell

    DOEpatents

    Daly, Thomas P.

    1991-01-01

    A high average power pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.

  24. Determining GPS average performance metrics

    NASA Technical Reports Server (NTRS)

    Moore, G. V.

    1995-01-01

    Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego than over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.

  25. Vocal attractiveness increases by averaging.

    PubMed

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

    Vocal attractiveness has a profound influence on listeners, a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1], with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increasing by averaging, analogous to the well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. PMID:20129047

  26. Vibrational averages along thermal lines

    NASA Astrophysics Data System (ADS)

    Monserrat, Bartomeu

    2016-01-01

    A method is proposed for the calculation of vibrational quantum and thermal expectation values of physical properties from first principles. Thermal lines are introduced: these are lines in configuration space parametrized by temperature, such that the value of any physical property along them is approximately equal to the vibrational average of that property. The number of sampling points needed to explore the vibrational phase space is reduced by up to an order of magnitude when the full vibrational density is replaced by thermal lines. Calculations of the vibrational averages of several properties and systems are reported, namely, the internal energy and the electronic band gap of diamond and silicon, and the chemical shielding tensor of L-alanine. Thermal lines pave the way for complex calculations of vibrational averages, including large systems and methods beyond semilocal density functional theory.

  27. Averaging of globally coupled oscillators

    NASA Astrophysics Data System (ADS)

    Swift, James W.; Strogatz, Steven H.; Wiesenfeld, Kurt

    1992-03-01

    We study a specific system of symmetrically coupled oscillators using the method of averaging. The equations describe a series array of Josephson junctions. We concentrate on the dynamics near the splay-phase state (also known as the antiphase state, ponies on a merry-go-round, or rotating wave). We calculate the Floquet exponents of the splay-phase periodic orbit in the weak-coupling limit, and find that all of the Floquet exponents are purely imaginary; in fact, all the Floquet exponents are zero except for a single complex conjugate pair. Thus, nested two-tori of doubly periodic solutions surround the splay-phase state in the linearized averaged equations. We numerically integrate the original system, and find startling agreement with the averaging results on two counts: The observed ratio of frequencies is very close to the prediction, and the solutions of the full equations appear to be either periodic or doubly periodic, as they are in the averaged equations. Such behavior is quite surprising from the point of view of generic dynamical systems theory: one expects higher-dimensional tori and chaotic solutions. We show that the functional form of the equations, and not just their symmetry, is responsible for this nongeneric behavior.

  28. Averaging inhomogeneous cosmologies - a dialogue.

    NASA Astrophysics Data System (ADS)

    Buchert, T.

    The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.

  29. Polyhedral Painting with Group Averaging

    ERIC Educational Resources Information Center

    Farris, Frank A.; Tsao, Ryan

    2016-01-01

    The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…

  30. Averaging Robertson-Walker cosmologies

    NASA Astrophysics Data System (ADS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-04-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ωeff,0 ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state weff < -1/3 can be found for strongly phantom models.

  31. Model averaging in linkage analysis.

    PubMed

    Matthysse, Steven

    2006-06-01

    Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. (c) 2006 Wiley-Liss, Inc. PMID:16652369
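
    In code, the construction described in the abstract amounts to drawing the chain's first state exactly from the target distribution by rejection sampling, then running any Metropolis-type chain, whose detailed balance preserves that distribution at every subsequent step. A minimal one-dimensional sketch (the toy target density stands in for the empirical Bayes prior over model space; M = 1.1 is an assumed bound on it):

      import numpy as np

      rng = np.random.default_rng(0)

      # Unnormalized target density on [0, 1] (illustrative, bimodal).
      target = lambda x: np.exp(-50 * (x - 0.3) ** 2) + 0.5 * np.exp(-50 * (x - 0.7) ** 2)

      def rejection_start(M=1.1):
          # Draw the first point exactly from the target (uniform proposal);
          # requires target(x) <= M everywhere on [0, 1].
          while True:
              x = rng.uniform()
              if rng.uniform() < target(x) / M:
                  return x

      def metropolis(x, steps=1000, scale=0.1):
          # Random-walk Metropolis satisfies detailed balance w.r.t. target, so a
          # start drawn from the target keeps every later point target-distributed.
          for _ in range(steps):
              y = x + scale * rng.normal()
              if 0.0 <= y <= 1.0 and rng.uniform() < target(y) / target(x):
                  x = y
          return x

      print(metropolis(rejection_start()))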

  32. Ensemble averaging of acoustic data

    NASA Technical Reports Server (NTRS)

    Stefanski, P. K.

    1982-01-01

    A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions for the program. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System, consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.

  33. Flexible time domain averaging technique

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency-domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise was processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments were also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
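
    For contrast with the FTDA, the conventional TDA it improves on is simple to state: cut the signal into whole periods and average them, which suppresses everything that is not a harmonic of the averaging period. A minimal sketch with a simulated signal (the period and noise level are illustrative; the FTDA itself, with its per-harmonic comb adjustment and CZT reconstruction, is beyond this sketch):

      import numpy as np

      def time_domain_average(signal, period):
          # Conventional (synchronous) TDA: average integer-length periods.
          n = len(signal) // period
          return np.reshape(signal[: n * period], (n, period)).mean(axis=0)

      period = 100
      t = np.arange(50 * period)
      clean = np.sin(2 * np.pi * t / period)                      # gear-mesh-like component
      noisy = clean + np.random.default_rng(1).normal(0.0, 1.0, t.size)
      avg = time_domain_average(noisy, period)
      print(np.abs(avg - clean[:period]).max())                   # residual noise ~ 1/sqrt(50)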

  34. Circadian Activity Rhythms and Sleep in Nurses Working Fixed 8-hr Shifts.

    PubMed

    Kang, Jiunn-Horng; Miao, Nae-Fang; Tseng, Ing-Jy; Sithole, Trevor; Chung, Min-Huey

    2015-05-01

    Shift work is associated with adverse health outcomes. The aim of this study was to explore the effects of shift work on circadian activity rhythms (CARs) and objective and subjective sleep quality in nurses. Female day-shift (n = 16), evening-shift (n = 6), and night-shift (n = 13) nurses wore a wrist actigraph to monitor activity. We used cosinor analysis and time-frequency analysis to study CARs. Night-shift nurses exhibited the lowest values of circadian rhythm amplitude, acrophase, autocorrelation, and mean of the circadian relative power (CRP), whereas evening-shift workers exhibited the greatest standard deviation of the CRP among the three shift groups. That is, night-shift nurses had less robust CARs and evening-shift nurses had greater variations in CARs compared with nurses who worked other shifts. Our results highlight the importance of assessing CARs to prevent the adverse effects of shift work on nurses' health. PMID:25332463

  35. 40 CFR 91.1304 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    § 91.1304 Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive credit balance for a model year. Positive credits to be used in averaging may be obtained from...

  36. 40 CFR 91.1304 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    § 91.1304 Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive credit balance for a model year. Positive credits to be used in averaging may be obtained from...

  37. 40 CFR 91.1304 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    § 91.1304 Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive credit balance for a model year. Positive credits to be used in averaging may be obtained from...

  38. Below-Average, Average, and Above-Average Readers Engage Different and Similar Brain Regions while Reading

    ERIC Educational Resources Information Center

    Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri

    2006-01-01

    Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…

  39. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  40. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  41. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  42. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  43. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  44. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  45. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  46. Spectral averaging techniques for Jacobi matrices

    SciTech Connect

    Rio, Rafael del; Martinez, Carmen; Schulz-Baldes, Hermann

    2008-02-15

    Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner-type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.

  47. Averaging and Adding in Children's Worth Judgements

    ERIC Educational Resources Information Center

    Schlottmann, Anne; Harman, Rachel M.; Paine, Julie

    2012-01-01

    Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…

  48. Average-cost based robust structural control

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.

    1993-01-01

    A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.

  49. Spatial limitations in averaging social cues

    PubMed Central

    Florey, Joseph; Clifford, Colin W. G.; Dakin, Steven; Mareschal, Isabelle

    2016-01-01

    The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers’ ability to average social cues with their averaging of a non-social cue. Estimates of observers’ internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging are examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589

  50. Cosmological ensemble and directional averages of observables

    SciTech Connect

    Bonvin, Camille; Clarkson, Chris; Durrer, Ruth; Maartens, Roy; Umeh, Obinna

    2015-07-01

    We show that at second order, ensemble averages of observables and directional averages do not commute due to gravitational lensing—observing the same thing in many directions over the sky is not the same as taking an ensemble average. In principle this non-commutativity is significant for a variety of quantities that we often use as observables and can lead to a bias in parameter estimation. We derive the relation between the ensemble average and the directional average of an observable, at second order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focusing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance in a given observed direction is increased by gravitational lensing, whereas the directional average of the distance is decreased. For a generic observable, there exists a particular function of the observable that is not affected by second-order lensing perturbations. We also show that standard areas have an advantage over standard rulers, and we discuss the subtleties involved in averaging in the case of supernova observations.

  51. Spectral and parametric averaging for integrable systems

    NASA Astrophysics Data System (ADS)

    Ma, Tao; Serota, R. A.

    2015-05-01

    We analyze two theoretical approaches to ensemble averaging for integrable systems in quantum chaos, spectral averaging (SA) and parametric averaging (PA). For SA, we introduce a new procedure, namely, rescaled spectral averaging (RSA). Unlike traditional SA, it can describe the correlation function of spectral staircase (CFSS) and produce persistent oscillations of the interval level number variance (IV). PA while not as accurate as RSA for the CFSS and IV, can also produce persistent oscillations of the global level number variance (GV) and better describes saturation level rigidity as a function of the running energy. Overall, it is the most reliable method for a wide range of statistics.

  17. Spatial limitations in averaging social cues.

    PubMed

    Florey, Joseph; Clifford, Colin W G; Dakin, Steven; Mareschal, Isabelle

    2016-01-01

    The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers' ability to average social cues with their averaging of a non-social cue. Estimates of observers' internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging is examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589
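
    To make the equivalent-noise fit concrete, here is a minimal sketch using the standard two-parameter model (this is not the authors' code, and the data values below are hypothetical):

      # Equivalent-noise model: threshold^2 = (sigma_int^2 + sigma_ext^2) / n_samp,
      # giving estimates of internal noise (sigma_int) and effective sample size (n_samp).
      import numpy as np
      from scipy.optimize import curve_fit

      def eq_noise(sigma_ext, sigma_int, n_samp):
          # Predicted discrimination threshold at a given external-noise level.
          return np.sqrt((sigma_int**2 + sigma_ext**2) / n_samp)

      # Hypothetical external noise levels (deg) and measured thresholds (deg).
      sigma_ext = np.array([0.0, 2.0, 4.0, 8.0, 16.0])
      thresholds = np.array([1.1, 1.3, 1.9, 3.2, 6.1])

      (sigma_int, n_samp), _ = curve_fit(eq_noise, sigma_ext, thresholds, p0=[1.0, 2.0])
      print(f"internal noise = {sigma_int:.2f} deg, effective samples = {n_samp:.1f}")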

  18. Chronic Moderate Sleep Restriction in Older Long Sleepers and Older Average Duration Sleepers: A Randomized Controlled Trial

    PubMed Central

    Youngstedt, Shawn D.; Jean-Louis, Girardin; Bootzin, Richard R.; Kripke, Daniel F.; Cooper, Jonnifer; Dean, Lauren R.; Catao, Fabio; James, Shelli; Vining, Caitlyn; Williams, Natasha J.; Irwin, Michael R.

    2013-01-01

    Epidemiologic studies have consistently shown that sleeping < 7 hr and ≥ 8 hr is associated with increased mortality and morbidity. The risks of short sleep may be consistent with results from experimental sleep deprivation studies. However, there has been little study of chronic moderate sleep restriction and no evaluation of older adults who might be more vulnerable to negative effects of sleep restriction, given their age-related morbidities. Moreover, the risks of long sleep have scarcely been examined experimentally. Moderate sleep restriction might benefit older long sleepers who often spend excessive time in bed (TIB), in contrast to older adults with average sleep patterns. Our aims are: (1) to examine the ability of older long sleepers and older average sleepers to adhere to 60 min TIB restriction; and (2) to contrast effects of chronic TIB restriction in older long vs. average sleepers. Older adults (n=100) (60–80 yr) who sleep 8–9 hr per night and 100 older adults who sleep 6–7.25 hr per night will be examined at 4 sites over 5 years. Following a 2-week baseline, participants will be randomized to one of two 12-week treatments: (1) a sleep restriction involving a fixed sleep-wake schedule, in which TIB is reduced 60 min below each participant’s baseline TIB; (2) a control treatment involving no sleep restriction, but a fixed sleep schedule. Sleep will be assessed with actigraphy and a diary. Measures will include glucose tolerance, sleepiness, depressive symptoms, quality of life, cognitive performance, incidence of illness or accident, and inflammation. PMID:23811325

  19. Dynamic Multiscale Averaging (DMA) of Turbulent Flow

    SciTech Connect

    Richard W. Johnson

    2012-09-01

    A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical applications.
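
    The two averaging operations at the core of one DMA stage can be sketched as follows (a schematic illustration under our own assumptions, not the published DMA code):

      # One DMA stage: accumulate a running time average of DNS snapshots on the
      # fine grid, then volume-average the result onto a grid coarsened 2x per axis.
      import numpy as np

      def running_time_average(snapshots):
          # Incremental mean over DNS snapshots u(t, x, y, z).
          mean = np.zeros_like(snapshots[0])
          for k, u in enumerate(snapshots, start=1):
              mean += (u - mean) / k
          return mean

      def volume_average(field, factor=2):
          # Block-average a 3-D field onto a coarser mesh.
          nx, ny, nz = field.shape
          f = field[: nx - nx % factor, : ny - ny % factor, : nz - nz % factor]
          return f.reshape(f.shape[0] // factor, factor,
                           f.shape[1] // factor, factor,
                           f.shape[2] // factor, factor).mean(axis=(1, 3, 5))

      rng = np.random.default_rng(0)
      snaps = [rng.standard_normal((16, 16, 16)) for _ in range(50)]
      coarse = volume_average(running_time_average(snaps))
      print(coarse.shape)  # (8, 8, 8)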

  20. 40 CFR 1037.710 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Averaging. 1037.710 Section 1037.710 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM NEW HEAVY-DUTY MOTOR VEHICLES Averaging, Banking, and Trading for Certification §...

  1. Average Transmission Probability of a Random Stack

    ERIC Educational Resources Information Center

    Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

    2010-01-01

    The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
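
    The distinction the abstract draws, between averaging ln T and averaging T itself, can be illustrated with a toy Monte Carlo (this uses the incoherent series-composition rule as a stand-in, not the paper's recurrence relation; all numbers are hypothetical):

      # Contrast the average transmission <T> with the "typical" value exp(<ln T>)
      # for a stack of N slabs. Slabs are composed incoherently via the series
      # rule (1 - T)/T = sum_i (1 - T_i)/T_i.
      import numpy as np

      rng = np.random.default_rng(1)
      N, trials = 20, 10_000
      # Per-slab transmissions, jittered to mimic random gap widths.
      T_slab = np.clip(rng.normal(0.9, 0.05, size=(trials, N)), 0.5, 0.999)
      resistance = ((1.0 - T_slab) / T_slab).sum(axis=1)  # additive when incoherent
      T_stack = 1.0 / (1.0 + resistance)

      print("average of T      :", T_stack.mean())
      print("exp(average ln T) :", np.exp(np.log(T_stack).mean()))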

  2. Whatever Happened to the Average Student?

    ERIC Educational Resources Information Center

    Krause, Tom

    2005-01-01

    Mandated state testing, college entrance exams, and the perceived need for ever-higher grade point averages have raised the anxiety levels felt by many average students. Too much focus is placed on state test scores and college entrance standards, with not enough focus on the true level of the students. The author contends that…

  3. Determinants of College Grade Point Averages

    ERIC Educational Resources Information Center

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by…

  4. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...
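
    Purely to illustrate the arithmetic that such averaging provisions describe (hypothetical figures, not the regulatory text), the average is a sales-weighted mean of Family Emission Limits rounded to the nearest tenth of a g/km:

      # Sales-weighted average FEL, rounded to the nearest tenth of a g/km
      # as the excerpt requires. All figures below are hypothetical.
      families = [
          {"fel_g_per_km": 1.2, "sales": 5000},
          {"fel_g_per_km": 0.7, "sales": 12000},
          {"fel_g_per_km": 1.4, "sales": 3000},
      ]
      total_sales = sum(f["sales"] for f in families)
      avg = sum(f["fel_g_per_km"] * f["sales"] for f in families) / total_sales
      print(f"sales-weighted average FEL: {round(avg, 1)} g/km")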

  5. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...

  6. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...

  7. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...

  8. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...

  9. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  10. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  11. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  12. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...

  13. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...

  14. Averaged equations for distributed Josephson junction arrays

    NASA Astrophysics Data System (ADS)

    Bennett, Matthew; Wiesenfeld, Kurt

    2004-06-01

    We use an averaging method to study the dynamics of a transmission line studded by Josephson junctions. The averaged system is used as a springboard for studying experimental strategies which rely on spatial non-uniformity to achieve enhanced synchronization. A reduced model for the near resonant case elucidates in physical terms the key to achieving stable synchronized dynamics.

  15. New results on averaging theory and applications

    NASA Astrophysics Data System (ADS)

    Cândido, Murilo R.; Llibre, Jaume

    2016-08-01

    The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations, to find the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function in it is zero, the classical averaging theory does not provide information about the periodic solution associated to a non-simple zero. Here we provide sufficient conditions in order that the averaging theory can be applied also to non-simple zeros for studying their associated periodic solutions. Additionally, we do two applications of this new result for studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.
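
    For reference, the first-order setting that such averaging theory assumes can be written as follows (a textbook summary, not the paper's new result):

      % T-periodic system x' = eps F(t, x) + eps^2 R(t, x, eps); the averaged function is
      \[
        f(z) = \frac{1}{T}\int_0^T F(t, z)\,\mathrm{d}t .
      \]
      % If f(z_0) = 0 and det Df(z_0) != 0 (a simple zero), a T-periodic solution
      % exists near z_0 for small eps; the paper relaxes this simplicity condition.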

  16. The Hubble rate in averaged cosmology

    SciTech Connect

    Umeh, Obinna; Larena, Julien; Clarkson, Chris E-mail: julien.larena@gmail.com

    2011-03-01

    The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H_0, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate and the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions.

  17. Clarifying the Relationship between Average Excesses and Average Effects of Allele Substitutions.

    PubMed

    Alvarez-Castro, José M; Yang, Rong-Cai

    2012-01-01

    Fisher's concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance. PMID:22509178
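
    For orientation, a textbook form of the one-locus, two-allele relationship the abstract refers to is the following (our summary, not the paper's generalized contrasts; F is the inbreeding coefficient):

      % Average excess a_i of allele i versus its average effect alpha_i:
      \[
        a_i = (1 + F)\,\alpha_i ,
      \]
      % so the two quantities coincide under random mating (F = 0).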

  18. Light propagation in the averaged universe

    SciTech Connect

    Bagheri, Samae; Schwarz, Dominik J. E-mail: dschwarz@physik.uni-bielefeld.de

    2014-10-01

    Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.

  19. Averaging of Backscatter Intensities in Compounds

    PubMed Central

    Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.

    2002-01-01

    Low uncertainty measurements on pure element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass fraction averaging, based on the number of electrons or protons, termed “electron fraction,” which predicts backscatter yield better than mass fraction averaging. PMID:27446752

  20. Average shape of transport-limited aggregates.

    PubMed

    Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z

    2005-08-12

    We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry. PMID:16196793

  1. Average Shape of Transport-Limited Aggregates

    NASA Astrophysics Data System (ADS)

    Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z.

    2005-08-01

    We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry.

  2. Average-passage flow model development

    NASA Technical Reports Server (NTRS)

    Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark

    1989-01-01

    A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average passage flow model described the time averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations were executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low speed turbine was executed. The results of this simulation, including a comparison with experimental data, is discussed.

  3. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  4. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  5. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  6. Total pressure averaging in pulsating flows

    NASA Technical Reports Server (NTRS)

    Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.

    1972-01-01

    A number of total-pressure tubes were tested in a non-steady flow generator in which the fraction of the period during which pressure is at a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered within the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure, and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles. The experiments were performed at a pressure level of 1 bar, for Mach numbers up to near 1, and frequencies up to 3 kHz.

  7. Spacetime Average Density (SAD) cosmological measures

    SciTech Connect

    Page, Don N.

    2014-11-01

    The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.

  8. Average Passenger Occupancy (APO) in Your Community.

    ERIC Educational Resources Information Center

    Stenstrup, Al

    1995-01-01

    Provides details of an activity in which students in grades 4-10 determine the Average Passenger Occupancy (APO) in their community and develop, administer, and analyze a survey to determine attitudes toward carpooling. (DDR)

  9. Rotational averaging of multiphoton absorption cross sections

    SciTech Connect

    Friese, Daniel H. Beerepoot, Maarten T. P.; Ruud, Kenneth

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  10. Averaging Sampled Sensor Outputs To Detect Failures

    NASA Technical Reports Server (NTRS)

    Panossian, Hagop V.

    1990-01-01

    Fluctuating signals smoothed by taking consecutive averages. Sampling-and-averaging technique processes noisy or otherwise erratic signals from number of sensors to obtain indications of failures in complicated system containing sensors. Used under both transient and steady-state conditions. Useful in monitoring automotive engines, chemical-processing plants, powerplants, and other systems in which outputs of sensors contain noise or other fluctuations in measured quantities.
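
    A minimal sketch of the idea (the window length, band, and data are our assumptions, not the NASA design): smooth each sensor's samples with a running average of consecutive readings, then flag a failure when the smoothed value drifts outside a fixed band.

      import numpy as np

      def smooth(samples, window=8):
          # Average of consecutive samples (boxcar running mean).
          kernel = np.ones(window) / window
          return np.convolve(samples, kernel, mode="valid")

      rng = np.random.default_rng(2)
      healthy = 100.0 + rng.normal(0, 2.0, 500)        # noisy but nominal sensor
      failing = healthy.copy()
      failing[300:] += np.linspace(0, 25, 200)         # slow drift after a failure

      for name, signal in [("healthy", healthy), ("failing", failing)]:
          s = smooth(signal)
          alarms = np.flatnonzero(np.abs(s - 100.0) > 10.0)
          first = (int(alarms[0]) + 8) if alarms.size else None
          print(name, "first alarm near sample", first)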

  11. Monthly average polar sea-ice concentration

    USGS Publications Warehouse

    Schweitzer, Peter N.

    1995-01-01

    The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.

  12. Instrument to average 100 data sets

    NASA Technical Reports Server (NTRS)

    Tuma, G. B.; Birchenough, A. G.; Rice, W. J.

    1977-01-01

    An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points which are displayed on an oscilloscope screen to facilitate recording and is available in real time. Input can be any parameter which is expressed as a ±10-volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. This instrument has been used successfully on a 1975 Chevrolet V8 engine, and on a Continental 6-cylinder aircraft engine. While this instrument was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
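
    A software analogue of this curve averaging can be sketched as follows (our illustration of the principle, not the instrument's design; the waveform is hypothetical):

      # Resample each engine cycle onto 2048 points and average 100 cycles
      # point-by-point, as the instrument does in hardware.
      import numpy as np

      POINTS, CYCLES = 2048, 100
      rng = np.random.default_rng(3)
      theta = np.linspace(0.0, 2.0 * np.pi, POINTS, endpoint=False)

      # Hypothetical pressure-like waveform with cycle-to-cycle variation.
      cycles = [np.sin(theta) ** 2 * rng.uniform(0.8, 1.2) + rng.normal(0, 0.05, POINTS)
                for _ in range(CYCLES)]
      average_curve = np.mean(cycles, axis=0)   # the 2048-point averaged curve
      print(average_curve.shape)                # (2048,)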

  13. Time-averaging water quality assessment

    SciTech Connect

    Reddy, L.S.; Ormsbee, L.E.; Wood, D.J.

    1995-07-01

    While reauthorization of the Safe Drinking Water Act is pending, many water utilities are preparing to monitor and regulate levels of distribution system constituents that affect water quality. Most frequently, utilities are concerned about average concentrations rather than about tracing a particular constituent's path. Mathematical and computer models, which provide a quick estimate of average concentrations, could play an important role in this effort. Most water quality models deal primarily with isolated events, such as tracing a particular constituent through a distribution system. This article proposes a simple, time-averaging model that obtains average, maximum, and minimum constituent concentrations and ages throughout the network. It also computes percentage flow contribution and percentage constituent concentration. The model is illustrated using two water distribution systems, and results are compared with those obtained using a dynamic water quality model. Both models predict average water quality parameters with no significant deviations; the time-averaging approach is a simple and efficient alternative to the dynamic model.
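
    The flow-weighted mixing rule on which such network averaging rests can be sketched in a few lines (our illustration, not the article's code; the junction data are hypothetical):

      # Average concentration at a junction: the flow-weighted mean of inflows.
      def node_concentration(inflows):
          # inflows: list of (flow_rate, concentration) pairs for one junction.
          total_q = sum(q for q, _ in inflows)
          return sum(q * c for q, c in inflows) / total_q

      # Hypothetical junction fed by two pipes:
      print(node_concentration([(10.0, 0.8), (5.0, 0.2)]))  # 0.6, flow-weighted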

  14. Eight-hour TWA personal monitoring using a diffusive sampler and short-term stain tube

    SciTech Connect

    Gentry, S.J.; Walsh, P.T.

    1987-03-01

    A simple technique is described which allows the use of readily available short-term stain tubes for 8-hr monitoring. The 8-hr sample is collected on a Perkin-Elmer diffusive sampler and subsequently thermally desorbed through the appropriate stain tube. The versatility of the basic technique is explored, and an experimental validation of the method for chlorinated hydrocarbons is reported. In order to satisfy the criterion that the system should have a detection range compatible with exposure levels around the recommended 8-hr time-weighted average limit, the uptake on the diffusive sampler should be proportional to exposure and desorption from it should occur efficiently and at a rate compatible with the performance of the available stain tube. If these criteria are met, then the technique can form the basis of a simple, versatile and low cost method of personal monitoring for over 40 gases and vapors.
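
    The back-of-envelope step from sampler uptake to an 8-hr TWA concentration is the standard relation C = m / (U · t); a short sketch with hypothetical numbers (not the paper's data) follows:

      # TWA concentration from a diffusive sampler: collected mass m divided by
      # the air volume sampled (uptake rate U times exposure time t).
      def twa_concentration(mass_ug, uptake_ml_per_min, minutes=480):
          volume_m3 = uptake_ml_per_min * minutes * 1e-6   # mL -> m^3
          return mass_ug * 1e-3 / volume_m3                # ug -> mg, gives mg/m^3

      # 12 ug collected at 15 mL/min over an 8-hr (480 min) shift:
      print(f"{twa_concentration(12.0, 15.0):.2f} mg/m^3")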

  15. Average luminosity distance in inhomogeneous universes

    SciTech Connect

    Kostov, Valentin

    2010-04-01

    Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer not over all possible observers (cosmic), thus is more directly applicable to our observations. In contrast to previous studies, the averaging is exact, non-perturbative, and includes all non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. Voids aligned along a certain direction give rise to a distance modulus correction which increases with redshift and is caused by cumulative gravitational lensing. That correction is present even for small voids and depends on their density contrast, not on their radius. Averaging over all directions destroys the cumulative lensing correction even in a non-randomized simple cubic lattice of voids. At low redshifts, the average distance modulus correction does not vanish due to the peculiar velocities, despite the photon flux conservation argument. A formula for the maximal possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (a)have approximately constant densities in their interior and walls; and (b)are not in a deep nonlinear regime. The average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximal one after a single void diameter. That is traced to cancellations between the corrections from the fronts and backs of different voids. The results obtained

  16. Average luminosity distance in inhomogeneous universes

    NASA Astrophysics Data System (ADS)

    Kostov, Valentin Angelov

    Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic), thus it is more directly applicable to our observations. Unlike previous studies, the averaging is exact, non-perturbative, and includes all possible non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. For voids aligned in a certain direction, there is a cumulative gravitational lensing correction to the distance modulus that increases with redshift. That correction is present even for small voids and depends on the density contrast of the voids, not on their radius. Averaging over all directions destroys the cumulative correction even in a non-randomized simple cubic lattice of voids. Despite the well known argument for photon flux conservation, the average distance modulus correction at low redshifts is not zero due to the peculiar velocities. A formula for the maximum possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (1) have approximately constant densities in their interior and walls, (2) are not in a deep nonlinear regime. The actual average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximum. That is traced to cancellations between the corrections coming from the fronts and backs of different voids at the same redshift from the

  17. Books average previous decade of economic misery.

    PubMed

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
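
    The key construction, an 11-year trailing moving average of the economic misery index lagged so each year of books "sees" the previous decade, can be sketched as follows (hypothetical data, not the paper's series):

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(4)
      years = pd.RangeIndex(1930, 2010, name="year")
      econ = pd.DataFrame({
          "inflation": rng.uniform(0, 10, len(years)),
          "unemployment": rng.uniform(3, 12, len(years)),
      }, index=years)
      econ["misery"] = econ["inflation"] + econ["unemployment"]
      # Trailing 11-yr mean of the *previous* decade: shift by 1 before rolling.
      econ["misery_ma11"] = econ["misery"].shift(1).rolling(11).mean()

      # Correlate against a (here random) literary misery series.
      literary_misery = pd.Series(rng.normal(size=len(years)), index=years)
      print(literary_misery.corr(econ["misery_ma11"]))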

  18. Books Average Previous Decade of Economic Misery

    PubMed Central

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  19. The modulated average structure of mullite.

    PubMed

    Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X

    2015-06-01

    Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in case of 3:2 mullite. In any of these the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3 a weak average modulation results with slightly varying average occupation factors for the tetrahedral units. As a result, the real

  20. An improved moving average technical trading rule

    NASA Astrophysics Data System (ADS)

    Papailias, Fotis; Thomakos, Dimitrios D.

    2015-06-01

    This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
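
    A simplified 'long only' sketch of the strategy as described (the window, trail fraction, and price series below are our assumptions, not the authors' parameters):

      # Enter on a price/moving-average cross-over; exit when price falls below
      # a dynamic threshold that trails the running peak (a trailing stop).
      import numpy as np

      def run_strategy(prices, ma_window=20, trail=0.95):
          ma = np.convolve(prices, np.ones(ma_window) / ma_window, mode="valid")
          prices = prices[ma_window - 1:]
          in_pos, entry, peak, pnl = False, 0.0, 0.0, 0.0
          for p, m in zip(prices, ma):
              if not in_pos and p > m:        # simplified cross-over 'buy' signal
                  in_pos, entry, peak = True, p, p
              elif in_pos:
                  peak = max(peak, p)         # threshold trails the peak
                  if p < trail * peak:        # dynamic trailing stop triggers exit
                      pnl += p - entry
                      in_pos = False
          return pnl

      rng = np.random.default_rng(5)
      prices = 100 + np.cumsum(rng.normal(0.05, 1.0, 1000))
      print(f"cumulative return: {run_strategy(prices):.2f}")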

  1. Successive averages of firmly nonexpansive mappings

    SciTech Connect

    Flam, S.

    1994-12-31

    The problem considered here is to find common fixed points of (possibly infinitely) many firmly nonexpansive selfmappings in a Hilbert space. For this purpose we use averaged relaxations of the original mappings, the averages being Bochner integrals with respect to chosen measures. Judicious choices of such measures serve to enhance the convergence towards common fixed points. Since projection operators onto closed convex sets are firmly nonexpansive, the methods explored are applicable for solving convex feasibility problems. In particular, by varying the measures our analysis encompasses recent developments of so-called block-iterative algorithms. We demonstrate convergence theorems which cover and extend many known results.
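
    The convex-feasibility specialization can be sketched with a small example (our illustration with a uniform discrete measure, i.e. an equal-weight average of projections; the constraint sets are hypothetical):

      # Projections onto closed convex sets are firmly nonexpansive; iterating
      # their average converges to a point in the intersection when one exists.
      import numpy as np

      def project_halfspace(x, a, b):
          # Projection onto {y : a.y <= b}.
          viol = a @ x - b
          return x if viol <= 0 else x - viol * a / (a @ a)

      halfspaces = [(np.array([1.0, 0.0]), 1.0),
                    (np.array([0.0, 1.0]), 1.0),
                    (np.array([-1.0, -1.0]), 1.0)]
      x = np.array([5.0, -4.0])
      for _ in range(200):
          # Equal-weight average of the three projections.
          x = np.mean([project_halfspace(x, a, b) for a, b in halfspaces], axis=0)
      print(x)  # approximately satisfies all three constraints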

  2. Model averaging and muddled multimodel inferences

    USGS Publications Warehouse

    Cade, Brian S.

    2015-01-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the

  3. Attractors and Time Averages for Random Maps

    NASA Astrophysics Data System (ADS)

    Araujo, Vitor

    2006-07-01

    Considering random noise in finite dimensional parameterized families of diffeomorphisms of a compact finite dimensional boundaryless manifold M, we show the existence of time averages for almost every orbit of each point of M, imposing mild conditions on the families. Moreover these averages are given by a finite number of physical absolutely continuous stationary probability measures. We use this result to deduce that situations with infinitely many sinks and Henon-like attractors are not stable under random perturbations, e.g., Newhouse's and Colli's phenomena in the generic unfolding of a quadratic homoclinic tangency by a one-parameter family of diffeomorphisms.

  4. Model averaging and muddled multimodel inferences.

    PubMed

    Cade, Brian S

    2015-09-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t

  5. SOURCE TERMS FOR AVERAGE DOE SNF CANISTERS

    SciTech Connect

    K. L. Goluoglu

    2000-06-09

    The objective of this calculation is to generate source terms for each type of Department of Energy (DOE) spent nuclear fuel (SNF) canister that may be disposed of at the potential repository at Yucca Mountain. The scope of this calculation is limited to generating source terms for average DOE SNF canisters, and is not intended to be used for subsequent calculations requiring bounding source terms. This calculation is to be used in future Performance Assessment calculations, or other shielding or thermal calculations requiring average source terms.

  6. World average top-quark mass

    SciTech Connect

    Glenzinski, D.; /Fermilab

    2008-01-01

    This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.
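
    For readers unfamiliar with how a "world average" is formed, the core step is an inverse-variance weighted combination (illustrative inputs below, not the actual Top2008 numbers; correlations between measurements are ignored here):

      import numpy as np

      masses = np.array([172.4, 173.1, 171.8])   # GeV, hypothetical measurements
      sigmas = np.array([1.2, 0.9, 1.5])         # total uncertainties

      w = 1.0 / sigmas**2
      m_avg = np.sum(w * masses) / np.sum(w)
      sigma_avg = np.sqrt(1.0 / np.sum(w))
      chi2 = np.sum(w * (masses - m_avg) ** 2)   # consistency check
      print(f"m_top = {m_avg:.2f} +/- {sigma_avg:.2f} GeV, chi2 = {chi2:.2f}")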

  7. Average configuration of the induced venus magnetotail

    SciTech Connect

    McComas, D.J.; Spence, H.E.; Russell, C.T.

    1985-01-01

    In this paper we discuss the interaction of the solar wind flow with Venus and describe the morphology of magnetic field line draping in the Venus magnetotail. In particular, we describe the importance of the interplanetary magnetic field (IMF) X-component in controlling the configuration of field draping in this induced magnetotail, and using the results of a recently developed technique, we examine the average magnetic configuration of this magnetotail. The derived J x B forces must balance the average, steady state acceleration of, and pressure gradients in, the tail plasma. From this relation the average tail plasma velocity, lobe and current sheet densities, and average ion temperature have been derived. In this study we extend these results by making a connection between the derived consistent plasma flow speed and density, and the observational energy/charge range and sensitivity of the Pioneer Venus Orbiter (PVO) plasma analyzer, and demonstrate that if the tail is principally composed of O/sup +/, the bulk of the plasma should not be observable much of the time that the PVO is within the tail. Finally, we examine the importance of solar wind slowing upstream of the obstacle and its implications for the temperature of pick-up planetary ions, compare the derived ion temperatures with their theoretical maximum values, and discuss the implications of this process for comets and AMPTE-type releases.

  8. A Functional Measurement Study on Averaging Numerosity

    ERIC Educational Resources Information Center

    Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio

    2014-01-01

    In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…

  9. Cryo-Electron Tomography and Subtomogram Averaging.

    PubMed

    Wan, W; Briggs, J A G

    2016-01-01

    Cryo-electron tomography (cryo-ET) allows 3D volumes to be reconstructed from a set of 2D projection images of a tilted biological sample. It allows densities to be resolved in 3D that would otherwise overlap in 2D projection images. Cryo-ET can be applied to resolve structural features in complex native environments, such as within the cell. Analogous to single-particle reconstruction in cryo-electron microscopy, structures present in multiple copies within tomograms can be extracted, aligned, and averaged, thus increasing the signal-to-noise ratio and resolution. This reconstruction approach, termed subtomogram averaging, can be used to determine protein structures in situ. It can also be applied to facilitate more conventional 2D image analysis approaches. In this chapter, we provide an introduction to cryo-ET and subtomogram averaging. We describe the overall workflow, including tomographic data collection, preprocessing, tomogram reconstruction, subtomogram alignment and averaging, classification, and postprocessing. We consider theoretical issues and practical considerations for each step in the workflow, along with descriptions of recent methodological advances and remaining limitations. PMID:27572733
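
    The core align-and-average loop can be illustrated with a toy sketch (translations only, no rotations, no missing-wedge or CTF handling; every detail here is our assumption, not a production workflow):

      # Align each subvolume to a reference by FFT cross-correlation, undo the
      # detected shift, and average, boosting signal-to-noise as described above.
      import numpy as np

      def align_and_average(subvols, reference):
          avg = np.zeros_like(reference, dtype=float)
          ref_f = np.conj(np.fft.fftn(reference))
          for v in subvols:
              cc = np.real(np.fft.ifftn(np.fft.fftn(v) * ref_f))  # cross-correlation
              shift = np.unravel_index(np.argmax(cc), cc.shape)
              # Roll the detected shift back to the origin before accumulating.
              avg += np.roll(v, tuple(-s for s in shift), axis=(0, 1, 2))
          return avg / len(subvols)

      rng = np.random.default_rng(6)
      ref = np.zeros((16, 16, 16))
      ref[6:10, 6:10, 6:10] = 1.0   # toy "particle"
      subvols = [np.roll(ref, tuple(rng.integers(0, 4, 3)), axis=(0, 1, 2))
                 + rng.normal(0, 0.5, ref.shape) for _ in range(25)]
      print(np.round(align_and_average(subvols, ref).max(), 2))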

  10. Bayesian Model Averaging for Propensity Score Analysis

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  11. Cryo-Electron Tomography and Subtomogram Averaging.

    PubMed

    Wan, W; Briggs, J A G

    2016-01-01

    Cryo-electron tomography (cryo-ET) allows 3D volumes to be reconstructed from a set of 2D projection images of a tilted biological sample. It allows densities to be resolved in 3D that would otherwise overlap in 2D projection images. Cryo-ET can be applied to resolve structural features in complex native environments, such as within the cell. Analogous to single-particle reconstruction in cryo-electron microscopy, structures present in multiple copies within tomograms can be extracted, aligned, and averaged, thus increasing the signal-to-noise ratio and resolution. This reconstruction approach, termed subtomogram averaging, can be used to determine protein structures in situ. It can also be applied to facilitate more conventional 2D image analysis approaches. In this chapter, we provide an introduction to cryo-ET and subtomogram averaging. We describe the overall workflow, including tomographic data collection, preprocessing, tomogram reconstruction, subtomogram alignment and averaging, classification, and postprocessing. We consider theoretical issues and practical considerations for each step in the workflow, along with descriptions of recent methodological advances and remaining limitations.

  12. Initial Conditions in the Averaging Cognitive Model

    ERIC Educational Resources Information Center

    Noventa, S.; Massidda, D.; Vidotto, G.

    2010-01-01

    The initial state parameters s_0 and w_0 are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…
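
    For reference, the averaging model in which these parameters appear has the standard Information Integration Theory form (our notation):

      % Averaging model with initial-state parameters s_0, w_0:
      \[
        R = \frac{w_0 s_0 + \sum_{k=1}^{n} w_k s_k}{w_0 + \sum_{k=1}^{n} w_k},
      \]
      % where s_k are the scale values of the presented stimuli, w_k their
      % weights, and (s_0, w_0) carry the prior information discussed above.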

  13. Why Johnny Can Be Average Today.

    ERIC Educational Resources Information Center

    Sturrock, Alan

    1997-01-01

    During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…

  14. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.

    SciTech Connect

    BEN-ZVI, ILAN, DAYRAN, D.; LITVINENKO, V.

    2005-08-21

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  15. Averaging on Earth-Crossing Orbits

    NASA Astrophysics Data System (ADS)

    Gronchi, G. F.; Milani, A.

    The orbits of planet-crossing asteroids (and comets) can undergo close approaches and collisions with some major planet. This introduces a singularity in the N-body Hamiltonian, and the averaging of the equations of motion, traditionally used to compute secular perturbations, is undefined. We show that it is possible to define in a rigorous way some generalised averaged equations of motion, in such a way that the generalised solutions are unique and piecewise smooth. This is obtained, both in the planar and in the three-dimensional case, by means of the method of extraction of the singularities by Kantorovich. The modified distance used to approximate the singularity is the one used by Wetherill in his method to compute probability of collision. Some examples of averaged dynamics have been computed; a systematic exploration of the averaged phase space to locate the secular resonances should be the next step. Alice sighed wearily. "I think you might do something better with the time," she said, "than waste it asking riddles with no answers" (Alice in Wonderland, L. Carroll)

  16. Average entanglement for Markovian quantum trajectories

    SciTech Connect

    Vogelsberger, S.; Spehner, D.

    2010-11-15

    We study the evolution of the entanglement of noninteracting qubits coupled to reservoirs under monitoring of the reservoirs by means of continuous measurements. We calculate the average of the concurrence of the qubits' wave function over all quantum trajectories. For two qubits coupled to independent baths subjected to local measurements, this average decays exponentially with a rate depending on the measurement scheme only. This contrasts with the known disappearance of entanglement after a finite time for the density matrix in the absence of measurements. For two qubits coupled to a common bath, the mean concurrence can vanish at discrete times. Our analysis applies to arbitrary quantum jump or quantum state diffusion dynamics in the Markov limit. We discuss the best measurement schemes to protect entanglement in specific examples.

  17. New applications for high average power beams

    NASA Astrophysics Data System (ADS)

    Neau, E. L.; Turman, B. N.; Patterson, E. L.

    1993-06-01

    The technology base formed by the development of high peak power simulators, laser drivers, FELs, and ICF drivers from the early 1960s through the late 1980s is being extended to high average power short-pulse machines with the capabilities of supporting new types of manufacturing processes and performing new roles in environmental cleanup applications. This paper discusses a process for identifying and developing possible commercial applications, specifically those requiring very high average power levels of hundreds of kilowatts to perhaps megawatts. The authors discuss specific technology requirements and give examples of application development efforts. The application development work is directed at areas that can possibly benefit from the high specific energies attainable with short pulse machines.

  18. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

    We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.

  19. From cellular doses to average lung dose.

    PubMed

    Hofmann, W; Winkler-Heil, R

    2015-11-01

    Sensitive basal and secretory cells receive a wide range of doses in human bronchial and bronchiolar airways. Variations of cellular doses arise from the location of target cells in the bronchial epithelium of a given airway and the asymmetry and variability of airway dimensions of the lung among airways in a given airway generation and among bronchial and bronchiolar airway generations. To derive a single value for the average lung dose which can be related to epidemiologically observed lung cancer risk, appropriate weighting scenarios have to be applied. Potential biological weighting parameters are the relative frequency of target cells, the number of progenitor cells, the contribution of dose enhancement at airway bifurcations, the promotional effect of cigarette smoking and, finally, the application of appropriate regional apportionment factors. Depending on the choice of weighting parameters, detriment-weighted average lung doses can vary by a factor of up to 4 for given radon progeny exposure conditions.
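
    In essence, a detriment-weighted average lung dose of this kind is a weighted mean of generation-specific cellular doses. A minimal sketch, with hypothetical dose and weight values that are not taken from the paper:

      import numpy as np

      # Hypothetical cellular doses per airway generation (mGy) and biological
      # weights (e.g. relative target-cell frequency); both are assumptions.
      doses = np.array([1.2, 0.9, 0.7, 0.5])
      weights = np.array([0.4, 0.3, 0.2, 0.1])

      # Weighted mean; changing the weighting scenario changes the result,
      # which is how the factor-of-4 spread quoted above can arise.
      average_lung_dose = np.sum(weights * doses) / np.sum(weights)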

  20. High-average-power exciplex laser system

    NASA Astrophysics Data System (ADS)

    Sentis, M.

    The LUX high-average-power high-PRF exciplex laser (EL) system being developed at the Institut de Mecanique des Fluides de Marseille is characterized, and some preliminary results are presented. The fundamental principles and design criteria of ELs are reviewed, and the LUX components are described and illustrated, including a closed-circuit subsonic wind tunnel and a 100-kW-average power 1-kHz-PRF power pulser providing avalanche-discharge preionization by either an electron beam or an X-ray beam. Laser energy of 50 mJ has been obtained at wavelength 308 nm in the electron-beam mode (14.5 kV) using a 5300/190/10 mixture of Ne/Xe/HCl at pressure 1 bar.

  1. Apparent and average accelerations of the Universe

    SciTech Connect

    Bolejko, Krzysztof; Andersson, Lars E-mail: larsa@math.miami.edu

    2008-10-15

    In this paper we consider the relation between the volume deceleration parameter obtained within the Buchert averaging scheme and the deceleration parameter derived from supernova observations. This work was motivated by recent findings that there are models which, despite having Λ = 0, have volume deceleration parameter q^vol < 0. This opens the possibility that back-reaction and averaging effects may be used as an interesting alternative explanation to the dark energy phenomenon. We have calculated q^vol in some Lemaitre-Tolman models. For those models which are chosen to be realistic and which fit the supernova data, we find that q^vol > 0, while those models which we have been able to find which exhibit q^vol < 0 turn out to be unrealistic. This indicates that care must be exercised in relating the deceleration parameter to observations.

  2. Emissions averaging top option for HON compliance

    SciTech Connect

    Kapoor, S. )

    1993-05-01

    In one of its first major rule-setting directives under the CAA Amendments, EPA recently proposed tough new emissions controls for nearly two-thirds of the commercial chemical substances produced by the synthetic organic chemical manufacturing industry (SOCMI). However, the Hazardous Organic National Emission Standards for Hazardous Air Pollutants (HON) also affects several non-SOCMI processes. The author discusses proposed compliance deadlines, emissions averaging, and basic operating and administrative requirements.

  3. Stochastic Games with Average Payoff Criterion

    SciTech Connect

    Ghosh, M. K.; Bagchi, A.

    1998-11-15

    We study two-person stochastic games with a Polish state space and compact action spaces, under the average payoff criterion and a certain ergodicity condition. For the zero-sum game we establish the existence of a value and of stationary optimal strategies for both players. For the nonzero-sum case, the existence of a Nash equilibrium in stationary strategies is established under certain separability conditions.

  4. Iterative methods based upon residual averaging

    NASA Technical Reports Server (NTRS)

    Neuberger, J. W.

    1980-01-01

    Iterative methods for solving boundary value problems for systems of nonlinear partial differential equations are discussed. The methods involve subtracting an average of residuals from one approximation in order to arrive at a subsequent approximation. Two abstract methods in Hilbert space are given and application of these methods to quasilinear systems to give numerical schemes for such problems is demonstrated. Potential theoretic matters related to the iteration schemes are discussed.

  5. The Average Velocity in a Queue

    ERIC Educational Resources Information Center

    Frette, Vidar

    2009-01-01

    A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…

  6. Average Annual Rainfall over the Globe

    ERIC Educational Resources Information Center

    Agrawal, D. C.

    2013-01-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…

  7. Geomagnetic effects on the average surface temperature

    NASA Astrophysics Data System (ADS)

    Ballatore, P.

    Several previous results have shown that solar activity can be related to cloudiness and to surface solar radiation intensity (Svensmark and Friis-Christensen, J. Atmos. Sol. Terr. Phys., 59, 1225, 1997; Veretenenko and Pudovkin, J. Atmos. Sol. Terr. Phys., 61, 521, 1999). Here, the possible relationships between the averaged surface temperature and solar wind parameters or geomagnetic activity indices are investigated. The temperature data used are the monthly SST maps (generated at RAL and available from the related ESRIN/ESA database), which represent the averaged surface temperature with a spatial resolution of 0.5° x 0.5° and cover the entire globe. The interplanetary data and the geomagnetic data are from the USA National Space Science Data Center. The time interval considered is 1995-2000. Specifically, possible associations and/or correlations of the average temperature with the interplanetary magnetic field Bz component and with the Kp index are considered and differentiated taking into account separate geographic and geomagnetic planetary regions.

  8. Annual average radon concentrations in California residences.

    PubMed

    Liu, K S; Hayward, S B; Girman, J R; Moed, B A; Huang, F Y

    1991-09-01

    A study was conducted to determine the annual average radon concentrations in California residences, to determine the approximate fraction of the California population regularly exposed to radon concentrations of 4 pCi/l or greater, and to the extent possible, to identify regions of differing risk for high radon concentrations within the state. Annual average indoor radon concentrations were measured with passive (alpha track) samplers sent by mail and deployed by home occupants, who also completed questionnaires on building and occupant characteristics. For the 310 residences surveyed, concentrations ranged from 0.10 to 16 pCi/l, with a geometric mean of whole-house (bedroom and living room) average concentrations of 0.85 pCi/l and a geometric standard deviation of 1.91. A total of 88,000 California residences (0.8 percent) were estimated to have radon concentrations exceeding 4 pCi/l. When the state was divided into six zones based on geology, significant differences in geometric mean radon concentrations were found between several of the zones. Zones with high geometric means were the Sierra Nevada mountains, the valleys east of the Sierra Nevada, the central valley (especially the southern portion), and Ventura and Santa Barbara Counties. Zones with low geometric means included most coastal counties and the portion of the state from Los Angeles and San Bernardino Counties south.
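
    The geometric mean and geometric standard deviation quoted above are computed on log-transformed concentrations. A minimal sketch, using illustrative values rather than the survey data:

      import numpy as np

      radon = np.array([0.4, 0.85, 1.3, 2.1, 4.5])  # pCi/l, illustrative only

      logs = np.log(radon)
      gm = np.exp(logs.mean())             # geometric mean
      gsd = np.exp(logs.std(ddof=1))       # geometric standard deviation
      frac_above_4 = (radon > 4.0).mean()  # fraction exceeding 4 pCi/l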

  9. Disk-averaged synthetic spectra of Mars.

    PubMed

    Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-08-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.

  10. Model averaging, optimal inference, and habit formation

    PubMed Central

    FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.

    2014-01-01

    Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724

  11. Fast Optimal Transport Averaging of Neuroimaging Data.

    PubMed

    Gramfort, A; Peyré, G; Cuturi, M

    2015-01-01

    Knowing how the human brain is anatomically and functionally organized at the level of a group of healthy individuals or patients is the primary goal of neuroimaging research. Yet computing an average of brain imaging data defined over a voxel grid or a triangulation remains a challenge. Data are large, the geometry of the brain is complex, and the between-subject variability leads to spatially or temporally non-overlapping effects of interest. To address the problem of variability, data are commonly smoothed before performing a linear group averaging. In this work we build on ideas originally introduced by Kantorovich to propose a new algorithm that can efficiently average non-normalized data defined over arbitrary discrete domains using transportation metrics. We show how Kantorovich means can be linked to Wasserstein barycenters in order to take advantage of an entropic smoothing approach. This leads to a smooth convex optimization problem and an algorithm with strong convergence guarantees. We illustrate the versatility of this tool and its empirical behavior on functional neuroimaging data, functional MRI and magnetoencephalography (MEG) source estimates, defined on voxel grids and triangulations of the folded cortical surface. PMID:26221679
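
    For normalized histograms, the entropic smoothing referenced above leads to Sinkhorn-style fixed-point iterations for a regularized Wasserstein barycenter, in the spirit of iterative Bregman projections. A sketch under that assumption; it is not the paper's exact algorithm, which also handles non-normalized data:

      import numpy as np

      def sinkhorn_barycenter(hists, cost, weights, eps=1e-2, n_iter=200):
          """Entropy-regularized barycenter of normalized histograms.
          hists: (k, n) probability vectors; cost: (n, n) ground cost;
          weights: (k,) barycentric weights summing to 1."""
          K = np.exp(-cost / eps)                 # Gibbs kernel
          u = np.ones_like(hists, dtype=float)    # one scaling vector per input
          for _ in range(n_iter):
              v = hists / (K.T @ u.T).T           # match the input marginals
              a = np.prod((K @ v.T).T ** weights[:, None], axis=0)  # geo. mean
              u = a / (K @ v.T).T                 # match barycenter marginal
          return a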

  12. Modern average global sea-surface temperature

    USGS Publications Warehouse

    Schweitzer, Peter N.

    1993-01-01

    The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
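
    Averaging all images for the same calendar month across the 10-year series is a grouped mean that can skip missing cells. A minimal numpy sketch; the array names are assumptions, not taken from the data set's source code:

      import numpy as np

      # sst: (n_weeks, ny, nx) stack of weekly images, NaN where data are invalid
      # months: (n_weeks,) calendar month (1-12) of each weekly image
      def monthly_climatology(sst, months):
          months = np.asarray(months)
          out = np.empty((12,) + sst.shape[1:])
          for m in range(1, 13):
              # Pooling all years for one calendar month fills cells that are
              # missing in some years, and suppresses interannual variability.
              out[m - 1] = np.nanmean(sst[months == m], axis=0)
          return out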

  13. Digital Averaging Phasemeter for Heterodyne Interferometry

    NASA Technical Reports Server (NTRS)

    Johnson, Donald; Spero, Robert; Shaklan, Stuart; Halverson, Peter; Kuhnert, Andreas

    2004-01-01

    A digital averaging phasemeter has been built for measuring the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. This phasemeter performs well enough to enable interferometric measurements of distance with accuracy of the order of 100 pm and with the ability to track distance as it changes at a speed of as much as 50 cm/s. This phasemeter is unique in that it is a single, integral system capable of performing three major functions that, heretofore, have been performed by separate systems: (1) measurement of the fractional-cycle phase difference, (2) counting of multiple cycles of phase change, and (3) averaging of phase measurements over multiple cycles for improved resolution. This phasemeter also offers the advantage of making repeated measurements at a high rate: the phase is measured on every heterodyne cycle. Thus, for example, in measuring the relative phase of two signals having a heterodyne frequency of 10 kHz, the phasemeter would accumulate 10,000 measurements per second. At this high measurement rate, an accurate average phase determination can be made more quickly than is possible at a lower rate.
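
    Averaging phase over many heterodyne cycles is safest on the unwrapped phase, so that 2-pi wrap-arounds do not corrupt the mean. A sketch of the averaging and cycle-counting steps only, with an assumed input array; this illustrates the idea rather than the instrument's actual signal chain:

      import numpy as np

      # phase: per-cycle phase-difference samples in radians, e.g. 10,000
      # samples per second at a 10 kHz heterodyne frequency
      def average_phase(phase):
          unwrapped = np.unwrap(phase)    # remove 2*pi ambiguities
          mean_phase = unwrapped.mean()   # phase averaged over many cycles
          full_cycles = (unwrapped[-1] - unwrapped[0]) / (2 * np.pi)
          return mean_phase, full_cycles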

  14. Disk-averaged synthetic spectra of Mars.

    PubMed

    Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-08-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earthsized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronograph (TPFC) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPFI/ Darwin. PMID:16078866

  15. Fluctuations of wavefunctions about their classical average

    NASA Astrophysics Data System (ADS)

    Benet, L.; Flores, J.; Hernández-Saldaña, H.; Izrailev, F. M.; Leyvraz, F.; Seligman, T. H.

    2003-02-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.

  16. Collimation of average multiplicity in QCD jets

    NASA Astrophysics Data System (ADS)

    Arleo, François; Pérez Ramos, Redamy

    2009-11-01

    The collimation of average multiplicity inside quark and gluon jets is investigated in perturbative QCD in the modified leading logarithmic approximation (MLLA). The role of higher order corrections accounting for energy conservation and the running of the coupling constant leads to smaller multiplicity collimation as compared to leading logarithmic approximation (LLA) results. The collimation of jets produced in heavy-ion collisions has also been explored by using medium-modified splitting functions enhanced in the infrared sector. As compared to elementary collisions, the angular distribution of the jet multiplicity is found to broaden in QCD media at all energy scales.

  17. Average characteristics of partially coherent electromagnetic beams.

    PubMed

    Seshadri, S R

    2000-04-01

    Average characteristics of partially coherent electromagnetic beams are treated with the paraxial approximation. Azimuthally or radially polarized, azimuthally symmetric beams and linearly polarized dipolar beams are used as examples. The change in the mean squared width of the beam from its value at the location of the beam waist is found to be proportional to the square of the distance in the propagation direction. The proportionality constant is obtained in terms of the cross-spectral density as well as its spatial spectrum. The use of the cross-spectral density has advantages over the use of its spatial spectrum.
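
    In symbols, the stated quadratic growth of the mean squared width reads (generic paraxial form; the proportionality constant M is the quantity the abstract expresses through the cross-spectral density):

      w^2(z) = w^2(0) + M z^2 ,

    with z the distance from the beam waist along the propagation direction.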

  18. Auto-exploratory average reward reinforcement learning

    SciTech Connect

    Ok, DoKyeong; Tadepalli, P.

    1996-12-31

    We introduce a model-based average reward Reinforcement Learning method called H-learning and compare it with its discounted counterpart, Adaptive Real-Time Dynamic Programming, in a simulated robot scheduling task. We also introduce an extension to H-learning, which automatically explores the unexplored parts of the state space, while always choosing greedy actions with respect to the current value function. We show that this "Auto-exploratory H-learning" performs better than the original H-learning under previously studied exploration methods such as random, recency-based, or counter-based exploration.

  19. A Green's function quantum average atom model

    SciTech Connect

    Starrett, Charles Edward

    2015-05-21

    A quantum average atom model is reformulated using Green's functions. This allows integrals along the real energy axis to be deformed into the complex plane, the advantage being that sharp features such as resonances and bound states are broadened by a Lorentzian with a half-width chosen for numerical convenience. An implementation of this method therefore avoids numerically challenging resonance tracking and the search for weakly bound states, without changing the physical content or results of the model. A straightforward implementation results in up to a factor of 5 speed-up relative to an optimized orbital-based code.

  20. Average observational quantities in the timescape cosmology

    SciTech Connect

    Wiltshire, David L.

    2009-12-15

    We examine the properties of a recently proposed observationally viable alternative to homogeneous cosmology with smooth dark energy, the timescape cosmology. In the timescape model cosmic acceleration is realized as an apparent effect related to the calibration of clocks and rods of observers in bound systems relative to volume-average observers in an inhomogeneous geometry in ordinary general relativity. The model is based on an exact solution to a Buchert average of the Einstein equations with backreaction. The present paper examines a number of observational tests which will enable the timescape model to be distinguished from homogeneous cosmologies with a cosmological constant or other smooth dark energy, in current and future generations of dark energy experiments. Predictions are presented for comoving distance measures; H(z); the equivalent of the dark energy equation of state, w(z); the Om(z) measure of Sahni, Shafieloo, and Starobinsky; the Alcock-Paczynski test; the baryon acoustic oscillation measure, D_V; the inhomogeneity test of Clarkson, Bassett, and Lu; and the time drift of cosmological redshifts. Where possible, the predictions are compared to recent independent studies of similar measures in homogeneous cosmologies with dark energy. Three separate tests with indications of results in possible tension with the ΛCDM model are found to be consistent with the expectations of the timescape cosmology.

  1. Global atmospheric circulation statistics: Four year averages

    NASA Technical Reports Server (NTRS)

    Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.

    1987-01-01

    Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.

  2. MACHINE PROTECTION FOR HIGH AVERAGE CURRENT LINACS

    SciTech Connect

    Jordan, Kevin; Allison, Trent; Evans, Richard; Coleman, James; Grippo, Albert

    2003-05-01

    A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths is programmed into gate arrays; the output of the matrix is an allowable maximum average power limit. This power limit is enforced by the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron path) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.
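
    The mode matrix described here is, in effect, an 8 x 8 lookup from (Machine Mode, Beam Mode) to a permitted average power. A toy sketch of the lookup logic; the values and names are illustrative, not the FEL's actual table:

      # Rows: 8 Machine Modes (beam paths); columns: 8 Beam Modes (power
      # classes). Entries: allowable maximum average power in watts; 0
      # inhibits the beam.
      POWER_LIMIT = [
          [0, 0, 0, 0, 0, 0, 0, 0],  # hypothetical row: beam inhibited
          # ... seven more rows would encode the safe combinations ...
      ]

      def allowed_power(machine_mode: int, beam_mode: int) -> int:
          """Combinations outside the programmed matrix are unsafe,
          so default to inhibiting the beam."""
          try:
              return POWER_LIMIT[machine_mode][beam_mode]
          except IndexError:
              return 0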

  3. Motional averaging in a superconducting qubit.

    PubMed

    Li, Jian; Silveri, M P; Kumar, K S; Pirkkalainen, J-M; Vepsäläinen, A; Chien, W C; Tuorila, J; Sillanpää, M A; Hakonen, P J; Thuneberg, E V; Paraoanu, G S

    2013-01-01

    Superconducting circuits with Josephson junctions are promising candidates for developing future quantum technologies. Of particular interest is to use these circuits to study effects that typically occur in complex condensed-matter systems. Here we employ a superconducting quantum bit--a transmon--to perform an analogue simulation of motional averaging, a phenomenon initially observed in nuclear magnetic resonance spectroscopy. By modulating the flux bias of a transmon with controllable pseudo-random telegraph noise we create a stochastic jump of its energy level separation between two discrete values. When the jumping is faster than a dynamical threshold set by the frequency displacement of the levels, the initially separate spectral lines merge into a single, narrow, motional-averaged line. With sinusoidal modulation a complex pattern of additional sidebands is observed. We show that the modulated system remains quantum coherent, with modified transition frequencies, Rabi couplings, and dephasing rates. These results represent the first steps towards more advanced quantum simulations using artificial atoms. PMID:23361011

  4. High average power linear induction accelerator development

    SciTech Connect

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs.

  5. Average Gait Differential Image Based Human Recognition

    PubMed Central

    Chen, Jinyan; Liu, Jiansheng

    2014-01-01

    The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that AGDI has better identification and verification performance than GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition. PMID:24895648
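
    As described, the AGDI accumulates silhouette differences between adjacent frames and normalizes by the number of differences. A minimal sketch, assuming binary silhouette frames of equal size:

      import numpy as np

      def agdi(silhouettes):
          """Average gait differential image from a (T, H, W) binary sequence."""
          frames = np.asarray(silhouettes, dtype=float)
          diffs = np.abs(np.diff(frames, axis=0))  # |S[t+1] - S[t]| per pixel
          return diffs.mean(axis=0)                # accumulate and normalize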

  6. Quetelet, the average man and medical knowledge.

    PubMed

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine. PMID:23970171

  7. [Quetelet, the average man and medical knowledge].

    PubMed

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine. PMID:24141918

  8. Average deployments versus missile and defender parameters

    SciTech Connect

    Canavan, G.H.

    1991-03-01

    This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with two-fold less effectiveness. The next 10% could double constellation sizes. 5 refs., 7 figs.

  9. Asymmetric network connectivity using weighted harmonic averages

    NASA Astrophysics Data System (ADS)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the Netflix prize, and find a significant improvement using our method over a baseline.
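
    The core operation is a weighted harmonic mean, which lets strong (heavily weighted) connections dominate while weak ones contribute little. A sketch of that step alone; how the paper propagates it through the whole graph to obtain each node's GEN is not reproduced here:

      def weighted_harmonic_mean(values, weights):
          """Weighted harmonic mean: sum(w_i) / sum(w_i / x_i)."""
          return sum(weights) / sum(w / v for v, w in zip(values, weights))

    Because the weights entering the average differ depending on which endpoint is doing the "feeling", a closeness built this way can be asymmetric even on an undirected graph.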

  10. Scaling crossover for the average avalanche shape

    NASA Astrophysics Data System (ADS)

    Papanikolaou, Stefanos; Bohn, Felipe; Sommer, Rubem L.; Durin, Gianfranco; Zapperi, Stefano; Sethna, James P.

    2010-03-01

    Universality and the renormalization group claim to predict all behavior on long length and time scales asymptotically close to critical points. In practice, large simulations and heroic experiments have been needed to unambiguously test and measure the critical exponents and scaling functions. We announce here the measurement and prediction of universal corrections to scaling, applied to the temporal average shape of Barkhausen noise avalanches. We bypass the confounding factors of time-retarded interactions (eddy currents) by measuring thin permalloy films, and bypass thresholding effects and amplifier distortions by applying Wiener deconvolution. We show experimental shapes that are approximately symmetric, and measure the leading corrections to scaling. We solve a mean-field theory for the magnetization dynamics and calculate the relevant demagnetizing-field correction to scaling, showing qualitative agreement with the experiment. In this way, we move toward a quantitative theory useful at smaller time and length scales and farther from the critical point.

  11. Quetelet, the average man and medical knowledge.

    PubMed

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  12. Average oxidation state of carbon in proteins

    PubMed Central

    Dick, Jeffrey M.

    2014-01-01

    The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (ZC) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation–reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between ZC and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in ZC in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower ZC tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales. PMID:25165594

  13. Calculating Free Energies Using Average Force

    NASA Technical Reports Server (NTRS)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
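
    In symbols, and in the generic thermodynamic-integration form consistent with this abstract, the derivative of the free energy A along the selected coordinate \xi is minus the average of the instantaneous force F_\xi, so the free energy change follows by integration:

      \frac{dA}{d\xi} = -\langle F_\xi \rangle_\xi , \qquad
      \Delta A = -\int_{\xi_1}^{\xi_2} \langle F_\xi \rangle_\xi \, d\xi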

  14. Average oxidation state of carbon in proteins.

    PubMed

    Dick, Jeffrey M

    2014-11-01

    The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (Z(C)) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation-reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between Z(C) and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in Z(C) in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower Z(C) tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales.
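
    The elemental-ratio calculation follows from formal oxidation-state bookkeeping: relative to carbon, each H counts -1 and each N, O, and S counts +3, +2, and +2 respectively, plus any net charge. A sketch for a formula C_c H_h N_n O_o S_s; the sign conventions are standard, though the exact expression should be checked against the paper:

      def carbon_oxidation_state(c, h, n=0, o=0, s=0, charge=0):
          """Average oxidation state of carbon, Z_C, from a chemical formula:
          Z_C = (charge - h + 3n + 2o + 2s) / c."""
          return (charge - h + 3 * n + 2 * o + 2 * s) / c

      # Examples: methane CH4 -> -4; glycine C2H5NO2 -> +1
      assert carbon_oxidation_state(1, 4) == -4
      assert carbon_oxidation_state(2, 5, n=1, o=2) == 1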

  15. Global Average Brightness Temperature for April 2003

    NASA Technical Reports Server (NTRS)

    2003-01-01

    [figure removed for brevity, see original site] Figure 1

    This image shows average temperatures in April, 2003, observed by AIRS at an infrared wavelength that senses either the Earth's surface or any intervening cloud. Similar to a photograph of the planet taken with the camera shutter held open for a month, stationary features are captured while those obscured by moving clouds are blurred. Many continental features stand out boldly, such as our planet's vast deserts, and India, now at the end of its long, clear dry season. Also obvious are the high, cold Tibetan plateau to the north of India, and the mountains of North America. The band of yellow encircling the planet's equator is the Intertropical Convergence Zone (ITCZ), a region of persistent thunderstorms and associated high, cold clouds. The ITCZ merges with the monsoon systems of Africa and South America. Higher latitudes are increasingly obscured by clouds, though some features like the Great Lakes, the British Isles and Korea are apparent. The highest latitudes of Europe and Eurasia are completely obscured by clouds, while Antarctica stands out cold and clear at the bottom of the image.

    The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.

  16. Interpreting Sky-Averaged 21-cm Measurements

    NASA Astrophysics Data System (ADS)

    Mirocha, Jordan

    2015-01-01

    Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation

  17. A Microgenetic Analysis of Strategic Variability in Gifted and Average-Ability Children

    ERIC Educational Resources Information Center

    Steiner, Hillary Hettinger

    2006-01-01

    Many researchers have described cognitive differences between gifted and average-performing children. Regarding strategy use, the gifted advantage is often associated with differences such as greater knowledge of strategies, quicker problem solving, and the ability to use strategies more appropriately. The current study used microgenetic methods…

  18. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... and corporate pool average sulfur level determined? (a) The annual refinery or importer average and corporate pool average gasoline sulfur level is calculated as follows: [formula displayed as a graphic in the CFR] Where: Sa = The...
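
    The formula shown as a graphic in the CFR is, in substance, a volume-weighted average of batch sulfur levels. A hedged reconstruction; the variable names follow the regulation's "Where:" list, which is truncated above:

      S_a = \frac{\sum_i V_i \, S_i}{\sum_i V_i}

    where V_i is the volume of batch i and S_i is its sulfur content in ppm.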

  19. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  20. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  1. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  2. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  3. Instantaneous, phase-averaged, and time-averaged pressure from particle image velocimetry

    NASA Astrophysics Data System (ADS)

    de Kat, Roeland

    2015-11-01

    Recent work on pressure determination using velocity data from particle image velocimetry (PIV) resulted in approaches that allow for instantaneous and volumetric pressure determination. However, applying these approaches is not always feasible (e.g. due to resolution, access, or other constraints) or desired. In those cases pressure determination approaches using phase-averaged or time-averaged velocity provide an alternative. To assess the performance of these different pressure determination approaches against one another, they are applied to a single data set and their results are compared with each other and with surface pressure measurements. For this assessment, the data set of a flow around a square cylinder (de Kat & van Oudheusden, 2012, Exp. Fluids 52:1089-1106) is used. RdK is supported by a Leverhulme Trust Early Career Fellowship.
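
    A common starting point for PIV-based pressure evaluation (generic, not specific to this talk) is the pressure Poisson equation obtained by taking the divergence of the incompressible Navier-Stokes momentum equation:

      \nabla^2 p = -\rho \, \nabla \cdot \left[ (\mathbf{u} \cdot \nabla)\,\mathbf{u} \right]

    The phase-averaged and time-averaged variants replace the instantaneous velocity with its averaged counterpart, which introduces additional (e.g. Reynolds-stress) terms that must be measured or modeled.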

  4. Determining average path length and average trapping time on generalized dual dendrimer

    NASA Astrophysics Data System (ADS)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. We also discuss the influence of the coordination number on the trapping efficiency.
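
    Concretely, the APL is the mean shortest-path distance over node pairs, and the ATT is a mean first-passage time to an absorbing trap, which on a finite network can be obtained by a linear solve on the random-walk transition matrix. A generic numerical sketch, not the paper's closed-form derivation:

      import numpy as np
      import networkx as nx

      def apl_and_att(G, trap):
          apl = nx.average_shortest_path_length(G)
          others = [v for v in G if v != trap]
          A = nx.to_numpy_array(G, nodelist=others + [trap])
          P = A / A.sum(axis=1, keepdims=True)  # random-walk transition matrix
          Q = P[:-1, :-1]                       # transitions among non-trap nodes
          # Mean first-passage times t solve (I - Q) t = 1.
          t = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
          return apl, t.mean()                  # ATT: average over start nodes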

  5. 7 CFR 51.577 - Average midrib length.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Average midrib length. 51.577 Section 51.577... STANDARDS) United States Standards for Celery Definitions § 51.577 Average midrib length. Average midrib length means the average length of all the branches in the outer whorl measured from the point...

  6. 7 CFR 51.577 - Average midrib length.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average midrib length. 51.577 Section 51.577... STANDARDS) United States Standards for Celery Definitions § 51.577 Average midrib length. Average midrib length means the average length of all the branches in the outer whorl measured from the point...

  7. 7 CFR 760.640 - National average market price.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average... average quality loss factors that are reflected in the market by county or part of a county. (c)...

  8. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Compliance on average. 80.67 Section...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.67 Compliance on average. The requirements... with one or more of the requirements of § 80.41 is determined on average (“averaged gasoline”)....

  9. 47 CFR 80.759 - Average terrain elevation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 5 2012-10-01 2012-10-01 false Average terrain elevation. 80.759 Section 80... Average terrain elevation. (a)(1) Draw radials from the antenna site for each 45 degrees of azimuth...) Calculate the height above average terrain by averaging the values calculated for each radial....
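
    Numerically, the procedure reduces to averaging terrain elevations along each of the eight radials and then averaging the radial values. A sketch assuming pre-sampled elevation profiles; the distance range the rule prescribes along each radial is elided in the excerpt above:

      def height_above_average_terrain(site_elevation_m, radial_profiles):
          """radial_profiles: eight lists of terrain elevations (m), one per
          45-degree radial, sampled over the prescribed distance range."""
          radial_means = [sum(p) / len(p) for p in radial_profiles]
          average_terrain = sum(radial_means) / len(radial_means)
          return site_elevation_m - average_terrain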

  10. 47 CFR 80.759 - Average terrain elevation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 5 2011-10-01 2011-10-01 false Average terrain elevation. 80.759 Section 80... Average terrain elevation. (a)(1) Draw radials from the antenna site for each 45 degrees of azimuth...) Calculate the height above average terrain by averaging the values calculated for each radial....

  11. 47 CFR 80.759 - Average terrain elevation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 5 2014-10-01 2014-10-01 false Average terrain elevation. 80.759 Section 80... Average terrain elevation. (a)(1) Draw radials from the antenna site for each 45 degrees of azimuth...) Calculate the height above average terrain by averaging the values calculated for each radial....

  12. 47 CFR 80.759 - Average terrain elevation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 5 2013-10-01 2013-10-01 false Average terrain elevation. 80.759 Section 80... Average terrain elevation. (a)(1) Draw radials from the antenna site for each 45 degrees of azimuth...) Calculate the height above average terrain by averaging the values calculated for each radial....

  13. 47 CFR 80.759 - Average terrain elevation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80... Average terrain elevation. (a)(1) Draw radials from the antenna site for each 45 degrees of azimuth...) Calculate the height above average terrain by averaging the values calculated for each radial....

  14. 20 CFR 226.62 - Computing average monthly compensation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Computing average monthly compensation. 226... RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation...

  15. 20 CFR 226.62 - Computing average monthly compensation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computing average monthly compensation. 226... RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation...

  16. Kinetic energy equations for the average-passage equation system

    NASA Technical Reports Server (NTRS)

    Johnson, Richard W.; Adamczyk, John J.

    1989-01-01

    Important kinetic energy equations derived from the average-passage equation sets are documented, with a view to their interrelationships. These kinetic equations may be used for closing the average-passage equations. The turbulent kinetic energy transport equation used is formed by subtracting the mean kinetic energy equation from the averaged total instantaneous kinetic energy equation. The aperiodic kinetic energy equation, averaged steady kinetic energy equation, averaged unsteady kinetic energy equation, and periodic kinetic energy equation, are also treated.
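
    The subtraction described here is the Reynolds-style decomposition of kinetic energy: averaging the instantaneous kinetic energy and removing the mean-flow part leaves the turbulent kinetic energy. In generic form (a sketch, not the full average-passage algebra):

      \tfrac{1}{2}\,\overline{u_i u_i} = \tfrac{1}{2}\,\bar{u}_i\,\bar{u}_i + k ,
      \qquad k \equiv \tfrac{1}{2}\,\overline{u_i' u_i'}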

  17. Changes in average length of stay and average charges generated following institution of PSRO review.

    PubMed Central

    Westphal, M; Frazier, E; Miller, M C

    1979-01-01

    A five-year review of accounting data at a university hospital shows that immediately following institution of concurrent PSRO admission and length of stay review of Medicare-Medicaid patients, there was a significant decrease in length of stay and a fall in average charges generated per patient against the inflationary trend. Similar changes did not occur for the non-Medicare-Medicaid patients who were not reviewed. The observed changes occurred even though the review procedure rarely resulted in the denial of services to patients, suggesting an indirect effect of review. PMID:393658

  18. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment... Carbon-Related Exhaust Emissions § 600.510-12 Calculation of average fuel economy and average carbon.... (iv) (2) Average carbon-related exhaust emissions will be calculated to the nearest one gram per...

  19. Cost averaging techniques for robust control of flexible structural systems

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.; Crawley, Edward F.

    1991-01-01

    Viewgraphs on cost averaging techniques for robust control of flexible structural systems are presented. Topics covered include: modeling of parameterized systems; average cost analysis; reduction of parameterized systems; and static and dynamic controller synthesis.

  20. Average American 15 Pounds Heavier Than 20 Years Ago

    MedlinePlus

    ... page: https://medlineplus.gov/news/fullstory_160233.html Average American 15 Pounds Heavier Than 20 Years Ago ... since the late 1980s and early 1990s, the average American has put on 15 or more additional ...

  1. 7 CFR 760.640 - National average market price.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...

  2. 20 CFR 404.220 - Average-monthly-wage method.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You...

  3. 27 CFR 19.37 - Average effective tax rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Average effective tax rate..., DEPARTMENT OF THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Taxes Effective Tax Rates § 19.37 Average effective tax rate. (a) The proprietor may establish an average effective tax rate for any...

  4. Sample Size Bias in Judgments of Perceptual Averages

    ERIC Educational Resources Information Center

    Price, Paul C.; Kimura, Nicole M.; Smith, Andrew R.; Marshall, Lindsay D.

    2014-01-01

    Previous research has shown that people exhibit a sample size bias when judging the average of a set of stimuli on a single dimension. The more stimuli there are in the set, the greater people judge the average to be. This effect has been demonstrated reliably for judgments of the average likelihood that groups of people will experience negative,…

  5. 7 CFR 1410.44 - Average adjusted gross income.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Average adjusted gross income. 1410.44 Section 1410... Average adjusted gross income. (a) Benefits under this part will not be available to persons or legal entities whose average adjusted gross income exceeds $1,000,000 or as further specified in part...

  6. 34 CFR 668.196 - Average rates appeals.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Average rates appeals. 668.196 Section 668.196....196 Average rates appeals. (a) Eligibility. (1) You may appeal a notice of a loss of eligibility under... calculated as an average rate under § 668.183(d)(2). (2) You may appeal a notice of a loss of...

  7. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE... ACT § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each...

  8. 34 CFR 668.215 - Average rates appeals.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Average rates appeals. 668.215 Section 668.215... Average rates appeals. (a) Eligibility. (1) You may appeal a notice of a loss of eligibility under § 668... as an average rate under § 668.202(d)(2). (2) You may appeal a notice of a loss of eligibility...

  9. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false Computation of average terrain elevation. 1.959... Procedures § 1.959 Computation of average terrain elevation. Except as otherwise specified in § 90.309(a)(4) of this chapter, average terrain elevation must be calculated by computer using elevations from a...

  10. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false Computation of average terrain elevation. 1.959... Procedures § 1.959 Computation of average terrain elevation. Except as otherwise specified in § 90.309(a)(4) of this chapter, average terrain elevation must be calculated by computer using elevations from a...

  11. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Computation of average terrain elevation. 1.959... of average terrain elevation. Except as otherwise specified in § 90.309(a)(4) of this chapter, average terrain elevation must be calculated by computer using elevations from a 30 second point or...

  12. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false Computation of average terrain elevation. 1.959... Procedures § 1.959 Computation of average terrain elevation. Except as otherwise specified in § 90.309(a)(4) of this chapter, average terrain elevation must be calculated by computer using elevations from a...

  13. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Computation of average terrain elevation. 1.959... of average terrain elevation. Except as otherwise specified in § 90.309(a)(4) of this chapter, average terrain elevation must be calculated by computer using elevations from a 30 second point or...

  14. 78 FR 16711 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-18

    ... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2011 was $28,893.40. The average annual cost to confine an inmate in a Community...

  15. 76 FR 6161 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-03

    ... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2009 was $25,251. The average annual cost to confine an inmate in a Community Corrections...

  16. 76 FR 57081 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-15

    ... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2010 was $28,284. The average annual cost to confine an inmate in a Community Corrections...

  17. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...

  18. 7 CFR 51.2561 - Average moisture content.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except...

  19. 40 CFR 62.15210 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur... 40 Protection of Environment 8 2010-07-01 2010-07-01 false How do I convert my 1-hour arithmetic... convert my 1-hour arithmetic averages into appropriate averaging times and units? (a) Use the equation...
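
    The truncated preview above points to an equation for converting 1-hour arithmetic averages into daily geometric average concentrations. As a rough illustration only (this is not the regulatory equation, which should be taken from 40 CFR part 60 itself), a daily geometric average of positive hourly values is the exponential of the mean logarithm; the function name and sample values below are ours.

```python
import math

def daily_geometric_average(hourly_ppm):
    """Daily geometric average of 1-hour arithmetic averages.

    Illustrative sketch of a geometric mean, exp(mean(ln c_i));
    consult 40 CFR part 60, section 4.3 for the regulatory equation.
    """
    if any(c <= 0 for c in hourly_ppm):
        raise ValueError("geometric averaging requires positive concentrations")
    return math.exp(sum(math.log(c) for c in hourly_ppm) / len(hourly_ppm))

# Example: 24 hourly sulfur dioxide averages in ppm (made-up values)
hourly = [0.030, 0.028, 0.035, 0.031] * 6
print(f"daily geometric average: {daily_geometric_average(hourly):.4f} ppm")
```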

  20. MODEL AVERAGING BASED ON KULLBACK-LEIBLER DISTANCE

    PubMed Central

    Zhang, Xinyu; Zou, Guohua; Carroll, Raymond J.

    2016-01-01

    This paper proposes a model averaging method based on Kullback-Leibler distance under a homoscedastic normal error term. The resulting model average estimator is proved to be asymptotically optimal. When combining least squares estimators, the model average estimator is shown to have the same large sample properties as the Mallows model average (MMA) estimator developed by Hansen (2007). We show via simulations that, in terms of mean squared prediction error and mean squared parameter estimation error, the proposed model average estimator is more efficient than the MMA estimator and the estimator based on model selection using the corrected Akaike information criterion in small sample situations. A modified version of the new model average estimator is further suggested for the case of heteroscedastic random errors. The method is applied to a data set from the Hong Kong real estate market.
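
    The Mallows model average (MMA) benchmark named in the abstract can be sketched as a weight search over the probability simplex: choose weights minimizing the Mallows criterion C(w) = ||y - yhat(w)||^2 + 2*sigma^2*k(w). The sketch below is a schematic of Hansen's (2007) criterion, not the authors' Kullback-Leibler-based estimator; the nested candidate models, simulated data and variable names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 100, 6
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 0.8, 0.5, 0.2, 0.0, 0.0]) + rng.normal(size=n)

# Fitted values and sizes of nested candidate models (first m regressors)
fits, ks = [], []
for m in range(1, p + 1):
    coef = np.linalg.lstsq(X[:, :m], y, rcond=None)[0]
    fits.append(X[:, :m] @ coef)
    ks.append(m)
fits, ks = np.column_stack(fits), np.array(ks, dtype=float)

sigma2 = np.sum((y - fits[:, -1]) ** 2) / (n - p)  # variance from largest model

def mallows(w):
    resid = y - fits @ w
    return resid @ resid + 2.0 * sigma2 * (ks @ w)

w0 = np.full(p, 1.0 / p)
res = minimize(mallows, w0, method="SLSQP", bounds=[(0.0, 1.0)] * p,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print("MMA weights:", np.round(res.x, 3))
```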

  1. Random time averaged diffusivities for Lévy walks

    NASA Astrophysics Data System (ADS)

    Froemberg, D.; Barkai, E.

    2013-07-01

    We investigate a Lévy walk alternating between velocities ±v0. The sojourn time probability distribution at large times is a power law lacking either its mean or its second moment. The first case corresponds to a ballistic regime where the ensemble averaged mean squared displacement (MSD) at large times is ⟨x²⟩ ∝ t², the second to enhanced diffusion with ⟨x²⟩ ∝ t^ν, 1 < ν < 2. The correlation function and the time averaged MSD are calculated. In the ballistic case, the deviations of the time averaged MSD from a purely ballistic behavior are shown to be distributed according to a Mittag-Leffler density function. In the enhanced diffusion regime, the fluctuations of the time averaged MSD vanish at large times, yet very slowly. In both cases we quantify the discrepancy between the time averaged and ensemble averaged MSDs.
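
    As a minimal sketch of the quantities above, one can simulate a Lévy walk with Pareto-distributed sojourn times and compute the time averaged MSD along a single trajectory, (1/(T-Δ))∫[x(t+Δ)-x(t)]² dt. The exponent, velocity and grid spacing below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def levy_walk(total_time, v0=1.0, alpha=1.5):
    """Lévy walk alternating between velocities +v0 and -v0.

    Sojourn times are Pareto distributed with tail exponent alpha;
    1 < alpha < 2 corresponds to the enhanced-diffusion regime.
    """
    t, x, sign = 0.0, 0.0, 1.0
    times, positions = [0.0], [0.0]
    while t < total_time:
        tau = (1.0 - rng.random()) ** (-1.0 / alpha)  # Pareto, minimum 1
        t += tau
        x += sign * v0 * tau
        times.append(t)
        positions.append(x)
        sign = -sign
    return np.array(times), np.array(positions)

def time_averaged_msd(times, positions, lag, dt=0.1):
    """Time averaged MSD of one trajectory at a single lag."""
    grid = np.arange(0.0, times[-1], dt)
    x = np.interp(grid, times, positions)   # exact for piecewise-linear paths
    k = int(round(lag / dt))
    return np.mean((x[k:] - x[:-k]) ** 2)

times, pos = levy_walk(1e4)
for lag in (1.0, 10.0, 100.0):
    print(f"lag {lag:6.1f}: TA-MSD = {time_averaged_msd(times, pos, lag):.2f}")
```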

  2. Fiber-optic large area average temperature sensor

    SciTech Connect

    Looney, L.L.; Forman, P.R.

    1994-05-01

    In many instances the desired temperature measurement is only the spatial average temperature over a large area; eg. ground truth calibration for satellite imaging system, or average temperature of a farm field. By making an accurate measurement of the optical length of a long fiber-optic cable, we can determine the absolute temperature averaged over its length and hence the temperature of the material in contact with it.

  3. Thermodynamic properties of average-atom interatomic potentials for alloys

    NASA Astrophysics Data System (ADS)

    Nöhring, Wolfram Georg; Curtin, William Arthur

    2016-05-01

    The atomistic mechanisms of deformation in multicomponent random alloys are challenging to model because of their extensive structural and compositional disorder. For embedded-atom-method interatomic potentials, a formal averaging procedure can generate an average-atom EAM potential and this average-atom potential has recently been shown to accurately predict many zero-temperature properties of the true random alloy. Here, the finite-temperature thermodynamic properties of the average-atom potential are investigated to determine if the average-atom potential can represent the true random alloy Helmholtz free energy as well as important finite-temperature properties. Using a thermodynamic integration approach, the average-atom system is found to have an entropy difference of at most 0.05 k_B/atom relative to the true random alloy over a wide temperature range, as demonstrated on FeNiCr and Ni85Al15 model alloys. Lattice constants, and thus thermal expansion, and elastic constants are also well-predicted (within a few percent) by the average-atom potential over a wide temperature range. The largest differences between the average atom and true random alloy are found in the zero temperature properties, which reflect the role of local structural disorder in the true random alloy. Thus, the average-atom potential is a valuable strategy for modeling alloys at finite temperatures.

  4. Aberration averaging using point spread function for scanning projection systems

    NASA Astrophysics Data System (ADS)

    Ooki, Hiroshi; Noda, Tomoya; Matsumoto, Koichi

    2000-07-01

    The scanning projection system plays a leading role in current DUV optical lithography. It is frequently pointed out that mechanically induced distortion and field curvature degrade image quality after scanning. On the other hand, the aberration of the projection lens is averaged along the scanning direction, and this averaging effect reduces the residual aberration significantly. This paper describes aberration averaging based on the point spread function and a phase retrieval technique for estimating the effective wavefront aberration after scanning. Our averaging method is tested using specified wavefront aberration, and its accuracy is discussed based on the measured wavefront aberration of a recent Nikon projection lens.

  5. Exploring Students' Conceptual Understanding of the Averaging Algorithm.

    ERIC Educational Resources Information Center

    Cai, Jinfa

    1998-01-01

    Examines 250 sixth-grade students' understanding of arithmetic average by assessing their understanding of the computational algorithm. Results indicate that the majority of the students knew the "add-them-all-up-and-divide" averaging algorithm, but only half of the students were able to correctly apply the algorithm to solve a…

  6. Delineating the Average Rate of Change in Longitudinal Models

    ERIC Educational Resources Information Center

    Kelley, Ken; Maxwell, Scott E.

    2008-01-01

    The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…

  7. A procedure to average 3D anatomical structures.

    PubMed

    Subramanya, K; Dean, D

    2000-12-01

    Creating a feature-preserving average of three dimensional anatomical surfaces extracted from volume image data is a complex task. Unlike individual images, averages present right-left symmetry and smooth surfaces which give insight into typical proportions. Averaging multiple biological surface images requires careful superimposition and sampling of homologous regions. Our approach to biological surface image averaging grows out of a wireframe surface tessellation approach by Cutting et al. (1993). The surface delineating wires represent high curvature crestlines. By adding tile boundaries in flatter areas the 3D image surface is parametrized into anatomically labeled (homology mapped) grids. We extend the Cutting et al. wireframe approach by encoding the entire surface as a series of B-spline space curves. The crestline averaging algorithm developed by Cutting et al. may then be used for the entire surface. Shape preserving averaging of multiple surfaces requires careful positioning of homologous surface regions such as these B-spline space curves. We test the precision of this new procedure and its ability to appropriately position groups of surfaces in order to produce a shape-preserving average. Our result provides an average that well represents the source images and may be useful clinically as a deformable model or for animation.

  8. 7 CFR 701.17 - Average adjusted gross income limitation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9003), each applicant must meet the provisions of the Adjusted Gross Income Limitations at 7 CFR part... 7 Agriculture 7 2010-01-01 2010-01-01 false Average adjusted gross income limitation. 701.17... RELATED PROGRAMS PREVIOUSLY ADMINISTERED UNDER THIS PART § 701.17 Average adjusted gross income...

  9. 27 CFR 19.613 - Average effective tax rate records.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Average effective tax rate records. 19.613 Section 19.613 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Records and Reports Tax Records § 19.613 Average effective tax rate...

  10. Path-averaged differential meter of atmospheric turbulence parameters

    NASA Astrophysics Data System (ADS)

    Antoshkin, L. V.; Botygina, N. N.; Emaleev, O. N.; Konyaev, P. A.; Lukin, V. P.

    2010-10-01

    A path-averaged differential meter of the structure constant of the atmospheric refractive index, C_n^2, has been developed and tested. The results of a model numerical experiment on measuring C_n^2 and the horizontal component of average wind velocity transverse to the path are reported.

  11. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... is rounded down to $502. (e) “Deemed” average monthly wage for certain deceased veterans of World War II. Certain deceased veterans of World War II are “deemed” to have an average monthly wage of $160... your elapsed years.) (2) If you are a male and you reached age 62 in— (i) 1972 or earlier, we count...

  12. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... is rounded down to $502. (e) “Deemed” average monthly wage for certain deceased veterans of World War II. Certain deceased veterans of World War II are “deemed” to have an average monthly wage of $160... your elapsed years.) (2) If you are a male and you reached age 62 in— (i) 1972 or earlier, we count...

  13. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... is rounded down to $502. (e) “Deemed” average monthly wage for certain deceased veterans of World War II. Certain deceased veterans of World War II are “deemed” to have an average monthly wage of $160... your elapsed years.) (2) If you are a male and you reached age 62 in— (i) 1972 or earlier, we count...

  14. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... is rounded down to $502. (e) “Deemed” average monthly wage for certain deceased veterans of World War II. Certain deceased veterans of World War II are “deemed” to have an average monthly wage of $160... your elapsed years.) (2) If you are a male and you reached age 62 in— (i) 1972 or earlier, we count...

  15. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    ERIC Educational Resources Information Center

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…

  16. 78 FR 49770 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-15

    ... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal... annual cost to confine an inmate in a Community Corrections Center for Fiscal Year 2012 was $27,003...

  17. Hadley circulations for zonally averaged heating centered off the equator

    NASA Technical Reports Server (NTRS)

    Lindzen, Richard S.; Hou, Arthur Y.

    1988-01-01

    Consistent with observations, it is found that moving peak heating even 2 deg off the equator leads to profound asymmetries in the Hadley circulation, with the winter cell amplifying greatly and the summer cell becoming negligible. It is found that the annually averaged Hadley circulation is much larger than the circulation forced by the annually averaged heating.

  18. 7 CFR 51.2548 - Average moisture content determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content determination. 51.2548... moisture content determination. (a) Determining average moisture content of the lot is not a requirement of... drawn composite sample. Official certification shall be based on the air-oven method or other...

  19. Technical note: Revisiting the geometric theorems for volume averaging

    NASA Astrophysics Data System (ADS)

    Wood, Brian D.

    2013-12-01

    The geometric theorems reported by Quintard and Whitaker [5, Appendix B] are re-examined. We show (1) The geometrical theorems can be interpreted in terms of the raw spatial moments of the pore structure within the averaging volume. (2) For the case where the first spatial moment is aligned with the center of mass of the averaging volume, the geometric theorems can be expressed in terms of the central moments of the porous medium. (3) When the spatial moments of the pore structure are spatially stationary, the geometrical theorems allow substantial simplification of nonlocal terms arising in the averaged equations. (4) In the context of volume averaging, the geometric theorems of Quintard and Whitaker [5, Appendix B] are better interpreted as statements regarding the spatial stationarity of specific volume averaged quantities rather than an explicit statement about the media disorder.

  20. On various definitions of shadowing with average error in tracing

    NASA Astrophysics Data System (ADS)

    Wu, Xinxing; Oprocha, Piotr; Chen, Guanrong

    2016-07-01

    When computing a trajectory of a dynamical system, the influence of noise can lead to large perturbations which appear, however, with small probability. When calculating approximate trajectories, it therefore makes sense to consider errors that are small on average, since controlling them in each iteration may be impossible. The demand to relate approximate trajectories to genuine orbits leads to the various notions of shadowing (on average) that we consider in this paper. As the main tools in our studies we provide a few equivalent characterizations of the average shadowing property, which also partly apply to other notions of shadowing. We prove that almost specification on the whole space induces this property on the measure center, which in turn implies the average shadowing property. Finally, we study connections among sensitivity, transitivity, equicontinuity and (average) shadowing.

  1. Average cross-responses in correlated financial markets

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Schäfer, Rudi; Guhr, Thomas

    2016-09-01

    There are non-vanishing price responses across different stocks in correlated financial markets, reflecting non-Markovian features. We further study this issue by performing different averages, which identify active and passive cross-responses. The two average cross-responses show different characteristic dependences on the time lag. The passive cross-response exhibits a shorter response period with sizeable volatilities, while the corresponding period for the active cross-response is longer. The average cross-responses for a given stock are evaluated either with respect to the whole market or to different sectors. Using the response strength, the influences of individual stocks are identified and discussed. Moreover, the various cross-responses as well as the average cross-responses are compared with the self-responses. In contrast to the short-memory trade sign cross-correlations for each pair of stocks, the sign cross-correlations averaged over different pairs of stocks show long memory.
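
    A schematic of a cross-response computation, assuming the common definition R_ij(τ) = ⟨ε_i(t)[p_j(t+τ) - p_j(t)]⟩ with trade signs ε_i and prices p_j. The synthetic signs and returns below are illustrative, and the paper's distinction between active and passive averages is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 10_000
eps_i = rng.choice([-1, 1], size=T)                      # trade signs of stock i
r_j = 0.02 * np.roll(eps_i, 1) + rng.normal(size=T)      # returns of stock j

def cross_response(signs, returns, max_lag):
    """R_ij(tau) = mean over t of sign_i(t) * [p_j(t+tau) - p_j(t)]."""
    prices = np.cumsum(returns)
    return np.array([np.mean(signs[:-tau] * (prices[tau:] - prices[:-tau]))
                     for tau in range(1, max_lag + 1)])

print(np.round(cross_response(eps_i, r_j, max_lag=5), 4))
```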

  2. Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?

    SciTech Connect

    Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.

    2013-06-17

    Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.

  3. Some series of intuitionistic fuzzy interactive averaging aggregation operators.

    PubMed

    Garg, Harish

    2016-01-01

    In this paper, some series of new intuitionistic fuzzy averaging aggregation operators are presented under the intuitionistic fuzzy sets environment. For this, some shortcomings of the existing operators are first highlighted, and then a new operational law, which accounts for the hesitation degree between the membership functions, is proposed to overcome them. Based on these new operational laws, some new averaging aggregation operators, namely the intuitionistic fuzzy Hamacher interactive weighted averaging, ordered weighted averaging and hybrid weighted averaging operators, labeled IFHIWA, IFHIOWA and IFHIHWA respectively, are proposed. Furthermore, some desirable properties such as idempotency, boundedness and homogeneity are studied. Finally, a multi-criteria decision making method based on the proposed operators is presented for selecting the best alternative. A comparison between the proposed operators and the existing operators is investigated in detail. PMID:27441128
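
    For orientation, the classical (non-interactive) intuitionistic fuzzy weighted averaging operator IFWA can be sketched in a few lines; the paper's IFHIWA family replaces its operational laws with Hamacher interactive ones, which are not reproduced here. Inputs and weights are illustrative.

```python
import math

def ifwa(ifns, weights):
    """Classical intuitionistic fuzzy weighted averaging (IFWA) operator.

    Each intuitionistic fuzzy number is a pair (mu, nu) with mu + nu <= 1;
    weights are nonnegative and sum to 1. Aggregate membership is
    1 - prod(1 - mu_i)^w_i and aggregate non-membership is prod(nu_i^w_i).
    """
    mu = 1.0 - math.prod((1.0 - m) ** w for (m, _), w in zip(ifns, weights))
    nu = math.prod(v ** w for (_, v), w in zip(ifns, weights))
    return mu, nu

print(ifwa([(0.6, 0.3), (0.5, 0.4), (0.7, 0.2)], [0.5, 0.3, 0.2]))
```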

  4. LANDSAT-4 horizon scanner full orbit data averages

    NASA Technical Reports Server (NTRS)

    Stanley, J. P.; Bilanow, S.

    1983-01-01

    Averages taken over full orbit data spans of the pitch and roll residual measurement errors of the two conical Earth sensors operating on the LANDSAT 4 spacecraft are described. The variability of these full orbit averages over representative data throughout the year is analyzed to demonstrate the long term stability of the sensor measurements. The data analyzed consist of 23 segments of sensor measurements made at 2 to 4 week intervals. Each segment is roughly 24 hours in length. The variation of the full orbit average is examined both as a function of orbit within a day and as a function of day of year. The dependence on day of year is based on associating the start date of each segment with the mean full orbit average for the segment. The peak-to-peak and standard deviation values of the averages for each data segment are computed and their variation with day of year is also examined.

  5. The causal meaning of Fisher’s average effect

    PubMed Central

    LEE, JAMES J.; CHOW, CARSON C.

    2013-01-01

    Summary In order to formulate the Fundamental Theorem of Natural Selection, Fisher defined the average excess and average effect of a gene substitution. Finding these notions to be somewhat opaque, some authors have recommended reformulating Fisher’s ideas in terms of covariance and regression, which are classical concepts of statistics. We argue that Fisher intended his two averages to express a distinction between correlation and causation. On this view, the average effect is a specific weighted average of the actual phenotypic changes that result from physically changing the allelic states of homologous genes. We show that the statistical and causal conceptions of the average effect, perceived as inconsistent by Falconer, can be reconciled if certain relationships between the genotype frequencies and non-additive residuals are conserved. There are certain theory-internal considerations favouring Fisher’s original formulation in terms of causality; for example, the frequency-weighted mean of the average effects equaling zero at each locus becomes a derivable consequence rather than an arbitrary constraint. More broadly, Fisher’s distinction between correlation and causation is of critical importance to gene-trait mapping studies and the foundations of evolutionary biology. PMID:23938113

  6. Programmable noise bandwidth reduction by means of digital averaging

    NASA Technical Reports Server (NTRS)

    Poklemba, John J. (Inventor)

    1993-01-01

    Predetection noise bandwidth reduction is effected by a pre-averager capable of digitally averaging the samples of an input data signal over two or more symbols, the averaging interval being defined by the input sampling rate divided by the output sampling rate. As the averaged sample is clocked to a suitable detector at a much slower rate than the input signal sampling rate the noise bandwidth at the input to the detector is reduced, the input to the detector having an improved signal to noise ratio as a result of the averaging process, and the rate at which such subsequent processing must operate is correspondingly reduced. The pre-averager forms a data filter having an output sampling rate of one sample per symbol of received data. More specifically, selected ones of a plurality of samples accumulated over two or more symbol intervals are output in response to clock signals at a rate of one sample per symbol interval. The pre-averager includes circuitry for weighting digitized signal samples using stored finite impulse response (FIR) filter coefficients. A method according to the present invention is also disclosed.
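
    A minimal sketch of the pre-averaging idea, assuming uniform boxcar weights in place of the stored FIR coefficients of the actual design: average over a window spanning two symbols and output one sample per symbol. Sample counts and names are illustrative.

```python
import numpy as np

def pre_average(samples, samples_per_symbol, symbols_per_average=2):
    """Average over a two-or-more symbol window, then decimate to symbol rate."""
    n = samples_per_symbol * symbols_per_average
    taps = np.full(n, 1.0 / n)                    # uniform FIR weights
    filtered = np.convolve(samples, taps, mode="valid")
    return filtered[::samples_per_symbol]         # one output sample per symbol

rng = np.random.default_rng(3)
symbols = rng.choice([-1.0, 1.0], size=200)
waveform = np.repeat(symbols, 8) + rng.normal(scale=1.0, size=200 * 8)
print(np.round(pre_average(waveform, samples_per_symbol=8)[:10], 3))
```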

  7. Phase-compensated averaging for analyzing electroencephalography and magnetoencephalography epochs.

    PubMed

    Matani, Ayumu; Naruse, Yasushi; Terazono, Yasushi; Iwasaki, Taro; Fujimaki, Norio; Murata, Tsutomu

    2010-05-01

    Stimulus-locked averaging for electroencephalography and/or magnetoencephalography (EEG/MEG) epochs cancels out ongoing spontaneous activities by treating them as noise. However, such spontaneous activities are the object of interest for EEG/MEG researchers who study phase-related phenomena, e.g., long-distance synchronization, phase-reset, and event-related synchronization/desynchronization (ERD/ERS). We propose a complex-weighted averaging method, called phase-compensated averaging, to investigate phase-related phenomena. In this method, any EEG/MEG channel is used as a trigger for averaging by setting the instantaneous phases at the trigger timings to 0 so that cross-channel averages are obtained. First, we evaluated the fundamental characteristics of this method by performing simulations. The results showed that this method could selectively average ongoing spontaneous activity phase-locked in each channel; that is, it evaluates the directional phase-synchronizing relationship between channels. We then analyzed flash evoked potentials. This method clarified the directional phase-synchronizing relationship from the frontal to occipital channels and recovered another piece of information, perhaps regarding the sequence of experiments, which is lost when using only conventional averaging. This method can also be used to reconstruct EEG/MEG time series to visualize long-distance synchronization and phase-reset directly, and on the basis of the potentials, ERS/ERD can be explained as a side effect of phase-reset. PMID:20172813
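
    A sketch of the core operation, assuming it is implemented with the analytic signal: take the trigger channel's instantaneous phase at the trigger sample, rotate each target epoch by the conjugate phase so the trigger phase becomes 0, and average the complex epochs. Array shapes, the 10 Hz test signal and all names are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def phase_compensated_average(epochs_trigger, epochs_target, t0):
    """Complex-weighted average with the trigger channel's phase at t0 set to 0.

    epochs_*: (n_epochs, n_samples) arrays; t0: trigger sample index.
    """
    analytic_trig = hilbert(epochs_trigger, axis=1)
    analytic_targ = hilbert(epochs_target, axis=1)
    phase0 = np.angle(analytic_trig[:, t0])               # phase at the trigger
    rotated = analytic_targ * np.exp(-1j * phase0)[:, None]
    return rotated.mean(axis=0)

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 500, endpoint=False)
phases = rng.uniform(0, 2 * np.pi, size=50)               # random epoch phases
trig = np.sin(2 * np.pi * 10 * t + phases[:, None])       # trigger channel
targ = np.sin(2 * np.pi * 10 * t + phases[:, None] - 0.5) # phase-lagged target
avg = phase_compensated_average(trig, targ, t0=250)
print(f"|average| peak: {np.abs(avg).max():.2f}")  # survives random epoch phases
```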

  8. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false How do I convert my 1-hour arithmetic averages into appropriate averaging times and units? 60.1755 Section 60.1755 Protection of Environment... or Before August 30, 1999 Model Rule-Continuous Emission Monitoring § 60.1755 How do I convert my...

  9. Achievement, Underachievement and Cortical Activation: A Comparative EEG Study of Adolescents of Average and Above-Average Intelligence

    ERIC Educational Resources Information Center

    Staudt, Beate; Neubauer, Aljoscha C.

    2006-01-01

    In this study the "neural efficiency" phenomenon (more efficient brain function in brighter as compared to less intelligent individuals) was investigated regarding differences in intelligence (average vs. above-average intelligence) and scholastic achievement (achievers vs. underachievers). The cortical activation (assessed by event-related…

  10. Comparison of the WISC-R and the Leiter International Performance Scale with Average and Above-Average Students.

    ERIC Educational Resources Information Center

    Mask, Nan; Bowen, Charles E.

    1984-01-01

    Compared the Wechsler Intelligence Scale for Children (Revised) (WISC-R) and the Leiter International Performance Scale with 40 average and above average students. Results indicated a curvilinear relationship between the WISC-R and the Leiter, which correlates higher at the mean and deviates as the Full Scale varies from the mean. (JAC)

  11. Structuring Collaboration in Mixed-Ability Groups to Promote Verbal Interaction, Learning, and Motivation of Average-Ability Students

    ERIC Educational Resources Information Center

    Saleh, Mohammad; Lazonder, Ard W.; Jong, Ton de

    2007-01-01

    Average-ability students often do not take full advantage of learning in mixed-ability groups because they hardly engage in the group interaction. This study examined whether structuring collaboration by group roles and ground rules for helping behavior might help overcome this participatory inequality. In a plant biology course, heterogeneously…

  12. Average waiting time in FDDI networks with local priorities

    NASA Technical Reports Server (NTRS)

    Gercek, Gokhan

    1994-01-01

    A method is introduced to compute the average queuing delay experienced by different priority group messages in an FDDI node. It is assumed that no FDDI MAC layer priorities are used. Instead, a priority structure is introduced to the messages at a higher protocol layer (e.g., network layer) locally. Such a method was planned to be used in the Space Station Freedom FDDI network. Conservation of the average waiting time is used as the key concept in computing average queuing delays. It is shown that local priority assignments are feasible, especially when the traffic distribution is asymmetric in the FDDI network.

  13. Bounce-averaged Kinetic Equations and Neoclassical Polarization Density

    SciTech Connect

    Fong, B.H.; Hahm, T.S.

    1998-07-01

    The rigorous formulation of the bounce-averaged equations is presented based upon the Poincaré-Cartan one-form and Lie perturbation methods. The resulting bounce-averaged Vlasov equation is Hamiltonian, thus suitable for the self-consistent simulation of low-frequency electrostatic turbulence in the trapped ion mode regime. In the bounce-kinetic Poisson equation, the "neoclassical polarization density" arises from the difference between bounce-averaged banana center and real trapped particle densities across a field line. This representation of the neoclassical polarization drift as a shielding term provides a systematic way to study the long-term behavior of the turbulence-driven E x B flow.

  14. A new approach to high-order averaging

    NASA Astrophysics Data System (ADS)

    Chartier, P.; Murua, A.; Sanz-Serna, J. M.

    2012-09-01

    We present a new approach to perform high-order averaging in oscillatory periodic or quasi-periodic dynamical systems. The averaged system is expressed in terms of (i) scalar coefficients that are universal, i.e. independent of the system under consideration and (ii) basis functions that may be written in an explicit, systematic way in terms of the derivatives of the Fourier coefficients of the vector field being averaged. The coefficients may be recursively computed in a simple fashion. This approach may be used to obtain exponentially small error estimates, as those first derived by Neishtadt for the periodic case and Simó in the quasi-periodic scenario.

  15. Averaging underwater noise levels for environmental assessment of shipping.

    PubMed

    Merchant, Nathan D; Blondel, Philippe; Dakin, D Tom; Dorocicz, John

    2012-10-01

    Rising underwater noise levels from shipping have raised concerns regarding chronic impacts to marine fauna. However, there is a lack of consensus over how to average local shipping noise levels for environmental impact assessment. This paper addresses this issue using 110 days of continuous data recorded in the Strait of Georgia, Canada. Probability densities of ~10^7 1-s samples in selected 1/3 octave bands were approximately stationary across one-month subsamples. Median and mode levels varied with averaging time. Mean sound pressure levels averaged in linear space, though susceptible to strong bias from outliers, are most relevant to cumulative impact assessment metrics. PMID:23039575
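
    The contrast the abstract draws between linear-space means and medians takes only a few lines to illustrate; the simulated 1-s sound-pressure-level samples below are ours, not the Strait of Georgia data.

```python
import numpy as np

def mean_spl_db(levels_db):
    """Mean sound pressure level averaged in linear (power) space.

    Converts dB samples to power, averages, and converts back; this mean
    can sit well above the median when loud outliers are present.
    """
    return 10.0 * np.log10(np.mean(10.0 ** (np.asarray(levels_db) / 10.0)))

rng = np.random.default_rng(5)
samples = rng.normal(100.0, 5.0, size=100_000)   # illustrative SPL samples, dB
print(f"linear-space mean: {mean_spl_db(samples):.2f} dB")
print(f"median:            {np.median(samples):.2f} dB")
```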

  16. Time average vibration fringe analysis using Hilbert transformation

    SciTech Connect

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-10-20

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity of data quantification encountered in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
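
    A sketch of Hilbert-transform phase extraction from a single one-dimensional fringe profile, assuming a plain cosinusoidal model I(x) = a + b*cos φ(x) rather than the Bessel fringes of time average TV holography; the test phase and all names are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def fringe_phase(intensity):
    """Wrapped-then-unwrapped phase of a 1-D fringe profile via the HT.

    Subtracts the mean as a crude background removal, forms the analytic
    signal, and unwraps its angle; valid when the phase increases
    monotonically and the background varies slowly.
    """
    ac = intensity - np.mean(intensity)
    return np.unwrap(np.angle(hilbert(ac)))

x = np.linspace(0, 1, 2000, endpoint=False)
true_phase = 2 * np.pi * 20 * x + 3 * np.sin(2 * np.pi * x)
profile = 2.0 + np.cos(true_phase)
est = fringe_phase(profile)
err = (est - est[100]) - (true_phase - true_phase[100])
print(f"max interior phase error: {np.max(np.abs(err[100:-100])):.3f} rad")
```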

  17. 42 CFR 423.279 - National average monthly bid amount.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... adjustment. (1) Upon the development of an appropriate methodology, the national average monthly bid amount... applied an adjustment. (4) CMS does not apply any geographic adjustment until an appropriate...

  18. Determining the Average Age of School Plant Building Space.

    ERIC Educational Resources Information Center

    Uerling, Donald F.

    1984-01-01

    Presents a method for calculating the age of the space in a specific building inventory, and suggests some practical applications. A fourfold procedure is provided for finding the average age of total building space. (TE)

  19. Does subduction zone magmatism produce average continental crust

    NASA Technical Reports Server (NTRS)

    Ellam, R. M.; Hawkesworth, C. J.

    1988-01-01

    The question of whether present day subduction zone magmatism produces material of average continental crust composition, which perhaps most would agree is andesitic, is addressed. It was argued that modern andesitic to dacitic rocks in Andean-type settings are produced by plagioclase fractionation of mantle derived basalts, leaving a complementary residue with low Rb/Sr and a positive Eu anomaly. This residue must be removed, for example by delamination, if the average crust produced in these settings is andesitic. The author argued against this, pointing out the absence of evidence for such a signature in the mantle. Either the average crust is not andesitic, a conclusion the author was not entirely comfortable with, or other crust forming processes must be sought. One possibility is that during the Archean, direct slab melting of basaltic or eclogitic oceanic crust produced felsic melts, which together with about 65 percent mafic material, yielded an average crust of andesitic composition.

  20. The origin of consistent protein structure refinement from structural averaging.

    PubMed

    Park, Hahnbeom; DiMaio, Frank; Baker, David

    2015-06-01

    Recent studies have shown that explicit solvent molecular dynamics (MD) simulation followed by structural averaging can consistently improve protein structure models. We find that improvement upon averaging is not limited to explicit water MD simulation, as consistent improvements are also observed for more efficient implicit solvent MD or Monte Carlo minimization simulations. To determine the origin of these improvements, we examine the changes in model accuracy brought about by averaging at the individual residue level. We find that the improvement in model quality from averaging results from the superposition of two effects: a dampening of deviations from the correct structure in the least well modeled regions, and a reinforcement of consistent movements towards the correct structure in better modeled regions. These observations are consistent with an energy landscape model in which the magnitude of the energy gradient toward the native structure decreases with increasing distance from the native state.

  1. Total-pressure-tube averaging in pulsating flows.

    NASA Technical Reports Server (NTRS)

    Krause, L. N.

    1973-01-01

    A number of total-pressure tubes were tested in a nonsteady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. The tests were performed at a pressure level of 1 bar, for Mach numbers up to near 1, and frequencies up to 3 kHz. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonances which further increased the indicated pressure were encountered within the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure, and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles.

  2. Effects of spatial variability and scale on areal-average evapotranspiration

    NASA Technical Reports Server (NTRS)

    Famiglietti, J. S.; Wood, Eric F.

    1993-01-01

    This paper explores the effect of spatial variability and scale on areally-averaged evapotranspiration. A spatially-distributed water and energy balance model is employed to determine the effect of explicit patterns of model parameters and atmospheric forcing on modeled areally-averaged evapotranspiration over a range of increasing spatial scales. The analysis is performed from the local scale to the catchment scale. The study area is King's Creek catchment, an 11.7 sq km watershed located on the native tallgrass prairie of Kansas. The dominant controls on the scaling behavior of catchment-average evapotranspiration are investigated by simulation, as is the existence of a threshold scale for evapotranspiration modeling, with implications for explicit versus statistical representation of important process controls. It appears that some of our findings are fairly general, and will therefore provide a framework for understanding the scaling behavior of areally-averaged evapotranspiration at the catchment and larger scales.

  3. Average local ionization energy generalized to correlated wavefunctions

    SciTech Connect

    Ryabinkin, Ilya G.; Staroverov, Viktor N.

    2014-08-28

    The average local ionization energy function introduced by Politzer and co-workers [Can. J. Chem. 68, 1440 (1990)] as a descriptor of chemical reactivity has a limited utility because it is defined only for one-determinantal self-consistent-field methods such as the Hartree–Fock theory and the Kohn–Sham density-functional scheme. We reinterpret the negative of the average local ionization energy as the average total energy of an electron at a given point and, by rewriting this quantity in terms of reduced density matrices, arrive at its natural generalization to correlated wavefunctions. The generalized average local electron energy turns out to be the diagonal part of the coordinate representation of the generalized Fock operator divided by the electron density; it reduces to the original definition in terms of canonical orbitals and their eigenvalues for one-determinantal wavefunctions. The discussion is illustrated with calculations on selected atoms and molecules at various levels of theory.

  4. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... ultimate consumers in the same covered area as was the reformulated gasoline which exceeds the average... standard, compare the actual total with the compliance total. (3) For the VOC, NOX, and toxics...

  5. Distribution of population-averaged observables in stochastic gene expression.

    PubMed

    Bhattacharyya, Bhaswati; Kalay, Ziya

    2014-01-01

    Observation of phenotypic diversity in a population of genetically identical cells is often linked to the stochastic nature of chemical reactions involved in gene regulatory networks. We investigate the distribution of population-averaged gene expression levels as a function of population, or sample, size for several stochastic gene expression models to find out to what extent population-averaged quantities reflect the underlying mechanism of gene expression. We consider three basic gene regulation networks corresponding to transcription with and without gene state switching and translation. Using analytical expressions for the probability generating function of observables and large deviation theory, we calculate the distribution and first two moments of the population-averaged mRNA and protein levels as a function of model parameters, population size, and number of measurements contained in a data set. We validate our results using stochastic simulations and also report exact results on the asymptotic properties of population averages, which show qualitative differences among different models. PMID:24580265

  6. Average Lorentz self-force from electric field lines

    NASA Astrophysics Data System (ADS)

    Aashish, Sandeep; Haque, Asrarul

    2015-09-01

    We generalize the derivation of electromagnetic fields of a charged particle moving with a constant acceleration Singal (2011 Am. J. Phys. 79 1036) to a variable (piecewise constant) acceleration over a small finite time interval using Coulomb's law, relativistic transformations of electromagnetic fields and Thomson's construction Thomson (1904 Electricity and Matter (New York: Charles Scribners) ch 3). We derive the average Lorentz self-force for a charged particle in arbitrary non-relativistic motion by averaging the fields at retarded time.

  7. Characterization of mirror-based modulation-averaging structures.

    PubMed

    Komljenovic, Tin; Babić, Dubravko; Sipus, Zvonimir

    2013-05-10

    Modulation-averaging reflectors have recently been proposed as a means for improving the link margin in self-seeded wavelength-division multiplexing in passive optical networks. In this work, we describe simple methods for determining key parameters of such structures and use them to predict their averaging efficiency. We characterize several reflectors built by arraying fiber-Bragg gratings along a segment of an optical fiber and show very good agreement between experiments and theoretical models. PMID:23669835

  8. Flavor Physics Data from the Heavy Flavor Averaging Group (HFAG)

    DOE Data Explorer

    The Heavy Flavor Averaging Group (HFAG) was established at the May 2002 Flavor Physics and CP Violation Conference in Philadelphia, and continues the LEP Heavy Flavor Steering Group's tradition of providing regular updates to the world averages of heavy flavor quantities. Data are provided by six subgroups that each focus on a different set of heavy flavor measurements: B lifetimes and oscillation parameters, Semi-leptonic B decays, Rare B decays, Unitarity triangle parameters, B decays to charm final states, and Charm Physics.

  9. Exact Averaging of Stochastic Equations for Flow in Porous Media

    SciTech Connect

    Karasaki, Kenzi; Shvidler, Mark; Karasaki, Kenzi

    2008-03-15

    It is well known that at present, exact averaging of the equations for flow and transport in random porous media has been proposed only for limited special fields. Moreover, approximate averaging methods--for example, the convergence behavior and the accuracy of truncated perturbation series--are not well studied, and in addition, calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do exact and sufficiently general forms of averaged equations exist? Here, we present an approach for finding the general exactly averaged system of basic equations for steady flow with sources in unbounded stochastically homogeneous fields. We do this by using (1) the existence and some general properties of Green's functions for the appropriate stochastic problem, and (2) some information about the random field of conductivity. This approach enables us to find the form of the averaged equations without directly solving the stochastic equations or using the usual assumption regarding any small parameters. In the common case of a stochastically homogeneous conductivity field we present the exactly averaged new basic nonlocal equation with a unique kernel-vector. We show that in the case of some type of global symmetry (isotropy, transversal isotropy, or orthotropy), we can in the same way derive the exact averaged nonlocal equations with a unique kernel-tensor for three-dimensional and two-dimensional flow. When global symmetry does not exist, the nonlocal equation with a kernel-tensor involves complications and leads to an ill-posed problem.

  10. Simple Moving Average: A Method of Reporting Evolving Complication Rates.

    PubMed

    Harmsen, Samuel M; Chang, Yu-Hui H; Hattrup, Steven J

    2016-09-01

    Surgeons often cite published complication rates when discussing surgery with patients. However, these rates may not truly represent current results or an individual surgeon's experience with a given procedure. This study proposes a novel method to more accurately report current complication trends that may better represent the patient's potential experience: simple moving average. Reverse shoulder arthroplasty (RSA) is an increasingly popular and rapidly evolving procedure with highly variable reported complication rates. The authors used an RSA model to test and evaluate the usefulness of simple moving average. This study reviewed 297 consecutive RSA procedures performed by a single surgeon and noted complications in 50 patients (16.8%). Simple moving average for total complications as well as minor, major, acute, and chronic complications was then calculated using various lag intervals. These findings showed trends toward fewer total, major, and chronic complications over time, and these trends were represented best with a lag of 75 patients. Average follow-up within this lag was 26.2 months. Rates for total complications decreased from 17.3% to 8% at the most recent simple moving average. The authors' traditional complication rate with RSA (16.8%) is consistent with reported rates. However, the use of simple moving average shows that this complication rate decreased over time, with current trends (8%) markedly lower, giving the senior author a more accurate picture of his evolving complication trends with RSA. Compared with traditional methods, simple moving average can be used to better reflect current trends in complication rates associated with a surgical procedure and may better represent the patient's potential experience. [Orthopedics. 2016; 39(5):e869-e876.]
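
    The method itself is elementary; a sketch applying it to a binary complication series, using the 75-patient lag the study found most representative, follows. The simulated outcome series (risk declining from roughly 17% to 8%) is illustrative, not the study data.

```python
import numpy as np

def simple_moving_average(outcomes, lag=75):
    """Simple moving average of a 0/1 complication series, one rate per window."""
    kernel = np.ones(lag) / lag
    return np.convolve(np.asarray(outcomes, dtype=float), kernel, mode="valid")

rng = np.random.default_rng(6)
risk = np.linspace(0.17, 0.08, 297)        # illustrative declining risk
series = rng.random(297) < risk            # one 0/1 outcome per procedure
sma = simple_moving_average(series)
print(f"first window: {sma[0]:.1%}, most recent window: {sma[-1]:.1%}")
```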

  11. Average Soil Water Retention Curves Measured by Neutron Radiography

    SciTech Connect

    Cheng, Chu-Lin; Perfect, Edmund; Kang, Misun; Voisin, Sophie; Bilheux, Hassina Z; Horita, Juske; Hussey, Dan

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel by pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.

  12. Discrete Models of Fluids: Spatial Averaging, Closure, and Model Reduction

    SciTech Connect

    Panchenko, Alexander; Tartakovsky, Alexandre; Cooper, Kevin

    2014-03-06

    The main question addressed in the paper is how to obtain closed form continuum equations governing spatially averaged dynamics of semi-discrete ODE models of fluid flow. In the presence of multiple small scale heterogeneities, the size of these ODE systems can be very large. Spatial averaging is then a useful tool for reducing computational complexity of the problem. The averages satisfy balance equations of mass, momentum and energy. These equations are exact, but they do not form a continuum model in the true sense of the word because calculation of stress and heat flux requires solving the underlying ODE system. To produce continuum equations that can be simulated without resolving micro-scale dynamics, we developed a closure method based on the use of regularized deconvolutions. We mostly deal with non-linear averaging suitable for Lagrangian particle solvers, but consider Eulerian linear averaging where appropriate. The results of numerical experiments show good agreement between our closed form flux approximations and their exact counterparts.

  13. Spatially averaged flow over a wavy boundary revisited

    USGS Publications Warehouse

    McLean, S.R.; Wolfe, S.R.; Nelson, J.M.

    1999-01-01

    Vertical profiles of streamwise velocity measured over bed forms are commonly used to deduce boundary shear stress for the purpose of estimating sediment transport. These profiles may be derived locally or from some sort of spatial average. Arguments for using the latter procedure are based on the assumption that spatial averaging of the momentum equation effectively removes local accelerations from the problem. Using analogies based on steady, uniform flows, it has been argued that the spatially averaged velocity profiles are approximately logarithmic and can be used to infer values of boundary shear stress. This technique of using logarithmic profiles is investigated using detailed laboratory measurements of flow structure and boundary shear stress over fixed two-dimensional bed forms. Spatial averages over the length of the bed form of mean velocity measurements at constant distances from the mean bed elevation yield vertical profiles that are highly logarithmic even though the effect of the bottom topography is observed throughout the water column. However, logarithmic fits of these averaged profiles do not yield accurate estimates of the measured total boundary shear stress. Copyright 1999 by the American Geophysical Union.
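
    The inference being tested can be sketched as a log-law fit: regress the spatially averaged velocity on ln z, read the slope as u*/κ, and form the boundary shear stress τ = ρu*². The von Kármán constant, heights and synthetic profile below are illustrative assumptions.

```python
import numpy as np

KAPPA = 0.41     # von Karman constant
RHO = 1000.0     # water density, kg/m^3

def log_law_shear(z, u):
    """Boundary shear stress implied by fitting u(z) = (u*/kappa) ln(z/z0)."""
    slope, _ = np.polyfit(np.log(z), u, 1)   # slope = u*/kappa
    u_star = KAPPA * slope
    return RHO * u_star ** 2

z = np.array([0.01, 0.02, 0.04, 0.08, 0.16])   # measurement heights, m
u = 0.05 / KAPPA * np.log(z / 1e-4)            # synthetic log profile, u* = 0.05
print(f"inferred tau = {log_law_shear(z, u):.2f} Pa")   # rho * 0.05^2 = 2.5 Pa
```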

  14. The average longitudinal air shower profile: exploring the shape information

    NASA Astrophysics Data System (ADS)

    Conceição, R.; Andringa, S.; Diogo, F.; Pimenta, M.

    2015-08-01

    The shape of the extensive air shower (EAS) longitudinal profile contains information about the nature of the primary cosmic ray. However, with the current detection capabilities, the assessment of this quantity on an event-by-event basis is still very challenging. In this work we show that the average longitudinal profile can be used to characterise the average behaviour of high energy cosmic rays. Using the concept of the universal shower profile it is possible to describe the shape of the average profile in terms of two variables, which can already be measured by current experiments. These variables are sensitive both to the average primary mass composition and to hadronic interaction properties in shower development. We demonstrate that the shape of the average muon production depth profile can be explored in the same way as the electromagnetic profile, and offers greater power to discriminate among state-of-the-art hadronic interaction models. The combination of the shape variables of both profiles provides a powerful new test of existing hadronic interaction models, and may also provide important hints about multi-particle production at the highest energies.
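
    One common two-parameter description of such average profiles is the Gaisser-Hillas function rewritten in shape variables (a width L and an asymmetry R); the sketch below fits that form to a synthetic average profile. The parameterization and all values are assumptions, not necessarily those used by the authors:

```python
import numpy as np
from scipy.optimize import curve_fit

def usp(X, Nmax, Xmax, L, R):
    """Shower profile in shape variables: width L, asymmetry R."""
    t = (X - Xmax) / L
    core = np.clip(1.0 + R * t, 1e-12, None)
    return Nmax * core ** (1.0 / R**2) * np.exp(-t / R)

# Synthetic average profile (depth in g/cm^2, size in arbitrary units)
X = np.linspace(200.0, 1200.0, 60)
N = usp(X, 1.0, 750.0, 230.0, 0.3)
N_noisy = N * (1 + 0.02 * np.random.default_rng(0).standard_normal(N.size))
popt, _ = curve_fit(usp, X, N_noisy, p0=(1.0, 700.0, 200.0, 0.25))
print(popt)   # recovered Nmax, Xmax, L, R
```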

  15. Genuine non-self-averaging and ultraslow convergence in gelation.

    PubMed

    Cho, Y S; Mazza, M G; Kahng, B; Nagler, J

    2016-08-01

    In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation.
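
    Self-averaging can be probed numerically by the relative variance of the order parameter across realizations; for ordinary random-bond aggregation it vanishes as the system grows, whereas it remains finite for genuinely non-self-averaging transitions. A minimal union-find sketch with arbitrary parameters:

```python
import numpy as np

class DSU:
    """Union-find for tracking cluster sizes."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def largest_fraction(n, m, rng):
    """Relative size of the largest cluster after m random bonds."""
    d = DSU(n)
    for _ in range(m):
        d.union(rng.integers(n), rng.integers(n))
    return max(d.size[d.find(i)] for i in range(n)) / n

rng = np.random.default_rng(1)
s = [largest_fraction(2000, 1100, rng) for _ in range(200)]
Rv = np.var(s) / np.mean(s) ** 2
print(Rv)   # -> 0 with growing n for self-averaging processes
```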

  16. How do children form impressions of persons? They average.

    PubMed

    Hendrick, C; Franz, C M; Hoving, K L

    1975-05-01

    The experiment reported was concerned with impression formation in children. Twelve subjects in each of Grades K, 2, 4, and 6 rated several sets of single trait words and trait pairs. The response scale consisted of a graded series of seven schematic faces which ranged from a deep frown to a happy smile. A basic question was whether children use an orderly integration rule in forming impressions of trait pairs. The answer was clear. At all grade levels a simple averaging model adequately accounted for pair ratings. A second question concerned how children resolve semantic inconsistencies. Responses to two highly inconsistent trait pairs suggested that subjects responded in the same fashion, essentially averaging the two traits in a pair. Overall, the data strongly supported an averaging model, and indicated that impression formation of children is similar to previous results obtained from adults. PMID:21287081

  17. A generalization to stochastic averaging in random vibration

    SciTech Connect

    Red-Horse, J.R.

    1992-06-01

    Stochastic averaging is applied to a class of randomly excited single-degree-of-freedom oscillators possessing linear damping and nonlinear stiffness terms. The assumed excitation form involves an externally applied evolutionary Gaussian stochastic process. Special emphasis is placed on casting the problem in a more formal mathematical framework than that traditionally used in engineering applications. For the case under consideration, it is shown that a critical step involves the selection of an appropriate period of oscillation over which the temporal averaging can be performed. As an example, this averaging procedure is performed on a Duffing oscillator. The validity of the derived result is partially confirmed by reducing it to a special case, for which there is a known solution, and comparing the two solutions.

  19. The Health Effects of Income Inequality: Averages and Disparities.

    PubMed

    Truesdale, Beth C; Jencks, Christopher

    2016-01-01

    Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.

  20. Size and emotion averaging: costs of dividing attention after all.

    PubMed

    Brand, John; Oriet, Chris; Tottenham, Laurie Sykes

    2012-03-01

    Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention. PMID:22390476

  2. Time-averaged photon-counting digital holography.

    PubMed

    Demoli, Nazif; Skenderović, Hrvoje; Stipčević, Mario

    2015-09-15

    Time-averaged holography has historically recorded holograms on photo-emulsions and, more recently, on digital photo-sensitive arrays. We extend the recording possibilities by utilizing a photon-counting camera, and we further investigate the possibility of obtaining accurate hologram reconstructions in rather severe experimental conditions. To achieve this, we derived an expression for the fringe function comprising the main parameters affecting the hologram recording. The influence of the main parameters, namely the exposure time and the number of averaged holograms, is analyzed by simulations and experiments. It is demonstrated that long exposure times can be avoided by averaging over many holograms with exposure times much shorter than the vibration cycle. Conditions in which the signal-to-noise ratio in reconstructed holograms can be substantially increased are provided. PMID:26371907

  3. Genuine non-self-averaging and ultraslow convergence in gelation

    NASA Astrophysics Data System (ADS)

    Cho, Y. S.; Mazza, M. G.; Kahng, B.; Nagler, J.

    2016-08-01

    In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation.

  4. Neutron average cross sections of {sup 237}Np

    SciTech Connect

    Noguere, G.

    2010-04-15

    This work reports {sup 237}Np neutron resonance parameters obtained from the simultaneous analysis of time-of-flight data measured at the GELINA, ORELA, KURRI, and LANSCE facilities. A statistical analysis of these resonances relying on average R-matrix and optical model calculations was used to establish consistent l-dependent average resonance parameters involved in the description of the unresolved resonance range of the {sup 237}Np neutron cross sections. For neutron orbital angular momentum l=0, we obtained an average radiation width <Γ{sub γ}> = 39.3 ± 1.0 meV, a neutron strength function 10{sup 4}S{sub 0} = 1.02 ± 0.14, a mean level spacing D{sub 0} = 0.60 ± 0.03 eV, and a potential scattering length R' = 9.8 ± 0.1 fm.

  5. Time-average TV holography for vibration fringe analysis

    SciTech Connect

    Kumar, Upputuri Paul; Kalyani, Yanam; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2009-06-01

    Time-average TV holography is a widely used method for vibration measurement. The method generates speckle correlation time-averaged J0 fringes that can be used for full-field qualitative visualization of mode shapes at resonant frequencies of an object under harmonic excitation. In order to map the amplitudes of vibration, quantitative evaluation of the time-averaged fringe pattern is desired. A quantitative evaluation procedure based on the phase-shifting technique used in two-beam interferometry has also been adopted for this application with some modification. The existing procedure requires a large number of frames to be recorded for implementation. We propose a procedure that reduces the number of frames required for the analysis. The TV holographic system used and the experimental results obtained with it on an edge-clamped, sinusoidally excited square aluminium plate sample are discussed.

  6. An Advanced Time Averaging Modelling Technique for Power Electronic Circuits

    NASA Astrophysics Data System (ADS)

    Jankuloski, Goce

    For stable and efficient performance of power converters, a good mathematical model is needed. This thesis presents a new modelling technique for DC/DC and DC/AC Pulse Width Modulated (PWM) converters. The new model is more accurate than existing modelling techniques such as State Space Averaging (SSA) and Discrete Time Modelling. Unlike the SSA model, the new modelling technique, the Advanced Time Averaging Model (ATAM), includes the averaging dynamics of the converter's output. In addition to offering enhanced model accuracy, application of linearization techniques to the ATAM enables the use of conventional linear control design tools. A controller design application demonstrates that a controller designed based on the ATAM outperforms one designed using the ubiquitous SSA model. Unlike the SSA model, the ATAM for DC/AC converters augments the system's dynamics with the dynamics needed for subcycle fundamental contribution (SFC) calculation. This allows for controller design that is based on an exact model.

  7. Optimum orientation versus orientation averaging description of cluster radioactivity

    NASA Astrophysics Data System (ADS)

    Seif, W. M.; Ismail, M.; Refaie, A. I.; Amer, Laila H.

    2016-07-01

    While the optimum-orientation concept is frequently used in studies on cluster decays involving deformed nuclei, the orientation-averaging concept is used in most alpha decay studies. We investigate the different decay stages in both the optimum-orientation and the orientation-averaging pictures of the cluster decay process. For decays of 232,233,234U and 236,238Pu isotopes, the quantum knocking frequency and penetration probability based on the Wentzel–Kramers–Brillouin approximation are used to find the decay width. The obtained decay width and the experimental half-life are employed to estimate the cluster preformation probability. We found that the orientation-averaged decay width is one or two orders of magnitude less than its value along the non-compact optimum orientation. Correspondingly, the extracted preformation probability based on the averaged decay width increases by the same orders of magnitude compared to its value obtained considering the optimum orientation. The cluster preformation probabilities estimated by the two considered schemes are in more or less comparable agreement with the Blendowske–Walliser (BW) formula based on the α preformation probability S_α^{ave} obtained from the orientation-averaging scheme. All the results, including the optimum-orientation ones, deviate substantially from the BW law based on S_α^{opt}, which was estimated from the optimum-orientation scheme. To account for the nuclear deformations, it is more relevant to calculate the decay width by averaging over the different possible orientations of the participating deformed nuclei, rather than considering the corresponding non-compact optimum orientation.

  8. Creating "Intelligent" Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, Noel; Taylor, Patrick

    2014-05-01

    The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequally weighting multi-model ensembles. The intention is to produce improved ("intelligent") unequal-weight ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing the equal-weighted ensemble average and an ensemble weighted using the process-based metric. Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations including a non-forced preindustrial control experiment, historical simulations, and
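
    The arithmetic of an unequal-weight ensemble is compact; the sketch below weights each model by the inverse of a process-based metric error. The models, errors, and inverse-error rule are illustrative assumptions, not the study's prescription:

```python
import numpy as np

def weighted_ensemble(projections, metric_error):
    """Unequal-weight ensemble mean: weight each model by the inverse of
    its process-based metric error (illustrative weighting rule)."""
    w = 1.0 / np.asarray(metric_error)
    w /= w.sum()
    return w @ np.asarray(projections)

# Hypothetical: five models' projected warming (K) and their errors in a
# process metric (e.g., OLR vs surface temperature against CERES data).
proj = np.array([2.1, 2.9, 3.4, 2.6, 4.0])
err = np.array([0.8, 0.3, 0.5, 0.4, 1.2])
print(weighted_ensemble(proj, err), proj.mean())
```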

  10. AMPERE AVERAGE CURRENT PHOTOINJECTOR AND ENERGY RECOVERY LINAC.

    SciTech Connect

    BEN-ZVI,I.; BURRILL,A.; CALAGA,R.; ET AL.

    2004-08-17

    High-power Free-Electron Lasers were made possible by advances in superconducting linacs operated in an energy-recovery mode. In order to get to much higher power levels, say a fraction of a megawatt of average power, many technological barriers are yet to be broken. We describe work on CW, high-current and high-brightness electron beams. This includes a description of a superconducting, laser-photocathode RF gun employing a new secondary-emission multiplying cathode and an accelerator cavity, both capable of producing on the order of one ampere of average current, and plans for an ERL based on these units.

  11. Improving the Average Response Time in Collective I/O

    SciTech Connect

    Jin, Chen; Sehrish, Saba; Liao, Wei-keng; Choudhary, Alok; Schuchardt, Karen L.

    2011-09-21

    In collective I/O, MPI processes exchange requests so that the rearranged requests can result in the shortest file system access time. Scheduling the exchange sequence determines the response time of participating processes. Existing implementations that simply follow the increasing order of file offsets do not necessarily produce the best performance. To minimize the average response time, we propose three scheduling algorithms that consider the number of processes per file stripe and the number of accesses per process. Our experimental results demonstrate improvements of up to 50% in the average response time using two synthetic benchmarks and a high-resolution climate application.
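
    The scheduling intuition can be seen in a toy calculation: serving cheap requests first minimizes the mean of cumulative completion times, which is why plain file-offset order is not necessarily best. Service costs below are made up, and the paper's algorithms additionally weigh stripes and per-process access counts:

```python
import numpy as np

def mean_response(service_times):
    """Mean completion time when requests are served in the given order."""
    return np.mean(np.cumsum(service_times))

per_process_cost = np.array([5.0, 1.0, 3.0, 2.0])   # made-up I/O costs
print(mean_response(per_process_cost))              # file-offset order
print(mean_response(np.sort(per_process_cost)))     # shortest-first
```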

  12. A preliminary, precise measurement of the average B hadron lifetime

    SciTech Connect

    SLD Collaboration

    1994-07-01

    The average B hadron lifetime was measured using data collected with the SLD detector at the SLC in 1993. From a sample of {approximately}50,000 Z{sup 0} events, a sample enriched in Z{sup 0} {yields} b{bar b} was selected by applying an impact parameter tag. The lifetime was extracted from the decay length distribution of inclusive vertices reconstructed in three dimensions. A binned maximum likelihood method yielded an average B hadron lifetime of {tau}{sub B} = 1.577 {plus_minus} 0.032(stat.) {plus_minus} 0.046(syst.) ps.

  13. An improved switching converter model using discrete and average techniques

    NASA Technical Reports Server (NTRS)

    Shortt, D. J.; Lee, F. C.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters has been done by averaging and discrete-sampling techniques. The averaging technique is simple, but inaccurate as the modulation frequencies approach the theoretical limit of one-half the switching frequency. The discrete technique is accurate even at high frequencies, but is very complex and cumbersome. An improved model is developed by combining the aforementioned techniques. This new model is easy to implement in circuit and state variable forms and is accurate to the theoretical limit.

  14. Comparison of peak and average nitrogen dioxide concentrations inside homes

    NASA Astrophysics Data System (ADS)

    Franklin, Peter; Runnion, Tina; Farrar, Drew; Dingle, Peter

    Most health studies measuring indoor nitrogen dioxide (NO2) concentrations have utilised long-term passive monitors. However, this method may not provide adequate information on short-term peaks, which may be important when examining health effects of this pollutant. The aims of this study were to investigate the relationship between short-term peak (peak) and long-term average (average) NO2 concentrations in kitchens and the effect of gas cookers on this relationship. Both peak and average NO2 levels were measured simultaneously in the kitchens of 53 homes using passive sampling techniques. All homes were non-smoking and sampling was conducted in the summer months. Geometric mean (95% confidence interval (CI)) average NO2 concentrations for all homes were 16.2 μg m-3 (12.7-20.6 μg m-3). There was no difference between homes with and without gas cookers (p=0.40). Geometric mean (95% CI) peak NO2 concentrations were 45.3 μg m-3 (36.0-57.1 μg m-3). Unlike average concentrations, peak concentrations were significantly higher in homes with gas cookers (64.0 μg m-3, 48.5-82.0 μg m-3) compared to non-gas homes (25.1 μg m-3, 18.3-35.5 μg m-3) (p<0.001). There was only a moderate correlation between the peak and average concentrations measured in all homes (r=0.39, p=0.004). However, when the data were analysed separately based on the presence of gas cookers, the correlation between peak and average NO2 concentrations was improved in non-gas homes (r=0.59, p=0.005) but was not significant in homes with gas cookers (r=0.19, p=0.33). These results suggest that average NO2 concentrations do not adequately identify exposure to short-term peaks of NO2 that may be caused by gas cookers. The lack of peak exposure data in many epidemiological studies may explain some of the inconsistent findings.

  15. Analytical solution of average path length for Apollonian networks

    NASA Astrophysics Data System (ADS)

    Zhang, Zhongzhi; Chen, Lichao; Zhou, Shuigeng; Fang, Lujun; Guan, Jihong; Zou, Tao

    2008-01-01

    With the help of recursion relations derived from the self-similar structure, we obtain the solution of the average path length, d̄_t, for Apollonian networks. In contrast to the well-known numerical result d̄_t ∝ (ln N_t)^{3/4} [J. S. Andrade, Jr., Phys. Rev. Lett. 94, 018702 (2005)], our rigorous solution shows that the average path length grows logarithmically as d̄_t ∝ ln N_t in the infinite limit of network size N_t. The extensive numerical calculations completely agree with our closed-form solution.
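
    The scaling is easy to probe numerically. A sketch that builds a two-dimensional Apollonian network by recursive triangle subdivision and computes the average path length with networkx (generation counts chosen arbitrarily):

```python
import networkx as nx

def apollonian(generations):
    """2-D Apollonian network: repeatedly insert a node into every
    triangular face and connect it to the three corners."""
    G = nx.Graph([(0, 1), (1, 2), (0, 2)])
    faces, nxt = [(0, 1, 2)], 3
    for _ in range(generations):
        new_faces = []
        for a, b, c in faces:
            d, nxt = nxt, nxt + 1
            G.add_edges_from([(d, a), (d, b), (d, c)])
            new_faces += [(a, b, d), (a, c, d), (b, c, d)]
        faces = new_faces
    return G

for t in range(1, 6):
    G = apollonian(t)
    print(G.number_of_nodes(), nx.average_shortest_path_length(G))
```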

  16. Method of Best Representation for Averages in Data Evaluation

    SciTech Connect

    Birch, M. Singh, B.

    2014-06-15

    A new method for averaging data for which incomplete information is available is presented. For example, this method would be applicable during data evaluation where only the final outcomes of the experiments and the associated uncertainties are known. This method is based on using the measurements to construct a mean probability density for the data set. This “expected value method” (EVM) is designed to treat asymmetric uncertainties and has distinct advantages over other methods of averaging, including giving a more realistic uncertainty, being robust to outliers and consistent under various representations of the same quantity.
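
    A sketch of the underlying idea only, not the published EVM prescription: model each measurement with asymmetric uncertainties as a two-piece Gaussian, average the densities, and report the expectation and spread of the resulting mean density. All distributional choices are assumptions:

```python
import numpy as np

def evm_like_average(values, sig_lo, sig_hi, grid):
    """Average data with asymmetric errors by constructing the mean
    probability density (two-piece Gaussians assumed per datum) and
    taking its expectation and spread."""
    pdf = np.zeros_like(grid)
    for v, lo, hi in zip(values, sig_lo, sig_hi):
        s = np.where(grid < v, lo, hi)          # asymmetric widths
        p = np.exp(-0.5 * ((grid - v) / s) ** 2)
        pdf += p / np.trapz(p, grid)            # normalize each datum
    pdf /= len(values)                          # mean density
    mu = np.trapz(grid * pdf, grid)
    sd = np.sqrt(np.trapz((grid - mu) ** 2 * pdf, grid))
    return mu, sd

grid = np.linspace(0.0, 20.0, 4001)
print(evm_like_average([9.8, 10.5], [0.4, 1.2], [0.9, 0.3], grid))
```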

  17. Bounce-averaged Fokker-Planck code for stellarator transport

    SciTech Connect

    Mynick, H.E.; Hitchon, W.N.G.

    1985-07-01

    A computer code for solving the bounce-averaged Fokker-Planck equation appropriate to stellarator transport has been developed, and its first applications made. The code is much faster than the bounce-averaged Monte-Carlo codes, which up to now have provided the most efficient numerical means for studying stellarator transport. Moreover, because the connection to analytic kinetic theory of the Fokker-Planck approach is more direct than for the Monte-Carlo approach, a comparison of theory and numerical experiment is now possible at a considerably more detailed level than previously.

  18. Electronic structure of substitutionally disordered alloys: Direct configurational averaging

    SciTech Connect

    Wolverton, C.; de Fontaine, D.; Dreysse, H.; Ceder, G.

    1992-04-01

    The method of direct configurational averaging (DCA) has been proposed to study the electronic structure of disordered alloys. Local density of states and band structure energies are obtained by averaging over a small number of configurations within a tight-binding Hamiltonian. Effective cluster interactions, the driving quantities for ordering in solids, are computed for various alloys using a tight-binding form of the linearized muffin-tin orbital method (TB-LMTO). The DCA calculations are used to determine various energetic and thermodynamic quantities for binary and ternary alloys (Pd, Rh, V).

  19. High average power scaleable thin-disk laser

    DOEpatents

    Beach, Raymond J.; Honea, Eric C.; Bibeau, Camille; Payne, Stephen A.; Powell, Howard; Krupke, William F.; Sutton, Steven B.

    2002-01-01

    Using a thin disk laser gain element with an undoped cap layer enables the scaling of lasers to extremely high average output power values. Ordinarily, the power scaling of such thin disk lasers is limited by the deleterious effects of amplified spontaneous emission. By using an undoped cap layer diffusion bonded to the thin disk, the onset of amplified spontaneous emission does not occur as readily as if no cap layer is used, and much larger transverse thin disks can be effectively used as laser gain elements. This invention can be used as a high average power laser for material processing applications as well as for weapon and air defense applications.

  20. Collision and average velocity effects on the ratchet pinch

    SciTech Connect

    Vlad, M.; Benkadda, S.

    2008-03-15

    A ratchet-type average velocity V{sup R} appears for test particles moving in a stochastic potential and a magnetic field that is space dependent. This model is developed by including particle collisions and an average velocity. We show that these components of the motion can destroy the ratchet velocity but they also can produce significant increase of V{sup R}, depending on the parameters. The amplification of the ratchet pinch is a nonlinear effect that appears in the presence of trajectory eddying.

  1. Average wave function method for gas-surface scattering

    NASA Astrophysics Data System (ADS)

    Singh, Harjinder; Dacol, Dalcio K.; Rabitz, Herschel

    1986-02-01

    The average wave function method (AWM) is applied to scattering of a gas off a solid surface. The formalism is developed for both periodic as well as disordered surfaces. For an ordered lattice an explicit relation is derived for the Bragg peaks along with a numerical illustration. Numerical results are presented for atomic clusters on a flat hard wall with a Gaussian-like potential at each atomic scattering site. The effect of relative lateral displacement of two clusters upon the scattering pattern is shown. The ability of AWM to accommodate disorder through statistical averaging over cluster configurations is illustrated. Enhanced uniform backscattering is observed with increasing roughness on the surface.

  2. A preliminary measurement of the average B hadron lifetime

    SciTech Connect

    Manly, S.L.; SLD Collaboration

    1994-09-01

    The average B hadron lifetime was measured using data collected with the SLD detector at the SLC in 1993. From a sample of {approximately}50,000 Z{sup 0} events, a sample enriched in Z{sup 0} {yields} b{bar b} was selected by applying an impact parameter tag. The lifetime was extracted from the decay length distribution of inclusive vertices reconstructed in three dimensions. A binned maximum likelihood method yielded an average B hadron lifetime of {tau}{sub B} = 1.577{plus_minus}0.032(stat.){plus_minus}0.046(syst.) ps.

  3. Averaging processes in granular flows driven by gravity

    NASA Astrophysics Data System (ADS)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frameworks for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences in dealing with the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) does not change during the averaging process and the two definitions of average coincide. This hypothesis no longer holds in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate corresponds to local averaging (in order to describe some instability phenomena or secondary circulation), and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental

  4. Time Series ARIMA Models of Undergraduate Grade Point Average.

    ERIC Educational Resources Information Center

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
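
    A minimal Box-Jenkins run with statsmodels, using a made-up grade point average series and an arbitrarily chosen ARIMA(1,1,1) order; in practice the order would come from the identification stage (ACF/PACF inspection):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Made-up semester GPA series; order (1,1,1) chosen arbitrarily here,
# whereas Box-Jenkins identification would pick it from ACF/PACF plots.
gpa = np.array([2.8, 2.9, 3.0, 2.7, 3.1, 3.2, 3.0, 3.3,
                3.1, 3.4, 3.2, 3.5])
fit = ARIMA(gpa, order=(1, 1, 1)).fit()   # estimation stage
print(fit.summary())                      # diagnosis: inspect residuals
print(fit.forecast(steps=2))              # forecast next two semesters
```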

  5. Bounding quantum gate error rate based on reported average fidelity

    NASA Astrophysics Data System (ADS)

    Sanders, Yuval R.; Wallman, Joel J.; Sanders, Barry C.

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates.

  6. Bounding quantum gate error rate based on reported average fidelity

    NASA Astrophysics Data System (ADS)

    Sanders, Yuval; Wallman, Joel; Sanders, Barry

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli-distance as a measure of this deviation, and we show that knowledge of the Pauli-distance enables tighter estimates of the error rate of quantum gates.

  7. Evaluating Methods for Constructing Average High-Density Electrode Positions

    PubMed Central

    Richards, John E.; Boswell, Corey; Stevens, Michael; Vendemia, Jennifer M.C.

    2014-01-01

    Accurate analysis of scalp-recorded electrical activity requires the identification of electrode locations in 3D space. For example, source analysis of EEG/ERP (electroencephalogram, EEG; event-related-potentials, ERP) with realistic head models requires the identification of electrode locations on the head model derived from structural MRI recordings. Electrode systems must cover the entire scalp in sufficient density to discriminate EEG activity on the scalp and to complete accurate source analysis. The current study compares techniques for averaging electrode locations from 86 participants with the 128 channel “Geodesic Sensor Net” (GSN; EGI, Inc.), 38 participants with the 128 channel “Hydrocel Geodesic Sensor Net” (HGSN; EGI, Inc.), and 174 participants with the 81 channels in the 10-10 configurations. A point-set registration between the participants and an average MRI template resulted in an average configuration showing small standard errors, which could be transformed back accurately into the participants’ original electrode space. Average electrode locations are available for the GSN (86 participants), Hydrocel-GSN (38 participants), and 10-10 and 10-5 systems (174 participants). PMID:25234713

  8. Analysis of Finger Pulse by Standard Deviation Using Moving Average

    NASA Astrophysics Data System (ADS)

    Asakawa, Takashi; Nishihara, Kazue; Yoshidome, Tadashi

    We propose a method of analyzing a finger pulse using the standard deviation about a moving average, for the purpose of measuring mental load. Frequency analysis, Lorentz plots, and Lyapunov exponents have previously been used for such measurements; however, the proposed technique can be computed in a shorter time than these existing techniques.
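
    On a natural reading of the abstract, the index is the standard deviation of the pulse trace about its moving average; the window length, sampling rate, and synthetic signal below are assumptions:

```python
import numpy as np

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="valid")

def sd_about_moving_average(pulse, w=25):
    """Standard deviation of the pulse trace about its moving average,
    i.e., short-term variability with the slow baseline removed."""
    ma = moving_average(pulse, w)
    resid = pulse[w - 1:] - ma   # align samples with 'valid' output
    return float(np.std(resid))

# Synthetic pulse: beat-to-beat oscillation plus slow baseline drift
t = np.linspace(0.0, 60.0, 6000)
pulse = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 0.05 * t)
print(sd_about_moving_average(pulse))
```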

  9. HIGH AVERAGE POWER UV FREE ELECTRON LASER EXPERIMENTS AT JLAB

    SciTech Connect

    Douglas, David; Evtushenko, Pavel; Gubeli, Joseph; Hernandez-Garcia, Carlos; Legg, Robert; Neil, George; Powers, Thomas; Shinn, Michelle D; Tennant, Christopher; Williams, Gwyn

    2012-07-01

    Having produced 14 kW of average power at {approx}2 microns, JLAB has shifted its focus to the ultraviolet portion of the spectrum. This presentation will describe the JLab UV Demo FEL, present specifics of its driver ERL, and discuss the latest experimental results from FEL experiments and machine operations.

  10. AVERAGE ANNUAL SOLAR UV DOSE OF THE CONTINENTAL US CITIZEN

    EPA Science Inventory

    The average annual solar UV dose of US citizens is not known, but is required for relative risk assessments of skin cancer from UV-emitting devices. We solved this problem using a novel approach. The EPA's "National Human Activity Pattern Survey" recorded the daily ou...

  11. 40 CFR 63.652 - Emissions averaging provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... annual credits and debits in the Periodic Reports as specified in § 63.655(g)(8). Every fourth Periodic... reported in the next Periodic Report. (iii) The following procedures and equations shall be used to..., dimensionless (see table 33 of subpart G). P=Weighted average rack partial pressure of organic HAP's...

  12. 40 CFR 63.652 - Emissions averaging provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... annual credits and debits in the Periodic Reports as specified in § 63.655(g)(8). Every fourth Periodic... reported in the next Periodic Report. (iii) The following procedures and equations shall be used to..., dimensionless (see table 33 of subpart G). P=Weighted average rack partial pressure of organic HAP's...

  13. All above Average: Secondary School Improvement as an Impossible Endeavour

    ERIC Educational Resources Information Center

    Taylor, Phil

    2015-01-01

    This article argues that secondary school improvement in England, when viewed as a system, has become an impossible endeavour. This arises from the conflation of improvement with effectiveness, judged by a narrow range of outcome measures and driven by demands that all schools should somehow be above average. The expectation of comparable…

  14. 27 CFR 19.249 - Average effective tax rate.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Average effective tax rate. 19.249 Section 19.249 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Distilled Spirits Taxes Effective Tax Rates §...

  15. Grade Point Average and Changes in (Great) Grade Expectations.

    ERIC Educational Resources Information Center

    Wendorf, Craig A.

    2002-01-01

    Examines student grade expectations throughout a semester in which students offered their expectations three times during the course: (1) within the first week; (2) midway through the semester; and (3) the week before the final examination. Finds that their expectations decreased stating that their cumulative grade point average was related to the…

  16. Touching Epistemologies: Meanings of Average and Variation in Nursing Practice.

    ERIC Educational Resources Information Center

    Noss, Richard; Pozzi, Stefano; Hoyles, Celia

    1999-01-01

    Presents a study on the meanings of average and variation displayed by pediatric nurses. Traces how these meanings shape and are shaped by nurses' interpretations of trends in patient and population data. Suggests a theoretical framework for making sense of the data that compares and contrasts nurses' epistemology with that of official…

  17. 40 CFR 63.503 - Emissions averaging provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.503 Emissions averaging... limited to twenty. This number may be increased by up to five additional points if pollution prevention... pollution prevention measures are used to control five or more of the emission points included in...

  18. 40 CFR 63.1332 - Emissions averaging provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Standards for Hazardous Air Pollutant Emissions: Group IV Polymers and Resins § 63.1332 Emissions averaging... if pollution prevention measures are used to control five or more of the emission points included in... additional emission points if pollution prevention measures are used to control five or more of the...

  19. Robustness of spatial average equalization: a statistical reverberation model approach.

    PubMed

    Bharitkar, Sunil; Hilmes, Philip; Kyriakakis, Chris

    2004-12-01

    Traditionally, multiple listener room equalization is performed to improve sound quality at all listeners during audio playback in a multiple listener environment (e.g., movie theaters, automobiles, etc.). A typical way of doing multiple listener equalization is through spatial averaging, where the room responses are averaged spatially between positions and an inverse equalization filter is found from the spatially averaged result. However, the equalization performance will be affected if there is a mismatch between the position of the microphones (which are used for measuring the room responses for designing the equalization filter) and the actual center of listener head position (during playback). In this paper, we present results on the effects of microphone-listener mismatch on spatial average equalization performance. The results indicate that, for the analyzed rectangular configuration, the region of effective equalization depends on (i) the distance of a listener from the source, (ii) the amount of mismatch between the responses, and (iii) the frequency of the audio signal. We also present some convergence analysis to interpret the results.
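
    A bare-bones version of spatial average equalization, assuming a list of measured room impulse responses and a simple regularized magnitude inversion; a practical design would add smoothing, minimum-phase conversion, and band limiting:

```python
import numpy as np

def spatial_average_equalizer(rirs, n_fft=4096, beta=1e-3):
    """Design an equalizer from the spatially averaged magnitude response
    of several measured room impulse responses (regularized inversion)."""
    mags = np.array([np.abs(np.fft.rfft(h, n_fft)) for h in rirs])
    avg = mags.mean(axis=0)             # spatial average over positions
    inv = avg / (avg ** 2 + beta)       # regularized magnitude inverse
    return np.fft.irfft(inv, n_fft)     # zero-phase prototype filter

# Hypothetical responses at four listener positions
rng = np.random.default_rng(0)
rirs = [rng.standard_normal(2048) * np.exp(-np.arange(2048) / 300.0)
        for _ in range(4)]
eq = spatial_average_equalizer(rirs)
```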

  20. 40 CFR 63.1332 - Emissions averaging provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Standards for Hazardous Air Pollutant Emissions: Group IV Polymers and Resins § 63.1332 Emissions averaging... based on either organic HAP or TOC. (3) For the purposes of these provisions, whenever Method 18, 40 CFR... through provisions outside this section, Method 18 or Method 25A, 40 CFR part 60, appendix A, may be...

  1. 40 CFR 63.503 - Emissions averaging provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...; housekeeping measures; and in-process recycling that returns waste materials directly to production as raw... Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.503 Emissions averaging... TOC. (3) For the purposes of the provisions in this section, whenever Method 18, 40 CFR part...

  2. Discrete Averaging Relations for Micro to Macro Transition

    NASA Astrophysics Data System (ADS)

    Liu, Chenchen; Reina, Celia

    2016-05-01

    The well-known Hill's averaging theorems for stresses and strains as well as the so-called Hill-Mandel principle of macrohomogeneity are essential ingredients for the coupling and the consistency between the micro and macro scales in multiscale finite element procedures (FE$^2$). We show in this paper that these averaging relations hold exactly under standard finite element discretizations, even if the stress field is discontinuous across elements and the standard proofs based on the divergence theorem are no longer suitable. The discrete averaging results are derived for the three classical types of boundary conditions (affine displacement, periodic and uniform traction boundary conditions) using the properties of the shape functions and the weak form of the microscopic equilibrium equations. The analytical proofs are further verified numerically through a simple finite element simulation of an irregular representative volume element undergoing large deformations. Furthermore, the proofs are extended to include the effects of body forces and inertia, and the results are consistent with those in the smooth continuum setting. This work provides a solid foundation to apply Hill's averaging relations in multiscale finite element methods without introducing an additional error in the scale transition due to the discretization.
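
    For reference, the relations in question are the volume averages over the representative volume element and the Hill-Mandel macrohomogeneity condition, which in the usual continuum notation read:

```latex
% Volume average over the RVE V and the Hill-Mandel condition
% (standard continuum statements referenced by the abstract):
\langle \,\cdot\, \rangle \equiv \frac{1}{|V|} \int_V (\,\cdot\,)\, \mathrm{d}V ,
\qquad
\langle \boldsymbol{\sigma} : \dot{\boldsymbol{\varepsilon}} \rangle
  = \langle \boldsymbol{\sigma} \rangle : \langle \dot{\boldsymbol{\varepsilon}} \rangle .
```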

  3. Reducing Noise by Repetition: Introduction to Signal Averaging

    ERIC Educational Resources Information Center

    Hassan, Umer; Anwar, Muhammad Sabieh

    2010-01-01

    This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
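
    The core effect is easy to demonstrate: averaging n repetitions of a repeatable signal in independent noise improves the signal-to-noise ratio by a factor of n in power, about 6 dB per quadrupling of trials. A self-contained sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
signal = np.sin(2 * np.pi * 5 * t)          # repeatable evoked response

def snr_after_averaging(n_trials, noise_sd=2.0):
    """SNR (dB) of the average of n noisy repetitions; noise power
    drops as 1/n, so SNR gains ~10*log10(n) dB."""
    trials = signal + noise_sd * rng.standard_normal((n_trials, t.size))
    residual = trials.mean(axis=0) - signal
    return 10 * np.log10(signal.var() / residual.var())

for n in (1, 4, 16, 64):
    print(n, round(snr_after_averaging(n), 1), "dB")  # ~+6 dB per 4x
```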

  4. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    PubMed

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-01-01

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two. PMID:26041580

  5. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging

    PubMed Central

    Brezis, Noam; Bronfman, Zohar Z.; Usher, Marius

    2015-01-01

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two. PMID:26041580

  6. State-Variable Representations For Moving-Average Sampling

    NASA Technical Reports Server (NTRS)

    Polites, Michael E.

    1991-01-01

    Two state-variable representations are derived for a continuous-time plant driven by a control algorithm including a zero-order hold, with measurements sampled at multiple rates by multiple-input/multiple-output moving-average processes. The new representations enhance the observability and controllability of the plant. Applications include mathematical modeling of navigation systems including star trackers, gyroscopes, and accelerometers.

  7. Speckle averaging system for laser raster-scan image projection

    DOEpatents

    Tiszauer, Detlev H.; Hackel, Lloyd A.

    1998-03-17

    The viewers' perception of laser speckle in a laser-scanned image projection system is modified or eliminated by the addition of an optical deflection system that effectively presents a new speckle realization at each point on the viewing screen to each viewer for every scan across the field. The speckle averaging is accomplished without introduction of spurious imaging artifacts.

  8. Speckle averaging system for laser raster-scan image projection

    DOEpatents

    Tiszauer, D.H.; Hackel, L.A.

    1998-03-17

    The viewers' perception of laser speckle in a laser-scanned image projection system is modified or eliminated by the addition of an optical deflection system that effectively presents a new speckle realization at each point on the viewing screen to each viewer for every scan across the field. The speckle averaging is accomplished without introduction of spurious imaging artifacts. 5 figs.

  9. Synthesizing average 3D anatomical shapes using deformable templates

    NASA Astrophysics Data System (ADS)

    Christensen, Gary E.; Johnson, Hans J.; Haller, John W.; Melloy, Jenny; Vannier, Michael W.; Marsh, Jeffrey L.

    1999-05-01

    A major task in diagnostic medicine is to determine whether or not an individual has a normal or abnormal anatomy by examining medical images such as MRI, CT, etc. Unfortunately, there are few quantitative measures that a physician can use to discriminate between normal and abnormal besides a couple of length, width, height, and volume measurements. In fact, there is no definition/picture of what normal anatomical structures--such as the brain--look like, let alone normal anatomical variation. The goal of this work is to synthesize average 3D anatomical shapes using deformable templates. We present a method for empirically estimating the average shape and variation of a set of 3D medical image data sets collected from a homogeneous population of topologically similar anatomies. Results are shown for synthesizing the average brain image volume from a set of six normal adults and synthesizing the average skull/head image volume from a set of five 3- to 4-month-old infants with sagittal synostosis.

  10. The method of averages applied to the KS differential equations

    NASA Technical Reports Server (NTRS)

    Graf, O. F., Jr.; Mueller, A. C.; Starke, S. E.

    1977-01-01

    A new approach for the solution of artificial satellite trajectory problems is proposed. The basic idea is to apply an analytical solution method (the method of averages) to an appropriate formulation of the orbital mechanics equations of motion (the KS-element differential equations). The result is a set of transformed equations of motion that are more amenable to numerical solution.

  11. Average Strength Parameters of Reactivated Mudstone Landslide for Countermeasure Works

    NASA Astrophysics Data System (ADS)

    Nakamura, Shinya; Kimura, Sho; Buddhi Vithana, Shriwantha

    2015-04-01

    Among the many approaches to landslide stability analysis, several landslide-related studies have used shear strength parameters obtained from laboratory shear tests with the limit equilibrium method. Most of them concluded that the average strength parameters, i.e. average cohesion (c'avg) and average angle of shearing resistance (φ'avg), calculated from back analysis were in agreement with the residual shear strength parameters measured by torsional ring-shear tests on undisturbed and remolded samples. However, disagreement with this contention can be found elsewhere: the residual shear strength measured using a torsional ring-shear apparatus was found to be lower than the average strength calculated by back analysis. One of the reasons why the singular application of residual shear strength in stability analysis causes an underestimation of the safety factor is the fact that the condition of the slip surface of a landslide can be heterogeneous. It may consist of portions that have already reached residual conditions along with other portions that have not. With a view to accommodating such possible differences in slip surface conditions, it is worthwhile first to grasp an appropriate perception of the heterogeneous nature of the actual slip surface to ensure a more suitable selection of measured shear strength values for the stability calculation of landslides. In the present study, the determination procedure for the average strength parameters acting along the slip surface is presented through stability calculations of reactivated landslides in the Shimajiri-mudstone area of Okinawa, Japan. The average strength parameters along the slip surfaces of landslides have been estimated using the results of laboratory shear tests of the slip surface/zone soils, accompanied by a rational way of assessing the actual, heterogeneous slip surface conditions. The results tend to show that the shear strength acting along the

  12. High average power diode pumped solid state lasers for CALIOPE

    SciTech Connect

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

    Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers.

  13. Condition monitoring of gearboxes using synchronously averaged electric motor signals

    NASA Astrophysics Data System (ADS)

    Ottewill, J. R.; Orkisz, M.

    2013-07-01

    Due to their prevalence in rotating machinery, the condition monitoring of gearboxes is extremely important in the minimization of potentially dangerous and expensive failures. Traditionally, gearbox condition monitoring has been conducted using measurements obtained from casing-mounted vibration transducers such as accelerometers. A well-established technique for analyzing such signals is the synchronous signal average, where vibration signals are synchronized to a measured angular position and then averaged from rotation to rotation. Driven, in part, by improvements in control methodologies based upon methods of estimating rotor speed and torque, induction machines are used increasingly in industry to drive rotating machinery. As a result, attempts have been made to diagnose defects using measured terminal currents and voltages. In this paper, the application of the synchronous signal averaging methodology to electric drive signals, by synchronizing stator current signals with a shaft position estimated from current and voltage measurements, is proposed. Initially, a test-rig is introduced based on an induction motor driving a two-stage reduction gearbox which is loaded by a DC motor. It is shown that a defect seeded into the gearbox may be located using signals acquired from casing-mounted accelerometers and shaft-mounted encoders. Using simple models of an induction motor and a gearbox, it is shown that it should be possible to observe gearbox defects in the measured stator current signal. A robust method of extracting the average speed of a machine from the current frequency spectrum, based on the location of sidebands of the power supply frequency due to rotor eccentricity, is presented. The synchronous signal averaging method is applied to the resulting estimations of rotor position and torsional vibration. Experimental results show that the method is extremely adept at locating gear tooth defects. Further results, considering different loads and different
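
    As a minimal sketch of the synchronous signal average described above (all names are hypothetical, and the shaft angle is assumed to be already estimated, whether from an encoder or from the current/voltage measurements): each revolution is interpolated onto a common angular grid and the revolutions are averaged, which attenuates any component not synchronous with the shaft.

        import numpy as np

        def synchronous_average(signal, angle, samples_per_rev=512):
            """Rotation-to-rotation average of a signal against shaft angle.

            signal : 1-D array of samples (vibration or stator current)
            angle  : shaft angle in radians at each sample, monotonically
                     increasing and assumed to start near zero
            """
            n_revs = int(angle[-1] // (2 * np.pi))
            grid = np.linspace(0.0, 2 * np.pi, samples_per_rev, endpoint=False)
            # Interpolate each revolution onto the common angular grid.
            revs = [np.interp(grid + 2 * np.pi * k, angle, signal)
                    for k in range(n_revs)]
            # Averaging suppresses components not locked to the shaft rotation.
            return grid, np.mean(revs, axis=0)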

  14. Optimum Low Thrust Elliptic Orbit Transfer Using Numerical Averaging

    NASA Astrophysics Data System (ADS)

    Tarzi, Zahi Bassem

    Low-thrust electric propulsion is increasingly being used for spacecraft missions primarily due to its high propellant efficiency. Since analytical solutions for general low-thrust transfers are not available, a simple and fast method for low-thrust trajectory optimization is of great value for preliminary mission planning. However, few low-thrust trajectory tools are appropriate for preliminary mission design studies. The method presented in this paper provides quick and accurate solutions for a wide range of transfers by using numerical orbital averaging to improve solution convergence and include orbital perturbations, thus allowing preliminary trajectories to be obtained for transfers which involve many revolutions about the primary body. This method considers minimum fuel transfers using first order averaging to obtain the fuel optimum rates of change of the equinoctial orbital elements in terms of each other and the Lagrange multipliers. Constraints on thrust and power, as well as minimum periapsis, are implemented and the equations are averaged numerically using a Gaussian quadrature. The use of numerical averaging allows for more complex orbital perturbations to be added without great difficulty. Orbital perturbations due to solar radiation pressure, atmospheric drag, a non-spherical central body, and third body gravitational effects have been included. These perturbations have not been considered by previous methods using analytical averaging. Thrust limitations due to shadowing have also been considered in this study. To allow for faster convergence of a wider range of problems, the solution to a transfer which minimizes the square of the thrust magnitude is used as a preliminary guess for the minimum fuel problem. Thus, this method can be quickly applied to many different types of transfers which may include various perturbations. Results from this model are shown to provide a reduction in propellant mass required over previous minimum fuel solutions
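
    A rough sketch of the numerical orbit-averaging step, assuming the instantaneous element rate is available as a vectorized function of an orbital angle (the uniform weighting in the angle is our simplification; the full method would also weight by the time Jacobian of the chosen angle variable):

        import numpy as np

        def orbit_average(rate, n_nodes=32):
            """Average an instantaneous rate over one revolution by
            Gauss-Legendre quadrature; `rate` is a hypothetical callable,
            vectorized over an angle F in [0, 2*pi)."""
            x, w = np.polynomial.legendre.leggauss(n_nodes)  # nodes on [-1, 1]
            F = np.pi * (x + 1.0)                            # map to [0, 2*pi]
            integral = np.pi * np.sum(w * rate(F))           # quadrature sum
            return integral / (2.0 * np.pi)                  # mean over the orbit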

  15. High-average-power diode-pumped Yb: YAG lasers

    SciTech Connect

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-10-01

    A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high average power Yb:YAG lasers that utilize a rod configured gain element. Previously, this rod configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High beam quality (M{sup 2} = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual rod configuration consisting of two, 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M{sup 2} value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M{sup 2} < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) Hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes. (2) Compound laser rods with flanged-nonabsorbing-endcaps fabricated by diffusion bonding. (3) Techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods.

  16. Discrete models of fluids: spatial averaging, closure and model reduction

    SciTech Connect

    Panchenko, Alexander; Tartakovsky, Alexandre M.; Cooper, Kevin

    2014-04-15

    We consider semidiscrete ODE models of single-phase fluids and two-fluid mixtures. In the presence of multiple fine-scale heterogeneities, the size of these ODE systems can be very large. Spatial averaging is then a useful tool for reducing computational complexity of the problem. The averages satisfy exact balance equations of mass, momentum, and energy. These equations do not form a satisfactory continuum model because evaluation of stress and heat flux requires solving the underlying ODEs. To produce continuum equations that can be simulated without resolving microscale dynamics, we recently proposed a closure method based on the use of regularized deconvolution. Here we continue the investigation of deconvolution closure with the long term objective of developing consistent computational upscaling for multiphase particle methods. The structure of the fine-scale particle solvers is reminiscent of molecular dynamics. For this reason we use nonlinear averaging introduced for atomistic systems by Noll, Hardy, and Murdoch-Bedeaux. We also consider a simpler linear averaging originally developed in large eddy simulation of turbulence. We present several simple but representative examples of spatially averaged ODEs, where the closure error can be analyzed. Based on this analysis we suggest a general strategy for reducing the relative error of approximate closure. For problems with periodic highly oscillatory material parameters we propose a spectral boosting technique that augments the standard deconvolution and helps to correctly account for dispersion effects. We also conduct several numerical experiments, one of which is a complete mesoscale simulation of a stratified two-fluid flow in a channel. In this simulation, the operation count per coarse time step scales sublinearly with the number of particles.

  17. Averaged universe confronted with cosmological observations: A fully covariant approach

    NASA Astrophysics Data System (ADS)

    Wijenayake, Tharake; Lin, Weikang; Ishak, Mustapha

    2016-10-01

    One of the outstanding problems in general relativistic cosmology is that of averaging, that is, how the lumpy universe that we observe at small scales averages out to a smooth Friedmann-Lemaître-Robertson-Walker (FLRW) model. The root of the problem is that averaging does not commute with the Einstein equations that govern the dynamics of the model. This leads to the well-known question of backreaction in cosmology. In this work, we approach the problem using the covariant framework of macroscopic gravity. We use its cosmological solution with a flat FLRW macroscopic background where the result of averaging cosmic inhomogeneities has been encapsulated into a backreaction density parameter denoted ΩA. We constrain this averaged universe using available cosmological data sets of expansion and growth including, for the first time, a full cosmic microwave background analysis from Planck temperature anisotropy and polarization data, the supernova data from Union 2.1, the galaxy power spectrum from WiggleZ, the weak lensing tomography shear-shear cross correlations from the CFHTLenS survey, and the baryonic acoustic oscillation data from 6Df, SDSS DR7, and BOSS DR9. We find that -0.0155 ≤ΩA≤0 (at the 68% C.L.), thus providing a tight upper bound on the backreaction term. We also find that the term is strongly correlated with cosmological parameters, such as ΩΛ, σ8, and H0. While small, a backreaction density parameter of a few percent should be kept in consideration along with other systematics for precision cosmology.

  18. The Average Quality Factors by TEPC for Charged Particles

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Nikjoo, Hooshang; Cucinotta, Francis A.

    2004-01-01

    The quality factor used in radiation protection is defined as a function of LET, Q(sub ave)(LET). However, tissue equivalent proportional counters (TEPC) measure the average quality factor as a function of lineal energy (y), Q(sub ave)(y). A model of the TEPC response for charged particles considers energy deposition as a function of impact parameter from the ion's path to the volume, and describes the escape of energy out of the sensitive volume by delta-rays and the entry of delta rays from the high-density wall into the low-density gas volume. A common goal for operational detectors is to measure the average radiation quality to within an accuracy of 25%. Using our TEPC response model and the NASA space radiation transport model, we show that this accuracy is obtained by a properly calibrated TEPC. However, when the individual contributions from trapped protons and galactic cosmic rays (GCR) are considered, the average quality factor obtained by TEPC is overestimated for trapped protons and underestimated for GCR by about 30%, i.e., a compensating error. Using TEPC's values for trapped protons for Q(sub ave)(y), we obtained average quality factors in the 2.07-2.32 range. However, Q(sub ave)(LET) ranges from 1.5-1.65 as spacecraft shielding depth increases. The average quality factors for trapped protons on STS-89 demonstrate that the model of the TEPC response is in good agreement with flight TEPC data for Q(sub ave)(y), and thus Q(sub ave)(LET) for trapped protons is overestimated by TEPC. Preliminary comparisons for the complete GCR spectra show that Q(sub ave)(LET) for GCR is approximately 3.2-4.1, while TEPC measures 2.9-3.4 for Q(sub ave)(y), indicating that Q(sub ave)(LET) for GCR is underestimated by TEPC.
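
    For reference, the two dose-averaged quality factors compared in this abstract have the conventional forms below (a sketch of the standard definitions, not the paper's exact weighting):

        Q_{\mathrm{ave}}(\mathrm{LET}) = \frac{\int Q(L)\, D(L)\, dL}{\int D(L)\, dL},
        \qquad
        Q_{\mathrm{ave}}(y) = \frac{\int Q(y)\, d(y)\, dy}{\int d(y)\, dy},

    where D(L) is the absorbed-dose distribution in LET and d(y) is the dose distribution in lineal energy; the discrepancies discussed above arise because d(y) is only an imperfect surrogate for D(L).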

  19. Experimental Investigation of the Differences Between Reynolds-Averaged and Favre-Averaged Velocity in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Panda, J.; Seasholtz, R. G.

    2005-01-01

    Recent advancement in the molecular Rayleigh scattering based technique allowed for simultaneous measurement of velocity and density fluctuations with high sampling rates. The technique was used to investigate unheated high subsonic and supersonic fully expanded free jets in the Mach number range of 0.8 to 1.8. The difference between the Favre averaged and Reynolds averaged axial velocity and axial component of the turbulent kinetic energy is found to be small. Estimates based on Morkovin's "Strong Reynolds Analogy" were found to provide lower values of turbulent density fluctuations than the measured data.
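
    The two averages compared here differ by density weighting; in standard notation (a textbook definition, not specific to this paper),

        \tilde{u} \equiv \frac{\overline{\rho u}}{\overline{\rho}},
        \qquad
        \tilde{u} - \overline{u} = \frac{\overline{\rho' u'}}{\overline{\rho}},

    so the Favre (density-weighted) average and the Reynolds average coincide when density and velocity fluctuations are uncorrelated, consistent with the small differences reported above.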

  20. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... must meet the minimum driving range requirements established by the Secretary of Transportation (49 CFR... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of average fuel economy... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS...

  1. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... part, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in... potential sulfur dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use...

  2. 40 CFR 62.15210 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method... dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use EPA Reference...

  3. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... part, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in... potential sulfur dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use...

  4. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... part, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in... potential sulfur dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use...

  5. 40 CFR 62.15210 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method... dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use EPA Reference...

  6. 40 CFR 62.15210 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method... dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use EPA Reference...

  7. Estimates of Random Error in Satellite Rainfall Averages

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.

    2003-01-01

    Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.

  8. Ampere Average Current Photoinjector and Energy Recovery Linac

    SciTech Connect

    Ilan Ben-Zvi; A. Burrill; R. Calaga; P. Cameron; X. Chang; D. Gassner; H. Hahn; A. Hershcovitch; H.C. Hseuh; P. Johnson; D. Kayran; J. Kewisch; R. Lambiase; Vladimir N. Litvinenko; G. McIntyre; A. Nicoletti; J. Rank; T. Roser; J. Scaduto; K. Smith; T. Srinivasan-Rao; K.-C. Wu; A. Zaltsman; Y. Zhao; H. Bluem; A. Burger; Mike Cole; A. Favale; D. Holmes; John Rathke; Tom Schultheiss; A. Todd; J. Delayen; W. Funk; L. Phillips; Joe Preble

    2004-08-01

    High-power Free-Electron Lasers were made possible by advances in superconducting linacs operated in an energy-recovery mode, as demonstrated by the spectacular success of the Jefferson Laboratory IR-Demo. In order to get to much higher power levels, say a fraction of a megawatt average power, many technological barriers are yet to be broken. BNL's Collider-Accelerator Department is pursuing some of these technologies for a different application, that of electron cooling of high-energy hadron beams. I will describe work on CW, high-current and high-brightness electron beams. This will include a description of a superconducting, laser-photocathode RF gun employing a new secondary-emission multiplying cathode and an accelerator cavity, both capable of producing on the order of one ampere of average current.

  9. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-01-01

    We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  10. Detrending moving average algorithm: Frequency response and scaling performances.

    PubMed

    Carbone, Anna; Kiyono, Ken

    2016-06-01

    The Detrending Moving Average (DMA) algorithm has been widely used in its several variants for characterizing long-range correlations of random signals and sets (one-dimensional sequences or high-dimensional arrays) over either time or space. In this paper, mainly based on analytical arguments, the scaling performances of the centered DMA, including higher-order ones, are investigated by means of a continuous time approximation and a frequency response approach. Our results are also confirmed by numerical tests. The study is carried out for higher-order DMA operating with moving average polynomials of different degree. In particular, detrending power degree, frequency response, asymptotic scaling, upper limit of the detectable scaling exponent, and finite scale range behavior will be discussed. PMID:27415389
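
    A compact sketch of the zeroth-order centered DMA (higher-order variants replace the moving average with a moving polynomial fit; function and variable names here are ours):

        import numpy as np

        def dma_fluctuation(x, windows):
            """Centered detrending moving average, order 0.

            For each window n the integrated series is detrended by a
            centered moving average; the scaling exponent is the slope of
            log sigma(n) versus log n.
            """
            y = np.cumsum(x - np.mean(x))        # integrated profile
            sigmas = []
            for n in windows:
                trend = np.convolve(y, np.ones(n) / n, mode="same")
                core = slice(n, len(y) - n)      # discard edge effects
                sigmas.append(np.sqrt(np.mean((y[core] - trend[core]) ** 2)))
            return np.asarray(sigmas)

        # Example: for white noise the fitted scaling exponent is close to 0.5.
        x = np.random.default_rng(0).standard_normal(2 ** 14)
        windows = [2 ** k for k in range(3, 10)]
        sigma = dma_fluctuation(x, windows)
        print(np.polyfit(np.log(windows), np.log(sigma), 1)[0])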

  11. Modeling an Application's Theoretical Minimum and Average Transactional Response Times

    SciTech Connect

    Paiz, Mary Rose

    2015-04-01

    The theoretical minimum transactional response time of an application serves as a ba- sis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are re- sults of unsuccessful transactions. On the converse, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an ap- plication's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.

  12. Model Independent Constraints of the Averaged Neutrino Masses Revisited

    NASA Astrophysics Data System (ADS)

    Fukuyama, Takeshi; Nishiura, Hiroyuki

    2013-11-01

    Averaged neutrino masses defined by <{m}ν >ab≡ (\\vert∑_ {j = 1}3UajUbjmj) \\vert (a, b = e, μ , τ ) are reanalyzed using up-to-date observed MNS parameters and neutrino masses by the neutrino oscillation experiments together with the cosmological constraint on neutrino masses. The values of ab are model-independently evaluated in terms of effective neutrino mass defined by /line{mν }≡ √ {∑ \\vert Uej\\vert2mj^2} which is observable in the single beta decay. We obtain lower bound for ee in the inverted hierarchy (IH) case, 17 meV ≤ee and one for τμ in the normal hierarchy (NH) case, 5 meV≤τμ. We also obtain that all the averaged masses ab have upper bounds which are at most 80 meV.

  13. High average power supercontinuum generation in a fluoroindate fiber

    NASA Astrophysics Data System (ADS)

    Swiderski, J.; Théberge, F.; Michalska, M.; Mathieu, P.; Vincent, D.

    2014-01-01

    We report the first demonstration of Watt-level supercontinuum (SC) generation in a step-index fluoroindate (InF3) fiber pumped by a 1.55 μm fiber master-oscillator power amplifier (MOPA) system. The SC is generated in two steps: first, ˜1 ns amplified laser diode pulses are broken up into soliton-like sub-pulses, leading to initial spectrum extension, and then launched into a fluoride fiber to obtain further spectral broadening. The pump MOPA system can operate at a variable repetition frequency, delivering up to 19.2 W of average power at 2 MHz. When the 8-m long InF3 fiber was pumped with 7.54 W at 420 kHz, an output average SC power as high as 2.09 W, with a slope efficiency of 27.8%, was recorded. The achieved SC spectrum spread from 1 to 3.05 μm.

  14. A K-fold Averaging Cross-validation Procedure

    PubMed Central

    Jung, Yoonsuh; Hu, Jianhua

    2015-01-01

    Cross-validation-type methods have been widely used to facilitate model estimation and variable selection. In this work, we suggest a new K-fold cross validation procedure to select a candidate 'optimal' model from each hold-out fold and average the K candidate 'optimal' models to obtain the ultimate model. Due to the averaging effect, the variance of the proposed estimates can be significantly reduced. This new procedure results in more stable and efficient parameter estimation than the classical K-fold cross validation procedure. In addition, we show the asymptotic equivalence between the proposed and classical cross validation procedures in the linear regression setting. We also demonstrate the broad applicability of the proposed procedure via two examples of parameter sparsity regularization and quantile smoothing splines modeling. We illustrate the promise of the proposed method through simulations and a real data example.
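
    A sketch of the averaging K-fold idea using lasso regression as the working model (the choice of lasso, and of hold-out mean squared error for per-fold model selection, are our assumptions, not the paper's):

        import numpy as np
        from sklearn.linear_model import Lasso
        from sklearn.model_selection import KFold

        def kfold_averaged_lasso(X, y, K=5, alphas=np.logspace(-3, 1, 20)):
            """Pick a candidate 'optimal' model on each hold-out fold,
            then average the K coefficient vectors into the ultimate model."""
            coefs = []
            for tr, te in KFold(n_splits=K, shuffle=True, random_state=0).split(X):
                # Candidate model: alpha minimizing hold-out MSE on this fold.
                errs = [np.mean((Lasso(alpha=a).fit(X[tr], y[tr])
                                 .predict(X[te]) - y[te]) ** 2) for a in alphas]
                best = alphas[int(np.argmin(errs))]
                coefs.append(Lasso(alpha=best).fit(X[tr], y[tr]).coef_)
            return np.mean(coefs, axis=0)   # averaging reduces variance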

  15. Averaged model for momentum and dispersion in hierarchical porous media

    NASA Astrophysics Data System (ADS)

    Chabanon, Morgan; David, Bertrand; Goyeau, Benoît.

    2015-08-01

    Hierarchical porous media are multiscale systems, where different characteristic pore sizes and structures are encountered at each scale. Focusing the analysis on three pore scales, an upscaling procedure based on the volume-averaging method is applied twice, in order to obtain a macroscopic model for momentum and diffusion-dispersion. The effective transport properties at the macroscopic scale (permeability and dispersion tensors) are found to be explicitly dependent on the mesoscopic ones. Closure problems associated with these averaged properties are numerically solved at the different scales for two types of bidisperse porous media. Results show a strong influence of the lower-scale porous structures and flow intensity on the macroscopic effective transport properties.

  16. Detrending moving average algorithm: Frequency response and scaling performances

    NASA Astrophysics Data System (ADS)

    Carbone, Anna; Kiyono, Ken

    2016-06-01

    The Detrending Moving Average (DMA) algorithm has been widely used in its several variants for characterizing long-range correlations of random signals and sets (one-dimensional sequences or high-dimensional arrays) over either time or space. In this paper, mainly based on analytical arguments, the scaling performances of the centered DMA, including higher-order ones, are investigated by means of a continuous time approximation and a frequency response approach. Our results are also confirmed by numerical tests. The study is carried out for higher-order DMA operating with moving average polynomials of different degree. In particular, detrending power degree, frequency response, asymptotic scaling, upper limit of the detectable scaling exponent, and finite scale range behavior will be discussed.

  17. Inferring average generation via division-linked labeling.

    PubMed

    Weber, Tom S; Perié, Leïla; Duffy, Ken R

    2016-08-01

    For proliferating cells subject to both division and death, how can one estimate the average generation number of the living population without continuous observation or a division-diluting dye? In this paper we provide a method for cell systems such that at each division there is an unlikely, heritable one-way label change that has no impact other than to serve as a distinguishing marker. If the probability of label change per cell generation can be determined and the proportion of labeled cells at a given time point can be measured, we establish that the average generation number of living cells can be estimated. Crucially, the estimator does not depend on knowledge of the statistics of cell cycle, death rates or total cell numbers. We explore the estimator's features through comparison with physiologically parameterized stochastic simulations and extrapolations from published data, using it to suggest new experimental designs.
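
    One simple first-order reading of this estimator (our small-p sketch; the paper's estimator may be more refined): if each division independently flips the heritable label with probability p, an unlabeled lineage of generation n occurs with probability (1-p)^n, so the labeled fraction f satisfies f ≈ p·E[N] for small p, giving E[N] ≈ f/p.

        import numpy as np

        def avg_generation(labeled_fraction, p):
            """First-order estimate: f = 1 - E[(1-p)^N] ~ p*E[N] for small p,
            so E[N] ~ f / p.  (A small-p sketch only.)"""
            return labeled_fraction / p

        # Toy check against a population with known generation numbers.
        rng = np.random.default_rng(1)
        gens = rng.poisson(6.0, size=100_000)   # hypothetical generation counts
        p = 0.01
        labeled = rng.random(gens.shape) < 1.0 - (1.0 - p) ** gens
        print(avg_generation(labeled.mean(), p))   # ~5.8 vs true mean 6.0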

  18. A database of age-appropriate average MRI templates.

    PubMed

    Richards, John E; Sanchez, Carmen; Phillips-Meek, Michelle; Xie, Wanze

    2016-01-01

    This article summarizes a life-span neurodevelopmental MRI database. The study of neurostructural development or neurofunctional development has been hampered by the lack of age-appropriate MRI reference volumes. This causes misspecification of segmented data, irregular registrations, and the absence of appropriate stereotaxic volumes. We have created the "Neurodevelopmental MRI Database" that provides age-specific reference data from 2 weeks through 89 years of age. The data are presented in fine-grained ages (e.g., 3-month intervals through 1 year; 6-month intervals through 19.5 years; 5-year intervals from 20 through 89 years). The base component of the database at each age is an age-specific average MRI template. The average MRI templates are accompanied by segmented partial volume estimates for segmenting priors, and a common stereotaxic atlas for infant, pediatric, and adult participants. The database is available online (http://jerlab.psych.sc.edu/NeurodevelopmentalMRIDatabase/).

  19. Robust myelin water quantification: averaging vs. spatial filtering.

    PubMed

    Jones, Craig K; Whittall, Kenneth P; MacKay, Alex L

    2003-07-01

    The myelin water fraction is calculated, voxel-by-voxel, by fitting decay curves from a multi-echo data acquisition. Curve-fitting algorithms require a high signal-to-noise ratio to separate T(2) components in the T(2) distribution. This work compared the effect of averaging, during acquisition, to data postprocessed with a noise reduction filter. Forty regions, from five volunteers, were analyzed. A consistent decrease in the myelin water fraction variability with no bias in the mean was found for all 40 regions. Images of the myelin water fraction of white matter were more contiguous and had fewer "holes" than images of myelin water fractions from unfiltered echoes. Spatial filtering was effective for decreasing the variability in myelin water fraction calculated from 4-average multi-echo data.

  20. Removing Cardiac Artefacts in Magnetoencephalography with Resampled Moving Average Subtraction

    PubMed Central

    Ahlfors, Seppo P.; Hinrichs, Hermann

    2016-01-01

    Magnetoencephalography (MEG) signals are commonly contaminated by cardiac artefacts (CAs). Principal component analysis and independent component analysis have been widely used for removing CAs, but they typically require a complex procedure for the identification of CA-related components. We propose a simple and efficient method, resampled moving average subtraction (RMAS), to remove CAs from MEG data. Based on an electrocardiogram (ECG) channel, a template for each cardiac cycle was estimated by a weighted average of epochs of MEG data over consecutive cardiac cycles, combined with a resampling technique for accurate alignment of the time waveforms. The template was subtracted from the corresponding epoch of the MEG data. The resampling reduced distortions due to asynchrony between the cardiac cycle and the MEG sampling times. The RMAS method successfully suppressed CAs while preserving both event-related responses and high-frequency (>45 Hz) components in the MEG data. PMID:27503196
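
    A simplified sketch of the RMAS idea (uniform rather than weighted template averaging, R-peak detection assumed already done, and enough cycles assumed available; names are hypothetical):

        import numpy as np
        from scipy.signal import resample

        def rmas_clean(meg, r_peaks, n_template=5):
            """Subtract a per-cycle cardiac template from a 1-D MEG channel.

            r_peaks : sample indices of ECG R-peaks delimiting cardiac cycles.
            """
            out = meg.astype(float).copy()
            for i in range(len(r_peaks) - 1):
                a, b = r_peaks[i], r_peaks[i + 1]
                lo = max(0, min(i - n_template // 2,
                                len(r_peaks) - 1 - n_template))
                # Resample neighbouring cardiac epochs to this epoch's length
                # so the waveforms align before averaging.
                epochs = [resample(meg[r_peaks[j]:r_peaks[j + 1]], b - a)
                          for j in range(lo, lo + n_template)]
                out[a:b] -= np.mean(epochs, axis=0)   # subtract the template
            return out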

  1. Pulsar average waveforms and hollow cone beam models

    NASA Technical Reports Server (NTRS)

    Backer, D. C.

    1975-01-01

    An analysis of pulsar average waveforms at radio frequencies from 40 MHz to 15 GHz is presented. The analysis is based on the hypothesis that the observer sees one cut of a hollow-cone beam pattern and that stationary properties of the emission vary over the cone. The distributions of apparent cone widths for different observed forms of the average pulse profiles (single, double/unresolved, double/resolved, triple and multiple) are in modest agreement with a model of a circular hollow-cone beam with random observer-spin axis orientation, a random cone axis-spin axis alignment, and a small range of physical hollow-cone parameters for all objects.

  2. Averaging of nuclear modulation artefacts in RIDME experiments

    NASA Astrophysics Data System (ADS)

    Keller, Katharina; Doll, Andrin; Qi, Mian; Godt, Adelheid; Jeschke, Gunnar; Yulikov, Maxim

    2016-11-01

    The presence of artefacts due to Electron Spin Echo Envelope Modulation (ESEEM) complicates the analysis of dipolar evolution data in Relaxation Induced Dipolar Modulation Enhancement (RIDME) experiments. Here we demonstrate that averaging over the two delay times in the refocused RIDME experiment allows for nearly quantitative removal of the ESEEM artefacts, resulting in potentially much better performance than the so far used methods. The analytical equations are presented and analyzed for the case of electron and nuclear spins S = 1/2, I = 1/2. The presented analysis is also relevant for Double Electron Electron Resonance (DEER) and Chirp-Induced Dipolar Modulation Enhancement (CIDME) techniques. The applicability of the ESEEM averaging approach is demonstrated on a Gd(III)-Gd(III) rigid ruler compound in deuterated frozen solution at Q band (35 GHz).

  3. Thermal effects in high average power optical parametric amplifiers.

    PubMed

    Rothhardt, Jan; Demmler, Stefan; Hädrich, Steffen; Peschel, Thomas; Limpert, Jens; Tünnermann, Andreas

    2013-03-01

    Optical parametric amplifiers (OPAs) have the reputation of being average power scalable due to the instantaneous nature of the parametric process (zero quantum defect). This Letter reveals serious challenges originating from thermal load in the nonlinear crystal caused by absorption. We investigate these thermal effects in high average power OPAs based on beta barium borate. Absorption of both pump and idler waves is identified to contribute significantly to heating of the nonlinear crystal. A temperature increase of up to 148 K with respect to the environment is observed and mechanical tensile stress up to 40 MPa is found, indicating a high risk of crystal fracture under such conditions. By restricting the idler to a wavelength range far from absorption bands and removing the crystal coating we reduce the peak temperature and the resulting temperature gradient significantly. Guidelines for further power scaling of OPAs and other nonlinear devices are given.

  4. Thermal effects in high average power optical parametric amplifiers.

    PubMed

    Rothhardt, Jan; Demmler, Stefan; Hädrich, Steffen; Peschel, Thomas; Limpert, Jens; Tünnermann, Andreas

    2013-03-01

    Optical parametric amplifiers (OPAs) have the reputation of being average power scalable due to the instantaneous nature of the parametric process (zero quantum defect). This Letter reveals serious challenges originating from thermal load in the nonlinear crystal caused by absorption. We investigate these thermal effects in high average power OPAs based on beta barium borate. Absorption of both pump and idler waves is identified to contribute significantly to heating of the nonlinear crystal. A temperature increase of up to 148 K with respect to the environment is observed and mechanical tensile stress up to 40 MPa is found, indicating a high risk of crystal fracture under such conditions. By restricting the idler to a wavelength range far from absorption bands and removing the crystal coating we reduce the peak temperature and the resulting temperature gradient significantly. Guidelines for further power scaling of OPAs and other nonlinear devices are given. PMID:23455291

  5. [Average values of electrocardiograph parameters in healthy, adult Wistar rats].

    PubMed

    Zaciragić, Asija; Nakas-ićindić, Emina; Hadzović, Almira; Avdagić, Nesina

    2004-01-01

    Average values of heart rate (HR) and the average durations of electrocardiographic parameters (RR interval, P wave, PQ interval, QRS complex and QT interval) were investigated in healthy, adult Wistar rats of both sexes (n=86). The electrocardiogram (ECG) was recorded with a Shiller resting ECG device, and the SEMA-200 Vet computer program was used for analysis of the recordings. Prior to recording, the animals were placed under light ether anesthesia. The mean HR was 203.03+/-3.09 beats/min in the whole sample. The observed differences between sexes in mean heart rate and in the durations of the ECG parameters studied were not statistically significant. The results gathered in our study could serve as standard values for electrocardiographic parameters in future research using Wistar rats under the ECG recording and analysis conditions described in our paper.

  6. The B-dot Earth Average Magnetic Field

    NASA Technical Reports Server (NTRS)

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

    The average Earth's magnetic field is solved with complex mathematical models based on a mean-square integral. Depending on the selection of the Earth magnetic model, the average Earth's magnetic field can have different solutions. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and does not depend on the Earth magnetic model; it does, however, depend on the magnetic torquers of the satellite, which are not taken into consideration in the known mathematical models. Also, the solution of this new technique can be implemented so easily that the flight software can be updated during flight, and the control system can have current gains for the magnetic torquers. Finally, this technique is verified and validated using flight data from a satellite that has been in orbit for three years.
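
    For context, the b-dot law whose damping effect the technique exploits simply commands a magnetic dipole opposing the measured rate of change of the body-frame field (a generic sketch; the paper's field-averaging step itself is not reproduced, and the gain value is a placeholder):

        import numpy as np

        def bdot_dipole(B_now, B_prev, dt, k=1.0e4):
            """Classic b-dot control law: oppose the measured field rate,
            which damps the spacecraft angular rate."""
            B_dot = (B_now - B_prev) / dt      # finite-difference field rate
            return -k * B_dot                  # commanded dipole moment, A*m^2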

  7. Correct averaging in transmission radiography: Analysis of the inverse problem

    NASA Astrophysics Data System (ADS)

    Wagner, Michael; Hampel, Uwe; Bieberle, Martina

    2016-05-01

    Transmission radiometry is frequently used in industrial measurement processes as a means to assess the thickness or composition of a material. A common problem encountered in such applications is the so-called dynamic bias error, which results from averaging beam intensities over time while the material distribution changes. We recently reported on a method to overcome the associated measurement error by solving an inverse problem, which in principle restores the exact average attenuation by considering the Poisson statistics of the underlying particle or photon emission process. In this paper we present a detailed analysis of the inverse problem and its optimal regularized numerical solution. As a result we derive an optimal parameter configuration for the inverse problem.

  8. Light-cone averages in a Swiss-cheese universe

    SciTech Connect

    Marra, Valerio; Kolb, Edward W.; Matarrese, Sabino

    2008-01-15

    We analyze a toy Swiss-cheese cosmological model to study the averaging problem. In our Swiss-cheese model, the cheese is a spatially flat, matter only, Friedmann-Robertson-Walker solution (i.e., the Einstein-de Sitter model), and the holes are constructed from a Lemaitre-Tolman-Bondi solution of Einstein's equations. We study the propagation of photons in the Swiss-cheese model, and find a phenomenological homogeneous model to describe observables. Following a fitting procedure based on light-cone averages, we find that the expansion scalar is unaffected by the inhomogeneities (i.e., the phenomenological homogeneous model is the cheese model). This is because of the spherical symmetry of the model; it is unclear whether the expansion scalar will be affected by nonspherical voids. However, the light-cone average of the density as a function of redshift is affected by inhomogeneities. The effect arises because, as the universe evolves, a photon spends more and more time in the (large) voids than in the (thin) high-density structures. The phenomenological homogeneous model describing the light-cone average of the density is similar to the {lambda}CDM concordance model. It is interesting that, although the sole source in the Swiss-cheese model is matter, the phenomenological homogeneous model behaves as if it has a dark-energy component. Finally, we study how the equation of state of the phenomenological homogeneous model depends on the size of the inhomogeneities, and find that the equation-of-state parameters w{sub 0} and w{sub a} follow a power-law dependence with a scaling exponent equal to unity. That is, the equation of state depends linearly on the distance the photon travels through voids. We conclude that, within our toy model, the holes must have a present size of about 250 Mpc to be able to mimic the concordance model.

  9. Non-self-averaging in Ising spin glasses and hyperuniversality.

    PubMed

    Lundow, P H; Campbell, I A

    2016-01-01

    Ising spin glasses with bimodal and Gaussian near-neighbor interaction distributions are studied through numerical simulations. The non-self-averaging (normalized intersample variance) parameter U_{22}(T,L) for the spin glass susceptibility [and for higher moments U_{nn}(T,L)] is reported for dimensions 2,3,4,5, and 7. In each dimension d the non-self-averaging parameters in the paramagnetic regime vary with the sample size L and the correlation length ξ(T,L) as U_{nn}(β,L)=[K_{d}ξ(T,L)/L]^{d} and so follow a renormalization group law due to Aharony and Harris [Phys. Rev. Lett. 77, 3700 (1996)]. Empirically, it is found that the K_{d} values are independent of d to within the statistics. The maximum values [U_{nn}(T,L)]_{max} are almost independent of L in each dimension, and remarkably the estimated thermodynamic limit critical [U_{nn}(T,L)]_{max} peak values are also practically dimension-independent to within the statistics and so are "hyperuniversal." These results show that the form of the spin-spin correlation function distribution at criticality in the large L limit is independent of dimension within the ISG family. Inspection of published non-self-averaging data for three-dimensional Heisenberg and XY spin glasses in the light of the Ising spin glass non-self-averaging results shows behavior which appears to be compatible with that expected on a chiral-driven ordering interpretation but incompatible with a spin-driven ordering scenario. PMID:26871035

  10. Light-cone averages in a Swiss-cheese universe

    NASA Astrophysics Data System (ADS)

    Marra, Valerio; Kolb, Edward W.; Matarrese, Sabino

    2008-01-01

    We analyze a toy Swiss-cheese cosmological model to study the averaging problem. In our Swiss-cheese model, the cheese is a spatially flat, matter only, Friedmann-Robertson-Walker solution (i.e., the Einstein-de Sitter model), and the holes are constructed from a Lemaître-Tolman-Bondi solution of Einstein’s equations. We study the propagation of photons in the Swiss-cheese model, and find a phenomenological homogeneous model to describe observables. Following a fitting procedure based on light-cone averages, we find that the expansion scalar is unaffected by the inhomogeneities (i.e., the phenomenological homogeneous model is the cheese model). This is because of the spherical symmetry of the model; it is unclear whether the expansion scalar will be affected by nonspherical voids. However, the light-cone average of the density as a function of redshift is affected by inhomogeneities. The effect arises because, as the universe evolves, a photon spends more and more time in the (large) voids than in the (thin) high-density structures. The phenomenological homogeneous model describing the light-cone average of the density is similar to the ΛCDM concordance model. It is interesting that, although the sole source in the Swiss-cheese model is matter, the phenomenological homogeneous model behaves as if it has a dark-energy component. Finally, we study how the equation of state of the phenomenological homogeneous model depends on the size of the inhomogeneities, and find that the equation-of-state parameters w0 and wa follow a power-law dependence with a scaling exponent equal to unity. That is, the equation of state depends linearly on the distance the photon travels through voids. We conclude that, within our toy model, the holes must have a present size of about 250 Mpc to be able to mimic the concordance model.

  11. The role of the harmonic vector average in motion integration.

    PubMed

    Johnston, Alan; Scarfe, Peter

    2013-01-01

    The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the IOC direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the HVA.
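
    A sketch of the HVA, assuming the combination rule is vector inversion (v → v/|v|²), an ordinary mean, and inversion back (our reading of the construction, not necessarily the paper's exact formulation); for a sample of normal components of a single global motion that is unbiased about the motion direction, this recovers the true global velocity:

        import numpy as np

        def harmonic_vector_average(v):
            """HVA of 2-D local (normal) velocities: invert each vector,
            average, invert the mean back."""
            v = np.asarray(v, dtype=float)
            inv = v / np.sum(v ** 2, axis=1, keepdims=True)  # v -> v/|v|^2
            m = inv.mean(axis=0)
            return m / np.sum(m ** 2)                        # invert back

        # Normal components of a global velocity (2, 0) seen through
        # apertures at orientations symmetric about the motion direction.
        V = np.array([2.0, 0.0])
        ang = np.array([0.0, 0.6, -0.6])
        n = np.stack([np.cos(ang), np.sin(ang)], axis=1)
        local = (n @ V)[:, None] * n
        print(harmonic_vector_average(local))   # -> [2. 0.]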

  12. Average diagonal entropy in nonequilibrium isolated quantum systems.

    PubMed

    Giraud, Olivier; García-Mata, Ignacio

    2016-07-01

    The diagonal entropy was introduced as a good entropy candidate especially for isolated quantum systems out of equilibrium. Here we present an analytical calculation of the average diagonal entropy for systems undergoing unitary evolution and an external perturbation in the form of a cyclic quench. We compare our analytical findings with numerical simulations of various quantum systems. Our calculations elucidate various heuristic relations proposed recently in the literature. PMID:27575092

  13. Non-self-averaging in Ising spin glasses and hyperuniversality

    NASA Astrophysics Data System (ADS)

    Lundow, P. H.; Campbell, I. A.

    2016-01-01

    Ising spin glasses with bimodal and Gaussian near-neighbor interaction distributions are studied through numerical simulations. The non-self-averaging (normalized intersample variance) parameter U_{22}(T,L) for the spin glass susceptibility [and for higher moments U_{nn}(T,L)] is reported for dimensions 2, 3, 4, 5, and 7. In each dimension d the non-self-averaging parameters in the paramagnetic regime vary with the sample size L and the correlation length ξ(T,L) as U_{nn}(β,L) = [K_d ξ(T,L)/L]^d and so follow a renormalization group law due to Aharony and Harris [Phys. Rev. Lett. 77, 3700 (1996)]. Empirically, it is found that the K_d values are independent of d to within the statistics. The maximum values [U_{nn}(T,L)]_{max} are almost independent of L in each dimension, and remarkably the estimated thermodynamic limit critical [U_{nn}(T,L)]_{max} peak values are also practically dimension-independent to within the statistics and so are "hyperuniversal." These results show that the form of the spin-spin correlation function distribution at criticality in the large L limit is independent of dimension within the ISG family. Inspection of published non-self-averaging data for three-dimensional Heisenberg and XY spin glasses in the light of the Ising spin glass non-self-averaging results shows behavior which appears to be compatible with that expected on a chiral-driven ordering interpretation but incompatible with a spin-driven ordering scenario.

  14. Characterizing individual painDETECT symptoms by average pain severity

    PubMed Central

    Sadosky, Alesia; Koduru, Vijaya; Bienen, E Jay; Cappelleri, Joseph C

    2016-01-01

    Background painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results A probability >50% for a better outcome (less severe pain) was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness for mild vs moderate pain) and highest probability was 76.4% (on cold/heat for mild vs severe pain). The pain radiation item was significant (P<0.05) and consistent with pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ. Conclusion painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner. Pain-severity levels can serve as proxies to determine treatment effects, thus indicating probabilities for more favorable outcomes on pain symptoms. PMID:27555789

  15. Average diagonal entropy in nonequilibrium isolated quantum systems

    NASA Astrophysics Data System (ADS)

    Giraud, Olivier; García-Mata, Ignacio

    2016-07-01

    The diagonal entropy was introduced as a good entropy candidate especially for isolated quantum systems out of equilibrium. Here we present an analytical calculation of the average diagonal entropy for systems undergoing unitary evolution and an external perturbation in the form of a cyclic quench. We compare our analytical findings with numerical simulations of various quantum systems. Our calculations elucidate various heuristic relations proposed recently in the literature.

  16. Separability criteria with angular and Hilbert space averages

    NASA Astrophysics Data System (ADS)

    Fujikawa, Kazuo; Oh, C. H.; Umetsu, Koichiro; Yu, Sixia

    2016-05-01

    The practically useful criteria of separable states ρ = ∑_k w_k ρ_k in d = 2 × 2 are discussed. The equality G(a, b) = 4[⟨ψ| P(a) ⊗ P(b) |ψ⟩ − ⟨ψ| P(a) ⊗ 1 |ψ⟩⟨ψ| 1 ⊗ P(b) |ψ⟩] = 0 for any two projection operators P(a) and P(b) provides a necessary and sufficient separability criterion in the case of a separable pure state ρ = |ψ⟩⟨ψ|. We propose the separability criteria of mixed states, which are given by Tr ρ{a·σ ⊗ b·σ} = (1/3) C cos φ for two spin-1/2 systems and 4 Tr ρ{P(a) ⊗ P(b)} = 1 + (1/2) C cos 2φ for two-photon systems, respectively, after taking a geometrical angular average of a and b with fixed cos φ = a·b. Here −1 ≤ C ≤ 1, and the difference in the numerical coefficients 1/2 and 1/3 arises from the different rotational properties of the spinor and the transverse photon. If one instead takes an average over the states in the d = 2 Hilbert space, the criterion for two-photon systems is replaced by 4 Tr ρ{P(a) ⊗ P(b)} = 1 + (1/3) C cos 2φ. Those separability criteria are shown to be very efficient using the existing experimental data of Aspect et al. in 1981 and Sakai et al. in 2006. When the Werner state is applied to two-photon systems, it is shown that the Hilbert space average can judge its inseparability but not the geometrical angular average.

  17. Targeted Cancer Screening in Average-Risk Individuals.

    PubMed

    Marcus, Pamela M; Freedman, Andrew N; Khoury, Muin J

    2015-11-01

    Targeted cancer screening refers to use of disease risk information to identify those most likely to benefit from screening. Researchers have begun to explore the possibility of refining screening regimens for average-risk individuals using genetic and non-genetic risk factors and previous screening experience. Average-risk individuals are those not known to be at substantially elevated risk, including those without known inherited predisposition, without comorbidities known to increase cancer risk, and without previous diagnosis of cancer or pre-cancer. In this paper, we describe the goals of targeted cancer screening in average-risk individuals, present factors on which cancer screening has been targeted, discuss inclusion of targeting in screening guidelines issued by major U.S. professional organizations, and present evidence to support or question such inclusion. Screening guidelines for average-risk individuals currently target age; smoking (lung cancer only); and, in some instances, race; family history of cancer; and previous negative screening history (cervical cancer only). No guidelines include common genomic polymorphisms. RCTs suggest that targeting certain ages and smoking histories reduces disease-specific cancer mortality, although some guidelines extend ages and smoking histories based on statistical modeling. Guidelines that are based on modestly elevated disease risk typically have little or no evidence of an ability to effect a mortality benefit. In time, targeted cancer screening is likely to include genetic factors and past screening experience as well as non-genetic factors other than age, smoking, and race, but it is of utmost importance that clinical implementation be evidence-based.

  18. High average power solid state laser power conditioning system

    SciTech Connect

    Steinkraus, R.F.

    1987-03-03

    The power conditioning system for the High Average Power Laser program at Lawrence Livermore National Laboratory (LLNL) is described. The system has been operational for two years. It is high-voltage, high-power, fault-protected, and solid state. The power conditioning system drives flashlamps that pump solid state lasers. The flashlamps are driven by silicon-controlled rectifier (SCR) switched, resonantly charged (LC) discharge pulse-forming networks (PFNs). The system uses fiber optics for control and diagnostics. Energy and thermal diagnostics are monitored by computers.

  19. Line broadening estimate from averaged energy differences of coupled states

    NASA Astrophysics Data System (ADS)

    Lavrentieva, Nina N.; Dudaryonok, Anna S.; Ma, Qiancheng

    2014-11-01

    A method for calculating the rotation-vibrational line half-widths of asymmetric-top molecules is proposed. The influence of the buffer gas on the internal state of the absorbing molecule is emphasized in this method. The basic expressions of the present approach are given. The averaged energy differences method was used for the calculation of H2O and HDO line broadening. Comparisons of the calculated line-shape parameters with the experimental values in different absorption bands are made.

  20. Snapshots of Anderson localization beyond the ensemble average

    NASA Astrophysics Data System (ADS)

    El-Dardiry, Ramy G. S.; Faez, Sanli; Lagendijk, Ad

    2012-09-01

    We study (1+1)D transverse localization of electromagnetic radiation at microwave frequencies directly by two-dimensional spatial scans. Since the longitudinal direction can be mapped onto time, our experiments provide unique snapshots of the buildup of localized waves. The evolution of the wave functions is compared with semianalytical calculations. Studies beyond ensemble averages reveal counterintuitive surprises. Oscillations of the wave functions are observed in space and explained in terms of a beating between the eigenstates.

  1. Averaging cross section data so we can fit it

    SciTech Connect

    Brown, D.

    2014-10-23

    The 56Fe cross sections we are interested in have a lot of fluctuations. We would like to fit the average of the cross section with cross sections calculated within EMPIRE. EMPIRE, a Hauser-Feshbach theory based nuclear reaction code, requires cross sections to be smoothed using a Lorentzian profile. The plan is to fit EMPIRE to these cross sections in the fast region (say above 500 keV).
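
    As a rough illustration of the smoothing step described above, the Python sketch below averages a fluctuating cross section with a normalized Lorentzian kernel. The energy grid, toy cross section, and the 50 keV half-width are all invented for illustration; this is not EMPIRE's internal procedure.

```python
import numpy as np

def lorentzian_smooth(e, sigma, gamma):
    """Average a pointwise cross section sigma(e) with a normalized
    Lorentzian kernel of half-width gamma (same units as the uniform
    energy grid e)."""
    smoothed = np.empty_like(sigma)
    for i, e0 in enumerate(e):
        w = (gamma / np.pi) / ((e - e0) ** 2 + gamma ** 2)  # Lorentzian weights
        smoothed[i] = np.sum(w * sigma) / np.sum(w)         # normalized average
    return smoothed

# Toy fluctuating cross section in the fast region (energies in keV)
e = np.linspace(500.0, 2000.0, 3000)
sigma = 1.0 + 0.3 * np.sin(e / 5.0) ** 2             # rapid resonance-like wiggles
sigma_avg = lorentzian_smooth(e, sigma, gamma=50.0)  # hypothetical 50 keV half-width
```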

  2. Effects of velocity averaging on the shapes of absorption lines

    NASA Technical Reports Server (NTRS)

    Pickett, H. M.

    1980-01-01

    The velocity averaging of collision cross sections produces non-Lorentz line shapes, even at densities where Doppler broadening is not apparent. The magnitude of the effects will be described using a model in which the collision broadening depends on a simple velocity power law. The effect of the modified profile on experimental measures of linewidth, shift and amplitude will be examined and an improved approximate line shape will be derived.

  3. Average dynamics of a finite set of coupled phase oscillators

    SciTech Connect

    Dima, Germán C. Mindlin, Gabriel B.

    2014-06-15

    We study the solutions of a dynamical system describing the average activity of an infinitely large set of driven coupled excitable units. We compare their topological organization with that reconstructed from the numerical integration of finite sets. In this way, we present a strategy to establish the pertinence of approximating the dynamics of finite sets of coupled nonlinear units by the dynamics of their infinitely large surrogate.

  4. Self-averaging in complex brain neuron signals

    NASA Astrophysics Data System (ADS)

    Bershadskii, A.; Dremencov, E.; Fukayama, D.; Yadid, G.

    2002-12-01

    Nonlinear statistical properties of the ventral tegmental area (VTA) of the limbic brain are studied in vivo. The VTA plays a key role in the generation of pleasure and in the development of psychological drug addiction. It is shown that spiking time series of the VTA dopaminergic neurons exhibit long-range correlations with self-averaging behavior. This specific VTA phenomenon has no relation to the VTA rewarding function. This last result reveals the complex role of the VTA in the limbic brain.

  5. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    PubMed

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2012-01-01

    Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, the number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by its ensemble-average value. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. We apply the VPG to proteins, for the first time, across a nonredundant dataset of 272 protein structures. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative for understanding the mechanical role that chemical interactions play in maintaining protein stability.

  6. Aerodynamic surface stress intermittency and conditionally averaged turbulence statistics

    NASA Astrophysics Data System (ADS)

    Anderson, William; Lanigan, David

    2015-11-01

    Aeolian erosion is induced by the aerodynamic stress imposed by atmospheric winds. Erosion models prescribe that sediment flux, Q, scales with aerodynamic stress raised to an exponent, n, where n > 1. Since stress (in fully rough, inertia-dominated flows) scales with the incoming velocity squared, u^2, it follows that Q ~ u^(2n) (where u is some relevant component of the flow). Thus, even small (turbulent) deviations of u from its time-mean may be important for aeolian activity. This rationale is augmented given that surface layer turbulence exhibits maximum Reynolds stresses in the fluid immediately above the landscape. To illustrate the importance of stress intermittency, we have used conditional averaging predicated on stress during large-eddy simulation of atmospheric boundary layer flow over an arid, bare landscape. Conditional averaging provides an ensemble-mean visualization of the flow structures responsible for erosion `events'. Preliminary evidence indicates that surface stress peaks are associated with the passage of inclined, high-momentum regions flanked by adjacent low-momentum regions. We characterize geometric attributes of such structures and explore the streamwise and vertical vorticity distribution within the conditionally averaged flow field. This work was supported by the National Sci. Foundation, Phys. and Dynamic Meteorology Program (PM: Drs. N. Anderson, C. Lu, and E. Bensman) under Grant # 1500224. Computational resources were provided by the Texas Adv. Comp. Center at the Univ. of Texas.
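
    A minimal sketch of conditional averaging predicated on a stress threshold, using synthetic data in place of LES output; the gamma-distributed stress record, the co-located velocity signal, and the 95th-percentile event definition are all illustrative assumptions.

```python
import numpy as np

# Synthetic stand-ins for a surface-stress time series and a co-located
# streamwise velocity signal (in the actual study these come from LES).
rng = np.random.default_rng(0)
tau = rng.gamma(2.0, 1.0, size=100_000)          # surface stress samples
u = 8.0 + 0.5 * (tau - tau.mean()) + rng.normal(0.0, 1.0, tau.size)

# Conditional average predicated on stress: an 'event' is a stress peak
# exceeding the 95th percentile of the record.
threshold = np.percentile(tau, 95)
events = tau > threshold
print(f"<u | tau > tau_95> = {u[events].mean():.2f}")  # conditional mean
print(f"<u>                = {u.mean():.2f}")          # unconditional mean
```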

  7. Temporal averaging of atmospheric turbulence-induced optical scintillation.

    PubMed

    Yura, H T; Beck, S M

    2015-08-24

    Based on the Rytov approximation, we have developed, for weak scintillation conditions, a general expression for the temporally averaged variance of irradiance. The present analysis provides, for what we believe is the first time, a firm theoretical basis for the often-observed reduction of irradiance fluctuations of an optical beam due to atmospheric turbulence. Accurate elementary analytic approximations are presented here for plane, spherical and beam waves for predicting the averaging times required to obtain an arbitrary value of the ratio of the standard deviation to the mean of an optical beam propagating through an arbitrary path in the atmosphere. In particular, a novel application of differential absorption measurement for the purpose of measuring column-integrated concentrations of various so-called greenhouse gas (GHG) atmospheric components is considered, where the results of our analysis indicate that relatively short averaging times, on the order of a few seconds, are required to reduce the irradiance fluctuations to a value precise enough for GHG measurements of value to climate related studies.
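
    The reduction of the irradiance standard-deviation-to-mean ratio with averaging time can be demonstrated numerically. The sketch below block-averages a synthetic, temporally correlated lognormal irradiance record (an AR(1) surrogate, not the paper's Rytov-based expression) and prints the ratio for increasing averaging windows.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic irradiance record: lognormal fluctuations with a short
# correlation time, generated from an AR(1) surrogate for log-amplitude.
n, rho = 100_000, 0.999
chi = np.zeros(n)
eps = rng.normal(0.0, np.sqrt(1.0 - rho**2), n)
for k in range(1, n):
    chi[k] = rho * chi[k - 1] + eps[k]      # correlated log-amplitude
I = np.exp(0.2 * chi)                       # irradiance samples

def std_over_mean(x, t_avg):
    """Std-to-mean ratio after block-averaging over t_avg samples."""
    m = (len(x) // t_avg) * t_avg
    blocks = x[:m].reshape(-1, t_avg).mean(axis=1)
    return blocks.std() / blocks.mean()

for t_avg in (1, 10, 100, 1000, 10000):
    print(t_avg, round(std_over_mean(I, t_avg), 4))
```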

  8. Local and average behaviour in inhomogeneous superdiffusive media

    NASA Astrophysics Data System (ADS)

    Vezzani, Alessandro; Burioni, Raffaella; Caniparoli, Luca; Lepri, Stefano

    2011-05-01

    We consider a random walk on one-dimensional inhomogeneous graphs built from Cantor fractals. Our study is motivated by recent experiments that demonstrated superdiffusion of light in complex disordered materials, thereby termed Lévy glasses. We introduce a geometric parameter α which plays a role analogous to the exponent characterising the step length distribution in random systems. We study the large-time behaviour of both local and average observables; for the latter case, we distinguish two different types of averages, respectively over the set of all initial sites and over the scattering sites only. The 'single long-jump approximation' is applied to analytically determine the different asymptotic behaviours as a function of α and to understand their origin. We also discuss the possibility that the root of the mean square displacement and the characteristic length of the walker distribution may grow according to different power laws; this anomalous behaviour is typical of processes characterised by Lévy statistics and here, in particular, it is shown to influence average quantities.

  9. H∞ control of switched delayed systems with average dwell time

    NASA Astrophysics Data System (ADS)

    Li, Zhicheng; Gao, Huijun; Agarwal, Ramesh; Kaynak, Okyay

    2013-12-01

    This paper considers the problems of stability analysis and H∞ controller design of time-delay switched systems with average dwell time. In order to obtain less conservative results than what is seen in the literature, a tighter bound for the state delay term is estimated. Based on the scaled small gain theorem and the model transformation method, an improved exponential stability criterion for time-delay switched systems with average dwell time is formulated in the form of convex matrix inequalities. The aim of the proposed approach is to reduce the minimal average dwell time of the systems, which is made possible by a new Lyapunov-Krasovskii functional combined with the scaled small gain theorem. It is shown that this approach is able to tolerate a smaller dwell time or a larger admissible delay bound for the given conditions than most of the approaches seen in the literature. Moreover, the exponential H∞ controller can be constructed by solving a set of conditions, which is developed on the basis of the exponential stability criterion. Simulation examples illustrate the effectiveness of the proposed method.
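
    The classical average dwell-time bound that underlies results of this kind can be computed from per-mode Lyapunov functions: bound the decay rate of each V_i = x'P_i x and the jump factor mu between the V_i at switching instants. The sketch below uses invented, delay-free mode matrices and is only an illustration of the dwell-time logic, not the paper's delay-dependent criterion.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, eigvalsh

# Two stable modes of a switched linear system (matrices are invented)
modes = [np.array([[-1.0, 0.5], [0.0, -2.0]]),
         np.array([[-0.5, 0.0], [1.0, -1.5]])]

Q = np.eye(2)
lmins, lmaxs, decays = [], [], []
for A in modes:
    P = solve_continuous_lyapunov(A.T, -Q)   # A'P + PA = -Q, P > 0
    lam = eigvalsh(P)
    lmins.append(lam.min())
    lmaxs.append(lam.max())
    # V = x'Px satisfies dV/dt <= -2*lambda*V with lambda = lmin(Q)/(2*lmax(P))
    decays.append(1.0 / (2.0 * lam.max()))

lam0 = min(decays)                           # common exponential decay rate
mu = max(lmaxs) / min(lmins)                 # Lyapunov-function jump factor
tau_star = np.log(mu) / (2.0 * lam0)         # classical dwell-time bound
print(f"mu = {mu:.3f}, minimal average dwell time ~ {tau_star:.3f}")
```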

  10. Kilowatt average-power laser for subpicosecond materials processing

    NASA Astrophysics Data System (ADS)

    Benson, Stephen V.; Neil, George R.; Bohn, Courtlandt L.; Biallas, George; Douglas, David; Dylla, H. Frederick; Fugitt, Jock; Jordan, Kevin; Krafft, Geoffrey; Merminga, Lia; Preble, Joe; Shinn, Michelle D.; Siggins, Tim; Walker, Richard; Yunn, Byung

    2000-04-01

    The performance of laser pulses in the sub-picosecond range for materials processing is substantially enhanced over similar fluences delivered in longer pulses. Recent advances in the development of solid state lasers have progressed significantly toward the higher average powers potentially useful for many applications. Nonetheless, prospects remain distant for multi-kilowatt sub-picosecond solid state systems such as would be required for industrial scale surface processing of metals and polymers. We present operation results from the world's first kilowatt scale ultra-fast materials processing laser. A Free Electron Laser (FEL) called the IR Demo is operational as a User Facility at Thomas Jefferson National Accelerator Facility in Newport News, Virginia, USA. In its initial operation at high average power it is capable of wavelengths in the 2 to 6 micron range and can produce approximately 0.7 ps pulses in a continuous train at approximately 75 MHz. This pulse length has been shown to be nearly optimal for deposition of energy in materials at the surface. Upgrades in the near future will extend operation beyond 10 kW CW average power in the near IR and kilowatt levels of power at wavelengths from 0.3 to 60 microns. This paper will cover the design and performance of this groundbreaking laser and operational aspects of the User Facility.

  11. Cortical evoked potentials recorded from the guinea pig without averaging.

    PubMed

    Walloch, R A

    1975-01-01

    Potentials evoked by tonal pulses and recorded with a monopolar electrode on the pial surface over the auditory cortex of the guinea pig are presented. These potentials are compared with average potentials recorded in previous studies with an electrode on the dura. The potentials recorded by these two techniques have similar waveforms, peak latencies and thresholds. They appear to be generated within the same region of the cerebral cortex. As can be expected, the amplitude of the evoked potentials recorded from the pial surface is larger than that recorded from the dura. Consequently, averaging is not needed to extract the evoked potential once the dura is removed. The thresholds for the evoked cortical potential are similar to behavioral thresholds for the guinea pig at high frequencies; however, evoked potential thresholds are elevated over behavioral thresholds at low frequencies. The removal of the dura and the direct recording of the evoked potential appear most appropriate for acute experiments. The recording of an evoked potential with dura electrodes employing averaging procedures appears most appropriate for chronic studies.

  12. High Average Power, High Energy Short Pulse Fiber Laser System

    SciTech Connect

    Messerly, M J

    2007-11-13

    Recently, continuous wave fiber laser systems with output powers in excess of 500 W with good beam quality have been demonstrated [1]. High energy, ultrafast, chirped pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of high-efficiency, compact, robust systems that are turnkey. Applications such as cutting, drilling and materials processing, front-end systems for high energy pulsed lasers (such as petawatts), and laser-based sources of high spatial coherence, high-flux x-rays all require high energy short pulses, and two of these three applications also require high average power. The challenge in creating a high energy chirped pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high energy, high average power fiber laser system. This work included exploring designs of large mode area optical fiber amplifiers for high energy systems as well as understanding the issues associated with chirped pulse amplification in optical fiber amplifier systems.

  13. Development of over 300-watts average power excimer laser

    NASA Astrophysics Data System (ADS)

    Hirata, Kazuhiro; Kawamura, Joichi; Katou, Hiroyuki; Sajiki, Kazuaki; Okada, Makoto

    2004-05-01

    A high-power excimer laser has been developed. We have supplied a 240 W (800 mJ, 300 Hz) average-power excimer laser for industrial use, mainly for TFT LCD annealing, and are adding a 300 W (1 J, 300 Hz) average-power laser to our line-up. The new 300 W laser is based on the 240 W design with several improvements: the electrodes are longer, the electrical power circuit is reinforced, and the laser gas recipe has been changed to suit the new system. In our tests, we achieved operation at over 300 W average power. Servo operation at 310 W ran for over 40 million pulses with σ output stability of less than 1.0%, and servo operation at 330 W ran for over 30 million pulses with σ output stability of almost less than 1.0%. Experimental and theoretical studies of the various parameters influencing laser performance will be continued with further investigations and future improvements. We are confident that this laser can produce higher power with a long gas life.

  14. Noise reduction of video imagery through simple averaging

    NASA Astrophysics Data System (ADS)

    Vorder Bruegge, Richard W.

    1999-02-01

    Examiners in the Special Photographic Unit of the Federal Bureau of Investigation Laboratory Division conduct examinations of questioned photographic evidence of all types, including surveillance imagery recorded on film and video tape. A primary type of examination includes side-by-side comparisons, in which unknown objects or people depicted in the questioned images are compared with known objects recovered from suspects or with photographs of suspects themselves. Most imagery received in the SPU for such comparisons originates from time-lapse video or film systems. In such circumstances, the delay between sequential images is so great that standard image summing and/or averaging techniques are useless as a means of improving image detail in questioned subjects or objects without also resorting to processing-intensive pattern reconstruction algorithms. Occasionally, however, the receipt of real-time video imagery will include a questioned object at rest. In such cases, it is possible to use relatively simple image averaging techniques as a means of reducing transient noise in the images, without further compromising the already-poor resolution inherent in most video surveillance images. This paper presents an example of one such case in which multiple images were averaged to reduce the transient noise to a sufficient degree to permit the positive identification of a vehicle based upon the presence of scrape marks and dents on the side of the vehicle.
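
    A minimal sketch of the technique: given co-registered frames of a stationary object from real-time video, a pixelwise mean suppresses zero-mean transient noise by roughly the square root of the number of frames. The image sizes and noise level below are invented.

```python
import numpy as np

def average_frames(frames):
    """Pixelwise mean of co-registered frames; zero-mean transient noise
    is suppressed by roughly sqrt(n_frames), while static scene content
    (e.g., scrape marks and dents) is preserved."""
    return np.asarray(frames, dtype=np.float64).mean(axis=0)

# Toy demonstration: a static 'scene' plus independent per-frame noise
rng = np.random.default_rng(42)
scene = rng.uniform(0.0, 255.0, size=(120, 160))           # invented image
noisy = scene + rng.normal(0.0, 25.0, size=(30, 120, 160)) # 30 noisy frames
restored = average_frames(noisy)
print(np.abs(noisy[0] - scene).mean())    # error of a single frame
print(np.abs(restored - scene).mean())    # error after averaging
```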

  15. CD SEM metrology macro CD technology: beyond the average

    NASA Astrophysics Data System (ADS)

    Bunday, Benjamin D.; Michelson, Di K.; Allgair, John A.; Tam, Aviram; Chase-Colin, David; Dajczman, Asaf; Adan, Ofer; Har-Zvi, Michael

    2005-05-01

    Downscaling of semiconductor fabrication technology requires an ever-tighter control of the production process. The CD-SEM, being the major image-based critical dimension metrology tool, is constantly being improved in order to fulfill these requirements. One of the methods used for increasing precision is averaging over several or many (ideally identical) features, usually referred to as "Macro CD". In this paper, we show that there is much more to Macro CD technology (metrics characterizing an arbitrary array of similar features within a single SEM image) than just the average. A large amount of data is accumulated from a single scan of a SEM image, providing informative and statistically valid local process characterization. As opposed to other technologies, Macro CD not only provides extremely precise average metrics, but also allows for the reporting of full information on each of the measured features and of various statistics (such as the variability) on all currently reported CD SEM metrics. We present the mathematical background behind Macro CD technology and the opportunity for reducing the number of sites for SPC, along with providing enhanced-sensitivity CD metrics.

  16. Average wavefunction method for multiple scattering theory and applications

    SciTech Connect

    Singh, H.

    1985-01-01

    A general approximation scheme, the average wavefunction method (AWM), applicable to scattering of atoms and molecules off multi-center targets, is proposed. The total potential is replaced by a sum of nonlocal, separable interactions. Each term in the sum projects the wave function onto a weighted average in the vicinity of a given scattering center. The resultant solution is an infinite order approximation to the true solution, and choosing the weighting function as the zeroth order solution guarantees agreement with the Born approximation to second order. In addition, the approximation also becomes increasingly more accurate in the low-energy, long-wavelength limit. A nonlinear, nonperturbative iterative scheme for the wave function is proposed. An extension of the scheme to multichannel scattering suitable for treating inelastic scattering is also presented. The method is applied to elastic scattering of a gas off a solid surface. The formalism is developed for both periodic as well as disordered surfaces. Numerical results are presented for atomic clusters on a flat hard wall with a Gaussian-like potential at each atomic scattering site. The effect of relative lateral displacement of two clusters upon the scattering pattern is shown. The ability of the AWM to accommodate disorder through statistical averaging over cluster configuration is illustrated. Enhanced uniform back scattering is observed with increasing roughness on the surface. Finally, the AWM is applied to atom-molecule scattering.

  17. Average energy gap of AIBIIIC2VI optoelectronic materials

    NASA Astrophysics Data System (ADS)

    Kumar, Virendra; Chandra, Dinesh

    1991-03-01

    In this paper we propose a model based on the plasma oscillations theory of solids for the calculation of the average energy gap of optoelectronic materials having the AIBIIIC2VI chalcopyrite structure. In the present calculation, special care has been taken of d electrons in the case of noble- and transition-metal compounds. Our calculated values are in excellent agreement with the reported values. The dielectric theory of Phillips, Van Vechten and Levine has been widely used in a variety of physicochemical problems relating to crystal structures, nonlinear optical susceptibilities, dielectric constants, cohesive energies, heats of formation, average energy gaps, etc. Using the concepts of these theories, the author has recently developed a model based on the plasma oscillations theory of solids for the calculation of the covalent (Eh) and ionic (C) energy gaps of several semiconductors having different crystal structures. In the present paper we extend the calculation of the average energy gap to the case of AIBIIIC2VI semiconductors. Expressions for Eh and C are given in terms of the plasmon energy (Eqs. (1) and (2)); if d electrons are present in the crystal, a modified relation (Eq. (3)) holds for the ionic energy gap, while the covalent energy gap remains the same.

  18. Target frequency influences antisaccade endpoint bias: evidence for perceptual averaging.

    PubMed

    Gillen, Caitlin; Heath, Matthew

    2014-12-01

    Perceptual judgments related to stimulus-sets are represented computationally differently than individual items. In particular, the perceptual averaging hypothesis contends that the visual system represents target properties (e.g., eccentricity) via a statistical summary of the individual targets included within a stimulus-set. Here we sought to determine whether perceptual averaging governs the visual information mediating an oculomotor task requiring top-down control (i.e., antisaccade). To that end, participants completed antisaccades (i.e., saccade mirror-symmetrical to a target) and complementary prosaccades (i.e., saccade to veridical target location) to different target eccentricities (10.5°, 15.5° and 20.5°) located left and right of a common fixation. Importantly, trials were completed in blocks wherein eccentricities were presented with equal frequency (i.e., control condition) and wherein the 'proximal' (10.5°: i.e., proximal-weighting condition) and 'distal' (20.5°: i.e., distal-weighting condition) targets were respectively presented five times as often as the other eccentricities. If antisaccades are governed by a statistical summary then amplitudes should be biased in the direction of the most frequently presented target within a block. As expected, pro- and antisaccades across each target eccentricity were associated with an undershooting bias, and prosaccades were refractory to the manipulation of target frequency. Most notably, antisaccades in the proximal-weighting condition had a larger undershooting bias than in the control condition, whereas the converse was true for the distal-weighting condition; that is, antisaccades were biased in the direction of the most frequently presented target. Thus, we propose that perceptual averaging extends to motor tasks requiring top-down cognitive control.

  19. BeppoSAX Average Spectra of Seyfert Galaxies

    NASA Astrophysics Data System (ADS)

    Malizia, A.; Bassani, L.; Stephen, J. B.; Di Cocco, G.; Fiore, F.; Dean, A. J.

    2003-05-01

    We have studied the average 3-200 keV spectra of Seyfert galaxies of type 1 and 2, using data obtained with BeppoSAX. The average Seyfert 1 spectrum is well fitted by a power-law continuum with photon spectral index Γ~1.9, a Compton reflection component R~0.6-1 (depending on the inclination angle between the line of sight and the reflecting material), and a high-energy cutoff at around 200 keV; there is also an iron line at 6.4 keV characterized by an equivalent width of 120 eV. Seyfert 2 galaxies, on the other hand, show stronger neutral absorption [NH=(3-4)×1022 atoms cm-2], as expected, but are also characterized by an X-ray power law that is substantially harder (Γ~1.75) and with a cutoff at lower energies (Ec~130 keV); the iron line parameters are instead substantially similar to those measured in type 1 objects. There are only two possible solutions to this problem: to assume more reflection in Seyfert 2 galaxies than observed in Seyfert 1 galaxies or more complex absorption than estimated in the first instance. The first possibility is ruled out by the Seyfert 2 to Seyfert 1 ratio, while the second provides an average Seyfert 2 intrinsic spectrum very similar to that of the Seyfert 1. The extra absorber is likely an artifact due to summing spectra with different amounts of absorption, although we cannot exclude its presence in at least some individual sources. Our result argues strongly for a very similar central engine in both types of galaxies, as expected under the unified theory.

  20. New model of the average neutron and proton pairing gaps

    NASA Astrophysics Data System (ADS)

    Madland, David G.; Nix, J. Rayford

    1988-01-01

    By use of the BCS approximation applied to a distribution of dense, equally spaced levels, we derive new expressions for the average neutron pairing gap Δ̄n and average proton pairing gap Δ̄p. These expressions, which contain exponential terms, take into account the dependencies of Δ̄n and Δ̄p upon both the relative neutron excess and the shape of the nucleus. The three constants that appear are determined by a least-squares adjustment to experimental pairing gaps obtained by use of fourth-order differences of measured masses. For this purpose we use the 1986 Audi-Wapstra mid-stream mass evaluation and take into account experimental uncertainties. Our new model explains not only the dependencies of Δ̄n and Δ̄p upon relative neutron excess and nuclear shape, but also the experimental result that for medium and heavy nuclei Δ̄n is generally smaller than Δ̄p. We also introduce a new expression for the average residual neutron-proton interaction energy δ̄ that appears in the masses of odd-odd nuclei, and determine the constant that appears by an analogous least-squares adjustment to experimental mass differences. Our new expressions for Δ̄n, Δ̄p and δ̄ should permit extrapolation of these quantities to heavier nuclei and to nuclei farther removed from the valley of β stability than do previous parameterizations.

  1. MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
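
    The seasonal Yule-Walker moment estimator used above as a comparison baseline is straightforward to sketch for a periodic AR(1) model: each season m has its own coefficient phi_m, estimated from lag-1 products of the observations falling in that season. The simulated parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
s, years = 12, 400                      # seasons per cycle, number of cycles
phi_true = 0.3 + 0.5 * np.sin(2 * np.pi * np.arange(s) / s)  # periodic AR(1) parameters

# Simulate a periodic AR(1): x[t] = phi[t mod s] * x[t-1] + e[t]
n = s * years
x = np.zeros(n)
e = rng.normal(size=n)
for t in range(1, n):
    x[t] = phi_true[t % s] * x[t - 1] + e[t]

# Seasonal Yule-Walker moment estimate, season by season:
# phi_m = sum(x[t] * x[t-1]) / sum(x[t-1]^2) over t in season m
phi_hat = np.empty(s)
for m in range(s):
    idx = np.arange(m, n, s)
    idx = idx[idx >= 1]
    phi_hat[m] = np.sum(x[idx] * x[idx - 1]) / np.sum(x[idx - 1] ** 2)

print(np.round(phi_true, 2))
print(np.round(phi_hat, 2))
```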

  2. Fingerprinting Codes for Multimedia Data against Averaging Attack

    NASA Astrophysics Data System (ADS)

    Yagi, Hideki; Matsushima, Toshiyasu; Hirasawa, Shigeichi

    Code construction for digital fingerprinting, which is a copyright protection technique for multimedia, is considered. Digital fingerprinting should deter collusion attacks, where several fingerprinted copies of the same content are mixed to disturb their fingerprints. In this paper, we consider the averaging attack, which is known to be effective against multimedia fingerprinting with the spread spectrum technique. We propose new methods for constructing fingerprinting codes that increase the coding rate of conventional fingerprinting codes while guaranteeing identification of the same number of colluders. With the new fingerprinting codes, the system can supply digital contents to a larger number of users.

  3. Glycogen with short average chain length enhances bacterial durability

    NASA Astrophysics Data System (ADS)

    Wang, Liang; Wise, Michael J.

    2011-09-01

    Glycogen is conventionally viewed as an energy reserve that can be rapidly mobilized for ATP production in higher organisms. However, several studies have noted that glycogen with short average chain length in some bacteria is degraded very slowly. In addition, slow utilization of glycogen is correlated with bacterial viability, that is, the slower the glycogen breakdown rate, the longer the bacterial survival time in the external environment under starvation conditions. We call that a durable energy storage mechanism (DESM). In this review, evidence from microbiology, biochemistry, and molecular biology will be assembled to support the hypothesis of glycogen as a durable energy storage compound. One method for testing the DESM hypothesis is proposed.

  4. Studies into the averaging problem: Macroscopic gravity and precision cosmology

    NASA Astrophysics Data System (ADS)

    Wijenayake, Tharake S.

    With the tremendous improvement in the precision of available astrophysical data in the recent past, it becomes increasingly important to examine some of the underlying assumptions behind the standard model of cosmology and take into consideration nonlinear and relativistic corrections which may affect it at percent precision level. Due to its mathematical rigor and fully covariant and exact nature, Zalaletdinov's macroscopic gravity (MG) is arguably one of the most promising frameworks to explore nonlinearities due to inhomogeneities in the real Universe. We study the application of MG to precision cosmology, focusing on developing a self-consistent cosmology model built on the averaging framework that adequately describes the large-scale Universe and can be used to study real data sets. We first implement an algorithmic procedure using computer algebra systems to explore new exact solutions to the MG field equations. After validating the process with an existing isotropic solution, we derive a new homogeneous, anisotropic and exact solution. Next, we use the simplest (and currently only) solvable homogeneous and isotropic model of MG and obtain an observable function for cosmological expansion using some reasonable assumptions on light propagation. We find that the principal modification to the angular diameter distance is through the change in the expansion history. We then linearize the MG field equations and derive a framework that contains large-scale structure, but the small scale inhomogeneities have been smoothed out and encapsulated into an additional cosmological parameter representing the averaging effect. We derive an expression for the evolution of the density contrast and peculiar velocities and integrate them to study the growth rate of large-scale structure. We find that increasing the magnitude of the averaging term leads to enhanced growth at late times. Thus, for the same matter content, the growth rate of large scale structure in the MG model

  5. A QCD Analysis of Average Transverse Momentum in Jet Fragmentation

    NASA Astrophysics Data System (ADS)

    Iguchi, K.; Nakkagawa, H.; Niégawa, A.

    1981-08-01

    The generalized Altarelli-Parisi equations for the full fragmentation functions of partons are solved within the LLA. The analysis of the average transverse momentum ⟨kT⟩z of hadrons produced inside a jet in e+e- annihilation shows that the LLA calculations of QCD give a satisfactory description of the data if we correctly take into account the kinematical restrictions on the evolution of jets. Discussions on the use of the LLA in phenomenological analyses are also given.

  6. Status of Average-x from Lattice QCD

    SciTech Connect

    Dru Renner

    2011-09-01

    As algorithms and computing power have advanced, lattice QCD has become a precision technique for many QCD observables. However, the calculation of nucleon matrix elements remains an open challenge. I summarize the status of the lattice effort by examining one observable that has come to represent this challenge, average-x: the fraction of the nucleon's momentum carried by its quark constituents. Recent results confirm a long standing tendency to overshoot the experimentally measured value. Understanding this puzzle is essential to not only the lattice calculation of nucleon properties but also the broader effort to determine hadron structure from QCD.

  7. Aerodynamic Surface Stress Intermittency and Conditionally Averaged Turbulence Statistics

    NASA Astrophysics Data System (ADS)

    Anderson, W.

    2015-12-01

    Aeolian erosion of dry, flat, semi-arid landscapes is induced (and sustained) by kinetic energy fluxes in the aloft atmospheric surface layer. During saltation -- the mechanism responsible for surface fluxes of dust and sediment -- briefly suspended sediment grains undergo a ballistic trajectory before impacting and `splashing' smaller-diameter (dust) particles vertically. Conceptual models typically indicate that sediment flux, q (via saltation or drift), scales with imposed aerodynamic (basal) stress raised to some exponent, n, where n > 1. Since basal stress (in fully rough, inertia-dominated flows) scales with the incoming velocity squared, u^2, it follows that q ~ u^2n (where u is some relevant component of the above flow field, u(x,t)). Thus, even small (turbulent) deviations of u from its time-averaged value may play an enormously important role in aeolian activity on flat, dry landscapes. The importance of this argument is further augmented given that turbulence in the atmospheric surface layer exhibits maximum Reynolds stresses in the fluid immediately above the landscape. In order to illustrate the importance of surface stress intermittency, we have used conditional averaging predicated on aerodynamic surface stress during large-eddy simulation of atmospheric boundary layer flow over a flat landscape with momentum roughness length appropriate for the Llano Estacado in west Texas (a flat agricultural region that is notorious for dust transport). By using data from a field campaign to measure diurnal variability of aeolian activity and prevailing winds on the Llano Estacado, we have retrieved the threshold friction velocity (which can be used to compute threshold surface stress under the geostrophic balance with the Monin-Obukhov similarity theory). This averaging procedure provides an ensemble-mean visualization of flow structures responsible for erosion `events'. Preliminary evidence indicates that surface stress peaks are associated with the passage of

  8. Constructing the Average Natural History of HIV-1 Infection

    NASA Astrophysics Data System (ADS)

    Diambra, L.; Capurro, A.; Malta, C. P.

    2007-05-01

    Many aspects of the natural course of the HIV-1 infection remain unclear, despite important efforts towards understanding its long-term dynamics. Using a scaling approach that places progression markers (viral load, CD4+, CD8+) of many individuals on a single average natural course of disease progression, we introduce the concepts of inter-individual scaling and time scaling. Our quantitative assessment of the natural course of HIV-1 infection indicates that the dynamics of disease evolution for individuals who developed AIDS (opportunistic infections) differ from those of individuals who did not develop AIDS. This means that the rate of progression is not relevant for the infection evolution.

  9. Averaged particle dose conversion coefficients in air crew dosimetry.

    PubMed

    Mares, V; Roesler, S; Schraube, H

    2004-01-01

    The MCNPX Monte Carlo code was used to calculate energy-dependent fluence-to-effective dose conversion coefficients for neutrons, protons, electrons, photons, charged pions and muons. The FLUKA Monte Carlo code was used to calculate the spectral particle fluences of secondary cosmic rays for different altitudes, and for different combinations of solar modulation and vertical cut-off rigidity parameters. The energy-averaged fluence-to-dose conversion coefficients were obtained by folding the particle fluence spectra with the conversion coefficients for effective dose and ambient dose equivalent. They show a slight dependence on altitude, solar activity and location in the geomagnetic field.
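
    The folding step is a weighted integral of the conversion coefficients over the particle fluence spectrum. A minimal numerical sketch follows; the spectrum phi(E) and the coefficient curve c(E) are invented placeholders, not FLUKA or MCNPX output.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integration (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Energy grid (MeV) with an invented fluence spectrum and invented
# fluence-to-effective-dose conversion coefficients c(E).
E = np.logspace(-8, 3, 500)
phi = np.exp(-0.5 * (np.log10(E) - 1.0) ** 2)    # placeholder spectral fluence
c = 4.0 * E / (1.0 + E)                          # placeholder c(E), pSv cm^2

# Energy-averaged conversion coefficient: fold c(E) with the spectrum
c_avg = trapz(c * phi, E) / trapz(phi, E)
print(f"energy-averaged conversion coefficient: {c_avg:.3f} pSv cm^2")
```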

  10. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    NASA Technical Reports Server (NTRS)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

    In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherent narrow frequency band performance, the problem is exacerbated by modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve the problem. An iterative design procedure was developed by starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide, and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full wave method of moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced into the aperture distribution by modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, the array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. A separable Taylor distribution with nbar=4 and 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E

  11. Analytical network-averaging of the tube model: rubber elasticity

    NASA Astrophysics Data System (ADS)

    Khiêm, Vu Ngoc; Itskov, Mikhail

    2016-10-01

    In this paper, a micromechanical model for rubber elasticity is proposed on the basis of analytical network-averaging of the tube model and by applying a closed-form of the Rayleigh exact distribution function for non-Gaussian chains. This closed-form is derived by considering the polymer chain as a coarse-grained model on the basis of the quantum mechanical solution for finitely extensible dumbbells (Ilg et al., 2000). The proposed model includes very few physically motivated material constants and demonstrates good agreement with experimental data on biaxial tension as well as simple shear tests.

  13. Femtosecond fiber CPA system emitting 830 W average output power.

    PubMed

    Eidam, Tino; Hanf, Stefan; Seise, Enrico; Andersen, Thomas V; Gabler, Thomas; Wirth, Christian; Schreiber, Thomas; Limpert, Jens; Tünnermann, Andreas

    2010-01-15

    In this Letter we report on the generation of 830 W of compressed average power from a femtosecond fiber chirped pulse amplification (CPA) system. In high-power operation we achieved a compressor throughput of about 90% by using high-efficiency dielectric gratings. The output pulse duration of 640 fs at a 78 MHz repetition rate results in a peak power of 12 MW. Additionally, we discuss the potential for further scaling toward and beyond the kilowatt level by overcoming the current limitations imposed by transverse spatial hole burning.

  14. Computational problems in autoregressive moving average (ARMA) models

    NASA Technical Reports Server (NTRS)

    Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.

    1981-01-01

    The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
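
    Order selection of the kind discussed above can be illustrated with statsmodels: simulate an ARMA(2,1) process, fit candidate orders, and compare information criteria. The polynomials, sample size, and candidate orders below are arbitrary choices for the sketch.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import arma_generate_sample

# Simulate an ARMA(2,1) process (polynomial sign convention of statsmodels)
np.random.seed(0)
ar = np.r_[1, -0.6, 0.2]   # AR polynomial: x_t = 0.6 x_{t-1} - 0.2 x_{t-2} + ...
ma = np.r_[1, 0.4]         # MA polynomial
y = arma_generate_sample(ar, ma, nsample=2000)

# Fit candidate model orders and compare AIC to guide order selection
for p, q in [(1, 0), (1, 1), (2, 1), (3, 2)]:
    res = ARIMA(y, order=(p, 0, q)).fit()
    print(f"ARMA({p},{q}): AIC = {res.aic:.1f}")
```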

  15. Atom-molecule scattering with the average wavefunction method

    NASA Astrophysics Data System (ADS)

    Singh, Harjinder; Dacol, Dalcio K.; Rabitz, Herschel

    1987-08-01

    The average wavefunction method (AWM) is applied to atom-molecule scattering. In its simplest form the labor involved in solving the AWM equations is equivalent to that involved for elastic scattering in the same formulation. As an initial illustration, explicit expressions for the T-matrix are derived for the scattering of an atom and a rigid rotor. Results are presented for low-energy scattering and corrections to the Born approximation are clearly evident. In general, the AWM is particularly suited to polyatom scattering due to its reduction of the potential in terms of a separable atom-atom potential.

  16. Vibrationally averaged dipole moments of methane and benzene isotopologues.

    PubMed

    Arapiraca, A F C; Mohallem, J R

    2016-04-14

    DFT-B3LYP post-Born-Oppenheimer (finite-nuclear-mass-correction (FNMC)) calculations of vibrationally averaged isotopic dipole moments of methane and benzene, which compare well with experimental values, are reported. For methane, in addition to the principal vibrational contribution to the molecular asymmetry, FNMC accounts for the surprisingly large Born-Oppenheimer error of about 34% in the dipole moments. This unexpected result is explained in terms of concurrent electronic and vibrational contributions. The calculated dipole moment of C6H3D3 is about twice as large as the measured dipole moment of C6H5D. Computational progress is reported concerning applications to larger systems and the choice of appropriate basis sets. The simpler procedure of performing vibrational averaging at the Born-Oppenheimer level and then adding the FNMC contribution evaluated at the equilibrium distance is shown to be appropriate. Also, the basis set choice is made by heuristic analysis of the physical behavior of the systems, instead of by comparison with experiments. PMID:27083715

  17. Constraints on Average Radial Anisotropy in the Lower Mantle

    NASA Astrophysics Data System (ADS)

    Trampert, J.; De Wit, R. W. L.; Kaeufl, P.; Valentine, A. P.

    2014-12-01

    Quantifying uncertainties in seismological models is challenging, yet ideally quality assessment is an integral part of the inverse method. We invert centre frequencies for spheroidal and toroidal modes for three parameters of average radial anisotropy, density and P- and S-wave velocities in the lower mantle. We adopt a Bayesian machine learning approach to extract the information on the earth model that is available in the normal mode data. The method is flexible and allows us to infer probability density functions (pdfs), which provide a quantitative description of our knowledge of the individual earth model parameters. The parameters describing shear- and P-wave anisotropy show little deviation from isotropy, but the intermediate parameter η carries robust information on a negative anisotropy of ~1% below 1900 km depth. The mass density in the deep mantle (below 1900 km) shows clear positive deviations from existing models. Other parameters (P- and shear-wave velocities) are close to PREM. Our results require that the average mantle is about 150 K colder than commonly assumed adiabats and consists of a mixture of about 60% perovskite and 40% ferropericlase containing 10-15% iron. The anisotropy favours a specific orientation of the two minerals. This observation has important consequences for the nature of mantle flow.

  18. Parameter Estimation and Parameterization Uncertainty Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2007-12-01

    This study proposes Bayesian model averaging (BMA) to address parameter estimation uncertainty arising from non-uniqueness in parameterization methods. BMA provides a means of incorporating multiple parameterization methods for prediction through the law of total probability, with which an ensemble average of hydraulic conductivity distribution is obtained. Estimation uncertainty is described by the BMA variances, which contain variances within and between parameterization methods. BMA shows that considering more parameterization methods tends to increase estimation uncertainty and that estimation uncertainty is always underestimated when a single parameterization method is used. Two major problems in applying BMA to hydraulic conductivity estimation using a groundwater inverse method are discussed in this study. The first problem is the use of posterior probabilities in BMA, which tends to single out one best method and discard other good methods. This problem arises from Occam's window, which only accepts models in a very narrow range. We propose a variance window to replace Occam's window to cope with this problem. The second problem is the use of the Kashyap information criterion (KIC), which makes BMA tend to prefer highly uncertain parameterization methods due to its inclusion of the Fisher information matrix. We found that the Bayesian information criterion (BIC) is a good approximation to KIC and is able to avoid controversial results. We applied BMA to hydraulic conductivity estimation in the 1,500-foot sand aquifer in East Baton Rouge Parish, Louisiana.
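
    The core BMA bookkeeping is compact: approximate posterior model weights from an information criterion (BIC here, as the abstract suggests) and combine within-method and between-method variance. All numbers below are hypothetical.

```python
import numpy as np

# Hypothetical BIC scores, one per parameterization method; BIC is used
# here as an approximation to KIC, as the abstract suggests.
bics = np.array([1012.4, 1009.8, 1015.1])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                                  # posterior model weights

# Hypothetical per-method estimates of hydraulic conductivity at a point
means = np.array([2.1, 2.4, 1.9])             # E[K | method k]
varis = np.array([0.20, 0.15, 0.30])          # Var[K | method k]

k_bma = np.sum(w * means)                     # BMA (ensemble-average) estimate
# BMA variance = within-method variance + between-method variance
var_bma = np.sum(w * varis) + np.sum(w * (means - k_bma) ** 2)
print(k_bma, var_bma)
```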

  19. Seismicity and average velocities beneath the Argentine Puna Plateau

    NASA Astrophysics Data System (ADS)

    Schurr, B.; Asch, G.; Rietbrock, A.; Kind, R.; Pardo, M.; Heit, B.; Monfret, T.

    A network of 60 seismographs was deployed across the Andes at ∼23.5°S. The array was centered in the backarc, atop the Puna high plateau in NW Argentina. P and S arrival times of 426 intermediate depth earthquakes were inverted for 1-D velocity structure and hypocentral coordinates. Average velocities and υp/υs in the crust are low. Average mantle velocities are high but difficult to interpret because of the presence of a fast-velocity slab at depth. Although the hypocenters sharply define a 35° dipping Benioff zone, seismicity in the slab is not continuous. The spatial clustering of earthquakes is thought to reflect inherited heterogeneities of the subducted oceanic lithosphere. Additionally, 57 crustal earthquakes were located. Seismicity concentrates in the fold and thrust belt of the foreland and Eastern Cordillera, and along and south of the El Toro-Olacapato-Calama Lineament (TOCL). Focal mechanisms of two earthquakes at this structure exhibit left-lateral strike-slip mechanisms similar to the suggested kinematics of the TOCL. We believe that the Puna north of the TOCL behaves like a rigid block with little internal deformation, whereas the area south of the TOCL is weaker and currently deforming.

  20. Numerical Study of Fractional Ensemble Average Transport Equations

    NASA Astrophysics Data System (ADS)

    Kim, S.; Park, Y.; Gyeong, C. B.; Lee, O.

    2014-12-01

    In this presentation, a newly developed theory is applied to the case of stationary and non-stationary stochastic advective flow fields, and a numerical solution method is presented for the resulting fractional Fokker-Planck equation (fFPE), which describes the evolution of the probability density function (PDF) of contaminant concentration. The derived fFPE is evaluated in three different forms: 1) a purely advective form, 2) a second-order moment form, and 3) a second-order cumulant form. The Monte Carlo analysis of the fractional governing equation is then performed in a stochastic flow field, generated by a fractional Brownian motion for the stationary and non-stationary stochastic advection, in order to provide a benchmark for the results obtained from the fFPEs. When compared to the Monte Carlo simulation based PDFs and their ensemble average, the second-order cumulant form gives a good fit in terms of the shape and mode of the PDF of the contaminant concentration. Therefore, it is quite promising that the non-Fickian transport behavior can be modeled by the derived fractional ensemble average transport equations, either by means of the long memory in the underlying stochastic flow, by means of the time-space non-stationarity of the underlying stochastic flow, or by means of the time and space fractional derivatives of the transport equations. This work is supported by the Korea Ministry of Environment as "The Eco Innovation Project: Non-point source pollution control research group".

  1. Averaging expectancies and perceptual experiences in the assessment of quality.

    PubMed

    Dougherty, M R; Shanteau, J

    1999-03-01

    This study examines whether people integrate expectancy information with perceptual experiences when evaluating the quality of consumer products. In particular, we investigate the following three questions: (1) Are expectancy effects observed in the evaluation of consumer products? (2) Can these effects be viewed in cognitive processing terms? (3) Can a mathematical model based on the averaging of attribute information describe the effects? Participants in two experiments blindly evaluated (with the product names removed) consumer products from six sensory modalities: vision (computer printer output), tactile (paper towels), olfaction (men's cologne), taste (corn chips), auditory (audio cassette tapes), and tactile/medicinal (hand lotion). Participants in both experiments were asked to: (1) rate the overall quality of the product given arbitrary quality labels (High Quality, Medium Quality, or Low Quality); (2) rate the overall quality of the product without the labels, and (3) estimate the scale values for the quality labels alone. Group results revealed main effects of the quality labels in all product categories. The pattern of results could be described by an averaging model based on Information Integration Theory. These results have implications for placebo effects in consumer behavior and decision making.
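
    The averaging rule of Information Integration Theory referenced above has a one-line form, response = sum(w_i * s_i) / sum(w_i). The sketch below applies it to a single hypothetical product whose perceptual scale value is pulled toward the scale value of a quality label; all weights and scale values are invented.

```python
import numpy as np

def averaging_response(scale_values, weights):
    """Information Integration Theory averaging rule:
    response = sum(w_i * s_i) / sum(w_i)."""
    s, w = np.asarray(scale_values), np.asarray(weights)
    return np.sum(w * s) / np.sum(w)

# Hypothetical values: a perceptual experience of 6.0 combined with a
# 'Low Quality' label whose scale value is 2.5 and weight is 0.6.
print(averaging_response([6.0, 2.5], [1.0, 0.6]))  # rating pulled below 6.0
```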

  2. Face averages enhance user recognition for smartphone security.

    PubMed

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average', a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.

  3. Two-Stage Bayesian Model Averaging in Endogenous Variable Models.

    PubMed

    Lenkoski, Alex; Eicher, Theo S; Raftery, Adrian E

    2014-01-01

    Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed.

  4. Colorectal Cancer Screening in Average Risk Populations: Evidence Summary.

    PubMed

    Tinmouth, Jill; Vella, Emily T; Baxter, Nancy N; Dubé, Catherine; Gould, Michael; Hey, Amanda; Ismaila, Nofisat; McCurdy, Bronwen R; Paszat, Lawrence

    2016-01-01

    Introduction. The objectives of this systematic review were to evaluate the evidence for different CRC screening tests and to determine the most appropriate ages of initiation and cessation for CRC screening and the most appropriate screening intervals for selected CRC screening tests in people at average risk for CRC. Methods. Electronic databases were searched for studies that addressed the research objectives. Meta-analyses were conducted with clinically homogeneous trials. A working group reviewed the evidence to develop conclusions. Results. Thirty RCTs and 29 observational studies were included. Flexible sigmoidoscopy (FS) prevented CRC and led to the largest reduction in CRC mortality with a smaller but significant reduction in CRC mortality with the use of guaiac fecal occult blood tests (gFOBTs). There was insufficient or low quality evidence to support the use of other screening tests, including colonoscopy, as well as changing the ages of initiation and cessation for CRC screening with gFOBTs in Ontario. Either annual or biennial screening using gFOBT reduces CRC-related mortality. Conclusion. The evidentiary base supports the use of FS or FOBT (either annual or biennial) to screen patients at average risk for CRC. This work will guide the development of the provincial CRC screening program. PMID:27597935

  5. Predictive RANS simulations via Bayesian Model-Scenario Averaging

    SciTech Connect

    Edeling, W.N.; Cinnella, P.; Dwight, R.P.

    2014-10-15

    The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier–Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary layers subject to various pressure gradients. For all considered prediction scenarios the standard deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.

  6. Cause of the exceptionally high AE average for 2003

    NASA Astrophysics Data System (ADS)

    Prestes, A.

    2012-04-01

    In this work we focus on the year 2003, when the AE index was extremely high (annual average AE = 341 nT, with peak intensity of more than 2200 nT); this value is almost 100 nT higher than in the other years of solar cycle 23. Interplanetary magnetic field (IMF) and plasma data are compared with the geomagnetic AE and Dst indices to determine the causes of the exceptionally high AE average. Analyzing the solar wind parameters, we found that the annual average speed was extremely high, approximately 542 km/s (peak value ~1074 km/s). These values were due to recurrent high-speed solar streams from large coronal holes, which stretched to the solar equator, and from low-latitude coronal holes, which persisted for many solar rotations. AE was found to increase with increasing solar wind speed and to decrease when the solar wind speed decreased. The cause of the high AE activity during 2003 is the presence of high-speed corotating streams that contain large-amplitude Alfvén waves throughout, which resulted in a large number of HILDCAA events. When the solar wind plasma and field impinge on Earth's magnetosphere, the southward field turnings associated with the wave fluctuations cause magnetic reconnection and consequently high levels of AE activity and very long recovery phases in Dst, sometimes lasting until the next stream arrives.

  7. Colorectal Cancer Screening in Average Risk Populations: Evidence Summary

    PubMed Central

    Baxter, Nancy N.; Dubé, Catherine; Hey, Amanda

    2016-01-01

    Introduction. The objectives of this systematic review were to evaluate the evidence for different CRC screening tests and to determine the most appropriate ages of initiation and cessation for CRC screening and the most appropriate screening intervals for selected CRC screening tests in people at average risk for CRC. Methods. Electronic databases were searched for studies that addressed the research objectives. Meta-analyses were conducted with clinically homogeneous trials. A working group reviewed the evidence to develop conclusions. Results. Thirty RCTs and 29 observational studies were included. Flexible sigmoidoscopy (FS) prevented CRC and led to the largest reduction in CRC mortality; guaiac fecal occult blood tests (gFOBTs) yielded a smaller but still significant reduction in CRC mortality. There was insufficient or low-quality evidence to support the use of other screening tests, including colonoscopy, as well as changing the ages of initiation and cessation for CRC screening with gFOBTs in Ontario. Either annual or biennial screening using gFOBT reduces CRC-related mortality. Conclusion. The evidentiary base supports the use of FS or FOBT (either annual or biennial) to screen patients at average risk for CRC. This work will guide the development of the provincial CRC screening program.

  8. Yearly average performance of the principal solar collector types

    SciTech Connect

    Rabl, A.

    1981-01-01

    The results of hour-by-hour simulations for 26 meteorological stations are used to derive universal correlations for the yearly total energy that can be delivered by the principal solar collector types: flat plate, evacuated tubes, CPC, single- and dual-axis tracking collectors, and central receiver. The correlations are first- and second-order polynomials in yearly average insolation, latitude, and threshold (= heat loss/optical efficiency). With these correlations, the yearly collectible energy can be found by multiplying the coordinates of a single graph by the collector parameters; this reproduces the results of hour-by-hour simulations with an accuracy (rms error) of 2% for flat plates and 2% to 4% for concentrators. The method can be applied to collectors that operate year-round in such a way that no collected energy is discarded, including photovoltaic systems, solar-augmented industrial process heat systems, and solar thermal power systems. The method is also recommended for rating collectors of different types or manufacturers by yearly average performance, evaluating the effects of collector degradation, the benefits of collector cleaning, and the gains from collector improvements (due to enhanced optical efficiency or decreased heat loss per absorber surface). For most of these applications, the method is accurate enough to replace a system simulation.

  9. Colorectal Cancer Screening in Average Risk Populations: Evidence Summary

    PubMed Central

    Baxter, Nancy N.; Dubé, Catherine; Hey, Amanda

    2016-01-01

    Introduction. The objectives of this systematic review were to evaluate the evidence for different CRC screening tests and to determine the most appropriate ages of initiation and cessation for CRC screening and the most appropriate screening intervals for selected CRC screening tests in people at average risk for CRC. Methods. Electronic databases were searched for studies that addressed the research objectives. Meta-analyses were conducted with clinically homogeneous trials. A working group reviewed the evidence to develop conclusions. Results. Thirty RCTs and 29 observational studies were included. Flexible sigmoidoscopy (FS) prevented CRC and led to the largest reduction in CRC mortality; guaiac fecal occult blood tests (gFOBTs) yielded a smaller but still significant reduction in CRC mortality. There was insufficient or low-quality evidence to support the use of other screening tests, including colonoscopy, as well as changing the ages of initiation and cessation for CRC screening with gFOBTs in Ontario. Either annual or biennial screening using gFOBT reduces CRC-related mortality. Conclusion. The evidentiary base supports the use of FS or FOBT (either annual or biennial) to screen patients at average risk for CRC. This work will guide the development of the provincial CRC screening program. PMID:27597935

  10. Thermal management in high average power pulsed compression systems

    SciTech Connect

    Wavrik, R.W.; Reed, K.W.; Harjes, H.C.; Weber, G.J.; Butler, M.; Penn, K.J.; Neau, E.L.

    1992-08-01

    High average power repetitively pulsed compression systems offer a potential source of electron beams which may be applied to sterilization of wastes, treatment of food products, and other environmental and consumer applications. At Sandia National Laboratory, the Repetitive High Energy Pulsed Power (RHEPP) program is developing a 7-stage magnetic pulse compressor driving a linear induction voltage adder with an electron beam diode load. The RHEPP machine is being designed to deliver 350 kW of average power to the diode in 60 ns FWHM, 2.5 MV, 3 kJ pulses at a repetition rate of 120 Hz. In addition to the electrical design considerations, the repetition rate requires thermal management of the electrical losses. Steady-state temperatures must be kept below the material degradation temperatures to maximize reliability and component life. The optimum design is a trade-off among thermal management, overall electrical performance of the system, reliability, and cost effectiveness. Cooling requirements and configurations were developed for each of the subsystems of RHEPP. Finite element models that combine fluid flow and heat transfer were used to screen design concepts. The analysis includes one-, two-, and three-dimensional heat transfer using surface heat transfer coefficients and boundary layer models. Experiments were conducted to verify the models as well as to evaluate cooling channel fabrication materials and techniques in Metglas wound cores. 10 refs.

  11. Thermal management in high average power pulsed compression systems

    SciTech Connect

    Wavrik, R.W.; Reed, K.W.; Harjes, H.C.; Weber, G.J.; Butler, M.; Penn, K.J.; Neau, E.L.

    1992-01-01

    High average power repetitively pulsed compression systems offer a potential source of electron beams which may be applied to sterilization of wastes, treatment of food products, and other environmental and consumer applications. At Sandia National Laboratory, the Repetitive High Energy Pulsed Power (RHEPP) program is developing a 7-stage magnetic pulse compressor driving a linear induction voltage adder with an electron beam diode load. The RHEPP machine is being designed to deliver 350 kW of average power to the diode in 60 ns FWHM, 2.5 MV, 3 kJ pulses at a repetition rate of 120 Hz. In addition to the electrical design considerations, the repetition rate requires thermal management of the electrical losses. Steady-state temperatures must be kept below the material degradation temperatures to maximize reliability and component life. The optimum design is a trade-off among thermal management, overall electrical performance of the system, reliability, and cost effectiveness. Cooling requirements and configurations were developed for each of the subsystems of RHEPP. Finite element models that combine fluid flow and heat transfer were used to screen design concepts. The analysis includes one-, two-, and three-dimensional heat transfer using surface heat transfer coefficients and boundary layer models. Experiments were conducted to verify the models as well as to evaluate cooling channel fabrication materials and techniques in Metglas wound cores. 10 refs.

  12. A local average distance descriptor for flexible protein structure comparison

    PubMed Central

    2014-01-01

    Background Protein structures are flexible and often show conformational changes upon binding to other molecules to exert biological functions. As protein structures correlate with characteristic functions, structure comparison allows classification and prediction of proteins of undefined functions. However, most comparison methods treat proteins as rigid bodies and cannot retrieve similarities of proteins with large conformational changes effectively. Results In this paper, we propose a novel descriptor, local average distance (LAD), based on either the geodesic distances (GDs) or Euclidean distances (EDs) for pairwise flexible protein structure comparison. The proposed method was compared with 7 structural alignment methods and 7 shape descriptors on two datasets comprising hinge bending motions from the MolMovDB, and the results show that our method outperformed all other methods in retrieving similar structures in terms of precision-recall curve, retrieval success rate, R-precision, mean average precision and F1-measure. Conclusions Both ED- and GD-based LAD descriptors are effective for searching deformed structures and overcome the problems of self-connection caused by a large bending motion. We have also demonstrated that the ED-based LAD is more robust than the GD-based descriptor. The proposed algorithm provides an alternative approach for blasting structure databases, discovering previously unknown conformational relationships, and reorganizing protein structure classification. PMID:24694083
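
    The paper defines LAD precisely; the sketch below only illustrates the general idea of a locally averaged, distance-based shape profile computed from Cα coordinates. The window size and normalisation are arbitrary illustrative choices, not the authors' values.

```python
import numpy as np

def local_average_distance(coords, window=5):
    """For each residue i, average the Euclidean distances from i to the
    residues inside a sequence window around i (a crude LAD-like profile)."""
    n = len(coords)
    profile = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        d = np.linalg.norm(coords[lo:hi] - coords[i], axis=1)
        profile[i] = d.sum() / (len(d) - 1)   # exclude the zero self-distance
    return profile

# Two conformations of one fold give similar profiles even after a hinge
# bend, because only distances to sequence-local neighbours enter.
```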

  13. Winding Numbers and Average Frequencies in Phase Oscillator Networks

    NASA Astrophysics Data System (ADS)

    Golubitsky, M.; Josic, K.; Shea-Brown, E.

    2006-06-01

    We study networks of coupled phase oscillators and show that network architecture can force relations between average frequencies of the oscillators. The main tool of our analysis is the coupled cell theory developed by Stewart, Golubitsky, Pivato, and Torok, which provides precise relations between network architecture and the corresponding class of ODEs in R^M and gives conditions for the flow-invariance of certain polydiagonal subspaces for all coupled systems with a given network architecture. The theory generalizes the notion of fixed-point subspaces for subgroups of network symmetries and directly extends to networks of coupled phase oscillators. For systems of coupled phase oscillators (but not generally for ODEs in R^M, where M ≥ 2), invariant polydiagonal subsets of codimension one arise naturally and strongly restrict the network dynamics. We say that two oscillators i and j coevolve if the polydiagonal θi = θj is flow-invariant, and show that the average frequencies of these oscillators must be equal. Given a network architecture, it is shown that coupled cell theory provides a direct way of testing how coevolving oscillators form collections with closely related dynamics. We give a generalization of these results to synchronous clusters of phase oscillators using quotient networks, and discuss implications for networks of spiking cells and those connected through buffers that implement coupling dynamics.
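
    The equal-average-frequency claim is easy to probe numerically. In the sketch below, cells 0 and 1 receive identical coupling, so the polydiagonal θ0 = θ1 is flow-invariant and their average frequencies should agree even from distinct initial phases; the coupling function and parameters are arbitrary illustrative choices.

```python
import numpy as np

omega = np.array([1.0, 1.0, 4.0])   # cells 0 and 1 are interchangeable

def rhs(theta):
    t0, t1, t2 = theta
    return omega + np.array([
        np.sin(t1 - t0) + np.sin(t2 - t0),
        np.sin(t0 - t1) + np.sin(t2 - t1),
        np.sin(t0 - t2) + np.sin(t1 - t2),
    ])

dt, steps = 1e-2, 200_000
theta = np.array([0.3, 2.1, -1.0])      # distinct initial phases
start = theta.copy()
for _ in range(steps):                   # forward Euler suffices for a check
    theta = theta + dt * rhs(theta)

print((theta - start) / (steps * dt))    # entries 0 and 1 agree; entry 2 drifts
```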

  14. Two-Stage Bayesian Model Averaging in Endogenous Variable Models.

    PubMed

    Lenkoski, Alex; Eicher, Theo S; Raftery, Adrian E

    2014-01-01

    Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471

  15. Interpreting multiple risk scales for sex offenders: evidence for averaging.

    PubMed

    Lehmann, Robert J B; Hanson, R Karl; Babchishin, Kelly M; Gallasch-Nemitz, Franziska; Biedermann, Jürgen; Dahle, Klaus-Peter

    2013-09-01

    This study tested 3 decision rules for combining actuarial risk instruments for sex offenders into an overall evaluation of risk. Based on a 9-year follow-up of 940 adult male sex offenders, we found that Rapid Risk Assessment for Sex Offender Recidivism (RRASOR), Static-99R, and Static-2002R predicted sexual, violent, and general recidivism and provided incremental information for the prediction of all 3 outcomes. Consistent with previous findings, the incremental effect of RRASOR was positive for sexual recidivism but negative for violent and general recidivism. Averaging risk ratios was a promising approach to combining these risk scales, showing good calibration between predicted (E) and observed (O) recidivism rates (E/O index = 0.93, 95% CI [0.79, 1.09]) and good discrimination (area under the curve = 0.73, 95% CI [0.69, 0.77]) for sexual recidivism. As expected, choosing the lowest (least risky) risk tool resulted in underestimated sexual recidivism rates (E/O = 0.67, 95% CI [0.57, 0.79]) and choosing the highest (riskiest) resulted in overestimated risk (E/O = 1.37, 95% CI [1.17, 1.60]). For the prediction of violent and general recidivism, the combination rules provided similar or lower discrimination compared with relying solely on the Static-99R or Static-2002R. The current results support an averaging approach and underscore the importance of understanding the constructs assessed by violence risk measures. PMID:23730829

  16. Using Bayes Model Averaging for Wind Power Forecasts

    NASA Astrophysics Data System (ADS)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is however well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might however be the case that some sites tend to reflect the total output of the region more accurately, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data show no power production.
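
    The BMA predictive PDF described above is a weight-and-sum of the member PDFs. A minimal sketch with Gaussian member kernels follows; the forecasts, weights, and spread are made-up numbers, and Sloughter et al. [3] actually use a gamma kernel for wind rather than a Gaussian one.

```python
import numpy as np

forecasts = np.array([7.2, 8.1, 6.5])   # member point forecasts (m/s)
weights   = np.array([0.5, 0.3, 0.2])   # posterior skill weights, sum to 1
sigma     = 1.2                         # shared kernel spread from training

def bma_pdf(y):
    """Weighted average of the members' Gaussian predictive densities."""
    k = np.exp(-0.5 * ((y - forecasts) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return float(k @ weights)

ys = np.linspace(0.0, 15.0, 301)
print(np.trapz([bma_pdf(y) for y in ys], ys))   # ~1: a proper density
```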

  17. Runoff and leaching of metolachlor from Mississippi River alluvial soil during seasons of average and below-average rainfall.

    PubMed

    Southwick, Lloyd M; Appelboom, Timothy W; Fouss, James L

    2009-02-25

    The movement of the herbicide metolachlor [2-chloro-N-(2-ethyl-6-methylphenyl)-N-(2-methoxy-1-methylethyl)acetamide] via runoff and leaching from 0.21 ha plots planted to corn on Mississippi River alluvial soil (Commerce silt loam) was measured for a 6-year period, 1995-2000. The first three years received normal rainfall (30-year average); the second three years experienced reduced rainfall. The 4-month periods prior to application plus the following 4 months after application were characterized by 1039 +/- 148 mm of rainfall for 1995-1997 and by 674 +/- 108 mm for 1998-2000. During the normal rainfall years 216 +/- 150 mm of runoff occurred during the study seasons (4 months following herbicide application), accompanied by 76.9 +/- 38.9 mm of leachate. For the low-rainfall years these amounts were 16.2 +/- 18.2 mm of runoff (92% less than the normal years) and 45.1 +/- 25.5 mm of leachate (41% less than the normal seasons). Runoff of metolachlor during the normal-rainfall seasons was 4.5-6.1% of application, whereas leaching was 0.10-0.18%. For the below-normal periods, these losses were 0.07-0.37% of application in runoff and 0.22-0.27% in leachate. When averages over the three normal and the three less-than-normal seasons were taken, a 35% reduction in rainfall was characterized by a 97% reduction in runoff loss and a 71% increase in leachate loss of metolachlor on a percent-of-application basis. The data indicate an increase in preferential flow in the leaching movement of metolachlor from the surface soil layer during the reduced-rainfall periods. Even with increased preferential flow through the soil during the below-average rainfall seasons, leachate loss (percent of application) of the herbicide remained below 0.3%. Compared to the average rainfall seasons of 1995-1997, the below-normal seasons of 1998-2000 were characterized by a 79% reduction in total runoff and leachate flow and by a 93% reduction in corresponding metolachlor movement via these routes.

  18. Application Bayesian Model Averaging method for ensemble system for Poland

    NASA Astrophysics Data System (ADS)

    Guzikowski, Jakub; Czerwinska, Agnieszka

    2014-05-01

    The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting Model and calibrating these data by means of the Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution short-range ensemble forecasts using meteorological data (temperature) generated by nine WRF models. The WRF models have 35 vertical levels and 2.5 km x 2.5 km horizontal resolution. The main point is that the ensemble members used have different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive Probability Density Function (PDF) is a weighted average of the predictive PDFs associated with each individual ensemble member, with weights that reflect the member's relative skill. As a test case we chose a period with a heat wave and convective weather conditions over Poland, from 23 July to 1 August 2013. From 23 July to 29 July 2013 the temperature oscillated below and above 30 degrees Celsius at many meteorological stations and new temperature records were set. During this time an increase in patients hospitalized with cardiovascular problems was registered. On 29 July 2013 an advection of moist tropical air masses was recorded over Poland, causing a strong convection event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to transport infrastructure, destroyed buildings and trees, injuries, and a direct threat to life. A comparison of the meteorological data from the ensemble system with the data recorded at 74 weather stations located in Poland is made. We prepare a set of model-observation pairs. The data obtained from the single ensemble members and the median from the WRF BMA system are then evaluated using the deterministic error statistics Root Mean Square Error (RMSE) and Mean Absolute Error (MAE).
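
    The two verification scores named above are standard and compact enough to state in code; a minimal sketch over flattened model-observation pairs (array names are hypothetical):

```python
import numpy as np

def rmse(forecast, observed):
    return float(np.sqrt(np.mean((forecast - observed) ** 2)))

def mae(forecast, observed):
    return float(np.mean(np.abs(forecast - observed)))

# e.g. 74 stations x N hours of 2 m temperature, flattened to 1-D arrays:
# print(rmse(t2m_bma_median, t2m_obs), mae(t2m_bma_median, t2m_obs))
```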

  19. Ultra-low noise miniaturized neural amplifier with hardware averaging

    NASA Astrophysics Data System (ADS)

    Dweiri, Yazan M.; Eggers, Thomas; McCallum, Grant; Durand, Dominique M.

    2015-08-01

    Objective. Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface but the recorded signal amplitude is small (<3 μVrms, 700 Hz-7 kHz), thereby requiring a baseline noise of less than 1 μVrms for a useful signal-to-noise ratio (SNR). Flat interface nerve electrode (FINE) contacts alone generate thermal noise of at least 0.5 μVrms; therefore the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Approach. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. Main results. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating <1.5 μVrms total recording baseline noise when connected to a FINE placed on the sciatic nerve of an awake animal. An algorithm was introduced to find the value of N that can minimize both the power consumption and the noise in order to design a miniaturized ultralow-noise neural amplifier. Significance. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes.
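
    The quoted 1/√N behaviour follows if the N amplifier input-referred noises are independent while the electrode (source) noise is common to every input; under that assumption the totals combine as below. The numbers are illustrative, not the paper's measurements.

```python
import numpy as np

amp_noise = 2.0   # uVrms per amplifier, input-referred (illustrative)
src_noise = 0.5   # uVrms electrode thermal noise, common to all inputs

for n in (1, 2, 3, 4, 8):
    # independent amplifier noise averages down as 1/sqrt(N);
    # the common source term sets the floor, hence "1/sqrt(N) or less"
    total = np.sqrt(amp_noise ** 2 / n + src_noise ** 2)
    print(f"N={n}: {total:.2f} uVrms")
```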

  20. Particle filtration: An analysis using the method of volume averaging

    SciTech Connect

    Quintard, M.; Whitaker, S.

    1994-12-01

    The process of filtration of non-charged, submicron particles is analyzed using the method of volume averaging. The particle continuity equation is represented in terms of the first correction to the Smoluchowski equation that takes into account particle inertia effects for small Stokes numbers. This leads to a cellular efficiency that contains a minimum in the efficiency as a function of the particle size, and this allows us to identify the most penetrating particle size. Comparison of the theory with results from Brownian dynamics indicates that the first correction to the Smoluchowski equation gives reasonable results in terms of both the cellular efficiency and the most penetrating particle size. However, the results for larger particles clearly indicate the need to extend the Smoluchowski equation to include higher order corrections. Comparison of the theory with laboratory experiments, in the absence of adjustable parameters, provides interesting agreement for particle diameters that are equal to or less than the diameter of the most penetrating particle.

  1. Averaged Eigenvalue Spectrum of Large Symmetric Random Matrix

    NASA Astrophysics Data System (ADS)

    Takano, Fumihiko; Takano, Hiroshi

    1984-09-01

    The averaged eigenvalue spectrum of a large symmetric random matrix, in which each element is an independent random variable with the Gaussian distribution, is calculated by using the diagram technique. Compared with the methods used by Edwards and Jones and by Mehta, the present method is very simple and can be used in other calculations. Leading terms in N (the dimension of the matrix) and next-leading terms are calculated exactly. It is shown that the term proportional to N gives the semicircular law obtained by Edwards and Jones. The next-leading terms, which are of order unity, give three δ-functions as well as corrections to the semicircular law. One of the three δ-functions is the same as that of Edwards and Jones, and the other two are located at the band edges of the semicircular law. The physical meanings of these results are discussed.
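
    The leading, order-N part of the averaged spectrum (the semicircular law) is easy to check numerically; a minimal sketch sampling one large Gaussian symmetric matrix, with the scaling chosen so the spectrum fills [-2, 2]:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000
M = rng.normal(size=(N, N))
H = (M + M.T) / np.sqrt(2 * N)    # symmetric; off-diagonal variance 1/N

eig = np.linalg.eigvalsh(H)
hist, edges = np.histogram(eig, bins=40, range=(-2, 2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
semicircle = np.sqrt(4 - centers ** 2) / (2 * np.pi)
print(np.max(np.abs(hist - semicircle)))   # small: the histogram follows the law
```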

  2. A Multichannel Averaging Phasemeter for Picometer Precision Laser Metrology

    NASA Technical Reports Server (NTRS)

    Halverson, Peter G.; Johnson, Donald R.; Kuhnert, Andreas; Shaklan, Stuart B.; Sero, Robert

    1999-01-01

    The Micro-Arcsecond Metrology (MAM) team at the Jet Propulsion Laboratory has developed a precision phasemeter for the Space Interferometry Mission (SIM). The current version of the phasemeter is well suited for picometer-accuracy distance measurements and tracks at speeds up to 50 cm/sec when coupled to SIM's 1.3 micron wavelength heterodyne laser metrology gauges. Since the phasemeter is implemented with industry-standard FPGA chips, other accuracy/speed trade-off points can be programmed for applications such as metrology for earth-based long-baseline astronomical interferometry (planet finding), and industrial applications such as translation stage and machine tool positioning. The phasemeter is a standard VME module, supports 6 metrology gauges and a 128 MHz clock, has programmable hardware averaging, and a maximum range of 2^32 cycles (2000 meters at 1.3 microns).

  3. Average vertical and zonal F region plasma drifts over Jicamarca

    SciTech Connect

    Fejer, B.G.; Gonzalez, S.A. ); de Paula, E.R. Utah State Univ., Logan ); Woodman, R.F. )

    1991-08-01

    The seasonal averages of the equatorial F region vertical and zonal plasma drifts are determined using extensive incoherent scatter radar observations from Jicamarca during 1968-1988. The late afternoon and nighttime vertical and zonal drifts are strongly dependent on the 10.7-cm solar flux. The authors show that the evening prereversal enhancement of vertical drifts increases linearly with solar flux during equinox but tends to saturate for large fluxes during southern hemisphere winter. They examine in detail, for the first time, the seasonal variation of the zonal plasma drifts and their dependence on solar flux and magnetic activity. The seasonal effects on the zonal drifts are most pronounced in the midnight-morning sector. The nighttime eastward drifts increase with solar flux for all seasons but decrease slightly with magnetic activity. The daytime westward drifts are essentially independent of season, solar cycle, and magnetic activity.

  4. Rapidity dependence of the average transverse momentum in hadronic collisions

    NASA Astrophysics Data System (ADS)

    Durães, F. O.; Giannini, A. V.; Gonçalves, V. P.; Navarra, F. S.

    2016-08-01

    The energy and rapidity dependence of the average transverse momentum in p p and p A collisions at energies currently available at the BNL Relativistic Heavy Ion Collider (RHIC) and CERN Large Hadron Collider (LHC) are estimated using the color glass condensate (CGC) formalism. We update previous predictions for the pT spectra using the hybrid formalism of the CGC approach and two phenomenological models for the dipole-target scattering amplitude. We demonstrate that these models are able to describe the RHIC and LHC data for hadron production in p p , d Au , and p Pb collisions at pT ≤ 20 GeV. Moreover, we present our predictions for the average transverse momentum ⟨pT⟩ and demonstrate that the ratio ⟨pT⟩(y)/⟨pT⟩(y = 0) decreases with the rapidity and has a behavior similar to that predicted by hydrodynamical calculations.

  5. Data Point Averaging for Computational Fluid Dynamics Data

    NASA Technical Reports Server (NTRS)

    Norman, David, Jr. (Inventor)

    2014-01-01

    A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
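
    Stripped of patent language, the procedure is: assign each CFD surface point to a sub-area, then average the flow parameter over each sub-area's points. A minimal sketch with hypothetical geometry (four quadrants of a unit panel) and a made-up heating parameter:

```python
import numpy as np

def subarea_averages(points, values, assign):
    """Average a flow parameter over the points falling in each sub-area;
    `assign` maps an (x, y) point to an integer sub-area id."""
    ids = np.array([assign(p) for p in points])
    return {int(k): float(values[ids == k].mean()) for k in np.unique(ids)}

assign = lambda p: 2 * int(p[0] > 0.5) + int(p[1] > 0.5)   # quadrant id
pts = np.random.default_rng(2).random((1000, 2))
q = 1.0 + pts[:, 0]                                        # made-up parameter
print(subarea_averages(pts, q, assign))
```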

  6. Average System Cost Methodology : Administrator's Record of Decision.

    SciTech Connect

    United States. Bonneville Power Administration.

    1984-06-01

    Significant features of the average system cost (ASC) methodology adopted are: retention of the jurisdictional approach, where retail rate orders of regulatory agencies provide the primary data for computing the ASC for utilities participating in the residential exchange; inclusion of transmission costs; exclusion of construction work in progress; use of a utility's weighted cost of debt securities; exclusion of income taxes; simplification of separation procedures for subsidized generation and transmission accounts from other accounts; clarification of ASC methodology rules; a more generous review timetable for individual filings; phase-in of the reformed methodology; and a requirement that each exchanging utility file under the new methodology within 20 days of implementation by the Federal Energy Regulatory Commission. Of the ten major participating utilities, the revised ASC will substantially affect only three. (PSB)

  7. A generalization of averaging theorems for porous medium analysis

    NASA Astrophysics Data System (ADS)

    Gray, William G.; Miller, Cass T.

    2013-12-01

    The contributions of Stephen Whitaker to the rigorous analysis of porous medium flow and transport are built on the use of temporal and spatial averaging theorems applied to phases in representative elementary volumes. Here, these theorems are revisited, common point theorems are considered, extensions of existing theorems are developed to include the effects of lower dimensional entities represented as singularities, and a unified form of the theorems for phases, interfaces, common curves, and common points is established for both macroscale and mixed macroscale-megascale systems. The availability of the full set of theorems facilitates detailed analysis of a variety of porous medium systems. Explicit modeling of the physical processes associated with interfaces, common curves, and common points, as well as the kinematics of these entities, can be undertaken at both the macroscale and megascale based on these theorems.

  8. Model selection versus model averaging in dose finding studies.

    PubMed

    Schorning, Kirsten; Bornkamp, Björn; Bretz, Frank; Dette, Holger

    2016-09-30

    A key objective of Phase II dose finding studies in clinical drug development is to adequately characterize the dose response relationship of a new drug. An important decision is then on the choice of a suitable dose response function to support dose selection for the subsequent Phase III studies. In this paper, we compare different approaches for model selection and model averaging using mathematical properties as well as simulations. We review and illustrate asymptotic properties of model selection criteria and investigate their behavior when changing the sample size but keeping the effect size constant. In a simulation study, we investigate how the various approaches perform in realistically chosen settings. Finally, the different methods are illustrated with a recently conducted Phase II dose finding study in patients with chronic obstructive pulmonary disease. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27226147

  9. Average electric field behavior in the ionosphere above Arecibo

    NASA Technical Reports Server (NTRS)

    Ganguly, Suman; Behnke, Richard A.; Emery, Barbara A.

    1987-01-01

    Plasma drift measurements taken at Arecibo during the solar minimum period of 1974-1977 are examined to determine their average behavior in the E, F1, and F2 regions. The drifts are generally diurnal in the E region and semidiurnal in the F1 region. These lower thermospheric drifts are set up by polarization fields generated by propagating and in situ atmospheric tides. In the F region the diurnal component is more pronounced, especially in the zonal direction. The magnitude of the drifts is of the order of 25-30 m/s (or 1 mV/m). Enhanced geomagnetic activity appears to increase the westward component of the drift, in agreement with the theory of the ionospheric disturbance dynamo (Blanc and Richmond, 1980). Nighttime drifts appear to be at least partly explained in terms of polarization fields.

  10. Averaged-null-energy condition for electromagnetism in Minkowski spacetime

    SciTech Connect

    Folacci, A. )

    1992-09-15

    We show, on four-dimensional Minkowski spacetime, that ⟨ψ|T_{μν}|ψ⟩, the renormalized expectation value in a general quantum state |ψ⟩ of the stress-energy tensor for electromagnetism, satisfies the averaged-null-energy condition, i.e., that ∫dλ ⟨ψ|T_{μν}|ψ⟩ t^μ t^ν ≥ 0, where this integral is taken along complete null geodesics with an affine parameter λ and tangent vector t^μ.

  11. Data Point Averaging for Computational Fluid Dynamics Data

    NASA Technical Reports Server (NTRS)

    Norman, Jr., David (Inventor)

    2016-01-01

    A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.

  12. Higher in status, (Even) better-than-average

    PubMed Central

    Varnum, Michael E. W.

    2015-01-01

    In 5 studies (total N = 1357) conducted online using Amazon's MTurk, the relationship between socioeconomic status (SES) and the better-than-average effect (BTAE) was tested. Across the studies, subjective measures of SES were positively correlated with the magnitude of the BTAE. Effects of objective measures (income and education) were weaker and less consistent. Measures of childhood SES (both objective and subjective) were positively correlated with BTAE magnitude, though less strongly and less consistently than measures of current subjective SES. Meta-analysis revealed that all measures of chronic SES (with the exception of education) were significantly correlated with the BTAE. However, manipulations of SES in terms of subjective status (Study 2), power (Study 3), and dominance (Study 4) did not have strong effects on BTAE magnitude (d's ranging from −0.04 to −0.14). Taken together, the results suggest that chronic, but not temporary, status may be linked with a stronger tendency to overestimate one's abilities and positive traits. PMID:25972824

  13. A coefficient average approximation towards Gutzwiller wavefunction formalism

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Yao, Yongxin; Wang, Cai-Zhuang; Ho, Kai-Ming

    2015-06-01

    The Gutzwiller wavefunction is a physically well-motivated trial wavefunction for describing correlated electron systems. In this work, a new approximation is introduced to facilitate the evaluation of the expectation value of any operator within the Gutzwiller wavefunction formalism. The basic idea is to use a specially designed average over the Gutzwiller wavefunction coefficients, expanded in the many-body Fock space, to approximate the ratio of expectation values between a Gutzwiller wavefunction and its underlying noninteracting wavefunction. To compare with the standard Gutzwiller approximation (GA), we test its performance on single-band systems and find quite interesting properties. On finite systems, it outperforms the GA, while on infinite systems it asymptotically approaches the GA. Analytic analysis together with numerical tests is provided to support this claimed asymptotic behavior. Finally, possible improvements on the approximation and its generalization towards multiband systems are illustrated and discussed.

  14. A coefficient average approximation towards Gutzwiller wavefunction formalism.

    PubMed

    Liu, Jun; Yao, Yongxin; Wang, Cai-Zhuang; Ho, Kai-Ming

    2015-06-24

    The Gutzwiller wavefunction is a physically well-motivated trial wavefunction for describing correlated electron systems. In this work, a new approximation is introduced to facilitate the evaluation of the expectation value of any operator within the Gutzwiller wavefunction formalism. The basic idea is to use a specially designed average over the Gutzwiller wavefunction coefficients, expanded in the many-body Fock space, to approximate the ratio of expectation values between a Gutzwiller wavefunction and its underlying noninteracting wavefunction. To compare with the standard Gutzwiller approximation (GA), we test its performance on single-band systems and find quite interesting properties. On finite systems, it outperforms the GA, while on infinite systems it asymptotically approaches the GA. Analytic analysis together with numerical tests is provided to support this claimed asymptotic behavior. Finally, possible improvements on the approximation and its generalization towards multiband systems are illustrated and discussed.

  15. THE FIRST LUNAR MAP OF THE AVERAGE SOIL ATOMIC MASS

    SciTech Connect

    O. GASNAULT; W. FELDMAN; ET AL

    2001-01-01

    Measurements of indexes of lunar surface composition were successfully made during the Lunar Prospector (LP) mission, using the Neutron Spectrometers (NS) [1]. This capability is demonstrated for fast neutrons in Plate 1 of Maurice et al. [2] (similar to Figure 2 here). Inspection shows a clear distinction between mare basalt (bright) and highland terranes [2]. Fast neutron simulations demonstrate the sensitivity of the fast neutron leakage flux to the presence of iron and titanium in the soil [3]. The dependence of the flux on a third element (calcium or aluminum) was also suspected [4]. We expand our previous work in this study by estimating fast neutron leakage fluxes for a more comprehensive set of assumed lunar compositions. We find a strong relationship between the fast neutron fluxes and the average soil atomic mass ⟨A⟩. This relation can be inverted to provide a map of ⟨A⟩ from the measured map of fast neutrons from the Moon.

  16. Note on scaling arguments in the effective average action formalism

    NASA Astrophysics Data System (ADS)

    Pagani, Carlo

    2016-08-01

    The effective average action (EAA) is a scale-dependent effective action where a scale k is introduced via an infrared regulator. The k dependence of the EAA is governed by an exact flow equation to which one associates a boundary condition at a scale μ . We show that the μ dependence of the EAA is controlled by an equation fully analogous to the Callan-Symanzik equation which allows one to define scaling quantities straightforwardly. Particular attention is paid to composite operators which are introduced along with new sources. We discuss some simple solutions to the flow equation for composite operators and comment on their implications in the case of a local potential approximation.

  17. Combining remotely sensed and other measurements for hydrologic areal averages

    NASA Technical Reports Server (NTRS)

    Johnson, E. R.; Peck, E. L.; Keefer, T. N.

    1982-01-01

    A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.
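
    The correlation area method weights each measurement by the basin area it best represents. The generic weight-and-sum step is sketched below with made-up weights normalised to 1; the paper derives the actual weights from the spatial correlation structure of the variable and the accuracy of each measurement technology.

```python
import numpy as np

def areal_average(values, weights):
    """Weighted mean areal value from point, line, and areal measurements."""
    w = np.asarray(weights, dtype=float)
    v = np.asarray(values, dtype=float)
    return float((w / w.sum()) @ v)

# e.g. one satellite (areal), one flight-line, and two gauge (point)
# measurements of snow water equivalent, weighted by represented area:
print(areal_average([120.0, 135.0, 110.0, 128.0], [0.5, 0.2, 0.15, 0.15]))
```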

  18. REVISITING THE SOLAR TACHOCLINE: AVERAGE PROPERTIES AND TEMPORAL VARIATIONS

    SciTech Connect

    Antia, H. M.; Basu, Sarbani E-mail: sarbani.basu@yale.edu

    2011-07-10

    The tachocline is believed to be the region where the solar dynamo operates. With over a solar cycle's worth of data available from the Michelson Doppler Imager and Global Oscillation Network Group instruments, we are in a position to investigate not merely the average structure of the solar tachocline, but also its time variations. We determine the properties of the tachocline as a function of time by fitting a two-dimensional model that takes latitudinal variations of the tachocline properties into account. We confirm that if we consider the central position of the tachocline, it is prolate. Our results show that the tachocline is thicker at latitudes higher than the equator, making the overall shape of the tachocline more complex. Of the tachocline properties examined, the transition of the rotation rate across the tachocline, and to some extent the position of the tachocline, show some temporal variations.

  19. A vertically averaged spectral model for tidal circulation in estuaries

    USGS Publications Warehouse

    Burau, J.R.; Cheng, R.T.

    1989-01-01

    A frequency dependent computer model based on the two-dimensional vertically averaged shallow-water equations is described for general purpose application in tidally dominated embayments. This model simulates the response of both tides and tidal currents to user-specified geometries and boundary conditions. The mathematical formulation and practical application of the model are discussed in detail. Salient features of the model include the ability to specify: (1) stage at the open boundaries as well as within the model grid, (2) velocities on open boundaries (river inflows and so forth), (3) spatially variable wind stress, and (4) spatially variable bottom friction. Using harmonically analyzed field data as boundary conditions, this model can be used to make real time predictions of tides and tidal currents. (USGS)

  20. Forecasts of time averages with a numerical weather prediction model

    NASA Technical Reports Server (NTRS)

    Roads, J. O.

    1986-01-01

    Forecasts of time averages of 1-10 days in duration by an operational numerical weather prediction model are documented for the global 500 mb height field in spectral space. Error growth in very idealized models is described in order to anticipate various features of these forecasts and in order to anticipate what the results might be if forecasts longer than 10 days were carried out by present day numerical weather prediction models. The data set for this study is described, and the equilibrium spectra and error spectra are documented; then, the total error is documented. It is shown how forecasts can immediately be improved by removing the systematic error, by using statistical filters, and by ignoring forecasts beyond about a week. Temporal variations in the error field are also documented.

  1. Weighted average finite difference methods for fractional diffusion equations

    NASA Astrophysics Data System (ADS)

    Yuste, S. B.

    2006-07-01

    A class of finite difference methods for solving fractional diffusion equations is considered. These methods are an extension of the weighted average methods for ordinary (non-fractional) diffusion equations. Their accuracy is of order (Δx)^2 and Δt, except for the fractional version of the Crank-Nicholson method, where the accuracy with respect to the timestep is of order (Δt)^2 if a second-order approximation to the fractional time-derivative is used. Their stability is analyzed by means of a recently proposed procedure akin to the standard von Neumann stability analysis. A simple and accurate stability criterion valid for different discretization schemes of the fractional derivative, arbitrary weight factor, and arbitrary order of the fractional derivative, is found and checked numerically. Some examples are provided in which the new methods' numerical solutions are obtained and compared against exact solutions.
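
    For ordinary diffusion the weighted average family interpolates, via a weight θ, between the explicit scheme (θ = 0), Crank-Nicholson (θ = 1/2), and the fully implicit scheme (θ = 1); the fractional methods replace the time derivative with a discretized fractional operator. A self-contained sketch of the non-fractional base case:

```python
import numpy as np

# weighted average (theta) scheme for u_t = D u_xx on [0, 1], u = 0 at both ends
D, nx, nt, theta = 1.0, 51, 400, 0.5      # theta = 0.5 is Crank-Nicholson
dx, dt = 1.0 / (nx - 1), 1e-4
r = D * dt / dx ** 2

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)                      # mode with exact decay exp(-pi^2 D t)

m = nx - 2                                 # interior nodes
A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1))        # second-difference operator
lhs = np.eye(m) - theta * r * A
rhs = np.eye(m) + (1.0 - theta) * r * A

for _ in range(nt):
    u[1:-1] = np.linalg.solve(lhs, rhs @ u[1:-1])

exact = np.sin(np.pi * x) * np.exp(-np.pi ** 2 * D * nt * dt)
print(np.max(np.abs(u - exact)))           # small discretization error
```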

  2. A high-average-power FEL for industrial applications

    SciTech Connect

    Dylla, H.F.; Benson, S.; Bisognano, J.

    1995-12-31

    CEBAF has developed a comprehensive conceptual design of an industrial user facility based on a kilowatt UV (150-1000 nm) and IR (2-25 micron) FEL driven by a recirculating, energy-recovering 200 MeV superconducting radio-frequency (SRF) accelerator. FEL users (CEBAF's partners in the Laser Processing Consortium, including AT&T, DuPont, IBM, Northrop-Grumman, 3M, and Xerox) plan to develop applications such as polymer surface processing, metals and ceramics micromachining, and metal surface processing, with the overall effort leading to later scale-up to industrial systems at 50-100 kW. Representative applications are described. The proposed high-average-power FEL overcomes limitations of conventional laser sources in available power, cost-effectiveness, tunability and pulse structure. 4 refs., 3 figs., 2 tabs.

  3. TIDAL AND TIDALLY AVERAGED CIRCULATION CHARACTERISTICS OF SUISUN BAY, CALIFORNIA.

    USGS Publications Warehouse

    Smith, Lawrence H.; Cheng, Ralph T.

    1987-01-01

    Availability of extensive field data permitted realistic calibration and validation of a hydrodynamic model of tidal circulation and salt transport for Suisun Bay, California. Suisun Bay is a partially mixed embayment of northern San Francisco Bay located just seaward of the Sacramento-San Joaquin Delta. The model employs a variant of an alternating direction implicit finite-difference method to solve the hydrodynamic equations and an Eulerian-Lagrangian method to solve the salt transport equation. An upwind formulation of the advective acceleration terms of the momentum equations was employed to avoid oscillations in the tidally averaged velocity field produced by central spatial differencing of these terms. Simulation results of tidal circulation and salt transport demonstrate that tides and the complex bathymetry determine the patterns of tidal velocities and that net changes in the salinity distribution over a few tidal cycles are small despite large changes during each cycle.

  4. STREMR: Numerical model for depth-averaged incompressible flow

    NASA Astrophysics Data System (ADS)

    Roberts, Bernard

    1993-09-01

    The STREMR computer code is a two-dimensional model for depth-averaged incompressible flow. It accommodates irregular boundaries and nonuniform bathymetry, and it includes empirical corrections for turbulence and secondary flow. Although STREMR uses a rigid-lid surface approximation, the resulting pressure is equivalent to the displacement of a free surface. Thus, the code can be used to model free-surface flow wherever the local Froude number is 0.5 or less. STREMR uses a finite-volume scheme to discretize and solve the governing equations for primary flow, secondary flow, and turbulence energy and dissipation rate. The turbulence equations are taken from the standard k-Epsilon turbulence model, and the equation for secondary flow is developed herein. Appendices to this report summarize the principal equations, as well as the procedures used for their discrete solution.

  5. Topology, delocalization via average symmetry and the symplectic Anderson transition.

    PubMed

    Fu, Liang; Kane, C L

    2012-12-14

    A field theory of the Anderson transition in two-dimensional disordered systems with spin-orbit interactions and time-reversal symmetry is developed, in which the proliferation of vortexlike topological defects is essential for localization. The sign of the vortex fugacity determines the Z_2 topological class of the localized phase. There are two distinct fixed points with the same critical exponents, corresponding to transitions from a metal to an insulator and a topological insulator, respectively. The critical conductivity and correlation length exponent of these transitions are computed in an N = 1 - ε expansion in the number of replicas, where for small ε the critical points are perturbatively connected to the Kosterlitz-Thouless critical point. Delocalized states, which arise at the surface of weak topological insulators and topological crystalline insulators, occur because vortex proliferation is forbidden due to the presence of symmetries that are violated by disorder, but are restored by disorder averaging.

  6. Unbiased Average Age-Appropriate Atlases for Pediatric Studies

    PubMed Central

    Fonov, Vladimir; Evans, Alan C.; Botteron, Kelly; Almli, C. Robert; McKinstry, Robert C.; Collins, D. Louis

    2010-01-01

    Spatial normalization, registration, and segmentation techniques for Magnetic Resonance Imaging (MRI) often use a target or template volume to facilitate processing, take advantage of prior information, and define a common coordinate system for analysis. In the neuroimaging literature, the MNI305 Talairach-like coordinate system is often used as a standard template. However, when studying pediatric populations, variation from the adult brain makes the MNI305 suboptimal for processing brain images of children. Morphological changes occurring during development render the use of age-appropriate templates desirable to reduce potential errors and minimize bias during processing of pediatric data. This paper presents the methods used to create unbiased, age-appropriate MRI atlas templates for pediatric studies that represent the average anatomy for the age range of 4.5–18.5 years, while maintaining a high level of anatomical detail and contrast. The anatomical T1-weighted, T2-weighted, and proton density-weighted templates for specific developmentally important age ranges were created using data derived from the largest epidemiological, representative (healthy and normal) sample of the U.S. population, in which each subject was carefully screened for medical and psychiatric factors and characterized using established neuropsychological and behavioral assessments. Use of these age-specific templates was evaluated by computing average tissue maps for gray matter, white matter, and cerebrospinal fluid for each specific age range, and by conducting an exemplar voxel-wise deformation-based morphometry study using 66 young (4.5–6.9 years) participants to demonstrate the benefits of using the age-appropriate templates. The public availability of these atlases/templates will facilitate analysis of pediatric MRI data and enable comparison of results between studies in a common standardized space specific to pediatric research. PMID:20656036

  7. Assessing the total uncertainty on average sediment export measurements

    NASA Astrophysics Data System (ADS)

    Vanmaercke, Matthias

    2015-04-01

    Sediment export measurements from rivers are usually subject to large uncertainties. Although many case studies have focussed on specific aspects influencing these uncertainties (e.g. the sampling procedure, laboratory analyses, sampling frequency, load calculation method, duration of the measuring period), very few studies provide an integrated assessment of the total uncertainty resulting from these different sources of error. Moreover, the findings of these studies are commonly difficult to apply, as they require specific details on the applied measuring method that are often unreported. As a result, the overall uncertainty on reported average sediment export measurements remains difficult to assess. This study aims to address this gap. Based on Monte Carlo simulations on a large dataset of daily sediment export measurements (> 100 catchments and > 2000 catchment-years of observations), the most dominant sources of uncertainty are explored. Results show that uncertainties on average sediment-export values (over multiple years) are mainly controlled by the sampling frequency and the duration of the measuring period. Measuring errors on individual sediment concentration or runoff discharge samples have an overall smaller influence. Depending on the sampling strategy used (e.g. uniform or flow-proportional), the load calculation procedure can also cause significant biases in the obtained results. A simple method is proposed that allows estimating the total uncertainty on sediment export values, based on commonly reported information (e.g. the catchment area, measuring period, number of samples taken, load calculation procedure used). An application of this method shows that total uncertainties on annual sediment export measurements can easily exceed 200%. It is shown that this has important consequences for the calibration and validation of sediment export models.
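
    The dominant role of sampling frequency can be reproduced with a toy Monte Carlo in the spirit of the approach described: subsample a daily record at increasing intervals and look at the spread of the resulting averages. The synthetic, heavy-tailed record below stands in for real data.

```python
import numpy as np

rng = np.random.default_rng(3)
daily = rng.lognormal(mean=1.0, sigma=1.5, size=3650)   # synthetic 10-year record
true_avg = daily.mean()

for interval in (1, 7, 14, 30):                 # sample every `interval` days
    estimates = [daily[rng.integers(interval)::interval].mean()
                 for _ in range(2000)]           # Monte Carlo over start days
    rel = np.percentile(np.abs(np.array(estimates) / true_avg - 1.0), 95)
    print(f"every {interval:2d} days: 95th-percentile error = {100 * rel:.0f}%")
```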

  8. Estimation of the average visibility in central Europe

    NASA Astrophysics Data System (ADS)

    Horvath, Helmuth

    Visibility has been obtained from spectral extinction coefficients measured with the University of Vienna Telephotometer or size distributions determined with an Aerosol Spectrometer. By measuring the extinction coefficient in different directions, possible influences of local sources could be determined easily. A region undisturbed by local sources usually had a variation of extinction coefficient of less than 10% in different directions. Generally good visibility outside population centers in Europe is considered to be 40-50 km. These values have been found to be independent of the location in central Europe; thus they represent the average European "clean" air. On rare occasions (normally after a rapid change of air mass) the visibility can be 100-150 km. In towns, the visibility is lower by a factor of approximately 2. In comparison, the visibility in remote regions of North and South America is larger by a factor of 2-4. Evidently the lower visibility in Europe is caused by its higher population density. Since the majority of visibility-reducing particulate emissions come from small sources such as cars or heating, the emissions per unit area can be considered proportional to the population density. Using a simple box model and the visibility measured in central Europe and in Vienna, the difference in visibility inside and outside the town can be explained quantitatively. It is thus confirmed that the generally low visibility in central Europe is a consequence of the emissions connected with human activities, and that the low visibility (compared, e.g., to North or South America) in remote locations such as the Alps is caused by the average European pollution.
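
    The box-model argument can be made concrete with the Koschmieder relation V ≈ 3.912/b_ext and a steady-state ventilated box in which aerosol extinction scales with emissions per unit area (taken proportional to population density). The following sketch uses made-up parameter values chosen only to reproduce the orders of magnitude quoted above; it is not the paper's model:

        # Minimal box-model sketch; all parameter values are illustrative.

        def visibility_km(b_ext_per_km):
            # Koschmieder relation for a 2% contrast threshold.
            return 3.912 / b_ext_per_km

        def extinction_from_box(emission_per_area, mixing_height_km,
                                wind_speed_kmh, box_length_km, background=0.02):
            # Steady-state concentration in a ventilated box scales with
            # emission flux * residence time / mixing height; extinction is
            # taken proportional to concentration (arbitrary units).
            residence_h = box_length_km / wind_speed_kmh
            return background + emission_per_area * residence_h / mixing_height_km

        # Rural central Europe vs. a town with ~10x the emission density:
        b_rural = extinction_from_box(0.01, mixing_height_km=1.0,
                                      wind_speed_kmh=15.0, box_length_km=100.0)
        b_town = extinction_from_box(0.10, mixing_height_km=1.0,
                                     wind_speed_kmh=15.0, box_length_km=20.0)
        print(f"rural visibility ~ {visibility_km(b_rural):.0f} km")  # ~45 km
        print(f"town  visibility ~ {visibility_km(b_town):.0f} km")   # roughly a factor 2 lower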

  9. Molecular dynamics averaging of Xe chemical shifts in liquids

    NASA Astrophysics Data System (ADS)

    Jameson, Cynthia J.; Sears, Devin N.; Murad, Sohail

    2004-11-01

    The Xe nuclear magnetic resonance chemical shift differences that afford the discrimination between various biological environments are of current interest for biosensor applications and medical diagnostic purposes. In many such environments the Xe signal appears close to that in water. We calculate average Xe chemical shifts (relative to the free Xe atom) in solution in eleven liquids: water, isobutane, perfluoro-isobutane, n-butane, n-pentane, neopentane, perfluoroneopentane, n-hexane, n-octane, n-perfluorooctane, and perfluorooctyl bromide. The latter is a liquid used for intravenous Xe delivery. We calculate quantum mechanically the Xe shielding response in Xe-molecule van der Waals complexes, from which calculations we develop Xe (atomic site) interpolating functions that reproduce the ab initio Xe shielding response in the complex. By assuming additivity, these Xe-site shielding functions can be used to calculate the shielding for any configuration of such molecules around Xe. The averaging over configurations is done via molecular dynamics (MD). The simulations were carried out using a MD technique that one of us had developed previously for the simulation of Henry's constants of gases dissolved in liquids. It is based on separating a gaseous compartment in the MD system from the solvent using a semipermeable membrane that is permeable only to the gas molecules. We reproduce the experimental trends in the Xe chemical shifts in n-alkanes with increasing number of carbons and the large chemical shift difference between Xe in water and in perfluorooctyl bromide. We also reproduce the trend for a given solvent of decreasing Xe chemical shift with increasing temperature. We predict chemical shift differences between Xe in alkanes vs their perfluoro counterparts.
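
    A minimal sketch of the additive-averaging step, assuming a hypothetical fitted Xe-site shielding function and random stand-in configurations instead of a real MD trajectory (the functional form and all parameters are illustrative, not the paper's fitted functions):

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical Xe-site shielding response sigma(r) in ppm; in the paper
        # such interpolating functions are fitted to ab initio shielding values
        # for Xe-molecule complexes. This r^-6-like form is purely illustrative.
        def site_shielding(r_nm, a=-50.0, r0=0.25):
            return a / (r_nm / r0) ** 6

        def snapshot_shift(xe_pos, site_positions):
            """Additive approximation: sum per-site shielding contributions,
            then convert shielding to a chemical shift (delta = -sigma)."""
            r = np.linalg.norm(site_positions - xe_pos, axis=1)
            return -site_shielding(r).sum()

        # Stand-in for MD frames: random solvent-site configurations around a
        # Xe atom at the origin (a real calculation would read trajectory frames).
        shifts = []
        for _ in range(5000):
            sites = rng.uniform(-1.0, 1.0, size=(60, 3))
            sites = sites[np.linalg.norm(sites, axis=1) > 0.3]  # excluded volume
            shifts.append(snapshot_shift(np.zeros(3), sites))

        print(f"ensemble-averaged Xe chemical shift: {np.mean(shifts):.1f} ppm")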

  10. Gauge and averaging in gravitational self-force

    SciTech Connect

    Gralla, Samuel E.

    2011-10-15

    A difficulty with previous treatments of the gravitational self-force is that an explicit formula for the force is available only in a particular gauge (the Lorenz gauge), so that the force in other gauges must be found through a transformation law once the Lorenz-gauge force is known. For a class of gauges satisfying a 'parity condition' ensuring that the Hamiltonian center of mass of the particle is well-defined, I show that the gravitational self-force is always given by the angle average of the bare gravitational force. To derive this result I replace the computational strategy of previous work with a new approach, wherein the form of the force is first fixed up to a gauge-invariant piece by simple manipulations, and then that piece is determined by working in a gauge designed specifically to simplify the computation. This offers significant computational savings over the Lorenz gauge, since the Hadamard expansion is avoided entirely and the metric perturbation takes a very simple form. I also show that the rest mass of the particle does not evolve due to first-order self-force effects. Finally, I consider the 'mode sum regularization' scheme for computing the self-force in black hole background spacetimes, and use the angle-average form of the force to show that the same mode-by-mode subtraction may be performed in all parity-regular gauges. It appears plausible that suitably modified versions of the Regge-Wheeler and radiation gauges (convenient to Schwarzschild and Kerr, respectively) are in this class.

  11. Evaluation of soft x-ray average recombination coefficient and average charge for metallic impurities in beam-heated plasmas

    SciTech Connect

    Sesnic, S.S.; Bitter, M.; Hill, K.W.; Hiroe, S.; Hulse, R.; Shimada, M.; Stratton, B.; von Goeler, S.

    1986-05-01

    The soft x-ray continuum radiation in TFTR low density neutral beam discharges can be much lower than its theoretical value obtained by assuming a corona equilibrium. This reduced continuum radiation is caused by an ionization equilibrium shift toward lower states, which strongly changes the value of the average recombination coefficient of metallic impurities, γ̄, even for only slight changes in the average charge, Z̄. The primary agent for this shift is the charge exchange between the highly ionized impurity ions and the neutral hydrogen, rather than impurity transport, because the central density of the neutral hydrogen is strongly enhanced at lower plasma densities with intense beam injection. In the extreme case of low density, high neutral beam power TFTR operation (energetic ion mode) the reduction in γ̄ can be as much as one-half to two-thirds. We calculate the parametric dependence of γ̄ and Z̄ for Ti, Cr, Fe, and Ni impurities on neutral density (equivalent to beam power), electron temperature, and electron density. These values are obtained by using either a one-dimensional impurity transport code (MIST) or a zero-dimensional code with a finite particle confinement time. As an example, we show the variation of γ̄ and Z̄ in different TFTR discharges.

  12. The importance of ensemble averaging in enzyme kinetics.

    PubMed

    Masgrau, Laura; Truhlar, Donald G

    2015-02-17

    CONSPECTUS: The active site of an enzyme is surrounded by a fluctuating environment of protein and solvent conformational states, and a realistic calculation of chemical reaction rates and kinetic isotope effects of enzyme-catalyzed reactions must take account of this environmental diversity. Ensemble-averaged variational transition state theory with multidimensional tunneling (EA-VTST/MT) was developed as a way to carry out such calculations. This theory incorporates ensemble averaging, quantized vibrational energies, tunneling, and recrossing of transition state dividing surfaces in a systematic way. It has been applied successfully to a number of hydrogen-, proton-, and hydride-transfer reactions. The theory also exposes the set of effects that should be considered in reliable rate-constant calculations. We first review the basic theory and the steps in the calculation. A key role is played by the generalized free energy of activation profile, which is obtained by quantizing the classical potential of mean force as a function of a reaction coordinate, because the one-way flux through the transition state dividing surface can be written in terms of the generalized free energy of activation. A recrossing transmission coefficient accounts for the difference between the one-way flux through the chosen transition state dividing surface and the net flux, and a tunneling transmission coefficient converts classical motion along the reaction coordinate to quantum mechanical motion. The tunneling calculation is multidimensional, accounting for the change in vibrational frequencies along the tunneling path and shortening of the tunneling path with respect to the minimum energy path (MEP), as promoted by reaction-path curvature. The generalized free energy of activation and the transmission coefficients both involve averaging over an ensemble of reaction paths and conformations, and this includes the coupling of protein motions to the rearrangement of chemical bonds.
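
    In quasithermodynamic form, the rate constant combines the generalized free energy of activation with ensemble-averaged recrossing and tunneling transmission coefficients, roughly k = ⟨Γκ⟩ (k_B T/h) exp(−ΔG‡/RT). A schematic sketch with hypothetical numbers (not values from the Account):

        import numpy as np

        KB = 1.380649e-23   # J/K
        H = 6.62607015e-34  # J*s
        R = 8.314462618     # J/(mol*K)

        def eavtst_rate(T, dG_act_kJmol, kappa_tunneling, gamma_recrossing):
            """TST-like rate constant with ensemble-averaged transmission
            coefficients, in the spirit of EA-VTST/MT:
                k = <gamma * kappa> * (kB*T/h) * exp(-DeltaG_act / RT)."""
            prefactor = KB * T / H  # s^-1
            boltz = np.exp(-dG_act_kJmol * 1e3 / (R * T))
            return (np.mean(np.asarray(gamma_recrossing) * np.asarray(kappa_tunneling))
                    * prefactor * boltz)

        # Hypothetical ensemble of transition-state passes: per-member
        # recrossing (<= 1) and tunneling (>= 1 for H-transfer) factors.
        gamma = [0.85, 0.78, 0.92, 0.81]
        kappa = [3.1, 2.4, 4.0, 2.8]

        k = eavtst_rate(T=300.0, dG_act_kJmol=60.0,
                        kappa_tunneling=kappa, gamma_recrossing=gamma)
        print(f"k ~ {k:.2e} s^-1")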

  13. Potential of high-average-power solid state lasers

    SciTech Connect

    Emmett, J.L.; Krupke, W.F.; Sooy, W.R.

    1984-09-25

    We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels.

  14. S-index: Measuring significant, not average, citation performance

    NASA Astrophysics Data System (ADS)

    Antonoyiannakis, Manolis

    2009-03-01

    We recently [1] introduced the ``citation density curve'' (or cumulative impact factor curve) that captures the full citation performance of a journal: its size, impact factor, the maximum number of citations per paper, the relative size of the different-cited portions of the journal, etc. The citation density curve displays a universal behavior across journals. We exploit this universality to extract a simple metric (the ``S-index'') to characterize the citation impact of ``significant'' papers in each journal. In doing so, we go beyond the journal impact factor, which only measures the impact of the average paper. The conventional wisdom of ranking journals according to their impact factors is thus challenged. Having shown the utility and robustness of the S-index in comparing and ranking journals of different sizes but within the same field, we explore the concept further, going beyond a single field, and beyond journals. Can we compare different scientific fields, departments, or universities? And how should one generalize the citation density curve and the S-index to address these questions? [1] M. Antonoyiannakis and S. Mitra, ``Is PRL too large to have an `impact'?'', Editorial, Physical Review Letters, December 2008.

  15. Gains in accuracy from averaging ratings of abnormality

    NASA Astrophysics Data System (ADS)

    Swensson, Richard G.; King, Jill L.; Gur, David; Good, Walter F.

    1999-05-01

    Six radiologists used continuous scales to rate 529 chest-film cases for likelihood of five separate types of abnormalities (interstitial disease, nodules, pneumothorax, alveolar infiltrates and rib fractures) in each of six replicated readings, yielding 36 separate ratings of each case for the five abnormalities. Analyses for each type of abnormality estimated the relative gains in accuracy (area below the ROC curve) obtained by averaging the case-ratings across: (1) six independent replications by each reader (30% gain), (2) six different readers within each replication (39% gain) or (3) all 36 readings (58% gain). Although accuracy differed among both readers and abnormalities, ROC curves for the median ratings showed similar relative gains in accuracy. From a latent-variable model for these gains, we estimate that about 51% of a reader's total decision variance consisted of random (within-reader) errors that were uncorrelated between replications, another 14% came from that reader's consistent (but idiosyncratic) responses to different cases, and only about 35% could be attributed to systematic variations among the sampled cases that were consistent across different readers.
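
    The reported variance split (about 35% consistent case effects, 14% reader-specific responses, 51% replication noise) can be used to simulate how averaging improves accuracy. A sketch under a simple latent-variable model with illustrative effect sizes (not the study's data):

        import numpy as np

        rng = np.random.default_rng(7)

        def auc(scores_pos, scores_neg):
            """Area under the ROC curve via the Mann-Whitney statistic."""
            pos, neg = np.asarray(scores_pos), np.asarray(scores_neg)
            greater = (pos[:, None] > neg[None, :]).mean()
            ties = (pos[:, None] == neg[None, :]).mean()
            return greater + 0.5 * ties

        # Variance split suggested by the study: ~35% case, ~14% reader-
        # specific (idiosyncratic), ~51% within-reader replication noise.
        n_cases, n_readers, n_reps = 500, 6, 6
        truth = rng.random(n_cases) < 0.3  # abnormal cases
        case_signal = (np.where(truth, 1.2, 0.0)
                       + np.sqrt(0.35) * rng.standard_normal(n_cases))
        reader_idio = np.sqrt(0.14) * rng.standard_normal((n_readers, n_cases))
        noise = np.sqrt(0.51) * rng.standard_normal((n_reps, n_readers, n_cases))
        ratings = case_signal + reader_idio + noise  # (rep, reader, case)

        single = np.mean([auc(r[truth], r[~truth])
                          for r in ratings.reshape(-1, n_cases)])
        avg_all = ratings.mean(axis=(0, 1))
        print(f"single-reading AUC: {single:.3f}")
        print(f"36-reading average: {auc(avg_all[truth], avg_all[~truth]):.3f}")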

  16. Metal deep engraving with high average power femtosecond lasers

    NASA Astrophysics Data System (ADS)

    Faucon, M.; Mincuzzi, G.; Morin, F.; Hönninger, C.; Mottay, E.; Kling, R.

    2015-03-01

    Deep engraving of 3D textures is a very demanding process for the creation of master tools, e.g. molds, forming tools or coining dies. As these masters are used for reproduction of 3D patterns, the materials for the tools are typically hard and brittle and thus difficult to machine. The new generation of industrial femtosecond lasers provides both high-accuracy engraving results and high ablation rates at the same time. When operating at pulse energies of typically 40 μJ and repetition rates in the MHz range, the detrimental effect of heat accumulation has to be avoided. Therefore high scanning speeds are required to reduce the pulse overlap below 90%. As a consequence, scan speeds in the range of 25-50 m/s are needed, which is beyond the capability of galvo scanners. In this paper we present results using a combination of a polygon scanner with a high average power femtosecond laser and compare this to results with conventional scanners. The effects of pulse energy and scan speed of the head on geometrical accuracy are discussed. The quality of the obtained structures is analyzed by means of a 3D surface metrology microscope as well as SEM images.

  17. An averaging theorem for a perturbed KdV equation

    NASA Astrophysics Data System (ADS)

    Guan, Huang

    2013-06-01

    We consider a perturbed KdV equation: $\dot{u}+u_{xxx}-6uu_x=\epsilon f(x,u(\cdot)), \quad x\in\mathbb{T}, \quad \int_{\mathbb{T}} u\,dx=0$. For any periodic function u(x), let $I(u)=(I_1(u),I_2(u),\ldots)\in\mathbb{R}_+^{\infty}$ be the vector formed by the KdV integrals of motion, calculated for the potential u(x). Assuming that the perturbation εf(x, u(x)) defines a smoothing mapping u(x) ↦ f(x, u(x)) (e.g. it is a smooth function εf(x), independent of u), and that solutions of the perturbed equation satisfy some mild a priori assumptions, we prove that for solutions u(t, x) with typical initial data and for 0 ⩽ t ≲ ε⁻¹, the vector I(u(t)) may be well approximated by a solution of the averaged equation.

  18. A simple depth-averaged model for dry granular flow

    NASA Astrophysics Data System (ADS)

    Hung, Chi-Yao; Stark, Colin P.; Capart, Herve

    Granular flow over an erodible bed is an important phenomenon in both industrial and geophysical settings. Here we develop a depth-averaged theory for dry erosive flows using balance equations for mass, momentum and (crucially) kinetic energy. We assume a linearized GDR-Midi rheology for granular deformation and Coulomb friction along the sidewalls. The theory predicts the kinematic behavior of channelized flows under a variety of conditions, which we test in two sets of experiments: (1) a linear chute, where abrupt changes in tilt drive unsteady uniform flows; (2) a rotating drum, to explore steady non-uniform flow. The theoretical predictions match the experimental results well in all cases, without the need to tune parameters or invoke an ad hoc equation for entrainment at the base of the flow. Here we focus on the drum problem. A dimensionless rotation rate (related to Froude number) characterizes flow geometry and accounts not just for spin rate, drum radius and gravity, but also for grain size, wall friction and channel width. By incorporating Coriolis force the theory can treat behavior under centrifuge-induced enhanced gravity. We identify asymptotic flow regimes at low and high dimensionless rotation rates that exhibit distinct power-law scaling behaviors.

  19. Domain-averaged Fermi-hole analysis for solids.

    PubMed

    Baranov, Alexey I; Ponec, Robert; Kohout, Miroslav

    2012-12-01

    The domain-averaged Fermi hole (DAFH) orbitals provide a highly visual representation of bonding in terms of orbital-like functions with attributed occupation numbers. The approach has been successfully applied to many molecular systems, including those with non-trivial bonding patterns. This article reports for the first time the extension of DAFH analysis to the realm of extended periodic systems. A simple analytical model of the DAFH orbital for single-band solids is introduced, which allows one to rationalize typical features that DAFH orbitals for extended systems may possess. In particular, the connection between Wannier and DAFH orbitals has been analyzed. The analysis of DAFH orbitals on the basis of DFT calculations is applied to hydrogen lattices of different dimensions as well as to the solids diamond, graphite, Na, Cu and NaCl. In the case of hydrogen lattices, remarkable similarity is found between the DAFH orbitals evaluated with both the analytical approach and DFT. In the case of the selected ionic and covalent solids the DAFH orbitals deliver bonding descriptions which are compatible with the classical orbital interpretation. For metals the DAFH analysis shows the essentially multicenter nature of bonding.

  20. THEORY OF SINGLE-MOLECULE SPECTROSCOPY: Beyond the Ensemble Average

    NASA Astrophysics Data System (ADS)

    Barkai, Eli; Jung, Younjoon; Silbey, Robert

    2004-01-01

    Single-molecule spectroscopy (SMS) is a powerful experimental technique used to investigate a wide range of physical, chemical, and biophysical phenomena. The merit of SMS is that it does not require ensemble averaging, which is found in standard spectroscopic techniques. Thus SMS yields insight into complex fluctuation phenomena that cannot be observed using standard ensemble techniques. We investigate theoretical aspects of SMS, emphasizing (a) dynamical fluctuations (e.g., spectral diffusion, photon-counting statistics, antibunching, quantum jumps, triplet blinking, and nonergodic blinking) and (b) single-molecule fluctuations in disordered systems, specifically the distribution of line shapes of single molecules in low-temperature glasses. Special emphasis is given to single-molecule systems that reveal surprising connections to Lévy statistics (i.e., blinking of quantum dots and single molecules in glasses). We compare theory with experiment and mention open problems. Our work demonstrates that the theory of SMS is a complementary field of research for describing optical spectroscopy in the condensed phase.

  1. Dosimetry in Mammography: Average Glandular Dose Based on Homogeneous Phantom

    SciTech Connect

    Benevides, Luis A.; Hintenlang, David E.

    2011-05-05

    The objective of this study was to demonstrate that a clinical dosimetry protocol that utilizes a dosimetric breast phantom series based on population anthropometric measurements can reliably predict the average glandular dose (AGD) imparted to the patient during a routine screening mammogram. AGD was calculated using entrance skin exposure and dose conversion factors based on fibroglandular content, compressed breast thickness, mammography unit parameters and modifying parameters for homogeneous phantom (phantom factor), compressed breast lateral dimensions (volume factor) and anatomical features (anatomical factor). The patient fibroglandular content was evaluated using a calibrated modified breast tissue equivalent homogeneous phantom series (BRTES-MOD) designed from anthropomorphic measurements of a screening mammography population and whose elemental composition was referenced to International Commission on Radiation Units and Measurements Report 44 and 46 tissues. The patient fibroglandular content, compressed breast thickness along with unit parameters and spectrum half-value layer were used to derive the currently used dose conversion factor (DgN). The study showed that the use of a homogeneous phantom, patient compressed breast lateral dimensions and patient anatomical features can affect AGD by as much as 12%, 3% and 1%, respectively. The protocol was found to be superior to existing methodologies. The clinical dosimetry protocol developed in this study can reliably predict the AGD imparted to an individual patient during a routine screening mammogram.

  2. Dosimetry in Mammography: Average Glandular Dose Based on Homogeneous Phantom

    NASA Astrophysics Data System (ADS)

    Benevides, Luis A.; Hintenlang, David E.

    2011-05-01

    The objective of this study was to demonstrate that a clinical dosimetry protocol that utilizes a dosimetric breast phantom series based on population anthropometric measurements can reliably predict the average glandular dose (AGD) imparted to the patient during a routine screening mammogram. AGD was calculated using entrance skin exposure and dose conversion factors based on fibroglandular content, compressed breast thickness, mammography unit parameters and modifying parameters for homogeneous phantom (phantom factor), compressed breast lateral dimensions (volume factor) and anatomical features (anatomical factor). The patient fibroglandular content was evaluated using a calibrated modified breast tissue equivalent homogeneous phantom series (BRTES-MOD) designed from anthropomorphic measurements of a screening mammography population and whose elemental composition was referenced to International Commission on Radiation Units and Measurements Report 44 and 46 tissues. The patient fibroglandular content, compressed breast thickness along with unit parameters and spectrum half-value layer were used to derive the currently used dose conversion factor (DgN). The study showed that the use of a homogeneous phantom, patient compressed breast lateral dimensions and patient anatomical features can affect AGD by as much as 12%, 3% and 1%, respectively. The protocol was found to be superior to existing methodologies. The clinical dosimetry protocol developed in this study can reliably predict the AGD imparted to an individual patient during a routine screening mammogram.
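
    In the protocol as described, AGD is a product of the entrance exposure, a DgN-type conversion factor, and the three multiplicative modifying factors. A minimal sketch with illustrative numbers only (the factor values below are loosely pegged to the 12%, 3% and 1% maxima quoted in the abstract, and the helper name is ours):

        def average_glandular_dose(entrance_kerma_mGy, dgn_mGy_per_mGy,
                                   phantom_factor=1.0, volume_factor=1.0,
                                   anatomical_factor=1.0):
            """AGD from entrance skin kerma and a DgN-type conversion factor,
            with the protocol's multiplicative modifying factors."""
            return (entrance_kerma_mGy * dgn_mGy_per_mGy
                    * phantom_factor * volume_factor * anatomical_factor)

        # Illustrative numbers: 7 mGy entrance kerma, DgN = 0.21, and
        # modifying factors near the quoted maxima.
        agd = average_glandular_dose(7.0, 0.21, phantom_factor=1.12,
                                     volume_factor=1.03, anatomical_factor=1.01)
        print(f"AGD ~ {agd:.2f} mGy")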

  3. Resolution improvement by 3D particle averaging in localization microscopy

    PubMed Central

    Broeken, Jordi; Johnson, Hannah; Lidke, Diane S.; Liu, Sheng; Nieuwenhuizen, Robert P.J.; Stallinga, Sjoerd; Lidke, Keith A.; Rieger, Bernd

    2015-01-01

    Inspired by recent developments in localization microscopy that applied averaging of identical particles in 2D for increasing the resolution even further, we discuss considerations for alignment (registration) methods for particles in general and for 3D in particular. We detail that traditional techniques for particle registration from cryo electron microscopy based on cross-correlation are not suitable, as the underlying image formation process is fundamentally different. We argue that only localizations, i.e. a set of coordinates with associated uncertainties, are recorded and not a continuous intensity distribution. We present a method that takes this fact into account and that is inspired by the field of statistical pattern recognition. In particular we suggest using an adapted version of the Bhattacharyya distance as a merit function for registration. We evaluate the method in simulations and demonstrate it on three-dimensional super-resolution data of Alexa 647-labelled Nup133 protein in the nuclear pore complex of HeLa cells. From the simulations we find suggestions that for successful registration the localization uncertainty must be smaller than the distance between labeling sites on a particle. These suggestions are supported by theoretical considerations concerning the attainable resolution in localization microscopy and its scaling behavior as a function of labeling density and localization precision. PMID:25866640
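
    A sketch of the idea: treat each localization as an isotropic Gaussian with its uncertainty as standard deviation, and score a candidate alignment by the mean pairwise Bhattacharyya coefficient between the two sets. This is a simplified stand-in for the authors' adapted merit function; the data and the 1D translation search are illustrative:

        import numpy as np

        def gauss_bhattacharyya(mu1, s1, mu2, s2):
            """Bhattacharyya coefficient between two isotropic 3D Gaussians
            with means mu and standard deviations s (localization errors)."""
            var = 0.5 * (s1 ** 2 + s2 ** 2)
            d2 = np.sum((mu1 - mu2) ** 2)
            db = d2 / (8.0 * var) + 1.5 * np.log(var / (s1 * s2))
            return np.exp(-db)

        def merit(locs_a, sig_a, locs_b, sig_b):
            """Registration merit: mean pairwise Bhattacharyya coefficient
            (higher = better overlap of the two localization sets)."""
            return np.mean([gauss_bhattacharyya(a, sa, b, sb)
                            for a, sa in zip(locs_a, sig_a)
                            for b, sb in zip(locs_b, sig_b)])

        rng = np.random.default_rng(1)
        particle = rng.uniform(0, 100, size=(40, 3))            # labeling sites (nm)
        locs_a = particle + 5.0 * rng.standard_normal((40, 3))  # two noisy observations
        locs_b = particle + 5.0 * rng.standard_normal((40, 3)) + [12.0, 0.0, 0.0]
        sig = np.full(40, 5.0)

        # Brute-force 1D translation search; a real registration would
        # optimize over full 3D rotations and translations.
        shifts = np.linspace(0, 20, 41)
        best = max(shifts, key=lambda dx: merit(locs_a, sig,
                                                locs_b - [dx, 0.0, 0.0], sig))
        print(f"recovered x-shift ~ {best:.1f} nm (true 12.0)")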

  4. Multifractal detrended moving average analysis for texture representation

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Wang, Lin; Zou, Rui-Biao

    2014-09-01

    Multifractal detrended moving average analysis (MF-DMA) has recently been employed to detect long-range correlation and multifractal nature in stationary and non-stationary time series. In this paper, we propose a method to calculate the generalized Hurst exponent for each pixel of a surface based on MF-DMA, which we call the MF-DMA-based local generalized Hurst exponent. These exponents form a matrix, which we denote by LHq. These exponents are similar to the multifractal detrended fluctuation analysis (MF-DFA)-based local generalized Hurst exponent. The performance of the calculated LHq is tested for two synthetic multifractal surfaces with analytical solutions and ten randomly chosen natural textures under three cases, namely, backward (θ = 0), centered (θ = 0.5), and forward (θ = 1), with different q values and different sub-image sizes. Two sets of comparison segmentation experiments between the three cases of the MF-DMA-based LHq and the MF-DFA-based LHq show that the MF-DMA-based LHq is superior to the MF-DFA-based LHq. In addition, the backward MF-DMA algorithm is more efficient than the centered and forward algorithms. An interesting finding is that the LHq with q < 0 outperforms the LHq with q > 0 in characterizing the image features of natural textures for both the MF-DMA and MF-DFA algorithms.
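
    The fluctuation-versus-scale idea behind MF-DMA is easiest to see in 1D. Below is a minimal backward (θ = 0) DMA estimator of the ordinary Hurst exponent; the paper generalizes this to 2D, q-dependent, pixel-local exponents, which this sketch does not attempt:

        import numpy as np

        def dma_hurst(series, windows=(4, 8, 16, 32, 64)):
            """Backward (theta = 0) detrended moving average estimate of the
            Hurst exponent of a 1D series."""
            y = np.cumsum(series - np.mean(series))  # profile
            logF, logn = [], []
            for n in windows:
                # Backward moving average: mean of current and n-1 prior points.
                kernel = np.ones(n) / n
                ma = np.convolve(y, kernel, mode="full")[:len(y)]
                resid = (y - ma)[n - 1:]  # skip the start-up transient
                logF.append(0.5 * np.log(np.mean(resid ** 2)))
                logn.append(np.log(n))
            h, _ = np.polyfit(logn, logF, 1)  # slope of log F(n) vs log n
            return h

        rng = np.random.default_rng(2)
        print(f"white noise: H ~ {dma_hurst(rng.standard_normal(10000)):.2f}")  # ~0.5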

  5. Face Averages Enhance User Recognition for Smartphone Security

    PubMed Central

    Robertson, David J.; Kramer, Robin S. S.; Burton, A. Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual’s ‘face-average’ – a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user’s face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings. PMID:25807251

  6. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    SciTech Connect

    Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
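
    For Gaussian BMA kernels with a common variance, the EM iteration alternates between member responsibilities and weight/variance updates. A self-contained sketch on synthetic data (a DREAM-style MCMC sampler would instead draw the weights and variances from their posterior; everything below is illustrative, not the paper's setup):

        import numpy as np

        rng = np.random.default_rng(3)

        def bma_em(forecasts, obs, n_iter=200):
            """Estimate BMA weights and a common variance for Gaussian kernels
            centered on each member's forecast, via EM."""
            n_members, n_times = forecasts.shape
            w = np.full(n_members, 1.0 / n_members)
            var = np.var(obs - forecasts)
            for _ in range(n_iter):
                # E-step: responsibility of member k for observation t.
                dens = (np.exp(-0.5 * (obs - forecasts) ** 2 / var)
                        / np.sqrt(2 * np.pi * var))
                z = w[:, None] * dens
                z /= z.sum(axis=0, keepdims=True)
                # M-step: update weights and the common kernel variance.
                w = z.mean(axis=1)
                var = np.sum(z * (obs - forecasts) ** 2) / n_times
            return w, var

        # Synthetic 3-member temperature ensemble: member 0 is most skillful.
        truth = 15 + 5 * np.sin(np.linspace(0, 6, 300))
        forecasts = np.vstack([truth + rng.normal(0, s, 300) for s in (0.8, 1.5, 3.0)])
        obs = truth + rng.normal(0, 0.5, 300)

        w, var = bma_em(forecasts, obs)
        print("BMA weights:", np.round(w, 2), " variance:", round(float(var), 2))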

  7. Coherent and stochastic averaging in solid-state NMR

    NASA Astrophysics Data System (ADS)

    Nevzorov, Alexander A.

    2014-12-01

    A new approach for calculating solid-state NMR lineshapes of uniaxially rotating membrane proteins under the magic-angle spinning conditions is presented. The use of stochastic Liouville equation (SLE) allows one to account for both coherent sample rotation and stochastic motional averaging of the spherical dipolar powder patterns by uniaxial diffusion of the spin-bearing molecules. The method is illustrated via simulations of the dipolar powder patterns of rigid samples under the MAS conditions, as well as the recent method of rotational alignment in the presence of both MAS and rotational diffusion under the conditions of dipolar recoupling. It has been found that it is computationally more advantageous to employ direct integration over a spherical grid rather than to use a full angular basis set for the SLE solution. Accuracy estimates for the bond angles measured from the recoupled amide 1H-15N dipolar powder patterns have been obtained at various rotational diffusion coefficients. It has been shown that the rotational alignment method is applicable to membrane proteins approximated as cylinders with radii of approximately 20 Å, for which uniaxial rotational diffusion within the bilayer is sufficiently fast and exceeds the rate 2 × 105 s-1.

  8. The average size and temperature profile of quasar accretion disks

    SciTech Connect

    Jiménez-Vicente, J.; Mediavilla, E.; Muñoz, J. A.; Motta, V.; Falco, E.

    2014-03-01

    We use multi-wavelength microlensing measurements of a sample of 10 image pairs from 8 lensed quasars to study the structure of their accretion disks. By using spectroscopy or narrowband photometry, we have been able to remove contamination from the weakly microlensed broad emission lines, extinction, and any uncertainties in the large-scale macro magnification of the lens model. We determine a maximum likelihood estimate for the exponent of the size versus wavelength scaling (r_s ∝ λ^p, corresponding to a disk temperature profile of T ∝ r^(-1/p)) of p = 0.75 (+0.2/-0.2) and a Bayesian estimate of p = 0.8 ± 0.2, which are significantly smaller than the prediction of thin disk theory (p = 4/3). We have also obtained a maximum likelihood estimate for the average quasar accretion disk size of r_s = 4.5 (+1.5/-1.2) lt-day at a rest frame wavelength of λ = 1026 Å for microlenses with a mean mass of M = 1 M_☉, in agreement with previous results, and larger than expected from thin disk theory.
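
    The scaling exponent p can be illustrated with a simple log-log fit of disk size against wavelength; the sizes below are invented for the example and are not the paper's measurements:

        import numpy as np

        # Illustrative microlensing-style size estimates: r_s (light-days)
        # at several rest-frame wavelengths (angstroms); values are made up.
        wavelengths = np.array([1026.0, 1500.0, 2300.0, 3000.0])
        r_s = np.array([4.5, 6.1, 8.9, 10.8])

        # Fit r_s = A * lambda^p  =>  log r_s = log A + p * log lambda.
        p, logA = np.polyfit(np.log(wavelengths), np.log(r_s), 1)
        print(f"p ~ {p:.2f} (thin-disk prediction: 4/3 ~ 1.33)")
        # The corresponding temperature profile is T(r) ~ r^(-1/p).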

  9. Declining average daily census. Part 2: Possible solutions.

    PubMed

    Weil, T P

    1986-01-01

    Several possible solutions are available to hospitals experiencing a declining average daily census, including: Closure of some U.S. hospitals; Joint ventures between physicians and hospitals; Development of integrated and coordinated medical-fiscal-management information systems; Improvements in the hospital's short-term marketing strategy; Reduction of the facility's internal operation expenses; Vertical more than horizontal diversification to develop a multilevel (acute through home care) regional health care system with an alternative health care payment system that is a joint venture with the medical staff(s); Acquisition or management by a not-for-profit or investor-owned multihospital system (emphasis on horizontal versus vertical integration). Many reasons exist for an institution to choose the solution of developing a regional multilevel health care system rather than being part of a large, geographically scattered, multihospital system. Geographic proximity, lenders' preferences, service integration, management recruitment, and local remedies to a declining census all favor the regional system. More answers lie in emphasizing the basics of health care regionalization and focusing on vertical integration, including a prepayment plan, rather than stressing large multihospital systems with institutions in several states or selling out to the investor-owned groups.

  10. Reach-averaged sediment routing model of a canyon river

    USGS Publications Warehouse

    Wiele, S.M.; Wilcock, P.R.; Grams, P.E.

    2007-01-01

    Spatial complexity in channel geometry indicates that accurate prediction of sediment transport requires modeling in at least two dimensions. However, a one-dimensional model may be the only practical or possible alternative, especially for longer river reaches of practical concern in river management or landscape modeling. We have developed a one-dimensional model of the Colorado River through upper Grand Canyon that addresses this problem by reach averaging the channel properties and predicting changes in sand storage using separate source and sink functions coupled to the sand routing model. The model incorporates results from the application of a two-dimensional model of flow, sand transport, and bed evolution, and a new algorithm for setting the near-bed sand boundary condition for sand transported over an exposed bouldery bed. Model predictions were compared to measurements of sand discharge during intermittent tributary inputs and varying discharges controlled by dam releases. The model predictions generally agree well with the timing and magnitude of measured sand discharges but tend to overpredict sand discharge during the early stages of a high release designed to redistribute sand to higher-elevation deposits.

  11. Average annual precipitation classes to characterize watersheds in North Carolina

    USGS Publications Warehouse

    Terziotti, Silvia; Eimers, Jo Leslie

    2001-01-01

    This web site contains the Federal Geographic Data Committee-compliant metadata (documentation) for digital data produced for the North Carolina, Department of Environment and Natural Resources, Public Water Supply Section, Source Water Assessment Program. The metadata are for 11 individual Geographic Information System data sets. An overlay and indexing method was used with the data to derive a rating for unsaturated zone and watershed characteristics for use by the State of North Carolina in assessing more than 11,000 public water-supply wells and approximately 245 public surface-water intakes for susceptibility to contamination. For ground-water supplies, the digital data sets used in the assessment included unsaturated zone rating, vertical series hydraulic conductance, land-surface slope, and land cover. For assessment of public surface-water intakes, the data sets included watershed characteristics rating, average annual precipitation, land-surface slope, land cover, and ground-water contribution. Documentation for the land-use data set applies to both the unsaturated zone and watershed characteristics ratings. Documentation for the estimated depth-to-water map used in the calculation of the vertical series hydraulic conductance also is included.

  12. Urban noise functional stratification for estimating average annual sound level.

    PubMed

    Rey Gozalo, Guillermo; Barrigón Morillas, Juan Miguel; Prieto Gajardo, Carlos

    2015-06-01

    Road traffic noise causes many health problems and the deterioration of the quality of urban life; thus, adequate spatial and temporal noise assessment methods are required. Different methods have been proposed for the spatial evaluation of noise in cities, including the categorization method. Until now, this method has only been applied for the study of spatial variability with measurements taken over a week. In this work, continuous measurements of 1 year carried out in 21 different locations in Madrid (Spain), which has more than three million inhabitants, were analyzed. The annual average sound levels and the temporal variability were studied in the proposed categories. The results show that the three proposed categories highlight the spatial noise stratification of the studied city in each period of the day (day, evening, and night) and in the overall indicators (L_Adn, L_Aden, and L_A24). Also, significant differences between the diurnal and nocturnal sound levels show functional stratification in these categories. Therefore, this functional stratification offers advantages from both spatial and temporal perspectives by reducing the sampling points and the measurement time. PMID:26093410
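
    For reference, overall indicators of this kind follow standard energy-averaging definitions, with +5 dB and +10 dB penalties on the evening and night periods for the day-evening-night level. A short sketch (the input levels are hypothetical):

        import numpy as np

        def l_den(l_day, l_evening, l_night):
            """Day-evening-night level with the standard +5 dB evening and
            +10 dB night penalties (12 h / 4 h / 8 h periods)."""
            return 10 * np.log10((12 * 10 ** (l_day / 10)
                                  + 4 * 10 ** ((l_evening + 5) / 10)
                                  + 8 * 10 ** ((l_night + 10) / 10)) / 24)

        def l_24(l_day, l_evening, l_night):
            """Unpenalized 24-hour energy-average level."""
            return 10 * np.log10((12 * 10 ** (l_day / 10)
                                  + 4 * 10 ** (l_evening / 10)
                                  + 8 * 10 ** (l_night / 10)) / 24)

        # Hypothetical annual averages for one street category (dBA):
        print(f"Lden = {l_den(68.0, 66.0, 60.0):.1f} dBA, "
              f"LA24 = {l_24(68.0, 66.0, 60.0):.1f} dBA")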

  13. Average snowcover density values in Eastern Alps mountain

    NASA Astrophysics Data System (ADS)

    Valt, M.; Moro, D.

    2009-04-01

    The Italian Avalanche Warning Services monitor the snow cover characteristics through networks evenly distributed all over the alpine chain. Measurements of snow stratigraphy and density are performed very frequently, with sampling rates of 1-2 times per week. Snow cover density values are used to dimension building roofs as well as to design avalanche barriers. Based on the measured snow densities, the Electricity Board can predict the amount of water resources deriving from snow melt in high-relief drainage basins. In this work it was possible to compute characteristic density values of the snow cover in the Eastern Alps using the information contained in the database from the ARPA (Agenzia Regionale Protezione Ambiente)-Centro Valanghe di Arabba and the Ufficio Valanghe-Udine. Among other things, this database includes 15 years of stratigraphic measurements. More than 6,000 snow stratigraphic logs were analysed in order to derive typical values as a function of geographical area, altitude, exposure, snow cover thickness and season. Computed values were compared to those established by the current Italian laws. Finally, the correlations between the seasonal variations of the average snow density and the variations related to the snowfall rate in the period 1994-2008 in the Eastern Alps mountain range were identified and evaluated.

  14. Plate with a hole obeys the averaged null energy condition

    SciTech Connect

    Graham, Noah; Olum, Ken D.

    2005-07-15

    The negative energy density of Casimir systems appears to violate general relativity energy conditions. However, one cannot test the averaged null energy condition (ANEC) using standard calculations for perfectly reflecting plates, because the null geodesic would have to pass through the plates, where the calculation breaks down. To avoid this problem, we compute the contribution to ANEC for a geodesic that passes through a hole in a single plate. We consider both Dirichlet and Neumann boundary conditions in two and three space dimensions. We use a Babinet's principle argument to reduce the problem to a complementary finite disk correction to the perfect mirror result, which we then compute using scattering theory in elliptical and spheroidal coordinates. In the Dirichlet case, we find that the positive correction due to the hole overwhelms the negative contribution of the infinite plate. In the Neumann case, where the infinite plate gives a positive contribution, the hole contribution is smaller in magnitude, so again ANEC is obeyed. These results can be extended to the case of two plates in the limits of large and small hole radii. This system thus provides another example of a situation where ANEC turns out to be obeyed when one might expect it to be violated.

  15. Studying average electron drift velocity in pHEMT structures

    NASA Astrophysics Data System (ADS)

    Borisov, A. A.; Zhuravlev, K. S.; Zyrin, S. S.; Lapin, V. G.; Lukashin, V. M.; Makovetskaya, A. A.; Novoselets, V. I.; Pashkovskii, A. B.; Toropov, A. I.; Ursulyak, N. D.; Shcherbakov, S. V.

    2016-08-01

    Small-signal characteristics of pseudomorphic high-electron-mobility transistors based on donor-acceptor doped heterostructures (DA-pHEMTs) are compared to those of analogous transistors (pHEMTs) based on traditional heterostructures without acceptor doping. It is established that DA-pHEMTs, under otherwise equal conditions, exhibit (despite lower values of the low-field mobility of electrons) a much higher gain compared to that of usual pHEMTs. This behavior is related to the fact that the average electron drift velocity under the gate in DA-pHEMTs is significantly (1.4-1.6 times) higher than that in pHEMTs. This increase in the electron drift velocity is explained by two main factors of comparable influence: (i) decreasing role of transverse spatial transfer, which is caused by enhanced localization of hot electrons in the channel, and (ii) reduced scattering of hot electrons, which is caused by their strong confinement (dimensional quantization) in the potential well of DA-pHEMT heterostructures.

  16. Understanding Stokes forces in the wave-averaged equations

    NASA Astrophysics Data System (ADS)

    Suzuki, Nobuhiro; Fox-Kemper, Baylor

    2016-05-01

    The wave-averaged, or Craik-Leibovich, equations describe the dynamics of upper ocean flow interacting with nonbreaking, not steep, surface gravity waves. This paper formulates the wave effects in these equations in terms of three contributions to momentum: Stokes advection, Stokes Coriolis force, and Stokes shear force. Each contribution scales with a distinctive parameter. Moreover, these contributions affect the turbulence energetics differently from each other such that the classification of instabilities is possible accordingly. Stokes advection transfers energy between turbulence and Eulerian mean-flow kinetic energy, and its form also parallels the advection of tracers such as salinity, buoyancy, and potential vorticity. Stokes shear force transfers energy between turbulence and surface waves. The Stokes Coriolis force can also transfer energy between turbulence and waves, but this occurs only if the Stokes drift fluctuates. Furthermore, this formulation elucidates the unique nature of Stokes shear force and also allows direct comparison of Stokes shear force with buoyancy. As a result, the classic Langmuir instabilities of Craik and Leibovich, wave-balanced fronts and filaments, Stokes perturbations of symmetric and geostrophic instabilities, the wavy Ekman layer, and the wavy hydrostatic balance are framed in terms of intuitive physical balances.

  17. Spatially-Averaged Diffusivities for Pollutant Transport in Vegetated Flows

    NASA Astrophysics Data System (ADS)

    Huang, Jun; Zhang, Xiaofeng; Chua, Vivien P.

    2016-06-01

    Vegetation in wetlands can create complicated flow patterns and may provide many environmental benefits including water purification, flood protection and shoreline stabilization. The interaction between vegetation and flow has significant impacts on the transport of pollutants, nutrients and sediments. In this paper, we investigate pollutant transport in vegetated flows using the Delft3D-FLOW hydrodynamic software. The model simulates the transport of pollutants with the continuous release of a passive tracer at mid-depth and mid-width in the region where the flow is fully developed. The theoretical Gaussian plume profile is fitted to experimental data, and the lateral and vertical diffusivities are computed using the least squares method. In previous tracer studies conducted in the laboratory, the measurements were obtained at a single cross-section as experimental data is typically collected at one location. These diffusivities are then used to represent spatially-averaged values. With the numerical model, sensitivity analysis of lateral and vertical diffusivities along the longitudinal direction was performed at 8 cross-sections. Our results show that the lateral and vertical diffusivities increase with longitudinal distance from the injection point, due to the larger size of the dye cloud further downstream. A new method is proposed to compute diffusivities using a global minimum least squares method, which provides a more reliable estimate than the values obtained using the conventional method.
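
    A sketch of the fitting step, assuming a steady Gaussian plume from a continuous point source so that σ_y² = 2D_y·x/u and σ_z² = 2D_z·x/u; the geometry, noise level and parameter values are illustrative, and a "global minimum least squares" variant would simply pool several cross-sections into one fit:

        import numpy as np
        from scipy.optimize import curve_fit

        U, Q = 0.1, 1.0e-6  # mean velocity (m/s) and tracer release rate (illustrative)

        def plume(yz, Dy, Dz, x=2.0):
            """Steady Gaussian plume concentration at distance x downstream
            (sigma_y^2 = 2 Dy x / U, sigma_z^2 = 2 Dz x / U)."""
            y, z = yz
            return (Q / (4 * np.pi * x * np.sqrt(Dy * Dz))
                    * np.exp(-U * y ** 2 / (4 * Dy * x))
                    * np.exp(-U * z ** 2 / (4 * Dz * x)))

        # Synthetic "measured" concentrations on one cross-section at x = 2 m.
        rng = np.random.default_rng(5)
        y, z = np.meshgrid(np.linspace(-0.3, 0.3, 13), np.linspace(-0.2, 0.2, 9))
        c_true = plume((y.ravel(), z.ravel()), 2.0e-4, 0.5e-4)
        c_meas = c_true * (1 + 0.05 * rng.standard_normal(c_true.size))

        # Least squares fit of the lateral and vertical diffusivities.
        (Dy, Dz), _ = curve_fit(plume, (y.ravel(), z.ravel()), c_meas,
                                p0=(1e-4, 1e-4), bounds=(1e-7, 1e-2))
        print(f"fitted Dy = {Dy:.2e} m^2/s, Dz = {Dz:.2e} m^2/s")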

  18. Measurement of the average lifetime of hadrons containing bottom quarks

    SciTech Connect

    Klem, D.E.

    1986-06-01

    This thesis reports a measurement of the average lifetime of hadrons containing bottom quarks. It is based on data taken with the DELCO detector at the PEP e⁺e⁻ storage ring at a center-of-mass energy of 29 GeV. The decays of hadrons containing bottom quarks are tagged in hadronic events by the presence of electrons with a large component of momentum transverse to the event axis. Such electrons are identified in the DELCO detector by an atmospheric pressure Cherenkov counter assisted by a lead/scintillator electromagnetic shower counter. The lifetime measured is 1.17 psec, consistent with previous measurements. This measurement, in conjunction with a limit on the non-charm branching ratio in b-decay obtained by other experiments, can be used to constrain the magnitude of the V_cb element of the Kobayashi-Maskawa matrix to the range 0.042 (+0.005 or -0.004 (stat.), +0.004 or -0.002 (sys.)), where the errors reflect the uncertainty on τ_b only and not the uncertainties in the calculations which relate the b-lifetime to the element of the Kobayashi-Maskawa matrix.

  19. The Lake Wobegon Effect: Are All Cancer Patients above Average?

    PubMed Central

    Wolf, Jacqueline H; Wolf, Kevin S

    2013-01-01

    Context When elderly patients face a terminal illness such as lung cancer, most are unaware that what we term in this article “the Lake Wobegon effect” taints the treatment advice imparted to them by their oncologists. In framing treatment plans, cancer specialists tend to intimate that elderly patients are like the children living in Garrison Keillor's mythical Lake Wobegon: above average and thus likely to exceed expectations. In this article, we use the story of our mother's death from lung cancer to investigate the consequences of elderly people's inability to reconcile the grave reality of their illness with the overly optimistic predictions of their physicians. Methods In this narrative analysis, we examine the routine treatment of elderly, terminally ill cancer patients through alternating lenses: the lens of a historian of medicine who also teaches ethics to medical students and the lens of an actuary who is able to assess physicians’ claims for the outcome of medical treatments. Findings We recognize that a desire to instill hope in patients shapes physicians’ messages. We argue, however, that the automatic optimism conveyed to elderly, dying patients by cancer specialists prompts those patients to choose treatment that is ineffective and debilitating. Rather than primarily prolong life, treatments most notably diminish patients’ quality of life, weaken the ability of patients and their families to prepare for their deaths, and contribute significantly to the unsustainable costs of the U.S. health care system. Conclusions The case described in this article suggests how physicians can better help elderly, terminally ill patients make medical decisions that are less damaging to them and less costly to the health care system. PMID:24320166

  20. A General Framework for Multiphysics Modeling Based on Numerical Averaging

    NASA Astrophysics Data System (ADS)

    Lunati, I.; Tomin, P.

    2014-12-01

    In recent years, multiphysics (hybrid) modeling has attracted increasing attention as a tool to bridge the gap between pore-scale processes and a continuum description at the meter scale (laboratory scale). This approach is particularly appealing for complex nonlinear processes, such as multiphase flow, reactive transport, density-driven instabilities, and geomechanical coupling. We present a general framework that can be applied to all these classes of problems. The method is based on ideas from the Multiscale Finite-Volume method (MsFV), which was originally developed for Darcy-scale applications. Recently, we have reformulated MsFV starting with a local-global splitting, which allows us to retain the original degree of coupling for the local problems and to use spatiotemporal adaptive strategies. The new framework is based on the simple idea that different characteristic temporal scales are inherited from different spatial scales, and the global and the local problems are solved with different temporal resolutions. The global (coarse-scale) problem is constructed based on a numerical volume-averaging paradigm and a continuum (Darcy-scale) description is obtained by introducing additional simplifications (e.g., by assuming that pressure is the only independent variable at the coarse scale, we recover an extended Darcy's law). We demonstrate that it is possible to adaptively and dynamically couple the Darcy-scale and the pore-scale descriptions of multiphase flow in a single conceptual and computational framework. Pore-scale problems are solved only in the active front region where fluid distribution changes with time. In the rest of the domain, only a coarse description is employed. This framework can be applied to other important problems such as reactive transport and crack propagation. As it is based on a numerical upscaling paradigm, our method can be used to explore the limits of validity of macroscopic models and to illuminate the meaning of

  1. Averaging auditory evoked magnetoencephalographic and electroencephalographic responses: a critical discussion.

    PubMed

    König, Reinhard; Matysiak, Artur; Kordecki, Wojciech; Sielużycki, Cezary; Zacharias, Norman; Heil, Peter

    2015-03-01

    In the analysis of data from magnetoencephalography (MEG) and electroencephalography (EEG), it is common practice to arithmetically average event-related magnetic fields (ERFs) or event-related electric potentials (ERPs) across single trials and subsequently across subjects to obtain the so-called grand mean. Comparisons of grand means, e.g. between conditions, are then often performed by subtraction. These operations, and their statistical evaluation with parametric tests such as ANOVA, tacitly rely on the assumption that the data follow the additive model, have a normal distribution, and have a homogeneous variance. This may be true for single trials, but these conditions are rarely met when ERFs/ERPs are compared between subjects, meaning that the additive model is seldom the correct model for computing grand mean waveforms. Here, we summarize some of our recent work and present new evidence, from auditory-evoked MEG and EEG results, that the observed non-normal distributions and heteroscedasticity instead arise because ERFs/ERPs follow a mixed model with additive and multiplicative components. For peak amplitudes, such as the auditory M100 and N100, the multiplicative component dominates. These findings emphasize that the common practice of simply subtracting arithmetic means of auditory-evoked ERFs or ERPs is problematic without prior adequate transformation of the data. Application of the area sinus hyperbolicus (asinh) transform to data following the mixed model transforms them into the requested additive model with its normal distribution and homogeneous variance. We therefore advise checking the data for compliance with the additive model and using the asinh transform if required. PMID:25728181
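
    A minimal illustration of the advice: average on the asinh scale rather than arithmetically when the data have a dominant multiplicative component (the simulated amplitudes and variance choices below are ours, not the paper's):

        import numpy as np

        rng = np.random.default_rng(11)

        # Synthetic single-subject peak amplitudes following a mixed model
        # with a dominant multiplicative component (illustrative values).
        n_subjects, n_trials = 20, 100
        subject_gain = rng.lognormal(mean=0.0, sigma=0.5, size=n_subjects)
        amplitudes = (subject_gain[:, None]
                      * rng.lognormal(mean=2.0, sigma=0.4, size=(n_subjects, n_trials))
                      + rng.normal(0.0, 0.5, size=(n_subjects, n_trials)))  # additive noise

        subject_means = amplitudes.mean(axis=1)

        # Plain grand mean vs. grand mean on the asinh scale. asinh behaves
        # like log for large values but is defined at (and linear near) zero.
        grand_mean = subject_means.mean()
        asinh_grand_mean = np.sinh(np.arcsinh(subject_means).mean())

        print(f"arithmetic grand mean: {grand_mean:.2f}")
        print(f"asinh-domain grand mean (back-transformed): {asinh_grand_mean:.2f}")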

  2. 40 CFR 1054.710 - How do I average emission credits?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false How do I average emission credits... Averaging, Banking, and Trading for Certification § 1054.710 How do I average emission credits? (a) Averaging is the exchange of emission credits among your families. You may average emission credits...

  3. Microbes make average 2 nanometer diameter crystalline UO2 particles.

    NASA Astrophysics Data System (ADS)

    Suzuki, Y.; Kelly, S. D.; Kemner, K. M.; Banfield, J. F.

    2001-12-01

    It is well known that phylogenetically diverse groups of microorganisms are capable of catalyzing the reduction of highly soluble U(VI) to highly insoluble U(IV), which rapidly precipitates as uraninite (UO2). Because biological uraninite is highly insoluble, microbial uranyl reduction is being intensively studied as the basis for a cost-effective in-situ bioremediation strategy. Previous studies have described UO2 biomineralization products as amorphous or poorly crystalline. The objective of this study is to characterize the nanocrystalline uraninite in detail in order to determine the particle size, crystallinity, and size-related structural characteristics, and to examine the implications of these for reoxidation and transport. In this study, we obtained U-contaminated sediment and water from an inactive U mine and incubated them anaerobically with nutrients to stimulate reductive precipitation of UO2 by indigenous anaerobic bacteria, mainly Gram-positive spore-forming Desulfosporosinus and Clostridium spp. as revealed by RNA-based phylogenetic analysis. Desulfosporosinus sp. was isolated from the sediment and UO2 was precipitated by this isolate from a simple solution that contains only U and electron donors. We characterized UO2 formed in both of the experiments by high resolution-TEM (HRTEM) and X-ray absorption fine structure analysis (XAFS). The results from HRTEM showed that both the pure and the mixed cultures of microorganisms precipitated around 1.5-3 nm crystalline UO2 particles. Some particles as small as around 1 nm could be imaged. Rare particles around 10 nm in diameter were also present. Particles adhere to cells and form colloidal aggregates with low fractal dimension. In some cases, coarsening by oriented attachment on {111} is evident. Our preliminary results from XAFS for the incubated U-contaminated sample also indicated an average diameter of UO2 of 2 nm. In nanoparticles, the U-U distance obtained by XAFS was 0.373 nm, 0.012 nm

  4. Average glandular dose and phantom image quality in mammography

    NASA Astrophysics Data System (ADS)

    Oliveira, M.; Nogueira, M. S.; Guedes, E.; Andrade, M. C.; Peixoto, J. E.; Joana, G. S.; Castro, J. G.

    2007-09-01

    Doses in mammography should be maintained as low as possible without reducing the high image quality needed for early detection of the breast cancer. The breast is composed of tissues with very close composition and densities. It increases the difficulty to detect small changes in the normal anatomical structures which may be associated with breast cancer. To achieve the standards of definition and contrast for mammography, the quality and intensity of the X-ray beam, the breast positioning and compression, the film-screen system, and the film processing have to be in optimal operational conditions. This study sought to evaluate average glandular dose (AGD) and image quality on a standard phantom in 134 mammography units in the state of Minas Gerais, Brazil, between December 2004 and May 2006. AGDs were obtained by means of entrance kerma measured with TL LiF100 dosimeters on phantom surface. Phantom images were obtained with automatic exposure technique, fixed 28 kV and molybdenum anode-filter combination. The phantom used contained structures simulating tumoral masses, microcalcifications, fibers and low contrast areas. High-resolution metallic meshes to assess image definition and a stepwedge to measure image contrast index were also inserted in the phantom. The visualization of simulated structures, the mean optical density and the contrast index allowed to classify the phantom image quality in a seven-point scale. The results showed that 54.5% of the facilities did not achieve the minimum performance level for image quality. This is mainly due to insufficient film processing observed in 61.2% of the units. AGD varied from 0.41 to 2.73 mGy with a mean value of 1.32±0.44 mGy. In all optimal quality phantom images, AGDs were in this range. Additionally, in 7.3% of the mammography units, the AGD constraint of 2 mGy was exceeded. One may conclude that dose level to patient and image quality are not in conformity with regulations in most of the facilities. This

  5. Side chain conformational averaging in human dihydrofolate reductase.

    PubMed

    Tuttle, Lisa M; Dyson, H Jane; Wright, Peter E

    2014-02-25

    The three-dimensional structures of the dihydrofolate reductase enzymes from Escherichia coli (ecDHFR or ecE) and Homo sapiens (hDHFR or hE) are very similar, despite a rather low level of sequence identity. Whereas the active site loops of ecDHFR undergo major conformational rearrangements during progression through the reaction cycle, hDHFR remains fixed in a closed loop conformation in all of its catalytic intermediates. To elucidate the structural and dynamic differences between the human and E. coli enzymes, we conducted a comprehensive analysis of side chain flexibility and dynamics in complexes of hDHFR that represent intermediates in the major catalytic cycle. Nuclear magnetic resonance relaxation dispersion experiments show that, in marked contrast to the functionally important motions that feature prominently in the catalytic intermediates of ecDHFR, millisecond time scale fluctuations cannot be detected for hDHFR side chains. Ligand flux in hDHFR is thought to be mediated by conformational changes between a hinge-open state when the substrate/product-binding pocket is vacant and a hinge-closed state when this pocket is occupied. Comparison of X-ray structures of hinge-open and hinge-closed states shows that helix αF changes position by sliding between the two states. Analysis of χ1 rotamer populations derived from measurements of ³JCγCO and ³JCγN couplings indicates that many of the side chains that contact helix αF exhibit rotamer averaging that may facilitate the conformational change. The χ1 rotamer adopted by the Phe31 side chain depends upon whether the active site contains the substrate or product. In the holoenzyme (the binary complex of hDHFR with reduced nicotinamide adenine dinucleotide phosphate), a combination of hinge opening and a change in the Phe31 χ1 rotamer opens the active site to facilitate entry of the substrate. Overall, the data suggest that, unlike ecDHFR, hDHFR requires minimal backbone conformational rearrangement as
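
    The rotamer-population analysis mentioned above typically rests on a staggered-rotamer model in which the observed ³J coupling is a population-weighted average of limiting values for the trans and gauche χ1 states. The sketch below shows that arithmetic in its simplest two-state form; the limiting coupling constants are illustrative literature-style values, not the calibration used in the paper.

```python
# Two-state staggered-rotamer estimate: J_obs = p_t*J_trans + (1-p_t)*J_gauche.
# Limiting values below are hypothetical, for illustration only.
J_TRANS, J_GAUCHE = 3.6, 0.6   # assumed limiting 3J values, Hz

def trans_population(j_obs: float) -> float:
    """Invert the population-weighted average for the trans fraction."""
    p = (j_obs - J_GAUCHE) / (J_TRANS - J_GAUCHE)
    return min(max(p, 0.0), 1.0)   # clamp to the physical range [0, 1]

for j in (1.1, 2.1, 3.3):          # made-up measured couplings, Hz
    print(f"3J = {j:3.1f} Hz -> p(trans) ~ {trans_population(j):.2f}")
```

    Intermediate couplings (here ~2.1 Hz) indicate rotamer averaging rather than a single fixed χ1 state, which is the signature the abstract describes for side chains contacting helix αF.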

  6. Reconstruction of a time-averaged midposition CT scan for radiotherapy planning of lung cancer patients using deformable registration

    SciTech Connect

    Wolthaus, J. W. H.; Sonke, J.-J.; Herk, M. van; Damen, E. M. F.

    2008-09-15

    for the clearly visible features (e.g., tumor and diaphragm). The shape of the tumor, with respect to that of the BH CT scan, was better represented by the MidP reconstructions than by any of the 4D CT frames (including MidV); the reduction in shape differences was 66%. The MidP scans contained about one-third of the noise of the individual 4D CT frames. Conclusions: We implemented an accurate method to estimate the motion of structures in a 4D CT scan. Subsequently, a novel method was introduced to create a midposition (MidP) CT scan, a time-weighted average of the anatomy, for treatment planning with reduced noise and artifacts. Tumor shape and position in the MidP CT scan represent those of the BH CT scan better than the MidV CT scan does, and the MidP scan was therefore found to be appropriate for treatment planning.
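
    A minimal sketch of the MidP idea, as I read it from the abstract: deformation vector fields (DVFs) map a reference phase to each breathing phase; their time average defines the mean (mid) position, and warping every phase back by its offset from that mean before averaging yields a low-noise image of the time-averaged anatomy. The toy below uses 1-D arrays and random intensities purely to show the mechanics and the ~1/sqrt(n) noise reduction; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_phases, n_vox = 10, 64
# Toy DVFs: a sinusoidal breathing displacement per phase (same for all voxels).
dvfs = np.sin(np.linspace(0, 2 * np.pi, n_phases))[:, None] * np.ones((1, n_vox))

mean_dvf = dvfs.mean(axis=0)   # time-weighted average position (MidP)
offsets = dvfs - mean_dvf      # each phase's displacement from MidP

# Warp each toy frame to midposition by resampling at shifted coordinates,
# then average; uncorrelated noise drops roughly as 1/sqrt(n_phases).
coords = np.arange(n_vox, dtype=float)
frames = rng.normal(0.0, 1.0, (n_phases, n_vox))   # stand-in intensities
warped = np.stack([np.interp(coords + off, coords, frame)
                   for off, frame in zip(offsets, frames)])
midp_image = warped.mean(axis=0)
print("per-frame noise ~1.0, MidP noise ~", midp_image.std().round(2))
```

    With 10 phases the residual noise is about 0.3 of a single frame, consistent with the roughly one-third noise reported above.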

  7. 40 CFR 60.1265 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? 60.1265 Section 60.1265 Protection of Environment... Continuous Emission Monitoring § 60.1265 How do I convert my 1-hour arithmetic averages into the...
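
    The rule's actual conversion formulas are truncated in this record, but as a generic illustration, converting 1-hour arithmetic averages into a longer averaging time is a block average of the hourly values. The sketch below shows hourly values rolled up into a 12-hour arithmetic average; the numbers are made up.

```python
# Illustrative only: block-averaging 1-hour arithmetic averages into a
# 12-hour average. Not the regulation's exact procedure or units.
hourly = [22.0, 25.5, 24.0, 26.5, 23.0, 21.5, 20.0, 19.5,
          18.0, 17.5, 19.0, 21.0]          # 12 hourly averages

block = 12
avg_12h = sum(hourly[:block]) / block
print(f"12-hour arithmetic average: {avg_12h:.2f}")
```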

  8. Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs

    NASA Astrophysics Data System (ADS)

    Chitsazan, N.; Tsai, F. T.

    2012-12-01

    Groundwater remediation designs rely heavily on simulation models, whose predictions are subject to various sources of uncertainty. To develop a robust remediation design, it is crucial to understand the effect of these uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer framework, where each layer targets one source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows model weights to be evaluated at different levels of the hierarchy and the relative importance of the models at each level to be assessed. To account for uncertainty, we employ chance constrained (CC) programming for stochastic remediation design. Chance constrained programming has traditionally been used to account for parameter uncertainty, but many recent studies suggest that model structure uncertainty is not negligible compared with parameter uncertainty. Using chance constrained programming along with HBMA can therefore provide a rigorous tool for groundwater remediation design under uncertainty. In this research, the HBMA-CC approach was applied to a remediation design in a synthetic aquifer, in which a scavenger well was to be placed to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainty from model structure, parameter estimation, and kriging interpolation, and an improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated the prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to significant error in evaluating prediction variances, for two reasons. First, under the single best model, variances that stem from uncertainty in the model structure are ignored. Second, considering the best model with non
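
    The abstract's point about underestimated variances follows directly from the standard model-averaging decomposition: the BMA predictive variance is the weighted within-model variance plus the between-model variance of the means. The sketch below shows this arithmetic with toy weights and moments (not the study's models or data).

```python
import numpy as np

# Toy posterior model weights and per-model predictive moments.
weights = np.array([0.5, 0.3, 0.2])        # posterior model probabilities
means = np.array([120.0, 135.0, 150.0])    # predicted Cl- concentration (mg/L)
variances = np.array([16.0, 25.0, 36.0])   # per-model predictive variances

bma_mean = np.sum(weights * means)
within = np.sum(weights * variances)                  # avg within-model variance
between = np.sum(weights * (means - bma_mean) ** 2)   # model-structure spread
bma_var = within + between

print(f"BMA mean {bma_mean:.1f}, variance {bma_var:.1f} "
      f"(within {within:.1f} + between {between:.1f})")
```

    Here the "best" model's variance (16.0) grossly understates the full predictive variance (~160) once between-model spread is included, which is exactly the first failure mode the abstract identifies.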

  9. Forecasting of Average Monthly River Flows in Colombia

    NASA Astrophysics Data System (ADS)

    Mesa, O. J.; Poveda, G.

    2006-05-01

    The last two decades have witnessed a marked increase in our knowledge of the causes of interannual hydroclimatic variability and in our ability to make predictions. Colombia, located near the seat of the ENSO phenomenon, has been shown to experience negative (positive) precipitation anomalies in concert with El Niño (La Niña). Besides the Pacific Ocean, Colombia receives climatic influences from the Atlantic Ocean and the Caribbean Sea, and through the tropical forest of the Amazon basin and the savannas of the Orinoco River, on top of the orographic and hydroclimatic effects introduced by the Andes. As in various other countries of the region, hydroelectric power contributes a large proportion (75%) of total electricity generation in Colombia. Most agriculture is rain-fed, and domestic water supply relies mainly on surface water from creeks and rivers; in addition, various vector-borne tropical diseases intensify in response to changes in rain and temperature. There is therefore a direct connection between climatic fluctuations and the national and regional economies. This talk presents different forecasts of average monthly streamflows into the largest reservoir used for hydropower generation in Colombia and illustrates the potential economic savings of such forecasts. For planning the reservoir operation, the most appropriate time scale is the annual to interannual, which fortunately corresponds to the scale at which our understanding of hydroclimatic variability has improved significantly. Among the different possibilities, we have explored traditional statistical ARIMA models, multiple linear regression, natural and constructed analogue models, the linear inverse model, neural network models, the non-parametric regression splines (MARS) model, regime-dependent Markov models, and a method we termed PREBEO, which is based on spectral band decomposition using wavelets. Most of the methods make
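
    As a sketch of the simplest model family listed above, the following fits a multiple linear regression that forecasts monthly inflow from the previous month's flow and a lagged ENSO index. All data are synthetic and the predictor choice is illustrative; it is not the talk's actual model or skill.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 240                                   # 20 years of synthetic monthly data
enso = rng.normal(0.0, 1.0, n)            # stand-in ENSO index
flow = np.empty(n)
flow[0] = 100.0
for t in range(1, n):                     # flow persists; El Niño suppresses it
    flow[t] = 50 + 0.5 * flow[t - 1] - 8.0 * enso[t - 1] + rng.normal(0, 5)

# Design matrix: intercept, flow(t-1), enso(t-1) -> predict flow(t).
X = np.column_stack([np.ones(n - 1), flow[:-1], enso[:-1]])
beta, *_ = np.linalg.lstsq(X, flow[1:], rcond=None)
forecast = X @ beta
skill = np.corrcoef(forecast, flow[1:])[0, 1]
print("fitted coefficients:", beta.round(2), "in-sample r =", skill.round(2))
```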

  10. 40 CFR 1051.720 - How do I calculate my average emission level or emission credits?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false How do I calculate my average emission... Averaging, Banking, and Trading for Certification § 1051.720 How do I calculate my average emission level or emission credits? (a) Calculate your average emission level for each type of recreational vehicle or...
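
    The record's formula is truncated, but averaging provisions of this kind generally compute a fleet average as each family's emission level weighted by its production volume. A hedged sketch of that arithmetic, with made-up values:

```python
# Production-weighted average emission level; illustrative values only,
# not the CFR's exact formula or any regulatory data.
families = [
    # (family, emission level, production volume)
    ("A", 15.0, 4000),
    ("B", 18.5, 1500),
    ("C", 12.0, 2500),
]

total_units = sum(units for _, _, units in families)
avg_level = sum(level * units for _, level, units in families) / total_units
print(f"production-weighted average emission level: {avg_level:.2f}")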

  11. 40 CFR 1039.710 - How do I average emission credits?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false How do I average emission credits..., Banking, and Trading for Certification § 1039.710 How do I average emission credits? (a) Averaging is the exchange of emission credits among your engine families. You may average emission credits only within...

  12. 40 CFR 1051.705 - How do I average emission levels?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false How do I average emission levels? 1051... Trading for Certification § 1051.705 How do I average emission levels? (a) As specified in subpart B of...) Calculate a preliminary average emission level according to § 1051.720 for each averaging set...

  13. 47 CFR 65.305 - Calculation of the weighted average cost of capital.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Calculation of the weighted average cost of... Carriers § 65.305 Calculation of the weighted average cost of capital. (a) The composite weighted average... Commission determines to the contrary in a prescription proceeding, the composite weighted average cost...
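
    A composite weighted average cost of capital weights each capital component's cost rate by its share of total capital. The sketch below shows the standard arithmetic; the rates and shares are illustrative, not values prescribed by the rule.

```python
# Composite WACC = sum(cost rate * capital share); figures are made up.
components = [
    # (component, cost rate, share of total capital)
    ("debt",            0.07, 0.45),
    ("preferred stock", 0.08, 0.05),
    ("common equity",   0.12, 0.50),
]

shares_sum = sum(share for _, _, share in components)
assert abs(shares_sum - 1.0) < 1e-9          # shares must cover all capital
wacc = sum(rate * share for _, rate, share in components)
print(f"composite weighted average cost of capital: {wacc:.2%}")
```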

  14. 40 CFR 1045.710 - How do I average emission credits?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false How do I average emission credits..., Banking, and Trading for Certification § 1045.710 How do I average emission credits? (a) Averaging is the exchange of emission credits among your families. You may average emission credits only within the...

  15. 26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 10 2010-04-01 2010-04-01 false Definition of weighted average exchange rate. 1... average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate” means the simple average of the daily exchange rates (determined by reference to a qualified source...
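
    Per the definition quoted above, the "weighted average exchange rate" is simply the average of the daily exchange rates over the period. A one-line sketch with made-up rates:

```python
# Simple average of daily exchange rates (illustrative rates only).
daily_rates = [1.082, 1.079, 1.085, 1.091, 1.088]

avg_rate = sum(daily_rates) / len(daily_rates)
print(f"weighted average exchange rate: {avg_rate:.4f}")
```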

  16. 47 CFR 24.53 - Calculation of height above average terrain (HAAT).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 2 2012-10-01 2012-10-01 false Calculation of height above average terrain... average terrain (HAAT). (a) HAAT is determined by subtracting average terrain elevation from antenna height above mean sea level. (b) Average terrain elevation shall be calculated using elevation data...
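
    Following the rule text above, HAAT is the antenna height above mean sea level minus the average terrain elevation. The sketch below assumes the common FCC practice of averaging terrain along several evenly spaced radials (typically sampled a few km to ~16 km from the site); the radial count, sampling distances, and elevations are illustrative.

```python
# HAAT = antenna height AMSL - average terrain elevation; toy values.
antenna_amsl_m = 420.0   # antenna height above mean sea level (m)

# Hypothetical average terrain elevation per radial (m AMSL), 8 radials.
radial_avgs_m = [182.0, 175.5, 190.0, 201.0, 188.5, 179.0, 185.0, 193.0]

avg_terrain_m = sum(radial_avgs_m) / len(radial_avgs_m)
haat_m = antenna_amsl_m - avg_terrain_m
print(f"average terrain {avg_terrain_m:.1f} m, HAAT {haat_m:.1f} m")
```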

  17. 47 CFR 24.53 - Calculation of height above average terrain (HAAT).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 2 2013-10-01 2013-10-01 false Calculation of height above average terrain... average terrain (HAAT). (a) HAAT is determined by subtracting average terrain elevation from antenna height above mean sea level. (b) Average terrain elevation shall be calculated using elevation data...

  18. 47 CFR 24.53 - Calculation of height above average terrain (HAAT).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 2 2014-10-01 2014-10-01 false Calculation of height above average terrain... average terrain (HAAT). (a) HAAT is determined by subtracting average terrain elevation from antenna height above mean sea level. (b) Average terrain elevation shall be calculated using elevation data...

  19. 47 CFR 24.53 - Calculation of height above average terrain (HAAT).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 2 2011-10-01 2011-10-01 false Calculation of height above average terrain... average terrain (HAAT). (a) HAAT is determined by subtracting average terrain elevation from antenna height above mean sea level. (b) Average terrain elevation shall be calculated using elevation data...

  20. 47 CFR 24.53 - Calculation of height above average terrain (HAAT).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Calculation of height above average terrain... average terrain (HAAT). (a) HAAT is determined by subtracting average terrain elevation from antenna height above mean sea level. (b) Average terrain elevation shall be calculated using elevation data...