Science.gov

Sample records for 8-hr time-weighted average

  1. Time-weighted average water sampling in Lake Ontario with solid-phase microextraction passive samplers.

    PubMed

    Ouyang, Gangfeng; Zhao, Wennan; Bragg, Leslie; Qin, Zhipei; Alaee, Mehran; Pawliszyn, Janusz

    2007-06-01

    In this study, three types of solid-phase microextraction (SPME) passive samplers, including a fiber-retracted device, a polydimethylsiloxane (PDMS) rod and a PDMS membrane, were evaluated to determine the time-weighted average (TWA) concentrations of polycyclic aromatic hydrocarbons (PAHs) in Hamilton Harbor (the western tip of Lake Ontario, ON, Canada). Field trials demonstrated that these types of SPME samplers are suitable for the long-term monitoring of organic pollutants in water. These samplers possess all of the advantages of SPME: they are solvent-free; sampling, extraction, and concentration are combined into one step; and they can be directly injected into a gas chromatograph (GC) for analysis without further treatment. These samplers also address the additional needs of a passive sampling technique: they are economical, they are easy to deploy, and the TWA concentrations of target analytes can be obtained with one sampler. Moreover, the mass uptake of these samplers is either independent of the face velocity or the effect can be calibrated, which is desirable for long-term field sampling, especially when the convection conditions of the sampling environment are difficult to measure and calibrate. Among the three types of SPME samplers that were tested, the PDMS membrane possesses the highest surface-to-volume ratio, which results in the highest sensitivity and mass uptake and the lowest detection level.
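
    For the fiber-retracted device, the TWA relationship is the standard retracted-sampler form of Fick's first law: the mass n accumulated over deployment time t across a diffusion gap of length Z and cross-section A gives C_TWA = n·Z/(D·A·t). A minimal sketch of that calculation, with illustrative values rather than the study's data:

      # TWA concentration from a retracted-fiber passive sampler.
      # Fick's first law gives n = D * A * C * t / Z, so C = n * Z / (D * A * t).
      def twa_concentration(n_ng, Z_cm, D_cm2_s, A_cm2, t_s):
          """Return the TWA concentration in ng/cm^3."""
          return n_ng * Z_cm / (D_cm2_s * A_cm2 * t_s)

      # Illustrative: 5 ng accumulated over a 7-day deployment.
      C = twa_concentration(n_ng=5.0,
                            Z_cm=0.3,        # retraction depth (assumed)
                            D_cm2_s=5e-6,    # aqueous diffusion coefficient (assumed)
                            A_cm2=8.6e-4,    # opening cross-section (assumed)
                            t_s=7 * 24 * 3600)
      print(f"C_TWA = {C:.3g} ng/cm^3")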

  2. Time weighted average concentration monitoring based on thin film solid phase microextraction.

    PubMed

    Ahmadi, Fardin; Sparham, Chris; Boyaci, Ezel; Pawliszyn, Janusz

    2017-03-02

    Time-weighted average (TWA) passive sampling with thin film solid phase microextraction (TF-SPME) and liquid chromatography tandem mass spectrometry (LC-MS/MS) was used for collection, identification, and quantification of benzophenone-3, benzophenone-4, 2-phenylbenzimidazole-5-sulphonic acid, octocrylene, and triclosan in the aquatic environment. Two types of TF-SPME passive samplers, including a retracted thin film device using a hydrophilic lipophilic balance (HLB) coating, and an open bed configuration with an octadecyl silica-based (C18) coating, were evaluated in an aqueous standard generation (ASG) system. Laboratory calibration results indicated that the retracted thin film device using the HLB coating is suitable for determining TWA concentrations of polar analytes in water, with an uptake that was linear for up to 70 days. In the open bed form, a one-calibrant kinetic calibration technique was accomplished by loading benzophenone-3-d5 as a calibrant on the C18 coating to quantify all non-polar compounds. The experimental results showed that the one-calibrant kinetic calibration technique can be used for determination of classes of compounds in cases where deuterated counterparts are either unavailable or expensive. The developed passive samplers were deployed in wastewater-dominated reaches of the Grand River (Kitchener, ON) to verify their feasibility for determination of TWA concentrations in on-site applications. Field trial results indicated that these devices are suitable for long-term and short-term monitoring of compounds varying in polarity, such as UV blockers and biocide compounds in water, and the data were in good agreement with literature values.
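
    The one-calibrant kinetic calibration mentioned here rests on the isotropy of absorption and desorption: the fraction of pre-loaded calibrant lost during deployment mirrors the analyte's fractional approach to equilibrium. A minimal sketch of that bookkeeping (all numbers illustrative; the partition-coefficient/volume product is an assumed placeholder):

      # Kinetic calibration (in-fiber standardization): a pre-loaded calibrant
      # desorbs as q/q0 = exp(-a*t); by isotropy the analyte absorbs as
      # n/n_eq = 1 - exp(-a*t), so n_eq = n / (1 - q/q0).
      q0, q = 10.0, 3.0   # calibrant loaded / remaining after deployment (ng)
      n = 4.2             # analyte mass extracted during deployment (ng)

      n_eq = n / (1.0 - q / q0)   # extrapolated equilibrium amount: 6.0 ng
      Kfs_Vf = 0.12               # partition coefficient x coating volume, L (assumed)
      C_twa = n_eq / Kfs_Vf       # TWA concentration estimate, ng/L
      print(f"n_eq = {n_eq:.2f} ng, C ~ {C_twa:.0f} ng/L")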

  3. Uncertainty and variability in historical time-weighted average exposure data.

    PubMed

    Davis, Adam J; Strom, Daniel J

    2008-02-01

    Beginning around 1940, private companies began processing uranium and thorium ores, compounds, and metals for the Manhattan Engineer District and later the U.S. Atomic Energy Commission (AEC). Personnel from the AEC's Health and Safety Laboratory (HASL) visited many of the plants to assess worker exposures to radiation and radioactive materials. They developed a time-and-task approach to estimating "daily weighted average" (DWA) concentrations of airborne uranium, thorium, radon, and radon decay products. While short-term exposures greater than 10(5) dpm m(-3) of uranium and greater than 10(5) pCi L(-1) of radon were observed, DWA concentrations were much lower. The HASL-reported DWA values may be used as inputs for dose reconstruction in support of compensation decisions, but they have no numerical uncertainties associated with them. In this work, Monte Carlo methods are used retrospectively to assess the uncertainty and variability in the DWA values for 63 job titles from five different facilities that processed U, U ore, Th, or 226Ra-222Rn between 1948 and 1955. Most groups of repeated air samples are well described by lognormal distributions. Combining samples associated with different tasks often results in a reduction of the geometric standard deviation (GSD) of the DWA to less than those GSD values typical of individual tasks. Results support the assumption of a GSD value of 5 when information on uncertainty in DWA exposures is unavailable. Blunders involving arithmetic, transposition, and transcription are found in many of the HASL reports. In 5 out of the 63 cases, these mistakes result in overestimates of DWA values by a factor of 2 to 2.5, and in 2 cases DWA values are underestimated by factors of 3 to 10.
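
    The Monte Carlo treatment can be sketched as follows: the DWA is a time-weighted sum of task concentrations, each sampled from a lognormal distribution, and the GSD of the resulting DWA is read off the simulated distribution. A hedged illustration with hypothetical task data (not HASL's actual values):

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical time-and-task data: (hours per day, geometric mean
      # concentration in dpm/m^3, geometric standard deviation).
      tasks = [(2.0, 5.0e3, 3.0),
               (5.0, 1.0e3, 2.5),
               (1.0, 2.0e4, 5.0)]

      n_trials = 100_000
      total_hours = sum(h for h, _, _ in tasks)
      dwa = np.zeros(n_trials)
      for hours, gm, gsd in tasks:
          # Lognormal: the underlying normal has mu = ln(GM), sigma = ln(GSD).
          c = rng.lognormal(mean=np.log(gm), sigma=np.log(gsd), size=n_trials)
          dwa += hours * c / total_hours

      gm_dwa = np.exp(np.mean(np.log(dwa)))
      gsd_dwa = np.exp(np.std(np.log(dwa)))   # typically below the task GSDs
      print(f"DWA GM ~ {gm_dwa:.3g} dpm/m^3, GSD ~ {gsd_dwa:.3g}")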

  4. Analysis of trace contaminants in hot gas streams using time-weighted average solid-phase microextraction: proof of concept.

    PubMed

    Woolcock, Patrick J; Koziel, Jacek A; Cai, Lingshuang; Johnston, Patrick A; Brown, Robert C

    2013-03-15

    Time-weighted average (TWA) passive sampling using solid-phase microextraction (SPME) and gas chromatography was investigated as a new method of collecting, identifying and quantifying contaminants in process gas streams. Unlike previous TWA-SPME techniques using the retracted fiber configuration (fiber within needle) to monitor ambient conditions or relatively stagnant gases, this method was developed for fast-moving process gas streams at temperatures approaching 300 °C. The goal was to develop a consistent and reliable method of analyzing low concentrations of contaminants in hot gas streams without performing time-consuming exhaustive extraction with a slipstream. This work in particular aims to quantify trace tar compounds found in a syngas stream generated from biomass gasification. This paper evaluates the concept of retracted SPME at high temperatures by testing the three essential requirements for TWA passive sampling: (1) zero-sink assumption, (2) consistent and reliable response by the sampling device to changing concentrations, and (3) equal concentrations in the bulk gas stream relative to the face of the fiber syringe opening. Results indicated the method can accurately predict gas stream concentrations at elevated temperatures. Evidence was also discovered to validate the existence of a second boundary layer within the fiber during the adsorption/absorption process. This limits the technique to operating within reasonable mass loadings and loading rates, established by appropriate sampling depths and times for concentrations of interest. A limit of quantification for the benzene model tar system was estimated at 0.02 g m(-3) (8 ppm) with a limit of detection of 0.5 mg m(-3) (200 ppb). Using the appropriate conditions, the technique was applied to a pilot-scale fluidized-bed gasifier to verify its feasibility. Results from this test were in good agreement with literature and prior pilot plant operation, indicating the new method can measure low…

  5. Time-weighted average sampling of airborne propylene glycol ethers by a solid-phase microextraction device.

    PubMed

    Shih, H C; Tsai, S W; Kuo, C H

    2012-01-01

    A solid-phase microextraction (SPME) device was used as a diffusive sampler for airborne propylene glycol ethers (PGEs), including propylene glycol monomethyl ether (PGME), propylene glycol monomethyl ether acetate (PGMEA), and dipropylene glycol monomethyl ether (DPGME). A Carboxen-polydimethylsiloxane (CAR/PDMS) SPME fiber was selected for this study. Polytetrafluoroethylene (PTFE) tubing was used as the holder, and the SPME fiber assembly was inserted into the tubing as a diffusive sampler. The diffusion path length and area of the sampler were 0.3 cm and 0.00086 cm(2), respectively. The theoretical sampling constants at 30°C and 1 atm for PGME, PGMEA, and DPGME were 1.50 × 10(-2), 1.23 × 10(-2) and 1.14 × 10(-2) cm(3) min(-1), respectively. For evaluations, known concentrations of PGEs around the threshold limit values/time-weighted average with specific relative humidities (10% and 80%) were generated both by the air bag method and the dynamic generation system, while 15, 30, 60, 120, and 240 min were selected as the time periods for vapor exposures. Comparisons of the SPME diffusive sampling method to Occupational Safety and Health Administration (OSHA) organic Method 99 were performed side-by-side in an exposure chamber at 30°C for PGME. A gas chromatography/flame ionization detector (GC/FID) was used for sample analysis. The experimental sampling constants of the sampler at 30°C were (6.93 ± 0.12) × 10(-1), (4.72 ± 0.03) × 10(-1), and (3.29 ± 0.20) × 10(-1) cm(3) min(-1) for PGME, PGMEA, and DPGME, respectively. The adsorption of chemicals on the stainless steel needle of the SPME fiber was suspected to be one of the reasons why significant differences between theoretical and experimental sampling rates were observed. Correlations between the results for PGME from both the SPME device and OSHA organic Method 99 were linear (r = 0.9984) and consistent (slope = 0.97 ± 0.03). Face velocity (0-0.18 m/s) was also shown to have no effect on the sampler…
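
    The theoretical sampling constant quoted above follows from Fick's first law for a tube-type diffusive sampler, SR = D·A/L, and the TWA concentration is the collected mass divided by SR·t. A short sketch using the geometry from the abstract (the diffusion coefficient is an assumed placeholder):

      # Theoretical sampling constant of a tube-type diffusive sampler.
      D = 5.2        # vapor diffusion coefficient, cm^2/min (assumed)
      A = 0.00086    # diffusion cross-sectional area, cm^2 (from the abstract)
      L = 0.3        # diffusion path length, cm (from the abstract)
      SR = D * A / L                 # ~1.5e-2 cm^3/min, cf. PGME above

      # TWA concentration from mass m collected over time t: C = m / (SR * t).
      m_ng, t_min = 120.0, 240.0     # illustrative mass and exposure time
      C = m_ng / (SR * t_min)        # ng/cm^3, numerically equal to mg/m^3
      print(f"SR = {SR:.3g} cm^3/min, C_TWA = {C:.3g} mg/m^3")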

  6. New universal, portable and cryogenic sampler for time weighted average monitoring of H2S, NH3, benzene, toluene, ethylbenzene, xylenes and dimethylethylamine.

    PubMed

    Juarez-Galan, Juan M; Valor, Ignacio

    2009-04-10

    A new cryogenic integrative air sampler (patent application number 08/00669), able to overcome many of the limitations of current volatile organic compound and odour sampling methodologies, is presented. The sample is spontaneously collected in a universal way at 15 mL/min, selectively dried (reaching up to 95% moisture removal) and stored under cryogenic conditions. The sampler performance was tested under time-weighted average (TWA) conditions, sampling 100 L of air over 5 days for determination of NH(3), H(2)S, and benzene, toluene, ethylbenzene and xylenes (BTEX) in the ppm(v) range. Recoveries were statistically indistinguishable from 100% for all compounds, with a concentration factor of 5.5. Furthermore, an in-field evaluation was done by monitoring the TWA immission levels of BTEX and dimethylethylamine (ppb(v) range) in an urban area with the developed technology and comparing the results with those obtained with a commercial graphitised charcoal diffusive sampler. The results showed good statistical agreement between the two techniques.

  7. Quantification of benzene, toluene, ethylbenzene and o-xylene in internal combustion engine exhaust with time-weighted average solid phase microextraction and gas chromatography mass spectrometry.

    PubMed

    Baimatova, Nassiba; Koziel, Jacek A; Kenessov, Bulat

    2015-05-11

    A new and simple method for benzene, toluene, ethylbenzene and o-xylene (BTEX) quantification in vehicle exhaust was developed based on diffusion-controlled extraction onto a retracted solid-phase microextraction (SPME) fiber coating. The rationale was to develop a method based on existing and proven SPME technology that is feasible for field adaptation in developing countries. Passive sampling with the SPME fiber retracted into the needle extracted nearly two orders of magnitude less mass (n) than an exposed fiber (outside the needle), and sampling occurred in a time-weighted averaging (TWA) mode. Both the sampling time (t) and fiber retraction depth (Z) were adjusted to quantify a wider range of Cgas. Extraction and quantification were conducted in a non-equilibrium mode. Effects of Cgas, t, Z and T were tested. In addition, the contribution of n extracted by the metallic surfaces of the needle assembly without the SPME coating was studied, as was the effect of sample storage time on loss of n. Retracted TWA-SPME extractions followed the theoretical model. Extracted n of BTEX was proportional to Cgas, t, Dg and T, and inversely proportional to Z. Method detection limits were 1.8, 2.7, 2.1 and 5.2 mg m(-3) (0.51, 0.83, 0.66 and 1.62 ppm) for BTEX, respectively. The contribution of extraction onto metallic surfaces was reproducible and was influenced by Cgas and t, and less so by T and Z. The new method was applied to measure BTEX in the exhaust gas of a 1995 Ford Crown Victoria and compared with a whole-gas direct-injection method.

  8. Understanding the effectiveness of precursor reductions in lowering 8-hr ozone concentrations--Part II. The eastern United States.

    PubMed

    Reynolds, Steven D; Blanchard, Charles L; Ziman, Stephen D

    2004-11-01

    Analyses of ozone (O3) measurements in conjunction with photochemical modeling were used to assess the feasibility of attaining the federal 8-hr O3 standard in the eastern United States. Various combinations of volatile organic compound (VOC) and oxides of nitrogen (NOx) emission reductions were effective in lowering modeled peak 1-hr O3 concentrations. VOC emissions reductions alone had only a modest impact on modeled peak 8-hr O3 concentrations. Anthropogenic NOx emissions reductions of 46-86% of 1996 base case values were needed to reach the level of the 8-hr standard in some areas. As NOx emissions are reduced, O3 production efficiency increases, which accounts for the less than proportional response of calculated 8-hr O3 levels. Such increases in O3 production efficiency also were noted in previous modeling work for central California. O3 production in some urban core areas, such as New York City and Chicago, IL, was found to be VOC-limited. In these areas, moderate NOx emissions reductions may be accompanied by increases in peak 8-hr O3 levels. The findings help to explain differences in historical trends in 1- and 8-hr O3 levels and have serious implications for the feasibility of attaining the 8-hr O3 standard in several areas of the eastern United States.

  9. A ∼3.8 hr periodicity from an ultrasoft active galactic nucleus candidate

    SciTech Connect

    Lin, Dacheng; Irwin, Jimmy A.; Godet, Olivier; Webb, Natalie A.; Barret, Didier

    2013-10-10

    Very few galactic nuclei are found to show significant X-ray quasi-periodic oscillations (QPOs). After carefully modeling the noise continuum, we find that the ∼3.8 hr QPO in the ultrasoft active galactic nucleus candidate 2XMM J123103.2+110648 was significantly detected (∼5σ) in two XMM-Newton observations in 2005, but not in the one in 2003. The QPO root mean square (rms) is very high and increases from ∼25% in 0.2-0.5 keV to ∼50% in 1-2 keV. The QPO probably corresponds to the low-frequency type seen in Galactic black hole X-ray binaries, considering its large rms and the probably low mass (∼10^5 M☉) of the black hole in the nucleus. We also fit the soft X-ray spectra from the three XMM-Newton observations and find that they can be described with either pure thermal disk emission or optically thick low-temperature Comptonization. We see no clear X-ray emission in the two Swift observations from 2013, indicating lower source fluxes than those in the XMM-Newton observations.
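
    As a hedged illustration of how such a periodicity can be screened for in unevenly sampled X-ray light curves, the sketch below runs a generic Lomb-Scargle search on synthetic data; it is not the paper's pipeline, whose careful noise-continuum modeling is essential for the quoted significance:

      import numpy as np
      from scipy.signal import lombscargle

      rng = np.random.default_rng(1)
      # Synthetic, unevenly sampled light curve with a 3.8 hr modulation.
      t = np.sort(rng.uniform(0.0, 40 * 3600.0, 500))        # seconds
      rate = (1.0 + 0.4 * np.sin(2 * np.pi * t / (3.8 * 3600.0))
              + 0.2 * rng.standard_normal(t.size))

      periods = np.linspace(1 * 3600.0, 10 * 3600.0, 2000)   # 1-10 hr search grid
      power = lombscargle(t, rate - rate.mean(), 2 * np.pi / periods)
      print(f"peak at ~{periods[np.argmax(power)] / 3600:.2f} hr")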

  10. Exposure Assessment for Carbon Dioxide Gas: Full Shift Average and Short-Term Measurement Approaches.

    PubMed

    Hill, R Jedd; Smith, Philip A

    2015-01-01

    Carbon dioxide (CO2) makes up a relatively small percentage of atmospheric gases, yet when used or produced in large quantities as a gas, a liquid, or a solid (dry ice), substantial airborne exposures may occur. Exposure to elevated CO2 concentrations may elicit toxicity, even with oxygen concentrations that are not considered dangerous per se. Full-shift sampling approaches to measure 8-hr time-weighted average (TWA) CO2 exposures are used in many facilities where CO2 gas may be present. The need to assess rapidly fluctuating CO2 levels that may approach immediately dangerous to life or health (IDLH) conditions should also be a concern, and several methods for doing so using fast-responding measurement tools are discussed in this paper. Colorimetric detector tubes, a non-dispersive infrared (NDIR) detector, and a portable Fourier transform infrared (FTIR) spectroscopy instrument were evaluated in a laboratory environment using a flow-through standard generation system and were found to provide suitable accuracy and precision for assessing rapid fluctuations in CO2 concentration, with a possible humidity-related effect noted only for the detector tubes. These tools were used in the field to select locations and times for grab sampling and personal full-shift sampling, which provided laboratory analysis data to confirm IDLH conditions and 8-hr TWA exposure information. Fluctuating CO2 exposures are exemplified through field work results from several workplaces. In a brewery, brief CO2 exposures above the IDLH value occurred when large volumes of CO2-containing liquid were released for disposal, but 8-hr TWA exposures were not found to exceed the permissible level. In a frozen food production facility, nearly constant exposure to CO2 concentrations above the permissible 8-hr TWA value was seen, as well as brief exposures above the IDLH concentration, which were associated with specific tasks where liquid CO2 was used. In a poultry processing facility, the use of dry…
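
    The full-shift arithmetic behind these assessments is a time-weighted sum of segment concentrations over the 8-hr shift. A minimal sketch with hypothetical CO2 levels (OSHA's permissible exposure limit for CO2 is a 5000 ppm 8-hr TWA):

      # 8-hr TWA from task-segment measurements: TWA = sum(Ci * ti) / 8 hr.
      segments = [(5000.0, 1.0),   # (ppm, hours): e.g., tank-room tasks
                  (1500.0, 4.0),   # general production floor
                  (800.0, 3.0)]    # office and break areas
      twa = sum(c * t for c, t in segments) / 8.0
      print(f"8-hr TWA = {twa:.0f} ppm")   # 1675 ppm, below the 5000 ppm PEL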

  11. Measurement and analysis of 8-hour time-weighted average sound pressure levels in a vivarium decontamination facility.

    PubMed

    Pate, William; Charlton, Michael; Wellington, Carl

    2013-01-01

    Occupational noise exposure is a recognized hazard for employees working near equipment and processes that generate high levels of sound pressure. High sound pressure levels have the potential to result in temporary or permanent alteration of hearing perception. The cleaning of cages used to house laboratory research animals is a process that uses equipment capable of generating high sound pressure levels. The purpose of this research study was to assess occupational exposure to sound pressure levels for employees operating cage decontamination equipment. This study reveals the potential for overexposure to hazardous noise, as defined by the Occupational Safety and Health Administration (OSHA) permissible exposure limit, and consistent exceedance of the OSHA action level. These results emphasize the importance of evaluating equipment and room design when acquiring new cage decontamination equipment in order to minimize employee exposure to potentially hazardous sound pressure levels.
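
    For context, OSHA combines noise exposures at different levels into a dose using 5-dB exchange-rate reference durations and maps the dose back to an 8-hr TWA sound level. A sketch with hypothetical cage-wash levels:

      import math

      # 29 CFR 1910.95: allowed duration at L dBA is T(L) = 8 / 2**((L - 90) / 5) hours.
      def allowed_hours(level_dba):
          return 8.0 / 2 ** ((level_dba - 90.0) / 5.0)

      exposures = [(95.0, 2.0), (88.0, 4.0), (82.0, 2.0)]   # (dBA, hours), hypothetical
      dose = 100.0 * sum(hours / allowed_hours(level) for level, hours in exposures)
      twa = 16.61 * math.log10(dose / 100.0) + 90.0
      print(f"dose = {dose:.0f}%, 8-hr TWA = {twa:.1f} dBA")
      # Action level: 85 dBA TWA (50% dose); PEL: 90 dBA TWA (100% dose).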

  12. Constantly stirred sorbent and continuous flow integrative sampler: new integrative samplers for the time weighted average water monitoring.

    PubMed

    Llorca, Julio; Gutiérrez, Cristina; Capilla, Elisabeth; Tortajada, Rafael; Sanjuán, Lorena; Fuentes, Alicia; Valor, Ignacio

    2009-07-31

    Two innovative integrative samplers have been developed that enable high sampling rates unaffected by turbulence (thus avoiding the use of performance reference compounds) and have negligible lag times. The first, called the constantly stirred sorbent (CSS), consists of a rotating head that holds the sorbent. The rotation speed of the head generates a constant turbulence around the sorbent, making it independent of the external hydrodynamics. The second, called the continuous flow integrative sampler (CFIS), consists of a small peristaltic pump which produces a constant flow through a glass cell. The sorbent is located inside this cell. Although different sorbents can be used, poly(dimethylsiloxane) (PDMS) in the commercial Twister format (typically used for stir bar sorptive extraction) was evaluated for the sampling of six polycyclic aromatic hydrocarbons and three organochlorine pesticides. These new devices have many analogies with passive samplers but cannot truly be defined as such, since they need a small energy supply of around 0.5 W provided by a battery. Sampling rates from 181 x 10(-3) to 791 x 10(-3) L/day were obtained with the CSS, and from 18 x 10(-3) to 53 x 10(-3) L/day with the CFIS. Limits of detection for these devices are in the range of 0.3-544 pg/L, with a precision below 20%. An in-field evaluation of both devices was carried out over a 5-day sampling period at the outlet of a wastewater treatment plant, with results comparable to those obtained with a classical sampling method.

  13. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method and, focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending those results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
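
    For scalar weights, the algorithm this Note describes reduces to an eigenvalue problem: the optimal average is the unit eigenvector associated with the largest eigenvalue of the weighted sum of quaternion outer products. A numpy sketch of that scalar-weighted case:

      import numpy as np

      def average_quaternion(quats, weights):
          """Scalar-weighted quaternion average: the dominant eigenvector
          of M = sum_i w_i * q_i q_i^T.  quats: (n, 4) unit quaternions."""
          q = np.asarray(quats, dtype=float)
          w = np.asarray(weights, dtype=float)
          M = (w[:, None, None] * q[:, :, None] * q[:, None, :]).sum(axis=0)
          eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
          return eigvecs[:, -1]                  # unit norm by construction

      # Two nearby attitudes; note q and -q represent the same rotation,
      # and the outer product q q^T makes the method sign-insensitive.
      q1 = np.array([0.0, 0.0, 0.0, 1.0])                      # scalar-last
      q2 = np.array([0.0, 0.0, np.sin(0.05), np.cos(0.05)])    # small z-rotation
      print(average_quaternion([q1, q2], [1.0, 1.0]))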

  14. Determining time-weighted average concentrations of nitrate and ammonium in freshwaters using DGT with ion exchange membrane-based binding layers.

    PubMed

    Huang, Jianyin; Bennett, William W; Welsh, David T; Teasdale, Peter R

    2016-12-08

    Commercially available AMI-7001 anion exchange and CMI-7000 cation exchange membranes were utilised as binding layers for DGT measurements of NO3-N and NH4-N in freshwaters. These ion exchange membranes are easier to prepare and handle than DGT binding layers consisting of hydrogels cast with ion exchange resins. The membranes showed good uptake and elution efficiencies for both NO3-N and NH4-N. The membrane-based DGTs are suitable for pH 3.5-8.5 and the ionic strength ranges (0.0001-0.014 and 0.0003-0.012 mol L(-1) as NaCl for the AMI-7001 and CMI-7000 membranes, respectively) typical of most natural freshwaters. The binding membranes had high intrinsic binding capacities for NO3-N and NH4-N of 911 ± 88 μg and 3512 ± 51 μg, respectively. Interferences from the major competing ions for membrane-based DGTs are similar to those for DGTs employing resin-based binding layers, but with slightly different selectivity, which means the two DGT types can be used in different types of freshwaters. The laboratory and field experiments demonstrated that AMI-DGT and CMI-DGT can be alternatives to A520E-DGT and PrCH-DGT for measuring NO3-N and NH4-N, respectively, as they (i) have a consistent composition, (ii) avoid the use of toxic chemicals, (iii) provided highly representative results (CDGT:CSOLN between 0.81 and 1.3), and (iv) agreed with the resin-based DGTs to within 85-120%.

  15. Development of accumulated heat stress index based on time-weighted function

    NASA Astrophysics Data System (ADS)

    Lee, Ji-Sun; Byun, Hi-Ryong; Kim, Do-Woo

    2016-05-01

    Heat stress accumulates in the human body when a person is exposed to a thermal condition for a long time. Considering this fact, we have defined the accumulated heat stress (AH) and have developed the accumulated heat stress index (AHI) to quantify the strength of heat stress. AH represents the heat stress accumulated in a 72-h period calculated by the use of a time-weighted function, and the AHI is a standardized index developed by the use of an equiprobability transformation (from a fitted Weibull distribution to the standard normal distribution). To verify the advantage offered by the AHI, it was compared with four thermal indices used by national governments: the humidex, the heat index, the wet-bulb globe temperature, and the perceived temperature. AH and the AHI were found to provide better detection of thermal danger and were more useful than the other indices. In particular, AH and the AHI detect deaths that were caused not only by extremely hot and humid weather, but also by the persistence of moderately hot and humid weather (for example, consecutive daily maximum temperatures of 28-32 °C), which the other indices fail to detect.
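
    The standardization step described here is an equiprobability transformation: each accumulated value x is mapped through z = Φ⁻¹(F(x)), where F is the fitted Weibull CDF and Φ⁻¹ is the standard normal quantile function. A scipy sketch on stand-in data:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      ah = rng.weibull(2.0, size=1000) * 30.0   # stand-in 72-h accumulated heat stress

      # Fit a Weibull distribution, then map through the normal quantile function.
      shape, loc, scale = stats.weibull_min.fit(ah, floc=0.0)
      z = stats.norm.ppf(stats.weibull_min.cdf(ah, shape, loc=loc, scale=scale))
      print(f"standardized index: mean ~ {z.mean():.2f}, std ~ {z.std():.2f}")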

  16. Neutron resonance averaging

    SciTech Connect

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.

  17. Areal Average Albedo (AREALAVEALB)

    DOE Data Explorer

    Riihimaki, Laura; Marinovici, Cristina; Kassianov, Evgueni

    2008-01-01

    The Areal Averaged Albedo VAP yields areal averaged surface spectral albedo estimates from MFRSR measurements collected under fully overcast conditions via a simple one-line equation (Barnard et al., 2008), which links cloud optical depth, normalized cloud transmittance, asymmetry parameter, and areal averaged surface albedo under fully overcast conditions.

  18. States' Average College Tuition.

    ERIC Educational Resources Information Center

    Eglin, Joseph J., Jr.; And Others

    This report presents statistical data on trends in tuition costs from 1980-81 through 1995-96. The average tuition for in-state undergraduate students of 4-year public colleges and universities for academic year 1995-96 was approximately 8.9 percent of median household income. This figure was obtained by dividing the students' average annual…

  19. Aggregation and Averaging.

    ERIC Educational Resources Information Center

    Siegel, Irving H.

    The arithmetic processes of aggregation and averaging are basic to quantitative investigations of employment, unemployment, and related concepts. In explaining these concepts, this report stresses need for accuracy and consistency in measurements, and describes tools for analyzing alternative measures. (BH)

  20. Threaded average temperature thermocouple

    NASA Technical Reports Server (NTRS)

    Ward, Stanley W. (Inventor)

    1990-01-01

    A threaded average temperature thermocouple 11 is provided to measure the average temperature of a test situs of a test material 30. A ceramic insulator rod 15 with two parallel holes 17 and 18 through the length thereof is securely fitted in a cylinder 16, which is bored along the longitudinal axis of symmetry of threaded bolt 12. Threaded bolt 12 is composed of material having thermal properties similar to those of test material 30. Leads of a thermocouple wire 20 leading from a remotely situated temperature sensing device 35 are each fed through one of the holes 17 or 18, secured at head end 13 of ceramic insulator rod 15, and exit at tip end 14. Each lead of thermocouple wire 20 is bent into and secured in an opposite radial groove 25 in tip end 14 of threaded bolt 12. Resulting threaded average temperature thermocouple 11 is ready to be inserted into cylindrical receptacle 32. The tip end 14 of the threaded average temperature thermocouple 11 is in intimate contact with receptacle 32. A jam nut 36 secures the threaded average temperature thermocouple 11 to test material 30.

  21. The average enzyme principle

    PubMed Central

    Reznik, Ed; Chaudhary, Osman; Segrè, Daniel

    2013-01-01

    The Michaelis-Menten equation for an irreversible enzymatic reaction depends linearly on the enzyme concentration. Even if the enzyme concentration changes in time, this linearity implies that the amount of substrate depleted during a given time interval depends only on the average enzyme concentration. Here, we use a time re-scaling approach to generalize this result to a broad category of multi-reaction systems, whose constituent enzymes have the same dependence on time, e.g. they belong to the same regulon. This “average enzyme principle” provides a natural methodology for jointly studying metabolism and its regulation. PMID:23892076
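
    The principle is easy to verify numerically: for a single irreversible Michaelis-Menten reaction, substrate depletion over [0, T] depends on the enzyme time course only through its time average, so an oscillating E(t) and a constant enzyme equal to its mean deplete exactly the same amount of substrate. A sketch with illustrative parameters:

      import numpy as np
      from scipy.integrate import solve_ivp

      kcat, Km, E0, T, S0 = 1.0, 0.5, 1.0, 10.0, 2.0

      def rhs(t, S, E_of_t):
          # Irreversible Michaelis-Menten: dS/dt = -kcat * E(t) * S / (Km + S)
          return [-kcat * E_of_t(t) * S[0] / (Km + S[0])]

      E_var = lambda t: E0 * (1.0 + 0.8 * np.sin(2.0 * t))           # oscillating enzyme
      Ebar = E0 * (1.0 + 0.8 * (1.0 - np.cos(2.0 * T)) / (2.0 * T))  # its mean on [0, T]

      S_var = solve_ivp(rhs, (0, T), [S0], args=(E_var,), rtol=1e-10).y[0, -1]
      S_avg = solve_ivp(rhs, (0, T), [S0], args=(lambda t: Ebar,), rtol=1e-10).y[0, -1]
      print(f"S(T) with varying E: {S_var:.6f}; with constant average E: {S_avg:.6f}")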

  22. Averaging of TNTC counts.

    PubMed Central

    Haas, C N; Heller, B

    1988-01-01

    When plate count methods are used for microbial enumeration, if too-numerous-to-count results occur, they are commonly discarded. In this paper, a method for consideration of such results in computation of an average microbial density is developed, and its use is illustrated by example. PMID:3178211
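
    One way to make the idea concrete is to treat TNTC plates as right-censored observations in a maximum-likelihood estimate of the density. The sketch below is consistent with the abstract's description but is not taken from the paper; the exact estimator may differ, and the countable limit is an assumption:

      import numpy as np
      from scipy import optimize, stats

      counts  = np.array([21.0, 26.0, 18.0])   # countable plates (CFU)
      volumes = np.array([0.1, 0.1, 0.1])      # mL plated for each count
      tntc_volumes = np.array([1.0, 1.0])      # mL plated for TNTC plates
      limit = 300.0                            # countable upper limit (assumed)

      def neg_loglik(log_density):
          lam = np.exp(log_density)            # density, CFU/mL
          ll = stats.poisson.logpmf(counts, lam * volumes).sum()
          ll += stats.poisson.logsf(limit, lam * tntc_volumes).sum()  # P(count > limit)
          return -ll

      res = optimize.minimize_scalar(neg_loglik, bounds=(0.0, 10.0), method="bounded")
      print(f"MLE density ~ {np.exp(res.x):.0f} CFU/mL")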

  23. Intra- and inter-basin mercury comparisons: Importance of basin scale and time-weighted methylmercury estimates.

    PubMed

    Bradley, Paul M; Journey, Celeste A; Brigham, Mark E; Burns, Douglas A; Button, Daniel T; Riva-Murray, Karen

    2013-01-01

    To assess inter-comparability of fluvial mercury (Hg) observations at substantially different scales, Hg concentrations, yields, and bivariate-relations were evaluated at nested-basin locations in the Edisto River, South Carolina and Hudson River, New York. Differences between scales were observed for filtered methylmercury (FMeHg) in the Edisto (attributed to wetland coverage differences) but not in the Hudson. Total mercury (THg) concentrations and bivariate-relationships did not vary substantially with scale in either basin. Combining results of this and a previously published multi-basin study, fish Hg correlated strongly with sampled water FMeHg concentration (ρ = 0.78; p = 0.003) and annual FMeHg basin yield (ρ = 0.66; p = 0.026). Improved correlation (ρ = 0.88; p < 0.0001) was achieved with time-weighted mean annual FMeHg concentrations estimated from basin-specific LOADEST models and daily streamflow. Results suggest reasonable scalability and inter-comparability for different basin sizes if wetland area or related MeHg-source-area metrics are considered.

  24. Effect of real-time weighted integration system for rapid calculation of functional images in clinical positron emission tomography

    SciTech Connect

    Iida, Hidehiro; Bloomfield, P.M.; Miura, Shuichi

    1995-03-01

    A system has been developed to rapidly calculate images of parametric rate constants, without acquiring dynamic frame data, for clinical positron emission tomography (PET). The method is based on weighted-integration algorithms for the two- and three-compartment models, and on hardware developments (real-time operation and a large cache memory system) in a PET scanner, Headtome-IV, which enable the acquisition of multiple sinograms with independent weight-integration functions. Following administration of the radiotracer, the scan is initiated to collect multiple time-weighted, integrated sinograms with three different weight functions. These sinograms are reconstructed, and the images, together with the arterial blood data, are inserted into the operational equations to provide parametric rate constant images. The implementation of this method has been checked in H2(15)O and (18)F-fluorophenylalanine ((18)FPhe) studies based on a two-compartment model, and in a (18)F-fluorodeoxyglucose ((18)FDG) study based on the three-compartment model. A volunteer study, completed for each compound, yielded results consistent with those produced by existing nonlinear fitting methods. Thus, a system has been developed that is capable of rapidly generating quantitative, physiological images without dynamic data acquisition, which will be of great advantage to PET in the clinical environment. The system would also be of advantage for new-generation high-resolution PET tomographs, which acquire data in a 3-D, septa-less mode.

  25. Americans' Average Radiation Exposure

    SciTech Connect

    NA

    2000-08-11

    We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  26. Temperature averaging thermal probe

    NASA Technical Reports Server (NTRS)

    Kalil, L. F.; Reinhardt, V. (Inventor)

    1985-01-01

    A thermal probe to average temperature fluctuations over a prolonged period was formed with a temperature sensor embedded inside a solid object of a thermally conducting material. The solid object is held in a position equidistantly spaced apart from the interior surfaces of a closed housing by a mount made of a thermally insulating material. The housing is sealed to trap a vacuum or mass of air inside and thereby prevent transfer of heat directly between the environment outside of the housing and the solid object. Electrical leads couple the temperature sensor with a connector on the outside of the housing. Other solid objects of different sizes and materials may be substituted for the cylindrically-shaped object to vary the time constant of the probe.

  27. Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average

    ERIC Educational Resources Information Center

    DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.

    2007-01-01

    Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…

  28. Effect of annealing time, weight pressure and cobalt doping on the electrical and magnetic behavior of barium titanate

    NASA Astrophysics Data System (ADS)

    Samuvel, K.; Ramachandran, K.

    2016-05-01

    BaTi0.5Co0.5O3 (BTCO) nanoparticles were prepared by the solid state reaction technique using different starting materials, and their microstructure was examined by XRD, FESEM, BDS and VSM. X-ray diffraction and electron diffraction patterns showed that the nanoparticles were of the tetragonal BTCO phase. The BTCO nanoparticles prepared from the starting materials of as-prepared titanium oxide, cobalt oxide and barium carbonate have a spherical grain morphology, an average size of 65 nm and a fairly narrow size distribution. The nanoscale character and the formation of the tetragonal perovskite phase, as well as the crystallinity, were confirmed using the aforementioned techniques. Dielectric properties of the samples were measured at different frequencies. Broadband dielectric spectroscopy was applied to investigate the electrical properties of the disordered perovskite-like ceramics over a wide temperature range. The doped BTCO samples exhibited low loss factors at 1 kHz and 1 MHz.

  29. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  30. Screen-time Weight-loss Intervention Targeting Children at Home (SWITCH): A randomized controlled trial study protocol

    PubMed Central

    2011-01-01

    Background Approximately one third of New Zealand children and young people are overweight or obese. A similar proportion (33%) do not meet recommendations for physical activity, and 70% do not meet recommendations for screen time. Increased time being sedentary is positively associated with being overweight. There are few family-based interventions aimed at reducing sedentary behavior in children. The aim of this trial is to determine the effects of a 24 week home-based, family oriented intervention to reduce sedentary screen time on children's body composition, sedentary behavior, physical activity, and diet. Methods/Design The study design is a pragmatic two-arm parallel randomized controlled trial. Two hundred and seventy overweight children aged 9-12 years and primary caregivers are being recruited. Participants are randomized to intervention (family-based screen time intervention) or control (no change). At the end of the study, the control group is offered the intervention content. Data collection is undertaken at baseline and 24 weeks. The primary trial outcome is child body mass index (BMI) and standardized body mass index (zBMI). Secondary outcomes are change from baseline to 24 weeks in child percentage body fat; waist circumference; self-reported average daily time spent in physical and sedentary activities; dietary intake; and enjoyment of physical activity and sedentary behavior. Secondary outcomes for the primary caregiver include change in BMI and self-reported physical activity. Discussion This study provides an excellent example of a theory-based, pragmatic, community-based trial targeting sedentary behavior in overweight children. The study has been specifically designed to allow for estimation of the consistency of effects on body composition for Māori (indigenous), Pacific and non-Māori/non-Pacific ethnic groups. If effective, this intervention is imminently scalable and could be integrated within existing weight management programs. Trial…

  31. The Average of Rates and the Average Rate.

    ERIC Educational Resources Information Center

    Lindstrom, Peter

    1988-01-01

    Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
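
    The classic example: for equal distances covered at different speeds, the correct average rate is the harmonic mean of the speeds, not their arithmetic mean. A quick check:

      # Drive 60 miles at 30 mph, then 60 miles at 60 mph.
      # Arithmetic mean of the rates, (30 + 60) / 2 = 45 mph, is wrong.
      d1 = d2 = 60.0
      r1, r2 = 30.0, 60.0
      avg_rate = (d1 + d2) / (d1 / r1 + d2 / r2)   # 120 miles / 3 h = 40 mph
      harmonic = 2.0 / (1.0 / r1 + 1.0 / r2)       # harmonic mean = 40 mph, matches
      print(avg_rate, harmonic)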

  32. High average power pockels cell

    DOEpatents

    Daly, Thomas P.

    1991-01-01

    A high average power pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.

  33. Determining GPS average performance metrics

    NASA Technical Reports Server (NTRS)

    Moore, G. V.

    1995-01-01

    Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego as over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude-belt dwell times among longitude points is used to compute average performance metrics. These metrics include the average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.

  34. Evaluations of average level spacings

    SciTech Connect

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, it is extremely difficult, if not totally impossible, to detect a complete sequence of levels without mixing levels of other parities. Most methods derive the average level spacing by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example using 168Er data. 19 figures, 2 tables.

  35. Vibrational averages along thermal lines

    NASA Astrophysics Data System (ADS)

    Monserrat, Bartomeu

    2016-01-01

    A method is proposed for the calculation of vibrational quantum and thermal expectation values of physical properties from first principles. Thermal lines are introduced: these are lines in configuration space parametrized by temperature, such that the value of any physical property along them is approximately equal to the vibrational average of that property. The number of sampling points needed to explore the vibrational phase space is reduced by up to an order of magnitude when the full vibrational density is replaced by thermal lines. Calculations of the vibrational averages of several properties and systems are reported, namely, the internal energy and the electronic band gap of diamond and silicon, and the chemical shielding tensor of L-alanine. Thermal lines pave the way for complex calculations of vibrational averages, including large systems and methods beyond semilocal density functional theory.

  36. Polyhedral Painting with Group Averaging

    ERIC Educational Resources Information Center

    Farris, Frank A.; Tsao, Ryan

    2016-01-01

    The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…
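
    Group averaging replaces a function f by its average over a symmetry group, f̄(x) = (1/|G|) Σ_g f(g·x), which is invariant under the group by construction. A small numeric sketch with the 4-fold rotation group of the square acting on the plane (a toy analogue of the spherical colorings described above):

      import numpy as np

      def rotation(theta):
          c, s = np.cos(theta), np.sin(theta)
          return np.array([[c, -s], [s, c]])

      group = [rotation(k * np.pi / 2) for k in range(4)]       # cyclic group C4

      f = lambda p: np.exp(-((p[0] - 1.0) ** 2 + p[1] ** 2))    # not symmetric

      def f_avg(p):
          """Average of f over the group orbit of p; C4-invariant."""
          return np.mean([f(g @ p) for g in group])

      p = np.array([0.7, 0.2])
      print(f_avg(p), f_avg(group[1] @ p))   # equal values: invariance holds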

  37. Averaging Robertson-Walker cosmologies

    SciTech Connect

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-04-15

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^(-6), with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^(-8) and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.

  38. Achronal averaged null energy condition

    SciTech Connect

    Graham, Noah; Olum, Ken D.

    2007-09-15

    The averaged null energy condition (ANEC) requires that the integral over a complete null geodesic of the stress-energy tensor projected onto the geodesic tangent vector is never negative. This condition is sufficient to prove many important theorems in general relativity, but it is violated by quantum fields in curved spacetime. However there is a weaker condition, which is free of known violations, requiring only that there is no self-consistent spacetime in semiclassical gravity in which ANEC is violated on a complete, achronal null geodesic. We indicate why such a condition might be expected to hold and show that it is sufficient to rule out closed timelike curves and wormholes connecting different asymptotically flat regions.

  39. Flexible time domain averaging technique

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics, such as those caused by faults like gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed, but they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency-domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise was processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively; moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments were also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
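
    For reference, conventional TDA (the baseline the FTDA improves on) synchronously averages an integer number of signal periods, reinforcing period-locked components and attenuating noise by roughly the square root of the number of periods. A minimal numpy sketch of that conventional scheme, not of the FTDA itself:

      import numpy as np

      rng = np.random.default_rng(3)
      fs, period_s, n_periods = 1000, 0.1, 50   # sample rate (Hz), period (s), count
      spp = int(fs * period_s)                  # samples per period
      t = np.arange(n_periods * spp) / fs

      # Periodic gear-mesh-like signal buried in noise (10 Hz fundamental).
      clean = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 30 * t)
      noisy = clean + 1.5 * rng.standard_normal(t.size)

      # Conventional TDA: cut the record into periods, average point-by-point.
      averaged = noisy.reshape(n_periods, spp).mean(axis=0)

      resid = averaged - clean[:spp]
      print(f"noise std 1.50 -> {resid.std():.3f} "
            f"(theory: 1.5/sqrt(50) = {1.5 / np.sqrt(50):.3f})")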

  40. Circadian Activity Rhythms and Sleep in Nurses Working Fixed 8-hr Shifts.

    PubMed

    Kang, Jiunn-Horng; Miao, Nae-Fang; Tseng, Ing-Jy; Sithole, Trevor; Chung, Min-Huey

    2015-05-01

    Shift work is associated with adverse health outcomes. The aim of this study was to explore the effects of shift work on circadian activity rhythms (CARs) and on objective and subjective sleep quality in nurses. Female day-shift (n = 16), evening-shift (n = 6), and night-shift (n = 13) nurses wore wrist actigraphs to monitor their activity. We used cosinor analysis and time-frequency analysis to study CARs. Night-shift nurses exhibited the lowest values of circadian rhythm amplitude, acrophase, autocorrelation, and mean circadian relative power (CRP), whereas evening-shift workers exhibited the greatest standard deviation of the CRP among the three shift groups. That is, night-shift nurses had less robust CARs, and evening-shift nurses had greater variations in CARs, compared with nurses who worked other shifts. Our results highlight the importance of assessing CARs to prevent the adverse effects of shift work on nurses' health.
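
    The cosinor analysis mentioned here fits a cosine of known period (24 h) by ordinary least squares on cosine and sine regressors, yielding the rhythm's MESOR (mean level), amplitude, and acrophase. A sketch on synthetic actigraphy counts:

      import numpy as np

      rng = np.random.default_rng(4)
      t = np.arange(0.0, 72.0, 0.5)   # 3 days of readings, hours
      y = 50 + 20 * np.cos(2 * np.pi * (t - 15) / 24) + 5 * rng.standard_normal(t.size)

      # y = M + A*cos(w*t + phi) = M + b1*cos(w*t) + b2*sin(w*t), with w = 2*pi/24.
      w = 2 * np.pi / 24
      X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
      (M, b1, b2), *_ = np.linalg.lstsq(X, y, rcond=None)

      amplitude = np.hypot(b1, b2)
      acrophase_h = (np.arctan2(b2, b1) / w) % 24   # clock time of peak, hours
      print(f"MESOR={M:.1f}, amplitude={amplitude:.1f}, acrophase={acrophase_h:.1f} h")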

  41. Below-Average, Average, and Above-Average Readers Engage Different and Similar Brain Regions while Reading

    ERIC Educational Resources Information Center

    Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri

    2006-01-01

    Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…

  42. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  43. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
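
    In essence, averaging plans of this kind compare a heat-input-weighted average of the units' actual NOx emission rates against the corresponding weighted average of their applicable limits. An illustrative sketch with hypothetical units (not regulatory text):

      # Heat-input-weighted NOx emissions averaging across units in a plan.
      # (actual rate lb/mmBtu, applicable limit lb/mmBtu, annual heat input mmBtu)
      units = [(0.38, 0.45, 9.0e6),
               (0.52, 0.45, 4.0e6),
               (0.40, 0.50, 6.0e6)]

      heat = sum(h for _, _, h in units)
      actual_avg = sum(r * h for r, _, h in units) / heat
      limit_avg = sum(l * h for _, l, h in units) / heat
      print(f"actual {actual_avg:.3f} vs limit {limit_avg:.3f} lb/mmBtu:",
            "in compliance" if actual_avg <= limit_avg else "out of compliance")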

  44. Symptoms in pediatric asthmatics and air pollution: differences in effects by symptom severity, anti-inflammatory medication use and particulate averaging time.

    PubMed Central

    Delfino, R J; Zeiger, R S; Seltzer, J M; Street, D H

    1998-01-01

    Experimental research in humans and animals points to the importance of adverse respiratory effects from short-term particle exposures and to the importance of proinflammatory effects of air pollutants, particularly O(3). However, particle averaging time has not been subjected to direct scientific evaluation, and there is a lack of epidemiological research examining both this issue and whether modification of air pollutant effects occurs with differences in asthma severity and anti-inflammatory medication use. The present study examined the relationship of adverse asthma symptoms (bothersome or interfering with daily activities or sleep) to O(3) and particles ≤10 micrometers (PM10) in a Southern California community in the air inversion zone (1200-2100 ft) with high O(3) and low PM (R = 0.3). A panel of 25 asthmatics 9-17 years of age was followed daily, August through October 1995 (n = 1,759 person-days, excluding one subject without symptoms). Exposures included stationary outdoor hourly PM10 (highest 24-hr mean, 54 microgram/m(3); median of 1-hr maximums, 56 microgram/m(3)) and O(3) (mean of 1-hr maximums, 90 ppb; 5 days ≥120 ppb). Longitudinal regression analyses utilized the generalized estimating equations (GEE) model controlling for autocorrelation, day of week, outdoor fungi, and weather. Asthma symptoms were significantly associated with both outdoor O(3) and PM10 in single-pollutant and co-regressions, with 1-hr and 8-hr maximum PM10 having larger effects than the 24-hr mean. Subgroup analyses showed effects of current-day PM10 maximums were strongest in 10 more frequently symptomatic (MS) children: the odds ratios (ORs) for adverse symptoms from 90th-percentile increases were 2.24 [95% confidence interval (CI), 1.46-3.46] for 1-hr PM10 (47 microgram/m(3)); 1.82 (CI, 1.18-2.81) for 8-hr PM10 (36 microgram/m(3)); and 1.50 (CI, 0.80-2.80) for 24-hr PM10 (25 microgram/m(3)). Subgroup analyses…

  45. Averaging and Adding in Children's Worth Judgements

    ERIC Educational Resources Information Center

    Schlottmann, Anne; Harman, Rachel M.; Paine, Julie

    2012-01-01

    Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…

  46. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  47. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  10. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  11. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  12. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  13. Designing Digital Control Systems With Averaged Measurements

    NASA Technical Reports Server (NTRS)

    Polites, Michael E.; Beale, Guy O.

    1990-01-01

    Rational criteria represent improvement over "cut-and-try" approach. Recent development in theory of control systems yields improvements in mathematical modeling and design of digital feedback controllers using time-averaged measurements. By using one of new formulations for systems with time-averaged measurements, designer takes averaging effect into account when modeling plant, eliminating need to iterate design and simulation phases.

  14. Bayesian Model Averaging for Propensity Score Analysis.

    PubMed

    Kaplan, David; Chen, Jianshen

    2014-01-01

    This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA, an approach that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo (MCMC) sampling to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam's window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.
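
    As a rough sketch of model-averaged propensity scores, the following uses Akaike weights over candidate logistic models, a simple frequentist stand-in for the approximate model averaging described above (the fully Bayesian MCMC version is more involved); the column names and candidate-model set are hypothetical.

        import itertools
        import numpy as np
        import statsmodels.formula.api as smf

        def averaged_propensity(df, treatment="treat", covariates=("x1", "x2", "x3")):
            # Fit a logistic propensity model for every nonempty covariate subset.
            fits = []
            for k in range(1, len(covariates) + 1):
                for subset in itertools.combinations(covariates, k):
                    formula = f"{treatment} ~ " + " + ".join(subset)
                    fits.append(smf.logit(formula, data=df).fit(disp=0))
            aic = np.array([m.aic for m in fits])
            w = np.exp(-0.5 * (aic - aic.min()))
            w /= w.sum()                                   # Akaike weights
            ps = np.column_stack([m.predict(df) for m in fits])
            return ps @ w                                  # model-averaged propensity score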

  15. Average-cost based robust structural control

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.

    1993-01-01

    A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.

  16. Statistics of time averaged atmospheric scintillation

    SciTech Connect

    Stroud, P.

    1994-02-01

    A formulation has been constructed to recover the statistics of the moving average of the scintillation Strehl from a discrete set of measurements. A program of airborne atmospheric propagation measurements was analyzed to find the correlation function of the relative intensity over displaced propagation paths. The variance in continuous moving averages of the relative intensity was then found in terms of the correlation functions. An empirical formulation of the variance of the continuous moving average of the scintillation Strehl has been constructed. The resulting characterization of the variance of the finite time averaged Strehl ratios is being used to assess the performance of an airborne laser system.
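
    The variance of a continuous moving average can be written directly in terms of the intensity correlation function; the following standard identity (our notation, consistent with the construction described above) shows the form such a formulation takes:

        % C(tau): stationary autocovariance of the relative intensity,
        % C(tau) = <dI(t) dI(t+tau)>; T: length of the averaging window.
        \operatorname{Var}\!\left[\frac{1}{T}\int_{0}^{T} I(t)\,dt\right]
            = \frac{2}{T}\int_{0}^{T}\left(1 - \frac{\tau}{T}\right)C(\tau)\,d\tau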

  17. Cosmological ensemble and directional averages of observables

    SciTech Connect

    Bonvin, Camille; Clarkson, Chris; Durrer, Ruth; Maartens, Roy; Umeh, Obinna

    2015-07-01

    We show that at second order, ensemble averages of observables and directional averages do not commute due to gravitational lensing—observing the same thing in many directions over the sky is not the same as taking an ensemble average. In principle this non-commutativity is significant for a variety of quantities that we often use as observables and can lead to a bias in parameter estimation. We derive the relation between the ensemble average and the directional average of an observable, at second order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focusing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance in a given observed direction is increased by gravitational lensing, whereas the directional average of the distance is decreased. For a generic observable, there exists a particular function of the observable that is not affected by second-order lensing perturbations. We also show that standard areas have an advantage over standard rulers, and we discuss the subtleties involved in averaging in the case of supernova observations.

  18. Spatial limitations in averaging social cues

    PubMed Central

    Florey, Joseph; Clifford, Colin W. G.; Dakin, Steven; Mareschal, Isabelle

    2016-01-01

    The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers’ ability to average social cues with their averaging of a non-social cue. Estimates of observers’ internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging are examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589
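
    For reference, the equivalent noise function conventionally fitted to such discrimination thresholds has the two-parameter form below (internal noise and effective sample size); this is the standard model, and the authors' exact parameterization may differ:

        % T: discrimination threshold; sigma_ext: external (stimulus) noise;
        % sigma_int: internal noise; n_samp: effective number of pooled samples.
        % Thresholds fall with n_samp and rise once sigma_ext dominates sigma_int.
        T^{2} \;=\; \frac{\sigma_{\mathrm{int}}^{2} + \sigma_{\mathrm{ext}}^{2}}{n_{\mathrm{samp}}}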

  19. Cell averaging Chebyshev methods for hyperbolic problems

    NASA Technical Reports Server (NTRS)

    Cai, Wei; Gottlieb, David; Harten, Ami

    1990-01-01

    A cell averaging method for the Chebyshev approximations of first order hyperbolic equations in conservation form is described. Formulas are presented for transforming between pointwise data at the collocation points and cell averaged quantities, and vice-versa. This step, trivial for the finite difference and Fourier methods, is nontrivial for the global polynomials used in spectral methods. The cell averaging methods presented are proven stable for linear scalar hyperbolic equations, and numerical simulations of shock-density wave interaction using the new cell averaging Chebyshev methods are presented.

  20. Dynamic Multiscale Averaging (DMA) of Turbulent Flow

    SciTech Connect

    Richard W. Johnson

    2012-09-01

    A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
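
    A minimal sketch of the two averaging operations at the heart of DMA, assuming a simple 2x2 block coarsening on a uniform 2-D grid (an illustrative choice, not the paper's discretization):

        import numpy as np

        def block_average(field: np.ndarray, b: int = 2) -> np.ndarray:
            # Volume-average a fine-grid field onto a grid coarsened by factor b
            # (dimensions assumed divisible by b).
            ny, nx = field.shape
            return field.reshape(ny // b, b, nx // b, b).mean(axis=(1, 3))

        def coupling_correlation(u_t_avg, v_t_avg, uv_t_avg, b=2):
            # <uv> - <u><v> on the coarse grid: the correlation generated by
            # averaging, which DMA evaluates directly from the finer-scale
            # solution and adds as a source term on the next coarser mesh.
            return block_average(uv_t_avg, b) - (
                block_average(u_t_avg, b) * block_average(v_t_avg, b))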

  1. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  2. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  3. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  4. Whatever Happened to the Average Student?

    ERIC Educational Resources Information Center

    Krause, Tom

    2005-01-01

    Mandated state testing, college entrance exams and their perceived need for higher and higher grade point averages have raised the anxiety levels felt by many of the average students. Too much focus is placed on state test scores and college entrance standards with not enough focus on the true level of the students. The author contends that…

  5. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... class or subclass: Credit = (Average Standard − Emission Level) × (Total Annual Production) × (Useful Life) Deficit = (Emission Level − Average Standard) × (Total Annual Production) × (Useful Life) (l....000 Where: FELi = The FEL to which the engine family is certified. ULi = The useful life of the...
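
    The credit and deficit formulas quoted in the excerpt translate directly into code; a trivial transcription (variable names ours, units as specified in the regulation) is:

        def credit(avg_standard, emission_level, production, useful_life):
            # Credit = (Average Standard - Emission Level) x Production x Useful Life
            return (avg_standard - emission_level) * production * useful_life

        def deficit(avg_standard, emission_level, production, useful_life):
            # Deficit = (Emission Level - Average Standard) x Production x Useful Life
            return (emission_level - avg_standard) * production * useful_life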

  6. Determinants of College Grade Point Averages

    ERIC Educational Resources Information Center

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  7. Average Transmission Probability of a Random Stack

    ERIC Educational Resources Information Center

    Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

    2010-01-01

    The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…

  8. Analogue Divider by Averaging a Triangular Wave

    NASA Astrophysics Data System (ADS)

    Selvam, Krishnagiri Chinnathambi

    2017-03-01

    A new analogue divider circuit based on averaging a triangular wave with operational amplifiers is explained in this paper. The reference triangular waveform is shifted from the zero voltage level up towards the positive supply voltage; its positive portion is obtained by a positive rectifier and its average value by a low-pass filter. The same triangular waveform is shifted from the zero voltage level down towards the negative supply voltage; its negative portion is obtained by a negative rectifier and its average value by another low-pass filter. Both averaged voltages are combined in a summing amplifier, and the summed voltage is applied to an op-amp as the negative input. This op-amp operates in a closed negative-feedback loop, and its output is the divider output.

  9. Chronic Moderate Sleep Restriction in Older Long Sleepers and Older Average Duration Sleepers: A Randomized Controlled Trial

    PubMed Central

    Youngstedt, Shawn D.; Jean-Louis, Girardin; Bootzin, Richard R.; Kripke, Daniel F.; Cooper, Jonnifer; Dean, Lauren R.; Catao, Fabio; James, Shelli; Vining, Caitlyn; Williams, Natasha J.; Irwin, Michael R.

    2013-01-01

    Epidemiologic studies have consistently shown that sleeping < 7 hr and ≥ 8 hr is associated with increased mortality and morbidity. The risks of short sleep may be consistent with results from experimental sleep deprivation studies. However, there has been little study of chronic moderate sleep restriction and no evaluation of older adults who might be more vulnerable to negative effects of sleep restriction, given their age-related morbidities. Moreover, the risks of long sleep have scarcely been examined experimentally. Moderate sleep restriction might benefit older long sleepers who often spend excessive time in bed (TIB), in contrast to older adults with average sleep patterns. Our aims are: (1) to examine the ability of older long sleepers and older average sleepers to adhere to 60 min TIB restriction; and (2) to contrast effects of chronic TIB restriction in older long vs. average sleepers. Older adults (n=100) (60–80 yr) who sleep 8–9 hr per night and 100 older adults who sleep 6–7.25 hr per night will be examined at 4 sites over 5 years. Following a 2-week baseline, participants will be randomized to one of two 12-week treatments: (1) a sleep restriction involving a fixed sleep-wake schedule, in which TIB is reduced 60 min below each participant’s baseline TIB; (2) a control treatment involving no sleep restriction, but a fixed sleep schedule. Sleep will be assessed with actigraphy and a diary. Measures will include glucose tolerance, sleepiness, depressive symptoms, quality of life, cognitive performance, incidence of illness or accident, and inflammation. PMID:23811325

  10. Light propagation in the averaged universe

    SciTech Connect

    Bagheri, Samae; Schwarz, Dominik J.

    2014-10-01

    Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.

  11. Average shape of transport-limited aggregates.

    PubMed

    Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z

    2005-08-12

    We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry.

  12. Cosmic inhomogeneities and averaged cosmological dynamics.

    PubMed

    Paranjape, Aseem; Singh, T P

    2008-10-31

    If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.

  13. Average-passage flow model development

    NASA Technical Reports Server (NTRS)

    Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark

    1989-01-01

    A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average passage flow model described the time averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations were executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low speed turbine was executed. The results of this simulation, including a comparison with experimental data, is discussed.

  14. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  15. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  16. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  17. Spacetime Average Density (SAD) cosmological measures

    SciTech Connect

    Page, Don N.

    2014-11-01

    The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.

  18. Bimetal sensor averages temperature of nonuniform profile

    NASA Technical Reports Server (NTRS)

    Dittrich, R. T.

    1968-01-01

    Instrument that measures an average temperature across a nonuniform temperature profile under steady-state conditions has been developed. The principle of operation is an application of the expansion of a solid material caused by a change in temperature.

  19. Rotational averaging of multiphoton absorption cross sections

    NASA Astrophysics Data System (ADS)

    Friese, Daniel H.; Beerepoot, Maarten T. P.; Ruud, Kenneth

    2014-11-01

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  20. Rotational averaging of multiphoton absorption cross sections.

    PubMed

    Friese, Daniel H; Beerepoot, Maarten T P; Ruud, Kenneth

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  1. Monthly average polar sea-ice concentration

    USGS Publications Warehouse

    Schweitzer, Peter N.

    1995-01-01

    The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.

  2. Radial averages of astigmatic TEM images.

    PubMed

    Fernando, K Vince

    2008-10-01

    The Contrast Transfer Function (CTF) of an image, which modulates images taken from a Transmission Electron Microscope (TEM), is usually determined from the radial average of the power spectrum of the image (Frank, J., Three-dimensional Electron Microscopy of Macromolecular Assemblies, Oxford University Press, Oxford, 2006). The CTF is primarily defined by the defocus. If the defocus estimate is accurate enough then it is possible to demodulate the image, which is popularly known as the CTF correction. However, it is known that the radial average is somewhat attenuated if the image is astigmatic (see Fernando, K.V., Fuller, S.D., 2007. Determination of astigmatism in TEM images. Journal of Structural Biology 157, 189-200), but this distortion due to astigmatism has not been fully studied or understood up to now. We have discovered the exact mathematical relationship between the radial averages of TEM images with and without astigmatism. This relationship is determined by a zeroth order Bessel function of the first kind and hence we can exactly quantify this distortion in the radial averages of signal and power spectra of astigmatic images. The argument to this Bessel function is similar to an aberration function (without the spherical aberration term) except that the defocus parameter is replaced by the difference of the defoci in the major and minor axes of astigmatism. The ill effects due to this Bessel function are twofold. Since the zeroth order Bessel function is a decaying oscillatory function, it introduces additional zeros to the radial average and it also attenuates the CTF signal in the radial averages. Using our analysis, it is possible to simulate the effects of astigmatism in radial averages by imposing Bessel functions on idealized radial averages of images which are not astigmatic. We validate our theory using astigmatic TEM images.
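
    A schematic sketch of the stated result follows: averaging a CTF of the form sin(a + b·cos2θ) over the azimuth θ yields J0(b)·sin(a), so the astigmatic radial average is the astigmatism-free one modulated by a zeroth-order Bessel factor. The simplified CTF (no spherical aberration or envelope terms) and the numerical parameters below are placeholders, not the paper's full expressions.

        import numpy as np
        from scipy.special import j0

        lam, df_mean, d_df = 0.0197, 15000.0, 800.0   # angstroms: wavelength, mean defocus, defocus difference
        k = np.linspace(0.0, 0.05, 500)               # spatial frequency (1/A)
        ctf_radavg = -np.sin(np.pi * lam * df_mean * k**2)       # idealized, astigmatism-free radial average
        astig_radavg = j0(0.5 * np.pi * lam * d_df * k**2) * ctf_radavg  # J0-modulated (extra zeros, attenuation)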

  3. Instrument to average 100 data sets

    NASA Technical Reports Server (NTRS)

    Tuma, G. B.; Birchenough, A. G.; Rice, W. J.

    1977-01-01

    An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points which are displayed on an oscilloscope screen to facilitate recording and is available in real time. Input can be any parameter which is expressed as a ±10-volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. This instrument has been used successfully on a 1975 Chevrolet V8 engine and on a Continental 6-cylinder aircraft engine. While this instrument was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
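
    The averaging operation the instrument implements is easy to state in array terms; a toy transcription (stand-in data, 100 cycles of 2048 points each):

        import numpy as np

        # Stand-in for digitized +/-10 V parameter samples over 100 cycles.
        samples = np.random.uniform(-10.0, 10.0, size=100 * 2048)
        average_curve = samples.reshape(100, 2048).mean(axis=0)   # one 2048-point average curve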

  4. The generic modeling fallacy: Average biomechanical models often produce non-average results!

    PubMed

    Cook, Douglas D; Robertson, Daniel J

    2016-11-07

    Computational biomechanics models constructed using nominal or average input parameters are often assumed to produce average results that are representative of a target population of interest. To investigate this assumption a stochastic Monte Carlo analysis of two common biomechanical models was conducted. Consistent discrepancies were found between the behavior of average models and the average behavior of the population from which the average models' input parameters were derived. More interestingly, broadly distributed sets of non-average input parameters were found to produce average or near average model behaviors. In other words, average models did not produce average results, and models that did produce average results possessed non-average input parameters. These findings have implications on the prevalent practice of employing average input parameters in computational models. To facilitate further discussion on the topic, the authors have termed this phenomenon the "Generic Modeling Fallacy". The mathematical explanation of the Generic Modeling Fallacy is presented and suggestions for avoiding it are provided. Analytical and empirical examples of the Generic Modeling Fallacy are also given.
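
    A minimal Monte Carlo illustration of the phenomenon, using a toy cantilever-stiffness model (our example, not one of the paper's two models): because the model is nonlinear in thickness, the average-input model's output differs from the population's average output.

        import numpy as np

        rng = np.random.default_rng(0)
        E = rng.normal(10e9, 2e9, 100_000)      # elastic modulus (Pa)
        t = rng.normal(2e-3, 0.4e-3, 100_000)   # thickness (m)

        def stiffness(E, t, L=0.1, w=0.01):
            # End stiffness of a cantilever beam: k = 3EI/L^3 with I = w t^3 / 12.
            return E * w * t**3 / (4 * L**3)    # nonlinear (cubic) in t

        print(stiffness(E.mean(), t.mean()))    # output of the "average model"
        print(stiffness(E, t).mean())           # average output of the population (larger)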

  5. Averaged controllability of parameter dependent conservative semigroups

    NASA Astrophysics Data System (ADS)

    Lohéac, Jérôme; Zuazua, Enrique

    2017-02-01

    We consider the problem of averaged controllability for parameter depending (either in a discrete or continuous fashion) control systems, the aim being to find a control, independent of the unknown parameters, so that the average of the states is controlled. We do it in the context of conservative models, both in an abstract setting and also analysing the specific examples of the wave and Schrödinger equations. Our first result is of perturbative nature. Assuming the averaging probability measure to be a small parameter-dependent perturbation (in a sense that we make precise) of an atomic measure given by a Dirac mass corresponding to a specific realisation of the system, we show that the averaged controllability property is achieved whenever the system corresponding to the support of the Dirac is controllable. Similar tools can be employed to obtain averaged versions of the so-called Ingham inequalities. Particular attention is devoted to the 1d wave equation in which the time-periodicity of solutions can be exploited to obtain more precise results, provided the parameters involved satisfy Diophantine conditions ensuring the lack of resonances.

  6. Books Average Previous Decade of Economic Misery

    PubMed Central

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  7. Books average previous decade of economic misery.

    PubMed

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  8. Attractors and Time Averages for Random Maps

    NASA Astrophysics Data System (ADS)

    Araujo, Vitor

    2006-07-01

    Considering random noise in finite dimensional parameterized families of diffeomorphisms of a compact finite dimensional boundaryless manifold M, we show the existence of time averages for almost every orbit of each point of M, imposing mild conditions on the families. Moreover these averages are given by a finite number of physical absolutely continuous stationary probability measures. We use this result to deduce that situations with infinitely many sinks and Hénon-like attractors are not stable under random perturbations, e.g., Newhouse's and Colli's phenomena in the generic unfolding of a quadratic homoclinic tangency by a one-parameter family of diffeomorphisms.

  9. Polarized electron beams at milliampere average current

    SciTech Connect

    Poelker, Matthew

    2013-11-01

    This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 µA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltages > 350 kV.

  10. Average power meter for laser radiation

    NASA Astrophysics Data System (ADS)

    Shevnina, Elena I.; Maraev, Anton A.; Ishanin, Gennady G.

    2016-04-01

    Advanced metrology equipment, in particular an average power meter for laser radiation, is necessary for effective using of laser technology. In the paper we propose a measurement scheme with periodic scanning of a laser beam. The scheme is implemented in a pass-through average power meter that can perform continuous monitoring during the laser operation in pulse mode or in continuous wave mode and at the same time not to interrupt the operation. The detector used in the device is based on the thermoelastic effect in crystalline quartz as it has fast response, long-time stability of sensitivity, and almost uniform sensitivity dependence on the wavelength.

  11. An improved moving average technical trading rule

    NASA Astrophysics Data System (ADS)

    Papailias, Fotis; Thomakos, Dimitrios D.

    2015-06-01

    This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
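
    A sketch of the modified strategy in its 'long only' form: cross-over 'buy' signals combined with a dynamic threshold acting as a trailing stop. The specific threshold rule below (a fixed fraction of the running maximum since entry) is our assumption; the paper's dynamic threshold is defined differently.

        import numpy as np
        import pandas as pd

        def modified_ma_strategy(price: pd.Series, window=50, stop_frac=0.95):
            ma = price.rolling(window).mean()
            position, peak = 0, -np.inf
            signal = pd.Series(0, index=price.index)
            for i in range(window, len(price)):
                p = price.iloc[i]
                if position == 0 and p > ma.iloc[i]:
                    position, peak = 1, p            # cross-over 'buy' signal
                elif position == 1:
                    peak = max(peak, p)
                    if p < stop_frac * peak:         # dynamic trailing stop
                        position = 0
                signal.iloc[i] = position
            return signal  # 1 = long, 0 = flat; shift(1) before computing returns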

  12. Model averaging and muddled multimodel inferences

    USGS Publications Warehouse

    Cade, Brian S.

    2015-01-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the

  13. Average: the juxtaposition of procedure and context

    NASA Astrophysics Data System (ADS)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  14. Average length of stay in hospitals.

    PubMed

    Egawa, H

    1984-03-01

    The average length of stay is essentially an important and appropriate index for hospital bed administration. However, from the position that it is not necessarily an appropriate index in Japan, an analysis is made of the difference in the health care facility system between the United States and Japan. Concerning the length of stay in Japanese hospitals, the median appeared to better represent the situation. It is emphasized that in order for the average length of stay to become an appropriate index, there is need to promote regional health, especially facility planning.

  15. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... certification: (1) A statement that, to the best of your belief, you will not have a negative credit balance for... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...

  16. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... certification: (1) A statement that, to the best of your belief, you will not have a negative credit balance for... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...

  17. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... certification: (1) A statement that, to the best of your belief, you will not have a negative credit balance for... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...

  18. Measuring Time-Averaged Blood Pressure

    NASA Technical Reports Server (NTRS)

    Rothman, Neil S.

    1988-01-01

    Device measures time-averaged component of absolute blood pressure in artery. Includes compliant cuff around artery and external monitoring unit. Ceramic construction in monitoring unit suppresses ebb and flow of pressure-transmitting fluid in sensor chamber. Transducer measures only static component of blood pressure.

  19. Why Johnny Can Be Average Today.

    ERIC Educational Resources Information Center

    Sturrock, Alan

    1997-01-01

    During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…

  20. Bayesian Model Averaging for Propensity Score Analysis

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  1. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 11 2012-07-01 2012-07-01 false Emission averaging. 63.846 Section 63...) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Primary Aluminum Reduction Plants § 63.846...

  2. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 11 2014-07-01 2014-07-01 false Emission averaging. 63.846 Section 63...) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Primary Aluminum Reduction Plants § 63.846...

  3. Initial Conditions in the Averaging Cognitive Model

    ERIC Educational Resources Information Center

    Noventa, S.; Massidda, D.; Vidotto, G.

    2010-01-01

    The initial state parameters s0 and w0 are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…

  4. Average thermal characteristics of solar wind electrons

    NASA Technical Reports Server (NTRS)

    Montgomery, M. D.

    1972-01-01

    Average solar wind electron properties based on a 1 year Vela 4 data sample, from May 1967 to May 1968, are presented. Frequency distributions of electron-to-ion temperature ratio, electron thermal anisotropy, and thermal energy flux are presented. The resulting evidence concerning heat transport in the solar wind is discussed.

  5. World average top-quark mass

    SciTech Connect

    Glenzinski, D. (Fermilab)

    2008-01-01

    This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.

  6. Averaging on Earth-Crossing Orbits

    NASA Astrophysics Data System (ADS)

    Gronchi, G. F.; Milani, A.

    The orbits of planet-crossing asteroids (and comets) can undergo close approaches and collisions with some major planet. This introduces a singularity in the N-body Hamiltonian, and the averaging of the equations of motion, traditionally used to compute secular perturbations, is undefined. We show that it is possible to define in a rigorous way some generalised averaged equations of motion, in such a way that the generalised solutions are unique and piecewise smooth. This is obtained, both in the planar and in the three-dimensional case, by means of the method of extraction of the singularities by Kantorovich. The modified distance used to approximate the singularity is the one used by Wetherill in his method to compute probability of collision. Some examples of averaged dynamics have been computed; a systematic exploration of the averaged phase space to locate the secular resonances should be the next step. Alice sighed wearily. "I think you might do something better with the time," she said, "than waste it asking riddles with no answers" (Alice in Wonderland, L. Carroll)

  7. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.

    SciTech Connect

    Ben-Zvi, Ilan; Dayran, D.; Litvinenko, V.

    2005-08-21

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  8. How Young Is Standard Average European?

    ERIC Educational Resources Information Center

    Haspelmath, Martin

    1998-01-01

    An analysis of Standard Average European, a European linguistic area, looks at 11 of its features (definite, indefinite articles, have-perfect, participial passive, anticausative prominence, nominative experiencers, dative external possessors, negation/negative pronouns, particle comparatives, A-and-B conjunction, relative clauses, verb fronting…

  9. A Functional Measurement Study on Averaging Numerosity

    ERIC Educational Resources Information Center

    Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio

    2014-01-01

    In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…

  10. Comparison of mouse brain DTI maps using K-space average, image-space average, or no average approach.

    PubMed

    Sun, Shu-Wei; Mei, Jennifer; Tuel, Keelan

    2013-11-01

    Diffusion tensor imaging (DTI) is achieved by collecting a series of diffusion-weighted images (DWIs). Signal averaging of multiple repetitions can be performed in the k-space (k-avg) or in the image space (m-avg) to improve the image quality. Alternatively, one can treat each acquisition as an independent image and use all of the data to reconstruct the DTI without doing any signal averaging (no-avg). To compare these three approaches, in this study, in vivo DTI data were collected from five normal mice. Noisy data with signal-to-noise ratios (SNR) that varied between five and 30 (before averaging) were then simulated. The DTI indices, including relative anisotropy (RA), trace of diffusion tensor (TR), axial diffusivity (λ║), and radial diffusivity (λ⊥), derived from the k-avg, m-avg, and no-avg, were then compared in the corpus callosum white matter, cortex gray matter, and the ventricles. We found that k-avg and m-avg enhanced the SNR of DWI with no significant differences. However, k-avg produced lower RA in the white matter and higher RA in the gray matter, compared to the m-avg and no-avg, regardless of SNR. The latter two produced similar DTI quantifications. We concluded that k-avg is less preferred for DTI brain imaging.
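
    A toy numpy demonstration of why the two averaging routes can differ: k-space averaging combines complex data before taking magnitudes, whereas image-space averaging of magnitudes accumulates the Rician noise floor in low-signal regions (such as strongly diffusion-attenuated tissue). Image size, signal level, and noise level are arbitrary choices.

        import numpy as np

        rng = np.random.default_rng(1)
        truth = np.zeros((64, 64))
        truth[24:40, 24:40] = 0.05                 # weak diffusion-weighted signal
        k_true = np.fft.fft2(truth)
        n_rep, sigma = 16, 1.0
        reps = [k_true + sigma * (rng.standard_normal(truth.shape)
                                  + 1j * rng.standard_normal(truth.shape))
                for _ in range(n_rep)]
        k_avg = np.abs(np.fft.ifft2(np.mean(reps, axis=0)))               # k-space average
        m_avg = np.mean([np.abs(np.fft.ifft2(r)) for r in reps], axis=0)  # image-space average
        # Background (no-signal) region: the magnitude average sits higher (Rician bias).
        print(k_avg[:16, :16].mean(), m_avg[:16, :16].mean())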

  11. Occupational noise exposure in the printing industry.

    PubMed

    McMahon, K J; McManus, P E

    1988-01-01

    The noise exposures of 274 printing production workers in 34 establishments in the New York City area were monitored. Results showed that 43% were exposed to 8-hr time-weighted average (TWA) noise exposures of 85 dBA or greater and that 14% were exposed to 8-hr TWAs of 90 dBA or greater. Within the press department, web press workers were exposed to significantly greater mean 8-hr TWAs than sheetfed press workers. In general, a greater percentage of the workers in the bindery departments were exposed to potentially harmful noise than workers in the press departments. Results of this study indicate that many workers in the printing industry may be at risk of occupational hearing loss. Further research is needed to determine the extent of hearing impairment in this group of workers.
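
    For reference, 8-hr TWA figures like those above follow from the standard OSHA noise-dose computation (29 CFR 1910.95): each level L is allowed a reference duration of 8/2^((L-90)/5) hours, the dose D is 100 times the sum of actual-to-allowed time ratios, and TWA = 16.61·log10(D/100) + 90. A short transcription with made-up shift segments:

        import math

        def noise_twa_8hr(segments):
            """segments: [(level_dBA, hours), ...] summing to a work shift."""
            dose = 100 * sum(h / (8 / 2 ** ((lvl - 90) / 5)) for lvl, h in segments)
            return 16.61 * math.log10(dose / 100) + 90

        # Hypothetical shift: 4 hr at 92 dBA plus 4 hr at 85 dBA -> TWA ~ 89.3 dBA
        print(round(noise_twa_8hr([(92, 4), (85, 4)]), 1))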

  12. Polarized electron beams at milliampere average current

    SciTech Connect

    Poelker, M.

    2013-11-07

    This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today’s CEBAF polarized source operating at ∼200 µA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltages > 350 kV.

  13. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

    We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.

  14. Average Annual Rainfall over the Globe

    ERIC Educational Resources Information Center

    Agrawal, D. C.

    2013-01-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…

  15. Stochastic Games with Average Payoff Criterion

    SciTech Connect

    Ghosh, M. K.; Bagchi, A.

    1998-11-15

    We study two-person stochastic games on a Polish state and compact action spaces and with average payoff criterion under a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of Nash equilibrium in stationary strategies is established under certain separability conditions.

  16. The Average Velocity in a Queue

    ERIC Educational Resources Information Center

    Frette, Vidar

    2009-01-01

    A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…
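
    The setup simulates directly: with no overtaking, each car's realized speed is the minimum of the preferred speeds of itself and every car ahead, so the queue average falls below the average preferred speed. A quick sketch (uniformly distributed preferred speeds are an arbitrary choice):

        import numpy as np

        rng = np.random.default_rng(2)
        preferred = rng.uniform(60, 120, size=1000)   # km/h; index 0 is the lead car
        actual = np.minimum.accumulate(preferred)     # each car limited by all cars ahead
        print(preferred.mean(), actual.mean())        # realized average is lower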

  17. Disk-Averaged Synthetic Spectra of Mars

    NASA Astrophysics Data System (ADS)

    Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-08-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.

  18. Digital Averaging Phasemeter for Heterodyne Interferometry

    NASA Technical Reports Server (NTRS)

    Johnson, Donald; Spero, Robert; Shaklan, Stuart; Halverson, Peter; Kuhnert, Andreas

    2004-01-01

    A digital averaging phasemeter has been built for measuring the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. This phasemeter performs well enough to enable interferometric measurements of distance with accuracy of the order of 100 pm and with the ability to track distance as it changes at a speed of as much as 50 cm/s. This phasemeter is unique in that it is a single, integral system capable of performing three major functions that, heretofore, have been performed by separate systems: (1) measurement of the fractional-cycle phase difference, (2) counting of multiple cycles of phase change, and (3) averaging of phase measurements over multiple cycles for improved resolution. This phasemeter also offers the advantage of making repeated measurements at a high rate: the phase is measured on every heterodyne cycle. Thus, for example, in measuring the relative phase of two signals having a heterodyne frequency of 10 kHz, the phasemeter would accumulate 10,000 measurements per second. At this high measurement rate, an accurate average phase determination can be made more quickly than is possible at a lower rate.
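
    Averaging phases naively can fail near the ±π wrap, so phase averages are usually formed on the unit circle. A small illustration of per-cycle phase averaging (standard circular-mean arithmetic, not the instrument's actual firmware):

        import numpy as np

        def average_phase(phases_rad):
            """Circular mean: average unit phasors, then take the angle,
            avoiding the wraparound bias of a plain arithmetic mean."""
            return np.angle(np.mean(np.exp(1j * np.asarray(phases_rad))))

        rng = np.random.default_rng(0)
        true_phase = 3.0  # rad, deliberately near the +/-pi wrap
        samples = true_phase + 0.3 * rng.standard_normal(10_000)
        samples = np.angle(np.exp(1j * samples))   # wrap into (-pi, pi]
        print(average_phase(samples))  # ~3.0; error shrinks like 1/sqrt(N)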

  19. Disk-averaged synthetic spectra of Mars

    NASA Technical Reports Server (NTRS)

    Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.

  20. On the ensemble averaging of PIC simulations

    NASA Astrophysics Data System (ADS)

    Codur, R. J. B.; Tsung, F. S.; Mori, W. B.

    2016-10-01

    Particle-in-cell simulations are used ubiquitously in plasma physics to study a variety of phenomena. They can be an efficient tool for modeling the Vlasov or Vlasov-Fokker-Planck equations in multiple dimensions. However, the PIC method actually models the Klimontovich equation for finite-size particles. The Vlasov-Fokker-Planck equation can be derived as the ensemble average of the Klimontovich equation. We present results of studying Landau damping and Stimulated Raman Scattering using PIC simulations where we use identical ``drivers'' but change the random number generator seeds. We show that, even for cases where a plasma wave is excited below the noise in a single simulation, the plasma wave can clearly be seen and studied if an ensemble average over O(10) simulations is made. Comparison between the results from an ensemble average and the subtraction technique is also presented. In the subtraction technique, two simulations, one with and one without the ``driver'', are conducted with the same random number generator seed and the results are subtracted. This work is supported by DOE, NSF, and ENSC (France).
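
    A toy illustration of both techniques, with Gaussian noise standing in for PIC shot noise (an assumed setup, not the actual simulation data):

        import numpy as np

        t = np.linspace(0.0, 10.0, 1000)
        signal = 0.3 * np.sin(2.0 * np.pi * t)  # "plasma wave" below the noise

        def run(seed, driven=True):
            noise = np.random.default_rng(seed).standard_normal(t.size)
            return (signal if driven else 0.0) + noise

        # Ensemble average over O(10) seeds: noise drops like 1/sqrt(N).
        ens = np.mean([run(s) for s in range(10)], axis=0)

        # Subtraction technique: same seed with and without the driver.
        sub = run(0, driven=True) - run(0, driven=False)

        print(np.std(ens - signal), np.std(sub - signal))  # ~0.32, then 0.0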

  1. Disk-averaged synthetic spectra of Mars.

    PubMed

    Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-08-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.

  2. Modern average global sea-surface temperature

    USGS Publications Warehouse

    Schweitzer, Peter N.

    1993-01-01

    The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
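
    The same-month averaging step is straightforward; a schematic with synthetic data (np.nanmean both fills cells that lack valid data in some years and suppresses interannual variability, as described above):

        import numpy as np

        # Toy stack of monthly SST images: (year, month, lat, lon); NaN = no data.
        rng = np.random.default_rng(0)
        sst = rng.uniform(0.0, 30.0, size=(10, 12, 4, 4))
        sst[rng.random(sst.shape) < 0.2] = np.nan  # simulate data dropouts

        # Average each calendar month across the 10 years.
        monthly_avg = np.nanmean(sst, axis=0)      # shape (12, 4, 4)
        print(monthly_avg.shape)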

  3. A simple algorithm for averaging spike trains.

    PubMed

    Julienne, Hannah; Houghton, Conor

    2013-02-25

    Although spike trains are the principal channel of communication between neurons, a single stimulus will elicit different spike trains from trial to trial. This variability, in both spike timings and spike number, can obscure the temporal structure of spike trains and often means that computations need to be run on numerous spike trains in order to extract features common across all the responses to a particular stimulus. This can increase the computational burden and obscure analytical results. As a consequence, it is useful to consider how to calculate a central spike train that summarizes a set of trials. Indeed, averaging responses over trials is routine for other signal types. Here, a simple method for finding a central spike train is described. The spike trains are first mapped to functions, these functions are averaged, and a greedy algorithm is then used to map the average function back to a spike train. The central spike trains are tested on a large data set. Their performance on a classification-based test is considerably better than the performance of the medoid spike trains.
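
    A simplified sketch of the map-average-map-back idea (the paper's exact function space and greedy rule may differ): smooth each train with a Gaussian, average the functions, then greedily place spikes to fit the average:

        import numpy as np

        def smooth(spike_times, t, sigma=0.01):
            """Map a spike train to a function: a sum of Gaussian bumps."""
            return sum(np.exp(-0.5 * ((t - s) / sigma) ** 2)
                       for s in spike_times)

        def central_spike_train(trials, t, n_spikes, sigma=0.01):
            target = np.mean([smooth(tr, t, sigma) for tr in trials], axis=0)
            train = []
            for _ in range(n_spikes):   # greedily add the best-fitting spike
                errs = [np.sum((smooth(train + [s], t, sigma) - target) ** 2)
                        for s in t]
                train.append(t[int(np.argmin(errs))])
            return sorted(train)

        t = np.linspace(0.0, 1.0, 200)
        trials = [[0.21, 0.52, 0.80], [0.19, 0.50, 0.83], [0.22, 0.55, 0.79]]
        print(central_spike_train(trials, t, n_spikes=3))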

  4. A Green's function quantum average atom model

    DOE PAGES

    Starrett, Charles Edward

    2015-05-21

    A quantum average atom model is reformulated using Green's functions. This allows integrals along the real energy axis to be deformed into the complex plane. The advantage is that sharp features such as resonances and bound states are broadened by a Lorentzian with a half-width chosen for numerical convenience. An implementation of this method therefore avoids numerically challenging resonance tracking and the search for weakly bound states, without changing the physical content or results of the model. A straightforward implementation results in up to a factor of 5 speed-up relative to an optimized orbital-based code.

  5. Local average height distribution of fluctuating interfaces

    NASA Astrophysics Data System (ADS)

    Smith, Naftali R.; Meerson, Baruch; Sasorov, Pavel V.

    2017-01-01

    Height fluctuations of growing surfaces can be characterized by the probability distribution of height in a spatial point at a finite time. Recently there has been spectacular progress in the studies of this quantity for the Kardar-Parisi-Zhang (KPZ) equation in 1+1 dimensions. Here we notice that, at or above a critical dimension, the finite-time one-point height distribution is ill defined in a broad class of linear surface growth models unless the model is regularized at small scales. The regularization via a system-dependent small-scale cutoff leads to a partial loss of universality. As a possible alternative, we introduce a local average height. For the linear models, the probability density of this quantity is well defined in any dimension. The weak-noise theory for these models yields the "optimal path" of the interface conditioned on a nonequilibrium fluctuation of the local average height. As an illustration, we consider the conserved Edwards-Wilkinson (EW) equation, where, without regularization, the finite-time one-point height distribution is ill defined in all physical dimensions. We also determine the optimal path of the interface in a closely related problem of the finite-time height-difference distribution for the nonconserved EW equation in 1+1 dimension. Finally, we discuss a UV catastrophe in the finite-time one-point distribution of height in the (nonregularized) KPZ equation in 2+1 dimensions.

  6. Average neutronic properties of prompt fission products

    SciTech Connect

    Foster, D.G. Jr.; Arthur, E.D.

    1982-02-01

    Calculations of the average neutronic properties of the ensemble of fission products produced by fast-neutron fission of ²³⁵U and ²³⁹Pu, where the properties are determined before the first beta decay of any of the fragments, are described. For each case we approximate the ensemble by a weighted average over 10 selected nuclides, whose properties we calculate using nuclear-model parameters deduced from the systematic properties of other isotopes of the same elements as the fission fragments. The calculations were performed primarily with the COMNUC and GNASH statistical-model codes. The results, available in ENDF/B format, include cross sections, angular distributions of neutrons, and spectra of neutrons and photons, for incident-neutron energies between 10⁻⁵ eV and 20 MeV. Over most of this energy range, we find that the capture cross section of ²³⁹Pu fission fragments is systematically a factor of two to five greater than that of ²³⁵U fission fragments.

  7. Local average height distribution of fluctuating interfaces.

    PubMed

    Smith, Naftali R; Meerson, Baruch; Sasorov, Pavel V

    2017-01-01

    Height fluctuations of growing surfaces can be characterized by the probability distribution of height in a spatial point at a finite time. Recently there has been spectacular progress in the studies of this quantity for the Kardar-Parisi-Zhang (KPZ) equation in 1+1 dimensions. Here we notice that, at or above a critical dimension, the finite-time one-point height distribution is ill defined in a broad class of linear surface growth models unless the model is regularized at small scales. The regularization via a system-dependent small-scale cutoff leads to a partial loss of universality. As a possible alternative, we introduce a local average height. For the linear models, the probability density of this quantity is well defined in any dimension. The weak-noise theory for these models yields the "optimal path" of the interface conditioned on a nonequilibrium fluctuation of the local average height. As an illustration, we consider the conserved Edwards-Wilkinson (EW) equation, where, without regularization, the finite-time one-point height distribution is ill defined in all physical dimensions. We also determine the optimal path of the interface in a closely related problem of the finite-time height-difference distribution for the nonconserved EW equation in 1+1 dimension. Finally, we discuss a UV catastrophe in the finite-time one-point distribution of height in the (nonregularized) KPZ equation in 2+1 dimensions.

  8. Global atmospheric circulation statistics: Four year averages

    NASA Technical Reports Server (NTRS)

    Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.

    1987-01-01

    Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature; zonal, meridional, and vertical winds; and the amplitudes of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies are given, along with their wavenumber decompositions, as are the Eliassen-Palm flux propagation vectors and divergences for the standing and transient eddies. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.

  9. Lagrangian averaging, nonlinear waves, and shock regularization

    NASA Astrophysics Data System (ADS)

    Bhat, Harish S.

    In this thesis, we explore various models for the flow of a compressible fluid as well as model equations for shock formation, one of the main features of compressible fluid flows. We begin by reviewing the variational structure of compressible fluid mechanics. We derive the barotropic compressible Euler equations from a variational principle in both material and spatial frames. Writing the resulting equations of motion requires certain Lie-algebraic calculations that we carry out in detail for expository purposes. Next, we extend the derivation of the Lagrangian averaged Euler (LAE-α) equations to the case of barotropic compressible flows. The derivation in this thesis involves averaging over a tube of trajectories η^ε centered around a given Lagrangian flow η. With this tube framework, the LAE-α equations are derived by following a simple procedure: start with a given action, expand via Taylor series in terms of small-scale fluid fluctuations ξ, truncate, average, and then model those terms that are nonlinear functions of ξ. We then analyze a one-dimensional subcase of the general models derived above. We prove the existence of a large family of traveling wave solutions. Computing the dispersion relation for this model, we find it is nonlinear, implying that the equation is dispersive. We carry out numerical experiments that show that the model possesses smooth, bounded solutions that display interesting pattern formation. Finally, we examine a Hamiltonian partial differential equation (PDE) that regularizes the inviscid Burgers equation without the addition of standard viscosity. Here α is a small parameter that controls a nonlinear smoothing term that we have added to the inviscid Burgers equation. We show the existence of a large family of traveling front solutions. We analyze the initial-value problem and prove well-posedness for a certain class of initial data. We prove that in the zero-α limit, without any standard viscosity…

  10. Asymmetric network connectivity using weighted harmonic averages

    NASA Astrophysics Data System (ADS)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph, using a simple weighted harmonic average of connectivity: a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the Netflix Prize, and find a significant improvement using our method over a baseline.
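
    The GEN's precise path-based recursion is in the paper; the primitive underneath it is a weighted harmonic average, where strong ties (small values) dominate far more than they would in an arithmetic mean:

        def weighted_harmonic_average(weights, values):
            """Weighted harmonic mean: sum(w) / sum(w / x)."""
            return sum(weights) / sum(w / x for w, x in zip(weights, values))

        # One strong tie among weak ones dominates the harmonic average.
        print(weighted_harmonic_average([1, 1, 1], [1.0, 1.0, 100.0]))  # ~1.49
        print(sum([1.0, 1.0, 100.0]) / 3)                               # 34.0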

  11. Quetelet, the average man and medical knowledge.

    PubMed

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  12. Average deployments versus missile and defender parameters

    SciTech Connect

    Canavan, G.H.

    1991-03-01

    This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with two-fold less effectiveness. The next 10% could double constellation sizes. 5 refs., 7 figs.

  13. Comprehensive time average digital holographic vibrometry

    NASA Astrophysics Data System (ADS)

    Psota, Pavel; Lédl, Vít; Doleček, Roman; Mokrý, Pavel; Vojtíšek, Petr; Václavík, Jan

    2016-12-01

    This paper presents a method that simultaneously deals with drawbacks of time-average digital holography: limited measurement range, limited spatial resolution, and quantitative analysis of the measured Bessel fringe patterns. When the frequency of the reference wave is shifted by an integer multiple of frequency at which the object oscillates, the measurement range of the method can be shifted either to smaller or to larger vibration amplitudes. In addition, phase modulation of the reference wave is used to obtain a sequence of phase-modulated fringe patterns. Such fringe patterns can be combined by means of phase-shifting algorithms, and amplitudes of vibrations can be straightforwardly computed. This approach independently calculates the amplitude values in every single pixel. The frequency shift and phase modulation are realized by proper control of Bragg cells and therefore no additional hardware is required.
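
    For context, the Bessel fringes mentioned above are the standard Powell-Stetson result for time-average holography (a textbook relation, not specific to this paper): the reconstructed intensity at a point vibrating with amplitude a(x) is

        I(x) \propto J_0^2\left(\Omega(x)\right), \qquad
        \Omega(x) = \frac{4\pi}{\lambda}\, a(x)

    for normal illumination and observation; shifting the reference-wave frequency by an integer multiple n of the vibration frequency replaces J_0 by J_n, which is what moves the measurement range.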

  14. High average power linear induction accelerator development

    SciTech Connect

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs.

  15. Angle-averaged Compton cross sections

    SciTech Connect

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  16. The Average-Value Correspondence Principle

    NASA Astrophysics Data System (ADS)

    Goyal, Philip

    2007-12-01

    In previous work [1], we have presented an attempt to derive the finite-dimensional abstract quantum formalism from a set of physically comprehensible assumptions. In this paper, we continue the derivation of the quantum formalism by formulating a correspondence principle, the Average-Value Correspondence Principle, that allows relations between measurement outcomes which are known to hold in a classical model of a system to be systematically taken over into the quantum model of the system, and by using this principle to derive many of the correspondence rules (such as operator rules, commutation relations, and Dirac's Poisson bracket rule) that are needed to apply the abstract quantum formalism to model particular physical systems.

  17. Average prime-pair counting formula

    NASA Astrophysics Data System (ADS)

    Korevaar, Jaap; Riele, Herman Te

    2010-04-01

    Taking r > 0, let π_{2r}(x) denote the number of prime pairs (p, p+2r) with p ≤ x. The prime-pair conjecture of Hardy and Littlewood (1923) asserts that π_{2r}(x) ~ 2C_{2r} li_2(x) with an explicit constant C_{2r} > 0. There seems to be no good conjecture for the remainders ω_{2r}(x) = π_{2r}(x) − 2C_{2r} li_2(x) that corresponds to Riemann's formula for π(x) − li(x). However, there is a heuristic approximate formula for averages of the remainders ω_{2r}(x) which is supported by numerical results.
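
    The conjecture is easy to test numerically. A sketch for the twin-prime case r = 1, using the literature value C_2 ≈ 0.6601618 (an assumed constant, not quoted from the paper):

        import math

        def prime_sieve(n):
            s = bytearray([1]) * (n + 1)
            s[0:2] = b"\x00\x00"
            for p in range(2, int(n**0.5) + 1):
                if s[p]:
                    s[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
            return s

        def pi_2r(x, r=1):
            """Count prime pairs (p, p + 2r) with p <= x."""
            s = prime_sieve(x + 2 * r)
            return sum(1 for p in range(2, x + 1) if s[p] and s[p + 2 * r])

        def li2(x, steps=100_000):
            """Trapezoidal estimate of li_2(x) = integral_2^x dt / (ln t)^2."""
            h = (x - 2) / steps
            fs = [1.0 / math.log(2 + k * h) ** 2 for k in range(steps + 1)]
            return h * (sum(fs) - 0.5 * (fs[0] + fs[-1]))

        C2 = 0.6601618  # twin-prime constant (assumed literature value)
        x = 100_000
        print(pi_2r(x), 2 * C2 * li2(x))  # 1224 vs ~1237: close agreement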

  18. Calculating Free Energies Using Average Force

    NASA Technical Reports Server (NTRS)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
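
    In schematic form, the thermodynamic-integration relation this family of methods exploits is (notation assumed here, not quoted from the abstract):

        \frac{dA}{d\xi} = -\left\langle F_\xi \right\rangle_\xi

    where A(ξ) is the free energy along the generalized coordinate ξ and ⟨F_ξ⟩_ξ is the average instantaneous force acting on ξ, evaluated in the ensemble conditioned on (or constrained to) that value of ξ.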

  19. Average oxidation state of carbon in proteins.

    PubMed

    Dick, Jeffrey M

    2014-11-06

    The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (Z_C) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation-reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between Z_C and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in Z_C in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower Z_C tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales.
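
    A minimal sketch of the elemental-ratio calculation for formulas in C, H, N, O and S, assigning H = +1, N = −3, O = −2, S = −2 (the usual convention; the paper's exact bookkeeping for charge and sulfur may differ):

        import re

        def carbon_oxidation_state(formula, charge=0):
            """Average oxidation state of carbon, Z_C, from a formula."""
            counts = {el: 0 for el in "CHNOS"}
            for el, n in re.findall(r"([A-Z])(\d*)", formula):
                counts[el] += int(n) if n else 1
            c, h, n, o, s = (counts[e] for e in "CHNOS")
            return (charge - h + 3 * n + 2 * o + 2 * s) / c

        print(carbon_oxidation_state("C6H12O6"))  # glucose: 0.0
        print(carbon_oxidation_state("CH4"))      # methane: -4.0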

  20. Average oxidation state of carbon in proteins

    PubMed Central

    Dick, Jeffrey M.

    2014-01-01

    The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (Z_C) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation–reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between Z_C and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in Z_C in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower Z_C tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales. PMID:25165594

  1. Global Average Brightness Temperature for April 2003

    NASA Technical Reports Server (NTRS)

    2003-01-01

    [Figure 1 removed for brevity; see original site.]

    This image shows average temperatures in April, 2003, observed by AIRS at an infrared wavelength that senses either the Earth's surface or any intervening cloud. Similar to a photograph of the planet taken with the camera shutter held open for a month, stationary features are captured while those obscured by moving clouds are blurred. Many continental features stand out boldly, such as our planet's vast deserts, and India, now at the end of its long, clear dry season. Also obvious are the high, cold Tibetan plateau to the north of India, and the mountains of North America. The band of yellow encircling the planet's equator is the Intertropical Convergence Zone (ITCZ), a region of persistent thunderstorms and associated high, cold clouds. The ITCZ merges with the monsoon systems of Africa and South America. Higher latitudes are increasingly obscured by clouds, though some features like the Great Lakes, the British Isles and Korea are apparent. The highest latitudes of Europe and Eurasia are completely obscured by clouds, while Antarctica stands out cold and clear at the bottom of the image.

    The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.

  2. Interpreting Sky-Averaged 21-cm Measurements

    NASA Astrophysics Data System (ADS)

    Mirocha, Jordan

    2015-01-01

    Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation

  3. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  4. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  5. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  6. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  7. Instantaneous, phase-averaged, and time-averaged pressure from particle image velocimetry

    NASA Astrophysics Data System (ADS)

    de Kat, Roeland

    2015-11-01

    Recent work on pressure determination using velocity data from particle image velocimetry (PIV) resulted in approaches that allow for instantaneous and volumetric pressure determination. However, applying these approaches is not always feasible (e.g. due to resolution, access, or other constraints) or desired. In those cases pressure determination approaches using phase-averaged or time-averaged velocity provide an alternative. To assess the performance of these different pressure determination approaches against one another, they are applied to a single data set and their results are compared with each other and with surface pressure measurements. For this assessment, the data set of a flow around a square cylinder (de Kat & van Oudheusden, 2012, Exp. Fluids 52:1089-1106) is used. RdK is supported by a Leverhulme Trust Early Career Fellowship.

  8. Determining average path length and average trapping time on generalized dual dendrimer

    NASA Astrophysics Data System (ADS)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some cases, during transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., the trap placed on a central node and the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
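
    The APL itself is a generic quantity; a minimal breadth-first-search computation for any unweighted graph (a toy illustration, not the paper's closed-form derivation):

        from collections import deque
        from itertools import combinations

        def average_path_length(adj):
            """Mean shortest-path length over all node pairs of a connected
            unweighted graph given as {node: set(neighbors)}."""
            def bfs(src):
                dist, q = {src: 0}, deque([src])
                while q:
                    u = q.popleft()
                    for v in adj[u]:
                        if v not in dist:
                            dist[v] = dist[u] + 1
                            q.append(v)
                return dist

            total = sum(bfs(u)[v] for u, v in combinations(adj, 2))
            n = len(adj)
            return total / (n * (n - 1) / 2)

        # Toy usage on a 4-cycle; a Husimi cactus would be encoded the same way.
        print(average_path_length({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}))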

  9. 20 CFR 226.62 - Computing average monthly compensation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Computing average monthly compensation. 226.62... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation is...

  10. 40 CFR 1033.710 - Averaging emission credits.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Averaging emission credits. 1033.710... Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. You may average emission credits only as allowed by § 1033.740. (b) You may certify one or more...

  11. 20 CFR 226.62 - Computing average monthly compensation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computing average monthly compensation. 226... RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation...

  12. Changes in average length of stay and average charges generated following institution of PSRO review.

    PubMed

    Westphal, M; Frazier, E; Miller, M C

    1979-01-01

    A five-year review of accounting data at a university hospital shows that immediately following institution of concurrent PSRO admission and length of stay review of Medicare-Medicaid patients, there was a significant decrease in length of stay and a fall in average charges generated per patient against the inflationary trend. Similar changes did not occur for the non-Medicare-Medicaid patients who were not reviewed. The observed changes occurred even though the review procedure rarely resulted in the denial of services to patients, suggesting an indirect effect of review.

  13. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment... Carbon-Related Exhaust Emissions § 600.510-12 Calculation of average fuel economy and average carbon.... (iv) (2) Average carbon-related exhaust emissions will be calculated to the nearest one gram per...

  14. Cost averaging techniques for robust control of flexible structural systems

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.; Crawley, Edward F.

    1991-01-01

    Viewgraphs on cost averaging techniques for robust control of flexible structural systems are presented. Topics covered include: modeling of parameterized systems; average cost analysis; reduction of parameterized systems; and static and dynamic controller synthesis.

  15. Patient and hospital characteristics associated with average length of stay.

    PubMed

    Shi, L

    1996-01-01

    This article examines the relationship between patient and hospital characteristics and average hospital length of stay, controlling for major disease categories. A constellation of patient and physician factors was found to be significantly associated with average hospital length of stay.

  16. Synthesis of Averaged Circuit Models for Switched Power Converters

    DTIC Science & Technology

    1989-11-01

    Averaged circuit models for switching power converters are useful for purposes of analysis and for obtaining engineering intuition into the operation of these switched circuits. This paper develops averaged circuit models for switching converters using an in-place averaging method. The method proceeds in a...
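
    To make the averaging idea concrete: replacing the switch by its duty ratio d averages the two switch topologies over a cycle. For a buck converter this gives L di/dt = d·Vin − v and C dv/dt = i − v/R (a standard textbook instance of state-space averaging, not the paper's own derivation):

        # Integrate the averaged buck-converter model with forward Euler.
        L, C, R, Vin, d = 100e-6, 100e-6, 10.0, 12.0, 0.5
        i = v = 0.0
        dt = 1e-6
        for _ in range(200_000):
            di = (d * Vin - v) / L
            dv = (i - v / R) / C
            i, v = i + di * dt, v + dv * dt
        print(v)  # settles near d * Vin = 6.0 V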

  17. 76 FR 57081 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-15

    ... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2010 was $28,284. The average annual cost to confine an inmate in a Community Corrections...

  18. 76 FR 6161 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-03

    ... No: 2011-2363] DEPARTMENT OF JUSTICE Bureau of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2009 was $25,251. The average annual cost to confine an...

  19. 78 FR 16711 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-18

    ... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2011 was $28,893.40. The average annual cost to confine an inmate in a Community...

  20. 7 CFR 760.640 - National average market price.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...

  1. 7 CFR 760.640 - National average market price.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...

  2. Averaging and Globalising Quotients of Informetric and Scientometric Data.

    ERIC Educational Resources Information Center

    Egghe, Leo; Rousseau, Ronald

    1996-01-01

    Discussion of impact factors for "Journal Citation Reports" subject categories focuses on the difference between an average of quotients and a global average, obtained as a quotient of averages. Applications in the context of informetrics and scientometrics are given, including journal prices and subject discipline influence scores.…
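
    A two-journal toy example makes the distinction concrete (illustrative numbers only):

        citations = [100, 10]  # toy citation counts for two journals
        papers = [10, 100]     # toy publication counts

        avg_of_quotients = sum(c / p for c, p in zip(citations, papers)) / 2
        global_average = sum(citations) / sum(papers)

        print(avg_of_quotients)  # (10.0 + 0.1) / 2 = 5.05
        print(global_average)    # 110 / 110 = 1.0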

  3. 7 CFR 51.577 - Average midrib length.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Average midrib length. 51.577 Section 51.577... STANDARDS) United States Standards for Celery Definitions § 51.577 Average midrib length. Average midrib... attachment at the base to the first node....

  4. 7 CFR 51.577 - Average midrib length.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average midrib length. 51.577 Section 51.577... STANDARDS) United States Standards for Celery Definitions § 51.577 Average midrib length. Average midrib... attachment at the base to the first node....

  5. 7 CFR 51.2561 - Average moisture content.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except...

  6. 7 CFR 51.2561 - Average moisture content.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Average moisture content. 51.2561 Section 51.2561 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards....2561 Average moisture content. (a) Determining average moisture content of the lot is not a...

  7. 7 CFR 51.2561 - Average moisture content.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except...

  8. 7 CFR 51.2561 - Average moisture content.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Average moisture content. 51.2561 Section 51.2561 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards....2561 Average moisture content. (a) Determining average moisture content of the lot is not a...

  9. 7 CFR 51.2548 - Average moisture content determination.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Average moisture content determination. 51.2548 Section 51.2548 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE... Nuts in the Shell § 51.2548 Average moisture content determination. (a) Determining average...

  10. 7 CFR 51.2561 - Average moisture content.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except...

  11. 7 CFR 51.2548 - Average moisture content determination.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Average moisture content determination. 51.2548 Section 51.2548 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE... Nuts in the Shell § 51.2548 Average moisture content determination. (a) Determining average...

  12. 40 CFR 1042.710 - Averaging emission credits.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Averaging emission credits. 1042.710..., Banking, and Trading for Certification § 1042.710 Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. (b) You may certify one or more engine families...

  13. Perturbation resilience and superiorization methodology of averaged mappings

    NASA Astrophysics Data System (ADS)

    He, Hongjin; Xu, Hong-Kun

    2017-04-01

    We first prove the bounded perturbation resilience for the successive fixed point algorithm of averaged mappings, which extends the string-averaging projection and block-iterative projection methods. We then apply the superiorization methodology to a constrained convex minimization problem where the constraint set is the intersection of fixed point sets of a finite family of averaged mappings.
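
    A toy sketch of the setting (the paper treats general families of maps): iterate an averaged mapping T = (1 − α)I + αS with S nonexpansive, adding a summable perturbation sequence to illustrate bounded perturbation resilience:

        import numpy as np

        S = np.array([[0.0, -1.0], [1.0, 0.0]])  # 90-degree rotation: nonexpansive
        alpha = 0.5
        x = np.array([1.0, 2.0])
        for k in range(1, 200):
            e = np.array([1.0, -1.0]) / k**2      # summable perturbations
            x = (1 - alpha) * x + alpha * (S @ x) + e
        print(x)  # still approaches the fixed point [0, 0]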

  14. Sample Size Bias in Judgments of Perceptual Averages

    ERIC Educational Resources Information Center

    Price, Paul C.; Kimura, Nicole M.; Smith, Andrew R.; Marshall, Lindsay D.

    2014-01-01

    Previous research has shown that people exhibit a sample size bias when judging the average of a set of stimuli on a single dimension. The more stimuli there are in the set, the greater people judge the average to be. This effect has been demonstrated reliably for judgments of the average likelihood that groups of people will experience negative,…

  15. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...

  16. Robust Morphological Averages in Three Dimensions for Anatomical Atlas Construction

    NASA Astrophysics Data System (ADS)

    Márquez, Jorge; Bloch, Isabelle; Schmitt, Francis

    2004-09-01

    We present original methods for obtaining robust, anatomical shape-based averages of features of the human head anatomy from a normal population. Our goals are computerized atlas construction with representative anatomical features and morphometry for specific populations. A method for true-morphological averaging is proposed, consisting of a suitable blend of shape-related information for N objects to obtain a progressive average. It is made robust by penalizing, in a morphological sense, the contributions of features less similar to the current average. Morphological error and similarity, as well as penalization, are based on the same paradigm as the morphological averaging.

  17. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-hour arithmetic averages into appropriate averaging times and units? (a) Use the equation in § 60.1935.... If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in... Reference Method 19 in appendix A of this part, section 4.1, to calculate the daily arithmetic average...

  18. Adaptive face coding and discrimination around the average face.

    PubMed

    Rhodes, Gillian; Maloney, Laurence T; Turner, Jenny; Ewing, Louise

    2007-03-01

    Adaptation paradigms highlight the dynamic nature of face coding and suggest that identity is coded relative to an average face that is tuned by experience. In low-level vision, adaptive coding can enhance sensitivity to differences around the adapted level. We investigated whether sensitivity to differences around the average face is similarly enhanced. Converging evidence from three paradigms showed no enhancement. Discrimination of small interocular spacing differences was not better for faces close to the average (Study 1). Nor was perceived similarity reduced for face pairs close to (spanning) the average (Study 2). On the contrary, these pairs were judged most similar. Maximum likelihood perceptual difference scaling (Studies 3 and 4) confirmed that sensitivity to differences was reduced, not enhanced, around the average. We conclude that adaptive face coding does not enhance discrimination around the average face.

  19. TIME INVARIANT MULTI ELECTRODE AVERAGING FOR BIOMEDICAL SIGNALS.

    PubMed

    Orellana, R Martinez; Erem, B; Brooks, D H

    2013-12-31

    One of the biggest challenges in averaging ECG or EEG signals is to overcome temporal misalignments and distortions, due to uncertain timing or complex non-stationary dynamics. Standard methods average individual leads over a collection of epochs on a time-sample by time-sample basis, even when multi-electrode signals are available. Here we propose a method that averages multi-electrode recordings simultaneously by using spatial patterns and without relying on time or frequency.

  20. Conditionally-averaged structures in wall-bounded turbulent flows

    NASA Technical Reports Server (NTRS)

    Guezennec, Yann G.; Piomelli, Ugo; Kim, John

    1987-01-01

    The quadrant-splitting and the wall-shear detection techniques were used to obtain ensemble-averaged wall layer structures. The two techniques give similar results for Q4 events, but the wall-shear method leads to smearing of Q2 events. Events were found to maintain their identity for very long times. The ensemble-averaged structures scale with outer variables. Turbulence-producing events were associated with one dominant vortical structure rather than a pair of counter-rotating structures. An asymmetry-preserving averaging scheme was devised that yields an average structure more closely resembling the instantaneous one.

  1. Light-cone averaging in cosmology: formalism and applications

    NASA Astrophysics Data System (ADS)

    Gasperini, M.; Marozzi, G.; Nugier, F.; Veneziano, G.

    2011-07-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted ``geodesic light-cone'' coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called ``redshift drift'' in a generic inhomogeneous Universe.

  2. Light-cone averaging in cosmology: formalism and applications

    SciTech Connect

    Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.

    2011-07-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted ``geodesic light-cone'' coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called ``redshift drift'' in a generic inhomogeneous Universe.

  3. 78 FR 49770 - Annual Determination of Average Cost of Incarceration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-15

    ... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal... annual cost to confine an inmate in a Community Corrections Center for Fiscal Year 2012 was $27,003...

  4. Hadley circulations for zonally averaged heating centered off the equator

    NASA Technical Reports Server (NTRS)

    Lindzen, Richard S.; Hou, Arthur Y.

    1988-01-01

    Consistent with observations, it is found that moving peak heating even 2 deg off the equator leads to profound asymmetries in the Hadley circulation, with the winter cell amplifying greatly and the summer cell becoming negligible. It is found that the annually averaged Hadley circulation is much larger than the circulation forced by the annually averaged heating.

  5. 40 CFR 63.652 - Emissions averaging provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... emissions average. This must include any Group 1 emission points to which the reference control technology.... (c) The following emission points can be used to generate emissions averaging credits if control was... agrees has a higher nominal efficiency than the reference control technology. Information on the...

  6. 40 CFR 63.652 - Emissions averaging provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... emissions average. This must include any Group 1 emission points to which the reference control technology.... (c) The following emission points can be used to generate emissions averaging credits if control was... agrees has a higher nominal efficiency than the reference control technology. Information on the...

  7. A Simple Geometrical Derivation of the Spatial Averaging Theorem.

    ERIC Educational Resources Information Center

    Whitaker, Stephen

    1985-01-01

    The connection between single phase transport phenomena and multiphase transport phenomena is easily accomplished by means of the spatial averaging theorem. Although different routes to the theorem have been used, this paper provides a route to the averaging theorem that can be used in undergraduate classes. (JN)

  8. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    ERIC Educational Resources Information Center

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
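
    The record is truncated, but the underlying observation is easy to reproduce: an intercept-only least-squares fit returns the arithmetic mean, and re-weighting or transforming the data turns the same fit into the other averages. A sketch under that reading (our construction, not necessarily the note's exact setup):

        import numpy as np

        y = np.array([2.0, 4.0, 8.0])

        # Arithmetic mean: OLS fit of an intercept-only model y = b + e.
        arith = np.linalg.lstsq(np.ones((len(y), 1)), y, rcond=None)[0][0]

        # Weighted average: WLS intercept-only fit, b = sum(w*y) / sum(w).
        def wls_mean(y, w):
            return np.sum(w * y) / np.sum(w)

        harm = wls_mean(y, 1.0 / y)       # weights 1/y yield the harmonic mean
        geo = np.exp(wls_mean(np.log(y), np.ones_like(y)))  # mean log -> geometric mean

        print(arith, harm, geo)           # -> 4.67, 3.43, 4.0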

  9. 40 CFR 63.503 - Emissions averaging provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.503 Emissions averaging... emissions averages. (2) Compliance with the provisions of this section may be based on either organic HAP or... (a)(3)(ii) of this section. (i) The organic HAP used as the calibration gas for Method 25A, 40...

  10. 7 CFR 701.117 - Average adjusted gross income limitation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 7 2012-01-01 2012-01-01 false Average adjusted gross income limitation. 701.117 Section 701.117 Agriculture Regulations of the Department of Agriculture (Continued) FARM SERVICE AGENCY... Conservation Program § 701.117 Average adjusted gross income limitation. To be eligible for payments...

  11. Analytic computation of average energy of neutrons inducing fission

    SciTech Connect

    Clark, Alexander Rich

    2016-08-12

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.

  12. 7 CFR 51.577 - Average midrib length.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Average midrib length. 51.577 Section 51.577... (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Celery Definitions § 51.577 Average... measured from the point of attachment at the base to the first node....

  13. 7 CFR 51.577 - Average midrib length.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Average midrib length. 51.577 Section 51.577... (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Celery Definitions § 51.577 Average... measured from the point of attachment at the base to the first node....

  14. Delineating the Average Rate of Change in Longitudinal Models

    ERIC Educational Resources Information Center

    Kelley, Ken; Maxwell, Scott E.

    2008-01-01

    The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…

  15. 7 CFR 51.2548 - Average moisture content determination.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Average moisture content determination. 51.2548 Section 51.2548 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE... moisture content determination. (a) Determining average moisture content of the lot is not a requirement...

  16. 7 CFR 51.2548 - Average moisture content determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content determination. 51.2548 Section 51.2548 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE... moisture content determination. (a) Determining average moisture content of the lot is not a requirement...

  17. 7 CFR 51.2548 - Average moisture content determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Average moisture content determination. 51.2548 Section 51.2548 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE... moisture content determination. (a) Determining average moisture content of the lot is not a requirement...

  18. 42 CFR 423.279 - National average monthly bid amount.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... bid amounts for each prescription drug plan (not including fallbacks) and for each MA-PD plan...(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid amount is a....258(c)(1) of this chapter) and the denominator equal to the total number of Part D...

  19. 75 FR 78157 - Farmer and Fisherman Income Averaging

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-15

    ... computing income tax liability. The regulations reflect changes made by the American Jobs Creation Act of...) relating to the averaging of farm and fishing income in computing tax liability. A notice of proposed... to compute current year (election year) income tax liability under section 1 by averaging, over...

  20. Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?

    SciTech Connect

    Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.

    2013-06-17

    Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.

  1. LANDSAT-4 horizon scanner full orbit data averages

    NASA Technical Reports Server (NTRS)

    Stanley, J. P.; Bilanow, S.

    1983-01-01

    Averages taken over full orbit data spans of the pitch and roll residual measurement errors of the two conical Earth sensors operating on the LANDSAT 4 spacecraft are described. The variability of these full orbit averages over representative data throughout the year is analyzed to demonstrate the long term stability of the sensor measurements. The data analyzed consist of 23 segments of sensor measurements made at 2 to 4 week intervals. Each segment is roughly 24 hours in length. The variation of the full orbit average as a function of orbit within a day and as a function of day of year is examined. The dependence on day of year is based on associating the start date of each segment with the mean full orbit average for the segment. The peak-to-peak and standard deviation values of the averages for each data segment are computed and their variation with day of year is also examined.

  2. Time domain averaging based on fractional delay filter

    NASA Astrophysics Data System (ADS)

    Wu, Wentao; Lin, Jing; Han, Shaobo; Ding, Xianghui

    2009-07-01

    For rotary machinery, periodic components in signals are always extracted to investigate the condition of each rotating part. Time domain averaging is a traditional method used to extract those periodic components. Originally, a phase reference signal is required to ensure all the averaged segments have the same initial phase. In some cases, however, there is no phase reference; we have to establish efficient algorithms to synchronize the segments before averaging. Some algorithms are available for performing time domain averaging without a phase reference signal; however, those algorithms cannot eliminate the phase error completely. Against this background, a new time domain averaging algorithm that theoretically has no phase error is proposed. The performance is improved by incorporating the fractional delay filter. The efficiency of the proposed algorithm is validated by simulations.
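
    To make the idea concrete, here is a minimal Python sketch of time domain averaging in which each segment is re-aligned onto the sample grid with a windowed-sinc fractional delay filter. The alignment strategy and parameters are illustrative; this is not the algorithm proposed in the paper:

        import numpy as np

        def fractional_delay(x, d, taps=31):
            """Shift x by a non-integer number of samples d (windowed-sinc FIR)."""
            n = np.arange(taps) - (taps - 1) / 2.0
            h = np.sinc(n - d) * np.hamming(taps)
            h /= h.sum()
            return np.convolve(x, h, mode="same")

        def time_domain_average(x, period):
            """Average consecutive segments of (possibly non-integer) length 'period'."""
            n_seg = int(len(x) // period)
            seg_len = int(np.floor(period))
            segments = []
            for k in range(n_seg):
                start = k * period
                i = int(np.floor(start))
                frac = start - i                     # sub-sample offset of this segment
                # advance by the offset so every segment begins at the same phase
                seg = fractional_delay(x[i:i + seg_len + 1], -frac)[:seg_len]
                segments.append(seg)
            return np.mean(segments, axis=0)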

  3. Average cross-responses in correlated financial markets

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Schäfer, Rudi; Guhr, Thomas

    2016-09-01

    There are non-vanishing price responses across different stocks in correlated financial markets, reflecting non-Markovian features. We further study this issue by performing different averages, which identify active and passive cross-responses. The two average cross-responses show different characteristic dependences on the time lag. The passive cross-response exhibits a shorter response period with sizeable volatilities, while the corresponding period for the active cross-response is longer. The average cross-responses for a given stock are evaluated either with respect to the whole market or to different sectors. Using the response strength, the influences of individual stocks are identified and discussed. Moreover, the various cross-responses as well as the average cross-responses are compared with the self-responses. In contrast to the short-memory trade sign cross-correlations for each pair of stocks, the sign cross-correlations averaged over different pairs of stocks show long memory.

  4. Sample size bias in retrospective estimates of average duration.

    PubMed

    Smith, Andrew R; Rule, Shanon; Price, Paul C

    2017-03-25

    People often estimate the average duration of several events (e.g., on average, how long does it take to drive from one's home to his or her office). While there is a great deal of research investigating estimates of duration for a single event, few studies have examined estimates when people must average across numerous stimuli or events. The current studies were designed to fill this gap by examining how people's estimates of average duration were influenced by the number of stimuli being averaged (i.e., the sample size). Based on research investigating the sample size bias, we predicted that participants' judgments of average duration would increase as the sample size increased. Across four studies, we demonstrated a sample size bias for estimates of average duration with different judgment types (numeric estimates and comparisons), study designs (between and within-subjects), and paradigms (observing images and performing tasks). The results are consistent with the more general notion that psychological representations of magnitudes in one dimension (e.g., quantity) can influence representations of magnitudes in another dimension (e.g., duration).

  5. Programmable noise bandwidth reduction by means of digital averaging

    NASA Technical Reports Server (NTRS)

    Poklemba, John J. (Inventor)

    1993-01-01

    Predetection noise bandwidth reduction is effected by a pre-averager capable of digitally averaging the samples of an input data signal over two or more symbols, the averaging interval being defined by the input sampling rate divided by the output sampling rate. Because the averaged sample is clocked to a suitable detector at a much slower rate than the input signal sampling rate, the noise bandwidth at the input to the detector is reduced, the signal-to-noise ratio at the detector input is improved by the averaging process, and the rate at which subsequent processing must operate is correspondingly reduced. The pre-averager forms a data filter having an output sampling rate of one sample per symbol of received data. More specifically, selected ones of a plurality of samples accumulated over two or more symbol intervals are output in response to clock signals at a rate of one sample per symbol interval. The pre-averager includes circuitry for weighting digitized signal samples using stored finite impulse response (FIR) filter coefficients. A method according to the present invention is also disclosed.
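
    The core effect is easy to demonstrate: block-averaging N samples per output reduces the variance of white noise by roughly 1/N while cutting the output rate by N. A plain boxcar sketch (the patent itself weights samples with stored FIR coefficients rather than averaging uniformly):

        import numpy as np

        def pre_average(x, n_per_symbol):
            """Boxcar pre-averager: one output sample per n_per_symbol inputs."""
            n_out = len(x) // n_per_symbol
            x = x[: n_out * n_per_symbol]
            return x.reshape(n_out, n_per_symbol).mean(axis=1)

        noise = np.random.randn(100000)
        print(noise.var(), pre_average(noise, 10).var())   # about 1.0 vs about 0.1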

  6. Inversion of the circular averages transform using the Funk transform

    NASA Astrophysics Data System (ADS)

    Evren Yarman, Can; Yazıcı, Birsen

    2011-06-01

    The integral of a function defined on the half-plane along the semi-circles centered on the boundary of the half-plane is known as the circular averages transform. Circular averages transform arises in many tomographic image reconstruction problems. In particular, in synthetic aperture radar (SAR) when the transmitting and receiving antennas are colocated, the received signal is modeled as the integral of the ground reflectivity function of the illuminated scene over the intersection of spheres centered at the antenna location and the surface topography. When the surface topography is flat the received signal becomes the circular averages transform of the ground reflectivity function. Thus, SAR image formation requires inversion of the circular averages transform. Apart from SAR, circular averages transform also arises in thermo-acoustic tomography and sonar inverse problems. In this paper, we present a new inversion method for the circular averages transform using the Funk transform. For a function defined on the unit sphere, its Funk transform is given by the integrals of the function along the great circles. We used hyperbolic geometry to establish a diffeomorphism between the circular averages transform, hyperbolic x-ray and Funk transforms. The method is exact and numerically efficient when fast Fourier transforms over the sphere are used. We present numerical simulations to demonstrate the performance of the inversion method. Dedicated to Dennis Healy, a friend of Applied Mathematics and Engineering.

  7. Structuring Collaboration in Mixed-Ability Groups to Promote Verbal Interaction, Learning, and Motivation of Average-Ability Students

    ERIC Educational Resources Information Center

    Saleh, Mohammad; Lazonder, Ard W.; Jong, Ton de

    2007-01-01

    Average-ability students often do not take full advantage of learning in mixed-ability groups because they hardly engage in the group interaction. This study examined whether structuring collaboration by group roles and ground rules for helping behavior might help overcome this participatory inequality. In a plant biology course, heterogeneously…

  8. It's not just average faces that are attractive: computer-manipulated averageness makes birds, fish, and automobiles attractive.

    PubMed

    Halberstadt, Jamin; Rhodes, Gillian

    2003-03-01

    Average faces are attractive. We sought to distinguish whether this preference is an adaptation for finding high-quality mates (the direct selection account) or whether it reflects more general information-processing mechanisms. In three experiments, we examined the attractiveness of birds, fish, and automobiles whose averageness had been manipulated using digital image manipulation techniques common in research on facial attractiveness. Both manipulated averageness and rated averageness were strongly associated with attractiveness in all three stimulus categories. In addition, for birds and fish, but not for automobiles, the correlation between subjective averageness and attractiveness remained significant when the effect of subjective familiarity was partialled out. The results suggest that at least two mechanisms contribute to the attractiveness of average exemplars. One is a general preference for familiar stimuli, which contributes to the appeal of averageness in all three categories. The other is a preference for averageness per se, which was found for birds and fish, but not for automobiles, and may reflect a preference for features signaling genetic quality in living organisms, including conspecifics.

  9. Time average vibration fringe analysis using Hilbert transformation

    SciTech Connect

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-10-20

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
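
    A minimal sketch of the single-frame idea using SciPy's Hilbert transform on a synthetic one-dimensional fringe cut; on real time-average fringes the background (DC) term must be removed first, and this is an illustration rather than the authors' exact evaluation chain:

        import numpy as np
        from scipy.signal import hilbert

        x = np.linspace(0.0, 1.0, 1000)
        fringe = np.cos(40.0 * np.pi * x**2)      # synthetic zero-mean fringe profile

        analytic = hilbert(fringe)                # fringe + i * HT(fringe)
        phase = np.unwrap(np.angle(analytic))     # continuous phase from one frame
        envelope = np.abs(analytic)               # fringe envelope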

  10. Experimental demonstration of squeezed-state quantum averaging

    SciTech Connect

    Lassen, Mikael; Madsen, Lars Skovgaard; Andersen, Ulrik L.; Sabuncu, Metin; Filip, Radim

    2010-08-15

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented harmonic mean yields a lower value than the corresponding value obtained for the standard arithmetic-mean strategy. The effect of quantum averaging is experimentally tested for squeezed and thermal states as well as for uncorrelated and partially correlated noise sources. The harmonic-mean protocol can be used to efficiently stabilize a set of squeezed-light sources with statistically fluctuating noise levels.

  11. Sample Selected Averaging Method for Analyzing the Event Related Potential

    NASA Astrophysics Data System (ADS)

    Taguchi, Akira; Ono, Youhei; Kimura, Tomoaki

    The event-related potential (ERP) is often measured through the oddball task, in which subjects are given a “rare stimulus” and a “frequent stimulus”. Measured ERPs are analyzed by the averaging technique; the amplitude of the ERP P300 becomes large when the “rare stimulus” is given. However, the measured ERPs include samples lacking the original features of the ERP, so it is necessary to reject unsuitable measured ERPs before averaging. In this paper, we propose a rejection method for unsuitable measured ERPs for use with the averaging technique. Moreover, we combine the proposed method with Woody's adaptive filter method.
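
    A hedged sketch of the kind of rejection rule described; the actual selection criterion belongs to the paper, and the reference signal and threshold below are placeholders:

        import numpy as np

        def sample_selected_average(trials, threshold=0.2):
            """trials: (n_trials, n_samples) array of single-sweep ERPs."""
            reference = trials.mean(axis=0)       # provisional average as reference
            kept = [t for t in trials
                    if np.corrcoef(t, reference)[0, 1] >= threshold]
            return np.mean(kept, axis=0) if kept else reference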

  12. Homelessness prevention in New York City: On average, it works.

    PubMed

    Goodman, Sarena; Messeri, Peter; O'Flaherty, Brendan

    2016-03-01

    This study evaluates the community impact of the first four years of Homebase, a homelessness prevention program in New York City. Family shelter entries decreased on average in the neighborhoods in which Homebase was operating. Homebase effects appear to be heterogeneous, and so different kinds of averages imply different-sized effects. The (geometric) average decrease in shelter entries was about 5% when census tracts are weighted equally, and 11% when community districts (which are much larger) are weighted equally. This study also examines the effect of foreclosures. Foreclosures are associated with more shelter entries in neighborhoods that usually do not send large numbers of families to the shelter system.

  13. Homelessness prevention in New York City: On average, it works

    PubMed Central

    Goodman, Sarena; Messeri, Peter; O'Flaherty, Brendan

    2016-01-01

    This study evaluates the community impact of the first four years of Homebase, a homelessness prevention program in New York City. Family shelter entries decreased on average in the neighborhoods in which Homebase was operating. Homebase effects appear to be heterogeneous, and so different kinds of averages imply different-sized effects. The (geometric) average decrease in shelter entries was about 5% when census tracts are weighted equally, and 11% when community districts (which are much larger) are weighted equally. This study also examines the effect of foreclosures. Foreclosures are associated with more shelter entries in neighborhoods that usually do not send large numbers of families to the shelter system. PMID:26941543

  14. [Average number of living children of the members of parliament].

    PubMed

    Toros, A

    1989-01-01

    "This study compares the average number of living children of the members of the parliament [in Turkey] with the average number of living children of the general public as found in the 1988 Population and Health Survey. The findings indicate that the average number of living children of the members of the parliament [is] substantially lower than that of the general public. Under the light of these findings the members of the parliament are invited not to refrain from speeches promoting family planning in Turkey." (SUMMARY IN ENG)

  15. Average waiting time in FDDI networks with local priorities

    NASA Technical Reports Server (NTRS)

    Gercek, Gokhan

    1994-01-01

    A method is introduced to compute the average queuing delay experienced by different priority group messages in an FDDI node. It is assumed that no FDDI MAC layer priorities are used. Instead, a priority structure is assigned to the messages locally at a higher protocol layer (e.g., the network layer). Such a method was planned to be used in the Space Station Freedom FDDI network. Conservation of the average waiting time is used as the key concept in computing average queuing delays. It is shown that local priority assignments are feasible, especially when the traffic distribution in the FDDI network is asymmetric.

  16. Correction for spatial averaging in laser speckle contrast analysis

    PubMed Central

    Thompson, Oliver; Andrews, Michael; Hirst, Evan

    2011-01-01

    Practical laser speckle contrast analysis systems face a problem of spatial averaging of speckles, due to the pixel size in the cameras used. Existing practice is to use a system factor in speckle contrast analysis to account for spatial averaging. The linearity of the system factor correction has not previously been confirmed. The problem of spatial averaging is illustrated using computer simulation of time-integrated dynamic speckle, and the linearity of the correction confirmed using both computer simulation and experimental results. The valid linear correction allows various useful compromises in the system design. PMID:21483623
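
    For reference, the quantity being corrected is the local speckle contrast K = σ/⟨I⟩ computed over small pixel windows; the placement of the system factor below is a simple illustration, not the paper's calibration:

        import numpy as np

        def speckle_contrast(img, win=7, system_factor=1.0):
            """Local speckle contrast K = std/mean over win x win windows."""
            h = win // 2
            K = np.zeros(img.shape, dtype=float)
            for i in range(h, img.shape[0] - h):
                for j in range(h, img.shape[1] - h):
                    patch = img[i - h:i + h + 1, j - h:j + h + 1]
                    K[i, j] = patch.std() / patch.mean()
            return K / system_factor    # divide out the spatial-averaging loss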

  17. Development and evaluation of a hybrid averaged orbit generator

    NASA Technical Reports Server (NTRS)

    Mcclain, W. D.; Long, A. C.; Early, L. W.

    1978-01-01

    A rapid orbit generator based on a first-order application of the Generalized Method of Averaging has been developed for the Research and Development (R&D) version of the Goddard Trajectory Determination System (GTDS). The evaluation of the averaged equations of motion can use both numerically averaged and recursively evaluated, analytically averaged perturbation models. These equations are numerically integrated to obtain the secular and long-period motion. Factors affecting efficient orbit prediction are discussed and guidelines are presented for treatment of each major perturbation. Guidelines for obtaining initial mean elements compatible with the theory are presented. An overview of the orbit generator is presented and comparisons with high precision methods are given.

  18. Average local ionization energy generalized to correlated wavefunctions

    SciTech Connect

    Ryabinkin, Ilya G.; Staroverov, Viktor N.

    2014-08-28

    The average local ionization energy function introduced by Politzer and co-workers [Can. J. Chem. 68, 1440 (1990)] as a descriptor of chemical reactivity has a limited utility because it is defined only for one-determinantal self-consistent-field methods such as the Hartree–Fock theory and the Kohn–Sham density-functional scheme. We reinterpret the negative of the average local ionization energy as the average total energy of an electron at a given point and, by rewriting this quantity in terms of reduced density matrices, arrive at its natural generalization to correlated wavefunctions. The generalized average local electron energy turns out to be the diagonal part of the coordinate representation of the generalized Fock operator divided by the electron density; it reduces to the original definition in terms of canonical orbitals and their eigenvalues for one-determinantal wavefunctions. The discussion is illustrated with calculations on selected atoms and molecules at various levels of theory.
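
    For reference, the original one-determinantal definition by Politzer and co-workers that the paper generalizes can be written as (standard form; the notation is ours):

        \bar{I}(\mathbf{r}) \;=\; \frac{\sum_i \rho_i(\mathbf{r})\,\lvert\varepsilon_i\rvert}{\rho(\mathbf{r})},
        \qquad
        \rho(\mathbf{r}) \;=\; \sum_i \rho_i(\mathbf{r}),

    where ρ_i is the density of occupied orbital i and ε_i its orbital energy; the paper reinterprets the negative of this quantity as the average total energy of an electron at r and rewrites it in terms of reduced density matrices.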

  19. Effects of spatial variability and scale on areal-average evapotranspiration

    NASA Technical Reports Server (NTRS)

    Famiglietti, J. S.; Wood, Eric F.

    1993-01-01

    This paper explores the effect of spatial variability and scale on areally-averaged evapotranspiration. A spatially-distributed water and energy balance model is employed to determine the effect of explicit patterns of model parameters and atmospheric forcing on modeled areally-averaged evapotranspiration over a range of increasing spatial scales. The analysis is performed from the local scale to the catchment scale. The study area is King's Creek catchment, an 11.7 sq km watershed located on the native tallgrass prairie of Kansas. The dominant controls on the scaling behavior of catchment-average evapotranspiration are investigated by simulation, as is the existence of a threshold scale for evapotranspiration modeling, with implications for explicit versus statistical representation of important process controls. It appears that some of our findings are fairly general, and will therefore provide a framework for understanding the scaling behavior of areally-averaged evapotranspiration at the catchment and larger scales.

  20. The origin of consistent protein structure refinement from structural averaging.

    PubMed

    Park, Hahnbeom; DiMaio, Frank; Baker, David

    2015-06-02

    Recent studies have shown that explicit solvent molecular dynamics (MD) simulation followed by structural averaging can consistently improve protein structure models. We find that improvement upon averaging is not limited to explicit water MD simulation, as consistent improvements are also observed for more efficient implicit solvent MD or Monte Carlo minimization simulations. To determine the origin of these improvements, we examine the changes in model accuracy brought about by averaging at the individual residue level. We find that the improvement in model quality from averaging results from the superposition of two effects: a dampening of deviations from the correct structure in the least well modeled regions, and a reinforcement of consistent movements towards the correct structure in better modeled regions. These observations are consistent with an energy landscape model in which the magnitude of the energy gradient toward the native structure decreases with increasing distance from the native state.

  1. Does subduction zone magmatism produce average continental crust

    NASA Technical Reports Server (NTRS)

    Ellam, R. M.; Hawkesworth, C. J.

    1988-01-01

    The question of whether present day subduction zone magmatism produces material of average continental crust composition, which perhaps most would agree is andesitic, is addressed. It was argued that modern andesitic to dacitic rocks in Andean-type settings are produced by plagioclase fractionation of mantle derived basalts, leaving a complementary residue with low Rb/Sr and a positive Eu anomaly. This residue must be removed, for example by delamination, if the average crust produced in these settings is andesitic. The author argued against this, pointing out the absence of evidence for such a signature in the mantle. Either the average crust is not andesitic, a conclusion the author was not entirely comfortable with, or other crust forming processes must be sought. One possibility is that during the Archean, direct slab melting of basaltic or eclogitic oceanic crust produced felsic melts, which together with about 65 percent mafic material, yielded an average crust of andesitic composition.

  2. Use of a Correlation Coefficient for Conditional Averaging.

    DTIC Science & Technology

    1997-04-01

    A method of collecting ensembles for conditional averaging is presented that uses data collected from a plane mixing layer. The correlation ... data. Selection of the sine function period and a correlation coefficient threshold are discussed. Also examined are the effects of the period and threshold level on the number of ensembles captured for inclusion for conditional averaging. Both the selection of threshold correlation coefficient and the...
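
    Reading between the truncations, the method correlates a sliding data window with a sine template and keeps windows whose correlation exceeds the threshold. A sketch under that reading (period, step, and threshold are placeholders):

        import numpy as np

        def capture_ensembles(x, period, threshold=0.6):
            """Collect segments of x that correlate strongly with one sine period."""
            template = np.sin(2.0 * np.pi * np.arange(period) / period)
            ensembles = []
            for start in range(0, len(x) - period, period // 2):
                seg = x[start:start + period]
                if np.corrcoef(seg, template)[0, 1] > threshold:
                    ensembles.append(seg)
            return np.mean(ensembles, axis=0) if ensembles else None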

  3. Flavor Physics Data from the Heavy Flavor Averaging Group (HFAG)

    DOE Data Explorer

    The Heavy Flavor Averaging Group (HFAG) was established at the May 2002 Flavor Physics and CP Violation Conference in Philadelphia, and continues the LEP Heavy Flavor Steering Group's tradition of providing regular updates to the world averages of heavy flavor quantities. Data are provided by six subgroups that each focus on a different set of heavy flavor measurements: B lifetimes and oscillation parameters, Semi-leptonic B decays, Rare B decays, Unitarity triangle parameters, B decays to charm final states, and Charm Physics.

  4. Modelling and designing digital control systems with averaged measurements

    NASA Technical Reports Server (NTRS)

    Polites, Michael E.; Beale, Guy O.

    1988-01-01

    An account is given of the control systems engineering methods applicable to the design of digital feedback controllers for aerospace deterministic systems in which the output, rather than being an instantaneous measure of the system at the sampling instants, instead represents an average measure of the system over the time interval between samples. The averaging effect can be included during the modeling of the plant, thereby obviating the iteration of design/simulation phases.

  5. Scalable Robust Principal Component Analysis using Grassmann Averages.

    PubMed

    Hauberg, Soren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael

    2015-12-23

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie; a task beyond any current method. Source code is available online.
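
    Our reading of the basic GA iteration for the leading component is sketched below; consult the paper or its published source code for the authoritative algorithm, and note that the robust TGA replaces the weighted mean with a trimmed mean:

        import numpy as np

        def grassmann_average(X, n_iter=50):
            """X: (n, d) zero-mean data; returns a unit vector spanning the
            average 1-D subspace (matches the leading PC for Gaussian data)."""
            w = np.linalg.norm(X, axis=1)     # observation magnitudes as weights
            U = X / w[:, None]                # unit directions on the sphere
            q = U[0].copy()
            for _ in range(n_iter):
                signs = np.sign(U @ q)        # align antipodal directions
                q = (w * signs) @ U           # weighted average direction
                q /= np.linalg.norm(q)
            return q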

  6. Scalable Robust Principal Component Analysis Using Grassmann Averages.

    PubMed

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael J

    2016-11-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie; a task beyond any current method. Source code is available online.

  7. Approximate average head models for EEG source imaging.

    PubMed

    Valdés-Hernández, Pedro A; von Ellenrieder, Nicolás; Ojeda-Gonzalez, Alejandro; Kochen, Silvia; Alemán-Gómez, Yasser; Muravchik, Carlos; Valdés-Sosa, Pedro A

    2009-12-15

    We examine the performance of approximate models (AM) of the head in solving the EEG inverse problem. The AM are needed when the individual's MRI is not available. We simulate the electric potential distribution generated by cortical sources for a large sample of 305 subjects, and solve the inverse problem with AM. Statistical comparisons are carried out with the distribution of the localization errors. We propose several new AM. These are the average of many individual realistic MRI-based models, such as surface-based models or lead fields. We demonstrate that the lead fields of the AM should be calculated considering source moments not constrained to be normal to the cortex. We also show that the imperfect anatomical correspondence between all cortices is the most important cause of localization errors. Our average models perform better than a random individual model or the usual average model in the MNI space. We also show that a classification based on race and gender or head size before averaging does not significantly improve the results. Our average models are slightly better than an existing AM with shape guided by measured individual electrode positions, and have the advantage of not requiring such measurements. Among the studied models, the Average Lead Field seems the most convenient tool in large and systematical clinical and research studies demanding EEG source localization, when MRI are unavailable. This AM does not need a strict alignment between head models, and can therefore be easily achieved for any type of head modeling approach.

  8. Demonstration of a Model Averaging Capability in FRAMES

    NASA Astrophysics Data System (ADS)

    Meyer, P. D.; Castleton, K. J.

    2009-12-01

    Uncertainty in model structure can be incorporated in risk assessment using multiple alternative models and model averaging. To facilitate application of this approach to regulatory applications based on risk or dose assessment, a model averaging capability was integrated with the Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) version 2 software. FRAMES is a software platform that allows the non-parochial communication between disparate models, databases, and other frameworks. Users have the ability to implement and select environmental models for specific risk assessment and management problems. Standards are implemented so that models produce information that is readable by other downstream models and accept information from upstream models. Models can be linked across multiple media and from source terms to quantitative risk/dose estimates. Parameter sensitivity and uncertainty analysis tools are integrated. A model averaging module was implemented to accept output from multiple models and produce average results. These results can be deterministic quantities or probability distributions obtained from an analysis of parameter uncertainty. Output from alternative models is averaged using weights determined from user input and/or model calibration results. A model calibration module based on the PEST code was implemented to provide FRAMES with a general calibration capability. An application illustrates the implementation, user interfaces, execution, and results of the FRAMES model averaging capabilities.

  9. Exact Averaging of Stochastic Equations for Flow in Porous Media

    SciTech Connect

    Karasaki, Kenzi; Shvidler, Mark; Karasaki, Kenzi

    2008-03-15

    It is well known that, at present, exact averaging of the equations for flow and transport in random porous media has been proposed only for limited special fields. Moreover, approximate averaging methods--for example, the convergence behavior and the accuracy of truncated perturbation series--are not well studied, and in addition, calculation of high-order perturbations is very complicated. These problems have for a long time stimulated attempts to answer the question: do there exist exact and sufficiently general forms of averaged equations? Here, we present an approach for finding the general exactly averaged system of basic equations for steady flow with sources in unbounded stochastically homogeneous fields. We do this by using (1) the existence and some general properties of Green's functions for the appropriate stochastic problem, and (2) some information about the random field of conductivity. This approach enables us to find the form of the averaged equations without directly solving the stochastic equations or using the usual assumption regarding any small parameters. In the common case of a stochastically homogeneous conductivity field we present the exactly averaged new basic nonlocal equation with a unique kernel-vector. We show that in the case of some type of global symmetry (isotropy, transversal isotropy, or orthotropy), we can for three-dimensional and two-dimensional flow in the same way derive the exact averaged nonlocal equations with a unique kernel-tensor. When global symmetry does not exist, the nonlocal equation with a kernel-tensor involves complications and leads to an ill-posed problem.

  10. Average Soil Water Retention Curves Measured by Neutron Radiography

    SciTech Connect

    Cheng, Chu-Lin; Perfect, Edmund; Kang, Misun; Voisin, Sophie; Bilheux, Hassina Z; Horita, Juske; Hussey, Dan

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel by pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
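
    The per-pixel step follows directly from the Beer-Lambert law, I = I0·exp(-mu_w·t_w): the attenuation gives a water thickness along the beam, and normalizing by the saturated image cancels the attenuation coefficient. A sketch (the array names and dry/saturated reference images are our assumptions):

        import numpy as np

        def water_thickness(I, I_dry, mu_w):
            """Beer-Lambert inversion: t_w = -ln(I / I_dry) / mu_w per pixel."""
            return -np.log(I / I_dry) / mu_w

        def relative_saturation(I, I_dry, I_sat):
            """Normalizing by the saturated image cancels mu_w (and much of the
            scattering bias), as the normalization in the paper is meant to do."""
            return np.log(I / I_dry) / np.log(I_sat / I_dry)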

  11. Simple Moving Average: A Method of Reporting Evolving Complication Rates.

    PubMed

    Harmsen, Samuel M; Chang, Yu-Hui H; Hattrup, Steven J

    2016-09-01

    Surgeons often cite published complication rates when discussing surgery with patients. However, these rates may not truly represent current results or an individual surgeon's experience with a given procedure. This study proposes a novel method to more accurately report current complication trends that may better represent the patient's potential experience: simple moving average. Reverse shoulder arthroplasty (RSA) is an increasingly popular and rapidly evolving procedure with highly variable reported complication rates. The authors used an RSA model to test and evaluate the usefulness of simple moving average. This study reviewed 297 consecutive RSA procedures performed by a single surgeon and noted complications in 50 patients (16.8%). Simple moving average for total complications as well as minor, major, acute, and chronic complications was then calculated using various lag intervals. These findings showed trends toward fewer total, major, and chronic complications over time, and these trends were represented best with a lag of 75 patients. Average follow-up within this lag was 26.2 months. Rates for total complications decreased from 17.3% to 8% at the most recent simple moving average. The authors' traditional complication rate with RSA (16.8%) is consistent with reported rates. However, the use of simple moving average shows that this complication rate decreased over time, with current trends (8%) markedly lower, giving the senior author a more accurate picture of his evolving complication trends with RSA. Compared with traditional methods, simple moving average can be used to better reflect current trends in complication rates associated with a surgical procedure and may better represent the patient's potential experience. [Orthopedics. 2016; 39(5):e869-e876.]
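
    Computationally the method is just a windowed mean over consecutive binary outcomes (1 = complication, 0 = none), here with the 75-patient lag the authors favored; a minimal sketch:

        import numpy as np

        def moving_complication_rate(outcomes, lag=75):
            """Simple moving average of 0/1 outcomes; one rate per window."""
            outcomes = np.asarray(outcomes, dtype=float)
            return np.convolve(outcomes, np.ones(lag) / lag, mode="valid")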

  12. Spatially averaged flow over a wavy boundary revisited

    USGS Publications Warehouse

    McLean, S.R.; Wolfe, S.R.; Nelson, J.M.

    1999-01-01

    Vertical profiles of streamwise velocity measured over bed forms are commonly used to deduce boundary shear stress for the purpose of estimating sediment transport. These profiles may be derived locally or from some sort of spatial average. Arguments for using the latter procedure are based on the assumption that spatial averaging of the momentum equation effectively removes local accelerations from the problem. Using analogies based on steady, uniform flows, it has been argued that the spatially averaged velocity profiles are approximately logarithmic and can be used to infer values of boundary shear stress. This technique of using logarithmic profiles is investigated using detailed laboratory measurements of flow structure and boundary shear stress over fixed two-dimensional bed forms. Spatial averages over the length of the bed form of mean velocity measurements at constant distances from the mean bed elevation yield vertical profiles that are highly logarithmic even though the effect of the bottom topography is observed throughout the water column. However, logarithmic fits of these averaged profiles do not yield accurate estimates of the measured total boundary shear stress. Copyright 1999 by the American Geophysical Union.
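
    The procedure being scrutinized is a fit of the log law u(z) = (u*/kappa)·ln(z/z0) to the spatially averaged profile; the paper's point is that the stress rho·u*^2 implied by such a fit need not match the measured total boundary shear stress. A sketch of the fit itself (parameter values are illustrative):

        import numpy as np

        def log_law_fit(z, u, kappa=0.4, rho=1000.0):
            """Fit u = (u*/kappa) ln(z/z0); return (u_star, z0, implied stress)."""
            slope, intercept = np.polyfit(np.log(z), u, 1)
            u_star = kappa * slope
            z0 = np.exp(-intercept / slope)
            return u_star, z0, rho * u_star**2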

  13. Model Averaging for Improving Inference from Causal Diagrams.

    PubMed

    Hamra, Ghassan B; Kaufman, Jay S; Vahratian, Anjel

    2015-08-11

    Model selection is an integral, yet contentious, component of epidemiologic research. Unfortunately, there remains no consensus on how to identify a single, best model among multiple candidate models. Researchers may be prone to selecting the model that best supports their a priori, preferred result; a phenomenon referred to as "wish bias". Directed acyclic graphs (DAGs), based on background causal and substantive knowledge, are a useful tool for specifying a subset of adjustment variables to obtain a causal effect estimate. In many cases, however, a DAG will support multiple, sufficient or minimally-sufficient adjustment sets. Even though all of these may theoretically produce unbiased effect estimates they may, in practice, yield somewhat distinct values, and the need to select between these models once again makes the research enterprise vulnerable to wish bias. In this work, we suggest combining adjustment sets with model averaging techniques to obtain causal estimates based on multiple, theoretically-unbiased models. We use three techniques for averaging the results among multiple candidate models: information criteria weighting, inverse variance weighting, and bootstrapping. We illustrate these approaches with an example from the Pregnancy, Infection, and Nutrition (PIN) study. We show that each averaging technique returns similar, model averaged causal estimates. An a priori strategy of model averaging provides a means of integrating uncertainty in selection among candidate, causal models, while also avoiding the temptation to report the most attractive estimate from a suite of equally valid alternatives.
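
    Two of the three weighting schemes named above are one-liners once each candidate model's estimate is in hand; the numbers below are placeholders, not the PIN study's results:

        import numpy as np

        betas = np.array([1.10, 1.25, 0.95])    # effect estimate per adjustment set
        ses = np.array([0.20, 0.30, 0.25])      # standard errors
        aics = np.array([310.2, 312.5, 311.1])  # information criteria

        # Akaike (information-criterion) weights
        d = aics - aics.min()
        w_aic = np.exp(-0.5 * d) / np.sum(np.exp(-0.5 * d))
        beta_aic = np.sum(w_aic * betas)

        # Inverse-variance weights
        w_iv = (1.0 / ses**2) / np.sum(1.0 / ses**2)
        beta_iv = np.sum(w_iv * betas)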

  14. Perceptual Averaging in Individuals with Autism Spectrum Disorder

    PubMed Central

    Corbett, Jennifer E.; Venuti, Paola; Melcher, David

    2016-01-01

    There is mounting evidence that observers rely on statistical summaries of visual information to maintain stable and coherent perception. Sensitivity to the mean (or other prototypical value) of a visual feature (e.g., mean size) appears to be a pervasive process in human visual perception. Previous studies in individuals diagnosed with Autism Spectrum Disorder (ASD) have uncovered characteristic patterns of visual processing that suggest they may rely more on enhanced local representations of individual objects instead of computing such perceptual averages. To further explore the fundamental nature of abstract statistical representation in visual perception, we investigated perceptual averaging of mean size in a group of 12 high-functioning individuals diagnosed with ASD using simplified versions of two identification and adaptation tasks that elicited characteristic perceptual averaging effects in a control group of neurotypical participants. In Experiment 1, participants performed with above chance accuracy in recalling the mean size of a set of circles (mean task) despite poor accuracy in recalling individual circle sizes (member task). In Experiment 2, their judgments of single circle size were biased by mean size adaptation. Overall, these results suggest that individuals with ASD perceptually average information about sets of objects in the surrounding environment. Our results underscore the fundamental nature of perceptual averaging in vision, and further our understanding of how autistic individuals make sense of the external environment. PMID:27872602

  15. The average distances in random graphs with given expected degrees

    NASA Astrophysics Data System (ADS)

    Chung, Fan; Lu, Linyuan

    2002-12-01

    Random graph theory is used to examine the "small-world phenomenon"; any two strangers are connected through a short chain of mutual acquaintances. We will show that for certain families of random graphs with given expected degrees the average distance is almost surely of order log n/log d̃, where d̃ is the weighted average of the sum of squares of the expected degrees. Of particular interest are power law random graphs in which the number of vertices of degree k is proportional to 1/k^β for some fixed exponent β. For the case of β > 3, we prove that the average distance of the power law graphs is almost surely of order log n/log d̃. There is, however, a range of exponents 2 < β < 3 for which the power law random graphs have average distance almost surely of order log log n, but have diameter of order log n (provided having some mild constraints for the average distance and maximum degree). In particular, these graphs contain a dense subgraph, which we call the core, having n^(c/log log n) vertices. Almost all vertices are within distance log log n of the core although there are vertices at distance log n from the core.
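
    For concreteness, the weighted average referred to here is the second-order average degree, and the main estimate reads (our rendering in LaTeX of the abstract's statement):

        \tilde{d} \;=\; \frac{\sum_i d_i^{2}}{\sum_i d_i},
        \qquad
        \text{average distance} \;=\; \Theta\!\left(\frac{\log n}{\log \tilde{d}}\right),

    where the d_i are the expected degrees.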

  16. The Conservation of Area Integrals in Averaging Transformations

    NASA Astrophysics Data System (ADS)

    Kuznetsov, E. D.

    2010-06-01

    It is shown for the two-planetary version of the weakly perturbed two-body problem that, in a system defined by a finite part of a Poisson expansion of the averaged Hamiltonian, only one of the three components of the area vector is conserved, namely, the component corresponding to the reference plane from which the longitudes are measured. The variability of the other two components is demonstrated in two ways. The first is based on calculating the Poisson bracket of the averaged Hamiltonian and the components of the area vector written in closed form. In the second, an echeloned Poisson series processor (EPSP) is used when calculating the Poisson bracket. The averaged Hamiltonian is taken with accuracy to second order in the small parameter of the problem, and the components of the area vector are expanded in a Poisson series.

  17. Time-average based on scaling law in anomalous diffusions

    NASA Astrophysics Data System (ADS)

    Kim, Hyun-Joo

    2015-05-01

    To resolve the measurement ambiguity brought about by the weak ergodicity breaking that appears in anomalous diffusions, we have suggested the time-averaged mean squared displacement (MSD) $\overline{\delta^2(\tau)}_{\tau}$ with an integration interval depending linearly on the lag time τ. For the continuous time random walk describing a subdiffusive behavior, we have found that $\overline{\delta^2(\tau)}_{\tau} \sim \tau^{\gamma}$, like the ensemble-averaged MSD, which makes it possible to measure the proper exponent values through time averaging in experiments such as single-molecule tracking. Also, we have found that this behavior originates from the scaling nature of the MSD at an aging time in anomalous diffusion, and we have confirmed it through numerical results for another microscopic non-Markovian model showing subdiffusion and superdiffusion that originate from memory enhancement.
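
    A sketch of the time-averaged MSD for a discrete trajectory; restricting the averaging window so that it grows linearly with the lag is our reading of the paper's proposal, with c a placeholder constant:

        import numpy as np

        def tamsd(x, tau, c=None):
            """Time-averaged MSD of trajectory x at integer lag tau.

            With c set, only the first c*tau start times enter the average,
            making the integration interval linear in the lag."""
            d = x[tau:] - x[:-tau]
            if c is not None:
                d = d[: c * tau]
            return np.mean(d**2)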

  18. Testing averaged cosmology with type Ia supernovae and BAO data

    NASA Astrophysics Data System (ADS)

    Santos, B.; Coley, A. A.; Chandrachani Devi, N.; Alcaniz, J. S.

    2017-02-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  19. Averaged initial Cartesian coordinates for long lifetime satellite studies

    NASA Technical Reports Server (NTRS)

    Pines, S.

    1975-01-01

    A set of initial Cartesian coordinates, which are free of ambiguities and resonance singularities, is developed to study satellite mission requirements and dispersions over long lifetimes. The method outlined herein possesses two distinct advantages over most other averaging procedures. First, the averaging is carried out numerically using Gaussian quadratures, thus avoiding tedious expansions and the resulting resonances for critical inclinations, etc. Secondly, by using the initial rectangular Cartesian coordinates, conventional, existing acceleration perturbation routines can be absorbed into the program without further modifications, thus making the method easily adaptable to the addition of new perturbation effects. The averaged nonlinear differential equations are integrated by means of a Runge Kutta method. A typical step size of several orbits permits rapid integration of long lifetime orbits in a short computing time.
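
    The numerical averaging step is ordinary Gauss-Legendre quadrature over one orbit; a sketch, with f standing in for any perturbation routine evaluated as a function of mean anomaly:

        import numpy as np

        def orbit_average(f, n_nodes=24):
            """Average f(M) over one orbit, M in [0, 2*pi), by Gauss-Legendre."""
            nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
            M = np.pi * (nodes + 1.0)            # map [-1, 1] onto [0, 2*pi]
            return np.sum(weights * f(M)) / 2.0  # (1/2pi) * integral, Jacobian pi

        print(orbit_average(np.cos))             # ~0: cos averages out over an orbit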

  20. Stochastic averaging and sensitivity analysis for two scale reaction networks

    NASA Astrophysics Data System (ADS)

    Hashemi, Araz; Núñez, Marcel; Plecháč, Petr; Vlachos, Dionisios G.

    2016-02-01

    In the presence of multiscale dynamics in a reaction network, direct simulation methods become inefficient as they can only advance the system on the smallest scale. This work presents stochastic averaging techniques to accelerate computations for obtaining estimates of expected values and sensitivities with respect to the steady state distribution. A two-time-scale formulation is used to establish bounds on the bias induced by the averaging method. Further, this formulation provides a framework to create an accelerated "averaged" version of most single-scale sensitivity estimation methods. In particular, we propose the use of a centered ergodic likelihood ratio method for steady state estimation and show how one can adapt it to accelerated simulations of multiscale systems. Finally, we develop an adaptive "batch-means" stopping rule for determining when to terminate the micro-equilibration process.

  1. Genuine non-self-averaging and ultraslow convergence in gelation.

    PubMed

    Cho, Y S; Mazza, M G; Kahng, B; Nagler, J

    2016-08-01

    In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation.

  2. Genuine non-self-averaging and ultraslow convergence in gelation

    NASA Astrophysics Data System (ADS)

    Cho, Y. S.; Mazza, M. G.; Kahng, B.; Nagler, J.

    2016-08-01

    In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation.

  3. Evolution of the average avalanche shape with the universality class.

    PubMed

    Laurson, Lasse; Illa, Xavier; Santucci, Stéphane; Tore Tallakstad, Ken; Måløy, Knut Jørgen; Alava, Mikko J

    2013-01-01

    A multitude of systems ranging from the Barkhausen effect in ferromagnetic materials to plastic deformation and earthquakes respond to slow external driving by exhibiting intermittent, scale-free avalanche dynamics or crackling noise. The avalanches are power-law distributed in size, and have a typical average shape: these are the two most important signatures of avalanching systems. Here we show how the average avalanche shape evolves with the universality class of the avalanche dynamics by employing a combination of scaling theory, extensive numerical simulations and data from crack propagation experiments. It follows a simple scaling form parameterized by two numbers, the scaling exponent relating the average avalanche size to its duration and a parameter characterizing the temporal asymmetry of the avalanches. The latter reflects a broken time-reversal symmetry in the avalanche dynamics, emerging from the local nature of the interaction kernel mediating the avalanche dynamics.
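
    The averaging step itself is straightforward; a sketch that extracts all avalanches of a given duration from an activity signal and averages their rescaled profiles (the paper's scaling analysis is not reproduced; the test signal is synthetic):

      import numpy as np

      def average_shape(signal, duration, n_points=50):
          # An avalanche is a maximal run of strictly positive activity.
          padded = np.concatenate(([False], signal > 0, [False]))
          edges = np.flatnonzero(np.diff(padded.astype(int)))
          starts, ends = edges[::2], edges[1::2]  # rising / falling edges
          grid = np.linspace(0.0, 1.0, n_points)
          shapes = [np.interp(grid, np.linspace(0.0, 1.0, e - s), signal[s:e])
                    for s, e in zip(starts, ends) if e - s == duration]
          return np.mean(shapes, axis=0) if shapes else None

      rng = np.random.default_rng(4)
      activity = np.clip(rng.standard_normal(100000), 0.0, None)  # toy signal
      print(average_shape(activity, duration=8))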

  4. Time-average TV holography for vibration fringe analysis

    SciTech Connect

    Kumar, Upputuri Paul; Kalyani, Yanam; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2009-06-01

    Time-average TV holography is a widely used method for vibration measurement. The method generates speckle correlation time-averaged J0 fringes that can be used for full-field qualitative visualization of mode shapes at resonant frequencies of an object under harmonic excitation. In order to map the amplitudes of vibration, quantitative evaluation of the time-averaged fringe pattern is desired. A quantitative evaluation procedure based on the phase-shifting technique used in two-beam interferometry has also been adopted for this application with some modification. The existing procedure requires a large number of frames to be recorded for implementation. We propose a procedure that will reduce the number of frames required for the analysis. The TV holographic system used and the experimental results obtained with it on an edge-clamped, sinusoidally excited square aluminium plate sample are discussed.

  5. Evolution of the average avalanche shape with the universality class

    PubMed Central

    Laurson, Lasse; Illa, Xavier; Santucci, Stéphane; Tore Tallakstad, Ken; Måløy, Knut Jørgen; Alava, Mikko J

    2013-01-01

    A multitude of systems ranging from the Barkhausen effect in ferromagnetic materials to plastic deformation and earthquakes respond to slow external driving by exhibiting intermittent, scale-free avalanche dynamics or crackling noise. The avalanches are power-law distributed in size, and have a typical average shape: these are the two most important signatures of avalanching systems. Here we show how the average avalanche shape evolves with the universality class of the avalanche dynamics by employing a combination of scaling theory, extensive numerical simulations and data from crack propagation experiments. It follows a simple scaling form parameterized by two numbers, the scaling exponent relating the average avalanche size to its duration and a parameter characterizing the temporal asymmetry of the avalanches. The latter reflects a broken time-reversal symmetry in the avalanche dynamics, emerging from the local nature of the interaction kernel mediating the avalanche dynamics. PMID:24352571

  6. The Spectral Form Factor Is Not Self-Averaging

    SciTech Connect

    Prange, R.

    1997-03-01

    The form factor, k(t), is the spectral statistic which best displays nonuniversal quasiclassical deviations from random matrix theory. Recent estimations of k(t) for a single spectrum found interesting new effects of this type. It was supposed that k(t) is self-averaging and thus did not require an ensemble average. We here argue that this supposition sometimes fails and that for many important systems an ensemble average is essential to see detailed properties of k(t). In other systems, notably the nontrivial zeros of the Riemann zeta function, it will be possible to see the nonuniversal properties by an analysis of a single spectrum. © 1997 The American Physical Society
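
    For reference, a common convention (normalizations vary across the literature) builds the form factor from the energy levels $E_n$ of an $N$-level spectrum as

      k(t) \;=\; \frac{1}{N}\,\Big|\sum_{n=1}^{N} e^{\,i E_n t}\Big|^{2} ,

    and self-averaging would mean that k(t) computed from a single spectrum already approximates its ensemble mean.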

  7. Cascade of failures in interdependent networks with different average degree

    NASA Astrophysics Data System (ADS)

    Cheng, Zunshui; Cao, Jinde; Hayat, Tasawar

    2014-12-01

    Most modern systems are composed of two coupled sub-networks and should therefore be modeled as interdependent networks. The study of the robustness of interdependent networks thus becomes interesting and significant. In this paper, mainly by numerical simulations, we investigate the robustness of interdependent Erdős-Rényi (ER) networks and interdependent scale-free (SF) networks whose two sub-networks have different average degree. First, we study the robustness of interdependent networks under random attack. Second, we study the robustness of interdependent networks under targeted attack on high- or low-degree nodes, and find that interdependent networks with different average degree differ significantly from interdependent networks with equal average degree.

  8. Size and emotion averaging: costs of dividing attention after all.

    PubMed

    Brand, John; Oriet, Chris; Tottenham, Laurie Sykes

    2012-03-01

    Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.

  9. Exact solution to the averaging problem in cosmology.

    PubMed

    Wiltshire, David L

    2007-12-21

    The exact solution of a two-scale Buchert average of the Einstein equations is derived for an inhomogeneous universe that represents a close approximation to the observed universe. The two scales represent voids, and the bubble walls surrounding them within which clusters of galaxies are located. As described elsewhere [New J. Phys. 9, 377 (2007)10.1088/1367-2630/9/10/377], apparent cosmic acceleration can be recognized as a consequence of quasilocal gravitational energy gradients between observers in bound systems and the volume-average position in freely expanding space. With this interpretation, the new solution presented here replaces the Friedmann solutions, in representing the average evolution of a matter-dominated universe without exotic dark energy, while being observationally viable.

  10. A Spectral Estimate of Average Slip in Earthquakes

    NASA Astrophysics Data System (ADS)

    Boatwright, J.; Hanks, T. C.

    2014-12-01

    We demonstrate that the high-frequency acceleration spectral level ao of an ω-square source spectrum is directly proportional to the average slip of the earthquake Δu divided by the travel time to the station r/β and multiplied by the radiation pattern Fs, that is, ao = 1.37 Fs (β/r) Δu. This simple relation is robust but depends implicitly on the assumed relation between the corner frequency and source radius, which we take from the Brune (1970, JGR) model. We use this relation to estimate average slip by fitting spectral ratios with smaller earthquakes as empirical Green's functions. For a pair of Mw = 1.8 and 1.2 earthquakes in Parkfield, we fit the spectral ratios published by Nadeau et al. (1994, BSSA) to obtain 0.39 and 0.10 cm. For the Mw = 3.9 earthquake that occurred on Oct 29, 2012, at the Pinnacles, we fit spectral ratios formed with respect to an Md = 2.4 aftershock to obtain 4.4 cm. Using the Sato and Hirasawa (1973, JPE) model instead of the Brune model increases the estimates of average slip by 75%. These estimates of average slip are factors of 5-40 (or 3-23) times less than the average slips of 3.89 cm and 23.3 cm estimated by Nadeau and Johnson (1998, BSSA) from the slip rates, average seismic moments, and recurrence intervals for the two sequences to which they associate these earthquakes. The most reasonable explanation for this discrepancy is that the stress release and rupture processes of these earthquakes are strongly heterogeneous. However, the fits to the spectral ratios do not indicate that the spectral shapes are distorted in the first two octaves above the corner frequency.
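
    Inverting the relation above gives a one-line slip estimate; a sketch with illustrative placeholder numbers (not the paper's data; units must simply be consistent):

      def average_slip(a_o, r, beta=3500.0, Fs=0.6):
          # du = a_o * r / (1.37 * Fs * beta), from a_o = 1.37 * Fs * (beta/r) * du.
          return a_o * r / (1.37 * Fs * beta)

      print(average_slip(a_o=0.02, r=10e3))  # slip for an assumed level and range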

  11. Average coherence and its typicality for random mixed quantum states

    NASA Astrophysics Data System (ADS)

    Zhang, Lin

    2017-04-01

    The Wishart ensemble is a useful and important random matrix model used in diverse fields. By realizing induced random mixed quantum states as a Wishart ensemble with fixed unit trace, and using a matrix integral technique, we give a fast track to the average coherence for random mixed quantum states induced via partial-tracing of the Haar-distributed bipartite pure states. As a direct consequence of this result, we get a compact formula for the average subentropy of random mixed states. These compact formulae extend our previous work.

  12. Probing turbulence intermittency via autoregressive moving-average models

    NASA Astrophysics Data System (ADS)

    Faranda, Davide; Dubrulle, Bérengère; Daviaud, François; Pons, Flavio Maria Emanuele

    2014-12-01

    We suggest an approach to probing intermittency corrections to the Kolmogorov law in turbulent flows based on the autoregressive moving-average modeling of turbulent time series. We introduce an index Υ that measures the distance from a Kolmogorov-Obukhov model in the autoregressive moving-average model space. Applying our analysis to particle image velocimetry and laser Doppler velocimetry measurements in a von Kármán swirling flow, we show that Υ is proportional to traditional intermittency corrections computed from structure functions. Therefore, it provides the same information, using much shorter time series. We conclude that Υ is a suitable index to reconstruct intermittency in experimental turbulent fields.
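
    The first step of the approach, fitting an autoregressive moving-average model to a measured velocity series, might look as follows (the paper's distance index Υ is not reproduced; the data here are a synthetic stand-in, and statsmodels' ARIMA with d = 0 is used as the ARMA fitter):

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(0)
      u = rng.standard_normal(4096).cumsum()  # stand-in for a measured velocity series
      increments = np.diff(u)

      fit = ARIMA(increments, order=(1, 0, 1)).fit()  # ARMA(1, 1) on increments
      print(fit.params)  # AR and MA coefficients plus noise variance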

  13. Collision and average velocity effects on the ratchet pinch

    SciTech Connect

    Vlad, M.; Benkadda, S.

    2008-03-15

    A ratchet-type average velocity V^R appears for test particles moving in a stochastic potential and a magnetic field that is space dependent. This model is developed by including particle collisions and an average velocity. We show that these components of the motion can destroy the ratchet velocity, but they can also produce a significant increase of V^R, depending on the parameters. The amplification of the ratchet pinch is a nonlinear effect that appears in the presence of trajectory eddying.

  14. AMPERE AVERAGE CURRENT PHOTOINJECTOR AND ENERGY RECOVERY LINAC.

    SciTech Connect

    BEN-ZVI,I.; BURRILL,A.; CALAGA,R.; ET AL.

    2004-08-17

    High-power free-electron lasers were made possible by advances in superconducting linacs operated in an energy-recovery mode. In order to reach much higher power levels, say a fraction of a megawatt of average power, many technological barriers are yet to be broken. We describe work on CW, high-current, and high-brightness electron beams. This includes a description of a superconducting, laser-photocathode RF gun employing a new secondary-emission multiplying cathode and an accelerator cavity, both capable of producing on the order of one ampere of average current, and plans for an ERL based on these units.

  15. Compact expressions for spherically averaged position and momentum densities

    NASA Astrophysics Data System (ADS)

    Crittenden, Deborah L.; Bernard, Yves A.

    2009-08-01

    Compact expressions for spherically averaged position and momentum density integrals are given in terms of spherical Bessel functions (jn) and modified spherical Bessel functions (in), respectively. All integrals required for ab initio calculations involving s, p, d, and f-type Gaussian functions are tabulated, highlighting a neat isomorphism between position and momentum space formulae. Spherically averaged position and momentum densities are calculated for a set of molecules comprising the ten-electron isoelectronic series (Ne-CH4) and the eighteen-electron series (Ar-SiH4, F2-C2H6).

  16. Quantum State Discrimination Using the Minimum Average Number of Copies

    NASA Astrophysics Data System (ADS)

    Slussarenko, Sergei; Weston, Morgan M.; Li, Jun-Gang; Campbell, Nicholas; Wiseman, Howard M.; Pryde, Geoff J.

    2017-01-01

    In the task of discriminating between nonorthogonal quantum states from multiple copies, the key parameters are the error probability and the resources (number of copies) used. Previous studies have considered the task of minimizing the average error probability for fixed resources. Here we introduce a new state discrimination task: minimizing the average resources for a fixed admissible error probability. We show that this new task is not performed optimally by previously known strategies, and derive and experimentally test a detection scheme that performs better.

  17. Spatial average ambiguity function for array radar with stochastic signals

    NASA Astrophysics Data System (ADS)

    Zha, Guofeng; Wang, Hongqiang; Cheng, Yongqiang; Qin, Yuliang

    2016-03-01

    For analyzing the spatial resolving performance of multi-transmitter single-receiver (MTSR) array radar with stochastic signals, the spatial average ambiguity function (SAAF) is introduced based on the statistical average theory. The analytic expression of SAAF and the corresponding resolutions in vertical range and in horizontal range are derived. Since spatial resolving performance is impacted by many parameters including signal modulation schemes, signal bandwidth, array aperture's size and target's spatial position, comparisons are implemented to analyze these influences. Simulation results are presented to validate the whole analysis.

  18. Averaged energy inequalities for the nonminimally coupled classical scalar field

    SciTech Connect

    Fewster, Christopher J.; Osterbrink, Lutz W.

    2006-08-15

    The stress-energy tensor for the classical nonminimally coupled scalar field is known not to satisfy the pointwise energy conditions of general relativity. In this paper we show, however, that local averages of the classical stress-energy tensor satisfy certain inequalities. We give bounds for averages along causal geodesics and show, e.g., that in Ricci-flat background spacetimes, ANEC and AWEC are satisfied. Furthermore we use our result to show that in the classical situation we have an analogue to the phenomenon of quantum interest. These results lay the foundations for analogous energy inequalities for the quantized nonminimally coupled fields, which will be discussed elsewhere.

  19. An averaging analysis of discrete-time indirect adaptive control

    NASA Technical Reports Server (NTRS)

    Phillips, Stephen M.; Kosut, Robert L.; Franklin, Gene F.

    1988-01-01

    An averaging analysis of indirect, discrete-time, adaptive control systems is presented. The analysis results in a signal-dependent stability condition and accounts for unmodeled plant dynamics as well as exogenous disturbances. This analysis is applied to two discrete-time adaptive algorithms: an unnormalized gradient algorithm and a recursive least-squares (RLS) algorithm with resetting. Since linearization and averaging are used for the gradient analysis, a local stability result valid for small adaptation gains is found. For RLS with resetting, the assumption is that there is a long time between resets. The results for the two algorithms are virtually identical, emphasizing their similarities in adaptive control.

  20. High average power scaleable thin-disk laser

    DOEpatents

    Beach, Raymond J.; Honea, Eric C.; Bibeau, Camille; Payne, Stephen A.; Powell, Howard; Krupke, William F.; Sutton, Steven B.

    2002-01-01

    Using a thin disk laser gain element with an undoped cap layer enables the scaling of lasers to extremely high average output power values. Ordinarily, the power scaling of such thin disk lasers is limited by the deleterious effects of amplified spontaneous emission. By using an undoped cap layer diffusion bonded to the thin disk, the onset of amplified spontaneous emission does not occur as readily as if no cap layer is used, and much larger transverse thin disks can be effectively used as laser gain elements. This invention can be used as a high average power laser for material processing applications as well as for weapon and air defense applications.

  1. Average Weighted Receiving Time of Weighted Tetrahedron Koch Networks

    NASA Astrophysics Data System (ADS)

    Dai, Meifeng; Zhang, Danping; Ye, Dandan; Zhang, Cheng; Li, Lei

    2015-07-01

    We introduce weighted tetrahedron Koch networks with infinite weight factors, which are a generalization of the finite ones. The notion of weighted time is first defined in this paper. The mean weighted first-passage time (MWFPT) and the average weighted receiving time (AWRT) are defined by weighted time accordingly. We study the AWRT with weight-dependent walk. Results show that the AWRT for a nontrivial weight factor sequence grows sublinearly with the network order. To investigate the reason for this sublinearity, the average receiving time (ART) is discussed for four cases.

  2. Averaging processes in granular flows driven by gravity

    NASA Astrophysics Data System (ADS)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frameworks for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists of the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental

  3. 40 CFR 63.652 - Emissions averaging provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... annual credits and debits in the Periodic Reports as specified in § 63.655(g)(8). Every fourth Periodic... reported in the next Periodic Report. (iii) The following procedures and equations shall be used to..., dimensionless (see table 33 of subpart G). P=Weighted average rack partial pressure of organic HAP's...

  4. HIGH AVERAGE POWER UV FREE ELECTRON LASER EXPERIMENTS AT JLAB

    SciTech Connect

    Douglas, David; Evtushenko, Pavel; Gubeli, Joseph; Hernandez-Garcia, Carlos; Legg, Robert; Neil, George; Powers, Thomas; Shinn, Michelle D; Tennant, Christopher; Williams, Gwyn

    2012-07-01

    Having produced 14 kW of average power at ~2 microns, JLab has shifted its focus to the ultraviolet portion of the spectrum. This presentation will describe the JLab UV Demo FEL, present specifics of its driver ERL, and discuss the latest experimental results from FEL experiments and machine operations.

  5. Average characteristics and activity dependence of the subauroral polarization stream

    NASA Astrophysics Data System (ADS)

    Foster, J. C.; Vo, H. B.

    2002-12-01

    Data from the Millstone Hill incoherent scatter radar taken over two solar cycles (1979-2000) are examined to determine the average characteristics of the disturbance convection electric field in the midlatitude ionosphere. Radar azimuth scans provide a regular database of ionospheric plasma convection observations spanning auroral and subauroral latitudes, and these scans have been examined for all local times and activity conditions. We examine the occurrence and characteristics of a persistent secondary westward convection peak which lies equatorward of the auroral two-cell convection. Individual scans and average patterns of plasma flow identify and characterize this latitudinally broad and persistent subauroral polarization stream (SAPS), which spans the nightside from dusk to the early morning sector for all Kp greater than 4. Premidnight, the SAPS westward convection lies equatorward of L = 4 (60° invariant latitude, Λ), spans 3°-5° of latitude, and has an average peak amplitude of >900 m/s. In the predawn sector, SAPS is seen as a region of antisunward convection equatorward of L = 3 (55° Λ), spanning ~3° of latitude, with an average peak amplitude of 400 m/s.

  6. 40 CFR 63.503 - Emissions averaging provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... applied as a pollution prevention project, or a pollution prevention measure, where the control achieves a... measures are used to control five or more of the emission points included in the emissions average. (B) If... pollution prevention measures are used to control five or more of the emission points included in...

  7. 40 CFR 63.503 - Emissions averaging provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... applied as a pollution prevention project, or a pollution prevention measure, where the control achieves a... measures are used to control five or more of the emission points included in the emissions average. (B) If... pollution prevention measures are used to control five or more of the emission points included in...

  8. 40 CFR 63.1332 - Emissions averaging provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... additional emission points if pollution prevention measures are used to control five or more of the emission... five additional emission points if pollution prevention measures are used to control five or more of... averaging credits if control was applied after November 15, 1990, and if sufficient information is...

  9. 40 CFR 63.503 - Emissions averaging provisions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... applied as a pollution prevention project, or a pollution prevention measure, where the control achieves a... measures are used to control five or more of the emission points included in the emissions average. (B) If... pollution prevention measures are used to control five or more of the emission points included in...

  10. 40 CFR 63.1332 - Emissions averaging provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... if pollution prevention measures are used to control five or more of the emission points included in... additional emission points if pollution prevention measures are used to control five or more of the emission... section describe the emission points that may be used to generate emissions averaging credits if...

  11. 40 CFR 63.150 - Emissions averaging provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... control device, a recovery device applied as a pollution prevention project, or a pollution prevention... Group 1 emission points to which the reference control technology (defined in § 63.111 of this subpart... following emission points can be used to generate emissions averaging credits, if control was applied...

  12. 40 CFR 63.150 - Emissions averaging provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... control device, a recovery device applied as a pollution prevention project, or a pollution prevention... Group 1 emission points to which the reference control technology (defined in § 63.111 of this subpart... following emission points can be used to generate emissions averaging credits, if control was applied...

  13. 40 CFR 63.1332 - Emissions averaging provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... if pollution prevention measures are used to control five or more of the emission points included in... additional emission points if pollution prevention measures are used to control five or more of the emission... section describe the emission points that may be used to generate emissions averaging credits if...

  14. 40 CFR 63.1332 - Emissions averaging provisions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... additional emission points if pollution prevention measures are used to control five or more of the emission... five additional emission points if pollution prevention measures are used to control five or more of... averaging credits if control was applied after November 15, 1990, and if sufficient information is...

  15. 40 CFR 63.150 - Emissions averaging provisions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... control device, a recovery device applied as a pollution prevention project, or a pollution prevention... Group 1 emission points to which the reference control technology (defined in § 63.111 of this subpart... following emission points can be used to generate emissions averaging credits, if control was applied...

  16. 34 CFR 668.196 - Average rates appeals.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... EDUCATION, DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668... determine that you qualify, we notify you of that determination at the same time that we notify you of your... determine that you meet the requirements for an average rates appeal. (Approved by the Office of...

  17. Formulation of Maximized Weighted Averages in URTURIP Technique

    DTIC Science & Technology

    2001-10-25

    Formulation of Maximized Weighted Averages in URTURIP Technique. Bruno Migeon, Philippe Deforge, Pierre Marché. Laboratoire Vision et Robotique, 63, avenue de Lattre de Tassigny, 18020 Bourges Cedex, France.

  18. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... methodology functionalization. 301.7 Section 301.7 Conservation of Power and Water Resources FEDERAL ENERGY... SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER ACT § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each...

  19. Punching Wholes into Parts, or Beating the Percentile Averages.

    ERIC Educational Resources Information Center

    Carwile, Nancy R.

    1990-01-01

    Presents a facetious, ingenious resolution to the percentile dilemma concerning above- and below-average test scores. If schools enrolled the same number of pigs as students and tested both groups, the pigs would fill up the bottom half and all children would rank in the top 50 percent. However, some wrinkles need to be ironed out! (MLH)

  20. 40 CFR 63.1332 - Emissions averaging provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Standards for Hazardous Air Pollutant Emissions: Group IV Polymers and Resins § 63.1332 Emissions averaging... based on either organic HAP or TOC. (3) For the purposes of these provisions, whenever Method 18, 40 CFR... (a)(3)(i) and (a)(3)(ii) of this section. (i) The organic HAP used as the calibration gas for...

  1. Average formation length of hadrons in a string model

    NASA Astrophysics Data System (ADS)

    Grigoryan, L.

    2010-04-01

    The space-time scales of the hadronization process are investigated in the framework of the string model. It is shown that the average formation lengths of pseudoscalar mesons, produced in semi-inclusive deep inelastic scattering of leptons on different targets, depend on their electrical charges. In particular, the average formation lengths of positively charged hadrons are larger than those of negatively charged ones. This statement holds for all scaling functions used, for z (the fraction of the virtual photon energy transferred to the detected hadron) larger than 0.15, for all nuclear targets, and for any value of the Bjorken scaling variable xBj. In all cases, the main mechanism is direct production of pseudoscalar mesons. Taking into account an additional production mechanism, the decay of resonances, leads to a decrease in the average formation lengths. It is shown that the average formation lengths of positively (negatively) charged mesons are slowly increasing (decreasing) functions of xBj. The results obtained can be important, in particular, for understanding the hadronization process in the nuclear environment.

  2. 27 CFR 19.249 - Average effective tax rate.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Average effective tax rate. 19.249 Section 19.249 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Distilled Spirits Taxes Effective Tax Rates §...

  3. Evaluation of spline and weighted average interpolation algorithms

    NASA Astrophysics Data System (ADS)

    Eckstein, Barbara Ann

    Bivariate polynomial and weighted average interpolations were tested on two data sets. One data set consisted of irregularly spaced Bouguer gravity values. Maps derived from automated interpolation were compared to a manually created map to determine the best computer-generated diagram. For this data set, bivariate polynomial interpolation was inadequate, showing many spurious circular anomalies with extrema greatly exceeding the input values. The greatest distortion occurred near roughly colinear observations and steep field gradients. The computerized map from weighted average interpolation matched the manual map when the number of grid points was roughly nine times the number of input points. Groundwater recharge and discharge rates were used for the second example. The discharge zones are two narrow irrigation ditches, and measurements were along linear traverses. Again, polynomial interpolation produced unreasonably large interpolated values near high field gradients. The weighted average method required a higher ratio of grid points to input data (about 64 to 1) because of the long narrow shape of the discharge zones. The weighted average interpolation method was more reliable than the polynomial method because it was less sensitive to the nature of the data distribution and to the field gradients.
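
    A minimal sketch of the weighted average (inverse-distance) interpolation evaluated in this study; the weighting power, grid density, and test field are illustrative choices, not the study's data:

      import numpy as np

      def idw_grid(x, y, v, xi, yi, power=2.0, eps=1e-12):
          # Inverse-distance-weighted average of scattered values v at (x, y),
          # evaluated on the grid defined by 1-D coordinates xi, yi.
          XI, YI = np.meshgrid(xi, yi)
          d2 = (XI[..., None] - x) ** 2 + (YI[..., None] - y) ** 2
          w = 1.0 / (d2 ** (power / 2.0) + eps)
          return (w * v).sum(axis=-1) / w.sum(axis=-1)

      rng = np.random.default_rng(2)
      x, y = rng.uniform(0, 1, 50), rng.uniform(0, 1, 50)
      v = np.sin(3 * x) + y  # stand-in for irregularly spaced gravity values
      grid = idw_grid(x, y, v, np.linspace(0, 1, 21), np.linspace(0, 1, 21))
      print(grid.shape)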

  4. Polyline averaging using distance surfaces: A spatial hurricane climatology

    NASA Astrophysics Data System (ADS)

    Scheitlin, Kelsey N.; Mesev, Victor; Elsner, James B.

    2013-03-01

    The US Gulf states are frequently hit by hurricanes, causing widespread damage resulting in economic loss and occasional human fatalities. Current hurricane climatologies and predictive models frequently omit information on the spatial characteristics of hurricane movement—their linear tracks. We investigate the construction of a spatial hurricane climatology that condenses linear tracks to one-dimensional polylines. With the aid of distance surfaces, an average hurricane track is calculated by summing polylines as part of a grid-based algorithm. We demonstrate the procedure on a particularly vulnerable coastline around the city of Galveston in Texas, where the tracks of the closest storms to Galveston are also weighted by an inverse distance function. Track averaging is also applied as a means of interpolating possible paths of historical storms where records are sporadic observations, and sometimes anecdotal. We offer the average track as a convenient regional summary of expected hurricane movement. The average track, together with other hurricane attributes, also provides a means to assess the expected local vulnerability of property and environmental damage.

  5. Cognitive Patterns of "Retarded" and Below-Average Readers.

    ERIC Educational Resources Information Center

    Leong, Che K.

    1980-01-01

    The cognitive patterns of 58 "retarded" and 38 below-average readers were compared with controls, according to Luria's simultaneous and successive modes of information processing. Factor Analysis showed different cognitive patterns for disabled and nondisabled readers. Reading skills, rather than cognitive ability, were shown to be…

  6. Maximum Likelihood Estimation of Multivariate Autoregressive-Moving Average Models.

    DTIC Science & Technology

    1977-02-01

    maximizing the same have been proposed i) in time domain by Box and Jenkins [4], Åström [3], Wilson [23], and Phadke [16], and ii) in frequency domain by...moving average residuals and other covariance matrices with linear structure", Annals of Statistics, 3. 3. Åström, K. J. (1970), Introduction to

  7. Advising Students about Required Grade-Point Averages

    ERIC Educational Resources Information Center

    Moore, W. Kent

    2006-01-01

    Sophomores interested in professional colleges with grade-point average (GPA) standards for admission to upper division courses will need specific and realistic information concerning the requirements. Specifically, those who fall short of the standard must assess the likelihood of achieving the necessary GPA for professional program admission.…

  8. Average subentropy, coherence and entanglement of random mixed quantum states

    NASA Astrophysics Data System (ADS)

    Zhang, Lin; Singh, Uttam; Pati, Arun K.

    2017-02-01

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimensions when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is bounded uniformly, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of the relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computing these entanglement measures for this specific class of mixed states.

  9. 47 CFR 80.759 - Average terrain elevation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage §...

  10. 47 CFR 80.759 - Average terrain elevation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 5 2011-10-01 2011-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage §...

  11. Reducing Noise by Repetition: Introduction to Signal Averaging

    ERIC Educational Resources Information Center

    Hassan, Umer; Anwar, Muhammad Sabieh

    2010-01-01

    This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
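
    The core point, that noise in the average of N repeated sweeps falls roughly as 1/sqrt(N), is easy to demonstrate with synthetic data:

      import numpy as np

      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 1.0, 500)
      template = np.sin(2 * np.pi * 5 * t)  # the repeatable signal

      for n in (1, 16, 256):
          sweeps = template + rng.standard_normal((n, t.size))  # unit-variance noise
          residual = sweeps.mean(axis=0) - template
          print(n, residual.std())  # roughly 1, 1/4, 1/16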

  12. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...

  13. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...

  14. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...

  15. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...

  16. 40 CFR 80.67 - Compliance on average.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...

  17. Speckle averaging system for laser raster-scan image projection

    DOEpatents

    Tiszauer, Detlev H.; Hackel, Lloyd A.

    1998-03-17

    The viewers' perception of laser speckle in a laser-scanned image projection system is modified or eliminated by the addition of an optical deflection system that effectively presents a new speckle realization at each point on the viewing screen to each viewer for every scan across the field. The speckle averaging is accomplished without introduction of spurious imaging artifacts.

  18. The method of averages applied to the KS differential equations

    NASA Technical Reports Server (NTRS)

    Graf, O. F., Jr.; Mueller, A. C.; Starke, S. E.

    1977-01-01

    A new approach for the solution of artificial satellite trajectory problems is proposed. The basic idea is to apply an analytical solution method (the method of averages) to an appropriate formulation of the orbital mechanics equations of motion (the KS-element differential equations). The result is a set of transformed equations of motion that are more amenable to numerical solution.

  19. AVERAGE ANNUAL SOLAR UV DOSE OF THE CONTINENTAL US CITIZEN

    EPA Science Inventory

    The average annual solar UV dose of US citizens is not known, but is required for relative risk assessments of skin cancer from UV-emitting devices. We solved this problem using a novel approach. The EPA's "National Human Activity Pattern Survey" recorded the daily ou...

  20. Fully variational average atom model with ion-ion correlations.

    PubMed

    Starrett, C E; Saumon, D

    2012-02-01

    An average atom model for dense ionized fluids that includes ion correlations is presented. The model assumes spherical symmetry and is based on density functional theory, the integral equations for uniform fluids, and a variational principle applied to the grand potential. Starting from density functional theory for a mixture of classical ions and quantum mechanical electrons, an approximate grand potential is developed, with an external field being created by a central nucleus fixed at the origin. Minimization of this grand potential with respect to electron and ion densities is carried out, resulting in equations for effective interaction potentials. A third condition resulting from minimizing the grand potential with respect to the average ion charge determines the noninteracting electron chemical potential. This system is coupled to a system of point ions and electrons with an ion fixed at the origin, and a closed set of equations is obtained. Solution of these equations results in a self-consistent electronic and ionic structure for the plasma as well as the average ionization, which is continuous as a function of temperature and density. Other average atom models are recovered by application of simplifying assumptions.

  1. Improvements in Dynamic GPS Positions Using Track Averaging

    DTIC Science & Technology

    1999-08-01

    The improvement of the Global Positioning System (GPS), Precise Positioning System (PPS) solution under dynamic conditions through averaging is investigated. Static

  2. Evolution of the average steepening factor for nonlinearly propagating waves.

    PubMed

    Muhlestein, Michael B; Gee, Kent L; Neilsen, Tracianne B; Thomas, Derek C

    2015-02-01

    Difficulties arise in attempting to discern the effects of nonlinearity in near-field jet-noise measurements due to the complicated source structure of high-velocity jets. This article describes a measure that may be used to help quantify the effects of nonlinearity on waveform propagation. This measure, called the average steepening factor (ASF), is the ratio of the average positive slope in a time waveform to the average negative slope. The ASF is the inverse of the wave steepening factor defined originally by Gallagher [AIAA Paper No. 82-0416 (1982)]. An analytical description of the ASF evolution is given for benchmark cases: initially sinusoidal plane waves propagating through lossless and thermoviscous media. The effects of finite sampling rates and measurement noise on ASF estimation from measured waveforms are discussed. The evolution of initially broadband Gaussian noise and signals propagating in media with realistic absorption is described using numerical and experimental methods. The ASF is found to be relatively sensitive to measurement noise but relatively robust to limited sampling rates. The ASF is found to increase more slowly for initially Gaussian noise signals than for initially sinusoidal signals of the same level, indicating that the average distortion within noise waveforms occurs more slowly.
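
    Following the definition above, the ASF can be computed directly from a sampled waveform; a sketch (the paper's caveats about sampling rate and measurement noise apply):

      import numpy as np

      def average_steepening_factor(waveform, fs):
          # Mean positive slope divided by the magnitude of the mean negative slope.
          slopes = np.diff(waveform) * fs
          return slopes[slopes > 0].mean() / abs(slopes[slopes < 0].mean())

      fs = 48_000.0
      t = np.arange(0.0, 0.01, 1.0 / fs)
      sine = np.sin(2 * np.pi * 1000.0 * t)
      print(average_steepening_factor(sine, fs))  # ~1 for an undistorted sine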

  3. Robust Representations for Face Recognition: The Power of Averages

    ERIC Educational Resources Information Center

    Burton, A. Mike; Jenkins, Rob; Hancock, Peter J. B.; White, David

    2005-01-01

    We are able to recognise familiar faces easily across large variations in image quality, though our ability to match unfamiliar faces is strikingly poor. Here we ask how the representation of a face changes as we become familiar with it. We use a simple image-averaging technique to derive abstract representations of known faces. Using Principal…

  4. Average Strength Parameters of Reactivated Mudstone Landslide for Countermeasure Works

    NASA Astrophysics Data System (ADS)

    Nakamura, Shinya; Kimura, Sho; Buddhi Vithana, Shriwantha

    2015-04-01

    Among the many approaches to landslide stability analysis, several landslide-related studies have used shear strength parameters obtained from laboratory shear tests with the limit equilibrium method. Most of them concluded that the average strength parameters, i.e. the average cohesion (c'avg) and the average angle of shearing resistance (φ'avg), calculated from back analysis were in agreement with the residual shear strength parameters measured by torsional ring-shear tests on undisturbed and remolded samples. However, disagreement with this contention can be found elsewhere: residual shear strengths measured using a torsional ring-shear apparatus were found to be lower than the average strengths calculated by back analysis. One reason why the singular application of residual shear strength in stability analysis causes an underestimation of the safety factor is that the condition of the slip surface of a landslide can be heterogeneous. It may consist of portions that have already reached residual conditions along with other portions that have not. With a view to accommodating such possible differences in the slip surface conditions of a landslide, it is worthwhile to first grasp the heterogeneous nature of the actual slip surface to ensure a more suitable selection of measured shear strength values for the stability calculation of landslides. In the present study, a procedure for determining the average strength parameters acting along the slip surface is presented through stability calculations of reactivated landslides in the Shimajiri-mudstone area of Okinawa, Japan. The average strength parameters along the slip surfaces of landslides have been estimated using the results of laboratory shear tests of the slip surface/zone soils, accompanied by a rational way of assessing the actual, heterogeneous slip surface conditions. The results tend to show that the shear strength acting along the

  5. The Average Quality Factors by TEPC for Charged Particles

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Nikjoo, Hooshang; Cucinotta, Francis A.

    2004-01-01

    The quality factor used in radiation protection is defined as a function of LET, Q(sub ave)(LET). However, tissue equivalent proportional counters (TEPC) measure the average quality factor as a function of lineal energy (y), Q(sub ave)(y). A model of the TEPC response for charged particles considers energy deposition as a function of impact parameter from the ion's path to the volume, and describes the escape of energy out of the sensitive volume by delta rays and the entry of delta rays from the high-density wall into the low-density gas volume. A common goal for operational detectors is to measure the average radiation quality to within an accuracy of 25%. Using our TEPC response model and the NASA space radiation transport model, we show that this accuracy is obtained by a properly calibrated TEPC. However, when the individual contributions from trapped protons and galactic cosmic rays (GCR) are considered, the average quality factor obtained by TEPC is overestimated for trapped protons and underestimated for GCR by about 30%, i.e., a compensating error. Using TEPC's values for trapped protons for Q(sub ave)(y), we obtained average quality factors in the 2.07-2.32 range. However, Q(sub ave)(LET) ranges from 1.5-1.65 as spacecraft shielding depth increases. The average quality factors for trapped protons on STS-89 demonstrate that the model of the TEPC response is in good agreement with flight TEPC data for Q(sub ave)(y), and thus Q(sub ave)(LET) for trapped protons is overestimated by TEPC. Preliminary comparisons for the complete GCR spectra show that Q(sub ave)(LET) for GCR is approximately 3.2-4.1, while TEPC measures 2.9-3.4 for Q(sub ave)(y), indicating that Q(sub ave)(LET) for GCR is underestimated by TEPC.

  6. High average power diode pumped solid state lasers for CALIOPE

    SciTech Connect

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

    Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers.

  7. Averaged universe confronted with cosmological observations: A fully covariant approach

    NASA Astrophysics Data System (ADS)

    Wijenayake, Tharake; Lin, Weikang; Ishak, Mustapha

    2016-10-01

    One of the outstanding problems in general relativistic cosmology is that of averaging, that is, how the lumpy universe that we observe at small scales averages out to a smooth Friedmann-Lemaître-Robertson-Walker (FLRW) model. The root of the problem is that averaging does not commute with the Einstein equations that govern the dynamics of the model. This leads to the well-known question of backreaction in cosmology. In this work, we approach the problem using the covariant framework of macroscopic gravity. We use its cosmological solution with a flat FLRW macroscopic background where the result of averaging cosmic inhomogeneities has been encapsulated into a backreaction density parameter denoted ΩA. We constrain this averaged universe using available cosmological data sets of expansion and growth including, for the first time, a full cosmic microwave background analysis from Planck temperature anisotropy and polarization data, the supernova data from Union 2.1, the galaxy power spectrum from WiggleZ, the weak lensing tomography shear-shear cross correlations from the CFHTLenS survey, and the baryonic acoustic oscillation data from 6Df, SDSS DR7, and BOSS DR9. We find that -0.0155 ≤ ΩA ≤ 0 (at the 68% C.L.), thus providing a tight upper bound on the backreaction term. We also find that the term is strongly correlated with cosmological parameters, such as ΩΛ, σ8, and H0. While small, a backreaction density parameter of a few percent should be kept in consideration along with other systematics for precision cosmology.

  8. Condition monitoring of gearboxes using synchronously averaged electric motor signals

    NASA Astrophysics Data System (ADS)

    Ottewill, J. R.; Orkisz, M.

    2013-07-01

    Due to their prevalence in rotating machinery, the condition monitoring of gearboxes is extremely important in the minimization of potentially dangerous and expensive failures. Traditionally, gearbox condition monitoring has been conducted using measurements obtained from casing-mounted vibration transducers such as accelerometers. A well-established technique for analyzing such signals is the synchronous signal average, where vibration signals are synchronized to a measured angular position and then averaged from rotation to rotation. Driven, in part, by improvements in control methodologies based upon methods of estimating rotor speed and torque, induction machines are used increasingly in industry to drive rotating machinery. As a result, attempts have been made to diagnose defects using measured terminal currents and voltages. In this paper, the application of the synchronous signal averaging methodology to electric drive signals, by synchronizing stator current signals with a shaft position estimated from current and voltage measurements is proposed. Initially, a test-rig is introduced based on an induction motor driving a two-stage reduction gearbox which is loaded by a DC motor. It is shown that a defect seeded into the gearbox may be located using signals acquired from casing-mounted accelerometers and shaft mounted encoders. Using simple models of an induction motor and a gearbox, it is shown that it should be possible to observe gearbox defects in the measured stator current signal. A robust method of extracting the average speed of a machine from the current frequency spectrum, based on the location of sidebands of the power supply frequency due to rotor eccentricity, is presented. The synchronous signal averaging method is applied to the resulting estimations of rotor position and torsional vibration. Experimental results show that the method is extremely adept at locating gear tooth defects. Further results, considering different loads and different
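
    The synchronous-averaging step itself can be sketched as follows (inputs are assumed arrays of samples, sample times, and once-per-revolution trigger times; the paper's current- and voltage-based position estimate is not reproduced):

      import numpy as np

      def synchronous_average(signal, t, rev_times, samples_per_rev=256):
          # Resample each revolution onto a common phase grid, then average.
          revs = []
          for t0, t1 in zip(rev_times[:-1], rev_times[1:]):
              phase_t = np.linspace(t0, t1, samples_per_rev, endpoint=False)
              revs.append(np.interp(phase_t, t, signal))
          return np.mean(revs, axis=0)

      # Usage with a synthetic once-per-revolution (20 Hz) signal buried in noise.
      t = np.linspace(0.0, 1.0, 10000)
      sig = np.sin(2 * np.pi * 20 * t) + 0.5 * np.random.default_rng(5).standard_normal(t.size)
      avg = synchronous_average(sig, t, np.arange(0.0, 1.0 + 1e-9, 0.05))
      print(avg.shape)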

  9. High-average-power diode-pumped Yb: YAG lasers

    SciTech Connect

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-10-01

    A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high average power Yb:YAG lasers that utilize a rod configured gain element. Previously, this rod configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High beam quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual rod configuration consisting of two, 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) Hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes. (2) Compound laser rods with flanged-nonabsorbing-endcaps fabricated by diffusion bonding. (3) Techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods.

  10. Experimental Investigation of the Differences Between Reynolds-Averaged and Favre-Averaged Velocity in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Panda, J.; Seasholtz, R. G.

    2005-01-01

    Recent advancements in the molecular Rayleigh scattering based technique allowed for simultaneous measurement of velocity and density fluctuations at high sampling rates. The technique was used to investigate unheated high subsonic and supersonic fully expanded free jets in the Mach number range of 0.8 to 1.8. The difference between the Favre averaged and Reynolds averaged axial velocity and axial component of the turbulent kinetic energy is found to be small. Estimates based on Morkovin's "Strong Reynolds Analogy" were found to provide lower values of turbulent density fluctuations than the measured data.
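
    The two averages being compared are easy to state side by side: the Reynolds average is the plain time mean of velocity, while the Favre average is density-weighted, so their difference is the density-velocity covariance divided by the mean density. A minimal Python sketch with synthetic, illustrative signals:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        rho = 1.0 + 0.05 * rng.standard_normal(n)        # density samples
        u = 300.0 + 15.0 * rng.standard_normal(n)        # axial velocity samples
        u += 10.0 * (rho - rho.mean())                   # correlate rho' with u'

        u_reynolds = u.mean()                            # Reynolds average <u>
        u_favre = (rho * u).mean() / rho.mean()          # Favre average <rho u>/<rho>

        # the difference equals <rho'u'>/<rho>; it is small when density
        # fluctuations are weak, consistent with the small differences reported
        print(u_favre - u_reynolds)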

  11. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... or Before August 30, 1999 Model Rule-Continuous Emission Monitoring § 60.1755 How do I convert my 1.... If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in appendix A of this part, section 5.4, to determine the daily geometric average percent reduction...

  12. Differences in concentration lengths computed using band-averaged mass extinction coefficients and band-averaged transmittance

    NASA Astrophysics Data System (ADS)

    Farmer, W. Michael

    1990-09-01

    An understanding of how broad-band transmittance is affected by the atmosphere is crucial to accurately predicting how broad-band sensors such as FLIRs will perform. This is particularly true for sensors required to function in an environment where countermeasures such as smokes/obscurants have been used to limit sensor performance. A common method of estimating the attenuation capabilities of smokes/obscurants released in the atmosphere to defeat broad-band sensors is to use a band-averaged extinction coefficient with concentration length values in the Beer-Bouguer transmission law. This approach ignores the effects of source spectra, sensor response, and normal atmospheric attenuation, and can lead to results for band averages of the relative transmittance that are significantly different from those obtained using the source spectra, sensor response, and normal atmospheric transmission. In this paper we discuss the differences that occur in predicting relative transmittance as a function of concentration length using band-averaged mass extinction coefficients or computing the band-averaged transmittance as a function of source spectra. Two examples are provided to illustrate the differences in results. The first example is applicable to 8- to 14-µm band transmission through natural fogs. The second example considers 3- to 5-µm transmission through phosphorus smoke produced at 17% and 90% relative humidity. The results show major differences in the prediction of concentration length values by the two methods when the relative transmittance falls below about 20%.
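
    The distinction drawn here can be reproduced with a toy calculation: applying the Beer-Bouguer law to a band-averaged extinction coefficient is not the same as band-averaging the spectral transmittance weighted by source spectrum and sensor response. The spectral shapes below are invented purely for illustration.

        import numpy as np

        wl = np.linspace(8.0, 14.0, 601)              # wavelength grid (micrometers)
        alpha = 0.5 + 0.4 * np.sin(wl)                # spectral mass extinction (illustrative)
        weight = np.ones_like(wl)                     # source spectrum x sensor response

        cl = np.linspace(0, 10, 101)                  # concentration length values

        # method 1: band-average the extinction coefficient, then apply Beer-Bouguer
        alpha_bar = np.trapz(alpha * weight, wl) / np.trapz(weight, wl)
        T_coeff = np.exp(-alpha_bar * cl)

        # method 2: band-average the spectral transmittance itself
        T_band = np.array([np.trapz(np.exp(-alpha * c) * weight, wl) for c in cl])
        T_band /= np.trapz(weight, wl)

        # the exponential of an average is not the average of exponentials, so the
        # two curves separate as transmittance drops, as the paper reports below ~20%
        print(T_coeff[-1], T_band[-1])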

  13. Aperture averaging effects on the average spectral efficiency of FSO links over turbulence channel with pointing errors

    NASA Astrophysics Data System (ADS)

    Aarthi, G.; Prabu, K.; Reddy, G. Ramachandra

    2017-02-01

    The average spectral efficiency (ASE) is investigated for free space optical (FSO) communications employing On-Off keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems with and without pointing errors over Gamma-Gamma (GG) channels. Additionally, the impact of aperture averaging on the ASE is explored. The influence of different turbulence conditions along with varying receiver aperture has been studied and analyzed. For the considered system, the exact average channel capacity (ACC) expressions are derived using the Meijer G function. Results reveal that when pointing errors are introduced, there is a significant reduction in the ASE performance. The ASE can be enhanced by increasing the receiver aperture across the various turbulence regimes and by reducing the beam radius in the presence of pointing errors, but the rate of increase of the ASE falls off at larger diameters and finally saturates. The coherent OWC system provides the best ASE performance, achieving 49 bits/s/Hz without pointing errors and 34 bits/s/Hz with pointing errors under strong turbulence, at an average transmitted optical power of 5 dBm and an aperture diameter of 10 cm.

  14. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... part, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions... potential sulfur dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use EPA... SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste Combustion Units Constructed...

  15. Estimates of Random Error in Satellite Rainfall Averages

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.

    2003-01-01

    Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.

  16. Multiple-level defect species evaluation from average carrier decay

    NASA Astrophysics Data System (ADS)

    Debuf, Didier

    2003-10-01

    An expression for the average decay is determined by solving the carrier continuity equations, which include terms for multiple defect recombination. This expression is the decay measured by techniques such as the contactless photoconductance decay method, which determines the average or volume integrated decay. Implicit in the above is the requirement for good surface passivation such that only bulk properties are observed. A proposed experimental configuration is given to achieve the intended goal of an assessment of the type of defect in an n-type Czochralski-grown silicon semiconductor with an unusually high relative lifetime. The high lifetime is explained in terms of a ground excited state multiple-level defect system. Also, minority carrier trapping is investigated.

  17. Averaged model for momentum and dispersion in hierarchical porous media.

    PubMed

    Chabanon, Morgan; David, Bertrand; Goyeau, Benoît

    2015-08-01

    Hierarchical porous media are multiscale systems, where different characteristic pore sizes and structures are encountered at each scale. Focusing the analysis on three pore scales, an upscaling procedure based on the volume-averaging method is applied twice, in order to obtain a macroscopic model for momentum and diffusion-dispersion. The effective transport properties at the macroscopic scale (permeability and dispersion tensors) are found to be explicitly dependent on the mesoscopic ones. Closure problems associated with these averaged properties are numerically solved at the different scales for two types of bidisperse porous media. Results show a strong influence of the lower-scale porous structures and flow intensity on the macroscopic effective transport properties.

  18. Ampere Average Current Photoinjector and Energy Recovery Linac

    SciTech Connect

    Ilan Ben-Zvi; A. Burrill; R. Calaga; P. Cameron; X. Chang; D. Gassner; H. Hahn; A. Hershcovitch; H.C. Hseuh; P. Johnson; D. Kayran; J. Kewisch; R. Lambiase; Vladimir N. Litvinenko; G. McIntyre; A. Nicoletti; J. Rank; T. Roser; J. Scaduto; K. Smith; T. Srinivasan-Rao; K.-C. Wu; A. Zaltsman; Y. Zhao; H. Bluem; A. Burger; Mike Cole; A. Favale; D. Holmes; John Rathke; Tom Schultheiss; A. Todd; J. Delayen; W. Funk; L. Phillips; Joe Preble

    2004-08-01

    High-power Free-Electron Lasers were made possible by advances in superconducting linacs operated in an energy-recovery mode, as demonstrated by the spectacular success of the Jefferson Laboratory IR-Demo. In order to get to much higher power levels, say a fraction of a megawatt average power, many technological barriers are yet to be broken. BNL's Collider-Accelerator Department is pursuing some of these technologies for a different application, that of electron cooling of high-energy hadron beams. I will describe work on CW, high-current and high-brightness electron beams. This will include a description of a superconducting, laser-photocathode RF gun employing a new secondary-emission multiplying cathode and an accelerator cavity, both capable of producing on the order of one ampere of average current.

  19. Pulsar average waveforms and hollow cone beam models

    NASA Technical Reports Server (NTRS)

    Backer, D. C.

    1975-01-01

    An analysis of pulsar average waveforms at radio frequencies from 40 MHz to 15 GHz is presented. The analysis is based on the hypothesis that the observer sees one cut of a hollow-cone beam pattern and that stationary properties of the emission vary over the cone. The distributions of apparent cone widths for different observed forms of the average pulse profiles (single, double/unresolved, double/resolved, triple and multiple) are in modest agreement with a model of a circular hollow-cone beam with random observer-spin axis orientation, a random cone axis-spin axis alignment, and a small range of physical hollow-cone parameters for all objects.

  20. More Voodoo correlations: when average-based measures inflate correlations.

    PubMed

    Brand, Andrew; Bradley, Michael T

    2012-01-01

    A Monte-Carlo simulation was conducted to assess the extent to which a correlation estimate can be inflated when an average-based measure is used in a commonly employed correlational design. The results from the simulation reveal that the inflation of the correlation estimate can be substantial, up to 76%. Additionally, data were re-analyzed from two previously published studies to determine the extent to which the correlation estimate was inflated due to the use of an average-based measure. The re-analyses reveal that correlation estimates had been inflated by just over 50% in both studies. Although these findings are disconcerting, we are somewhat comforted by the fact that there is a simple and easy analysis that can be employed to prevent the inflation of the correlation estimate that we have simulated and observed.
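
    The mechanism can be demonstrated with a small simulation in the spirit of, though not identical to, the paper's Monte-Carlo design: averaging repeated noisy measurements suppresses trial noise, so a correlation computed against the averaged measure exceeds the one computed against a single measurement of the same quantity.

        import numpy as np

        rng = np.random.default_rng(1)
        n_subjects, n_trials = 100, 50

        trait = rng.standard_normal(n_subjects)                  # shared latent trait
        x = trait + rng.standard_normal(n_subjects)              # single noisy measure
        y = trait[:, None] + 3.0 * rng.standard_normal((n_subjects, n_trials))

        r_single = np.corrcoef(x, y[:, 0])[0, 1]                 # one trial of y
        r_average = np.corrcoef(x, y.mean(axis=1))[0, 1]         # trial-averaged y

        # the average-based estimate is systematically larger
        print(f"single-trial r = {r_single:.2f}, average-based r = {r_average:.2f}")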

  1. Laser Diode Cooling For High Average Power Applications

    NASA Astrophysics Data System (ADS)

    Mundinger, David C.; Beach, Raymond J.; Benett, William J.; Solarz, Richard W.; Sperry, Verry

    1989-06-01

    Many applications for semiconductor lasers that require high average power are limited by the inability to remove the waste heat generated by the diode lasers. In order to reduce the cost and complexity of these applications a heat sink package has been developed which is based on water cooled silicon microstructures. Thermal resistivities of less than 0.025 °C/(W/cm²) have been measured, which should be adequate for up to CW operation of diode laser arrays. This concept can easily be scaled to large areas and is ideal for high average power solid state laser pumping. Several packages which illustrate the essential features of this design have been fabricated and tested. The theory of operation will be briefly covered, and several conceptual designs will be described. Also the fabrication and assembly procedures and measured levels of performance will be discussed.

  2. Averaged variational principle for autoresonant Bernstein-Greene-Kruskal modes

    SciTech Connect

    Khain, P.; Friedland, L.

    2010-10-15

    Whitham's averaged variational principle is applied in studying the dynamics of formation of autoresonant (continuously phase-locked) Bernstein-Greene-Kruskal (BGK) modes in a plasma driven by a chirped frequency ponderomotive wave. A flat-top electron velocity distribution is used as a model allowing a variational formulation within the water bag theory. The corresponding Lagrangian, averaged over the fast phase variable, yields evolution equations for the slow field variables, allows a uniform description of all stages of excitation of driven-chirped BGK modes, and predicts modulational stability of these nonlinear phase-space structures. Numerical solutions of the system of slow variational equations are in good agreement with Vlasov-Poisson simulations.

  3. Robust myelin water quantification: averaging vs. spatial filtering.

    PubMed

    Jones, Craig K; Whittall, Kenneth P; MacKay, Alex L

    2003-07-01

    The myelin water fraction is calculated, voxel-by-voxel, by fitting decay curves from a multi-echo data acquisition. Curve-fitting algorithms require a high signal-to-noise ratio to separate T(2) components in the T(2) distribution. This work compared the effect of averaging during acquisition with that of postprocessing the data with a noise reduction filter. Forty regions, from five volunteers, were analyzed. A consistent decrease in the myelin water fraction variability with no bias in the mean was found for all 40 regions. Images of the myelin water fraction of white matter were more contiguous and had fewer "holes" than images of myelin water fractions from unfiltered echoes. Spatial filtering was effective for decreasing the variability in myelin water fraction calculated from 4-average multi-echo data.

  4. Thermal effects in high average power optical parametric amplifiers.

    PubMed

    Rothhardt, Jan; Demmler, Stefan; Hädrich, Steffen; Peschel, Thomas; Limpert, Jens; Tünnermann, Andreas

    2013-03-01

    Optical parametric amplifiers (OPAs) have the reputation of being average power scalable due to the instantaneous nature of the parametric process (zero quantum defect). This Letter reveals serious challenges originating from thermal load in the nonlinear crystal caused by absorption. We investigate these thermal effects in high average power OPAs based on beta barium borate. Absorption of both pump and idler waves is identified to contribute significantly to heating of the nonlinear crystal. A temperature increase of up to 148 K with respect to the environment is observed and mechanical tensile stress up to 40 MPa is found, indicating a high risk of crystal fracture under such conditions. By restricting the idler to a wavelength range far from absorption bands and removing the crystal coating we reduce the peak temperature and the resulting temperature gradient significantly. Guidelines for further power scaling of OPAs and other nonlinear devices are given.

  5. Microchannel heatsinks for high average power laser diode arrays

    SciTech Connect

    Beach, R.; Benett, B.; Freitas, B.; Ciarlo, D.; Sperry, V.; Comaskey, B.; Emanuel, M.; Solarz, R.; Mundinger, D.

    1992-01-01

    Detailed performance results and fabrication techniques for an efficient and low thermal impedance laser diode array heatsink are presented. High duty factor or even CW operation of fully filled laser diode arrays is enabled at high average power. Low thermal impedance is achieved using a liquid coolant and laminar flow through microchannels. The microchannels are fabricated in silicon using a photolithographic pattern definition procedure followed by anisotropic chemical etching. A modular rack-and-stack architecture is adopted for the heatsink design allowing arbitrarily large two-dimensional arrays to be fabricated and easily maintained. The excellent thermal control of the microchannel cooled heatsinks is ideally suited to pump array requirements for high average power crystalline lasers because of the stringent temperature demands that result from coupling the diode light to several nanometers wide absorption features characteristic of lasing ions in crystals.

  6. Microchannel cooled heatsinks for high average power laser diode arrays

    SciTech Connect

    Bennett, W.J.; Freitas, B.L.; Ciarlo, D.; Beach, R.; Sutton, S.; Emanuel, M.; Solarz, R.

    1993-01-15

    Detailed performance results for an efficient and low impedance laser diode array heatsink are presented. High duty factor and even cw operation of fully filled laser diode arrays at high stacking densities are enabled at high average power. Low thermal impedance is achieved using a liquid coolant and laminar flow through microchannels. The microchannels are fabricated in silicon using an anisotropic chemical etching process. A modular rack-and-stack architecture is adopted for heatsink design, allowing arbitrarily large two-dimensional arrays to be fabricated and easily maintained. The excellent thermal control of the microchannel heatsinks is ideally suited to pump array requirements for high average power crystalline lasers because of the stringent temperature demands that must be met to efficiently couple diode light to the several-nanometer-wide absorption features characteristic of lasing ions in crystals.

  7. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-01-01

    We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  8. Removing Cardiac Artefacts in Magnetoencephalography with Resampled Moving Average Subtraction

    PubMed Central

    Ahlfors, Seppo P.; Hinrichs, Hermann

    2016-01-01

    Magnetoencephalography (MEG) signals are commonly contaminated by cardiac artefacts (CAs). Principal component analysis and independent component analysis have been widely used for removing CAs, but they typically require a complex procedure for the identification of CA-related components. We propose a simple and efficient method, resampled moving average subtraction (RMAS), to remove CAs from MEG data. Based on an electrocardiogram (ECG) channel, a template for each cardiac cycle was estimated by a weighted average of epochs of MEG data over consecutive cardiac cycles, combined with a resampling technique for accurate alignment of the time waveforms. The template was subtracted from the corresponding epoch of the MEG data. The resampling reduced distortions due to asynchrony between the cardiac cycle and the MEG sampling times. The RMAS method successfully suppressed CAs while preserving both event-related responses and high-frequency (>45 Hz) components in the MEG data. PMID:27503196
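
    A simplified sketch of the subtraction step, assuming the R-peak sample indices have already been detected from the ECG channel. The published method additionally resamples each epoch to align waveforms and uses a weighted average over consecutive cycles; this plain-average version omits both refinements.

        import numpy as np

        def moving_average_subtraction(meg, beat_idx, pre=100, post=400):
            """Remove a cardiac artefact template from a 1-D MEG channel.
            beat_idx holds the sample indices of the ECG R-peaks."""
            ok = [i for i in beat_idx if i - pre >= 0 and i + post <= meg.size]
            epochs = np.array([meg[i - pre:i + post] for i in ok])
            template = epochs.mean(axis=0)           # average cardiac cycle
            cleaned = meg.copy()
            for i in ok:
                cleaned[i - pre:i + post] -= template
            return cleaned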

  9. The B-dot Earth Average Magnetic Field

    NASA Technical Reports Server (NTRS)

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

    The average Earth's magnetic field is solved with complex mathematical models based on a mean square integral. Depending on the selection of the Earth magnetic model, the average Earth's magnetic field can have different solutions. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and does not depend on the Earth magnetic model; it does, however, depend on the magnetic torquers of the satellite, which are not taken into consideration in the known mathematical models. Also, the solution of this new technique can be implemented so easily that the flight software can be updated during flight, and the control system can have current gains for the magnetic torquers. Finally, this technique is verified and validated using flight data from a satellite that has been in orbit for three years.

  10. Averaging of nuclear modulation artefacts in RIDME experiments

    NASA Astrophysics Data System (ADS)

    Keller, Katharina; Doll, Andrin; Qi, Mian; Godt, Adelheid; Jeschke, Gunnar; Yulikov, Maxim

    2016-11-01

    The presence of artefacts due to Electron Spin Echo Envelope Modulation (ESEEM) complicates the analysis of dipolar evolution data in Relaxation Induced Dipolar Modulation Enhancement (RIDME) experiments. Here we demonstrate that averaging over the two delay times in the refocused RIDME experiment allows for nearly quantitative removal of the ESEEM artefacts, resulting in potentially much better performance than the methods used so far. The analytical equations are presented and analyzed for the case of electron and nuclear spins S = 1/2, I = 1/2. The presented analysis is also relevant for Double Electron Electron Resonance (DEER) and Chirp-Induced Dipolar Modulation Enhancement (CIDME) techniques. The applicability of the ESEEM averaging approach is demonstrated on a Gd(III)-Gd(III) rigid ruler compound in deuterated frozen solution at Q band (35 GHz).

  11. Correct averaging in transmission radiography: Analysis of the inverse problem

    NASA Astrophysics Data System (ADS)

    Wagner, Michael; Hampel, Uwe; Bieberle, Martina

    2016-05-01

    Transmission radiometry is frequently used in industrial measurement processes as a means to assess the thickness or composition of a material. A common problem encountered in such applications is the so-called dynamic bias error, which results from averaging beam intensities over time while the material distribution changes. We recently reported on a method to overcome the associated measurement error by solving an inverse problem, which in principle restores the exact average attenuation by considering the Poisson statistics of the underlying particle or photon emission process. In this paper we present a detailed analysis of the inverse problem and its optimal regularized numerical solution. As a result we derive an optimal parameter configuration for the inverse problem.

  12. On representation formulas for long run averaging optimal control problem

    NASA Astrophysics Data System (ADS)

    Buckdahn, R.; Quincampoix, M.; Renault, J.

    2015-12-01

    We investigate an optimal control problem with an averaging cost. The asymptotic behaviour of the values is a classical problem in ergodic control. To study the long run averaging we consider both Cesàro and Abel means. A main result of the paper is that there is at most one possible accumulation point - in the uniform convergence topology - of the values, when the time horizon of the Cesàro means converges to infinity or the discount factor of the Abel means converges to zero. This unique accumulation point is explicitly described by representation formulas involving probability measures on the state and control spaces. As a byproduct we obtain the existence of a limit value whenever the Cesàro or Abel values are equicontinuous. Our approach allows us to generalise several results in ergodic control, and in particular it allows us to cope with cases where the limit value is not constant with respect to the initial condition.

  13. Scaling registration of multiview range scans via motion averaging

    NASA Astrophysics Data System (ADS)

    Zhu, Jihua; Zhu, Li; Jiang, Zutao; Li, Zhongyu; Li, Chen; Zhang, Fan

    2016-07-01

    Three-dimensional modeling of a scene or object requires registration of multiple range scans, which are obtained by a range sensor from different viewpoints. An approach is proposed for scaling registration of multiview range scans via motion averaging. First, it presents a method to estimate the overlap percentages of all scan pairs involved in multiview registration. Then, a variant of the iterative closest point algorithm is presented to calculate relative motions (scaling transformations) for the scan pairs that have high overlap percentages. Subsequently, the proposed motion averaging algorithm transforms these relative motions into global motions of multiview registration. In addition, it introduces parallel computation to increase the efficiency of multiview registration. Furthermore, it presents an error criterion for accuracy evaluation of multiview registration results, which makes it easy to compare the results of different multiview registration approaches. Experimental results carried out with publicly available datasets demonstrate its superiority over related approaches.

  14. Improved MCMAC with momentum, neighborhood, and averaged trapezoidal output.

    PubMed

    Ang, K K; Chai, Q

    2000-01-01

    An improved modified cerebellar articulation controller (MCMAC) neural control algorithm with better learning and recall processes, using momentum, neighborhood learning, and averaged trapezoidal output, is proposed in this paper. The learning and recall processes of MCMAC are investigated using the characteristic surface of MCMAC and the control action exerted in controlling a continuously variable transmission (CVT). Extensive experimental results demonstrate a significant improvement, with reduced training time and an extended range of trained MCMAC cells. The improvement in the recall process using the averaged trapezoidal output (MCMAC-ATO) is contrasted against the original MCMAC using the square of the Pearson product moment correlation coefficient. Experimental results show that the new recall process significantly reduces the fluctuations in the control action of the MCMAC and partially addresses the problem associated with the resolution of the MCMAC memory array.

  15. Modeling an Application's Theoretical Minimum and Average Transactional Response Times

    SciTech Connect

    Paiz, Mary Rose

    2015-04-01

    The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are results of unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
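
    As an illustration of the lower-threshold idea, the sketch below fits a stationary GEV with scipy to synthetic daily minima. Since the GEV describes block maxima, the minima are negated before fitting and the quantile is mapped back; the report's non-stationary GEV terms and the seasonal ARIMA upper-threshold model are beyond this toy example, and all data are invented.

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(3)
        # synthetic daily minimum response times (ms): block minima over 500
        # transactions per day from an arbitrary underlying distribution
        raw = 120 + 15 * rng.weibull(1.5, size=(365, 500))
        daily_min = raw.min(axis=1)

        # fit the GEV to the negated minima, then map a high quantile back to a
        # lower threshold on the original scale
        c, loc, scale = genextreme.fit(-daily_min)
        lower_threshold = -genextreme.ppf(0.99, c, loc=loc, scale=scale)
        print(lower_threshold)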

  16. Non-self-averaging in Ising spin glasses and hyperuniversality

    NASA Astrophysics Data System (ADS)

    Lundow, P. H.; Campbell, I. A.

    2016-01-01

    Ising spin glasses with bimodal and Gaussian near-neighbor interaction distributions are studied through numerical simulations. The non-self-averaging (normalized intersample variance) parameter U22(T,L) for the spin glass susceptibility [and for higher moments Unn(T,L)] is reported for dimensions 2, 3, 4, 5, and 7. In each dimension d the non-self-averaging parameters in the paramagnetic regime vary with the sample size L and the correlation length ξ(T,L) as Unn(β,L) = [Kd ξ(T,L)/L]^d and so follow a renormalization group law due to Aharony and Harris [Phys. Rev. Lett. 77, 3700 (1996), 10.1103/PhysRevLett.77.3700]. Empirically, it is found that the Kd values are independent of d to within the statistics. The maximum values [Unn(T,L)]max are almost independent of L in each dimension, and remarkably the estimated thermodynamic limit critical [Unn(T,L)]max peak values are also practically dimension-independent to within the statistics and so are "hyperuniversal." These results show that the form of the spin-spin correlation function distribution at criticality in the large L limit is independent of dimension within the ISG family. Inspection of published non-self-averaging data for three-dimensional Heisenberg and XY spin glasses in the light of the Ising spin glass non-self-averaging results shows behavior which appears to be compatible with that expected in a chiral-driven ordering interpretation but incompatible with a spin-driven ordering scenario.

  17. Non-self-averaging in Ising spin glasses and hyperuniversality.

    PubMed

    Lundow, P H; Campbell, I A

    2016-01-01

    Ising spin glasses with bimodal and Gaussian near-neighbor interaction distributions are studied through numerical simulations. The non-self-averaging (normalized intersample variance) parameter U_{22}(T,L) for the spin glass susceptibility [and for higher moments U_{nn}(T,L)] is reported for dimensions 2, 3, 4, 5, and 7. In each dimension d the non-self-averaging parameters in the paramagnetic regime vary with the sample size L and the correlation length ξ(T,L) as U_{nn}(β,L)=[K_{d}ξ(T,L)/L]^{d} and so follow a renormalization group law due to Aharony and Harris [Phys. Rev. Lett. 77, 3700 (1996), 10.1103/PhysRevLett.77.3700]. Empirically, it is found that the K_{d} values are independent of d to within the statistics. The maximum values [U_{nn}(T,L)]_{max} are almost independent of L in each dimension, and remarkably the estimated thermodynamic limit critical [U_{nn}(T,L)]_{max} peak values are also practically dimension-independent to within the statistics and so are "hyperuniversal." These results show that the form of the spin-spin correlation function distribution at criticality in the large L limit is independent of dimension within the ISG family. Inspection of published non-self-averaging data for three-dimensional Heisenberg and XY spin glasses in the light of the Ising spin glass non-self-averaging results shows behavior which appears to be compatible with that expected in a chiral-driven ordering interpretation but incompatible with a spin-driven ordering scenario.

  18. Asymptotic Properties of Some Estimators in Moving Average Models

    DTIC Science & Technology

    1975-09-08

    consider a different approach due to Durbin (1959), based on approximating the moving average of order q by an autoregression of order k (k ~ q). This...method shows good statistical properties. The paper by Durbin does not treat in detail the role of k in the parameters of the limiting normal...k) confirming some of the examples presented by Durbin. The parallel analysis with k = k(T) was also attempted, but at this point no complete

  19. Self-averaging in complex brain neuron signals

    NASA Astrophysics Data System (ADS)

    Bershadskii, A.; Dremencov, E.; Fukayama, D.; Yadid, G.

    2002-12-01

    Nonlinear statistical properties of the Ventral Tegmental Area (VTA) of the limbic brain are studied in vivo. The VTA plays a key role in the generation of pleasure and in the development of psychological drug addiction. It is shown that spiking time series of the VTA dopaminergic neurons exhibit long-range correlations with self-averaging behavior. This specific VTA phenomenon has no relation to the VTA rewarding function. The last result reveals the complex role of the VTA in the limbic brain.

  20. Average dynamics of a finite set of coupled phase oscillators

    SciTech Connect

    Dima, Germán C. Mindlin, Gabriel B.

    2014-06-15

    We study the solutions of a dynamical system describing the average activity of an infinitely large set of driven coupled excitable units. We compare their topological organization with that reconstructed from the numerical integration of finite sets. In this way, we present a strategy to establish the pertinence of approximating the dynamics of finite sets of coupled nonlinear units by the dynamics of their infinitely large surrogate.

  1. High average power solid state laser power conditioning system

    SciTech Connect

    Steinkraus, R.F.

    1987-03-03

    The power conditioning system for the High Average Power Laser program at Lawrence Livermore National Laboratory (LLNL) is described. The system has been operational for two years. It is high voltage, high power, fault protected, and solid state. The power conditioning system drives flashlamps that pump solid state lasers. Flashlamps are driven by silicon control rectifier (SCR) switched, resonant charged, (LC) discharge pulse forming networks (PFNs). The system uses fiber optics for control and diagnostics. Energy and thermal diagnostics are monitored by computers.

  2. Light-cone averages in a Swiss-cheese universe

    NASA Astrophysics Data System (ADS)

    Marra, Valerio; Kolb, Edward W.; Matarrese, Sabino

    2008-01-01

    We analyze a toy Swiss-cheese cosmological model to study the averaging problem. In our Swiss-cheese model, the cheese is a spatially flat, matter only, Friedmann-Robertson-Walker solution (i.e., the Einstein-de Sitter model), and the holes are constructed from a Lemaître-Tolman-Bondi solution of Einstein’s equations. We study the propagation of photons in the Swiss-cheese model, and find a phenomenological homogeneous model to describe observables. Following a fitting procedure based on light-cone averages, we find that the expansion scalar is unaffected by the inhomogeneities (i.e., the phenomenological homogeneous model is the cheese model). This is because of the spherical symmetry of the model; it is unclear whether the expansion scalar will be affected by nonspherical voids. However, the light-cone average of the density as a function of redshift is affected by inhomogeneities. The effect arises because, as the universe evolves, a photon spends more and more time in the (large) voids than in the (thin) high-density structures. The phenomenological homogeneous model describing the light-cone average of the density is similar to the ΛCDM concordance model. It is interesting that, although the sole source in the Swiss-cheese model is matter, the phenomenological homogeneous model behaves as if it has a dark-energy component. Finally, we study how the equation of state of the phenomenological homogeneous model depends on the size of the inhomogeneities, and find that the equation-of-state parameters w0 and wa follow a power-law dependence with a scaling exponent equal to unity. That is, the equation of state depends linearly on the distance the photon travels through voids. We conclude that, within our toy model, the holes must have a present size of about 250 Mpc to be able to mimic the concordance model.

  3. The role of the harmonic vector average in motion integration.

    PubMed

    Johnston, Alan; Scarfe, Peter

    2013-01-01

    The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging, in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the IOC direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the HVA.
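
    Assuming the harmonic vector average is the inverse-of-the-mean-of-the-inverses construction suggested by its name (each local velocity inverted through the unit circle, arithmetically averaged, and the mean inverted back), a short sketch shows it recovering the true global velocity from the local normal components of a translating contour:

        import numpy as np

        def harmonic_vector_average(v):
            """HVA of local velocity vectors v (n x 2): invert each vector
            through the unit circle, average, and invert the mean back."""
            inv = v / np.sum(v**2, axis=1, keepdims=True)    # v / |v|^2
            m = inv.mean(axis=0)
            return m / np.sum(m**2)

        # local normal velocity components of a global velocity (2, 0), sampled
        # symmetrically about the motion direction (an unbiased sample)
        theta = np.deg2rad([-60.0, -30.0, 0.0, 30.0, 60.0])
        normals = np.stack([np.cos(theta), np.sin(theta)], axis=1)
        v_local = (normals @ np.array([2.0, 0.0]))[:, None] * normals

        print(harmonic_vector_average(v_local))    # ~ [2, 0], the true velocity
        print(v_local.mean(axis=0))                # plain vector average underestimates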

  4. Hydrophone spatial averaging corrections from 1 to 100 MHz

    NASA Astrophysics Data System (ADS)

    Radulescu, Emil George

    The purpose of this work was to develop and experimentally verify a set of robust and readily applicable spatial averaging models to account for an ultrasonic hydrophone probe's finite aperture in acoustic field measurements in the frequency range 1-100 MHz. Electronically and mechanically focused acoustic sources of different geometries were considered. The geometries included single element circular sources and rectangular shape transducers that were representative of ultrasound imaging arrays used in clinical diagnostic applications. The field distributions of the acoustic sources were predicted and used in the development of the spatial averaging models. The validity of the models was tested using commercially available hydrophone probes having active element diameters ranging from 50 to 1200 µm. The models yielded guidelines which were applicable to both linear and nonlinear wave propagation conditions. By accounting for hydrophones' finite aperture and correcting the recorded pressure-time waveforms, the models allowed the uncertainty associated with determining the key acoustic output parameters such as the Pulse Intensity Integral (PII) and the intensities derived from it to be minimized. In addition, the work offered a correction factor for the safety indicator Mechanical Index (MI) that is required by AIUM/NEMA standards. The novelty of this research stems primarily from the fact that, to the best of the author's knowledge, such a comprehensive set of models and guidelines has not been developed so far. Although different spatial averaging models have already been suggested, they have been limited to circular geometries, linear propagation conditions and conventional, low megahertz medical imaging frequencies, only. Also, the spatial averaging models described here provided the necessary corrections to obtain the true sensitivity versus frequency response during calibration of hydrophone probes up to 100 MHz and allowed for a subsequent development of two novel

  5. Averaging cross section data so we can fit it

    SciTech Connect

    Brown, D.

    2014-10-23

    The 56Fe cross sections we are interested in have a lot of fluctuations. We would like to fit the average of the cross sections with cross sections calculated within EMPIRE. EMPIRE, a Hauser-Feshbach theory based nuclear reaction code, requires cross sections to be smoothed using a Lorentzian profile. The plan is to fit EMPIRE to these cross sections in the fast region (say above 500 keV).
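
    A sketch of the smoothing step described, assuming a simple normalized Lorentzian-weighted running average of pointwise cross-section data; the half-width and the toy data are illustrative and are not EMPIRE inputs.

        import numpy as np

        def lorentzian_smooth(e, sigma, e_out, gamma=50e3):
            """Average a fluctuating cross section sigma(e) with a Lorentzian
            profile of half-width gamma (energies in eV)."""
            out = np.empty_like(e_out)
            for i, e0 in enumerate(e_out):
                w = gamma / ((e - e0)**2 + gamma**2)     # Lorentzian weights
                out[i] = np.trapz(w * sigma, e) / np.trapz(w, e)
            return out

        # toy fluctuating cross section in the fast region (above 500 keV)
        e = np.linspace(5e5, 2e6, 20_000)
        sigma = 2.0 + 0.5 * np.sin(e / 1e3) + 0.2 * np.random.randn(e.size)
        smooth = lorentzian_smooth(e, sigma, np.linspace(6e5, 1.9e6, 50))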

  6. Effects of velocity averaging on the shapes of absorption lines

    NASA Technical Reports Server (NTRS)

    Pickett, H. M.

    1980-01-01

    The velocity averaging of collision cross sections produces non-Lorentz line shapes, even at densities where Doppler broadening is not apparent. The magnitude of the effects will be described using a model in which the collision broadening depends on a simple velocity power law. The effect of the modified profile on experimental measures of linewidth, shift and amplitude will be examined and an improved approximate line shape will be derived.

  7. Characterizing individual painDETECT symptoms by average pain severity

    PubMed Central

    Sadosky, Alesia; Koduru, Vijaya; Bienen, E Jay; Cappelleri, Joseph C

    2016-01-01

    Background painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results A probability >50% for a better outcome (less severe pain) was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness for mild vs moderate pain) and highest probability was 76.4% (on cold/heat for mild vs severe pain). The pain radiation item was significant (P<0.05) and consistent with pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ. Conclusion painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner. Pain-severity levels can serve as proxies to determine treatment effects, thus indicating probabilities for more favorable outcomes on pain symptoms. PMID:27555789

  8. Fundamental techniques for resolution enhancement of average subsampled images

    NASA Astrophysics Data System (ADS)

    Shen, Day-Fann; Chiu, Chui-Wen

    2012-07-01

    Although single image resolution enhancement, otherwise known as super-resolution, is widely regarded as an ill-posed inverse problem, we re-examine the fundamental relationship between a high-resolution (HR) image acquisition module and its low-resolution (LR) counterpart. Analysis shows that partial HR information is attenuated, but still exists, in its LR version produced through the fundamental averaging-and-subsampling process. As a result, we propose a modified Laplacian filter (MLF) and an intensity correction process (ICP) as the pre- and post-process, respectively, used with an interpolation algorithm to partially restore the attenuated information in a super-resolution (SR) enhanced image. Experiments show that the proposed MLF and ICP provide significant and consistent quality improvements on all 10 test images with three well-known interpolation methods, including bilinear, bi-cubic, and the SR graphical user interface program provided by Ecole Polytechnique Federale de Lausanne. The proposed MLF and ICP are simple in implementation and generally applicable to all average-subsampled LR images. MLF and ICP, separately or together, can be integrated into most interpolation methods that attempt to restore the original HR contents. Finally, the idea of MLF and ICP can also be applied to average-subsampled one-dimensional signals.

  9. Noise reduction of video imagery through simple averaging

    NASA Astrophysics Data System (ADS)

    Vorder Bruegge, Richard W.

    1999-02-01

    Examiners in the Special Photographic Unit of the Federal Bureau of Investigation Laboratory Division conduct examinations of questioned photographic evidence of all types, including surveillance imagery recorded on film and video tape. A primary type of examination includes side-by- side comparisons, in which unknown objects or people depicted in the questioned images are compared with known objects recovered from suspects or with photographs of suspects themselves. Most imagery received in the SPU for such comparisons originate from time-lapse video or film systems. In such circumstances, the delay between sequential images is so great that standard image summing and/or averaging techniques are useless as a means of improving image detail in questioned subjects or objects without also resorting to processing-intensive pattern reconstruction algorithms. Occasionally, however, the receipt of real-time video imagery will include a questioned object at rest. In such cases, it is possible to use relatively simple image averaging techniques as a means of reducing transient noise in the images, without further compromising the already-poor resolution inherent in most video surveillance images. This paper presents an example of one such case in which multiple images were averaged to reduce the transient noise to a sufficient degree to permit the positive identification of a vehicle based upon the presence of scrape marks and dents on the side of the vehicle.
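
    The arithmetic behind the technique is simple frame averaging: for co-registered frames of a static object, zero-mean transient noise shrinks roughly as one over the square root of the number of frames, while static detail such as dents and scrape marks is preserved. A toy sketch with synthetic frames:

        import numpy as np

        rng = np.random.default_rng(0)
        scene = rng.uniform(0, 255, size=(240, 320))                 # static scene content
        frames = scene + 25 * rng.standard_normal((16, 240, 320))    # 16 noisy frames

        denoised = frames.mean(axis=0)                               # simple averaging

        # mean absolute error drops roughly by 1/sqrt(16), i.e. a factor of ~4
        print(np.abs(frames[0] - scene).mean(), np.abs(denoised - scene).mean())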

  10. Variations in Nimbus-7 cloud estimates. Part I: Zonal averages

    SciTech Connect

    Weare, B.C. )

    1992-12-01

    Zonal averages of low, middle, high, and total cloud amount estimates derived from measurements from Nimbus-7 have been analyzed for the six-year period April 1979 through March 1985. The globally and zonally averaged values of six-year annual means and standard deviations of total cloud amount and a proxy of cloud-top height are illustrated. Separate means for day and night and land and sea are also shown. The globally averaged value of intra-annual variability of total cloud amount is greater than 7%, and that for cloud height is greater than 0.3 km. Those of interannual variability are more than one-third of these values. Important latitudinal differences in variability are illustrated. The dominant empirical orthogonal analyses of the intra-annual variations of total cloud amount and heights show strong annual cycles, indicating that in the tropics increases in total cloud amount of up to about 30% are often accompanied by increases in cloud height of up to 1.2 km. This positive link is also evident in the dominant empirical orthogonal function of interannual variations of a total cloud/cloud height complex. This function shows a large coherent variation in total cloud cover of about 10% coupled with changes in cloud height of about 1.1 km associated with the 1982-83 El Niño-Southern Oscillation event. 14 refs., 12 figs., 2 tabs.

  11. Local and average behaviour in inhomogeneous superdiffusive media

    NASA Astrophysics Data System (ADS)

    Vezzani, Alessandro; Burioni, Raffaella; Caniparoli, Luca; Lepri, Stefano

    2011-05-01

    We consider a random walk on one-dimensional inhomogeneous graphs built from Cantor fractals. Our study is motivated by recent experiments that demonstrated superdiffusion of light in complex disordered materials, thereby termed Lévy glasses. We introduce a geometric parameter α which plays a role analogous to the exponent characterising the step length distribution in random systems. We study the large-time behaviour of both local and average observables; for the latter case, we distinguish two different types of averages, respectively over the set of all initial sites and over the scattering sites only. The "single long-jump approximation" is applied to analytically determine the different asymptotic behaviours as a function of α and to understand their origin. We also discuss the possibility that the root of the mean square displacement and the characteristic length of the walker distribution may grow according to different power laws; this anomalous behaviour is typical of processes characterised by Lévy statistics and here, in particular, it is shown to influence average quantities.

  12. Design Principles for a Compact High Average Power IR FEL

    SciTech Connect

    Lia Merminga; Steve Benson

    2001-08-01

    Progress in superconducting rf (srf) technology has led to dramatic changes in cryogenic losses, cavity gradients, and microphonic levels. Design principles for a compact high average power Energy Recovery FEL at IR wavelengths, consistent with the state of the art in srf, are outlined. High accelerating gradients, of order 20 MV/m at Q₀ ≈ 1×10¹⁰, possible at rf frequencies of 1300 MHz and 1500 MHz, allow for a single-cryomodule linac with minimum cryogenic losses. Filling every rf bucket, at these high frequencies, results in high average current at relatively low charge per bunch, thereby greatly ameliorating all single bunch phenomena, such as wakefields and coherent synchrotron radiation. These principles are applied to derive self-consistent sets of parameters for 100 kW and 1 MW average power IR FELs and are compared with low frequency solutions. This work was supported by U.S. DOE Contract No. DE-AC05-84ER40150, the Commonwealth of Virginia and the Laser Processing Consortium.

  13. On the average uncertainty for systems with nonlinear coupling

    NASA Astrophysics Data System (ADS)

    Nelson, Kenric P.; Umarov, Sabir R.; Kon, Mark A.

    2017-02-01

    The increased uncertainty and complexity of nonlinear systems have motivated investigators to consider generalized approaches to defining an entropy function. New insights are achieved by defining the average uncertainty in the probability domain as a transformation of entropy functions. The Shannon entropy when transformed to the probability domain is the weighted geometric mean of the probabilities. For the exponential and Gaussian distributions, we show that the weighted geometric mean of the distribution is equal to the density of the distribution at the location plus the scale (i.e. at the width of the distribution). The average uncertainty is generalized via the weighted generalized mean, in which the moment is a function of the nonlinear source. Both the Rényi and Tsallis entropies transform to this definition of the generalized average uncertainty in the probability domain. For the generalized Pareto and Student's t-distributions, which are the maximum entropy distributions for these generalized entropies, the appropriate weighted generalized mean also equals the density of the distribution at the location plus scale. A coupled entropy function is proposed, which is equal to the normalized Tsallis entropy divided by one plus the coupling.
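
    The transformation to the probability domain is easy to verify numerically for the Shannon case: the probability-weighted geometric mean of a distribution's probabilities equals exp(-H), where H is the Shannon entropy in nats.

        import numpy as np

        p = np.array([0.5, 0.25, 0.125, 0.125])      # any discrete distribution

        shannon = -np.sum(p * np.log(p))             # Shannon entropy (nats)
        geo_mean = np.prod(p ** p)                   # weighted geometric mean, prod p_i^p_i

        # exp(-H) and the weighted geometric mean agree to rounding
        print(np.exp(-shannon), geo_mean)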

  14. Estimating a weighted average of stratum-specific parameters.

    PubMed

    Brumback, Babette A; Winner, Larry H; Casella, George; Ghosh, Malay; Hall, Allyson; Zhang, Jianyi; Chorba, Lorna; Duncan, Paul

    2008-10-30

    This article investigates estimators of a weighted average of stratum-specific univariate parameters and compares them in terms of a design-based estimate of mean-squared error (MSE). The research is motivated by a stratified survey sample of Florida Medicaid beneficiaries, in which the parameters are population stratum means and the weights are known and determined by the population sampling frame. Assuming heterogeneous parameters, it is common to estimate the weighted average with the weighted sum of sample stratum means; under homogeneity, one ignores the known weights in favor of precision weighting. Adaptive estimators arise from random effects models for the parameters. We propose adaptive estimators motivated from these random effects models, but we compare their design-based performance. We further propose selecting the tuning parameter to minimize a design-based estimate of mean-squared error. This differs from the model-based approach of selecting the tuning parameter to accurately represent the heterogeneity of stratum means. Our design-based approach effectively downweights strata with small weights in the assessment of homogeneity, which can lead to a smaller MSE. We compare the standard random effects model with identically distributed parameters to a novel alternative, which models the variances of the parameters as inversely proportional to the known weights. We also present theoretical and computational details for estimators based on a general class of random effects models. The methods are applied to estimate average satisfaction with health plan and care among Florida beneficiaries just prior to Medicaid reform.
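
    A schematic of the two extreme estimators compared in the article, plus an illustrative linear blend standing in for an adaptive estimator; the article's adaptive estimators are derived from random effects models, which this toy blend does not implement, and all numbers are invented.

        import numpy as np

        w = np.array([0.5, 0.3, 0.2])                # known population stratum weights
        ybar = np.array([3.1, 2.4, 4.0])             # sample stratum means
        var = np.array([0.20, 0.05, 0.60])           # estimated variances of the means

        # heterogeneous parameters: weighted sum with the known weights
        est_weighted = np.sum(w * ybar)

        # homogeneity: ignore the known weights in favor of precision weighting
        prec = 1.0 / var
        est_precision = np.sum(prec * ybar) / np.sum(prec)

        # an adaptive estimator interpolates; tau would be chosen to minimize a
        # design-based MSE estimate (fixed illustrative value here)
        tau = 0.5
        est_adaptive = tau * est_weighted + (1 - tau) * est_precision
        print(est_weighted, est_precision, est_adaptive)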

  15. Improvement of scanning radiometer performance by digital reference averaging

    NASA Technical Reports Server (NTRS)

    Bremer, J. C.

    1979-01-01

    Most radiometers utilize a calibration technique in which measurements of a known reference are subtracted from measurements of an unknown source so that common-mode bias errors are cancelled. When a radiometer is scanned over a varying scene, it produces a sequence of outputs, each being proportional to the difference between the reference and the corresponding input. A reference averaging technique is presented that employs a simple digital algorithm which exploits the asymmetry between the time-variable scene inputs and the nominally constant reference input by averaging many reference measurements to decrease the statistical uncertainty in the reference value. This algorithm is, therefore, optimized by an asymmetric chopping sequence in which the scene is viewed for more than one-half of the duty cycle (unlike the analog Dicke technique). Reference averaging algorithms are well within the capabilities of small microprocessors. Although this paper develops the technique for microwave radiometry, it may be beneficial for any system which measures a large number of unknowns relative to a known reference in the presence of slowly varying common-mode errors.
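
    A toy numerical sketch of the idea: averaging many looks at the constant reference before differencing reduces the statistical uncertainty contributed by the reference channel, while the slowly varying common-mode bias still cancels. All signals below are invented.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 1000
        scene = 10.0 + np.sin(np.linspace(0, 6, n))      # slowly varying scene input
        bias = 2.0 + 0.001 * np.arange(n)                # drifting common-mode bias

        scene_meas = scene + bias + 0.3 * rng.standard_normal(n)
        ref_meas = 5.0 + bias + 0.3 * rng.standard_normal(n)   # reference value 5.0

        # Dicke-style differencing: one reference look per scene look
        classic = scene_meas - ref_meas

        # reference averaging: running mean over many reference looks
        k = 50
        ref_avg = np.convolve(ref_meas, np.ones(k) / k, mode="same")
        improved = scene_meas - ref_avg

        # residual error versus the true difference (scene - 5.0)
        print(np.std(classic - (scene - 5.0)), np.std(improved - (scene - 5.0)))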

  16. Exploring JLA supernova data with improved flux-averaging technique

    NASA Astrophysics Data System (ADS)

    Wang, Shuang; Wen, Sixiang; Li, Miao

    2017-03-01

    In this work, we explore the cosmological consequences of the ``Joint Light-curve Analysis'' (JLA) supernova (SN) data by using an improved flux-averaging (FA) technique, in which only the type Ia supernovae (SNe Ia) at high redshift are flux-averaged. Adopting the figure of merit (FoM) criterion and considering six dark energy (DE) parameterizations, we search for the FA recipe that gives the tightest DE constraints in the (zcut, Δz) plane, where zcut and Δz are the redshift cut-off and the redshift interval of FA, respectively. Then, based on the best FA recipe obtained, we discuss the impacts of varying zcut and varying Δz, revisit the evolution of the SN color luminosity parameter β, and study the effects of adopting different FA recipes on parameter estimation. We find that: (1) The best FA recipe is (zcut = 0.6, Δz = 0.06), which is insensitive to the specific DE parameterization. (2) Flux-averaging JLA samples at zcut >= 0.4 yields tighter DE constraints than the case without FA. (3) Using FA can significantly reduce the redshift evolution of β. (4) The best FA recipe favors a larger fractional matter density Ωm. In summary, we present an alternative method of dealing with JLA data, which can reduce the systematic uncertainties of SNe Ia and give tighter DE constraints at the same time. Our method will be useful in the use of SNe Ia data for precision cosmology.
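
    A schematic of the FA step itself (Python; the magnitude-to-flux conversion and the binning details are generic assumptions for illustration, not the JLA pipeline): SNe with z >= zcut are grouped into bins of width Δz, distance moduli are converted to fluxes, averaged per bin, and converted back.

        import numpy as np

        def flux_average(z, mu, z_cut=0.6, dz=0.06):
            """Flux-average SNe above z_cut in redshift bins of width dz.

            z, mu : arrays of redshifts and distance moduli.
            Returns (z_out, mu_out) with low-z SNe untouched and high-z SNe
            replaced by one flux-averaged point per bin.
            """
            keep = z < z_cut
            z_out, mu_out = list(z[keep]), list(mu[keep])

            zh, muh = z[~keep], mu[~keep]
            if zh.size == 0:
                return np.array(z_out), np.array(mu_out)

            flux = 10.0 ** (-0.4 * muh)          # flux proportional to 10^(-mu/2.5)
            edges = np.arange(z_cut, zh.max() + dz, dz)
            for lo, hi in zip(edges[:-1], edges[1:]):
                inbin = (zh >= lo) & (zh < hi)
                if inbin.any():
                    z_out.append(zh[inbin].mean())
                    mu_out.append(-2.5 * np.log10(flux[inbin].mean()))
            return np.array(z_out), np.array(mu_out)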

  17. Averaged null energy condition in loop quantum cosmology

    SciTech Connect

    Li Lifang; Zhu Jianyang

    2009-02-15

    Wormholes and time machines are objects of great interest in general relativity. However, supporting them requires exotic matter, which is impossible at the classical level. Semiclassical gravity introduces quantum effects into the stress-energy tensor and constructs many self-consistent wormholes. But they are not traversable due to the averaged null energy condition. Loop quantum gravity (LQG) significantly modifies the Einstein equation in the deep quantum region. If we write the modified Einstein equation in the form of the standard one but with an effective stress-energy tensor, it is convenient to analyze the geometry in LQG through the energy conditions. Loop quantum cosmology (LQC), an application of LQG, has an effective stress-energy tensor which violates some kinds of local energy conditions, so it is natural that inflation emerges in LQC. In this paper, we investigate the averaged null energy condition in LQC in the framework of the effective Hamiltonian, and we find that the effective stress-energy tensor in LQC violates the averaged null energy condition in the massless scalar field coupled model.
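
    For reference, the averaged null energy condition being tested is the standard integral condition along a complete null geodesic with tangent vector k^mu and affine parameter lambda (written here in its usual textbook form, with the effective stress-energy tensor of LQC substituted for T):

        \int_{-\infty}^{+\infty} T_{\mu\nu}\, k^{\mu} k^{\nu}\, \mathrm{d}\lambda \;\geq\; 0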

  18. Rolling bearing feature frequency extraction using extreme average envelope decomposition

    NASA Astrophysics Data System (ADS)

    Shi, Kunju; Liu, Shulin; Jiang, Chao; Zhang, Hongli

    2016-09-01

    The vibration signal contains a wealth of sensitive information reflecting the running status of the equipment. Decomposing the signal and extracting the effective information properly is one of the most important steps for precise diagnosis. Traditional adaptive signal decomposition methods, such as EMD, suffer from mode mixing, low decomposition accuracy, and related problems. To address these problems, the extreme average envelope decomposition (EAED) method is presented, based on EMD. EAED has three advantages. First, it is completed through a midpoint envelope rather than the separate maximum and minimum envelopes used in EMD, so the average variability of the signal can be described accurately. Second, in order to reduce envelope errors during signal decomposition, a strategy of replacing two envelopes with one envelope is presented. Third, the similar triangle principle is utilized to calculate the times of the extreme average points accurately, so the influence of the sampling frequency on the results is significantly reduced. Experimental results show that EAED can gradually separate single frequency components out of a complex signal. It not only isolates three kinds of typical bearing fault vibration frequency components but also needs fewer decomposition layers. By replacing quadratic enveloping with a single envelope, EAED isolates the fault characteristic frequency with fewer decomposition layers, improving the precision of signal decomposition.
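
    The core midpoint-envelope idea can be sketched as follows (Python with NumPy/SciPy; a simplified illustration of a mean envelope built from midpoints of adjacent extrema, not the authors' full EAED algorithm with its similar-triangle timing correction):

        import numpy as np
        from scipy.signal import argrelextrema
        from scipy.interpolate import CubicSpline

        def midpoint_mean_envelope(t, x):
            """Mean envelope from midpoints between adjacent extrema of x(t)."""
            imax = argrelextrema(x, np.greater)[0]
            imin = argrelextrema(x, np.less)[0]
            idx = np.sort(np.concatenate([imax, imin]))   # alternating extrema

            # Midpoints (in time and amplitude) of each pair of adjacent extrema.
            tm = 0.5 * (t[idx[:-1]] + t[idx[1:]])
            xm = 0.5 * (x[idx[:-1]] + x[idx[1:]])

            return CubicSpline(tm, xm)(t)                 # one envelope, not two

        # Example: the mean envelope tracks the slow component of a two-tone signal.
        t = np.linspace(0, 1, 2000)
        x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
        m = midpoint_mean_envelope(t, x)
        detail = x - m   # candidate high-frequency component, as in one EMD-style sift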

  19. Role of spatial averaging in multicellular gradient sensing

    NASA Astrophysics Data System (ADS)

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-06-01

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.

  20. Average methods and their applications in differential geometry I

    NASA Astrophysics Data System (ADS)

    Vincze, Cs.

    2015-06-01

    In Minkowski geometry the metric features are based on a compact convex body containing the origin in its interior. This body works as a unit ball, and its boundary is formed by the unit vectors. Using one-homogeneous extension we have a so-called Minkowski functional to measure the length of vectors. Half of its square is called the energy function. Under some regularity conditions we can introduce an averaged Euclidean inner product by integrating the Hessian matrix of the energy function over the Minkowskian unit sphere. Changing the origin in the interior of the body, we have a collection of Minkowskian unit balls together with Minkowski functionals depending on the base points. This is a special kind of Finsler manifold, called a Funk space. Using the previous method we can associate a Riemannian metric as the collection of the averaged Euclidean inner products belonging to the different base points. We investigate this procedure in the case of Finsler manifolds in general. Central objects of the associated Riemannian structure will be expressed in terms of the canonical data of the Finsler space. We take one more step forward: Randers spaces will be introduced by averaging the vertical derivatives of the Finslerian fundamental function. The construction will have a crucial role when we apply the general results to Funk spaces, together with some contributions to Brickell's conjecture on Finsler manifolds with vanishing curvature tensor Q.

  1. H∞ control of switched delayed systems with average dwell time

    NASA Astrophysics Data System (ADS)

    Li, Zhicheng; Gao, Huijun; Agarwal, Ramesh; Kaynak, Okyay

    2013-12-01

    This paper considers the problems of stability analysis and H∞ controller design of time-delay switched systems with average dwell time. In order to obtain less conservative results than what is seen in the literature, a tighter bound for the state delay term is estimated. Based on the scaled small gain theorem and the model transformation method, an improved exponential stability criterion for time-delay switched systems with average dwell time is formulated in the form of convex matrix inequalities. The aim of the proposed approach is to reduce the minimal average dwell time of the systems, which is made possible by a new Lyapunov-Krasovskii functional combined with the scaled small gain theorem. It is shown that this approach is able to tolerate a smaller dwell time or a larger admissible delay bound for the given conditions than most of the approaches seen in the literature. Moreover, the exponential H∞ controller can be constructed by solving a set of conditions, which is developed on the basis of the exponential stability criterion. Simulation examples illustrate the effectiveness of the proposed method.
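
    For context, the average dwell time constraint used in this literature is usually stated as follows: a switching signal sigma has average dwell time tau_a if the number of switches N_sigma(t_0, t) on any interval (t_0, t) satisfies, for some chatter bound N_0 >= 0,

        N_{\sigma}(t_0, t) \;\le\; N_0 + \frac{t - t_0}{\tau_a} \qquad \text{for all } t \ge t_0 \ge 0 .

    Reducing the minimal admissible tau_a is the sense in which the paper's stability criterion is less conservative.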

  2. High Average Power, High Energy Short Pulse Fiber Laser System

    SciTech Connect

    Messerly, M J

    2007-11-13

    Recently, continuous wave fiber laser systems with output powers in excess of 500 W with good beam quality have been demonstrated [1]. High energy, ultrafast, chirped pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of efficient, compact, robust systems that are turnkey. Applications such as cutting, drilling and materials processing, front end systems for high energy pulsed lasers (such as petawatts), and laser based sources of high spatial coherence, high flux x-rays all require high energy short pulses, and two of these three applications also require high average power. The challenge in creating a high energy chirped pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high energy, high average power fiber laser system. This work included exploring designs of large mode area optical fiber amplifiers for high energy systems as well as understanding the issues associated with chirped pulse amplification in optical fiber amplifier systems.

  3. Noise reduction in elastograms using temporal stretching with multicompression averaging.

    PubMed

    Varghese, T; Ophir, J; Céspedes, I

    1996-01-01

    Elastography uses estimates of the time delay (obtained by cross-correlation) to compute strain estimates in tissue due to quasistatic compression. Because the time delay estimates do not generally occur at the sampling intervals, the location of the cross-correlation peak does not give an accurate estimate of the time delay. Sampling errors in the time-delay estimate are reduced using signal interpolation techniques to obtain subsample time-delay estimates. Distortions of the echo signals due to tissue compression introduce correlation artifacts in the elastogram. These artifacts are reduced by a combination of small compressions and temporal stretching of the postcompression signal. Random noise effects in the resulting elastograms are reduced by averaging several elastograms, obtained from successive small compressions (assuming that the errors are uncorrelated). Multicompression averaging with temporal stretching is shown to increase the signal-to-noise ratio in the elastogram by an order of magnitude, without sacrificing sensitivity, resolution or dynamic range. The strain filter concept is extended in this article to theoretically characterize the performance of multicompression averaging with temporal stretching.
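
    One common subsample interpolation of a cross-correlation peak fits a parabola through the peak lag and its two neighbours (a generic sketch of the idea referred to above, not necessarily the interpolation used in the article):

        import numpy as np

        def subsample_delay(x, y):
            """Time delay of y relative to x, in (fractional) samples."""
            c = np.correlate(y, x, mode="full")           # cross-correlation
            m = int(np.argmax(c))                         # integer-lag peak
            # Parabolic interpolation through (m-1, m, m+1) for a subsample offset.
            denom = c[m - 1] - 2.0 * c[m] + c[m + 1]
            delta = 0.5 * (c[m - 1] - c[m + 1]) / denom if denom != 0 else 0.0
            return (m - (len(x) - 1)) + delta             # lag 0 sits at index len(x)-1

        # Example: y is x delayed by 3 samples (plus noise); the estimate is ~3.0.
        rng = np.random.default_rng(1)
        x = rng.standard_normal(256)
        y = np.roll(x, 3) + 0.05 * rng.standard_normal(256)
        print(subsample_delay(x, y))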

  4. How to Address Measurement Noise in Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Schöniger, A.; Wöhling, T.; Nowak, W.

    2014-12-01

    When confronted with the challenge of selecting one out of several competing conceptual models for a specific modeling task, Bayesian model averaging is a rigorous choice. It ranks the plausibility of models based on Bayes' theorem, which yields an optimal trade-off between performance and complexity. With the resulting posterior model probabilities, their individual predictions are combined into a robust weighted average and the overall predictive uncertainty (including conceptual uncertainty) can be quantified. This rigorous framework does not, however, yet explicitly consider the statistical significance of measurement noise in the calibration data set. This is a major drawback, because model weights might be unstable due to the uncertainty in noisy data, which may compromise the reliability of model ranking. We present a new extension to the Bayesian model averaging framework that explicitly accounts for measurement noise as a source of uncertainty for the weights. This enables modelers to assess the reliability of model ranking for a specific application and a given calibration data set. Also, the impact of measurement noise on the overall prediction uncertainty can be determined. Technically, our extension is built within a Monte Carlo framework. We repeatedly perturb the observed data with random realizations of measurement error. Then, we determine the robustness of the resulting model weights against measurement noise. We quantify the variability of the posterior model weights as weighting variance. We add this new variance term to the overall prediction uncertainty analysis within the Bayesian model averaging framework to make uncertainty quantification more realistic and "complete". We illustrate the importance of our suggested extension with an application to soil-plant model selection, based on studies by Wöhling et al. (2013, 2014). Results confirm that noise in leaf area index or evaporation rate observations produces a significant amount of weighting variance.
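
    The Monte Carlo extension can be sketched in a few lines (Python; the Gaussian likelihoods and the two toy "models" are stand-ins for the soil-plant models in the study):

        import numpy as np

        rng = np.random.default_rng(42)
        y_obs = np.array([1.0, 1.2, 0.9, 1.1])            # calibration data
        preds = [np.array([1.0, 1.1, 1.0, 1.0]),          # model 1 predictions
                 np.array([0.8, 1.3, 0.7, 1.2])]          # model 2 predictions
        sigma = 0.15                                      # measurement error std
        prior = np.array([0.5, 0.5])

        def bma_weights(y):
            # Gaussian likelihood of the data under each model, then Bayes' theorem.
            logL = np.array([-0.5 * np.sum((y - p) ** 2) / sigma**2 for p in preds])
            w = prior * np.exp(logL - logL.max())
            return w / w.sum()

        # Repeatedly perturb the data with realizations of measurement error and
        # record the resulting posterior model weights.
        W = np.array([bma_weights(y_obs + sigma * rng.standard_normal(y_obs.size))
                      for _ in range(5000)])

        weighting_variance = W.var(axis=0)   # stability of each model's weight
        print(W.mean(axis=0), weighting_variance)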

  5. Prediction of broadband attenuation computed using band-averaged mass extinction coefficients and band-averaged transmittance

    NASA Astrophysics Data System (ADS)

    Farmer, W. M.

    1991-09-01

    A common method of estimating the attenuation capabilities of military smokes/obscurants is to use a band-averaged mass-extinction coefficient with concentration-length values in the Beer-Bouguer transmission law. This approach ignores the effects of source spectra, sensor response, and normal atmospheric attenuation, which can significantly affect broadband transmittance. The differences that can occur in predicting relative transmittance as a function of concentration length by using band-averaged mass-extinction coefficients, as opposed to more properly computing the band-averaged transmittance, are discussed in this paper. Two examples are provided to illustrate the differences in results. The first example considers 3- to 5-micron and 8- to 14-micron band transmission through natural fogs. The second example considers 3- to 5-micron and 8- to 12-micron transmission through phosphorus-derived smoke (a common military obscurant) produced at 17 percent and at 90 percent relative humidity. Major differences are found in the values of concentration lengths predicted by the two methods when the transmittance relative to an unobscured atmosphere falls below about 20 percent. These results can affect conclusions concerning the detection of targets in smoke screens, the smoke concentration lengths required to obscure a target, and radiative transport through polluted atmospheres.
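
    The distinction can be written compactly. Using a band-averaged mass-extinction coefficient in the Beer-Bouguer law gives a transmittance of the form exp(-alpha_bar CL), whereas the band-averaged transmittance weights the monochromatic law by the source spectrum S(lambda) and the sensor response R(lambda). In generic notation (chosen here to match the description above, not the paper's own symbols):

        \bar{T}(CL) \;=\; \frac{\int_{\lambda_1}^{\lambda_2} S(\lambda)\, R(\lambda)\, e^{-\alpha(\lambda)\, CL}\, \mathrm{d}\lambda}{\int_{\lambda_1}^{\lambda_2} S(\lambda)\, R(\lambda)\, \mathrm{d}\lambda} \;\neq\; e^{-\bar{\alpha}\, CL}

    The two sides agree only when alpha(lambda) is nearly constant across the band, which is why the methods diverge at low transmittance.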

  6. 40 CFR 60.2943 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.2975 to calculate emissions at 7 percent oxygen. (b) Use Equation 2 in § 60.2975 to calculate the 12-hour rolling averages...

  7. Average waiting time profiles of uniform DQDB model

    SciTech Connect

    Rao, N.S.V.; Maly, K.; Olariu, S.; Dharanikota, S.; Zhang, L.; Game, D.

    1993-09-07

    The Distributed Queue Dual Bus (DQDB) system consists of a linear arrangement of N nodes that communicate with each other using two contra-flowing buses; the nodes use an extremely simple protocol to send messages on these buses. This simple, but elegant, system has been found to be very challenging to analyze. We consider a simple and uniform abstraction of this model to highlight the fairness issues in terms of average waiting time. We introduce a new approximation method to analyze the performance of the DQDB system in terms of the average waiting time of a node expressed as a function of its position. Our approach abstracts the intimate relationship between the load of the system and its fairness characteristics, and explains all basic behavior profiles of DQDB observed in previous simulations. For the uniform DQDB with equal distance between adjacent nodes, we show that the system operates under three basic behavior profiles, and a finite number of their combinations, that depend on the load of the network. Consequently, the system is not fair at any load in terms of average waiting times. In the vicinity of a critical load of 1 - 4/N, the uniform network runs into a state akin to chaos, where its behavior fluctuates from one extreme to the other with a load variation of 2/N. Our analysis is supported by simulation results. We also show that the main theme of the analysis carries over to the general (non-uniform) DQDB; by suitably choosing the inter-node distances, the DQDB can be made fair around some loads, but such a system becomes unfair as the load changes.

  8. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    NASA Astrophysics Data System (ADS)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.

  9. When did the average cosmic ray flux increase?

    NASA Technical Reports Server (NTRS)

    Nishiizumi, K.; Murty, S. V. S.; Marti, K.; Arnold, J. R.

    1985-01-01

    A new 129I-129Xe method to obtain cosmic ray exposure ages and to study the average cosmic ray flux on a 10^7 to 10^8 year time-scale was developed. The method is based on secondary neutron reactions on Te in troilite and the subsequent decay of the reaction product 129I to stable 129Xe. The first measurements of 129I and 129Xe in aliquot samples of a Cape York troilite sample are reported.

  10. Constructing the Average Natural History of HIV-1 Infection

    NASA Astrophysics Data System (ADS)

    Diambra, L.; Capurro, A.; Malta, C. P.

    2007-05-01

    Many aspects of the natural course of HIV-1 infection remain unclear, despite important efforts towards understanding its long-term dynamics. Using a scaling approach that places progression markers (viral load, CD4+, CD8+) of many individuals on a single average natural course of disease progression, we introduce the concepts of inter-individual scaling and time scaling. Our quantitative assessment of the natural course of HIV-1 infection indicates that the dynamics of the evolution for individuals that developed AIDS (opportunistic infections) is different from that of individuals that did not develop AIDS. This means that the rate of progression is not relevant for the infection evolution.

  11. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    NASA Technical Reports Server (NTRS)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

    In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherently narrow-band performance, the problem is exacerbated by modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve the problem. An iterative design procedure was developed, starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full wave method of moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced into the aperture distribution by modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, the array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. A separable Taylor distribution with nbar = 4 and a 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E

  12. Optical Parametric Amplification for High Peak and Average Power

    SciTech Connect

    Jovanovic, Igor

    2001-11-26

    Optical parametric amplification is an established broadband amplification technology based on a second-order nonlinear process of difference-frequency generation (DFG). When used in chirped pulse amplification (CPA), the technology has been termed optical parametric chirped pulse amplification (OPCPA). OPCPA holds a potential for producing unprecedented levels of peak and average power in optical pulses through its scalable ultrashort pulse amplification capability and the absence of quantum defect, respectively. The theory of three-wave parametric interactions is presented, followed by a description of the numerical model developed for nanosecond pulses. Spectral, temperature and angular characteristics of OPCPA are calculated, with an estimate of pulse contrast. An OPCPA system centered at 1054 nm, based on a commercial tabletop Q-switched pump laser, was developed as the front end for a large Nd-glass petawatt-class short-pulse laser. The system does not utilize electro-optic modulators or multi-pass amplification. The obtained overall 6% efficiency is the highest to date in OPCPA that uses a tabletop commercial pump laser. The first compression of pulses amplified in highly nondegenerate OPCPA is reported, with the obtained pulse width of 60 fs. This represents the shortest pulse to date produced in OPCPA. Optical parametric amplification in β-barium borate was combined with laser amplification in Ti:sapphire to produce the first hybrid CPA system, with an overall conversion efficiency of 15%. Hybrid CPA combines the benefits of high gain in OPCPA with high conversion efficiency in Ti:sapphire to allow significant simplification of future tabletop multi-terawatt sources. Preliminary modeling of average power limits in OPCPA and pump laser design are presented, and an approach based on cascaded DFG is proposed to increase the average power beyond the single-crystal limit. Angular and beam quality effects in optical parametric amplification are modeled.

  13. High average power, high current pulsed accelerator technology

    SciTech Connect

    Neau, E.L.

    1995-05-01

    High current pulsed accelerator technology was developed from the late 60s through the late 80s to satisfy the needs of various military related applications such as effects simulators, particle beam devices, free electron lasers, and drivers for Inertial Confinement Fusion devices. The emphasis in these devices is to achieve very high peak power levels, with pulse lengths on the order of a few tens of nanoseconds, peak currents of up to tens of MA, and accelerating potentials of up to tens of MV. New high average power systems, incorporating thermal management techniques, are enabling the potential use of high peak power technology in a number of diverse industrial application areas such as materials processing, food processing, stack gas cleanup, and the destruction of organic contaminants. These systems employ semiconductor and saturable magnetic switches to achieve short pulse durations that can then be added to efficiently give MV accelerating potentials while delivering average power levels of a few hundreds of kilowatts to perhaps many megawatts. The Repetitive High Energy Pulsed Power project is developing short-pulse, high current accelerator technology capable of generating beams with kJs of energy per pulse delivered to areas of 1000 cm^2 or more using ions, electrons, or x-rays. Modular technology is employed to meet the needs of a variety of applications requiring from hundreds of kV to MVs and from tens to hundreds of kA. Modest repetition rates, up to a few hundreds of pulses per second (PPS), allow these machines to deliver average currents on the order of a few hundreds of mA. The design and operation of the second generation 300 kW RHEPP-II machine, now being brought on-line to operate at 2.5 MV, 25 kA, and 100 PPS, will be described in detail as one example of the new high average power, high current pulsed accelerator technology.

  14. Recent advances in phase shifted time averaging and stroboscopic interferometry

    NASA Astrophysics Data System (ADS)

    Styk, Adam; Józwik, Michał

    2016-08-01

    Classical Time Averaging and Stroboscopic Interferometry are widely used for MEMS/MOEMS dynamic behavior investigations. Unfortunately, both methods require extensive measurement and data processing strategies in order to evaluate the maximum vibration amplitude of an object at a given load. In this paper, modified data processing strategies for both techniques are introduced. These modifications allow fast and reliable calculation of the searched value without additional complication of the measurement systems. Both approaches are discussed and experimentally verified.

  15. Measurement of the average φ multiplicity in B meson decay

    NASA Astrophysics Data System (ADS)

    Aubert, B.; Barate, R.; Boutigny, D.; Gaillard, J.-M.; Hicheur, A.; Karyotakis, Y.; Lees, J. P.; Robbe, P.; Tisserand, V.; Zghiche, A.; Palano, A.; Pompili, A.; Chen, J. C.; Qi, N. D.; Rong, G.; Wang, P.; Zhu, Y. S.; Eigen, G.; Ofte, I.; Stugu, B.; Abrams, G. S.; Borgland, A. W.; Breon, A. B.; Brown, D. N.; Button-Shafer, J.; Cahn, R. N.; Charles, E.; Day, C. T.; Gill, M. S.; Gritsan, A. V.; Groysman, Y.; Jacobsen, R. G.; Kadel, R. W.; Kadyk, J.; Kerth, L. T.; Kolomensky, Yu. G.; Kukartsev, G.; Leclerc, C.; Levi, M. E.; Lynch, G.; Mir, L. M.; Oddone, P. J.; Orimoto, T. J.; Pripstein, M.; Roe, N. A.; Romosan, A.; Ronan, M. T.; Shelkov, V. G.; Telnov, A. V.; Wenzel, W. A.; Ford, K.; Harrison, T. J.; Hawkes, C. M.; Knowles, D. J.; Morgan, S. E.; Penny, R. C.; Watson, A. T.; Watson, N. K.; Goetzen, K.; Held, T.; Koch, H.; Lewandowski, B.; Pelizaeus, M.; Peters, K.; Schmuecker, H.; Steinke, M.; Boyd, J. T.; Chevalier, N.; Cottingham, W. N.; Kelly, M. P.; Latham, T. E.; Mackay, C.; Wilson, F. F.; Abe, K.; Cuhadar-Donszelmann, T.; Hearty, C.; Mattison, T. S.; McKenna, J. A.; Thiessen, D.; Kyberd, P.; McKemey, A. K.; Teodorescu, L.; Blinov, V. E.; Bukin, A. D.; Golubev, V. B.; Ivanchenko, V. N.; Kravchenko, E. A.; Onuchin, A. P.; Serednyakov, S. I.; Skovpen, Yu. I.; Solodov, E. P.; Yushkov, A. N.; Best, D.; Bruinsma, M.; Chao, M.; Kirkby, D.; Lankford, A. J.; Mandelkern, M.; Mommsen, R. K.; Roethel, W.; Stoker, D. P.; Buchanan, C.; Hartfiel, B. L.; Gary, J. W.; Layter, J.; Shen, B. C.; Wang, K.; del Re, D.; Hadavand, H. K.; Hill, E. J.; Macfarlane, D. B.; Paar, H. P.; Rahatlou, Sh.; Sharma, V.; Berryhill, J. W.; Campagnari, C.; Dahmes, B.; Kuznetsova, N.; Levy, S. L.; Long, O.; Lu, A.; Mazur, M. A.; Richman, J. D.; Rozen, Y.; Verkerke, W.; Beck, T. W.; Beringer, J.; Eisner, A. M.; Heusch, C. A.; Lockman, W. S.; Schalk, T.; Schmitz, R. E.; Schumm, B. A.; Seiden, A.; Turri, M.; Walkowiak, W.; Williams, D. C.; Wilson, M. G.; Albert, J.; Chen, E.; Dubois-Felsmann, G. P.; Dvoretskii, A.; Erwin, R. J.; Hitlin, D. G.; Narsky, I.; Piatenko, T.; Porter, F. C.; Ryd, A.; Samuel, A.; Yang, S.; Jayatilleke, S.; Mancinelli, G.; Meadows, B. T.; Sokoloff, M. D.; Abe, T.; Blanc, F.; Bloom, P.; Chen, S.; Clark, P. J.; Ford, W. T.; Nauenberg, U.; Olivas, A.; Rankin, P.; Roy, J.; Smith, J. G.; van Hoek, W. C.; Zhang, L.; Harton, J. L.; Hu, T.; Soffer, A.; Toki, W. H.; Wilson, R. J.; Zhang, J.; Altenburg, D.; Brandt, T.; Brose, J.; Colberg, T.; Dickopp, M.; Dubitzky, R. S.; Hauke, A.; Lacker, H. M.; Maly, E.; Müller-Pfefferkorn, R.; Nogowski, R.; Otto, S.; Schubert, J.; Schubert, K. R.; Schwierz, R.; Spaan, B.; Wilden, L.; Bernard, D.; Bonneaud, G. R.; Brochard, F.; Cohen-Tanugi, J.; Grenier, P.; Thiebaux, Ch.; Vasileiadis, G.; Verderi, M.; Khan, A.; Lavin, D.; Muheim, F.; Playfer, S.; Swain, J. E.; Andreotti, M.; Azzolini, V.; Bettoni, D.; Bozzi, C.; Calabrese, R.; Cibinetto, G.; Luppi, E.; Negrini, M.; Piemontese, L.; Sarti, A.; Treadwell, E.; Anulli, F.; Baldini-Ferroli, R.; Biasini, M.; Calcaterra, A.; de Sangro, R.; Falciai, D.; Finocchiaro, G.; Patteri, P.; Peruzzi, I. M.; Piccolo, M.; Pioppi, M.; Zallo, A.; Buzzo, A.; Capra, R.; Contri, R.; Crosetti, G.; Lo Vetere, M.; Macri, M.; Monge, M. R.; Passaggio, S.; Patrignani, C.; Robutti, E.; Santroni, A.; Tosi, S.; Bailey, S.; Morii, M.; Won, E.; Bhimji, W.; Bowerman, D. A.; Dauncey, P. D.; Egede, U.; Eschrich, I.; Gaillard, J. R.; Morton, G. W.; Nash, J. A.; Sanders, P.; Taylor, G. P.; Grenier, G. J.; Lee, S.-J.; Mallik, U.; Cochran, J.; Crawley, H. 
B.; Lamsa, J.; Meyer, W. T.; Prell, S.; Rosenberg, E. I.; Yi, J.; Davier, M.; Grosdidier, G.; Höcker, A.; Laplace, S.; Le Diberder, F.; Lepeltier, V.; Lutz, A. M.; Petersen, T. C.; Plaszczynski, S.; Schune, M. H.; Tantot, L.; Wormser, G.; Brigljević, V.; Cheng, C. H.; Lange, D. J.; Simani, M. C.; Wright, D. M.; Bevan, A. J.; Coleman, J. P.; Fry, J. R.; Gabathuler, E.; Gamet, R.; Kay, M.; Parry, R. J.; Payne, D. J.; Sloane, R. J.; Touramanis, C.; Back, J. J.; Cormack, C. M.; Harrison, P. F.; Shorthouse, H. W.; Vidal, P. B.; Brown, C. L.; Cowan, G.; Flack, R. L.; Flaecher, H. U.; George, S.; Green, M. G.; Kurup, A.; Marker, C. E.; McMahon, T. R.; Ricciardi, S.; Salvatore, F.; Vaitsas, G.; Winter, M. A.; Brown, D.; Davis, C. L.; Allison, J.; Barlow, N. R.; Barlow, R. J.; Hart, P. A.; Hodgkinson, M. C.; Jackson, F.; Lafferty, G. D.; Lyon, A. J.; Weatherall, J. H.; Williams, J. C.; Farbin, A.; Jawahery, A.; Kovalskyi, D.; Lae, C. K.; Lillard, V.; Roberts, D. A.; Blaylock, G.; Dallapiccola, C.; Flood, K. T.; Hertzbach, S. S.; Kofler, R.; Koptchev, V. B.; Moore, T. B.; Saremi, S.; Staengle, H.; Willocq, S.; Cowan, R.; Sciolla, G.; Taylor, F.; Yamamoto, R. K.; Mangeol, D. J.; Patel, P. M.; Robertson, S. H.; Lazzaro, A.; Palombo, F.; Bauer, J. M.; Cremaldi, L.; Eschenburg, V.; Godang, R.; Kroeger, R.; Reidy, J.; Sanders, D. A.; Summers, D. J.; Zhao, H. W.; Brunet, S.; Cote-Ahern, D.; Taras, P.; Nicholson, H.; Cartaro, C.; Cavallo, N.; de Nardo, G.; Fabozzi, F.; Gatto, C.; Lista, L.; Paolucci, P.; Piccolo, D.; Sciacca, C.; Baak, M. A.; Raven, G.; Losecco, J. M.; Gabriel, T. A.; Brau, B.; Gan, K. K.; Honscheid, K.; Hufnagel, D.; Kagan, H.; Kass, R.; Pulliam, T.; Wong, Q. K.; Brau, J.; Frey, R.; Potter, C. T.; Sinev, N. B.; Strom, D.; Torrence, E.; Colecchia, F.; Dorigo, A.; Galeazzi, F.; Margoni, M.; Morandin, M.; Posocco, M.; Rotondo, M.; Simonetto, F.; Stroili, R.; Tiozzo, G.; Voci, C.; Benayoun, M.; Briand, H.; Chauveau, J.; David, P.; de La Vaissière, Ch.; del Buono, L.; Hamon, O.; John, M. J.; Leruste, Ph.; Ocariz, J.; Pivk, M.; Roos, L.; Stark, J.; T'jampens, S.; Therin, G.; Manfredi, P. F.; Re, V.; Behera, P. K.; Gladney, L.; Guo, Q. H.; Panetta, J.; Angelini, C.; Batignani, G.; Bettarini, S.; Bondioli, M.; Bucci, F.; Calderini, G.; Carpinelli, M.; del Gamba, V.; Forti, F.; Giorgi, M. A.; Lusiani, A.; Marchiori, G.; Martinez-Vidal, F.; Morganti, M.; Neri, N.; Paoloni, E.; Rama, M.; Rizzo, G.; Sandrelli, F.; Walsh, J.; Haire, M.; Judd, D.; Paick, K.; Wagoner, D. E.; Danielson, N.; Elmer, P.; Lu, C.; Miftakov, V.; Olsen, J.; Smith, A. J.; Tanaka, H. A.; Varnes, E. W.; Bellini, F.; Cavoto, G.; Faccini, R.; Ferrarotto, F.; Ferroni, F.; Gaspero, M.; Mazzoni, M. A.; Morganti, S.; Pierini, M.; Piredda, G.; Safai Tehrani, F.; Voena, C.; Christ, S.; Wagner, G.; Waldi, R.; Adye, T.; de Groot, N.; Franek, B.; Geddes, N. I.; Gopal, G. P.; Olaiya, E. O.; Xella, S. M.; Aleksan, R.; Emery, S.; Gaidot, A.; Ganzhur, S. F.; Giraud, P.-F.; Hamel de Monchenault, G.; Kozanecki, W.; Langer, M.; Legendre, M.; London, G. W.; Mayer, B.; Schott, G.; Vasseur, G.; Yeche, Ch.; Zito, M.; Purohit, M. V.; Weidemann, A. W.; Yumiceva, F. X.; Aston, D.; Bartoldus, R.; Berger, N.; Boyarski, A. M.; Buchmueller, O. L.; Convery, M. R.; Coupal, D. P.; Dong, D.; Dorfan, J.; Dujmic, D.; Dunwoodie, W.; Field, R. C.; Glanzman, T.; Gowdy, S. J.; Grauges-Pous, E.; Hadig, T.; Halyo, V.; Hryn'ova, T.; Innes, W. R.; Jessop, C. P.; Kelsey, M. H.; Kim, P.; Kocian, M. L.; Langenegger, U.; Leith, D. W.; Libby, J.; Luitz, S.; Luth, V.; Lynch, H. 
L.; Marsiske, H.; Messner, R.; Muller, D. R.; O'Grady, C. P.; Ozcan, V. E.; Perazzo, A.; Perl, M.; Petrak, S.; Ratcliff, B. N.; Roodman, A.; Salnikov, A. A.; Schindler, R. H.; Schwiening, J.; Simi, G.; Snyder, A.; Soha, A.; Stelzer, J.; Su, D.; Sullivan, M. K.; Va'Vra, J.; Wagner, S. R.; Weaver, M.; Weinstein, A. J.; Wisniewski, W. J.; Wright, D. H.; Young, C. C.; Burchat, P. R.; Edwards, A. J.; Meyer, T. I.; Petersen, B. A.; Roat, C.; Ahmed, M.; Ahmed, S.; Alam, M. S.; Ernst, J. A.; Saeed, M. A.; Saleem, M.; Wappler, F. R.; Bugg, W.; Krishnamurthy, M.; Spanier, S. M.; Eckmann, R.; Kim, H.; Ritchie, J. L.; Schwitters, R. F.; Izen, J. M.; Kitayama, I.; Lou, X. C.; Ye, S.; Bianchi, F.; Bona, M.; Gallo, F.; Gamba, D.; Borean, C.; Bosisio, L.; della Ricca, G.; Dittongo, S.; Grancagnolo, S.; Lanceri, L.; Poropat, P.; Vitale, L.; Vuagnin, G.; Panvini, R. S.; Banerjee, Sw.; Brown, C. M.; Fortin, D.; Jackson, P. D.; Kowalewski, R.; Roney, J. M.; Band, H. R.; Dasu, S.; Datta, M.; Eichenbaum, A. M.; Johnson, J. R.; Kutter, P. E.; Li, H.; Liu, R.; di Lodovico, F.; Mihalyi, A.; Mohapatra, A. K.; Pan, Y.; Prepost, R.; Sekula, S. J.; von Wimmersperg-Toeller, J. H.; Wu, J.; Wu, S. L.; Yu, Z.; Neal, H.

    2004-03-01

    We present a measurement of the average multiplicity of φ mesons in B0, anti-B0, and B± meson decays. Using 17.6 fb-1 of data taken at the Υ(4S) resonance by the BABAR detector at the PEP-II e+e- storage ring at the Stanford Linear Accelerator Center, we reconstruct φ mesons in the K+K- decay mode and measure B(B→φX) = (3.41±0.06±0.12)%. This is significantly more precise than any previous measurement.

  16. Computational problems in autoregressive moving average (ARMA) models

    NASA Technical Reports Server (NTRS)

    Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.

    1981-01-01

    The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
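
    For readers who want to experiment, order selection of the kind discussed here can be prototyped with statsmodels (an assumed dependency; a generic AIC grid search on synthetic data stands in for the ankle-joint recordings, which are not reproduced):

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        # Synthetic ARMA(2,1)-like series for illustration.
        e = rng.standard_normal(600)
        y = np.zeros(600)
        for t in range(2, 600):
            y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + e[t] + 0.4 * e[t - 1]

        # Grid search over (p, q) using AIC; d = 0 gives a pure ARMA model.
        best = None
        for p in range(1, 4):
            for q in range(0, 3):
                res = ARIMA(y, order=(p, 0, q)).fit()
                if best is None or res.aic < best[0]:
                    best = (res.aic, p, q)
        print("selected (p, q):", best[1:])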

  17. Radial behavior of the average local ionization energies of atoms

    SciTech Connect

    Politzer, P.; Murray, J.S.; Grice, M.E.; Brinck, T.; Ranganathan, S. )

    1991-11-01

    The radial behavior of the average local ionization energy Ī(r) has been investigated for the atoms He-Kr, using ab initio Hartree-Fock atomic wave functions. Ī(r) is found to decrease in a stepwise manner, with the inflection points serving effectively to define boundaries between electronic shells. There is a good inverse correlation between polarizability and the ionization energy in the outermost region of the atom, suggesting that Ī(r) may be a meaningful measure of local polarizabilities in atoms and molecules.

  18. Weighted Average Consensus-Based Unscented Kalman Filtering.

    PubMed

    Li, Wangyan; Wei, Guoliang; Han, Fei; Liu, Yurong

    2016-02-01

    In this paper, we investigate the consensus-based distributed state estimation problem for a class of sensor networks within the unscented Kalman filter (UKF) framework. The communication status among sensors is represented by a connected undirected graph. A weighted average consensus-based UKF algorithm is developed to estimate the true state of interest, and its estimation error is proven to be bounded in mean square. Finally, the effectiveness of the proposed consensus-based UKF algorithm is validated through a simulation example.
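
    The weighted average consensus primitive underlying such filters can be illustrated on its own (Python; the graph, weights, and estimates are toy values; the full UKF machinery of the paper is omitted):

        import numpy as np

        # Undirected connected graph on 4 sensor nodes, given by adjacency lists.
        neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        deg = {i: len(nbrs) for i, nbrs in neighbors.items()}

        # Metropolis weights: a common doubly-stochastic choice for average consensus.
        def metropolis_weight(i, j):
            return 1.0 / (1.0 + max(deg[i], deg[j]))

        x = np.array([1.0, 4.0, 2.0, 7.0])   # each node's local estimate

        for _ in range(200):                  # repeated consensus iterations
            x_new = x.copy()
            for i, nbrs in neighbors.items():
                for j in nbrs:
                    x_new[i] += metropolis_weight(i, j) * (x[j] - x[i])
            x = x_new

        print(x)   # every entry converges to the network average, 3.5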

  19. Measurement of small temperature fluctuations at high average temperature

    NASA Technical Reports Server (NTRS)

    Scholl, James W.; Scholl, Marija S.

    1988-01-01

    Both absolute and differential temperature measurements were simultaneously performed as a function of time for a pixel on a high-temperature, multi-spectral, spatially and temporally varying infrared target simulator. A scanning laser beam was used to maintain a pixel at a temperature that was constant on average at 520 K. The laser refresh rate of up to 1 kHz resulted in small-amplitude temperature fluctuations with a peak-to-peak amplitude of less than 1 K. The experimental setup to accurately measure the differential and absolute temperature as a function of time is described.

  20. Energy stability in a high average power FEL

    SciTech Connect

    Merminga, L.; Bisognano, J.J.

    1995-12-31

    Recirculating, energy-recovering linacs can be used as driver accelerators for high power FELs. Instabilities which arise from fluctuations of the cavity fields are investigated. Energy changes can cause beam loss on apertures or, when coupled to M56, phase oscillations. Both effects change the beam-induced voltage in the cavities and can lead to unstable variations of the accelerating field. Stability analysis for small perturbations from equilibrium is performed and threshold currents are determined. Design strategies to increase the instability threshold are discussed, and the high average power FEL proposed for construction at CEBAF is used as an example.

  1. High average power second harmonic generation in air

    SciTech Connect

    Beresna, Martynas; Kazansky, Peter G.; Svirko, Yuri; Barkauskas, Martynas; Danielius, Romas

    2009-09-21

    We demonstrate second harmonic vortex generation in atmospheric pressure air using a tightly focused femtosecond laser beam. A circularly polarized ring-shaped beam at the second harmonic is generated in the air by a fundamental beam of the same circular polarization, while a linearly polarized beam produces a two-lobe beam at the second harmonic frequency. The achieved normalized conversion efficiency and average second harmonic power are two orders of magnitude higher than those previously reported and can be increased up to 20 times by an external gas flow. We demonstrate that the frequency doubling originates from the gradient of photoexcited free electrons created by the ponderomotive force.

  2. Aerodynamic Surface Stress Intermittency and Conditionally Averaged Turbulence Statistics

    NASA Astrophysics Data System (ADS)

    Anderson, W.

    2015-12-01

    Aeolian erosion of dry, flat, semi-arid landscapes is induced (and sustained) by kinetic energy fluxes in the aloft atmospheric surface layer. During saltation -- the mechanism responsible for surface fluxes of dust and sediment -- briefly suspended sediment grains undergo a ballistic trajectory before impacting and 'splashing' smaller-diameter (dust) particles vertically. Conceptual models typically indicate that sediment flux, q (via saltation or drift), scales with the imposed aerodynamic (basal) stress raised to some exponent, n, where n > 1. Since basal stress (in fully rough, inertia-dominated flows) scales with the incoming velocity squared, u^2, it follows that q ~ u^(2n), where u is some relevant component of the above flow field, u(x,t). Thus, even small (turbulent) deviations of u from its time-averaged value may play an enormously important role in aeolian activity on flat, dry landscapes. The importance of this argument is further augmented given that turbulence in the atmospheric surface layer exhibits maximum Reynolds stresses in the fluid immediately above the landscape. In order to illustrate the importance of surface stress intermittency, we have used conditional averaging predicated on aerodynamic surface stress during large-eddy simulation of atmospheric boundary layer flow over a flat landscape with momentum roughness length appropriate for the Llano Estacado in west Texas (a flat agricultural region that is notorious for dust transport). By using data from a field campaign to measure diurnal variability of aeolian activity and prevailing winds on the Llano Estacado, we have retrieved the threshold friction velocity (which can be used to compute the threshold surface stress under the geostrophic balance with Monin-Obukhov similarity theory). This averaging procedure provides an ensemble-mean visualization of the flow structures responsible for erosion 'events'. Preliminary evidence indicates that surface stress peaks are associated with the passage of

  3. Status of Average-x from Lattice QCD

    SciTech Connect

    Dru Renner

    2011-09-01

    As algorithms and computing power have advanced, lattice QCD has become a precision technique for many QCD observables. However, the calculation of nucleon matrix elements remains an open challenge. I summarize the status of the lattice effort by examining one observable that has come to represent this challenge, average-x: the fraction of the nucleon's momentum carried by its quark constituents. Recent results confirm a long standing tendency to overshoot the experimentally measured value. Understanding this puzzle is essential to not only the lattice calculation of nucleon properties but also the broader effort to determine hadron structure from QCD.

  4. On the jump behavior of distributions and logarithmic averages

    NASA Astrophysics Data System (ADS)

    Vindas, Jasson; Estrada, Ricardo

    2008-11-01

    The jump behavior and symmetric jump behavior of distributions are studied. We give several formulas for the jump of distributions in terms of logarithmic averages; this is done in terms of Cesàro-logarithmic means of decompositions of the Fourier transform and in terms of logarithmic radial and angular local asymptotic behaviors of harmonic conjugate functions. Applications to Fourier series are analyzed. In particular, we give formulas for the jumps of periodic distributions in terms of Cesàro-Riesz logarithmic means and Abel-Poisson logarithmic means of conjugate Fourier series.

  5. Boundedness of generalized Cesaro averaging operators on certain function spaces

    NASA Astrophysics Data System (ADS)

    Agrawal, M. R.; Howlett, P. G.; Lucas, S. K.; Naik, S.; Ponnusamy, S.

    2005-08-01

    We define a two-parameter family of Cesàro averaging operators acting on functions analytic on the unit disc Δ, where F(a,b;c;z) is the classical hypergeometric function. In the present article the boundedness of these operators on various function spaces such as Hardy, BMOA and α-Bloch spaces is proved. In the special case b = 1+α and c = 1, the operator reduces to the α-Cesàro operator. Thus, our results connect the special functions in a natural way and extend and improve several well-known results of Hardy-Littlewood, Miao, Stempak and Xiao.

  6. Maximum likelihood estimation for periodic autoregressive moving average models

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  7. Studies into the averaging problem: Macroscopic gravity and precision cosmology

    NASA Astrophysics Data System (ADS)

    Wijenayake, Tharake S.

    2016-08-01

    With the tremendous improvement in the precision of available astrophysical data in the recent past, it becomes increasingly important to examine some of the underlying assumptions behind the standard model of cosmology and to take into consideration nonlinear and relativistic corrections which may affect it at the percent precision level. Due to its mathematical rigor and fully covariant and exact nature, Zalaletdinov's macroscopic gravity (MG) is arguably one of the most promising frameworks for exploring nonlinearities due to inhomogeneities in the real Universe. We study the application of MG to precision cosmology, focusing on developing a self-consistent cosmology model built on the averaging framework that adequately describes the large-scale Universe and can be used to study real data sets. We first implement an algorithmic procedure using computer algebra systems to explore new exact solutions to the MG field equations. After validating the process with an existing isotropic solution, we derive a new homogeneous, anisotropic and exact solution. Next, we use the simplest (and currently only) solvable homogeneous and isotropic model of MG and obtain an observable function for cosmological expansion using some reasonable assumptions on light propagation. We find that the principal modification to the angular diameter distance is through the change in the expansion history. We then linearize the MG field equations and derive a framework that contains large-scale structure, but in which the small scale inhomogeneities have been smoothed out and encapsulated into an additional cosmological parameter representing the averaging effect. We derive an expression for the evolution of the density contrast and peculiar velocities and integrate them to study the growth rate of large-scale structure. We find that increasing the magnitude of the averaging term leads to enhanced growth at late times. Thus, for the same matter content, the growth rate of large scale structure in the MG model

  8. A local average distance descriptor for flexible protein structure comparison

    PubMed Central

    2014-01-01

    Background Protein structures are flexible and often show conformational changes upon binding to other molecules to exert biological functions. As protein structures correlate with characteristic functions, structure comparison allows classification and prediction of proteins of undefined functions. However, most comparison methods treat proteins as rigid bodies and cannot retrieve similarities of proteins with large conformational changes effectively. Results In this paper, we propose a novel descriptor, local average distance (LAD), based on either the geodesic distances (GDs) or Euclidean distances (EDs) for pairwise flexible protein structure comparison. The proposed method was compared with 7 structural alignment methods and 7 shape descriptors on two datasets comprising hinge bending motions from the MolMovDB, and the results have shown that our method outperformed all other methods regarding retrieving similar structures in terms of precision-recall curve, retrieval success rate, R-precision, mean average precision and F1-measure. Conclusions Both ED- and GD-based LAD descriptors are effective to search deformed structures and overcome the problems of self-connection caused by a large bending motion. We have also demonstrated that the ED-based LAD is more robust than the GD-based descriptor. The proposed algorithm provides an alternative approach for blasting structure database, discovering previously unknown conformational relationships, and reorganizing protein structure classification. PMID:24694083

  9. The partially averaged field approach to cosmic ray diffusion

    NASA Technical Reports Server (NTRS)

    Jones, F. C.; Birmingham, T. J.; Kaiser, T. B.

    1976-01-01

    The kinetic equation for particles interacting with turbulent fluctuations is derived by a new nonlinear technique which successfully corrects the difficulties associated with quasilinear theory. In this new method the effects of the fluctuations are evaluated along particle orbits which themselves include the effects of a statistically averaged subset of the possible configurations of the turbulence. The new method is illustrated by calculating the pitch angle diffusion coefficient D_μμ for particles interacting with slab model magnetic turbulence, i.e., magnetic fluctuations linearly polarized transverse to a mean magnetic field. Results are compared with those of quasilinear theory and also with those of Monte Carlo calculations. The major effect of the nonlinear treatment in this illustration is the determination of D_μμ in the vicinity of 90 deg pitch angles, where quasilinear theory breaks down. The spatial diffusion coefficient parallel to a mean magnetic field is evaluated using D_μμ as calculated by this technique. It is argued that the partially averaged field method is not limited to small amplitude fluctuating fields and is hence not a perturbation theory.

  10. Numerical Study of Fractional Ensemble Average Transport Equations

    NASA Astrophysics Data System (ADS)

    Kim, S.; Park, Y.; Gyeong, C. B.; Lee, O.

    2014-12-01

    In this presentation, a newly developed theory is applied to the case of stationary and non-stationary stochastic advective flow fields, and a numerical solution method is presented for the resulting fractional Fokker-Planck equation (fFPE), which describes the evolution of the probability density function (PDF) of contaminant concentration. The derived fFPE is evaluated in three different forms: 1) a purely advective form, 2) a second-order moment form, and 3) a second-order cumulant form. A Monte Carlo analysis of the fractional governing equation is then performed in a stochastic flow field, generated by a fractional Brownian motion for the stationary and non-stationary stochastic advection, in order to provide a benchmark for the results obtained from the fFPEs. When compared to the Monte Carlo simulation based PDFs and their ensemble average, the second-order cumulant form gives a good fit in terms of the shape and mode of the PDF of the contaminant concentration. It is therefore quite promising that the non-Fickian transport behavior can be modeled by the derived fractional ensemble average transport equations, either by means of the long memory in the underlying stochastic flow, by means of the time-space non-stationarity of the underlying stochastic flow, or by means of the time and space fractional derivatives of the transport equations. This subject is supported by the Korea Ministry of Environment as "The Eco Innovation Project: Non-point source pollution control research group".

  11. Ranking and averaging independent component analysis by reproducibility (RAICAR).

    PubMed

    Yang, Zhi; LaConte, Stephen; Weng, Xuchu; Hu, Xiaoping

    2008-06-01

    Independent component analysis (ICA) is a data-driven approach that has exhibited great utility for functional magnetic resonance imaging (fMRI). Standard ICA implementations, however, do not provide the number and relative importance of the resulting components. In addition, ICA algorithms utilizing gradient-based optimization give decompositions that are dependent on initialization values, which can lead to dramatically different results. In this work, a new method, RAICAR (Ranking and Averaging Independent Component Analysis by Reproducibility), is introduced to address these issues for spatial ICA applied to fMRI. RAICAR utilizes repeated ICA realizations and relies on the reproducibility between them to rank and select components. Different realizations are aligned based on correlations, leading to aligned components. Each component is ranked and thresholded based on between-realization correlations. Furthermore, different realizations of each aligned component are selectively averaged to generate the final estimate of the given component. Reliability and accuracy of this method are demonstrated with both simulated and experimental fMRI data.
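
    The realization-alignment step can be prototyped as below (Python with scikit-learn's FastICA on synthetic sources; a simplified stand-in for RAICAR's full cross-realization correlation ranking on fMRI data, using a greedy match that suffices for well-separated components):

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        t = np.linspace(0, 8, 2000)
        S = np.c_[np.sin(2 * t), np.sign(np.cos(3 * t))]          # two sources
        X = S @ rng.standard_normal((2, 5))                        # mixed observations

        # Repeated ICA realizations with different initializations.
        runs = [FastICA(n_components=2, random_state=k).fit_transform(X)
                for k in range(10)]

        # Align every realization to the first by absolute correlation, fix sign,
        # then average the aligned components.
        ref = runs[0]
        aligned = [ref]
        for R in runs[1:]:
            C = np.corrcoef(ref.T, R.T)[:2, 2:]                    # 2x2 cross-correlations
            order = np.argmax(np.abs(C), axis=1)                   # greedy column match
            signs = np.sign(C[np.arange(2), order])
            aligned.append(R[:, order] * signs)

        stack = np.stack(aligned)                                  # (runs, time, comps)
        final = stack.mean(axis=0)                                 # averaged components
        repro = [np.mean([abs(np.corrcoef(final[:, c], A[:, c])[0, 1])
                          for A in aligned]) for c in range(2)]
        print(repro)                                               # per-component reproducibility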

  12. Average and individual B hadron lifetimes at CDF

    SciTech Connect

    Schneider, O.; CDF Collaboration

    1993-09-01

    Bottom hadron lifetime measurements have been performed using B → J/ψ(→ μ+μ−)X decays recorded with the Collider Detector at Fermilab (CDF) during the first half of the 1992-1993 Tevatron collider run. These decays have been reconstructed in a silicon vertex detector. Using 5344 ± 73 inclusive J/ψ events, the average lifetime of all bottom hadrons produced in 1.8 TeV p-pbar collisions and decaying into a J/ψ is found to be 1.46 ± 0.06(stat) ± 0.06(sys) ps. The charged and neutral B meson lifetimes have been measured separately using 75 ± 10 (charged) and 61 ± 9 (neutral) fully reconstructed decays; preliminary results are τ± = 1.63 ± 0.21(stat) ± 0.16(sys) ± 0.10(sys) ps, yielding a lifetime ratio of τ±/τ0 = 1.06 ± 0.20(stat) ± 0.12(sys).

  13. Yearly average performance of the principal solar collector types

    SciTech Connect

    Rabl, A.

    1981-01-01

    The results of hour-by-hour simulations for 26 meteorological stations are used to derive universal correlations for the yearly total energy that can be delivered by the principal solar collector types: flat plate, evacuated tubes, CPC, single- and dual-axis tracking collectors, and central receiver. The correlations are first- and second-order polynomials in yearly average insolation, latitude, and threshold (= heat loss/optical efficiency). With these correlations, the yearly collectible energy can be found by multiplying the coordinates of a single graph by the collector parameters, which reproduces the results of hour-by-hour simulations with an accuracy (rms error) of 2% for flat plates and 2% to 4% for concentrators. This method can be applied to collectors that operate year-round in such a way that no collected energy is discarded, including photovoltaic systems, solar-augmented industrial process heat systems, and solar thermal power systems. The method is also recommended for rating collectors of different types or manufacturers by yearly average performance, and for evaluating the effects of collector degradation, the benefits of collector cleaning, and the gains from collector improvements (due to enhanced optical efficiency or decreased heat loss per absorber surface). For most of these applications, the method is accurate enough to replace a system simulation.

  14. Microstructural effects on the average properties in porous battery electrodes

    NASA Astrophysics Data System (ADS)

    García-García, Ramiro; García, R. Edwin

    2016-03-01

    A theoretical framework is formulated to analytically quantify the effects of the microstructure on the average properties of porous electrodes, including the reactive area density and the through-thickness tortuosity as observed in experimentally determined tomographic sections. The proposed formulation includes microstructural non-idealities but also captures the well-known perfectly spherical limit. Results demonstrate that in the absence of any particle alignment, the through-thickness Bruggeman exponent α reaches an asymptotic value of α ∼ 2/3 as the shape of the particles becomes increasingly prolate (needle- or fiber-like). In contrast, the Bruggeman exponent diverges as the shape of the particles becomes increasingly oblate, regardless of the degree of particle alignment. For aligned particles, tortuosity can be dramatically suppressed, e.g., α → 1/10 for ra → 1/10 and MRD ∼ 40. Particle size polydispersity impacts the porosity-tortuosity relation when the average particle size is comparable to the thickness of the electrode layers. Electrode reactive area density can be arbitrarily increased as the particles become increasingly oblate, but asymptotically reaches a minimum value as the particles become increasingly prolate. In the limit of a porous electrode comprised of fiber-like particles, the area density decreases by 24% with respect to a distribution of perfectly spherical particles.

  15. Thermal management in high average power pulsed compression systems

    SciTech Connect

    Wavrik, R.W.; Reed, K.W.; Harjes, H.C.; Weber, G.J.; Butler, M.; Penn, K.J.; Neau, E.L.

    1992-08-01

    High average power repetitively pulsed compression systems offer a potential source of electron beams which may be applied to sterilization of wastes, treatment of food products, and other environmental and consumer applications. At Sandia National Laboratories, the Repetitive High Energy Pulsed Power (RHEPP) program is developing a 7-stage magnetic pulse compressor driving a linear induction voltage adder with an electron beam diode load. The RHEPP machine is being designed to deliver 350 kW of average power to the diode in 60 ns FWHM, 2.5 MV, 3 kJ pulses at a repetition rate of 120 Hz. In addition to the electrical design considerations, the repetition rate requires thermal management of the electrical losses. Steady-state temperatures must be kept below the material degradation temperatures to maximize reliability and component life. The optimum design is a trade-off between thermal management, maximizing overall electrical performance of the system, reliability, and cost effectiveness. Cooling requirements and configurations were developed for each of the subsystems of RHEPP. Finite element models that combine fluid flow and heat transfer were used to screen design concepts. The analysis includes one-, two-, and three-dimensional heat transfer using surface heat transfer coefficients and boundary layer models. Experiments were conducted to verify the models as well as to evaluate cooling channel fabrication materials and techniques in Metglas wound cores. 10 refs.

  16. Cause of the exceptionally high AE average for 2003

    NASA Astrophysics Data System (ADS)

    Prestes, A.

    2012-04-01

    In this work we focus on the year 2003, when the AE index was extremely high (average AE = 341 nT, with peak intensity of more than 2200 nT); this value is almost 100 nT higher than in other years of solar cycle 23. Interplanetary magnetic field (IMF) and plasma data are compared with the geomagnetic AE and Dst indices to determine the causes of the exceptionally high AE average value. Analyzing the solar wind parameters, we found that the annual average speed was extremely high, approximately 542 km/s (peak value ~1074 km/s). These values were due to recurrent high-speed solar streams from large coronal holes, which stretched to the solar equator, and low-latitude coronal holes, which persisted for many solar rotations. AE was found to increase with increasing solar wind speed and decrease with decreasing solar wind speed. The cause of the high AE activity during 2003 is the presence of high-speed corotating streams that contain large-amplitude Alfvén waves throughout the streams, which resulted in a large number of HILDCAA events. When the solar wind plasma and field impinge on Earth's magnetosphere, the southward field turnings associated with the wave fluctuations cause magnetic reconnection and consequently high levels of AE activity and very long recovery phases in Dst, sometimes lasting until the next stream arrives.

  17. Predictive RANS simulations via Bayesian Model-Scenario Averaging

    NASA Astrophysics Data System (ADS)

    Edeling, W. N.; Cinnella, P.; Dwight, R. P.

    2014-10-01

    The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier-Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios, the standard deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.
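
    The collation step at the heart of BMSA is a probability-weighted mixture of per-model, per-scenario posterior predictive distributions. The fragment below sketches that mixture for Gaussian posteriors; the weights, means, and variances are placeholders, not values from the paper.

```python
import numpy as np

def bmsa_mixture(mu, sigma, w_model, w_scenario):
    """Mean and standard deviation of the BMSA predictive mixture.

    mu, sigma: (n_models, n_scenarios) posterior predictive moments of the QoI.
    w_model, w_scenario: probability weights over models and scenarios.
    """
    mu, sigma = np.asarray(mu), np.asarray(sigma)
    w = np.outer(w_model, w_scenario)      # joint weights P(model) * P(scenario)
    mean = np.sum(w * mu)
    # Law of total variance for a mixture distribution
    var = np.sum(w * (sigma**2 + mu**2)) - mean**2
    return mean, np.sqrt(var)

# Illustrative use: 3 closure models, 2 calibration scenarios
mu = np.array([[1.02, 0.97], [1.10, 1.05], [0.95, 0.99]])
sigma = np.array([[0.05, 0.06], [0.08, 0.07], [0.04, 0.05]])
print(bmsa_mixture(mu, sigma, [0.5, 0.3, 0.2], [0.6, 0.4]))
```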

  18. Face averages enhance user recognition for smartphone security.

    PubMed

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when it stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
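
    The 'face-average' representation is conceptually simple: pixel-wise averaging of aligned images of the same person. A toy sketch, assuming the images have already been registered to a common template (which in practice requires landmark alignment):

```python
import numpy as np

def face_average(images):
    """images: list of aligned face images as arrays of identical shape.

    Returns the pixel-wise mean, which attenuates pose and lighting
    idiosyncrasies while preserving stable identity cues.
    """
    stack = np.stack([np.asarray(im, dtype=np.float64) for im in images])
    return stack.mean(axis=0)

# Enrollment sketch: store the average instead of a single snapshot, then
# verify new images against it (e.g., by a distance in feature space).
```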

  19. Calculations of nonspherically averaged charge densities for substitutionally disordered alloys

    SciTech Connect

    Singh, P.P.; Gonis, A.

    1994-02-01

    Based on screening transformations of muffin-tin orbitals introduced by Andersen et al. [Phys. Rev. Lett. 53, 2571 (1984)], we have developed a formalism for calculating the non-spherically averaged charge densities of substitutionally disordered alloys using the Korringa-Kohn-Rostoker coherent potential approximation (KKR CPA) method in the atomic-sphere approximation (ASA). We have validated our method by calculating charge densities for ordered structures, where we find that our approach yields charge densities that are essentially indistinguishable from the results of full-potential methods. For substitutionally disordered alloys, where full-potential methods have not been implemented so far, our approach can be used to calculate reliable non-spherically averaged charge densities from the spherically symmetric one-electron potentials obtained from the KKR-ASA CPA. We report on our study of differences in charge density between ordered AlLi in the L1₀ phase and substitutionally disordered Al₀.₅Li₀.₅ on a face-centered cubic lattice.

  20. Colorectal Cancer Screening in Average Risk Populations: Evidence Summary

    PubMed Central

    Baxter, Nancy N.; Dubé, Catherine; Hey, Amanda

    2016-01-01

    Introduction. The objectives of this systematic review were to evaluate the evidence for different CRC screening tests and to determine the most appropriate ages of initiation and cessation for CRC screening and the most appropriate screening intervals for selected CRC screening tests in people at average risk for CRC. Methods. Electronic databases were searched for studies that addressed the research objectives. Meta-analyses were conducted with clinically homogeneous trials. A working group reviewed the evidence to develop conclusions. Results. Thirty RCTs and 29 observational studies were included. Flexible sigmoidoscopy (FS) prevented CRC and led to the largest reduction in CRC mortality; guaiac fecal occult blood tests (gFOBTs) produced a smaller but still significant reduction in CRC mortality. There was insufficient or low-quality evidence to support the use of other screening tests, including colonoscopy, or to support changing the ages of initiation and cessation for CRC screening with gFOBTs in Ontario. Either annual or biennial screening using gFOBT reduces CRC-related mortality. Conclusion. The evidentiary base supports the use of FS or FOBT (either annual or biennial) to screen patients at average risk for CRC. This work will guide the development of the provincial CRC screening program. PMID:27597935

  1. Predictive RANS simulations via Bayesian Model-Scenario Averaging

    SciTech Connect

    Edeling, W.N.; Cinnella, P.; Dwight, R.P.

    2014-10-15

    The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier–Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios, the standard deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.

  2. Design of a High Average Power Waveguide Window

    NASA Astrophysics Data System (ADS)

    Chojnacki, E.; Hays, T.; Kirchgessner, J.; Padamsee, H.; Cole, M.; Schultheiss, T.

    1997-05-01

    A study has been performed to design a waveguide vacuum window operating at 500 MHz capable of propagating >1 MW average power. This would extend current technology by about a factor of 2 in average power, made possible by advances in available ceramic size and quality. Self-matched and tuning-post-matched configurations were examined, as well as full-height and reduced-height waveguide cross sections. The two ceramics considered were aluminum oxide (alumina) and beryllium oxide (beryllia). Beryllia's greater thermal conductivity and its availability in large sizes with low loss tangent (<3 × 10⁻⁴) made it very attractive despite its tensile strength being lower than alumina's. The analyses to be presented comprise obtaining a satisfactory RF design using the computer code MAFIA, performing a perturbation calculation in MAFIA to obtain the power deposition in the slightly lossy ceramic, feeding the power deposition data into the thermo-mechanical computer code ANSYS, then using ANSYS to determine the ceramic operating temperature and mechanical stress. Another pertinent quantity obtained from MAFIA is the electric field profile throughout the window assembly. Results from numerous window configurations will be tabulated, plotted, and discussed.

  3. Using Bayes Model Averaging for Wind Power Forecasts

    NASA Astrophysics Data System (ADS)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is, however, well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might nevertheless be the case that some sites tend to reflect the total output of the region more accurately, either in general or for certain wind directions. It is then of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2], the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power, it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper, the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single site wind speeds and single site wind power production as input. This solves the problem with longer consecutive periods where the input data

  4. Runoff and leaching of metolachlor from Mississippi River alluvial soil during seasons of average and below-average rainfall.

    PubMed

    Southwick, Lloyd M; Appelboom, Timothy W; Fouss, James L

    2009-02-25

    The movement of the herbicide metolachlor [2-chloro-N-(2-ethyl-6-methylphenyl)-N-(2-methoxy-1-methylethyl)acetamide] via runoff and leaching from 0.21 ha plots planted to corn on Mississippi River alluvial soil (Commerce silt loam) was measured for a 6-year period, 1995-2000. The first three years received normal rainfall (30-year average); the second three years experienced reduced rainfall. The periods comprising the 4 months before and the 4 months after application were characterized by 1039 ± 148 mm of rainfall for 1995-1997 and by 674 ± 108 mm for 1998-2000. During the normal rainfall years, 216 ± 150 mm of runoff occurred during the study seasons (4 months following herbicide application), accompanied by 76.9 ± 38.9 mm of leachate. For the low-rainfall years these amounts were 16.2 ± 18.2 mm of runoff (92% less than the normal years) and 45.1 ± 25.5 mm of leachate (41% less than the normal seasons). Runoff of metolachlor during the normal-rainfall seasons was 4.5-6.1% of application, whereas leaching was 0.10-0.18%. For the below-normal periods, these losses were 0.07-0.37% of application in runoff and 0.22-0.27% in leachate. When averages over the three normal and the three less-than-normal seasons were taken, a 35% reduction in rainfall was characterized by a 97% reduction in runoff loss and a 71% increase in leachate loss of metolachlor on a percent-of-application basis. The data indicate an increase in preferential flow in the leaching movement of metolachlor from the surface soil layer during the reduced rainfall periods. Even with increased preferential flow through the soil during the below-average rainfall seasons, leachate loss (percent of application) of the herbicide remained below 0.3%. Compared to the average rainfall seasons of 1995-1997, the below-normal seasons of 1998-2000 were characterized by a 79% reduction in total runoff and leachate flow and by a 93% reduction in corresponding metolachlor movement via these routes.

  5. Transmitter-receiver system for time average fourier telescopy

    NASA Astrophysics Data System (ADS)

    Pava, Diego Fernando

    Time Average Fourier Telescopy (TAFT) has been proposed as a means for obtaining high-resolution, diffraction-limited images over large distances through ground-level horizontal-path atmospheric turbulence. Image data is collected in the spatial-frequency, or Fourier, domain by means of Fourier telescopy; an inverse two-dimensional Fourier transform yields the actual image. TAFT requires active illumination of the distant object by moving interference fringe patterns. Light reflected from the object is collected by a "light-bucket" detector, and the resulting electrical signal is digitized and subjected to a series of signal processing operations, including an all-critical averaging of the amplitude and phase of a number of narrow-band signals. This dissertation reports on the formulation and analysis of a transmitter-receiver system appropriate for the illumination, signal detection, and signal processing required for successful application of the TAFT concept. The analysis assumes a Kolmogorov model for the atmospheric turbulence, that the object is rough on the scale of the optical wavelength of the illumination pattern, and that the object is not changing with time during the image-formation interval. An important original contribution of this work is the development of design principles for spatio-temporal non-redundant arrays of active sources for object illumination. Spatial non-redundancy has received considerable attention in connection with the arrays of antennas used in radio astronomy. The work reported here explores different alternatives and suggests the use of two-dimensional cyclic difference sets, which favor low frequencies in the spatial-frequency domain. The temporal non-redundancy condition requires that all active sources oscillate at different optical frequencies and that the frequency difference between any two sources be unique. A novel algorithm for generating the array, based on optimized perfect cyclic difference sets, is described.
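
    A perfect cyclic difference set D modulo n is one in which every nonzero residue occurs equally often among the pairwise differences of distinct elements; for a planar set, each residue occurs exactly once. The check below is a small illustration of the construction principle the dissertation builds on (the full spatio-temporal array design is considerably more involved):

```python
from collections import Counter
from itertools import permutations

def is_perfect_difference_set(d, n):
    """True if every nonzero residue mod n appears equally often
    among differences of distinct elements of d."""
    diffs = Counter((a - b) % n for a, b in permutations(d, 2))
    counts = {diffs.get(r, 0) for r in range(1, n)}
    return len(counts) == 1

print(is_perfect_difference_set([1, 2, 4], 7))   # True: planar set, lambda = 1
print(is_perfect_difference_set([1, 2, 3], 7))   # False: differences collide
```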

  6. Ultra-low noise miniaturized neural amplifier with hardware averaging

    NASA Astrophysics Data System (ADS)

    Dweiri, Yazan M.; Eggers, Thomas; McCallum, Grant; Durand, Dominique M.

    2015-08-01

    Objective. Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small (<3 μVrms over 700 Hz-7 kHz), thereby requiring a baseline noise of less than 1 μVrms for a useful signal-to-noise ratio (SNR). Flat interface nerve electrode (FINE) contacts alone generate thermal noise of at least 0.5 μVrms; therefore the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Approach. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. Main results. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating <1.5 μVrms total recording baseline noise when connected to a FINE placed on the sciatic nerve of an awake animal. An algorithm was introduced to find the value of N that can minimize both the power consumption and the noise in order to design a miniaturized ultralow-noise neural amplifier. Significance. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes, and can accommodate the
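
    The quoted 1/√N behavior, and its saturation by source resistance, follows from N parallel amplifiers averaging uncorrelated amplifier noise while the electrode's thermal noise is common to all copies. A simplified model, with placeholder numbers rather than the paper's measured values:

```python
import numpy as np

def total_input_noise(n_amps, v_amp=2.0e-6, r_source=2.0e3, bandwidth=6.3e3,
                      temperature=300.0):
    """RMS input-referred noise (V) for N parallel amplifiers.

    Amplifier noise is uncorrelated across the N copies and averages down
    as 1/sqrt(N); the electrode source's thermal noise does not.
    """
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    v_thermal = np.sqrt(4 * k_b * temperature * r_source * bandwidth)
    return np.sqrt(v_amp**2 / n_amps + v_thermal**2)

# Diminishing returns as the common thermal-noise floor dominates
for n in (1, 2, 4, 8):
    print(n, f"{total_input_noise(n) * 1e6:.2f} uVrms")
```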

  7. Multifractal detrending moving-average cross-correlation analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2011-07-01

    There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross correlations. The multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on the detrended fluctuation analysis (MFXDFA) method. We develop in this work a class of MFDCCA algorithms based on the detrending moving-average analysis, called MFXDMA. The performances of the proposed MFXDMA algorithms are compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, which have theoretical expressions for their multifractal nature. In all cases, the scaling exponents hxy extracted from the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between two time series, and the MFXDFA and centered MFXDMA algorithms have comparative performances, which outperform the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparative performances, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q<0 and underperforms when q>0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. For the returns, the centered MFXDMA algorithm gives the best estimates of hxy(q) since its hxy(2) is closest to 0
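
    A compact version of the centered detrending-moving-average cross-correlation estimator can be written directly from its definition: build the profiles of both series, subtract a centered moving average at scale s, form the detrended covariance, and read the scaling exponent off the log-log slope of F_q(s). This is a simplified, pointwise sketch of the algorithm family studied in the paper (the published definitions average over segments), not the authors' code.

```python
import numpy as np

def mfxdma_centered(x, y, scales, q=2.0):
    """Estimate the scaling exponent h_xy(q) by a centered MF-X-DMA sketch."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    fq = []
    for s in scales:
        kernel = np.ones(s) / s
        # Centered moving average serves as the local trend at scale s
        trend_x = np.convolve(X, kernel, mode="same")
        trend_y = np.convolve(Y, kernel, mode="same")
        half = s // 2
        ex = (X - trend_x)[half:-half]   # drop edge-distorted points
        ey = (Y - trend_y)[half:-half]
        cov = np.abs(ex * ey)            # pointwise detrended covariance
        fq.append(np.mean(cov ** (q / 2)) ** (1.0 / q))
    # h_xy(q) is the slope of log F_q(s) versus log s
    h, _ = np.polyfit(np.log(scales), np.log(fq), 1)
    return h

rng = np.random.default_rng(1)
noise = rng.standard_normal(10000)
print(mfxdma_centered(noise, noise, scales=[17, 33, 65, 129, 257]))  # ~0.5
```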

  8. Data Point Averaging for Computational Fluid Dynamics Data

    NASA Technical Reports Server (NTRS)

    Norman, David, Jr. (Inventor)

    2014-01-01

    A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
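
    The patent's averaging step amounts to a group-by-mean over sub-areas. A minimal illustration with hypothetical point coordinates and a rectangular sub-area grid (the invention covers sub-set determination more generally):

```python
import numpy as np

def subarea_averages(points_xy, values, x_edges, y_edges):
    """Average a CFD surface quantity over rectangular sub-areas.

    points_xy: (n, 2) surface-point coordinates; values: (n,) flow data.
    Returns a (len(x_edges)-1, len(y_edges)-1) array of sub-area means.
    """
    ix = np.digitize(points_xy[:, 0], x_edges) - 1
    iy = np.digitize(points_xy[:, 1], y_edges) - 1
    out = np.full((len(x_edges) - 1, len(y_edges) - 1), np.nan)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            sel = (ix == i) & (iy == j)    # points falling in sub-area (i, j)
            if sel.any():
                out[i, j] = values[sel].mean()
    return out
```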

  9. High average power diode pumped solid state laser

    NASA Astrophysics Data System (ADS)

    Gao, Yue; Wang, Yanjie; Chan, Amy; Dawson, Murray; Greene, Ben

    2017-03-01

    A new generation of high average power pulsed multi-joule solid state laser systems has been developed at EOS Space Systems for various space-related tracking applications. It is a completely diode-pumped, fully automated multi-stage system consisting of a pulsed single longitudinal mode oscillator, three stages of pre-amplifiers, two stages of power amplifiers, a completely sealed phase conjugate mirror or stimulated Brillouin scattering (SBS) cell, and imaging relay optics with spatial filters in vacuum cells. It is capable of generating pulse energies up to 4.7 J, a beam quality M² ≈ 3, pulse widths of 10-20 ns, and pulse repetition rates of 100-200 Hz. The system has been in service for more than two years with excellent performance and reliability.

  10. TIDAL AND TIDALLY AVERAGED CIRCULATION CHARACTERISTICS OF SUISUN BAY, CALIFORNIA.

    USGS Publications Warehouse

    Smith, Lawrence H.; Cheng, Ralph T.

    1987-01-01

    Availability of extensive field data permitted realistic calibration and validation of a hydrodynamic model of tidal circulation and salt transport for Suisun Bay, California. Suisun Bay is a partially mixed embayment of northern San Francisco Bay located just seaward of the Sacramento-San Joaquin Delta. The model employs a variant of an alternating direction implicit finite-difference method to solve the hydrodynamic equations and an Eulerian-Lagrangian method to solve the salt transport equation. An upwind formulation of the advective acceleration terms of the momentum equations was employed to avoid oscillations in the tidally averaged velocity field produced by central spatial differencing of these terms. Simulation results of tidal circulation and salt transport demonstrate that tides and the complex bathymetry determine the patterns of tidal velocities and that net changes in the salinity distribution over a few tidal cycles are small despite large changes during each cycle.

  11. Average interconnection length and interconnection distribution for rectangular arrays

    NASA Astrophysics Data System (ADS)

    Gura, Carol; Abraham, Jacob A.

    1989-05-01

    It is shown that it is necessary to utilize different partitioning coefficients in interconnection length analyses which are based on Rent's rule, depending on whether one- or two-dimensional placement strategies are used. β is the partitioning coefficient in the power-law relationship αB^β, which provides a measure of the number of interconnections that cross a boundary enclosing B blocks. The partitioning coefficients are β = p/2 and β = p for two- and one-dimensional arrays, respectively, where p is the experimental coefficient of the Rent relationship. Based on these separate partitioning coefficients, an average interconnection length prediction is presented for rectangular arrays that outperforms existing predictions. Examples are given to support this theory.

  12. THE FIRST LUNAR MAP OF THE AVERAGE SOIL ATOMIC MASS

    SciTech Connect

    O. GASNAULT; W. FELDMAN; ET AL

    2001-01-01

    Measurements of indexes of lunar surface composition were successfully made during the Lunar Prospector (LP) mission, using the Neutron Spectrometers (NS) [1]. This capability is demonstrated for fast neutrons in Plate 1 of Maurice et al. [2] (similar to Figure 2 here). Inspection shows a clear distinction between mare basalt (bright) and highland terranes [2]. Fast neutron simulations demonstrate the sensitivity of the fast neutron leakage flux to the presence of iron and titanium in the soil [3]. The dependence of the flux on a third element (calcium or aluminum) was also suspected [4]. We expand our previous work in this study by estimating fast neutron leakage fluxes for a more comprehensive set of assumed lunar compositions. We find a strong relationship between the fast neutron fluxes and the average soil atomic mass ⟨A⟩. This relation can be inverted to provide a map of ⟨A⟩ from the measured map of fast neutrons from the Moon.

  13. A vertically averaged spectral model for tidal circulation in estuaries

    USGS Publications Warehouse

    Burau, J.R.; Cheng, R.T.

    1989-01-01

    A frequency dependent computer model based on the two-dimensional vertically averaged shallow-water equations is described for general purpose application in tidally dominated embayments. This model simulates the response of both tides and tidal currents to user-specified geometries and boundary conditions. The mathematical formulation and practical application of the model are discussed in detail. Salient features of the model include the ability to specify: (1) stage at the open boundaries as well as within the model grid, (2) velocities on open boundaries (river inflows and so forth), (3) spatially variable wind stress, and (4) spatially variable bottom friction. Using harmonically analyzed field data as boundary conditions, this model can be used to make real time predictions of tides and tidal currents. (USGS)
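
    Driving such a model with harmonically analyzed data reduces, at the boundary, to synthesizing the stage signal from its tidal constituents. A toy synthesis follows; the constituent amplitudes and phases are invented for illustration, while the M2 and K1 periods are standard values.

```python
import numpy as np

def tidal_stage(t_hours, constituents):
    """Sum of tidal constituents: eta(t) = sum_i A_i cos(omega_i t - phi_i).

    constituents: list of (amplitude_m, period_hours, phase_rad) tuples.
    """
    t = np.asarray(t_hours, dtype=float)
    return sum(a * np.cos(2 * np.pi * t / period - phi)
               for a, period, phi in constituents)

# Hypothetical two-constituent boundary forcing over two days
eta = tidal_stage(np.linspace(0, 48, 200),
                  [(0.60, 12.42, 0.3),    # M2, principal lunar semidiurnal
                   (0.25, 23.93, 1.1)])   # K1, lunisolar diurnal
```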

  14. Model selection versus model averaging in dose finding studies.

    PubMed

    Schorning, Kirsten; Bornkamp, Björn; Bretz, Frank; Dette, Holger

    2016-09-30

    A key objective of Phase II dose finding studies in clinical drug development is to adequately characterize the dose response relationship of a new drug. An important decision is then on the choice of a suitable dose response function to support dose selection for the subsequent Phase III studies. In this paper, we compare different approaches for model selection and model averaging using mathematical properties as well as simulations. We review and illustrate asymptotic properties of model selection criteria and investigate their behavior when changing the sample size but keeping the effect size constant. In a simulation study, we investigate how the various approaches perform in realistically chosen settings. Finally, the different methods are illustrated with a recently conducted Phase II dose finding study in patients with chronic obstructive pulmonary disease. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Data Point Averaging for Computational Fluid Dynamics Data

    NASA Technical Reports Server (NTRS)

    Norman, Jr., David (Inventor)

    2016-01-01

    A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.

  16. Moving average rules as a source of market instability

    NASA Astrophysics Data System (ADS)

    Chiarella, Carl; He, Xue-Zhong; Hommes, Cars

    2006-10-01

    Despite the pervasiveness of the efficient markets paradigm in the academic finance literature, the use of various moving average (MA) trading rules remains popular with financial market practitioners. This paper proposes a stochastic dynamic financial market model in which demand for traded assets has both a fundamentalist and a chartist component. The chartist demand is governed by the difference between current price and a (long-run) MA. Our simulations show that the MA is a source of market instability, and the interaction of the MA and market noises can lead to the tendency for the market price to take long excursions away from the fundamental. The model reveals various market price phenomena, the coexistence of apparent market efficiency and a large chartist component, price resistance levels, long memory and skewness and kurtosis of returns.
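
    The mechanism can be made concrete with a toy discrete-time version of such a model: fundamentalists push the price toward a fundamental value, chartists buy when the price sits above its long-run MA, and noise perturbs demand. The parameter values below are arbitrary, chosen only to exhibit MA-driven excursions; this is not the paper's calibrated model.

```python
import numpy as np

def simulate_ma_market(n=2000, ma_window=50, fundamental=100.0,
                       a_fund=0.08, b_chart=0.10, noise=0.5, seed=7):
    """Price dynamics with fundamentalist and MA-chartist demand components."""
    rng = np.random.default_rng(seed)
    p = np.full(n, fundamental)
    for t in range(1, n):
        ma = p[max(0, t - ma_window):t].mean()        # long-run moving average
        demand = (a_fund * (fundamental - p[t - 1])   # mean reversion
                  + b_chart * (p[t - 1] - ma)         # trend chasing
                  + noise * rng.standard_normal())
        p[t] = p[t - 1] + demand
    return p

prices = simulate_ma_market()   # inspect for long excursions from 100
```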

  17. Average annual precipitation and runoff for Arkansas, 1951-1980

    USGS Publications Warehouse

    Freiwald, David A.

    1984-01-01

    Ten intercomparison studies to determine the accuracy of pH and specific-conductance measurements, using dilute-nitric acid solutions, were managed by the U.S. Geological Survey for the National Atmospheric Deposition Program and the National Trends Network precipitation networks. These precipitation networks set quality-control goals for site-operator measurements of pH and specific conductance. The accuracy goal for pH is plus or minus 0.1 pH unit; the accuracy goal for specific conductance is plus or minus 4 microsiemens per centimeter at 25 degrees Celsius. These intercomparison studies indicated that an average of 65 percent of the site-operator pH measurements and 79 percent of the site-operator specific-conductance measurements met the quality-control goal. A statistical approach that is resistant to outliers was used to evaluate and illustrate the results obtained from these intercomparisons. (USGS)

  18. A Multichannel Averaging Phasemeter for Picometer Precision Laser Metrology

    NASA Technical Reports Server (NTRS)

    Halverson, Peter G.; Johnson, Donald R.; Kuhnert, Andreas; Shaklan, Stuart B.; Sero, Robert

    1999-01-01

    The Micro-Arcsecond Metrology (MAM) team at the Jet Propulsion Laboratory has developed a precision phasemeter for the Space Interferometry Mission (SIM). The current version of the phasemeter is well-suited for picometer-accuracy distance measurements and tracks at speeds up to 50 cm/sec when coupled to SIM's 1.3 micron wavelength heterodyne laser metrology gauges. Since the phasemeter is implemented with industry standard FPGA chips, other accuracy/speed trade-off points can be programmed for applications such as metrology for earth-based long-baseline astronomical interferometry (planet finding), and industrial applications such as translation stage and machine tool positioning. The phasemeter is a standard VME module, supports 6 metrology gauges and a 128 MHz clock, has programmable hardware averaging, and a maximum range of 2^32 cycles (2000 meters at 1.3 microns).

  19. A coefficient average approximation towards Gutzwiller wavefunction formalism.

    PubMed

    Liu, Jun; Yao, Yongxin; Wang, Cai-Zhuang; Ho, Kai-Ming

    2015-06-24

    The Gutzwiller wavefunction is a physically well-motivated trial wavefunction for describing correlated electron systems. In this work, a new approximation is introduced to facilitate the evaluation of the expectation value of any operator within the Gutzwiller wavefunction formalism. The basic idea is to make use of a specially designed average over the Gutzwiller wavefunction coefficients, expanded in the many-body Fock space, to approximate the ratio of expectation values between a Gutzwiller wavefunction and its underlying noninteracting wavefunction. Benchmarking against the standard Gutzwiller approximation (GA), we test its performance on single-band systems and find that on finite systems it gives superior performance over GA, while on infinite systems it asymptotically approaches GA. Analytic analysis together with numerical tests is provided to support this claimed asymptotic behavior. Finally, possible improvements on the approximation and its generalization towards multiband systems are illustrated and discussed.

  20. Average System Cost Methodology : Administrator's Record of Decision.

    SciTech Connect

    United States. Bonneville Power Administration.

    1984-06-01

    Significant features of the average system cost (ASC) methodology adopted are: retention of the jurisdictional approach, where retail rate orders of regulatory agencies provide the primary data for computing the ASC for utilities participating in the residential exchange; inclusion of transmission costs; exclusion of construction work in progress; use of a utility's weighted cost of debt securities; exclusion of income taxes; simplification of procedures for separating subsidized generation and transmission accounts from other accounts; clarification of ASC methodology rules; a more generous review timetable for individual filings; phase-in of the reformed methodology; and a requirement that each exchanging utility file under the new methodology within 20 days of implementation by the Federal Energy Regulatory Commission. Of the ten major participating utilities, the revised ASC will substantially affect only three. (PSB)

  1. Averaging schemes for solving fixed point and variational inequality problems

    SciTech Connect

    Magnanti, T.L.; Perakis, G.

    1994-12-31

    In this talk we develop and study averaging schemes for solving fixed point and variational inequality problems. Typically, researchers have established convergence results for methods that solve these problems by establishing contractive estimates for the underlying algorithmic maps. Here we establish global convergence results using nonexpansive estimates. After first establishing convergence for a general iterative scheme for computing fixed points, we consider applications to projection and relaxation algorithms for solving variational inequality problems and to a generalized steepest descent method for solving systems of equations. As part of our development, we also establish a new interpretation of a norm condition typically used for establishing convergence of linearization schemes, by associating it with a strong-f-monotonicity condition. We conclude by applying these results to congested transportation networks.
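
    The flavor of such averaging schemes is captured by the classical Krasnosel'skii-Mann iteration, which converges for nonexpansive maps where plain fixed-point iteration can fail. The rotation example below is a standard textbook illustration, not one of the talk's applications.

```python
import numpy as np

def mann_iteration(T, x0, alpha=0.5, tol=1e-10, max_iter=10000):
    """Averaged fixed-point iteration x_{k+1} = (1 - alpha) x_k + alpha T(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (1 - alpha) * x + alpha * T(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Nonexpansive but non-contractive map: rotation by 90 degrees about the origin.
# Plain iteration x <- T(x) cycles forever; the averaged scheme converges to 0.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
print(mann_iteration(lambda v: R @ v, [1.0, 1.0]))
```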

  2. The average rate of change for continuous time models.

    PubMed

    Kelley, Ken

    2009-05-01

    The average rate of change (ARC) is a concept that has been misunderstood in the applied longitudinal data analysis literature, where the slope from the straight-line change model is often thought of as though it were the ARC. The present article clarifies the concept of ARC and shows unequivocally the mathematical definition and meaning of ARC when measurement is continuous across time. It is shown that the slope from the straight-line change model generally is not equal to the ARC. General equations are presented for two measures of discrepancy when the slope from the straight-line change model is used to estimate the ARC in the case of continuous time for any model linear in its parameters, and for three useful models nonlinear in their parameters.
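
    The distinction is easy to see numerically: for f continuous on [a, b], the ARC is (f(b) - f(a))/(b - a), the slope of the chord, whereas the straight-line-change-model slope minimizes squared error over the whole interval. For nonlinear change the two generally differ, as in this sketch with exponential growth:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)
f = np.exp(2.0 * t)                    # nonlinear growth curve

arc = (f[-1] - f[0]) / (t[-1] - t[0])  # average rate of change: chord slope
slope = np.polyfit(t, f, 1)[0]         # straight-line change model slope (OLS)

print(f"ARC = {arc:.3f}, OLS slope = {slope:.3f}")  # ~6.389 versus ~6.0
```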

  3. Combining remotely sensed and other measurements for hydrologic areal averages

    NASA Technical Reports Server (NTRS)

    Johnson, E. R.; Peck, E. L.; Keefer, T. N.

    1982-01-01

    A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.
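
    The essence of the method, as described, is a weighted mean in which each measurement's weight reflects the basin area it most accurately represents. A schematic version follows; the actual correlation area method derives weights from the spatial-correlation structure and measurement accuracies, whereas here they are simply proportional to assigned areas.

```python
import numpy as np

def areal_average(values, represented_areas):
    """Weighted mean areal value and a naive spread measure.

    values: measurements (point gauges, flight lines, satellite areal values).
    represented_areas: basin area (km^2) each measurement best represents.
    """
    v = np.asarray(values, dtype=float)
    w = np.asarray(represented_areas, dtype=float)
    w /= w.sum()                               # normalize weights
    mean = np.sum(w * v)
    spread = np.sqrt(np.sum(w * (v - mean)**2))
    return mean, spread

# e.g. two snow courses, one airborne gamma line, one satellite estimate
print(areal_average([120.0, 95.0, 110.0, 104.0], [40.0, 55.0, 210.0, 900.0]))
```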

  4. Comparison of conditional averaging and super-resolution method

    NASA Astrophysics Data System (ADS)

    Block, Dietmar; Teliban, Iulian; Piel, Alexander

    2006-10-01

    Conditional averaging and cross-correlation analysis allow in-depth study of plasma turbulence with just two probe tips. Two-dimensional probe arrays are now employed to provide spatio-temporal resolution of plasma turbulence. Increasing the spatial resolution of probe arrays to that of two-probe techniques is difficult to achieve; typically, probe arrays have at least a factor of four less resolution in space. Recently, we introduced a super-resolution method to numerically enhance the spatial resolution of probe arrays by transferring information from the time to the space domain [1]. This allows us to compare two-point techniques with spatio-temporal measurements directly. Here, we will use experimental data to discuss the prospects and limitations of two-probe methods [2] in detail. [1] I. Teliban, D. Block, A. Piel, and V. Naulin, PPCF 48 (2006). [2] D. Block, I. Teliban, F. Greiner, and A. Piel, Phys. Scripta T122 (2006).

  5. An averaged polarizable potential for multiscale modeling in phospholipid membranes.

    PubMed

    Witzke, Sarah; List, Nanna Holmgaard; Olsen, Jógvan Magnus Haugaard; Steinmann, Casper; Petersen, Michael; Beerepoot, Maarten T P; Kongsted, Jacob

    2017-04-05

    A set of average atom-centered charges and polarizabilities has been developed for three types of phospholipids for use in polarizable embedding calculations. The lipids investigated are 1,2-dimyristoyl-sn-glycero-3-phosphocholine, 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine, and 1-palmitoyl-2-oleoyl-sn-glycero-3-phospho-L-serine, given their common use in both experimental and computational studies. The charges, and to a lesser extent the polarizabilities, are found to depend strongly on the molecular conformation of the lipids. Furthermore, the importance of explicit polarization is underlined for the description of larger assemblies of lipids, that is, membranes. In conclusion, we find that specially developed polarizable parameters are needed for embedding calculations in membranes, while common non-polarizable point-charge force fields usually perform well enough for structural and dynamical studies. © 2017 Wiley Periodicals, Inc.

  6. Local versus average field failure criterion in amorphous polymers

    NASA Astrophysics Data System (ADS)

    Xie, Yuesong; Mao, Yunzhe; Sun, Lin; Koslowski, Marisol

    2015-03-01

    There is extensive work developing laws that predict yielding in amorphous polymers, ranging from the pioneering experimental work of Sternstein et al (1968 Appl. Polym. Symp. 7 175-99) to the novel molecular dynamics simulations of Jaramillo et al (2012 Phys. Rev. B 85 024114). While atomistic models render damage criteria in terms of local values of the stress and strain fields, experiments provide yield conditions in terms of the average values of these fields. Unfortunately, it is not possible to compare these results due to the differences in time and length scales. Here, we use a micromechanical phase-field damage model with parameters calculated from atomistic simulations to connect atomistic and macroscopic scale experiments. The phase-field damage model is used to study failure in composite materials. We find that the yield criterion should be described in terms of local stress and strain fields and cannot be extended directly from applied stress field values to determine yield conditions.

  7. The Average Field Approximation for Almost Bosonic Extended Anyons

    NASA Astrophysics Data System (ADS)

    Lundholm, Douglas; Rougerie, Nicolas

    2015-12-01

    Anyons are 2D or 1D quantum particles with intermediate statistics, interpolating between bosons and fermions. We study the ground state of a large number N of 2D anyons, in a scaling limit where the statistics parameter α is proportional to N^{-1} when N → ∞. This means that the statistics is seen as a "perturbation from the bosonic end". We model this situation in the magnetic gauge picture by bosons interacting through long-range magnetic potentials. We assume that these effective statistical gauge potentials are generated by magnetic charges carried by each particle, smeared over discs of radius R (extended anyons). Our method allows taking R → 0, not too fast, at the same time as N → ∞. In this limit we rigorously justify the so-called "average field approximation": the particles behave like independent, identically distributed bosons interacting via a self-consistent magnetic field.

  8. Unbiased Average Age-Appropriate Atlases for Pediatric Studies

    PubMed Central

    Fonov, Vladimir; Evans, Alan C.; Botteron, Kelly; Almli, C. Robert; McKinstry, Robert C.; Collins, D. Louis

    2010-01-01

    Spatial normalization, registration, and segmentation techniques for Magnetic Resonance Imaging (MRI) often use a target or template volume to facilitate processing, take advantage of prior information, and define a common coordinate system for analysis. In the neuroimaging literature, the MNI305 Talairach-like coordinate system is often used as a standard template. However, when studying pediatric populations, variation from the adult brain makes the MNI305 suboptimal for processing brain images of children. Morphological changes occurring during development render the use of age-appropriate templates desirable to reduce potential errors and minimize bias during processing of pediatric data. This paper presents the methods used to create unbiased, age-appropriate MRI atlas templates for pediatric studies that represent the average anatomy for the age range of 4.5–18.5 years, while maintaining a high level of anatomical detail and contrast. The creation of anatomical T1-weighted, T2-weighted, and proton density-weighted templates for specific developmentally important age ranges used data derived from the largest epidemiological, representative (healthy and normal) sample of the U.S. population, where each subject was carefully screened for medical and psychiatric factors and characterized using established neuropsychological and behavioral assessments. Use of these age-specific templates was evaluated by computing average tissue maps for gray matter, white matter, and cerebrospinal fluid for each specific age range, and by conducting an exemplar voxel-wise deformation-based morphometry study using 66 young (4.5–6.9 years) participants to demonstrate the benefits of using the age-appropriate templates. The public availability of these atlases/templates will facilitate analysis of pediatric MRI data and enable comparison of results between studies in a common standardized space specific to pediatric research. PMID:20656036

  9. The average crossing number of equilateral random polygons

    NASA Astrophysics Data System (ADS)

    Diao, Y.; Dobay, A.; Kusner, R. B.; Millett, K.; Stasiak, A.

    2003-11-01

    In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ⟨ACN⟩ of all equilateral random walks of length n is of the form (3/16) n ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the ⟨ACN(K)⟩ for each knot type K can be described by a function of the form ⟨ACN(K)⟩ = a(n − n₀) ln(n − n₀) + b(n − n₀) + c, where a, b, and c are constants depending on K and n₀ is the minimal number of segments required to form K. The ⟨ACN(K)⟩ profiles diverge from each other, with more complex knots showing higher ⟨ACN(K)⟩ than less complex knots. Moreover, the ⟨ACN(K)⟩ profiles intersect with the ⟨ACN⟩ profile of all closed walks. These points of intersection define the equilibrium length of K, i.e., the chain length n_e(K) at which a statistical ensemble of configurations with given knot type K--upon cutting, equilibration, and reclosure to a new knot type K′--does not show a tendency to increase or decrease ⟨ACN(K′)⟩. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration ⟨Rg⟩.

  10. Molecular dynamics averaging of Xe chemical shifts in liquids

    NASA Astrophysics Data System (ADS)

    Jameson, Cynthia J.; Sears, Devin N.; Murad, Sohail

    2004-11-01

    The Xe nuclear magnetic resonance chemical shift differences that afford the discrimination between various biological environments are of current interest for biosensor applications and medical diagnostic purposes. In many such environments the Xe signal appears close to that in water. We calculate average Xe chemical shifts (relative to the free Xe atom) in solution in eleven liquids: water, isobutane, perfluoro-isobutane, n-butane, n-pentane, neopentane, perfluoroneopentane, n-hexane, n-octane, n-perfluorooctane, and perfluorooctyl bromide. The latter is a liquid used for intravenous Xe delivery. We calculate quantum mechanically the Xe shielding response in Xe-molecule van der Waals complexes, from which calculations we develop Xe (atomic site) interpolating functions that reproduce the ab initio Xe shielding response in the complex. By assuming additivity, these Xe-site shielding functions can be used to calculate the shielding for any configuration of such molecules around Xe. The averaging over configurations is done via molecular dynamics (MD). The simulations were carried out using a MD technique that one of us had developed previously for the simulation of Henry's constants of gases dissolved in liquids. It is based on separating a gaseous compartment in the MD system from the solvent using a semipermeable membrane that is permeable only to the gas molecules. We reproduce the experimental trends in the Xe chemical shifts in n-alkanes with increasing number of carbons and the large chemical shift difference between Xe in water and in perfluorooctyl bromide. We also reproduce the trend for a given solvent of decreasing Xe chemical shift with increasing temperature. We predict chemical shift differences between Xe in alkanes vs their perfluoro counterparts.

  11. Phase-based direct average strain estimation for elastography.

    PubMed

    Ara, Sharmin R; Mohsin, Faisal; Alam, Farzana; Rupa, Sharmin Akhtar; Awwal, Rayhana; Lee, Soo Yeol; Hasan, Md Kamrul

    2013-11-01

    In this paper, a phase-based direct average strain estimation method is developed. A mathematical model is presented to calculate axial strain directly from the phase of the zero-lag cross-correlation function between the windowed pre-compression and stretched post-compression analytic signals. Unlike phase-based conventional strain estimators, for which strain is computed from the displacement field, strain in this paper is computed in one step using the secant algorithm by exploiting the direct phase-strain relationship. To maintain strain continuity, instead of using the instantaneous phase of the interrogative window alone, an average phase function is defined using the phases of the neighboring windows, with the assumption that the strain is essentially similar in close physical proximity to the interrogative window. This method accounts for the effect of lateral shift but without requiring a prior estimate of the applied strain. Moreover, the strain can be computed in both the compression and relaxation phases of the applied pressure. The performance of the proposed strain estimator is analyzed in terms of the quality metrics elastographic signal-to-noise ratio (SNRe), elastographic contrast-to-noise ratio (CNRe), and mean structural similarity (MSSIM), using a finite element modeling simulation phantom. The results reveal that the proposed method performs satisfactorily in terms of all three indices for up to 2.5% applied strain. Comparative results using simulation and experimental phantom data, and in vivo breast data of benign and malignant masses, also demonstrate that the strain image quality of our method is better than that of other reported techniques.
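
    The central relationship (strain read directly from the phase of the zero-lag cross-correlation between pre-compression and stretched post-compression analytic signals) can be prototyped as a root-finding problem: find the stretch factor that zeroes that phase. The sketch below is a bare-bones, single-window illustration with synthetic RF data; the windowing, neighborhood phase averaging, and lateral-shift handling from the paper are omitted, and all signal parameters are invented.

```python
import numpy as np
from scipy.signal import hilbert

def zero_lag_phase(pre, post, strain, t):
    """Phase of the zero-lag cross-correlation after stretching post by strain."""
    post_stretched = np.interp(t / (1.0 + strain), t, post)  # undo compression
    c = np.sum(hilbert(pre) * np.conj(hilbert(post_stretched)))
    return np.angle(c)

def estimate_strain(pre, post, t, s0=0.0, s1=0.01, iters=30):
    """Secant iteration on the direct phase-strain relationship."""
    f0, f1 = zero_lag_phase(pre, post, s0, t), zero_lag_phase(pre, post, s1, t)
    for _ in range(iters):
        if abs(f1 - f0) < 1e-15:
            break
        s0, s1, f0 = s1, s1 - f1 * (s1 - s0) / (f1 - f0), f1
        f1 = zero_lag_phase(pre, post, s1, t)
    return s1

# Synthetic test: 5 MHz pulse echoes from random scatterers, 1.5% applied strain
rng = np.random.default_rng(3)
t = np.linspace(0, 8e-6, 4000)                      # 500 MHz sampling
pulse = np.sin(2 * np.pi * 5e6 * t[:200]) * np.hanning(200)
rf = np.convolve(rng.standard_normal(t.size), pulse, mode="same")
true_strain = 0.015
post = np.interp(t * (1.0 + true_strain), t, rf)    # axially compressed echo
print(estimate_strain(rf, post, t))                 # approx 0.015
```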

  12. Quantifying the increase in average human heterozygosity due to urbanisation.

    PubMed

    Rudan, Igor; Carothers, Andrew D; Polasek, Ozren; Hayward, Caroline; Vitart, Veronique; Biloglav, Zrinka; Kolcic, Ivana; Zgaga, Lina; Ivankovic, Davor; Vorko-Jovic, Ariana; Wilson, James F; Weber, James L; Hastie, Nick; Wright, Alan; Campbell, Harry

    2008-09-01

    The human population is undergoing a major transition from a historical metapopulation structure of relatively isolated small communities to an outbred structure. This process is predicted to increase average individual genome-wide heterozygosity (h) and could have effects on health. We attempted to quantify this increase in mean h. We initially sampled 1001 examinees from a metapopulation of nine isolated villages on five Dalmatian islands (Croatia). Village populations had high levels of genetic differentiation, endogamy and consanguinity. We then selected 166 individuals with highly specific personal genetic histories to form six subsamples, which could be ranked a priori by their predicted level of outbreeding. The measure h was then estimated in the 166 examinees by genotyping 1184 STR/indel markers and using two different computation methods. Compared to the value of mean h in the least outbred sample, values of h in the remaining samples increased successively with predicted outbreeding by 0.023, 0.038, 0.058, 0.067 and 0.079 (P<0.0001), where these values are measured on the same scale as the inbreeding coefficient (but opposite sign). We have shown that urbanisation was associated with an average increase in h of up to 0.08-0.10 in this Croatian metapopulation, regardless of the method used. Similar levels of differentiation have been described in many populations. Therefore, changes in the level of heterozygosity across the genome of this magnitude may be common during isolate break-up in humans and could have significant health effects through the established genetic mechanism of hybrid vigour/heterosis.

  13. Evaluation of soft x-ray average recombination coefficient and average charge for metallic impurities in beam-heated plasmas

    SciTech Connect

    Sesnic, S.S.; Bitter, M.; Hill, K.W.; Hiroe, S.; Hulse, R.; Shimada, M.; Stratton, B.; von Goeler, S.

    1986-05-01

    The soft x-ray continuum radiation in TFTR low density neutral beam discharges can be much lower than its theoretical value obtained by assuming a corona equilibrium. This reduced continuum radiation is caused by an ionization equilibrium shift toward lower states, which strongly changes the value of the average recombination coefficient of metallic impurities, γ̄, even for only slight changes in the average charge, Z̄. The primary agent for this shift is the charge exchange between the highly ionized impurity ions and the neutral hydrogen, rather than impurity transport, because the central density of the neutral hydrogen is strongly enhanced at lower plasma densities with intense beam injection. In the extreme case of low density, high neutral beam power TFTR operation (energetic ion mode) the reduction in γ̄ can be as much as one-half to two-thirds. We calculate the parametric dependence of γ̄ and Z̄ for Ti, Cr, Fe, and Ni impurities on neutral density (equivalent to beam power), electron temperature, and electron density. These values are obtained by using either a one-dimensional impurity transport code (MIST) or a zero-dimensional code with a finite particle confinement time. As an example, we show the variation of γ̄ and Z̄ in different TFTR discharges.

  14. The importance of ensemble averaging in enzyme kinetics.

    PubMed

    Masgrau, Laura; Truhlar, Donald G

    2015-02-17

    CONSPECTUS: The active site of an enzyme is surrounded by a fluctuating environment of protein and solvent conformational states, and a realistic calculation of chemical reaction rates and kinetic isotope effects of enzyme-catalyzed reactions must take account of this environmental diversity. Ensemble-averaged variational transition state theory with multidimensional tunneling (EA-VTST/MT) was developed as a way to carry out such calculations. This theory incorporates ensemble averaging, quantized vibrational energies, tunneling, and recrossing of transition state dividing surfaces in a systematic way. It has been applied successfully to a number of hydrogen-, proton-, and hydride-transfer reactions. The theory also exposes the set of effects that should be considered in reliable rate-constant calculations. We first review the basic theory and the steps in the calculation. A key role is played by the generalized free energy of activation profile, which is obtained by quantizing the classical potential of mean force as a function of a reaction coordinate, because the one-way flux through the transition state dividing surface can be written in terms of the generalized free energy of activation. A recrossing transmission coefficient accounts for the difference between the one-way flux through the chosen transition state dividing surface and the net flux, and a tunneling transmission coefficient converts classical motion along the reaction coordinate to quantum mechanical motion. The tunneling calculation is multidimensional, accounting for the change in vibrational frequencies along the tunneling path and the shortening of the tunneling path with respect to the minimum energy path (MEP), as promoted by reaction-path curvature. The generalized free energy of activation and the transmission coefficients both involve averaging over an ensemble of reaction paths and conformations, and this includes the coupling of protein motions to the rearrangement of chemical bonds

  15. Hemoglobin A1c and Self-Monitored Average Glucose

    PubMed Central

    Kovatchev, Boris P.; Breton, Marc D.

    2015-01-01

    Background: Previously we have introduced the eA1c—a new approach to real-time tracking of average glycemia and estimation of HbA1c from infrequent self-monitoring (SMBG) data, which was developed and tested in type 2 diabetes. We now test eA1c in type 1 diabetes and assess its relationship to the hemoglobin glycation index (HGI)—an established predictor of complications and treatment effect. Methods: Reanalysis of previously published 12-month data from 120 patients with type 1 diabetes, age 39.15 (14.35) years, 51/69 males/females, baseline HbA1c = 7.99% (1.48), duration of diabetes 20.28 (12.92) years, number SMBG/day = 4.69 (1.84). Surrogate fasting BG and 7-point daily profiles were derived from these unstructured SMBG data and the previously reported eA1c method was applied without any changes. Following the literature, we calculated HGI = HbA1c – (0.009 × Fasting BG + 6.8). Results: The correlation of eA1c with reference HbA1c was r = .75, and its deviation from reference was MARD = 7.98%; 95% of all eA1c values fell within ±20% from reference. The HGI was well approximated by a linear combination of the eA1c calibration factors: HGI = 0.007552*θ1 + 0.007645*θ2 – 3.154 (P < .0001); 73% of low versus moderate-high HGIs were correctly classified by the same factors as well. Conclusions: The eA1c procedure developed in type 2 diabetes to track in real-time changes in average glycemia and present the results in HbA1c-equivalent units has shown similar performance in type 1 diabetes. The eA1c calibration factors are highly predictive of the HGI, thereby explaining partially the biological variation causing discrepancies between HbA1c and its linear estimates from SMBG data. PMID:26553023
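
    The HGI definition quoted in the abstract is a one-line calculation; a minimal sketch in Python, assuming HbA1c in percent and fasting glucose in mg/dL (the units implied by the 0.009 slope):

```python
def hgi(hba1c_percent, fasting_bg_mgdl):
    """Hemoglobin glycation index per the abstract's formula:
    observed HbA1c minus the HbA1c predicted from fasting glucose."""
    predicted_hba1c = 0.009 * fasting_bg_mgdl + 6.8
    return hba1c_percent - predicted_hba1c

# Example: hgi(7.99, 150) == 7.99 - (0.009 * 150 + 6.8) == -0.16
```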

  16. Increase in average testis size of Canadian beef bulls.

    PubMed

    García Guerra, Alvaro; Hendrick, Steve; Barth, Albert D

    2013-05-01

    Selection for adequate testis size in beef bulls is an important part of bull breeding soundness evaluation. Scrotal circumference (SC) is highly correlated with paired testis weight and is a practical method for estimating testis weight in the live animal. Most bulls presented for sale in Canada have SC included in the presale information. Scrotal circumference varies by age and breed, and may change over time due to selection for larger testis size. Therefore, it is important to periodically review the mean SC of various cattle breeds to provide valid bull selection criteria. Scrotal circumference data were obtained from bulls sold in western Canada from 2008 to 2011 and in Quebec from 2006 to 2010. Average scrotal circumferences for the most common beef breeds in Canada have increased significantly in the last 25 years. Differences between breeds have remained unchanged and Simmental bulls still have the largest SC at 1 year of age. Data provided here could aid in the establishment of new suggested minimum SC measurements for beef bulls.

  17. Inferring path average Cn2 values in the marine environment.

    PubMed

    Vetelino, Frida Strömqvist; Grayshan, Katelyn; Young, Cynthia Y

    2007-10-01

    Current mathematical scintillation theory describing laser propagation through the atmosphere has been developed for terrestrial environments. Scintillation expressions valid in all regimes of optical turbulence for propagation in the maritime environment, based on what we believe to be a newly developed marine atmospheric spectrum, have been developed for spherical waves. Path average values of the structure parameter, C_n^2, were inferred from optical scintillation measurements of a diverged laser beam propagating in a marine environment, using scintillation expressions based on both terrestrial and marine refractive index spectra. In the moderate-to-strong fluctuation regime, the inferred marine C_n^2 values were about 20% smaller than the inferred terrestrial C_n^2 values, but a minimal difference was observed in the weak fluctuation regime. Measurements of angle-of-arrival fluctuations were used to infer C_n^2 values in the moderate-to-strong fluctuation regime, resulting in values of the structure parameter that were at least an order of magnitude larger than the two scintillation-inferred C_n^2 values.

  18. Dosimetry in Mammography: Average Glandular Dose Based on Homogeneous Phantom

    NASA Astrophysics Data System (ADS)

    Benevides, Luis A.; Hintenlang, David E.

    2011-05-01

    The objective of this study was to demonstrate that a clinical dosimetry protocol that utilizes a dosimetric breast phantom series based on population anthropometric measurements can reliably predict the average glandular dose (AGD) imparted to the patient during a routine screening mammogram. AGD was calculated using entrance skin exposure and dose conversion factors based on fibroglandular content, compressed breast thickness, mammography unit parameters and modifying parameters for homogeneous phantom (phantom factor), compressed breast lateral dimensions (volume factor) and anatomical features (anatomical factor). The patient fibroglandular content was evaluated using a calibrated modified breast tissue equivalent homogeneous phantom series (BRTES-MOD) designed from anthropomorphic measurements of a screening mammography population and whose elemental composition was referenced to International Commission on Radiation Units and Measurements Report 44 and 46 tissues. The patient fibroglandular content, compressed breast thickness along with unit parameters and spectrum half-value layer were used to derive the currently used dose conversion factor (DgN). The study showed that the use of a homogeneous phantom, patient compressed breast lateral dimensions and patient anatomical features can affect AGD by as much as 12%, 3% and 1%, respectively. The protocol was found to be superior to existing methodologies. The clinical dosimetry protocol developed in this study can reliably predict the AGD imparted to an individual patient during a routine screening mammogram.
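
    Reading the protocol as a chain of multiplicative corrections to the standard entrance-exposure-times-DgN calculation gives a compact sketch; the multiplicative form and the argument names are our reading of the abstract, not the authors' published formula.

```python
def average_glandular_dose(entrance_skin_exposure, dgn,
                           phantom_factor, volume_factor, anatomical_factor):
    """AGD as entrance skin exposure times the DgN conversion factor,
    modified by the protocol's phantom, volume, and anatomical factors
    (assumed multiplicative; illustrative only)."""
    return (entrance_skin_exposure * dgn
            * phantom_factor * volume_factor * anatomical_factor)
```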

  19. Dosimetry in Mammography: Average Glandular Dose Based on Homogeneous Phantom

    SciTech Connect

    Benevides, Luis A.; Hintenlang, David E.

    2011-05-05

    The objective of this study was to demonstrate that a clinical dosimetry protocol that utilizes a dosimetric breast phantom series based on population anthropometric measurements can reliably predict the average glandular dose (AGD) imparted to the patient during a routine screening mammogram. AGD was calculated using entrance skin exposure and dose conversion factors based on fibroglandular content, compressed breast thickness, mammography unit parameters and modifying parameters for homogeneous phantom (phantom factor), compressed breast lateral dimensions (volume factor) and anatomical features (anatomical factor). The patient fibroglandular content was evaluated using a calibrated modified breast tissue equivalent homogeneous phantom series (BRTES-MOD) designed from anthropomorphic measurements of a screening mammography population and whose elemental composition was referenced to International Commission on Radiation Units and Measurements Report 44 and 46 tissues. The patient fibroglandular content, compressed breast thickness along with unit parameters and spectrum half-value layer were used to derive the currently used dose conversion factor (DgN). The study showed that the use of a homogeneous phantom, patient compressed breast lateral dimensions and patient anatomical features can affect AGD by as much as 12%, 3% and 1%, respectively. The protocol was found to be superior to existing methodologies. The clinical dosimetry protocol developed in this study can reliably predict the AGD imparted to an individual patient during a routine screening mammogram.

  20. Spatially-Averaged Diffusivities for Pollutant Transport in Vegetated Flows

    NASA Astrophysics Data System (ADS)

    Huang, Jun; Zhang, Xiaofeng; Chua, Vivien P.

    2016-06-01

    Vegetation in wetlands can create complicated flow patterns and may provide many environmental benefits including water purification, flood protection and shoreline stabilization. The interaction between vegetation and flow has significant impacts on the transport of pollutants, nutrients and sediments. In this paper, we investigate pollutant transport in vegetated flows using the Delft3D-FLOW hydrodynamic software. The model simulates the transport of pollutants with the continuous release of a passive tracer at mid-depth and mid-width in the region where the flow is fully developed. The theoretical Gaussian plume profile is fitted to experimental data, and the lateral and vertical diffusivities are computed using the least squares method. In previous tracer studies conducted in the laboratory, the measurements were obtained at a single cross-section as experimental data is typically collected at one location. These diffusivities are then used to represent spatially-averaged values. With the numerical model, sensitivity analysis of lateral and vertical diffusivities along the longitudinal direction was performed at 8 cross-sections. Our results show that the lateral and vertical diffusivities increase with longitudinal distance from the injection point, due to the larger size of the dye cloud further downstream. A new method is proposed to compute diffusivities using a global minimum least squares method, which provides a more reliable estimate than the values obtained using the conventional method.
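
    The global least-squares idea can be sketched compactly. The code below is an illustrative reading only: it assumes the standard continuous point-source Gaussian plume solution and pools residuals from all cross-sections into a single fit. The function names, the log-parameterization, and the section data layout are our assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def plume(y, z, x, u, m_dot, dy, dz):
    """Continuous point-source Gaussian plume in uniform flow u,
    evaluated at downstream distance x (standard analytical solution)."""
    return (m_dot / (4.0 * np.pi * x * np.sqrt(dy * dz))
            * np.exp(-u * y**2 / (4.0 * dy * x) - u * z**2 / (4.0 * dz * x)))

def fit_global_diffusivities(sections, u, m_dot):
    """One (Dy, Dz) pair minimizing squared residuals pooled over all
    cross-sections (the 'global minimum least squares' idea).
    sections: list of (x, y_array, z_array, measured_concentration)."""
    def residuals(p):
        dy, dz = np.exp(p)                    # log-params keep D > 0
        return np.concatenate(
            [c - plume(y, z, x, u, m_dot, dy, dz) for x, y, z, c in sections])
    fit = least_squares(residuals, x0=np.log([1e-3, 1e-4]))
    return np.exp(fit.x)
```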

  1. Resolution improvement by 3D particle averaging in localization microscopy

    NASA Astrophysics Data System (ADS)

    Broeken, Jordi; Johnson, Hannah; Lidke, Diane S.; Liu, Sheng; Nieuwenhuizen, Robert P. J.; Stallinga, Sjoerd; Lidke, Keith A.; Rieger, Bernd

    2015-03-01

    Inspired by recent developments in localization microscopy that applied averaging of identical particles in 2D for increasing the resolution even further, we discuss considerations for alignment (registration) methods for particles in general and for 3D in particular. We detail that traditional techniques for particle registration from cryo electron microscopy based on cross-correlation are not suitable, as the underlying image formation process is fundamentally different. We argue that only localizations, i.e. a set of coordinates with associated uncertainties, are recorded and not a continuous intensity distribution. We present a method that owes to this fact and that is inspired by the field of statistical pattern recognition. In particular we suggest to use an adapted version of the Bhattacharyya distance as a merit function for registration. We evaluate the method in simulations and demonstrate it on 3D super-resolution data of Alexa 647 labelled to the Nup133 protein in the nuclear pore complex of Hela cells. From the simulations we find suggestions that for successful registration the localization uncertainty must be smaller than the distance between labeling sites on a particle. These suggestions are supported by theoretical considerations concerning the attainable resolution in localization microscopy and its scaling behavior as a function of labeling density and localization precision.
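
    For intuition, a Bhattacharyya-type merit function for two localization sets can be approximated on a discretization grid, modeling each set as a mixture of isotropic Gaussians whose widths are the per-localization uncertainties. This is the textbook Bhattacharyya coefficient, not the paper's adapted version; the names and the grid-based normalization are our assumptions.

```python
import numpy as np

def mixture_density(points, sigmas, grid):
    """Localization set as a sum of isotropic Gaussians centered on the
    coordinates, with widths equal to the localization uncertainties."""
    dim = grid.shape[1]
    dens = np.zeros(len(grid))
    for mu, s in zip(points, sigmas):
        r2 = np.sum((grid - mu) ** 2, axis=1)
        dens += np.exp(-r2 / (2.0 * s**2)) / (2.0 * np.pi * s**2) ** (dim / 2.0)
    return dens / len(points)

def bhattacharyya_distance(pts_a, sig_a, pts_b, sig_b, grid):
    """Grid-approximated D_B = -ln BC; minimizing it over trial shifts and
    rotations of set B registers it onto set A."""
    p = mixture_density(pts_a, sig_a, grid)
    q = mixture_density(pts_b, sig_b, grid)
    bc = np.sum(np.sqrt(p * q)) / np.sqrt(np.sum(p) * np.sum(q))
    return -np.log(bc + 1e-300)
```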

  2. Measurement of the average lifetime of hadrons containing bottom quarks

    SciTech Connect

    Klem, D.E.

    1986-06-01

    This thesis reports a measurement of the average lifetime of hadrons containing bottom quarks. It is based on data taken with the DELCO detector at the PEP e+e- storage ring at a center of mass energy of 29 GeV. The decays of hadrons containing bottom quarks are tagged in hadronic events by the presence of electrons with a large component of momentum transverse to the event axis. Such electrons are identified in the DELCO detector by an atmospheric pressure Cherenkov counter assisted by a lead/scintillator electromagnetic shower counter. The lifetime measured is 1.17 psec, consistent with previous measurements. This measurement, in conjunction with a limit on the non-charm branching ratio in b-decay obtained by other experiments, can be used to constrain the magnitude of the V_cb element of the Kobayashi-Maskawa matrix to the range 0.042 (+0.005/-0.004 (stat.), +0.004/-0.002 (sys.)), where the errors reflect the uncertainty on τ_b only and not the uncertainties in the calculations which relate the b-lifetime and the element of the Kobayashi-Maskawa matrix.

  3. THEORY OF SINGLE-MOLECULE SPECTROSCOPY: Beyond the Ensemble Average

    NASA Astrophysics Data System (ADS)

    Barkai, Eli; Jung, Younjoon; Silbey, Robert

    2004-01-01

    Single-molecule spectroscopy (SMS) is a powerful experimental technique used to investigate a wide range of physical, chemical, and biophysical phenomena. The merit of SMS is that it does not require ensemble averaging, which is found in standard spectroscopic techniques. Thus SMS yields insight into complex fluctuation phenomena that cannot be observed using standard ensemble techniques. We investigate theoretical aspects of SMS, emphasizing (a) dynamical fluctuations (e.g., spectral diffusion, photon-counting statistics, antibunching, quantum jumps, triplet blinking, and nonergodic blinking) and (b) single-molecule fluctuations in disordered systems, specifically distribution of line shapes of single molecules in low-temperature glasses. Special emphasis is given to single-molecule systems that reveal surprising connections to Lévy statistics (i.e., blinking of quantum dots and single molecules in glasses). We compare theory with experiment and mention open problems. Our work demonstrates that the theory of SMS is a complementary field of research for describing optical spectroscopy in the condensed phase.

  4. Search for an Average Potential describing Transfer Reactions

    NASA Astrophysics Data System (ADS)

    Suehiro, Teruo; Nakagawa, Takemi

    2001-10-01

    A variety of attempts, such as coupled channels, non-locality corrections of optical potentials, projectile breakup, etc., were made to resolve discrepancies between distorted-wave Born approximation (DWBA) calculations and experimental differential cross section data for transfer reactions initiated by light ions. The present work assumes that these discrepancies basically reflect the detailed structure of the average interaction exerted on the nucleons involved in the transfer. Computations were carried out searching for a potential that successfully describes both the transfer reactions and the ordering and energies of neutron shells in the relevant nuclei. The (p,d) reactions on ^54,56Fe and ^58Ni at 40 and 50 MeV were taken as examples, for which experimental data exist with good statistics over a wide angular range. The potential was simulated by a sum of the volume and the derivative Woods-Saxon potentials with seven free parameters. Finite-range DWBA calculations were done with the code DWUCK5 (We are much indebted to Prof. P. D. Kunz for providing us with a PC version of the code DWUCK5, without which this work was impossible.). One set of such interaction potential was obtained, which is markedly different from the volume Woods-Saxon potential customarily used in previous calculations. Implications of this potential will be discussed with regard to matter distributions of nuclei.

  5. Average annual precipitation classes to characterize watersheds in North Carolina

    USGS Publications Warehouse

    Terziotti, Silvia; Eimers, Jo Leslie

    2001-01-01

    This web site contains the Federal Geographic Data Committee-compliant metadata (documentation) for digital data produced for the North Carolina, Department of Environment and Natural Resources, Public Water Supply Section, Source Water Assessment Program. The metadata are for 11 individual Geographic Information System data sets. An overlay and indexing method was used with the data to derive a rating for unsaturated zone and watershed characteristics for use by the State of North Carolina in assessing more than 11,000 public water-supply wells and approximately 245 public surface-water intakes for susceptibility to contamination. For ground-water supplies, the digital data sets used in the assessment included unsaturated zone rating, vertical series hydraulic conductance, land-surface slope, and land cover. For assessment of public surface-water intakes, the data sets included watershed characteristics rating, average annual precipitation, land-surface slope, land cover, and ground-water contribution. Documentation for the land-use data set applies to both the unsaturated zone and watershed characteristics ratings. Documentation for the estimated depth-to-water map used in the calculation of the vertical series hydraulic conductance also is included.

  6. Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2006-12-01

    Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis of the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
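
    The within/between variance decomposition described here has a compact generic form. A minimal sketch under standard BMA assumptions (the names and inputs are ours; the paper's NLSE posterior weighting is not reproduced):

```python
import numpy as np

def bma_moments(means, variances, weights):
    """Posterior mean and total variance under Bayesian model averaging.
    means[k], variances[k]: estimate and within-model variance from
    parameterization k; weights[k]: its posterior model probability."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    mu = np.sum(w * means)                        # BMA conditional mean
    within = np.sum(w * variances)                # within-parameterization
    between = np.sum(w * (means - mu) ** 2)       # between-parameterization
    return mu, within + between
```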

  7. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    SciTech Connect

    Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
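
    For reference, the EM training loop for Gaussian BMA kernels is short. The sketch below follows the general recipe of Raftery et al. (2005) with a single common spread parameter; the array shapes and names are our assumptions, not the paper's code.

```python
import numpy as np
from scipy.stats import norm

def bma_em(F, y, n_iter=200, tol=1e-8):
    """EM estimation of BMA weights and a common Gaussian spread.
    F[t, k]: forecast of ensemble member k at time t; y[t]: observation."""
    T, K = F.shape
    w = np.full(K, 1.0 / K)
    sigma = np.std(y - F.mean(axis=1))
    ll_old = -np.inf
    for _ in range(n_iter):
        dens = norm.pdf(y[:, None], loc=F, scale=sigma) * w   # (T, K)
        z = dens / dens.sum(axis=1, keepdims=True)            # E-step
        w = z.mean(axis=0)                                    # M-step: weights
        sigma = np.sqrt(np.sum(z * (y[:, None] - F) ** 2) / T)  # M-step: spread
        ll = np.sum(np.log(dens.sum(axis=1)))                 # log-likelihood
        if ll - ll_old < tol:
            break
        ll_old = ll
    return w, sigma
```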

  8. Understanding Stokes forces in the wave-averaged equations

    NASA Astrophysics Data System (ADS)

    Suzuki, Nobuhiro; Fox-Kemper, Baylor

    2016-05-01

    The wave-averaged, or Craik-Leibovich, equations describe the dynamics of upper ocean flow interacting with nonbreaking, not steep, surface gravity waves. This paper formulates the wave effects in these equations in terms of three contributions to momentum: Stokes advection, Stokes Coriolis force, and Stokes shear force. Each contribution scales with a distinctive parameter. Moreover, these contributions affect the turbulence energetics differently from each other such that the classification of instabilities is possible accordingly. Stokes advection transfers energy between turbulence and Eulerian mean-flow kinetic energy, and its form also parallels the advection of tracers such as salinity, buoyancy, and potential vorticity. Stokes shear force transfers energy between turbulence and surface waves. The Stokes Coriolis force can also transfer energy between turbulence and waves, but this occurs only if the Stokes drift fluctuates. Furthermore, this formulation elucidates the unique nature of Stokes shear force and also allows direct comparison of Stokes shear force with buoyancy. As a result, the classic Langmuir instabilities of Craik and Leibovich, wave-balanced fronts and filaments, Stokes perturbations of symmetric and geostrophic instabilities, the wavy Ekman layer, and the wavy hydrostatic balance are framed in terms of intuitive physical balances.

  9. Potential of high-average-power solid state lasers

    SciTech Connect

    Emmett, J.L.; Krupke, W.F.; Sooy, W.R.

    1984-09-25

    We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels.

  10. The average size and temperature profile of quasar accretion disks

    SciTech Connect

    Jiménez-Vicente, J.; Mediavilla, E.; Muñoz, J. A.; Motta, V.; Falco, E.

    2014-03-01

    We use multi-wavelength microlensing measurements of a sample of 10 image pairs from 8 lensed quasars to study the structure of their accretion disks. By using spectroscopy or narrowband photometry, we have been able to remove contamination from the weakly microlensed broad emission lines, extinction, and any uncertainties in the large-scale macro magnification of the lens model. We determine a maximum likelihood estimate for the exponent of the size versus wavelength scaling (r_s ∝ λ^p, corresponding to a disk temperature profile of T ∝ r^(-1/p)) of p = 0.75 ± 0.2 and a Bayesian estimate of p = 0.8 ± 0.2, which are significantly smaller than the prediction of thin disk theory (p = 4/3). We have also obtained a maximum likelihood estimate for the average quasar accretion disk size of r_s = 4.5 (+1.5/-1.2) lt-day at a rest frame wavelength of λ = 1026 Å for microlenses with a mean mass of M = 1 M_☉, in agreement with previous results, and larger than expected from thin disk theory.

  11. The dynamics of multimodal integration: The averaging diffusion model.

    PubMed

    Turner, Brandon M; Gao, Juan; Koenig, Scott; Palfy, Dylan; McClelland, James L

    2017-03-08

    We combine extant theories of evidence accumulation and multi-modal integration to develop an integrated framework for modeling multimodal integration as a process that unfolds in real time. Many studies have formulated sensory processing as a dynamic process where noisy samples of evidence are accumulated until a decision is made. However, these studies are often limited to a single sensory modality. Studies of multimodal stimulus integration have focused on how best to combine different sources of information to elicit a judgment. These studies are often limited to a single time point, typically after the integration process has occurred. We address these limitations by combining the two approaches. Experimentally, we present data that allow us to study the time course of evidence accumulation within each of the visual and auditory domains as well as in a bimodal condition. Theoretically, we develop a new Averaging Diffusion Model in which the decision variable is the mean rather than the sum of evidence samples, and use it as a basis for comparing three alternative models of multimodal integration, allowing us to assess the optimality of this integration. The outcome reveals rich individual differences in multimodal integration: while some subjects' data are consistent with adaptive optimal integration, reweighting sources of evidence as their relative reliability changes during evidence integration, others exhibit patterns inconsistent with optimality.
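
    The core modeling difference is easy to see numerically: the standard diffusion decision variable is a running sum of noisy evidence samples, whereas the Averaging Diffusion Model tracks the running mean, whose variability shrinks as more samples arrive. A minimal sketch (the parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def evidence_paths(mu=0.5, sd=1.0, n_samples=200):
    """Running sum (standard DDM-style accumulator) versus running mean
    (Averaging Diffusion Model) over the same noisy evidence samples."""
    samples = rng.normal(mu, sd, n_samples)
    running_sum = np.cumsum(samples)                          # grows without bound
    running_mean = running_sum / np.arange(1, n_samples + 1)  # settles near mu
    return running_sum, running_mean
```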

  12. The folding of an "average" beta trefoil protein.

    NASA Astrophysics Data System (ADS)

    Gosavi, Shachi; Jennings, Pat; Onuchic, Jose

    2007-03-01

    The beta-trefoil fold is characterized by twelve beta strands folded into three similar beta-beta-beta-loop-beta (trefoil) units. The overall fold has pseudo-threefold symmetry and consists of a six-stranded barrel, capped by a triangular hairpin triplet. The loops connecting the beta-strands vary in length and structure. It is these loops that give the fold its varied binding capability, and the binding sites lie in different parts of the fold. The beta-trefoil proteins have little sequence similarity (sometimes less than 17%) and bind a range of molecules, including other proteins, DNA, membranes and carbohydrates. Protein folding experiments have been performed on four of the beta trefoils, namely, interleukin-1 (IL1B), acidic and basic fibroblast growth factors (FGF-1 and FGF-2) and hisactophilin (HIS). These experiments indicate that the proteins fold by different routes. Folding simulations of the proteins identify the possible folding routes and also show that the shapes of the barriers are different for the different proteins. In this work, we design a model protein which contains only the core fold elements of the beta-trefoil fold. We compare the folding of this "average" protein to the folding of HIS, FGF and IL1B and make some connections with function.

  13. Domain-averaged Fermi-hole analysis for solids.

    PubMed

    Baranov, Alexey I; Ponec, Robert; Kohout, Miroslav

    2012-12-07

    The domain-averaged Fermi hole (DAFH) orbitals provide a highly visual representation of bonding in terms of orbital-like functions with attributed occupation numbers. The approach has been successfully applied to many molecular systems, including those with non-trivial bonding patterns. This article reports for the first time the extension of the DAFH analysis to the realm of extended periodic systems. A simple analytical model of the DAFH orbital for single-band solids is introduced, which allows one to rationalize typical features that DAFH orbitals for extended systems may possess. In particular, a connection between Wannier and DAFH orbitals has been analyzed. The analysis of DAFH orbitals on the basis of DFT calculations is applied to hydrogen lattices of different dimensions as well as to the solids diamond, graphite, Na, Cu and NaCl. In the case of hydrogen lattices, remarkable similarity is found between the DAFH orbitals evaluated with both the analytical approach and DFT. In the case of the selected ionic and covalent solids, the DAFH orbitals deliver bonding descriptions which are compatible with the classical orbital interpretation. For metals, the DAFH analysis shows the essentially multicenter nature of bonding.

  14. Domain-averaged Fermi-hole analysis for solids

    NASA Astrophysics Data System (ADS)

    Baranov, Alexey I.; Ponec, Robert; Kohout, Miroslav

    2012-12-01

    The domain-averaged Fermi hole (DAFH) orbitals provide a highly visual representation of bonding in terms of orbital-like functions with attributed occupation numbers. The approach has been successfully applied to many molecular systems, including those with non-trivial bonding patterns. This article reports for the first time the extension of the DAFH analysis to the realm of extended periodic systems. A simple analytical model of the DAFH orbital for single-band solids is introduced, which allows one to rationalize typical features that DAFH orbitals for extended systems may possess. In particular, a connection between Wannier and DAFH orbitals has been analyzed. The analysis of DAFH orbitals on the basis of DFT calculations is applied to hydrogen lattices of different dimensions as well as to the solids diamond, graphite, Na, Cu and NaCl. In the case of hydrogen lattices, remarkable similarity is found between the DAFH orbitals evaluated with both the analytical approach and DFT. In the case of the selected ionic and covalent solids, the DAFH orbitals deliver bonding descriptions which are compatible with the classical orbital interpretation. For metals, the DAFH analysis shows the essentially multicenter nature of bonding.

  15. Urban noise functional stratification for estimating average annual sound level.

    PubMed

    Rey Gozalo, Guillermo; Barrigón Morillas, Juan Miguel; Prieto Gajardo, Carlos

    2015-06-01

    Road traffic noise causes many health problems and the deterioration of the quality of urban life; thus, adequate methods for the spatial and temporal assessment of noise are required. Different methods have been proposed for the spatial evaluation of noise in cities, including the categorization method. Until now, this method has only been applied to the study of spatial variability with measurements taken over a week. In this work, continuous measurements over 1 year carried out at 21 different locations in Madrid (Spain), which has more than three million inhabitants, were analyzed. The annual average sound levels and the temporal variability were studied in the proposed categories. The results show that the three proposed categories highlight the spatial noise stratification of the studied city in each period of the day (day, evening, and night) and in the overall indicators (L_Adn, L_Aden, and L_A24). Also, significant differences between the diurnal and nocturnal sound levels show functional stratification in these categories. Therefore, this functional stratification offers advantages from both spatial and temporal perspectives by reducing the sampling points and the measurement time.

  16. Effects of Polynomial Trends on Detrending Moving Average Analysis

    NASA Astrophysics Data System (ADS)

    Shao, Ying-Hui; Gu, Gao-Feng; Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2015-07-01

    The detrending moving average (DMA) algorithm is one of the best performing methods to quantify the long-term correlations in nonstationary time series. As many long-term correlated time series in real systems contain various trends, we investigate the effects of polynomial trends on the scaling behaviors and the performances of three widely used DMA methods: the backward algorithm (BDMA), the centered algorithm (CDMA) and the forward algorithm (FDMA). We derive a general framework for polynomial trends and obtain analytical results for constant shifts and linear trends. We find that the behavior of the CDMA method is not influenced by constant shifts. In contrast, linear trends cause a crossover in the CDMA fluctuation functions. We also find that constant shifts and linear trends cause crossovers in the fluctuation functions obtained from the BDMA and FDMA methods. When a crossover exists, the scaling behavior at small scales comes from the intrinsic time series, while that at large scales is dominated by the constant shifts or linear trends. We also derive analytically the expressions for the crossover scales and show that the crossover scale depends on the strength of the polynomial trends, the Hurst index, and in some cases (linear trends for BDMA and FDMA) the length of the time series. In all cases, the BDMA and FDMA methods behave almost the same under the influence of constant shifts or linear trends. Extensive numerical experiments are in excellent agreement with the analytical derivations. We conclude that the CDMA method outperforms the BDMA and FDMA methods in the presence of polynomial trends.
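
    To make the procedure concrete, here is a minimal sketch of the centered variant: integrate the series into a profile, detrend it with a centered moving average of window n, and measure the root-mean-square residual F(n); the Hurst exponent is the slope of log F(n) versus log n. Edge handling and window conventions vary across implementations; this version simply discards the half-window edges.

```python
import numpy as np

def cdma_fluctuation(x, window_sizes):
    """Centered detrending moving average (CDMA) fluctuation function."""
    y = np.cumsum(x - np.mean(x))                 # profile of the series
    F = []
    for n in window_sizes:
        kernel = np.ones(n) / n
        trend = np.convolve(y, kernel, mode='same')   # centered moving average
        half = n // 2
        resid = (y - trend)[half:len(y) - half]       # drop edge effects
        F.append(np.sqrt(np.mean(resid ** 2)))
    return np.asarray(F)

# Hurst exponent from the scaling F(n) ~ n^H:
# H = np.polyfit(np.log(ns), np.log(cdma_fluctuation(x, ns)), 1)[0]
```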

  17. A simple depth-averaged model for dry granular flow

    NASA Astrophysics Data System (ADS)

    Hung, Chi-Yao; Stark, Colin P.; Capart, Herve

    Granular flow over an erodible bed is an important phenomenon in both industrial and geophysical settings. Here we develop a depth-averaged theory for dry erosive flows using balance equations for mass, momentum and (crucially) kinetic energy. We assume a linearized GDR-Midi rheology for granular deformation and Coulomb friction along the sidewalls. The theory predicts the kinematic behavior of channelized flows under a variety of conditions, which we test in two sets of experiments: (1) a linear chute, where abrupt changes in tilt drive unsteady uniform flows; (2) a rotating drum, to explore steady non-uniform flow. The theoretical predictions match the experimental results well in all cases, without the need to tune parameters or invoke an ad hoc equation for entrainment at the base of the flow. Here we focus on the drum problem. A dimensionless rotation rate (related to Froude number) characterizes flow geometry and accounts not just for spin rate, drum radius and gravity, but also for grain size, wall friction and channel width. By incorporating Coriolis force the theory can treat behavior under centrifuge-induced enhanced gravity. We identify asymptotic flow regimes at low and high dimensionless rotation rates that exhibit distinct power-law scaling behaviors.

  18. Coherent and stochastic averaging in solid-state NMR

    NASA Astrophysics Data System (ADS)

    Nevzorov, Alexander A.

    2014-12-01

    A new approach for calculating solid-state NMR lineshapes of uniaxially rotating membrane proteins under magic-angle spinning conditions is presented. The use of the stochastic Liouville equation (SLE) allows one to account for both coherent sample rotation and stochastic motional averaging of the spherical dipolar powder patterns by uniaxial diffusion of the spin-bearing molecules. The method is illustrated via simulations of the dipolar powder patterns of rigid samples under MAS conditions, as well as the recent method of rotational alignment in the presence of both MAS and rotational diffusion under the conditions of dipolar recoupling. It has been found that it is computationally more advantageous to employ direct integration over a spherical grid rather than to use a full angular basis set for the SLE solution. Accuracy estimates for the bond angles measured from the recoupled amide ^1H-^15N dipolar powder patterns have been obtained at various rotational diffusion coefficients. It has been shown that the rotational alignment method is applicable to membrane proteins approximated as cylinders with radii of approximately 20 Å, for which uniaxial rotational diffusion within the bilayer is sufficiently fast and exceeds the rate 2 × 10^5 s^-1.

  19. Identification and estimation of survivor average causal effects

    PubMed Central

    Tchetgen, Eric J Tchetgen

    2014-01-01

    In longitudinal studies, outcomes ascertained at follow-up are typically undefined for individuals who die prior to the follow-up visit. In such settings, outcomes are said to be truncated by death, and inference about the effects of a point treatment or exposure, restricted to individuals alive at the follow-up visit, could be biased even if, as in experimental studies, treatment assignment were randomized. To account for truncation by death, the survivor average causal effect (SACE) defines the effect of treatment on the outcome for the subset of individuals who would have survived regardless of exposure status. In this paper, the author nonparametrically identifies SACE by leveraging post-exposure longitudinal correlates of survival and outcome that may also mediate the exposure effects on survival and outcome. Nonparametric identification is achieved by supposing that the longitudinal data arise from a certain nonparametric structural equations model and by making the monotonicity assumption that the effect of exposure on survival agrees in its direction across individuals. A novel weighted analysis involving a consistent estimate of the survival process is shown to produce consistent estimates of SACE. A data illustration is given, and the methods are extended to the context of time-varying exposures. We discuss a sensitivity analysis framework that relaxes assumptions about independent errors in the nonparametric structural equations model and may be used to assess the extent to which inference may be altered by a violation of key identifying assumptions. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24889022

  20. A General Framework for Multiphysics Modeling Based on Numerical Averaging

    NASA Astrophysics Data System (ADS)

    Lunati, I.; Tomin, P.

    2014-12-01

    In recent years, multiphysics (hybrid) modeling has attracted increasing attention as a tool to bridge the gap between pore-scale processes and a continuum description at the meter scale (laboratory scale). This approach is particularly appealing for complex nonlinear processes, such as multiphase flow, reactive transport, density-driven instabilities, and geomechanical coupling. We present a general framework that can be applied to all these classes of problems. The method is based on ideas from the Multiscale Finite-Volume method (MsFV), which was originally developed for Darcy-scale applications. Recently, we have reformulated MsFV starting with a local-global splitting, which allows us to retain the original degree of coupling for the local problems and to use spatiotemporal adaptive strategies. The new framework is based on the simple idea that different characteristic temporal scales are inherited from different spatial scales, and the global and the local problems are solved with different temporal resolutions. The global (coarse-scale) problem is constructed based on a numerical volume-averaging paradigm, and a continuum (Darcy-scale) description is obtained by introducing additional simplifications (e.g., by assuming that pressure is the only independent variable at the coarse scale, we recover an extended Darcy's law). We demonstrate that it is possible to adaptively and dynamically couple the Darcy-scale and the pore-scale descriptions of multiphase flow in a single conceptual and computational framework. Pore-scale problems are solved only in the active front region where fluid distribution changes with time. In the rest of the domain, only a coarse description is employed. This framework can be applied to other important problems such as reactive transport and crack propagation. As it is based on a numerical upscaling paradigm, our method can be used to explore the limits of validity of macroscopic models and to illuminate the meaning of

  1. Global average net radiation sensitivity to cloud amount variations

    SciTech Connect

    Karner, O.

    1993-12-01

    Time series analysis using an autoregressive model is carried out to study monthly oscillations in the earth radiation budget (ERB) at the top of the atmosphere (TOA) and cloud amount estimates on a global basis. Two independent cloud amount datasets, produced elsewhere by different authors, and the ERB record based on the Nimbus-7 wide field-of-view 8-year (1978-86) observations are used. Autoregressive models are used to eliminate the effects of the earth's orbit eccentricity on the radiation budget and cloud amount series. Nonzero cross correlation between the residual series provides a way of estimating the contribution of the cloudiness variations to the variance in the net radiation. As a result, a new parameter to estimate the net radiation sensitivity at the TOA to changes in cloud amount is introduced. This parameter has a more general character than other estimates because it contains time-lag terms of different lengths responsible for different cloud-radiation feedback mechanisms in the earth climate system. Time lags of 0, 1, 12, and 13 months are involved. Inclusion of only the zero-lag term shows that the albedo effect of clouds dominates, as is known from other research. Inclusion of all four terms leads to an average quasi-annual insensitivity. Approximately 96% of the ERB variance at the TOA can be explained by the eccentricity factor and 1% by cloudiness variations, provided that the data used are without error. Although the latter assumption is not fully correct, the results presented allow one to estimate the contribution of current cloudiness changes to the net radiation variability. Two independent cloud amount datasets have very similar temporal variability and also approximately equal impact on the net radiation at the TOA.

  2. The average common substring approach to phylogenomic reconstruction.

    PubMed

    Ulitsky, Igor; Burstein, David; Tuller, Tamir; Chor, Benny

    2006-03-01

    We describe a novel method for efficient reconstruction of phylogenetic trees, based on sequences of whole genomes or proteomes, whose lengths may vary greatly. The core of our method is a new measure of pairwise distances between sequences. This measure is based on computing the average lengths of maximum common substrings, which is intrinsically related to information theoretic tools (Kullback-Leibler relative entropy). We present an algorithm for efficiently computing these distances. In principle, the distance between two sequences of length l can be calculated in O(l) time. We implemented the algorithm using suffix arrays; our implementation is fast enough to enable the construction of the proteome phylogenomic tree for hundreds of species and the genome phylogenomic forest for almost two thousand viruses. An initial analysis of the results exhibits a remarkable agreement with "acceptable phylogenetic and taxonomic truth." To assess our approach, our results were compared to the traditional (single-gene or protein-based) maximum likelihood method. The obtained trees were compared to implementations of a number of alternative approaches, including two that were previously published in the literature, and to the published results of a third approach. Comparing their outcomes and running times to ours, using "traditional" trees and a standard tree comparison method, our algorithm improved upon the "competition" by a substantial margin. The simplicity and speed of our method allow for a whole genome analysis with the greatest scope attempted so far. We describe here five different applications of the method, which not only show the validity of the method, but also suggest a number of novel phylogenetic insights.
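
    The core measure is easy to state in code. Below is a deliberately naive sketch (substring search with a binary search on match length, rather than the paper's linear-time suffix-array implementation); the symmetrized normalization shown is one common variant, and the paper's exact correction terms differ.

```python
import math

def avg_common_substring(a, b):
    """Mean over positions i of the length of the longest substring of a
    starting at i that also occurs somewhere in b (naive version)."""
    total = 0
    for i in range(len(a)):
        lo, hi = 0, len(a) - i
        while lo < hi:                       # binary search on match length;
            mid = (lo + hi + 1) // 2         # valid because matches are
            if a[i:i + mid] in b:            # prefix-closed
                lo = mid
            else:
                hi = mid - 1
        total += lo
    return total / len(a)

def acs_distance(a, b):
    """A symmetrized distance built from the ACS measure (one common
    normalization; illustrative only)."""
    d_ab = math.log(len(b)) / avg_common_substring(a, b)
    d_ba = math.log(len(a)) / avg_common_substring(b, a)
    return (d_ab + d_ba) / 2
```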

  3. The Lake Wobegon Effect: Are All Cancer Patients above Average?

    PubMed Central

    Wolf, Jacqueline H; Wolf, Kevin S

    2013-01-01

    Context When elderly patients face a terminal illness such as lung cancer, most are unaware that what we term in this article “the Lake Wobegon effect” taints the treatment advice imparted to them by their oncologists. In framing treatment plans, cancer specialists tend to intimate that elderly patients are like the children living in Garrison Keillor's mythical Lake Wobegon: above average and thus likely to exceed expectations. In this article, we use the story of our mother's death from lung cancer to investigate the consequences of elderly people's inability to reconcile the grave reality of their illness with the overly optimistic predictions of their physicians. Methods In this narrative analysis, we examine the routine treatment of elderly, terminally ill cancer patients through alternating lenses: the lens of a historian of medicine who also teaches ethics to medical students and the lens of an actuary who is able to assess physicians’ claims for the outcome of medical treatments. Findings We recognize that a desire to instill hope in patients shapes physicians’ messages. We argue, however, that the automatic optimism conveyed to elderly, dying patients by cancer specialists prompts those patients to choose treatment that is ineffective and debilitating. Rather than primarily prolong life, treatments most notably diminish patients’ quality of life, weaken the ability of patients and their families to prepare for their deaths, and contribute significantly to the unsustainable costs of the U.S. health care system. Conclusions The case described in this article suggests how physicians can better help elderly, terminally ill patients make medical decisions that are less damaging to them and less costly to the health care system. PMID:24320166

  4. Microbes make average 2 nanometer diameter crystalline UO2 particles.

    NASA Astrophysics Data System (ADS)

    Suzuki, Y.; Kelly, S. D.; Kemner, K. M.; Banfield, J. F.

    2001-12-01

    It is well known that phylogenetically diverse groups of microorganisms are capable of catalyzing the reduction of highly soluble U(VI) to highly insoluble U(IV), which rapidly precipitates as uraninite (UO2). Because biological uraninite is highly insoluble, microbial uranyl reduction is being intensively studied as the basis for a cost-effective in-situ bioremediation strategy. Previous studies have described UO2 biomineralization products as amorphous or poorly crystalline. The objective of this study is to characterize the nanocrystalline uraninite in detail in order to determine the particle size, crystallinity, and size-related structural characteristics, and to examine the implications of these for reoxidation and transport. In this study, we obtained U-contaminated sediment and water from an inactive U mine and incubated them anaerobically with nutrients to stimulate reductive precipitation of UO2 by indigenous anaerobic bacteria, mainly Gram-positive spore-forming Desulfosporosinus and Clostridium spp., as revealed by RNA-based phylogenetic analysis. A Desulfosporosinus sp. was isolated from the sediment, and UO2 was precipitated by this isolate from a simple solution containing only U and electron donors. We characterized the UO2 formed in both experiments by high-resolution TEM (HRTEM) and X-ray absorption fine structure (XAFS) analysis. The results from HRTEM showed that both the pure and the mixed cultures of microorganisms precipitated around 1.5-3 nm crystalline UO2 particles. Some particles as small as around 1 nm could be imaged. Rare particles around 10 nm in diameter were also present. Particles adhere to cells and form colloidal aggregates with low fractal dimension. In some cases, coarsening by oriented attachment on {111} is evident. Our preliminary results from XAFS for the incubated U-contaminated sample also indicated an average UO2 diameter of 2 nm. In nanoparticles, the U-U distance obtained by XAFS was 0.373 nm, 0.012 nm

  5. 40 CFR 60.3042 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.3076 to calculate emissions at 7 percent oxygen. (b) Use Equation 2 in § 60.3076 to calculate the 12-hour...

  6. 40 CFR 60.1265 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... averages into the appropriate averaging times and units? 60.1265 Section 60.1265 Protection of Environment... averaging times and units? (a) Use the equation in § 60.1460(a) to calculate emissions at 7 percent oxygen. (b) Use EPA Reference Method 19 in appendix A of this part, section 4.3, to calculate the...

  7. 40 CFR 60.1265 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of... daily geometric average percent reduction of potential sulfur dioxide emissions. (c) If you operate a... Continuous Emission Monitoring § 60.1265 How do I convert my 1-hour arithmetic averages into the...

  8. Forecasting of Average Monthly River Flows in Colombia

    NASA Astrophysics Data System (ADS)

    Mesa, O. J.; Poveda, G.

    2006-05-01

    The last two decades have witnessed a marked increase in our knowledge of the causes of interannual hydroclimatic variability and our ability to make predictions. Colombia, located near the seat of the ENSO phenomenon, has been shown to experience negative (positive) anomalies in precipitation in concert with El Niño (La Niña). In general, besides the Pacific Ocean, Colombia has climatic influences from the Atlantic Ocean and the Caribbean Sea, through the tropical forest of the Amazon basin and the savannas of the Orinoco River, on top of the orographic and hydro-climatic effects introduced by the Andes. As in various other countries of the region, hydroelectric power contributes a large proportion (75%) of the total electricity generation in Colombia. Also, most agriculture is rain-fed, and domestic water supply relies mainly on surface waters from creeks and rivers. Besides, various vector-borne tropical diseases intensify in response to rain and temperature changes. Therefore, there is a direct connection between climatic fluctuations and national and regional economies. This talk specifically presents different forecasts of average monthly stream flows for the inflow into the largest reservoir used for hydropower generation in Colombia, and illustrates the potential economic savings of such forecasts. Because of the planning of the reservoir operation, the most appropriate time scale for this application is the annual to interannual. Fortunately, this corresponds to the scale at which our understanding of hydroclimate variability has improved significantly. Among the different possibilities, we have explored traditional statistical ARIMA models, multiple linear regression, natural and constructed analogue models, the linear inverse model, neural network models, the non-parametric regression splines (MARS) model, regime-dependent Markovian models, and one we termed PREBEO, which is based on spectral band decomposition using wavelets. Most of the methods make

  9. Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs

    NASA Astrophysics Data System (ADS)

    Chitsazan, N.; Tsai, F. T.

    2012-12-01

    Groundwater remediation designs rely heavily on simulation models, which are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of the uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer frame, where each layer targets a source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights at different hierarchy levels and assessing the relative importance of models at each level. To account for uncertainty, we employ chance constrained (CC) programming for stochastic remediation design. Chance constrained programming was traditionally implemented to account for parameter uncertainty. Recently, many studies have suggested that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance constrained programming along with HBMA can provide a rigorous tool for groundwater remediation designs under uncertainty. In this research, HBMA-CC was applied to a remediation design in a synthetic aquifer. The design was to develop a scavenger well approach to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to a significant error in evaluating prediction variances for two reasons. First, considering the single best model, variances that stem from uncertainty in the model structure will be ignored. Second, considering the best model with non

  10. Accurate prediction of unsteady and time-averaged pressure loads using a hybrid Reynolds-Averaged/large-eddy simulation technique

    NASA Astrophysics Data System (ADS)

    Bozinoski, Radoslav

    Significant research has been performed over the last several years on understanding the unsteady aerodynamics of various fluid flows. Much of this work has focused on quantifying the unsteady, three-dimensional flow-field effects that have proven vital to the accurate prediction of many fluid and aerodynamic problems. Until recently, engineers have predominantly relied on steady-state simulations to analyze the inherently three-dimensional flow structures that are prevalent in many of today's "real-world" problems. Increases in computational capacity and the development of efficient numerical methods can change this and allow for the solution of the unsteady Reynolds-Averaged Navier-Stokes (RANS) equations for practical three-dimensional aerodynamic applications. An integral part of this capability has been the performance and accuracy of the turbulence models coupled with advanced parallel computing techniques. This report begins with a brief literature survey of the role fully three-dimensional, unsteady Navier-Stokes solvers play in the current state of numerical analysis. Next, the process of creating a baseline three-dimensional Multi-Block FLOw procedure called MBFLO3 is presented. Solutions for an inviscid circular-arc bump, laminar flat plate, laminar cylinder, and turbulent flat plate are then presented. Results show good agreement with available experimental, numerical, and theoretical data. Scalability data for the parallel version of MBFLO3 are presented and show efficiencies of 90% and higher for processes with no fewer than 100,000 computational grid points each. Next, the description and implementation techniques used for several turbulence models are presented. Following the successful implementation of the URANS and DES procedures, validation data for separated, non-reattaching flows over a NACA 0012 airfoil, a wall-mounted hump, and a wing-body junction geometry are presented. Results for the NACA 0012 showed significant improvement in flow predictions
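
    The scalability claim reduces to the standard definitions speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p; below is a tiny check with hypothetical wall-clock times, not the report's measurements.

    def parallel_efficiency(t_serial: float, t_parallel: float, procs: int) -> float:
        """Return parallel efficiency E(p) = (T(1)/T(p)) / p for `procs` processes."""
        return (t_serial / t_parallel) / procs

    # Hypothetical wall-clock times for a fixed grid of ~100,000 points per process.
    t1, t16 = 3200.0, 220.0  # seconds
    print(f"efficiency on 16 processes: {parallel_efficiency(t1, t16, 16):.0%}")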

  11. 26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 10 2013-04-01 2013-04-01 false Definition of weighted average exchange rate. 1... of weighted average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate” means the simple average of the daily exchange rates (determined by reference to...
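
    As quoted, the regulation defines the "weighted average exchange rate" as the simple (equally weighted) average of the daily exchange rates; a two-line illustration with made-up rates:

    # Hypothetical daily exchange rates (e.g. USD per EUR) over a period.
    daily_rates = [1.082, 1.079, 1.091, 1.087, 1.084]

    weighted_average_rate = sum(daily_rates) / len(daily_rates)
    print(f"weighted average exchange rate: {weighted_average_rate:.4f}")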

  12. 26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 10 2011-04-01 2011-04-01 false Definition of weighted average exchange rate. 1... of weighted average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate” means the simple average of the daily exchange rates (determined by reference to...

  13. 26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 10 2010-04-01 2010-04-01 false Definition of weighted average exchange rate. 1... average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate” means the simple average of the daily exchange rates (determined by reference to a qualified source...

  14. 26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 10 2012-04-01 2012-04-01 false Definition of weighted average exchange rate. 1... of weighted average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate” means the simple average of the daily exchange rates (determined by reference to...

  15. 26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 10 2014-04-01 2013-04-01 true Definition of weighted average exchange rate. 1... of weighted average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate” means the simple average of the daily exchange rates (determined by reference to...

  16. GI Joe or Average Joe? The impact of average-size and muscular male fashion models on men's and women's body image and advertisement effectiveness.

    PubMed

    Diedrichs, Phillippa C; Lee, Christina

    2010-06-01

    Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects. These findings suggest that average-size male models can promote positive body image and appeal to consumers.

  17. Gender Differences in Gifted and Average-Ability Students: Comparing Girls' and Boys' Achievement, Self-Concept, Interest, and Motivation in Mathematics

    ERIC Educational Resources Information Center

    Preckel, Franzis; Goetz, Thomas; Pekrun, Reinhard; Kleine, Michael

    2008-01-01

    This article investigates gender differences in 181 gifted and 181 average-ability sixth graders in achievement, academic self-concept, interest, and motivation in mathematics. Giftedness was conceptualized as nonverbal reasoning ability and defined by a rank of at least 95% on a nonverbal reasoning subscale of the German Cognitive Abilities Test.…

  18. Coping Strategies Applied to Comprehend Multistep Arithmetic Word Problems by Students with Above-Average Numeracy Skills and Below-Average Reading Skills

    ERIC Educational Resources Information Center

    Nortvedt, Guri A.

    2011-01-01

    This article discusses how 13-year-old students with above-average numeracy skills and below-average reading skills cope with comprehending word problems. Compared to other students who are proficient in numeracy and are skilled readers, these students are more disadvantaged when solving single-step and multistep arithmetic word problems. The…

  19. Recursive Averaging

    ERIC Educational Resources Information Center

    Smith, Scott G.

    2015-01-01

    In this article, Scott Smith presents an innocent problem (Problem 12 of the May 2001 Calendar from "Mathematics Teacher" ["MT" May 2001, vol. 94, no. 5, p. 384]) that was transformed by several timely "what if?" questions into a rewarding investigation of some interesting mathematics. These investigations led to two…

  20. Facial averageness and genetic quality: Testing heritability, genetic correlation with attractiveness, and the paternal age effect.

    PubMed

    Lee, Anthony J; Mitchem, Dorian G; Wright, Margaret J; Martin, Nicholas G; Keller, Matthew C; Zietsch, Brendan P

    2016-01-01

    Popular theory suggests that facial averageness is preferred in a partner for genetic benefits to offspring. However, whether facial averageness is associated with genetic quality is yet to be established. Here, we computed an objective measure of facial averageness for a large sample (N = 1,823) of identical and nonidentical twins and their siblings to test two predictions from the theory that facial averageness reflects genetic quality. First, we use biometrical modelling to estimate the heritability of facial averageness, which is necessary if it reflects genetic quality. We also test for a genetic association between facial averageness and facial attractiveness. Second, we assess whether paternal age at conception (a proxy of mutation load) is associated with facial averageness and facial attractiveness. Our findings are mixed with respect to our hypotheses. While we found that facial averageness does have a genetic component, and that a significant phenotypic correlation exists between facial averageness and attractiveness, we did not find a genetic correlation between facial averageness and attractiveness (therefore, we cannot say that the genes that affect facial averageness also affect facial attractiveness), and paternal age at conception was not negatively associated with facial averageness. These findings support some of the previously untested assumptions of the 'genetic benefits' account of facial averageness, but cast doubt on others.
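
    For a rough sense of how twin data yield heritability estimates, the sketch below applies Falconer's classical approximation h^2 = 2(r_MZ - r_DZ), a simpler stand-in for the full biometrical (ACE) modelling the study actually uses; the correlations here are invented.

    # Hypothetical twin correlations in a facial-averageness measure.
    r_mz = 0.62  # identical (monozygotic) twin correlation
    r_dz = 0.35  # nonidentical (dizygotic) twin correlation

    h2 = 2 * (r_mz - r_dz)  # additive genetic variance share (heritability)
    c2 = 2 * r_dz - r_mz    # shared-environment share
    e2 = 1 - r_mz           # unique environment + measurement error
    print(f"h^2 = {h2:.2f}, c^2 = {c2:.2f}, e^2 = {e2:.2f}")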

  1. 75 FR 22164 - All Items Consumer Price Index for All Urban Consumers; United States City Average

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-27

    ... of the Secretary All Items Consumer Price Index for All Urban Consumers; United States City Average... this notice in the Federal Register that the United States City Average All Items Consumer Price Index for All Urban Consumers (1967=100) increased 106.6 percent from its 1984 annual average of 311.1...
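
    The percentage reported in such notices is the ordinary relative change of the index from its 1984 base; a quick check using the two figures quoted above (106.6 percent and 311.1):

    base_1984 = 311.1     # 1984 annual average of the index, 1967 = 100
    pct_increase = 106.6  # percent increase reported in the 2010 notice

    current_index = base_1984 * (1 + pct_increase / 100)
    print(f"implied current index level: {current_index:.1f}")  # about 642.7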

  2. 77 FR 23283 - All Items Consumer Price Index for All Urban Consumers; United States City Average

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-18

    ... of the Secretary All Items Consumer Price Index for All Urban Consumers; United States City Average... this notice in the Federal Register that the United States City Average All Items Consumer Price Index for All Urban Consumers (1967 = 100) increased 116.6 percent from its 1984 annual average of 311.1...

  3. 76 FR 31991 - All Items Consumer Price Index for All Urban Consumers; United States City Average

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-02

    ... of the Secretary All Items Consumer Price Index for All Urban Consumers; United States City Average... this notice in the Federal Register that the United States City Average All Items Consumer Price Index for All Urban Consumers (1967 = 100) increased 110.0 percent from its 1984 annual average of 311.1...

  4. 28 CFR 505.2 - Annual determination of average cost of incarceration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... MANAGEMENT AND ADMINISTRATION COST OF INCARCERATION FEE § 505.2 Annual determination of average cost of... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Annual determination of average cost of... average cost of incarceration. This calculation is reviewed annually and the revised figure is...

  5. 13 CFR 120.829 - Job Opportunity average a CDC must maintain.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Job Opportunity average a CDC must... LOANS Development Company Loan Program (504) Requirements for Cdc Certification and Operation § 120.829 Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average...

  6. 13 CFR 120.829 - Job Opportunity average a CDC must maintain.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Job Opportunity average a CDC must... LOANS Development Company Loan Program (504) Requirements for Cdc Certification and Operation § 120.829 Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average...

  7. 13 CFR 120.829 - Job Opportunity average a CDC must maintain.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false Job Opportunity average a CDC must... LOANS Development Company Loan Program (504) Requirements for Cdc Certification and Operation § 120.829 Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average...

  8. 13 CFR 120.829 - Job Opportunity average a CDC must maintain.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Job Opportunity average a CDC must... LOANS Development Company Loan Program (504) Requirements for Cdc Certification and Operation § 120.829 Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average...

  9. 13 CFR 120.829 - Job Opportunity average a CDC must maintain.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Job Opportunity average a CDC must... LOANS Development Company Loan Program (504) Requirements for Cdc Certification and Operation § 120.829 Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average...

  10. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...
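
    The record truncates before the computation, but an average benzene concentration across a refinery's gasoline batches is conventionally volume-weighted; the sketch below assumes that convention, with invented batch data rather than the rule's actual worked example.

    # (batch volume in gallons, benzene concentration in volume %)
    batches = [
        (2_000_000, 0.55),
        (1_500_000, 0.70),
        (3_000_000, 0.48),
    ]

    total_vol = sum(v for v, _ in batches)
    avg_benzene = sum(v * c for v, c in batches) / total_vol
    print(f"average benzene concentration: {avg_benzene:.3f} vol%")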

  11. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...

  12. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...

  13. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...

  14. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...

  15. Digital filter suppresses effects of nonstatistical noise bursts on multichannel scaler digital averaging systems

    NASA Technical Reports Server (NTRS)

    Goodman, L. S.; Salter, F. O.

    1968-01-01

    Digital filter suppresses the effects of nonstatistical noise bursts on data averaged over multichannel scaler. Interposed between the sampled channels and the digital averaging system, it uses binary logic circuitry to compare the number of counts per channel with the average number of counts per channel.
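
    A minimal software analogue of the filter's logic, which the record describes only in hardware terms (binary logic comparing each channel's counts against the average); the rejection threshold and channel data below are hypothetical.

    def filtered_average(counts, burst_factor=3.0):
        """Average channel counts, skipping bursts above burst_factor x running average."""
        avg, n = 0.0, 0
        for c in counts:
            if n > 0 and c > burst_factor * avg:
                continue  # reject suspected nonstatistical noise burst
            n += 1
            avg += (c - avg) / n  # incremental (running) mean
        return avg

    channels = [98, 103, 99, 4500, 101, 97, 102]  # one obvious burst at 4500
    print(f"filtered average counts/channel: {filtered_average(channels):.1f}")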

  16. 49 CFR 525.11 - Termination of exemption; amendment of alternative average fuel economy standard.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... average fuel economy standard. 525.11 Section 525.11 Transportation Other Regulations Relating to... EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.11 Termination of exemption; amendment of alternative average fuel economy standard. (a) Any exemption granted under this part for an affected model year...

  17. 49 CFR 525.11 - Termination of exemption; amendment of alternative average fuel economy standard.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... average fuel economy standard. 525.11 Section 525.11 Transportation Other Regulations Relating to... EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.11 Termination of exemption; amendment of alternative average fuel economy standard. (a) Any exemption granted under this part for an affected model year...

  18. 49 CFR 525.11 - Termination of exemption; amendment of alternative average fuel economy standard.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... average fuel economy standard. 525.11 Section 525.11 Transportation Other Regulations Relating to... EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.11 Termination of exemption; amendment of alternative average fuel economy standard. (a) Any exemption granted under this part for an affected model year...

  19. 49 CFR 525.11 - Termination of exemption; amendment of alternative average fuel economy standard.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... average fuel economy standard. 525.11 Section 525.11 Transportation Other Regulations Relating to... EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.11 Termination of exemption; amendment of alternative average fuel economy standard. (a) Any exemption granted under this part for an affected model year...

  20. 12 CFR 702.105 - Weighted-average life of investments.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Weighted-average life of investments. 702.105... PROMPT CORRECTIVE ACTION Net Worth Classification § 702.105 Weighted-average life of investments. Except as provided below (Table 3), the weighted-average life of an investment for purposes of §§...
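
    The record truncates before the method, but the weighted-average life of an investment is conventionally the principal-weighted average time to repayment, WAL = (sum of t_i * P_i) / (sum of P_i); a sketch under that assumption, with invented cash flows.

    # (years until payment, principal repaid)
    principal_payments = [
        (1.0, 100.0),
        (2.0, 150.0),
        (3.0, 250.0),
    ]

    total_principal = sum(p for _, p in principal_payments)
    wal = sum(t * p for t, p in principal_payments) / total_principal
    print(f"weighted-average life: {wal:.2f} years")  # 2.30 years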