Pate, William; Charlton, Michael; Wellington, Carl
2013-01-01
Occupational noise exposure is a recognized hazard for employees working near equipment and processes that generate high sound pressure levels. Exposure to high sound pressure levels can cause temporary or permanent changes in hearing. Cleaning the cages used to house laboratory research animals relies on equipment capable of generating high sound pressure levels. The purpose of this research study was to assess occupational exposure to sound pressure levels for employees operating cage decontamination equipment. The study reveals the potential for overexposure to hazardous noise as defined by the Occupational Safety and Health Administration (OSHA) permissible exposure limit, along with consistent exceedance of the OSHA action level. These results emphasize the importance of evaluating equipment and room design when acquiring new cage decontamination equipment in order to minimize employee exposure to potentially hazardous sound pressure levels.
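The OSHA metrics referenced above (permissible exposure limit and action level) follow the 5-dB exchange-rate dose formulas of 29 CFR 1910.95. A minimal sketch of the 8-hour TWA arithmetic; the exposure segments below are invented for illustration:

```python
import math

def allowed_duration(level_dba: float) -> float:
    """Reference duration T (hours) permitted at a sound level, per the
    OSHA 5-dB exchange rate: T = 8 / 2**((L - 90) / 5)."""
    return 8.0 / 2 ** ((level_dba - 90.0) / 5.0)

def noise_dose(segments) -> float:
    """Percent noise dose D = 100 * sum(C_n / T_n) over (level_dBA, hours) segments."""
    return 100.0 * sum(hours / allowed_duration(level) for level, hours in segments)

def twa_from_dose(dose_percent: float) -> float:
    """Equivalent 8-hour TWA sound level: TWA = 16.61 * log10(D / 100) + 90."""
    return 16.61 * math.log10(dose_percent / 100.0) + 90.0

# Hypothetical shift: 2 h at 95 dBA (cage washer running), 6 h at 82 dBA
dose = noise_dose([(95.0, 2.0), (82.0, 6.0)])
twa = twa_from_dose(dose)
# dose > 100 would exceed the PEL; TWA >= 85 dBA triggers the action level
```

A dose of 100% corresponds to a TWA of 90 dBA (the PEL), and a dose of 50% to 85 dBA (the action level).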
Ouyang, Gangfeng; Zhao, Wennan; Bragg, Leslie; Qin, Zhipei; Alaee, Mehran; Pawliszyn, Janusz
2007-06-01
In this study, three types of solid-phase microextraction (SPME) passive samplers, including a fiber-retracted device, a polydimethylsiloxane (PDMS) rod and a PDMS membrane, were evaluated to determine time weighted average (TWA) concentrations of polycyclic aromatic hydrocarbons (PAHs) in Hamilton Harbour (the western tip of Lake Ontario, ON, Canada). Field trials demonstrated that these types of SPME samplers are suitable for the long-term monitoring of organic pollutants in water. These samplers possess all of the advantages of SPME: they are solvent-free; sampling, extraction, and concentration are combined into one step; and the extracts can be injected directly into a gas chromatograph (GC) for analysis without further treatment. These samplers also address the additional needs of a passive sampling technique: they are economical, easy to deploy, and the TWA concentrations of target analytes can be obtained with a single sampler. Moreover, the mass uptake of these samplers is either independent of the face velocity or can be calibrated for it, which is desirable for long-term field sampling, especially when the convection conditions of the sampling environment are difficult to measure and calibrate. Among the three types of SPME samplers that were tested, the PDMS membrane possesses the highest surface-to-volume ratio, which results in the highest sensitivity and mass uptake and the lowest detection level.
Time weighted average concentration monitoring based on thin film solid phase microextraction.
Ahmadi, Fardin; Sparham, Chris; Boyaci, Ezel; Pawliszyn, Janusz
2017-03-02
Time weighted average (TWA) passive sampling with thin film solid phase microextraction (TF-SPME) and liquid chromatography tandem mass spectrometry (LC-MS/MS) was used for collection, identification, and quantification of benzophenone-3, benzophenone-4, 2-phenylbenzimidazole-5-sulphonic acid, octocrylene, and triclosan in the aquatic environment. Two types of TF-SPME passive samplers, a retracted thin film device using a hydrophilic lipophilic balance (HLB) coating and an open bed configuration with an octadecyl silica-based (C18) coating, were evaluated in an aqueous standard generation (ASG) system. Laboratory calibration results indicated that the retracted thin film device with HLB coating is suitable for determining TWA concentrations of polar analytes in water, with an uptake that was linear up to 70 days. In the open bed form, a one-calibrant kinetic calibration technique was accomplished by loading benzophenone-3-d5 as calibrant on the C18 coating to quantify all non-polar compounds. The experimental results showed that the one-calibrant kinetic calibration technique can be used for determination of classes of compounds in cases where deuterated counterparts are unavailable or expensive. The developed passive samplers were deployed in wastewater-dominated reaches of the Grand River (Kitchener, ON) to verify their feasibility for determination of TWA concentrations in on-site applications. Field trial results indicated that these devices are suitable for long-term and short-term monitoring of compounds varying in polarity, such as UV blockers and biocide compounds in water, and the data were in good agreement with literature data.
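The one-calibrant kinetic calibration mentioned above rests on the isotropy of absorption and desorption: the fraction of preloaded calibrant desorbed mirrors the fraction of analyte uptake toward equilibrium, n/n_e = 1 - q/q0. A hedged numerical sketch; the fiber constant (the product Kfs*Vf) and loadings below are invented for illustration:

```python
def concentration_kinetic(n_extracted: float, q_remaining: float,
                          q0_loaded: float, kfs_vf: float) -> float:
    """One-calibrant kinetic calibration.

    Isotropic uptake/desorption gives n/n_e = 1 - q/q0, where n_e = Kfs*Vf*C
    is the equilibrium amount, so C = n / (Kfs*Vf * (1 - q/q0)).
    """
    fraction_equilibrated = 1.0 - q_remaining / q0_loaded
    return n_extracted / (kfs_vf * fraction_equilibrated)

# Illustrative numbers: half the loaded calibrant desorbed during deployment,
# so the sampler is treated as 50% of the way to equilibrium.
c = concentration_kinetic(n_extracted=10.0, q_remaining=0.5,
                          q0_loaded=1.0, kfs_vf=10.0)
# With these invented values, c == 2.0 (mass per unit volume)
```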
Uncertainty and variability in historical time-weighted average exposure data.
Davis, Adam J; Strom, Daniel J
2008-02-01
Beginning around 1940, private companies began processing uranium and thorium ore, compounds, and metals for the Manhattan Engineer District and later the U.S. Atomic Energy Commission (AEC). Personnel from the AEC's Health and Safety Laboratory (HASL) visited many of the plants to assess worker exposures to radiation and radioactive materials. They developed a time-and-task approach to estimating "daily weighted average" (DWA) concentrations of airborne uranium, thorium, radon, and radon decay products. While short-term exposures greater than 10^5 dpm m^-3 of uranium and greater than 10^5 pCi L^-1 of radon were observed, DWA concentrations were much lower. The HASL-reported DWA values may be used as inputs for dose reconstruction in support of compensation decisions, but they carry no numerical uncertainties. In this work, Monte Carlo methods are used retrospectively to assess the uncertainty and variability in the DWA values for 63 job titles from five different facilities that processed U, U ore, Th, or 226Ra-222Rn between 1948 and 1955. Most groups of repeated air samples are well described by lognormal distributions. Combining samples associated with different tasks often reduces the geometric standard deviation (GSD) of the DWA below the GSD values typical of individual tasks. The results support assuming a GSD of 5 when information on uncertainty in DWA exposures is unavailable. Blunders involving arithmetic, transposition, and transcription are found in many of the HASL reports. In 5 of the 63 cases, these mistakes overestimate DWA values by a factor of 2 to 2.5; in 2 cases, DWA values are underestimated by factors of 3 to 10.
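The Monte Carlo treatment described (lognormal task samples combined into a time-weighted DWA) can be sketched as follows. The task durations, geometric means, and GSDs below are invented for illustration, not taken from the HASL reports:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical time-and-task breakdown: (hours, geometric mean dpm/m^3, GSD)
tasks = [(3.0, 2.0e4, 4.0), (4.0, 5.0e3, 3.0), (1.0, 8.0e4, 5.0)]
total_hours = sum(hours for hours, _, _ in tasks)

n_trials = 100_000
dwa = np.zeros(n_trials)
for hours, gm, gsd in tasks:
    # Lognormal draws with median gm and geometric standard deviation gsd
    draws = rng.lognormal(mean=np.log(gm), sigma=np.log(gsd), size=n_trials)
    dwa += (hours / total_hours) * draws

# GSD of the simulated daily weighted average: summing across tasks
# tends to pull it below the individual task GSDs.
gsd_dwa = float(np.exp(np.log(dwa).std()))
```

The design choice mirrors the paper's finding: a weighted sum of independent lognormal contributions has a smaller relative spread than its widest component.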
Woolcock, Patrick J; Koziel, Jacek A; Cai, Lingshuang; Johnston, Patrick A; Brown, Robert C
2013-03-15
Time-weighted average (TWA) passive sampling using solid-phase microextraction (SPME) and gas chromatography was investigated as a new method of collecting, identifying and quantifying contaminants in process gas streams. Unlike previous TWA-SPME techniques using the retracted fiber configuration (fiber within needle) to monitor ambient conditions or relatively stagnant gases, this method was developed for fast-moving process gas streams at temperatures approaching 300 °C. The goal was to develop a consistent and reliable method of analyzing low concentrations of contaminants in hot gas streams without performing time-consuming exhaustive extraction with a slipstream. This work in particular aims to quantify trace tar compounds found in a syngas stream generated from biomass gasification. This paper evaluates the concept of retracted SPME at high temperatures by testing the three essential requirements for TWA passive sampling: (1) the zero-sink assumption, (2) a consistent and reliable response by the sampling device to changing concentrations, and (3) equal concentrations in the bulk gas stream relative to the face of the fiber syringe opening. Results indicated the method can accurately predict gas stream concentrations at elevated temperatures. Evidence was also found for a second boundary layer within the fiber during the adsorption/absorption process. This limits the technique to operating within reasonable mass loadings and loading rates, established by appropriate sampling depths and times for the concentrations of interest. A limit of quantification for the benzene model tar system was estimated at 0.02 g m^-3 (8 ppm) with a limit of detection of 0.5 mg m^-3 (200 ppb). Using the appropriate conditions, the technique was applied to a pilot-scale fluidized-bed gasifier to verify its feasibility. Results from this test were in good agreement with literature and prior pilot plant operation, indicating the new method can measure low
Shih, H C; Tsai, S W; Kuo, C H
2012-01-01
A solid-phase microextraction (SPME) device was used as a diffusive sampler for airborne propylene glycol ethers (PGEs), including propylene glycol monomethyl ether (PGME), propylene glycol monomethyl ether acetate (PGMEA), and dipropylene glycol monomethyl ether (DPGME). A Carboxen-polydimethylsiloxane (CAR/PDMS) SPME fiber was selected for this study. Polytetrafluoroethylene (PTFE) tubing was used as the holder, and the SPME fiber assembly was inserted into the tubing to form a diffusive sampler. The diffusion path length and cross-sectional area of the sampler were 0.3 cm and 0.00086 cm^2, respectively. The theoretical sampling constants at 30 °C and 1 atm for PGME, PGMEA, and DPGME were 1.50 × 10^-2, 1.23 × 10^-2 and 1.14 × 10^-2 cm^3 min^-1, respectively. For the evaluations, known concentrations of PGEs around the threshold limit value/time-weighted average at specific relative humidities (10% and 80%) were generated both by the air bag method and by a dynamic generation system, with 15, 30, 60, 120, and 240 min selected as the vapor exposure periods. Side-by-side comparisons of the SPME diffusive sampling method with Occupational Safety and Health Administration (OSHA) organic Method 99 were performed in an exposure chamber at 30 °C for PGME. A gas chromatograph with flame ionization detector (GC/FID) was used for sample analysis. The experimental sampling constants of the sampler at 30 °C were (6.93 ± 0.12) × 10^-1, (4.72 ± 0.03) × 10^-1, and (3.29 ± 0.20) × 10^-1 cm^3 min^-1 for PGME, PGMEA, and DPGME, respectively. Adsorption of the chemicals on the stainless steel needle of the SPME fiber was suspected to be one reason for the significant differences observed between the theoretical and experimental sampling rates. Correlations between the results for PGME from the SPME device and OSHA organic Method 99 were linear (r = 0.9984) and consistent (slope = 0.97 ± 0.03). Face velocity (0-0.18 m/s) also proved to have no effect on the sampler
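The theoretical sampling constants above follow Fick's first law for a diffusive sampler, SR = D·A/L, and the TWA concentration is recovered from collected mass as C = m/(SR·t). A sketch using the geometry reported in the abstract; the PGME diffusion coefficient here is simply back-calculated from the reported SR, for illustration only:

```python
def sampling_constant(d_cm2_min: float, area_cm2: float, path_cm: float) -> float:
    """Fick's-law sampling constant SR = D * A / L, in cm^3/min."""
    return d_cm2_min * area_cm2 / path_cm

def twa_concentration(mass_ng: float, sr_cm3_min: float, minutes: float) -> float:
    """TWA concentration (ng/cm^3) from collected mass: C = m / (SR * t)."""
    return mass_ng / (sr_cm3_min * minutes)

# Geometry from the study: path length L = 0.3 cm, area A = 0.00086 cm^2
L, A = 0.3, 0.00086
d_pgme = 1.50e-2 * L / A               # cm^2/min, back-calculated from SR
sr = sampling_constant(d_pgme, A, L)   # recovers 1.50e-2 cm^3/min
c = twa_concentration(1.0, sr, 240.0)  # 1 ng collected over a 240-min exposure
```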
Juarez-Galan, Juan M; Valor, Ignacio
2009-04-10
A new cryogenic integrative air sampler (patent application number 08/00669), able to overcome many of the limitations of current volatile organic compound and odour sampling methodologies, is presented. The sample is collected spontaneously, in a universal way, at 15 mL/min, selectively dried (removing up to 95% of moisture) and stored under cryogenic conditions. Sampler performance was tested under time weighted average (TWA) conditions, sampling 100 L of air over 5 days for the determination of NH3, H2S, and benzene, toluene, ethylbenzene and xylenes (BTEX) in the ppmv range. Recovery was statistically indistinguishable from 100% for all compounds, with a concentration factor of 5.5. Furthermore, an in-field evaluation was performed by monitoring TWA immission levels of BTEX and dimethylethylamine (ppbv range) in an urban area with the developed technology and comparing the results against a commercial graphitised charcoal diffusive sampler. The results showed good statistical agreement between the two techniques.
Baimatova, Nassiba; Koziel, Jacek A; Kenessov, Bulat
2015-05-11
A new and simple method for benzene, toluene, ethylbenzene and o-xylene (BTEX) quantification in vehicle exhaust was developed based on diffusion-controlled extraction onto a retracted solid-phase microextraction (SPME) fiber coating. The rationale was to build on existing and proven SPME technology in a form feasible for field adaptation in developing countries. Passive sampling with the SPME fiber retracted into the needle extracted nearly two orders of magnitude less mass (n) than an exposed fiber (outside the needle), and sampling operated in a time-weighted-averaging (TWA) mode. Both the sampling time (t) and the fiber retraction depth (Z) were adjusted to quantify a wider range of gas concentrations (Cgas). Extraction and quantification are conducted in a non-equilibrium mode. The effects of Cgas, t, Z and temperature (T) were tested, along with the contribution of mass extracted by the metallic surfaces of the needle assembly without the SPME coating and the effect of sample storage time on losses of n. Retracted TWA-SPME extractions followed the theoretical model: the extracted mass of BTEX was proportional to Cgas, t, the gas-phase diffusion coefficient (Dg) and T, and inversely proportional to Z. Method detection limits were 1.8, 2.7, 2.1 and 5.2 mg m^-3 (0.51, 0.83, 0.66 and 1.62 ppm) for BTEX, respectively. The contribution of extraction onto metallic surfaces was reproducible, influenced by Cgas and t, and less so by T and Z. The new method was applied to measure BTEX in the exhaust gas of a 1995 Ford Crown Victoria and compared with whole-gas sampling and direct injection.
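The theoretical model referenced (extracted mass proportional to Cgas, t, and Dg, and inversely proportional to retraction depth Z) is the standard TWA-SPME diffusion expression n = (Dg·A/Z)·Cgas·t. A round-trip sketch; the parameter values below are invented, with Dg chosen near a benzene-in-air diffusivity:

```python
def extracted_mass(c_gas, t_min, dg_cm2_min, area_cm2, z_cm):
    """TWA-SPME model for a retracted fiber: n = (Dg * A / Z) * Cgas * t."""
    return (dg_cm2_min * area_cm2 / z_cm) * c_gas * t_min

def twa_gas_concentration(n, t_min, dg_cm2_min, area_cm2, z_cm):
    """Invert the model to recover Cgas from the extracted mass n."""
    return n * z_cm / (dg_cm2_min * area_cm2 * t_min)

# Invented values: benzene-like Dg, 30-min sample, 0.5-cm retraction depth
dg, area, z, t = 5.3, 0.0008, 0.5, 30.0   # cm^2/min, cm^2, cm, min
n = extracted_mass(2.0, t, dg, area, z)   # Cgas = 2.0 (mass per cm^3)
c_back = twa_gas_concentration(n, t, dg, area, z)   # recovers 2.0
```

Doubling Z halves the uptake rate, which is how the method extends its quantifiable concentration range upward.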
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-09
... Rouge 8-Hour Ozone Nonattainment Area; Determination of Attainment of the 8-Hour Ozone Standard AGENCY... (BR) moderate 8- hour ozone nonattainment area has attained the 1997 8-hour ozone National Ambient Air... air monitoring data that show the area has monitored attainment of the 1997 8-hour ozone NAAQS for...
40 CFR 51.906 - Redesignation to nonattainment following initial designations for the 8-hour NAAQS.
Code of Federal Regulations, 2014 CFR
2014-07-01
... following initial designations for the 8-hour NAAQS. 51.906 Section 51.906 Protection of Environment... Standard § 51.906 Redesignation to nonattainment following initial designations for the 8-hour NAAQS. For any area that is initially designated attainment or unclassifiable for the 8-hour NAAQS and that...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-11
... AGENCY 40 CFR Part 52 Approval and Promulgation of Air Quality Implementation Plans; Ohio; 1997 8-Hour Ozone Maintenance Plan Revision; Motor Vehicle Emissions Budgets for the Ohio Portion of the Wheeling... Act, EPA is proposing to approve the request by Ohio to revise the 1997 8-hour ozone maintenance...
40 CFR 51.906 - Redesignation to nonattainment following initial designations for the 8-hour NAAQS.
Code of Federal Regulations, 2011 CFR
2011-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS REQUIREMENTS FOR PREPARATION, ADOPTION, AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient Air Quality... subsequently redesignated to nonattainment for the 8-hour ozone NAAQS, any absolute, fixed date applicable...
40 CFR 51.906 - Redesignation to nonattainment following initial designations for the 8-hour NAAQS.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS REQUIREMENTS FOR PREPARATION, ADOPTION, AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient Air Quality... subsequently redesignated to nonattainment for the 8-hour ozone NAAQS, any absolute, fixed date applicable...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-01
... Ozone Nonattainment Area; MD AGENCY: Environmental Protection Agency (EPA). ACTION: Proposed rule. SUMMARY: EPA is proposing to determine that the Baltimore moderate 8- hour ozone nonattainment area (the Baltimore Area) did not attain the 1997 8-hour ozone national ambient air quality standard (NAAQS) by...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-04
... Ozone Nonattainment Area; Correction AGENCY: Environmental Protection Agency (EPA). ACTION: Final rule..., area from marginal to moderate for the 1997 8-hour ozone nonattainment area by operation of law. This....311. The reclassification of the Atlanta Area from marginal to moderate for the 1997 8-hour...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-29
... AGENCY 40 CFR Part 52 Approval and Promulgation of Implementation Plans; Atlanta, Georgia 1997 8-Hour..., 2009, to address the reasonable further progress (RFP) plan requirements for the Atlanta, Georgia 1997 8-hour ozone national ambient air quality standards (NAAQS) nonattainment area. The Atlanta,...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-24
... AGENCY 40 CFR Part 52 Approval and Promulgation of Implementation Plans: Atlanta, Georgia 1997 8-Hour... Atlanta, Georgia 1997 8-hour ozone national ambient air quality standards (NAAQS) nonattainment area. EPA... Planning Branch, U.S. Environmental Protection Agency Region 4, 61 Forsyth Street SW., Atlanta,...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-29
... AGENCY 40 CFR Part 52 Approval and Promulgation of Air Quality Implementation Plans; Virginia... Virginia's State Implementation Plan (SIP) revision submitted by the Virginia Department of Environmental... model (MOVES2010a). The revised MVEBs continue to demonstrate maintenance of the 1997 8-hour...
EPA Approves Redesignation of Knoxville Area to Attainment for the 2008 8-Hour Ozone Standard
(07/13/15 - ATLANTA ) - Today, the U.S. Environmental Protection Agency announced that it is taking final action to approve the state of Tennessee's request to redesignate the Knoxville area to attainment for the 2008 8-hour ozone standard. This action...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-29
...EPA is taking direct final action to approve a state implementation plan (SIP) revision, submitted by the State of Georgia, through the Georgia Environmental Protection Division (GA EPD), on October 21, 2009, to address the reasonable further progress (RFP) plan requirements for the Atlanta, Georgia 1997 8-hour ozone national ambient air quality standards (NAAQS) nonattainment area. The...
40 CFR 51.915 - What emissions inventory requirements apply under the 8-hour NAAQS?
Code of Federal Regulations, 2014 CFR
2014-07-01
... nonattainment area subject only to title I, part D, subpart 1 of the Act in accordance with § 51.902(b), the... emissions inventories for these areas, the ozone-relevant data element requirements under 40 CFR part 51... apply under the 8-hour NAAQS? 51.915 Section 51.915 Protection of Environment ENVIRONMENTAL...
40 CFR 51.915 - What emissions inventory requirements apply under the 8-hour NAAQS?
Code of Federal Regulations, 2011 CFR
2011-07-01
... AGENCY (CONTINUED) AIR PROGRAMS REQUIREMENTS FOR PREPARATION, ADOPTION, AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.915 What... emissions inventories for these areas, the ozone-relevant data element requirements under 40 CFR part...
40 CFR 51.915 - What emissions inventory requirements apply under the 8-hour NAAQS?
Code of Federal Regulations, 2010 CFR
2010-07-01
... AGENCY (CONTINUED) AIR PROGRAMS REQUIREMENTS FOR PREPARATION, ADOPTION, AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.915 What... emissions inventories for these areas, the ozone-relevant data element requirements under 40 CFR part...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-22
... AGENCY 40 CFR Part 52 Approval and Promulgation of Implementation Plans; New Jersey; 8- hour Ozone... Implementation Plan (SIP) for ozone involving the control of volatile organic compounds (VOCs). The proposed SIP... ozone. DATES: Comments must be received on or before August 23, 2010. ADDRESSES: Submit your...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-22
... AGENCY 40 CFR Part 52 Approval and Promulgation of Implementation Plans; New Jersey; 8- Hour Ozone... Plan (SIP) for ozone involving the control of volatile organic compounds (VOCs). The SIP revision... ozone. DATES: Effective Date: This rule is effective on January 21, 2011. ADDRESSES: EPA has...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-09
... 51.918, a final determination that the area has met the 1997 8-hour ozone standard suspends the state... nonattainment area has attained the 1997 8-hour ozone NAAQS. This determination, in accordance with 40 CFR 51... (MO-IL) metropolitan nonattainment area has attained the 1997 8-hour National Ambient Air...
40 CFR 51.914 - What new source review requirements apply for 8-hour ozone nonattainment areas?
Code of Federal Regulations, 2011 CFR
2011-07-01
... apply for 8-hour ozone nonattainment areas? 51.914 Section 51.914 Protection of Environment... OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.914 What new source review requirements apply for 8-hour ozone nonattainment areas?...
40 CFR 51.914 - What new source review requirements apply for 8-hour ozone nonattainment areas?
Code of Federal Regulations, 2013 CFR
2013-07-01
... apply for 8-hour ozone nonattainment areas? 51.914 Section 51.914 Protection of Environment... OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.914 What new source review requirements apply for 8-hour ozone nonattainment areas?...
40 CFR 51.914 - What new source review requirements apply for 8-hour ozone nonattainment areas?
Code of Federal Regulations, 2012 CFR
2012-07-01
... apply for 8-hour ozone nonattainment areas? 51.914 Section 51.914 Protection of Environment... OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.914 What new source review requirements apply for 8-hour ozone nonattainment areas?...
Code of Federal Regulations, 2011 CFR
2011-07-01
... demonstration requirements apply for purposes of the 8-hour ozone NAAQS? 51.908 Section 51.908 Protection of..., AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient... purposes of the 8-hour ozone NAAQS? (a) What is the attainment demonstration requirement for an...
40 CFR 51.914 - What new source review requirements apply for 8-hour ozone nonattainment areas?
Code of Federal Regulations, 2014 CFR
2014-07-01
... apply for 8-hour ozone nonattainment areas? 51.914 Section 51.914 Protection of Environment... OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.914 What new source review requirements apply for 8-hour ozone nonattainment areas?...
Code of Federal Regulations, 2013 CFR
2013-07-01
... demonstration requirements apply for purposes of the 8-hour ozone NAAQS? 51.908 Section 51.908 Protection of..., AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient... purposes of the 8-hour ozone NAAQS? (a) What is the attainment demonstration requirement for an...
40 CFR 51.914 - What new source review requirements apply for 8-hour ozone nonattainment areas?
Code of Federal Regulations, 2010 CFR
2010-07-01
... apply for 8-hour ozone nonattainment areas? 51.914 Section 51.914 Protection of Environment... OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.914 What new source review requirements apply for 8-hour ozone nonattainment areas?...
Code of Federal Regulations, 2012 CFR
2012-07-01
... demonstration requirements apply for purposes of the 8-hour ozone NAAQS? 51.908 Section 51.908 Protection of..., AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient... purposes of the 8-hour ozone NAAQS? (a) What is the attainment demonstration requirement for an...
Code of Federal Regulations, 2010 CFR
2010-07-01
... demonstration requirements apply for purposes of the 8-hour ozone NAAQS? 51.908 Section 51.908 Protection of..., AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient... purposes of the 8-hour ozone NAAQS? (a) What is the attainment demonstration requirement for an...
Code of Federal Regulations, 2014 CFR
2014-07-01
... demonstration requirements apply for purposes of the 8-hour ozone NAAQS? 51.908 Section 51.908 Protection of..., AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient... purposes of the 8-hour ozone NAAQS? (a) What is the attainment demonstration requirement for an...
40 CFR 52.1393 - Interstate Transport Declaration for the 1997 8-hour ozone and PM2.5 NAAQS.
Code of Federal Regulations, 2013 CFR
2013-07-01
... the 1997 8-hour ozone and PM2.5 NAAQS. 52.1393 Section 52.1393 Protection of Environment ENVIRONMENTAL... (CONTINUED) Montana § 52.1393 Interstate Transport Declaration for the 1997 8-hour ozone and PM2.5 NAAQS. The... Section 110(a)(2)(D)(i) for the 8-hour ozone and PM2.5 NAAQS promulgated in July 1997. The...
40 CFR 52.1393 - Interstate Transport Declaration for the 1997 8-hour ozone and PM2.5 NAAQS.
Code of Federal Regulations, 2011 CFR
2011-07-01
... the 1997 8-hour ozone and PM2.5 NAAQS. 52.1393 Section 52.1393 Protection of Environment ENVIRONMENTAL... (CONTINUED) Montana § 52.1393 Interstate Transport Declaration for the 1997 8-hour ozone and PM2.5 NAAQS. The... Section 110(a)(2)(D)(i) for the 8-hour ozone and PM2.5 NAAQS promulgated in July 1997. The...
40 CFR 52.1393 - Interstate Transport Declaration for the 1997 8-hour ozone and PM2.5 NAAQS.
Code of Federal Regulations, 2012 CFR
2012-07-01
... the 1997 8-hour ozone and PM2.5 NAAQS. 52.1393 Section 52.1393 Protection of Environment ENVIRONMENTAL... (CONTINUED) Montana § 52.1393 Interstate Transport Declaration for the 1997 8-hour ozone and PM2.5 NAAQS. The... Section 110(a)(2)(D)(i) for the 8-hour ozone and PM2.5 NAAQS promulgated in July 1997. The...
40 CFR 52.1393 - Interstate Transport Declaration for the 1997 8-hour ozone and PM2.5 NAAQS.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the 1997 8-hour ozone and PM2.5 NAAQS. 52.1393 Section 52.1393 Protection of Environment ENVIRONMENTAL... (CONTINUED) Montana § 52.1393 Interstate Transport Declaration for the 1997 8-hour ozone and PM2.5 NAAQS. The... Section 110(a)(2)(D)(i) for the 8-hour ozone and PM2.5 NAAQS promulgated in July 1997. The...
40 CFR 51.913 - How do the section 182(f) NOX exemption provisions apply for the 8-hour NAAQS?
Code of Federal Regulations, 2014 CFR
2014-07-01
... provisions apply for the 8-hour NAAQS? 51.913 Section 51.913 Protection of Environment ENVIRONMENTAL... IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.913... petition the Administrator for an exemption from NOX obligations under section 182(f) for any...
40 CFR 51.913 - How do the section 182(f) NOX exemption provisions apply for the 8-hour NAAQS?
Code of Federal Regulations, 2011 CFR
2011-07-01
... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS REQUIREMENTS FOR PREPARATION, ADOPTION, AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.913... designated nonattainment for the 8-hour ozone NAAQS and for any area in a section 184 ozone transport...
40 CFR 51.913 - How do the section 182(f) NOX exemption provisions apply for the 8-hour NAAQS?
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS REQUIREMENTS FOR PREPARATION, ADOPTION, AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.913... designated nonattainment for the 8-hour ozone NAAQS and for any area in a section 184 ozone transport...
40 CFR 52.2499 - Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 4 2010-07-01 2010-07-01 false Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS. 52.2499 Section 52.2499 Protection of Environment ENVIRONMENTAL PROTECTION...) Washington § 52.2499 Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS. On January 17, 2007,...
40 CFR 51.916 - What are the requirements for an Ozone Transport Region under the 8-hour NAAQS?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 2 2013-07-01 2013-07-01 false What are the requirements for an Ozone... IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.916 What are the requirements for an Ozone Transport Region under the 8-hour NAAQS? (a) In...
40 CFR 50.10 - National 8-hour primary and secondary ambient air quality standards for ozone.
Code of Federal Regulations, 2014 CFR
2014-07-01
... ambient air quality standards for ozone. 50.10 Section 50.10 Protection of Environment ENVIRONMENTAL....10 National 8-hour primary and secondary ambient air quality standards for ozone. (a) The level of the national 8-hour primary and secondary ambient air quality standards for ozone, measured by...
40 CFR 52.387 - Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS.
Code of Federal Regulations, 2011 CFR
2011-07-01
...-hour ozone and PM2.5 NAAQS. 52.387 Section 52.387 Protection of Environment ENVIRONMENTAL PROTECTION... § 52.387 Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS. On March 13, 2007, the State...)(D)(i) interstate transport requirements of the Clean Air Act for the 1997 8-hour ozone and...
40 CFR 51.916 - What are the requirements for an Ozone Transport Region under the 8-hour NAAQS?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 2 2014-07-01 2014-07-01 false What are the requirements for an Ozone... IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.916 What are the requirements for an Ozone Transport Region under the 8-hour NAAQS? (a) In...
40 CFR 52.387 - Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS.
Code of Federal Regulations, 2012 CFR
2012-07-01
...-hour ozone and PM2.5 NAAQS. 52.387 Section 52.387 Protection of Environment ENVIRONMENTAL PROTECTION... § 52.387 Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS. On March 13, 2007, the State...)(D)(i) interstate transport requirements of the Clean Air Act for the 1997 8-hour ozone and...
40 CFR 52.387 - Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS.
Code of Federal Regulations, 2014 CFR
2014-07-01
...-hour ozone and PM2.5 NAAQS. 52.387 Section 52.387 Protection of Environment ENVIRONMENTAL PROTECTION... § 52.387 Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS. On March 13, 2007, the State...)(D)(i) interstate transport requirements of the Clean Air Act for the 1997 8-hour ozone and...
40 CFR 52.387 - Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS.
Code of Federal Regulations, 2010 CFR
2010-07-01
...-hour ozone and PM2.5 NAAQS. 52.387 Section 52.387 Protection of Environment ENVIRONMENTAL PROTECTION... § 52.387 Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS. On March 13, 2007, the State...)(D)(i) interstate transport requirements of the Clean Air Act for the 1997 8-hour ozone and...
40 CFR 52.97 - Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 3 2011-07-01 2011-07-01 false Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS. 52.97 Section 52.97 Protection of Environment ENVIRONMENTAL PROTECTION....97 Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS. On February 7, 2008, the...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-24
... AGENCY 40 CFR Part 51 RIN 2060-AP30 Proposed Rule To Implement the 1997 8-Hour Ozone National Ambient Air Quality Standard: New Source Review Anti-Backsliding Provisions for Former 1-Hour Ozone Standard AGENCY... designated nonattainment for the 1997 8-hour ozone national ambient air quality standard (NAAQS). The...
40 CFR 52.387 - Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS.
Code of Federal Regulations, 2013 CFR
2013-07-01
...-hour ozone and PM2.5 NAAQS. 52.387 Section 52.387 Protection of Environment ENVIRONMENTAL PROTECTION... § 52.387 Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS. On March 13, 2007, the State...)(D)(i) interstate transport requirements of the Clean Air Act for the 1997 8-hour ozone and...
40 CFR 52.97 - Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 3 2013-07-01 2013-07-01 false Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS. 52.97 Section 52.97 Protection of Environment ENVIRONMENTAL PROTECTION....97 Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS. On February 7, 2008, the...
40 CFR 50.10 - National 8-hour primary and secondary ambient air quality standards for ozone.
Code of Federal Regulations, 2012 CFR
2012-07-01
... ambient air quality standards for ozone. 50.10 Section 50.10 Protection of Environment ENVIRONMENTAL....10 National 8-hour primary and secondary ambient air quality standards for ozone. (a) The level of the national 8-hour primary and secondary ambient air quality standards for ozone, measured by...
40 CFR 51.916 - What are the requirements for an Ozone Transport Region under the 8-hour NAAQS?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 2 2011-07-01 2011-07-01 false What are the requirements for an Ozone... IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.916 What are the requirements for an Ozone Transport Region under the 8-hour NAAQS? (a) In...
40 CFR 52.2499 - Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 5 2012-07-01 2012-07-01 false Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS. 52.2499 Section 52.2499 Protection of Environment ENVIRONMENTAL PROTECTION...) Washington § 52.2499 Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS. On January 17, 2007,...
40 CFR 51.916 - What are the requirements for an Ozone Transport Region under the 8-hour NAAQS?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 2 2012-07-01 2012-07-01 false What are the requirements for an Ozone... IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.916 What are the requirements for an Ozone Transport Region under the 8-hour NAAQS? (a) In...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-20
... AGENCY 40 CFR Part 51 RIN 2060-AP30 Rule To Implement the 1997 8-Hour Ozone National Ambient Air Quality Standard: New Source Review Anti-Backsliding Provisions for Former 1-Hour Ozone Standard--Public Hearing... is announcing a public hearing to be held for the proposed ``Rule to Implement the 1997 8-Hour...
40 CFR 50.10 - National 8-hour primary and secondary ambient air quality standards for ozone.
Code of Federal Regulations, 2011 CFR
2011-07-01
... ambient air quality standards for ozone. 50.10 Section 50.10 Protection of Environment ENVIRONMENTAL....10 National 8-hour primary and secondary ambient air quality standards for ozone. (a) The level of the national 8-hour primary and secondary ambient air quality standards for ozone, measured by...
40 CFR 50.10 - National 8-hour primary and secondary ambient air quality standards for ozone.
Code of Federal Regulations, 2013 CFR
2013-07-01
... ambient air quality standards for ozone. 50.10 Section 50.10 Protection of Environment ENVIRONMENTAL....10 National 8-hour primary and secondary ambient air quality standards for ozone. (a) The level of the national 8-hour primary and secondary ambient air quality standards for ozone, measured by...
40 CFR 51.916 - What are the requirements for an Ozone Transport Region under the 8-hour NAAQS?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 2 2010-07-01 2010-07-01 false What are the requirements for an Ozone... IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.916 What are the requirements for an Ozone Transport Region under the 8-hour NAAQS? (a) In...
40 CFR 50.10 - National 8-hour primary and secondary ambient air quality standards for ozone.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ambient air quality standards for ozone. 50.10 Section 50.10 Protection of Environment ENVIRONMENTAL....10 National 8-hour primary and secondary ambient air quality standards for ozone. (a) The level of the national 8-hour primary and secondary ambient air quality standards for ozone, measured by...
40 CFR 52.2499 - Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 4 2011-07-01 2011-07-01 false Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS. 52.2499 Section 52.2499 Protection of Environment ENVIRONMENTAL PROTECTION...) Washington § 52.2499 Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS. On January 17, 2007,...
40 CFR 52.97 - Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 3 2012-07-01 2012-07-01 false Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS. 52.97 Section 52.97 Protection of Environment ENVIRONMENTAL PROTECTION....97 Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS. On February 7, 2008, the...
40 CFR 52.97 - Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 3 2010-07-01 2010-07-01 false Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS. 52.97 Section 52.97 Protection of Environment ENVIRONMENTAL PROTECTION....97 Interstate Transport for the 1997 8-hour ozone and PM2.5 NAAQS. On February 7, 2008, the...
Sjölin, K E; Nyholm, K K
1980-05-01
The correlations of beta-aminoisobutyric acid (beta-AIB) values in 8-hour and 24-hour urinary samples from 23 healthy persons were determined. Beta-AIB in the 8-hour urinary samples was measured by gas chromatography, and the 24-hour excretion was calculated from the results of three 8-hour determinations. Simultaneous determinations of urinary creatinine were performed by Jaffe's reaction. Based on the 8-hour values of urinary beta-AIB, the results demonstrated a constant excretion of beta-aminoisobutyric acid within the 24-hour periods in both low and high excretors. The precision in distinguishing low and high 24-hour excretors of beta-AIB by using 8-hour values was 91%. If 8-hour values of beta-AIB were related to creatinine, the same precision for this calculated ratio was 96.5%. However, for high excretors of beta-AIB, failures were 24.5% by using the 8-hour excretion of beta-AIB as the indicator, but only 6.5% by using the ratio.
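The two quantities the study compares can be sketched in a few lines: a 24-hour excretion estimated by summing three consecutive 8-hour determinations, and the beta-AIB/creatinine ratio used as the better classifier. The numbers below are hypothetical, for illustration only.

```python
# Sketch (hypothetical values): estimate the 24-hour beta-AIB excretion
# from three consecutive 8-hour determinations, and normalize a single
# 8-hour value to creatinine, as described in the abstract.

def excretion_24h(eight_hour_values):
    """Sum three 8-hour excretion values to estimate the 24-hour excretion."""
    assert len(eight_hour_values) == 3
    return sum(eight_hour_values)

def aib_creatinine_ratio(aib_8h, creatinine_8h):
    """Ratio of beta-AIB to creatinine in the same 8-hour sample."""
    return aib_8h / creatinine_8h

# Hypothetical 8-hour excretion values (arbitrary units):
samples = [12.0, 11.5, 12.5]
total = excretion_24h(samples)            # 36.0
ratio = aib_creatinine_ratio(12.0, 4.0)   # 3.0
```

The ratio-based classification was the more reliable one in the study precisely because it cancels variation in urine volume between the 8-hour collections.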
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-15
... Vehicle Emission Budgets for Transportation Conformity Purposes AGENCY: Environmental Protection Agency... the Knoxville, Tennessee 1997 8-Hour Ozone Maintenance Plan are adequate for transportation conformity... used for transportation conformity determinations until EPA has affirmatively found them adequate. As...
Code of Federal Regulations, 2011 CFR
2011-07-01
... of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS REQUIREMENTS FOR PREPARATION, ADOPTION, AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National...) Submit an 8-hour ozone attainment demonstration no later than 1 year following designations...
Code of Federal Regulations, 2010 CFR
2010-07-01
... of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS REQUIREMENTS FOR PREPARATION, ADOPTION, AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National...) Submit an 8-hour ozone attainment demonstration no later than 1 year following designations...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-18
...EPA proposes to approve the State Implementation Plan (SIP) revision submitted by the Commonwealth of Virginia for the purpose of adding the 2008 8-hour ozone National Ambient Air Quality Standard (NAAQS) of 0.075 parts per million (ppm), related reference conditions, and updating the list of appendices under ``Documents Incorporated by Reference.'' In the Final Rules section of this Federal......
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-09
... .067 42-045-0002 Delaware/Pennsylvania 2009 .065 42-091-0013 Montgomery/Pennsylvania......... 2009 .070... standard (NAAQS). This extension is based in part on air quality data recorded during the 2009 ozone season. Specifically, the Philadelphia Area's 4th highest daily 8-hour monitored ozone value during the 2009...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-23
.... 40 CFR 51.907 sets forth how sections 172(a)(2)(C) and 181(a)(5) apply to an area subject to the 1997 8-hour ozone NAAQS. Under 40 CFR 51.907, an area will meet the requirement of section 172(a)(2)(C... Baltimore Moderate Nonattainment Area AGENCY: Environmental Protection Agency (EPA). ACTION: Direct...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-20
...-Hour Ozone Nonattainment Area; Texas AGENCY: Environmental Protection Agency (EPA). ACTION: Final rule... nonattainment area failed to attain the 1997 8-hour ozone national ambient air quality standard (NAAQS or... Federal Regulations (CFR) for moderate nonattainment areas. This final determination is based on...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-11
... Appendix A can be found in the ``Quality Assurance Handbook for Air Pollution Measurement Systems,'' volume...).) List of Subjects in 40 CFR Part 81 Environmental protection, Air pollution control, National parks... Baltimore nonattainment area, which is classified as moderate for the 1997 8-hour ozone National Ambient...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-13
... Ozone Standard AGENCY: U.S. Environmental Protection Agency (EPA). ACTION: Final rule. SUMMARY: EPA is... demonstrate attainment of the 1997 8-hour ozone national ambient air quality standards (NAAQS) in the Phoenix... the SIP elements required for ozone nonattainment areas under title I, part D, subpart 1 of the...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-22
... Ozone National Ambient Air Quality Standard AGENCY: Environmental Protection Agency (EPA). ACTION... 1997 8-hour ozone national ambient air quality standards (NAAQS). Specifically, EPA is proposing that... nitrogen oxides (NO X )] that contribute to ground-level ozone concentrations. B. What should I consider...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-11
... Ozone Nonattainment Area A. Background on the 1997 8-Hour Ozone NAAQS Ground-level ozone pollution is... by many types of pollution sources including on- and off-road motor vehicles and engines, power plants and industrial facilities, and smaller area sources such as lawn and garden equipment and...
Code of Federal Regulations, 2010 CFR
2010-07-01
... suspended in 8-hour ozone nonattainment areas that have air quality data that meets the NAAQS? 51.918... 8-hour Ozone National Ambient Air Quality Standard § 51.918 Can any SIP planning requirements be suspended in 8-hour ozone nonattainment areas that have air quality data that meets the NAAQS? Upon...
Code of Federal Regulations, 2013 CFR
2013-07-01
... designation for the Las Vegas, NV, 8-hour ozone nonattainment area? 51.917 Section 51.917 Protection of..., AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient... ozone nonattainment area? The Las Vegas, NV, 8-hour ozone nonattainment area (designated on September...
Code of Federal Regulations, 2014 CFR
2014-07-01
... designation for the Las Vegas, NV, 8-hour ozone nonattainment area? 51.917 Section 51.917 Protection of..., AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient... ozone nonattainment area? The Las Vegas, NV, 8-hour ozone nonattainment area (designated on September...
Code of Federal Regulations, 2013 CFR
2013-07-01
... suspended in 8-hour ozone nonattainment areas that have air quality data that meets the NAAQS? 51.918... 8-hour Ozone National Ambient Air Quality Standard § 51.918 Can any SIP planning requirements be suspended in 8-hour ozone nonattainment areas that have air quality data that meets the NAAQS? Upon...
Code of Federal Regulations, 2014 CFR
2014-07-01
... suspended in 8-hour ozone nonattainment areas that have air quality data that meets the NAAQS? 51.918... 8-hour Ozone National Ambient Air Quality Standard § 51.918 Can any SIP planning requirements be suspended in 8-hour ozone nonattainment areas that have air quality data that meets the NAAQS? Upon...
Code of Federal Regulations, 2012 CFR
2012-07-01
... suspended in 8-hour ozone nonattainment areas that have air quality data that meets the NAAQS? 51.918... 8-hour Ozone National Ambient Air Quality Standard § 51.918 Can any SIP planning requirements be suspended in 8-hour ozone nonattainment areas that have air quality data that meets the NAAQS? Upon...
Code of Federal Regulations, 2011 CFR
2011-07-01
... suspended in 8-hour ozone nonattainment areas that have air quality data that meets the NAAQS? 51.918... 8-hour Ozone National Ambient Air Quality Standard § 51.918 Can any SIP planning requirements be suspended in 8-hour ozone nonattainment areas that have air quality data that meets the NAAQS? Upon...
Code of Federal Regulations, 2012 CFR
2012-07-01
... designation for the Las Vegas, NV, 8-hour ozone nonattainment area? 51.917 Section 51.917 Protection of..., AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient... ozone nonattainment area? The Las Vegas, NV, 8-hour ozone nonattainment area (designated on September...
Code of Federal Regulations, 2010 CFR
2010-07-01
... designation for the Las Vegas, NV, 8-hour ozone nonattainment area? 51.917 Section 51.917 Protection of..., AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient... ozone nonattainment area? The Las Vegas, NV, 8-hour ozone nonattainment area (designated on September...
Code of Federal Regulations, 2011 CFR
2011-07-01
... designation for the Las Vegas, NV, 8-hour ozone nonattainment area? 51.917 Section 51.917 Protection of..., AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient... ozone nonattainment area? The Las Vegas, NV, 8-hour ozone nonattainment area (designated on September...
Observation of gamma rays with a 4.8 hour periodicity from CYG X-3
NASA Technical Reports Server (NTRS)
Lamb, R. C.; Fichtel, C. E.; Hartman, R. C.; Kniffen, D. A.; Thompson, D. J.
1976-01-01
Energetic (E > 35 MeV) gamma rays were observed from Cyg X-3 with the SAS-2 gamma-ray telescope. They are modulated at the 4.8-hour period observed in the X-ray and infrared regions, and within the statistical error are in phase with this emission. The flux above 100 MeV has an average value of (4.4 ± 1.1) × 10^-6 photons/sq cm/sec. If the distance to Cyg X-3 is 10 kpc, this flux implies a luminosity of more than 10^37 erg/s if the radiation is isotropic, and about 10^36 erg/s if the radiation is restricted to a cone of one steradian, as it might be in a pulsar.
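The flux-to-luminosity step is a standard inverse-square computation, L = 4πd² F ⟨E⟩. A minimal sketch, assuming the quoted 10 kpc distance and a mean photon energy of ~100 MeV (the mean energy is an assumption here; the abstract quotes only the integral photon flux, so the exact luminosity depends on the measured spectrum):

```python
import math

# Back-of-envelope isotropic luminosity from the quoted photon flux.
KPC_CM = 3.0857e21   # centimeters per kiloparsec
MEV_ERG = 1.602e-6   # erg per MeV

def isotropic_luminosity(photon_flux, distance_kpc, mean_energy_mev):
    """L = 4*pi*d^2 * F_N * <E>, in erg/s."""
    d = distance_kpc * KPC_CM
    return 4.0 * math.pi * d ** 2 * photon_flux * mean_energy_mev * MEV_ERG

# Flux above 100 MeV: 4.4e-6 photons/cm^2/s at an assumed 10 kpc.
L = isotropic_luminosity(4.4e-6, 10.0, 100.0)
# Roughly 8e36 erg/s under these assumptions, i.e. the order of
# magnitude quoted in the abstract.
```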
Hughes, R.L.; Yonas, H.; Gur, D.; Latchaw, R.
1989-06-01
Cerebral blood flow mapping with stable xenon-enhanced computed tomography (Xe/CT) was performed in conjunction with conventional computed tomography (CT) within the first 8 hours after the onset of symptoms in seven patients with cerebral infarction. Six patients had hemispheric infarctions, and one had a progressive brainstem infarction. Three patients with very low (less than 10 ml/100 g/min) blood flow in an anatomic area appropriate for the neurologic deficit had no clinical improvement by the time of discharge from the hospital; follow-up CT scans of these three patients confirmed infarction in the area of very low blood flow. Three patients with moderate blood flow reductions (15-45 ml/100 g/min) in the appropriate anatomic area had significant clinical improvement from their initial deficits and had normal follow-up CT scans. One patient studied 8 hours after stroke had increased blood flow (hyperemia) in the appropriate anatomic area and made no clinical recovery.
40 CFR 52.1989 - Interstate Transport for the 1997 8-hour ozone NAAQS and 1997 PM2.5 NAAQS.
Code of Federal Regulations, 2013 CFR
2013-07-01
...-hour ozone NAAQS and 1997 PM2.5 NAAQS. 52.1989 Section 52.1989 Protection of Environment ENVIRONMENTAL... (CONTINUED) Oregon § 52.1989 Interstate Transport for the 1997 8-hour ozone NAAQS and 1997 PM2.5 NAAQS. (a... 8-hour ozone NAAQS and 1997 PM2.5 NAAQS. The SIP revision also meets the requirements of Clean...
40 CFR 52.1989 - Interstate Transport for the 1997 8-hour ozone NAAQS and 1997 PM2.5 NAAQS.
Code of Federal Regulations, 2014 CFR
2014-07-01
...-hour ozone NAAQS and 1997 PM2.5 NAAQS. 52.1989 Section 52.1989 Protection of Environment ENVIRONMENTAL... (CONTINUED) Oregon § 52.1989 Interstate Transport for the 1997 8-hour ozone NAAQS and 1997 PM2.5 NAAQS. (a... 8-hour ozone NAAQS and 1997 PM2.5 NAAQS. The SIP revision also meets the requirements of Clean...
40 CFR 52.1989 - Interstate Transport for the 1997 8-hour ozone NAAQS and 1997 PM2.5 NAAQS.
Code of Federal Regulations, 2012 CFR
2012-07-01
...-hour ozone NAAQS and 1997 PM2.5 NAAQS. 52.1989 Section 52.1989 Protection of Environment ENVIRONMENTAL... (CONTINUED) Oregon § 52.1989 Interstate Transport for the 1997 8-hour ozone NAAQS and 1997 PM2.5 NAAQS. (a... 8-hour ozone NAAQS and 1997 PM2.5 NAAQS. The SIP revision also meets the requirements of Clean...
40 CFR 52.1989 - Interstate Transport for the 1997 8-hour ozone NAAQS and 1997 PM2.5 NAAQS.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 4 2011-07-01 2011-07-01 false Interstate Transport for the 1997 8-hour ozone NAAQS and 1997 PM2.5 NAAQS. 52.1989 Section 52.1989 Protection of Environment ENVIRONMENTAL... (CONTINUED) Oregon § 52.1989 Interstate Transport for the 1997 8-hour ozone NAAQS and 1997 PM2.5 NAAQS....
Code of Federal Regulations, 2011 CFR
2011-07-01
...) AIR PROGRAMS REQUIREMENTS FOR PREPARATION, ADOPTION, AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.907 For an area...
Code of Federal Regulations, 2010 CFR
2010-07-01
...) AIR PROGRAMS REQUIREMENTS FOR PREPARATION, ADOPTION, AND SUBMITTAL OF IMPLEMENTATION PLANS Provisions for Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.907 For an area...
Code of Federal Regulations, 2014 CFR
2014-07-01
... for Implementation of 8-hour Ozone National Ambient Air Quality Standard § 51.907 For an area that... 40 Protection of Environment 2 2014-07-01 2014-07-01 false For an area that fails to attain the 8... the CAA? 51.907 Section 51.907 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY...
Code of Federal Regulations, 2014 CFR
2014-07-01
... NAAQS to the 1997 8-hour NAAQS and what are the anti-backsliding provisions? 51.905 Section 51.905... National Ambient Air Quality Standard § 51.905 How do areas transition from the 1-hour NAAQS to the 1997 8... implement the applicable requirements as defined in § 51.900(f), except as provided in paragraph...
Sather, Mark E; Cavender, Kevin
2016-07-13
In the last 30 years, ambient ozone concentrations have notably decreased in the South Central U.S. Yet ambient ozone concentrations measured over the past three years (2013-2015) in this area of the U.S. still do not meet the U.S. 2015 8-hour ozone standard of 70 parts per billion (ppb). This paper provides an update on long-term trends analyses of ambient 8-hour ozone and ozone precursor monitoring data collected over the past 30 years (1986-2015) in four South Central U.S. cities, following up on two previously published reviews of 20- and 25-year trends for these cities. All four cities have benefitted from national ozone precursor controls put in place during the 1990s and 2000s involving cleaner vehicles (vehicle fleet turnover/replacement over time), cleaner fuels, cleaner gasoline and diesel engines, and improved inspection/maintenance programs for existing vehicles. Additional ozone precursor emission controls specific to each city are detailed in this paper. The controls have resulted in impressive reductions in ambient ozone and ozone precursor concentrations in the four South Central U.S. cities over the past 30 years, including 31-70% declines in ambient nitrogen oxides (NOx) concentrations from historical peaks to the present, 43-72% declines in volatile organic compound (VOC) concentrations from historical peaks to the present, a related 45-76% decline in VOC reactivity for a subset of VOC species from historical peaks to the present, and an 18-38 ppb reduction in city 8-hour ozone design value concentrations. A new challenge for each of the four South Central U.S. cities will be meeting the U.S. 2015 8-hour ozone standard of 70 ppb.
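The "8-hour ozone design value" used throughout these entries has a specific definition: the 3-year average of each year's 4th-highest daily maximum 8-hour ozone concentration, truncated to whole ppb. A short sketch with hypothetical data:

```python
# Sketch of the 8-hour ozone design-value computation: 3-year average of
# annual 4th-highest daily maximum 8-hour values, truncated to whole ppb.
# The daily values below are hypothetical.

def fourth_highest(daily_max_8h):
    """4th-highest daily maximum 8-hour ozone value for one year (ppb)."""
    return sorted(daily_max_8h, reverse=True)[3]

def design_value(three_years):
    """3-year average of annual 4th-highest values, truncated to ppb."""
    annual = [fourth_highest(year) for year in three_years]
    return int(sum(annual) / 3.0)  # truncation, not rounding

# Hypothetical daily max 8-hour values (ppb) for three ozone seasons:
years = [
    [82, 79, 77, 75, 70, 68],
    [80, 78, 74, 72, 69, 65],
    [76, 75, 73, 71, 68, 64],
]
dv = design_value(years)  # 72 ppb -> exceeds the 70 ppb standard
```

This is why a single hot season's 4th-highest value (as in the Philadelphia Area entry above) can move an area's attainment status.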
Llorca, Julio; Gutiérrez, Cristina; Capilla, Elisabeth; Tortajada, Rafael; Sanjuán, Lorena; Fuentes, Alicia; Valor, Ignacio
2009-07-31
Two innovative integrative samplers have been developed enabling high sampling rates unaffected by turbulence (thus avoiding the use of performance reference compounds) and with negligible lag times. The first, called the constantly stirred sorbent (CSS), consists of a rotator head that holds the sorbent. The rotation speed given to the head generates a constant turbulence around the sorbent, making it independent of the external hydrodynamics. The second, called the continuous flow integrative sampler (CFIS), consists of a small peristaltic pump which produces a constant flow through a glass cell. The sorbent is located inside this cell. Although different sorbents can be used, poly(dimethylsiloxane) (PDMS) under the commercial twister format (typically used for stir bar sorptive extraction) was evaluated for the sampling of six polycyclic aromatic hydrocarbons and three organochlorine pesticides. These new devices have many analogies with passive samplers but cannot truly be defined as such, since they need a small energy supply of around 0.5 W provided by a battery. Sampling rates from 0.181 to 0.791 L/day were obtained with CSS and 0.018 to 0.053 L/day with CFIS. Limits of detection for these devices are in the range from 0.3 to 544 pg/L, with a precision below 20%. An in-field evaluation of both devices was carried out over a 5-day sampling period at the outlet of a wastewater treatment plant, with results comparable to those obtained with a classical sampling method.
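Integrative samplers like these (and the SPME devices in the header abstract) are read out with the same simple relation: the time-weighted average concentration is the accumulated mass divided by the sampling rate times the deployment time. A minimal sketch, with hypothetical values chosen inside the sampling-rate range reported above:

```python
# Sketch: time-weighted average (TWA) concentration from an integrative
# sampler, C = m / (Rs * t), where m is the analyte mass accumulated on
# the sorbent, Rs the sampling rate, and t the deployment time.

def twa_concentration(mass_ng, sampling_rate_l_per_day, days):
    """TWA concentration in ng/L."""
    return mass_ng / (sampling_rate_l_per_day * days)

# Hypothetical: 10 ng accumulated over a 5-day deployment at Rs = 0.5 L/day
c = twa_concentration(10.0, 0.5, 5.0)  # 4.0 ng/L
```

The practical consequence of a turbulence-independent Rs is that this one calibration constant holds in the field, without performance reference compounds.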
Abbott, Carla J.; Choe, Tiffany E.; Lusardi, Theresa A.; Burgoyne, Claude F.; Wang, Lin; Fortune, Brad
2014-01-01
Purpose. To compare in vivo retinal nerve fiber layer thickness (RNFLT) and axonal transport at 1 and 2 weeks after an 8-hour acute IOP elevation in rats. Methods. Forty-seven adult male Brown Norway rats were used. Procedures were performed under anesthesia. The IOP was manometrically elevated to 50 mm Hg or held at 15 mm Hg (sham) for 8 hours unilaterally. The RNFLT was measured by spectral-domain optical coherence tomography. Anterograde and retrograde axonal transport was assessed from confocal scanning laser ophthalmoscopy imaging 24 hours after bilateral injections of 2 μL 1% cholera toxin B-subunit conjugated to AlexaFluor 488 into the vitreous or superior colliculi, respectively. Retinal ganglion cell (RGC) and microglial densities were determined using antibodies against Brn3a and Iba-1. Results. The RNFLT in experimental eyes increased from baseline by 11% at 1 day (P < 0.001), peaked at 19% at 1 week (P < 0.0001), remained 11% thicker at 2 weeks (P < 0.001), recovered at 3 weeks (P > 0.05), and showed no sign of thinning at 6 weeks (P > 0.05). There was no disruption of anterograde transport at 1 week (superior colliculi fluorescence intensity, 75.3 ± 7.9 arbitrary units [AU] for the experimental eyes and 77.1 ± 6.7 AU for the control eyes) (P = 0.438) or 2 weeks (P = 0.188). There was no obstruction of retrograde transport at 1 week (RGC density, 1651 ± 153 per mm2 for the experimental eyes and 1615 ± 135 per mm2 for the control eyes) (P = 0.63) or 2 weeks (P = 0.25). There was no loss of Brn3a-positive RGC density at 6 weeks (P = 0.74) and no increase in microglial density (P = 0.92). Conclusions. Acute IOP elevation to 50 mm Hg for 8 hours does not cause a persisting axonal transport deficit at 1 or 2 weeks or a detectable RNFLT or RGC loss by 6 weeks but does lead to transient RNFL thickening that resolves by 3 weeks. PMID:24398096
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, related work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
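For the scalar-weighted case, the optimal average quaternion described above is the eigenvector associated with the largest eigenvalue of M = Σᵢ wᵢ qᵢqᵢᵀ, which correctly handles the sign ambiguity (q and -q represent the same rotation, and qᵢqᵢᵀ is invariant under the flip). A minimal sketch using power iteration rather than a library eigensolver:

```python
# Sketch of scalar-weighted quaternion averaging: the average is the
# dominant eigenvector of M = sum_i w_i * q_i q_i^T (4x4, symmetric PSD),
# found here by plain power iteration.

def average_quaternion(quats, weights, iters=200):
    """Average of unit quaternions (4-tuples) with scalar weights."""
    # Accumulate M = sum_i w_i * q_i q_i^T.
    M = [[0.0] * 4 for _ in range(4)]
    for q, w in zip(quats, weights):
        for a in range(4):
            for b in range(4):
                M[a][b] += w * q[a] * q[b]
    # Power iteration for the dominant eigenvector.
    v = [1.0, 0.0, 0.0, 0.0]
    for _ in range(iters):
        v = [sum(M[a][b] * v[b] for b in range(4)) for a in range(4)]
        n = sum(x * x for x in v) ** 0.5
        v = [x / n for x in v]
    return v

# Two nearby attitudes; the result is only defined up to an overall sign.
q1 = (1.0, 0.0, 0.0, 0.0)
q2 = (0.9998, 0.02, 0.0, 0.0)
avg = average_quaternion([q1, q2], [0.5, 0.5])
```

Note this is not component-wise averaging followed by normalization; the eigenvector formulation is what makes the result independent of the arbitrary signs of the input quaternions.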
Horvath, S.M.; Bedi, J.F. )
1989-10-01
Seventeen non-smoking young men served as subjects to determine the alteration in carboxyhemoglobin (COHb) concentrations during exposure to 0 or 9 ppm carbon monoxide (CO) for 8 hours at sea level or at an altitude of 2134 meters (7000 feet) in a hypobaric chamber. Nine subjects rested during the exposure and 8 exercised for 10 minutes of each exposure hour at a mean ventilation of 25 L (BTPS). All subjects performed a maximal aerobic capacity test at the completion of their respective exposures. Carboxyhemoglobin concentrations fell in all subjects during their exposures to 0 ppm CO at sea level or 2134 m. During the 8-h exposures to 9 ppm CO, COHb rose linearly from approximately 0.2 percent to 0.7 percent. No significant differences in uptake were found whether the subjects were resting or intermittently exercising during their 8-h exposures. COHb levels attained were similar at both sea level and 2134 m. Maximal aerobic capacity was reduced approximately 7-10 percent consequent to altitude exposure during 0 ppm CO exposures. These values were not altered following exposure for 8 h to 9 ppm CO in either the resting or exercising subjects.
Huang, Jianyin; Bennett, William W; Welsh, David T; Teasdale, Peter R
2016-12-08
Commercially available AMI-7001 anion exchange and CMI-7000 cation exchange membranes were utilised as binding layers for DGT measurements of NO3-N and NH4-N in freshwaters. These ion exchange membranes are easier to prepare and handle than DGT binding layers consisting of hydrogels cast with ion exchange resins. The membranes showed good uptake and elution efficiencies for both NO3-N and NH4-N. The membrane-based DGTs are suitable for pH 3.5-8.5 and the ionic strength ranges (0.0001-0.014 and 0.0003-0.012 mol L(-1) as NaCl for the AMI-7001 and CMI-7000 membranes, respectively) typical of most natural freshwaters. The binding membranes had high intrinsic binding capacities for NO3-N and NH4-N of 911 ± 88 μg and 3512 ± 51 μg, respectively. Interferences from the major competing ions for membrane-based DGTs are similar to those for DGTs employing resin-based binding layers, but with slightly different selectivity. This different selectivity means that the two DGT types can be used in different types of freshwaters. The laboratory and field experiments demonstrated that AMI-DGT and CMI-DGT can be an alternative to A520E-DGT and PrCH-DGT for measuring NO3-N and NH4-N, respectively, as (i) membrane-based DGTs have a consistent composition, (ii) they avoid the use of toxic chemicals, (iii) they provided highly representative results (CDGT : CSOLN between 0.81 and 1.3), and (iv) they agreed with resin-based DGTs to within 85-120%.
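The CDGT values compared against solution concentrations above come from the standard DGT equation, CDGT = M·Δg/(D·A·t). A minimal sketch, with all numeric inputs hypothetical:

```python
# Sketch of the standard DGT equation: C_DGT = M * dg / (D * A * t),
# where M is the mass bound by the binding layer, dg the diffusive layer
# thickness, D the diffusion coefficient of the analyte in the gel,
# A the exposure window area, and t the deployment time.

def c_dgt(mass_ug, dg_cm, D_cm2_s, area_cm2, t_s):
    """Time-averaged concentration in ug per cm^3 (i.e. ug/mL)."""
    return mass_ug * dg_cm / (D_cm2_s * area_cm2 * t_s)

# Hypothetical: 0.5 ug NO3-N bound, 0.094 cm diffusive layer,
# D = 1.7e-5 cm^2/s, 3.14 cm^2 window, 24-hour deployment.
c = c_dgt(0.5, 0.094, 1.7e-5, 3.14, 24 * 3600.0)  # ~0.010 ug/mL
```

Because M grows linearly with t, the returned concentration is inherently a time-weighted average over the deployment, which is what makes DGT comparable to the other integrative samplers in this collection.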
Development of accumulated heat stress index based on time-weighted function
NASA Astrophysics Data System (ADS)
Lee, Ji-Sun; Byun, Hi-Ryong; Kim, Do-Woo
2016-05-01
Heat stress accumulates in the human body when a person is exposed to a thermal condition for a long time. Considering this fact, we have defined the accumulated heat stress (AH) and have developed the accumulated heat stress index (AHI) to quantify the strength of heat stress. AH represents the heat stress accumulated in a 72-h period, calculated by the use of a time-weighted function, and the AHI is a standardized index developed by the use of an equiprobability transformation (from a fitted Weibull distribution to the standard normal distribution). To verify the advantage offered by the AHI, it was compared with four thermal indices used by national governments: the humidex, the heat index, the wet-bulb globe temperature, and the perceived temperature. AH and the AHI were found to provide better detection of thermal danger and were more useful than the other indices. In particular, AH and the AHI detect deaths that were caused not only by extremely hot and humid weather, but also by the persistence of moderately hot and humid weather (for example, consecutive daily maximum temperatures of 28-32 °C), which the other indices fail to detect.
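The equiprobability transformation can be sketched directly: a fitted Weibull CDF maps an AH value to a cumulative probability, and the standard normal inverse CDF turns that probability into a standardized index. The Weibull parameters below are assumed for illustration, not taken from the paper.

```python
# Sketch of the Weibull -> standard normal equiprobability transform.
# Shape/scale parameters are hypothetical; in practice they are fitted
# to the climatological distribution of AH at a station.

import math
from statistics import NormalDist

def ahi(ah, shape, scale):
    """Standardized index via the equiprobability transformation."""
    p = 1.0 - math.exp(-((ah / scale) ** shape))  # Weibull CDF
    p = min(max(p, 1e-12), 1.0 - 1e-12)           # guard the tails
    return NormalDist().inv_cdf(p)

# At the median of the fitted Weibull distribution the index is ~0:
median_ah = 2.0 * math.log(2.0) ** (1.0 / 1.5)    # scale=2.0, shape=1.5
z = ahi(median_ah, 1.5, 2.0)                      # ~0.0
```

This is the same construction used by standardized drought indices: the index is interpretable as a z-score, so "AHI above 2" means AH in roughly the top 2.3% of its climatology.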
Chrien, R.E.
1986-10-01
The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.
Areal Average Albedo (AREALAVEALB)
Riihimaki, Laura; Marinovici, Cristina; Kassianov, Evgueni
2008-01-01
The Areal Averaged Albedo VAP yields areal averaged surface spectral albedo estimates from MFRSR measurements collected under fully overcast conditions via a simple one-line equation (Barnard et al., 2008), which links cloud optical depth, normalized cloud transmittance, asymmetry parameter, and areal averaged surface albedo under fully overcast conditions.
States' Average College Tuition.
ERIC Educational Resources Information Center
Eglin, Joseph J., Jr.; And Others
This report presents statistical data on trends in tuition costs from 1980-81 through 1995-96. The average tuition for in-state undergraduate students of 4-year public colleges and universities for academic year 1995-96 was approximately 8.9 percent of median household income. This figure was obtained by dividing the students' average annual…
ERIC Educational Resources Information Center
Siegel, Irving H.
The arithmetic processes of aggregation and averaging are basic to quantitative investigations of employment, unemployment, and related concepts. In explaining these concepts, this report stresses need for accuracy and consistency in measurements, and describes tools for analyzing alternative measures. (BH)
Threaded average temperature thermocouple
NASA Technical Reports Server (NTRS)
Ward, Stanley W. (Inventor)
1990-01-01
A threaded average temperature thermocouple 11 is provided to measure the average temperature of a test situs of a test material 30. A ceramic insulator rod 15 with two parallel holes 17 and 18 through the length thereof is securely fitted in a cylinder 16, which is bored along the longitudinal axis of symmetry of threaded bolt 12. Threaded bolt 12 is composed of material having thermal properties similar to those of test material 30. Leads of a thermocouple wire 20 leading from a remotely situated temperature sensing device 35 are each fed through one of the holes 17 or 18, secured at head end 13 of ceramic insulator rod 15, and exit at tip end 14. Each lead of thermocouple wire 20 is bent into and secured in an opposite radial groove 25 in tip end 14 of threaded bolt 12. Resulting threaded average temperature thermocouple 11 is ready to be inserted into cylindrical receptacle 32. The tip end 14 of the threaded average temperature thermocouple 11 is in intimate contact with receptacle 32. A jam nut 36 secures the threaded average temperature thermocouple 11 to test material 30.
Reznik, Ed; Chaudhary, Osman; Segrè, Daniel
2013-01-01
The Michaelis-Menten equation for an irreversible enzymatic reaction depends linearly on the enzyme concentration. Even if the enzyme concentration changes in time, this linearity implies that the amount of substrate depleted during a given time interval depends only on the average enzyme concentration. Here, we use a time re-scaling approach to generalize this result to a broad category of multi-reaction systems, whose constituent enzymes have the same dependence on time, e.g. they belong to the same regulon. This “average enzyme principle” provides a natural methodology for jointly studying metabolism and its regulation. PMID:23892076
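The principle is easy to verify numerically: substrate depletion under a time-varying enzyme profile matches depletion under a constant enzyme fixed at the time average. The rate law below is standard Michaelis-Menten kinetics, but the parameter values and oscillating enzyme profile are illustrative assumptions, not taken from the paper.

```python
# Numerical check of the "average enzyme principle" for an irreversible
# Michaelis-Menten reaction: dS/dt = -kcat * E(t) * S / (Km + S).
# Because E(t) enters the rate linearly, substrate depletion over [0, T]
# depends only on the time-averaged enzyme concentration.
import math

def deplete(enzyme, T=10.0, S0=5.0, kcat=1.0, Km=2.0, n=200_000):
    """Euler-integrate substrate depletion with enzyme level enzyme(t)."""
    dt = T / n
    S = S0
    for i in range(n):
        t = i * dt
        S -= dt * kcat * enzyme(t) * S / (Km + S)
    return S

T = 10.0
# Oscillating enzyme level whose time average over [0, T] is exactly 1.0
varying = lambda t: 1.0 + 0.5 * math.sin(2 * math.pi * t / T)
constant = lambda t: 1.0

S_var = deplete(varying, T)
S_const = deplete(constant, T)
print(S_var, S_const)  # nearly identical final substrate amounts
```

The time re-scaling argument makes the agreement exact in the continuum limit: substituting tau = integral of E(t) dt turns the rate law into one with unit enzyme concentration.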
Haas, C N; Heller, B
1988-01-01
When plate count methods are used for microbial enumeration, if too-numerous-to-count results occur, they are commonly discarded. In this paper, a method for consideration of such results in computation of an average microbial density is developed, and its use is illustrated by example. PMID:3178211
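The paper's exact development is not reproduced here, but one standard way to use TNTC results, sketched under the assumption of Poisson-distributed plate counts, is to treat them as right-censored observations (count at or above some limit L) and maximize the joint likelihood. All counts, the limit, and the function names below are hypothetical.

```python
# Illustrative sketch (assuming Poisson counts, not necessarily the paper's
# model): maximum-likelihood estimate of mean microbial density lambda when
# some plates are "too numerous to count" (TNTC), i.e. right-censored at L.
import math

def log_pmf(c, lam):
    # log of the Poisson probability mass function
    return c * math.log(lam) - lam - math.lgamma(c + 1)

def log_survival(L, lam, tail=1500):
    # log P(X >= L), computed stably by summing the upper tail in log space
    terms = [log_pmf(k, lam) for k in range(L, L + tail)]
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))

def mle_density(observed, n_tntc, L):
    """Grid-search MLE of mean density from counts plus TNTC plates."""
    def loglik(lam):
        ll = sum(log_pmf(c, lam) for c in observed)
        if n_tntc:
            ll += n_tntc * log_survival(L, lam)
        return ll
    grid = [0.5 * k for k in range(2, 801)]  # lambda in [1, 400]
    return max(grid, key=loglik)

# With no TNTC plates the MLE is just the sample mean:
lam_plain = mle_density([40, 50, 60], n_tntc=0, L=300)
# TNTC plates pull the estimate upward instead of being discarded:
lam_cens = mle_density([40, 50, 60], n_tntc=2, L=300)
print(lam_plain, lam_cens)
```

The point of the exercise matches the abstract's motivation: discarding TNTC plates throws away information and biases the average density downward, whereas the censored likelihood retains it.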
Bradley, Paul M.; Journey, Celeste; Brigham, Mark E.; Burns, Douglas A.; Button, Daniel T.; Riva-Murray, Karen
2013-01-01
To assess inter-comparability of fluvial mercury (Hg) observations at substantially different scales, Hg concentrations, yields, and bivariate-relations were evaluated at nested-basin locations in the Edisto River, South Carolina and Hudson River, New York. Differences between scales were observed for filtered methylmercury (FMeHg) in the Edisto (attributed to wetland coverage differences) but not in the Hudson. Total mercury (THg) concentrations and bivariate-relationships did not vary substantially with scale in either basin. Combining results of this and a previously published multi-basin study, fish Hg correlated strongly with sampled water FMeHg concentration (ρ = 0.78; p = 0.003) and annual FMeHg basin yield (ρ = 0.66; p = 0.026). Improved correlation (ρ = 0.88; p < 0.0001) was achieved with time-weighted mean annual FMeHg concentrations estimated from basin-specific LOADEST models and daily streamflow. Results suggest reasonable scalability and inter-comparability for different basin sizes if wetland area or related MeHg-source-area metrics are considered.
Iida, Hidehiro; Bloomfield, P.M.; Miura, Shuichi
1995-03-01
A system has been developed to rapidly calculate images of parametric rate constants for clinical positron emission tomography (PET) without acquiring dynamic frame data. The method is based on weighted-integration algorithms for the two- and three-compartment models, together with hardware developments (real-time operation and a large cache memory system) in a PET scanner, Headtome-IV, which enable the acquisition of multiple sinograms with independent weight-integration functions. Following the administration of the radiotracer, the scan is initiated to collect multiple time-weighted, integrated sinograms with three different weight functions. These sinograms are reconstructed, and the images, with the arterial blood data, are inserted into the operational equations to provide parametric rate constant images. The implementation of this method has been checked in H₂¹⁵O and ¹⁸F-fluorophenylalanine (¹⁸FPhe) studies based on a two-compartment model, and in an ¹⁸F-fluorodeoxyglucose (¹⁸FDG) study based on the three-compartment model. A volunteer study, completed for each compound, yielded results consistent with those produced by existing nonlinear fitting methods. Thus, a system capable of rapidly generating quantitative physiological images without dynamic data acquisition has been developed, which will be of great advantage to PET in the clinical environment. This system would also be of great advantage in the new generation of high-resolution PET tomographs, which acquire data in a 3-D, septaless mode.
Americans' Average Radiation Exposure
NA
2000-08-11
We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.
Temperature averaging thermal probe
NASA Technical Reports Server (NTRS)
Kalil, L. F.; Reinhardt, V. (Inventor)
1985-01-01
A thermal probe to average temperature fluctuations over a prolonged period was formed with a temperature sensor embedded inside a solid object of a thermally conducting material. The solid object is held in a position equidistantly spaced apart from the interior surfaces of a closed housing by a mount made of a thermally insulating material. The housing is sealed to trap a vacuum or mass of air inside and thereby prevent transfer of heat directly between the environment outside of the housing and the solid object. Electrical leads couple the temperature sensor with a connector on the outside of the housing. Other solid objects of different sizes and materials may be substituted for the cylindrically-shaped object to vary the time constant of the probe.
Temperature averaging thermal probe
NASA Astrophysics Data System (ADS)
Kalil, L. F.; Reinhardt, V.
1985-12-01
A thermal probe to average temperature fluctuations over a prolonged period was formed with a temperature sensor embedded inside a solid object of a thermally conducting material. The solid object is held in a position equidistantly spaced apart from the interior surfaces of a closed housing by a mount made of a thermally insulating material. The housing is sealed to trap a vacuum or mass of air inside and thereby prevent transfer of heat directly between the environment outside of the housing and the solid object. Electrical leads couple the temperature sensor with a connector on the outside of the housing. Other solid objects of different sizes and materials may be substituted for the cylindrically-shaped object to vary the time constant of the probe.
Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average
ERIC Educational Resources Information Center
DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.
2007-01-01
Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…
Normally Expected Aberrations in the 8-hour Dynamic EKG
NASA Technical Reports Server (NTRS)
Fleck, R. L.; Arnoldi, L. B.; Townsend, J. C.; Tonesk, X.
1970-01-01
The establishment of norms for interpreting long-term dynamic electrocardiograms is attempted by correlating a sample completely free of disease symptoms and cardiac risk factors with a non-pure sample in the direction of normality on various variables. In a population of 362 subjects exposed to dynamic electrocardiogram testing, a discrimination between normals and abnormals in terms of traditional risk factors was observed. The two groups differed significantly on the following variables: cholesterol, smoking, systolic blood pressure, white blood count, fasting blood sugar, uric acid, resting EKG, year of birth, and coronary insufficiency.
NASA Astrophysics Data System (ADS)
Samuvel, K.; Ramachandran, K.
2016-05-01
BaTi0.5Co0.5O3 (BTCO) nanoparticles were prepared by the solid-state reaction technique using different starting materials, and the microstructure was examined by XRD, FESEM, BDS and VSM. X-ray diffraction and electron diffraction patterns showed that the nanoparticles were of the tetragonal BTCO phase. The BTCO nanoparticles prepared from as-prepared titanium oxide, cobalt oxide and barium carbonate starting materials have spherical grain morphology, an average size of 65 nm and a fairly narrow size distribution. The nanoscale character and the formation of the tetragonal perovskite phase, as well as the crystallinity, were confirmed using the mentioned techniques. Dielectric properties of the samples were measured at different frequencies. Broadband dielectric spectroscopy was applied to investigate the electrical properties of disordered perovskite-like ceramics over a wide temperature range. The doped BTCO samples exhibited a low loss factor at 1 kHz and 1 MHz.
Averaging Models: Parameters Estimation with the R-Average Procedure
ERIC Educational Resources Information Center
Vidotto, G.; Massidda, D.; Noventa, S.
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…
2011-01-01
Background Approximately one third of New Zealand children and young people are overweight or obese. A similar proportion (33%) do not meet recommendations for physical activity, and 70% do not meet recommendations for screen time. Increased time being sedentary is positively associated with being overweight. There are few family-based interventions aimed at reducing sedentary behavior in children. The aim of this trial is to determine the effects of a 24 week home-based, family oriented intervention to reduce sedentary screen time on children's body composition, sedentary behavior, physical activity, and diet. Methods/Design The study design is a pragmatic two-arm parallel randomized controlled trial. Two hundred and seventy overweight children aged 9-12 years and primary caregivers are being recruited. Participants are randomized to intervention (family-based screen time intervention) or control (no change). At the end of the study, the control group is offered the intervention content. Data collection is undertaken at baseline and 24 weeks. The primary trial outcome is child body mass index (BMI) and standardized body mass index (zBMI). Secondary outcomes are change from baseline to 24 weeks in child percentage body fat; waist circumference; self-reported average daily time spent in physical and sedentary activities; dietary intake; and enjoyment of physical activity and sedentary behavior. Secondary outcomes for the primary caregiver include change in BMI and self-reported physical activity. Discussion This study provides an excellent example of a theory-based, pragmatic, community-based trial targeting sedentary behavior in overweight children. The study has been specifically designed to allow for estimation of the consistency of effects on body composition for Māori (indigenous), Pacific and non-Māori/non-Pacific ethnic groups. If effective, this intervention is imminently scalable and could be integrated within existing weight management programs. Trial
The Average of Rates and the Average Rate.
ERIC Educational Resources Information Center
Lindstrom, Peter
1988-01-01
Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel-economy estimates and average rates of motion. Gives example problems and solutions. (CW)
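The distinction the article draws can be shown with a classic example: over equal distances, the true average rate is the harmonic mean of the rates, not their arithmetic mean. The speeds below are illustrative.

```python
# Averaging rates vs. computing the average rate: for equal distances
# traveled, total distance / total time equals the harmonic mean of the
# rates, which is always at most the arithmetic mean.
def arithmetic_mean(rates):
    return sum(rates) / len(rates)

def harmonic_mean(rates):
    return len(rates) / sum(1.0 / r for r in rates)

# Drive the same distance at 30 mph, then at 60 mph.
rates = [30.0, 60.0]
print(arithmetic_mean(rates))  # 45.0 -- the naive "average of rates"
print(harmonic_mean(rates))    # 40.0 -- the true average rate
```

The weighted harmonic mean handles unequal distances: weight each rate by the distance covered at that rate.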
High average power Pockels cell
Daly, Thomas P.
1991-01-01
A high average power Pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The Pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.
Determining GPS average performance metrics
NASA Technical Reports Server (NTRS)
Moore, G. V.
1995-01-01
Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego than over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.
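The latitude dependence the abstract mentions follows from orbital geometry: a satellite on a circular orbit of inclination i spends a dwell-time fraction per unit latitude proportional to cos(φ)/sqrt(sin²i − sin²φ), which peaks sharply near φ = ±i. With the GPS inclination of about 55°, this puts far more dwell time over latitudes like Tierra del Fuego's than over Hawaii's. This is a standard geometric result sketched for illustration, not code from the paper.

```python
# Dwell-time density in geodetic latitude for a satellite on a circular
# orbit of inclination i (a standard result, used here to illustrate the
# non-uniform GPS coverage the abstract describes):
#   f(phi) = cos(phi) / (pi * sqrt(sin(i)^2 - sin(phi)^2)),  |phi| < i.
import math

def latitude_dwell_density(phi_deg, incl_deg=55.0):
    phi, incl = math.radians(phi_deg), math.radians(incl_deg)
    return math.cos(phi) / (math.pi * math.sqrt(math.sin(incl) ** 2
                                                - math.sin(phi) ** 2))

# Roughly Tierra del Fuego vs. Hawaii latitudes:
ratio = latitude_dwell_density(54.0) / latitude_dwell_density(21.3)
print(round(ratio, 1))  # several times more dwell time at high latitude
```

The density integrates to 1 over −i < φ < i, so it can be read directly as the probability of finding a satellite over a given latitude band.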
Evaluations of average level spacings
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of ¹⁶⁸Er data. 19 figures, 2 tables.
Vibrational averages along thermal lines
NASA Astrophysics Data System (ADS)
Monserrat, Bartomeu
2016-01-01
A method is proposed for the calculation of vibrational quantum and thermal expectation values of physical properties from first principles. Thermal lines are introduced: these are lines in configuration space parametrized by temperature, such that the value of any physical property along them is approximately equal to the vibrational average of that property. The number of sampling points needed to explore the vibrational phase space is reduced by up to an order of magnitude when the full vibrational density is replaced by thermal lines. Calculations of the vibrational averages of several properties and systems are reported, namely, the internal energy and the electronic band gap of diamond and silicon, and the chemical shielding tensor of L-alanine. Thermal lines pave the way for complex calculations of vibrational averages, including large systems and methods beyond semilocal density functional theory.
Polyhedral Painting with Group Averaging
ERIC Educational Resources Information Center
Farris, Frank A.; Tsao, Ryan
2016-01-01
The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…
Averaging Robertson-Walker cosmologies
Brown, Iain A.; Robbers, Georg; Behrend, Juliane
2009-04-15
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω⁰_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.
Achronal averaged null energy condition
Graham, Noah; Olum, Ken D.
2007-09-15
The averaged null energy condition (ANEC) requires that the integral over a complete null geodesic of the stress-energy tensor projected onto the geodesic tangent vector is never negative. This condition is sufficient to prove many important theorems in general relativity, but it is violated by quantum fields in curved spacetime. However there is a weaker condition, which is free of known violations, requiring only that there is no self-consistent spacetime in semiclassical gravity in which ANEC is violated on a complete, achronal null geodesic. We indicate why such a condition might be expected to hold and show that it is sufficient to rule out closed timelike curves and wormholes connecting different asymptotically flat regions.
Flexible time domain averaging technique
NASA Astrophysics Data System (ADS)
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics, which may be caused by some faults such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to different extents. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the algorithm of FTDA, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for the fault symptom extraction of rotating machinery.
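For contrast with the proposed FTDA, the conventional TDA baseline is simple to state: slice the signal into segments of one known period and average them, which reinforces periodic components and attenuates broadband noise by roughly the square root of the number of segments. A minimal sketch on a synthetic signal (all parameters illustrative):

```python
# Conventional time domain averaging (TDA): average consecutive segments
# of one period. Periodic content adds coherently; zero-mean noise is
# attenuated by about sqrt(number of segments averaged).
import math
import random

def time_domain_average(signal, period):
    """Average consecutive segments of length `period` (in samples)."""
    n_seg = len(signal) // period
    return [sum(signal[k * period + i] for k in range(n_seg)) / n_seg
            for i in range(period)]

period, n_seg = 100, 200
random.seed(0)
clean = [math.sin(2 * math.pi * i / period) for i in range(period)]
noisy = [clean[i % period] + random.gauss(0, 1.0)
         for i in range(period * n_seg)]

avg = time_domain_average(noisy, period)
rms_err = math.sqrt(sum((a - c) ** 2 for a, c in zip(avg, clean)) / period)
print(rms_err)  # roughly 1/sqrt(200), far below the unit noise level
```

This sketch assumes the period is known exactly and is an integer number of samples; the PCE the abstract discusses is precisely the error incurred when it is not.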
ERIC Educational Resources Information Center
Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri
2006-01-01
Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
Averaging and Adding in Children's Worth Judgements
ERIC Educational Resources Information Center
Schlottmann, Anne; Harman, Rachel M.; Paine, Julie
2012-01-01
Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…
Code of Federal Regulations, 2010 CFR
2010-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...
Code of Federal Regulations, 2012 CFR
2012-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...
Code of Federal Regulations, 2011 CFR
2011-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...
Code of Federal Regulations, 2013 CFR
2013-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...
Code of Federal Regulations, 2014 CFR
2014-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...
Designing Digital Control Systems With Averaged Measurements
NASA Technical Reports Server (NTRS)
Polites, Michael E.; Beale, Guy O.
1990-01-01
Rational criteria represent an improvement over the "cut-and-try" approach. A recent development in the theory of control systems yields improvements in the mathematical modeling and design of digital feedback controllers using time-averaged measurements. By using one of the new formulations for systems with time-averaged measurements, the designer takes the averaging effect into account when modeling the plant, eliminating the need to iterate the design and simulation phases.
Bayesian Model Averaging for Propensity Score Analysis.
Kaplan, David; Chen, Jianshen
2014-01-01
This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA but that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam's window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.
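The core model-averaging step can be sketched independently of the propensity-score machinery: approximate posterior model probabilities from BIC and average the per-model estimates under those weights. This is a generic BMA sketch, not the BMA package's implementation, and the BIC values and per-model estimates below are hypothetical.

```python
# Generic Bayesian model averaging via the BIC approximation: the posterior
# probability of model m is proportional to exp(-BIC_m / 2), and quantities
# of interest are averaged under those weights. Inputs are hypothetical.
import math

def bma_weights(bics):
    """Posterior model probabilities from BIC (best model anchored at 1)."""
    best = min(bics)
    raw = [math.exp(-(b - best) / 2) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

bics = [100.0, 102.0, 110.0]    # hypothetical BICs for three candidate models
estimates = [1.50, 1.20, 0.40]  # hypothetical per-model effect estimates
w = bma_weights(bics)
averaged = sum(wi * ei for wi, ei in zip(w, estimates))
print([round(x, 3) for x in w], round(averaged, 3))
```

A fully Bayesian treatment, as in the article, replaces these BIC weights with MCMC over both parameters and models, which also propagates uncertainty into the averaged estimate.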
Average-cost based robust structural control
NASA Technical Reports Server (NTRS)
Hagood, Nesbitt W.
1993-01-01
A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.
Statistics of time averaged atmospheric scintillation
Stroud, P.
1994-02-01
A formulation has been constructed to recover the statistics of the moving average of the scintillation Strehl from a discrete set of measurements. A program of airborne atmospheric propagation measurements was analyzed to find the correlation function of the relative intensity over displaced propagation paths. The variance in continuous moving averages of the relative intensity was then found in terms of the correlation functions. An empirical formulation of the variance of the continuous moving average of the scintillation Strehl has been constructed. The resulting characterization of the variance of the finite time averaged Strehl ratios is being used to assess the performance of an airborne laser system.
Cosmological ensemble and directional averages of observables
Bonvin, Camille; Clarkson, Chris; Durrer, Ruth; Maartens, Roy; Umeh, Obinna
2015-07-01
We show that at second order, ensemble averages of observables and directional averages do not commute due to gravitational lensing—observing the same thing in many directions over the sky is not the same as taking an ensemble average. In principle this non-commutativity is significant for a variety of quantities that we often use as observables and can lead to a bias in parameter estimation. We derive the relation between the ensemble average and the directional average of an observable, at second order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focusing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance in a given observed direction is increased by gravitational lensing, whereas the directional average of the distance is decreased. For a generic observable, there exists a particular function of the observable that is not affected by second-order lensing perturbations. We also show that standard areas have an advantage over standard rulers, and we discuss the subtleties involved in averaging in the case of supernova observations.
Spatial limitations in averaging social cues
Florey, Joseph; Clifford, Colin W. G.; Dakin, Steven; Mareschal, Isabelle
2016-01-01
The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers’ ability to average social cues with their averaging of a non-social cue. Estimates of observers’ internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging is examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589
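The equivalent-noise fit described in this abstract can be sketched numerically. The model below is the standard two-parameter equivalent noise function; the noise levels and parameter values are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def equivalent_noise(sigma_ext, sigma_int, n_samp):
    # Predicted discrimination threshold at external noise level sigma_ext:
    # internal noise adds to external noise, pooled over n_samp samples.
    return np.sqrt((sigma_int**2 + sigma_ext**2) / n_samp)

# Hypothetical thresholds generated from sigma_int = 4 deg, n_samp = 3
sigma_ext = np.array([0.0, 2.0, 4.0, 8.0, 16.0, 32.0])
thresholds = equivalent_noise(sigma_ext, 4.0, 3.0)

(sigma_int_hat, n_samp_hat), _ = curve_fit(
    equivalent_noise, sigma_ext, thresholds,
    p0=[1.0, 1.0], bounds=(0.0, np.inf))
print(sigma_int_hat, n_samp_hat)   # recovers ~4.0 and ~3.0
```

With measured thresholds in place of the synthetic ones, the two fitted parameters give the internal-noise and effective-sample-size estimates the abstract refers to.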
Cell averaging Chebyshev methods for hyperbolic problems
NASA Technical Reports Server (NTRS)
Wei, Cai; Gottlieb, David; Harten, Ami
1990-01-01
A cell averaging method for the Chebyshev approximations of first order hyperbolic equations in conservation form is described. Formulas are presented for transforming between pointwise data at the collocation points and cell averaged quantities, and vice-versa. This step, trivial for the finite difference and Fourier methods, is nontrivial for the global polynomials used in spectral methods. The cell averaging methods presented are proven stable for linear scalar hyperbolic equations, and numerical simulations of shock-density wave interaction using the new cell averaging Chebyshev methods are presented.
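The pointwise-to-cell-average direction of the transform can be sketched with NumPy's Chebyshev utilities: interpolate the collocation data by a global polynomial, then integrate that polynomial exactly over each cell. The grid size and test function below are arbitrary choices for illustration.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 8
x = np.cos(np.pi * np.arange(N + 1) / N)   # Chebyshev-Gauss-Lobatto points
u = np.exp(x)                              # pointwise data at collocation points

coef = C.chebfit(x, u, N)                  # interpolating polynomial (exact here)

# Cell averages of the interpolant over the intervals between grid points
edges = np.sort(x)
F = C.chebval(edges, C.chebint(coef))      # antiderivative at the cell edges
cell_avg = np.diff(F) / np.diff(edges)

# Compare with exact cell averages of exp: difference is interpolation error
exact = np.diff(np.exp(edges)) / np.diff(edges)
print(np.abs(cell_avg - exact).max())      # small for N = 8
```

The reverse map (cell averages back to point values) inverts the same linear relationship, which is the nontrivial step the abstract mentions for global polynomials.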
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the governing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical applications.
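The two averaging operations at the heart of DMA, running time averaging during the fine-scale computation followed by volume averaging onto a coarser grid, can be sketched in one dimension. The "DNS" here is a stand-in random field and all sizes are illustrative, not the method's actual discretization.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nt, block = 64, 500, 4               # fine grid, time steps, coarsening factor

# Running time averages accumulated during the (mock) fine-scale simulation
u_sum = np.zeros(nx)
uu_sum = np.zeros(nx)
base = np.sin(2 * np.pi * np.arange(nx) / nx)    # stand-in mean flow
for _ in range(nt):
    u = base + 0.1 * rng.standard_normal(nx)     # stand-in DNS snapshot
    u_sum += u
    uu_sum += u * u
u_bar, uu_bar = u_sum / nt, uu_sum / nt

# Volume-average the time-averaged fields onto the coarser grid
u_coarse = u_bar.reshape(-1, block).mean(axis=1)
uu_coarse = uu_bar.reshape(-1, block).mean(axis=1)

# Correlation generated by the averaging, passed to the coarse computation
# as a source term: <uu> - <u><u> (time fluctuations + unresolved variation)
corr = uu_coarse - u_coarse**2
print(corr)
```

The `corr` field plays the role of the coupling correlation described above: computed from the resolved field, not modeled, and handed to the next coarser mesh as a source term.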
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
Whatever Happened to the Average Student?
ERIC Educational Resources Information Center
Krause, Tom
2005-01-01
Mandated state testing, college entrance exams, and the perceived need for ever-higher grade point averages have raised the anxiety levels felt by many average students. Too much focus is placed on state test scores and college entrance standards, with not enough focus on the true level of the students. The author contends that…
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... class or subclass: Credit = (Average Standard − Emission Level) × (Total Annual Production) × (Useful Life) Deficit = (Emission Level − Average Standard) × (Total Annual Production) × (Useful Life) (l....000 Where: FELi = The FEL to which the engine family is certified. ULi = The useful life of the...
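The credit/deficit calculation in this excerpt reduces to one formula per engine class. A minimal sketch, with invented numbers purely for illustration:

```python
def emission_credit(avg_standard, emission_level, production, useful_life):
    """Credit (positive) or deficit (negative) under the averaging provision:
    (Average Standard - Emission Level) x (Total Annual Production) x (Useful Life)."""
    return (avg_standard - emission_level) * production * useful_life

# Hypothetical class: 1.5 g/km average standard, 1.0 g/km certified level,
# 10,000 units/year, 5-year useful life -> a positive credit
print(emission_credit(1.5, 1.0, 10_000, 5))   # 25000.0
```

Swapping the two levels yields the deficit form of the same expression.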
Determinants of College Grade Point Averages
ERIC Educational Resources Information Center
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…
Average Transmission Probability of a Random Stack
ERIC Educational Resources Information Center
Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg
2010-01-01
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
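The distinction the abstract draws, averaging ln T versus averaging T itself, is easy to see in a Monte Carlo sketch (not the paper's recurrence relation). The transfer-matrix model, slab index, thicknesses, and gap distribution below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(n, d, k0=2 * np.pi):
    """Characteristic matrix of a lossless layer (index n, thickness d), wavelength 1."""
    delta = n * k0 * d
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmission(gap_widths, n_slab=1.5, d_slab=0.2):
    """Intensity transmission of identical slabs separated by the given gaps."""
    M = np.eye(2, dtype=complex)
    for g in gap_widths:
        M = M @ layer(n_slab, d_slab) @ layer(1.0, g)
    M = M @ layer(n_slab, d_slab)          # N slabs enclose N-1 gaps
    t = 2.0 / (M[0, 0] + M[0, 1] + M[1, 0] + M[1, 1])   # vacuum on both sides
    return abs(t) ** 2

T = np.array([transmission(rng.uniform(0.0, 1.0, 9)) for _ in range(2000)])
print(T.mean(), np.exp(np.log(T).mean()))  # <T> exceeds exp(<ln T>) (Jensen)
```

The gap between the two quantities is exactly why the usual log-average treatment and the paper's direct average of T give different answers.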
Analogue Divider by Averaging a Triangular Wave
NASA Astrophysics Data System (ADS)
Selvam, Krishnagiri Chinnathambi
2017-03-01
A new analogue divider circuit that works by averaging a triangular wave using operational amplifiers is explained in this paper. The reference triangular waveform is shifted from the zero voltage level up towards the positive power supply voltage level. Its positive portion is obtained by a positive rectifier and its average value is obtained by a low pass filter. The same triangular waveform is shifted from the zero voltage level down towards the negative power supply voltage level. Its negative portion is obtained by a negative rectifier and its average value is obtained by another low pass filter. Both averaged voltages are combined in a summing amplifier, and the summed voltage is applied to the inverting input of an op-amp configured in a closed loop with negative feedback. The output of this op-amp is the divider output.
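The averaging step the circuit relies on has a simple closed form: a symmetric triangle wave of amplitude A takes its values uniformly on [-A, A], so the mean of its positively rectified copy after a DC shift x is (A + x)^2 / 4A. A quick numerical check of that identity, assuming an ideal waveform and rectifier (this derivation is not taken from the paper):

```python
import numpy as np

A = 1.0                                       # triangle amplitude
t = np.linspace(0.0, 1.0, 200000, endpoint=False)
tri = A * (4.0 * np.abs(t - 0.5) - 1.0)       # one period of a triangle wave

x = 0.3                                       # hypothetical DC shift
rectified = np.maximum(tri + x, 0.0)          # ideal positive rectifier
print(rectified.mean(), (A + x) ** 2 / (4 * A))   # both ~0.4225
```

In the hardware, the low pass filter plays the role of the mean taken here.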
Light propagation in the averaged universe
Bagheri, Samae; Schwarz, Dominik J. E-mail: dschwarz@physik.uni-bielefeld.de
2014-10-01
Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.
Average shape of transport-limited aggregates.
Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z
2005-08-12
We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry.
Cosmic inhomogeneities and averaged cosmological dynamics.
Paranjape, Aseem; Singh, T P
2008-10-31
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.
Average-passage flow model development
NASA Technical Reports Server (NTRS)
Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark
1989-01-01
A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average passage flow model described the time averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations were executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
Spacetime Average Density (SAD) cosmological measures
Page, Don N.
2014-11-01
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), which obtain a finite number of observation occurrences by using properties of the SAD of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann Brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.
Bimetal sensor averages temperature of nonuniform profile
NASA Technical Reports Server (NTRS)
Dittrich, R. T.
1968-01-01
Instrument that measures an average temperature across a nonuniform temperature profile under steady-state conditions has been developed. The principle of operation is an application of the expansion of a solid material caused by a change in temperature.
Rotational averaging of multiphoton absorption cross sections
NASA Astrophysics Data System (ADS)
Friese, Daniel H.; Beerepoot, Maarten T. P.; Ruud, Kenneth
2014-11-01
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
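The multiphoton cross sections discussed here are high even-rank tensors, but the underlying principle is already visible at rank 2, where the isotropic rotational average is ⟨R T Rᵀ⟩ = (tr T / 3) I. A Monte Carlo check over Haar-random rotations (an illustration of the concept, not the paper's analytic averaging scheme):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_rotation():
    """Haar-distributed 3x3 rotation via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))       # sign fix for uniformity on O(3)
    if np.linalg.det(q) < 0:       # force det = +1 (proper rotation)
        q[:, 0] *= -1.0
    return q

T = rng.standard_normal((3, 3))    # arbitrary rank-2 property tensor
n = 20000
avg = sum(R @ T @ R.T for R in (random_rotation() for _ in range(n))) / n

iso = (np.trace(T) / 3.0) * np.eye(3)   # analytic rotational average
print(np.abs(avg - iso).max())          # Monte Carlo error, shrinks as 1/sqrt(n)
```

Higher even ranks replace tr T / 3 by combinations of isotropic tensors, which is where the photon-number-dependent formulas of the paper come in.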
Rotational averaging of multiphoton absorption cross sections.
Friese, Daniel H; Beerepoot, Maarten T P; Ruud, Kenneth
2014-11-28
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
Monthly average polar sea-ice concentration
Schweitzer, Peter N.
1995-01-01
The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.
Radial averages of astigmatic TEM images.
Fernando, K Vince
2008-10-01
The Contrast Transfer Function (CTF) of an image, which modulates images taken from a Transmission Electron Microscope (TEM), is usually determined from the radial average of the power spectrum of the image (Frank, J., Three-dimensional Electron Microscopy of Macromolecular Assemblies, Oxford University Press, Oxford, 2006). The CTF is primarily defined by the defocus. If the defocus estimate is accurate enough then it is possible to demodulate the image, which is popularly known as the CTF correction. However, it is known that the radial average is somewhat attenuated if the image is astigmatic (see Fernando, K.V., Fuller, S.D., 2007. Determination of astigmatism in TEM images. Journal of Structural Biology 157, 189-200) but this distortion due to astigmatism has not been fully studied or understood up to now. We have discovered the exact mathematical relationship between the radial averages of TEM images with and without astigmatism. This relationship is determined by a zeroth order Bessel function of the first kind and hence we can exactly quantify this distortion in the radial averages of signal and power spectra of astigmatic images. The argument to this Bessel function is similar to an aberration function (without the spherical aberration term) except that the defocus parameter is replaced by the differences of the defoci in the major and minor axes of astigmatism. The ill effects due to this Bessel function are twofold. Since the zeroth order Bessel function is a decaying oscillatory function, it introduces additional zeros to the radial average and it also attenuates the CTF signal in the radial averages. Using our analysis, it is possible to simulate the effects of astigmatism in radial averages by imposing Bessel functions on idealized radial averages of images which are not astigmatic. We validate our theory using astigmatic TEM images.
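The J0 modulation can be checked numerically on one spatial-frequency shell: averaging a CTF-like oscillation sin²(αρ² + βρ² cos 2θ) over θ gives (1 − cos(2αρ²) J0(2βρ²))/2, i.e. the non-astigmatic oscillation times a zeroth-order Bessel envelope. The coefficients below are invented stand-ins for the defocus and astigmatism terms.

```python
import numpy as np
from scipy.special import j0

alpha, beta = 0.002, 0.0005        # stand-in defocus and astigmatism terms
rho = 30.0                         # one spatial-frequency shell
theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)

# Angular (shell) average of the astigmatic oscillation...
astig = np.sin(alpha * rho**2 + beta * rho**2 * np.cos(2.0 * theta)) ** 2
lhs = astig.mean()

# ...equals the non-astigmatic value modulated by a J0 envelope
rhs = 0.5 * (1.0 - np.cos(2.0 * alpha * rho**2) * j0(2.0 * beta * rho**2))
print(lhs, rhs)
```

The identity follows from ⟨cos(a + b cos 2θ)⟩_θ = cos(a) J0(b), which is the decaying oscillatory envelope the abstract describes.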
Instrument to average 100 data sets
NASA Technical Reports Server (NTRS)
Tuma, G. B.; Birchenough, A. G.; Rice, W. J.
1977-01-01
An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve, which is available in real time, is described by 2048 discrete points displayed on an oscilloscope screen to facilitate recording. Input can be any parameter which is expressed as a + or - 10-volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. This instrument has been used successfully on a 1975 Chevrolet V8 engine, and on a Continental 6-cylinder aircraft engine. While this instrument was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
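The instrument's core operation, averaging a parameter curve over 100 cycles into 2048 points, is easy to sketch in software; the waveform and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
cycles, points = 100, 2048        # 100 engine cycles, 2048 points per curve

# Hypothetical in-cylinder parameter with cycle-to-cycle variation
phase = np.linspace(0.0, 2.0 * np.pi, points, endpoint=False)
true_curve = np.exp(-(phase - np.pi) ** 2)      # stand-in parameter curve
data = true_curve + 0.2 * rng.standard_normal((cycles, points))

avg_curve = data.mean(axis=0)     # the 100-cycle average curve
# Averaging over 100 cycles reduces random cycle-to-cycle noise ~10x (sqrt(100))
print(np.abs(avg_curve - true_curve).max())
```

The sqrt(100) noise reduction is what makes the averaged curve usable despite the cycle-to-cycle variation the abstract describes.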
The generic modeling fallacy: Average biomechanical models often produce non-average results!
Cook, Douglas D; Robertson, Daniel J
2016-11-07
Computational biomechanics models constructed using nominal or average input parameters are often assumed to produce average results that are representative of a target population of interest. To investigate this assumption a stochastic Monte Carlo analysis of two common biomechanical models was conducted. Consistent discrepancies were found between the behavior of average models and the average behavior of the population from which the average models׳ input parameters were derived. More interestingly, broadly distributed sets of non-average input parameters were found to produce average or near average model behaviors. In other words, average models did not produce average results, and models that did produce average results possessed non-average input parameters. These findings have implications on the prevalent practice of employing average input parameters in computational models. To facilitate further discussion on the topic, the authors have termed this phenomenon the "Generic Modeling Fallacy". The mathematical explanation of the Generic Modeling Fallacy is presented and suggestions for avoiding it are provided. Analytical and empirical examples of the Generic Modeling Fallacy are also given.
Averaged controllability of parameter dependent conservative semigroups
NASA Astrophysics Data System (ADS)
Lohéac, Jérôme; Zuazua, Enrique
2017-02-01
We consider the problem of averaged controllability for parameter depending (either in a discrete or continuous fashion) control systems, the aim being to find a control, independent of the unknown parameters, so that the average of the states is controlled. We do it in the context of conservative models, both in an abstract setting and also analysing the specific examples of the wave and Schrödinger equations. Our first result is of perturbative nature. Assuming the averaging probability measure to be a small parameter-dependent perturbation (in a sense that we make precise) of an atomic measure given by a Dirac mass corresponding to a specific realisation of the system, we show that the averaged controllability property is achieved whenever the system corresponding to the support of the Dirac is controllable. Similar tools can be employed to obtain averaged versions of the so-called Ingham inequalities. Particular attention is devoted to the 1d wave equation in which the time-periodicity of solutions can be exploited to obtain more precise results, provided the parameters involved satisfy Diophantine conditions ensuring the lack of resonances.
Books Average Previous Decade of Economic Misery
Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
Attractors and Time Averages for Random Maps
NASA Astrophysics Data System (ADS)
Araujo, Vitor
2006-07-01
Considering random noise in finite dimensional parameterized families of diffeomorphisms of a compact finite dimensional boundaryless manifold M, we show the existence of time averages for almost every orbit of each point of M, imposing mild conditions on the families. Moreover these averages are given by a finite number of physical absolutely continuous stationary probability measures. We use this result to deduce that situations with infinitely many sinks and Henon-like attractors are not stable under random perturbations, e.g., Newhouse's and Colli's phenomena in the generic unfolding of a quadratic homoclinic tangency by a one-parameter family of diffeomorphisms.
Polarized electron beams at milliampere average current
Poelker, Matthew
2013-11-01
This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 μA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltages > 350 kV.
Average power meter for laser radiation
NASA Astrophysics Data System (ADS)
Shevnina, Elena I.; Maraev, Anton A.; Ishanin, Gennady G.
2016-04-01
Advanced metrology equipment, in particular an average power meter for laser radiation, is necessary for the effective use of laser technology. In the paper we propose a measurement scheme with periodic scanning of a laser beam. The scheme is implemented in a pass-through average power meter that can perform continuous monitoring while the laser operates in pulsed or continuous-wave mode, without interrupting its operation. The detector used in the device is based on the thermoelastic effect in crystalline quartz, as it has a fast response, long-term stability of sensitivity, and an almost uniform dependence of sensitivity on wavelength.
An improved moving average technical trading rule
NASA Astrophysics Data System (ADS)
Papailias, Fotis; Thomakos, Dimitrios D.
2015-06-01
This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
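A minimal 'long only' sketch of such a modified rule, with the dynamic threshold implemented as a percentage trailing stop. The window length and stop fraction are arbitrary, and the paper's exact threshold definition may differ from this assumption.

```python
import numpy as np

def ma_cross_with_trailing_stop(prices, window=20, stop_frac=0.05):
    """Daily 0/1 position: buy on a price/MA cross-over, exit when the price
    falls stop_frac below its running peak since entry (dynamic trailing stop)."""
    ma = np.convolve(prices, np.ones(window) / window, mode="valid")
    pos = np.zeros(len(prices), dtype=int)
    in_pos, peak = False, 0.0
    for i in range(window, len(prices)):
        p = prices[i]
        m = ma[i - window]                  # MA of the preceding `window` prices
        if not in_pos and p > m:
            in_pos, peak = True, p          # cross-over 'buy' signal
        elif in_pos:
            peak = max(peak, p)
            if p < (1.0 - stop_frac) * peak:
                in_pos = False              # trailing-stop exit
        pos[i] = int(in_pos)
    return pos

rng = np.random.default_rng(4)
prices = 100.0 + np.cumsum(rng.standard_normal(300))
pos = ma_cross_with_trailing_stop(prices)
print(pos.sum(), "days in the market out of", len(prices))
```

The trailing stop is what shortens drawdowns relative to the plain cross-over rule: the exit tracks the running peak rather than waiting for the price to fall back below the moving average.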
Model averaging and muddled multimodel inferences
Cade, Brian S.
2015-01-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the
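For reference, the Akaike weights whose use in coefficient averaging the abstract criticizes are computed from AIC differences; the AIC values below are made up for illustration.

```python
import numpy as np

def aic_weights(aic):
    """Akaike weights w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    with delta_i = AIC_i - min(AIC)."""
    delta = np.asarray(aic, dtype=float) - np.min(aic)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

w = aic_weights([100.0, 102.0, 110.0])
print(w)   # heavily favours the lowest-AIC model; sums to 1
```

Per the abstract, summing these weights across the models that contain a given predictor measures the relative importance of models, not of the predictor itself.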
Average: the juxtaposition of procedure and context
NASA Astrophysics Data System (ADS)
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
Average length of stay in hospitals.
Egawa, H
1984-03-01
The average length of stay is essentially an important and appropriate index for hospital bed administration. However, arguing that it is not necessarily an appropriate index in Japan, the author analyzes the differences between the health care facility systems of the United States and Japan. Concerning the length of stay in Japanese hospitals, the median appeared to better represent the situation. It is emphasized that in order for the average length of stay to become an appropriate index, there is a need to promote regional health care, especially facility planning.
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... certification: (1) A statement that, to the best of your belief, you will not have a negative credit balance for... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... certification: (1) A statement that, to the best of your belief, you will not have a negative credit balance for... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... certification: (1) A statement that, to the best of your belief, you will not have a negative credit balance for... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...
Measuring Time-Averaged Blood Pressure
NASA Technical Reports Server (NTRS)
Rothman, Neil S.
1988-01-01
Device measures time-averaged component of absolute blood pressure in artery. Includes compliant cuff around artery and external monitoring unit. Ceramic construction in monitoring unit suppresses ebb and flow of pressure-transmitting fluid in sensor chamber. Transducer measures only static component of blood pressure.
Why Johnny Can Be Average Today.
ERIC Educational Resources Information Center
Sturrock, Alan
1997-01-01
During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…
Bayesian Model Averaging for Propensity Score Analysis
ERIC Educational Resources Information Center
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
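The core Bayesian model averaging idea the abstract invokes — weighting each candidate model's estimate by its posterior model probability rather than conditioning on a single model — can be sketched as follows (generic illustration; the numbers and model set are hypothetical, not the authors' analysis):

```python
def bma_estimate(estimates, posterior_probs):
    """Combine per-model estimates, weighted by posterior model probability."""
    assert abs(sum(posterior_probs) - 1.0) < 1e-9  # probabilities must sum to 1
    return sum(theta * w for theta, w in zip(estimates, posterior_probs))

# Three hypothetical propensity score models: each yields a treatment
# effect estimate; BMA averages them under model uncertainty.
effect = bma_estimate([0.40, 0.55, 0.35], [0.5, 0.3, 0.2])
print(effect)
```

Conditioning on the single best model would report 0.40 and ignore model uncertainty; the BMA estimate blends all three.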
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 11 2012-07-01 2012-07-01 false Emission averaging. 63.846 Section 63...) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Primary Aluminum Reduction Plants § 63.846...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 11 2014-07-01 2014-07-01 false Emission averaging. 63.846 Section 63...) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Primary Aluminum Reduction Plants § 63.846...
Initial Conditions in the Averaging Cognitive Model
ERIC Educational Resources Information Center
Noventa, S.; Massidda, D.; Vidotto, G.
2010-01-01
The initial state parameters s₀ and w₀ are intricate issues in the averaging cognitive models of Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982), but there are no general rules for dealing with them. In fact, there is no agreement as to their treatment except in…
Average thermal characteristics of solar wind electrons
NASA Technical Reports Server (NTRS)
Montgomery, M. D.
1972-01-01
Average solar wind electron properties based on a 1 year Vela 4 data sample-from May 1967 to May 1968 are presented. Frequency distributions of electron-to-ion temperature ratio, electron thermal anisotropy, and thermal energy flux are presented. The resulting evidence concerning heat transport in the solar wind is discussed.
Glenzinski, D.; /Fermilab
2008-01-01
This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world-average top-quark mass is discussed. Some comments are offered about the challenges the experiments face in further improving the precision.
Averaging on Earth-Crossing Orbits
NASA Astrophysics Data System (ADS)
Gronchi, G. F.; Milani, A.
The orbits of planet-crossing asteroids (and comets) can undergo close approaches and collisions with some major planet. This introduces a singularity in the N-body Hamiltonian, and the averaging of the equations of motion, traditionally used to compute secular perturbations, is undefined. We show that it is possible to define in a rigorous way some generalised averaged equations of motion, in such a way that the generalised solutions are unique and piecewise smooth. This is obtained, both in the planar and in the three-dimensional case, by means of the method of extraction of the singularities by Kantorovich. The modified distance used to approximate the singularity is the one used by Wetherill in his method to compute probability of collision. Some examples of averaged dynamics have been computed; a systematic exploration of the averaged phase space to locate the secular resonances should be the next step. `Alice sighed wearily. ``I think you might do something better with the time'' she said, ``than waste it asking riddles with no answers'' (Alice in Wonderland, L. Carroll)
HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.
Ben-Zvi, Ilan; Dayran, D.; Litvinenko, V.
2005-08-21
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can serve as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness, and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier introduces lower energy spread in the beam than a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug-to-optical-power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.
How Young Is Standard Average European?
ERIC Educational Resources Information Center
Haspelmath, Martin
1998-01-01
An analysis of Standard Average European, a European linguistic area, looks at 11 of its features (definite, indefinite articles, have-perfect, participial passive, antiaccusative prominence, nominative experiencers, dative external possessors, negation/negative pronouns, particle comparatives, A-and-B conjunction, relative clauses, verb fronting…
A Functional Measurement Study on Averaging Numerosity
ERIC Educational Resources Information Center
Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio
2014-01-01
In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…
Sun, Shu-Wei; Mei, Jennifer; Tuel, Keelan
2013-11-01
Diffusion tensor imaging (DTI) is achieved by collecting a series of diffusion-weighted images (DWIs). Signal averaging of multiple repetitions can be performed in the k-space (k-avg) or in the image space (m-avg) to improve the image quality. Alternatively, one can treat each acquisition as an independent image and use all of the data to reconstruct the DTI without doing any signal averaging (no-avg). To compare these three approaches, in this study, in vivo DTI data were collected from five normal mice. Noisy data with signal-to-noise ratios (SNR) that varied between five and 30 (before averaging) were then simulated. The DTI indices, including relative anisotropy (RA), trace of diffusion tensor (TR), axial diffusivity (λ║), and radial diffusivity (λ⊥), derived from the k-avg, m-avg, and no-avg, were then compared in the corpus callosum white matter, cortex gray matter, and the ventricles. We found that k-avg and m-avg enhanced the SNR of DWI with no significant differences. However, k-avg produced lower RA in the white matter and higher RA in the gray matter, compared to the m-avg and no-avg, regardless of SNR. The latter two produced similar DTI quantifications. We concluded that k-avg is less preferred for DTI brain imaging.
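The distinction between the two averaging schemes can be sketched with NumPy on synthetic data (a toy illustration, not the study's acquisition or reconstruction pipeline): k-avg averages the complex k-space repetitions before reconstructing one image, while m-avg reconstructs each repetition and averages the magnitude images, which rectifies noise.

```python
import numpy as np

rng = np.random.default_rng(0)
image = np.zeros((32, 32))
image[8:24, 8:24] = 1.0                 # toy object
k_true = np.fft.fft2(image)

# Eight noisy repetitions of the same k-space acquisition
reps = [k_true + (rng.normal(size=k_true.shape)
                  + 1j * rng.normal(size=k_true.shape)) for _ in range(8)]

# k-avg: average complex k-space data, then reconstruct one image
k_avg = np.abs(np.fft.ifft2(np.mean(reps, axis=0)))

# m-avg: reconstruct each repetition, then average the magnitude images
m_avg = np.mean([np.abs(np.fft.ifft2(k)) for k in reps], axis=0)

# Taking magnitudes before averaging rectifies noise, so m-avg sits
# pointwise at or above k-avg (triangle inequality)
print(bool(np.all(k_avg <= m_avg + 1e-9)))
```

The SNR gain of the two schemes is comparable, but the magnitude step changes the noise statistics, which is consistent with the anisotropy differences the study reports.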
Polarized electron beams at milliampere average current
Poelker, M.
2013-11-07
This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ∼200 μA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and to strategies for constructing a photogun that operates reliably at bias voltages > 350 kV.
Rigid shape matching by segmentation averaging.
Wang, Hongzhi; Oliensis, John
2010-04-01
We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.
Average Annual Rainfall over the Globe
ERIC Educational Resources Information Center
Agrawal, D. C.
2013-01-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…
Stochastic Games with Average Payoff Criterion
Ghosh, M. K.; Bagchi, A.
1998-11-15
We study two-person stochastic games on a Polish state and compact action spaces and with average payoff criterion under a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of Nash equilibrium in stationary strategies is established under certain separability conditions.
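The average payoff criterion referred to here is the standard long-run average; in symbols (one common formulation, not necessarily the authors' exact notation):

```latex
% Long-run expected average payoff to player i, starting from state x,
% under the strategy pair (\pi^1, \pi^2):
\phi_i(x, \pi^1, \pi^2) \;=\;
  \liminf_{T \to \infty} \frac{1}{T}\,
  \mathbb{E}^{\pi^1,\pi^2}_{x}\!\left[\, \sum_{t=0}^{T-1} r_i(x_t, a_t, b_t) \right]
```

The ergodicity condition mentioned in the abstract ensures this limit is well behaved and independent of the initial state.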
The Average Velocity in a Queue
ERIC Educational Resources Information Center
Frette, Vidar
2009-01-01
A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…
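The setup lends itself to a direct simulation (a sketch under the stated no-overtaking rule, not the paper's analysis): each car's realized speed is the minimum of its own maximum speed and the maximum speeds of all cars ahead of it.

```python
import random

random.seed(1)
# Maximum speeds of 1000 cars, listed front of queue first (arbitrary units)
max_speeds = [random.uniform(10, 30) for _ in range(1000)]

# No overtaking: a car can go no faster than the slowest car ahead of it
realized, slowest_ahead = [], float("inf")
for v in max_speeds:
    slowest_ahead = min(slowest_ahead, v)
    realized.append(slowest_ahead)

avg_velocity = sum(realized) / len(realized)
# Queuing drags the average below the mean of the maximum speeds
print(avg_velocity < sum(max_speeds) / len(max_speeds))
```

Realized speeds are non-increasing along the queue, so the average velocity is dominated by the slow cars near the front — the non-trivial mean the abstract alludes to.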
Disk-Averaged Synthetic Spectra of Mars
NASA Astrophysics Data System (ADS)
Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-08-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor Thermal Emission Spectrometer and the Mariner 9 Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise, and integration time for both TPF-C and TPF-I/Darwin.
Digital Averaging Phasemeter for Heterodyne Interferometry
NASA Technical Reports Server (NTRS)
Johnson, Donald; Spero, Robert; Shaklan, Stuart; Halverson, Peter; Kuhnert, Andreas
2004-01-01
A digital averaging phasemeter has been built for measuring the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. This phasemeter performs well enough to enable interferometric measurements of distance with accuracy of the order of 100 pm and with the ability to track distance as it changes at a speed of as much as 50 cm/s. This phasemeter is unique in that it is a single, integral system capable of performing three major functions that, heretofore, have been performed by separate systems: (1) measurement of the fractional-cycle phase difference, (2) counting of multiple cycles of phase change, and (3) averaging of phase measurements over multiple cycles for improved resolution. This phasemeter also offers the advantage of making repeated measurements at a high rate: the phase is measured on every heterodyne cycle. Thus, for example, in measuring the relative phase of two signals having a heterodyne frequency of 10 kHz, the phasemeter would accumulate 10,000 measurements per second. At this high measurement rate, an accurate average phase determination can be made more quickly than is possible at a lower rate.
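Averaging phase measurements over many heterodyne cycles improves resolution roughly as 1/√N for uncorrelated noise; because phase wraps at ±π, the averaging is best done circularly, on unit phasors. A sketch follows (generic illustration, not the instrument's actual signal processing):

```python
import cmath
import math
import random

random.seed(0)
true_phase = 3.0  # radians; deliberately near the +/- pi wrap point

# 10,000 per-cycle phase measurements, as at a 10 kHz heterodyne
# frequency over one second; noise pushes some samples past the wrap
samples = [(true_phase + random.gauss(0, 0.3) + math.pi) % (2 * math.pi)
           - math.pi for _ in range(10_000)]

# Circular mean: average the unit phasors, then take the angle.
# A naive arithmetic mean of the wrapped samples would be badly biased.
mean_phasor = sum(cmath.exp(1j * p) for p in samples) / len(samples)
avg_phase = cmath.phase(mean_phasor)

print(abs(avg_phase - true_phase) < 0.05)
```

With 10,000 samples the standard error of the averaged phase is roughly 0.3/√10000 ≈ 0.003 rad, illustrating how per-cycle measurement at a high rate buys resolution quickly.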
Disk-averaged synthetic spectra of Mars
NASA Technical Reports Server (NTRS)
Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor Thermal Emission Spectrometer and the Mariner 9 Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise, and integration time for both TPF-C and TPF-I/Darwin.
On the ensemble averaging of PIC simulations
NASA Astrophysics Data System (ADS)
Codur, R. J. B.; Tsung, F. S.; Mori, W. B.
2016-10-01
Particle-in-cell (PIC) simulations are used ubiquitously in plasma physics to study a variety of phenomena. They can be an efficient tool for modeling the Vlasov or Vlasov-Fokker-Planck equations in multiple dimensions. However, the PIC method actually models the Klimontovich equation for finite-size particles; the Vlasov-Fokker-Planck equation can be derived as the ensemble average of the Klimontovich equation. We present results of studying Landau damping and Stimulated Raman Scattering using PIC simulations in which we use identical ``drivers'' but change the random number generator seeds. We show that even in cases where a plasma wave is excited below the noise of a single simulation, the plasma wave can clearly be seen and studied if an ensemble average over O(10) simulations is made. A comparison between the results from an ensemble average and from the subtraction technique is also presented. In the subtraction technique, two simulations, one with and one without the ``driver'', are conducted with the same random number generator seed and the results are subtracted. This work is supported by DOE, NSF, and ENSC (France).
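The effect of averaging over seeds can be illustrated generically (a toy signal model, not the authors' PIC code): a weak coherent wave buried in seed-dependent noise emerges after averaging O(10) independent realizations, since the uncorrelated noise shrinks as 1/√N while the driven signal does not.

```python
import math
import random

def realization(seed, n=2048, amp=0.2):
    """One 'simulation': a weak coherent wave plus unit-variance noise."""
    rng = random.Random(seed)
    return [amp * math.sin(2 * math.pi * 5 * t / n) + rng.gauss(0, 1)
            for t in range(n)]

# 16 runs with identical 'driver' but different random seeds
runs = [realization(seed) for seed in range(16)]
ensemble = [sum(col) / len(runs) for col in zip(*runs)]

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# The wave (rms ~ amp/sqrt(2) ~ 0.14) is below the noise floor (rms ~ 1)
# in any single run, but averaging suppresses the noise by sqrt(16) = 4
print(rms(ensemble) < rms(runs[0]))
```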
Disk-averaged synthetic spectra of Mars.
Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-08-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor Thermal Emission Spectrometer and the Mariner 9 Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise, and integration time for both TPF-C and TPF-I/Darwin.
Modern average global sea-surface temperature
Schweitzer, Peter N.
1993-01-01
The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
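The month-across-years averaging scheme described here (all Januaries averaged together, missing cells ignored) can be sketched with NumPy's `nanmean` on a hypothetical SST stack; the grid size and missing-data fraction below are invented for illustration:

```python
import numpy as np

# Hypothetical SST stack: 10 years x 12 months x (lat x lon) grid,
# with NaN marking cells that lack valid data
rng = np.random.default_rng(42)
sst = rng.uniform(0.0, 30.0, size=(10, 12, 4, 4))
sst[rng.uniform(size=sst.shape) < 0.1] = np.nan  # ~10% missing

# Climatology: average each calendar month over the 10-year series;
# nanmean ignores missing cells, so data gaps tend to fill in
climatology = np.nanmean(sst, axis=0)

print(climatology.shape)  # (12, 4, 4)
```

A cell of the climatology is missing only if that month is missing in every one of the ten years, which is why the averaged images have far fewer empty grid cells, as the abstract notes.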
A simple algorithm for averaging spike trains.
Julienne, Hannah; Houghton, Conor
2013-02-25
Although spike trains are the principal channel of communication between neurons, a single stimulus will elicit different spike trains from trial to trial. This variability, in both spike timings and spike number, can obscure the temporal structure of spike trains and often means that computations need to be run on numerous spike trains in order to extract features common across all the responses to a particular stimulus. This can increase the computational burden and obscure analytical results. As a consequence, it is useful to consider how to calculate a central spike train that summarizes a set of trials. Indeed, averaging responses over trials is routine for other signal types. Here, a simple method for finding a central spike train is described. The spike trains are first mapped to functions, these functions are averaged, and a greedy algorithm is then used to map the average function back to a spike train. The central spike trains are tested on a large data set. Their performance on a classification-based test is considerably better than that of the medoid spike trains.
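The method's three steps — map spike trains to functions, average the functions, greedily map the average back to a spike train — can be sketched as follows. This is a simplified sketch (exponential kernel, squared-error greedy read-out); the authors' exact filter and metric may differ.

```python
import numpy as np

def to_function(spikes, t_max, dt=0.001, tau=0.01):
    """Map a spike train to a function via a causal exponential kernel."""
    t = np.arange(0.0, t_max, dt)
    f = np.zeros_like(t)
    for s in spikes:
        f += np.where(t >= s, np.exp(-(t - s) / tau), 0.0)
    return t, f

# Three noisy trials of the 'same' response (spike times in seconds)
trials = [[0.100, 0.300, 0.702], [0.104, 0.295, 0.698], [0.098, 0.305, 0.700]]
t, _ = to_function(trials[0], t_max=1.0)
avg = np.mean([to_function(tr, t_max=1.0)[1] for tr in trials], axis=0)

# Greedy read-out: repeatedly place a spike where the residual of the
# average function is largest, then subtract that spike's kernel
central, residual = [], avg.copy()
for _ in range(3):
    s = t[int(np.argmax(residual))]
    central.append(float(s))
    residual -= np.where(t >= s, np.exp(-(t - s) / 0.01), 0.0)
central.sort()

# The recovered central train sits near the trial-averaged spike times
print(all(abs(c - ref) < 0.01 for c, ref in zip(central, [0.1, 0.3, 0.7])))
```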
Liu, Chun; Croft, Quentin P. P.; Kalidhar, Swati; Brooks, Jerome T.; Herigstad, Mari; Smith, Thomas G.; Dorrington, Keith L.
2013-01-01
Dexamethasone ameliorates the severity of acute mountain sickness (AMS) but it is unknown whether it obtunds normal physiological responses to hypoxia. We studied whether dexamethasone enhanced or inhibited the ventilatory, cardiovascular, and pulmonary vascular responses to sustained (8 h) hypoxia. Eight healthy volunteers were studied, each on four separate occasions, permitting four different protocols. These were: dexamethasone (20 mg orally) beginning 2 h before a control period of 8 h of air breathing; dexamethasone with 8 h of isocapnic hypoxia (end-tidal Po2 = 50 Torr); placebo with 8 h of air breathing; and placebo with 8 h of isocapnic hypoxia. Before and after each protocol, the following were determined under both euoxic and hypoxic conditions: ventilation; pulmonary artery pressure (estimated using echocardiography to assess maximum tricuspid pressure difference); heart rate; and cardiac output. Plasma concentrations of erythropoietin (EPO) were also determined. Dexamethasone had no early (2-h) effect on any variable. Both dexamethasone and 8 h of hypoxia increased euoxic values of ventilation, pulmonary artery pressure, and heart rate, together with the ventilatory sensitivity to acute hypoxia. These effects were independent and additive. Eight hours of hypoxia, but not dexamethasone, increased the sensitivity of pulmonary artery pressure to acute hypoxia. Dexamethasone, but not 8 h of hypoxia, increased both cardiac output and systemic arterial pressure. Dexamethasone abolished the rise in EPO induced by 8 h of hypoxia. In summary, dexamethasone enhances ventilatory acclimatization to hypoxia. Thus, dexamethasone in AMS may improve oxygenation and thereby indirectly lower pulmonary artery pressure. PMID:23393065
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-20
... AGENCY 40 CFR Part 52 Approval and Promulgation of Air Quality Implementation Plans; Virginia... Simulator emissions model (MOVES2010a). DATES: This correcting amendment is effective December 20, 2012 and... OF IMPLEMENTATION PLANS 0 1. The authority citation for 40 CFR part 52 continues to read as...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-06
... restricted by statute. Certain other material, such as copyrighted material, is not placed on the Internet...: Definitions For the purpose of this document, we are giving meaning to certain words or initials as...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-04
... Area. Signal preemption for MARTA Atlanta 6/17/96 4/26/99. routes 15 and 23. Improve and expand service... for Control Atlanta 1997 8- 10/21/2009........ 09/28/2013. of VOC Emissions from Reactor Hour Ozone Processes and Distillation Nonattainment Operations in Synthetic Organic Area. Chemical...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-11
... Plans for Transportation Conformity Purposes AGENCY: Environmental Protection Agency (EPA). ACTION... conformity determinations. Illinois submitted a redesignation request and maintenance plan for the Illinois... must use the MVEBs from the submitted ozone maintenance plan for future transportation...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-08
... National Ambient Air Quality Standard, EPA ICR No. 2236.03, OMB Control No. 2060-0594 AGENCY: Environmental Protection Agency (EPA). ACTION: Notice. SUMMARY: In compliance with the Paperwork Reduction Act (PRA) (44 U.S.C. 3501 et seq.), this document announces that EPA is planning to submit a request to renew...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-06
... Emissions Simulator emissions model (MOVES2010a). This action is being taken under the Clean Air Act (CAA... transportation conformity process when using MOVES2010a, the existing VOC MVEBS were not revised in this SIP... independently of the assessment process; (3) that demonstrate a clear, imminent and substantial danger to...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-11
... Emissions Simulator (MOVES) emissions model. Ohio submitted the SIP revision request to EPA on December 7... modeling and participated in the consultation process. The Federal Highway Administration and the...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-26
..., mechanical or other technological collection techniques or other forms of information technology, e.g., by... Protection Agency, T.W. Alexander Drive, Research Triangle Park, NC 27711; telephone number: (919)...
Defeathering of broiler carcasses subjected to delayed scalding 1, 2, 4, and 8 hours after slaughter
Technology Transfer Automated Retrieval System (TEKTRAN)
With implementation of farm slaughter, scalding and defeathering could be delayed for a minimum of 2 to 4 h. This research evaluated the potential for delaying scalding and defeathering up to 8 h after slaughter. Following a 12 h feed withdrawal, broilers were cooped and transported to the pilot plan...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-31
... agency certifies that the rule will not have a significant economic impact on a substantial number of... jurisdictions. For purposes of assessing the impacts of today's proposed rule on small entities, small entity is... its field. After considering the economic impacts of today's proposed rule on small entities,...
A Green's function quantum average atom model
Starrett, Charles Edward
2015-05-21
A quantum average atom model is reformulated using Green's functions. This allows integrals along the real energy axis to be deformed into the complex plane, the advantage being that sharp features such as resonances and bound states are broadened by a Lorentzian with a half-width chosen for numerical convenience. An implementation of this method therefore avoids numerically challenging resonance tracking and the search for weakly bound states, without changing the physical content or results of the model. A straightforward implementation yields up to a factor-of-5 speed-up relative to an optimized orbital-based code.
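The key numerical device — evaluating the Green's function a distance η above the real axis so that sharp features acquire a Lorentzian half-width — can be shown for a single pole (a generic one-resonance illustration, not the model's actual atomic Green's function):

```python
import numpy as np

E0, eta = 0.5, 0.05  # pole (resonance) position; half-width from the contour shift

# One-pole Green's function G(z) = 1/(z - E0), evaluated at z = E + i*eta.
# The spectral density -Im G / pi is then a Lorentzian of half-width eta,
# smooth and easy to integrate, instead of a delta-like spike.
E = np.linspace(0.0, 1.0, 100001)
rho = -np.imag(1.0 / (E + 1j * eta - E0)) / np.pi

# The broadened feature still integrates to (nearly) the pole's unit
# weight, so the physical content is preserved while numerics stay smooth
weight = rho.sum() * (E[1] - E[0])
print(abs(weight - 1.0) < 0.1)
```

Larger η gives smoother, cheaper integrands at the cost of pushing more spectral weight outside any finite window, which is why the half-width is a numerical-convenience parameter.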
Local average height distribution of fluctuating interfaces
NASA Astrophysics Data System (ADS)
Smith, Naftali R.; Meerson, Baruch; Sasorov, Pavel V.
2017-01-01
Height fluctuations of growing surfaces can be characterized by the probability distribution of height in a spatial point at a finite time. Recently there has been spectacular progress in the studies of this quantity for the Kardar-Parisi-Zhang (KPZ) equation in 1+1 dimensions. Here we notice that, at or above a critical dimension, the finite-time one-point height distribution is ill defined in a broad class of linear surface growth models unless the model is regularized at small scales. The regularization via a system-dependent small-scale cutoff leads to a partial loss of universality. As a possible alternative, we introduce a local average height. For the linear models, the probability density of this quantity is well defined in any dimension. The weak-noise theory for these models yields the "optimal path" of the interface conditioned on a nonequilibrium fluctuation of the local average height. As an illustration, we consider the conserved Edwards-Wilkinson (EW) equation, where, without regularization, the finite-time one-point height distribution is ill defined in all physical dimensions. We also determine the optimal path of the interface in a closely related problem of the finite-time height-difference distribution for the nonconserved EW equation in 1+1 dimension. Finally, we discuss a UV catastrophe in the finite-time one-point distribution of height in the (nonregularized) KPZ equation in 2+1 dimensions.
Average neutronic properties of prompt fission products
Foster, D.G. Jr.; Arthur, E.D.
1982-02-01
Calculations are described of the average neutronic properties of the ensemble of fission products produced by fast-neutron fission of ²³⁵U and ²³⁹Pu, where the properties are determined before the first beta decay of any of the fragments. For each case we approximate the ensemble by a weighted average over 10 selected nuclides, whose properties we calculate using nuclear-model parameters deduced from the systematic properties of other isotopes of the same elements as the fission fragments. The calculations were performed primarily with the COMNUC and GNASH statistical-model codes. The results, available in ENDF/B format, include cross sections, angular distributions of neutrons, and spectra of neutrons and photons, for incident-neutron energies between 10⁻⁵ eV and 20 MeV. Over most of this energy range, we find that the capture cross section of ²³⁹Pu fission fragments is systematically a factor of two to five greater than that of ²³⁵U fission fragments.
Local average height distribution of fluctuating interfaces.
Smith, Naftali R; Meerson, Baruch; Sasorov, Pavel V
2017-01-01
Height fluctuations of growing surfaces can be characterized by the probability distribution of height in a spatial point at a finite time. Recently there has been spectacular progress in the studies of this quantity for the Kardar-Parisi-Zhang (KPZ) equation in 1+1 dimensions. Here we notice that, at or above a critical dimension, the finite-time one-point height distribution is ill defined in a broad class of linear surface growth models unless the model is regularized at small scales. The regularization via a system-dependent small-scale cutoff leads to a partial loss of universality. As a possible alternative, we introduce a local average height. For the linear models, the probability density of this quantity is well defined in any dimension. The weak-noise theory for these models yields the "optimal path" of the interface conditioned on a nonequilibrium fluctuation of the local average height. As an illustration, we consider the conserved Edwards-Wilkinson (EW) equation, where, without regularization, the finite-time one-point height distribution is ill defined in all physical dimensions. We also determine the optimal path of the interface in a closely related problem of the finite-time height-difference distribution for the nonconserved EW equation in 1+1 dimension. Finally, we discuss a UV catastrophe in the finite-time one-point distribution of height in the (nonregularized) KPZ equation in 2+1 dimensions.
Global atmospheric circulation statistics: Four year averages
NASA Technical Reports Server (NTRS)
Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.
1987-01-01
Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.
Lagrangian averaging, nonlinear waves, and shock regularization
NASA Astrophysics Data System (ADS)
Bhat, Harish S.
In this thesis, we explore various models for the flow of a compressible fluid as well as model equations for shock formation, one of the main features of compressible fluid flows. We begin by reviewing the variational structure of compressible fluid mechanics. We derive the barotropic compressible Euler equations from a variational principle in both material and spatial frames. Writing the resulting equations of motion requires certain Lie-algebraic calculations that we carry out in detail for expository purposes. Next, we extend the derivation of the Lagrangian averaged Euler (LAE-α) equations to the case of barotropic compressible flows. The derivation in this thesis involves averaging over a tube of trajectories η^ε centered around a given Lagrangian flow η. With this tube framework, the LAE-α equations are derived by following a simple procedure: start with a given action, expand via Taylor series in terms of small-scale fluid fluctuations ξ, truncate, average, and then model those terms that are nonlinear functions of ξ. We then analyze a one-dimensional subcase of the general models derived above. We prove the existence of a large family of traveling wave solutions. Computing the dispersion relation for this model, we find it is nonlinear, implying that the equation is dispersive. We carry out numerical experiments that show that the model possesses smooth, bounded solutions that display interesting pattern formation. Finally, we examine a Hamiltonian partial differential equation (PDE) that regularizes the inviscid Burgers equation without the addition of standard viscosity. Here α is a small parameter that controls a nonlinear smoothing term that we have added to the inviscid Burgers equation. We show the existence of a large family of traveling front solutions. We analyze the initial-value problem and prove well-posedness for a certain class of initial data. We prove that in the zero-α limit, without any standard viscosity
Asymmetric network connectivity using weighted harmonic averages
NASA Astrophysics Data System (ADS)
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity; that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.
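The harmonic average at the heart of the GEN can be illustrated with a minimal sketch. This is the generic weighted harmonic mean, not the paper's full recursive construction: the point is that a harmonic mean is dominated by its smallest terms, so one short, strong connection pulls the "closeness" down far more than it would shift an arithmetic mean.

```python
# Illustrative sketch only (not the paper's exact GEN recursion): the
# weighted harmonic mean weights short/strong connections more heavily
# than an arithmetic mean would.

def weighted_harmonic_mean(values, weights):
    """Weighted harmonic mean: sum(w) / sum(w / v), for positive values."""
    assert len(values) == len(weights) and all(v > 0 for v in values)
    return sum(weights) / sum(w / v for v, w in zip(values, weights))

# Two links of "distance" 1 and 4 with equal weight:
# arithmetic mean = 2.5, harmonic mean = 2 / (1/1 + 1/4) = 1.6
print(weighted_harmonic_mean([1.0, 4.0], [1.0, 1.0]))  # 1.6
```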
Quetelet, the average man and medical knowledge.
Caponi, Sandra
2013-01-01
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.
Average deployments versus missile and defender parameters
Canavan, G.H.
1991-03-01
This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with two-fold less effectiveness. The next 10% could double constellation sizes. 5 refs., 7 figs.
Comprehensive time average digital holographic vibrometry
NASA Astrophysics Data System (ADS)
Psota, Pavel; Lédl, Vít; Doleček, Roman; Mokrý, Pavel; Vojtíšek, Petr; Václavík, Jan
2016-12-01
This paper presents a method that simultaneously deals with drawbacks of time-average digital holography: limited measurement range, limited spatial resolution, and quantitative analysis of the measured Bessel fringe patterns. When the frequency of the reference wave is shifted by an integer multiple of frequency at which the object oscillates, the measurement range of the method can be shifted either to smaller or to larger vibration amplitudes. In addition, phase modulation of the reference wave is used to obtain a sequence of phase-modulated fringe patterns. Such fringe patterns can be combined by means of phase-shifting algorithms, and amplitudes of vibrations can be straightforwardly computed. This approach independently calculates the amplitude values in every single pixel. The frequency shift and phase modulation are realized by proper control of Bragg cells and therefore no additional hardware is required.
High average power linear induction accelerator development
Bayless, J.R.; Adler, R.J.
1987-07-01
There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs.
Angle-averaged Compton cross sections
Nickel, G.H.
1983-01-01
The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
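As a companion to the abstract above, the sketch below numerically averages the standard Klein-Nishina differential cross section over the ERF polar angle θ (azimuthal symmetry assumed). It is not the paper's full φ, θ, τ average, but it shows the kind of angular averaging involved and reproduces the Thomson cross section in the low-energy limit.

```python
import math

R_E = 2.8179403262e-15  # classical electron radius, m

def klein_nishina_total(alpha, n=20000):
    """Integrate the Klein-Nishina differential cross section over the
    polar scattering angle theta (midpoint rule, azimuthal symmetry).
    alpha = photon energy in units of m0*c^2."""
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * math.pi / n
        ratio = 1.0 / (1.0 + alpha * (1.0 - math.cos(theta)))  # k'/k
        dsdo = 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio
                                          - math.sin(theta)**2)
        total += dsdo * 2.0 * math.pi * math.sin(theta) * (math.pi / n)
    return total

# In the low-energy limit this approaches the Thomson cross section,
# sigma_T = 8*pi*r_e^2/3 ≈ 6.65e-29 m^2.
print(klein_nishina_total(1e-6))
```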
The Average-Value Correspondence Principle
NASA Astrophysics Data System (ADS)
Goyal, Philip
2007-12-01
In previous work [1], we have presented an attempt to derive the finite-dimensional abstract quantum formalism from a set of physically comprehensible assumptions. In this paper, we continue the derivation of the quantum formalism by formulating a correspondence principle, the Average-Value Correspondence Principle, that allows relations between measurement outcomes which are known to hold in a classical model of a system to be systematically taken over into the quantum model of the system, and by using this principle to derive many of the correspondence rules (such as operator rules, commutation relations, and Dirac's Poisson bracket rule) that are needed to apply the abstract quantum formalism to model particular physical systems.
Average prime-pair counting formula
NASA Astrophysics Data System (ADS)
Korevaar, Jaap; Riele, Herman Te
2010-04-01
Taking r > 0, let π_2r(x) denote the number of prime pairs (p, p+2r) with p ≤ x. The prime-pair conjecture of Hardy and Littlewood (1923) asserts that π_2r(x) ~ 2C_2r li_2(x) with an explicit constant C_2r > 0. There seems to be no good conjecture for the remainders ω_2r(x) = π_2r(x) − 2C_2r li_2(x) that corresponds to Riemann's formula for π(x) − li(x). However, there is a heuristic approximate formula for averages of the remainders ω_2r(x) which is supported by numerical results.
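The Hardy-Littlewood asymptotic above is easy to check numerically for small x. The sketch below counts twin primes (r = 1) and compares against 2C_2 li_2(x); the constant used is the twin-prime constant, and the cutoff x = 10^5 is an illustrative choice.

```python
import math

def prime_sieve(n):
    """Sieve of Eratosthenes: sieve[k] == 1 iff k is prime."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(sieve[p*p::p]))
    return sieve

def pi_2r(x, r=1):
    """Count prime pairs (p, p+2r) with p <= x."""
    sieve = prime_sieve(x + 2 * r)
    return sum(1 for p in range(2, x + 1) if sieve[p] and sieve[p + 2 * r])

def li2(x, n=100000):
    """Numerical integral of 1/ln(t)^2 from 2 to x (midpoint rule)."""
    h = (x - 2) / n
    return sum(h / math.log(2 + (i + 0.5) * h) ** 2 for i in range(n))

C2 = 0.6601618158  # twin-prime constant
x = 100000
print(pi_2r(x), 2 * C2 * li2(x))  # counted pairs vs. conjectured asymptotic
```

For x = 10^5 the two numbers already agree to within a few percent.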
Calculating Free Energies Using Average Force
NASA Technical Reports Server (NTRS)
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
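The unconstrained average-force idea can be sketched on a toy system where the answer is known in closed form. Everything below (the 2D potential, kT = 1, the grid sizes) is an illustrative choice, not one of the paper's systems: the derivative of the free energy along x equals the Boltzmann-averaged instantaneous force on x, and integrating that mean force recovers the free energy profile.

```python
import math

# Toy potential U(x, y) = (x^2 - 1)^2 + 0.5*(y - x)^2 with kT = 1.
# The free energy along x is A(x) = -ln ∫ exp(-U) dy, and
#   dA/dx = < dU/dx >_y   (Boltzmann average over y at fixed x),
# so integrating the mean force along x reconstructs A(x).

def mean_force(x, kT=1.0, ny=2001, ylim=8.0):
    """Boltzmann-averaged dU/dx at fixed x, by quadrature over y."""
    num = den = 0.0
    for i in range(ny):
        y = -ylim + 2 * ylim * i / (ny - 1)
        w = math.exp(-((x**2 - 1)**2 + 0.5 * (y - x)**2) / kT)
        dudx = 4 * x * (x**2 - 1) - (y - x)
        num += w * dudx
        den += w
    return num / den

def free_energy_profile(xs):
    """Integrate the mean force along x with the trapezoid rule."""
    A = [0.0]
    for x0, x1 in zip(xs, xs[1:]):
        A.append(A[-1] + 0.5 * (mean_force(x0) + mean_force(x1)) * (x1 - x0))
    return A

xs = [i / 50 - 1.5 for i in range(151)]   # x from -1.5 to 1.5
A = free_energy_profile(xs)
# The double well is symmetric, so A(-1) ≈ A(+1), with a barrier of
# height 1 (in kT) at x = 0.
```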
Average oxidation state of carbon in proteins.
Dick, Jeffrey M
2014-11-06
The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (Z(C)) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation-reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between Z(C) and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in Z(C) in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower Z(C) tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales.
Average oxidation state of carbon in proteins
Dick, Jeffrey M.
2014-01-01
The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (ZC) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation–reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between ZC and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in ZC in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower ZC tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales. PMID:25165594
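The elemental-ratio calculation described in the two abstracts above can be written down directly. A minimal sketch, assuming the usual formal-oxidation-state convention (H +1, N −3, O −2, S −2) for a molecule C_c H_h N_n O_o S_s with net charge z:

```python
# Hedged sketch of the elemental-ratio idea: with H +1, N -3, O -2, S -2,
# charge balance gives the average carbon oxidation state as
#   Z_C = (z - h + 3*n + 2*o + 2*s) / c

def average_carbon_oxidation_state(c, h, n=0, o=0, s=0, z=0):
    return (z - h + 3 * n + 2 * o + 2 * s) / c

print(average_carbon_oxidation_state(c=1, h=4))            # methane: -4.0
print(average_carbon_oxidation_state(c=1, h=0, o=2))       # CO2: 4.0
print(average_carbon_oxidation_state(c=2, h=5, n=1, o=2))  # glycine: 1.0
```

The paper applies this same ratio to whole-protein chemical formulas, where c, h, n, o, s run into the thousands.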
Global Average Brightness Temperature for April 2003
NASA Technical Reports Server (NTRS)
2003-01-01
[figure removed for brevity, see original site] Figure 1
This image shows average temperatures in April, 2003, observed by AIRS at an infrared wavelength that senses either the Earth's surface or any intervening cloud. Similar to a photograph of the planet taken with the camera shutter held open for a month, stationary features are captured while those obscured by moving clouds are blurred. Many continental features stand out boldly, such as our planet's vast deserts, and India, now at the end of its long, clear dry season. Also obvious are the high, cold Tibetan plateau to the north of India, and the mountains of North America. The band of yellow encircling the planet's equator is the Intertropical Convergence Zone (ITCZ), a region of persistent thunderstorms and associated high, cold clouds. The ITCZ merges with the monsoon systems of Africa and South America. Higher latitudes are increasingly obscured by clouds, though some features like the Great Lakes, the British Isles and Korea are apparent. The highest latitudes of Europe and Eurasia are completely obscured by clouds, while Antarctica stands out cold and clear at the bottom of the image.
The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.
Interpreting Sky-Averaged 21-cm Measurements
NASA Astrophysics Data System (ADS)
Mirocha, Jordan
2015-01-01
Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions.I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation
Gupta, Tejpal; Jalali, Rakesh; Goswami, Savita; Nair, Vimoj; Moiyadi, Aliasgar; Epari, Sridhar; Sarin, Rajiv
2012-08-01
Purpose: To report on acute toxicity, longitudinal cognitive function, and early clinical outcomes in children with average-risk medulloblastoma. Methods and Materials: Twenty children ≥5 years of age classified as having average-risk medulloblastoma were accrued on a prospective protocol of hyperfractionated radiation therapy (HFRT) alone. Radiotherapy was delivered with two daily fractions (1 Gy/fraction, 6 to 8 hours apart, 5 days/week), initially to the neuraxis (36 Gy/36 fractions), followed by conformal tumor bed boost (32 Gy/32 fractions) for a total tumor bed dose of 68 Gy/68 fractions over 6 to 7 weeks. Cognitive function was prospectively assessed longitudinally (pretreatment and at specified posttreatment follow-up visits) with the Wechsler Intelligence Scale for Children to give verbal quotient, performance quotient, and full-scale intelligence quotient (FSIQ). Results: The median age of the study cohort was 8 years (range, 5-14 years), representing a slightly older cohort. Acute hematologic toxicity was mild and self-limiting. Eight (40%) children had subnormal intelligence (FSIQ <85), including 3 (15%) with mild mental retardation (FSIQ 56-70) even before radiotherapy. Cognitive functioning for all tested domains was preserved in children evaluable at 3 months, 1 year, and 2 years after completion of HFRT, with no significant decline over time. Age at diagnosis or baseline FSIQ did not have a significant impact on longitudinal cognitive function. At a median follow-up time of 33 months (range, 16-58 months), 3 patients had died (2 of relapse and 1 of accidental burns), resulting in 3-year relapse-free survival and overall survival of 83.5% and 83.2%, respectively. Conclusion: HFRT without upfront chemotherapy has an acceptable acute toxicity profile, without an unduly increased risk of relapse, with preserved cognitive functioning in children with average-risk medulloblastoma.
Code of Federal Regulations, 2011 CFR
2011-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...
Code of Federal Regulations, 2013 CFR
2013-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...
Code of Federal Regulations, 2010 CFR
2010-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...
Code of Federal Regulations, 2014 CFR
2014-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...
Instantaneous, phase-averaged, and time-averaged pressure from particle image velocimetry
NASA Astrophysics Data System (ADS)
de Kat, Roeland
2015-11-01
Recent work on pressure determination using velocity data from particle image velocimetry (PIV) resulted in approaches that allow for instantaneous and volumetric pressure determination. However, applying these approaches is not always feasible (e.g. due to resolution, access, or other constraints) or desired. In those cases pressure determination approaches using phase-averaged or time-averaged velocity provide an alternative. To assess the performance of these different pressure determination approaches against one another, they are applied to a single data set and their results are compared with each other and with surface pressure measurements. For this assessment, the data set of a flow around a square cylinder (de Kat & van Oudheusden, 2012, Exp. Fluids 52:1089-1106) is used. RdK is supported by a Leverhulme Trust Early Career Fellowship.
Determining average path length and average trapping time on generalized dual dendrimer
NASA Astrophysics Data System (ADS)
Li, Ling; Guan, Jihong
2015-03-01
Dendrimers have a wide range of important applications in various fields. In some cases, during transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. We also discuss the influence of the coordination number on trapping efficiency.
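For any small network, the APL can be computed directly from its definition by breadth-first search from every node. This brute-force sketch is generic, not the paper's closed-form result for Husimi cacti:

```python
from collections import deque

def average_path_length(adj):
    """Mean shortest-path distance over all ordered node pairs,
    via breadth-first search from each node (unweighted edges)."""
    nodes = list(adj)
    total = count = 0
    for src in nodes:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for dst in nodes:
            if dst != src:
                total += dist[dst]
                count += 1
    return total / count

# Star graph on 5 nodes with center 0: leaf-center pairs have distance 1,
# leaf-leaf pairs distance 2, so APL = (8*1 + 12*2) / 20 = 1.6
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(average_path_length(star))  # 1.6
```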
20 CFR 226.62 - Computing average monthly compensation.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Computing average monthly compensation. 226.62... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation is...
40 CFR 1033.710 - Averaging emission credits.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Averaging emission credits. 1033.710... Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. You may average emission credits only as allowed by § 1033.740. (b) You may certify one or more...
20 CFR 226.62 - Computing average monthly compensation.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computing average monthly compensation. 226... RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation...
Westphal, M; Frazier, E; Miller, M C
1979-01-01
A five-year review of accounting data at a university hospital shows that immediately following institution of concurrent PSRO admission and length of stay review of Medicare-Medicaid patients, there was a significant decrease in length of stay and a fall in average charges generated per patient against the inflationary trend. Similar changes did not occur for the non-Medicare-Medicaid patients who were not reviewed. The observed changes occurred even though the review procedure rarely resulted in the denial of services to patients, suggesting an indirect effect of review.
Code of Federal Regulations, 2014 CFR
2014-07-01
... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment... Carbon-Related Exhaust Emissions § 600.510-12 Calculation of average fuel economy and average carbon.... (iv) (2) Average carbon-related exhaust emissions will be calculated to the nearest one gram per...
Cost averaging techniques for robust control of flexible structural systems
NASA Technical Reports Server (NTRS)
Hagood, Nesbitt W.; Crawley, Edward F.
1991-01-01
Viewgraphs on cost averaging techniques for robust control of flexible structural systems are presented. Topics covered include: modeling of parameterized systems; average cost analysis; reduction of parameterized systems; and static and dynamic controller synthesis.
Patient and hospital characteristics associated with average length of stay.
Shi, L
1996-01-01
This article examines the relationship between patient, hospital characteristics, and hospital average length of stay controlling for major disease categories. A constellation of patient and physician factors were found to be significantly associated with average hospital length of stay.
Synthesis of Averaged Circuit Models for Switched Power Converters
1989-11-01
November 1989, LIDS-P-1930. Seth R. Sanders, George C. Verghese. Abstract: Averaged circuit models for switching power converters are useful for purposes of analysis and for obtaining engineering intuition into the operation of these switched circuits. This paper develops averaged circuit models for switching converters using an in-place averaging method. The method proceeds in a
76 FR 57081 - Annual Determination of Average Cost of Incarceration
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-15
... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2010 was $28,284. The average annual cost to confine an inmate in a Community Corrections...
76 FR 6161 - Annual Determination of Average Cost of Incarceration
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-03
... No: 2011-2363] DEPARTMENT OF JUSTICE Bureau of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2009 was $25,251. The average annual cost to confine an...
78 FR 16711 - Annual Determination of Average Cost of Incarceration
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-18
... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2011 was $28,893.40. The average annual cost to confine an inmate in a Community...
7 CFR 760.640 - National average market price.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 7 2011-01-01 2011-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...
7 CFR 760.640 - National average market price.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average Market Price (NAMP) using the best sources available, as determined by the Deputy Administrator,...
Averaging and Globalising Quotients of Informetric and Scientometric Data.
ERIC Educational Resources Information Center
Egghe, Leo; Rousseau, Ronald
1996-01-01
Discussion of impact factors for "Journal Citation Reports" subject categories focuses on the difference between an average of quotients and a global average, obtained as a quotient of averages. Applications in the context of informetrics and scientometrics are given, including journal prices and subject discipline influence scores.…
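The distinction the abstract draws between an average of quotients and a global average (a quotient of averages) is easy to see on made-up numbers:

```python
# Illustrative (citations, publications) pairs for two hypothetical
# journals; the numbers are made up, not from the paper.
journals = [(10, 5), (90, 30)]

# Average of the per-journal quotients vs. the global quotient of totals
# (equivalently, the quotient of the two averages).
average_of_quotients = sum(c / p for c, p in journals) / len(journals)
quotient_of_averages = sum(c for c, _ in journals) / sum(p for _, p in journals)

print(average_of_quotients)   # (2 + 3) / 2 = 2.5
print(quotient_of_averages)   # 100 / 35 ≈ 2.857
```

The global average weights each journal by its publication count, so the two measures coincide only when all journals are the same size.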
7 CFR 51.577 - Average midrib length.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Average midrib length. 51.577 Section 51.577... STANDARDS) United States Standards for Celery Definitions § 51.577 Average midrib length. Average midrib... attachment at the base to the first node....
7 CFR 51.577 - Average midrib length.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Average midrib length. 51.577 Section 51.577... STANDARDS) United States Standards for Celery Definitions § 51.577 Average midrib length. Average midrib... attachment at the base to the first node....
7 CFR 51.2561 - Average moisture content.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except...
7 CFR 51.2561 - Average moisture content.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Average moisture content. 51.2561 Section 51.2561 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards....2561 Average moisture content. (a) Determining average moisture content of the lot is not a...
7 CFR 51.2561 - Average moisture content.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except...
7 CFR 51.2561 - Average moisture content.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Average moisture content. 51.2561 Section 51.2561 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards....2561 Average moisture content. (a) Determining average moisture content of the lot is not a...
7 CFR 51.2548 - Average moisture content determination.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Average moisture content determination. 51.2548 Section 51.2548 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE... Nuts in the Shell § 51.2548 Average moisture content determination. (a) Determining average...
7 CFR 51.2561 - Average moisture content.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except...
7 CFR 51.2548 - Average moisture content determination.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Average moisture content determination. 51.2548 Section 51.2548 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE... Nuts in the Shell § 51.2548 Average moisture content determination. (a) Determining average...
40 CFR 1042.710 - Averaging emission credits.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Averaging emission credits. 1042.710..., Banking, and Trading for Certification § 1042.710 Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. (b) You may certify one or more engine families...
Perturbation resilience and superiorization methodology of averaged mappings
NASA Astrophysics Data System (ADS)
He, Hongjin; Xu, Hong-Kun
2017-04-01
We first prove the bounded perturbation resilience for the successive fixed point algorithm of averaged mappings, which extends the string-averaging projection and block-iterative projection methods. We then apply the superiorization methodology to a constrained convex minimization problem where the constraint set is the intersection of fixed point sets of a finite family of averaged mappings.
Sample Size Bias in Judgments of Perceptual Averages
ERIC Educational Resources Information Center
Price, Paul C.; Kimura, Nicole M.; Smith, Andrew R.; Marshall, Lindsay D.
2014-01-01
Previous research has shown that people exhibit a sample size bias when judging the average of a set of stimuli on a single dimension. The more stimuli there are in the set, the greater people judge the average to be. This effect has been demonstrated reliably for judgments of the average likelihood that groups of people will experience negative,…
20 CFR 404.221 - Computing your average monthly wage.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...
Robust Morphological Averages in Three Dimensions for Anatomical Atlas Construction
NASA Astrophysics Data System (ADS)
Márquez, Jorge; Bloch, Isabelle; Schmitt, Francis
2004-09-01
We present original methods for obtaining robust, anatomical shape-based averages of features of the human head anatomy from a normal population. Our goals are computerized atlas construction with representative anatomical features and morphometry for specific populations. A method for true-morphological averaging is proposed, consisting of a suitable blend of shape-related information for N objects to obtain a progressive average. It is made robust by penalizing, in a morphological sense, the contributions of features less similar to the current average. Morphological error and similarity, as well as penalization, are based on the same paradigm as the morphological averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
...-hour arithmetic averages into appropriate averaging times and units? (a) Use the equation in § 60.1935.... If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in... Reference Method 19 in appendix A of this part, section 4.1, to calculate the daily arithmetic average...
Adaptive face coding and discrimination around the average face.
Rhodes, Gillian; Maloney, Laurence T; Turner, Jenny; Ewing, Louise
2007-03-01
Adaptation paradigms highlight the dynamic nature of face coding and suggest that identity is coded relative to an average face that is tuned by experience. In low-level vision, adaptive coding can enhance sensitivity to differences around the adapted level. We investigated whether sensitivity to differences around the average face is similarly enhanced. Converging evidence from three paradigms showed no enhancement. Discrimination of small interocular spacing differences was not better for faces close to the average (Study 1). Nor was perceived similarity reduced for face pairs close to (spanning) the average (Study 2). On the contrary, these pairs were judged most similar. Maximum likelihood perceptual difference scaling (Studies 3 and 4) confirmed that sensitivity to differences was reduced, not enhanced, around the average. We conclude that adaptive face coding does not enhance discrimination around the average face.
TIME INVARIANT MULTI ELECTRODE AVERAGING FOR BIOMEDICAL SIGNALS.
Orellana, R Martinez; Erem, B; Brooks, D H
2013-12-31
One of the biggest challenges in averaging ECG or EEG signals is to overcome temporal misalignments and distortions, due to uncertain timing or complex non-stationary dynamics. Standard methods average individual leads over a collection of epochs on a time-sample by time-sample basis, even when multi-electrode signals are available. Here we propose a method that averages multi electrode recordings simultaneously by using spatial patterns and without relying on time or frequency.
Conditionally-averaged structures in wall-bounded turbulent flows
NASA Technical Reports Server (NTRS)
Guezennec, Yann G.; Piomelli, Ugo; Kim, John
1987-01-01
The quadrant-splitting and wall-shear detection techniques were used to obtain ensemble-averaged wall layer structures. The two techniques give similar results for Q4 events, but the wall-shear method leads to smearing of Q2 events. Events were found to maintain their identity for very long times. The ensemble-averaged structures scale with outer variables. Turbulence producing events were associated with one dominant vortical structure rather than a pair of counter-rotating structures. An asymmetry-preserving averaging scheme was devised that yields an average structure more closely resembling the instantaneous one.
Light-cone averaging in cosmology: formalism and applications
NASA Astrophysics Data System (ADS)
Gasperini, M.; Marozzi, G.; Nugier, F.; Veneziano, G.
2011-07-01
We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted ``geodesic light-cone'' coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called ``redshift drift'' in a generic inhomogeneous Universe.
78 FR 49770 - Annual Determination of Average Cost of Incarceration
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-15
... of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal... annual cost to confine an inmate in a Community Corrections Center for Fiscal Year 2012 was $27,003...
Hadley circulations for zonally averaged heating centered off the equator
NASA Technical Reports Server (NTRS)
Lindzen, Richard S.; Hou, Arthur Y.
1988-01-01
Consistent with observations, it is found that moving peak heating even 2 deg off the equator leads to profound asymmetries in the Hadley circulation, with the winter cell amplifying greatly and the summer cell becoming negligible. It is found that the annually averaged Hadley circulation is much larger than the circulation forced by the annually averaged heating.
40 CFR 63.652 - Emissions averaging provisions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... emissions average. This must include any Group 1 emission points to which the reference control technology.... (c) The following emission points can be used to generate emissions averaging credits if control was... agrees has a higher nominal efficiency than the reference control technology. Information on the...
40 CFR 63.652 - Emissions averaging provisions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... emissions average. This must include any Group 1 emission points to which the reference control technology.... (c) The following emission points can be used to generate emissions averaging credits if control was... agrees has a higher nominal efficiency than the reference control technology. Information on the...
A Simple Geometrical Derivation of the Spatial Averaging Theorem.
ERIC Educational Resources Information Center
Whitaker, Stephen
1985-01-01
The connection between single phase transport phenomena and multiphase transport phenomena is easily accomplished by means of the spatial averaging theorem. Although different routes to the theorem have been used, this paper provides a route to the averaging theorem that can be used in undergraduate classes. (JN)
Interpreting Bivariate Regression Coefficients: Going beyond the Average
ERIC Educational Resources Information Center
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
40 CFR 63.503 - Emissions averaging provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.503 Emissions averaging... emissions averages. (2) Compliance with the provisions of this section may be based on either organic HAP or... (a)(3)(ii) of this section. (i) The organic HAP used as the calibration gas for Method 25A, 40...
7 CFR 701.117 - Average adjusted gross income limitation.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 7 2012-01-01 2012-01-01 false Average adjusted gross income limitation. 701.117 Section 701.117 Agriculture Regulations of the Department of Agriculture (Continued) FARM SERVICE AGENCY... Conservation Program § 701.117 Average adjusted gross income limitation. To be eligible for payments...
Analytic computation of average energy of neutrons inducing fission
Clark, Alexander Rich
2016-08-12
The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.
7 CFR 51.577 - Average midrib length.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Average midrib length. 51.577 Section 51.577... (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Celery Definitions § 51.577 Average... measured from the point of attachment at the base to the first node....
7 CFR 51.577 - Average midrib length.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Average midrib length. 51.577 Section 51.577... (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Celery Definitions § 51.577 Average... measured from the point of attachment at the base to the first node....
Delineating the Average Rate of Change in Longitudinal Models
ERIC Educational Resources Information Center
Kelley, Ken; Maxwell, Scott E.
2008-01-01
The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…
7 CFR 51.2548 - Average moisture content determination.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Average moisture content determination. 51.2548 Section 51.2548 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE... moisture content determination. (a) Determining average moisture content of the lot is not a requirement...
7 CFR 51.2548 - Average moisture content determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content determination. 51.2548 Section 51.2548 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE... moisture content determination. (a) Determining average moisture content of the lot is not a requirement...
7 CFR 51.2548 - Average moisture content determination.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Average moisture content determination. 51.2548 Section 51.2548 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE... moisture content determination. (a) Determining average moisture content of the lot is not a requirement...
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2011 CFR
2011-10-01
... bid amounts for each prescription drug plan (not including fallbacks) and for each MA-PD plan...(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid amount is a....258(c)(1) of this chapter) and the denominator equal to the total number of Part D...
75 FR 78157 - Farmer and Fisherman Income Averaging
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-15
... computing income tax liability. The regulations reflect changes made by the American Jobs Creation Act of...) relating to the averaging of farm and fishing income in computing tax liability. A notice of proposed... to compute current year (election year) income tax liability under section 1 by averaging, over...
Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?
Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.
2013-06-17
Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
LANDSAT-4 horizon scanner full orbit data averages
NASA Technical Reports Server (NTRS)
Stanley, J. P.; Bilanow, S.
1983-01-01
Averages taken over full orbit data spans of the pitch and roll residual measurement errors of the two conical Earth sensors operating on the LANDSAT 4 spacecraft are described. The variability of these full orbit averages over representative data throughout the year is analyzed to demonstrate the long term stability of the sensor measurements. The data analyzed consist of 23 segments of sensor measurements made at 2 to 4 week intervals. Each segment is roughly 24 hours in length. The variation of the full orbit average is examined both as a function of orbit within a day and as a function of day of year. The dependence on day of year is based on associating the start date of each segment with the mean full orbit average for the segment. The peak-to-peak and standard deviation values of the averages for each data segment are computed, and their variation with day of year is also examined.
Time domain averaging based on fractional delay filter
NASA Astrophysics Data System (ADS)
Wu, Wentao; Lin, Jing; Han, Shaobo; Ding, Xianghui
2009-07-01
For rotary machinery, periodic components in signals are often extracted to investigate the condition of each rotating part. Time domain averaging is a traditional technique for extracting those periodic components. Originally, a phase reference signal is required to ensure that all the averaged segments have the same initial phase. In some cases, however, no phase reference is available, so efficient algorithms are needed to synchronize the segments before averaging. Several algorithms exist for performing time domain averaging without a phase reference signal, but they cannot eliminate the phase error completely. Against this background, a new time domain averaging algorithm that is theoretically free of phase error is proposed. Its performance is improved by incorporating a fractional delay filter. The efficiency of the proposed algorithm is validated by simulations.
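The synchronization step the abstract describes can be sketched with a simple stand-in: instead of the authors' fractional-delay filter, each segment is circularly shifted by the integer lag that best correlates with the first segment before averaging. This is only a minimal illustration of time domain averaging without a phase reference, not the paper's algorithm:

```python
# Minimal sketch (not the authors' fractional-delay method) of time
# domain averaging with segment synchronization: each segment is
# circularly shifted to best match the first segment, which stands in
# for a phase reference signal, and then averaged sample-wise.
import math

def circular_xcorr_shift(ref, seg):
    """Integer circular shift of `seg` maximizing correlation with `ref`."""
    n = len(ref)
    best_shift, best_score = 0, float("-inf")
    for s in range(n):
        score = sum(ref[i] * seg[(i + s) % n] for i in range(n))
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift

def synchronized_average(segments):
    """Align all segments to the first one, then average sample-wise."""
    ref = segments[0]
    n = len(ref)
    acc = [0.0] * n
    for seg in segments:
        s = circular_xcorr_shift(ref, seg)
        for i in range(n):
            acc[i] += seg[(i + s) % n]
    return [a / len(segments) for a in acc]

# Demo: three copies of one periodic waveform, each with an unknown
# circular phase shift; synchronized averaging recovers the waveform.
n = 16
base = [math.sin(2 * math.pi * k / n) for k in range(n)]
segments = [[base[(k + d) % n] for k in range(n)] for d in (0, 3, 7)]
avg = synchronized_average(segments)
```

With integer shifts the recovery here is exact; the fractional-delay filter in the paper addresses the harder case where the true misalignment falls between samples.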
Average cross-responses in correlated financial markets
NASA Astrophysics Data System (ADS)
Wang, Shanshan; Schäfer, Rudi; Guhr, Thomas
2016-09-01
There are non-vanishing price responses across different stocks in correlated financial markets, reflecting non-Markovian features. We further study this issue by performing different averages, which identify active and passive cross-responses. The two average cross-responses show different characteristic dependences on the time lag. The passive cross-response exhibits a shorter response period with sizeable volatilities, while the corresponding period for the active cross-response is longer. The average cross-responses for a given stock are evaluated either with respect to the whole market or to different sectors. Using the response strength, the influences of individual stocks are identified and discussed. Moreover, the various cross-responses as well as the average cross-responses are compared with the self-responses. In contrast to the short-memory trade sign cross-correlations for each pair of stocks, the sign cross-correlations averaged over different pairs of stocks show long memory.
Sample size bias in retrospective estimates of average duration.
Smith, Andrew R; Rule, Shanon; Price, Paul C
2017-03-25
People often estimate the average duration of several events (e.g., on average, how long does it take to drive from one's home to his or her office). While there is a great deal of research investigating estimates of duration for a single event, few studies have examined estimates when people must average across numerous stimuli or events. The current studies were designed to fill this gap by examining how people's estimates of average duration were influenced by the number of stimuli being averaged (i.e., the sample size). Based on research investigating the sample size bias, we predicted that participants' judgments of average duration would increase as the sample size increased. Across four studies, we demonstrated a sample size bias for estimates of average duration with different judgment types (numeric estimates and comparisons), study designs (between and within-subjects), and paradigms (observing images and performing tasks). The results are consistent with the more general notion that psychological representations of magnitudes in one dimension (e.g., quantity) can influence representations of magnitudes in another dimension (e.g., duration).
Programmable noise bandwidth reduction by means of digital averaging
NASA Technical Reports Server (NTRS)
Poklemba, John J. (Inventor)
1993-01-01
Predetection noise bandwidth reduction is effected by a pre-averager capable of digitally averaging the samples of an input data signal over two or more symbols, the averaging interval being defined by the input sampling rate divided by the output sampling rate. Because the averaged sample is clocked to a suitable detector at a much slower rate than the input signal sampling rate, the noise bandwidth at the input to the detector is reduced; the input to the detector has an improved signal-to-noise ratio as a result of the averaging process, and the rate at which subsequent processing must operate is correspondingly reduced. The pre-averager forms a data filter with an output sampling rate of one sample per symbol of received data. More specifically, selected ones of a plurality of samples accumulated over two or more symbol intervals are output in response to clock signals at a rate of one sample per symbol interval. The pre-averager includes circuitry for weighting digitized signal samples using stored finite impulse response (FIR) filter coefficients. A method according to the present invention is also disclosed.
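The noise-bandwidth benefit of pre-averaging can be shown with the simplest possible case: a boxcar (uniform-weight) FIR average over non-overlapping blocks of R samples, which lowers the output rate by R and, for white noise, reduces the noise variance by roughly a factor of R. This is a sketch under that uniform-weight assumption; the patented device uses stored, generally non-uniform FIR coefficients:

```python
# Hedged sketch of the pre-averaging idea: averaging R input samples per
# output sample reduces the output rate R:1 and, for white noise, cuts
# the noise variance by about a factor of R. Uniform (boxcar) weights
# are assumed here; the patent allows arbitrary stored FIR coefficients.
import random

def pre_average(samples, r):
    """Boxcar-average non-overlapping blocks of r samples (rate reduced r:1)."""
    return [sum(samples[i:i + r]) / r for i in range(0, len(samples) - r + 1, r)]

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(100_000)]
out = pre_average(noise, 10)

var_in = sum(x * x for x in noise) / len(noise)    # about 1.0
var_out = sum(x * x for x in out) / len(out)       # about var_in / 10
```

The variance ratio is the discrete analogue of the predetection noise bandwidth reduction the abstract describes.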
Inversion of the circular averages transform using the Funk transform
NASA Astrophysics Data System (ADS)
Evren Yarman, Can; Yazıcı, Birsen
2011-06-01
The integral of a function defined on the half-plane along semi-circles centered on the boundary of the half-plane is known as the circular averages transform. The circular averages transform arises in many tomographic image reconstruction problems. In particular, in synthetic aperture radar (SAR), when the transmitting and receiving antennas are colocated, the received signal is modeled as the integral of the ground reflectivity function of the illuminated scene over the intersection of spheres centered at the antenna location and the surface topography. When the surface topography is flat, the received signal becomes the circular averages transform of the ground reflectivity function. Thus, SAR image formation requires inversion of the circular averages transform. Apart from SAR, the circular averages transform also arises in thermo-acoustic tomography and sonar inverse problems. In this paper, we present a new inversion method for the circular averages transform using the Funk transform. For a function defined on the unit sphere, its Funk transform is given by the integrals of the function along great circles. We use hyperbolic geometry to establish a diffeomorphism between the circular averages transform, the hyperbolic x-ray transform, and the Funk transform. The method is exact and numerically efficient when fast Fourier transforms over the sphere are used. We present numerical simulations to demonstrate the performance of the inversion method. Dedicated to Dennis Healy, a friend of Applied Mathematics and Engineering.
ERIC Educational Resources Information Center
Saleh, Mohammad; Lazonder, Ard W.; Jong, Ton de
2007-01-01
Average-ability students often do not take full advantage of learning in mixed-ability groups because they hardly engage in the group interaction. This study examined whether structuring collaboration by group roles and ground rules for helping behavior might help overcome this participatory inequality. In a plant biology course, heterogeneously…
Halberstadt, Jamin; Rhodes, Gillian
2003-03-01
Average faces are attractive. We sought to distinguish whether this preference is an adaptation for finding high-quality mates (the direct selection account) or whether it reflects more general information-processing mechanisms. In three experiments, we examined the attractiveness of birds, fish, and automobiles whose averageness had been manipulated using digital image manipulation techniques common in research on facial attractiveness. Both manipulated averageness and rated averageness were strongly associated with attractiveness in all three stimulus categories. In addition, for birds and fish, but not for automobiles, the correlation between subjective averageness and attractiveness remained significant when the effect of subjective familiarity was partialled out. The results suggest that at least two mechanisms contribute to the attractiveness of average exemplars. One is a general preference for familiar stimuli, which contributes to the appeal of averageness in all three categories. The other is a preference for averageness per se, which was found for birds and fish, but not for automobiles, and may reflect a preference for features signaling genetic quality in living organisms, including conspecifics.
Time average vibration fringe analysis using Hilbert transformation
Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad
2010-10-20
Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
Experimental demonstration of squeezed-state quantum averaging
Lassen, Mikael; Madsen, Lars Skovgaard; Andersen, Ulrik L.; Sabuncu, Metin; Filip, Radim
2010-08-15
We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented harmonic mean yields a lower value than the corresponding value obtained for the standard arithmetic-mean strategy. The effect of quantum averaging is experimentally tested for squeezed and thermal states as well as for uncorrelated and partially correlated noise sources. The harmonic-mean protocol can be used to efficiently stabilize a set of squeezed-light sources with statistically fluctuating noise levels.
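The abstract's claim that the harmonic mean yields a lower value than the arithmetic mean is an instance of the classical AM-HM inequality for positive numbers. A small (purely classical) check, with hypothetical variance values standing in for measured quadrature variances:

```python
# Classical illustration of the AM-HM inequality underlying the
# experiment's comparison: for positive variances, the harmonic mean
# never exceeds the arithmetic mean. The variance values below are
# hypothetical, not the paper's measured data.
def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def harmonic_mean(xs):
    return len(xs) / sum(1.0 / x for x in xs)

variances = [0.5, 2.0, 4.0]          # hypothetical quadrature variances
hm = harmonic_mean(variances)        # 3 / 2.75, about 1.09
am = arithmetic_mean(variances)      # about 2.17
```

Equality holds only when all variances coincide, which is why harmonic-mean averaging helps precisely when the source noise levels fluctuate.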
Sample Selected Averaging Method for Analyzing the Event Related Potential
NASA Astrophysics Data System (ADS)
Taguchi, Akira; Ono, Youhei; Kimura, Tomoaki
The event related potential (ERP) is often measured through the oddball task, in which subjects are given a “rare stimulus” and a “frequent stimulus”. Measured ERPs are analyzed by the averaging technique; the amplitude of the ERP component P300 becomes large when the “rare stimulus” is given. However, some measured ERPs lack the original features of the ERP. Thus, it is necessary to reject unsuitable measured ERPs before applying the averaging technique. In this paper, we propose a rejection method for unsuitable measured ERPs for use with the averaging technique. Moreover, we combine the proposed method with Woody's adaptive filter method.
Homelessness prevention in New York City: On average, it works.
Goodman, Sarena; Messeri, Peter; O'Flaherty, Brendan
2016-03-01
This study evaluates the community impact of the first four years of Homebase, a homelessness prevention program in New York City. Family shelter entries decreased on average in the neighborhoods in which Homebase was operating. Homebase effects appear to be heterogeneous, and so different kinds of averages imply different-sized effects. The (geometric) average decrease in shelter entries was about 5% when census tracts are weighted equally, and 11% when community districts (which are much larger) are weighted equally. This study also examines the effect of foreclosures. Foreclosures are associated with more shelter entries in neighborhoods that usually do not send large numbers of families to the shelter system.
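The geometric average of relative changes mentioned in the abstract can be sketched as follows; the per-area ratios below are hypothetical, not the study's tract-level data:

```python
# Sketch of a geometric average of relative changes, the kind of summary
# quoted in the abstract (about 5% at the tract level). The ratios below
# are hypothetical after/before shelter-entry ratios, one per area.
import math

def geometric_mean_ratio(ratios):
    """Geometric mean of per-area ratios, via the mean of log ratios."""
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

ratios = [0.93, 0.97, 0.95]                  # hypothetical per-area ratios
avg_decrease = 1.0 - geometric_mean_ratio(ratios)   # about 0.05, i.e. 5%
```

Averaging in log space is what makes the summary insensitive to whether a 5% drop is expressed as a ratio of 0.95 or its reciprocal, and it is also why weighting areas differently (tracts versus community districts) changes the headline number.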
[Average number of living children of the members of parliament].
Toros, A
1989-01-01
"This study compares the average number of living children of the members of the parliament [in Turkey] with the average number of living children of the general public as found in the 1988 Population and Health Survey. The findings indicate that the average number of living children of the members of the parliament [is] substantially lower than that of the general public. Under the light of these findings the members of the parliament are invited not to refrain from speeches promoting family planning in Turkey." (SUMMARY IN ENG)
Average waiting time in FDDI networks with local priorities
NASA Technical Reports Server (NTRS)
Gercek, Gokhan
1994-01-01
A method is introduced to compute the average queuing delay experienced by different priority-group messages in an FDDI node. It is assumed that no FDDI MAC-layer priorities are used. Instead, a priority structure is introduced to the messages locally at a higher protocol layer (e.g., the network layer). Such a method was planned for use in the Space Station Freedom FDDI network. Conservation of the average waiting time is used as the key concept in computing average queuing delays. It is shown that local priority assignments are feasible, especially when the traffic distribution is asymmetric in the FDDI network.
Correction for spatial averaging in laser speckle contrast analysis
Thompson, Oliver; Andrews, Michael; Hirst, Evan
2011-01-01
Practical laser speckle contrast analysis systems face a problem of spatial averaging of speckles, due to the pixel size in the cameras used. Existing practice is to use a system factor in speckle contrast analysis to account for spatial averaging. The linearity of the system factor correction has not previously been confirmed. The problem of spatial averaging is illustrated using computer simulation of time-integrated dynamic speckle, and the linearity of the correction confirmed using both computer simulation and experimental results. The valid linear correction allows various useful compromises in the system design. PMID:21483623
Development and evaluation of a hybrid averaged orbit generator
NASA Technical Reports Server (NTRS)
Mcclain, W. D.; Long, A. C.; Early, L. W.
1978-01-01
A rapid orbit generator based on a first-order application of the Generalized Method of Averaging has been developed for the Research and Development (R&D) version of the Goddard Trajectory Determination System (GTDS). The evaluation of the averaged equations of motion can use both numerically averaged and recursively evaluated, analytically averaged perturbation models. These equations are numerically integrated to obtain the secular and long-period motion. Factors affecting efficient orbit prediction are discussed and guidelines are presented for treatment of each major perturbation. Guidelines for obtaining initial mean elements compatible with the theory are presented. An overview of the orbit generator is presented and comparisons with high precision methods are given.
Average local ionization energy generalized to correlated wavefunctions
Ryabinkin, Ilya G.; Staroverov, Viktor N.
2014-08-28
The average local ionization energy function introduced by Politzer and co-workers [Can. J. Chem. 68, 1440 (1990)] as a descriptor of chemical reactivity has a limited utility because it is defined only for one-determinantal self-consistent-field methods such as the Hartree–Fock theory and the Kohn–Sham density-functional scheme. We reinterpret the negative of the average local ionization energy as the average total energy of an electron at a given point and, by rewriting this quantity in terms of reduced density matrices, arrive at its natural generalization to correlated wavefunctions. The generalized average local electron energy turns out to be the diagonal part of the coordinate representation of the generalized Fock operator divided by the electron density; it reduces to the original definition in terms of canonical orbitals and their eigenvalues for one-determinantal wavefunctions. The discussion is illustrated with calculations on selected atoms and molecules at various levels of theory.
Effects of spatial variability and scale on areal-average evapotranspiration
NASA Technical Reports Server (NTRS)
Famiglietti, J. S.; Wood, Eric F.
1993-01-01
This paper explores the effect of spatial variability and scale on areally-averaged evapotranspiration. A spatially-distributed water and energy balance model is employed to determine the effect of explicit patterns of model parameters and atmospheric forcing on modeled areally-averaged evapotranspiration over a range of increasing spatial scales. The analysis is performed from the local scale to the catchment scale. The study area is King's Creek catchment, an 11.7 sq km watershed located on the native tallgrass prairie of Kansas. The dominant controls on the scaling behavior of catchment-average evapotranspiration are investigated by simulation, as is the existence of a threshold scale for evapotranspiration modeling, with implications for explicit versus statistical representation of important process controls. It appears that some of our findings are fairly general, and will therefore provide a framework for understanding the scaling behavior of areally-averaged evapotranspiration at the catchment and larger scales.
The origin of consistent protein structure refinement from structural averaging.
Park, Hahnbeom; DiMaio, Frank; Baker, David
2015-06-02
Recent studies have shown that explicit solvent molecular dynamics (MD) simulation followed by structural averaging can consistently improve protein structure models. We find that improvement upon averaging is not limited to explicit water MD simulation, as consistent improvements are also observed for more efficient implicit solvent MD or Monte Carlo minimization simulations. To determine the origin of these improvements, we examine the changes in model accuracy brought about by averaging at the individual residue level. We find that the improvement in model quality from averaging results from the superposition of two effects: a dampening of deviations from the correct structure in the least well modeled regions, and a reinforcement of consistent movements towards the correct structure in better modeled regions. These observations are consistent with an energy landscape model in which the magnitude of the energy gradient toward the native structure decreases with increasing distance from the native state.
Does subduction zone magmatism produce average continental crust
NASA Technical Reports Server (NTRS)
Ellam, R. M.; Hawkesworth, C. J.
1988-01-01
The question of whether present day subduction zone magmatism produces material of average continental crust composition, which perhaps most would agree is andesitic, is addressed. It was argued that modern andesitic to dacitic rocks in Andean-type settings are produced by plagioclase fractionation of mantle derived basalts, leaving a complementary residue with low Rb/Sr and a positive Eu anomaly. This residue must be removed, for example by delamination, if the average crust produced in these settings is andesitic. The author argued against this, pointing out the absence of evidence for such a signature in the mantle. Either the average crust is not andesitic, a conclusion the author was not entirely comfortable with, or other crust forming processes must be sought. One possibility is that during the Archean, direct slab melting of basaltic or eclogitic oceanic crust produced felsic melts, which together with about 65 percent mafic material, yielded an average crust of andesitic composition.
Use of a Correlation Coefficient for Conditional Averaging.
1997-04-01
A method of collecting ensembles for conditional averaging is presented that uses data collected from a plane mixing layer. Selection of the sine function period and of a correlation coefficient threshold are discussed. Also examined are the effects of the period and threshold level on the number of ensembles captured for inclusion in conditional averaging.
Flavor Physics Data from the Heavy Flavor Averaging Group (HFAG)
The Heavy Flavor Averaging Group (HFAG) was established at the May 2002 Flavor Physics and CP Violation Conference in Philadelphia, and continues the LEP Heavy Flavor Steering Group's tradition of providing regular updates to the world averages of heavy flavor quantities. Data are provided by six subgroups that each focus on a different set of heavy flavor measurements: B lifetimes and oscillation parameters, Semi-leptonic B decays, Rare B decays, Unitarity triangle parameters, B decays to charm final states, and Charm Physics.
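At their simplest, world averages of the kind described above are inverse-variance weighted combinations of independent measurements. A minimal sketch with illustrative values (HFAG's actual procedure also handles correlations and systematic uncertainties):

```python
def weighted_average(values, errors):
    """Inverse-variance weighted average and its uncertainty.

    Each measurement is weighted by 1/sigma^2; the combined
    uncertainty is the inverse square root of the total weight.
    """
    weights = [1.0 / e**2 for e in errors]
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    return mean, total ** -0.5

# two hypothetical measurements of the same quantity
mean, err = weighted_average([1.52, 1.49], [0.02, 0.04])
```

The combined uncertainty is always smaller than the smallest individual uncertainty, which is the point of averaging independent measurements.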
Modelling and designing digital control systems with averaged measurements
NASA Technical Reports Server (NTRS)
Polites, Michael E.; Beale, Guy O.
1988-01-01
An account is given of the control systems engineering methods applicable to the design of digital feedback controllers for aerospace deterministic systems in which the output, rather than being an instantaneous measure of the system at the sampling instants, instead represents an average measure of the system over the time interval between samples. The averaging effect can be included during the modeling of the plant, thereby obviating the iteration of design/simulation phases.
Scalable Robust Principal Component Analysis using Grassmann Averages.
Hauberg, Soren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael
2015-12-23
In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie; a task beyond any current method. Source code is available online.
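The Grassmann Average iteration described in the abstract above can be sketched as follows: each zero-mean observation spans a 1-D subspace, and observations have their signs flipped to agree with the current estimate before averaging. This is an unweighted sketch; the paper's GA also carries magnitude weights, and the robust (RGA) and trimmed (TGA) variants are omitted:

```python
import numpy as np

def grassmann_average(X, n_iter=20):
    """Sketch of the (first) Grassmann Average of zero-mean data X (n, d).

    Iteratively averages the observations with signs chosen to agree
    with the current subspace estimate, then renormalizes. For Gaussian
    data the fixed point coincides with the leading principal component.
    """
    q = X[0] / np.linalg.norm(X[0])
    for _ in range(n_iter):
        signs = np.sign(X @ q)
        signs[signs == 0] = 1.0
        mu = (signs[:, None] * X).mean(axis=0)
        q = mu / np.linalg.norm(mu)
    return q

rng = np.random.default_rng(1)
d = np.array([3.0, 1.0]) / np.sqrt(10.0)   # dominant direction
X = rng.normal(size=(500, 1)) * d + rng.normal(scale=0.1, size=(500, 2))
X -= X.mean(axis=0)
q = grassmann_average(X)                    # recovers +/- d approximately
```

Because each step is just a (signed) mean, the iteration scales linearly in the number of observations, which is the source of the scalability claimed in the abstract.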
Approximate average head models for EEG source imaging.
Valdés-Hernández, Pedro A; von Ellenrieder, Nicolás; Ojeda-Gonzalez, Alejandro; Kochen, Silvia; Alemán-Gómez, Yasser; Muravchik, Carlos; Valdés-Sosa, Pedro A
2009-12-15
We examine the performance of approximate models (AM) of the head in solving the EEG inverse problem. The AM are needed when the individual's MRI is not available. We simulate the electric potential distribution generated by cortical sources for a large sample of 305 subjects, and solve the inverse problem with AM. Statistical comparisons are carried out with the distribution of the localization errors. We propose several new AM. These are the average of many individual realistic MRI-based models, such as surface-based models or lead fields. We demonstrate that the lead fields of the AM should be calculated considering source moments not constrained to be normal to the cortex. We also show that the imperfect anatomical correspondence between all cortices is the most important cause of localization errors. Our average models perform better than a random individual model or the usual average model in the MNI space. We also show that a classification based on race and gender or head size before averaging does not significantly improve the results. Our average models are slightly better than an existing AM with shape guided by measured individual electrode positions, and have the advantage of not requiring such measurements. Among the studied models, the Average Lead Field seems the most convenient tool in large, systematic clinical and research studies requiring EEG source localization when MRIs are unavailable. This AM does not need a strict alignment between head models, and can therefore be easily achieved for any type of head modeling approach.
Demonstration of a Model Averaging Capability in FRAMES
NASA Astrophysics Data System (ADS)
Meyer, P. D.; Castleton, K. J.
2009-12-01
Uncertainty in model structure can be incorporated in risk assessment using multiple alternative models and model averaging. To facilitate application of this approach to regulatory applications based on risk or dose assessment, a model averaging capability was integrated with the Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) version 2 software. FRAMES is a software platform that allows the non-parochial communication between disparate models, databases, and other frameworks. Users have the ability to implement and select environmental models for specific risk assessment and management problems. Standards are implemented so that models produce information that is readable by other downstream models and accept information from upstream models. Models can be linked across multiple media and from source terms to quantitative risk/dose estimates. Parameter sensitivity and uncertainty analysis tools are integrated. A model averaging module was implemented to accept output from multiple models and produce average results. These results can be deterministic quantities or probability distributions obtained from an analysis of parameter uncertainty. Output from alternative models is averaged using weights determined from user input and/or model calibration results. A model calibration module based on the PEST code was implemented to provide FRAMES with a general calibration capability. An application illustrates the implementation, user interfaces, execution, and results of the FRAMES model averaging capabilities.
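Conceptually, the averaging step combines the outputs of alternative models using weights derived from user input or calibration. A minimal sketch with hypothetical model names, weights, and dose values (FRAMES' actual module interfaces differ):

```python
# Hypothetical calibration-derived weights for three alternative
# conceptual models, and each model's deterministic dose estimate.
weights = {"model_a": 0.5, "model_b": 0.3, "model_c": 0.2}
doses = {"model_a": 1.2e-4, "model_b": 0.8e-4, "model_c": 2.0e-4}

# Model-averaged dose: the weighted sum across alternative models.
avg_dose = sum(weights[m] * doses[m] for m in weights)
```

The same weighted combination applies when each model's output is a probability distribution from parameter uncertainty analysis; the result is then a mixture distribution rather than a single number.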
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-15
... AGENCY 40 CFR Part 52 Approval and Promulgation of Air Quality Implementation Plans; Ohio; Canton... Implementation Plan (SIP) under the Clean Air Act to replace the previously approved motor vehicle emissions budgets with budgets developed using EPA's Motor Vehicle Emissions Simulator (MOVES) emissions model....
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-11
... AGENCY 40 CFR Part 52 Approval and Promulgation of Air Quality Implementation Plans; Ohio; Lima 1997 8... Implementation Plan (SIP) to replace the previously approved motor vehicle emissions budgets with budgets developed using EPA's Motor Vehicle Emissions Simulator (MOVES) emissions model. Ohio submitted the...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-21
..., consists of: Cecil County in Maryland; Bucks, Chester, Delaware, Montgomery and Philadelphia Counties in... Philadelphia-Wilmin- Atlantic Ci, PA-NJ-MD-DE (Cecil County) to read as follows: Sec. 81.321 Maryland... * * * * * * * Philadelphia-Wilmin-Atlantic Ci, PA-NJ-MD-DE: Cecil County Nonattainment..... Subpart 2/...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-18
... NAAQS for ozone on March 27, 2008 (73 FR 16436). II. Summary of SIP Revision On September 27, 2010, the... Budget under Executive Order 12866 (58 FR 51735, October 4, 1993); Does not impose an information...); Does not have Federalism implications as specified in Executive Order 13132 (64 FR 43255, August...
van Leeuwen, Wessel M A; Kircher, Albert; Dahlgren, Anna; Lützhöft, Margareta; Barnett, Mike; Kecklund, Göran; Åkerstedt, Torbjörn
2013-11-01
Seafarer sleepiness jeopardizes safety at sea and has been documented as a direct or contributing factor in many maritime accidents. This study investigates sleep, sleepiness, and neurobehavioral performance in a simulated 4 h on/8 h off watch system as well as the effects of a single free watch disturbance, simulating a condition of overtime work, resulting in 16 h of work in a row and a missed sleep opportunity. Thirty bridge officers (age 30 ± 6 yrs; 29 men) participated in bridge simulator trials on an identical 1-wk voyage in the North Sea and English Channel. The three watch teams started respectively with the 00-04, the 04-08, and the 08-12 watches. Participants rated their sleepiness every hour (Karolinska Sleepiness Scale [KSS]) and carried out a 5-min psychomotor vigilance test (PVT) test at the start and end of every watch. Polysomnography (PSG) was recorded during 6 watches in the first and the second half of the week. KSS was higher during the first (mean ± SD: 4.0 ± 0.2) compared with the second (3.3 ± 0.2) watch of the day (p < 0.001). In addition, it increased with hours on watch (p < 0.001), peaking at the end of watch (4.1 ± 0.2). The free watch disturbance increased KSS profoundly (p < 0.001): from 4.2 ± 0.2 to 6.5 ± 0.3. PVT reaction times were slower during the first (290 ± 6 ms) compared with the second (280 ± 6 ms) watch of the day (p < 0.001) as well as at the end of the watch (289 ± 6 ms) compared with the start (281 ± 6 ms; p = 0.001). The free watch disturbance increased reaction times (p < 0.001) from 283 ± 5 to 306 ± 7 ms. Similar effects were observed for PVT lapses. One third of all participants slept during at least one of the PSG watches. Sleep on watch was most abundant in the team working 00-04 and it increased following the free watch disturbance. 
This study reveals that, within a 4 h on/8 h off shift system, subjective and objective sleepiness peak during the night and early morning watches, coinciding with a time frame in which relatively many maritime accidents occur. In addition, we showed that overtime work strongly increases sleepiness. Finally, a striking number of participants fell asleep while on duty.
77 FR 28423 - Final Rule To Implement the 1997 8-Hour Ozone National Ambient Air Quality Standard...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-14
... Areas b. Timing of SIP Submission Under Subpart 2 Classification c. Timing of Attainment Date d. Data.... Required Required No. 182(a)(3)(B)). Subpart 2 RACT for VOCs and NOX (Sec. Not Required Required No. 182(b... Review (Sec. Required Required No. 182(a)(2)(C), (a)(4), (b)(5)). Vehicle I/M (Sec. 182(a)(2)(B),...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-04
... Plan's Motor Vehicle Emissions Budgets for Transportation Conformity Purposes; State of Colorado AGENCY... Attainment Plan (hereafter ``Denver/NFR Ozone Attainment Plan'') are adequate for transportation conformity... budgets for future transportation conformity determinations once this finding becomes effective....
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-09
.../Pennsylvania 2009 .065 42-091-0013 Montgomery/Pennsylvania......... 2009 .070 42-101-0004 Philadelphia..., quality-assured air quality data recorded during the 2009 ozone season. In accordance with requirements... the 2009 ozone season at each monitor in the area is less than 0.084 parts per million (ppm). If...
77 FR 43521 - Final Rule To Implement the 1997 8-Hour Ozone National Ambient Air Quality Standard...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-25
... Designation \\a\\ Category/classification Designated area Date \\1\\ Type Date \\1\\ Type Amador and Calaveras Cos., CA: (Central Mountain Cos.) Amador County Nonattainment 6/13/12 Subpart 2/Moderate. Calaveras County.... * * * * * * * Mariposa and Tuolumne Cos., CA: (Southern Mountain Counties) Mariposa County Nonattainment 6/13/12...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-23
... means EPA will not know your identity or contact information unless you provide it in the body of your... in the public docket and made available on the Internet. If you submit an electronic comment, EPA... the Internet and will be publicly available only in hard copy form. Publicly available...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-01
... Emission Budgets for Transportation Conformity Purposes AGENCY: Environmental Protection Agency (EPA... Commission on Environmental Quality (TCEQ) are adequate for transportation conformity purposes. As a result of EPA's finding, the BPA area must use these budgets for future conformity determinations for...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-01
... Vehicle Emission Budgets for Transportation Conformity Purposes AGENCY: Environmental Protection Agency... Commission on Environmental Quality (TCEQ) are adequate for transportation conformity purposes. As a result of EPA's finding, the DFW area must use these budgets for future conformity determinations....
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-16
... Emission Budgets for Transportation Conformity Purposes AGENCY: Environmental Protection Agency (EPA... the Louisiana Department of Environmental Quality (LDEQ) are adequate for transportation conformity purposes. As a result of EPA's finding, the Baton Rouge area must use these budgets for future...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-24
... Maintenance Plans for Transportation Conformity Purposes AGENCY: Environmental Protection Agency (EPA). ACTION... ozone nonattainment area are adequate for use in transportation conformity determinations. Ohio... ozone maintenance plan for future transportation conformity determinations. DATES: This finding...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-11
... Motor Vehicle Emissions Simulator (MOVES) emissions model. Ohio submitted the SIP revision request to... consultation process. The Federal Highway Administration and the Ohio Department of Transportation have taken...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-09
... rulemaking for this reclassification, pursuant to section 182(b)(3)(A) of the Act. DATES: Comments must be...'' before submitting comments. E-mail: Mr. Guy Donaldson at donaldson.guy@epa.gov . Please also send a copy... nonattainment area. Section 181(b)(2)(A) of the Act requires that EPA determine, based on the area's...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-18
... below, including its estimated burden and cost to the public. An Agency may not conduct or sponsor and a... and administrative costs estimates apply to every designated nonattainment area. The burden estimated... burden: 11,667 hours (per year). Burden is defined at 5 CFR 1320.3(b). Total estimated cost:...
Technology Transfer Automated Retrieval System (TEKTRAN)
One factor that could impact the feasibility of commercial on-farm slaughter of broilers is the time delay from on-farm slaughter to scalding and defeathering in the commercial plant that could be 4 h or more. This experiment evaluated feather retention force (FRF) in broilers that were slaughtered ...
Technology Transfer Automated Retrieval System (TEKTRAN)
The implementation of on farm slaughter could eliminate potential animal welfare issues associated with cooping, transport, dumping, and shackling live broilers. This research evaluated evisceration efficiency and the microbiological implications of delaying scalding and defeathering for up to 8 h a...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-15
... Specialist, Control Strategies Section, Air Programs Branch (AR-18J), Environmental Protection Agency, Region... From the Federal Register Online via the Government Publishing Office ENVIRONMENTAL PROTECTION...: Environmental Protection Agency (EPA). ACTION: Direct final rule. SUMMARY: Under the Clean Air Act (CAA), EPA...
Technology Transfer Automated Retrieval System (TEKTRAN)
Long chain omega-3 fatty acids are important in nutrition and disease management. Flavored emulsified fish oil supplements provide an alternative to encapsulated fish oils. Oil in water emulsions may offer an advantage in bio-availability of the fatty acids. Chylomicrons transport triglyceride from...
Eastern Texas Air Quality Forecasting System to Support TexAQS-II and 8-hour Ozone Modeling
NASA Astrophysics Data System (ADS)
Byun, D. W.
2005-12-01
The main objective of the Second Texas Air Quality Study (TexAQS-II) for 2005 and 2006 is to understand emissions and processes associated with the formation and transport of ozone and regional haze in Texas. The target research area is the more populated eastern half of the state, roughly from Interstate 35 eastward. Accurate meteorological and photochemical modeling efforts are essential to support this study and further enhance modeling efforts for establishing the State Implementation Plan (SIP) by Texas Commission on Environmental Quality (TCEQ). An air quality forecasting (AQF) system for Eastern Texas has been developed to provide these data and to further facilitate retrospective simulations to allow for model improvement and increased understanding of ozone episodes and emissions. We perform two-day air quality forecasting simulations with the 12-km Eastern Texas regional domain, and the 4-km Houston-Galveston area (HGA) domain utilizing a 48-CPU Beowulf Linux computer system. The dynamic boundary conditions are provided by the 36-km resolution conterminous US (CONUS) domain CMAQ simulations. Initial meteorological conditions are provided by the daily ETA forecast results. The results of individual runs are stored and made available to researchers and state and local officials via internet to study the patterns of air quality and its relationship to weather conditions and emissions. The data during the pre- and post-processing stages are in tens of gigabytes and must be managed efficiently during both the actual real-time and the subsequent computation periods. The nature of these forecasts and the time at which the initial data is available necessitates that models be executed within tight deadlines. A set of complex operational scripts is used to allow automatic operation of the data download, sequencing processors, performing graphical analysis, building database archives, and presenting on the web.
Exact Averaging of Stochastic Equations for Flow in Porous Media
Karasaki, Kenzi; Shvidler, Mark
2008-03-15
It is well known that, at present, exact averaging of the equations for flow and transport in random porous media has been achieved only for a limited class of special fields. Moreover, approximate averaging methods--for example, the convergence behavior and the accuracy of truncated perturbation series--are not well studied, and calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do exact and sufficiently general forms of averaged equations exist? Here, we present an approach for finding the general exactly averaged system of basic equations for steady flow with sources in unbounded stochastically homogeneous fields. We do this by using (1) the existence and some general properties of Green's functions for the appropriate stochastic problem, and (2) some information about the random field of conductivity. This approach enables us to find the form of the averaged equations without directly solving the stochastic equations or making the usual assumptions regarding small parameters. In the common case of a stochastically homogeneous conductivity field we present the exactly averaged new basic nonlocal equation with a unique kernel-vector. We show that in the case of some type of global symmetry (isotropy, transversal isotropy, or orthotropy), we can in the same way derive the exact averaged nonlocal equations with a unique kernel-tensor for both three-dimensional and two-dimensional flow. When global symmetry does not exist, the nonlocal equation with a kernel-tensor involves complications and leads to an ill-posed problem.
Average Soil Water Retention Curves Measured by Neutron Radiography
Cheng, Chu-Lin; Perfect, Edmund; Kang, Misun; Voisin, Sophie; Bilheux, Hassina Z; Horita, Juske; Hussey, Dan
2011-01-01
Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel by pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
Simple Moving Average: A Method of Reporting Evolving Complication Rates.
Harmsen, Samuel M; Chang, Yu-Hui H; Hattrup, Steven J
2016-09-01
Surgeons often cite published complication rates when discussing surgery with patients. However, these rates may not truly represent current results or an individual surgeon's experience with a given procedure. This study proposes a novel method to more accurately report current complication trends that may better represent the patient's potential experience: simple moving average. Reverse shoulder arthroplasty (RSA) is an increasingly popular and rapidly evolving procedure with highly variable reported complication rates. The authors used an RSA model to test and evaluate the usefulness of simple moving average. This study reviewed 297 consecutive RSA procedures performed by a single surgeon and noted complications in 50 patients (16.8%). Simple moving average for total complications as well as minor, major, acute, and chronic complications was then calculated using various lag intervals. These findings showed trends toward fewer total, major, and chronic complications over time, and these trends were represented best with a lag of 75 patients. Average follow-up within this lag was 26.2 months. Rates for total complications decreased from 17.3% to 8% at the most recent simple moving average. The authors' traditional complication rate with RSA (16.8%) is consistent with reported rates. However, the use of simple moving average shows that this complication rate decreased over time, with current trends (8%) markedly lower, giving the senior author a more accurate picture of his evolving complication trends with RSA. Compared with traditional methods, simple moving average can be used to better reflect current trends in complication rates associated with a surgical procedure and may better represent the patient's potential experience. [Orthopedics.2016; 39(5):e869-e876.].
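The simple moving average of a complication indicator can be sketched as below. The event sequence here is illustrative, not the paper's case series; the lag of 75 patients follows the abstract's preferred window:

```python
def moving_rate(events, lag=75):
    """Simple moving average of a binary complication indicator.

    events: 0/1 flags in chronological order of surgery.
    Returns the complication rate within each trailing window of
    `lag` consecutive patients.
    """
    return [sum(events[i - lag:i]) / lag for i in range(lag, len(events) + 1)]

# Illustrative series in which complications become rarer over time:
events = [1] * 20 + [0] * 80 + [1] * 5 + [0] * 95
rates = moving_rate(events, lag=75)
```

Unlike a single cumulative rate, the trailing window forgets early cases, so the most recent value reflects the surgeon's current, not historical, performance.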
Spatially averaged flow over a wavy boundary revisited
McLean, S.R.; Wolfe, S.R.; Nelson, J.M.
1999-01-01
Vertical profiles of streamwise velocity measured over bed forms are commonly used to deduce boundary shear stress for the purpose of estimating sediment transport. These profiles may be derived locally or from some sort of spatial average. Arguments for using the latter procedure are based on the assumption that spatial averaging of the momentum equation effectively removes local accelerations from the problem. Using analogies based on steady, uniform flows, it has been argued that the spatially averaged velocity profiles are approximately logarithmic and can be used to infer values of boundary shear stress. This technique of using logarithmic profiles is investigated using detailed laboratory measurements of flow structure and boundary shear stress over fixed two-dimensional bed forms. Spatial averages over the length of the bed form of mean velocity measurements at constant distances from the mean bed elevation yield vertical profiles that are highly logarithmic even though the effect of the bottom topography is observed throughout the water column. However, logarithmic fits of these averaged profiles do not yield accurate estimates of the measured total boundary shear stress. Copyright 1999 by the American Geophysical Union.
Model Averaging for Improving Inference from Causal Diagrams.
Hamra, Ghassan B; Kaufman, Jay S; Vahratian, Anjel
2015-08-11
Model selection is an integral, yet contentious, component of epidemiologic research. Unfortunately, there remains no consensus on how to identify a single, best model among multiple candidate models. Researchers may be prone to selecting the model that best supports their a priori, preferred result; a phenomenon referred to as "wish bias". Directed acyclic graphs (DAGs), based on background causal and substantive knowledge, are a useful tool for specifying a subset of adjustment variables to obtain a causal effect estimate. In many cases, however, a DAG will support multiple, sufficient or minimally-sufficient adjustment sets. Even though all of these may theoretically produce unbiased effect estimates they may, in practice, yield somewhat distinct values, and the need to select between these models once again makes the research enterprise vulnerable to wish bias. In this work, we suggest combining adjustment sets with model averaging techniques to obtain causal estimates based on multiple, theoretically-unbiased models. We use three techniques for averaging the results among multiple candidate models: information criteria weighting, inverse variance weighting, and bootstrapping. We illustrate these approaches with an example from the Pregnancy, Infection, and Nutrition (PIN) study. We show that each averaging technique returns similar, model averaged causal estimates. An a priori strategy of model averaging provides a means of integrating uncertainty in selection among candidate, causal models, while also avoiding the temptation to report the most attractive estimate from a suite of equally valid alternatives.
Perceptual Averaging in Individuals with Autism Spectrum Disorder
Corbett, Jennifer E.; Venuti, Paola; Melcher, David
2016-01-01
There is mounting evidence that observers rely on statistical summaries of visual information to maintain stable and coherent perception. Sensitivity to the mean (or other prototypical value) of a visual feature (e.g., mean size) appears to be a pervasive process in human visual perception. Previous studies in individuals diagnosed with Autism Spectrum Disorder (ASD) have uncovered characteristic patterns of visual processing that suggest they may rely more on enhanced local representations of individual objects instead of computing such perceptual averages. To further explore the fundamental nature of abstract statistical representation in visual perception, we investigated perceptual averaging of mean size in a group of 12 high-functioning individuals diagnosed with ASD using simplified versions of two identification and adaptation tasks that elicited characteristic perceptual averaging effects in a control group of neurotypical participants. In Experiment 1, participants performed with above chance accuracy in recalling the mean size of a set of circles (mean task) despite poor accuracy in recalling individual circle sizes (member task). In Experiment 2, their judgments of single circle size were biased by mean size adaptation. Overall, these results suggest that individuals with ASD perceptually average information about sets of objects in the surrounding environment. Our results underscore the fundamental nature of perceptual averaging in vision, and further our understanding of how autistic individuals make sense of the external environment. PMID:27872602
The average distances in random graphs with given expected degrees
NASA Astrophysics Data System (ADS)
Chung, Fan; Lu, Linyuan
2002-12-01
Random graph theory is used to examine the "small-world phenomenon": any two strangers are connected through a short chain of mutual acquaintances. We will show that for certain families of random graphs with given expected degrees the average distance is almost surely of order log n/log d̃, where d̃ is the weighted average of the sum of squares of the expected degrees. Of particular interest are power law random graphs in which the number of vertices of degree k is proportional to 1/k^β for some fixed exponent β. For the case of β > 3, we prove that the average distance of the power law graphs is almost surely of order log n/log d̃. However, there is a range 2 < β < 3 for which the power law random graphs have average distance almost surely of order log log n, but have diameter of order log n (provided having some mild constraints for the average distance and maximum degree). In particular, these graphs contain a dense subgraph, which we call the core, having n^(c/log log n) vertices. Almost all vertices are within distance log log n of the core although there are vertices at distance log n from the core.
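The order-of-magnitude estimate log n / log d̃ is easy to evaluate for a given expected-degree sequence, where d̃ = Σw²/Σw is the second-order average degree; a sketch (function names are ours, and this computes the estimate only, not a graph simulation):

```python
import math

def second_order_average_degree(w):
    """d~ = sum(w_i^2) / sum(w_i): the average of the expected degrees,
    each degree weighted by itself."""
    return sum(x * x for x in w) / sum(w)

def average_distance_estimate(w):
    """Chung-Lu order-of-magnitude estimate log n / log d~ for the average
    distance in a random graph with expected degree sequence w (needs d~ > 1)."""
    n = len(w)
    return math.log(n) / math.log(second_order_average_degree(w))
```

For a regular expected-degree sequence (all weights equal to d), d̃ reduces to d and the estimate to the familiar log n / log d.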
Hill, R Jedd; Smith, Philip A
2015-01-01
Carbon dioxide (CO2) makes up a relatively small percentage of atmospheric gases, yet when used or produced in large quantities as a gas, a liquid, or a solid (dry ice), substantial airborne exposures may occur. Exposure to elevated CO2 concentrations may elicit toxicity, even with oxygen concentrations that are not considered dangerous per se. Full-shift sampling approaches to measure 8-hr time-weighted average (TWA) CO2 exposures are used in many facilities where CO2 gas may be present. The need to assess rapidly fluctuating CO2 levels that may approach immediately dangerous to life or health (IDLH) conditions should also be a concern, and several methods for doing so using fast-responding measurement tools are discussed in this paper. Colorimetric detector tubes, a non-dispersive infrared (NDIR) detector, and a portable Fourier transform infrared (FTIR) spectroscopy instrument were evaluated in a laboratory environment using a flow-through standard generation system and were found to provide suitable accuracy and precision for assessing rapid fluctuations in CO2 concentration, with a possible effect related to humidity noted only for the detector tubes. These tools were used in the field to select locations and times for grab sampling and personal full-shift sampling, which provided laboratory analysis data to confirm IDLH conditions and 8-hr TWA exposure information. Fluctuating CO2 exposures are exemplified through field work results from several workplaces. In a brewery, brief CO2 exposures above the IDLH value occurred when large volumes of CO2-containing liquid were released for disposal, but 8-hr TWA exposures were not found to exceed the permissible level. In a frozen food production facility, nearly constant exposure to CO2 concentrations above the permissible 8-hr TWA value was seen, as well as brief exposures above the IDLH concentration that were associated with specific tasks where liquid CO2 was used. In a poultry processing facility the use of dry
The Conservation of Area Integrals in Averaging Transformations
NASA Astrophysics Data System (ADS)
Kuznetsov, E. D.
2010-06-01
It is shown for the two-planetary version of the weakly perturbed two-body problem that, in a system defined by a finite part of a Poisson expansion of the averaged Hamiltonian, only one of the three components of the area vector is conserved, corresponding to the longitudes measuring plane. The variability of the other two components is demonstrated in two ways. The first is based on calculating the Poisson bracket of the averaged Hamiltonian and the components of the area vector written in closed form. In the second, an echeloned Poisson series processor (EPSP) is used when calculating the Poisson bracket. The averaged Hamiltonian is taken with accuracy to second order in the small parameter of the problem, and the components of the area vector are expanded in a Poisson series.
Time-average based on scaling law in anomalous diffusions
NASA Astrophysics Data System (ADS)
Kim, Hyun-Joo
2015-05-01
To resolve the ambiguity in measurement brought about by the weak ergodicity breaking that appears in anomalous diffusions, we have suggested the time-averaged mean squared displacement (MSD) δ²(τ) with an integration interval depending linearly on the lag time τ. For the continuous time random walk describing subdiffusive behavior, we have found that δ²(τ) ~ τ^γ, like the ensemble-averaged MSD, which makes it possible to measure the proper exponent values through time averaging in experiments such as single-molecule tracking. We have also found that this behavior originates from the scaling nature of the MSD at an aging time in anomalous diffusion, and have confirmed it through numerical results from another microscopic non-Markovian model showing subdiffusion and superdiffusion with the origin of memory enhancement.
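As a point of comparison, the conventional time-averaged MSD (with a fixed integration interval over the whole trajectory, unlike the linearly growing interval proposed in the paper) can be computed as:

```python
def time_averaged_msd(x, tau):
    """Standard time-averaged MSD of a 1-D trajectory x sampled at unit steps:
    (1/(N - tau)) * sum_t (x[t+tau] - x[t])**2.  The paper's variant instead
    lets the averaging interval depend linearly on the lag tau."""
    n = len(x)
    if not 0 < tau < n:
        raise ValueError("tau must satisfy 0 < tau < len(x)")
    return sum((x[t + tau] - x[t]) ** 2 for t in range(n - tau)) / (n - tau)
```

For a single-molecule tracking record, plotting this quantity against τ on log-log axes gives the measured exponent as the slope.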
Testing averaged cosmology with type Ia supernovae and BAO data
NASA Astrophysics Data System (ADS)
Santos, B.; Coley, A. A.; Chandrachani Devi, N.; Alcaniz, J. S.
2017-02-01
An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.
Averaged initial Cartesian coordinates for long lifetime satellite studies
NASA Technical Reports Server (NTRS)
Pines, S.
1975-01-01
A set of initial Cartesian coordinates, which are free of ambiguities and resonance singularities, is developed to study satellite mission requirements and dispersions over long lifetimes. The method outlined herein possesses two distinct advantages over most other averaging procedures. First, the averaging is carried out numerically using Gaussian quadratures, thus avoiding tedious expansions and the resulting resonances for critical inclinations, etc. Secondly, by using the initial rectangular Cartesian coordinates, conventional, existing acceleration perturbation routines can be absorbed into the program without further modifications, thus making the method easily adaptable to the addition of new perturbation effects. The averaged nonlinear differential equations are integrated by means of a Runge Kutta method. A typical step size of several orbits permits rapid integration of long lifetime orbits in a short computing time.
Stochastic averaging and sensitivity analysis for two scale reaction networks
NASA Astrophysics Data System (ADS)
Hashemi, Araz; Núñez, Marcel; Plecháč, Petr; Vlachos, Dionisios G.
2016-02-01
In the presence of multiscale dynamics in a reaction network, direct simulation methods become inefficient as they can only advance the system on the smallest scale. This work presents stochastic averaging techniques to accelerate computations for obtaining estimates of expected values and sensitivities with respect to the steady state distribution. A two-time-scale formulation is used to establish bounds on the bias induced by the averaging method. Further, this formulation provides a framework to create an accelerated "averaged" version of most single-scale sensitivity estimation methods. In particular, we propose the use of a centered ergodic likelihood ratio method for steady state estimation and show how one can adapt it to accelerated simulations of multiscale systems. Finally, we develop an adaptive "batch-means" stopping rule for determining when to terminate the micro-equilibration process.
Genuine non-self-averaging and ultraslow convergence in gelation.
Cho, Y S; Mazza, M G; Kahng, B; Nagler, J
2016-08-01
In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation.
Evolution of the average avalanche shape with the universality class.
Laurson, Lasse; Illa, Xavier; Santucci, Stéphane; Tore Tallakstad, Ken; Måløy, Knut Jørgen; Alava, Mikko J
2013-01-01
A multitude of systems ranging from the Barkhausen effect in ferromagnetic materials to plastic deformation and earthquakes respond to slow external driving by exhibiting intermittent, scale-free avalanche dynamics or crackling noise. The avalanches are power-law distributed in size, and have a typical average shape: these are the two most important signatures of avalanching systems. Here we show how the average avalanche shape evolves with the universality class of the avalanche dynamics by employing a combination of scaling theory, extensive numerical simulations and data from crack propagation experiments. It follows a simple scaling form parameterized by two numbers, the scaling exponent relating the average avalanche size to its duration and a parameter characterizing the temporal asymmetry of the avalanches. The latter reflects a broken time-reversal symmetry in the avalanche dynamics, emerging from the local nature of the interaction kernel mediating the avalanche dynamics.
Time-average TV holography for vibration fringe analysis
Kumar, Upputuri Paul; Kalyani, Yanam; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad
2009-06-01
Time-average TV holography is a widely used method for vibration measurement. The method generates speckle correlation time-averaged J0 fringes that can be used for full-field qualitative visualization of mode shapes at resonant frequencies of an object under harmonic excitation. In order to map the amplitudes of vibration, quantitative evaluation of the time-averaged fringe pattern is desired. A quantitative evaluation procedure based on the phase-shifting technique used in two-beam interferometry has also been adopted for this application with some modification. The existing procedure requires a large number of frames to be recorded for implementation. We propose a procedure that reduces the number of frames required for the analysis. The TV holographic system used and the experimental results obtained with it on an edge-clamped, sinusoidally excited square aluminium plate sample are discussed.
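The time-averaged J0 fringes mentioned above follow the standard J0-squared fringe function of time-average holography; a minimal sketch, assuming out-of-plane vibration with normal illumination and observation (the wavelength and the series truncation are illustrative choices, not parameters from the paper):

```python
import math

def bessel_j0(x, terms=30):
    """Bessel function J0 via its power series (adequate for moderate x):
    J0(x) = sum_k (-1)^k (x/2)^(2k) / (k!)^2."""
    return sum((-1) ** k * (x / 2.0) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def fringe_intensity(amplitude, wavelength=633e-9):
    """Time-averaged fringe function for out-of-plane harmonic vibration of
    amplitude a: intensity proportional to J0((4*pi/lambda) * a)**2."""
    return bessel_j0(4.0 * math.pi * amplitude / wavelength) ** 2
```

Dark fringes fall at the zeros of J0, which is why the fringe pattern maps vibration amplitude only qualitatively until a phase-shifting evaluation is applied.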
The Spectral Form Factor Is Not Self-Averaging
Prange, R.
1997-03-01
The form factor, k(t), is the spectral statistic which best displays nonuniversal quasiclassical deviations from random matrix theory. Recent estimations of k(t) for a single spectrum found interesting new effects of this type. It was supposed that k(t) is self-averaging and thus did not require an ensemble average. We here argue that this supposition sometimes fails and that for many important systems an ensemble average is essential to see detailed properties of k(t). In other systems, notably the nontrivial zeros of the Riemann zeta function, it will be possible to see the nonuniversal properties by an analysis of a single spectrum. Copyright 1997 The American Physical Society.
Cascade of failures in interdependent networks with different average degree
NASA Astrophysics Data System (ADS)
Cheng, Zunshui; Cao, Jinde; Hayat, Tasawar
2014-12-01
Many modern systems consist of two coupled sub-networks and therefore should be modeled as interdependent networks, so the study of the robustness of interdependent networks is both interesting and significant. In this paper, mainly by numerical simulations, we investigate the robustness of interdependent Erdös-Rényi (ER) networks and interdependent scale-free (SF) networks whose two sub-networks have different average degrees. First, we study the robustness of interdependent networks under random attack. Second, we study their robustness under targeted attack on high- or low-degree nodes, and find that interdependent networks with different average degrees behave significantly differently from interdependent networks with equal average degrees.
Size and emotion averaging: costs of dividing attention after all.
Brand, John; Oriet, Chris; Tottenham, Laurie Sykes
2012-03-01
Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.
Exact solution to the averaging problem in cosmology.
Wiltshire, David L
2007-12-21
The exact solution of a two-scale Buchert average of the Einstein equations is derived for an inhomogeneous universe that represents a close approximation to the observed universe. The two scales represent voids, and the bubble walls surrounding them within which clusters of galaxies are located. As described elsewhere [New J. Phys. 9, 377 (2007); doi:10.1088/1367-2630/9/10/377], apparent cosmic acceleration can be recognized as a consequence of quasilocal gravitational energy gradients between observers in bound systems and the volume-average position in freely expanding space. With this interpretation, the new solution presented here replaces the Friedmann solutions in representing the average evolution of a matter-dominated universe without exotic dark energy, while being observationally viable.
A Spectral Estimate of Average Slip in Earthquakes
NASA Astrophysics Data System (ADS)
Boatwright, J.; Hanks, T. C.
2014-12-01
We demonstrate that the high-frequency acceleration spectral level a_o of an ω-square source spectrum is directly proportional to the average slip of the earthquake Δu divided by the travel time to the station r/β and multiplied by the radiation pattern F_s: a_o = 1.37 F_s (β/r) Δu. This simple relation is robust but depends implicitly on the assumed relation between the corner frequency and source radius, which we take from the Brune (1970, JGR) model. We use this relation to estimate average slip by fitting spectral ratios with smaller earthquakes as empirical Green's functions. For a pair of Mw = 1.8 and 1.2 earthquakes in Parkfield, we fit the spectral ratios published by Nadeau et al. (1994, BSSA) to obtain 0.39 and 0.10 cm. For the Mw = 3.9 earthquake that occurred on Oct 29, 2012, at the Pinnacles, we fit spectral ratios formed with respect to an Md = 2.4 aftershock to obtain 4.4 cm. Using the Sato and Hirasawa (1973, JPE) model instead of the Brune model increases the estimates of average slip by 75%. These estimates of average slip are factors of 5-40 (or 3-23) times less than the average slips of 3.89 cm and 23.3 cm estimated by Nadeau and Johnson (1998, BSSA) from the slip rates, average seismic moments, and recurrence intervals for the two sequences to which they associate these earthquakes. The most reasonable explanation for this discrepancy is that the stress release and rupture processes of these earthquakes are strongly heterogeneous. However, the fits to the spectral ratios do not indicate that the spectral shapes are distorted in the first two octaves above the corner frequency.
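Inverting the stated proportionality a_o = 1.37 F_s (β/r) Δu for the average slip is a one-line calculation; a sketch, with an assumed radiation-pattern coefficient (any consistent unit system works):

```python
def average_slip(a_o, r, beta, F_s=0.6):
    """Invert a_o = 1.37 * F_s * (beta / r) * slip for the average slip.
    a_o: high-frequency acceleration spectral level; r: travel distance to the
    station; beta: shear-wave speed; F_s: radiation-pattern coefficient
    (the default 0.6 is an assumed illustrative value)."""
    return a_o * r / (1.37 * F_s * beta)
```

Note that, per the abstract, substituting the Sato and Hirasawa corner-frequency relation for Brune's would scale such estimates up by about 75%.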
Average coherence and its typicality for random mixed quantum states
NASA Astrophysics Data System (ADS)
Zhang, Lin
2017-04-01
The Wishart ensemble is a useful and important random matrix model used in diverse fields. By realizing induced random mixed quantum states as a Wishart ensemble with fixed unit trace, and using a matrix integral technique, we give a fast track to the average coherence of random mixed quantum states induced via partial tracing of Haar-distributed bipartite pure states. As a direct consequence of this result, we obtain a compact formula for the average subentropy of random mixed states. These compact formulae extend our previous work.
Probing turbulence intermittency via autoregressive moving-average models
NASA Astrophysics Data System (ADS)
Faranda, Davide; Dubrulle, Bérengère; Daviaud, François; Pons, Flavio Maria Emanuele
2014-12-01
We suggest an approach to probing intermittency corrections to the Kolmogorov law in turbulent flows based on the autoregressive moving-average modeling of turbulent time series. We introduce an index Υ that measures the distance from a Kolmogorov-Obukhov model in the autoregressive moving-average model space. Applying our analysis to particle image velocimetry and laser Doppler velocimetry measurements in a von Kármán swirling flow, we show that Υ is proportional to traditional intermittency corrections computed from structure functions. Therefore, it provides the same information, using much shorter time series. We conclude that Υ is a suitable index to reconstruct intermittency in experimental turbulent fields.
Collision and average velocity effects on the ratchet pinch
Vlad, M.; Benkadda, S.
2008-03-15
A ratchet-type average velocity V^R appears for test particles moving in a stochastic potential and a space-dependent magnetic field. This model is developed by including particle collisions and an average velocity. We show that these components of the motion can destroy the ratchet velocity, but they can also produce a significant increase of V^R, depending on the parameters. The amplification of the ratchet pinch is a nonlinear effect that appears in the presence of trajectory eddying.
Ampere average current photoinjector and energy recovery linac.
Ben-Zvi, I.; Burrill, A.; Calaga, R.; et al.
2004-08-17
High-power free-electron lasers were made possible by advances in superconducting linacs operated in an energy-recovery mode. In order to reach much higher power levels, say a fraction of a megawatt of average power, many technological barriers have yet to be broken. We describe work on CW, high-current, and high-brightness electron beams. This includes a description of a superconducting laser-photocathode RF gun employing a new secondary-emission multiplying cathode, and an accelerator cavity, both capable of producing on the order of one ampere of average current, as well as plans for an ERL based on these units.
Compact expressions for spherically averaged position and momentum densities
NASA Astrophysics Data System (ADS)
Crittenden, Deborah L.; Bernard, Yves A.
2009-08-01
Compact expressions for spherically averaged position and momentum density integrals are given in terms of spherical Bessel functions (jn) and modified spherical Bessel functions (in), respectively. All integrals required for ab initio calculations involving s, p, d, and f-type Gaussian functions are tabulated, highlighting a neat isomorphism between position and momentum space formulae. Spherically averaged position and momentum densities are calculated for a set of molecules comprising the ten-electron isoelectronic series (Ne-CH4) and the eighteen-electron series (Ar-SiH4, F2-C2H6).
Quantum State Discrimination Using the Minimum Average Number of Copies
NASA Astrophysics Data System (ADS)
Slussarenko, Sergei; Weston, Morgan M.; Li, Jun-Gang; Campbell, Nicholas; Wiseman, Howard M.; Pryde, Geoff J.
2017-01-01
In the task of discriminating between nonorthogonal quantum states from multiple copies, the key parameters are the error probability and the resources (number of copies) used. Previous studies have considered the task of minimizing the average error probability for fixed resources. Here we introduce a new state discrimination task: minimizing the average resources for a fixed admissible error probability. We show that this new task is not performed optimally by previously known strategies, and derive and experimentally test a detection scheme that performs better.
Spatial average ambiguity function for array radar with stochastic signals
NASA Astrophysics Data System (ADS)
Zha, Guofeng; Wang, Hongqiang; Cheng, Yongqiang; Qin, Yuliang
2016-03-01
For analyzing the spatial resolving performance of multi-transmitter single-receiver (MTSR) array radar with stochastic signals, the spatial average ambiguity function (SAAF) is introduced based on the statistical average theory. The analytic expression of SAAF and the corresponding resolutions in vertical range and in horizontal range are derived. Since spatial resolving performance is impacted by many parameters including signal modulation schemes, signal bandwidth, array aperture's size and target's spatial position, comparisons are implemented to analyze these influences. Simulation results are presented to validate the whole analysis.
Averaged energy inequalities for the nonminimally coupled classical scalar field
Fewster, Christopher J.; Osterbrink, Lutz W.
2006-08-15
The stress-energy tensor for the classical nonminimally coupled scalar field is known not to satisfy the pointwise energy conditions of general relativity. In this paper we show, however, that local averages of the classical stress-energy tensor satisfy certain inequalities. We give bounds for averages along causal geodesics and show, e.g., that in Ricci-flat background spacetimes, ANEC and AWEC are satisfied. Furthermore we use our result to show that in the classical situation we have an analogue to the phenomenon of quantum interest. These results lay the foundations for analogous energy inequalities for the quantized nonminimally coupled fields, which will be discussed elsewhere.
An averaging analysis of discrete-time indirect adaptive control
NASA Technical Reports Server (NTRS)
Phillips, Stephen M.; Kosut, Robert L.; Franklin, Gene F.
1988-01-01
An averaging analysis of indirect, discrete-time, adaptive control systems is presented. The analysis results in a signal-dependent stability condition and accounts for unmodeled plant dynamics as well as exogenous disturbances. This analysis is applied to two discrete-time adaptive algorithms: an unnormalized gradient algorithm and a recursive least-squares (RLS) algorithm with resetting. Since linearization and averaging are used for the gradient analysis, a local stability result valid for small adaptation gains is found. For RLS with resetting, the assumption is that there is a long time between resets. The results for the two algorithms are virtually identical, emphasizing their similarities in adaptive control.
High average power scaleable thin-disk laser
Beach, Raymond J.; Honea, Eric C.; Bibeau, Camille; Payne, Stephen A.; Powell, Howard; Krupke, William F.; Sutton, Steven B.
2002-01-01
Using a thin disk laser gain element with an undoped cap layer enables the scaling of lasers to extremely high average output power values. Ordinarily, the power scaling of such thin disk lasers is limited by the deleterious effects of amplified spontaneous emission. By using an undoped cap layer diffusion bonded to the thin disk, the onset of amplified spontaneous emission does not occur as readily as if no cap layer is used, and much larger transverse thin disks can be effectively used as laser gain elements. This invention can be used as a high average power laser for material processing applications as well as for weapon and air defense applications.
Average Weighted Receiving Time of Weighted Tetrahedron Koch Networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Zhang, Danping; Ye, Dandan; Zhang, Cheng; Li, Lei
2015-07-01
We introduce weighted tetrahedron Koch networks with infinite weight factors, which are a generalization of finite ones. The notion of weighted time is first defined in this paper. The mean weighted first-passage time (MWFPT) and the average weighted receiving time (AWRT) are defined via weighted time accordingly. We study the AWRT for weight-dependent walks. Results show that the AWRT for a nontrivial weight factor sequence grows sublinearly with the network order. To investigate the reason for this sublinearity, the average receiving time (ART) is discussed for four cases.
Factors Affecting Noise Levels of High-Speed Handpieces
2012-06-01
office communication and increase patient anxiety. Purpose: To determine if three noise-reducing techniques utilized in larger-scale, non-dental...hearing loss may cause confusion, fear, and loneliness, and that sometimes hearing loss is accompanied by dizziness, which would be a handicap in the...employee noise exposures equal or exceed an 8-hour time-weighted average sound level (TWA) of 85 decibels measured on the A scale (slow response) or
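The 8-hour TWA and noise dose referenced in this snippet follow a fixed formula (29 CFR 1910.95: 5-dB exchange rate, 90-dBA criterion level). A sketch with hypothetical exposure segments:

```python
import math

# OSHA dose/TWA arithmetic (29 CFR 1910.95: 5-dB exchange rate, 90-dBA
# criterion level). The exposure segments below are hypothetical.
def permissible_hours(level_dba):
    """Allowed exposure duration at a constant A-weighted level."""
    return 8.0 / (2.0 ** ((level_dba - 90.0) / 5.0))

def noise_dose(segments):
    """segments: (level_dBA, hours) pairs; returns dose in percent."""
    return 100.0 * sum(h / permissible_hours(L) for L, h in segments)

def twa_8h(dose_percent):
    """Equivalent 8-hour time-weighted average sound level."""
    return 16.61 * math.log10(dose_percent / 100.0) + 90.0

d = noise_dose([(85.0, 4.0), (95.0, 4.0)])   # example shift
print(round(d, 1), round(twa_8h(d), 1))      # → 125.0 91.6
```

A dose above 100% (TWA above 90 dBA) exceeds the permissible exposure limit; the 85-dBA TWA in the snippet is the action level that triggers a hearing conservation program.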
Clinical Assessment of the Noise Immune Stethoscope aboard a U.S. Navy Carrier
2011-11-01
time of writing this technical report. This current technical report addresses the second issue: data collection by end-user clinicians in real-world...hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and...George Washington (CVN-73), another Nimitz-class carrier like the Carl Vinson, reported an 8-hour time-weighted average (TWA) of 84.2 dBA in the medical
Averaging processes in granular flows driven by gravity
NASA Astrophysics Data System (ADS)
Rossi, Giulia; Armanini, Aronne
2016-04-01
One of the most promising theoretical frameworks for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: collisions among grains at a macroscopic scale are compared to collisions among molecules [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that atoms are so small that the number of molecules in a control volume is infinite. Under this assumption, the concentration (number of particles n) does not change during the averaging process and the two definitions of average coincide. This hypothesis no longer holds in granular flows: contrary to gases, the size of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (usually the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists of the local averaging (in order to describe some instability phenomena or secondary circulation), and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
40 CFR 63.652 - Emissions averaging provisions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... annual credits and debits in the Periodic Reports as specified in § 63.655(g)(8). Every fourth Periodic... reported in the next Periodic Report. (iii) The following procedures and equations shall be used to..., dimensionless (see table 33 of subpart G). P=Weighted average rack partial pressure of organic HAP's...
HIGH AVERAGE POWER UV FREE ELECTRON LASER EXPERIMENTS AT JLAB
Douglas, David; Evtushenko, Pavel; Gubeli, Joseph; Hernandez-Garcia, Carlos; Legg, Robert; Neil, George; Powers, Thomas; Shinn, Michelle D; Tennant, Christopher; Williams, Gwyn
2012-07-01
Having produced 14 kW of average power at ~2 microns, JLab has shifted its focus to the ultraviolet portion of the spectrum. This presentation will describe the JLab UV Demo FEL, present specifics of its driver ERL, and discuss the latest experimental results from FEL experiments and machine operations.
Average characteristics and activity dependence of the subauroral polarization stream
NASA Astrophysics Data System (ADS)
Foster, J. C.; Vo, H. B.
2002-12-01
Data from the Millstone Hill incoherent scatter radar taken over two solar cycles (1979-2000) are examined to determine the average characteristics of the disturbance convection electric field in the midlatitude ionosphere. Radar azimuth scans provide a regular database of ionospheric plasma convection observations spanning auroral and subauroral latitudes, and these scans have been examined for all local times and activity conditions. We examine the occurrence and characteristics of a persistent secondary westward convection peak which lies equatorward of the auroral two-cell convection. Individual scans and average patterns of plasma flow identify and characterize this latitudinally broad and persistent subauroral polarization stream (SAPS), which spans the nightside from dusk to the early morning sector for all Kp greater than 4. Premidnight, the SAPS westward convection lies equatorward of L = 4 (60° invariant latitude, Λ), spans 3°-5° of latitude, and has an average peak amplitude of >900 m/s. In the predawn sector, SAPS is seen as a region of antisunward convection equatorward of L = 3 (55° Λ), spanning ˜3° of latitude, with an average peak amplitude of 400 m/s.
40 CFR 63.503 - Emissions averaging provisions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... applied as a pollution prevention project, or a pollution prevention measure, where the control achieves a... measures are used to control five or more of the emission points included in the emissions average. (B) If... pollution prevention measures are used to control five or more of the emission points included in...
40 CFR 63.503 - Emissions averaging provisions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... applied as a pollution prevention project, or a pollution prevention measure, where the control achieves a... measures are used to control five or more of the emission points included in the emissions average. (B) If... pollution prevention measures are used to control five or more of the emission points included in...
40 CFR 63.1332 - Emissions averaging provisions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... additional emission points if pollution prevention measures are used to control five or more of the emission... five additional emission points if pollution prevention measures are used to control five or more of... averaging credits if control was applied after November 15, 1990, and if sufficient information is...
40 CFR 63.503 - Emissions averaging provisions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... applied as a pollution prevention project, or a pollution prevention measure, where the control achieves a... measures are used to control five or more of the emission points included in the emissions average. (B) If... pollution prevention measures are used to control five or more of the emission points included in...
40 CFR 63.1332 - Emissions averaging provisions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... if pollution prevention measures are used to control five or more of the emission points included in... additional emission points if pollution prevention measures are used to control five or more of the emission... section describe the emission points that may be used to generate emissions averaging credits if...
40 CFR 63.150 - Emissions averaging provisions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... control device, a recovery device applied as a pollution prevention project, or a pollution prevention... Group 1 emission points to which the reference control technology (defined in § 63.111 of this subpart... following emission points can be used to generate emissions averaging credits, if control was applied...
40 CFR 63.150 - Emissions averaging provisions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... control device, a recovery device applied as a pollution prevention project, or a pollution prevention... Group 1 emission points to which the reference control technology (defined in § 63.111 of this subpart... following emission points can be used to generate emissions averaging credits, if control was applied...
40 CFR 63.1332 - Emissions averaging provisions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... if pollution prevention measures are used to control five or more of the emission points included in... additional emission points if pollution prevention measures are used to control five or more of the emission... section describe the emission points that may be used to generate emissions averaging credits if...
40 CFR 63.1332 - Emissions averaging provisions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... additional emission points if pollution prevention measures are used to control five or more of the emission... five additional emission points if pollution prevention measures are used to control five or more of... averaging credits if control was applied after November 15, 1990, and if sufficient information is...
40 CFR 63.150 - Emissions averaging provisions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... control device, a recovery device applied as a pollution prevention project, or a pollution prevention... Group 1 emission points to which the reference control technology (defined in § 63.111 of this subpart... following emission points can be used to generate emissions averaging credits, if control was applied...
34 CFR 668.196 - Average rates appeals.
Code of Federal Regulations, 2010 CFR
2010-07-01
... EDUCATION, DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668... determine that you qualify, we notify you of that determination at the same time that we notify you of your... determine that you meet the requirements for an average rates appeal. (Approved by the Office of...
Formulation of Maximized Weighted Averages in URTURIP Technique
2001-10-25
Formulation of Maximized Weighted Averages in URTURIP Technique. Bruno Migeon, Philippe Deforge, Pierre Marché, Laboratoire Vision et Robotique, 63, avenue de Lattre de Tassigny, 18020 Bourges Cedex, France
18 CFR 301.7 - Average System Cost methodology functionalization.
Code of Federal Regulations, 2010 CFR
2010-04-01
... methodology functionalization. 301.7 Section 301.7 Conservation of Power and Water Resources FEDERAL ENERGY... SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER ACT § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each...
Punching Wholes into Parts, or Beating the Percentile Averages.
ERIC Educational Resources Information Center
Carwile, Nancy R.
1990-01-01
Presents a facetious, ingenious resolution to the percentile dilemma concerning above- and below-average test scores. If schools enrolled the same number of pigs as students and tested both groups, the pigs would fill up the bottom half and all children would rank in the top 50 percent. However, some wrinkles need to be ironed out! (MLH)
40 CFR 63.1332 - Emissions averaging provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Standards for Hazardous Air Pollutant Emissions: Group IV Polymers and Resins § 63.1332 Emissions averaging... based on either organic HAP or TOC. (3) For the purposes of these provisions, whenever Method 18, 40 CFR... (a)(3)(i) and (a)(3)(ii) of this section. (i) The organic HAP used as the calibration gas for...
Average formation length of hadrons in a string model
NASA Astrophysics Data System (ADS)
Grigoryan, L.
2010-04-01
The space-time scales of the hadronization process in the framework of the string model are investigated. It is shown that the average formation lengths of pseudoscalar mesons, produced in semi-inclusive deep inelastic scattering of leptons on different targets, depend on their electric charges. In particular, the average formation lengths of positively charged hadrons are larger than those of negatively charged ones. This statement holds for all scaling functions used, for z (the fraction of the virtual photon energy transferred to the detected hadron) larger than 0.15, for all nuclear targets, and for any value of the Bjorken scaling variable xBj. In all cases, the main mechanism is direct production of pseudoscalar mesons. Taking into account an additional production mechanism, the decay of resonances, leads to a decrease in the average formation lengths. It is shown that the average formation lengths of positively (negatively) charged mesons are slowly increasing (decreasing) functions of xBj. The results obtained can be important, in particular, for understanding the hadronization process in a nuclear environment.
27 CFR 19.249 - Average effective tax rate.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Average effective tax rate. 19.249 Section 19.249 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Distilled Spirits Taxes Effective Tax Rates §...
Evaluation of spline and weighted average interpolation algorithms
NASA Astrophysics Data System (ADS)
Eckstein, Barbara Ann
Bivariate polynomial and weighted average interpolations were tested on two data sets. One data set consisted of irregularly spaced Bouguer gravity values. Maps derived from automated interpolation were compared to a manually created map to determine the best computer-generated diagram. For this data set, bivariate polynomial interpolation was inadequate, showing many spurious circular anomalies with extrema greatly exceeding the input values. The greatest distortion occurred near roughly colinear observations and steep field gradients. The computerized map from weighted average interpolation matched the manual map when the number of grid points was roughly nine times the number of input points. Groundwater recharge and discharge rates were used for the second example. The discharge zones are two narrow irrigation ditches, and measurements were along linear traverses. Again, polynomial interpolation produced unreasonably large interpolated values near high field gradients. The weighted average method required a higher ratio of grid points to input data (about 64 to 1) because of the long narrow shape of the discharge zones. The weighted average interpolation method was more reliable than the polynomial method because it was less sensitive to the nature of the data distribution and to the field gradients.
Polyline averaging using distance surfaces: A spatial hurricane climatology
NASA Astrophysics Data System (ADS)
Scheitlin, Kelsey N.; Mesev, Victor; Elsner, James B.
2013-03-01
The US Gulf states are frequently hit by hurricanes, causing widespread damage resulting in economic loss and occasional human fatalities. Current hurricane climatologies and predictive models frequently omit information on the spatial characteristics of hurricane movement—their linear tracks. We investigate the construction of a spatial hurricane climatology that condenses linear tracks to one-dimensional polylines. With the aid of distance surfaces, an average hurricane track is calculated by summing polylines as part of a grid-based algorithm. We demonstrate the procedure on a particularly vulnerable coastline around the city of Galveston in Texas, where the tracks of the closest storms to Galveston are also weighted by an inverse distance function. Track averaging is also applied as a means of interpolating possible paths of historical storms where records are sporadic observations, and sometimes anecdotal. We offer the average track as a convenient regional summary of expected hurricane movement. The average track, together with other hurricane attributes, also provides a means to assess the expected local vulnerability of property and environmental damage.
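A heavily simplified version of inverse-distance-weighted track averaging can be sketched as follows. The paper builds distance surfaces on a grid; this sketch instead resamples tracks at common latitudes and only illustrates the weighting idea. All coordinates, distances, and weights are hypothetical:

```python
import numpy as np

# Each storm track is resampled to a longitude at each common latitude;
# the average track weights nearer storms more via inverse distance to a
# point of interest (here, a stand-in for Galveston).
lats = np.linspace(25.0, 30.0, 6)
tracks_lon = np.array([            # one row per storm, lon at each lat
    [-95.0, -94.8, -94.5, -94.1, -93.8, -93.5],
    [-96.0, -95.7, -95.5, -95.2, -95.0, -94.8],
    [-94.0, -94.1, -94.2, -94.3, -94.4, -94.5],
])
dist_to_poi_km = np.array([50.0, 120.0, 200.0])  # closest approach per storm
w = 1.0 / dist_to_poi_km                         # inverse-distance weights
avg_lon = (w[:, None] * tracks_lon).sum(axis=0) / w.sum()
print(np.round(avg_lon[0], 2))                   # → -95.1
```

The grid-based distance-surface formulation in the paper avoids this sketch's assumption that every track crosses every latitude exactly once.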
Cognitive Patterns of "Retarded" and Below-Average Readers.
ERIC Educational Resources Information Center
Leong, Che K.
1980-01-01
The cognitive patterns of 58 "retarded" and 38 below-average readers were compared with controls, according to Luria's simultaneous and successive modes of information processing. Factor Analysis showed different cognitive patterns for disabled and nondisabled readers. Reading skills, rather than cognitive ability, were shown to be…
Maximum Likelihood Estimation of Multivariate Autoregressive-Moving Average Models.
1977-02-01
maximizing the same have been proposed i) in time domain by Box and Jenkins [4], Astrom [3], Wilson [23], and Phadke [16], and ii) in frequency domain by...moving average residuals and other covariance matrices with linear structure", Annals of Statistics, 3. 3. Astrom, K. J. (1970), Introduction to
Advising Students about Required Grade-Point Averages
ERIC Educational Resources Information Center
Moore, W. Kent
2006-01-01
Sophomores interested in professional colleges with grade-point average (GPA) standards for admission to upper division courses will need specific and realistic information concerning the requirements. Specifically, those who fall short of the standard must assess the likelihood of achieving the necessary GPA for professional program admission.…
Average subentropy, coherence and entanglement of random mixed quantum states
NASA Astrophysics Data System (ADS)
Zhang, Lin; Singh, Uttam; Pati, Arun K.
2017-02-01
Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states, invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimensions when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is uniformly bounded, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of the relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computing these entanglement measures for this specific class of mixed states.
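The induced-measure case above can be probed numerically. A Monte Carlo sketch of the average von Neumann entropy of states obtained by partial-tracing Haar-random bipartite pure states; dimensions and sample count are arbitrary illustrative choices:

```python
import numpy as np

# Average von Neumann entropy of random mixed states from the induced
# measure (partial trace of Haar-random bipartite pure states).
rng = np.random.default_rng(1)

def random_induced_state(d, d_env):
    g = rng.standard_normal((d, d_env)) + 1j * rng.standard_normal((d, d_env))
    psi = g / np.linalg.norm(g)      # Haar-random pure state on C^d x C^d_env
    return psi @ psi.conj().T        # reduced state: partial trace over env

def von_neumann_entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

d = 4
avg = float(np.mean([von_neumann_entropy(random_induced_state(d, d))
                     for _ in range(500)]))
print(0.0 < avg < np.log(d))         # → True (below the maximal value log d)
```

The sample average concentrates near a value strictly below log d, which is the kind of typicality statement the abstract formalizes.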
47 CFR 80.759 - Average terrain elevation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage §...
47 CFR 80.759 - Average terrain elevation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 5 2011-10-01 2011-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage §...
Reducing Noise by Repetition: Introduction to Signal Averaging
ERIC Educational Resources Information Center
Hassan, Umer; Anwar, Muhammad Sabieh
2010-01-01
This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
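The core idea of signal averaging can be demonstrated in a few lines: the mean of N repeated sweeps keeps the deterministic signal and shrinks zero-mean noise by roughly sqrt(N). The signal, sweep count, and noise model here are illustrative:

```python
import numpy as np

# Averaging N repeated noisy sweeps of the same deterministic signal.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 5 * t)                   # repeats every sweep
N = 400
sweeps = signal + rng.standard_normal((N, t.size))   # unit-variance noise
avg = sweeps.mean(axis=0)

residual_one = float(np.std(sweeps[0] - signal))     # about 1
residual_avg = float(np.std(avg - signal))           # about 1/sqrt(N) = 0.05
print(residual_one > 10 * residual_avg)              # → True
```

The sqrt(N) improvement holds only for noise that is uncorrelated between repetitions; correlated interference does not average away, which is why the paper pairs averaging with filtering and baseline correction.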
40 CFR 80.67 - Compliance on average.
Code of Federal Regulations, 2011 CFR
2011-07-01
... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...
40 CFR 80.67 - Compliance on average.
Code of Federal Regulations, 2012 CFR
2012-07-01
... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...
40 CFR 80.67 - Compliance on average.
Code of Federal Regulations, 2013 CFR
2013-07-01
... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...
40 CFR 80.67 - Compliance on average.
Code of Federal Regulations, 2014 CFR
2014-07-01
... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...
40 CFR 80.67 - Compliance on average.
Code of Federal Regulations, 2010 CFR
2010-07-01
... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...
Speckle averaging system for laser raster-scan image projection
Tiszauer, Detlev H.; Hackel, Lloyd A.
1998-03-17
The viewers' perception of laser speckle in a laser-scanned image projection system is modified or eliminated by the addition of an optical deflection system that effectively presents a new speckle realization at each point on the viewing screen to each viewer for every scan across the field. The speckle averaging is accomplished without introduction of spurious imaging artifacts.
The method of averages applied to the KS differential equations
NASA Technical Reports Server (NTRS)
Graf, O. F., Jr.; Mueller, A. C.; Starke, S. E.
1977-01-01
A new approach for the solution of artificial satellite trajectory problems is proposed. The basic idea is to apply an analytical solution method (the method of averages) to an appropriate formulation of the orbital mechanics equations of motion (the KS-element differential equations). The result is a set of transformed equations of motion that are more amenable to numerical solution.
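The method of averages itself can be illustrated on a toy scalar equation, unrelated to the KS elements and purely to show the idea: the oscillatory right-hand side is replaced by its time average, and the two solutions stay close for a small parameter.

```python
import math

# Toy method of averages: replace dx/dt = eps * x * sin(t)**2 by its
# averaged form dx/dt = eps * x / 2 and compare the two solutions.
def euler(f, x0, t_end, dt=1e-3):
    x, t = x0, 0.0
    while t < t_end:
        x += dt * f(x, t)
        t += dt
    return x

eps = 0.01
x_full = euler(lambda x, t: eps * x * math.sin(t) ** 2, 1.0, 100.0)
x_avg = euler(lambda x, t: eps * x / 2.0, 1.0, 100.0)
print(round(x_full, 2), round(x_avg, 2))   # → 1.65 1.65
```

The averaged equation is far cheaper to propagate because its right-hand side has no fast oscillation; that is the payoff the abstract seeks by averaging the KS-element equations.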
AVERAGE ANNUAL SOLAR UV DOSE OF THE CONTINENTAL US CITIZEN
The average annual solar UV dose of US citizens is not known, but is required for relative risk assessments of skin cancer from UV-emitting devices. We solved this problem using a novel approach. The EPA's "National Human Activity Pattern Survey" recorded the daily ou...
Fully variational average atom model with ion-ion correlations.
Starrett, C E; Saumon, D
2012-02-01
An average atom model for dense ionized fluids that includes ion correlations is presented. The model assumes spherical symmetry and is based on density functional theory, the integral equations for uniform fluids, and a variational principle applied to the grand potential. Starting from density functional theory for a mixture of classical ions and quantum mechanical electrons, an approximate grand potential is developed, with an external field being created by a central nucleus fixed at the origin. Minimization of this grand potential with respect to electron and ion densities is carried out, resulting in equations for effective interaction potentials. A third condition resulting from minimizing the grand potential with respect to the average ion charge determines the noninteracting electron chemical potential. This system is coupled to a system of point ions and electrons with an ion fixed at the origin, and a closed set of equations is obtained. Solution of these equations results in a self-consistent electronic and ionic structure for the plasma as well as the average ionization, which is continuous as a function of temperature and density. Other average atom models are recovered by application of simplifying assumptions.
Improvements in Dynamic GPS Positions Using Track Averaging
1999-08-01
Global Positioning System (GPS), Precise Positioning System (PPS) solution under dynamic conditions through averaging is investigated. Static
Evolution of the average steepening factor for nonlinearly propagating waves.
Muhlestein, Michael B; Gee, Kent L; Neilsen, Tracianne B; Thomas, Derek C
2015-02-01
Difficulties arise in attempting to discern the effects of nonlinearity in near-field jet-noise measurements due to the complicated source structure of high-velocity jets. This article describes a measure that may be used to help quantify the effects of nonlinearity on waveform propagation. This measure, called the average steepening factor (ASF), is the ratio of the average positive slope in a time waveform to the average negative slope. The ASF is the inverse of the wave steepening factor defined originally by Gallagher [AIAA Paper No. 82-0416 (1982)]. An analytical description of the ASF evolution is given for benchmark cases: initially sinusoidal plane waves propagating through lossless and thermoviscous media. The effects of finite sampling rates and measurement noise on ASF estimation from measured waveforms are discussed. The evolution of initially broadband Gaussian noise and signals propagating in media with realistic absorption is described using numerical and experimental methods. The ASF is found to be relatively sensitive to measurement noise but relatively robust to limited sampling rates. The ASF is found to increase more slowly for initially Gaussian noise signals than for initially sinusoidal signals of the same level, indicating that the average distortion within noise waveforms occurs more slowly.
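The ASF definition quoted above (average positive slope divided by the average magnitude of negative slope) is straightforward to evaluate on a sampled waveform; the test signals below are illustrative:

```python
import numpy as np

# Average steepening factor of a sampled time waveform.
def average_steepening_factor(waveform, dt):
    slopes = np.diff(waveform) / dt
    return slopes[slopes > 0].mean() / np.abs(slopes[slopes < 0]).mean()

t = np.linspace(0, 1, 10000, endpoint=False)
dt = t[1] - t[0]
sine = np.sin(2 * np.pi * 10 * t)
distorted = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(4 * np.pi * 10 * t)

asf_sine = average_steepening_factor(sine, dt)            # symmetric → 1
asf_distorted = average_steepening_factor(distorted, dt)  # steepened → > 1
print(round(asf_sine, 2), asf_distorted > 1.0)            # → 1.0 True
```

For a waveform whose total rise equals its total fall over the record, this ratio reduces to the falling time divided by the rising time, so a steepened wave (fast rise, slow fall) gives ASF > 1.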
Robust Representations for Face Recognition: The Power of Averages
ERIC Educational Resources Information Center
Burton, A. Mike; Jenkins, Rob; Hancock, Peter J. B.; White, David
2005-01-01
We are able to recognise familiar faces easily across large variations in image quality, though our ability to match unfamiliar faces is strikingly poor. Here we ask how the representation of a face changes as we become familiar with it. We use a simple image-averaging technique to derive abstract representations of known faces. Using Principal…
Average Strength Parameters of Reactivated Mudstone Landslide for Countermeasure Works
NASA Astrophysics Data System (ADS)
Nakamura, Shinya; Kimura, Sho; Buddhi Vithana, Shriwantha
2015-04-01
Among the many approaches to landslide stability analysis, several landslide-related studies have used shear strength parameters obtained from laboratory shear tests with the limit equilibrium method. Most of them concluded that the average strength parameters, i.e. the average cohesion (c'avg) and average angle of shearing resistance (φ'avg) calculated from back analysis, were in agreement with the residual shear strength parameters measured by torsional ring-shear tests on undisturbed and remolded samples. However, disagreement with this contention can be found elsewhere: residual shear strengths measured using a torsional ring-shear apparatus have been found to be lower than the average strengths calculated by back analysis. One reason why applying the residual shear strength alone in stability analysis underestimates the safety factor is that the condition of the slip surface of a landslide can be heterogeneous: it may consist of portions that have already reached residual conditions along with portions that have not. With a view to accommodating such possible differences in slip surface conditions, it is worthwhile to first form an appropriate perception of the heterogeneous nature of the actual slip surface, to ensure a more suitable selection of measured shear strength values for the stability calculation of landslides. In the present study, a procedure for determining the average strength parameters acting along the slip surface is presented through stability calculations of reactivated landslides in the Shimajiri mudstone area, Okinawa, Japan. The average strength parameters along the slip surfaces have been estimated using the results of laboratory shear tests of the slip surface/zone soils, accompanied by a rational way of assessing the actual, heterogeneous slip surface conditions. The results tend to show that the shear strength acting along the
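As a hedged illustration of back analysis, consider the simplest planar (infinite-slope) limit equilibrium model rather than the study's full slip-surface geometry: setting the factor of safety to 1 at failure and assuming an average cohesion lets one solve for the average friction angle. The geometry and soil parameters below are hypothetical:

```python
import math

# Back-analysis sketch on an infinite-slope model: at failure F = 1, so an
# assumed c'avg determines phi'avg. gamma (kN/m^3), z (m), beta (deg) and
# c'avg (kPa) are all made-up illustrative values.
def factor_of_safety(c_avg, phi_deg, gamma=18.0, z=5.0, beta_deg=20.0):
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c_avg + gamma * z * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

def back_analyzed_phi(c_avg):
    lo, hi = 0.0, 45.0                 # bisection on F(phi) = 1
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if factor_of_safety(c_avg, mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

phi_avg = back_analyzed_phi(c_avg=5.0)   # assumed c'avg in kPa
print(round(phi_avg, 1))                 # → 16.8
```

A real reactivated landslide requires the full slip-surface geometry (e.g. the method of slices) and pore pressures, which is where the heterogeneous slip-surface conditions discussed above enter.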
The Average Quality Factors by TEPC for Charged Particles
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Nikjoo, Hooshang; Cucinotta, Francis A.
2004-01-01
The quality factor used in radiation protection is defined as a function of LET, Q(sub ave)(LET). However, tissue equivalent proportional counters (TEPC) measure the average quality factor as a function of lineal energy (y), Q(sub ave)(y). A model of the TEPC response for charged particles considers energy deposition as a function of impact parameter from the ion's path to the sensitive volume, and describes the escape of energy out of the sensitive volume by delta rays and the entry of delta rays from the high-density wall into the low-density gas volume. A common goal for operational detectors is to measure the average radiation quality to within 25% accuracy. Using our TEPC response model and the NASA space radiation transport model, we show that this accuracy is obtained by a properly calibrated TEPC. However, when the individual contributions from trapped protons and galactic cosmic rays (GCR) are considered, the average quality factor obtained by TEPC is overestimated for trapped protons and underestimated for GCR by about 30%, i.e., a compensating error. Using TEPC's values for trapped protons for Q(sub ave)(y), we obtained average quality factors in the 2.07-2.32 range. However, Q(sub ave)(LET) ranges from 1.5-1.65 as spacecraft shielding depth increases. The average quality factors for trapped protons on STS-89 demonstrate that the model of the TEPC response is in good agreement with flight TEPC data for Q(sub ave)(y), and thus Q(sub ave)(LET) for trapped protons is overestimated by TEPC. Preliminary comparisons for the complete GCR spectra show that Q(sub ave)(LET) for GCR is approximately 3.2-4.1, while TEPC measures 2.9-3.4 for Q(sub ave)(y), indicating that Q(sub ave)(LET) for GCR is underestimated by TEPC.
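For the LET-based quantity, a dose-averaged quality factor can be sketched using the ICRP Publication 60 Q(L) relation; the abstract does not state which Q(L) convention was used, and the example LET spectrum below is hypothetical:

```python
import math

# Dose-averaged quality factor from an LET spectrum (ICRP 60 Q(L)).
def q_icrp60(L):
    """Quality factor vs unrestricted LET in keV/um (ICRP Publication 60)."""
    if L < 10.0:
        return 1.0
    if L <= 100.0:
        return 0.32 * L - 2.2
    return 300.0 / math.sqrt(L)

def dose_average_q(spectrum):
    """spectrum: (LET in keV/um, dose contribution) pairs."""
    total = sum(d for _, d in spectrum)
    return sum(q_icrp60(L) * d for L, d in spectrum) / total

q_ave = dose_average_q([(0.5, 0.7), (25.0, 0.2), (150.0, 0.1)])
print(round(q_ave, 2))   # → 4.31
```

A TEPC instead folds a measured lineal-energy spectrum with Q(y); the 30% trapped-proton/GCR discrepancies above arise from the difference between the y and LET domains.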
High average power diode pumped solid state lasers for CALIOPE
Comaskey, B.; Halpin, J.; Moran, B.
1994-07-01
Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers.
Averaged universe confronted with cosmological observations: A fully covariant approach
NASA Astrophysics Data System (ADS)
Wijenayake, Tharake; Lin, Weikang; Ishak, Mustapha
2016-10-01
One of the outstanding problems in general relativistic cosmology is that of the averaging, that is, how the lumpy universe that we observe at small scales averages out to a smooth Friedmann-Lemaître-Robertson-Walker (FLRW) model. The root of the problem is that averaging does not commute with the Einstein equations that govern the dynamics of the model. This leads to the well-known question of backreaction in cosmology. In this work, we approach the problem using the covariant framework of macroscopic gravity. We use its cosmological solution with a flat FLRW macroscopic background where the result of averaging cosmic inhomogeneities has been encapsulated into a backreaction density parameter denoted ΩA. We constrain this averaged universe using available cosmological data sets of expansion and growth including, for the first time, a full cosmic microwave background analysis from Planck temperature anisotropy and polarization data, the supernova data from Union 2.1, the galaxy power spectrum from WiggleZ, the weak lensing tomography shear-shear cross correlations from the CFHTLenS survey, and the baryonic acoustic oscillation data from 6dF, SDSS DR7, and BOSS DR9. We find that -0.0155 ≤ΩA≤0 (at the 68% C.L.), thus providing a tight upper bound on the backreaction term. We also find that the term is strongly correlated with cosmological parameters, such as ΩΛ, σ8, and H0. While small, a backreaction density parameter of a few percent should be kept in consideration along with other systematics for precision cosmology.
Condition monitoring of gearboxes using synchronously averaged electric motor signals
NASA Astrophysics Data System (ADS)
Ottewill, J. R.; Orkisz, M.
2013-07-01
Due to their prevalence in rotating machinery, the condition monitoring of gearboxes is extremely important in the minimization of potentially dangerous and expensive failures. Traditionally, gearbox condition monitoring has been conducted using measurements obtained from casing-mounted vibration transducers such as accelerometers. A well-established technique for analyzing such signals is the synchronous signal average, where vibration signals are synchronized to a measured angular position and then averaged from rotation to rotation. Driven, in part, by improvements in control methodologies based upon methods of estimating rotor speed and torque, induction machines are used increasingly in industry to drive rotating machinery. As a result, attempts have been made to diagnose defects using measured terminal currents and voltages. In this paper, the application of the synchronous signal averaging methodology to electric drive signals, by synchronizing stator current signals with a shaft position estimated from current and voltage measurements is proposed. Initially, a test-rig is introduced based on an induction motor driving a two-stage reduction gearbox which is loaded by a DC motor. It is shown that a defect seeded into the gearbox may be located using signals acquired from casing-mounted accelerometers and shaft mounted encoders. Using simple models of an induction motor and a gearbox, it is shown that it should be possible to observe gearbox defects in the measured stator current signal. A robust method of extracting the average speed of a machine from the current frequency spectrum, based on the location of sidebands of the power supply frequency due to rotor eccentricity, is presented. The synchronous signal averaging method is applied to the resulting estimations of rotor position and torsional vibration. Experimental results show that the method is extremely adept at locating gear tooth defects. Further results, considering different loads and different
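A minimal sketch of the synchronous signal average described above, in Python. The angular grid size, shaft speed, and simulated gear-mesh tone are illustrative assumptions, not the paper's test-rig parameters; a real application would use encoder pulses or the speed estimated from the current spectrum in place of the ideal revolution markers used here.

```python
import numpy as np

def synchronous_average(signal, rev_starts, n_points=256):
    """Average a signal over shaft revolutions.

    Each revolution (between consecutive entries of rev_starts) is
    resampled onto a common angular grid of n_points samples; averaging
    the resampled revolutions reinforces rotation-synchronous components
    while asynchronous noise averages out.
    """
    cycles = []
    for s, e in zip(rev_starts[:-1], rev_starts[1:]):
        rev = signal[s:e]
        grid = np.linspace(0.0, len(rev) - 1, n_points)
        cycles.append(np.interp(grid, np.arange(len(rev)), rev))
    return np.mean(cycles, axis=0)

# Demo: a gear-mesh tone at 20 cycles/rev buried in broadband noise
fs, f_rot = 10_000, 10.0                  # sample rate (Hz), shaft speed (rev/s)
t = np.arange(0, 5.0, 1 / fs)
rng = np.random.default_rng(0)
raw = np.sin(2 * np.pi * 20 * f_rot * t) + 2.0 * rng.standard_normal(t.size)
rev_starts = np.arange(0, t.size + 1, int(fs / f_rot))  # ideal encoder pulses
avg = synchronous_average(raw, rev_starts)
```

After averaging 50 revolutions the asynchronous noise is reduced by roughly a factor of 7 (the square root of the number of revolutions), so the rotation-locked tone emerges clearly in `avg`.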
High-average-power diode-pumped Yb: YAG lasers
Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B
1999-10-01
A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high average power Yb:YAG lasers that utilize a rod configured gain element. Previously, this rod configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High beam quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual rod configuration consisting of two, 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) Hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes. (2) Compound laser rods with flanged-nonabsorbing-endcaps fabricated by diffusion bonding. (3) Techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods.
NASA Technical Reports Server (NTRS)
Panda, J.; Seasholtz, R. G.
2005-01-01
Recent advancements in the molecular Rayleigh scattering-based technique have allowed for simultaneous measurement of velocity and density fluctuations with high sampling rates. The technique was used to investigate unheated high subsonic and supersonic fully expanded free jets in the Mach number range of 0.8 to 1.8. The difference between the Favre averaged and Reynolds averaged axial velocity and axial component of the turbulent kinetic energy is found to be small. Estimates based on Morkovin's "Strong Reynolds Analogy" were found to provide lower values of turbulent density fluctuations than the measured data.
Code of Federal Regulations, 2010 CFR
2010-07-01
... or Before August 30, 1999 Model Rule-Continuous Emission Monitoring § 60.1755 How do I convert my 1.... If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in appendix A of this part, section 5.4, to determine the daily geometric average percent reduction...
NASA Astrophysics Data System (ADS)
Farmer, W. Michael
1990-09-01
An understanding of how broad-band transmittance is affected by the atmosphere is crucial to accurately predicting how broad-band sensors such as FLIRs will perform. This is particularly true for sensors required to function in an environment where countermeasures such as smokes/obscurants have been used to limit sensor performance. A common method of estimating the attenuation capabilities of smokes/obscurants released in the atmosphere to defeat broad-band sensors is to use a band averaged extinction coefficient with concentration length values in the Beer-Bouguer transmission law. This approach ignores the effects of source spectra, sensor response, and normal atmospheric attenuation, and can lead to results for band averages of the relative transmittance that are significantly different from those obtained using the source spectra, sensor response, and normal atmospheric transmission. In this paper we discuss the differences that occur in predicting relative transmittance as a function of concentration length using band-averaged mass extinction coefficients or computing the band-averaged transmittance as a function of source spectra. Two examples are provided to illustrate the differences in results. The first example is applicable to 8- to 14-μm band transmission through natural fogs. The second example considers 3- to 5-μm transmission through phosphorus smoke produced at 17% and 90% relative humidity. The results show major differences in the prediction of concentration length values by the two methods when the relative transmittance falls below about 20%.
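The gap between the two prediction methods is easy to reproduce numerically. In the sketch below, the source spectrum, sensor response, and wavelength-dependent extinction coefficient are all invented for illustration; the point is only that applying the Beer-Bouguer law with a single band-averaged coefficient diverges from the spectrally averaged transmittance as transmittance falls.

```python
import numpy as np

# Hypothetical 8-14 um band: invented source spectrum, flat sensor
# response, and a wavelength-dependent extinction coefficient.
wav = np.linspace(8.0, 14.0, 201)                   # wavelength (um)
k = 0.5 + 0.4 * np.sin(2 * np.pi * (wav - 8) / 6)   # extinction coeff. (m^2/g)
S = np.exp(-0.5 * ((wav - 10.0) / 2.0) ** 2)        # source spectrum (assumed)
R = np.ones_like(wav)                               # sensor response (assumed)
w = S * R / np.sum(S * R)                           # normalised spectral weights

def T_from_band_averaged_k(CL):
    """Beer-Bouguer law applied with a single band-averaged k."""
    return np.exp(-np.sum(w * k) * CL)

def T_band_averaged(CL):
    """Band-averaged transmittance computed from the full spectrum."""
    return np.sum(w * np.exp(-k * CL))
```

By Jensen's inequality the spectrally averaged transmittance always exceeds the value predicted from the band-averaged coefficient, and the gap widens as transmittance drops, consistent with the large differences reported above below about 20% relative transmittance.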
NASA Astrophysics Data System (ADS)
Aarthi, G.; Prabu, K.; Reddy, G. Ramachandra
2017-02-01
The average spectral efficiency (ASE) is investigated for free space optical (FSO) communications employing On-Off keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems with and without pointing errors over Gamma-Gamma (GG) channels. Additionally, the impact of aperture averaging on the ASE is explored. The influence of different turbulence conditions along with varying receiver aperture has been studied and analyzed. For the considered system, the exact average channel capacity (ACC) expressions are derived using the Meijer G-function. Results reveal that when pointing errors are introduced, there is a significant reduction in the ASE performance. The ASE can be enhanced by increasing the receiver aperture across various turbulence regimes and by reducing the beam radius in the presence of pointing errors, although the rate of ASE improvement diminishes with larger diameters and eventually saturates. Under strong turbulence, the Coherent OWC system provides the best ASE performance: 49 bits/s/Hz without pointing errors and 34 bits/s/Hz with pointing errors, at an average transmitted optical power of 5 dBm and an aperture diameter of 10 cm.
Code of Federal Regulations, 2012 CFR
2012-07-01
... part, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions... potential sulfur dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use EPA... SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste Combustion Units Constructed...
Estimates of Random Error in Satellite Rainfall Averages
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.
2003-01-01
Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.
Multiple-level defect species evaluation from average carrier decay
NASA Astrophysics Data System (ADS)
Debuf, Didier
2003-10-01
An expression for the average decay is determined by solving the carrier continuity equations, which include terms for multiple defect recombination. This expression is the decay measured by techniques such as the contactless photoconductance decay method, which determines the average or volume integrated decay. Implicit in the above is the requirement for good surface passivation such that only bulk properties are observed. A proposed experimental configuration is given to achieve the intended goal of an assessment of the type of defect in an n-type Czochralski-grown silicon semiconductor with an unusually high relative lifetime. The high lifetime is explained in terms of a ground excited state multiple-level defect system. Also, minority carrier trapping is investigated.
Averaged model for momentum and dispersion in hierarchical porous media.
Chabanon, Morgan; David, Bertrand; Goyeau, Benoît
2015-08-01
Hierarchical porous media are multiscale systems, where different characteristic pore sizes and structures are encountered at each scale. Focusing the analysis on three pore scales, an upscaling procedure based on the volume-averaging method is applied twice, in order to obtain a macroscopic model for momentum and diffusion-dispersion. The effective transport properties at the macroscopic scale (permeability and dispersion tensors) are found to be explicitly dependent on the mesoscopic ones. Closure problems associated with these averaged properties are numerically solved at the different scales for two types of bidisperse porous media. Results show a strong influence of the lower-scale porous structures and flow intensity on the macroscopic effective transport properties.
Ampere Average Current Photoinjector and Energy Recovery Linac
Ilan Ben-Zvi; A. Burrill; R. Calaga; P. Cameron; X. Chang; D. Gassner; H. Hahn; A. Hershcovitch; H.C. Hseuh; P. Johnson; D. Kayran; J. Kewisch; R. Lambiase; Vladimir N. Litvinenko; G. McIntyre; A. Nicoletti; J. Rank; T. Roser; J. Scaduto; K. Smith; T. Srinivasan-Rao; K.-C. Wu; A. Zaltsman; Y. Zhao; H. Bluem; A. Burger; Mike Cole; A. Favale; D. Holmes; John Rathke; Tom Schultheiss; A. Todd; J. Delayen; W. Funk; L. Phillips; Joe Preble
2004-08-01
High-power Free-Electron Lasers were made possible by advances in superconducting linacs operated in an energy-recovery mode, as demonstrated by the spectacular success of the Jefferson Laboratory IR-Demo. In order to reach much higher power levels, say a fraction of a megawatt of average power, many technological barriers are yet to be broken. BNL's Collider-Accelerator Department is pursuing some of these technologies for a different application, that of electron cooling of high-energy hadron beams. I will describe work on CW, high-current and high-brightness electron beams. This will include a description of a superconducting, laser-photocathode RF gun employing a new secondary-emission multiplying cathode and an accelerator cavity, both capable of producing on the order of one ampere of average current.
Pulsar average waveforms and hollow cone beam models
NASA Technical Reports Server (NTRS)
Backer, D. C.
1975-01-01
An analysis of pulsar average waveforms at radio frequencies from 40 MHz to 15 GHz is presented. The analysis is based on the hypothesis that the observer sees one cut of a hollow-cone beam pattern and that stationary properties of the emission vary over the cone. The distributions of apparent cone widths for different observed forms of the average pulse profiles (single, double/unresolved, double/resolved, triple and multiple) are in modest agreement with a model of a circular hollow-cone beam with random observer-spin axis orientation, a random cone axis-spin axis alignment, and a small range of physical hollow-cone parameters for all objects.
More Voodoo correlations: when average-based measures inflate correlations.
Brand, Andrew; Bradley, Michael T
2012-01-01
A Monte-Carlo simulation was conducted to assess the extent that a correlation estimate can be inflated when an average-based measure is used in a commonly employed correlational design. The results from the simulation reveal that the inflation of the correlation estimate can be substantial, up to 76%. Additionally, data were re-analyzed from two previously published studies to determine the extent that the correlation estimate was inflated due to the use of an average-based measure. The re-analyses reveal that correlation estimates had been inflated by just over 50% in both studies. Although these findings are disconcerting, we are somewhat comforted by the fact that there is a simple and easy analysis that can be employed to prevent the inflation of the correlation estimate that we have simulated and observed.
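The mechanism can be reproduced in a few lines: averaging repeated noisy trials shrinks measurement error, so the correlation computed from the averaged measure exceeds the correlation available at the single-trial level. All parameters below (sample size, trial count, noise level, latent correlation) are illustrative choices, not those of the paper's simulation.

```python
import numpy as np

rng = np.random.default_rng(42)
n_subj, n_trials = 2000, 20
r_true = 0.5                       # latent correlation (assumed)

# Latent subject scores x_lat correlated r_true with outcome y
x_lat = rng.standard_normal(n_subj)
y = r_true * x_lat + np.sqrt(1 - r_true ** 2) * rng.standard_normal(n_subj)

# Noisy single-trial observations of the x measure (trial noise sd = 2, assumed)
trials = x_lat[:, None] + 2.0 * rng.standard_normal((n_subj, n_trials))

r_single = np.corrcoef(trials[:, 0], y)[0, 1]      # one trial per subject
r_avg = np.corrcoef(trials.mean(axis=1), y)[0, 1]  # average-based measure
```

With these parameters the attenuation formula predicts a trial-level correlation near 0.22 but an average-based correlation near 0.46, so reporting the latter as if it reflected single-observation reliability roughly doubles the apparent effect.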
Laser Diode Cooling For High Average Power Applications
NASA Astrophysics Data System (ADS)
Mundinger, David C.; Beach, Raymond J.; Benett, William J.; Solarz, Richard W.; Sperry, Verry
1989-06-01
Many applications for semiconductor lasers that require high average power are limited by the inability to remove the waste heat generated by the diode lasers. In order to reduce the cost and complexity of these applications a heat sink package has been developed which is based on water cooled silicon microstructures. Thermal resistivities of less than 0.025 °C/(W/cm²) have been measured, which should be adequate for up to CW operation of diode laser arrays. This concept can easily be scaled to large areas and is ideal for high average power solid state laser pumping. Several packages which illustrate the essential features of this design have been fabricated and tested. The theory of operation will be briefly covered, and several conceptual designs will be described. Also the fabrication and assembly procedures and measured levels of performance will be discussed.
Averaged variational principle for autoresonant Bernstein-Greene-Kruskal modes
Khain, P.; Friedland, L.
2010-10-15
Whitham's averaged variational principle is applied in studying the dynamics of formation of autoresonant (continuously phase-locked) Bernstein-Greene-Kruskal (BGK) modes in a plasma driven by a chirped frequency ponderomotive wave. A flat-top electron velocity distribution is used as a model allowing a variational formulation within the water bag theory. The corresponding Lagrangian, averaged over the fast phase variable, yields evolution equations for the slow field variables, allows a uniform description of all stages of excitation of driven-chirped BGK modes, and predicts modulational stability of these nonlinear phase-space structures. Numerical solutions of the system of slow variational equations are in good agreement with Vlasov-Poisson simulations.
Robust myelin water quantification: averaging vs. spatial filtering.
Jones, Craig K; Whittall, Kenneth P; MacKay, Alex L
2003-07-01
The myelin water fraction is calculated, voxel-by-voxel, by fitting decay curves from a multi-echo data acquisition. Curve-fitting algorithms require a high signal-to-noise ratio to separate T(2) components in the T(2) distribution. This work compared the effect of averaging, during acquisition, to data postprocessed with a noise reduction filter. Forty regions, from five volunteers, were analyzed. A consistent decrease in the myelin water fraction variability with no bias in the mean was found for all 40 regions. Images of the myelin water fraction of white matter were more contiguous and had fewer "holes" than images of myelin water fractions from unfiltered echoes. Spatial filtering was effective for decreasing the variability in myelin water fraction calculated from 4-average multi-echo data.
Thermal effects in high average power optical parametric amplifiers.
Rothhardt, Jan; Demmler, Stefan; Hädrich, Steffen; Peschel, Thomas; Limpert, Jens; Tünnermann, Andreas
2013-03-01
Optical parametric amplifiers (OPAs) have the reputation of being average power scalable due to the instantaneous nature of the parametric process (zero quantum defect). This Letter reveals serious challenges originating from thermal load in the nonlinear crystal caused by absorption. We investigate these thermal effects in high average power OPAs based on beta barium borate. Absorption of both pump and idler waves is identified to contribute significantly to heating of the nonlinear crystal. A temperature increase of up to 148 K with respect to the environment is observed and mechanical tensile stress up to 40 MPa is found, indicating a high risk of crystal fracture under such conditions. By restricting the idler to a wavelength range far from absorption bands and removing the crystal coating we reduce the peak temperature and the resulting temperature gradient significantly. Guidelines for further power scaling of OPAs and other nonlinear devices are given.
Microchannel heatsinks for high average power laser diode arrays
Beach, R.; Benett, B.; Freitas, B.; Ciarlo, D.; Sperry, V.; Comaskey, B.; Emanuel, M.; Solarz, R.; Mundinger, D.
1992-01-01
Detailed performance results and fabrication techniques for an efficient and low thermal impedance laser diode array heatsink are presented. High duty factor or even CW operation of fully filled laser diode arrays is enabled at high average power. Low thermal impedance is achieved using a liquid coolant and laminar flow through microchannels. The microchannels are fabricated in silicon using a photolithographic pattern definition procedure followed by anisotropic chemical etching. A modular rack-and-stack architecture is adopted for the heatsink design allowing arbitrarily large two-dimensional arrays to be fabricated and easily maintained. The excellent thermal control of the microchannel cooled heatsinks is ideally suited to pump array requirements for high average power crystalline lasers because of the stringent temperature demands that result from coupling the diode light to several nanometers wide absorption features characteristic of lasing ions in crystals.
Microchannel cooled heatsinks for high average power laser diode arrays
Bennett, W.J.; Freitas, B.L.; Ciarlo, D.; Beach, R.; Sutton, S.; Emanuel, M.; Solarz, R.
1993-01-15
Detailed performance results for an efficient and low impedance laser diode array heatsink are presented. High duty factor and even cw operation of fully filled laser diode arrays at high stacking densities are enabled at high average power. Low thermal impedance is achieved using a liquid coolant and laminar flow through microchannels. The microchannels are fabricated in silicon using an anisotropic chemical etching process. A modular rack-and-stack architecture is adopted for heatsink design, allowing arbitrarily large two-dimensional arrays to be fabricated and easily maintained. The excellent thermal control of the microchannel heatsinks is ideally suited to pump array requirements for high average power crystalline lasers because of the stringent temperature control required to couple diode light efficiently to the several-nanometer-wide absorption features characteristic of lasing ions in crystals.
Measurements of Aperture Averaging on Bit-Error-Rate
NASA Technical Reports Server (NTRS)
Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert
2005-01-01
We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.
Removing Cardiac Artefacts in Magnetoencephalography with Resampled Moving Average Subtraction
Ahlfors, Seppo P.; Hinrichs, Hermann
2016-01-01
Magnetoencephalography (MEG) signals are commonly contaminated by cardiac artefacts (CAs). Principal component analysis and independent component analysis have been widely used for removing CAs, but they typically require a complex procedure for the identification of CA-related components. We propose a simple and efficient method, resampled moving average subtraction (RMAS), to remove CAs from MEG data. Based on an electrocardiogram (ECG) channel, a template for each cardiac cycle was estimated by a weighted average of epochs of MEG data over consecutive cardiac cycles, combined with a resampling technique for accurate alignment of the time waveforms. The template was subtracted from the corresponding epoch of the MEG data. The resampling reduced distortions due to asynchrony between the cardiac cycle and the MEG sampling times. The RMAS method successfully suppressed CAs while preserving both event-related responses and high-frequency (>45 Hz) components in the MEG data. PMID:27503196
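A stripped-down version of the idea — an unweighted moving-average template over neighbouring cardiac cycles, subtracted at each R-peak, without the resampling refinement — can be sketched as follows. The signal model, peak spacing, and window sizes are invented for illustration.

```python
import numpy as np

def moving_average_subtraction(meg, r_peaks, half_width, n_cycles=10):
    """Remove cardiac artefacts by subtracting a per-cycle template.

    For every R-peak, a template is built as the plain average of the MEG
    epochs around the n_cycles nearest R-peaks (a moving average), then
    subtracted from the epoch at that peak. This is a simplified,
    unweighted sketch of the RMAS idea with no resampling step.
    """
    cleaned = meg.copy()
    for i, p in enumerate(r_peaks):
        lo = max(0, i - n_cycles // 2)
        neighbours = r_peaks[lo:lo + n_cycles]
        epochs = [meg[q - half_width:q + half_width]
                  for q in neighbours
                  if q - half_width >= 0 and q + half_width <= meg.size]
        template = np.mean(epochs, axis=0)
        if p - half_width >= 0 and p + half_width <= meg.size:
            cleaned[p - half_width:p + half_width] -= template
    return cleaned

# Demo: background activity plus a repeating cardiac-like pulse
fs = 1000
t = np.arange(0, 20, 1 / fs)
meg = 0.5 * np.random.default_rng(1).standard_normal(t.size)
r_peaks = np.arange(500, t.size - 500, 800)         # simulated cardiac cycle
pulse = 5.0 * np.exp(-0.5 * (np.arange(-100, 100) / 20.0) ** 2)
for p in r_peaks:
    meg[p - 100:p + 100] += pulse
cleaned = moving_average_subtraction(meg, r_peaks, half_width=100)
```

Because the template is averaged over many cycles, uncorrelated brain activity survives the subtraction almost unchanged while the cycle-locked artefact is removed; the resampling step in the full RMAS method additionally corrects for asynchrony between the cardiac cycle and the sampling grid.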
The B-dot Earth Average Magnetic Field
NASA Technical Reports Server (NTRS)
Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon
2013-01-01
The average Earth's magnetic field is solved with complex mathematical models based on a mean-square integral. Depending on the selection of the Earth magnetic field model, the average Earth's magnetic field can have different solutions. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and does not depend on the Earth magnetic field model; it does, however, depend on the magnetic torquers of the satellite, which are not taken into consideration in the known mathematical models. The solution of this new technique can be implemented so easily that the flight software can be updated during flight, and the control system can have current gains for the magnetic torquers. Finally, this technique is verified and validated using flight data from a satellite that has been in orbit for three years.
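For context, the b-dot controller whose damping effect the technique exploits can be sketched in a few lines; the gain, sampling interval, and field values below are illustrative placeholders, not flight parameters from the paper.

```python
import numpy as np

def bdot_dipole(B_now, B_prev, dt, k=1.0e4):
    """Classic b-dot magnetic detumbling law (illustrative sketch).

    Commands a magnetic dipole moment opposing the measured rate of
    change of the body-frame field B; the resulting torque m x B damps
    the satellite's angular rate. The gain k and the finite-difference
    derivative are illustrative choices, not flight values.
    """
    B_dot = (B_now - B_prev) / dt
    return -k * B_dot

# Example: the x field component is decreasing, so a positive dipole
# moment is commanded along x to oppose the change.
m = bdot_dipole(np.array([1.0e-5, 0.0, 0.0]),
                np.array([2.0e-5, 0.0, 0.0]), dt=0.1)
```

Because the commanded dipole is proportional to the measured field derivative, the controller's damping behaviour carries information about the field seen along the orbit, which is the property the paper's averaging technique builds on.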
Averaging of nuclear modulation artefacts in RIDME experiments
NASA Astrophysics Data System (ADS)
Keller, Katharina; Doll, Andrin; Qi, Mian; Godt, Adelheid; Jeschke, Gunnar; Yulikov, Maxim
2016-11-01
The presence of artefacts due to Electron Spin Echo Envelope Modulation (ESEEM) complicates the analysis of dipolar evolution data in Relaxation Induced Dipolar Modulation Enhancement (RIDME) experiments. Here we demonstrate that averaging over the two delay times in the refocused RIDME experiment allows for nearly quantitative removal of the ESEEM artefacts, resulting in potentially much better performance than the so far used methods. The analytical equations are presented and analyzed for the case of electron and nuclear spins S = 1/2, I = 1/2. The presented analysis is also relevant for Double Electron Electron Resonance (DEER) and Chirp-Induced Dipolar Modulation Enhancement (CIDME) techniques. The applicability of the ESEEM averaging approach is demonstrated on a Gd(III)-Gd(III) rigid ruler compound in deuterated frozen solution at Q band (35 GHz).
Correct averaging in transmission radiography: Analysis of the inverse problem
NASA Astrophysics Data System (ADS)
Wagner, Michael; Hampel, Uwe; Bieberle, Martina
2016-05-01
Transmission radiometry is frequently used in industrial measurement processes as a means to assess the thickness or composition of a material. A common problem encountered in such applications is the so-called dynamic bias error, which results from averaging beam intensities over time while the material distribution changes. We recently reported on a method to overcome the associated measurement error by solving an inverse problem, which in principle restores the exact average attenuation by considering the Poisson statistics of the underlying particle or photon emission process. In this paper we present a detailed analysis of the inverse problem and its optimal regularized numerical solution. As a result we derive an optimal parameter configuration for the inverse problem.
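The dynamic bias error itself is easy to demonstrate: because the exponential attenuation law is nonlinear, averaging intensities over a fluctuating thickness and then taking the logarithm does not recover the average attenuation (the paper's inverse problem additionally accounts for the Poisson statistics of the emission process). The attenuation coefficient and thickness distribution below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 1.0                              # attenuation coefficient (1/cm), assumed
x = rng.uniform(0.5, 2.5, 100_000)    # fluctuating material thickness (cm)

# Naive processing: average the measured intensities over time,
# then take the logarithm of the mean intensity.
I = np.exp(-mu * x)
naive_attenuation = -np.log(I.mean())

# Reference: the true time-averaged attenuation.
true_attenuation = mu * x.mean()
```

By Jensen's inequality the naive estimate always underestimates the true average attenuation when the thickness fluctuates; for the uniform distribution above the bias is roughly 10%, which is the systematic error the inverse-problem treatment is designed to remove.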
On representation formulas for long run averaging optimal control problem
NASA Astrophysics Data System (ADS)
Buckdahn, R.; Quincampoix, M.; Renault, J.
2015-12-01
We investigate an optimal control problem with an averaging cost. The asymptotic behaviour of the values is a classical problem in ergodic control. To study the long run averaging we consider both Cesàro and Abel means. A main result of the paper is that there is at most one possible accumulation point - in the uniform convergence topology - of the values, when the time horizon of the Cesàro means converges to infinity or the discount factor of the Abel means converges to zero. This unique accumulation point is explicitly described by representation formulas involving probability measures on the state and control spaces. As a byproduct we obtain the existence of a limit value whenever the Cesàro or Abel values are equicontinuous. Our approach allows us to generalise several results in ergodic control and, in particular, to cope with cases where the limit value is not constant with respect to the initial condition.
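As a concrete illustration (not taken from the paper): for a periodic running cost, the Cesàro mean over a long horizon and the Abel mean with a small discount factor both approach the same long-run value, consistent with the uniqueness of the accumulation point. The cost function and parameters below are invented for the demonstration.

```python
import numpy as np

# Periodic running cost g(t) with long-run average 0.5 (assumption)
t = np.linspace(0.0, 200.0, 200_001)
dt = t[1] - t[0]
g = 0.5 + 0.5 * np.sin(t)

# Cesàro mean: (1/T) * integral_0^T g(t) dt, for a large horizon T
cesaro = g.mean()

# Abel mean: lam * integral_0^inf e^(-lam*t) g(t) dt, for a small
# discount factor lam (the tail beyond T = 200 is negligible here)
lam = 0.05
abel = lam * np.sum(np.exp(-lam * t) * g) * dt
```

Both means land near 0.5: the Cesàro mean within about 0.002 of the limit and the Abel mean within about 0.025, with the residual gaps shrinking as the horizon grows and the discount factor vanishes.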
Scaling registration of multiview range scans via motion averaging
NASA Astrophysics Data System (ADS)
Zhu, Jihua; Zhu, Li; Jiang, Zutao; Li, Zhongyu; Li, Chen; Zhang, Fan
2016-07-01
Three-dimensional modeling of a scene or object requires registration of multiple range scans, which are obtained by a range sensor from different viewpoints. An approach is proposed for scaling registration of multiview range scans via motion averaging. First, it presents a method to estimate overlap percentages of all scan pairs involved in multiview registration. Then, a variant of the iterative closest point algorithm is presented to calculate relative motions (scaling transformations) for those scan pairs with high overlap percentages. Subsequently, the proposed motion averaging algorithm can transform these relative motions into global motions of multiview registration. In addition, it also introduces parallel computation to increase the efficiency of multiview registration. Furthermore, it presents an error criterion for accuracy evaluation of the multiview registration result, making it easy to compare results of different multiview registration approaches. Experimental results carried out with publicly available datasets demonstrate its superiority over related approaches.
Improved MCMAC with momentum, neighborhood, and averaged trapezoidal output.
Ang, K K; Chai, Q
2000-01-01
An improved modified cerebellar model articulation controller (MCMAC) neural control algorithm with better learning and recall processes, using momentum, neighborhood learning, and averaged trapezoidal output, is proposed in this paper. The learning and recall processes of MCMAC are investigated using the characteristic surface of MCMAC and the control action exerted in controlling a continuously variable transmission (CVT). Extensive experimental results demonstrate a significant improvement, with reduced training time and an extended range of trained MCMAC cells. The improvement in the recall process using the averaged trapezoidal output (MCMAC-ATO) is contrasted against the original MCMAC using the square of the Pearson product moment correlation coefficient. Experimental results show that the new recall process significantly reduces the fluctuations in the control action of the MCMAC and partially addresses the problem associated with the resolution of the MCMAC memory array.
Modeling an Application's Theoretical Minimum and Average Transactional Response Times
Paiz, Mary Rose
2015-04-01
The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that result from unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
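The report's actual models are a non-stationary GEV fit for the lower threshold and a seasonal ARIMA forecast for the upper one; as a toy stand-in only, the sketch below (numpy, names ours) builds a lower threshold from a low empirical quantile of daily minima and an upper threshold from a rolling mean plus k rolling standard deviations of daily averages, which illustrates the two-threshold anomaly-band idea without reproducing the paper's estimators.

```python
import numpy as np

def thresholds(daily_min, daily_avg, window=7, q=0.01, k=3.0):
    """Toy stand-in for GEV / seasonal-ARIMA thresholds:
    lower = low empirical quantile of daily minimum response times,
    upper = rolling mean + k * rolling std of daily average times."""
    lower = float(np.quantile(daily_min, q))
    kernel = np.ones(window) / window
    mean = np.convolve(daily_avg, kernel, mode="valid")
    # rolling variance via E[x^2] - E[x]^2, clipped at zero for fp safety
    sq = np.convolve(np.asarray(daily_avg, float) ** 2, kernel, mode="valid")
    std = np.sqrt(np.maximum(sq - mean ** 2, 0.0))
    upper = mean + k * std
    return lower, upper
```

Observations below `lower` suggest failed (truncated) transactions; observations above `upper` suggest a performance anomaly.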
Non-self-averaging in Ising spin glasses and hyperuniversality
NASA Astrophysics Data System (ADS)
Lundow, P. H.; Campbell, I. A.
2016-01-01
Ising spin glasses with bimodal and Gaussian near-neighbor interaction distributions are studied through numerical simulations. The non-self-averaging (normalized intersample variance) parameter U_{22}(T,L) for the spin glass susceptibility [and for higher moments U_{nn}(T,L)] is reported for dimensions 2, 3, 4, 5, and 7. In each dimension d the non-self-averaging parameters in the paramagnetic regime vary with the sample size L and the correlation length ξ(T,L) as U_{nn}(β,L) = [K_d ξ(T,L)/L]^d and so follow a renormalization group law due to Aharony and Harris [Phys. Rev. Lett. 77, 3700 (1996), 10.1103/PhysRevLett.77.3700]. Empirically, it is found that the K_d values are independent of d to within the statistics. The maximum values [U_{nn}(T,L)]_{max} are almost independent of L in each dimension, and remarkably the estimated thermodynamic-limit critical [U_{nn}(T,L)]_{max} peak values are also practically dimension-independent to within the statistics and so are "hyperuniversal." These results show that the form of the spin-spin correlation function distribution at criticality in the large-L limit is independent of dimension within the ISG family. Inspection of published non-self-averaging data for three-dimensional Heisenberg and XY spin glasses in the light of the Ising spin glass non-self-averaging results shows behavior which appears to be compatible with that expected on a chiral-driven ordering interpretation but incompatible with a spin-driven ordering scenario.
Asymptotic Properties of Some Estimators in Moving Average Models
1975-09-08
We consider a different approach due to Durbin (1959), based on approximating the moving average of order q by an autoregression of order k (k ~ q). This method shows good statistical properties. The paper by Durbin does not treat in detail the role of k in the parameters of the limiting normal ... confirming some of the examples presented by Durbin. The parallel analysis with k = k(T) was also attempted, but at this point no complete
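Durbin's two-regression idea can be sketched directly: fit a long autoregression to estimate the innovations, then regress the series on lagged innovation estimates to recover the MA coefficients. The numpy sketch below (function name and least-squares details ours; Durbin's original formulation works from the AR coefficients themselves) illustrates the approximation for an MA(q) series.

```python
import numpy as np

def durbin_ma(x, q, k):
    """Durbin (1959)-style MA(q) estimation via a long AR(k).
    Step 1: fit AR(k) by least squares; its residuals estimate
    the innovations e_t. Step 2: regress x_t on lagged residuals
    to obtain the MA coefficients theta_1..theta_q."""
    x = np.asarray(x, float)
    n = len(x)
    # Step 1: x_t ~ sum_j a_j x_{t-j}, t = k..n-1
    X = np.column_stack([x[k - j - 1:n - j - 1] for j in range(k)])
    a, *_ = np.linalg.lstsq(X, x[k:], rcond=None)
    e = np.zeros(n)
    e[k:] = x[k:] - X @ a                  # innovation estimates
    # Step 2: x_t - e_t ~ sum_i theta_i e_{t-i}, t = k+q..n-1
    E = np.column_stack([e[k + q - i - 1:n - i - 1] for i in range(q)])
    theta, *_ = np.linalg.lstsq(E, x[k + q:] - e[k + q:], rcond=None)
    return theta
```

The quality of the approximation improves as k grows relative to q, which is exactly the role of k = k(T) discussed in the abstract.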
Self-averaging in complex brain neuron signals
NASA Astrophysics Data System (ADS)
Bershadskii, A.; Dremencov, E.; Fukayama, D.; Yadid, G.
2002-12-01
Nonlinear statistical properties of the Ventral Tegmental Area (VTA) of the limbic brain are studied in vivo. The VTA plays a key role in the generation of pleasure and in the development of psychological drug addiction. It is shown that spiking time series of the VTA dopaminergic neurons exhibit long-range correlations with self-averaging behavior. This specific VTA phenomenon has no relation to the VTA rewarding function. This last result reveals the complex role of the VTA in the limbic brain.
Average dynamics of a finite set of coupled phase oscillators
Dima, Germán C.; Mindlin, Gabriel B.
2014-06-15
We study the solutions of a dynamical system describing the average activity of an infinitely large set of driven coupled excitable units. We compare their topological organization with that reconstructed from the numerical integration of finite sets. In this way, we present a strategy to establish the pertinence of approximating the dynamics of finite sets of coupled nonlinear units by the dynamics of their infinitely large surrogate.
High average power solid state laser power conditioning system
Steinkraus, R.F.
1987-03-03
The power conditioning system for the High Average Power Laser program at Lawrence Livermore National Laboratory (LLNL) is described. The system has been operational for two years. It is high voltage, high power, fault protected, and solid state. The power conditioning system drives flashlamps that pump solid state lasers. Flashlamps are driven by silicon control rectifier (SCR) switched, resonant charged, (LC) discharge pulse forming networks (PFNs). The system uses fiber optics for control and diagnostics. Energy and thermal diagnostics are monitored by computers.
Light-cone averages in a Swiss-cheese universe
NASA Astrophysics Data System (ADS)
Marra, Valerio; Kolb, Edward W.; Matarrese, Sabino
2008-01-01
We analyze a toy Swiss-cheese cosmological model to study the averaging problem. In our Swiss-cheese model, the cheese is a spatially flat, matter only, Friedmann-Robertson-Walker solution (i.e., the Einstein-de Sitter model), and the holes are constructed from a Lemaître-Tolman-Bondi solution of Einstein’s equations. We study the propagation of photons in the Swiss-cheese model, and find a phenomenological homogeneous model to describe observables. Following a fitting procedure based on light-cone averages, we find that the expansion scalar is unaffected by the inhomogeneities (i.e., the phenomenological homogeneous model is the cheese model). This is because of the spherical symmetry of the model; it is unclear whether the expansion scalar will be affected by nonspherical voids. However, the light-cone average of the density as a function of redshift is affected by inhomogeneities. The effect arises because, as the universe evolves, a photon spends more and more time in the (large) voids than in the (thin) high-density structures. The phenomenological homogeneous model describing the light-cone average of the density is similar to the ΛCDM concordance model. It is interesting that, although the sole source in the Swiss-cheese model is matter, the phenomenological homogeneous model behaves as if it has a dark-energy component. Finally, we study how the equation of state of the phenomenological homogeneous model depends on the size of the inhomogeneities, and find that the equation-of-state parameters w0 and wa follow a power-law dependence with a scaling exponent equal to unity. That is, the equation of state depends linearly on the distance the photon travels through voids. We conclude that, within our toy model, the holes must have a present size of about 250 Mpc to be able to mimic the concordance model.
The role of the harmonic vector average in motion integration.
Johnston, Alan; Scarfe, Peter
2013-01-01
The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging, in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, together with a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy since it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the IOC direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the HVA.
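The HVA has a compact geometric form: invert each local component velocity through the unit circle (v → v/|v|²), take the ordinary vector average, and invert the result back. The numpy sketch below (function name ours) implements that rule; for an unbiased sample of normal components of a single global velocity it recovers the global velocity exactly, as the abstract states.

```python
import numpy as np

def harmonic_vector_average(vs):
    """Harmonic vector average of 2-D local velocity estimates:
    map each vector through v -> v / |v|^2, take the arithmetic
    mean, then map the mean back through the same inversion."""
    vs = np.asarray(vs, dtype=float)
    inv = vs / np.sum(vs ** 2, axis=1, keepdims=True)
    m = inv.mean(axis=0)
    return m / np.sum(m ** 2)
```

For a global velocity V, the normal component seen at contour normal angle φ is |V|cos(φ) in direction φ; a sample of such components symmetric about the motion direction averages back to V.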
Hydrophone spatial averaging corrections from 1 to 100 MHz
NASA Astrophysics Data System (ADS)
Radulescu, Emil George
The purpose of this work was to develop and experimentally verify a set of robust and readily applicable spatial averaging models to account for an ultrasonic hydrophone probe's finite aperture in acoustic field measurements in the frequency range 1-100 MHz. Electronically and mechanically focused acoustic sources of different geometries were considered. The geometries included single-element circular sources and rectangular transducers representative of ultrasound imaging arrays used in clinical diagnostic applications. The field distributions of the acoustic sources were predicted and used in the development of the spatial averaging models. The validity of the models was tested using commercially available hydrophone probes having active element diameters ranging from 50 to 1200 μm. The models yielded guidelines applicable to both linear and nonlinear wave propagation conditions. By accounting for hydrophones' finite aperture and correcting the recorded pressure-time waveforms, the models allowed the uncertainty associated with determining the key acoustic output parameters, such as the Pulse Intensity Integral (PII) and the intensities derived from it, to be minimized. In addition, the work offered a correction factor for the safety indicator Mechanical Index (MI) that is required by AIUM/NEMA standards. The novelty of this research stems primarily from the fact that, to the best of the author's knowledge, such a comprehensive set of models and guidelines has not been developed so far. Although different spatial averaging models have already been suggested, they have been limited to circular geometries, linear propagation conditions and conventional, low-megahertz medical imaging frequencies only. Also, the spatial averaging models described here provided the necessary corrections to obtain the true sensitivity versus frequency response during calibration of hydrophone probes up to 100 MHz and allowed for a subsequent development of two novel
Averaging cross section data so we can fit it
Brown, D.
2014-10-23
The ^{56}Fe cross sections we are interested in have a lot of fluctuations. We would like to fit the average of the cross sections with cross sections calculated within EMPIRE. EMPIRE, a Hauser-Feshbach theory based nuclear reaction code, requires the cross sections to be smoothed using a Lorentzian profile. The plan is to fit EMPIRE to these cross sections in the fast region (say, above 500 keV).
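The Lorentzian smoothing mentioned above amounts to a normalized weighted average of the pointwise cross section at each energy. A minimal numpy sketch (function name and discrete quadrature ours, not EMPIRE's implementation):

```python
import numpy as np

def lorentzian_smooth(energy, xsec, gamma):
    """Smooth a pointwise cross section xsec(E) with a Lorentzian
    profile of half-width gamma, normalized on the sampled grid,
    so fluctuating data can be compared with Hauser-Feshbach
    (average) calculations."""
    E = np.asarray(energy, float)
    s = np.asarray(xsec, float)
    out = np.empty_like(s)
    for i, e0 in enumerate(E):
        w = (gamma / np.pi) / ((E - e0) ** 2 + gamma ** 2)
        out[i] = np.sum(w * s) / np.sum(w)   # normalized weighted average
    return out
```

The width gamma controls how much of the resonance fluctuation survives; for fitting in the fast region it would be chosen comparable to the scale of the fluctuations being averaged out.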
Effects of velocity averaging on the shapes of absorption lines
NASA Technical Reports Server (NTRS)
Pickett, H. M.
1980-01-01
The velocity averaging of collision cross sections produces non-Lorentz line shapes, even at densities where Doppler broadening is not apparent. The magnitude of the effects will be described using a model in which the collision broadening depends on a simple velocity power law. The effect of the modified profile on experimental measures of linewidth, shift and amplitude will be examined and an improved approximate line shape will be derived.
Characterizing individual painDETECT symptoms by average pain severity
Sadosky, Alesia; Koduru, Vijaya; Bienen, E Jay; Cappelleri, Joseph C
2016-01-01
Background: painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated.
Methods: Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level.
Results: A probability >50% for a better outcome (less severe pain) was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness for mild vs moderate pain) and the highest probability was 76.4% (on cold/heat for mild vs severe pain). The pain radiation item was significant (P<0.05) and consistent with pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ.
Conclusion: painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner. Pain-severity levels can serve as proxies to determine treatment effects, thus indicating probabilities for more favorable outcomes on pain symptoms. PMID:27555789
Fundamental techniques for resolution enhancement of average subsampled images
NASA Astrophysics Data System (ADS)
Shen, Day-Fann; Chiu, Chui-Wen
2012-07-01
Although single image resolution enhancement, otherwise known as super-resolution, is widely regarded as an ill-posed inverse problem, we re-examine the fundamental relationship between a high-resolution (HR) image acquisition module and its low-resolution (LR) counterpart. Analysis shows that partial HR information is attenuated but still exists in its LR version through the fundamental averaging-and-subsampling process. As a result, we propose a modified Laplacian filter (MLF) and an intensity correction process (ICP) as the pre- and post-process, respectively, with an interpolation algorithm to partially restore the attenuated information in a super-resolution (SR) enhanced image. Experiments show that the proposed MLF and ICP provide significant and consistent quality improvements on all 10 test images with three well-known interpolation methods, including bilinear, bi-cubic, and the SR graphical user interface program provided by Ecole Polytechnique Federale de Lausanne. The proposed MLF and ICP are simple in implementation and generally applicable to all average-subsampled LR images. MLF and ICP, separately or together, can be integrated into most interpolation methods that attempt to restore the original HR contents. Finally, the idea of MLF and ICP can also be applied to averaged and subsampled one-dimensional signals.
Noise reduction of video imagery through simple averaging
NASA Astrophysics Data System (ADS)
Vorder Bruegge, Richard W.
1999-02-01
Examiners in the Special Photographic Unit of the Federal Bureau of Investigation Laboratory Division conduct examinations of questioned photographic evidence of all types, including surveillance imagery recorded on film and video tape. A primary type of examination includes side-by- side comparisons, in which unknown objects or people depicted in the questioned images are compared with known objects recovered from suspects or with photographs of suspects themselves. Most imagery received in the SPU for such comparisons originate from time-lapse video or film systems. In such circumstances, the delay between sequential images is so great that standard image summing and/or averaging techniques are useless as a means of improving image detail in questioned subjects or objects without also resorting to processing-intensive pattern reconstruction algorithms. Occasionally, however, the receipt of real-time video imagery will include a questioned object at rest. In such cases, it is possible to use relatively simple image averaging techniques as a means of reducing transient noise in the images, without further compromising the already-poor resolution inherent in most video surveillance images. This paper presents an example of one such case in which multiple images were averaged to reduce the transient noise to a sufficient degree to permit the positive identification of a vehicle based upon the presence of scrape marks and dents on the side of the vehicle.
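The simple averaging described above relies only on the fact that, for registered frames of a static scene, zero-mean transient noise averages down by a factor of √N while the scene content is preserved. A minimal numpy sketch (function name ours; real casework would first require the frames to be aligned):

```python
import numpy as np

def average_frames(frames):
    """Average N registered frames of a static scene.
    The scene content is unchanged while independent zero-mean
    transient noise is reduced by roughly a factor of sqrt(N)."""
    return np.mean(np.asarray(frames, dtype=float), axis=0)
```

With 100 frames, the noise standard deviation drops to about one tenth of that of a single frame, which is the kind of gain that made the scrape marks and dents in the case described resolvable.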
Variations in Nimbus-7 cloud estimates. Part I: Zonal averages
Weare, B.C.
1992-12-01
Zonal averages of low, middle, high, and total cloud amount estimates derived from measurements from Nimbus-7 have been analyzed for the six-year period April 1979 through March 1985. The globally and zonally averaged values of six-year annual means and standard deviations of total cloud amount and a proxy of cloud-top height are illustrated. Separate means for day and night and land and sea are also shown. The globally averaged value of intra-annual variability of total cloud amount is greater than 7%, and that for cloud height is greater than 0.3 km. Those of interannual variability are more than one-third of these values. Important latitudinal differences in variability are illustrated. The dominant empirical orthogonal analyses of the intra-annual variations of total cloud amount and heights show strong annual cycles, indicating that in the tropics increases in total cloud amount of up to about 30% are often accompanied by increases in cloud height of up to 1.2 km. This positive link is also evident in the dominant empirical orthogonal function of interannual variations of a total cloud/cloud height complex. This function shows a large coherent variation in total cloud cover of about 10% coupled with changes in cloud height of about 1.1 km associated with the 1982-83 El Niño-Southern Oscillation event. 14 refs. 12 figs., 2 tabs.
Local and average behaviour in inhomogeneous superdiffusive media
NASA Astrophysics Data System (ADS)
Vezzani, Alessandro; Burioni, Raffaella; Caniparoli, Luca; Lepri, Stefano
2011-05-01
We consider a random walk on one-dimensional inhomogeneous graphs built from Cantor fractals. Our study is motivated by recent experiments that demonstrated superdiffusion of light in complex disordered materials, thereby termed Lévy glasses. We introduce a geometric parameter α which plays a role analogous to the exponent characterising the step length distribution in random systems. We study the large-time behaviour of both local and average observables; for the latter case, we distinguish two different types of averages, respectively over the set of all initial sites and over the scattering sites only. The 'single long-jump approximation' is applied to analytically determine the different asymptotic behaviours as a function of α and to understand their origin. We also discuss the possibility that the root of the mean square displacement and the characteristic length of the walker distribution may grow according to different power laws; this anomalous behaviour is typical of processes characterised by Lévy statistics and here, in particular, it is shown to influence average quantities.
Design Principles for a Compact High Average Power IR FEL
Lia Merminga; Steve Benson
2001-08-01
Progress in superconducting rf (srf) technology has led to dramatic changes in cryogenic losses, cavity gradients, and microphonic levels. Design principles for a compact high average power Energy Recovery FEL at IR wavelengths, consistent with the state of the art in srf, are outlined. High accelerating gradients, of order 20 MV/m at Q_0 ≈ 1×10^10, possible at rf frequencies of 1300 MHz and 1500 MHz, allow for a single-cryomodule linac with minimum cryogenic losses. Filling every rf bucket at these high frequencies results in high average current at relatively low charge per bunch, thereby greatly ameliorating all single-bunch phenomena, such as wakefields and coherent synchrotron radiation. These principles are applied to derive self-consistent sets of parameters for 100 kW and 1 MW average power IR FELs and are compared with low-frequency solutions. This work supported by U.S. DOE Contract No. DE-AC05-84ER40150, the Commonwealth of Virginia and the Laser Processing Consortium.
On the average uncertainty for systems with nonlinear coupling
NASA Astrophysics Data System (ADS)
Nelson, Kenric P.; Umarov, Sabir R.; Kon, Mark A.
2017-02-01
The increased uncertainty and complexity of nonlinear systems have motivated investigators to consider generalized approaches to defining an entropy function. New insights are achieved by defining the average uncertainty in the probability domain as a transformation of entropy functions. The Shannon entropy when transformed to the probability domain is the weighted geometric mean of the probabilities. For the exponential and Gaussian distributions, we show that the weighted geometric mean of the distribution is equal to the density of the distribution at the location plus the scale (i.e. at the width of the distribution). The average uncertainty is generalized via the weighted generalized mean, in which the moment is a function of the nonlinear source. Both the Rényi and Tsallis entropies transform to this definition of the generalized average uncertainty in the probability domain. For the generalized Pareto and Student's t-distributions, which are the maximum entropy distributions for these generalized entropies, the appropriate weighted generalized mean also equals the density of the distribution at the location plus scale. A coupled entropy function is proposed, which is equal to the normalized Tsallis entropy divided by one plus the coupling.
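The identity underlying the abstract's first claim is easy to state concretely: the Shannon entropy transformed to the probability domain is the p-weighted geometric mean of the probabilities, ∏_i p_i^{p_i} = exp(−H(p)). A minimal numpy sketch (function name ours):

```python
import numpy as np

def avg_uncertainty_shannon(p):
    """Shannon entropy mapped to the probability domain: the
    p-weighted geometric mean of the probabilities, which equals
    exp(-H(p)) with H the Shannon entropy in nats."""
    p = np.asarray(p, dtype=float)
    return float(np.prod(p ** p))   # convention: 0**0 = 1
```

The generalized version in the paper replaces this weighted geometric mean by a weighted generalized mean whose moment depends on the nonlinear coupling.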
Estimating a weighted average of stratum-specific parameters.
Brumback, Babette A; Winner, Larry H; Casella, George; Ghosh, Malay; Hall, Allyson; Zhang, Jianyi; Chorba, Lorna; Duncan, Paul
2008-10-30
This article investigates estimators of a weighted average of stratum-specific univariate parameters and compares them in terms of a design-based estimate of mean-squared error (MSE). The research is motivated by a stratified survey sample of Florida Medicaid beneficiaries, in which the parameters are population stratum means and the weights are known and determined by the population sampling frame. Assuming heterogeneous parameters, it is common to estimate the weighted average with the weighted sum of sample stratum means; under homogeneity, one ignores the known weights in favor of precision weighting. Adaptive estimators arise from random effects models for the parameters. We propose adaptive estimators motivated from these random effects models, but we compare their design-based performance. We further propose selecting the tuning parameter to minimize a design-based estimate of mean-squared error. This differs from the model-based approach of selecting the tuning parameter to accurately represent the heterogeneity of stratum means. Our design-based approach effectively downweights strata with small weights in the assessment of homogeneity, which can lead to a smaller MSE. We compare the standard random effects model with identically distributed parameters to a novel alternative, which models the variances of the parameters as inversely proportional to the known weights. We also present theoretical and computational details for estimators based on a general class of random effects models. The methods are applied to estimate average satisfaction with health plan and care among Florida beneficiaries just prior to Medicaid reform.
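The two endpoints the article compares can be sketched directly: the design-weighted sum of stratum means (appropriate under heterogeneity) and the precision-weighted pooled mean (appropriate under homogeneity). The paper's adaptive estimators arise from random-effects models; as a simplified illustration only, the sketch below (numpy, names and the convex-combination tuning parameter `lam` ours) interpolates between the two.

```python
import numpy as np

def stratum_estimator(means, weights, variances, lam):
    """Estimate sum_h w_h mu_h from stratum sample means.
    lam = 0: design-weighted estimator (respects known weights);
    lam = 1: precision-weighted pooled mean (ignores weights);
    0 < lam < 1: a simple adaptive compromise (illustrative only,
    not the paper's random-effects estimator)."""
    w = np.asarray(weights, float)
    mu = np.asarray(means, float)
    v = np.asarray(variances, float)
    design = np.sum(w * mu)                    # weighted sum of stratum means
    pooled = np.sum(mu / v) / np.sum(1.0 / v)  # precision-weighted mean
    return (1.0 - lam) * design + lam * pooled
```

The article's contribution is choosing the tuning parameter to minimize a design-based MSE estimate rather than to model the heterogeneity of the stratum means.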
Improvement of scanning radiometer performance by digital reference averaging
NASA Technical Reports Server (NTRS)
Bremer, J. C.
1979-01-01
Most radiometers utilize a calibration technique in which measurements of a known reference are subtracted from measurements of an unknown source so that common-mode bias errors are cancelled. When a radiometer is scanned over a varying scene, it produces a sequence of outputs, each being proportional to the difference between the reference and the corresponding input. A reference averaging technique is presented that employs a simple digital algorithm which exploits the asymmetry between the time-variable scene inputs and the nominally constant reference input by averaging many reference measurements to decrease the statistical uncertainty in the reference value. This algorithm is, therefore, optimized by an asymmetric chopping sequence in which the scene is viewed for more than one-half of the duty cycle (unlike the analog Dicke technique). Reference averaging algorithms are well within the capabilities of small microprocessors. Although this paper develops the technique for microwave radiometry, it may be beneficial for any system which measures a large number of unknowns relative to a known reference in the presence of slowly varying common-mode errors.
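The reference-averaging idea reduces to subtracting an average of many reference looks, rather than a single paired look, from each scene measurement. A minimal numpy sketch (function name ours; a flight system would maintain a running average rather than a batch mean):

```python
import numpy as np

def calibrated_scan(scene_counts, reference_counts):
    """Subtract an averaged reference from each scene measurement.
    Averaging N reference looks shrinks the reference noise term
    by sqrt(N), so the output noise approaches that of a single
    scene measurement alone, while common-mode bias still cancels."""
    ref = float(np.mean(reference_counts))
    return np.asarray(scene_counts, float) - ref
```

This is why the optimal chopping sequence is asymmetric: with the reference noise averaged down, it pays to spend more than half of the duty cycle viewing the scene.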
Exploring JLA supernova data with improved flux-averaging technique
NASA Astrophysics Data System (ADS)
Wang, Shuang; Wen, Sixiang; Li, Miao
2017-03-01
In this work, we explore the cosmological consequences of the "Joint Light-curve Analysis" (JLA) supernova (SN) data by using an improved flux-averaging (FA) technique, in which only the type Ia supernovae (SNe Ia) at high redshift are flux-averaged. Adopting the criterion of figure of merit (FoM) and considering six dark energy (DE) parameterizations, we search for the best FA recipe that gives the tightest DE constraints in the (z_cut, Δz) plane, where z_cut and Δz are the redshift cut-off and redshift interval of FA, respectively. Then, based on the best FA recipe obtained, we discuss the impacts of varying z_cut and varying Δz, revisit the evolution of the SN color luminosity parameter β, and study the effects of adopting different FA recipes on parameter estimation. We find that: (1) The best FA recipe is (z_cut = 0.6, Δz = 0.06), which is insensitive to a specific DE parameterization. (2) Flux-averaging JLA samples at z_cut >= 0.4 will yield tighter DE constraints than the case without using FA. (3) Using FA can significantly reduce the redshift evolution of β. (4) The best FA recipe favors a larger fractional matter density Ωm. In summary, we present an alternative method of dealing with JLA data, which can reduce the systematic uncertainties of SNe Ia and give tighter DE constraints at the same time. Our method will be useful in the use of SNe Ia data for precision cosmology.
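The recipe of flux-averaging only the high-redshift sample can be sketched schematically. The proper technique (Wang's flux-averaging) works with model-dependent luminosity distances; the simplified numpy sketch below (names ours) only illustrates the binning step: SNe below z_cut are kept as-is, while above z_cut distance moduli are converted to fluxes, averaged in bins of width Δz, and converted back to an effective modulus per bin.

```python
import numpy as np

def flux_average(z, mu, z_cut=0.6, dz=0.06):
    """Schematic flux-averaging: keep SNe with z < z_cut; above
    z_cut, convert distance moduli mu to fluxes f ~ 10^(-0.4 mu),
    average f in redshift bins of width dz, and convert the bin
    average back to an effective mu at the bin's mean redshift."""
    z = np.asarray(z, float)
    mu = np.asarray(mu, float)
    keep = z < z_cut
    out_z, out_mu = list(z[keep]), list(mu[keep])
    zh, mh = z[~keep], mu[~keep]
    if zh.size:
        edges = np.arange(z_cut, zh.max() + dz, dz)
        idx = np.digitize(zh, edges)
        for b in np.unique(idx):
            sel = idx == b
            f = np.mean(10.0 ** (-0.4 * mh[sel]))
            out_z.append(float(np.mean(zh[sel])))
            out_mu.append(float(-2.5 * np.log10(f)))
    return np.array(out_z), np.array(out_mu)
```

Averaging in flux rather than in magnitude is what suppresses the (weak-lensing-type) systematic scatter that motivates the technique.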
Averaged null energy condition in loop quantum cosmology
Li Lifang; Zhu Jianyang
2009-02-15
Wormholes and time machines are objects of great interest in general relativity. However, supporting them requires exotic matter, which is impossible at the classical level. Semiclassical gravity introduces quantum effects into the stress-energy tensor and constructs many self-consistent wormholes. But they are not traversable, due to the averaged null energy condition. Loop quantum gravity (LQG) significantly modifies the Einstein equation in the deep quantum region. If we write the modified Einstein equation in the form of the standard one but with an effective stress-energy tensor, it is convenient to analyze the geometry in LQG through the energy conditions. Loop quantum cosmology (LQC), an application of LQG, has an effective stress-energy tensor which violates some kinds of local energy conditions. So it is natural that inflation emerges in LQC. In this paper, we investigate the averaged null energy condition in LQC in the framework of the effective Hamiltonian, and we find that the effective stress-energy tensor in LQC violates the averaged null energy condition in the massless scalar field coupled model.
Rolling bearing feature frequency extraction using extreme average envelope decomposition
NASA Astrophysics Data System (ADS)
Shi, Kunju; Liu, Shulin; Jiang, Chao; Zhang, Hongli
2016-09-01
The vibration signal contains a wealth of sensitive information that reflects the running status of the equipment. Decomposing the signal and extracting the effective information properly is one of the most important steps for precise diagnosis. Traditional adaptive signal decomposition methods, such as EMD, suffer from problems including mode mixing and low decomposition accuracy. To address these problems, the EAED (extreme average envelope decomposition) method is presented, based on EMD. The EAED method has three advantages. First, it uses a midpoint envelope rather than the separate maximum and minimum envelopes used in EMD, so the average variability of the signal can be described accurately. Second, to reduce envelope errors during signal decomposition, a strategy of replacing two envelopes with one envelope is presented. Third, the similar-triangle principle is utilized to calculate the times of the extreme average points accurately, so the influence of the sampling frequency on the calculation results can be significantly reduced. Experimental results show that EAED can gradually separate single-frequency components from a complex signal. EAED not only isolates three kinds of typical bearing-fault vibration frequency components but also requires fewer decomposition layers: replacing the two envelopes with a single envelope ensures that the fault characteristic frequency can be isolated with fewer layers. The precision of signal decomposition is thereby improved.
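One sifting step of the midpoint-envelope idea can be sketched simply: locate the local extrema, take the midpoints of adjacent extreme values, and interpolate a single "average envelope" in place of EMD's separate upper and lower envelopes. The numpy sketch below (function name ours; the paper additionally uses the similar-triangle refinement of the midpoint times, which is omitted here) illustrates the step.

```python
import numpy as np

def midpoint_envelope(x):
    """One EAED-style step: find local extrema, form midpoints of
    adjacent extreme values, and interpolate them into a single
    average envelope (instead of EMD's upper/lower envelope pair)."""
    x = np.asarray(x, float)
    d = np.diff(x)
    # indices where the slope changes sign, i.e. local extrema
    ext = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0] + 1
    if len(ext) < 2:
        return np.full_like(x, np.mean(x))
    mid_t = (ext[:-1] + ext[1:]) / 2.0            # midpoint times
    mid_v = (x[ext[:-1]] + x[ext[1:]]) / 2.0      # midpoint values
    return np.interp(np.arange(len(x)), mid_t, mid_v)
```

Subtracting this envelope from the signal and iterating yields the oscillatory components, in the same spirit as EMD sifting but with half the envelope computations per step.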
Role of spatial averaging in multicellular gradient sensing
NASA Astrophysics Data System (ADS)
Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew
2016-06-01
Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.
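The covariance argument can be made explicit with the standard identity for the difference of two measured concentrations c_1 and c_2 at opposite ends of the detector (a textbook identity, not a formula quoted from the paper):

```latex
\mathrm{Var}(c_1 - c_2) \;=\; \mathrm{Var}(c_1) + \mathrm{Var}(c_2) - 2\,\mathrm{Cov}(c_1, c_2).
```

Transverse averaging shrinks the two variance terms, but it also shrinks the covariance term; when the latter falls faster, the variance of the difference, and hence the gradient-sensing error, grows.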
Average methods and their applications in differential geometry I
NASA Astrophysics Data System (ADS)
Vincze, Cs.
2015-06-01
In Minkowski geometry the metric features are based on a compact convex body containing the origin in its interior. This body works as a unit ball, and its boundary is formed by the unit vectors. Using one-homogeneous extension we obtain a so-called Minkowski functional to measure the length of vectors; half of its square is called the energy function. Under some regularity conditions we can introduce an averaged Euclidean inner product by integrating the Hessian matrix of the energy function over the Minkowskian unit sphere. Changing the origin in the interior of the body, we obtain a collection of Minkowskian unit balls together with Minkowski functionals depending on the base points. This is a special kind of Finsler manifold, called a Funk space. Using the previous method we can associate a Riemannian metric as the collection of the averaged Euclidean inner products belonging to the different base points. We investigate this procedure in the case of Finsler manifolds in general. Central objects of the associated Riemannian structure will be expressed in terms of the canonical data of the Finsler space. Taking one more step forward, Randers spaces will be introduced by averaging the vertical derivatives of the Finslerian fundamental function. The construction will have a crucial role when we apply the general results to Funk spaces, together with some contributions to Brickell's conjecture on Finsler manifolds with vanishing curvature tensor Q.
H∞ control of switched delayed systems with average dwell time
NASA Astrophysics Data System (ADS)
Li, Zhicheng; Gao, Huijun; Agarwal, Ramesh; Kaynak, Okyay
2013-12-01
This paper considers the problems of stability analysis and H∞ controller design of time-delay switched systems with average dwell time. In order to obtain less conservative results than what is seen in the literature, a tighter bound for the state delay term is estimated. Based on the scaled small gain theorem and the model transformation method, an improved exponential stability criterion for time-delay switched systems with average dwell time is formulated in the form of convex matrix inequalities. The aim of the proposed approach is to reduce the minimal average dwell time of the systems, which is made possible by a new Lyapunov-Krasovskii functional combined with the scaled small gain theorem. It is shown that this approach is able to tolerate a smaller dwell time or a larger admissible delay bound for the given conditions than most of the approaches seen in the literature. Moreover, the exponential H∞ controller can be constructed by solving a set of conditions, which is developed on the basis of the exponential stability criterion. Simulation examples illustrate the effectiveness of the proposed method.
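For context, the standard definition of average dwell time (assumed here; the paper may state it with different symbols): a switching signal σ has average dwell time τ_a if the number of switches N_σ(t_0, t) on any interval [t_0, t] satisfies

```latex
N_{\sigma}(t_0, t) \;\le\; N_0 + \frac{t - t_0}{\tau_a}, \qquad t \ge t_0 \ge 0,
```

for some chatter bound N_0 ≥ 1. Reducing the minimal admissible τ_a therefore means the stability and H∞ guarantees survive faster switching.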
High Average Power, High Energy Short Pulse Fiber Laser System
Messerly, M J
2007-11-13
Recently, continuous wave fiber laser systems with output powers in excess of 500 W with good beam quality have been demonstrated [1]. High energy, ultrafast, chirped pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of efficient, compact, robust, turnkey systems. Applications such as cutting, drilling and materials processing, front-end systems for high energy pulsed lasers (such as petawatts), and laser-based sources of high-spatial-coherence, high-flux x-rays all require high energy short pulses, and two of these three applications also require high average power. The challenge in creating a high energy chirped pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high energy, high average power fiber laser system. This work included exploring designs of large-mode-area optical fiber amplifiers for high energy systems as well as understanding the issues associated with chirped pulse amplification in optical fiber amplifier systems.
Noise reduction in elastograms using temporal stretching with multicompression averaging.
Varghese, T; Ophir, J; Céspedes, I
1996-01-01
Elastography uses estimates of the time delay (obtained by cross-correlation) to compute strain estimates in tissue due to quasistatic compression. Because the time delay estimates do not generally occur at the sampling intervals, the location of the cross-correlation peak does not give an accurate estimate of the time delay. Sampling errors in the time-delay estimate are reduced using signal interpolation techniques to obtain subsample time-delay estimates. Distortions of the echo signals due to tissue compression introduce correlation artifacts in the elastogram. These artifacts are reduced by a combination of small compressions and temporal stretching of the postcompression signal. Random noise effects in the resulting elastograms are reduced by averaging several elastograms, obtained from successive small compressions (assuming that the errors are uncorrelated). Multicompression averaging with temporal stretching is shown to increase the signal-to-noise ratio in the elastogram by an order of magnitude, without sacrificing sensitivity, resolution or dynamic range. The strain filter concept is extended in this article to theoretically characterize the performance of multicompression averaging with temporal stretching.
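The averaging step relies on the standard result that averaging N estimates with uncorrelated zero-mean noise reduces the noise standard deviation by roughly √N. A minimal numeric sketch with synthetic strain data (a toy flat phantom and assumed noise level, not the authors' ultrasound processing chain):

```python
import numpy as np

rng = np.random.default_rng(42)
true_strain = 0.01 * np.ones(500)     # flat 1% strain profile (toy phantom)
sigma = 0.005                         # per-elastogram noise std (assumed)
N = 25                                # number of small compressions averaged

single = true_strain + sigma * rng.standard_normal(500)
stack = true_strain + sigma * rng.standard_normal((N, 500))
averaged = stack.mean(axis=0)         # multicompression average

# With uncorrelated errors the noise std drops by about sqrt(N) = 5
print(single.std(), averaged.std())
```

An order-of-magnitude SNR gain, as reported, reflects this averaging combined with the variance reduction from temporal stretching itself.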
How to Address Measurement Noise in Bayesian Model Averaging
NASA Astrophysics Data System (ADS)
Schöniger, A.; Wöhling, T.; Nowak, W.
2014-12-01
When confronted with the challenge of selecting one out of several competing conceptual models for a specific modeling task, Bayesian model averaging is a rigorous choice. It ranks the plausibility of models based on Bayes' theorem, which yields an optimal trade-off between performance and complexity. With the resulting posterior model probabilities, the individual model predictions are combined into a robust weighted average, and the overall predictive uncertainty (including conceptual uncertainty) can be quantified. This rigorous framework does not, however, yet explicitly consider measurement noise in the calibration data set. This is a major drawback, because model weights might be unstable due to the uncertainty in noisy data, which may compromise the reliability of the model ranking. We present a new extension to the Bayesian model averaging framework that explicitly accounts for measurement noise as a source of uncertainty for the weights. This enables modelers to assess the reliability of model ranking for a specific application and a given calibration data set. Also, the impact of measurement noise on the overall prediction uncertainty can be determined. Technically, our extension is built within a Monte Carlo framework: we repeatedly perturb the observed data with random realizations of measurement error and determine the robustness of the resulting model weights against measurement noise. We quantify the variability of the posterior model weights as a weighting variance, and we add this new variance term to the overall prediction uncertainty analysis within the Bayesian model averaging framework to make uncertainty quantification more realistic and "complete". We illustrate the importance of our suggested extension with an application to soil-plant model selection, based on studies by Wöhling et al. (2013, 2014). Results confirm that noise in leaf area index or evaporation rate observations produces a significant amount of weighting variance.
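The Monte Carlo extension can be sketched as follows. The two toy models, Gaussian likelihood, and noise level are illustrative assumptions, not the soil-plant models or likelihood of the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two competing toy "conceptual models" of the same observable
x = np.linspace(0.0, 1.0, 20)
models = [lambda x: 1.0 + 0.5 * x,        # model A: linear response
          lambda x: 1.0 + 0.5 * x ** 2]   # model B: quadratic response
sigma = 0.05                              # assumed measurement noise std
obs = models[0](x) + sigma * rng.standard_normal(x.size)

def bma_weights(data):
    """Posterior model probabilities (Gaussian likelihood, equal priors)."""
    ll = np.array([-0.5 * np.sum((data - m(x)) ** 2) / sigma ** 2
                   for m in models])
    w = np.exp(ll - ll.max())             # stabilize before normalizing
    return w / w.sum()

# Repeatedly perturb the observations with fresh noise realizations
# and record how the posterior weights move
weights = np.array([bma_weights(obs + sigma * rng.standard_normal(x.size))
                    for _ in range(500)])
weighting_variance = weights.var(axis=0)  # robustness of the model ranking
```

A large `weighting_variance` signals that the model ranking is not trustworthy for this data set at this noise level.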
NASA Astrophysics Data System (ADS)
Farmer, W. M.
1991-09-01
A common method of estimating the attenuation capabilities of military smokes/obscurants is to use a band-averaged mass-extinction coefficient with concentration-length values in the Beer-Bouguer transmission law. This approach ignores the effects of source spectra, sensor response, and normal atmospheric attenuation, which can significantly affect broadband transmittance. The differences that can occur in predicting relative transmittance as a function of concentration length by using band-averaged mass-extinction coefficients, as opposed to more properly computing the band-averaged transmittance, are discussed in this paper. Two examples are provided to illustrate the differences in results. The first considers 3- to 5-micron and 8- to 14-micron band transmission through natural fogs. The second considers 3- to 5-micron and 8- to 12-micron transmission through phosphorus-derived smoke (a common military obscurant) produced at 17 percent and at 90 percent relative humidity. Major differences are found in the concentration lengths predicted by the two methods when the transmittance relative to an unobscured atmosphere falls below about 20 percent. These results can affect conclusions concerning the detection of targets in smoke screens, the smoke concentration lengths required to obscure a target, and radiative transport through polluted atmospheres.
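The distinction can be reproduced numerically. With a toy band whose extinction coefficient α(λ) varies across ten spectral bins (illustrative numbers, not the fog or phosphorus-smoke data of the paper), the properly band-averaged transmittance ⟨exp(-αCL)⟩ departs from the Beer-Bouguer estimate exp(-ᾱCL) as the concentration length CL grows:

```python
import numpy as np

# Toy band: extinction coefficient varies across 10 spectral bins
alpha = np.linspace(0.2, 1.0, 10)                # mass-extinction, m^2/g (assumed)
weights = np.ones_like(alpha) / alpha.size       # flat source/sensor response
alpha_bar = np.sum(weights * alpha)              # band-averaged coefficient

for CL in (0.5, 2.0, 8.0):                       # concentration-length, g/m^2
    T_proper = np.sum(weights * np.exp(-alpha * CL))  # band-averaged transmittance
    T_naive = np.exp(-alpha_bar * CL)                 # Beer-Bouguer with mean alpha
    print(CL, T_proper, T_naive)
```

By Jensen's inequality T_proper ≥ T_naive always, and the gap widens sharply at low transmittance, consistent with the disagreement below about 20 percent transmittance noted above.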
Code of Federal Regulations, 2011 CFR
2011-07-01
... averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.2975 to calculate emissions at 7 percent oxygen. (b) Use Equation 2 in § 60.2975 to calculate the 12-hour rolling averages...
Average waiting time profiles of uniform DQDB model
Rao, N.S.V.; Maly, K.; Olariu, S.; Dharanikota, S.; Zhang, L.; Game, D.
1993-09-07
The Distributed Queue Dual Bus (DQDB) system consists of a linear arrangement of N nodes that communicate with each other using two contra-flowing buses; the nodes use an extremely simple protocol to send messages on these buses. This simple, but elegant, system has been found to be very challenging to analyze. We consider a simple and uniform abstraction of this model to highlight the fairness issues in terms of average waiting time. We introduce a new approximation method to analyze the performance of the DQDB system in terms of the average waiting time of a node expressed as a function of its position. Our approach abstracts the intimate relationship between the load of the system and its fairness characteristics, and explains all basic behavior profiles of DQDB observed in previous simulations. For the uniform DQDB with equal distance between adjacent nodes, we show that the system operates under three basic behavior profiles, and a finite number of their combinations, that depend on the load of the network. Consequently, the system is not fair at any load in terms of the average waiting times. In the vicinity of a critical load of 1 - 4/N, the uniform network runs into a state akin to chaos, where its behavior fluctuates from one extreme to the other with a load variation of 2/N. Our analysis is supported by simulation results. We also show that the main theme of the analysis carries over to the general (non-uniform) DQDB: by suitably choosing the inter-node distances, the DQDB can be made fair around some loads, but such a system becomes unfair as the load changes.
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
NASA Astrophysics Data System (ADS)
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
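The within/between variance segregation invoked here follows the law of total variance; for a predicted quantity Δ and model index M, each level of the BMA tree splits (standard decomposition, consistent with Tsai and Elshall, 2013):

```latex
\mathrm{Var}(\Delta) \;=\; \underbrace{\mathrm{E}_{M}\!\left[\,\mathrm{Var}(\Delta \mid M)\,\right]}_{\text{within-model}} \;+\; \underbrace{\mathrm{Var}_{M}\!\left[\,\mathrm{E}(\Delta \mid M)\,\right]}_{\text{between-model}},
```

with the expectation over M taken under the posterior model probabilities; applying the split recursively down the tree attributes a between-model variance term to each uncertain component.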
When did the average cosmic ray flux increase?
NASA Technical Reports Server (NTRS)
Nishiizumi, K.; Murty, S. V. S.; Marti, K.; Arnold, J. R.
1985-01-01
A new 129I-129Xe method to obtain cosmic ray exposure ages and to study the average cosmic ray flux on a 10^7 to 10^8 year time-scale was developed. The method is based on secondary neutron reactions on Te in troilite and the subsequent decay of the reaction product 129I to stable 129Xe. The first measurements of 129I and 129Xe in aliquot samples of a Cape York troilite sample are reported.
Constructing the Average Natural History of HIV-1 Infection
NASA Astrophysics Data System (ADS)
Diambra, L.; Capurro, A.; Malta, C. P.
2007-05-01
Many aspects of the natural course of HIV-1 infection remain unclear, despite important efforts toward understanding its long-term dynamics. Using a scaling approach that places the progression markers (viral load, CD4+, CD8+) of many individuals on a single average natural course of disease progression, we introduce the concepts of inter-individual scaling and time scaling. Our quantitative assessment of the natural course of HIV-1 infection indicates that the dynamics of disease evolution in individuals who developed AIDS (opportunistic infections) differ from those in individuals who did not. This means that the rate of progression is not the relevant factor for the evolution of the infection.
Low Average Sidelobe Slot Array Antennas for Radiometer Applications
NASA Technical Reports Server (NTRS)
Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.
2012-01-01
In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherent narrow-band performance, the problem is exacerbated by modeling errors and manufacturing tolerances, so a design methodology was needed to solve the problem. An iterative design procedure was developed, starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full-wave method-of-moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced into the aperture distribution by modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, the array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations; once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. A separable Taylor distribution with nbar=4 and a 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was chosen to provide the required beamwidth and close to a null in the E-plane.
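The Monte Carlo tolerance step can be illustrated with a one-dimensional array factor. The taper, error levels, and sidelobe region below are illustrative assumptions (a Hamming window stands in for the Taylor nbar=4 distribution, and the error magnitudes are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32                                    # array elements, half-wavelength spacing
n = np.arange(N)
taper = np.hamming(N)                     # stand-in low-sidelobe aperture distribution
u = np.linspace(-1.0, 1.0, 1024)          # u = sin(theta)
A = np.exp(1j * np.pi * np.outer(n, u))   # array-factor basis for each element

def avg_sidelobe_db(w):
    """Average power in the sidelobe region, in dB relative to the beam peak."""
    p = np.abs(w @ A) ** 2
    p /= p.max()
    side = p[np.abs(u) > 0.2]             # crude sidelobe region past the main beam
    return 10.0 * np.log10(side.mean())

nominal = avg_sidelobe_db(taper)

# Monte Carlo: random amplitude (5% rms) and phase (5 deg rms) errors per element
trials = []
for _ in range(200):
    amp = taper * (1.0 + 0.05 * rng.standard_normal(N))
    ph = np.exp(1j * np.deg2rad(5.0) * rng.standard_normal(N))
    trials.append(avg_sidelobe_db(amp * ph))

margin = np.mean(trials) - nominal        # average sidelobe degradation from errors
```

Comparing `np.mean(trials)` against the specification, rather than `nominal` alone, is the design-margin check described above.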
Optical Parametric Amplification for High Peak and Average Power
Jovanovic, Igor
2001-11-26
Optical parametric amplification is an established broadband amplification technology based on a second-order nonlinear process of difference-frequency generation (DFG). When used in chirped pulse amplification (CPA), the technology has been termed optical parametric chirped pulse amplification (OPCPA). OPCPA holds a potential for producing unprecedented levels of peak and average power in optical pulses through its scalable ultrashort pulse amplification capability and the absence of quantum defect, respectively. The theory of three-wave parametric interactions is presented, followed by a description of the numerical model developed for nanosecond pulses. Spectral, temperature and angular characteristics of OPCPA are calculated, with an estimate of pulse contrast. An OPCPA system centered at 1054 nm, based on a commercial tabletop Q-switched pump laser, was developed as the front end for a large Nd-glass petawatt-class short-pulse laser. The system does not utilize electro-optic modulators or multi-pass amplification. The obtained overall 6% efficiency is the highest to date in OPCPA that uses a tabletop commercial pump laser. The first compression of pulses amplified in highly nondegenerate OPCPA is reported, with the obtained pulse width of 60 fs. This represents the shortest pulse to date produced in OPCPA. Optical parametric amplification in β-barium borate was combined with laser amplification in Ti:sapphire to produce the first hybrid CPA system, with an overall conversion efficiency of 15%. Hybrid CPA combines the benefits of high gain in OPCPA with high conversion efficiency in Ti:sapphire to allow significant simplification of future tabletop multi-terawatt sources. Preliminary modeling of average power limits in OPCPA and pump laser design are presented, and an approach based on cascaded DFG is proposed to increase the average power beyond the single-crystal limit. Angular and beam quality effects in optical parametric amplification are modeled
High average power, high current pulsed accelerator technology
Neau, E.L.
1995-05-01
High current pulsed accelerator technology was developed from the late 1960s through the late 1980s to satisfy the needs of various military applications such as effects simulators, particle beam devices, free electron lasers, and drivers for Inertial Confinement Fusion devices. The emphasis in these devices is to achieve very high peak power levels, with pulse lengths on the order of a few tens of nanoseconds, peak currents of up to tens of MA, and accelerating potentials of up to tens of MV. New high average power systems, incorporating thermal management techniques, are enabling the potential use of high peak power technology in a number of diverse industrial application areas such as materials processing, food processing, stack gas cleanup, and the destruction of organic contaminants. These systems employ semiconductor and saturable magnetic switches to achieve short pulse durations that can then be added to efficiently give MV accelerating potentials while delivering average power levels of a few hundred kilowatts to perhaps many megawatts. The Repetitive High Energy Pulsed Power project is developing short-pulse, high current accelerator technology capable of generating beams with kilojoules of energy per pulse delivered to areas of 1000 cm² or more using ions, electrons, or x-rays. Modular technology is employed to meet the needs of a variety of applications requiring from hundreds of kV to MVs and from tens to hundreds of kA. Modest repetition rates, up to a few hundred pulses per second (PPS), allow these machines to deliver average currents on the order of a few hundred mA. The design and operation of the second-generation 300 kW RHEPP-II machine, now being brought on-line to operate at 2.5 MV, 25 kA, and 100 PPS, will be described in detail as one example of this new high average power, high current pulsed accelerator technology.
Recent advances in phase shifted time averaging and stroboscopic interferometry
NASA Astrophysics Data System (ADS)
Styk, Adam; Józwik, Michał
2016-08-01
Classical time averaging and stroboscopic interferometry are widely used for investigating the dynamic behavior of MEMS/MOEMS. Unfortunately, both methods require extensive measurement and data-processing strategies in order to evaluate the maximum vibration amplitude of the object at a given load. In this paper, modified data-processing strategies for both techniques are introduced. These modifications allow fast and reliable calculation of the sought value without additional complication of the measurement systems. Both approaches are discussed and experimentally verified.
Measurement of the average φ multiplicity in B meson decay
NASA Astrophysics Data System (ADS)
Aubert, B.; Barate, R.; Boutigny, D.; Gaillard, J.-M.; Hicheur, A.; Karyotakis, Y.; Lees, J. P.; Robbe, P.; Tisserand, V.; Zghiche, A.; Palano, A.; Pompili, A.; Chen, J. C.; Qi, N. D.; Rong, G.; Wang, P.; Zhu, Y. S.; Eigen, G.; Ofte, I.; Stugu, B.; Abrams, G. S.; Borgland, A. W.; Breon, A. B.; Brown, D. N.; Button-Shafer, J.; Cahn, R. N.; Charles, E.; Day, C. T.; Gill, M. S.; Gritsan, A. V.; Groysman, Y.; Jacobsen, R. G.; Kadel, R. W.; Kadyk, J.; Kerth, L. T.; Kolomensky, Yu. G.; Kukartsev, G.; Leclerc, C.; Levi, M. E.; Lynch, G.; Mir, L. M.; Oddone, P. J.; Orimoto, T. J.; Pripstein, M.; Roe, N. A.; Romosan, A.; Ronan, M. T.; Shelkov, V. G.; Telnov, A. V.; Wenzel, W. A.; Ford, K.; Harrison, T. J.; Hawkes, C. M.; Knowles, D. J.; Morgan, S. E.; Penny, R. C.; Watson, A. T.; Watson, N. K.; Goetzen, K.; Held, T.; Koch, H.; Lewandowski, B.; Pelizaeus, M.; Peters, K.; Schmuecker, H.; Steinke, M.; Boyd, J. T.; Chevalier, N.; Cottingham, W. N.; Kelly, M. P.; Latham, T. E.; Mackay, C.; Wilson, F. F.; Abe, K.; Cuhadar-Donszelmann, T.; Hearty, C.; Mattison, T. S.; McKenna, J. A.; Thiessen, D.; Kyberd, P.; McKemey, A. K.; Teodorescu, L.; Blinov, V. E.; Bukin, A. D.; Golubev, V. B.; Ivanchenko, V. N.; Kravchenko, E. A.; Onuchin, A. P.; Serednyakov, S. I.; Skovpen, Yu. I.; Solodov, E. P.; Yushkov, A. N.; Best, D.; Bruinsma, M.; Chao, M.; Kirkby, D.; Lankford, A. J.; Mandelkern, M.; Mommsen, R. K.; Roethel, W.; Stoker, D. P.; Buchanan, C.; Hartfiel, B. L.; Gary, J. W.; Layter, J.; Shen, B. C.; Wang, K.; del Re, D.; Hadavand, H. K.; Hill, E. J.; Macfarlane, D. B.; Paar, H. P.; Rahatlou, Sh.; Sharma, V.; Berryhill, J. W.; Campagnari, C.; Dahmes, B.; Kuznetsova, N.; Levy, S. L.; Long, O.; Lu, A.; Mazur, M. A.; Richman, J. D.; Rozen, Y.; Verkerke, W.; Beck, T. W.; Beringer, J.; Eisner, A. M.; Heusch, C. A.; Lockman, W. S.; Schalk, T.; Schmitz, R. E.; Schumm, B. A.; Seiden, A.; Turri, M.; Walkowiak, W.; Williams, D. C.; Wilson, M. G.; Albert, J.; Chen, E.; Dubois-Felsmann, G. 
P.; Dvoretskii, A.; Erwin, R. J.; Hitlin, D. G.; Narsky, I.; Piatenko, T.; Porter, F. C.; Ryd, A.; Samuel, A.; Yang, S.; Jayatilleke, S.; Mancinelli, G.; Meadows, B. T.; Sokoloff, M. D.; Abe, T.; Blanc, F.; Bloom, P.; Chen, S.; Clark, P. J.; Ford, W. T.; Nauenberg, U.; Olivas, A.; Rankin, P.; Roy, J.; Smith, J. G.; van Hoek, W. C.; Zhang, L.; Harton, J. L.; Hu, T.; Soffer, A.; Toki, W. H.; Wilson, R. J.; Zhang, J.; Altenburg, D.; Brandt, T.; Brose, J.; Colberg, T.; Dickopp, M.; Dubitzky, R. S.; Hauke, A.; Lacker, H. M.; Maly, E.; Müller-Pfefferkorn, R.; Nogowski, R.; Otto, S.; Schubert, J.; Schubert, K. R.; Schwierz, R.; Spaan, B.; Wilden, L.; Bernard, D.; Bonneaud, G. R.; Brochard, F.; Cohen-Tanugi, J.; Grenier, P.; Thiebaux, Ch.; Vasileiadis, G.; Verderi, M.; Khan, A.; Lavin, D.; Muheim, F.; Playfer, S.; Swain, J. E.; Andreotti, M.; Azzolini, V.; Bettoni, D.; Bozzi, C.; Calabrese, R.; Cibinetto, G.; Luppi, E.; Negrini, M.; Piemontese, L.; Sarti, A.; Treadwell, E.; Anulli, F.; Baldini-Ferroli, R.; Biasini, M.; Calcaterra, A.; de Sangro, R.; Falciai, D.; Finocchiaro, G.; Patteri, P.; Peruzzi, I. M.; Piccolo, M.; Pioppi, M.; Zallo, A.; Buzzo, A.; Capra, R.; Contri, R.; Crosetti, G.; Lo Vetere, M.; Macri, M.; Monge, M. R.; Passaggio, S.; Patrignani, C.; Robutti, E.; Santroni, A.; Tosi, S.; Bailey, S.; Morii, M.; Won, E.; Bhimji, W.; Bowerman, D. A.; Dauncey, P. D.; Egede, U.; Eschrich, I.; Gaillard, J. R.; Morton, G. W.; Nash, J. A.; Sanders, P.; Taylor, G. P.; Grenier, G. J.; Lee, S.-J.; Mallik, U.; Cochran, J.; Crawley, H. B.; Lamsa, J.; Meyer, W. T.; Prell, S.; Rosenberg, E. I.; Yi, J.; Davier, M.; Grosdidier, G.; Höcker, A.; Laplace, S.; Le Diberder, F.; Lepeltier, V.; Lutz, A. M.; Petersen, T. C.; Plaszczynski, S.; Schune, M. H.; Tantot, L.; Wormser, G.; Brigljević, V.; Cheng, C. H.; Lange, D. J.; Simani, M. C.; Wright, D. M.; Bevan, A. J.; Coleman, J. P.; Fry, J. R.; Gabathuler, E.; Gamet, R.; Kay, M.; Parry, R. J.; Payne, D. J.; Sloane, R. 
J.; Touramanis, C.; Back, J. J.; Cormack, C. M.; Harrison, P. F.; Shorthouse, H. W.; Vidal, P. B.; Brown, C. L.; Cowan, G.; Flack, R. L.; Flaecher, H. U.; George, S.; Green, M. G.; Kurup, A.; Marker, C. E.; McMahon, T. R.; Ricciardi, S.; Salvatore, F.; Vaitsas, G.; Winter, M. A.; Brown, D.; Davis, C. L.; Allison, J.; Barlow, N. R.; Barlow, R. J.; Hart, P. A.; Hodgkinson, M. C.; Jackson, F.; Lafferty, G. D.; Lyon, A. J.; Weatherall, J. H.; Williams, J. C.; Farbin, A.; Jawahery, A.; Kovalskyi, D.; Lae, C. K.; Lillard, V.; Roberts, D. A.; Blaylock, G.; Dallapiccola, C.; Flood, K. T.; Hertzbach, S. S.; Kofler, R.; Koptchev, V. B.; Moore, T. B.; Saremi, S.; Staengle, H.; Willocq, S.; Cowan, R.; Sciolla, G.; Taylor, F.; Yamamoto, R. K.; Mangeol, D. J.; Patel, P. M.; Robertson, S. H.; Lazzaro, A.; Palombo, F.; Bauer, J. M.; Cremaldi, L.; Eschenburg, V.; Godang, R.; Kroeger, R.; Reidy, J.; Sanders, D. A.; Summers, D. J.; Zhao, H. W.; Brunet, S.; Cote-Ahern, D.; Taras, P.; Nicholson, H.; Cartaro, C.; Cavallo, N.; de Nardo, G.; Fabozzi, F.; Gatto, C.; Lista, L.; Paolucci, P.; Piccolo, D.; Sciacca, C.; Baak, M. A.; Raven, G.; Losecco, J. M.; Gabriel, T. A.; Brau, B.; Gan, K. K.; Honscheid, K.; Hufnagel, D.; Kagan, H.; Kass, R.; Pulliam, T.; Wong, Q. K.; Brau, J.; Frey, R.; Potter, C. T.; Sinev, N. B.; Strom, D.; Torrence, E.; Colecchia, F.; Dorigo, A.; Galeazzi, F.; Margoni, M.; Morandin, M.; Posocco, M.; Rotondo, M.; Simonetto, F.; Stroili, R.; Tiozzo, G.; Voci, C.; Benayoun, M.; Briand, H.; Chauveau, J.; David, P.; de La Vaissière, Ch.; del Buono, L.; Hamon, O.; John, M. J.; Leruste, Ph.; Ocariz, J.; Pivk, M.; Roos, L.; Stark, J.; T'jampens, S.; Therin, G.; Manfredi, P. F.; Re, V.; Behera, P. K.; Gladney, L.; Guo, Q. H.; Panetta, J.; Angelini, C.; Batignani, G.; Bettarini, S.; Bondioli, M.; Bucci, F.; Calderini, G.; Carpinelli, M.; del Gamba, V.; Forti, F.; Giorgi, M. 
A.; Lusiani, A.; Marchiori, G.; Martinez-Vidal, F.; Morganti, M.; Neri, N.; Paoloni, E.; Rama, M.; Rizzo, G.; Sandrelli, F.; Walsh, J.; Haire, M.; Judd, D.; Paick, K.; Wagoner, D. E.; Danielson, N.; Elmer, P.; Lu, C.; Miftakov, V.; Olsen, J.; Smith, A. J.; Tanaka, H. A.; Varnes, E. W.; Bellini, F.; Cavoto, G.; Faccini, R.; Ferrarotto, F.; Ferroni, F.; Gaspero, M.; Mazzoni, M. A.; Morganti, S.; Pierini, M.; Piredda, G.; Safai Tehrani, F.; Voena, C.; Christ, S.; Wagner, G.; Waldi, R.; Adye, T.; de Groot, N.; Franek, B.; Geddes, N. I.; Gopal, G. P.; Olaiya, E. O.; Xella, S. M.; Aleksan, R.; Emery, S.; Gaidot, A.; Ganzhur, S. F.; Giraud, P.-F.; Hamel de Monchenault, G.; Kozanecki, W.; Langer, M.; Legendre, M.; London, G. W.; Mayer, B.; Schott, G.; Vasseur, G.; Yeche, Ch.; Zito, M.; Purohit, M. V.; Weidemann, A. W.; Yumiceva, F. X.; Aston, D.; Bartoldus, R.; Berger, N.; Boyarski, A. M.; Buchmueller, O. L.; Convery, M. R.; Coupal, D. P.; Dong, D.; Dorfan, J.; Dujmic, D.; Dunwoodie, W.; Field, R. C.; Glanzman, T.; Gowdy, S. J.; Grauges-Pous, E.; Hadig, T.; Halyo, V.; Hryn'ova, T.; Innes, W. R.; Jessop, C. P.; Kelsey, M. H.; Kim, P.; Kocian, M. L.; Langenegger, U.; Leith, D. W.; Libby, J.; Luitz, S.; Luth, V.; Lynch, H. L.; Marsiske, H.; Messner, R.; Muller, D. R.; O'Grady, C. P.; Ozcan, V. E.; Perazzo, A.; Perl, M.; Petrak, S.; Ratcliff, B. N.; Roodman, A.; Salnikov, A. A.; Schindler, R. H.; Schwiening, J.; Simi, G.; Snyder, A.; Soha, A.; Stelzer, J.; Su, D.; Sullivan, M. K.; Va'Vra, J.; Wagner, S. R.; Weaver, M.; Weinstein, A. J.; Wisniewski, W. J.; Wright, D. H.; Young, C. C.; Burchat, P. R.; Edwards, A. J.; Meyer, T. I.; Petersen, B. A.; Roat, C.; Ahmed, M.; Ahmed, S.; Alam, M. S.; Ernst, J. A.; Saeed, M. A.; Saleem, M.; Wappler, F. R.; Bugg, W.; Krishnamurthy, M.; Spanier, S. M.; Eckmann, R.; Kim, H.; Ritchie, J. L.; Schwitters, R. F.; Izen, J. M.; Kitayama, I.; Lou, X. 
C.; Ye, S.; Bianchi, F.; Bona, M.; Gallo, F.; Gamba, D.; Borean, C.; Bosisio, L.; della Ricca, G.; Dittongo, S.; Grancagnolo, S.; Lanceri, L.; Poropat, P.; Vitale, L.; Vuagnin, G.; Panvini, R. S.; Banerjee, Sw.; Brown, C. M.; Fortin, D.; Jackson, P. D.; Kowalewski, R.; Roney, J. M.; Band, H. R.; Dasu, S.; Datta, M.; Eichenbaum, A. M.; Johnson, J. R.; Kutter, P. E.; Li, H.; Liu, R.; di Lodovico, F.; Mihalyi, A.; Mohapatra, A. K.; Pan, Y.; Prepost, R.; Sekula, S. J.; von Wimmersperg-Toeller, J. H.; Wu, J.; Wu, S. L.; Yu, Z.; Neal, H.
2004-03-01
We present a measurement of the average multiplicity of φ mesons in B0, B̄0, and B± meson decays. Using 17.6 fb⁻¹ of data taken at the Υ(4S) resonance by the BABAR detector at the PEP-II e+e- storage ring at the Stanford Linear Accelerator Center, we reconstruct φ mesons in the K+K- decay mode and measure B(B→φX)=(3.41±0.06±0.12)%. This is significantly more precise than any previous measurement.