
1

Turnaround Time Modeling for Conceptual Rocket Engines

NASA Technical Reports Server (NTRS)

Recent years have brought about a paradigm shift within NASA and the Space Launch Community regarding the performance of conceptual design. Reliability, maintainability, supportability, and operability are no longer effects of design; they have moved to the forefront and are affecting design. A primary focus of this shift has been a planned decrease in vehicle turnaround time. Potential means of achieving this decrease include addressing the removal, refurbishment, and replacement of the engines after each flight. Regardless, it is important to understand the operational effects of an engine on turnaround time, ground support personnel, and equipment. One tool for visualizing this relationship is a Discrete Event Simulation (DES). A DES model can be used to run a series of trade studies to determine whether the engine is meeting its requirements and, if not, what can be altered to bring it into compliance. Using DES, it is possible to examine how labor requirements, parallel versus serial maintenance, and maintenance scheduling affect the overall turnaround time. A detailed DES model of the Space Shuttle Main Engines (SSME) has been developed. Trades may be performed using the SSME Processing Model to see where maintenance bottlenecks occur and what the benefits (if any) are of increasing the number of personnel or the number and location of facilities, in addition to the trades previously mentioned, all with the goal of optimizing the operational turnaround time and minimizing operational cost. The SSME Processing Model was developed in such a way that it can easily be used as a foundation for developing DES models of other operational or developmental reusable engines. Performing a DES on a developmental engine during the conceptual phase makes it easier to affect the design and make changes that decrease turnaround time and costs.
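The serial-versus-parallel maintenance trade described above can be illustrated with a toy discrete-event scheduler. This is a minimal sketch, not the SSME Processing Model: the task names, durations, and crew counts are hypothetical, and the tasks are assumed independent (a real model would also encode precedence constraints).

```python
import heapq

def simulate(tasks, crews):
    """Tiny discrete-event scheduler: assign each maintenance task
    (name, duration in hours) to the earliest-free crew and return the
    overall turnaround time. Tasks are assumed independent -- a real
    model would also encode precedence constraints."""
    crew_free = [0.0] * crews             # next-free time for each crew
    heapq.heapify(crew_free)
    turnaround = 0.0
    for _name, duration in tasks:
        start = heapq.heappop(crew_free)  # earliest available crew
        end = start + duration
        heapq.heappush(crew_free, end)
        turnaround = max(turnaround, end)
    return turnaround

# Hypothetical engine-processing tasks, not the actual SSME breakdown.
tasks = [("inspect", 8), ("remove", 12), ("refurbish", 40), ("replace", 12)]
serial = simulate(tasks, crews=1)    # serial maintenance: one crew
parallel = simulate(tasks, crews=2)  # parallel maintenance: two crews
```

Running such trades with different crew counts or task mixes is exactly the kind of what-if study the abstract describes, just at far smaller scale.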

Nix, Michael; Staton, Eric J.

2004-01-01

2

Real-time aircraft turnaround operations manager

Aircraft turnaround operations are the activities conducted to prepare an inbound aircraft at an airport for a following outbound flight that is scheduled for the same aircraft. The activities of aircraft turnaround operations include both the inbound and outbound exchanges of passengers and crew, catering services, and cargo and baggage handling. Hence the importance of following up on all activities performed…

M. Abd Allah Makhloof; M. Elsayed Waheed; Usama A. El-Raouf Badawi

2012-01-01

3

Turnaround Time and Market Capacity in Contract Cheating

ERIC Educational Resources Information Center

Contract cheating is the process whereby students auction off the opportunity for others to complete assignments for them. It is an apparently widespread yet under-researched problem. One suggested strategy to prevent contract cheating is to shorten the turnaround time between the release of assignment details and the submission date, thus making…

Wallace, Melisa J.; Newton, Philip M.

2014-01-01

4

Do picture archiving and communication systems improve Report turnaround times?

Radiology departments are beginning to embrace new technologies to decrease operating budgets and improve services. One of these technologies is the picture archiving and communication system (PACS). PACS, through immediate availability of images to the radiologist, promises to decrease turnaround times of reports to the clinician. The purpose of this study was to determine if this technology actually decreases the…

Amit Mehta; Keith Dreyer; Giles Boland; Mark Frank

2000-01-01

5

Factors that impact turnaround time of surgical pathology specimens in an academic institution.

Turnaround time of laboratory results is important for customer satisfaction. The College of American Pathologists' checklist requires an analytic turnaround time of 2 days or less for most routine cases and lets every hospital define what a routine specimen is. The objective of this study was to analyze which factors impact turnaround time of nonbiopsy surgical pathology specimens. We calculated the turnaround time from receipt to verification of results (adjusted for weekends and holidays) for all nonbiopsy surgical specimens during a 2-week period. Factors studied included tissue type, number of slides per case, decalcification, immunohistochemistry, consultations with other pathologists, and diagnosis. Univariate and multivariate analyses were performed. A total of 713 specimens were analyzed; 551 (77%) were verified within 2 days and 162 (23%) in 3 days or more. Lung, gastrointestinal, breast, and genitourinary specimens showed the highest percentage of cases being signed out in over 3 days. Diagnosis of malignancy (including staging of the neoplasia), consultation with other pathologists, having had a frozen section, and use of immunohistochemical stains were significantly associated with increased turnaround time in univariate analysis. Decalcification was not associated with increased turnaround time. In multivariate analysis, consultation with other pathologists, use of immunohistochemistry, diagnosis of malignancy, and the number of slides studied continued to be significantly associated with prolonged turnaround time. Our findings suggest that diagnosis of malignancy is central to significantly prolonging the turnaround time for surgical pathology specimens; thus, institutions that serve cancer centers will have longer turnaround times than those that do not. PMID:22406373
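The weekend-and-holiday adjustment mentioned above amounts to counting working days between receipt and verification. The study does not state its exact convention, so the sketch below assumes one plausible rule: count only weekdays that are not listed holidays.

```python
from datetime import date, timedelta

def turnaround_days(received, verified, holidays=()):
    """Working days elapsed from specimen receipt to result verification,
    skipping weekends and any supplied holidays. The study's exact
    adjustment rule is not given, so this is one plausible convention."""
    days = 0
    d = received
    while d < verified:
        d += timedelta(days=1)
        if d.weekday() < 5 and d not in holidays:  # Mon-Fri, not a holiday
            days += 1
    return days

# Received on a Friday, verified the following Monday: 1 working day.
example = turnaround_days(date(2012, 3, 2), date(2012, 3, 5))
```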

Patel, Samip; Smith, Jennifer B; Kurbatova, Ekaterina; Guarner, Jeannette

2012-09-01

6

24 CFR 901.10 - Indicator #1, vacancy rate and unit turnaround time.

Code of Federal Regulations, 2010 CFR

... 2010-04-01 false Indicator #1, vacancy rate and unit...DEVELOPMENT PUBLIC HOUSING MANAGEMENT ASSESSMENT PROGRAM § 901.10 Indicator #1, vacancy rate and unit turnaround time. This indicator examines the vacancy...

2010-04-01

7

Reducing the turnaround time for the Shuttle Orbiter main propulsion system at Kennedy Space Center

NASA Technical Reports Server (NTRS)

This paper presents steps currently being implemented to reduce the processing time of the Orbiter Main Propulsion System. Given the projected increase in launch rates of the Space Transportation System (STS) in the operational era, average turnaround time from arrival at Kennedy Space Center to subsequent launch will have to be significantly reduced. In many STS flows to date, a pacing system has been the Main Propulsion System consisting of the three Space Shuttle Main Engines (SSME's) and the Main Propulsion Subsystem (MPS) connecting the SSME's to the Orbiter/ground and Orbiter/External Tank interfaces. This paper summarizes procedural, hardware, software, and requirements changes being incorporated at Kennedy Space Center (KSC) to reduce the processing time required for the MPS/SSME systems. Specific examples from each category are cited to illustrate the impact of the improvements on MPS/SSME processing.

Bilardo, V. J., Jr.

1983-01-01

8

Turnaround Prediction with Stochastic Process Times and Airport Specific Delay Pattern

An approach … process conflicts (e.g., fueling end time and boarding start time overlapping). Currently, these conflicts … be accepted in a future 4D high-precision flight trajectory environment. There are various process management…

9

Insertable system for fast turnaround time microwave experiments in a dilution refrigerator.

Microwave experiments in dilution refrigerators are a central tool in the field of superconducting quantum circuits and other research areas. Until now, this type of experiment has relied on attaching a device to the mixing chamber of a dilution refrigerator. The minimum turnaround time in this case is a few days as required by cooling down and warming up the entire refrigerator. We developed a new approach, in which a suitable sample holder is attached to a cold-insertable probe and brought in contact with transmission lines permanently mounted inside the cryostat. The total turnaround time is 8 h if the target temperature is 80 mK. The lowest attainable temperature is 30 mK. Our system can accommodate up to six transmission lines, with a measurement bandwidth tested from zero frequency to 12 GHz. This bandwidth is limited by low-pass components in the setup; we expect the intrinsic bandwidth to be at least 18 GHz. We present our setup, discuss the experimental procedure, and give examples of experiments enabled by this system. This new measurement method will have a major impact on systematic ultra-low temperature studies using microwave signals, including those requiring quantum coherence. PMID:23020391

Ong, Florian R; Orgiazzi, Jean-Luc; de Waard, Arlette; Frossati, Giorgio; Lupascu, Adrian

2012-09-01

10

Insertable system for fast turnaround time microwave experiments in a dilution refrigerator

NASA Astrophysics Data System (ADS)

Microwave experiments in dilution refrigerators are a central tool in the field of superconducting quantum circuits and other research areas. Until now, this type of experiment has relied on attaching a device to the mixing chamber of a dilution refrigerator. The minimum turnaround time in this case is a few days as required by cooling down and warming up the entire refrigerator. We developed a new approach, in which a suitable sample holder is attached to a cold-insertable probe and brought in contact with transmission lines permanently mounted inside the cryostat. The total turnaround time is 8 h if the target temperature is 80 mK. The lowest attainable temperature is 30 mK. Our system can accommodate up to six transmission lines, with a measurement bandwidth tested from zero frequency to 12 GHz. This bandwidth is limited by low-pass components in the setup; we expect the intrinsic bandwidth to be at least 18 GHz. We present our setup, discuss the experimental procedure, and give examples of experiments enabled by this system. This new measurement method will have a major impact on systematic ultra-low temperature studies using microwave signals, including those requiring quantum coherence.

Ong, Florian R.; Orgiazzi, Jean-Luc; de Waard, Arlette; Frossati, Giorgio; Lupascu, Adrian

2012-09-01

11

The ability to automatically segment an image into distinct regions is a critical aspect in many visual processing applications. Because inaccuracies often exist in automatic segmentation, manual segmentation is necessary in some application domains to correct mistakes, as is required in the reconstruction of neuronal processes from microscopic images. The goal of the automated segmentation tool is traditionally to produce the highest-quality segmentation, where quality is measured by the similarity to actual ground truth, so as to minimize the volume of manual correction necessary. Manual correction is generally orders of magnitude more time consuming than automated segmentation, often making the handling of large images intractable. Therefore, we propose a more relevant goal: minimizing the turn-around time of automated/manual segmentation while attaining a level of similarity with ground truth. It is not always necessary to inspect every aspect of an image to generate a useful segmentation. As such, we propose a strategy to guide manual segmentation to the most uncertain parts of segmentation. Our contributions include 1) a probabilistic measure that evaluates segmentation without ground truth and 2) a methodology that leverages these probabilistic measures to significantly reduce manual correction while maintaining segmentation quality. PMID:23028540

Plaza, Stephen M.; Scheffer, Louis K.; Saunders, Mathew

2012-01-01

12

Diagnostic Accuracy and Turnaround Time of the Xpert MTB/RIF Assay in Routine Clinical Practice

The Xpert MTB/RIF assay was introduced for timely and accurate detection of tuberculosis (TB). The aim of this study was to determine the diagnostic accuracy and turnaround time (TAT) of Xpert MTB/RIF assay in clinical practice in South Korea. We retrospectively reviewed the medical records of patients in whom Xpert MTB/RIF assay using sputum were requested. The sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) for the diagnosis of pulmonary tuberculosis (PTB) and detection of rifampicin resistance were calculated. In addition, TAT of Xpert MTB/RIF assay was compared with those of other tests. Total 681 patients in whom Xpert MTB/RIF assay was requested were included in the analysis. The sensitivity, specificity, PPV and NPV of Xpert MTB/RIF assay for diagnosis of PTB were 79.5% (124/156), 100.0% (505/505), 100.0% (124/124) and 94.0% (505/537), respectively. Those for the detection of rifampicin resistance were 57.1% (8/14), 100.0% (113/113), 100.0% (8/8) and 94.9% (113/119), respectively. The median TAT of Xpert MTB/RIF assay to the report of results and results confirmed by physicians in outpatient settings were 0 (0–1) and 6 (3–7) days, respectively. Median time to treatment after initial evaluation was 7 (4–9) days in patients with Xpert MTB/RIF assay, but was 21 (7–33.5) days in patients without Xpert MTB/RIF assay. Xpert MTB/RIF assay showed acceptable sensitivity and excellent specificity for the diagnosis of PTB and detection of rifampicin resistance in areas with intermediate TB burden. Additionally, the assay decreased time to the initiation of anti-TB drugs through shorter TAT. PMID:24204834
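The accuracy figures quoted above follow directly from the 2x2 contingency counts reported in the abstract; a minimal sketch of the calculation:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from 2x2 contingency counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts for PTB diagnosis from the abstract: 124 of 156 cases detected,
# all 505 non-TB patients correctly negative.
m = diagnostic_metrics(tp=124, fp=0, fn=32, tn=505)
```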

Kwak, Nakwon; Choi, Sun Mi; Lee, Jinwoo; Park, Young Sik; Lee, Chang-Hoon; Lee, Sang-Min; Yoo, Chul-Gyu; Kim, Young Whan; Han, Sung Koo; Yim, Jae-Joon

2013-01-01

13

Turnaround in cyclic cosmology.

It is speculated how dark energy in a brane world can help reconcile an infinitely cyclic cosmology with the second law of thermodynamics. A cyclic model is described in which dark energy with an equation of state w < -1 leads to a turnaround, extremely shortly before the would-be big rip, at which both the volume and entropy of our Universe decrease by a gigantic factor, while very many independent, similarly small contracting universes are spawned. The entropy of our model decreases almost to zero at turnaround but increases for the remainder of the cycle: by a vanishingly small amount during contraction, empty of matter, then by a large factor during inflationary expansion. PMID:17359014

Baum, Lauris; Frampton, Paul H

2007-02-16

14

Random time averaged diffusivities for Lévy walks

NASA Astrophysics Data System (ADS)

We investigate a Lévy walk alternating between velocities ±v0 of opposite sign. The sojourn time probability distribution at large times is a power law lacking its mean or second moment. The first case corresponds to a ballistic regime where the ensemble averaged mean squared displacement (MSD) at large times is ⟨x²⟩ ∝ t²; the latter corresponds to enhanced diffusion with ⟨x²⟩ ∝ t^ν, 1 < ν < 2. The correlation function and the time averaged MSD are calculated. In the ballistic case, the deviations of the time averaged MSD from purely ballistic behavior are shown to be distributed according to a Mittag-Leffler density function. In the enhanced diffusion regime, the fluctuations of the time averaged MSD vanish at large times, yet very slowly. In both cases we quantify the discrepancy between the time averaged and ensemble averaged MSDs.
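A minimal simulation sketch of such a Lévy walk, using a Pareto-tailed sojourn time (the exponent and parameter values here are illustrative, not the paper's) and a single-trajectory time-averaged MSD estimator:

```python
import random

def levy_walk(total_time, alpha=1.5, v0=1.0, seed=1):
    """Lévy walk with velocity alternating between +v0 and -v0 after each
    sojourn; sojourn times are Pareto distributed with tail ~ t^(-1-alpha),
    so for 1 < alpha < 2 the mean exists but the second moment diverges
    (the enhanced-diffusion regime). Returns positions on a unit time grid."""
    rng = random.Random(seed)
    x, t, v = 0.0, 0.0, v0
    trace = [0.0]
    while t < total_time:
        tau = rng.paretovariate(alpha)   # heavy-tailed sojourn time
        end = min(t + tau, total_time)
        while len(trace) <= end:         # record integer-time positions
            k = len(trace)
            trace.append(x + v * (k - t))
        x += v * (end - t)
        t = end
        v = -v                           # alternate the velocity sign
    return trace

def ta_msd(trace, lag):
    """Time-averaged mean squared displacement of one trajectory at a lag."""
    n = len(trace) - lag
    return sum((trace[i + lag] - trace[i]) ** 2 for i in range(n)) / n

trace = levy_walk(500)
msd_10 = ta_msd(trace, 10)
```

Averaging `ta_msd` over many seeds and lags would exhibit the scatter of time averages that the paper characterizes analytically.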

Froemberg, D.; Barkai, E.

2013-07-01

15

Turnaround Momentum Still Fragile

ERIC Educational Resources Information Center

The federal program providing billions of dollars to help states and districts close or remake some of their worst-performing schools remains a work in progress after two years, with more than 1,200 turnaround efforts under way but still no definitive verdict on its effectiveness. The School Improvement Grant (SIG) program, supercharged by a…

Klein, Alyson

2012-01-01

16

Piezoelectrically actuated time-averaged atomic microtraps

NASA Astrophysics Data System (ADS)

We present a scheme for creating tight and adiabatic time-averaged atom-traps through the piezoelectric actuation of nanomagnetic structures. We show that potentials formed by the circular translation of magnetic structures have several advantages over conventional rotating-field techniques, particularly for high trap frequencies. As the magnitude of the actuation is changed, the trapping potential can be changed adiabatically between harmonic 3D confinement and a toroidal trap.

West, A. D.; Wade, C. G.; Weatherill, K. J.; Hughes, I. G.

2012-07-01

17

Time-dependent angularly averaged inverse transport

This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain.

Guillaume Bal; Alexandre Jollivet

2009-02-19

18

The Analytical Development Section (ADS) and the Statistical Consulting Section (SCS) of the Savannah River National Laboratory (SRNL) are participating in a Six Sigma initiative to improve the Defense Waste Processing Facility (DWPF) Laboratory. The Six Sigma initiative has focused on reducing the analytical turnaround time of samples from the Slurry Mix Evaporator (SME) by developing streamlined sampling and analytical methods [1]. The objective of Phase I was to evaluate the sub-sampling of a larger sample bottle and the performance of a cesium carbonate (Cs₂CO₃) digestion method. Successful implementation of the Cs₂CO₃ fusion method in the DWPF would have important time savings and convenience benefits because this single digestion would replace the dual digestion scheme now used. A single digestion scheme would result in more efficient operations in both the DWPF shielded cells and the inductively coupled plasma-atomic emission spectroscopy (ICP-AES) laboratory. By taking a small aliquot of SME slurry from a large sample bottle and dissolving the vitrified SME sample with carbonate fusion methods, an analytical turnaround time reduction from 27 hours to 9 hours could be realized in the DWPF. This analytical scheme has the potential for not only dramatically reducing turnaround times but also streamlining operations to minimize wear and tear on critical shielded cell components that are prone to fail, including the Hydragard™ sampling valves and manipulators. Favorable results from the Phase I tests [2] led to the recommendation for a Phase II effort as outlined in the DWPF Technical Task Request (TTR) [3]. There were three major tasks outlined in the TTR, and SRNL issued a Task Technical and QA Plan [4] with a corresponding set of three major task activities: (1) compare weight percent (wt%) total solids measurements of large volume samples versus peanut vial samples; (2) evaluate Cs₂CO₃ and K₂CO₃ fusion methods using DWPF simulated glass (part 1) and, to the extent possible, DWPF radioactive glass (part 2); and (3) recommend specific equipment to implement the new methods and work with SRNL's Equipment Engineering Section (EES) to ensure compatibility of the equipment with the DWPF shielded cells. The focus of this report is major task activity No. 2 (parts 1 and 2). The other two major task activities are to be addressed separately. The measurements supporting task activity No. 1 are being conducted at the DWPF. EES is currently designing and fabricating the DWPF cell equipment needed for the new method as part of major task activity No. 3.

Edwards, Thomas

2005-09-01

19

Human Capital in Turnaround Schools

ERIC Educational Resources Information Center

Finding, keeping and supporting great educators presents the single biggest challenge to successful school turnarounds. Without teachers and administrators who bring the needed combination of skills and passion, nothing else will achieve the desired effect. The turnaround model supported by the U.S. Department of Education School Improvement Grant…

Ferris, Kristen

2012-01-01

20

Quiet-time average auroral configuration

The spatial pattern of auroral electron precipitation during magnetically quiet intervals was studied. The result is compared with the Feldstein (1963) auroral oval and other reported configurations of auroral optical emission. The precipitation of auroral electrons takes place in an irregularly shaped annular belt encircling the pole. The belt is composed of two parts: the equatorial part (average energy greater…

K. Lassen; C. Danielsen; C.-I. Meng

1988-01-01

21

In response to the revision of the social medical insurance policy, under which hospital clinics can additionally charge for laboratory testing when the test results are presented to an outpatient in printed form on the day of the visit, we evaluated laboratory processing times, the so-called turnaround times (TATs). A total of 14,802 outpatients during the period from October 2010 to May 2011 were enrolled. TATs from venipuncture accession to completion of blood collection followed a log-normal distribution with a mode of 5 to 6 min and a mean ± standard deviation of 10^(0.95 ± 0.26) (4.90 to 16.2) min. Order waiting time followed a half-normal distribution, with the 50th and 90th percentiles being 4 and 16 min, respectively. TATs of blood collection and order waiting time were significantly influenced by the day of the week and the accession time. Through analysis of TATs from specimen receipt to reporting of test results, it became apparent that the tests determined by immunoassay and erythrocyte sedimentation rate (ESR) required more time than the remaining tests. Total TATs from venipuncture accession to reporting of test results ranged from 28 to 29 min (50th percentile) for complete blood count and hemoglobin A1c, whereas those for endocrinology and tumor markers were 65 to 73 min. In conclusion, the tests determined by immunoassay are rate-limiting for rapid reporting efforts in clinical laboratories. Secondly, TATs of blood collection are mostly influenced by order waiting time, which depends on the day of the week and accession time. At present, there is no target value for TATs; however, it is important to recognize the necessity of shortening laboratory TATs. PMID:22568083
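The reported mean ± standard deviation of 10^(0.95 ± 0.26) min translates into the quoted 4.90 to 16.2 min band by exponentiating the log10 limits; a one-line sketch:

```python
def lognormal_band(mean_log10, sd_log10):
    """One-sigma range in minutes for a TAT whose log10 is normally
    distributed, as reported for blood-collection turnaround times."""
    return 10 ** (mean_log10 - sd_log10), 10 ** (mean_log10 + sd_log10)

lo, hi = lognormal_band(0.95, 0.26)   # the abstract's reported parameters
```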

Teruya, Emi; Yamauchi, Megumi S; Yamane, Nobuhisa; Nakasone, Isamu; Miyagi, Ayano; Nago, Tamami T; Uechi, Kohei; Higashiuesato, Yasushi

2012-03-01

22

Introduction: HIV/AIDS clinics in resource limited settings (RLS) face increasing numbers of patients and workforce shortages [1, 2]. To address these challenges, efficient models of care like pharmacy only visits (POV) and nurse only visits (NOV) are recommended [3]. The Makerere University Joint AIDS Program (MJAP), a PEPFAR-funded program providing care to over 42,000 HIV infected adults, has implemented the POV model since 2009. In this model, stable patients on antiretroviral therapy (ART) with adherence to ART >95% and a Karnofsky score >90% are reviewed by a doctor every four months but visit the pharmacy for ART re-fills every two months. A study conducted in August 2011 showed low retention on the POV program, with symptomatic diseases, pending CD4 count and complete blood count results, and poor adherence to ART as the major reasons for non-retention in the POV program. To improve retention on POV, the TAT (turnaround time) for laboratory results (the main reason for non-retention in the previous study) was reduced from one month to one week. In August 2012, the study was repeated to assess the effect of reducing TAT on improving retention one year after patients were placed on POV. Materials and Methods: A cohort analysis of data from patients on POV in August 2011 and in August 2012 was done. We compared retention on POV before and after reducing the TAT for laboratory results. Results: Retention on POV was 12.0% (95% CI 9.50–14.7) among 619 patients in 2011 (70% females; mean age 33 years, standard deviation (SD) 8.5), compared to 11.1% (95% CI 9.15–13.4) among 888 patients (70% females; mean age 38.3 years, SD 8.9) in 2012 (p=0.59). The main reasons for non-retention on the POV program in 2012 were poor adherence to ART (23%) and missed clinic appointments (14%). Conclusions: Reducing TAT for laboratory test results did not improve retention of stable HIV-infected adults on POV in our clinic.
Strategies for improving adherence to ART and keeping clinic appointments need to be employed to balance workload and management of patients without compromising quality of care or patients' clinical, immunological and adherence outcomes. PMID:25394111

Maselle, Edna; Muhanguzi, Asaph; Muhumuza, Simon; Nansubuga, Jeniffer; Nawavvu, Cecilia; Namusobya, Jeniffer; Kamya, Moses R; Semitala, Fred C

2014-01-01

23

Double exposure time-averaged in-line digital holography

NASA Astrophysics Data System (ADS)

Theory, analysis and applications of digital in-line holography are presented for metrological applications. In particular, time averaged in-line digital holography is explored for dynamic characterization of membranes and MEMS diaphragms. The analysis and capability of numerically reconstructed amplitude and phase information from time averaged holograms are presented. The reconstructed amplitude provides the vibration mode shapes by showing the time average fringes, which are modulated by a zero-order Bessel function, as in conventional time-averaged holography. The numerical phase information, however, divides into two parts: the first part represents the surface roughness information of the object and is a source of noise for a single exposure, and the second part is called the time average phase. By using a novel double exposure method, the reconstructed phase information from time averaged holograms can be used for mean static deformation as well as for better visualization of time averaged fringes. In the case of vibrating objects with simultaneous mean static deformation, the phase information mixes together and is used for precise analysis of vibration behavior. The double exposure method also suppresses the noise from the real image wave, caused by overlap of the zero-order term and the twin image wave due to the in-line geometry. Experimental results are presented for vibrations of an aluminum membrane 10 mm in size, and also for a MEMS diaphragm 6 mm in size.

Singh, Vijay Raj; Asundi, Anand; Miao, Jianmin

2007-06-01

24

Time average vibration fringe analysis using Hilbert transformation

Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
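The envelope-extraction idea behind the Hilbert-transform method can be sketched as follows. This is not the authors' implementation; it demonstrates the underlying principle on a synthetic amplitude-modulated signal, computing the analytic signal with an FFT-based Hilbert transform.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: zero the negative frequencies, double
    the positive ones, and inverse-transform to get the analytic signal."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(np.fft.fft(x) * h)

# Synthetic fringe-like carrier with a slowly varying amplitude envelope;
# the magnitude of the analytic signal recovers the envelope in one shot.
t = np.linspace(0, 1, 1024, endpoint=False)
envelope = 1.0 + 0.5 * np.cos(2 * np.pi * 2 * t)
signal = envelope * np.cos(2 * np.pi * 100 * t)
recovered = np.abs(analytic_signal(signal))
```

This single-frame recovery of the modulation amplitude is what lets the HT method avoid the multiple fringe frames needed by reference bias modulation.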

Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

2010-10-20

25

Long time Evolution of Quantum Averages Near Stationary Points

We construct explicit expressions for quantum averages in coherent states for a Hamiltonian of degree 4 with a hyperbolic stagnation point. These expressions are valid for all times and "collapse" (i.e., become infinite) along a discrete sequence of times. We compute quantum corrections compared to classical expressions. These corrections become significant over a time period of order C log(1/ℏ).

Gennady Berman; Misha Vishik

2003-10-06

26

Long time Evolution of Quantum Averages Near Stationary Points

We construct explicit expressions for quantum averages in coherent states for a Hamiltonian of degree 4 with a hyperbolic stagnation point. These expressions are valid for all times and "collapse" (i.e., become infinite) along a discrete sequence of times. We compute quantum corrections compared to classical expressions. These corrections become significant over a time period of order C log(1/ℏ).

Berman, Gennady P.; Vishik, Misha

2003-01-01

27

Long time evolution of quantum averages near stationary points

We construct explicit expressions for quantum averages in coherent states for a Hamiltonian of degree 4 with a hyperbolic stagnation point. These expressions are valid for all times and “collapse” (i.e., become infinite) along a discrete sequence of times. We compute quantum corrections compared to classical expressions. These corrections become significant over a time period of order C log(1/ℏ).

Gennady Berman; Misha Vishik

2003-01-01

28

Time-averaged quadratic functionals of a Gaussian process

NASA Astrophysics Data System (ADS)

The characterization of a stochastic process from its single random realization is a challenging problem for most single-particle tracking techniques which survey an individual trajectory of a tracer in a complex or viscoelastic medium. We consider two quadratic functionals of the trajectory: the time-averaged mean-square displacement (MSD) and the time-averaged squared root mean-square displacement (SRMS). For a large class of stochastic processes governed by the generalized Langevin equation with arbitrary frictional memory kernel and harmonic potential, the exact formulas for the mean and covariance of these functionals are derived. The formula for the mean value can be directly used for fitting experimental data, e.g., in optical tweezers microrheology. The formula for the variance (and covariance) allows one to estimate the intrinsic fluctuations of measured (or simulated) time-averaged MSD or SRMS for choosing the experimental setup appropriately. We show that the time-averaged SRMS has smaller fluctuations than the time-averaged MSD, in spite of much broader applications of the latter one. The theoretical results are successfully confirmed by Monte Carlo simulations of the Langevin dynamics. We conclude that the use of the time-averaged SRMS would result in a more accurate statistical analysis of individual trajectories and more reliable interpretation of experimental data.

Grebenkov, Denis S.

2011-06-01

29

Time-average and time-dependent parts of core flow

Time-average and time-dependent parts of core flow, Hagay Amit and Peter Olson, April 11, 2005. … geostrophic flow. We obtain snapshot images of core flow at … year intervals, which we combine to give time-average and time-dependent parts of the motion over this time interval. The most prominent time-average flow…

Olson, Peter L.

30

An averaging analysis of discrete-time indirect adaptive control

NASA Technical Reports Server (NTRS)

An averaging analysis of indirect, discrete-time, adaptive control systems is presented. The analysis results in a signal-dependent stability condition and accounts for unmodeled plant dynamics as well as exogenous disturbances. This analysis is applied to two discrete-time adaptive algorithms: an unnormalized gradient algorithm and a recursive least-squares (RLS) algorithm with resetting. Since linearization and averaging are used for the gradient analysis, a local stability result valid for small adaptation gains is found. For RLS with resetting, the assumption is that there is a long time between resets. The results for the two algorithms are virtually identical, emphasizing their similarities in adaptive control.
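A minimal sketch of the unnormalized-gradient algorithm class analyzed above, reduced to identifying a single scalar gain. The paper's indirect adaptive control setup is more general; the plant, input, and step size here are hypothetical.

```python
def gradient_identify(u, y, theta0=0.0, mu=0.05):
    """Unnormalized gradient estimator for a scalar gain b in y[k] = b*u[k]:
    theta <- theta + mu * u[k] * (y[k] - theta * u[k]). A toy instance of
    the algorithm class analyzed; the plant and gain are hypothetical."""
    theta = theta0
    for uk, yk in zip(u, y):
        theta += mu * uk * (yk - theta * uk)   # gradient step on the error
    return theta

u = [1.0, -1.0] * 200          # persistently exciting input sequence
y = [2.0 * uk for uk in u]     # plant with true gain b = 2
theta = gradient_identify(u, y)
```

With a small adaptation gain mu, the averaged dynamics of theta contract toward the true gain, which is the regime where the paper's local stability result applies.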

Phillips, Stephen M.; Kosut, Robert L.; Franklin, Gene F.

1988-01-01

31

Time-average TV holography for vibration fringe analysis

Time-average TV holography is a widely used method for vibration measurement. The method generates speckle-correlation time-averaged J0 fringes that can be used for full-field qualitative visualization of mode shapes at resonant frequencies of an object under harmonic excitation. In order to map the amplitudes of vibration, quantitative evaluation of the time-averaged fringe pattern is desired. A quantitative evaluation procedure based on the phase-shifting technique used in two-beam interferometry has been adopted for this application with some modification. The existing procedure requires a large number of frames to be recorded for implementation. We propose a procedure that reduces the number of frames required for the analysis. The TV holographic system used and the experimental results obtained with it on an edge-clamped, sinusoidally excited square aluminium plate sample are discussed.

Kumar, Upputuri Paul; Kalyani, Yanam; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

2009-06-01
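
The J0 fringes mentioned above arise because the time-averaged intensity at a harmonically vibrating point is modulated by the squared zeroth-order Bessel function of a term proportional to the local vibration amplitude. A rough numpy-only sketch; the midpoint-rule evaluation of J0 and the omission of geometry and wavelength factors are assumptions, not from the source:

```python
import numpy as np

def j0(x):
    """Zeroth-order Bessel function via the integral representation
    J0(x) = (1/pi) * integral_0^pi cos(x sin(theta)) d(theta),
    evaluated with a midpoint rule (numpy-only, no scipy)."""
    m = 2000
    theta = (np.arange(m) + 0.5) * np.pi / m
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.cos(np.outer(x, np.sin(theta))).mean(axis=1)

# Time-averaged fringe brightness at a harmonically vibrating point is
# proportional to J0(omega)**2, where omega is proportional to the local
# vibration amplitude (geometry and wavelength factors omitted here).
omega = np.linspace(0.0, 10.0, 201)
fringe = j0(omega) ** 2
print(fringe[0])  # brightest fringe at zero amplitude: J0(0)**2 = 1
```

The dark fringes fall at the zeros of J0 (the first near omega ≈ 2.405), which is why counting fringes maps out the amplitude distribution.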

32

NASA Astrophysics Data System (ADS)

The general analytic expression for the temporal spectrum of atmospheric scintillation under the weak-turbulence condition is derived. It is employed to study the characteristics of the temporal spectra for a horizontal uniform path and for a whole-layer atmospheric non-uniform path. For the non-uniform path, the H-V turbulence model and the Gaussian wind speed model are utilized. It is found that when the receiver diameter is larger than the Fresnel scale (λL)^(1/2), the temporal spectrum of the plane wave has a power-law behavior with a scaling index of −17/3 in the high-frequency range. The change of the turbulence strength has little influence on the shape of the temporal spectrum. Based on the characteristics of the temporal spectrum, the aperture-averaging and time-averaging effects on scintillation are analyzed in the frequency domain.

Shen, Hong; Yu, Longkun; Fan, Chengyu

2014-11-01

33

Combining Quick-Turnaround and Batch Workloads at Scale

NASA Technical Reports Server (NTRS)

NAS uses PBS Professional to schedule and manage the workload on Pleiades, an 11,000+ node InfiniBand cluster. At this scale the user experience for quick-turnaround jobs can degrade, which led NAS initially to set up two separate PBS servers, each dedicated to a particular workload. Recently we have employed PBS hooks and scheduler modifications to merge these workloads together under one PBS server, delivering sub-1-minute start times for the quick-turnaround workload and enabling dynamic management of the resources set aside for that workload.

Matthews, Gregory A.

2012-01-01

34

Time-averaged shadow-moiré method for studying vibrations.

A time-averaged shadow-moiré method is presented which permits the determination of the amplitude distribution of the deflection of a plate in steady state vibration. No stroboscope is required, and the recording is done statically. The method is less sensitive than holographic methods and is therefore suitable for studying relatively large amplitudes. PMID:20168777

Hung, Y Y; Liang, C Y; Hovanesian, J D; Durelli, A J

1977-06-01

35

NASA Astrophysics Data System (ADS)

Deterministic network models have been attractive media for discussing how dynamical processes depend on network structural features. On the other hand, the heterogeneity of weights affects dynamical processes taking place on networks. In this paper, we present a family of weighted expanded Koch networks based on Koch networks. They originate from an r-polygon, and in each subsequent evolutionary step every node of the current generation produces m r-polygons that include the node and whose weighted edges are scaled by a factor w. We derive closed-form expressions for the average weighted shortest path length (AWSP). In large networks, the AWSP stays bounded as the network order grows (0 < w < 1). We then focus on a special random walk and trapping issue on these networks; in particular, we calculate exactly the average receiving time (ART). The ART exhibits a sub-linear dependence on network order (0 < w < 1), which implies that nontrivial weighted expanded Koch networks are more efficient than un-weighted expanded Koch networks in receiving information. Besides, the efficiency of receiving information at hub nodes also depends on the parameters m and r. These findings may pave the way for controlling information transportation on general weighted networks.

Wu, Zikai; Hou, Baoyu; Zhang, Hongjuan; Jin, Feng

2014-04-01

36

Time-averaged adiabatic ring potential for ultracold atoms

We report the experimental realization of a versatile ring trap for ultracold atoms. The ring geometry is created by the time-averaged adiabatic potential resulting from the application of an oscillating magnetic bias field to a rf-dressed quadrupole trap. Lifetimes for a Bose-Einstein condensate in the ring exceed 11 s and the ring radius was continuously varied from 50 μm to 262 μm. An efficient method of loading the ring from a conventional time-averaged orbiting potential trap is presented together with a rotation scheme which introduces angular momentum into the system. The ring presents an opportunity to study the superfluid properties of a condensate in a multiply connected geometry and also has applications for matter-wave interferometry.

Sherlock, B. E.; Gildemeister, M.; Owen, E.; Nugent, E.; Foot, C. J. [Clarendon Laboratory, University of Oxford, Parks Road, Oxford, OX1 3PU (United Kingdom)

2011-04-15

37

Adjoint sensitivity analysis of time averaged quantities for unsteady flows

NASA Astrophysics Data System (ADS)

Sensitivity analysis is an essential ingredient of data assimilation, aerodynamic design, uncertainty quantification and optimal flow control. In particular, the adjoint sensitivity analysis method has been shown to solve the very high-dimensional optimization problems typically found in these applications. This talk focuses on recent developments in extending adjoint sensitivity analysis to unsteady flows. The adjoint equation of unsteady flows must be integrated backwards in time, and each backward time step must use the flow solution at the corresponding time. As a result, the entire time history of the flow solution must be either stored or recalculated. The invention of checkpointing schemes provides an economic solution to this challenge; in particular, the dynamic checkpointing scheme makes this solution more practical for computational fluid dynamics problems. In unsteady flows, the quantities of interest are often long-time averages. We demonstrate that sensitivity analysis of these long-time-averaged quantities poses significant new challenges. A novel windowing scheme is developed to compute correct sensitivities for periodic unsteady flows, such as laminar vortex shedding. Initial investigation of sensitivity analysis of chaotic unsteady flows, i.e., transitional and turbulent flows, is also discussed.

Wang, Qiqi

2011-11-01

38

School Turnarounds: The Essential Role of Districts

ERIC Educational Resources Information Center

The inspiration for this book was a crucial observation: that if the school turnaround movement is to have widespread and lasting consequences, it will need to incorporate meaningful district involvement in its efforts. The result is a volume that considers school turnaround efforts at the district level, examining the evidence thus far and…

Zavadsky, Heather

2012-01-01

39

Turnaround Schools and the Leadership They Require

ERIC Educational Resources Information Center

"Turnaround Schools and the Leadership They Require" was commissioned by the Canadian Education Association (CEA) in 2006. The research, conducted by Kenneth Leithwood and Tiiu Strauss, was a two-year study of Ontario schools identified as in need of "turnaround" meaning that a significant improvement in school achievement was required. The study…

Leithwood, Kenneth; Strauss, Tiiu

2008-01-01

40

COMPTEL Time-Averaged All-Sky Point Source Analysis

We use all COMPTEL data from the beginning of the CGRO mission (April '91) up to the end of CGRO Cycle 6 (November '97) to carry out all-sky point source analyses in the four standard COMPTEL energy bands for different time periods. We apply our standard maximum-likelihood method to generate all-sky significance and flux maps for point sources by subtracting off the diffuse emission components via model fitting. In addition, fluxes of known sources have been determined for individual CGRO Phases/Cycles to generate lightcurves with a time resolution of the order of one year. The goal of the analysis is to derive quantitative results -- significances, fluxes, light curves -- of our brightest and most significant sources such as 3C 273, and to search for additional new COMPTEL sources, showing up in time-averaged maps only.

W. Collmar; V. Schoenfelder; A. W. Strong; H. Bloemen; W. Hermsen; M. McConnell; J. Ryan; K. Bennett

1999-12-15

41

Average waiting time profiles of uniform DQDB model

The Distributed Queue Dual Bus (DQDB) system consists of a linear arrangement of N nodes that communicate with each other using two contra-flowing buses; the nodes use an extremely simple protocol to send messages on these buses. This simple, but elegant, system has been found to be very challenging to analyze. We consider a simple and uniform abstraction of this model to highlight the fairness issues in terms of average waiting time. We introduce a new approximation method to analyze the performance of the DQDB system in terms of the average waiting time of a node expressed as a function of its position. Our approach abstracts the intimate relationship between the load of the system and its fairness characteristics, and explains all basic behavior profiles of DQDB observed in previous simulations. For the uniform DQDB with equal distance between adjacent nodes, we show that the system operates under three basic behavior profiles and a finite number of their combinations that depend on the load of the network. Consequently, the system is not fair at any load in terms of the average waiting times. In the vicinity of a critical load of 1 − 4/N, the uniform network runs into a state akin to chaos, where its behavior fluctuates from one extreme to the other with a load variation of 2/N. Our analysis is supported by simulation results. We also show that the main theme of the analysis carries over to the general (non-uniform) DQDB; by suitably choosing the inter-node distances, the DQDB can be made fair around some loads, but such a system becomes unfair as the load changes.

Rao, N.S.V. [Oak Ridge National Lab., TN (United States); Maly, K.; Olariu, S.; Dharanikota, S.; Zhang, L.; Game, D. [Old Dominion Univ., Norfolk, VA (United States). Dept. of Computer Science

1993-09-07
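
The closed-form thresholds quoted in the abstract are simple to evaluate. A small sketch, taking the stated expressions at face value (the N > 4 validity guard is an assumption added for safety):

```python
def dqdb_critical_load(n_nodes: int) -> float:
    """Critical load of the uniform DQDB model from the abstract:
    rho_c = 1 - 4/N."""
    if n_nodes <= 4:
        raise ValueError("the expression assumes more than 4 nodes")
    return 1.0 - 4.0 / n_nodes

def chaos_window(n_nodes: int) -> tuple:
    """Load interval of width 2/N around rho_c in which the abstract reports
    behavior fluctuating between extremes."""
    rho_c = dqdb_critical_load(n_nodes)
    return (rho_c - 1.0 / n_nodes, rho_c + 1.0 / n_nodes)

print(dqdb_critical_load(50))  # ~ 0.92
print(chaos_window(50))        # ~ (0.90, 0.94)
```

Note how quickly the window narrows with N: for large networks the chaotic regime occupies a vanishing sliver of the load axis, yet it sits just below saturation where such systems are typically operated.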

42

Adaptive coherent averaging for real-time electrocardiogram enhancement.

This paper presents an adaptive coherent averaging structure capable of removing broad-band interference from the electrocardiogram (ECG) while preserving the morphological features of the signal. The proposed structure improves the signal-to-noise ratio (SNR) of the adaptive line enhancer (ALE) while maintaining robustness to quasi-periodic signals. The least mean-square (LMS) and recursive least-square (RLS) adaptive algorithms are implemented. Analysis and comparison of the results are provided, leading to an optimized hybrid implementation. The cascade nature of the proposed structure is both scalable and suitable for implementation in real-time hardware. PMID:25571127

Robinson, Brent W; Saquib, Mohammad

2014-08-01
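
An adaptive line enhancer of the kind referenced above can be sketched as an LMS predictor fed a delayed copy of its own input: quasi-periodic components survive the delay and appear at the filter output, while broadband noise does not. This is a generic textbook ALE, not the paper's optimized hybrid LMS/RLS structure; the signal model and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def ale_lms(x, order=32, delay=1, mu=0.001):
    """Adaptive line enhancer: LMS prediction of x[n] from samples at least
    `delay` steps old. Periodic content is predictable across the delay and
    appears at the output y; broadband noise is not."""
    w = np.zeros(order)
    y = np.zeros_like(x)
    for n in range(order + delay, len(x)):
        u = x[n - delay - order:n - delay][::-1]  # delayed regressor, newest first
        y[n] = w @ u
        e = x[n] - y[n]                           # prediction error
        w += 2.0 * mu * e * u                     # LMS weight update
    return y

# A clean quasi-periodic component buried in broadband noise ("ECG-like" in
# spirit only; the component here is an illustrative sinusoid).
n = 20_000
t = np.arange(n)
clean = np.sin(2 * np.pi * t / 100)
noisy = clean + rng.normal(0.0, 1.0, n)

enhanced = ale_lms(noisy)
tail = slice(n // 2, None)  # evaluate after the weights have converged
mse_in = np.mean((noisy[tail] - clean[tail]) ** 2)
mse_out = np.mean((enhanced[tail] - clean[tail]) ** 2)
print(mse_in, mse_out)  # the enhanced output should sit closer to `clean`
```

Swapping the weight update for an RLS recursion trades computation for faster convergence, which is the trade the paper's hybrid structure exploits.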

43

A time Fourier analysis of zonal averaged ozone heating rates

NASA Technical Reports Server (NTRS)

A time-Fourier analysis is presented for the yearly variation of the zonally averaged ozone heating rates in the middle atmosphere, based on a model study. The ozone heating rates are determined by utilizing two-dimensional (altitude and latitude) ozone distributions and by including the effect of the curved earth's atmosphere. In addition, assumptions are introduced for the yearly variations of the ozone distributions due to the lack of sufficient existing ozone data. Among other results, it is shown that the first harmonic component indicates that the heating rates are completely out of phase between the northern and southern hemispheres. The second Fourier component shows a symmetric pattern with respect to the equator, as well as five distinct local extreme values of the ozone heating rate. The third harmonic component shows a pattern close to that of the first component except in the regions above 70 deg latitude between 45 and 95 km in both hemispheres.

Wang, P.-H.; Wu, M.-F.; Deepak, A.; Hong, S.-S.

1981-01-01
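
The harmonic decomposition used in the ozone study can be illustrated with a toy monthly climatology, from which the hemispheric phase opposition of the first harmonic falls out directly. The signal shapes and amplitudes below are invented for illustration only:

```python
import numpy as np

def annual_harmonics(series, n_harmonics=3):
    """Amplitude and phase of the first few time-Fourier harmonics of a
    12-point monthly climatology (the kind of decomposition applied to the
    zonally averaged heating rates in the abstract)."""
    c = np.fft.rfft(series) / len(series)
    amps = 2.0 * np.abs(c[1:n_harmonics + 1])
    phases = np.angle(c[1:n_harmonics + 1])
    return amps, phases

months = np.arange(12)
# Toy heating-rate climatologies: the seasonal cycles of the two hemispheres
# are shifted by half a year, so their first harmonics are out of phase.
north = 1.0 + 0.8 * np.cos(2.0 * np.pi * months / 12)
south = 1.0 + 0.8 * np.cos(2.0 * np.pi * (months - 6) / 12)

(a_n, p_n), (a_s, p_s) = annual_harmonics(north), annual_harmonics(south)
print(a_n[0], a_s[0])        # equal first-harmonic amplitudes
print(abs(p_n[0] - p_s[0]))  # phase difference of pi: completely out of phase
```

The second harmonic of a half-year-shifted pair would instead be in phase, which is the symmetry-about-the-equator behavior the abstract reports for its second Fourier component.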

44

Statistics of the end of turnaround-scale structure formation in Lambda CDM cosmology

In ΛCDM cosmology, structure formation is halted shortly after dark energy dominates the mass/energy budget of the Universe. A manifestation of this effect is that in such a cosmology the turnaround radius -- the non-expanding mass shell furthest away from the center of a structure -- has an upper bound. Recently, a new, local test for the existence of dark energy in the form of a cosmological constant was proposed based on this turnaround bound. In this work, we build upon this proposal, and we further examine the advantages of studying the end of structure formation at the turnaround scale. Using the Press-Schechter formalism, we calculate the mass function of turnaround structures at various cosmic epochs, including the present one and an infinite time into the future. We find that structures at turnaround scales have in practice stopped forming already today, and consequently, the turnaround radii of structures must be very close to the maximum predicted value. We find that a mass scale of m ∼ ...

Tanoglidis, Dimitrios; Tomaras, Theodore

2014-01-01

45

School Turnaround: Cristo Rey Boston High School Case Study

ERIC Educational Resources Information Center

The mandates of the federal No Child Left Behind Law, including the threat of closing a school for underperformance, have led to multiple public school turnaround attempts. Because turnaround is a relatively new area of focus in education, there is limited research on what does and does not work, and even the definition of turnaround is a work in…

Thielman, Jeff

2012-01-01

46

Relegation and Reversion: Longitudinal Analysis of School Turnaround and Decline

ERIC Educational Resources Information Center

School turnaround has become a popular strategy for improvement of chronically low-performing schools. Research related to school turnaround has relied substantially upon case studies. These case studies often focus on successful turnarounds and report immediate outcomes, failing to provide information about the sustainability of the results. In…

Hochbein, Craig

2012-01-01

47

The time-averaged paleomagnetic field 0–5 Ma

Persistent departures from the geocentric axial dipole field model of the time-averaged paleomagnetic field over the past 5 Myr have been analyzed using oceanic data from deep-sea cores and continental data from igneous rocks and sediments. The data set comprises the equivalent of 9490 spot readings of the field (5831 normal and 3659 reverse) from 930 groups of data.

M. W. McElhinny; P. L. McFadden; R. T. Merrill

1996-01-01

48

Experimental and theoretical studies of time-averaged and time resolved rotor heat transfer

NASA Technical Reports Server (NTRS)

Efforts in support of the Space Shuttle Main Engine (SSME) structural durability program have concentrated on obtaining detailed time-averaged and time-resolved (or phase-locked) measurements on a full-scale rotating turbine both with and without cold gas injection and on theoretical studies designed to improve the prediction capability for these turbine flows. The experimental efforts have concentrated on use of the Garrett TFE 731-2 hp turbine. However, it has been possible to apply the theoretical efforts to predicting heat-flux distributions obtained for two additional turbines - i.e., (1) the Garrett low aspect ratio turbine (LART) and (2) the Teledyne 702 turbine. The experimental technique is the short duration, shock-tunnel approach, in which fast-response, thin-film resistance thermometers are used to measure surface temperature histories at prescribed locations on the turbine component parts. Heat-flux values are then inferred from the temperature histories by using standard data reduction procedures. The turbine being used is the Garrett TFE 731-2 hp stage, and both the nozzle guide vanes and the rotor blades are heavily instrumented with thin-film heat-flux gauges. Depending on how the data from a particular heat-flux gauge are recorded, one can get either time-resolved (or phase-locked) or time-averaged results. Both types of data are illustrated.

Dunn, M. G.; Rae, W. J.; Rigby, D. L.

1987-01-01

49

Sustainment of Fine Particle Cloud by Means of Time-Averaged Particle Driving Force in Plasmas

We have succeeded in sustaining a fine-particle cloud by using a time-averaged particle driving (TAPD) method in an RF discharge plasma. The particles feel only a time-averaged force when the period of the pulses applied to the point-electrodes is shorter than the particle response time. The particles are transported to the middle point between the two point-electrodes.

Gohda, Takuma; Iizuka, Satoru [Department of Electrical Engineering, Graduate School of Engineering, Tohoku University Aza-Aoba, Aramaki, Aoba, 980-8579 Sendai (Japan)

2008-09-07

50

Manned Orbital Transfer Vehicle (MOTV). Volume 5: Turnaround analysis

NASA Technical Reports Server (NTRS)

The development of a low-cost reliable turnaround process to employ the MOTV in enhancing the utilization of the geosynchronous space region is analyzed. It is indicated that a routine effective turnaround/maintenance plan must make maximum use of flight data for maintenance planning, a high degree of test automation, and MOTV maintainability features in order to minimize tests, facilitate repair, and reduce manpower requirements. An effective turnaround plan provides a payback of reduced risks.

Boyland, R. E.; Sherman, S. W.; Morfin, H. W.

1979-01-01

51

Quick turnaround facilities for reusable rocket launchers

NASA Astrophysics Data System (ADS)

After defining the requirements for future vertical-takeoff, recoverable/reusable vehicle launch pads on the basis of a functional analysis, the conceptual design of a high-turnaround/low-maintenance facility has been performed. The results obtained show that performance goals are achievable if specific efforts are directed to the optimization of the launch pad's jet deflector. Potential damage is restricted to the support/release jaws and the jet deflector; a maintenance-free 200-launch life may be obtained through steel cladding of critical deflector areas.

Lacaze, H.; Bombled, J. P.

52

ERIC Educational Resources Information Center

Evidence collected over the last 30 years suggests that effective school leaders significantly influence student learning and other aspects of school performance. Documented experience also indicates that individual leaders in failing organizations in various sectors, including education, can effect rapid, dramatic improvements. School turnaround…

Public Impact, 2008

2008-01-01

53

ERIC Educational Resources Information Center

This toolkit includes these separate sections: (1) Selection Preparation Guide; (2) Day-of-Interview Tools; (3) Candidate Rating Tools; and (4) Candidate Comparison and Decision Tools. Each of the sections is designed to be used at different stages of the selection process. The first section provides turnaround teacher competencies that are the…

Public Impact, 2008

2008-01-01

54

ERIC Educational Resources Information Center

Turning around chronically low-performing schools is challenging work requiring fundamental rethinking of the change process, and a systemic rather than school-by-school approach. Without a doubt, high-impact school leaders are critical to turnaround success, and pockets of success around the country demonstrate this. However, transformational and…

Robinson, William S.; Buntrock, LeAnn M.

2011-01-01

55

Some dynamic and time-averaged flow measurements in a turbine rig

NASA Astrophysics Data System (ADS)

Four types of sensors were used to make both dynamic and time-averaged flow measurements in a cold turbine rig to determine the magnitude of errors in time-averaged total-pressure measurement at a station 5 1/2 blade chords downstream from the rotor. The errors turned out to be negligible. The sensors and their intended use are discussed.

Krause, L. N.; Fralick, G. C.

1980-01-01

56

Average Time Results - Child - Sample Size 500 (Error Bars = ± Std. Dev.)

[Figure residue: bar charts of average running time for the Child, Child3, and Child5 networks at sample size 500 (error bars = ± std. dev.), comparing MMHC, OR1/OR2 (k = 5, 10, 20), SC (k = 5, 10), GS, PC, TPDA, and GES.]

Brown, Laura E.

57

Average Time Results - Child - Sample Size 500

[Figure residue: average running time for the Child, Child3, and Child5 networks at sample size 500 (error bars = ± std. dev.), comparing MMHC, OR1 (k = 5, 10), and related algorithms.]

Brown, Laura E.

58

Using Competencies to Improve School Turnaround Principal Success

ERIC Educational Resources Information Center

This paper aims first to shed light on one element of leadership: the characteristics--or "competencies"--of turnaround leaders who succeed in driving rapid, dramatic change. Second, it recounts the elements of support that districts must provide these leaders to enable and sustain a portfolio of successful school turnarounds. Fortunately, decades…

Steiner, Lucy; Hassel, Emily Ayscue

2011-01-01

59

DSP Implementation of Time Delay Estimation Based on Bufferable Average Square Difference Matrix

Time delay estimation is a fundamental subject in the field of source localization. Although many accurate approaches have been proposed to estimate time delay, only some of them are implementable. We propose two approaches based on the average square difference to estimate time delay in adverse conditions; in this way we estimate the time delay between two sensor signals. In these approaches a…

Erfan Soltanmohammadi; S. Dastgerdi; A. H. Rezaie

2010-01-01

60

Federal Register 2010, 2011, 2012, 2013

...effort to coordinate the quick turnaround surveys with other research...Information from the quick turnaround surveys will complement but...with changes. Title: Quick Turnaround Surveys of all statutes...Burden Cost for operation and maintenance: $0. Comments...

2013-08-08

61

Federal Register 2010, 2011, 2012, 2013

...the information on a quick turnaround basis. ETA will make every effort to coordinate the quick turnaround surveys with other research...Administration. Title: Quick Turnaround Surveys of WIA. OMB Number...Burden Cost for operation and maintenance: $0. Comments...

2010-03-30

62

NASA Astrophysics Data System (ADS)

The time-varying character of the transition probabilities is considered to be finite piecewise homogeneous. This paper studies the problem of finite-time H∞ estimation for a class of discrete-time Markov jump systems with time-varying transition probabilities subject to average dwell-time switching; sufficient conditions are established ensuring that the Markov jump systems are finite-time bounded and that the H∞ filtering is finite-time bounded. Based on the results of finite-time boundedness and average dwell time, the system trajectory stays within a prescribed bound. Finally, an example is provided to illustrate the usefulness and effectiveness of the proposed method.

Cheng, Jun; Zhu, Hong; Zhong, Shouming; Zhong, Qishui; Zeng, Yong

2015-02-01

63

A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages

NASA Technical Reports Server (NTRS)

A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics that is designed to capture this property, predicts power law scaling behavior for the second moment statistics of area-averaged rain rate on the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented, and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.

Kundu, Prasun K.; Bell, Thomas L.

2003-01-01

64

Time-averaging equivalent circuit analysis of a resonant switching converter

The sophisticated resonant switching converter is investigated by use of a new time-averaging equivalent circuit analysis method recently presented by the authors. The parameters of the time-averaging equivalent circuit model can easily be determined by the new approach without prior knowledge of the d.c. solution for the converter. The d.c. analysis and a.c. small-signal analysis can then be performed…

JIANPING XU; JUEBANG YU

1989-01-01

65

NASA Astrophysics Data System (ADS)

Odour emissions are episodic, characterised by periods of high emission rates, interspersed with periods of low emissions. It is frequently the short term, high concentration peaks that result in annoyance in the surrounding population. Dispersion modelling is accepted as a useful tool for odour impact assessment, and two approaches can be adopted. The first approach of modelling the hourly average concentration can underestimate total odour concentration peaks, resulting in annoyance and complaints. The second modelling approach involves the use of short averaging times. This study assesses the appropriateness of using different averaging times to model the dispersion of odour from a landfill site. We also examine perception of odour in the community in conjunction with the modelled odour dispersal, by using community monitors to record incidents of odour. The results show that with the shorter averaging times, the modelled pattern of dispersal reflects the pattern of observed odour incidents recorded in the community monitoring database, with the modelled odour dispersing further in a north easterly direction. Therefore, the current regulatory method of dispersion modelling, using hourly averaging times, is less successful at capturing peak concentrations, and does not capture the pattern of odour emission as indicated by the community monitoring database. The use of short averaging times is therefore of greater value in predicting the likely nuisance impact of an odour source and in framing appropriate regulatory controls.

Drew, G. H.; Smith, R.; Gerard, V.; Burge, C.; Lowe, M.; Kinnersley, R.; Sneath, R.; Longhurst, P. J.

66

Respiratory cycle as time basis: an improved method for averaging olfactory neural events

Olfactory neural events are largely modulated by respiration. Accurate analysis of respiration-synchronized activity is precluded by the variability of the respiratory frequency from trial to trial. Thus, the use of the respiratory cycle as the time…

Paris-Sud XI, UniversitÃ© de

67

Leadership and the psychology of turnarounds.

Turnaround champions--those leaders who manage to bring distressed organizations back from the brink of failure--are often acclaimed for their canny financial and strategic decision making. But having studied their work closely, Harvard Business School's Rosabeth Moss Kanter emphasizes another aspect of their achievement. These leaders reverse the cycle of corporate decline through deliberate interventions that increase the level of communication, collaboration, and respect among their managers. Ailing companies descend into what Kanter calls a "death spiral," which typically works this way: After an initial blow to the company's fortunes, people begin pointing fingers and deriding colleagues in other parts of the business. Tensions rise and collaboration declines. Once they are no longer acting in concert, people find themselves less able to effect change. Eventually, many come to believe they are helpless. Passivity sets in. Finally, the ultimate pathology of troubled companies takes hold: denial. Rather than volunteer an opinion that no one else seems to share, people engage in collective pretense to ignore what they individually know. To counter these dynamics, Kanter says, and reverse the company's slide, the CEO needs to apply certain psychological interventions--specifically, replacing secrecy and denial with dialogue, blame and scorn with respect, avoidance and turf protection with collaboration, and passivity and helplessness with initiative. The author offers in-depth accounts of how the CEOs at Gillette, Invensys, and the BBC used these interventions to guide their employees out of corporate free fall and onto a more productive path. PMID:12800717

Kanter, Rosabeth Moss

2003-06-01

68

Time domain averaging and correlation-based improved spectrum sensing method for cognitive radio

NASA Astrophysics Data System (ADS)

Based on the combination of time domain averaging and correlation, we propose an effective time domain averaging and correlation-based spectrum sensing (TDA-C-SS) method used in very low signal-to-noise ratio (SNR) environments. With the assumption that the received signals from the primary users are deterministic, the proposed TDA-C-SS method processes the received samples by a time averaging operation to improve the SNR. Correlation operation is then performed with a correlation matrix to determine the existence of the primary signal in the received samples. The TDA-C-SS method does not need any prior information on the received samples and the associated noise power to achieve improved sensing performance. Simulation results are presented to show the effectiveness of the proposed TDA-C-SS method.

Li, Shenghong; Bi, Guoan

2014-12-01
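
The core of the TDA-C-SS idea, that averaging repeated copies of a deterministic signal suppresses zero-mean noise in proportion to the number of averages, can be sketched as follows. The waveform, period, and the normalized-correlation decision statistic are illustrative assumptions, not the paper's exact processing chain:

```python
import numpy as np

rng = np.random.default_rng(2)

# Time-domain averaging: if the primary-user signal repeats every `period`
# samples (deterministic, as the abstract assumes) while the noise is
# zero-mean, averaging m_periods copies cuts the noise power by m_periods.
period, m_periods, sigma = 128, 64, 1.0
template = np.sin(2.0 * np.pi * 5 * np.arange(period) / period)

received = np.tile(template, m_periods) + rng.normal(0.0, sigma, period * m_periods)
averaged = received.reshape(m_periods, period).mean(axis=0)

noise_power_after = np.mean((averaged - template) ** 2)
print(noise_power_after)  # close to sigma**2 / m_periods = 1/64 ~ 0.0156

# A normalized correlation against the known waveform then serves as the
# detection statistic (a stand-in for the paper's correlation-matrix step).
corr = averaged @ template / np.sqrt((averaged @ averaged) * (template @ template))
print(corr)  # near 1 when the primary signal is present
```

Each doubling of the number of averaged periods buys roughly 3 dB of SNR, which is why the method targets very low-SNR sensing environments.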

69

Neural Networks Used to Compare Designed and Measured Time-Average Patterns

NASA Technical Reports Server (NTRS)

Electronic time-average holograms are convenient for comparing the measured vibration modes of fan blades with those calculated by finite-element models. At the NASA Lewis Research Center, neural networks recently were trained to perform what had been a simple visual comparison of the predictions of the design models with the measurements. Finite-element models were used to train neural networks to recognize damage and strain information encoded in subtle changes in the time-average patterns of cantilevers. But the design-grade finite element models were unable to train the neural networks to detect damage in complex blade shapes. The design-model-generated patterns simply did not agree well enough with the measured patterns. Instead, hybrid-training records, with measured time-average patterns as the input and model-generated strain information as the output, were used to effect successful training.

Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.

1999-01-01

70

34. BOILER HOUSE, COAL CONVEYOR AND TURNAROUND TRACK FOR COAL ...

34. BOILER HOUSE, COAL CONVEYOR AND TURN-AROUND TRACK FOR COAL CARS (NOTE: COAL CAR No. 6 IN FAR BACK GROUND) - Delaware County Electric Company, Chester Station, Delaware River at South end of Ward Street, Chester, Delaware County, PA

71

Turnaround operations analysis for OTV. Volume 2: Detailed technical report

NASA Technical Reports Server (NTRS)

The objectives and accomplishments were to adapt and apply the newly created database of Shuttle/Centaur ground operations. Previously defined turnaround operations analyses were updated for ground-based OTVs (GBOTVs) and space-based OTVs (SBOTVs), design requirements were identified for both OTV and Space Station accommodations hardware, turnaround operations costs were estimated, and a technology development plan was generated to develop the required capabilities. Technical and programmatic data were provided for NASA pertinent to OTV ground and space operations requirements, turnaround operations, task descriptions, timelines and manpower requirements, OTV modular design and booster and Space Station interface requirements, the SBOTV accommodations development schedule, cost and turnaround operations requirements, and a technology development plan for ground and space operations and space-based accommodations facilities and support equipment. Significant conclusions are discussed.

1988-01-01

72

Where the world stands still: turnaround as a strong test of ΛCDM cosmology

NASA Astrophysics Data System (ADS)

Our intuitive understanding of cosmic structure formation works best in scales small enough so that isolated, bound, relaxed gravitating systems are no longer adjusting their radius; and large enough so that space and matter follow the average expansion of the Universe. Yet one of the most robust predictions of ΛCDM cosmology concerns the scale that separates these limits: the turnaround radius, which is the non-expanding shell furthest away from the center of a bound structure. We show that the maximum possible value of the turnaround radius within the framework of the ΛCDM model is, for a given mass M, equal to (3GM/Λc^2)^(1/3), with G Newton's constant and c the speed of light, independently of cosmic epoch, exact nature of dark matter, or baryonic effects. We discuss the possible use of this prediction as an observational test for ΛCDM cosmology. Current data appear to favor ΛCDM over alternatives with local inhomogeneities and no Λ. However, there exist several local-universe structures that have, within errors, reached their limiting size. With improved determinations of their turnaround radii and the enclosed mass, these objects may challenge the limit and ΛCDM cosmology.
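As a quick numerical illustration of the quoted bound R_max = (3GM/Λc^2)^(1/3), the sketch below evaluates it for a 10^15 solar-mass structure, i.e. a rich galaxy cluster. The value of Λ used is an assumed Planck-era figure, not taken from the paper.

```python
# Constants in SI units; Lam is an assumed present-day cosmological constant.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
Lam = 1.1e-52        # cosmological constant, m^-2 (assumed value)
M_sun = 1.989e30     # solar mass, kg
Mpc = 3.086e22       # megaparsec, m

def max_turnaround_radius(M_kg):
    """Maximum turnaround radius (3GM / Lambda c^2)^(1/3), in metres."""
    return (3.0 * G * M_kg / (Lam * c**2)) ** (1.0 / 3.0)

# A 1e15 solar-mass structure (rich galaxy cluster scale):
R = max_turnaround_radius(1e15 * M_sun)
print(f"{R / Mpc:.1f} Mpc")  # roughly 11 Mpc
```

Because the mass enters as a cube root, the bound varies slowly: a factor of 1000 in mass changes the limiting radius by only a factor of 10.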

Pavlidou, V.; Tomaras, T. N.

2014-09-01

73

Many statistical models, and in particular autoregressive moving-average time series models, can be regarded as means of transforming the data to white noise, that is, to an uncorrelated sequence of errors. If the parameters are known exactly, this random sequence can be computed directly from the observations; when this calculation is made with estimates substituted for the true parameter values,
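A minimal sketch of the portmanteau idea this classic abstract introduces: if the fitted model is adequate, the residual autocorrelations are near zero, and the Box-Pierce statistic Q = n Σ r_k^2 behaves approximately like a chi-square variable. The AR(1) setup and all numbers below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(1) series, fit phi by least squares, examine residuals.
n, phi_true = 2000, 0.6
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal()

phi_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
resid = x[1:] - phi_hat * x[:-1]   # should be close to white noise

def box_pierce(e, m):
    """Q = n * sum of squared residual autocorrelations at lags 1..m."""
    e = e - e.mean()
    denom = np.dot(e, e)
    r = [np.dot(e[k:], e[:-k]) / denom for k in range(1, m + 1)]
    return len(e) * sum(rk**2 for rk in r)

Q = box_pierce(resid, m=10)
# For an adequate AR(1) fit, Q is roughly chi-square with 10 - 1 = 9 d.o.f.
print(round(Q, 1))
```

A Q far out in the right tail of the reference chi-square distribution signals that the model has not reduced the data to white noise.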

G. E. P. Box; David A. Pierce

1970-01-01

74

DISCRETE-TIME CONTROLLED MARKOV PROCESSES WITH AVERAGE COST CRITERION: A SURVEY

This work is a survey of the average cost control problem for discrete-time Markov processes. The authors have attempted to put together a comprehensive account of the considerable research on this problem over the past three decades. The exposition ranges from finite to Borel state and action spaces and includes a variety of methodologies to find and characterize optimal policies.

ARISTOTLE ARAPOSTATHIS; VIVEK S. BORKAR; EMMANUEL FERN; MRINAL K. GHOSH; STEVEN I. MARCUS

75

Time averaged properties along unstable periodic orbits and chaotic orbits in two map systems

NASA Astrophysics Data System (ADS)

Unstable periodic orbit (UPO) recently has become a keyword in analyzing complex phenomena in geophysical fluid dynamics and space physics. In this paper, sets of UPOs in low dimensional maps are theoretically or systematically found, and time averaged properties along UPOs are studied, in relation to those of chaotic orbits.

Saiki, Y.; Yamada, M.

2008-08-01

76

Climate Projections Using Bayesian Model Averaging and Space-Time Dependence

Projections of future climatic changes are a key input to the design of climate change mitigation and adaptation strategies. Current climate change projections

Haran, Murali

77

Probability distribution of the time-averaged mean-square displacement of a Gaussian process

NASA Astrophysics Data System (ADS)

We study the probability distribution of the time-averaged mean-square displacement of a discrete Gaussian process. An empirical approximation for the probability density is suggested and numerically validated for fractional Brownian motion. The optimality of quadratic forms for inferring dynamical and microrheological quantities from individual random trajectories is discussed, with emphasis on a reliable interpretation of single-particle tracking experiments.
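The quantity under study, the time-averaged mean-square displacement (TAMSD) of a single trajectory, is straightforward to compute. The sketch below uses ordinary Brownian motion rather than the fractional Brownian motion analyzed in the paper, purely to keep the example self-contained; the lag values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def tamsd(x, lag):
    """Time-averaged mean-square displacement of one trajectory at one lag."""
    d = x[lag:] - x[:-lag]
    return float(np.mean(d**2))

# Ordinary Brownian motion as the simplest Gaussian process:
T = 100_000
x = np.cumsum(rng.normal(size=T))  # unit-variance steps, so MSD(lag) ~ lag

# The TAMSD of a single trajectory scatters around its ensemble value;
# the scatter grows as the lag approaches the trajectory length.
for lag in (10, 100, 1000):
    print(lag, tamsd(x, lag))
```

It is exactly this trajectory-to-trajectory scatter whose probability distribution the paper approximates, which matters when interpreting individual single-particle tracks.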

Grebenkov, Denis S.

2011-09-01

78

An upper bound to time-averaged space-charge limited diode currents

The Child-Langmuir law limits the steady-state current density across a one-dimensional planar diode. While it is known that the peak current density can surpass this limit when the boundary conditions vary in time, it remains an open question whether the average current can violate the Child-Langmuir limit under time-dependent conditions. For the case where the applied voltage is constant but the electric field at the cathode is allowed to vary in time, one-dimensional particle-in-cell simulations suggest that such a violation is impossible. Although a formal proof is not given, an upper bound on the time-averaged current density is offered.
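For reference, the steady-state Child-Langmuir limit discussed above can be evaluated directly from the standard formula; the 10 kV across 1 cm operating point below is an arbitrary illustrative choice, not a case from the paper.

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602e-19  # elementary charge, C
M_E = 9.109e-31       # electron mass, kg

def child_langmuir(V, d):
    """Space-charge-limited current density (A/m^2) of a planar vacuum
    diode: J = (4*eps0/9) * sqrt(2e/m) * V^(3/2) / d^2."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) * V**1.5 / d**2

# 10 kV across a 1 cm gap:
J = child_langmuir(1.0e4, 0.01)
print(f"{J / 1e4:.2f} A/cm^2")  # about 2.3 A/cm^2
```

The question the paper addresses is whether the time average of J can exceed this steady-state value when the cathode field varies in time.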

Griswold, M. E.; Fisch, N. J. [Princeton Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States); Wurtele, J. S. [Department of Physics, University of California at Berkeley, Berkeley, California 94720 (United States)

2010-11-15

79

The holographic recording of the time history of a flow feature in three dimensions is discussed. The use of diffuse illumination holographic interferometry for the three-dimensional visualization of flow features such as shock waves and turbulent eddies is described. The double-exposure and time-average methods are compared using the characteristic function and the results from a flow simulator. A time history requires a large hologram recording rate. Results of holographic cinematography of the shock waves in a flutter cascade are presented as an example. Future directions of this effort, including the availability and development of suitable lasers, are discussed.

Decker, A.J.

1984-06-01

80

NASA Technical Reports Server (NTRS)

The holographic recording of the time history of a flow feature in three dimensions is discussed. The use of diffuse illumination holographic interferometry for the three-dimensional visualization of flow features such as shock waves and turbulent eddies is described. The double-exposure and time-average methods are compared using the characteristic function and the results from a flow simulator. A time history requires a large hologram recording rate. Results of holographic cinematography of the shock waves in a flutter cascade are presented as an example. Future directions of this effort, including the availability and development of suitable lasers, are discussed. Previously announced in STAR as N84-21849

Decker, A. J.

1984-01-01

81

A Stochastic Model of Space-Time Variability of Tropical Rainfall: I. Statistics of Spatial Averages

NASA Technical Reports Server (NTRS)

Global maps of rainfall are of great importance in connection with modeling of the earth's climate. Comparison between the maps of rainfall predicted by computer-generated climate models with observation provides a sensitive test for these models. To make such a comparison, one typically needs the total precipitation amount over a large area, which could be hundreds of kilometers in size, over extended periods of time of order days or months. This presents a difficult problem since rain varies greatly from place to place as well as in time. Remote sensing methods using ground radar or satellites detect rain over a large area by essentially taking a series of snapshots at infrequent intervals and indirectly deriving the average rain intensity within a collection of pixels, usually several kilometers in size. They measure area average of rain at a particular instant. Rain gauges, on the other hand, record rain accumulation continuously in time but only over a very small area tens of centimeters across, say, the size of a dinner plate. They measure only a time average at a single location. In making use of either method one needs to fill in the gaps in the observation - either the gaps in the area covered or the gaps in time of observation. This involves using statistical models to obtain information about the rain that is missed from what is actually detected. This paper investigates such a statistical model and validates it with rain data collected over the tropical Western Pacific from shipborne radars during TOGA COARE (Tropical Oceans Global Atmosphere Coupled Ocean-Atmosphere Response Experiment). The model incorporates a number of commonly observed features of rain. While rain varies rapidly with location and time, the variability diminishes when averaged over larger areas or longer periods of time. Moreover, rain is patchy in nature - at any instant, on average, only a certain fraction of the observed pixels contain rain.
The fraction of area covered by rain decreases, as the size of a pixel becomes smaller. This means that within what looks like a patch of rainy area in a coarse resolution view with larger pixel size, one finds clusters of rainy and dry patches when viewed on a finer scale. The model makes definite predictions about how these and other related statistics depend on the pixel size. These predictions were found to agree well with data. In a subsequent second part of the work we plan to test the model with rain gauge data collected during the TRMM (Tropical Rainfall Measuring Mission) ground validation campaign.

Kundu, Prasun K.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)

2002-01-01

82

Time-averaged in-line digital holographic interferometry for vibration analysis

NASA Astrophysics Data System (ADS)

Time-averaged in-line digital holography is applied for vibration analysis. In particular, by use of a double-exposure approach, simultaneous determination of vibration mode shape and mean static state deformation during a vibration cycle are obtained. The subtraction of two numerically reconstructed digital holograms recorded at the same resonant frequency but with a small difference in amplitude shows the mixing of Bessel-type time-averaged fringes owing to vibration and of the double-exposure fringes owing to differences in the mean deformation of the object. It is shown that separation of these fringe patterns can be readily accomplished numerically. An experimental demonstration of this effect by use of in-line digital holography for relatively small membranes is demonstrated.

Asundi, Anand; Singh, Vijay Raj

2006-04-01

83

Conditional sampling schemes based on the Variable Interval Time Averaging (VITA) algorithm

NASA Astrophysics Data System (ADS)

The variable interval time averaging (VITA) algorithm was tested in a variety of boundary layers for its ability to detect motions principally involved in the production of shear stress. A VITA+LEVEL scheme (which uses a variance and level criterion) was devised and is shown to produce length scale statistics that are independent of the conditioning criteria, where those from the VITA scheme are not.
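The variance criterion underlying VITA detection can be sketched compactly: flag intervals where the short-time (variable-interval) variance of the velocity signal exceeds a multiple k of its long-time variance. The window length, threshold factor, and synthetic signal below are illustrative assumptions; the actual algorithm and the VITA+LEVEL variant involve further conditioning criteria not shown here.

```python
import numpy as np

rng = np.random.default_rng(3)

def vita_events(u, window, k=1.0):
    """Flag samples where the short-time variance over a sliding window
    exceeds k times the long-time variance of the whole signal."""
    kernel = np.ones(window) / window
    c1 = np.convolve(u, kernel, mode="same")       # local mean
    c2 = np.convolve(u**2, kernel, mode="same")    # local mean square
    local_var = c2 - c1**2
    return local_var > k * np.var(u)

# Synthetic velocity signal: quiet noise with one burst of strong activity.
u = rng.normal(0.0, 0.3, 2000)
u[1000:1100] += rng.normal(0.0, 2.0, 100)  # a "bursting" event

events = vita_events(u, window=50, k=2.0)
print(events[1000:1100].mean(), events[:900].mean())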

Morrison, J. F.; Tsai, H. M.; Bradshaw, P.

1986-08-01

84

A simple, easy-to-use graphical method is presented for use in determining if there is any statistically significant trend or pattern over time in an underlying Poisson event rate of occurrence or binomial failure on demand probability. The method is based on the combined use of both an exponentially weighted moving-average (EWMA) and a Shewhart chart. Two nuclear power plant examples
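A compact sketch of the EWMA half of the scheme, applied to simulated Poisson event counts with a deliberate rate increase. The in-control rate, smoothing constant, and limit width are illustrative assumptions; the paper's method pairs the EWMA with a Shewhart chart that additionally flags individual counts falling outside mu0 ± 3*sqrt(mu0).

```python
import numpy as np

def ewma(counts, mu0, lam=0.2, L=3.0):
    """EWMA trace and control limits for Poisson counts with a known
    in-control mean mu0 (for a Poisson rate, Var = mu0)."""
    z, trace = mu0, []
    for x in counts:
        z = lam * x + (1.0 - lam) * z
        trace.append(z)
    sigma = np.sqrt(lam / (2.0 - lam) * mu0)   # asymptotic EWMA std
    return np.array(trace), (mu0 - L * sigma, mu0 + L * sigma)

rng = np.random.default_rng(4)
# 30 in-control periods at rate 4, then a sustained shift to rate 9:
counts = np.concatenate([rng.poisson(4.0, 30), rng.poisson(9.0, 15)])
z, (lo, hi) = ewma(counts, mu0=4.0)
print("alarm raised:", bool((z > hi).any()))
```

Plotted over time, the EWMA trace drifting outside its limits is exactly the "graphical" trend evidence the abstract describes.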

Harry F. Martz; Paul H. Kvam

1996-01-01

85

Single FGF-23 Measurement and Time-Averaged Plasma Phosphate Levels in Hemodialysis Patients

Summary Background and objectives Plasma phosphate levels display considerable intraindividual variability. The phosphatonin fibroblast growth factor 23 is a central regulator of plasma phosphate levels, and it has been postulated to be a more stable marker than conventional CKD–mineral and bone disorder parameters. Thus, fibroblast growth factor 23 has been hypothesized to reflect time-averaged plasma phosphate levels in CKD patients. Design, setting, participants, & measurements Among 40 patients from the outpatient dialysis center, serial measurements of plasma calcium and phosphate (before every dialysis session) as well as C-terminal fibroblast growth factor 23, parathyroid hormone, and alkaline phosphatase (one time weekly) were performed over a study period of 4 weeks in November and December of 2011. Intraindividual variability of repeated plasma fibroblast growth factor 23 measurements compared with other CKD–mineral and bone disorder markers was tested, and the association of a single plasma fibroblast growth factor 23 measurement with time-averaged plasma phosphate levels was analyzed. Results Against expectations, intraindividual variability of fibroblast growth factor 23 (median coefficient of variation=27%; interquartile range=20–35) was not lower than variability of plasma phosphate (median coefficient of variation=15%; interquartile range=10–20), parathyroid hormone (median coefficient of variation=24%; interquartile range=15–39), plasma calcium (median coefficient of variation=3%; interquartile range=2–4), or alkaline phosphatase (median coefficient of variation=5%; interquartile range=3–10). Moreover, the correlation between the last fibroblast growth factor 23 measurement after 4 weeks and time-averaged plasma phosphate did not surpass the correlation between the last fibroblast growth factor 23 measurement and a single plasma phosphate value (r=0.67, P<0.001; r=0.76, P<0.001, respectively). 
Conclusions Surprisingly, fibroblast growth factor 23 was not more closely associated to time-averaged plasma phosphate levels than a single plasma phosphate value, and it did not show a lower intraindividual variability than other tested markers of CKD–mineral and bone disorder. Thus, fibroblast growth factor 23 should not be used in clinical practice as a reflector of time-averaged plasma phosphate levels. PMID:23846463

Seiler, Sarah; Lucisano, Gaetano; Ege, Philipp; Fell, Lisa H.; Rogacev, Kyrill S.; Lerner-Gräber, Anne; Klingele, Matthias; Ziegler, Matthias; Fliser, Danilo

2013-01-01

86

On the construction of a time base and the elimination of averaging errors in proxy records

NASA Astrophysics Data System (ADS)

Proxies are sources of climate information which are stored in natural archives (e.g. ice-cores, sediment layers on ocean floors and animals with calcareous marine skeletons). Measuring these proxies produces very short records and mostly involves sampling solid substrates, which is subject to the following two problems: Problem 1: Natural archives are equidistantly sampled at a distance grid along their accretion axis. Starting from these distance series, a time series needs to be constructed, as comparison of different data records is only meaningful on a time grid. The time series will be non-equidistant, as the accretion rate is non-constant. Problem 2: A typical example of sampling solid substrates is drilling. Because of the dimensions of the drill, the holes drilled will not be infinitesimally small. Consequently, samples are not taken at a point in distance, but rather over a volume in distance. This holds for most sampling methods in solid substrates. As a consequence, when the continuous proxy signal is sampled, it will be averaged over the volume of the sample, resulting in an underestimation of the amplitude. Whether this averaging effect is significant depends on the volume of the sample and the variations of interest of the proxy signal. Starting from the measured signal, the continuous signal needs to be reconstructed in order to eliminate these averaging errors. The aim is to provide an efficient identification algorithm to identify the non-linearities in the distance-time relationship, called time base distortions, and to correct for the averaging effects. Because this is a parametric method, an assumption about the proxy signal needs to be made: the proxy record on a time base is assumed to be harmonic; this is a reasonable assumption because natural archives often exhibit a seasonal cycle. In a first approach the averaging effects are assumed to be in one direction only, i.e. the direction of the axis on which the measurements were performed.
The measured averaged proxy signal is modeled by the following signal model:

ȳ(n, θ) = (1/Δ) ∫ from (n−1/2)Δ to (n+1/2)Δ of y(m, θ) dm

where m is the position, x(m) = Δm, θ are the unknown parameters, and y(m, θ) is the proxy signal we want to identify (the proxy signal as found in the natural archive), which we model as:

y(m, θ) = A_0 + Σ from k=1 to H of [ A_k sin(kω t(m)) + A_(k+H) cos(kω t(m)) ]

with t(m):

t(m) = m T_S + g(m) T_S

Here T_S = 1/f_S is the sampling period, f_S the sampling frequency, and g(m) the unknown time base distortion (TBD). In this work a splines approximation of the TBD is chosen:

g(m) = Σ from l=1 to b of b_l φ_l(m)

where b is a vector of unknown time base distortion parameters and φ is a set of splines. The estimates of the unknown parameters were obtained with a nonlinear least squares algorithm. The vessel density measured in the mangrove tree R. mucronata was used to illustrate the method. The vessel density is a proxy for rainfall in tropical regions. The proxy data on the newly constructed time base showed a yearly periodicity, as expected, and the correction for the averaging effect increased the amplitude by 11.18%.

Beelaerts, V.; De Ridder, F.; Bauwens, M.; Schmitz, N.; Pintelon, R.

2009-04-01

87

TIME-AVERAGE-BASED METHODS FOR MULTI-ANGULAR SCALE ANALYSIS OF COSMIC-RAY DATA

Over the past decade, a number of experiments dealt with the problem of measuring the arrival direction distribution of cosmic rays, looking for information on the propagation mechanisms and the identification of their sources. Any deviation from isotropy may be regarded as a signature of unforeseen or unknown phenomena, mostly if well localized in the sky and occurring at low rigidity. It induced experimenters to search for excesses down to angular scales as narrow as 10°, disclosing the issue of properly filtering contributions from wider structures. A solution commonly envisaged was based on time-average methods to determine the reference value of cosmic-ray flux. Such techniques are nearly insensitive to signals wider than the time window in use, thus allowing us to focus the analysis on medium- and small-scale signals. Nonetheless, the signal often cannot be excluded in the calculation of the reference value, which induces systematic errors. The use of time-average methods recently revealed important discoveries about the medium-scale cosmic-ray anisotropy, present both in the northern and southern hemispheres. It is known that the excess (or deficit) is observed as less intense than it really is, and that fake deficit zones are rendered around true excesses, because of the absolute lack of a priori knowledge of which signal is true and which is not. This work is an attempt to critically review the use of time-average-based methods for observing extended features in the cosmic-ray arrival distribution pattern.

Iuppa, R. [Dipartimento di Fisica dell'Universita 'Tor Vergata' di Roma, via della Ricerca Scientifica 1, I-00133 Roma (Italy)]; Di Sciascio, G., E-mail: roberto.iuppa@roma2.infn.it, E-mail: giuseppe.disciascio@roma2.infn.it [Istituto Nazionale di Fisica Nucleare, Sezione di Roma Tor Vergata, via della Ricerca Scientifica 1, I-00133 Roma (Italy)]

2013-04-01

88

Averaging time modeling of exposure simulation with application to the El Camino Real vehicle data

When profiles of activity patterns are used to generate time series of simulated exposure, one typically samples from exposure distributions which are microenvironment-specific to each activity. If the simulation time step is short, then independent sampling at each time step, ignoring autocorrelation, will result in aggregates with too little variability from one simulation to another. Autocorrelation can often be modeled with one or two extra parameters and then used in the simulation. Furthermore, one may substantially reduce computation by generating a single averaged exposure for each activity segment whose distribution depends in a simple way on the activity duration and the modeled autocorrelation. The process is illustrated using the El Camino Real commuting exposure study data of Ott, Switzer, and Willits.
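The computational shortcut described above can be sketched as follows: for an autocorrelated exposure process, the variance of an activity-segment average follows in closed form from the segment duration and the autocorrelation, so segment averages can be drawn directly instead of simulating every time step. The AR(1) model and all parameter values below are assumptions for illustration, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(5)

def avg_variance(sigma2, rho, n):
    """Variance of the n-step time average of a stationary AR(1) exposure
    series with marginal variance sigma2 and lag-1 autocorrelation rho."""
    k = np.arange(1, n)
    return sigma2 / n * (1.0 + 2.0 * np.sum((1.0 - k / n) * rho**k))

sigma2, rho, n = 1.0, 0.9, 60   # e.g. 60 one-second steps in one activity

# Ignoring autocorrelation treats the average as having variance sigma2/n,
# which understates the variability of the aggregate:
print(sigma2 / n)                    # naive, independent-sampling value
print(avg_variance(sigma2, rho, n))  # value accounting for autocorrelation

# Monte Carlo check: simulate AR(1) segments and average each one.
eps = np.sqrt(sigma2 * (1.0 - rho**2))   # innovation std
means = []
for _ in range(4000):
    x = np.empty(n)
    x[0] = rng.normal(0.0, np.sqrt(sigma2))
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal(0.0, eps)
    means.append(x.mean())
print(np.var(means))   # close to avg_variance(sigma2, rho, n)
```

Drawing one average per activity segment from a distribution with this variance replaces thousands of per-step draws, which is the computational saving the abstract describes.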

Switzer, P.; Ott, W.R.; Willits, N.H. (Department of Statistics, Stanford University, CA (United States))

1991-01-01

89

An analytical model for time-averaged ion energy distributions in collisional rf sheaths

NASA Astrophysics Data System (ADS)

An analytical model is proposed for time-averaged ion energy distributions (IEDs) in collisional rf sheaths (λi < sm, where λi is the ion mean free path and sm is the sheath thickness), in which charge transfer is the dominant ion-neutral collision mechanism. Our model is different from the model in Israel et al 2006 J. Appl. Phys. 99 093303 in two aspects. Firstly, to calculate the ion flux, we consider ions created in both the space charge region and the quasi-neutral region instead of only the quasi-neutral region. Secondly, to calculate the ion energy, we use an rf-modulated transit time instead of only the average transit time. Consequently, our model is valid over a wider pressure range (from a weakly collisional sheath to a strongly collisional sheath) compared with the model in Israel et al 2006 J. Appl. Phys. 99 093303. Our model shows that, in a collisional rf sheath (argon, 5 Pa, 27.12 MHz and 100 W), 65% of the ion flux in the IED comes from secondary ions created in the space charge region. Results of our model are compared with those obtained by measurement, direct integration method and particle-in-cell/Monte Carlo collision simulation.

Chen, Wen-Cong; Pu, Yi-Kang

2014-08-01

90

A modeling study of the time-averaged electric currents in the vicinity of isolated thunderstorms

NASA Technical Reports Server (NTRS)

A thorough examination of the results of a time-dependent computer model of a dipole thunderstorm revealed that there are numerous similarities between the time-averaged electrical properties and the steady-state properties of an active thunderstorm. Thus, the electrical behavior of the atmosphere in the vicinity of a thunderstorm can be determined with a formulation similar to what was first described by Holzer and Saxon (1952). From the Maxwell continuity equation of electric current, a simple analytical equation was derived that expresses a thunderstorm's average current contribution to the global electric circuit in terms of the generator current within the thundercloud, the intracloud lightning current, the cloud-to-ground lightning current, the altitudes of the charge centers, and the conductivity profile of the atmosphere. This equation was found to be nearly as accurate as the more computationally expensive numerical model, even when it is applied to a thunderstorm with a reduced conductivity thundercloud, a time-varying generator current, a varying flash rate, and a changing lightning mix.

Driscoll, Kevin T.; Blakeslee, Richard J.; Baginski, Michael E.

1992-01-01

91

Time-averaged fluxes of lead and fallout radionuclides to sediments in Florida Bay

Recent, unmixed sediments from mud banks of central Florida Bay were dated using 210Pb/226Ra, and chronologies were verified by comparing sediment lead temporal records with Pb/Ca ratios in annual layers of coral (Montastrea annularis) located on the ocean side of the Florida Keys. Dates of sediment lead peaks (1978 ± 2) accord with prior observations of a 6 year lag between the occurrence of maximum atmospheric lead in 1972 and peak coral lead in 1978. Smaller lags of 1-2 years occur between the maximum atmospheric radionuclide fallout and peaks in sediment temporal records of 137Cs and Pu. Such lags are consequences of system time averaging (STA) in which atmospherically delivered particle-associated constituents accumulate and mix in a (sedimentary?) reservoir before transferring to permanent sediments and coral. STA model calculations, using time-dependent atmospheric inputs, produced optimized profiles in excellent accord with measured sediment 137Cs, Pu, lead, and coral lead distributions. Derived residence times of these particle tracers (16 ± 1, 15.7 ± 0.7, 19 ± 3, and 16 ± 2 years, respectively) are comparable despite differences in sampling locations, in accumulating media, and in element loading histories and geochemical properties. For a 16 year weighted mean residence time, STA generates the observed 6 year lead peak lag. Evidently, significant levels of nondegradable, particle-associated contaminants can persist in Florida Bay for many decades following elimination of external inputs. Present results, in combination with STA model analysis of previously reported radionuclide profiles, suggest that decade-scale time averaging may occur widely in recent coastal marine sedimentary environments. Copyright 2000 by the American Geophysical Union.
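The system-time-averaging idea, a reservoir that holds atmospherically delivered material for a mean residence time before passing it on to permanent sediments, can be sketched as a first-order compartment model. The input history below is a stylized Gaussian peaked in 1972, not the actual atmospheric lead record, so the computed lag only qualitatively mirrors the 6-year lag the paper derives for a 16-year residence time.

```python
import numpy as np

years = np.arange(1900, 2001)
# Stylized atmospheric lead input: rises to a 1972 peak, then declines.
inp = np.exp(-0.5 * ((years - 1972) / 12.0) ** 2)

tau = 16.0   # assumed reservoir residence time, years
R = 0.0
outp = []
for I in inp:            # dR/dt = I - R/tau, forward Euler with dt = 1 yr
    R = R + I - R / tau
    outp.append(R / tau)  # flux delivered to permanent sediments
outp = np.array(outp)

lag = years[np.argmax(outp)] - years[np.argmax(inp)]
print("peak lag (years):", lag)
```

The reservoir both delays and broadens the input peak, which is why particle-bound contaminants can keep arriving in sediments for decades after atmospheric inputs stop.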

Robbins, J.A.; Holmes, C.; Halley, R.; Bothner, M.; Shinn, E.; Graney, J.; Keeler, G.; TenBrink, M.; Orlandini, K.A.; Rudnick, D.

2000-01-01

92

Turnaround and Closure Rates in the Charter and District Sectors

ERIC Educational Resources Information Center

This article examines the turnaround and closure rates of charter and district schools in 10 states to determine if one school sector has done comparatively better than the other at addressing chronically low-performing schools. Low-performing charter and district schools were identified in 2003-04 based on their reading and math proficiency rates…

Stuit, David

2012-01-01

93

Importing Leaders for School Turnarounds: Lessons and Opportunities

ERIC Educational Resources Information Center

One of the biggest challenges in education today is identifying talented candidates to successfully lead turnarounds of persistently low-achieving schools. Evidence suggests that the traditional principal pool is already stretched to capacity and cannot supply enough leaders to fix failing schools. But potentially thousands of leaders capable of…

Kowal, Julie; Hassel, Emily Ayscue

2011-01-01

94

Can Principals' Emotional Intelligence Matter to School Turnarounds?

ERIC Educational Resources Information Center

Through the literature review, the study aims to explore the relationship between the emotional intelligence (EI) of principals and the turnarounds of low-performing schools and generate some hypotheses regarding this issue. Rigorous, empirical studies published in the last 15 years are selected and reviewed around three relevant topics: the…

Cai, Qijie

2011-01-01

95

Grid Resource Monitoring and Selection for Rapid Turnaround Applications

In this paper, we present a resource monitoring and selection method for rapid turnaround grid applications (for example, within 10 seconds). The novelty of our method is the distributed evaluation of resources for rapidly selecting the appropriate idle resources. We integrate our method with a widely used resource management system, namely the Monitoring and Discovery System 2

Kensuke Muraki; Yasuhiro Kawasaki; Yasuharu Mizutani; Fumihiko Ino; Kenichi Hagihara

2005-01-01

96

The Rural Turnaround in Ohio: Some Evidence Related to Implications.

ERIC Educational Resources Information Center

Limited to residents of small towns, villages and the rural open country, the study assessed the implications of the rural turnaround in the southern Ohio counties of Athens, Gallia, Jackson, Meigs, and Vinton. All five counties experienced outmigration in the 1950s, and all but Athens County lost population through outmigration in the 1960s. In…

Thomas, Donald W.; Bachtel, Douglas C.

97

Time-averaged and instantaneous porosity distribution in a fluidized bed with an immersed obstacle

Erosion in bubbling fluidized-bed combustors is a serious issue that may affect their reliability and economics. Available evidence suggests that the key to understanding this erosion is detailed knowledge of the coupled and complex phenomena of solids circulation and bubble motion. A thin, transparent "two-dimensional" rectangular fluidized bed with an obstacle served as a rough model for a fluidized-bed combustor. This model was studied experimentally and computationally, using two hydrodynamic equation sets. The computed hydrodynamic results agree reasonably well with experimental data. Bubble frequencies and sizes compare well with those obtained from analyzing a high speed motion picture frame-by-frame. Time-averaged porosities computed from both models agree with time-averaged porosity distributions measured with a gamma-ray densitometer. The principal differences between the data and the computations in this paper result from asymmetries present in the experiment and from the simplified solids rheology used in the hydrodynamic models. 53 refs., 26 figs., 7 tabs.

Lyczkowski, R.W.; Bouillard, J.X.; Gidaspow, D.; Berry, G.F.

1990-02-01

98

Turnaround Schools as a U-Turn for Student Achievement: The Rotary of School Reform Efforts

ERIC Educational Resources Information Center

This case study presents data to support the notion that turnaround school policy can improve the efficiency of how traditionally low-performing schools function. The schools that were successful in implementing the UVA Turnaround Program training developed a clear understanding of the expectations for participating in the UVA Turnaround Program…

Mette, Ian M.

2012-01-01

99

Apollo/Saturn 5 space vehicle countdown. Volume 2: Turnaround from scrub

NASA Technical Reports Server (NTRS)

The procedures required to prepare a space vehicle for subsequent launch attempt after cancelling lift-off activities are presented. The cancellation must occur after the start of cryogenic loading, but prior to initiation of ignition sequences. The sequence of operations necessary to return the space vehicle to a predetermined configuration at which time the launch count can be resumed or rescheduled for following launch opportunities is developed. The considerations and constraints that are the basis for the scrub/turnaround times are described.

1972-01-01

100

Time-averaged flow over a hydrofoil at high Reynolds number

NASA Astrophysics Data System (ADS)

At high Reynolds number, the flow of an incompressible viscous fluid over a lifting surface is a rich blend of fluid dynamic phenomena. Here, boundary layers formed at the leading edge develop over both the suction and pressure sides of the lifting surface, transition to turbulence, separate near the foil's trailing edge, combine in the near wake, and eventually form a turbulent far-field wake. The individual elements of this process have been the subject of much prior work. However, controlled experimental investigations of these flow phenomena and their interaction on a lifting surface at Reynolds numbers typical of heavy-lift aircraft wings or full-size ship propellers (chord-based Reynolds numbers Re_C ~ 10^7 to 10^8) are largely unavailable. This paper presents results from an experimental effort to identify and measure the dominant features of the flow over a two-dimensional hydrofoil at nominal Re_C values from near one million to more than 50 million. The experiments were conducted in the US Navy's William B. Morgan Large Cavitation Channel with a solid-bronze hydrofoil (2.1 m chord, 3.0 m span, 17 cm maximum thickness) at flow speeds from 0.25 to 18.3 m/s. The foil section, a modified NACA 16 with a pressure side that is nearly flat and a suction side that terminates in a blunt trailing-edge bevel, approximates the cross-section of a generic naval propeller blade. Time-averaged flow-field measurements drawn from laser-Doppler velocimetry, particle-imaging velocimetry, and static pressure taps were made for two trailing-edge bevel angles (44° and 56°). These velocity and pressure measurements were concentrated in the trailing-edge and near-wake regions, but also include flow conditions upstream and far downstream of the foil, as well as static pressure distributions on the foil surface and test section walls.
Observed Reynolds-number variations in the time-averaged flow over the foil are traced to changes in suction-side boundary-layer transition and separation. Observed Reynolds-number variations in the time-averaged near wake suggest significant changes occur in the dynamic flow in the range of Re_C investigated.

Bourgoyne, Dwayne A.; Hamel, Joshua M.; Ceccio, Steven L.; Dowling, David R.

2003-12-01

101

A Continuous Time Model for Interest Rate with Autoregressive and Moving Average Components

NASA Astrophysics Data System (ADS)

In this paper we present a multi-factor continuous-time autoregressive moving-average (CARMA) model for the short and forward interest rates. This model provides a more adequate statistical description of the short and forward rate dynamics. We show that this is a tractable term structure model and provide closed-form solutions to bond and bond option prices, bond yields, and the forward rate volatility term structure. We demonstrate the capabilities of our model by calibrating it to market data and show that it can reproduce rather complex shapes of the empirical volatility term structure. In particular, a three-factor CARMA model can easily capture the dynamics of the level, slope, and curvature factors widely documented in term structure models.

Benth, F. E.; Koekebakker, S.; Zakamouline, V.

2010-09-01

102

Time-Averaged Adiabatic Potentials: Versatile Matter-Wave Guides and Atom Traps

We demonstrate a novel class of trapping potentials, time-averaged adiabatic potentials (TAAP), which allows the generation of a large variety of traps for quantum gases and matter-wave guides for atom interferometers. Examples include stacks of pancakes, rows of cigars, and multiple rings or sickles. The traps can be coupled through controllable tunneling barriers or merged altogether. We present analytical expressions for pancake-, cigar-, and ring-shaped traps. The ring geometry is of particular interest for guided matter-wave interferometry as it provides a perfectly smooth waveguide of widely tunable diameter and thus adjustable sensitivity of the interferometer. The flexibility of the TAAP would make possible the use of Bose-Einstein condensates as coherent matter waves in large-area atom interferometers.

Lesanovsky, Igor; Klitzing, Wolf von [Institute of Electronic Structure and Laser, Foundation for Research and Technology-Hellas, P.O. Box 1527, GR-71110 Heraklion (Greece)

2007-08-24

103

Average diffusion-to-the-wall times for laser-tagged molecules in a long cylinder

NASA Astrophysics Data System (ADS)

An approximate closed-form expression is derived for the average wall-reaching rate k_W, or time τ_W = k_W^-1, of laser-tagged molecules diluted in an inert carrier gas migrating to the wall of a long cylinder, where they are removed with a given sticking coefficient. The exact equation for the sticky-wall problem requires a trial-and-error solution of a transcendental relation involving Bessel functions. With the aid of the Fermi-Amaldi albedo theory, an explicit "compromise" relation is derived which approximates the exact diffusion curve of k_W versus the sticking coefficient rather well. The result is applied to obtain curves of k_W versus the sticking coefficient at different gas-mix pressures for UF6 diluted in N2.

Eerkens, J. W.

104

ARECIBO MULTI-FREQUENCY TIME-ALIGNED PULSAR AVERAGE-PROFILE AND POLARIZATION DATABASE

We present Arecibo time-aligned, total intensity profiles for 46 pulsars over an unusually wide range of radio frequencies and multi-frequency, polarization-angle density diagrams, and/or polarization profiles for 57 pulsars at some or all of the frequencies 50, 111/130, 430, and 1400 MHz. The frequency-dependent dispersion delay has been removed in order to align the profiles for study of their spectral evolution, and wherever possible the profiles of each pulsar are displayed on the same longitude scale. Most of the pulsars within Arecibo's declination range that are sufficiently bright for such spectral or single pulse analysis are included in this survey. The calibrated single pulse sequences and average profiles are available by web download for further study.

Hankins, Timothy H. [Physics Department, New Mexico Tech, Socorro, NM 87801 (United States); Rankin, Joanna M. [Physics Department, University of Vermont, Burlington, VT 05401 (United States)], E-mail: thankins@nrao.edu, E-mail: Joanna.Rankin@uvm.edu

2010-01-15

105

Time-distance helioseismology: A new averaging scheme for measuring flow vorticity

NASA Astrophysics Data System (ADS)

Context. Time-distance helioseismology provides information about vector flows in the near-surface layers of the Sun by measuring wave travel times between points on the solar surface. Specific spatial averages of travel times have been proposed for distinguishing between flows in the east-west and north-south directions and measuring the horizontal divergence of the flows. No specific measurement technique has, however, been developed to measure flow vorticity. Aims: Here we propose a new measurement technique tailored to measuring the vertical component of vorticity. Fluid vorticity is a fundamental property of solar convection zone dynamics and of rotating turbulent convection in particular. Methods: The method consists of measuring the travel time of waves along a closed contour on the solar surface in order to approximate the circulation of the flow along this contour. Vertical vorticity is related to the difference between clockwise and anti-clockwise travel times. Results: We applied the method to characterize the vortical motions of solar convection using helioseismic data from the Helioseismic and Magnetic Imager onboard the Solar Dynamics Observatory (SDO/HMI) and from the Michelson Doppler Imager onboard the Solar and Heliospheric Observatory (SOHO/MDI). Away from the equator, a clear correlation between vertical vorticity and horizontal divergence is detected. Horizontal outflows are associated with negative vorticity in the northern hemisphere and positive vorticity in the southern hemisphere. The signal is much stronger for HMI than for MDI observations. We characterize the spatial power spectrum of the signal by comparison with a noise model. Vertical vorticity at horizontal wavenumbers below 250/R_⊙ can be probed with this helioseismic technique.
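The circulation measurement underlying this travel-time scheme rests on Stokes' theorem: the line integral of the flow around a closed contour equals the area-averaged vertical vorticity times the enclosed area. A minimal numerical sketch (a hypothetical solid-body-rotation flow, not the authors' helioseismic pipeline) verifies that relation:

```python
import numpy as np

def circulation(u, center, R, n=1000):
    """Approximate the line integral of flow u along a circle of radius R.

    By Stokes' theorem this equals the mean vertical vorticity inside the
    contour times the enclosed area.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    pts = center + R * np.column_stack([np.cos(theta), np.sin(theta)])
    tang = np.column_stack([-np.sin(theta), np.cos(theta)])  # anticlockwise tangent
    vel = np.array([u(p) for p in pts])
    return np.sum(np.einsum('ij,ij->i', vel, tang)) * (2.0 * np.pi * R / n)

omega = 0.5                                   # rotation rate; vorticity = 2*omega
u = lambda p: omega * np.array([-p[1], p[0]])  # solid-body rotation field
gamma = circulation(u, np.zeros(2), R=1.0)
# gamma should equal vorticity * area = 2*omega * pi * R^2
```

In the paper's method the circulation itself is inferred from the clockwise/anti-clockwise travel-time difference rather than computed from a known flow field.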

Langfellner, J.; Gizon, L.; Birch, A. C.

2014-10-01

106

NASA Astrophysics Data System (ADS)

A vector aeroacoustics method is developed to analyze the acoustic energy flow path from the rotating source. In this method, the instantaneous and time-averaged active acoustic intensity vectors are evaluated from the time-domain and frequency-domain acoustic pressure and acoustic velocity formulations, respectively. With the above method, the acoustic intensity vectors and the acoustic energy streamlines are visualized to investigate the propagation feature of the noise radiated from the monopole and dipole point sources and the rotor in subsonic rotation. The result reveals that a portion of the acoustic energy spirals many circles before moving towards the far field, and another portion of the acoustic energy firstly flows inward along the radial direction and then propagates along the axial direction. Further, an acoustic black hole exists in the plane of source rotation, from which the acoustic energy cannot escape once the acoustic energy flows into it. Moreover, by visualizing the acoustic intensity field around the rotating sources, the acoustic-absorption performance of the acoustic liner built in the casing and centerbody is discussed.

Mao, Yijun; Xu, Chen; Qi, Datong

2015-02-01

107

A multi-wavelength inversion method is extended to reconstruct the time-averaged temperature distribution in a non-axisymmetric, unconfined turbulent sooting flame from multi-wavelength measured data of low-time-resolution outgoing emission and transmission radiation intensities. Gaussian, beta, and uniform probability density functions (PDFs) are used to simulate the turbulent fluctuation of temperature. The reconstruction of the time-averaged temperature consists of three steps.

L. H. Liu; G. L. Man

2003-01-01

108

NASA Astrophysics Data System (ADS)

In order to determine exposure compliance with the electromagnetic fields from a base station's antenna in the far-field region, we should calculate the spatially averaged field value in a defined space. This value is calculated based on the measured values obtained at several points within the restricted space. According to the ICNIRP guidelines, at each point in the space, the reference levels are averaged over any 6 min (from 100 kHz to 10 GHz) for the general public. Therefore, the more points we use, the longer the measurement time becomes. For practical application, it is very advantageous to spend less time on measurement. In this paper, we analyzed the difference in average values between 6 min and shorter periods and compared it with the standard uncertainty for measurement drift. Based on the standard deviation from the 6 min averaging value, the proposed minimum averaging time is 1 min.

Kim, Byung Chan; Park, Seong-Ook

109

simulations are commonly utilized, and the mean breakthrough curve is often calculated from the arithmetic average of all realizations. The mean breakthrough curve computed from arithmetic averaging in general overestimates the mass flow rate.

Lu, Zhiming

110

Uncertainty and variability in historical time-weighted average exposure data.

Beginning around 1940, private companies began processing uranium and thorium ore, compounds, and metals for the Manhattan Engineer District and later the U.S. Atomic Energy Commission (AEC). Personnel from the AEC's Health and Safety Laboratory (HASL) visited many of the plants to assess worker exposures to radiation and radioactive materials. They developed a time-and-task approach to estimating "daily weighted average" (DWA) concentrations of airborne uranium, thorium, radon, and radon decay products. While short-term exposures greater than 10^5 dpm m^-3 of uranium and greater than 10^5 pCi L^-1 of radon were observed, DWA concentrations were much lower. The HASL-reported DWA values may be used as inputs for dose reconstruction in support of compensation decisions, but they have no numerical uncertainties associated with them. In this work, Monte Carlo methods are used retrospectively to assess the uncertainty and variability in the DWA values for 63 job titles from five different facilities that processed U, U ore, Th, or 226Ra-222Rn between 1948 and 1955. Most groups of repeated air samples are well described by lognormal distributions. Combining samples associated with different tasks often results in a reduction of the geometric standard deviation (GSD) of the DWA to less than those GSD values typical of individual tasks. Results support the assumption of a GSD value of 5 when information on uncertainty in DWA exposures is unavailable. Blunders involving arithmetic, transposition, and transcription are found in many of the HASL reports. In 5 out of the 63 cases, these mistakes result in overestimates of DWA values by a factor of 2 to 2.5, and in 2 cases DWA values are underestimated by factors of 3 to 10. PMID:18188049

Davis, Adam J; Strom, Daniel J

2008-02-01

111

Mercury's Time-Averaged and Induced Magnetic Fields from MESSENGER Observations

NASA Astrophysics Data System (ADS)

Observations from MESSENGER's Magnetometer (MAG) have allowed the construction of a baseline, time-averaged model for Mercury's magnetosphere. The model, constructed with the approximation that the magnetospheric shape can be represented as a paraboloid, includes two external (magnetopause and magnetotail) current systems and an internal (dipole) field. We take advantage of the geometry of the orbital MAG data to constrain all but one of the model parameters, and their ranges, directly from the observations. These parameters are then used as a priori constraints in the magnetospheric model, and the remaining parameter, the dipole moment, is estimated from a grid search. The model provides an excellent fit to the MAG observations, with a root-mean-square misfit of less than 20 nT globally. The mean distance from the planetary dipole origin to the magnetopause subsolar point, R_SS, is 1.45 R_M (where R_M = 2440 km) and the mean planetary dipole moment is 190 nT R_M^3. Temporal variations in the global-scale magnetic fields result from changes in solar wind ram pressure, P_ram, at Mercury that arise from the planet's 88-day eccentric orbit around the Sun and from transient, rapid changes in solar wind conditions. For a constant planetary dipole moment, R_SS varies as P_ram^(-1/6). However, magnetopause crossings obtained from several Mercury years of MESSENGER observations indicate that R_SS is proportional to P_ram^(-1/a), where a is greater than 6, suggesting induction in Mercury's highly conducting metallic interior. We obtain an effective dipole moment that varies by up to ±15% about its mean value. We further investigate the periodic 88-day induction signature and use the paraboloid model to describe the spatial structure in the inducing magnetopause field, together with estimates for the outer radius of Mercury's liquid core and possible overlying solid iron sulfide layer, to calculate induced core fields.
The baseline magnetospheric model is adapted to include the 88-day periodic induction signature, and residuals to this time-varying global model from magnetically quiet orbits are then used to investigate structure at higher degree and order in the internal and external fields.

Johnson, C. L.; Winslow, R. M.; Anderson, B. J.; Purucker, M. E.; Korth, H.; Al Asad, M. M.; Slavin, J. A.; Baker, D. N.; Hauck, S. A.; Phillips, R. J.; Zuber, M. T.; Solomon, S. C.

2012-12-01

112

A Weight-Averaged Interpolation Method for Coupling Time-Accurate Rarefied and Continuum Flows

NASA Astrophysics Data System (ADS)

A novel approach to coupling rarefied and continuum flow regimes as a single, hybrid model is introduced. The method borrows from techniques used in the simulation of spray flows to interpolate Lagrangian point-particles onto an Eulerian grid in a weight-averaged sense. A brief overview of traditional methods for modeling both rarefied and continuum domains is given, and a review of the literature regarding rarefied/continuum flow coupling is presented. Details of the theoretical development of the method of weighted interpolation are then described. The method evaluates macroscopic properties at the nodes of a CFD grid via the weighted interpolation of all simulated molecules in a set surrounding the node. The weight factor applied to each simulated molecule is the inverse of the linear distance between it and the given node. During development, the method was applied to several preliminary cases, including supersonic flow over an airfoil, subsonic flow over tandem airfoils, and supersonic flow over a backward-facing step, all at low Knudsen numbers. The main thrust of the research centered on the time-accurate expansion of a rocket plume into a near-vacuum. The method proves flexible enough to be used with various flow solvers, demonstrated by the use of Fluent as the continuum solver for the preliminary cases and a NASA-developed Large Eddy Simulation research code, WRLES, for the full lunar model. The method is applicable to a wide range of Mach numbers and is completely grid independent, allowing the rarefied and continuum solvers to be optimized for their respective domains without consideration of the other. The work presented demonstrates the validity and flexibility of the method of weighted interpolation as a novel concept in the field of hybrid flow coupling. The method marks a significant divergence from current practices in the coupling of rarefied and continuum flow domains and offers a kernel on which to base an ongoing field of research.
It has the potential to significantly increase the flexibility of hybrid rarefied/continuum flow analyses.
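The node-interpolation step described above can be sketched in a few lines. The function and argument names below are illustrative, not from the dissertation; only the weighting rule (inverse of the linear particle-to-node distance) follows the abstract:

```python
import numpy as np

def node_average(node, positions, values, eps=1e-12):
    """Weight-averaged interpolation of particle values onto one grid node.

    Each simulated molecule contributes with weight 1/d, the inverse of its
    linear distance d to the node.
    """
    d = np.linalg.norm(positions - node, axis=1)
    w = 1.0 / (d + eps)          # eps guards against a particle sitting on the node
    return float(np.sum(w * values) / np.sum(w))

# Two molecules at distances 1 and 2 from the node, carrying values 0 and 3:
# weights are 1 and 0.5, so the node value is (1*0 + 0.5*3) / 1.5 = 1.0.
v = node_average(np.zeros(2), np.array([[1.0, 0.0], [2.0, 0.0]]), np.array([0.0, 3.0]))
```

Because the weighting uses only distances, the same routine works regardless of which solver (DSMC-like particle code or CFD) produced the particles or owns the grid, which is the grid-independence point made above.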

Diaz, Steven William

113

Diagnostic quality of time-averaged ECG-gated CT data

NASA Astrophysics Data System (ADS)

Purpose: ECG-gated CTA allows visualization of the aneurysm and stentgraft during the different phases of the cardiac cycle, although with a lower SNR per cardiac phase than without ECG gating using the same dose. In our institution, abdominal aortic aneurysm (AAA) is evaluated using non-ECG-gated CTA. Some common CT scanners cannot reconstruct a non-gated volume from ECG-gated acquired data. In order to obtain the same diagnostic image quality, we propose offline temporal averaging of the ECG-gated data. This process, though straightforward, is fundamentally different from taking a non-gated scan, and its result will certainly differ as well. The purpose of this study is to quantitatively investigate how good off-line averaging approximates a non-gated scan. Method: Non-gated and ECG-gated CT scans have been performed on a phantom (Catphan 500). Afterwards the phases of the ECG-gated CTA data were averaged to create a third dataset. The three sets are compared with respect to noise properties (NPS) and frequency response (MTF). To study motion artifacts identical scans were acquired on a programmable dynamic phantom. Results and Conclusions: The experiments show that the spatial frequency content is not affected by the averaging process. The minor differences observed for the noise properties and motion artifacts are in favor of the averaged data. Therefore the averaged ECG-gated phases can be used for diagnosis. This enables the use of ECG-gating for research on stentgrafts in AAA, without impairing clinical patient care.
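The offline averaging operation itself is straightforward; the study's contribution is showing how well it approximates a non-gated scan. A toy sketch with synthetic noise volumes (not CT data, and ignoring cardiac motion, which is exactly why the authors compared NPS, MTF, and motion artifacts) illustrates the expected noise reduction from averaging N phases:

```python
import numpy as np

rng = np.random.default_rng(42)
n_phases, shape, sigma = 10, (64, 64), 5.0

# Synthetic "cardiac phases": a constant object plus independent noise per phase.
phases = [100.0 + sigma * rng.standard_normal(shape) for _ in range(n_phases)]

avg = np.mean(phases, axis=0)     # offline temporal average of the gated phases
noise_single = float(np.std(phases[0]))
noise_avg = float(np.std(avg))    # ideally sigma / sqrt(n_phases)
```

With independent noise the averaged volume's noise drops by about sqrt(10), recovering the SNR that gating divided among the phases.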

Klein, Almar; Oostveen, Luuk J.; Greuter, Marcel J. W.; Hoogeveen, Yvonne; Schultze Kool, Leo J.; Slump, Cornelis H.; Renema, W. Klaas Jan

2009-02-01

114

High-pressure rocket engine turnaround duct computational fluid dynamics analysis

Current and projected high-pressure, staged-combustion rocket engine designs, such as the XLR129 and space shuttle main engine (SSME), introduced a type of turnaround duct flowpath wherein the turnaround is accompanied by an increase in mean radius and a decrease in fluid velocity through a subsequent diffuser. This turnaround duct flowpath is needed in the rocket engine to conduct high-pressure turbopump

G. B. Cox Jr.

1984-01-01

115

Meeting the challenge of a group practice turnaround.

Many healthcare organizations that acquired group practices to enhance their market share have found that the practices have not met their financial goals. Turning around a financially troubled, hospital-owned group practice is challenging but not impossible for healthcare organizations that take certain basic actions. Direction, data, desire, dedication, and drive must be present to effect the financial turnaround of a group practice. The healthcare organization needs to evaluate the practice's strategy and operations and identify the issues that are hindering the practice's ability to optimize revenues. Efforts to achieve profitable operations have to be ongoing. PMID:11258274

Porn, L M

2001-03-01

116

NASA Astrophysics Data System (ADS)

The variable-interval time-averaging (“VITA”) algorithm has been tested in a variety of turbulent boundary layers for its ability to detect shear-stress-producing motions from hot-wire signals. A “VITA + LEVEL” scheme (which uses criteria for both short-time variance and short-time average, i.e. “level”) has been devised, and used in several different boundary layers. This scheme yields length-scale statistics that are acceptably independent of the conditioning criteria, which the VITA scheme does not.

Morrison, J. F.; Tsai, H. M.; Bradshaw, P.

1988-12-01

118

In this study we appraised the effects of core excess reactivity and average coolant temperature on the operable time of the Nigeria Research Reactor-1 (NIRR-1), a miniature neutron source reactor (MNSR). The duration of the reactor operating time and fluence depletion under different operating modes, as well as the change in core excess reactivity with the temperature coefficient, were investigated.

Y. A. Ahmed; I. B. Mansir; I. Yusuf; G. I. Balogun; S. A. Jonah

2011-01-01

119

Use of the harmonic mean: On averaging count-rate data for variable-time, fixed-count intervals

The average of count rates from measurements performed by accumulating a fixed total number of counts over a variable time interval should be obtained with the harmonic mean. Use of an arithmetic mean in these cases gives incorrect results. Uncertainty estimators for this harmonically averaged counting rate, such as the standard deviation, must also be suitably transformed to correspond to those for the harmonic mean.
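The point is easy to verify numerically: with a fixed count N per interval, each measured rate is r_i = N/t_i, and the harmonic mean of the r_i equals total counts over total time, while the arithmetic mean overstates the rate. A quick sketch with illustrative numbers:

```python
# Fixed-count timing: N counts accumulated over each variable interval t_i.
N = 1000
times = [2.0, 4.0, 5.0]                  # seconds per interval (illustrative values)
rates = [N / t for t in times]           # measured rates r_i = N / t_i

arith = sum(rates) / len(rates)          # arithmetic mean: biased high
harm = len(rates) / sum(1.0 / r for r in rates)   # harmonic mean
true_rate = len(times) * N / sum(times)  # total counts / total time
```

The harmonic mean reproduces the true rate exactly because summing 1/r_i = t_i/N recovers the total elapsed time.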

Colle, R.

1996-09-01

120

From average travel time budgets to daily travel time distributions: an appraisal of two… Keywords: travel time, distributions, travel behaviour. …and human energy expenditure for travel, which is assumed to be constant in time and space. The second one…

Toint, Philippe

121

NASA Technical Reports Server (NTRS)

Measurement of time-averaged velocity, density, temperature, and turbulence in gas flows using a nonintrusive, point-wise measurement technique based on molecular Rayleigh scattering is discussed. Subsonic and supersonic flows in a 25.4-mm diameter free jet facility were studied. The developed instrumentation utilizes a Fabry-Perot interferometer to spectrally resolve molecularly scattered light from a laser beam passed through a gas flow. The spectrum of the scattered light contains information about velocity, density, and temperature of the gas. The technique uses a slow scan, low noise 16-bit depth CCD camera to record images of the fringes formed by Rayleigh scattered light passing through the interferometer. A kinetic theory model of the Rayleigh scattered light is used in a nonlinear least squares fitting routine to estimate the unknown parameters from the fringe images. The ability to extract turbulence information from the fringe image data proved to be a challenge, since the fringe is broadened not only by turbulence, but also by thermal fluctuations and aperture effects from collecting light over a range of scattering angles. Figure 1 illustrates broadening of a Rayleigh spectrum typical of flow conditions observed in this work due to aperture effects and turbulence for a scattering angle, χ_s, of 90°, f/3.67 collection optics, mean flow velocity, u_k, of 300 m/s, and turbulent velocity fluctuations, σ_uk, of 55 m/s. The greatest difficulty in processing the image data was decoupling the thermal and turbulence broadening in the spectrum. To aid in this endeavor, it was necessary to seed the ambient air with smoke and dust particulates, taking advantage of the turbulence broadening in the Mie scattering component of the spectrum of the collected light (not shown in the figure). The primary jet flow was not seeded due to the difficulty of the task.
For measurement points lacking particles, velocity, density, and temperature information could reliably be recovered; however, the turbulence estimates contained significant uncertainty. Resulting flow parameter estimates are presented for surveys of Mach 0.6, 0.95, and 1.4 jet flows. Velocity, density, and temperature were determined with accuracies of 5 m/s, 1.5%, and 1%, respectively, in flows with no particles present, and with accuracies of 5 m/s, 1-4%, and 2% in flows with particles. Comparison with hot-wire data for the Mach 0.6 condition demonstrated turbulence estimates with accuracies of about 5 m/s outside the jet core, where Mie scattering from dust/smoke particulates aided in the estimation of turbulence. Turbulence estimates could not be recovered with any significant accuracy for measurement points where no particles were present.

Mielke, Amy F.; Seasholtz, Richard G.; Elam, Krisie A.; Panda, Jayanta

2004-01-01

122

Evaluation of Time-Averaged CERES TOA SW Product Using CAGEX Data

NASA Technical Reports Server (NTRS)

A major component in the analysis of the Earth's radiation budget is the recovery of daily and monthly averaged radiative parameters using noncontinuous spatial and temporal measurements from polar orbiting satellites. In this study, the accuracy of the top of atmosphere (TOA) shortwave (SW) temporal interpolation model for the Clouds and the Earth's Radiant Energy System (CERES) is investigated using temporally intensive half-hourly TOA fluxes from the CERES/ARM/GEWEX Experiment (CAGEX) over Oklahoma (Charlock et al., 1996).

Carlson, Ann B.; Wong, Takmeng

1998-01-01

123

A Measurement of Time-Averaged Aerosol Optical Depth using Air-Showers Observed in Stereo by HiRes

Department of Physics and High Energy Astrophysics Institute, Salt Lake City, UT 84112, USA; Los Alamos National Laboratory; Department of Physics and Astronomy, Piscataway, NJ 08854, USA; University of Tokyo, Institute for Cosmic Ray Research.

124

Meeting the Challenge of School Turnaround: Lessons from the Intersection of Research and Practice

ERIC Educational Resources Information Center

"Turnaround" has become the new buzzword in education. From states and districts paying for principals to be trained as turnaround specialists to U.S. Secretary of Education Arne Duncan's call to turn around the country's 5,000 lowest-performing schools, the concept of rapid, large-scale reform in individual schools--not just incremental school…

Salmonowicz, Michael

2009-01-01

125

On the Edge: A Study of Small Private Colleges That Have Made a Successful Financial Turnaround

ERIC Educational Resources Information Center

This article describes a qualitative study that involved two small private universities, examining their process of transformation from institutions headed toward closure to institutions that underwent a successful turnaround. The primary questions that guided the study included the issues and circumstances that led to the need for a turnaround,…

Carey, Amy Bragg

2014-01-01

126

School Turnaround Fever: The Paradoxes of a Historical Practice Promoted as a New Reform

ERIC Educational Resources Information Center

School "turnaround" has received significant attention recently in education literature and policy action, especially as a means to dramatically improve urban education. In current common education usage, "turnaround" refers to the rapid, significant improvement in the academic achievement of persistently low-achieving schools.…

Peck, Craig; Reitzug, Ulrich C.

2014-01-01

127

Using temporal averaging to decouple annual and nonannual information in AVHRR NDVI time series

the presence of artificial interannual value drift in any region possessing multiyear regularly sampled time series remotely sensed imagery. Further, this approach is objective and does not require the prior identification of a SIT within the region of study...

Kastens, Jude Heathcliff; Lerner, David E.; Jakubauskas, Mark E.

2003-11-01

128

Significant mean cloud level circulation changes since 1974, noted in 1982 Venus cloud motion observations, have been validated by independent measurements of cloud motions in nearly identical sets of images; agreement is obtained not only for the average zonal and meridional components, but for the eddy circulation's meridional transport of momentum. In contrast to 1979 observations, the latitudinal profile and the longitudinally averaged zonal component of the cloud motions for 1982 exhibit jets near 45 deg latitude in both the northern and southern hemispheres. 30 references.

Limaye, S.S.; Grassotti, C.; Kuetemeyer, M.J.

1988-02-01

129

NASA Astrophysics Data System (ADS)

Detection and attribution methodologies have been developed over the years to delineate anthropogenic from natural drivers of climate change and impacts. A majority of prior attribution studies, which have used climate model simulations and observations or reanalysis datasets, have found evidence for human-induced climate change. This paper tests the hypothesis that Granger causality can be extracted from the bivariate series of globally averaged land surface temperature (GT) observations and observed CO2 in the atmosphere using a reverse cumulative Granger causality test. This proposed extension of the classic Granger causality test is better suited to handle the multisource nature of the data and provides further statistical rigor. The results from this modified test show evidence for Granger causality from a proxy of total radiative forcing (RC), which in this case is a transformation of atmospheric CO2, to GT. Prior literature failed to extract these results via the standard Granger causality test. A forecasting test shows that a holdout set of GT can be better predicted with the addition of lagged RC as a predictor, lending further credibility to the Granger test results. However, since second-order-differenced RC is neither normally distributed nor variance stationary, caution should be exercised in the interpretation of our results.
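The classic bivariate Granger test that this paper extends compares a restricted autoregression of one series on its own lags against an unrestricted one that adds lags of the other series; a significant F statistic means the added lags help. The sketch below (one lag, plain least squares, synthetic data) illustrates only the standard test, not the paper's reverse cumulative variant, and all names in it are illustrative:

```python
import numpy as np

def granger_f(y, x):
    """One-lag Granger F statistic: does lagged x improve an AR(1) fit of y?"""
    Y, y1, x1 = y[1:], y[:-1], x[:-1]
    Xr = np.column_stack([np.ones_like(y1), y1])       # restricted: own lag only
    Xu = np.column_stack([np.ones_like(y1), y1, x1])   # unrestricted: adds lagged x
    ssr = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    ssr_r, ssr_u = ssr(Xr), ssr(Xu)
    df = len(Y) - Xu.shape[1]
    return (ssr_r - ssr_u) / (ssr_u / df)              # 1 restriction -> F(1, df)

# Synthetic pair where x drives y with a one-step lag.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.empty(500); y[0] = 0.0
for t in range(1, 500):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

F_xy = granger_f(y, x)   # large: x Granger-causes y in this construction
F_yx = granger_f(x, y)   # small: no feedback from y to x
```

In the paper's setting y and x play the roles of GT and the RC forcing proxy, with the added complications of differencing and the cumulative construction noted above.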

Kodra, Evan; Chatterjee, Snigdhansu; Ganguly, Auroop R.

2011-06-01

130

Empirical wind model for the middle and lower atmosphere. Part 1: Local time average

NASA Technical Reports Server (NTRS)

The HWM90 thermospheric wind model was revised in the lower thermosphere and extended into the mesosphere and lower atmosphere to provide a single analytic model for calculating zonal and meridional wind profiles representative of the climatological average for various geophysical conditions. Gradient winds from CIRA-86 plus rocket soundings, incoherent scatter radar, MF radar, and meteor radar provide the data base and are supplemented by previous data driven model summaries. Low-order spherical harmonics and Fourier series are used to describe the major variations throughout the atmosphere including latitude, annual, semiannual, and longitude (stationary wave 1). The model represents a smoothed compromise between the data sources. Although agreement between various data sources is generally good, some systematic differences are noted, particularly near the mesopause. Root mean square differences between data and model are on the order of 15 m/s in the mesosphere and 10 m/s in the stratosphere for zonal wind, and 10 m/s and 4 m/s, respectively, for meridional wind.

Hedin, A. E.; Fleming, E. L.; Manson, A. H.; Schmidlin, F. J.; Avery, S. K.; Franke, S. J.

1993-01-01

131

Autonomous Robotic Refueling System (ARRS) for rapid aircraft turnaround

NASA Astrophysics Data System (ADS)

An autonomous robotic refuelling system is being developed to achieve rapid aircraft turnaround, notably during combat operations. The proposed system includes a gantry positioner with sufficient reach to position a robotic arm that performs the refuelling tasks; a six degree of freedom manipulator equipped with a remote center of compliance, torque sensor, and a gripper that can handle standard tools; a computer vision system to locate and guide the refuelling nozzle, inspect the nozzle, and avoid collisions; and an operator interface with video and graphics display. The control system software will include components designed for trajectory planning and generation, collision detection, sensor interfacing, sensory processing, and human interfacing. The robotic system will be designed so that upgrading to perform additional tasks will be relatively straightforward.

Williams, O. R.; Jackson, E.; Rueb, K.; Thompson, B.; Powell, K.

132

American Institute of Aeronautics and Astronautics Time-Averaged and Unsteady Imaging Analysis of a

Nomenclature fragment: L(x,t) = leeward edge position; W(x,t) = windward edge position; jet density; crossflow density; measured ignition time (Ben T. Zinn Combustion Laboratory, School of Aerospace Engineering).

Seitzman, Jerry M.

133

The problem of high-resolution imaging through long horizontal-path ground-level turbulence has gone unsolved since it was first addressed many decades ago. In this paper I describe a method that shows promise for diffraction-limited imaging through ground-level turbulence with large (meters) apertures and at large (kilometers) distances. The key lies in collecting image data in the spatial frequency domain via the method of Fourier telescopy and taking suitable time averages of the magnitude and phase of the Fourier telescopy signal. The method requires active illumination of the target with laser light, and the time averages required will likely be over many tens of seconds if not tens of minutes or more. The scheme will thus not be suitable for time-varying scenes. The basic scheme is described, and the principal challenges are briefly discussed. PMID:22307125

Rhodes, William T

2012-02-01

134

Coherent averaging of the passive fathometer response using short correlation time

experimental data.1–3,8,9 The technique relies on surface-generated noise from wind and waves.10,11 Shipping … and, for correlation times less than a few seconds, the observed signal-to-noise ratio (SNR) agrees … correlation of ambient noise data from a drifting vertical array has been the subject of much discussion in recent years

Gerstoft, Peter

135

be represented by the following relationship: Avg. Dem. Sat. Time = Σtj / P (2). From Equation (2), it should be clear that the average time to demand satisfaction is a function of the area to the left of the satisfaction curve and below the total demand line... DIM SLOPE1(20, 20): DIM DEM(20): DIM TEMPDEM(20): DIM SLTEMP(20, 20): DIM PATHOPT(21): DIM PATH1(21): DIM TIME(20, 20): DIM SLTEMP2(20, 20) ... NODES=0: HOLDER=0: HOLD2=0: HOLD3=0: HOLD4=0 ... 100 KEY OFF: COLOR 7: NLINE=22: GOSUB 270  110 PGMN$="TSP FOR CUSTOMER DEMAND...

Ryan, James Patrick

1992-01-01

136

Theory for the optimal control of time-averaged quantities in open quantum systems

We present variational theory for optimal control over a finite time interval in quantum systems with relaxation. The corresponding Euler-Lagrange equations determining the optimal control field are derived. In our theory the optimal control field fulfills a high order differential equation, which we solve analytically for some limiting cases. We determine quantitatively how relaxation effects limit the control of the system. The theory is applied to open two level quantum systems. An approximate analytical solution for the level occupations in terms of the applied fields is presented. Different other applications are discussed.
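The class of problem described admits a compact generic statement; the observable, control field, and relaxation term below are standard placeholders, not the paper's exact notation:

```latex
% Generic time-averaged control objective for an open quantum system:
% maximize the time average of an observable O over the control field eps(t),
% subject to dissipative dynamics with relaxation term Gamma[rho].
J[\varepsilon] = \frac{1}{T}\int_0^T \mathrm{Tr}\!\left[\hat{O}\,\rho(t)\right]\mathrm{d}t ,
\qquad
\dot{\rho} = -\frac{i}{\hbar}\bigl[\hat{H}_0 + \varepsilon(t)\hat{V},\,\rho\bigr] + \Gamma[\rho] .
```

Requiring the first variation of J with respect to ε(t) to vanish under the dynamical constraint is what yields Euler-Lagrange equations for the optimal field.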

Ilia Grigorenko; Martin E. Garcia; K. H. Bennemann

2002-03-25

137

Phase-resolved and time-averaged puff motions of an excited stack-issued transverse jet

NASA Astrophysics Data System (ADS)

The dynamics of puff motions in an excited stack-issued transverse jet were studied experimentally in a wind tunnel. The temporal and spatial evolution processes of the puffs induced by acoustic excitation were examined using the smoke flow visualization method and high-speed particle image velocimetry. The temporal and spatial evolutions of the puffs were examined using phase-resolved ensemble-averaged velocity fields and the velocity, length scales, and vorticity characteristics of the puffs were studied. The time-averaged velocity fields were calculated to analyze the velocity distributions and vorticity contours. The results show that a puff consists of a pair of counter-rotating vortex rings. An initial vortex ring was formed due to a concentration of vorticity at the lee side of the issuing jet at the instant of the mid-oscillation cycle. A vortex ring rotating in the opposite direction to that of the initial vortex ring was subsequently formed at the upwind side of the issuing jet. These two counter-rotating vortex rings formed a "mushroom" vortex pair, which was deflected by the crossflow and traveled downstream along a time-averaged trajectory of zero vorticity. The trajectory was situated far above the time-averaged streamline evolving from the leading edge of the tube. The velocity magnitudes of the vortex rings at the upwind and the lee side decreased with time evolution as the puffs traveled downstream due to momentum dissipation and entrainment effects. The puffs traveling along the trajectory of zero vorticity caused large velocities to appear above the leading-edge streamline.
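Phase-resolved ensemble averaging of the kind described can be sketched as follows; the array shapes and names are illustrative assumptions, not the authors' processing code:

```python
import numpy as np

def phase_average(snapshots, phases, n_bins=8):
    """snapshots: (N, ny, nx, 2) array of PIV velocity fields (u, v);
    phases: (N,) array of excitation phase in [0, 2*pi).
    Returns (n_bins, ny, nx, 2) of ensemble-mean fields per phase bin."""
    snapshots = np.asarray(snapshots, dtype=float)
    bins = (np.asarray(phases) / (2 * np.pi) * n_bins).astype(int) % n_bins
    out = np.empty((n_bins,) + snapshots.shape[1:])
    for b in range(n_bins):
        out[b] = snapshots[bins == b].mean(axis=0)  # ensemble mean per phase
    return out

# Synthetic demo: a uniform velocity field oscillating with excitation phase.
N = 16
demo_phases = np.linspace(0, 2 * np.pi, N, endpoint=False)
demo = np.ones((N, 2, 2, 2)) * np.cos(demo_phases)[:, None, None, None]
binned = phase_average(demo, demo_phases, n_bins=8)
```

The time-averaged field the abstract also uses is simply `snapshots.mean(axis=0)`, i.e. the mean over all snapshots regardless of phase.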

Hsu, C. M.; Huang, R. F.

2013-07-01

138

The instantaneous and time-averaged flow fields in the tip region of a ducted marine propulsor are examined. In this flow, a primary tip-leakage vortex interacts with a secondary, co-rotating trailing edge vortex and other co- and counter-rotating vorticity found in the blade wake. Planar particle imaging velocimetry (PIV) is used to examine the flow in a plane approximately perpendicular to

Ghanem F. Oweis; Steven L. Ceccio

2005-01-01

139

Background: Time-domain analysis (TD) of the signal-averaged electrocardiogram (SAECG) presents a higher incidence of false positives in inferior myocardial infarction (MI), whereas spectral turbulence analysis (STA) suffers from a higher incidence of false positives in anterior MI. We investigated the hypothesis that a combined TD and STA (TD+STA) analysis of the SAECG could improve its predictive accuracy for major arrhythmic

Rafael Vázquez; Edward B Caref; Francisco Torres; Margarita Reina; Aurora Espina; Nabil El-Sherif

1999-01-01

140

NASA Astrophysics Data System (ADS)

In infinite ergodic theory, two distributional limit theorems are well-known. One is characterized by the Mittag-Leffler distribution for time averages of L¹ functions, i.e., integrable functions with respect to an infinite invariant measure. The other is characterized by the generalized arc-sine distribution for time averages of non-L¹ functions. Here, we provide another distributional behavior of time averages of non-L¹ functions in one-dimensional intermittent maps where each has an indifferent fixed point and an infinite invariant measure. Observation functions considered here are non-L¹ functions which vanish at the indifferent fixed point. We call this class of observation functions weak non-L¹ functions. Our main result represents a first step toward a third distributional limit theorem, i.e., a distributional limit theorem for this class of observables, in infinite ergodic theory. To prove our proposition, we propose a stochastic process induced by a renewal process to mimic a Birkhoff sum of a weak non-L¹ function in the one-dimensional intermittent maps.

Akimoto, Takuma; Shinkai, Soya; Aizawa, Yoji

2015-01-01

141

High-pressure rocket engine turnaround duct computational fluid dynamics analysis

NASA Astrophysics Data System (ADS)

Current and projected high-pressure, staged-combustion rocket engine designs, such as the XLR129 and space shuttle main engine (SSME), introduced a type of turnaround duct flowpath wherein the turnaround is accompanied by an increase in mean radius and a decrease in fluid velocity through a subsequent diffuser. This turnaround duct flowpath is needed in the rocket engine to conduct high-pressure turbopump turbine discharge gases toward the main injector while maintaining a compact turbomachinery and powerhead flowpath. Current experience in the SSME shows that this type of turnaround duct can experience flow separation with an attendant increase in flow turbulence levels. Advances in computational fluid dynamics techniques over the last decade produced a number of codes which are applicable to the analysis and prediction of flow field behavior within such ducts. A version of the TEACH code developed at Pratt & Whitney Engineering Division was used to analyze three representative turnaround duct configurations. The study included the effects of fluid properties, inlet velocity flowfields, solution grid mesh size, and numerical solution algorithm upon calculated flow. Results include flow streamline comparisons, velocity profiles, and calculated total pressure losses. The results showed that turnaround ducts of the type studied can exhibit separation but that appropriately designed turnaround ducts can be made to function without suffering from internal flow separation.

Cox, G. B., Jr.

1984-10-01

142

This paper proposes a first attempt to visualize and analyze the vibrations induced by a bone-conduction device and propagating at the surface of the skin of a human face. The method is based on a new approach in a so-called quasi-time-averaging regime, resulting in the retrieval of the vibration amplitude and phase from a sequence of digital Fresnel holograms recorded with a high image rate. The design of the algorithm depends on the ratio between the exposure time and the vibration period. The results show the propagation of vibrations at the skin surface, and quantitative analysis is achieved by the proposed approach. PMID:23292386

Leclercq, Mathieu; Karray, Mayssa; Isnard, Vincent; Gautier, François; Picart, Pascal

2013-01-01

143

Time-weighted average sampling of airborne n-valeraldehyde by a solid-phase microextraction device.

A solid-phase microextraction (SPME) device was used as a time-weighted average sampler for n-valeraldehyde. The SPME device was first modified to improve the wearer's acceptance as a passive sampler. Then a poly(dimethylsiloxane)-divinylbenzene fiber was used and O-2,3,4,5,6-(pentafluorobenzyl)hydroxylamine hydrochloride (PFBHA) was loaded onto the fiber. Vapors of known concentrations around the threshold limit value-time-weighted average of n-valeraldehyde and at specific relative humidities (RHs) were generated by syringe pumps in a dynamic generation system. n-Valeraldehyde vapors in gas bags were also generated. An exposure chamber was designed to allow measurement of face velocities, temperatures, exposing vapor concentrations, and RHs. Gas chromatography with flame ionization detection was used for sample analysis. The appropriate adsorption time for loading PFBHA onto the SPME coating was determined to be 2 min, and the desorption time for the oxime formed after sampling was optimized to be 2 min. The experimental sampling constant was found to be (3.86 ± 0.13) × 10⁻² cm³/min, and face velocity was not expected to have an effect on the sampler. PMID:12058903
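Recovering a time-weighted average concentration from a passive sampler with a known sampling constant is a one-line calculation; the collected-mass figure below is an invented illustration, not a value from the paper:

```python
# Sampling constant reported in the abstract, in cm^3/min: the sampler behaves
# as if it pumped this volume of air per minute.
K_S = 3.86e-2

def twa_concentration(mass_ug, minutes, k_s=K_S):
    """C_TWA = m / (k_s * t): collected mass divided by effective sampled
    air volume, giving a time-weighted average in ug/cm^3."""
    if minutes <= 0:
        raise ValueError("sampling time must be positive")
    return mass_ug / (k_s * minutes)

# Hypothetical example: 0.5 ug collected over an 8 h (480 min) shift.
c = twa_concentration(0.5, 480)
```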

Tsai, S W; Chang, T A

2002-04-19

144

NASA Astrophysics Data System (ADS)

The instantaneous and time-averaged flow fields in the tip region of a ducted marine propulsor are examined. In this flow, a primary tip-leakage vortex interacts with a secondary, co-rotating trailing edge vortex and other co- and counter-rotating vorticity found in the blade wake. Planar particle imaging velocimetry (PIV) is used to examine the flow in a plane approximately perpendicular to the mean axis of the primary vortex. An identification procedure is used to characterize multiple regions of compact vorticity in the flow fields as series of Gaussian vortices. Significant differences are found between the vortex properties from the time-averaged flow fields and the average vortex properties identified in the instantaneous flow fields. Variability in the vortical flow field results from spatial wandering of the vortices, correlated fluctuations of the vortex strength and core size, and both correlated and uncorrelated fluctuations in the relative positions of the vortices. This variability leads to pseudo-turbulent velocity fluctuations. Corrections for some of this variability are performed on the instantaneous flow fields. The resulting processed flow fields reveal a significant increase in flow variability in a region relatively far downstream of the blade trailing edge, a phenomenon that is masked through the process of simple averaging. This increased flow variability is also accompanied by the inception of discrete vortex cavitation bubbles, which is an unexpected result, since the mean flow pressures in the region of inception are much higher than the vapor pressure of the liquid. This suggests that unresolved fine-scale vortex interactions and stretching may be occurring in the region of increased flow variability.
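Characterizing compact vorticity regions as "Gaussian vortices" is commonly done with a Lamb-Oseen profile; assuming that form here (an assumption, since the abstract does not specify the model), the tangential velocity around one identified vortex is:

```python
import math

def gaussian_vortex_vtheta(r, gamma, r_core):
    """Lamb-Oseen (Gaussian) vortex: tangential velocity at radius r for
    total circulation gamma and core radius r_core."""
    if r == 0:
        return 0.0
    alpha = 1.25643  # constant placing the velocity peak exactly at r_core
    return gamma / (2 * math.pi * r) * (1 - math.exp(-alpha * (r / r_core) ** 2))

# Velocity rises from zero at the center, peaks near r_core, decays as 1/r.
v_half = gaussian_vortex_vtheta(0.5, 1.0, 1.0)
v_peak = gaussian_vortex_vtheta(1.0, 1.0, 1.0)
v_far = gaussian_vortex_vtheta(10.0, 1.0, 1.0)
```

Fitting such profiles to each compact vorticity patch gives the per-vortex strength and core size whose correlated fluctuations the abstract discusses.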

Oweis, Ghanem F.; Ceccio, Steven L.

2005-05-01

145

ERIC Educational Resources Information Center

Given the importance of good teaching and leadership for school success, turnaround schools should think carefully about how to structure professional environments that reward and motivate excellence. A system of "Pay-for-Contribution" that includes tools such as hard-to-staff and skill shortage pay, performance pay, and/or retention pay, will…

Mass Insight Education (NJ1), 2009

2009-01-01

146

NASA Astrophysics Data System (ADS)

In inertial confinement fusion (ICF) experiments, irradiation uniformity is improved by passing laser beams through distributed phase plates (DPPs), which produce focused intensity profiles with well-controlled, reproducible envelopes modulated by fine random speckle. [C. B. Burckhardt, Appl. Opt. 9, 695 (1970); Y. Kato and K. Mima, Appl. Phys. B 29, 186 (1982); Y. Kato et al., Phys. Rev. Lett. 53, 1057 (1984); Laboratory for Laser Energetics LLE Review 33, NTIS Document No. DOE/DP/40200-65, 1987 (unpublished), p. 1; Laboratory for Laser Energetics LLE Review 63, NTIS Document No. DOE/SF/19460-91, 1995 (unpublished), p. 1.] A uniformly ablating plasma atmosphere acts to reduce the contribution of the speckle to the time-averaged irradiation nonuniformity by causing the intensity distribution to move relative to the absorption layer of the plasma. This occurs most directly as the absorption layer in the plasma moves with the ablation-driven flow, but it is shown that the effect of the accumulating ablated plasma on the phase of the laser light also makes a quantitatively significant contribution. Analytical results are obtained using the paraxial approximation applied to the beam propagation, and a simple statistical model is assumed for the properties of DPPs. The reduction in the time-averaged spatial spectrum of the speckle due to these effects is shown to be quantitatively significant within time intervals characteristic of atmospheric hydrodynamics under typical ICF irradiation intensities.

Epstein, R.

1997-09-01

147

Agile turnaround using post-stall maneuvers for tail-sitter VTOL UAVs

Miniature vertical take-off and landing unmanned aerial vehicles (VTOL UAVs) make various missions possible on their own, such as surveillance in partially destroyed buildings and in broad hazard areas where many obstacles exist. In such missions, agile turnaround using post-stall maneuvers is useful for avoiding obstacles. This paper discusses agile turnaround strategies utilizing post-stall maneuvers for tail-sitter VTOL UAVs. Two agile turn strategies

Takaaki Matsumoto; Atsushi Konno; Ren Suzuki; Atsushi Oosedo; Kenta Go; Masaru Uchiyama

2010-01-01

148

NASA Astrophysics Data System (ADS)

Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances where such aerosols can deposit in sensitive regions, potentially causing harm to local ecosystems. However, quantifying the emissions of ammonia from CAFOs through direct measurement is very difficult and costly to perform. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured using inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are only deployed when wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data to modeled emissions from more sophisticated backward Lagrangian stochastic (bLs) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems were deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada) which continuously measured ammonia concentrations instantaneously over a 225-m path. This particular laser is utilized in agricultural settings, and in combination with a bLs model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. 
This study will first compare the ammonia concentrations measured with the Radiello system to those measured with the long-path IR laser. Second, NH3 emissions estimated using the simple inverse model (FIDES) and the time-averaged data will be compared to emissions derived from the bLS model (WindTrax) using the laser-based NH3 data. Results could lead to a more cost-efficient and simpler technique for monitoring ammonia fluxes from CAFOs and other strong areal sources.

Shonkwiler, K. B.; Ham, J. M.; Williams, C. M.

2013-12-01

149

Background: Primary IgA nephropathy (IgAN) is the most common form of idiopathic glomerulonephritis worldwide. Although most patients are able to achieve remission with the current therapy, a large number of patients will still progress to end-stage renal disease. This study aimed to evaluate kidney disease progression and the risk factors for progression in IgAN patients who achieved remission. Methods: Patients from a prospective database with IgAN were included in this study. All the subjects had achieved a complete remission (CR) or partial remission (PR) following 6 months of therapy. Renal survival and the relationship between the clinical parameters and composite renal outcomes were assessed. Results: The study comprised 878 IgAN patients recruited between January 2005 and December 2010. Overall, 632 patients were enrolled in this study. The data from the 369 patients who achieved remission were analyzed; the mean follow-up time was 49 months. The median serum creatinine (SCr) concentration at baseline was 91.3 μmol/L, and the time-averaged creatinine (TA-SCr) was 91.8 μmol/L. The mean serum albumin (ALB) level at baseline was 39.4 g/L, and the time-averaged serum albumin (TA-ALB) was 42.1 g/L. Multivariate Cox regression analyses revealed that the TA-ALB and TA-SCr levels were independently associated with the composite renal outcome. The patients with a TA-SCr value > 120 μmol/L and a TA-ALB level
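A time-averaged laboratory value such as TA-SCr is typically the trapezoidal time integral of repeated measurements divided by the follow-up duration. The study does not state its exact formula, so this is a plausible sketch with invented visit data:

```python
def time_averaged(times_months, values):
    """Trapezoidal time average of a lab series measured at irregular visits."""
    if len(times_months) != len(values) or len(values) < 2:
        raise ValueError("need matched series with at least two visits")
    area = 0.0
    for i in range(1, len(values)):
        dt = times_months[i] - times_months[i - 1]
        area += 0.5 * (values[i] + values[i - 1]) * dt  # trapezoid per interval
    return area / (times_months[-1] - times_months[0])

# Hypothetical creatinine series (umol/L) at months 0, 6, 12, 24 of follow-up.
ta_scr = time_averaged([0, 6, 12, 24], [91.3, 95.0, 90.0, 92.0])
```

Weighting by the time between visits is what distinguishes a time-averaged value from a simple mean of the measurements.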

2014-01-01

150

On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk–run mixture at intermediate speeds and a walk–rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients—a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk–run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill. PMID:23365192
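The non-convexity argument can be illustrated numerically: with an invented two-gait cost curve (not fitted to human data), a walk-run mixture with the same average speed uses less energy than steady locomotion at the intermediate speed:

```python
def cost_rate(v):
    """Metabolic rate (arbitrary units) with a non-convex dip at each gait."""
    walk = 1.0 + 2.0 * (v - 1.2) ** 2   # walking efficient near 1.2 m/s
    run = 2.2 + 0.8 * (v - 3.0) ** 2    # running efficient near 3.0 m/s
    return min(walk, run)               # body picks the cheaper gait

def energy_steady(distance, speed):
    return cost_rate(speed) * distance / speed  # rate * duration

def energy_mixture(distance, v_walk, v_run, avg_speed):
    """Split the distance between two speeds so the average matches avg_speed."""
    t_total = distance / avg_speed
    # Solve t_w + t_r = t_total and v_walk*t_w + v_run*t_r = distance.
    t_r = (distance - v_walk * t_total) / (v_run - v_walk)
    t_w = t_total - t_r
    return cost_rate(v_walk) * t_w + cost_rate(v_run) * t_r

steady = energy_steady(100.0, 2.0)              # hold 2.0 m/s throughout
mixed = energy_mixture(100.0, 1.2, 3.0, 2.0)    # walk/run mix, same total time
```

The mixture traces the convex hull of the cost curve, which lies below the curve itself wherever the curve is non-convex, exactly the paper's mechanism.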

Long, Leroy L.; Srinivasan, Manoj

2013-01-01

151

Background: India's National Family Welfare Programme is dominated by sterilization, particularly tubectomy. Sterilization, being a terminal method of contraception, decides the final number of children for that couple. Many studies have shown the declining trend in the average number of living children at the time of sterilization over a short period of time. So this study was planned to do time series analysis of the average children at the time of terminal contraception, to do forecasting till 2020 for the same and to compare the rates of change in various subgroups of the population. Materials and Methods: Data was preprocessed in MS Access 2007 by creating and running SQL queries. After testing stationarity of every series with augmented Dickey-Fuller test, time series analysis and forecasting was done using best-fit Box-Jenkins ARIMA (p, d, q) nonseasonal model. To compare the rates of change of average children in various subgroups, at sterilization, analysis of covariance (ANCOVA) was applied. Results: Forecasting showed that the replacement level of 2.1 total fertility rate (TFR) will be achieved in 2018 for couples opting for sterilization. The same will be achieved in 2020, 2016, 2018, and 2019 for rural area, urban area, Hindu couples, and Buddhist couples, respectively. It will not be achieved till 2020 in Muslim couples. Conclusion: Every stratum of population showed the declining trend. The decline for male children and in rural area was significantly faster than the decline for female children and in urban area, respectively. The decline was not significantly different in Hindu, Muslim, and Buddhist couples. PMID:25364146
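The Box-Jenkins forecasting step can be sketched in its simplest special case, ARIMA(0,1,0) with drift: once differencing makes the series stationary (the Dickey-Fuller step above), the forecast extrapolates the mean year-to-year change. The series below is invented to mimic a declining average, not the study's data:

```python
def forecast_drift(series, horizon):
    """ARIMA(0,1,0) with drift: y[t+h] = y[t] + h * mean(diff(y))."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    drift = sum(diffs) / len(diffs)  # average first difference
    return [series[-1] + drift * h for h in range(1, horizon + 1)]

# Illustrative yearly averages of living children at sterilization.
avg_children = [3.4, 3.25, 3.1, 2.95, 2.8, 2.65]
future = forecast_drift(avg_children, 4)
```

A full ARIMA(p, d, q) fit would also estimate autoregressive and moving-average terms; this sketch shows only the differencing-plus-drift skeleton of the approach.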

Mumbare, Sachin S; Gosavi, Shriram; Almale, Balaji; Patil, Aruna; Dhakane, Supriya; Kadu, Aniruddha

2014-01-01

152

NASA Astrophysics Data System (ADS)

Watershed Best Management Practices (BMPs) are often designed to reduce loading from particle-borne contaminants, but the temporal lag between BMP implementation and improvement in receiving water quality is difficult to assess because particles are only moved downstream episodically, resting for long periods in storage between transport events. A theory is developed that describes the downstream movement of suspended sediment particles accounting for the time particles spend in storage given sediment budget data (by grain size fraction) and information on particle transit times through storage reservoirs. The theory is used to define a suspended sediment transport length scale that describes how far particles are carried during transport events, and to estimate a downstream particle velocity that includes time spent in storage. At 5 upland watersheds of the mid-Atlantic region, transport length scales for silt-clay range from 4 to 60 km, while those for sand range from 0.4 to 113 km. Mean sediment velocities for silt-clay range from 0.0072 km/yr to 0.12 km/yr, while those for sand range from 0.0008 km/yr to 0.20 km/yr, 4-6 orders of magnitude slower than the velocity of water in the channel. These results suggest lag times of 100-1000 years between BMP implementation and effectiveness in receiving waters such as the Chesapeake Bay (where BMPs are located upstream of the characteristic transport length scale). Many particles likely travel much faster than these average values, so further research is needed to determine the complete distribution of suspended sediment velocities in real watersheds.
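The notion of a downstream velocity that includes storage reduces to total distance over total time: particles alternate brief transport steps with long rests. The step length and residence time below are illustrative values within the ranges the abstract reports:

```python
def effective_velocity_km_per_yr(step_length_km, storage_time_yr, steps):
    """Each step moves the particle step_length_km downstream, after which it
    rests in storage for storage_time_yr; in-transport time (hours) is
    neglected relative to storage (centuries)."""
    distance = step_length_km * steps
    time_yr = storage_time_yr * steps
    return distance / time_yr

# Hypothetical silt-clay particle: 4 km per transport event, 500 yr in storage.
v = effective_velocity_km_per_yr(step_length_km=4.0, storage_time_yr=500.0,
                                 steps=10)
```

Storage time dominates the denominator, which is why the effective velocity comes out 4-6 orders of magnitude below the water velocity.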

Pizzuto, James; Schenk, Edward R.; Hupp, Cliff R.; Gellis, Allen; Noe, Greg; Williamson, Elyse; Karwan, Diana L.; O'Neal, Michael; Marquard, Julia; Aalto, Rolf; Newbold, Denis

2014-02-01

153

NASA Technical Reports Server (NTRS)

Recent research using NASA Ames AH-1 and OH-58C helicopters, and NASA Glenn test rigs, has shown that in-flight vibration data are typically non-stationary [1-4]. The nature and extent of this non-stationarity is most likely produced by several factors operating simultaneously. The aerodynamic flight environment and pilot commands provide continuously changing inputs, with a complex dynamic response that includes automatic feedback control from the engine regulator. It would appear that the combined effects operate primarily through an induced torque profile, which causes concomitant stress modulation at the individual internal gear meshes in the transmission. This notion is supported by several analyses, which show that upwards of 93% of the vibration signal's variance can be explained by knowledge of torque alone. That this relationship is stronger in an AH-1 than an OH-58, where measured non-stationarity is greater, suggests that the overall mass of the vehicle is an important consideration. In the lighter aircraft, the unsteady aerodynamic influences transmit relatively greater unsteady dynamic forces on the mechanical components, quite possibly contributing to its greater non-stationarity. In a recent paper using OH-58C pinion data [5], the authors have shown that in computing a time synchronous average (TSA) for various single-value metric computations, an effective trade-off can be obtained between sample size and measured stationarity by using data from only a single mesh cycle. A mesh cycle, which is defined as the number of rotations required for the gear teeth to return to their original mating position, has the property of representing all of the discrete phase angles of the opposing gears exactly once in the average. Measured stationarity is probably maximized because a single mesh cycle of the pinion gear occurs over a very short span of time, during which time-dependent non-stationary effects are kept to a minimum.
Clearly, the advantage of local stationarity diminishes as the temporal duration of the cycle increases. This is most evident for a planetary mesh cycle, which can take several minutes to complete.
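A time synchronous average over one mesh cycle can be sketched as follows, assuming the signal has already been resampled to shaft angle so each revolution occupies a fixed number of samples; the names and synthetic signal are illustrative:

```python
import numpy as np

def tsa(signal, samples_per_rev, revs_per_mesh_cycle):
    """Average consecutive revolutions spanning exactly one mesh cycle, so
    shaft-synchronous content reinforces and non-synchronous content cancels."""
    n = samples_per_rev * revs_per_mesh_cycle
    cycle = np.asarray(signal[:n], dtype=float).reshape(revs_per_mesh_cycle,
                                                        samples_per_rev)
    return cycle.mean(axis=0)  # one averaged revolution, phase-locked

# Synthetic check: a shaft-synchronous tone survives, broadband noise shrinks.
rng = np.random.default_rng(0)
angle = np.linspace(0, 2 * np.pi, 64, endpoint=False)
raw = np.tile(np.sin(3 * angle), 16) + rng.normal(0.0, 1.0, 64 * 16)
avg = tsa(raw, samples_per_rev=64, revs_per_mesh_cycle=16)
```

Averaging N revolutions reduces the noise amplitude by roughly 1/sqrt(N), which is the trade-off against stationarity that the paragraph above describes.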

Huff, Edward M.; Mosher, Marianne; Barszcz, Eric

2002-01-01

154

NASA Astrophysics Data System (ADS)

Despite the extensive use of Sn-Ag-Cu as a Pb-free solder alloy, its melting point is significantly higher than that of eutectic Sn-Pb solder. Sn-Ag-Bi-In solder is an alternative Pb-free solder, with a melting temperature close to that of eutectic Sn-Pb. This study elucidates the electromigration behavior of Sn-Ag-Bi-In solder and then compares the results with those of the Sn-Ag-Bi system. The behavior of Pb-free Sn-Ag-Bi-In solder strips under electromigration is examined by preparing them in Si (001) U-grooves. The samples are then tested under various temperatures and current densities. Although the compounds thicken near both electrodes with current stressing, the thickness at the anode exceeds that at the cathode. Experimental results of the average failure time indicate that Sn-Ag-Bi-In solder has a longer lifetime than does Sn-Ag-Bi, which is attributed to the ? phase. Additionally, the ? phase dissolved by the current in the early stage replenishes the outgoing atomic flux. These atomic fluxes also enhance the growth of abnormally large particles in the middle of the strips. Field-emission electron probe microanalysis (FE-EPMA) results indicate that the amount of indium is reduced after the ? phase near the cathode is exhausted for extended current stressing time.

Wu, Albert T.; Sun, K. H.

2009-12-01

155

BATSE Observations of Gamma-Ray Burst Spectra. Part 3; Low-Energy Behavior of Time-Averaged Spectra

NASA Technical Reports Server (NTRS)

We analyze time-averaged spectra from 86 bright gamma-ray bursts from the first 5 years of the Burst And Transient Source Experiment (BATSE) on board the Compton Gamma Ray Observatory to determine whether the lowest energy data are consistent with a standard spectral form fit to the data at all energies. The BATSE Spectroscopy Detectors have the capability to observe photons as low as 5 keV. Using the gamma-ray burst locations obtained with the BATSE Large Area Detectors, the Spectroscopy Detectors' low-energy response can be modeled accurately. This, together with a postlaunch calibration of the lowest energy Spectroscopy Detector discriminator channel, which can lie in the range 5-20 keV, allows spectral deconvolution over a broad energy range, approx. 5 keV to 2 MeV. The additional coverage allows us to search for evidence of excess emission, or for a deficit, below 20 keV. While no burst has a significant (greater than or equal to 3 sigma) deficit relative to a standard spectral model, we find that 12 bursts have excess low-energy emission, ranging between 1.2 and 5.8 times the model flux, that exceeds 5 sigma in significance. This is evidence for an additional low-energy spectral component in at least some bursts, or for deviations from the power-law spectral form typically used to model gamma-ray bursts at energies below 100 keV.

Preece, R. D.; Briggs, M. S.; Pendleton, G. N.; Paciesas, W. S.; Matteson, J. L.; Band, D. L.; Skelton, R. T.; Meegan, C. A.

1996-01-01

156

We evaluated the time course of development of positive signal-averaged electrocardiograms (SA-ECGs) by time-domain and Spectral Temporal Mapping (STM) analyses after myocardial infarction in 88 patients without bundle branch block. The incidence of positive SA-ECGs by time-domain analysis peaked at 4 to 8 weeks postinfarction whereas the peak incidence by STM analysis varied from 4 days to 4 to 10 months postinfarction. Positive time-domain SA-ECGs demonstrated a significantly reduced factor of normality (NF) compared with negative time-domain SA-ECGs by X, Z, or vector STM analyses, but marked overlap was present for the standard deviations of positive and negative SA-ECGs in all STM leads. Chi square analysis demonstrated a significant correlation only between X-lead STM analysis and time-domain analysis; however, the two methods were markedly discordant. Although there is a statistically significant relation between time-domain and STM analyses of SA-ECGs, the two analyses are not clinically interchangeable. PMID:7832095

Graceffo, M A; O'Rourke, R A; Hibner, C; Boulet, A J

1995-02-01

157

Purpose: lower lobe lung tumors move with amplitudes of up to 2 cm due to respiration. To reduce respiration imaging artifacts in planning CT scans, 4D imaging techniques are used. Currently, we use a single (midventilation) frame of the 4D data set for clinical delineation of structures and radiotherapy planning. A single frame, however, often contains artifacts due to breathing irregularities, and is noisier than a conventional CT scan since the exposure per frame is lower. Moreover, the tumor may be displaced from the mean tumor position due to hysteresis. The aim of this work is to develop a framework for the acquisition of a good quality scan representing all scanned anatomy in the mean position by averaging transformed (deformed) CT frames, i.e., canceling out motion. A nonrigid registration method is necessary since motion varies over the lung. Methods and Materials: 4D and inspiration breath-hold (BH) CT scans were acquired for 13 patients. An iterative multiscale motion estimation technique was applied to the 4D CT scan, similar to optical flow but using image phase (gray-value transitions from bright to dark and vice versa) instead. From the (4D) deformation vector field (DVF) derived, the local mean position in the respiratory cycle was computed and the 4D DVF was modified to deform all structures of the original 4D CT scan to this mean position. A 3D midposition (MidP) CT scan was then obtained by (arithmetic or median) averaging of the deformed 4D CT scan. Image registration accuracy, tumor shape deviation with respect to the BH CT scan, and noise were determined to evaluate the image fidelity of the MidP CT scan and the performance of the technique. Results: Accuracy of the used deformable image registration method was comparable to established automated locally rigid registration and to manual landmark registration (average difference to both methods <0.5 mm for all directions) for the tumor region. 
From visual assessment, the registration was good for the clearly visible features (e.g., tumor and diaphragm). The shape of the tumor, with respect to that of the BH CT scan, was better represented by the MidP reconstructions than any of the 4D CT frames (including MidV; reduction of 'shape differences' was 66%). The MidP scans contained about one-third the noise of individual 4D CT scan frames. Conclusions: We implemented an accurate method to estimate the motion of structures in a 4D CT scan. Subsequently, a novel method to create a midposition CT scan (time-weighted average of the anatomy) for treatment planning with reduced noise and artifacts was introduced. Tumor shape and position in the MidP CT scan represents that of the BH CT scan better than MidV CT scan and, therefore, was found to be appropriate for treatment planning.
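The final reconstruction step described above — warping each 4D CT frame to the mean position and averaging — can be sketched as follows. This is an illustrative implementation that assumes the displacement fields are already computed and uses a simple nearest-neighbour warp; the paper's phase-based deformable registration is not reproduced here.

```python
import numpy as np

def midposition_scan(frames, dvfs, use_median=True):
    """Warp each 4D CT frame to the local mean (time-averaged) position
    and average the warped frames into a single 3D mid-position scan.

    frames: (T, Z, Y, X) array of CT frames
    dvfs:   (T, 3, Z, Y, X) displacement fields (in voxels) mapping each
            frame's anatomy to the mean position (assumed precomputed)
    """
    nz, ny, nx = frames.shape[1:]
    grid = np.indices((nz, ny, nx)).astype(float)
    warped = []
    for frame, dvf in zip(frames, dvfs):
        # Nearest-neighbour pull-back warp (a real implementation would
        # use trilinear interpolation; kept simple here).
        coords = np.rint(grid + dvf).astype(int)
        z = np.clip(coords[0], 0, nz - 1)
        y = np.clip(coords[1], 0, ny - 1)
        x = np.clip(coords[2], 0, nx - 1)
        warped.append(frame[z, y, x])
    stack = np.stack(warped)
    # Median averaging suppresses residual artifacts better than the mean.
    return np.median(stack, axis=0) if use_median else stack.mean(axis=0)
```

With zero displacement fields this reduces to a plain per-voxel median over the breathing phases, which is also why the noise drops to roughly one-third of a single frame's.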

Wolthaus, J. W. H.; Sonke, J.-J.; Herk, M. van; Damen, E. M. F. [Department of Radiation Oncology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX Amsterdam (Netherlands)

2008-09-15

158

Ultrasonic standing waves are increasingly applied in the manipulation and sorting of micrometer-sized particles in microfluidic cells. To optimize the performance of such devices, it is essential to know the exact forces that the particles experience in the acoustic wave. Although much progress has been made via analytical and numerical modeling, the reliability of these methods relies strongly on the assumptions used, e.g. the boundary conditions. Here, we have combined an acoustic flow cell with an optical laser trap to directly measure the force on a single spherical particle in two dimensions. While performing ultrasonic frequency scans, we measured the time-averaged forces on single particles that were moved with the laser trap through the microfluidic cell. The cell including piezoelectric transducers was modeled with finite element methods. We found that the experimentally obtained forces and the derived pressure fields confirm the predictions from theory and modeling. This novel approach can now be readily expanded to other particle, chamber, and fluid regimes and opens up the possibility of studying the effects of the presence of boundaries, acoustic streaming, and non-linear fluids. PMID:25370872
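For comparison with such force measurements, the standard small-particle (Gor'kov) expression for the time-averaged radiation force in a 1D standing wave can be evaluated numerically. The sketch below uses that textbook expression with illustrative material parameters (water, polystyrene); it is not the paper's finite-element model.

```python
import numpy as np

def acoustic_radiation_force(x, p_a, freq, radius,
                             rho_f=998.0, c_f=1481.0,     # water (assumed)
                             rho_p=1050.0, c_p=2350.0):   # polystyrene (assumed)
    """Time-averaged primary radiation force [N] on a small sphere in a
    1D standing wave p = p_a cos(kx) cos(wt), per Gor'kov theory.
    All quantities SI; valid for radius << wavelength.
    """
    k = 2 * np.pi * freq / c_f
    kappa_f = 1.0 / (rho_f * c_f**2)          # fluid compressibility
    kappa_p = 1.0 / (rho_p * c_p**2)          # particle compressibility
    # Acoustic contrast factor (monopole + dipole contributions).
    phi = (5 * rho_p - 2 * rho_f) / (2 * rho_p + rho_f) - kappa_p / kappa_f
    E_ac = p_a**2 / (4 * rho_f * c_f**2)      # acoustic energy density
    return 4 * np.pi * phi * k * radius**3 * E_ac * np.sin(2 * k * x)
```

A positive contrast factor (as for polystyrene in water) drives particles toward the pressure nodes, which is the effect exploited in acoustophoretic sorting.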

Lakämper, Stefan; Lamprecht, Andreas; Schaap, Iwan A T; Dual, Jurg

2014-12-01

159

Time-averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row and the first-stage blade row of the Rocketdyne Space Shuttle Main Engine two-stage fuel-side turbine. These measurements were made at 10, 50, and 90 percent span on both the pressure and suction surfaces of each component. Stanton-number distributions are also reported for the second-stage vane at 50 percent span. A shock tube is used as a short-duration source of heated and pressurized air to which the turbine is subjected. Platinum thin-film gages are used to obtain the heat-flux measurements, and miniature silicon-diaphragm pressure transducers are used to obtain the surface-pressure measurements. The first-stage vane Stanton-number distributions are compared with predictions obtained using a quasi-three-dimensional Navier-Stokes solution and a version of STAN5. This same Navier-Stokes technique was also used to obtain predictions for the first blade and the second vane.
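The Stanton number in such measurements is conventionally formed from the measured wall heat flux; a minimal sketch using the standard compressible-flow definition (this is the textbook reduction, not necessarily the paper's exact procedure):

```python
def stanton_number(q_wall, rho, u, cp, T_total, T_wall):
    """Stanton number from measured wall heat flux (conventional
    definition based on a reference total temperature).

    q_wall  : wall heat flux [W/m^2]
    rho, u  : reference density [kg/m^3] and velocity [m/s]
    cp      : specific heat at constant pressure [J/(kg K)]
    T_total : reference (inlet total) temperature [K]
    T_wall  : wall temperature [K]
    """
    return q_wall / (rho * u * cp * (T_total - T_wall))
```

For example, a flux of 100 kW/m² with a 100 K driving temperature difference and a mass flux of 100 kg/(m²·s) gives St = 0.01.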

Dunn, M.G.; Kim, J. (Calspan-UB Research Center, Buffalo, NY (United States)); Boyle, R.J. (NASA Lewis Research Center, Cleveland, OH (United States)); Civinskas, K.C.

1994-01-01

160

NASA Technical Reports Server (NTRS)

Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Another approach, focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, computes the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
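In the scalar-weighted case, the optimal average reduces to an eigenvector problem; a minimal sketch of that well-known maximum-eigenvalue formulation (a simplification of the Note's general result):

```python
import numpy as np

def average_quaternion(quats, weights=None):
    """Optimal scalar-weighted quaternion average: the unit eigenvector of
    M = sum_i w_i q_i q_i^T associated with the largest eigenvalue.

    The outer product q q^T equals (-q)(-q)^T, so the sign ambiguity
    between q and -q (same rotation) is handled automatically.

    quats: (N, 4) array of unit quaternions.
    """
    q = np.asarray(quats, dtype=float)
    w = np.ones(len(q)) if weights is None else np.asarray(weights, float)
    M = (w[:, None, None] * q[:, :, None] * q[:, None, :]).sum(axis=0)
    eigvals, eigvecs = np.linalg.eigh(M)   # M is symmetric -> eigh
    avg = eigvecs[:, np.argmax(eigvals)]
    return avg / np.linalg.norm(avg)
```

Note that a naive component-wise mean fails exactly where this succeeds: averaging q and -q component-wise gives zero, while the eigenvector method returns the common rotation.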

Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

2007-01-01

161

NASA Astrophysics Data System (ADS)

The Fourth Assessment Report (AR4) of the Intergovernmental Panel on Climate Change (IPCC) notes with a high degree of certainty that global warming can be attributed to anthropogenic emissions. Detection and attribution studies, which attempt to delineate human influences on regional- and decadal-scale climate change or its impacts, use a variety of techniques, including Granger causality. Recently, Granger causality was used as a tool for detection and attribution in climate based on a spatio-temporal data mining approach. However, the degree to which Granger causality may be able to delineate natural versus anthropogenic drivers of change in these situations needs to be thoroughly investigated. As a first step, we use multivariate global-average time series of observations to test the performance of Granger causality. We apply the popular Granger F-tests to Radiative Forcing (RF), which is a transformation of carbon dioxide (CO2), and Global land surface Temperature anomalies (GT). Our preliminary results with observations appear to suggest that RF Granger-causes GT, an effect that seems to become more apparent with more data. However, carefully designed simulations indicate that these results are not reliable and may, in fact, be misleading. On the other hand, the same observation- and simulation-driven methodologies, when applied to the El Niño Southern Oscillation (ENSO) index, clearly show reliable Granger-causality from ENSO to GT. We develop and test several hypotheses to explain why the Granger causality tests between RF and GT are not reliable. We conclude that the form of Granger causality used in this study, and in past studies reported in the literature, is sensitive to data availability, random variability, and especially whether the variables arise from a deterministic or stochastic process.
Simulations indicate that Granger causality in this form performs poorly, even in simple linear effect cases, when applied to one deterministic and one stochastic time series rather than two stochastic time series. We observe that this problem may amplify problems with limited data availability and random variability. This form of Granger causality also appears to be sensitive to the possible presence of latent variables which may drive the causal relationships. In particular, there is a possibility that the effects of RF are overshadowed by the effects of other more statistically significant causal variables, such as the ENSO index. Our findings motivate the investigation of a modified form of Granger causality testing which is robust to all of the aforementioned conditions. While such a modified test may be able to delineate natural versus anthropogenic drivers of climate change and impacts, results and insights reported in the literature which are derived from the existing form of Granger causality may need to be carefully re-evaluated.
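The bivariate Granger F-test discussed above can be sketched directly from its definition: compare the residual sums of squares of an autoregression of y with and without lags of x. This is a minimal illustration with a hypothetical lag order, not the spatio-temporal machinery used in the study.

```python
import numpy as np

def granger_f(y, x, p=2):
    """Bivariate Granger F-test statistic: does x help predict y beyond
    y's own past?  The restricted model regresses y_t on p lags of y;
    the unrestricted model adds p lags of x.
    """
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y) - p
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:-k] for k in range(1, p + 1)])
    ones = np.ones((n, 1))
    Xr = np.hstack([ones, lags_y])            # restricted model
    Xu = np.hstack([ones, lags_y, lags_x])    # unrestricted model

    def rss(A):
        beta = np.linalg.lstsq(A, Y, rcond=None)[0]
        return np.sum((Y - A @ beta) ** 2)

    rss_r, rss_u = rss(Xr), rss(Xu)
    return ((rss_r - rss_u) / p) / (rss_u / (n - 2 * p - 1))
```

On synthetic data where x genuinely drives y with a one-step delay, the forward statistic dwarfs the reverse one; the study's point is that this clean behaviour degrades when one series is effectively deterministic (like smoothly rising RF) rather than stochastic.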

Kodra, E. A.; Chatterjee, S.; Ganguly, A. R.

2009-12-01

162

We develop further the approach to upper and lower bounds in quantum dynamics via complex analysis methods which was introduced by us in a sequence of earlier papers. Here we derive upper bounds for non-time averaged outside probabilities and moments of the position operator from lower bounds for transfer matrices at complex energies. Moreover, for the time-averaged transport exponents, we present improved lower bounds in the special case of the Fibonacci Hamiltonian. These bounds lead to an optimal description of the time-averaged spreading rate of the fast part of the wavepacket in the large coupling limit. This provides the first example which demonstrates that the time-averaged spreading rates may exceed the upper box-counting dimension of the spectrum.

David Damanik; Serguei Tcheremchantsev

2008-01-22

163

The influence of time-averaging on bias is investigated in the finite-volume/particle hybrid algorithm for the joint PDF equation for statistically-stationary turbulent reactive flows. It is found that the time-averaging of the mean fluctuating velocity (TAu) leads to the same variances of the fluctuating velocity before and after the velocity correction, whereas without TAu the estimates are different, and an additional

Haifeng Wang; Stephen B. Pope

2008-01-01

164

ERIC Educational Resources Information Center

Purpose: The purpose of this paper is to review the literature from the organizational sciences to develop a grounded narrative of turnaround leadership. Design/methodology/approach: The paper is a review of literature, which employs a ten-step process to explore and make sense of the turnaround literature from the organizational sciences. The…

Murphy, Joseph

2008-01-01

165

ERIC Educational Resources Information Center

This book discusses the early indicators of a college or university's need for a turnaround. It outlines financial trends and other indicators of distress, as well as benchmarks for the various stages of an effective turnaround strategy. The book will help trustees, presidents, and faculty members diagnose whether they are in denial about the true…

MacTaggart, Terrence, Ed.

2007-01-01

166

ERIC Educational Resources Information Center

Beginning from 1 high-poverty, historically low-achieving secondary school's successful turnaround work, this article provides data relative to a successful school turnaround, the importance of external and system-level supports, and the importance of building for sustainable institutionalization of improvements. The evidence suggests the…

Schaffer, Eugene; Reynolds, David; Stringfield, Sam

2012-01-01

167

NSDL National Science Digital Library

This lab is not an inquiry activity but will help students understand the meaning of average velocity. Students do not easily understand from a textbook that an object's velocity changes over a period of acceleration. They have difficulty understanding wh

Horton, Michael

2009-05-30

168

NASA Astrophysics Data System (ADS)

Urban air pollution is an ongoing and complicated problem for both residents and policy makers. This study aims to provide a better understanding of the geographic source and fate of organic pollutants in a dynamic urban environment. Natural and artificial hydrophobic substrates were employed for the passive monitoring and mapping of ground-level organic pollutants in the San Francisco Bay area. We focused specifically on volatile and semi-volatile polycyclic aromatic hydrocarbons (PAHs). These compounds are proxies for a broad range of combustion related air pollutants derived from local, regional, and global combustion sources. PAHs include several well-studied carcinogens and can be measured easily and accurately across a broad range of concentrations. Estimates of time-integrated vapor phase and particle deposition were made from measuring accumulated PAHs in the leaves of several widely distributed tree species (including Quercus agrifolia and Sequoia sempervirens) and an artificial wax film. Samples were designed to represent pollutant exposure over a period of one to several months. The selective sampling and analysis of hydrophobic substrates provides insight into the average geographic distribution of ground-level air pollutants in a simple and inexpensive way. However, accumulated organics do not directly correlate with human exposure, and the source signature of PAHs may be obscured by transport, deposition, and flux processes. We attempted to address some of these complications by studying 1) PAH accumulation rates within substrates in a controlled microcosm, 2) differences in PAH abundance in different substrate types at the same locality, and 3) samples near long-term high volume air sampling stations. We also set out to create a map of PAH concentrations based on our measurements.
This map can be directly compared with interpolated data from high-volume sampling stations and used to address questions concerning atmospheric heterogeneity of these pollutants (i.e. due to both source localization and dominant wind patterns). Our initial results indicate that exposure to PAHs in the bay area is geographically heterogeneous and individual exposure may vary by more than two orders of magnitude. The signatures of PAH contamination also vary considerably, indicating that different sources and differing transport mechanisms may be important at different sites and times.

Yu, C.; Zinniker, D. A.; Moldowan, J.

2010-12-01

169

Previous human activity pattern-exposure models have required improved ways for handling the serial autocorrelation of pollutant concentrations in indoor and in-transit microenvironments. Because existing models often sample concentrations from microenvironmental distributions for different averaging times, one approach for handling this autocorrelation is to develop an averaging time model for each microenvironment of importance. The paper explores a new approach for developing averaging time models: deriving the model theoretically from the mass balance equation, which describes the relationship between the time series of the input and output concentrations of any pollutant introduced into a well-mixed chamber. Beginning with the mass balance equation, the paper derives an averaging time model that predicts the mean, variance, and autocorrelation of the time series of pollutant concentrations in a well-mixed chamber for any averaging time. The paper considers the case of a discrete model in which the input source concentration is a time series of independent, piecewise-constant concentrations of equal duration while the air exchange rate remains fixed. Because the model is derived theoretically, the model is exact for the conditions specified. The goal of the research is to provide human exposure researchers with basic concepts for designing and developing useful, practical algorithms for future exposure and indoor air quality models.
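The discrete well-mixed chamber model described above has an exact per-step update when the input is piecewise constant and the air exchange rate is fixed: C evolves toward each input value with decay factor e^(-a·dt). The sketch below is illustrative (variable names and the statistics helper are not the paper's notation), showing how mean, variance, and autocorrelation can then be computed for any averaging time.

```python
import numpy as np

def chamber_series(c_in, a, dt):
    """Exact solution of the well-mixed mass balance dC/dt = a*(C_in - C)
    for a piecewise-constant input series c_in (one value per step dt),
    starting from C = 0.  Returns C at the end of each step.
    """
    decay = np.exp(-a * dt)
    c, out = 0.0, []
    for cin in c_in:
        c = cin + (c - cin) * decay   # relax toward the current input
        out.append(c)
    return np.array(out)

def averaged_stats(series, m):
    """Mean, variance, and lag-1 autocorrelation of non-overlapping
    m-step block averages (the 'averaging time' is m*dt)."""
    avg = series[:len(series) // m * m].reshape(-1, m).mean(axis=1)
    r1 = np.corrcoef(avg[:-1], avg[1:])[0, 1]
    return avg.mean(), avg.var(), r1
```

The chamber's low-pass filtering is what induces the serial autocorrelation the paper aims to model: even an independent input series produces correlated output concentrations.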

Switzer, P.; Ott, W.

1993-01-01

170

Investigating the Role of Human Resources in School Turnaround: Evidence from Two States

ERIC Educational Resources Information Center

Teachers are generally recognized as the schooling factor accounting for the highest proportion of student learning outcomes (Aaronson et al., 2007; Hanushek, 1986). This implies the quick and dramatic improvement in school performance observed in turnaround (TA) schools was associated with a major change in the performance of its teachers. This…

Hansen, Michael

2012-01-01

171

Tinkering and Turnarounds: Understanding the Contemporary Campaign to Improve Low-Performing Schools

ERIC Educational Resources Information Center

An unprecedented amount of attention in recent years has been focused on turning around low-performing schools. Drawing on insights from Tyack and Cuban's (1995) "Tinkering Toward Utopia," the article analyzes the forces behind the school turnaround phenomenon and how they have evolved since passage of the No Child Left Behind Act. The article…

Duke, Daniel L.

2012-01-01

172

CAD/CAM, Creativity, and Discipline Lead to Turnaround School Success

ERIC Educational Resources Information Center

Miami Central High School technology teacher Frank Houghtaling thinks the connection between theory and application is one reason his students perform better on the Florida Comprehensive Assessment Test (FCAT). The impressive turnaround school drew local and national attention last spring when one of Houghtaling's students, Dagoberto Cruz, won…

Gorman, Lynn

2012-01-01

173

Chronically Low-Performing Schools and Turnaround: Evidence from Three States

ERIC Educational Resources Information Center

The criteria for determining the student outcomes that define a school as having "turned around" are not well defined, and the definition of turnaround performance varies across studies. Although current policy initiatives offer guidelines for identifying CLP schools, there is no standard definition or methodology in common usage. This paper…

Hansen, Michael; Choi, Kilchan

2012-01-01

174

ERIC Educational Resources Information Center

Recent studies make one reality clear: While multiple factors can cause a low-performing high school to be in a turnaround situation, every high school that makes dramatic academic improvement has strong, effective school leadership. Turning a school around is no work for novices. It takes a skilled, visionary and proactive principal to pull apart…

Schmidt-Davis, Jon; Bottoms, Gene

2012-01-01

175

IET control building (TAN-620) turnaround area. probably facing north. ventilation ...

IET control building (TAN-620) turnaround area. probably facing north. ventilation ducts near ceiling. sign says, "parking position no. 3, use this space last." poured concrete walls. INEEL negative no. HD-21-5-4 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

176

A Case Study of Change Strategies Implemented in a Turnaround Elementary School

ERIC Educational Resources Information Center

This case study examined the change strategies in a turnaround school at the elementary level to understand and describe how change occurred and was sustained at this campus. This study examined the factors which contributed to the change in academic success of students, examined beliefs about change that led to the change process, identified the…

Colson, Jo Ann

2012-01-01

177

How to Know when Dramatic Change Is on Track: Leading Indicators of School Turnarounds

ERIC Educational Resources Information Center

In recent years, national policymakers have placed new emphasis on "school turnarounds" as a strategy for rapid, dramatic improvement in chronically failing schools, calling on education leaders to turn around performance in the 5,000 lowest-achieving schools nationwide. This goal may seem daunting, given the dismal success rates of school…

Kowal, Julie; Ableidinger, Joe

2011-01-01

178

Turnaround, Transformational, or Transactional Leadership: An Ethical Dilemma in School Reform

ERIC Educational Resources Information Center

This case was written for school leaders, specifically building-level principals and central office administrators attempting to implement school turnaround reform efforts. Often, leaders who embark on this type of organizational change work in intense environments that produce high levels of pressure to demonstrate improvement in student…

Mette, Ian M.; Scribner, Jay P.

2014-01-01

179

Facilitating EMS turnaround intervals at hospitals in the face of receiving facility overcrowding.

The escalating national problem of oversaturated hospital beds and emergency departments (EDs) has resulted in serious operational impediments within patient-receiving facilities. It has also had a growing impact on the 9-1-1 emergency care system. Beyond the long-standing difficulties arising from ambulance diversion practices, many emergency medical services (EMS) crews are now finding themselves detained in EDs for protracted periods, unable to transfer care of their transported patients to ED staff members. Key factors have included a lack of beds or stretcher space, and, in some cases, EMS personnel are used transiently for ED patient care services. In other circumstances, ED staff members no longer prioritize rapid turnaround of EMS-transported patients because of the increasing volume and acuity of patients already in their care. The resulting detention of EMS crews confounds concurrent ambulance availability problems, creates concrete risks for delayed EMS responses to impending critical cases, and incurs regulatory jeopardy for hospitals. Communities should take appropriate steps to ensure that delivery intervals (time elapsing from entry into the hospital to physical transfer of patient care to ED staff) remain extremely brief (less than a few minutes) and that they rarely exceed 10 minutes. While recognizing that the root causes of these issues will require far-reaching national health care policy changes, EMS and local government officials should still maintain ongoing dialogues with hospital chief administrators to mitigate this mutual crisis of escalating service demands. Federal and state health officials should also play an active role in monitoring progress and compliance. PMID:16147474

Eckstein, Marc; Isaacs, S Marshal; Slovis, Corey M; Kaufman, Bradley J; Loflin, James R; O'Connor, Robert E; Pepe, Paul E

2005-01-01

180

NASA Astrophysics Data System (ADS)

This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating-highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well defined groups allowing defining two quite different averaged electron temperature values. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the time fluctuations but also a spatial-average along the probe collecting length. The fitting of the high-current region of the characteristic using such electron temperature value together with the corrections given by the fluctuation analysis showed a relevant departure of local thermal equilibrium in the arc core.
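The electron-retarding analysis described above rests on the standard exponential probe expression, I_e ∝ exp(e(V − V_p)/(k_B T_e)), so the slope of ln(I_e) versus V gives 1/T_e in eV. A minimal sketch of that estimate on synthetic data (not the paper's measurements):

```python
import numpy as np

def electron_temperature(voltage, electron_current):
    """Estimate the electron temperature (in eV) from the exponential
    electron-retarding region of a Langmuir probe I-V characteristic.
    The slope of ln(I_e) vs V equals 1/T_e[eV] for a Maxwellian plasma.
    """
    slope, _ = np.polyfit(voltage, np.log(electron_current), 1)
    return 1.0 / slope
```

In a fluctuating arc the time-averaged characteristic can distort this slope, which is exactly the error the paper's fluctuation analysis quantifies before trusting the low-current region.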

Prevosto, L.; Kelly, H.; Mancinelli, B.

2013-12-01

181

This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating-highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well defined groups allowing defining two quite different averaged electron temperature values. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the time fluctuations but also a spatial-average along the probe collecting length. The fitting of the high-current region of the characteristic using such electron temperature value together with the corrections given by the fluctuation analysis showed a relevant departure of local thermal equilibrium in the arc core.

Prevosto, L.; Mancinelli, B. [Grupo de Descargas Eléctricas, Departamento Ing. Electromecánica, Facultad Regional Venado Tuerto (UTN), Laprida 651, Venado Tuerto (2600) Santa Fe (Argentina)]; Kelly, H. [Grupo de Descargas Eléctricas, Departamento Ing. Electromecánica, Facultad Regional Venado Tuerto (UTN), Laprida 651, Venado Tuerto (2600) Santa Fe (Argentina); Instituto de Física del Plasma (CONICET), Departamento de Física, Facultad de Ciencias Exactas y Naturales (UBA) Ciudad Universitaria Pab. I, 1428 Buenos Aires (Argentina)]

2013-12-15

182

Automatic layout of low-cost quick-turnaround random-logic custom LSI devices

This paper discusses the application of the RCA-developed standard cell approach for generating low-cost, quick-turnaround random-logic LSI arrays using automatic placement and routing computer programs. Originally developed with government support [1] for two-phase PMOS dynamic logic in 1967, the programs have been expanded to include many technologies [2,3] and have gone through extensive evolutionary and revolutionary changes during the past 10 years.

A. Feller

1976-01-01

183

Reducing barriers to timely MR imaging scheduling.

Scheduling a magnetic resonance (MR) imaging study at the authors' large health system in 2011 required considerable preparation before an appointment time was given to a patient. Difficulties in promptly scheduling appointments resulted from the varying time required for examinations, depending on the requested protocol, availability of appropriate MR imaging equipment, examination timing, prior insurance authorization verification, and proper patient screening. These factors contributed to a backlog of patients to schedule that regularly exceeded 300. A multidisciplinary process-improvement team was assembled to improve the turnaround time for scheduling an outpatient MR imaging examination (the interval between the time when the order was received and the time when the patient was informed about the MR imaging appointment). Process improvements targeted by the team included protocol turnaround time, schedule standardization, schedule intervals, examination timing, service standards, and scheduling redesign. Using lean methods and multiple plan-do-check-act cycles, the time to schedule an outpatient MR imaging examination improved from 117 hours to 33 hours, a 72% reduction, during the 9-month study period in 2011-2012. The number of patients in the scheduling queue was reduced by 90%. Overall MR imaging examinations within the specific patient population studied increased from 773 patient studies during the first month of intervention to 1444 studies the following month and averaged over 1279 patient studies per month throughout the study. PMID:25384302

Wessman, Brooke V; Moriarity, Andrew K; Ametlli, Vanda; Kastan, David J

2014-01-01

184

A Q-GERT analysis of the space shuttle ground turnaround system at Vandenberg Air Force Base

NASA Astrophysics Data System (ADS)

The efficient ground turnaround of the Space Shuttle is critical to the execution of national policies. Research identified four major subsystems within the Vandenberg Ground Turnaround System; the Orbiter Maintenance Subsystem, External Tank Subsystem, Solid Rocket Booster Subsystem, and Launch Pad Subsystem. A Q-GERT simulation of the ground Turnaround System was conducted to investigate the system and observe the interactions between the major subsystems. The Q-GERT model simulated the integration of Space Shuttle components to determine the subsystem most sensitive to change. The results indicate the Launch Pad Subsystem is the most critical in the Vandenberg Ground Turnaround System. However, further research is recommended in areas of logistics support, spares availability, and transportation of components.
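A Q-GERT-style network of parallel subsystems feeding the launch pad can be caricatured with a few lines of Monte Carlo; all durations below are invented placeholders, not values from the study, and the real model includes many more activities, queues, and resource constraints.

```python
import random

def turnaround_time(rng):
    """One Monte Carlo replication of a simplified ground turnaround
    network: the Orbiter, External Tank, and SRB subsystems are
    processed in parallel, then the integrated vehicle occupies the
    launch pad.  Durations (days) are illustrative triangular draws.
    """
    orbiter = rng.triangular(30, 60, 45)   # (low, high, mode), days
    ext_tank = rng.triangular(5, 15, 10)
    srb = rng.triangular(10, 25, 18)
    pad = rng.triangular(20, 40, 30)
    # Integration must wait for the slowest parallel subsystem.
    return max(orbiter, ext_tank, srb) + pad

def mean_turnaround(n=10000, seed=1):
    rng = random.Random(seed)
    return sum(turnaround_time(rng) for _ in range(n)) / n
```

Even this toy version exhibits the study's qualitative finding: the critical path runs through whichever serial resource (here, the pad) every vehicle must occupy, so changes there shift the whole distribution.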

Graham, S.; Jones, T. W.

1982-09-01

185

NASA Astrophysics Data System (ADS)

Stable carbon isotope curves are used as a precise stratigraphic tool in the Paleozoic, even though they are commonly based on shallow-water carbonate record, characterized by low stratigraphic completeness. Identification of episodes of large-scale redeposition and erosion may improve δ13Ccarb-based correlations. Here, a series of at least three episodes of high-energy onshore redeposition are described from the Makarivka Member (new unit) of the Ustya Formation from the Homerian (middle Silurian) of Podolia, Ukraine. The Makarivka Member is emplaced within a tidal flat succession. Its most prominent part is divided into a lower polymictic conglomerate of sand- to boulder-sized clasts representing a range of subtidal facies, and an upper heterolithic unit composed of grainstone and mudstone laminae. The aim of the study is to identify the mechanism of deposition of the allochthonous conglomeratic material in this Member. Based on analogies with recent tsunami deposits, the conglomerate is interpreted to reflect the strongest landward-directed current in the tsunami run-up phase, and the heterolith - alternating high-density landward currents, stagnant intervals allowing mud and land-derived debris to settle, and backwash flows. The tsunamite was deposited during an interval of decreasing isotopic values of the Mulde excursion, a global δ13C excursion reaching + 5.2‰ in the studied sections. Clast redeposition in an interval characterized by rapidly changing δ13Ccarb offers the opportunity to evaluate the degree of temporal and spatial averaging caused by the tsunami. The clasts in the polymictic conglomerate show scattered δ13Ccarb values (- 0.3‰ to + 2.1‰) compared to homogenous (1.3‰ to 1.6‰) values in the matrix.
The presence of clasts characterized by low δ13Ccarb values is explained by their decrease with bathymetry rather than erosion of pre-excursion strata, whereas high values characterize material entrained from the sea-floor and strata directly underlying the tsunamite. Close (1.3‰ and 1.5‰) average δ13Ccarb values suggest that the matrix of the conglomerate is potentially a product of clast grinding.

Jarochowska, Emilia; Munnecke, Axel

2015-01-01

186

BACKGROUND: Gene-expression analysis is increasingly important in biological research, with real-time reverse transcription PCR (RT-PCR) becoming the method of choice for high-throughput and accurate expression profiling of selected genes. Given the increased sensitivity, reproducibility and large dynamic range of this methodology, the requirements for a proper internal control gene for normalization have become increasingly stringent. Although housekeeping gene expression has

Jo Vandesompele; Katleen De Preter; Filip Pattyn; Bruce Poppe; Nadine Van Roy; Anne De Paepe; Frank Speleman

2002-01-01

187

Long-term recordings of dorsal raphe (DRN) activity were obtained from cats chronically implanted with microwires. The continuous time-course of DRN discharge, PGO waves, and muscle tone was quantified across multiple sleep cycles. DRN activity profiles were inversely correlated with PGO waves, biphasically related to muscle tone, and varied with sleep cycle phase. The role of DRN as a putative regulator of behavioral state and/or specific physiological variables is discussed. PMID:6626966

Lydic, R; McCarley, R W; Hobson, J A

1983-09-12

188

Field demonstration of rapid turnaround, multilevel groundwater screening

A combined technology approach to rapidly characterizing source area and downgradient groundwater associated with a past fuel spill has been field tested. The purpose of this investigation was to determine the presence and extent of fuel-related compounds or indications of their biodegradation in groundwater. The distance from the source area to be investigated was established by calculating the potential extent of a plume based only on groundwater flow velocities. To accomplish this objective, commercially available technologies were combined and used to rapidly assess the source area and downgradient groundwater associated with the fuel discharge. The source of contamination that was investigated overlies glacial sand and gravel outwash deposits. Historical data suggest that from 1955 to 1970 as many as 1 to 6 million gallons of aviation gasoline (AVGAS) were lost at the study area. Although the remedial investigation (RI) for this study area indicated fuel-related groundwater contamination at the source area, fuel-related contamination was not detected in downgradient monitoring wells. Rapid horizontal groundwater velocities and the 24-year time span from the last reported spill further suggest that a plume of contaminated groundwater could extend several thousand feet downgradient. The lack of contamination downgradient from the source suggests two possibilities: (1) monitoring wells installed during the RI did not intersect the plume or (2) fuel-related compounds had naturally degraded.
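The plume-extent screening distance described above is a simple velocity-times-time estimate; a back-of-envelope sketch, where the groundwater velocity is an assumed illustrative value, not one reported by the study:

```python
def plume_extent_ft(velocity_ft_per_day, years):
    """Maximum advective travel distance of dissolved contaminants."""
    return velocity_ft_per_day * 365.0 * years

# e.g., a rapid 0.5 ft/day horizontal velocity over the 24 years since the
# last reported spill gives 4380.0 ft -- several thousand feet downgradient
print(plume_extent_ft(0.5, 24))
```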

Tingle, A.R. [Analysas Corp., Oak Ridge, TN (United States); Baker, L. [Automated Sciences Group, Inc., Oak Ridge, TN (United States); Long, D.D. [Martin Marietta Energy Systems, Inc., Oak Ridge, TN (United States). Hazardous Waste Remedial Actions Program; Miracle, M. [Advanced Sciences, Inc., Oak Ridge, TN (United States)

1994-09-01

189

4D CT amplitude binning for the generation of a time-averaged 3D mid-position CT scan

NASA Astrophysics Data System (ADS)

The purpose of this study was to develop a method to use amplitude-binned 4D-CT (A-4D-CT) data for the construction of mid-position CT data and to compare the results with data created from phase-binned 4D-CT (P-4D-CT) data. For the latter purpose we have developed two measures which describe the regularity of the 4D data and we have tried to correlate these measures with the regularity of the external respiration signal. 4D-CT data was acquired for 27 patients on a combined PET-CT scanner. The 4D data were reconstructed twice, using phase and amplitude binning. The 4D frames of each dataset were registered using a quadrature-based optical flow method. After registration the deformation vector field was repositioned to the mid-position. Since amplitude-binned 4D data does not provide temporal information, we corrected the mid-position for the occupancy of the bins. We quantified the differences between the two mid-position datasets in terms of tumour offset and amplitude differences. Furthermore, we measured the standard deviation of the image intensity over the respiration after registration (σregistration) and the regularity of the deformation vector field (Δ|J|) to quantify the quality of the 4D-CT data. These measures were correlated to the regularity of the external respiration signal (σsignal). The two irregularity measures, Δ|J| and σregistration, were dependent on each other (p < 0.0001, R2 = 0.80 for P-4D-CT, R2 = 0.74 for A-4D-CT). For all datasets amplitude binning resulted in lower Δ|J| and σregistration, and large decreases led to visible quality improvements in the mid-position data. The quantity of artefact decrease was correlated to the irregularity of the external respiratory signal. The average tumour offset between the phase and amplitude binned mid-position without occupancy correction was 0.42 mm in the caudal direction (10.6% of the amplitude).
After correction this was reduced to 0.16 mm in the caudal direction (4.1% of the amplitude). Similar relative offsets were found at the diaphragm. We have devised a method to use amplitude-binned 4D-CT to construct a motion model and generate a mid-position planning CT for radiotherapy treatment purposes. We have quantified the systematic offset of this mid-position model relative to a motion model derived from P-4D-CT. We found that A-4D-CT led to a decrease of local artefacts and that this decrease was correlated to the irregularity of the external respiration signal.
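The phase- versus amplitude-binning distinction can be illustrated on an idealized 1-D respiration trace; the cosine signal, peak indices, and bin count below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def phase_bins(signal, peaks, n_bins):
    """Assign each sample a bin by its phase between consecutive cycle peaks."""
    bins = np.zeros(len(signal), dtype=int)
    for start, end in zip(peaks[:-1], peaks[1:]):
        idx = np.arange(start, end)
        phase = (idx - start) / (end - start)        # 0..1 within the cycle
        bins[idx] = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    return bins

def amplitude_bins(signal, n_bins):
    """Assign each sample a bin by its amplitude between signal min and max."""
    lo, hi = signal.min(), signal.max()
    frac = (signal - lo) / (hi - lo)
    return np.minimum((frac * n_bins).astype(int), n_bins - 1)

t = np.linspace(0, 4 * np.pi, 400)
resp = np.cos(t)                                     # idealized regular breathing
print(amplitude_bins(resp, 10)[:5])
```

Amplitude bins generally have unequal temporal occupancy (np.bincount of the result is far from uniform, since the trace lingers near the extremes), which is why an occupancy correction is needed before a time-averaged mid-position can be computed from amplitude-binned data.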

Kruis, Matthijs F.; van de Kamer, Jeroen B.; Belderbos, José S. A.; Sonke, Jan-Jakob; van Herk, Marcel

2014-09-01

190

A diode-pumped injection-seeded Nd:YAG laser system with an average output power of 38 W is described. The laser operates at 300 Hz with pulse energies up to 130 mJ. The temporal pulse shape is nominally flat in time and the pulse width is user selectable from 350 to 600 ps. In addition, the spatial profile of the beam is near top hat with contrast <10%. PMID:17514285

Honig, John; Halpin, John; Browning, Don; Crane, John; Hackel, Richard; Henesian, Mark; Peterson, John; Ravizza, Doug; Wennberg, Tim; Rieger, Harry; Marciante, John

2007-06-01

191

This paper presents verification and validation results for the time-averaged, three-dimensional velocity field immediately downstream of a finite elliptic cylinder at a Reynolds number of 1.35 × 10^4. Numerical simulations were performed with the finite element package, Fidap, using the steady-state, standard k-epsilon model. The ratio of the cylinder height to the major axis of the elliptical cross section is 5.0;

Michael R. Flynn; Alfred D. Eisner

2004-01-01

192

This paper presents verification and validation results for the time-averaged, three-dimensional velocity field immediately downstream of a finite elliptic cylinder at a Reynolds number of 1.35 × 10^4. Numerical simulations were performed with the finite element package, Fidap, using the steady-state, standard k-epsilon model. The ratio of the cylinder height to the major axis of the elliptical cross section

Michael R. Flynn; Alfred D. Eisner

2004-01-01

193

NASA Technical Reports Server (NTRS)

The role of unsteady flow processes in establishing the performance of axial flow turbomachinery was investigated. The development of the flow model, as defined by the time-average flow equations associated with the passage of the blade row embedded in a multistage configuration, is presented. The mechanisms for unsteady momentum and energy transport are outlined, and the modeling of these transport processes is discussed. The procedure for simulating unsteady multistage turbomachinery flows is described.

Adamczyk, John J.

1996-01-01

194

NASA Astrophysics Data System (ADS)

Recently discovered identities in statistical mechanics have enabled the calculation of equilibrium ensemble averages from realizations of driven nonequilibrium processes, including single-molecule pulling experiments and analogous computer simulations. Challenges in collecting large data sets motivate the pursuit of efficient statistical estimators that maximize use of available information. Along these lines, Hummer and Szabo developed an estimator that combines data from multiple time slices along a driven nonequilibrium process to compute the potential of mean force. Here, we generalize their approach, pooling information from multiple time slices to estimate arbitrary equilibrium expectations. Our expression may be combined with estimators of path-ensemble averages, including existing optimal estimators that use data collected by unidirectional and bidirectional protocols. We demonstrate the estimator by calculating free energies, moments of the polymer extension, the thermodynamic metric tensor, and the thermodynamic length in a model single-molecule pulling experiment. Compared to estimators that only use individual time slices, our multiple time-slice estimators yield substantially smoother estimates and achieve lower variance for higher-order moments.

Minh, David D. L.; Chodera, John D.

2011-01-01

195

Instability at long averaging times

[Slide excerpt; only fragments are recoverable: target linewidth in the nano-Hertz range; excitation is accompanied by a large light shift due to coupling to other levels; the apparatus includes PM fiber, an ECDL and slave laser diodes at 934 nm with AOM fine tuning, an H maser/871 nm reference with a frequency comb generator, and an SHG cavity with pulse switch producing 467 nm light for the trap lock.]

Hensinger, Winfried

196

NASA Astrophysics Data System (ADS)

The statistics of the average height fluctuation of the one-dimensional Kardar-Parisi-Zhang (KPZ)-type surface is investigated. Guided by the idea of local stationarity, we derive the scaling form of the characteristic function in the early-time regime, t ≪ N^(3/2) with t time and N the system size, from the known characteristic function in the stationary state (t ≫ N^(3/2)) of the single-step model derivable from a Bethe ansatz solution, and thereby find the scaling properties of the cumulants and the large deviation function in the early-time regime. These results, combined with the scaling analysis of the KPZ equation, imply the existence of universal scaling functions for the cumulants and a universal large deviation function. The analytic predictions are supported by the simulation results for three different models in the KPZ class.

Lee, Deok-Sun; Kim, Doochul

2006-08-01

197

NASA Technical Reports Server (NTRS)

Error analysis of archived total O3 from the total ozone mapping spectrometer (TOMS) (version 6) is presented. Daily total O3 maps for the tropics, from the period October 6-21, 1992, are derived from TOMS radiances following correction for these errors. These daily maps, averaged together, show a wavelike feature, which is observed in all latitude bands, underlying sharp peaks which occur at different longitudes depending on the latitude. The wave pattern is used to derive both time-averaged stratospheric and tropospheric O3 fields. The nature of the wave pattern (stratospheric or tropospheric) cannot be determined with certainty due to missing data (no Pacific sondes, no lower stratospheric Stratospheric Aerosol and Gas Experiment (SAGE) ozone for 18 months after the Mt. Pinatubo eruption) and significant uncertainties in the corroborative satellite record in the lower stratosphere (solar backscattered ultraviolet (SBUV), microwave limb sounder (MLS)). However, the time-averaged tropospheric ozone field, based on the assumption that the wave feature is stratospheric, agrees within 10% with ultraviolet differential absorption laser O3 measurements from the DC-8 during the Transport and Atmospheric Chemistry near the Equator-Atlantic (TRACE A) mission and with ozonesonde measurements over Brazzaville, Congo, Ascension Island, and Natal, Brazil, for the period October 6-21, 1992. The derived background (nonpolluted) Indian Ocean tropospheric ozone amount, 26 Dobson units (DU), agrees with the cleanest African ozonesonde profiles for September-October 1992. The assumption of a totally tropospheric wave (flat stratosphere) gives 38 DU above the western Indian Ocean and 15-40% disagreements with the sondes. Tropospheric column O3 is high from South America to Africa, owing to interaction of dynamics with biomass burning emissions.
Comparison with fire distributions from the advanced very high resolution radiometer (AVHRR) during October 1992 suggests that tropospheric O3 produced from biomass burning in South America and Africa dominates the O3 budget in the tropical southern hemisphere during the study period.

Kim, J. H.; Hudson, R. D.; Thompson, A. M.

1996-01-01

198

NASA Astrophysics Data System (ADS)

Volcanoes are a strong source of sulphur dioxide (SO2) with time-averaged emission inventories (e.g. Andres and Kasgnoc, 1998) indicating that volcanoes account for around 40% of the total annual SO2 flux in the pre-industrial atmosphere. We use a global aerosol microphysics model (GLOMAP-mode) to quantify the contribution of time-averaged volcanic sulphur emissions (from both continuous passive degassing and explosive volcanoes) on the global cloud condensation nuclei (CCN) budget. GLOMAP-mode is capable of simulating microphysical processes, such as binary homogeneous nucleation, hygroscopic growth, coagulation, condensation, cloud processing (oxidation of dissolved SO2 to SO4 in cloud droplets), as well as dry and wet deposition. For this study we use a sulphur chemistry scheme which includes 7 species (DMS, DMSO, MSA, SO2, H2SO4, COS, CS2). The runs were conducted using four internally mixed aerosol components, sulphate (SO4), sea salt, black carbon (BC) and organic carbon (OC). We simulated the impact of volcanic degassing in a pre-industrial setting (i.e. using 1750 BC and OC emissions in the absence of any anthropogenic emissions) using the volcanic emission inventory by Dentener et al. (2006). This volcanic inventory is based on datasets by Andres and Kasgnoc (1998) and Halmer et al. (2002) and accounts for an annual flux of ~13 Tg(S) of volcanic SO2. Our simulations suggest that volcanic degassing contributes on average ~50 CCN (>35nm in radius) per cubic centimetre to the annual zonal mean CCN concentrations in the tropical boundary layer. The simulations also reveal complex changes in annual zonal mean total particle concentrations (CN). CN concentrations are more than double in large parts of the tropical boundary layer when comparing the unperturbed run (i.e. without volcanic degassing) to the run featuring time-averaged volcanic degassing. 
However, the simulations also reveal that the additional SO2 and its subsequent conversion to sulphate aerosol enhances microphysical processes such as the coagulation of existing and/or entrained particles in upper parts of the atmosphere and in parts of the Southern Hemisphere boundary layer. Our model simulations suggest that volcanoes were a very important source of CCN in the pre-industrial atmosphere. Equatorwards of about 50° volcanoes can sustain CCN concentrations in excess of 50 cm-3, and up to 500 cm-3 over large tropical regions, even when the other main sulphur source (from DMS) is eliminated. Andres, R. J. and Kasgnoc, A. D.: A time-averaged inventory of subaerial volcanic sulfur emissions, J. Geophys. Res., 103, 25 251-25 262, 1998. Dentener, F., Kinne, S., Bond, T., Boucher, O., Cofala, J., Generoso, S., Ginoux, P., Gong, S., Hoelzemann, J. J., Ito, A., Marelli, L., Penner, J. E., Putaud, J. P., Textor, C., Schulz, M., van der Werf, G. R., and Wilson, J.: Emissions of primary aerosol and precursor gases in the years 2000 and 1750 prescribed data-sets for AeroCom, Atmos. Chem. Phys., 6, 4321-4344, 2006. Halmer, M., Schmincke, H., and Graf, H.-F.: The annual volcanic gas input into the atmosphere, in particular into the stratosphere, A global data-set for the past 100 years, J. Volca. Geotherm. Res., 115, 511-528, 2002.

Schmidt, Anja; Carslaw, Kenneth; Mann, Graham; Merikanto, Joonas

2010-05-01

199

ERIC Educational Resources Information Center

The turnaround recommendations and framework in "The Turnaround Challenge" grew out of both new research and synthesis of extensive existing research, as carried out by Mass Insight Education & Research Institute and its partners since September 2005. If the main report is the tip of the proverbial iceberg, this supplement represents at least part…

Calkins, Andrew; Guenther, William; Belfiore, Grace; Lash, Dave

2007-01-01

200

NASA Technical Reports Server (NTRS)

Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
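The quoted probabilities follow from the Poisson relation P(N ≥ 1) = 1 − exp(−λ); a quick check using the decadal rate of about 7 VEI ≥ 4 events implied by the moving-average forecast above:

```python
import math

def prob_at_least_one(expected_count):
    """Poisson probability of observing one or more events."""
    return 1.0 - math.exp(-expected_count)

# ~7 VEI>=4 eruptions expected in 2000-2009 (from the moving-average forecast)
print(prob_at_least_one(7.0))   # about 0.999, consistent with the quoted >99%
```

The 49% and 18% figures for VEI ≥ 5 and VEI ≥ 6 events correspond, by the same relation, to expected counts of roughly 0.67 and 0.20 per decade.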

Wilson, Robert M.

2001-01-01

201

Orbit-on-demand vehicles - Mission possible, but one-day turnaround?

NASA Technical Reports Server (NTRS)

The next generation launch-on-demand manned orbital vehicles will ideally be able to perform plane change maneuvers for rendezvous. Plane-changes of, e.g., 11 deg, will require a 3000 fps on-orbit maneuver velocity. Any plane-change capabilities beyond 800 fps rapidly increase the necessary vehicle weight. Sensitivity studies of the payload weight/vehicle weight, the presence or absence of air-breathing engines, high acceleration, etc., indicated that advances are required in the payload stowage pallet, thermal protection system, fault detection electronics, line-replaceable units, and accessibility to realize one-day turnaround. The spacecraft would also need autonomous systems checks and launch from minimum support sites reached by air ferry. A horizontal take-off, two-stage subsonic launch vehicle so far offers the lowest weight, maximum flexibility and minimum support combination of all configurations under consideration.

Talay, T. A.; Morris, W. D.; Lemessurier, R. W.

1985-01-01

202

integrative sampler and the solid-phase extraction for estimating herbicide time-weighted average concentrations and for detecting episodic or short-term pollution events (e.g. increased herbicide concentrations during a flood

Boyer, Edmond

203

A universe described by braneworlds is studied in a cyclic scenario. As expected, such an oscillating universe will undergo turnarounds whenever the phantom energy density reaches a critical value from either side. It is found that a universe described by the RSII brane model will readily undergo oscillations if either the brane tension, λ, or the bulk cosmological constant, Λ4, is negative. The DGP brane model does not readily undergo cyclic turnarounds; hence for this model a modified equation is proposed to incorporate the cyclic nature. It is found that there is always a remnant mass of a black hole at the verge of a turnaround. Hence, contrary to known results in the literature, it is found that the destruction of black holes at the turnaround is completely out of the question. Finally, to alleviate, if not solve, the problem posed by the black holes, it is argued that the remnant masses of the black holes do not act as a serious defect of the model because of Hawking evaporation.

Prabir Rudra

2012-02-25

204

ERIC Educational Resources Information Center

Today's accountability policies and changing demographics have created conditions in which leaders must rapidly build school capacity and improve outcomes in culturally diverse schools. This article presents findings from a mixed-methods evaluation of an Arizona Turnaround Leadership Development Project. The project drew on studies of…

Ylimaki, Rose M.; Brunderman, Lynnette; Bennett, Jeffrey V.; Dugan, Thad

2014-01-01

205

ERIC Educational Resources Information Center

This paper presents a case study of a rural community exhibiting a dramatic turnaround in community support for a new school bond issue. Demographic change was partly responsible for the change in community attitudes, with two waves of immigration altering the long-term conservative orientation of this community. After a series of failed…

Carlson, Robert V.

206

ERIC Educational Resources Information Center

This study investigates the economic impact of a "turnaround" school on real property values in its surrounding community as related to the argument introduced by Tiebout in 1956 correlating local public goods, in this case school success, to housing-location decision making. Using single-family home sales found on the Multiple Listing System and…

Jacobson, Stephen L.; Szczesek, Jill

2013-01-01

207

Background The movement of organelles in root hairs primarily occurs along the actin cytoskeleton. Circulation and “reverse fountain” cytoplasmic streaming constitute the typical forms by which most organelles (such as mitochondria and the Golgi apparatus) in plant root hair cells engage in bidirectional movement. However, there remains a lack of in-depth research regarding the relationship between the distribution of the actin cytoskeleton and turnaround organelle movement in plant root hair cells. Results In this paper, Arabidopsis seedlings that had been stably transformed with a GFP-ABD2-GFP (green fluorescent protein-actin-binding domain 2-green fluorescent protein) construct were utilized to study the distribution of bundles of filamentous (F)-actin and the directed motion of mitochondria along these bundles in root hairs. Observations with a confocal laser scanning microscope revealed that there were widespread circular F-actin bundles in the epidermal cells and root hairs of Arabidopsis roots. In root hairs, these circular bundles primarily start at the sub-apical region, which is the location where the turnaround movement of organelles occurs. MitoTracker probes were used to label mitochondria, and the dynamic observation of root hair cells with a confocal laser scanning microscope indicated that turnaround mitochondrial movement occurred along circular F-actin bundles. Conclusions Relevant experimental results demonstrated that the circular F-actin bundles provide a track for the turnaround and bidirectional movement of mitochondria. PMID:24626218

Zhang, Yu; Sheng, Xiaojing; Meng, Xiangfei; Li, Yan

2014-01-01

208

Signal-averaged electrocardiography (SAECG) was performed in 120 consecutive patients with idiopathic dilated cardiomyopathy (IDC), and in 60 healthy controls. Time-domain analysis of SAECGs revealed ventricular late potentials in 27 of 120 patients with IDC (23%) compared to 2 of 60 controls (3%; P < 0.05). Frequency-domain analysis of SAECGs showed ventricular late potentials in 9 of 120 patients with IDC (8%) compared to none of the 60 controls (0%, P < 0.05). During a prospective follow-up of 15 +/- 7 months, serious arrhythmic events, defined as sustained ventricular tachyarrhythmias or sudden death, occurred in 17 of 120 patients with IDC (14%). The sensitivity, specificity, and positive and negative predictive values of ventricular late potentials for serious arrhythmic events were 35%, 80%, 22%, and 88% for the time-domain analysis, and 18%, 94%, 33%, and 87% for the frequency-domain analysis of SAECG, respectively. Thus, neither the time- nor the frequency-domain analysis of SAECG appears to be useful for risk stratification in the setting of IDC in view of their low sensitivity and low positive predictive value for serious arrhythmic events during follow-up. PMID:8945070

Grimm, W; Hoffmann, J; Knop, U; Winzenburg, J; Menz, V; Maisch, B

1996-11-01

209

Averaging procedures for flow within vegetation canopies

Most one-dimensional models of flow within vegetation canopies are based on horizontally averaged flow variables. This paper formalizes the horizontal averaging operation. Two averaging schemes are considered: pure horizontal averaging at a single instant, and time averaging followed by horizontal averaging. These schemes produce different forms for the mean and turbulent kinetic energy balances, and especially for the ‘wake production’
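The two averaging schemes differ once quadratic (energy) quantities are formed, which is where the 'wake production' terms arise; a toy NumPy illustration on a synthetic velocity field u(t, x), with all numbers synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(2.0, 0.5, size=(100, 32))    # u[t, x]: 100 instants, 32 points

# Scheme 1: horizontal average at a single instant
inst_horiz = u[0].mean()

# Scheme 2: time average at each point, then horizontal average
u_bar = u.mean(axis=0)                       # time mean at each x
time_then_horiz = u_bar.mean()

# The schemes differ for quadratic quantities: time-averaging first leaves a
# 'dispersive' contribution from spatial variation of the time-mean field,
# distinct from the turbulent (time-fluctuation) contribution.
dispersive_ke = ((u_bar - u_bar.mean()) ** 2).mean()
turbulent_ke = ((u - u_bar) ** 2).mean()
print(inst_horiz, time_then_horiz, dispersive_ke, turbulent_ke)
```

Because averaging does not commute with forming products, the mean and turbulent kinetic energy balances take different forms under the two schemes, as the abstract notes.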

M. R. Raupach; R. H. Shaw

1982-01-01

210

The primary goal of this work was to verify experimentally the applicability of the recently introduced Time-Averaged Wave Envelope (TAWE) method [1] as a tool for fast prediction of four-dimensional (4D) pulsed nonlinear pressure fields from arbitrarily shaped acoustic sources in attenuating media. The experiments were performed in water at the fundamental frequency of 2.8 MHz for spherically focused (focal length F = 80 mm) square (20 × 20 mm) and rectangular (10 × 25 mm) sources similar to those used in the design of 1D linear arrays operating with ultrasonic imaging systems. The experimental results obtained with 10-cycle tone bursts at three different excitation levels corresponding to linear, moderately nonlinear and highly nonlinear propagation conditions (0.045, 0.225 and 0.45 MPa on-source pressure amplitude, respectively) were compared with those yielded using the TAWE approach [1]. The comparison of the experimental results and numerical simulations has shown that the TAWE approach is well suited to predict (to within ± 1 dB) both the spatial-temporal and spatial-spectral pressure variations in the pulsed nonlinear acoustic beams. The obtained results indicated that implementation of the TAWE approach enabled shortening of computation time in comparison with the time needed for prediction of the full 4D pulsed nonlinear acoustic fields using a conventional (Fourier-series) approach [2]. The reduction in computation time depends on several parameters, including the source geometry, dimensions, fundamental resonance frequency, excitation level as well as the strength of the medium nonlinearity. For the non-axisymmetric focused transducers mentioned above, excited by tone bursts corresponding to moderately nonlinear and highly nonlinear conditions, the execution time of the computations was 3 and 12 hours, respectively, on a PC laptop with a 1.5 GHz, 32-bit processor and only 2 GB of RAM.
Such prediction of the full 4D pulsed field is not possible when using conventional, Fourier-series scheme as it would require increasing the RAM memory by at least 2 orders of magnitude. PMID:18474387

Wójcik, J.; Kujawska, T.; Nowicki, A.; Lewin, P.A.

2008-01-01

211

In this paper, theoretical calculations as well as numerical simulations are performed for the time-averaged acoustic force and torque on a rigid cylinder of arbitrary size in a fluid with low viscosity, i.e., the acoustic boundary layer is thin compared to the cylinder radius. An exact analytical solution and its approximation are proposed in the form of an infinite series including Bessel functions. These solutions can be evaluated easily by a mathematical software package such as Mathematica or MATLAB. Three types of incident waves, plane traveling wave, plane standing wave, and dual orthogonal standing waves, are investigated in detail. It is found that for a small particle, the viscous effects for an incident standing wave may be neglected but those for an incident traveling wave are notable. A nonzero viscous torque is experienced by the rigid cylinder when subjected to dual orthogonal standing waves with a phase shift, even when the cylinder is located at equilibrium positions without imposed acoustic forces. Furthermore, numerical simulations are carried out based on the FVM algorithm to verify the proposed theoretical formulas. The theoretical results and the numerical ones agree with each other very well in all the cases considered. PMID:21682376

Wang, Jingtao; Dual, Jurg

2011-06-01

212

Human and system errors, using adaptive turnaround documents to capture data in a busy practice.

Capturing coded clinical data for clinical decision support can improve care, but cost and disruption of clinic workflow present barriers to implementation. Previous work has shown that tailored, scannable paper forms (adaptive turnaround documents, ATDs) can achieve the benefits of computer-based clinical decision support at low cost and minimal disruption of workflow. ATDs are highly accurate under controlled circumstances, but accuracy in the setting of busy clinics with untrained physician users is untested. We recently developed and implemented such a system and studied rates of errors attributable to physician users and errors in the system. Prompts were used in 63% of encounters. Errors resulting from incorrectly marking forms occurred in 1.8% of prompts. System errors occurred in 7.2% of prompts. Most system errors were failures to capture data and may represent human errors in the scanning process. ATDs are an effective way to collect coded data from physicians. Further automation of the scanning process may reduce system errors. PMID:16779032

Downs, Stephen M; Carroll, Aaron E; Anand, Vibha; Biondich, Paul G

2005-01-01

213

Residual life assessment of the SSME/ATD HPOTP turnaround duct (TAD)

NASA Technical Reports Server (NTRS)

This paper is concerned with the prediction of the low cycle thermal fatigue behavior of a component in a developmental (ATD) high pressure liquid oxygen turbopump (HPOTP) for the Space Shuttle Main Engine (SSME). This component is called the Turnaround Duct (TAD). The TAD is a complex single piece casting of MAR-M-247 material. Its function is to turn the hot turbine exhaust gas (a 1200 F hydrogen-rich gas stream) such that it can exhaust radially out of the turbopump. In very simple terms, the TAD consists of two rings connected axially by 22 hollow airfoil shaped struts, with turning vanes placed at the top, middle, and bottom of each strut. The TAD is attached to the other components of the pump via bolts passing through 14 of the 22 struts. Of the remaining 8 struts, four are equally spaced (90 deg interval) and contain a cooling tube through which liquid hydrogen passes on its way to cool the shaft bearing assemblies. The remaining 4 struts are empty. One of the pump units in the certification test series was destructively examined after 22 test firings. Substantial axial cracking was found in two of the struts which contain cooling tubes. None of the other 20 struts showed any sign of internal cracking. This unusual low cycle thermal fatigue behavior within the two cooling tube struts is the focus of this study.

Gross, R. Steven

1996-01-01

214

NASA Astrophysics Data System (ADS)

Turbulent mixing generated by shock-driven acceleration of a perturbed interface is simulated using a new multi-component Reynolds-averaged Navier-Stokes (RANS) model closed with a two-equation turbulence model. The model is implemented in a hydrodynamics code using a third-order weighted essentially non-oscillatory finite-difference method for the advection terms and a second-order central difference method for the gradients in the source and diffusion terms. In the present reshocked Richtmyer-Meshkov instability and mixing study, an incident shock is generated in air and progresses into a sulfur hexafluoride test section. The time evolution of the predicted mixing layer widths corresponding to six shock tube test section lengths is compared with experimental measurements and three-dimensional multi-mode numerical simulations. The mixing layer widths are also compared with the analytical self-similar power-law solution of the simplified model equations prior to reshock. A set of model coefficients and initial conditions specific to these six experiments is established, for which the widths before and after reshock agree very well with experimental and numerical simulation data. A second set of general coefficients that accommodates a broader range of incident shock Mach numbers, Atwood numbers, and test section lengths is also established by incorporating additional experimental data and previous RANS modeling. Terms in the budgets of the turbulent kinetic energy and dissipation rate equations are examined to evaluate the relative importance of turbulence production, dissipation and diffusion mechanisms during mixing. Convergence results for the mixing layer widths, mean fields, and turbulent fields under grid refinement are presented for each of the cases.

Morán-López, J. T.; Schilling, O.

2014-05-01

215

The use of the fast Fourier transform in power spectrum analysis is described. Principal advantages of this method are a reduction in the number of computations and in required core storage, and convenient application in nonstationarity tests. The method involves sectioning the record and averaging modified periodograms of the sections.

P. Welch

1967-01-01
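Welch's sectioning-and-averaging procedure described in the abstract can be sketched in Python (a minimal illustration assuming NumPy; the segment length, Hann window, and 50% overlap are arbitrary choices, not taken from the paper):

```python
import numpy as np

def welch_psd(x, seg_len, overlap=0.5, fs=1.0):
    """Averaged modified periodogram (Welch's method): section the record,
    window each section, and average the sections' periodograms."""
    step = int(seg_len * (1 - overlap))
    window = np.hanning(seg_len)
    norm = fs * np.sum(window**2)  # compensates for power removed by the window
    segments = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, step)]
    periodograms = [np.abs(np.fft.rfft(window * s))**2 / norm for s in segments]
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    return freqs, np.mean(periodograms, axis=0)

# A 50 Hz tone sampled at 1 kHz: the averaged spectrum should peak near 50 Hz.
fs = 1000.0
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 50 * t)
freqs, psd = welch_psd(x, seg_len=256, fs=fs)
peak_hz = freqs[np.argmax(psd)]
```

Averaging the modified periodograms of the sections trades frequency resolution for reduced variance of the spectral estimate, which is the core advantage the abstract describes.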

216

A-weighted average exposure to 85 decibels or greater can have unfavorable effects on hearing. The Hearing Conservation Program has been designed to reduce hearing loss at the College of Agricultural Sciences. Workers who are knowledgeable about hearing and its loss are likely to use hearing protection prior to working in a noisy area.

217

The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.

Chrien, R.E.

1986-10-01

218

and effective. Visit our website aae.boisestate.edu/know-your-learning-style/ for some assessments. To maintain concentration and focus during longer study sessions, take a 5-minute break every 20 minutes. Try to plan your studying throughout the week in shorter time periods, rather

Barrash, Warren

219

Temperature: Decadal Averages Map

NSDL National Science Digital Library

This interactive map allows students to experiment with decadal average temperature projections. Overall temperatures are expected to rise throughout the century and this tool demonstrates those projected measurements.

Commission, California E.

220

NASA Astrophysics Data System (ADS)

The objective of this research is to develop turbulence models to predict the flow and heat transfer fields dominated by the curvature effect, such as those encountered in turbine cascades and turn-around ducts. A Navier-Stokes code has been developed using an explicit Runge-Kutta method with a two-layer k-epsilon/ARSM (Algebraic Reynolds Stress Model), Chien's Low Reynolds Number (LRN) k-epsilon model, and Coakley's LRN q-omega model. The near-wall pressure strain correlation term was included in the ARSM. The formulation is applied to Favre-averaged N-S equations, and no thin-layer approximations are made in either the mean flow or turbulence transport equations. Anisotropic scaling of artificial dissipation terms was used. A locally variable timestep was also used to improve convergence. Detailed comparisons were made between computations and data measured in a turbine cascade by Arts et al. at the Von Karman Institute. The surface pressure distributions and wake profiles were predicted well by all the models. The blade heat transfer is predicted well by the k-epsilon/ARSM model, as well as by the k-epsilon model. It is found that the onset of boundary layer transition on both surfaces is highly dependent upon the level of local freestream turbulence intensity, which is strongly influenced by the streamline curvature. Detailed computation of the flow in the turn-around duct has been carried out and validated against the data of Monson as well as Sandborn. The computed results at various streamwise locations on both the concave and convex sides are compared with flow and turbulence data, including the separation zone on the inner wall. The k-epsilon/ARSM model yielded relatively better results than the two-equation turbulence models. A detailed assessment of the turbulence models has been made with regard to their applicability to curved flows.

Lakshminarayana, B.; Luo, J.

1993-07-01

221

With the introduction of classroom response systems (CRS) in physics classrooms, instructors are now able to examine assessment parameters that are commonly described in the standardized assessment literature but were not previously available. The purpose of this research was to examine the relationship between students' item response time for answering multiple-choice questions posed in an Introduction to Physical Science course

Wilson J. González-Espada; Daniel W. Bullock

222

The efficiency of producing salable products in the pork industry is largely determined by costs associated with feed and by the amount and quality of lean meat produced. The objectives of this paper were 1) to explore heritability and genetic correlations for growth, feed efficiency, and real-time ultrasound traits using both pedigree and marker information and 2) to assess accuracy of genomic prediction for those traits using Bayes A prediction models in a Duroc terminal sire population. Body weight at birth (BW at birth) and weaning (BW at weaning) and real-time ultrasound traits, including back fat thickness (BF), muscle depth (MD), and intramuscular fat content (IMF), were collected on the basis of farm protocol. Individual feed intake and serial BW records of 1,563 boars obtained from feed intake recording equipment (FIRE; Osborne Industries Inc., Osborne, KS) were edited to obtain growth, feed intake, and feed efficiency traits, including ADG, ADFI, feed conversion ratio (FCR), and residual feed intake (RFI). Correspondingly, 1,047 boars were genotyped using the Illumina PorcineSNP60 BeadChip. The remaining 516 boars, as an independent sample, were genotyped with a low-density GGP-Porcine BeadChip and imputed to 60K. Magnitudes of heritability from pedigree analysis were moderate for growth, feed intake, and ultrasound traits (ranging from 0.44 ± 0.11 for ADG to 0.58 ± 0.09 for BF); heritability estimates were 0.32 ± 0.09 for FCR but only 0.10 ± 0.05 for RFI. Comparatively, heritability estimates using marker information by Bayes A models were about half of those from pedigree analysis, suggesting "missing heritability." 
Moderate positive genetic correlations between growth and feed intake (0.32 ± 0.05) and back fat (0.22 ± 0.04), as well as negative genetic correlations between growth and feed efficiency traits (-0.21 ± 0.08, -0.05 ± 0.07), indicate selection solely on growth traits may lead to an undesirable increase in feed intake, back fat, and reduced feed efficiency. Genetic correlations among growth, feed intake, and FCR assessed by a multiple-trait Bayes A model resulted in increased genetic correlation between ADG and ADFI, a negative correlation between ADFI and FCR, and a positive correlation between ADG and FCR. Accuracies of genomic prediction for the traits investigated, ranging from 9.4% for RFI to 36.5% for BF, were reported that might provide new insight into pig breeding and future selection programs using genomic information. PMID:24671579

Jiao, S; Maltecca, C; Gray, K A; Cassady, J P

2014-06-01

223

NASA Astrophysics Data System (ADS)

Despite remarkable achievements for some chemicals, a field-measurement technique has not been advanced for volatile hydrophobic organic chemicals (HOCs) that are the subjects of international concern. This study assesses the applicability of passive air sampling (PAS) by comparing PUF-PAS and its modified SIP-PAS, which was made by impregnating XAD-4 powder into PUF, overviewing the principles of PAS, screening sensitive parameters, and determining the uncertainty range of the PAS-derived air concentration (CA). The PAS air sampling rate determined in this study, corrected by a co-deployed low-volume active air sampler (LAS) for neutral PFCs as model chemicals, was ~1.2 m3 day-1. Our assessment shows that the improved sorption capacity in a SIP lengthens PAS deployment duration by expanding the linear uptake range and thus enlarges the effective air sampling volume and detection frequency of chemicals at trace level. Consequently, volatile chemicals can be collected over sufficiently long times without reaching equilibrium when using SIP, while this is not possible for PUF. The parameter to which PAS-derived CA was most sensitive was the air-side mass transfer coefficient (kA), implying the necessity of spiking depuration chemicals (DCs) because this parameter is strongly related to meteorological conditions. Uncertainty in partition coefficients (KPSM-A or KOA) influences PAS-derived CA to a greater extent for lower-KPSM-A chemicals. Also, the PAS-derived CA has an uncertainty range from half to 3-fold the calculated level. This work is expected to establish solid grounds for the improvement of field-measurement techniques for HOCs.

Kim, Seung-Kyu; Park, Jong-Eun

2014-06-01

224

NSDL National Science Digital Library

This demo provides students with a concrete understanding of the average rate of change for physical situations and for functions described in tabular or graphic form. Animations in gif and Quicktime format are available, as well as Excel programs for classroom or student use.

Roberts, Lila F.

2005-04-21
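The average rate of change the demo illustrates reduces to the slope of a secant line through two points on a function's graph. A minimal Python sketch (the quadratic position function is a made-up example, not taken from the resource):

```python
def average_rate_of_change(f, a, b):
    """Slope of the secant line through (a, f(a)) and (b, f(b))."""
    return (f(b) - f(a)) / (b - a)

# Example: position s(t) = t**2 (meters); the average velocity over the
# interval [1, 3] is (9 - 1) / (3 - 1) = 4 m/s.
position = lambda t: t ** 2
avg_velocity = average_rate_of_change(position, 1.0, 3.0)
```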

225

ERIC Educational Resources Information Center

"Your Average Nigga" contends that just as exaggerating the differences between black and white language leaves some black speakers, especially those from the ghetto, at an impasse, so exaggerating and reifying the differences between the races leaves blacks in the impossible position of either having to try to be white or forever struggling to…

Young, Vershawn Ashanti

2004-01-01

226

Time-frequency maps of the signal-averaged electrocardiogram based on short-time Fourier transform (STFT) analysis were employed to assess the presence of high-frequency electrical transients (turbulence) for arrhythmia risk assessment. The optimal configuration set of STFT variables aiming at risk stratification for sustained monomorphic ventricular tachycardia (SMVT) is still undetermined. Different configuration variables, including analyzing time-window widths, starting positions, relative displacements, and zero-padding for STFT time-frequency map construction, were combined on each analyzed averaged signal from 18 healthy controls and 18 subjects presenting inducible SMVT. Spectral turbulence analysis (STA) was thus carried out according to conventional procedures. The optimal configuration set of variables for STA was obtained by assessing the total diagnostic accuracy of all combinations of parameters. The optimal diagnostic performance was found at 86% total diagnostic accuracy, as compared to 56% using previously defined normality thresholds (p=0.01). The present configuration set of variables is distinct from previously defined sets and improves risk stratification. PMID:19428130

Benchimol-Barbosa, P R; Nasario-Junior, O; Nadal, J

2010-11-01
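The configuration variables named in the abstract (window width, relative displacement between windows, and zero-padding) map directly onto the parameters of an STFT magnitude map. A minimal illustrative implementation assuming NumPy, not the authors' code:

```python
import numpy as np

def stft_map(x, win_len, hop, pad=0):
    """Short-time Fourier transform magnitude map: slide a Hann window along
    the signal, optionally zero-pad each frame, and FFT it."""
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        frame = window * x[start:start + win_len]
        if pad:
            frame = np.concatenate([frame, np.zeros(pad)])  # zero-padding
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)  # shape: (n_frames, n_freq_bins)

x = np.sin(2 * np.pi * 0.05 * np.arange(1024))  # synthetic test signal
tf_map = stft_map(x, win_len=128, hop=32, pad=128)
```

Shorter windows sharpen time localization at the cost of frequency resolution, which is why the study sweeps these parameters to find the best diagnostic accuracy.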

227

Averaging with B-series

A (2π)-periodic change of variables that is the identity map at the stroboscopic times t_n = 2πn is called stroboscopic averaging. High-order averaging: there exists a formal (2π)-periodic change

Murua, Ander

228

Covariant approximation averaging

We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.

Eigo Shintani; Rudy Arthur; Thomas Blum; Taku Izubuchi; Chulwoo Jung; Christoph Lehner

2014-02-02

229

This study evaluated a large set of blinded, previously analyzed prenatal DNA samples with a novel, CGG triplet-repeat primed (TP)-PCR assay (Amplidex FMR1 PCR Kit; Asuragen, Austin, TX). This cohort of 67 fetal DNAs contained 18 full mutations (270 to 1100 repeats, including 1 mosaic), 12 premutations (59 to 150 repeats), 9 intermediate mutations (54 to 58 repeats), and 28 normal samples (17 to 50 repeats, including 3 homozygous female samples). TP-PCR accurately identified FMR1 genotypes, ranging from normal to full- mutation alleles, with a 100% specificity (95% CI, 85.0% to 100%) and a 97.4% sensitivity (95% CI, 84.9% to 99.9%) in comparison with Southern blot analysis results. Exact sizing was possible for a spectrum of normal, intermediate, and premutation (up to 150 repeats) alleles, but CGG repeat numbers >200 are only identified as full mutations. All homozygous alleles were correctly resolved. The assay is also able to reproducibly detect a 2.5% premutation and a 3% full-mutation mosaicism in a normal male background, but a large premutation in a full male mutation background was masked when the amount of the latter was >5%. Implementation of this TP-PCR will significantly reduce reflex testing using Southern blot analyses. Additional testing with methylation-informative techniques might still be needed for a few cases with (large) premutations or full mutations. PMID:22921311

Seneca, Sara; Lissens, Willy; Endels, Kristof; Caljon, Ben; Bonduelle, Maryse; Keymolen, Kathleen; De Rademaeker, Marjan; Ullmann, Urielle; Haentjens, Patrick; Van Berkel, Kim; Van Dooren, Sonia

2012-11-01

230

NSDL National Science Digital Library

This statistics activity requires students to find the mean using their TI-10 or TI-15 Explorer calculators. The task asks students to read part of Sideways Stories of Wayside School by Louis Sachar and find different combinations of lunches that would give an average price of $1.85. This resource includes a three-page PDF lesson plan with guiding questions, data collection page, and calculator instructions.

2008-06-09

231

A 48-square-mile area in the southeastern part of the Salt Lake Valley, Utah, was studied to determine if generalized information obtained from geologic maps, water-level maps, and drillers' logs could be used to estimate hydraulic conductivity, porosity, and slope of the potentiometric surface: the three properties needed to calculate average linear velocity of ground water. Estimated values of these properties could be used by water-management and regulatory agencies to compute values of average linear velocity, which could be further used to estimate travel time of ground water along selected flow lines, and thus to determine wellhead protection areas around public-supply wells. The methods used to estimate the three properties are based on assumptions about the drillers' descriptions, the depositional history of the sediments, and the boundary conditions of the hydrologic system. These assumptions were based on geologic and hydrologic information determined from previous investigations. The reliability of the estimated values for hydrologic properties and average linear velocity depends on the accuracy of these assumptions. Hydraulic conductivity of the principal aquifer was estimated by calculating the thickness-weighted average of values assigned to different drillers' descriptions of material penetrated during the construction of 98 wells. Using these 98 control points, the study area was divided into zones representing approximate hydraulic-conductivity values of 20, 60, 100, 140, 180, 220, and 250 feet per day. This range of values is about the same range of values used in developing a ground-water flow model of the principal aquifer in the early 1980s. 
Porosity of the principal aquifer was estimated by compiling the range of porosity values determined or estimated during previous investigations of basin-fill sediments, and then using five different values ranging from 15 to 35 percent to delineate zones in the study area that were assumed to be underlain by similar deposits. Delineation of the zones was based on depositional history of the area and the distribution of sediments shown on a surficial geologic map. Water levels in wells were measured twice in 1990: during late winter when ground-water withdrawals were the least and water levels the highest, and again in late summer, when ground-water withdrawals were the greatest and water levels the lowest. These water levels were used to construct potentiometric-contour maps and subsequently to determine the variability of the slope in the potentiometric surface in the area. Values for the three properties, derived from the described sources of information, were used to produce a map showing the general distribution of average linear velocity of ground water moving through the principal aquifer of the study area. Velocities derived ranged from 0.06 to 144 feet per day with a median of about 3 feet per day. Values were slightly faster for late summer 1990 than for late winter 1990, mainly because increased withdrawal of water during the summer created slightly steeper hydraulic-head gradients between the recharge area near the mountain front and the well fields farther to the west. The fastest average linear-velocity values were located at the mouth of Little Cottonwood Canyon and south of Dry Creek near the mountain front, where the hydraulic conductivity was estimated to be the largest because the drillers described the sediments to be predominantly clean and coarse grained. Both of these areas also had steep slopes in the potentiometric surface. 
Other areas where average linear velocity was fast included small areas near pumping wells where the slope in the potentiometric surface was locally steepened. No apparent relation between average linear velocity and porosity could be seen in the mapped distributions of these two properties. Calculation of travel time along a flow line to a well in the southwestern part of the study area during the sum

Freethey, G.W.; Spangler, L.E.; Monheiser, W.J.

1994-01-01
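The calculation described above combines the three estimated properties as average linear velocity = hydraulic conductivity × hydraulic gradient ÷ effective porosity (Darcy flux divided by porosity). A minimal sketch; the specific K, slope, and porosity values below are hypothetical, chosen from within the ranges the study reports:

```python
def average_linear_velocity(hydraulic_conductivity, gradient, porosity):
    """Average linear (seepage) velocity of ground water: v = K * i / n."""
    return hydraulic_conductivity * gradient / porosity

# Hypothetical inputs: K = 220 ft/day (a mapped zone value), potentiometric
# slope i = 0.01, porosity n = 0.25  ->  v = 8.8 ft/day.
v = average_linear_velocity(220.0, 0.01, 0.25)
```

Travel time along a flow line then follows by integrating path length over this velocity, which is how wellhead protection areas are delineated.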

232

NASA Astrophysics Data System (ADS)

Digital elevation models (DEMs) derived from TerraSAR add-on for Digital Elevation Measurements (TanDEM-X) synthetic aperture radar imagery provide a measurement of elevation change over time. On the East Rift Zone (ERZ) of Kīlauea Volcano, Hawai`i, the effusion of lava causes changes in topography. When these elevation changes are summed over the area of an active lava flow, it is possible to quantify the volume of lava emplaced at the surface during the time spanned by the TanDEM-X data, a parameter that can be difficult to measure across the entirety of an ~100 km2 lava flow field using ground-based techniques or optical remote sensing data. Based on the differences between multiple TanDEM-X-derived DEMs collected days to weeks apart, the mean dense-rock equivalent time-averaged discharge rate of lava at Kīlauea between mid-2011 and mid-2013 was approximately 2 m3/s, which is about half the long-term average rate over the course of Kīlauea's 1983-present ERZ eruption. This result implies that there was an increase in the proportion of lava stored versus erupted, a decrease in the rate of magma supply to the volcano, or some combination of both during this time period. In addition to constraining the time-averaged discharge rate of lava and the rates of magma supply and storage, topographic change maps derived from space-based TanDEM-X data provide insights into the four-dimensional evolution of Kīlauea's ERZ lava flow field. TanDEM-X data are a valuable complement to other space-, air-, and ground-based observations of eruptive activity at Kīlauea and offer great promise at locations around the world for aiding with monitoring not just volcanic eruptions but any hazardous activity that results in surface change, including landslides, floods, earthquakes, and other natural and anthropogenic processes.

Poland, Michael P.

2014-07-01
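The DEM-differencing approach above reduces to summing per-pixel elevation change, multiplying by pixel area, and dividing by the time between acquisitions. A minimal sketch; the pixel size, elevation changes, and time span below are invented toy numbers, not the study's data:

```python
def time_averaged_discharge(dh, pixel_area_m2, dt_seconds):
    """Time-averaged discharge rate (m3/s): summed elevation change (m) over
    the flow area, times pixel area, divided by the DEM-to-DEM interval."""
    volume_m3 = sum(dh) * pixel_area_m2
    return volume_m3 / dt_seconds

# Toy numbers: one million pixels each gaining 2 m of lava on a 12 m x 12 m
# grid over a 14-day repeat interval.
dh = [2.0] * 1_000_000
rate = time_averaged_discharge(dh, pixel_area_m2=144.0, dt_seconds=14 * 86400)
```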

233

IgG/anti-IgG immunoassay based on a turn-around point long period grating

NASA Astrophysics Data System (ADS)

Long period fiber gratings (LPFGs) have been proposed as label-free optical biosensors for a few years. Refractive index changes, which modify the fiber transmission spectrum, are used for evaluating a biochemical interaction that occurs along the grating region. A turn-around point (TAP) LPFG was manufactured to enhance the refractive index sensitivity of these devices. Considering the simplicity and rapidity of the process with respect to the silanization procedure, the functionalization of the fiber was carried out with Eudragit L100 copolymer. An IgG/anti-IgG immunoassay was implemented for studying the antigen/antibody interaction. A limit of detection lower than 100 μg L-1 was achieved. Based on the same model assay, we compared the resonance wavelength shifts during the injection of 10 mg L-1 anti-IgG antigen between the TAP LPFG and a standard non-TAP one, in which the coupling occurs with a lower order cladding mode, as a performance improvement of LPFG-based biosensors.

Chiavaioli, F.; Biswas, P.; Trono, C.; Giannetti, A.; Tombelli, S.; Bandyopadhyay, S.; Basumallick, N.; Dasgupta, K.; Baldini, F.

2014-02-01

234

Ensemble averaging of acoustic data

NASA Technical Reports Server (NTRS)

A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System, consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.

Stefanski, P. K.

1982-01-01
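The noise-reduction principle behind ensemble averaging can be illustrated with synthetic data: averaging N synchronized records of a repeated signal reduces uncorrelated noise by roughly sqrt(N). This sketch assumes NumPy and does not reflect the documented PDP-11 software itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# A repeated deterministic waveform buried in independent noise on each sweep.
signal = np.sin(2 * np.pi * np.arange(256) / 64)
records = signal + rng.normal(0.0, 0.5, size=(100, 256))  # 100 noisy sweeps

# Ensemble average across sweeps: the coherent signal survives, the
# zero-mean noise is suppressed by about sqrt(100) = 10x.
ensemble_avg = records.mean(axis=0)

residual_single = np.std(records[0] - signal)   # ~0.5
residual_avg = np.std(ensemble_avg - signal)    # ~0.05
```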

235

Modular Approach to Physics: Weighted Average

NSDL National Science Digital Library

This resource is a multi-exercise module relating to average speed. It contains an interactive Java simulation featuring a car that the user controls for speed. Graphs of instantaneous and average velocities are simultaneously displayed, helping the user to understand the concept of time-weighted average. The lesson plan available through the Help menu explains how to calculate a weighted average, first with two speeds, and then with multiple. This item is part of a larger collection of simulation based physics models sponsored by the MAP project (Modular Approach to Physics).

Austin, David; Martin, Brian

2008-08-13
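The time-weighted average the module teaches is total distance divided by total time, i.e. each speed weighted by how long it was held. A minimal Python sketch with illustrative numbers (not taken from the simulation):

```python
def time_weighted_average_speed(speeds, durations):
    """Time-weighted average speed: sum(v_i * t_i) / sum(t_i)."""
    total_distance = sum(v * t for v, t in zip(speeds, durations))
    return total_distance / sum(durations)

# 60 km/h for 2 h, then 30 km/h for 1 h: 150 km in 3 h = 50 km/h,
# not the naive unweighted mean of 45 km/h.
avg = time_weighted_average_speed([60.0, 30.0], [2.0, 1.0])
```

The gap between 50 and 45 km/h is exactly the point the lesson makes with its two-speed case before generalizing to multiple speeds.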

236

Convergence speed in distributed consensus and averaging

We propose three new algorithms for the distributed averaging and consensus problems: two for the fixed-graph case, and one for the dynamic-topology case. The convergence times of our fixed-graph algorithms compare favorably ...

Olshevsky, Alexander

2006-01-01
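A standard fixed-graph linear consensus iteration (a textbook sketch, not necessarily the thesis's algorithms) illustrates distributed averaging: each node repeatedly replaces its value with a weighted average of its neighbors' values, and with a doubly stochastic weight matrix every node converges to the global average:

```python
import numpy as np

# Doubly stochastic weight matrix for a fully connected 3-node graph:
# rows and columns each sum to 1, so the average is preserved at every step.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

x = np.array([1.0, 5.0, 9.0])  # initial node values; global average is 5.0
for _ in range(100):
    x = W @ x                  # each node averages over itself and neighbors
```

The convergence time studied in the thesis is governed by the second-largest eigenvalue modulus of W (here 0.25, so convergence is geometric and fast).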

237

Time-averaged holograms describing the sound-induced motion of the tympanic membrane (TM) in cadaveric preparations from three mammalian species and one live ear were measured using opto-electronic holography. This technique allows rapid measurements of the magnitude of motion of the tympanic membrane surface at frequencies as high as 25 kHz. The holograms measured in response to low and middle-frequency sound stimuli are similar to previously reported time-averaged holograms. However, at higher frequencies (f > 4 kHz), our holograms reveal unique TM surface displacement patterns that consist of highly-ordered arrangements of multiple local displacement magnitude maxima, each of which is surrounded by nodal areas of low displacement magnitude. These patterns are similar to modal patterns (two-dimensional standing waves) produced by either the interaction of surface waves traveling in multiple directions or the uniform stimulation of modes of motion that are determined by the structural properties and boundary conditions of the TM. From the ratio of the displacement magnitude peaks to nodal valleys in these apparent surface waves, we estimate a Standing Wave Ratio of at least 4 that is consistent with energy reflection coefficients at the TM boundaries of at least 0.35. It is also consistent with small losses within the uniformly stimulated modal surface waves. We also estimate possible TM surface wave speeds that vary with frequency and species from 20 to 65 m/s, consistent with other estimates in the literature. The presence of standing wave or modal phenomena has previously been intuited from measurements of TM function, but is ignored in some models of tympanic membrane function. Whether these standing waves result either from the interactions of multiple surface waves that travel along the membrane, or by uniformly excited modal displacement patterns of the entire TM surface is still to be determined. PMID:19328841

Rosowski, John J.; Cheng, Jeffrey Tao; Ravicz, Michael E.; Hulli, Nesim; Hernandez-Montes, Maria; Harrington, Ellery; Furlong, Cosme

2009-01-01
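The abstract's step from standing wave ratio to energy reflection follows the standard relation |R| = (SWR - 1)/(SWR + 1), with energy reflection |R|^2. A minimal check in Python reproducing the paper's stated numbers:

```python
def energy_reflection_from_swr(swr):
    """Energy reflection coefficient implied by a standing wave ratio:
    |R| = (SWR - 1) / (SWR + 1); energy reflection = |R|**2."""
    r = (swr - 1.0) / (swr + 1.0)
    return r * r

# SWR of 4 (the paper's lower bound) gives |R| = 0.6 and an energy
# reflection of 0.36, consistent with the reported "at least 0.35".
energy_r = energy_reflection_from_swr(4.0)
```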

238

Average-cost based robust structural control

NASA Technical Reports Server (NTRS)

A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.

Hagood, Nesbitt W.

1993-01-01

239

NASA Astrophysics Data System (ADS)

Unconformities and fossil-rich layers are common elements in the stratigraphic architecture of upper Cretaceous sediments exposed on both the Atlantic and Eastern Gulf Coastal Plains. Contacts between the Eutaw Formation and Tombigbee Sands Member in Alabama, the Blufftown Formation and Cusseta Sands in Georgia, and the Wenonah-Mt. Laurel and Navesink Formations in New Jersey are marked by erosional surfaces with overlying blankets and lenses of macrofossil residuum. These contacts correspond to bounding unconformities and transgressive lags separating Santonian-Campanian, lower Campanian-upper Campanian and Campanian-Maastrichtian depositional sequences. Regression and subsequent transgression of sea level at the top of these depositional sequences resulted in hydrodynamic sorting of sediments and fossils that had previously accumulated in shelf and lower shoreface paleoenvironments. Remobilization of sediments by shoreface retreat reworked fossil hard-parts which became concentrated above erosional surfaces as sea level rose. Because of the abundance of chondrichthyan, pelecypod and ammonite fossils, these lags have great biostratigraphic significance and provide a basis for examining time averaging in macrofossil zonation. Chondrichthyan teeth are composed of extremely durable and highly insoluble, biogenic apatite. This tooth apatite accurately records the Sr87/Sr86 isotopic signature of seawater, from which the numerical age of the teeth can be calculated using published age/concentration data. Teeth (e.g. Squalicorax kaupi, Scapanorhynchus texanus) from Santonian-Campanian lag deposits at the contact of the Eutaw Formation and Tombigbee Sands Member in Alabama yield approximate ages of 85-81 Ma. Teeth from lower-upper Campanian lag deposits at the contact of the Blufftown Formation and Cusseta Sands in Georgia yield approximate ages of 83-75 Ma. Teeth from Campanian-Maastrichtian lag deposits at the contact of the Wenonah-Mt. Laurel and Navesink Formations in New Jersey yield approximate ages of 80-76 Ma. Isotopic age determinations from these chondrichthyan teeth indicate that average hiatuses of approximately 3-7 million years occurred during the development of lag accumulations and transgressive unconformities. Santonian, Campanian and Maastrichtian macrofossils analyzed in this study are hydrodynamically stable components representing time-averaged fossil assemblages sorted together by physical processes and are not life cohorts. Abrupt appearance and disappearance of organisms found in upper Cretaceous lag deposits of the Atlantic and Eastern Gulf Coastal Plains are artifacts of physical sorting processes associated with sea-level cyclicity.

Becker, Martin Andrew

1997-11-01

240

Small scale magnetic flux-averaged magnetohydrodynamics

By relaxing exact magnetic flux conservation below a scale λ, a system of flux-averaged magnetohydrodynamic equations is derived from Hamilton's principle with modified constraints. An energy principle can be derived from the linearized averaged system because the total system energy is conserved. This energy principle is employed to treat the resistive tearing instability, and the exact growth rate is recovered when λ is identified with the resistive skin depth. A necessary and sufficient stability criterion for the tearing instability with line tying at the ends, relevant to solar coronal loops, is also obtained. The method is extended to both spatial and temporal averaging in Hamilton's principle. The resulting system of equations not only allows flux reconnection but also introduces irreversibility for an appropriate choice of the averaging function. Except for boundary contributions, which are modified by the time-averaging process, total energy and momentum are conserved over times much longer than the averaging time τ, but not for times less than τ. These modified boundary contributions correspond to the existence of damped waves and shock waves in this theory. Time and space averaging is applied to electron magnetohydrodynamics and, in one-dimensional geometry, predicts solitons and shocks in different limits.

Pfirsch, D. (Max-Planck-Institut fuer Plasmaphysik, EURATOM Association, D-8046 Garching (Germany)); Sudan, R.N. (Laboratory of Plasma Studies, Cornell University, Ithaca, New York 14853 (United States))

1994-08-01

241

Computation of flows in a turn-around duct and a turbine cascade using advanced turbulence models

NASA Astrophysics Data System (ADS)

Numerical investigation has been carried out to evaluate the capability of the Algebraic Reynolds Stress Model (ARSM) and the Nonlinear Stress Model (NLSM) to predict strongly curved turbulent flow in a turn-around duct (TAD). The ARSM includes the near-wall damping term of pressure-strain correlation phi(sub ij,w), which enables accurate prediction of individual Reynolds stress components in wall flows. The TAD mean flow quantities are reasonably well predicted by various turbulence models. The ARSM yields better predictions for both the mean flow and the turbulence quantities than the NLSM and the k-epsilon (k = turbulent kinetic energy, epsilon = dissipation rate of k) model. The NLSM also shows slight improvement over the k-epsilon model. However, all the models fail to capture the recovery of the flow from strong curvature effects. The formulation for phi(sub ij,w) appears to be incorrect near the concave surface. The hybrid k-epsilon/ARSM, Chien's k-epsilon model, and Coakley's q-omega (q = the square root of k, omega = epsilon/k) model have also been employed to compute the aerodynamics and heat transfer of a transonic turbine cascade. The surface pressure distributions and the wake profiles are predicted well by all the models. The k-epsilon model and the k-epsilon/ARSM model provide better predictions of heat transfer than the q-omega model. The k-epsilon/ARSM solutions show significant differences in the predicted skin friction coefficients, heat transfer rates and the cascade performance parameters, as compared to the k-epsilon model. The k-epsilon/ARSM model appears to capture, qualitatively, the anisotropy associated with by-pass transition.

Lakshminarayana, B.; Luo, J.

1993-11-01

242

Computation of flows in a turn-around duct and a turbine cascade using advanced turbulence models

NASA Technical Reports Server (NTRS)

Numerical investigation has been carried out to evaluate the capability of the Algebraic Reynolds Stress Model (ARSM) and the Nonlinear Stress Model (NLSM) to predict strongly curved turbulent flow in a turn-around duct (TAD). The ARSM includes the near-wall damping term of pressure-strain correlation phi(sub ij,w), which enables accurate prediction of individual Reynolds stress components in wall flows. The TAD mean flow quantities are reasonably well predicted by various turbulence models. The ARSM yields better predictions for both the mean flow and the turbulence quantities than the NLSM and the k-epsilon (k = turbulent kinetic energy, epsilon = dissipation rate of k) model. The NLSM also shows slight improvement over the k-epsilon model. However, all the models fail to capture the recovery of the flow from strong curvature effects. The formulation for phi(sub ij,w) appears to be incorrect near the concave surface. The hybrid k-epsilon/ARSM, Chien's k-epsilon model, and Coakley's q-omega (q = the square root of k, omega = epsilon/k) model have also been employed to compute the aerodynamics and heat transfer of a transonic turbine cascade. The surface pressure distributions and the wake profiles are predicted well by all the models. The k-epsilon model and the k-epsilon/ARSM model provide better predictions of heat transfer than the q-omega model. The k-epsilon/ARSM solutions show significant differences in the predicted skin friction coefficients, heat transfer rates and the cascade performance parameters, as compared to the k-epsilon model. The k-epsilon/ARSM model appears to capture, qualitatively, the anisotropy associated with by-pass transition.

Lakshminarayana, B.; Luo, J.

1993-01-01

243

Reflight of the First Microgravity Science Laboratory: Quick Turnaround of a Space Shuttle Mission

NASA Technical Reports Server (NTRS)

Due to the short flight of Space Shuttle Columbia, STS-83, in April 1997, NASA chose to refly the same crew, shuttle, and payload on STS-94 in July 1997. This was the first reflight of an entire mission complement. The reflight of the First Microgravity Science Laboratory (MSL-1) on STS-94 required an innovative approach to Space Shuttle payload ground processing. Ground processing time for the Spacelab Module, which served as the laboratory for MSL-1 experiments, was reduced by seventy-five percent. The Spacelab Module is a pressurized facility with avionics and thermal cooling and heating accommodations. Boeing-Huntsville, formerly McDonnell Douglas Aerospace, has been the Spacelab Integration Contractor since 1977. The first Spacelab Module flight was in 1983. An experienced team determined what was required to refurbish the Spacelab Module for reflight. Team members had diverse knowledge, skills, and background. An engineering assessment of subsystems, including mechanical, electrical power distribution, command and data management, and environmental control and life support, was performed. Recommendations for resolution of STS-83 Spacelab in-flight anomalies were provided. Inspections and tests that must be done on critical Spacelab components were identified. This assessment contributed to the successful reflight of MSL-1, the fifteenth Spacelab Module mission.

Simms, Yvonne

1998-01-01

244

Averaging Robertson-Walker cosmologies

The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω⁰_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.

Brown, Iain A.; Robbers, Georg [Institut fuer Theoretische Physik, Philosophenweg 16, 69120 Heidelberg (Germany)]; Behrend, Juliane [Institut fuer Theoretische Physik, Albert-Einstein-Allee 11, 89069 Ulm (Germany)], E-mail: I.Brown@thphys.uni-heidelberg.de, E-mail: G.Robbers@thphys.uni-heidelberg.de, E-mail: Juliane.Behrend@uni-ulm.de

2009-04-15

245

Criticality and Averaging in Cosmology

We propose comparing cosmological solutions in terms of their total spatial volumes $V(\tau)$ as functions of proper time $\tau$, assuming synchronous gauge, and with this intention evaluate the variations of $V(\tau)$ about the Friedmann-Lemaître-Robertson-Walker (FLRW) solutions for dust. This can be done successfully in a simple manner without solving perturbation equations. In particular, we find that first variations vanish with respect to all directions which do not possess homogeneity and isotropy preserving components; in other words, every FLRW solution is a critical point for $V(\tau)$ in the properly restricted subspace of the space of solutions. This property may support the validity of the interpretation of the FLRW solutions as constituting an averaged model. We also briefly investigate the second variations of $V(\tau)$.

Masayuki Tanimoto

1999-07-29

246

Arithmetic averaging: A versatile technique for smoothing and trend removal

Arithmetic averaging is simple, stable, and can be very effective in attenuating the undesirable components in a complex signal, thereby providing smoothing or trend removal. An arithmetic average is easy to calculate. However, the resulting modifications to the data, in both the time and frequency domains, are not well understood by many experimentalists. This paper discusses the following aspects of averaging: (1) types of averages -- simple, cumulative, and moving; and (2) time and frequency domain effects of the averaging process.

Clark, E.L.

1993-12-31
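The three averaging types named in the abstract (simple, cumulative, and moving) can be sketched as follows. This is an illustrative sketch, not code from the paper; the function names and example data are assumptions.

```python
# Illustrative sketch (not from the paper): the three averaging types
# discussed in the abstract -- simple, cumulative, and moving averages.

def simple_average(x):
    """Single arithmetic mean of the whole record."""
    return sum(x) / len(x)

def cumulative_average(x):
    """Running mean: element i is the mean of x[0..i]."""
    out, total = [], 0.0
    for i, v in enumerate(x, start=1):
        total += v
        out.append(total / i)
    return out

def moving_average(x, n):
    """Unweighted moving average with window length n; attenuates
    signal components with periods shorter than the window."""
    return [sum(x[i:i + n]) / n for i in range(len(x) - n + 1)]

data = [1.0, 2.0, 3.0, 4.0]
print(simple_average(data))        # 2.5
print(cumulative_average(data))    # [1.0, 1.5, 2.0, 2.5]
print(moving_average(data, 2))     # [1.5, 2.5, 3.5]
```

Note that the moving average shortens the record by n - 1 samples, one of the time-domain effects of the averaging process the paper discusses.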

247

Constrained Semi-Markov decision processes with average rewards

This paper deals with constrained average reward Semi-Markov Decision Processes (SMDPs) with finite state and action sets. We consider two average reward criteria. The first criterion is time-average rewards, which equal the lower limits of the expected average rewards per unit time, as the horizon tends to infinity. The second criterion is ratio-average rewards, which equal the lower limits of

Eugene A. Feinberg; W. A. Harriman

1994-01-01

248

ERIC Educational Resources Information Center

It's true that each school and each classroom presents its own unique challenges. But Kevin Mixon--who in October traded his instrumental music instructor duties at Blodgett School on the west side of Syracuse, New York, for a job as fine arts coordinator for Syracuse City Schools--knows from experience that some schools are far more challenging…

Olson, Cathy Applefeld

2010-01-01

249

A method for measuring the interactions of dust particles within a two-dimensional (2D) layer in a plasma is presented, along with the use of dust as a probe for determining plasma presheath electric fields. Particle interactions were directly determined from the lateral compression of two-dimensional plasma dust crystals confined in a parabolic potential well. The shape of the parabolic potential well was confirmed by observing trajectories of single particles falling within the well. Particle trajectories were in good agreement with values calculated using gas temperatures derived from laser-induced-fluorescence measurements of the argon metastable velocity distributions and assuming diffusive gas scattering. Measured particle separations combined with an equation of state for the crystal were used to derive values for the plasma screening length and the charge on the particles. Screening lengths and charges were measured for a range of plasma power, pressure, and particle diameter (mass). Analysis of the particle heights and charge was used to derive the time-averaged electric fields at the edge of the rf driven plasma sheath. Measured electric fields were between 5 and 22 V/cm. For smaller particle diameters, the ion wind force was comparable to the force of gravity. These measurements showed that the particles are confined to the bulk-plasma side of the classical Bohm point. PMID:12443330

Hebner, G A; Riley, M E; Greenberg, K E

2002-10-01

250

Average-passage flow model development

NASA Astrophysics Data System (ADS)

A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average passage flow model described the time averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations were executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low speed turbine was executed. The results of this simulation, including a comparison with experimental data, is discussed.

Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark

1989-04-01

251

Model-Based Average Reward Reinforcement Learning

Abstract: Reinforcement Learning (RL) is the study of programs that improve their performance by receiving rewards and punishments from the environment. Most RL methods optimize the discounted total reward received by an agent, while, in many domains, the natural criterion is to optimize the average reward per time step. In this paper, we introduce a model-based Average-reward Reinforcement Learning method called H-learning and show that

Prasad Tadepalli

1998-01-01

252

Large Eddy/Reynolds-Averaged Navier-Stokes Simulations of CUBRC Base Heating Experiments

NASA Technical Reports Server (NTRS)

Even with great advances in computational techniques and computing power during recent decades, the modeling of unsteady separated flows, such as those encountered in the wake of a re-entry vehicle, continues to be one of the most challenging problems in CFD. Of most interest to the aerothermodynamics community is accurately predicting transient heating loads on the base of a blunt body, which would result in reduced uncertainties and safety margins when designing a re-entry vehicle. However, the prediction of heat transfer can vary widely depending on the turbulence model employed. Therefore, selecting a turbulence model which realistically captures as much of the flow physics as possible will result in improved results. Reynolds-Averaged Navier-Stokes (RANS) models have become increasingly popular due to their good performance with attached flows, and the relatively quick turnaround time to obtain results. However, RANS methods cannot accurately simulate unsteady separated wake flows, and running direct numerical simulation (DNS) on such complex flows is currently too computationally expensive. Large Eddy Simulation (LES) techniques allow for the computation of the large eddies, which contain most of the Reynolds stress, while modeling the smaller (subgrid) eddies. This results in models which are more computationally expensive than RANS methods, but not as prohibitive as DNS. By complementing an LES approach with a RANS model, a hybrid LES/RANS method resolves the larger turbulent scales away from surfaces with LES, and switches to a RANS model inside boundary layers. As pointed out by Bertin et al., this type of hybrid approach has shown a lot of promise for predicting turbulent flows, but work is needed to verify that these models work well in hypersonic flows. The very limited amounts of flight and experimental data available present an additional challenge for researchers.
Recently, a joint study by NASA and CUBRC has focused on collecting heat transfer data on the backshell of a scaled model of the Orion Multi-Purpose Crew Vehicle (MPCV). Heat augmentation effects due to the presence of cavities and RCS jet firings were also investigated. The high quality data produced by this effort presents a new set of data which can be used to assess the performance of CFD methods. In this work, a hybrid LES/RANS model developed at North Carolina State University (NCSU) is used to simulate several runs from these experiments, and evaluate the performance of high fidelity methods as compared to more typical RANS models.

Salazar, Giovanni; Edwards, Jack R.; Amar, Adam J.

2012-01-01

253

Physically-Based, Real-Time Visualization and Constraint Analysis in Multidisciplinary Design

Physically-based, real-time visualization and constraint analysis in multidisciplinary design aims at improving the performance, lowering the lifecycle cost and shortening the product design time for complex systems, leading to improved designs as well as much faster design turnaround time. As the problems considered become more

Peraire, Jaime

254

Modular Approach to Physics: Average Speed versus Average Velocity

NSDL National Science Digital Library

This applet is an interactive demonstration of average speed and velocity. The user can move the object, while average speed and velocity are calculated and available through the Data panel. A full lesson plan is available through the Help menu. This item is part of a larger collection of simulation based physics modules sponsored by the MAP project (Modular Approach to Physics).

Austin, David; Martin, Brian

2008-05-24

255

Spacetime Averaged Null Energy Condition

The averaged null energy condition has known violations for quantum fields in curved space, even if one considers only achronal geodesics. Many such examples involve rapid variation in the stress-energy tensor in the vicinity of the geodesic under consideration, giving rise to the possibility that averaging in additional dimensions would yield a principle universally obeyed by quantum fields. However, after discussing various procedures for additional averaging, including integrating over all dimensions of the manifold, we give a class of examples that violate any such averaged condition.

Douglas Urban; Ken D. Olum

2010-02-25

256

Average Speed and Unit Conversion

NSDL National Science Digital Library

Students will determine average speeds from data collected and convert units for speed problems. Students try to roll the ball with a prescribed average speed based on intuition. Then, based on unit conversion we will see how accurate the rolls really were.

2009-01-01

257

Improved averaging for non-null interferometry

NASA Astrophysics Data System (ADS)

Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.

Fleig, Jon F.; Murphy, Paul E.

2013-09-01
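The two-stage idea described in the abstract (reject whole maps with large-area defects, then prune small unreliable areas before per-pixel averaging) can be sketched as below. This is a hedged illustration, not the paper's algorithm: the defect-fraction threshold, the NaN convention for pruned pixels, and the function name are all assumptions.

```python
# Hedged sketch of robust phase-map averaging: discard maps dominated
# by defects, then average per pixel while ignoring locally pruned
# (NaN-marked) pixels. Threshold and NaN convention are assumptions.
import numpy as np

def robust_phase_average(maps, max_defect_fraction=0.2):
    """maps: list of 2-D arrays with NaN at defective pixels.
    Returns (mean_map, std_map) over the retained maps."""
    # Stage 1: reject maps whose defective-pixel fraction is too large
    kept = [m for m in maps if np.isnan(m).mean() <= max_defect_fraction]
    stack = np.stack(kept)
    # Stage 2: per-pixel statistics that skip the pruned pixels
    mean_map = np.nanmean(stack, axis=0)
    std_map = np.nanstd(stack, axis=0)
    return mean_map, std_map
```

A map that is mostly NaN is dropped entirely, so a single large-area defect no longer spoils the average, while isolated NaN pixels merely reduce the sample count at those locations.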

258

Exponential Moving Average Stock Model

NSDL National Science Digital Library

A primary application of econophysics is using digital signal processing techniques to filter and predict market data, which is theorized to exhibit random walk motion. An exponential moving average is one tool that physicists use to smooth data from an input signal to identify its trends. The Exponential Moving Average Stock Model implements three types of exponential moving averages and allows the user to change the parameters of each. The model allows the user to view the results of exponential moving averages computed on the New York Stock Exchange daily closing price of six familiar companies. It demonstrates one way that traders use causal filters to smooth market data and forecast the next day's price.

Mohorn, Matthew

2013-02-14
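The causal smoothing filter the model demonstrates can be sketched in a few lines. This is an illustrative sketch, not the model's source; the smoothing factor alpha and the seeding choice are assumptions.

```python
# Minimal sketch of an exponential moving average (EMA), the causal
# filter described in the abstract. alpha is a free demo parameter.

def ema(prices, alpha=0.2):
    """First-order recursive filter: out[t] = alpha*x[t] + (1-alpha)*out[t-1]."""
    out = [prices[0]]                     # seed with the first sample
    for x in prices[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

print(ema([10.0, 10.0, 10.0]))  # constant input passes through unchanged
```

Because the filter uses only past samples, it is causal and can be applied to a live price stream, which is what makes it usable for next-day forecasting.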

259

High average power Pockels cell

A high average power Pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The Pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.

Daly, Thomas P. (Pleasanton, CA)

1991-01-01

260

Higher-order averaging, formal series and numerical integration II

to obtain a variety of averaged systems. In particular we study the quasi-stroboscopic averaged system at the initial time. We show that quasi-stroboscopic averaging is a geometric procedure because it is independent of the particular choice of co-ordinates used to write the given system. As a consequence, quasi-stroboscopic

Murua, Ander

261

Averaging of Temporal Memories by Rats

Rats were trained on a mixed fixed-interval schedule in which stimulus A (tone or light) indicated food availability after 10 s and stimulus B (the other stimulus) indicated food availability after 20 s. Testing consisted of non-reinforced probe trials in which the stimulus was A, B, or the compound AB. On single-stimulus trials, rats responded with a peak of activity around the programmed reinforced time. On compound-stimulus trials, rats showed a single scalar peak of responding at a time midway between those for stimulus A and B. These results suggest that when provided with discrepant information regarding the temporal predictability of reinforcement, rats compute an average of the scheduled reinforcement times for the A and B stimuli and use this average to generate an expectation of reward for the compound stimuli. PMID:19594288

Swanton, Dale N.; Gooch, Cynthia M.; Matell, Matthew S.

2009-01-01

262

Neutron average cross sections of Np237

NASA Astrophysics Data System (ADS)

This work reports Np237 neutron resonance parameters obtained from the simultaneous analysis of time-of-flight data measured at the GELINA, ORELA, KURRI, and LANSCE facilities. A statistical analysis of these resonances relying on average R-matrix and optical model calculations was used to establish consistent l-dependent average resonance parameters involved in the description of the unresolved resonance range of the Np237 neutron cross sections. For neutron orbital angular momentum l = 0, we obtained an average radiation width ⟨Γγ⟩ = 39.3 ± 1.0 meV, a neutron strength function 10⁴S₀ = 1.02 ± 0.14, a mean level spacing D₀ = 0.60 ± 0.03 eV, and a potential scattering length R′ = 9.8 ± 0.1 fm.

Noguere, G.

2010-04-01

263

Evaluations of average level spacings

The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of ¹⁶⁸Er data. 19 figures, 2 tables.

Liou, H.I.

1980-01-01

264

High average power excimer laser

An excimer laser developed at Los Alamos produces an ultraviolet (308 nm) output power of 200 W average at a repetition rate of 500 Hz. This electrical-discharge-pumped XeCl laser uses x-ray preionization and active gas cleanup.

Butcher, R.R.; Tennant, R.A.; Erickson, G.F.; Swisher, S.L.; Willis, W.L.

1983-01-01

265

Models of space averaged energetics of plates

NASA Technical Reports Server (NTRS)

The analysis of high frequency vibrations in plates is of particular interest in the study of structure-borne noise in aircraft. The current methods of analysis are either too expensive (finite element method) or may have a confidence band wider than desirable (Statistical Energy Analysis). An alternative technique to model the space and time averaged response of structural acoustics problems with enough detail to include all significant mechanisms of energy generation, transmission, and absorption is highly desirable. The focus of this paper is the development of a set of equations which govern the space and time averaged energy density in plates. To solve this equation, a new type of boundary value problem must be treated in terms of energy density variables using energy and intensity boundary conditions. A computer simulation verification study of the energy governing equation is performed. A finite element formulation of the new equations is also implemented and several test cases are analyzed and compared to analytical solutions.

Bouthier, O. M.; Bernhard, R. J.

1990-01-01

266

Terrestrial kilometric radiation: 3-average spectral properties

NASA Technical Reports Server (NTRS)

A study is presented of the average spectral properties of terrestrial kilometric radiation (TKR) derived from observations made by radio astronomy experiments onboard the IMP-6 and RAE-2 spacecraft. As viewed from near the equatorial plane, TKR is most intense and most often observed in the 21-24 hr local time zone and is rarely seen in the 09-12 hr zone. The peak flux density usually occurs near 240 kHz, but there is evidence that the peak occurs at a somewhat lower frequency on the dayside. The frequency of the peak in the average flux spectrum varies inversely with increasing substorm activity as inferred from the auroral electrojet index (AE) from a maximum near 300 kHz during very quiet times to a minimum below 200 kHz during very disturbed times. The absolute flux levels in the 100-600 kHz TKR band increase significantly with increasing AE. The average power associated with a particular source region seems to decrease rapidly with increasing source altitude.

Kaiser, M. L.; Alexander, J. K.

1976-01-01

267

Average formation length in string model

The space-time scales of the hadronization process in the framework of the string model are investigated. It is shown that the average formation lengths of pseudoscalar mesons produced in semi-inclusive deep inelastic scattering (DIS) of leptons on different targets depend on their electric charges. In particular, the average formation lengths of positively charged hadrons are larger than those of negatively charged ones. This statement holds for all the scaling functions used, for $z$ (the fraction of the virtual photon energy transferred to the detected hadron) larger than 0.15, for all nuclear targets and any value of the Bjorken scaling variable $x_{Bj}$. In all cases, the main mechanism is direct production of pseudoscalar mesons. Taking into account the additional mechanism of production through resonance decay leads to a decrease of the average formation lengths. It is shown that the average formation lengths of positively (negatively) charged mesons are slowly rising (decreasing) functions of $x_{Bj}$. The obtained results can be important, in particular, for the understanding of the hadronization process in a nuclear environment.

L. Grigoryan

2010-02-04

268

ERIC Educational Resources Information Center

When a teacher gives their time to a student, it is more significant to that student than anything else one could do for him or her. Music teachers deal with time all the time. Someone once said that "time is like money: we never have enough." This may seem true; however, time is not like money. One can make more money, but one cannot "make time."…

Circle, David

2005-01-01

269

NASA Technical Reports Server (NTRS)

This report describes a probabilistic structural analysis performed to determine the probabilistic structural response under fluctuating random pressure loads for the Space Shuttle Main Engine (SSME) turnaround vane. It uses a newly developed frequency and distance dependent correlation model that has features to model the decay phenomena along the flow and across the flow with the capability to introduce a phase delay. The analytical results are compared using two computer codes, SAFER (Spectral Analysis of Finite Element Responses) and NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), and with experimentally observed strain gage data. The computer code NESSUS with an interface to a subset of the Composite Load Spectra (CLS) code is used for the probabilistic analysis. A fatigue code was used to calculate fatigue damage due to the random pressure excitation. The random variables modeled include engine system primitive variables that influence the operating conditions, convection velocity coefficient, stress concentration factor, structural damping, and thickness of the inner and outer vanes. The need for an appropriate correlation model in addition to the magnitude of the PSD is emphasized. The study demonstrates that correlation characteristics even under random pressure loads are capable of causing resonance-like effects for some modes. The study identifies the important variables that contribute to the structural alternate stress response and drive the fatigue damage for the new design. Since the alternate stress for the new redesign is less than the endurance limit for the material, the damage due to high-cycle fatigue is negligible.

Rajagopal, Kadambi R.; DebChaudhury, Amitabha; Orient, George

2000-01-01

270

The assembly and maintenance of cilia depends on intraflagellar transport (IFT). Activated IFT motor kinesin-II enters the cilium with loaded IFT particles comprising IFT-A and IFT-B complexes. At the ciliary tip, kinesin-II becomes inactivated, and IFT particles are released. Moreover, the rate of IFT entry is dynamically regulated during cilium assembly. However, the regulatory mechanism of IFT entry and loading/unloading of IFT particles remains elusive. We show that the kinesin-II motor subunit FLA8, a homolog of KIF3B, is phosphorylated on the conserved S663 by a calcium-dependent kinase in Chlamydomonas. This phosphorylation disrupts the interaction between kinesin-II and IFT-B, inactivates kinesin-II and inhibits IFT entry, and is also required for IFT-B unloading at the ciliary tip. Furthermore, our data suggest that the IFT entry rate is controlled by regulation of the cellular level of phosphorylated FLA8. Therefore, FLA8 phosphorylation acts as a molecular switch to control IFT entry and turnaround. PMID:25175706

Liang, Yinwen; Pang, Yunong; Wu, Qiong; Hu, Zhangfeng; Han, Xue; Xu, Yisheng; Deng, Haiteng; Pan, Junmin

2014-09-01

271

Monthly Average Temperature for Boston, MA

NSDL National Science Digital Library

The phenomenon is monthly average temperature data for Boston, MA from March 1872 until September 2000. In addition to monthly averages, the National Weather Service table also shows the yearly average temperature.

272

Self-similarity of higher-order moving averages

NASA Astrophysics Data System (ADS)

In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by the simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets with the same size. These polynomials could be interesting for those applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).

Arianos, Sergio; Carbone, Anna; Türk, Christian

2011-10-01
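The standard (first-order) detrending moving average method that the paper generalizes can be sketched as below: the variance of the series about its moving average scales as σ²(n) ~ n^(2H), so the Hurst exponent is the slope of log σ versus log n. This is an illustrative sketch under assumed window sizes and a synthetic random-walk test signal, not the paper's implementation.

```python
# Sketch of the first-order detrending moving average (DMA) method:
# estimate H from the scaling of the variance of the series around
# its trailing moving average. Windows and test signal are demo choices.
import numpy as np

def dma_hurst(x, windows):
    """Estimate the Hurst exponent H from the DMA variance scaling."""
    log_n, log_sigma = [], []
    for n in windows:
        ma = np.convolve(x, np.ones(n) / n, mode="valid")  # trailing MA
        detrended = x[n - 1:] - ma                          # series minus trend
        log_n.append(np.log(n))
        log_sigma.append(0.5 * np.log(np.mean(detrended ** 2)))
    slope, _ = np.polyfit(log_n, log_sigma, 1)  # slope of log-log plot
    return slope

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(20000))  # Brownian series, H = 0.5
H = dma_hurst(walk, [4, 8, 16, 32, 64])
```

For an ordinary Brownian series the estimate should come out near 0.5; the higher-order variant in the paper replaces the plain moving average with a moving polynomial fit while keeping the same window.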

273

THE AVERAGED CONTROL SYSTEM OF FAST OSCILLATING CONTROL SYSTEMS

Alex Bombrun, Jean-Baptiste Pomet. For control systems that either have an explicit periodic dependence on time or have periodic solutions and small controls, we define an average control system that takes into account all

Paris-Sud XI, Université de

274

Global atmospheric circulation statistics: Four year averages

NASA Technical Reports Server (NTRS)

Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.

Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.

1987-01-01

275

Average deployments versus missile and defender parameters

This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with two-fold less effectiveness. The next 10% could double constellation sizes. 5 refs., 7 figs.

Canavan, G.H.

1991-03-01

276

Averaging VMAT treatment plans for multi-criteria navigation

The main approach to smooth Pareto surface navigation for radiation therapy multi-criteria treatment planning involves taking real-time averages of pre-computed treatment plans. In fluence-based treatment planning, fluence maps themselves can be averaged, which leads to the dose distributions being averaged due to the linear relationship between fluence and dose. This works for fluence-based photon plans and proton spot scanning plans. In this technical note, we show that two or more sliding window volumetric modulated arc therapy (VMAT) plans can be combined by averaging leaf positions in a certain way, and we demonstrate that the resulting dose distribution for the averaged plan is approximately the average of the dose distributions of the original plans. This leads to the ability to do Pareto surface navigation, i.e. interactive multi-criteria exploration of VMAT plan dosimetric tradeoffs.
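The linear relationship between fluence and dose that makes fluence averaging equivalent to dose averaging can be checked directly. The dose-influence matrix and fluence maps below are random stand-ins, not clinical data.

```python
import numpy as np

# Because dose depends linearly on fluence (d = D @ f for a dose-influence
# matrix D), averaging fluence maps averages the dose distributions exactly.
rng = np.random.default_rng(1)
D = rng.random((50, 20))                  # voxels x beamlets (illustrative)
f1, f2 = rng.random(20), rng.random(20)   # two pre-computed fluence maps

dose_of_average = D @ ((f1 + f2) / 2)
average_of_doses = (D @ f1 + D @ f2) / 2
print(np.allclose(dose_of_average, average_of_doses))   # True
```

The note's contribution is showing that a comparable (approximate) equivalence holds when averaging VMAT leaf positions, where the relationship is not exactly linear.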

Craft, David; Unkelbach, Jan

2013-01-01

277

Adaptive common average filtering for myocontrol applications.

The use of electromyography (EMG) for the control of upper-limb prostheses has received great interest in neurorehabilitation engineering for decades. Important advances have been made in the development of machine learning algorithms for myocontrol. This paper describes a novel adaptive filter for EMG preprocessing to be applied as a conditioning stage for optimal subsequent information extraction. The aim of this filter is to improve both the quality (signal-to-noise ratio) and the selectivity of the EMG recordings. The filter is based on the classic common average reference (CAR), often used in EEG processing. However, while CAR is stationary, the proposed filter, which is referred to as adaptive common average reference (ACAR), is signal-dependent and its spatial transfer function is adapted over time. The ACAR filter is evaluated in this study for noise reduction and selectivity. Furthermore, it is proven that its application improves the performance of both pattern recognition and regression methods for myoelectric control. It is concluded that the proposed novel filter for EMG conditioning is a useful preprocessing tool in myocontrol applications. PMID:25388778
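The classic (stationary) common average reference that ACAR builds on is simple to state: subtract the mean across channels from every channel. The sketch below shows that baseline only, with a synthetic common-mode interferer; the channel count and signals are illustrative.

```python
import numpy as np

def common_average_reference(emg):
    """Classic stationary CAR: subtract the across-channel mean from every
    channel. emg has shape (channels, samples). The paper's ACAR filter
    re-weights this average adaptively over time; this is only the baseline."""
    return emg - emg.mean(axis=0, keepdims=True)

# common-mode interference (e.g. power-line hum) shared by all channels
rng = np.random.default_rng(2)
t = np.arange(2000) / 2000.0
hum = 0.5 * np.sin(2 * np.pi * 50 * t)            # 50 Hz common mode
emg = rng.standard_normal((8, t.size)) * 0.1 + hum
clean = common_average_reference(emg)
print(np.abs(clean.mean(axis=0)).max() < 1e-12)   # common mode removed
```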

Rehbaum, Hubertus; Farina, Dario

2015-02-01

278

Time-averaged subspace methods for radar clutter texture retrieval

Subspace approaches have become popular in the last two decades for retrieving constant amplitude harmonics observed in white additive noise because they may exhibit superior resolution over the FFT-based methods, especially with short data records and closely spaced harmonics. We demonstrate that MUSIC and ESPRIT methods can also be applied when the harmonics are corrupted by white or wideband multiplicative

Fulvio Gini; Georgios B. Giannakis; Maria Greco; G. Tong Zhou

2001-01-01

279

Predictability of time averages: The influence of the boundary forcing

NASA Technical Reports Server (NTRS)

The physical mechanisms through which changes in the boundary forcings of SST, soil moisture, albedo, sea ice, and snow influence the atmospheric circulation are discussed. Results of numerical experiments conducted with the GLAS climate model to determine the sensitivity of the model atmosphere to changes in boundary conditions of SST, soil moisture, and albedo over limited regions are discussed. It is found that changes in SST and soil moisture in the tropics produce large changes in the atmospheric circulation and rainfall over the tropics as well as over mid-latitudes.

Shukla, J.

1982-01-01

280

RHIC BPM system average orbit calculations

RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this has provided an average orbit signal quality improvement, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and performance with beam.
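The benefit of averaging over an integer number of ~10 Hz oscillation periods, rather than an arbitrary count such as 10000 turns, can be illustrated with a toy turn-by-turn signal. The frequencies and amplitudes below are illustrative (loosely based on RHIC's ~78 kHz revolution frequency), not machine data.

```python
import numpy as np

f_rev, f_osc = 78e3, 10.0                 # revolution and perturbation freq (Hz)
turns = np.arange(200_000)
closed_orbit = 1.25                       # mm, the true average position
pos = closed_orbit + 0.3 * np.sin(2 * np.pi * f_osc * turns / f_rev)

one_period = int(round(f_rev / f_osc))    # turns spanning one 10 Hz period
err_period = abs(pos[:one_period].mean() - closed_orbit)
err_10000 = abs(pos[:10_000].mean() - closed_orbit)
print(err_period < err_10000)             # period-matched window cancels the 10 Hz
```

Averaging over a whole number of perturbation periods cancels the oscillation exactly, while a fixed 10000-turn window leaves a residual from the fractional period it spans; averaging many periods, as in the continuous algorithm, suppresses noise further.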

Michnoff,R.; Cerniglia, P.; Degen, C.; Hulsart, R.; et al.

2009-05-04

281

Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?

Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.

Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.

2013-06-17

282

Averaging in LRS class II spacetimes

NASA Astrophysics Data System (ADS)

We generalize Buchert's averaged equations (Gen Relativ Gravit 32:105, 2000; Gen Relativ Gravit 33:1381, 2001) to the LRS class II dust model in the sense that all Einstein equations are averaged, not only the trace part. We derive the relevant averaged equations and we investigate backreaction on expansion and shear scalars in an approximate LTB model. Finally we propose a way to close the system of averaged equations.

Kašpar, Petr; Svítek, Otakar

2015-02-01

283

Averaging and Adding in Children's Worth Judgements

ERIC Educational Resources Information Center

Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…

Schlottmann, Anne; Harman, Rachel M.; Paine, Julie

2012-01-01

284

Districts Created to Steer "Turnarounds"

ERIC Educational Resources Information Center

If the job of a traditional superintendent is hard, imagine the complexities involved in building a school system from scratch--especially one composed of schools with some of the most intractable educational challenges. That's the task facing education leaders in Michigan and Tennessee, which are building special districts to take over…

Samuels, Christina A.

2011-01-01

285

Teachers' Guide to School Turnarounds

ERIC Educational Resources Information Center

Most guides on the process of turning around low-performing schools typically are addressed to principals and superintendents. Teachers, however, are the individuals expected to conduct the "heavy lifting" of school improvement. This is the first book on the subject written expressly for teachers. It provides a step-by-step introduction to the…

Duke, Daniel L.; Tucker, Pamela D.; Salmonowicz, Michael J.; Levy, Melissa; Saunders, Stephen

2007-01-01

286

Turnaround Team Racing Summer's Clock

ERIC Educational Resources Information Center

A few weeks before a new team of teachers was to report to the former Shawnee High School in Louisville, Kentucky, Principal Keith Look discovered the master schedule for the 2010-11 school year to be in total disarray. More than 100 Shawnee students--all of them juniors and seniors--had been enrolled in classes with no connection to the credits…

Maxwell, Lesli A.

2010-01-01

287

Averaging underwater noise levels for environmental assessment of shipping.

Rising underwater noise levels from shipping have raised concerns regarding chronic impacts to marine fauna. However, there is a lack of consensus over how to average local shipping noise levels for environmental impact assessment. This paper addresses this issue using 110 days of continuous data recorded in the Strait of Georgia, Canada. Probability densities of ~10^7 1-s samples in selected 1/3-octave bands were approximately stationary across one-month subsamples. Median and mode levels varied with averaging time. Mean sound pressure levels averaged in linear space, though susceptible to strong bias from outliers, are most relevant to cumulative impact assessment metrics. PMID:23039575
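Averaging sound pressure levels "in linear space" means converting dB to mean-square pressure before averaging, rather than averaging the dB values themselves. A minimal sketch with made-up 1-s samples (the loud outlier shows the upward bias the abstract mentions):

```python
import numpy as np

levels_db = np.array([110.0, 112.0, 111.0, 130.0])   # 1-s SPL samples (dB)

# convert to mean-square pressure, average, convert back to dB
linear_mean = 10 * np.log10(np.mean(10 ** (levels_db / 10)))
db_mean = levels_db.mean()                            # naive average of dB values
print(round(linear_mean, 1), round(db_mean, 1))       # linear mean is higher
```

The single 130 dB sample dominates the linear mean, pulling it well above the naive dB average, which is exactly why the choice of averaging statistic matters for impact assessment.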

Merchant, Nathan D; Blondel, Philippe; Dakin, D Tom; Dorocicz, John

2012-10-01

288

Probing turbulence intermittency via autoregressive moving-average models

NASA Astrophysics Data System (ADS)

We suggest an approach to probing intermittency corrections to the Kolmogorov law in turbulent flows based on the autoregressive moving-average modeling of turbulent time series. We introduce an index Υ that measures the distance from a Kolmogorov-Obukhov model in the autoregressive moving-average model space. Applying our analysis to particle image velocimetry and laser Doppler velocimetry measurements in a von Kármán swirling flow, we show that Υ is proportional to traditional intermittency corrections computed from structure functions. Therefore, it provides the same information, using much shorter time series. We conclude that Υ is a suitable index to reconstruct intermittency in experimental turbulent fields.
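A minimal stand-in for the autoregressive moving-average fits used here is a plain least-squares autoregressive fit (the moving-average part is omitted for brevity, and the synthetic AR(1) series is illustrative, not velocimetry data).

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model x[t] = sum_i a_i * x[t-i] + noise.
    A simplified stand-in for the ARMA(p, q) fits used in the paper."""
    # column i holds the series at lag i+1, aligned with targets x[p:]
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    coeffs, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coeffs

# synthetic AR(1) series with known coefficient 0.7
rng = np.random.default_rng(3)
x = np.zeros(50_000)
for t in range(1, x.size):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()
print(round(fit_ar(x, 1)[0], 2))   # recovers a value near 0.7
```

In the paper's scheme, the fitted model orders and quality relative to a Kolmogorov-Obukhov baseline feed the distance index Υ.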

Faranda, Davide; Dubrulle, Bérengère; Daviaud, François; Pons, Flavio Maria Emanuele

2014-12-01

289

Average Annual Pay by State and Industry, 1997

NSDL National Science Digital Library

Average annual pay levels by industry varied widely across the nation according to this BLS release for 1997, with high average incomes of $49,995 recorded in mining areas and a low average pay of $15,877 recorded in retail trade, an industry largely populated by part-time workers. Regionally, the District of Columbia held the greatest annual pay level at $46,775 in 1997, while South Dakota, North Dakota, Montana, Mississippi, and Arkansas retained low annual salaries of $21,645, $22,047, $21,947, $22,772, and $23,268, respectively. Additional comparisons of annual incomes by state and industry for 1996 and 1997 are available at this Average Annual Pay by State and Industry page with four detailed statistical tables.

290

Clarifying the Relationship between Average Excesses and Average Effects of Allele Substitutions

Fisher’s concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance. PMID:22509178
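The textbook one-locus quantities behind this discussion can be computed directly. The genotypic values and allele frequency below are illustrative, and the formulas are the standard Fisher results for the biallelic case under random mating, not the paper's generalized contrasts.

```python
# One-locus, two-allele average effect of allele substitution:
# with genotypic values a (A1A1), d (A1A2), -a (A2A2) and allele
# frequencies p and q = 1 - p, the standard results are
#   alpha = a + d * (q - p)        (average effect of substitution)
#   V_A   = 2 * p * q * alpha**2   (additive genetic variance)
# Under random mating the average excess of an allele equals its
# average effect; the paper examines how the two relate more generally.
def average_effect(a, d, p):
    q = 1.0 - p
    alpha = a + d * (q - p)
    return alpha, 2 * p * q * alpha ** 2

alpha, va = average_effect(a=1.0, d=0.25, p=0.3)
print(round(alpha, 2), round(va, 3))
```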

Álvarez-Castro, José M.; Yang, Rong-Cai

2012-01-01

291

Averaging battery model for a lead-acid battery operating in an electric car

Computer programs used to calculate the performance of an electric vehicle must contain a battery model. A simple mathematical model is developed and evaluated. The model is based on time averaged current or power required from a battery during the operation of the electric vehicle. The time averaging technique used accounts for time varying discharge rates, rest times, and the

Bozek

1979-01-01

292

Average Transmission Probability of a Random Stack

ERIC Educational Resources Information Center

The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
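The distinction drawn above, averaging the logarithm of the transmission probability versus averaging the probability itself, is Jensen's inequality at work: the exponential of the mean log-transmission never exceeds the mean transmission. Random samples stand in for the transmissions of stacks with random gap widths.

```python
import numpy as np

rng = np.random.default_rng(4)
T = rng.uniform(0.01, 1.0, 100_000)         # transmission probability samples

geometric_style = np.exp(np.log(T).mean())  # from the averaged logarithm
arithmetic = T.mean()                       # the average probability itself
print(geometric_style < arithmetic)         # True, by Jensen's inequality
```

This is why computing the average transmission probability directly, as the paper does via a recurrence relation, is a genuinely different (and harder) quantity than the usual log-averaged one.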

Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

2010-01-01

293

Average brightness temperature of antenna aperture

This paper presented an idea for the average brightness temperature of a lossless antenna aperture and gave its expression for matched and mismatched noise sources. This expression showed that the average brightness temperature of the antenna aperture is related to three factors: the noise temperature of the noise source, the reflection coefficient of the noise source, and the aperture efficiency.

Li Zongqian

1997-01-01

294

Determinants of College Grade Point Averages

ERIC Educational Resources Information Center

Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even continued enrollment at a university. However, GPAs are determined not only by student ability but also by…

Bailey, Paul Dean

2012-01-01

295

40 CFR 63.846 - Emission averaging.

Code of Federal Regulations, 2013 CFR

...in §§ 63.847 and 63.849. The owner or operator shall combine the results of secondary TF monthly average emissions with...in §§ 63.847 and 63.849. The owner or operator shall combine the results of secondary POM quarterly average emissions...

2013-07-01

296

40 CFR 63.846 - Emission averaging.

Code of Federal Regulations, 2010 CFR

...in §§ 63.847 and 63.849. The owner or operator shall combine the results of secondary TF monthly average emissions with...in §§ 63.847 and 63.849. The owner or operator shall combine the results of secondary POM quarterly average emissions...

2010-07-01

297

40 CFR 63.846 - Emission averaging.

Code of Federal Regulations, 2011 CFR

...in §§ 63.847 and 63.849. The owner or operator shall combine the results of secondary TF monthly average emissions with...in §§ 63.847 and 63.849. The owner or operator shall combine the results of secondary POM quarterly average emissions...

2011-07-01

298

INVERSIONS FOR AVERAGE SUPERGRANULAR FLOWS USING FINITE-FREQUENCY KERNELS

I analyze the maps recording the travel-time shifts caused by averaged plasma anomalies under an 'average supergranule', constructed by means of statistical averaging over 5582 individual supergranules with large divergence signals detected in two months of Helioseismic and Magnetic Imager Dopplergrams. By utilizing a three-dimensional validated time-distance inversion code, I measure a peak vertical velocity of 117 ± 2 m s⁻¹ at depths around 1.2 Mm in the center of the supergranule and a root-mean-square vertical velocity of 21 m s⁻¹ over the area of the supergranule. A discrepancy between this measurement and the measured surface vertical velocity (a few m s⁻¹) can be explained by the existence of the large-amplitude vertical flow under the surface of supergranules with large divergence signals, recently suggested by Duvall and Hanasoge.

Svanda, Michal, E-mail: michal@astronomie.cz [Astronomical Institute, Academy of Sciences of the Czech Republic (v.v.i.), Fricova 298, CZ-25165 Ondrejov (Czech Republic)

2012-11-10

299

Cosmic inhomogeneities and averaged cosmological dynamics.

If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics. PMID:18999811

Paranjape, Aseem; Singh, T P

2008-10-31

300

Video denoising using multiple class averaging with Multiresolution

This paper presents a non-linear technique for noise reduction in video that is suitable for real-time processing. A non-decimated transform is used because it leads to better results for image/video denoising than the decimated transform

Pizurica, Aleksandra

301

Simplest Representation Yet for Gait Recognition: Averaged Silhouette

We present a robust representation for gait recognition that is compact, easy to construct, and affords efficient matching. Instead of a time series based representation comprising a sequence of raw silhouette frames or of features extracted therein, as has been the practice, we simply align and average the silhouettes over one gait cycle. We then
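The averaged-silhouette construction can be sketched in a few lines. The random binary frames below stand in for aligned, size-normalized silhouettes extracted from one gait cycle; real use would compare the resulting templates (e.g. by Euclidean distance).

```python
import numpy as np

# Align binary silhouette frames over one gait cycle and take their
# pixelwise mean; recognition then reduces to comparing these averages.
rng = np.random.default_rng(5)
cycle = (rng.random((30, 64, 44)) > 0.5).astype(float)   # 30 aligned frames

average_silhouette = cycle.mean(axis=0)   # a single 64 x 44 template
print(average_silhouette.shape)           # (64, 44)
```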

Zongyi Liu; Sudeep Sarkar

2004-01-01

302

DEFORMOTION Deforming Motion, Shape Average and the Joint

This work defines a notion of "shape average" in order to track moving and deforming objects through time. A jellyfish (Fig. 1), for instance, moves in an overall direction while "locally deforming"; the examples in Figs. 5 and 6 raise the same question: how do we describe the "motion" of a jellyfish?

Soatto, Stefano

303

H-learning: A Reinforcement Learning Method to Optimize Undiscounted Average Reward

Reinforcement learning methods may optimize either the discounted total reward or the undiscounted average reward. In the discounted setting, the learner maximizes the total discounted reward it receives, where a reward received after one time step is discounted once; one advantage of discounting is that it yields a finite total reward even for an infinite horizon. Our experimental results indicate that H-learning

304

Spacetime Average Density (SAD) Cosmological Measures

The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.

Don N. Page

2014-06-02

305

Selling Geothermal Systems The "Average" Contractor

Selling Geothermal Systems: The "Average" Contractor. Topics: history of sales procedures; manufacturer-driven procedures; what makes geothermal technology any harder to sell? "It's difficult to sell a geothermal system." It should

307

40 CFR 76.11 - Emissions averaging.

Code of Federal Regulations, 2012 CFR

...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

2012-07-01

308

40 CFR 76.11 - Emissions averaging.

Code of Federal Regulations, 2010 CFR

...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

2010-07-01

309

40 CFR 76.11 - Emissions averaging.

Code of Federal Regulations, 2011 CFR

...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

2011-07-01

310

40 CFR 76.11 - Emissions averaging.

Code of Federal Regulations, 2013 CFR

2013-07-01

311

STAFF FORECAST: AVERAGE RETAIL ELECTRICITY PRICES

California Energy Commission Staff Forecast: Average Retail Electricity Prices, 2005 to 2018. Mignon Marks, Principal Author and Project Manager; David Ashuckian, Manager, Electricity Analysis Office; Sylvia Bender, Acting Deputy Director, Electricity Supply Division; B.B. Blevins, Executive Director.

312

Total pressure averaging in pulsating flows

NASA Technical Reports Server (NTRS)

A number of total-pressure tubes were tested in a non-steady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered within the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure, and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles. The experiments were performed at a pressure level of 1 bar, for Mach number up to near 1, and frequencies up to 3 kHz.

Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.

1972-01-01

313

Total-pressure averaging in pulsating flows.

NASA Technical Reports Server (NTRS)

A number of total-pressure tubes were tested in a nonsteady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered with the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles.

Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.

1972-01-01

314

Monthly average polar sea-ice concentration

The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.

Schweitzer, Peter N.

1995-01-01

315

Radial averages of astigmatic TEM images.

The Contrast Transfer Function (CTF) of an image, which modulates images taken from a Transmission Electron Microscope (TEM), is usually determined from the radial average of the power spectrum of the image (Frank, J., Three-dimensional Electron Microscopy of Macromolecular Assemblies, Oxford University Press, Oxford, 2006). The CTF is primarily defined by the defocus. If the defocus estimate is accurate enough, then it is possible to demodulate the image, which is popularly known as the CTF correction. However, it is known that the radial average is somewhat attenuated if the image is astigmatic (see Fernando, K.V., Fuller, S.D., 2007. Determination of astigmatism in TEM images. Journal of Structural Biology 157, 189-200) but this distortion due to astigmatism has not been fully studied or understood up to now. We have discovered the exact mathematical relationship between the radial averages of TEM images with and without astigmatism. This relationship is determined by a zeroth order Bessel function of the first kind and hence we can exactly quantify this distortion in the radial averages of signal and power spectra of astigmatic images. The argument to this Bessel function is similar to an aberration function (without the spherical aberration term) except that the defocus parameter is replaced by the differences of the defoci in the major and minor axes of astigmatism. The ill effects due to this Bessel function are twofold. Since the zeroth order Bessel function is a decaying oscillatory function, it introduces additional zeros to the radial average and it also attenuates the CTF signal in the radial averages. Using our analysis, it is possible to simulate the effects of astigmatism in radial averages by imposing Bessel functions on idealized radial averages of images which are not astigmatic. We validate our theory using astigmatic TEM images. PMID:18662790
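Computing the radial average of a power spectrum, the quantity whose astigmatism-induced distortion is analyzed here, amounts to binning pixels by their (rounded) radius from the spectrum's centre. A minimal sketch; the nearest-integer binning scheme is an illustrative choice.

```python
import numpy as np

def radial_average(power):
    """Radially average a 2-D power spectrum about its centre by binning
    pixels on nearest-integer radii."""
    ny, nx = power.shape
    y, x = np.indices((ny, nx))
    r = np.rint(np.hypot(y - ny // 2, x - nx // 2)).astype(int)
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)   # mean power in each radial bin

# isotropic test spectrum: value depends only on radius, so the radial
# average reproduces the profile
ny = nx = 128
y, x = np.indices((ny, nx))
r = np.hypot(y - ny // 2, x - nx // 2)
power = np.exp(-r / 20.0)
profile = radial_average(power)
print(abs(profile[10] - np.exp(-10 / 20.0)) < 0.02)   # matches exp(-r/20) at r=10
```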

Fernando, K Vince

2008-10-01

316

Self-averaging characteristics of spectral fluctuations

The spectral form factor as well as the two-point correlator of the density of (quasi-)energy levels of individual quantum dynamics are not self-averaging. Only suitable smoothing turns them into useful characteristics of spectra. We present numerical data for a fully chaotic kicked top, employing two types of smoothing: one involves primitives of the spectral correlator, the second a small imaginary part of the quasi-energy. Self-averaging universal (like the CUE average) behavior is found for the smoothed correlator, apart from noise which shrinks like 1/√N as the dimension N of the quantum Hilbert space grows. There are periodically repeated quasi-energy windows of correlation decay and revival wherein the smoothed correlation remains finite as N → ∞ such that the noise is negligible. In between those windows (where the CUE averaged correlator takes on values of the order 1/N²) the noise becomes dominant and self-averaging is lost. We conclude that the noise forbids distinction of CUE and GUE type behavior. Surprisingly, the underlying smoothed generating function does not enjoy any self-averaging outside the range of its variables relevant for determining the two-point correlator (and certain higher-order ones). We corroborate our numerical findings for the noise by analytically determining the CUE variance of the smoothed single-matrix correlator.

Petr Braun; Fritz Haake

2014-10-20

317

NSDL National Science Digital Library

Key figure from the 2007 Intergovernmental Panel on Climate Change (IPCC) report that shows changes in global average surface temperature, global average sea level, and Northern Hemisphere snow cover from as far back as 1850.

IPCC (Intergovernmental Panel on Climate Change) AR4 Synthesis Report

318

Self-averaging of Wigner transforms in random media

We establish the self-averaging properties of the Wigner transform of a mixture of states in the regime when the correlation length of the random medium is much longer than the wave length but much shorter than the propagation distance. The main ingredients in the proof are the error estimates for the semiclassical approximation of the Wigner transform by the solution of the Liouville equations, and the limit theorem for two-particle motion along the characteristics of the Liouville equations. The results are applied to a mathematical model of the time-reversal experiments for the acoustic waves, and self-averaging properties of the re-transmitted wave are proved.

G. Bal; T. Komorowski; L. Ryzhik

2002-10-08

319

Interpreting Sky-Averaged 21-cm Measurements

NASA Astrophysics Data System (ADS)

Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions.I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. 
Finally, (3) the independent constraints most likely to aid in the interpretation of global 21-cm signal measurements are detections of Lyman Alpha Emitters at high redshifts and constraints on the midpoint of reionization, both of which are among the primary science objectives of ongoing or near-future experiments.

Mirocha, Jordan

2015-01-01

320

Books average previous decade of economic misery.

For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
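The "moving average of the previous decade" is a trailing window over the annual misery series. A sketch with made-up annual values and the paper's best-fit 11-year window; the real analysis correlates this smoothed series against the literary misery index.

```python
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(1930, 2010)
misery = 6 + 3 * rng.standard_normal(years.size)   # inflation + unemployment (made up)

window = 11                                        # the paper's best-fit span
kernel = np.ones(window) / window
trailing = np.convolve(misery, kernel, mode="valid")   # mean of each 11-year span
print(trailing.size)                               # one value per year from 1940 on
```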

Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

2014-01-01

321

Benchmarking statistical averaging of spectra with HULLAC

NASA Astrophysics Data System (ADS)

Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

Klapisch, Marcel; Busquet, Michel

2008-11-01

322

Average: the juxtaposition of procedure and context

NASA Astrophysics Data System (ADS)

This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

Watson, Jane; Chick, Helen; Callingham, Rosemary

2014-09-01

323

Average Annual Rainfall Over the Globe

NASA Astrophysics Data System (ADS)

The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74×10^17 J of solar radiation per second and it is divided over various channels as given in Table 1. It keeps our planet warm and maintains its average temperature of 288 K with the help of the atmosphere in such a way that life can survive. It also recycles the water in the oceans/rivers/lakes by initial evaporation and subsequent precipitation; the average annual rainfall over the globe is around one meter. According to M. King Hubbert the amount of solar power going into the evaporation and precipitation channel is 4.0×10^16 W. Students can verify the value of average annual rainfall over the globe by utilizing this part of solar energy. This activity is described in the next section.
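The verification the abstract proposes amounts to dividing the annual evaporated volume of water by the surface area of the globe. A minimal sketch, taking Hubbert's 4.0×10^16 W figure from the abstract and assuming standard values for the latent heat of vaporization, the density of water, and the Earth's radius:

```python
import math

P_evap = 4.0e16           # W, solar power into the evaporation/precipitation channel (Hubbert)
seconds_per_year = 3.156e7
L_vap = 2.45e6            # J/kg, latent heat of vaporization near ambient temperature (assumed)
rho_water = 1000.0        # kg/m^3
R_earth = 6.371e6         # m, mean Earth radius (assumed)

# annual evaporated mass, its volume, and the depth it makes spread over the globe
mass_evaporated = P_evap * seconds_per_year / L_vap   # kg per year
volume = mass_evaporated / rho_water                  # m^3 per year
area = 4 * math.pi * R_earth ** 2                     # m^2, surface area of the globe
rainfall_depth = volume / area                        # m per year
```

The result comes out close to one meter per year, consistent with the figure quoted in the abstract.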

Agrawal, D. C.

2013-12-01

324

Error estimates on averages of correlated data

NASA Astrophysics Data System (ADS)

We describe how the true statistical error on an average of correlated data can be obtained with ease and efficiency by a renormalization group method. The method is illustrated with numerical and analytical examples, having finite as well as infinite range correlations.
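The renormalization group method referred to here is commonly known as blocking: neighboring data points are repeatedly averaged in pairs, and the naive error estimate of the mean is recomputed at each level; for correlated data the estimates rise to a plateau, which is the true error. A minimal sketch (the function name and plain-list implementation are illustrative, not the authors' code):

```python
def blocking_error(data):
    """Naive standard error of the mean at each blocking level.
    Each level averages neighboring pairs; for correlated data the
    estimates grow to a plateau equal to the true statistical error."""
    x = list(data)
    errors = []
    while len(x) >= 2:
        n = len(x)
        mean = sum(x) / n
        var = sum((v - mean) ** 2 for v in x) / (n - 1)
        errors.append((var / n) ** 0.5)
        # block transformation: replace neighboring pairs by their average
        x = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(n // 2)]
    return errors
```

For uncorrelated data the estimates stay roughly flat; an underestimate at level zero that grows with blocking signals correlations in the series.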

Flyvbjerg, H.; Petersen, H. G.

1989-07-01

325

Average Rate Speed Scaling Nikhil Bansal

is to minimize the energy used. They proposed an online speed scaling algorithm Average Rate (AVR) that runs each job at a constant speed. The competitive ratio of AVR is at most (2α)^α/2 if a processor running at speed s uses power s^α. We show the competitive ratio of AVR is at least ((2-δ)α)^α/2, where δ is a function of α that approaches zero as α approaches infinity
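AVR's schedule is simple to state: each job runs at its constant density, its work divided by the length of its window, and the processor speed at any instant is the sum of the densities of the jobs active then. A sketch of the resulting energy computation under the s^α power model (the job representation and function name are illustrative):

```python
def avr_energy(jobs, alpha):
    """Energy used by the Average Rate (AVR) schedule.
    Each job is (release, deadline, work); AVR assigns job i the constant
    density work/(deadline - release) over [release, deadline), so the
    processor speed at time t is the sum of densities of active jobs.
    Power is speed**alpha; energy integrates power over time."""
    # release and deadline times partition time into intervals of constant speed
    times = sorted({t for r, d, w in jobs for t in (r, d)})
    energy = 0.0
    for t0, t1 in zip(times, times[1:]):
        speed = sum(w / (d - r) for r, d, w in jobs if r <= t0 and d >= t1)
        energy += speed ** alpha * (t1 - t0)
    return energy
```

A single job of work 4 over a window of length 2 runs at speed 2; with α = 2 it uses 2² × 2 = 8 units of energy.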

Bunde, David

327

A Functional Measurement Study on Averaging Numerosity

ERIC Educational Resources Information Center

In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…

Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio

2014-01-01

328

Average Annual Rainfall over the Globe

ERIC Educational Resources Information Center

The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…

Agrawal, D. C.

2013-01-01

329

Distributed Averaging via Lifted Markov Chains

Motivated by applications of distributed linear estimation, distributed control and distributed optimization, we consider the question of designing linear iterative algorithms for computing the average of numbers in a network. Specifically, our interest is in designing such an algorithm with the fastest rate of convergence given the topological constraints of the network. As the main result of this paper,
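A minimal instance of such a linear iterative algorithm (without the lifting that accelerates convergence, which is the paper's contribution) uses symmetric Metropolis-style weights, so the update matrix is doubly stochastic: the iteration preserves the sum and every node converges to the global average on any connected graph. The adjacency representation here is an illustrative choice:

```python
def consensus_average(values, neighbors, steps=200):
    """Linear iterative distributed averaging sketch.  `neighbors[i]` lists
    the nodes adjacent to i; each step moves every node toward its
    neighbors' values using Metropolis weights 1/(max(deg_i, deg_j) + 1)."""
    x = list(values)
    n = len(x)
    deg = [len(neighbors[i]) for i in range(n)]
    for _ in range(steps):
        new = x[:]
        for i in range(n):
            for j in neighbors[i]:
                # symmetric weight keeps the update matrix doubly stochastic,
                # so the sum (hence the average) is conserved at every step
                w = 1.0 / (max(deg[i], deg[j]) + 1)
                new[i] += w * (x[j] - x[i])
        x = new
    return x
```

On a three-node path with values 0, 1, 2, all nodes converge to the average 1; the convergence rate is governed by the second-largest eigenvalue of the update matrix, which is what lifted Markov chains improve.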

Kyomin Jung; Devavrat Shah; Jinwoo Shin

2009-01-01

330

Model averaging, optimal inference, and habit formation

Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724
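The core computation of Bayesian model averaging is a weighted sum of per-model predictions, with weights proportional to each model's evidence (assuming here a uniform prior over models). A sketch with illustrative function names:

```python
import math

def model_average(predictions, log_evidences):
    """Bayesian model averaging sketch: weight each model's prediction by
    its posterior probability, proportional to exp(log evidence) under a
    uniform model prior."""
    m = max(log_evidences)
    # subtract the max log evidence before exponentiating, for stability
    weights = [math.exp(le - m) for le in log_evidences]
    z = sum(weights)
    weights = [w / z for w in weights]
    return sum(w * p for w, p in zip(weights, predictions))
```

Because the evidence penalizes complexity as well as rewarding accuracy, the weights implement the accuracy-complexity trade-off discussed in the abstract.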

FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.

2014-01-01

331

HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.

Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

Ben-Zvi, Ilan; Dayran, D.; Litvinenko, V.

2005-08-21

332

Credibilistic Markov decision processes: The average case

NASA Astrophysics Data System (ADS)

Using a concept of random fuzzy variables in credibility theory, we formulate a credibilistic model for unichain Markov decision processes under average criteria. And a credibilistically optimal policy is defined and obtained by solving the corresponding non-linear mathematical programming. Also we give a computational example to illustrate the effectiveness of our new model.

Kageyama, Masayuki

2009-02-01

333

The Longitudinal Development of Understanding of Average.

ERIC Educational Resources Information Center

Explores the development of understanding of the concept of average with students from grades 3 to 9 through interviews. Observed six levels of response based on an hierarchical model of cognitive functioning. Documents usage of ideas associated with the three standard measures of central tendency and representation as strategies for problem…

Watson, Jane M.; Moritz, Jonathan B.

2000-01-01

334

NASA Technical Reports Server (NTRS)

Time-averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row, the first-stage blade row, and the second-stage vane row of the Rocketdyne Space Shuttle Main Engine two-stage fuel-side turbine. Unsteady pressure envelope measurements for the first blade are also reported. These measurements were made at 10 percent, 50 percent, and 90 percent span on both the pressure and suction surfaces of the first-stage components. Additional Stanton number measurements were made on the first-stage blade platform, blade tip, and shroud, and at 50 percent span on the second vane. A shock tube was used as a short duration source of heated and pressurized air to which the turbine was subjected. Platinum thin-film heat flux gages were used to obtain the heat flux measurements, while miniature silicon-diaphragm flush-mounted pressure transducers were used to obtain the pressure measurements. The first-stage vane Stanton number distributions are compared with predictions obtained using a version of STAN5 and a quasi-3D Navier-Stokes solution. This same quasi-3D N-S code was also used to obtain predictions for the first blade and the second vane.

1994-01-01

335

Average and most-probable photon paths in random media.

Time-resolved experiments have revealed that, in contrast to the predictions of conventional diffusion theory, photons select certain curvilinear paths to travel between a source and a detector. Concepts of the average photon paths and Fermat paths are introduced on the basis of the non-Euclidean diffusion equation (NED) to explain experimental results. Comparison of the theory and the experiment demonstrates the potential of the NED to describe nondiffusive features of photon migration in the multiple-scattering regime. PMID:18183224

Polishchuk, A Y; Dolne, J; Liu, F; Alfano, R R

1997-04-01

336

Determination of the average lifetime of b -baryons

The average lifetime of b-baryons has been studied using 3×10^6 hadronic Z^0 decays collected by the DELPHI detector at LEP. Three methods have been used, based on the measurement of different observables: the proper decay time distribution of 206 vertices reconstructed with a Λ, a lepton and an oppositely charged pion; the impact parameter distribution of 441 muons with high transverse momentum

P. Abreu; W. Adam; T. Adye; E. Agasi; I. Ajinenko; R. Aleksan; G. D. Alekseev; R. Alemany; P. P. Allport; S. Almehed; U. Amaldi; S. Amato; A. Andreazza; M. L. Andrieux; P. Antilogus; W. D. Apel; Y. Arnoud; B. Asman; J. E. Augustin; A. Augustinus; P. Baillon; P. Bambade; F. Barao; R. Barate; M. Barbi; D. Y. Bardin; A. Baroncelli; O. Barring; J. A. Barrio; W. Bartl; M. J. Bates; M. Battaglia; M. Baubillier; J. Baudot; K. H. Becks; M. Begalli; P. Beilliere; Yu. Belokopytov; K. Belous; A. C. Benvenuti; M. Berggren; D. Bertrand; F. Bianchi; M. Bigi; M. S. Bilenky; P. Billoir; D. Bloch; M. Blume; S. Blyth; T. Bolognese; M. Bonesini; W. Bonivento; P. S. L. Booth; G. Borisov; C. Bosio; S. Bosworth; O. Botner; E. Boudinov; B. Bouquet; C. Bourdarios; T. J. V. Bowcock; M. Bozzo; P. Branchini; K. D. Brand; T. Brenke; R. A. Brenner; C. Bricman; L. Brillault; R. C. A. Brown; P. Bruckman; J. M. Brunet; L. Bugge; T. Buran; T. Burgsmueller; P. Buschmann; A. Buys; S. Cabrera; M. Caccia; M. Calvi; A. J. Camacho Rozas; T. Camporesi; V. Canale; M. Canepa; K. Cankocak; F. Cao; F. Carena; L. Carroll; C. Caso; M. V. Castillo Gimenez; A. Cattai; F. R. Cavallo; L. Cerrito; V. Chabaud; M. Chapkin; Ph. Charpentier; L. Chaussard; J. Chauveau; P. Checchia; G. A. Chelkov; M. Chen; R. Chierici; P. Chliapnikov; P. Chochula; V. Chorowicz; J. Chudoba; V. Cindro; P. Collins; J. L. Contreras; R. Contri; E. Cortina; G. Cosme; F. Cossutti; H. B. Crawley; D. Crennell; G. Crosetti; J. Cuevas Maestro; S. Czellar; E. Dahl-Jensen; J. Dahm; B. Dalmagne; M. Dam; G. Damgaard; P. D. Dauncey; M. Davenport; W. Da Silva; C. Defoix; A. Deghorain; G. Della Ricca; P. Delpierre; N. Demaria; A. De Angelis; W. De Boer; S. De Brabandere; C. De Clercq; C. De La Vaissiere; B. De Lotto; A. De Min; L. De Paula; C. De Saint-Jean; H. Dijkstra; L. Di Ciaccio; F. Djama; J. Dolbeau; M. Donszelmann; K. Doroba; M. Dracos; J. Drees; K. A. Drees; M. Dris; Y. Dufour; D. Edsall; R. Ehret; G. Eigen; T. Ekelof; G. Ekspong; M. Elsing; J. 
P. Engel; N. Ershaidat; B. Erzen; E. Falk; D. Fassouliotis; M. Feindt; A. Ferrer; T. A. Filippas; A. Firestone; P. A. Fischer; H. Foeth; E. Fokitis; F. Fontanelli; F. Formenti; B. Franek; P. Frenkiel; D. C. Fries; A. G. Frodesen; R. Fruhwirth; F. Fulda-Quenzer; J. Fuster; A. Galloni; D. Gamba; M. Gandelman; C. Garcia; J. Garcia; C. Gaspar; U. Gasparini; Ph. Gavillet; E. N. Gazis; D. Gele; J. P. Gerber; L. Gerdyukov; M. Gibbs; R. Gokieli; B. Golob; G. Gopal; L. Gorn; M. Gorski; Yu. Gouz; V. Gracco; E. Graziani; G. Grosdidier; K. Grzelak; S. Gumenyuk; P. Gunnarsson; M. Gunther; J. Guy; F. Hahn; S. Hahn; Z. Hajduk; A. Hallgren; K. Hamacher; W. Hao; F. J. Harris; V. Hedberg; R. Henriques; J. J. Hernandez; P. Herquet; H. Herr; T. L. Hessing; E. Higon; H. J. Hilke; T. S. Hill; S. O. Holmgren; P. J. Holt; D. Holthuizen; S. Hoorelbeke; M. Houlden; J. Hrubec; K. Huet; K. Hultqvist; J. N. Jackson; R. Jacobsson; P. Jalocha; R. Janik; Ch. Jarlskog; G. Jarlskog; P. Jarry; B. Jean-Marie; E. K. Johansson; L. Jonsson; P. Jonsson; C. Joram; P. Juillot; M. Kaiser; F. Kapusta; K. Karafasoulis; M. Karlsson; E. Karvelas; S. Katsanevas; E. C. Katsoufis; R. Keranen; Yu. Khokhlov; B. A. Khomenko; N. N. Khovanski; B. King; N. J. Kjaer; H. Klein; A. Klovning; P. Kluit; B. Koene; P. Kokkinias; M. Koratzinos; K. Korcyl; C. Kourkoumelis; O. Kouznetsov; P. H. Kramer; M. Krammer; C. Kreuter; I. Kronkvist; Z. Krumstein; W. Krupinski; P. Kubinec; W. Kucewicz; K. Kurvinen; C. Lacasta; I. Laktineh; S. Lamblot; J. W. Lamsa; L. Lanceri; P. Langefeld; I. Last; J. P. Laugier; R. Lauhakangas; G. Leder; F. Ledroit; V. Lefebure; C. K. Legan; R. Leitner; Y. Lemoigne; J. Lemonne; G. Lenzen; V. Lepeltier; T. Lesiak; D. Liko; R. Lindner; A. Lipniacka; I. Lippi; B. Loerstad; J. G. Loken; J. M. Lopez; D. Loukas; P. Lutz; L. Lyons; J. MacNaughton; G. Maehlum; A. Maio; V. Malychev; F. Mandl; J. Marco; R. Marco; B. Marechal; M. Margoni; J. C. Marin; C. Mariotti; A. Markou; T. Maron; C. Martinez-Rivero; F. 
Martinez-Vidal; S. Marti i Garcia; J. Masik; F. Matorras; C. Matteuzzi; G. Matthiae; M. Mazzucato; M Mc Cubbin; R. Mc Kay; R. Mc Nulty; J. Medbo; M. Merk; C. Meroni; S. Meyer; W. T. Meyer; M. Michelotto; E. Migliore; L. Mirabito; W. A. Mitaroff; U. Mjoernmark; T. Moa; R. Moeller; K. Moenig; M. R. Monge; P. Morettini; H. Mueller; L. M. Mundim; W. J. Murray; B. Muryn; G. Myatt; F. Naraghi; F. L. Navarria; S. Navas; K. Nawrocki; P. Negri; S. Nemecek; W. Neumann; N. Neumeister; R. Nicolaidou; B. S. Nielsen; M. Nieuwenhuizen; V. Nikolaenko; P. Niss; A. Nomerotski; A. Normand; W. Oberschulte-Beckmann; V. Obraztsov; A. G. Olshevski; A. Onofre; R. Orava; K. Osterberg; A. Ouraou; P. Paganini; M. Paganoni; P. Pages; H. Palka; Th. D. Papadopoulou; K. Papageorgiou; L. Pape; C. Parkes; F. Parodi; A. Passeri; M. Pegoraro; L. Peralta; H. Pernegger; A. Perrotta; C. Petridou

1996-01-01

337

Self-averaging in complex brain neuron signals

NASA Astrophysics Data System (ADS)

Nonlinear statistical properties of the Ventral Tegmental Area (VTA) of the limbic brain are studied in vivo. The VTA plays a key role in the generation of pleasure and in the development of psychological drug addiction. It is shown that spiking time-series of the VTA dopaminergic neurons exhibit long-range correlations with self-averaging behavior. This specific VTA phenomenon has no relation to the VTA rewarding function. This last result reveals the complex role of the VTA in the limbic brain.

Bershadskii, A.; Dremencov, E.; Fukayama, D.; Yadid, G.

2002-12-01

338

Polarized electron beams at milliampere average current

This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 µA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltages > 350 kV.

Poelker, M. [Thomas Jefferson National Accelerator Facility, Newport News, Virginia 23606 (United States)

2013-11-07

339

Slater Averaged Pseudopotential and Its Improvements

NASA Astrophysics Data System (ADS)

We demonstrate that the optimized effective potential method (OEP), which can be viewed as a way of constructing an orbital-independent potential from known orbital-dependent potentials, is valid for pseudopotentials. It is further proved that for most group I and II elements, as well as elements with large radius, the Slater averaged pseudopotential, which is local and orbital independent, is applicable with very good transferability. A Heine-Abarenkov (HA) correction is proposed to make the pseudopotential workable for other elements, especially the first-row atoms. Furthermore, the combination of the Slater averaged potential and the Bachelet-Hamann-Schlüter (BHS) construction produces a new family of first-principles norm-conserving pseudopotentials.

Miao, Maosheng

2001-03-01

340

Cosmological backreaction and spatially averaged spatial curvature

It has been suggested that the accelerated expansion of the Universe is due to backreaction of small scale density perturbations on the large scale spacetime geometry. While evidence against this suggestion has accumulated, it has not yet been definitively ruled out. Many investigations of this issue have focused on the Buchert formalism, which computes spatial averages of quantities in synchronous comoving gauge. We argue that, for the deceleration parameter of this formalism to agree with observations, the spatial average of the three dimensional Ricci scalar (spatial curvature) must be large today, with an $\Omega_k$ in the range of $1 \le \Omega_k \le 1.3$. We argue that this constraint is difficult to reconcile with observations of the location of the first Doppler peak of the CMBR. We illustrate the argument with a simple toy model for the effect of backreaction, which we show is generically incompatible with observations.

Eran Rosenthal; Éanna É. Flanagan

2008-09-11

341

New applications for high average power beams

NASA Astrophysics Data System (ADS)

The technology base formed by the development of high peak power simulators, laser drivers, FELs, and ICF drivers from the early 1960s through the late 1980s is being extended to high average power short-pulse machines with the capabilities of supporting new types of manufacturing processes and performing new roles in environmental cleanup applications. This paper discusses a process for identifying and developing possible commercial applications, specifically those requiring very high average power levels of hundreds of kilowatts to perhaps megawatts. The authors discuss specific technology requirements and give examples of application development efforts. The application development work is directed at areas that can possibly benefit from the high specific energies attainable with short pulse machines.

Neau, E. L.; Turman, B. N.; Patterson, E. L.

1993-06-01

342

jModelTest: phylogenetic model averaging.

jModelTest is a new program for the statistical selection of models of nucleotide substitution based on "Phyml" (Guindon and Gascuel 2003. A simple, fast, and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst Biol. 52:696-704.). It implements 5 different selection strategies, including "hierarchical and dynamical likelihood ratio tests," the "Akaike information criterion," the "Bayesian information criterion," and a "decision-theoretic performance-based" approach. This program also calculates the relative importance and model-averaged estimates of substitution parameters, including a model-averaged estimate of the phylogeny. jModelTest is written in Java and runs under Mac OSX, Windows, and Unix systems with a Java Runtime Environment installed. The program, including documentation, can be freely downloaded from the software section at http://darwin.uvigo.es. PMID:18397919
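Model-averaged estimates of the kind jModelTest reports are conventionally built from information-criterion weights. A sketch using Akaike weights (the function names are illustrative, not jModelTest's API):

```python
import math

def akaike_weights(aic_scores):
    """Akaike weights: w_i proportional to exp(-delta_i / 2), where delta_i
    is each model's AIC minus the minimum AIC over all candidate models."""
    best = min(aic_scores)
    raw = [math.exp(-(a - best) / 2.0) for a in aic_scores]
    z = sum(raw)
    return [r / z for r in raw]

def model_averaged_estimate(estimates, aic_scores):
    """Model-averaged parameter estimate: each model's estimate weighted by
    its Akaike weight (its relative support from the data)."""
    return sum(w * e for w, e in zip(akaike_weights(aic_scores), estimates))
```

The same weighting applied to per-model tree topologies or substitution parameters yields the model-averaged quantities the abstract mentions; BIC weights follow the same pattern with a different penalty.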

Posada, David

2008-07-01

343

High average power free-electron lasers

NASA Astrophysics Data System (ADS)

Ever since the first free-electron laser (FEL) was both conceived and demonstrated in the 1970s at Stanford, it has attracted much attention as a promising foundation for a high average power light source. Since its light is produced from free electrons, an FEL can be designed to lase at virtually any wavelength. Furthermore, since its light is generated from a relativistic beam that is continually replenished, its gain medium cannot be damaged by any conventional means. Accordingly, as there have been several ambitious attempts to design and construct high average power FELs over the past several decades, we chronicle some of these efforts. Additionally, we provide an overview of FEL technology and theory, discuss some of the obstacles that remain in the construction of a high-power FEL, and provide a road map toward a megawatt-class device.

Blau, Joseph; Cohn, Keith; Colson, William B.

2013-02-01

344

Stochastic Games with Average Payoff Criterion

We study two-person stochastic games with a Polish state space and compact action spaces and with average payoff criterion under a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of Nash equilibrium in stationary strategies is established under certain separability conditions.

Ghosh, M. K. [Department of Mathematics, Indian Institute of Science, Bangalore 560012 (India); Bagchi, A. [Department of Applied Mathematics, University of Twente, P.O. Box 217, 7500 AE Enschede (Netherlands)

1998-11-15

345

Modern average global sea-surface temperature

The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
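The averaging step described here, pooling all Januaries, all Februaries, and so on, cell by cell, while skipping cells without valid data, can be sketched as follows (the flat-list data layout is an illustrative simplification of the actual gridded images):

```python
def monthly_climatology(images):
    """Average same-calendar-month images across years, cell by cell.
    `images` maps (year, month) to a flat list of grid-cell values, with
    None marking missing data; returns month -> averaged grid."""
    by_month = {}
    for (year, month), grid in images.items():
        by_month.setdefault(month, []).append(grid)
    climatology = {}
    for month, grids in by_month.items():
        averaged = []
        for c in range(len(grids[0])):
            # average only the years with valid data in this cell
            valid = [g[c] for g in grids if g[c] is not None]
            averaged.append(sum(valid) / len(valid) if valid else None)
        climatology[month] = averaged
    return climatology
```

Because each cell averages only its valid years, gaps in any single year's image are filled by the other years, which is the reduction in missing grid cells the abstract describes.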

Schweitzer, Peter N.

1993-01-01

346

Average neutron detection efficiency for DEMON detectors

NASA Astrophysics Data System (ADS)

The neutron detection efficiency of a DEMON detector, averaged over the whole volume, was calculated using GEANT and applied to determine neutron multiplicities in an intermediate heavy ion reaction. When a neutron source is set at a distance of about 1 m from the front surface of the detector, the average efficiency, ε_av, is found to be significantly lower (20-30%) than the efficiency measured at the center of the detector, ε_0. In the GEANT simulation the ratio R = ε_av/ε_0 was calculated as a function of neutron energy. The experimental central efficiency multiplied by R was then used to determine the average efficiency. The results were applied to a study of the 64Zn+112Sn reaction at 40 A MeV which employed 16 DEMON detectors. The neutron multiplicity was extracted using a moving source fit. The derived multiplicities compare well with those determined using the neutron ball in the NIMROD detector array in a separate experiment. Both are in good agreement with multiplicities predicted by a transport model calculation using an antisymmetric molecular dynamics (AMD) model code.

Zhang, S.; Lin, W.; Rodrigues, M. R. D.; Huang, M.; Wada, R.; Liu, X.; Zhao, M.; Jin, Z.; Chen, Z.; Keutgen, T.; Kowalski, S.; Hagel, K.; Barbui, M.; Bonasera, A.; Bottosso, C.; Materna, T.; Natowitz, J. B.; Qin, L.; Sahu, P. K.; Schmidt, K. J.; Wang, J.

2013-05-01

347

Average formation lengths of baryons and antibaryons in string model

In this work we continue the investigation of the space-time scales of the hadronization process in the framework of the string model. The average formation lengths of several widely used species of baryons (antibaryons), such as $p$ ($\bar{p}$), $n$ ($\bar{n}$), $\Delta$ ($\bar{\Delta}$), $\Lambda$ ($\bar{\Lambda}$) and $\Sigma$ ($\bar{\Sigma}$), are studied. It is shown that they depend on the electric charges or, more precisely, on the quark content of the hadrons. In particular, the average formation lengths of positively charged hadrons, for example protons, are considerably larger than those of their negatively charged antiparticles, antiprotons. This statement holds for all nuclear targets and any value of the Bjorken scaling variable $x_{Bj}$. The main mechanism is direct production. The additional production mechanism resulting from the decay of resonances gives a small contribution. It is shown that the average formation lengths of protons (antiprotons) are slowly rising (decreasing) functions of $x_{Bj}$, while those of neutrons and antineutrons are slowly decreasing functions of $x_{Bj}$. The shape and behavior of the average formation lengths for baryons qualitatively coincide with those for pseudoscalar mesons obtained earlier.

L. Grigoryan

2010-07-25

348

349

Code of Federal Regulations, 2013 CFR

...false Calculation of average fuel economy and average carbon-related exhaust emissions...CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR...Determining Manufacturer's Average Fuel Economy and Manufacturer's Average...

2013-07-01

350

Code of Federal Regulations, 2012 CFR

... false Calculation of average fuel economy and average carbon-related exhaust emissions...AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF...Determining Manufacturer's Average Fuel Economy and Manufacturer's Average...

2012-07-01

351

Quetelet, the average man and medical knowledge.

Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine. PMID:23970171

Caponi, Sandra

2013-08-14

352

[Quetelet, the average man and medical knowledge].

Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine. PMID:24141918

Caponi, Sandra

2013-01-01

353

Averaging Spherically Symmetric Spacetimes in General Relativity

We discuss the averaging problem in general relativity, using the form of the macroscopic gravity equations in the case of spherical symmetry in volume preserving coordinates. In particular, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. On cosmological scales, the correlation tensor in a Friedmann-Lemaître-Robertson-Walker (FLRW) background is found to be of the form of a spatial curvature. On astrophysical scales the correlation tensor can be interpreted as the sum of a spatial curvature and an anisotropic fluid. We briefly discuss the physical implications of these results.

A. A. Coley; N. Pelavas

2006-06-21

354

Path-average kernels for long wavelength traveltime tomography

NASA Astrophysics Data System (ADS)

Although much effort goes into improving the resolution of tomographic models, investigating their quality has only just started. Probabilistic tomography provides a framework for the quantitative assessment of uncertainties of long-wavelength tomographic models. So far, this technique has been used to invert maps of surface wave phase velocities and normal-mode splitting functions. Including body waves would substantially increase the depth resolution in the lowermost mantle. In surface wave tomography, the construction of phase velocity maps and splitting functions is a well-defined inverse problem, and the depth inversion is less well constrained but characterized by a small number of dimensions suitable for a Monte Carlo search. Traveltime tomography is mostly based on ray theory that covers the 3-D Earth, thus the dimension of the inverse problem is too large for a Monte Carlo search. The ray-mode duality suggests to apply the path-average approximation to body wave traveltimes. In this way the measured traveltime residual as a function of ray parameter can be inverted using path-average kernels, which depend on depth only, similar to surface wave tomography. We investigate the validity of the path-average approximation for delay times in both the forward and the inverse problem using the velocity model S20RTS as well as random models. We numerically illustrate the precision of such kernels compared with ray-theoretic and finite-frequency ones. We further invert traveltime residuals, calculated from Fermat rays, using the path-average kernels. We find that the agreement between classical ray theory and path-average theory is good for long wavelength structures. We suggest that for mapping long wavelength structures, body waves can be inverted in two steps, similar to surface waves, where the ray parameter and the vertical traveltime play the role of frequency and phase velocity, respectively.

Mosca, I.; Trampert, J.

2009-05-01

355

Averaging in the presence of sliding errors

In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms.
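The iterative re-weighting idea can be sketched in a few lines. The relative-error model assumed here (each experiment's error proportional to its own measured value) and the function name are illustrative assumptions, not details from the paper:

```python
# Sketch of bias-free averaging when each experiment's error scales with
# its measured value (assumed relative-error model: sigma_i = r_i * x_i).
# Re-deriving each error from the current average removes the bias that
# otherwise favors low measurements, which report smaller errors.

def sliding_error_average(values, rel_errors, n_iter=20):
    """Iteratively re-weight measurements using errors evaluated
    at the running average rather than at each measured value."""
    # initial estimate: naive weighted mean with the reported errors
    sigmas = [r * x for x, r in zip(values, rel_errors)]
    mean = sum(x / s**2 for x, s in zip(values, sigmas)) / sum(1 / s**2 for s in sigmas)
    for _ in range(n_iter):
        # improved errors: evaluate the error model at the average itself
        sigmas = [r * mean for r in rel_errors]
        w = [1 / s**2 for s in sigmas]
        mean = sum(wi * x for wi, x in zip(w, values)) / sum(w)
    return mean
```

With identical relative errors, the re-weighted result reduces to the plain mean, whereas the naive variance-weighted mean is pulled toward the low measurements.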

Yost, G.P. [Superconducting Super Collider Lab., Dallas, TX (United States); Lawrence Berkeley Lab., CA (United States)]

1991-08-01

356

Motional averaging in a superconducting qubit.

Superconducting circuits with Josephson junctions are promising candidates for developing future quantum technologies. Of particular interest is to use these circuits to study effects that typically occur in complex condensed-matter systems. Here we employ a superconducting quantum bit--a transmon--to perform an analogue simulation of motional averaging, a phenomenon initially observed in nuclear magnetic resonance spectroscopy. By modulating the flux bias of a transmon with controllable pseudo-random telegraph noise we create a stochastic jump of its energy level separation between two discrete values. When the jumping is faster than a dynamical threshold set by the frequency displacement of the levels, the initially separate spectral lines merge into a single, narrow, motional-averaged line. With sinusoidal modulation a complex pattern of additional sidebands is observed. We show that the modulated system remains quantum coherent, with modified transition frequencies, Rabi couplings, and dephasing rates. These results represent the first steps towards more advanced quantum simulations using artificial atoms. PMID:23361011

Li, Jian; Silveri, M P; Kumar, K S; Pirkkalainen, J-M; Vepsäläinen, A; Chien, W C; Tuorila, J; Sillanpää, M A; Hakonen, P J; Thuneberg, E V; Paraoanu, G S

2013-01-01

357

The 2009 World Average of $\alpha_s$

Measurements of $\alpha_s$, the coupling strength of the Strong Interaction between quarks and gluons, are summarised and an updated value of the world average of $\alpha_s(M_Z)$ is derived. Building on previous reviews, special emphasis is laid on the most recent determinations of $\alpha_s$. These are obtained from $\tau$-decays, from global fits of electroweak precision data and from measurements of the proton structure function $F_2$, which are based on perturbative QCD calculations up to $O(\alpha_s^4)$; from hadronic event shapes and jet production in $e^+e^-$ annihilation, based on $O(\alpha_s^3)$ QCD; from jet production in deep inelastic scattering and from $\Upsilon$ decays, based on $O(\alpha_s^2)$ QCD; and from heavy quarkonia based on unquenched QCD lattice calculations. Applying pragmatic methods to deal with possibly underestimated errors and/or unknown correlations, the world average value of $\alpha_s(M_Z)$ results in $\alpha_s(M_Z) = 0.1184 \pm 0.0007$. The measured values of $\alpha_s(Q)$, covering energy scales from $Q \equiv m_\tau = 1.78$ GeV to 209 GeV, exactly follow the energy dependence predicted by QCD and therefore significantly test the concept of asymptotic freedom.

Siegfried Bethke

2009-08-10

358

Unpredictable visual changes cause temporal memory averaging.

Various factors influence the perceived timing of visual events. Yet, little is known about the ways in which transient visual stimuli affect the estimation of the timing of other visual events. In the present study, we examined how a sudden color change of an object would influence the remembered timing of another transient event. In each trial, subjects saw a green or red disk travel in circular motion. A visual flash (white frame) occurred at random times during the motion sequence. The color of the disk changed either at random times (unpredictable condition), at a fixed time relative to the motion sequence (predictable condition), or it did not change (no-change condition). The subjects' temporal memory of the visual flash in the predictable condition was as veridical as that in the no-change condition. In the unpredictable condition, however, the flash was reported to occur closer in time to the color change than it actually did. Thus, an unpredictable visual change distorts the temporal memory of another visual event such that the remembered moment of the event is closer to the timing of the unpredictable visual change. PMID:17767943

Ohyama, Junji; Watanabe, Katsumi

2007-09-01

359

Average Transverse Momentum Quantities Approaching the Lightfront

NASA Astrophysics Data System (ADS)

In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the pT broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of such integrated quantities, using Bessel-weighting and rapidity cut-offs, with the conventional definitions as limiting cases. The regularized quantities are given in terms of integrals over the TMDs of interest that are well-defined and moreover have the advantage of being amenable to lattice evaluations.

Boer, Daniël

2015-01-01

360

Average Gait Differential Image Based Human Recognition

The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption for gait-based recognition. PMID:24895648
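The AGDI construction described above can be sketched with NumPy; the normalisation by the number of frame pairs is an assumption for illustration, not quoted from the paper:

```python
import numpy as np

# Minimal sketch of the average gait differential image (AGDI):
# accumulate absolute differences between adjacent binary silhouette
# frames, then normalise by the number of frame pairs.

def agdi(silhouettes):
    """silhouettes: (T, H, W) array-like of 0/1 silhouette frames."""
    frames = np.asarray(silhouettes, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))  # (T-1, H, W) adjacent-frame differences
    return diffs.mean(axis=0)                # accumulate and average
```

A pixel that flips in every frame pair gets value 1 (purely kinetic), while a pixel that never changes gets 0 (purely static), which is how the feature image separates the two kinds of information.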

Chen, Jinyan; Liu, Jiansheng

2014-01-01

361

Average-case technology mapping of asynchronous burst-mode circuits

This paper presents a technology mapper that optimizes the average performance of asynchronous burst-mode control circuits. More specifically, the mapper can be directed to minimize either the average latency or the average cycle time of the circuit. The input to the mapper is a burst-mode specification and its NAND-decomposed unmapped network. The mapper pre-processes the circuit's specification using stochastic

Wei-chun Chou; Peter A. Beerel; Kenneth Y. Yun

1999-01-01

362

Averaged null energy condition in loop quantum cosmology

NASA Astrophysics Data System (ADS)

Wormholes and time machines are objects of great interest in general relativity. However, supporting them requires exotic matter, which is impossible at the classical level. Semiclassical gravity introduces quantum effects into the stress-energy tensor and constructs many self-consistent wormholes. But they are not traversable due to the averaged null energy condition. Loop quantum gravity (LQG) significantly modifies the Einstein equation in the deep quantum region. If we write the modified Einstein equation in the form of the standard one but with an effective stress-energy tensor, it is convenient to analyze the geometry in LQG through the energy conditions. Loop quantum cosmology (LQC), an application of LQG, has an effective stress-energy tensor which violates some kinds of local energy conditions. So it is natural that inflation emerges in LQC. In this paper, we investigate the averaged null energy condition in LQC in the framework of the effective Hamiltonian, and we find that the effective stress-energy tensor in LQC violates the averaged null energy condition in the massless scalar field coupled model.

Li, Li-Fang; Zhu, Jian-Yang

2009-02-01

363

Dissociable effects of attention and crowding on orientation averaging

It has been proposed that visual crowding—the breakdown in recognition that occurs when objects are presented in cluttered scenes—reflects a limit imposed by visual attention. We examined this idea in the context of an orientation averaging task, having subjects judge the mean orientation of a set of oriented signal elements either in isolation, or “crowded” by nearby randomly oriented elements. In some conditions, subjects also had to perform an attentionally demanding secondary task. By measuring performance at different levels of signal orientation variability, we show that crowding increases subjects’ local uncertainty (about the orientation of individual elements) but that diverting attention reduces their global efficiency (the effective number of elements they can average over). Furthermore, performance with the same stimulus-sequence, presented multiple times, reveals that crowding does not induce more stimulus-independent variability (as would be predicted by some accounts based on attention). We conclude that crowding and attentional load have dissociable perceptual consequences for orientation averaging, suggesting distinct neural mechanisms for both. For the task we examined, attention can modulate the effects of crowding by changing the efficiency with which information is analyzed by the visual system but since crowding changes local uncertainty, not efficiency, crowding does not reflect an attentional limit. PMID:20053091

Dakin, Steven C.; Bex, Peter J.; Cass, John R.; Watt, Roger J.

2010-01-01

364

Constructing the Average Natural History of HIV-1 Infection

NASA Astrophysics Data System (ADS)

Many aspects of the natural course of the HIV-1 infection remain unclear, despite important efforts towards understanding its long-term dynamics. Using a scaling approach that places progression markers (viral load, CD4+, CD8+) of many individuals on a single average natural course of disease progression, we introduce the concepts of inter-individual scaling and time scaling. Our quantitative assessment of the natural course of HIV-1 infection indicates that the dynamics of the evolution for the individual that developed AIDS (opportunistic infections) is different from that of the individual that did not develop AIDS. This means that the rate of progression is not relevant for the infection evolution.

Diambra, L.; Capurro, A.; Malta, C. P.

2007-05-01

365

Saturn kilometric radiation: Average and statistical properties

Since Cassini entered Saturn's magnetosphere in July 2004, the auroral Saturnian kilometric radiation (SKR), which dominates the kronian radio spectrum, is observed quasi-continuously. Consecutive orbits of the spacecraft covered distances to Saturn down to 1.3 Saturn radii, all local times and, since December 2006, latitudes as high as 60°. On the basis of carefully calibrated and cleaned long-term time series

L. Lamy; P. Zarka; B. Cecconi; R. Prangé; W. S. Kurth; D. A. Gurnett

2008-01-01

366

18 CFR 301.7 - Average System Cost methodology functionalization.

Code of Federal Regulations, 2013 CFR

...2013-04-01 false Average System Cost methodology functionalization. 301.7 Section...ADMINISTRATIONS AVERAGE SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE...ACT § 301.7 Average System Cost methodology functionalization. (a)...

2013-04-01

368

18 CFR 301.7 - Average System Cost methodology functionalization.

Code of Federal Regulations, 2012 CFR

...2012-04-01 false Average System Cost methodology functionalization. 301.7 Section...ADMINISTRATIONS AVERAGE SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE...ACT § 301.7 Average System Cost methodology functionalization. (a)...

2012-04-01

369

18 CFR 301.7 - Average System Cost methodology functionalization.

Code of Federal Regulations, 2011 CFR

...2011-04-01 false Average System Cost methodology functionalization. 301.7 Section...ADMINISTRATIONS AVERAGE SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE...ACT § 301.7 Average System Cost methodology functionalization. (a)...

2011-04-01

370

The Event-Related Low-Frequency Activity of Highly and Average Intelligent Children

ERIC Educational Resources Information Center

Time-frequency analysis techniques were used to investigate the event-related low-frequency (delta: 0.5-4 Hz; theta: 4-8 Hz) activity in auditory event-related potential (ERP) data of highly and average intelligent children; 18 intellectually gifted children and 18 intellectually average children participated in the present study. Present findings…

Liu, Tongran; Shi, Jiannong; Zhao, Daheng; Yang, Jie

2008-01-01

371

Optimal Stock Selling/Buying Strategy with reference to the Ultimate Average*

We are concerned with the optimal decision to sell or buy a stock in a given period with reference to the ultimate average of the stock price. More precisely, we aim to determine an optimal selling (buying) time

Chaudhuri, Sanjay

372

University of Waterloo Weather Station Summary May 2013 A warm month with average precipitation

A warm month with average precipitation. Most of the precipitation for the month happened during just 3 days (the 10th, 28th and 29th). Even with this average month, we are still about 100 mm over the precipitation we expect at this time

Waterloo, University of

373

A comparison of spatial averaging and Cadzow's method for array wavenumber estimation

We are concerned with resolving superimposed, correlated seismic waves with small-aperture arrays. The limited time-bandwidth product of transient seismic signals complicates the task. We examine the use of MUSIC and Cadzow's ML estimator with and without subarray averaging for resolution potential. A case study with real data favors the MUSIC algorithm and a multiple event covariance averaging scheme.
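The subarray (spatial) averaging compared in the study can be sketched as follows. This is the standard forward spatial-smoothing form of the covariance estimate used to handle correlated arrivals, not the authors' exact implementation:

```python
import numpy as np

# Sketch of forward spatial smoothing (subarray averaging): averaging
# the covariance matrices of overlapping subarrays restores the rank
# that subspace methods like MUSIC need when the waves are correlated.

def spatially_smoothed_covariance(X, sub_len):
    """X: (num_sensors, num_snapshots) array of sensor data.
    Returns the (sub_len, sub_len) covariance averaged over all
    overlapping subarrays of length sub_len."""
    m, n = X.shape
    num_sub = m - sub_len + 1
    R = np.zeros((sub_len, sub_len), dtype=complex)
    for k in range(num_sub):
        Xs = X[k:k + sub_len, :]          # k-th overlapping subarray
        R += Xs @ Xs.conj().T / n         # its sample covariance
    return R / num_sub                    # average over subarrays
```

The cost of the decorrelation is a reduced effective aperture (sub_len instead of m sensors), which matters for the small-aperture arrays discussed here.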

Harris, D.B.; Clark, G.A.

1989-10-31

374

NASA Astrophysics Data System (ADS)

Consequences of a simple integral definition of electron charge bound to an ion are examined for Thomas-Fermi (TF) and Debye-Huckel-Thomas-Fermi (DHTF) average atom statistical potentials used to describe high temperature high density plasmas. A self-consistent scheme for calculating average degree of ionization within the DHTF approach is described. With the simple integral definition of bound charge the DHTF model, unlike the TF model, exhibits the anomalous behavior that degree of ionization can decrease as temperature increases. It is shown that this results from inclusion in the integration of electron charge density too extended and too near continuum energies to be physically considered as bound.

Zakowicz, W.; Feng, I. J.; Pratt, R. H.

1982-03-01

375

Calculating Free Energies Using Average Force

NASA Technical Reports Server (NTRS)

A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
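The second comparison method mentioned above, calculating the average force at discrete locations along the coordinate, amounts to integrating dF/dξ = -⟨F_ξ⟩ numerically. A minimal trapezoid-rule sketch (function name and grid are illustrative, not from the paper):

```python
import numpy as np

# Sketch of recovering a free energy profile from the mean force
# measured at discrete values of a selected coordinate xi, via
# dF/dxi = -<F_xi> and trapezoid-rule integration.

def free_energy_profile(xi, mean_force):
    """xi: grid of coordinate values; mean_force: <F_xi> at each point.
    Returns F(xi) - F(xi[0])."""
    xi = np.asarray(xi, dtype=float)
    f = np.asarray(mean_force, dtype=float)
    # trapezoid rule on -<F_xi>, accumulated along the grid
    increments = -0.5 * (f[1:] + f[:-1]) * np.diff(xi)
    return np.concatenate([[0.0], np.cumsum(increments)])
```

For a harmonic test case, mean force -k·ξ, the sketch recovers the expected profile k·ξ²/2 exactly, since the trapezoid rule is exact for linear integrands.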

Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

2001-01-01

376

Use of Averaged Norbeck Social Support Questionnaire Scores

Background. Averaged Norbeck Social Support Questionnaire (NSSQ) support scores remove the influence of network size variability but may unduly lower scores for participants with large networks. Objectives. To evaluate the use of averaged NSSQ scores. Method. Pearson correlations determined if averaged scores decreased as network size increased across three samples. Then, Pearson correlations between a criterion variable and both averaged and raw support scores were computed along with the resultant power to detect a true effect. Results. Neither averaged total functional support nor averaged affect and affirmation scores decreased as network size increased. However, averaged aid scores did decrease as network size increased. Power also increased markedly in all averaged versus raw scores except in averaged aid scores. Discussion and Conclusions. Use of averaged aid scores is not recommended. Use of all other averaged scores appears acceptable. PMID:22007323

Gigliotti, Eileen; Samuels, William Ellery

2011-01-01

377

APPENDIX A: MONTHLY AVERAGED DATA

In many instances monthly averaged data are given for all sites in watt hours/meter2 per hour or day, for each site and each solar measurement. The data are intended for solar energy and climatic applications and for preliminary estimates of solar system performance. This section provides a summary of monthly averaged data

Oregon, University of

378

Global Average Brightness Temperature for April 2003

NASA Technical Reports Server (NTRS)

[figure removed for brevity, see original site] Figure 1

This image shows average temperatures in April, 2003, observed by AIRS at an infrared wavelength that senses either the Earth's surface or any intervening cloud. Similar to a photograph of the planet taken with the camera shutter held open for a month, stationary features are captured while those obscured by moving clouds are blurred. Many continental features stand out boldly, such as our planet's vast deserts, and India, now at the end of its long, clear dry season. Also obvious are the high, cold Tibetan plateau to the north of India, and the mountains of North America. The band of yellow encircling the planet's equator is the Intertropical Convergence Zone (ITCZ), a region of persistent thunderstorms and associated high, cold clouds. The ITCZ merges with the monsoon systems of Africa and South America. Higher latitudes are increasingly obscured by clouds, though some features like the Great Lakes, the British Isles and Korea are apparent. The highest latitudes of Europe and Eurasia are completely obscured by clouds, while Antarctica stands out cold and clear at the bottom of the image.

The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.

2003-01-01

379

NASA Astrophysics Data System (ADS)

The paper presents a new method of digital terrain model (DTM) estimation based on modified moving average interpolation. There are many methods that can be employed in DTM creation, such as kriging, inverse distance weighting, nearest neighbour and moving average. The moving average method is not as precise as the others; hence, it is not commonly comprised in scientific work. Considering the high accuracy, the relatively low time costs, and the huge amount of measurement data collected by multibeam echosounder, however, the moving average method is definitely one of the most promising approaches. In this study, several variants of this method are analysed. An optimization of the moving average method is proposed based on a new module of selecting neighbouring points during the interpolation process—the "growing radius" approach. Tests experiments performed on various multibeam echosounder datasets demonstrate the high potential of this modified moving average method for improved DTM generation.
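The "growing radius" neighbour selection described above can be sketched for a single grid node. The parameter names, growth factor, and fallback behaviour are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Sketch of the "growing radius" moving average: if too few soundings
# fall within the search radius around a grid node, enlarge the radius
# until a minimum neighbour count is reached, then average the depths.

def moving_average_node(points, depths, node, r0=1.0, min_pts=3, grow=1.5, max_iter=10):
    """points: (N, 2) sounding positions; depths: (N,) measured depths.
    Returns the estimated depth at the 2-D grid node."""
    points = np.asarray(points, dtype=float)
    depths = np.asarray(depths, dtype=float)
    d = np.hypot(*(points - node).T)      # distances from soundings to node
    r = r0
    for _ in range(max_iter):
        mask = d <= r
        if mask.sum() >= min_pts:         # enough neighbours: average them
            return depths[mask].mean()
        r *= grow                         # otherwise grow the search radius
    return depths.mean()                  # fallback: global average
```

The appeal for dense multibeam data is that most nodes satisfy min_pts at the initial radius, so the method stays close to the cheap plain moving average while avoiding empty-neighbourhood gaps in sparse regions.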

Maleika, Wojciech

2015-02-01

380

Does the orbit-averaged theory require a scale separation between periodic orbit size and perturbation?

Using perturbation theory, we show that the orbit-averaged theory only requires a time-scale separation between equilibrium and perturbed motions, and verify the widely accepted notion that orbit averaging effects greatly

Lin, Zhihong

381

The Acoustic Thermometry of Ocean Climate (ATOC) Project: Towards depth-averaged temperature maps

The acoustic paths from these sources criss-cross the North Pacific. Where the distribution of sound speed with depth is available, maps of sound speed averaged over 0-1000 m depths are derived.

Dushaw, Brian

382

for both temperature and precipitation. The daily high temperatures of May were just a little below average (though still within the average range). Also on the edge of the average range was the precipitation (63.6 mm) for the month, but this time on the low end of the range. About 40 mm of this precipitation came down between

Waterloo, University of

383

To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'.

Khrennikov, Andrei [International Center for Mathematical Modeling in Physics and Cognitive Sciences, University of Vaexjoe, Vaexjoe S-35195 (Sweden)

2007-01-15

384

How to Define Average Class Size (and Deviations from the Average Class Size) in a Way Which Is

of the important parameters is the average class size. This average is usually estimated as an arithmetic average size. However, if we want to describe the effect of deviations from the average class size on the teaching effectiveness, then, instead of the standard deviation of the class size, a more complex

Kreinovich, Vladik

385

Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

NASA Technical Reports Server (NTRS)

In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.

Shih, Tsan-Hsing; Liu, Nan-Suey

2012-01-01

386

Averaging techniques for steady and unsteady calculations of a transonic fan stage

NASA Technical Reports Server (NTRS)

It is often desirable to characterize a turbomachinery flow field with a few lumped parameters such as total pressure ratio or stage efficiency. Various averaging schemes may be used to compute these parameters. The momentum, energy, and area averaging schemes are described and compared. The schemes were compared for two computed solutions of the midspan section of a transonic fan stage: a steady averaging-plane solution in which average rotor outflow conditions were used as stator inflow conditions, and an unsteady rotor-stator interaction solution. The solutions were computed on identical grids using similar Navier-Stokes codes and an algebraic turbulence model. The unsteady solution is described, some unsteady flow phenomena are discussed, and the steady pressure distributions are compared. Despite large unsteady pressure fluctuations on the stator surface, the steady pressure distribution matched the average unsteady distribution almost exactly. Stator wake profiles, stator loss coefficient, and stage efficiency were computed for the two solutions with the three averaging schemes and are compared. In general, the energy averaging scheme gave good agreement between the averaging-plane solution and the time-averaged unsteady solution, even though certain phenomena due to unsteady wake migration were neglected.
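Two of the averaging schemes compared above can be sketched for a discretised outflow profile. The mass-flux weighting shown is a common form of momentum/mass averaging and is an illustrative assumption, not the codes' exact formulation:

```python
import numpy as np

# Sketch of two lumped-parameter schemes for a discretised profile:
# an area average weights each sample by its area element dA, while a
# mass-flux average weights by the local mass flux rho*u*dA.

def area_average(q, dA):
    q, dA = np.asarray(q, float), np.asarray(dA, float)
    return (q * dA).sum() / dA.sum()

def mass_average(q, rho, u, dA):
    q, rho, u, dA = map(lambda a: np.asarray(a, float), (q, rho, u, dA))
    flux = rho * u * dA                   # mass flux through each element
    return (q * flux).sum() / flux.sum()
```

In a wake, low-velocity fluid carries little mass flux and so counts less in the mass average; the two schemes therefore generally disagree for nonuniform profiles, which is why the choice of scheme affects reported loss coefficients and efficiencies.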

Wyss, M. L.; Chima, R. V.; Tweedt, D. L.

1993-01-01

387

Perception of Average Value in Multiclass Scatterplots

The visual system can make highly efficient aggregate judgements about a set of objects, with speed roughly independent of the number of objects considered. While there is a rich literature on these mechanisms and their ramifications for visual summarization tasks, this prior work rarely considers more complex tasks requiring multiple judgements over long periods of time, and has not considered certain critical aggregation types, such as the localization of the mean value of a set of points. In this paper, we explore these questions using a common visualization task as a case study: relative mean value judgements within multi-class scatterplots. We describe how the perception literature provides a set of expected constraints on the task, and evaluate these predictions with a large-scale perceptual study with crowd-sourced participants. Judgements are no harder when each set contains more points; redundant and conflicting encodings, as well as additional sets, do not strongly affect performance; and judgements are harder when using less salient encodings. These results have concrete ramifications for the design of scatterplots. PMID:24051798

Gleicher, Michael; Correll, Michael; Nothelfer, Christine; Franconeri, Steven

2014-01-01

388

A vertically averaged spectral model for tidal circulation in estuaries

A frequency dependent computer model based on the two-dimensional vertically averaged shallow-water equations is described for general purpose application in tidally dominated embayments. This model simulates the response of both tides and tidal currents to user-specified geometries and boundary conditions. The mathematical formulation and practical application of the model are discussed in detail. Salient features of the model include the ability to specify: (1) stage at the open boundaries as well as within the model grid, (2) velocities on open boundaries (river inflows and so forth), (3) spatially variable wind stress, and (4) spatially variable bottom friction. Using harmonically analyzed field data as boundary conditions, this model can be used to make real time predictions of tides and tidal currents. (USGS)

Burau, J.R.; Cheng, R.T.

1989-01-01

389

Predicting global average thermospheric temperature changes resulting from auroral heating

NASA Astrophysics Data System (ADS)

The total Poynting flux flowing into both polar hemispheres as a function of time, computed with an empirical model, is compared with measurements of neutral densities in the thermosphere at two altitudes obtained from accelerometers on the CHAMP and GRACE satellites. The Jacchia-Bowman 2008 empirical thermospheric density model (JB2008) is used to facilitate the comparison. This model calculates a background level for the “global nighttime minimum exospheric temperature,” Tc, from solar indices. Corrections to this background level due to auroral heating, ΔTc, are presently computed from the Dst index. A proxy measurement of this temperature difference, ΔTc, is obtained by matching the CHAMP and GRACE density measurements with the JB2008 model. Through the use of a differential equation, the ΔTc correction can be predicted from IMF values. The resulting calculations correlate very well with the orbit-averaged measurements of ΔTc, and correlate better than the values derived from Dst. Results indicate that the thermosphere cools faster following time periods with greater ionospheric heating. The enhanced cooling is likely due to nitric oxide (NO) that is produced at a higher rate in proportion to the ionospheric heating, and this effect is simulated in the differential equations. As the ΔTc temperature correction from this model can be used as a direct substitute for the Dst-derived correction that is now used in JB2008, it could be possible to predict ΔTc with greater accuracy and lead time.

Weimer, D. R.; Bowman, B. R.; Sutton, E. K.; Tobiska, W. K.

2011-01-01

390

Background: The South African National Health Laboratory Service (NHLS) responded to HIV treatment initiatives with two-tiered CD4 laboratory services in 2004. Increasing programmatic burden, as more patients access anti-retroviral therapy (ART), has demanded extending CD4 services to meet increasing clinical needs. The aim of this study was to review existing services and develop a service model that integrated laboratory-based and point-of-care testing (POCT), to extend national coverage, improve local turn-around time (TAT) and contain programmatic costs. Methods: NHLS Corporate Data Warehouse CD4 data from 60–70 laboratories and 4756 referring health facilities were reviewed for referral laboratory workload, respective referring facility volumes and related TAT, from 2009–2012. Results: An integrated tiered service delivery model (ITSDM) is proposed. Tier-1/POCT delivers CD4 testing at single health clinics providing ART in hard-to-reach areas (<5 samples/day). Laboratory-based testing is extended with Tier-2/POC-Hubs (processing ~30–40 CD4 samples/day), consolidating POCT across 8–10 health clinics with other HIV-related testing, and Tier-3/‘community’ laboratories, serving ~40 health clinics and processing ~150 samples/day. Existing Tier-4/‘regional’ laboratories serve ~100 facilities and process <350 samples/day; Tier-5 are high-volume ‘metro’/centralized laboratories (>350–1500 tests/day, serving ~200 health clinics). Tier-6 provides national support for standardisation, harmonization and quality across the organization. Conclusion: The ITSDM offers improved local TAT by extending CD4 services into rural/remote areas with new Tier-3 or Tier-2/POC-Hub services installed in existing community laboratories, most with developed infrastructure. The advantage of lower laboratory CD4 costs and use of existing infrastructure enables subsidization of the delivery of more expensive POC services into hard-to-reach districts without reasonable access to a local CD4 laboratory. Full ITSDM implementation across 5 service tiers (as opposed to widespread implementation of POC testing to extend service) can facilitate sustainable ‘full service coverage’ across South Africa, and save more than R125 million in HIV/AIDS programmatic costs. ITSDM hierarchical parental support also assures laboratory/POC management, equipment maintenance, quality control and on-going training between tiers. PMID:25490718

Glencross, Deborah K.; Coetzee, Lindi M.; Cassim, Naseem

2014-01-01

391

7 CFR 51.2548 - Average moisture content determination.

Code of Federal Regulations, 2010 CFR

...VEGETABLES AND OTHER PRODUCTS 1,2 (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Grades of Pistachio Nuts in the Shell § 51.2548 Average moisture content determination. (a) Determining average moisture content...

2010-01-01

392

7 CFR 51.2548 - Average moisture content determination.

Code of Federal Regulations, 2012 CFR

...VEGETABLES AND OTHER PRODUCTS 1,2 (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Grades of Pistachio Nuts in the Shell § 51.2548 Average moisture content determination. (a) Determining average moisture content...

2012-01-01

393

7 CFR 51.2561 - Average moisture content.

Code of Federal Regulations, 2013 CFR

...VEGETABLES AND OTHER PRODUCTS 1,2 (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a...

2013-01-01

395

7 CFR 51.2548 - Average moisture content determination.

Code of Federal Regulations, 2013 CFR

...VEGETABLES AND OTHER PRODUCTS 1,2 (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Grades of Pistachio Nuts in the Shell § 51.2548 Average moisture content determination. (a) Determining average moisture content...

2013-01-01

397

7 CFR 51.2548 - Average moisture content determination.

Code of Federal Regulations, 2011 CFR

2011-01-01

398

7 CFR 51.2561 - Average moisture content.

Code of Federal Regulations, 2010 CFR

...VEGETABLES AND OTHER PRODUCTS 1,2 (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a...

2010-01-01

399

7 CFR 51.2561 - Average moisture content.

Code of Federal Regulations, 2011 CFR

...VEGETABLES AND OTHER PRODUCTS 1,2 (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a...

2011-01-01

400

7 CFR 51.2561 - Average moisture content.

Code of Federal Regulations, 2012 CFR

2012-01-01

401

18 CFR 301.7 - Average System Cost methodology functionalization.

Code of Federal Regulations, 2010 CFR

...2010-04-01 false Average System Cost methodology functionalization...and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL...MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST METHODOLOGY FOR...

2010-04-01

402

Turbulent fluid motion IV-averages, Reynolds decomposition, and the closure problem

NASA Technical Reports Server (NTRS)

Ensemble, time, and space averages as applied to turbulent quantities are discussed, and pertinent properties of the averages are obtained. Those properties, together with Reynolds decomposition, are used to derive the averaged equations of motion and the one- and two-point moment or correlation equations. The terms in the various equations are interpreted. The closure problem of the averaged equations is discussed, and possible closure schemes are considered. Those schemes usually require an input of supplemental information unless the averaged equations are closed by calculating their terms by a numerical solution of the original unaveraged equations. The law of the wall for velocities and temperatures, the velocity- and temperature-defect laws, and the logarithmic laws for velocities and temperatures are derived. Various notions of randomness and their relation to turbulence are considered in light of ergodic theory.
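
The ensemble averaging and Reynolds decomposition discussed here are easy to demonstrate on synthetic data. A minimal sketch (all signals and numbers below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ensemble of velocity records: a mean flow plus random fluctuations,
# with v built to correlate with u so the one-point moment <u'v'> is nonzero.
n_ens, n_t = 500, 200
u = 2.0 + rng.normal(0.0, 0.3, size=(n_ens, n_t))
v = 0.5 * (u - 2.0) + rng.normal(0.0, 0.1, size=(n_ens, n_t))

# Reynolds decomposition u = <u> + u', with <.> the ensemble average.
u_bar, v_bar = u.mean(axis=0), v.mean(axis=0)
u_p, v_p = u - u_bar, v - v_bar

# One-point correlation (a Reynolds-stress component), ensemble averaged.
uv_stress = (u_p * v_p).mean(axis=0)
```

By construction the moment <u'v'> comes out near 0.5 × 0.3² = 0.045; for a stationary, ergodic record a time average would estimate the same quantity, which is the connection to ergodic theory the abstract mentions.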

Deissler, Robert G.

1992-01-01

403

Stroboscopic averaging for the nonlinear Schrödinger equation

Stroboscopic averaging for the nonlinear Schrödinger equation. F. Castella, Ph. Chartier, F. Méhats... with an averaging procedure, namely stroboscopic averaging, for highly-oscillatory evolution equations posed... Keywords: highly-oscillatory evolution equation, stroboscopic averaging, Hamiltonian PDEs, invariants, nonlinear Schrödinger, SAM.

Murua, Ander

404

Stroboscopic averaging for the nonlinear Schrödinger equation

Stroboscopic averaging for the nonlinear Schrödinger equation. F. Castella, Ph. Chartier, F. Méhats... namely stroboscopic averaging [SVM07, CMSS10], for highly-oscillatory evolution equations posed... Keywords: highly-oscillatory evolution equation, stroboscopic averaging, Hamiltonian PDEs, invariants, nonlinear Schrödinger.

Paris-Sud XI, Université de

405

Stroboscopic averaging for the nonlinear Schrödinger equation

Stroboscopic averaging for the nonlinear Schrödinger equation. F. Castella, Ph. Chartier, F. Méhats... namely stroboscopic averaging [SVM07, CMSS10], for highly-oscillatory evolution equations posed... the recent literature. Keywords: highly-oscillatory evolution equation, stroboscopic averaging, Hamiltonian

Chartier, Philippe

406

The Invisibility of the "Average" Child in the Early Years.

ERIC Educational Resources Information Center

Explores teachers' tendency to adopt the "average" student as a reference point on personal and interpersonal dimensions of classroom life. Suggests that while reliance on this average might seem a solution to the problems of organizing learning for large groups, at a personal level, average children may be experiencing school life as a mere…

Waterhouse, Stephen

1995-01-01

407

78 FR 49770 - Annual Determination of Average Cost of Incarceration

Federal Register 2010, 2011, 2012, 2013

...Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice...The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year...a fee to cover the average cost of incarceration for Federal inmates. We...

2013-08-15

408

76 FR 57081 - Annual Determination of Average Cost of Incarceration

Federal Register 2010, 2011, 2012, 2013

...Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice...The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year...a fee to cover the average cost of incarceration for Federal inmates. We...

2011-09-15

409

78 FR 16711 - Annual Determination of Average Cost of Incarceration

Federal Register 2010, 2011, 2012, 2013

...Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice...The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year...a fee to cover the average cost of incarceration for Federal inmates. We...

2013-03-18

410

76 FR 6161 - Annual Determination of Average Cost of Incarceration

Federal Register 2010, 2011, 2012, 2013

2011-02-03

411

Total Reward Stochastic Games and Sensitive Average Reward Strategies

In this paper, total reward stochastic games are surveyed. Total reward games are motivated as a refinement of average reward games. The total reward is defined as the limiting average of the partial sums of the stream of payoffs. It is shown that total reward games with finite state space are strategically equivalent to a class of average reward games
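
The total reward defined above, the limiting average (Cesàro mean) of the partial payoff sums, can be approximated on a finite payoff stream (the function name below is ours):

```python
import numpy as np

def total_reward(rewards):
    """Average of the partial sums S_n = r_1 + ... + r_n over the stream:
    a finite-horizon approximation of the limiting (Cesaro) average."""
    return np.cumsum(rewards).mean()
```

The alternating stream 1, -1, 1, -1, ... has average reward 0 but partial sums 1, 0, 1, 0, ..., so its total reward is 1/2; this is the sense in which total reward refines the average-reward criterion.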

F. Thuijsman; O. J. Vrieze

1998-01-01

412

ERIC Educational Resources Information Center

This study was undertaken to determine whether the grade point averages of students at Santa Ana College (SAC) have been increasing over time, and, with an increasing number of work-experience students, whether the work experience grade distribution has had any effect on the grade distribution of the entire college. Results indicate that SAC's…

Daly, Bob

413

A Rapid Turnaround Cryogenic Detector Characterization System

NASA Technical Reports Server (NTRS)

Upcoming major NASA missions such as the Einstein Inflation Probe and the Single Aperture Far-Infrared Observatory require arrays of detectors with thousands of elements, operating at temperatures near 100 mK and sensitive to wavelengths from approx. 100 microns to approx. 3 mm. Such detectors represent a substantial enabling technology for these missions, and must be demonstrated soon in order for them to proceed. In order to make rapid progress on detector development, the cryogenic testing cycle must be made convenient and quick. We have developed a cryogenic detector characterization system capable of testing superconducting detector arrays in formats up to 8 x 32, read out by SQUID multiplexers. The system relies on the cooling of a two-stage adiabatic demagnetization refrigerator immersed in a liquid helium bath. This approach permits a detector to be cooled from 300 K to 50 mK in about 4 hours, so that a test cycle begun in the morning will be over by the end of the day. The system is modular, with two identical immersible units, so that while one unit is cooling, the second can be reconfigured for the next battery of tests. We describe the design, construction, and performance of this cryogenic detector testing facility.

Benford, Dominic J.; Dipirro, Michael J.; Forgione, Joshua B.; Jackson, Clifton E.; Jackson, Michael L.; Kogut, Al; Moseley, S. Harvey; Shirron, Peter J.

2004-01-01

414

Turnaround School: Embracing Diversity while Recognizing Giftedness

ERIC Educational Resources Information Center

In 2005, Lafayette Elementary School's enrollment and standardized test scores were declining. Although the school was known for its fine programs for special needs students and English-language learners, and its diverse population offered many special opportunities to celebrate other cultures and languages, it was not making strides toward…

Hultgren, Holly

2009-01-01

415

Kaiser's turnaround: hope for other nonprofits?

After the heady mid-1990s rush of lower premiums, most managed care systems tumbled into a well of red ink. Those hit especially hard were nonprofit MCOs, which have a limited ability to raise capital compared with their for-profit competitors. However, Kaiser Permanente has reported its first quarterly surplus since 1997, and this has given hope to other nonprofits that they too will soon return to the black. Is this hope justified? Is Kaiser's medicine the right treatment for other integrated systems? PMID:10621094

Pope, T

1999-08-01

416

Depth-averaged turbulence k̄–w̄ model and applications

This paper presents the developments and applications of a turbulence depth-averaged (depth-integrated) two-equation closure model, symbolised as the k̄–w̄ model (k̄: depth-averaged turbulent kinetic energy parameter; w̄: depth-averaged vorticity fluctuation parameter of turbulence). The k̄–w̄ model, different from the well-known turbulence depth-averaged k̄–ε̄ model (ε̄: depth-averaged dissipation rate parameter of turbulent kinetic energy), stemmed from the revised version of the k–w model (k: turbulent

L. Yu; A. M. Righetto

2001-01-01

417

Lee-Yang Theorems and the Complexity of Computing Averages

NASA Astrophysics Data System (ADS)

We study the complexity of computing average quantities related to spin systems, such as the mean magnetization and susceptibility in the ferromagnetic Ising model, and the average dimer count (or average size of a matching) in the monomer-dimer model. By establishing connections between the complexity of computing these averages and the location of the complex zeros of the partition function, we show that these averages are #P-hard to compute, and hence, under standard assumptions, computationally intractable. In the case of the Ising model, our approach requires us to prove an extension of the famous Lee-Yang Theorem from the 1950s.

Sinclair, Alistair; Srivastava, Piyush

2014-08-01

418

Robust Morphological Averages in Three Dimensions for Anatomical Atlas Construction

NASA Astrophysics Data System (ADS)

We present original methods for obtaining robust, anatomical shape-based averages of features of the human head anatomy from a normal population. Our goals are computerized atlas construction with representative anatomical features and morphometry for specific populations. A method for true morphological averaging is proposed, consisting of a suitable blend of shape-related information for N objects to obtain a progressive average. It is made robust by penalizing, in a morphological sense, the contributions of features less similar to the current average. Morphological error and similarity, as well as penalization, are based on the same paradigm as the morphological averaging.

Márquez, Jorge; Bloch, Isabelle; Schmitt, Francis

2004-09-01

419

Time spectral method for rotorcraft flow with vorticity confinement

NASA Astrophysics Data System (ADS)

This thesis shows that simulation of helicopter flows can achieve engineering accuracy without massive computing resources or long turnaround time by choosing an alternative framework for rotorcraft simulation. The method works in both hovering and forward flight regimes, and has been shown to be more computationally efficient while remaining sufficiently accurate. By exploiting the periodic nature of the rotorcraft flow field, the Fourier-based Time Spectral method lends itself to the problem and significantly increases the rate of convergence compared to traditional implicit time integration schemes such as the second-order backward difference formula (BDF). A Vorticity Confinement method has been explored and shown to work well in subsonic and transonic simulations. Vortical structure can be maintained over long distances without resorting to the traditional mesh refinement technique.

Butsuntorn, Nawee

2008-10-01

420

A frequency averaging framework for the solution of complex dynamic systems

A frequency averaging framework is proposed for the solution of complex linear dynamic systems. It is remarkable that, while the mid-frequency region is usually very challenging, a smooth transition from low- through mid- and high-frequency ranges is possible and all ranges can now be considered in a single framework. An interpretation of the frequency averaging in the time domain is presented and it is explained that the average may be evaluated very efficiently in terms of system solutions. PMID:24910518

Lecomte, Christophe

2014-01-01

421

We compare improvements in signal-to-noise in continuous-wave electron paramagnetic resonance (CW EPR) spectra resulting from adaptive signal averaging and a simple moving average. An adaptive filter module that uses a recursive least-squares (RLS) algorithm was incorporated into a CW EPR data acquisition program. After optimization, the RLS filter produces a significant improvement in the signal-to-noise ratio over conventional digital signal (spectral) averaging alone. However, conventional averaging of spectra combined with a central moving average of the data points provided equal or greater signal-to-noise improvement in the CW EPR spectra. PMID:20192519
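
The central moving average the authors found competitive is straightforward to sketch (the window length and edge normalization below are our choices, not the paper's):

```python
import numpy as np

def central_moving_average(y, window):
    """Centered moving average with an odd window length.
    Edge points are averaged over the part of the window that overlaps
    the record, so the output has the same length as the input."""
    if window % 2 == 0:
        raise ValueError("window must be odd for a centered average")
    kernel = np.ones(window)
    smoothed = np.convolve(y, kernel, mode="same")
    counts = np.convolve(np.ones(len(y)), kernel, mode="same")
    return smoothed / counts
```

For white noise, averaging w points improves the signal-to-noise ratio by roughly sqrt(w), provided the spectral features are broad compared with the window.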

Brinton, Christopher G; Hirsh, Donald J

2010-02-01

422

NASA Astrophysics Data System (ADS)

We compare improvements in signal-to-noise in continuous-wave electron paramagnetic resonance (CW EPR) spectra resulting from adaptive signal averaging and a simple moving average. An adaptive filter module that uses a recursive least-squares (RLS) algorithm was incorporated into a CW EPR data acquisition program. After optimization, the RLS filter produces a significant improvement in the signal-to-noise ratio over conventional digital signal (spectral) averaging alone. However, conventional averaging of spectra combined with a central moving average of the data points provided equal or greater signal-to-noise improvement in the CW EPR spectra.

Brinton, Christopher G.; Hirsh, Donald J.

2010-02-01

423

Water Molecule Residence Times

NSDL National Science Digital Library

How long will a molecule of water stay in a particular reservoir? What is the average time a molecule of water will stay in an ocean? What is the average time a molecule of water will stay in a river? A lake? As groundwater? A glacier? How long will a water vapor molecule stay suspended in the atmosphere? Why is the residence ...
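
For a well-mixed reservoir, the mean residence time asked about here is simply volume divided by throughput flux, T = V/Q. A worked example with rough, commonly quoted textbook figures (the numbers below are approximations for illustration, not authoritative data):

```python
# Mean residence time T = V / Q for a well-mixed reservoir.
ocean_volume_km3 = 1.34e9          # approximate global ocean volume
evaporation_km3_per_yr = 4.2e5     # approximate annual evaporation flux
ocean_residence_yr = ocean_volume_km3 / evaporation_km3_per_yr   # thousands of years

atm_vapor_km3 = 1.3e4              # approximate water vapor held in the atmosphere
precip_km3_per_yr = 5.0e5          # approximate annual global precipitation
atm_residence_days = atm_vapor_km3 / precip_km3_per_yr * 365.0   # on the order of days
```

The contrast between the two results, millennia for the ocean versus about a week for atmospheric vapor, is the point of the residence-time question.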

Science, Sill -.

2010-11-16

424

Advances in high-speed hardware have motivated studies of software techniques that exploit parallel capabilities. This thesis examines the effects a run-time scheduler has on a multiprocessor. The model consists of directed, acyclic graphs, generated from serial FORTRAN benchmark programs by the parallel compiler Parafrase. A multitasked, multiprogrammed environment is created. Dependencies are generated by the compiler. Tasks are bidimensional, i.e., they may specify both time and processor requests. Processor requests may be folded into execution time by the scheduler. The graphs may arrive at arbitrary time intervals. The general case is NP-hard; thus, a variety of heuristics are examined by a simulator. Multiprogramming demonstrates a greater need for a run-time scheduler than does monoprogramming for a variety of reasons, e.g., greater stress on the processors, a larger number of independent control paths, and more variety in the task parameters. The dynamic critical path series of algorithms perform well; dynamic critical volume did not add much. Unfortunately, dynamic critical path maximizes turnaround time as well as throughput. Two schedulers are presented which balance throughput and turnaround time. The first requires classification of jobs by type; the second requires selection of a ratio value which is dependent upon system parameters. 45 refs., 19 figs., 20 tabs.
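
Dynamic critical path schedulers of the kind examined here typically rank ready tasks by their "bottom level": the longest remaining execution-time path to a DAG exit. A minimal sketch of that priority computation (names and details are ours; the thesis's exact heuristics are not reproduced):

```python
def critical_path_priority(succ, exec_time):
    """Bottom-level of each task in a DAG: its own execution time plus the
    largest bottom-level among its successors. A list scheduler would
    dispatch the ready task with the largest value first.
    succ: task -> list of successor tasks; exec_time: task -> duration."""
    memo = {}
    def blevel(t):
        if t not in memo:
            memo[t] = exec_time[t] + max((blevel(s) for s in succ[t]), default=0)
        return memo[t]
    return {t: blevel(t) for t in succ}
```

On the diamond graph A -> {B, C} -> D with durations 2, 3, 1, 2, the priorities come out 7, 5, 3, 2, so B is preferred to C when both are ready.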

Miller, Allan Ray

1987-05-01

425

On the high correlation between long-term averages of solar wind speed and geomagnetic activity

NASA Technical Reports Server (NTRS)

Six-month and yearly averages of solar-wind speed from 1962 to 1975 are shown to be highly correlated with geomagnetic activity as measured by averages of the Ap index. On the same time scale the correlation between the southward component of the interplanetary magnetic field and geomagnetic activity is poor. Previous studies with hourly averages gave opposite results. The better correlation with the southward component on an hourly time scale is explained by its large variation compared with the relatively constant solar-wind speed. However, on a yearly time scale the magnitude of the variations in both parameters are about the same. This problem can be solved by invoking an energy transfer mechanism which is proportional to the first power of the southward component and a higher power of the solar-wind speed.

Crooker, N. U.; Feynman, J.; Gosling, J. T.

1977-01-01

426

Cycle-averaged dynamics of a periodically driven, closed-loop circulation model

NASA Technical Reports Server (NTRS)

Time-varying elastance models have been used extensively in the past to simulate the pulsatile nature of cardiovascular waveforms. Frequently, however, one is interested in dynamics that occur over longer time scales, in which case a detailed simulation of each cardiac contraction becomes computationally burdensome. In this paper, we apply circuit-averaging techniques to a periodically driven, closed-loop, three-compartment recirculation model. The resultant cycle-averaged model is linear and time invariant, and greatly reduces the computational burden. It is also amenable to systematic order reduction methods that lead to further efficiencies. Despite its simplicity, the averaged model captures the dynamics relevant to the representation of a range of cardiovascular reflex mechanisms. © 2004 Elsevier Ltd. All rights reserved.

Heldt, T.; Chang, J. L.; Chen, J. J. S.; Verghese, G. C.; Mark, R. G.

2005-01-01

427

Discrete time techniques for time delay estimation

Basic aspects of time delay estimation (TDE) based on sampled signals are investigated. The direct cross-correlation method is analyzed and compared to the average square difference function (ASDF) and the (addition-only based) average magnitude difference function (AMDF) estimators. Their relative accuracy is theoretically evaluated, and previous empirical results are explained. It is shown that both the ASDF- and the
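
The three estimators compared in this abstract can be sketched for circularly shifted sampled signals (a toy reconstruction in our own code; the paper's contribution is the accuracy analysis, not this implementation):

```python
import numpy as np

def estimate_delay(x, y, max_lag):
    """Pick the delay of y relative to x by three criteria:
    direct cross-correlation (maximized), average square difference
    function ASDF (minimized), and average magnitude difference function
    AMDF (minimized, computable with additions/subtractions only)."""
    lags = np.arange(-max_lag, max_lag + 1)
    cc   = np.array([np.mean(x * np.roll(y, -d)) for d in lags])
    asdf = np.array([np.mean((x - np.roll(y, -d)) ** 2) for d in lags])
    amdf = np.array([np.mean(np.abs(x - np.roll(y, -d))) for d in lags])
    return lags[cc.argmax()], lags[asdf.argmin()], lags[amdf.argmin()]
```

On clean signals all three criteria agree; their differing behavior under noise is what the theoretical comparison addresses.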

Giovanni Jacovitti; Gaetano Scarano

1993-01-01

428

Orbit-averaged guiding-center Fokker-Planck operator

A general orbit-averaged guiding-center Fokker-Planck operator suitable for the numerical analysis of transport processes in axisymmetric magnetized plasmas is presented. The orbit-averaged guiding-center operator describes transport processes in a three-dimensional guiding-center invariant space: the orbit-averaged magnetic-flux invariant ψ, the minimum-B pitch-angle coordinate ξ₀, and the momentum magnitude p.

Brizard, A. J.; Decker, J.; Peysson, Y.; Duthoit, F.-X.

2009-10-15

429

An average interstellar extinction curve for the Large Magellanic Cloud

NASA Technical Reports Server (NTRS)

LMC regions located more than 0.5 deg from the center of the 30 Doradus nebula have an average UV extinction curve that is about 2 units higher in the far UV than the average Galactic curve. Within the 30 Doradus region, the normalized far-UV extinction is much steeper than the average Galactic or average LMC curves, yet has been considered to be representative of the LMC in general in the past. The present results show typical LMC extinction properties to have much more in common with those in the Milky Way.

Fitzpatrick, Edward L.

1986-01-01

430

Averaging Theory for Description of Environmental Problems: What Have We Learned?

Advances in Water Resources has been a prime archival source for implementation of averaging theories in changing the scale at which processes of importance in environmental modeling are described. Thus in celebration of the 35th year of this journal, it seems appropriate to assess what has been learned about these theories and about their utility in describing systems of interest. We review advances in understanding and use of averaging theories to describe porous medium flow and transport at the macroscale, an averaged scale that models spatial variability, and at the megascale, an integral scale that only considers time variation of system properties. We detail physical insights gained from the development and application of averaging theory for flow through porous medium systems and for the behavior of solids at the macroscale. We show the relationship between standard models that are typically applied and more rigorous models that are derived using modern averaging theory. We discuss how the results derived from averaging theory that are available can be built upon and applied broadly within the community. We highlight opportunities and needs that exist for collaborations among theorists, numerical analysts, and experimentalists to advance the new classes of models that have been derived. Lastly, we comment on averaging developments for rivers, estuaries, and watersheds. PMID:23393409

Miller, Cass T.; Schrefler, Bernhard A.

2012-01-01

431

The average power performance capability of semiconductor diode laser arrays has improved dramatically over the past several years. These performance improvements, combined with cost reductions pursued by LLNL and others in the fabrication and packaging of diode lasers, have continued to reduce the price per average watt of laser diode radiation. Presently, we are at the point where the manufacturers of commercial high average power solid state laser systems used in material processing applications can now seriously consider the replacement of their flashlamp pumps with laser diode pump sources. Additionally, a low cost technique developed and demonstrated at LLNL for optically conditioning the output radiation of diode laser arrays has enabled a new and scalable average power diode-end-pumping architecture that can be simply implemented in diode pumped solid state laser systems (DPSSLs). This development allows the high average power DPSSL designer to look beyond the Nd ion for the first time. Along with high average power DPSSLs, which are appropriate for material processing applications, low and intermediate average power DPSSLs are now realizable at low enough costs to be attractive for use in many medical, electronic, and lithographic applications.

Beach, R.; Emanuel, M.; Benett, W.; Freitas, B.; Ciarlo, D.; Carlson, N.; Sutton, S.; Skidmore, J.; Solarz, R.

1994-01-01

432

Extended Averaged Learning Subspace Method for Hyperspectral Data Classification

Averaged learning subspace methods (ALSM) have the advantage of being easily implemented and appear to outperform in classification problems of hyperspectral images. However, there remain some open and challenging problems, which if addressed, could further improve their performance in terms of classification accuracy. We carried out experiments mainly by using two kinds of improved subspace methods (namely, dynamic and fixed subspace methods), in conjunction with the [0,1] and [-1,+1] normalization methods. We used different performance indicators to support our experimental studies: classification accuracy, computation time, and the stability of the parameter settings. Results are presented for the AVIRIS Indian Pines data set. Experimental analysis showed that the fixed subspace method combined with the [0,1] normalization method yielded higher classification accuracy than other subspace methods. Moreover, ALSMs are easily applied: only two parameters need to be set, and they can be applied directly to hyperspectral data. In addition, they can completely identify training samples in a finite number of iterations. PMID:22408524

Bagan, Hasi; Takeuchi, Wataru; Yamagata, Yoshiki; Wang, Xiaohui; Yasuoka, Yoshifumi

2009-01-01

433

Reach-averaged sediment routing model of a canyon river

Spatial complexity in channel geometry indicates that accurate prediction of sediment transport requires modeling in at least two dimensions. However, a one-dimensional model may be the only practical or possible alternative, especially for longer river reaches of practical concern in river management or landscape modeling. We have developed a one-dimensional model of the Colorado River through upper Grand Canyon that addresses this problem by reach averaging the channel properties and predicting changes in sand storage using separate source and sink functions coupled to the sand routing model. The model incorporates results from the application of a two-dimensional model of flow, sand transport, and bed evolution, and a new algorithm for setting the near-bed sand boundary condition for sand transported over an exposed bouldery bed. Model predictions were compared to measurements of sand discharge during intermittent tributary inputs and varying discharges controlled by dam releases. The model predictions generally agree well with the timing and magnitude of measured sand discharges but tend to overpredict sand discharge during the early stages of a high release designed to redistribute sand to higher-elevation deposits.

Wiele, S.M.; Wilcock, P.R.; Grams, P.E.

2007-01-01

434

Power averaging and inverse smoothing: implications for porous media flow

NASA Astrophysics Data System (ADS)

An extremely large data set of Venus elevation is used to study the behavior of spatial statistics (e.g., variance) under power averaging (e.g., geometric averaging over blocks). It is shown that a model of univariate and bivariate Gaussianity can be adopted for this data set. Multivariate Gaussianity, however, is clearly disproved by comparing the multiple-point statistics of high and low indicator values with those from the multivariate Gaussian distribution. Power averaging is performed on the data set over increasing block sizes using a wide power parameter range going from -40 to +40. The spatial variance of each of these power averages is plotted against increasing averaging volume. Geostatistical theory predicts a decreasing trend for the variance of linear averages; this is the classical smoothing effect. However, for this elevation data set, the variance of power averages shows a significant non-decreasing trend for a range of power values from -4 to -0.5. The cause behind such "inverse smoothing" is linked to the spatial clustering of extreme values. Note that the block-effective permeability of porous media lies, in general (but not necessarily), within the two limiting power averages -1 (harmonic average) and +1 (linear average). Since the observed power range for inverse smoothing overlaps this interval, the consequences of inverse smoothing for permeability averaging are discussed. This inverse smoothing arises because of spatial clustering of extreme values in non-Gaussian and finite-sized fields. It therefore counsels caution for techniques resorting to such assumptions, which simplify the analytical and numerical computation of effective permeability, but possibly at some unexpected risk.
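
The power averages in question are generalized (power) means over blocks; a minimal implementation for positive values (our own sketch, not the study's code):

```python
import numpy as np

def power_average(values, w):
    """Power mean m_w = (mean(values**w))**(1/w) for w != 0, with the
    geometric mean as the w -> 0 limit. w = -1 gives the harmonic mean
    and w = +1 the arithmetic mean, the two bounds usually quoted for
    block-effective permeability."""
    v = np.asarray(values, dtype=float)
    if w == 0:
        return float(np.exp(np.mean(np.log(v))))
    return float(np.mean(v ** w) ** (1.0 / w))
```

The power mean is non-decreasing in w, so scanning w from -40 to +40 as in the study sweeps monotonically from near the minimum block value to near the maximum.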

Krishnan, S.; Journel, A.

2004-12-01

435

Stroboscopic averaging for the nonlinear Schrödinger equation

Stroboscopic averaging for the nonlinear Schrödinger equation. F. Castella, Ph. Chartier, F. Méhats... stroboscopic averaging, for highly-oscillatory evolution equations posed in a (possibly infinite dimensional... borrowed from the recent literature. Keywords: highly-oscillatory evolution equation, stroboscopic

Méhats, Florian

436

Numerical stroboscopic averaging for ODEs and M. P. Calvo

Numerical stroboscopic averaging for ODEs and DAEs M. P. Calvo, Ph. Chartier, A. Murua, and J. M. Sanz-Serna. November 2, 2010. Abstract: The stroboscopic averaging method (SAM) is a technique. The method may be seen as a purely numerical way of implementing the analytical technique of stroboscopic

Murua, Ander

437

Numerical stroboscopic averaging for ODEs and M. P. Calvo,

Numerical stroboscopic averaging for ODEs and DAEs M. P. Calvo, Ph. Chartier, A. Murua, and J. M. Sanz-Serna. October 29, 2010. Abstract: The stroboscopic averaging method (SAM) is a technique. The method may be seen as a purely numerical way of implementing the analytical technique of stroboscopic

Boyer, Edmond

438

Object detection by correlation coefficients using azimuthally averaged reference projections

A method of computing correlation coefficients for object detection that takes advantage of using azimuthally averaged reference projections is described and compared with two alternative methods-computing a cross-correlation function or a local correlation coefficient versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs
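The core operation in this record, azimuthal averaging of a projection, can be illustrated briefly. The function names below are hypothetical, not from the paper; the sketch bins image pixels by integer radius about the centre and correlates the resulting radial profile against a reference profile.

```python
import numpy as np

def azimuthal_average(img):
    # Mean intensity in integer-radius bins about the image centre.
    ny, nx = img.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - (nx - 1) / 2.0, y - (ny - 1) / 2.0).astype(int)
    sums = np.bincount(r.ravel(), weights=img.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts

def profile_correlation(patch, reference_profile):
    # Correlation coefficient between a patch's radial profile and a
    # reference radial profile (a simplified stand-in for the paper's scheme).
    prof = azimuthal_average(patch)
    m = min(len(prof), len(reference_profile))
    return float(np.corrcoef(prof[:m], reference_profile[:m])[0, 1])
```

Scanning `profile_correlation` over candidate positions in a micrograph gives a rotation-insensitive detection score, which is the advantage azimuthally averaged references offer.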

William V. Nicholson

2004-01-01

439

BAYESIAN MODEL AVERAGING AND MODEL SELECTION FOR MARKOV EQUIVALENCE

BAYESIAN MODEL AVERAGING AND MODEL SELECTION FOR MARKOV EQUIVALENCE CLASSES OF ACYCLIC DIGRAPHS uncertainty; model averaging; Markov equivalence; Markov chain Monte Carlo. ABSTRACT Acyclic digraphs (ADGs by the graph. The recent books by Whittaker (1990) and Lauritzen (1996) conveniently summarize the statistical

Volinsky, Chris

440

Average Fill Rate And Horizon Length Arunava Banerjee

Average Fill Rate And Horizon Length Arunava Banerjee, Anand Paul. September 2004. Abstract: Average fill rate is monotonically decreasing in the number of periods in the planning horizon; this was conjectured to be true in a recent issue of this journal. Keywords: Inventory theory, finite horizon, fill

Banerjee, Arunava

441

The Number "c" in Cauchy's Average Value Theorem. Classroom Notes

ERIC Educational Resources Information Center

The location of the number "c" arising from Cauchy's Average Value Theorem is described when the size of the interval is small. This article discusses various generalizations of theorem 1, to the context of Cauchy's Average Value Theorem--but without appealing to theorem 1. Obviously, hypotheses involving the functions "f" and "g" will be…

Mercer, Peter R.

2004-01-01

442

14-Day Boxcar averaged Terra-CERES (Outgoing Longwave Radiation)

NSDL National Science Digital Library

This animation displays one year of Outgoing Longwave Radiation (OLR) Terra-CERES data (March 1, 2000 to May 25, 2001) with a 14-day boxcar average. Endpoints have the average re-weighted for the smaller amount of data. The data are 2.5 degree resolution.
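The smoothing described above, a boxcar average with endpoints re-weighted for the smaller number of available samples, can be sketched directly. This is a plausible reading of the record's description, not the animation's actual code; the function name is mine.

```python
import numpy as np

def boxcar_average(x, width=14):
    # Running mean over a fixed window; near the ends of the series the sum
    # is divided by the number of samples actually inside the window, which
    # is the endpoint re-weighting the record describes.
    x = np.asarray(x, dtype=float)
    kernel = np.ones(width)
    sums = np.convolve(x, kernel, mode="same")
    counts = np.convolve(np.ones_like(x), kernel, mode="same")
    return sums / counts
```

Dividing by the per-point count rather than by the nominal window width keeps the endpoints unbiased instead of pulled toward zero.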

Tom Bridgman

2001-06-20

443

Renormalon Subtraction from the Average Plaquette and the Gluon Condensate

A Borel resummation scheme of subtracting the perturbative contribution from the average plaquette is proposed using the bilocal expansion of Borel transform. It is shown that the remnant of the average plaquette, after subtraction of the perturbative contribution, scales as a dim-4 condensate. A critical review of the existing procedure of renormalon subtraction is presented.

Taekoon Lee

2011-01-03

444

Five-Year Average Global Temperature Anomalies 1880 - 2010

NSDL National Science Digital Library

This color-coded map displays a progression of changing five-year average global surface temperature anomalies from 1880 through 2010. The final frame represents global temperature anomalies averaged from 2006 to 2010. The temperature anomalies are computed relative to the base period 1951-1980.

Schmunk, Robert B.; Hansen, J.; Ruedy, R.; Sato, Mki; Lo, K.; Studio, Nasa/goddard S.

445

Measurement of the average lifetime of b hadrons

The average lifetime of b hadrons has been measured using the L3 detector at LEP, running at √s ≈ MZ. A b-enriched sample was obtained from 432538 hadronic Z events collected in 1990 and 1991 by tagging electrons and muons from semileptonic b hadron decays. From maximum likelihood fits to the electron and muon impact parameter distributions, the average b

O. Adriani; M. Aguilar-Benitez; S. P. Ahlen; J. Alcaraz; A. Aloisio; G. Alverson; M. G. Alviggi; G. Ambrosi; Q. An; H. Anderhub; A. L. Anderson; V. P. Andreev; T. Angelescu; L. Antonov; D. Antreasyan; P. Arce; A. Arefiev; A G Atamanchuk; T. Azemoon; T. Aziz; P. V. K. S. Baba; P. Bagnaia; J. A. Bakken; R. C. Ball; S. Banerjee; J. Bao; R. Barillère; L. Barone; A. Baschirotto; R. Battiston; A. Bay; F. Becattini; J. Bechtluft; R. Becker; U. Becker; F. Behner; J. Behrens; Gy. L. Bencze; J. Berdugo; P. Berges; B. Bertucci; B. L. Betev; M. Biasini; A. Biland; G. M. Bilei; R. Bizzarri; J. J. Blaising; Gerjan J Bobbink; R K Böck; A. Böhm; B. Borgia; M. Bosetti; D. Bourilkov; Maurice Bourquin; D. Boutigny; B T Bouwens; Elena Brambilla; J. G. Branson; I. C. Brock; M. Brooks; A T Bujak; J. D. Burger; W. J. Burger; J K Busenitz; A O Buytenhuijs; X. D. Cai; M. Capell; M. Caria; G. Carlino; A. M. Cartacci; R. Castello; M Cerrada-Canales; F. Cesaroni; Y. H. Chang; U. K. Chaturvedi; M. Chemarin; A. Chen; C. Chen; G. Chen; H. F. Chen; H. S. Chen; W. Y. Chen; G. Chiefari; C. Y. Chien; M. T. Choi; S. Chung; C. Civinini; I. Clare; R. Clare; T. E. Coan; H. O. Cohn; G. Coignet; N. Colino; A. Contin; S. Costantini; F. Cotorobai; X. T. Cui; X. Y. Cui; T. S. Dai; R. D'Alessandro; R. de Asmundis; A. Degré; K. Deiters; E. Dénes; P. Denes; F. Denotaristefani; M. Dhina; D. Dibitonto; M. Diemoz; H. R. Dimitrov; C. Dionisi; M. Ditmarr; L. Djambazov; M. T. Dova; E. Drago; D. Duchesneau; P. Duinker; I. Duran; S. Easo; H. El Mamouni; A. Engler; F. J. Eppling; F. C. Erné; P. Extermann; R. Fabbretti; M. Fabre; S. Falciano; S. J. Fan; O. Fackler; J. Fay; M. Felcini; T. Ferguson; D. Fernandez; G. Fernandez; F. Ferroni; H. Fesefeldt; E. Fiandrini; J. H. Field; F. Filthaut; P. H. Fisher; G. Forconi; L. Fredj; K. Freudenreich; W. Friebel; M. Fukushima; M. Gailloud; Yu. Galaktionov; E. Gallo; S. N. Ganguli; P. Garcia-Abia; D. Gele; S. Gentile; N. Gheordanescu; S Goldfarb; Z. F. Gong; E. Gonzalez; A. 
Gougas; D. Goujon; G. Gratta; M. Gruenewald; C. Gu; M. Guanziroli; J. K. Guo; V. K. Gupta; A. Gurtu; H. R. Gustafson; L. J. Gutay; K. Hangarter; B. Hartmann; A. Hasan; D. Hauschildt; C. F. He; J. T. He; T. Hebbeker; M. Hebert; A. Hervé; K. Hilgers; H Hoorani; G Q Hu; B. Ille; M. M. Ilyas; V. Innocente; H. Janssen; S. Jezequel; B. N. Jin; L. W. Jones; I. Josa-Mutuberria; A. Kasser; R. A. Khan; Yu. Kamyshkov; P. Kapinos; J. S. Kapustinsky; Y. Karyotakis; M. Kaur; S. Khokhar; M. N. Kienzle-Focacci; J. K. Kim; S. C. Kim; Y. G. Kim; W. W. Kinnison; A. Kirkby; D. Kirkby; S. Kirsch; W. Kittel; A. Klimentov; R. Klöckner; A. C. König; E. Koffeman; O. Kornadt; V. Koutsenko; A. Koulbardis; R. W. Kraemer; T. Kramer; V. R. Krastev; W. Krenz; A. Krivshich; H. Kuijten; K. S. Kumar; A. Kunin; G. Landi; D. Lanske; S. Lanzano; A. Lebedev; P Lecomte; P. Lecoq; P. Le Coultre; D. M. Lee; J. S. Lee; K. Y. Lee; I. Leedom; C. Leggett; J. M. Le Goff; R. Leiste; M. Lenti; E. Leonardi; C. Li; H. T. Li; P. J. Li; J. Y. Liao; W. T. Lin; Z. Y. Lin; F. L. Linde; B. Lindemann; L. Lista; Y. Liu; W. Lohmann; E. Longo; Y. S. Lu; J. M. Lubbers; K. Lübelsmeyer; C. Luci; D. Luckey; L. Ludovici; L. Luminari; W. Lustermann; J. M. Ma; W. G. Ma; M. MacDermott; R. Malik; A. Malinin; C. Maña; M. Maolinbay; P. Marchesini; F. Marion; A. Marin; J. P. Martin; L. Martinez-Laso; F. Marzano; G. G. G. Massaro; K. Mazumdar; P. McBride; T. McMahon; D. McNally; M. Merk; L. Merola; M. Meschini; W. J. Metzger; Y. Mi; A. Mihul; G. B. Mills; Y. Mir; G. Mirabelli; J. Mnich; M. Möller; B. Monteleoni; R. Morand; S. Morganti; N. E. Moulai; R. Mount; S. Müller; A. Nadtochy; E. Nagy; M. Napolitano; F. Nessi-Tedaldi; H. Newman; C. Neyer; M. A. Niaz; A. Nippe; H. Nowak; G. Organtini; D. Pandoulas; S. Paoletti; P. Paolucci; G Passaleva; S. Patricelli; T. Paul; M. Pauluzzi; C. Paus; F. Pauss; Y. J. Pei; S. Pensotti; D. Perret-Gallix; J. Perrier; A. Pevsner; D. Piccolo; M. Pieri; P. A. Piroué; F. Plasil; V. Plyaskin; M. Pohl; V. 
Pojidaev; H. Postema; Z. D. Qi; J. M. Qian; K. N. Qureshi; R. Raghavan; G. Rahal-Callot; P. G. Rancoita; M. Rattaggi; G. Raven; P. Razis; K. Read; D. Ren; Z. Ren; M. Rescigno; S. Reucroft; A. Ricker; S. Riemann; B. C. Riemers; K. Riles; O. Rind; H. A. Rizvi; S. Ro; F. J. Rodriguez; B. P. Roe; M. Röhner; L. Romero; S. Rosier-Lees; R. Rosmalen; Ph. Rosselet; W. van Rossum; S. Roth; A. Rubbia; J. A. Rubio; H. Rykaczewski; M. Sachwitz; J M Salicio; G. S. Sanders; A. Santocchia; M. S. Sarakinos; G. Sartorelli; M. Sassowsky; G. Sauvage; V. Schegelsky; D. Schmitz; P. Schmitz; M. Schneegans; H. Schopper; D. J. Schotanus; S. Shotkin; H. J. Schreiber; J. Shukla; R. Schulte; S. Schulte; K. Schultze; J. Schwenke; G. Schwering; C. Sciacca; I. Scott; R. Sehgal; P. G. Seiler; J. C. Sens; L. Servoli; I. Sheer; D. Z. Shen; S. Shevchenko; X. R. Shi; E. Shumilov; V. Shoutko

1993-01-01

446

Generalized averaging and noise levels in evoked responses

A formal relationship between the mean square noise level in an evoked potential experiment, the number of averages and the autocorrelation function of the noise is derived. The generalized averaging process is recast as a filter applied to the noise signal. This filter is computed for a number of different types of evoked potential experiments in which various weighting factors

Mark M. Stecker

2000-01-01

447

Local linear convergence for alternating and averaged nonconvex projections

Local linear convergence for alternating and averaged nonconvex projections A.S. Lewis, D.R. Luke, J. Malick. September 16, 2008. Key words: alternating projections, averaged projections, linear convergence collection of closed sets having "linearly regular intersection" at a point is crucial in variational

Paris-Sud XI, UniversitÃ© de

448

Optimal average of regional temperature with sampling error estimation

This paper presents an optimal averaging scheme that yields the area average of a climate field over a region. The scheme consists of two steps: fitting the observation data to a priori covariance functions and computing the weights for each of the observation stations. A surface air temperature dataset of 23 stations in Northeast China from 1961 to 1990 is
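The second step of the scheme, computing station weights once covariances are in hand, is a constrained least-squares problem and can be sketched as below. This is a generic sketch of such an optimal-averaging step, not the paper's exact formulation; the covariance-fitting first step is omitted and the function name is mine.

```python
import numpy as np

def optimal_weights(C, c):
    # Weights w minimizing the expected squared error of the weighted station
    # average against the true area mean, subject to sum(w) = 1.
    # C: station-station covariance matrix; c: station-to-area-mean covariances.
    # Stationarity of the Lagrangian gives C w + mu * 1 = c together with the
    # constraint row, solved as one linear system.
    n = len(c)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = C
    A[:n, n] = 1.0   # Lagrange-multiplier column for the sum-to-one constraint
    A[n, :n] = 1.0
    b = np.append(np.asarray(c, dtype=float), 1.0)
    return np.linalg.solve(A, b)[:n]
```

With uncorrelated, identically informative stations the solution collapses to equal weights, which is the sanity check one would expect.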

Samuel S. Shen; Xiaochun Wang

1997-01-01

449

Optimal Average of Regional Temperature with Sampling Error Estimation

This paper presents an optimal averaging scheme that yields the area average of a climate field over a region. The scheme consists of two steps: fitting the observation data to a priori covariance functions and computing the weights for each of the observation stations. A surface air temperature dataset of 23 stations in Northeast China from 1961 to 1990 is used to demonstrate the

Samuel S. Shen; Xiaochun Wang

450

Calculations of High-Temperature Jet Flow Using Hybrid Reynolds-Averaged Navier-Stokes Formulations

NASA Technical Reports Server (NTRS)

Two multiscale-type turbulence models are implemented in the PAB3D solver. The models are based on modifying the Reynolds-averaged Navier-Stokes equations. The first scheme is a hybrid Reynolds-averaged Navier-Stokes/large-eddy-simulation model using the two-equation k-epsilon model with a Reynolds-averaged Navier-Stokes/large-eddy-simulation transition function dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes model in which the unresolved kinetic energy parameter f(sub k) is allowed to vary as a function of grid spacing and the turbulence length scale. This parameter is estimated based on a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for partially averaged Navier-Stokes. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The parameter f(sub k) varies between zero and one and is equal to one in the viscous sublayer and when the Reynolds-averaged Navier-Stokes turbulent viscosity becomes smaller than the large-eddy-simulation viscosity. The formulation, usage methodology, and validation examples are presented to demonstrate the enhancement of PAB3D's time-accurate turbulence modeling capabilities. The accurate simulation of flow and turbulent quantities will provide a valuable tool for accurate jet noise predictions. Solutions from these models are compared with Reynolds-averaged Navier-Stokes results and experimental data for high-temperature jet flows. The current results show promise for the capability of hybrid Reynolds-averaged Navier-Stokes/large-eddy simulation and partially averaged Navier-Stokes in simulating such flow phenomena.

Abdol-Hamid, Khaled S.; Elmiligui, Alaa; Girimaji, Sharath S.

2008-01-01

451

The effects of spatial and temporal averaging on high-resolution transmission electron microscope (HRTEM) images and associated intensity profiles of a solid-liquid Al interface were investigated using atomic coordinates obtained from molecular dynamics simulations. It was found that intensity profiles obtained by spatial averaging across the solid-liquid interface capture the variation in structural features nearly as well as time-averaged intensity profiles. This suggests that adequate spatial averaging of a single HRTEM image can be used to study the contrast from interfaces, and thereby, the structural details, without the need for more time-consuming, computer-intensive time averaged analyses. The limitations of this method are also discussed. PMID:23142743

Eswara Moorthy, Santhana K; Mendelev, Mikhail I; Howe, James M

2013-01-01

452

ERIC Educational Resources Information Center

Average-ability students often do not take full advantage of learning in mixed-ability groups because they hardly engage in the group interaction. This study examined whether structuring collaboration by group roles and ground rules for helping behavior might help overcome this participatory inequality. In a plant biology course, heterogeneously…

Saleh, Mohammad; Lazonder, Ard W.; Jong, Ton de

2007-01-01

453

ERIC Educational Resources Information Center

In this study the "neural efficiency" phenomenon (more efficient brain function in brighter as compared to less intelligent individuals) was investigated regarding differences in intelligence (average vs. above-average intelligence) and scholastic achievement (achievers vs. underachievers). The cortical activation (assessed by event-related…

Staudt, Beate; Neubauer, Aljoscha C.

2006-01-01

454

Programmable noise bandwidth reduction by means of digital averaging

NASA Technical Reports Server (NTRS)

Predetection noise bandwidth reduction is effected by a pre-averager capable of digitally averaging the samples of an input data signal over two or more symbols, the averaging interval being defined by the input sampling rate divided by the output sampling rate. As the averaged sample is clocked to a suitable detector at a much slower rate than the input signal sampling rate the noise bandwidth at the input to the detector is reduced, the input to the detector having an improved signal to noise ratio as a result of the averaging process, and the rate at which such subsequent processing must operate is correspondingly reduced. The pre-averager forms a data filter having an output sampling rate of one sample per symbol of received data. More specifically, selected ones of a plurality of samples accumulated over two or more symbol intervals are output in response to clock signals at a rate of one sample per symbol interval. The pre-averager includes circuitry for weighting digitized signal samples using stored finite impulse response (FIR) filter coefficients. A method according to the present invention is also disclosed.
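The core of the pre-averager, accumulating input samples over a symbol interval and emitting one (optionally FIR-weighted) sample per symbol at the reduced rate, can be sketched as follows. This is an illustrative simplification of the patented design, not its circuitry; the function name and interface are mine.

```python
import numpy as np

def pre_average(samples, n_per_symbol, weights=None):
    # Average n_per_symbol input samples into one output sample per symbol.
    # With weights=None this is plain boxcar averaging; passing stored FIR
    # coefficients mimics the weighting the patent describes.
    samples = np.asarray(samples, dtype=float)
    if weights is None:
        weights = np.full(n_per_symbol, 1.0 / n_per_symbol)
    usable = len(samples) - len(samples) % n_per_symbol
    return samples[:usable].reshape(-1, n_per_symbol) @ np.asarray(weights, float)
```

Averaging N independent noise samples preserves the symbol mean while cutting the noise variance by a factor of N, which is the signal-to-noise improvement the abstract attributes to the pre-averaging process.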

Poklemba, John J. (Inventor)

1993-01-01

455

Experimental demonstration of squeezed-state quantum averaging

We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented harmonic mean yields a lower value than the corresponding value obtained for the standard arithmetic-mean strategy. The effect of quantum averaging is experimentally tested for squeezed and thermal states as well as for uncorrelated and partially correlated noise sources. The harmonic-mean protocol can be used to efficiently stabilize a set of squeezed-light sources with statistically fluctuating noise levels.
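The arithmetic between the two strategies compared above is simple to reproduce numerically. This is only the classical mean inequality behind the claimed advantage, not the optical protocol itself; the function name is mine.

```python
import numpy as np

def harmonic_mean(variances):
    # Harmonic mean of a set of quadrature variances, the quantity the
    # quantum-averaging protocol prepares.
    v = np.asarray(variances, dtype=float)
    return float(len(v) / np.sum(1.0 / v))
```

For any set of positive variances the harmonic mean is at most the arithmetic mean, with equality only when all variances coincide, which is why the harmonic-mean strategy yields the lower value reported in the abstract.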

Lassen, Mikael; Madsen, Lars Skovgaard; Andersen, Ulrik L. [Department of Physics, Technical University of Denmark, DK-2800 Kongens Lyngby (Denmark); Sabuncu, Metin [Max-Planck-Institute for the Science of Light, Guenther-Scharowsky-Strasse 1, D-91058 Erlangen (Germany); Filip, Radim [Department of Optics, Palacky University, 17 Listopadu 50, CZ-772 07 Olomouc (Czech Republic)

2010-08-15

456

Definition of average path and relativity parameter computation in CASA

NASA Astrophysics Data System (ADS)

CASA (computer-assisted semen analysis) is a medically applicable system that measures sperm motility and its parameters using image-processing methods. However, no authoritative body or academic organization has yet issued a set of criteria for CASA, which limits effective comparison of work between labs and researchers. The average path, and the parameters derived from it such as average path velocity, amplitude of lateral head displacement, and beat-cross frequency, are often not comparable between systems because of differing algorithms. This paper presents a new algorithm that defines the average path uniquely and computes these three parameters quickly and conveniently from any real path.
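The paper's own algorithm is not given in this record, but the conventional moving-average definition of the average path, the very definition whose system-to-system variation the paper criticizes, can be sketched for context. The window size, function names, and interface below are assumptions of mine.

```python
import numpy as np

def average_path(track, window=5):
    # Smooth a raw sperm-head track (N x 2 array of x, y per frame) with a
    # centred moving average; a common but non-standardized definition.
    track = np.asarray(track, dtype=float)
    kernel = np.ones(window)
    counts = np.convolve(np.ones(len(track)), kernel, mode="same")
    xs = np.convolve(track[:, 0], kernel, mode="same") / counts
    ys = np.convolve(track[:, 1], kernel, mode="same") / counts
    return np.column_stack([xs, ys])

def path_velocity(path, frame_dt):
    # Mean speed along a path: total segment length over elapsed time.
    # Applied to the smoothed track this gives the average path velocity (VAP).
    seg = np.diff(path, axis=0)
    return float(np.hypot(seg[:, 0], seg[:, 1]).sum() / (frame_dt * (len(path) - 1)))
```

Because the result depends on the chosen window, two CASA systems smoothing the same raw track differently report different VAP values, which is exactly the comparability problem the paper addresses.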

Wu, Dawei; Huang, Yan; Chen, Xiaohua; Yu, Chang

2001-09-01

457

Cascade of failures in interdependent networks with different average degree

NASA Astrophysics Data System (ADS)

Many modern systems are coupled through two sub-networks and should therefore be modeled as interdependent networks, making the study of their robustness interesting and significant. In this paper, mainly through numerical simulations, the robustness of interdependent Erdős-Rényi (ER) networks and interdependent scale-free (SF) networks whose two sub-networks have different average degree is investigated. First, we study the robustness of interdependent networks under random attack. Second, we study their robustness under targeted attack on high- or low-degree nodes, and find that interdependent networks with different average degree behave significantly differently from interdependent networks with equal average degree.

Cheng, Zunshui; Cao, Jinde; Hayat, Tasawar

2014-12-01

458

Flux Jacobian matrices and generalized Roe average for an equilibrium real gas

NASA Technical Reports Server (NTRS)

Inviscid flux Jacobian matrices and their properties used in numerical solutions of conservation laws are extended to general, equilibrium gas laws. Exact and approximate generalizations of the Roe average are presented. Results are given for one-dimensional flow, and then extended to three-dimensional flow with time-varying grids.
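For context, the classical perfect-gas Roe average that this work generalizes can be stated in a few lines: square-root-of-density weighting of velocity and total enthalpy. This sketch is the textbook perfect-gas form only, not the paper's equilibrium real-gas extension; the function name is mine.

```python
import numpy as np

def roe_average(rho_l, u_l, h_l, rho_r, u_r, h_r):
    # Classic Roe average for 1-D flow between left and right states:
    # velocity and total enthalpy are weighted by sqrt(density), and the
    # averaged density is the geometric mean of the two densities.
    wl, wr = np.sqrt(rho_l), np.sqrt(rho_r)
    u_hat = (wl * u_l + wr * u_r) / (wl + wr)
    h_hat = (wl * h_l + wr * h_r) / (wl + wr)
    rho_hat = wl * wr
    return rho_hat, u_hat, h_hat
```

The weighting is chosen so that the linearized flux Jacobian evaluated at the averaged state exactly reproduces the flux jump between the two states for a perfect gas; extending that property to general equilibrium gas laws is the subject of the paper.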

Vinokur, Marcel

1988-01-01

459

The subject of this letter is the dynamics of a chain obtained by performing the continuous limit of a system of links and beads. In particular, the probability distribution of the relative position of two points of the chain, averaged over a given interval of time, is computed. The physical meaning of the result is investigated in the limiting case of a stiff chain.

Franco Ferrari; Jaroslaw Paturej; Thomas A. Vilgis; Tomasz Wydro

2008-09-12

460

Genetic analysis of average annual productivity of Nellore breeding cows (COWPROD)

With the aim of estimating the coefficient of heritability of average annual productivity of Nellore cows (COWPROD), a data set from 24,855 animals with known pedigree was analyzed. COWPROD is defined as the amount (in kilograms) of weaned calves produced yearly by one cow during her remaining time in herd ignoring a fixed period of 365 days. COWPROD was calculated

J. P. Eler; J. B. S. Ferraz; J. C. C. Balieiro; E. C. Mattos

2008-01-01

461

Crash involvement rates by driver gender and the role of average annual mileage

The effects of four predictor variables—driver age, driver gender, time of day, and average annual mileage—on crash involvement rates were estimated through the use of multivariate modelling techniques. Separate models were developed for fatal, injury, and property damage only crashes. All four predictor variables proved to be highly significant in explaining variations in observed rates. Rates predicted by the models

Dawn L. Massie; Paul E. Green; Kenneth L. Campbell

1997-01-01

462

Average inversion level, modeling, and physics of erbium-doped fiber amplifiers

We present a detailed study of a set of models for characterizing the gain, the input and output powers of single erbium-doped fiber amplifiers (EDFAs) and networks of EDFAs. The time dependent gain is described by a single ordinary differential equation for the average inversion level of an EDFA with arbitrary number of signal channels with arbitrary power levels and

Y. Sun; J. L. Zyskind; A. K. Srivastava

1997-01-01

463

Quenching in a non-ideal mechanical system and the Averaging Method

NASA Astrophysics Data System (ADS)

In this paper, for the first time, a quenching result in a non-ideal system is rigorously obtained. To do this, a new mechanical hypothesis is assumed, namely that the moment of inertia of the rotating parts of the energy source is large. From this it is possible to use the Averaging Method.

Dantas, Márcio José Horta; Balthazar, José Manoel; Felix, Jorge Luiz Palacios

2012-11-01

464

Sea Surface Temperature Climatology: Averages of the same month over a number of years.

NSDL National Science Digital Library

This site presents an interactive animation of the changes in global sea surface temperature averaged over twelve months time. By clicking on an area of the map, users may view a particular month for a specific region, change display parameters and select image output formats.

Library, Iri/ldeo C.; Ucsb

465

Identification and estimation of survivor average causal effects

In longitudinal studies, outcomes ascertained at follow-up are typically undefined for individuals who die prior to the follow-up visit. In such settings, outcomes are said to be truncated by death and inference about the effects of a point treatment or exposure, restricted to individuals alive at the follow-up visit, could be biased even if as in experimental studies, treatment assignment were randomized. To account for truncation by death, the survivor average causal effect (SACE) defines the effect of treatment on the outcome for the subset of individuals who would have survived regardless of exposure status. In this paper, the author nonparametrically identifies SACE by leveraging post-exposure longitudinal correlates of survival and outcome that may also mediate the exposure effects on survival and outcome. Nonparametric identification is achieved by supposing that the longitudinal data arise from a certain nonparametric structural equations model and by making the monotonicity assumption that the effect of exposure on survival agrees in its direction across individuals. A novel weighted analysis involving a consistent estimate of the survival process is shown to produce consistent estimates of SACE. A data illustration is given, and the methods are extended to the context of time-varying exposures. We discuss a sensitivity analysis framework that relaxes assumptions about independent errors in the nonparametric structural equations model and may be used to assess the extent to which inference may be altered by a violation of key identifying assumptions. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24889022

Tchetgen, Eric J Tchetgen

2014-01-01

466

Sexuality and breast cancer: prime time for young patients

Sexuality and sexual functioning is a cardinal domain of health-related quality of life in breast cancer patients, namely in the younger population. Young women below 40 years of age go through a time in their lives where sexual self-identity has recently matured, their professional obligations are demanding and they bear interpersonal and childbearing expectations, all of which can suffer a devastating turnaround with cancer diagnosis and its physical and psychological aftermath. Although these women’s sexuality and directed interventions have remained largely unaddressed so far, concepts are evolving and treatment options are becoming diversified, chiefly on the field of non-hormonal pharmacological therapy of sexual dysfunction. This review will examine the definitions of female sexual dysfunction, the etiology of the disorders in young breast cancer patients, the assessment methods, the non-pharmacological and pharmacological treatment options and the challenges that lie ahead. PMID:23819031

2013-01-01

467

Pulse recirculation has been successfully demonstrated with the interaction laser system of LLNL's Thomson-Radiated Extreme X-ray (T-REX) source. The recirculation increased the intensity of the light coming out of the laser system twenty-eight-fold, demonstrating the capability of increasing the gamma-ray flux emitted by T-REX. The technical approach demonstrated could conceivably increase the average gamma-ray flux output by up to a hundredfold.

Semenov, V

2009-05-28

468

Stroboscopic averaging in Banach spaces: Application to NLS

NASA Astrophysics Data System (ADS)

We consider general, highly-oscillatory evolution equations posed in a Banach space (more precisely, high-frequency systems with one frequency in an infinite-dimensional setting) and we describe an averaging procedure (namely stroboscopic averaging) for which the obtained averaged system respects the intrinsic geometric properties of the original equation. The typical example we have in mind is that of nonlinear PDEs in a high-frequency regime, where only one frequency is present, and the averaged effect of the oscillations in the nonlinearities is to be computed at high order. We apply our results to general nonlinear Schrödinger equations in a high-frequency regime and show that, up to a simple and standard filtering procedure, the obtained results provide a hierarchy of models that approximate the original equations.

Castella, F.; Chartier, Ph.; Méhats, F.; Murua, A.

2012-09-01

469

Gastroschisis: what is the average gestational age of spontaneous delivery?

Background/Purpose: To consolidate what is known about pregnancies complicated by fetal gastroschisis through analysis of one of the largest series yet reported and to define the average gestational age of spontaneous delivery.

Andrea Y. Lausman; Jacob C. Langer; Melissa Tai; P. Gareth R. Seaward; Rory C. Windrim; Edmond N. Kelly; Greg Ryan

2007-01-01

470

Effects of spatial variability and scale on areal -average evapotranspiration

NASA Technical Reports Server (NTRS)

This paper explores the effect of spatial variability and scale on areally-averaged evapotranspiration. A spatially-distributed water and energy balance model is employed to determine the effect of explicit patterns of model parameters and atmospheric forcing on modeled areally-averaged evapotranspiration over a range of increasing spatial scales. The analysis is performed from the local scale to the catchment scale. The study area is King's Creek catchment, an 11.7 sq km watershed located on the native tallgrass prairie of Kansas. The dominant controls on the scaling behavior of catchment-average evapotranspiration are investigated by simulation, as is the existence of a threshold scale for evapotranspiration modeling, with implications for explicit versus statistical representation of important process controls. It appears that some of our findings are fairly general, and will therefore provide a framework for understanding the scaling behavior of areally-averaged evapotranspiration at the catchment and larger scales.

Famiglietti, J. S.; Wood, Eric F.

1993-01-01

471

Tamiflu Cuts 1 Day Off Average Flu Bout, Study Finds

Tamiflu Cuts 1 Day Off Average Flu Bout, Study Finds. Researchers reviewed ... the length of flu symptoms by about a day, and reduces the risk of flu-related complications ...

472

Metric structures in L1: Dimension, snowflakes, and average distortion

Metric structures in L1: Dimension, snowflakes, and average distortion James R. Lee U.C. Berkeley for the non-linear case. This is accomplished by exhibiting constant-distortion embeddings of snowflaked

Naor, Assaf

473

Does subduction zone magmatism produce average continental crust

NASA Technical Reports Server (NTRS)

The question of whether present day subduction zone magmatism produces material of average continental crust composition, which perhaps most would agree is andesitic, is addressed. It was argued that modern andesitic to dacitic rocks in Andean-type settings are produced by plagioclase fractionation of mantle derived basalts, leaving a complementary residue with low Rb/Sr and a positive Eu anomaly. This residue must be removed, for example by delamination, if the average crust produced in these settings is andesitic. The author argued against this, pointing out the absence of evidence for such a signature in the mantle. Either the average crust is not andesitic, a conclusion the author was not entirely comfortable with, or other crust forming processes must be sought. One possibility is that during the Archean, direct slab melting of basaltic or eclogitic oceanic crust produced felsic melts, which together with about 65 percent mafic material, yielded an average crust of andesitic composition.

Ellam, R. M.; Hawkesworth, C. J.

1988-01-01

474

The AGC total knee prosthesis at average 11 years

A retrospective study of a series of 126 consecutive primary cemented total knee replacements using the AGC prosthesis is reported. Sixty-two knees were available for long-term review with an average clinical follow-up of 11.4 years (range, 8.4-13.6 years). The survivorship was 95%, defining the endpoint as revision of all components for any reason except sepsis. The average knee flexion was

Roger H Emerson; Linda L Higgins; William C Head

2000-01-01

475

MOVIE DENOISING BY AVERAGE OF WARPED LINES Marcelo Bertalmio

MOVIE DENOISING BY AVERAGE OF WARPED LINES By Marcelo Bertalmío, Vicent Caselles and Álvaro Pardo. URL: http://www.ima.umn.edu Abstract: Here we present an efficient method for movie