Note: This page contains sample records for the topic average turnaround time from Science.gov.
While these samples are representative of the content of Science.gov,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of Science.gov
to obtain the most current and comprehensive results.
Last update: August 15, 2014.
1

Laboratory Turnaround Time  

PubMed Central

Turnaround time (TAT) is one of the most noticeable signs of laboratory service and is often used as a key performance indicator of laboratory performance. This review summarises the literature regarding laboratory TAT, focusing on the different definitions, measures, expectations, published data, associations with clinical outcomes and approaches to improve TAT. It aims to provide a consolidated source of benchmarking data useful to the laboratory in setting TAT goals and to encourage introduction of TAT monitoring for continuous quality improvement. A 90% completion time (sample registration to result reporting) of <60 minutes for common laboratory tests is suggested as an initial goal for acceptable TAT.

Hawkins, Robert C

2007-01-01
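
The 90% completion time used as a benchmark above is simply the 90th percentile of registration-to-report intervals. The sketch below shows one way such a figure might be computed from timestamp pairs; the data and variable names are illustrative only and do not come from the review.

```python
# Minimal sketch: 90% completion time (90th percentile of turnaround time)
# from (registration, report) timestamp pairs. Data are hypothetical.
from datetime import datetime
import numpy as np

def completion_time_90(events):
    """events: iterable of (registered_at, reported_at) datetime pairs."""
    tat_minutes = [(reported - registered).total_seconds() / 60.0
                   for registered, reported in events]
    return np.percentile(tat_minutes, 90)

events = [
    (datetime(2014, 8, 1, 8, 0), datetime(2014, 8, 1, 8, 42)),
    (datetime(2014, 8, 1, 8, 5), datetime(2014, 8, 1, 9, 10)),
    (datetime(2014, 8, 1, 8, 20), datetime(2014, 8, 1, 8, 55)),
]
print(f"90% completion time: {completion_time_90(events):.0f} min (suggested goal: <60 min)")
```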

2

Improving satisfaction performance through faster turnaround times.  

PubMed

In order to increase satisfaction and improve department performance, Florida Hospital Memorial Medical Center focused on decreasing report turnaround time with a goal of two hours from completion of the examination by the technologist to a final signed report by the radiologist. Through the introduction of voice recognition software, not only were efficiencies gained, but there was a dramatic reduction in transcription costs, from $30,000 per month to less than $300. Turnaround times were ultimately reduced, thus improving performance and increasing satisfaction. This was made evident by the tracking over time of physician satisfaction scores and HCAHPS scores rating patient satisfaction. PMID:22043733

Kelley, Lisa

2011-01-01

3

A REGRESSION MODEL FOR VESSEL TURNAROUND TIME  

Microsoft Academic Search

Turnaround time in the port industry reflects the capability and ability of a container terminal to provide services. This study is motivated by the rapid development of port container terminals, the drive to provide efficient and effective services and high port productivity, and the aim of achieving optimum port performance. The research arises from the relationship between port throughput (i.e., TEUs, twenty-foot equivalent units) and…

Kasypi Mokhtar

2006-01-01

4

Improving Pap test turnaround time using external benchmark data and engineering process improvement tools.  

PubMed

Turnaround time for Papanicolaou (Pap) tests became an important service quality issue at our institution. We studied Pap test turnaround time using engineering process improvement tools and benchmarked turnaround time against data published as a College of American Pathologists Q-Probes study. An IDEF3 process map revealed the complexity of the Pap test process and the opportunities for process improvement. We used these data and the action-research method to initiate changes in cytopathology laboratory operations with the goal of reducing turnaround time. Before intervention, mean Pap test turnaround time was highly variable; during a 6-month period, monthly means ranged from 2.5 to 10.8 days. A cycle time study conducted over a 2-week period validated these data. After system improvements were implemented, the monthly mean turnaround time decreased and became more consistent, with 11 of 12 months having a mean turnaround time of 3 days or less (range, 1.5-3.9 days). Our study illustrates the value of publishing Q-Probes data for use as external benchmarks and the benefits of using tools from other disciplines to improve laboratory processes. PMID:12375639

Persoon, Thomas J; Zaleski, M Sue; Cohen, Michael B

2002-10-01

5

A strategy for reducing turnaround time in design optimization using a distributed computer system  

NASA Technical Reports Server (NTRS)

There is a need to explore methods for reducing the lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.

Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

1988-01-01

6

Mapping Turnaround Times (TAT) to a Generic Timeline: A Systematic Review of TAT Definitions in Clinical Domains  

PubMed Central

Background Assessing turnaround times can help to analyse workflows in hospital information systems. This paper presents a systematic review of the literature concerning different turnaround time definitions. Our objectives were to collect relevant literature with respect to these process times in hospitals and their respective domains. We then analysed the existing definitions and summarised them in an appropriate format. Methods Our search strategy was based on Pubmed queries and manual reviews of the bibliographies of retrieved articles. Studies were included if precise definitions of turnaround times were available. A generic timeline was designed through a consensus process to provide an overview of these definitions. Results More than 1000 articles were analysed, resulting in 122 included papers. Of those, 162 turnaround time definitions in different clinical domains were identified. Starting and end points vary between these domains. To illustrate those turnaround time definitions, a generic timeline was constructed using preferred terms derived from the identified definitions. The consensus process resulted in the following 15 terms: admission, order, biopsy/examination, receipt of specimen in laboratory, procedure completion, interpretation, dictation, transcription, verification, report available, delivery, physician views report, treatment, discharge and discharge letter sent. Based on this analysis, several standard terms for turnaround time definitions are proposed. Conclusion Using turnaround times to benchmark clinical workflows is still difficult, because even within the same clinical domain many different definitions exist. Mapping of turnaround time definitions to a generic timeline is feasible.

2011-01-01

7

Decreased Turn-Around Time for Forensic Genetic Testing.  

National Technical Information Service (NTIS)

At the time of the initial Crime Laboratory Improvement (CLIP) grant application, the Biological Science Section of the New York State Police Forensic Investigation Center was in the process of implementing a program with dual objectives which were to (1)...

B. Duceman

2006-01-01

8

Analysis of high-level radioactive slurries as a method to reduce DWPF turnaround times  

SciTech Connect

Analysis of Defense Waste Processing Facility (DWPF) samples as slurries rather than as dried or vitrified samples is an effective way to reduce sample turnaround times. Slurries can be dissolved with a mixture of concentrated acids to yield solutions for elemental analysis by inductively coupled plasma-atomic emission spectroscopy (ICP-AES). Slurry analyses can be performed in eight hours, whereas analyses of vitrified samples require up to 40 hours to complete. Analyses of melter feed samples consisting of the DWPF borosilicate frit and either simulated or actual DWPF radioactive sludge were typically within a range of 3% to 5% of the predicted value based on the relative amounts of sludge and frit added to the slurry. The results indicate that the slurry analysis approach yields analytical accuracy and precision competitive with those obtained from analyses of vitrified samples. Slurry analyses offer a viable alternative to analyses of solid samples as a simple way to reduce analytical turnaround times.

Coleman, C.J.; Bibler, N.E.; Ferrara, D.M.; Hay, M.S.

1996-06-01

9

Impact of an in-house emergency radiologist on report turnaround time.  

PubMed

Background: One of the many challenges facing emergency departments (EDs) across North America is timely access to emergency radiology services. Academic institutions, which are typically also regional referral centres, frequently require cross-sectional studies to be performed 24 hours a day with expedited final reports to accelerate patient care and ED flow. Objective: The purpose of this study was to determine if the presence of an in-house radiologist, in addition to a radiology resident dedicated to the ED, had a significant impact on report turnaround time. Methods: Preliminary and final report turnaround times, provided by the radiology resident and staff, respectively, for patients undergoing computed tomography or ultrasonography of their abdomen/pelvis in 2008 (before the implementation of emergency radiology in-house staff service) were compared to those performed during the same time frame in 2009 and 2010 (after staffing protocols were changed). Results: A total of 1,624 reports were reviewed. Overall, there was no statistically significant decrease in the preliminary report turnaround times between 2008 and 2009 (p = 0.1102), 2009 and 2010 (p = 0.6232), or 2008 and 2010 (p = 0.0890), although times consistently decreased from a median of 2.40 hours to 2.08 hours to 2.05 hours (2008 to 2009 to 2010). There was a statistically significant decrease in final report turnaround times between 2008 and 2009 (p < 0.0001), 2009 and 2010 (p < 0.0011), and 2008 and 2010 (p < 0.0001). Median final report times decreased from 5.00 hours to 3.08 hours to 2.75 hours in 2008, 2009, and 2010, respectively. There was also a significant decrease in the time interval between preliminary and final reports between 2008 and 2009 (p < 0.0001) and 2008 and 2010 (p < 0.0001) but no significant change between 2009 and 2010 (p = 0.4144). Conclusion: Our results indicate that the presence of a dedicated ED radiologist significantly reduces final report turnaround time and thus may positively impact the time to ED patient disposition. Patient care is improved when attending radiologists are immediately available to read complex films, both in terms of health care outcomes and regarding the need for repeat testing. Providing emergency physicians with accurate imaging findings as rapidly as possible facilitates effective and timely management and thus optimizes patient care. PMID:24807227

Lamb, Leslie; Kashani, Paria; Ryan, John; Hebert, Guy; Sheikh, Adnan; Thornhill, Rebecca; Fasih, Najla

2014-01-01
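
The year-over-year comparison of median turnaround times above can be reproduced in outline with a nonparametric two-sample test. The study does not state which test it used, so the Mann-Whitney U test below is only an illustrative choice, and the numbers are made up.

```python
# Sketch: compare final-report turnaround times between two years. The test
# choice (Mann-Whitney U) and the data are assumptions for illustration only.
import numpy as np
from scipy.stats import mannwhitneyu

tat_2008 = np.array([5.2, 4.8, 6.1, 5.0, 7.3, 4.4, 5.9])  # hours, hypothetical
tat_2009 = np.array([3.1, 2.9, 3.4, 3.0, 2.7, 3.6, 3.2])  # hours, hypothetical

stat, p_value = mannwhitneyu(tat_2008, tat_2009, alternative="two-sided")
print(f"median 2008: {np.median(tat_2008):.2f} h, "
      f"median 2009: {np.median(tat_2009):.2f} h, p = {p_value:.4f}")
```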

10

Impact of an in-house emergency radiologist on report turnaround time.  

PubMed

Background: One of the many challenges facing emergency departments (EDs) across North America is timely access to emergency radiology services. Academic institutions, which are typically also regional referral centres, frequently require cross-sectional studies to be performed 24 hours a day with expedited final reports to accelerate patient care and ED flow. Objective: The purpose of this study was to determine if the presence of an in-house radiologist, in addition to a radiology resident dedicated to the ED, had a significant impact on report turnaround time. Methods: Preliminary and final report turnaround times, provided by the radiology resident and staff, respectively, for patients undergoing computed tomography or ultrasonography of their abdomen/pelvis in 2008 (before the implementation of emergency radiology in-house staff service) were compared to those performed during the same time frame in 2009 and 2010 (after staffing protocols were changed). Results: A total of 1,624 reports were reviewed. Overall, there was no statistically significant decrease in the preliminary report turnaround times between 2008 and 2009 (p = 0.1102), 2009 and 2010 (p = 0.6232), or 2008 and 2010 (p = 0.0890), although times consistently decreased from a median of 2.40 hours to 2.08 hours to 2.05 hours (2008 to 2009 to 2010). There was a statistically significant decrease in final report turnaround times between 2008 and 2009 (p < 0.0001), 2009 and 2010 (p < 0.0011), and 2008 and 2010 (p < 0.0001). Median final report times decreased from 5.00 hours to 3.08 hours to 2.75 hours in 2008, 2009, and 2010, respectively. There was also a significant decrease in the time interval between preliminary and final reports between 2008 and 2009 (p < 0.0001) and 2008 and 2010 (p < 0.0001) but no significant change between 2009 and 2010 (p = 0.4144). Conclusion: Our results indicate that the presence of a dedicated ED radiologist significantly reduces final report turnaround time and thus may positively impact the time to ED patient disposition. Patient care is improved when attending radiologists are immediately available to read complex films, both in terms of health care outcomes and regarding the need for repeat testing. Providing emergency physicians with accurate imaging findings as rapidly as possible facilitates effective and timely management and thus optimizes patient care. PMID:24456822

Lamb, Leslie; Kashani, Paria; Ryan, John; Hebert, Guy; Sheikh, Adnan; Thornhill, Rebecca; Fasih, Najla

2014-01-01

11

Cytotechnologist screening of fine-needle aspiration specimens: Impact on turnaround time and diagnostic accuracy.  

PubMed

Fine-needle aspiration (FNA) is widely utilized due to its short turnaround time (TAT), diagnostic accuracy, and low cost. Controversy exists as to what role cytotechnologists should play in evaluation of FNAs. Some authorities believe all FNAs should be screened by cytotechnologists, while others believe that cytotechnologist review is unnecessary. Sixty sequentially performed FNAs without initial review by cytotechnologists were selected from the files of the University of Utah, Department of Pathology. The slides were obtained along with the associated final diagnoses. The slides were reviewed by cytotechnologists who were given the patient history and specimen site but were blinded to the initial pathologist's diagnoses. The initial cytopathologist's diagnoses and subsequent cytotechnologists' diagnoses were recorded and correlated. TATs for these cases were calculated and compared with TATs in a second set of randomly selected FNAs where cytotechnologists had initially screened the cases. Correlation of initial cytopathologists' diagnoses with those of the cytotechnologists revealed no instances where cytotechnologists identified diagnostically significant findings not noted by the original pathologist. TAT for the FNAs reviewed only by a cytopathologist averaged 25.9 hours with a mode of 6 hours. TATs for cases with initial cytotechnologist screening averaged 44.1 hours with a mode of 25 hours. Pre-sign-out screening of FNA specimens by cytotechnologists does not appear to increase detection of cytologic abnormalities. Cytotechnologist screening does, however, substantially increase TAT, from a mean of 26 hours to approximately 44 hours. Such an extensive delay may reduce the overall clinical utility of the FNA technique. Diagn. Cytopathol. 2014;42:606-608. © 2014 Wiley Periodicals, Inc. PMID:24554528

Layfield, Lester J; Esebua, Magda; Witt, Benjamin L

2014-07-01

12

Insertable system for fast turnaround time microwave experiments in a dilution refrigerator  

NASA Astrophysics Data System (ADS)

Microwave experiments in dilution refrigerators are a central tool in the field of superconducting quantum circuits and other research areas. Such experiments have so far relied on attaching a device to the mixing chamber of a dilution refrigerator. The minimum turnaround time in this case is a few days, as required by cooling down and warming up the entire refrigerator. We developed a new approach, in which a suitable sample holder is attached to a cold-insertable probe and brought in contact with transmission lines permanently mounted inside the cryostat. The total turnaround time is 8 h if the target temperature is 80 mK. The lowest attainable temperature is 30 mK. Our system can accommodate up to six transmission lines, with a measurement bandwidth tested from zero frequency to 12 GHz. This bandwidth is limited by low-pass components in the setup; we expect the intrinsic bandwidth to be at least 18 GHz. We present our setup, discuss the experimental procedure, and give examples of experiments enabled by this system. This new measurement method will have a major impact on systematic ultra-low temperature studies using microwave signals, including those requiring quantum coherence.

Ong, Florian R.; Orgiazzi, Jean-Luc; de Waard, Arlette; Frossati, Giorgio; Lupascu, Adrian

2012-09-01

13

Insertable system for fast turnaround time microwave experiments in a dilution refrigerator.  

PubMed

Microwave experiments in dilution refrigerators are a central tool in the field of superconducting quantum circuits and other research areas. Such experiments have so far relied on attaching a device to the mixing chamber of a dilution refrigerator. The minimum turnaround time in this case is a few days, as required by cooling down and warming up the entire refrigerator. We developed a new approach, in which a suitable sample holder is attached to a cold-insertable probe and brought in contact with transmission lines permanently mounted inside the cryostat. The total turnaround time is 8 h if the target temperature is 80 mK. The lowest attainable temperature is 30 mK. Our system can accommodate up to six transmission lines, with a measurement bandwidth tested from zero frequency to 12 GHz. This bandwidth is limited by low-pass components in the setup; we expect the intrinsic bandwidth to be at least 18 GHz. We present our setup, discuss the experimental procedure, and give examples of experiments enabled by this system. This new measurement method will have a major impact on systematic ultra-low temperature studies using microwave signals, including those requiring quantum coherence. PMID:23020391

Ong, Florian R; Orgiazzi, Jean-Luc; de Waard, Arlette; Frossati, Giorgio; Lupascu, Adrian

2012-09-01

14

[Study on turnaround time of biological analysis in urgent need in hospital laboratories].  

PubMed

We have assessed turnaround time (TAT) for urgent laboratory analyses. Twelve hospital laboratories participated in this study. All laboratories have organized a classified management system for urgent analyses. TAT reporting was relatively homogeneous across the 12 laboratories. We have defined TAT as the time from specimen receipt in the laboratory to the reporting of results. This TAT divides into 4 groups: close to 50 minutes for analyses such as TP, D-dimers, CRP (C-reactive protein), HCG, troponin, blood alcohol, K, and lipase; 35 minutes for the cytology of cerebrospinal fluid; 25 minutes for the complete blood cell count; and 15 minutes for blood gases. All laboratories have accepted TAT as a quality indicator. Quality indicator data should be collected in time to identify and correct problems, to implement effective interventions, and to standardize processes among clinical laboratories. PMID:21159586

Roubille, Martine; Szymanowicz, Anton; Cartier, Brigitte; Albinet, Hélène; Carlier, Alain; Goux, Alain; Lefevre, Fabrice; Pellae, Isabelle; Rozand, Isabelle; Billion, Patrick; Burtin, Marie Laure; Mas, Philippe

2010-01-01

15

Pathology turnaround time in pediatric oncology: a tool to prepare patients and families for the diagnostic waiting period.  

PubMed

There is little data on the amount of time patients and families typically wait for pathology results when pediatric malignancy is suspected. The purpose of this study was to determine the average waiting period after diagnostic intervention for pediatric cancer. Pathology reports were reviewed for pediatric patients who had their initial diagnosis and were followed in the Division of Hematology/Oncology from 2007 through 2010. The average turnaround time (TAT) for all pathology (n=266) was 6.9 days. The TAT for pathology results according to diagnosis was 10.1 days for CNS tumors (n=59), 9.7 days for sarcomas (n=40), 5.4 days for lymphomas (n=31), 5.4 days for neuroblastoma (n=13), 7.3 days for kidney tumors (n=11), 7.2 days for thyroid tumors (n=7), 9.4 days for ovarian tumors (n=7), 7.0 days for schwannomas/neurofibromas (n=5), 5.7 days for testicular tumors (n=3), 5.0 days for hepatoblastoma (n=3), and 7.0 days for nasopharyngeal carcinomas (n=2). Overall the TAT for leukemia was 3.1 days (n=76), with diagnosis by flow cytometry taking 1.2 days and results by bone marrow biopsy taking 4.0 days. The TAT for pediatric oncology pathology after diagnostic intervention varies according to diagnosis. The hope is that this information will better prepare patients and families for the agonizing waiting period associated with diagnosis. PMID:23823114

Forlenza, Christopher J; Levy, Adam S

2013-10-01
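
The per-diagnosis averages reported above are straightforward grouped means. A minimal sketch of that bookkeeping, using hypothetical records rather than the study's data:

```python
# Sketch: mean pathology turnaround time (TAT) per diagnosis group.
# The rows below are invented examples, not data from the study.
import pandas as pd

reports = pd.DataFrame({
    "diagnosis": ["CNS tumor", "CNS tumor", "leukemia", "leukemia", "sarcoma"],
    "tat_days":  [9.5, 10.7, 3.0, 3.2, 9.7],
})
mean_tat = reports.groupby("diagnosis")["tat_days"].mean().round(1)
print(mean_tat)
```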

16

Root Cause Analysis (RCA) of Prolonged Laboratory Turnaround Time in a Tertiary Care Set up  

PubMed Central

Introduction: Among the multitude of daily administrative problems faced by modern hospitals, prolonged Turnaround Time (TAT) of laboratory investigations is a crucial one, adversely affecting both patient care and patient satisfaction. Aims and Objectives: The specific objectives were to observe the TAT of common laboratory investigations, to identify causes of increased turnaround time and to formulate action plans to rectify increased TAT. Methodology: An observational, RCA study was performed on 100 randomly selected patients. A separate group of 50 patients were assisted to get their investigations done and to reduce the time intervals without actively interfering with the steps. The results obtained were accepted as standards. Root cause analysis of the delays detected in TAT was done. Time intervals of TAT in the two groups were compared by two-tailed t-tests for equality of means. Result and Analysis: All time intervals were higher in the study group, and the differences were statistically significant (p<0.05) within a 95% confidence interval. The longest interval in the control group was between the prescription of the investigation by the doctor and the writing of the requisition by the Out-patient Department (OPD) staff. For the study population, it was the interval between the writing of the requisition by the OPD staff and the arrival of the patient at the central laboratory. The standard deviation (27.665) and range (102) were also exceptionally high for this interval in the study group. Conclusion: This study revealed that easy-to-implement administrative steps would help to reduce the TAT significantly and to improve the quality of services of the central laboratory. These include setting up sample collection counters at the outpatient department (OPD) and inpatient department (IPD), minor measures such as printing directions for reaching the laboratory on the OPD ticket, the start of a single-prick policy, declaring the central laboratory a separate department, and integration of administrative control under one authority.

2014-01-01

17

Turnaround Necessities  

ERIC Educational Resources Information Center

Turning around chronically low-performing schools is challenging work requiring fundamental rethinking of the change process, and a systemic rather than school-by-school approach. Without a doubt, high-impact school leaders are critical to turnaround success, and pockets of success around the country demonstrate this. However, transformational and…

Robinson, William S.; Buntrock, LeAnn M.

2011-01-01

18

Minimizing Manual Image Segmentation Turn-Around Time for Neuronal Reconstruction by Embracing Uncertainty  

PubMed Central

The ability to automatically segment an image into distinct regions is a critical aspect in many visual processing applications. Because inaccuracies often exist in automatic segmentation, manual segmentation is necessary in some application domains to correct mistakes, such as required in the reconstruction of neuronal processes from microscopic images. The goal of the automated segmentation tool is traditionally to produce the highest-quality segmentation, where quality is measured by the similarity to actual ground truth, so as to minimize the volume of manual correction necessary. Manual correction is generally orders-of-magnitude more time consuming than automated segmentation, often making handling large images intractable. Therefore, we propose a more relevant goal: minimizing the turn-around time of automated/manual segmentation while attaining a level of similarity with ground truth. It is not always necessary to inspect every aspect of an image to generate a useful segmentation. As such, we propose a strategy to guide manual segmentation to the most uncertain parts of segmentation. Our contributions include 1) a probabilistic measure that evaluates segmentation without ground truth and 2) a methodology that leverages these probabilistic measures to significantly reduce manual correction while maintaining segmentation quality.

Plaza, Stephen M.; Scheffer, Louis K.; Saunders, Mathew

2012-01-01

19

Heavy-duty explosively operated pulsed opening and closing switches - Reducing cost and turnaround time  

NASA Astrophysics Data System (ADS)

Improvements to heavy-duty, explosively operated opening and closing switches to reduce component cost, installation cost, and turnaround time without sacrificing reliability are discussed. Heavy-duty opening and closing switches operated by small explosive charges (50 g or less) are essential to operation of the 60-MJ Balcones power supply. The six independent modules can be discharged sequentially. Each delayed inductor must be isolated from the railgun circuit with a heavy-duty closing switch capable of carrying megaampere currents of millisecond duration. Similar closing switches are used to crowbar the railgun as the projectile approaches the muzzle. Efficient switching is crucial to efficient conversion: rotor kinetic energy to stored inductive energy with about 50 percent efficiency; stored inductive energy to projectile kinetic energy with about 30 percent efficiency. The switches must operate with a precision and repeatability of 10^-5 s, readily achievable with explosives. The opening switches must be structurally and thermally capable of carrying megaampere currents for more than 100 ms and develop 10 kV on opening, stay open for 0.01 s, and safely and reliably dissipate megajoules of inductive energy in the event of a fault, a failure of the switch to operate, or an attempt to commutate into an open circuit. An example of the severe switching requirements is presented.

Peterson, D. R.; Price, J. H.; Upshaw, J. L.; Weldon, W. F.; Zowarka, R. C., Jr.

1991-01-01

20

Diagnostic Accuracy and Turnaround Time of the Xpert MTB/RIF Assay in Routine Clinical Practice  

PubMed Central

The Xpert MTB/RIF assay was introduced for timely and accurate detection of tuberculosis (TB). The aim of this study was to determine the diagnostic accuracy and turnaround time (TAT) of the Xpert MTB/RIF assay in clinical practice in South Korea. We retrospectively reviewed the medical records of patients for whom the Xpert MTB/RIF assay using sputum was requested. The sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) for the diagnosis of pulmonary tuberculosis (PTB) and detection of rifampicin resistance were calculated. In addition, the TAT of the Xpert MTB/RIF assay was compared with those of other tests. A total of 681 patients in whom the Xpert MTB/RIF assay was requested were included in the analysis. The sensitivity, specificity, PPV and NPV of the Xpert MTB/RIF assay for diagnosis of PTB were 79.5% (124/156), 100.0% (505/505), 100.0% (124/124) and 94.0% (505/537), respectively. Those for the detection of rifampicin resistance were 57.1% (8/14), 100.0% (113/113), 100.0% (8/8) and 94.9% (113/119), respectively. The median TATs of the Xpert MTB/RIF assay to the report of results and to results confirmed by physicians in outpatient settings were 0 (0–1) and 6 (3–7) days, respectively. Median time to treatment after initial evaluation was 7 (4–9) days in patients with the Xpert MTB/RIF assay, but was 21 (7–33.5) days in patients without the Xpert MTB/RIF assay. The Xpert MTB/RIF assay showed acceptable sensitivity and excellent specificity for the diagnosis of PTB and detection of rifampicin resistance in areas with an intermediate TB burden. Additionally, the assay decreased the time to initiation of anti-TB drugs through a shorter TAT.

Kwak, Nakwon; Choi, Sun Mi; Lee, Jinwoo; Park, Young Sik; Lee, Chang-Hoon; Lee, Sang-Min; Yoo, Chul-Gyu; Kim, Young Whan; Han, Sung Koo; Yim, Jae-Joon

2013-01-01

21

Finding the Missing Time in the Instantaneous Turnaround Version of the Twin Paradox  

NASA Astrophysics Data System (ADS)

The change in the reading on a distant Earth clock from the point of view of a traveling twin who instantaneously reverses direction is calculated by considering the behavior of the Earth clock during the turnaround, performing an integral, and taking a limit.

Kohler, Matthew

2006-11-01

22

Flexible time domain averaging technique  

NASA Astrophysics Data System (ADS)

Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.

Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

2013-09-01
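
For context, conventional time domain averaging, the comb-filter baseline that the FTDA method above improves on, simply averages successive signal segments of one shaft period. The sketch below assumes an exact integer number of samples per period; handling non-integer periods (the source of period cutting error) is precisely what the paper addresses and is not reproduced here.

```python
# Sketch: conventional time domain averaging (TDA). Components that are not
# harmonics of the averaging period are attenuated by the segment average.
# Assumes an integer number of samples per period (no period cutting error).
import numpy as np

def time_domain_average(signal, samples_per_period):
    n_periods = len(signal) // samples_per_period
    segments = signal[:n_periods * samples_per_period].reshape(
        n_periods, samples_per_period)
    return segments.mean(axis=0)

fs, f0 = 1000, 20                          # sampling rate and shaft frequency, Hz
t = np.arange(0, 2.0, 1 / fs)
periodic = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 3 * f0 * t)
noisy = periodic + 0.8 * np.random.randn(t.size)
averaged = time_domain_average(noisy, samples_per_period=fs // f0)  # 50 samples
```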

23

Turnaround in cyclic cosmology.  

PubMed

It is speculated how dark energy in a brane world can help reconcile an infinitely cyclic cosmology with the second law of thermodynamics. A cyclic model is described, in which dark energy with w<-1 equation of state leads to a turnaround at a time, extremely shortly before the would-be big rip, at which both volume and entropy of our Universe decrease by a gigantic factor, while very many independent similarly small contracting universes are spawned. The entropy of our model decreases almost to zero at turnaround but increases for the remainder of the cycle by a vanishingly small amount during contraction, empty of matter, then by a large factor during inflationary expansion. PMID:17359014

Baum, Lauris; Frampton, Paul H

2007-02-16

24

Random time averaged diffusivities for Lévy walks  

NASA Astrophysics Data System (ADS)

We investigate a Lévy walk alternating between velocities ±v0 of opposite sign. The sojourn time probability distribution at large times is a power law lacking its mean or second moment. The first case corresponds to a ballistic regime where the ensemble averaged mean squared displacement (MSD) at large times is ⟨x²⟩ ∼ t², the latter to enhanced diffusion with ⟨x²⟩ ∼ t^ν, 1 < ν < 2. The correlation function and the time averaged MSD are calculated. In the ballistic case, the deviations of the time averaged MSD from a purely ballistic behavior are shown to be distributed according to a Mittag-Leffler density function. In the enhanced diffusion regime, the fluctuations of the time averaged MSD vanish at large times, yet very slowly. In both cases we quantify the discrepancy between the time averaged and ensemble averaged MSDs.

Froemberg, D.; Barkai, E.

2013-07-01
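
The time-averaged MSD referred to above is the standard sliding-window estimate over a single trajectory, while the ensemble average is taken over many trajectories at a fixed lag. A minimal sketch of the two estimators, applied here to ordinary random walks rather than Lévy walks purely to show the definitions:

```python
# Sketch: time-averaged MSD of one trajectory vs ensemble-averaged MSD over
# many trajectories. Simple random walks stand in for the Lévy walk here.
import numpy as np

def time_averaged_msd(x, lag):
    """Sliding-window average of (x(t+lag) - x(t))^2 along one trajectory."""
    disp = x[lag:] - x[:-lag]
    return np.mean(disp ** 2)

def ensemble_msd(trajectories, lag):
    """Average squared displacement from the start, taken across trajectories."""
    return np.mean([(x[lag] - x[0]) ** 2 for x in trajectories])

rng = np.random.default_rng(0)
trajs = [np.cumsum(rng.choice([-1.0, 1.0], size=10_000)) for _ in range(200)]
print(time_averaged_msd(trajs[0], lag=100), ensemble_msd(trajs, lag=100))
```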

25

Phase II of a Six sigma Initiative to Study DWPF SME Analytical Turnaround Times: SRNL's Evaluation of Carbonate-Based Dissolution Methods  

Microsoft Academic Search

The Analytical Development Section (ADS) and the Statistical Consulting Section (SCS) of the Savannah River National Laboratory (SRNL) are participating in a Six Sigma initiative to improve the Defense Waste Processing Facility (DWPF) Laboratory. The Six Sigma initiative has focused on reducing the analytical turnaround time of samples from the Slurry Mix Evaporator (SME) by developing streamlined sampling and analytical

Thomas

2005-01-01

26

Efficient Algorithms for Average Completion Time Scheduling  

NASA Astrophysics Data System (ADS)

We analyze the competitive ratio of algorithms for minimizing (weighted) average completion time on identical parallel machines and prove that the well-known shortest remaining processing time algorithm (SRPT) is 5/4-competitive w.r.t. the average completion time objective. For weighted completion times we give a deterministic algorithm with competitive ratio 1.791 + o(m). This ratio holds for preemptive and non-preemptive scheduling.

Sitters, René
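
The SRPT rule analysed above always runs the job with the least remaining work, preempting whenever a shorter job arrives. The sketch below simulates only the single-machine special case and reports the average completion time; the paper's parallel-machine analysis is not reproduced here.

```python
# Sketch: preemptive shortest remaining processing time (SRPT) on one machine,
# reporting the average completion time. Single-machine simplification for
# illustration; the paper treats identical parallel machines.
import heapq

def srpt_average_completion(jobs):
    """jobs: list of (release_time, processing_time) tuples."""
    jobs = sorted(jobs)                       # order by release time
    ready, done = [], []                      # ready: heap of (remaining, id)
    t, i = 0.0, 0
    while i < len(jobs) or ready:
        if not ready:                         # machine idle: jump to next release
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(ready, (jobs[i][1], i))
            i += 1
        rem, jid = heapq.heappop(ready)       # job with least remaining work
        next_release = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(rem, next_release - t)      # run until done or next arrival
        t += run
        if rem - run > 1e-12:
            heapq.heappush(ready, (rem - run, jid))
        else:
            done.append(t)                    # completion time of this job
    return sum(done) / len(done)

print(srpt_average_completion([(0, 3), (1, 1), (2, 4)]))  # -> 4.666...
```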

27

Statistics of time averaged atmospheric scintillation  

SciTech Connect

A formulation has been constructed to recover the statistics of the moving average of the scintillation Strehl from a discrete set of measurements. A program of airborne atmospheric propagation measurements was analyzed to find the correlation function of the relative intensity over displaced propagation paths. The variance in continuous moving averages of the relative intensity was then found in terms of the correlation functions. An empirical formulation of the variance of the continuous moving average of the scintillation Strehl has been constructed. The resulting characterization of the variance of the finite time averaged Strehl ratios is being used to assess the performance of an airborne laser system.

Stroud, P.

1994-02-01

28

Ambulance handovers: can a dedicated ED nurse solve the delay in ambulance turnaround times?  

PubMed

With ever-increasing concern over ambulance handover delays, this paper looks at the impact of dedicated A&E nurses for ambulance handovers and the effect they can have on ambulance waiting times. It demonstrates that although such roles can bring about reduced waiting times, using this as the sole method to achieve these targets would require unacceptably low staff utilisation. PMID:23636604

Clarey, A; Allen, M; Brace-McDonnell, S; Cooke, M W

2014-05-01

29

Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.  

PubMed

Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which, 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors. PMID:22270786

Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J

2012-08-01

30

Ways to reduce patient turnaround time and improve service quality in emergency departments  

Microsoft Academic Search

Purpose – Recent years have witnessed a fundamental change in the function of emergency departments (EDs). The emphasis of the ED shifts from triage to saving lives in shock-trauma rooms equipped with state-of-the-art equipment. At the same time, walk-in clinics are being set up to treat ambulatory-type patients. Simultaneously, ED overcrowding has become a common sight in many…

David Sinreich; Yariv Marmor

2005-01-01

31

Short Planning Turn-Around Time and High Flexibility of the Swedish Astronomy/Aeronomy Satellite Odin  

NASA Astrophysics Data System (ADS)

The Swedish small satellite Odin combines two different scientific disciplines, astronomy and atmospheric research. It has a 3-axis stabilized, zero-momentum attitude control system that operates in two different modes, an inertial-pointing astronomy mode and an atmospheric mode providing pointing/scanning of the Earth limb. The reference attitude for the atmospheric mode is created by a versatile reference attitude generator, which also provides the capability to observe and track any celestial or solar system object. The absolute pointing requirements in the scientific modes are 15 arc-seconds for inertial pointing and 1.2 arc-minutes for reconstructed attitude in the atmospheric mode. One part of the Odin mission is to observe the formation of the ozone holes at the poles in spring and fall, and another is to observe moving celestial objects such as comets. The unpredictability of when such observation opportunities occur, together with rapidly changing scientific demands, puts high requirements on a short turn-around time for the Operations Cycle - science demand, planning and command generation, reconstruction of attitude history, feedback and delivery to scientists. A small, efficient team, including members who were deeply involved with and responsible for the design and development of the system, has been key to providing this short turn-around time for the Operations Cycle. It is possible to safely reconfigure the satellite for either of the two disciplines at short notice and to have a reconstructed attitude available to the users shortly after the observations have been made. The small, efficient team, together with modular software based on MATLAB, also facilitates rapidly meeting new demands on planning and attitude reconstruction from the users.

Jakobsson, B.; Karlsson, T.; Nylund, M.; Olsson, T.; Vinterhav, E.

2002-01-01

32

The Turnaround Fallacy  

ERIC Educational Resources Information Center

For as long as there have been struggling schools in America's cities, there have been efforts to turn them around. But overall, school turnaround efforts have consistently fallen far short of hopes and expectations. Quite simply, turnarounds are not a scalable strategy for fixing America's troubled urban school systems. Fortunately, findings from…

Smarick, Andy

2010-01-01

33

Time-average dynamic speckle interferometry  

NASA Astrophysics Data System (ADS)

For the study of microscopic processes occurring at the structural level in solids and in thin biological objects, the method of dynamic speckle interferometry has been successfully applied. However, the method has disadvantages. The purpose of this report is to acquaint colleagues with a time-averaging method in dynamic speckle interferometry of microscopic processes that eliminates these shortcomings. The main idea of the method is to choose an averaging time that exceeds the characteristic correlation (relaxation) time of the most rapid process. The theory of the method for a thin phase object and for a reflecting object is given. Results are presented for an experiment on the high-cycle fatigue of steel and for an experiment estimating the biological activity of a monolayer of cells cultivated on a transparent substrate. It is shown that the method allows real-time visualization of the accumulation of fatigue damage and reliable estimation of the activity of cells with and without viruses.

Vladimirov, A. P.

2014-05-01

34

Ensemble vs. time averages in financial time series analysis  

NASA Astrophysics Data System (ADS)

Empirical analysis of financial time series suggests that the underlying stochastic dynamics are not only non-stationary, but also exhibit non-stationary increments. However, financial time series are commonly analyzed using the sliding interval technique that assumes stationary increments. We propose an alternative approach that is based on an ensemble over trading days. To determine the effects of time averaging techniques on analysis outcomes, we create an intraday activity model that exhibits periodic variable diffusion dynamics and we assess the model data using both ensemble and time averaging techniques. We find that ensemble averaging techniques detect the underlying dynamics correctly, whereas sliding interval approaches fail. As many traded assets exhibit characteristic intraday volatility patterns, our work implies that ensemble averaging approaches will yield new insight into the study of financial markets' dynamics.

Seemann, Lars; Hua, Jia-Chen; McCauley, Joseph L.; Gunaratne, Gemunu H.

2012-12-01
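
The contrast drawn above is between a sliding-interval (time) average computed along one long concatenated series and an ensemble average taken across trading days at each intraday time. A sketch of the two estimators on synthetic returns with a built-in intraday volatility pattern (all numbers are invented):

```python
# Sketch: ensemble average across days vs sliding-interval time average, on
# synthetic intraday returns with a periodic volatility pattern.
import numpy as np

rng = np.random.default_rng(1)
days, steps = 250, 390                        # e.g. one year of 1-minute returns
intraday_vol = 1.0 + 0.5 * np.cos(2 * np.pi * np.arange(steps) / steps)
returns = rng.standard_normal((days, steps)) * intraday_vol

# Ensemble average: across days at each intraday time (preserves the pattern).
ensemble_var = (returns ** 2).mean(axis=0)

# Sliding-interval average over the concatenated series (smears the pattern out).
flat_sq = returns.ravel() ** 2
window = steps
time_avg_var = np.convolve(flat_sq, np.ones(window) / window, mode="valid")

print(ensemble_var[:3], time_avg_var[:3])
```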

35

Rapid turnaround scanning probe nanolithography.  

PubMed

Scanning probe nanolithography (SPL) has demonstrated its potential in a variety of applications like 3D nanopatterning, 'direct development' lithography, dip-pen deposition or patterning of self-assembled monolayers. One of the main issues holding back SPL has been the limited throughput for patterning and imaging. Here we present a complete lithography and metrology system based on thermomechanical writing into organic resists. Metrology is carried out using a thermoelectric topography sensing method. More specifically, we demonstrate a system with a patterning pixel clock of 500 kHz, 20 mm s(-1) linear scan speed, a positioning accuracy of 10 nm, a read-back frequency bandwidth of 100,000 line-pairs s(-1) and a turnaround time from patterning to qualifying metrology of 1 min. Thus, we demonstrate a nanolithography system capable of implementing rapid turnaround. PMID:21602616

Paul, Philip C; Knoll, Armin W; Holzner, Felix; Despont, Michel; Duerig, Urs

2011-07-01

36

Rapid turnaround scanning probe nanolithography  

NASA Astrophysics Data System (ADS)

Scanning probe nanolithography (SPL) has demonstrated its potential in a variety of applications like 3D nanopatterning, 'direct development' lithography, dip-pen deposition or patterning of self-assembled monolayers. One of the main issues holding back SPL has been the limited throughput for patterning and imaging. Here we present a complete lithography and metrology system based on thermomechanical writing into organic resists. Metrology is carried out using a thermoelectric topography sensing method. More specifically, we demonstrate a system with a patterning pixel clock of 500 kHz, 20 mm s(-1) linear scan speed, a positioning accuracy of 10 nm, a read-back frequency bandwidth of 100,000 line-pairs s(-1) and a turnaround time from patterning to qualifying metrology of 1 min. Thus, we demonstrate a nanolithography system capable of implementing rapid turnaround.

Paul, Philip C.; Knoll, Armin W.; Holzner, Felix; Despont, Michel; Duerig, Urs

2011-07-01

37

Analysis of Time Series by Moving Averages.  

ERIC Educational Resources Information Center

A computer program for estimation of trends in longitudinal research based on the moving average was presented and applied to a sample set of data. The program allows both standard and weighted procedures for suppression of incidental variation. (Author/JKS)

Krus, David J.; Ceurvorst, Robert W.

1978-01-01
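
As a small illustration of the two procedures mentioned above, a standard (unweighted) moving average and a weighted moving average can both be written as short convolutions. The window length and weights below are arbitrary examples, not the program's defaults.

```python
# Sketch: standard and weighted moving averages for trend estimation.
import numpy as np

def moving_average(y, window=5):
    return np.convolve(y, np.ones(window) / window, mode="valid")

def weighted_moving_average(y, weights):
    w = np.asarray(weights, dtype=float)
    return np.convolve(y, w / w.sum(), mode="valid")

y = np.array([3.0, 4.5, 4.0, 6.0, 5.5, 7.0, 6.5, 8.0, 7.5, 9.0])
print(moving_average(y, 3))
print(weighted_moving_average(y, [1, 2, 1]))  # extra weight on the centre point
```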

38

Phase II of a Six sigma Initiative to Study DWPF SME Analytical Turnaround Times: SRNL's Evaluation of Carbonate-Based Dissolution Methods  

SciTech Connect

The Analytical Development Section (ADS) and the Statistical Consulting Section (SCS) of the Savannah River National Laboratory (SRNL) are participating in a Six Sigma initiative to improve the Defense Waste Processing Facility (DWPF) Laboratory. The Six Sigma initiative has focused on reducing the analytical turnaround time of samples from the Slurry Mix Evaporator (SME) by developing streamlined sampling and analytical methods [1]. The objective of Phase I was to evaluate the sub-sampling of a larger sample bottle and the performance of a cesium carbonate (Cs2CO3) digestion method. Successful implementation of the Cs2CO3 fusion method in the DWPF would have important time savings and convenience benefits because this single digestion would replace the dual digestion scheme now used. A single digestion scheme would result in more efficient operations in both the DWPF shielded cells and the inductively coupled plasma-atomic emission spectroscopy (ICP-AES) laboratory. By taking a small aliquot of SME slurry from a large sample bottle and dissolving the vitrified SME sample with carbonate fusion methods, an analytical turnaround time reduction from 27 hours to 9 hours could be realized in the DWPF. This analytical scheme has the potential for not only dramatically reducing turnaround times, but also streamlining operations to minimize wear and tear on critical shielded cell components that are prone to fail, including the Hydragard(TM) sampling valves and manipulators. Favorable results from the Phase I tests [2] led to the recommendation for a Phase II effort as outlined in the DWPF Technical Task Request (TTR) [3]. There were three major tasks outlined in the TTR, and SRNL issued a Task Technical and QA Plan [4] with a corresponding set of three major task activities: (1) Compare weight percent (wt%) total solids measurements of large volume samples versus peanut vial samples. (2) Evaluate Cs2CO3 and K2CO3 fusion methods using DWPF simulated glass (part 1) and (to the extent possible) using DWPF radioactive glass (part 2). (3) Recommend specific equipment to implement the new methods and work with SRNL's Equipment Engineering Section (EES) to ensure compatibility of the equipment with the DWPF shielded cells. The focus of this report is major task activity No.2 (parts 1 and 2). The other two major task activities are to be addressed separately. The measurements supporting task activity No.1 are being conducted at the DWPF. EES is currently designing and fabricating the DWPF cell equipment needed for the new method as part of major task activity No.3.

Edwards, Thomas

2005-09-01

39

Use of the Rapid BinaxNOW Malaria Test in a 24-Hour Laboratory Associated with Accurate Detection and Decreased Malaria Testing Turnaround Times in a Pediatric Setting Where Malaria Is Not Endemic  

PubMed Central

The impact of implementing the BinaxNow malaria test was evaluated. From 288 tests, 34 malaria cases were detected. Laboratory turnaround time decreased from 9.8 to 1.7 h for report of any Plasmodium spp., 10.2 to 1.6 h for P. falciparum, and 8.6 to 1.1 h for any result.

Blecker-Shelly, D. L.

2013-01-01

40

The Gemini Observatory Fast-Turnaround Program  

NASA Astrophysics Data System (ADS)

Gemini's Fast-Turnaround Program is intended to greatly decrease the time from having an idea to acquiring the supporting data. The program will combine frequent proposal submission opportunities, rapid review, and fast preparation and execution of observations. We describe how the scheme will operate, and outline progress made towards its implementation.

Mason, Rachel; Adamson, A.; Crabtree, D.; Cote, S.; Kissler-Patig, M.; Levenson, N.

2014-01-01

41

Real-time instrument averages 100 data sets  

NASA Technical Reports Server (NTRS)

The instrument generates an average curve of 100 consecutive cycles of any function that occurs as a time-varying electrical signal. The device has been used to average parameters on a 1975 Chevrolet V-8 engine and on a Continental six-cylinder aircraft engine.

Birchenough, A. G.; Rice, W. J.; Toma, G. B.

1979-01-01

42

Making Sense of School Turnarounds  

ERIC Educational Resources Information Center

Today, in a sector flooded with $3.5 billion in School Improvement Grant funds and the resulting improvement plans, there's great faith that "turnaround" strategies are a promising way to tackle stubborn problems with persistently low-performing schools. Unlike traditional reform efforts, with their emphasis on incremental improvement, turnarounds

Hess, Frederick M.

2012-01-01

43

Off the Clock: What More Time Can (and Can't) Do for School Turnarounds. Education Sector Reports  

ERIC Educational Resources Information Center

If less time in the classroom is a cause of poor student performance, can adding more time be the cure? This strategy underlies a major effort to fix the nation's worst public schools. Billions of federal stimulus dollars are being spent to expand learning time on behalf of disadvantaged children. And extended learning time (ELT) is being proposed…

Silva, Elena

2012-01-01

44

40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?  

Code of Federal Regulations, 2013 CFR

...arithmetic averages into appropriate averaging times and units? 60.1755 Section 60...SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste Combustion Units...arithmetic averages into appropriate averaging times and units? (a) Use the equation...

2013-07-01

45

Determining Average Program Execution Times and their Variance  

Microsoft Academic Search

This paper presents a general framework for determining average program execution times and their variance, based on the program's interval structure and control dependence graph. Average execution times and variance values are computed using frequency information from an optimized counter-based execution profile of the program. 1 Introduction It is important for a compiler to obtain estimates of execution times for

Vivek Sarkar

1989-01-01
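
At its simplest, the frequency-weighted arithmetic behind such estimates is an expected value and variance over the program's possible execution paths. The sketch below is a drastic simplification of the paper's interval-structure framework, using invented path probabilities and costs purely to show the calculation.

```python
# Sketch: expected execution time and variance for a region modelled as
# mutually exclusive paths with known probabilities and costs (hypothetical).
paths = [
    (0.70, 120.0),    # (probability, time in cycles): common fast path
    (0.25, 480.0),    # slower path
    (0.05, 2000.0),   # rare error-handling path
]

mean = sum(p * t for p, t in paths)
variance = sum(p * (t - mean) ** 2 for p, t in paths)
print(f"E[T] = {mean:.1f} cycles, Var[T] = {variance:.1f}")
```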

46

Improving the Average Response Time in Collective I/O  

SciTech Connect

In collective I/O, MPI processes exchange requests so that the rearranged requests can result in the shortest file system access time. Scheduling the exchange sequence determines the response time of participating processes. Existing implementations that simply follow the increasing order of file offsets do not necessarily produce the best performance. To minimize the average response time, we propose three scheduling algorithms that consider the number of processes per file stripe and the number of accesses per process. Our experimental results demonstrate improvements of up to 50% in the average response time using two synthetic benchmarks and a high-resolution climate application.

Jin, Chen; Sehrish, Saba; Liao, Wei-keng; Choudhary, Alok; Schuchardt, Karen L.

2011-09-21

47

Scaling of average weighted shortest path and average receiving time on weighted hierarchical networks  

NASA Astrophysics Data System (ADS)

Recent work on networks has focused on weighted hierarchical networks, which are significantly different from un-weighted hierarchical networks. In this paper we study a family of weighted hierarchical networks which are recursively defined from an initial uncompleted graph, in which the weights of edges have been assigned different values with a certain scale. Firstly, we study analytically the average weighted shortest path (AWSP) on the weighted hierarchical networks. Using a recursive method, we determine explicitly the AWSP. The obtained rigorous solution shows that the networks grow unbounded but with the logarithm of the network size, while the weighted shortest paths stay bounded. Then, based on a biased random walk, we study the mean first-passage time (MFPT) between a hub node and any peripheral node. Finally, we deduce the analytical expression of the average of the MFPTs for a random walker originating from any node to first visit a hub node, which is named the average receiving time (ART). The obtained result shows that the ART is bounded or grows sublinearly with the network order, depending on the number of initial nodes and the weight factor, or grows quadratically with the iteration.

Sun, Yu; Dai, Meifeng; Xi, Lifeng

48

Adjoint sensitivity analysis of time averaged quantities for unsteady flows  

NASA Astrophysics Data System (ADS)

Sensitivity analysis is an essential ingredient for data assimilation, aerodynamic design, uncertainty quantification and optimal flow control. In particular, the adjoint sensitivity analysis method has been shown to solve very high dimensional optimization problems typically found in these applications. This talk focuses on recent developments in extending adjoint sensitivity analysis to unsteady flows. The adjoint equation of unsteady flows must be integrated backwards in time. Each backward time step must use the flow solution at the corresponding time. As a result, the entire time history of the flow solution must be either stored or recalculated. The invention of checkpointing schemes provides an economic solution to this challenge. In particular, the dynamic checkpointing scheme makes this solution more practical for computational fluid dynamics problems. In unsteady flows, the quantities of interest are often long time averages. We demonstrate that sensitivity analysis of these long time averaged quantities poses significant new challenges. A novel windowing scheme is developed to compute correct sensitivities for periodic unsteady flows, such as in laminar vortex shedding. Initial investigation of sensitivity analysis of chaotic unsteady flows, i.e., transitional and turbulent flows, is also discussed.

Wang, Qiqi

2011-11-01

49

The Turnaround School Library Program  

Microsoft Academic Search

For many students, particularly for those who are at-risk for school failure, an emphasis on instructional strategies without including the resiliency-building factors will not increase academic achievement. Students thrive academically when instructional practices are effective and the environment in which they learn is nurturing and supporting. The "turnaround" school library integrates effective instruction into a supportive, nurturing environment. This chapter…

Jami Biles Jones; Alana M. Zambone

50

Average waiting time profiles of uniform DQDB model  

SciTech Connect

The Distributed Queue Dual Bus (DQDB) system consists of a linear arrangement of N nodes that communicate with each other using two contra-flowing buses; the nodes use an extremely simple protocol to send messages on these buses. This simple, but elegant, system has been found to be very challenging to analyze. We consider a simple and uniform abstraction of this model to highlight the fairness issues in terms of average waiting time. We introduce a new approximation method to analyze the performance of DQDB system in terms of the average waiting time of a node expressed as a function of its position. Our approach abstracts the intimate relationship between the load of the system and its fairness characteristics, and explains all basic behavior profiles of DQDB observed in previous simulation. For the uniform DQDB with equal distance between adjacent nodes, we show that the system operates under three basic behavior profiles and a finite number of their combinations that depend on the load of the network. Consequently, the system is not fair at any load in terms of the average waiting times. In the vicinity of a critical load of 1 - 4/N, the uniform network runs into a state akin to chaos, where its behavior fluctuates from one extreme to the other with a load variation of 2/N. Our analysis is supported by simulation results. We also show that the main theme of the analysis carries over to the general (non-uniform) DQDB; by suitably choosing the inter-node distances, the DQDB can be made fair around some loads, but such system will become unfair as the load changes.

Rao, N.S.V. [Oak Ridge National Lab., TN (United States); Maly, K.; Olariu, S.; Dharanikota, S.; Zhang, L.; Game, D. [Old Dominion Univ., Norfolk, VA (United States). Dept. of Computer Science

1993-09-07

51

Forecasts of time averages with a numerical weather prediction model  

NASA Technical Reports Server (NTRS)

Forecasts of time averages of 1-10 days in duration by an operational numerical weather prediction model are documented for the global 500 mb height field in spectral space. Error growth in very idealized models is described in order to anticipate various features of these forecasts and what the results might be if forecasts longer than 10 days were carried out by present-day numerical weather prediction models. The data set for this study is described, and the equilibrium spectra and error spectra are documented; then, the total error is documented. It is shown how forecasts can immediately be improved by removing the systematic error, by using statistical filters, and by ignoring forecasts beyond about a week. Temporal variations in the error field are also documented.

Roads, J. O.

1986-01-01

52

Combining Quick-Turnaround and Batch Workloads at Scale  

NASA Technical Reports Server (NTRS)

NAS uses PBS Professional to schedule and manage the workload on Pleiades, an 11,000+ node 1B cluster. At this scale the user experience for quick-turnaround jobs can degrade, which led NAS initially to set up two separate PBS servers, each dedicated to a particular workload. Recently we have employed PBS hooks and scheduler modifications to merge these workloads together under one PBS server, delivering sub-1-minute start times for the quick-turnaround workload, and enabling dynamic management of the resources set aside for that workload.

Matthews, Gregory A.

2012-01-01

53

Sample Language of Modified Contract Elements from Existing CBAs, MOUs, or EWAs to Support Turnaround  

ERIC Educational Resources Information Center

Organized by the key conditions areas for turnaround, "People, Program, Time and Money," this tool offers sample language for each contract element to serve as a model for modifications from a traditional CBA that may support a district's turnaround efforts. Sample language is offered from existing provisions in district-wide collective bargaining…

Mass Insight Education (NJ1), 2011

2011-01-01

54

Time dependence of average regional sulfur oxide concentrations  

SciTech Connect

The atmospheric burden of SO₂ and SO₄ in a large regional area has been calculated as a function of time during the July 1978 SURE intensive period. A formalism is developed in which a highly parameterized description of deposition and chemical conversion is combined with a realistic emissions inventory and with realistic trajectories as calculated with the BNL long-range air quality model, AIRSOX. Temporal variations in the regional atmospheric burden are calculated as being due to variations in the advection of material through the region, with the pollutant being advected having a characteristic half-life for its formation and decay. The calculated atmospheric burdens are divided by a nominal mixing height to derive an average ground-level concentration. Comparison is made with the corresponding concentrations derived from SURE observations. The regions for which these comparisons have been made are in the Midwest and East and are similar in size (approximately 5 × 10⁵ km²) and location to areas commonly affected by regional sulfate episodes.

Kleinman, L I; Carney, T A; Meyers, R E

1980-03-01

55

Series Overview. Sustaining School Turnaround at Scale. Brief 1  

ERIC Educational Resources Information Center

Members of the non-profit organization Education Resource Strategies (ERS) have worked for over a decade with leaders of urban school systems to help them organize talent, time and technology to support great schools at scale. One year into the Federal program they are noticing significant differences in district turnaround approaches, engagement…

Education Resource Strategies, 2012

2012-01-01

56

40 CFR 60.3042 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?  

Code of Federal Regulations, 2013 CFR

...averages into the appropriate averaging times and units? 60.3042 Section 60...SOURCES Emission Guidelines and Compliance Times for Other Solid Waste Incineration Units...averages into the appropriate averaging times and units? (a) Use Equation 1...

2013-07-01

57

School Turnarounds: The Essential Role of Districts  

ERIC Educational Resources Information Center

The inspiration for this book was a crucial observation: that if the school turnaround movement is to have widespread and lasting consequences, it will need to incorporate meaningful district involvement in its efforts. The result is a volume that considers school turnaround efforts at the district level, examining the evidence thus far and…

Zavadsky, Heather

2012-01-01

58

Turnaround Schools and the Leadership They Require  

ERIC Educational Resources Information Center

"Turnaround Schools and the Leadership They Require" was commissioned by the Canadian Education Association (CEA) in 2006. The research, conducted by Kenneth Leithwood and Tiiu Strauss, was a two-year study of Ontario schools identified as in need of "turnaround" meaning that a significant improvement in school achievement was required. The study…

Leithwood, Kenneth; Strauss, Tiiu

2008-01-01

59

The average rate of change for continuous time models  

Microsoft Academic Search

The average rate of change (ARC) is a concept that has been misunderstood in the applied longitudinal data analysis literature, where the slope from the straight-line change model is often thought of as though it were the ARC. The present article clarifies the concept of ARC and shows unequivocally the mathematical definition and meaning of ARC when measurement is continuous…

Ken Kelley

2009-01-01
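
To make the distinction drawn in this abstract concrete, here is a small sketch with made-up exponential data: the average rate of change over an interval (total change divided by elapsed time) generally differs from the slope of a least-squares straight line fitted to the same trajectory.

# Average rate of change (ARC) over [t0, t1] versus the fitted straight-line slope.
import numpy as np

t = np.linspace(0.0, 10.0, 101)
y = np.exp(0.3 * t)                         # a nonlinear (exponential) trajectory

arc = (y[-1] - y[0]) / (t[-1] - t[0])       # ARC: total change over elapsed time
slope = np.polyfit(t, y, 1)[0]              # slope of the least-squares line

print("average rate of change:", arc)       # about 1.9 for this trajectory
print("straight-line slope:   ", slope)     # about 1.7 -- not the same quantity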

60

Situational and organizational determinants of turnaround  

Microsoft Academic Search

Purpose – To test the ability of situational variables, manageable pre-decline resources, and specific firm responses to decline to classify performance outcomes (turnaround vs non-turnaround) in declining firms. Design\\/methodology\\/approach – Using a longitudinal methodology and a multi-firm sample, the paper studies the relative role of situational factors concerning the environment and a firm's decline, along with various internal resources and

John D. Francis; Ashay B. Desai

2005-01-01

61

Scheduling parallel tasks to minimize average response time  

Microsoft Academic Search

A parallelizable (or malleable) task is one which can be run on an arbitrary number of processors, with a task execution time that depends on the number of processors allotted to it. Consider a system of M independent parallelizable tasks which are to be scheduled without preemption on a parallel computer consisting of P identical processors. For each task, the execution time is a known function of the number of processors allotted to it. The goal is to find (1)…

John Turek; Uwe Schwiegelshohn; Joel L. Wolf; Philip S. Yu; Prasoon Tiwari; Jason Glasgow

1994-01-01

62

The time-averaged paleomagnetic field 0–5 Ma  

Microsoft Academic Search

Persistent departures from the geocentric axial dipole field model of the time-averaged paleomagnetic field over the past 5 Myr have been analyzed using oceanic data from deep-sea cores and continental data from igneous rocks and sediments. The data set comprises the equivalent of 9490 spot readings of the field (5831 normal and 3659 reverse) from 930 groups of data.

M. W. McElhinny; P. L. McFadden; R. T. Merrill

1996-01-01

63

Detection of Mycobacterium avium complex DNA directly in clinical respiratory specimens: opportunities for improved turn-around time and cost savings.  

PubMed

We developed, evaluated, and implemented a Taqman multiplex real-time polymerase chain reaction (PCR) assay for the detection of Mycobacterium avium complex (MAC), targeting the 16S-23S rRNA internal transcribed spacer, which we have combined with an existing Mycobacterium tuberculosis complex assay for use directly in clinical respiratory specimens. Evaluation of the performance of this assay for MAC detection included 464 clinical respiratory specimens tested prospectively. This real-time PCR assay was found overall to have a sensitivity of 71.1%, a specificity of 99.5%, a positive predictive value of 98.0%, and a negative predictive value of 90.2% for MAC. The assay provides results prior to the availability of cultured material and identification, most within 24 h of specimen receipt, and may reduce the need to culture MAC-PCR-positive specimens when susceptibility testing is not requested. Additionally, we have found significant cost savings of approximately $21.00 per specimen and staff time reductions of 3.75 h per specimen with implementation of this assay. PMID:24612561

Tran, Anh C; Halse, Tanya A; Escuyer, Vincent E; Musser, Kimberlee A

2014-05-01

64

Prospective evaluation of the VITEK MS for the routine identification of bacteria and yeast in the clinical microbiology laboratory: assessment of accuracy of identification and turnaround time.  

PubMed

This study assessed the accuracy of bacterial and yeast identification using the VITEK MS, and the time to reporting of isolates before and after its implementation in routine clinical practice. Three hundred and sixty-two isolates of bacteria and yeast, consisting of a variety of clinical isolates and American Type Culture Collection strains, were tested. Results were compared with reference identifications from the VITEK 2 system and with 16S rRNA sequence analysis. The VITEK MS provided an acceptable identification to species level for 283 (78%) isolates. Considering organisms for which genus-level identification is acceptable for routine clinical care, 315 isolates (87%) had an acceptable identification. Six isolates (2%) were identified incorrectly, five of which were Shigella species. Finally, the time for reporting the identifications was decreased significantly after implementation of the VITEK MS for a total mean reduction in time of 10.52 h (P<0.0001). Overall, accuracy of the VITEK MS was comparable or superior to that from the VITEK 2. The findings were also comparable to other studies examining the accuracy of the VITEK MS, although differences exist, depending on the diversity of species represented as well as on the versions of the databases used. The VITEK MS can be incorporated effectively into routine use in a clinical microbiology laboratory and future expansion of the database should provide improved accuracy for the identification of micro-organisms. PMID:24227878

Charnot-Katsikas, Angella; Tesic, Vera; Boonlayangoor, Sue; Bethel, Cindy; Frank, Karen M

2014-02-01

65

Experimental and theoretical studies of time-averaged and time resolved rotor heat transfer  

NASA Technical Reports Server (NTRS)

Efforts in support of the Space Shuttle Main Engine (SSME) structural durability program have concentrated on obtaining detailed time-averaged and time-resolved (or phase-locked) measurements on a full-scale rotating turbine both with and without cold gas injection and on theoretical studies designed to improve the prediction capability for these turbine flows. The experimental efforts have concentrated on use of the Garrett TFE 731-2 hp turbine. However, it has been possible to apply the theoretical efforts to predicting heat-flux distributions obtained for two additional turbines - i.e., (1) the Garrett low aspect ratio turbine (LART) and (2) the Teledyne 702 turbine. The experimental technique is the short duration, shock-tunnel approach, in which fast-response, thin-film resistance thermometers are used to measure surface temperature histories at prescribed locations on the turbine component parts. Heat-flux values are then inferred from the temperature histories by using standard data reduction procedures. The turbine being used is the Garrett TFE 731-2 hp stage, and both the nozzle guide vanes and the rotor blades are heavily instrumented with thin-film heat-flux gauges. Depending on how the data from a particular heat-flux gauge are recorded, one can get either time-resolved (or phase-locked) or time-averaged results. Both types of data are illustrated.

Dunn, M. G.; Rae, W. J.; Rigby, D. L.

1987-01-01

66

On the evaluation of Hardy's thermomechanical quantities using ensemble and time averaging  

NASA Astrophysics Data System (ADS)

An ensemble averaging approach was investigated for its accuracy and convergence against time averaging in computing continuum quantities such as stress, heat flux and temperature from atomistic scale quantities. For this purpose, ensemble averaging and time averaging were applied to evaluate Hardy's thermomechanical expressions (Hardy 1982 J. Chem. Phys. 76 622-8) in equilibrium conditions at two different temperatures as well as a nonequilibrium process due to shock impact on a Ni crystal modeled using molecular dynamics simulations. It was found that under equilibrium conditions, time averaging requires selection of a time interval larger than the critical time interval to obtain convergence, where the critical time interval can be estimated using the elastic properties of the material. The reason for this is because of the significant correlations among the computed thermomechanical quantities at different time instants employed in computing their time average. On the other hand, the computed thermomechanical quantities from different realizations in ensemble averaging are statistically independent, and thus convergence is always guaranteed. The computed stress, heat flux and temperature show noticeable difference in their convergence behavior while their confidence intervals increase with temperature. Contrary to equilibrium settings, time averaging is not equivalent to ensemble averaging in the case of shock wave propagation. Time averaging was shown to have poor performance in computing various thermomechanical fields by either oversmoothing the fields or failing to remove noise.

Fu, Yao; To, Albert C.

2013-07-01
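
The convergence point made above can be illustrated with a toy AR(1) process rather than Hardy's thermomechanical expressions or a molecular dynamics run (the sizes and the correlation parameter are assumptions of the sketch): time averages of strongly correlated samples scatter far more than ensemble averages built from the same number of independent samples.

# Toy comparison: time averaging over correlated AR(1) samples versus ensemble
# averaging over independent samples with the same stationary variance.
import numpy as np

rng = np.random.default_rng(0)
phi, n = 0.95, 2000                     # strong correlation; illustrative sizes

def ar1_time_average():
    x, total = 0.0, 0.0
    for _ in range(n):
        x = phi * x + rng.standard_normal()
        total += x
    return total / n                    # average of n correlated samples

def ensemble_average():
    # n independent draws from the AR(1) stationary distribution, then averaged.
    return np.mean(rng.standard_normal(n) / np.sqrt(1.0 - phi**2))

time_avgs = [ar1_time_average() for _ in range(200)]
ens_avgs = [ensemble_average() for _ in range(200)]
print("spread (std) of time averages:    ", np.std(time_avgs))
print("spread (std) of ensemble averages:", np.std(ens_avgs))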

67

Fast Clustering for Time-series Data with Average-time-sequence-vector Generation Based on Dynamic Time Warping  

NASA Astrophysics Data System (ADS)

This paper proposes a fast clustering method for time-series data based on average time sequence vectors. A clustering procedure based on an exhaustive search method is time-consuming, although its result typically exhibits high quality. BIRCH, which reduces the number of examples by data squashing based on a CF (Clustering Feature) tree data structure, represents an effective solution for such a method when the data set consists of numerical attributes only. For time-series data, however, a straightforward application of BIRCH based on a Euclidean distance for a pair of sequences fails miserably, since such a distance typically differs from human perception. A dissimilarity measure based on DTW (Dynamic Time Warping) is desirable, but to the best of our knowledge no methods have been proposed for time-series data in the context of data squashing. In order to circumvent this problem, we propose the DTWS (Dynamic Time Warping Squashed) tree, which employs a dissimilarity measure based on DTW and compresses time sequences to the average time sequence vector. An average time sequence vector is obtained by a novel procedure which estimates the correct shrinkage of a DTW result. Experiments using the Australian sign language data demonstrate the superiority of the proposed method in terms of correctness of clustering, while its degradation of time efficiency is negligible.

Nakamoto, Kazuki; Yamada, Yuu; Suzuki, Einoshin
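
For readers unfamiliar with the dissimilarity measure the DTWS tree builds on, a minimal textbook dynamic time warping distance is sketched below; it does not reproduce the averaging or squashing procedure of the paper.

# Minimal dynamic time warping (DTW) dissimilarity between two 1-D sequences.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

x = np.sin(np.linspace(0, 2 * np.pi, 50))
y = np.sin(np.linspace(0, 2 * np.pi, 70) - 0.5)       # shifted, different length
print("DTW(x, y) =", dtw_distance(x, y))
print("DTW(x, x) =", dtw_distance(x, x))              # 0 by construction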

68

Textiles, Tariffs, and Turnarounds: Profits Improved.  

ERIC Educational Resources Information Center

The U.S. textile industry may serve as a classic study on regeneration through market forces. The industry has recently made a turnaround in profits after having been recognized as an industry that was losing most of its profits to overseas producers. The reasons for the emerging strength of the industry is that it began to innovate after a…

Aronoff, Craig

1986-01-01

69

Time-averaged fluorescence intensity analysis in fluorescence fluctuation polarization sensitive experiments  

PubMed Central

In fluorescence fluctuation polarization sensitive experiments, the limitations associated with detecting the rotational timescale are usually eliminated by applying fluorescence correlation spectroscopy analysis. In this paper, the variance of the time-averaged fluorescence intensity extracted from the second moment of the measured fluorescence intensity is analyzed in the short time limit, before fluctuations resulting from rotational diffusion average out. Since rotational correlation times of fluorescence molecules are typically much lower than the temporal resolution of the system, independently of the time bins used, averaging over an ensemble of time-averaged trajectories was performed in order to construct the time-averaged intensity distribution, thus improving the signal-to-noise ratio. Rotational correlation times of fluorescein molecules in different viscosities of the medium within the range of the anti-bunching time (1-10 ns) were then extracted using this method.

Turgeman, Lior; Fixler, Dror

2013-01-01

70

Impulsive synchronization schemes of stochastic complex networks with switching topology: Average time approach.  

PubMed

In this paper, a novel impulsive control law is proposed for synchronization of stochastic discrete complex networks with time delays and switching topologies, where average dwell time and average impulsive interval are taken into account. The side effect of time delays is estimated by Lyapunov-Razumikhin technique, which quantitatively gives the upper bound to increase the rate of Lyapunov function. By considering the compensation of decreasing interval, a better impulsive control law is recast in terms of average dwell time and average impulsive interval. Detailed results from a numerical illustrative example are presented and discussed. Finally, some relevant conclusions are drawn. PMID:24657849

Li, Chaojie; Yu, Wenwu; Huang, Tingwen

2014-06-01

71

Relegation and Reversion: Longitudinal Analysis of School Turnaround and Decline  

ERIC Educational Resources Information Center

School turnaround has become a popular strategy for improvement of chronically low-performing schools. Research related to school turnaround has relied substantially upon case studies. These case studies often focus on successful turnarounds and report immediate outcomes, failing to provide information about the sustainability of the results. In…

Hochbein, Craig

2012-01-01

72

Planning approximations to the average length of vehicle routing problems with time window constraints  

Microsoft Academic Search

This paper studies approximations to the average length of vehicle routing problems (VRP) with time window, route duration, and capacity constraints. The approximations are valuable for the strategic and planning analysis of transportation and logistics problems. Using asymptotic properties of vehicle routing problems and the average probability of successfully sequencing a customer with time windows, a new expression to estimate…

Miguel Andres Figliozzi

2009-01-01

73

Distributions of time averages for weakly chaotic systems: The role of infinite invariant density  

NASA Astrophysics Data System (ADS)

Distributions of time averaged observables are investigated using deterministic maps with N indifferent fixed points and N-state continuous time random walk processes associated with them. In a weakly chaotic phase, namely when separation of trajectories is subexponential, maps are characterized by an infinite invariant density. We find that the infinite density can be used to calculate the distribution of time averages of integrable observables with a formula recently obtained by Rebenshtok and Barkai. As an example we calculate distributions of the average position of the particle and average occupation fractions. Our work provides the distributional limit theorem for time averages for a wide class of nonintegrable observables with respect to the infinite invariant density, in other words it deals with the situation where the Darling-Kac-Aaronson theorem does not hold.

Korabel, Nickolay; Barkai, Eli

2013-09-01

74

Minimum Mean Square Error Prediction of Autoregressive Moving Average Time Series.  

National Technical Information Service (NTIS)

A computer program is described and presented for calculating finite memory predictors and prediction variances for autoregressive moving average time series models. The Cholesky decomposition algorithm is used, and a number of simplifying results are des...

H. J. Newton M. Pagano

1981-01-01

75

A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages  

NASA Technical Reports Server (NTRS)

A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics that is designed to capture this property predicts power-law scaling behavior for the second-moment statistics of area-averaged rain rate on the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented, and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.

Kundu, Prasun K.; Bell, Thomas L.

2003-01-01

76

Continuous wavelet transform analysis and visualization of additive type moire and time-average fringe patterns  

Microsoft Academic Search

An application of the continuous wavelet transform (CWT) to modulation extraction of additive moiré fringes and time-average patterns is described. To facilitate the task of demodulating a signal with zero-crossing values, a two-frame approach for the wavelet ridge extraction is proposed. Experimental studies of resonance vibration mode patterns by time-average interferometry provide verification of numerical findings. They agree…

Krzysztof Pokorski; Krzysztof Patorski

2010-01-01

77

Leading a supply chain turnaround.  

PubMed

Just five years ago, salespeople at Whirlpool were in the habit of referring to their supply chain organization as the "sales disablers." Now the company excels at getting products to the right place at the right time--while managing to keep inventories low. How did that happen? In this first-person account, Reuben Slone, Whirlpool's vice president of Global Supply Chain, describes how he and his colleagues devised the right supply chain strategy, sold it internally, and implemented it. Slone insisted that the right focal point for the strategy was the satisfaction of consumers at the end of the supply chain. Most supply chain initiatives do the opposite: They start with the realities of a company's manufacturing base and proceed from there. Through a series of interviews with trade customers large and small, his team identified 27 different capabilities that drove industry perceptions of Whirlpool's performance. Knowing it was infeasible to aim for world-class performance across all of them, Slone weighed the costs of excelling at each and found the combination of initiatives that would provide overall competitive advantage. A highly disciplined project management office and broad training in project management were key to keeping work on budget and on benefit. Slone set an intense pace--three "releases" of new capabilities every month--that the group maintains to this day. Lest this seem like a technology story, however, Slone insists it is just as much a "talent renaissance." People are proud today to be part of Whirlpool's supply chain organization, and its new generation of talent will give the company a competitive advantage for years to come. PMID:15559580

Slone, Reuben E

2004-10-01

78

Decomposition-order effects of time integrator on ensemble averages for the Nosé-Hoover thermostat  

NASA Astrophysics Data System (ADS)

Decomposition-order dependence of time development integrator on ensemble averages for the Nosé-Hoover dynamics is discussed. Six integrators were employed for comparison, which were extensions of the velocity-Verlet or position-Verlet algorithm. Molecular dynamics simulations by these integrators were performed for liquid-argon systems with several different time steps and system sizes. The obtained ensemble averages of temperature and potential energy were shifted from correct values depending on the integrators. These shifts increased in proportion to the square of the time step. Furthermore, the shifts could not be removed by increasing the number of argon atoms. We show the origin of these ensemble-average shifts analytically. Our discussion can be applied not only to the liquid-argon system but also to all MD simulations with the Nosé-Hoover thermostat. Our recommended integrators among the six integrators are presented to obtain correct ensemble averages.

Itoh, Satoru G.; Morishita, Tetsuya; Okumura, Hisashi

2013-08-01

79

Decomposition-order effects of time integrator on ensemble averages for the Nosé-Hoover thermostat.  

PubMed

Decomposition-order dependence of time development integrator on ensemble averages for the Nosé-Hoover dynamics is discussed. Six integrators were employed for comparison, which were extensions of the velocity-Verlet or position-Verlet algorithm. Molecular dynamics simulations by these integrators were performed for liquid-argon systems with several different time steps and system sizes. The obtained ensemble averages of temperature and potential energy were shifted from correct values depending on the integrators. These shifts increased in proportion to the square of the time step. Furthermore, the shifts could not be removed by increasing the number of argon atoms. We show the origin of these ensemble-average shifts analytically. Our discussion can be applied not only to the liquid-argon system but also to all MD simulations with the Nosé-Hoover thermostat. Our recommended integrators among the six integrators are presented to obtain correct ensemble averages. PMID:23947839

Itoh, Satoru G; Morishita, Tetsuya; Okumura, Hisashi

2013-08-14

80

Time-averaged exposures to ²²⁰Rn and ²²²Rn progeny in Colorado homes  

SciTech Connect

Week-long time-averaged exposures to naturally occurring ²²⁰Rn and ²²²Rn progeny have been measured at several locations in Colorado by monitoring the alpha activity collected continuously on a fixed-membrane filter. The alpha-energy spectrum associated with the activity collected on the filter was recorded every 15 min using a microcomputer-controlled alpha spectrometer. The alpha counts observed in three energy regions permitted complete separation of the contributions from each decay chain, and calculation of the separate time-averaged potential alpha-energy concentrations in air from ²²⁰Rn progeny, PAEC(Tn), and from ²²²Rn progeny, PAEC(Rn). The time-averaged PAEC(Tn) ranged from 0.3 to 6.9 mWL at 12 indoor locations, and the time-averaged PAEC(Rn) ranged from 1.0 to 59.0 mWL. The ratios of the indoor PAEC(Tn) to indoor PAEC(Rn) ranged from 0.09 to 0.58, with an overall average ratio of 0.32 and a standard deviation of 0.15. The 10 L min⁻¹ flow rate through the filter was selected to approximate the air intake rate of a resting human; the time-averaged PAEC thus represents the progeny concentrations that would have been inhaled by a person breathing the same atmosphere.

Martz, D.E.; Falco, R.J.; Langner, G.H. Jr. (Department of Energy, Grand Junction Projects Office, CO (USA))

1990-06-01

81

Travel Time Prediction Using a Seasonal Autoregressive Integrated Moving Average Time Series Model  

Microsoft Academic Search

Travel time estimation and prediction form an integral part of any advanced traveler information system. This paper presents a univariate time series based approach to predicting future travel times using historical travel time data. This model relies strictly on point detection data. Empirical testing of the model is performed using ITS data obtained from video detection systems in Atlanta, Georgia.

Angshuman Guin

2006-01-01
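
A hedged sketch of the general modeling approach, using statsmodels and synthetic 15-minute travel-time data; the seasonal period of 96, the (p,d,q)(P,D,Q,s) orders and the toy series are assumptions made for illustration and are not the model fitted to the Atlanta detector data.

# Seasonal ARIMA sketch for travel-time prediction with statsmodels.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
s = 96                                    # 15-minute intervals -> 96 per day
days = 20
t = np.arange(days * s)
# Synthetic daily travel-time pattern (minutes): two rush-hour peaks plus noise.
y = (12 + 6 * np.exp(-((t % s - 32) / 6.0) ** 2)
        + 8 * np.exp(-((t % s - 68) / 6.0) ** 2)
        + rng.normal(0, 0.8, t.size))

model = SARIMAX(y[:-s], order=(1, 0, 1), seasonal_order=(0, 1, 1, s))
fit = model.fit(disp=False)
forecast = fit.forecast(steps=s)          # predict the held-out final day
print("mean absolute error (min):", np.mean(np.abs(forecast - y[-s:])))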

82

Manned Orbital Transfer Vehicle (MOTV). Volume 5: Turnaround analysis  

NASA Technical Reports Server (NTRS)

The development of a low-cost reliable turnaround process to employ the MOTV in enhancing the utilization of the geosynchronous space region is analyzed. It is indicated that a routine effective turnaround/maintenance plan must make maximum use of flight data for maintenance planning, a high degree of test automation, and MOTV maintainability features in order to minimize tests, facilitate repair, and reduce manpower requirements. An effective turnaround plan provides a payback of reduced risks.

Boyland, R. E.; Sherman, S. W.; Morfin, H. W.

1979-01-01

83

Quick turnaround facilities for reusable rocket launchers  

NASA Astrophysics Data System (ADS)

After defining the requirements for future vertical-takeoff, recoverable/reusable vehicle launch pads on the basis of a functional analysis, the conceptual design of a high-turnaround/low-maintenance facility has been performed. The results obtained show that performance goals are achievable if specific efforts are directed to the optimization of the launch pad's jet deflector. Potential damage is restricted to the support/release jaws and the jet deflector; a maintenance-free 200-launch life may be obtained through steel cladding of critical deflector areas.

Lacaze, H.; Bombled, J. P.

84

School Turnaround Leaders: Competencies for Success. Part of the School Turnaround Collection from Public Impact  

ERIC Educational Resources Information Center

Evidence collected over the last 30 years suggests that effective school leaders significantly influence student learning and other aspects of school performance. Documented experience also indicates that individual leaders in failing organizations in various sectors, including education, can effect rapid, dramatic improvements. School turnaround

Public Impact, 2008

2008-01-01

85

Turnaround Necessities: Basic Conditions for an Effective, Sustainable, and Scalable School Turnaround  

ERIC Educational Resources Information Center

Turning around chronically low-performing schools is challenging work requiring fundamental rethinking of the change process, and a systemic rather than school-by-school approach. Without a doubt, high-impact school leaders are critical to turnaround success, and pockets of success around the country demonstrate this. However, transformational and…

Robinson, William S.; Buntrock, LeAnn M.

2011-01-01

86

ATS simultaneous and turnaround ranging experiments  

NASA Technical Reports Server (NTRS)

This report explains the data reduction and spacecraft position determination used in conjunction with two ATS experiments - Trilateration and Turnaround Ranging - and describes in detail a multilateration program that is used for part of the data reduction process. The process described is for the determination of the inertial position of the satellite, and for formatting input for related programs. In the trilateration procedure, a geometric determination of satellite position is made from near-simultaneous range measurements made by three different tracking stations. Turnaround ranging involves two stations: the master station transmits a signal to the satellite, which retransmits it to the slave station; the slave station turns the signal around and sends it back via the satellite to the master station. The results of the satellite position computations using the multilateration program are compared to the results of other position determination programs used at Goddard. All programs give nearly the same results, which indicates that, because of its simplicity and computational speed, the trilateration technique is useful in obtaining spacecraft positions for near-synchronous satellites.

Watson, J. S.; Putney, B. H.

1971-01-01

87

Time domain averaging and correlation-based improved spectrum sensing method for cognitive radio  

NASA Astrophysics Data System (ADS)

Based on the combination of time domain averaging and correlation, we propose an effective time domain averaging and correlation-based spectrum sensing (TDA-C-SS) method used in very low signal-to-noise ratio (SNR) environments. With the assumption that the received signals from the primary users are deterministic, the proposed TDA-C-SS method processes the received samples by a time averaging operation to improve the SNR. Correlation operation is then performed with a correlation matrix to determine the existence of the primary signal in the received samples. The TDA-C-SS method does not need any prior information on the received samples and the associated noise power to achieve improved sensing performance. Simulation results are presented to show the effectiveness of the proposed TDA-C-SS method.

Li, Shenghong; Bi, Guoan

2014-12-01
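
The sketch below is one plausible reading of the general idea, not the published TDA-C-SS algorithm: repeated segments are averaged in the time domain to raise the SNR, and the average is then correlated against an assumed-known reference waveform and compared with a threshold.

# Toy spectrum-sensing detector: time-domain averaging followed by a
# normalized correlation test against a known reference waveform.
import numpy as np

rng = np.random.default_rng(2)
N, M = 256, 200                      # samples per segment, number of segments
ref = np.cos(2 * np.pi * 0.05 * np.arange(N))   # assumed-known primary waveform

def decide(signal_present, amplitude=0.05, threshold=0.3):
    segs = amplitude * ref * signal_present + rng.normal(0, 1, (M, N))
    avg = segs.mean(axis=0)                       # time-domain averaging step
    stat = np.dot(avg, ref) / (np.linalg.norm(avg) * np.linalg.norm(ref))
    return stat > threshold                       # normalized correlation test

print("primary present -> detected:", decide(1))
print("primary absent  -> detected:", decide(0))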

88

Using Competencies to Improve School Turnaround Principal Success  

ERIC Educational Resources Information Center

This paper aims first to shed light on one element of leadership: the characteristics--or "competencies"--of turnaround leaders who succeed in driving rapid, dramatic change. Second, it recounts the elements of support that districts must provide these leaders to enable and sustain a portfolio of successful school turnarounds. Fortunately, decades…

Steiner, Lucy; Hassel, Emily Ayscue

2011-01-01

89

Turnaround as Reform: Opportunity for Meaningful Change or Neoliberal Posturing?  

ERIC Educational Resources Information Center

This study explores the neoliberal agenda of turnaround school reform efforts in America by examining the application and transformation of a Midwest State Turnaround Schools Project for the public school system. Perceptions of administrators and state-level policy actors are considered. Data were collected from 13 participants during the…

Mette, Ian M.

2013-01-01

90

A moving average based filtering system with its application to real-time QRS detection  

Microsoft Academic Search

This paper presents a novel real-time QRS detection algorithm designed based on a simple moving average filter. The proposed algorithm demands no redundant preprocessing step, thus allowing a simple architecture for its implementation as well as low computational cost. Algorithm performance was validated against a subset of the MIT-BIH arrhythmia database. Consequently, numerical results showed that the proposed algorithm correctly…

H. C. Chen; S. W. Chen

2003-01-01
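
A generic moving-average-plus-threshold beat picker on a synthetic ECG-like trace is sketched below; it conveys the flavor of such algorithms but is not the published method, and no MIT-BIH data are used.

# Generic moving-average + threshold QRS picker on a synthetic ECG-like trace.
import numpy as np

fs = 250                                            # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
ecg = 0.05 * np.sin(2 * np.pi * 1.0 * t)            # baseline wander
for beat in np.arange(0.5, 10, 0.8):                # narrow "QRS" spikes, ~75 bpm
    ecg += np.exp(-((t - beat) / 0.01) ** 2)
ecg += 0.02 * np.random.default_rng(3).standard_normal(t.size)

feature = np.abs(np.diff(ecg, prepend=ecg[0]))      # emphasize steep QRS slopes
win = int(0.12 * fs)                                # 120 ms integration window
smoothed = np.convolve(feature, np.ones(win) / win, mode="same")

thresh = 0.5 * smoothed.max()
above = smoothed > thresh
onsets = np.flatnonzero(above[1:] & ~above[:-1])    # rising edges = beat candidates
print("detected beats:", len(onsets), "expected:", len(np.arange(0.5, 10, 0.8)))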

91

Distribution of Residual Autocorrelations in Autoregressive-Integrated Moving Average Time Series Models  

Microsoft Academic Search

Many statistical models, and in particular autoregressive-moving average time series models, can be regarded as means of transforming the data to white noise, that is, to an uncorrelated sequence of errors. If the parameters are known exactly, this random sequence can be computed directly from the observations; when this calculation is made with estimates substituted for the true parameter values…

G. E. P. Box; David A. Pierce

1970-01-01
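
In practice the residual-autocorrelation check described above is available off the shelf; the sketch below fits an AR(1) model to simulated data and reports the Ljung-Box and Box-Pierce portmanteau statistics, assuming a reasonably recent statsmodels.

# White-noise check on ARMA residuals via Box-Pierce / Ljung-Box statistics.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(4)
# Simulate an AR(1) series, then fit the matching model.
y = np.empty(500)
y[0] = 0.0
for k in range(1, y.size):
    y[k] = 0.6 * y[k - 1] + rng.standard_normal()

resid = ARIMA(y, order=(1, 0, 0)).fit().resid
print(acorr_ljungbox(resid, lags=[10, 20], boxpierce=True))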

92

Violation of Homogeneity of Variance Assumption in the Integrated Moving Averages Time Series Model.  

ERIC Educational Resources Information Center

This study is an analysis of the robustness of the Box-Tiao integrated moving averages model for analysis of time series quasi-experiments. One of the assumptions underlying the Box-Tiao model is that all N values of α_t come from the same population, which has variance σ². The robustness was studied only in terms of…

Gullickson, Arlen R.; And Others

93

The Estimation of Theta in the Integrated Moving Average Time-Series Model.  

ERIC Educational Resources Information Center

Through Monte Carlo procedures, three different techniques for estimating the parameter theta (proportion of the "shocks" remaining in the system) in the Integrated Moving Average (0,1,1) time-series model are compared in terms of (1) the accuracy of the estimates, (2) the independence of the estimates from the true value of theta, and (3) the…

Martin, Gerald R.

94

Dwell Time and Average Local Speed in a Resonant Tunneling Structure.  

National Technical Information Service (NTIS)

We show that the dwell times and the average local speeds of an electron in a resonant tunneling structure depend sensitively on the matching parameter at the interfaces. We point out that there is a need to carry out microscopic calculations to find out ...

L. N. Pandey D. Sahu T. F. George

1989-01-01

95

Dwell Time and Average Local Speed in a Resonant Tunneling Structure.  

National Technical Information Service (NTIS)

The dwell times and the average local speeds of an electron in a resonant tunneling structure depend sensitively on the matching parameter at the interfaces. There is a need to carry out microscopic calculations to find out which matching parameter is app...

D. Sahu L. N. Pandey T. F. George

1989-01-01

96

Reaction-time binning: a simple method for increasing the resolving power of ERP averages.  

PubMed

Stimulus-locked, response-locked, and ERP-locked averaging are effective methods for reducing artifacts in ERP analysis. However, they suffer from a magnifying-glass effect: they increase the resolution of specific ERPs at the cost of blurring other ERPs. Here we propose an extremely simple technique, binning trials based on response times and then averaging, which can significantly alleviate the problems of other averaging methods. We have empirically evaluated the technique in an experiment where the task requires detecting a target in the presence of distractors. We have also studied the signal-to-noise ratio and the resolving power of averages with and without binning. Results indicate that the method produces clearer representations of ERPs than either stimulus-locked or response-locked averaging, revealing finer details of ERPs and helping in the evaluation of the amplitude and latency of ERP waves. The method is applicable to within-subject and between-subject averages. PMID:20070576

Poli, Riccardo; Cinel, Caterina; Citi, Luca; Sepulveda, Francisco

2010-05-01
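
A minimal sketch of the binning idea with synthetic single-trial data: trials are split into response-time quantile bins and averaged within each bin. The epoch construction and the bin count are assumptions of the example.

# Reaction-time binning: split single-trial epochs into RT quantile bins and
# average within each bin (synthetic data; epochs shaped trials x samples).
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_samples, fs = 300, 400, 200                   # illustrative sizes, Hz
rt = rng.gamma(shape=5, scale=0.08, size=n_trials)        # reaction times, s

# Fake epochs: a bump whose latency follows each trial's RT, plus noise.
t = np.arange(n_samples) / fs
epochs = np.array([np.exp(-((t - r) / 0.05) ** 2) for r in rt])
epochs += 0.5 * rng.standard_normal(epochs.shape)

n_bins = 4
edges = np.quantile(rt, np.linspace(0, 1, n_bins + 1))
bin_idx = np.clip(np.searchsorted(edges, rt, side="right") - 1, 0, n_bins - 1)

binned_erps = np.array([epochs[bin_idx == b].mean(axis=0) for b in range(n_bins)])
print("per-bin trial counts:", np.bincount(bin_idx, minlength=n_bins))
print("binned ERP array shape:", binned_erps.shape)       # (n_bins, n_samples)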

97

Spatial and Temporal scales of time-averaged 700 MB height anomalies  

NASA Technical Reports Server (NTRS)

The monthly and seasonal forecasting technique is based to a large extent on the extrapolation of trends in the positions of the centers of time averaged geopotential height anomalies. The complete forecasted height pattern is subsequently drawn around the forecasted anomaly centers. The efficacy of this technique was tested and time series of observed monthly mean and 5 day mean 700 mb geopotential heights were examined. Autocorrelation statistics are generated to document the tendency for persistence of anomalies. These statistics are compared to a red noise hypothesis to check for evidence of possible preferred time scales of persistence. Space-time spectral analyses at middle latitudes are checked for evidence of periodicities which could be associated with predictable month-to-month trends. A local measure of the average spatial scale of anomalies is devised for guidance in the completion of the anomaly pattern around the forecasted centers.

Gutzler, D.

1981-01-01

98

A real-time QRS detection method based on moving-averaging incorporating with wavelet denoising  

Microsoft Academic Search

In this paper, a simple moving average-based computing method for real-time QRS detection is proposed. In addition, for signal preprocessing our detection algorithm also incorporates a wavelet-based denoising procedure to effectively reduce the noise level for electrocardiogram (ECG) data. The overall computational structure of the proposed algorithm allows the QRS detection to be performed and implemented in real-time with high

Szi-wen Chen; Hsiao-chen Chen; Hsiao-lung Chan

2006-01-01

99

Back on track. Leading a hospital turnaround.  

PubMed

Taking a failing hospital from the brink of financial ruin and turning it into a thriving, profitable entity takes more than a dose of good luck. It takes a strong leader who has the vision, determination and skill to execute a successful turnaround and put that hospital back on track. In addition to addressing financial factors, an effective leader will promote a good working relationship between the hospital management, board, staff, patients, caregivers and community members. Following are seven California hospitals that have all faced financial crises in recent years. These hospitals have made the transition out of the red and into the black with the kind of leadership that relies on trust, teamwork, common sense and ingenuity. PMID:10128881

1993-01-01

100

Solitary-wave description of condensate micromotion in a time-averaged orbiting potential trap  

SciTech Connect

We present a detailed theoretical analysis of micromotion in a time-averaged orbiting potential trap. Our treatment is based on the Gross-Pitaevskii equation, with the full time-dependent behavior of the trap systematically approximated to reduce the trapping potential to its dominant terms. We show that within some well specified approximations, the dynamic trap has solitary-wave solutions, and we identify a moving frame of reference which provides the most natural description of the system. In that frame eigenstates of the time-averaged orbiting potential trap can be found, all of which must be solitary-wave solutions with identical, circular center of mass motion in the laboratory frame. The validity regime for our treatment is carefully defined, and is shown to be satisfied by existing experimental systems.

Challis, K.J.; Ballagh, R.J.; Gardiner, C.W. [Department of Physics, University of Otago, P.O. Box 56, Dunedin (New Zealand); School of Chemical and Physical Sciences, Victoria University, Wellington (New Zealand)

2004-11-01

101

Minimum average-cost path for real time 3D coronary artery segmentation of CT images.  

PubMed

In this paper, we propose a Minimum Average-cost Path (MACP) model for segmenting 3D coronary arteries by minimizing the average edge cost along a path in a discrete 4D graph constructed from image voxels and associated radii. Prim's Minimum Spanning Tree method is used for efficient optimization of the MACP model. The centerline and the radii of the cross sections of the coronary artery are extracted simultaneously during the optimization. The method does not need any image preprocessing steps and has been intensively validated as an effective approach with the Rotterdam Coronary Artery Algorithm Evaluation Framework. The computational cost of the proposed method is particularly low (7.467 seconds per segment, 18.5 mm/s on average), which makes real-time segmentation of the coronary artery possible. The shortcut problem, which is a classic issue of minimal path techniques, can also be overcome by the proposed method. PMID:22003729

Zhu, Ning; Chung, Albert C S

2011-01-01
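
Only the minimum-spanning-tree ingredient is illustrated below, on a toy 2-D voxel grid with random edge costs using scipy; the 4D graph, the radius dimension and the average-cost path criterion of the MACP model are not reproduced.

# Minimum spanning tree on a toy voxel-adjacency graph with edge costs.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(7)
nx, ny = 6, 6                                   # toy 2-D "image" grid
cost = rng.uniform(0.1, 1.0, size=(nx, ny))     # stand-in for voxel edge costs

n = nx * ny
graph = lil_matrix((n, n))
for i in range(nx):
    for j in range(ny):
        u = i * ny + j
        for di, dj in ((1, 0), (0, 1)):         # 4-connected neighbours
            if i + di < nx and j + dj < ny:
                v = (i + di) * ny + (j + dj)
                graph[u, v] = 0.5 * (cost[i, j] + cost[i + di, j + dj])

mst = minimum_spanning_tree(graph.tocsr())      # Prim/Kruskal-style MST
print("MST edges:", mst.nnz, "(n - 1 =", n - 1, ")")
print("total MST cost:", mst.sum())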

102

Measurement of fluid properties using rapid-double-exposure and time-average holographic interferometry  

SciTech Connect

The holographic recording of the time history of a flow feature in three dimensions is discussed. The use of diffuse illumination holographic interferometry or the three dimensional visualization of flow features such as shock waves and turbulent eddies is described. The double-exposure and time-average methods are compared using the characteristic function and the results from a flow simulator. A time history requires a large hologram recording rate. Results of holographic cinematography of the shock waves in a flutter cascade are presented as an example. Future directions of this effort, including the availability and development of suitable lasers, are discussed.

Decker, A.J.

1984-06-01

103

Measurement of fluid properties using rapid-double-exposure and time-average holographic interferometry  

NASA Technical Reports Server (NTRS)

The holographic recording of the time history of a flow feature in three dimensions is discussed. The use of diffuse illumination holographic interferometry or the three dimensional visualization of flow features such as shock waves and turbulent eddies is described. The double-exposure and time-average methods are compared using the characteristic function and the results from a flow simulator. A time history requires a large hologram recording rate. Results of holographic cinematography of the shock waves in a flutter cascade are presented as an example. Future directions of this effort, including the availability and development of suitable lasers, are discussed.

Decker, A. J.

1984-01-01

104

Time-average-based Methods for Multi-angular Scale Analysis of Cosmic-Ray Data  

NASA Astrophysics Data System (ADS)

Over the past decade, a number of experiments have dealt with the problem of measuring the arrival direction distribution of cosmic rays, looking for information on the propagation mechanisms and the identification of their sources. Any deviation from isotropy may be regarded as a signature of unforeseen or unknown phenomena, especially if well localized in the sky and occurring at low rigidity. This has induced experimenters to search for excesses down to angular scales as narrow as 10°, raising the issue of properly filtering out contributions from wider structures. A solution commonly envisaged is based on time-average methods to determine the reference value of the cosmic-ray flux. Such techniques are nearly insensitive to signals wider than the time window in use, thus allowing the analysis to focus on medium- and small-scale signals. Nonetheless, the signal often cannot be excluded from the calculation of the reference value, which induces systematic errors. The use of time-average methods recently revealed important discoveries about the medium-scale cosmic-ray anisotropy, present in both the northern and southern hemispheres. It is known that an excess (or deficit) is observed as less intense than it really is, and that fake deficit zones are rendered around true excesses, because there is no a priori knowledge of which signal is true and which is not. This work is an attempt to critically review the use of time-average-based methods for observing extended features in the cosmic-ray arrival distribution pattern.

Iuppa, R.; Di Sciascio, G.

2013-04-01

105

Detecting determinism in short time series using a quantified averaged false nearest neighbors approach.  

PubMed

We propose a criterion to detect determinism in short time series. This criterion is based on the estimation of the parameter E2 defined by the averaged false neighbors method for analyzing time series [Cao, Physica D 110, 43 (1997)]. Using surrogate data testing with several chaotic and stochastic simulated time series, we show that the variation coefficient of E2 over a few values of the embedding dimension d defines a suitable statistic to detect determinism in short data sequences. This result holds for a time series generated by a high-dimensional chaotic system such as the Mackey-Glass one. Different decreasing lengths of the time series are included in the numerical experiments for both synthetic and real-world data. We also investigate the robustness of the criterion in the case of deterministic time series corrupted by additive noise. PMID:17930320

Ramdani, Sofiane; Bouchara, Frédéric; Casties, Jean-François

2007-09-01

106

Time-averaging fluctuating seafloor hydrothermal plumes: measurements by remote acoustic sensing  

NASA Astrophysics Data System (ADS)

Several short time series of acoustic data were collected in July 2000 in order to characterize the plume rising from Grotto Vent in Main Endeavour Field, Juan de Fuca Ridge as part of the USA NSF Endeavour Observatory. Acoustic imaging data uses backscatter from small (generally metallic) particles (<100 µm) to detect the particle concentration distribution within black-smoker-type plumes. The buoyancy of the black smoker fluid results in a highly turbulent, convective, buoyancy driven flow. Despite the intense turbulence and the resulting complex instantaneous boundary of the plume, the time-averaged boundary of the plume is quite smooth. We observe the smoothing effects of time averaging in our acoustic images as the complex isosurfaces of backscattering cross-section in a single scan smooth out to simple surfaces as more scans are averaged. The ideal averaging period would be much greater than the overturning time of the largest eddies. Assuming that the largest eddies are no wider than the plume and overturn at speeds not much greater than the mean vertical velocities, we can use acoustic imaging and Doppler data to estimate the longest overturning time as 20 s (at 16 m above Grotto Vent, the plume is 5.03 m wide and has a mean vertical velocity of 0.25 m/s). Since our data has two samples every 4 minutes over a 10-20 minute interval, we should have a reasonable time average so long as there are enough samples. Comparison of slices through the plume for time-averages of 1 to 13 samples suggests that 5-6 samples is barely adequate and 10-15 samples would be better. Statistical analysis of the data on a horizontal slice shows that even the minimal average of 5-6 samples has a well-defined smooth Gaussian profile for the particle concentration distribution; however, the root mean square fluctuations and intrinsic error suggest that small-scale features (in both space and time) are smeared and measurements of turbulent properties will be biased. Characterization of the time-averaged plume can be used to quantify the entrainment rate, the vertical decrease of buoyancy (or scalar properties like particle concentration), and other mean properties predicted by integral models. For Grotto Vent, we measured entrainment rates of 0.07 to 0.18. The lower measurements correspond to slack tide conditions and are similar to entrainment rates measured in laboratory conditions. Using the mean velocity from the Doppler data, at 16 m above Grotto Vent, we estimated a volume flux of 6.9 m³/s and a particle flux of 6.6×10⁻⁴ m²/s with particle concentration in units of backscattering cross-section.

Bemis, K.; Rona, P.; Jackson, D.; Jones, C.

2003-04-01

107

A Stochastic Model of Space-Time Variability of Tropical Rainfall: I. Statistics of Spatial Averages  

NASA Technical Reports Server (NTRS)

Global maps of rainfall are of great importance in connection with modeling of the earth's climate. Comparison of the maps of rainfall predicted by computer-generated climate models with observation provides a sensitive test for these models. To make such a comparison, one typically needs the total precipitation amount over a large area, which could be hundreds of kilometers in size, over extended periods of time of order days or months. This presents a difficult problem since rain varies greatly from place to place as well as in time. Remote sensing methods using ground radar or satellites detect rain over a large area by essentially taking a series of snapshots at infrequent intervals and indirectly deriving the average rain intensity within a collection of pixels, usually several kilometers in size. They measure area average of rain at a particular instant. Rain gauges, on the other hand, record rain accumulation continuously in time but only over a very small area tens of centimeters across, say, the size of a dinner plate. They measure only a time average at a single location. In making use of either method one needs to fill in the gaps in the observation - either the gaps in the area covered or the gaps in time of observation. This involves using statistical models to obtain information about the rain that is missed from what is actually detected. This paper investigates such a statistical model and validates it with rain data collected over the tropical Western Pacific from shipborne radars during TOGA COARE (Tropical Oceans Global Atmosphere Coupled Ocean-Atmosphere Response Experiment). The model incorporates a number of commonly observed features of rain. While rain varies rapidly with location and time, the variability diminishes when averaged over larger areas or longer periods of time. Moreover, rain is patchy in nature - at any instant on the average only a certain fraction of the observed pixels contain rain. The fraction of area covered by rain decreases as the size of a pixel becomes smaller. This means that within what looks like a patch of rainy area in a coarse resolution view with larger pixel size, one finds clusters of rainy and dry patches when viewed on a finer scale. The model makes definite predictions about how these and other related statistics depend on the pixel size. These predictions were found to agree well with data. In a subsequent second part of the work we plan to test the model with rain gauge data collected during the TRMM (Tropical Rainfall Measuring Mission) ground validation campaign.

Kundu, Prasun K.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)

2002-01-01

108

Time-averaged in-line digital holographic interferometry for vibration analysis  

SciTech Connect

Time-averaged in-line digital holography is applied for vibration analysis. In particular, by use of a double-exposure approach, simultaneous determination of vibration mode shape and mean static state deformation during a vibration cycle are obtained. The subtraction of two numerically reconstructed digital holograms recorded at the same resonant frequency but with a small difference in amplitude shows the mixing of Bessel-type time-averaged fringes owing to vibration and of the double-exposure fringes owing to differences in the mean deformation of the object. It is shown that separation of these fringe patterns can be readily accomplished numerically. An experimental demonstration of this effect by use of in-line digital holography for relatively small membranes is demonstrated.

Asundi, Anand; Singh, Vijay Raj

2006-04-10

109

Time-average hologram interferometry of periodic, non-cosinusoidal vibrations  

Microsoft Academic Search

In the present paper, the method of time-average hologram interferometry has been applied to study periodic, non-cosinusoidal vibrations represented by a Jacobian elliptic function. For such vibrations the characteristic fringe function has been evaluated by making use of an equation derived from considerations of the effect of motion on coherence. Techniques of holographic addition and subtraction and extended pulse stroboscopic…

P. C. Gupta; K. Singh

1975-01-01

110

Time-averaged exposures to ²²⁰Rn and ²²²Rn progeny in Colorado homes  

Microsoft Academic Search

Week-long time-averaged exposures to naturally occurring ²²⁰Rn and ²²²Rn progeny have been measured at several locations in Colorado by monitoring the alpha activity collected continuously on a fixed-membrane filter. The alpha-energy spectrum associated with the activity collected on the filter was recorded every 15 min using a microcomputer-controlled alpha spectrometer. The alpha counts observed in three energy regions permitted complete…

D. E. Martz; R. J. Falco; G. H. Jr. Langner

1990-01-01

111

Time scales and variability of area-averaged tropical oceanic rainfall  

NASA Technical Reports Server (NTRS)

A statistical analysis of time series of area-averaged rainfall over the oceans has been conducted around the diurnal time scale. The results of this analysis can be applied directly to the problem of establishing the magnitude of expected errors to be incurred in the estimation of monthly area-averaged rain rate from low orbiting satellites. Such statistics as the mean, standard deviation, integral time scale of background red noise, and spectral analyses were performed on time series of the GOES precipitation index taken at 3-hour intervals during the period spanning December 19, 1987 to March 31, 1988 over the central and eastern tropical Pacific. The analyses have been conducted on 2.5 x 2.5 deg and 5 x 5 deg grid boxes, separately. The study shows that rainfall measurements by a sun-synchronous satellite visiting a spot twice per day will include a bias due to the existence of the semidiurnal cycle in the SPCZ ranging from 5 to 10 percentage points. The bias in the ITCZ may be of the order of 5 percentage points.

Shin, Kyung-Sup; North, Gerald R.; Ahn, Yoo-Shin; Arkin, Phillip A.

1990-01-01

112

Asynchronous input gear damage diagnosis using time averaging and wavelet filtering  

NASA Astrophysics Data System (ADS)

Vibration signals are often used for fault diagnosis in mechanical systems because they contain dynamic information about the mechanical elements. Vibration signals from a gearbox are usually noisy and the signal-to-noise ratio (SNR) is so low that feature extraction of signal components is very difficult, especially in practical situations. One solution to this problem is to apply signal time-averaging techniques in the time domain for signal denoising, but this method is only applicable when the gearbox input shaft rotation is constant or synchronous. In this paper, a new noise-canceling method based on time averaging for asynchronous input is developed, and a complex Morlet wavelet is then applied for feature extraction and diagnosis of different kinds of local gear damage. The complex Morlet wavelet used in this work is adaptive because its parameters are not fixed. The proposed method is applied to a simulated signal and to a real test rig of a Yamaha motorcycle gearbox. Both simulation and experimental results show that the method is very promising for signal analysis and gearbox fault diagnosis.
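The classical synchronous time-averaging step that the paper generalizes can be sketched as follows (Python with NumPy; the sampling rate, shaft speed, tooth count and noise level are hypothetical, and the constant-speed case shown here is exactly the restriction the paper's asynchronous method removes): the vibration record is cut into one-revolution segments and averaged, so components locked to the shaft rotation are reinforced while asynchronous noise cancels.

import numpy as np

rng = np.random.default_rng(1)
fs = 20_000                    # sampling rate (Hz), assumed
f_shaft = 25.0                 # constant shaft speed (Hz), assumed
n_rev = 200                    # number of revolutions recorded
spr = int(fs / f_shaft)        # samples per revolution (800)

t = np.arange(n_rev * spr) / fs
# Gear-mesh tone (20 teeth) plus a localized fault impulse once per revolution,
# buried in strong broadband noise.
mesh = np.sin(2 * np.pi * 20 * f_shaft * t)
fault = (np.arange(t.size) % spr < 5).astype(float)
signal = mesh + 2.0 * fault + 3.0 * rng.standard_normal(t.size)

# Time-synchronous average: reshape into revolutions and average them.
tsa = signal.reshape(n_rev, spr).mean(axis=0)

clean_rev = mesh[:spr] + 2.0 * fault[:spr]      # noise-free one-revolution pattern
print(f"noise std before averaging: 3.00, after TSA: "
      f"{np.std(tsa - clean_rev):.2f} (expected ~{3.0 / np.sqrt(n_rev):.2f})")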

Jafarizadeh, M. A.; Hassannejad, R.; Ettefagh, M. M.; Chitsaz, S.

2008-01-01

113

A modeling study of the time-averaged electric currents in the vicinity of isolated thunderstorms  

NASA Technical Reports Server (NTRS)

A thorough examination of the results of a time-dependent computer model of a dipole thunderstorm revealed that there are numerous similarities between the time-averaged electrical properties and the steady-state properties of an active thunderstorm. Thus, the electrical behavior of the atmosphere in the vicinity of a thunderstorm can be determined with a formulation similar to what was first described by Holzer and Saxon (1952). From the Maxwell continuity equation of electric current, a simple analytical equation was derived that expresses a thunderstorm's average current contribution to the global electric circuit in terms of the generator current within the thundercloud, the intracloud lightning current, the cloud-to-ground lightning current, the altitudes of the charge centers, and the conductivity profile of the atmosphere. This equation was found to be nearly as accurate as the more computationally expensive numerical model, even when it is applied to a thunderstorm with a reduced conductivity thundercloud, a time-varying generator current, a varying flash rate, and a changing lightning mix.

Driscoll, Kevin T.; Blakeslee, Richard J.; Baginski, Michael E.

1992-01-01

114

Time Average Holography Study of Human Tympanic Membrane with Altered Middle Ear Ossicular Chain  

NASA Astrophysics Data System (ADS)

Computer-assisted time average holographic interferometry was used to study the vibration of the human tympanic membrane (TM) in cadaveric temporal bones before and after alterations of the ossicular chain. Simultaneous laser Doppler vibrometer measurements of stapes velocity were performed to estimate the conductive hearing loss caused by ossicular alterations. The quantified TM motion described from holographic images was correlated with stapes velocity to define relations between TM motion and stapes velocity in various ossicular disorders. The results suggest that motions of the TM are relatively uncoupled from stapes motion at frequencies above 1000 Hz.

Cheng, Jeffrey T.; Ravicz, Michael E.; Rosowski, John J.; Hulli, Nesim; Hernandez-Montes, Maria S.; Furlong, Cosme

2009-02-01

115

Probe shapes that measure time-averaged streamwise momentum and cross-stream turbulence intensity  

NASA Technical Reports Server (NTRS)

A method and apparatus for directly measuring the time-averaged streamwise momentum in a turbulent stream use a probe whose total-head response varies as the cosine-squared of the angle of incidence. The probe has a nose with a slight indentation on its front face to provide the desired response. The method of making the probe incorporates unique design features. Another probe may be positioned side by side with the first probe to provide a direct measurement of the total pressure. The difference between the two pressures yields the sum of the squares of the cross-stream components of the turbulence level.

Rossow, Vernon J. (inventor)

1993-01-01

116

Time-averaged quantum dynamics and the validity of the effective Hamiltonian model  

SciTech Connect

We develop a technique for finding the dynamical evolution in time of an averaged density matrix. The result is an equation of evolution that includes an effective Hamiltonian, as well as decoherence terms in Lindblad form. Applying the general equation to harmonic Hamiltonians, we confirm a previous formula for the effective Hamiltonian together with an additional decoherence term which should, in general, be included and whose vanishing provides the criterion for validity of the effective Hamiltonian approach. Finally, we apply the theory to examples of the ac Stark shift and three-level Raman transitions, recovering a decoherence effect in the latter.

Gamel, Omar; James, Daniel F. V. [Department of Physics, University of Toronto, 60 St. George Street, Toronto, Ontario M5S 1A7 (Canada)

2010-11-15

117

Turnaround command effects on USB and SGLS satellite downlinks  

Microsoft Academic Search

Many satellites using commercial off-the-shelf USB and SGLS transponders such as the L3-Com CXS 2000 do not have turnaround command (TAC) suppression in their downlinks. Without TAC suppression, these satellites exhibit greater downlink service modulation losses for carrier, ranging, and telemetry. Depending on the selected uplink command modulation index and the turnaround ratio, these additional modulation losses could vary from

J. Kreng; J. Yoh; S. Raghavan; A. Mathur

2010-01-01

118

Time-averaged fluxes of lead and fallout radionuclides to sediments in Florida Bay  

USGS Publications Warehouse

Recent, unmixed sediments from mud banks of central Florida Bay were dated using 210Pb/226Ra, and chronologies were verified by comparing sediment lead temporal records with Pb/Ca ratios in annual layers of coral (Montastrea annularis) located on the ocean side of the Florida Keys. Dates of sediment lead peaks (1978 ± 2) accord with prior observations of a 6 year lag between the occurrence of maximum atmospheric lead in 1972 and peak coral lead in 1978. Smaller lags of 1-2 years occur between the maximum atmospheric radionuclide fallout and peaks in sediment temporal records of 137Cs and Pu. Such lags are consequences of system time averaging (STA) in which atmospherically delivered particle-associated constituents accumulate and mix in a (sedimentary?) reservoir before transferring to permanent sediments and coral. STA model calculations, using time-dependent atmospheric inputs, produced optimized profiles in excellent accord with measured sediment 137Cs, Pu, lead, and coral lead distributions. Derived residence times of these particle tracers (16 ± 1, 15.7 ± 0.7, 19 ± 3, and 16 ± 2 years, respectively) are comparable despite differences in sampling locations, in accumulating media, and in element loading histories and geochemical properties. For a 16 year weighted mean residence time, STA generates the observed 6 year lead peak lag. Evidently, significant levels of nondegradable, particle-associated contaminants can persist in Florida Bay for many decades following elimination of external inputs. Present results, in combination with STA model analysis of previously reported radionuclide profiles, suggest that decade-scale time averaging may occur widely in recent coastal marine sedimentary environments. Copyright 2000 by the American Geophysical Union.

Robbins, J. A.; Holmes, C.; Halley, R.; Bothner, M.; Shinn, E.; Graney, J.; Keeler, G.; TenBrink, M.; Orlandini, K. A.; Rudnick, D.

2000-01-01

119

Average excitation energies from time-dependent density functional response theory  

NASA Astrophysics Data System (ADS)

The authors present an occupation number averaging scheme for time-dependent density functional response theory (TD-DFRT) in frequency domain. The known problem that TD-DFRT within the local (spin) density approximation (LDA/LSDA) inaccurately predicts Rydberg and charge-transfer excitation energies has been reexamined from the methodology of linear response, without explicit correction of the exchange-correlation potential. The working equations of TD-DFRT are adapted to treat arbitrary difference of orbital occupation numbers, using the nonsymmetric matrix form of Casida's formulation of TD-DFRT [M. E. Casida, in Recent Advances in Density Functional Methods, edited by D. P. Chong (World Scientific, Singapore, 1995), Pt. I, p. 155]. The authors' scheme is applied to typical closed-shell and open-shell molecular systems by examining the dependence of excitation energies on the fraction of excited electron. Good performance of this modified linear response scheme is shown, and is consistent with the authors' previous examination by the real-time propagation approach, suggesting that the calculation of average excitation energies might be one of the ways to better decode excitation energies from LDA/LSDA. Different techniques for treating singlet, triplet, and doublet states are discussed.

Hu, Chunping; Sugino, Osamu

2007-02-01

120

Time-averaged properties of unstable periodic orbits and chaotic orbits in ordinary differential equation systems.  

PubMed

It has recently been found in some dynamical systems in fluid dynamics that only a few unstable periodic orbits (UPOs) with low periods can give good approximations to the mean properties of turbulent (chaotic) solutions. By employing three chaotic systems described by ordinary differential equations, we compare time-averaged properties of a set of UPOs and those of a set of segments of chaotic orbits. For every chaotic system we study, the distributions of a time average of a dynamical variable along UPOs with lower and higher periods are similar to each other and the variance of the distribution is small, in contrast with that along chaotic segments. The distribution seems to converge to some limiting distribution with nonzero variance as the period of the UPO increases, whereas that along chaotic orbits tends to converge to a delta-like distribution. These properties appear to underlie why only a few UPOs with low periods can give good mean statistical properties in dynamical systems in fluid dynamics. PMID:19257096
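As a minimal illustration of a time average computed along a chaotic orbit, the sketch below (Python with NumPy) integrates the Lorenz equations with a fixed-step RK4 scheme, a standard example and not necessarily one of the systems studied in the paper, and averages one dynamical variable over equal-length segments of the trajectory; the spread of these segment averages is the kind of distribution the authors compare with UPO averages.

import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - y) - z, x * y - beta * z])

def rk4_step(f, state, dt):
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

dt, n_transient, n_steps = 0.01, 5_000, 100_000
state = np.array([1.0, 1.0, 1.0])
for _ in range(n_transient):                 # discard the transient
    state = rk4_step(lorenz, state, dt)

z_series = np.empty(n_steps)
for i in range(n_steps):
    state = rk4_step(lorenz, state, dt)
    z_series[i] = state[2]

# Time averages of z over chaotic segments of equal length.
seg_len = 2_000                              # 20 time units per segment
seg_means = z_series.reshape(-1, seg_len).mean(axis=1)
print(f"segment averages of z: mean = {seg_means.mean():.2f}, "
      f"std = {seg_means.std():.2f}  (over {seg_means.size} segments)")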

Saiki, Yoshitaka; Yamada, Michio

2009-01-01

121

Applicability of time-averaged holography for micro-electro-mechanical system performing non-linear oscillations.  

PubMed

Optical investigation of movable microsystem components using time-averaged holography is presented in this paper. It is shown that even a harmonic excitation of a non-linear microsystem may result in an unpredictable chaotic motion. Analytical results relating parameters of the chaotic oscillations to the formation of time-averaged fringes provide a deeper insight into computational and experimental interpretation of time-averaged MEMS holograms. PMID:24451467

Palevicius, Paulius; Ragulskis, Minvydas; Palevicius, Arvydas; Ostasevicius, Vytautas

2014-01-01

122

Applicability of Time-Averaged Holography for Micro-Electro-Mechanical System Performing Non-Linear Oscillations  

PubMed Central

Optical investigation of movable microsystem components using time-averaged holography is presented in this paper. It is shown that even a harmonic excitation of a non-linear microsystem may result in an unpredictable chaotic motion. Analytical results relating parameters of the chaotic oscillations to the formation of time-averaged fringes provide a deeper insight into computational and experimental interpretation of time-averaged MEMS holograms.

Palevicius, Paulius; Ragulskis, Minvydas; Palevicius, Arvydas; Ostasevicius, Vytautas

2014-01-01

123

Non-invasive assessment of distribution volume ratios and binding potential: tissue heterogeneity and interindividually averaged time-activity curves  

Microsoft Academic Search

Due to the stochastic nature of radioactive decay, any measurement of radioactivity concentration requires spatial averaging. In pharmacokinetic analysis of time-activity curves (TAC), such averaging over heterogeneous tissues may introduce a systematic error (heterogeneity error) but may also improve the accuracy and precision of parameter estimation. In addition to spatial averaging (inevitable due to limited scanner resolution and intended in

M. Reimold; W. Mueller-Schauenburg; G. A. Becker; G. Reischl; B. M. Dohmen; R. Bares

2004-01-01

124

Spatial Averaging of Water Content by Time Domain Reflectometry: Implications for Twin Rod Probes with and without Dielectric Coatings  

Microsoft Academic Search

The averaging of apparent relative dielectric permittivities by time domain reflectometry (TDR) is examined for properties varying along TDR waveguides and in the plane perpendicular to a TDR probe. A square root averaging model with uniform weighting factors describes the measured apparent relative dielectric permittivity for axially varying materials; variations in the transverse plane are described by an inverse averaging
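A minimal sketch of the square-root (refractive-index) averaging model mentioned above, in Python with NumPy and hypothetical layer permittivities and weights: for material layered along the waveguide, the pulse travel time adds per layer and scales with the square root of each permittivity, so the apparent permittivity is the weighted square-root average, squared.

import numpy as np

# Apparent dielectric permittivities of materials sampled along a TDR probe
# (hypothetical values: dry soil, wet soil, water-filled gap).
eps = np.array([4.0, 25.0, 80.0])
# Fraction of the probe length occupied by each material (uniform weights here).
w = np.array([1 / 3, 1 / 3, 1 / 3])

# Square-root (refractive index) averaging for axial layering.
eps_axial = (np.sum(w * np.sqrt(eps))) ** 2

# For comparison, a simple arithmetic average (not the TDR-relevant one).
eps_arith = np.sum(w * eps)

print(f"sqrt-averaged apparent permittivity: {eps_axial:.1f}")
print(f"arithmetic average (for reference):  {eps_arith:.1f}")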

P. A. Ferré; D. L. Rudolph; R. G. Kachanoski

1996-01-01

125

The time course of top-down control on saccade averaging.  

PubMed

When objects in a visual scene are positioned in close proximity, eye movements to these objects tend to land at an intermediate location between the objects (i.e. the global effect). This effect is most pronounced for short latency saccades and is therefore believed to be reflexive and dominantly controlled by bottom-up information. At longer latencies this effect can be modulated by top-down factors. The current study established the time course at which top-down information starts to have an influence on bottom-up averaging. In a standard global effect task two peripheral stimuli (a red and a green abrupt onset) were positioned within an angular distance of 20°. In the condition in which observers received no specific target instruction, the eyes landed in between the red and green element, establishing the classic global effect. However, when observers were instructed to make a saccade to the red element during a whole block or when the target color varied from trial to trial (red or green), a clear effect of the target instruction on the accuracy of the landing position of the primary saccade was found. With increasing saccade latencies, the eyes landed closer to the instructed target. Crucially, however, this effect was even seen for the shortest saccade latencies (as early as 200 ms), suggesting that saccade averaging is affected early on by top-down processes. PMID:24732567

Heeman, J; Theeuwes, J; Van der Stigchel, S

2014-07-01

126

Vortex pump for Bose-Einstein condensates utilizing a time-averaged orbiting potential trap  

NASA Astrophysics Data System (ADS)

We show that topological vortex pumping can be implemented for a dilute Bose-Einstein condensate confined in a magnetic time-averaged orbiting potential trap with axial optical confinement. Contrary to earlier proposals for the vortex pump, we do not employ an additional optical potential to trap the condensate in the radial direction, but instead, the radial confinement is provided by the magnetic field throughout the pumping cycle. By performing numerical simulations based on the spin-1 Gross-Pitaevskii equation, we find that several pumping cycles can be carried out to produce a highly charged vortex before a majority of the particles escape from the trap or before the vortex splits into singly charged vortices. On the other hand, we observe that an additional, relatively weak optical plug potential is efficient in preventing splitting and reducing particle loss. With these results, we hope to bring the vortex pump closer to experimental realization.

Kuopanportti, Pekko; Anderson, Brian P.; Möttönen, Mikko

2013-03-01

127

Time-averaged adiabatic potentials: versatile matter-wave guides and atom traps.  

PubMed

We demonstrate a novel class of trapping potentials, time-averaged adiabatic potentials (TAAP), which allows the generation of a large variety of traps for quantum gases and matter-wave guides for atom interferometers. Examples include stacks of pancakes, rows of cigars, and multiple rings or sickles. The traps can be coupled through controllable tunneling barriers or merged altogether. We present analytical expressions for pancake-, cigar-, and ring-shaped traps. The ring geometry is of particular interest for guided matter-wave interferometry as it provides a perfectly smooth waveguide of widely tunable diameter and thus adjustable sensitivity of the interferometer. The flexibility of the TAAP would make possible the use of Bose-Einstein condensates as coherent matter waves in large-area atom interferometers. PMID:17930945

Lesanovsky, Igor; von Klitzing, Wolf

2007-08-24

128

Time-Averaged Adiabatic Potentials: Versatile Matter-Wave Guides and Atom Traps  

SciTech Connect

We demonstrate a novel class of trapping potentials, time-averaged adiabatic potentials (TAAP), which allows the generation of a large variety of traps for quantum gases and matter-wave guides for atom interferometers. Examples include stacks of pancakes, rows of cigars, and multiple rings or sickles. The traps can be coupled through controllable tunneling barriers or merged altogether. We present analytical expressions for pancake-, cigar-, and ring-shaped traps. The ring geometry is of particular interest for guided matter-wave interferometry as it provides a perfectly smooth waveguide of widely tunable diameter and thus adjustable sensitivity of the interferometer. The flexibility of the TAAP would make possible the use of Bose-Einstein condensates as coherent matter waves in large-area atom interferometers.

Lesanovsky, Igor; Klitzing, Wolf von [Institute of Electronic Structure and Laser, Foundation for Research and Technology-Hellas, P.O. Box 1527, GR-71110 Heraklion (Greece)

2007-08-24

129

ARECIBO MULTI-FREQUENCY TIME-ALIGNED PULSAR AVERAGE-PROFILE AND POLARIZATION DATABASE  

SciTech Connect

We present Arecibo time-aligned, total intensity profiles for 46 pulsars over an unusually wide range of radio frequencies and multi-frequency, polarization-angle density diagrams, and/or polarization profiles for 57 pulsars at some or all of the frequencies 50, 111/130, 430, and 1400 MHz. The frequency-dependent dispersion delay has been removed in order to align the profiles for study of their spectral evolution, and wherever possible the profiles of each pulsar are displayed on the same longitude scale. Most of the pulsars within Arecibo's declination range that are sufficiently bright for such spectral or single pulse analysis are included in this survey. The calibrated single pulse sequences and average profiles are available by web download for further study.

Hankins, Timothy H. [Physics Department, New Mexico Tech, Socorro, NM 87801 (United States); Rankin, Joanna M. [Physics Department, University of Vermont, Burlington, VT 05401 (United States)], E-mail: thankins@nrao.edu, E-mail: Joanna.Rankin@uvm.edu

2010-01-15

130

Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration  

NASA Technical Reports Server (NTRS)

The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average formed from the simple average of all observations within the averaging period and the optimal estimate formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.
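The contrast between the two linear estimates can be sketched as follows (Python with NumPy; the signal covariance, noise level, averaging period and sampling times are all hypothetical choices, not the CZCS statistics of the paper): the composite average weights all observations equally, while the optimal estimate solves a small Gauss-Markov system using the assumed signal and error covariances, and its expected mean squared error is never larger.

import numpy as np

rng = np.random.default_rng(2)

T, lam_c, sig2, noise2 = 30.0, 5.0, 1.0, 0.25   # period, corr. scale, variances
t_obs = np.sort(rng.uniform(0.0, T, size=12))   # irregular observation times

def cov(a, b):
    """Assumed signal covariance: exponential with scale lam_c."""
    return sig2 * np.exp(-np.abs(a[:, None] - b[None, :]) / lam_c)

# Data-data covariance (signal plus uncorrelated measurement noise).
C_dd = cov(t_obs, t_obs) + noise2 * np.eye(t_obs.size)

# Covariance between each observation and the time average over [0, T],
# evaluated by quadrature on a fine grid.
s = np.linspace(0.0, T, 1001)
c_da = cov(t_obs, s).mean(axis=1)

w_opt = np.linalg.solve(C_dd, c_da)             # optimal (Gauss-Markov) weights
w_comp = np.full(t_obs.size, 1.0 / t_obs.size)  # composite (simple) average

def expected_mse(w):
    """Expected squared error of the estimate w.d for the time average."""
    var_avg = cov(s, s).mean()                  # variance of the true time average
    return var_avg - 2 * w @ c_da + w @ C_dd @ w

print(f"expected MSE, composite average: {expected_mse(w_comp):.3f}")
print(f"expected MSE, optimal estimate : {expected_mse(w_opt):.3f}")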

Chelton, Dudley B.; Schlax, Michael G.

1991-01-01

131

Analysis of the time-averaged properties of sediment motion in a local scour process  

NASA Astrophysics Data System (ADS)

We examined the kinematics of sediment particles in the proximity of a vertical plate attached to the side wall of a pressurized duct. The laboratory phenomenon investigated was archetypal of local scour near bridge abutments in covered rivers. We conducted a clear-water scour experiment where we measured the temporal evolution of the local erosion depth; the latter was found to follow a logarithmic trend. The concentration and velocity of the moving particles were measured on the basis of digital movies taken at various evolution stages of the process. The spatial pattern of the sediment motion showed that the action of the principal vortex system becomes progressively more evident with the general lowering of the sediment transport intensity detected as the scour hole increases in time. Furthermore, the phenomenology of local sediment transport was analyzed; key findings of the research include (1) the time-averaged values of the sediment transport rate are linked with those of concentration and velocity through relationships which are almost invariant, regardless of the experimental time and of the location within the scour hole; (2) intermittency and irregularity of the sediment fluxes increase as their mean intensity decreases; (3) the relative weight of the temporal fluctuations of concentration and velocity in the time-mean sediment transport activity is significant throughout the entire process. Potential implications of the present results for the modeling of local scour processes in pressurized-flow conditions are discussed. The considerations made are likely to have analogous applicability to local scour with a free surface, since the large-scale evolutions of the erosion depth in the two conditions may not be greatly different. Given the qualitative analogy of the present findings with earlier ones obtained for one-dimensional bed load, it is suggested that similar considerations may hold also for the small-scale modeling of a wider class of sediment transport processes.

Radice, Alessio; Porta, Giovanni; Franzetti, Silvio

2009-03-01

132

Time-weighted average SPME analysis for in planta determination of cVOCs.  

PubMed

The potential of phytoscreening for plume delineation at contaminated sites has promoted interest in innovative, sensitive contaminant sampling techniques. Solid-phase microextraction (SPME) methods have been developed, offering quick, undemanding, noninvasive sampling without the use of solvents. In this study, time-weighted average SPME (TWA-SPME) sampling was evaluated for in planta quantification of chlorinated solvents. TWA-SPME was found to have increased sensitivity over headspace and equilibrium SPME sampling. Using a variety of chlorinated solvents and a polydimethylsiloxane/carboxen (PDMS/CAR) SPME fiber, most compounds exhibited near linear or linear uptake over the sampling period. Smaller, less hydrophobic compounds exhibited more nonlinearity than larger, more hydrophobic molecules. Using a specifically designed in planta sampler, field sampling was conducted at a site contaminated with chlorinated solvents. Sampling with TWA-SPME produced instrument responses ranging from 5 to over 200 times higher than headspace tree core sampling. This work demonstrates that TWA-SPME can be used for in planta detection of a broad range of chlorinated solvents and methods can likely be applied to other volatile and semivolatile organic compounds. PMID:22332592
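For a passive sampler operating in the linear-uptake regime, the time-weighted average concentration follows from the mass absorbed, the sampling rate and the exposure time; this is the standard relation underlying TWA passive-sampling calibration rather than a result taken from the paper. The sketch below (Python) shows the calculation with hypothetical mass, Rs and duration values.

# Time-weighted average (TWA) concentration from a passive sampler in the
# linear-uptake regime: C_TWA = n / (Rs * t)
# n  - analyte mass collected on the fiber (ng)              [hypothetical]
# Rs - sampling rate of the fiber for the analyte (mL/min)   [hypothetical]
# t  - exposure time (min)

n_ng = 12.5            # ng collected, assumed value
Rs_mL_per_min = 0.8    # mL/min, assumed calibration value for the analyte
t_min = 7 * 24 * 60    # one-week exposure

c_twa_ng_per_mL = n_ng / (Rs_mL_per_min * t_min)
print(f"TWA concentration: {c_twa_ng_per_mL * 1000:.3f} ng/L "
      f"over {t_min / 1440:.0f} days")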

Sheehan, Emily M; Limmer, Matt A; Mayer, Philipp; Karlson, Ulrich Gosewinkel; Burken, Joel G

2012-03-20

133

The state averaged multiconfigurational time-dependent Hartree approach: vibrational state and reaction rate calculations.  

PubMed

The multiconfigurational time-dependent Hartree (MCTDH) approach uses optimized sets of time-dependent basis functions, called single-particle functions, to represent multidimensional wavefunctions and thereby facilitates efficient multidimensional quantum dynamics studies. The single-particle function bases are usually optimized for a single wavefunction. Here, an approach is studied which utilizes a common single-particle function basis to represent several wavefunctions simultaneously, i.e., the single-particle function basis is constructed to result in an optimized averaged description of a number of wavefunctions. The approach can favorably be used to obtain eigenstates of Hamiltonians or to represent thermal ensembles. Test calculations studying the vibrational states of CH(3) and the thermal rate constant of the H+CH(4)-->H(2)+CH(3) reaction are presented. It is found that the required size of the single-particle functions basis does not increase when the number of wavefunctions described simultaneously is increased. As a consequence, the new approach offers an increased efficiency, e.g., for MCTDH rate constant calculations. PMID:18282029

Manthe, Uwe

2008-02-14

134

The state averaged multiconfigurational time-dependent Hartree approach: Vibrational state and reaction rate calculations  

NASA Astrophysics Data System (ADS)

The multiconfigurational time-dependent Hartree (MCTDH) approach uses optimized sets of time-dependent basis functions, called single-particle functions, to represent multidimensional wavefunctions and thereby facilitates efficient multidimensional quantum dynamics studies. The single-particle function bases are usually optimized for a single wavefunction. Here, an approach is studied which utilizes a common single-particle function basis to represent several wavefunctions simultaneously, i.e., the single-particle function basis is constructed to result in an optimized averaged description of a number of wavefunctions. The approach can favorably be used to obtain eigenstates of Hamiltonians or to represent thermal ensembles. Test calculations studying the vibrational states of CH3 and the thermal rate constant of the H+CH4-->H2+CH3 reaction are presented. It is found that the required size of the single-particle functions basis does not increase when the number of wavefunctions described simultaneously is increased. As a consequence, the new approach offers an increased efficiency, e.g., for MCTDH rate constant calculations.

Manthe, Uwe

2008-02-01

135

Respiratory sinus arrhythmia: time domain characterization using autoregressive moving average analysis  

NASA Technical Reports Server (NTRS)

Fourier-based techniques are mathematically noncausal and are therefore limited in their application to feedback-containing systems, such as the cardiovascular system. In this study, a mathematically causal time domain technique, autoregressive moving average (ARMA) analysis, was used to parameterize the relations of respiration and arterial blood pressure to heart rate in eight humans before and during total cardiac autonomic blockade. Impulse-response curves thus generated showed the relation of respiration to heart rate to be characterized by an immediate increase in heart rate of 9.1 ± 1.8 beats·min⁻¹·l⁻¹, followed by a transient mild decrease in heart rate to -1.2 ± 0.5 beats·min⁻¹·l⁻¹ below baseline. The relation of blood pressure to heart rate was characterized by a slower decrease in heart rate of -0.5 ± 0.1 beats·min⁻¹·mmHg⁻¹, followed by a gradual return to baseline. Both of these relations nearly disappeared after autonomic blockade, indicating autonomic mediation. Maximum values obtained from the respiration to heart rate impulse responses were also well correlated with frequency domain measures of high-frequency "vagal" heart rate control (r = 0.88). ARMA analysis may be useful as a time domain representation of autonomic heart rate control for cardiovascular modeling.

Triedman, J. K.; Perrott, M. H.; Cohen, R. J.; Saul, J. P.

1995-01-01

136

Analytical study of flow phenomena in SSME turnaround duct geometries  

NASA Astrophysics Data System (ADS)

The SSME fuel turbopump hot gas manifold was identified as a source of loss and flow distortion which significantly affects the performance and durability of both the drive turbine and the LOX injector area of the main combustion chamber. Two current SSME geometries were studied, the full power level (FPL) and the first manned orbital flight (FMOF) configurations. The effects of turnaround duct geometry on flow losses and distortions were examined by varying wall curvature and flow area variation in the 180 deg turnaround region. The effects of duct inlet flow phenomena, such as the radial distortion of the inlet flow and the inlet swirl level, on turnaround duct performance were also investigated. It is shown that of the two current geometries, the FMOF configuration had lower pressure losses and generated less flow distortion, but had a small flow separation bubble at the 180 deg turnaround exit. It is found that by optimizing wall curvature and flow diffusion in the turnaround, improved duct performance can be achieved.

McLallin, K. L.

1985-04-01

137

The Aroma of Tacoma: Time-Varying Average Derivatives and the Effect of a Superfund Site on House Prices  

Microsoft Academic Search

A variant of the average derivative estimator is developed to construct an index measuring the effects of a copper smelter on house prices in Tacoma, Washington. The estimated average derivative index is smooth and has lower standard errors than regression coefficients estimated separately across discrete time periods. The dataset spans the time of initial rumors of Superfund site designation, the

Daniel P McMillen; Paul Thorsnes

2003-01-01

138

New device for time-averaged measurement of volatile organic compounds (VOCs).  

PubMed

Contamination by volatile organic compounds (VOCs) in the environment is an increasing concern since these compounds are harmful to ecosystems and even to human health. Indeed, many of them are considered toxic and/or carcinogenic. The main sources of pollution are very diffuse focal points such as industrial discharges, urban water and accidental spills; as these compounds may be present in many products and processes (e.g., paints, fuels, petroleum products, raw materials, solvents, etc.), their control is difficult. The presence of these compounds in groundwater, influenced by discharges, leachate or effluents of WWTPs, is especially problematic. In recent years, legislation has become increasingly restrictive regarding emissions of these compounds. From an environmental point of view, the European Water Framework Directive (2000/60/EC) sets out some VOCs as priority substances. This binding directive requires compounds such as benzene, chloroform, and carbon tetrachloride to be controlled at very low concentrations and analyzed with a very high frequency. The presence of VOCs in the various effluents is often highly variable and discontinuous since it depends on the variability of the sources of contamination. Therefore, in order to have complete information on the presence of these contaminants and to take preventive measures effectively, continuous monitoring is important, which requires the development of new devices that provide average concentrations over time. As of today, due to technical limitations, there are no devices on the market that allow efficient continuous sampling of these compounds with detection limits low enough to meet the legal requirements and that are capable of detecting very sporadic, short-duration discharges. LABAQUA has developed a device which consists of a small peristaltic pump controlled by an electronic board that governs its operation by pre-programming. A constant flow passes through a glass cell containing adsorbent material where the VOCs are retained. The adsorbent used, made in LABAQUA, is a mixture of alginic acid and activated carbon. Due to its high permeability it allows the passage and retention of THMs in a suitable way, thus solving many of the problems of other common adsorbents. Also, to avoid degradation of the adsorbent, it is wrapped in a low density polyethylene (LDPE) membrane. After a sampling period of between 1 and 14 days, the adsorbent is collected and analyzed in the laboratory to quantify the average VOC concentration. This device resolves some of the limitations of classical spot sampling, since fluctuations in the concentration of VOCs are taken into account by averaging them over time. This study presents the results obtained by the device for quantifying the VOCs legislated in Directive 2000/60/EC. We present the validation of linearity over time and the limits of quantification, as well as the sampling rates (Rs) obtained for each compound. The results demonstrate the high robustness and high sensitivity of the device. In addition, the system has been validated on real wastewater samples by comparing the results obtained with this device with the values from classical spot sampling, with excellent agreement. PMID:24388502

Santiago Sánchez, Noemí; Tejada Alarcón, Sergio; Tortajada Santonja, Rafael; Llorca-Pórcel, Julio

2014-07-01

139

Mapping spatio-temporal variables: The impact of the time-averaging window width on the spatial accuracy  

NASA Astrophysics Data System (ADS)

Spatial mapping of variables that vary in space and time is a common procedure in many research fields. Very often it is of interest to map the time-average or time-integration of the variable over the whole period of interest. Normally, such a map is produced by spatially interpolating the whole period averages of the observed data. An alternative option is to first spatially interpolate narrow time slice averages of the variable and then sum the resultant maps. This paper discusses the latter option, and the accuracy of the spatio-temporal variable interpolation as a function of the width of the time-averaging window. Theoretically, using a linear and data-value independent operator to interpolate a complete data set (i.e. without missing data), the accuracy is independent of the width of the time-averaging window. However, using a nonlinear or a data-value dependent interpolation operator, and/or in the presence of missing data, the accuracy of the interpolation can vary with the averaging window width. The concept is demonstrated using a set of half-hourly SO2 concentrations measured at 20 monitoring stations in Haifa Bay area, Israel, during the years 1996-2002. Cross-validated interpolation accuracy measures calculated for this data set vary significantly with the time-averaging window width, showing a clear minimum at daily averaging. The results and their general implications for the interpolation of spatio-temporal variables are discussed.

Yuval; Broday, David M.; Carmel, Yohay

140

Cochlear Modeling Using "Time-Averaged Lagrangian" Method: Comparison with VBM, PST, and ZC Measurements

NASA Astrophysics Data System (ADS)

In this work, basilar membrane velocity (VBM), scala tympani intracochlear pressure (PST), and cochlear input impedances (Zc) for gerbil and chinchilla are computed using a three-dimensional hydrodynamic cochlear model based on 1) a time-averaged Lagrangian, 2) a push-pull mechanism in the active case, and 3) the complex anatomy of the cochlear scalae obtained by micro computed tomography (µCT) scanning and 3-D reconstruction of gerbil and chinchilla temporal bones. The objective of this work is to compare the calculations of the present model with physiological measurements of the gerbil and chinchilla cochleae, such as VBM (Ren and Nuttall [1]), PST (Olson [2]), and ZC (Decraemer et al. [3], Songer and Rosowski [4], Ruggero et al. [5]). A WKB asymptotic method combined with Fourier series expansions is used to provide an efficient simulation. VBM and PST simulation results for the gerbil cochlea show good agreement with the physiological measurements in both magnitude and phase, without large phase excursions. ZC simulations from the gerbil and chinchilla models show reasonably good agreement with measurement.

Yoon, Y.; Kim, N.; Puria, S.; Steele, C. R.

2009-02-01

141

Distribution of Time-Averaged Pressure Fluctuations Along the Boundary of a Round Subsonic Jet  

NASA Technical Reports Server (NTRS)

A semiempirical analysis of the equation for incompressible fluctuations in a turbulent fluid, using similarity relations for round subsonic jets with uniform exit velocity, is used to predict the shape of the time-averaged fluctuation-pressure distribution along the mean-velocity boundary of jets. The predicted distribution is independent of distance downstream of the nozzle exit along the mixing region, inversely proportional to the distance downstream along the region of mean-velocity self-preservation, and proportional to the inverse square of the distance downstream along the fully developed region. Experimental results were in fair agreement with the theory. However, the measured fluctuation-pressure distributions were found to be very sensitive to changes in jet temperature and jet-nozzle profile, especially near the nozzle. These factors are not included in the theory. Increased jet temperatures produce increased pressure fluctuations and violation of similarity conditions. Nozzle-profile modifications may lead to violation of the uniform-exit-velocity requirement imposed in the theory.

Howes, Walton L.

1960-01-01

142

Holographic microscope for measuring displacements of vibrating microbeams using time-averaged, electro-optic holography  

NASA Astrophysics Data System (ADS)

An optical microscope, utilizing the principles of time-averaged hologram interferometry, is described for microelectromechanical systems (MEMS) applications. MEMS are devices fabricated via techniques such as microphotolithography to create miniature actuators and sensors. Many of these sensors are currently deployed in automotive applications which rely on the dynamic behavior of the sensor, e.g., airbag sensors, ride-monitoring suspension sensors, etc. Typical dimensions of current MEMS devices are measured in micrometers, a small fraction of the diameter of a human hair, and the current trend is to further decrease the size of MEMS devices to submicrometer dimensions. However, the smaller MEMS become, the more challenging it is to measure with accuracy the dynamic characteristics of these devices. An electro-optic holographic microscope (EOHM) for the purpose of studying the dynamic behavior of MEMS-type devices is described. Additionally, by performing phase measurements within an EOHM image, object displacements are determined as illustrated by representative examples. With the EOHM, devices with surface sizes ranging from approximately 35 x 400 to 5 x 18 micrometers are studied while undergoing resonant vibrations at frequencies as high as 2 MHz.

Brown, Gordon C.; Pryputniewicz, Ryszard J.

1998-05-01

143

Influence of the averaging time on the quality of reconstruction of small-scale wind turbulence characteristics in acoustic sounding  

NASA Astrophysics Data System (ADS)

The influence of the averaging period on the quality of reconstruction of wind velocity structure functions and characteristics from minisodar measurements is investigated. Values of the structure functions and characteristics averaged over 10-, 20-, 25-, 30-, and 60-min periods are presented. Results of calculations using an original algorithm demonstrate that the structure characteristic decreases with increasing averaging time. Thus, for the 10-min averaging period, the velocity structure characteristic calculated for one series of minisodar measurements was 0.00376 m^(4/3)/s^2 at a sounding altitude of 100 m; it decreased significantly (by 37%) for the 20-min averaging period, and decreased by less than 5% for the 25-min averaging period. This allows us to recommend the 20-min averaging period as the optimal one for investigating the dynamics of small-scale wind turbulence using acoustic sounding.

Kapegesheva, O. F.; Krasnenko, N. P.; Stafeev, P. G.; Shamanaeva, L. G.

2013-03-01

144

Extension of the Averaging Time in Eddy-Covariance Measurements and Its Effect on the Energy Balance Closure  

NASA Astrophysics Data System (ADS)

The modified ogive analysis and the block ensemble average were employed to investigate the impact of the averaging time extension on the energy balance closure over six land-use types. The modified ogive analysis, which requires a steady-state condition, can extend the averaging time up to a few hours and suggests that an averaging time of 30 min is still overall sufficient for eddy-covariance measurements over low vegetation. The block ensemble average, which does not require a steady-state condition, can extend the averaging time to several days. However, it can improve the energy balance closure for some sites during specific periods, when secondary circulations exist in the vicinity of the sensor. These near-surface secondary circulations mainly transport sensible heat, and when near-ground warm air is transported upward, the sensible heat flux observed by the block ensemble average will increase at longer averaging times. These findings suggest an alternative energy balance correction for a ground-based eddy-covariance measurement, in which the attribution of the residual depends on the ratio of sensible heat flux to the buoyancy flux. The fraction of the residual attributed to the sensible heat flux by this energy balance correction is larger than in the energy balance correction that preserves the Bowen ratio.
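The effect of extending the averaging window on an eddy-covariance flux can be sketched as follows (Python with NumPy; the vertical-wind and temperature series are synthetic, with an artificial slow oscillation standing in for secondary circulations, and do not represent the study's field data): the kinematic heat flux is the mean covariance of w and T computed from fluctuations about block averages of a chosen length, so the result can change as the window is extended.

import numpy as np

rng = np.random.default_rng(3)
fs = 10                                  # 10 Hz sampling, assumed
hours = 4
n = fs * 3600 * hours

# Synthetic turbulence: correlated w' and T' plus a slow (mesoscale-like)
# component in both, which longer averaging windows start to pick up.
shared = rng.standard_normal(n)
w = 0.3 * shared + 0.3 * rng.standard_normal(n)
T = 0.5 * shared + 0.5 * rng.standard_normal(n)
t = np.arange(n) / fs
w += 0.10 * np.sin(2 * np.pi * t / (3600 * 2))      # 2-h oscillation
T += 1.00 * np.sin(2 * np.pi * t / (3600 * 2))

def block_flux(w, T, fs, window_s):
    """Mean kinematic heat flux <w'T'> using block averaging of window_s seconds."""
    m = fs * window_s
    k = (len(w) // m) * m
    wb = w[:k].reshape(-1, m)
    Tb = T[:k].reshape(-1, m)
    wp = wb - wb.mean(axis=1, keepdims=True)         # fluctuations per block
    Tp = Tb - Tb.mean(axis=1, keepdims=True)
    return (wp * Tp).mean()

for window in (600, 1800, 3600, 3600 * 4):           # 10 min ... 4 h
    print(f"averaging window {window / 60:5.0f} min: "
          f"<w'T'> = {block_flux(w, T, fs, window):.4f} K m/s")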

Charuchittipan, Doojdao; Babel, Wolfgang; Mauder, Matthias; Leps, Jens-Peter; Foken, Thomas

2014-04-01

145

Uncertainty and variability in historical time-weighted average exposure data.  

PubMed

Beginning around 1940, private companies began processing of uranium and thorium ore, compounds, and metals for the Manhattan Engineer District and later the U.S. Atomic Energy Commission (AEC). Personnel from the AEC's Health and Safety Laboratory (HASL) visited many of the plants to assess worker exposures to radiation and radioactive materials. They developed a time-and-task approach to estimating "daily weighted average" (DWA) concentrations of airborne uranium, thorium, radon, and radon decay products. While short-term exposures greater than 10^5 dpm m^-3 of uranium and greater than 10^5 pCi L^-1 of radon were observed, DWA concentrations were much lower. The HASL-reported DWA values may be used as inputs for dose reconstruction in support of compensation decisions, but they have no numerical uncertainties associated with them. In this work, Monte Carlo methods are used retrospectively to assess the uncertainty and variability in the DWA values for 63 job titles from five different facilities that processed U, U ore, Th, or 226Ra-222Rn between 1948 and 1955. Most groups of repeated air samples are well described by lognormal distributions. Combining samples associated with different tasks often results in a reduction of the geometric standard deviation (GSD) of the DWA to less than those GSD values typical of individual tasks. Results support the assumption of a GSD value of 5 when information on uncertainty in DWA exposures is unavailable. Blunders involving arithmetic, transposition, and transcription are found in many of the HASL reports. In 5 out of the 63 cases, these mistakes result in overestimates of DWA values by a factor of 2 to 2.5, and in 2 cases DWA values are underestimated by factors of 3 to 10. PMID:18188049
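A minimal Monte Carlo sketch of the daily-weighted-average calculation (Python with NumPy; the task durations, geometric means and GSDs are invented for illustration, not values from the HASL reports): each task's air concentration is drawn from a lognormal distribution and the DWA is the time-weighted combination, repeated many times to characterize its spread. Comparing the spread of the simulated DWA with the individual task GSDs gives the kind of combined-uncertainty information discussed above.

import numpy as np

rng = np.random.default_rng(4)

# Hypothetical job: three tasks with durations (h), geometric mean air
# concentrations (dpm/m^3) and geometric standard deviations.
hours = np.array([2.0, 5.0, 1.0])
gm    = np.array([3.0e3, 8.0e2, 2.0e4])
gsd   = np.array([2.5, 1.8, 4.0])

n_trials = 100_000
# Draw lognormal concentrations for every task in every trial.
conc = rng.lognormal(mean=np.log(gm), sigma=np.log(gsd),
                     size=(n_trials, hours.size))

# Daily weighted average over the 8-hour day.
dwa = (conc * hours).sum(axis=1) / hours.sum()

gm_dwa = np.exp(np.log(dwa).mean())
gsd_dwa = np.exp(np.log(dwa).std())
print(f"DWA geometric mean: {gm_dwa:,.0f} dpm/m^3, GSD: {gsd_dwa:.2f}")
print(f"5th-95th percentile: {np.percentile(dwa, 5):,.0f} - "
      f"{np.percentile(dwa, 95):,.0f} dpm/m^3")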

Davis, Adam J; Strom, Daniel J

2008-02-01

146

Mercury's Time-Averaged and Induced Magnetic Fields from MESSENGER Observations  

NASA Astrophysics Data System (ADS)

Observations from MESSENGER's Magnetometer (MAG) have allowed the construction of a baseline, time-averaged model for Mercury's magnetosphere. The model, constructed with the approximation that the magnetospheric shape can be represented as a paraboloid, includes two external (magnetopause and magnetotail) current systems and an internal (dipole) field. We take advantage of the geometry of the orbital MAG data to constrain all but one of the model parameters, and their ranges, directly from the observations. These parameters are then used as a priori constraints in the magnetospheric model, and the remaining parameter, the dipole moment, is estimated from a grid search. The model provides an excellent fit to the MAG observations, with a root-mean-square misfit of less than 20 nT globally. The mean distance from the planetary dipole origin to the magnetopause subsolar point, RSS, is 1.45 RM (where RM = 2440 km) and the mean planetary dipole moment is 190 nT RM^3. Temporal variations in the global-scale magnetic fields result from changes in solar wind ram pressure, Pram, at Mercury that arise from the planet's 88-day eccentric orbit around the Sun and from transient, rapid changes in solar wind conditions. For a constant planetary dipole moment, RSS varies as Pram^(-1/6). However, magnetopause crossings obtained from several Mercury years of MESSENGER observations indicate that RSS is proportional to Pram^(-1/a), where a is greater than 6, suggesting induction in Mercury's highly conducting metallic interior. We obtain an effective dipole moment that varies by up to ±15% about its mean value. We further investigate the periodic 88-day induction signature and use the paraboloid model to describe the spatial structure in the inducing magnetopause field, together with estimates for the outer radius of Mercury's liquid core and possible overlying solid iron sulfide layer, to calculate induced core fields. The baseline magnetospheric model is adapted to include the 88-day periodic induction signature, and residuals to this time-varying global model from magnetically quiet orbits are then used to investigate structure at higher degree and order in the internal and external fields.
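The pressure-balance scaling quoted above can be sketched numerically (Python; the reference ram pressure and the proportionality constant, chosen only to reproduce the quoted mean standoff distance, are assumptions of the sketch, not MESSENGER values): for a constant dipole moment the subsolar standoff distance falls off as the -1/6 power of the ram pressure, while a steeper fall-off points to an induced contribution.

import numpy as np

R_M = 2440.0                     # Mercury radius (km)
RSS_mean = 1.45                  # mean subsolar standoff distance (R_M), from the text
Pram_mean = 10.0                 # nPa, an assumed reference ram pressure

# For a fixed dipole moment, pressure balance gives RSS proportional to Pram^(-1/6).
def rss(pram, alpha=6.0):
    return RSS_mean * (pram / Pram_mean) ** (-1.0 / alpha)

for pram in (5.0, 10.0, 20.0, 40.0):
    print(f"Pram = {pram:5.1f} nPa -> RSS = {rss(pram):.2f} R_M "
          f"({rss(pram) * R_M:.0f} km)")

# A steeper fall-off (alpha > 6), as reported from the magnetopause crossings,
# is what suggests induction in Mercury's conducting interior.
print(f"with alpha = 8: RSS at 40 nPa = {rss(40.0, alpha=8.0):.2f} R_M")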

Johnson, C. L.; Winslow, R. M.; Anderson, B. J.; Purucker, M. E.; Korth, H.; Al Asad, M. M.; Slavin, J. A.; Baker, D. N.; Hauck, S. A.; Phillips, R. J.; Zuber, M. T.; Solomon, S. C.

2012-12-01

147

How Does the Supply Requisitioning Process Affect Average Customer Wait Time Onboard U.S. Navy Destroyers.  

National Technical Information Service (NTIS)

The Navy's current inventory and requisition management procedures for issuing repair parts onboard ships have remained relatively unchanged for decades. As a result of current practices, many ships are experiencing higher average customer wait times (ACW...

A. Phillips; P. R. Saucedo

2013-01-01

148

Short-term sandbar variability based on video imagery: Comparison between Time-Average and Time-Variance techniques  

USGS Publications Warehouse

Time-exposure intensity (averaged) images are commonly used to locate the nearshore sandbar position (xb), based on the cross-shore locations of maximum pixel intensity (xi) of the bright bands in the images. It is not known, however, how the breaking patterns seen in Variance images (i.e. those created through standard deviation of pixel intensity over time) are related to the sandbar locations. We investigated the suitability of both Time-exposure and Variance images for sandbar detection within a multiple bar system on the southern coast of Brazil, and verified the relation between wave breaking patterns, observed as bands of high intensity in these images, and cross-shore profiles of modeled wave energy dissipation (xD). Not only is the Time-exposure maximum pixel intensity location (xi-Ti) well related to xb, but also to the maximum pixel intensity location of Variance images (xi-Va), although the latter was typically located 15 m offshore of the former. In addition, xi-Va was observed to be better associated with xD even though xi-Ti is commonly assumed to mark maximum wave energy dissipation. Significant wave height (Hs) and water level were observed to affect the two types of images in a similar way, with an increase in both Hs and water level resulting in xi shifting offshore. This water-level-induced xi variability has an opposite behavior to what is described in the literature, and is likely an indirect effect of higher waves breaking farther offshore during periods of storm surges. Multiple regression models performed on xi, Hs and water level allowed the reduction of the residual errors between xb and xi, yielding accurate estimates with most residuals less than 10 m. Additionally, it was found that the sandbar position was best estimated using xi-Ti (xi-Va) when xb was located shoreward (seaward) of its mean position, for both the first and the second bar. Although it is unknown whether this is an indirect hydrodynamic effect or is indeed related to the morphology, we found that this behavior can be explored to optimize sandbar estimation using video imagery, even in the absence of hydrodynamic data. © 2011 Elsevier B.V.

Guedes, R. M. C.; Calliari, L. J.; Holland, K. T.; Plant, N. G.; Pereira, P. S.; Alves, F. N. A.

2011-01-01

149

Importing Leaders for School Turnarounds: Lessons and Opportunities  

ERIC Educational Resources Information Center

One of the biggest challenges in education today is identifying talented candidates to successfully lead turnarounds of persistently low-achieving schools. Evidence suggests that the traditional principal pool is already stretched to capacity and cannot supply enough leaders to fix failing schools. But potentially thousands of leaders capable of…

Kowal, Julie; Hassel, Emily Ayscue

2011-01-01

150

Can Principals' Emotional Intelligence Matter to School Turnarounds?  

ERIC Educational Resources Information Center

Through the literature review, the study aims to explore the relationship between the emotional intelligence (EI) of principals and the turnarounds of low-performing schools and generate some hypotheses regarding this issue. Rigorous, empirical studies published in the last 15 years are selected and reviewed around three relevant topics: the…

Cai, Qijie

2011-01-01

151

The Turnaround Mindset: Aligning Leadership for Student Success  

ERIC Educational Resources Information Center

This book provides a valuable balance between what one must know and what one must do to turn around low-performing schools. The 3-E framework simplifies this complex process by focusing resources on the environment, the executive, and the execution of the turnaround plan. Central to each of these components is a spotlight on the values supporting…

Fairchild, Tierney Temple; DeMary, Jo Lynne

2011-01-01

152

129. Back side technical facilities passageways between turnaround trans. & ...  

Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

129. Back side technical facilities passageways between turnaround trans. & building no. 104, "plans, sections & elevations" - architectural, AS-BLT AW 36-25-13, sheet 9 of 40, dated 23 November, 1960. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

153

Teams: Harnessing the Driving Force for School Turnaround  

Microsoft Academic Search

Using quality circles and teams is fairly new to education--and is constrained by how schools are organized and a past emphasis on teacher autonomy. Can school leaders borrow from the business practice of using teams to promote student learning in their schools? This note discusses the information gleaned from participants in the Turnaround Specialist Program that is part of a

June West

154

Diagnostic quality of time-averaged ECG-gated CT data  

NASA Astrophysics Data System (ADS)

Purpose: ECG-gated CTA allows visualization of the aneurysm and stentgraft during the different phases of the cardiac cycle, although with a lower SNR per cardiac phase than without ECG gating using the same dose. In our institution, abdominal aortic aneurysm (AAA) is evaluated using non-ECG-gated CTA. Some common CT scanners cannot reconstruct a non-gated volume from ECG-gated acquired data. In order to obtain the same diagnostic image quality, we propose offline temporal averaging of the ECG-gated data. This process, though straightforward, is fundamentally different from taking a non-gated scan, and its result will certainly differ as well. The purpose of this study is to quantitatively investigate how good off-line averaging approximates a non-gated scan. Method: Non-gated and ECG-gated CT scans have been performed on a phantom (Catphan 500). Afterwards the phases of the ECG-gated CTA data were averaged to create a third dataset. The three sets are compared with respect to noise properties (NPS) and frequency response (MTF). To study motion artifacts identical scans were acquired on a programmable dynamic phantom. Results and Conclusions: The experiments show that the spatial frequency content is not affected by the averaging process. The minor differences observed for the noise properties and motion artifacts are in favor of the averaged data. Therefore the averaged ECG-gated phases can be used for diagnosis. This enables the use of ECG-gating for research on stentgrafts in AAA, without impairing clinical patient care.

Klein, Almar; Oostveen, Luuk J.; Greuter, Marcel J. W.; Hoogeveen, Yvonne; Schultze Kool, Leo J.; Slump, Cornelis H.; Renema, W. Klaas Jan

2009-02-01

155

Time Averaging Procedure for Calculating the Mass and Energy Transfer Rates in Adiabatic Two Phase Flow.  

National Technical Information Service (NTIS)

To take advantage of the semi-implicit computer models - to solve the two phase flow differential system - a proper averaging procedure is also needed for the source terms. In fact, in some cases, the correlations normally used for the source terms - not...

L. V. Boccaccini

1986-01-01

156

Daily Average Consumption of 2 Long-Acting Opioids: An Interrupted Time Series Analysis  

PubMed Central

Background Oxycodone controlled release (CR) and oxymorphone extended release (ER) are frequently prescribed long-acting opioids, which are approved for twice-daily dosing. The US Food and Drug Administration approved a reformulated crush-resistant version of oxycodone CR in April 2010. Objective To compare the daily average consumption (DACON) for oxycodone CR and for oxymorphone ER before and after the introduction of the reformulated, crush-resistant version of oxycodone CR. Methods This was a retrospective claims database analysis using pharmacy claims from the MarketScan database for the period from January 2010 through March 2011. The interrupted time series analysis was used to evaluate the impact of the introduction of reformulated oxycodone CR on the DACON of the 2 drugs—oxycodone CR and oxymorphone ER. The source of the databases included private-sector health data from more than 150 medium and large employers. All prescription claims containing oxycodone CR and oxymorphone ER dispensed to members from January 1, 2010, to March 31, 2011, were included in the analysis. Prescription claims containing duplicate National Drug Codes, missing member identification, invalid quantities or inaccurate days supply of either drug, and DACON values of <1 and >500 were removed. Results The database yielded 483,063 prescription claims for oxycodone CR and oxymorphone ER from January 1, 2010, to March 31, 2011. The final sample consisted of 411,404 oxycodone CR prescriptions (traditional and reformulated) dispensed to 85,150 members and 62,656 oxymorphone ER prescriptions dispensed to 11,931 members. Before the introduction of reformulated oxycodone CR, DACON values for the highest strength available for each of the 2 drugs were 0.51 tablets higher for oxycodone CR than for oxymorphone ER, with mean DACON values of 3.5 for oxycodone CR and 3.0 for oxymorphone ER (P <.001). The differences of mean DACON between the 2 drugs for all lower strengths were 0.46 tablets, with mean DACON values of 2.7 for oxycodone CR and 2.3 for oxymorphone ER (P <.001). After the introduction of the new formulation, the difference in mean DACON between the 2 drugs was slightly lower: 0.45 tablets for the highest-strength and 0.40 tablets for the lower-strength pairs. Regression analyses showed that the immediate and overall impact of the reformulation of oxycodone CR on the DACON of oxycodone CR was minimal, whereas no changes were seen in the DACON of oxymorphone ER. The estimated DACON for oxycodone CR decreased by 0.1 tablets, or 3.7% (P <.001), 6 months after the new formulation was introduced. Conclusion The mean DACON was 0.4 tablets per day higher for oxycodone CR compared with oxymorphone ER for all dosage strengths for the entire study period. After the introduction of the reformulated oxycodone CR, the DACON for this drug was slightly mitigated; however, there was a minimal impact on the mean differences between oxycodone CR and oxymorphone ER.
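A minimal sketch of the interrupted time series (segmented regression) idea used above (Python with NumPy; the monthly DACON values and the intervention month are invented, not the MarketScan results): the model has a baseline level and slope plus a level change and a slope change starting at the intervention, and the coefficients are fit by ordinary least squares.

import numpy as np

# Hypothetical monthly mean DACON series, 15 months, intervention after month 8.
dacon = np.array([3.50, 3.52, 3.49, 3.51, 3.53, 3.50, 3.52, 3.51,   # pre
                  3.48, 3.46, 3.45, 3.43, 3.42, 3.41, 3.40])        # post
months = np.arange(len(dacon))
t0 = 8                                                # first post-intervention month

post = (months >= t0).astype(float)                   # level-change indicator
post_time = np.where(months >= t0, months - t0, 0.0)  # slope-change term

# Design matrix: intercept, time, level change, slope change.
X = np.column_stack([np.ones_like(months, dtype=float), months, post, post_time])
beta, *_ = np.linalg.lstsq(X, dacon, rcond=None)
b0, b1, b2, b3 = beta
print(f"baseline level {b0:.3f}, baseline slope {b1:+.4f}/month")
print(f"level change at intervention {b2:+.3f}, slope change {b3:+.4f}/month")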

Puenpatom, R. Amy; Szeinbach, Sheryl L.; Ma, Larry; Ben-Joseph, Rami H.; Summers, Kent H.

2012-01-01

157

Area-averaged surface fluxes and their time-space variability over the FIFE experimental domain  

NASA Technical Reports Server (NTRS)

The underlying mean and variance properties of surface net radiation, sensible-latent heat fluxes and soil heat flux are studied over the densely instrumented grassland region encompassing FIFE. Flux variability is discussed together with the problem of scaling up to area-averaged fluxes. Results are compared and contrasted for cloudy and clear situations and examined for the influence of surface-induced biophysical controls (burn and grazing treatments) and topographic controls (aspect ratios and slope factors).

Smith, E. A.; Hsu, A. Y.; Crosson, W. L.; Field, R. T.; Fritschen, L. J.; Gurney, R. J.; Kanemasu, E. T.; Kustas, W. P.; Nie, D.; Shuttleworth, W. J.

1992-01-01

158

Development of an Effective Travel Time Prediction Method Using Modified Moving Average Approach  

Microsoft Academic Search

Prediction of travel time on road network has emerged as a crucial research issue in intelligent transportation system (ITS). Travel time prediction provides information that may allow travelers to change their routes as well as departure time. To provide accurate travel time for travelers is the key challenge in this research area. In this paper, we formulate two new methods

Nihad Karim Chowdhury; Rudra Pratap Deb Nath; Hyunjo Lee; Jaewoo Chang

2009-01-01

159

Effects of core excess reactivity and coolant average temperature on maximum operable time of NIRR-1 miniature neutron source reactor  

Microsoft Academic Search

We appraised in this study the effects of core excess reactivity and average coolant temperature on the operable time of the Nigeria Research Reactor-1 (NIRR-1), which is a miniature neutron source reactor (MNSR). The duration of the reactor operating time and fluence depletion under different operation mode as well as change in core excess reactivity with temperature coefficient was investigated

Y. A. Ahmed; I. B. Mansir; I. Yusuf; G. I. Balogun; S. A. Jonah

2011-01-01

160

Instantaneous and time-averaged dispersion and measurement models for estimation theory applications with elevated point source plumes  

NASA Technical Reports Server (NTRS)

Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.

Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

1977-01-01

161

An analog front end for high speed fast turnaround modems  

Microsoft Academic Search

A single-chip analog front end (AFE) for V.32 9600-b\\/s fast turnaround modems is presented. The AFE mixes analog and digital circuits to achieve a high level of integration of modem front end functions. The AFE integrates 80 poles of switched-capacitor filters for all transmission modes with DSP (digital signal processor) and microprocessor interfaces, 8-bit linear A\\/D (analog-to-digital) and D\\/A (digital-to-analog)

R. Halim; D. Shamlou; J. Illgner

1989-01-01

162

The BBSome controls IFT assembly and turnaround in cilia  

PubMed Central

The bidirectional movement of intraflagellar transport (IFT) particles, which are composed of motors, IFT-A and IFT-B subcomplexes, and cargos, is required for cilia biogenesis and signaling 1, 2. A successful IFT cycle depends on the massive IFT particle to be properly assembled at the ciliary base and turned around from anterograde to retrograde transport at the ciliary tip. However, how IFT assembly and turnaround are regulated in vivo remains elusive. From a whole-genome mutagenesis screen in C. elegans, we identified two hypomorphic mutations in dyf-2 and bbs-1 as the only mutants showing normal anterograde IFT transport but defective IFT turnaround at the ciliary tip. Further analyses revealed that the BBSome 3, 4, a group of conserved proteins affected in human Bardet-Biedl syndrome (BBS) 5, assembles IFT complexes at the ciliary base, then binds to anterograde IFT particle in a DYF-2- (an ortholog of human WDR19) and BBS-1-dependent manner, and lastly reaches the ciliary tip to regulate proper IFT recycling. Our results unravel the BBSome as the key player regulating IFT assembly and turnaround in cilia.

Wei, Qing; Zhang, Yuxia; Li, Yujie; Zhang, Qing; Ling, Kun; Hu, Jinghua

2012-01-01

163

Empirical wind model for the middle and lower atmosphere. Part 1: Local time average  

NASA Technical Reports Server (NTRS)

The HWM90 thermospheric wind model was revised in the lower thermosphere and extended into the mesosphere and lower atmosphere to provide a single analytic model for calculating zonal and meridional wind profiles representative of the climatological average for various geophysical conditions. Gradient winds from CIRA-86 plus rocket soundings, incoherent scatter radar, MF radar, and meteor radar provide the data base and are supplemented by previous data driven model summaries. Low-order spherical harmonics and Fourier series are used to describe the major variations throughout the atmosphere including latitude, annual, semiannual, and longitude (stationary wave 1). The model represents a smoothed compromise between the data sources. Although agreement between various data sources is generally good, some systematic differences are noted, particularly near the mesopause. Root mean square differences between data and model are on the order of 15 m/s in the mesosphere and 10 m/s in the stratosphere for zonal wind, and 10 m/s and 4 m/s, respectively, for meridional wind.

Hedin, A. E.; Fleming, E. L.; Manson, A. H.; Schmidlin, F. J.; Avery, S. K.; Franke, S. J.

1993-01-01

164

Exploring Granger causality between global average observed time series of carbon dioxide and temperature  

SciTech Connect

Detection and attribution methodologies have been developed over the years to delineate anthropogenic from natural drivers of climate change and impacts. A majority of prior attribution studies, which have used climate model simulations and observations or reanalysis datasets, have found evidence for humaninduced climate change. This papers tests the hypothesis that Granger causality can be extracted from the bivariate series of globally averaged land surface temperature (GT) observations and observed CO2 in the atmosphere using a reverse cumulative Granger causality test. This proposed extension of the classic Granger causality test is better suited to handle the multisource nature of the data and provides further statistical rigor. The results from this modified test show evidence for Granger causality from a proxy of total radiative forcing (RC), which in this case is a transformation of atmospheric CO2, to GT. Prior literature failed to extract these results via the standard Granger causality test. A forecasting test shows that a holdout set of GT can be better predicted with the addition of lagged RC as a predictor, lending further credibility to the Granger test results. However, since second-order-differenced RC is neither normally distributed nor variance stationary, caution should be exercised in the interpretation of our results.
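As a point of reference only, a minimal Python sketch of the classic bivariate Granger-causality test via statsmodels, run on two synthetic series standing in for the forcing proxy (RC) and temperature (GT); the paper's reverse cumulative variant is not implemented here.

    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    rc = rng.normal(size=150).cumsum()                                  # synthetic stand-in for the RC forcing proxy
    gt = 0.3 * np.concatenate(([0.0, 0.0], rc[:-2])) + rng.normal(size=150)  # GT lags RC by two steps

    # Column order matters: the test asks whether the second column Granger-causes the first.
    data = np.column_stack([gt, rc])
    grangercausalitytests(data, maxlag=3)                               # prints test statistics and p-values for lags 1-3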

Kodra, Evan A [ORNL; Chatterjee, Snigdhansu [University of Minnesota; Ganguly, Auroop R [ORNL

2010-01-01

165

Turnaround Aid Raising Hopes, Also Concerns  

ERIC Educational Resources Information Center

As the U.S. Department of Education prepares to throw $3 billion in one-time money on the table to improve perennially foundering schools, a gulf is emerging between what federal officials would like to see done with the funds and what many districts say is their capacity--and inclination--to deliver. While some districts say the federal largess…

Klein, Alyson

2009-01-01

166

On estimating effective frequencies and average waiting times for indirect connections  

Microsoft Academic Search

This paper is concerned with the problem of estimating two important service properties of transportation systems in which a traveler may have to use more than one vehicle to complete his journey, such as in urban public transport or intercity air, bus or rail systems. The service properties are waiting time for connections and effective frequency of service from origin

Ashish K. Sen; Edward K. Morlok

1976-01-01

167

On the Possibility of Consensus in Asynchronous Systems with Finite Average Response Times  

Microsoft Academic Search

It has long been known that the consensus problem can- not be solved deterministically in completely asynchronous distributed systems, i.e., systems (1) without assumptions on communication delays and relative speed of processes and (2) without access to real-time clocks. In this paper 1 we de- fine a new asynchronous system model: Instead of assuming reliable channels with finite transmission delays,

Christof Fetzer; Ulrich Schmid; Martin Süßkraut

2005-01-01

168

Grade Point Average as a Predictor of Timely Graduation from Associate Degree Registered Nursing Programs  

ERIC Educational Resources Information Center

The purpose of this study was to determine if admission selection strategies that utilize cumulative and/or pre-requisite GPA are predictive of timely graduation for associate degree nursing (RN-AD) students. Data were obtained from de-identified records of 437 associate degree nursing students enrolled in three Midwest community colleges from…

Jackson, Delores J.

2010-01-01

169

A new time series prediction algorithm based on moving average of nth-order difference  

Microsoft Academic Search

As a typical research topic, time series analysis and prediction face a continuously rising interest and have been widely applied in various domains. Current approaches focus on a large number of data collections, using mathematics, statistics and artificial intelligence methods, to process and make a prediction on the next most probable value. This paper proposes a new algorithm using moving
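The abstract is truncated, so the following Python sketch is only a generic reading of the technique named in the title (forecast the next value by assuming the next nth-order difference equals a moving average of recent nth-order differences); it is not the authors' algorithm.

    import numpy as np
    from math import comb

    def predict_next(series, n=1, window=5):
        """One-step forecast from the moving average of the last `window` nth-order differences."""
        d = np.diff(series, n=n)
        avg_d = d[-window:].mean()
        # Invert the nth-order difference:
        # y[t+1] = avg_d + sum_{k=1..n} (-1)^(k+1) * C(n, k) * y[t+1-k]
        tail = series[::-1]                      # tail[k-1] == y[t+1-k]
        return avg_d + sum((-1) ** (k + 1) * comb(n, k) * tail[k - 1] for k in range(1, n + 1))

    x = np.array([1.0, 2.1, 3.0, 4.2, 5.1, 6.0])
    print(predict_next(x, n=1, window=3))        # about 7.0 for this roughly linear toy series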

Yang Lan; Daniel Neagu

2007-01-01

170

Extending Hierarchical Reinforcement Learning to Continuous-Time, Average-Reward, and Multi-Agent Models.  

National Technical Information Service (NTIS)

Hierarchical reinforcement learning (HRL) is a general framework that studies how to exploit the structure of actions and tasks to accelerate policy learning in large domains. Prior work on HRL has been limited to the discrete-time discounted reward semi-...

M. Ghavamzadeh S. Mahadevan R. Makar

2003-01-01

171

The Effect of Non-Normal Distributions on the Integrated Moving Average Model of Time-Series Analysis.  

ERIC Educational Resources Information Center

The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations; normal,…

Doerann-George, Judith

172

Limit theorems on a linear explosive stochastic model for time series with moving average error  

Microsoft Academic Search

Summary: Let X(t) be a linear autoregressively generated explosive time series, with autoregressive coefficients b_1, …, b_q, a constant term b_0, and an error term ε̄(t) = a_0 ε(t) + a_1 ε(t−1) + ⋯ + a_l ε(t−l), with a_0 = 1, where the ε(t), t ≥ 1, are independent, Eε(t) = 0, and E…

K. N. Venkataraman; K. Suresh Chandra

1984-01-01

173

Simple and accurate method to evaluate the time-averaged performance of flat-plate solar collectors  

SciTech Connect

The article demonstrates an improved version of a method that the authors developed over the last 10 years. The method has two stages. First, computational procedures, graphs, and tables are presented for the average day of a given time period and for a specific reference state of the collector (horizontal position and the working fluid average temperature kept constant during the day). Then procedures are developed to use these graphs and tables for different cloud cover classes of day and collector states characterized by any tilt, orientation, and thermal condition and for operational regimes. The central concept of the method is the noon equivalent time, which plays a role similar to that of utilizability in other computational methods. The required tables and graphs must be presented all at once for a given location and time period. Toward this aim, detailed data are needed on the dependence of solar global irradiance on tilt and orientation and cloud cover. Several computational examples are shown.

Zamfir, E.; Oancea, C.; Badescu, V.; Binder, C.; Radulescu, L. [Polytechnic Univ. of Bucharest (Romania); Ciocoiu, I. [National Meteorological Inst., Bucharest (Romania)

1996-09-01

174

Compact, high average power, fiber-pumped terahertz source for active real-time imaging of concealed objects.  

PubMed

We have modeled and demonstrated a scalable, compact, fiber-pumped terahertz source based on difference frequency mixing (DFM) in zinc germanium phosphide (ZGP) capable of producing high average and peak-power pulses. Currently, our terahertz source produces 2 mW of average THz power and >40 W of peak power with sub-nanosecond pulses at a repetition rate of 100 kHz in the range of 2-3 THz without cryogenic cooling or ultra-fast optics. This high average power laser-based terahertz output enables the real-time imaging of concealed objects using an off-the-shelf uncooled microbolometer focal-plane array. With this THz system, we have imaged objects obscured inside a common shipping envelope, demonstrating the potential of compact laser-based terahertz sources for use in security screening applications. PMID:19546954

Creeden, Daniel; McCarthy, John C; Ketteridge, Peter A; Schunemann, Peter G; Southward, Timothy; Komiak, James J; Chicklis, Evan P

2007-05-14

175

Autoregressive moving average model for analyzing edge localized mode time series on Axially Symmetric Divertor Experiment (ASDEX) Upgrade tokamak  

SciTech Connect

An approach to analysis of time series of edge localized modes (ELMs) is proposed. It is based on the use of the autoregressive moving average model, which decomposes time series into deterministic and noise components. Despite the inclusion of nonlinearity in the model, the resulting deterministic equations for the ELM time series measured on the Axially Symmetric Divertor Experiment (ASDEX) Upgrade tokamak turn out to be linear. This contrasts with the findings on the JAERI tokamak (JT-60U) and the Tokamak à Configuration Variable (TCV) that ELMs exhibit features of chaotic dynamics, namely, the presence of unstable periodic orbits. This methodology for distinguishing chaotic behavior is examined, and found to be susceptible to misinterpretation.
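A minimal Python sketch of the general approach described above (fit an ARMA model, then read off the deterministic coefficients and the residual noise), using statsmodels on a synthetic series; the model order and data are placeholders, not the ASDEX Upgrade analysis itself.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(1)
    x = np.zeros(500)
    for t in range(1, 500):
        x[t] = 0.6 * x[t - 1] + rng.normal()     # synthetic stand-in for an ELM time trace

    fit = ARIMA(x, order=(2, 0, 1)).fit()        # ARMA(2,1); the middle 0 means no differencing
    print(fit.params)                            # deterministic AR/MA coefficients
    noise = fit.resid                            # the stochastic component left over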

Zvejnieks, G.; Kuzovkov, V.N.; Dumbrajs, O.; Degeling, A.W.; Suttrop, W.; Urano, H.; Zohm, H. [Institute of Solid State Physics University of Latvia, Euratom-University of Latvia Association, 8 Kengaraga str., Riga LV-1063 (Latvia); Helsinki University of Technology, Euratom-Tekes Association, FIN-02015 HUT Helsinki (Finland); Centre de Recherches en Physique des Plasmas, Ecole Polytechnique Federale de Lausanne, Association EURATOM, Confederation Suisse, Lausanne CH-1015 (Switzerland); Max-Planck-Institut fuer Plasmaphysik, Euratom Association, Garching D-85740 (Germany)

2004-12-01

176

On various averaging methods for a nonlinear oscillator with slow time-dependent potential and a nonconservative perturbation  

Microsoft Academic Search

The main aim of the paper is to compare various averaging methods for constructing asymptotic solutions of the Cauchy problem for the one-dimensional anharmonic oscillator with potential V(x, τ) depending on the slow time τ = εt and with a small nonconservative term εg(ẋ, x, τ), ε ≪ 1. This problem was discussed in numerous papers,

S. Yu. Dobrokhotov; D. S. Minenkov

2010-01-01

177

Phase-resolved and time-averaged puff motions of an excited stack-issued transverse jet  

NASA Astrophysics Data System (ADS)

The dynamics of puff motions in an excited stack-issued transverse jet were studied experimentally in a wind tunnel. The temporal and spatial evolution processes of the puffs induced by acoustic excitation were examined using the smoke flow visualization method and high-speed particle image velocimetry. The temporal and spatial evolutions of the puffs were examined using phase-resolved ensemble-averaged velocity fields and the velocity, length scales, and vorticity characteristics of the puffs were studied. The time-averaged velocity fields were calculated to analyze the velocity distributions and vorticity contours. The results show that a puff consists of a pair of counter-rotating vortex rings. An initial vortex ring was formed due to a concentration of vorticity at the lee side of the issuing jet at the instant of the mid-oscillation cycle. A vortex ring rotating in the opposite direction to that of the initial vortex ring was subsequently formed at the upwind side of the issuing jet. These two counter-rotating vortex rings formed a "mushroom" vortex pair, which was deflected by the crossflow and traveled downstream along a time-averaged trajectory of zero vorticity. The trajectory was situated far above the time-averaged streamline evolving from the leading edge of the tube. The velocity magnitudes of the vortex rings at the upwind and the lee side decreased with time evolution as the puffs traveled downstream due to momentum dissipation and entrainment effects. The puffs traveling along the trajectory of zero vorticity caused large velocities to appear above the leading-edge streamline.

Hsu, C. M.; Huang, R. F.

2013-07-01

178

Average recovery time from a standardized intravenous sedation protocol and standardized discharge criteria in the general dental practice setting.  

PubMed Central

Intravenous sedation has been used in dentistry for many years because of its perceived advantages over general anesthesia, including shorter recovery times. However, there is limited literature available on recovery from intravenous dental sedation, particularly in the private general practice setting. The aim of this study was to describe the recovery times when sedation was conducted in private dental practice and to consider this in relation to age, weight, procedure type, and procedure time. The data were extracted from the intravenous sedation records available with 1 general anesthesia-trained dental practitioner who provides ambulatory sedation services to a number of private general dental practices in the Perth, Western Australia Metropolitan Area. Standardized intravenous sedation techniques as well as clear standardized discharge criteria were utilized. The sedatives used were fentanyl, midazolam, and propofol. Results from 85 patients produced an average recovery time of 19 minutes. Recovery time was not associated with the type or length of dental procedures performed.

Lepere, A. J.; Slack-Smith, L. M.

2002-01-01

179

SSME 3-D Turnaround Duct flow analysis - CFD predictions  

NASA Technical Reports Server (NTRS)

CFD analysis is presently employed to obtain an improved flowfield for an individual flowpath in the case of the Space Shuttle Main Engine's High Pressure Fuel Turbopump Turn-Around Duct (TAD), which conducts the flow exiting from the gas turbines into the fuel bowl. It is demonstrated that the application of CFD to TAD flow analysis, giving attention to the duct's configuration and to the number, shape, and alignment of the diffuser struts, can enhance understanding of flow physics and result in improved duct design and performance.

Brankovic, Andreja; Stowers, Steven T.; Mcconnaughey, Paul

1988-01-01

180

Development of a low-cost system for measuring conditional time-averaged gradients of SO2 and NH3.  

PubMed

A conditional time-averaged gradient (COTAG) system has been developed to provide direct long-term (weekly to monthly) average flux gradient measurements for a range of trace gases, between land and atmosphere. Over daily periods, atmospheric conditions can range from high stability, where the vertical gradients of ambient concentration are enhanced due to very small diffusivity, to highly unstable conditions, in which concentration gradients are small due to the intense turbulent activity of the surface layer. The large vertical gradients generated by high stability would bias the estimate of the actual flux: to avoid this, the COTAG system samples conditionally, within a carefully refined range of stability. A comparison with a continuous flux gradient system suggested that the removal of stable conditions from the sampling period does not substantially modify the evaluation of the long-term fluxes. PMID:19184491

Famulari, D; Fowler, D; Nemitz, E; Hargreaves, K J; Storeton-West, R L; Rutherford, G; Tang, Y S; Sutton, M A; Weston, K J

2010-02-01

181

Nanoparticle-induced rectification in a single cylindrical nanopore: Net currents from zero time-average potentials  

NASA Astrophysics Data System (ADS)

Rectification in nanopores is usually achieved by a fixed asymmetry in the pore geometry and charge distribution. We show here that nanoparticle blocking of a cylindrical pore induces rectifying properties that can support significant net currents with zero time-average potentials. To describe experimentally this effect, the steady-state current-voltage curves of a single nanopore are obtained for different charge states and relative sizes of the pore and the charged nanoparticles, which are present only on one side. The rectification phenomena observed can find applications in the area of nanofluidics and involves physical concepts that are also characteristic of the blocking of protein ion channels by ionic drugs.

Ali, Mubarak; Ramirez, Patricio; Nasir, Saima; Nguyen, Quoc-Hung; Ensinger, Wolfgang; Mafe, Salvador

2014-01-01

182

School Turnaround Fever: The Paradoxes of a Historical Practice Promoted as a New Reform  

ERIC Educational Resources Information Center

School "turnaround" has received significant attention recently in education literature and policy action, especially as a means to dramatically improve urban education. In current common education usage, "turnaround" refers to the rapid, significant improvement in the academic achievement of persistently low-achieving schools.…

Peck, Craig; Reitzug, Ulrich C.

2014-01-01

183

Dancing in a Minefield: An Analysis of Turnaround Specialists in Arizona Schools  

ERIC Educational Resources Information Center

In 2008, educational leaders from the Arizona Department of Education (ADE) assigned a group of turnaround specialists to work in four failing public schools in a large, urban school district in Phoenix, Arizona in hopes of improving those schools. The utilization of turnaround specialists in failing schools was Arizona's method of enacting…

McMillie, Kyann L.

2010-01-01

184

High-Pressure Rocket Engine Turnaround Duct Computational Fluid Dynamics Analysis.  

National Technical Information Service (NTIS)

Current and projected high-pressure, staged-combustion rocket engine designs, such as the XLR129 and SSME, have introduced a type of turnaround duct flowpath wherein the turnaround is accompanied by an increase in mean radius and a decrease in fluid veloc...

G. B. Cox

1984-01-01

185

Average contaminant concentration and mass flow in aquifers from time-dependent pumping well data: Analytical framework  

NASA Astrophysics Data System (ADS)

Conventional water samples are representative for small subsurface volumes relative to the scale of many natural (geological) heterogeneities. We present an analytical framework for estimation of representative field-scale average concentrations and mass flows on the basis of much larger sampling volumes that are obtained through so-called integral pumping tests. The contaminant concentration is then measured as a function of time in a pumping well and used for estimation of the conditions in the aquifer prior to (and after) pumping, increasing the observation scale to the size of the well capture zone. This method complements (and provides an alternative to) conventional monitoring grids, where mass flow and concentration may be misinterpreted or plumes even missed because of problems related to grid spacing. The (not measured) initial spatial concentration distribution and the time-dependent concentration measured at the well are related through a Volterra integral equation of the first kind. For limiting cases of short and long dimensionless pumping duration, two closed form analytical solutions are given, from which the mass flow and average concentration can be evaluated. Furthermore, a new solution for evaluating integral pumping tests of any duration is provided and used for investigating the applicability of the simple, analytical closed form solutions for interpreting test results from seven large-scale contaminated sites in Europe.
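Schematically, and with an unspecified kernel (the paper's actual kernel depends on the pumping-induced flow field and is not reproduced here), the relation described above has the generic first-kind Volterra form

    C_w(t) = \int_0^{t} K(t,s)\, c_0(s)\, \mathrm{d}s ,

where C_w(t) is the concentration measured at the pumping well, c_0 the unknown initial concentration parameterized along the growing capture zone, and K a kernel set by the aquifer and pumping parameters.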

Bayer-Raich, Martí; Jarsjö, Jerker; Liedl, Rudolf; Ptak, Thomas; Teutsch, Georg

2004-08-01

186

Vibration measurement of the tympanic membrane of guinea pig temporal bones using time-averaged speckle pattern interferometry  

NASA Astrophysics Data System (ADS)

``Time-averaged holography'' and ``holographic interferometry'' enable recording of the complete vibration pattern of a surface within several seconds. The results appear in the form of fringes. Vibration amplitudes smaller than 100 nm are not readily measurable by these techniques, because such small amplitudes produce variations in gray level, but not fringes. In practice, to obtain clear fringes in these measurements, stimulus sound pressures higher than 100 dB SPL must be used. The phase of motion is also not obtainable from such fringe techniques. In this study, a sinusoidal phase modulation technique is described, which allows detection of both small amplitudes of motion and their phase from time-averaged speckle pattern interferometry. In this technique, the laser injection current is modulated and digital image processing is used to analyze the measured patterns. When the sound-pressure level of stimuli is between 70 and 85 dB SPL, this system is applied to measure the vibratory response of the tympanic membrane (TM) of guinea pig temporal bones at frequencies up to 4 kHz where complicated vibration modes are observed. The effect of the bulla on TM displacements is also quantified. Results indicate that this system is capable of measuring the nanometer displacements of the TM, produced by stimuli of 70 dB SPL.

Wada, Hiroshi; Ando, Masayoshi; Takeuchi, Masataka; Sugawara, Hironori; Koike, Takuji; Kobayashi, Toshimitsu; Hozawa, Koji; Gemma, Takashi; Nara, Makoto

2002-05-01

187

High Performance High Repetition Rate Miniature Plasma Focus Device: Record Time Averaged Neutron Yield at 200 J with Enhanced Reproducibility  

NASA Astrophysics Data System (ADS)

A high performance high repetition rate plasma focus device with significant time averaged neutron yield and greater shot to shot reproducibility could be a highly valuable neutron source for Homeland Security applications. The single module fast miniature plasma focus 'FMPF-2' (2.4 μF, 56 ± 3 nH, 89 kA @ 14.0 kV, T/4~575 ns) was upgraded to the four module 'FMPF-3' (2.4 μF, 34 ± 2 nH, 103 kA @ 14.0 kV, T/4~458 ns) device. The time averaged neutron output of (1.4 ± 0.6) × 10⁶ n/sec at 1 Hz operation was enhanced to the record value of (1.4 ± 0.2) × 10⁷ n/sec at 10 Hz operation for a burst length of 50 consecutive shots at a deuterium filling gas pressure of 5.5 mbar in the FMPF-3 device at a storage energy of ~200 J. Other key findings of the investigation were: (i) the deuterium filling gas pressure for single shot and repetitive modes of device operation needed to be optimized separately, and (ii) in the repetitive mode of operation the deuterium filling gas pressure was higher than that obtained for single shot mode and also yielded better reproducibility in neutron emission.

Verma, Rishi; Rawat, R. S.; Lee, P.; Springham, S. V.; Tan, T. L.

2013-02-01

188

Time-averaging approximation in the interaction picture: Anisotropy of vibrational pump-probe experiments for coupled chromophores with application to liquid water  

NASA Astrophysics Data System (ADS)

A time-averaging approximation method developed to efficiently calculate the short-time dynamics of coupled vibrational chromophores using mixed quantum/classical theories is extended in order to be applicable to the study of vibrational dynamics at longer time scales. A quantum mechanical time propagator for long times is decomposed into the product of short-time propagators, and a time-averaging approximation is then applied to each of the latter. Using the extended time-averaging approximation, we calculate the anisotropy decay of the data obtained from impulsive vibrational pump-probe experiments on the OH stretching modes of water, which is in excellent agreement with numerically exact results.

Yang, Mino

2012-10-01

189

Reynolds number dependence of an upper bound for the long-time-averaged buoyancy flux in plane stratified Couette flow  

NASA Astrophysics Data System (ADS)

We derive an improved rigorous upper bound for the long-time-averaged vertical buoyancy flux for stably stratified Couette flow; i.e. the flow of a Boussinesq fluid (with reference density ρ_0, kinematic viscosity ν, and thermal diffusivity κ) confined between two parallel horizontal plates separated by a distance d, which are driven at a constant relative velocity ΔU, and are maintained at a constant (statically stable) temperature difference leading to a constant density difference Δρ. We construct the bound by means of a numerical solution to the 'background method' variational problem as formulated by Constantin and Doering, using a one-dimensional unidirectional background. The upper bound so constructed is the best possible bound with the imposed constraints for streamwise-independent mean flows that are statistically steady, and is calculated up to asymptotically large Reynolds numbers. We find that the associated (dimensional) upper bound B*_max on the long-time-averaged and volume-averaged buoyancy flux B* := lim_{t→∞} (1/t) ∫_0^t ⟨ρ u_3⟩ g/ρ_0 dt̃ (where u_3 is the vertical velocity, g is the acceleration due to gravity, and angled brackets denote volume averaging) does not depend on either the bulk Richardson number J = g Δρ d/(ρ_0 ΔU²) of the flow, or the Prandtl number σ = ν/κ of the fluid. We show that B*_max has the same inertial characteristic scaling as the (dimensional) mechanical energy dissipation rate, and that B*_max = 0.001267 ΔU³/d as Re → ∞. The associated flow structure exhibits velocity boundary layers embedded within density boundary layers, with local gradient Richardson numbers Ri = O(σ/Re) ≪ 1 in the vicinity of the horizontal plates. There is a correspondence between the predicted flow structure and the flow structure at a lower Reynolds number associated with the upper bound on the mechanical energy dissipation rate in an unstratified fluid. We establish that, for the flow that maximizes the buoyancy flux, the flux Richardson number Ri_f → 1/3 as Re → ∞, independently to leading order of both Re and J. There is a generic partition of the energy input by the shear into the fluid into three equal parts: viscous dissipation of kinetic energy by the mean flow; viscous dissipation of kinetic energy by perturbation velocities; and vertical buoyancy flux.

Caulfield, C. P.; Tang, W.; Plasting, S. C.

2004-01-01

190

Comparison of Techniques to Estimate Ammonia Emissions at Cattle Feedlots Using Time-Averaged and Instantaneous Concentration Measurements  

NASA Astrophysics Data System (ADS)

Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances where such aerosols can deposit in sensitive regions, potentially causing harm to local ecosystems. However, quantifying the emissions of ammonia from CAFOs through direct measurement is very difficult and costly to perform. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured using inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are only deployed when wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data to modeled emissions from more sophisticated backward Lagrangian stochastic (bLS) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems was deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada) which continuously measured ammonia concentrations instantaneously over a 225-m path. This particular laser is utilized in agricultural settings, and in combination with a bLS model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. This study will first compare the ammonia concentrations measured with the Radiello system to those measured with the long-path IR laser. Second, NH3 emissions estimated using the simple inverse model (FIDES) and the time-averaged data will be compared to emissions derived from the bLS model (WindTrax) using the laser-based NH3 data. Results could lead to a more cost-efficient and simpler technique for monitoring ammonia fluxes from CAFOs and other strong areal sources.

Shonkwiler, K. B.; Ham, J. M.; Williams, C. M.

2013-12-01

191

Autonomous Robotic Refueling System (ARRS) for rapid aircraft turnaround  

NASA Astrophysics Data System (ADS)

An autonomous robotic refuelling system is being developed to achieve rapid aircraft turnaround, notably during combat operations. The proposed system includes a gantry positioner with sufficient reach to position a robotic arm that performs the refuelling tasks; a six degree of freedom manipulator equipped with a remote center of compliance, torque sensor, and a gripper that can handle standard tools; a computer vision system to locate and guide the refuelling nozzle, inspect the nozzle, and avoid collisions; and an operator interface with video and graphics display. The control system software will include components designed for trajectory planning and generation, collision detection, sensor interfacing, sensory processing, and human interfacing. The robotic system will be designed so that upgrading to perform additional tasks will be relatively straightforward.

Williams, O. R.; Jackson, E.; Rueb, K.; Thompson, B.; Powell, K.

192

Time-averaged albumin predicts the long-term prognosis of IgA nephropathy patients who achieved remission  

PubMed Central

Background Primary IgA nephropathy (IgAN) is the most common form of idiopathic glomerulonephritis worldwide. Although most patients are able to achieve remission with the current therapy, a large number of patients will still progress to end-stage renal disease. This study aimed to evaluate kidney disease progression and the risk factors for progression in IgAN patients who achieved remission. Methods Patients from a prospective database with IgAN were included in this study. All the subjects had achieved a complete remission (CR) or partial remission (PR) following 6 months of therapy. Renal survival and the relationship between the clinical parameters and composite renal outcomes were assessed. Results The study comprised 878 IgAN patients recruited between January 2005 and December 2010. Overall, 632 patients were enrolled in this study. The data from the 369 patients who achieved remission were analyzed; the mean follow-up time was 49 months. The median serum creatinine (SCr) concentration at baseline was 91.3 μmol/L, and the time-averaged creatinine (TA-SCr) was 91.8 μmol/L. The mean serum albumin (ALB) level at baseline was 39.4 g/L, and the time-averaged serum albumin (TA-ALB) was 42.1 g/L. Multivariate Cox regression analyses revealed that the TA-ALB and TA-SCr levels were independently associated with the composite renal outcome. The patients with a TA-SCr value > 120 μmol/L and a TA-ALB level …

2014-01-01

193

Generalized Seasonal Autoregressive Integrated Moving Average Models for Count Data with Application to Malaria Time Series with Low Case Numbers  

PubMed Central

Introduction With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions’ impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during “consolidation” and “pre-elimination” phases. Methods Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non Gaussian, non stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. Results The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. Conclusions G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low.
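For orientation, the link-scale recursion of a standard GARMA(p, q) model, which the GSARIMA extension above builds on (notation as commonly written in the GARMA literature, not copied from this paper):

    g(\mu_t) = x_t^{\top}\beta
             + \sum_{j=1}^{p} \phi_j \left[ g(y_{t-j}) - x_{t-j}^{\top}\beta \right]
             + \sum_{j=1}^{q} \theta_j \left[ g(y_{t-j}) - g(\mu_{t-j}) \right],

with the counts y_t conditionally negative-binomial with mean \mu_t and g a log-type link; seasonal and integrated terms are layered on top of this recursion in the GSARIMA case.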

Briet, Olivier J. T.; Amerasinghe, Priyanie H.; Vounatsou, Penelope

2013-01-01

194

Detection of turbulent coherent motions in a forest canopy part II: Time-scales and conditional averages  

NASA Astrophysics Data System (ADS)

Turbulent exchanges between plant canopies and the atmosphere are known to be strongly affected by intermittent coherent motions, which appear on time traces of turbulent variables as periodic, large-amplitude excursions from the mean. Detecting these features requires objective and powerful signal analysis techniques. We investigate here the possibilities offered by the recently developed wavelet transform, presented in a companion paper. For this purpose, a set of data acquired in a 13.5 m high pine forest in southwestern France was used, which provided time series of wind velocities and air temperature recorded at two levels simultaneously, under moderately unstable conditions. Firstly, a duration scale of the active part of coherent motions was estimated from the wavelet variance. Then, we focused on the detection itself of large-scale features; several wavelet functions were tested, and the results compared with those obtained from more classical conditional sampling methods such as VITA and WAG. A mean time interval of 1.8 h/u* (h being the canopy height and u* the friction velocity) between contiguous coherent motions was obtained. The features extracted from the various traces and ensemble-averaged over 30 min periods appeared very similar throughout the four hours of data studied. They provided a dynamic description of the ejection-sweep process, readily observable at both levels. An alternate Reynolds decomposition of the instantaneous turbulent fields, using the conditionally averaged signals, allowed the relative importance of large- and small-scale contributions to momentum and heat fluxes to be estimated. The results were found to be in good agreement with comparable studies.

Collineau, Serge; Brunet, Yves

1993-10-01

195

Walking, running, and resting under time, distance, and average speed constraints: optimality of walk-run-rest mixtures  

PubMed Central

On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk–run mixture at intermediate speeds and a walk–rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients—a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk–run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill.
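A compact way to see the non-convexity argument made above (schematic, with e(v) denoting metabolic rate as a function of speed; not the paper's measured curves): if e is non-convex between a walking speed v_w and a running speed v_r, a time-fraction mixture can meet the required average speed \bar{v} at lower average cost than steady travel,

    \lambda v_w + (1-\lambda)\,v_r = \bar{v}
    \quad\text{and}\quad
    \lambda\, e(v_w) + (1-\lambda)\, e(v_r) < e(\bar{v})
    \quad\text{for some } \lambda \in (0,1),

i.e. the chord between the two speeds lies below the energy curve at \bar{v}.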

Long, Leroy L.; Srinivasan, Manoj

2013-01-01

196

A measurement of time-averaged aerosol optical depth using air-showers observed in stereo by HiRes  

NASA Astrophysics Data System (ADS)

Air fluorescence measurements of cosmic ray energy must be corrected for attenuation of the atmosphere. In this paper, we show that the air-showers themselves can yield a measurement of the aerosol attenuation in terms of optical depth, time-averaged over extended periods. Although the technique lacks statistical power to make the critical hourly measurements that only specialized active instruments can achieve, we note the technique does not depend on absolute calibration of the detector hardware, and requires no additional equipment beyond the fluorescence detectors that observe the air showers. This paper describes the technique, and presents results based on analysis of 1258 air-showers observed in stereo by the High Resolution Fly’s Eye over a four year span.

High Resolution Fly'S Eye Collaboration; Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Belov, K.; Belz, J. W.; Benzvi, S.; Bergman, D. R.; Boyer, J. H.; Cannon, C. T.; Cao, Z.; Connolly, B. M.; Fedorova, Y.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Manago, N.; Mannel, E. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Reil, K.; Roberts, M. D.; Schnetzer, S. R.; Seman, M.; Sinnis, G.; Smith, J. D.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.

2006-03-01

197

On quality control procedures for solar radiation and meteorological measures, from subhourly to monthly average time periods  

NASA Astrophysics Data System (ADS)

Meteorological data measured by ground stations are often a key element in the development and validation of methods exploiting satellite images. These data are considered as a reference against which satellite-derived estimates are compared. Long-term radiation and meteorological measurements are available from a large number of measuring stations. However, close examination of the data often reveals a lack of quality, often for extended periods of time. This lack of quality has been the reason, in many cases, for the rejection of large amounts of available data. Data quality must be checked before use in order to guarantee the inputs for the methods used in modelling, monitoring, forecasting, etc. To control their quality, data should be submitted to several conditions or tests; data not flagged by any test are released as plausible. In this work, a bibliographical review of quality control tests has been performed for the common meteorological variables (ambient temperature, relative humidity and wind speed) and for the usual solar radiometric variables (the horizontal global and diffuse components of solar radiation and the beam normal component). The different tests have been grouped according to the variable and the averaging time period (sub-hourly, hourly, daily and monthly averages). The quality tests may be classified as follows:
• Range checks: tests that verify values are within a specific range; these are either based on extrema or on rare observations.
• Step checks: tests aimed at detecting unrealistic jumps or stagnation in the time series.
• Consistency checks: tests that verify the relationship between two or more time series.
The gathered quality tests are applicable at all latitudes, as they have not been optimized regionally or seasonally, with the aim of remaining generic. They have been applied to ground measurements at several geographic locations, which resulted in the detection of some control tests that are no longer adequate, for various reasons. After modifying some tests on the basis of our experience, an updated and classified set of quality control tests is presented, in line with current technology. This set of quality tests allows radiation and meteorological data to be screened for plausibility before use as inputs to theoretical or empirical methods in scientific research. The research leading to these results has partly received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement no. 262892 (ENDORSE project).
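A minimal Python sketch of the three test families listed above (range, step, consistency) applied to an irradiance table; the thresholds and column names are placeholders, not the validated limits gathered in the work.

    import pandas as pd

    def qc_flags(df, ghi_max=1500.0, step_max=800.0):
        """Flag implausible records; a record is plausible only if no test fires."""
        flags = pd.DataFrame(index=df.index)
        flags["range"] = (df["ghi"] < 0) | (df["ghi"] > ghi_max)        # extrema-based range check
        flags["step"] = df["ghi"].diff().abs() > step_max               # unrealistic jumps between records
        flags["consistency"] = df["ghi"] < df["dhi"]                    # global should not fall below diffuse
        flags["plausible"] = ~flags[["range", "step", "consistency"]].any(axis=1)
        return flags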

Espinar, B.; Blanc, P.; Wald, L.; Hoyer-Klick, C.; Schroedter-Homscheidt, M.; Wanderer, T.

2012-04-01

198

Characteristic length scales and time-averaged transport velocities of suspended sediment in the mid-Atlantic Region, USA  

USGS Publications Warehouse

Watershed Best Management Practices (BMPs) are often designed to reduce loading from particle-borne contaminants, but the temporal lag between BMP implementation and improvement in receiving water quality is difficult to assess because particles are only moved downstream episodically, resting for long periods in storage between transport events. A theory is developed that describes the downstream movement of suspended sediment particles accounting for the time particles spend in storage given sediment budget data (by grain size fraction) and information on particle transit times through storage reservoirs. The theory is used to define a suspended sediment transport length scale that describes how far particles are carried during transport events, and to estimate a downstream particle velocity that includes time spent in storage. At 5 upland watersheds of the mid-Atlantic region, transport length scales for silt-clay range from 4 to 60 km, while those for sand range from 0.4 to 113 km. Mean sediment velocities for silt-clay range from 0.0072 km/yr to 0.12 km/yr, while those for sand range from 0.0008 km/yr to 0.20 km/yr, 4–6 orders of magnitude slower than the velocity of water in the channel. These results suggest lag times of 100–1000 years between BMP implementation and effectiveness in receiving waters such as the Chesapeake Bay (where BMPs are located upstream of the characteristic transport length scale). Many particles likely travel much faster than these average values, so further research is needed to determine the complete distribution of suspended sediment velocities in real watersheds.
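As a worked order-of-magnitude check using the numbers quoted above, the lag time is roughly the transport length scale divided by the mean particle velocity; taking the upper silt-clay values gives

    t_{\text{lag}} \approx \frac{L}{\bar{u}} = \frac{60\ \text{km}}{0.12\ \text{km/yr}} = 500\ \text{yr},

which sits inside the stated 100-1000 year range.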

Pizzuto, James; Schenk, Edward R.; Hupp, Cliff R.; Gellis, Allen; Noe, Greg; Williamson, Elyse; Karwan, Diana L.; O'Neal, Michael; Marquard, Julia; Aalto, Rolf; Newbold, Denis

2014-01-01

199

An Exploration of Discontinuous Time Synchronous Averaging for Helicopter HUMS Using Cruise and Terminal Area Vibration Data  

NASA Technical Reports Server (NTRS)

Recent research using NASA Ames AH-1 and OH-58C helicopters, and NASA Glenn test rigs, has shown that in-flight vibration data are typically non-stationary [1-4]. The nature and extent of this non-stationarity is most likely produced by several factors operating simultaneously. The aerodynamic flight environment and pilot commands provide continuously changing inputs, with a complex dynamic response that includes automatic feedback control from the engine regulator. It would appear that the combined effects operate primarily through an induced torque profile, which causes concomitant stress modulation at the individual internal gear meshes in the transmission. This notion is supported by several analyses, which show that upwards of 93% of the vibration signal's variance can be explained by knowledge of torque alone. That this relationship is stronger in an AH-1 than an OH-58, where measured non-stationarity is greater, suggests that the overall mass of the vehicle is an important consideration. In the lighter aircraft, the unsteady aerodynamic influences transmit relatively greater unsteady dynamic forces on the mechanical components, quite possibly contributing to its greater non-stationarity. In a recent paper using OH-58C pinion data [5], the authors have shown that in computing a time synchronous average (TSA) for various single-value metric computations, an effective trade-off can be obtained between sample size and measured stationarity by using data from only a single mesh cycle. A mesh cycle, which is defined as the number of rotations required for the gear teeth to return to their original mating position, has the property of representing all of the discrete phase angles of the opposing gears exactly once in the average. Measured stationarity is probably maximized because a single mesh cycle of the pinion gear occurs over a very short span of time, during which time-dependent non-stationary effects are kept to a minimum. Clearly, the advantage of local stationarity diminishes as the temporal duration of the cycle increases. This is most evident for a planetary mesh cycle, which can take several minutes to complete.

Huff, Edward M.; Mosher, Marianne; Barszcz, Eric

2002-01-01

200

Determination of Average Failure Time and Microstructural Analysis of Sn-Ag-Bi-In Solder Under Electromigration  

NASA Astrophysics Data System (ADS)

Despite the extensive use of Sn-Ag-Cu as a Pb-free solder alloy, its melting point is significantly higher than that of eutectic Sn-Pb solder. Sn-Ag-Bi-In solder is an alternative Pb-free solder, with a melting temperature close to that of eutectic Sn-Pb. This study elucidates the electromigration behavior of Sn-Ag-Bi-In solder and then compares the results with those of the Sn-Ag-Bi system. The behavior of Pb-free Sn-Ag-Bi-In solder strips under electromigration is examined by preparing them in Si (001) U-grooves. The samples are then tested under various temperatures and current densities. Although the compounds thicken near both electrodes with current stressing, the thickness at the anode exceeds that at the cathode. Experimental results of the average failure time indicate that Sn-Ag-Bi-In solder has a longer lifetime than does Sn-Ag-Bi, which is attributed to the ? phase. Additionally, the ? phase dissolved by the current in the early stage replenishes the outgoing atomic flux. These atomic fluxes also enhance the growth of abnormally large particles in the middle of the strips. Field-emission electron probe microanalysis (FE-EPMA) results indicate that the amount of indium is reduced after the ? phase near the cathode is exhausted for extended current stressing time.

Wu, Albert T.; Sun, K. H.

2009-12-01

201

BATSE Observations of Gamma-Ray Burst Spectra. Part 3; Low-Energy Behavior of Time-Averaged Spectra  

NASA Technical Reports Server (NTRS)

We analyze time-averaged spectra from 86 bright gamma-ray bursts from the first 5 years of the Burst And Transient Source Experiment (BATSE) on board the Compton Gamma Ray Observatory to determine whether the lowest energy data are consistent with a standard spectral form fit to the data at all energies. The BATSE Spectroscopy Detectors have the capability to observe photons as low as 5 keV. Using the gamma-ray burst locations obtained with the BATSE Large Area Detectors, the Spectroscopy Detectors' low-energy response can be modeled accurately. This, together with a postlaunch calibration of the lowest energy Spectroscopy Detector discriminator channel, which can lie in the range 5-20 keV, allows spectral deconvolution over a broad energy range, approx. 5 keV to 2 MeV. The additional coverage allows us to search for evidence of excess emission, or for a deficit, below 20 keV. While no burst has a significant (greater than or equal to 3 sigma) deficit relative to a standard spectral model, we find that 12 bursts have excess low-energy emission, ranging between 1.2 and 5.8 times the model flux, that exceeds 5 sigma in significance. This is evidence for an additional low-energy spectral component in at least some bursts, or for deviations from the power-law spectral form typically used to model gamma-ray bursts at energies below 100 keV.

Preece, R. D.; Briggs, M. S.; Pendleton, G. N.; Paciesas, W. S.; Matteson, J. L.; Band, D. L.; Skelton, R. T.; Meegan, C. A.

1996-01-01

202

Analysis of trace contaminants in hot gas streams using time-weighted average solid-phase microextraction: proof of concept.  

PubMed

Time-weighted average (TWA) passive sampling using solid-phase microextraction (SPME) and gas chromatography was investigated as a new method of collecting, identifying and quantifying contaminants in process gas streams. Unlike previous TWA-SPME techniques using the retracted fiber configuration (fiber within needle) to monitor ambient conditions or relatively stagnant gases, this method was developed for fast-moving process gas streams at temperatures approaching 300 °C. The goal was to develop a consistent and reliable method of analyzing low concentrations of contaminants in hot gas streams without performing time-consuming exhaustive extraction with a slipstream. This work in particular aims to quantify trace tar compounds found in a syngas stream generated from biomass gasification. This paper evaluates the concept of retracted SPME at high temperatures by testing the three essential requirements for TWA passive sampling: (1) zero-sink assumption, (2) consistent and reliable response by the sampling device to changing concentrations, and (3) equal concentrations in the bulk gas stream relative to the face of the fiber syringe opening. Results indicated the method can accurately predict gas stream concentrations at elevated temperatures. Evidence was also discovered to validate the existence of a second boundary layer within the fiber during the adsorption/absorption process. This limits the technique to operating within reasonable mass loadings and loading rates, established by appropriate sampling depths and times for concentrations of interest. A limit of quantification for the benzene model tar system was estimated at 0.02 g m⁻³ (8 ppm) with a limit of detection of 0.5 mg m⁻³ (200 ppb). Using the appropriate conditions, the technique was applied to a pilot-scale fluidized-bed gasifier to verify its feasibility. Results from this test were in good agreement with literature and prior pilot plant operation, indicating the new method can measure low concentrations of tar in gasification streams. PMID:23398991
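For context, the Fick's-law relation commonly cited for converting the mass collected on a retracted SPME fiber into a time-weighted average concentration (symbols and form as generally given in the TWA-SPME literature, not taken from this paper):

    \bar{C} = \frac{n\,Z}{D\,A\,t},

where n is the mass of analyte collected during sampling time t, Z the retraction depth of the fiber inside the needle (the diffusion path length), A the cross-sectional area of the needle opening, and D the analyte's gas-phase diffusion coefficient.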

Woolcock, Patrick J; Koziel, Jacek A; Cai, Lingshuang; Johnston, Patrick A; Brown, Robert C

2013-03-15

203

Time average neutralized migma: A colliding beam/plasma hybrid physical state as aneutronic energy source — A review  

NASA Astrophysics Data System (ADS)

A D⁺ beam of kinetic energy Ti = 0.7 MeV was stored in a "simple mirror" magnetic field as self-colliding orbits or migma and neutralized by ambient, oscillating electrons whose bounce frequencies were externally controlled. The space charge density was exceeded by an order of magnitude without instabilities. Three nondestructive diagnostic methods allowed measurements of ion orbit distribution, ion storage times, ion energy distribution, nuclear reaction rate, and reaction product spectrum. Migma formed a disc 20 cm in diameter and 0.5 cm thick. Its ion density was sharply peaked in the center; the ion-to-electron temperature ratio was Ti/Te ≈ 10³; ion-electron temperature equilibrium was never reached. The volume average and central D⁺ density were n = 3.2 × 10⁹ cm⁻³ and nc = 3 × 10¹⁰ cm⁻³ respectively, compared to the space charge limit density nsc = 4 × 10⁸ cm⁻³. The energy confinement time was τc = 20-30 s, limited by charge exchange reactions with the residual gas in the vacuum (5 × 10⁻⁹ Torr). The ion energy loss rate was 1.4 keV/s. None of the instabilities that were observed in mirrors at several orders of magnitude lower density occurred. The proton energy spectrum for d + d → T + p + 4 MeV shows that the deuterons collided at an average crossing angle of 160°. Evidence for exponential density buildup has also been observed. Relative to Migma III results and measured in terms of the product of ion energy E, density n, and confinement time τ, device performance was improved by a factor of 500. Using the central fast ion density, we obtained the triple product Tnτ ≈ 4 × 10¹⁴ keV s cm⁻³, which is greater than that of the best fusion devices. The luminosity (collision rate per unit cross section) was ≈ 10²⁹ cm⁻² s⁻¹, with 0.7 A ion current through the migma center. The stabilizing features of migma are: (1) large Larmor radius; (2) small canonical angular momentum; (3) short axial length z (disc shape); (4) nonadiabatic motions in r and z; (5) precession and energy spread; (6) ambipolar potential; (7) radial density gradient; (8) large ion-to-electron temperature ratio; (9) metal walls in z; (10) suitability for external stabilization techniques (small volume, recurrent surfacing of ions); and particularly (11) diamagnetic well. Extrapolation of the results to reactor densities led to a D-³He reactor "basic migma disc" 0.5 m in diameter and 0.25 m thick, suspended in a 10 T field, generating 1.5 MW(th). A 2-MW(e) power plant would consist of 3 such discs in a common volume, referred to as a triplet migmacell. Its specific power is projected to be 1 MW(e)/ton. A large power plant of any size would consist of a large number of migma discs in a common volume. The advantages of such modular systems are: (1) economy of mass production: capital cost per kW will be 25% that of fission and 10% that of DT fusion; (2) plants will be economical for all sizes above 10 kW(e); (3) minimal heat pollution, thanks to direct conversion of the charged ion kinetic energy into electricity; (4) no proliferative potential; and (5) large power-to-weight ratio due to absence of shielding. Anticipated physics problems in increasing the density are discussed.

Maglich, Bogdan C.

1988-08-01

204

Teacher Pay-for-Performance in School Turnaround: How Bonuses and Differentiated Pay Scales Can Help Support School Turnaround. Meeting the Turnaround Challenge: Strategies, Resources & Tools to Transform a Framework into Practice  

ERIC Educational Resources Information Center

Given the importance of good teaching and leadership for school success, turnaround schools should think carefully about how to structure professional environments that reward and motivate excellence. A system of "Pay-for-Contribution" that includes tools such as hard-to-staff and skill shortage pay, performance pay, and/or retention pay, will…

Mass Insight Education (NJ1), 2009

2009-01-01

205

17 CFR 240.17Ad-2 - Turnaround, processing, and forwarding of items.  

Code of Federal Regulations, 2013 CFR

...issue for which it performs transfer agent functions shall turnaround ninety percent...possible. (f) A registered transfer agent which receives items...premises at which it performs transfer agent functions shall have appropriate...

2013-04-01

206

Can Granger causality delineate natural versus anthropogenic drivers of climate change from global-average multivariate time series?  

NASA Astrophysics Data System (ADS)

The Fourth Assessment Report (AR4) of the Intergovernmental Panel on Climate Change (IPCC) notes with a high degree of certainty that global warming can be attributed to anthropogenic emissions. Detection and attribution studies, which attempt to delineate human influences on regional- and decadal-scale climate change or its impacts, use a variety of techniques, including Granger causality. Recently, Granger causality was used as a tool for detection and attribution in climate based on a spatio-temporal data mining approach. However, the degree to which Granger causality may be able to delineate natural versus anthropogenic drivers of change in these situations needs to be thoroughly investigated. As a first step, we use multivariate global-average time series of observations to test the performance of Granger causality. We apply the popular Granger F-tests to Radiative Forcing (RF), which is a transformation of carbon dioxide (CO2), and Global land surface Temperature anomalies (GT). Our preliminary results with observations appear to suggest that RF Granger-causes GT, which seem to become more apparent with more data. However, carefully designed simulations indicate that these results are not reliable and may, in fact, be misleading. On the other hand, the same observation- and simulation-driven methodologies, when applied to the El Niño Southern Oscillation (ENSO) index, clearly show reliable Granger-causality from ENSO to GT. We develop and test several hypotheses to explain why the Granger causality tests between RF and GT are not reliable. We conclude that the form of Granger causality used in this study, and in past studies reported in the literature, is sensitive to data availability, random variability, and especially whether the variables arise from a deterministic or stochastic process. Simulations indicate that Granger causality in this form performs poorly, even in simple linear effect cases, when applied to one deterministic and one stochastic time series rather than two stochastic time series. We observe that this problem may amplify problems with limited data availability and random variability. This form of Granger causality also appears to be sensitive to the possible presence of latent variables which may drive the causal relationships. In particular, there is a possibility that the effects of RF are overshadowed by the effects of other more statistically significant causal variables, such as the ENSO index. Our findings motivate the investigation of a modified form of Granger causality testing which is robust to all of the aforementioned conditions. While such a modified test may be able to delineate natural versus anthropogenic drivers of climate change and impacts, results and insights reported in the literature which are derived from the existing form of Granger causality may need to be carefully re-evaluated.
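The bivariate Granger F-test workflow described above can be sketched compactly. The snippet below is a minimal illustration, not the authors' code: it applies statsmodels' grangercausalitytests to synthetic stand-ins for the forcing and temperature series (both hypothetical), since the actual RF and GT records are not part of this abstract.

```python
# Minimal sketch of a bivariate Granger F-test on synthetic data.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 150

# Hypothetical stochastic driver (e.g., an ENSO-like AR(1) index).
driver = np.zeros(n)
for t in range(1, n):
    driver[t] = 0.7 * driver[t - 1] + rng.normal(scale=1.0)

# Hypothetical response: lags the driver plus noise (stand-in for GT anomalies).
response = 0.5 * np.roll(driver, 1) + rng.normal(scale=0.5, size=n)
response[0] = 0.0

# Column order matters: the test asks whether column 2 Granger-causes column 1.
data = np.column_stack([response, driver])
results = grangercausalitytests(data, maxlag=3, verbose=False)
for lag, res in results.items():
    fstat, pval = res[0]["ssr_ftest"][:2]
    print(f"lag {lag}: F = {fstat:.2f}, p = {pval:.4f}")
```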

Kodra, E. A.; Chatterjee, S.; Ganguly, A. R.

2009-12-01

207

Agile turnaround using post-stall maneuvers for tail-sitter VTOL UAVs  

Microsoft Academic Search

Miniature vertical take-off and landing unmanned aerial vehicles (VTOL UAVs) can carry out various missions on their own, such as surveillance in partially destroyed buildings and over broad hazard areas where many obstacles exist. In such missions, agile turnaround using post-stall maneuvers is useful to avoid obstacles. This paper discusses agile turnaround strategies utilizing post-stall maneuvers for tail-sitter VTOL UAVs. Two agile turn strategies

Takaaki Matsumoto; Atsushi Konno; Ren Suzuki; Atsushi Oosedo; Kenta Go; Masaru Uchiyama

2010-01-01

208

Relationship of the Average Integrated Biceps EMG and Pulmonary Ventilation to Torque and Time during Isometric Supination.  

National Technical Information Service (NTIS)

An isometric supination experiment was performed during which the subjects' average integrated biceps EMG and pulmonary ventilation were measured as indicators of work stress. The significance of both myogram height and pulmonary ventilation as indicators...

H. Gage; C. Gold; R. Jarvik; S. Krewer; M. Miller

1971-01-01

209

Compact, high average power, fiber-pumped terahertz source for active real-time imaging of concealed objects  

Microsoft Academic Search

We have modeled and demonstrated a scalable, compact, fiber-pumped terahertz source based on difference frequency mixing (DFM) in zinc germanium phosphide (ZGP) capable of producing high average and peak-power pulses. Currently, our terahertz source produces 2 mW of average THz power and >40 W of peak power with sub-nanosecond pulses at a repetition rate of 100 kHz in the range of 2-3 THz without

Daniel Creeden; John C. McCarthy; Peter A. Ketteridge; Peter G. Schunemann; Timothy Southward; James J. Komiak; Evan P. Chicklis

2007-01-01

210

Time-averaged images and quantifications of seafloor hydrothermal plumes from acoustic imaging data: a case study at Grotto Vent, Endeavour Segment Seafloor Observatory  

NASA Astrophysics Data System (ADS)

Many simple models of plumes predict time-averaged behavior. Although instantaneous measurements of plumes are still useful, time-averaged measurements are more directly comparable. Averages of varying numbers of consecutive acoustic images are quantified and the results compared. The acoustic images of Grotto Vent in the Main Endeavour Vent Field were obtained during the VIP (Vents Imaging Pacific) 2000 cruise using the Simrad SM2000 sonar system mounted on ROV Jason. The acoustic images were calibrated, filtered and interpolated onto a uniform grid. The acoustic intensity is proportional to the particle load and herein referred to as such. For each horizontal slice, the summed mass, the area, the local maximum and the position of the local maximum were calculated within a threshold-bounded area. The summed mass, the local maximum (and its position) and the average radius (from the area) proved insensitive to the averaging process. A slight trend for the local maximum to decrease with the number of images averaged was observed. Variation of the local maximum in even these short time intervals may influence the results of averaging. The consistency of the summed mass with averaging variable numbers of images may relate to the conservation of mass by the plume, which would require the total mass at a height to be the same unless the mass supplied at the source changed. The distribution of particle load within the plume becomes progressively smoother as more consecutive images are averaged together. Averages of 1, 2, 5, and 10 images were compared. The three-dimensional pattern of the time-averaged particle load distribution suggests that the plume/chimney interface was about 10 m above the sonar system at approximately 20 m horizontal distance. Particle load decreases with distance from the plume centerline for all averages, but that decrease varies from irregular (single image) to smooth (10 images). For averages of 5-10 images, the distribution of particle load within the plume is approximately Gaussian on horizontal slices. The degree of misfit will be quantified by visual and numerical comparison. It is concluded that a minimal set of 5 consecutive images is required to produce an adequate time-averaged image. This time-averaged image of the plume shows a clear picture of the plume structure, which facilitates comparison with models.
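A minimal sketch of the averaging-and-quantification step described above is shown below, assuming the calibrated acoustic frames have already been interpolated onto a uniform (Z, Y, X) grid; the random frames, grid size, and threshold are placeholders rather than survey values.

```python
# Average n consecutive frames, then compute per-slice plume statistics.
import numpy as np

def slice_stats(frames, n_avg, threshold):
    """frames: array (T, Z, Y, X) of calibrated, gridded acoustic intensity."""
    avg = frames[:n_avg].mean(axis=0)                # time-averaged volume
    stats = []
    for z in range(avg.shape[0]):
        sl = avg[z]
        mask = sl > threshold                        # threshold-bounded area
        summed_mass = sl[mask].sum()                 # particle-load proxy
        area = mask.sum()                            # area in grid cells
        iy, ix = np.unravel_index(sl.argmax(), sl.shape)
        stats.append((z, summed_mass, area, sl.max(), (iy, ix)))
    return stats

# Hypothetical stand-in for 10 consecutive frames on a 20 x 32 x 32 grid.
rng = np.random.default_rng(4)
frames = rng.gamma(shape=2.0, scale=1.0, size=(10, 20, 32, 32))
for n in (1, 2, 5, 10):
    z0 = slice_stats(frames, n, threshold=3.0)[0]
    print(f"n_avg={n:2d}: slice 0 summed mass = {z0[1]:.1f}, area = {z0[2]}")
```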

Bemis, K. G.; Rona, P. A.; Jackson, D. R.; Jones, C.; Mitsuzawa, K.; Palmer, D.; Silver, D.; Gudlavalletti, R.

2001-12-01

211

Conformations of the iduronate ring in short heparin fragments described by time-averaged distance restrained molecular dynamics.  

PubMed

The polyconformational behavior of L-iduronic acid (L-IdoA2S) in heparin derivatives has been previously analyzed in terms of intra-ring proton-proton vicinal coupling constants (³JHH) through mathematical fit of experimental and theoretical values (Ferro DR, Provasoli A, Ragazzi M, Casu B, Torri G, Bossennec V, Perly B, Sinay P, Petitou M, Choay J. 1990. Conformer Populations of L-Iduronic Acid Residues in Glycosaminoglycan Sequences. Carbohydr Res. 195:157-167; Muñoz-García JC, López-Prados J, Angulo J, Díaz-Contreras I, Reichardt N, de Paz JL, Martín-Lomas M, Nieto PM. 2012. Effect of the substituents of the neighboring ring in the conformational equilibrium of iduronate in heparin-like trisaccharides. Chemistry. 18:16319-16331.). However, this methodology is subject to the experimental uncertainties of the J-coupling measurements, the force field deviations and the goodness of the least-squares fit. In the present work, we have used time-averaged distance restrained molecular dynamics (tar-MD) to largely reduce these errors, which enables accurate quantification of the population of conformers, or puckers, of the L-IdoA2S residue, in a set of eight heparin-like trisaccharides following the general sequence D-glucosamine (GlcN)-IdoA-GlcN, directly from the time evolution of the puckering coordinates θ and φ. Thus, by carrying out tar-MD simulations in explicit water, with the exclusive nuclear Overhauser enhancement (NOE)-derived distance H2-H5 of the L-IdoA2S ²SO conformer as the unique imposed constraint, we have been able to accurately and easily determine the different extents to which the iduronate ring populates the polar (¹C₄ chair) and equatorial (²SO skew-boat) areas of the puckering sphere depending on the sulfation pattern of the flanking GlcN residues, under low temperature conditions (278 K). The results indicate that 6-O-sulfation at the reducing-end GlcN residue facilitates the ¹C₄ to ²SO transitions of the iduronate ring by augmenting the flexibility of the C2-C3 torsion, driving the conformational equilibrium toward a majority of equatorial conformers. PMID:23903025

Muñoz-García, Juan Carlos; Corzana, Francisco; de Paz, José L; Angulo, Jesús; Nieto, Pedro M

2013-11-01

212

Time-weighted average sampling of airborne propylene glycol ethers by a solid-phase microextraction device.  

PubMed

A solid-phase microextraction (SPME) device was used as a diffusive sampler for airborne propylene glycol ethers (PGEs), including propylene glycol monomethyl ether (PGME), propylene glycol monomethyl ether acetate (PGMEA), and dipropylene glycol monomethyl ether (DPGME). Carboxen-polydimethylsiloxane (CAR/PDMS) SPME fiber was selected for this study. A polytetrafluoroethylene (PTFE) tubing was used as the holder, and the SPME fiber assembly was inserted into the tubing as a diffusive sampler. The diffusion path length and area of the sampler were 0.3 cm and 0.00086 cm², respectively. The theoretical sampling constants at 30°C and 1 atm for PGME, PGMEA, and DPGME were 1.50 × 10⁻², 1.23 × 10⁻² and 1.14 × 10⁻² cm³ min⁻¹, respectively. For evaluations, known concentrations of PGEs around the threshold limit values/time-weighted average with specific relative humidities (10% and 80%) were generated both by the air bag method and the dynamic generation system, while 15, 30, 60, 120, and 240 min were selected as the time periods for vapor exposures. Comparisons of the SPME diffusive sampling method to Occupational Safety and Health Administration (OSHA) organic Method 99 were performed side-by-side in an exposure chamber at 30°C for PGME. A gas chromatography/flame ionization detector (GC/FID) was used for sample analysis. The experimental sampling constants of the sampler at 30°C were (6.93 ± 0.12) × 10⁻¹, (4.72 ± 0.03) × 10⁻¹, and (3.29 ± 0.20) × 10⁻¹ cm³ min⁻¹ for PGME, PGMEA, and DPGME, respectively. The adsorption of chemicals on the stainless steel needle of the SPME fiber was suspected to be one of the reasons why significant differences between theoretical and experimental sampling rates were observed. Correlations between the results for PGME from both SPME device and OSHA organic Method 99 were linear (r = 0.9984) and consistent (slope = 0.97 ± 0.03). Face velocity (0-0.18 m/s) also proved to have no effects on the sampler. However, the effects of temperature and humidity have been observed. Therefore, adjustments of experimental sampling constants at different environmental conditions will be necessary. PMID:22651222
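For readers unfamiliar with diffusive SPME sampling, the sketch below shows the Fick's-law relation behind the theoretical sampling constant (SR = D·A/Z) and how a time-weighted average concentration follows from the adsorbed mass, C = m/(SR·t). The geometry (Z = 0.3 cm, A = 0.00086 cm²) and the experimental PGME constant (~0.69 cm³ min⁻¹) come from the abstract; the diffusion coefficient and adsorbed mass below are hypothetical.

```python
# Minimal sketch of diffusive-sampler arithmetic, not the paper's code.

def sampling_constant(D_cm2_per_min: float, area_cm2: float, path_cm: float) -> float:
    """Theoretical Fick's-law sampling constant SR = D * A / Z (cm^3/min)."""
    return D_cm2_per_min * area_cm2 / path_cm

def twa_concentration(mass_ng: float, sr_cm3_per_min: float, minutes: float) -> float:
    """Time-weighted average concentration C = m / (SR * t), in ng/cm^3."""
    return mass_ng / (sr_cm3_per_min * minutes)

# Hypothetical gas-phase diffusion coefficient (~0.05 cm^2/s) converted to cm^2/min.
D = 0.05 * 60.0
sr_theoretical = sampling_constant(D, 0.00086, 0.3)

# Using the reported experimental PGME constant with a hypothetical adsorbed mass
# over a 60-minute exposure:
c = twa_concentration(mass_ng=500.0, sr_cm3_per_min=0.693, minutes=60.0)
print(f"theoretical SR ~ {sr_theoretical:.4f} cm^3/min, TWA C ~ {c:.2f} ng/cm^3")
```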

Shih, H C; Tsai, S W; Kuo, C H

2012-01-01

213

Average Speed  

NSDL National Science Digital Library

This lab is not an inquiry activity but will help students understand the meaning of average velocity. Students do not easily understand from a textbook that an object's velocity changes over a period of acceleration. They have difficulty understanding wh

Horton, Michael

2009-05-30

214

Full-field vibration amplitude recovery from high-resolution time-averaged speckle interferograms and digital holograms by regional inverting of the Bessel function  

Microsoft Academic Search

High-resolution time-averaged speckle interferometry and quasi-binary electronic speckle interferometry are two methods used in mechanical vibration measurement. The first method is used to increase the spatial resolution of time–average patterns. Detecting with subpixel resolution the minima of the Bessel-type fringes becomes thus possible. The second method is used to obtain a high contrast of the fringe pattern and a measurement

Dan N. Borza

2006-01-01

215

Full-field vibration amplitude recovery from high-resolution time-averaged speckle interferograms and digital holograms by regional inverting of the Bessel function  

Microsoft Academic Search

High-resolution time-averaged speckle interferometry and quasi-binary electronic speckle interferometry are two methods used in mechanical vibration measurement. The first method is used to increase the spatial resolution of time average patterns. Detecting with subpixel resolution the minima of the Bessel-type fringes becomes thus possible. The second method is used to obtain a high contrast of the fringe pattern and a

Dan N. Borza

2006-01-01

216

Time-averaged near-bed suspended sediment concentrations under waves and currents: Comparison of measured and model estimates  

NASA Astrophysics Data System (ADS)

Profiles of suspended sediment concentration and velocity were measured over a 15-day period at a near-shore site off Santa Cruz, CA in Monterey Bay. The concentration and velocity data were collected with an Acoustic Backscattering System (ABS) and Acoustic Current Profiler (ACP) that were mounted on a bottom tripod. High-resolution bottom scanning sonar was also attached to the tripod to provide images of bed features during the experiment. Hourly time-averaged near-bed concentrations of suspended sediment were calculated from three models and compared with the measurements. Surface waves and currents that were generated by a storm of moderate intensity caused bed stresses that exceeded threshold stress for D50=0.02 cm, the median size of the moderately well-sorted bottom sediment, over a period of about 7 days. Estimates of the concentration at 1 cm above the bottom, Ca1, were obtained using the ABS measurements. These observations have been compared with predictions for the concentration at 1 cm above the bottom, C1. Nielsen's models for reference concentration Co [Nielsen, P., 1986. Suspended sediment concentrations under waves. Coastal Engineering 10, 32-31; Nielsen, P., 1992. Coastal Bottom Boundary Layers and Sediment Transport, Advanced Series on Ocean Engineering. World Scientific, Hackensack, NJ.] are purely wave-based and do not include effects of bottom currents on bed stress and bedform scales. C1 calculated from this model compared well with measured Ca1 when currents were weak and small oscillatory ripples were observed in the sonar images. However, during the 3-day period of highest bottom stresses modeled C1 did not compare well to Ca1. The other two models for C1, Glenn and Grant [Glenn, S.M., Grant, W.D., 1987. A suspended sediment stratification correction for combined wave and current flows. Journal of Geophysical Research 92(C8), 8244-8264.] and van Rijn and Walstra [Van Rijn, L.C., Walstra, D.J.R., 2004. Description of TRANSPOR2004 and implementation in Delft3D-ONLINE. Interim Report prepared for DG Rijkswaterstaat, Rijksinstituut voor Kust en Zee. Delft Hydraulics Institute, The Netherlands.], accounted for combined wave-current stresses and included different formulations for predicting bedform scales. C1 calculated from these models compared well with measurements throughout the early and middle period of the storm, but underpredicted measured values during the latter part of the experiment. An alternative method based on dimensional analysis provided a straightforward way to relate C1 to bed stress and sediment parameters. This new relationship based on dimensional analysis explained about 77% of the variability in Ca1 derived from the ABS data.

Cacchione, David A.; Thorne, Peter D.; Agrawal, Yogesh; Nidzieko, Nicholas J.

2008-02-01

217

I. Criterion Equivalence in Discrete Dynamic Programming. II. Stochastic Games with Perfect Information and Time Average Payoff.  

National Technical Information Service (NTIS)

It is shown that for finite state and action space Markovian decision processes, a policy is 1-optimal if and only if it is average overtaking optimal so the two criteria are equivalent. A counterexample to an alleged extension of the Hardy-Littlewood The...

S. A. Lippman; T. M. Liggett

1968-01-01

218

Langmuir probe measurements in a time-fluctuating-highly ionized non-equilibrium cutting arc: Analysis of the electron retarding part of the time-averaged current-voltage characteristic of the probe  

SciTech Connect

This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating, highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well defined groups, allowing two quite different averaged electron temperature values to be defined. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the time fluctuations but also a spatial-average along the probe collecting length. The fitting of the high-current region of the characteristic using such electron temperature value together with the corrections given by the fluctuation analysis showed a significant departure from local thermal equilibrium in the arc core.

Prevosto, L.; Mancinelli, B. [Grupo de Descargas Eléctricas, Departamento Ing. Electromecánica, Facultad Regional Venado Tuerto (UTN), Laprida 651, Venado Tuerto (2600) Santa Fe (Argentina)]; Kelly, H. [Grupo de Descargas Eléctricas, Departamento Ing. Electromecánica, Facultad Regional Venado Tuerto (UTN), Laprida 651, Venado Tuerto (2600) Santa Fe (Argentina); Instituto de Física del Plasma (CONICET), Departamento de Física, Facultad de Ciencias Exactas y Naturales (UBA), Ciudad Universitaria Pab. I, 1428 Buenos Aires (Argentina)]

2013-12-15

219

Upscaling Flow in Fractured Media: Equivalence Between the Large Scale Averaging Theory and the Continuous Time Random Walk Method  

Microsoft Academic Search

In a recent paper (McNabb, 1978), we set up a method that allows computing both the transient and steady-state exchange terms between the matrix and fractured regions of a naturally fractured porous medium using the continuous time random walk method (CTRW). In particular, the exchange coefficient a parametrizing the large-scale exchange term was computed on physical grounds using a time

B. NŒtinger; T. Estebenet; M. Quintard

2001-01-01

220

A new approach for analyzing average time complexity of population-based evolutionary algorithms on unimodal problems.  

PubMed

In the past decades, many theoretical results related to the time complexity of evolutionary algorithms (EAs) on different problems have been obtained. However, there is no general and easy-to-apply approach designed particularly for population-based EAs on unimodal problems. In this paper, we first generalize the concept of the takeover time to EAs with mutation, then we utilize the generalized takeover time to obtain the mean first hitting time of EAs and, thus, propose a general approach for analyzing EAs on unimodal problems. As examples, we consider the so-called (N + N) EAs and we show that, on two well-known unimodal problems, LeadingOnes and OneMax, the EAs with the bitwise mutation and two commonly used selection schemes both need O(n ln n + n²/N) and O(n ln ln n + n ln n/N) generations to find the global optimum, respectively. Except for the new results above, our approach can also be applied directly for obtaining results for some population-based EAs on some other unimodal problems. Moreover, we also discuss when the general approach is valid to provide us tight bounds of the mean first hitting times and when our approach should be combined with problem-specific knowledge to get the tight bounds. It is the first time a general idea for analyzing population-based EAs on unimodal problems is discussed theoretically. PMID:19336324
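The (N + N) setting analyzed above can be illustrated with a short simulation. This is a hedged sketch, not the paper's analysis: it runs an elitist (N + N) EA with standard bitwise mutation on OneMax and reports the observed hitting time, which the bounds above predict to grow roughly as O(n ln n + n²/N).

```python
# Minimal (N + N) EA on OneMax with bitwise mutation and truncation selection.
import random

def one_max(x):
    return sum(x)

def bitwise_mutate(x, p):
    return [bit ^ (random.random() < p) for bit in x]

def n_plus_n_ea(n=50, N=10, max_gens=100_000):
    p = 1.0 / n                                   # standard bitwise mutation rate
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(N)]
    for gen in range(1, max_gens + 1):
        offspring = [bitwise_mutate(random.choice(pop), p) for _ in range(N)]
        # Truncation ("elitist") selection: keep the best N of parents + offspring.
        pop = sorted(pop + offspring, key=one_max, reverse=True)[:N]
        if one_max(pop[0]) == n:
            return gen                            # first hitting time (generations)
    return max_gens

print("generations to optimum:", n_plus_n_ea())
```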

Chen, Tianshi; He, Jun; Sun, Guangzhong; Chen, Guoliang; Yao, Xin

2009-10-01

221

A space variant maximum average correlation height (MACH) filter for object recognition in real time thermal images for security applications  

NASA Astrophysics Data System (ADS)

We propose a space variant Maximum Average Correlation Height (MACH) filter which can be locally modified depending upon its position in the input frame. This can be used to detect targets in an environment from varying ranges and in unpredictable weather conditions using thermal images. It enables adaptation of the filter dependent on background heat signature variances and also enables the normalization of the filter energy levels. The kernel can be normalized to remove a non-uniform brightness distribution if this occurs in different regions of the image. The main constraint in this implementation is the dependence on the computational ability of the system. This can be minimized with the recent advances in optical correlators using scanning holographic memory, as proposed by Birch et al. [1] In this paper we describe the discrimination abilities of the MACH filter against background heat signature variances and tolerance to changes in scale and calculate the improvement in detection capabilities with the introduction of a nonlinearity. We propose a security detection system which exhibits a joint process where a human and an automated pattern recognition system contribute to the overall solution for the detection of pre-defined targets.
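A much-simplified, hedged sketch of a MACH-style filter synthesized in the frequency domain is shown below; the full MACH criterion also involves noise-covariance and similarity terms, which are omitted here (folded into a simple regularizer), and the training chips are random placeholders rather than thermal imagery.

```python
# Simplified MACH-style correlation filter: emphasize the average target
# spectrum while whitening by the average spectral density of the class.
import numpy as np

def mach_like_filter(train_images, eps=1e-3):
    """train_images: array (K, H, W) of registered target examples."""
    spectra = np.fft.fft2(train_images, axes=(-2, -1))
    mean_spec = spectra.mean(axis=0)                   # average target spectrum
    avg_power = (np.abs(spectra) ** 2).mean(axis=0)    # average spectral density
    return np.conj(mean_spec) / (avg_power + eps)

def correlate(scene, filt):
    """Apply the filter to a same-size scene; a strong peak indicates a detection."""
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * filt))

# Hypothetical usage with random stand-ins for thermal image chips.
rng = np.random.default_rng(1)
train = rng.normal(size=(8, 64, 64))
plane = correlate(rng.normal(size=(64, 64)), mach_like_filter(train))
print("correlation peak:", plane.max())
```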

Gardezi, Akber; Alkandri, Ahmed; Birch, Philip; Young, Rupert; Chatwin, Chris

2010-10-01

222

Temporal averaging of phase measurements in the presence of spurious phase drift - Application to phase-stepped real-time holographic interferometry  

NASA Technical Reports Server (NTRS)

A technique that compensates for low spatial frequency spurious phase changes during an interference experiment is developed; it permits temporal averaging of multiple-phase measurements, made before and after object displacement. The method is tested with phase-stepped real-time holographic interferometry applied to cantilever bending of a piezoelectric bimorph ceramic. Results indicate that temporal averaging of the corrected data significantly reduces the white noise in a phase measurement without incurring systematic errors or sacrificing spatial resolution. White noise is reduced from 3 deg to less than 1 deg using these methods.

Ovryn, B.; Haacke, E. M.

1993-01-01

223

The study of time series of monthly averaged values of F10.7 from 1950 to 2010  

NASA Astrophysics Data System (ADS)

Prior to 1947, the activity of the Sun was assessed by the relative sunspot numbers (W). From 1947, the 10.7 cm radio emission (frequency of 2.8 GHz), which traces the variability of the radiation of the chromosphere and the lower corona, came into use (F10.7). More detailed archival observational data are available for F10.7, so this activity index is used more often than the other indices in the prediction and monitoring of solar activity. We have analyzed the F10.7 time series with different mother wavelets: Daubechies 10, Symlet 8, Meyer, Gauss 8 and Morlet. The wavelet spectrum allows us not only to identify cycles but also to analyze their change in time. Each wavelet has its own characteristic features, so different wavelets can sometimes better identify and highlight different properties of the analyzed signal. We intended to choose the mother wavelet that most fully conveys information about the analyzed index F10.7. We found that all these wavelets show similar values for the maxima of the cyclic activity. However, differences appear when different wavelets are used. There are also a number of periods which are, perhaps, harmonics of the main period. The mean value of the 11-year cycle is about 10.2 years. All the above examples show that the best results are obtained when using the Morlet and Gauss wavelets (real-valued) and the multiparameter family of Morlet and Gauss wavelets (complex-valued).
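As an illustration of this kind of analysis, the sketch below runs a continuous wavelet transform with the real-valued Morlet mother wavelet using PyWavelets; the synthetic monthly series stands in for the actual 1950-2010 F10.7 record, which is not reproduced here.

```python
# Continuous wavelet transform of a synthetic monthly "F10.7-like" series.
import numpy as np
import pywt

months = np.arange(732)                        # ~61 years of monthly values
rng = np.random.default_rng(2)
f107 = 120 + 60 * np.sin(2 * np.pi * months / 132) + rng.normal(0, 10, months.size)

scales = np.arange(2, 256)
coefs, freqs = pywt.cwt(f107, scales, "morl", sampling_period=1.0)  # freqs in 1/month

power = np.abs(coefs) ** 2
dominant_period_months = 1.0 / freqs[power.sum(axis=1).argmax()]
print(f"dominant period ~ {dominant_period_months / 12:.1f} years")  # expect ~11 yr
```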

Bruevich, E. A.; Bruevich, V. V.; Yakunina, G. V.

2014-03-01

224

IET control building (TAN620) turnaround area. probably facing north. ventilation ...  

Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

IET control building (TAN-620) turnaround area. probably facing north. ventilation ducts near ceiling. sign says, "parking position no. 3, use this space last." poured concrete walls. INEEL negative no. HD-21-5-4 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

225

Choosing a School Turnaround Provider. Lessons Learned. Volume 1, Issue 3  

ERIC Educational Resources Information Center

Droves of school turnaround providers are chasing the massive federal infusion of funds flowing into failing schools. They arrive armed with glossy materials, impressive sounding claims, and, often, citing their prior relationships or experiences with one's school to support their promises of great service and impressive outcomes. But, are their…

Lockwood, Anne Turnbaugh; Fleischman, Steve

2010-01-01

226

Turnarounds and sustainable competitive advantage: An integration and study of decline and recovery  

Microsoft Academic Search

This research combines the turnaround, industrial organization economics, resource-based theory, and decline literatures to create a framework for investigating the relationship between the external and internal changes that cause declines in business organizations. The relationship between the actions that generate recovery and the cause of decline is investigated. The executive succession literature is used to investigate CEO change in

David Allan Krueger

1997-01-01

227

Investigating the Role of Human Resources in School Turnaround: Evidence from Two States  

ERIC Educational Resources Information Center

Teachers are generally recognized as the schooling factor accounting for the highest proportion of student learning outcomes (Aaronson et al., 2007; Hanushek, 1986). This implies the quick and dramatic improvement in school performance observed in turnaround (TA) schools was associated with a major change in the performance of its teachers. This…

Hansen, Michael

2012-01-01

228

Operational Authority, Support, and Monitoring of School Turnaround. NCEE Evaluation Brief. NCEE 2014-4008  

ERIC Educational Resources Information Center

The federal School Improvement Grants (SIG) program, to which $3 billion were allocated under the American Recovery and Reinvestment Act of 2009 (ARRA), supports schools attempting to turn around a history of low performance. School turnaround also is a focus of Race to the Top (RTT), another ARRA-supported initiative, which involved a roughly $4…

Herman, Rebecca; Graczewski, Cheryl; James-Burdumy, Susanne; Murray, Matthew; Perez-Johnson, Irma; Tanenbaum, Courtney

2013-01-01

229

Tinkering and Turnarounds: Understanding the Contemporary Campaign to Improve Low-Performing Schools  

ERIC Educational Resources Information Center

An unprecedented amount of attention in recent years has been focused on turning around low-performing schools. Drawing on insights from Tyack and Cuban's (1995) "Tinkering Toward Utopia," the article analyzes the forces behind the school turnaround phenomenon and how they have evolved since passage of the No Child Left Behind Act. The article…

Duke, Daniel L.

2012-01-01

230

Turnaround Principals: An Unmistakable Conclusion--Site-Level Success Stems from Superintendent Support  

ERIC Educational Resources Information Center

The authors of this article were recently privileged to spend five days with a group of 10 talented principals who have been charged with turning around academically low-performing schools in their respective districts. These principals were selected as participants in a Turnaround Specialist Program initiated by Gov. Mark R. Warner of Virginia…

Burbach, Harold J.; Butler, Alfred R., IV

2005-01-01

231

Chronically Low-Performing Schools and Turnaround: Evidence from Three States  

ERIC Educational Resources Information Center

The criteria for determining the student outcomes that define a school as having "turned around" are not well defined, and the definition of turnaround performance varies across studies. Although current policy initiatives offer guidelines for identifying CLP schools, there is no standard definition or methodology in common usage. This paper…

Hansen, Michael; Choi, Kilchan

2012-01-01

232

Rural Economic Development Consequences of the Population Turnaround in Northern Lower Michigan.  

ERIC Educational Resources Information Center

Consequences of population turnaround for rural economic development are examined in a 9-county region of Northern Lower Michigan. Data from census reports and 374 usable responses to a questionnaire mailed to a random sample of property owners drawn from 1982 county tax assessment rolls were used to test competing hypotheses about rural…

West, Patrick C.; And Others

233

CAD/CAM, Creativity, and Discipline Lead to Turnaround School Success  

ERIC Educational Resources Information Center

Miami Central High School technology teacher Frank Houghtaling thinks the connection between theory and application is one reason his students perform better on the Florida Comprehensive Assessment Test (FCAT). The impressive turnaround school drew local and national attention last spring when one of Houghtaling's students, Dagoberto Cruz, won…

Gorman, Lynn

2012-01-01

234

Feed intake, average daily gain, feed efficiency, and real-time ultrasound traits in Duroc pigs: II. Genomewide association.  

PubMed

Efficient use of feed resources has become a clear challenge for the U.S. pork industry as feed costs continue to be the largest variable expense. The availability of the Illumina Porcine60K BeadChip has greatly facilitated whole-genome association studies to identify chromosomal regions harboring genes influencing those traits. The current study aimed at identifying genomic regions associated with variation in feed efficiency and several production traits in a Duroc terminal sire population, including ADFI, ADG, feed conversion ratio, residual feed intake (RFI), real-time ultrasound back fat thickness (BF), ultrasound muscle depth, intramuscular fat content (IMF), birth weight (BW at birth), and weaning weight (BW at weaning). Single trait association analyses were performed using Bayes B models with 35,140 SNP on 18 autosomes after quality control. Significance of nonoverlapping 1-Mb length windows (n = 2,380) was tested across 3 QTL inference methods: posterior distribution of windows variances from Monte Carlo Markov Chain, naive Bayes factor, and nonparametric bootstrapping. Genes within the informative QTL regions for the traits were annotated. A region ranging from 166 to 140 Mb (4-Mb length) on SSC 1, approximately 8 Mb upstream of the MC4R gene, was significantly associated with ADFI, ADG, and BF, where SOCS6 and DOK6 are proposed as the most likely candidate genes. Another region affecting BW at weaning was identified on SSC 4 (84-85 Mb), harboring genes previously found to influence both human and cattle height: PLAG1, CHCHD7, RDHE2 (or SDR16C5), MOS, RPS20, LYN, and PENK. No QTL were identified for RFI, IMF, and BW at birth. In conclusion, we have identified several genomic regions associated with traits affecting nutrient utilization that could be considered for future genomic prediction to improve feed utilization. PMID:24962532

Jiao, S; Maltecca, C; Gray, K A; Cassady, J P

2014-07-01

235

Time-averaged acoustic force and torque exerted on an arbitrarily shaped rigid particle in a viscous fluid using boundary element method  

NASA Astrophysics Data System (ADS)

The time-averaged acoustic forces and torques experienced by an arbitrarily shaped rigid particle located in a viscous fluid are formulated analytically and calculated by a BEM (Boundary Element Method) algorithm. The method established in this paper has several advantages: for example, there are no restrictions on the incident wave type or on the size and shape of the particle, and the numerical simulations are highly efficient and accurate.

Wang, Jingtao; Dual, Jurg

2012-05-01

236

Compliant structures with time-varying moment of inertia and non-zero averaged momentum and their application in angular rate microsensors  

NASA Astrophysics Data System (ADS)

In this work we introduce a new class of fully compliant structures performing vibratory motion, yet characterized by non-zero averaged momentum, appearing due to time-dependency of the inertial parameters. The work is motivated by microelectromechanical systems (MEMS) applications, where an implementation of unidirectional, non-vibratory motion involving relative motion of parts is not desirable for reliability reasons. Instead of changing the mass, which is challenging on the microscale, the moment of inertia of the proof mass performing tilting vibrations is controlled in such a way that it is higher or lower depending on the sign of the velocity. This results in a non-zero angular momentum averaged over the period. The equations describing the dynamics of a generic structure with a time-varying inertia and in a rotating coordinate frame are derived by using a variational principle. Simple approximate expressions for the averaged momentum and steady tilting angle are obtained and validated numerically. Based on the model results for different operational scenarios, we demonstrate that these devices can be efficiently used in fully compliant actuators and vibratory angular rate sensors (microgyros) with a steady response in a sensing mode ("pseudospinning disk gyros"), as well as in a parametrically excited gyro. The structure can also be viewed as a first step toward the realization of dynamic materials (DM), which are substances with material properties that may change in space and time.

Krylov, S.; Lurie, K.; Ya'akobovitz, A.

2011-09-01

237

Combining Site Occupancy, Breeding Population Sizes and Reproductive Success to Calculate Time-Averaged Reproductive Output of Different Habitat Types: An Application to Tricolored Blackbirds  

PubMed Central

In metapopulations in which habitat patches vary in quality and occupancy it can be complicated to calculate the net time-averaged contribution to reproduction of particular populations. Surprisingly, few indices have been proposed for this purpose. We combined occupancy, abundance, frequency of occurrence, and reproductive success to determine the net value of different sites through time and applied this method to a bird of conservation concern. The Tricolored Blackbird (Agelaius tricolor) has experienced large population declines, is the most colonial songbird in North America, is largely confined to California, and breeds itinerantly in multiple habitat types. It has had chronically low reproductive success in recent years. Although young produced per nest have previously been compared across habitats, no study has simultaneously considered site occupancy and reproductive success. Combining occupancy, abundance, frequency of occurrence, reproductive success and nest failure rate, we found that large colonies in grain fields fail frequently because of nest destruction due to harvest prior to fledging. Consequently, net time-averaged reproductive output is low compared to colonies in non-native Himalayan blackberry or thistles, and native stinging nettles. Cattail marshes have intermediate reproductive output, but their reproductive output might be improved by active management. Harvest of grain-field colonies necessitates either promoting delay of harvest or creating alternative, more secure nesting habitats. Stinging nettle and marsh colonies offer the main potential sources for restoration or native habitat creation. From 2005–2011 breeding site occupancy declined 3x faster than new breeding colonies were formed, indicating a rapid decline in occupancy. Total abundance showed a similar decline. Causes of variation in the value for reproduction of nesting substrates and factors behind continuing population declines merit urgent investigation. The method we employ should be useful in other metapopulation studies for calculating time-averaged reproductive output for different sites.

Holyoak, Marcel; Meese, Robert J.; Graves, Emily E.

2014-01-01

238

A Q-GERT analysis of the space shuttle ground turnaround system at Vandenberg Air Force Base  

NASA Astrophysics Data System (ADS)

The efficient ground turnaround of the Space Shuttle is critical to the execution of national policies. Research identified four major subsystems within the Vandenberg Ground Turnaround System: the Orbiter Maintenance Subsystem, External Tank Subsystem, Solid Rocket Booster Subsystem, and Launch Pad Subsystem. A Q-GERT simulation of the Ground Turnaround System was conducted to investigate the system and observe the interactions between the major subsystems. The Q-GERT model simulated the integration of Space Shuttle components to determine the subsystem most sensitive to change. The results indicate the Launch Pad Subsystem is the most critical in the Vandenberg Ground Turnaround System. However, further research is recommended in areas of logistics support, spares availability, and transportation of components.

Graham, S.; Jones, T. W.

1982-09-01

239

Endothelial cell responses to atheroprone flow are driven by two separate flow components: low time-average shear stress and fluid flow reversal  

PubMed Central

To simulate the effects of shear stress in regions of the vasculature prone to developing atherosclerosis, we subjected human umbilical vein endothelial cells to reversing shear stress to mimic the hemodynamic conditions at the wall of the carotid sinus, a site of complex, reversing blood flow and commonly observed atherosclerosis. We compared the effects of reversing shear stress (time-average: 1 dyn/cm², maximum: +11 dyn/cm², minimum: −11 dyn/cm², 1 Hz), arterial steady shear stress (15 dyn/cm²), and low steady shear stress (1 dyn/cm²) on gene expression, cell proliferation, and monocyte adhesiveness. Microarray analysis revealed that most differentially expressed genes were similarly regulated by all three shear stress regimens compared with static culture. Comparisons of the three shear stress regimens to each other identified 138 genes regulated by low average shear stress and 22 genes regulated by fluid reversal. Low average shear stress induced increased cell proliferation compared with high shear stress. Only reversing shear stress exposure induced monocyte adhesion. The adhesion of monocytes was partially inhibited by the incubation of endothelial cells with ICAM-1 blocking antibody. Increased heparan sulfate proteoglycan expression was observed on the surface of cells exposed to reversing shear stress. Heparinase III treatment significantly reduced monocyte adhesion. Our results suggest that low steady shear stress is the major impetus for differential gene expression and cell proliferation, whereas reversing flow regulates monocyte adhesion.

Conway, Daniel E.; Williams, Marcie R.; Eskin, Suzanne G.

2010-01-01

240

Beyond long memory in heart rate variability: An approach based on fractionally integrated autoregressive moving average time series models with conditional heteroscedasticity  

NASA Astrophysics Data System (ADS)

Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
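A minimal two-stage sketch of the ARFIMA-GARCH idea is given below, under stated assumptions: the RR-interval series is a synthetic stand-in, the long-memory parameter d is fixed at a plausible value rather than estimated, and the `arch` package supplies the AR + GARCH(1,1) fit for the conditional volatility.

```python
# Fractionally difference a long-memory series, then fit AR + GARCH(1,1).
import numpy as np
from arch import arch_model

def frac_diff(x, d, n_weights=500):
    """Apply the (1 - B)^d fractional-differencing filter (truncated)."""
    w = np.zeros(n_weights)
    w[0] = 1.0
    for k in range(1, n_weights):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return np.convolve(x, w, mode="full")[: len(x)]

rng = np.random.default_rng(3)
# Hypothetical stand-in for a 24 h RR-interval series (ms).
rr = 800 + np.cumsum(rng.normal(0, 2, 5000)) * 0.05 + rng.normal(0, 20, 5000)

d = 0.3                                      # hypothetical long-memory parameter
short_memory = frac_diff(rr - rr.mean(), d)  # long memory removed

model = arch_model(short_memory, mean="AR", lags=2, vol="GARCH", p=1, q=1)
fit = model.fit(disp="off")
print(fit.params)                            # AR coefficients, omega, alpha[1], beta[1]
```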

Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria

2013-06-01

241

Field evaluation of the error arising from inadequate time averaging in the standard use of depth-integrating suspended-sediment samplers  

USGS Publications Warehouse

Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. Averaging over time scales >1 minute is the likely minimum duration required to result in substantial decreases in this error. During standard two-way depth integration, a depth-integrating suspended-sediment sampler collects a sample of the water-sediment mixture during two transits at each vertical in a cross section: one transit while moving from the water surface to the bed, and another transit while moving from the bed to the water surface. As the number of transits is doubled at an individual vertical, this error is reduced by ~30 percent in each size class of suspended sediment. For a given size class of suspended sediment, the error arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration depends only on the number of verticals collected, whereas the error arising from inadequate time averaging depends on both the number of verticals collected and the number of transits collected at each vertical. Summing these two errors in quadrature yields a total uncertainty in an equal-discharge-increment (EDI) or equal-width-increment (EWI) measurement of the time-averaged velocity-weighted suspended-sediment concentration in a river cross section (exclusive of any laboratory-processing errors). By virtue of how the number of verticals and transits influences the two individual errors within this total uncertainty, the error arising from inadequate time averaging slightly dominates that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. 
Adding verticals to an EDI or EWI measurement is slightly more effective in reducing the total uncertainty than adding transits only at each vertical, because a new vertical contributes both temporal and spatial information. However, because collection of depth-integrated samples at more transits at each vertical is generally easier and faster than at more verticals, addition of a combination of verticals and transits is likely a more practical approach to reducing the total uncertainty in most field situations.
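A tiny sketch of the quadrature combination mentioned above (the two error components used here are hypothetical percentages, not values from the study):

```python
# Combine spatial and temporal sampling errors in quadrature.
import math

def total_uncertainty(spatial_err: float, temporal_err: float) -> float:
    return math.hypot(spatial_err, temporal_err)

print(total_uncertainty(0.10, 0.12))   # e.g. 10% spatial, 12% temporal -> ~0.156
```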

Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.

2011-01-01

242

10.5 W Time-Averaged Power Mid-IR Supercontinuum Generation Extending Beyond 4 µm With Direct Pulse Pattern Modulation  

Microsoft Academic Search

A novel, all-fiber-integrated supercontinuum (SC) laser is demonstrated and provides up to 10.5 W time-averaged power with a continuous spectrum from ~0.8 to 4 µm. The SC is generated in a combination of standard single-mode fibers and ZrF4-BaF2-LaF3-AlF3-NaF (ZBLAN) fluoride fibers pumped by a laser-diode-based cladding-pumped fiber amplifier system. The output SC pulse pattern can be modulated by directly modulating

Chenan Xia; Zhao Xu; Mohammed N. Islam; Mike J. Freeman; Andy Zakel; Jeremiah Mauricio

2009-01-01

243

[Initial and final changes in the signal-averaged QRS in the time and frequency domain in a case of hypertrophic myocardiopathy].  

PubMed

We report on a patient suffering from asymmetrical hypertrophic cardiomyopathy, with alterations occurring in the signal-averaged electrocardiogram (SAQRS). Recordings of 3-lead orthogonal ECG were obtained and analyzed by a Del Mar Avionics 750A Innovator device. The time domain analysis showed late potentials and a slow inscription of the initial portion of the SAQRS, that we called "early potentials". On the frequency domain there was a high degree of spectral turbulence at the beginning and at the end of the SAQRS. We discuss the possibility that the slow and fragmented conduction of the initial portion of the SAQRS could be related to the electrical instability of the disease. PMID:8013767

Canciani, B; Mormino, P; Scanavacca, G; Della Rocca, F

1994-02-01

244

Beyond long memory in heart rate variability: an approach based on fractionally integrated autoregressive moving average time series models with conditional heteroscedasticity.  

PubMed

Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation. PMID:23822468

Leite, Argentina; Rocha, Ana Paula; Silva, Maria Eduarda

2013-06-01

245

H-Coal pilot plant 1980 turnaround report. Revision  

Microsoft Academic Search

Since the start of H-Coal operations on March 11, 1980, the plant has operated for a total of 2949 hours. Of this total time on stream, 2572 hours were expended circulating oil and 377 hours with coal feed to the unit. Approximately 1148 tons of coal were processed during this period. The coal processed was Kentucky No. 11. On November 14,

Searles

1981-01-01

246

A New Method of Deriving Time-Averaged Tropospheric Column Ozone over the Tropics Using Total Ozone Mapping Spectrometer (TOMS) Radiances: Intercomparison and Analysis Using TRACE A Data  

NASA Technical Reports Server (NTRS)

Error analysis of archived total O3 from the total ozone mapping spectrometer (TOMS) (version 6) is presented. Daily total O3 maps for the tropics, from the period October 6-21, 1992, are derived from TOMS radiances following correction for these errors. These daily maps, averaged together, show a wavelike feature, which is observed in all latitude bands, underlying sharp peaks which occur at different longitudes depending on the latitude. The wave pattern is used to derive both time-averaged stratospheric and tropospheric O3 fields. The nature of the wave pattern (stratospheric or tropospheric) cannot be determined with certainty due to missing data (no Pacific sondes, no lower stratospheric Stratospheric Aerosol and Gas Experiment (SAGE) ozone for 18 months after the Mt. Pinatubo eruption) and significant uncertainties in the corroborative satellite record in the lower stratosphere (solar backscattered ultraviolet (SBUV), microwave limb sounder (MLS)). However, the time-averaged tropospheric ozone field, based on the assumption that the wave feature is stratospheric, agrees within 10% with ultraviolet differential absorption laser O3 measurements from the DC-8 during the Transport and Atmospheric Chemistry near the Equator-Atlantic (TRACE A) campaign and with ozonesonde measurements over Brazzaville, Congo, Ascension Island, and Natal, Brazil, for the period October 6-21, 1992. The derived background (nonpolluted) Indian Ocean tropospheric ozone amount, 26 Dobson units (DU), agrees with the cleanest African ozonesonde profiles for September-October 1992. The assumption of a totally tropospheric wave (flat stratosphere) gives 38 DU above the western Indian Ocean and 15-40% disagreements with the sondes. Tropospheric column O3 is high from South America to Africa, owing to interaction of dynamics with biomass burning emissions. Comparison with fire distributions from the advanced very high resolution radiometer (AVHRR) during October 1992 suggests that tropospheric O3 produced from biomass burning in South America and Africa dominates the O3 budget in the tropical southern hemisphere during the study period.

Kim, J. H.; Hudson, R. D.; Thompson, A. M.

1996-01-01

247

Finding a near integrable nonlinear lattice using a convenient time averaging scheme and control of beam halo formation through nonlinear transport.  

NASA Astrophysics Data System (ADS)

A condition for improved dynamic aperture of nonlinear, alternating gradient transport systems has been derived using Lie-transform perturbation theory. The Lie transformation method exploits the property of canonical invariance of Poisson brackets, thereby making the formulation less tedious, especially when the form of the Hamiltonian is complex. Using this method, we have shown that one can transform to a time-averaged reference frame where the focusing is azimuthally symmetric under certain conditions. These conditions are met when sextupoles or octupoles are inserted exactly between the alternating gradient quadrupoles and themselves alternate in sign. This symmetry in the time-averaged frame renders the system nearly integrable in the laboratory frame, leading to reduced chaos and improved confinement when compared to a system that is not close to integrability. Our numerical calculations show that this is indeed the case. The effects of space charge shall also be discussed. We propose the use of nonlinear focusing to mitigate beam halo formation. Various simulation results shall also be presented to demonstrate this.

Sonnad, Kiran G.; Cary, John R.

2002-11-01

248

Field demonstration of rapid turnaround, multilevel groundwater screening  

SciTech Connect

A combined technology approach to rapidly characterizing source area and downgradient groundwater associated with a past fuel spill has been field tested. The purpose of this investigation was to determine the presence and extent of fuel-related compounds or indications of their biodegradation in groundwater. The distance from the source area to be investigated was established by calculating the potential extent of a plume based only on groundwater flow velocities. To accomplish this objective, commercially available technologies were combined and used to rapidly assess the source area and downgradient groundwater associated with the fuel discharge. The source of contamination that was investigated overlies glacial sand and gravel outwash deposits. Historical data suggest that from 1955 to 1970 as many as 1 to 6 million gal of aviation gasoline (AVGAS) were spilled at the study area. Although the remedial investigation (RI) for this study area indicated fuel-related groundwater contamination at the source area, fuel-related contamination was not detected in downgradient monitoring wells. Rapid horizontal groundwater velocities and the 24-year time span since the last reported spill further suggest that a plume of contaminated groundwater could extend several thousand feet downgradient. The lack of contamination downgradient from the source suggests two possibilities: (1) monitoring wells installed during the RI did not intersect the plume or (2) fuel-related compounds had naturally degraded.

Tingle, A.R. [Analysas Corp., Oak Ridge, TN (United States); Baker, L. [Automated Sciences Group, Inc., Oak Ridge, TN (United States); Long, D.D. [Martin Marietta Energy Systems, Inc., Oak Ridge, TN (United States). Hazardous Waste Remedial Actions Program; Miracle, M. [Advanced Sciences, Inc., Oak Ridge, TN (United States)

1994-09-01

249

An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series  

NASA Technical Reports Server (NTRS)

Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.

Wilson, Robert M.

2001-01-01
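
The probabilities quoted in this record follow directly from the Poisson model it describes. A minimal check, assuming only the expected rate of roughly 7 VEI>=4 eruptions per decade stated above (the VEI>=5 and VEI>=6 rates would be substituted analogously):

```python
from math import exp

# For a Poisson process, P(at least one event in an interval) = 1 - exp(-lam),
# where lam is the expected number of events in that interval.
lam_vei4 = 7.0                        # expected VEI>=4 eruptions per decade (from the abstract)
p_at_least_one = 1.0 - exp(-lam_vei4)
print(f"P(>=1 VEI>=4 eruption in 2000-2009) = {p_at_least_one:.4f}")   # >0.99, as reported
```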

250

Turnaround Principals  

ERIC Educational Resources Information Center

The Obama administration has grand hopes for turning around the nation's lowest-performing schools, in part by allocating $3.5 billion for School Improvement Grants. Unfortunately, there simply aren't enough qualified principals to replace those mandated to be fired under two of the four school improvement models that the federal government says…

McLester, Susan

2011-01-01

251

Turnover turnaround.  

PubMed

The Foundation for Long Term Care (FLTC) created a program called Growing Strong Roots to help long-term care providers retain certified nurse aides (CNAs) and to enhance the quality of care that new CNAs provide. Growing Strong Roots adapted elements of existing peer-mentoring programs from around the country, revised them, and added new components, including administrator training, supervisor training, "booster sessions," newsletters, and a multisite evaluation. The program focuses on helping new CNAs become part of a facility's culture. Growing Strong Roots was evaluated in 11 diverse nursing homes. Research shows that nursing homes using the system increased their new CNA retention rates by 25 percent. A control group showed an increase of only 10 percent. All nursing homes participating in the project improved their retention rates, some by as much as 41 percent. In March 2005, the American Society on Aging, in collaboration with the Brookdale Center on Aging of Hunter College, New York City, honored Growing Strong Roots with a Best Practices in Human Resources and Aging Award. PMID:16350898

Hegeman, Carol R

2005-01-01

252

Simulation of the daily average PM10 concentrations at Ta-Liao with Box-Jenkins time series models and multivariate analysis  

NASA Astrophysics Data System (ADS)

Southern Taiwan has experienced severe PM10 problems for over a decade. The present paper describes the establishment of a simulation model for the daily average PM10 concentrations at Ta-Liao, southern Taiwan. The study used regression with time series error models (RTSE models; multivariate ARIMA time series models), including an explanatory variable resulting from principal component analyses, to complete the PM10 simulation. Factor 1 estimated from the factor analyses explained 44-49% of the variance, which indicated the important contribution from the neighbor-city PM10 at Mei-Nung, Lin-Yuang, Zuoying, and Chao-Chou, local ozone, and NOx. Factor 1 can be interpreted as regional PM10 plus photochemical reactions. To improve the predictability of extremely high PM10, different results from the principal component analysis were introduced to the RTSE models. We constructed four kinds of RTSE models: an RTSE model without PC, with PC4S (PM10 at Mei-Nung, Lin-Yuang, Zuoying, and Chao-Chou), with PCTL (meteorological variables and co-pollutants at Ta-Liao), and with PCTL4S (the combination of the above two), and evaluated the statistical model performance. Ozone, dew point temperature, NOx, wind speed, wind directions, and the PC trigger were the significant variables in the RTSE models most of the time. When the neighbor-city PM10 was included in the PC trigger, the predictability was apparently improved. The closeness of fit with the inclusion of PC4S and PCTL4S was improved by reducing SEE from 0.117 to 0.092. Using the RTSE models with PC4S or PCTL4S, POD was improved by an increase of 33%, FAR was reduced by 30%, and CSI was increased by 39% when simulating daily average PM10 > 150 µg m-3. Evidently we need to survey source impacts prior to establishing a simulation model. Factor analysis is a useful method to investigate sources that contributed PM10 to a target site prior to establishing a simulation model.

Liu, Pao-Wen Grace
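
A minimal sketch of the modeling chain described above (principal components of neighbor-city PM10 used as an exogenous trigger in a regression with ARMA errors), assuming hypothetical column names and an input file; it is not the authors' code, and the ARMA order is arbitrary:

```python
import pandas as pd
from sklearn.decomposition import PCA
from statsmodels.tsa.statespace.sarimax import SARIMAX

df = pd.read_csv("taliao_daily.csv")                     # hypothetical input file
neighbors = df[["MeiNung", "LinYuang", "Zuoying", "ChaoChou"]]  # hypothetical neighbor-city PM10 columns
pc4s = PCA(n_components=1).fit_transform(neighbors)      # a PC4S-style trigger factor

exog = pd.DataFrame({
    "pc4s": pc4s[:, 0],                                  # regional PM10 factor
    "o3": df["O3"], "nox": df["NOx"],                    # hypothetical local covariates
    "dewpt": df["DewPoint"], "ws": df["WindSpeed"],
})

# Regression on daily average PM10 with ARMA(1,1) errors (order chosen for illustration)
fit = SARIMAX(df["PM10"], exog=exog, order=(1, 0, 1)).fit(disp=False)
print(fit.summary())
```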

253

Average time until fixation of a mutant allele in a finite population under continued mutation pressure: Studies by analytical, numerical, and pseudo-sampling methods.  

PubMed

We consider a single locus, and denote by A the wild-type allele and by A' the mutant allele that is produced irreversibly in each generation from A at the rate v. Let 1 + s, 1 + h, and 1 be, respectively, the relative fitnesses of the mutant homozygote A'A', the mutant heterozygote A'A, and the wild-type homozygote AA. Then, it is shown, on the basis of the diffusion equation method, that the average time until fixation of the mutant allele (A') in a randomly mating population of effective size N_e, given that the initial frequency is p, is [Formula: see text] in which B(x) = (S/2)x^2 + Hx(1 - x), S = 4N_e s, H = 4N_e h, and V = 4N_e v. Of particular interest are the cases in which the mutant allele is deleterious (s = -s', s' > 0). Three cases are considered; the mutant is: (i) completely dominant, s = h = -s', (ii) completely recessive, s = -s', h = 0, and (iii) semidominant, s = -s', h = -s'/2, in which s' is the selection coefficient against the mutant homozygote. It is shown that the average time until fixation is shorter when the deleterious mutant allele is dominant than when it is recessive if 4N_e v is larger than 1. On the other hand, the situation is reversed if 4N_e v is smaller than 1. It is also shown that for a mutant allele for which N_e s' > 10, it takes such a long time until fixation that we can practically ignore the occurrence of random fixation of a deleterious allele under continued mutation pressure. To supplement the analytical treatment, extensive simulation experiments were performed by using a device called the pseudo-sampling variable, which can enormously accelerate the process of simulation by a computer. This method simulates the diffusion process itself rather than the binomial sampling process (in population genetics the diffusion model is usually regarded as an approximation of the discrete binomial sampling process). PMID:16592764

Kimura, M

1980-01-01
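
A rough sketch of the pseudo-sampling idea mentioned at the end of the abstract, restricted to the neutral case with one-way mutation; the drift variance, the uniform pseudo-sampling variable, the boundary clipping, and the parameter values are all simplifying assumptions rather than Kimura's full treatment:

```python
import numpy as np

def generations_to_fixation(Ne=100, v=0.0025, p=0.0, rng=None):
    """One replicate of a diffusion-style simulation using a pseudo-sampling
    variable in place of binomial sampling. Neutral case, one-way mutation
    A -> A' at rate v; all parameter values are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    x, t = p, 0
    while x < 1.0:
        m = v * (1.0 - x)                            # mean change from mutation
        var = x * (1.0 - x) / (2.0 * Ne)             # drift variance per generation
        xi = np.sqrt(3.0 * var) * rng.uniform(-1.0, 1.0)   # pseudo-sampling variable (mean 0, variance var)
        x = min(max(x + m + xi, 0.0), 1.0)           # clip to [0, 1] (a simplification)
        t += 1
    return t

rng = np.random.default_rng(1)
times = [generations_to_fixation(rng=rng) for _ in range(100)]
print("average time until fixation:", np.mean(times), "generations")
```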

254

Iterative diagonalization in the state-averaged multi-configurational time-dependent Hartree approach: excited state tunneling splittings in malonaldehyde.  

PubMed

An iterative block Lanczos-type diagonalization scheme utilizing the state-averaged multi-configurational time-dependent Hartree (MCTDH) approach is introduced. Combining propagation in real and imaginary time and using a set of initial seed wavefunctions corresponding to excitations via the different components of the dipole moment vector, the scheme can favorably be used to selectively compute vibrational states which show high intensities in vibrational absorption spectra. Tunneling-split vibrational states in double-well systems can be described particularly efficiently by employing an increased set of seed wavefunctions which includes symmetric and anti-symmetric wavefunctions simultaneously. The new approach is used to study the tunneling splittings of the vibrationally excited states of malonaldehyde. Full-dimensional multi-layer MCTDH calculations are performed, and results for the tunneling splittings of several excited vibrational states are obtained. The calculated tunneling splittings agree reasonably well with available experimental data. Order-of-magnitude differences between tunneling splittings of different vibrationally excited states are found and interpreted. PMID:22320723

Hammer, Thorsten; Manthe, Uwe

2012-02-01

255

On radar time and the twin ``paradox''  

Microsoft Academic Search

In this paper we apply the concept of radar time (popularized by Bondi in his work on k calculus) to the well-known relativistic twin ``paradox.'' Radar time is used to define hypersurfaces of simultaneity for a class of traveling twins, from the ``immediate turn-around'' case, through the ``gradual turn-around'' case, to the ``uniformly accelerating'' case. We show that this definition

Carl E. Dolby; Stephen F. Gull

2001-01-01

256

Numerical study of laminar and turbulent flows inside a turnaround duct with and without guide vanes  

NASA Technical Reports Server (NTRS)

The purpose of this study is to examine in detail incompressible laminar and turbulent flows inside a turnaround duct with and without guide vanes and to investigate the effects of vanes on the flow characteristics. To perform this study, an implicit finite difference code cast in general curvilinear coordinates is further developed. The code is based on the method of pseudo-compressibility and utilizes an ADI, or implicit approximate factorization, algorithm to achieve computational efficiency. A method of segmental sweeping is developed to overcome the multiple-zone problem arising from the imposition of guide vanes. In the present study, several test cases have been computed. These test cases include laminar and turbulent flows inside a turnaround duct without and with two or three guide vanes. The study reveals that: (1) there exist large recirculation zones inside the duct if no vanes are present; (2) properly shaped and positioned guide vanes are effective in eliminating flow separation; and (3) laminar and turbulent flows have similar flow features, but the turbulent flow has a smaller total pressure drop.

Lin, S.-J.; Chang, James L. C.

1987-01-01

257

Instantaneous and time-averaged flow structures around a blunt double-cone with or without supersonic film cooling visualized via nano-tracer planar laser scattering  

NASA Astrophysics Data System (ADS)

In a Mach 3.8 wind tunnel, both instantaneous and time-averaged flow structures of different scales around a blunt double-cone with or without supersonic film cooling were visualized via nano-tracer planar laser scattering (NPLS), which has a high spatiotemporal resolution. Three experimental cases with different injection mass flux rates were carried out. Many typical flow structures were clearly shown, such as shock waves, expansion fans, shear layers, mixing layers, and turbulent boundary layers. The analysis of two NPLS images with an interval of 5 µs revealed the temporal evolution characteristics of flow structures. With matched pressures, the laminar length of the mixing layer was longer than that in the case with a larger mass flux rate, but the fully covered region was shorter. Structures like K-H (Kelvin-Helmholtz) vortices were clearly seen in both flows. Without injection, the flow was similar to the supersonic flow over a backward-facing step, the structures were relatively simpler, and there was a longer laminar region. Large scale structures such as hairpin vortices were visualized. In addition, the results were compared in part with the schlieren images captured by others under similar conditions.

Zhu, Yang-Zhu; Yi, Shi-He; He, Lin; Tian, Li-Feng; Zhou, Yong-Wei

2013-01-01

258

Theoretical and numerical calculations for the time-averaged acoustic force and torque acting on a rigid cylinder of arbitrary size in a low viscosity fluid.  

PubMed

In this paper, theoretical calculations as well as numerical simulations are performed for the time-averaged acoustic force and torque on a rigid cylinder of arbitrary size in a fluid with low viscosity, i.e., the acoustic boundary layer is thin compared to the cylinder radius. An exact analytical solution and its approximation are proposed in the form of an infinite series including Bessel functions. These solutions can be evaluated easily by a mathematical software package such as Mathematica and MATLAB. Three types of incident waves, plane traveling wave, plane standing wave, and dual orthogonal standing waves, are investigated in detail. It is found that for a small particle, the viscous effects for an incident standing wave may be neglected but those for an incident traveling wave are notable. A nonzero viscous torque is experienced by the rigid cylinder when subjected to dual orthogonal standing waves with a phase shift even when the cylinder is located at equilibrium positions without imposed acoustic forces. Furthermore, numerical simulations are carried out based on the FVM algorithm to verify the proposed theoretical formulas. The theoretical results and the numerical ones agree with each other very well in all the cases considered. PMID:21682376

Wang, Jingtao; Dual, Jurg

2011-06-01

259

A Segmented Chirped-Pulse Fourier Transform Mm-Wave Spectrometer (260-295 Ghz) with Real-Time Signal Averaging Capability  

NASA Astrophysics Data System (ADS)

The design and performance of a 260-295 GHz segmented chirped-pulse Fourier transform mm-wave spectrometer is presented. The spectrometer uses an arbitrary waveform generator to create an excitation and detection waveform. The excitation waveform is a series of chirped pulses with 720 MHz bandwidth at mm-wave and about 200 ns pulse duration. The excitation pulses are produced using an x24 active multiplier chain with a peak power of 30 mW. Following a chirped pulse excitation, the molecular emission from all transitions in the excitation bandwidth is detected using heterodyne detection. The free induction decay (FID) is collected for about 1.5 microseconds and each segment measurement time period is 2 microseconds. The local oscillator for the detection in each segment is also created from the arbitrary waveform generator. The full excitation waveform contains 50 segments that scan the chirped pulse frequency and LO frequency across the 260-295 GHz frequency range in a total measurement time of 100 microseconds. The FID from each measurement segment is digitized at 4 GSamples/s, for a record length of 400 kpts. Signal averaging is performed by accumulating the FID signals from each sweep through the spectrum in a 32-bit FPGA. This allows the acquisition of 16 million sequential 260-295 GHz spectra in real time. The final spectrum is produced from fast Fourier transform of the FID in each measurement segment with the frequency calculated using the segment's LO frequency. The agility of the arbitrary waveform generator light source makes it possible to perform several coherent spectroscopic measurements to speed the analysis of the spectrum. In particular, high-sensitivity double-resonance measurements can be performed by applying a "pi-pulse" to a selected molecular transition and observing the changes to all other transitions in the 260-295 GHz frequency range of the spectrometer. In this mode of operation, up to 50 double-resonance frequencies can be used in each segment with the double-resonance signal collection taking place in real time.

Harris, Brent J.; Steber, Amanda L.; Pate, Brooks H.

2013-06-01
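
A schematic sketch of the averaging and FFT steps described above, not the instrument's firmware: FIDs from repeated sweeps are averaged, and each segment's spectrum comes from an FFT whose frequency axis is offset by that segment's LO frequency. The sample rate and record length follow the abstract; the tone, noise, and LO value are synthetic assumptions.

```python
import numpy as np

fs = 4e9                                    # 4 GSample/s digitizer (from the abstract)
n_fid = int(1.5e-6 * fs)                    # ~1.5 microsecond FID record per segment
t = np.arange(n_fid) / fs

# Synthetic stand-in for one segment's FID over three sweeps: a 300 MHz offset
# tone plus noise (real data would come from the heterodyne receiver).
sweeps = [np.sin(2 * np.pi * 300e6 * t) + 0.5 * np.random.randn(n_fid) for _ in range(3)]

# Signal averaging: accumulate FIDs across sweeps (the FPGA's role), then average.
avg_fid = np.sum(sweeps, axis=0) / len(sweeps)

# Per-segment spectrum: FFT of the averaged FID, with the frequency axis offset
# by the segment's LO frequency (260 GHz is used here purely as an example).
spec = np.abs(np.fft.rfft(avg_fid))
freqs = 260e9 + np.fft.rfftfreq(n_fid, d=1.0 / fs)
print(freqs[np.argmax(spec[1:]) + 1] / 1e9, "GHz")   # peak near 260.3 GHz
```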

260

High-Latitude Paleomagnetic and Ar-Ar Study of 0 - 6 MA Lavas from Eastern Iceland: Contribution to the Time-Averaged Field Initiative  

NASA Astrophysics Data System (ADS)

Statistical analyses of paleomagnetic data from sequential lava flows allow us to study the geomagnetic field behavior on kyr to Myr timescales. Previous paleomagnetic studies have lacked the high-latitude, high-quality measurements and resolution necessary to investigate the persistence of high-latitude geomagnetic field anomalies observed in the recent and historical field records, and replicated in some numerical geodynamo simulations. As part of the Time-Averaged Field Initiative (TAFI) project, the lava sequences found in Nordurdalur (by Fljótsdalur) and Jökuldalur in eastern Iceland provide an excellent opportunity to improve high-latitude data suitable for investigating the 0-5 Ma TAF and paleosecular variation. These adjacent valleys, separated by 40 km, are known to comprise a fairly continuous record of lava flows erupted from the Northern Rift Zone between 0.5 and 5-7 Ma. During a five-week field campaign in summer 2013, we collected a total of ~1900 cores (10-16 cores/site; mean = ~13 cores/site) from ~140 separate lava flows (165 in total) along eight stratigraphic profiles in Nordurdalur and Jökuldalur. In addition, hand samples were collected from ~70 sites to deliver ~40 new 40Ar/39Ar radiometric age measurements. We present a preliminary composite magnetostratigraphic interpretation of the exposed volcanic pile in Nordurdalur and Jökuldalur. The new data will be compared and contrasted with previously published paleomagnetic and geochronological results. In addition, determinations of the anisotropy of the magnetic susceptibility of individual lava flows are sought to deliver fossil lava flow directions. The aim of the study is ultimately to present a high-quality study of paleomagnetic directions and intensities from Iceland spanning the past 6-7 Myr. The new Fljótsdalur and Jökuldalur data will be combined with previously published paleomagnetic results.

Døssing, A.; Muxworthy, A. R.; Mac Niocaill, C.; Riishuus, M. S.

2013-12-01

261

Multi-component Reynolds-averaged Navier–Stokes simulations of Richtmyer–Meshkov instability and mixing induced by reshock at different times  

NASA Astrophysics Data System (ADS)

Turbulent mixing generated by shock-driven acceleration of a perturbed interface is simulated using a new multi-component Reynolds-averaged Navier-Stokes (RANS) model closed with a two-equation K-ε model. The model is implemented in a hydrodynamics code using a third-order weighted essentially non-oscillatory finite-difference method for the advection terms and a second-order central difference method for the gradients in the source and diffusion terms. In the present reshocked Richtmyer-Meshkov instability and mixing study, an incident shock is generated in air and progresses into a sulfur hexafluoride test section. The time evolution of the predicted mixing layer widths corresponding to six shock tube test section lengths is compared with experimental measurements and three-dimensional multi-mode numerical simulations. The mixing layer widths are also compared with the analytical self-similar power-law solution of the simplified model equations prior to reshock. A set of model coefficients and initial conditions specific to these six experiments is established, for which the widths before and after reshock agree very well with experimental and numerical simulation data. A second set of general coefficients that accommodates a broader range of incident shock Mach numbers, Atwood numbers, and test section lengths is also established by incorporating additional experimental data and previous RANS modeling. Terms in the budgets of the turbulent kinetic energy and dissipation rate equations are examined to evaluate the relative importance of turbulence production, dissipation, and diffusion mechanisms during mixing. Convergence results for the mixing layer widths, mean fields, and turbulent fields under grid refinement are presented for each of the cases.

Morán-López, J. T.; Schilling, O.

2014-05-01

262

The First 90 Days of the New Middle School Principal in a Turnaround School: In-Depth Case Study of the Transition Period (First 90 Days)  

ERIC Educational Resources Information Center

This study analyzed skills, strategies, and theories that new middle school principals used to be successful during their transition period (the first 90 days) in turnaround schools. Based on research on transitions, three research questions guided the study: 1. Do middle school principals in a turnaround school situation find the transition…

Baeza, Marco A.

2010-01-01

263

Time-Averaged Rate of Detachment Faulting at Atlantis Bank, Southwest Indian Ridge: Evidence for Highly Asymmetric Spreading Rates During the Formation of Oceanic Core-Complexes  

NASA Astrophysics Data System (ADS)

Determining the rates of oceanic detachment faulting and how they compare to rates of seafloor spreading is of fundamental importance for constraining how these features form and their relationship to tectonic processes, including magmatic accretion, in slow-spreading environments. We use Pb/U ages of igneous zircon from lower crustal gabbros and sea-surface magnetic data from Atlantis Bank to determine half-spreading rates during detachment faulting. We suggest that these rates, which record crustal accretion and cooling of lower crust in the Antarctic plate, also reflect the time-averaged rate of motion on the detachment fault. During the time Atlantis Bank formed, from 11.2 to 12.5 Ma, both techniques yield a half-spreading rate of 14±1 km/Ma. The magnetically determined full-spreading rate has been 14 km/Ma since 20 Ma, and magnetic data from neighboring segments show that this full spreading rate did not increase during the period of detachment faulting. Our data also show that the period of highly asymmetric spreading was confined to the segment containing Atlantis Bank, and so was associated with ridge migration and closure of the non-transform discontinuity to the east. Therefore, detachment faulting at Atlantis Bank occurred during a period of highly asymmetric spreading, a result consistent with sea-surface magnetic data from the FUJI dome at 63° 45'E on the SWIR. Our data require a re-evaluation of existing models that predict structural asymmetry but not asymmetric plate spreading rates during the formation of oceanic core-complexes. The half-spreading rate during detachment faulting suggests that for ~1.3 Ma the detachment fault acted as the primary plate boundary and that the axial valley was (within resolution) stationary with respect to the hanging wall (African Plate). If volcanism continued within the axial valley, a thick layer of extrusive material would have formed. This has not been identified. A combination of hydrothermal cooling and strain localization on the detachment fault might lead to limited diking and volcanism within the hanging wall, thus restricting crustal accretion to the lower crust and producing a thick plutonic layer in the Antarctic Plate, a prediction consistent with seismic velocities and crustal thickness observed beneath Atlantis Bank.

Baines, A. G.; Schwartz, J. J.; Cheadle, M. J.; John, B. E.

2005-12-01

264

Spherical harmonic analyses of paleomagnetic data: The time-averaged geomagnetic field for the past 5 Myr and the Brunhes-Matuyama reversal  

NASA Astrophysics Data System (ADS)

Maxwell's multiple pole theory provides the basis for a convenient means of determining the spherical harmonics of geomagnetic fields from directional paleomagnetic data. The relationship between the Maxwell poles and axes and the corresponding spherical harmonics was given by Maxwell. We show that the distribution of virtual geomagnetic poles (VGPs) is symmetrical about the Maxwell axes and converges to their poles. Utilizing this property of VGPs and a complementary property of the distribution of equatorial virtual poles (EVPs), which are defined as points 90° from the VGP on the site meridian, leads to a means of obtaining the Gaussian spherical harmonic coefficients from paleomagnetic data. This VGP method involves the minimization of the horizontal components of the magnetic fields at VGPs and the radial components of the magnetic field at EVPs to yield the best-fitting spherical harmonic coefficients. A hybrid variant of the VGP method involving this minimization and fitting the available mean vectors measured at sites has also been developed. Tests of the VGP and hybrid methods on model fields derived from the International Geomagnetic Reference Field (IGRF) 1995 demonstrate that they are effective means of determining the Gauss coefficients for any field, requiring no a priori assumptions about the field. The hybrid method has been applied in a preliminary analysis of the time-averaged paleomagnetic data for the past 5 Myr and for the Brunhes chron. The resulting mean fields were dominantly dipolar, but there were also persistent second-order features, suggesting some long-term smaller-scale control over the dynamo process. The hybrid method was also applied to data from the Brunhes-Matuyama reversal. Four models were obtained: (1) a mean field model for reversed polarity, (2) a mean field model for normal polarity, (3) a mean field model for the entire reversal, and (4) a time sequence field model. The results were consistent with the Americas being a persistent site of inward field lines in the nondipole field and suggested that the reversal is initiated by decreases in the strength of the dipole, quadrupole, and octupole terms. The true dipole path of the transitional field tracks across eastern Asia, but a strong radially inward flux bundle moves over Africa. The paucity of demonstrably reliable data in these reversal records, however, requires that these results be interpreted cautiously.

Shao, J.-C.; Fuller, M.; Tanimoto, T.; Dunn, J. R.; Stone, D. B.

1999-03-01

265

Real-time cross-sectional averaged void fraction measurements in vertical annulus gas-liquid two-phase flow by neutron radiography and X-ray tomography techniques  

Microsoft Academic Search

A Real-Time Neutron Radiography (RTNR) system and a high speed X-ray Computed Tomography (X-CT) system are used to determine the flow regime, the instantaneous cross-sectional averaged void fraction, and the time averaged void fraction in a vertical annulus flow channel. A standard optical video system is also used to observe the flow regime. The annulus flow channel is operated as

G. D. Harvel; K. Hori; K. Kawanishi; J. S. Chang

1996-01-01

266

Portfolio District Reform Meets School Turnaround: Early Implementation Findings from the Los Angeles Public School Choice Initiative  

ERIC Educational Resources Information Center

Purpose: Despite the popularity of school "turnaround" and "portfolio district" management as solutions to low performance, there has been limited research on these strategies. The purpose of this paper is to address this gap by exploring the strategic case of Los Angeles Unified School District's Public School Choice…

Marsh, Julie A.; Strunk, Katharine O.; Bush, Susan

2013-01-01

267

The Reading Turn-Around: A Five Part Framework for Differentiated Instruction. Practitioners Bookshelf, Language & Literacy Series  

ERIC Educational Resources Information Center

This book demonstrates a five-part framework for teachers, reading specialists, and literacy coaches who want to help their least engaged students become powerful readers. Merging theory and practice, the guide offers successful strategies to reach your "struggling" learners. The authors show how you can "turn-around" your instructional practice,…

Jones, Stephanie; Clarke, Lane; Enriquez, Grace

2009-01-01

268

The Circular F-Actin Bundles Provide a Track for Turnaround and Bidirectional Movement of Mitochondria in Arabidopsis Root Hair  

PubMed Central

Background The movement of organelles in root hairs primarily occurs along the actin cytoskeleton. Circulation and “reverse fountain” cytoplasmic streaming constitute the typical forms by which most organelles (such as mitochondria and the Golgi apparatus) in plant root hair cells engage in bidirectional movement. However, there remains a lack of in-depth research regarding the relationship between the distribution of the actin cytoskeleton and turnaround organelle movement in plant root hair cells. Results In this paper, Arabidopsis seedlings that had been stably transformed with a GFP-ABD2-GFP (green fluorescent protein-actin-binding domain 2-green fluorescent protein) construct were utilized to study the distribution of bundles of filamentous (F)-actin and the directed motion of mitochondria along these bundles in root hairs. Observations with a confocal laser scanning microscope revealed that there were widespread circular F-actin bundles in the epidermal cells and root hairs of Arabidopsis roots. In root hairs, these circular bundles primarily start at the sub-apical region, which is the location where the turnaround movement of organelles occurs. MitoTracker probes were used to label mitochondria, and the dynamic observation of root hair cells with a confocal laser scanning microscope indicated that turnaround mitochondrial movement occurred along circular F-actin bundles. Conclusions Relevant experimental results demonstrated that the circular F-actin bundles provide a track for the turnaround and bidirectional movement of mitochondria.

Zhang, Yu; Sheng, Xiaojing; Meng, Xiangfei; Li, Yan

2014-01-01

269

Developing Arizona Turnaround Leaders to Build High-Capacity Schools in the Midst of Accountability Pressures and Changing Demographics  

ERIC Educational Resources Information Center

Today's accountability policies and changing demographics have created conditions in which leaders must rapidly build school capacity and improve outcomes in culturally diverse schools. This article presents findings from a mixed-methods evaluation of an Arizona Turnaround Leadership Development Project. The project drew on studies of…

Ylimaki, Rose M.; Brunderman, Lynnette; Bennett, Jeffrey V.; Dugan, Thad

2014-01-01

270

The use of fast Fourier transform for the estimation of power spectra: A method based on time averaging over short, modified periodograms  

Microsoft Academic Search

The use of the fast Fourier transform in power spectrum analysis is described. Principal advantages of this method are a reduction in the number of computations and in required core storage, and convenient application in nonstationarity tests. The method involves sectioning the record and averaging modified periodograms of the sections.

P. Welch

1967-01-01
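
Since this record is Welch's averaged-periodogram method, a compact sketch of the procedure it describes (sectioning, windowing, and averaging the modified periodograms) is given below; the test signal is synthetic, and scipy.signal.welch offers a production implementation of the same scheme.

```python
import numpy as np

def welch_psd(x, fs, nperseg=256, noverlap=128):
    """Average modified periodograms of overlapping, windowed sections of x.
    Returns a one-sided power spectral density estimate."""
    win = np.hanning(nperseg)
    step = nperseg - noverlap
    scale = fs * (win ** 2).sum()                     # density normalization
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    periodograms = [np.abs(np.fft.rfft(win * (s - s.mean()))) ** 2 / scale for s in segs]
    psd = np.mean(periodograms, axis=0)               # average over sections
    psd[1:-1] *= 2                                    # fold negative frequencies (one-sided)
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, psd

# Example: a 50 Hz tone in noise, sampled at 1 kHz for 4 seconds
fs = 1000.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)
f, p = welch_psd(x, fs)
print(f[np.argmax(p)])                                # peak near 50 Hz
```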

271

On the estimation of coefficients of a simultaneous linear explosive model of higher orders with moving average errors generating a pair of time series  

Microsoft Academic Search

In this paper, estimation of coefficients of a simultaneous linear partially explosive model of higher orders with moving average errors is considered. It has been shown that the above model can be decomposed into a purely explosive model and an autoregressive model. A two-stage estimation procedure is carried out towards proposing estimators for the partially explosive model. The asymptotic properties

G. Gopal; K. Suresh Chandra

1991-01-01

272

Neutron resonance averaging  

SciTech Connect

The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.

Chrien, R.E.

1986-10-01

273

Residual life assessment of the SSME/ATD HPOTP turnaround duct (TAD)  

NASA Technical Reports Server (NTRS)

This paper is concerned with the prediction of the low cycle thermal fatigue behavior of a component in a developmental (ATD) high pressure liquid oxygen turbopump (HPOTP) for the Space Shuttle Main Engine (SSME). This component is called the Turnaround Duct (TAD). The TAD is a complex single-piece casting of MAR-M-247 material. Its function is to turn the hot turbine exhaust gas (a 1200 °F hydrogen-rich gas stream) such that it can exhaust radially out of the turbopump. In very simple terms, the TAD consists of two rings connected axially by 22 hollow airfoil-shaped struts, with the turning vanes placed at the top, middle, and bottom of each strut. The TAD is attached to the other components of the pump via bolts passing through 14 of the 22 struts. Of the remaining 8 struts, four are equally spaced (at 90 deg intervals) and contain a cooling tube through which liquid hydrogen passes on its way to cool the shaft bearing assemblies. The remaining 4 struts are empty. One of the pump units in the certification test series was destructively examined after 22 test firings. Substantial axial cracking was found in two of the struts which contain cooling tubes. None of the other 20 struts showed any sign of internal cracking. This unusual low cycle thermal fatigue behavior within the two cooling tube struts is the focus of this study.

Gross, R. Steven

1996-01-01

274

MIT BLOSSOMS - Flaws of Averages  

NSDL National Science Digital Library

This learning video presents an introduction to the Flaws of Averages using three exciting examples: the “crossing of the river” example, the “cookie” example, and the “dance class” example. Averages are often worthwhile representations of a set of data by a single descriptive number. The objective of this module, however, is to simply point out a few pitfalls that could arise if one is not attentive to details when calculating and interpreting averages. Most students at any level in high school can understand the concept of the flaws of averages presented here. The essential prerequisite knowledge for this video lesson is the ability to calculate an average from a set of numbers. Materials needed include: pen and paper for the students; and a blackboard or equivalent. During this video lesson, students will learn about three flaws of averages: (1) The average is not always a good description of the actual situation, (2) The function of the average is not always the same as the average of the function, and (3) The average depends on your perspective. To convey these concepts, the students are presented with the three real world examples mentioned above. The total length of the four in-class video segments is 12 minutes, leaving lots of time in a typical class session for the teacher to work with the students on their own learning examples (such as those from the supplementary notes) to firm up the ideas presented here on the flaws of averages.

Blossoms, Mit

2011-06-23
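
A tiny numeric illustration of the second flaw listed above (the function of the average is not the average of the function); the numbers and the squaring function are invented and are not taken from the BLOSSOMS materials.

```python
import numpy as np

values = np.array([0.5, 0.6, 0.7, 3.0])   # made-up data set
f = lambda x: x ** 2                      # any nonlinear function of interest

print(f(values.mean()))    # function of the average: 1.44
print(f(values).mean())    # average of the function: 2.525 -- not the same
```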

275

Threaded Average Temperature Thermocouple.  

National Technical Information Service (NTIS)

A threaded average temperature thermocouple is provided to measure the average temperature of a situs in a test material. A ceramic insulator rod with two parallel holes through its length is securely fitted in a cylinder, which is bored along the longitu...

S. W. Ward

1988-01-01

276

The average eye.  

PubMed

For statistical and other purposes one needs to be able to determine an average eye. An average of refractive errors is readily calculated as an average of dioptric power matrices. A refractive error, however, is not so much a property of the eye as a property of the compensating lens in front of the eye. As such, it ignores other aspects of the optical character of the eye. This paper discusses the difficulties of finding a suitable average that fully accounts for the first-order optics of a set of optical systems. It proposes an average based on ray transferences and logarithms and exponentials of matrices. Application to eyes in particular is discussed. PMID:15491486

Harris, W F

2004-11-01
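
A sketch of the kind of average the abstract proposes, averaging the matrix logarithms of ray transferences and exponentiating the result; the two 2x2 transference matrices below are made up for illustration and are not clinical values.

```python
import numpy as np
from scipy.linalg import expm, logm

def average_transference(transferences):
    logs = [logm(T) for T in transferences]                 # matrix logarithms
    return np.real_if_close(expm(np.mean(logs, axis=0)))    # exponential of their mean

T1 = np.array([[1.0, 0.0160], [-60.0, 0.0400]])   # hypothetical unit-determinant transferences
T2 = np.array([[1.0, 0.0165], [-58.0, 0.0430]])
print(average_transference([T1, T2]))
```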

277

Feed intake, average daily gain, feed efficiency, and real-time ultrasound traits in Duroc pigs: I. Genetic parameter estimation and accuracy of genomic prediction.  

PubMed

The efficiency of producing salable products in the pork industry is largely determined by costs associated with feed and by the amount and quality of lean meat produced. The objectives of this paper were 1) to explore heritability and genetic correlations for growth, feed efficiency, and real-time ultrasound traits using both pedigree and marker information and 2) to assess accuracy of genomic prediction for those traits using Bayes A prediction models in a Duroc terminal sire population. Body weight at birth (BW at birth) and weaning (BW at weaning) and real-time ultrasound traits, including back fat thickness (BF), muscle depth (MD), and intramuscular fat content (IMF), were collected on the basis of farm protocol. Individual feed intake and serial BW records of 1,563 boars obtained from feed intake recording equipment (FIRE; Osborne Industries Inc., Osborne, KS) were edited to obtain growth, feed intake, and feed efficiency traits, including ADG, ADFI, feed conversion ratio (FCR), and residual feed intake (RFI). Correspondingly, 1,047 boars were genotyped using the Illumina PorcineSNP60 BeadChip. The remaining 516 boars, as an independent sample, were genotyped with a low-density GGP-Porcine BeadChip and imputed to 60K. Magnitudes of heritability from pedigree analysis were moderate for growth, feed intake, and ultrasound traits (ranging from 0.44 ± 0.11 for ADG to 0.58 ± 0.09 for BF); heritability estimates were 0.32 ± 0.09 for FCR but only 0.10 ± 0.05 for RFI. Comparatively, heritability estimates using marker information by Bayes A models were about half of those from pedigree analysis, suggesting "missing heritability." Moderate positive genetic correlations between growth and feed intake (0.32 ± 0.05) and back fat (0.22 ± 0.04), as well as negative genetic correlations between growth and feed efficiency traits (-0.21 ± 0.08, -0.05 ± 0.07), indicate selection solely on growth traits may lead to an undesirable increase in feed intake, back fat, and reduced feed efficiency. Genetic correlations among growth, feed intake, and FCR assessed by a multiple-trait Bayes A model resulted in increased genetic correlation between ADG and ADFI, a negative correlation between ADFI and FCR, and a positive correlation between ADG and FCR. Accuracies of genomic prediction for the traits investigated, ranging from 9.4% for RFI to 36.5% for BF, were reported that might provide new insight into pig breeding and future selection programs using genomic information. PMID:24671579

Jiao, S; Maltecca, C; Gray, K A; Cassady, J P

2014-06-01

278

Determining GPS average performance metrics  

NASA Technical Reports Server (NTRS)

Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego as over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include the average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.

Moore, G. V.

1995-01-01
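
As background to the averaged PDOP/GDOP metrics mentioned above, here is a hedged sketch of the standard dilution-of-precision computation from unit line-of-sight vectors; the satellite geometry is invented, and this is not the semi-analytic tool described in the record.

```python
import numpy as np

def dops(unit_los):
    """unit_los: (n, 3) array of receiver-to-satellite unit vectors.
    Returns (GDOP, PDOP) from the geometry of the position/clock solution."""
    A = np.hstack([-unit_los, np.ones((unit_los.shape[0], 1))])  # rows: [-u, 1]
    Q = np.linalg.inv(A.T @ A)          # cofactor matrix of the least-squares solution
    gdop = np.sqrt(np.trace(Q))
    pdop = np.sqrt(np.trace(Q[:3, :3]))
    return gdop, pdop

# Made-up geometry: one satellite overhead, three spread around the sky
los = np.array([[0.0, 0.0, 1.0],
                [0.8, 0.0, 0.6],
                [-0.4, 0.7, 0.6],
                [-0.4, -0.7, 0.6]], dtype=float)
los /= np.linalg.norm(los, axis=1, keepdims=True)
print(dops(los))
```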

279

Threaded average temperature thermocouple  

NASA Technical Reports Server (NTRS)

A threaded average temperature thermocouple 11 is provided to measure the average temperature of a test situs of a test material 30. A ceramic insulator rod 15 with two parallel holes 17 and 18 through the length thereof is securely fitted in a cylinder 16, which is bored along the longitudinal axis of symmetry of threaded bolt 12. Threaded bolt 12 is composed of material having thermal properties similar to those of test material 30. Leads of a thermocouple wire 20 leading from a remotely situated temperature sensing device 35 are each fed through one of the holes 17 or 18, secured at head end 13 of ceramic insulator rod 15, and exit at tip end 14. Each lead of thermocouple wire 20 is bent into and secured in an opposite radial groove 25 in tip end 14 of threaded bolt 12. Resulting threaded average temperature thermocouple 11 is ready to be inserted into cylindrical receptacle 32. The tip end 14 of the threaded average temperature thermocouple 11 is in intimate contact with receptacle 32. A jam nut 36 secures the threaded average temperature thermocouple 11 to test material 30.

Ward, Stanley W. (Inventor)

1990-01-01

280

On radar time and the twin ``paradox''  

NASA Astrophysics Data System (ADS)

In this paper we apply the concept of radar time (popularized by Bondi in his work on k calculus) to the well-known relativistic twin ``paradox.'' Radar time is used to define hypersurfaces of simultaneity for a class of traveling twins, from the ``immediate turn-around'' case, through the ``gradual turn-around'' case, to the ``uniformly accelerating'' case. We show that this definition of simultaneity is independent of choice of coordinates, and assigns a unique time to any event (with which the traveling twin can send and receive signals), resolving some common misconceptions.

Dolby, Carl E.; Gull, Stephen F.

2001-12-01

281

Dynamic Shapes Average.  

National Technical Information Service (NTIS)

A framework for computing shape statistics in general, and average in particular, for dynamic shapes is introduced in this paper. Given a metric d(·,·) on the set of static shapes, the empirical mean of N static shapes, C_1, ..., C_N, is defined b...

P. Maurel G. Sapiro

2005-01-01

282

The Average Teacher.  

ERIC Educational Resources Information Center

Data from the U.S. Current Population Survey and the Schools and Staffing Survey indicate that the average public school teacher is 42 years old, female, and white, with a bachelor's degree, 15 years of teaching experience, and an annual salary of $39,346. About 74 percent of teachers surveyed plan to stay in teaching as long as possible, 69…

Teacher Magazine, 2002

2002-01-01

283

Average Rate of Change  

NSDL National Science Digital Library

This demo provides students with a concrete understanding of the average rate of change for physical situations and for functions described in tabular or graphic form. Animations in gif and Quicktime format are available, as well as Excel programs for classroom or student use.

Roberts, Lila F.; Hill, David R.

2005-04-21

284

An Average Lunch?  

NSDL National Science Digital Library

This statistics activity requires students to find the mean using their TI-10 or TI-15 Explorer calculators. The task asks students to read part of Sideways Stories of Wayside School by Louis Sachar and find different combinations of lunches that would give an average price of $1.85. This resource includes a three-page PDF lesson plan with guiding questions, data collection page, and calculator instructions.

2008-06-09

285

Average balance equations for granular materials  

Microsoft Academic Search

A general weighted space-time averaging procedure is developed and utilized to obtain the average balance equations for granular materials. The method is applicable to both solid-like (quasi-static) and fluid-like (granular flow) modes of granular material behavior. The average balance equations take the same mathematical form while all average quantities involved in these equations retain the same clear physical meaning whether

Marijan Babic

1997-01-01

286

Dynamic Multiscale Averaging (DMA) of Turbulent Flow  

SciTech Connect

A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical engineering applications.

Richard W. Johnson

2012-09-01
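
A minimal sketch (assuming simple NumPy fields, not the authors' code) of the two averaging operations the DMA approach layers together: a running time average accumulated during the fine-scale simulation, and volume (block) averaging of the result onto a coarser grid.

```python
import numpy as np

def running_time_average(avg, new_sample, n):
    """Update a running mean after the n-th sample (n starts at 1)."""
    return avg + (new_sample - avg) / n

def volume_average(field, factor=2):
    """Block-average a 2-D field onto a grid coarsened by `factor` in each direction."""
    nx, ny = field.shape
    trimmed = field[:nx - nx % factor, :ny - ny % factor]
    return trimmed.reshape(nx // factor, factor, ny // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
avg = np.zeros((64, 64))
for n in range(1, 101):                        # stand-in for DNS time steps
    snapshot = rng.standard_normal((64, 64))   # stand-in for an instantaneous field
    avg = running_time_average(avg, snapshot, n)

coarse = volume_average(avg, factor=4)         # hand the averaged field to the next scale
print(coarse.shape)                            # (16, 16)
```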

287

Timing control improves seabed survey data quality  

Microsoft Academic Search

Seateam has completed development of and field-proven the Dolphin data acquisition and timing system for high-density surveys offshore. The Dolphin project was initiated to improve quality control of survey sensor data and ensure time synchronization, thus leading to faster turnaround of seabed terrain information. Data received from survey sensors is asynchronous, so the system must provide for data correlation. This

1996-01-01

288

On fast Birkhoff averaging  

NASA Astrophysics Data System (ADS)

We study the pointwise behavior of Birkhoff sums S_nφ(x) on subshifts of finite type for Hölder continuous functions φ. In particular, we show that for a given equilibrium state μ associated to a Hölder continuous potential, there are points x such that S_nφ(x) − n E_μ(φ) ~ a n^β for any a > 0 and 0 < β < 1. Actually the Hausdorff dimension of the set of such points is bounded from below by the dimension of μ, and it is attained by some maximizing equilibrium state ν such that E_ν(φ) = E_μ(φ). On such points the ergodic average n^(-1) S_nφ(x) converges more rapidly than predicted by the Birkhoff Theorem, the Law of the Iterated Logarithm and the Central Limit Theorem. All these sets, for different choices of (a, β), are distinct but have the same dimension. This reveals a rich multifractal structure of the symbolic dynamics. As a consequence, we prove that the set of uniformly recurrent points, which are close to periodic points, has full dimension. Applications are also given to the study of syndetic numbers, Hardy-Weierstraß functions and lacunary Taylor series.

Fan, Ai-Hua; Schmeling, Jörg

2003-11-01

289

Ensemble averaging of acoustic data  

NASA Technical Reports Server (NTRS)

A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions for the program. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System, consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.

Stefanski, P. K.

1982-01-01

290

Reliable and sensitive detection of fragile X (expanded) alleles in clinical prenatal DNA samples with a fast turnaround time.  

PubMed

This study evaluated a large set of blinded, previously analyzed prenatal DNA samples with a novel, CGG triplet-repeat primed (TP)-PCR assay (Amplidex FMR1 PCR Kit; Asuragen, Austin, TX). This cohort of 67 fetal DNAs contained 18 full mutations (270 to 1100 repeats, including 1 mosaic), 12 premutations (59 to 150 repeats), 9 intermediate mutations (54 to 58 repeats), and 28 normal samples (17 to 50 repeats, including 3 homozygous female samples). TP-PCR accurately identified FMR1 genotypes, ranging from normal to full-mutation alleles, with a 100% specificity (95% CI, 85.0% to 100%) and a 97.4% sensitivity (95% CI, 84.9% to 99.9%) in comparison with Southern blot analysis results. Exact sizing was possible for a spectrum of normal, intermediate, and premutation (up to 150 repeats) alleles, but CGG repeat numbers >200 are only identified as full mutations. All homozygous alleles were correctly resolved. The assay is also able to reproducibly detect a 2.5% premutation and a 3% full-mutation mosaicism in a normal male background, but a large premutation in a full male mutation background was masked when the amount of the latter was >5%. Implementation of this TP-PCR will significantly reduce reflex testing using Southern blot analyses. Additional testing with methylation-informative techniques might still be needed for a few cases with (large) premutations or full mutations. PMID:22921311

Seneca, Sara; Lissens, Willy; Endels, Kristof; Caljon, Ben; Bonduelle, Maryse; Keymolen, Kathleen; De Rademaeker, Marjan; Ullmann, Urielle; Haentjens, Patrick; Van Berkel, Kim; Van Dooren, Sonia

2012-11-01

291

Computer-assisted time-averaged holograms of the motion of the surface of the mammalian tympanic membrane with sound stimuli of 0.4-25 kHz.  

PubMed

Time-averaged holograms describing the sound-induced motion of the tympanic membrane (TM) in cadaveric preparations from three mammalian species and one live ear were measured using opto-electronic holography. This technique allows rapid measurements of the magnitude of motion of the tympanic membrane surface at frequencies as high as 25 kHz. The holograms measured in response to low and middle-frequency sound stimuli are similar to previously reported time-averaged holograms. However, at higher frequencies (f>4 kHz), our holograms reveal unique TM surface displacement patterns that consist of highly-ordered arrangements of multiple local displacement magnitude maxima, each of which is surrounded by nodal areas of low displacement magnitude. These patterns are similar to modal patterns (two-dimensional standing waves) produced by either the interaction of surface waves traveling in multiple directions or the uniform stimulation of modes of motion that are determined by the structural properties and boundary conditions of the TM. From the ratio of the displacement magnitude peaks to nodal valleys in these apparent surface waves, we estimate a Standing Wave Ratio of at least 4 that is consistent with energy reflection coefficients at the TM boundaries of at least 0.35. It is also consistent with small losses within the uniformly stimulated modal surface waves. We also estimate possible TM surface wave speeds that vary with frequency and species from 20 to 65 m/s, consistent with other estimates in the literature. The presence of standing wave or modal phenomena has previously been intuited from measurements of TM function, but is ignored in some models of tympanic membrane function. Whether these standing waves result either from the interactions of multiple surface waves that travel along the membrane, or by uniformly excited modal displacement patterns of the entire TM surface is still to be determined. PMID:19328841

Rosowski, John J; Cheng, Jeffrey Tao; Ravicz, Michael E; Hulli, Nesim; Hernandez-Montes, Maria; Harrington, Ellery; Furlong, Cosme

2009-07-01

292

Accuracy and Robustness of Dynamical Tracking of Average Glycemia (A1c) to Provide Real-Time Estimation of Hemoglobin A1c Using Routine Self-Monitored Blood Glucose Data  

PubMed Central

Abstract Background: Laboratory hemoglobin A1c (HbA1c) assays are typically done only every few months. However, self-monitored blood glucose (SMBG) readings offer the possibility for real-time estimation of HbA1c. We present a new dynamical method tracking changes in average glycemia to provide real-time estimation of A1c (eA1c). Materials and Methods: A new two-step algorithm was constructed that includes: (1) tracking fasting glycemia to compute base eA1c updated with every fasting SMBG data point and (2) calibration of the base eA1c trace with monthly seven-point SMBG profiles to capture the principal components of blood glucose variability and produce eA1c. A training data set (n=379 subjects) was used to estimate model parameters. The model was then fixed and applied to an independent test data set (n=375 subjects). Accuracy was evaluated in the test data set by computing mean absolute deviation (MAD) and mean absolute relative deviation (MARD) of eA1c from reference HbA1c, as well as eA1c–HbA1c correlation. Results: MAD was 0.50, MARD was 6.7%, and correlation between eA1c and reference HbA1c was r=0.76. Using an HbA1c error grid plot, 77.5% of all eA1c fell within 10% from reference HbA1c, and 97.9% fell within 20% from reference. Conclusions: A dynamical estimation model was developed that achieved accurate tracking of average glycemia over time. The model is capable of working with infrequent SMBG data typical for type 2 diabetes, thereby providing a new tool for HbA1c estimation at the patient level. The computational demands of the procedure are low; thus it is readily implementable into home SMBG meters. Real-time HbA1c estimation could increase patients' motivation to improve diabetes control.

Flacke, Frank; Sieber, Jochen; Breton, Marc D.

2014-01-01

293

Accuracy and robustness of dynamical tracking of average glycemia (a1c) to provide real-time estimation of hemoglobin a1c using routine self-monitored blood glucose data.  

PubMed

Abstract Background: Laboratory hemoglobin A1c (HbA1c) assays are typically done only every few months. However, self-monitored blood glucose (SMBG) readings offer the possibility for real-time estimation of HbA1c. We present a new dynamical method tracking changes in average glycemia to provide real-time estimation of A1c (eA1c). Materials and Methods: A new two-step algorithm was constructed that includes: (1) tracking fasting glycemia to compute base eA1c updated with every fasting SMBG data point and (2) calibration of the base eA1c trace with monthly seven-point SMBG profiles to capture the principal components of blood glucose variability and produce eA1c. A training data set (n=379 subjects) was used to estimate model parameters. The model was then fixed and applied to an independent test data set (n=375 subjects). Accuracy was evaluated in the test data set by computing mean absolute deviation (MAD) and mean absolute relative deviation (MARD) of eA1c from reference HbA1c, as well as eA1c-HbA1c correlation. Results: MAD was 0.50, MARD was 6.7%, and correlation between eA1c and reference HbA1c was r=0.76. Using an HbA1c error grid plot, 77.5% of all eA1c fell within 10% from reference HbA1c, and 97.9% fell within 20% from reference. Conclusions: A dynamical estimation model was developed that achieved accurate tracking of average glycemia over time. The model is capable of working with infrequent SMBG data typical for type 2 diabetes, thereby providing a new tool for HbA1c estimation at the patient level. The computational demands of the procedure are low; thus it is readily implementable into home SMBG meters. Real-time HbA1c estimation could increase patients' motivation to improve diabetes control. PMID:24299302

Kovatchev, Boris P; Flacke, Frank; Sieber, Jochen; Breton, Marc D

2014-05-01

294

Averaging methods in nonlinear dynamical systems  

Microsoft Academic Search

Contents: Basic material. Asymptotics of slow-time processes, first steps. The theory of averaging. Attraction. Averaging over spatial variables: systems with slowly varying frequency and passage through resonance. Normal forms. Hamiltonian systems. Appendices.

J. A. Sanders; F. Verhulst

1985-01-01

295

Average-cost based robust structural control  

NASA Technical Reports Server (NTRS)

A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.

Hagood, Nesbitt W.

1993-01-01

296

Differential absorption lidar signal averaging  

NASA Technical Reports Server (NTRS)

This paper presents experimental results using an atmospheric backscatter dual CO2 laser DIAL. It is shown that DIAL signals can be averaged to obtain a decrease in the standard deviation of the ratio of backscattered returns from two lasers that follows an N^(-1/2) dependence, where N is the number of DIAL signals averaged, and that such a lidar system can make measurements of gas concentrations with a precision of 0.7 percent in absorptance over 75 m in a short measurement time when the signal strength is high. Factors that eventually limit the rate of improvement in the SNR, such as changes in the ratio of the absorption and/or backscatter at the two laser frequencies and background noise, are discussed. In addition, it is noted that DIAL measurements made using hard-target backscatter often show departures from the N^(-1/2) improvement in the standard deviation because they are further limited by the combined effects of atmospheric turbulence and speckle (since the relative reproducibility of the speckle pattern on the receiver gives rise to correlations of the lidar signals).
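
A small simulation, under the simplifying assumption of independent, identically distributed ratio measurements, illustrating the N^(-1/2) behavior described above; the ratio and noise values are illustrative, not taken from the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    true_ratio, noise = 0.9, 0.05          # illustrative values, not from the paper

    def std_of_averaged_ratio(n_signals, n_trials=2000):
        # Each trial averages n_signals noisy ratio measurements.
        ratios = true_ratio + noise * rng.standard_normal((n_trials, n_signals))
        return ratios.mean(axis=1).std()

    for n in (1, 4, 16, 64):
        print(n, std_of_averaged_ratio(n))  # the standard deviation falls roughly as 1/sqrt(n)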

Grant, William B.; Brothers, Alan M.; Bogan, James R.

1988-01-01

297

Reliable Transcript Quantification by Real-Time Reverse Transcriptase-Polymerase Chain Reaction in Primary Neuroblastoma Using Normalization to Averaged Expression Levels of the Control Genes HPRT1 and SDHA  

PubMed Central

Real-time reverse transcriptase-polymerase chain reaction (RT-PCR) represents a sensitive and efficient technique to determine expression levels of target genes in multiple samples and is increasingly used in clinical oncology to evaluate the patient’s outcome or to detect minimal residual disease. Normalization of raw data is required to obtain comparable results between different specimens and is usually achieved by correlating transcript abundances of target genes with those of a single control gene with putatively stable expression levels. In this study, expression stability of six supposed control genes was evaluated in 64 samples of primary neuroblastoma, and HPRT1 and SDHA mRNA levels were shown to exhibit the least expression variability among the samples. Because application of more than one control gene may enhance reliability of real-time RT-PCR results, various normalization factors consisting of the geometrical mean of multiple control gene expression values were calculated and evaluated by mRNA quantification of 14 target genes. Comparison with transcript levels determined by oligonucleotide-array expression analysis revealed that target gene mRNA quantification became most consistent after normalization to averaged expression levels of HPRT1 and SDHA. This normalization factor was in addition demonstrated not to be associated with stage of disease or MYCN amplification status of the tumor. Thus, these data indicate that the geometrical mean of HPRT1 and SDHA transcript levels represents a suitable internal control for biological and clinical studies investigating differential gene expression in primary neuroblastoma by real-time RT-PCR.
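
A minimal sketch of normalizing a target-gene transcript level to the geometrical mean of the two control genes described above; the expression values are illustrative relative quantities, not data from the study:

    from math import sqrt

    # Illustrative relative expression values for one sample (arbitrary units).
    hprt1, sdha, target = 1.8, 2.2, 15.0

    normalization_factor = sqrt(hprt1 * sdha)       # geometrical mean of the two control genes
    normalized_target = target / normalization_factor
    print(normalization_factor, normalized_target)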

Fischer, Matthias; Skowron, Matthias; Berthold, Frank

2005-01-01

298

Small scale magnetic flux-averaged magnetohydrodynamics  

SciTech Connect

By relaxing exact magnetic flux conservation below a scale λ, a system of flux-averaged magnetohydrodynamic equations is derived from Hamilton's principle with modified constraints. An energy principle can be derived from the linearized averaged system because the total system energy is conserved. This energy principle is employed to treat the resistive tearing instability, and the exact growth rate is recovered when λ is identified with the resistive skin depth. A necessary and sufficient stability criterion for the tearing instability with line tying at the ends, relevant to solar coronal loops, is also obtained. The method is extended to both spatial and temporal averaging in Hamilton's principle. The resulting system of equations not only allows flux reconnection but introduces irreversibility for an appropriate choice of the averaging function. Except for boundary contributions, which are modified by the time-averaging process, total energy and momentum are conserved over times much longer than the averaging time τ, but not over times shorter than τ. These modified boundary contributions correspond also to the existence of damped waves and shock waves in this theory. Time and space averaging is applied to electron magnetohydrodynamics and, in one-dimensional geometry, predicts solitons and shocks in different limits.

Pfirsch, D. (Max-Planck-Institut fuer Plasmaphysik, EURATOM Association, D-8046 Garching (Germany)); Sudan, R.N. (Laboratory of Plasma Studies, Cornell University, Ithaca, New York 14853 (United States))

1994-08-01

299

Towards sensitive label-free immunosensing by means of turn-around point long period fiber gratings.  

PubMed

Long period fiber gratings have been used effectively in the field of biochemical sensing for several years. Compared to other well-known label-free optical approaches, long period gratings (LPGs) take advantage of the characteristic features of optical fibers. Coupling the propagating core mode with a high-order cladding mode near its turn-around point (TAP) was the strategy adopted to achieve good performance without additional coatings, except for the sensing and selective biolayer deposited on the fiber. Both the modeling and manufacturing of TAP LPGs are discussed. After functionalization of the fiber surface by deposition of a Eudragit L100 copolymer layer followed by covalent immobilization of immunoglobulin G (IgG), an IgG/anti-IgG bioassay was implemented along the grating region and the kinetics of the antibody/antigen interaction was analyzed. A quantitative comparison between a TAP LPG and a non-TAP LPG was carried out to highlight the improvement offered by the proposed immunosensor. The effectiveness and feasibility of an LPG-based biosensor were demonstrated using a complex matrix consisting of human serum, which also confirmed the specificity of the assay, and a limit of detection of 70 µg L(-1) (460 pM) was achieved. PMID:24835405

Chiavaioli, F; Biswas, P; Trono, C; Bandyopadhyay, S; Giannetti, A; Tombelli, S; Basumallick, N; Dasgupta, K; Baldini, F

2014-10-15

300

An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System  

NASA Astrophysics Data System (ADS)

PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency-intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a tiny, low-footprint, commercial real-time operating system, have associated drawbacks. An efficient alternative algorithm, based on a randomized selection policy, has been proposed, demonstrated, confirmed to be efficient and fair on average, and recommended for implementation in PicOS. Simulations were carried out, and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results show that the randomized algorithm is the most attractive for implementation in PicOS, since it is the fairest and has the lowest AWT and ATT, on average, among the non-preemptive scheduling algorithms implemented in this paper.
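
A hedged sketch of the two performance measures named above for a non-preemptive scheduler that selects the next ready process at random; the burst times and the exact selection policy used in the paper may differ:

    import random

    def simulate_random_nonpreemptive(burst_times, seed=0):
        """Run ready processes (all arriving at t=0) in random order; return AWT and ATT."""
        rng = random.Random(seed)
        order = list(range(len(burst_times)))
        rng.shuffle(order)                       # randomized selection policy (assumed form)
        t, waits, turnarounds = 0, [], []
        for i in order:
            waits.append(t)                      # waiting time = start time - arrival time (0)
            t += burst_times[i]
            turnarounds.append(t)                # turnaround time = completion time - arrival time (0)
        return sum(waits) / len(waits), sum(turnarounds) / len(turnarounds)

    awt, att = simulate_random_nonpreemptive([5, 3, 8, 2])   # illustrative burst times
    print(awt, att)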

Helmy, Tarek; Fatai, Anifowose; Sallam, El-Sayed

301

Averaging Models: Parameters Estimation with the R-Average Procedure  

ERIC Educational Resources Information Center

The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

Vidotto, G.; Massidda, D.; Noventa, S.

2010-01-01

302

Average Speed and Unit Conversion  

NSDL National Science Digital Library

Students will determine average speeds from collected data and convert units for speed problems. Students try to roll a ball at a prescribed average speed based on intuition; then, using unit conversion, they see how accurate the rolls really were.

2009-01-01

303

Predictor Selection for Model Averaging  

Microsoft Academic Search

When a number of distinct models is available for prediction, choice of a single model can offer unstable results. In regression, stochastic search variable selection with Bayesian model averaging is a solution for this robustness issue but utilizes very many predictors. Here we look at Bayesian model averaging that incorporates variable selection for prediction and use decision theory

MARINA VANNUCCI; PHILIP J. BROWN; TOM FEARN

2001-01-01

304

Changes of Average Maternal Age  

Microsoft Academic Search

In a report in NATURE of March 18 of an address delivered by Dr. L. S. Penrose to the Industrial Section of the British Psychological Society on February 8, the following sentence occurs: "When the birth-rate is falling, the average maternal age [average age at child-birth] will probably become greater". In support of this view it is stated that Dr.

R. S. Barclay; W. O. Kermack

1939-01-01

305

Computation of flows in a turn-around duct and a turbine cascade using advanced turbulence models  

NASA Technical Reports Server (NTRS)

Numerical investigation has been carried out to evaluate the capability of the Algebraic Reynolds Stress Model (ARSM) and the Nonlinear Stress Model (NLSM) to predict strongly curved turbulent flow in a turn-around duct (TAD). The ARSM includes the near-wall damping term of pressure-strain correlation phi(sub ij,w), which enables accurate prediction of individual Reynolds stress components in wall flows. The TAD mean flow quantities are reasonably well predicted by various turbulence models. The ARSM yields better predictions for both the mean flow and the turbulence quantities than the NLSM and the k-epsilon (k = turbulent kinetic energy, epsilon = dissipation rate of k) model. The NLSM also shows slight improvement over the k-epsilon model. However, all the models fail to capture the recovery of the flow from strong curvature effects. The formulation for phi(sub ij,w) appears to be incorrect near the concave surface. The hybrid k-epsilon/ARSM, Chien's k-epsilon model, and Coakley's q-omega (q = the square root of k, omega = epsilon/k) model have also been employed to compute the aerodynamics and heat transfer of a transonic turbine cascade. The surface pressure distributions and the wake profiles are predicted well by all the models. The k-epsilon model and the k-epsilon/ARSM model provide better predictions of heat transfer than the q-omega model. The k-epsilon/ARSM solutions show significant differences in the predicted skin friction coefficients, heat transfer rates and the cascade performance parameters, as compared to the k-epsilon model. The k-epsilon/ARSM model appears to capture, qualitatively, the anisotropy associated with by-pass transition.

Lakshminarayana, B.; Luo, J.

1993-01-01

306

Reflight of the First Microgravity Science Laboratory: Quick Turnaround of a Space Shuttle Mission  

NASA Technical Reports Server (NTRS)

Due to the short flight of Space Shuttle Columbia, STS-83, in April 1997, NASA chose to refly the same crew, shuttle, and payload on STS-94 in July 1997. This was the first reflight of an entire mission complement. The reflight of the First Microgravity Science Laboratory (MSL-1) on STS-94 required an innovative approach to Space Shuttle payload ground processing. Ground processing time for the Spacelab Module, which served as the laboratory for MSL-1 experiments, was reduced by seventy-five percent. The Spacelab Module is a pressurized facility with avionics and thermal cooling and heating accommodations. Boeing-Huntsville, formerly McDonnell Douglas Aerospace, has been the Spacelab Integration Contractor since 1977. The first Spacelab Module flight was in 1983. An experienced team determined what was required to refurbish the Spacelab Module for reflight. Team members had diverse knowledge, skills, and background. An engineering assessment of subsystems, including mechanical, electrical power distribution, command and data management, and environmental control and life support, was performed. Recommendations for resolution of STS-83 Spacelab in-flight anomalies were provided. Inspections and tests that must be done on critical Spacelab components were identified. This assessment contributed to the successful reflight of MSL-1, the fifteenth Spacelab Module mission.

Simms, Yvonne

1998-01-01

307

Large Eddy/Reynolds-Averaged Navier-Stokes Simulations of CUBRC Base Heating Experiments  

NASA Technical Reports Server (NTRS)

Even with great advances in computational techniques and computing power during recent decades, the modeling of unsteady separated flows, such as those encountered in the wake of a re-entry vehicle, continues to be one of the most challenging problems in CFD. Of most interest to the aerothermodynamics community is accurately predicting transient heating loads on the base of a blunt body, which would result in reduced uncertainties and safety margins when designing a re-entry vehicle. However, the prediction of heat transfer can vary widely depending on the turbulence model employed. Therefore, selecting a turbulence model which realistically captures as much of the flow physics as possible will improve the results. Reynolds Averaged Navier Stokes (RANS) models have become increasingly popular due to their good performance with attached flows, and the relatively quick turnaround time to obtain results. However, RANS methods cannot accurately simulate unsteady separated wake flows, and running direct numerical simulation (DNS) on such complex flows is currently too computationally expensive. Large Eddy Simulation (LES) techniques allow for the computation of the large eddies, which contain most of the Reynolds stress, while modeling the smaller (subgrid) eddies. This results in models which are more computationally expensive than RANS methods, but not as prohibitive as DNS. By complementing an LES approach with a RANS model, a hybrid LES/RANS method resolves the larger turbulent scales away from surfaces with LES, and switches to a RANS model inside boundary layers. As pointed out by Bertin et al., this type of hybrid approach has shown a lot of promise for predicting turbulent flows, but work is needed to verify that these models work well in hypersonic flows. The very limited amounts of flight and experimental data available present an additional challenge for researchers. Recently, a joint study by NASA and CUBRC has focused on collecting heat transfer data on the backshell of a scaled model of the Orion Multi-Purpose Crew Vehicle (MPCV). Heat augmentation effects due to the presence of cavities and RCS jet firings were also investigated. The high quality data produced by this effort presents a new set of data which can be used to assess the performance of CFD methods. In this work, a hybrid LES/RANS model developed at North Carolina State University (NCSU) is used to simulate several runs from these experiments, and evaluate the performance of high fidelity methods as compared to more typical RANS models.

Salazar, Giovanni; Edwards, Jack R.; Amar, Adam J.

2012-01-01

308

Kuss Middle School: Expanding Time to Accelerate School Improvement  

ERIC Educational Resources Information Center

In 2004, Kuss Middle School became the first school declared "Chronically Underperforming" by the state of Massachusetts. But by 2010, Kuss had transformed itself into a model for schools around the country seeking a comprehensive turnaround strategy. Kuss is using increased learning time as the primary catalyst to accelerate learning, deepen…

Massachusetts 2020, 2012

2012-01-01

309

High average power pockels cell  

DOEpatents

A high average power Pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The Pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.

Daly, Thomas P. (Pleasanton, CA)

1991-01-01

310

Average case analysis of dynamic graph algorithms  

Microsoft Academic Search

We present a model for edge updates with restricted randomness in dynamic graph algorithms and a general technique for analyzing the expected running time of an update operation. This model is able to capture the average case in many applications, since (1) it allows restrictions on the set of edges which can be used for insertions and (2) the type

David Alberts; Monika Rauch Henzinger

1995-01-01

311

Engineering a Turnaround  

ERIC Educational Resources Information Center

This article describes the Soddy-Daisy High School in southeastern Tennessee. It used to be that vocational training and a focus on academic studies were considered completely different means of education. But in Soddy-Daisy, Tennessee, the two go hand in hand. Eric Thomas and his brother Mark, teach side by side in adjacent rooms, where computer…

Hood, Lucy

2006-01-01

312

Making a Turnaround  

ERIC Educational Resources Information Center

It's true that each school and each classroom presents its own unique challenges. But Kevin Mixon--who in October traded his instrumental music instructor duties at Blodgett School on the west side of Syracuse, New York, for a job as fine arts coordinator for Syracuse City Schools--knows from experience that some schools are far more challenging…

Olson, Cathy Applefeld

2010-01-01

313

Average deployments versus missile and defender parameters  

Microsoft Academic Search

This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing.

Canavan

1991-01-01

314

Maximum Likelihood Estimation of the Autoregressive Coefficients and Moving Average Covariances of Vector Autoregressive Moving Average Models.  

National Technical Information Service (NTIS)

The purpose of this paper is to derive asymptotically efficient estimates for the autoregressive matrix coefficients and moving average covariance matrices of the vector autoregressive moving average (VARMA) models in both time and frequency domains. To d...

F. Ahrabi

1979-01-01

315

Averaging inhomogenous cosmologies - a dialogue  

NASA Astrophysics Data System (ADS)

The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.

Buchert, T.

316

Averaging inhomogeneous cosmologies - a dialogue.  

NASA Astrophysics Data System (ADS)

The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.

Buchert, T.

317

ARMA (Autoregressive Moving Average) Identification,  

National Technical Information Service (NTIS)

In view of recent results on the asymptotic behavior of the prediction error covariance for a state variable system (see Ref. 1), an identification scheme for autoregressive moving average (ARMA) processes is proposed. The coefficients of the d-step predi...

G. Algengrin; R. S. Bucy; J. M. Moura; J. Pages; M. I. Ribeiro

1987-01-01

318

PAC-Bayesian model averaging  

Microsoft Academic Search

PAC-Bayesian learning methods combine the informative priors of Bayesian methods with distribution-free PAC guarantees. Building on earlier methods for PAC-Bayesian model selection, this paper presents a method for PAC-Bayesian model averaging. The method constructs an optimized weighted mixture of concepts analogous to a Bayesian posterior distribution. Although the main result is stated for bounded loss, a preliminary analysis for unbounded loss is also given. 1 INTRODUCTION A PAC-Bayesian...

David A. McAllester; Shannon Labs

1999-01-01

319

Comments on "The optimization of electronic precision in ultrasonic velocity measurements: A comparison of the time interval averaging and sing around methods" (J. Acoust. Soc. Am. 73, 1833-1837 (1983))  

SciTech Connect

J. D. Aindow and R. C. Chivers (J. Acoust. Soc. Am. 73, 1833 (1983)) compared the precision of the direct "time-of-flight" technique with the "sing-around" method for sound velocity measurement. Their conclusion is changed by the newer, faster, commercial clocks (2 ns for the HP5345 vs. <0.1 ns for the HP5370), giving the advantage to the time-of-flight method. The analysis is herewith augmented by calculating the time jitter in terms of signal-to-noise ratio, which was correctly shown to be negligible with 100-ns clocks, but becomes increasingly more significant with faster clocks.

Karplus, H.B.

1984-06-01

320

Time  

ERIC Educational Resources Information Center

When a teacher gives their time to a student, it is more significant to that student than anything else one could do for him or her. Music teachers deal with time all the time. Someone once said that "time is like money: we never have enough." This may seem true; however, time is not like money. One can make more money, but one cannot "make time."…

Circle, David

2005-01-01

321

F-IF Average Cost  

NSDL National Science Digital Library

This is a task from the Illustrative Mathematics website that is one part of a complete illustration of the standard to which it is aligned. Each task has at least one solution and some commentary that addresses important aspects of the task and its potential use. Here are the first few lines of the commentary for this task: John makes DVDs of his friend's shows. He has realized that, because of his fixed costs, his average cost per DVD depends on the number of DVDs he prod...

322

Averaging over multiple or continuous scales  

NASA Astrophysics Data System (ADS)

Several important physical systems with microstructures that vary spatially and/or temporally are examined. The appropriate averaging schemes are developed to derive effective equations and parameters describing the long-term, long-range behavior of the systems studied. For a general time-periodic linear planar flow, it is shown that the large-time, time-average transport of a pulse of localized solute can be described by classical anisotropic diffusion. The constant effective diffusion tensor is derived as a function of frequency, amplitude and flow topology. Statistical averaging of a similar advection-diffusion problem with a random velocity field possessing certain stochastic properties is carried out. An extension of earlier exact renormalization theory to the case of two- and three-dimensional homogeneous stationary isotropic turbulence allows the derivation of an integral-scale effective transport equation and the associated effective diffusivity, which is time-dependent in general, in terms of parameters defining the velocity statistics. With the Kolmogorov velocity spectrum, the developed theory predicts both the time evolution of the mean-square displacement of fluid particles and the infinite-time asymptotic value of the eddy diffusivity in turbulent pipe and channel flows with acceptable accuracy over a wide range of Reynolds number. A macroscopic disjoining pressure near a triple contact line for an arbitrary air/liquid interface is derived from the attractive van der Waals forces by a renormalized coarse-graining scheme. It is then used to obtain the dynamic contact angles for partially-wetting fluids. For glycerine on Plexiglas, the predicted critical capillary number for the onset of rolling motion is in good agreement with fingering experiments both for gravity-driven flow and during spin coating. The resulting dynamic contact angle for Santicizer 405 on silicone-treated glass is in quantitative agreement with Hoffman's capillary data and, at low spreading speeds, with earlier asymptotic theories.

Indeikina, Alexandra E.

323

Low-pass filters for signal averaging  

SciTech Connect

Detailed comparison of the settling time-noise bandwidth products of 31 types of low-pass filters demonstrates that the settling time-noise bandwidth product is the figure of merit for such filters when the goal is averaging. Common filters such as Butterworth, elliptic, and Chebyshev are found to be unusable for such purposes while others, such as Bessel filters, offer only moderate figures of merit. The best reported analog low-pass filter differs from ideality by only about 11%. The optimum analog low-pass filter, having continuous, rational transfer function, is unknown.

Voigtman, E.; Winefordner, J.D.

1986-05-01

324

Average configuration of the induced Venus magnetotail  

SciTech Connect

In this paper we discuss the interaction of the solar wind flow with Venus and describe the morphology of magnetic field line draping in the Venus magnetotail. In particular, we describe the importance of the interplanetary magnetic field (IMF) X-component in controlling the configuration of field draping in this induced magnetotail, and using the results of a recently developed technique, we examine the average magnetic configuration of this magnetotail. The derived J x B forces must balance the average, steady state acceleration of, and pressure gradients in, the tail plasma. From this relation the average tail plasma velocity, lobe and current sheet densities, and average ion temperature have been derived. In this study we extend these results by making a connection between the derived consistent plasma flow speed and density, and the observational energy/charge range and sensitivity of the Pioneer Venus Orbiter (PVO) plasma analyzer, and demonstrate that if the tail is principally composed of O+, the bulk of the plasma should not be observable much of the time that the PVO is within the tail. Finally, we examine the importance of solar wind slowing upstream of the obstacle and its implications for the temperature of pick-up planetary ions, compare the derived ion temperatures with their theoretical maximum values, and discuss the implications of this process for comets and AMPTE-type releases.

McComas, D.J.; Spence, H.E.; Russell, C.T.

1985-01-01

325

Achronal averaged null energy condition  

SciTech Connect

The averaged null energy condition (ANEC) requires that the integral over a complete null geodesic of the stress-energy tensor projected onto the geodesic tangent vector is never negative. This condition is sufficient to prove many important theorems in general relativity, but it is violated by quantum fields in curved spacetime. However there is a weaker condition, which is free of known violations, requiring only that there is no self-consistent spacetime in semiclassical gravity in which ANEC is violated on a complete, achronal null geodesic. We indicate why such a condition might be expected to hold and show that it is sufficient to rule out closed timelike curves and wormholes connecting different asymptotically flat regions.

Graham, Noah; Olum, Ken D. [Department of Physics, Middlebury College, Middlebury, Vermont 05753 (United States) and Center for Theoretical Physics, Laboratory for Nuclear Science, and Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Institute of Cosmology, Department of Physics and Astronomy, Tufts University, Medford, Massachusetts 02155 (United States)

2007-09-15

326

Geomagnetic effects on the average surface temperature  

NASA Astrophysics Data System (ADS)

Several results have previously shown that the solar activity can be related to the cloudiness and the surface solar radiation intensity (Svensmark and Friis-Christensen, J. Atmos. Sol. Terr. Phys., 59, 1225, 1997; Veretenenko and Pudovkin, J. Atmos. Sol. Terr. Phys., 61, 521, 1999). Here, the possible relationships between the averaged surface temperature and the solar wind parameters or geomagnetic activity indices are investigated. The temperature data used are the monthly SST maps (generated at RAL and available from the related ESRIN/ESA database) that represent the averaged surface temperature with a spatial resolution of 0.5° x 0.5° and cover the entire globe. The interplanetary data and the geomagnetic data are from the USA National Space Science Data Center. The time interval considered is 1995-2000. Specifically, possible associations and/or correlations of the average temperature with the interplanetary magnetic field Bz component and with the Kp index are considered and differentiated, taking into account separate geographic and geomagnetic planetary regions.

Ballatore, P.

327

A Simple Algorithm for Averaging Spike Trains  

PubMed Central

Although spike trains are the principal channel of communication between neurons, a single stimulus will elicit different spike trains from trial to trial. This variability, in both spike timings and spike number, can obscure the temporal structure of spike trains and often means that computations need to be run on numerous spike trains in order to extract features common across all the responses to a particular stimulus. This can increase the computational burden and obscure analytical results. As a consequence, it is useful to consider how to calculate a central spike train that summarizes a set of trials. Indeed, averaging responses over trials is routine for other signal types. Here, a simple method for finding a central spike train is described. The spike trains are first mapped to functions, these functions are averaged, and a greedy algorithm is then used to map the average function back to a spike train. The central spike trains are tested for a large data set. Their performance on a classification-based test is considerably better than the performance of the medoid spike trains.
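
A hedged, simplified sketch of the approach described above: map each spike train to a smooth function, average the functions, and map the average back to a spike train. The Gaussian kernel width and the cumulative-count placement rule below are illustrative stand-ins for the paper's exact mapping and greedy step:

    import numpy as np

    def central_spike_train(trains, t_max, sigma=0.05, dt=0.001):
        """Average Gaussian-smoothed spike trains and map the average back to spike times."""
        t = np.arange(0.0, t_max, dt)
        smoothed = []
        for train in trains:
            f = np.zeros_like(t)
            for s in train:                                  # one Gaussian bump per spike
                f += np.exp(-0.5 * ((t - s) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
            smoothed.append(f)
        avg = np.mean(smoothed, axis=0)                      # average the functions
        counts = np.cumsum(avg) * dt                         # expected spike count up to time t
        n_spikes = int(round(counts[-1]))
        # Place the k-th spike where the cumulative count crosses k - 0.5 (stand-in for the greedy step).
        return [t[np.searchsorted(counts, k - 0.5)] for k in range(1, n_spikes + 1)]

    print(central_spike_train([[0.10, 0.52, 0.90], [0.12, 0.48, 0.95], [0.09, 0.55]], t_max=1.0))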

2013-01-01

328

Iridium Oxide Nanomonitors for Real-time Health Monitoring Systems  

Microsoft Academic Search

An innovative lab-on-a-chip device based on iridium oxide (IrOx) nanowires was designed, fabricated, and developed for real-time “point-of-care” diagnostics. The turnaround time in detection and the need for expensive equipment for analysis have considerably limited rapid and “point-of-care” diagnosis. This research demonstrates the potential of IrOx nanowires toward early disease diagnosis by detecting proteins that are disease markers. The device

Vinu L. Venkatraman; Fengyan Zhang; David Evans; Bruce Ulrich; Shalini Prasad

2009-01-01

329

A visibility graph averaging aggregation operator  

NASA Astrophysics Data System (ADS)

The problem of aggregation is of considerable importance in many disciplines. In this paper, a new type of operator called the visibility graph averaging (VGA) aggregation operator is proposed. This operator is based on the visibility graph, which can convert a time series into a graph. The weights are obtained according to the importance of the data in the visibility graph. Finally, the VGA operator is used in the analysis of the TAIEX database to illustrate that it is practical; compared with classic aggregation operators, it not only aggregates the data but also preserves the time information, and the determination of the weights is more reasonable.
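
A hedged, minimal sketch of the idea described above: build the natural visibility graph of a short time series and weight each observation by its node degree, so that more "visible" points contribute more to the aggregate. The degree-based weights are an illustrative stand-in for the paper's exact weight definition:

    def visibility_graph_average(x):
        """Aggregate a time series with weights derived from its natural visibility graph."""
        n = len(x)
        degree = [0] * n
        for a in range(n):
            for b in range(a + 1, n):
                # (a, b) are mutually visible if every intermediate point lies below the connecting line.
                if all(x[c] < x[a] + (x[b] - x[a]) * (c - a) / (b - a) for c in range(a + 1, b)):
                    degree[a] += 1
                    degree[b] += 1
        weights = [d / sum(degree) for d in degree]          # degree-based weights (assumed form)
        return sum(w * v for w, v in zip(weights, x))

    print(visibility_graph_average([5.0, 2.0, 7.0, 3.0, 6.0, 4.0]))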

Chen, Shiyu; Hu, Yong; Mahadevan, Sankaran; Deng, Yong

2014-06-01

330

Chasing trends: recursive moving average trading rules and internet stocks  

Microsoft Academic Search

The recent rise and fall of Internet stock prices has led to popular impressions of a speculative bubble in the Internet sector. We investigate whether investors could have exploited the momentum in Internet stocks using simple moving average (MA) trading rules. We simulate real time technical trading using a recursive trading strategy applied to over 800 moving average rules. Statistical

Wai Mun Fong; Lawrence H. M. Yong

2005-01-01

331

Delineating the Average Rate of Change in Longitudinal Models  

ERIC Educational Resources Information Center

The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…

Kelley, Ken; Maxwell, Scott E.

2008-01-01

332

Optimal averaging of seasonal sea surface temperatures and associated confidence intervals (1860-1989)  

Microsoft Academic Search

Optimal averaging (OA) is used to compute the area-average seasonal sea surface temperature (SST) for a variety of areas from 1860 to 1989. The OA gives statistically improved averages and the objective assignment of confidence intervals to these averages. For the global average, the early part of the record (1860-1890) and the times of the two world wars have largest

Thomas M. Smith; C. F. Ropelewski; R. W. Reynolds

1994-01-01

333

Average observational quantities in the timescape cosmology  

SciTech Connect

We examine the properties of a recently proposed observationally viable alternative to homogeneous cosmology with smooth dark energy, the timescape cosmology. In the timescape model cosmic acceleration is realized as an apparent effect related to the calibration of clocks and rods of observers in bound systems relative to volume-average observers in an inhomogeneous geometry in ordinary general relativity. The model is based on an exact solution to a Buchert average of the Einstein equations with backreaction. The present paper examines a number of observational tests which will enable the timescape model to be distinguished from homogeneous cosmologies with a cosmological constant or other smooth dark energy, in current and future generations of dark energy experiments. Predictions are presented for comoving distance measures; H(z); the equivalent of the dark energy equation of state, w(z); the Om(z) measure of Sahni, Shafieloo, and Starobinsky; the Alcock-Paczynski test; the baryon acoustic oscillation measure, D_V; the inhomogeneity test of Clarkson, Bassett, and Lu; and the time drift of cosmological redshifts. Where possible, the predictions are compared to recent independent studies of similar measures in homogeneous cosmologies with dark energy. Three separate tests with indications of results in possible tension with the ΛCDM model are found to be consistent with the expectations of the timescape cosmology.

Wiltshire, David L. [Department of Physics and Astronomy, University of Canterbury, Private Bag 4800, Christchurch 8140, New Zealand, and International Center for Relativistic Astrophysics Network (ICRANet), Piazzale le della Repubblica 10, Pescara 65121 (Italy)

2009-12-15

334

Analytical spatially-averaged photochemical nonsteady state urban ozone model  

Microsoft Academic Search

An Analytical Spatially-averaged Photochemical (ASP) air quality model is presented for predicting daytime time-dependent volume-averaged ozone concentrations in urban areas. The ASP model is designed for screening air pollution control strategies, particularly in urban areas in developing countries where data, computer, and monetary constraints preclude using complex advection-diffusion-transformation airshed models. The ASP model is an analytical time-dependent box model, developed

1984-01-01

335

RHIC BPM system average orbit calculations  

SciTech Connect

RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this has provided an average orbit signal quality improvement, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and performance with beam.
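
A small sketch of the kind of turn-by-turn averaging described above: block-averaging a simulated BPM position signal that carries a ~10 Hz perturbation over a window matching a whole perturbation period suppresses the ripple in the reported average orbit. The revolution frequency and modulation amplitude are illustrative, not RHIC parameters from the paper:

    import numpy as np

    f_rev = 78e3          # assumed revolution frequency, Hz (illustrative)
    f_pert = 10.0         # ~10 Hz orbit perturbation
    turns = np.arange(200000)
    position = 1.0 + 0.2 * np.sin(2 * np.pi * f_pert * turns / f_rev)   # mm, closed orbit plus ripple

    def averaged_orbit(n_turns):
        # Block-average the turn-by-turn position over windows of n_turns turns.
        blocks = position[: (len(position) // n_turns) * n_turns].reshape(-1, n_turns)
        return blocks.mean(axis=1)

    print(averaged_orbit(10000).std())                # 10000 turns: residual 10 Hz ripple remains
    print(averaged_orbit(int(f_rev / f_pert)).std())  # one full perturbation period: ripple suppressed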

Michnoff, R.; Cerniglia, P.; Degen, C.; Hulsart, R.; et al.

2009-05-04

336

Demonstration of the Application of Composite Load Spectra (CLS) and Probabilistic Structural Analysis (PSAM) Codes to SSME Heat Exchanger Turnaround Vane  

NASA Technical Reports Server (NTRS)

This report describes a probabilistic structural analysis performed to determine the probabilistic structural response under fluctuating random pressure loads for the Space Shuttle Main Engine (SSME) turnaround vane. It uses a newly developed frequency and distance dependent correlation model that has features to model the decay phenomena along the flow and across the flow with the capability to introduce a phase delay. The analytical results are compared using two computer codes, SAFER (Spectral Analysis of Finite Element Responses) and NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), and with experimentally observed strain gage data. The computer code NESSUS with an interface to a subset of the Composite Load Spectra (CLS) code is used for the probabilistic analysis. A fatigue code was used to calculate fatigue damage due to the random pressure excitation. The random variables modeled include engine system primitive variables that influence the operating conditions, convection velocity coefficient, stress concentration factor, structural damping, and thickness of the inner and outer vanes. The need for an appropriate correlation model in addition to the magnitude of the PSD is emphasized. The study demonstrates that correlation characteristics, even under random pressure loads, are capable of causing resonance-like effects for some modes. The study identifies the important variables that contribute to the structural alternate stress response and drive the fatigue damage for the new design. Since the alternate stress for the new redesign is less than the endurance limit for the material, the damage due to high-cycle fatigue is negligible.

Rajagopal, Kadambi R.; DebChaudhury, Amitabha; Orient, George

2000-01-01

337

The Scatter in Tropical Average Precipitation Anomalies  

Microsoft Academic Search

Tropical mean precipitation anomalies ⟨P′⟩ for 3-month averages appear quite scattered in relation to tropical average sea surface temperature (SST) anomalies ⟨T′⟩, based on examination of a number of observational datasets and of atmospheric general circulation model (GCM) results. Even though SST is locally important for determining precipitation, for a given warm SST anomaly, the tropical average

Hui Su; J. David Neelin

2003-01-01

338

Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?  

SciTech Connect

Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.

Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.

2013-06-17

339

Probability Distributions for the Refractive Index Structure Parameter and the Inner Scale of Turbulence and Their Implications for Flux Averaging.  

National Technical Information Service (NTIS)

Defining the averaging time required for measuring meaningful turbulence statistics is a central problem in boundary-layer meteorology. Path-averaging scintillation instruments are presumed to confer some time-averaging benefits when the objective is to ...

E. L. Andrea; C. W. Fairall; P. O. Persson; P. S. Guest

2003-01-01

340

Average consensus problems in networks of agents with delayed communications  

Microsoft Academic Search

The present paper is devoted to the study of average consensus problems for undirected networks of dynamic agents having communication delays. The accent is put here on the study of the time-delays influence: both constant and time-varying delays are considered, as well as uniform and non uniform repartitions of the delays in the network. The main results provide sufficient conditions

Pierre-Alexandre Bliman; Giancarlo Ferrari-Trecate

2005-01-01

341

Average consensus problems in networks of agents with delayed communications  

Microsoft Academic Search

The present paper is devoted to the study of average consensus problems for undirected networks of dynamic agents having communication delays. By focusing on agents with integrator dynamics, the accent is put here on the study of the time-delays influence: both constant and time-varying delays are considered, as well as uniform and non uniform repartitions of the delays in the

Pierre-alexandre Bliman; Giancarlo Ferrari-trecate

2008-01-01

342

Crime and arrests: An autoregressive integrated moving average (ARIMA) approach  

Microsoft Academic Search

Various theoretical perspectives suggest that marginal changes in the quantity of crime and arrests are related to one another. Unfortunately, they provide little guidance as to the amount of time that is required for these effects to be realized. In this paper, autoregressive integrated moving average (ARIMA) time-series modeling techniques, which necessitate making minimal assumptions concerning the lag structure one

Mitchell B. Chamlin

1988-01-01

343

Averaging underwater noise levels for environmental assessment of shipping.  

PubMed

Rising underwater noise levels from shipping have raised concerns regarding chronic impacts to marine fauna. However, there is a lack of consensus over how to average local shipping noise levels for environmental impact assessment. This paper addresses this issue using 110 days of continuous data recorded in the Strait of Georgia, Canada. Probability densities of ~10^7 1-s samples in selected 1/3 octave bands were approximately stationary across one-month subsamples. Median and mode levels varied with averaging time. Mean sound pressure levels averaged in linear space, though susceptible to strong bias from outliers, are most relevant to cumulative impact assessment metrics. PMID:23039575
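
A minimal sketch contrasting two of the statistics discussed above for decibel data: the mean computed in linear space (convert to squared-pressure units, average, convert back to dB) versus the median, using illustrative sound pressure levels that include one loud outlier:

    import numpy as np

    spl_db = np.array([100.0, 102.0, 101.0, 99.0, 130.0])   # illustrative 1-s SPL samples, dB; one ship pass

    linear_mean_db = 10 * np.log10(np.mean(10 ** (spl_db / 10)))  # mean of squared pressures, converted back to dB
    median_db = np.median(spl_db)
    print(linear_mean_db, median_db)   # the linear-space mean is pulled strongly toward the outlier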

Merchant, Nathan D; Blondel, Philippe; Dakin, D Tom; Dorocicz, John

2012-10-01

344

Neutron resonance averaging with filtered beams  

SciTech Connect

Neutron resonance averaging using filtered beams from a reactor source has proven to be an effective nuclear structure tool within certain limitations. These limitations are imposed by the nature of the averaging process, which produces fluctuations in radiative intensities. The fluctuations have been studied quantitatively. Resonance averaging also gives us information about initial or capture state parameters, in particular the photon strength function. Suitable modifications of the filtered beams are suggested for the enhancement of non-resonant processes.

Chrien, R.E.

1985-01-01

345

High average power active-mirror amplifier  

NASA Astrophysics Data System (ADS)

Operation of the first high average power Nd:glass active-mirror amplifier, a scalable laser device that may be used to configure solid-state laser systems with high average power output into the kilowatt regime, is reported. An extractable average power of over 120 W was achieved at the device laser material fracture limit, and at a repetition rate of 5 Hz.

Brown, D. C.; Lee, K. K.; Bowman, R.; Menders, J.; Kuper, J.

1986-03-01

346

Averaging in GR using Cartan scalars  

NASA Astrophysics Data System (ADS)

The averaging problem in GR and cosmology is of fundamental importance. It is still not clear how to unambiguously average the Einstein equations and the metric tensor (despite some promising attempts). Here we present a new approach to this problem using the theory of the Cartan scalars. After a short review of the theory, originally taken from the equivalence problem, the averaging procedure is presented and some examples are given.

Kašpar, Petr; Svítek, Otakar

2012-07-01

347

Bayes model averaging with selection of regressors  

Microsoft Academic Search

When a number of distinct models contend for use in prediction, the choice of a single model can offer rather unstable predictions. In regression, stochastic search variable selection with Bayesian model averaging offers a cure for this robustness issue but at the expense of requiring very many predictors. Here we look at Bayes model averaging incorporating variable selection for prediction.

P. J. Brown; M. Vannucci; T. Fearn

2002-01-01

348

Average Transmission Probability of a Random Stack  

ERIC Educational Resources Information Center

The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…

Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

2010-01-01

349

Averages in vector spaces over finite fields  

Microsoft Academic Search

We study the analogues of the problems of averages and maximal averages over a surface in R^n when the Euclidean structure is replaced by that of a vector space over a finite field, and obtain optimal results in a number of model cases.

ANTHONY CARBERY; BRENDAN STONES; JAMES WRIGHT

2008-01-01

350

Whatever Happened to the Average Student?  

ERIC Educational Resources Information Center

Mandated state testing, college entrance exams and their perceived need for higher and higher grade point averages have raised the anxiety levels felt by many of the average students. Too much focus is placed on state test scores and college entrance standards with not enough focus on the true level of the students. The author contends that…

Krause, Tom

2005-01-01

351

Moving average process underlying the holographic-optical-tweezers experiments  

NASA Astrophysics Data System (ADS)

We study the statistical properties of recordings that contain the time-dependent positions of a bead trapped in optical tweezers. Analysis of such a time series indicates that the commonly accepted model, i.e., the autoregressive process of first order, is not sufficient to fit the data. We show the presence of a first-order moving-average part in the dynamical model of the system. We explain the origin of this part as an influence of the high-frequency CCD camera on the measurements. The proposed autoregressive moving-average model appears to reflect perfectly all statistical features of the high-frequency recording data.
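
A hedged sketch of the kind of model comparison described above, fitting AR(1) and ARMA(1,1) models to a synthetic position series and comparing them by AIC; it uses the statsmodels ARIMA class and synthetic data, not the authors' recordings:

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(1)
    # Synthetic ARMA(1,1)-like series standing in for a bead-position recording.
    e = rng.standard_normal(5000)
    x = np.zeros(5000)
    for t in range(1, 5000):
        x[t] = 0.9 * x[t - 1] + e[t] + 0.4 * e[t - 1]

    ar1 = ARIMA(x, order=(1, 0, 0)).fit()     # autoregressive model of first order
    arma11 = ARIMA(x, order=(1, 0, 1)).fit()  # adds the first-order moving-average part
    print(ar1.aic, arma11.aic)                # lower AIC favors the ARMA(1,1) description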

Ślęzak, Jakub; Drobczyński, Sławomir; Weron, Karina; Masajada, Jan

2013-12-01

352

Averaging in cosmology based on Cartan scalars  

NASA Astrophysics Data System (ADS)

We present a new approach for averaging in general relativity and cosmology. After a short review of the theory, originally taken from the equivalence problem, we consider two ways of dealing with averaging based on Cartan scalars. We apply the theory to two different Lemaître-Tolman-Bondi models. In the first one, the correlation term behaves as a positive cosmological constant; in the second example, the leading correlation term behaves like spatial curvature. We also show the non-triviality of averaging for a linearized monochromatic gravitational wave.

Kašpar, P.; Svítek, O.

2014-05-01

353

Cosmic inhomogeneities and averaged cosmological dynamics.  

PubMed

If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics. PMID:18999811

Paranjape, Aseem; Singh, T P

2008-10-31

354

Generalization to stochastic averaging in random vibration.  

National Technical Information Service (NTIS)

Stochastic Averaging is applied to a class of randomly excited single- degree-of-freedom oscillators possessing linear damping and nonlinear stiffness terms. The assumed excitation form involves an externally applied evolutionary Gaussian stochastic proce...

J. R. Red-Horse

1992-01-01

355

40 CFR 86.449 - Averaging provisions.  

Code of Federal Regulations, 2013 CFR

...Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 ...certification averaging program. Include only motorcycles certified under this subpart and intended...for which you manufacture or import motorcycles. (d) Calculate your...

2013-07-01

356

Average Atom Model of Partial Ionization, Revisited.  

National Technical Information Service (NTIS)

The average atom model of partial ionization discussed by Zeldovich and Raizer provides an approximate solution to the Saha equation. As originally derived the model contains several approximations and ad hoc assumptions and violates one of the Maxwell eq...

M. J. Clauser

1975-01-01

357

Modeling Plants With Moving-Average Outputs  

NASA Technical Reports Server (NTRS)

Three discrete-state-variable representations are derived. The paper discusses mathematical modeling of digital control systems for plants in which outputs include combinations of instantaneous and moving-average-prefiltered measurements.

Polites, Michael E.

1989-01-01

358

Heuristic approach to capillary pressures averaging  

SciTech Connect

Several methods are available to average capillary pressure curves. Among these are the J-curve and regression equations of the wetting-fluid saturation in porosity and permeability (capillary pressure held constant). While the regression equations seem completely empirical, the J-curve method seems to be theoretically sound due to its expression based on a relation between the average capillary radius and the permeability-porosity ratio. An analysis is given of each of these methods.

Coca, B.P.

1980-10-01

359

Climatology of globally averaged thermospheric mass density  

Microsoft Academic Search

We present a climatological analysis of daily globally averaged density data, derived from orbit data and covering the years 1967–2007, along with an empirical Global Average Mass Density Model (GAMDM) that encapsulates the 1986–2007 data. The model represents density as a function of the F10.7 solar radio flux index, the day of year, and the Kp geomagnetic activity index. We

J. T. Emmert; J. M. Picone

2010-01-01

360

Applications of high average power nonlinear optics  

SciTech Connect

Nonlinear optical frequency convertors (harmonic generators and optical parametric oscillators) are reviewed with an emphasis on high average power performance and limitations. NLO materials issues and NLO device designs are discussed in reference to several emerging scientific, military and industrial commercial applications requiring ~100 watt average power levels in the visible and infrared spectral regions. Research efforts required to enable practical ~100 watt class NLO based laser systems are identified.

Velsko, S.P.; Krupke, W.F.

1996-02-05

361

On the average genus of a graph  

Microsoft Academic Search

Not all rational numbers are possibilities for the average genus of an individual graph. The smallest such numbers are determined, and varied examples are constructed to demonstrate that a single value of average genus can be shared by arbitrarily many different graphs. It is proved that the number 1 is a limit point of the set of possible values for

Jonathan L. Gross; E. Ward Klein; Robert G. Rieper

1993-01-01

362

Radial averages of astigmatic TEM images.  

PubMed

The Contrast Transfer Function (CTF) of an image, which modulates images taken from a Transmission Electron Microscope (TEM), is usually determined from the radial average of the power spectrum of the image (Frank, J., Three-dimensional Electron Microscopy of Macromolecular Assemblies, Oxford University Press, Oxford, 2006). The CTF is primarily defined by the defocus. If the defocus estimate is accurate enough then it is possible to demodulate the image, which is popularly known as the CTF correction. However, it is known that the radial average is somewhat attenuated if the image is astigmatic (see Fernando, K.V., Fuller, S.D., 2007. Determination of astigmatism in TEM images. Journal of Structural Biology 157, 189-200) but this distortion due to astigmatism has not been fully studied or understood up to now. We have discovered the exact mathematical relationship between the radial averages of TEM images with and without astigmatism. This relationship is determined by a zeroth order Bessel function of the first kind and hence we can exactly quantify this distortion in the radial averages of signal and power spectra of astigmatic images. The argument to this Bessel function is similar to an aberration function (without the spherical aberration term) except that the defocus parameter is replaced by the differences of the defoci in the major and minor axes of astigmatism. The ill effects due this Bessel function are twofold. Since the zeroth order Bessel function is a decaying oscillatory function, it introduces additional zeros to the radial average and it also attenuates the CTF signal in the radial averages. Using our analysis, it is possible to simulate the effects of astigmatism in radial averages by imposing Bessel functions on idealized radial averages of images which are not astigmatic. We validate our theory using astigmatic TEM images. PMID:18662790
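
A minimal numerical sketch of the distortion described above: an idealized, non-astigmatic radially averaged CTF is multiplied by a zeroth-order Bessel function whose argument uses the defocus difference between the astigmatism axes. The microscope parameters and the exact prefactor in the Bessel argument are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.special import j0   # zeroth-order Bessel function of the first kind

# Illustrative parameters (assumptions, not values from the paper)
wavelength = 1.97e-3   # electron wavelength in nm (roughly 300 kV)
defocus    = 1500.0    # mean defocus in nm
d_delta    = 500.0     # defocus difference between the astigmatism axes, nm

s = np.linspace(0.0, 0.6, 600)                 # spatial frequency, 1/nm

# Idealized radially averaged CTF of a non-astigmatic image
# (phase contrast only; spherical aberration neglected for brevity)
gamma_mean = np.pi * wavelength * defocus * s**2
ctf_ideal = -np.sin(gamma_mean)

# Astigmatism enters the radial average as a multiplicative J0 envelope whose
# argument mirrors the aberration function but with the defocus replaced by the
# defocus difference (the 1/2 prefactor here is an illustrative choice).
envelope = j0(0.5 * np.pi * wavelength * d_delta * s**2)
ctf_astig_radial_avg = ctf_ideal * envelope

print("envelope attenuation at the highest plotted frequency:",
      round(1.0 - abs(envelope[-1]), 3))
```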

Fernando, K Vince

2008-10-01

363

INVERSIONS FOR AVERAGE SUPERGRANULAR FLOWS USING FINITE-FREQUENCY KERNELS  

SciTech Connect

I analyze the maps recording the travel-time shifts caused by averaged plasma anomalies under an 'average supergranule', constructed by means of statistical averaging over 5582 individual supergranules with large divergence signals detected in two months of Helioseismic and Magnetic Imager Dopplergrams. By utilizing a three-dimensional validated time-distance inversion code, I measure a peak vertical velocity of 117 ± 2 m s⁻¹ at depths around 1.2 Mm in the center of the supergranule and a root-mean-square vertical velocity of 21 m s⁻¹ over the area of the supergranule. A discrepancy between this measurement and the measured surface vertical velocity (a few m s⁻¹) can be explained by the existence of the large-amplitude vertical flow under the surface of supergranules with large divergence signals, recently suggested by Duvall and Hanasoge.

Svanda, Michal, E-mail: michal@astronomie.cz [Astronomical Institute, Academy of Sciences of the Czech Republic (v.v.i.), Fricova 298, CZ-25165 Ondrejov (Czech Republic)

2012-11-10

364

Turnarounds Central Issue under NCLB  

ERIC Educational Resources Information Center

For all the debate over the effectiveness of the No Child Left Behind Act, researchers and policymakers say that, despite the law's flaws, it has successfully identified 1,200 public schools that need help, some of them desperately so. Now the question is: How can the federal law be changed to make sure such schools get that help? As Congress…

Hoff, David J.

2007-01-01

365

Turnaround Team Racing Summer's Clock  

ERIC Educational Resources Information Center

A few weeks before a new team of teachers was to report to the former Shawnee High School in Louisville, Kentucky, Principal Keith Look discovered the master schedule for the 2010-11 school year to be in total disarray. More than 100 Shawnee students--all of them juniors and seniors--had been enrolled in classes with no connection to the credits…

Maxwell, Lesli A.

2010-01-01

366

Districts Created to Steer "Turnarounds"  

ERIC Educational Resources Information Center

If the job of a traditional superintendent is hard, imagine the complexities involved in building a school system from scratch--especially one composed of schools with some of the most intractable educational challenges. That's the task facing education leaders in Michigan and Tennessee, which are building special districts to take over…

Samuels, Christina A.

2011-01-01

367

An Offensive Earned-Run Average for Baseball  

Microsoft Academic Search

This paper studies a baseball statistic that plays the role of an offensive earned-run average (OERA). The OERA of an individual is simply the number of earned runs per game that he would score if he batted in all nine positions in the line-up. Evaluation can be performed by hand by scoring the sequence of times at bat of

THOMAS M. COVER; CARROLL W. KEILERS

1977-01-01

368

State-Variable Representations For Moving-Average Sampling  

NASA Technical Reports Server (NTRS)

Two state-variable representations derived for continuous-time plant driven by control algorithm including zero-order hold and measurements sampled at multiple rates by multiple-input/multiple-output moving-average processes. New representations enhance observability and controllability of plant. Applications include mathematical modeling of navigation systems including star trackers, gyroscopes, and accelerometers.

Polites, Michael E.

1991-01-01

369

Averaging battery model for a lead-acid battery operating in an electric car  

Microsoft Academic Search

Computer programs used to calculate the performance of an electric vehicle must contain a battery model. A simple mathematical model is developed and evaluated. The model is based on time averaged current or power required from a battery during the operation of the electric vehicle. The time averaging technique used accounts for time varying discharge rates, rest times, and the

Bozek

1979-01-01

370

When Is the Local Average Treatment Close to the Average? Evidence from Fertility and Labor Supply  

ERIC Educational Resources Information Center

The local average treatment effect (LATE) may differ from the average treatment effect (ATE) when those influenced by the instrument are not representative of the overall population. Heterogeneity in treatment effects may imply that parameter estimates from 2SLS are uninformative regarding the average treatment effect, motivating a search for…

Ebenstein, Avraham

2009-01-01

371

Determination of the average lifetime of b-baryons  

Microsoft Academic Search

The average lifetime of b-baryons has been studied using 3 × 10^6 hadronic Z^0 decays collected by the DELPHI detector at LEP. Three methods have been used, based on the measurement of different observables: the proper decay time distribution of 206 vertices reconstructed with a Λ, a lepton and an oppositely charged pion; the impact parameter distribution of 441 muons

P. Abreu; W. Adam; T. Adye; E. Agasi; I. Ajinenko; Roy Aleksan; G. D. Alekseev; R. Alemany; P. P. Allport; S. Almehed; Ugo Amaldi; S. Amato; A. Andreazza; M. L. Andrieux; P. Antilogus; W. D. Apel; Y. Arnoud; B Åsman; J. E. Augustin; A. Augustinus; Paul Baillon; P. Bambade; F. Barao; R. Barate; M S Barbi; Dimitri Yuri Bardin; A. Baroncelli; O. Barring; J. A. Barrio; Walter Bartl; M. J. Bates; Marco Battaglia; M. Baubillier; J. Baudot; K. H. Becks; M. Begalli; P. Beilliere; Yu A Belokopytov; K S Belous; Alberto C Benvenuti; M. Berggren; D. Bertrand; F. Bianchi; M. Bigi; S M Bilenky; P. Billoir; D. Bloch; M. Blume; S. Blyth; T. Bolognese; M. Bonesini; W. Bonivento; P. S. L. Booth; G. Borisov; C. Bosio; S. Bosworth; O. Botner; E. Boudinov; B. Bouquet; C. Bourdarios; T. J. V. Bowcock; M. Bozzo; P. Branchini; K. D. Brand; T. Brenke; R. A. Brenner; C. Bricman; L. Brillault; R. C. A. Brown; P. Bruckman; J. M. Brunet; L. Bugge; T. Buran; T. Burgsmueller; P. Buschmann; A. Buys; S. Cabrera; M. Caccia; M. Calvi; A J Camacho-Rozas; T. Camporesi; V. Canale; M. Canepa; K. Cankocak; F. Cao; F. Carena; L. Carroll; Carlo Caso; M V Castillo-Gimenez; A. Cattai; F. R. Cavallo; L. Cerrito; V. Chabaud; M M Chapkin; Ph. Charpentier; L. Chaussard; J. Chauveau; P. Checchia; G. A. Chelkov; M. Chen; R. Chierici; P V Chliapnikov; P. Chochula; V. Chorowicz; J. Chudoba; V. Cindro; P. Collins; J. L. Contreras; R. Contri; E. Cortina; G. Cosme; F. Cossutti; H. B. Crawley; D J Crennell; G. Crosetti; J Cuevas-Maestro; S. Czellar; Erik Dahl-Jensen; J. Dahm; B D'Almagne; M. Dam; G. Damgaard; P. D. Dauncey; Martyn Davenport; W Da Silva; C. Defoix; A. Deghorain; G Della Ricca; P A Delpierre; N. Demaria; A De Angelis; Wim de Boer; S De Brabandere; C De Clercq; C de La Vaissière; B De Lotto; A De Min; L S De Paula; C De Saint-Jean; H. Dijkstra; Lucia Di Ciaccio; F. Djama; J. Dolbeau; M. Donszelmann; K. Doroba; M. Dracos; J. Drees; K. A. Drees; M. Dris; Y. Dufour; D M Edsall; R. Ehret; G. Eigen; T J C Ekelöf; Gösta Ekspong; M. Elsing; J. P. Engel; N. Ershaidat; B. Erzen; E. Falk; D. Fassouliotis; Michael Feindt; A. Ferrer; A Filippas-Tassos; A. Firestone; P. A. Fischer; H. Foeth; E. Fokitis; F. Fontanelli; F. Formenti; B J Franek; P. Frenkiel; D E C Fries; A. G. Frodesen; R. Fruhwirth; F. Fulda-Quenzer; J A Fuster; A. Galloni; D. Gamba; M. Gandelman; C. Garcia; J. Garcia; C. Gaspar; U. Gasparini; Ph. Gavillet; E. N. Gazis; D. Gele; J. P. Gerber; L N Gerdyukov; M. Gibbs; R. Gokieli; B. Golob; Gian P Gopal; L. Gorn; M. Gorski; Yu. Gouz; Valerio Gracco; E. Graziani; G. Grosdidier; K. Grzelak; S A Gumenyuk; P. Gunnarsson; M. Gunther; J. Guy; F. Hahn; S. Hahn; Z. Hajduk; A. Hallgren; K. Hamacher; W. Hao; F. J. Harris; V. Hedberg; R P Henriques; J. J. Hernandez; P. Herquet; H. Herr; T. L. Hessing; E. Higon; Hans Jürgen Hilke; T. S. Hill; S. O. Holmgren; P. J. Holt; D J Holthuizen; S. Hoorelbeke; M A Houlden; Josef Hrubec; K. Huet; K. Hultqvist; J. N. Jackson; R. Jacobsson; P. Jalocha; R. Janik; Ch. Jarlskog; G. Jarlskog; P. Jarry; B. Jean-Marie; E. K. Johansson; L B Jönsson; P E Jönsson; Christian Joram; P. Juillot; M. Kaiser; F. Kapusta; K. Karafasoulis; M. Karlsson; E. Karvelas; S. Katsanevas; E. C. Katsoufis; R. Keranen; Yu A Khokhlov; B. A. Khomenko; N N Khovanskii; B J King; N. J. Kjaer; H. Klein; A. Klovning; P M Kluit; B. Koene; P. Kokkinias; M. Koratzinos; K. Korcyl; C. Kourkoumelis; O. Kouznetsov; P. H. Kramer; Manfred Krammer; C. Kreuter; I J Kronkvist; Z Krumshtein; W. Krupinski; P. Kubinec; W. Kucewicz; K L Kurvinen; C. 
Lacasta; I. Laktineh; S. Lamblot; J. W. Lamsa; L. Lanceri; P. Langefeld; I. Last; J. P. Laugier; R. Lauhakangas; Gerhard Leder; F. Ledroit; V. Lefebure; C. K. Legan; R. Leitner; Y. Lemoigne; J. Lemonne; Georg Lenzen; V. Lepeltier; T. Lesiak; D. Liko; R. Lindner; A. Lipniacka; I. Lippi; B. Loerstad; J. G. Loken; J. M. Lopez; D. Loukas; P. Lutz; L. Lyons; J N MacNaughton; G. Maehlum; A. Maio; V. Malychev; F. Mandl; J. Marco; R P Marco; B. Marechal; M. Margoni; J. C. Marin; C. Mariotti; A. Markou; T. Maron; C. Martinez-Rivero; F. Martinez-Vidal; S. Marti i Garcia; J. Masik; F. Matorras; C. Matteuzzi; Giorgio Matthiae; M. Mazzucato; M L McCubbin; R McKay; R McNulty; J. Medbo; M. Merk; C. Meroni; S. Meyer; W. T. Meyer; M. Michelotto; E. Migliore; L. Mirabito; Winfried A Mitaroff; U. Mjoernmark; T. Moa; R. Moeller; K. Moenig; M. R. Monge; P. Morettini; H. Mueller; L. M. Mundim; W. J. Murray; B. Muryn; Gerald Myatt; F. Naraghi; Francesco Luigi Navarria; S. Navas; K. Nawrocki; P. Negri; S. Nemecek; W. Neumann; N. Neumeister; R. Nicolaidou; B. S. Nielsen; M. Nieuwenhuizen; V. Nikolaenko; P. Niss; A. Nomerotski; Ainsley Normand; W. Oberschulte-Beckmann; V F Obraztsov; A G Olshevskii; A. Onofre; Risto Orava; K Österberg; A. Ouraou; P. Paganini; M. Paganoni; P. Pages

1996-01-01

372

Benchmarking statistical averaging of spectra with HULLAC  

NASA Astrophysics Data System (ADS)

Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

Klapisch, Marcel; Busquet, Michel

2008-11-01

373

High Average Power Yb:YAG Laser  

SciTech Connect

We are working on a composite thin-disk laser design that can be scaled as a source of high brightness laser power for tactical engagement and other high average power applications. The key component is a diffusion-bonded composite comprising a thin gain-medium and thicker cladding that is strikingly robust and resolves prior difficulties with high average power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high power rods or slabs, the one-dimensional nature of the cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress this last year included: extraction with high beam quality using a telescopic resonator, a heterogeneous thin film coating prescription that meets the unusual requirements demanded by this laser architecture, thermal management with our first generation cooler. Progress was also made in design of a second-generation laser.

Zapata, L E; Beach, R J; Payne, S A

2001-05-23

374

Perceiving the average hue of color arrays  

PubMed Central

The average of a color distribution has special significance for color coding (e.g. to estimate the illuminant) but how it depends on the visual representation (e.g. perceptual vs. cone-opponent) or nonlinearities (e.g. categorical coding) is unknown. We measured the perceived average of two colors shown alternated in spatial arrays. Observers adjusted the components until the average equaled a specified reference hue. Matches for red, blue-red, or yellow-green were consistent with the arithmetic mean chromaticity, while blue-green settings deviated toward blue. The settings show little evidence for categorical coding, and cannot be predicted from the scaled appearances of the individual components.

Webster, Jacquelyn; Kay, Paul; Webster, Michael A.

2014-01-01

375

Polarized electron beams at milliampere average current  

SciTech Connect

This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 µA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltages > 350 kV.

Poelker, Matthew [JLAB]

2013-11-01

376

On the Average Pairing Energy in Nuclei  

NASA Astrophysics Data System (ADS)

The macroscopic-microscopic method is applied to calculate the nuclear energies, especially the microscopic shell and pairing corrections. The single-particle levels are obtained with the Yukawa folded mean-field potential. The macroscopic energy is evaluated using the Lublin-Strasbourg Drop model. The shell corrections are obtained using the Strutinsky method with smoothing in nucleon number space. The average nuclear pairing energy is also determined by folding the BCS sums in nucleon number space. The average pairing energy dependence on the nuclear elongation is investigated.

Nerlo-Pomorska, Bożena; Pomorski, Krzysztof

377

The average tangential electric field at the noon magnetopause  

NASA Technical Reports Server (NTRS)

Two hundred and five magnetopause passes of the ISEE 1 satellite through the magnetopause within 2 hrs of local noon during the time period 1977-1981 were examined in order to determine the relationship between the dawn-dusk tangential electric field at the magnetopause and the northward component of the magnetosheath magnetic field; the ratio of these quantities yields the speed of the plasma flow toward the magnetopause (i.e., the reconnection flow speed). The results indicate that, on the average, reconnection occurs at the subsolar magnetopause, and that the average reconnection flow speed of magnetosheath plasma toward the magnetopause is about 15 percent of the local Alfven speed.

Lindqvist, P.-A.; Mozer, F. S.

1990-01-01

378

Boundedness and Dimension for Weighted Average Functions.  

National Technical Information Service (NTIS)

The paper considers a weighted average property of the type u(x_0) = (∫_B u w dx)/(∫_B w dx), where B is a ball in E^n with center x_0. A lemma constructing such functions is presented from which it follows th...

D. P. Stanford

1969-01-01

379

Average Values of Mean Squares in Factorials  

Microsoft Academic Search

The assumptions appropriate to the application of analysis of variance to specific examples, and the effects of these assumptions on the resulting interpretations, are today a matter of very active discussion. Formulas for average values of mean squares play a central role in this problem, as do assumptions about interactions. This paper presents formulas for crossed (and, incidentally, for nested

Jerome Cornfield; John W. Tukey

1956-01-01

380

Science of NHL Hockey: Statistics & Averages  

NSDL National Science Digital Library

Being a top goalie in the NHL takes more than quick reflexes and nerves of steel; it also requires a firm grip on the numbers, namely the key averages and statistics of goaltending. "Science of NHL Hockey" is a 10-part video series produced in partnership with the National Science Foundation and the National Hockey League.

Learn, Nbc

2010-10-07

381

Model averaging, optimal inference, and habit formation  

PubMed Central

Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior.
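
A minimal sketch of the evidence-weighted averaging idea itself (toy data and a crude Gaussian stand-in for the model evidence; nothing here reflects the authors' proposed neuronal implementation):

```python
import numpy as np

def log_evidence_gaussian(y, y_pred, sigma):
    """Log likelihood of data y under a model's predictive Gaussian with fixed
    noise sigma (a crude stand-in for a proper marginal-likelihood computation)."""
    n = len(y)
    return -0.5 * np.sum((y - y_pred) ** 2) / sigma**2 - n * np.log(sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = 2.0 * x + rng.normal(0, 0.1, x.size)        # data secretly generated by a linear rule

# Two candidate models of the environment: linear and constant
pred_linear   = np.polyval(np.polyfit(x, y, 1), x)
pred_constant = np.full_like(x, y.mean())

log_ev = np.array([log_evidence_gaussian(y, p, 0.1) for p in (pred_linear, pred_constant)])
weights = np.exp(log_ev - log_ev.max())
weights /= weights.sum()                         # posterior model probabilities (uniform prior)

# Bayesian-model-averaged prediction: evidence-weighted mix of the models' predictions
bma_prediction = weights[0] * pred_linear + weights[1] * pred_constant
print("model weights (linear, constant):", np.round(weights, 3))
```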

FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.

2014-01-01

382

HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.  

SciTech Connect

Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

BEN-ZVI, ILAN; DAYRAN, D.; LITVINENKO, V.

2005-08-21

383

Initial Conditions in the Averaging Cognitive Model  

ERIC Educational Resources Information Center

The initial state parameters s[subscript 0] and w[subscript 0] are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…

Noventa, S.; Massidda, D.; Vidotto, G.

2010-01-01

384

Average Annual Rainfall Over the Globe  

NASA Astrophysics Data System (ADS)

The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74×10^17 J of solar radiation per second and it is divided over various channels as given in Table 1. It keeps our planet warm and maintains its average temperature of 288 K with the help of the atmosphere in such a way that life can survive. It also recycles the water in the oceans/rivers/lakes by initial evaporation and subsequent precipitation; the average annual rainfall over the globe is around one meter. According to M. King Hubbert, the amount of solar power going into the evaporation and precipitation channel is 4.0×10^16 W. Students can verify the value of average annual rainfall over the globe by utilizing this part of solar energy. This activity is described in the next section.

Agrawal, D. C.

2013-12-01

385

Methodology for Bayesian Model Averaging: An Update  

Microsoft Academic Search

The standard practice of selecting a single model from some class of models and then making inferences based on this model ignores model uncertainty. Ignoring model uncertainty can impair predictive performance and lead to overstatement of the strength of evidence via p-values that are too small. Bayesian model averaging provides a coherent approach for accounting for model uncertainty. A

Jennifer A. Hoeting

386

Estimation of areally-averaged surface fluxes  

NASA Astrophysics Data System (ADS)

The concept of blending height is used to estimate areally averaged surface fluxes of momentum and heat in a stratified, horizontally inhomogeneous surface-layer flow. This concept is based on the assumption that at sufficiently large heights above a heterogeneous surface, subsequent surface modifications will not be recognizable in the flow individually, but overall flux and mean profiles will represent the surface condition of a large area. The height at which the flow becomes approximately independent of horizontal position is called blending height according to Wieringa (1986). Here, it is proposed to classify the ground surface in a surface-layer grid box of a larger-scale model into several land-use categories. Surface momentum and heat fluxes should be estimated for each category at the blending height. The grid-averaged surface fluxes are to be obtained by the average of surface fluxes on each land-use surface weighted by its fractional area. The postulate of computing the surface fluxes at the blending height leads to a new formulation of turbulent transfer coefficients. The proposed parameterization has been tested by employing a small-scale numerical model as a surface-layer grid box of a hypothesized larger-scale model. Several quite different flow configurations have been studied in order to investigate the performance of the new parameterization. Generally, the relative errors of estimated averaged surface fluxes are found to be well within ±10%.
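
The grid-averaging step reduces to an area-weighted mean of the per-category fluxes evaluated at the blending height. A minimal sketch with hypothetical land-use categories and illustrative flux values:

```python
import numpy as np

# Hypothetical land-use categories within one surface-layer grid box:
# fractional area, and surface fluxes of momentum (tau) and heat (H) evaluated
# for each category at the blending height (values are illustrative only).
categories = [
    {"name": "forest",   "fraction": 0.35, "tau": 0.42, "H": 120.0},
    {"name": "cropland", "fraction": 0.50, "tau": 0.18, "H":  60.0},
    {"name": "water",    "fraction": 0.15, "tau": 0.08, "H":  15.0},
]

fractions = np.array([c["fraction"] for c in categories])
assert abs(fractions.sum() - 1.0) < 1e-9, "fractional areas must sum to one"

# Grid-averaged surface fluxes: fluxes on each land-use surface weighted by
# the fractional area it occupies in the grid box.
tau_avg = np.sum(fractions * np.array([c["tau"] for c in categories]))
H_avg   = np.sum(fractions * np.array([c["H"]   for c in categories]))

print(f"grid-averaged momentum flux: {tau_avg:.3f} N m^-2")
print(f"grid-averaged heat flux:     {H_avg:.1f} W m^-2")
```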

Claussen, Martin

1991-03-01

387

Profile of Hired Farmworkers, 1998 Annual Averages.  

National Technical Information Service (NTIS)

An average of 875,000 persons 15 years of age and older did hired farmwork each week as their primary job in 1998. An additional 63,000 people did hired farmwork each week as their secondary job. Hired farmworkers were more likely than the typical U.S. wa...

J. L. Runyan

2000-01-01

388

Why Johnny Can Be Average Today.  

ERIC Educational Resources Information Center

During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…

Sturrock, Alan

1997-01-01

389

Distributed averaging on asynchronous communication networks  

Microsoft Academic Search

Distributed algorithms for averaging have attracted interest in the control and sensing literature. However, previous works have not addressed some practical concerns that will arise in actual implementations on packet-switched communication networks such as the Internet. In this paper, we present several implementable algorithms that are robust to asynchronism and dynamic topology changes. The algorithms do not require
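
A minimal sketch of distributed averaging in this spirit: a randomized, asynchronous pairwise-gossip update on a fixed ring topology. The update rule and topology are illustrative and are not the specific algorithms of the paper.

```python
import random

# Ring topology over 6 nodes; each node starts with a local value.
neighbors = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
values = [3.0, 7.0, 1.0, 9.0, 4.0, 6.0]
target = sum(values) / len(values)

random.seed(1)
for step in range(2000):
    # Asynchronous update: one node wakes up, contacts one random neighbor,
    # and the pair replaces both values with their mutual average.
    i = random.randrange(6)
    j = random.choice(neighbors[i])
    avg = 0.5 * (values[i] + values[j])
    values[i] = values[j] = avg

# Each pairwise exchange preserves the sum, so the nodes converge to the true average.
print("true average:     ", target)
print("converged values: ", [round(v, 4) for v in values])
```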

M. Mehyar; D. Spanos; J. Pongsajapan; S. H. Low; R. M. Murray

2005-01-01

390

Why do Noninvertible Estimated Moving Averages Occur.  

National Technical Information Service (NTIS)

The positive probability that an estimated moving average process is noninvertible is studied for maximum likelihood estimation of a univariate process. Upper and lower bounds for the probability in the first-order case are obtained as well as limits when...

A. Takemura; T. W. Anderson

1984-01-01

391

The Average Height of Planted Plane Trees.  

National Technical Information Service (NTIS)

An asymptotic expression for the average height of a planted plane tree is derived, based on an asymptotic series for sums such as ∑_{k≥1} C(2n, n+k) d(k) and ∑_{k≥1} e^(-k²/n) d(k), where...

D. E. Knuth; N. G. de Bruijn; S. O. Rice

1971-01-01

392

Average Annual Rainfall over the Globe  

ERIC Educational Resources Information Center

The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…

Agrawal, D. C.

2013-01-01

393

Moving average process underlying the holographic-optical-tweezers experiments.  

PubMed

We study the statistical properties of recordings that contain time-dependent positions of a bead trapped in optical tweezers. Analysis of such a time series indicates that the commonly accepted model, i.e., the autoregressive process of first-order, is not sufficient to fit the data. We show the presence of a first-order moving average part in the dynamical model of the system. We explain the origin of this part as an influence of the high-frequency CCD camera on the measurements. We show that this influence evidently depends on the applied exposure time. The proposed autoregressive moving average model appears to reflect perfectly all statistical features of the high-frequency recording data. PMID:24787213
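
A minimal sketch of the kind of model the analysis favors: an ARMA(1,1) process, i.e. a first-order autoregressive recursion with an added first-order moving-average term standing in for the camera-exposure effect. Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
phi, theta, sigma = 0.95, 0.4, 1.0   # AR(1), MA(1) and noise parameters (illustrative)

eps = rng.normal(0.0, sigma, n)
x = np.zeros(n)
for t in range(1, n):
    # ARMA(1,1): x_t = phi * x_{t-1} + eps_t + theta * eps_{t-1}
    x[t] = phi * x[t - 1] + eps[t] + theta * eps[t - 1]

def acf(series, lag):
    """Sample autocorrelation at a given positive lag."""
    s = series - series.mean()
    return np.dot(s[:-lag], s[lag:]) / np.dot(s, s)

# For a pure AR(1), acf(1) would already equal phi; the MA(1) term breaks that
# relation at lag 1 while higher lags still decay by a factor phi. Checking for
# this signature is one way to detect the moving-average part in recorded data.
print("acf lag1:", round(acf(x, 1), 3),
      " acf lag2 / acf lag1:", round(acf(x, 2) / acf(x, 1), 3))
```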

Ślęzak, Jakub; Drobczyński, Sławomir; Weron, Karina; Masajada, Jan

2014-04-01

394

Bounce-averaged kinetic equations and neoclassical polarization density  

SciTech Connect

The rigorous formulation of the bounce-averaged equations is presented based upon the Poincaré–Cartan one-form and Lie perturbation methods. The resulting bounce-averaged Vlasov equation is Hamiltonian, and is thus suitable for the self-consistent simulation of low frequency electrostatic turbulence in the trapped ion mode regime. In the bounce-kinetic Poisson equation, the "neoclassical polarization density" arises from the difference between the bounce-averaged banana center and real trapped particle densities across a field line. This representation of the neoclassical polarization drift as a shielding term provides a systematic way to study the long term behavior of the turbulence driven E×B flow. © 1999 American Institute of Physics.

Fong, B.H.; Hahm, T.S. [Princeton University Plasma Physics Laboratory, Princeton, New Jersey, 08543 (United States)]

1999-01-01

395

Evolution of the average avalanche shape with the universality class  

NASA Astrophysics Data System (ADS)

A multitude of systems ranging from the Barkhausen effect in ferromagnetic materials to plastic deformation and earthquakes respond to slow external driving by exhibiting intermittent, scale-free avalanche dynamics or crackling noise. The avalanches are power-law distributed in size, and have a typical average shape: these are the two most important signatures of avalanching systems. Here we show how the average avalanche shape evolves with the universality class of the avalanche dynamics by employing a combination of scaling theory, extensive numerical simulations and data from crack propagation experiments. It follows a simple scaling form parameterized by two numbers, the scaling exponent relating the average avalanche size to its duration and a parameter characterizing the temporal asymmetry of the avalanches. The latter reflects a broken time-reversal symmetry in the avalanche dynamics, emerging from the local nature of the interaction kernel mediating the avalanche dynamics.

Laurson, Lasse; Illa, Xavier; Santucci, Stéphane; Tore Tallakstad, Ken; Måløy, Knut Jørgen; Alava, Mikko J.

2013-12-01

396

Evolution of the average avalanche shape with the universality class.  

PubMed

A multitude of systems ranging from the Barkhausen effect in ferromagnetic materials to plastic deformation and earthquakes respond to slow external driving by exhibiting intermittent, scale-free avalanche dynamics or crackling noise. The avalanches are power-law distributed in size, and have a typical average shape: these are the two most important signatures of avalanching systems. Here we show how the average avalanche shape evolves with the universality class of the avalanche dynamics by employing a combination of scaling theory, extensive numerical simulations and data from crack propagation experiments. It follows a simple scaling form parameterized by two numbers, the scaling exponent relating the average avalanche size to its duration and a parameter characterizing the temporal asymmetry of the avalanches. The latter reflects a broken time-reversal symmetry in the avalanche dynamics, emerging from the local nature of the interaction kernel mediating the avalanche dynamics. PMID:24352571

Laurson, Lasse; Illa, Xavier; Santucci, Stéphane; Tore Tallakstad, Ken; Måløy, Knut Jørgen; Alava, Mikko J

2013-01-01

397

Average Cosmological Invariant Parameters of Cosmic Gamma Ray Bursts  

NASA Technical Reports Server (NTRS)

Average cosmological invariant parameters (ACIPs) are calculated for six groups of BATSE cosmic gamma-ray bursts selected by their peak fluxes on the 1.024s timescale. The ACIPs represent the average temporal and spectral properties of these events equally in the observer frame of reference and in the co-moving frames of outbursting emitters. The parameters are determined separately for rise fronts and for back slopes of bursts, defined as the time profiles before and after the main peaks, respectively. The ACIPs for the rise fronts are found to be different for different intensity groups, while the ACIPs for the back slopes show no significant dependence on intensity. We conclude that emitters of bursts manifest standard average properties only during the back slopes of bursts.

Mitrofanov, Igor G.; Anfimov, Dimitrij S.; Litvak, Maxim L.; Briggs, Michael S.; Paciesas, W. S.; Pendleton, Geoffrey N.; Preece, Robert D.; Meegan, Charles

1998-01-01

398

Direct Matrix-Assisted Laser Desorption Ionization Time-of-Flight Mass Spectrometry Improves Appropriateness of Antibiotic Treatment of Bacteremia  

Microsoft Academic Search

Matrix assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) allows the identification of microorganisms directly from positive blood culture broths. Use of the MALDI-TOF MS for rapid identification of microorganisms from blood culture broths can reduce the turnaround time to identification and may lead to earlier appropriate treatment of bacteremia. During February and April 2010, direct MALDI-TOF MS was

Anne L. M. Vlek; Marc J. M. Bonten; C. H. Edwin Boel

2012-01-01

399

Experimental measurements and analytical analysis related to gas turbine heat transfer. Part 1: Time-averaged heat-flux and surface-pressure measurements on the vanes and blades of the SSME fuel-side turbine and comparison with prediction. Part 2: Phase-resolved surface-pressure and heat-flux measurements on the first blade of the SSME fuel-side turbine  

NASA Technical Reports Server (NTRS)

Time averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row, the first stage blade row, and the second stage vane row of the Rocketdyne Space Shuttle Main Engine two-stage fuel-side turbine. Unsteady pressure envelope measurements for the first blade are also reported. These measurements were made at 10 percent, 50 percent, and 90 percent span on both the pressure and suction surfaces of the first stage components. Additional Stanton number measurements were made on the first stage blade platform, blade tip, and shroud, and at 50 percent span on the second vane. A shock tube was used as a short duration source of heated and pressurized air to which the turbine was subjected. Platinum thin-film heat flux gages were used to obtain the heat flux measurements, while miniature silicon-diaphragm flush-mounted pressure transducers were used to obtain the pressure measurements. The first stage vane Stanton number distributions are compared with predictions obtained using a version of STAN5 and a quasi-3D Navier-Stokes solution. This same quasi-3D N-S code was also used to obtain predictions for the first blade and the second vane.

1994-01-01

400

Experimental measurements and analytical analysis related to gas turbine heat transfer. Part 1: Time-averaged heat-flux and surface-pressure measurements on the vanes and blades of the SSME fuel-side turbine and comparison with prediction. Part 2: Phase-resolved surface-pressure and heat-flux measurements on the first blade of the SSME fuel-side turbine  

NASA Astrophysics Data System (ADS)

Time averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row, the first stage blade row, and the second stage vane row of the Rocketdyne Space Shuttle Main Engine two-stage fuel-side turbine. Unsteady pressure envelope measurements for the first blade are also reported. These measurements were made at 10 percent, 50 percent, and 90 percent span on both the pressure and suction surfaces of the first stage components. Additional Stanton number measurements were made on the first stage blade platform, blade tip, and shroud, and at 50 percent span on the second vane. A shock tube was used as a short duration source of heated and pressurized air to which the turbine was subjected. Platinum thin-film heat flux gages were used to obtain the heat flux measurements, while miniature silicon-diaphragm flush-mounted pressure transducers were used to obtain the pressure measurements. The first stage vane Stanton number distributions are compared with predictions obtained using a version of STAN5 and a quasi-3D Navier-Stokes solution. This same quasi-3D N-S code was also used to obtain predictions for the first blade and the second vane.

1994-05-01

401

Exact Averaging of Stochastic Equations for Flow in Porous Media  

SciTech Connect

It is well known that at present, exact averaging of the equations for flow and transport in random porous media has been proposed only for limited special fields. Moreover, approximate averaging methods--for example, the convergence behavior and the accuracy of truncated perturbation series--are not well studied, and in addition, calculation of high-order perturbations is very complicated. These problems have for a long time stimulated attempts to find the answer to the question: Are there in existence some exact and sufficiently general forms of averaged equations? Here, we present an approach for finding the general exactly averaged system of basic equations for steady flow with sources in unbounded stochastically homogeneous fields. We do this by using (1) the existence and some general properties of Green's functions for the appropriate stochastic problem, and (2) some information about the random field of conductivity. This approach enables us to find the form of the averaged equations without directly solving the stochastic equations or using the usual assumption regarding any small parameters. In the common case of a stochastically homogeneous conductivity field we present the exactly averaged new basic nonlocal equation with a unique kernel-vector. We show that in the case of some type of global symmetry (isotropy, transversal isotropy, or orthotropy), we can for three-dimensional and two-dimensional flow in the same way derive the exact averaged nonlocal equations with a unique kernel-tensor. When global symmetry does not exist, the nonlocal equation with a kernel-tensor involves complications and leads to an ill-posed problem.

Karasaki, Kenzi; Shvidler, Mark

2008-03-15

402

Global average ozone change from November 1978 to May 1990  

NASA Technical Reports Server (NTRS)

A recent recalibration and reprocessing of the total ozone mapping spectrometer (TOMS) data have made possible a new determination of the global average total ozone decrease of 3.5 percent over the 11-year period, January 1, 1979, to December 31, 1989, with a 2 sigma error of 1.4 percent. The revised TOMS ozone trend data are in agreement, within error limits, with the average of 39 ground-based Dobson stations and with the world standard Dobson spectrometer 83 at Mauna Loa, Hawaii. Superimposed on the 11-year ozone trend is a possible solar cycle effect, quasi-biennial oscillation (QBO), annual, and semiannual cycles. Using solar 10.7-cm flux data and 30-mbar Singapore wind data (QBO), a time series has been constructed that reproduces the long-term behavior of the globally averaged ozone. Removal of the apparent solar cycle effect from the global average reduces the net ozone loss to 2.66 ± 1.4 percent per decade. The precise value of the global average ozone trend depends on the latitude range selected, with ranges greater than ±69° emphasizing the larger variations at high latitudes.

Herman, J. R.; McPeters, R.; Stolarski, R.; Larko, D.; Hudson, R.

1991-01-01

403

New applications for high average power beams  

SciTech Connect

The technology base formed by the development of high peak power simulators, laser drivers, FELs, and ICF drivers from the early '60s through the late '80s is being extended to high average power short-pulse machines with the capabilities of supporting new types of manufacturing processes and performing new roles in environmental cleanup applications. This paper discusses a process for identifying and developing possible commercial applications, specifically those requiring very high average power levels of hundreds of kilowatts to perhaps megawatts. The authors discuss specific technology requirements and give examples of application development efforts. The application development work is directed at areas that can possibly benefit from the high specific energies attainable with short pulse machines.

Neau, E.L.; Turman, B.N.; Patterson, E.L.

1993-08-01

404

High average power free-electron lasers  

NASA Astrophysics Data System (ADS)

Ever since the first free-electron laser (FEL) was both conceived and demonstrated in the 1970s at Stanford, it has attracted much attention as a promising foundation for a high average power light source. Since its light is produced from free electrons, an FEL can be designed to lase at virtually any wavelength. Furthermore, since its light is generated from a relativistic beam that is continually replenished, its gain medium cannot be damaged by any conventional means. Accordingly, as there have been several ambitious attempts to design and construct high average power FELs over the past several decades, we chronicle some of these efforts. Additionally, we provide an overview of FEL technology and theory, discuss some of the obstacles that remain in the construction of a high-power FEL, and provide a road map toward a megawatt-class device.

Blau, Joseph; Cohn, Keith; Colson, William B.

2013-02-01

405

Apparent and average accelerations of the Universe  

SciTech Connect

In this paper we consider the relation between the volume deceleration parameter obtained within the Buchert averaging scheme and the deceleration parameter derived from supernova observation. This work was motivated by recent findings that showed that there are models which despite having Λ = 0 have volume deceleration parameter q^vol < 0. This opens the possibility that back-reaction and averaging effects may be used as an interesting alternative explanation to the dark energy phenomenon. We have calculated q^vol in some Lemaître-Tolman models. For those models which are chosen to be realistic and which fit the supernova data, we find that q^vol > 0, while those models which we have been able to find which exhibit q^vol < 0 turn out to be unrealistic. This indicates that care must be exercised in relating the deceleration parameter to observations.

Bolejko, Krzysztof [School of Physics, University of Melbourne, VIC 3010 (Australia)]; Andersson, Lars, E-mail: bolejko@camk.edu.pl, E-mail: larsa@math.miami.edu [Department of Mathematics, University of Miami, Coral Gables, FL 33124 (United States)]

2008-10-15

406

On the high correlation between long-term averages of solar wind speed and geomagnetic activity  

Microsoft Academic Search

Six-month and yearly averages of solar wind speed from 1962 to 1975 are shown to be highly correlated with geomagnetic activity as measured by averages of the AP index. On the same time scale the correlation between the southward component of the interplanetary magnetic field and geomagnetic activity is poor. Previous studies with hourly averages give opposite results. The better

N. U. Crooker; J. Feynman; J. T. Gosling

1977-01-01

407

Model Averaging with Discrete Bayesian Network Classifiers  

Microsoft Academic Search

This paper considers the problem of performing classification by model-averaging over a class of discrete Bayesian network structures consistent with a partial ordering and with bounded in-degree k. We show that for N nodes this class contains in the worst case at least C(N/2, k)^(N/2) distinct network structures, but we show that this summation can be per-

Denver Dash; Gregory F. Cooper

408

Stochastic Games with Average Payoff Criterion  

SciTech Connect

We study two-person stochastic games on a Polish state and compact action spaces and with average payoff criterion under a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of Nash equilibrium in stationary strategies is established under certain separability conditions.

Ghosh, M. K. [Department of Mathematics, Indian Institute of Science, Bangalore 560012 (India); Bagchi, A. [Department of Applied Mathematics, University of Twente, P.O. Box 217, 7500 AE Enschede (Netherlands)

1998-11-15

409

Modern average global sea-surface temperature  

USGS Publications Warehouse

The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
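
A minimal sketch of the calendar-averaging step described above (monthly grouping shown; the weekly grouping works the same way). The array shapes and the use of NaN as the missing-data marker are assumptions for illustration, not the data set's actual format.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stack of monthly SST grids: 10 years x 12 months x (lat, lon),
# with NaN marking grid cells that lack valid data in a given image.
years, ny, nx = 10, 18, 36
sst = 20.0 + rng.normal(0, 1.5, (years, 12, ny, nx))
sst[rng.random(sst.shape) < 0.1] = np.nan          # roughly 10% missing cells

# Average each calendar month across all years, e.g. all Januaries together.
# nanmean suppresses interannual variability and reduces empty cells, since a
# cell only stays empty if it is missing in *every* year for that month.
monthly_climatology = np.nanmean(sst, axis=0)       # shape (12, ny, nx)

print("climatology shape:", monthly_climatology.shape)
print("cells still empty after averaging:",
      int(np.isnan(monthly_climatology).sum()))
```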

Schweitzer, Peter N.

1993-01-01

410

Lagged average forecasting, an alternative to Monte Carlo forecasting  

NASA Technical Reports Server (NTRS)

A 'lagged average forecast' (LAF) model is developed for stochastic dynamic weather forecasting and used for predictions in comparison with the results of a Monte Carlo forecast (MCF). The technique involves the calculation of sample statistics from an ensemble of forecasts, with each ensemble member being an ordinary dynamical forecast (ODF). Initial conditions at a time lagging the start of the forecast period are used, with varying amounts of time for the lags. Forcing by asymmetric Newtonian heating of the lower layer is used in a two-layer, f-plane, highly truncated spectral model in a test forecasting run. Both the LAF and MCF are found to be more accurate than the ODF due to ensemble averaging with the MCF and the LAF. When a regression filter is introduced, all models become more accurate, with the LAF model giving the best results. The possibility of generating monthly or seasonal forecasts with the LAF is discussed.
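
A minimal sketch of the LAF idea with a toy chaotic system standing in for the spectral model: forecasts launched from successively lagged (noisy) analyses are run to a common valid time and averaged as an ensemble. Everything here, from the Lorenz-63 stand-in to the noise levels, is an illustrative assumption.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (toy stand-in for a forecast model)."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

def forecast(state, nsteps):
    for _ in range(nsteps):
        state = lorenz_step(state)
    return state

rng = np.random.default_rng(3)
truth = np.array([1.0, 1.0, 20.0])
analyses = []                       # noisy "analyses" taken at successive times, 50 steps apart
for _ in range(5):
    truth = forecast(truth, 50)
    analyses.append(truth + rng.normal(0, 0.2, 3))

lead = 200                          # forecast length from the latest analysis time
ensemble = []
for lag, a in enumerate(reversed(analyses)):
    # older starting points run longer so every member verifies at the same time
    ensemble.append(forecast(a.copy(), lead + lag * 50))

laf = np.mean(ensemble, axis=0)     # lagged average forecast = ensemble mean
odf = ensemble[0]                   # ordinary dynamical forecast from the newest analysis
verification = forecast(truth.copy(), lead)

print("ODF error:", np.linalg.norm(odf - verification))
print("LAF error:", np.linalg.norm(laf - verification))
```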

Hoffman, R. N.; Kalnay, E.

1983-01-01

411

Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.  

ERIC Educational Resources Information Center

Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

Dirks, Jean; And Others

1983-01-01

412

High average power diode pumped solid state lasers for CALIOPE  

NASA Astrophysics Data System (ADS)

Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable nonlinear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multirod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version is presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers.

Comaskey, B.; Halpin, J.; Moran, B.

1994-07-01

413

High average power laser for EUV lithography  

SciTech Connect

We have demonstrated the operation of a high average power, all solid state laser and target system for EUV lithography. The laser operates at 1.06 µm with a pulse repetition rate of 200 Hz. Each pulse contains up to 400 mJ of energy and is less than 10 ns in duration. The EUV conversion efficiency measured with the laser is independent of the laser repetition rate. Operating at 200 Hz, the laser has been used for lithography using a 3-bounce Köhler illuminator.

Kania, D.R.; Gaines, D.P.; Hermann, M.; Honig, J.; Hostetler, R.; Levesque, R.; Sommargren, G.E.; Spitzer, R.C.; Vernon, S.P.

1995-01-19

414

Average prime-pair counting formula  

NASA Astrophysics Data System (ADS)

Taking r > 0, let π_{2r}(x) denote the number of prime pairs (p, p+2r) with p ≤ x. The prime-pair conjecture of Hardy and Littlewood (1923) asserts that π_{2r}(x) ~ 2C_{2r} li_2(x) with an explicit constant C_{2r} > 0. There seems to be no good conjecture for the remainders π_{2r}(x) − 2C_{2r} li_2(x) that corresponds to Riemann's formula for π(x) − li(x). However, there is a heuristic approximate formula for averages of these remainders which is supported by numerical results.
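
A minimal numerical sketch of the quantities involved, for the twin-prime case r = 1 using the standard value of the twin-prime constant; this only tabulates one remainder and does not reproduce the authors' averaging over r.

```python
import numpy as np
from scipy.integrate import quad

def prime_sieve(n):
    """Boolean sieve of Eratosthenes up to and including n."""
    is_prime = np.ones(n + 1, dtype=bool)
    is_prime[:2] = False
    for p in range(2, int(n**0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = False
    return is_prime

x = 1_000_000
C2 = 0.6601618158                    # twin-prime (Hardy-Littlewood) constant
is_prime = prime_sieve(x + 2)

# pi_2(x): number of prime pairs (p, p+2) with p <= x
pi2 = int(np.sum(is_prime[: x + 1] & is_prime[2 : x + 3]))

# li_2(x) = integral from 2 to x of dt / (log t)^2
li2, _ = quad(lambda t: 1.0 / np.log(t) ** 2, 2, x)

prediction = 2 * C2 * li2
print(f"pi_2(x)        = {pi2}")
print(f"2*C_2*li_2(x)  = {prediction:.1f}")
print(f"remainder      = {pi2 - prediction:.1f}")
```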

Korevaar, Jaap; Riele, Herman Te

2010-04-01

415

Measurement of small temperature fluctuations at high average temperature  

NASA Technical Reports Server (NTRS)

Both absolute and differential temperature measurements were simultaneously performed as a function of time for a pixel on a high-temperature, multi-spectral, spatially and temporally varying infrared target simulator. A scanning laser beam was used to maintain a pixel at an on-the-average constant temperature of 520 K. The laser refresh rate of up to 1 kHz resulted in small-amplitude temperature fluctuations with a peak-to-peak amplitude of less than 1 K. The experimental setup to accurately measure the differential and the absolute temperature as a function of time is described.

Scholl, James W.; Scholl, Marija S.

1988-01-01

416

MACHINE PROTECTION FOR HIGH AVERAGE CURRENT LINACS  

SciTech Connect

A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths is programmed into gate arrays; the output of the matrix is an allowable maximum average power limit. This power limit is enforced by the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron path) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.
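
A minimal sketch of the mode-matrix idea: a lookup from (machine mode, beam mode) to the allowable maximum average power, with any combination outside the programmed matrix, or any faulted input, inhibiting the beam. The mode names and limits below are invented placeholders, not the Jefferson Lab configuration.

```python
# Hypothetical machine modes (beam paths) and beam modes (average-power classes).
MACHINE_MODES = ["injector", "straight_ahead", "first_arc", "fel_bypass"]
BEAM_MODES = ["no_beam", "tune_up", "low_power", "high_power"]

# Allowable maximum average beam power in watts for each combination; any pair
# not listed is outside the programmed matrix and therefore treated as unsafe.
POWER_LIMIT_W = {
    ("injector", "no_beam"): 0.0,
    ("injector", "tune_up"): 100.0,
    ("straight_ahead", "tune_up"): 100.0,
    ("straight_ahead", "low_power"): 10_000.0,
    ("fel_bypass", "low_power"): 10_000.0,
    ("fel_bypass", "high_power"): 2_000_000.0,
}

def allowed_power(machine_mode: str, beam_mode: str, blm_ok: bool, rf_ok: bool) -> float:
    """Return the enforced average-power limit; 0.0 means the drive laser inhibits beam."""
    if not (blm_ok and rf_ok):
        return 0.0                       # any faulted input drops the limit to zero
    return POWER_LIMIT_W.get((machine_mode, beam_mode), 0.0)

print(allowed_power("fel_bypass", "high_power", blm_ok=True, rf_ok=True))   # 2000000.0
print(allowed_power("first_arc", "high_power", blm_ok=True, rf_ok=True))    # 0.0 (unsafe combination)
```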

Kevin Jordan; Trent Allison; Richard Evans; James Coleman; Albert Grippo

2003-05-01

417

Unpredictable visual changes cause temporal memory averaging.  

PubMed

Various factors influence the perceived timing of visual events. Yet, little is known about the ways in which transient visual stimuli affect the estimation of the timing of other visual events. In the present study, we examined how a sudden color change of an object would influence the remembered timing of another transient event. In each trial, subjects saw a green or red disk travel in circular motion. A visual flash (white frame) occurred at random times during the motion sequence. The color of the disk changed either at random times (unpredictable condition), at a fixed time relative to the motion sequence (predictable condition), or it did not change (no-change condition). The subjects' temporal memory of the visual flash in the predictable condition was as veridical as that in the no-change condition. In the unpredictable condition, however, the flash was reported to occur closer to the timing of the color change than actual timing. Thus, an unpredictable visual change distorts the temporal memory of another visual event such that the remembered moment of the event is closer to the timing of the unpredictable visual change. PMID:17767943

Ohyama, Junji; Watanabe, Katsumi

2007-09-01

418

Average Gait Differential Image Based Human Recognition  

PubMed Central

The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that AGDI has better identification and verification performance than GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption for gait-based recognition.
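
A minimal sketch of building an AGDI-style feature image from a sequence of binary silhouette frames (a synthetic translating block stands in for real segmented silhouettes from a gait dataset such as CASIA):

```python
import numpy as np

def average_gait_differential_image(silhouettes):
    """Accumulate the absolute difference between adjacent silhouette frames
    and normalize by the number of differences, yielding one feature image."""
    frames = np.asarray(silhouettes, dtype=float)          # shape (T, H, W), values in {0, 1}
    diffs = np.abs(frames[1:] - frames[:-1])               # (T-1, H, W) frame-to-frame changes
    return diffs.mean(axis=0)                              # average over the walking sequence

# Synthetic "walking" sequence: a bright block translating across the frame.
T, H, W = 20, 32, 24
seq = np.zeros((T, H, W))
for t in range(T):
    seq[t, 10:22, 2 + t : 8 + t] = 1.0                     # moving silhouette block

agdi = average_gait_differential_image(seq)
print("AGDI shape:", agdi.shape, " max value:", agdi.max())
# The resulting image could then be passed to 2DPCA (or plain PCA on the
# flattened image) for feature extraction, as the paper does.
```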

Chen, Jinyan; Liu, Jiansheng

2014-01-01

419

On the determination of local instantaneous averages in particulate flow measurements  

NASA Technical Reports Server (NTRS)

Determination of the instantaneous local average particle density of a gas-particle suspension requires satisfying both the time-scale relation and the volume-scale relation, or its continuum counterpart of time averaging. This procedure was validated by comparing simultaneous velocity and mass flux measurements with laser phase Doppler measurements.

Vandewall, R. E.; Soo, S. L.

1993-01-01

420

Optimum Low Thrust Elliptic Orbit Transfer Using Numerical Averaging  

NASA Astrophysics Data System (ADS)

Low-thrust electric propulsion is increasingly being used for spacecraft missions, primarily due to its high propellant efficiency. Since analytical solutions for general low-thrust transfers are not available, a simple and fast method for low-thrust trajectory optimization is of great value for preliminary mission planning. However, few low-thrust trajectory tools are appropriate for preliminary mission design studies. The method presented in this paper provides quick and accurate solutions for a wide range of transfers by using numerical orbital averaging to improve solution convergence and include orbital perturbations, allowing preliminary trajectories to be obtained for transfers that involve many revolutions about the primary body. The method considers minimum-fuel transfers, using first-order averaging to obtain the fuel-optimum rates of change of the equinoctial orbital elements in terms of each other and the Lagrange multipliers. Constraints on thrust and power, as well as on minimum periapsis, are implemented, and the equations are averaged numerically using a Gaussian quadrature. The use of numerical averaging allows more complex orbital perturbations to be added without great difficulty. Orbital perturbations due to solar radiation pressure, atmospheric drag, a non-spherical central body, and third-body gravitational effects have been included. These perturbations have not been considered by previous methods using analytical averaging. Thrust limitations due to shadowing have also been considered in this study. To allow for faster convergence over a wider range of problems, the solution to a transfer which minimizes the square of the thrust magnitude is used as a preliminary guess for the minimum-fuel problem. Thus, this method can be quickly applied to many different types of transfers which may include various perturbations. Results from this model are shown to provide a reduction in the propellant mass required over previous minimum-fuel solutions. Minimum-time transfers are also solved and compared with the minimum-fuel results.
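
As a toy illustration of the numerical-averaging idea (not the paper's actual equations of motion), the secular rate of an orbital element can be approximated by a Gauss-Legendre quadrature of its instantaneous rate over one revolution. A full implementation would also weight the integrand by dt/d(theta) along the orbit; that factor is omitted here for brevity.

import numpy as np

def averaged_rate(instantaneous_rate, n_nodes=16):
    """Average d(element)/dt over one revolution in an orbit angle theta in
    [0, 2*pi) using Gauss-Legendre quadrature."""
    x, w = np.polynomial.legendre.leggauss(n_nodes)   # nodes/weights on [-1, 1]
    theta = np.pi * (x + 1.0)                         # map nodes to [0, 2*pi]
    return (np.pi * (w @ instantaneous_rate(theta))) / (2.0 * np.pi)

# example: a rate with a secular part plus a periodic part that averages out
rate = lambda th: 1e-3 + 5e-4 * np.cos(th)
print(averaged_rate(rate))   # ~1e-3, the secular drift per unit time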

Tarzi, Zahi Bassem

421

Database of average-power damage thresholds at 1064 nm  

SciTech Connect

We have completed a database of average-power, laser-induced damage thresholds at 1064 nm on a variety of materials. Measurements were made with a newly constructed laser to provide design input for moderate and high average-power laser projects. The measurements were conducted with 16-ns pulses at pulse-repetition frequencies ranging from 6 to 120 Hz. Samples were typically irradiated for times ranging from a fraction of a second up to 5 minutes (36,000 shots). We tested seven categories of samples, which included antireflective coatings, high reflectors, polarizers, single and multiple layers of the same material, bare and overcoated metal surfaces, bare polished surfaces, and bulk materials. The measured damage thresholds ranged from < 1 J/cm² for some metals to > 46 J/cm² for a bare polished glass substrate. 4 refs., 7 figs., 1 tab.

Rainer, F.; Hildum, E.A.; Milam, D.

1987-12-14

422

Flux-Averaged and Volume-Averaged Concentrations in Continuum Approaches to Solute Transport  

NASA Astrophysics Data System (ADS)

Transformations between volume-averaged pore fluid concentrations and flux-averaged concentrations are presented which show that both modes of concentration obey convective-dispersive transport equations of identical mathematical form for nonreactive solutes. The pertinent boundary conditions for the two modes, however, do not transform identically. Solutions of the convection-dispersion equation for a semi-infinite system during steady flow subject to a first-type inlet boundary condition are shown to yield flux concentrations, while solutions subject to a third-type boundary condition yield volume-averaged concentrations. These solutions may be applied with reasonable impunity to finite as well as semi-infinite media if back mixing at the exit is precluded. Implications of the distinction between resident and flux concentrations for laboratory and field studies of solute transport are discussed. It is suggested that perceived limitations of the convection-dispersion model for media with large variations in pore water velocities may in certain cases be attributable to a failure to distinguish between volume-averaged and flux-averaged concentrations.
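
For reference, the standard transformation between the two modes during steady one-dimensional flow, in notation assumed here (resident/volume-averaged concentration c_r, flux-averaged concentration c_f, pore-water velocity v, dispersion coefficient D), is

\[
  c_f = c_r - \frac{D}{v}\,\frac{\partial c_r}{\partial x},
  \qquad
  \frac{\partial c}{\partial t} = D\,\frac{\partial^2 c}{\partial x^2} - v\,\frac{\partial c}{\partial x}
  \quad \text{for } c = c_r \text{ or } c_f ,
\]

i.e. both concentration modes satisfy a convection-dispersion equation of the same form, while the boundary conditions transform differently.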

Parker, J. C.; van Genuchten, M. Th.

1984-07-01

423

A Bayesian Algorithm for Reconstructing Spatially Averaged Temperature  

NASA Astrophysics Data System (ADS)

The determination of spatially averaged temperature from point estimates is a non-trivial statistical problem. In the paleo-climate context, the additional need to convert proxy time-series into temperature estimates presents a serious challenge. Most estimates of spatially averaged temperature at paleo-climate time-scales address these two issues sequentially: the proxy values are first averaged through space, and these estimates are then transformed onto the temperature scale via some form of regression. This two-step approach distances the final estimate of temperature from the underlying data, complicating estimates of the associated uncertainty. Our approach is to model the relationship between the true temperature field and the noisy, localized measurements of it using a hidden Markov model. We use a fully Bayesian algorithm to simultaneously estimate the coefficients linking the proxy values to temperature units, the parameters associated with both the temporal and spatial covariance structures, the observational error variances, the temperature values at a large number of uniformly distributed spatial locations, and the average of these estimated temperature values. We assume proper but weakly informative priors for all unknowns and have, when possible, used conjugate priors, many of them not normal. A major benefit of this Bayesian approach is that, by drawing repeatedly from the full conditional posterior distributions, we obtain an estimate of the uncertainty covariance structure. This allows us to make quantitative statements about both the relative contributions of the different proxies to the spatial average and the extent to which the model can constrain the various parameters. In particular, the model outputs the uncertainty in the coefficients of the transformation linking the proxy values to temperature units. The model can easily be generalized to accommodate different categories of proxy data assumed to have different uncertainty properties. We first apply the model to surrogate data to test the method and develop intuition about the convergence properties of the Monte Carlo sampling procedure. We then apply the model to instrumental data, before extending the reconstruction back in time using proxy data.
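
A heavily simplified sketch of the "draw repeatedly from the full conditionals" step for a spatial average, using a toy Gaussian model with a known spatial covariance rather than the paper's full proxy/hidden Markov formulation (all names and values are illustrative):

import numpy as np

rng = np.random.default_rng(0)
n, sigma_y, tau = 50, 0.5, 10.0                     # sites, obs noise sd, prior sd
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Sigma = np.exp(-dist / 10.0)                        # assumed spatial covariance
Sigma_inv = np.linalg.inv(Sigma)
ones = np.ones(n)
y = 2.0 + np.linalg.cholesky(Sigma) @ rng.standard_normal(n) \
        + sigma_y * rng.standard_normal(n)          # synthetic point observations

T, mu, spatial_means = np.zeros(n), 0.0, []
for it in range(2000):
    # T | y, mu : Gaussian full conditional for the temperature field
    Q = Sigma_inv + np.eye(n) / sigma_y**2
    cov_T = np.linalg.inv(Q)
    mean_T = cov_T @ (Sigma_inv @ (mu * ones) + y / sigma_y**2)
    T = rng.multivariate_normal(mean_T, cov_T)
    # mu | T : Gaussian full conditional for the field-level mean
    prec_mu = 1.0 / tau**2 + ones @ Sigma_inv @ ones
    mean_mu = (ones @ Sigma_inv @ T) / prec_mu
    mu = mean_mu + rng.standard_normal() / np.sqrt(prec_mu)
    spatial_means.append(T.mean())                  # track the spatial average

burned = np.array(spatial_means[500:])
print(burned.mean(), burned.std())                  # posterior mean and its spread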

Tingley, M. P.; Huybers, P.

2007-12-01

424

Average structure of incommensurately modulated monoclinic lazurite  

SciTech Connect

The average structure of the monoclinic modification of lazurite Ca1.26Na6.63K0.04[Al6Si6O24](SO4)1.53S0.99Cl0.05 (discovered in the Lake Baikal region), incommensurately modulated along the c axis, is solved by the single-crystal X-ray diffraction method. The unit-cell parameters are a = 9.069(1) Å, b = 12.868(1) Å, c = 12.872(1) Å, γ = 90.19(1)°, sp. gr. Pa, R = 6.9%, 2057 reflections. The AlO4 and SiO4 tetrahedra form a partially ordered framework. The positions in the cavities of the framework are split and randomly occupied by Na and Ca atoms and the SO4, S2, S3, and SO2 anionic groups. The structure of the mineral is compared with the superstructure of triclinic lazurite. Conclusions are drawn about the causes of the incommensurate modulation in monoclinic and other lazurites.

Bolotina, N. B.; Rastsvetaeva, R. K. [Russian Academy of Sciences, Shubnikov Institute of Crystallography (Russian Federation)], E-mail: rast@ns.crys.ras.ru; Sapozhnikov, A. N. [Russian Academy of Sciences, Vinogradov Institute of Geochemistry, Siberian Division (Russian Federation)

2006-07-15

425

Averaging and globalising quotients of informetric and scientometric data  

Microsoft Academic Search

Based on the particular case of the average impact factor of a subfield versus the impact factor of this subfield as a whole, the difference is studied between an average of quotients, denoted as AQ, and a global average, obtained as a quotient of averages, and denoted as GQ. In the case of impact factors, AQ becomes the average impact
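
For illustration of the distinction, in notation assumed here (c_i citations and p_i publications of journal i in the subfield, n journals), the average of quotients and the global quotient of averages are

\[
  AQ = \frac{1}{n}\sum_{i=1}^{n}\frac{c_i}{p_i},
  \qquad
  GQ = \frac{\sum_{i=1}^{n} c_i}{\sum_{i=1}^{n} p_i}
     = \sum_{i=1}^{n}\frac{p_i}{\sum_{j} p_j}\cdot\frac{c_i}{p_i},
\]

so GQ is a publication-weighted average of the same quotients; the two coincide only when all the quotients c_i/p_i are equal or all the p_i are equal.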

Leo Egghe; Ronald Rousseau

1996-01-01

426

Relation of Poisson's ratio on average with Young's modulus. Auxetics on average  

NASA Astrophysics Data System (ADS)

A linear relation between the Poisson's ratio averaged over the transverse directions and the Young's modulus of a tensed cubic crystal is established. It is found that the coefficients of the linear relation, in dimensionless form, depend on two dimensionless elastic parameters combined from the three compliance coefficients. As a consequence, the shape of the angular regions of crystal orientations with a negative Poisson's ratio on average varies with the magnitude of one dimensionless coefficient and the sign of the other. The critical value of the dimensionless parameter at which a topological change in the structure of the angular regions occurs is also determined.

Goldstein, R. V.; Gorodtsov, V. A.; Lisovenko, D. S.

2012-04-01

427

Forecasting Sales by Exponentially Weighted Moving Averages  

Microsoft Academic Search

The growing use of computers for mechanized inventory control and production planning has brought with it the need for explicit forecasts of sales and usage for individual products and materials. These forecasts must be made on a routine basis for thousands of products, so that they must be made quickly, and, both in terms of computing time and information storage,
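
The core recursion behind such routine forecasts is simple exponential smoothing (Winters' full method additionally carries trend and seasonal terms); a minimal sketch:

def ewma_forecast(sales, alpha=0.3):
    """One-step-ahead forecasts by exponentially weighted moving average:
    level <- alpha * observation + (1 - alpha) * previous level."""
    level = sales[0]
    forecasts = []
    for x in sales[1:]:
        forecasts.append(level)          # forecast made for this period
        level = alpha * x + (1 - alpha) * level
    return forecasts, level              # past forecasts and next-period forecast

history = [112, 118, 132, 129, 121, 135, 148, 148]
past, next_period = ewma_forecast(history)
print(round(next_period, 1))             # smoothed forecast for the next period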

Peter R. Winters

1960-01-01

428

Averaged null energy condition in loop quantum cosmology  

SciTech Connect

Wormholes and time machines are objects of great interest in general relativity. However, supporting them requires exotic matter, which is impossible at the classical level. Semiclassical gravity introduces quantum effects into the stress-energy tensor and constructs many self-consistent wormholes. But they are not traversable, due to the averaged null energy condition. Loop quantum gravity (LQG) significantly modifies the Einstein equation in the deep quantum region. If we write the modified Einstein equation in the form of the standard one but with an effective stress-energy tensor, it is convenient to analyze the geometry in LQG through the energy conditions. Loop quantum cosmology (LQC), an application of LQG, has an effective stress-energy tensor which violates some kinds of local energy conditions. So it is natural that inflation emerges in LQC. In this paper, we investigate the averaged null energy condition in LQC in the framework of the effective Hamiltonian, and we find that the effective stress-energy tensor in LQC violates the averaged null energy condition in the massless scalar field coupled model.
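
For context, the averaged null energy condition referred to here is conventionally stated as the requirement that, along a complete null geodesic with tangent k^mu and affine parameter lambda,

\[
  \int_{-\infty}^{+\infty} T_{\mu\nu}\,k^{\mu}k^{\nu}\,d\lambda \;\ge\; 0 ,
\]

with T_{mu nu} replaced by the effective stress-energy tensor in the LQC analysis above.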

Li Lifang; Zhu Jianyang [Department of Physics, Beijing Normal University, Beijing 100875 (China)

2009-02-15

429

Averaged null energy condition in loop quantum cosmology  

NASA Astrophysics Data System (ADS)

Wormholes and time machines are objects of great interest in general relativity. However, supporting them requires exotic matter, which is impossible at the classical level. Semiclassical gravity introduces quantum effects into the stress-energy tensor and constructs many self-consistent wormholes. But they are not traversable, due to the averaged null energy condition. Loop quantum gravity (LQG) significantly modifies the Einstein equation in the deep quantum region. If we write the modified Einstein equation in the form of the standard one but with an effective stress-energy tensor, it is convenient to analyze the geometry in LQG through the energy conditions. Loop quantum cosmology (LQC), an application of LQG, has an effective stress-energy tensor which violates some kinds of local energy conditions. So it is natural that inflation emerges in LQC. In this paper, we investigate the averaged null energy condition in LQC in the framework of the effective Hamiltonian, and we find that the effective stress-energy tensor in LQC violates the averaged null energy condition in the massless scalar field coupled model.

Li, Li-Fang; Zhu, Jian-Yang

2009-02-01

430

Global Average Brightness Temperature for April 2003  

NASA Technical Reports Server (NTRS)

This image shows average temperatures in April, 2003, observed by AIRS at an infrared wavelength that senses either the Earth's surface or any intervening cloud. Similar to a photograph of the planet taken with the camera shutter held open for a month, stationary features are captured while those obscured by moving clouds are blurred. Many continental features stand out boldly, such as our planet's vast deserts, and India, now at the end of its long, clear dry season. Also obvious are the high, cold Tibetan plateau to the north of India, and the mountains of North America. The band of yellow encircling the planet's equator is the Intertropical Convergence Zone (ITCZ), a region of persistent thunderstorms and associated high, cold clouds. The ITCZ merges with the monsoon systems of Africa and South America. Higher latitudes are increasingly obscured by clouds, though some features like the Great Lakes, the British Isles and Korea are apparent. The highest latitudes of Europe and Eurasia are completely obscured by clouds, while Antarctica stands out cold and clear at the bottom of the image.

The Atmospheric Infrared Sounder (AIRS) Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.

2003-01-01

431

Constructing the Average Natural History of HIV-1 Infection  

NASA Astrophysics Data System (ADS)

Many aspects of the natural course of HIV-1 infection remain unclear, despite important efforts towards understanding its long-term dynamics. Using a scaling approach that places the progression markers (viral load, CD4+, CD8+) of many individuals on a single average natural course of disease progression, we introduce the concepts of inter-individual scaling and time scaling. Our quantitative assessment of the natural course of HIV-1 infection indicates that the dynamics of disease evolution for individuals who developed AIDS (opportunistic infections) differ from those for individuals who did not develop AIDS. This means that the rate of progression is not the relevant factor for the evolution of the infection.
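
One plausible reading of the inter-individual and time scaling idea, shown purely for illustration (the paper's actual procedure may differ), is to rescale each individual's marker series onto a common normalized time axis before averaging:

import numpy as np

def average_course(times_list, marker_list, n_grid=100):
    """Rescale each individual's time axis to [0, 1], interpolate the marker
    (e.g. a CD4+ count) onto a common grid, and average across individuals."""
    grid = np.linspace(0.0, 1.0, n_grid)
    curves = []
    for t, m in zip(times_list, marker_list):
        t = np.asarray(t, dtype=float)
        s = (t - t.min()) / (t.max() - t.min())      # individual time scaling
        curves.append(np.interp(grid, s, m))
    return grid, np.mean(curves, axis=0)             # "average natural course"

# toy usage with two individuals followed for different durations (years)
g, avg = average_course([[0, 2, 5, 9], [0, 1, 4]],
                        [[800, 600, 450, 300], [750, 640, 380]])
print(avg[:3])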

Diambra, L.; Capurro, A.; Malta, C. P.

2007-05-01

432

Average power effects in parametric oscillators and amplifiers  

NASA Technical Reports Server (NTRS)

Average power effects relative to the operation of parametric oscillators and amplifiers have been calculated. Temperature gradients have been calculated for both radial and longitudinal heat extraction. In many instances, the thermal load on a parametric oscillator is higher than the thermal load on a parametric amplifier with the same pump power. Having one or both of the generated wavelengths (signal and idler) resonant increases the chance that a generated photon will be absorbed by the nonlinear crystal. Temperature profiles and thermal diffusion time constants have been calculated for Gaussian beams, given the heat-deposition rate. With radial heat extraction, the temperature profile can be expressed as a power series or approximated by a Gaussian distribution function.
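
A common order-of-magnitude expression for the thermal diffusion time constant of a Gaussian-pumped region, stated here as a general estimate rather than the exact form used in the paper, is

\[
  \tau_{th} \simeq \frac{\omega^{2}}{4\,D_{th}},
  \qquad
  D_{th} = \frac{k}{\rho\,c_{p}} ,
\]

where omega is the pump beam radius, k the thermal conductivity, rho the density, and c_p the specific heat of the nonlinear crystal.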

Barnes, Norman P.; Williams-Byrd, Julie A.

1995-01-01

433

High average power second harmonic generation in air  

SciTech Connect

We demonstrate second harmonic vortex generation in atmospheric-pressure air using a tightly focused femtosecond laser beam. The circularly polarized, ring-shaped beam of the second harmonic is generated in the air by a fundamental beam of the same circular polarization, while a linearly polarized beam produces a two-lobe beam at the second harmonic frequency. The achieved normalized conversion efficiency and average second harmonic power are two orders of magnitude higher than those previously reported and can be increased up to 20 times by an external gas flow. We demonstrate that the frequency doubling originates from the gradient of photoexcited free electrons created by the ponderomotive force.

Beresna, Martynas; Kazansky, Peter G. [Optoelectronics Research Centre, University of Southampton, Southampton SO17 1BJ (United Kingdom); Svirko, Yuri [Department of Physics and Mathematics, University of Joensuu, Joensuu FI-80101 (Finland); Barkauskas, Martynas; Danielius, Romas [Light Conversion Ltd., Vilnius, LT-10223 (Lithuania)

2009-09-21

434

Optimal estimation of the diffusion coefficient from non-averaged and averaged noisy magnitude data  

NASA Astrophysics Data System (ADS)

The magnitude operation changes the signal distribution in MRI images from Gaussian to Rician. This introduces a bias that must be taken into account when estimating the apparent diffusion coefficient. Several estimators are known in the literature. In the present paper, two novel schemes are proposed. Both are based on simple least squares fitting of the measured signal, either to the median (MD) or to the maximum probability (MP) value of the Probability Density Function (PDF). Fitting to the mean (MN) or to a high signal-to-noise ratio approximation to the mean (HS) is also possible. Special attention is paid to the case of averaged magnitude images. The PDF, which cannot be expressed in closed form, is analyzed numerically. A scheme for performing maximum likelihood (ML) estimation from averaged magnitude images is proposed. The performance of several estimators is evaluated by Monte Carlo (MC) simulations. We focus on typical clinical situations, where the number of acquisitions is limited. For non-averaged data the optimal choice is found to be MP or HS, whereas uncorrected schemes and the power image (PI) method should be avoided. For averaged data MD and ML perform equally well, whereas uncorrected schemes and HS are inadequate. MD provides easier implementation and higher computational efficiency than ML. Unbiased estimation of the diffusion coefficient allows high resolution diffusion tensor imaging (DTI) and may therefore help solve the problem of crossing fibers encountered in white matter tractography.
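
A minimal sketch of the "fit to a high-SNR approximation of the mean" idea for magnitude data, assuming a mono-exponential decay S0*exp(-b*D) and a known noise level sigma (names, b-values, and the sqrt(A^2 + sigma^2) approximation are assumptions for illustration, not the paper's exact scheme):

import numpy as np
from scipy.optimize import curve_fit

def rician_mean_hs(b, S0, D, sigma):
    """High-SNR approximation to the mean of a magnitude (Rician) signal
    whose underlying amplitude decays as S0 * exp(-b * D)."""
    A = S0 * np.exp(-b * D)
    return np.sqrt(A**2 + sigma**2)

b = np.array([0., 200., 400., 600., 800., 1000.])    # b-values in s/mm^2
sigma, S0_true, D_true = 20.0, 1000.0, 1.5e-3
rng = np.random.default_rng(1)
A = S0_true * np.exp(-b * D_true)
meas = np.abs(A + sigma * rng.standard_normal(b.size)
              + 1j * sigma * rng.standard_normal(b.size))  # simulated magnitudes

popt, _ = curve_fit(lambda bb, S0, D: rician_mean_hs(bb, S0, D, sigma),
                    b, meas, p0=[meas[0], 1e-3])
print(popt)   # estimated [S0, D]; the noise-floor correction matters most at high b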

Kristoffersen, Anders

2007-08-01

435

Industry-grade high average power femtosecond light source  

NASA Astrophysics Data System (ADS)

Ultrashort pulses are capable of processing practically any material with a negligible heat-affected zone. Typical pulse durations for industrial applications are situated in the low picosecond regime. Pulse durations of 5 ps or below are a well established compromise between the electron-phonon interaction time of most materials and the need for pulses long enough to suppress detrimental effects such as nonlinear interaction with the ablated plasma plume. However, sub-picosecond pulses can further increase the ablation efficiency for certain materials, depending on the available average power, pulse energy and peak fluence. Based on the well established TruMicro 5000 platform (first release in 2007, third generation in 2011), an Yb:YAG disk amplifier in combination with a broadband seed laser was used to scale the output power of industrial femtosecond light sources: we report on a sub-picosecond amplifier that delivers a maximum of 160 W of average output power at pulse durations of 750 fs. Optimizing the system for maximum peak power allowed for pulse energies of 850 µJ at pulse durations of 650 fs. Based on this study and the approved design of the TruMicro 5000 product series, industry-grade, high average power femtosecond light sources are now available for 24/7 operation. Since their release in May 2013 we have been able to increase the average output power of the TruMicro 5000 FemtoEdition from 40 W to 80 W while maintaining pulse durations around 800 fs. First studies on metals reveal a drastic increase of processing speed for some micro-processing applications.

Heckl, O. H.; Weiler, S.; Fleischhaker, R.; Gebs, R.; Budnicki, A.; Wolf, M.; Kleinbauer, J.; Russ, S.; Kumkar, M.; Sutter, D. H.

2014-03-01