Emergency Department Overcrowding and Ambulance Turnaround Time
Lee, Yu Jin; Shin, Sang Do; Lee, Eui Jung; Cho, Jin Seong; Cha, Won Chul
2015-01-01
Objective: The aims of this study were to describe overcrowding in regional emergency departments in Seoul, Korea and to evaluate the effect of crowding on ambulance turnaround time. Methods: This study was conducted between January 2010 and December 2010. Patients who were transported by 119-responding ambulances to 28 emergency centers within Seoul were eligible for enrollment. Overcrowding was defined as the average occupancy rate, which was equal to the average number of patients staying in an emergency department (ED) over a 4-hour period divided by the number of beds in the ED. After selecting groups for final analysis, multi-level regression modeling (MLM) was performed, with random effects for EDs, to evaluate associations between occupancy rate and turnaround time. Results: Between January 2010 and December 2010, 163,659 patients transported to 28 EDs were enrolled. The median occupancy rate was 0.42 (range: 0.10-1.94; interquartile range (IQR): 0.20-0.76). Overcrowded EDs were more likely to receive older patients, patients with normal mentality, and non-trauma patients. Overcrowded EDs were also more likely to have longer turnaround intervals and traveling distances. The MLM analysis showed that an increase of 1% in occupancy rate was associated with a 0.02-minute decrease in turnaround interval (95% CI: 0.01 to 0.03). In subgroup analyses limited to EDs with occupancy rates over 100%, we also observed a 0.03-minute decrease in turnaround interval per 1% increase in occupancy rate (95% CI: 0.01 to 0.05). Conclusions: In this study, we found wide variation in emergency department crowding in a metropolitan Korean city. Our data indicate that ED overcrowding is negatively associated with turnaround interval, with very small practical significance. PMID:26115183
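The occupancy-rate definition used in this study (average ED census over a 4-hour window divided by bed count) can be sketched directly; the hourly sampling scheme below is an illustrative assumption, not the study's protocol:

```python
from statistics import mean

def occupancy_rate(census_counts, n_beds):
    """Average ED occupancy rate: mean number of patients present during
    the observation window divided by the number of ED beds.
    `census_counts` holds patient counts sampled across a 4-hour window."""
    if n_beds <= 0:
        raise ValueError("number of beds must be positive")
    return mean(census_counts) / n_beds

# Example: counts sampled hourly over 4 hours in a hypothetical 20-bed ED
rate = occupancy_rate([18, 22, 25, 19], n_beds=20)  # 21/20 = 1.05 -> overcrowded
```

A rate above 1.0 means, on average, more patients are present than there are beds, matching the >100% subgroup analyzed in the abstract.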
Computerized Monitoring and Analysis of Radiology Report Turnaround Times
NASA Astrophysics Data System (ADS)
Wang, Yen
1989-05-01
A computerized Radiology Management System was used to monitor the turnaround time of radiology reports in a large university hospital. The time from patient entry into the department until the printing and distribution of the final examination report was monitored periodically for two-week time intervals. Total turnaround time was divided into four separate components. Analysis of the data enabled us to assess individual and departmental performance and thereby improve important patient service functions.
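The monitoring scheme lends itself to a short sketch: total turnaround is the sum of component intervals, and per-component medians over a sampling period localize where delays accumulate. The four component names below are hypothetical, since the abstract does not list them:

```python
from statistics import median

# Hypothetical component names; the abstract does not specify them.
COMPONENTS = ("exam_wait", "exam_to_dictation",
              "dictation_to_transcription", "transcription_to_distribution")

def total_turnaround(record):
    """Total report turnaround time = sum of the four component intervals."""
    return sum(record[c] for c in COMPONENTS)

def component_medians(records):
    """Median of each component over a sampling period (e.g. two weeks)."""
    return {c: median(r[c] for r in records) for c in COMPONENTS}

records = [
    {"exam_wait": 30, "exam_to_dictation": 120,
     "dictation_to_transcription": 240, "transcription_to_distribution": 60},
    {"exam_wait": 20, "exam_to_dictation": 90,
     "dictation_to_transcription": 300, "transcription_to_distribution": 45},
]
totals = [total_turnaround(r) for r in records]  # minutes: [450, 455]
```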
Turnaround Time and Market Capacity in Contract Cheating
ERIC Educational Resources Information Center
Wallace, Melisa J.; Newton, Philip M.
2014-01-01
Contract cheating is the process whereby students auction off the opportunity for others to complete assignments for them. It is an apparently widespread yet under-researched problem. One suggested strategy to prevent contract cheating is to shorten the turnaround time between the release of assignment details and the submission date, thus making…
24 CFR 901.10 - Indicator #1, vacancy rate and unit turnaround time.
Code of Federal Regulations, 2010 CFR
2010-04-01
Management Assessment Program, § 901.10 Indicator #1, vacancy rate and unit turnaround time. This indicator examines the vacancy rate, a…
Naylor, Heather; Woloschuk, Donna M M; Fitch, Patrick; Miller, Sarah
2011-01-01
Background: Standardizing the interpretation of “stat”, “emergent”, “urgent”, and “now” medication orders can improve patient safety. However, the effect of implementing standardized definitions on the turnaround time for medication orders in hospital pharmacy dispensaries has not been studied. Objectives: To examine the effects of using formal definitions for “stat”, “emergent”, “urgent”, and “now” on turnaround time for medication orders within a pharmacy dispensary. Methods: Definitions for “stat”, “emergent”, “urgent”, and “now” orders, as well as for “turnaround time”, were developed from the formal literature and the grey literature. The definitions were implemented by educating all pharmacy staff. Retrospective audits of turnaround time were conducted at baseline (for all orders over a 1-month period) and after implementation of the definitions (for a total of 28 days over a 3-month period). Health records and medication orders were used to calculate time from prescribing to administration (total turnaround time) and time from prescribing to departure from the dispensary (dispensary turnaround time). Differences between total and dispensary turnaround times were compared with nonparametric statistics. Results: During the baseline audit period, 84 (1.1%) of 7787 orders were identified as “stat”, “emergent”, “urgent”, or “now”. After implementation of the formal definitions, 142 (2.6%) of 5365 orders were identified by one of these terms. The percentage of orders meeting the target dispensary turnaround time of less than 15 min was at least 90% both at baseline (76/84 [90%]) and after implementation (129/142 [91%]) (p = 0.80). Median dispensary turnaround time for stat and emergent medication orders combined (10 versus 9 min, p = 0.27) and for urgent and now medication orders combined (10 versus 12 min, p = 0.09) did not change after implementation of formal definitions. 
Similarly, median total turnaround time did not change for stat and emergent medication orders combined (30 versus 45 min, p = 0.32), but it increased for urgent and now orders combined (35 versus 45 min, p = 0.041). Conclusions: Implementing standardized definitions for “stat”, “emergent”, “urgent”, and “now” had no significant effect on dispensary turnaround time. However, the majority of orders with these designations met the expected target for dispensary turnaround time. Further interventions aimed at other health care professionals may be needed to reduce total turnaround time. This research supports the concept of interdisciplinary interventions for reducing total turnaround time. PMID:22479087
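The two audit statistics reported above (fraction of orders meeting the <15 min dispensary target, and the median turnaround) reduce to a few lines of stdlib code; the data here are illustrative, not the study's, and in practice the median comparison would use a nonparametric test:

```python
from statistics import median

def tat_summary(turnaround_minutes, target=15):
    """Fraction of orders meeting the dispensary turnaround target
    (< `target` minutes) and the median turnaround time."""
    n_met = sum(1 for t in turnaround_minutes if t < target)
    return n_met / len(turnaround_minutes), median(turnaround_minutes)

# Illustrative dispensary turnaround times (minutes), not study data
baseline = [5, 8, 10, 10, 12, 14, 14, 20, 25, 9]
frac, med = tat_summary(baseline)  # 8 of 10 under target; median 11 min
```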
A strategy for reducing turnaround time in design optimization using a distributed computer system
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Padula, Sharon L.; Rogers, James L.
1988-01-01
There is a need to explore methods for reducing the lengthy computer turnaround (clock) time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first is a traditional structural design optimization problem, characterized by a simple data flow and a complicated analysis. The second uses an existing computer program designed to study multilevel optimization techniques and is characterized by a complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means of reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with minimal cost in terms of hardware and software.
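The strategy above can be sketched minimally: independent portions of the computation are dispatched to a pool of workers, so wall-clock turnaround approaches the longest single run rather than the sum of all runs. The `analyze` function is a hypothetical stand-in for a validated analysis code:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(case):
    """Stand-in for one independent analysis run (e.g. one load case)."""
    return sum(x * x for x in case)

def parallel_analysis(cases, workers=4):
    """Dispatch independent analyses across workers; clock time
    approaches the longest single run rather than the sum of all."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyze, cases))

results = parallel_analysis([[1, 2], [3, 4], [5, 6]])  # [5, 25, 61]
```

This only pays off when the problem decomposes into independent pieces, which is exactly the condition the abstract identifies.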
Predictable turn-around time for post tape-out flow
NASA Astrophysics Data System (ADS)
Endo, Toshikazu; Park, Minyoung; Ghosh, Pradiptya
2012-03-01
A typical post-tape-out data path at an IC fabrication facility has the following major software-based processing components: Boolean operations before the application of resolution enhancement techniques (RET) and optical proximity correction (OPC); the RET and OPC step itself (etch retargeting, sub-resolution assist feature (SRAF) insertion, and OPC); post-RET/OPC Boolean operations; and sometimes, in the same flow, simulation-based verification. An IC fabrication tapeout flow manager wants to achieve two objectives with the flow, which at times may compete: predictable completion time and the fastest turnaround time (TAT). Studies in the literature have modeled turnaround time from historical data for runs with the same recipe and then used the model to derive resource allocations for subsequent runs [3]. This approach is more feasible for predominantly simulation-dominated tools, but for an edge-operation-dominated flow it may not be possible, especially if processing-acceleration methods such as pattern matching or hierarchical processing are involved. In this paper, we suggest an alternative method of providing a target turnaround time and managing job priority without any upfront resource modeling or planning. The methodology then systematically either meets the turnaround-time target or informs the user as early as possible that it will not be met. This builds on top of the Calibre Cluster Management (CalCM) resource management work previously published [1][2]. The paper describes the initial demonstration of the concept.
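One way to picture the priority management described above is a toy earliest-deadline-first scheduler; the names and scheme are illustrative assumptions, not CalCM's actual API. Jobs carry target turnaround deadlines, the tightest deadline is served first, and a job that cannot meet its target is flagged as early as possible:

```python
import heapq

class TatScheduler:
    """Toy priority scheme: jobs with target-turnaround deadlines are
    served earliest-deadline-first; a job whose estimated finish time
    exceeds its deadline is flagged 'will miss target' up front."""

    def __init__(self):
        self._heap = []

    def submit(self, job_id, deadline, est_runtime):
        heapq.heappush(self._heap, (deadline, job_id, est_runtime))

    def next_job(self, now):
        deadline, job_id, est = heapq.heappop(self._heap)
        will_miss = now + est > deadline
        return job_id, will_miss

sched = TatScheduler()
sched.submit("opc_run", deadline=100, est_runtime=30)
sched.submit("verif",   deadline=50,  est_runtime=60)
job, miss = sched.next_job(now=0)  # "verif" first; 0 + 60 > 50 -> flagged
```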
A Time-Motion Study of the Emergency Medical Services Turnaround Interval
David C Cone; Steven J Davidson; Que Nguyen
1998-01-01
Study objective: Because overall EMS system response depends on ambulance availability, we conducted a prospective study of the EMS turnaround interval. This interval represents the time elapsed from ambulance arrival at the hospital until the ambulance reports back in service. Methods: An on-site observer, while monitoring EMS radio traffic, recorded the delivery and recovery activities of personnel from a large
An automated resource management system to improve production tapeout turn-around time
NASA Astrophysics Data System (ADS)
Guo, Eric; Liu, Qingwei; Zhu, Sherry; Wu, Jason; Tsai, Jenny; Lu, Junwei; Simmons, Mark C.
2013-03-01
In today's semiconductor industry, both the pure-play and independent device manufacturer (IDM) foundries are constantly and rigorously competing for market share. The acknowledged benefit for customers who partner with these foundries includes a reduced cost-of-ownership, along with the underwritten agreement of meeting or exceeding an aggressive time-to-market schedule. Because the Semiconductor Manufacturing International Corporation (SMIC) is one of the worldwide forerunners in the foundry industry, one of its primary concerns is ensuring continual improvement in its fab's turnaround time (TAT), especially given that newer technology nodes and their associated processes are increasing in complexity and, consequently, in their time-to-process. In assessing current runtime data trends at the 65nm and 40nm technology nodes, it was hypothesized that hardware and software utilization improvements could accomplish a reduced overall TAT. By running an experiment using the Mentor Graphics Calibre® Cluster Manager (CalCM) software, SMIC was able to demonstrate just over a 30% aggregate TAT improvement in conjunction with a greater than 90% average utilization of all hardware resources. This paper describes the experimental setup and procedures that produced the reported results.
Plaza, Stephen M.; Scheffer, Louis K.; Saunders, Mathew
2012-01-01
The ability to automatically segment an image into distinct regions is a critical aspect in many visual processing applications. Because inaccuracies often exist in automatic segmentation, manual segmentation is necessary in some application domains to correct mistakes, such as required in the reconstruction of neuronal processes from microscopic images. The goal of the automated segmentation tool is traditionally to produce the highest-quality segmentation, where quality is measured by the similarity to actual ground truth, so as to minimize the volume of manual correction necessary. Manual correction is generally orders-of-magnitude more time consuming than automated segmentation, often making handling large images intractable. Therefore, we propose a more relevant goal: minimizing the turn-around time of automated/manual segmentation while attaining a level of similarity with ground truth. It is not always necessary to inspect every aspect of an image to generate a useful segmentation. As such, we propose a strategy to guide manual segmentation to the most uncertain parts of segmentation. Our contributions include 1) a probabilistic measure that evaluates segmentation without ground truth and 2) a methodology that leverages these probabilistic measures to significantly reduce manual correction while maintaining segmentation quality. PMID:23028540
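The guidance strategy can be sketched as follows; the entropy-based uncertainty measure and the field names are illustrative assumptions, not the paper's actual probabilistic measure. The idea is simply to rank candidate boundaries by uncertainty and spend the limited manual-correction budget only on the most uncertain ones:

```python
import math

def entropy(probs):
    """Shannon entropy of a boundary's merge/split probabilities —
    a simple proxy for segmentation uncertainty."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def flag_for_review(boundaries, budget):
    """Rank boundaries by uncertainty and send only the `budget`
    most uncertain ones to the human proofreader."""
    ranked = sorted(boundaries, key=lambda b: entropy(b["probs"]), reverse=True)
    return [b["id"] for b in ranked[:budget]]

boundaries = [
    {"id": "a", "probs": [0.99, 0.01]},  # confident -> skip
    {"id": "b", "probs": [0.55, 0.45]},  # uncertain -> review
    {"id": "c", "probs": [0.90, 0.10]},
]
to_review = flag_for_review(boundaries, budget=1)  # ['b']
```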
DECISION MAKING IN NON-PROFIT ORGANIZATIONS DURING TIMES OF TURNAROUND
James R. Langabeer
Non-profit organizations differ in many respects from traditional industrial organizations, with a fundamental discriminate being the absence of a pure economic motivation. There is abundant research on decision making orientation in corporations during organizational declines and subsequent turnarounds. This paper contributes to the research on decision making by investigating how non-profits, when faced with potential for bankruptcy or closure, create
Policy change to improve pathology turnaround time and reduce costs – possible to do both?
Dimeski, Goce; Silvester, Breeann; Ungerer, Jacobus; Johnson, Leslie; Martin, Jennifer H.
2013-01-01
Background: Overcrowding and prolonged length of stay in emergency departments (ED) are increasing problems in hospitals. Rapid availability of all laboratory results has an impact on clinical decision-making, admission or discharge decisions, and resource utilisation. An increasing number of our urinary drugs-of-abuse (DOA) screens had a turnaround time (TAT) of up to 33 days after the discharge of the patient. Materials and methods: Following an audit and a consultation period with clinicians using the service, a policy change was implemented to reduce the use of gas chromatography-mass spectrometry (GCMS): all requests would have a standard immunoassay (IA) test panel undertaken unless GCMS analysis (including for medico-legal purposes) was specifically requested. Results: Almost all of the clinicians interviewed had no understanding of DOA screening or of the difference in the information generated by a confirmatory GCMS urine toxicology screen versus an IA DOA panel. It appeared that none of the patients surveyed in the audit would have had a different clinical decision made if a GCMS had not been undertaken. A post-change audit showed only 4.3% of drug requests for IA also received confirmatory GCMS testing. The estimated saving after the change was AU$127,000 in test costs alone over a two-year period. The TAT of GCMS results was reduced to 3-4 days. Conclusion: A laboratory-led behavioural change in test requesting is possible and sustainable, provided the rationale is clinically sound and is accompanied by consultation and the availability of phone advice on test requesting or interpretation. PMID:24266298
Time-averaged digital holography.
Picart, Pascal; Leval, Julien; Mounier, Denis; Gougeon, Samuel
2003-10-15
We demonstrate that it is possible to study the modal structures of a vibrating object with digitally recorded holograms by use of the time-averaging principle. We investigate the numerical reconstruction from a theoretical point of view, and we show that the numerically reconstructed object from a digital hologram is modulated by the zeroth-order Bessel function. Results of experiments in time-averaged digital holography are presented. PMID:14587769
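The zeroth-order Bessel modulation noted above can be sketched numerically. Assuming the reconstructed amplitude is modulated by J0 of a phase amplitude proportional to the local vibration amplitude (so fringe intensity follows J0 squared), nodal lines of the mode stay bright while antinodes darken; J0 is computed here from its integral representation using only the stdlib:

```python
import math

def bessel_j0(x, n=1000):
    """Zeroth-order Bessel function via its integral representation
    J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt (midpoint rule)."""
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n)) * h / math.pi

def fringe_intensity(phase_amplitude):
    """Time-averaged fringe intensity model: amplitude modulated by J0,
    so intensity goes as J0^2 of the vibration phase amplitude."""
    return bessel_j0(phase_amplitude) ** 2

# Nodes of the vibration mode (zero amplitude) stay bright: J0(0) = 1,
# while the first dark fringe appears near phase amplitude 2.405.
```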
Turnaround in Cyclic Cosmology
Baum, Lauris; Frampton, Paul H. [University of North Carolina, Chapel Hill, North Carolina 27599-3255 (United States)
2007-02-16
It is speculated how dark energy in a brane world can help reconcile an infinitely cyclic cosmology with the second law of thermodynamics. A cyclic model is described, in which dark energy with w<-1 equation of state leads to a turnaround at a time, extremely shortly before the would-be big rip, at which both volume and entropy of our Universe decrease by a gigantic factor, while very many independent similarly small contracting universes are spawned. The entropy of our model decreases almost to zero at turnaround but increases for the remainder of the cycle by a vanishingly small amount during contraction, empty of matter, then by a large factor during inflationary expansion.
PMID:17359014
Averaging facial expression over time
Haberman, Jason; Harp, Tom; Whitney, David
2010-01-01
The visual system groups similar features, objects, and motion (e.g., Gestalt grouping). Recent work suggests that the computation underlying perceptual grouping may be one of summary statistical representation. Summary representation occurs for low-level features, such as size, motion, and position, and even for high level stimuli, including faces; for example, observers accurately perceive the average expression in a group of faces (J. Haberman & D. Whitney, 2007, 2009). The purpose of the present experiments was to characterize the time-course of this facial integration mechanism. In a series of three experiments, we measured observers’ abilities to recognize the average expression of a temporal sequence of distinct faces. Faces were presented in sets of 4, 12, or 20, at temporal frequencies ranging from 1.6 to 21.3 Hz. The results revealed that observers perceived the average expression in a temporal sequence of different faces as precisely as they perceived a single face presented repeatedly. The facial averaging was independent of temporal frequency or set size, but depended on the total duration of exposed faces, with a time constant of ~800 ms. These experiments provide evidence that the visual system is sensitive to the ensemble characteristics of complex objects presented over time. PMID:20053064
NASA Astrophysics Data System (ADS)
Yamazaki, Kenji; Yamaguchi, Hiroshi
2015-06-01
The three-dimensional (3D) electron beam (EB) lithography devised by the authors, which uses EB writing on a sample from various directions, was upgraded. Although the original technique was effective for 3D nanofabrication in hard materials, it had two disadvantages: long turnaround time (TAT) and 3D alignment accuracy that fell short of its high resolution. The upgraded technique uses new methods of focusing and fine adjustment of rotations in an EB writer with higher positioning accuracy, in addition to a rotation drive with higher rotation accuracy than the previous apparatus. These have enabled us to significantly reduce the TAT and obtain sufficiently high 3D alignment accuracy. Moreover, 3D resist coating on all four side faces of a square microblock formed on a substrate, which is very useful for 3D nanofabrication, is demonstrated. These improvements are accelerating the technique's application to various 3D nanodevices.
ERIC Educational Resources Information Center
Herman, Rebecca
2012-01-01
This article explores the research on turning around low performing schools to summarize what we know, what we don't know, and what this means for scaling school turnaround efforts. "School turnaround" is defined here as quick, dramatic gains in academic achievement for persistently low performing schools. The article first considers the case set…
Nyberg, M
1991-04-01
A large Medicare population, a higher than average length of stay, an aging medical staff, and difficulty in retaining professional staff were a few of the indications that Nazareth Hospital, Philadelphia, was headed for financial difficulties. Unfortunately, Nazareth leaders overlooked these warnings because the hospital was concentrating on a major construction project. In 1989 the hospital lost approximately $5 million from operations, resulting in debt covenant violations on a $33 million bond issue. Nazareth then adopted a revitalization program with the following basic components: establishing extensive cost-cutting measures to help reverse the financial deficits; retaining outside consultants to serve as management advocates and to keep the repositioning effort moving forward; and enhancing long-range strategic planning to lay firm groundwork for the hospital's future. As a result of these efforts, Nazareth went from a fiscal year 1989 operating loss to a 1990 year-end break-even position. Nazareth leaders attribute this success to the total commitment by the involved constituencies: board, managers, medical staff, and employees. PMID:10109970
Exploring Fish School Algorithm for Improving Turnaround Time: An Experience of Content Retrieval
Soumya Banerjee; Santi Caballe
2011-01-01
In the distributed e-learning paradigm, learning pedagogy demands different content-retrieval methodologies once a certain boundary of learning is reached. Learners are therefore expected to retrieve content, and to improvise, at a substantially faster rate. The overall learning process converges within a finite usage time, and learners return to the same access point. The present paper proposes…
Random time averaged diffusivities for Lévy walks
NASA Astrophysics Data System (ADS)
Froemberg, D.; Barkai, E.
2013-07-01
We investigate a Lévy walk alternating between velocities ±v0 of opposite sign. The sojourn time probability distribution at large times is a power law lacking its mean or second moment. The first case corresponds to a ballistic regime where the ensemble-averaged mean squared displacement (MSD) at large times is ⟨x²⟩ ∝ t²; the latter to enhanced diffusion with ⟨x²⟩ ∝ t^ν, 1 < ν < 2. The correlation function and the time-averaged MSD are calculated. In the ballistic case, the deviations of the time-averaged MSD from purely ballistic behavior are shown to be distributed according to a Mittag-Leffler density function. In the enhanced-diffusion regime, the fluctuations of the time-averaged MSD vanish at large times, yet very slowly. In both cases we quantify the discrepancy between the time-averaged and ensemble-averaged MSDs.
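A minimal simulation sketch of such a walk: the particle moves at ±v0, alternating direction after each flight, with power-law sojourn times sampled by inverse transform. The parameter names and the truncation of the final flight are implementation choices, not the paper's:

```python
import random

def levy_walk(total_time, v0=1.0, alpha=1.5, t0=1.0, rng=None):
    """Simulate a Lévy walk alternating between velocities +v0 and -v0.
    Sojourn times follow psi(t) ~ t^-(1+alpha) for t > t0, sampled via
    inverse transform: t = t0 * u**(-1/alpha). With 1 < alpha < 2 the
    mean exists but the second moment diverges (enhanced diffusion)."""
    rng = rng or random.Random(0)
    x, t, v = 0.0, 0.0, v0
    while t < total_time:
        tau = t0 * rng.random() ** (-1.0 / alpha)
        tau = min(tau, total_time - t)  # truncate the final flight
        x += v * tau
        t += tau
        v = -v                          # alternate direction
    return x
```

Since the speed never exceeds v0, the displacement is bounded by v0 * total_time, which is the ballistic envelope the abstract refers to.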
Correlated continuous time random walk with time averaged waiting time
NASA Astrophysics Data System (ADS)
Lv, Longjin; Ren, Fu-Yao; Wang, Jun; Xiao, Jianbin
2015-03-01
In this paper, we study the dynamics of a correlated continuous time random walk with time-averaged waiting time. The mean square displacement (MSD) shows that this process is subdiffusive and that the generalized Einstein relation holds. We also show that the asymptotic behavior of the probability density function (PDF) of this process is stretched Gaussian. Finally, by computing the time-averaged MSD, we find that ergodicity breaking occurs in this process.
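The time-averaged MSD used in analyses like this one is conventionally the sliding-window average over a single trajectory, the discrete form of (1/(T-Δ)) ∫ [x(t+Δ) - x(t)]² dt; a short stdlib sketch:

```python
def time_averaged_msd(x, lag):
    """Time-averaged mean squared displacement of a single trajectory
    at a given lag: the average of (x[i+lag] - x[i])**2 over all
    window positions i along the trajectory."""
    n = len(x) - lag
    if n <= 0:
        raise ValueError("lag must be shorter than the trajectory")
    return sum((x[i + lag] - x[i]) ** 2 for i in range(n)) / n

traj = [0.0, 1.0, 1.0, 2.0, 3.0, 3.0]
msd1 = time_averaged_msd(traj, 1)  # mean of [1, 0, 1, 1, 0] = 0.6
```

Ergodicity breaking shows up when this quantity, even for long trajectories, does not converge to the ensemble-averaged MSD.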
Statistics of time averaged atmospheric scintillation
Stroud, P.
1994-02-01
A formulation has been constructed to recover the statistics of the moving average of the scintillation Strehl from a discrete set of measurements. A program of airborne atmospheric propagation measurements was analyzed to find the correlation function of the relative intensity over displaced propagation paths. The variance in continuous moving averages of the relative intensity was then found in terms of the correlation functions. An empirical formulation of the variance of the continuous moving average of the scintillation Strehl has been constructed. The resulting characterization of the variance of the finite time averaged Strehl ratios is being used to assess the performance of an airborne laser system.
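The link between the correlation function and the variance of a continuous moving average can be sketched with the standard stationary-process identity Var = (2/T) ∫_0^T (1 - τ/T) C(τ) dτ; this is the textbook result, not necessarily the empirical formulation the abstract constructs:

```python
import math

def variance_of_time_average(cov, T, n=1000):
    """Variance of a continuous moving average over a window of length T
    for a stationary process with autocovariance `cov`:
        Var = (2/T) * integral_0^T (1 - tau/T) * cov(tau) d(tau),
    evaluated with the midpoint rule."""
    h = T / n
    acc = 0.0
    for k in range(n):
        tau = (k + 0.5) * h
        acc += (1.0 - tau / T) * cov(tau)
    return 2.0 / T * acc * h

# Check with an exponential covariance C(tau) = exp(-tau / t_c), t_c = 1:
var = variance_of_time_average(lambda t: math.exp(-t / 1.0), T=10.0)  # ~0.18
```

The averaging window T relative to the correlation time controls how much scintillation variance survives in the averaged Strehl.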
Turnaround Principal Competencies
ERIC Educational Resources Information Center
Steiner, Lucy; Barrett, Sharon Kebschull
2012-01-01
When the Minneapolis Public Schools first set out to hire turnaround school principals, administrators followed their usual process--which focused largely on reputation and anecdotal support and considered mainly internal candidates. Yet success at the complicated task of turning around the fortunes of a failing school depends on exceptionally…
Corporate turnaround and financial distress
Malcolm Smith; Christopher Graves
2005-01-01
Purpose – Drawing on variables cited in the turnaround literature, this study aims to explore whether information contained within annual reports is useful in distinguishing between distressed companies that enact a turnaround and those that fail. Design/methodology/approach – This study develops a discriminant model to identify distressed companies that have turnaround potential. Findings – Analysis of the results reveals that…
Measuring Time-Averaged Blood Pressure
NASA Technical Reports Server (NTRS)
Rothman, Neil S.
1988-01-01
Device measures time-averaged component of absolute blood pressure in artery. Includes compliant cuff around artery and external monitoring unit. Ceramic construction in monitoring unit suppresses ebb and flow of pressure-transmitting fluid in sensor chamber. Transducer measures only static component of blood pressure.
Analysis of averaged multichannel delay times
N. G. Kelkar; M. Nowakowski
2008-05-05
The physical significance, and the pros and cons, of different time-delay formalisms are discussed. The delay-time matrix introduced by Eisenbud, where only s-waves participate in a reaction, is in general related to the definition of an angular time delay, which is shown not to be equivalent to the so-called phase time delay of Eisenbud and Wigner, even for single-channel scattering. Whereas the expression due to Smith, which is derived from a time-delayed radial wave packet, is consistent with a lifetime matrix that is Hermitian, this is not true for any Eisenbud-type lifetime matrix, which violates time-reversal invariance. Extending the angular time delay of Nussenzveig to multiple channels, we show that if one performs an average over the directions and subtracts the forward-angle contribution containing an interference of the incident and scattered waves, the multichannel angle-dependent average time delay reduces to the one given by Smith. The present work also rectifies a recent misinterpretation of the relation due to Smith.
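For reference, the two quantities contrasted above can be written compactly. Sign and factor-of-2 conventions vary between authors, so this is a sketch of one common convention rather than the paper's own notation:

```latex
% Eisenbud-Wigner phase time delay for a single channel with phase shift \delta(E)
\tau_{\mathrm{ph}}(E) = 2\hbar\,\frac{\mathrm{d}\delta}{\mathrm{d}E}

% Smith's lifetime (Q) matrix, Hermitian whenever S(E) is unitary;
% its diagonal elements give the average time delay in each channel
Q(E) = -\,i\hbar\,S^{\dagger}\frac{\mathrm{d}S}{\mathrm{d}E},
\qquad \langle \tau_i \rangle = Q_{ii}
```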
Time averaging of instantaneous quantities in HYDRA
McCallen
1996-01-01
For turbulent flow, the evaluation of direct numerical simulation (DNS), where all scales are resolved, and large-eddy simulation (LES), where only the large scales are resolved, is difficult because the results are three-dimensional and transient. To simplify the analysis, the instantaneous flow field can be averaged in time for evaluation and comparison with experimental results. The incompressible Navier-Stokes flow code HYDRA has…
Making Sense of School Turnarounds
ERIC Educational Resources Information Center
Hess, Frederick M.
2012-01-01
Today, in a sector flooded with $3.5 billion in School Improvement Grant funds and the resulting improvement plans, there's great faith that "turnaround" strategies are a promising way to tackle stubborn problems with persistently low-performing schools. Unlike traditional reform efforts, with their emphasis on incremental improvement, turnarounds…
CEO perceptions and corporate turnaround
Stephen E. Clapham; Charles R. Schwenk; Cam Caldwell
2005-01-01
CEO decision-making is profoundly impacted by perceptions about organizational identity. In this paper we report on the results of a study of communication patterns in letters to shareholders of a sample of firms that achieved successful turnaround. The results indicate that replacement of the CEO is a common but not essential element of successful turnaround. Results show distinctly different patterns
Off the Clock: What More Time Can (and Can't) Do for School Turnarounds. Education Sector Reports
ERIC Educational Resources Information Center
Silva, Elena
2012-01-01
If less time in the classroom is a cause of poor student performance, can adding more time be the cure? This strategy underlies a major effort to fix the nation's worst public schools. Billions of federal stimulus dollars are being spent to expand learning time on behalf of disadvantaged children. And extended learning time (ELT) is being proposed…
Contrast of time-averaged images of the solar granulation
P. N. Brandt; A. V. Getling
2004-01-01
The time-averaged images of the solar granulation exhibit a slower decrease in contrast with averaging time than do time-averaged images of numerically simulated granulation and time-averaged random fields with parameters typical of granulation. This is consistent with indications of long-lived structures in the granulation pattern.
The synchronous (time domain) average revisited
NASA Astrophysics Data System (ADS)
Braun, S.
2011-05-01
Synchronous averaging is one of the most powerful techniques for the extraction of periodic signals from a composite signal. It is based on averaging periodic sections, necessitating a priori knowledge of the period sought. It is one of the most effective signal processing tools applied to rotating machinery, and has been known and used for decades. It will be shown that the synchronous average is actually just one of many possible "synchronous filters" that could be used to extract such periodic components. A novel signal analysis, geared to periodic signals, will be introduced, with the potential of extracting more complex phenomena typical of some rotating machinery. The examples given are based on periodic oscillating transients with various additive interferences. The possibility of additional signal processing approaches is also discussed.
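The basic operation can be sketched in a few lines: slice the signal into sections of the (known) period and average them sample-by-sample, so components not phase-locked to that period average toward zero:

```python
def synchronous_average(signal, period):
    """Synchronous (time-domain) average: split the signal into
    consecutive sections of the known period (in samples) and average
    them point-by-point, attenuating non-synchronous components."""
    n_periods = len(signal) // period
    if n_periods == 0:
        raise ValueError("signal shorter than one period")
    return [
        sum(signal[k * period + i] for k in range(n_periods)) / n_periods
        for i in range(period)
    ]

# Periodic pattern [1, 2, 3, 4] buried under sign-flipping interference
sig = [1 + 1, 2 - 1, 3 + 1, 4 - 1, 1 - 1, 2 + 1, 3 - 1, 4 + 1]
avg = synchronous_average(sig, period=4)  # -> [1.0, 2.0, 3.0, 4.0]
```

Note that interference which is itself synchronous with the chosen period is not removed, which is why the a priori period estimate matters.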
Fei, Yang; Zeng, Rong; Wang, Wei; He, Falin; Zhong, Kun
2015-01-01
Introduction: To investigate the state of the art of intra-laboratory turnaround time (intra-TAT), provide suggestions, and find out whether laboratories accredited by the International Organization for Standardization (ISO) 15189 or the College of American Pathologists (CAP) show better performance on intra-TAT than non-accredited ones. Materials and methods: 479 Chinese clinical laboratories participating in the external quality assessment programs for chemistry, blood gas, and haematology tests organized by the National Centre for Clinical Laboratories in China were included in our study. General information and the median intra-TAT of routine and stat tests over the preceding week were requested in the questionnaires. Results: The response rates for clinical biochemistry, blood gas, and haematology testing were 36% (479/1307), 38% (228/598), and 36% (449/1250), respectively. More than 50% of laboratories indicated that they had set intra-TAT median goals, and almost 60% declared that they generally monitored intra-TAT for every analyte they performed. Among all analytes investigated, the intra-TAT of haematology analytes was shorter than that of biochemistry analytes, while the intra-TAT of blood gas analytes was the shortest. There were significant differences between median intra-TAT on different days of the week for routine tests. However, there were no significant differences in median intra-TAT between accredited and non-accredited laboratories. Conclusions: Many laboratories in China are aware of intra-TAT control and are making an effort to reach the target. There is still room for improvement. Accredited laboratories have better intra-TAT monitoring and target setting than non-accredited ones, but there are no significant differences in the median intra-TAT they report.
Subdiffusion in time-averaged, confined random walks
Thomas Neusius; Igor M. Sokolov; Jeremy C. Smith
2009-01-01
Certain techniques characterizing diffusive processes, such as single-particle tracking or molecular dynamics simulation, provide time averages rather than ensemble averages. Whereas the ensemble-averaged mean-squared displacement (MSD) of an unbounded continuous time random walk (CTRW) with a broad distribution of waiting times exhibits subdiffusion, the time-averaged MSD, δ̄², does not. We demonstrate that, in contrast to the unbounded CTRW, in
Ensemble averaging vs. time averaging in molecular dynamics simulations of thermal conductivity
NASA Astrophysics Data System (ADS)
Gordiz, Kiarash; Singh, David J.; Henry, Asegun
2015-01-01
In this report, we compare time averaging and ensemble averaging as two different methods for phase space sampling in molecular dynamics (MD) calculations of thermal conductivity. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium MD. We introduce two different schemes for the ensemble averaging approach and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical MD, the approaches used for generating independent trajectories may find their greatest utility in computationally expensive simulations such as first principles MD. For such simulations, where each time step is costly, time averaging can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each trajectory is independent. For this reason, particularly when using massively parallel architectures, ensemble averaging can result in much shorter simulation times (~100-200X), but exhibits similar overall computational effort.
TIME INVARIANT MULTI ELECTRODE AVERAGING FOR BIOMEDICAL SIGNALS
Orellana, R. Martinez; Erem, B.; Brooks, D.H.
2013-01-01
One of the biggest challenges in averaging ECG or EEG signals is to overcome temporal misalignments and distortions, due to uncertain timing or complex non-stationary dynamics. Standard methods average individual leads over a collection of epochs on a time-sample by time-sample basis, even when multi-electrode signals are available. Here we propose a method that averages multi-electrode recordings simultaneously by using spatial patterns and without relying on time or frequency. PMID:24465170
A parametric model to average time–intensity taste data
D. Garrido; A. Calviño; G. Hough
2001-01-01
Time–intensity (T–I) curves are often summarized by average curves. In this study, a different method is proposed. A parametric model is derived in which intensity is described by a continuous function of time. Using this model, an average curve may be calculated by averaging the parameters derived from adjusting the individual T–I curves to the model. The parameters of the
ERIC Educational Resources Information Center
Mass Insight Education (NJ1), 2011
2011-01-01
Organized by the key conditions areas for turnaround, "People, Program, Time and Money," this tool offers sample language for each contract element to serve as a model for modifications from a traditional CBA that may support a district's turnaround efforts. Sample language is offered from existing provisions in district-wide collective bargaining…
Improving the Average Response Time in Collective I/O
Jin, Chen; Sehrish, Saba; Liao, Wei-keng; Choudhary, Alok; Schuchardt, Karen L.
2011-09-21
In collective I/O, MPI processes exchange requests so that the rearranged requests can result in the shortest file system access time. Scheduling the exchange sequence determines the response time of participating processes. Existing implementations that simply follow the increasing order of file offsets do not necessarily produce the best performance. To minimize the average response time, we propose three scheduling algorithms that consider the number of processes per file stripe and the number of accesses per process. Our experimental results demonstrate improvements of up to 50% in the average response time using two synthetic benchmarks and a high-resolution climate application.
Time-average and time-dependent parts of core flow
Olson, Peter L.
Time-average and time-dependent parts of core flow. Hagay Amit and Peter Olson, April 11, 2005. … geostrophic flow. We obtain snapshot images of core flow at … year intervals, which we combine to give time-average and time-dependent parts of the motion over this time interval. The most prominent time-average flow
Aircraft operational costs and turnaround efficiency at airports
Cheng-Lung Wu; Robert E Caves
2000-01-01
The aim of this paper is to investigate the relationship between flight schedule punctuality and aircraft turnaround efficiency at airports, in order to minimise system operational costs while maintaining a required level of schedule punctuality. Investigations of aircraft operational costs, passenger delay costs and airline schedule time-opportunity costs are carried out in this paper. A mathematical model is
Series Overview. Sustaining School Turnaround at Scale. Brief 1
ERIC Educational Resources Information Center
Education Resource Strategies, 2012
2012-01-01
Members of the non-profit organization Education Resource Strategies (ERS) have worked for over a decade with leaders of urban school systems to help them organize talent, time and technology to support great schools at scale. One year into the Federal program they are noticing significant differences in district turnaround approaches, engagement…
Time-average based on scaling law in anomalous diffusions
NASA Astrophysics Data System (ADS)
Kim, Hyun-Joo
2015-05-01
To resolve the measurement ambiguity brought about by the weak ergodicity breaking that appears in anomalous diffusions, we have suggested the time-averaged mean squared displacement (MSD), δ̄²(τ), with an integration interval depending linearly on the lag time τ. For the continuous time random walk describing subdiffusive behavior, we have found that δ̄²(τ) ~ τ^α, like the ensemble-averaged MSD, which makes it possible to measure the proper exponent values through time averaging in experiments such as single-molecule tracking. We have also found that this originates from the scaling nature of the MSD at an aging time in anomalous diffusion, and confirmed it through numerical results for another microscopic non-Markovian model showing subdiffusion and superdiffusion with the origin of memory enhancement.
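The conventional sliding-window time-averaged MSD that this line of work builds on can be sketched as follows. This is the standard estimator, not Kim's modified integration interval, and the trajectory and symbols are illustrative assumptions:

```python
import numpy as np

def time_averaged_msd(x, lag):
    """Time-averaged MSD of one trajectory at an integer lag:
    the mean over t of (x[t + lag] - x[t])**2."""
    disp = x[lag:] - x[:-lag]
    return np.mean(disp ** 2)

# Ordinary Brownian trajectory: the time-averaged MSD grows roughly linearly
# in the lag, i.e. the fitted exponent alpha is close to 1 (alpha < 1 would
# signal subdiffusion, alpha > 1 superdiffusion).
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=100_000))
lags = np.array([10, 100, 1000])
msd = np.array([time_averaged_msd(x, lag) for lag in lags])
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]
```

For weakly non-ergodic processes such as the CTRW, this estimator and the ensemble average disagree, which is exactly the measurement problem the abstract addresses.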
The characteristic averaging time for the surface-layer fluxes
NASA Astrophysics Data System (ADS)
Calaf, M.; Iungo, V.; Oldroyd, H. J.; Giometto, M. G.; Parlange, M. B.
2014-12-01
The choice of the representative averaging time to compute surface-layer fluxes remains a source of discrepancy between studies. While the Ogive function (Oncley et al. 1996) has become the standard approach to determine the 'physically sound' averaging time, uncertainty remains on how to precisely select the minimum necessary averaging time. Alternatives based on multiresolution analysis exist, and they can further provide a characteristic time scale separating turbulence from mesoscale motions (Vickers and Mahrt, 2003, 2005). Yet little is learned about the inherent turbulent time scales and their corresponding contributions to the overall surface-layer fluxes. Here a new approach based on Proper Orthogonal Decomposition (POD) will be presented, in which the extracted characteristic averaging times are energy optimal. Consequently, one can select from these POD-provided characteristic times the most relevant time scales, depending on the desired application and based on the overall energy contribution to the surface-layer fluxes. One of the main advantages of the POD technique compared to traditional Fourier analysis or wavelet decompositions is that the shape of the eigenfunctions is directly dictated by the input data and that the resultant eigenfunctions are energy-ordered in the most optimal sense. Thus, one can determine a priori the most 'physically relevant' averaging time without ambiguity. Examples of the proposed approach in varied complex scenarios will be presented, spanning multiple atmospheric stratifications and topography-driven flows. Further, results exploring the potential of the POD technique to determine the proper averaging times for tilt corrections will also be presented.
An averaging analysis of discrete-time indirect adaptive control
NASA Technical Reports Server (NTRS)
Phillips, Stephen M.; Kosut, Robert L.; Franklin, Gene F.
1988-01-01
An averaging analysis of indirect, discrete-time, adaptive control systems is presented. The analysis results in a signal-dependent stability condition and accounts for unmodeled plant dynamics as well as exogenous disturbances. This analysis is applied to two discrete-time adaptive algorithms: an unnormalized gradient algorithm and a recursive least-squares (RLS) algorithm with resetting. Since linearization and averaging are used for the gradient analysis, a local stability result valid for small adaptation gains is found. For RLS with resetting, the assumption is that there is a long time between resets. The results for the two algorithms are virtually identical, emphasizing their similarities in adaptive control.
Turnaround Optimization of Continuous Chemical Plants
Grossmann, Ignacio E.
· Large companies spend millions on turnarounds annually · Potential for significant savings · Practical limits on turnaround durations and frequencies. (Slide residue: example network of intermediate and final products; optimization and simulation approaches; results presented as a Gantt chart by week and unit.)
Time and ensemble averaging in time series analysis
Miroslaw Latka; Massimiliano Ignaccolo; Wojciech Jernajczyk; Bruce J. West
2010-01-01
In many applications expectation values are calculated by partitioning a single experimental time series into an ensemble of data segments of equal length. Such a single trajectory ensemble (STE) is a counterpart to a multiple trajectory ensemble (MTE), used whenever independent measurements or realizations of a stochastic process are available. The equivalence of STE and MTE for stationary systems was postulated
NASA Astrophysics Data System (ADS)
Sun, Yu; Dai, Meifeng; Xi, Lifeng
Recent work on networks has focused on weighted hierarchical networks, which are significantly different from un-weighted hierarchical networks. In this paper we study a family of weighted hierarchical networks which are recursively defined from an initial uncompleted graph, in which the weights of edges are assigned different values with a certain scale. Firstly, we study analytically the average weighted shortest path (AWSP) on the weighted hierarchical networks. Using a recursive method, we determine the AWSP explicitly. The obtained rigorous solution shows that the networks grow unbounded, with the logarithm of the network size, while the weighted shortest paths stay bounded. Then, for a biased random walk, we study the mean first-passage time (MFPT) between a hub node and any peripheral node. Finally, we deduce the analytical expression for the average of the MFPTs for a random walker originating from any node to first visit a hub node, which is named the average receiving time (ART). The obtained result shows that the ART is bounded or grows sublinearly with the network order, depending on the number of initial nodes and the weight factor, or grows quadratically with the iteration.
Determining average path length and average trapping time on generalized dual dendrimer
NASA Astrophysics Data System (ADS)
Li, Ling; Guan, Jihong
2015-03-01
Dendrimers have a wide range of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
Time and ensemble averaging in time series analysis
Latka, Miroslaw; Jernajczyk, Wojciech; West, Bruce J
2010-01-01
In many applications expectation values are calculated by partitioning a single experimental time series into an ensemble of data segments of equal length. Such a single trajectory ensemble (STE) is a counterpart to a multiple trajectory ensemble (MTE), used whenever independent measurements or realizations of a stochastic process are available. The equivalence of STE and MTE for stationary systems was postulated by Wang and Uhlenbeck in their classic paper on Brownian motion (Rev. Mod. Phys. 17, 323 (1945)) but surprisingly has not yet been proved. Using the stationary and ergodic paradigm of statistical physics -- the Ornstein-Uhlenbeck (OU) Langevin equation -- we revisit Wang and Uhlenbeck's postulate. In particular, we find that the variance of the solution of this equation is different for these two ensembles. While the variance calculated using the MTE quantifies the spreading of independent trajectories originating from the same initial point, the variance for the STE measures the spreading of two correlated r...
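The STE/MTE distinction is easy to reproduce numerically. The toy check below (an illustration with freely chosen parameters, not the authors' computation) simulates the OU process both as many independent trajectories started from the same point and as one long trajectory chopped into equal segments: the MTE variance grows from zero toward the stationary value σ²/(2θ), while the STE variance stays near that stationary value at every segment index.

```python
import numpy as np

def ou_path(x0, n_steps, dt=0.01, theta=1.0, sigma=1.0, rng=None):
    """Euler-Maruyama simulation of dx = -theta * x dt + sigma dW."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n_steps)
    x[0] = x0
    noise = sigma * np.sqrt(dt) * rng.normal(size=n_steps - 1)
    for i in range(1, n_steps):
        x[i] = x[i - 1] - theta * x[i - 1] * dt + noise[i - 1]
    return x

rng = np.random.default_rng(2)
n, m = 500, 400

# MTE: m independent trajectories, all starting at x0 = 0.
mte = np.array([ou_path(0.0, n, rng=rng) for _ in range(m)])
var_mte = mte.var(axis=0)   # 0 at t=0, relaxes toward sigma^2 / (2 * theta)

# STE: one long trajectory partitioned into m equal segments.
ste = ou_path(0.0, n * m, rng=rng).reshape(m, n)
var_ste = ste.var(axis=0)   # near the stationary value at every index
```

Comparing `var_mte` and `var_ste` element-wise makes the inequivalence of the two ensembles, the point of the abstract, directly visible.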
Time average triple-image holography for measuring vibration
Zifeng Ye; Zongliang Jia
1985-01-01
A time-average triple-image holography technique is described. In the method, three linearly independent equal pattern element (EPE) plates are obtained on which the corresponding elements (called the address code) of a matrix expressing the plate represent the same point (X, Y, Z) on the specimen. The address codes are obtained using the theory of projection geometry. Data acquisition and processing
Improvement of Rate Shift in Average Atomic Time Scale Algorithm
Yuko Hanado; Mizuhiko Hosokawa
2008-01-01
In an average atomic time scale algorithm, we developed a new method of suppressing the influence of a large sudden rate shift in some clocks. The method entails two points. One is to add a clock rate check process: the weight of a bad clock is forced to zero if its latest rate changes greatly from its past rate.
Learning One-Variable Pattern Languages in Linear Average Time
Reischuk, Rüdiger
Learning One-Variable Pattern Languages in Linear Average Time. Rüdiger Reischuk, Institut … Japan, thomas@i.kyushuu.ac.jp. Abstract: A new algorithm for learning one-variable pattern languages … is replaced by a string to generate random samples of the target pattern language … this algorithm converges
Time-averaged order parameter restraints in molecular dynamics simulations.
Hansen, Niels; Heller, Fabian; Schmid, Nathan; van Gunsteren, Wilfred F
2014-11-01
A method is described that allows experimental S(2) order parameters to be enforced as a time-averaged quantity in molecular dynamics simulations. The two parameters that characterize time-averaged restraining, the memory relaxation time and the weight of the restraining potential energy term in the potential energy function used in the simulation, are systematically investigated based on two model systems, a vector with one end restrained in space and a pentapeptide. For the latter it is shown that the backbone N-H order parameter of individual residues can be enforced such that the spatial fluctuations of quantities depending on atomic coordinates are not significantly perturbed. The applicability to realistic systems is illustrated for the B3 domain of protein G in aqueous solution. PMID:25312596
Applications of time-averaged digital holographic interferometry
Nazif Demoli; Kristina Šariri; Dalibor Vukicevic; Marc Torzynski
This work reports on recent results in the area of time-averaged digital holographic interferometry. Two techniques are described, one that removes the zero-order reconstruction term, thus allowing recording of large objects, and the other by which hidden stationary deformations can be detected. Quantifying hidden deformations is important because its evaluation allows direct measurement of a stationary bias strain
Adjoint sensitivity analysis of time averaged quantities for unsteady flows
NASA Astrophysics Data System (ADS)
Wang, Qiqi
2011-11-01
Sensitivity analysis is an essential ingredient for data assimilation, aerodynamic design, uncertainty quantification and optimal flow control. In particular, the adjoint sensitivity analysis method has been shown to solve the very high dimensional optimization problems typically found in these applications. This talk focuses on recent developments in extending adjoint sensitivity analysis to unsteady flows. The adjoint equation of unsteady flows must be integrated backwards in time, and each backward time step must use the flow solution at the corresponding time. As a result, the entire time history of the flow solution must be either stored or recalculated. The invention of checkpointing schemes provides an economical solution to this challenge; in particular, the dynamic checkpointing scheme makes this solution practical for computational fluid dynamics problems. In unsteady flows, the quantities of interest are often long-time averages. We demonstrate that sensitivity analysis of these long-time-averaged quantities poses significant new challenges. A novel windowing scheme is developed to compute the correct sensitivity for periodic unsteady flows, such as laminar vortex shedding. Initial investigation of sensitivity analysis of chaotic unsteady flows, i.e., transitional and turbulent flows, is also discussed. This work is supported by a subcontract of the PSAAP program at Stanford awarded to MIT.
Multiwave tomography in a closed domain: averaged sharp time reversal
NASA Astrophysics Data System (ADS)
Stefanov, Plamen; Yang, Yang
2015-06-01
We study the mathematical model of multiwave tomography including thermo and photoacoustic tomography with a variable speed for a fixed time interval [0,T]. We assume that the waves reflect from the boundary of the domain. We propose an averaged sharp time reversal algorithm. In case of measurements on the whole boundary, we give an explicit solution in terms of a Neumann series expansion. When the measurements are taken on a part of the boundary, we show that the same algorithm produces a parametrix. We present numerical reconstructions in both the full boundary and the partial boundary data case.
Stability of stochastic nonlinear switched systems with average dwell time
NASA Astrophysics Data System (ADS)
Wu, Xiaotai; Yan, Litan; Zhang, Wenbing; Tang, Yang
2012-03-01
This paper concerns the stability of nonlinear stochastic switched systems with delay. By using the Razumikhin method and the average dwell time approach, some sufficient criteria of stability are derived. It is shown that when the Lyapunov-like functions are decreasing in all active subsystems, the switched system is pth-moment exponentially stable. Moreover, it is also shown that under some conditions the system can be pth-moment stable and global asymptotically stable, where the Lyapunov-like functions are increasing on some intervals. The effectiveness of the proposed results is illustrated by two examples.
A time Fourier analysis of zonal averaged ozone heating rates
NASA Technical Reports Server (NTRS)
Wang, P.-H.; Wu, M.-F.; Deepak, A.; Hong, S.-S.
1981-01-01
A time-Fourier analysis is presented for the yearly variation of the zonally averaged ozone heating rates in the middle atmosphere, based on a model study. The ozone heating rates are determined by utilizing two-dimensional (altitude-latitude) ozone distributions and by including the effect of the curved earth's atmosphere. In addition, assumptions are introduced for the yearly variations of the ozone distributions due to the lack of sufficient existing ozone data. Among other results, it is shown that the first harmonic component indicates that the heating rates are completely out of phase between the northern and southern hemispheres. The second Fourier component shows a symmetric pattern with respect to the equator, as well as five distinct local extreme values of the ozone heating rate. The third harmonic component shows a pattern close to that of the first component except in the regions above 70 deg between 45-95 km in both hemispheres.
Textiles, Tariffs, and Turnarounds: Profits Improved.
ERIC Educational Resources Information Center
Aronoff, Craig
1986-01-01
The U.S. textile industry may serve as a classic study in regeneration through market forces. The industry has recently made a turnaround in profits after having been recognized as an industry that was losing most of its profits to overseas producers. The reason for the emerging strength of the industry is that it began to innovate after a…
Time dependence of average regional sulfur oxide concentrations
Kleinman, L I; Carney, T A; Meyers, R E
1980-03-01
The atmospheric burden of SO2 and SO4 in a large regional area has been calculated as a function of time during the July 1978 SURE intensive period. A formalism is developed in which a highly parameterized description of deposition and chemical conversion is combined with a realistic emissions inventory and with realistic trajectories as calculated with the BNL long-range air quality model, AIRSOX. Temporal variations in the regional atmospheric burden are calculated as being due to variations in the advection of material through the region, with the advected pollutant having a characteristic half-life for its formation and decay. The calculated atmospheric burdens are divided by a nominal mixing height to derive an average ground-level concentration. Comparison is made with the corresponding concentrations derived from SURE observations. The regions for which these comparisons have been made are in the Midwest and East and are similar in size (approx. 5 x 10^5 km^2) and location to areas commonly affected by regional sulfate episodes.
Averaging of time - periodic systems without a small parameter
Mickaël D. Chekroun; Michael Ghil; Jean Roux; Ferenc Varadi
2006-01-01
In this article, we present a new approach to averaging in non-Hamiltonian systems with periodic forcing. The results here do not depend on the existence of a small parameter. In fact, we show that our averaging method fits into an appropriate nonlinear equivalence problem, and that this problem can be solved formally by using the Lie transform framework to linearize
On the Brownian meander and excursion conditioned to have a fixed time average
Lorenzo Zambotti
2006-01-01
We study the density of the time average of the Brownian meander/excursion over the time interval [0,1]. Moreover, we give an expression for the Brownian meander/excursion conditioned to have a fixed time average.
Time-averaged images produced by optical systems with time-varying pupils
W. T. Welford
1971-01-01
If the pupil of a coherently illuminated optical system varies in time, the time-averaged image is equivalent to that of a system with a certain fixed pupil and partially coherent illumination. An eccentrically rotating circular pupil in this way simulates illumination by a ring source.
Time Series ARIMA Models of Undergraduate Grade Point Average.
ERIC Educational Resources Information Center
Rogers, Bruce G.
The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
Scheduling to Minimize the Average Completion Time of Dedicated Tasks
Foto N. Afrati; Evripidis Bampis; Aleksei V. Fishkin; Klaus Jansen; Claire Kenyon
2000-01-01
We propose a polynomial time approximation scheme for scheduling a set of dedicated tasks on a constant number m of processors in order to minimize the sum of completion times, Pm|fix_j|∑C_j. In addition, we give a polynomial time approximation scheme for the weighted preemptive problem with release dates, Pm|fix_j, pmtn, r_j|∑w_j C_j.
The time-averaged paleomagnetic field 0–5 Ma
M. W. McElhinny; P. L. McFadden; R. T. Merrill
1996-01-01
Persistent departures from the geocentric axial dipole field model of the time-averaged paleomagnetic field over the past 5 Myr have been analyzed using oceanic data from deep-sea cores and continental data from igneous rocks and sediments. The data set comprises the equivalent of 9490 spot readings of the field (5831 normal and 3659 reverse) from 930 groups of data.
OBIT DEVELOPMENT MEMO SERIES NO. 13: Effects of Baseline Dependent Time Averaging of
Groppi, Christopher
This memo explores some of the imaging effects of time averaging the data in a baseline-dependent fashion … in the image of 0.3%. Upper limits on the averaging time short enough to allow self-calibration (15 sec.) … imaged in order to evaluate the imaging effects of the averaging. Further tests were made to determine
Contrast of time-averaged image speckle pattern for a vibrating object
N. Takai
1978-01-01
In vibration analysis by speckle photography it is qualitatively known that the contrast of speckle patterns averaged over the exposure time varies depending on the object motion. In this paper, the contrast of time-averaged image speckle patterns is theoretically evaluated and found to be determined by the ratio of the sinusoidally oscillating amplitude of the object motion to the average
Experimental and theoretical studies of time-averaged and time resolved rotor heat transfer
NASA Astrophysics Data System (ADS)
Dunn, M. G.; Rae, W. J.; Rigby, D. L.
Efforts in support of the Space Shuttle Main Engine (SSME) structural durability program have concentrated on obtaining detailed time-averaged and time-resolved (or phase-locked) measurements on a full-scale rotating turbine both with and without cold gas injection and on theoretical studies designed to improve the prediction capability for these turbine flows. The experimental efforts have concentrated on use of the Garrett TFE 731-2 hp turbine. However, it has been possible to apply the theoretical efforts to predicting heat-flux distributions obtained for two additional turbines - i.e., (1) the Garrett low aspect ratio turbine (LART) and (2) the Teledyne 702 turbine. The experimental technique is the short duration, shock-tunnel approach, in which fast-response, thin-film resistance thermometers are used to measure surface temperature histories at prescribed locations on the turbine component parts. Heat-flux values are then inferred from the temperature histories by using standard data reduction procedures. The turbine being used is the Garrett TFE 731-2 hp stage, and both the nozzle guide vanes and the rotor blades are heavily instrumented with thin-film heat-flux gauges. Depending on how the data from a particular heat-flux gauge are recorded, one can get either time-resolved (or phase-locked) or time-averaged results. Both types of data are illustrated.
Turnaround Migration as an Episode of Collective Behavior.
ERIC Educational Resources Information Center
Campbell, Rex R.; Garkovich, Lorraine
1984-01-01
Presents a collective behavior model for examining the population turnaround (urban to rural migration) of the 1960s/1970s as a specific form of mass movement. Specifies the value-added process that produced this particular episode at this particular point in social history. Explores factors influencing decline in turnaround migration. (Author/MH)
Model reduction for switched linear parameter varying systems with average dwell time
Lixian Zhang; Peng Shi; Michael Basin
2008-01-01
In this paper, the model reduction problem is studied for a class of discrete-time switched linear parameter varying systems under average dwell time switching. A parameterized reduced-order model is constructed and the corresponding existence conditions of such models are derived via LMI formulation. The minimal average dwell time among all the subsystems and the desired reduced system are obtained such
G. C Borgia; V Bortolotti; R. J. S Brown; P Fantazzini
1998-01-01
A method is presented for approximating fractional-power averages of relaxation times for data equispaced in log time, without the need to invert multiexponential relaxation data. This form of average permits emphasizing short or long times depending on the choice of the p value, thus making it possible to represent different specific properties of porous media. This method
Sustainment of Fine Particle Cloud by Means of Time-Averaged Particle Driving Force in Plasmas
Gohda, Takuma; Iizuka, Satoru [Department of Electrical Engineering, Graduate School of Engineering, Tohoku University Aza-Aoba, Aramaki, Aoba, 980-8579 Sendai (Japan)
2008-09-07
We have succeeded in sustaining a fine particle cloud by using a time-averaged particle driving (TAPD) method in an RF discharge plasma. The particles feel only the time-averaged force when the period of the pulses applied to the point-electrodes is shorter than the particle response time. The particles are transported to the middle point between two point-electrodes.
Applicability of time-average moiré techniques for chaotic oscillations. Minvydas Ragulskis
Rey Juan Carlos, Universidad
Time-averaged fringes are produced when the carrier moiré grating is contrast modulated by the function-dependent … it is important to understand what time-averaged image would be produced if a moiré grating is formed … time-averaged moiré images would help to improve the uncertainty of the inverse problem and to distinguish malfunc…
Scheduling to Minimize Average Completion Time Revisited: Deterministic On-line Algorithms
Megow, Nicole
2004-02-06
We consider the scheduling problem of minimizing the average weighted completion time on identical parallel machines when jobs are arriving over time. For both the preemptive and the nonpreemptive setting, we show that ...
Average Time Results - Child - Sample Size 500 (Error Bars = ± Std.Dev.)
Brown, Laura E.
(Figure residue: average runtime results for MMHC, OR1/OR2 (k = 5, 10, 20), SC (k = 5, 10), GS, PC, TPDA, and GES on the Child, Child3, and Child5 networks, sample size 500; error bars = ± std. dev.)
DSP Implementation of Time Delay Estimation Based on Bufferable Average Square Difference Matrix
Erfan Soltanmohammadi; S. Dastgerdi; A. H. Rezaie
2010-01-01
Time delay estimation is a fundamental subject in the field of source localization. Although many accurate approaches have been proposed to estimate time delay, only some of them are implementable. We propose two approaches based on the average square difference to estimate time delay under adverse conditions, estimating the delay between the signals of two sensors. In these approaches a
Averaging time adjustments to plume spread and limits to their application
Wilson, D.J. [Univ. of Alberta, Edmonton, Alberta (Canada). Dept. of Mechanical Engineering
1995-12-31
The averaging time exponent, p, in σ_y ∝ t_avg^p is shown to vary strongly, ranging from p = 0.05 for short averaging times (a few seconds to a few minutes) far downwind of a large source, to p = 0.5 for long averaging times (30 minutes to several hours) close to a small source. The widely used exponent of p = 0.2 is only a rough approximation in most cases. Gifford's random force model for crosswind plume spread is used to determine the effect of varying source size and downwind distance (puff travel time) on averaging time adjustments. Data from a set of field experiments are correlated to test their agreement with the model.
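The averaging-time adjustment σ_y ∝ t_avg^p described in the abstract amounts to a one-line power-law scaling. A minimal Python sketch follows; the function name and default exponent are illustrative, and per the abstract the appropriate p can range from about 0.05 to 0.5 depending on source size and downwind distance.

```python
def adjust_plume_spread(sigma_y, t_avg_old, t_avg_new, p=0.2):
    """Scale lateral plume spread sigma_y from one averaging time to
    another using sigma_y proportional to t_avg**p.  p = 0.2 is the
    widely used default the abstract calls a rough approximation; the
    proper value depends on source size and downwind distance."""
    return sigma_y * (t_avg_new / t_avg_old) ** p

# Illustrative: rescale a 1-minute sigma_y of 10 m to a 10-minute average.
sigma_10min = adjust_plume_spread(10.0, 60.0, 600.0, p=0.2)
```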
Vagedes, Jan; Bialkowski, Anja; Wiechers, Cornelia; Poets, Christian F.; Dietz, Klaus
2014-01-01
Objective The number of desaturations determined in recordings of pulse oximeter saturation (SpO2) primarily depends on the time over which values are averaged. As the averaging time in pulse oximeters is not standardized, it varies considerably between centers. To make SpO2 data comparable, it is thus desirable to have a formula that allows conversion between desaturation rates obtained using different averaging times for various desaturation levels and minimal durations. Methods Oxygen saturation was measured for 170 hours in 12 preterm infants with a mean number of 65 desaturations <90% per hour of arbitrary duration by using a pulse oximeter in a 2–4 s averaging mode. Using 7 different averaging times between 3 and 16 seconds, the raw red-to-infrared data were reprocessed to determine the number of desaturations (D). The whole procedure was carried out for 7 different minimal desaturation durations (≥1, ≥5, ≥10, ≥15, ≥20, ≥25, ≥30 s) below SpO2 threshold values of 80%, 85% or 90% to finally reach a conversion formula. The formula was validated by splitting the infants into two groups of six children each and using one group each as a training set and the other one as a test set. Results Based on the linear relationship found between the logarithm of the desaturation rate and the logarithm of the averaging time, the conversion formula is D2 = D1 (T2/T1)^c, where D2 is the desaturation rate for the desired averaging time T2, and D1 is the desaturation rate for the original averaging time T1, with the exponent c depending on the desaturation threshold and the minimal desaturation duration. The median error when applying this formula was 2.6%. Conclusion This formula enables the conversion of desaturation rates between different averaging times for various desaturation thresholds and minimal desaturation durations. PMID:24489887
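The published conversion formula D2 = D1 (T2/T1)^c is straightforward to apply in code. A minimal Python sketch follows; the exponent value used in the example is hypothetical, since the real values of c depend on the desaturation threshold and minimal duration and must be taken from the paper's tables.

```python
def convert_desaturation_rate(d1, t1, t2, c):
    """Convert a desaturation rate d1 measured with averaging time t1 (s)
    to the rate expected with averaging time t2 (s), using the paper's
    formula D2 = D1 * (T2 / T1) ** c.  The exponent c depends on the
    desaturation threshold and minimal duration; the value used below
    is illustrative only."""
    return d1 * (t2 / t1) ** c

# Illustrative: 65 desaturations/h recorded with a 3 s averaging time,
# converted to a 16 s averaging time with a hypothetical c = -0.8.
rate_16s = convert_desaturation_rate(65.0, 3.0, 16.0, c=-0.8)
```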
Reaction-time binning: A simple method for increasing the resolving power of ERP averages
Poli, Riccardo
Reaction-time binning: A simple method for increasing the resolving power of ERP averages. Stimulus-locked, response-locked, and ERP-locked averaging are effective methods for reducing artifacts in ERP analysis. However, they suffer from a magnifying-glass effect: they increase the resolution of specific ERPs
Time-gated ensemble-averaged imaging through highly scattering media
P. Naulleau; E. Leith; H. Chen; B. Hoover; J. Lopez
1997-01-01
A previously described ensemble-averaged imaging method [Opt. Lett. 21, 1691 (1996)] is extended by its combination with holographically implemented time-gated imaging. This combined method is shown to extend the effectiveness of the ensemble-averaged method by permitting imaging through thicker diffusers. Experimental results are presented.
Bias Corrections for Regional Estimates of the Time-averaged Geomagnetic Field
C. Constable; C. L. Johnson
2009-01-01
We assess two sources of bias in the time-averaged geomagnetic field (TAF) and paleosecular variation (PSV): inadequate temporal sampling, and the use of unit vectors in deriving temporal averages of the regional geomagnetic field. For the first temporal sampling question we use statistical resampling of existing data sets to minimize and correct for bias arising from uneven temporal sampling in
Turnaround operations analysis for OTV. Volume 2: Detailed technical report
NASA Technical Reports Server (NTRS)
1988-01-01
The objectives and accomplishments were to adapt and apply the newly created database of Shuttle/Centaur ground operations. Previously defined turnaround operations analyses were updated for ground-based OTVs (GBOTVs) and space-based OTVs (SBOTVs), design requirements were identified for both OTV and Space Station accommodations hardware, turnaround operations costs were estimated, and a technology development plan was generated to develop the required capabilities. Technical and programmatic data were provided for NASA pertinent to OTV ground and space operations requirements, turnaround operations, task descriptions, timelines and manpower requirements, OTV modular design and booster and Space Station interface requirements, the SBOTV accommodations development schedule, cost and turnaround operations requirements, and a technology development plan for ground and space operations and space-based accommodations facilities and support equipment. Significant conclusions are discussed.
Rethinking the nonmetropolitan turnaround: renewed rural growth or extended urbanization?
Wang, Xiaodong
2007-04-25
This dissertation proposes a new, synthesized perspective for explaining the "Nonmetropolitan Turnaround" in the 1970s and 1990s. By studying the definition of urbanization carefully, using the human ecological ...
34. BOILER HOUSE, COAL CONVEYOR AND TURNAROUND TRACK FOR COAL ...
34. BOILER HOUSE, COAL CONVEYOR AND TURN-AROUND TRACK FOR COAL CARS (NOTE: COAL CAR No. 6 IN FAR BACK GROUND) - Delaware County Electric Company, Chester Station, Delaware River at South end of Ward Street, Chester, Delaware County, PA
A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, Thomas L.
2003-01-01
A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics that is designed to capture this property predicts power-law scaling behavior for the second-moment statistics of area-averaged rain rate on the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented, and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.
NASA Astrophysics Data System (ADS)
Gomez, Vicente; Ramirez, Patricio; Cervera, Javier; Nasir, Saima; Ali, Mubarak; Ensinger, Wolfgang; Mafe, Salvador
2015-02-01
The possibility of taking advantage of a fluctuating environment for energy and information transduction is a significant challenge in biological and artificial nanostructures. We demonstrate here directional electrical transduction from fluctuating external signals using a single nanopore of conical shape immersed in an ionic aqueous solution. To this end, we characterize experimentally the average output currents obtained by the electrical rectification of zero time-average input potentials. The transformation of external potential fluctuations into nonzero time-average responses using a single nanopore in liquid state is of fundamental significance for biology and nanophysics. This energy and information conversion constitutes also a significant step towards macroscopic scaling using multipore membranes.
Approximation Schemes for Minimizing Average Weighted Completion Time with Release Dates
Foto N. Afrati; Evripidis Bampis; Chandra Chekuri; David R. Karger; Claire Kenyon; Sanjeev Khanna; Ioannis Milis; Maurice Queyranne; Martin Skutella; Clifford Stein; Maxim Sviridenko
1999-01-01
We consider the problem of scheduling jobs with release dates on machines so as to minimize their average weighted completion time. We present the first known polynomial time approximation schemes for several variants of this problem. Our results include PTASs for the case of identical parallel machines and a constant number of unrelated machines with and without
Influence of noise on the averaged false neighbors method for analyzing time series
Sofiane Ramdani; Jean-François Casties; Frédéric Bouchara; Denis Mottet
2006-01-01
This paper deals with the influence of noise on the averaged false neighbors method (AFN) proposed by L. Cao for analyzing time series and describing the dynamical properties of their underlying process. First, we give a theoretical justification of the AFN method results for a pure random time series (white Gaussian noise). Then we present some numerical experiments corresponding to
For Vehicle Routing Problems (VRP) with time-window, route-duration, and capacity constraints, a new expression to estimate VRP distances is developed that accounts for the cost of sequencing a customer with time windows. The expression predicts the average length of VRP routes with a high level of accuracy. KEYWORDS: Vehicle Routing
Code of Federal Regulations, 2010 CFR
2010-07-01
(b) Use EPA Reference Method 19 in appendix A of this part, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in appendix A of this part, section...
Code of Federal Regulations, 2010 CFR
2010-07-01
40 | Protection of Environment | 6 | 2010-07-01 | 2010-07-01 | false | How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? | 60.3042 | Section 60.3042 | Protection of Environment | ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) | AIR PROGRAMS (CONTINUED)...
Code of Federal Regulations, 2010 CFR
2010-07-01
40 | Protection of Environment | 6 | 2010-07-01 | 2010-07-01 | false | How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? | 60.2943 | Section 60.2943 | Protection of Environment | ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) | AIR PROGRAMS (CONTINUED)...
Time domain averaging and correlation-based improved spectrum sensing method for cognitive radio
NASA Astrophysics Data System (ADS)
Li, Shenghong; Bi, Guoan
2014-12-01
Based on the combination of time domain averaging and correlation, we propose an effective time domain averaging and correlation-based spectrum sensing (TDA-C-SS) method used in very low signal-to-noise ratio (SNR) environments. With the assumption that the received signals from the primary users are deterministic, the proposed TDA-C-SS method processes the received samples by a time averaging operation to improve the SNR. Correlation operation is then performed with a correlation matrix to determine the existence of the primary signal in the received samples. The TDA-C-SS method does not need any prior information on the received samples and the associated noise power to achieve improved sensing performance. Simulation results are presented to show the effectiveness of the proposed TDA-C-SS method.
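The SNR improvement from coherent time-domain averaging that the TDA-C-SS method builds on can be illustrated with a minimal Python sketch. This shows only the generic frame-averaging step, not the paper's correlation-matrix detector, and the function name is an assumption made here.

```python
def time_domain_average(frames):
    """Coherently average a list of equal-length sample frames.

    If each frame contains the same deterministic signal plus independent
    zero-mean noise, averaging N frames leaves the signal unchanged while
    reducing the noise variance by a factor of N (an SNR gain of roughly
    10*log10(N) dB), which is the premise of the TDA step in TDA-C-SS."""
    n = len(frames)
    length = len(frames[0])
    return [sum(f[i] for f in frames) / n for i in range(length)]
```

In the paper's setting, the averaged frame would then be correlated against a reference matrix to decide whether the primary-user signal is present.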
Neural Networks Used to Compare Designed and Measured Time-Average Patterns
NASA Technical Reports Server (NTRS)
Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.
1999-01-01
Electronic time-average holograms are convenient for comparing the measured vibration modes of fan blades with those calculated by finite-element models. At the NASA Lewis Research Center, neural networks recently were trained to perform what had been a simple visual comparison of the predictions of the design models with the measurements. Finite-element models were used to train neural networks to recognize damage and strain information encoded in subtle changes in the time-average patterns of cantilevers. But the design-grade finite element models were unable to train the neural networks to detect damage in complex blade shapes. The design-model-generated patterns simply did not agree well enough with the measured patterns. Instead, hybrid-training records, with measured time-average patterns as the input and model-generated strain information as the output, were used to effect successful training.
Optical analysis of time-averaged multiscale Bessel beams generated by a tunable acoustic gradient index of refraction lens
Arnold, Craig B.
Optical analysis of time-averaged multiscale Bessel beams generated by a tunable acoustic gradient index of refraction lens. Euan McLeod and Craig B. Arnold, Department of Mechanical and Aerospace Engineering. The paper analyzes the multiscale Bessel beams created using an alternative rapidly switchable device: a tunable acoustic gradient index of refraction lens.
Oxygen isotope enrichment ( 18O) as a measure of time-averaged transpiration rate
M. S. Sheshshayee; H. Bindumadhava; R. Ramesh; T. G. Prasad; M. R. Lakshminarayana; M. Udayakumar
2005-01-01
Experimental evidence is presented to show that the 18O enrichment in the leaf biomass and the mean (time-averaged) transpiration rate are positively correlated in groundnut and rice genotypes. The relationship between oxygen isotope enrichment and stomatal conductance (gs) was determined by altering gs through ABA and subsequently using contrasting genotypes of cowpea and groundnut. The Peclet model
G. E. P. Box; David A. Pierce
1970-01-01
Many statistical models, and in particular autoregressive-moving average time series models, can be regarded as means of transforming the data to white noise, that is, to an uncorrelated sequence of errors. If the parameters are known exactly, this random sequence can be computed directly from the observations; when this calculation is made with estimates substituted for the true parameter values,
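The residual-autocorrelation idea in this abstract underlies the Box-Pierce portmanteau statistic Q = n Σ_{k=1}^{m} r_k², where r_k is the lag-k sample autocorrelation of the fitted model's residuals. A minimal plain-Python sketch for illustration:

```python
def box_pierce_q(residuals, m):
    """Box-Pierce portmanteau statistic Q = n * sum_{k=1..m} r_k**2,
    where r_k is the lag-k sample autocorrelation of the residuals.
    Under the null hypothesis that the residuals are white noise, Q is
    approximately chi-squared with (m - p - q) degrees of freedom for
    an ARMA(p, q) fit."""
    n = len(residuals)
    mean = sum(residuals) / n
    c0 = sum((x - mean) ** 2 for x in residuals) / n   # lag-0 autocovariance
    q = 0.0
    for k in range(1, m + 1):
        ck = sum((residuals[i] - mean) * (residuals[i - k] - mean)
                 for i in range(k, n)) / n             # lag-k autocovariance
        q += (ck / c0) ** 2
    return n * q
```

A large Q relative to the chi-squared reference signals leftover serial correlation, i.e. an inadequate model fit.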
The Impact of Overassignment on Grade Point Averages of First-Time Freshmen.
ERIC Educational Resources Information Center
Desler, Mary; North, Gary
1978-01-01
Data show that the first-time freshmen, who were overassigned in residence halls, did no worse than other students concerning grade point average. The peer pressure may have been a motivating force. There was no male/female difference. Data also show no difference in environment perception. (LPG)
Sample Size Calculation for Time-Averaged Differences in the Presence of Missing Data
Ahn, Chul
2012-01-01
Sample size calculations based on two-sample comparisons of slopes in repeated measurements have been reported by many investigators. In contrast, the literature has paid relatively little attention to the sample size calculations for time-averaged differences in the presence of missing data in repeated measurements studies. Diggle et al. (2002) provided a sample size formula comparing time-averaged differences for continuous outcomes in repeated measurement studies assuming no missing data and the compound symmetry (CS) correlation structure among outcomes from the same subject. In this paper we extend Diggle et al.'s time-averaged difference sample size formula by allowing missing data and various correlation structures. We propose to use the generalized estimating equation (GEE) method to compare the time-averaged differences in repeated measurement studies and introduce a closed form formula for sample size and power. Simulation studies were conducted to investigate the performance of the GEE sample size formula with small sample sizes, the damped exponential family of correlation structures and missing data. The proposed sample size formula is illustrated using a clinical trial example. PMID:22553832
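The no-missing-data, compound-symmetry special case attributed to Diggle et al. (2002) in the abstract can be sketched in Python. The formula below is the standard time-averaged-difference sample-size expression under those assumptions; it is a sketch to convey the structure and should be checked against the paper before use.

```python
import math
from statistics import NormalDist

def n_per_group_time_averaged(delta, sigma, rho, m, alpha=0.05, power=0.8):
    """Sample size per group for detecting a time-averaged difference
    delta between two groups, with m repeated measurements per subject,
    within-subject correlation rho (compound symmetry), outcome SD sigma,
    and no missing data:

        n = 2 * (z_{1-alpha/2} + z_{power})**2 * sigma**2
              * (1 + (m - 1) * rho) / (m * delta**2)

    This is the CS special case the abstract extends to missing data and
    general correlation structures via GEE."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    n = (2 * (z_a + z_b) ** 2 * sigma ** 2
         * (1 + (m - 1) * rho) / (m * delta ** 2))
    return math.ceil(n)
```

With m = 1 the design factor (1 + (m-1)ρ)/m reduces to 1 and the formula collapses to the familiar two-sample z-test sample size.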
A diagnostic study of time variations of regionally averaged background error covariances
NASA Astrophysics Data System (ADS)
Monteiro, Maria; Berre, LoïK.
2010-12-01
In variational data assimilation systems, background error covariances are often estimated from a temporal and spatial average. For a limited area model such as the Aire Limitée Adaptation Dynamique Développement International (ALADIN)/France, the spatial average is calculated over the regional computation domain, which covers western Europe. The purpose of this study is to revise the temporal stationarity assumption by diagnosing time variations of such regionally averaged covariances. This is done through examination of covariance changes as a function of season (winter versus summer), day (in connection with the synoptic situation), and hour (related to the diurnal cycle), with the ALADIN/France regional ensemble Three-Dimensional Variational analysis (3D-Var) system. In summer, compared to winter, average error variances are larger, and spatial correlation functions are sharper horizontally but broader vertically. Daily changes in covariances are particularly strong during the winter period, with larger variances and smaller-scale error structures when an unstable low-pressure system is present in the regional domain. Diurnal variations are also significant in the boundary layer in particular, and, as expected, they tend to be more pronounced in summer. Moreover, the comparison between estimates provided by two independent ensembles indicates that these covariance time variations are estimated in a robust way from a six-member ensemble. All these results support the idea of representing these time variations by using a real-time ensemble assimilation system.
An upper bound to time-averaged space-charge limited diode currents
Griswold, M. E.; Fisch, N. J. [Princeton Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States); Wurtele, J. S. [Department of Physics, University of California at Berkeley, Berkeley, California 94720 (United States)
2010-11-15
The Child-Langmuir law limits the steady-state current density across a one-dimensional planar diode. While it is known that the peak current density can surpass this limit when the boundary conditions vary in time, it remains an open question of whether the average current can violate the Child-Langmuir limit under time-dependent conditions. For the case where the applied voltage is constant but the electric field at the cathode is allowed to vary in time, one-dimensional particle-in-cell simulations suggest that such a violation is impossible. Although a formal proof is not given, an upper bound on the time-averaged current density is offered.
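The steady-state Child-Langmuir limit referenced in the abstract is easy to evaluate numerically. A minimal Python sketch using CODATA physical constants (the function name is an assumption made here):

```python
import math

def child_langmuir_current_density(voltage, gap):
    """Child-Langmuir space-charge-limited current density (A/m^2) for a
    one-dimensional planar vacuum diode:

        J = (4/9) * eps0 * sqrt(2*e/m_e) * V**(3/2) / d**2

    voltage: applied voltage V (volts); gap: anode-cathode spacing d (m)."""
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    e = 1.602176634e-19       # elementary charge, C
    m_e = 9.1093837015e-31    # electron mass, kg
    return (4.0 / 9.0) * eps0 * math.sqrt(2.0 * e / m_e) * voltage ** 1.5 / gap ** 2

# Illustrative: 1 kV across a 1 cm gap gives roughly 7.4e2 A/m^2.
j_limit = child_langmuir_current_density(1000.0, 0.01)
```

The paper's question is whether the time average of a current driven by time-dependent boundary conditions can exceed this steady-state value; the simulations cited suggest it cannot.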
Limin Wang; Cheng Shao
2010-01-01
The issue of exponential stabilisation for a class of special time-varying delay switched systems resulting from actuator faults is considered in this article. The time-varying delay is assumed to belong to an interval and can be a slow or fast time-varying function. A hybrid state feedback strategy is redesigned to guarantee that the system remains stable when the original controller is unavailable
Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam
2015-01-01
To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902
The average zero time lag cross-correlation of LASA earth noise signals
Calmes, Grady Alden
1972-01-01
THE AVERAGE ZERO TIME LAG CROSS-CORRELATION OF LASA EARTH NOISE SIGNALS. A Thesis by GRADY ALDEN CALMES. Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, August 1972. Major subject: Geophysics. Approved as to style and content by: Chairman of Committee, Head of De...
NASA Technical Reports Server (NTRS)
Decker, A. J.
1984-01-01
The holographic recording of the time history of a flow feature in three dimensions is discussed. The use of diffuse illumination holographic interferometry for the three-dimensional visualization of flow features such as shock waves and turbulent eddies is described. The double-exposure and time-average methods are compared using the characteristic function and the results from a flow simulator. A time history requires a large hologram recording rate. Results of holographic cinematography of the shock waves in a flutter cascade are presented as an example. Future directions of this effort, including the availability and development of suitable lasers, are discussed.
Turnaround: Leading Stressed Colleges and Universities to Excellence
ERIC Educational Resources Information Center
Martin, James; Samels, James E.
2008-01-01
Nearly one thousand colleges and universities in the United States face major challenges--from catastrophic hurricanes to loss of accreditation to sagging enrollment. What can leaders of such at-risk institutions do to improve their situation? "Turnaround" gives college and university leaders the tools they need to put their fragile institutions…
129. Back side technical facilities passageways between turnaround trans. & ...
129. Back side technical facilities passageways between turnaround trans. & building no. 104, "plans, sections & elevations" - architectural, AS-BLT AW 36-25-13, sheet 9 of 40, dated 23 November, 1960. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
Event Registration Form Turn-Around Reading Workshop -#80291
Arnold, Jonathan
Negotiation Areas for "Transformation" and "Turnaround" Intervention Models
ERIC Educational Resources Information Center
Mass Insight Education (NJ1), 2011
2011-01-01
To receive School Improvement Grant (SIG) funding, districts must submit an application to the state that outlines their strategic plan to implement one of four intervention models in their persistently lowest-achieving schools. The four intervention models include: (1) School Closure; (2) Restart; (3) Turnaround; and (4) Transformation. The…
Convergence of Time Averages of Weak Solutions of the Three-Dimensional Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Foias, Ciprian; Rosa, Ricardo M. S.; Temam, Roger M.
2015-05-01
Using the concept of stationary statistical solution, which generalizes the notion of invariant measure, it is proved that, in a suitable sense, time averages of almost every Leray-Hopf weak solution of the three-dimensional incompressible Navier-Stokes equations converge as the averaging time goes to infinity. This system of equations is not known to be globally well-posed, and the above result answers a long-standing problem, extending to this system a classical result from ergodic theory. It is also shown that, from a measure-theoretic point of view, the stationary statistical solution obtained from a generalized limit of time averages is independent of the choice of the generalized limit. Finally, any Borel subset of the phase space with positive measure with respect to a stationary statistical solution is such that for almost all initial conditions in that Borel set and for at least one Leray-Hopf weak solution starting with that initial condition, the corresponding orbit is recurrent to that Borel subset and its mean sojourn time within that Borel subset is strictly positive.
ERIC Educational Resources Information Center
Murphy, Joseph; Meyers, Coby V.
2009-01-01
In this article, we provide a grounded narrative of capacity building in the turnaround equation by exploring the turnaround literature outside of education and applying it to troubled schools. Our analysis is based upon reviews of: (1) 14 comprehensive, historical volumes that examine the turnaround phenomenon; (2) 16 book-length analyses of…
Turnaround Schools as a U-Turn for Student Achievement: The Rotary of School Reform Efforts
ERIC Educational Resources Information Center
Mette, Ian M.
2012-01-01
This case study presents data to support the notion that turnaround school policy can improve the efficiency of how traditionally low-performing schools function. The schools that were successful in implementing the UVA Turnaround Program training developed a clear understanding of the expectations for participating in the UVA Turnaround Program…
NASA Astrophysics Data System (ADS)
Bozinoski, Radoslav
Significant research has been performed over the last several years on understanding the unsteady aerodynamics of various fluid flows. Much of this work has focused on quantifying the unsteady, three-dimensional flow field effects which have proven vital to the accurate prediction of many fluid and aerodynamic problems. Up until recently, engineers have predominantly relied on steady-state simulations to analyze the inherently three-dimensional flow structures that are prevalent in many of today's "real-world" problems. Increases in computational capacity and the development of efficient numerical methods can change this and allow for the solution of the unsteady Reynolds-Averaged Navier-Stokes (RANS) equations for practical three-dimensional aerodynamic applications. An integral part of this capability has been the performance and accuracy of the turbulence models coupled with advanced parallel computing techniques. This report begins with a brief literature survey of the role fully three-dimensional, unsteady, Navier-Stokes solvers have on the current state of numerical analysis. Next, the process of creating a baseline three-dimensional Multi-Block FLOw procedure called MBFLO3 is presented. Solutions for an inviscid circular arc bump, laminar flat plate, laminar cylinder, and turbulent flat plate are then presented. Results show good agreement with available experimental, numerical, and theoretical data. Scalability data for the parallel version of MBFLO3 is presented and shows efficiencies of 90% and higher for processes of no less than 100,000 computational grid points. Next, the description and implementation techniques used for several turbulence models are presented. Following the successful implementation of the URANS and DES procedures, the validation data for separated, non-reattaching flows over a NACA 0012 airfoil, wall-mounted hump, and a wing-body junction geometry are presented.
Results for the NACA 0012 showed significant improvement in flow predictions for the three-dimensional DES when compared to experiment. The initial results using the RANS and DES procedures compared well with experimental data for the wall-mounted hump as well. Using the RANS model, the onset of separation was accurately predicted while the reattachment point was over-predicted. The RANS procedure also over-predicted the mean pressure, skin friction, and velocity profiles in the separation zone. The DES procedure showed much better results for the secondary flow of a wing/endwall junction; the three-dimensional structures resolved in the wake of the DES improved the local flow physics in the separation region and the predictions of the mean pressure distribution, skin friction, and streamwise velocity. The DES procedure exhibited a three-dimensional flow structure in the wake, with a 13.65% shorter mean separation region compared to RANS and a mean reattachment length that is in good agreement with experimental measurements. DES predictions of the pressure coefficient in the separation region also exhibit good agreement with experiment and are more accurate than RANS predictions.
A Stochastic Model of Space-Time Variability of Tropical Rainfall: I. Statistics of Spatial Averages
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)
2002-01-01
Global maps of rainfall are of great importance in connection with modeling of the earth's climate. Comparison between the maps of rainfall predicted by computer-generated climate models with observation provides a sensitive test for these models. To make such a comparison, one typically needs the total precipitation amount over a large area, which could be hundreds of kilometers in size, over extended periods of time of order days or months. This presents a difficult problem since rain varies greatly from place to place as well as in time. Remote sensing methods using ground radar or satellites detect rain over a large area by essentially taking a series of snapshots at infrequent intervals and indirectly deriving the average rain intensity within a collection of pixels, usually several kilometers in size. They measure area average of rain at a particular instant. Rain gauges, on the other hand, record rain accumulation continuously in time but only over a very small area tens of centimeters across, say, the size of a dinner plate. They measure only a time average at a single location. In making use of either method one needs to fill in the gaps in the observation, either the gaps in the area covered or the gaps in time of observation. This involves using statistical models to obtain information about the rain that is missed from what is actually detected. This paper investigates such a statistical model and validates it with rain data collected over the tropical Western Pacific from shipborne radars during TOGA COARE (Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment). The model incorporates a number of commonly observed features of rain. While rain varies rapidly with location and time, the variability diminishes when averaged over larger areas or longer periods of time. Moreover, rain is patchy in nature; at any instant on the average only a certain fraction of the observed pixels contain rain.
The fraction of area covered by rain decreases as the size of a pixel becomes smaller. This means that within what looks like a patch of rainy area in a coarse-resolution view with larger pixel size, one finds clusters of rainy and dry patches when viewed on a finer scale. The model makes definite predictions about how these and other related statistics depend on the pixel size. These predictions were found to agree well with the data. In a subsequent second part of this work we plan to test the model with rain gauge data collected during the TRMM (Tropical Rainfall Measuring Mission) ground validation campaign.
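As a toy illustration of the pixel-size dependence described above (a sketch on a synthetic random rain field, not the TOGA COARE radar data): if a pixel is flagged rainy whenever any rain falls inside it, the rainy-area fraction grows with pixel size, i.e. it decreases as the pixels become smaller.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic patchy rain field: 1 = raining, 0 = dry (hypothetical toy
# data, not the ship-borne radar observations used in the paper).
fine = (rng.random((256, 256)) < 0.08).astype(float)

def rain_fraction(field, block):
    """Fraction of pixels flagged rainy at a coarser resolution:
    a coarse pixel of side `block` is 'rainy' if any fine pixel
    inside it contains rain."""
    n = field.shape[0] // block
    coarse = field[:n * block, :n * block].reshape(n, block, n, block)
    return (coarse.max(axis=(1, 3)) > 0).mean()

for block in (1, 4, 16, 64):
    print("pixel size", block, "-> rainy fraction", rain_fraction(fine, block))
```

The monotone growth of the rainy fraction with pixel size mirrors the clustering behavior the abstract describes: a coarse rainy pixel hides a mix of rainy and dry fine-scale patches.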
Vlastislav Cervený; Ivan Psencík
2007-01-01
The energy-flux vector and other energy-related quantities play an important role in various wave propagation problems. In acoustics and seismology, the main attention has been devoted to the time-averaged energy flux of time-harmonic wavefields propagating in non-dissipative, isotropic and anisotropic media. In this paper, we investigate the energy-flux vector and other energy-related quantities of wavefields propagating in inhomogeneous anisotropic viscoelastic media.
Decision-aided ICI mitigation with time-domain average approximation in CO-OFDM
NASA Astrophysics Data System (ADS)
Ren, Hongliang; Cai, Jiaxing; Ye, Xin; Lu, Jin; Cao, Quanjun; Guo, Shuqin; Xue, Lin-lin; Qin, Yali; Hu, Weisheng
2015-07-01
We introduce and investigate the feasibility of a novel iterative blind phase-noise inter-carrier interference (ICI) mitigation scheme for coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. The ICI mitigation is performed through the combination of frequency-domain symbol-decision-aided estimation and a time-average approximation of the ICI phase noise. An additional initial decision process with a suitable threshold is introduced in order to suppress decision-error symbols. The proposed ICI mitigation scheme proves effective in removing the ICI for a simulated CO-OFDM system with a 16-QAM modulation format. Despite its slightly higher computational complexity, it outperforms the time-domain average blind ICI (Avg-BL-ICI) algorithm at relatively wide laser linewidths and high OSNR.
Long, Lijun; Zhao, Jun
2015-07-01
This paper investigates the problem of adaptive neural tracking control via output-feedback for a class of switched uncertain nonlinear systems without the measurements of the system states. The unknown control signals are approximated directly by neural networks. A novel adaptive neural control technique for the problem studied is set up by exploiting the average dwell time method and backstepping. A switched filter and different update laws are designed to reduce the conservativeness caused by adoption of a common observer and a common update law for all subsystems. The proposed controllers of subsystems guarantee that all closed-loop signals remain bounded under a class of switching signals with average dwell time, while the output tracking error converges to a small neighborhood of the origin. As an application of the proposed design method, adaptive output feedback neural tracking controllers for a mass-spring-damper system are constructed. PMID:25122844
Numerical multi-scale time averaging approach to quantum systems with pseudo random potentials
Tal Kachman; Shmuel Fishman; Avy Soffer
2015-01-14
We consider the problem of numerically solving the Schrödinger equation with a potential that is quasi-periodic in space and time, approximating a multiplicative noise term. We introduce a numerical scheme based on a newly developed multi-time-scale averaging technique. We show that with this new method we can solve the above equation efficiently, and with rigorous control of the error, for large times. A comparison with the standard split-step method shows an order-of-magnitude improvement in computational time, besides the controlled errors. We apply this method to a free particle perturbed by a quasi-periodic potential with many frequencies. We observe (anomalous) diffusion in momentum space over large time intervals, as long as the system stays away from the Chirikov resonance strips. In the limit of zero potential the diffusion rate becomes standard, but with a coefficient that approaches zero.
NASA Astrophysics Data System (ADS)
Young, Christopher; Lucca Fabris, Andrea; Gascon, Nicolas; Cappelli, Mark
2014-10-01
An extensive experimental campaign characterizes a 70 mm diameter stationary plasma thruster operating on xenon in the 200--500 W power range. This study resolves both time-averaged properties and oscillatory phenomena in the plasma discharge. Specifically, we explore the time variation of the plume ion velocity field referenced to periodic discharge current oscillations using time-synchronized laser induced fluorescence (LIF) measurements. This LIF scheme relies on a triggered signal acquisition gate locked at a given phase of the current oscillation period. The laser is modulated at a characteristic frequency and homodyne detection through a lock-in amplifier extracts the induced fluorescence signal out of the bright background emission. This work is sponsored by the U.S. Air Force Office of Scientific Research with Dr. Mitat Birkan as program manager. CVY acknowledges support from the DOE NNSA Stewardship Science Graduate Fellowship under Contract DE-FC52-08NA28752.
Palevicius, Paulius; Ragulskis, Minvydas; Palevicius, Arvydas; Ostasevicius, Vytautas
2014-01-01
Optical investigation of movable microsystem components using time-averaged holography is presented in this paper. It is shown that even a harmonic excitation of a non-linear microsystem may result in unpredictable chaotic motion. Analytical relationships between the parameters of the chaotic oscillations and the formation of time-averaged fringes provide a deeper insight into the computational and experimental interpretation of time-averaged MEMS holograms. PMID:24451467
Time-averaged quantum dynamics and the validity of the effective Hamiltonian model
Gamel, Omar; James, Daniel F. V. [Department of Physics, University of Toronto, 60 St. George Street, Toronto, Ontario M5S 1A7 (Canada)
2010-11-15
We develop a technique for finding the dynamical evolution in time of an averaged density matrix. The result is an equation of evolution that includes an effective Hamiltonian, as well as decoherence terms in Lindblad form. Applying the general equation to harmonic Hamiltonians, we confirm a previous formula for the effective Hamiltonian, together with an additional decoherence term which should, in general, be included and whose vanishing provides the criteria for validity of the effective Hamiltonian approach. Finally, we apply the theory to examples of the ac Stark shift and three-level Raman transitions, recovering a decoherence effect in the latter.
Probe shapes that measure time-averaged streamwise momentum and cross-stream turbulence intensity
NASA Technical Reports Server (NTRS)
Rossow, Vernon J. (inventor)
1993-01-01
A method and apparatus for directly measuring the time-averaged streamwise momentum in a turbulent stream use a probe whose total head response varies as the cosine-squared of the angle of incidence. The probe has a nose with a slight indentation on its front face for providing the desired response. The method of making the probe incorporates unique design features. Another probe may be positioned in a side-by-side relationship to the first probe to provide a direct measurement of the total pressure. The difference between the two pressures yields the sum of the squares of the cross-stream components of the turbulence level.
Restructuring and the Nonmetropolitan Turnaround: The California Evidence
Warf, Barney L.
1986-01-01
the voluminous literature on the turnaround at the national level, very little has been written regarding specifically regional contexts, especially in the rapidly growing West. Accord- ingly, the purpose of this paper is to document the extent to which... existed as external "urbanization economies" by increasing the length of production runs within larger plants. Design changes throughout this process either halt or become markedly less frequent, and contacts with buyers and sellers both "up and down...
Time-averaged fluxes of lead and fallout radionuclides to sediments in Florida Bay
Robbins, J.A.; Holmes, C.; Halley, R.; Bothner, M.; Shinn, E.; Graney, J.; Keeler, G.; TenBrink, M.; Orlandini, K.A.; Rudnick, D.
2000-01-01
Recent, unmixed sediments from mud banks of central Florida Bay were dated using 210Pb/226Ra, and chronologies were verified by comparing sediment lead temporal records with Pb/Ca ratios in annual layers of coral (Montastrea annularis) located on the ocean side of the Florida Keys. Dates of sediment lead peaks (1978 ± 2) accord with prior observations of a 6 year lag between the occurrence of maximum atmospheric lead in 1972 and peak coral lead in 1978. Smaller lags of 1-2 years occur between the maximum atmospheric radionuclide fallout and peaks in sediment temporal records of 137Cs and Pu. Such lags are consequences of system time averaging (STA), in which atmospherically delivered particle-associated constituents accumulate and mix in a (sedimentary?) reservoir before transferring to permanent sediments and coral. STA model calculations, using time-dependent atmospheric inputs, produced optimized profiles in excellent accord with measured sediment 137Cs, Pu, lead, and coral lead distributions. Derived residence times of these particle tracers (16 ± 1, 15.7 ± 0.7, 19 ± 3, and 16 ± 2 years, respectively) are comparable despite differences in sampling locations, in accumulating media, and in element loading histories and geochemical properties. For a 16 year weighted mean residence time, STA generates the observed 6 year lead peak lag. Evidently, significant levels of nondegradable, particle-associated contaminants can persist in Florida Bay for many decades following elimination of external inputs. Present results, in combination with STA model analysis of previously reported radionuclide profiles, suggest that decade-scale time averaging may occur widely in recent coastal marine sedimentary environments. Copyright 2000 by the American Geophysical Union.
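The reservoir mechanism behind STA can be sketched as a convolution of a time-dependent atmospheric input with a first-order (exponential) residence-time kernel. The triangular input peaking in 1972 below is a hypothetical loading history, not the paper's optimized calculation, and the resulting peak lag depends on the input shape as well as on tau; the sketch only shows that a finite residence time delays the recorded peak.

```python
import numpy as np

# Sketch of system time averaging (STA): atmospherically delivered
# material accumulates in a reservoir with mean residence time tau
# before transfer to permanent sediments, delaying the recorded peak.
tau = 16.0                                   # years (paper's weighted mean)
t = np.arange(0.0, 120.0)                    # years since 1900
years = 1900 + t
# Hypothetical triangular atmospheric input peaking in 1972:
inp = np.maximum(0.0, 1 - np.abs(years - 1972) / 20.0)

kernel = np.exp(-t / tau) / tau              # first-order reservoir response
sediment = np.convolve(inp, kernel)[:len(t)] # modeled sediment record

lag = years[np.argmax(sediment)] - years[np.argmax(inp)]
print("peak lag in the sediment record:", lag, "years")
```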
Modal identification based on Gaussian continuous time autoregressive moving average model
NASA Astrophysics Data System (ADS)
Xiuli, Du; Fengquan, Wang
2010-09-01
A new time-domain modal identification method of the linear time-invariant system driven by the non-stationary Gaussian random force is presented in this paper. The proposed technique is based on the multivariate continuous time autoregressive moving average (CARMA) model. This method can identify physical parameters of a system from the response-only data. To do this, we first transform the structural dynamic equation into the CARMA model, and subsequently rewrite it in the state-space form. Second, we present the exact maximum likelihood estimators of parameters of the continuous time autoregressive (CAR) model by virtue of the Girsanov theorem, under the assumption that the uniformly modulated function is approximately equal to a constant matrix over a very short period of time. Then, based on the relation between the CAR model and the CARMA model, we present the exact maximum likelihood estimators of parameters of the CARMA model. Finally, the modal parameters are identified by the eigenvalue analysis method. Numerical results show that the method we introduced here not only has high precision and robustness, but also has very high computing efficiency. Therefore, it is suitable for real-time modal identification.
Time-averaged current analysis of a thunderstorm using ground-based measurements
NASA Technical Reports Server (NTRS)
Driscoll, Kevin T.; Blakeslee, Richard J.; Koshak, William J.
1994-01-01
The amount of upward current provided to the ionosphere by a thunderstorm that appeared over the Kennedy Space Center (KSC) on July 11, 1978, is reexamined using an analytic equation that describes a bipolar thunderstorm's current contribution to the global circuit in terms of its generator current, lightning currents, the altitudes of its charge centers, and the conductivity profile of the atmosphere. Ground-based measurements, which were obtained from a network of electric field mills positioned at various distances from the thunderstorm, were used to characterize the electrical activity inside the thundercloud. The location of the lightning discharges, the type of lightning, and the amount of charge neutralized during this thunderstorm were computed through a least squares inversion of the measured changes in the electric fields following each lightning discharge. These measurements provided the information necessary to implement the analytic equation, and consequently, a time-averaged estimate of this thunderstorm's current contribution to the global circuit was calculated. From these results the amount of conduction current supplied to the ionosphere by this small thunderstorm was computed to be less than 25% of the time-averaged generator current that flowed between the two vertically displaced charge centers.
NASA Astrophysics Data System (ADS)
Charuchittipan, Doojdao; Babel, Wolfgang; Mauder, Matthias; Leps, Jens-Peter; Foken, Thomas
2014-09-01
The modified ogive analysis and the block ensemble average were employed to investigate the impact of the averaging time extension on the energy balance closure over six land-use types. The modified ogive analysis, which requires a steady-state condition, can extend the averaging time up to a few hours and suggests that an averaging time of 30 min is still overall sufficient for eddy-covariance measurements over low vegetation. The block ensemble average, which does not require a steady-state condition, can extend the averaging time to several days. However, it can improve the energy balance closure for some sites during specific periods, when secondary circulations exist in the vicinity of the sensor. These near-surface secondary circulations mainly transport sensible heat, and when near-ground warm air is transported upward, the sensible heat flux observed by the block ensemble average will increase at longer averaging times. These findings suggest an alternative energy balance correction for a ground-based eddy-covariance measurement, in which the attribution of the residual depends on the ratio of sensible heat flux to the buoyancy flux. The fraction of the residual attributed to the sensible heat flux by this energy balance correction is larger than in the energy balance correction that preserves the Bowen ratio.
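The averaging-time effect can be illustrated with a toy eddy-covariance calculation (synthetic series, not the six-site data of the study): a slow component shared by the vertical wind and temperature, a crude stand-in for a secondary circulation, contributes to the covariance only when the averaging block is long enough to contain it.

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy w and T series: fast correlated turbulence plus a shared slow
# oscillation (stand-in for a near-surface secondary circulation).
n = 2**14
slow = np.sin(2 * np.pi * np.arange(n) / 4096)
fast = rng.normal(size=n)
w = fast + slow                                # vertical wind (arbitrary units)
T = 0.5 * fast + rng.normal(size=n) + slow     # temperature (arbitrary units)

def block_flux(w, T, block):
    """Mean w-T covariance computed within blocks of a given length;
    the block length plays the role of the averaging time."""
    m = len(w) // block
    wb = w[:m * block].reshape(m, block)
    Tb = T[:m * block].reshape(m, block)
    return ((wb - wb.mean(1, keepdims=True)) *
            (Tb - Tb.mean(1, keepdims=True))).mean()

for block in (64, 512, 4096):
    print("averaging block", block, "-> flux", round(block_flux(w, T, block), 3))
```

Short blocks remove the slow signal with the block mean and capture only the fast transport; blocks spanning the slow oscillation recover the additional flux, analogous to the increase seen with the block ensemble average.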
NASA Astrophysics Data System (ADS)
Mao, Yijun; Xu, Chen; Qi, Datong
2015-02-01
A vector aeroacoustics method is developed to analyze the acoustic energy flow path from a rotating source. In this method, the instantaneous and time-averaged active acoustic intensity vectors are evaluated from the time-domain and frequency-domain acoustic pressure and acoustic velocity formulations, respectively. With this method, the acoustic intensity vectors and the acoustic energy streamlines are visualized to investigate the propagation of the noise radiated from monopole and dipole point sources and from a rotor in subsonic rotation. The results reveal that a portion of the acoustic energy spirals through many circles before moving towards the far field, while another portion first flows inward along the radial direction and then propagates along the axial direction. Further, an acoustic black hole exists in the plane of source rotation, from which acoustic energy cannot escape once it flows in. Moreover, by visualizing the acoustic intensity field around the rotating sources, the acoustic-absorption performance of the acoustic liner built into the casing and centerbody is discussed.
Area-preserving maps models of gyro-averaged E × B chaotic transport
J. D. da Fonseca; D. del-Castillo-Negrete; I. L. Caldas
2014-09-10
Discrete maps have been extensively used to model 2-dimensional chaotic transport in plasmas and fluids. Here we focus on area-preserving maps describing finite Larmor radius (FLR) effects on E × B chaotic transport in magnetized plasmas with zonal flows perturbed by electrostatic drift waves. FLR effects are included by gyro-averaging the Hamiltonians of the maps which, depending on the zonal flow profile, can have monotonic or non-monotonic frequencies. In the limit of zero Larmor radius, the monotonic frequency map reduces to the standard Chirikov-Taylor map, and, in the case of non-monotonic frequency, the map reduces to the standard nontwist map. We show that in both cases FLR leads to chaos suppression, changes in the stability of fixed points, and robustness of transport barriers. FLR effects are also responsible for changes in the phase space topology and zonal flow bifurcations. Dynamical systems methods based on recurrence time statistics are used to quantify the dependence on the Larmor radius of the threshold for the destruction of transport barriers.
G. C Borgia; R. J. S Brown; P Fantazzini
1998-01-01
In a suite of water-saturated sandstones, we have recently demonstrated that irreducible water saturation can be well estimated using relaxation time only, in the form of any of several “averages” giving more emphasis to short times than does the geometric mean time. The best estimate of permeability came from fits giving more emphasis to slightly longer times. In this paper
Meeting the challenge of a group practice turnaround.
Porn, L M
2001-03-01
Many healthcare organizations that acquired group practices to enhance their market share have found that the practices have not met their financial goals. Turning around a financially troubled, hospital-owned group practice is challenging but not impossible for healthcare organizations that take certain basic actions. Direction, data, desire, dedication, and drive must be present to effect the financial turnaround of a group practice. The healthcare organization needs to evaluate the practice's strategy and operations and identify the issues that are hindering the practice's ability to optimize revenues. Efforts to achieve profitable operations have to be ongoing. PMID:11258274
Detecting macroeconomic phases in the Dow Jones Industrial Average time series
NASA Astrophysics Data System (ADS)
Wong, Jian Cheng; Lian, Heng; Cheong, Siew Ann
2009-11-01
In this paper, we perform statistical segmentation and clustering analysis of the Dow Jones Industrial Average (DJI) time series between January 1997 and August 2008. Modeling the index movements and log-index movements as stationary Gaussian processes, we find a total of 116 and 119 statistically stationary segments respectively. These can then be grouped into between five and seven clusters, each representing a different macroeconomic phase. The macroeconomic phases are distinguished primarily by their volatilities. We find that the US economy, as measured by the DJI, spends most of its time in a low-volatility phase and a high-volatility phase. The former can be roughly associated with economic expansion, while the latter contains the economic contraction phase in the standard economic cycle. Both phases are interrupted by a moderate-volatility market correction phase, but extremely-high-volatility market crashes are found mostly within the high-volatility phase. From the temporal distribution of various phases, we see a high-volatility phase from mid-1998 to mid-2003, and another starting mid-2007 (the current global financial crisis). Transitions from the low-volatility phase to the high-volatility phase are preceded by a series of precursor shocks, whereas the transition from the high-volatility phase to the low-volatility phase is preceded by a series of inverted shocks. The time scale for both types of transitions is about a year. We also identify the July 1997 Asian Financial Crisis to be the trigger for the mid-1998 transition, and an unnamed May 2006 market event related to corrections in the Chinese markets to be the trigger for the mid-2007 transition.
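The segmentation step can be sketched in miniature. The snippet below uses synthetic Gaussian returns with a single volatility change and finds one maximum-likelihood change point, rather than the recursive segmentation and clustering applied to the actual DJI series.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy log-return series: a low-volatility regime followed by a
# high-volatility regime (synthetic stand-in for DJI movements).
r = np.concatenate([rng.normal(0, 0.005, 500), rng.normal(0, 0.02, 500)])

def best_split(x):
    """Index maximizing the Gaussian log-likelihood gain of modeling
    x as two segments with their own variances instead of one."""
    n = len(x)
    best, arg = -np.inf, None
    for k in range(30, n - 30):          # keep segments non-trivial
        v1, v2 = x[:k].var(), x[k:].var()
        gain = (-k * np.log(v1) / 2 - (n - k) * np.log(v2) / 2
                + n * np.log(x.var()) / 2)
        if gain > best:
            best, arg = gain, k
    return arg

k = best_split(r)
print("detected volatility change point near index", k)
```

Segments found this way could then be clustered by their variances, which is how the paper distinguishes low-, moderate-, and high-volatility macroeconomic phases.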
NASA Astrophysics Data System (ADS)
Liotard, Arnaud; Muratet, Sylvaine; Zamkotsian, Frédéric; Fourniols, Jean-Yves
2004-12-01
Since micro deformable mirrors based on Micro-Opto-Electronico-Mechanical Systems (MOEMS) technology would be essential in next-generation adaptive optics systems, we are designing, realizing, characterizing and modeling this key component. Actuators and a continuous-membrane micro deformable mirror (3*3 actuators, 600*600 µm2) have been designed in-house and processed by surface micromachining in the Cronos foundry. A dedicated characterization bench has been developed for the complete analysis. This Twyman-Green interferometer allows high in-plane resolution (4 µm) or a large field of view (40 mm). Out-of-plane measurements are performed with phase-shifting interferometry, showing highly repeatable results (standard deviation < 5 nm). Features such as optical quality or electro-mechanical behavior are extracted from these high-precision three-dimensional component maps, and FEM models can be fitted. Dynamic analysis such as vibration modes and cut-off frequency is realized with time-averaged interferometry. The deformable mirror exhibits a 350 nm stroke for 35 volts on the central actuator. This limited stroke could be overcome by changing the component material; promising actuators are made with polymers.
NASA Astrophysics Data System (ADS)
Liotard, Arnaud; Muratet, Sylvaine; Zamkotsian, Frédéric; Fourniols, Jean-Yves
2005-01-01
Since micro deformable mirrors based on Micro-Opto-Electronico-Mechanical Systems (MOEMS) technology would be essential in next-generation adaptive optics systems, we are designing, realizing, characterizing and modeling this key component. Actuators and a continuous-membrane micro deformable mirror (3*3 actuators, 600*600 µm2) have been designed in-house and processed by surface micromachining in the Cronos foundry. A dedicated characterization bench has been developed for the complete analysis. This Twyman-Green interferometer allows high in-plane resolution (4 µm) or a large field of view (40 mm). Out-of-plane measurements are performed with phase-shifting interferometry, showing highly repeatable results (standard deviation < 5 nm). Features such as optical quality or electro-mechanical behavior are extracted from these high-precision three-dimensional component maps, and FEM models can be fitted. Dynamic analysis such as vibration modes and cut-off frequency is realized with time-averaged interferometry. The deformable mirror exhibits a 350 nm stroke for 35 volts on the central actuator. This limited stroke could be overcome by changing the component material; promising actuators are made with polymers.
NASA Astrophysics Data System (ADS)
Liotard, Arnaud; Zamkotsian, Frederic
2004-09-01
Since micro deformable mirrors based on Micro-Opto-Electronico-Mechanical Systems (MOEMS) technology would be essential in next-generation adaptive optics systems, we are designing, realizing and characterizing building blocks of this key component. An in-house designed tiltable mirror (170*100 µm2) has been processed by surface micromachining in the Cronos foundry, and a dedicated characterization bench has been developed for the complete analysis of building blocks as well as operational deformable mirrors. This modular Twyman-Green interferometer allows high in-plane resolution (4 µm) or a large field of view (40 mm). Out-of-plane measurements are performed with phase-shifting interferometry, showing highly repeatable results (standard deviation < 5 nm). Features such as optical quality or electro-mechanical behavior are extracted from these high-precision three-dimensional component maps. Range is increased without losing accuracy by using two-wavelength phase-shifting interferometry, authorizing large-step measurements such as the 590 nm print-through steps caused by the Cronos process. Dynamic analysis such as vibration modes and cut-off frequency is realized with time-averaged interferometry. A rotation-mode frequency of 31.3 kHz for the micro tiltable mirror, and a resonance with tuned damping at 1.1 kHz for the commercial OKO deformable mirror, are revealed.
NASA Astrophysics Data System (ADS)
Brown, Gordon C.; Pryputniewicz, Ryszard J.
1998-05-01
An optical microscope, utilizing the principles of time-averaged hologram interferometry, is described for microelectromechanical systems (MEMS) applications. MEMS are devices fabricated via techniques such as microphotolithography to create miniature actuators and sensors. Many of these sensors are currently deployed in automotive applications which rely on the dynamic behavior of the sensor, e.g., airbag sensors, ride-monitoring suspension sensors, etc. Typical dimensions of current MEMS devices are measured in micrometers, a small fraction of the diameter of a human hair, and the current trend is to further decrease the size of MEMS devices to submicrometer dimensions. However, the smaller MEMS become, the more challenging it is to measure their dynamic characteristics with accuracy. An electro-optic holographic microscope (EOHM) for the purpose of studying the dynamic behavior of MEMS-type devices is described. Additionally, by performing phase measurements within an EOHM image, object displacements are determined, as illustrated by representative examples. With the EOHM, devices with surface sizes ranging from approximately 35 × 400 to 5 × 18 micrometers are studied while undergoing resonant vibrations at frequencies as high as 2 MHz.
Distribution of Time-Averaged Pressure Fluctuations Along the Boundary of a Round Subsonic Jet
NASA Technical Reports Server (NTRS)
Howes, Walton L.
1960-01-01
A semiempirical analysis of the equation for incompressible fluctuations in a turbulent fluid, using similarity relations for round subsonic jets with uniform exit velocity, is used to predict the shape of the time-averaged fluctuation-pressure distribution along the mean-velocity boundary of jets. The predicted distribution is independent of distance downstream of the nozzle exit along the mixing region, inversely proportional to the distance downstream along the region of mean-velocity self-preservation, and proportional to the inverse square of the distance downstream along the fully developed region. Experimental results were in fair agreement with the theory. However, the measured fluctuation-pressure distributions were found to be very sensitive to changes in jet temperature and jet-nozzle profile, especially near the nozzle. These factors are not included in the theory. Increased jet temperatures produce increased pressure fluctuations and violation of similarity conditions. Nozzle-profile modifications may lead to violation of the uniform-exit-velocity requirement imposed in the theory.
High speed signal averager for characterizing periodic signals in the time domain
Dennis W. Hair; Frank J. Niertit; Donald F. Hodgson; Eric J. Amis
1989-01-01
A new data-acquisition and averaging system is described which rapidly digitizes and averages periodic signals of known frequency. Each acquisition channel has a programmable gain input amplifier, track and hold amplifier, 12 bit analog to digital converter, 4K by 32 bit memory buffer, and an arithmetic logic unit which adds the converter output directly to the contents of the current
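The underlying operation of such an averager, synchronous ("fold-and-add") averaging of a periodic signal of known frequency, can be sketched in software. The toy sine wave below stands in for the hardware averager's input; residual noise should fall roughly as 1/sqrt(N) over N averaged cycles.

```python
import numpy as np

rng = np.random.default_rng(2)
# Coherent averaging: the waveform adds in phase cycle after cycle,
# while uncorrelated noise averages down as 1/sqrt(n_cycles).
period, n_cycles = 128, 400
t = np.arange(period)
clean = np.sin(2 * np.pi * t / period)                   # true waveform
noisy = np.tile(clean, n_cycles) + rng.normal(0, 1.0, period * n_cycles)

# Fold the record into cycles and add them (here: mean over cycles).
avg = noisy.reshape(n_cycles, period).mean(axis=0)

rms_err = np.sqrt(np.mean((avg - clean) ** 2))
print("residual noise rms after averaging:", round(rms_err, 4))
```

The hardware version performs the same accumulation with an arithmetic logic unit adding each digitized sample into a memory buffer indexed by phase within the cycle.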
P. N. Brandt; A. V. Getling
2008-01-01
The decrease in the rms contrast of time-averaged images with the averaging time is compared between four data sets: (1) a series of solar granulation images recorded at La Palma in 1993, (2) a series of artificial granulation images obtained in numerical simulations by Rieutord et al. (Nuovo Cimento 25, 523, 2002), (3) a similar series computed by Steffen and his
Paleosecular Variation and Time-Averaged Field Behavior: Global and Regional Signatures
NASA Astrophysics Data System (ADS)
Johnson, C. L.; Cromwell, G.; Tauxe, L.; Constable, C.
2012-12-01
We use an updated global dataset of directional and intensity data from lava flows to investigate time-averaged field (TAF) and paleosecular variation (PSV) signatures regionally and globally. The data set includes observations from the past 10 Ma, but we focus our investigations on the field structure over the past 5 Ma, in particular during the Brunhes and Matuyama. We restrict our analyses to sites with at least 5 samples (all of which have been stepwise demagnetized), and for which the estimate of the Fisher precision parameter, k, is at least 50. The data set comprises 1572 sites from the past 5 Ma that span latitudes 78°S to 71°N; of these, ~40% are from the Brunhes chron and ~20% are from the Matuyama chron. Age control at the site level is variable because radiometric dates are available for only about one third of our sites. New TAF models for the Brunhes show longitudinal structure. In particular, high-latitude flux lobes are observed, constrained by improved data sets from N. and S. America, Japan, and New Zealand. We use resampling techniques to examine possible biases in the TAF and PSV incurred by uneven temporal sampling and the limited age information available for many sites. Results from Hawaii indicate that resampling of the paleodirectional data onto a uniform temporal distribution, incorporating site ages and age errors, leads to a TAF estimate for the Brunhes that is close to that reported for the actual data set, but a PSV estimate (virtual geomagnetic pole dispersion) that is increased relative to that obtained from the unevenly sampled data. The global distribution of sites in our dataset allows us to investigate possible hemispheric asymmetries in field structure, in particular differences between north and south high-latitude field behavior and low-latitude differences between the Pacific and Atlantic hemispheres.
Using temporal averaging to decouple annual and nonannual information in AVHRR NDVI time series
Kastens, Jude Heathcliff; Lerner, David E.; Jakubauskas, Mark E.
2003-11-01
invariant targets (SITS) are typically used for sensor calibration and data consistency checks. Unfortunately, such targets are not always available in study regions. The temporal averaging technique described in this research can be used to determine...
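The temporal-averaging idea, separating the mean annual cycle from nonannual information, can be sketched as follows. The toy series below stands in for an AVHRR NDVI record, and the reshape assumes a fixed number of composite periods per year.

```python
import numpy as np

# Decouple annual and nonannual information: average each compositing
# period across years to get the mean annual cycle, then subtract it
# to leave the nonannual (interannual) departures.
years, per_year = 10, 24               # e.g. 24 NDVI composites per year
t = np.arange(years * per_year)
seasonal = 0.3 + 0.25 * np.sin(2 * np.pi * t / per_year)  # annual cycle
trend = 0.002 * t                      # slow nonannual signal
ndvi = seasonal + trend                # toy NDVI-like series

x = ndvi.reshape(years, per_year)
annual = x.mean(axis=0)                # average annual cycle per composite
nonannual = x - annual                 # per-year departures from the cycle

print("annual cycle range:", round(annual.min(), 3), "to", round(annual.max(), 3))
```

In this toy case the residual `nonannual` is constant within each year, showing that the averaging has absorbed the seasonal cycle and left only the interannual trend.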
Approaches to Operative Decision Support in the Management of Aircraft Turnaround Disruptions
Jannach, Dietmar
This paper focuses on the problem of managing disruptions within the aircraft turnaround. In particular, the strengths and limitations of simulation, planning, and scheduling approaches are discussed.
Dancing in a Minefield: An Analysis of Turnaround Specialists in Arizona Schools
ERIC Educational Resources Information Center
McMillie, Kyann L.
2010-01-01
In 2008, educational leaders from the Arizona Department of Education (ADE) assigned a group of turnaround specialists to work in four failing public schools in a large, urban school district in Phoenix, Arizona in hopes of improving those schools. The utilization of turnaround specialists in failing schools was Arizona's method of enacting…
The Role of Boards in the Failure and Turnaround of Non-Profit Organizations
Jill Mordaunt; Chris Cornforth
2004-01-01
This article reports on research that examined the role that boards play in the failure and turnaround of non-profit organizations. The article concludes that boards do often play an important hands-on role in turnaround, which is different from that described in much of the normative literature. As well as needing skills, such as leadership, those board members leading the change
School Turnaround Fever: The Paradoxes of a Historical Practice Promoted as a New Reform
ERIC Educational Resources Information Center
Peck, Craig; Reitzug, Ulrich C.
2014-01-01
School "turnaround" has received significant attention recently in education literature and policy action, especially as a means to dramatically improve urban education. In current common education usage, "turnaround" refers to the rapid, significant improvement in the academic achievement of persistently low-achieving schools.…
A Weight-Averaged Interpolation Method for Coupling Time-Accurate Rarefied and Continuum Flows
NASA Astrophysics Data System (ADS)
Diaz, Steven William
A novel approach to coupling rarefied and continuum flow regimes as a single, hybrid model is introduced. The method borrows from techniques used in the simulation of spray flows to interpolate Lagrangian point-particles onto an Eulerian grid in a weight-averaged sense. A brief overview of traditional methods for modeling both rarefied and continuum domains is given, and a review of the literature regarding rarefied/continuum flow coupling is presented. Details of the theoretical development of the method of weighted interpolation are then described. The method evaluates macroscopic properties at the nodes of a CFD grid via the weighted interpolation of all simulated molecules in a set surrounding the node. The weight factor applied to each simulated molecule is the inverse of the linear distance between it and the given node. During development, the method was applied to several preliminary cases, including supersonic flow over an airfoil, subsonic flow over tandem airfoils, and supersonic flow over a backward-facing step, all at low Knudsen numbers. The main thrust of the research centered on the time-accurate expansion of a rocket plume into a near-vacuum. The method proves flexible enough to be used with various flow solvers, demonstrated by the use of Fluent as the continuum solver for the preliminary cases and a NASA-developed Large Eddy Simulation research code, WRLES, for the full lunar model. The method is applicable to a wide range of Mach numbers and is completely grid independent, allowing the rarefied and continuum solvers to be optimized for their respective domains without consideration of the other. The work presented demonstrates the validity and flexibility of the method of weighted interpolation as a novel concept in the field of hybrid flow coupling. The method marks a significant divergence from current practices in the coupling of rarefied and continuum flow domains and offers a kernel on which to base an ongoing field of research.
It has the potential to significantly increase the flexibility of hybrid rarefied/continuum flow analyses.
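The weighting rule described in this abstract (each simulated molecule contributes with a weight equal to the inverse of its linear distance to the grid node) can be sketched in a few lines. This is a minimal, hypothetical 1D illustration; the function and data names are mine, not from the dissertation:

```python
def weighted_node_value(node_pos, molecules):
    """Estimate a macroscopic property at a grid node by inverse-distance
    weighting of the simulated molecules surrounding it.

    molecules: list of (position, property_value) pairs.
    """
    num = 0.0
    den = 0.0
    for pos, value in molecules:
        d = abs(pos - node_pos)
        if d == 0.0:
            return value  # molecule sits exactly on the node
        w = 1.0 / d       # weight = inverse linear distance
        num += w * value
        den += w
    return num / den

# Example: three molecules around a node at x = 0.0
mols = [(-1.0, 10.0), (0.5, 20.0), (2.0, 30.0)]
print(weighted_node_value(0.0, mols))
```

Closer molecules dominate the estimate, which is why the scheme stays stable as the molecule set around a node thins out.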
Toint, Philippe
From average travel time budgets to daily travel time distributions: an appraisal of two… Keywords: travel time, distributions, travel behaviour. …and human energy expenditure for travel, which is assumed to be constant in time and space. The second one…
The effect of time-averaged sampling
…of these sensors employ sub-sampling to reduce noise in the sensor's output signal. … These limitations compromise the assumptions underpinning the eddy covariance technique. The attenuation of signal fluctuations caused by sensor spatial averaging has been addressed by Gurvich (1962)…
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1977-01-01
Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
Quantification of deep-time orbital forcing by average spectral misfit
STEPHEN R. MEYERS; BRADLEY B. SAGEMAN
2007-01-01
Quantification of Milankovitch orbital cyclicity within ancient strata has become a principal tool for refinement of the geologic time scale. However, accurate characterization of the orbital signal in deep time paleoclimate records is commonly challenged by inadequate radiometric time constraints for calibration of the spatial rhythms to temporal periods. This problem can potentially introduce large errors into derivative orbital timescales.
Atwell, J.; Evans, C. [Sevee and Maher Engineers, Inc., Cumberland, ME (United States); Francoeur, T.L. [TerraChem Consultants, Inc., Yarmouth, ME (United States); Guerra, R. [Lockheed-Sanders, Inc., Nashua, NH (United States)
1995-12-31
This paper addresses the use of rapid turnaround, heated headspace/gas chromatography (GC) to support a soil remediation project in the state of New Jersey. In the past, rapid turnaround, heated headspace/GC procedures have been used primarily as a screening tool to delineate areas of volatile organic compound (VOC) contamination on-site. For this project, the heated headspace/GC method was the primary analytical tool used to support a large soil remediation project. This paper reviews the project goals, presents the analytical protocol, presents internal quality assurance/quality control (QA/QC), evaluates laboratory split data, and discusses the advantages and disadvantages of this rapid turnaround method.
Jonas, M.; Olendrzynski, K.; Ganopolski, A.V.; Petoukhov, V.K.; Shaw, R.W.
1996-12-31
This study describes a methodology that combines the outputs of (1) the Integrated Model to Assess the Greenhouse Effect (IMAGE Version 1.0) of the Netherlands National Institute of Public Health and Environmental Protection (RIVM) and (2) ECHAM-1/LSG, the Global Circulation Model (GCM) of the Max-Planck Institute for Meteorology in Hamburg, Germany. The combination enables one to calculate grid point surface air temperature changes for different scenarios with a turnaround time that is much quicker than that for a GCM.
Counting Messages as a Proxy for Average Execution Time in Pharo
Alexandre Bergel
Code profilers are used to identify execution bottlenecks and understand the cause of a slowdown. Execution sampling is a monitoring technique commonly employed by code profilers because of its low impact on execution. Regularly sampling the execution of an application estimates the amount of time the interpreter, hardware or software, spent in each method execution. Nevertheless, this execution time…
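Execution sampling as described here can be sketched in ordinary Python rather than Pharo: a background thread periodically inspects the main thread's current stack frame and tallies hits per function name, and the hit counts serve as a proxy for time spent in each method. All names below are illustrative, not from the paper:

```python
import collections
import sys
import threading
import time

def sample_profile(func, interval=0.005):
    """Run func() while a background thread periodically samples the calling
    thread's current frame, counting hits per function name as a proxy for
    the time spent in each function."""
    counts = collections.Counter()
    main_id = threading.get_ident()
    done = threading.Event()

    def sampler():
        while not done.is_set():
            frame = sys._current_frames().get(main_id)
            if frame is not None:
                counts[frame.f_code.co_name] += 1
            time.sleep(interval)

    t = threading.Thread(target=sampler)
    t.start()
    try:
        result = func()
    finally:
        done.set()
        t.join()
    return result, counts

def hot_loop():
    # a deliberately busy function the sampler should catch
    total = 0
    deadline = time.time() + 0.3
    while time.time() < deadline:
        total += 1
    return total

_, profile = sample_profile(hot_loop)
print(profile.most_common(3))
```

Because the sampler only wakes every few milliseconds, its overhead on the profiled code is small, which is exactly the property the abstract attributes to sampling profilers.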
Limaye, S.S.; Grassotti, C.; Kuetemeyer, M.J.
1988-02-01
Significant mean cloud-level circulation changes since 1974, noted in 1982 Venus cloud motion observations, have been validated by independent measurements of cloud motions in nearly identical sets of images; agreement is obtained not only for the average zonal and meridional components, but also for the eddy circulation's meridional transport of momentum. In contrast to 1979 observations, the time-averaged latitudinal profile and the longitudinally-averaged zonal component of the cloud motions for 1982 exhibit jets near 45 deg latitude in both the northern and southern hemispheres. 30 references.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-08
...ETA has a continuing need for information on the...involved in employment and training or related activities...board membership and training ETA needs quick turnaround surveys...Other research and evaluation efforts, including...
Philip Langley; Susan King; Kun Wang; Dingchang Zheng; Roberto Giovannini; Marjan Bojarnejad; Alan Murray
2010-01-01
The Computing in Cardiology Challenge 2010 was to develop a computer algorithm for reconstructing missing sections of physiological data. For cardiac related signals our algorithm obtained beat timings from a reference timing channel. Missing beats were estimated from the average of non-missing beats. ECG derived respiration was used for missing respiratory data. Score for event 1 was 59 and for
Amit, Hagay
2006-01-01
Time-average and time-dependent parts of core flow. Hagay Amit, Peter Olson, Department of Earth and Planetary Sciences, Johns Hopkins University. Physics of the Earth and Planetary Interiors 155 (2006) 120-139. The geomagnetic field is generated by fluid flow in the Earth's metallic liquid outer core. This process, the geodynamo, has been…
Empirical wind model for the middle and lower atmosphere. Part 1: Local time average
NASA Technical Reports Server (NTRS)
Hedin, A. E.; Fleming, E. L.; Manson, A. H.; Schmidlin, F. J.; Avery, S. K.; Franke, S. J.
1993-01-01
The HWM90 thermospheric wind model was revised in the lower thermosphere and extended into the mesosphere and lower atmosphere to provide a single analytic model for calculating zonal and meridional wind profiles representative of the climatological average for various geophysical conditions. Gradient winds from CIRA-86 plus rocket soundings, incoherent scatter radar, MF radar, and meteor radar provide the data base and are supplemented by previous data driven model summaries. Low-order spherical harmonics and Fourier series are used to describe the major variations throughout the atmosphere including latitude, annual, semiannual, and longitude (stationary wave 1). The model represents a smoothed compromise between the data sources. Although agreement between various data sources is generally good, some systematic differences are noted, particularly near the mesopause. Root mean square differences between data and model are on the order of 15 m/s in the mesosphere and 10 m/s in the stratosphere for zonal wind, and 10 m/s and 4 m/s, respectively, for meridional wind.
A. L. Hales; M. L. Mäki-Lopez
1980-01-01
An analysis of P travel-times from the Nevada Test Site is carried out using data from 23 explosions between 1966 and 1976. Results show a scatter of travel times which may partly arise in the lower mantle. P travel-times over the distance range 25-100° are 2.0-2.5 s faster than the Jeffreys-Bullen travel times.
Letter, Word, and Text Reading Times of Precocious and Average Readers.
ERIC Educational Resources Information Center
Jackson, Nancy Ewald; Biemiller, Andrew J.
1985-01-01
Compared comprehension of kindergarten-age precocious readers (who read at the third-grade level) with second- and third-grade-age children. Results on measures of letter, scrambled word, and text reading times indicated that, for precocious readers, efficiency in lower-order tasks is not a prerequisite for rapid text reading and good…
Bootstrap model averaging in time series studies of particulate matter air pollution and mortality
Michael A Martin; Steven Roberts
2006-01-01
The consensus from time series studies that have investigated the mortality effects of particulate matter air pollution (PM) is that increases in PM are associated with increases in daily mortality. However, recently concerns have been raised that the observed positive association between PM and mortality may be an artefact of model selection due to multiple hypothesis testing. This problem arises
K. G. Bemis; P. A. Rona; D. R. Jackson; C. Jones; K. Mitsuzawa; D. Palmer; D. Silver; R. Gudlavalletti
2001-01-01
Many simple models of plumes predict time-averaged behavior. Although instantaneous measurements of plumes are still useful, time-averaged measurements are more directly comparable. Averages of varying numbers of consecutive acoustic images are quantified and the results compared. The acoustic images of Grotto Vent in the Main Endeavour Vent Field were obtained during the VIP (Vents Imaging Pacific) 2000 cruise using the
Stanley, H. Eugene
Analysis of clusters formed by the moving average of a long-range correlated time series. A. Carbone. …that C_n(i) generates a stationary sequence of self-affine clusters C with length, lifetime, and area s on the basis of the value of H [35,9]. The stock price volatility shows a degree of persistence (0.7 ≤ H…
Ghanem F. Oweis; Steven L. Ceccio
2005-01-01
The instantaneous and time-averaged flow fields in the tip region of a ducted marine propulsor are examined. In this flow, a primary tip-leakage vortex interacts with a secondary, co-rotating trailing edge vortex and other co- and counter-rotating vorticity found in the blade wake. Planar particle imaging velocimetry (PIV) is used to examine the flow in a plane approximately perpendicular to
An upper bound to time-averaged space-charge-limited diode currents. M. E. Griswold, N. J. Fisch. The Child-Langmuir law limits the steady-state current density across a one-dimensional planar diode. While it is known that the space-charge-limited current in the classical problem of a one-dimensional (1D) diode is J_CL = (4/9) ε0 (2q/m)^{1/2} V^{3/2} / d^2…
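The Child-Langmuir current density referenced above is straightforward to evaluate numerically. A minimal sketch for electrons in SI units, with an illustrative voltage and gap (the function name and example values are mine):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
Q_E = 1.602176634e-19    # elementary charge, C
M_E = 9.1093837015e-31   # electron mass, kg

def child_langmuir(voltage, gap):
    """Space-charge-limited current density for a 1D planar diode,
    J_CL = (4/9) * eps0 * sqrt(2q/m) * V^{3/2} / d^2, in A/m^2."""
    return (4.0 / 9.0) * EPS0 * math.sqrt(2.0 * Q_E / M_E) \
        * voltage ** 1.5 / gap ** 2

# Example: 1 kV across a 1 mm gap -> roughly 7.4e4 A/m^2 for electrons
print(child_langmuir(1e3, 1e-3))
```

The familiar V^{3/2}/d^2 scaling is visible directly in the return expression: doubling the gap cuts the limiting current density by a factor of four.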
NASA Astrophysics Data System (ADS)
Valuev, A.
2002-08-01
The choice of a numerical scheme for the molecular dynamics method is very important since molecular systems are dynamically unstable. In general, the Newtonian many-body problem has no analytical solution, though the mathematical solution (NS) does exist. Thus, no numerical method can give the exact solution either. The question is: which numerical method can provide a longer trajectory that could be considered to be close to the NS? Here we compare an ordinary MD scheme of "leap-frog" (LF) type and a scheme completely conserving the total energy of the system (CCS). As an example of CCS, the scheme of Greenspan was chosen (developed for Coulomb systems). Computations were performed for several MD systems, the time steps being the same for both schemes: 1. One-dimensional motion of the electron in a field of a massive charge. 2. Coulomb ellipse. 3. Unstable motion of a charge in the potential center. The system is enclosed in a rigid wall box. 4. Many-body Coulomb dense system. For all of these cases except the first one, the LF scheme turned out to be preferable. For the problem of a many-particle mixing system the CCS method has a serious shortcoming: it is more time consuming if one has to achieve the same accuracy of the phase trajectory. This conclusion should not, of course, be applied to special cases where exact energy conservation is necessary.
Arecibo Multi-frequency Time-Aligned Pulsar Average-Profile and Polarization Database
Timothy H. Hankins; Joanna M. Rankin
2008-02-08
We present Arecibo time-aligned, total-intensity profiles for 46 pulsars over an unusually wide range of radio frequencies, and multi-frequency, polarization-angle density diagrams and/or polarization profiles for 58 pulsars at some or all of the frequencies 50, 111/130, 430 and 1400 MHz. The frequency-dependent dispersion delay has been removed in order to align the profiles for study of their spectral evolution, and wherever possible the profiles of each pulsar are displayed on the same longitude scale. Most of the pulsars within Arecibo's declination range that are sufficiently bright for such spectral or single-pulse analysis are included in this survey, and the calibrated pulse sequences are available by web download for further study.
P. N. Brandt; A. V. Getling
2008-02-01
The decrease in the rms contrast of time-averaged images with the averaging time is compared between four datasets: (1) a series of solar granulation images recorded at La Palma in 1993; (2) a series of artificial granulation images obtained in numerical simulations by Rieutord et al. (2002); (3) a similar series computed by Steffen and his colleagues (see Wedemeyer et al., 2004); (4) a random field with some parameters typical of the granulation, constructed by Rast (2002). In addition, (5) a sequence of images was obtained from real granulation images using a temporal and spatial shuffling procedure, and the contrast of the average of n images from this sequence as a function of n is analysed. The series (1) of real granulation images exhibits a considerably slower contrast decrease than do both the series (3) of simulated granulation images and the series (4) of random fields. Starting from some relatively short averaging times t, the behaviour of the contrast in series (3) and (4) resembles the t^{-1/2} statistical law, while the shuffled series (5) obeys the n^{-1/2} law from n = 2 on. Series (2) demonstrates a peculiarly slow decline of contrast, which could be attributed to particular properties of the boundary conditions used in the simulations. Comparisons between the analysed contrast-variation laws indicate quite definitely that the brightness field of solar granulation contains a long-lived component, which could be associated with locally persistent dark intergranular holes and/or with the presence of quasi-regular structures. The suggestion that the random field (4) successfully reproduces the contrast-variation law for the real granulation (Rast, 2002) can therefore be rejected.
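The n^{-1/2} statistical law invoked above is easy to reproduce for temporally uncorrelated fields: averaging n independent images reduces the rms contrast by a factor of √n, so a long-lived component shows up as a slower decline. A minimal synthetic sketch with Gaussian brightness fluctuations (image size, contrast level, and all names are illustrative):

```python
import random
import statistics

random.seed(42)
NPIX = 4000  # pixels per synthetic image

def random_image():
    # brightness field with unit mean and ~10% rms contrast
    return [random.gauss(1.0, 0.1) for _ in range(NPIX)]

def rms_contrast(img):
    # rms fluctuation relative to the mean brightness
    return statistics.pstdev(img) / statistics.fmean(img)

def contrast_of_average(n):
    # pixel-wise average of n independent images
    avg = [0.0] * NPIX
    for _ in range(n):
        for i, v in enumerate(random_image()):
            avg[i] += v / n
    return rms_contrast(avg)

c1 = contrast_of_average(1)
c16 = contrast_of_average(16)
print(c1, c16, c1 / c16)  # ratio close to sqrt(16) = 4 for uncorrelated images
```

A persistent (frame-to-frame correlated) component would keep the ratio well below √n, which is the diagnostic the paper applies to real granulation.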
Avitia, Roberto L; Reyna, Marco A; Bravo-Zanoguera, Miguel E; Cetto, Lucio A
2013-04-01
Ventricular late potentials (VLPs) are small-amplitude waves with a short duration that appear at the end of the QRS complex, making the QRS complex duration larger. The signal-averaged electrocardiography (ECG) technique enhances VLPs by averaging beats, assuming noise is the only random variable. However, ECG signals are not completely stationary, and different elongations appear in both time and amplitude in each beat. This research proposes to use piecewise linear approximation to segment each beat, and performs the alignment of the beats using the technique known as derivative dynamic time-warping, to have beats better aligned and consequently enhance the presence of VLPs. We recorded high-resolution ECGs (HRECGs) from 50 subjects in supine position with no heart-stroke antecedents. VLPs were created synthetically and added to the HRECGs. Two cases were evaluated: (i) duration of the QRS complexes with VLPs without beat alignment, and (ii) duration of QRS complexes with VLPs using beat alignment in time and amplitude. Considering QRS duration as an indicator of VLP presence, results show that when using beat alignment in time and amplitude it is possible to reach a sensitivity of 0.96 and a specificity of 0.52, as opposed to 0.72 and 0.40, respectively, when using only averaging without beat alignment in time and amplitude. PMID:23446922
NASA Astrophysics Data System (ADS)
Acosta, D.; Affolder, T.; Akimoto, H.; Albrow, M. G.; Ambrose, D.; Amidei, D.; Anikeev, K.; Antos, J.; Apollinari, G.; Arisawa, T.; Artikov, A.; Asakawa, T.; Ashmanskas, W.; Azfar, F.; Azzi-Bacchetta, P.; Bacchetta, N.; Bachacou, H.; Badgett, W.; Bailey, S.; de Barbaro, P.; Barbaro-Galtieri, A.; Barnes, V. E.; Barnett, B. A.; Baroiant, S.; Barone, M.; Bauer, G.; Bedeschi, F.; Behari, S.; Belforte, S.; Bell, W. H.; Bellettini, G.; Bellinger, J.; Benjamin, D.; Bensinger, J.; Beretvas, A.; Berryhill, J.; Bhatti, A.; Binkley, M.; Bisello, D.; Bishai, M.; Blair, R. E.; Blocker, C.; Bloom, K.; Blumenfeld, B.; Blusk, S. R.; Bocci, A.; Bodek, A.; Bolla, G.; Bolshov, A.; Bonushkin, Y.; Bortoletto, D.; Boudreau, J.; Brandl, A.; Bromberg, C.; Brozovic, M.; Brubaker, E.; Bruner, N.; Budagov, J.; Budd, H. S.; Burkett, K.; Busetto, G.; Byrum, K. L.; Cabrera, S.; Calafiura, P.; Campbell, M.; Carithers, W.; Carlson, J.; Carlsmith, D.; Caskey, W.; Castro, A.; Cauz, D.; Cerri, A.; Cerrito, L.; Chan, A. W.; Chang, P. S.; Chang, P. T.; Chapman, J.; Chen, C.; Chen, Y. C.; Cheng, M.-T.; Chertok, M.; Chiarelli, G.; Chirikov-Zorin, I.; Chlachidze, G.; Chlebana, F.; Christofek, L.; Chu, M. L.; Chung, J. Y.; Chung, W.-H.; Chung, Y. S.; Ciobanu, C. I.; Clark, A. G.; Coca, M.; Connolly, A.; Convery, M.; Conway, J.; Cordelli, M.; Cranshaw, J.; Culbertson, R.; Dagenhart, D.; D'Auria, S.; de Cecco, S.; Dejongh, F.; dell'Agnello, S.; dell'Orso, M.; Demers, S.; Demortier, L.; Deninno, M.; de Pedis, D.; Derwent, P. F.; Devlin, T.; Dionisi, C.; Dittmann, J. R.; Dominguez, A.; Donati, S.; D'Onofrio, M.; Dorigo, T.; Eddy, N.; Einsweiler, K.; Engels, E.; Erbacher, R.; Errede, D.; Errede, S.; Eusebi, R.; Fan, Q.; Farrington, S.; Feild, R. G.; Fernandez, J. P.; Ferretti, C.; Field, R. D.; Fiori, I.; Flaugher, B.; Flores-Castillo, L. R.; Foster, G. W.; Franklin, M.; Freeman, J.; Friedman, J.; Fukui, Y.; Furic, I.; Galeotti, S.; Gallas, A.; Gallinaro, M.; Gao, T.; Garcia-Sciveres, M.; Garfinkel, A. 
F.; Gatti, P.; Gay, C.; Gerdes, D. W.; Gerstein, E.; Giagu, S.; Giannetti, P.; Giolo, K.; Giordani, M.; Giromini, P.; Glagolev, V.; Glenzinski, D.; Gold, M.; Goldschmidt, N.; Goldstein, J.; Gomez, G.; Goncharov, M.; Gorelov, I.; Goshaw, A. T.; Gotra, Y.; Goulianos, K.; Green, C.; Gresele, A.; Grim, G.; Grosso-Pilcher, C.; Guenther, M.; Guillian, G.; Guimaraes da Costa, J.; Haas, R. M.; Haber, C.; Hahn, S. R.; Halkiadakis, E.; Hall, C.; Handa, T.; Handler, R.; Happacher, F.; Hara, K.; Hardman, A. D.; Harris, R. M.; Hartmann, F.; Hatakeyama, K.; Hauser, J.; Heinrich, J.; Heiss, A.; Hennecke, M.; Herndon, M.; Hill, C.; Hocker, A.; Hoffman, K. D.; Hollebeek, R.; Holloway, L.; Hou, S.; Huffman, B. T.; Hughes, R.; Huston, J.; Huth, J.; Ikeda, H.; Issever, C.; Incandela, J.; Introzzi, G.; Iori, M.; Ivanov, A.; Iwai, J.; Iwata, Y.; Iyutin, B.; James, E.; Jones, M.; Joshi, U.; Kambara, H.; Kamon, T.; Kaneko, T.; Kang, J.; Karagoz Unel, M.; Karr, K.; Kartal, S.; Kasha, H.; Kato, Y.; Keaffaber, T. A.; Kelley, K.; Kelly, M.; Kennedy, R. D.; Kephart, R.; Khazins, D.; Kikuchi, T.; Kilminster, B.; Kim, B. J.; Kim, D. H.; Kim, H. S.; Kim, M. J.; Kim, S. B.; Kim, S. H.; Kim, T. H.; Kim, Y. K.; Kirby, M.; Kirk, M.; Kirsch, L.; Klimenko, S.; Koehn, P.; Kondo, K.; Konigsberg, J.; Korn, A.; Korytov, A.; Kotelnikov, K.; Kovacs, E.; Kroll, J.; Kruse, M.; Krutelyov, V.; Kuhlmann, S. E.; Kurino, K.; Kuwabara, T.; Kuznetsova, N.; Laasanen, A. T.; Lai, N.; Lami, S.; Lammel, S.; Lancaster, J.; Lannon, K.; Lancaster, M.; Lander, R.; Lath, A.; Latino, G.; Lecompte, T.; Le, Y.; Lee, J.; Lee, S. W.; Leonardo, N.; Leone, S.; Li, K.; Lin, C. S.; Lindgren, M.; Liss, T. M.; Liu, J. B.; Liu, T.; Liu, Y. C.; Litvintsev, D. O.; Lobban, O.; Lockyer, N. 
S.; Loginov, A.; Loken, J.; Loreti, M.; Lucchesi, D.; Lukens, P.; Lusin, S.; Lyons, L.; Lys, J.; Madrak, R.; Maeshima, K.; Maksimovic, P.; Malferrari, L.; Mangano, M.; Manca, G.; Mariotti, M.; Martignon, G.; Martin, M.; Martin, A.; Martin, V.; Martínez, M.; Matthews, J. A.; Mazzanti, P.; McFarland, K. S.; McIntyre, P.; Menguzzato, M.; Menzione, A.; Merkel, P.; Mesropian, C.; Meyer, A.; Miao, T.; Miller, R.; Miller, J. S.; Minato, H.; Miscetti, S.; Mishina, M.; Mitselmakher, G.; Miyazaki, Y.; Moggi, N.; Moore, E.; Moore, R.; Morita, Y.; Moulik, T.; Mulhearn, M.; Mukherjee, A.; Muller, T.; Munar, A.; Murat, P.; Murgia, S.; Nachtman, J.; Nagaslaev, V.; Nahn, S.; Nakada, H.; Nakano, I.; Napora, R.; Niell, F.; Nelson, C.; Nelson, T.; Neu, C.; Neubauer, M. S.; Neuberger, D.; Newman-Holmes, C.; Ngan, C.-Y. P.; Nigmanov, T.; Niu, H.; Nodulman, L.; Nomerotski, A.; Oh, S. H.; Oh, Y. D.; Ohmoto, T.; Ohsugi, T.; Oishi, R.; Okusawa, T.; Olsen, J.; Orejudos, W.; Pagliarone, C.; Palmonari, F.; Paoletti, R.; Papadimitriou, V.; Partos, D.; Patrick, J.; Pauletta, G.; Paulini, M.
2004-01-01
We have measured the number of like-sign (LS) and opposite-sign (OS) lepton pairs arising from double semileptonic decays of b and b̄ hadrons, pair produced at the Fermilab Tevatron collider. The data samples were collected with the Collider Detector at Fermilab during the 1992-1995 collider run by triggering on the existence of μμ or eμ candidates in an event. The observed ratio of LS to OS dileptons leads to a measurement of the average time-integrated mixing probability of all produced b-flavored hadrons which decay weakly, χ̄ = 0.152 ± 0.007 (stat) ± 0.011 (syst), which is significantly larger than the world average χ̄ = 0.118 ± 0.005.
Ryan, James Patrick
1992-01-01
…the tour through N which minimizes the average time to satisfaction for D. If the set of all previously visited nodes at any point in the tour is defined as B, the problem can then be described by equations (3-b) and (3-c). … Selection Criteria for Search: All branch-and-bound algorithms are based upon the idea of a search tree made up of nodes and branches. Search trees for traveling salesman problems can be viewed as shown in Figure 5. These search trees…
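The branch-and-bound idea sketched in this abstract — grow partial tours down a search tree and prune any branch whose running cost already exceeds the best complete tour found so far — can be illustrated on a toy traveling salesman instance. The distance matrix and the simple running-cost bound below are illustrative, not from the thesis:

```python
import math

# Symmetric toy distance matrix between 4 nodes
DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
N = len(DIST)

best_cost = math.inf
best_tour = None

def branch(tour, cost):
    """Extend the partial tour; prune when the running cost already
    exceeds the cost of the best complete tour found so far."""
    global best_cost, best_tour
    if cost >= best_cost:          # bound: prune this branch of the tree
        return
    if len(tour) == N:             # leaf: close the cycle back to the start
        total = cost + DIST[tour[-1]][tour[0]]
        if total < best_cost:
            best_cost, best_tour = total, tour[:]
        return
    for nxt in range(N):
        if nxt not in tour:        # branch on each unvisited node
            branch(tour + [nxt], cost + DIST[tour[-1]][nxt])

branch([0], 0)
print(best_tour, best_cost)
```

Tighter lower bounds (e.g. adding the cheapest edge out of every unvisited node) prune far more of the tree; the running cost alone is the weakest bound that still guarantees the optimal tour.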
Stetson, Karl A
2006-09-01
I present the results of an experiment to demonstrate the effect of phase mismatch between an object vibration and a bias vibration in pseudo-phase-step analysis of time-average holographic interferograms of vibration modes. Pseudo-phase-stepping applies conventional phase-step equations to zero-order Bessel function fringes and during phase unwrapping corrects for the errors incurred. A circular disk vibrating in a quadrature combination of its two one-diameter modes was used as a test object and provided a 360 degrees phase distribution. The results indicate that the process has considerable tolerance to phase mismatch. PMID:16912785
NASA Astrophysics Data System (ADS)
Shonkwiler, K. B.; Ham, J. M.; Williams, C. M.
2013-12-01
Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances where such aerosols can deposit in sensitive regions, potentially causing harm to local ecosystems. However, quantifying the emissions of ammonia from CAFOs through direct measurement is very difficult and costly to perform. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured using inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are only deployed when wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data to modeled emissions from more sophisticated backward Lagrangian stochastic (bLs) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems were deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada) which continuously measured ammonia concentrations instantaneously over a 225-m path. This particular laser is utilized in agricultural settings, and in combination with a bLs model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. 
This study will first compare the ammonia concentrations measured with the Radiello system to those measured with the long-path IR laser. Second, NH3 emissions estimated using the simple inverse model (FIDES) and the time-averaged data will be compared to emissions derived from the bLS model (WindTrax) using the laser-based NH3 data. Results could lead to a more cost-efficient and simpler technique for monitoring ammonia fluxes from CAFOs and other strong areal sources.
2014-01-01
Background Primary IgA nephropathy (IgAN) is the most common form of idiopathic glomerulonephritis worldwide. Although most patients are able to achieve remission with the current therapy, a large number of patients will still progress to end-stage renal disease. This study aimed to evaluate kidney disease progression and the risk factors for progression in IgAN patients who achieved remission. Methods Patients from a prospective database with IgAN were included in this study. All the subjects had achieved a complete remission (CR) or partial remission (PR) following 6 months of therapy. Renal survival and the relationship between the clinical parameters and composite renal outcomes were assessed. Results The study comprised 878 IgAN patients recruited between January 2005 and December 2010. Overall, 632 patients were enrolled in this study. The data from the 369 patients who achieved remission were analyzed; the mean follow-up time was 49 months. The median serum creatinine (SCr) concentration at baseline was 91.3 μmol/L, and the time-averaged creatinine (TA-SCr) was 91.8 μmol/L. The mean serum albumin (ALB) level at baseline was 39.4 g/L, and the time-averaged serum albumin (TA-ALB) was 42.1 g/L. Multivariate Cox regression analyses revealed that the TA-ALB and TA-SCr levels were independently associated with the composite renal outcome. The patients with a TA-SCr value >120 μmol/L and a TA-ALB level…
Nishimichi, Takahiro
2014-01-01
We explicitly test the equal-time consistency relation between the angular-averaged bispectrum and the power spectrum of the matter density field, employing a large suite of cosmological $N$-body simulations. This is the lowest-order version of the relations between $(\ell+n)$-point and $n$-point polyspectra, where one averages over the angles of $\ell$ soft modes. This relation depends on two wave numbers, $k'$ in the soft domain and $k$ in the hard domain. We show that it holds up to a good accuracy, when $k'/k \ll 1$ and $k'$ is in the linear regime, while the hard mode $k$ goes from linear ($0.1\,h\,\mathrm{Mpc}^{-1}$) to nonlinear ($1.0\,h\,\mathrm{Mpc}^{-1}$) scales. On scales $k \lesssim 0.4\,h\,\mathrm{Mpc}^{-1}$, we confirm the relation within a $\sim 5\%$ accuracy, even though the bispectrum can already deviate from leading-order perturbation theory by more than $30\%$. We further show that the relation extends up to nonlinear scales, $k \sim 1.0\,h\,\mathrm{Mpc}^{-1}$, within an accuracy of $\sim 10\%$…
NASA Astrophysics Data System (ADS)
Thunis, P.; Clappier, A.; Pisoni, E.; Degraeuwe, B.
2015-02-01
Air quality models, which are nowadays used for a wide range of scopes (i.e. assessment, forecast, planning), see their intrinsic complexity progressively increase as better knowledge of the atmospheric chemistry processes is gained. As a result of this increased complexity, potential non-linearities are implicitly and/or explicitly incorporated in the system. These non-linearities represent a key and challenging aspect of air quality modeling, especially for assessing the robustness of the model responses. In this work the importance of non-linear effects in air quality modeling is quantified, especially as a function of time averaging. A methodology is proposed to decompose the concentration change resulting from an emission reduction over a given domain into its linear and non-linear contributions for each precursor, as well as into the contribution resulting from the interactions among precursors. Simulations with the LOTOS-EUROS model were performed by TNO over three regional geographical areas in Europe for this analysis. In all three regions the non-linear effects for PM10 and PM2.5 are shown to be relatively minor for yearly and monthly averages, whereas they become significant for daily average values. For ozone, non-linearities become important already for monthly averages in some regions. An approach which explicitly deals with monthly variations therefore seems more appropriate for O3. In general, non-linearities are more important at locations where concentrations are lowest, i.e. at urban locations for O3 and at rural locations for PM10 and PM2.5. Finally, the impact of spatial resolution (tested by comparing coarse and fine resolution simulations) on the degree of non-linearity has been shown to be minor as well.
The conclusions developed here are model dependent and runs should be repeated with the particular model of interest but the proposed methodology allows with a limited number of runs to identify where efforts should be focused in order to include the relevant terms into a simplified surrogate model for integrated assessment purposes.
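The decomposition described above can be illustrated with a toy model: reduce each precursor's emissions alone, reduce all of them jointly, and attribute the difference to interactions among precursors. The function name and the quadratic toy chemistry below are invented for illustration and are not taken from the paper:

```python
def decompose(model, base, reductions):
    """Split the concentration change from a joint emission reduction into
    per-precursor contributions plus an interaction term, so that
    dC_joint = sum_i dC_i + interaction.
    `model` maps a dict of precursor emissions to a concentration;
    `reductions` maps precursor name -> fractional cut (e.g. 0.5 for -50%)."""
    c0 = model(base)
    individual = {}
    for p, r in reductions.items():
        e = dict(base)
        e[p] = base[p] * (1 - r)          # reduce this precursor alone
        individual[p] = model(e) - c0
    joint = dict(base)
    for p, r in reductions.items():
        joint[p] = base[p] * (1 - r)      # reduce all precursors together
    d_joint = model(joint) - c0
    interaction = d_joint - sum(individual.values())
    return individual, interaction, d_joint

# Toy non-linear "chemistry": C = a + b + 0.1*a*b. With a 50% cut of both
# precursors, the individual terms miss a small positive interaction.
toy = lambda e: e["a"] + e["b"] + 0.1 * e["a"] * e["b"]
ind, inter, dj = decompose(toy, {"a": 1.0, "b": 1.0}, {"a": 0.5, "b": 0.5})
```

With a real model, each evaluation of `model` is a full simulation run, which is why the paper emphasizes doing this with a limited number of runs.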
ERIC Educational Resources Information Center
Schaffer, Eugene; Reynolds, David; Stringfield, Sam
2012-01-01
Beginning from 1 high-poverty, historically low-achieving secondary school's successful turnaround work, this article provides data relative to a successful school turnaround, the importance of external and system-level supports, and the importance of building for sustainable institutionalization of improvements. The evidence suggests the…
ERIC Educational Resources Information Center
Cucchiara, Maia Bloomfield; Rooney, Erin; Robertson-Kraft, Claire
2015-01-01
School turnaround--a reform strategy that strives for quick and dramatic transformation of low-performing schools--has gained prominence in recent years. This study uses interviews and focus groups conducted with 86 teachers in 13 schools during the early stages of school turnaround in a large urban district to examine teachers' perceptions of the…
ERIC Educational Resources Information Center
Herman, Rebecca; Huberman, Mette
2012-01-01
The TALPS study aims to build on the existing research base to develop promising methodologies to identify chronically low-performing and turnaround schools, as well as to identify promising strategies for turning around chronically low-performing schools. By looking specifically at schools identified as turnaround, in comparison to nonturnaround…
Patrick, Mark A; Joye, Colin D; De Lucia, Frank C
2014-10-01
Active illumination is an attractive approach for millimeter and submillimeter wave imaging because of its generally larger signal margins and capacity for range determination. However, the resultant speckle from diffuse targets and the requirement that specular targets have strategic angular alignment are significant shortcomings. As a result, many, if not most, demonstrations of active imaging have involved the use of strategically oriented specular targets. We have previously shown [J. Opt. Soc. Am. A 29, 2643 (2012), doi:10.1364/JOSAA.29.002643] that spatially time-resolved modulated multimode mixing approaches mitigate these issues. In this paper, we extend this work to demonstrate the use of range-resolved multimode mixing as an alternative means to provide statistically independent images for speckle reduction averaging. We also demonstrate the use of systematic illumination of spatial modes in a "grassy field" geometry. PMID:25401236
Time-averaged distributions of solute and solvent motions: exploring proton wires of GFP and PfM2DH.
Velez-Vega, Camilo; McKay, Daniel J J; Aravamuthan, Vibhas; Pearlstein, Robert; Duca, José S
2014-12-22
Proton translocation pathways of selected variants of the green fluorescent protein (GFP) and Pseudomonas fluorescens mannitol 2-dehydrogenase (PfM2DH) were investigated via an explicit solvent molecular dynamics-based analysis protocol that allows for direct quantitative relationship between a crystal structure and its time-averaged solute-solvent structure obtained from simulation. Our study of GFP is in good agreement with previous research suggesting that the proton released from the chromophore upon photoexcitation can diffuse through an extended internal hydrogen bonding network that allows for the proton to exit to bulk or be recaptured by the anionic chromophore. Conversely for PfM2DH, we identified the most probable ionization states of key residues along the proton escape channel from the catalytic site to bulk solvent, wherein the solute and high-density solvent crystal structures of binary and ternary complexes were properly reproduced. Furthermore, we proposed a plausible mechanism for this proton translocation process that is consistent with the state-dependent structural shifts observed in our analysis. The time-averaged structures generated from our analyses facilitate validation of MD simulation results and provide a comprehensive profile of the dynamic all-occupancy solvation network within and around a flexible solute, from which detailed hydrogen-bonding networks can be inferred. In this way, potential drawbacks arising from the elucidation of these networks by examination of static crystal structures or via alternate rigid-protein solvation analysis procedures can be overcome. Complementary studies aimed at the effective use of our methodology for alternate implementations (e.g., ligand design) are currently underway. PMID:25405925
Turnaround, Transformational, or Transactional Leadership: An Ethical Dilemma in School Reform
ERIC Educational Resources Information Center
Mette, Ian M.; Scribner, Jay P.
2014-01-01
This case was written for school leaders, specifically building-level principals and central office administrators attempting to implement school turnaround reform efforts. Often, leaders who embark on this type of organizational change work in intense environments that produce high levels of pressure to demonstrate improvement in student…
Achieving Exact and Constant Turnaround Ratio in a DDS-Based Coherent Transponder
NASA Technical Reports Server (NTRS)
D'Addario, Larry R.
2011-01-01
A report describes a non-standard direct digital synthesizer (DDS) implementation that can be used as part of a coherent transponder so as to allow any rational turnaround ratio to be exactly achieved and maintained while the received frequency varies. (A coherent transponder is a receiver-transmitter in which the transmitted carrier is locked to a pre-determined multiple of the received carrier's frequency and phase. That multiple is called the turnaround ratio.) The report also describes a general model for coherent transponders that are partly digital. A partially digital transponder is one in which analog signal processing is used to convert the signals between the high frequencies at which they are radiated and the relatively low frequencies at which they are converted to or from digital form, with most of the complex processing performed digitally. There is a variety of possible architectures for such a transponder, and different ones can be selected by choosing different parameter values in the general model. Such a transponder uses a DDS to create a low-frequency quasi-sinusoidal signal that tracks the received carrier's phase, and another DDS to generate an IF or near-baseband version of the transmitted carrier. With conventional DDS implementations, a given turnaround ratio can be achieved only approximately, and the error varies slightly as the received frequency changes. The non-conventional implementation employed here allows any rational turnaround ratio to be exactly maintained.
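The exactness requirement can be illustrated with rational arithmetic: a conventional binary tuning word cannot represent an arbitrary ratio p/q, but if each update carries the sub-LSB remainder forward, the long-run average transmit word holds the ratio exactly. The residue-carry sketch below is a hypothetical illustration of this principle, not the report's actual DDS design:

```python
from fractions import Fraction

def tracking_words(rx_words, ratio):
    """For each receive tuning word, emit an integer transmit word while
    carrying the exact rational residue forward, so the accumulated transmit
    phase holds the turnaround ratio `ratio` (a Fraction p/q) exactly."""
    residue = Fraction(0)
    out = []
    for rx in rx_words:
        exact = Fraction(rx) * ratio + residue
        word = int(exact)            # integer word loaded into the DDS
        residue = exact - word       # defer the sub-LSB remainder
        out.append(word)
    return out
```

For example, with the classic deep-space turnaround ratio 880/749, summing the emitted words over 749 identical updates reproduces the ideal total with zero accumulated error, because the residue telescopes away.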
Tinkering and Turnarounds: Understanding the Contemporary Campaign to Improve Low-Performing Schools
ERIC Educational Resources Information Center
Duke, Daniel L.
2012-01-01
An unprecedented amount of attention in recent years has been focused on turning around low-performing schools. Drawing on insights from Tyack and Cuban's (1995) "Tinkering Toward Utopia," the article analyzes the forces behind the school turnaround phenomenon and how they have evolved since passage of the No Child Left Behind Act. The article…
How to Know when Dramatic Change Is on Track: Leading Indicators of School Turnarounds
ERIC Educational Resources Information Center
Kowal, Julie; Ableidinger, Joe
2011-01-01
In recent years, national policymakers have placed new emphasis on "school turnarounds" as a strategy for rapid, dramatic improvement in chronically failing schools, calling on education leaders to turn around performance in the 5,000 lowest-achieving schools nationwide. This goal may seem daunting, given the dismal success rates of school…
Rural Economic Development Consequences of the Population Turnaround in Northern Lower Michigan.
ERIC Educational Resources Information Center
West, Patrick C.; And Others
Consequences of population turnaround for rural economic development are examined in a 9-county region of Northern Lower Michigan. Data from census reports and 374 usable responses to a questionnaire mailed to a random sample of property owners drawn from 1982 county tax assessment rolls were used to test competing hypotheses about rural…
A Case Study of Change Strategies Implemented in a Turnaround Elementary School
ERIC Educational Resources Information Center
Colson, Jo Ann
2012-01-01
This case study examined the change strategies in a turnaround school at the elementary level to understand and describe how change occurred and was sustained at this campus. This study examined the factors which contributed to the change in academic success of students, examined beliefs about change that led to the change process, identified the…
Participatory Democracy and Struggling Schools: Making Space for Youth in School Turnarounds
ERIC Educational Resources Information Center
Kirshner, Ben; Jefferson, Anton
2015-01-01
Background/Context:Federal policy, as codified in Race to the Top (RTT) funding guidelines, outlines four types of intervention: turnaround, restart, closure, and transformation. RTT has embraced a technocratic paradigm for school reform that frames choice less as the opportunity for the public to deliberate about what it wants from its schools…
ERIC Educational Resources Information Center
Herman, Rebecca; Graczewski, Cheryl; James-Burdumy, Susanne; Murray, Matthew; Perez-Johnson, Irma; Tanenbaum, Courtney
2013-01-01
The federal School Improvement Grants (SIG) program, to which $3 billion were allocated under the American Recovery and Reinvestment Act of 2009 (ARRA), supports schools attempting to turn around a history of low performance. School turnaround also is a focus of Race to the Top (RTT), another ARRA-supported initiative, which involved a roughly $4…
Chronically Low-Performing Schools and Turnaround: Evidence from Three States
ERIC Educational Resources Information Center
Hansen, Michael; Choi, Kilchan
2012-01-01
The criteria for determining the student outcomes that define a school as having "turned around" are not well defined, and the definition of turnaround performance varies across studies. Although current policy initiatives offer guidelines for identifying CLP schools, there is no standard definition or methodology in common usage. This paper…
CAD/CAM, Creativity, and Discipline Lead to Turnaround School Success
ERIC Educational Resources Information Center
Gorman, Lynn
2012-01-01
Miami Central High School technology teacher Frank Houghtaling thinks the connection between theory and application is one reason his students perform better on the Florida Comprehensive Assessment Test (FCAT). The impressive turnaround school drew local and national attention last spring when one of Houghtaling's students, Dagoberto Cruz, won…
Recovering Failing local authorities - Is There a Need for Turnaround Management Strategies?
Itai Beeri
2006-01-01
This paper deals with Turnaround Management Strategies in failing local authorities. The paper suggests that while the New Public Management reform resulted in some achievements, some public organizations are persistently failing. Although there is no clear-cut distinction between failing and well-performing public organizations, the paper points out some of their characteristics. The paper analyzes the efforts to recover English
Christian Bachmann
2009-01-01
Purpose – Assuming that "change" is an ongoing, ordinary business condition, this paper seeks to introduce turnaround management as an ongoing management task to secure competitive advantage. Design/methodology/approach – The paper briefly reviews the impact of market turmoil on successful corporate management with particular reference to the Romanian market. It includes key learning based on structured, qualitative interviews of executives
Pizzuto, James; Schenk, Edward R.; Hupp, Cliff R.; Gellis, Allen; Noe, Greg; Williamson, Elyse; Karwan, Diana L.; O'Neal, Michael; Marquard, Julia; Aalto, Rolf; Newbold, Denis
2014-01-01
Watershed Best Management Practices (BMPs) are often designed to reduce loading from particle-borne contaminants, but the temporal lag between BMP implementation and improvement in receiving water quality is difficult to assess because particles are only moved downstream episodically, resting for long periods in storage between transport events. A theory is developed that describes the downstream movement of suspended sediment particles accounting for the time particles spend in storage given sediment budget data (by grain size fraction) and information on particle transit times through storage reservoirs. The theory is used to define a suspended sediment transport length scale that describes how far particles are carried during transport events, and to estimate a downstream particle velocity that includes time spent in storage. At 5 upland watersheds of the mid-Atlantic region, transport length scales for silt-clay range from 4 to 60 km, while those for sand range from 0.4 to 113 km. Mean sediment velocities for silt-clay range from 0.0072 km/yr to 0.12 km/yr, while those for sand range from 0.0008 km/yr to 0.20 km/yr, 4–6 orders of magnitude slower than the velocity of water in the channel. These results suggest lag times of 100–1000 years between BMP implementation and effectiveness in receiving waters such as the Chesapeake Bay (where BMPs are located upstream of the characteristic transport length scale). Many particles likely travel much faster than these average values, so further research is needed to determine the complete distribution of suspended sediment velocities in real watersheds.
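The lag-time estimate in the abstract is a one-line calculation: distance to the receiving water divided by the storage-inclusive particle velocity. The numbers below are illustrative picks from the ranges quoted above, not site-specific values:

```python
def lag_time_years(distance_km, velocity_km_per_yr):
    """Years for the average suspended particle, including time spent in
    storage, to travel a given distance downstream."""
    return distance_km / velocity_km_per_yr

# Silt-clay moving at 0.05 km/yr over 30 km to a receiving water body
# implies a lag of several centuries, consistent with the 100-1000 year
# range stated in the abstract.
lag = lag_time_years(30.0, 0.05)
```

The contrast with in-channel water velocities (4-6 orders of magnitude faster) is what makes BMP effectiveness so slow to appear downstream.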
Keith S Cover
2008-03-20
While the multiexponential nature of T2 decays measured in vivo is well known, characterizing T2 decays by a single time constant is still very useful when differentiating among structures and pathologies in MRI images. A novel, robust, fast and very simple method is presented for both estimating and displaying the average time constant for the T2 decay of each pixel from a multiecho MRI sequence. The average time constant is calculated from the average of the values measured from the T2 decay over many echoes. For a monoexponential decay, the normalized decay average varies monotonically with the time constant. Therefore, it is simple to map any normalized decay average to an average time constant. This method takes advantage of the robustness of the normalized decay average to both artifacts and multiexponential decays. Color intensity projections (CIPs) were used to display 32 echoes acquired at a 10 ms spacing as a single color image. The brightness of each pixel in each color image was determined by the intensity of the corresponding pixel in the earliest image and the hue was determined by the normalized decay average. Examples demonstrate the effectiveness of using CIPs to display the results of a multiecho sequence for a healthy subject and a multiple sclerosis patient.
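The monotonic mapping described above can be sketched in a few lines: compute the decay average normalized to the first echo, then invert the T2-to-average map with a lookup table. This is a sketch of the idea in the abstract; the paper's exact normalization and display pipeline may differ:

```python
import numpy as np

def t2_from_decay_average(signal, echo_times):
    """Estimate an average T2 (ms) from a multiecho decay by inverting the
    monotonic map T2 -> normalized decay average via a lookup table."""
    measured = signal.mean() / signal[0]          # decay average, normalized
    t2_grid = np.linspace(5.0, 500.0, 2000)       # candidate T2 values, ms
    # For each candidate T2, the normalized average of a monoexponential
    # decay; this increases monotonically with T2, so np.interp can invert it.
    table = np.array([np.mean(np.exp(-echo_times / t2)) / np.exp(-echo_times[0] / t2)
                      for t2 in t2_grid])
    return float(np.interp(measured, table, t2_grid))

# 32 echoes at 10 ms spacing, as in the abstract; a pure T2 = 80 ms decay
# should map back to ~80 ms.
t = np.arange(1, 33) * 10.0
estimate = t2_from_decay_average(np.exp(-t / 80.0), t)
```

Because only an average over echoes is needed, the estimate inherits the robustness to noise and multiexponential behavior that the abstract emphasizes.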
Woolcock, Patrick J; Koziel, Jacek A; Cai, Lingshuang; Johnston, Patrick A; Brown, Robert C
2013-03-15
Time-weighted average (TWA) passive sampling using solid-phase microextraction (SPME) and gas chromatography was investigated as a new method of collecting, identifying and quantifying contaminants in process gas streams. Unlike previous TWA-SPME techniques using the retracted fiber configuration (fiber within needle) to monitor ambient conditions or relatively stagnant gases, this method was developed for fast-moving process gas streams at temperatures approaching 300 °C. The goal was to develop a consistent and reliable method of analyzing low concentrations of contaminants in hot gas streams without performing time-consuming exhaustive extraction with a slipstream. This work in particular aims to quantify trace tar compounds found in a syngas stream generated from biomass gasification. This paper evaluates the concept of retracted SPME at high temperatures by testing the three essential requirements for TWA passive sampling: (1) zero-sink assumption, (2) consistent and reliable response by the sampling device to changing concentrations, and (3) equal concentrations in the bulk gas stream relative to the face of the fiber syringe opening. Results indicated the method can accurately predict gas stream concentrations at elevated temperatures. Evidence was also discovered to validate the existence of a second boundary layer within the fiber during the adsorption/absorption process. This limits the technique to operating within reasonable mass loadings and loading rates, established by appropriate sampling depths and times for concentrations of interest. A limit of quantification for the benzene model tar system was estimated at 0.02 g m(-3) (8 ppm) with a limit of detection of 0.5 mg m(-3) (200 ppb). Using the appropriate conditions, the technique was applied to a pilot-scale fluidized-bed gasifier to verify its feasibility. 
Results from this test were in good agreement with literature and prior pilot plant operation, indicating the new method can measure low concentrations of tar in gasification streams. PMID:23398991
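For a retracted fiber, uptake through the needle opening is diffusion-limited, and the TWA concentration follows from the adsorbed mass via Fick's first law, C = m·Z/(D·A·t), as commonly used in the TWA-SPME literature. The helper and the numbers below are an illustrative sketch, not values from this study:

```python
def twa_concentration(mass_ng, Z_cm, D_cm2_s, A_cm2, t_s):
    """Time-weighted average concentration (ng/cm^3) from the mass adsorbed
    on a retracted SPME fiber, assuming the zero-sink condition holds:
    C = m * Z / (D * A * t), with Z the retraction depth, A the needle
    opening area, D the analyte's gas-phase diffusivity, t the exposure time."""
    return mass_ng * Z_cm / (D_cm2_s * A_cm2 * t_s)

# e.g. 10 ng collected over 10 min with Z = 0.3 cm, D = 0.09 cm^2/s,
# A = 8e-4 cm^2 gives a TWA concentration of roughly 69 ng/cm^3.
c = twa_concentration(10.0, 0.3, 0.09, 0.0008, 600.0)
```

The zero-sink assumption tested in the paper is exactly what makes this linear relation valid over the sampling interval.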
A. Ian Murdoch
2006-01-01
In Continuum Mechanics the notions of body, material point, and motion, are primitive. Here these concepts are derived for any (possibly time-dependent) material system via mass and momentum densities whose values are local spacetime averages of molecular quantities. The averaging procedure necessary to ensure molecular-based densities can be agreed upon by all observers (that is, are objective) has implications for
Wolthaus, J. W. H.; Sonke, J.-J.; Herk, M. van; Damen, E. M. F. [Department of Radiation Oncology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX Amsterdam (Netherlands)
2008-09-15
Purpose: lower lobe lung tumors move with amplitudes of up to 2 cm due to respiration. To reduce respiration imaging artifacts in planning CT scans, 4D imaging techniques are used. Currently, we use a single (midventilation) frame of the 4D data set for clinical delineation of structures and radiotherapy planning. A single frame, however, often contains artifacts due to breathing irregularities, and is noisier than a conventional CT scan since the exposure per frame is lower. Moreover, the tumor may be displaced from the mean tumor position due to hysteresis. The aim of this work is to develop a framework for the acquisition of a good quality scan representing all scanned anatomy in the mean position by averaging transformed (deformed) CT frames, i.e., canceling out motion. A nonrigid registration method is necessary since motion varies over the lung. Methods and Materials: 4D and inspiration breath-hold (BH) CT scans were acquired for 13 patients. An iterative multiscale motion estimation technique was applied to the 4D CT scan, similar to optical flow but using image phase (gray-value transitions from bright to dark and vice versa) instead. From the (4D) deformation vector field (DVF) derived, the local mean position in the respiratory cycle was computed and the 4D DVF was modified to deform all structures of the original 4D CT scan to this mean position. A 3D midposition (MidP) CT scan was then obtained by (arithmetic or median) averaging of the deformed 4D CT scan. Image registration accuracy, tumor shape deviation with respect to the BH CT scan, and noise were determined to evaluate the image fidelity of the MidP CT scan and the performance of the technique. Results: Accuracy of the used deformable image registration method was comparable to established automated locally rigid registration and to manual landmark registration (average difference to both methods <0.5 mm for all directions) for the tumor region. 
From visual assessment, the registration was good for the clearly visible features (e.g., tumor and diaphragm). The shape of the tumor, with respect to that of the BH CT scan, was better represented by the MidP reconstructions than any of the 4D CT frames (including MidV; reduction of 'shape differences' was 66%). The MidP scans contained about one-third the noise of individual 4D CT scan frames. Conclusions: We implemented an accurate method to estimate the motion of structures in a 4D CT scan. Subsequently, a novel method to create a midposition CT scan (time-weighted average of the anatomy) for treatment planning with reduced noise and artifacts was introduced. Tumor shape and position in the MidP CT scan represents that of the BH CT scan better than MidV CT scan and, therefore, was found to be appropriate for treatment planning.
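A minimal 1-D analogue of the midposition averaging idea, assuming the per-phase displacements are already known (the paper estimates them with deformable registration): shifting every frame to the mean position and then averaging cancels the motion and reduces noise roughly as 1/sqrt(N), consistent with the roughly one-third noise reported for a 10-phase scan. All arrays and values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# N respiratory frames of the same profile, each shifted by a known
# per-phase displacement and corrupted by independent noise.
x = np.arange(200)
profile = np.exp(-0.5 * ((x - 100) / 8.0) ** 2)        # a "tumor" profile
shifts = np.array([0, 2, 4, 6, 8, 6, 4, 2, 0, -2])      # displacement per phase
frames = [np.roll(profile, s) + rng.normal(0, 0.05, x.size) for s in shifts]

# Deform every frame to the mean position of the cycle, then average:
# motion cancels and the noise averages down.
mean_shift = int(round(shifts.mean()))
midp = np.mean([np.roll(f, mean_shift - s) for f, s in zip(frames, shifts)],
               axis=0)
```

In the paper the "shift" is a full 4D deformation vector field rather than a rigid translation, but the averaging step that yields the low-noise MidP scan is the same.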
NASA Technical Reports Server (NTRS)
Dunn, M. G.; Kim, J.; Civinskas, K. C.; Boyle, R. J.
1992-01-01
Time-averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row and the first-stage blade row of the Rocketdyne Space Shuttle Main Engine two-stage fuel-side turbine. These measurements were made at 10 percent, 50 percent, and 90 percent span on both the pressure and suction surfaces of the component. Stanton-number distributions are also reported for the second-stage vane at 50 percent span. A shock tube is used as a short-duration source of heated and pressurized air to which the turbine is subjected. Platinum thin-film gages are used to obtain the heat-flux measurements and miniature silicone-diaphragm pressure transducers are used to obtain the surface pressure measurements. The first-stage vane Stanton number distributions are compared with predictions obtained using a quasi-3D Navier-Stokes solution and a version of STAN5. This same N-S technique was also used to obtain predictions for the first blade and the second vane.
NASA Astrophysics Data System (ADS)
da Silva, M. R.; Vieira, L. E. A.; dal Lago, A.; Echer, E.; Gonzalez, W. D.; Munakata, K.; Kuwabara, T.; Schuch, N. J.
The solar activity drives a large number of phenomena that occur in the near-Earth interplanetary space and in the Earth's magnetosphere, the so-called Space Weather. Among the most important phenomena of Space Weather are geomagnetic storms, in which a great transfer of energy from the Sun and the interplanetary medium to the Earth's magnetosphere takes place. It is well known that galactic cosmic ray particles are also affected by solar activity. One of the most important solar ejected structures, the Coronal Mass Ejection (CME), is the main cause of intense geomagnetic storms, and it is known to modulate the cosmic ray intensity observed with ground detectors as CMEs approach and pass the Earth (Munakata et al., 2000). Thus ground cosmic ray observations may be used for forecasting this kind of interplanetary structure. By analyzing the cosmic ray intensities measured in various directions in space by the International Muon Detector Network, together with the Interplanetary Magnetic Field (IMF) data observed by the Advanced Composition Explorer (ACE) satellite, the loss-cone precursor can be seen some 6 to 9 hours prior to an interplanetary disturbance reaching the Earth's magnetosphere and causing a geomagnetic storm. As the direction of the IMF is an important parameter in this forecasting scheme, this work has the objective of determining the best averaging time interval of the interplanetary magnetic field to be used together with the high-energy cosmic ray observations. We use ACE IMF observations from 1998 to 2001.
Lakämper, Stefan; Lamprecht, Andreas; Schaap, Iwan A T; Dual, Jurg
2015-01-01
Ultrasonic standing waves are increasingly applied in the manipulation and sorting of micrometer-sized particles in microfluidic cells. To optimize the performance of such devices, it is essential to know the exact forces that the particles experience in the acoustic wave. Although much progress has been made via analytical and numerical modeling, the reliability of these methods relies strongly on the assumptions used, e.g. the boundary conditions. Here, we have combined an acoustic flow cell with an optical laser trap to directly measure the force on a single spherical particle in two dimensions. While performing ultrasonic frequency scans, we measured the time-averaged forces on single particles that were moved with the laser trap through the microfluidic cell. The cell including piezoelectric transducers was modeled with finite element methods. We found that the experimentally obtained forces and the derived pressure fields confirm the predictions from theory and modeling. This novel approach can now be readily expanded to other particle, chamber, and fluid regimes and opens up the possibility of studying the effects of the presence of boundaries, acoustic streaming, and non-linear fluids. PMID:25370872
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method and, focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending those results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
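The scalar-weighted case of this averaging scheme has a well-known closed form: the average quaternion is the eigenvector, with largest eigenvalue, of the weighted outer-product matrix M = Σ wᵢ qᵢ qᵢᵀ, which is naturally invariant to the q/-q sign ambiguity. A NumPy sketch (the function name and scalar-first convention are our choices):

```python
import numpy as np

def average_quaternions(quats, weights=None):
    """Optimal scalar-weighted quaternion average: the dominant eigenvector
    of M = sum_i w_i q_i q_i^T. Quaternions are unit 4-vectors, scalar-first."""
    q = np.asarray(quats, dtype=float)
    w = np.ones(len(q)) if weights is None else np.asarray(weights, dtype=float)
    M = np.einsum('i,ij,ik->jk', w, q, q)   # accumulate w_i * q_i q_i^T
    _, vecs = np.linalg.eigh(M)             # eigh: eigenvalues in ascending order
    avg = vecs[:, -1]                       # dominant eigenvector, unit norm
    return avg if avg[0] >= 0 else -avg     # resolve the overall q/-q sign
```

Because M depends on qᵢ only through qᵢqᵢᵀ, feeding in q and -q gives the same answer, which is exactly why this formulation is preferred over naive component-wise averaging.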
David Damanik; Serguei Tcheremchantsev
2008-01-22
We develop further the approach to upper and lower bounds in quantum dynamics via complex analysis methods which was introduced by us in a sequence of earlier papers. Here we derive upper bounds for non-time averaged outside probabilities and moments of the position operator from lower bounds for transfer matrices at complex energies. Moreover, for the time-averaged transport exponents, we present improved lower bounds in the special case of the Fibonacci Hamiltonian. These bounds lead to an optimal description of the time-averaged spreading rate of the fast part of the wavepacket in the large coupling limit. This provides the first example which demonstrates that the time-averaged spreading rates may exceed the upper box-counting dimension of the spectrum.
NASA Astrophysics Data System (ADS)
Maglich, Bogdan C.
1988-08-01
A D+ beam of kinetic energy Ti = 0.7 MeV was stored in a "simple mirror" magnetic field as self-colliding orbits, or migma, and neutralized by ambient, oscillating electrons whose bounce frequencies were externally controlled. The space charge density was exceeded by an order of magnitude without instabilities. Three nondestructive diagnostic methods allowed measurements of ion orbit distribution, ion storage times, ion energy distribution, nuclear reaction rate, and reaction product spectrum. Migma formed a disc 20 cm in diameter and 0.5 cm thick. Its ion density was sharply peaked in the center; the ion-to-electron temperature ratio was Ti/Te ~ 10^3; ion-electron temperature equilibrium was never reached. The volume-average and central D+ densities were n = 3.2 × 10^9 cm^-3 and nc = 3 × 10^10 cm^-3 respectively, compared to the space charge limit density nsc = 4 × 10^8 cm^-3. The energy confinement time was τc = 20-30 s, limited by charge exchange reactions with the residual gas in the vacuum (5 × 10^-9 Torr). The ion energy loss rate was 1.4 keV/s. None of the instabilities that were observed in mirrors at several orders of magnitude lower density occurred. The proton energy spectrum for d + d → T + p + 4 MeV shows that the deuterons collided at an average crossing angle of 160°. Evidence for exponential density buildup has also been observed. Relative to Migma III results, and measured in terms of the product of ion energy E, density n, and confinement time τ, device performance was improved by a factor of 500. Using the central fast ion density, we obtained the triple product Tnτ ≈ 4 × 10^14 keV s cm^-3, which is greater than that of the best fusion devices. The luminosity (collision rate per unit cross section) was ~10^29 cm^-2 s^-1, with 0.7 A of ion current through the migma center.
The stabilizing features of migma are: (1) large Larmor radius; (2) small canonical angular momentum; (3) short axial length z (disc shape); (4) nonadiabatic motions in r and z; (5) precession and energy spread; (6) ambipolar potential; (7) radial density gradient; (8) large ion-to-electron temperature ratio; (9) metal walls in z; (10) suitability for external stabilization techniques (small volume, recurrent surfacing of ions); and particularly (11) diamagnetic well. Extrapolation of the results to reactor densities led to a D + 3He reactor "basic migma disc" 0.5 m in diameter and 0.25 m thick, suspended in a 10 T field, generating 1.5 MW(th). A 2-MW(e) power plant would consist of 3 such discs in a common volume, referred to as a triplet migmacell. Its specific power is projected to be 1 MW(e)/ton. A large power plant of any size would consist of a large number of migma discs in a common volume. The advantages of such modular systems are: (1) economy of mass production: capital cost per kW will be 25% that of fission and 10% that of DT fusion; (2) plants will be economical for all sizes above 10 kW(e); (3) minimal heat pollution, thanks to direct conversion of the charged-ion kinetic energy into electricity; (4) no proliferative potential; and (5) large power-to-weight ratio due to absence of shielding. Anticipated physics problems in density increase are discussed.
NASA Astrophysics Data System (ADS)
Foresti, Daniele; Nabavi, Majid; Poulikakos, Dimos
2012-05-01
Acoustic levitation is a physical phenomenon that arises when the acoustic radiation pressure is strong enough to overcome the gravitational force. It is a nonlinear phenomenon which can be predicted only if higher-order terms are included in the acoustic field calculation. The study of acoustic levitation is usually conducted by solving the linear acoustic equation and bridging the gap with an analytical solution. Only recently has the scientific community shown interest in the full solution of the Navier-Stokes equations with the aim of investigating the acoustic radiation pressure in depth. We present herein a numerical model based on the Finite Volume Method (FVM) and Dynamic Mesh (DM) for the calculation of the acoustic radiation pressure acting on a rigid sphere inside an axisymmetric levitator, the most widely used and investigated type of levitator. In this work, we focus on the third resonance mode. The use of DM is new in the field of acoustic levitation and allows a more realistic simulation of the phenomenon, since no standing wave has to be imposed as a boundary condition. The radiating plate is modeled as a rigid cylinder moving sinusoidally along the central axis. The time-averaged acoustic force exerted on the sphere is calculated for different sphere radii Rs (0.025 to 0.5 wavelengths). It is shown that the acoustic force increases in proportion to Rs^3 for small radii, then decreases when the standing wave condition is violated, and finally rises again in the travelling-wave radiation pressure configuration. The numerical model is validated for the inviscid case against a Finite Element Method model of the linear acoustic problem based on King's approximation.
Krstic, Miroslav
Hospice Care Association: Turnaround of a Healthcare Non-profit Organization
Adel Dimian; Anne-Valerie Ohlsson
2011-01-01
This is the case of a Singapore-based non-profit organization in need of strategic turnaround. Funding changes from the Singapore government were the primary cause of HCA’s critical financial situation. Additionally, staff response to changes in operations was severe and negative. Dr Akhileswaran was thrust into a position of leadership that initially overwhelmed and surprised him, given that his expected role
NASA Astrophysics Data System (ADS)
Cacchione, David A.; Thorne, Peter D.; Agrawal, Yogesh; Nidzieko, Nicholas J.
2008-02-01
Profiles of suspended sediment concentration and velocity were measured over a 15-day period at a near-shore site off Santa Cruz, CA in Monterey Bay. The concentration and velocity data were collected with an Acoustic Backscattering System (ABS) and Acoustic Current Profiler (ACP) that were mounted on a bottom tripod. High-resolution bottom scanning sonar was also attached to the tripod to provide images of bed features during the experiment. Hourly time-averaged near-bed concentrations of suspended sediment were calculated from three models and compared with the measurements. Surface waves and currents that were generated by a storm of moderate intensity caused bed stresses that exceeded the threshold stress for D50=0.02 cm, the median size of the moderately well-sorted bottom sediment, over a period of about 7 days. Estimates of the concentration at 1 cm above the bottom, Ca1, were obtained using the ABS measurements. These observations have been compared with predictions for the concentration at 1 cm above the bottom, C1. Nielsen's models for reference concentration Co [Nielsen, P., 1986. Suspended sediment concentrations under waves. Coastal Engineering 10, 23-31; Nielsen, P., 1992. Coastal Bottom Boundary Layers and Sediment Transport, Advanced Series on Ocean Engineering. World Scientific, Hackensack, NJ.] are purely wave-based and do not include effects of bottom currents on bed stress and bedform scales. C1 calculated from this model compared well with measured Ca1 when currents were weak and small oscillatory ripples were observed in the sonar images. However, during the 3-day period of highest bottom stresses modeled C1 did not compare well with Ca1. The other two models for C1, Glenn and Grant [Glenn, S.M., Grant, W.D., 1987. A suspended sediment stratification correction for combined wave and current flows. Journal of Geophysical Research 92(C8), 8244-8264.] and van Rijn and Walstra [Van Rijn, L.C., Walstra, D.J.R., 2004.
Description of TRANSPOR2004 and implementation in Delft3D-ONLINE. Interim Report prepared for DG Rijkswaterstaat, Rijksinstituut voor Kust en Zee. Delft Hydraulics Institute, The Netherlands.], accounted for combined wave-current stresses and included different formulations for predicting bedform scales. C1 calculated from these models compared well with measurements throughout the early and middle period of the storm, but underpredicted measured values during the latter part of the experiment. An alternative method based on dimensional analysis provided a straightforward way to relate C1 to bed stress and sediment parameters. This new relationship based on dimensional analysis explained about 77% of the variability in Ca1 derived from the ABS data.
NASA Astrophysics Data System (ADS)
Prevosto, L.; Kelly, H.; Mancinelli, B.
2013-12-01
This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating, highly ionized, non-equilibrium cutting arc. The electron-retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced into time-averaged probe data by small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well-defined groups, allowing two quite different averaged electron temperature values to be defined. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the fluctuations but also a spatial average along the probe collecting length. The fitting of the high-current region of the characteristic using this electron temperature value, together with the corrections given by the fluctuation analysis, showed a significant departure from local thermal equilibrium in the arc core.
NSDL National Science Digital Library
2014-09-19
In this activity, learners compare the number of letters in the names of the people in their families or group of friends. Learners write out the names on graph paper, with one letter in each square, and cut out the names. Learners sort the names from shortest to longest and then find the average length of the names by cutting up and redistributing the letters. This activity guide contains sample questions to ask, literary connections, extensions, and alignment to local and national standards.
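The averaging step the learners perform by hand can be sketched in a few lines; the names below are made up for illustration:

```python
# Average name length = total letters cut out / number of names.
names = ["Ana", "Benjamin", "Chloe", "Dev"]
total_letters = sum(len(n) for n in names)
average = total_letters / len(names)
print(average)  # 19 letters across 4 names -> 4.75
```

Redistributing the cut-out letters evenly across the names is exactly this division, done physically.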
NASA Astrophysics Data System (ADS)
Saiki, Yoshitaka; Yamada, Michio
2010-01-01
We point out that the original work [Saiki and Yamada, Phys. Rev. E 79, 015201(R) (2009)] was aimed at addressing the question of why even a single periodic orbit of low period can capture averaged properties of physical quantities reasonably well in some dynamical systems including turbulent fluids. This point of view is quite different from that of the Comment [Zaks and Goldobin, Phys. Rev. E 81, 018201 (2010)].
Belgers, J D M; Aalderink, G H; Arts, G H P; Brock, T C M
2011-10-01
We tested the effects of the herbicide metsulfuron-methyl on growth of the submerged macrophyte Myriophyllum spicatum under laboratory conditions using different exposure scenarios. The exposures of each scenario were comparable in the concentration × time factor, viz., the same 21-d time-weighted average (TWA) concentrations, but variable in peak exposure concentrations (ranging from 0.1 to 21000 ng a.i. L^-1) and exposure periods (1, 3, 7, 14 or 21 d). To study the recovery potential of the exposed M. spicatum plants we continued the observation of shoot and root growth for another 21 d in herbicide-free medium, so that the total observation period was 42 d. Non-destructive endpoints, length and number of new shoots and roots, were determined weekly from day 14 onwards. Destructive endpoints, dry weight (DW) of main shoots, new shoots and new roots, were measured at the end of the experiment (t=42 d). Metsulfuron-methyl exposure in particular inhibited new tissue formation but was not lethal to main shoots. On days 21 and 42 after the start of exposure, ECx values for new tissues expressed in terms of peak concentration (=measured concentration during exposure periods of different length) showed large differences between exposure scenarios, in contrast to ECx values for days 21 and 42 expressed in terms of 21-d and 42-d TWA concentrations, respectively. At the end of the experiment (day 42), 42-d TWA ECx values were remarkably similar between exposure scenarios, while a similar trend could already be observed on day 21 for 21-d TWA ECx values. For the macrophyte M. spicatum and exposure to the herbicide metsulfuron-methyl, the TWA approach seems appropriate for use in the risk assessment. However, the data from the toxicity experiment suggest that on day 21 the absolute height of the pulse exposure also played a (minor) role in the exposure-response relationships observed. PMID:21875741
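The TWA idea underlying these scenarios can be sketched briefly; the pulse values below are hypothetical, chosen only to show how different exposure profiles can share one 21-d TWA:

```python
def time_weighted_average(intervals, total_period):
    """TWA concentration over total_period.
    intervals: list of (concentration, duration) pairs; any remaining
    time up to total_period counts as zero exposure."""
    exposure = sum(c * dt for c, dt in intervals)
    return exposure / total_period

# A 1-day pulse at 210 ng/L in otherwise herbicide-free medium has the
# same 21-d TWA as a constant 10 ng/L exposure for the full 21 days.
print(time_weighted_average([(210.0, 1.0)], 21.0))   # 10.0
print(time_weighted_average([(10.0, 21.0)], 21.0))   # 10.0
```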
Field demonstration of rapid turnaround, multilevel groundwater screening
Tingle, A.R. [Analysas Corp., Oak Ridge, TN (United States); Baker, L. [Automated Sciences Group, Inc., Oak Ridge, TN (United States); Long, D.D. [Martin Marietta Energy Systems, Inc., Oak Ridge, TN (United States). Hazardous Waste Remedial Actions Program; Miracle, M. [Advanced Sciences, Inc., Oak Ridge, TN (United States)
1994-09-01
A combined technology approach to rapidly characterizing source area and downgradient groundwater associated with a past fuel spill has been field tested. The purpose of this investigation was to determine the presence and extent of fuel-related compounds or indications of their biodegradation in groundwater. The distance from the source area to be investigated was established by calculating the potential extent of a plume based only on groundwater flow velocities. To accomplish this objective, commercially available technologies were combined and used to rapidly assess the source area and downgradient groundwater associated with the fuel discharge. The source of contamination that was investigated overlies glacial sand and gravel outwash deposits. Historical data suggest that from 1955 to 1970 as much as 1 to 6 million gal of aviation gasoline (AVGAS) was spilled at the study area. Although the remedial investigation (RI) for this study area indicated fuel-related groundwater contamination at the source area, fuel-related contamination was not detected in downgradient monitoring wells. Rapid horizontal groundwater velocities and the 24-year time span since the last reported spill further suggest that a plume of contaminated groundwater could extend several thousand feet downgradient. The lack of contamination downgradient from the source suggests two possibilities: (1) monitoring wells installed during the RI did not intersect the plume, or (2) fuel-related compounds had naturally degraded.
NASA Astrophysics Data System (ADS)
Mori, Toshiya; Shinohara, Hiroshi; Kazahaya, Kohei; Hirabayashi, Jun-Ichi; Matsushima, Takeshi; Mori, Takehiko; Ohwada, Michiko; Odai, Masanobu; Iino, Hideki; Miyashita, Makoto
2013-08-01
All available SO2 flux data for 32 years (1975-2006) of Japanese volcanoes, accounting for about 10% of the world's arc volcanoes, were compiled to evaluate the temporal variation of the flux of each volcano and to estimate the time-averaged SO2 flux. The compiled data revealed that 6 volcanoes (Tokachi, Asama, Aso, Sakurajima, Satsuma-Iwojima, and Suwanosejima) out of 17 significantly degassing volcanoes usually contributed more than 94% of the total flux. The time-averaged annual flux was 2.2 Tg a^-1, including the intense degassing of Miyakejima volcano after 2000, which raised the figure from 1.4 Tg a^-1; this indicates that a single huge emitter is capable of significantly skewing regional time-averaged degassing totals and that time-averaged flux assessments for infrequent huge emitters are important for accurate estimation. The regional SO2 flux distribution in a cumulative frequency-flux plot does not obey a power-law distribution. It shows a roll-off curve bending at about 500 t d^-1, implying that it is misleading to assume a power-law distribution for estimation of the global flux. Because the contribution of the major degassing volcanoes, including the six volcanoes and additional sporadically degassing volcanoes during eruptive and posteruptive periods, to the total flux is more than 95%, measurement of all large-flux volcanoes can approximate the global flux.
G. C. Borgia; R. J. S. Brown; P. Fantazzini
1997-01-01
Fluid-flow properties of porous media, such as permeability k and irreducible water saturation Swi, can be estimated from water 1H nuclear magnetic resonance (NMR) relaxation data, but there are basic questions regarding data processing and interpretation. We found that Swi and k are better estimated if different forms of “average” relaxation time are used. NMR longitudinal relaxation data for a
ERIC Educational Resources Information Center
Sawyer, Richard
2013-01-01
Education officials and journalists frequently track changes over time in the average ACT® College Readiness Assessment Composite scores and ACT College Readiness Benchmark attainment rates of individual high schools. Using standard statistical methods, I examined how often changes in these statistics are unambiguously positive or negative, rather…
Widmann, I.L.
1995-12-01
Explosive Ordnance Disposal (EOD) Open Burning (OB) operations are performed to treat and dispose of unserviceable munitions in the Department of Defense (DOD) inventory. This thesis effort sought to develop a computer model, based upon the Gaussian Puff Equation. The model varies from standard plume modeling practices by not making the assumption that the wind direction, wind speed and turbulence are uniform throughout the duration of the burn. The model assigns meteorological data to each explosion (puff) generated by the OB source. The experiments in this research effort assigned meteorological data to the puffs based upon averaging the weather data over 1, 10, and 60 minute periods. The results of the research showed that there was a statistically significant difference (95% confidence) between 1 minute and 60 minute weather data plume concentrations in the receptor grid in 100% of the experiments performed.
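The per-puff approach described above can be sketched as follows, assuming the textbook Gaussian puff kernel; the puff masses, winds, and sigma growth law below are made up purely for illustration:

```python
import math

def puff_concentration(q, dx, dy, dz, sx, sy, sz):
    """Gaussian puff: mass q dispersed with sigmas (sx, sy, sz),
    evaluated at offsets (dx, dy, dz) from the puff centre."""
    norm = q / ((2 * math.pi) ** 1.5 * sx * sy * sz)
    return norm * math.exp(-0.5 * (dx**2 / sx**2 + dy**2 / sy**2 + dz**2 / sz**2))

# Each explosion becomes one puff advected by the wind observed at its
# release time (the thesis assigns 1-, 10-, or 60-min averaged data per puff).
puffs = [  # (mass, wind_u, wind_v) -- hypothetical values
    (1.0, 2.0, 0.0),
    (1.0, 1.5, 0.5),
]
t = 100.0  # seconds since release
total = 0.0
for q, u, v in puffs:
    cx, cy = u * t, v * t   # advected puff centre
    sigma = 0.1 * t         # crude growth law, for illustration only
    total += puff_concentration(q, 250.0 - cx, 0.0 - cy, 0.0, sigma, sigma, sigma)
```

Summing puffs with their individual meteorology is what distinguishes this from a single steady-state plume.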
Buckley, M.
2011-01-01
NASA Astrophysics Data System (ADS)
Zhao, Yan; Belov, Pavel; Hao, Yang
2007-09-01
The accuracy of finite-difference time-domain (FDTD) modelling of left-handed metamaterials (LHMs) is dramatically improved by using an averaging technique along the boundaries of the LHM slabs. The material frequency dispersion of the LHMs is taken into account using auxiliary differential equation (ADE) based dispersive FDTD methods. The dispersive FDTD method with averaged permittivity along the material boundaries is implemented for a two-dimensional (2D) transverse electric (TE) case. A mismatch between analytical and numerical material parameters (e.g. permittivity and permeability) introduced by the time discretization in FDTD is demonstrated. The expression of numerical permittivity is formulated and it is suggested to use corrected permittivity in FDTD simulations in order to model LHM slabs with their desired parameters. The influence of the switching time of the source on the oscillation of field intensity is analysed. It is shown that there exists an optimum value that leads to fast convergence in simulations.
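The boundary-averaging idea can be sketched as below, assuming a simple arithmetic mean of the permittivities on either side of a material interface (a deliberate simplification of the paper's dispersive 2D TE scheme):

```python
# At an abrupt LHM/vacuum boundary, the averaging technique replaces the
# permittivity jump at interface cells with the mean of the two media.
def boundary_permittivity(eps_inside, eps_outside):
    return 0.5 * (eps_inside + eps_outside)

eps_grid = [1.0, 1.0, -1.0, -1.0]            # vacuum | LHM slab (eps = -1)
smoothed = eps_grid[:]
for i in range(len(eps_grid) - 1):
    if eps_grid[i] != eps_grid[i + 1]:       # material interface found
        smoothed[i] = boundary_permittivity(eps_grid[i], eps_grid[i + 1])
print(smoothed)  # [1.0, 0.0, -1.0, -1.0]
```

For an eps = -1 slab in vacuum the averaged interface value is exactly zero, which is what suppresses the spurious surface-mode artefacts of the staircased boundary.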
NASA Astrophysics Data System (ADS)
Jarochowska, Emilia; Munnecke, Axel
2015-01-01
Stable carbon isotope curves are used as a precise stratigraphic tool in the Paleozoic, even though they are commonly based on the shallow-water carbonate record, characterized by low stratigraphic completeness. Identification of episodes of large-scale redeposition and erosion may improve δ13Ccarb-based correlations. Here, a series of at least three episodes of high-energy onshore redeposition is described from the Makarivka Member (new unit) of the Ustya Formation from the Homerian (middle Silurian) of Podolia, Ukraine. The Makarivka Member is emplaced within a tidal flat succession. Its most prominent part is divided into a lower polymictic conglomerate of sand- to boulder-sized clasts representing a range of subtidal facies, and an upper heterolithic unit composed of grainstone and mudstone laminae. The aim of the study is to identify the mechanism of deposition of the allochthonous conglomeratic material in this Member. Based on analogies with recent tsunami deposits, the conglomerate is interpreted to reflect the strongest landward-directed current in the tsunami run-up phase, and the heterolith to reflect alternating high-density landward currents, stagnant intervals allowing mud and land-derived debris to settle, and backwash flows. The tsunamite was deposited during an interval of decreasing isotopic values of the Mulde excursion, a global δ13C excursion reaching +5.2‰ in the studied sections. Clast redeposition in an interval characterized by rapidly changing δ13Ccarb offers the opportunity to evaluate the degree of temporal and spatial averaging caused by the tsunami. The clasts in the polymictic conglomerate show scattered δ13Ccarb values (-0.3‰ to +2.1‰) compared to homogeneous (1.3‰ to 1.6‰) values in the matrix.
The presence of clasts characterized by low δ13Ccarb values is explained by the decrease of δ13Ccarb with bathymetry rather than by erosion of pre-excursion strata, whereas high values characterize material entrained from the sea floor and strata directly underlying the tsunamite. The close (1.3‰ and 1.5‰) average δ13Ccarb values suggest that the matrix of the conglomerate is potentially a product of clast grinding.
ERIC Educational Resources Information Center
Smith, Scott G.
2015-01-01
In this article, Scott Smith presents an innocent problem (Problem 12 of the May 2001 Calendar from "Mathematics Teacher" ("MT" May 2001, vol. 94, no. 5, p. 384) that was transformed by several timely "what if?" questions into a rewarding investigation of some interesting mathematics. These investigations led to two…
Li-li Li; Jun Zhao; Georgi M. Dimirovski
2011-01-01
This paper deals with the problem of reliable stabilization and H∞ control for a class of continuous-time switched Lipschitz nonlinear systems with actuator failures. We consider the case in which actuators suffer “serious failure”—the never-failed actuators cannot stabilize the given system. The differential mean value theorem (DMVT) allows transforming the switched Lipschitz nonlinear systems into switched linear parameter varying (LPV)
Borgia, G.C. [University of Bologna, Dipartimento di Ingegneria Chimica, Mineraria e delle Tecnologie Ambientali, Viale Risorgimento 2, 40136 Bologna (Italy)]; Brown, R.J. [515 W. 11th Street, Claremont, California 91711-3721 (United States)]; Fantazzini, P. [University of Bologna, Dipartimento di Fisica, Via Irnerio 46, 40126 Bologna (Italy)]
1997-11-01
Fluid-flow properties of porous media, such as permeability k and irreducible water saturation S_wi, can be estimated from water ¹H nuclear magnetic resonance (NMR) relaxation data, but there are basic questions regarding data processing and interpretation. We found that S_wi and k are better estimated if different forms of "average" relaxation time are used. NMR longitudinal relaxation data for a suite of 106 water-saturated clean sandstones were used. Sandstones represent a specialized class of porous media, where even for small porosity, substantially all pore space is connected. The sandstones exhibit distributions of relaxation times ranging over factors from at least 10 to more than 10^3. We tried several forms of "average" relaxation time T. One family of Ts is ⟨T^p⟩^(1/p), where the limit p → 0 gives the geometric mean. The best estimator we found for S_wi uses a form of average relaxation time only, rather than a relaxation time cutoff. The time used can be any of several forms of T, giving more emphasis to short times than the geometric mean does. On the contrary, the best T for estimating permeability without other information is precisely the geometric mean. The best estimates of permeability came from fits of ln(k/φ) using Ts with emphasis at slightly longer times. While S_wi is better estimated by using all the data points (starting from our minimum 0.4 ms), k is better estimated by starting at a few ms, that is, by ignoring a non-negligible fraction of the signal for some samples. These results can be obtained also by using computations that do not need to invert multiexponential relaxation data, and good results are obtained even with only a few data points. (Abstract Truncated)
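The family of averages ⟨T^p⟩^(1/p) is straightforward to reproduce; a sketch with a made-up three-component relaxation-time distribution:

```python
import math

def power_mean(times, p):
    """<T^p>^(1/p); the limit p -> 0 is the geometric mean."""
    if p == 0:
        return math.exp(sum(math.log(t) for t in times) / len(times))
    return (sum(t ** p for t in times) / len(times)) ** (1.0 / p)

T = [1.0, 10.0, 100.0]   # ms, a toy relaxation-time distribution
print(power_mean(T, 0))  # geometric mean, ~10.0
# Negative p weights short times more heavily, positive p long times:
print(power_mean(T, -1) < power_mean(T, 0) < power_mean(T, 1))  # True
```

This matches the abstract's observation: an estimator for S_wi wants p < 0 (short-time emphasis), while permeability is best served by the p → 0 geometric mean or slightly positive p.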
A. Fischer; F. Bake; J. Heinze; O. Diers; C. Willert; I. Röhle
2009-01-01
In order to analyze unsteady flow phenomena in combustion facilities two phase-sorting methods have been developed and investigated for the retrieval of phase-resolved data from (randomly) sampled 'single-shot' data such as PIV recordings or chemiluminescence imagery in a post-processing step. This is made possible by simultaneously recorded continuous time traces of reference data (e.g., pressure signal). Using this off-line method
Jo Vandesompele; Katleen De Preter; Filip Pattyn; Bruce Poppe; Nadine Van Roy; Anne De Paepe; Frank Speleman
2002-01-01
BACKGROUND: Gene-expression analysis is increasingly important in biological research, with real-time reverse transcription PCR (RT-PCR) becoming the method of choice for high-throughput and accurate expression profiling of selected genes. Given the increased sensitivity, reproducibility and large dynamic range of this methodology, the requirements for a proper internal control gene for normalization have become increasingly stringent. Although housekeeping gene expression has
Bogdan C. Maglich
1988-01-01
A D+ beam of kinetic energy Ti = 0.7 MeV was stored in a ``simple mirror'' magnetic field as self-colliding orbits or migma and neutralized by ambient, oscillating electrons whose bounce frequencies were externally controlled. Space charge density was exceeded by an order of magnitude without instabilities. Three nondestructive diagnostic methods allowed measurements of ion orbit distribution, ion storage times,
Shaffer, F.; Massah, H.; Sinclair, J.; Shahnam, M.
1995-04-30
This paper describes an experimental facility to measure time-averaged properties of particle-wall collisions. A particle tracking velocimetry (PTV) system measures particle rebound angles and velocities and impact angles and velocities for specific particles. From these measured properties, values of the normal and tangential coefficients of restitution are derived. The PTV system rapidly measures thousands of particle-wall collisions, enabling determination of time-averaged properties. In this initial work we use 191 µm glass particles colliding with a glass plate at a fixed angle of 17.5° and a velocity of 17.07 m/s. Observations are also made of aspherical particles colliding with gross sliding. In the future, this experimental system will be used with particles and wall materials of industrial significance.
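A sketch of how normal and tangential coefficients of restitution follow from the measured angles and speeds, averaged over many collisions; the rebound values below are hypothetical:

```python
import math

def restitution(impact_speed, impact_angle_deg, rebound_speed, rebound_angle_deg):
    """Normal and tangential restitution coefficients, with angles
    measured from the wall plane."""
    a1 = math.radians(impact_angle_deg)
    a2 = math.radians(rebound_angle_deg)
    v1n, v1t = impact_speed * math.sin(a1), impact_speed * math.cos(a1)
    v2n, v2t = rebound_speed * math.sin(a2), rebound_speed * math.cos(a2)
    return v2n / v1n, v2t / v1t

# Hypothetical PTV samples at the paper's nominal 17.5 deg / 17.07 m/s conditions:
collisions = [(17.07, 17.5, 16.2, 16.8), (17.07, 17.5, 15.8, 17.0)]
pairs = [restitution(*c) for c in collisions]
mean_en = sum(p[0] for p in pairs) / len(pairs)  # time-averaged normal coefficient
```

Averaging the per-collision coefficients over thousands of tracked particles is what yields the time-averaged properties the facility is built to measure.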
A. Piccolo; G. Pistone
2007-01-01
A simplified calculus model to investigate the transverse heat transport near the edges of a thermally isolated thermoacoustic stack in the low acoustic Mach number regime is presented. The proposed methodology relies on the well-known results of classical linear thermoacoustic theory, which are implemented into an energy-balance calculus scheme through a finite difference technique. Details of the time-averaged
NASA Technical Reports Server (NTRS)
Adamczyk, John J.
1996-01-01
The role of unsteady flow processes in establishing the performance of axial flow turbomachinery was investigated. The development of the flow model, as defined by the time-average flow equations associated with the passage of the blade row embedded in a multistage configuration, is presented. The mechanisms for unsteady momentum and energy transport are outlined, and the modeling of the unsteady momentum and energy transport is discussed. The procedure for simulating unsteady multistage turbomachinery flows is described.
Takuma Akimoto; Soya Shinkai; Yoji Aizawa
2014-08-01
In infinite ergodic theory, two distributional limit theorems are well-known. One is characterized by the Mittag-Leffler distribution for time averages of $L^1(m)$ functions, i.e., integrable functions with respect to an infinite invariant measure. The other is characterized by the generalized arc-sine distribution for time averages of non-$L^1(m)$ functions. Here, we provide another distributional behavior of time averages of non-$L^1(m)$ functions in one-dimensional intermittent maps where each has an indifferent fixed point and an infinite invariant measure. Observation functions considered here are non-$L^1(m)$ functions which vanish at the indifferent fixed point. We call this class of observation functions weak non-$L^1(m)$ functions. Our main result represents a first step toward a third distributional limit theorem, i.e., a distributional limit theorem for this class of observables, in infinite ergodic theory. To prove our proposition, we propose a stochastic process induced by a renewal process to mimic a Birkhoff sum of a weak non-$L^1(m)$ function in the one-dimensional intermittent maps.
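A toy illustration of the setting, using a standard Pomeau-Manneville-type intermittent map (not the authors' specific maps or observables): for alpha > 1 the invariant measure is infinite, and Birkhoff time averages of integrable observables remain trajectory-dependent rather than converging to a single constant.

```python
# Pomeau-Manneville-type map x -> x + x^(1+alpha) (mod 1), with an
# indifferent fixed point at x = 0 and an infinite invariant measure
# for alpha >= 1.
def pm_map(x, alpha=2.0):
    x = x + x ** (1.0 + alpha)
    return x - 1.0 if x >= 1.0 else x

def time_average(f, x0, n, alpha=2.0):
    """Birkhoff time average (1/n) * sum of f along an orbit of length n."""
    x, s = x0, 0.0
    for _ in range(n):
        s += f(x)
        x = pm_map(x, alpha)
    return s / n

# Occupation of the half interval away from the indifferent fixed point;
# different initial conditions give noticeably different averages:
avgs = [time_average(lambda x: float(x > 0.5), x0, 10000) for x0 in (0.11, 0.37, 0.73)]
```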
4D CT amplitude binning for the generation of a time-averaged 3D mid-position CT scan
NASA Astrophysics Data System (ADS)
Kruis, Matthijs F.; van de Kamer, Jeroen B.; Belderbos, José S. A.; Sonke, Jan-Jakob; van Herk, Marcel
2014-09-01
The purpose of this study was to develop a method to use amplitude-binned 4D-CT (A-4D-CT) data for the construction of mid-position CT data and to compare the results with data created from phase-binned 4D-CT (P-4D-CT) data. For the latter purpose we developed two measures which describe the regularity of the 4D data, and we tried to correlate these measures with the regularity of the external respiration signal. 4D-CT data were acquired for 27 patients on a combined PET-CT scanner. The 4D data were reconstructed twice, using phase and amplitude binning. The 4D frames of each dataset were registered using a quadrature-based optical flow method. After registration the deformation vector field was repositioned to the mid-position. Since amplitude-binned 4D data do not provide temporal information, we corrected the mid-position for the occupancy of the bins. We quantified the differences between the two mid-position datasets in terms of tumour offset and amplitude differences. Furthermore, we measured the standard deviation of the image intensity over the respiration after registration (σ_registration) and the regularity of the deformation vector field ($\overline{\Delta |J|}$) to quantify the quality of the 4D-CT data. These measures were correlated with the regularity of the external respiration signal (σ_signal). The two irregularity measures, $\overline{\Delta |J|}$ and σ_registration, were mutually dependent (p < 0.0001; R2 = 0.80 for P-4D-CT, R2 = 0.74 for A-4D-CT). For all datasets amplitude binning resulted in lower $\overline{\Delta |J|}$ and σ_registration, and large decreases led to visible quality improvements in the mid-position data. The decrease in artefacts was correlated with the irregularity of the external respiratory signal. The average tumour offset between the phase- and amplitude-binned mid-position without occupancy correction was 0.42 mm in the caudal direction (10.6% of the amplitude).
After correction this was reduced to 0.16 mm in the caudal direction (4.1% of the amplitude). Similar relative offsets were found at the diaphragm. We have devised a method to use amplitude-binned 4D-CT to construct a motion model and generate a mid-position planning CT for radiotherapy treatment purposes. We have estimated the systematic offset of this mid-position model relative to a motion model derived from P-4D-CT. We found that A-4D-CT led to a decrease in local artefacts and that this decrease was correlated with the irregularity of the external respiration signal.
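The occupancy correction can be sketched as a weighted mean over amplitude bins; the displacements and occupancies below are hypothetical, chosen only to show why the unweighted mean is biased:

```python
# Amplitude bins carry no timing, so each bin's displacement must be
# weighted by the fraction of the breathing cycle spent in it.
bins_mm = [0.0, 2.0, 5.0, 9.0, 12.0]        # tumour displacement per amplitude bin
occupancy = [0.30, 0.20, 0.15, 0.15, 0.20]  # fraction of the cycle per bin (sums to 1)

naive_mid = sum(bins_mm) / len(bins_mm)                        # unweighted: 5.6 mm
corrected_mid = sum(d * w for d, w in zip(bins_mm, occupancy))  # ~4.9 mm
print(naive_mid, corrected_mid)
```

Because breathing dwells longer near exhale, the occupancy-weighted mid-position shifts toward the exhale bins, which is the bias the paper's correction removes.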
Vandesompele, Jo; De Preter, Katleen; Pattyn, Filip; Poppe, Bruce; Van Roy, Nadine; De Paepe, Anne; Speleman, Frank
2002-01-01
Background Gene-expression analysis is increasingly important in biological research, with real-time reverse transcription PCR (RT-PCR) becoming the method of choice for high-throughput and accurate expression profiling of selected genes. Given the increased sensitivity, reproducibility and large dynamic range of this methodology, the requirements for a proper internal control gene for normalization have become increasingly stringent. Although housekeeping gene expression has been reported to vary considerably, no systematic survey has properly determined the errors related to the common practice of using only one control gene, nor presented an adequate way of working around this problem. Results We outline a robust and innovative strategy to identify the most stably expressed control genes in a given set of tissues, and to determine the minimum number of genes required to calculate a reliable normalization factor. We have evaluated ten housekeeping genes from different abundance and functional classes in various human tissues, and demonstrated that the conventional use of a single gene for normalization leads to relatively large errors in a significant proportion of samples tested. The geometric mean of multiple carefully selected housekeeping genes was validated as an accurate normalization factor by analyzing publicly available microarray data. Conclusions The normalization strategy presented here is a prerequisite for accurate RT-PCR expression profiling, which, among other things, opens up the possibility of studying the biological relevance of small expression differences. PMID:12184808
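The geometric-mean normalization factor validated in this study can be sketched in a few lines; the expression values below are hypothetical:

```python
import math

def normalization_factor(expressions):
    """Geometric mean of the relative expression levels of the
    selected stable control genes in one sample."""
    return math.exp(sum(math.log(e) for e in expressions) / len(expressions))

# Hypothetical relative quantities of three stable housekeeping genes:
nf = normalization_factor([0.8, 1.25, 1.0])
normalized_target = 2.4 / nf   # target-gene quantity divided by the factor
print(round(nf, 6))  # 1.0
```

Using the geometric rather than arithmetic mean keeps a single outlying control gene from dominating the factor, which is why a single-gene normalizer produces the large errors the authors report.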
Jiao, S; Maltecca, C; Gray, K A; Cassady, J P
2014-07-01
Efficient use of feed resources has become a clear challenge for the U.S. pork industry as feed costs continue to be the largest variable expense. The availability of the Illumina Porcine60K BeadChip has greatly facilitated whole-genome association studies to identify chromosomal regions harboring genes influencing those traits. The current study aimed at identifying genomic regions associated with variation in feed efficiency and several production traits in a Duroc terminal sire population, including ADFI, ADG, feed conversion ratio, residual feed intake (RFI), real-time ultrasound back fat thickness (BF), ultrasound muscle depth, intramuscular fat content (IMF), birth weight (BW at birth), and weaning weight (BW at weaning). Single-trait association analyses were performed using Bayes B models with 35,140 SNP on 18 autosomes after quality control. Significance of nonoverlapping 1-Mb length windows (n = 2,380) was tested across 3 QTL inference methods: posterior distribution of window variances from Markov chain Monte Carlo, naive Bayes factor, and nonparametric bootstrapping. Genes within the informative QTL regions for the traits were annotated. A region ranging from 166 to 170 Mb (4-Mb length) on SSC 1, approximately 8 Mb upstream of the MC4R gene, was significantly associated with ADFI, ADG, and BF, where SOCS6 and DOK6 are proposed as the most likely candidate genes. Another region affecting BW at weaning was identified on SSC 4 (84-85 Mb), harboring genes previously found to influence both human and cattle height: PLAG1, CHCHD7, RDHE2 (or SDR16C5), MOS, RPS20, LYN, and PENK. No QTL were identified for RFI, IMF, and BW at birth. In conclusion, we have identified several genomic regions associated with traits affecting nutrient utilization that could be considered for future genomic prediction to improve feed utilization. PMID:24962532
NASA Astrophysics Data System (ADS)
Minh, David D. L.; Chodera, John D.
2011-01-01
Recently discovered identities in statistical mechanics have enabled the calculation of equilibrium ensemble averages from realizations of driven nonequilibrium processes, including single-molecule pulling experiments and analogous computer simulations. Challenges in collecting large data sets motivate the pursuit of efficient statistical estimators that maximize use of available information. Along these lines, Hummer and Szabo developed an estimator that combines data from multiple time slices along a driven nonequilibrium process to compute the potential of mean force. Here, we generalize their approach, pooling information from multiple time slices to estimate arbitrary equilibrium expectations. Our expression may be combined with estimators of path-ensemble averages, including existing optimal estimators that use data collected by unidirectional and bidirectional protocols. We demonstrate the estimator by calculating free energies, moments of the polymer extension, the thermodynamic metric tensor, and the thermodynamic length in a model single-molecule pulling experiment. Compared to estimators that only use individual time slices, our multiple time-slice estimators yield substantially smoother estimates and achieve lower variance for higher-order moments.
ERIC Educational Resources Information Center
Ylimaki, Rose M.; Brunderman, Lynnette; Bennett, Jeffrey V.; Dugan, Thad
2014-01-01
Today's accountability policies and changing demographics have created conditions in which leaders must rapidly build school capacity and improve outcomes in culturally diverse schools. This article presents findings from a mixed-methods evaluation of an Arizona Turnaround Leadership Development Project. The project drew on studies of…
ERIC Educational Resources Information Center
Reyes, Augustina; Garcia, Andres
2014-01-01
In an era of school reform and high-stakes accountability, the major challenge in education is to turn around the nation's lowest-performing schools. National policy provides abundant resources with often restrictive prerequisites. Research indicates that quality principals and teachers are the most important factors for turning around the…
ERIC Educational Resources Information Center
Carlson, Robert V.
This paper presents a case study of a rural community exhibiting a dramatic turnaround in community support for a new school bond issue. Demographic change was partly responsible for the change in community attitudes, with two waves of immigration altering the long-term conservative orientation of this community. After a series of failed…
Zhang, Yu; Sheng, Xiaojing; Meng, Xiangfei; Li, Yan
2014-01-01
Background The movement of organelles in root hairs primarily occurs along the actin cytoskeleton. Circulation and “reverse fountain” cytoplasmic streaming constitute the typical forms by which most organelles (such as mitochondria and the Golgi apparatus) in plant root hair cells engage in bidirectional movement. However, there remains a lack of in-depth research regarding the relationship between the distribution of the actin cytoskeleton and turnaround organelle movement in plant root hair cells. Results In this paper, Arabidopsis seedlings that had been stably transformed with a GFP-ABD2-GFP (green fluorescent protein-actin-binding domain 2-green fluorescent protein) construct were utilized to study the distribution of bundles of filamentous (F)-actin and the directed motion of mitochondria along these bundles in root hairs. Observations with a confocal laser scanning microscope revealed that there were widespread circular F-actin bundles in the epidermal cells and root hairs of Arabidopsis roots. In root hairs, these circular bundles primarily start at the sub-apical region, which is the location where the turnaround movement of organelles occurs. MitoTracker probes were used to label mitochondria, and the dynamic observation of root hair cells with a confocal laser scanning microscope indicated that turnaround mitochondrial movement occurred along circular F-actin bundles. Conclusions Relevant experimental results demonstrated that the circular F-actin bundles provide a track for the turnaround and bidirectional movement of mitochondria. PMID:24626218
James Werking; G. Bersuker; Y. D. Chan
1997-01-01
We have developed a fast-turnaround method that uses antenna-transistor test structures to monitor plasma charging damage. This technique can be used to measure plasma exposure effects and latent charging damage. Results on metal etch and plasma resist strip tools demonstrate that this method can quantitatively differentiate charging damage levels between tools, and between different processes in the same tool.
ERIC Educational Resources Information Center
Jones, Stephanie; Clarke, Lane; Enriquez, Grace
2009-01-01
This book demonstrates a five-part framework for teachers, reading specialists, and literacy coaches who want to help their least engaged students become powerful readers. Merging theory and practice, the guide offers successful strategies to reach your "struggling" learners. The authors show how you can "turn-around" your instructional practice,…
ERIC Educational Resources Information Center
Marsh, Julie A.; Strunk, Katharine O.; Bush, Susan
2013-01-01
Purpose: Despite the popularity of school "turnaround" and "portfolio district" management as solutions to low performance, there has been limited research on these strategies. The purpose of this paper is to address this gap by exploring the strategic case of Los Angeles Unified School District's Public School Choice…
NASA Astrophysics Data System (ADS)
Cai, Yong; Montague, Derek C.; Deshler, Terry
2011-01-01
Midcontinental surface aerosols have been measured at a small, minimally polluted city in summer and winter and on a nearby remote mountain in summer. Aerosol scattering, absorption, size distribution, and composition were measured using a three-wavelength nephelometer, an aethalometer, a passive cavity aerosol spectrometer, a scanning mobility particle sizer, an Aerodyne quadrupole aerosol mass spectrometer, and conventional filter systems. Size-dependent, time-dependent, and averaged refractive indices are estimated from the aerosol composition measurements and then used to calculate time-dependent aerosol scattering. The calculated scattering values show differences that are generally less than 5% on average for all three refractive indices, suggesting that the average refractive index is adequate for scattering estimations from time- or size-dependent aerosol measurements. The calculated scattering (backscattering) at 550 nm ranges from 2% less to 23% greater (11-22% smaller) than that measured. These differences decrease at 450 nm and increase at 700 nm and significantly exceed these values if optical size distribution measurements are not corrected for an appropriate index of refraction. Optimal agreement between calculated and measured scattering is achieved on 4 of the 6 days investigated in detail, if the real refractive index of the aerosol organic species ranges from 1.45 ± 0.02 at 450 nm to 1.62 ± 0.05 at 700 nm. Single-scatter albedos are also calculated and found to be in good agreement with those derived from the experimental observations, ranging from 0.79 to 0.87 in the city and constant, near 0.95, on the mountain top.
NASA Astrophysics Data System (ADS)
Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria
2013-06-01
Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
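The long-memory part of an ARFIMA model rests on the fractional-differencing operator (1 - B)^d. A minimal sketch of its truncated binomial expansion follows, using the standard coefficient recursion; this illustrates the operator only, not the authors' full ARFIMA-GARCH estimation.

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n coefficients of the (1 - B)^d filter.

    Standard recursion: w[0] = 1, w[k] = w[k-1] * (k - 1 - d) / k.
    """
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply the truncated (1 - B)^d filter to a 1-D series x."""
    w = frac_diff_weights(d, len(x))
    # Convolve each point with the weights applied to its own past.
    return np.array([np.dot(w[:t + 1], x[t::-1]) for t in range(len(x))])
```

For d = 1 the recursion reduces to ordinary first differencing; fractional d in (0, 0.5) gives the slowly decaying weights that model long memory in HRV series.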
Cleary, P. W.; Hijmans, T. W.; Walraven, J. T. M. [Van der Waals-Zeeman Institute of the University of Amsterdam, Valckenierstraat 65, 1018 XE Amsterdam (Netherlands)
2010-12-15
We report on the manipulation of the center-of-mass motion ('sloshing') of a Bose-Einstein condensate in a time-averaged orbiting potential (TOP) trap. We start with a condensate at rest in the center of a static trapping potential. When suddenly replacing the static trap with a TOP trap centered about the same position, the condensate starts to slosh with an amplitude much larger than the TOP micromotion. We show, both theoretically and experimentally, that the direction of sloshing is related to the initial phase of the rotating magnetic field of the TOP. We show further that the sloshing can be quenched by applying a carefully timed and sized jump in the phase of the rotating field.
Durantin, Gautier; Scannella, Sebastien; Gateau, Thibault; Delorme, Arnaud; Dehais, Frederic
2014-01-01
Real-time solutions for noise reduction and signal processing represent a central challenge for the development of Brain Computer Interfaces (BCI). In this paper, we introduce the Moving Average Convergence Divergence (MACD) filter, a tunable digital passband filter for online noise reduction and onset detection without preliminary learning phase, used in economic markets analysis. MACD performance was tested and benchmarked with other filters using data collected with functional Near Infrared Spectoscopy (fNIRS) during a digit sequence memorization task. This filter has a good performance on filtering and real-time peak activity onset detection, compared to other techniques. Therefore, MACD could be implemented for efficient BCI design using fNIRS. PMID:25570400
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2001-01-01
Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
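The quoted probabilities follow directly from the Poisson distribution: with expected count lambda over the window, P(N >= 1) = 1 - exp(-lambda). A quick numerical check of the figures in the abstract:

```python
from math import exp

def prob_at_least_one(rate):
    """P(N >= 1) for a Poisson process with expected count `rate`."""
    return 1.0 - exp(-rate)

# With roughly 6.5 VEI>=4 eruptions expected per decade, at least one
# event in the next ten years is nearly certain (> 99 percent).
p_vei4 = prob_at_least_one(6.5)

# The quoted ~49 percent for VEI>=5 corresponds to an expected count
# near 0.67 per decade (an inferred value, not stated in the abstract).
p_vei5 = prob_at_least_one(0.67)
```

The same one-liner reproduces the 18 percent figure for VEI>=6 with an expected count near 0.2 per decade.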
NASA Astrophysics Data System (ADS)
Manweiler, J. W.; Patterson, J. D.; Armstrong, T. P.
2011-12-01
We present the results of a survey of energy spectra of protons, helium, carbon, oxygen, and iron for the rising phase of the current solar cycle through 1 July 2011, derived from the Advanced Composition Explorer EPAM, and for the entire Ulysses mission, spanning from late 1990 to mission-end in mid-2009, using two-parameter composition aperture pulse height analyzer measurements for energies of about 0.3 to 10 MeV/nucleon. Examples will be shown of typical events as well as long-term quiet interplanetary fluxes. This data set has been posted on the Virtual Energetic Particle Observatory and is available at the Fundamental Technologies website (www.ftecs.com). We will describe the calibration procedures and comparisons with other observations at different spatial locations and energies. We have created daily-averaged 12-point energy spectra for helium, carbon, oxygen, neon, silicon, and iron. The data are in clear-text, comma-separated ASCII files that can easily be ingested into most visualization software, custom code, or spreadsheet programs. Each year's data file contains within its header a description of the various bins for the different ion species, including upper and lower energy thresholds and the geometric factor for the C and D detectors at the base of the CA60 telescope on HISCALE. Each data record contains the year, the day of year, the duty-cycle-corrected accumulation time, and the number of counts for the day and the flux for each of 12 energy channels for each of the six ion species. Having the raw counts for each day permits users to create averages over any number of days desired. One can easily create an average spectrum for an event or CIR that is several days in duration. For periods in which the fluence is low, having the raw counts and the energy thresholds also permits one to integrate in energy and easily render six- or four-point energy spectra when counts are low enough to pose statistical problems.
We present several examples of situations for which this new data product may be utilized including event-averages, particle reservoir analysis, and analysis of quiescent periods between events.
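Because the files carry raw counts, duty-cycle-corrected accumulation times, energy thresholds, and geometric factors, a user can re-derive differential flux over any averaging window by pooling counts before dividing. A minimal sketch of that arithmetic; the function names and unit labels are illustrative, not taken from the data-product documentation.

```python
def differential_flux(counts, accum_time_s, geom_factor, e_lo, e_hi):
    """Differential flux = counts / (time * geometric factor * channel width)."""
    return counts / (accum_time_s * geom_factor * (e_hi - e_lo))

def multiday_average(daily_counts, daily_times, geom_factor, e_lo, e_hi):
    """Average over several days by pooling raw counts and times,
    rather than averaging the already-computed daily fluxes."""
    return differential_flux(sum(daily_counts), sum(daily_times),
                             geom_factor, e_lo, e_hi)
```

Pooling counts first is what makes low-fluence periods tractable: two statistically starved days combine into one usable measurement.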
NASA Technical Reports Server (NTRS)
Aoyagi, Kiyoshi; Olson, Lawrence E.; Peterson, Randall L.; Yamauchi, Gloria K.; Ross, James C.; Norman, Thomas R.
1987-01-01
Time-averaged aerodynamic loads are estimated for each of the vane sets in the National Full-Scale Aerodynamic Complex (NFAC). The methods used to compute global and local loads are presented. Experimental inputs used to calculate these loads are based primarily on data obtained from tests conducted in the NFAC 1/10-Scale Vane-Set Test Facility and from tests conducted in the NFAC 1/50-Scale Facility. For those vane sets located directly downstream of either the 40- by 80-ft test section or the 80- by 120-ft test section, aerodynamic loads caused by the impingement of model-generated wake vortices and model-generated jet and propeller wakes are also estimated.
Computation of flows in a turn-around duct and a turbine cascade using advanced turbulence models
B. Lakshminarayana; J. Luo
1993-01-01
Numerical investigation has been carried out to evaluate the capability of the Algebraic Reynolds Stress Model (ARSM) and the Nonlinear Stress Model (NLSM) to predict strongly curved turbulent flow in a turn-around duct (TAD). The ARSM includes the near-wall damping term of the pressure-strain correlation, phi_ij,w, which enables accurate prediction of individual Reynolds stress components in wall flows. The TAD
Valkov, T.V. [Shell International Gas Ltd., London (United Kingdom); Tan, C.S. [Massachusetts Inst. of Tech., Cambridge, MA (United States). Gas Turbine Lab.
1999-07-01
In a two-part paper, key computed results from a set of first-of-a-kind numerical simulations on the unsteady interaction of axial compressor stators with upstream rotor wakes and tip leakage vortices are employed to elucidate their impact on the time-averaged performance of the stator. Detailed interrogation of the computed flowfield showed that for both wakes and tip leakage vortices, the impact of these mechanisms can be described on the same physical basis. Specifically, there are two generic mechanisms with significant influence on performance: reversible recovery of the energy in the wakes/tip vortices (beneficial) and the associated nontransitional boundary layer response (detrimental). In the presence of flow unsteadiness associated with rotor wakes and tip vortices, the efficiency of the stator under consideration is higher than that obtained using a mixed-out steady flow approximation. The effects of tip vortices and wakes are of comparable importance. The impact of stator interaction with upstream wakes and vortices depends on the following parameters: axial spacing, loading, and the frequency of wake fluctuations in the rotor frame. At reduced spacing, this impact becomes significant. The most important aspect of the tip vortex is the relative velocity defect and the associated relative total pressure defect, which is perceived by the stator in the same manner as a wake. In Part 1, the focus will be on the framework of technical approach, and the interaction of stator with the moving upstream rotor wakes.
Valkov, T.V. [Shell International Gas Ltd., London (United Kingdom); Tan, C.S. [Massachusetts Inst. of Tech., Cambridge, MA (United States). Gas Turbine Lab.
1999-07-01
In a two-part paper, key computed results from a set of first-of-a-kind numerical simulations on the unsteady interaction of axial compressor stator with upstream rotor wakes and tip leakage vortices are employed to elucidate their impact on the time-averaged performance of the stator. Detailed interrogation of the computed flowfield showed that for both wakes and tip leakage vortices, the impact of these mechanisms can be described on the same physical basis. Specifically, there are two generic mechanisms with significant influence on performance: reversible recovery of the energy in the wakes/tip vortices (beneficial) and the associated nontransitional boundary layer response (detrimental). In the presence of flow unsteadiness associated with rotor wakes and tip vortices, the efficiency of the stator under consideration is higher than that obtained using a mixed-out steady flow approximation. The effects of tip vortices and wakes are of comparable importance. The impact of stator interaction with upstream wakes and vortices depends on the following parameters: axial spacing, loading, and the frequency of wake fluctuations in the rotor frame. At reduced spacing, this impact becomes significant. The most important aspect of the tip vortex is the relative velocity defect and the associated relative total pressure defect, which is perceived by the stator in the same manner as a wake. In Part 2, the focus will be on the interaction of stator with the moving upstream rotor tip and streamwise vortices, the controlling parametric trends, and implications on design.
G. D. Harvel; K. Hori; K. Kawanishi; J. S. Chang
1996-01-01
A Real-Time Neutron Radiography (RTNR) system and a high speed X-ray Computed Tomography (X-CT) system are used to determine the flow regime, the instantaneous cross-sectional averaged void fraction, and the time averaged void fraction in a vertical annulus flow channel. A standard optical video system is also used to observe the flow regime. The annulus flow channel is operated as
NASA Astrophysics Data System (ADS)
Loubet, Benjamin; Carozzi, Marco
2015-04-01
Tropospheric ammonia (NH3) is a key player in atmospheric chemistry and its deposition is a threat for the environment (ecosystem eutrophication, soil acidification and reduction in species biodiversity). Most of the NH3 global emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emissions from a homogeneous source of known geometry. When the emissions derive from different sources inside the measured footprint, the problem must be treated as multi-source. This work aims at estimating whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments, composed of small fields (typically squares of 25 m side) located near each other, using low-cost NH3 measurements (diffusion samplers). To that end, a numerical experiment was designed with a combination of 3 x 3 square field sources (625 m2 each) and a set of sensors placed at the centre of each field at several heights, as well as 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian stochastic (WindTrax) and a Gaussian-like (FIDES) dispersion model. The concentrations were averaged over various integration times (3 hours to 28 days) to mimic the diffusion sampler behaviour under several sampling strategies. The sources were then inferred by inverse modelling using the averaged concentrations and the same models in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of the NH3 emissions to surface temperature. A combination of emission patterns (constant, linear decreasing, exponential decreasing and Gaussian type) and strengths was used to evaluate the uncertainty of the inversion method.
Each numerical experiment covered a period of 28 days. The meteorological dataset of the fluxnet FR-Gri site (Grignon, FR) in 2008 was employed. Several sensor heights were tested, from 0.25 m to 2 m. The multi-source inverse problem was solved under several sampling and field-trial strategies: considering 1 or 2 heights over each field, considering the background concentration as known or unknown, and considering block repetitions in the field set-up (3 repetitions). The inverse modelling approach proved suitable for discriminating large differences in NH3 emissions from small agronomic plots using integrating sensors. The method is sensitive to sensor height. The uncertainties and systematic biases are evaluated and discussed.
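Once a forward dispersion model supplies each sensor's response to unit emission from each field, the multi-source inversion reduces to a linear least-squares problem. A minimal sketch with an invented 4-sensor, 3-field dispersion matrix (illustrative numbers only, not output of WindTrax or FIDES):

```python
import numpy as np

# D[i, j]: modelled time-averaged concentration at sensor i per unit
# emission from field j (hypothetical values standing in for a forward
# run of a dispersion model).
D = np.array([
    [0.80, 0.10, 0.05],
    [0.12, 0.75, 0.08],
    [0.04, 0.09, 0.70],
    [0.02, 0.03, 0.02],   # distant sensor, mostly background
])

s_true = np.array([5.0, 1.0, 3.0])   # unknown source strengths
c = D @ s_true                       # observed averaged concentrations

# Multi-source inversion: least-squares estimate of the strengths.
s_hat, *_ = np.linalg.lstsq(D, c, rcond=None)
```

With noisy concentrations and an unknown background, the same system gains a constant column and an error term, which is where the sensitivity to sensor height and repetition strategy enters.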
MIT BLOSSOMS - Flaws of Averages
NSDL National Science Digital Library
MIT BLOSSOMS
2011-06-23
This learning video presents an introduction to the Flaws of Averages using three exciting examples: the “crossing of the river” example, the “cookie” example, and the “dance class” example. Averages are often worthwhile representations of a set of data by a single descriptive number. The objective of this module, however, is to simply point out a few pitfalls that could arise if one is not attentive to details when calculating and interpreting averages. Most students at any level in high school can understand the concept of the flaws of averages presented here. The essential prerequisite knowledge for this video lesson is the ability to calculate an average from a set of numbers. Materials needed include: pen and paper for the students; and a blackboard or equivalent. During this video lesson, students will learn about three flaws of averages: (1) The average is not always a good description of the actual situation, (2) The function of the average is not always the same as the average of the function, and (3) The average depends on your perspective. To convey these concepts, the students are presented with the three real world examples mentioned above. The total length of the four in-class video segments is 12 minutes, leaving lots of time in a typical class session for the teacher to work with the students on their own learning examples (such as those from the supplementary notes) to firm up the ideas presented here on the flaws of averages.
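Two of the three flaws are easy to reproduce numerically with the river-crossing idea: the average depth can look safe while the maximum is not, and the average of a function differs from the function of the average. The depth values below are invented for illustration.

```python
import numpy as np

depths = np.array([0.5, 0.6, 0.7, 2.5])   # river depths in metres

# Flaw 1: the average misdescribes the situation. The mean depth is
# about 1.1 m, but the crossing is decided by the deepest point.
mean_depth = depths.mean()
safe_to_wade = depths.max() < 1.5          # False

# Flaw 2: the average of a function is not the function of the average.
f = lambda d: d ** 2
avg_of_f = f(depths).mean()
f_of_avg = f(depths.mean())                # smaller, by Jensen's inequality
```

Here avg_of_f exceeds f_of_avg because squaring is convex; the gap is exactly the kind of detail the lesson warns students to watch for.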
NASA Astrophysics Data System (ADS)
Aharonian, F. A.; Akhperjanian, A. G.; Barrio, J. A.; Bernlöhr, K.; Bojahr, H.; Calle, I.; Contreras, J. L.; Cortina, J.; Daum, A.; Deckers, T.; Denninghoff, S.; Fonseca, V.; Gonzalez, J. C.; Heinzelmann, G.; Hemberger, M.; Hermann, G.; Heß, M.; Heusler, A.; Hofmann, W.; Hohl, H.; Horns, D.; Ibarra, A.; Kankanyan, R.; Kettler, J.; Köhler, C.; Konopelko, A.; Kornmeyer, H.; Kestel, M.; Kranich, D.; Krawczynski, H.; Lampeitl, H.; Lindner, A.; Lorenz, E.; Magnussen, N.; Meyer, H.; Mirzoyan, R.; Moralejo, A.; Padilla, L.; Panter, M.; Petry, D.; Plaga, R.; Plyasheshnikov, A.; Prahl, J.; Pühlhofer, G.; Rauterberg, G.; Renault, C.; Rhode, W.; Röhring, A.; Sahakian, V.; Samorski, M.; Schmele, D.; Schröder, F.; Stamm, W.; Völk, H. J.; Wiebel-Sooth, B.; Wiedner, C.; Willmer, M.; Wittek, W.
1999-09-01
During the several months of the outburst of Mkn 501 in 1997 the source was monitored in TeV gamma-rays with the HEGRA stereoscopic system of imaging atmospheric Cherenkov telescopes. Quite remarkably, it turned out that the shapes of the daily gamma-ray energy spectra remained essentially stable throughout the entire state of high activity despite dramatic flux variations during this period. The derivation of a long-term time-averaged energy spectrum, based on more than 38,000 detected TeV photons, is therefore physically meaningful. The unprecedented gamma-ray statistics combined with the 20% energy resolution of the instrument resulted in the first detection of gamma-rays from an extragalactic source well beyond 10 TeV, and the first high-accuracy measurement of an exponential cutoff in the energy region above 5 TeV, deep into the exponential regime. From 500 GeV to 24 TeV the differential photon spectrum is well approximated by a power law with an exponential cutoff: dN/dE = N_0 (E/1 TeV)^(-alpha) exp(-E/E_0), with N_0 = (10.8 +/- 0.2_stat +/- 2.1_sys) x 10^(-11) cm^(-2) s^(-1) TeV^(-1), alpha = 1.92 +/- 0.03_stat +/- 0.20_sys, and E_0 = (6.2 +/- 0.4_stat, +2.9/-1.5_sys) TeV. We summarize the methods for the evaluation of the energy spectrum in a broad dynamical range covering almost two energy decades, and study in detail the principal sources of systematic errors. We also discuss several important astrophysical implications of the observed result concerning the production and absorption mechanisms of gamma-rays in the emitting jet and the modifications of the initial spectrum of TeV radiation due to its interaction with the diffuse extragalactic background radiation.
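The fitted spectrum can be evaluated directly from the quoted central values; a minimal sketch (ignoring the statistical and systematic uncertainties):

```python
from math import exp

def dn_de(e_tev, n0=10.8e-11, alpha=1.92, e0=6.2):
    """Differential photon flux dN/dE [cm^-2 s^-1 TeV^-1] for a
    power law with exponential cutoff, using the central fitted
    Mkn 501 parameters quoted in the abstract."""
    return n0 * e_tev ** (-alpha) * exp(-e_tev / e0)
```

At 1 TeV the cutoff term barely matters, while above E_0 = 6.2 TeV it drives the steep decline the measurement traces out to 24 TeV.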
Angel, Brad M; Simpson, Stuart L; Chariton, Anthony A; Stauber, Jenny L; Jolley, Dianne F
2015-07-01
Intermittent, fluctuating and pulsed contaminant discharges result in organisms receiving highly variable contaminant exposures. Current water quality guidelines are predominantly derived using data from continuous-exposure toxicity tests, and are most frequently applied by regulators with the assumption that concentrations from a single sampling event will provide a meaningful approach to assessing potential effects. This study investigated the effect of single and multiple (daily) dissolved copper pulses on the marine diatom Phaeodactylum tricornutum, including measurements of copper uptake and elimination to investigate the toxic mechanism. Copper pulses of between 0.5 and 24 h and continuous exposures with equivalent 72-h time-averaged concentrations (TACs) resulted in similar biomass inhibition of P. tricornutum, with continuous exposures often being marginally more toxic. Rates of cell division generally recovered to control levels within 24 h of the copper pulse removal. Upon resuspension in clean seawater, the extracellular copper per cell decreased rapidly, whereas the intracellular copper per cell decreased slowly. Negligible loss of copper from the total algal biomass indicated that P. tricornutum did not have an effective mechanism for eliminating copper from cells; rather, the intracellular copper decreased as a result of dilution by cellular division as the algal growth rate recovered. The copper uptake measured after 72-h exposure and the elimination kinetics thereafter suggest that continuous exposures are marginally more toxic to P. tricornutum than pulsed copper exposures with equivalent TACs, because slow internalization and saturation of algal membrane transport sites result in less copper uptake in pulse-exposed cells than in continuously exposed cells, coupled with dilution of internalized copper via cellular division in the post-exposure period. In the case of P. tricornutum, the results indicate that water quality guidelines for copper based on continuous exposure will be conservative when applied to short-term discharges. PMID:25911575
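The time-averaged concentration that makes a pulse comparable to a continuous exposure is simple arithmetic; a minimal sketch, assuming an idealized square pulse (the example numbers are illustrative, not from the study):

```python
def time_averaged_concentration(pulse_conc, pulse_hours, window_hours=72.0):
    """72-h time-averaged concentration (TAC) of a single square pulse."""
    return pulse_conc * pulse_hours / window_hours

# A 6-h pulse at 24 ug/L has the same 72-h TAC as a continuous
# exposure at 2 ug/L.
tac = time_averaged_concentration(24.0, 6.0)
```

Matching exposures on TAC is what lets the study compare pulse and continuous treatments on equal dose terms.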
NASA Astrophysics Data System (ADS)
Stocker, Erich; Chou, Joyce; Woltz, Lawrence
2010-05-01
TRMM has three level 3 (space/time averaged) data products that aggregate level 2 TRMM Microwave Imager (TMI) GPROF precipitation retrievals: 3A12, a monthly accumulation of 2A12, the GPROF swath retrieval product; 3B31, a monthly accumulation of 2B31, the combined retrieval product that uses both Precipitation Radar (PR) and TMI data; and 3G68 (and its variants), which provide hourly retrievals for TMI, PR and combined. The 3G products are packaged as daily files but provide hourly information at 0.5 x 0.5 degree globally, 0.25 x 0.25 degree globally, and 0.1 x 0.1 degree over Africa, Australia and South America. This paper will present detailed information on the changes in the v7 TMI GPROF level 2 retrievals that have led to changes in the level 3 accumulations. These changes have led to the need to "filter" precipitation retrievals over the ocean in the level 3 products. The Precipitation Processing System (PPS) has developed a common filtering approach that is used in the aggregations of all the level 3 products listed above. The rationale for and the effects of the filtering are also presented. Additionally, the paper includes the changes that were made in the combined monthly product, 3B31. This product has been substantially changed for version 7. One of the major changes was the move to a finer spatial resolution than was used in previous versions. Additionally, the product is now a more direct accumulation of the level 2 2B31 combined swath precipitation retrieval. The paper concludes with a report on the status of the TMI-based level 3 products for TRMM v7 reprocessing.
NASA Astrophysics Data System (ADS)
Baines, A. G.; Schwartz, J. J.; Cheadle, M. J.; John, B. E.
2005-12-01
Determining the rates of oceanic detachment faulting and how they compare to rates of seafloor spreading is of fundamental importance for constraining how these features form and their relationship to tectonic processes, including magmatic accretion, in slow-spreading environments. We use Pb/U ages of igneous zircon from lower crustal gabbros and sea-surface magnetic data from Atlantis Bank to determine half-spreading rates during detachment faulting. We suggest that these rates, which record crustal accretion and cooling of lower crust in the Antarctic plate, also reflect the time-averaged rate of motion on the detachment fault. During the time Atlantis Bank formed, from 11.2 to 12.5 Ma, both techniques yield a half-spreading rate of 14 ± 1 km/Ma. The magnetically determined full-spreading rate has been 14 km/Ma since 20 Ma, and magnetic data from neighboring segments show that this full spreading rate did not increase during the period of detachment faulting. Our data also show that the period of highly asymmetric spreading was confined to the segment containing Atlantis Bank, and so was associated with ridge migration and closure of the non-transform discontinuity to the east. Therefore, detachment faulting at Atlantis Bank occurred during a period of highly asymmetric spreading, a result consistent with sea-surface magnetic data from the FUJI dome at 63° 45'E on the SWIR. Our data require a re-evaluation of existing models that predict structural asymmetry but not asymmetric plate spreading rates during the formation of oceanic core complexes. The half-spreading rate during detachment faulting suggests that for ~1.3 Ma the detachment fault acted as the primary plate boundary and that the axial valley was (within resolution) stationary with respect to the hanging wall (African Plate). If volcanism continued within the axial valley, a thick layer of extrusive material would have formed. This has not been identified.
A combination of hydrothermal cooling and strain localization on the detachment fault might lead to limited diking and volcanism within the hanging wall, thus restricting crustal accretion to the lower crust and producing a thick plutonic layer in the Antarctic Plate, a prediction consistent with seismic velocities and crustal thickness observed beneath Atlantis Bank.
Herrinton, Lisa J; Liu, Liyan; Altschuler, Andrea; Dell, Richard; Rabrenovich, Violeta; Compton-Phillips, Amy L
2015-01-01
The cost to build and to maintain traditional registries for many diverse, complex, low-frequency conditions is prohibitive. The authors used accessible technology to develop a platform that would generate miniregistries (small, routinely updated datasets) for surveillance, to identify patients who were missing expected utilization, and to influence clinicians to change practices to improve care. The platform, tested in 5 medical specialty departments, enabled the specialists to rapidly and effectively communicate clinical questions, knowledge of disease, clinical workflows, and improvement opportunities. Each miniregistry required 1 to 2 hours of collaboration by a specialist. Turnaround was 1 to 14 days. PMID:25785640
NASA Astrophysics Data System (ADS)
Lakshminarayana, B.; Luo, J.
1993-07-01
The objective of this research is to develop turbulence models to predict the flow and heat transfer fields dominated by curvature effects, such as those encountered in turbine cascades and turn-around ducts. A Navier-Stokes code has been developed using an explicit Runge-Kutta method with a two-layer k-epsilon/ARSM (Algebraic Reynolds Stress Model), Chien's Low Reynolds Number (LRN) k-epsilon model, and Coakley's LRN q-omega model. The near-wall pressure-strain correlation term was included in the ARSM. The formulation is applied to Favre-averaged N-S equations, and no thin-layer approximations are made in either the mean flow or turbulence transport equations. Anisotropic scaling of artificial dissipation terms was used. A locally variable timestep was also used to improve convergence. Detailed comparisons were made between computations and data measured in a turbine cascade by Arts et al. at the Von Karman Institute. The surface pressure distributions and wake profiles were predicted well by all the models. The blade heat transfer is predicted well by the k-epsilon/ARSM model, as well as by the k-epsilon model. It is found that the onset of boundary layer transition on both surfaces is highly dependent upon the level of local freestream turbulence intensity, which is strongly influenced by the streamline curvature. Detailed computation of the flow in the turn-around duct has been carried out and validated against the data of Monson as well as Sandborn. The computed results at various streamwise locations on both the concave and convex sides are compared with flow and turbulence data, including the separation zone on the inner wall. The k-epsilon/ARSM model yielded relatively better results than the two-equation turbulence models. A detailed assessment of the turbulence models has been made with regard to their applicability to curved flows.
P. Welch
1967-01-01
The use of the fast Fourier transform in power spectrum analysis is described. Principal advantages of this method are a reduction in the number of computations and in required core storage, and convenient application in nonstationarity tests. The method involves sectioning the record and averaging modified periodograms of the sections.
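The sectioning-and-averaging procedure described above can be sketched in a few lines; the segment length, overlap fraction, and Hann window below are illustrative choices, not parameters from the article:

```python
import numpy as np

def welch_psd(x, seg_len=256, overlap=0.5, fs=1.0):
    """Estimate a power spectrum by sectioning the record, windowing
    each section, and averaging the modified periodograms."""
    step = int(seg_len * (1 - overlap))
    window = np.hanning(seg_len)
    # Normalize so the result scales like a power spectral density.
    norm = fs * np.sum(window**2)
    segments = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, step)]
    periodograms = [np.abs(np.fft.rfft(window * s))**2 / norm for s in segments]
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    return freqs, np.mean(periodograms, axis=0)
```

Averaging over many (possibly overlapping) windowed sections is what reduces the variance of the estimate relative to a single periodogram of the whole record, at the cost of frequency resolution.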
Q. Y. Liu; M. Maris; S. T. Petcov
1997-01-01
This is the first of two articles aimed at providing comprehensive predictions for the day-night (D-N) effect for the Super-Kamiokande detector in the case of the Mikheyev-Smirnov-Wolfenstein (MSW) nu_e --> nu_mu(tau) transition solution of the solar neutrino problem. The one-year-averaged probability of survival of the solar nu_e crossing the Earth's mantle, the core, the inner 2/3 of the core, and the (core+mantle)
Toward effective utilization of timing exceptions in design optimization
Kwangok Jeong; Andrew B. Kahng; Seokhyeong Kang
2010-01-01
Timing exceptions in IC implementation processes, especially timing verification, help reduce pessimism that arises from unnecessary timing constraints by masking non-functional critical paths. Ideally, timing exceptions should always be helpful for quality of results (QOR) metrics such as area or number of timing violations, and for design turnaround time (TAT) metrics such as tool runtime and number of design iterations.
ERIC Educational Resources Information Center
Abbott, Laura Lynn Tanner
2014-01-01
The purpose of this quantitative non-experimental predictive study was to determine if CIA alignment factors and related district support systems are associated with student achievement to enable the turnaround of schools in crisis. This study aimed to utilize the District Snapshot Tool to determine if the district systems that support CIA…
PURNELL H. BENSON
1967-01-01
76 college Ss reported how they would divide amounts of time per week (7 and 21 hr.) between pairs of activities formed in all possible pairs from a list of 7 activities. Marginal utility functions for the activities were fitted to the 42 paired divisions of time for each student. The parameters of these functions were then correlated with grade-point
Galles, David
University of San Francisco IPEDS 2011-12 Four Year Average Graduation Rates Summary of full Female Male Enrolled Graduated Average Grad Rate * Other Sports consist of: Men and Womens Golf, Tennis a graduation rate for the most recent four-year cohort that has had an opportunity to graduate within 150 per
Galles, David
University of San Francisco IPEDS 2010-11 Four Year Average Graduation Rates Summary of full Female Male Enrolled Graduated Average Grad Rate * Other Sports consist of: Men and Womens Golf, Tennis a graduation rate for the most recent four-year cohort that has had an opportunity to graduate within 150 per
Galles, David
University of San Francisco IPEDS 2012-13 Four Year Average Graduation Rates Summary of full Female Male Enrolled Graduated Average Grad Rate * Other Sports consist of: Men and Womens Golf, Tennis a graduation rate for the most recent four-year cohort that has had an opportunity to graduate within 150 per
Errors due to average velocities
Bruce Denardo; Selmer Wong; Alpha Lo
1989-01-01
The error caused when the average velocity over an interval is used to approximate the instantaneous velocity at either the time or spatial midpoint of the interval is calculated for one-dimensional constant acceleration and for simple harmonic motion. The results for simple harmonic motion are applied to the computed energy, which these velocity approximations cause to deviate from constancy. Also
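For the constant-acceleration case the abstract describes, a short numerical check (with arbitrary illustrative values of a, t1, and t2) shows that the average velocity is exact at the time midpoint but not at the spatial midpoint:

```python
import numpy as np

a = 2.0                        # constant acceleration (arbitrary units)
t1, t2 = 1.0, 3.0              # interval endpoints
x = lambda t: 0.5 * a * t**2   # position, starting from rest at the origin
v = lambda t: a * t            # instantaneous velocity

v_avg = (x(t2) - x(t1)) / (t2 - t1)   # average velocity over the interval

# At the time midpoint, the average velocity is exact for constant acceleration.
t_mid = 0.5 * (t1 + t2)
err_time = v_avg - v(t_mid)

# At the spatial midpoint it is not: invert x(t) to find when the
# particle passes the midpoint of the positions at t1 and t2.
x_mid = 0.5 * (x(t1) + x(t2))
t_at_xmid = np.sqrt(2.0 * x_mid / a)
err_space = v_avg - v(t_at_xmid)      # nonzero (about -0.47 here)
```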
Determining GPS average performance metrics
NASA Technical Reports Server (NTRS)
Moore, G. V.
1995-01-01
Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego as over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.
NASA Astrophysics Data System (ADS)
Ferraris, Stefano; Agnese, Carmelo; Baiamonte, Giorgio; Canone, Davide; Previati, Maurizio; Cat Berro, Daniele; Mercalli, Luca
2015-04-01
Modeling of rainfall statistical structure represents an important research area in hydrology, meteorology, atmospheric physics and climatology, because of its several theoretical and practical implications. The statistical inference of the alternation of wet periods (WP) and dry periods (DP) in daily rainfall records can be achieved through the modelling of inter-arrival time-series (IT), defined as the succession of times elapsed between a rainy day and the one immediately preceding it. It has been shown previously that the statistical structure of IT can be well described by the 3-parameter Lerch distribution (Lch). In this work, Lch was successfully applied to IT data belonging to a sub-alpine area (Piemonte and Valle d'Aosta, NW Italy); furthermore, the same statistical procedure was applied to the rainfall depths associated with the ITs in the daily records. The analysis has been carried out for 26 long daily rainfall series (≥ 90 yr of observations). The main objective of this work was to detect temporal trends in some features describing the statistical structure of both the inter-arrival time-series (IT) and the associated rainfall depth (H). Each time-series was divided into five-year subsets, and for each of them the Lch parameters were estimated, so as to extend the trend analysis to some high quantiles.
Monthly Average Temperature Modelling
M. Andrade-Bejarano
This research is associated with the goal of the horticultural sector of the Colombian southwest, which is to obtain climatic information, specifically, to predict the monthly average temperature in sites where it has not been measured. The data correspond to monthly average temperature, and were recorded in meteorological stations at Valle del Cauca, Colombia, South America. Two components are identified
Yoshihiro Adachi; Ichiro Daigo
2007-01-01
- Preamble. In this series of two papers, a methodology to calculate the average number of times a material is used in a society from cradle to grave is presented and applied to allocation of environmental impact of virgin material. Part 1 focused on methodology development and showed how the methodology works with hypothetical examples of material flows. Part 2 presents
NASA Astrophysics Data System (ADS)
Hulot, G.; Khokhlov, A.; Johnson, C. L.
2012-12-01
It is well known that the geometry of the recent time-averaged paleomagnetic field (TAF) is very close to that of a geocentric axial dipole (GAD). Yet, numerous numerical dynamo simulations show that some departures from such a simple geometry are to be expected, not least because of the heterogeneous thermal core-mantle boundary conditions that the convecting mantle imposes on the geodynamo. Indeed, many TAF models recovered from averaging lava flow paleomagnetic directional data (the most numerous and reliable of all data) would suggest this is the case. However, assessing the significance of such minor departures from the GAD is particularly challenging, because non-linear directional data are sensitive not only to the time-averaged component of the field, but also to its time-fluctuating component, known as the paleosecular variation (PSV). This means that in addition to data errors, PSV also must be taken into account when assessing any claims of departures of the TAF from the GAD based on lava flow directional data. Furthermore, because of limited age information for these data, it is necessary to assess departures from the GAD by resorting to a statistical approach. We report recent progress using an approach we have suggested and further developed (Khokhlov et al., Geophysical Journal International, 2001, 2006) to test the compatibility of combined time-averaged (TAF) and paleosecular variation (PSV) field models against any lava flow paleomagnetic database, assuming that these TAF and PSV models are defined within the Giant Gaussian Process statistical framework. In particular we will show how sensitive statistical measures of the compatibility of a combined set of TAF and PSV models with a given directional database can be defined. These measures can be used to test published TAF and PSV models with updated 0-5 Ma lava flow paleomagnetic data sets. 
They also lay the groundwork for designing inverse methods better suited to seek the minimum required departure of the TAF from the GAD.
Ukawa, Shigekazu; Tamakoshi, Akiko; Yatsuya, Hiroshi; Yamagishi, Kazumasa; Ando, Masahiko; Iso, Hiroyasu
2015-01-01
Background Sedentary behavior is associated with cardiovascular disease, diabetes mellitus, and cancer morbidity, and watching television (TV) is an important sedentary behavior. The aim of this study is to clarify the association between TV viewing time and chronic obstructive pulmonary disease (COPD)-related mortality in Japanese adults. Methods Using the Cox proportional hazard model, we assessed COPD-related mortality by TV viewing time in a national cohort of 33 414 men and 43 274 women without cancer, stroke, myocardial infarction, or tuberculosis at baseline (1988–1990). Results The median follow-up was 19.4 years; 244 men and 34 women died of COPD. Men watching ≥4 hours/day of TV were more likely to die of COPD than those watching <2 hours/day (hazard ratio 1.63; 95% confidence interval, 1.04–2.55), independent of major confounders. No association was found in women. Conclusions Avoiding a sedentary lifestyle, particularly prolonged TV viewing, may help in preventing death from COPD among men. PMID:25947581
J. Remes; J. Vahakangas; A. Uusimaki
2009-01-01
Tungsten conductor lines deposited by focused ion beam (FIB) from W(CO)6 precursor gas quite often have too high a resistivity (typically 100-200 μΩ·cm) in practical integrated circuit edit work. Even if the high resistivity of the deposited conductor line can be tolerated, the FIB deposition process for conductor lines with length over 100 μm can take several hours.
REASON FOR ABSENCE FORM (To be filed with bi-weekly turnaround document)
Oyet, Alwell
in excess of four consecutive days, a doctor's note must be provided) Jury Duty or Court Leave Compassionate) Reason PLEASE PRINT - USE BLACK INK ONLY BEGIN DATE END DATE APPROVED: YES NO IF NO, RETURN FORM SIGNATURE DATE DATE: DATE BEGIN TIME END TIME ID NUMBER MUST BE PROVIDED SECTION 2: FOR USE BY SUPERVISOR
Jiao, S; Maltecca, C; Gray, K A; Cassady, J P
2014-06-01
The efficiency of producing salable products in the pork industry is largely determined by costs associated with feed and by the amount and quality of lean meat produced. The objectives of this paper were 1) to explore heritability and genetic correlations for growth, feed efficiency, and real-time ultrasound traits using both pedigree and marker information and 2) to assess accuracy of genomic prediction for those traits using Bayes A prediction models in a Duroc terminal sire population. Body weight at birth (BW at birth) and weaning (BW at weaning) and real-time ultrasound traits, including back fat thickness (BF), muscle depth (MD), and intramuscular fat content (IMF), were collected on the basis of farm protocol. Individual feed intake and serial BW records of 1,563 boars obtained from feed intake recording equipment (FIRE; Osborne Industries Inc., Osborne, KS) were edited to obtain growth, feed intake, and feed efficiency traits, including ADG, ADFI, feed conversion ratio (FCR), and residual feed intake (RFI). Correspondingly, 1,047 boars were genotyped using the Illumina PorcineSNP60 BeadChip. The remaining 516 boars, as an independent sample, were genotyped with a low-density GGP-Porcine BeadChip and imputed to 60K. Magnitudes of heritability from pedigree analysis were moderate for growth, feed intake, and ultrasound traits (ranging from 0.44 ± 0.11 for ADG to 0.58 ± 0.09 for BF); heritability estimates were 0.32 ± 0.09 for FCR but only 0.10 ± 0.05 for RFI. Comparatively, heritability estimates using marker information by Bayes A models were about half of those from pedigree analysis, suggesting "missing heritability." 
Moderate positive genetic correlations between growth and feed intake (0.32 ± 0.05) and back fat (0.22 ± 0.04), as well as negative genetic correlations between growth and feed efficiency traits (-0.21 ± 0.08, -0.05 ± 0.07), indicate that selection solely on growth traits may lead to an undesirable increase in feed intake and back fat and reduced feed efficiency. Genetic correlations among growth, feed intake, and FCR assessed by a multiple-trait Bayes A model resulted in an increased genetic correlation between ADG and ADFI, a negative correlation between ADFI and FCR, and a positive correlation between ADG and FCR. Accuracies of genomic prediction for the traits investigated, ranging from 9.4% for RFI to 36.5% for BF, were reported; these might provide new insight into pig breeding and future selection programs using genomic information. PMID:24671579
NASA Astrophysics Data System (ADS)
Kim, Seung-Kyu; Park, Jong-Eun
2014-06-01
Despite remarkable achievements with some chemicals, a field-measurement technique has not been advanced for volatile hydrophobic organic chemicals (HOCs) that are the subject of international concern. This study assesses the applicability of passive air sampling (PAS) by comparing PUF-PAS and its modified SIP-PAS, which was made by impregnating XAD-4 powder into PUF, overviewing the principles of PAS, screening sensitive parameters, and determining the uncertainty range of PAS-derived concentration. The PAS air sampling rate determined in this study, corrected by a co-deployed low-volume active air sampler (LAS) for neutral PFCs as model chemicals, was ˜1.2 m3 day-1. Our assessment shows that the improved sorption capacity in a SIP lengthens PAS deployment duration by expanding the linear uptake range, and thus enlarges the effective air sampling volume and detection frequency of chemicals at trace level. Consequently, volatile chemicals can be collected over sufficiently long times without reaching equilibrium when using SIP, while this is not possible for PUF. The parameter most sensitive in influencing the PAS-derived CA was the air-side mass transfer coefficient (kA), implying the necessity of spiking depuration chemicals (DCs), because this parameter is strongly related to meteorological conditions. Uncertainty in partition coefficients (KPSM-A or KOA) influences PAS-derived CA to a greater extent for lower-KPSM-A chemicals. Also, the PAS-derived CA has an uncertainty range from a half level to a 3-fold higher level of the calculated one. This work is expected to establish solid grounds for the improvement of field-measurement techniques for HOCs.
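In the linear uptake range, the PAS-derived air concentration is simply the collected mass divided by the effective air volume (sampling rate times deployment time). A minimal sketch using the ~1.2 m3/day rate reported above; the collected mass and deployment length are illustrative numbers, not values from the study:

```python
def pas_concentration(mass_ng, sampling_rate_m3_per_day=1.2, days=30.0):
    """PAS-derived air concentration (ng/m3): collected analyte mass
    divided by the effective air volume, valid only in the linear
    uptake range (before the sampler approaches equilibrium)."""
    effective_volume_m3 = sampling_rate_m3_per_day * days
    return mass_ng / effective_volume_m3

# e.g., 72 ng collected over a 30-day deployment -> 2.0 ng/m3
conc = pas_concentration(72.0)
```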
Jiao, S; Maltecca, C; Gray, K A; Cassady, J P
2014-04-28
Efficient use of feed resources has become a clear challenge for the US pork industry as feed costs continue to be the largest variable expense. The availability of the Illumina Porcine60K BeadChip has greatly facilitated whole-genome association studies to identify chromosomal regions harboring genes influencing those traits. The current study aimed at identifying genomic regions associated with variation in feed efficiency and several production traits in a Duroc terminal sire population, including ADFI, ADG, feed conversion ratio (FCR), residual feed intake (RFI), real-time ultrasound back fat thickness (BF), muscle depth (MD), intramuscular fat content (IMF), birth weight (BW at birth) and weaning weight (BW at weaning). Single-trait association analyses were performed using Bayes-B models with 35,140 SNP on 18 autosomes after quality control. Significance of non-overlapping 1-Mb length windows (n = 2,380) was tested across 3 QTL inference methods: posterior distribution of window variances from Monte Carlo Markov Chain, naive Bayes Factor and non-parametric bootstrapping. Genes within the informative QTL regions for the traits were annotated. A region spanning 166-170 Mb (4-Mb length) on SSC 1, approximately 8 Mb upstream of the MC4R gene, was significantly associated with ADFI, ADG and BF, where SOCS6 and DOK6 are proposed as the most likely candidate genes. Another region affecting BW at weaning was identified on SSC 4 (84-85 Mb), harboring genes previously found to influence both human and cattle height: PLAG1, CHCHD7, RDHE2 (or SDR16C5), MOS, RPS20, LYN and PENK. No QTL were identified for RFI, IMF and BW at birth. In conclusion, we have identified several genomic regions associated with traits affecting nutrient utilization that could be considered for future genomic prediction to improve feed utilization. PMID:24778334
Forecasting Inflation Using Dynamic Model Averaging
Gary Koop; Dimitris Korobilis
2010-01-01
We forecast quarterly US inflation based on the generalized Phillips curve using econometric methods which incorporate dynamic model averaging. These methods not only allow for coefficients to change over time, but also allow for the entire forecasting model to change over time. We find that dynamic model averaging leads to substantial forecasting improvements over simple benchmark regressions and more
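A minimal sketch of the model-probability recursion behind dynamic model averaging, with a forgetting factor alpha that lets the favored model change over time. The two-step update below is a generic forgetting-factor recursion, not the authors' exact specification, and all numbers are illustrative:

```python
import numpy as np

def dma_weights(log_liks, alpha=0.99):
    """Recursive model probabilities for dynamic model averaging.
    log_liks: (T, K) array of per-period predictive log-likelihoods
    for K candidate models. Each period, weights are flattened by
    raising them to the power alpha (the forgetting/prediction step),
    then updated by each model's predictive likelihood (Bayes step)."""
    n_models = log_liks.shape[1]
    w = np.full(n_models, 1.0 / n_models)   # start from equal weights
    history = []
    for ll in log_liks:
        pred = w**alpha
        pred /= pred.sum()                    # forgetting ("prediction") step
        post = pred * np.exp(ll - ll.max())   # update step (shifted for stability)
        w = post / post.sum()
        history.append(w)
    return np.array(history)
```

With alpha = 1 this reduces to ordinary Bayesian model averaging; alpha < 1 discounts old evidence, so the weights can migrate to whichever model has forecast well recently.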
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-30
... Local management information system developments...partner programs. Timely information, that identifies the...various practices or problems, is needed for ETA...analyzed. Administrative information and data are too limited...alternative mechanism for collecting information that...
Galles, David
University of San Francisco IPEDS 2012-13 Four Year Average Graduation Rates Summary of full Graduation Rate 68% 48% 63% 71% 50% 64% 79% 74% 77% 75% 66% 73% 68% 66% 68% 69% 58% 66% 72% 66% 70% Alien Amendments of 1991, USF discloses a graduation rate for the most recent four-year cohort that has had
Galles, David
University of San Francisco IPEDS 2011-12 Four Year Average Graduation Rates Summary of full Graduation Rate 66% 44% 60% 71% 56% 65% 78% 75% 77% 76% 74% 75% 67% 65% 66% 69% 62% 66% 71% 67% 70% Alien Amendments of 1991, USF discloses a graduation rate for the most recent four-year cohort that has had
NSDL National Science Digital Library
Roberts, Lila F.
2005-04-21
This demo provides students with a concrete understanding of the average rate of change for physical situations and for functions described in tabular or graphic form. Animations in gif and Quicktime format are available, as well as Excel programs for classroom or student use.
Towards sensitive label-free immunosensing by means of turn-around point long period fiber gratings.
Chiavaioli, F; Biswas, P; Trono, C; Bandyopadhyay, S; Giannetti, A; Tombelli, S; Basumallick, N; Dasgupta, K; Baldini, F
2014-10-15
Long period fiber gratings have been effectively used in the field of biochemical sensing for several years. Compared to other well-known label-free optical approaches, long period gratings (LPGs) take advantage of the typical peculiarities of optical fibers. Coupling the propagating core mode with a high-order cladding mode near its turn-around point (TAP) was the strategy adopted to achieve good performance without additional coatings, except for the sensing and selective biolayer deposited on the fiber. Both the modeling and manufacturing of TAP LPGs were discussed. After the functionalization of the fiber surface with the deposition of a Eudragit L100 copolymer layer followed by immunoglobulin G (IgG) covalent immobilization, an IgG/anti-IgG bioassay was implemented along the grating region and the kinetics of antibody/antigen interaction was analyzed. A quantitative comparison between a TAP LPG and a non-TAP LPG was carried out to highlight the improvement of the proposed immunosensor. The real effectiveness and feasibility of an LPG-based biosensor were demonstrated by using a complex matrix consisting of human serum, which also confirmed the specificity of the assay, and a limit of detection of 70 μg L(-1) (460 pM) was achieved. PMID:24835405
ERIC Educational Resources Information Center
Conrad, Jill K.
2013-01-01
This dissertation examines the levels of and relationships between technical leadership, adaptive leadership, and defined autonomy among Denver school leaders along with their combined effects on school growth gains over time. Thirty principals provided complete responses to an online survey that included existing scales for technical leadership,…
Covariant approximation averaging
NASA Astrophysics Data System (ADS)
Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2015-06-01
We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
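The unbiasedness argument behind covariant approximation averaging can be illustrated with a toy scalar analogue (not lattice QCD): an expensive observable is corrected by a cheap, biased approximation whose bias cancels because the approximation is also averaged over a symmetry orbit of the sampling measure. Everything below is an illustrative stand-in, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive(x):   # stand-in for the exact (costly) measurement
    return np.sin(x) + 0.1 * x

def cheap(x):       # stand-in for the relaxed, biased approximation
    return np.sin(x)

# Sample "configurations" uniformly on [0, 2*pi); translations by
# multiples of 2*pi/16 are a symmetry of this measure.
samples = rng.uniform(0.0, 2.0 * np.pi, 200)
shifts = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
orbit_avg = cheap(samples[:, None] + shifts[None, :]).mean(axis=1)

# Improved estimator: exact minus approximation plus orbit-averaged
# approximation. The two approximation terms share the same expectation,
# so the estimator stays unbiased while its variance drops.
o_exact = expensive(samples)
o_imp = o_exact - cheap(samples) + orbit_avg
```

The variance reduction comes from the strong correlation between the exact and approximate measurements on each sample; the orbit average supplies the approximation's mean at far lower cost than repeating the exact measurement.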
Freethey, G.W.; Spangler, L.E.; Monheiser, W.J.
1994-01-01
A 48-square-mile area in the southeastern part of the Salt Lake Valley, Utah, was studied to determine if generalized information obtained from geologic maps, water-level maps, and drillers' logs could be used to estimate hydraulic conductivity, porosity, and slope of the potentiometric surface: the three properties needed to calculate average linear velocity of ground water. Estimated values of these properties could be used by water-management and regulatory agencies to compute values of average linear velocity, which could be further used to estimate travel time of ground water along selected flow lines, and thus to determine wellhead protection areas around public-supply wells. The methods used to estimate the three properties are based on assumptions about the drillers' descriptions, the depositional history of the sediments, and the boundary conditions of the hydrologic system. These assumptions were based on geologic and hydrologic information determined from previous investigations. The reliability of the estimated values for hydrologic properties and average linear velocity depends on the accuracy of these assumptions. Hydraulic conductivity of the principal aquifer was estimated by calculating the thickness-weighted average of values assigned to different drillers' descriptions of material penetrated during the construction of 98 wells. Using these 98 control points, the study area was divided into zones representing approximate hydraulic-conductivity values of 20, 60, 100, 140, 180, 220, and 250 feet per day. This range of values is about the same range of values used in developing a ground-water flow model of the principal aquifer in the early 1980s. 
Porosity of the principal aquifer was estimated by compiling the range of porosity values determined or estimated during previous investigations of basin-fill sediments, and then using five different values ranging from 15 to 35 percent to delineate zones in the study area that were assumed to be underlain by similar deposits. Delineation of the zones was based on depositional history of the area and the distribution of sediments shown on a surficial geologic map. Water levels in wells were measured twice in 1990: during late winter when ground-water withdrawals were the least and water levels the highest, and again in late summer, when ground-water withdrawals were the greatest and water levels the lowest. These water levels were used to construct potentiometric-contour maps and subsequently to determine the variability of the slope in the potentiometric surface in the area. Values for the three properties, derived from the described sources of information, were used to produce a map showing the general distribution of average linear velocity of ground water moving through the principal aquifer of the study area. Velocities derived ranged from 0.06 to 144 feet per day with a median of about 3 feet per day. Values were slightly faster for late summer 1990 than for late winter 1990, mainly because increased withdrawal of water during the summer created slightly steeper hydraulic-head gradients between the recharge area near the mountain front and the well fields farther to the west. The fastest average linear-velocity values were located at the mouth of Little Cottonwood Canyon and south of Dry Creek near the mountain front, where the hydraulic conductivity was estimated to be the largest because the drillers described the sediments to be predominantly clean and coarse grained. Both of these areas also had steep slopes in the potentiometric surface. 
Other areas where average linear velocity was fast included small areas near pumping wells where the slope in the potentiometric surface was locally steepened. No apparent relation between average linear velocity and porosity could be seen in the mapped distributions of these two properties. Calculation of travel time along a flow line to a well in the southwestern part of the study area during the sum
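The travel-time estimates described above rest on the standard average-linear-velocity relation v = K i / n (hydraulic conductivity times potentiometric slope, divided by porosity). A minimal sketch of that calculation; the specific input values below are illustrative assumptions, not numbers from the report:

```python
def average_linear_velocity(conductivity_ft_per_day, gradient, porosity):
    """Average linear velocity of ground water, v = K * i / n, in ft/day.

    conductivity_ft_per_day: K, e.g. the 20-250 ft/day zones from drillers' logs
    gradient: i, slope of the potentiometric surface (dimensionless)
    porosity: n, a fraction (the study used values from 0.15 to 0.35)
    """
    if not 0.0 < porosity < 1.0:
        raise ValueError("porosity must be a fraction between 0 and 1")
    return conductivity_ft_per_day * gradient / porosity

# Illustrative: a clean, coarse-grained zone near the mountain front
# with a steep potentiometric slope (values assumed).
v = average_linear_velocity(250.0, 0.01, 0.15)
```

With these assumed inputs v is about 16.7 ft/day, which sits inside the 0.06-144 ft/day range the study reports.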
Herrinton, Lisa J; Liu, Liyan; Altschuler, Andrea; Dell, Richard; Rabrenovich, Violeta; Compton-Phillips, Amy L
2015-01-01
Context: Disease registries enable priority setting and batching of clinical tasks, such as reaching out to patients who have missed a routine laboratory test. Building disease registries requires collaboration among professionals in medicine, population science, and information technology. Specialty care addresses many complex, uncommon conditions, and these conditions are diverse. The cost to build and maintain traditional registries for many diverse, complex, low-frequency conditions is prohibitive. Objective: To develop and to test the Specialty Miniregistries platform, a collaborative interface designed to streamline the medical specialist’s contributions to the science and management of population health. Design: We used accessible technology to develop a platform that would generate miniregistries (small, routinely updated datasets) for surveillance, to identify patients who were missing expected utilization, and to influence clinicians and others to change practices to improve care. The platform was composed of staff, technology, and structured collaborations, organized into a workflow. The platform was tested in five medical specialty departments. Main Outcome Measure: Proof of concept. Results: The platform enabled medical specialists to rapidly and effectively communicate clinical questions, knowledge of disease, clinical workflows, and improvement opportunities. Their knowledge was used to build and to deploy the miniregistries. Each miniregistry required 1 to 2 hours of collaboration by a medical specialist. Turnaround was 1 to 14 days. Conclusions: The Specialty Miniregistries platform is useful for low-volume questions that often occur in specialty care, and it requires low levels of investment. The efficient organization of information workers to support accountable care is an emerging question. PMID:25785640
Computation of flows in a turn-around duct and a turbine cascade using advanced turbulence models
NASA Technical Reports Server (NTRS)
Lakshminarayana, B.; Luo, J.
1993-01-01
Numerical investigation has been carried out to evaluate the capability of the Algebraic Reynolds Stress Model (ARSM) and the Nonlinear Stress Model (NLSM) to predict strongly curved turbulent flow in a turn-around duct (TAD). The ARSM includes the near-wall damping term of pressure-strain correlation phi(sub ij,w), which enables accurate prediction of individual Reynolds stress components in wall flows. The TAD mean flow quantities are reasonably well predicted by various turbulence models. The ARSM yields better predictions for both the mean flow and the turbulence quantities than the NLSM and the k-epsilon (k = turbulent kinetic energy, epsilon = dissipation rate of k) model. The NLSM also shows slight improvement over the k-epsilon model. However, all the models fail to capture the recovery of the flow from strong curvature effects. The formulation for phi(sub ij,w) appears to be incorrect near the concave surface. The hybrid k-epsilon/ARSM, Chien's k-epsilon model, and Coakley's q-omega (q = the square root of k, omega = epsilon/k) model have also been employed to compute the aerodynamics and heat transfer of a transonic turbine cascade. The surface pressure distributions and the wake profiles are predicted well by all the models. The k-epsilon model and the k-epsilon/ARSM model provide better predictions of heat transfer than the q-omega model. The k-epsilon/ARSM solutions show significant differences in the predicted skin friction coefficients, heat transfer rates and the cascade performance parameters, as compared to the k-epsilon model. The k-epsilon/ARSM model appears to capture, qualitatively, the anisotropy associated with by-pass transition.
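For reference, the symbols defined in this abstract tie together through the standard two-equation eddy-viscosity closure; this textbook relation is general background, not something derived in the paper:

```latex
\nu_t = C_\mu \frac{k^2}{\epsilon}, \qquad C_\mu \approx 0.09,
\qquad q = \sqrt{k}, \qquad \omega = \epsilon / k .
```

The q-omega variables are simply an algebraic change of variables on (k, epsilon), often preferred for numerical robustness near walls.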
Computation of flows in a turn-around duct and a turbine cascade using advanced turbulence models
NASA Astrophysics Data System (ADS)
Lakshminarayana, B.; Luo, J.
1993-11-01
Numerical investigation has been carried out to evaluate the capability of the Algebraic Reynolds Stress Model (ARSM) and the Nonlinear Stress Model (NLSM) to predict strongly curved turbulent flow in a turn-around duct (TAD). The ARSM includes the near-wall damping term of pressure-strain correlation phi(sub ij,w), which enables accurate prediction of individual Reynolds stress components in wall flows. The TAD mean flow quantities are reasonably well predicted by various turbulence models. The ARSM yields better predictions for both the mean flow and the turbulence quantities than the NLSM and the k-epsilon (k = turbulent kinetic energy, epsilon = dissipation rate of k) model. The NLSM also shows slight improvement over the k-epsilon model. However, all the models fail to capture the recovery of the flow from strong curvature effects. The formulation for phi(sub ij,w) appears to be incorrect near the concave surface. The hybrid k-epsilon/ARSM, Chien's k-epsilon model, and Coakley's q-omega (q = the square root of k, omega = epsilon/k) model have also been employed to compute the aerodynamics and heat transfer of a transonic turbine cascade. The surface pressure distributions and the wake profiles are predicted well by all the models. The k-epsilon model and the k-epsilon/ARSM model provide better predictions of heat transfer than the q-omega model. The k-epsilon/ARSM solutions show significant differences in the predicted skin friction coefficients, heat transfer rates and the cascade performance parameters, as compared to the k-epsilon model. The k-epsilon/ARSM model appears to capture, qualitatively, the anisotropy associated with by-pass transition.
Poland, Michael P.
2014-01-01
Differencing digital elevation models (DEMs) derived from TerraSAR add-on for Digital Elevation Measurements (TanDEM-X) synthetic aperture radar imagery provides a measurement of elevation change over time. On the East Rift Zone (ERZ) of Kīlauea Volcano, Hawai‘i, the effusion of lava causes changes in topography. When these elevation changes are summed over the area of an active lava flow, it is possible to quantify the volume of lava emplaced at the surface during the time spanned by the TanDEM-X data, a parameter that can be difficult to measure across the entirety of an ~100 km² lava flow field using ground-based techniques or optical remote sensing data. Based on the differences between multiple TanDEM-X-derived DEMs collected days to weeks apart, the mean dense-rock equivalent time-averaged discharge rate of lava at Kīlauea between mid-2011 and mid-2013 was approximately 2 m³/s, which is about half the long-term average rate over the course of Kīlauea's 1983–present ERZ eruption. This result implies that there was an increase in the proportion of lava stored versus erupted, a decrease in the rate of magma supply to the volcano, or some combination of both during this time period. In addition to constraining the time-averaged discharge rate of lava and the rates of magma supply and storage, topographic change maps derived from space-based TanDEM-X data provide insights into the four-dimensional evolution of Kīlauea's ERZ lava flow field. TanDEM-X data are a valuable complement to other space-, air-, and ground-based observations of eruptive activity at Kīlauea and offer great promise at locations around the world for aiding with monitoring not just volcanic eruptions but any hazardous activity that results in surface change, including landslides, floods, earthquakes, and other natural and anthropogenic processes.
Reflight of the First Microgravity Science Laboratory: Quick Turnaround of a Space Shuttle Mission
NASA Technical Reports Server (NTRS)
Simms, Yvonne
1998-01-01
Due to the short flight of Space Shuttle Columbia, STS-83, in April 1997, NASA chose to refly the same crew, shuttle, and payload on STS-94 in July 1997. This was the first reflight of an entire mission complement. The reflight of the First Microgravity Science Laboratory (MSL-1) on STS-94 required an innovative approach to Space Shuttle payload ground processing. Ground processing time for the Spacelab Module, which served as the laboratory for MSL-1 experiments, was reduced by seventy-five percent. The Spacelab Module is a pressurized facility with avionics and thermal cooling and heating accommodations. Boeing-Huntsville, formerly McDonnell Douglas Aerospace, has been the Spacelab Integration Contractor since 1977. The first Spacelab Module flight was in 1983. An experienced team determined what was required to refurbish the Spacelab Module for reflight. Team members had diverse knowledge, skills, and background. An engineering assessment of subsystems, including mechanical, electrical power distribution, command and data management, and environmental control and life support, was performed. Recommendations for resolution of STS-83 Spacelab in-flight anomalies were provided. Inspections and tests that must be done on critical Spacelab components were identified. This assessment contributed to the successful reflight of MSL-1, the fifteenth Spacelab Module mission.
Averaged shifted chi-square test
Jyh-Shyang Wu; Wen-Shuenn Deng
2012-01-01
A simple procedure based on the average of shifted chi-square statistics (ASCS) is proposed to improve the classical chi-square procedure for testing whether a random sample has been drawn from a specified continuous distribution. We repeatedly partition the sample space to obtain the respective chi-square statistics. The proposed test statistic is defined as the average value of
Averaged shifted chi-square test
Jyh-Shyang Wu; Wen-Shuenn Deng
2011-01-01
A simple procedure based on the average of shifted chi-square statistics (ASCS) is proposed to improve the classical chi-square procedure for testing whether a random sample has been drawn from a specified continuous distribution. We repeatedly partition the sample space to obtain the respective chi-square statistics. The proposed test statistic is defined as the average value of
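The truncated description can be made concrete with a toy version: compute the classical chi-square goodness-of-fit statistic for several shifted partitions of the sample space and average the results. Here the test is uniformity on [0, 1); the bin count, number of shifts, and function names are my assumptions, not the authors' choices:

```python
import random

def chi_square_stat(sample, k, shift):
    """Classical chi-square GOF statistic for Uniform(0, 1) with k equal
    bins whose edges are cyclically shifted by `shift`."""
    n = len(sample)
    expected = n / k
    counts = [0] * k
    for x in sample:
        idx = min(k - 1, int(((x - shift) % 1.0) * k))
        counts[idx] += 1
    return sum((c - expected) ** 2 / expected for c in counts)

def averaged_shifted_chi_square(sample, k=10, m=5):
    """Average of m chi-square statistics over m shifted partitions."""
    shifts = [j / (m * k) for j in range(m)]
    return sum(chi_square_stat(sample, k, s) for s in shifts) / m

random.seed(0)
sample = [random.random() for _ in range(1000)]
stat = averaged_shifted_chi_square(sample)
```

Averaging over shifts damps the classical test's sensitivity to where the bin edges happen to fall.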
Americans' Average Radiation Exposure
NA
2000-08-11
We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.
Galles, David
-Time Freshmen with a Stafford Loan and No Pell Grant Alien Four Year Enrolled Cohort: Full-Time First Native Hispanic White Unknown Total Race Female 0 45 5 196 130 276 84 736 Male 0 16 4 83 46 107 38 294 0-Resident African Native Hispanic White Unknown Total Race Female 0 30 3 154 92 190 57 526 Male 0 10 2 63 30 77 18
Galles, David
-Time Freshmen with a Stafford Loan and No Pell Grant Alien Four Year Enrolled Cohort: Full-Time First Native Hispanic White Unknown Total Race Female 0 45 5 196 111 258 68 683 Male 0 17 3 67 36 103 36 262 0-Resident African Native Hispanic White Unknown Total Race Female 0 29 3 154 82 166 45 479 Male 0 9 2 53 25 67 20 176
Galles, David
-Time Freshmen with No Stafford Loan and No Pell Grant Alien Four Year Enrolled Cohort: Full-Time First Native Hispanic White Unknown Total Race Female 99 36 9 234 83 583 139 1183 Male 100 10 5 143 55 302 72-Resident African Native Hispanic White Unknown Total Race Female 71 21 6 178 64 389 95 824 Male 66 3 2 103 35 183 44
Galles, David
-Time Freshmen with No Stafford Loan and No Pell Grant Alien Four Year Enrolled Cohort: Full-Time First Native Hispanic White Unknown Total Race Female 97 34 5 212 79 506 124 1057 Male 90 9 3 113 47 287 59 608-Resident African Native Hispanic White Unknown Total Race Female 72 19 3 154 59 335 82 724 Male 59 3 2 79 32 186 36
Convergence speed in distributed consensus and averaging
Olshevsky, Alexander
2006-01-01
We propose three new algorithms for the distributed averaging and consensus problems: two for the fixed-graph case, and one for the dynamic-topology case. The convergence times of our fixed-graph algorithms compare favorably ...
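The baseline primitive behind such distributed-averaging algorithms is pairwise gossip: two nodes repeatedly replace their values with the pair's mean, which preserves the sum while driving all values to the global average. A minimal sketch on a complete graph (uniformly random pairs are an assumption; the thesis's own algorithms are structured improvements on this):

```python
import random

def gossip_average(values, steps=5000, seed=0):
    """Pairwise gossip: repeatedly average a random pair of node values.
    The global sum is preserved at every step, so all values converge
    to the mean of the initial values."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        m = (x[i] + x[j]) / 2.0
        x[i] = x[j] = m
    return x

vals = gossip_average([0.0, 4.0, 8.0, 12.0])
# all entries are now close to the true average, 6.0
```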
Small scale magnetic flux-averaged magnetohydrodynamics
Pfirsch, D. (Max-Planck-Institut fuer Plasmaphysik, EURATOM Association, D-8046 Garching (Germany)); Sudan, R.N. (Laboratory of Plasma Studies, Cornell University, Ithaca, New York 14853 (United States))
1994-08-01
By relaxing exact magnetic flux conservation below a scale λ, a system of flux-averaged magnetohydrodynamic equations is derived from Hamilton's principle with modified constraints. An energy principle can be derived from the linearized averaged system because the total system energy is conserved. This energy principle is employed to treat the resistive tearing instability, and the exact growth rate is recovered when λ is identified with the resistive skin depth. A necessary and sufficient stability criterion for the tearing instability with line tying at the ends for solar coronal loops is also obtained. The method is extended to both spatial and temporal averaging in Hamilton's principle. The resulting system of equations not only allows flux reconnection but introduces irreversibility for an appropriate choice of the averaging function. Except for boundary contributions, which are modified by the time-averaging process, total energy and momentum are conserved over times much longer than the averaging time τ but not for less than τ. These modified boundary contributions correspond to the existence, also, of damped waves and shock waves in this theory. Time and space averaging is applied to electron magnetohydrodynamics and in one-dimensional geometry predicts solitons and shocks in different limits.
ERIC Educational Resources Information Center
Hood, Lucy
2006-01-01
This article describes the Soddy-Daisy High School in southeastern Tennessee. It used to be that vocational training and a focus on academic studies were considered completely different means of education. But in Soddy-Daisy, Tennessee, the two go hand in hand. Eric Thomas and his brother Mark, teach side by side in adjacent rooms, where computer…
Lagacé-Wiens, Philippe R S; Alfa, Michelle J; Manickam, Kanchana; Harding, Godfrey K M
2008-04-01
Methicillin-resistant Staphylococcus aureus (MRSA) is a significant pathogen in both nosocomial and community settings, and screening for carriers is an important infection control practice in many hospitals. In this retrospective study, we demonstrate that the implementation of an MRSA screening protocol using a selective chromogenic medium (MRSASelect) reduced the workload for this screening test by 63.7% overall and by 12.6% per specimen and reduced the turnaround time for reporting by an average of 1.33 days for all MRSA screening specimens, 1.97 days for MRSA-positive specimens, and 1.3 days for MRSA-negative specimens compared to standard mannitol-salt agar supplemented with 6 mg of oxacillin/liter. PMID:18234863
Online Averaging Wavelet Denoising Method
E. Khadem Olama; H. Jazayeri-Rad
2011-01-01
In this paper, non-stationary noise is addressed with an online averaging wavelet filter. The novelty of the filter lies in its averaging method, which combines the mean values of each resolution in the approximation spaces to compute the overall average frequency at a specific point. The key structure is a mirrored usage of the data to handle the zero abnormality of the filter
Average-Atom Thomson Scattering
Johnson, Walter R.
Thomson scattering in the average-atom approximation, with applications to X-ray scattering from warm dense matter. W. R. Johnson, Notre Dame; collaborators: Joe Nilsen and K. T. Cheng, LLNL. Atomic Physics Seminar, July 10, 2012.
Average-Atom Thomson Scattering
Johnson, Walter R.
Thomson scattering in the average-atom approximation, with applications to X-ray scattering from warm dense matter (WDM) and to computational challenges in WDM (PLWS-4). W. R. Johnson, Notre Dame; collaborators: Joe Nilsen and K. T. Cheng, LLNL.
Averaging Quaternions F. Landis Markley
Crassidis, John L.
Averaging Quaternions. F. Landis Markley, NASA Goddard Space Flight Center, Greenbelt, MD 20771, with Yang Cheng. Presents an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions.
AVERAGING QUATERNIONS , F. Landis Markley
Crassidis, John L.
AAS 07-213 AVERAGING QUATERNIONS. Yang Cheng, F. Landis Markley, John L. Crassidis, and Yaakov Oshman. This paper presents an algorithm to average a set of quaternion observations. The average quaternion is determined by minimizing the weighted sum of the squared Frobenius norms of the corresponding
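The minimization named in the truncated abstract has a well-known closed form: the optimal average quaternion is the eigenvector for the largest eigenvalue of M = Σᵢ wᵢ qᵢ qᵢᵀ, a construction that is insensitive to the sign ambiguity q ↔ -q. A pure-Python sketch using power iteration (the helper names and iteration count are my choices, not the paper's):

```python
def average_quaternion(quats, weights=None, iters=200):
    """Markley-style quaternion average: dominant eigenvector of
    M = sum_i w_i * q_i q_i^T.  Each q_i is a unit quaternion given as a
    4-tuple; because q and -q represent the same rotation, the outer
    product removes the sign ambiguity."""
    n = len(quats)
    w = weights or [1.0] * n
    # Build the symmetric 4x4 accumulator matrix M.
    M = [[sum(w[k] * quats[k][i] * quats[k][j] for k in range(n))
          for j in range(4)] for i in range(4)]
    # Power iteration for the dominant eigenvector (M is PSD).
    v = [1.0, 0.0, 0.0, 0.0]
    for _ in range(iters):
        v = [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]
        norm = sum(c * c for c in v) ** 0.5
        v = [c / norm for c in v]
    return v
```

Averaging q and -q this way returns the shared rotation rather than cancelling to zero, which a naive component-wise mean would do.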
Average Speed and Unit Conversion
NSDL National Science Digital Library
2009-01-01
Students will determine average speeds from data collected and convert units for speed problems. Students try to roll the ball with a prescribed average speed based on intuition. Then, based on unit conversion we will see how accurate the rolls really were.
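The two computations the activity practices reduce to one-line formulas: average speed is total distance over total time, and unit conversion is multiplication by a fixed factor. A minimal sketch (the rolled-ball numbers are made up):

```python
def average_speed(distance_m, time_s):
    """Average speed in metres per second."""
    return distance_m / time_s

def m_per_s_to_km_per_h(v):
    """1 m/s = 3.6 km/h (3600 s/h divided by 1000 m/km)."""
    return v * 3.6

v = average_speed(10.0, 4.0)   # a ball rolls 10 m in 4 s: 2.5 m/s
v_kmh = m_per_s_to_km_per_h(v)  # 9.0 km/h
```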
When good = better than average
Don A. Moore
2007-01-01
People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These
Average case complexity for finite Boolean functions
Alexander Chashkin
2001-01-01
The average time of computing Boolean functions by straight-line programs with a conditional stop is considered. A straight-line program consists of operators of two types. Every operator of the first type computes a binary Boolean function whose arguments are either the values computed by preceding operators or the values of the input variables. Every operator of the second type either
The Average of Rates and the Average Rate.
ERIC Educational Resources Information Center
Lindstrom, Peter
1988-01-01
Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
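The article's central point can be shown in a few lines: when equal distances are covered at each rate, the correct average rate is the harmonic mean, not the arithmetic mean of the rates. A small sketch of the classic speed example (numbers are illustrative):

```python
def arithmetic_mean(rates):
    return sum(rates) / len(rates)

def harmonic_mean(rates):
    """True average rate when equal distances are covered at each rate."""
    return len(rates) / sum(1.0 / r for r in rates)

# Drive 60 miles at 30 mph (2 h), then 60 miles at 60 mph (1 h):
# 120 miles in 3 h is 40 mph, the harmonic mean, not 45 mph.
naive = arithmetic_mean([30.0, 60.0])   # 45.0
actual = harmonic_mean([30.0, 60.0])    # approximately 40.0
```

The weighted harmonic mean handles the case of unequal distances, which is the generalization the article describes.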
Improved averaging for non-null interferometry
NASA Astrophysics Data System (ADS)
Fleig, Jon F.; Murphy, Paul E.
2013-09-01
Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time-varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
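The pruning strategy described above can be sketched pixelwise: reject whole maps whose invalid (void) area is too large, then at each pixel discard values far from that pixel's median before averaging. The thresholds, NaN convention, and helper names below are my assumptions, not the authors' parameters:

```python
from statistics import median

NAN = float("nan")

def robust_pixel_average(maps, max_void_fraction=0.2, k=3.0):
    """Average a stack of phase maps (equal-length lists; invalid pixels
    are NaN).  Maps with too many voids are rejected outright; at each
    pixel, values deviating from the median by more than k times a
    median-absolute-deviation spread are pruned before averaging."""
    def valid(v):
        return v == v  # NaN != NaN

    # 1. Reject maps with a large invalid area.
    kept = [m for m in maps
            if sum(not valid(v) for v in m) <= max_void_fraction * len(m)]

    # 2. Per-pixel median-based pruning, then the mean of survivors.
    out = []
    for px in zip(*kept):
        vals = [v for v in px if valid(v)]
        if not vals:
            out.append(NAN)
            continue
        med = median(vals)
        spread = median([abs(v - med) for v in vals]) or 1e-12
        good = [v for v in vals if abs(v - med) <= k * spread]
        out.append(sum(good) / len(good))
    return out
```

A real implementation would also estimate and remove alignment terms before averaging; that step is omitted here.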
Distributed averaging in dynamic networks
Rajagopalan, Shreevatsa
2010-01-01
The question of computing average of numbers present at nodes in a network in a distributed manner using gossip or message-passing algorithms has been of great recent interest across disciplines -- algorithms, control and ...
[Real-time PCR test for detection and quantification of human polyomavirus BK (BKV) DNA].
Długosz, Angelika; Rynans, Sylwia; Dzieciątkowski, Tomasz; Młynarczyk, Grażyna
2011-01-01
The human polyomavirus BK (BKV) is a widespread pathogen associated with urogenital tract disorders and even nephropathy in immunosuppressed patients. Molecular detection by real-time PCR (qPCR) is now recognized as the method of choice for detecting human polyomaviruses in clinical samples. The aim of the study was the development of a real-time PCR assay for detection and quantification of polyomavirus BK DNA in clinical samples, using specific primers targeting the viral VP3 gene and a TaqMan hydrolysis probe. The analytical sensitivity of the assay was tested using serial dilutions of BKV DNA in the range between 13500 and 15 copies/ml. 27 urine samples and 23 plasma samples taken from a group of 22 adult recipients of allogeneic HSCT were tested for the presence of polyomavirus BK in the LightCycler system. The described in-house real-time PCR assay detected BKV DNA in 8 specimens (6 urine and 2 plasma). The average detected viral load was 170 copies/ml for plasma and 1250 copies/ml for urine samples, respectively. The results of this study show that the developed TaqMan probe-based qPCR assay is very reliable and valuable for detection and quantification of BKV DNA in both urine and plasma samples. These data, combined with its rapid turnaround time for results and decreased hands-on time, make the LightCycler PCR assay highly suitable for rapid diagnostics of polyomavirus BK infections in the clinical laboratory. PMID:22184901
Averaging in spherically symmetric cosmology
Coley, A. A.; Pelavas, N. [Department of Mathematics and Statistics, Dalhousie University, Halifax, Nova Scotia (Canada)
2007-02-15
The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form of the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaître-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaître-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.
High average power pockels cell
Daly, Thomas P. (Pleasanton, CA)
1991-01-01
A high average power Pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The Pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.
Exploiting scale dependence in cosmological averaging
Mattsson, Teppo; Ronkainen, Maria, E-mail: teppo.mattsson@helsinki.fi, E-mail: maria.ronkainen@helsinki.fi [Helsinki Institute of Physics, University of Helsinki, PO Box 64, FIN-00014 (Finland); Department of Physical Sciences, University of Helsinki, PO Box 64, FIN-00014 (Finland)
2008-02-15
We study the role of scale dependence in the Buchert averaging method, using the flat Lemaître-Tolman-Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z ≈ 2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion.
Volume averaging in the quasispherical Szekeres model
Krzysztof Bolejko
2008-11-22
This paper considers volume averaging in the quasispherical Szekeres model. Volume averaging became of considerable interest after it was shown that the volume acceleration calculated within the averaging framework can be positive even though the local expansion rate is always decelerating. This issue was intensively studied within spherically symmetric models. However, since our Universe is not spherically symmetric, a similar analysis is needed in non-symmetric models. This paper presents the averaging analysis within the quasispherical Szekeres model, which is a non-symmetric generalisation of the spherically symmetric Lemaître-Tolman family of models. The density distribution in the quasispherical Szekeres model has the structure of a time-dependent mass dipole superposed on a monopole. This paper shows that when calculating the volume acceleration, $\ddot{a}$, within the Szekeres model, the dipole does not contribute to the final result; hence $\ddot{a}$ depends only on the monopole configuration. Thus, volume averaging within the Szekeres model leads to literally the same solutions as obtained within the Lemaître-Tolman model.
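For context, the volume acceleration discussed here is the one appearing in Buchert's averaged Raychaudhuri equation, where the kinematical backreaction Q_D can be positive and drive average acceleration even with locally decelerating expansion; in the standard notation of the averaging literature (quoted as background, not derived in this record):

```latex
3\,\frac{\ddot{a}_{\mathcal{D}}}{a_{\mathcal{D}}}
  = -4\pi G \,\langle \rho \rangle_{\mathcal{D}} + \mathcal{Q}_{\mathcal{D}},
\qquad
\mathcal{Q}_{\mathcal{D}}
  = \tfrac{2}{3}\left(\langle \theta^{2} \rangle_{\mathcal{D}}
      - \langle \theta \rangle_{\mathcal{D}}^{2}\right)
    - 2\,\langle \sigma^{2} \rangle_{\mathcal{D}} .
```

The paper's dipole-independence result then says that for quasispherical Szekeres models these averaged quantities reduce to their Lemaître-Tolman (monopole) values.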
Vocal attractiveness increases by averaging.
Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal
2010-01-26
Vocal attractiveness has a profound influence on listeners, a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1], with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increase by averaging, analogous to the well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. PMID:20129047
Evaluations of average level spacings
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example using ¹⁶⁸Er data. 19 figures, 2 tables.
Models of space averaged energetics of plates
NASA Technical Reports Server (NTRS)
Bouthier, O. M.; Bernhard, R. J.
1990-01-01
The analysis of high frequency vibrations in plates is of particular interest in the study of structure-borne noise in aircraft. The current methods of analysis are either too expensive (finite element method) or have a confidence band wider than desirable (Statistical Energy Analysis). An alternative technique to model the space- and time-averaged response of structural acoustics problems with enough detail to include all significant mechanisms of energy generation, transmission, and absorption is highly desirable. The focus of this paper is the development of a set of equations which govern the space- and time-averaged energy density in plates. To solve this equation, a new type of boundary value problem must be treated in terms of energy density variables using energy and intensity boundary conditions. A computer simulation verification study of the energy governing equation is performed. A finite element formulation of the new equations is also implemented, and several test cases are analyzed and compared to analytical solutions.
Averaging analysis of adaptive control algorithms
NASA Technical Reports Server (NTRS)
Phillips, Stephen M.; Kosut, Robert L.; Franklin, Gene F.
1988-01-01
The method of averaging is used to analyze discrete-time indirect adaptive control. The analysis focuses on various prediction-error-driven identification algorithms coupled with a general linear control law. The plant is not required to be in the model set of the identifier, which accounts for systems with unmodeled plant dynamics. Exogenous input signals including known command signals and unknown disturbances are also included. Both gradient and Newton-based algorithms are considered.
Average of Synthetic Exact Filters
David S. Bolme; Bruce A. Draper; J. Ross Beveridge
2009-01-01
This paper introduces a class of correlation filters called average of synthetic exact filters (ASEF). For ASEF, the correlation output is completely specified for each training image. This is in marked contrast to prior methods such as synthetic discriminant functions (SDFs) which only specify a single output value per training image. Advantages of ASEF training include: insensitivity to over-fitting, greater
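The construction described in this abstract can be sketched numerically: for each training image an "exact" filter is solved for in the Fourier domain so that correlation with that image reproduces a chosen output, and the ASEF filter is the pointwise average of those per-image filters. The toy setup below (single-bright-pixel training images, wrapped-Gaussian desired outputs, regularizer `eps`) is an illustrative assumption, not the authors' actual detection pipeline.

```python
import numpy as np

def asef_train(images, targets, sigma=1.0, eps=1e-3):
    """Average of per-image exact correlation filters (frequency domain)."""
    n = images[0].shape[0]
    yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    # wrapped Gaussian peak centered at the origin; rolled to each target below
    d2 = np.minimum(yy, n - yy) ** 2 + np.minimum(xx, n - xx) ** 2
    g0 = np.exp(-d2 / (2 * sigma ** 2))
    H = np.zeros((n, n), dtype=complex)
    for img, (r, c) in zip(images, targets):
        F = np.fft.fft2(img)
        G = np.fft.fft2(np.roll(g0, (r, c), axis=(0, 1)))  # desired output: peak at (r, c)
        H += G * np.conj(F) / (F * np.conj(F) + eps)       # exact filter for this image
    return H / len(images)                                  # ASEF: average the exact filters

def asef_apply(H, img):
    """Correlation response of the averaged filter on a new image."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# toy data: each training image is a single bright pixel at the target location
n = 16
targets = [(3, 4), (9, 12), (6, 2)]
images = []
for r, c in targets:
    im = np.zeros((n, n))
    im[r, c] = 1.0
    images.append(im)
H = asef_train(images, targets)

test_img = np.zeros((n, n))
test_img[9, 12] = 1.0
resp = asef_apply(H, test_img)
peak = np.unravel_index(np.argmax(resp), resp.shape)
```

Averaging suppresses the per-image idiosyncrasies of the individual exact filters, which is the source of the claimed insensitivity to over-fitting.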
Averaging inhomogeneous cosmologies - a dialogue
NASA Astrophysics Data System (ADS)
Buchert, T.
The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.
Bayesian Model Averaging: A Tutorial
Jennifer A. Hoeting; David Madigan; Adrian E. Raftery; Chris T. Volinsky
Standard statistical practice ignores model uncertainty. Data analysts typically select a model from some class of models and then proceed as if the selected model had generated the data. This approach ignores the uncertainty in model selection, leading to over-confident inferences and decisions that are more risky than one thinks they are. Bayesian model averaging (BMA) provides a coherent
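One common, minimal route to BMA weights (an approximation via BIC with equal model priors, not necessarily the tutorial's full treatment) can be sketched as follows; the BIC values and point predictions are hypothetical.

```python
import math

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values,
    using the standard exp(-BIC/2) approximation with equal model priors."""
    m = min(bics)                                  # subtract min for numerical stability
    raw = [math.exp(-(b - m) / 2.0) for b in bics]
    s = sum(raw)
    return [r / s for r in raw]

# hypothetical BIC scores for three candidate regression models
bics = [102.3, 100.1, 107.8]
w = bma_weights(bics)

# model-averaged prediction from each model's (hypothetical) point prediction
preds = [4.2, 4.5, 3.9]
bma_pred = sum(wi * p for wi, p in zip(w, preds))
```

The averaged prediction lies between the individual model predictions, and the weights quantify how much each model's uncertainty contributes.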
Averaging of globally coupled oscillators
NASA Astrophysics Data System (ADS)
Swift, James W.; Strogatz, Steven H.; Wiesenfeld, Kurt
1992-03-01
We study a specific system of symmetrically coupled oscillators using the method of averaging. The equations describe a series array of Josephson junctions. We concentrate on the dynamics near the splay-phase state (also known as the antiphase state, ponies on a merry-go-round, or rotating wave). We calculate the Floquet exponents of the splay-phase periodic orbit in the weak-coupling limit, and find that all of the Floquet exponents are purely imaginary; in fact, all the Floquet exponents are zero except for a single complex conjugate pair. Thus, nested two-tori of doubly periodic solutions surround the splay-phase state in the linearized averaged equations. We numerically integrate the original system, and find startling agreement with the averaging results on two counts: The observed ratio of frequencies is very close to the prediction, and the solutions of the full equations appear to be either periodic or doubly periodic, as they are in the averaged equations. Such behavior is quite surprising from the point of view of generic dynamical systems theory: one expects higher-dimensional tori and chaotic solutions. We show that the functional form of the equations, and not just their symmetry, is responsible for this nongeneric behavior.
The modulated average structure of mullite.
Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X
2015-06-01
Homogeneous and inclusion-free single crystals of 2:1 mullite (Al4.8Si1.2O9.6) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met, as confirmed by microprobe analysis (Al4.82(3)Si1.18(1)O9.59(5)) and by average structure refinements. 8(5) to 20(13)% of the available Si was found in the T* position of the tetrahedral triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures, which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in the case of 3:2 mullite. In any of these, the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulation were determined: q1 = [0.3137(2), 0, ½], q2 = [0, 0.4021(5), 0.1834(2)] and q3 = [0, 0.4009(5), -0.1834(2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and triclusters of the tetrahedral units in mullite. The modulation amplitudes are small, and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when the local assemblies at the scale of a few tens of average mullite cells are projected into cells determined by any one of the modulation vectors q1, q2 or q3, a weak average modulation results, with slightly varying average occupation factors for the tetrahedral units.
As a result, the real structure of mullite is locally ordered (as previously known), but on the long range its average is not completely disordered; the modulated structure of mullite may be denoted the true `average structure of mullite'. PMID:26027012
NASA Astrophysics Data System (ADS)
Hoover, Wm. G.; Hoover, Carol G.; Petravic, Janka
2008-10-01
Homogeneous shear flows (with constant strain rate dv_x/dy) are generated with the Doll's and Sllod algorithms and compared to corresponding inhomogeneous boundary-driven flows. We use one-, two-, and three-dimensional smooth-particle weight functions for computing instantaneous spatial averages. The nonlinear normal-stress differences are small, but significant, in both two and three space dimensions. In homogeneous systems the sign and magnitude of the shear-plane stress difference, P_xx - P_yy, depend on both the thermostat type and the chosen shearflow algorithm. The Doll's and Sllod algorithms predict opposite signs for this normal-stress difference, with the Sllod approach definitely wrong, but somewhat closer to the (boundary-driven) truth. Neither of the homogeneous shear algorithms predicts the correct ordering of the kinetic temperatures: T_xx > T_zz > T_yy.
Thomas M. Smith; C. F. Ropelewski; R. W. Reynolds
1994-01-01
Optimal averaging (OA) is used to compute the area-average seasonal sea surface temperature (SST) for a variety of areas from 1860 to 1989. The OA gives statistically improved averages and the objective assignment of confidence intervals to these averages. For the global average, the early part of the record (1860-1890) and the times of the two world wars have largest
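The simplest ingredient of such an area average, cos-latitude weighting on a regular grid, can be sketched as below; the full optimal-averaging estimator additionally uses error covariances to produce confidence intervals, which this sketch omits.

```python
import numpy as np

def area_weighted_mean(field, lats_deg):
    """Area-weighted mean of a (lat, lon) field on a regular grid.
    Grid-cell area on a sphere scales with cos(latitude)."""
    w = np.cos(np.deg2rad(lats_deg))              # one weight per latitude row
    w2d = np.broadcast_to(w[:, None], field.shape)
    return float(np.sum(field * w2d) / np.sum(w2d))

lats = np.arange(-87.5, 90, 5.0)                  # 36 latitude band centers

# uniform 15 degC field: weighted mean must equal the field value
sst_uniform = np.full((lats.size, 72), 15.0)
m_uniform = area_weighted_mean(sst_uniform, lats)

# equator-peaked field: weighting favors the (larger-area) low latitudes
sst_eq_peak = 30.0 * np.cos(np.deg2rad(lats))[:, None] * np.ones((lats.size, 72))
m_peak = area_weighted_mean(sst_eq_peak, lats)
```

An unweighted grid mean over-counts the small polar cells; the weighted mean corrects this, which matters most for global SST records like the one described here.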
NASA Astrophysics Data System (ADS)
Mattmann, C. A.
2014-12-01
The JPL Airborne Snow Observatory (ASO) is an integrated LIDAR and spectrometer measuring snow depth and rate of snow melt in the Sierra Nevada, specifically the Tuolumne River Basin, Sierra Nevada, California above the O'Shaughnessy Dam of the Hetch Hetchy reservoir, and the Uncompahgre Basin, Colorado, amongst other sites. The ASO data were delivered to water resource managers from the California Department of Water Resources in under 24 hours from the time the Twin Otter aircraft landed in Mammoth Lakes, CA to the time disks were plugged in to the ASO Mobile Compute System (MCS) deployed at the Sierra Nevada Aquatic Research Laboratory (SNARL) near the airport. ASO performed weekly flights, and each flight produced between 500 GB and 1 TB of raw data, which was then processed from level 0 data products all the way to full level 4 maps of snow water equivalent, albedo mosaics, and snow depth from LIDAR. These data were produced by Interactive Data Language (IDL) algorithms which were then unobtrusively and automatically integrated into an Apache OODT and Apache Tika based Big Data processing system. Data movement was both electronic and physical, including novel uses of LaCie 1 and 2 terabyte (TB) data bricks and deployment in rugged terrain. The MCS was controlled remotely from the Jet Propulsion Laboratory, California Institute of Technology (JPL) in Pasadena, California on behalf of the National Aeronautics and Space Administration (NASA). Communication was aided through the use of novel Internet Relay Chat (IRC) command and control mechanisms and through the use of the Notifico open source communication tools. This talk will describe the high-powered, lightweight Big Data processing system that we developed for ASO and its implications more broadly for airborne missions at NASA and throughout the government.
The lessons learned from ASO show the potential to have a large impact in the development of Big Data processing systems in the years to come.
Using the moving synchronous average to analyze fuzzy cyclostationary signals
NASA Astrophysics Data System (ADS)
Leclère, Quentin; Hamzaoui, Nacer
2014-02-01
Cyclostationarity is a property of vibration and acoustic signals recorded on rotating machines operating at constant speed. It states that the statistical properties of the signals are periodic: the random process defined by the signal observed at a given position in the cycle is stationary, the cycle being defined as the angle interval between two identical configurations of the mechanical system. Cyclostationarity is not fully satisfied if the signals are acquired in the time domain on rotating machines with a fluctuating rotation speed. Indeed, if the instantaneous rotation speed is not purely periodic, time samples taken at a constant time interval (equal to the average cycle duration) do not correspond exactly to an angle in the cycle. In this particular case, a synchronous averaging of cycle realizations can still be processed to estimate a periodic part, using a predefined trigger angle to align cycle realizations before the averaging process. In these conditions, the synchronous average depends on the chosen synchronization angle: each point of the synchronous average is an estimate of the expected value of the signal at a given time preceding or following the synchronization angle. The synchronous average can be computed as a function of the synchronization angle, varying over an entire cycle. The result is a moving synchronous average that can be post-processed for diagnosis purposes. For example, a time-frequency representation of the moving synchronous average can be computed, and the synchronization angle maximizing each point of the time-frequency map can be easily extracted. Under certain conditions of instantaneous speed fluctuation, this analysis allows the precise localization of different mechanical events in the cycle, as well as their contributions to the analyzed vibration or acoustic signal.
The analysis of the moving synchronous average also leads to an estimate of the energy loss, caused by cycle-to-cycle speed fluctuations, of the synchronous average processed in the angle domain.
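The basic synchronous-average step the paper builds on can be sketched as follows, assuming for simplicity that every cycle has the same sample length (the paper's subject is precisely the case where this fails and the average must be recomputed per synchronization angle):

```python
import numpy as np

def synchronous_average(x, cycle_len, trigger=0):
    """Estimate the periodic part of a (roughly) cyclostationary signal
    by averaging aligned cycle realizations, starting at sample `trigger`."""
    x = np.asarray(x)[trigger:]
    n_cycles = x.size // cycle_len
    return x[: n_cycles * cycle_len].reshape(n_cycles, cycle_len).mean(axis=0)

# synthetic machine signal: a deterministic cycle pattern buried in noise
rng = np.random.default_rng(0)
cycle_len, n_cycles = 100, 400
angle = 2 * np.pi * np.arange(cycle_len) / cycle_len
periodic = np.sin(angle) + 0.5 * np.sin(3 * angle)       # per-cycle content
x = np.tile(periodic, n_cycles) + rng.normal(0, 1.0, cycle_len * n_cycles)

est = synchronous_average(x, cycle_len)
```

Averaging over N cycles reduces the noise standard deviation by a factor of roughly sqrt(N), recovering the periodic part; varying `trigger` over a full cycle yields the moving synchronous average discussed in the abstract.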
Tongue motion averaging from contour sequences
Li, Min; Kambhamettu, Chandra
University of Delaware
Tongue motion is modeled from sequences of ultrasound images as a set of 2D spatio-temporal contours. An average of the speech motion is then obtained by averaging the time-aligned contours from different repetitions.
The averaged control system of fast oscillating control systems
Bombrun, Alex; Pomet, Jean-Baptiste
Université Paris-Sud XI
For control systems that either have an explicit periodic dependence on time or have periodic solutions and small controls, we define an average control system that takes into account all
Disk-averaged synthetic spectra of Mars
NASA Technical Reports Server (NTRS)
Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.
Meskhidze, Nicholas
[Supplementary-figure residue only; abstract not recoverable. Recoverable captions: observed vs. DSMACC-modeled HO2 concentration (ppt) over time (TRAMP); Figure S5, cumulative probability of HCHO concentration (ppb), TexAQS 2000 and 2006.]
Sea Surface Temperature Monthly Average and Anomaly Charts, Northeastern Pacific Ocean, 1947
Part I: Sea surface temperature monthly average charts, northeastern Pacific Ocean. Part II: Sea
Monthly Average Temperature for Boston, MA
NSDL National Science Digital Library
The phenomenon is monthly average temperature data for Boston, MA from March 1872 until September 2000. In addition to monthly averages, the National Weather Service table also shows the yearly average temperature.
Measuring complexity through average symmetry
NASA Astrophysics Data System (ADS)
Alamino, Roberto C.
2015-07-01
This work introduces a complexity measure which addresses some conflicting issues between existing ones by using a new principle: measuring the average amount of symmetry broken by an object. It attributes low (although different) complexity to either deterministic or random homogeneous densities, and higher complexity to the intermediate cases. This new measure is easily computable, breaks the coarse-graining paradigm and can be straightforwardly generalized, including to continuous cases and general networks. By applying this measure to a series of objects, it is shown that it can be consistently used for both small-scale structures with exact symmetry breaking and large-scale patterns, for which, differently from similar measures, it consistently discriminates between repetitive patterns, random configurations and self-similar structures.
Spectral and Parametric Averaging for Integrable Systems
Tao Ma; R. A. Serota
2013-06-03
We analyze two theoretical approaches to ensemble averaging for integrable systems in quantum chaos - spectral averaging and parametric averaging. For spectral averaging, we introduce a new procedure - rescaled spectral averaging. Unlike traditional spectral averaging, it can describe the correlation function of spectral staircase and produce persistent oscillations of the interval level number variance. Parametric averaging, while not as accurate as rescaled spectral averaging for the correlation function of spectral staircase and interval level number variance, can also produce persistent oscillations of the global level number variance and better describes saturation level rigidity as a function of the running energy. Overall, it is the most reliable method for a wide range of statistics.
Average Lorentz Self-Force From Electric Field Lines
Aashish, Sandeep
2014-01-01
We generalize the derivation of the electromagnetic fields of a charged particle moving with constant acceleration [1] to variable (piecewise constant) acceleration over a small finite time interval, using Coulomb's law, relativistic transformations of electromagnetic fields and Thomson's construction [2]. We derive the average Lorentz self-force for a charged particle in arbitrary non-relativistic motion by averaging the fields at the retarded time.
Global atmospheric circulation statistics: Four year averages
NASA Technical Reports Server (NTRS)
Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.
1987-01-01
Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.
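The geostrophic-wind construction mentioned above follows the standard relations u_g = -(g/f) ∂Z/∂y and v_g = (g/f) ∂Z/∂x applied to geopotential height; a minimal finite-difference sketch (uniform grid spacing and a constant Coriolis parameter, both simplifying assumptions):

```python
import numpy as np

G = 9.81      # gravitational acceleration, m/s^2
F_COR = 1e-4  # Coriolis parameter, 1/s (typical midlatitude value, held constant here)

def geostrophic_wind(Z, dy, dx):
    """Geostrophic wind (u_g, v_g) from geopotential height Z(y, x) in meters."""
    dZdy, dZdx = np.gradient(Z, dy, dx)   # finite-difference derivatives
    u = -(G / F_COR) * dZdy
    v = (G / F_COR) * dZdx
    return u, v

# test field: 500 hPa height decreasing linearly northward -> uniform westerly flow
ny, nx = 10, 12
dy = dx = 1.0e5                            # 100 km grid spacing
yy = np.arange(ny)[:, None] * dy
Z = 5500.0 - 1.0e-4 * yy + np.zeros((ny, nx))

u, v = geostrophic_wind(Z, dy, dx)
```

For the linear field above the finite differences are exact, giving u_g = (9.81/1e-4) * 1e-4 = 9.81 m/s everywhere and zero meridional wind.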
Average deployments versus missile and defender parameters
Canavan, G.H.
1991-03-01
This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with two-fold less effectiveness. The next 10% could double constellation sizes. 5 refs., 7 figs.
Estimating Average Temperatures on Appalachian Summits
Robert J. Leffler
1981-01-01
The relationship between average temperatures, latitudes and summit elevations throughout the Appalachian Mountains is examined. Regression equations relating summit elevations to monthly and annual average daily temperature ranges are derived. Linear equations for computing 30-year average monthly and average daily maximum and minimum temperatures for each month of the year on summits with no data are developed. Analysis of the
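Regression equations of the kind described, temperature as a linear function of summit elevation and latitude, can be fit by ordinary least squares; the coefficients in this synthetic example (a 6.5 °C/km lapse rate and 1 °C per degree of latitude) are assumed for illustration, not taken from the study.

```python
import numpy as np

# synthetic "summits": elevation (m) and latitude (deg N)
elev = np.array([500.0, 1200.0, 1800.0, 900.0, 1500.0, 2000.0])
lat = np.array([34.0, 36.0, 38.0, 40.0, 42.0, 44.0])

# assumed true relation: T = 30 - 0.0065 * elev - 1.0 * (lat - 34)
temp = 30.0 - 0.0065 * elev - 1.0 * (lat - 34.0)

# design matrix [1, elev, lat] and ordinary least-squares fit
X = np.column_stack([np.ones_like(elev), elev, lat])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
# coef = [intercept, elevation coefficient, latitude coefficient]
```

With noise-free synthetic data the fit recovers the assumed coefficients exactly (up to floating-point error), which is a useful sanity check before applying the same model to observed summit temperatures.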
Districts Created to Steer "Turnarounds"
ERIC Educational Resources Information Center
Samuels, Christina A.
2011-01-01
If the job of a traditional superintendent is hard, imagine the complexities involved in building a school system from scratch--especially one composed of schools with some of the most intractable educational challenges. That's the task facing education leaders in Michigan and Tennessee, which are building special districts to take over…
Turnaround team revitalizes mining operations
Buchsbaum, L.
2005-11-01
Starting from scratch, the Broe Companies' Century Coal trains the next generation. The article describes how the newly created Century Coal LLC, controlled by the Denver-based Broe Companies investment firm, is rebuilding and expanding its highwall mining operations, increasing production from a current 1 million tons to 5 to 6 million tons in 2006 and 2007. The company has a $100,000 outreach program, with 95% of these funds going to local communities. Present coal mining operations are spread around Bell, Clay, Harlan, Knox and Leslie Counties. A priority is the renovation of the WenLar preparation plant in Bell County. 5 photos.
Turnaround Project Signs Six States
ERIC Educational Resources Information Center
Maxwell, Lesli A.
2010-01-01
Dozens of schools are slated for aggressive interventions over the next three years under a new, multistate effort that aims to clear hurdles that have hindered previous attempts to improve underperforming schools. Education officials in Colorado, Delaware, Illinois, Louisiana, Massachusetts, and New York have agreed to partner with Mass Insight…
NSDL National Science Digital Library
In this activity learners explore the connections of digital time displays with numeric and geometric properties. Students look for times that have bilateral or rotational symmetry, or have a certain digital sum, etc. Ideas for implementation, extension and support are included.
Average observational quantities in the timescape cosmology
Wiltshire, David L. [Department of Physics and Astronomy, University of Canterbury, Private Bag 4800, Christchurch 8140, New Zealand, and International Center for Relativistic Astrophysics Network (ICRANet), Piazzale della Repubblica 10, Pescara 65121, Italy]
2009-12-15
We examine the properties of a recently proposed observationally viable alternative to homogeneous cosmology with smooth dark energy, the timescape cosmology. In the timescape model cosmic acceleration is realized as an apparent effect related to the calibration of clocks and rods of observers in bound systems relative to volume-average observers in an inhomogeneous geometry in ordinary general relativity. The model is based on an exact solution to a Buchert average of the Einstein equations with backreaction. The present paper examines a number of observational tests which will enable the timescape model to be distinguished from homogeneous cosmologies with a cosmological constant or other smooth dark energy, in current and future generations of dark energy experiments. Predictions are presented for comoving distance measures; H(z); the equivalent of the dark energy equation of state, w(z); the Om(z) measure of Sahni, Shafieloo, and Starobinsky; the Alcock-Paczynski test; the baryon acoustic oscillation measure, D_V; the inhomogeneity test of Clarkson, Bassett, and Lu; and the time drift of cosmological redshifts. Where possible, the predictions are compared to recent independent studies of similar measures in homogeneous cosmologies with dark energy. Three separate tests with indications of results in possible tension with the ΛCDM model are found to be consistent with the expectations of the timescape cosmology.
ERIC Educational Resources Information Center
Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri
2006-01-01
Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…
A World Average of Fluorescence Yield Measurements
NASA Astrophysics Data System (ADS)
Cady, R.
2011-09-01
Twelve measurements of air fluorescence yield made by six different groups are put into a common format and then averaged. The methods used to convert them are described. The results obtained by Rosado et al. are also averaged.
RHIC BPM system average orbit calculations
Michnoff,R.; Cerniglia, P.; Degen, C.; Hulsart, R.; et al.
2009-05-04
RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this has provided an average orbit signal quality improvement, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and performance with beam.
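The numerical point behind the programmable turn count, that averaging over an integer number of ~10 Hz oscillation periods cancels the perturbation while a partial period leaves a bias, is easy to check; the revolution frequency and amplitudes below are illustrative, not RHIC's actual parameters.

```python
import numpy as np

f_rev = 78000.0   # assumed revolution frequency, turns/s (illustrative)
f_osc = 10.0      # orbit perturbation frequency, Hz
turns_per_period = int(round(f_rev / f_osc))   # turns in one 10 Hz period

def average_orbit(positions, n_turns):
    """Average BPM position over the first n_turns turns."""
    return float(np.mean(positions[:n_turns]))

# simulated turn-by-turn BPM readings: closed orbit plus a 10 Hz wobble
t = np.arange(200000) / f_rev
closed_orbit = 0.25                                        # mm, true closed orbit
pos = closed_orbit + 0.5 * np.sin(2 * np.pi * f_osc * t)

short_avg = average_orbit(pos, 1000)               # fraction of a period: biased
period_avg = average_orbit(pos, turns_per_period)  # integer period: wobble cancels
```

Averaging over many such periods, as in the continuous average described here, further suppresses any residual from frequency drift or noise.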
Averaging in LRS class II spacetimes
NASA Astrophysics Data System (ADS)
Kašpar, Petr; Svítek, Otakar
2015-02-01
We generalize Buchert's averaged equations (Gen Relativ Gravit 32, 105, 2000; Gen Relativ Gravit 33, 1381, 2001) to the LRS class II dust model, in the sense that all Einstein equations are averaged, not only the trace part. We derive the relevant averaged equations and investigate the backreaction on the expansion and shear scalars in an approximate LTB model. Finally, we propose a way to close the system of averaged equations.
Optimal perturbation models for averaged orbit generation
A. C. Long; W. D. McClain
1976-01-01
Averaging techniques applied to the variation of parameters (VOP) formulation of the equations of motion are being investigated as methods for long-term prediction of artificial satellite orbits. Analytically averaged equations were compared with numerically averaged equations with respect to accuracy and efficiency for computation of zonal and nonresonant third-body perturbations. Numerically averaged equations were also evaluated for computation of long-period
Averaging and Adding in Children's Worth Judgements
ERIC Educational Resources Information Center
Schlottmann, Anne; Harman, Rachel M.; Paine, Julie
2012-01-01
Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…
Effect of wind averaging time on wind erosivity estimation
Technology Transfer Automated Retrieval System (TEKTRAN)
The Wind Erosion Prediction System (WEPS) and Revised Wind Erosion Equation (RWEQ) are widely used for estimating the wind-induced soil erosion at a field scale. Wind is the principal erosion driver in the two models. The wind erosivity, which describes the capacity of wind to cause soil erosion is ...
Time-averaged subspace methods for radar clutter texture retrieval
Fulvio Gini; Georgios B. Giannakis; Maria Greco; G. Tong Zhou
2001-01-01
Subspace approaches have become popular in the last two decades for retrieving constant amplitude harmonics observed in white additive noise because they may exhibit superior resolution over the FFT-based methods, especially with short data records and closely spaced harmonics. We demonstrate that MUSIC and ESPRIT methods can also be applied when the harmonics are corrupted by white or wideband multiplicative
Predictability of time averages: The influence of the boundary forcing
NASA Technical Reports Server (NTRS)
Shukla, J.
1982-01-01
The physical mechanisms through which changes in the boundary forcings of SST, soil moisture, albedo, sea ice, and snow influence the atmospheric circulation are discussed. Results of numerical experiments conducted with the GLAS climate model to determine the sensitivity of the model atmosphere to changes in boundary conditions of SST, soil moisture, and albedo over limited regions are discussed. It is found that changes in SST and soil moisture in the tropics produce large changes in the atmospheric circulation and rainfall over the tropics as well as over mid-latitudes.
Dennis L. Molfese; A. F. Key; S. Kelly; N. Cunningham; S. Terrell; M. Ferguson; V. J. Molfese; T. Bonebright
2006-01-01
Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP amplitudes and peak latencies decreased as reading skills increased. Furthermore, hemisphere differences
Probing turbulence intermittency via autoregressive moving-average models
NASA Astrophysics Data System (ADS)
Faranda, Davide; Dubrulle, Bérengère; Daviaud, François; Pons, Flavio Maria Emanuele
2014-12-01
We suggest an approach to probing intermittency corrections to the Kolmogorov law in turbulent flows based on the autoregressive moving-average modeling of turbulent time series. We introduce an index Υ that measures the distance from a Kolmogorov-Obukhov model in the autoregressive moving-average model space. Applying our analysis to particle image velocimetry and laser Doppler velocimetry measurements in a von Kármán swirling flow, we show that Υ is proportional to traditional intermittency corrections computed from structure functions. Therefore, it provides the same information, using much shorter time series. We conclude that Υ is a suitable index to reconstruct intermittency in experimental turbulent fields. PMID:25615037
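The diagnostic above rests on fitting autoregressive models to measured time series. As a minimal sketch (a plain least-squares AR(p) fit on a synthetic series, not the authors' full ARMA distance index Υ; the model order and coefficients are illustrative assumptions):

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of AR(p): x[t] = a[0]*x[t-1] + ... + a[p-1]*x[t-p] + noise."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

# Synthetic AR(2) series standing in for a measured turbulent signal
rng = np.random.default_rng(0)
true_a = np.array([0.6, 0.2])
x = np.zeros(5000)
for t in range(2, len(x)):
    x[t] = true_a[0] * x[t - 1] + true_a[1] * x[t - 2] + rng.standard_normal()

a_hat = fit_ar(x, 2)   # recovers coefficients close to true_a
```

The distance between fitted coefficients and those of a reference model would then play the role of the intermittency index.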
Average Annual Pay by State and Industry, 1997
NSDL National Science Digital Library
Average annual pay levels by industry varied widely across the nation according to this BLS release for 1997, with high average incomes of $49,995 recorded in mining areas and a low average pay of $15,877 recorded in retail trade, an industry largely populated by part-time workers. Regionally, the District of Columbia held the greatest annual pay level at $46,775 in 1997, while South and North Dakota, Montana, Mississippi, and Arkansas retained low annual salaries of $21,645, $22,047, $21,947, $22,772, and $23,268 respectively. Additional comparisons of annual incomes by state and industry for 1996 and 1997 are available at this Average Annual Pay by State and Industry page with four detailed statistical tables.
Spatially averaged turbulent stress and its partitioning
Dubravka Pokrajac; Ian McEwan; Vladimir Nikora
2008-01-01
Double averaging of the fundamental flow equations is an attractive methodology for investigating spatially heterogeneous flows. The resulting double-averaged equations can be used as a framework for development of turbulence models. In order to fully explore the potential of such models, the stress terms that appear in the momentum equation as a result of each averaging step need to be
Cell averaging Chebyshev methods for hyperbolic problems
NASA Technical Reports Server (NTRS)
Wei, Cai; Gottlieb, David; Harten, Ami
1990-01-01
A cell averaging method for the Chebyshev approximations of first order hyperbolic equations in conservation form is described. Formulas are presented for transforming between pointwise data at the collocation points and cell averaged quantities, and vice-versa. This step, trivial for the finite difference and Fourier methods, is nontrivial for the global polynomials used in spectral methods. The cell averaging methods presented are proven stable for linear scalar hyperbolic equations, and numerical simulations of shock-density wave interaction using the new cell averaging Chebyshev methods are presented.
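The pointwise-to-cell-average transformation can be illustrated with NumPy's Chebyshev utilities. This is a generic sketch, not the paper's scheme: interpolate data at Chebyshev-Gauss-Lobatto points, then average the interpolant over each cell using its antiderivative (test function and degree are arbitrary choices):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 8
# Chebyshev-Gauss-Lobatto collocation points on [-1, 1], in increasing order
x = np.cos(np.pi * np.arange(N + 1) / N)[::-1]
f = np.sin(np.pi * x)          # pointwise data at the collocation points

c = C.chebfit(x, f, N)         # interpolating polynomial in the Chebyshev basis
F = C.chebint(c)               # its antiderivative

# Cell averages over the intervals between adjacent collocation points
avg = np.diff(C.chebval(x, F)) / np.diff(x)
```

For smooth data the averages agree with the exact cell averages to interpolation accuracy; the reverse map (cell averages back to point values) is where the spectral-method subtlety the abstract mentions arises.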
Spectral and parametric averaging for integrable systems
NASA Astrophysics Data System (ADS)
Ma, Tao; Serota, R. A.
2015-05-01
We analyze two theoretical approaches to ensemble averaging for integrable systems in quantum chaos, spectral averaging (SA) and parametric averaging (PA). For SA, we introduce a new procedure, namely, rescaled spectral averaging (RSA). Unlike traditional SA, it can describe the correlation function of spectral staircase (CFSS) and produce persistent oscillations of the interval level number variance (IV). PA, while not as accurate as RSA for the CFSS and IV, can also produce persistent oscillations of the global level number variance (GV) and better describes saturation level rigidity as a function of the running energy. Overall, it is the most reliable method for a wide range of statistics.
APPENDIX A: MONTHLY AVERAGED DATA
Oregon, University of
In many instances monthly averaged data are presented in two tables. The first table gives long-term hourly average data: the hourly data are averaged for each month and then averaged over each year that has hourly data for that month. The second table has daily data averaged for each month for which data were gathered. This table also contains the monthly
INVERSIONS FOR AVERAGE SUPERGRANULAR FLOWS USING FINITE-FREQUENCY KERNELS
Svanda, Michal, E-mail: michal@astronomie.cz [Astronomical Institute, Academy of Sciences of the Czech Republic (v.v.i.), Fricova 298, CZ-25165 Ondrejov (Czech Republic)
2012-11-10
I analyze the maps recording the travel-time shifts caused by averaged plasma anomalies under an 'average supergranule', constructed by means of statistical averaging over 5582 individual supergranules with large divergence signals detected in two months of Helioseismic and Magnetic Imager Dopplergrams. By utilizing a three-dimensional validated time-distance inversion code, I measure a peak vertical velocity of 117 ± 2 m/s at depths around 1.2 Mm in the center of the supergranule and a root-mean-square vertical velocity of 21 m/s over the area of the supergranule. A discrepancy between this measurement and the measured surface vertical velocity (a few m/s) can be explained by the existence of the large-amplitude vertical flow under the surface of supergranules with large divergence signals, recently suggested by Duvall and Hanasoge.
Alignment blur in coherently averaged images
D. M. Monro; D. M. Simpson
1996-01-01
Blurring of coherently averaged images due to imperfect alignment is studied, and two restoration methods are proposed and evaluated. It is shown that iterative realignment is more powerful than post-filtering in reducing blur. The value of averaging and restoration is illustrated on human subjects in noisy video sequences
Average Bit-Complexity of Euclidean Algorithms
Ali Akhavi; Brigitte Vallée
2000-01-01
We obtain new results regarding the precise average bit-complexity of five algorithms of a broad Euclidean type. We develop a general framework for analysis of algorithms, where the average-case complexity of an algorithm is seen to be related to the analytic behaviour in the complex plane of the set of elementary transformations determined by the algorithms. The methods rely
Average Transmission Probability of a Random Stack
ERIC Educational Resources Information Center
Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg
2010-01-01
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
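The distinction between averaging the transmission probability itself and averaging its logarithm can be seen in a toy Monte Carlo (independent random per-slab transmission factors, not the paper's recurrence relation; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_slabs, n_stacks = 20, 20000
# hypothetical independent per-slab transmission factors (toy model)
t = rng.uniform(0.6, 1.0, size=(n_stacks, n_slabs))
T = t.prod(axis=1)                     # transmission of each random stack

avg_T = T.mean()                       # average of the transmission probability
typical_T = np.exp(np.log(T).mean())   # exp of the averaged logarithm
# Jensen's inequality guarantees avg_T > typical_T for a fluctuating T
```

The two quantities differ systematically, which is why the choice of averaging matters in the abstract's setting.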
ON THE AVERAGE ENERGIES OF MOLECULAR IONS
Stevenson
1959-01-01
The average excitation energy e-bar with which molecule ions are formed by electron impact-induced ionization, in relation to the average energy expended in the formation of molecule ions, is considered. Under the assumption of an essential applicability of a quasi-equilibrium rate theory to mass spectra, e-bar can be related to two observable mass spectral quantities. The
Averages in vector spaces over finite fields
Wright J.; Carbery A.; Stones B.
2008-01-01
We study the analogues of the problems of averages and maximal averages over a surface in R^n when the Euclidean structure is replaced by that of a vector space over a finite field, and obtain optimal results in a number ...
Averages of Forecasts: Some Empirical Results
Spyros Makridakis; Robert L. Winkler
1983-01-01
An alternative to using a single forecasting method is to average the forecasts obtained from several methods. In this paper we investigate empirically the impact of the number and choice of forecasting methods on the accuracy of simple averages. It is concluded that the forecasting accuracy improves, and that the variability of accuracy among different combinations decreases, as the number
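A toy simulation, with made-up error levels, shows why the simple average of several unbiased forecasts tends to beat any single method:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 100.0
n_methods, n_trials = 5, 10000
sigmas = np.array([5.0, 6.0, 7.0, 8.0, 9.0])   # hypothetical per-method error levels
forecasts = truth + rng.standard_normal((n_trials, n_methods)) * sigmas

mae_single = np.abs(forecasts - truth).mean(axis=0)        # each method alone
mae_combo = np.abs(forecasts.mean(axis=1) - truth).mean()  # simple average
# with independent unbiased errors, averaging shrinks the error variance
```

Here the combination's mean absolute error is smaller than even the best individual method's, consistent with the paper's empirical conclusion.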
Determinants of College Grade Point Averages
ERIC Educational Resources Information Center
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by…
Flexibility of spatial averaging in visual perception
Lombrozo, Tania; Judson, Jeff; MacLeod, Donald I.A
2005-01-01
The classical receptive field (RF) concept—the idea that a visual neuron responds to fixed parts and properties of a stimulus—has been challenged by a series of recent physiological results. Here, we extend these findings to human vision, demonstrating that the extent of spatial averaging in contrast perception is also flexible, depending strongly on stimulus contrast and uniformity. At low contrast, spatial averaging is greatest (about 11 min of arc) within uniform regions such as edges, as expected if the relevant neurons have orientation-selective RFs. At high contrast, spatial averaging is minimal. These results can be understood if the visual system is balancing a trade-off between noise reduction, which favours large areas of averaging, and detail preservation, which favours minimal averaging. Two distinct populations of neurons with hard-wired RFs could account for our results, as could the more intriguing possibility of dynamic, contrast-dependent RFs. PMID:15870034
Video denoising using multiple class averaging with Multiresolution
Pizurica, Aleksandra
, which can cause certain artifacts. The second approach, based on the output of the motion detector, automatically adapts to motion, reducing the contribution of the pixels in the previous frames. This paper presents a non-linear technique for noise reduction in video that is suitable for real-time processing
Speech Noise Estimation using Enhanced Minima Controlled Recursive Averaging
Ningping Fan; Justinian Rosca; Radu Balan
2007-01-01
Accurate noise power spectrum estimation in a noisy speech signal is a key challenge in speech enhancement. One state-of-the-art approach is the minima controlled recursive averaging (MCRA). This paper presents an enhanced MCRA algorithm (EMCRA), which demonstrates less speech signal leakage and faster response time to follow abrupt changes in the noise power spectrum. Experiments using real speech and
AVERAGING PRINCIPLE FOR DIFFUSION PROCESSES VIA DIRICHLET FORMS
Boyer, Edmond
Abstract. We study diffusion processes driven by a Brownian motion with regular drift in a finite-dimensional setting. The drift has two components on different time scales, a fast conservative component and a slow one. We give interpretations and new results of the averaging principle for such processes when we speed up the conservative component.
DEFORMOTION Deforming Motion, Shape Average and the Joint
Soatto, Stefano
average" in order to track moving and deforming objects through time. A jellyfish (Fig. 1) is "moving" in one direction while "locally deforming"; it is just another example to illustrate the same intuition. For instance, in Fig. 6, how do we describe the "motion" of a jellyfish? Or in Fig. 5
Speeding up k-means Clustering by Bootstrap Averaging
Ian Davidson; Ashwin Satyanarayana
2003-01-01
K-means clustering is one of the most popular clustering algorithms used in data mining. However, clustering is a time consuming task, particularly with the large data sets found in data mining. In this paper we show how bootstrap averaging with k-means can produce results comparable to clustering all of the data but in much less time. The approach of bootstrap (sampling
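A minimal sketch of the idea: run plain Lloyd's algorithm on small bootstrap samples and average the resulting centroids. The cluster count, sample size, toy data, and the crude coordinate-sort used to match centroids across runs are all assumptions, not the paper's procedure:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm with random data-point initialization."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(c, 0.3, size=(200, 2)) for c in ([0, 0], [3, 3], [0, 3])])

k, B = 3, 10
boot = []
for b in range(B):
    sample = X[rng.integers(0, len(X), size=len(X) // 4)]   # bootstrap subsample
    c = kmeans(sample, k, seed=b)
    boot.append(c[np.lexsort((c[:, 1], c[:, 0]))])          # crude centroid alignment
avg_centers = np.mean(boot, axis=0)
```

Each k-means run touches only a quarter of the data, which is where the speedup comes from; a production version would need a more robust centroid-matching step.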
Averaging in cosmology based on Cartan scalars
NASA Astrophysics Data System (ADS)
Kašpar, P.; Svítek, O.
2014-05-01
We present a new approach for averaging in general relativity and cosmology. After a short review of the theory originally taken from the equivalence problem, we consider two ways of dealing with averaging based on Cartan scalars. We apply the theory to two different Lemaître-Tolman-Bondi models. In the first one, the correlation term behaves as a positive cosmological constant; in the second example, the leading correlation term behaves like spatial curvature. We also show the non-triviality of averaging for a linearized monochromatic gravitational wave.
The average tangential electric field at the noon magnetopause
NASA Technical Reports Server (NTRS)
Lindqvist, P.-A.; Mozer, F. S.
1990-01-01
Two hundred and five magnetopause passes of the ISEE 1 satellite through the magnetopause within 2 hrs of local noon during the time period 1977-1981 were examined in order to determine the relationship between the dawn-dusk tangential electric field at the magnetopause and the northward component of the magnetosheath magnetic field; the ratio of these quantities yields the speed of the plasma flow toward the magnetopause (i.e., the reconnection flow speed). The results indicate that, on the average, reconnection occurs at the subsolar magnetopause, and that the average reconnection flow speed of magnetosheath plasma toward the magnetopause is about 15 percent of the local Alfven speed.
Thermal ghost imaging with averaged speckle patterns
Shapiro, Jeffrey H.
We present theoretical and experimental results showing that a thermal ghost imaging system can produce images of high quality even when it uses detectors so slow that they respond only to intensity-averaged (that is, ...
Ensemble Averages when β is a Square Integer
Christopher D. Sinclair
2010-08-25
We give a hyperpfaffian formulation of partition functions and ensemble averages for Hermitian and circular ensembles when L is an arbitrary integer and β = L^2, and when L is an odd integer and β = L^2 + 1.
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2013 CFR
2013-07-01
...Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 ...certification averaging program. Include only motorcycles certified under this subpart and intended...for which you manufacture or import motorcycles. (d) Calculate your...
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2012 CFR
2012-07-01
...Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 ...certification averaging program. Include only motorcycles certified under this subpart and intended...for which you manufacture or import motorcycles. (d) Calculate your...
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2014 CFR
2014-07-01
...Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 ...certification averaging program. Include only motorcycles certified under this subpart and intended...for which you manufacture or import motorcycles. (d) Calculate your...
SAMPLE AVERAGE APPROXIMATION METHOD FOR COMPOUND ...
2013-06-30
Jun 30, 2013 ... Sample Average Approximation (SAA) method (also known as ..... ? ? X, with rate (of a numerical sequence) 1/?n and distribution ?, if there is a ...... In [32] this property was attached a status ...... tics Reports, 34 (1990), pp.
Convergence speed in distributed consensus and averaging
Olshevsky, Alexander
We study the convergence speed of distributed iterative algorithms for the consensus and averaging problems, with emphasis on the latter. We first consider the case of a fixed communication topology. We show that a simple ...
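A minimal consensus-averaging iteration on a ring of agents (the topology and weights are illustrative): each agent repeatedly replaces its value with a weighted average of its own and its neighbors' values, and all values converge to the global mean at a rate set by the second-largest eigenvalue modulus of the weight matrix.

```python
import numpy as np

# Ring of n agents; W is doubly stochastic: self-weight 1/2, each neighbor 1/4
n = 8
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.arange(n, dtype=float)   # initial local values
target = x.mean()               # the consensus value; preserved every round
for _ in range(500):
    x = W @ x                   # one round of local neighbor averaging
```

Because W is doubly stochastic, the mean is invariant under each round, so the fixed point is exactly the average of the initial values.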
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2012 CFR
2012-07-01
...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2014 CFR
2014-07-01
...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
Reasonable Averages That Give Wrong Answers
NSDL National Science Digital Library
Shahani, A. K. (Arjan Kewalram)
Averages are meant to convey the essential features of a set of data, or a random variable, in a simple and concise way. Like any other summary, an average can be misleading, misused and abused; there is a fair amount of literature on this aspect of averages, the book by D. Huff (1973) being a particularly readable account. In one intuitive use of averages there is a source of error which can be quite serious and which is often not recognized. This source of error is illustrated below by a quality control problem, a project, an experiment and a game. A Taylor series expansion gives an insight into the nature of the error.
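The unrecognized error described here is applying a nonlinear function to an average: E[f(X)] differs from f(E[X]), with the leading discrepancy given by the second-order Taylor term f''(μ)σ²/2. A quick numerical check (the distribution and the function f are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 10.0, 2.0
x = rng.normal(mu, sigma, size=1_000_000)   # e.g. measured times

f = lambda t: 1.0 / t        # e.g. a rate derived from a time
naive = f(mu)                # f applied to the average
true_avg = f(x).mean()       # average of f: systematically larger here
# leading Taylor correction: f(mu) + f''(mu) * sigma**2 / 2, with f''(t) = 2/t**3
corrected = f(mu) + (2.0 / mu**3) * sigma**2 / 2
```

The corrected value captures most of the gap between the naive answer and the true average, which is exactly the insight the article's Taylor-series analysis provides.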
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
...2010-07-01 2010-07-01 false Emission averaging. 63.846 Section 63...AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS...SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air...
Symmetric Euler orientation representations for orientational averaging.
Mayerhöfer, Thomas G
2005-09-01
A new kind of orientation representation called symmetric Euler orientation representation (SEOR) is presented. It is based on a combination of the conventional Euler orientation representations (Euler angles) and Hamilton's quaternions. The properties of the SEORs concerning orientational averaging are explored and compared to those of averaging schemes that are based on conventional Euler orientation representations. To that aim, the reflectance of a hypothetical polycrystalline material with orthorhombic crystal symmetry was calculated. The calculation was carried out according to the average refractive index theory (ARIT [T.G. Mayerhöfer, Appl. Spectrosc. 56 (2002) 1194]). It is shown that the use of averaging schemes based on conventional Euler orientation representations leads to a dependence of the result on the specific Euler orientation representation that was utilized and on the initial position of the crystal. The latter problem can be overcome partly by the introduction of a weighing factor, but only for two-axes-type Euler orientation representations. In the case of a numerical evaluation of the average, a residual difference remains even if a two-axes-type Euler orientation representation is used, despite the utilization of a weighing factor. In contrast, this problem does not occur as a matter of principle if a symmetric Euler orientation representation is used, and the result of the averaging for both types of orientation representations converges with increasing number of orientations considered in the numerical evaluation. Additionally, the use of a weighing factor and/or non-equally spaced steps in the numerical evaluation of the average is not necessary. The symmetric Euler orientation representations are therefore ideally suited for use in orientational averaging procedures. PMID:16043055
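The pitfall of averaging over naively sampled Euler angles, and the quaternion-based remedy, can be demonstrated with a simple isotropy check. Uniformly sampling a tilt angle over-weights the poles, while normalized 4D Gaussians yield uniform random rotations. This is a generic illustration, not the paper's SEOR scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Naive scheme: a uniformly drawn Euler tilt angle beta over-weights the poles
beta = rng.uniform(0, np.pi, n)
naive = np.mean(np.cos(beta) ** 2)          # tends to 1/2, a biased "average"

# Uniform random rotations from normalized 4D Gaussians (unit quaternions);
# the z-component of the rotated z-axis is R33 = 1 - 2*(qx**2 + qy**2)
q = rng.standard_normal((n, 4))
q /= np.linalg.norm(q, axis=1, keepdims=True)
r33 = 1 - 2 * (q[:, 1] ** 2 + q[:, 2] ** 2)
isotropic = np.mean(r33 ** 2)               # tends to 1/3, the isotropic value
```

The correct isotropic average of the squared z-component is 1/3; the naive angle-uniform sampling gives 1/2, the kind of representation-dependent bias the abstract describes.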
Applications of high average power nonlinear optics
Velsko, S.P.; Krupke, W.F.
1996-02-05
Nonlinear optical frequency convertors (harmonic generators and optical parametric oscillators) are reviewed with an emphasis on high average power performance and limitations. NLO materials issues and NLO device designs are discussed in reference to several emerging scientific, military, and industrial/commercial applications requiring ~100 watt average power levels in the visible and infrared spectral regions. Research efforts required to enable practical ~100 watt class NLO-based laser systems are identified.
Averaging for split-step scheme
Vadim Zharnitsky
2003-01-01
The split-step Fourier method for numerically solving nonlinear Schrödinger equations (NLS) is considered as NLS with rapidly varying coefficients. This connection is exploited to justify the split-step approximation using an averaging technique. The averaging is done up to the second order, and it is explained why (in this context) the symmetric split-step produces a higher-order scheme. The same approach is
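The symmetric (Strang) split-step scheme the abstract analyzes can be sketched for the focusing NLS i u_t + ½ u_xx + |u|² u = 0, propagating a sech soliton; the grid, step size, and normalization are illustrative choices:

```python
import numpy as np

# Strang split-step Fourier for i u_t + 0.5 u_xx + |u|^2 u = 0
N, L, dt, steps = 256, 40.0, 1e-3, 1000
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
u = 1 / np.cosh(x)                             # soliton initial condition

half_linear = np.exp(-0.5j * k**2 * dt / 2)    # half-step of the dispersive part
for _ in range(steps):
    u = np.fft.ifft(half_linear * np.fft.fft(u))
    u = u * np.exp(1j * np.abs(u)**2 * dt)     # full step of the nonlinear part
    u = np.fft.ifft(half_linear * np.fft.fft(u))
# each substep is unitary, so the L2 norm (mass) is conserved exactly
```

The symmetric arrangement (half linear, full nonlinear, half linear) is what yields the second-order accuracy the paper's averaging argument explains.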
Monthly average polar sea-ice concentration
Schweitzer, Peter N.
1995-01-01
The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.
A high average power pockels cell
Daly, T.P.
1986-02-10
A high average power Pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The Pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.
Radial averages of astigmatic TEM images.
Fernando, K Vince
2008-10-01
The Contrast Transfer Function (CTF) of an image, which modulates images taken from a Transmission Electron Microscope (TEM), is usually determined from the radial average of the power spectrum of the image (Frank, J., Three-dimensional Electron Microscopy of Macromolecular Assemblies, Oxford University Press, Oxford, 2006). The CTF is primarily defined by the defocus. If the defocus estimate is accurate enough, then it is possible to demodulate the image, which is popularly known as the CTF correction. However, it is known that the radial average is somewhat attenuated if the image is astigmatic (see Fernando, K.V., Fuller, S.D., 2007. Determination of astigmatism in TEM images. Journal of Structural Biology 157, 189-200), but this distortion due to astigmatism has not been fully studied or understood up to now. We have discovered the exact mathematical relationship between the radial averages of TEM images with and without astigmatism. This relationship is determined by a zeroth-order Bessel function of the first kind, and hence we can exactly quantify this distortion in the radial averages of signal and power spectra of astigmatic images. The argument to this Bessel function is similar to an aberration function (without the spherical aberration term) except that the defocus parameter is replaced by the difference of the defoci in the major and minor axes of astigmatism. The ill effects due to this Bessel function are twofold. Since the zeroth-order Bessel function is a decaying oscillatory function, it introduces additional zeros to the radial average and also attenuates the CTF signal in the radial averages. Using our analysis, it is possible to simulate the effects of astigmatism in radial averages by imposing Bessel functions on idealized radial averages of images which are not astigmatic. We validate our theory using astigmatic TEM images. PMID:18662790
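Computing the radial average of an image's power spectrum, the starting point for CTF estimation, reduces to binning pixels by integer radius. A minimal sketch (the one-pixel bin width, centering convention, and white-noise test image are assumptions):

```python
import numpy as np

def radial_average(img):
    """Radial average of the power spectrum about the image center (1-px bins)."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    ny, nx = power.shape
    yy, xx = np.indices(power.shape)
    r = np.hypot(yy - ny // 2, xx - nx // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))    # white-noise stand-in for a micrograph
profile = radial_average(img)          # roughly flat for white noise
```

For an astigmatic image, the paper's result says this 1D profile is the ideal profile modulated by a zeroth-order Bessel function of the defocus difference.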
Average spend per person per week in the UK on food
Rambaut, Andrew
Infographic: 3% of the UK population is completely vegetarian; average spend per person per week on eating out was 26% of total UK food spend; further panels compare UK food spend for all households with the lowest income fifth, and show global population by body mass index.
Self-averaging characteristics of spectral fluctuations
NASA Astrophysics Data System (ADS)
Braun, Petr; Haake, Fritz
2015-04-01
The spectral form factor as well as the two-point correlator of the density of (quasi-)energy levels of individual quantum dynamics are not self-averaging. Only suitable smoothing turns them into useful characteristics of spectra. We present numerical data for a fully chaotic kicked top, employing two types of smoothing: one involves primitives of the spectral correlator, the second, a small imaginary part of the quasi-energy. Self-averaging universal (like the circular unitary ensemble (CUE) average) behavior is found for the smoothed correlator, apart from noise which shrinks like 1/√N as the dimension N of the quantum Hilbert space grows. There are periodically repeated quasi-energy windows of correlation decay and revival wherein the smoothed correlation remains finite as N → ∞, such that the noise is negligible. In between those windows (where the CUE-averaged correlator takes on values of the order 1/N²) the noise becomes dominant and self-averaging is lost. We conclude that the noise forbids distinction of CUE- and GUE-type behavior. Surprisingly, the underlying smoothed generating function does not enjoy any self-averaging outside the range of its variables relevant for determining the two-point correlator (and certain higher-order ones). We corroborate our numerical findings for the noise by analytically determining the CUE variance of the smoothed single-matrix correlator.
Average Cosmological Invariant Parameters of Cosmic Gamma-Ray Bursts
NASA Technical Reports Server (NTRS)
Mitrofanov, Igor G.; Anfimov, Dmitrij S.; Litvak, Maxim L.; Briggs, Michael S.; Paciesas, William S.; Pendleton, Geoffrey N.; Preece, Robert D.; Meegan, Charles A.
1999-01-01
Average cosmological invariant parameters (ACIPs) are calculated for six groups of BATSE cosmic gamma-ray bursts selected by their peak fluxes on the 1.024 s timescale. The ACIPs represent the average temporal and spectral properties of these events equally in the observer frame of reference and in the comoving frames of outbursting emitters. The parameters are determined separately for rise fronts and for back slopes of bursts, defined as the time profiles before and after the main peaks, respectively. The ACIPs for the rise fronts are found to be different for different intensity groups, while the ACIPs for the back slopes show no significant dependence on intensity. We conclude that emitters of bursts manifest standard average properties only during the back slopes of bursts.
Exact Averaging of Stochastic Equations for Flow in Porous Media
Karasaki, Kenzi; Shvidler, Mark; Karasaki, Kenzi
2008-03-15
At present, exact averaging of the equations for flow and transport in random porous media has been achieved only for limited special fields. Moreover, approximate averaging methods (for example, the convergence behavior and the accuracy of truncated perturbation series) are not well studied, and calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do there exist exact and sufficiently general forms of averaged equations? Here, we present an approach for finding the general exactly averaged system of basic equations for steady flow with sources in unbounded stochastically homogeneous fields. We do this by using (1) the existence and some general properties of Green's functions for the appropriate stochastic problem, and (2) some information about the random field of conductivity. This approach enables us to find the form of the averaged equations without directly solving the stochastic equations or making the usual assumptions regarding small parameters. In the common case of a stochastically homogeneous conductivity field, we present the exactly averaged new basic nonlocal equation with a unique kernel-vector. We show that in the case of some type of global symmetry (isotropy, transversal isotropy, or orthotropy), we can in the same way derive, for three-dimensional and two-dimensional flow, the exact averaged nonlocal equations with a unique kernel-tensor. When global symmetry does not exist, the nonlocal equation with a kernel-tensor involves complications and leads to an ill-posed problem.
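Schematically, the exactly averaged flow law described above is nonlocal: the mean flux at a point depends on the mean pressure gradient everywhere through a convolution kernel. A generic form of such a nonlocal Darcy-type law (notation illustrative, not the authors' exact result) is:

```latex
% Nonlocal averaged Darcy-type law with kernel K_{ij} (schematic form)
\langle q_i(\mathbf{x}) \rangle
  = -\int K_{ij}(\mathbf{x}-\mathbf{x}')\,
          \partial_j \langle p(\mathbf{x}') \rangle \, d\mathbf{x}',
\qquad
\partial_i \langle q_i(\mathbf{x}) \rangle = f(\mathbf{x})
```

Under global symmetry (isotropy, transversal isotropy, or orthotropy) the kernel tensor reduces to the unique forms the abstract refers to.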
The average tangential electric field at the noon magnetopause
Lindqvist, P.A. (Royal Inst. of Tech., Stockholm (Sweden)); Mozer, F.S. (Univ. of California, Berkeley (United States))
1990-10-01
The tangential electric field at the magnetopause in the magnetopause frame of reference has been estimated for 205 magnetopause crossings of the ISEE 1 satellite within 2 hours of local noon during 1977-1981. At most of these crossings, time variations of the electric field caused its fluctuations to be large compared with its mean value. This implies both that variations of the field (due, for example, to time-varying reconnection) may be of fundamental significance, and that statistical analyses of a large number of crossings are required to determine the average dependence of the tangential electric field at the magnetopause on the magnetosheath magnetic field. A generalized least squares method, which takes into account uncertainties in both the electric and magnetic field measurements, has been developed and applied to the data. The results indicate that, on the average, reconnection occurs at the subsolar magnetopause; that the average reconnection flow speed of magnetosheath plasma toward the magnetopause is 20 (+14/−6) km/s (which is about 15% of the local Alfven speed); and that the dependence of the average tangential electric field at the magnetopause on the magnetosheath north-south magnetic field is sufficient to account for a major part of the magnetospheric potential and convection.
Signal Quality Influence on Averaging Procedure for DIAL Pollution Monitoring
NASA Technical Reports Server (NTRS)
Egert, S.; Peri, D.; Berg, L.
1992-01-01
Presented here is an analysis of the relative effect of each stochastic phenomenon on the accuracy of LIDAR measurements, and the preferred averaging procedure to be applied is noted. The analysis takes into account the temporal correlation of the transmitted power variations associated with laser modal noise, the short-term temporal correlation of atmospheric turbulence, and 'white' additive noise. A distinction is made between two types of systems: one that transmits a sequence of pulses tuned to two wavelengths but with a relatively large time separation, and another that transmits pulses at two wavelengths almost coincidentally. This distinction is important because the effect of speckle noise can be minimized when the pulses at the two wavelengths are sent within the correlation time of the atmospheric turbulence. The presented analysis compares the signal-to-noise ratio in the calculation of the ratio beta_1/beta_2 as achieved by two methods of calculation: first averaging the data related to each wavelength and then taking the ratio, or first taking the ratio and then averaging. For both methods, normalization with respect to the transmitted power is considered, either prior to or after the averaging of the signals at each wavelength.
On the Choice of Average Solar Zenith Angle
NASA Astrophysics Data System (ADS)
Cronin, T.
2014-12-01
Studies with idealized climate models often make simplifying decisions to average solar radiation over space and time. But clear-sky and cloud albedo are increasing functions of the solar zenith angle, so the choice of average solar zenith angle is important and can lead to significant climate biases. Here, I use radiative transfer calculations for a pure scattering atmosphere and with a more detailed radiative transfer model to argue that one should in general choose the insolation-weighted zenith angle, rather than the simpler daytime-average zenith angle. The insolation-weighted zenith angle is especially superior if clouds are responsible for much of the shortwave reflection. Use of the daytime-average zenith angle may lead to a high bias in planetary albedo of ~3%, equivalent to a deficit in shortwave absorption of 10 W m-2 in the global energy budget (comparable to the radiative forcing of a roughly sixfold change in CO2 concentration). Other studies that have used general circulation models with spatially constant insolation have underestimated the global-mean zenith angle, with a consequent low bias in planetary albedo of ~2-6%, or a surplus in shortwave absorption of ~7-20 W m-2 in the global energy budget. I also discuss how a simple time-varying solar zenith angle could be used to minimize zenith angle-related biases in albedo for models of global climate that choose to spatially homogenize insolation.
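The contrast between the two averaging choices can be made concrete with a small numerical sketch. The geometry below (equator at equinox, so that the cosine of the solar zenith angle equals the cosine of the hour angle) and the grid resolution are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Diurnal cycle at the equator at equinox (illustrative geometry):
# cos(zenith) = cos(hour angle), with daylight for hour angle in (-pi/2, pi/2).
h = np.linspace(-np.pi / 2, np.pi / 2, 100001)   # hour angle over daylight
cos_zen = np.cos(h)                              # cosine of the solar zenith angle
insolation = cos_zen                             # TOA insolation is proportional to cos(zenith)

# Daytime-average zenith angle: the zenith angle itself averaged over daylight hours.
theta_day = np.degrees(np.abs(h).mean())

# Insolation-weighted zenith angle: weight cos(zenith) by the insolation itself.
cos_theta_insol = (cos_zen * insolation).mean() / insolation.mean()
theta_insol = np.degrees(np.arccos(cos_theta_insol))
```

For this geometry the daytime-average zenith angle is 45 degrees, while the insolation-weighted value is arccos(pi/4), about 38.2 degrees; since albedo increases with zenith angle, using the former biases the modeled albedo high, which is the paper's central point.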
Interpreting Sky-Averaged 21-cm Measurements
NASA Astrophysics Data System (ADS)
Mirocha, Jordan
2015-01-01
Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation.
Finally, (3) the independent constraints most likely to aid in the interpretation of global 21-cm signal measurements are detections of Lyman Alpha Emitters at high redshifts and constraints on the midpoint of reionization, both of which are among the primary science objectives of ongoing or near-future experiments.
Books Average Previous Decade of Economic Misery
Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
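The study's fitting procedure, correlating one series against a trailing moving average of another while scanning the window length, can be sketched as follows. All data here are synthetic stand-ins (random series with an 11-year dependence built in), not the book or economic data of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: an annual economic misery index (inflation + unemployment)
# and a literary misery series that, by construction, tracks its trailing decade.
years = 80
econ = rng.normal(10.0, 3.0, years)

def trailing_mean(x, w):
    """Average of the previous w values (a trailing moving average)."""
    return np.array([x[i - w:i].mean() for i in range(w, len(x))])

lit = trailing_mean(econ, 11) + rng.normal(0.0, 0.3, years - 11)

# Scan moving-average window lengths and record the goodness of fit for each,
# aligning both series on their common final years.
corrs = {}
for w in range(2, 21):
    ma = trailing_mean(econ, w)
    n = min(len(ma), len(lit))
    corrs[w] = np.corrcoef(ma[-n:], lit[-n:])[0, 1]

best = max(corrs, key=corrs.get)
```

With the dependence built in at 11 years, the goodness of fit peaks at or near w = 11, mirroring the window scan described in the abstract.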
MOQA; unlocking the potential of compositional static average-case analysis
Michel P. Schellekens
2010-01-01
Compositionality is the “golden key” to static analysis and plays a central role in static worst-case time analysis. We show that compositionality, combined with the capacity for tracking data distributions, unlocks a useful novel technique for average-case analysis. The applicability of the technique has been demonstrated via the static average-case analysis tool DISTRI. The tool automatically extracts average-case time from source
An improved moving average technical trading rule
NASA Astrophysics Data System (ADS)
Papailias, Fotis; Thomakos, Dimitrios D.
2015-06-01
This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
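A minimal 'long only' sketch of the idea, a cross-over 'buy' signal paired with a trailing exit threshold, is below. The fixed-fraction trailing stop and the toy price path are simplifying assumptions; the paper's dynamic threshold is more elaborate:

```python
import numpy as np

def crossover_with_trailing_stop(price, window=20, stop_frac=0.05):
    """Long-only moving-average cross-over: buy when the price crosses above
    its trailing moving average; exit when the price falls stop_frac below
    the running maximum reached since entry (the trailing stop)."""
    ma = np.convolve(price, np.ones(window) / window, mode="valid")
    pos, returns = 0, []
    for t in range(window, len(price)):
        p, m = price[t], ma[t - window]   # m is the MA of the previous `window` prices
        if pos == 0 and p > m:
            pos, entry, peak = 1, p, p    # cross-over 'buy' signal
        elif pos == 1:
            peak = max(peak, p)
            if p < (1 - stop_frac) * peak:  # dynamic trailing stop triggers the exit
                returns.append(p / entry - 1.0)
                pos = 0
    return returns

# Hypothetical price path: a steady rise followed by a sharp drop.
price = np.concatenate([np.arange(1.0, 101.0), [50.0]])
trade_returns = crossover_with_trailing_stop(price)
```

On this toy path the strategy enters on the first cross-over and is taken out by the trailing stop after the drop, locking in most of the run-up rather than riding the full drawdown.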
On the determination of local instantaneous averages in particulate flow measurements
NASA Technical Reports Server (NTRS)
Vandewall, R. E.; Soo, S. L.
1993-01-01
Determination of the instantaneous local average particle density of a gas-particle suspension requires satisfying both the time scale relation and the volume scale relation, or its continuum counterpart for time averaging. This procedure was validated by comparing simultaneous velocity and mass flux measurements with laser phase Doppler measurements.
Nonlocal imaging by conditional averaging of random reference measurements
Kai-Hong Luo; Boqiang Huang; Wei-Mou Zheng; Ling-An Wu
2013-03-22
We report the nonlocal imaging of an object by conditional averaging of the random exposure frames of a reference detector, which only sees the freely propagating field from a thermal light source. A bucket detector, synchronized with the reference detector, records the intensity fluctuations of an identical beam passing through the object mask. These fluctuations are sorted according to their values relative to the mean, then the reference data in the corresponding time-bins for a given fluctuation range are averaged, to produce either positive or negative images. Since no correlation calculations are involved, this correspondence imaging technique challenges our former interpretations of "ghost" imaging. Compared with conventional correlation imaging or compressed sensing schemes, both the number of exposures and computation time are greatly reduced, while the visibility is much improved. A simple statistical model is presented to explain the phenomenon.
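The conditional-averaging recipe can be illustrated with a toy one-dimensional simulation; the exponential speckle statistics, object mask, and frame count below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy correspondence imaging: a 1-D transmissive "object" and pseudo-thermal
# speckle frames shared by the reference and bucket arms.
obj = np.array([0., 0., 1., 1., 1., 0., 0., 1., 0., 0.])   # object mask
frames = rng.exponential(1.0, size=(20000, obj.size))       # speckle intensity frames

bucket = frames @ obj          # bucket detector: total light passing the mask
mean_b = bucket.mean()

# Conditional averaging: average the reference frames whose synchronized bucket
# value fluctuated above (below) the mean -- no correlation calculation needed.
pos_img = frames[bucket > mean_b].mean(axis=0)
neg_img = frames[bucket < mean_b].mean(axis=0)

contrast = pos_img - neg_img   # transmissive pixels stand out against the background
```

Frames whose bucket reading fluctuates above the mean, averaged, yield the positive image; those below the mean give the negative one, with no correlation computation involved.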
Do people really believe they are above average?
Elanor F. Williams; Thomas Gilovich
2008-01-01
A question that has plagued self-enhancement research is whether participants truly believe the overly positive self-assessments they report, or whether better-than-average effects reflect mere hopes or self-presentation. In a test of people’s belief in the accuracy of their self-enhancing trait ratings, participants made a series of bets, each time choosing between betting that they had scored at least as high
HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.
BEN-ZVI, I.; DAYRAN, D.; LITVINENKO, V.
2005-08-21
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.
Average magnitude difference function pitch extractor
M. Ross; H. Shaffer; A. Cohen; R. Freudberg; H. Manley
1974-01-01
This paper describes a method for using the average magnitude difference function (AMDF) and associated decision logic to estimate the pitch period of voiced speech sounds. The AMDF is a variation on autocorrelation analysis where, instead of correlating the input speech at various delays (where multiplications and summations are formed at each value of delay), a difference signal is formed
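A minimal sketch of the AMDF approach, forming a difference signal at each candidate delay instead of the multiply-and-sum of autocorrelation, applied to a synthetic 100 Hz tone (the sampling rate and pitch search band are illustrative choices, not the paper's parameters):

```python
import numpy as np

def amdf_pitch(x, fs, fmin=60.0, fmax=400.0):
    """Estimate the pitch period via the average magnitude difference function:
    D(k) = mean |x[n] - x[n+k]|, which dips (ideally to zero) when the delay k
    matches the pitch period -- no multiplications needed, unlike autocorrelation."""
    kmin, kmax = int(fs / fmax), int(fs / fmin)   # delay range for the pitch band
    d = np.array([np.mean(np.abs(x[:-k] - x[k:])) for k in range(kmin, kmax + 1)])
    return (kmin + int(np.argmin(d))) / fs        # pitch period in seconds

fs = 8000
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 100.0 * t)   # stand-in for a voiced speech segment
period = amdf_pitch(x, fs)          # expect ~0.01 s for the 100 Hz tone
```

Real AMDF pitch extractors add the decision logic the abstract mentions (voicing detection, octave-error checks); the null at the true period is the core of the method.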
Model averaging, optimal inference, and habit formation
FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.
2014-01-01
Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724
Geomagnetic effects on the average surface temperature
P. Ballatore
2004-01-01
Several results have previously shown that solar activity can be related to cloudiness and surface solar radiation intensity (Svensmark and Friis-Christensen, J. Atmos. Sol. Terr. Phys., 59, 1225, 1997; Veretenenko and Pudovkin, J. Atmos. Sol. Terr. Phys., 61, 521, 1999). Here, the possible relationships between the averaged surface temperature and the solar wind parameters or geomagnetic activity indices
Science of NHL Hockey: Statistics & Averages
NSDL National Science Digital Library
NBC Learn
2010-10-07
Being a top goalie in the NHL takes more than quick reflexes and nerves of steel, it also requires a firm grip on the numbers. Namely, the key averages and statistics of goaltending. "Science of NHL Hockey" is a 10-part video series produced in partnership with the National Science Foundation and the National Hockey League.
Relaxed Averaged Alternating Reflections for Diffraction Imaging
D. Russell Luke
We report on progress in algorithms for iterative phase retrieval. The theory of convex optimisation is used to develop and to gain insight into counterparts for the nonconvex problem of phase retrieval. We propose a relaxation of averaged alternating reflectors and determine the fixed point set of the related operator in the convex case. A numerical study supports our theoretical
Relaxed averaged alternating reflections for diffraction imaging
D. Russell Luke
2005-01-01
We report on progress in algorithms for iterative phase retrieval. The theory of convex optimization is used to develop and to gain insight into counterparts for the nonconvex problem of phase retrieval. We propose a relaxation of averaged alternating reflectors and determine the fixed-point set of the related operator in the convex case. A numerical study supports our theoretical observations
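The proposed relaxation iterates the operator V = (beta/2)(R_A R_B + I) + (1 - beta) P_B, where R = 2P - I denotes reflection through a set. A convex toy problem (two lines in the plane standing in for the nonconvex phase-retrieval constraints) shows the fixed-point behavior; the sets, starting point, and beta are illustrative assumptions:

```python
import numpy as np

def P_A(v):                       # projector onto the x-axis
    return np.array([v[0], 0.0])

def P_B(v):                       # projector onto the line y = x
    m = (v[0] + v[1]) / 2.0
    return np.array([m, m])

def raar_step(v, beta=0.75):
    """One relaxed averaged alternating reflections (RAAR) iteration:
    V = (beta/2)(R_A R_B + I) + (1 - beta) P_B, with R = 2P - I."""
    R_B = 2 * P_B(v) - v
    R_A_R_B = 2 * P_A(R_B) - R_B
    return beta / 2.0 * (R_A_R_B + v) + (1 - beta) * P_B(v)

x = np.array([3.0, 1.0])
for _ in range(200):
    x = raar_step(x)
sol = P_B(x)   # the "shadow" of the iterate on one constraint set
```

Here the shadow sequence P_B(x_k) converges to the intersection of the two sets (the origin), consistent with the convex-case fixed-point analysis the abstract refers to.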
Bayesian Model Averaging for Linear Regression Models
Adrian E. Raftery; David Madigan; Jennifer A. Hoeting
1998-01-01
We consider the problem of accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads to the underestimation of uncertainty when making inferences about quantities of interest. A Bayesian solution to this problem involves averaging over all possible models (i.e., combinations of predictors) when making inferences about quantities of interest.
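One standard concrete realization of this averaging approximates each model's posterior probability by exp(-BIC/2), normalized over an enumeration of all predictor subsets. The data, seed, and the BIC approximation below are illustrative choices, not the authors' exact procedure:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

# Synthetic regression: y depends on x0 and x1; x2 is an irrelevant predictor.
n = 200
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0.0, 1.0, n)

def bic(Xs, y):
    """BIC of an OLS fit with intercept; exp(-BIC/2) approximates the
    model evidence up to a constant."""
    A = np.column_stack([np.ones(len(y)), Xs]) if Xs.size else np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + A.shape[1] * np.log(len(y))

# Enumerate all predictor subsets and weight each by its approximate evidence.
models = [m for r in range(4) for m in itertools.combinations(range(3), r)]
bics = np.array([bic(X[:, list(m)], y) for m in models])
w = np.exp(-(bics - bics.min()) / 2.0)
w /= w.sum()                      # posterior model probabilities (BIC approximation)

# Model-averaged inclusion probability of each predictor.
incl = np.array([sum(w[i] for i, m in enumerate(models) if j in m) for j in range(3)])
```

The model-averaged inclusion probabilities concentrate on the true predictors while the spurious one receives little weight, the kind of uncertainty accounting that conditioning on a single selected model would hide.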
Initial Conditions in the Averaging Cognitive Model
ERIC Educational Resources Information Center
Noventa, S.; Massidda, D.; Vidotto, G.
2010-01-01
The initial state parameters s[subscript 0] and w[subscript 0] are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…
A Functional Measurement Study on Averaging Numerosity
ERIC Educational Resources Information Center
Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio
2014-01-01
In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…
Why Johnny Can Be Average Today.
ERIC Educational Resources Information Center
Sturrock, Alan
1997-01-01
During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…
Code of Federal Regulations, 2010 CFR
2010-07-01
...AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Certification...averaging of credits is allowed across all classes of nonroad spark-ignition engines at or below 19 kW. (c) Credits...
Code of Federal Regulations, 2011 CFR
2011-07-01
...AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Certification...averaging of credits is allowed across all classes of nonroad spark-ignition engines at or below 19 kW. (c) Credits...
Code of Federal Regulations, 2012 CFR
2012-07-01
...AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Certification...averaging of credits is allowed across all classes of nonroad spark-ignition engines at or below 19 kW. (c) Credits...
Code of Federal Regulations, 2014 CFR
2014-07-01
...AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Certification...averaging of credits is allowed across all classes of nonroad spark-ignition engines at or below 19 kW. (c) Credits...
Code of Federal Regulations, 2013 CFR
2013-07-01
...AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Certification...averaging of credits is allowed across all classes of nonroad spark-ignition engines at or below 19 kW. (c) Credits...
Hyperplane Arrangements with Large Average Diameter
2007-09-23
Sep 23, 2007 ... Abstract: The largest possible average diameter of a bounded cell of a simple hyperplane arrangement ... optimization problem min{c^T x : x ∈ P}. Dedieu ... [7] L. Finschi: Oriented matroids database http://www.om.math.ethz.ch
THE AVERAGE CASE ANALYSIS OF ALGORITHMS
Flajolet, Philippe
Princeton, October 22, 1994. Contents include: Generating Functions; Saddle Point Asymptotics; Mellin Transform Asymptotics; Functional Equations; Stirling's formula and the asymptotic counting of integer partitions.
NASA Technical Reports Server (NTRS)
1994-01-01
Time-averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row, the first-stage blade row, and the second-stage vane row of the Rocketdyne Space Shuttle Main Engine two-stage fuel-side turbine. Unsteady pressure envelope measurements for the first blade are also reported. These measurements were made at 10 percent, 50 percent, and 90 percent span on both the pressure and suction surfaces of the first-stage components. Additional Stanton number measurements were made on the first-stage blade platform, blade tip, and shroud, and at 50 percent span on the second vane. A shock tube was used as a short duration source of heated and pressurized air to which the turbine was subjected. Platinum thin-film heat flux gages were used to obtain the heat flux measurements, while miniature silicon-diaphragm flush-mounted pressure transducers were used to obtain the pressure measurements. The first-stage vane Stanton number distributions are compared with predictions obtained using a version of STAN5 and a quasi-3D Navier-Stokes solution. This same quasi-3D N-S code was also used to obtain predictions for the first blade and the second vane.
High average power diode pumped solid state lasers for CALIOPE
Comaskey, B.; Halpin, J.; Moran, B.
1994-07-01
Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers.
Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.
ERIC Educational Resources Information Center
Dirks, Jean; And Others
1983-01-01
Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)
jModelTest: phylogenetic model averaging.
Posada, David
2008-07-01
jModelTest is a new program for the statistical selection of models of nucleotide substitution based on "Phyml" (Guindon and Gascuel 2003. A simple, fast, and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst Biol. 52:696-704.). It implements 5 different selection strategies, including "hierarchical and dynamical likelihood ratio tests," the "Akaike information criterion," the "Bayesian information criterion," and a "decision-theoretic performance-based" approach. This program also calculates the relative importance and model-averaged estimates of substitution parameters, including a model-averaged estimate of the phylogeny. jModelTest is written in Java and runs under Mac OSX, Windows, and Unix systems with a Java Runtime Environment installed. The program, including documentation, can be freely downloaded from the software section at http://darwin.uvigo.es. PMID:18397919
Average gluon and quark jet multiplicities
A. V. Kotikov
2014-11-30
We present the results of [1,2] on computing the QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new results are due to recent progress in timelike small-x resummation obtained in the MSbar factorization scheme. They depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets demonstrates, by its goodness of fit, how our results solve a longstanding problem of QCD. Including all the available theoretical input within our approach, alphas(Mz) = 0.1199 +- 0.0026 has been obtained in the MSbar scheme in an approximation equivalent to next-to-next-to-leading order, enhanced by the resummations of ln x terms through the NNLL level and of ln Q2 terms by the renormalization group. This result is in excellent agreement with the present world average.
Average magnification effect of clumping of matter
Kibble, T W B; Lieu, Richard
2004-01-01
The aim of this paper is to re-examine the question of the average magnification in a universe with some inhomogeneously distributed matter. We present an analytic proof, valid under rather general conditions, including clumps of any shape and size, and strong lensing, that so long as the clumps are uncorrelated the average reciprocal magnification (in only one of at least four possible meanings of the words) is precisely the same as in a homogeneous universe with equal mean density. We also discuss, in the context of observations of discrete and extended sources, the physical significance of the various different measures of magnification and the circumstances in which they are appropriate. The interpretation of quasar, type Ia supernovae and cosmic microwave background data could be significantly biased if the wrong measure is employed.
SOME NEW VELOCITY AVERAGING RESULTS
Michael Westdickenberg
is bounded in the Sobolev space W^{1/2,2}(R^n). Hence we have a gain of one half derivative here. Golse, Lions ... L^p(R^n × R^n), then the average f̄ lies in W^{s,p}(R^n) for all s strictly less than min{1/p, 1/p'}. DiPerna, Lions & Meyer [10] show for this case that the average is contained in a Besov space built on Lorentz
Potential of Average Force in a Plasma
O. Theimer; P. Kepple
1966-01-01
The potential of average force W_{1,2}^{qq'} experienced by a charge q' at a distance |r1 - r2| from a charge q is calculated from the Bogoliubov-Born-Green-Kirkwood-Yvon equations of classical statistical mechanics without linearization or equivalent approximations. Diverging integrals are eliminated by the condition that bound-particle states with negative internal energy, e.g., atoms, be excluded from the partition function. The 3-particle distribution functions required
The Average Velocity in a Queue
ERIC Educational Resources Information Center
Frette, Vidar
2009-01-01
A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…
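The model is easy to simulate: a car's speed is the minimum of its own preferred speed and the preferred speeds of all cars ahead, i.e., a running minimum from the front of the queue. The uniform(0, 1) speed distribution and queue length below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Cars on a no-overtaking road, listed from the front of the queue backward:
# each car moves at the minimum of its own preferred speed and all speeds ahead.
def queue_speeds(preferred):
    return np.minimum.accumulate(preferred)   # running minimum from the front car

# Average velocity in the queue, estimated over many random queues of 10 cars.
trials = 2000
avg = np.mean([queue_speeds(rng.uniform(0.0, 1.0, 10)).mean()
               for _ in range(trials)])
```

For n = 10 cars with uniform(0, 1) preferred speeds, the exact mean of the queue-averaged velocity is (H_11 - 1)/10, roughly 0.202, well below the mean preferred speed of 0.5, which is the non-trivial averaging effect the article discusses.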
Average chemical composition of the lunar surface
NASA Technical Reports Server (NTRS)
Turkevich, A. L.
1973-01-01
The available data on the chemical composition of the lunar surface at eleven sites (3 Surveyor, 5 Apollo and 3 Luna) are used to estimate the amounts of principal chemical elements (those present in more than about 0.5% by atom) in average lunar surface material. The terrae of the moon differ from the maria in having much less iron and titanium and appreciably more aluminum and calcium.
Gridded electron guns for high average power
H. E. Gallagher
1962-01-01
The design and performance of electron guns producing high-average-power beams which are controlled by an intercepting-type grid are described. The design consists of a modification of the design of Pierce-type electron guns to allow for the effect of grid insertion, and calculation of the amplification factors based on the potential distribution. The performance is evaluated in terms of the beam
Stochastic Games with Average Payoff Criterion
Ghosh, M. K. [Department of Mathematics, Indian Institute of Science, Bangalore 560012 (India); Bagchi, A. [Department of Applied Mathematics, University of Twente, P.O. Box 217, 7500 AE Enschede (Netherlands)
1998-11-15
We study two-person stochastic games on a Polish state and compact action spaces and with average payoff criterion under a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of Nash equilibrium in stationary strategies is established under certain separability conditions.
Optimal Averaging of Incomplete Climatological Data
C. Gebhardt; B. Kusserow; A. Hense
2000-01-01
Summary: We present a multivariate statistical interpolation method for optimal averaging of incomplete climatological data. This objective analysis is based on a linear regression of the data under the constraints of unbiasedness and minimized analysis error variance. One of the important features of the presented interpolation is the efficient exchange of common information between the analysed variables. This exchange is
Disk-Averaged Synthetic Spectra of Mars
Giovanna Tinetti; Victoria S. Meadows; David Crisp; William Fong; Thangasamy Velusamy; Heather Snively
2005-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the
[Signal averaging electrocardiography in chronic alcoholism].
Pochmalicki, G; Genest, M; Jibril, A; Abdmoulah, M; Chatila, M; Zemir, H; Cayla, J M
1998-03-01
Cardiovascular death is the main cause of mortality in chronic alcoholics, perhaps due to a pro-arrhythmogenic effect of alcohol associated with infraclinical myocardial lesions. The authors prospectively investigated 41 patients (average age: 49.7 years), chronic alcoholics with no recent acute alcoholic episodes, for cardiac disease (ECG, signal averaging for late ventricular potentials, echocardiography and Holter ECG monitoring) and hepatic disease (liver biopsy). The history of alcoholism was 14 +/- 9 years, the quantity of alcohol ingested before they stopped drinking being 89 +/- 31 grammes/day. Thirty per cent of patients displayed 2 or 3 criteria of late ventricular potentials (LP). The authors demonstrated a correlation between the daily quantity of alcohol consumed before stopping drinking and the duration of the filtered QRS complex (p = 0.02). Moreover, the frequency of fatty infiltration found on liver biopsy, greater in alcoholics with LP (35% versus 19%, p = 0.025), correlated with the amplitude of the last 40 ms of the average QRS (p = 0.0485), with the duration of potentials of less than 40 microvolts (p = 0.05) and, above all, with the number of criteria of LP (p = 0.02). Finally, the presence of LP was also related to the following biological abnormalities: GGT (p = 0.027), ASAT (p = 0.046), ALAT (p = 0.039). The ECG abnormalities may reflect early infra-clinical myocardial lesions secondary to cellular metabolic abnormalities, perhaps analogous to the fatty hepatic changes. However, the prognostic value of these signal-averaging ECG abnormalities remains unknown. PMID:9749235
Digital Averaging Phasemeter for Heterodyne Interferometry
NASA Technical Reports Server (NTRS)
Johnson, Donald; Spero, Robert; Shaklan, Stuart; Halverson, Peter; Kuhnert, Andreas
2004-01-01
A digital averaging phasemeter has been built for measuring the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. This phasemeter performs well enough to enable interferometric measurements of distance with accuracy of the order of 100 pm and with the ability to track distance as it changes at a speed of as much as 50 cm/s. This phasemeter is unique in that it is a single, integral system capable of performing three major functions that, heretofore, have been performed by separate systems: (1) measurement of the fractional-cycle phase difference, (2) counting of multiple cycles of phase change, and (3) averaging of phase measurements over multiple cycles for improved resolution. This phasemeter also offers the advantage of making repeated measurements at a high rate: the phase is measured on every heterodyne cycle. Thus, for example, in measuring the relative phase of two signals having a heterodyne frequency of 10 kHz, the phasemeter would accumulate 10,000 measurements per second. At this high measurement rate, an accurate average phase determination can be made more quickly than is possible at a lower rate.
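The three functions the abstract lists (fractional-cycle phase measurement, cycle counting, and averaging over multiple cycles) can be sketched numerically. The sketch below is a toy numpy illustration, not the instrument's algorithm; the drift rate, noise level, and block size are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-cycle fractional phase readings (in cycles), drifting slowly
# while wrapping into [0, 1), as a fractional-cycle phase detector would report.
true_phase = 0.95 + 1e-4 * np.arange(10_000)   # slow drift in cycles (invented)
readings = (true_phase + 1e-3 * rng.standard_normal(true_phase.size)) % 1.0

# Cycle counting: undo the wraps to recover total accumulated phase
unwrapped = np.unwrap(readings, period=1.0)

# Averaging over a block of cycles for improved resolution: the noise of the
# block mean shrinks roughly as 1/sqrt(N) relative to a single reading
block = unwrapped[:1000]
avg_phase = block.mean()
```

With 10 kHz heterodyne cycles, a 1000-sample block like this would correspond to a 0.1 s averaged phase determination.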
Disk-averaged synthetic spectra of Mars
Tinetti, Giovanna; Fong, William; Meadows, Victoria S.; Snively, Heather; Velusamy, Thangasamy; Crisp, David
2004-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and ESA Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of the planet Mars to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra as a function of spectral resolution and wavelength range, for both the proposed visible coronograph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model which uses observational data as input to generate a database of spatially-resolved synthetic spectra for a range of illumination conditions (phase angles) and viewing geometries. Results presented here include disk averaged synthetic spectra, light-cur...
The estimation of average hazard ratios by weighted Cox regression.
Schemper, Michael; Wakounig, Samo; Heinze, Georg
2009-08-30
Often the effect of at least one of the prognostic factors in a Cox regression model changes over time, which violates the proportional hazards assumption of this model. As a consequence, the average hazard ratio for such a prognostic factor is under- or overestimated. While there are several methods to appropriately cope with non-proportional hazards, in particular by including parameters for time-dependent effects, weighted estimation in Cox regression is a parsimonious alternative without additional parameters. The methodology, which extends the weighted k-sample logrank tests of the Tarone-Ware scheme to models with multiple, binary and continuous covariates, was introduced in the 1990s and is further developed and re-evaluated in this contribution. The notion of an average hazard ratio is defined and its connection to the effect size measure P(X
Estimates of Random Error in Satellite Rainfall Averages
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.
2003-01-01
Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.
I. A. Stepanow
2006-03-22
The traditional method of finding the average value of a physical quantity often gives wrong results. Another formula of averaging is derived which gives correct results. It is applied to the calculation of the average radius of a tapering tube and the average flow velocity in the tube. The new formula is applicable to many other processes.
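A standard illustration of why the arithmetic mean can be the wrong averaging formula, in the same spirit as this abstract: the average flow velocity over equal distances is the harmonic mean of the segment velocities, because total distance divided by total time is what matters. The speeds below are invented:

```python
from statistics import fmean, harmonic_mean

# A fluid element traverses two equal-length (1 m) segments at different
# speeds; the physically correct "average velocity" is total distance
# divided by total time, which equals the harmonic mean of the speeds.
v1, v2 = 2.0, 6.0                        # m/s (invented values)
total_time = 1.0 / v1 + 1.0 / v2
true_average = 2.0 / total_time          # distance / time = 3.0 m/s

naive_average = fmean([v1, v2])          # arithmetic mean = 4.0, overestimates
correct_average = harmonic_mean([v1, v2])
```

The arithmetic mean (4.0 m/s) overstates the true average velocity (3.0 m/s) because the element spends more time in the slow segment.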
Quetelet, the average man and medical knowledge.
Caponi, Sandra
2013-08-14
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine. PMID:23970171
Is dark energy an effect of averaging?
Nan Li; Marina Seikel; Dominik J. Schwarz
2008-01-22
The present standard model of cosmology states that the known particles carry only a tiny fraction of total mass and energy of the Universe. Rather, unknown dark matter and dark energy are the dominant contributions to the cosmic energy budget. We review the logic that leads to the postulated dark energy and present an alternative point of view, in which the puzzle may be solved by properly taking into account the influence of cosmic structures on global observables. We illustrate the effect of averaging on the measurement of the Hubble constant.
S. Rodriguez-Abudo; D. Foster
2010-01-01
A technique for double-averaging the momentum equations has been successfully implemented on PIV observations of the two-dimensional time-dependent velocity field over a rippled bed. This technique, originally proposed by Gimenez-Curto and Corniero Lera [1996], uses ensemble and subsequent spatial averaging to yield the Double Averaged Navier-Stokes (DANS) equations. The resulting formulation yields a balance between the acceleration deficit and various
Coastal and global averaged sea level rise for 1950 to 2000
Neil J. White; John A. Church; Jonathan M. Gregory
2005-01-01
We compare estimates of coastal and global averaged sea level for 1950 to 2000. During the 1990s and around 1970, we find coastal sea level is rising faster than the global average but that it rises slower than the global average during the late 1970s and late 1980s. The differences are largely a result of sampling the time-varying geographical distribution
W. Zakowicz; I. J. Feng; R. H. Pratt
1982-01-01
Consequences of a simple integral definition of electron charge bound to an ion are examined for Thomas-Fermi (TF) and Debye-Huckel-Thomas-Fermi (DHTF) average atom statistical potentials used to describe high temperature high density plasmas. A self-consistent scheme for calculating average degree of ionization within the DHTF approach is described. With the simple integral definition of bound charge the DHTF model, unlike
Averaging in the presence of sliding errors
Yost, G.P. [Superconducting Super Collider Lab., Dallas, TX (United States); Lawrence Berkeley Lab., CA (United States)]
1991-08-01
In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms.
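The bias described here can be demonstrated with a toy simulation. The sketch below assumes a purely relative error model, sigma(x) = 0.2 x, and iteratively re-evaluates each measurement's error at the current average rather than at its own reported value; it is one simple variant in the spirit of the abstract, not the author's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed error model: relative error sigma(x) = 0.2 * x. Experiments that
# fluctuate low report smaller errors, receive too much weight, and bias a
# naive variance-weighted average downward.
true_value = 100.0
n = 2000
measurements = true_value + 0.2 * true_value * rng.standard_normal(n)
reported_sigma = 0.2 * measurements      # each group quotes sigma from its own value

def weighted_mean(x, s):
    w = 1.0 / s**2
    return np.sum(w * x) / np.sum(w)

naive = weighted_mean(measurements, reported_sigma)   # biased low

# Iteratively re-evaluate the errors at the current average instead
est = naive
for _ in range(20):
    est = weighted_mean(measurements, 0.2 * est * np.ones(n))
# With equal re-evaluated sigmas the weights become uniform, so the
# iteration converges to the unweighted mean, removing the bias.
```

In this purely relative-error model the fixed point is simply the unweighted mean; with a more general sigma(x), the iteration would still converge to weights evaluated at a common central value.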
MACHINE PROTECTION FOR HIGH AVERAGE CURRENT LINACS
Kevin Jordan; Trent Allison; Richard Evans; James Coleman; Albert Grippo
2003-05-01
A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths is programmed into gate arrays; the output of the matrix is an allowable maximum average power limit. This power limit is enforced by the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron path) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.
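The mode-matrix logic described above can be sketched as a lookup table. The mode names and limits below are invented for illustration and are not Jefferson Lab's actual tables; the point is only the fail-safe convention that any combination absent from the matrix inhibits the beam:

```python
# Hypothetical (machine_mode, beam_mode) -> maximum average beam power (W).
# Combinations not programmed into the matrix are unsafe: power limit 0.0.
POWER_LIMITS_W = {
    ("straight_ahead", "tune_up"): 0.0,
    ("straight_ahead", "low_power"): 100.0,
    ("recirculated", "low_power"): 100.0,
    ("recirculated", "full_power"): 2_000_000.0,   # 2 MW upper end
}

def allowed_power(machine_mode: str, beam_mode: str) -> float:
    """Return the allowable max average power in watts; 0.0 means the
    beam is inhibited because the combination is outside the matrix."""
    return POWER_LIMITS_W.get((machine_mode, beam_mode), 0.0)
```

A gate-array implementation would evaluate the same mapping combinatorially, with BLM, RF, and laser-safety status gating the result.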
hp calculators HP 50g Average sales prices
Vetter, Frederick J.
The STAT menu. Averages and standard deviations. Practice finding average sale prices and standard deviations. The Statistics menu is accessed from the ORANGE
Desmoothing of averaged periodical signals for geodetic applications
NASA Astrophysics Data System (ADS)
Földváry, Lóránt
2015-06-01
In general, observations are normally considered to refer to an epoch in time, however, observations take time. During this time span temporal variations of the observable alias the measurement. Similar phenomenon can be defined in the space domain as well: data treated to refer to a geographical location often contains integrated information of the surroundings. In each case the appropriate signal content can partially be recovered by desmoothing the averaged data. The present study delivers the theoretical foundation of a desmoothing method, and suggests its use on different applications in geodesy. The theoretical formulation of the desmoothing has been derived for 1-D and 2-D signals, the latter is interpreted on a plain and also on a sphere. The presented case studies are less elaborated, but intended to demonstrate the need and usefulness of the desmoothing tool.
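The recovery of signal content attenuated by an averaging window can be sketched in one dimension as a regularized spectral division. This is a minimal illustration under an assumed circular K-point running mean, not the paper's formulation; the signal, window length, and regularization threshold are invented:

```python
import numpy as np

# A K-point running mean multiplies the spectrum by the kernel's transfer
# function; dividing by it (where it is not too small) partially recovers
# the original signal: a simple form of desmoothing.
n, K = 256, 5
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 32) + 0.5 * np.sin(2 * np.pi * t / 11)

kernel = np.zeros(n)
kernel[:K] = 1.0 / K                      # circular K-point averaging window
smoothed = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

H = np.fft.fft(kernel)                    # transfer function of the window
eps = 1e-3                                # guard against near-zero division
H_safe = np.where(np.abs(H) > eps, H, eps)
recovered = np.real(np.fft.ifft(np.fft.fft(smoothed) / H_safe))
```

In the noise-free case the division restores the signal almost exactly; with measurement noise, frequencies where the transfer function is small must be suppressed rather than amplified, which is why the regularization threshold matters.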
Grade-point average, changes of major, and majors selected by students leaving engineering
Matthew W. Ohland; Guili Zhang; Brian Thorndyke; Timothy J. Anderson
2004-01-01
Graduation success, grade-point average, and destination major of ten cohorts of students matriculating and subsequently leaving undergraduate engineering programs at nine southeastern universities are studied from 1987-2002. Grade point averages are frozen at the time students leave engineering to investigate the role of grades in their decision to leave engineering and their choice of a destination major. This study adds
The averaged dynamics of the hydrogen atom in crossed electric and magnetic fields
The averaged dynamics of the hydrogen atom in crossed electric and magnetic fields
2000-07-13
We treat the classical dynamics of the hydrogen atom in perpendicular electric and magnetic fields as a perturbed system. Using action-angle variables, we separate the different time scales of the motion and apply the method of averaging.
The Event-Related Low-Frequency Activity of Highly and Average Intelligent Children
ERIC Educational Resources Information Center
Liu, Tongran; Shi, Jiannong; Zhao, Daheng; Yang, Jie
2008-01-01
Using time-frequency analysis techniques, the event-related low-frequency (delta: 0.5-4 Hz; theta: 4-8 Hz) activity of auditory event-related potentials (ERPs) was investigated in highly and average intelligent children; 18 intellectually gifted children and 18 intellectually average children participated in the present study. Present findings…
Averaging Principle for Quasi-Geostrophic Motions under Rapidly Oscillating Forcing
An averaging principle for quasi-geostrophic motions with rapidly oscillating forcing is proved, both on finite but large time intervals and … Keywords: quasi-periodic motions, rapidly oscillating forcing, averaging principle, stable and unstable manifolds.
Aging Effects and Population Splitting in Single-Particle Trajectory Averages
Schulz, Johannes H. P.; Barkai, Eli
We study time averages of single-particle trajectories in scale-free anomalous diffusion processes. Since the work of Nordlund [10], experimentalists have tracked individual trajectories of particles and used
Forecast of hourly average wind speed with ARMA models in Navarre (Spain)
J. L. Torres; A. García; M. De Blas; A. De Francisco
2005-01-01
In this article we have used the ARMA (autoregressive moving average process) and persistence models to predict the hourly average wind speed up to 10h in advance. In order to adjust the time series to the ARMA models, it has been necessary to carry out their transformation and standardization, given the non-Gaussian nature of the hourly wind speed distribution and
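The comparison between an autoregressive forecast and the persistence model can be sketched on a synthetic series. The sketch below fits only an AR(1) coefficient by least squares on invented wind-speed-like anomalies; it is a toy stand-in for the paper's ARMA models (a library such as statsmodels would be used in practice):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "hourly wind speed" anomalies from an AR(1) process (invented)
n, phi = 5000, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

train, test = x[:4000], x[4000:]

# Fit the AR(1) coefficient by least squares on the training series
phi_hat = np.dot(train[1:], train[:-1]) / np.dot(train[:-1], train[:-1])

# One-step-ahead forecasts on the held-out series
ar_forecast = phi_hat * test[:-1]
persistence_forecast = test[:-1]          # "next hour equals this hour"

rmse = lambda f: np.sqrt(np.mean((test[1:] - f) ** 2))
rmse_ar, rmse_persistence = rmse(ar_forecast), rmse(persistence_forecast)
```

On autocorrelated but mean-reverting data like this, the fitted AR model beats persistence at one step ahead; as the paper notes, real wind series first need transformation and standardization because of their non-Gaussian distribution.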
Calculating Ensemble Averaged Descriptions of Protein Rigidity without Sampling
González, Luis C.; Wang, Hui; Livesay, Dennis R.; Jacobs, Donald J.
2012-01-01
Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, all possible number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability. PMID:22383947
High average power second harmonic generation in air
Beresna, Martynas; Kazansky, Peter G. [Optoelectronics Research Centre, University of Southampton, Southampton SO17 1BJ (United Kingdom); Svirko, Yuri [Department of Physics and Mathematics, University of Joensuu, Joensuu FI-80101 (Finland); Barkauskas, Martynas; Danielius, Romas [Light Conversion Ltd., Vilnius, LT-10223 (Lithuania)
2009-09-21
We demonstrate second harmonic vortex generation in atmospheric pressure air using a tightly focused femtosecond laser beam. The circularly polarized ring-shaped beam of the second harmonic is generated in the air by a fundamental beam of the same circular polarization, while a linearly polarized beam produces a two-lobe beam at the second harmonic frequency. The achieved normalized conversion efficiency and average second harmonic power are two orders of magnitude higher compared to those previously reported and can be increased up to 20 times by external gas flow. We demonstrate that the frequency doubling originates from the gradient of photoexcited free electrons created by the ponderomotive force.
Computational problems in autoregressive moving average (ARMA) models
NASA Technical Reports Server (NTRS)
Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.
1981-01-01
The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
Average transverse momentum quantities approaching the lightfront
Daniel Boer
2014-09-29
In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of such integrated quantities, using Bessel-weighting and rapidity cut-offs, with the conventional definitions as limiting cases. The regularized quantities are given in terms of integrals over the TMDs of interest that are well-defined and moreover have the advantage of being amenable to lattice evaluations.
Average gait differential image based human recognition.
Chen, Jinyan; Liu, Jiansheng
2014-01-01
The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition. PMID:24895648
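The AGDI construction described here (accumulating absolute silhouette differences between adjacent frames and averaging over the sequence) can be sketched directly. The toy "walker" below, a translating one-pixel column, is invented for the demo:

```python
import numpy as np

# Minimal sketch of the average gait differential image (AGDI): average of
# absolute differences between adjacent binary silhouette frames.
def average_gait_differential_image(silhouettes: np.ndarray) -> np.ndarray:
    """silhouettes: (n_frames, H, W) array of 0/1 silhouettes."""
    diffs = np.abs(np.diff(silhouettes.astype(float), axis=0))
    return diffs.mean(axis=0)

# Toy sequence: a 1-pixel-wide "walker" translating across a tiny frame
frames = np.zeros((4, 5, 5))
for i in range(4):
    frames[i, :, i] = 1.0

agdi = average_gait_differential_image(frames)
```

Columns visited in the middle of the sequence accumulate more difference events than columns at the ends, so the AGDI encodes where motion happened most often, which is the kinetic information the abstract refers to.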
Angle-averaged Compton cross sections
Nickel, G.H.
1983-01-01
The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest-order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
Asymmetric network connectivity using weighted harmonic averages
NASA Astrophysics Data System (ADS)
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.
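The idea of a harmonic-average closeness that is naturally asymmetric can be illustrated with a toy fixed-point computation. The update rule and graph below are invented for demonstration and are not necessarily the paper's exact Generalized Erdös Number: a node's distance-like score to the target is taken as the harmonic mean of (neighbor score + edge length) over its own neighborhood, iterated to convergence. Because the average runs over each node's own neighbors, the score from a to b generally differs from the score from b to a:

```python
# Toy weighted-harmonic-average "closeness" on an undirected weighted graph
# (an assumption for illustration, not the paper's exact GEN definition).
def harmonic_closeness(adj, target, iters=300):
    """adj: {node: {neighbor: edge_length}}; smaller score = closer."""
    score = {v: 0.0 if v == target else float("inf") for v in adj}
    for _ in range(iters):
        new = dict(score)
        for v in adj:
            if v == target:
                continue
            terms = [1.0 / (score[u] + w) for u, w in adj[v].items()]
            s = sum(terms)
            if s > 0:
                new[v] = len(terms) / s      # harmonic mean over neighbors
        score = new
    return score

# Small invented graph: b links to t directly; c reaches t through two
# intermediates m1, m2; all edges have unit length.
adj = {
    "t": {"b": 1.0, "m1": 1.0, "m2": 1.0},
    "b": {"t": 1.0},
    "m1": {"t": 1.0, "c": 1.0},
    "m2": {"t": 1.0, "c": 1.0},
    "c": {"m1": 1.0, "m2": 1.0},
}
to_t = harmonic_closeness(adj, "t")   # how close each node feels to t
to_b = harmonic_closeness(adj, "b")   # how close each node feels to b
# to_t["b"] and to_b["t"] disagree: the measure is asymmetric, unlike a
# shortest-path distance, because t has many neighbors and b only one.
```

A shortest-path metric would assign b and t the same mutual distance; the harmonic-average construction does not, which is the qualitative point of the abstract.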
Light-cone averages in a Swiss-cheese universe
Marra, Valerio; Kolb, Edward W.; Matarrese, Sabino [Dipartimento di Fisica 'G. Galilei' Universita di Padova, INFN Sezione di Padova, via Marzolo 8, Padova I-35131 (Italy); Department of Astronomy and Astrophysics, University of Chicago, Chicago, Illinois 60637-1433 (United States); Department of Astronomy and Astrophysics, Enrico Fermi Institute, and Kavli Institute for Cosmological Physics, University of Chicago, Chicago, Illinois 60637-1433 (United States); Dipartimento di Fisica 'G. Galilei' Universita di Padova, INFN Sezione di Padova, via Marzolo 8, Padova I-35131 (Italy)
2008-01-15
We analyze a toy Swiss-cheese cosmological model to study the averaging problem. In our Swiss-cheese model, the cheese is a spatially flat, matter only, Friedmann-Robertson-Walker solution (i.e., the Einstein-de Sitter model), and the holes are constructed from a Lemaitre-Tolman-Bondi solution of Einstein's equations. We study the propagation of photons in the Swiss-cheese model, and find a phenomenological homogeneous model to describe observables. Following a fitting procedure based on light-cone averages, we find that the expansion scalar is unaffected by the inhomogeneities (i.e., the phenomenological homogeneous model is the cheese model). This is because of the spherical symmetry of the model; it is unclear whether the expansion scalar will be affected by nonspherical voids. However, the light-cone average of the density as a function of redshift is affected by inhomogeneities. The effect arises because, as the universe evolves, a photon spends more and more time in the (large) voids than in the (thin) high-density structures. The phenomenological homogeneous model describing the light-cone average of the density is similar to the {lambda}CDM concordance model. It is interesting that, although the sole source in the Swiss-cheese model is matter, the phenomenological homogeneous model behaves as if it has a dark-energy component. Finally, we study how the equation of state of the phenomenological homogeneous model depends on the size of the inhomogeneities, and find that the equation-of-state parameters w{sub 0} and w{sub a} follow a power-law dependence with a scaling exponent equal to unity. That is, the equation of state depends linearly on the distance the photon travels through voids. We conclude that, within our toy model, the holes must have a present size of about 250 Mpc to be able to mimic the concordance model.
Ensemble Averaged Conservation Equations for Multiphase, Multi-component, and Multi-material Flows
Ray A. Berry
2003-08-01
Many important “fluid” flows involve a combination of two or more materials having different properties. The multiple phases or components often exhibit relative motion among the phases or material classes. The microscopic motions of the individual constituents are complex and the solution to the micro-level evolutionary equations is difficult. Characteristic of such flows of multi-component materials is an uncertainty in the exact locations of the particular constituents at any particular time. For most practical purposes, it is not possible to exactly predict or measure the evolution of the details of such systems, nor is it even necessary or desirable. Instead, we are usually interested in more gross features of the motion, or the “average” behavior of the system. Here we present descriptive equations that will predict the evolution of this averaged behavior. Due to the complexities of interfaces and resultant discontinuities in fluid properties, as well as from physical scaling issues, it is essential to work with averaged quantities and parameters. We begin by tightening up, or more rigorously defining, our concept of an average. There are several types of averaging. The published literature predominantly contains two types of averaging: volume averaging [Whitaker 1999, Dobran 1991] and time averaging [Ishii 1975]. Occasionally combinations of the two are used. However, we utilize a more general approach by adopting what is known as ensemble averaging.
Gridded electron guns for high average power
H. E. Gallagher
1960-01-01
A procedure for the design of a gridded converging-beam electron gun is described. This procedure has been used to design several gridded guns in the perveance range of 0.1 × 10^-5 to 2 × 10^-6. The characteristics of these gridded guns are compared with the design objectives. A method of calculating the grid control characteristics is developed. This method utilizes charts compiled from
NASA Astrophysics Data System (ADS)
Maleika, Wojciech
2015-02-01
The paper presents a new method of digital terrain model (DTM) estimation based on modified moving average interpolation. There are many methods that can be employed in DTM creation, such as kriging, inverse distance weighting, nearest neighbour and moving average. The moving average method is not as precise as the others; hence, it is not commonly used in scientific work. Considering the high accuracy, the relatively low time costs, and the huge amount of measurement data collected by multibeam echosounder, however, the moving average method is definitely one of the most promising approaches. In this study, several variants of this method are analysed. An optimization of the moving average method is proposed based on a new module of selecting neighbouring points during the interpolation process—the "growing radius" approach. Test experiments performed on various multibeam echosounder datasets demonstrate the high potential of this modified moving average method for improved DTM generation.
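A "growing radius" moving average can be sketched as follows: for each grid node, the search radius expands until at least k soundings fall inside, then their depths are averaged. The exact growth rule in the paper may differ; the parameters and the toy planar seabed below are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Moving-average gridding with a growing search radius (illustrative only).
def growing_radius_average(xy, z, grid_pts, k=5, r0=1.0, grow=1.5, r_max=64.0):
    out = np.empty(len(grid_pts))
    for i, p in enumerate(grid_pts):
        d = np.hypot(xy[:, 0] - p[0], xy[:, 1] - p[1])
        r = r0
        while np.count_nonzero(d <= r) < k and r < r_max:
            r *= grow                     # expand until k neighbours found
        sel = d <= r
        out[i] = z[sel].mean() if sel.any() else np.nan
    return out

# Toy seabed: depth is a smooth plane sampled at scattered sounding points
xy = rng.uniform(0, 10, size=(500, 2))
z = 2.0 + 0.3 * xy[:, 0] + 0.1 * xy[:, 1]
grid = np.array([[5.0, 5.0], [2.0, 8.0]])
depths = growing_radius_average(xy, z, grid)
```

In dense multibeam coverage the initial radius usually suffices; the growth step matters in sparsely sounded patches, where a fixed radius would otherwise leave gaps in the DTM.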
Averaging and Metropolis iterations for positron emission tomography.
Szirmay-Kalos, László; Magdics, Milán; Tóth, Balázs; Bükki, Tamás
2013-03-01
Iterative positron emission tomography (PET) reconstruction computes projections between the voxel space and the lines of response (LOR) space, which are mathematically equivalent to the evaluation of multi-dimensional integrals. The dimension of the integration domain can be very high if scattering needs to be compensated. Monte Carlo (MC) quadrature is a straightforward method to approximate high-dimensional integrals. As the numbers of voxels and LORs can be in the order of hundred millions and the projection also depends on the measured object, the quadratures cannot be precomputed, but Monte Carlo simulation should take place on-the-fly during the iterative reconstruction process. This paper presents modifications of the maximum likelihood, expectation maximization (ML-EM) iteration scheme to reduce the reconstruction error due to the on-the-fly MC approximations of forward and back projections. If the MC sample locations are the same in every iteration step of the ML-EM scheme, then the approximation error will lead to a modified reconstruction result. However, when random estimates are statistically independent in different iteration steps, then the iteration may either diverge or fluctuate around the solution. Our goal is to increase the accuracy and the stability of the iterative solution while keeping the number of random samples and therefore the reconstruction time low. We first analyze the error behavior of ML-EM iteration with on-the-fly MC projections, then propose two solutions: averaging iteration and Metropolis iteration. Averaging iteration averages forward projection estimates during the iteration sequence. Metropolis iteration rejects those forward projection estimates that would compromise the reconstruction and also guarantees the unbiasedness of the tracer density estimate. We demonstrate that these techniques allow a significant reduction of the required number of samples and thus the reconstruction time. 
The proposed methods are built into the Teratomo system. PMID:23221817
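Two ingredients of the scheme described above can be sketched numerically: the classic ML-EM multiplicative update driving the estimate into agreement with the data, and the variance reduction from averaging noisy forward-projection estimates, which is what the averaging iteration exploits. The system sizes and the 30% multiplicative noise model below are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(5)

# Tiny dense stand-in for a tomographic system matrix (invented sizes)
m, n = 40, 20
A = rng.uniform(0.0, 1.0, size=(m, n))
x_true = rng.uniform(0.5, 2.0, size=n)
y = A @ x_true                            # noiseless measured projections
sens = A.sum(axis=0)                      # A^T 1, the sensitivity term

def mlem_step(x, fwd):
    # classic ML-EM multiplicative update given a forward projection `fwd`
    return x * (A.T @ (y / fwd)) / sens

# (1) ML-EM with exact forward projections converges toward data agreement
x = np.ones(n)
for _ in range(2000):
    x = mlem_step(x, A @ x)
residual = np.max(np.abs(A @ x - y) / y)

# (2) at a fixed estimate, averaging many noisy (Monte Carlo-like) forward
# projection draws approximates the exact projection far better than any
# single draw, so updates based on the average fluctuate much less
fwd_true = A @ x
draws = fwd_true * (1.0 + 0.3 * rng.standard_normal((200, m)))
single_err = np.mean(np.abs(draws[0] - fwd_true))
avg_err = np.mean(np.abs(draws.mean(axis=0) - fwd_true))
```

The paper's averaging iteration folds point (2) into the loop of point (1), accumulating forward-projection estimates across iterations so that fewer Monte Carlo samples are needed per step.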
Bias in Molecular Dynamics Averages due to Finite Step Size Stephen D. Bond and Robert D. Skeel
Skeel, Robert
Bias in Molecular Dynamics Averages due to Finite Step Size Stephen D. Bond and Robert D. Skeel the deterministic case. In molecular dynamics (MD) it is assumed that time averages of "physically meaningful func for estimating and correcting bias in averages computed from molecular dynamics sim- ulation. Variations
Nonlocal thermodynamic equilibrium self-consistent average-atom model for plasma physics
G. Faussurier; C. Blancard; E. Berthier
2001-01-01
A time-dependent collisional-radiative average-atom model is presented to study statistical properties of highly charged ion plasmas in off-equilibrium conditions. The time evolution of electron populations and the electron covariance matrix is obtained as approximate solutions of a master equation. Atomic structure is described either with a screened-hydrogenic model including l splitting, or by calculating one-electron states in a self-consistent average-atom
Numerical Study of Fractional Ensemble Average Transport Equations
NASA Astrophysics Data System (ADS)
Kim, S.; Park, Y.; Gyeong, C. B.; Lee, O.
2014-12-01
In this presentation, a newly developed theory is applied to the case of stationary and non-stationary stochastic advective flow fields, and a numerical solution method is presented for the resulting fractional Fokker-Planck equation (fFPE), which describes the evolution of the probability density function (PDF) of contaminant concentration. The derived fFPE is evaluated in three different forms: 1) purely advective form, 2) second-order moment form and 3) second-order cumulant form. The Monte Carlo analysis of the fractional governing equation is then performed in a stochastic flow field, generated by a fractional Brownian motion for the stationary and non-stationary stochastic advection, in order to provide a benchmark for the results obtained from the fFPEs. When compared to the Monte Carlo simulation based PDFs and their ensemble average, the second-order cumulant form gives a good fit in terms of the shape and mode of the PDF of the contaminant concentration. Therefore, it is quite promising that the non-Fickian transport behavior can be modeled by the derived fractional ensemble average transport equations either by means of the long memory in the underlying stochastic flow, or by means of the time-space non-stationarity of the underlying stochastic flow, or by means of the time and space fractional derivatives of the transport equations.