Turnaround Time Modeling for Conceptual Rocket Engines
NASA Technical Reports Server (NTRS)
Nix, Michael; Staton, Eric J.
2004-01-01
Recent years have brought about a paradigm shift within NASA and the Space Launch Community regarding the performance of conceptual design. Reliability, maintainability, supportability, and operability are no longer effects of design; they have moved to the forefront and are affecting design. A primary focus of this shift has been a planned decrease in vehicle turnaround time. Potentials for instituting this decrease include attacking the issues of removing, refurbishing, and replacing the engines after each flight. Regardless, it is important to understand the operational effects of an engine on turnaround time, ground support personnel, and equipment. One tool for visualizing this relationship involves the creation of a Discrete Event Simulation (DES). A DES model can be used to run a series of trade studies to determine if the engine is meeting its requirements, and, if not, what can be altered to bring it into compliance. Using DES, it is possible to look at the ways in which labor requirements, parallel maintenance versus serial maintenance, and maintenance scheduling affect the overall turnaround time. A detailed DES model of the Space Shuttle Main Engines (SSME) has been developed. Trades may be performed using the SSME Processing Model to see where maintenance bottlenecks occur and what the benefits (if any) are of increasing the number of personnel or the number and location of facilities, in addition to the trades previously mentioned, all with the goal of optimizing the operational turnaround time and minimizing operational cost. The SSME Processing Model was developed in such a way that it can easily be used as a foundation for developing DES models of other operational or developmental reusable engines. Performing a DES on a developmental engine during the conceptual phase makes it easier to affect the design and make changes to bring about a decrease in turnaround time and costs.
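The serial-versus-parallel maintenance trade that a DES makes visible can be illustrated with a toy event-driven scheduler (a hypothetical sketch only, not the SSME Processing Model; the task durations and crew counts are invented):

```python
import heapq

def turnaround(task_hours, crews):
    """Greedy schedule: assign each maintenance task to the first free crew;
    returns the makespan, i.e. the total turnaround time in hours."""
    free = [0.0] * crews          # next-free time of each crew
    heapq.heapify(free)
    for dur in sorted(task_hours, reverse=True):   # longest tasks first
        start = heapq.heappop(free)                # earliest-available crew
        heapq.heappush(free, start + dur)
    return max(free)

tasks = [12, 8, 8, 6, 4, 2]       # invented engine maintenance tasks (hours)

serial = turnaround(tasks, crews=1)      # one crew: pure serial maintenance
parallel = turnaround(tasks, crews=3)    # three crews working in parallel
```

With one crew the six tasks run serially (40 hours); with three crews the same work completes in 14 hours — exactly the kind of labor/scheduling trade the abstract describes running through a DES model.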
Bacteriology turnaround time in seven Malaysian general hospitals.
Lim, V K; Cheong, Y M
1992-06-01
A turnaround time study was conducted for bacteriological culture tests in seven Malaysian general hospitals. The turnaround times were determined using a specially designed form that was completed by the ward staff. Doctors at these hospitals were also polled to find out whether they were satisfied with the promptness of bacteriological test reporting in their hospitals. The turnaround times obtained from this survey were found to be satisfactory, taking into account the constraints of the laboratory methods employed. Nevertheless, only about a third of the doctors expressed satisfaction with the timeliness of bacteriological test reporting. Doctors and microbiologists should get together and agree on acceptable standards of turnaround times that are practical and reasonable. PMID:1469917
Turnaround Time and Market Capacity in Contract Cheating
ERIC Educational Resources Information Center
Wallace, Melisa J.; Newton, Philip M.
2014-01-01
Contract cheating is the process whereby students auction off the opportunity for others to complete assignments for them. It is an apparently widespread yet under-researched problem. One suggested strategy to prevent contract cheating is to shorten the turnaround time between the release of assignment details and the submission date, thus making…
Factors that impact turnaround time of surgical pathology specimens in an academic institution.
Patel, Samip; Smith, Jennifer B; Kurbatova, Ekaterina; Guarner, Jeannette
2012-09-01
Turnaround time of laboratory results is important for customer satisfaction. The College of American Pathologists' checklist requires an analytic turnaround time of 2 days or less for most routine cases and lets every hospital define what a routine specimen is. The objective of this study was to analyze which factors impact the turnaround time of nonbiopsy surgical pathology specimens. We calculated the turnaround time from receipt to verification of results (adjusted for weekends and holidays) for all nonbiopsy surgical specimens during a 2-week period. Factors studied included tissue type, number of slides per case, decalcification, immunohistochemistry, consultations with other pathologists, and diagnosis. Univariate and multivariate analyses were performed. A total of 713 specimens were analyzed; 551 (77%) were verified within 2 days and 162 (23%) in 3 days or more. Lung, gastrointestinal, breast, and genitourinary specimens showed the highest percentage of cases being signed out in 3 days or more. Diagnosis of malignancy (including staging of the neoplasia), consultation with other pathologists, having had a frozen section, and use of immunohistochemical stains were significantly associated with increased turnaround time in univariate analysis. Decalcification was not associated with increased turnaround time. In multivariate analysis, consultation with other pathologists, use of immunohistochemistry, diagnosis of malignancy, and the number of slides studied continued to be significantly associated with prolonged turnaround time. Our findings suggest that diagnosis of malignancy is central to significantly prolonging the turnaround time for surgical pathology specimens; thus, institutions that serve cancer centers will have longer turnaround times than those that do not. PMID:22406373
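The weekend adjustment mentioned above can be sketched as a small helper (a hypothetical illustration, not the authors' actual method; holidays are omitted for brevity):

```python
from datetime import date, timedelta

def adjusted_tat_days(received: date, verified: date) -> int:
    """Turnaround in days from receipt to verification, counting only
    weekdays (Mon-Fri) so weekend days are not charged to the laboratory."""
    days, d = 0, received
    while d < verified:
        d += timedelta(days=1)
        if d.weekday() < 5:       # 0=Mon .. 4=Fri; 5/6 are Sat/Sun
            days += 1
    return days

# A specimen received on Friday and verified the following Monday
# counts as a 1-day turnaround (only Monday is a weekday step).
example = adjusted_tat_days(date(2012, 9, 7), date(2012, 9, 10))
```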
Improving medical imaging report turnaround times: the role of technology.
Marquez, Luis O; Stewart, Howard
2005-01-01
At Southern Ohio Medical Center (SOMC), the medical imaging department and the radiologists expressed a strong desire to improve workflow. The improved workflow was a major motivating factor toward implementing a new RIS and speech recognition technology. The need to monitor workflow in a real-time fashion and to evaluate productivity and resources necessitated that a new solution be found. A decision was made to roll out both the new RIS product and speech recognition to maximize the resources to interface and implement the new solution. Prior to implementation of the new RIS, the medical imaging department operated in a conventional electronic-order-entry to paper-request manner. The paper request followed the study through exam completion to the radiologist. SOMC entered into a contract with its PACS vendor to participate in beta testing and clinical trials for a new RIS product for the US market. Backup plans were created in the event the product failed to function as planned--either during the beta testing period or during clinical trials. The last piece of the technology puzzle to improve report turnaround time was voice recognition technology. Speech recognition enhanced the RIS technology as soon as it was implemented. The results show that the project has been a success. The new RIS, combined with speech recognition and the PACS, makes for a very effective solution to patient, exam, and results management in the medical imaging department. PMID:15898577
24 CFR 901.10 - Indicator #1, vacancy rate and unit turnaround time.
Code of Federal Regulations, 2010 CFR
2010-04-01
...Indicator #1, vacancy rate and unit turnaround time. 901.10 Section 901.10 Housing and Urban Development Regulations Relating...DEVELOPMENT PUBLIC HOUSING MANAGEMENT ASSESSMENT PROGRAM § 901.10 Indicator #1, vacancy rate and unit...
A strategy for reducing turnaround time in design optimization using a distributed computer system
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Padula, Sharon L.; Rogers, James L.
1988-01-01
There is a need to explore methods for reducing the lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.
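The decomposition strategy can be sketched in a few lines (a minimal illustration, assuming each design point can be analyzed independently; `analyze` is a stand-in for a call to validated analysis software):

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(case):
    """Stand-in for one run of a validated analysis code on one design point.
    A real structural analysis would invoke external software here."""
    thickness, load = case
    return load / thickness          # toy stress-like metric

cases = [(2.0, 100.0), (4.0, 100.0), (5.0, 250.0)]   # invented design points

# Serial execution: one analysis after another on a single machine.
serial_results = [analyze(c) for c in cases]

# Distributed execution: the independent analyses are farmed out
# concurrently, mimicking portions of the computation done in parallel
# across a network of smaller computers.
with ThreadPoolExecutor(max_workers=3) as pool:
    parallel_results = list(pool.map(analyze, cases))
```

The answers are identical; only the clock time changes, which is the point of the decomposition.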
2011-01-01
Background Assessing turnaround times can help to analyse workflows in hospital information systems. This paper presents a systematic review of literature concerning different turnaround time definitions. Our objectives were to collect relevant literature with respect to this kind of process time in hospitals and their respective domains. We then analysed the existing definitions and summarised them in an appropriate format. Methods Our search strategy was based on Pubmed queries and manual reviews of the bibliographies of retrieved articles. Studies were included if precise definitions of turnaround times were available. A generic timeline was designed through a consensus process to provide an overview of these definitions. Results More than 1000 articles were screened, of which 122 papers were included. From these papers, 162 turnaround time definitions in different clinical domains were identified. Starting and end points vary between these domains. To illustrate those turnaround time definitions, a generic timeline was constructed using preferred terms derived from the identified definitions. The consensus process resulted in the following 15 terms: admission, order, biopsy/examination, receipt of specimen in laboratory, procedure completion, interpretation, dictation, transcription, verification, report available, delivery, physician views report, treatment, discharge and discharge letter sent. Based on this analysis, several standard terms for turnaround time definitions are proposed. Conclusion Using turnaround times to benchmark clinical workflows is still difficult, because even within the same clinical domain many different definitions exist. Mapping of turnaround time definitions to a generic timeline is feasible. PMID:21609424
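On such a generic timeline, any turnaround time is simply the interval between two named events. A minimal sketch (the timestamps are invented; the event names follow the preferred terms listed above):

```python
from datetime import datetime

# Invented timestamps for one specimen, keyed by generic-timeline terms.
events = {
    "order":                 datetime(2011, 3, 1,  8, 15),
    "receipt_in_laboratory": datetime(2011, 3, 1,  9, 40),
    "verification":          datetime(2011, 3, 1, 16, 10),
    "report_available":      datetime(2011, 3, 1, 16, 25),
}

def tat_minutes(events, start, end):
    """Turnaround between two named timeline points, in whole minutes."""
    return int((events[end] - events[start]).total_seconds() // 60)

# Two different "turnaround times" for the same specimen, depending on
# which start and end points a study chooses:
lab_tat   = tat_minutes(events, "receipt_in_laboratory", "verification")
total_tat = tat_minutes(events, "order", "report_available")
```

The two numbers differ substantially, which is exactly why benchmarking across studies requires agreeing on the start and end points.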
Plaza, Stephen M.; Scheffer, Louis K.; Saunders, Mathew
2012-01-01
The ability to automatically segment an image into distinct regions is a critical aspect in many visual processing applications. Because inaccuracies often exist in automatic segmentation, manual segmentation is necessary in some application domains to correct mistakes, such as in the reconstruction of neuronal processes from microscopic images. The goal of the automated segmentation tool is traditionally to produce the highest-quality segmentation, where quality is measured by similarity to actual ground truth, so as to minimize the volume of manual correction necessary. Manual correction is generally orders of magnitude more time consuming than automated segmentation, often making the handling of large images intractable. Therefore, we propose a more relevant goal: minimizing the turn-around time of combined automated/manual segmentation while attaining a target level of similarity with ground truth. It is not always necessary to inspect every aspect of an image to generate a useful segmentation. As such, we propose a strategy that guides manual correction to the most uncertain parts of the segmentation. Our contributions include 1) a probabilistic measure that evaluates segmentation without ground truth and 2) a methodology that leverages these probabilistic measures to significantly reduce manual correction while maintaining segmentation quality. PMID:23028540
NASA Astrophysics Data System (ADS)
Peterson, D. R.; Price, J. H.; Upshaw, J. L.; Weldon, W. F.; Zowarka, R. C., Jr.
1991-01-01
Improvements to heavy-duty, explosively operated opening and closing switches to reduce component cost, installation cost, and turnaround time without sacrificing reliability are discussed. Heavy-duty opening and closing switches operated by small explosive charges (50 g or less) are essential to operation of the 60-MJ Balcones power supply. The six independent modules can be discharged sequentially. Each delayed inductor must be isolated from the railgun circuit with a heavy-duty closing switch capable of carrying megaampere currents of millisecond duration. Similar closing switches are used to crowbar the railgun as the projectile approaches the muzzle. Efficient switching is crucial to efficient conversion: rotor kinetic energy to stored inductive energy with about 50 percent efficiency; stored inductive energy to projectile kinetic energy with about 30 percent efficiency. The switches must operate with a precision and repeatability of 10^-5 s, which is readily achievable with explosives. The opening switches must be structurally and thermally capable of carrying megaampere currents for more than 100 ms, must develop 10 kV on opening, must stay open for 0.01 s, and must safely and reliably dissipate megajoules of inductive energy in the event of a fault, a failure of the switch to operate, or an attempt to commutate into an open circuit. An example of the severe switching requirements is presented.
Average-Time Games on Timed Automata
Jurdzinski, Marcin
2009-01-01
An average-time game is played on the infinite graph of configurations of a finite timed automaton. The two players, Min and Max, construct an infinite run of the automaton by taking turns to perform a timed transition. Player Min wants to minimise the average time per transition and player Max wants to maximise it. A solution of average-time games is presented using a reduction to an average-price game on a finite graph. A direct consequence is an elementary proof of determinacy for average-time games. This complements our results for reachability-time games and partially solves a problem posed by Bouyer et al., to design an algorithm for solving average-price games on priced timed automata. The paper also establishes the exact computational complexity of solving average-time games: the problem is EXPTIME-complete for timed automata with at least two clocks.
Preparing printed circuit boards for rapid turn-around time on a plotter
Hawtree, J.
1998-01-01
This document describes the use of the LPKF ProtoMat mill/drill circuit board plotter, with the associated CAD/CAM software BoardMaster and CircuitCAM. At present its primary use at Fermilab's Particle Physics Department is for rapid turnaround of prototype double-sided and single-sided copper-clad printed circuit boards (PCBs). (The plotter is also capable of producing gravure films and engraving aluminum or plastic, although we have not used it for this.) It has the capability of making traces 0.004 inch wide with 0.004 inch spacings, which is appropriate for high-density surface-mount circuits as well as other through-mounted discrete and integrated components. One of the primary benefits of the plotter is the capability to produce double-sided drilled boards from CAD files in a few hours. However, to achieve this rapid turn-around time, some care must be taken in preparing the files. This document describes how to optimize the process of PCB fabrication. With proper preparation, researchers can often have a completed circuit board in a day's time instead of the week-or-two wait with usual procedures. It is assumed that the software and hardware are properly installed and that the machinist is acquainted with the Win95 operating system and the basics of the associated software. This paper does not describe its use with pen plotters, lasers or rubouts. The process of creating a PCB begins with the CAD (computer-aided design) software, usually PCAD or VeriBest. These files are then moved to CAM (computer-aided machining) software, where they are edited and converted into the proper format for running on the ProtoMat plotter. The plotter then performs the actual machining of the board. This document concentrates on the LPKF programs CircuitCAM BASIS and BoardMaster for the CAM software. These programs run on a Windows 95 platform driving an LPKF ProtoMat 93s plotter.
Turnaround in Cyclic Cosmology
Baum, Lauris; Frampton, Paul H. [University of North Carolina, Chapel Hill, North Carolina 27599-3255 (United States)]
2007-02-16
It is speculated how dark energy in a brane world can help reconcile an infinitely cyclic cosmology with the second law of thermodynamics. A cyclic model is described in which dark energy with a w&lt;-1 equation of state leads to a turnaround extremely shortly before the would-be big rip, at which both the volume and the entropy of our Universe decrease by a gigantic factor, while very many independent, similarly small contracting universes are spawned. The entropy of our model decreases almost to zero at turnaround and increases during the remainder of the cycle: by a vanishingly small amount during the matter-empty contraction, then by a large factor during inflationary expansion.
Ambulance handovers: can a dedicated ED nurse solve the delay in ambulance turnaround times?
Clarey, A; Allen, M; Brace-McDonnell, S; Cooke, M W
2014-05-01
With ever increasing concern over ambulance handover delays, this paper looks at the impact of dedicated A&E nurses for ambulance handovers and the effect they can have on ambulance waiting times. It demonstrates that such roles can bring about reduced waiting times, but it also suggests that using this as the sole method to achieve these targets would require unacceptably low staff utilisation. PMID:23636604
Random time averaged diffusivities for Lévy walks
NASA Astrophysics Data System (ADS)
Froemberg, D.; Barkai, E.
2013-07-01
We investigate a Lévy walk alternating between velocities +v0 and -v0. The sojourn time probability distribution at large times is a power law lacking either its mean or its second moment. The first case corresponds to a ballistic regime where the ensemble averaged mean squared displacement (MSD) at large times is ⟨x²⟩ ∼ t²; the latter corresponds to enhanced diffusion with ⟨x²⟩ ∼ t^ν, 1 &lt; ν &lt; 2. The correlation function and the time averaged MSD are calculated. In the ballistic case, the deviations of the time averaged MSD from a purely ballistic behavior are shown to be distributed according to a Mittag-Leffler density function. In the enhanced diffusion regime, the fluctuations of the time averaged MSD vanish at large times, yet very slowly. In both cases we quantify the discrepancy between the time averaged and ensemble averaged MSDs.
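A single-trajectory time averaged MSD of the kind analyzed here can be estimated numerically. A minimal sketch (parameters are illustrative; the standard-library `paretovariate` supplies the power-law sojourn times):

```python
import random

random.seed(1)

def levy_walk_positions(n_steps, v0=1.0, alpha=1.5):
    """x(t) on the unit time grid for a Levy walk alternating between
    velocities +v0 and -v0, with Pareto (index alpha) sojourn times.
    For 1 < alpha < 2 the mean sojourn time is finite but the second
    moment diverges (the enhanced-diffusion regime)."""
    xs, x, v = [0.0], 0.0, v0
    remaining = random.paretovariate(alpha)    # time left in current flight
    for _ in range(n_steps):
        dt = 1.0
        while remaining < dt:                  # flight ends inside this step
            x += v * remaining
            dt -= remaining
            v = -v                             # reverse velocity
            remaining = random.paretovariate(alpha)
        x += v * dt
        remaining -= dt
        xs.append(x)
    return xs

def time_averaged_msd(xs, lag):
    """delta^2(lag): average of [x(t+lag) - x(t)]^2 along one trajectory."""
    diffs = [(xs[i + lag] - xs[i]) ** 2 for i in range(len(xs) - lag)]
    return sum(diffs) / len(diffs)

xs = levy_walk_positions(10_000)
msd_10 = time_averaged_msd(xs, 10)
```

Because the speed is fixed at v0, the displacement over any lag Δ is bounded by v0·Δ, so the time averaged MSD can never exceed (v0·Δ)².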
Ways to reduce patient turnaround time and improve service quality in emergency departments
David Sinreich; Yariv Marmor
2005-01-01
Purpose – Recent years have witnessed a fundamental change in the function of emergency departments (EDs). The emphasis of the ED has shifted from triage to saving lives in shock-trauma rooms equipped with state-of-the-art equipment. At the same time, walk-in clinics are being set up to treat ambulatory-type patients. Simultaneously, ED overcrowding has become a common sight in many
Correlated continuous time random walk with time averaged waiting time
NASA Astrophysics Data System (ADS)
Lv, Longjin; Ren, Fu-Yao; Wang, Jun; Xiao, Jianbin
2015-03-01
In this paper, we study the dynamics of a correlated continuous time random walk with time averaged waiting time. The mean square displacement (MSD) shows that this process is subdiffusive and that the generalized Einstein relation holds. We also show that the asymptotic behavior of the probability density function (PDF) of this process is stretched Gaussian. Finally, by computing the time averaged MSD, we find that ergodicity breaking occurs in this process.
Rosenkrantz, Andrew B; Bonavita, John A; Foran, Mark P; Matza, Brent W; McMenamy, John M
2014-02-01
The aim of this study was to assess for an association between radiologists' turnaround time (TAT) and report quality for emergency department (ED) abdominopelvic CT examinations. Reports of 60 consecutive ED abdominopelvic CT studies from five abdominal radiologists (300 total reports) were included. An ED radiologist, abdominal radiologist, and ED physician independently evaluated satisfaction with report content (1-10 scale), satisfaction with report clarity (1-10 scale), and the extent to which the report advanced the patient on a previously published clinical spectrum scale (1-5 scale). TAT (time between completion of imaging and completion of the final report) and report quality were compared between radiologists using unpaired t tests; associations between TAT and report quality scores for individual radiologists were assessed using Pearson's correlation coefficients. The five radiologists' mean TAT varied from 35 to 53 min. There were significant differences in report content in half of comparisons between radiologists by observer 1 (p ≤ 0.032) and in a minority of comparisons by observer 2 (p ≤ 0.047), in report clarity in a majority of comparisons by observer 1 (p ≤ 0.031) and in a minority of comparisons by observer 2 (p ≤ 0.010), and in impact on patient care in a minority of comparisons for all observers (p ≤ 0.047). There were weak positive correlations between TAT and report content and clarity for three radiologists for observer 1 (r = 0.270-0.362) and no correlation between TAT and any report quality measure for the remaining combinations of the five radiologists and three observers (r = -0.197 to +0.181). While both TAT and report quality vary between radiologists, these two factors were not associated for individual radiologists. PMID:24197655
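The per-radiologist association was assessed with Pearson's correlation coefficient; a minimal sketch of that computation (the data below are invented for illustration, not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-report data for one radiologist: TAT (minutes) vs
# clarity score (1-10). Values are illustrative only.
tat     = [35, 42, 47, 51, 53]
clarity = [7,  8,  6,  8,  7]
r = pearson_r(tat, clarity)
```

An |r| near zero, as here, is the "no correlation" pattern the study reports for most radiologist/observer combinations.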
Statistics of time averaged atmospheric scintillation
Stroud, P.
1994-02-01
A formulation has been constructed to recover the statistics of the moving average of the scintillation Strehl from a discrete set of measurements. A program of airborne atmospheric propagation measurements was analyzed to find the correlation function of the relative intensity over displaced propagation paths. The variance in continuous moving averages of the relative intensity was then found in terms of the correlation functions. An empirical formulation of the variance of the continuous moving average of the scintillation Strehl has been constructed. The resulting characterization of the variance of the finite time averaged Strehl ratios is being used to assess the performance of an airborne laser system.
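The central relation, that the variance of a continuous moving average follows from the correlation function via a double integral over the averaging window, can be checked numerically. A sketch assuming an exponential correlation function (the empirical correlation in the study differs; parameters are illustrative):

```python
import math

sigma2, tau_c, T = 1.0, 0.5, 5.0   # variance, correlation time, window length

def corr(tau):
    """Assumed exponential correlation of the relative intensity."""
    return sigma2 * math.exp(-abs(tau) / tau_c)

# Variance of the finite-time average, from the double integral
# Var = (1/T^2) * integral_0^T integral_0^T C(t - t') dt dt'
n = 400
h = T / n
var_numeric = sum(corr((i - j) * h) for i in range(n)
                  for j in range(n)) * h * h / T**2

# Closed form for the exponential correlation function:
# Var = 2 * sigma2 * (tau_c/T) * [1 - (tau_c/T) * (1 - exp(-T/tau_c))]
E = math.exp(-T / tau_c)
var_exact = 2 * sigma2 * (tau_c / T) * (1 - (tau_c / T) * (1 - E))
```

The variance of the T-averaged signal is much smaller than the instantaneous variance, and shrinks as the window grows relative to the correlation time.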
Spatial averaging of time-frequency distributions
Yimin Zhang; Moeness G. Amin
1999-01-01
This paper presents a novel approach based on time-frequency distributions (TFDs) for separating signals received by a multiple antenna array. This approach provides a significant improvement in performance over the previously introduced spatial time-frequency distributions, specifically for signals with close time-frequency signatures. In this approach, spatial averaging of the time-frequency distributions of the sensor data is performed to eliminate the
Fairmont Hotels' Turnaround Strategy
W. Terry Umbreit
1996-01-01
In 1994 a wealthy investor purchased a controlling interest in Fairmont Hotels, but in 1991 there was no guarantee the five-property chain would even be in operation by 1994. Fairmont's turnaround began in 1991, when the owners, the Swig family, hired Robert Small, most recently of Walt Disney World hotels, to run the company. Small focused on a top-line strategy
Turnaround Momentum Still Fragile
ERIC Educational Resources Information Center
Klein, Alyson
2012-01-01
The federal program providing billions of dollars to help states and districts close or remake some of their worst-performing schools remains a work in progress after two years, with more than 1,200 turnaround efforts under way but still no definitive verdict on its effectiveness. The School Improvement Grant (SIG) program, supercharged by a…
Time-averaging of
Université de Paris-Sud XI
2006
Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong, PRC; Department of Chemical Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong, PRC. Received: 17 May. …distributions presented in the literature are mostly averages of long scan-time (30 s or more) spectra…
Time-dependent angularly averaged inverse transport
Guillaume Bal; Alexandre Jollivet
2009-05-07
This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain.
Edwards, Thomas
2005-09-01
The Analytical Development Section (ADS) and the Statistical Consulting Section (SCS) of the Savannah River National Laboratory (SRNL) are participating in a Six Sigma initiative to improve the Defense Waste Processing Facility (DWPF) Laboratory. The Six Sigma initiative has focused on reducing the analytical turnaround time of samples from the Slurry Mix Evaporator (SME) by developing streamlined sampling and analytical methods [1]. The objective of Phase I was to evaluate the sub-sampling of a larger sample bottle and the performance of a cesium carbonate (Cs₂CO₃) digestion method. Successful implementation of the Cs₂CO₃ fusion method in the DWPF would have important time savings and convenience benefits because this single digestion would replace the dual digestion scheme now used. A single digestion scheme would result in more efficient operations in both the DWPF shielded cells and the inductively coupled plasma--atomic emission spectroscopy (ICP-AES) laboratory. By taking a small aliquot of SME slurry from a large sample bottle and dissolving the vitrified SME sample with carbonate fusion methods, an analytical turnaround time reduction from 27 hours to 9 hours could be realized in the DWPF. This analytical scheme has the potential for not only dramatically reducing turnaround times, but also streamlining operations to minimize wear and tear on critical shielded cell components that are prone to fail, including the Hydragard™ sampling valves and manipulators. Favorable results from the Phase I tests [2] led to the recommendation for a Phase II effort as outlined in the DWPF Technical Task Request (TTR) [3]. There were three major tasks outlined in the TTR, and SRNL issued a Task Technical and QA Plan [4] with a corresponding set of three major task activities: (1) Compare weight percent (wt%) total solids measurements of large volume samples versus peanut vial samples. (2) Evaluate Cs₂CO₃ and K₂CO₃ fusion methods using DWPF simulated glass (part 1) and (to the extent possible) using DWPF radioactive glass (part 2). (3) Recommend specific equipment to implement the new methods and work with SRNL's Equipment Engineering Section (EES) to ensure compatibility of the equipment with the DWPF shielded cells. The focus of this report is major task activity No. 2 (parts 1 and 2). The other two major task activities are to be addressed separately. The measurements supporting task activity No. 1 are being conducted at the DWPF. EES is currently designing and fabricating the DWPF cell equipment needed for the new method as part of major task activity No. 3.
Adam, Heather J.; Karlowsky, James A.; Nichol, Kimberly A.; Pang, Paulette F.; Guenther, Jodi; Webb, Amanda A.; Miller, Crystal; Alfa, Michelle J.
2012-01-01
Matrix-assisted laser desorption ionization–time of flight (MALDI-TOF) mass spectrometry represents a revolution in the rapid identification of bacterial and fungal pathogens in the clinical microbiology laboratory. Recently, MALDI-TOF has been applied directly to positive blood culture bottles for the rapid identification of pathogens, leading to reductions in turnaround time and potentially beneficial patient impacts. The development of a commercially available extraction kit (Bruker Sepsityper) for use with the Bruker MALDI BioTyper has facilitated the processing required for identification of pathogens directly from positive blood cultures. We report the results of an evaluation of the accuracy, cost, and turnaround time of this method for 61 positive monomicrobial and 2 polymicrobial cultures representing 26 species. The Bruker MALDI BioTyper with the Sepsityper gave a valid (score >1.7) identification for 85.2% of positive blood cultures with no misidentifications. The mean reduction in turnaround time to identification was 34.3 h (P &lt; 0.0001) in the ideal situation where MALDI-TOF was used for all blood cultures and 26.5 h in a more practical setting where conventional identification or identification from subcultures was required for isolates that could not be directly identified by MALDI-TOF. Implementation of a MALDI-TOF-based identification system for direct identification of pathogens from blood cultures is expected to be associated with a marginal increase in operating costs for most laboratories. However, the use of MALDI-TOF for direct identification is accurate and should result in reduced turnaround time to identification. PMID:22875888
Making Sense of School Turnarounds
ERIC Educational Resources Information Center
Hess, Frederick M.
2012-01-01
Today, in a sector flooded with $3.5 billion in School Improvement Grant funds and the resulting improvement plans, there's great faith that "turnaround" strategies are a promising way to tackle stubborn problems with persistently low-performing schools. Unlike traditional reform efforts, with their emphasis on incremental improvement, turnarounds…
Off the Clock: What More Time Can (and Can't) Do for School Turnarounds. Education Sector Reports
ERIC Educational Resources Information Center
Silva, Elena
2012-01-01
If less time in the classroom is a cause of poor student performance, can adding more time be the cure? This strategy underlies a major effort to fix the nation's worst public schools. Billions of federal stimulus dollars are being spent to expand learning time on behalf of disadvantaged children. And extended learning time (ELT) is being proposed…
R. Lee Lyman
2003-01-01
Use of models derived from foraging theory to explain variation in prey-abundance indices over time and space, evidenced in the zooarchaeological record, is common in western North America. Such use presumes that models derived from ecologically scaled observations are applicable to archaeologically scaled observations. The influence of time averaging and space averaging, whether inherent in the zooarchaeological record or resulting
Ensemble averaging vs. time averaging in molecular dynamics simulations of thermal conductivity
NASA Astrophysics Data System (ADS)
Gordiz, Kiarash; Singh, David J.; Henry, Asegun
2015-01-01
In this report, we compare time averaging and ensemble averaging as two different methods for phase space sampling in molecular dynamics (MD) calculations of thermal conductivity. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium MD. We introduce two different schemes for the ensemble averaging approach and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical MD, the approaches used for generating independent trajectories may find their greatest utility in computationally expensive simulations such as first principles MD. For such simulations, where each time step is costly, time averaging can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each trajectory is independent. For this reason, particularly when using massively parallel architectures, ensemble averaging can result in much shorter simulation times (~100-200X), but exhibits similar overall computational effort.
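The trade-off can be illustrated with a cheap surrogate process in place of MD (an AR(1) chain standing in for a correlated equilibrium observable; entirely illustrative, not the paper's method):

```python
import random

def ar1_trajectory(n, seed, phi=0.9):
    """AR(1) surrogate for a correlated equilibrium observable:
    x_{t+1} = phi * x_t + noise. Returns samples of x^2, whose
    stationary mean is 1 / (1 - phi^2)."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x * x)
    return out

# Time averaging: one long trajectory, generated sequentially.
long_run = ar1_trajectory(200_000, seed=0)
time_avg = sum(long_run) / len(long_run)

# Ensemble averaging: many short, independent trajectories, each of
# which could run on its own processor in parallel.
short_runs = [ar1_trajectory(2_000, seed=s) for s in range(100)]
ens_avg = sum(sum(r) for r in short_runs) / sum(len(r) for r in short_runs)

exact = 1.0 / (1.0 - 0.9 ** 2)   # stationary mean of x^2
```

The two estimates agree with the exact value, but the ensemble trajectories are independent and parallelizable, whereas the long run must be generated one step after another.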
Improving the Average Response Time in Collective I/O
Jin, Chen; Sehrish, Saba; Liao, Wei-keng; Choudhary, Alok; Schuchardt, Karen L.
2011-09-21
In collective I/O, MPI processes exchange requests so that the rearranged requests can result in the shortest file system access time. Scheduling the exchange sequence determines the response time of participating processes. Existing implementations that simply follow the increasing order of file offsets do not necessarily produce the best performance. To minimize the average response time, we propose three scheduling algorithms that consider the number of processes per file stripe and the number of accesses per process. Our experimental results demonstrate improvements of up to 50% in the average response time using two synthetic benchmarks and a high-resolution climate application.
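The intuition behind why request ordering changes average response time can be sketched with the classical shortest-request-first rule. This is only the textbook baseline, not the paper's stripe-aware algorithms, and the per-request costs below are hypothetical.

```python
def avg_response_time(costs):
    """Average completion (response) time when requests are served
    one after another in the given order."""
    t, total = 0, 0
    for c in costs:
        t += c          # this request finishes at the running total
        total += t
    return total / len(costs)

requests = [8, 1, 5, 2]                     # hypothetical access costs
fifo = avg_response_time(requests)          # serve in arrival order
sjf = avg_response_time(sorted(requests))   # shortest-request-first

print(fifo, sjf)  # → 11.75 7.0
```

Serving short requests first lets many processes finish early, pulling the average down even though the total work is identical; the paper's contribution is choosing such an order while accounting for file stripes and per-process access counts.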
Time-average and time-dependent parts of core flow
Olson, Peter L.
Time-average and time-dependent parts of core flow. Hagay Amit and Peter Olson. April 11, 2005. ...geostrophic flow. We obtain snapshot images of core flow at fixed year intervals, which we combine to give time-average and time-dependent parts of the motion over this time interval. The most prominent time-average flow
Scaling of average receiving time and average weighted shortest path on weighted Koch networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Chen, Dandan; Dong, Yujuan; Liu, Jie
2012-12-01
In this paper we present weighted Koch networks based on classic Koch networks. A new method is used to determine the average receiving time (ART), whose key step is to write the sum of mean first-passage times (MFPTs) for all nodes to absorption at the trap located at a hub node as a recursive relation. We show that the ART exhibits a sublinear or linear dependence on network order. Thus, the weighted Koch networks are more efficient than classic Koch networks in receiving information. Moreover, average weighted shortest path (AWSP) is calculated. In the infinite network order limit, the AWSP depends on the scaling factor. The weighted Koch network grows unbounded but with the logarithm of the network size, while the weighted shortest paths stay bounded.
Trapping ultracold atoms in a time-averaged adiabatic potential
Gildemeister, M.; Nugent, E.; Sherlock, B. E.; Kubasik, M.; Sheard, B. T.; Foot, C. J. [Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU (United Kingdom)
2010-03-15
We report an experimental realization of ultracold atoms confined in a time-averaged, adiabatic potential (TAAP). This trapping technique involves using a slowly oscillating (~kHz) bias field to time-average the instantaneous potential given by dressing a bare magnetic potential with a high-frequency (~MHz) magnetic field. The resultant potentials provide a convenient route to a variety of trapping geometries with tunable parameters. We demonstrate the TAAP trap in a standard time-averaged orbiting potential trap with additional Helmholtz coils for the introduction of the radio frequency dressing field. We have evaporatively cooled 5×10^4 atoms of 87Rb to quantum degeneracy and observed condensate lifetimes of longer than 3 s.
Combining Quick-Turnaround and Batch Workloads at Scale
NASA Technical Reports Server (NTRS)
Matthews, Gregory A.
2012-01-01
NAS uses PBS Professional to schedule and manage the workload on Pleiades, an 11,000+ node InfiniBand cluster. At this scale the user experience for quick-turnaround jobs can degrade, which led NAS initially to set up two separate PBS servers, each dedicated to a particular workload. Recently we have employed PBS hooks and scheduler modifications to merge these workloads together under one PBS server, delivering sub-1-minute start times for the quick-turnaround workload, and enabling dynamic management of the resources set aside for that workload.
NASA Astrophysics Data System (ADS)
Shen, Hong; Yu, Longkun; Fan, Chengyu
2014-11-01
The general analytic expression for the temporal spectrum of atmospheric scintillation under weak turbulence conditions is derived. It was employed to study the characteristics of the temporal spectra for a horizontal uniform path and a whole-layer atmospheric non-uniform path. In the non-uniform path, the H-V turbulence model and the Gaussian wind speed model are utilized. It has been found that when the receiver diameter is larger than the Fresnel scale (λL)^(1/2), the temporal spectrum of the plane wave has a power-law behavior with a scaling index of -17/3 in the high-frequency range. The change of the turbulence strength has little influence on the shape of the temporal spectrum. Based on the characteristics of the temporal spectrum, the aperture-averaging and time-averaging effects on scintillation were analyzed in the frequency domain.
Scaling of average sending time on weighted Koch networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Liu, Jie
2012-10-01
Random walks on weighted complex networks, especially scale-free networks, have attracted considerable interest in the past. But the efficiency of a hub sending information on scale-free small-world networks has been addressed less. In this paper, we study random walks on a class of weighted Koch networks with scaling factor 0 < r ≤ 1. We derive some basic properties for random walks on the weighted Koch networks, based on which we calculate analytically the average sending time (AST) defined as the average of mean first-passage times (MFPTs) from a hub node to all other nodes, excluding the hub itself. The obtained result displays that for 0 < r < 1 in large networks the AST grows as a power-law function of the network order with the exponent, represented by log_4[(3r+1)/r], and for r = 1 in large networks the AST grows with network order as N ln N, which is larger than the linear scaling of the average receiving time defined as the average of MFPTs for random walks to a given hub node averaged over all starting points.
An Illustration of the Average Exit Time Measure of Poverty
John Gibson; Susan Olivia; Jel I O
The goal of the World Bank is ‘a world free of poverty’, but the most widely used poverty measures do not show when poverty might be eliminated. The ‘head-count index’ simply counts the poor, while the ‘poverty gap index’ shows their average shortfall from the poverty line. Neither measure reflects changes in the distribution of incomes amongst the poor, but squaring the poverty gap brings sensitivity to inequality, albeit at the cost of intuitive interpretation. This paper illustrates a new measure of poverty [Morduch, J., 1998, Poverty, economic growth and average exit time, Economics Letters, 59: 385-390]. This new poverty measure is distributionally sensitive and has a ready interpretation as the average time taken to exit poverty with a constant and uniform growth rate.
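Morduch's average exit time has a simple closed form: for a poor person with income y below the line z, and uniform growth rate g, the exit time is ln(z/y)/ln(1+g), with the non-poor contributing zero. The sketch below uses hypothetical incomes, poverty line, and growth rate; the averaging-over-everyone convention is one common reading of the measure, not a claim about this paper's exact implementation.

```python
import math

def average_exit_time(incomes, z, g):
    """Morduch (1998) average exit time: the years needed for each poor
    person's income y to reach the poverty line z under uniform growth g,
    ln(z/y)/ln(1+g); the non-poor contribute zero. Averaged over everyone."""
    times = [math.log(z / y) / math.log(1.0 + g) if y < z else 0.0
             for y in incomes]
    return sum(times) / len(times)

# Hypothetical incomes, a poverty line of 2.0, and 3% annual growth.
incomes = [0.5, 1.0, 1.5, 2.5, 4.0]
print(round(average_exit_time(incomes, z=2.0, g=0.03), 2))  # → 16.02
```

Because poorer individuals contribute disproportionately long exit times, the measure is distributionally sensitive in exactly the sense the abstract describes, while retaining an intuitive units-of-years interpretation.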
Combinatorial Algorithms for Data Migration to Minimize Average Completion Time
Gandhi, Rajiv C.
Combinatorial Algorithms for Data Migration to Minimize Average Completion Time. Rajiv Gandhi. ...approximation ratio of √2 ≈ 1.414, thus improving the 1.796-approximation given by Gandhi et al. (ACM Transactions on Algorithms, 2(1):116-129, 2006). While a migration is being performed, the storage system is running suboptimally; therefore it is important to compute a data migration schedule that converts the initial layout
Series Overview. Sustaining School Turnaround at Scale. Brief 1
ERIC Educational Resources Information Center
Education Resource Strategies, 2012
2012-01-01
Members of the non-profit organization Education Resource Strategies (ERS) have worked for over a decade with leaders of urban school systems to help them organize talent, time and technology to support great schools at scale. One year into the Federal program they are noticing significant differences in district turnaround approaches, engagement…
Critical Averaging Time for Atmospheric Boundary Layer Fluxes
NASA Astrophysics Data System (ADS)
Holmes, H.
2005-11-01
Calculation of heat and momentum fluxes in the Atmospheric Boundary Layer (ABL) requires separating the turbulent signal into mean and fluctuating components. Since the ABL is not statistically stationary, separation of these components depends on the inherent scales of motion in the flow. A new method is presented that utilizes energy spectra and cospectra analyses of raw velocity and temperature signals to select a critical averaging time, tc, for calculating the unsteady mean components of those signals. The new method is applied to high quality sonic anemometry data acquired at the Surface Layer Turbulence and Environmental Science Test (SLTEST) Facility located in Utah's western desert. Results for the unstable boundary layer show a correlation between tc and the characteristic time scale based on the ratio of mixed layer depth and convective velocity. Extension of the new method toward selection of a critical averaging time appropriate for the near-neutral boundary layer will also be discussed.
Turnaround Schools and the Leadership They Require
ERIC Educational Resources Information Center
Leithwood, Kenneth; Strauss, Tiiu
2008-01-01
"Turnaround Schools and the Leadership They Require" was commissioned by the Canadian Education Association (CEA) in 2006. The research, conducted by Kenneth Leithwood and Tiiu Strauss, was a two-year study of Ontario schools identified as in need of "turnaround" meaning that a significant improvement in school achievement was required. The study…
NASA Astrophysics Data System (ADS)
Wu, Zikai; Hou, Baoyu; Zhang, Hongjuan; Jin, Feng
2014-04-01
Deterministic network models have been attractive media for discussing the dependence of dynamical processes on network structural features. On the other hand, the heterogeneity of weights affects dynamical processes taking place on networks. In this paper, we present a family of weighted expanded Koch networks based on Koch networks. They originate from an r-polygon, and in each subsequent evolutionary step every node of the current generation produces m r-polygons that include the node and whose weighted edges are scaled by factor w. We derive closed-form expressions for the average weighted shortest path length (AWSP). In large networks, the AWSP stays bounded as the network order grows (0 < w < 1). Then, we focus on a special random walk and trapping issue on the networks. In more detail, we calculate exactly the average receiving time (ART). The ART exhibits a sub-linear dependence on network order (0 < w < 1), which implies that nontrivial weighted expanded Koch networks are more efficient than un-weighted expanded Koch networks in receiving information. Besides, the efficiency of receiving information at hub nodes also depends on the parameters m and r. These findings may pave the way for controlling information transportation on general weighted networks.
Time-averaged adiabatic ring potential for ultracold atoms
Sherlock, B. E.; Gildemeister, M.; Owen, E.; Nugent, E.; Foot, C. J. [Clarendon Laboratory, University of Oxford, Parks Road, Oxford, OX1 3PU (United Kingdom)
2011-04-15
We report the experimental realization of a versatile ring trap for ultracold atoms. The ring geometry is created by the time-averaged adiabatic potential resulting from the application of an oscillating magnetic bias field to a rf-dressed quadrupole trap. Lifetimes for a Bose-Einstein condensate in the ring exceed 11 s and the ring radius was continuously varied from 50 μm to 262 μm. An efficient method of loading the ring from a conventional time-averaged orbiting potential trap is presented together with a rotation scheme which introduces angular momentum into the system. The ring presents an opportunity to study the superfluid properties of a condensate in a multiply connected geometry and also has applications for matter-wave interferometry.
Average waiting time profiles of uniform DQDB model
Rao, N.S.V. [Oak Ridge National Lab., TN (United States); Maly, K.; Olariu, S.; Dharanikota, S.; Zhang, L.; Game, D. [Old Dominion Univ., Norfolk, VA (United States). Dept. of Computer Science
1993-09-07
The Distributed Queue Dual Bus (DQDB) system consists of a linear arrangement of N nodes that communicate with each other using two contra-flowing buses; the nodes use an extremely simple protocol to send messages on these buses. This simple, but elegant, system has been found to be very challenging to analyze. We consider a simple and uniform abstraction of this model to highlight the fairness issues in terms of average waiting time. We introduce a new approximation method to analyze the performance of DQDB system in terms of the average waiting time of a node expressed as a function of its position. Our approach abstracts the intimate relationship between the load of the system and its fairness characteristics, and explains all basic behavior profiles of DQDB observed in previous simulation. For the uniform DQDB with equal distance between adjacent nodes, we show that the system operates under three basic behavior profiles and a finite number of their combinations that depend on the load of the network. Consequently, the system is not fair at any load in terms of the average waiting times. In the vicinity of a critical load of 1 − 4/N, the uniform network runs into a state akin to chaos, where its behavior fluctuates from one extreme to the other with a load variation of 2/N. Our analysis is supported by simulation results. We also show that the main theme of the analysis carries over to the general (non-uniform) DQDB; by suitably choosing the inter-node distances, the DQDB can be made fair around some loads, but such system will become unfair as the load changes.
Analysis of 3-D vibrations from time-averaged holograms.
Tonin, R; Bies, D A
1978-12-01
In a previous paper [R. Tonin and D. A. Bies, J. Sound Vib. 52(3), 315 (1977)] the theory of time-averaged holographic interferometry was extended to include simple harmonic motion in three orthogonal directions at a single frequency. The amended characteristic function formula was used to calculate the radial and tangential components of a vibrating cylinder by first determining the radial component and from this the tangential component of vibration. In this paper the analysis of the previous paper is improved by making use of a technique originally introduced for the investigation of static deflection using time-averaged holography [S. K. Dhir and J. P. Sikora, Exp. Mech. 12(7), 323 (1972)]. The improved procedure allows simultaneous determination of all vibration amplitude components. The procedure is used for the investigation of the low order resonant vibration modes of four cylinders of various sizes and materials with shear-diaphragm end conditions with good results. The procedure is quite general in its application and not restricted to the study of cylinders. It lends itself easily to the study of coupled-mode vibration problems and in fact many complex resonance phenomena. PMID:20208598
Comment on "Turnaround in Cyclic Cosmology"
Xin Zhang
2008-04-02
We comment on a recent paper by L. Baum and P. H. Frampton [Phys. Rev. Lett. 98, 071301 (2007)] where it was argued that the entropy problem can be resolved in a peculiar cyclic universe model through a deflation mechanism (i.e., the universe is fragmented into many disconnected causal patches at the turnaround). We point out that in this cyclic model the Hubble length will become infinite at the turnaround, thus the deflation scenario is not valid.
Time Series ARIMA Models of Undergraduate Grade Point Average.
ERIC Educational Resources Information Center
Rogers, Bruce G.
The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
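The AR component at the heart of such models can be estimated very simply. The sketch below fits only a pure AR(1) by least squares on a hypothetical GPA-like series; it stands in for, and is much simpler than, the full Box-Jenkins identification-estimation-diagnosis cycle the abstract describes.

```python
def fit_ar1(series):
    """Least-squares estimate of phi in x_t = phi*x_{t-1} + e_t,
    the simplest autoregressive piece of an ARIMA model.
    The series is mean-centered first."""
    mean = sum(series) / len(series)
    x = [v - mean for v in series]
    # Regress x_t on x_{t-1}: phi = sum(x_t * x_{t-1}) / sum(x_{t-1}^2)
    num = sum(a * b for a, b in zip(x[1:], x[:-1]))
    den = sum(a * a for a in x[:-1])
    return num / den

# A hypothetical GPA-like series with visible persistence.
gpa = [2.8, 2.9, 3.0, 2.95, 3.05, 3.1, 3.08, 3.15, 3.2, 3.18]
phi = fit_ar1(gpa)
print(round(phi, 2))
```

A phi estimate well above zero indicates the sequential dependence that makes Box-Jenkins models appropriate for such data; in practice the "I" (differencing) and "MA" terms, and the diagnosis stage, would be handled by a statistics package rather than by hand.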
Relegation and Reversion: Longitudinal Analysis of School Turnaround and Decline
ERIC Educational Resources Information Center
Hochbein, Craig
2012-01-01
School turnaround has become a popular strategy for improvement of chronically low-performing schools. Research related to school turnaround has relied substantially upon case studies. These case studies often focus on successful turnarounds and report immediate outcomes, failing to provide information about the sustainability of the results. In…
H. S. Behera; Simpi Patel; Bijayalakshmi Panda
2011-01-01
The main objective of the paper is to improve the Round Robin (RR) algorithm using dynamic ITS by coalescing it with the Shortest Remaining Time Next (SRTN) algorithm, thus reducing the average waiting time, average turnaround time, and the number of context switches. The original time slice has been calculated for each process based on its burst time. This is mostly suited
In vivo T(2) relaxation time measurement with echo-time averaging.
Prescot, Andrew P; Shi, Xianfeng; Choi, Changho; Renshaw, Perry F
2014-08-01
The accuracy of metabolite concentrations measured using in vivo proton (1H) MRS is enhanced following correction for spin-spin (T2) relaxation effects. In addition, metabolite proton T2 relaxation times provide unique information regarding cellular environment and molecular mobility. Echo-time (TE) averaging 1H MRS involves the collection and averaging of multiple TE steps, which greatly simplifies the resulting spectra due to the attenuation of spin-coupled and macromolecule resonances. Given the simplified spectral appearance and inherent metabolite T2 relaxation information, the aim of the present proof-of-concept study was to develop a novel data processing scheme to estimate metabolite T2 relaxation times from TE-averaged 1H MRS data. Spectral simulations are used to validate the proposed TE-averaging methods for estimating methyl proton T2 relaxation times for N-acetyl aspartate, total creatine, and choline-containing compounds. The utility of the technique and its reproducibility are demonstrated using data obtained in vivo from the posterior-occipital cortex of 10 healthy control subjects. Compared with standard methods, distinct advantages of this approach include built-in macromolecule resonance attenuation, in vivo T2 estimates closer to reported values when the maximum TE ≥ T2, and the potential for T2 calculation of metabolite resonances otherwise inseparable in standard 1H MRS spectra recorded in vivo. PMID:24865447
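In the mono-exponential case, estimating T2 from signals acquired at several TEs reduces to a log-linear fit of S(TE) = S0·exp(-TE/T2). The sketch below shows that generic fit on synthetic data; it is not the paper's TE-averaging processing scheme, and the TE values and the 250 ms T2 are illustrative assumptions.

```python
import math

def estimate_t2(te_values, signals):
    """Log-linear fit of S(TE) = S0 * exp(-TE / T2): regress ln(S) on TE
    by ordinary least squares and return -1/slope. A generic
    mono-exponential fit, not the paper's TE-averaging scheme."""
    n = len(te_values)
    ys = [math.log(s) for s in signals]
    mx = sum(te_values) / n
    my = sum(ys) / n
    slope = (sum((t - mx) * (y - my) for t, y in zip(te_values, ys))
             / sum((t - mx) ** 2 for t in te_values))
    return -1.0 / slope

# Synthetic noiseless decay with T2 = 250 ms, sampled at several TE steps.
tes = [30, 60, 90, 120, 150, 180]            # ms
sig = [math.exp(-te / 250.0) for te in tes]
print(round(estimate_t2(tes, sig)))          # → 250
```

With real spectra the fit would be applied to peak amplitudes at each TE, and the abstract's point is that TE averaging suppresses overlapping coupled and macromolecule resonances before any such amplitude estimation takes place.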
Time averaging for the strongly confined nonlinear Schrodinger equation.
Castella, François
...a gas of quantum particles evolving in the three-dimensional space (x, z) ∈ R³ (x ∈ R², z ∈ R)... (Institut de Mathématiques de Toulouse, Université Paul Sabatier, 118 rte de Narbonne, 31062 Toulouse Cedex; IRMAR, Université de Rennes 1, Campus de Beaulieu, 35042 Rennes Cedex, France) ...in time. In other words
The time-averaged paleomagnetic field 0–5 Ma
M. W. McElhinny; P. L. McFadden; R. T. Merrill
1996-01-01
Persistent departures from the geocentric axial dipole field model of the time-averaged paleomagnetic field over the past 5 Myr have been analyzed using oceanic data from deep-sea cores and continental data from igneous rocks and sediments. The data set comprises the equivalent of 9490 spot readings of the field (5831 normal and 3659 reverse) from 930 groups of data.
Assimilation of Time-averaged Pseudoproxies for Climate Reconstruction
NASA Astrophysics Data System (ADS)
Steiger, Nathan J.
We examine the efficacy of a novel ensemble data assimilation (DA) technique in climate field reconstructions (CFR) of surface temperature. We perform four pseudoproxy experiments with both general circulation model (GCM) and 20th Century Reanalysis (20CR) data by reconstructing surface temperature fields from a sparse network of noisy pseudoproxies. We compare the DA approach to a conventional CFR approach based on Principal Component Analysis (PCA) for experiments on global domains. DA outperforms PCA in reconstructing global-mean temperature in all four experiments, and is more consistent across experiments, with a range of time-series correlations of 0.69--0.94 compared to 0.19--0.87 for the PCA method. DA improvements are even more evident in spatial reconstruction skill, especially in sparsely sampled pseudoproxy regions and for a 20CR experiment. We hypothesize that DA improves spatial reconstructions because it relies on local temperature correlations. These relationships appear to be more robust than orthogonal patterns of variability, which can be non-stationary. Additionally, comparing results for GCM and 20CR data indicates that pseudoproxy experiments that rely solely on GCM data may give a false impression of reconstruction skill.
Experimental and theoretical studies of time-averaged and time resolved rotor heat transfer
NASA Astrophysics Data System (ADS)
Dunn, M. G.; Rae, W. J.; Rigby, D. L.
Efforts in support of the Space Shuttle Main Engine (SSME) structural durability program have concentrated on obtaining detailed time-averaged and time-resolved (or phase-locked) measurements on a full-scale rotating turbine both with and without cold gas injection and on theoretical studies designed to improve the prediction capability for these turbine flows. The experimental efforts have concentrated on use of the Garrett TFE 731-2 hp turbine. However, it has been possible to apply the theoretical efforts to predicting heat-flux distributions obtained for two additional turbines - i.e., (1) the Garrett low aspect ratio turbine (LART) and (2) the Teledyne 702 turbine. The experimental technique is the short duration, shock-tunnel approach, in which fast-response, thin-film resistance thermometers are used to measure surface temperature histories at prescribed locations on the turbine component parts. Heat-flux values are then inferred from the temperature histories by using standard data reduction procedures. The turbine being used is the Garrett TFE 731-2 hp stage, and both the nozzle guide vanes and the rotor blades are heavily instrumented with thin-film heat-flux gauges. Depending on how the data from a particular heat-flux gauge are recorded, one can get either time-resolved (or phase-locked) or time-averaged results. Both types of data are illustrated.
Experimental and theoretical studies of time-averaged and time resolved rotor heat transfer
NASA Technical Reports Server (NTRS)
Dunn, M. G.; Rae, W. J.; Rigby, D. L.
1987-01-01
Efforts in support of the Space Shuttle Main Engine (SSME) structural durability program have concentrated on obtaining detailed time-averaged and time-resolved (or phase-locked) measurements on a full-scale rotating turbine both with and without cold gas injection and on theoretical studies designed to improve the prediction capability for these turbine flows. The experimental efforts have concentrated on use of the Garrett TFE 731-2 hp turbine. However, it has been possible to apply the theoretical efforts to predicting heat-flux distributions obtained for two additional turbines - i.e., (1) the Garrett low aspect ratio turbine (LART) and (2) the Teledyne 702 turbine. The experimental technique is the short duration, shock-tunnel approach, in which fast-response, thin-film resistance thermometers are used to measure surface temperature histories at prescribed locations on the turbine component parts. Heat-flux values are then inferred from the temperature histories by using standard data reduction procedures. The turbine being used is the Garrett TFE 731-2 hp stage, and both the nozzle guide vanes and the rotor blades are heavily instrumented with thin-film heat-flux gauges. Depending on how the data from a particular heat-flux gauge are recorded, one can get either time-resolved (or phase-locked) or time-averaged results. Both types of data are illustrated.
Collins, C J S; Barnes, Graham R
2009-10-21
In motor control, prediction of future events is vital for overcoming sensory-motor processing delays and facilitating rapid and accurate responses in a dynamic environment. In human ocular pursuit this is so pervasive that prediction of future target motion cannot easily be eliminated by randomizing stimulus parameters. We investigated the prediction of temporally randomized events during pursuit of alternating constant-velocity (ramp) stimuli in which the timing of direction changes varied unpredictably over a given range. Responses were not reactive; instead, smooth eye velocity began to decelerate in anticipation of each target reversal. In the first experiment, using a continuous-motion stimulus, we found that the time at which this occurred was relatively constant regardless of ramp duration, but increased as mean ramp duration of the range increased. Regression analysis revealed a quantitative association between deceleration timing and the previous two or three ramp durations in a trial, suggesting that recent stimulus history was used to create a running average of anticipatory timing. In the second experiment, we used discrete motion stimuli, with intervening periods of fixation, which allowed both target velocity and reversal timing to be varied, thereby decoupling ramp duration and displacement. This enabled us to confirm that the timing of anticipatory deceleration was based on the history of timing, rather than displacement, within the stimulus. We conclude that this strategy is used to minimize error amid temporal uncertainty, while simultaneously overcoming inherent delays in visuomotor processing. PMID:19846718
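The "running average of anticipatory timing" idea can be sketched as a weighted average over the two or three most recent ramp durations, most recent weighted most heavily. The weights and durations below are illustrative assumptions, not the regression coefficients from the study.

```python
def predicted_timing(durations, weights=(0.5, 0.3, 0.2)):
    """Predict the next reversal time as a weighted running average of
    the most recent ramp durations (most recent first). Weights are
    illustrative, not fitted to the study's data."""
    recent = durations[::-1][:len(weights)]   # newest first
    w = weights[:len(recent)]
    return sum(wi * d for wi, d in zip(w, recent)) / sum(w)

# Hypothetical ramp durations (ms) observed so far in a trial.
history = [600, 500, 700, 550]
print(predicted_timing(history))  # → 585.0
```

A predictor of this shape explains the paper's key observations: anticipatory deceleration timing is roughly constant within a block (the average changes slowly), shifts upward when the mean ramp duration of the range increases, and depends quantitatively on the previous two or three ramp durations.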
ATS simultaneous and turnaround ranging experiments
NASA Technical Reports Server (NTRS)
Watson, J. S.; Putney, B. H.
1971-01-01
This report explains the data reduction and spacecraft position determination used in conjunction with two ATS experiments, Trilateration and Turnaround Ranging, and describes in detail a multilateration program that is used for part of the data reduction process. The process described is for the determination of the inertial position of the satellite, and for formatting input for related programs. In the trilateration procedure, a geometric determination of satellite position is made from near-simultaneous range measurements made by three different tracking stations. Turnaround ranging involves two stations: the master station transmits the signal to the satellite, the satellite retransmits the signal to the slave station, which turns the signal around to the satellite, which in turn retransmits the signal to the master station. The results of the satellite position computations using the multilateration program are compared to results of other position determination programs used at Goddard. All programs give nearly the same results, which indicates that, because of its simplicity and computational speed, the trilateration technique is useful in obtaining spacecraft positions for near-synchronous satellites.
"Turnaround" as Shock Therapy: Race, Neoliberalism, and School Reform
ERIC Educational Resources Information Center
Johnson, Amanda Walker
2013-01-01
"Turnaround" strategies of educational reform promise that school closure, reconstitution, privatizing, and reopening them will bring miraculous results. Questioning the implications, this article situates "turnaround" strategies locally, following the closure of a predominantly minority high school in 2008, in Austin, Texas. The neoliberal…
A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, Thomas L.
2003-01-01
A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics, designed to capture this property, predicts power-law scaling behavior for the second-moment statistics of area-averaged rain rate on the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.
NASA Astrophysics Data System (ADS)
Gomez, Vicente; Ramirez, Patricio; Cervera, Javier; Nasir, Saima; Ali, Mubarak; Ensinger, Wolfgang; Mafe, Salvador
2015-02-01
The possibility of taking advantage of a fluctuating environment for energy and information transduction is a significant challenge in biological and artificial nanostructures. We demonstrate here directional electrical transduction from fluctuating external signals using a single nanopore of conical shape immersed in an ionic aqueous solution. To this end, we characterize experimentally the average output currents obtained by the electrical rectification of zero time-average input potentials. The transformation of external potential fluctuations into nonzero time-average responses using a single nanopore in liquid state is of fundamental significance for biology and nanophysics. This energy and information conversion constitutes also a significant step towards macroscopic scaling using multipore membranes.
Estimating the average time for inter-continental transport of air pollutants
NASA Astrophysics Data System (ADS)
Liu, Junfeng; Mauzerall, Denise L.
2005-06-01
We estimate the average time required for inter-continental transport of atmospheric tracers based on simulations with the global chemical tracer model MOZART-2 driven with NCEP meteorology. We represent the average transport time by a ratio of the concentration of two tracers with different lifetimes. We find that average transport times increase with tracer lifetimes. With tracers of 1- and 2-week lifetimes the average transport time from East Asia (EA) to the surface of western North America (NA) in April is 2-3 weeks, approximately a half week longer than transport from NA to western Europe (EU) and from EU to EA. We develop an `equivalent circulation' method to estimate a timescale which has little dependence on tracer lifetimes and obtain similar results to those obtained with short-lived tracers. Our findings show that average inter-continental transport times, even for tracers with short lifetimes, are on average 1-2 weeks longer than rapid transport observed in plumes.
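The two-tracer ratio idea can be illustrated in its idealized pure-decay form: tracers co-emitted with e-folding lifetimes τ1 and τ2 decay at different rates, so their concentration ratio at a receptor encodes the elapsed transport time. The function and all numbers below are assumptions for illustration; the paper's estimate comes from a full chemical tracer model, not this closed form.

```python
import math

def transport_time(c1, c2, tau1, tau2, e_ratio=1.0):
    """Infer an air-parcel transit time from the concentration ratio of
    two co-emitted tracers with e-folding lifetimes tau1 and tau2.
    c1/c2 = e_ratio * exp(-t*(1/tau1 - 1/tau2)), solved for t.
    Idealized pure-decay version of the two-tracer ratio idea."""
    return math.log((c1 / c2) / e_ratio) / (1.0 / tau2 - 1.0 / tau1)

# Tracers with 1- and 2-week lifetimes, observed after decaying t days.
tau1, tau2 = 7.0, 14.0
t_true = 17.0
c1 = math.exp(-t_true / tau1)
c2 = math.exp(-t_true / tau2)
print(round(transport_time(c1, c2, tau1, tau2), 1))  # → 17.0
```

In the real atmosphere mixing and continuing emissions blur this relationship, which is why the abstract notes that the inferred average transport time itself depends on the tracer lifetimes chosen, motivating the authors' lifetime-insensitive "equivalent circulation" timescale.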
Internship at Helbling Business Advisors: Financial Performance & Turnaround,
Stein, Oliver
restructuring), Mergers & Acquisitions (purchase, sale, mergers, joint ventures), Corporate Finance (Financial... Internship at Helbling Business Advisors: Financial Performance & Turnaround, Mergers & Acquisitions, restructuring, corporate finance, and innovation management. To support our consulting teams in Düsseldorf
34. BOILER HOUSE, COAL CONVEYOR AND TURNAROUND TRACK FOR COAL ...
34. BOILER HOUSE, COAL CONVEYOR AND TURN-AROUND TRACK FOR COAL CARS (NOTE: COAL CAR No. 6 IN FAR BACK GROUND) - Delaware County Electric Company, Chester Station, Delaware River at South end of Ward Street, Chester, Delaware County, PA
Turnaround operations analysis for OTV. Volume 2: Detailed technical report
NASA Technical Reports Server (NTRS)
1988-01-01
The objectives and accomplishments were to adapt and apply the newly created database of Shuttle/Centaur ground operations. Previously defined turnaround operations analyses were updated for ground-based OTVs (GBOTVs) and space-based OTVs (SBOTVs), design requirements identified for both OTV and Space Station accommodations hardware, turnaround operations costs estimated, and a technology development plan generated to develop the required capabilities. Technical and programmatic data were provided for NASA pertinent to OTV ground and space operations requirements, turnaround operations, task descriptions, timelines and manpower requirements, OTV modular design and booster and Space Station interface requirements, the SBOTV accommodations development schedule, cost and turnaround operations requirements, and a technology development plan for ground and space operations and space-based accommodation facilities and support equipment. Significant conclusions are discussed.
3. Goodbye Creek Picnic ground turnaround. Old right-of-way faintly visible ...
3. Goodbye Creek Picnic ground turnaround. Old right-of-way faintly visible in distance. View N. - Crater Lake National Park Roads, Goodbye Creek Bridge, Spanning Goodbye Creek at Munson Valley Road, Klamath Falls, Klamath County, OR
Chaotic motion at the emergence of the time averaged energy decay
Cesar Manchein; Jane Rosa; Marcus W. Beims
2009-05-29
A conservative system-plus-environment model is used to characterize the nonlinear dynamics when the time-averaged energy of the system particle starts to decay. The system particle dynamics is regular for small numbers $N$ of environment oscillators and becomes chaotic in the interval $13 \le N \le 15$, where the system's time-averaged energy starts to decay. To characterize the nonlinear motion we estimate the Lyapunov exponent (LE), determine the power spectrum and the Kaplan-Yorke dimension. For much larger values of $N$ the energy of the system particle is completely transferred to the environment and the corresponding LEs decrease. Numerical evidence shows the connection between variations of the amplitude of the particle's energy-time oscillation, the time-averaged energy decay, and trapped trajectories.
Approximation Schemes for Minimizing Average Weighted Completion Time with Release Dates
Foto N. Afrati; Evripidis Bampis; Chandra Chekuri; David R. Karger; Claire Kenyon; Sanjeev Khanna; Ioannis Milis; Maurice Queyranne; Martin Skutella; Clifford Stein; Maxim Sviridenko
1999-01-01
We consider the problem of scheduling jobs with release dates on machines so as to minimize their average weighted completion time. We present the first known polynomial time approximation schemes for several variants of this problem. Our results include PTASs for the case of identical parallel machines and a constant number of unrelated machines with and without
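The objective being approximated can be made concrete with a short sketch. This computes the average weighted completion time of a given single-machine schedule with release dates (the job data and processing order are hypothetical; the PTAS itself, which searches over schedules, is not reproduced here):

```python
def avg_weighted_completion_time(jobs, order):
    """jobs: dict name -> (release, processing, weight).
    Simulate a single machine processing `order`, idling when the
    next job has not yet been released, and return the weighted
    average of completion times."""
    t = 0.0
    total_w = 0.0
    total_wc = 0.0
    for name in order:
        r, p, w = jobs[name]
        t = max(t, r) + p          # wait for release, then process
        total_w += w
        total_wc += w * t
    return total_wc / total_w

jobs = {"a": (0, 2, 1), "b": (1, 1, 2), "c": (5, 3, 1)}
print(avg_weighted_completion_time(jobs, ["a", "b", "c"]))  # -> 4.0
```

Completion times here are a: 2, b: 3, c: 8, giving (1*2 + 2*3 + 1*8) / 4 = 4.0.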
Neural Networks Used to Compare Designed and Measured Time-Average Patterns
NASA Technical Reports Server (NTRS)
Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.
1999-01-01
Electronic time-average holograms are convenient for comparing the measured vibration modes of fan blades with those calculated by finite-element models. At the NASA Lewis Research Center, neural networks recently were trained to perform what had been a simple visual comparison of the predictions of the design models with the measurements. Finite-element models were used to train neural networks to recognize damage and strain information encoded in subtle changes in the time-average patterns of cantilevers. But the design-grade finite element models were unable to train the neural networks to detect damage in complex blade shapes. The design-model-generated patterns simply did not agree well enough with the measured patterns. Instead, hybrid-training records, with measured time-average patterns as the input and model-generated strain information as the output, were used to effect successful training.
Time domain averaging and correlation-based improved spectrum sensing method for cognitive radio
NASA Astrophysics Data System (ADS)
Li, Shenghong; Bi, Guoan
2014-12-01
Based on the combination of time domain averaging and correlation, we propose an effective time domain averaging and correlation-based spectrum sensing (TDA-C-SS) method used in very low signal-to-noise ratio (SNR) environments. With the assumption that the received signals from the primary users are deterministic, the proposed TDA-C-SS method processes the received samples by a time averaging operation to improve the SNR. Correlation operation is then performed with a correlation matrix to determine the existence of the primary signal in the received samples. The TDA-C-SS method does not need any prior information on the received samples and the associated noise power to achieve improved sensing performance. Simulation results are presented to show the effectiveness of the proposed TDA-C-SS method.
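The averaging-then-correlation idea can be sketched in a few lines. Note this is a simplified illustration: unlike the blind correlation-matrix statistic of the TDA-C-SS method, which needs no prior information, the sketch below assumes a known signal template and synchronized frames, and the threshold is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def tda_detect(frames, template, threshold):
    """Average M synchronized frames (raising the SNR of a
    deterministic component by ~M in power), then decide presence
    via normalized correlation with a known template."""
    avg = frames.mean(axis=0)  # time-domain averaging step
    r = np.dot(avg, template)
    r /= (np.linalg.norm(avg) * np.linalg.norm(template) + 1e-12)
    return abs(r) > threshold, r

N, M = 256, 64
template = np.cos(2 * np.pi * 0.05 * np.arange(N))
noise_only = rng.normal(0.0, 1.0, (M, N))          # SNR far below 0 dB
signal = template + rng.normal(0.0, 1.0, (M, N))
print(tda_detect(noise_only, template, 0.5)[0])  # expect False
print(tda_detect(signal, template, 0.5)[0])      # expect True
```

Averaging 64 frames cuts the noise standard deviation eightfold, which is what lets the correlation stage operate at very low per-sample SNR.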
Bürgi, Alfred; Scanferla, Damiano; Lehmann, Hugo
2014-01-01
Models for exposure assessment of high frequency electromagnetic fields from mobile phone base stations need the technical data of the base stations as input. One of these parameters, the Equivalent Radiated Power (ERP), is a time-varying quantity, depending on communication traffic. In order to determine temporal averages of the exposure, corresponding averages of the ERP have to be available. These can be determined as duty factors, the ratios of the time-averaged power to the maximum output power according to the transmitter setting. We determine duty factors for UMTS from the data of 37 base stations in the Swisscom network. The UMTS base stations sample contains sites from different regions of Switzerland and also different site types (rural/suburban/urban/hotspot). Averaged over all regions and site types, a UMTS duty factor F ≈ 0.32 ± 0.08 for the 24 h-average is obtained, i.e., the average output power corresponds to about a third of the maximum power. We also give duty factors for GSM based on simple approximations and a lower limit for LTE estimated from the base load on the signalling channels. PMID:25105551
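The duty-factor definition above is simple enough to compute directly. A minimal sketch, using a hypothetical synthetic 24 h power profile (the real study uses logged per-station transmitter data):

```python
import numpy as np

def duty_factor(power_samples, p_max):
    """Duty factor: ratio of the time-averaged radiated power to the
    maximum output power of the transmitter setting."""
    return float(np.mean(power_samples)) / p_max

# Hypothetical 24 h of per-minute output power for one base station,
# varying with traffic between 10% and 60% of a 20 W maximum:
rng = np.random.default_rng(1)
p = rng.uniform(2.0, 12.0, 24 * 60)
print(duty_factor(p, 20.0))  # about 0.35 for this synthetic profile
```

Multiplying the nominal maximum ERP by such a factor yields the time-averaged ERP needed by the exposure models.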
Climate Projections Using Bayesian Model Averaging and Space-Time Dependence
Haran, Murali
K. Sham Bhat, Murali Haran, Adam Terando, and Klaus Keller. Abstract: Projections of future climatic changes are a key input to the design of climate change mitigation and adaptation strategies. Current climate change projections
Time series models to simulate and forecast hourly averaged wind speed in Quetta, Pakistan
Lalarukh Kamal; Yasmin Zahra Jafri
1997-01-01
Stochastic simulation and forecast models of hourly average wind speeds are presented. Time series models take into account several basic features of wind speed data including autocorrelation, non-Gaussian distribution and diurnal nonstationarity. The positive correlation between consecutive wind speed observations is taken into account by fitting an ARMA(p,q) process to wind speed data transformed to make their distribution approximately
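The autocorrelation-fitting step can be illustrated with a small sketch. For simplicity this fits a pure AR(p) model by least squares to a synthetic series (a stand-in for the full ARMA(p,q) fit on transformed wind speeds described above; the coefficients and series are hypothetical):

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model x_t = sum_k a_k x_{t-k} + e_t."""
    X = np.column_stack([x[p - k - 1: len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Synthetic "wind speed" anomaly series from a known AR(2) process:
rng = np.random.default_rng(2)
a_true = np.array([0.6, 0.3])
x = np.zeros(5000)
for t in range(2, len(x)):
    x[t] = a_true[0] * x[t - 1] + a_true[1] * x[t - 2] + rng.normal()
print(fit_ar(x, 2))  # close to [0.6, 0.3]
```

Once fitted, the same recursion driven by fresh noise generates simulated series, and the recursion with noise set to zero yields forecasts.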
DISCRETE-TIME CONTROLLED MARKOV PROCESSES WITH AVERAGE COST CRITERION: A SURVEY
ARISTOTLE ARAPOSTATHIS; VIVEK S. BORKAR; EMMANUEL FERN; MRINAL K. GHOSH; STEVEN I. MARCUS
This work is a survey of the average cost control problem for discrete-time Markov processes. The authors have attempted to put together a comprehensive account of the considerable research on this problem over the past three decades. The exposition ranges from finite to Borel state and action spaces and includes a variety of methodologies to find and characterize optimal policies.
G. E. P. Box; David A. Pierce
1970-01-01
Many statistical models, and in particular autoregressive-moving average time series models, can be regarded as means of transforming the data to white noise, that is, to an uncorrelated sequence of errors. If the parameters are known exactly, this random sequence can be computed directly from the observations; when this calculation is made with estimates substituted for the true parameter values,
Whole-brain, time-locked activation with simple tasks revealed using massive averaging and
Reber, Paul J.
by jointly analyzing activity in voxels that show no significance using conventional univariate analysis techniques. Approved February 21, 2012 (received for review December 21
The Impact of Overassignment on Grade Point Averages of First-Time Freshmen.
ERIC Educational Resources Information Center
Desler, Mary; North, Gary
1978-01-01
Data show that the first-time freshmen, who were overassigned in residence halls, did no worse than other students concerning grade point average. The peer pressure may have been a motivating force. There was no male/female difference. Data also show no difference in environment perception. (LPG)
Plio-Pleistocene time-averaged field in southern Patagonia recorded in lava flows
Singer, Bradley S.
V. Mejia and N. D. A study of 53 lava flows from southern Patagonia (latitudes 49.5°-52.1°S) that include the Pali-Aike volcanic field. 7 figures, 3 tables. Keywords: paleomagnetic secular variation; Patagonia; Pali-Aike Volcanic Field
Spatial and Temporal scales of time-averaged 700 MB height anomalies
NASA Technical Reports Server (NTRS)
Gutzler, D.
1981-01-01
The monthly and seasonal forecasting technique is based to a large extent on the extrapolation of trends in the positions of the centers of time averaged geopotential height anomalies. The complete forecasted height pattern is subsequently drawn around the forecasted anomaly centers. The efficacy of this technique was tested and time series of observed monthly mean and 5 day mean 700 mb geopotential heights were examined. Autocorrelation statistics are generated to document the tendency for persistence of anomalies. These statistics are compared to a red noise hypothesis to check for evidence of possible preferred time scales of persistence. Space-time spectral analyses at middle latitudes are checked for evidence of periodicities which could be associated with predictable month-to-month trends. A local measure of the average spatial scale of anomalies is devised for guidance in the completion of the anomaly pattern around the forecasted centers.
Extraction of Periodic Signal Without External Reference by Time-Domain Average Scanning
LiMin Zhu; Han Ding; XiangYang Zhu
2008-01-01
A novel signal-analysis technique that is capable of extracting periodic signals from noisy discrete-time observations without an additional external reference is presented. It is based on a derived formulation estimating the maximally allowable synchronous error for a given attenuation threshold after the time-domain averaging. First, a continuous signal is reconstructed from the discrete samples by using cubic-spline interpolation. Second, the
Inverse methods for estimating primary input signals from time-averaged isotope profiles
NASA Astrophysics Data System (ADS)
Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.
2005-08-01
Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
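The linear system Am = d described above can be demonstrated with a toy forward model. Here A is a simple moving-average matrix (an illustrative stand-in for the amelogenesis-and-sampling averaging matrix, whose structure must come from the tooth's growth geometry), and the minimum-length solution is obtained via the pseudoinverse:

```python
import numpy as np

def averaging_matrix(n, w):
    """Each measured sample is the mean of w consecutive input values --
    a crude stand-in for the temporal/spatial averaging matrix A."""
    A = np.zeros((n - w + 1, n))
    for i in range(n - w + 1):
        A[i, i:i + w] = 1.0 / w
    return A

n, w = 30, 6
A = averaging_matrix(n, w)
m_true = np.where(np.arange(n) < 15, -8.0, -2.0)  # hypothetical step change in diet
d = A @ m_true                                    # time-averaged "measured" profile

# Minimum-length (minimum-norm) solution of A m = d:
m_est = np.linalg.pinv(A) @ d
print(np.allclose(A @ m_est, d))  # True: the solution reproduces the data
```

The averaged profile d smears the sharp dietary step over w samples; the inverse solution restores a much sharper transition, which is the point of the method. In practice measurement error makes the inversion ill-conditioned, so accuracy depends on the error level, as the abstract notes.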
NASA Astrophysics Data System (ADS)
Bozinoski, Radoslav
Significant research has been performed over the last several years on understanding the unsteady aerodynamics of various fluid flows. Much of this work has focused on quantifying the unsteady, three-dimensional flow field effects which have proven vital to the accurate prediction of many fluid and aerodynamic problems. Up until recently, engineers have predominantly relied on steady-state simulations to analyze the inherently three-dimensional flow structures that are prevalent in many of today's "real-world" problems. Increases in computational capacity and the development of efficient numerical methods can change this and allow for the solution of the unsteady Reynolds-Averaged Navier-Stokes (RANS) equations for practical three-dimensional aerodynamic applications. An integral part of this capability has been the performance and accuracy of the turbulence models coupled with advanced parallel computing techniques. This report begins with a brief literature survey of the role fully three-dimensional, unsteady Navier-Stokes solvers have in the current state of numerical analysis. Next, the process of creating a baseline three-dimensional Multi-Block FLOw procedure called MBFLO3 is presented. Solutions for an inviscid circular arc bump, laminar flat plate, laminar cylinder, and turbulent flat plate are then presented. Results show good agreement with available experimental, numerical, and theoretical data. Scalability data for the parallel version of MBFLO3 are presented and show efficiencies of 90% and higher for processes of no less than 100,000 computational grid points. Next, the description and implementation techniques used for several turbulence models are presented. Following the successful implementation of the URANS and DES procedures, validation data for separated, non-reattaching flows over a NACA 0012 airfoil, wall-mounted hump, and a wing-body junction geometry are presented.
Results for the NACA 0012 showed significant improvement in flow predictions for the three-dimensional DES when compared to experiment. The initial results using the RANS and DES procedures compared well with experimental data for the wall-mounted hump as well. Using the RANS model, the onset of separation was accurately predicted while the reattachment point was over-predicted. The RANS procedure also over-predicted the mean pressure, skin friction, and velocity profiles in the separation zone. The DES procedure showed much better results for the secondary flow of a wing/endwall junction; the three-dimensional structures resolved in the wake of the DES improved the local flow physics in the separation region and the predictions of the mean pressure distribution, skin friction, and streamwise velocity. The DES procedure exhibited a three-dimensional flow structure in the wake, with a 13.65% shorter mean separation region compared to RANS and a mean reattachment length that is in good agreement with experimental measurements. DES predictions of the pressure coefficient in the separation region also exhibit good agreement with experiment and are more accurate than RANS predictions.
NASA Technical Reports Server (NTRS)
Decker, A. J.
1984-01-01
The holographic recording of the time history of a flow feature in three dimensions is discussed. The use of diffuse-illumination holographic interferometry for the three-dimensional visualization of flow features such as shock waves and turbulent eddies is described. The double-exposure and time-average methods are compared using the characteristic function and the results from a flow simulator. A time history requires a large hologram recording rate. Results of holographic cinematography of the shock waves in a flutter cascade are presented as an example. Future directions of this effort, including the availability and development of suitable lasers, are discussed. Previously announced in STAR as N84-21849
Average Time Results - Child - Sample Size 500 (MMHC, OR1, OR2, SC, GS, PC, TPDA, GES)
Brown, Laura E.
A Stochastic Model of Space-Time Variability of Tropical Rainfall: I. Statistics of Spatial Averages
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)
2002-01-01
Global maps of rainfall are of great importance in connection with modeling of the earth's climate. Comparison between the maps of rainfall predicted by computer-generated climate models with observation provides a sensitive test for these models. To make such a comparison, one typically needs the total precipitation amount over a large area, which could be hundreds of kilometers in size, over extended periods of time of order days or months. This presents a difficult problem since rain varies greatly from place to place as well as in time. Remote sensing methods using ground radar or satellites detect rain over a large area by essentially taking a series of snapshots at infrequent intervals and indirectly deriving the average rain intensity within a collection of pixels, usually several kilometers in size. They measure the area average of rain at a particular instant. Rain gauges, on the other hand, record rain accumulation continuously in time but only over a very small area tens of centimeters across, say, the size of a dinner plate. They measure only a time average at a single location. In making use of either method one needs to fill in the gaps in the observation - either the gaps in the area covered or the gaps in time of observation. This involves using statistical models to obtain information about the rain that is missed from what is actually detected. This paper investigates such a statistical model and validates it with rain data collected over the tropical Western Pacific from shipborne radars during TOGA COARE (Tropical Oceans Global Atmosphere Coupled Ocean-Atmosphere Response Experiment). The model incorporates a number of commonly observed features of rain. While rain varies rapidly with location and time, the variability diminishes when averaged over larger areas or longer periods of time. Moreover, rain is patchy in nature - at any instant on the average only a certain fraction of the observed pixels contain rain.
The fraction of area covered by rain decreases, as the size of a pixel becomes smaller. This means that within what looks like a patch of rainy area in a coarse resolution view with larger pixel size, one finds clusters of rainy and dry patches when viewed on a finer scale. The model makes definite predictions about how these and other related statistics depend on the pixel size. These predictions were found to agree well with data. In a subsequent second part of the work we plan to test the model with rain gauge data collected during the TRMM (Tropical Rainfall Measuring Mission) ground validation campaign.
Effect of RF Pulse Sequence on Temperature Elevation for a Given Time-Average SAR
Wang, Zhangwei; Collins, Christopher M.
2010-01-01
In calculations of temperature increase during MRI, it is typically assumed adequate to consider the Specific energy Absorption Rate (SAR) levels averaged over an entire repetition time (TR) rather than explicitly consider the heating (as it occurs in reality) during the RF pulses only. Here we investigate this assumption with numerical calculations of SAR and temperature increase for a human head in a volume coil at 64 MHz and 300 MHz during three very different pulse sequences, each having a TR of 200 ms and a time-average whole-head SAR of 3.0 W/kg, as well as with semi-analytical calculations considering a gradient-echo sequence in a segment of tissue with SAR of 10 W/kg delivered in a 1 ms pulse with TR of up to 5000 ms. While it is possible to calculate a temporal effect of specific pulse sequence on temperature, the difference between pulse sequences is so small and so transient that it should typically be adequate to consider only the time-average SAR in each TR. PMID:21116480
Numerical multi-scale time averaging approach to quantum systems with pseudo random potentials
Tal Kachman; Shmuel Fishman; Avy Soffer
2015-01-14
We consider the problem of numerically solving the Schrodinger equation with a potential that is quasi-periodic in space and time, approximating a multiplicative noise term. We introduce a numerical scheme based on a newly developed multi-time-scale averaging technique. We show that with this new method we can solve the above equation efficiently, and with rigorous control of the error, for large times. A comparison with the standard split-step method shows an order-of-magnitude improvement in computational times, besides the controlled errors. We apply this method to a free particle perturbed by a quasi-periodic potential with many frequencies. We observe (anomalous) diffusion in momentum space over large time intervals, as long as the system is away from the Chirikov resonance strips. In the limit of zero potential, the diffusion rate becomes standard, but with a coefficient that approaches zero
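For reference, the baseline the authors compare against is the standard split-step Fourier method. A minimal sketch of one step, applied to a Gaussian packet in an illustrative two-frequency quasi-periodic potential (the potential, grid, and step size are assumptions, not the paper's setup):

```python
import numpy as np

def split_step(psi, V, dx, dt, hbar=1.0, m=1.0):
    """One step of the split-step Fourier method for
    i*hbar dpsi/dt = -(hbar^2/2m) psi'' + V psi:
    half a potential kick, a full kinetic step in k-space,
    then the second half kick. The step is unitary."""
    k = 2 * np.pi * np.fft.fftfreq(len(psi), d=dx)
    half_kick = np.exp(-0.5j * V * dt / hbar)
    psi = half_kick * psi
    psi = np.fft.ifft(np.exp(-0.5j * hbar * k**2 * dt / m) * np.fft.fft(psi))
    return half_kick * psi

x = np.linspace(-20.0, 20.0, 512, endpoint=False)
dx = x[1] - x[0]
V = np.cos(x) + np.cos(np.sqrt(2.0) * x)      # two incommensurate frequencies
psi = (2.0 / np.pi) ** 0.25 * np.exp(-x**2)   # normalized Gaussian packet
for _ in range(100):
    psi = split_step(psi, V, dx, 0.01)
print(np.sum(np.abs(psi)**2) * dx)  # norm stays ~1 (unitary evolution)
```

The cost of this baseline grows with the number of frequencies and the time horizon, which is where the multi-time-scale averaging scheme claims its order-of-magnitude advantage.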
Single FGF-23 Measurement and Time-Averaged Plasma Phosphate Levels in Hemodialysis Patients
Seiler, Sarah; Lucisano, Gaetano; Ege, Philipp; Fell, Lisa H.; Rogacev, Kyrill S.; Lerner-Gräber, Anne; Klingele, Matthias; Ziegler, Matthias; Fliser, Danilo
2013-01-01
Summary Background and objectives Plasma phosphate levels display considerable intraindividual variability. The phosphatonin fibroblast growth factor 23 is a central regulator of plasma phosphate levels, and it has been postulated to be a more stable marker than conventional CKD–mineral and bone disorder parameters. Thus, fibroblast growth factor 23 has been hypothesized to reflect time-averaged plasma phosphate levels in CKD patients. Design, setting, participants, & measurements Among 40 patients from the outpatient dialysis center, serial measurements of plasma calcium and phosphate (before every dialysis session) as well as C-terminal fibroblast growth factor 23, parathyroid hormone, and alkaline phosphatase (one time weekly) were performed over a study period of 4 weeks in November and December of 2011. Intraindividual variability of repeated plasma fibroblast growth factor 23 measurements compared with other CKD–mineral and bone disorder markers was tested, and the association of a single plasma fibroblast growth factor 23 measurement with time-averaged plasma phosphate levels was analyzed. Results Against expectations, intraindividual variability of fibroblast growth factor 23 (median coefficient of variation=27%; interquartile range=20–35) was not lower than variability of plasma phosphate (median coefficient of variation=15%; interquartile range=10–20), parathyroid hormone (median coefficient of variation=24%; interquartile range=15–39), plasma calcium (median coefficient of variation=3%; interquartile range=2–4), or alkaline phosphatase (median coefficient of variation=5%; interquartile range=3–10). Moreover, the correlation between the last fibroblast growth factor 23 measurement after 4 weeks and time-averaged plasma phosphate did not surpass the correlation between the last fibroblast growth factor 23 measurement and a single plasma phosphate value (r=0.67, P<0.001; r=0.76, P<0.001, respectively). 
Conclusions Surprisingly, fibroblast growth factor 23 was not more closely associated to time-averaged plasma phosphate levels than a single plasma phosphate value, and it did not show a lower intraindividual variability than other tested markers of CKD–mineral and bone disorder. Thus, fibroblast growth factor 23 should not be used in clinical practice as a reflector of time-averaged plasma phosphate levels. PMID:23846463
Fluctuations of Time Averages for Langevin Dynamics in a Binding Force Field
NASA Astrophysics Data System (ADS)
Dechant, A.; Lutz, E.; Kessler, D. A.; Barkai, E.
2011-12-01
We derive a simple formula for the fluctuations of the time average x̄(t) around the thermal mean ⟨x⟩_eq for overdamped Brownian motion in a binding potential U(x). Using a backward Fokker-Planck equation, introduced by Szabo, Schulten, and Schulten in the context of reaction kinetics, we show that for ergodic processes these finite-measurement-time fluctuations are determined by the Boltzmann measure. For the widely applicable logarithmic potential, ergodicity is broken. We quantify the large nonergodic fluctuations and show how they are related to a superaging correlation function.
Time scales and variability of area-averaged tropical oceanic rainfall
NASA Technical Reports Server (NTRS)
Shin, Kyung-Sup; North, Gerald R.; Ahn, Yoo-Shin; Arkin, Phillip A.
1990-01-01
A statistical analysis of time series of area-averaged rainfall over the oceans has been conducted around the diurnal time scale. The results of this analysis can be applied directly to the problem of establishing the magnitude of expected errors to be incurred in the estimation of monthly area-averaged rain rate from low orbiting satellites. Such statistics as the mean, standard deviation, integral time scale of background red noise, and spectral analyses were performed on time series of the GOES precipitation index taken at 3-hour intervals during the period spanning December 19, 1987 to March 31, 1988 over the central and eastern tropical Pacific. The analyses have been conducted on 2.5 x 2.5 deg and 5 x 5 deg grid boxes, separately. The study shows that rainfall measurements by a sun-synchronous satellite visiting a spot twice per day will include a bias due to the existence of the semidiurnal cycle in the SPCZ ranging from 5 to 10 percentage points. The bias in the ITCZ may be of the order of 5 percentage points.
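The "integral time scale of background red noise" used above can be estimated directly from a series. A minimal sketch on a synthetic AR(1) red-noise series; summing the autocorrelation to its first zero crossing is one common convention (others integrate a fitted exponential), and the series itself is hypothetical:

```python
import numpy as np

def integral_time_scale(x, max_lag):
    """One-sided integral time scale: sum of the sample autocorrelation
    from lag 0 up to its first zero crossing."""
    x = x - x.mean()
    acf = np.array([np.dot(x[: len(x) - k], x[k:]) / np.dot(x, x)
                    for k in range(max_lag)])
    pos = np.argmax(acf <= 0) if np.any(acf <= 0) else max_lag
    return acf[:pos].sum()

# AR(1) ("red noise") series with lag-1 correlation 0.8:
rng = np.random.default_rng(4)
x = np.zeros(20000)
for t in range(1, len(x)):
    x[t] = 0.8 * x[t - 1] + rng.normal()
print(integral_time_scale(x, 200))  # near the one-sided theory value 1/(1-0.8) = 5
```

Dividing the record length by this time scale gives the effective number of independent samples, which is what sets the expected error of a monthly mean estimated from infrequent satellite overpasses.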
TIME-AVERAGE-BASED METHODS FOR MULTI-ANGULAR SCALE ANALYSIS OF COSMIC-RAY DATA
Iuppa, R. (Dipartimento di Fisica dell'Università 'Tor Vergata' di Roma, via della Ricerca Scientifica 1, I-00133 Roma, Italy); Di Sciascio, G. (Istituto Nazionale di Fisica Nucleare, Sezione di Roma Tor Vergata, via della Ricerca Scientifica 1, I-00133 Roma, Italy). E-mail: roberto.iuppa@roma2.infn.it, giuseppe.disciascio@roma2.infn.it
2013-04-01
Over the past decade, a number of experiments dealt with the problem of measuring the arrival direction distribution of cosmic rays, looking for information on the propagation mechanisms and the identification of their sources. Any deviation from isotropy may be regarded as a signature of unforeseen or unknown phenomena, especially if well localized in the sky and occurring at low rigidity. This induced experimenters to search for excesses down to angular scales as narrow as 10°, raising the issue of properly filtering contributions from wider structures. A solution commonly envisaged was based on time-average methods to determine the reference value of the cosmic-ray flux. Such techniques are nearly insensitive to signals wider than the time window in use, thus allowing the analysis to focus on medium- and small-scale signals. Nonetheless, the signal often cannot be excluded from the calculation of the reference value, which induces systematic errors. The use of time-average methods recently revealed important discoveries about the medium-scale cosmic-ray anisotropy, present both in the northern and southern hemispheres. It is known that the excess (or deficit) is observed as less intense than in reality and that spurious deficit zones appear around true excesses, because of the absolute lack of a priori knowledge of which signal is true and which is not. This work is an attempt to critically review the use of time-average-based methods for observing extended features in the cosmic-ray arrival distribution pattern.
Further Evidence on Economic and Noneconomic Reasons for Turnaround Migration.
ERIC Educational Resources Information Center
Adamchak, Donald J.
1987-01-01
Surveys importance of economic and environmental factors in motivation of metro- and nonmetro-origin migrants relocating in 13 Kansas nonmetropolitan nonamenity turnaround counties. Finds employment-related reasons predominate, with economic characteristics of counties a significant factor. Quality of life/environmental reasons were less…
Can Principals' Emotional Intelligence Matter to School Turnarounds?
ERIC Educational Resources Information Center
Cai, Qijie
2011-01-01
Through the literature review, the study aims to explore the relationship between the emotional intelligence (EI) of principals and the turnarounds of low-performing schools and generate some hypotheses regarding this issue. Rigorous, empirical studies published in the last 15 years are selected and reviewed around three relevant topics: the…
Rethinking the nonmetropolitan turnaround: renewed rural growth or extended urbanization?
Wang, Xiaodong
2007-04-25
This dissertation proposes a new, synthesized perspective for explaining the "Nonmetropolitan Turnaround" in the 1970s and 1990s. By studying the definition of urbanization carefully, using the human ecological perspective, many processes...
Negotiation Areas for "Transformation" and "Turnaround" Intervention Models
ERIC Educational Resources Information Center
Mass Insight Education (NJ1), 2011
2011-01-01
To receive School Improvement Grant (SIG) funding, districts must submit an application to the state that outlines their strategic plan to implement one of four intervention models in their persistently lowest-achieving schools. The four intervention models include: (1) School Closure; (2) Restart; (3) Turnaround; and (4) Transformation. The…
Shin, Jung-Ah; Lee, Byung-Kyu; Kwon, Jiwoon; Lee, Naroo; Chung, Kwang-Jae; Lee, Jong-Han; Lee, In-Seop; Kang, Seong-Kyu; Jang, Jae-Kil
2010-01-01
Objectives The level of benzene exposure in the petrochemical industry during regular operation has been well established, but not in turnaround (TA), where high exposure may occur. In this study, the characteristics of occupational exposure to benzene during TA in the petrochemical companies were investigated in order to determine the best management strategies and improve the working environment. This was accomplished by evaluating the exposure level for the workers working in environments where benzene was being produced or used as an ingredient during the unit process. Methods From 2003 to 2008, a total of 705 workers in three petrochemical companies in Korea were studied. Long- and short-term (< 1 hr) samples were taken during TAs. TA was classified into three stages: shut-down, maintenance and start-up. All works were classified into 12 occupation categories. Results The long-term geometric mean (GM) benzene exposure level was 0.025 (5.82) ppm (0.005-42.120 ppm) and the short-term exposure concentration during TA was 0.020 (17.42) ppm (0.005-61.855 ppm). The proportions of TA samples exceeding the time-weighted average, occupational exposure level (TWA-OEL in Korea, 1 ppm) and the short-term exposure limit (STEL-OEL, 5 ppm) were 4.1% (20 samples of 488) and 6.0% (13 samples of 217), respectively. The results for the benzene exposure levels and the rates of exceeding the OEL were both statistically significant (p < 0.05). Among the 12 job categories of petrochemical workers, mechanical engineers, plumbers, welders, fieldman and scaffolding workers exhibited long-term samples that exceeded the OEL of benzene, and the rate of exceeding the OEL was statistically significant for the first two occupations (p < 0.05). Conclusion These findings suggest that the periodic work environment must be assessed during non-routine works such as TA. PMID:22953163
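The summary statistics quoted above, a geometric mean with a geometric standard deviation in parentheses and a fraction of samples exceeding the OEL, can be reproduced with a short sketch. The data below are synthetic lognormal draws matching the reported GM and GSD, not the study's measurements:

```python
import numpy as np

def gm_gsd(samples):
    """Geometric mean and geometric standard deviation of positive data."""
    logs = np.log(samples)
    return float(np.exp(logs.mean())), float(np.exp(logs.std(ddof=1)))

def frac_exceeding(samples, oel):
    """Fraction of samples above an occupational exposure limit."""
    return float(np.mean(np.asarray(samples) > oel))

# Hypothetical lognormal exposures (ppm) with GM 0.025 and GSD 5.82:
rng = np.random.default_rng(5)
ppm = np.exp(rng.normal(np.log(0.025), np.log(5.82), 488))
gm, gsd = gm_gsd(ppm)
print(round(gm, 3), round(gsd, 2), frac_exceeding(ppm, 1.0))
```

With a GSD near 6, a GM of 0.025 ppm still leaves a few percent of samples above the 1 ppm TWA-OEL, which is consistent with the 4.1% exceedance rate reported for the TA long-term samples.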
An analytical model for time-averaged ion energy distributions in collisional rf sheaths
NASA Astrophysics Data System (ADS)
Chen, Wen-Cong; Pu, Yi-Kang
2014-08-01
An analytical model is proposed for time-averaged ion energy distributions (IEDs) in collisional rf sheaths (λi < sm, where λi is the ion mean free path and sm is the sheath thickness), in which charge transfer is the dominant ion-neutral collision mechanism. Our model differs from the model in Israel et al 2006 J. Appl. Phys. 99 093303 in two aspects. Firstly, to calculate the ion flux, we consider ions created in both the space charge region and the quasi-neutral region instead of only the quasi-neutral region. Secondly, to calculate the ion energy, we use an rf-modulated transit time instead of only the average transit time. Consequently, our model is valid over a wider pressure range (from a weakly collisional sheath to a strongly collisional sheath) than the model in Israel et al 2006 J. Appl. Phys. 99 093303. Our model shows that, in a collisional rf sheath (argon, 5 Pa, 27.12 MHz and 100 W), 65% of the ion flux in the IED comes from secondary ions created in the space charge region. Results of our model are compared with those obtained by measurement, the direct integration method and particle-in-cell/Monte Carlo collision simulation.
Time Average Holography Study of Human Tympanic Membrane with Altered Middle Ear Ossicular Chain
NASA Astrophysics Data System (ADS)
Cheng, Jeffrey T.; Ravicz, Michael E.; Rosowski, John J.; Hulli, Nesim; Hernandez-Montes, Maria S.; Furlong, Cosme
2009-02-01
Computer-assisted time average holographic interferometry was used to study the vibration of the human tympanic membrane (TM) in cadaveric temporal bones before and after alterations of the ossicular chain. Simultaneous laser Doppler vibrometer measurements of stapes velocity were performed to estimate the conductive hearing loss caused by ossicular alterations. The quantified TM motion described from holographic images was correlated with stapes velocity to define relations between TM motion and stapes velocity in various ossicular disorders. The results suggest that motions of the TM are relatively uncoupled from stapes motion at frequencies above 1000 Hz.
ERIC Educational Resources Information Center
Murphy, Joseph; Meyers, Coby V.
2009-01-01
In this article, we provide a grounded narrative of capacity building in the turnaround equation by exploring the turnaround literature outside of education and applying it to troubled schools. Our analysis is based upon reviews of: (1) 14 comprehensive, historical volumes that examine the turnaround phenomenon; (2) 16 book-length analyses of…
A ‘3Rs’ Strategy for Public Service Turnaround: Retrenchment, Repositioning and Reorganization
George A. Boyne
2004-01-01
Problems of public service ‘failure’ are high on the political agenda in the UK, and many national and local organizations are searching for effective turnaround strategies. Although little research on turnaround in the public sector has been undertaken, there is a substantial number of studies of decline and recovery in private firms. Evidence from these studies suggests that turnaround is
Time-averaged fluxes of lead and fallout radionuclides to sediments in Florida Bay
Robbins, J.A.; Holmes, C.; Halley, R.; Bothner, M.; Shinn, E.; Graney, J.; Keeler, G.; TenBrink, M.; Orlandini, K.A.; Rudnick, D.
2000-01-01
Recent, unmixed sediments from mud banks of central Florida Bay were dated using 210Pb/226Ra, and chronologies were verified by comparing sediment lead temporal records with Pb/Ca ratios in annual layers of coral (Montastrea annularis) located on the ocean side of the Florida Keys. Dates of sediment lead peaks (1978 ± 2) accord with prior observations of a 6 year lag between the occurrence of maximum atmospheric lead in 1972 and peak coral lead in 1978. Smaller lags of 1-2 years occur between the maximum atmospheric radionuclide fallout and peaks in sediment temporal records of 137Cs and Pu. Such lags are consequences of system time averaging (STA) in which atmospherically delivered particle-associated constituents accumulate and mix in a (sedimentary?) reservoir before transferring to permanent sediments and coral. STA model calculations, using time-dependent atmospheric inputs, produced optimized profiles in excellent accord with measured sediment 137Cs, Pu, lead, and coral lead distributions. Derived residence times of these particle tracers (16 ± 1, 15.7 ± 0.7, 19 ± 3, and 16 ± 2 years, respectively) are comparable despite differences in sampling locations, in accumulating media, and in element loading histories and geochemical properties. For a 16 year weighted mean residence time, STA generates the observed 6 year lead peak lag. Evidently, significant levels of nondegradable, particle-associated contaminants can persist in Florida Bay for many decades following elimination of external inputs. Present results, in combination with STA model analysis of previously reported radionuclide profiles, suggest that decade-scale time averaging may occur widely in recent coastal marine sedimentary environments. Copyright 2000 by the American Geophysical Union.
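The system time averaging described above, a reservoir with a roughly 16-year mean residence time delaying the atmospheric input peak, behaves like convolution with an exponential kernel. A schematic sketch (the triangular input series and the year indexing are invented for illustration):

```python
def exponential_reservoir(inputs, tau):
    """Pass a yearly input series through a first-order reservoir
    (exponential residence-time distribution with mean tau years);
    the returned series is the flux released to permanent sediment."""
    out = []
    stock = 0.0
    k = 1.0 / tau
    for x in inputs:
        stock += x            # deposition into the reservoir
        release = k * stock   # fraction transferred out this year
        stock -= release
        out.append(release)
    return out

# Triangular atmospheric input peaking at index 22 ("1972" if index 0 is 1950)
inp = [max(0.0, 1.0 - abs(i - 22) / 10.0) for i in range(60)]
sed = exponential_reservoir(inp, tau=16.0)
peak_lag = sed.index(max(sed)) - 22  # sediment peak lags the input peak
```

With a 16-year residence time this toy reservoir delays the peak by several years and smooths it, qualitatively reproducing the 6-year lead lag discussed in the abstract.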
Palevicius, Paulius; Ragulskis, Minvydas; Palevicius, Arvydas; Ostasevicius, Vytautas
2014-01-01
The optical investigation of movable microsystem components using time-averaged holography is presented in this paper. It is shown that even harmonic excitation of a non-linear microsystem may result in unpredictable chaotic motion. Analytical relations between the parameters of the chaotic oscillations and the formation of time-averaged fringes provide a deeper insight into computational and experimental interpretation of time-averaged MEMS holograms. PMID:24451467
Apollo/Saturn 5 space vehicle countdown. Volume 2: Turnaround from scrub
NASA Technical Reports Server (NTRS)
1972-01-01
The procedures required to prepare a space vehicle for subsequent launch attempt after cancelling lift-off activities are presented. The cancellation must occur after the start of cryogenic loading, but prior to initiation of ignition sequences. The sequence of operations necessary to return the space vehicle to a predetermined configuration at which time the launch count can be resumed or rescheduled for following launch opportunities is developed. The considerations and constraints that are the basis for the scrub/turnaround times are described.
Enhancing MALDI Time-Of-Flight Mass Spectrometer Performance through Spectrum Averaging.
Mitchell, Morgan; Mali, Sujina; King, Charles C; Bark, Steven J
2015-01-01
Matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) mass spectrometers are simple and robust mass spectrometers used for analysis of biologically relevant molecules in diverse fields including pathogen identification, imaging mass spectrometry, and natural products chemistry. Despite high nominal resolution and accuracy, we have observed significant variability where 30-50% of individual replicate measurements have errors in excess of 5 parts-per-million, even when using 5-point internal calibration. Increasing the number of laser shots for each spectrum did not resolve this observed variability. What is responsible for our observed variation? Using a modern MALDI-TOF/TOF instrument, we evaluated contributions to variability. Our data suggest a major component of variability is binning of the raw flight time data by the electronics and clock speed of the analog-to-digital (AD) detection system, which requires interpolation by automated peak fitting algorithms and impacts both calibration and the observed mass spectrum. Importantly, the variation observed is predominantly normal in distribution, which implies multiple components contribute to the observed variation and suggests a method to mitigate this variability through spectrum averaging. Restarting the acquisition impacts each spectrum within the electronic error of the AD detector system and defines a new calibration function. Therefore, averaging multiple independent spectra and not a larger number of laser shots leverages this inherent binning error to mitigate variability in accurate MALDI-TOF mass measurements. PMID:25798583
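The paper's central claim, that averaging independent spectra beats adding laser shots within one spectrum because each acquisition restart redraws the calibration/binning error, can be illustrated with a toy simulation (all noise magnitudes below are invented, not instrument values):

```python
import random
import statistics

random.seed(1)
TRUE_MZ = 1000.0  # hypothetical true m/z

def measured_mass(n_shots, n_spectra):
    """One final mass value: the average of n_spectra independent spectra,
    each carrying its own calibration offset plus per-shot noise."""
    estimates = []
    for _ in range(n_spectra):
        offset = random.gauss(0, 0.010)  # per-acquisition binning/calibration error
        shots = [TRUE_MZ + offset + random.gauss(0, 0.050) for _ in range(n_shots)]
        estimates.append(sum(shots) / n_shots)
    return sum(estimates) / n_spectra

# Same total shot budget (500), spent two ways:
more_shots   = [measured_mass(n_shots=500, n_spectra=1)  for _ in range(300)]
more_spectra = [measured_mass(n_shots=50,  n_spectra=10) for _ in range(300)]

sd_shots   = statistics.stdev(more_shots)    # limited by the single offset
sd_spectra = statistics.stdev(more_spectra)  # offsets average away
```

Extra shots within one spectrum cannot reduce the shared calibration offset, whereas averaging independent spectra shrinks it by roughly the square root of the number of spectra.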
Average excitation energies from time-dependent density functional response theory
NASA Astrophysics Data System (ADS)
Hu, Chunping; Sugino, Osamu
2007-02-01
The authors present an occupation number averaging scheme for time-dependent density functional response theory (TD-DFRT) in frequency domain. The known problem that TD-DFRT within the local (spin) density approximation (LDA/LSDA) inaccurately predicts Rydberg and charge-transfer excitation energies has been reexamined from the methodology of linear response, without explicit correction of the exchange-correlation potential. The working equations of TD-DFRT are adapted to treat arbitrary difference of orbital occupation numbers, using the nonsymmetric matrix form of Casida's formulation of TD-DFRT [M. E. Casida, in Recent Advances in Density Functional Methods, edited by D. P. Chong (World Scientific, Singapore, 1995), Pt. I, p. 155]. The authors' scheme is applied to typical closed-shell and open-shell molecular systems by examining the dependence of excitation energies on the fraction of excited electron. Good performance of this modified linear response scheme is shown, and is consistent with the authors' previous examination by the real-time propagation approach, suggesting that the calculation of average excitation energies might be one of the ways to better decode excitation energies from LDA/LSDA. Different techniques for treating singlet, triplet, and doublet states are discussed.
The BBSome controls IFT assembly and turnaround in cilia
Wei, Qing; Zhang, Yuxia; Li, Yujie; Zhang, Qing; Ling, Kun; Hu, Jinghua
2012-01-01
The bidirectional movement of intraflagellar transport (IFT) particles, which are composed of motors, IFT-A and IFT-B subcomplexes, and cargos, is required for cilia biogenesis and signaling 1, 2. A successful IFT cycle depends on the massive IFT particles being properly assembled at the ciliary base and turned around from anterograde to retrograde transport at the ciliary tip. However, how IFT assembly and turnaround are regulated in vivo remains elusive. From a whole-genome mutagenesis screen in C. elegans, we identified two hypomorphic mutations in dyf-2 and bbs-1 as the only mutants showing normal anterograde IFT transport but defective IFT turnaround at the ciliary tip. Further analyses revealed that the BBSome 3, 4, a group of conserved proteins affected in human Bardet-Biedl syndrome (BBS) 5, assembles IFT complexes at the ciliary base, then binds to the anterograde IFT particles in a DYF-2- (an ortholog of human WDR19) and BBS-1-dependent manner, and lastly reaches the ciliary tip to regulate proper IFT recycling. Our results identify the BBSome as the key player regulating IFT assembly and turnaround in cilia. PMID:22922713
NASA Technical Reports Server (NTRS)
Triedman, J. K.; Perrott, M. H.; Cohen, R. J.; Saul, J. P.
1995-01-01
Fourier-based techniques are mathematically noncausal and are therefore limited in their application to feedback-containing systems, such as the cardiovascular system. In this study, a mathematically causal time domain technique, autoregressive moving average (ARMA) analysis, was used to parameterize the relations of respiration and arterial blood pressure to heart rate in eight humans before and during total cardiac autonomic blockade. Impulse-response curves thus generated showed the relation of respiration to heart rate to be characterized by an immediate increase in heart rate of 9.1 ± 1.8 beats·min^-1·l^-1, followed by a transient mild decrease in heart rate to -1.2 ± 0.5 beats·min^-1·l^-1 below baseline. The relation of blood pressure to heart rate was characterized by a slower decrease in heart rate of -0.5 ± 0.1 beats·min^-1·mmHg^-1, followed by a gradual return to baseline. Both of these relations nearly disappeared after autonomic blockade, indicating autonomic mediation. Maximum values obtained from the respiration to heart rate impulse responses were also well correlated with frequency domain measures of high-frequency "vagal" heart rate control (r = 0.88). ARMA analysis may be useful as a time domain representation of autonomic heart rate control for cardiovascular modeling.
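Once ARMA coefficients relating an input (e.g., respiration) to heart rate have been estimated, the impulse response follows by driving the difference equation with a unit impulse. A generic sketch with made-up coefficients chosen only to mimic the immediate-increase-then-undershoot shape described above:

```python
def arma_impulse_response(ar, ex, n):
    """Impulse response of y[t] = sum_i ar[i]*y[t-1-i] + sum_j ex[j]*u[t-j]
    for a unit impulse input u[0] = 1, u[t>0] = 0."""
    y = []
    for t in range(n):
        # autoregressive part (past outputs)
        val = sum(a * y[t - 1 - i] for i, a in enumerate(ar) if t - 1 - i >= 0)
        # exogenous (moving-average) part: only u[0] = 1 contributes
        for j, b in enumerate(ex):
            if t - j == 0:
                val += b
        y.append(val)
    return y

# Hypothetical coefficients: immediate positive response that decays,
# then a small transient undershoot (cf. the respiration -> HR shape).
h = arma_impulse_response(ar=[0.5], ex=[9.0, -6.0], n=20)
```

Because the model is causal, the response is zero before the impulse and decays geometrically with the AR pole, unlike a Fourier-domain transfer function estimate.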
Area-preserving maps models of gyro-averaged E × B chaotic transport
J. D. da Fonseca; D. del-Castillo-Negrete; I. L. Caldas
2014-09-10
Discrete maps have been extensively used to model 2-dimensional chaotic transport in plasmas and fluids. Here we focus on area-preserving maps describing finite Larmor radius (FLR) effects on E × B chaotic transport in magnetized plasmas with zonal flows perturbed by electrostatic drift waves. FLR effects are included by gyro-averaging the Hamiltonians of the maps which, depending on the zonal flow profile, can have monotonic or non-monotonic frequencies. In the limit of zero Larmor radius, the monotonic frequency map reduces to the standard Chirikov-Taylor map, and, in the case of non-monotonic frequency, the map reduces to the standard nontwist map. We show that in both cases FLR leads to chaos suppression, changes in the stability of fixed points, and robustness of transport barriers. FLR effects are also responsible for changes in the phase space topology and zonal flow bifurcations. Dynamical systems methods based on recurrence time statistics are used to quantify the dependence on the Larmor radius of the threshold for the destruction of transport barriers.
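In the zero-Larmor-radius limit the monotonic-frequency map is the Chirikov-Taylor standard map; gyro-averaging typically enters as a Bessel-function factor J0(ρ) on the perturbation amplitude, which suppresses chaos as the Larmor radius ρ grows. A sketch under that assumption (the paper's exact gyro-averaged form may differ):

```python
import math

def bessel_j0(x, terms=20):
    """Series evaluation of the Bessel function J0 (adequate for small x)."""
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= -(x * x / 4.0) / ((k + 1) ** 2)
    return s

def standard_map(theta, p, K, rho=0.0, steps=1000):
    """Iterate a gyro-averaged standard map with effective kick K * J0(rho).

    rho = 0 recovers the usual Chirikov-Taylor map."""
    k_eff = K * bessel_j0(rho)
    for _ in range(steps):
        p = (p + k_eff * math.sin(theta)) % (2 * math.pi)
        theta = (theta + p) % (2 * math.pi)
    return theta, p
```

At ρ near a zero of J0 (e.g., ρ ≈ 2.405) the effective kick vanishes, so the dynamics becomes nearly integrable: a crude picture of FLR-induced chaos suppression.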
NASA Astrophysics Data System (ADS)
Mao, Yijun; Xu, Chen; Qi, Datong
2015-02-01
A vector aeroacoustics method is developed to analyze the acoustic energy flow path from a rotating source. In this method, the instantaneous and time-averaged active acoustic intensity vectors are evaluated from the time-domain and frequency-domain acoustic pressure and acoustic velocity formulations, respectively. With this method, the acoustic intensity vectors and the acoustic energy streamlines are visualized to investigate the propagation features of the noise radiated from monopole and dipole point sources and from a rotor in subsonic rotation. The result reveals that a portion of the acoustic energy spirals many circles before moving towards the far field, while another portion first flows inward along the radial direction and then propagates along the axial direction. Further, an acoustic black hole exists in the plane of source rotation, from which the acoustic energy cannot escape once it flows in. Moreover, by visualizing the acoustic intensity field around the rotating sources, the acoustic-absorption performance of the acoustic liner built into the casing and centerbody is discussed.
New device for time-averaged measurement of volatile organic compounds (VOCs).
Santiago Sánchez, Noemí; Tejada Alarcón, Sergio; Tortajada Santonja, Rafael; Llorca-Pórcel, Julio
2014-07-01
Contamination by volatile organic compounds (VOCs) in the environment is an increasing concern since these compounds are harmful to ecosystems and even to human health; indeed, many of them are considered toxic and/or carcinogenic. The main sources of pollution are very diffuse focal points such as industrial discharges, urban water and accidental spills, and since these compounds may be present in many products and processes (e.g., paints, fuels, petroleum products, raw materials, solvents), their control is difficult. The presence of these compounds in groundwater, influenced by discharges, leachate or effluents of WWTPs, is especially problematic. In recent years, legislation has become increasingly restrictive regarding emissions of these compounds. From an environmental point of view, the European Water Framework Directive (2000/60/EC) sets out some VOCs as priority substances. This binding directive requires compounds such as benzene, chloroform, and carbon tetrachloride to be controlled at very low concentrations and with a very high frequency of analysis. The presence of VOCs in the various effluents is often highly variable and discontinuous since it depends on the variability of the sources of contamination. Therefore, in order to have complete information on the presence of these contaminants and to take preventive measures effectively, it is important to monitor continuously, which requires new devices that obtain average concentrations over time. As of today, due to technical limitations, no devices on the market allow efficient continuous sampling of these compounds with detection limits low enough to meet the legal requirements and to capture very sporadic, short-duration discharges. LABAQUA has developed a device which consists of a small peristaltic pump controlled by an electronic board that governs its operation by pre-programming.
A constant flow passes through a glass cell containing adsorbent material where the VOCs are retained. The adsorbent, made in LABAQUA, is a mixture of alginic acid and activated carbon. Due to its high permeability it allows the passage and retention of THMs in a suitable way, thus solving many of the problems of other common adsorbents. To avoid degradation of the adsorbent, it is wrapped in a low density polyethylene (LDPE) membrane. After a sampling period of between 1 and 14 days, the adsorbent is collected and analyzed in the laboratory to quantify the average VOC concentration. This device resolves some of the limitations of the classical sampling approach (spot samples), since it takes into account fluctuations in VOC concentration by averaging over time. This study presents the results obtained by the device for quantifying the VOCs legislated in Directive 2000/60/EC. We present the validation of linearity over time and the limits of quantification, as well as the sample rates (Rs) obtained for each compound. The results demonstrate the high robustness and high sensitivity of the device. In addition, the system has been validated on real waste water samples by comparing the results obtained with this device against classical spot sampling, with excellent agreement. PMID:24388502
Detecting macroeconomic phases in the Dow Jones Industrial Average time series
NASA Astrophysics Data System (ADS)
Wong, Jian Cheng; Lian, Heng; Cheong, Siew Ann
2009-11-01
In this paper, we perform statistical segmentation and clustering analysis of the Dow Jones Industrial Average (DJI) time series between January 1997 and August 2008. Modeling the index movements and log-index movements as stationary Gaussian processes, we find a total of 116 and 119 statistically stationary segments respectively. These can then be grouped into between five and seven clusters, each representing a different macroeconomic phase. The macroeconomic phases are distinguished primarily by their volatilities. We find that the US economy, as measured by the DJI, spends most of its time in a low-volatility phase and a high-volatility phase. The former can be roughly associated with economic expansion, while the latter contains the economic contraction phase in the standard economic cycle. Both phases are interrupted by a moderate-volatility market correction phase, but extremely-high-volatility market crashes are found mostly within the high-volatility phase. From the temporal distribution of various phases, we see a high-volatility phase from mid-1998 to mid-2003, and another starting mid-2007 (the current global financial crisis). Transitions from the low-volatility phase to the high-volatility phase are preceded by a series of precursor shocks, whereas the transition from the high-volatility phase to the low-volatility phase is preceded by a series of inverted shocks. The time scale for both types of transitions is about a year. We also identify the July 1997 Asian Financial Crisis to be the trigger for the mid-1998 transition, and an unnamed May 2006 market event related to corrections in the Chinese markets to be the trigger for the mid-2007 transition.
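The paper segments the index into statistically stationary Gaussian pieces and clusters them by volatility; a much cruder stand-in, classifying synthetic log-returns into low/high-volatility phases with a rolling standard deviation and a threshold, conveys the idea (all data below are simulated, not DJI values):

```python
import math
import random

random.seed(0)
# Synthetic log-returns: a calm regime (sigma = 0.005) then a turbulent one (0.02)
returns = [random.gauss(0, 0.005) for _ in range(500)] + \
          [random.gauss(0, 0.02) for _ in range(500)]

def rolling_vol(xs, w):
    """Rolling sample standard deviation over windows of w observations."""
    out = []
    for i in range(w, len(xs) + 1):
        win = xs[i - w:i]
        m = sum(win) / w
        out.append(math.sqrt(sum((x - m) ** 2 for x in win) / (w - 1)))
    return out

vol = rolling_vol(returns, 50)
threshold = sorted(vol)[len(vol) // 2]   # median volatility as a crude boundary
phases = [1 if v > threshold else 0 for v in vol]

# Fraction of the later period labeled "high volatility"
late = phases[len(phases) // 2:]
high_frac_late = sum(late) / len(late)
```

A proper treatment would, as in the paper, segment on changes in the Gaussian likelihood and then cluster segment variances, rather than fixing a single threshold.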
NASA Astrophysics Data System (ADS)
Charuchittipan, Doojdao; Babel, Wolfgang; Mauder, Matthias; Leps, Jens-Peter; Foken, Thomas
2014-09-01
The modified ogive analysis and the block ensemble average were employed to investigate the impact of the averaging time extension on the energy balance closure over six land-use types. The modified ogive analysis, which requires a steady-state condition, can extend the averaging time up to a few hours and suggests that an averaging time of 30 min is still overall sufficient for eddy-covariance measurements over low vegetation. The block ensemble average, which does not require a steady-state condition, can extend the averaging time to several days. However, it can improve the energy balance closure for some sites during specific periods, when secondary circulations exist in the vicinity of the sensor. These near-surface secondary circulations mainly transport sensible heat, and when near-ground warm air is transported upward, the sensible heat flux observed by the block ensemble average will increase at longer averaging times. These findings suggest an alternative energy balance correction for a ground-based eddy-covariance measurement, in which the attribution of the residual depends on the ratio of sensible heat flux to the buoyancy flux. The fraction of the residual attributed to the sensible heat flux by this energy balance correction is larger than in the energy balance correction that preserves the Bowen ratio.
Redshift-space equal-time angular-averaged consistency relations of the gravitational dynamics
Nishimichi, Takahiro
2015-01-01
We present the redshift-space generalization of the equal-time angular-averaged consistency relations between $(\\ell+n)$- and $n$-point polyspectra of the cosmological matter density field. Focusing on the case of $\\ell=1$ large-scale mode and $n$ small-scale modes, we use an approximate symmetry of the gravitational dynamics to derive explicit expressions that hold beyond the perturbative regime, including both the large-scale Kaiser effect and the small-scale fingers-of-god effects. We explicitly check these relations, both perturbatively, for the lowest-order version that applies to the bispectrum, and nonperturbatively, for all orders but for the one-dimensional dynamics. Using a large ensemble of $N$-body simulations, we find that our squeezed bispectrum relation is valid to better than $20\\%$ up to $1h$Mpc$^{-1}$, for both the monopole and quadrupole at $z=0.35$, in a $\\Lambda$CDM cosmology. Additional simulations done for the Einstein-de Sitter background suggest that these discrepancies mainly come fr...
Thermodynamic formula for the cumulant generating function of time-averaged current.
Nemoto, Takahiro; Sasa, Shin-ichi
2011-12-01
The cumulant generating function of time-averaged current is studied from an operational viewpoint. Specifically, for interacting Brownian particles under nonequilibrium conditions, we show that the first derivative of the cumulant generating function is equal to the expectation value of the current in a modified system with an extra force added, where the modified system is characterized by a variational principle. The formula reminds us of Einstein's fluctuation theory in equilibrium statistical mechanics. Furthermore, since the formula leads to the fluctuation-dissipation relation when the linear response regime is focused on, it is regarded as an extension of the linear response theory to that valid beyond the linear response regime. The formula is also related to previously known theories such as the Donsker-Varadhan theory, the additivity principle, and the least dissipation principle, but it is not derived from them. Examples of its application are presented for a driven Brownian particle on a ring subject to a periodic potential. PMID:22304046
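In standard large-deviation notation (ours, not necessarily the paper's), the scaled cumulant generating function of the time-averaged current and the operational formula stated in the abstract read:

```latex
G(h) \;=\; \lim_{T\to\infty} \frac{1}{T}\,
  \ln\!\left\langle \exp\!\left( h \int_0^T j(t)\,\mathrm{d}t \right) \right\rangle ,
\qquad
\frac{\mathrm{d}G}{\mathrm{d}h} \;=\; \langle J \rangle_h ,
```

where ⟨J⟩_h is the stationary current in the modified system with the extra, variationally determined force; expanding around h = 0 recovers the fluctuation-dissipation relation mentioned in the abstract.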
NASA Astrophysics Data System (ADS)
Yoon, Y.; Kim, N.; Puria, S.; Steele, C. R.
2009-02-01
In this work, basilar membrane velocity (VBM), scala tympani intracochlear pressure (PST), and cochlear input impedances (Zc) for gerbil and chinchilla are implemented using a three-dimensional hydro-dynamic cochlear model using 1) time-averaged Lagrangian, 2) push-pull mechanism in active case, and 3) the complex anatomy of cochlear scalae by micro computed tomography (?CT) scanning and 3-D reconstructions of gerbil and chinchilla temporal bones. The objective of this work is to compare the calculations and the physiological measurements of gerbil and chinchilla cochlear such as VBM (Ren and Nuttall [1]), PST (Olson [2]), and ZC (Decraemer et al. [3], Songer and Rosowski [4], Ruggero et al. [5]) with present model. A WKB asymptotic method combined with Fourier series expansions is used to provide an efficient simulation. VBM and PST simulation results for the gerbil cochlea show good agreement both in the magnitude and the phase for the physiological measurements without larger phase excursion. ZC simulation from the gerbil and chinchilla model show reasonably good agreement with measurement.
TIME-AVERAGED TURBULENT MIXING AND VERTICAL CONCENTRATION DISTRIBUTION OF HIGH-...
...the phenomena of HDS, which showed that there existed an obvious vertical concentration gradient... A time-averaged suspended sediment concentration model for HDS was developed. To avoid the stability problems...
NASA Astrophysics Data System (ADS)
Tauxe, Lisa; Luskin, Casey; Selkin, Peter; Gans, Phillip; Calvert, Andy
2004-08-01
This study presents paleomagnetic results from the Snake River Plain (SRP) in southern Idaho as a contribution to the time-averaged field global database. Paleomagnetic samples were measured from 26 sites, 23 of which (13 normal, 10 reverse) yielded site mean directions meeting our criteria for acceptable paleomagnetic data. Flow ages (on 21 sites) range from 5 ka to 5.6 Ma on the basis of 40Ar/39Ar dating methods. The age and polarity for the 21 dated sites are consistent with the Geomagnetic Reversal Time Scale except for a single reversely magnetized site dated at 0.39 Ma. This is apparently the first documented excursion associated with a period of low paleointensity detected in both sedimentary and igneous records. Combining the new data from the SRP with data published from the northwest United States between the latitudes of 40° and 50°N, there are 183 sites in all that meet minimum acceptability criteria for legacy and new data. The overall mean direction of 173 normally magnetized sites has a declination of 2.3°, inclination of 61.4°, a Fisher concentration parameter (κ) of 58, and a radius of 95% confidence (α95) of 1.4°. Reverse sites have a mean direction of 182.4° declination, -58.6° inclination, κ of 50, and α95 of 6.9°. Normal and reversed mean directions are antipodal and indistinguishable from a geocentric axial dipole field at the 95% confidence level. Virtual geomagnetic pole dispersion was found to be circularly symmetric, while the directional data were elongate north-south. An updated and corrected database for the northwestern U.S. region has been contributed to the Magnetics Information Consortium (MagIC) database at http://earthref.org.
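The mean direction, Fisher concentration parameter κ, and α95 reported above follow from Fisher (1953) statistics on unit direction vectors; a sketch on hypothetical site-mean directions (not the study's data):

```python
import math

def fisher_mean(dirs):
    """Fisher statistics for (declination, inclination) pairs in degrees.

    Returns (mean_dec, mean_inc, kappa, alpha95), using
    kappa = (N-1)/(N-R) and the standard alpha95 formula with p = 0.05."""
    xs = ys = zs = 0.0
    for dec, inc in dirs:
        d, i = math.radians(dec), math.radians(inc)
        xs += math.cos(i) * math.cos(d)
        ys += math.cos(i) * math.sin(d)
        zs += math.sin(i)
    n = len(dirs)
    r = math.sqrt(xs ** 2 + ys ** 2 + zs ** 2)   # resultant vector length
    kappa = (n - 1) / (n - r)
    a95 = math.degrees(math.acos(
        1 - (n - r) / r * ((1 / 0.05) ** (1 / (n - 1)) - 1)))
    mean_dec = math.degrees(math.atan2(ys, xs)) % 360
    mean_inc = math.degrees(math.asin(zs / r))
    return mean_dec, mean_inc, kappa, a95

# Hypothetical tightly clustered normal-polarity site means
dirs = [(2.0, 61.0), (5.0, 60.0), (358.0, 62.5), (1.0, 59.5), (3.5, 61.8)]
dec, inc, kappa, a95 = fisher_mean(dirs)
```

Tighter clustering drives R toward N, inflating κ and shrinking α95, which is why the 173-site normal mean above carries such a small confidence radius.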
Lu, Zhiming
simulations are commonly utilized, and the mean breakthrough curve is often calculated from the arithmetic average of all realizations. The mean breakthrough curve computed from arithmetic averaging in general overestimates the mass flow rate...
Uncertainty and variability in historical time-weighted average exposure data.
Davis, Adam J; Strom, Daniel J
2008-02-01
Beginning around 1940, private companies began processing of uranium and thorium ore, compounds, and metals for the Manhattan Engineer District and later the U.S. Atomic Energy Commission (AEC). Personnel from the AEC's Health and Safety Laboratory (HASL) visited many of the plants to assess worker exposures to radiation and radioactive materials. They developed a time-and-task approach to estimating "daily weighted average" (DWA) concentrations of airborne uranium, thorium, radon, and radon decay products. While short-term exposures greater than 10^5 dpm m^-3 of uranium and greater than 10^5 pCi L^-1 of radon were observed, DWA concentrations were much lower. The HASL-reported DWA values may be used as inputs for dose reconstruction in support of compensation decisions, but they have no numerical uncertainties associated with them. In this work, Monte Carlo methods are used retrospectively to assess the uncertainty and variability in the DWA values for 63 job titles from five different facilities that processed U, U ore, Th, or 226Ra-222Rn between 1948 and 1955. Most groups of repeated air samples are well described by lognormal distributions. Combining samples associated with different tasks often results in a reduction of the geometric standard deviation (GSD) of the DWA to less than those GSD values typical of individual tasks. Results support the assumption of a GSD value of 5 when information on uncertainty in DWA exposures is unavailable. Blunders involving arithmetic, transposition, and transcription are found in many of the HASL reports. In 5 out of the 63 cases, these mistakes result in overestimates of DWA values by a factor of 2 to 2.5, and in 2 cases DWA values are underestimated by factors of 3 to 10. PMID:18188049
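A time-and-task DWA and its Monte Carlo uncertainty can be sketched by drawing each task's concentration from a lognormal with its own GM and GSD (all task durations and concentrations below are invented). Note how the DWA's GSD falls below the largest single-task GSD, echoing the reduction the abstract describes:

```python
import math
import random
import statistics

random.seed(42)

# Hypothetical tasks: (hours per day, GM concentration, GSD)
tasks = [(2.0, 500.0, 4.0), (5.0, 50.0, 3.0), (1.0, 2000.0, 5.0)]

def sample_dwa(tasks, workday=8.0):
    """One Monte Carlo realization of the daily weighted average (DWA):
    each task concentration drawn lognormally with its own GM and GSD."""
    total = 0.0
    for hours, gm, gsd in tasks:
        conc = random.lognormvariate(math.log(gm), math.log(gsd))
        total += hours * conc
    return total / workday

dwas = [sample_dwa(tasks) for _ in range(20000)]
logs = [math.log(x) for x in dwas]
gm_dwa = math.exp(statistics.mean(logs))   # geometric mean of the DWA
gsd_dwa = math.exp(statistics.stdev(logs)) # geometric standard deviation
```

Because the task draws are independent, their weighted sum is relatively less dispersed than its most variable component, so gsd_dwa comes out below the largest task GSD of 5.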
Meeting the challenge of a group practice turnaround.
Porn, L M
2001-03-01
Many healthcare organizations that acquired group practices to enhance their market share have found that the practices have not met their financial goals. Turning around a financially troubled, hospital-owned group practice is challenging but not impossible for healthcare organizations that take certain basic actions. Direction, data, desire, dedication, and drive must be present to effect the financial turnaround of a group practice. The healthcare organization needs to evaluate the practice's strategy and operations and identify the issues that are hindering the practice's ability to optimize revenues. Efforts to achieve profitable operations have to be ongoing. PMID:11258274
Toint, Philippe
From average travel time budgets to daily travel time distributions: an appraisal of two …
… and human energy expenditure for travel, which is assumed to be constant in time and space. The second one … little over time and space, as pointed out first by Zahavi (1977), even if disaggregated travel times …
Guedes, R.M.C.; Calliari, L.J.; Holland, K.T.; Plant, N.G.; Pereira, P.S.; Alves, F.N.A.
2011-01-01
Time-exposure intensity (averaged) images are commonly used to locate the nearshore sandbar position (xb), based on the cross-shore locations of maximum pixel intensity (xi) of the bright bands in the images. It is not known, however, how the breaking patterns seen in Variance images (i.e. those created through standard deviation of pixel intensity over time) are related to the sandbar locations. We investigated the suitability of both Time-exposure and Variance images for sandbar detection within a multiple bar system on the southern coast of Brazil, and verified the relation between wave breaking patterns, observed as bands of high intensity in these images, and cross-shore profiles of modeled wave energy dissipation (xD). The Time-exposure maximum pixel intensity location (xi-Ti) is well related not only to xb but also to the maximum pixel intensity location of Variance images (xi-Va), although the latter was typically located 15 m offshore of the former. In addition, xi-Va was observed to be better associated with xD even though xi-Ti is commonly assumed as maximum wave energy dissipation. Significant wave height (Hs) and water level (η) were observed to affect the two types of images in a similar way, with an increase in both Hs and η resulting in xi shifting offshore. This η-induced xi variability is opposite to the behavior described in the literature, and is likely an indirect effect of higher waves breaking farther offshore during periods of storm surges. Multiple regression models performed on xi, Hs and η allowed the reduction of the residual errors between xb and xi, yielding accurate estimates with most residuals less than 10 m. Additionally, it was found that the sandbar position was best estimated using xi-Ti (xi-Va) when xb was located shoreward (seaward) of its mean position, for both the first and the second bar.
Although it is unknown whether this is an indirect hydrodynamic effect or is indeed related to the morphology, we found that this behavior can be explored to optimize sandbar estimation using video imagery, even in the absence of hydrodynamic data. © 2011 Elsevier B.V.
Time-Averaged Velocity, Temperature and Density Surveys of Supersonic Free Jets
NASA Technical Reports Server (NTRS)
Panda, Jayanta; Seasholtz, Richard G.; Elam, Kristie A.; Mielke, Amy F.
2005-01-01
A spectrally resolved molecular Rayleigh scattering technique was used to simultaneously measure the axial component of velocity U, static temperature T, and density p in unheated free jets at Mach numbers M = 0.6, 0.95, 1.4 and 1.8. The latter two conditions were achieved using contoured convergent-divergent nozzles. A narrow line-width continuous wave laser was passed through the jet plumes, and molecular scattered light from a small region on the beam was collected and analyzed using a Fabry-Perot interferometer. In addition to the optical spectrum analysis, air density at the probe volume was determined by monitoring the intensity variation of the scattered light using photo-multiplier tubes. The Fabry-Perot interferometer was operated in the imaging mode, whereby the fringe formed at the image plane was captured by a cooled CCD camera. Special attention was given to removing dust particles from the plume and to providing adequate vibration isolation for the optical components. The velocity profiles from various operating conditions were compared with those measured by a Pitot tube. An excellent comparison, within 5 m/s, demonstrated the maturity of the technique. Temperature was measured least accurately, within 10 K, while density was measured within 1% uncertainty. The survey data consisted of centerline variations and radial profiles of time-averaged U, T and p. The static temperature and density values were used to determine static pressure variations inside the jet. The data provided a comparative study of jet growth rates with increasing Mach number. The current work is part of a database development project for Computational Fluid Dynamics and Aeroacoustics codes that endeavor to predict noise characteristics of high speed jets. A limited amount of far-field noise spectra from the same jets is also presented.
Finally, a direct experimental validation was obtained for the Crocco-Busemann equation which is commonly used to predict temperature and density profiles from known velocity profiles. Data presented in this paper are available in ASCII format upon request.
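The Crocco-Busemann check mentioned above can be illustrated with a minimal sketch: for an unheated, adiabatic jet with uniform total temperature, the relation reduces to the energy equation T = T0 - u^2/(2 cp), and density follows from the ideal-gas law at ambient static pressure. The ambient values below are assumptions for illustration; the paper's exact formulation may differ.

```python
import numpy as np

# Sketch of the Crocco-Busemann-type prediction for an unheated jet:
# uniform total temperature T0, so the static temperature follows from
# the velocity, and density from the ideal-gas law at constant pressure.
cp = 1005.0    # J/(kg K), air (assumed)
R = 287.0      # J/(kg K), air (assumed)
T0 = 288.15    # K, ambient total temperature (assumed)
p = 101325.0   # Pa, ambient static pressure (assumed)

def crocco_busemann(u):
    """Static temperature and density profiles from a velocity profile."""
    T = T0 - u**2 / (2.0 * cp)   # energy equation, adiabatic flow
    rho = p / (R * T)            # ideal gas at constant static pressure
    return T, rho

u = np.array([0.0, 200.0, 400.0])   # m/s, sample velocity profile
T, rho = crocco_busemann(u)
print(T, rho)
```

As expected, the static temperature drops where the velocity is high, and the density rises correspondingly.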
Diagnostic quality of time-averaged ECG-gated CT data
NASA Astrophysics Data System (ADS)
Klein, Almar; Oostveen, Luuk J.; Greuter, Marcel J. W.; Hoogeveen, Yvonne; Schultze Kool, Leo J.; Slump, Cornelis H.; Renema, W. Klaas Jan
2009-02-01
Purpose: ECG-gated CTA allows visualization of the aneurysm and stentgraft during the different phases of the cardiac cycle, although with a lower SNR per cardiac phase than without ECG gating using the same dose. In our institution, abdominal aortic aneurysm (AAA) is evaluated using non-ECG-gated CTA. Some common CT scanners cannot reconstruct a non-gated volume from ECG-gated acquired data. In order to obtain the same diagnostic image quality, we propose offline temporal averaging of the ECG-gated data. This process, though straightforward, is fundamentally different from taking a non-gated scan, and its result will certainly differ as well. The purpose of this study is to quantitatively investigate how well off-line averaging approximates a non-gated scan. Method: Non-gated and ECG-gated CT scans have been performed on a phantom (Catphan 500). Afterwards, the phases of the ECG-gated CTA data were averaged to create a third dataset. The three sets are compared with respect to noise properties (NPS) and frequency response (MTF). To study motion artifacts identical scans were acquired on a programmable dynamic phantom. Results and Conclusions: The experiments show that the spatial frequency content is not affected by the averaging process. The minor differences observed for the noise properties and motion artifacts are in favor of the averaged data. Therefore the averaged ECG-gated phases can be used for diagnosis. This enables the use of ECG-gating for research on stentgrafts in AAA, without impairing clinical patient care.
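The rationale for off-line temporal averaging can be illustrated numerically: averaging N independently noisy cardiac phases leaves the underlying structure untouched while reducing noise by roughly sqrt(N). The 1-D edge "phantom" below is a toy stand-in, not the Catphan data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D phantom profile: a sharp edge, imaged in 10 cardiac phases,
# each with independent noise. Averaging the phases off-line reduces
# the noise ~sqrt(10) without altering the edge, mimicking a non-gated
# acquisition at the same total dose.
edge = np.where(np.arange(256) < 128, 0.0, 100.0)
sigma_phase = 20.0
phases = edge + rng.normal(0.0, sigma_phase, size=(10, 256))

averaged = phases.mean(axis=0)
noise_single = np.std(phases[0] - edge)   # noise of one gated phase
noise_avg = np.std(averaged - edge)       # noise after temporal averaging
print(noise_single / noise_avg)           # close to sqrt(10)
```

The edge location and amplitude are unchanged by averaging, which is the 1-D analogue of the finding that the spatial frequency content (MTF) is unaffected.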
Influence of Organizing Strategies, Time, and Grade Point Averages on Retention Performance
ERIC Educational Resources Information Center
Cohen, Shelby Ruth
1977-01-01
The relationships among retention performance, organizing strategy preference, and grade point average were assessed in 227 college students, with the resulting correlation coefficients suggesting that the type of organizing strategy used in memorizing exerts a strong influence on the recall of paired associates over both short and long periods of…
Miller, William H.
… energy levels. II. Application to H2CO, NH3, CH4, CH2D2 (Alexey L. Kaledin and William H. Miller)
… densities of molecules, previously tested successfully on H2O, is applied here to several larger molecules … dynamics, however, the phase space average converges more slowly. Results for vibrational energy levels …
6 The effect of time averaged sampling on … (6.3 Introduction)
… of these sensors employ sub-sampling to reduce noise in the sensor's output signal. Some examples of current … The attenuation of signal fluctuations caused by sensor spatial averaging has been addressed by Gurvich (1962); … caused by sensor separation has been addressed by Kaimal et al. (1968), Moore (1986), Kristensen (1997) …
Use of the harmonic mean: On averaging count-rate data for variable-time, fixed-count intervals
Colle, R.
1996-09-01
The average of count rates from measurements performed by accumulating a fixed total number of counts over a variable time interval should be obtained with the harmonic mean. Use of an arithmetic mean in these cases will give incorrect results. Uncertainty estimators for this harmonically averaged counting rate, such as the standard deviation, must also be suitably transformed to correspond to those for the harmonic mean.
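The point is easy to verify numerically: with a fixed count N per interval, the harmonic mean of the individual rates is identical to total counts divided by total time, while the arithmetic mean overweights the short (high-rate) intervals. The interval times below are illustrative.

```python
import numpy as np

# Fixed-count, variable-time measurements: each interval accumulates
# N = 1000 counts; only the elapsed times differ (illustrative values).
N = 1000
times = np.array([8.0, 10.0, 12.5, 20.0])   # seconds per interval
rates = N / times                           # counts per second

arithmetic = rates.mean()
harmonic = len(rates) / np.sum(1.0 / rates)
true_rate = (N * len(times)) / times.sum()  # total counts / total time

print(arithmetic, harmonic, true_rate)
```

The harmonic mean reproduces the physically meaningful rate (total counts over total time) exactly, while the arithmetic mean is biased high.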
Daily Average Consumption of 2 Long-Acting Opioids: An Interrupted Time Series Analysis
Puenpatom, R. Amy; Szeinbach, Sheryl L.; Ma, Larry; Ben-Joseph, Rami H.; Summers, Kent H.
2012-01-01
Background Oxycodone controlled release (CR) and oxymorphone extended release (ER) are frequently prescribed long-acting opioids, which are approved for twice-daily dosing. The US Food and Drug Administration approved a reformulated crush-resistant version of oxycodone CR in April 2010. Objective To compare the daily average consumption (DACON) for oxycodone CR and for oxymorphone ER before and after the introduction of the reformulated, crush-resistant version of oxycodone CR. Methods This was a retrospective claims database analysis using pharmacy claims from the MarketScan database for the period from January 2010 through March 2011. The interrupted time series analysis was used to evaluate the impact of the introduction of reformulated oxycodone CR on the DACON of the 2 drugs—oxycodone CR and oxymorphone ER. The source of the databases included private-sector health data from more than 150 medium and large employers. All prescription claims containing oxycodone CR and oxymorphone ER dispensed to members from January 1, 2010, to March 31, 2011, were included in the analysis. Prescription claims containing duplicate National Drug Codes, missing member identification, invalid quantities or inaccurate days supply of either drug, and DACON values of <1 and >500 were removed. Results The database yielded 483,063 prescription claims for oxycodone CR and oxymorphone ER from January 1, 2010, to March 31, 2011. The final sample consisted of 411,404 oxycodone CR prescriptions (traditional and reformulated) dispensed to 85,150 members and 62,656 oxymorphone ER prescriptions dispensed to 11,931 members. Before the introduction of reformulated oxycodone CR, DACON values for the highest strength available for each of the 2 drugs were 0.51 tablets higher for oxycodone CR than for oxymorphone ER, with mean DACON values of 3.5 for oxycodone CR and 3.0 for oxymorphone ER (P <.001). 
The differences of mean DACON between the 2 drugs for all lower strengths were 0.46 tablets, with mean DACON values of 2.7 for oxycodone CR and 2.3 for oxymorphone ER (P <.001). After the introduction of the new formulation, the difference in mean DACON between the 2 drugs was slightly lower: 0.45 tablets for the highest-strength and 0.40 tablets for the lower-strength pairs. Regression analyses showed that the immediate and overall impact of the reformulation of oxycodone CR on the DACON of oxycodone CR was minimal, whereas no changes were seen in the DACON of oxymorphone ER. The estimated DACON for oxycodone CR decreased by 0.1 tablets, or 3.7% (P <.001), 6 months after the new formulation was introduced. Conclusion The mean DACON was 0.4 tablets per day higher for oxycodone CR compared with oxymorphone ER for all dosage strengths for the entire study period. After the introduction of the reformulated oxycodone CR, the DACON for this drug was slightly mitigated; however, there was a minimal impact on the mean differences between oxycodone CR and oxymorphone ER. PMID:24991311
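An interrupted time series of this kind is commonly fit with a segmented regression containing level- and slope-change terms at the intervention. The synthetic monthly series below (with an assumed intervention at month 8 and invented coefficients) is a minimal sketch of that model, not the MarketScan analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly DACON-like series: 15 months, intervention after
# month 8 (loosely mimicking the April 2010 reformulation). The true
# level change is -0.1 tablets; all coefficients are illustrative.
t = np.arange(15, dtype=float)
post = (t >= 8).astype(float)
y = 3.5 + 0.0 * t - 0.1 * post - 0.01 * (t - 8) * post \
    + rng.normal(0.0, 0.01, 15)

# Segmented (interrupted time series) regression:
#   y = b0 + b1*t + b2*post + b3*(t - t0)*post
X = np.column_stack([np.ones_like(t), t, post, (t - 8) * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # beta[2] estimates the immediate level change
```

Here beta[2] recovers the immediate post-intervention drop and beta[3] the slope change, which is how the study separates an immediate effect from a gradual one.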
NASA Technical Reports Server (NTRS)
Mielke, Amy F.; Seasholtz, Richard G.; Elam, Kristie A.; Panda, Jayanta
2004-01-01
Measurement of time-averaged velocity, density, temperature, and turbulence in gas flows using a nonintrusive, point-wise measurement technique based on molecular Rayleigh scattering is discussed. Subsonic and supersonic flows in a 25.4-mm diameter free jet facility were studied. The developed instrumentation utilizes a Fabry-Perot interferometer to spectrally resolve molecularly scattered light from a laser beam passed through a gas flow. The spectrum of the scattered light contains information about the velocity, density, and temperature of the gas. The technique uses a slow scan, low noise 16-bit depth CCD camera to record images of the fringes formed by Rayleigh scattered light passing through the interferometer. A kinetic theory model of the Rayleigh scattered light is used in a nonlinear least squares fitting routine to estimate the unknown parameters from the fringe images. The ability to extract turbulence information from the fringe image data proved to be a challenge, since the fringe is broadened not only by turbulence but also by thermal fluctuations and aperture effects from collecting light over a range of scattering angles. Figure 1 illustrates broadening of a Rayleigh spectrum typical of flow conditions observed in this work due to aperture effects and turbulence for a scattering angle chi_s of 90 degrees, f/3.67 collection optics, mean flow velocity u_k of 300 m/s, and turbulent velocity fluctuations sigma_uk of 55 m/s. The greatest difficulty in processing the image data was decoupling the thermal and turbulence broadening in the spectrum. To aid in this endeavor, it was necessary to seed the ambient air with smoke and dust particulates, taking advantage of the turbulence broadening in the Mie scattering component of the spectrum of the collected light (not shown in the figure). The primary jet flow was not seeded due to the difficulty of the task.
For measurement points lacking particles, velocity, density, and temperature information could reliably be recovered; however, the turbulence estimates contained significant uncertainty. Resulting flow parameter estimates are presented for surveys of Mach 0.6, 0.95, and 1.4 jet flows. Velocity, density, and temperature were determined with accuracies of 5 m/s, 1.5%, and 1%, respectively, in flows with no particles present, and with accuracies of 5 m/s, 1-4%, and 2% in flows with particles. Comparison with hot-wire data for the Mach 0.6 condition demonstrated turbulence estimates with accuracies of about 5 m/s outside the jet core, where Mie scattering from dust/smoke particulates aided in the estimation of turbulence. Turbulence estimates could not be recovered with any significant accuracy for measurement points where no particles were present.
Area-averaged surface fluxes and their time-space variability over the FIFE experimental domain
NASA Technical Reports Server (NTRS)
Smith, E. A.; Hsu, A. Y.; Crosson, W. L.; Field, R. T.; Fritschen, L. J.; Gurney, R. J.; Kanemasu, E. T.; Kustas, W. P.; Nie, D.; Shuttleworth, W. J.
1992-01-01
The underlying mean and variance properties of surface net radiation, sensible-latent heat fluxes and soil heat flux are studied over the densely instrumented grassland region encompassing FIFE. Flux variability is discussed together with the problem of scaling up to area-averaged fluxes. Results are compared and contrasted for cloudy and clear situations and examined for the influence of surface-induced biophysical controls (burn and grazing treatments) and topographic controls (aspect ratios and slope factors).
Measurement of time averaged power in HITU fields—effects of duty cycle and target distance
NASA Astrophysics Data System (ADS)
Jenderka, K.-V.; Wilkens, V.
2012-10-01
The reliable description of the ultrasonic fields of high-intensity therapeutic ultrasound (HITU) devices is a prerequisite for the safe application of the method in the daily clinical routine. Since ultrasonic sensors used for the characterization of diagnostic fields are at high risk of being damaged in the strong therapeutic fields, the measurements are carried out in burst mode to reduce the acting temporal-average intensities. For a thorough investigation of possible differences between excitation in continuous wave (cw) and burst mode, the temporal-average total acoustic output powers of two types of HITU transducers with f-numbers of approximately 1 and with working frequencies between 1.1 MHz and 3.3 MHz were investigated by means of a radiation force balance. The maximum cw-equivalent power level was 300 W; the duty cycles varied between 1% and 90%. In addition, the possible effect of the transducer-target distance was investigated. It was found that the different turn-on and turn-off behaviour of the transducers caused variations of the effective duty cycle, depending on the power level and the power amplifier used. The temporal-average power declined with increasing distance as expected, and no focal anomaly was detected.
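The effective-duty-cycle effect reported above can be sketched with a trapezoidal burst envelope: a finite turn-on/turn-off ramp makes the temporal-average power differ from the nominal duty cycle times the cw power. All numbers below are illustrative.

```python
import numpy as np

# Burst excitation: nominal duty cycle = on_time / period. A finite
# turn-on/turn-off ramp of the transducer makes the *effective* duty
# cycle differ from the nominal one. Illustrative numbers only.
fs = 1.0e6        # envelope sampling rate, Hz
period = 10e-3    # 10 ms burst period
on_time = 1e-3    # 1 ms nominal on-time (10% nominal duty cycle)
ramp = 0.2e-3     # 0.2 ms linear turn-on and turn-off

t = np.arange(0.0, period, 1.0 / fs)
# Trapezoidal pressure envelope: ramp up, hold, ramp down, then off.
env = np.clip(np.minimum(t / ramp, (on_time - t) / ramp), 0.0, 1.0)

P_cw = 300.0                        # W, cw-equivalent power level
duty_nominal = on_time / period
duty_effective = np.mean(env**2)    # power follows the squared envelope

print(duty_nominal, duty_effective, P_cw * duty_effective)
```

With the assumed ramps the effective duty cycle falls below the nominal 10%, so the true temporal-average power is lower than the naive P_cw x duty product, in the spirit of the variations the study reports.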
On the Edge: A Study of Small Private Colleges That Have Made a Successful Financial Turnaround
ERIC Educational Resources Information Center
Carey, Amy Bragg
2014-01-01
This article describes a qualitative study that involved two small private universities, examining their process of transformation from institutions headed toward closure to institutions that underwent a successful turnaround. The primary questions that guided the study included the issues and circumstances that led to the need for a turnaround,…
Meeting the Challenge of School Turnaround: Lessons from the Intersection of Research and Practice
ERIC Educational Resources Information Center
Salmonowicz, Michael
2009-01-01
"Turnaround" has become the new buzzword in education. From states and districts paying for principals to be trained as turnaround specialists to U.S. Secretary of Education Arne Duncan's call to turn around the country's 5,000 lowest-performing schools, the concept of rapid, large-scale reform in individual schools--not just incremental school…
School Turnaround Fever: The Paradoxes of a Historical Practice Promoted as a New Reform
ERIC Educational Resources Information Center
Peck, Craig; Reitzug, Ulrich C.
2014-01-01
School "turnaround" has received significant attention recently in education literature and policy action, especially as a means to dramatically improve urban education. In current common education usage, "turnaround" refers to the rapid, significant improvement in the academic achievement of persistently low-achieving schools.…
Evidence of Discrete Scale Invariance in DLA and Time-to-Failure by Canonical Averaging
NASA Astrophysics Data System (ADS)
Johansen, A.; Sornette, D.
Discrete scale invariance, which corresponds to a partial breaking of the scaling symmetry, is reflected in the existence of a hierarchy of characteristic scales l0, l0λ, l0λ^2, …, where λ is a preferred scaling ratio and l0 a microscopic cut-off. Signatures of discrete scale invariance have recently been found in a variety of systems ranging from rupture, earthquakes, Laplacian growth phenomena, "animals" in percolation to financial market crashes. We believe it to be a quite general, albeit subtle phenomenon. Indeed, the practical problem in uncovering an underlying discrete scale invariance is that standard ensemble averaging procedures destroy it as if it were pure noise. This is due to the fact that, while λ only depends on the underlying physics, l0 on the contrary is realization-dependent. Here, we adapt and implement a novel so-called "canonical" averaging scheme which re-sets the l0 of different realizations to approximately the same value. The method is based on the determination of a realization-dependent effective critical point obtained from, e.g., a maximum susceptibility criterion. We demonstrate the method on diffusion limited aggregation and a model of rupture.
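A toy numerical version of the averaging problem makes the point concrete: ensemble averaging log-periodic signals with realization-dependent l0 washes the oscillation out, while rescaling each realization by its own estimated l0 (the "canonical" step) preserves it. The signal, noise level, and l0 estimator below are invented for illustration; this is not the DLA analysis.

```python
import numpy as np

rng = np.random.default_rng(7)

lam = 2.0                       # preferred scaling ratio lambda (assumed)
x = np.logspace(0.5, 3, 1024)   # observable scales

def realization(l0, noise=0.1):
    """Noisy log-periodic signal with realization-dependent cut-off l0."""
    return (np.cos(2 * np.pi * np.log(x / l0) / np.log(lam))
            + noise * rng.normal(size=x.size))

l0s = rng.uniform(1.0, 4.0, size=100)
signals = [realization(l0) for l0 in l0s]

# Naive ensemble average: the random l0 dephases the oscillations.
naive = np.mean(signals, axis=0)

# "Canonical" average: estimate each realization's effective l0 (here
# simply the location of an early maximum; any estimate off by a power
# of lambda still aligns the phases) and rescale x by it first.
grid = np.logspace(-0.5, 2, 512)          # common rescaled abscissa x / l0
rescaled = []
for y in signals:
    l0_hat = x[np.argmax(y[:256])]        # crude per-realization estimate
    rescaled.append(np.interp(grid, x / l0_hat, y))
canonical = np.mean(rescaled, axis=0)

print(naive.std(), canonical.std())       # canonical retains the oscillation
```

The naive average decays toward zero because the phases are random, while the canonical average keeps an O(1) oscillation, mirroring the scheme's motivation.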
Clancy, Ted
Abstract-- Mean electromyogram (EMG) conduction delay is often estimated as the average time delay between two surface EMG recordings arranged along the conduction path. It has previously been shown … the "upstream" EMG recording to the "downstream" recording. In this work, we examined regularized least squares …
Minsuk Lee; Sang Lyul Min; Chang Yun Park; Young Hyun Bae; Heonshik Shin; Chong-sang Kim
1993-01-01
One of the obstacles to using RISC processors in a hard real-time environment is the unpredictability of caches. This unpredictability stems from basing them on a design that tries to optimize the average case execution time. In this paper, we propose a dual mode instruction prefetch scheme as an alternative to instruction caching schemes. In the proposed scheme, a thread is associated with each instruction block. The thread …
Real-Time Forecast Model Analysis of Daily Average Building Load for a Thermal Storage System Control
Song, L.; Joo, I. S.; Guwana, S.
Real-Time Forecast Model Analysis of Daily Average Building Load for a Thermal Storage System Control. Li Song, Ph.D., P.E., School of Aerospace and Mechanical Engineering, University of Oklahoma; Ik-Seong Joo, Ph.D., P.E.; Subroto...
All-Fiber-Integrated Mid-Infrared Supercontinuum System with 0.7 Watts Time-Averaged Power
C. Xia; M. Kumar; M. N. Islam; A. Galvanauskas; F. L. Terry; M. J. Freeman
2007-01-01
All-fiber-integrated supercontinuum generation is demonstrated from ~0.9-3.6 µm with ~0.7 W time-averaged power by using a telecommunication laser diode, amplified by an erbium/ytterbium co-doped cladding-pumped fiber amplifier, and coupled into 35 m of ZBLAN fluoride fiber.
Empirical wind model for the middle and lower atmosphere. Part 1: Local time average
NASA Technical Reports Server (NTRS)
Hedin, A. E.; Fleming, E. L.; Manson, A. H.; Schmidlin, F. J.; Avery, S. K.; Franke, S. J.
1993-01-01
The HWM90 thermospheric wind model was revised in the lower thermosphere and extended into the mesosphere and lower atmosphere to provide a single analytic model for calculating zonal and meridional wind profiles representative of the climatological average for various geophysical conditions. Gradient winds from CIRA-86 plus rocket soundings, incoherent scatter radar, MF radar, and meteor radar provide the data base and are supplemented by previous data driven model summaries. Low-order spherical harmonics and Fourier series are used to describe the major variations throughout the atmosphere including latitude, annual, semiannual, and longitude (stationary wave 1). The model represents a smoothed compromise between the data sources. Although agreement between various data sources is generally good, some systematic differences are noted, particularly near the mesopause. Root mean square differences between data and model are on the order of 15 m/s in the mesosphere and 10 m/s in the stratosphere for zonal wind, and 10 m/s and 4 m/s, respectively, for meridional wind.
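A model of this type can be sketched as a low-order Legendre expansion in latitude whose coefficients carry annual Fourier modulation. The coefficients below are invented for illustration only; they are not the HWM90 coefficients.

```python
import numpy as np
from numpy.polynomial import legendre

# Minimal sketch of an HWM-style empirical expansion: zonal wind as
# low-order Legendre functions of sin(latitude), modulated by an annual
# harmonic. Coefficients are INVENTED for illustration, not from HWM90.
mean_c = [0.0, 10.0, -5.0]    # time-mean Legendre coefficients (m/s)
annual_c = [0.0, 15.0, 0.0]   # annual-harmonic Legendre coefficients (m/s)

def zonal_wind(lat_deg, doy):
    """Climatological zonal wind (m/s) at a latitude and day of year."""
    s = np.sin(np.radians(lat_deg))
    annual = np.cos(2 * np.pi * (doy - 1) / 365.25)
    return legendre.legval(s, mean_c) + annual * legendre.legval(s, annual_c)

print(zonal_wind(45.0, 15), zonal_wind(45.0, 196))  # winter vs. summer
```

A full model of this family adds semiannual terms and longitude (stationary wave 1) harmonics in the same fashion; the appeal is that a single analytic expression then returns a smooth climatological profile anywhere.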
Kodra, Evan A. [ORNL]; Chatterjee, Snigdhansu [University of Minnesota]; Ganguly, Auroop R. [ORNL]
2010-01-01
Detection and attribution methodologies have been developed over the years to delineate anthropogenic from natural drivers of climate change and impacts. A majority of prior attribution studies, which have used climate model simulations and observations or reanalysis datasets, have found evidence for human-induced climate change. This paper tests the hypothesis that Granger causality can be extracted from the bivariate series of globally averaged land surface temperature (GT) observations and observed CO2 in the atmosphere using a reverse cumulative Granger causality test. This proposed extension of the classic Granger causality test is better suited to handle the multisource nature of the data and provides further statistical rigor. The results from this modified test show evidence for Granger causality from a proxy of total radiative forcing (RC), which in this case is a transformation of atmospheric CO2, to GT. Prior literature failed to extract these results via the standard Granger causality test. A forecasting test shows that a holdout set of GT can be better predicted with the addition of lagged RC as a predictor, lending further credibility to the Granger test results. However, since second-order-differenced RC is neither normally distributed nor variance stationary, caution should be exercised in the interpretation of our results.
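For readers unfamiliar with the mechanics, a classic Granger test (not the paper's reverse cumulative variant) can be sketched with plain least squares: compare the residual sum of squares of an autoregression of y on its own lags with one that adds lags of x, via an F statistic. The bivariate series below is synthetic, with x constructed to drive y; it is a stand-in, not the GT/CO2 data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic bivariate series where x Granger-causes y (an illustrative
# stand-in for forcing RC and temperature GT; not real climate data).
n, p = 500, 2                 # sample size and lag order
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(p, n):
    y[t] = 0.3 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.normal()

def lagmat(v, p, n0, n1):
    """Rows of [v[t-1], ..., v[t-p]] for t in [n0, n1)."""
    return np.column_stack([v[t - p:t][::-1] for t in range(n0, n1)]).T

def rss(X, Y):
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    r = Y - X @ beta
    return r @ r

Y = y[p:]
ylags, xlags = lagmat(y, p, p, n), lagmat(x, p, p, n)
ones = np.ones((n - p, 1))
rss_r = rss(np.hstack([ones, ylags]), Y)          # restricted: y lags only
rss_u = rss(np.hstack([ones, ylags, xlags]), Y)   # unrestricted: + x lags
k = n - p - (1 + 2 * p)                           # residual df, unrestricted
F = ((rss_r - rss_u) / p) / (rss_u / k)
print(F)   # large F => lagged x improves prediction of y
```

The same machinery, applied in reverse (does y help predict x?), is how the direction of causality is probed; the paper's contribution is a cumulative variant of this basic test.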
Concurrent Average Memory Access Time Xian-He Sun Dawei Wang
Sun, Xian-He
… CPU and memory performance. Cache hierarchies are the most effective mechanism for hiding the long … exist not only in main memory, but also penetrate into the cache hierarchies. For instance, in the Intel … accesses may coexist in the memory hierarchy at the same time. Thus, a single cache miss is no longer …
NASA Astrophysics Data System (ADS)
Creeden, Daniel; McCarthy, John C.; Ketteridge, Peter A.; Schunemann, Peter G.; Southward, Timothy; Komiak, James J.; Chicklis, Evan P.
2007-05-01
We have modeled and demonstrated a scalable, compact, fiber-pumped terahertz source based on difference frequency mixing (DFM) in zinc germanium phosphide (ZGP) capable of producing high average- and peak-power pulses. Currently, our terahertz source produces 2 mW of average THz power and >40 W of peak power with sub-nanosecond pulses at a repetition rate of 100 kHz in the range of 2-3 THz, without cryogenic cooling or ultra-fast optics. This high-average-power laser-based terahertz output enables the real-time imaging of concealed objects using an off-the-shelf uncooled microbolometer focal-plane array. With this THz system, we have imaged objects obscured inside a common shipping envelope, demonstrating the potential of compact laser-based terahertz sources for use in security screening applications.
How complex is the time-averaged geomagnetic field over the past 5 Myr?
NASA Astrophysics Data System (ADS)
Carlut, Julie; Courtillot, Vincent
1998-08-01
A basic tenet of palaeomagnetism is that the Earth's magnetic field behaves on average like that of a central axial dipole (g_1^0). Nevertheless, the question of possible persistent second-order features is still open. Recently Johnson & Constable (1995, 1996) performed a regularized non-linear inversion and found evidence for persistent non-zonal features. Formal uncertainties would indicate that there are significant (non-zero) terms at least up to degree and order 4. Using a recent compilation of two different data sets from lavas (0 to 5 Ma) and the Johnson & Constable codes, we test the robustness of this result. The data set has been divided into three subsets: the Brunhes polarity data (B), all normal polarity data (N) and all reverse data (R). In each subset of data, a prominent g_2^0, of the order of 5 per cent of g_1^0, is clearly present, as previously established by several authors. In some subsets, smaller terms appear: g_2^2 and g_1^1 in the Brunhes data, h_3^1 and h_2^1 in N, and h_2^1, g_3^0 and g_3^3 in R. A threshold under which terms resulting from the inversion cannot yet be considered as robust appears to be of the order of 300 nT. Indeed, tests show that many terms, which are different for each epoch (B, N or R), may be artefacts due to aliasing because of poor site distribution, or due to the underestimation of a priori errors in the data; these could result from undetected tectonic rotations, non-horizontal palaeoslopes, or viscous overprints. Because of these limitations in resolution, it may not yet be possible to identify robustly terms other than the axial dipole and quadrupole. The persistence of high-latitude flux concentrations, hemispheric asymmetry or normal versus reversed field asymmetry cannot yet be considered as demonstrated.
Reimold, M; Mueller-Schauenburg, W; Becker, G A; Reischl, G; Dohmen, B M; Bares, R
2004-04-01
Due to the stochastic nature of radioactive decay, any measurement of radioactivity concentration requires spatial averaging. In pharmacokinetic analysis of time-activity curves (TAC), such averaging over heterogeneous tissues may introduce a systematic error (heterogeneity error) but may also improve the accuracy and precision of parameter estimation. In addition to spatial averaging (inevitable due to limited scanner resolution and intended in ROI analysis), interindividual averaging may theoretically be beneficial, too. The aim of this study was to investigate the effect of such averaging on the binding potential (BP) calculated with Logan's non-invasive graphical analysis and the "simplified reference tissue method" (SRTM) proposed by Lammertsma and Hume, on the basis of simulated and measured positron emission tomography data ([11C]d-threo-methylphenidate (dMP) and [11C]raclopride (RAC) PET). dMP was not quantified with SRTM since the low k2 (washout rate constant from the first tissue compartment) introduced a high noise sensitivity. Even for considerably different shapes of TAC (dMP PET in parkinsonian patients and healthy controls, [11C]raclopride in patients with and without haloperidol medication) and a high variance in the rate constants (e.g. simulated standard deviation of K1 = 25%), the BP obtained from the average TAC was close to the mean BP (error <5%). However, unfavourably distributed parameters, especially a correlated large variance in two or more parameters, may lead to larger errors. In Monte Carlo simulations, interindividual averaging before quantification reduced the variance from the SRTM (beyond a critical signal-to-noise ratio) and the bias in Logan's method. Interindividual averaging may further increase accuracy when there is an error term in the reference tissue assumption E = DV2 - DV' (DV2 = distribution volume of the first tissue compartment, DV' = distribution volume of the reference tissue).
This can be explained by the fact that the distribution volume ratio (DVR = DV/DV') obtained from averaged TAC is an approximation for ΣDV/ΣDV' rather than for ΣDVR/n. We conclude that Logan's non-invasive method and SRTM are suitable for heterogeneous tissues and that discussion of group differences in PET studies generally should include qualitative and quantitative assessment of interindividually averaged TAC. PMID:14689241
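The closing remark about DVR can be checked with toy numbers: at (pseudo-)equilibrium the ratio formed from averaged concentrations equals the ratio of summed distribution volumes, which generally differs from the mean of the individual DVRs. The values below are hypothetical.

```python
import numpy as np

# Hypothetical equilibrium concentrations for 3 subjects sharing the
# same plasma input: target-region DV and reference-region DV'
# (arbitrary units, invented for illustration).
DV = np.array([4.0, 6.0, 10.0])
DVp = np.array([1.0, 2.0, 2.5])        # DV' of the reference tissue

dvr_individual = DV / DVp              # per-subject DVR
mean_of_dvrs = dvr_individual.mean()   # Sigma DVR / n
dvr_of_average = DV.sum() / DVp.sum()  # ratio obtained from averaged TACs

print(mean_of_dvrs, dvr_of_average)
```

The two quantities differ whenever the reference-tissue volumes vary across subjects, which is exactly the distinction the abstract draws between ΣDV/ΣDV' and ΣDVR/n.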
Coherent averaging of the passive fathometer response using short correlation time
Gerstoft, Peter
… experimental data.1-3,8,9 The technique relies on surface generated noise from wind and waves.10,11 Shipping … and, for correlation times less than a few seconds, the observed signal-to-noise ratio (SNR) agrees … correlation of ambient noise data from a drifting vertical array has been the subject of much discussion in recent years.
NASA Astrophysics Data System (ADS)
Oweis, Ghanem; Ceccio, Steven
2003-11-01
PIV data of the flow field in the immediate vicinity of the trailing edge of a ducted propeller at the tip revealed the existence of multiple vorticity concentrations. The multiple vortices in each instantaneous PIV field were identified and individually characterized. The measurements of the multiple vortices were combined with a Gaussian vortex model to reconstruct the vorticity and velocity fields. The major features of the original experimental field were recovered, and the correlation between the two fields was good. The time averaged field and velocity fluctuations were also measured. We will discuss why the "typical" instantaneous tip vortex and the tip vortex from the time averaged field are substantially different. We attempt to explain the cause of these differences. Knowledge of the instantaneous flow field variability is used to understand the causes of the measured velocity fluctuations. The results from this study have an impact on the understanding of the roll-up of tip vortices, and the dynamics of multiple vortices.
An upper bound to time-averaged space-charge limited diode currents
Griswold, M. E.; Fisch, N. J.
The Child-Langmuir law limits the steady-state current density across a one-dimensional planar diode. While it is known … the space-charge limited current in the classical problem of a one-dimensional (1D) diode, J_CL = (4/9) ε0 (2q/m)^(1/2) V^(3/2)/d^2 …
Dae-Im Kang; Byoung-Duk Lim
1996-01-01
This paper describes the determination of the optimal averaging time for the precision calibration of a build-up system. When a build-up system is calibrated using a deadweight machine, the oscillating signal components of force transducers in the build-up system are strongly related to the motion of the deadweight. We propose a model based on the analysis of oscillating signals of
Phase-resolved and time-averaged puff motions of an excited stack-issued transverse jet
NASA Astrophysics Data System (ADS)
Hsu, C. M.; Huang, R. F.
2013-07-01
The dynamics of puff motions in an excited stack-issued transverse jet were studied experimentally in a wind tunnel. The temporal and spatial evolution processes of the puffs induced by acoustic excitation were examined using the smoke flow visualization method and high-speed particle image velocimetry. The temporal and spatial evolutions of the puffs were examined using phase-resolved ensemble-averaged velocity fields and the velocity, length scales, and vorticity characteristics of the puffs were studied. The time-averaged velocity fields were calculated to analyze the velocity distributions and vorticity contours. The results show that a puff consists of a pair of counter-rotating vortex rings. An initial vortex ring was formed due to a concentration of vorticity at the lee side of the issuing jet at the instant of the mid-oscillation cycle. A vortex ring rotating in the opposite direction to that of the initial vortex ring was subsequently formed at the upwind side of the issuing jet. These two counter-rotating vortex rings formed a "mushroom" vortex pair, which was deflected by the crossflow and traveled downstream along a time-averaged trajectory of zero vorticity. The trajectory was situated far above the time-averaged streamline evolving from the leading edge of the tube. The velocity magnitudes of the vortex rings at the upwind and the lee side decreased with time evolution as the puffs traveled downstream due to momentum dissipation and entrainment effects. The puffs traveling along the trajectory of zero vorticity caused large velocities to appear above the leading-edge streamline.
NASA Astrophysics Data System (ADS)
Tsai, C.; Liu, C.; Schnurle, P.; Chen, H.
2005-12-01
Marine seismic reflection data collected from offshore southwestern Taiwan show prominent seismic bottom simulating reflectors (BSRs) that indicate the existence of gas hydrate in the seafloor sediment, with a free gas zone underneath. We apply a theoretical rock physics model to analyze 2D marine seismic data to determine gas hydrate and free gas saturation. High-porosity marine sediment is modeled as a granular system where the elastic wave velocities are linked to porosity; effective pressure; mineralogy; elastic properties of the pore-filling material; and water, gas and gas hydrate saturation of the pore space. To apply this model to seismic data, we first obtain interval velocity from migration velocity analysis. Next, all input parameters to the rock physics model, except porosity and water, gas and gas hydrate saturation, are estimated from geological information. The saturation estimations are extremely sensitive to the input velocity values; therefore, accurate velocity determination is crucial for correct reservoir characterization. We apply pre-stack depth migration and amplitude versus angle (AVA) methods to analyze two seismic lines (EW9509-35 and EW9509-46) located in the key area offshore southwestern Taiwan. A more advanced approach to velocity analysis is the use of iterative migration, also known as migration velocity analysis (MVA). These techniques can overcome the restriction of lateral homogeneity and retrieve information about the velocity. Furthermore, the incidence and emergence angles of any reflection can be obtained more accurately using ray-tracing methods in the depth domain, rather than ray-path approximations based on RMS velocities in the time domain. Thus, efficient amplitude versus angle analysis is available within the process of pre-stack depth migration with little additional computing time, which enables us to further quantify acoustic and shear-wave velocities in the subsurface.
Velocity structures derived from pre-stack depth migration and from analyzing the wide-angle seismic data show that the hydrate-bearing sediments generally have velocity ranges from 1750 to 2000 m/s, with most values around 1900 m/s. Low velocity zones observed beneath the gas hydrate bearing sediments clearly indicate the presence of free gas below.
Time series analysis to monitor and assess water resources: a moving average approach.
Reghunath, Rajesh; Murthy, T R Sreedhara; Raghavan, B R
2005-10-01
An understanding of the behavior of the groundwater body and its long-term trends are essential for making any management decision in a given watershed. Geostatistical methods can effectively be used to derive the long-term trends of the groundwater body. Here an attempt has been made to find out the long-term trends of the water table fluctuations of a river basin through a time series approach. The method was found to be useful for demarcating the zones of discharge and of recharge of an aquifer. The recharge of the aquifer is attributed to the return flow from applied irrigation. In the study area, farmers mainly depend on borewells for water and water is pumped from the deep aquifer indiscriminately. The recharge of the shallow aquifer implies excessive pumping of the deep aquifer. Necessary steps have to be taken immediately at appropriate levels to control the irrational pumping of deep aquifer groundwater, which is needed as a future water source. The study emphasizes the use of geostatistics for the better management of water resources and sustainable development of the area. PMID:16240189
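The moving-average smoothing underlying this kind of trend analysis can be sketched in a few lines. The trailing-window form below is an illustrative assumption; the study's window length and water-table data are not reproduced here.

```python
def moving_average(series, window):
    """Trailing simple moving average of a time series.

    Returns a list of the same length as `series`; the first window-1
    entries are None because the window is not yet full. Smoothing out
    short-term fluctuations in this way exposes the long-term trend of,
    e.g., water-table levels."""
    out = [None] * (window - 1)
    running = sum(series[:window - 1])
    for i in range(window - 1, len(series)):
        running += series[i]
        out.append(running / window)
        running -= series[i - window + 1]
    return out
```

Comparing the smoothed series at different wells is then what allows recharge and discharge zones to be demarcated, as the abstract describes.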
NASA Astrophysics Data System (ADS)
Akimoto, Takuma; Shinkai, Soya; Aizawa, Yoji
2015-01-01
In infinite ergodic theory, two distributional limit theorems are well known. One is characterized by the Mittag-Leffler distribution for time averages of L¹ functions, i.e., functions integrable with respect to an infinite invariant measure. The other is characterized by the generalized arc-sine distribution for time averages of non-L¹ functions. Here, we provide another distributional behavior of time averages of non-L¹ functions in one-dimensional intermittent maps, each of which has an indifferent fixed point and an infinite invariant measure. The observation functions considered here are non-L¹ functions which vanish at the indifferent fixed point; we call this class of observation functions weak non-L¹ functions. Our main result represents a first step toward a third distributional limit theorem, i.e., a distributional limit theorem for this class of observables, in infinite ergodic theory. To prove our proposition, we propose a stochastic process induced by a renewal process to mimic a Birkhoff sum of a weak non-L¹ function in the one-dimensional intermittent maps.
P. N. Brandt; A. V. Getling
2008-02-01
The decrease in the rms contrast of time-averaged images with the averaging time is compared between four datasets: (1) a series of solar granulation images recorded at La Palma in 1993; (2) a series of artificial granulation images obtained in numerical simulations by Rieutord et al. (2002); (3) a similar series computed by Steffen and his colleagues (see Wedemeyer et al., 2004); (4) a random field with some parameters typical of the granulation, constructed by Rast (2002). In addition, (5) a sequence of images was obtained from real granulation images using a temporal and spatial shuffling procedure, and the contrast of the average of n images from this sequence as a function of n is analysed. The series (1) of real granulation images exhibits a considerably slower contrast decrease than do both the series (3) of simulated granulation images and the series (4) of random fields. Starting from some relatively short averaging times t, the behaviour of the contrast in series (3) and (4) resembles the t^{-1/2} statistical law, while the shuffled series (5) obeys the n^{-1/2} law from n = 2 on. Series (2) demonstrates a peculiarly slow decline of contrast, which could be attributed to particular properties of the boundary conditions used in the simulations. Comparisons between the analysed contrast-variation laws indicate quite definitely that the brightness field of solar granulation contains a long-lived component, which could be associated with locally persistent dark intergranular holes and/or with the presence of quasi-regular structures. The suggestion that the random field (4) successfully reproduces the contrast-variation law of the real granulation (Rast, 2002) must therefore be rejected.
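The n^{-1/2} statistical law used as the reference here is easy to demonstrate on synthetic data: for a memoryless brightness field, averaging n independent frames reduces the rms contrast by a factor of n^{-1/2}. A minimal sketch with Gaussian random frames (purely illustrative, not the datasets used in the study):

```python
import numpy as np

rng = np.random.default_rng(0)

def rms_contrast(img):
    """rms intensity contrast of an image: std / mean."""
    return img.std() / img.mean()

# 16 statistically independent synthetic "granulation" frames with ~10% rms
# contrast around unit mean brightness
frames = rng.normal(loc=1.0, scale=0.1, size=(16, 128, 128))

c1 = rms_contrast(frames[0])             # contrast of a single frame, ~0.1
c16 = rms_contrast(frames.mean(axis=0))  # contrast of the 16-frame average
ratio = c16 / c1                         # ~0.25 = 16**-0.5 for a memoryless field
```

A long-lived component in the real granulation shows up precisely as a ratio that stays well above this n^{-1/2} prediction.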
ERIC Educational Resources Information Center
Mass Insight Education (NJ1), 2009
2009-01-01
Given the importance of good teaching and leadership for school success, turnaround schools should think carefully about how to structure professional environments that reward and motivate excellence. A system of "Pay-for-Contribution" that includes tools such as hard-to-staff and skill shortage pay, performance pay, and/or retention pay, will…
NASA Astrophysics Data System (ADS)
Oweis, Ghanem F.; Ceccio, Steven L.
2005-05-01
The instantaneous and time-averaged flow fields in the tip region of a ducted marine propulsor are examined. In this flow, a primary tip-leakage vortex interacts with a secondary, co-rotating trailing edge vortex and other co- and counter-rotating vorticity found in the blade wake. Planar particle imaging velocimetry (PIV) is used to examine the flow in a plane approximately perpendicular to the mean axis of the primary vortex. An identification procedure is used to characterize multiple regions of compact vorticity in the flow fields as series of Gaussian vortices. Significant differences are found between the vortex properties from the time-averaged flow fields and the average vortex properties identified in the instantaneous flow fields. Variability in the vortical flow field results from spatial wandering of the vortices, correlated fluctuations of the vortex strength and core size, and both correlated and uncorrelated fluctuations in the relative positions of the vortices. This variability leads to pseudo-turbulent velocity fluctuations. Corrections for some of this variability are performed on the instantaneous flow fields. The resulting processed flow fields reveal a significant increase in flow variability in a region relatively far downstream of the blade trailing edge, a phenomenon that is masked through the process of simple averaging. This increased flow variability is also accompanied by the inception of discrete vortex cavitation bubbles, which is an unexpected result, since the mean flow pressures in the region of inception are much higher than the vapor pressure of the liquid. This suggests that unresolved fine-scale vortex interactions and stretching may be occurring in the region of increased flow variability.
Moskal, P; Silarski, M; Bednarski, T; Białas, P; Bułka, J; Czerwiński, E; Gajos, A; Kamińska, D; Kapłon, L; Kochanowski, A; Korcyl, G; Kowal, J; Kowalski, P; Kozik, T; Krzemień, W; Kubicz, E; Niedźwiecki, Sz; Pałka, M; Raczyński, L; Rudy, Z; Rundel, O; Salabura, P; Słomski, A; Smyrski, J; Strzelecki, A; Wieczorek, A; Wiślicki, W; Wochlik, I; Zieliński, M; Zoń, N
2015-01-01
In this article we present a novel method of hit-time and hit-position reconstruction in long scintillator detectors. We take advantage of the fact that, for this kind of detector, the amplitude and shape of the registered signals depend strongly on the position at which the particle hits the detector. The reconstruction is based on determining the degree of similarity between the measured signals and averaged signals stored in a library for a set of well-defined positions along the scintillator. Preliminary results of validating the introduced method against experimental data obtained with the double-strip prototype of the J-PET detector are presented.
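The library-matching step can be sketched as follows. The sum-of-squared-differences similarity measure and the cm-keyed library layout are illustrative assumptions; the paper's actual similarity measure is not reproduced here.

```python
import math

def reconstruct_hit_position(measured, library):
    """Return the library position whose stored averaged signal is most
    similar to the measured signal.

    `library` maps a hit position (e.g. cm along the scintillator) to an
    averaged signal (sequence of waveform samples). Similarity here is the
    sum of squared sample differences (smaller = more similar)."""
    best_pos, best_score = None, math.inf
    for pos, reference in library.items():
        score = sum((m - r) ** 2 for m, r in zip(measured, reference))
        if score < best_score:
            best_pos, best_score = pos, score
    return best_pos
```

In practice the library is built from many averaged waveforms recorded at well-defined positions, and the measured waveform is compared against each entry.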
Wiesner, Margit; Silbereisen, Rainer K
2003-12-01
This longitudinal study examined individual, family, and peer covariates of distinctive trajectories of juvenile delinquency, using data from a community sample of 318 German adolescents (mean age at the first wave was 11.45 years). Latent growth mixture modelling analysis revealed four trajectory groups: high-level offenders, medium-level offenders, low-level offenders, and rare offenders. The trajectory groups were discriminated better by time-averaged covariates than by initial status covariates. High peer tolerance of deviance and low parental empathy were consistently linked to varying offending trajectories, whereas gender, low academic achievement, and low parental monitoring appeared to be trajectory-specific covariates. PMID:14643745
Yasugi, T; Kawai, T; Mizunuma, K; Horiguchi, S; Iguchi, H; Ikeda, M
1992-01-01
A diffusive sampling method with water as absorbent was examined in comparison with 3 conventional methods of diffusive sampling with carbon cloth as absorbent, pumping through National Institute of Occupational Safety and Health (NIOSH) charcoal tubes, and pumping through NIOSH silica gel tubes to measure time-weighted average concentration of dimethylformamide (DMF). DMF vapors of constant concentrations at 3-110 ppm were generated by bubbling air at constant velocities through liquid DMF followed by dilution with fresh air. Both types of diffusive samplers could either absorb or adsorb DMF in proportion to time (0.25-8 h) and concentration (3-58 ppm), except that the DMF adsorbed was below the measurable amount when carbon cloth samplers were exposed at 3 ppm for less than 1 h. When both diffusive samplers were loaded with DMF and kept in fresh air, the DMF in water samplers stayed unchanged for at least 12 h. The DMF in carbon cloth samplers showed a decay with a half-time of 14.3 h. When the carbon cloth was taken out immediately after termination of DMF exposure, wrapped in aluminum foil, and kept refrigerated, however, there was no measurable decrease in DMF for at least 3 weeks. When the air was drawn at 0.2 l/min, a breakthrough of the silica gel tube took place at about 4,000 ppm.min (as the lower 95% confidence limit), whereas charcoal tubes could tolerate even heavier exposures, suggesting that both tubes are fit to measure the 8-h time-weighted average of DMF at 10 ppm. PMID:1577523
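The 8-h time-weighted average (TWA) referred to throughout is a simple duration-weighted mean of the measured concentrations. A minimal sketch (the sample values in the test are hypothetical, not data from the study):

```python
def time_weighted_average(samples, reference_hours=8.0):
    """8-h time-weighted average concentration from a list of
    (concentration_ppm, duration_h) pairs.

    Unsampled time within the reference period counts as zero exposure,
    per the usual occupational-hygiene TWA convention."""
    return sum(conc * hours for conc, hours in samples) / reference_hours
```

For example, 2 h at 20 ppm followed by 6 h at 5 ppm gives a TWA of (40 + 30) / 8 = 8.75 ppm.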
NASA Astrophysics Data System (ADS)
Shonkwiler, K. B.; Ham, J. M.; Williams, C. M.
2013-12-01
Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances where such aerosols can deposit in sensitive regions, potentially causing harm to local ecosystems. However, quantifying the emissions of ammonia from CAFOs through direct measurement is very difficult and costly to perform. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured using inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are only deployed when wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data to modeled emissions from more sophisticated backward Lagrangian stochastic (bLs) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems were deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada) which continuously measured ammonia concentrations instantaneously over a 225-m path. This particular laser is utilized in agricultural settings, and in combination with a bLs model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. 
This study will first compare the ammonia concentrations measured with the Radiello system to those measured with the long-path IR laser. Second, NH3 emissions estimated using the simple inverse model (FIDES) and the time-averaged data will be compared to emissions derived from the bLS model (WindTrax) using the laser-based NH3 data. Results could lead to a more cost-efficient and simpler technique for monitoring ammonia fluxes from CAFOs and other strong areal sources.
Briët, Olivier J. T.; Amerasinghe, Priyanie H.; Vounatsou, Penelope
2013-01-01
Introduction With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions’ impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during “consolidation” and “pre-elimination” phases. Methods Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non Gaussian, non stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. Results The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. Conclusions G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low. PMID:23785448
Fluid trajectory evaluation based on an ensemble-averaged cross-correlation in time-resolved PIV
NASA Astrophysics Data System (ADS)
Jeon, Young Jin; Chatellier, Ludovic; David, Laurent
2014-07-01
A novel multi-frame particle image velocimetry (PIV) method, able to evaluate a fluid trajectory by means of an ensemble-averaged cross-correlation, is introduced. The method integrates the advantages of the state-of-the-art time-resolved PIV (TR-PIV) methods to further enhance both robustness and dynamic range. The fluid trajectory follows a polynomial model with a prescribed order. A set of polynomial coefficients, which maximizes the ensemble-averaged cross-correlation value across the frames, is regarded as the most appropriate solution. To achieve a convergence of the trajectory in terms of polynomial coefficients, an ensemble-averaged cross-correlation map is constructed by sampling cross-correlation values near the predictor trajectory with respect to an imposed change of each polynomial coefficient. A relation between the given change and corresponding cross-correlation maps, which could be calculated from the ordinary cross-correlation, is derived. A disagreement between the computational domain and the corresponding physical domain is compensated by introducing the Jacobian matrix based on the image deformation scheme in accordance with the trajectory. An increased cost of the convergence calculation, associated with the nonlinearity of the fluid trajectory, is moderated by means of a V-cycle iteration. To validate enhancements of the present method, quantitative comparisons with state-of-the-art TR-PIV methods, e.g., the adaptive temporal interval, the multi-frame pyramid correlation and the fluid trajectory correlation, were carried out by using synthetically generated particle image sequences. The performances of the tested methods are discussed in algorithmic terms. A high-rate TR-PIV experiment of a flow over an airfoil demonstrates the effectiveness of the present method. It is shown that the present method is capable of reducing random errors in both velocity and material acceleration while suppressing spurious temporal fluctuations due to measurement noise.
Long, Leroy L.; Srinivasan, Manoj
2013-01-01
On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk–run mixture at intermediate speeds and a walk–rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients—a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk–run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill. PMID:23365192
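The role of the non-convex energy curve can be illustrated numerically: when the power-vs-speed curve is non-convex, a time mixture of two speeds can cover the same distance in the same time for less energy than the steady intermediate speed. The cost curve below is a toy function chosen only to reproduce the qualitative shape (a walking branch that steepens sharply, then a cheaper running branch); it is not fitted to human metabolic data.

```python
def metabolic_rate(v):
    """Toy non-convex metabolic power (arbitrary units) vs speed (m/s).
    Purely illustrative -- not fitted to human data."""
    if v <= 2.1:
        return 2.0 + 10.0 * (v - 1.2) ** 2  # walking branch, steepens fast
    return 4.0 + 1.5 * v                    # running branch

def steady_cost(v_avg, distance):
    """Energy to cover `distance` at one steady speed."""
    return metabolic_rate(v_avg) * distance / v_avg

def mixture_cost(v_walk, v_run, v_avg, distance):
    """Energy to cover `distance` at average speed v_avg by mixing a slow
    walk and a faster run; the two durations are fixed by the distance and
    total-time constraints (requires v_walk != v_run)."""
    total_time = distance / v_avg
    t_walk = (distance - v_run * total_time) / (v_walk - v_run)
    t_run = total_time - t_walk
    return metabolic_rate(v_walk) * t_walk + metabolic_rate(v_run) * t_run
```

With this curve, covering 100 m at an average 2.0 m/s is cheaper as a 1.2 m/s walk mixed with a 3.0 m/s run than as a steady 2.0 m/s walk, which is the convexification argument in the abstract.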
Paulsen, H.; Valland, H. [Univ. of Trondheim (Norway)
1996-09-01
The objective of this paper is to introduce a new method for full-field measurement of local average gas-phase fuel concentration in a transient axisymmetrical gas-jet or evaporating spray. Since the combustion process in a diesel engine is a diffusion flame, the local fuel concentration as a function of time is one of the factors that governs engine efficiency and emissions. The method is utilizing the classic Schlieren technique. A CCD camera and frame-grabber combination is used to record the data. Based on these data and the assumption that the flow-field is axisymmetrical, the local index of light refraction is calculated, and from this an estimate of the local gas-phase fuel concentration is made. Since the flow-field is turbulent, data from a large number of separate injections are used. Therefore the results should represent the development of the average flow-field. Since it is a full-field method, the results can be checked for overall conservation of mass. Included is the theory of the method, a description of the experimental equipment and the results of the experiments. The method has been tested using methane and propane fuels at room temperatures and the results show that the total mass of fuel found in the flow-field is in good agreement with the mass-flow through the nozzle.
Patrick, Mark A; Joye, Colin D; De Lucia, Frank C
2014-10-01
Active illumination is an attractive approach for millimeter and submillimeter wave imaging because of its generally larger signal margins and capacity for range determination. However, the resultant speckle from diffuse targets and the requirement that specular targets have strategic angular alignment are significant shortcomings. As a result, many, if not most, demonstrations of active imaging have involved the use of strategically oriented specular targets. We have previously shown [J. Opt. Soc. Am. A 29, 2643 (2012), doi:10.1364/JOSAA.29.002643] that spatially time-resolved modulated multimode mixing approaches mitigate these issues. In this paper, we extend this work to demonstrate the use of range-resolved multimode mixing as an alternative means to provide statistically independent images for speckle reduction averaging. We also demonstrate the use of systematic illumination of spatial modes in a "grassy field" geometry. PMID:25401236
Mumbare, Sachin S; Gosavi, Shriram; Almale, Balaji; Patil, Aruna; Dhakane, Supriya; Kadu, Aniruddha
2014-01-01
Background: India's National Family Welfare Programme is dominated by sterilization, particularly tubectomy. Sterilization, being a terminal method of contraception, decides the final number of children for that couple. Many studies have shown the declining trend in the average number of living children at the time of sterilization over a short period of time. So this study was planned to do time series analysis of the average children at the time of terminal contraception, to do forecasting till 2020 for the same and to compare the rates of change in various subgroups of the population. Materials and Methods: Data was preprocessed in MS Access 2007 by creating and running SQL queries. After testing stationarity of every series with augmented Dickey-Fuller test, time series analysis and forecasting was done using best-fit Box-Jenkins ARIMA (p, d, q) nonseasonal model. To compare the rates of change of average children in various subgroups, at sterilization, analysis of covariance (ANCOVA) was applied. Results: Forecasting showed that the replacement level of 2.1 total fertility rate (TFR) will be achieved in 2018 for couples opting for sterilization. The same will be achieved in 2020, 2016, 2018, and 2019 for rural area, urban area, Hindu couples, and Buddhist couples, respectively. It will not be achieved till 2020 in Muslim couples. Conclusion: Every stratum of population showed the declining trend. The decline for male children and in rural area was significantly faster than the decline for female children and in urban area, respectively. The decline was not significantly different in Hindu, Muslim, and Buddhist couples. PMID:25364146
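As a schematic of how a replacement-level crossing is read off a fitted trend, here is a deliberately simple linear-trend stand-in for the Box-Jenkins ARIMA modelling actually used in the study; the series values below are hypothetical, not the study's data.

```python
def linear_trend_crossing(years, values, target):
    """Fit a least-squares line through (year, value) points and return the
    year at which the fitted trend reaches `target`. A simple stand-in for
    proper ARIMA forecasting, for illustration only."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    sxx = sum((x - mean_x) ** 2 for x in years)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return (target - intercept) / slope

# hypothetical series: average living children at sterilization, by year
years = [2004, 2006, 2008, 2010, 2012]
values = [2.9, 2.7, 2.5, 2.3, 2.1]
crossing_year = linear_trend_crossing(years, values, 2.1)
```

The study's subgroup comparisons then amount to comparing such fitted rates of decline (via ANCOVA) across strata.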
Time-averaged distributions of solute and solvent motions: exploring proton wires of GFP and PfM2DH.
Velez-Vega, Camilo; McKay, Daniel J J; Aravamuthan, Vibhas; Pearlstein, Robert; Duca, José S
2014-12-22
Proton translocation pathways of selected variants of the green fluorescent protein (GFP) and Pseudomonas fluorescens mannitol 2-dehydrogenase (PfM2DH) were investigated via an explicit solvent molecular dynamics-based analysis protocol that allows for direct quantitative relationship between a crystal structure and its time-averaged solute-solvent structure obtained from simulation. Our study of GFP is in good agreement with previous research suggesting that the proton released from the chromophore upon photoexcitation can diffuse through an extended internal hydrogen bonding network that allows for the proton to exit to bulk or be recaptured by the anionic chromophore. Conversely for PfM2DH, we identified the most probable ionization states of key residues along the proton escape channel from the catalytic site to bulk solvent, wherein the solute and high-density solvent crystal structures of binary and ternary complexes were properly reproduced. Furthermore, we proposed a plausible mechanism for this proton translocation process that is consistent with the state-dependent structural shifts observed in our analysis. The time-averaged structures generated from our analyses facilitate validation of MD simulation results and provide a comprehensive profile of the dynamic all-occupancy solvation network within and around a flexible solute, from which detailed hydrogen-bonding networks can be inferred. In this way, potential drawbacks arising from the elucidation of these networks by examination of static crystal structures or via alternate rigid-protein solvation analysis procedures can be overcome. Complementary studies aimed at the effective use of our methodology for alternate implementations (e.g., ligand design) are currently underway. PMID:25405925
Pizzuto, James; Schenk, Edward R.; Hupp, Cliff R.; Gellis, Allen; Noe, Greg; Williamson, Elyse; Karwan, Diana L.; O'Neal, Michael; Marquard, Julia; Aalto, Rolf; Newbold, Denis
2014-01-01
Watershed Best Management Practices (BMPs) are often designed to reduce loading from particle-borne contaminants, but the temporal lag between BMP implementation and improvement in receiving water quality is difficult to assess because particles are only moved downstream episodically, resting for long periods in storage between transport events. A theory is developed that describes the downstream movement of suspended sediment particles accounting for the time particles spend in storage given sediment budget data (by grain size fraction) and information on particle transit times through storage reservoirs. The theory is used to define a suspended sediment transport length scale that describes how far particles are carried during transport events, and to estimate a downstream particle velocity that includes time spent in storage. At 5 upland watersheds of the mid-Atlantic region, transport length scales for silt-clay range from 4 to 60 km, while those for sand range from 0.4 to 113 km. Mean sediment velocities for silt-clay range from 0.0072 km/yr to 0.12 km/yr, while those for sand range from 0.0008 km/yr to 0.20 km/yr, 4–6 orders of magnitude slower than the velocity of water in the channel. These results suggest lag times of 100–1000 years between BMP implementation and effectiveness in receiving waters such as the Chesapeake Bay (where BMPs are located upstream of the characteristic transport length scale). Many particles likely travel much faster than these average values, so further research is needed to determine the complete distribution of suspended sediment velocities in real watersheds.
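The lag-time conclusion follows from simple arithmetic on the reported storage-inclusive velocities; for example, using the silt-clay range quoted above:

```python
def lag_time_years(distance_km, velocity_km_per_yr):
    """Years for the average particle to travel `distance_km` downstream,
    using a velocity that already includes time spent in storage."""
    return distance_km / velocity_km_per_yr

# silt-clay velocities reported above: 0.0072-0.12 km/yr
slow = lag_time_years(10.0, 0.0072)  # ~1.4e3 years to move 10 km
fast = lag_time_years(10.0, 0.12)    # ~83 years to move 10 km
```

Even the fast end of the range implies decades-to-centuries lags between upstream BMP implementation and a response in receiving waters, which is the point the abstract draws.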
NASA Technical Reports Server (NTRS)
Huff, Edward M.; Mosher, Marianne; Barszcz, Eric
2002-01-01
Recent research using NASA Ames AH-1 and OH-58C helicopters, and NASA Glenn test rigs, has shown that in-flight vibration data are typically non-stationary [1-4]. The nature and extent of this non-stationarity are most likely produced by several factors operating simultaneously. The aerodynamic flight environment and pilot commands provide continuously changing inputs, with a complex dynamic response that includes automatic feedback control from the engine regulator. It would appear that the combined effects operate primarily through an induced torque profile, which causes concomitant stress modulation at the individual internal gear meshes in the transmission. This notion is supported by several analyses, which show that upwards of 93% of the vibration signal's variance can be explained by knowledge of torque alone. That this relationship is stronger in an AH-1 than an OH-58, where measured non-stationarity is greater, suggests that the overall mass of the vehicle is an important consideration. In the lighter aircraft, the unsteady aerodynamic influences transmit relatively greater unsteady dynamic forces to the mechanical components, quite possibly contributing to its greater non-stationarity. In a recent paper using OH-58C pinion data [5], the authors have shown that in computing a time synchronous average (TSA) for various single-value metric computations, an effective trade-off can be obtained between sample size and measured stationarity by using data from only a single mesh cycle. A mesh cycle, which is defined as the number of rotations required for the gear teeth to return to their original mating position, has the property of representing all of the discrete phase angles of the opposing gears exactly once in the average. Measured stationarity is probably maximized because a single mesh cycle of the pinion gear occurs over a very short span of time, during which time-dependent non-stationary effects are kept to a minimum.
Clearly, the advantage of local stationarity diminishes as the temporal duration of the cycle increases. This is most evident for a planetary mesh cycle, which can take several minutes to complete.
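The single-mesh-cycle averaging idea can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: it assumes the vibration signal has already been resampled to a fixed number of samples per shaft revolution, and the tooth counts and helper names are hypothetical.

```python
import numpy as np

def tsa(signal, samples_per_rev, n_revs):
    """Time synchronous average: reshape an angle-resampled vibration
    record into revolutions and average them point-by-point."""
    x = np.asarray(signal, float)[:samples_per_rev * n_revs]
    return x.reshape(n_revs, samples_per_rev).mean(axis=0)

def mesh_cycle_revs(gear_teeth, pinion_teeth):
    """Pinion revolutions in one mesh cycle: the tooth pairing repeats
    after lcm(Ng, Np) mesh events, i.e. lcm(Ng, Np)/Np pinion revs."""
    return np.lcm(gear_teeth, pinion_teeth) // pinion_teeth
```

Averaging over exactly `mesh_cycle_revs(...)` revolutions gives each gear/pinion tooth pairing equal weight while keeping the time span, and hence the non-stationarity, small.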
BATSE Observations of Gamma-Ray Burst Spectra. Part 3; Low-Energy Behavior of Time-Averaged Spectra
NASA Technical Reports Server (NTRS)
Preece, R. D.; Briggs, M. S.; Pendleton, G. N.; Paciesas, W. S.; Matteson, J. L.; Band, D. L.; Skelton, R. T.; Meegan, C. A.
1996-01-01
We analyze time-averaged spectra from 86 bright gamma-ray bursts from the first 5 years of the Burst And Transient Source Experiment (BATSE) on board the Compton Gamma Ray Observatory to determine whether the lowest energy data are consistent with a standard spectral form fit to the data at all energies. The BATSE Spectroscopy Detectors have the capability to observe photons as low as 5 keV. Using the gamma-ray burst locations obtained with the BATSE Large Area Detectors, the Spectroscopy Detectors' low-energy response can be modeled accurately. This, together with a postlaunch calibration of the lowest energy Spectroscopy Detector discriminator channel, which can lie in the range 5-20 keV, allows spectral deconvolution over a broad energy range, approx. 5 keV to 2 MeV. The additional coverage allows us to search for evidence of excess emission, or for a deficit, below 20 keV. While no burst has a significant (greater than or equal to 3 sigma) deficit relative to a standard spectral model, we find that 12 bursts have excess low-energy emission, ranging between 1.2 and 5.8 times the model flux, that exceeds 5 sigma in significance. This is evidence for an additional low-energy spectral component in at least some bursts, or for deviations from the power-law spectral form typically used to model gamma-ray bursts at energies below 100 keV.
Keith S Cover
2008-03-20
While the multiexponential nature of T2 decays measured in vivo is well known, characterizing T2 decays by a single time constant is still very useful when differentiating among structures and pathologies in MRI images. A novel, robust, fast and very simple method is presented for both estimating and displaying the average time constant for the T2 decay of each pixel from a multiecho MRI sequence. The average time constant is calculated from the average of the values measured from the T2 decay over many echoes. For a monoexponential decay, the normalized decay average varies monotonically with the time constant. Therefore, it is simple to map any normalized decay average to an average time constant. This method takes advantage of the robustness of the normalized decay average to both artifacts and multiexponential decays. Color intensity projections (CIPs) were used to display 32 echoes acquired at a 10 ms spacing as a single color image. The brightness of each pixel in each color image was determined by the intensity of the corresponding pixel in the earliest image and the hue was determined by the normalized decay average. Examples demonstrate the effectiveness of using CIPs to display the results of a multiecho sequence for a healthy subject and a multiple sclerosis patient.
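The mapping from normalized decay average to an average time constant can be sketched as follows. This is a hedged reconstruction of the idea, not the published method: the bisection bracket, the 32-echo/10-ms geometry, and the function names are illustrative assumptions.

```python
import numpy as np

def normalized_decay_average(echoes):
    """Mean of the echo intensities, normalized by the first echo."""
    echoes = np.asarray(echoes, dtype=float)
    return echoes.mean() / echoes[0]

def average_time_constant(nda, echo_times):
    """Invert the monotone map T2 -> normalized decay average for a
    monoexponential decay S(TE) = exp(-TE/T2), by bisection."""
    echo_times = np.asarray(echo_times, dtype=float)

    def nda_of(t2):
        s = np.exp(-echo_times / t2)
        return s.mean() / s[0]

    lo, hi = 1e-3, 10.0   # assumed T2 search bracket, in seconds
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if nda_of(mid) < nda:   # nda grows monotonically with T2
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the normalized decay average is monotone in T2 for a monoexponential decay, a simple lookup or bisection recovers the time constant per pixel.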
Wolthaus, J. W. H.; Sonke, J.-J.; Herk, M. van; Damen, E. M. F. [Department of Radiation Oncology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX Amsterdam (Netherlands)
2008-09-15
Purpose: Lower lobe lung tumors move with amplitudes of up to 2 cm due to respiration. To reduce respiration imaging artifacts in planning CT scans, 4D imaging techniques are used. Currently, we use a single (midventilation) frame of the 4D data set for clinical delineation of structures and radiotherapy planning. A single frame, however, often contains artifacts due to breathing irregularities, and is noisier than a conventional CT scan since the exposure per frame is lower. Moreover, the tumor may be displaced from the mean tumor position due to hysteresis. The aim of this work is to develop a framework for the acquisition of a good quality scan representing all scanned anatomy in the mean position by averaging transformed (deformed) CT frames, i.e., canceling out motion. A nonrigid registration method is necessary since motion varies over the lung. Methods and Materials: 4D and inspiration breath-hold (BH) CT scans were acquired for 13 patients. An iterative multiscale motion estimation technique was applied to the 4D CT scan, similar to optical flow but using image phase (gray-value transitions from bright to dark and vice versa) instead. From the (4D) deformation vector field (DVF) derived, the local mean position in the respiratory cycle was computed and the 4D DVF was modified to deform all structures of the original 4D CT scan to this mean position. A 3D midposition (MidP) CT scan was then obtained by (arithmetic or median) averaging of the deformed 4D CT scan. Image registration accuracy, tumor shape deviation with respect to the BH CT scan, and noise were determined to evaluate the image fidelity of the MidP CT scan and the performance of the technique. Results: Accuracy of the deformable image registration method used was comparable to established automated locally rigid registration and to manual landmark registration (average difference to both methods <0.5 mm for all directions) for the tumor region.
From visual assessment, the registration was good for the clearly visible features (e.g., tumor and diaphragm). The shape of the tumor, with respect to that of the BH CT scan, was better represented by the MidP reconstructions than by any of the 4D CT frames (including MidV; reduction of 'shape differences' was 66%). The MidP scans contained about one-third the noise of individual 4D CT scan frames. Conclusions: We implemented an accurate method to estimate the motion of structures in a 4D CT scan. Subsequently, a novel method to create a midposition CT scan (time-weighted average of the anatomy) for treatment planning with reduced noise and artifacts was introduced. Tumor shape and position in the MidP CT scan represent those of the BH CT scan better than the MidV CT scan does and, therefore, the MidP CT scan was found to be appropriate for treatment planning.
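The mean-position averaging step can be illustrated with a 1-D toy version. This sketch assumes a displacement field per respiratory phase is already available (the phase-based optical-flow registration itself is not reproduced here); the linear-interpolation warp and all names are illustrative, and the warp assumes the deformed grid stays monotonic.

```python
import numpy as np

def midposition_average(frames, dvfs, use_median=True):
    """Deform each 4D CT phase to the local time-averaged (mean)
    position and average the deformed frames.

    frames: (T, N) gray values per respiratory phase (1-D toy anatomy)
    dvfs:   (T, N) voxel displacement in each phase, relative to an
            arbitrary reference phase.
    """
    frames = np.asarray(frames, float)
    dvfs = np.asarray(dvfs, float)
    mean_pos = dvfs.mean(axis=0)            # local mean position over the cycle
    x = np.arange(frames.shape[1], dtype=float)
    deformed = []
    for f, d in zip(frames, dvfs):
        shift = mean_pos - d                # displacement to reach mean position
        deformed.append(np.interp(x, x + shift, f))
    deformed = np.stack(deformed)
    return np.median(deformed, axis=0) if use_median else deformed.mean(axis=0)
```

Averaging T deformed frames is what reduces the noise relative to any single 4D CT frame, while the mean-position warp cancels the respiratory motion.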
Lakämper, Stefan; Lamprecht, Andreas; Schaap, Iwan A T; Dual, Jurg
2015-01-01
Ultrasonic standing waves are increasingly applied in the manipulation and sorting of micrometer-sized particles in microfluidic cells. To optimize the performance of such devices, it is essential to know the exact forces that the particles experience in the acoustic wave. Although much progress has been made via analytical and numerical modeling, the reliability of these methods relies strongly on the assumptions used, e.g. the boundary conditions. Here, we have combined an acoustic flow cell with an optical laser trap to directly measure the force on a single spherical particle in two dimensions. While performing ultrasonic frequency scans, we measured the time-averaged forces on single particles that were moved with the laser trap through the microfluidic cell. The cell including piezoelectric transducers was modeled with finite element methods. We found that the experimentally obtained forces and the derived pressure fields confirm the predictions from theory and modeling. This novel approach can now be readily expanded to other particle, chamber, and fluid regimes and opens up the possibility of studying the effects of the presence of boundaries, acoustic streaming, and non-linear fluids. PMID:25370872
ERIC Educational Resources Information Center
MacTaggart, Terrence, Ed.
2007-01-01
This book discusses the early indicators of a college or university's need for a turnaround. It outlines financial trends and other indicators of distress, as well as benchmarks for the various stages of an effective turnaround strategy. The book will help trustees, presidents, and faculty members diagnose whether they are in denial about the true…
ERIC Educational Resources Information Center
Herman, Rebecca; Huberman, Mette
2012-01-01
The TALPS study aims to build on the existing research base to develop promising methodologies to identify chronically low-performing and turnaround schools, as well as to identify promising strategies for turning around chronically low-performing schools. By looking specifically at schools identified as turnaround, in comparison to nonturnaround…
NASA Astrophysics Data System (ADS)
Liu, Yang-Wei; Liu, Bao-Jie; Lu, Li-Peng
2012-04-01
The average-passage equation system (APES) provides a rigorous mathematical framework for accounting for the unsteady blade row interaction through multistage compressors in a steady-state environment by introducing deterministic correlations (DC) that need to be modeled to close the equation system. The primary purpose of this study was to provide insight into the DC characteristics and the influence of DC on the time-averaged flow field of the APES. In Part 2 of this two-part paper, the influence of DC on the time-averaged flow field was systematically studied. Several time-averaging computations were conducted with various boundary conditions and DC for the downstream stator in a transonic compressor stage, by employing the CFD solver developed in Part 1 of this two-part paper. These results were compared with the time-averaged unsteady flow field and the steady one. The study indicated that the circumferential-averaged DC can account for a major part of the unsteady effects on spanwise redistribution of flow fields in compressors. Furthermore, it demonstrated that both deterministic stresses and deterministic enthalpy fluxes are necessary to reproduce the time-averaged flow field.
ERIC Educational Resources Information Center
Burbach, Harold J.; Butler, Alfred R., IV
2005-01-01
The authors of this article were recently privileged to spend five days with a group of 10 talented principals who have been charged with turning around academically low-performing schools in their respective districts. These principals were selected as participants in a Turnaround Specialist Program initiated by Gov. Mark R. Warner of Virginia…
Rural Economic Development Consequences of the Population Turnaround in Northern Lower Michigan.
ERIC Educational Resources Information Center
West, Patrick C.; And Others
Consequences of population turnaround for rural economic development are examined in a 9-county region of Northern Lower Michigan. Data from census reports and 374 usable responses to a questionnaire mailed to a random sample of property owners drawn from 1982 county tax assessment rolls were used to test competing hypotheses about rural…
How to Know when Dramatic Change Is on Track: Leading Indicators of School Turnarounds
ERIC Educational Resources Information Center
Kowal, Julie; Ableidinger, Joe
2011-01-01
In recent years, national policymakers have placed new emphasis on "school turnarounds" as a strategy for rapid, dramatic improvement in chronically failing schools, calling on education leaders to turn around performance in the 5,000 lowest-achieving schools nationwide. This goal may seem daunting, given the dismal success rates of school…
ERIC Educational Resources Information Center
Schmidt-Davis, Jon; Bottoms, Gene
2012-01-01
Recent studies make one reality clear: While multiple factors can cause a low-performing high school to be in a turnaround situation, every high school that makes dramatic academic improvement has strong, effective school leadership. Turning a school around is no work for novices. It takes a skilled, visionary and proactive principal to pull apart…
Intervention or Persuasion? Strategies for Turnaround of Poorly-Performing Councils
Dave Turner; Chris Skelcher; Philip Whiteman; Michael Hughes; Pauline Jas
2004-01-01
This article provides early results from a long-term evaluation of the turnaround strategies by poorly-performing local authorities in England. The history and theory behind central government interventions into local government is reviewed, focusing on the Comprehensive Performance Assessment (CPA). The journeys taken by 10 local authorities, after being labelled as ‘poor’ or ‘weak’, are described and these responses are located
ERIC Educational Resources Information Center
Stuit, David A.
2010-01-01
This study investigates the successes of the charter and district sectors in eliminating bad schools via dramatic turnarounds in performance and/or shutdowns. It identified 2,025 low-performing charter and district schools across ten states, each of which is home to a sizable number of charter schools. These particular schools were tracked from…
Turnaround, Transformational, or Transactional Leadership: An Ethical Dilemma in School Reform
ERIC Educational Resources Information Center
Mette, Ian M.; Scribner, Jay P.
2014-01-01
This case was written for school leaders, specifically building-level principals and central office administrators attempting to implement school turnaround reform efforts. Often, leaders who embark on this type of organizational change work in intense environments that produce high levels of pressure to demonstrate improvement in student…
NASA Astrophysics Data System (ADS)
Foresti, Daniele; Nabavi, Majid; Poulikakos, Dimos
2012-05-01
Acoustic levitation is a physical phenomenon that arises when the acoustic radiation pressure is strong enough to overcome gravitational force. It is a nonlinear phenomenon which can be predicted only if higher order terms are included in the acoustic field calculation. The study of acoustic levitation is usually conducted by solving the linear acoustic equation and bridging the gap with an analytical solution. Only recently has the scientific community shown interest in the full solution of the Navier-Stokes equations with the aim of deeply investigating the acoustic radiation pressure. We present herein a numerical model based on the Finite Volume Method (FVM) and Dynamic Mesh (DM) for the calculation of the acoustic radiation pressure acting on a rigid sphere inside an axisymmetric levitator, the most widely used and investigated type of levitator. In this work, we focus on the third resonance mode. The use of DM is new in the field of acoustic levitation, allowing a more realistic simulation of the phenomenon, since no standing wave has to be imposed as a boundary condition. The radiating plate is modeled as a rigid cylinder moving sinusoidally along the central axis. The time-averaged acoustic force exerted on the sphere is calculated for different radii Rs of the sphere (0.025 to 0.5 wavelengths). It is shown that the acoustic force increases in proportion to Rs^3 for small radii, then decreases when the standing wave condition is violated, and finally rises again in the travelling wave radiation pressure configuration. The numerical model is validated for the inviscid case against a Finite Element Method solution of the linear acoustic model, based on King's approximation.
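The small-radius Rs^3 scaling quoted above is the King/Gor'kov small-particle limit. The sketch below evaluates the standard plane-standing-wave expression for the time-averaged axial force with illustrative contrast factors; it is not the paper's FVM/DM model, and the parameter values are assumptions.

```python
import numpy as np

def gorkov_force(radius, k, e_ac, z, f1=1.0, f2=1.0):
    """Time-averaged axial radiation force on a small rigid sphere in a
    plane standing wave (Gor'kov / King small-particle limit):
        F(z) = 4*pi*R^3 * k * E_ac * Phi * sin(2*k*z)
    where Phi = f1/3 + f2/2 is the acoustic contrast factor
    (f1 ~ 1, f2 ~ 1 assumed for a rigid sphere in a gas)."""
    phi = f1 / 3.0 + f2 / 2.0
    return 4.0 * np.pi * radius**3 * k * e_ac * phi * np.sin(2.0 * k * z)
```

Doubling the radius multiplies the force by eight in this limit, which is the cubic growth regime the full FVM/DM computation reproduces before the standing-wave condition breaks down.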
NASA Astrophysics Data System (ADS)
Yu, C.; Zinniker, D. A.; Moldowan, J.
2010-12-01
Urban air pollution is an ongoing and complicated problem for both residents and policy makers. This study aims to provide a better understanding of the geographic source and fate of organic pollutants in a dynamic urban environment. Natural and artificial hydrophobic substrates were employed for the passive monitoring and mapping of ground-level organic pollutants in the San Francisco Bay area. We focused specifically on volatile and semi-volatile polycyclic aromatic hydrocarbons (PAHs). These compounds are proxies for a broad range of combustion related air pollutants derived from local, regional, and global combustion sources. PAHs include several well-studied carcinogens and can be measured easily and accurately across a broad range of concentrations. Estimates of time-integrated vapor phase and particle deposition were made from measuring accumulated PAHs in the leaves of several widely distributed tree species (including Quercus agrifolia and Sequoia sempervirens) and an artificial wax film. Samples were designed to represent pollutant exposure over a period of one to several months. The selective sampling and analysis of hydrophobic substrates provides insight into the average geographic distribution of ground-level air pollutants in a simple and inexpensive way. However, accumulated organics do not directly correlate with human exposure, and the source signature of PAHs may be obscured by transport, deposition, and flux processes. We attempted to address some of these complications by studying 1) PAH accumulation rates within substrates in a controlled microcosm, 2) differences in PAH abundance in different substrate types at the same locality, and 3) samples near long-term high volume air sampling stations. We also set out to create a map of PAH concentrations based on our measurements.
This map can be directly compared with interpolated data from high-volume sampling stations and used to address questions concerning atmospheric heterogeneity of these pollutants (i.e. due to both source localization and dominant wind patterns). Our initial results indicate that exposure to PAHs in the bay area is geographically heterogeneous and individual exposure may vary by more than two orders of magnitude. The signatures of PAH contamination also vary considerably, indicating that different sources and transport mechanisms may be important at different sites and times.
Shih, H C; Tsai, S W; Kuo, C H
2012-01-01
A solid-phase microextraction (SPME) device was used as a diffusive sampler for airborne propylene glycol ethers (PGEs), including propylene glycol monomethyl ether (PGME), propylene glycol monomethyl ether acetate (PGMEA), and dipropylene glycol monomethyl ether (DPGME). Carboxen-polydimethylsiloxane (CAR/PDMS) SPME fiber was selected for this study. A polytetrafluoroethylene (PTFE) tubing was used as the holder, and the SPME fiber assembly was inserted into the tubing as a diffusive sampler. The diffusion path length and area of the sampler were 0.3 cm and 0.00086 cm(2), respectively. The theoretical sampling constants at 30°C and 1 atm for PGME, PGMEA, and DPGME were 1.50 × 10(-2), 1.23 × 10(-2) and 1.14 × 10(-2) cm(3) min(-1), respectively. For evaluations, known concentrations of PGEs around the threshold limit values/time-weighted average with specific relative humidities (10% and 80%) were generated both by the air bag method and the dynamic generation system, while 15, 30, 60, 120, and 240 min were selected as the time periods for vapor exposures. Comparisons of the SPME diffusive sampling method to Occupational Safety and Health Administration (OSHA) organic Method 99 were performed side-by-side in an exposure chamber at 30°C for PGME. A gas chromatography/flame ionization detector (GC/FID) was used for sample analysis. The experimental sampling constants of the sampler at 30°C were (6.93 ± 0.12) × 10(-1), (4.72 ± 0.03) × 10(-1), and (3.29 ± 0.20) × 10(-1) cm(3) min(-1) for PGME, PGMEA, and DPGME, respectively. The adsorption of chemicals on the stainless steel needle of the SPME fiber was suspected to be one of the reasons why significant differences between theoretical and experimental sampling rates were observed. Correlations between the results for PGME from both the SPME device and OSHA organic Method 99 were linear (r = 0.9984) and consistent (slope = 0.97 ± 0.03). Face velocity (0-0.18 m/s) also proved to have no effect on the sampler.
However, the effects of temperature and humidity have been observed. Therefore, adjustments of experimental sampling constants at different environmental conditions will be necessary. PMID:22651222
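The theoretical sampling constant quoted above follows from Fick's first law for a diffusive sampler, S = D·A/L. The sketch below uses the sampler geometry given in the abstract (A = 0.00086 cm², L = 0.3 cm) and an assumed gas-phase diffusion coefficient for PGME of about 0.087 cm²/s, which is not stated in the abstract.

```python
def sampling_constant(diff_coeff_cm2_s, area_cm2=0.00086, path_cm=0.3):
    """Theoretical diffusive sampling constant S = D * A / L
    (Fick's first law), returned in cm^3/min."""
    return diff_coeff_cm2_s * 60.0 * area_cm2 / path_cm
```

With the assumed diffusion coefficient, this reproduces the ~1.50 × 10⁻² cm³ min⁻¹ value reported for PGME; the experimental constants being larger is attributed in the abstract to adsorption on the fiber's stainless steel needle.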
Switzer, P.; Ott, W.
1993-01-01
Previous human activity pattern-exposure models have required improved ways for handling the serial autocorrelation of pollutant concentrations in indoor and in-transit microenvironments. Because existing models often sample concentrations from microenvironmental distributions for different averaging times, one approach for handling this autocorrelation is to develop an averaging time model for each microenvironment of importance. The paper explores a new approach for developing averaging time models: deriving the model theoretically from the mass balance equation, which describes the relationship between the time series of the input and output concentrations of any pollutant introduced into a well-mixed chamber. Beginning with the mass balance equation, the paper derives an averaging time model that predicts the mean, variance, and autocorrelation of the time series of pollutant concentrations in a well-mixed chamber for any averaging time. The paper considers the case of a discrete model in which the input source concentration is a time series of independent, piecewise-constant concentrations of equal duration while the air exchange rate remains fixed. Because the model is derived theoretically, the model is exact for the conditions specified. The goal of the research is to provide human exposure researchers with basic concepts for designing and developing useful, practical algorithms for future exposure and indoor air quality models.
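The discrete model described, a well-mixed chamber driven by a piecewise-constant input with fixed air exchange rate, can be sketched directly. This is an illustrative implementation of the mass balance equation dC/dt = a(Cin − C), not the paper's derivation; all names and step sizes are assumptions.

```python
import numpy as np

def chamber_series(inputs, a, dt, c0=0.0, steps_per_interval=10):
    """Exact solution of dC/dt = a*(Cin - C) on sub-steps, for a
    series of equal-duration, piecewise-constant input concentrations."""
    out, c = [], c0
    decay = np.exp(-a * dt / steps_per_interval)
    for cin in inputs:
        for _ in range(steps_per_interval):
            c = cin + (c - cin) * decay    # exact step for constant input
            out.append(c)
    return np.array(out)

def averaged(series, m):
    """Non-overlapping m-point averages: the output concentration
    series re-expressed for an averaging time of m sub-steps."""
    n = len(series) // m
    return series[:n * m].reshape(n, m).mean(axis=1)
```

Feeding `chamber_series` an independent random input series and applying `averaged` with different `m` gives the mean, variance, and autocorrelation behavior as a function of averaging time that the model predicts analytically.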
NSDL National Science Digital Library
2014-09-19
In this activity, learners compare the number of letters in the names of the people in their families or group of friends. Learners write out the names in graph paper, with one letter in each square, and cut out the names. Learners sort the names from shortest to longest and then find the average length of the names by cutting and scrambling up the letters. This activity guide contains sample questions to ask, literary connections, extensions, and alignment to local and national standards.
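The arithmetic behind the cutting-and-scrambling step reduces to pooling all the letters and dividing by the number of names; a minimal sketch:

```python
def average_name_length(names):
    """Average letters per name: pool all cut-out letters, then deal
    them back evenly across the names."""
    total_letters = sum(len(n) for n in names)
    return total_letters / len(names)
```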
Stafrace, Simon; Lilly, Alan
2008-08-01
This case study demonstrates how leadership was harnessed to turn around a decline in the performance of an aged persons' mental health service - the Namarra Nursing Home at Caulfield General Medical Centre in Melbourne, Australia. In 2000 the nursing home faced a crisis of public confidence due to failings in the management of quality, clinical risk and human resources within the service. These problems reflected structural and operational shortcomings in the clinical directorate and wider organisation. In this article, we detail the process of turnaround from the perspective of senior executive managers with professional and operational responsibility for the service. This turnaround required attention to local clinical accountability and transformation of the mental health program from a collocated but operationally isolated service to one integrated within the governance structures of the auspicing organisation. PMID:18666887
NASA Astrophysics Data System (ADS)
Scarponi, D.; Kaufman, D.; Bright, J.; Kowalewski, M.
2009-04-01
Single fossiliferous beds contain biotic remnants that commonly vary in age over a time span of hundreds to thousands of years. Multiple recent studies suggest that such temporal mixing is a widespread phenomenon in marine depositional systems. This research focuses on quantitative estimates of temporal mixing obtained by direct dating of individual corbulid bivalve shells (Lentidium mediterraneum and Corbula gibba) from Po plain marine units of the Holocene 4th-order depositional sequence, including the Transgressive Systems Tract [TST] and Highstand Systems Tract [HST]. These units display a distinctive succession of facies consisting of brackish to marginal marine retrogradational deposits (early TST), overlain by fully marine fine to coarse gray sands (late TST), and capped with progradational deltaic clays and sands (HST). More than 300 corbulid specimens, representing 19 shell-rich horizons evenly distributed along the depositional sequence and sampled from 9 cores, have been dated by means of aspartic acid racemization calibrated using 23 AMS-radiocarbon dates (14 dates for Lentidium mediterraneum and 9 dates for Corbula gibba, respectively). The results indicate that the scale of time-averaging is comparable when similar depositional environments from the same systems tract are compared across cores. However, time averaging is notably different when similar depositional environments from TST and HST segments of the sequence are compared. Specifically, late HST horizons (n=8) display relatively low levels of time-averaging: the mean within-horizon range of shell ages is 537 years and the standard deviation averages 165 years. In contrast, late TST horizons (n=7) are dramatically more time-averaged: mean range of 5104 years and mean standard deviation of 1420 years. Thus, late TST horizons exhibit an order of magnitude more time-averaging than environmentally comparable late HST horizons.
In conclusion, the HST and TST systems tracts of the Po Plain display dramatically different levels of time-averaging, and therefore are also likely to differ notably in their taphonomic overprint. The observed patterns are also consistent with the sequence stratigraphic paradigm, which predicts differences in rate of sedimentation and severity of reworking between HST and TST. The results provide a compelling case for the applicability of amino acid racemization methods as a tool for evaluating changes in depositional dynamics, sedimentation rates, time-averaging, temporal resolution of the fossil record, and taphonomic overprints across sequence stratigraphic cycles.
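The two within-horizon time-averaging measures quoted above (range and standard deviation of calibrated shell ages) can be computed as follows; the sample ages in the usage note are invented for illustration.

```python
import numpy as np

def horizon_time_averaging(shell_ages):
    """Within-horizon time-averaging measures: the range (max - min)
    and the sample standard deviation of calibrated shell ages."""
    ages = np.asarray(shell_ages, float)
    return ages.max() - ages.min(), ages.std(ddof=1)
```

Applied per shell-rich horizon and then averaged within each systems tract, these are the statistics behind the contrast of 537 vs. 5104 years (mean range) and 165 vs. 1420 years (mean standard deviation).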
Using corporate finance to engineer an organizational turnaround.
Sussman, Jason H; Dziesinski, Ray R
2002-11-01
Georgia's Southern Regional Medical Center used a proven corporate finance approach to dramatically improve its financial position and integrate its strategic and financial planning. Managers throughout the organization were educated about principles of corporate finance. Reliable cash-flow projections were used to create a multiyear glide path to financial stability. Initiatives were tied to specific time frames and quantifiable financial goals and underwent a standardized review process. PMID:12656032
Catherine Gourlay-Francé; Catherine Lorgeoux; Marie-Hélène Tusseau-Vuillemin
2008-01-01
Semipermeable membrane devices (SPMDs) previously spiked with performance reference compounds were exposed in wastewater. After 6 days of exposure, 13 polycyclic aromatic hydrocarbons (PAHs) were quantified in SPMDs. Exchange rate constants and time-weighted average (TWA) concentrations of SPMD-available PAHs in water were calculated. The bias of using SPMDs to estimate an actual TWA concentration if the concentration in water fluctuates,
Prescot, Andrew P; Richards, Todd; Dager, Stephen R; Choi, Changho; Renshaw, Perry F
2012-11-01
(1)H MRS investigations have reported altered glutamatergic neurotransmission in a variety of psychiatric disorders. The unraveling of glutamate from glutamine resonances is crucial for the interpretation of these observations, although this remains a challenge at clinical static magnetic field strengths. Glutamate resolution can be improved through an approach known as echo time (TE) averaging, which involves the acquisition and subsequent averaging of multiple TE steps. The process of TE averaging retains the central component of the glutamate methylene multiplet at 2.35 ppm, with the simultaneous attenuation of overlapping phase-modulated coupled resonances of glutamine and N-acetylaspartate. We have developed a novel post-processing approach, termed phase-adjusted echo time (PATE) averaging, for the retrieval of glutamine signals from a TE-averaged (1)H MRS dataset. The method works by the application of an optimal TE-specific phase term, which is derived from spectral simulation, prior to averaging over TE space. The simulation procedures and preliminary in vivo spectra acquired from the human frontal lobe at 2.89 T are presented. Three metabolite normalization schemes were developed to evaluate the frontal lobe test-retest reliability for glutamine measurement in six subjects, and the resulting values were comparable with previous reports for within-subject (9-14%) and inter-subject (14-20%) measures. Using the acquisition parameters and TE range described, glutamine quantification is possible in approximately 10 min. The post-processing methods described can also be applied retrospectively to extract glutamine and glutamate levels from previously acquired TE-averaged (1)H MRS datasets. PMID:22407923
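The TE-averaging step, with an optional TE-specific phase applied before the average, can be sketched as below. This is a schematic of the idea only: in the paper the optimal PATE phases come from spectral simulation, which is not derived here, and the array shapes are illustrative.

```python
import numpy as np

def te_average(spectra, phases=None):
    """Average complex spectra acquired at multiple echo times.
    With `phases` given (one value per TE, in radians), apply the
    TE-specific phase first, as in phase-adjusted (PATE) averaging."""
    spectra = np.asarray(spectra, complex)
    if phases is not None:
        spectra = spectra * np.exp(1j * np.asarray(phases))[:, None]
    return spectra.mean(axis=0)
```

Plain TE averaging (no phases) attenuates phase-modulated couplings such as glutamine; choosing the phases to re-align a target resonance across TE steps instead retains that resonance in the average.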
Field demonstration of rapid turnaround, multilevel groundwater screening
Tingle, A.R. [Analysas Corp., Oak Ridge, TN (United States); Baker, L. [Automated Sciences Group, Inc., Oak Ridge, TN (United States); Long, D.D. [Martin Marietta Energy Systems, Inc., Oak Ridge, TN (United States). Hazardous Waste Remedial Actions Program; Miracle, M. [Advanced Sciences, Inc., Oak Ridge, TN (United States)
1994-09-01
A combined technology approach to rapidly characterizing source area and downgradient groundwater associated with a past fuel spill has been field tested. The purpose of this investigation was to determine the presence and extent of fuel-related compounds or indications of their biodegradation in groundwater. The distance from the source area to be investigated was established by calculating the potential extent of a plume based only on groundwater flow velocities. To accomplish this objective, commercially available technologies were combined and used to rapidly assess the source area and downgradient groundwater associated with the fuel discharge. The source of contamination that was investigated overlies glacial sand and gravel outwash deposits. Historical data suggest that from 1955 to 1970 as much as 1 to 6 million gallons of aviation gasoline (AVGAS) were spilled at the study area. Although the remedial investigation (RI) for this study area indicated fuel-related groundwater contamination at the source area, fuel-related contamination was not detected in downgradient monitoring wells. Rapid horizontal groundwater velocities and the 24-year time span from the last reported spill further suggest that a plume of contaminated groundwater could extend several thousand feet downgradient. The lack of contamination downgradient from the source suggests two possibilities: (1) monitoring wells installed during the RI did not intersect the plume or (2) fuel-related compounds had naturally degraded.
NASA Astrophysics Data System (ADS)
Jarochowska, Emilia; Munnecke, Axel
2015-01-01
Stable carbon isotope curves are used as a precise stratigraphic tool in the Paleozoic, even though they are commonly based on a shallow-water carbonate record characterized by low stratigraphic completeness. Identification of episodes of large-scale redeposition and erosion may improve δ13Ccarb-based correlations. Here, a series of at least three episodes of high-energy onshore redeposition is described from the Makarivka Member (new unit) of the Ustya Formation from the Homerian (middle Silurian) of Podolia, Ukraine. The Makarivka Member is emplaced within a tidal flat succession. Its most prominent part is divided into a lower polymictic conglomerate of sand- to boulder-sized clasts representing a range of subtidal facies, and an upper heterolithic unit composed of grainstone and mudstone laminae. The aim of the study is to identify the mechanism of deposition of the allochthonous conglomeratic material in this Member. Based on analogies with recent tsunami deposits, the conglomerate is interpreted to reflect the strongest landward-directed current in the tsunami run-up phase, and the heterolith - alternating high-density landward currents, stagnant intervals allowing mud and land-derived debris to settle, and backwash flows. The tsunamite was deposited during an interval of decreasing isotopic values of the Mulde excursion, a global δ13C excursion reaching + 5.2‰ in the studied sections. Clast redeposition in an interval characterized by rapidly changing δ13Ccarb offers the opportunity to evaluate the degree of temporal and spatial averaging caused by the tsunami. The clasts in the polymictic conglomerate show scattered δ13Ccarb values (- 0.3‰ to + 2.1‰) compared to homogeneous (1.3‰ to 1.6‰) values in the matrix.
The presence of clasts characterized by low ?13Ccarb values is explained by their decrease with bathymetry rather than erosion of pre-excursion strata, whereas high values characterize material entrained from the sea-floor and strata directly underlying the tsunamite. Close (1.3‰ and 1.5‰) average ?13Ccarb values suggest that the matrix of the conglomerate is potentially a product of clast grinding.
NASA Astrophysics Data System (ADS)
Gardezi, Akber; Alkandri, Ahmed; Birch, Philip; Young, Rupert; Chatwin, Chris
2010-10-01
We propose a space-variant Maximum Average Correlation Height (MACH) filter which can be locally modified depending upon its position in the input frame. This can be used to detect targets in an environment from varying ranges and in unpredictable weather conditions using thermal images. It enables adaptation of the filter dependent on background heat signature variances and also enables the normalization of the filter energy levels. The kernel can be normalized to remove a non-uniform brightness distribution if this occurs in different regions of the image. The main constraint in this implementation is the dependence on the computational ability of the system. This can be minimized with the recent advances in optical correlators using scanning holographic memory, as proposed by Birch et al. [1]. In this paper we describe the discrimination abilities of the MACH filter against background heat signature variances and its tolerance to changes in scale, and calculate the improvement in detection capabilities with the introduction of a nonlinearity. We propose a security detection system which exhibits a joint process in which a human and an automated pattern recognition system contribute to the overall solution for the detection of pre-defined targets.
Cut-off and hitting times of a sample of Ornstein-Uhlenbeck processes and its average
B. Lachaud
2005-01-01
A cut-off phenomenon is shown to occur in a sample of n independent, identically distributed Ornstein-Uhlenbeck processes and its average. Their distributions stay far from equilibrium before a certain O(log(n)) time, and converge exponentially fast after. Precise estimates show that the total variation distance drops from almost 1 to almost 0 over an interval of time of length O(1) around the cut-off time.
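The cut-off described above can be illustrated numerically: the average of n i.i.d. OU processes started at a common point is Gaussian at every time, so its total variation distance to equilibrium can be computed by quadrature. A minimal sketch (the parameters x0, theta, sigma, and n below are illustrative choices, not values from the paper):

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Density of N(mean, var) evaluated on a grid."""
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def tv_distance(m1, v1, m2, v2, grid):
    """Total variation distance between two Gaussians, by Riemann sum on a uniform grid."""
    dx = grid[1] - grid[0]
    p = gaussian_pdf(grid, m1, v1)
    q = gaussian_pdf(grid, m2, v2)
    return 0.5 * float(np.sum(np.abs(p - q)) * dx)

def average_ou_tv(t, n, x0, theta, sigma, grid):
    """TV distance between the law at time t of the average of n OU processes
    started at x0 and its N(0, sigma^2 / (2 theta n)) equilibrium."""
    m_t = x0 * np.exp(-theta * t)                                  # mean decays exponentially
    v_t = sigma**2 / (2 * theta) * (1 - np.exp(-2 * theta * t)) / n
    v_inf = sigma**2 / (2 * theta) / n                             # stationary variance of the average
    return tv_distance(m_t, v_t, 0.0, v_inf, grid)

if __name__ == "__main__":
    n, x0, theta, sigma = 100, 5.0, 1.0, 1.0
    t_cut = np.log(n) / (2 * theta)        # the O(log n) cut-off time scale
    grid = np.linspace(-2.0, 6.0, 80001)
    print(average_ou_tv(0.5 * t_cut, n, x0, theta, sigma, grid))   # close to 1
    print(average_ou_tv(3.0 * t_cut, n, x0, theta, sigma, grid))   # close to 0
```

Before the cut-off time the decaying mean still dominates the O(1/sqrt(n)) fluctuations, so the TV distance is near 1; shortly after, it collapses toward 0.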
Jo Vandesompele; Katleen De Preter; Filip Pattyn; Bruce Poppe; Nadine Van Roy; Anne De Paepe; Frank Speleman
2002-01-01
BACKGROUND: Gene-expression analysis is increasingly important in biological research, with real-time reverse transcription PCR (RT-PCR) becoming the method of choice for high-throughput and accurate expression profiling of selected genes. Given the increased sensitivity, reproducibility and large dynamic range of this methodology, the requirements for a proper internal control gene for normalization have become increasingly stringent. Although housekeeping gene expression has
4D CT amplitude binning for the generation of a time-averaged 3D mid-position CT scan
NASA Astrophysics Data System (ADS)
Kruis, Matthijs F.; van de Kamer, Jeroen B.; Belderbos, José S. A.; Sonke, Jan-Jakob; van Herk, Marcel
2014-09-01
The purpose of this study was to develop a method to use amplitude-binned 4D-CT (A-4D-CT) data for the construction of mid-position CT data and to compare the results with data created from phase-binned 4D-CT (P-4D-CT) data. For the latter purpose we have developed two measures which describe the regularity of the 4D data and we have tried to correlate these measures with the regularity of the external respiration signal. 4D-CT data were acquired for 27 patients on a combined PET-CT scanner. The 4D data were reconstructed twice, using phase and amplitude binning. The 4D frames of each dataset were registered using a quadrature-based optical flow method. After registration the deformation vector field was repositioned to the mid-position. Since amplitude-binned 4D data do not provide temporal information, we corrected the mid-position for the occupancy of the bins. We quantified the differences between the two mid-position datasets in terms of tumour offset and amplitude differences. Furthermore, we measured the standard deviation of the image intensity over the respiration after registration (σ_registration) and the regularity of the deformation vector field ($\overline{\Delta|J|}$) to quantify the quality of the 4D-CT data. These measures were correlated to the regularity of the external respiration signal (σ_signal). The two irregularity measures, $\overline{\Delta|J|}$ and σ_registration, were dependent on each other (p < 0.0001, R² = 0.80 for P-4D-CT, R² = 0.74 for A-4D-CT). For all datasets amplitude binning resulted in lower $\overline{\Delta|J|}$ and σ_registration, and large decreases led to visible quality improvements in the mid-position data. The decrease in artefacts was correlated to the irregularity of the external respiratory signal. The average tumour offset between the phase- and amplitude-binned mid-position without occupancy correction was 0.42 mm in the caudal direction (10.6% of the amplitude).
After correction this was reduced to 0.16 mm in the caudal direction (4.1% of the amplitude). Similar relative offsets were found at the diaphragm. We have devised a method to use amplitude-binned 4D-CT to construct a motion model and generate a mid-position planning CT for radiotherapy treatment purposes. We have compared the systematic offset of this mid-position model with a motion model derived from P-4D-CT. We found that the A-4D-CT led to a decrease of local artefacts and that this decrease was correlated to the irregularity of the external respiration signal.
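The occupancy correction mentioned above amounts to weighting each amplitude bin's deformation by the fraction of breathing time spent in that bin. A hypothetical sketch of that idea (the bin displacements and occupancies below are invented for illustration; this is not the authors' implementation):

```python
import numpy as np

def occupancy_weighted_mid_position(displacements, occupancy):
    """Mid-position displacement as the occupancy-weighted time average of the
    per-bin deformation vectors. Amplitude bins are unequally populated in time,
    so a plain mean over bins would bias the result toward rarely visited bins."""
    displacements = np.asarray(displacements, dtype=float)  # shape (n_bins, 3)
    weights = np.asarray(occupancy, dtype=float)
    weights = weights / weights.sum()                       # normalize occupancies
    return weights @ displacements                          # weighted mean vector

if __name__ == "__main__":
    # Hypothetical axial (z) displacements of a tumour in 5 amplitude bins (mm),
    # with the breathing signal spending most of its time near exhale.
    disp = [[0, 0, 0.0], [0, 0, 2.0], [0, 0, 5.0], [0, 0, 8.0], [0, 0, 10.0]]
    occ = [0.35, 0.25, 0.18, 0.12, 0.10]
    print(occupancy_weighted_mid_position(disp, occ))
```

Because exhale bins are occupied longest, the occupancy-weighted mid-position sits closer to exhale than the unweighted bin mean, which is the systematic offset the abstract reports removing.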
Buckley, M.
2011-01-01
Urban density is more sustainable: fiscal efficiency and infrastructure (optimized city services, concentrated utility demand, and control-system benefits); social co-location synergy (cultural and educational density)... Garland, TX strategy: focus on strategic industries (identify footloose industries; training/education synergy); research (analyze market and inventory sites: demographic shifts, underutilized sites); visioning (MXD visioning, planning)...
Shaffer, F.; Massah, H.; Sinclair, J.; Shahnam, M.
1995-04-30
This paper describes an experimental facility to measure time-averaged properties of particle-wall collisions. A particle tracking velocimetry (PTV) system measures particle rebound angles and velocities and impact angles and velocities for specific particles. From these measured properties, values of the normal and tangential coefficients of restitution are derived. The PTV system rapidly measures thousands of particle-wall collisions, enabling determination of time-averaged properties. In this initial work we use 191 µm glass particles colliding with a glass plate at a fixed angle of 17.5° and a velocity of 17.07 m/s. Observations are also made of aspherical particles colliding with gross sliding. In the future, this experimental system will be used with particles and wall materials of industrial significance.
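The normal and tangential coefficients of restitution described above follow directly from the measured impact and rebound speeds and angles. A minimal sketch, using the paper's 17.5° / 17.07 m/s impact conditions together with a purely hypothetical rebound measurement:

```python
import math

def restitution_coefficients(impact_speed, impact_angle_deg,
                             rebound_speed, rebound_angle_deg):
    """Normal and tangential coefficients of restitution from speeds and
    angles, with angles measured relative to the wall plane."""
    ai = math.radians(impact_angle_deg)
    ar = math.radians(rebound_angle_deg)
    # Decompose each velocity into wall-normal and wall-tangential components.
    v_n_in, v_t_in = impact_speed * math.sin(ai), impact_speed * math.cos(ai)
    v_n_out, v_t_out = rebound_speed * math.sin(ar), rebound_speed * math.cos(ar)
    return v_n_out / v_n_in, v_t_out / v_t_in

if __name__ == "__main__":
    # 17.5 deg / 17.07 m/s impact (from the abstract); rebound values are invented.
    e_n, e_t = restitution_coefficients(17.07, 17.5, 15.0, 14.0)
    print(round(e_n, 3), round(e_t, 3))
```

Averaging these per-collision ratios over thousands of tracked particles gives the time-averaged coefficients the facility is built to measure.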
Felix Zijlstra
Abstract. Background: Elevated blood glucose values are a prognostic factor in myocardial infarction (MI) patients. The unfavorable relation between hyperglycemia and outcome is known for admission glucose and fasting glucose after admission. These predictors are single measurements and thus not indicative of overall hyperglycemia. Increased time-averaged glucose may better predict adverse events in MI patients. Methods: In a prospective study of MI patients treated with primary percutaneous coronary intervention (PCI)
Robert H. Litzenberger; Cherukuri U. Rao
1971-01-01
This paper develops an econometric model of the valuation of electric utility shares. This model, based upon the Sharpe-Lintner capital market theory, yields indirect estimates of the marginal rate of time preference and average risk aversion of investors in electric utility shares during the period 1960-66. In general, the empirical findings are consistent with the Sharpe-Lintner positive theory of the
Jiao, S; Maltecca, C; Gray, K A; Cassady, J P
2014-07-01
Efficient use of feed resources has become a clear challenge for the U.S. pork industry as feed costs continue to be the largest variable expense. The availability of the Illumina Porcine60K BeadChip has greatly facilitated whole-genome association studies to identify chromosomal regions harboring genes influencing those traits. The current study aimed at identifying genomic regions associated with variation in feed efficiency and several production traits in a Duroc terminal sire population, including ADFI, ADG, feed conversion ratio, residual feed intake (RFI), real-time ultrasound back fat thickness (BF), ultrasound muscle depth, intramuscular fat content (IMF), birth weight (BW at birth), and weaning weight (BW at weaning). Single-trait association analyses were performed using Bayes B models with 35,140 SNP on 18 autosomes after quality control. Significance of nonoverlapping 1-Mb length windows (n = 2,380) was tested across 3 QTL inference methods: posterior distribution of window variances from Markov chain Monte Carlo, naive Bayes factor, and nonparametric bootstrapping. Genes within the informative QTL regions for the traits were annotated. A region ranging from 136 to 140 Mb (4-Mb length) on SSC 1, approximately 8 Mb upstream of the MC4R gene, was significantly associated with ADFI, ADG, and BF, where SOCS6 and DOK6 are proposed as the most likely candidate genes. Another region affecting BW at weaning was identified on SSC 4 (84-85 Mb), harboring genes previously found to influence both human and cattle height: PLAG1, CHCHD7, RDHE2 (or SDR16C5), MOS, RPS20, LYN, and PENK. No QTL were identified for RFI, IMF, and BW at birth. In conclusion, we have identified several genomic regions associated with traits affecting nutrient utilization that could be considered for future genomic prediction to improve feed utilization. PMID:24962532
Mineo, Placido; Vitalini, Daniele; Scamporrino, Emilio; Bazzano, Sebastiano; Alicata, Rossana
2005-01-01
The dependence of the calculated average molecular mass of a polyethylene glycol with a large polydispersity on the instrumental parameters adopted in the acquisition of mass spectra using delayed extraction matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (DE MALDI-TOFMS) was investigated. It has been shown that a combined effect of delay times and potential gradients can act on the ion cloud in the source chamber affecting both mass resolution and average molecular mass value of the analyzed polymeric sample. Also examined was a blend of two different polymers (a PEG and a PMMA commercial sample having a similar average molecular mass), which presents an additional problem concerning the discrimination among the different polymer species as a function of the experimental conditions. In this work, the best instrumental conditions to obtain both good resolution and a correct average molecular mass for the examined polydisperse sample are reported. PMID:16134231
NASA Astrophysics Data System (ADS)
Khokhlov, A.; Hulot, G.; Johnson, C. L.
2013-12-01
It is well known that the geometry of the recent time-averaged paleomagnetic field (TAF) is very close to that of a geocentric axial dipole (GAD). However, many TAF models recovered from averaging lava flow paleomagnetic directional data (the most numerous and reliable of all data) suggest that significant additional terms, in particular quadrupolar (G20) and octupolar (G30) zonal terms, likely contribute. The traditional way in which most such TAF models are recovered uses an empirical estimate for paleosecular variation (PSV) that is subject to limitations imposed by the limited age information available for such data. In this presentation, we will report on a new way to recover the TAF, using an inverse modeling approach based on the so-called Giant Gaussian Process (GGP) description of the TAF and PSV, and various statistical tools we recently made available (see Khokhlov and Hulot, Geophysical Journal International, 2013, doi: 10.1093/gji/ggs118). First results based on high quality data published from the Time-Averaged Field Investigations project (see Johnson et al., G-cubed, 2008, doi:10.1029/2007GC001696) clearly show that both the G20 and G30 terms are very well constrained, and that optimum values fully consistent with the data can be found. These promising results lay the groundwork for use of the method with more extensive data sets, to search for possible additional non-zonal departures of the TAF from the GAD.
Numerical study of laminar and turbulent flows inside a turnaround duct with and without guide vanes
NASA Technical Reports Server (NTRS)
Lin, S.-J.; Chang, James L. C.
1987-01-01
The purpose of this study is to examine in detail incompressible laminar and turbulent flows inside a turnaround duct with and without guide vanes and to investigate the effects of vanes on the flow characteristics. To perform this study, an implicit finite difference code cast in general curvilinear coordinates is further developed. The code is based on the method of pseudo-compressibility and utilizes an ADI or implicit approximate factorization algorithm to achieve computational efficiency. A method of segmental sweeping is developed to overcome the multiple-zone problem arising from the imposition of guide vanes. In the present study, several test cases have been computed. These test cases include laminar and turbulent flows inside a turnaround duct without and with two or three guide vanes. The study reveals that: (1) there exist large recirculation zones inside the duct if no vanes are present; (2) properly shaped and positioned guide vanes are effective in eliminating flow separation; and (3) laminar and turbulent flows have similar flow features, but turbulent flow has a smaller total pressure drop.
NASA Astrophysics Data System (ADS)
Minh, David D. L.; Chodera, John D.
2011-01-01
Recently discovered identities in statistical mechanics have enabled the calculation of equilibrium ensemble averages from realizations of driven nonequilibrium processes, including single-molecule pulling experiments and analogous computer simulations. Challenges in collecting large data sets motivate the pursuit of efficient statistical estimators that maximize use of available information. Along these lines, Hummer and Szabo developed an estimator that combines data from multiple time slices along a driven nonequilibrium process to compute the potential of mean force. Here, we generalize their approach, pooling information from multiple time slices to estimate arbitrary equilibrium expectations. Our expression may be combined with estimators of path-ensemble averages, including existing optimal estimators that use data collected by unidirectional and bidirectional protocols. We demonstrate the estimator by calculating free energies, moments of the polymer extension, the thermodynamic metric tensor, and the thermodynamic length in a model single-molecule pulling experiment. Compared to estimators that only use individual time slices, our multiple time-slice estimators yield substantially smoother estimates and achieve lower variance for higher-order moments.
Sharp, Robert
2002-02-01
Paramagnetic enhancement of NMR relaxation (NMR-PRE) depends on thermal relaxation of the electron spin system. Most previous analyses of experimental NMR-PRE data have relied on Bloembergen--Morgan (B--M) theory to describe the magnetic field dependence of electron spin relaxation in liquid samples. However, B--M theory assumes a Zeeman-limit situation and is not physically appropriate to the common case of S > or = 1 transition metal ions which possess a permanent zero-field splitting (zfs) that is comparable to or larger than the Zeeman splitting. Theory has been needed which (1) includes the effects of the zfs interaction, thus providing a realistic description of the magnetic field dependence of the NMR-PRE outside the Zeeman limit, and (2) describes electron spin relaxation phenomena at a comparable level of complexity to that of B--M theory, i.e., with two magnetic field-dependent electron spin relaxation times, tau(S1) and tau(S2), defined in the laboratory coordinate frame. Theory of this kind is developed. Expressions derived in a previous study (R. R. Sharp and L. L. Lohr, J. Chem. Phys. 115, 5005 (2001).) for level-specific relaxation rates have been averaged over spin eigenstates to give level-averaged quantities, tau(S1,2). This kind of averaging leads to a great simplification in the mathematical form of the results. Simple zfs-limit molecular-frame and laboratory-frame expressions are given for electron spin S=1, 3/2, 2, and 5/2. General expressions, valid for S > or = 1 and for arbitrary magnitudes of the Zeeman and zfs energies, are derived for level-averaged electron spin relaxation times defined in both the laboratory- and the molecule-fixed coordinate frames. The new theory coincides with B--M theory in the Zeeman limit. PMID:11846584
Holyoak, Marcel; Meese, Robert J.; Graves, Emily E.
2014-01-01
In metapopulations in which habitat patches vary in quality and occupancy it can be complicated to calculate the net time-averaged contribution to reproduction of particular populations. Surprisingly, few indices have been proposed for this purpose. We combined occupancy, abundance, frequency of occurrence, and reproductive success to determine the net value of different sites through time and applied this method to a bird of conservation concern. The Tricolored Blackbird (Agelaius tricolor) has experienced large population declines, is the most colonial songbird in North America, is largely confined to California, and breeds itinerantly in multiple habitat types. It has had chronically low reproductive success in recent years. Although young produced per nest have previously been compared across habitats, no study has simultaneously considered site occupancy and reproductive success. Combining occupancy, abundance, frequency of occurrence, reproductive success and nest failure rate we found that large colonies in grain fields fail frequently because of nest destruction due to harvest prior to fledging. Consequently, net time-averaged reproductive output is low compared to colonies in non-native Himalayan blackberry or thistles, and native stinging nettles. Cattail marshes have intermediate reproductive output, but their reproductive output might be improved by active management. Harvest of grain-field colonies necessitates either promoting delay of harvest or creating alternative, more secure nesting habitats. Stinging nettle and marsh colonies offer the main potential sources for restoration or native habitat creation. From 2005–2011 breeding site occupancy declined 3x faster than new breeding colonies were formed, indicating a rapid decline in occupancy. Total abundance showed a similar decline. Causes of variation in the value for reproduction of nesting substrates and factors behind continuing population declines merit urgent investigation.
The method we employ should be useful in other metapopulation studies for calculating time-averaged reproductive output for different sites. PMID:24817307
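A net time-averaged reproductive output of the kind the study combines might be sketched as follows (the index, the per-site data, and the weighting are hypothetical composites invented for illustration, not the authors' exact method):

```python
def net_time_averaged_output(years):
    """Net time-averaged reproductive output of a site: for each surveyed year,
    expected young = occupancy (0/1 or a probability) * abundance (breeding
    females) * young fledged per nest * nest success rate, averaged over years.
    A hypothetical composite of the quantities the study combines."""
    totals = [occ * abund * young * (1.0 - fail)
              for (occ, abund, young, fail) in years]
    return sum(totals) / len(totals)

if __name__ == "__main__":
    # Each tuple: (occupancy, abundance, young per nest, nest failure rate).
    # Hypothetical grain-field colony: large but frequently destroyed by harvest.
    grain = [(1, 20000, 1.2, 0.9), (1, 15000, 1.1, 0.8), (0, 0, 0.0, 0.0)]
    # Hypothetical blackberry colony: smaller but rarely failing.
    blackberry = [(1, 3000, 1.5, 0.1), (1, 2500, 1.4, 0.2), (1, 2800, 1.5, 0.1)]
    print(net_time_averaged_output(grain), net_time_averaged_output(blackberry))
```

The point of such an index is exactly the abstract's: a huge colony with frequent harvest-driven nest failure can contribute less over time than a much smaller but reliable one.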
Instability at long averaging times
Hensinger, Winfried
[Slide residue: laser system diagram — 934 nm ECDL and slave laser diodes, AOM tuning, PM fiber, SHG cavity to 467 nm (to trap), H-C lock, referenced to an H maser / 871 nm reference via a frequency comb generator; noted transition linewidth in the nano-Hertz range, with excitation accompanied by a large light shift due to coupling to other levels.]
Canciani, B; Mormino, P; Scanavacca, G; Della Rocca, F
1994-02-01
We report on a patient suffering from asymmetrical hypertrophic cardiomyopathy, with alterations occurring in the signal-averaged electrocardiogram (SAQRS). Recordings of 3-lead orthogonal ECG were obtained and analyzed by a Del Mar Avionics 750A Innovator device. The time domain analysis showed late potentials and a slow inscription of the initial portion of the SAQRS, that we called "early potentials". On the frequency domain there was a high degree of spectral turbulence at the beginning and at the end of the SAQRS. We discuss the possibility that the slow and fragmented conduction of the initial portion of the SAQRS could be related to the electrical instability of the disease. PMID:8013767
NASA Astrophysics Data System (ADS)
Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria
2013-06-01
Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
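The long-memory ("FI") component of an ARFIMA model is the fractional differencing operator (1 - B)^d, whose weights follow a simple recursion. A minimal numpy sketch of computing and applying those weights (illustrative only; estimating d and fitting the GARCH errors is not shown):

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n weights of the fractional differencing operator (1 - B)^d:
    w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply (1 - B)^d to a series x with an expanding window, e.g. to remove
    long memory before modelling the conditional variance of the residuals."""
    x = np.asarray(x, dtype=float)
    w = frac_diff_weights(d, len(x))
    # y_t = sum_{k=0..t} w_k * x_{t-k}
    return np.array([w[: t + 1] @ x[t::-1] for t in range(len(x))])

if __name__ == "__main__":
    print(frac_diff_weights(1.0, 4))        # d = 1 recovers ordinary differencing
    print(frac_diff([1.0, 2.0, 4.0], 1.0))
```

For 0 < d < 0.5 the weights decay hyperbolically rather than truncating, which is what lets ARFIMA capture the slowly decaying autocorrelations of long-memory HRV series.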
NASA Technical Reports Server (NTRS)
Kim, J. H.; Hudson, R. D.; Thompson, A. M.
1996-01-01
An error analysis of archived total O3 from the total ozone mapping spectrometer (TOMS) (version 6) is presented. Daily total O3 maps for the tropics, from the period October 6-21, 1992, are derived from TOMS radiances following correction for these errors. These daily maps, averaged together, show a wavelike feature, which is observed in all latitude bands, underlying sharp peaks which occur at different longitudes depending on the latitude. The wave pattern is used to derive both time-averaged stratospheric and tropospheric O3 fields. The nature of the wave pattern (stratospheric or tropospheric) cannot be determined with certainty due to missing data (no Pacific sondes, no lower stratospheric Stratospheric Aerosol and Gas Experiment (SAGE) ozone for 18 months after the Mt. Pinatubo eruption) and significant uncertainties in the corroborative satellite record in the lower stratosphere (solar backscattered ultraviolet (SBUV), microwave limb sounder (MLS)). However, the time-averaged tropospheric ozone field, based on the assumption that the wave feature is stratospheric, agrees within 10% with ultraviolet differential absorption laser O3 measurements from the DC-8 during the Transport and Atmospheric Chemistry near the Equator-Atlantic (TRACE A) campaign and with ozonesonde measurements over Brazzaville, Congo, Ascension Island, and Natal, Brazil, for the period October 6-21, 1992. The derived background (nonpolluted) Indian Ocean tropospheric ozone amount, 26 Dobson units (DU), agrees with the cleanest African ozonesonde profiles for September-October 1992. The assumption of a totally tropospheric wave (flat stratosphere) gives 38 DU above the western Indian Ocean and 15-40% disagreements with the sondes. Tropospheric column O3 is high from South America to Africa, owing to interaction of dynamics with biomass burning emissions.
Comparison with fire distributions from the advanced very high resolution radiometer (AVHRR) during October 1992 suggests that tropospheric O3 produced from biomass burning in South America and Africa dominates the O3 budget in the tropical southern hemisphere during the study period.
NASA Astrophysics Data System (ADS)
Kosnik, Matthew A.; Hua, Quan; Kaufman, Darrell S.; Zawadzki, Atun
2015-03-01
Carbon-14 calibrated amino acid racemization (14C/AAR) data and lead-210 (210Pb) data are used to examine sediment accumulation rates, stratigraphic order, and the extent of time-averaging in sediments collected from the One Tree Reef lagoon (southern Great Barrier Reef, Australia). The top meter of lagoonal sediment preserves a stratigraphically ordered deposit spanning the last 600 yrs. Despite different assumptions, the 210Pb and 14C/AAR chronologies are remarkably similar indicating consistency in sedimentary processes across sediment grain sizes spanning more than three orders of magnitude (0.1-10 mm). Estimates of long-term sediment accumulation rates range from 2.2 to 1.2 mm yr-1. Molluscan time-averaging in the taphonomically active zone is 19 yrs, whereas below the depth of final burial (~15 cm), it is ~110 yrs/5 cm layer. While not a high-resolution paleontological record, this reef lagoon sediment is suitable for paleoecological studies spanning the period of Western colonization and development. This sedimentary deposit, and others like it, should be useful, albeit not ideal, for quantifying anthropogenic impacts on coral reef systems.
ERIC Educational Resources Information Center
Carlson, Robert V.
This paper presents a case study of a rural community exhibiting a dramatic turnaround in community support for a new school bond issue. Demographic change was partly responsible for the change in community attitudes, with two waves of immigration altering the long-term conservative orientation of this community. After a series of failed…
ERIC Educational Resources Information Center
Ylimaki, Rose M.; Brunderman, Lynnette; Bennett, Jeffrey V.; Dugan, Thad
2014-01-01
Today's accountability policies and changing demographics have created conditions in which leaders must rapidly build school capacity and improve outcomes in culturally diverse schools. This article presents findings from a mixed-methods evaluation of an Arizona Turnaround Leadership Development Project. The project drew on studies of…
ERIC Educational Resources Information Center
Jones, Stephanie; Clarke, Lane; Enriquez, Grace
2009-01-01
This book demonstrates a five-part framework for teachers, reading specialists, and literacy coaches who want to help their least engaged students become powerful readers. Merging theory and practice, the guide offers successful strategies to reach your "struggling" learners. The authors show how you can "turn-around" your instructional practice,…
ERIC Educational Resources Information Center
Jacobson, Stephen L.; Szczesek, Jill
2013-01-01
This study investigates the economic impact of a "turnaround" school on real property values in its surrounding community as related to the argument introduced by Tiebout in 1956 correlating local public goods, in this case school success, to housing-location decision making. Using single-family home sales found on the Multiple Listing System and…
ERIC Educational Resources Information Center
Marsh, Julie A.; Strunk, Katharine O.; Bush, Susan
2013-01-01
Purpose: Despite the popularity of school "turnaround" and "portfolio district" management as solutions to low performance, there has been limited research on these strategies. The purpose of this paper is to address this gap by exploring the strategic case of Los Angeles Unified School District's Public School Choice…
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2001-01-01
Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
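The quoted Poisson probabilities follow from P(N >= 1) = 1 - exp(-λt) for a Poisson process with rate λ. A small sketch checking consistency with the abstract's numbers (the λ values for VEI >= 5 and VEI >= 6 are back-solved from the quoted probabilities, not taken from the paper):

```python
import math

def prob_at_least_one(rate_per_decade, decades=1.0):
    """P(N >= 1) for a Poisson process: 1 - exp(-lambda * t)."""
    return 1.0 - math.exp(-rate_per_decade * decades)

if __name__ == "__main__":
    # ~7 cataclysmic (VEI >= 4) eruptions expected per decade:
    print(prob_at_least_one(7.0))       # exceeds 0.99, as the abstract states
    # Per-decade rates implied by the quoted VEI >= 5 and VEI >= 6 probabilities:
    for p in (0.49, 0.18):
        print(-math.log(1.0 - p))       # rate lambda with P(N >= 1) = p
```

With an expected ~7 events per decade, at least one VEI >= 4 eruption in 2000-2009 is essentially certain, matching the >99 percent figure.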
NASA Technical Reports Server (NTRS)
Aoyagi, Kiyoshi; Olson, Lawrence E.; Peterson, Randall L.; Yamauchi, Gloria K.; Ross, James C.; Norman, Thomas R.
1987-01-01
Time-averaged aerodynamic loads are estimated for each of the vane sets in the National Full-Scale Aerodynamic Complex (NFAC). The methods used to compute global and local loads are presented. Experimental inputs used to calculate these loads are based primarily on data obtained from tests conducted in the NFAC 1/10-Scale Vane-Set Test Facility and from tests conducted in the NFAC 1/50-Scale Facility. For those vane sets located directly downstream of either the 40- by 80-ft test section or the 80- by 120-ft test section, aerodynamic loads caused by the impingement of model-generated wake vortices and model-generated jet and propeller wakes are also estimated.
NASA Technical Reports Server (NTRS)
Heyson, Harry H.
1960-01-01
Measurements of the time-averaged induced velocities were obtained for rotor tip speeds as great as 1,100 feet per second (tip Mach number of 0.98) and measurements of the instantaneous induced velocities were obtained for rotor tip speeds as great as 900 feet per second. The results indicate that the small effects on the wake with increasing Mach number are primarily due to the changes in rotor-load distribution resulting from changes in Mach number rather than to compressibility effects on the wake itself. No effect of tip Mach number on the instantaneous velocities was observed. Under conditions for which the blade tip was operated at negative pitch angles, an erratic circulatory flow was observed.
NASA Technical Reports Server (NTRS)
Mielke, Amy F.; Seasholtz, Richard G.; Elam, Kristie A.; Panda, Jayanta
2004-01-01
A molecular Rayleigh scattering based flow diagnostic is developed to measure time-averaged velocity, density, temperature, and turbulence intensity in a 25.4-mm diameter nozzle free jet facility. The spectrum of the Rayleigh scattered light is analyzed using a Fabry-Perot interferometer operated in the static imaging mode. The resulting fringe pattern containing spectral information of the scattered light is recorded using a low-noise CCD camera. Nonlinear least squares analysis of the fringe pattern using a kinetic theory model of the Rayleigh scattered light provides estimates of density, velocity, temperature, and turbulence intensity of the gas flow. Resulting flow parameter estimates are presented for an axial scan of subsonic flow at Mach 0.95 for comparison with previously acquired pitot tube data, and for axial scans of supersonic flow in an underexpanded screeching jet. The issues related to obtaining accurate turbulence intensity measurements using this technique are discussed.
Boyer, Edmond
integrative sampler and solid-phase extraction for estimating herbicide time-weighted average concentrations, and for detecting episodic or short-term pollution events (e.g., increased herbicide concentrations during a flood
NASA Astrophysics Data System (ADS)
Hulot, G.; Khokhlov, A.
2007-12-01
We recently introduced a method to rigorously test the statistical compatibility of combined time-averaged (TAF) and paleosecular variation (PSV) field models against any lava flow paleomagnetic database (Khokhlov et al., 2001, 2006). Applying this method to test (TAF+PSV) models against synthetic data produced from those models shows that the method is very efficient at discriminating models, and very sensitive, provided data errors are properly taken into account. This prompted us to test a variety of published combined (TAF+PSV) models against a test Brunhes stable polarity data set extracted from the Quidelleur et al. (1994) database. Not surprisingly, ignoring data errors leads all models to be rejected. But taking data errors into account leads to the stimulating conclusion that at least one (TAF+PSV) model appears to be compatible with the selected data set, this model being purely axisymmetric. This result shows that in practice also, with the databases currently available, the method can discriminate various candidate models and decide which actually best fits a given data set. But it also shows that likely non-zonal signatures of non-homogeneous boundary conditions imposed by the mantle are difficult to identify as statistically robust from paleomagnetic directional data sets. In the present paper, we discuss the possibility that such signatures could eventually be identified as robust with the help of more recent data sets (such as the one put together under the collaborative "TAFI" effort, see e.g. Johnson et al. abstract #GP21A-0013, AGU Fall Meeting, 2005) or by taking additional information into account (such as the possible coincidence of non-zonal time-averaged field patterns with analogous patterns in the modern field).
J. Y. Guo; H. Greiner-Mai; L. Ballani; J. Neumeyer; O. Dierks; C. K. Shum
The folding-averaging algorithm is applied to precisely determine periodic signals that may be present in a time series. The basic principle is to rebuild, for every test period, a new short time series by cutting the original time series into shorter segments whose length equals the test period, and then stacking the short time series
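The stack-and-average (epoch folding) idea can be sketched in a few lines of NumPy. This is a simplified illustration of the principle, not the authors' implementation; all names are ours:

```python
import numpy as np

def fold_average(x, period):
    """Cut a series into segments of length `period` and average them:
    a signal with that period adds coherently, noise averages out."""
    n = (len(x) // period) * period          # drop the incomplete tail
    return x[:n].reshape(-1, period).mean(axis=0)

# A 20-sample periodicity buried in unit-variance noise is recovered
rng = np.random.default_rng(1)
t = np.arange(4000)
x = np.sin(2 * np.pi * t / 20) + rng.standard_normal(4000)
profile = fold_average(x, 20)   # close to one clean cycle of the sine
```

Folding at a wrong test period flattens the profile instead, which is how scanning over candidate periods locates the true one.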
Wójcik, J; Kujawska, T; Nowicki, A; Lewin, P A
2008-12-01
The primary goal of this work was to verify experimentally the applicability of the recently introduced time-averaged wave envelope (TAWE) method [J. Wójcik, A. Nowicki, P.A. Lewin, P.E. Bloomfield, T. Kujawska, L. Filipczyński, Wave envelopes method for description of nonlinear acoustic wave propagation, Ultrasonics 44 (2006) 310-329] as a tool for fast prediction of four-dimensional (4D) pulsed nonlinear pressure fields from arbitrarily shaped acoustic sources in attenuating media. The experiments were performed in water at the fundamental frequency of 2.8 MHz for spherically focused (focal length F=80 mm) square (20 x 20 mm) and rectangular (10 x 25 mm) sources similar to those used in the design of 1D linear arrays operating with ultrasonic imaging systems. The experimental results obtained with 10-cycle tone bursts at three different excitation levels corresponding to linear, moderately nonlinear and highly nonlinear propagation conditions (0.045, 0.225 and 0.45 MPa on-source pressure amplitude, respectively) were compared with those yielded by the TAWE approach. The comparison of the experimental results and numerical simulations has shown that the TAWE approach is well suited to predict (to within +/-1 dB) both the spatial-temporal and spatial-spectral pressure variations in pulsed nonlinear acoustic beams. The results indicated that the TAWE approach shortened computation time in comparison with the time needed to predict the full 4D pulsed nonlinear acoustic fields using a conventional (Fourier-series) approach [P.T. Christopher, K.J. Parker, New approaches to nonlinear diffractive field propagation, J. Acoust. Soc. Am. 90 (1) (1991) 488-499].
The reduction in computation time depends on several parameters, including the source geometry, dimensions, fundamental resonance frequency, excitation level, and the strength of the medium nonlinearity. For the non-axisymmetric focused transducers mentioned above, excited by tone bursts corresponding to moderately nonlinear and highly nonlinear conditions, the computations took 3 and 12 h, respectively, on a PC laptop with a 1.5 GHz 32-bit processor and only 2 GB of RAM. Such prediction of the full 4D pulsed field is not possible with the conventional Fourier-series scheme, as it would require increasing the RAM by at least 2 orders of magnitude. PMID:18474387
Residual life assessment of the SSME/ATD HPOTP turnaround duct (TAD)
NASA Technical Reports Server (NTRS)
Gross, R. Steven
1996-01-01
This paper is concerned with the prediction of the low cycle thermal fatigue behavior of a component in a developmental (ATD) high pressure liquid oxygen turbopump (HPOTP) for the Space Shuttle Main Engine (SSME). This component is called the Turnaround Duct (TAD). The TAD is a complex single-piece casting of MAR-M-247 material. Its function is to turn the hot turbine exhaust gas (a 1200 F hydrogen-rich gas stream) so that it can exhaust radially out of the turbopump. In very simple terms, the TAD consists of two rings connected axially by 22 hollow airfoil-shaped struts, with turning vanes placed at the top, middle, and bottom of each strut. The TAD is attached to the other components of the pump via bolts passing through 14 of the 22 struts. Of the remaining 8 struts, four are equally spaced (at 90-deg intervals) and contain a cooling tube through which liquid hydrogen passes on its way to cool the shaft bearing assemblies. The remaining 4 struts are empty. One of the pump units in the certification test series was destructively examined after 22 test firings. Substantial axial cracking was found in two of the struts that contain cooling tubes. None of the other 20 struts showed any sign of internal cracking. This unusual low cycle thermal fatigue behavior within the two cooling-tube struts is the focus of this study.
Gourlay-Francé, Catherine; Lorgeoux, Catherine; Tusseau-Vuillemin, Marie-Hélène
2008-11-01
Semipermeable membrane devices (SPMDs) previously spiked with performance reference compounds were exposed in wastewater. After 6 days of exposure, 13 polycyclic aromatic hydrocarbons (PAHs) were quantified in SPMDs. Exchange rate constants and time-weighted average (TWA) concentrations of SPMD-available PAHs in water were calculated. The bias of using SPMDs to estimate an actual TWA concentration if the concentration in water fluctuates, as can be expected in wastewater, was studied with numerical simulations. The bias increased with the exchange rate constant. However, most exchange rate constants evaluated in SPMDs exposed in wastewater were small enough for SPMDs to estimate a TWA concentration of PAHs even when the water concentration varied. TWA-SPMD-available concentrations were always below total dissolved (operationally defined as <0.7 µm) concentrations, indicating that part of the dissolved PAHs was not available for sampling. In situ partitioning coefficients K(DOC) were computed and found to be slightly higher than data from the literature. This confirms that only truly dissolved PAHs should be sampled by SPMDs in wastewater. PMID:18768206
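The bias mechanism described above — a fast-exchanging compound weights recent concentrations more heavily — can be sketched with a first-order exchange model. This is a generic illustration under assumed units and rate constants, not the authors' numerical simulation:

```python
import numpy as np

def spmd_twa_estimate(cw, ke, K=1.0, dt=1.0):
    """TWA water concentration inferred from a sampler that follows
    first-order exchange dCs/dt = ke*(K*cw - Cs), corrected for the
    approach to equilibrium over the deployment time."""
    cs = 0.0
    for c in cw:                       # explicit Euler integration
        cs += ke * (K * c - cs) * dt
    T = len(cw) * dt
    return cs / (K * (1.0 - np.exp(-ke * T)))

steady = np.full(60, 2.0)                            # constant exposure
pulsed = np.r_[np.full(50, 1.0), np.full(10, 11.0)]  # late pollution pulse
# Constant exposure: the estimate matches the true TWA for any ke.
# Late pulse with fast exchange (large ke): the estimate overshoots the
# true TWA because the sampler has partly "forgotten" the early period.
```

The same run with a small ke shows only a modest bias, matching the abstract's finding that slowly exchanging SPMDs remain integrative even under fluctuating concentrations.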
Juarez-Galan, Juan M; Valor, Ignacio
2009-04-10
A new cryogenic integrative air sampler (patent application number 08/00669), able to overcome many of the limitations in current volatile organic compounds and odour sampling methodologies, is presented. The sample is spontaneously collected in a universal way at 15 mL/min, selectively dried (reaching up to 95% of moisture removal) and stored under cryogenic conditions. The sampler performance was tested under time weighted average (TWA) conditions, sampling 100 L of air over 5 days for determination of NH(3), H(2)S, and benzene, toluene, ethylbenzene and xylenes (BTEX) in the ppm(v) range. Recovery was 100% (statistically) for all compounds, with a concentration factor of 5.5. Furthermore, an in-field evaluation was done by monitoring the TWA immission levels of BTEX and dimethylethylamine (ppb(v) range) in an urban area with the developed technology and comparing the results with those monitored with a commercial graphitised charcoal diffusive sampler. The results obtained showed a good statistical agreement between the two techniques. PMID:19230895
Nuttall; Wilson
2000-11-01
Realistic simulations with spatially coherent noise have been run in order to compare the performance of adaptive beamforming (ABF), inverse beamforming (IBF), and conventional beamforming (CBF) for the case of finite-averaging times, where the actual spatial coherence of the acoustic field, or covariance matrix, is not known a priori, but must be estimated. These estimation errors cause large errors in the ABF estimate of the directionality of the acoustic field, partly because ABF is a highly nonlinear algorithm. In addition, it is shown that ABF is fundamentally limited in its suppression capability at very low frequency (VLF), based on the sidelobe level of the conventional beampattern in the direction of the noise interferer [G. L. Mohnkern, "Effects of Errors and Limitations on Interference Suppression," NOSC Technical Document 1478, Naval Ocean Systems Center (1989)]. The simulations include a low-level plane wave signal of interest, a stronger noise plane wave interferer, and spatially random background noise. Both IBF and ABF performed significantly better than CBF, and IBF's performance was slightly better than ABF's performance. The performances of IBF and the ABF algorithm, the minimum variance distortionless response (MVDR) [A. H. Nuttall and D. W. Hyde, "Unified Approach to Optimum and Suboptimum Processing for Arrays," USL Report Number 992, Naval Underwater Systems Center, New London, CT (22 April 1969)] were recently compared independently [J. S. D. Solomon, A. J. Knight, and M. V. Greening, "Sonar Array Signal Processing for Sparse Linear Arrays," Defense Science and Technology Organization (DSTO) Technical Report (June 1999)] using measured data, with the result that IBF outperformed MVDR. This result is significant because MVDR requires orders of magnitude more processing power than IBF or CBF. PMID:11108366
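The covariance-estimation issue the simulations address can be reproduced in a few lines: with finite snapshots the sample covariance is noisy, and the highly nonlinear adaptive (MVDR) weights are far more sensitive to that error than the conventional beamformer. A generic narrowband sketch (array geometry, angles, and levels are our assumptions, not the paper's scenario):

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, d = 16, 200, 0.5            # sensors, snapshots, spacing (wavelengths)

def steer(theta_deg):
    """Plane-wave steering vector for a uniform line array."""
    m = np.arange(M)
    return np.exp(2j * np.pi * d * m * np.sin(np.radians(theta_deg)))

# weak signal of interest at -40 deg, strong interferer at 20 deg, white noise
X = (np.outer(steer(-40), 0.3 * (rng.standard_normal(K) + 1j * rng.standard_normal(K)))
     + np.outer(steer(20), 3.0 * (rng.standard_normal(K) + 1j * rng.standard_normal(K)))
     + (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2))
R = X @ X.conj().T / K                         # finite-average covariance estimate
Ri = np.linalg.inv(R + 1e-3 * np.trace(R).real / M * np.eye(M))  # diagonal loading

angles = np.linspace(-90, 90, 361)
cbf = np.array([(steer(a).conj() @ R @ steer(a)).real / M**2 for a in angles])
mvdr = np.array([1.0 / (steer(a).conj() @ Ri @ steer(a)).real for a in angles])
```

Both spectra peak at the strong interferer; shrinking K degrades the MVDR estimate much faster than the CBF one, which is the finite-averaging effect studied in the paper.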
NASA Astrophysics Data System (ADS)
Morán-López, J. T.; Schilling, O.
2014-05-01
Turbulent mixing generated by shock-driven acceleration of a perturbed interface is simulated using a new multi-component Reynolds-averaged Navier-Stokes (RANS) model closed with a two-equation model. The model is implemented in a hydrodynamics code using a third-order weighted essentially non-oscillatory finite-difference method for the advection terms and a second-order central difference method for the gradients in the source and diffusion terms. In the present reshocked Richtmyer-Meshkov instability and mixing study, an incident shock is generated in air and progresses into a sulfur hexafluoride test section. The time evolution of the predicted mixing layer widths corresponding to six shock tube test section lengths is compared with experimental measurements and three-dimensional multi-mode numerical simulations. The mixing layer widths are also compared with the analytical self-similar power-law solution of the simplified model equations prior to reshock. A set of model coefficients and initial conditions specific to these six experiments is established, for which the widths before and after reshock agree very well with experimental and numerical simulation data. A second set of general coefficients that accommodates a broader range of incident shock Mach numbers, Atwood numbers, and test section lengths is also established by incorporating additional experimental data and previous RANS modeling. Terms in the budgets of the turbulent kinetic energy and dissipation rate equations are examined to evaluate the relative importance of turbulence production, dissipation and diffusion mechanisms during mixing. Convergence results for the mixing layer widths, mean fields, and turbulent fields under grid refinement are presented for each of the cases.
P. Welch
1967-01-01
The use of the fast Fourier transform in power spectrum analysis is described. Principal advantages of this method are a reduction in the number of computations and in required core storage, and convenient application in nonstationarity tests. The method involves sectioning the record and averaging modified periodograms of the sections.
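The sectioned-and-averaged ("modified periodogram") estimate can be sketched directly with the FFT. This is a simplified version under standard choices (Hann window, 50% overlap) that the one-paragraph description does not fix:

```python
import numpy as np

def welch_psd(x, fs, nperseg=256):
    """Average modified periodograms of overlapping, windowed sections."""
    win = np.hanning(nperseg)
    step = nperseg // 2                               # 50% overlap
    segs = [x[i:i + nperseg] * win
            for i in range(0, len(x) - nperseg + 1, step)]
    pxx = np.mean([np.abs(np.fft.rfft(s))**2 for s in segs], axis=0)
    pxx /= fs * np.sum(win**2)                        # periodogram scaling
    freqs = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return freqs, pxx

# 50 Hz tone in unit-variance noise: the averaged estimate resolves the peak
rng = np.random.default_rng(0)
fs, n = 1000.0, 8192
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 50 * t) + rng.standard_normal(n)
f, p = welch_psd(x, fs)
```

Averaging over the ~63 sections trades frequency resolution for a large reduction in the variance of the spectral estimate, which is the core of the method.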
Behera, H S; Panda, Bijayalakshmi; 10.5120/2037-2648
2011-01-01
The main objective of the paper is to improve the Round Robin (RR) algorithm using dynamic ITS by coalescing it with the Shortest Remaining Time Next (SRTN) algorithm, thus reducing the average waiting time, average turnaround time and the number of context switches. The original time slice has been calculated for each process based on its burst time. This is mostly suited to soft real-time systems, where meeting deadlines is desirable to increase performance. The advantage is that processes that are closer to their remaining completion time will get more chances to execute and leave the ready queue. This will reduce the number of processes in the ready queue by knocking out short jobs relatively faster, in the hope of reducing the average waiting time, turnaround time and number of context switches. This paper improves the algorithm [8], and the experimental analysis shows that the proposed algorithm performs better than algorithms [6] and [8] when the processes are having an increasing order, decreasing order an...
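The idea — recompute the time slice each pass from the shortest remaining burst so short jobs drain from the ready queue quickly — can be sketched as follows. This is our simplified reading (all processes arriving at t=0, no context-switch cost), not the paper's exact algorithm:

```python
from collections import deque

def hybrid_round_robin(bursts):
    """Round Robin whose quantum each pass is the shortest remaining burst.
    Returns (average waiting time, average turnaround time)."""
    remaining = dict(enumerate(bursts))
    ready = deque(sorted(remaining, key=remaining.get))  # short jobs first
    finish, t = {}, 0
    while ready:
        quantum = min(remaining[p] for p in ready)       # dynamic time slice
        p = ready.popleft()
        run = min(quantum, remaining[p])
        t += run
        remaining[p] -= run
        if remaining[p] == 0:
            finish[p] = t                                # process completes
        else:
            ready.append(p)                              # preempted, requeued
    n = len(bursts)
    turnaround = [finish[p] for p in range(n)]           # arrival time is 0
    waiting = [finish[p] - bursts[p] for p in range(n)]
    return sum(waiting) / n, sum(turnaround) / n
```

For bursts of 3, 5 and 7 time units this yields average waiting 11/3 and average turnaround 26/3 — the shortest-job-first optimum for simultaneous arrivals, with no extra context switches over plain FCFS ordering.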
Convergence of polynomial ergodic averages
Bernard Host; Bryna Kra
2005-01-01
We prove the L2 convergence for an ergodic average of a product of functions evaluated along polynomial times in a totally ergodic system. For each set of polynomials, we show that there is a particular factor, which is an inverse limit of nilsystems, that controls the limit behavior of the average. For a general system, we prove the convergence for
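In symbols, the averages in question (our transcription of the theorem's standard statement, with $(X,\mu,T)$ a totally ergodic measure-preserving system and $p_1,\dots,p_k$ integer polynomials) are

```latex
\lim_{N\to\infty} \frac{1}{N}\sum_{n=1}^{N}
  f_1\bigl(T^{p_1(n)}x\bigr)\, f_2\bigl(T^{p_2(n)}x\bigr)\cdots f_k\bigl(T^{p_k(n)}x\bigr),
```

with convergence taken in $L^2(\mu)$ for bounded measurable functions $f_1,\dots,f_k$.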
NASA Technical Reports Server (NTRS)
Lakshminarayana, B.; Luo, J.
1993-01-01
The objective of this research is to develop turbulence models to predict the flow and heat transfer fields dominated by the curvature effect, such as those encountered in turbine cascades and turnaround ducts. A Navier-Stokes code has been developed using an explicit Runge-Kutta method with a two-layer k-epsilon/ARSM (Algebraic Reynolds Stress Model), Chien's Low Reynolds Number (LRN) k-epsilon model and Coakley's LRN q-omega model. The near-wall pressure strain correlation term was included in the ARSM. The formulation is applied to Favre-averaged N-S equations, and no thin-layer approximations are made in either the mean flow or turbulence transport equations. Anisotropic scaling of artificial dissipation terms was used. A locally variable timestep was also used to improve convergence. Detailed comparisons were made between computations and data measured in a turbine cascade by Arts et al. at the Von Karman Institute. The surface pressure distributions and wake profiles were predicted well by all the models. The blade heat transfer is predicted well by the k-epsilon/ARSM model, as well as the k-epsilon model. It is found that the onset of boundary layer transition on both surfaces is highly dependent upon the level of local freestream turbulence intensity, which is strongly influenced by the streamline curvature. Detailed computation of the flow in the turnaround duct has been carried out and validated against the data of Monson as well as Sandborn. The computed results at various streamwise locations on both the concave and convex sides are compared with flow and turbulence data, including the separation zone on the inner wall. The k-epsilon/ARSM model yielded relatively better results than the two-equation turbulence models. A detailed assessment of the turbulence models has been made with regard to their applicability to curved flows.
Temperature: Decadal Averages Map
NSDL National Science Digital Library
California Energy Commission
This interactive map allows students to experiment with decadal average temperature projections. Overall temperatures are expected to rise throughout the century and this tool demonstrates those projected measurements.
Piotr Lubinski
2004-01-27
The methods of obtaining the average spectral shape in a low statistics regime are presented. Different approaches to averaging are extensively tested with simulated spectra, based on the ASCA responses. The issue of binning up the spectrum before fitting is discussed together with the choice of statistic used to model the spectral shape. The best results are obtained with methods in which input data are represented by probability density functions. Application of weights, representing the coverage between the input and output bins, slightly improves the resolution of averaging.
Determining GPS average performance metrics
NASA Technical Reports Server (NTRS)
Moore, G. V.
1995-01-01
Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego as over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include the average number of GPS vehicles visible; relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions; and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.
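The Tierra del Fuego claim follows from orbital geometry: a circular orbit at 55-deg inclination spends far more time per degree of latitude near ±55 deg, where its ground track turns around, than near the tropics. A quick numerical check of the geometry only (not the report's dwell-time tool):

```python
import numpy as np

inc = np.radians(55.0)                        # GPS orbital inclination
# argument of latitude is uniform in time for a circular orbit
u = np.linspace(0.0, 2.0 * np.pi, 1_000_000)
lat = np.degrees(np.arcsin(np.sin(inc) * np.sin(u)))

def dwell(lo, hi):
    """Fraction of time the sub-satellite point lies in the |latitude| band."""
    return np.mean((np.abs(lat) >= lo) & (np.abs(lat) < hi))

# Band at the turnaround latitude (Tierra del Fuego, ~54-55 deg) versus a
# same-width band at Hawaii's latitude (~21-22 deg)
ratio = dwell(54, 55) / dwell(21, 22)          # comfortably exceeds 4
```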
NASA Astrophysics Data System (ADS)
Hulot, G.; Khokhlov, A.; Johnson, C. L.
2012-12-01
It is well known that the geometry of the recent time-averaged paleomagnetic field (TAF) is very close to that of a geocentric axial dipole (GAD). Yet numerous numerical dynamo simulations show that some departures from such a simple geometry are to be expected, not least because of the heterogeneous thermal core-mantle boundary conditions that the convecting mantle imposes on the geodynamo. Indeed, many TAF models recovered from averaging lava flow paleomagnetic directional data (the most numerous and reliable of all data) suggest this is the case. However, assessing the significance of such minor departures from the GAD is particularly challenging, because non-linear directional data are sensitive not only to the time-averaged component of the field, but also to its time-fluctuating component, known as the paleosecular variation (PSV). This means that in addition to data errors, PSV also must be taken into account when assessing any lava flow directional data based claims of departures of the TAF from the GAD. Furthermore, because of limited age information for these data, it is necessary to assess departures from the GAD by resorting to a statistical approach. We report recent progress using an approach we have suggested and further developed (Khokhlov et al., Geophysical Journal International, 2001, 2006) to test the compatibility of combined time-averaged (TAF) and paleosecular variation (PSV) field models against any lava flow paleomagnetic database, assuming that these TAF and PSV models are defined within the Giant Gaussian Process statistical framework. In particular, we show how sensitive statistical measures of the compatibility of a combined set of TAF and PSV models with a given directional database can be defined. These measures can be used to test published TAF and PSV models with updated 0-5 Ma lava flow paleomagnetic data sets.
They also lay the groundwork for designing inverse methods better suited to seek the minimum required departure of the TAF from the GAD.
Yasunari Matsuno; Yoshihiro Adachi; Yasushi Kondo
2006-01-01
- Preamble. In this series of two papers, a methodology to calculate the average number of times a material is used in a society from cradle to grave is presented and applied to allocation of environmental impact of virgin material. Part 1 focuses on methodology development and shows how the methodology works with hypothetical examples of material flows. Part 2 presents
Jiao, S; Maltecca, C; Gray, K A; Cassady, J P
2014-04-28
Efficient use of feed resources has become a clear challenge for the US pork industry as feed costs continue to be the largest variable expense. The availability of the Illumina Porcine60K BeadChip has greatly facilitated whole-genome association studies to identify chromosomal regions harboring genes influencing those traits. The current study aimed at identifying genomic regions associated with variation in feed efficiency and several production traits in a Duroc terminal sire population, including ADFI, ADG, feed conversion ratio (FCR), residual feed intake (RFI), real-time ultrasound back fat thickness (BF), muscle depth (MD), intramuscular fat content (IMF), birth weight (BW at birth) and weaning weight (BW at weaning). Single-trait association analyses were performed using Bayes-B models with 35,140 SNPs on 18 autosomes after quality control. Significance of non-overlapping 1-Mb windows (n = 2,380) was tested across 3 QTL inference methods: posterior distribution of window variances from Markov chain Monte Carlo, naive Bayes factor and non-parametric bootstrapping. Genes within the informative QTL regions for the traits were annotated. A 4-Mb region on SSC 1, approximately 8 Mb upstream of the MC4R gene, was significantly associated with ADFI, ADG and BF, where SOCS6 and DOK6 are proposed as the most likely candidate genes. Another region affecting BW at weaning was identified on SSC 4 (84-85 Mb), harboring genes previously found to influence both human and cattle height: PLAG1, CHCHD7, RDHE2 (or SDR16C5), MOS, RPS20, LYN and PENK. No QTL were identified for RFI, IMF and BW at birth. In conclusion, we have identified several genomic regions associated with traits affecting nutrient utilization that could be considered for future genomic prediction to improve feed utilization. PMID:24778334
NSDL National Science Digital Library
Roberts, Lila F.
2005-04-21
This demo provides students with a concrete understanding of the average rate of change for physical situations and for functions described in tabular or graphic form. Animations in gif and Quicktime format are available, as well as Excel programs for classroom or student use.
Freethey, G.W.; Spangler, L.E.; Monheiser, W.J.
1994-01-01
A 48-square-mile area in the southeastern part of the Salt Lake Valley, Utah, was studied to determine if generalized information obtained from geologic maps, water-level maps, and drillers' logs could be used to estimate hydraulic conductivity, porosity, and slope of the potentiometric surface: the three properties needed to calculate average linear velocity of ground water. Estimated values of these properties could be used by water-management and regulatory agencies to compute values of average linear velocity, which could be further used to estimate travel time of ground water along selected flow lines, and thus to determine wellhead protection areas around public-supply wells. The methods used to estimate the three properties are based on assumptions about the drillers' descriptions, the depositional history of the sediments, and the boundary conditions of the hydrologic system. These assumptions were based on geologic and hydrologic information determined from previous investigations. The reliability of the estimated values for hydrologic properties and average linear velocity depends on the accuracy of these assumptions. Hydraulic conductivity of the principal aquifer was estimated by calculating the thickness-weighted average of values assigned to different drillers' descriptions of material penetrated during the construction of 98 wells. Using these 98 control points, the study area was divided into zones representing approximate hydraulic-conductivity values of 20, 60, 100, 140, 180, 220, and 250 feet per day. This range of values is about the same range used in developing a ground-water flow model of the principal aquifer in the early 1980s.
Porosity of the principal aquifer was estimated by compiling the range of porosity values determined or estimated during previous investigations of basin-fill sediments, and then using five different values ranging from 15 to 35 percent to delineate zones in the study area that were assumed to be underlain by similar deposits. Delineation of the zones was based on the depositional history of the area and the distribution of sediments shown on a surficial geologic map. Water levels in wells were measured twice in 1990: during late winter, when ground-water withdrawals were the least and water levels the highest, and again in late summer, when ground-water withdrawals were the greatest and water levels the lowest. These water levels were used to construct potentiometric-contour maps and subsequently to determine the variability of the slope in the potentiometric surface in the area. Values for the three properties, derived from the described sources of information, were used to produce a map showing the general distribution of average linear velocity of ground water moving through the principal aquifer of the study area. Velocities derived ranged from 0.06 to 144 feet per day, with a median of about 3 feet per day. Values were slightly faster for late summer 1990 than for late winter 1990, mainly because increased withdrawal of water during the summer created slightly steeper hydraulic-head gradients between the recharge area near the mountain front and the well fields farther to the west. The fastest average linear-velocity values were located at the mouth of Little Cottonwood Canyon and south of Dry Creek near the mountain front, where the hydraulic conductivity was estimated to be the largest because the drillers described the sediments as predominantly clean and coarse grained. Both of these areas also had steep slopes in the potentiometric surface.
Other areas where average linear velocity was fast included small areas near pumping wells where the slope in the potentiometric surface was locally steepened. No apparent relation between average linear velocity and porosity could be seen in the mapped distributions of these two properties. Calculation of travel time along a flow line to a well in the southwestern part of the study area during the sum
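The velocity calculation itself combines the three estimated properties through Darcy's law: average linear velocity = (hydraulic conductivity × hydraulic gradient) / effective porosity. A one-line sketch with values inside the report's stated ranges for conductivity and porosity (the specific gradient is our illustrative assumption):

```python
def avg_linear_velocity(k_ft_per_day, gradient, porosity):
    """Darcy flux divided by effective porosity, in feet per day."""
    return k_ft_per_day * gradient / porosity

# e.g. a coarse-grained zone: K = 250 ft/day, slope 0.01, porosity 0.25
v = avg_linear_velocity(250.0, 0.01, 0.25)
```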
IgG/anti-IgG immunoassay based on a turn-around point long period grating
NASA Astrophysics Data System (ADS)
Chiavaioli, F.; Biswas, P.; Trono, C.; Giannetti, A.; Tombelli, S.; Bandyopadhyay, S.; Basumallick, N.; Dasgupta, K.; Baldini, F.
2014-02-01
Long period fiber gratings (LPFGs) have been proposed as label-free optical biosensors for a few years. Refractive index changes, which modify the fiber transmission spectrum, are used for evaluating a biochemical interaction that occurs along the grating region. A turn-around point (TAP) LPFG was manufactured to enhance the refractive index sensitivity of these devices. Considering the simplicity and speed of the process with respect to the silanization procedure, the functionalization of the fiber was carried out with Eudragit L100 copolymer. An IgG/anti-IgG immunoassay was implemented for studying the antigen/antibody interaction. A limit of detection lower than 100 µg L-1 was achieved. Based on the same model assay, we compared the resonance wavelength shifts during the injection of 10 mg L-1 anti-IgG antigen between the TAP LPFG and a standard non-TAP one, in which the coupling occurs with a lower-order cladding mode, demonstrating the performance improvement of TAP LPFG-based biosensors.
Towards sensitive label-free immunosensing by means of turn-around point long period fiber gratings.
Chiavaioli, F; Biswas, P; Trono, C; Bandyopadhyay, S; Giannetti, A; Tombelli, S; Basumallick, N; Dasgupta, K; Baldini, F
2014-10-15
Long period fiber gratings have been effectively used in the field of biochemical sensing for several years. Compared to other well-known label-free optical approaches, long period gratings (LPGs) take advantage of the typical peculiarities of optical fibers. Coupling the propagating core mode with a high-order cladding mode near its turn-around point (TAP) was the strategy adopted to achieve good performance without additional coatings, except for the sensing and selective biolayer deposited on the fiber. Both the modeling and manufacturing of TAP LPGs are discussed. After the functionalization of the fiber surface with the deposition of a Eudragit L100 copolymer layer followed by immunoglobulin G (IgG) covalent immobilization, an IgG/anti-IgG bioassay was implemented along the grating region and the kinetics of the antibody/antigen interaction was analyzed. A quantitative comparison between a TAP LPG and a non-TAP LPG was carried out to highlight the improvement of the proposed immunosensor. The real effectiveness and feasibility of an LPG-based biosensor were demonstrated by using a complex matrix consisting of human serum, which also confirmed the specificity of the assay, and a limit of detection of 70 µg L(-1) (460 pM) was achieved. PMID:24835405
ERIC Educational Resources Information Center
Conrad, Jill K.
2013-01-01
This dissertation examines the levels of and relationships between technical leadership, adaptive leadership, and defined autonomy among Denver school leaders along with their combined effects on school growth gains over time. Thirty principals provided complete responses to an online survey that included existing scales for technical leadership,…
Galles, David
[Extraction residue from a tabular report on first-time full-time freshmen and athletes on athletic aid, broken down by race and gender; the table layout is unrecoverable.]
Galles, David
[Extraction residue from a second tabular report on first-time full-time freshmen and athletes on athletic aid, broken down by race and gender; the table layout is unrecoverable.]
Galles, David
[Extraction residue from a third tabular report on first-time full-time freshmen and athletes on athletic aid, broken down by race and gender; the table layout is unrecoverable.]
Americans' Average Radiation Exposure
NA
2000-08-11
We live with radiation every day. We receive radiation exposure from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation; it includes the radiation we get from plants, animals, and our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high doses doctors use to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.
Ensemble averaging of acoustic data
NASA Technical Reports Server (NTRS)
Stefanski, P. K.
1982-01-01
A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System, consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.
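The program's two core operations, point-by-point averaging of repeated records and a frequency-domain display, can be sketched in a few lines. This is an illustrative NumPy sketch with invented signal parameters, not the documented PDP-11 software:

```python
import numpy as np

def ensemble_average(records):
    # Point-by-point average of repeated records; uncorrelated noise
    # falls off roughly as 1/sqrt(number of records).
    return np.mean(records, axis=0)

def amplitude_spectrum(record, fs):
    # One-sided amplitude spectrum for the frequency-domain display.
    n = len(record)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amps = np.abs(np.fft.rfft(record)) / n
    return freqs, amps

# A 50 Hz tone buried in noise, sampled at 1 kHz (invented example):
fs, n = 1000, 1000
t = np.arange(n) / fs
rng = np.random.default_rng(0)
records = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal((64, n))
avg = ensemble_average(records)
freqs, amps = amplitude_spectrum(avg, fs)
```

Averaging 64 noisy records suppresses the uncorrelated noise by about a factor of 8, so the 50 Hz line stands out clearly in the averaged spectrum.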
Snowpack: Decadal Averages Map
NSDL National Science Digital Library
California Energy Commission
This is an interactive map of California and the Sierra Nevada mountains, showing projected variations in water stored in snowpack, from 1950 to 2090, assuming low or high emission scenarios over that period of time. Interactive can be adjusted to show different months of the year and various climate models, graphed by site.
Average-cost based robust structural control
NASA Technical Reports Server (NTRS)
Hagood, Nesbitt W.
1993-01-01
A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.
Averaging Robertson-Walker cosmologies
Brown, Iain A.; Robbers, Georg (Institut für Theoretische Physik, Philosophenweg 16, 69120 Heidelberg, Germany); Behrend, Juliane (Institut für Theoretische Physik, Albert-Einstein-Allee 11, 89069 Ulm, Germany). E-mail: I.Brown@thphys.uni-heidelberg.de, G.Robbers@thphys.uni-heidelberg.de, Juliane.Behrend@uni-ulm.de
2009-04-15
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.
ERIC Educational Resources Information Center
Olson, Cathy Applefeld
2010-01-01
It's true that each school and each classroom presents its own unique challenges. But Kevin Mixon--who in October traded his instrumental music instructor duties at Blodgett School on the west side of Syracuse, New York, for a job as fine arts coordinator for Syracuse City Schools--knows from experience that some schools are far more challenging…
ERIC Educational Resources Information Center
Hood, Lucy
2006-01-01
This article describes the Soddy-Daisy High School in southeastern Tennessee. It used to be that vocational training and a focus on academic studies were considered completely different means of education. But in Soddy-Daisy, Tennessee, the two go hand in hand. Eric Thomas and his brother Mark, teach side by side in adjacent rooms, where computer…
ERIC Educational Resources Information Center
Purinton, Ted; Azcoitia, Carlos
2011-01-01
Chilean educator and poet Gabriela Mistral warned that children's needs are immediate and comprise more than just academic concerns. Implementing comprehensive community schools is an increasingly successful approach to taking her warning to heart, particularly in neighborhoods with large immigrant populations. The reason is simple: education does…
Arithmetic averaging: A versatile technique for smoothing and trend removal
Clark, E.L.
1993-12-31
Arithmetic averaging is simple, stable, and can be very effective in attenuating the undesirable components in a complex signal, thereby providing smoothing or trend removal. An arithmetic average is easy to calculate. However, the resulting modifications to the data, in both the time and frequency domains, are not well understood by many experimentalists. This paper discusses the following aspects of averaging: (1) types of averages -- simple, cumulative, and moving; and (2) time and frequency domain effects of the averaging process.
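The three types of average the paper lists (simple, cumulative, and moving) can be stated compactly. A minimal NumPy sketch, not the author's code, with an invented noisy test signal:

```python
import numpy as np

def simple_average(x):
    # Single value: the arithmetic mean of the whole record.
    return float(np.mean(x))

def cumulative_average(x):
    # Running mean of all samples seen so far.
    return np.cumsum(x) / np.arange(1, len(x) + 1)

def moving_average(x, n):
    # Boxcar (moving) average of width n; attenuates components
    # whose period is shorter than the window, smoothing the signal.
    kernel = np.ones(n) / n
    return np.convolve(x, kernel, mode="valid")

# A ramp (trend) plus a fast oscillation: the moving average
# suppresses the oscillation while preserving the trend.
t = np.linspace(0.0, 1.0, 1000)
signal = t + 0.1 * np.sin(2 * np.pi * 50 * t)
smoothed = moving_average(signal, 20)
```

The moving average acts as a crude low-pass filter in the frequency domain, which is exactly the "not well understood" side effect the paper discusses.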
The Hubble rate in averaged cosmology
Umeh, Obinna; Larena, Julien; Clarkson, Chris, E-mail: umeobinna@gmail.com, E-mail: julien.larena@gmail.com, E-mail: chris.clarkson@uct.ac.za [Astrophysics, Cosmology and Gravity Center and Department of Mathematics and Applied Mathematics, University of Cape Town, Rondebosch 7701 (South Africa)
2011-03-01
The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H_0, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate and the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions.
Large Eddy/Reynolds-Averaged Navier-Stokes Simulations of CUBRC Base Heating Experiments
NASA Technical Reports Server (NTRS)
Salazar, Giovanni; Edwards, Jack R.; Amar, Adam J.
2012-01-01
Even with great advances in computational techniques and computing power during recent decades, the modeling of unsteady separated flows, such as those encountered in the wake of a re-entry vehicle, continues to be one of the most challenging problems in CFD. Of most interest to the aerothermodynamics community is accurately predicting transient heating loads on the base of a blunt body, which would result in reduced uncertainties and safety margins when designing a re-entry vehicle. However, the prediction of heat transfer can vary widely depending on the turbulence model employed. Therefore, selecting a turbulence model which realistically captures as much of the flow physics as possible will yield improved predictions. Reynolds-Averaged Navier-Stokes (RANS) models have become increasingly popular due to their good performance with attached flows and the relatively quick turnaround time to obtain results. However, RANS methods cannot accurately simulate unsteady separated wake flows, and running direct numerical simulation (DNS) on such complex flows is currently too computationally expensive. Large Eddy Simulation (LES) techniques allow for the computation of the large eddies, which contain most of the Reynolds stress, while modeling the smaller (subgrid) eddies. This results in models which are more computationally expensive than RANS methods, but not as prohibitive as DNS. By complementing an LES approach with a RANS model, a hybrid LES/RANS method resolves the larger turbulent scales away from surfaces with LES and switches to a RANS model inside boundary layers. As pointed out by Bertin et al., this type of hybrid approach has shown a lot of promise for predicting turbulent flows, but work is needed to verify that these models work well in hypersonic flows. The very limited amount of flight and experimental data available presents an additional challenge for researchers.
Recently, a joint study by NASA and CUBRC has focused on collecting heat transfer data on the backshell of a scaled model of the Orion Multi-Purpose Crew Vehicle (MPCV). Heat augmentation effects due to the presence of cavities and RCS jet firings were also investigated. This effort produced a new set of high-quality data that can be used to assess the performance of CFD methods. In this work, a hybrid LES/RANS model developed at North Carolina State University (NCSU) is used to simulate several runs from these experiments and to evaluate the performance of high-fidelity methods as compared to more typical RANS models.
Kuss Middle School: Expanding Time to Accelerate School Improvement
ERIC Educational Resources Information Center
Massachusetts 2020, 2012
2012-01-01
In 2004, Kuss Middle School became the first school declared "Chronically Underperforming" by the state of Massachusetts. But by 2010, Kuss had transformed itself into a model for schools around the country seeking a comprehensive turnaround strategy. Kuss is using increased learning time as the primary catalyst to accelerate learning, deepen…
Interlibrary Loan Time and Motion Study, Colorado Western Slope.
ERIC Educational Resources Information Center
Thomas, Sharon D.
This report, which investigates turnaround time for interlibrary loans, presents a 1-month study of the interlibrary loan (ILL) process operating in the Western Slope areas of Colorado during 1980. It comprises introductory material presenting the importance, scope and limitations of the study, problem statement, hypothesis and term definitions; a…
Averaging Models: Parameters Estimation with the R-Average Procedure
ERIC Educational Resources Information Center
Vidotto, G.; Massidda, D.; Noventa, S.
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…
Instrument to average 100 data sets
NASA Technical Reports Server (NTRS)
Tuma, G. B.; Birchenough, A. G.; Rice, W. J.
1977-01-01
An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points, which are displayed on an oscilloscope screen to facilitate recording, and is available in real time. Input can be any parameter expressed as a ±10-volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. The instrument has been used successfully on a 1975 Chevrolet V8 engine and on a Continental 6-cylinder aircraft engine. While it was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
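The instrument's basic operation, a point-by-point average of a cyclically varying waveform over 100 cycles at 2048 points per cycle, can be sketched as follows. This is a hypothetical NumPy illustration, not the instrument's hardware logic, and the synthetic pressure-like traces are invented:

```python
import numpy as np

POINTS_PER_CYCLE = 2048   # resolution quoted for the instrument
N_CYCLES = 100            # number of cycles averaged

def average_curve(cycles):
    """Point-by-point average of a stack of digitized cycles.

    `cycles` is an (N_CYCLES, POINTS_PER_CYCLE) array in which each
    row is one engine cycle sampled at the same crank positions.
    """
    return cycles.mean(axis=0)

# Synthetic traces with cycle-to-cycle variation (invented data):
rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, POINTS_PER_CYCLE)
cycles = np.sin(theta) + 0.2 * rng.standard_normal((N_CYCLES, POINTS_PER_CYCLE))
avg = average_curve(cycles)   # noise reduced by roughly sqrt(100) = 10x
```

Averaging over 100 cycles leaves the repeatable part of the waveform intact while shrinking the cycle-to-cycle variation by about a factor of ten.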
Averaging of Temporal Memories by Rats
Swanton, Dale N.; Gooch, Cynthia M.; Matell, Matthew S.
2009-01-01
Rats were trained on a mixed fixed-interval schedule in which stimulus A (tone or light) indicated food availability after 10 s and stimulus B (the other stimulus) indicated food availability after 20 s. Testing consisted of non-reinforced probe trials in which the stimulus was A, B, or the compound AB. On single-stimulus trials, rats responded with a peak of activity around the programmed reinforced time. On compound-stimulus trials, rats showed a single scalar peak of responding at a time midway between those for stimulus A and B. These results suggest that when provided with discrepant information regarding the temporal predictability of reinforcement, rats compute an average of the scheduled reinforcement times for the A and B stimuli and use this average to generate an expectation of reward for the compound stimuli. PMID:19594288
Topological quantization of ensemble averages
Emil Prodan
2009-01-01
We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schrödinger operators. The generalization given here may be
Averaging of globally coupled oscillators
NASA Astrophysics Data System (ADS)
Swift, James W.; Strogatz, Steven H.; Wiesenfeld, Kurt
1992-03-01
We study a specific system of symmetrically coupled oscillators using the method of averaging. The equations describe a series array of Josephson junctions. We concentrate on the dynamics near the splay-phase state (also known as the antiphase state, ponies on a merry-go-round, or rotating wave). We calculate the Floquet exponents of the splay-phase periodic orbit in the weak-coupling limit, and find that all of the Floquet exponents are purely imaginary; in fact, all the Floquet exponents are zero except for a single complex conjugate pair. Thus, nested two-tori of doubly periodic solutions surround the splay-phase state in the linearized averaged equations. We numerically integrate the original system, and find startling agreement with the averaging results on two counts: The observed ratio of frequencies is very close to the prediction, and the solutions of the full equations appear to be either periodic or doubly periodic, as they are in the averaged equations. Such behavior is quite surprising from the point of view of generic dynamical systems theory: one expects higher-dimensional tori and chaotic solutions. We show that the functional form of the equations, and not just their symmetry, is responsible for this nongeneric behavior.
Bayesian Model Averaging: A Tutorial
Jennifer A. Hoeting; David Madigan; Adrian E. Raftery; Chris T. Volinsky
Standard statistical practice ignores model uncertainty. Data analysts typically select a model from some class of models and then proceed as if the selected model had generated the data. This approach ignores the uncertainty in model selection, leading to over-confident inferences and decisions that are more risky than one thinks they are. Bayesian model averaging (BMA) provides a coherent
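The central idea, weighting each model's prediction by its posterior probability rather than committing to one selected model, can be sketched from (approximate) log marginal likelihoods, e.g. BIC-based ones. A minimal illustrative sketch, assuming equal prior model probabilities; the numbers are invented:

```python
import numpy as np

def bma_weights(log_marginals):
    """Posterior model probabilities from log marginal likelihoods,
    assuming equal prior probability for each model."""
    lm = np.asarray(log_marginals, dtype=float)
    w = np.exp(lm - lm.max())    # subtract max for numerical stability
    return w / w.sum()

def bma_predict(predictions, log_marginals):
    # BMA point prediction: each model's prediction weighted by its
    # posterior probability, instead of trusting one selected model.
    w = bma_weights(log_marginals)
    return float(np.dot(w, predictions))

# Two models with equal evidence split the weight evenly; a third,
# much worse model contributes almost nothing to the prediction.
w = bma_weights([-10.0, -10.0, -30.0])
```

The averaged prediction inherits a wider, more honest uncertainty than any single selected model, which is the over-confidence problem the abstract describes.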
Average of Synthetic Exact Filters
David S. Bolme; Bruce A. Draper; J. Ross Beveridge
2009-01-01
This paper introduces a class of correlation filters called Average of Synthetic Exact Filters (ASEF). For ASEF, the correlation output is completely specified for each training image. This is in marked contrast to prior methods such as Synthetic Discriminant Functions (SDFs), which only specify a single output value per training image. Advantages of ASEF training include: insensitivity to over-fitting,
Average luminosity distance in inhomogeneous universes
Kostov, Valentin, E-mail: valentin@uchicago.edu [University of Chicago, Department of Physics, 5640 S. Ellis Ave., AAC 020, Chicago IL 60637 (United States)
2010-04-01
Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, rather than over all possible observers (cosmic averaging), and is thus more directly applicable to our observations. In contrast to previous studies, the averaging is exact, non-perturbative, and includes all non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese, which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. Voids aligned along a certain direction give rise to a distance modulus correction which increases with redshift and is caused by cumulative gravitational lensing. That correction is present even for small voids and depends on their density contrast, not on their radius. Averaging over all directions destroys the cumulative lensing correction even in a non-randomized simple cubic lattice of voids. At low redshifts, the average distance modulus correction does not vanish due to the peculiar velocities, despite the photon flux conservation argument. A formula for the maximal possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (a) have approximately constant densities in their interior and walls; and (b) are not in a deep nonlinear regime. The average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximal one after a single void diameter. That is traced to cancellations between the corrections from the fronts and backs of different voids.
The results obtained allow one to readily predict the redshift above which the direction-averaged fluctuation in the Hubble diagram falls below a required precision and suggest a method to extract the background Hubble constant from low redshift data without the need to correct for peculiar velocities.
Estimating averages from distributions of tone durations.
Schweickert, Richard; Han, Hye Joo; Yamaguchi, Motonori; Fortin, Claudette
2014-02-01
We examined whether estimating average duration was influenced by the distribution peak location. We presented participants with samples of various tone durations and then presented comparison tone durations. Participants judged whether each comparison duration was longer than the average sample duration. Estimates of the averages were inferred from the psychophysical functions. The durations were sampled from three distributions: one positively skewed, one symmetric, and one negatively skewed. In Experiment 1, every participant was presented with every distribution. Estimates of the averages were unbiased for the symmetric distribution but were biased toward the long tail of each skewed distribution. This would occur if participants combined the sample to be judged with the previous, irrelevant samples, or with the comparison durations. In Experiment 2, each participant was presented with samples from only one of the distributions. Estimates of the averages were still biased toward the long tails of the skewed distributions. This would occur if participants combined the sample to be judged with the comparison durations, which were the same for the three distributions. In Experiment 3, each participant was presented with only one distribution, and each distribution was tested with its own comparison durations, selected as percentiles from the distribution. The estimates were accurate for the smallest population mean (positively skewed distribution) but underestimated the larger means. These results could be explained by subjective shortening of the durations in memory, with a simple equation from scalar timing theory. This equation correctly predicted two results: The estimated averages were a linear function of the stimulus means, and the variances were a linear function of the squared stimulus means. Neither prediction was dependent on the skewness of the stimulus durations. PMID:24264515
Double-averaged velocity profiles over fixed dune shapes
NASA Astrophysics Data System (ADS)
McLean, Stephen R.; Nikora, Vladimir I.; Coleman, Stephen E.
2008-09-01
Spatially averaged profiles of time-averaged velocity, using integrals over thin horizontal slabs (Cartesian double average), are employed in characterizing the flow over fixed dune shapes. For comparison, the spatial averaging method of Smith and McLean (1977), which averages along lines at constant distance from the local bed elevation, is also investigated. The Cartesian double-averaged profiles of the inverse of the velocity shear are nearly constant below the crest elevation but increase rapidly above the crest level. This results in velocity profiles that increase linearly with distance from the bed below the crest. Above the crest it can be argued that the velocity increases logarithmically, but a power-law profile can also be argued. Spatially averaged eddy viscosity profiles are calculated by multiplying the average Reynolds stress by the inverse shear. The resulting profile is more complex than the uniform-flow counterpart.
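The Cartesian double average described, a time average at each grid point followed by a spatial average over a thin horizontal slab, can be sketched as follows. This is an illustrative sketch with an invented velocity field, not the authors' analysis code:

```python
import numpy as np

def double_average(u, z_index):
    """Cartesian double average at one height.

    `u` has shape (n_time, n_z, n_x): velocity sampled in time on a
    vertical/streamwise grid above the bed. Time-average at each
    point first, then average spatially over the thin horizontal
    slab at height index z_index.
    """
    time_avg = u.mean(axis=0)                # <u>(z, x): time average
    return float(time_avg[z_index].mean())   # slab (spatial) average

# Synthetic field: a log-like mean profile plus random fluctuations
# standing in for turbulence (invented parameters).
rng = np.random.default_rng(0)
z = np.arange(1, 9)
mean_profile = np.log(z)
u = mean_profile[None, :, None] + 0.1 * rng.standard_normal((500, 8, 16))
profile = np.array([double_average(u, k) for k in range(8)])
```

With enough samples in time and along the slab, the double-averaged profile recovers the underlying mean profile; the bed-following average of Smith and McLean would differ only in which points are grouped into each "slab."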
Wm. G. Hoover; Carol G. Hoover; Janka Petravic
2008-07-19
Homogeneous shear flows (with constant strain rate du/dy) are generated with the Doll's and Sllod algorithms and compared to corresponding inhomogeneous boundary-driven flows. We use one-, two-, and three-dimensional smooth-particle weight functions for computing instantaneous spatial averages. The nonlinear stress differences are small, but significant, in both two and three space dimensions. In homogeneous systems the sign and magnitude of the shear-plane stress difference, P(xx) - P(yy), depend on both the thermostat type and the chosen shear-flow algorithm. The Doll's and Sllod algorithms predict opposite signs for this stress difference, with the Sllod approach definitely wrong, but somewhat closer to the (boundary-driven) truth. Neither of the homogeneous shear algorithms predicts the correct ordering of the kinetic temperatures, T(xx) > T(zz) > T(yy).
Detrending moving average algorithm for multifractals
Gu, Gao-Feng (DOI: 10.1103/PhysRevE.82.011136)
2010-01-01
The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of non-stationary time series and the long-range correlations of fractal surfaces, which contains a parameter $\\theta$ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, which is a generalization of the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward ($\\theta=0$), centered ($\\theta=0.5$), and forward ($\\theta=1$) detrending windows. We find that the estimated multifractal scaling exponent $\\tau(q)$ and the singularity spectrum $f(\\alpha)$ are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, which provides the most accurate estimates of the scaling ...
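The DMA fluctuation function can be sketched compactly, with the parameter θ selecting backward, centered, or forward detrending windows as in the abstract. This is an illustrative implementation with simplified edge handling, not the authors' code:

```python
import numpy as np

def dma_fluctuation(x, n, theta=0.0):
    """DMA fluctuation function F(n) for one window size n.

    theta = 0 gives the backward window (reported most accurate
    above), theta = 0.5 the centered window, theta = 1 the forward
    window.
    """
    y = np.cumsum(x - np.mean(x))             # profile of the series
    kernel = np.ones(n) / n
    ma = np.convolve(y, kernel, mode="full")  # every window placement
    # ma[i + shift] is the mean of y over a window whose position
    # relative to i is set by theta (shift = 0: trailing window).
    shift = int(round((n - 1) * theta))
    trend = ma[shift : shift + len(y)]
    resid = y - trend
    core = resid[n - 1 : len(y) - (n - 1)]    # drop edge-affected points
    return float(np.sqrt(np.mean(core ** 2)))
```

For a long-range-correlated series, F(n) scales as n^α; estimating α from a log-log fit of F against n is the scalar analogue of the multifractal τ(q) estimation the abstract generalizes.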
Liang, Yinwen; Pang, Yunong; Wu, Qiong; Hu, Zhangfeng; Han, Xue; Xu, Yisheng; Deng, Haiteng; Pan, Junmin
2014-09-01
The assembly and maintenance of cilia depends on intraflagellar transport (IFT). Activated IFT motor kinesin-II enters the cilium with loaded IFT particles comprising IFT-A and IFT-B complexes. At the ciliary tip, kinesin-II becomes inactivated, and IFT particles are released. Moreover, the rate of IFT entry is dynamically regulated during cilium assembly. However, the regulatory mechanism of IFT entry and loading/unloading of IFT particles remains elusive. We show that the kinesin-II motor subunit FLA8, a homolog of KIF3B, is phosphorylated on the conserved S663 by a calcium-dependent kinase in Chlamydomonas. This phosphorylation disrupts the interaction between kinesin-II and IFT-B, inactivates kinesin-II and inhibits IFT entry, and is also required for IFT-B unloading at the ciliary tip. Furthermore, our data suggest that the IFT entry rate is controlled by regulation of the cellular level of phosphorylated FLA8. Therefore, FLA8 phosphorylation acts as a molecular switch to control IFT entry and turnaround. PMID:25175706
NASA Technical Reports Server (NTRS)
Rajagopal, Kadambi R.; DebChaudhury, Amitabha; Orient, George
2000-01-01
This report describes a probabilistic structural analysis performed to determine the probabilistic structural response under fluctuating random pressure loads for the Space Shuttle Main Engine (SSME) turnaround vane. It uses a newly developed frequency- and distance-dependent correlation model that has features to model the decay phenomena along the flow and across the flow, with the capability to introduce a phase delay. The analytical results are compared using two computer codes, SAFER (Spectral Analysis of Finite Element Responses) and NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), and with experimentally observed strain gage data. The computer code NESSUS, with an interface to a subset of the Composite Load Spectra (CLS) code, is used for the probabilistic analysis. A fatigue code was used to calculate fatigue damage due to the random pressure excitation. The random variables modeled include engine system primitive variables that influence the operating conditions, convection velocity coefficient, stress concentration factor, structural damping, and thickness of the inner and outer vanes. The need for an appropriate correlation model in addition to the magnitude of the PSD is emphasized. The study demonstrates that correlation characteristics even under random pressure loads are capable of causing resonance-like effects for some modes. The study identifies the important variables that contribute to the structural alternating stress response and drive the fatigue damage for the new design. Since the alternating stress for the new redesign is less than the endurance limit for the material, the damage due to high-cycle fatigue is negligible.
ERIC Educational Resources Information Center
Circle, David
2005-01-01
When a teacher gives their time to a student, it is more significant to that student than anything else one could do for him or her. Music teachers deal with time all the time. Someone once said that "time is like money: we never have enough." This may seem true; however, time is not like money. One can make more money, but one cannot "make time."…
Disk-averaged synthetic spectra of Mars.
Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-08-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin. PMID:16078866
Disk-averaged synthetic spectra of Mars
NASA Technical Reports Server (NTRS)
Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.
Disk-averaged synthetic spectra of Mars
Giovanna Tinetti; Victoria S. Meadows; David Crisp; William Fong; Thangasamy Velusamy; Heather Snively
2004-08-20
The principal goal of the NASA Terrestrial Planet Finder (TPF) and ESA Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of the planet Mars to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model which uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions (phase angles) and viewing geometries. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-IR wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the appearance of an increasingly frozen Mars and simulated its detection versus the real Mars with TPF-C and TPF-I as a function of spectral resolving power, signal-to-noise, and integration time.
Auto-exploratory average reward reinforcement learning
Ok, DoKyeong; Tadepalli, P. [Oregon State Univ., Corvallis, OR (United States)]
1996-12-31
We introduce a model-based average reward Reinforcement Learning method called H-learning and compare it with its discounted counterpart, Adaptive Real-Time Dynamic Programming, in a simulated robot scheduling task. We also introduce an extension to H-learning, which automatically explores the unexplored parts of the state space, while always choosing greedy actions with respect to the current value function. We show that this "Auto-exploratory H-learning" performs better than the original H-learning under previously studied exploration methods such as random, recency-based, or counter-based exploration.
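The average-reward criterion used above can be made concrete with a planning analogue. This is a hedged sketch, not the authors' H-learning (which learns the transition model online and adds auto-exploration): relative value iteration on a small, made-up deterministic MDP, which recovers the optimal gain (average reward per step).

```python
# Illustrative sketch of the average-reward criterion (not the paper's H-learning):
# relative value iteration on a tiny, made-up deterministic MDP.

def relative_value_iteration(mdp, ref=0, iters=100):
    """mdp[s][a] = (reward, next_state). Returns (gain rho, bias values h)."""
    states = list(mdp)
    h = {s: 0.0 for s in states}
    rho = 0.0
    for _ in range(iters):
        # Bellman backup under the average-reward criterion
        th = {s: max(r + h[s2] for (r, s2) in mdp[s].values()) for s in states}
        rho = th[ref]                          # value at reference state -> gain
        h = {s: th[s] - rho for s in states}   # subtract offset to keep h bounded
    return rho, h

# Two states: staying in state 1 earns 2 per step; the 0 -> 1 -> 0 cycle
# averages (0 + 4) / 2 = 2 per step as well, so the optimal gain is 2.
mdp = {
    0: {"stay": (1, 0), "go": (0, 1)},
    1: {"back": (4, 0), "stay": (2, 1)},
}
rho, h = relative_value_iteration(mdp)
print(rho)
```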
Conditionally averaged vorticity field and turbulence modeling
NASA Astrophysics Data System (ADS)
Mui, R. C. Y.; Dommermuth, D. G.; Novikov, E. A.
1996-03-01
The conditionally averaged vorticity (CAV) field with fixed vorticity in a point is obtained from a direct numerical simulation of isotropic turbulence. The characteristic attenuation scale for the twisting and hyperboloidal CAV components is found to be of order ten times greater than the Kolmogorov microscale. A simple analytical model qualitatively agrees with the obtained CAV. For turbulent free-surface flows, the twisting part of CAV is expected to connect to the free surface. An alternative type of subgrid-scale modeling of turbulence, based on CAV, is suggested for the large-eddy simulations.
Conditionally averaged vorticity field and turbulence modeling
Mui, R.C.; Dommermuth, D.G.; Novikov, E.A. [Science Applications International Corporation, San Diego, California 92121 (United States); Institute for Nonlinear Science, University of California, San Diego, La Jolla, California 92093 (United States)]
1996-03-01
The conditionally averaged vorticity (CAV) field with fixed vorticity in a point is obtained from a direct numerical simulation of isotropic turbulence. The characteristic attenuation scale for the twisting and hyperboloidal CAV components is found to be of order ten times greater than the Kolmogorov microscale. A simple analytical model qualitatively agrees with the obtained CAV. For turbulent free-surface flows, the twisting part of CAV is expected to connect to the free surface. An alternative type of subgrid-scale modeling of turbulence, based on CAV, is suggested for the large-eddy simulations. © 1996 The American Physical Society.
Monthly Average Temperature for Boston, MA
NSDL National Science Digital Library
The phenomenon is monthly average temperature data for Boston, MA from March 1872 until September 2000. In addition to monthly averages, the National Weather Service table also shows the yearly average temperature.
Self-similarity of higher-order moving averages
NASA Astrophysics Data System (ADS)
Arianos, Sergio; Carbone, Anna; Türk, Christian
2011-10-01
In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. These asymptotic values are compared with results obtained by simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets of the same size. These polynomials could be interesting for applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
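As a rough illustration of the detrending-moving-average idea referenced above (first-order polynomial case only; the series and window sizes below are arbitrary choices, not the authors' setup), the variance of the series around its moving average scales as n^(2H), so a log-log fit recovers the Hurst exponent:

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(20000))  # ordinary Brownian motion, true H = 0.5

def dma_variance(y, n):
    # Backward moving average of window n (the first-order "polynomial" case):
    # ma[j] averages y[j : j + n], i.e. the window ending at index j + n - 1.
    ma = np.convolve(y, np.ones(n) / n, mode="valid")
    detrended = y[n - 1:] - ma
    return np.mean(detrended ** 2)

windows = np.array([10, 20, 40, 80, 160, 320])
sigma2 = np.array([dma_variance(y, n) for n in windows])
# sigma^2(n) ~ n^(2H), so the log-log slope divided by 2 estimates H
H = np.polyfit(np.log(windows), np.log(sigma2), 1)[0] / 2
print(round(H, 2))
```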
Thomas M. Smith; C. F. Ropelewski; R. W. Reynolds
1994-01-01
Optimal averaging (OA) is used to compute the area-average seasonal sea surface temperature (SST) for a variety of areas from 1860 to 1989. The OA gives statistically improved averages and the objective assignment of confidence intervals to these averages. For the global average, the early part of the record (1860-1890) and the times of the two world wars have largest
Delineating the Average Rate of Change in Longitudinal Models
ERIC Educational Resources Information Center
Kelley, Ken; Maxwell, Scott E.
2008-01-01
The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…
Real-Time Patient Survey Data During Routine Clinical Activities for Rapid-Cycle Quality Improvement
Jones, Robert E
2015-01-01
Background: Surveying patients is increasingly important for evaluating and improving health care delivery, but practical survey strategies during routine care activities have not been available. Objective: We examined the feasibility of conducting routine patient surveys in a primary care clinic using commercially available technology (Web-based survey creation, deployment on tablet computers, cloud-based management of survey data) to expedite and enhance several steps in data collection and management for rapid quality improvement cycles. Methods: We used a Web-based data management tool (survey creation, deployment on tablet computers, real-time data accumulation and display of survey results) to conduct four patient surveys during routine clinic sessions over a one-month period. Each survey consisted of three questions and focused on a specific patient care domain (dental care, waiting room experience, care access/continuity, Internet connectivity). Results: Of the 727 available patients during clinic survey days, 316 patients (43.4%) attempted the survey, and 293 (40.3%) completed it. For the four 3-question surveys, the average time per survey was 40.4 seconds overall, with a range of 5.4 to 20.3 seconds for individual questions. Yes/No questions took less time than multiple-choice questions (average 9.6 seconds versus 14.0 seconds). Average response time showed no clear pattern by order of questions or by proctor strategy, but increased monotonically with the number of words in the question (<20 words, 21-30 words, >30 words): 8.0, 11.8, and 16.8 seconds, respectively. Conclusions: This technology-enabled data management system helped capture patient opinions and accelerate turnaround of survey data, with minimal impact on a busy primary care clinic.
This new model of patient survey data management is feasible and sustainable in a busy office setting, supports and engages clinicians in the quality improvement process, and harmonizes with the vision of a learning health care system. PMID:25768807
Conditionally-averaged structures in wall-bounded turbulent flows
NASA Technical Reports Server (NTRS)
Guezennec, Yann G.; Piomelli, Ugo; Kim, John
1987-01-01
The quadrant-splitting and the wall-shear detection techniques were used to obtain ensemble-averaged wall layer structures. The two techniques give similar results for Q4 events, but the wall-shear method leads to smearing of Q2 events. Events were found to maintain their identity for very long times. The ensemble-averaged structures scale with outer variables. Turbulence producing events were associated with one dominant vortical structure rather than a pair of counter-rotating structures. An asymmetry-preserving averaging scheme was devised that allowed a picture of the average structure which more closely resembles the instantaneous one, to be obtained.
Global atmospheric circulation statistics: Four year averages
NASA Technical Reports Server (NTRS)
Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.
1987-01-01
Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.
A visibility graph averaging aggregation operator
NASA Astrophysics Data System (ADS)
Chen, Shiyu; Hu, Yong; Mahadevan, Sankaran; Deng, Yong
2014-06-01
The problem of aggregation is of considerable importance in many disciplines. In this paper, a new type of operator, called the visibility graph averaging (VGA) aggregation operator, is proposed. The proposed operator is based on the visibility graph, which can convert a time series into a graph. The weights are obtained according to the importance of the data in the visibility graph. Finally, the VGA operator is applied in the analysis of the TAIEX database to illustrate that it is practical; compared with classic aggregation operators, it has the advantage that it not only aggregates the data but also preserves the time information. Moreover, the determination of the weights is more reasonable.
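A minimal sketch of the visibility-graph idea behind the VGA operator, assuming natural visibility between points and degree-based weights (the paper's exact weighting scheme may differ):

```python
# Hypothetical sketch: map a series to its natural visibility graph, then
# weight each data point by its node degree. Not the authors' implementation.

def visibility_edges(y):
    n = len(y)
    edges = []
    for a in range(n):
        for b in range(a + 1, n):
            # (a, b) are mutually visible if every point strictly between them
            # lies below the straight line connecting (a, y[a]) and (b, y[b])
            if all(y[c] < y[a] + (y[b] - y[a]) * (c - a) / (b - a)
                   for c in range(a + 1, b)):
                edges.append((a, b))
    return edges

def vga(y):
    degree = [0] * len(y)
    for a, b in visibility_edges(y):
        degree[a] += 1
        degree[b] += 1
    # degree-weighted average: structurally important points count more
    return sum(d * v for d, v in zip(degree, y)) / sum(degree)

series = [3.0, 1.0, 4.0, 1.5, 5.0, 2.0]
print(vga(series))
```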
Average deployments versus missile and defender parameters
Canavan, G.H.
1991-03-01
This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with two-fold less effectiveness. The next 10% could double constellation sizes. 5 refs., 7 figs.
On the Choice of Average Solar Zenith Angle
Cronin, Timothy W.
Idealized climate modeling studies often choose to neglect spatiotemporal variations in solar radiation, but doing so comes with an important decision about how to average solar radiation in space and time. Since both ...
Adaptive common average filtering for myocontrol applications.
Rehbaum, Hubertus; Farina, Dario
2015-02-01
The use of electromyography (EMG) for the control of upper-limb prostheses has received great interest in neurorehabilitation engineering for decades. Important advances have been made in the development of machine learning algorithms for myocontrol. This paper describes a novel adaptive filter for EMG preprocessing to be applied as a conditioning stage for optimal subsequent information extraction. The aim of this filter is to improve both the quality (signal-to-noise ratio) and the selectivity of the EMG recordings. The filter is based on the classic common average reference (CAR), often used in EEG processing. However, while CAR is stationary, the proposed filter, which is referred to as adaptive common average reference (ACAR), is signal-dependent and its spatial transfer function is adapted over time. The ACAR filter is evaluated in this study for noise reduction and selectivity. Furthermore, it is proven that its application improves the performance of both pattern recognition and regression methods for myoelectric control. It is concluded that the proposed novel filter for EMG conditioning is a useful preprocessing tool in myocontrol applications. PMID:25388778
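For context, the classic stationary common average reference that ACAR generalizes can be sketched as follows; the adaptive, signal-dependent weighting of ACAR itself is not reproduced here. On synthetic data with identical common-mode noise on every channel, CAR removes the common component exactly:

```python
import numpy as np

def common_average_reference(emg):
    """Classic (stationary) CAR: subtract the across-channel mean per sample.

    emg: array of shape (channels, samples).
    """
    return emg - emg.mean(axis=0, keepdims=True)

rng = np.random.default_rng(1)
signal = rng.standard_normal((8, 1000))   # per-channel activity (synthetic)
common_noise = rng.standard_normal(1000)  # e.g. power-line/common-mode pickup
emg = signal + common_noise               # noise identical on every channel
filtered = common_average_reference(emg)
# The common-mode term cancels; what remains is the signal minus its
# across-channel mean.
```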
LONG TERM AVERAGE CONTROL OF A LOCAL TIME PROCESS
Stockbridge, Richard H.
is obtained from a one-degree-of-freedom shock absorber system with dry friction in which y = x1(t). In this system, the control v is the shock absorber damping constant, and Ky + F sign(ẏ) represents the restoring
Predictabilty of time averages: The influence of the boundary forcing
NASA Technical Reports Server (NTRS)
Shukla, J.
1982-01-01
The physical mechanisms through which changes in the boundary forcings of SST, soil moisture, albedo, sea ice, and snow influence the atmospheric circulation are discussed. Results of numerical experiments conducted with the GLAS climate model to determine the sensitivity of the model atmosphere to changes in boundary conditions of SST, soil moisture, and albedo over limited regions are dicussed. It is found that changes in SST and soil moisture in the tropic produce large changes in the atmospheric circulation and rainfall over the tropics as well as over mid-latitudes.
Research Papers Detecting time-averaging and spatial mixing
Schöne, Bernd R.
mixing. We devised eight metrics to quantitatively compare sclerochronologically calibrated annual δ18O
Research Papers Detecting time-averaging and spatial mixing
Arizona, University of
the same temperature and salinity regimes and will have identical annual oxygen isotope (δ18O) profiles or amino-acid racemization dating. Application of the technique to a Holocene deposit with shells in life
Critical Averaging Time for Atmospheric Boundary Layer Fluxes
H. Holmes
2005-01-01
Calculation of heat and momentum fluxes in the Atmospheric Boundary Layer (ABL) requires separating the turbulent signal into mean and fluctuating components. Since the ABL is not statistically stationary, separation of these components depends on the inherent scales of motion in the flow. A new method is presented that utilizes energy spectra and cospectra analyses of raw velocity and temperature
Turnaround Project Signs Six States
ERIC Educational Resources Information Center
Maxwell, Lesli A.
2010-01-01
Dozens of schools are slated for aggressive interventions over the next three years under a new, multistate effort that aims to clear hurdles that have hindered previous attempts to improve underperforming schools. Education officials in Colorado, Delaware, Illinois, Louisiana, Massachusetts, and New York have agreed to partner with Mass Insight…
Average Distances of Pyramid Networks
Hsien-jone Hsieh; Dyi-rong Duh
For an interconnection network, calculating its average distance is in general more difficult than determining its diameter. The diameters of pyramid networks are well known. This study calculates the average distances of pyramid networks.
The method of calculating average skidding distance
Zhang Zhixian; Feng Zhili
1997-01-01
By analyzing the existing average skidding distance formulae and the shape of the landing area, the authors put forward that the average skidding distance is shortest when the ratio of length to width is 1, and that the landing collection area is proportional to the average geometrical skidding distance. New models for calculating average distance are presented.
Average-Case Analysis Using Kolmogorov Complexity
Ming Li; Paul M. B. Vitányi
1997-01-01
This expository paper demonstrates how to use Kolmogorov complexity to do average-case analysis via four examples, and exhibits a surprising property of the celebrated associated universal distribution. The four examples are: average-case analysis of Heapsort [17, 15], average nni-distance between two binary rooted leaf-labeled trees [20], compact routing in computer networks [3], and average-case analysis of an adder algorithm [4]. The property is that the
Averaging in LRS class II spacetimes
NASA Astrophysics Data System (ADS)
Kašpar, Petr; Svítek, Otakar
2015-02-01
We generalize Buchert's averaged equations (Gen Relativ Gravit 32:105, 2000; Gen Relativ Gravit 33:1381, 2001) to the LRS class II dust model in the sense that all Einstein equations are averaged, not only the trace part. We derive the relevant averaged equations and investigate the backreaction on the expansion and shear scalars in an approximate LTB model. Finally, we propose a way to close the system of averaged equations.
Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?
Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.
2013-06-17
Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
Do diurnal aerosol changes affect daily average radiative forcing?
NASA Astrophysics Data System (ADS)
Kassianov, Evgueni; Barnard, James; Pekour, Mikhail; Berg, Larry K.; Michalsky, Joseph; Lantz, Kathy; Hodges, Gary
2013-06-01
Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
Averaging underwater noise levels for environmental assessment of shipping.
Merchant, Nathan D; Blondel, Philippe; Dakin, D Tom; Dorocicz, John
2012-10-01
Rising underwater noise levels from shipping have raised concerns regarding chronic impacts to marine fauna. However, there is a lack of consensus over how to average local shipping noise levels for environmental impact assessment. This paper addresses this issue using 110 days of continuous data recorded in the Strait of Georgia, Canada. Probability densities of ~10^7 1-s samples in selected 1/3-octave bands were approximately stationary across one-month subsamples. Median and mode levels varied with averaging time. Mean sound pressure levels averaged in linear space, though susceptible to strong bias from outliers, are most relevant to cumulative impact assessment metrics. PMID:23039575
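The distinction drawn above between linear-space means and medians can be made concrete. A small sketch with made-up level samples: the linear (power-domain) mean is pulled up by a single loud event, while the median is not.

```python
import numpy as np

def mean_spl(levels_db):
    """Mean sound pressure level averaged in linear (power) space."""
    return 10 * np.log10(np.mean(10 ** (np.asarray(levels_db) / 10)))

# Hypothetical 1-s SPL samples (dB): two quiet intervals and one loud outlier
samples = [100.0, 100.0, 120.0]
linear_mean = mean_spl(samples)   # dominated by the outlier (~115.3 dB)
median = np.median(samples)       # 100.0 dB, insensitive to the outlier
print(round(linear_mean, 1), median)
```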
Polynomial averages converge to the product of integrals
Nikos Frantzikinakis; Bryna Kra
2005-01-01
We answer a question posed by Vitaly Bergelson, showing that in a totally ergodic system, the average of a product of functions evaluated along polynomial times, with polynomials of pairwise differing degrees, converges in L2 to the product of the integrals. Such averages are characterized by nilsystems and so we reduce the problem to one of uniform distribution of polynomial
Dennis L. Molfese; A. F. Key; S. Kelly; N. Cunningham; S. Terrell; M. Ferguson; V. J. Molfese; T. Bonebright
2006-01-01
Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP amplitudes and peak latencies decreased as reading skills increased. Furthermore, hemisphere differences
NASA Technical Reports Server (NTRS)
Moore, Franklin K; Ostrach, Simon
1957-01-01
The time-average characteristics of boundary layers over a flat plate in nearly quasi-steady flow are determined. The plate may be either insulated or isothermal. The time averages are found without specifying the plate velocity explicitly except that it is positive and has an average value.
Average power scaling of copper hybrid lasers
L. Little; Chris E. Little
1997-01-01
Copper lasers are the highest-efficiency high-average-power visible lasers (510.6 and 578.2 nm). The most efficient of them all is the copper hybrid laser, which has been demonstrated to produce 200 W average power with 1.9% efficiency. The prospects for increasing the average output power from single- or double-tube devices into the kilowatt regime are discussed. With
Averaging in cosmology based on Cartan scalars
Petr Kaspar; Otakar Svitek
2014-05-22
We present a new approach for averaging in general relativity and cosmology. After a short review of the theory originally taken from the equivalence problem, we consider two ways to deal with averaging based on Cartan scalars. We apply the theory to two different LTB models. In the first one, the correlation term behaves as a positive cosmological constant; in the second example, the leading correlation term behaves like spatial curvature. We also show the nontriviality of averaging for a linearized monochromatic gravitational wave.
Cell averaging Chebyshev methods for hyperbolic problems
NASA Technical Reports Server (NTRS)
Wei, Cai; Gottlieb, David; Harten, Ami
1990-01-01
A cell averaging method for the Chebyshev approximations of first order hyperbolic equations in conservation form is described. Formulas are presented for transforming between pointwise data at the collocation points and cell averaged quantities, and vice-versa. This step, trivial for the finite difference and Fourier methods, is nontrivial for the global polynomials used in spectral methods. The cell averaging methods presented are proven stable for linear scalar hyperbolic equations and present numerical simulations of shock-density wave interaction using the new cell averaging Chebyshev methods.
Moving average process underlying the holographic-optical-tweezers experiments
NASA Astrophysics Data System (ADS)
Ślęzak, Jakub; Drobczyński, Sławomir; Weron, Karina; Masajada, Jan
2013-12-01
We study statistical properties of recordings which contain time-dependent positions of a bead trapped in optical tweezers. Analysis of such a time series indicates that the commonly accepted model, i.e., the autoregressive process of first order, is not sufficient to fit the data. We show the presence of a first-order moving average part in the dynamical model of the system. We explain the origin of this part as an influence of the high-frequency CCD camera on the measurements. The proposed autoregressive moving average model appears to reflect perfectly all the statistical features of the high-frequency recording data.
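The AR(1)-versus-ARMA(1,1) distinction can be probed from autocorrelations alone. A hedged sketch with arbitrary parameters (synthetic data, not the tweezers recordings): for a pure AR(1), ρ(2) = ρ(1)², while an MA(1) component breaks that relation but preserves ρ(2)/ρ(1) = φ.

```python
import numpy as np

rng = np.random.default_rng(2)
phi, theta, n = 0.5, 0.8, 100_000
eps = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    # ARMA(1,1): AR part phi, MA part theta
    x[t] = phi * x[t - 1] + eps[t] + theta * eps[t - 1]

def acf(x, k):
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

r1, r2 = acf(x, 1), acf(x, 2)
# A pure AR(1) would force r2 == r1**2; the MA(1) part breaks that relation,
# while the ratio r2 / r1 still recovers the AR coefficient phi.
print(round(r2 / r1, 2), round(r1 ** 2 - r2, 2))
```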
Probing turbulence intermittency via autoregressive moving-average models
NASA Astrophysics Data System (ADS)
Faranda, Davide; Dubrulle, Bérengère; Daviaud, François; Pons, Flavio Maria Emanuele
2014-12-01
We suggest an approach to probing intermittency corrections to the Kolmogorov law in turbulent flows based on the autoregressive moving-average modeling of turbulent time series. We introduce an index Υ that measures the distance from a Kolmogorov-Obukhov model in the autoregressive moving-average model space. Applying our analysis to particle image velocimetry and laser Doppler velocimetry measurements in a von Kármán swirling flow, we show that Υ is proportional to traditional intermittency corrections computed from structure functions. Therefore, it provides the same information, using much shorter time series. We conclude that Υ is a suitable index to reconstruct intermittency in experimental turbulent fields.
Probing turbulence intermittency via autoregressive moving-average models.
Faranda, Davide; Dubrulle, Bérengère; Daviaud, François; Pons, Flavio Maria Emanuele
2014-12-01
We suggest an approach to probing intermittency corrections to the Kolmogorov law in turbulent flows based on the autoregressive moving-average modeling of turbulent time series. We introduce an index Υ that measures the distance from a Kolmogorov-Obukhov model in the autoregressive moving-average model space. Applying our analysis to particle image velocimetry and laser Doppler velocimetry measurements in a von Kármán swirling flow, we show that Υ is proportional to traditional intermittency corrections computed from structure functions. Therefore, it provides the same information, using much shorter time series. We conclude that Υ is a suitable index to reconstruct intermittency in experimental turbulent fields. PMID:25615037
Bayesian Model Averaging Using Ensemble Particle Filtering
NASA Astrophysics Data System (ADS)
Rings, J.; Vrugt, J. A.; Huisman, J. A.; Schoups, G.; Vereecken, H.
2010-12-01
Conceptual watershed models are a valuable tool for streamflow prediction, but it is also acknowledged that no single model structure can capture all the details of a watershed. Therefore, ensembles of models are employed, and Bayesian model averaging (BMA) is increasingly being used to combine the predictions of multiple different models into a single forecast that is supposed to exhibit better predictive capability than any of the individual models. Successful implementation of BMA depends on the choice of the conditional distribution used to specify uncertainty of each ensemble member. Most often this distribution is assumed Gaussian. Here we introduce a four-step approach that retrieves the conditional distribution for each model and time. First, we create a suite of watershed models by calibrating one conceptual model to different parts of the hydrograph. Then, a particle filter is used for each model to recursively derive the posterior probability density function of streamflow. The particle filter explicitly incorporates uncertainty in measurement and model states. Then, a cross-entropy method is employed to retrieve closed-form mathematical descriptions of these respective probability distributions. Finally, the BMA weights are estimated from these closed-form distributions using the DREAM algorithm. For the extremely diverse suite of watershed models, the RMSE for the BMA model is not necessarily better than that of the single best model. The treatment of model and measurement uncertainties in the particle filter, however, allows much better predictions than the calibrated models alone can provide.
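A much-simplified sketch of BMA weight estimation, assuming fixed-variance Gaussian member distributions and a plain EM iteration (unlike the particle-filter and cross-entropy machinery described above; all data below are synthetic):

```python
import numpy as np

def bma_em(preds, obs, sigma=1.0, iters=200):
    """EM for BMA weights with fixed-variance Gaussian member distributions.

    preds: (n_models, n_times) ensemble forecasts; obs: (n_times,) observations.
    Simplified from Raftery-style BMA: member variances are held fixed here.
    """
    k, n = preds.shape
    w = np.full(k, 1.0 / k)
    lik = np.exp(-0.5 * ((preds - obs) / sigma) ** 2)  # Gaussian kernels
    for _ in range(iters):
        z = w[:, None] * lik
        z /= z.sum(axis=0, keepdims=True)   # E-step: membership probabilities
        w = z.mean(axis=1)                  # M-step: updated weights
    return w

rng = np.random.default_rng(3)
truth = rng.standard_normal(500).cumsum()
good = truth + 0.3 * rng.standard_normal(500)   # accurate ensemble member
poor = truth + 3.0 * rng.standard_normal(500)   # inaccurate ensemble member
w = bma_em(np.vstack([good, poor]), truth)
print(w.round(2))  # most weight should go to the accurate member
```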
Average oxidation state of carbon in proteins.
Dick, Jeffrey M
2014-11-01
The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (Z(C)) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation-reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between Z(C) and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in Z(C) in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower Z(C) tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales. PMID:25165594
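The elemental-ratio calculation of Z(C) described above can be sketched directly from a chemical formula, here for neutral molecules only (the charge term needed for ions is omitted); nominal oxidation-state contributions per heteroatom are assumed (N, O, S withdraw electrons from carbon; H donates one):

```python
# Sketch of the average oxidation state of carbon, Z_C, from a chemical formula.
# Neutral molecules only; each H contributes -1, each N +3, each O and S +2.

def zc(formula):
    """formula: element -> atom count, e.g. glycine {'C': 2, 'H': 5, 'N': 1, 'O': 2}."""
    n = lambda el: formula.get(el, 0)
    return (3 * n("N") + 2 * n("O") + 2 * n("S") - n("H")) / n("C")

print(zc({"C": 1, "H": 4}))                   # methane: -4.0
print(zc({"C": 1, "O": 2}))                   # carbon dioxide: +4.0
print(zc({"C": 2, "H": 5, "N": 1, "O": 2}))   # glycine: 1.0
```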
Generalized averaging method for power conversion circuits
Seth R. Sanders; J. M. Noworolski; X. Z. Liu; G. C. Verghese
1991-01-01
A more general averaging procedure that encompasses state-space averaging and that is potentially applicable to a much broader class of circuits and systems is presented. Its application is illustrated on a number of examples, including resonant and PWM power converters. The approach offers refinements to the theory
Temperature averages and rates of stratospheric reactions
D. M. Murphy; A. R. Ravishankara
1994-01-01
Use of zonal average temperatures is common in numerical modeling of stratospheric chemistry. In most cases, this approach is accurate. The exceptions arise when a heterogeneous reaction or a thermal process that is highly nonlinear in temperature is involved. The differences between use of an average temperature and a more realistic temperature are presented for cases where temperature fluctuations are of
INVERSIONS FOR AVERAGE SUPERGRANULAR FLOWS USING FINITE-FREQUENCY KERNELS
Svanda, Michal, E-mail: michal@astronomie.cz [Astronomical Institute, Academy of Sciences of the Czech Republic (v.v.i.), Fricova 298, CZ-25165 Ondrejov (Czech Republic)
2012-11-10
I analyze the maps recording the travel-time shifts caused by averaged plasma anomalies under an 'average supergranule', constructed by means of statistical averaging over 5582 individual supergranules with large divergence signals detected in two months of Helioseismic and Magnetic Imager Dopplergrams. By utilizing a three-dimensional validated time-distance inversion code, I measure a peak vertical velocity of 117 ± 2 m s^-1 at depths around 1.2 Mm in the center of the supergranule and a root-mean-square vertical velocity of 21 m s^-1 over the area of the supergranule. A discrepancy between this measurement and the measured surface vertical velocity (a few m s^-1) can be explained by the existence of the large-amplitude vertical flow under the surface of supergranules with large divergence signals, recently suggested by Duvall and Hanasoge.
GROUP ACTION INDUCED AVERAGING FOR HARDI PROCESSING
Çetingül, H. Ertan; Afsari, Bijan; Wright, Margaret J.; Thompson, Paul M.; Vidal, Rene
2012-01-01
We consider the problem of processing high angular resolution diffusion images described by orientation distribution functions (ODFs). Prior work showed that several processing operations, e.g., averaging, interpolation and filtering, can be reduced to averaging in the space of ODFs. However, this approach leads to anatomically erroneous results when the ODFs to be processed have very different orientations. To address this issue, we propose a group action induced distance for averaging ODFs, which leads to a novel processing framework on the spaces of orientation (the space of 3D rotations) and shape (the space of ODFs with the same orientation). Experiments demonstrate that our framework produces anatomically meaningful results. PMID:22903055
Cosmic inhomogeneities and averaged cosmological dynamics.
Paranjape, Aseem; Singh, T P
2008-10-31
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics. PMID:18999811
DEFORMOTION Deforming Motion, Shape Average and the Joint
Soatto, Stefano
average" in order to track moving and deforming objects through time. Fig. 1: a jellyfish "moving" in one direction while "locally deforming." The jellyfish in Fig. 1 is just another example to illustrate the same intuition. For instance, in Fig. 6, how do we describe the "motion" of a jellyfish? Or in Fig. 5
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2014 CFR
2014-07-01
...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2012 CFR
2012-07-01
...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
Ensemble Averages when β is a Square Integer
Christopher D. Sinclair
2010-08-25
We give a hyperpfaffian formulation of partition functions and ensemble averages for Hermitian and circular ensembles when L is an arbitrary integer and β = L^2, and when L is an odd integer and β = L^2 + 1.
Reasonable Averages That Give Wrong Answers
NSDL National Science Digital Library
Shahani, A. K. (Arjan Kewalram)
Averages are meant to convey the essential features of a set of data, or a random variable, in a simple and concise way. Like any other summary, an average can be misleading, misused, and abused; there is a fair amount of literature on this aspect of averages, the book by D. Huff (1973) being a particularly readable account. In one intuitive use of averages there is a source of error which can be quite serious and which is often not recognized. This source of error is illustrated below by a quality control problem, a project, an experiment, and a game. A Taylor series expansion gives an insight into the nature of the error.
Selling Geothermal Systems The "Average" Contractor
Selling Geothermal Systems: The "Average" Contractor · History of sales procedures · Manufacturer-driven procedures · What makes geothermal technology any harder to sell? · "It's difficult to sell a geothermal system." · It should
STAFF FORECAST: AVERAGE RETAIL ELECTRICITY PRICES
CALIFORNIA ENERGY COMMISSION STAFF FORECAST: AVERAGE RETAIL ELECTRICITY PRICES 2005 TO 2018 Mignon Marks Principal Author Mignon Marks Project Manager David Ashuckian Manager ELECTRICITY ANALYSIS OFFICE Sylvia Bender Acting Deputy Director ELECTRICITY SUPPLY DIVISION B.B. Blevins Executive Director
Averaging Sampled Sensor Outputs To Detect Failures
NASA Technical Reports Server (NTRS)
Panossian, Hagop V.
1990-01-01
Fluctuating signals smoothed by taking consecutive averages. Sampling-and-averaging technique processes noisy or otherwise erratic signals from number of sensors to obtain indications of failures in complicated system containing sensors. Used under both transient and steady-state conditions. Useful in monitoring automotive engines, chemical-processing plants, powerplants, and other systems in which outputs of sensors contain noise or other fluctuations in measured quantities.
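The sampling-and-averaging idea can be sketched as a running mean over consecutive samples: a brief spike is diluted by the average, while a sustained offset shifts the average itself and so indicates a failure. The window size, fault magnitude, and signal model below are illustrative assumptions, not the NASA implementation:

```python
import numpy as np

# Smooth a noisy steady-state sensor signal with a running mean of
# consecutive samples (window size is an assumption for the sketch).
def running_mean(x, window=5):
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

rng = np.random.default_rng(0)
signal = 100.0 + rng.normal(0.0, 1.0, size=200)  # noisy sensor readings
signal[120] = 130.0                              # injected transient spike
smooth = running_mean(signal, window=5)

# The 30-unit spike contributes only ~6 units to any 5-sample average,
# so thresholding the smoothed signal rejects transients while still
# catching sustained shifts in the measured quantity.
print(smooth.shape)  # (196,)
```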
A Laterally Averaged Nonhydrostatic Ocean Model
Daniel Bourgault; Dan E. Kelley
2004-01-01
Laterally-averaged dynamical equations are often used in numerical models of narrow coastal regions such as fjords and estuaries. Such models are preferable to 2D formulations, because they can handle variable-width effects such as flow acceleration at coastal constrictions. They also have a considerable speed advantage over 3D models. However, most laterally-averaged models share a significant weakness -- they rely on
Applications of high average power nonlinear optics
Velsko, S.P.; Krupke, W.F.
1996-02-05
Nonlinear optical frequency converters (harmonic generators and optical parametric oscillators) are reviewed with an emphasis on high-average-power performance and limitations. NLO materials issues and NLO device designs are discussed in reference to several emerging scientific, military, and industrial/commercial applications requiring ~100 watt average power levels in the visible and infrared spectral regions. Research efforts required to enable practical ~100 watt class NLO-based laser systems are identified.
Self-averaging characteristics of spectral fluctuations
NASA Astrophysics Data System (ADS)
Braun, Petr; Haake, Fritz
2015-04-01
The spectral form factor as well as the two-point correlator of the density of (quasi-)energy levels of individual quantum dynamics are not self-averaging. Only suitable smoothing turns them into useful characteristics of spectra. We present numerical data for a fully chaotic kicked top, employing two types of smoothing: one involves primitives of the spectral correlator; the second, a small imaginary part of the quasi-energy. Self-averaging universal behavior (like the circular unitary ensemble (CUE) average) is found for the smoothed correlator, apart from noise which shrinks like 1/√N as the dimension N of the quantum Hilbert space grows. There are periodically repeated quasi-energy windows of correlation decay and revival wherein the smoothed correlation remains finite as N → ∞, such that the noise is negligible. In between those windows (where the CUE-averaged correlator takes on values of the order 1/N^2) the noise becomes dominant and self-averaging is lost. We conclude that the noise forbids distinction of CUE and GUE-type behavior. Surprisingly, the underlying smoothed generating function does not enjoy any self-averaging outside the range of its variables relevant for determining the two-point correlator (and certain higher-order ones). We corroborate our numerical findings for the noise by analytically determining the CUE variance of the smoothed single-matrix correlator.
Average diurnal variation of summer lightning over the Florida peninsula
NASA Technical Reports Server (NTRS)
Maier, L. M.; Krider, E. P.; Maier, M. W.
1984-01-01
Data derived from a large network of electric field mills are used to determine the average diurnal variation of lightning in a Florida seacoast environment. The variation at the NASA Kennedy Space Center and the Cape Canaveral Air Force Station area is compared with standard weather observations of thunder, and the variation of all discharges in this area is compared with the statistics of cloud-to-ground flashes over most of the South Florida peninsula and offshore waters. The results show average diurnal variations that are consistent with statistics of thunder start times and the times of maximum thunder frequency, but that the actual lightning tends to stop one to two hours before the recorded thunder. The variation is also consistent with previous determinations of the times of maximum rainfall and maximum rainfall rate.
Neutron average cross sections of ^237Np
Noguere, G. [Atomic Energy Commission (CEA), DEN Cadarache, F-13108 Saint Paul Les Durance (France)
2010-04-15
This work reports ^237Np neutron resonance parameters obtained from the simultaneous analysis of time-of-flight data measured at the GELINA, ORELA, KURRI, and LANSCE facilities. A statistical analysis of these resonances relying on average R-matrix and optical model calculations was used to establish consistent l-dependent average resonance parameters involved in the description of the unresolved resonance range of the ^237Np neutron cross sections. For neutron orbital angular momentum l=0, we obtained an average radiation width
Averaging procedure in variable-G cosmologies
NASA Astrophysics Data System (ADS)
Cardone, Vincenzo F.; Esposito, Giampiero
2010-02-01
Previous work in the literature had built a formalism for spatially averaged equations for the scale factor, giving rise to an averaged Raychaudhuri equation and averaged Hamiltonian constraint, which involve a backreaction source term. The present paper extends these equations to include models with variable Newton parameter and variable cosmological term, motivated by the nonperturbative renormalization program for quantum gravity based upon the Einstein-Hilbert action. We focus on the Brans-Dicke form of the renormalization-group improved action functional. The coupling between backreaction and spatially averaged three-dimensional scalar curvature is found to survive, and a variable-G cosmic quintet is found to emerge. Interestingly, under suitable assumptions, an approximate solution can be found where the early universe tends to a Friedmann-Lemaitre-Robertson-Walker model, while keeping track of the original inhomogeneities through three effective fluids. The resulting qualitative picture is that of a universe consisting of baryons only, while inhomogeneities average out to give rise to the full dark-side phenomenology.
Exact Averaging of Stochastic Equations for Flow in Porous Media
Karasaki, Kenzi; Shvidler, Mark; Karasaki, Kenzi
2008-03-15
It is well known that, at present, exact averaging of the equations for flow and transport in random porous media has been achieved only for limited special fields. Moreover, approximate averaging methods (for example, the convergence behavior and the accuracy of truncated perturbation series) are not well studied, and in addition, calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do there exist exact and sufficiently general forms of averaged equations? Here, we present an approach for finding the general exactly averaged system of basic equations for steady flow with sources in unbounded stochastically homogeneous fields. We do this by using (1) the existence and some general properties of Green's functions for the appropriate stochastic problem, and (2) some information about the random field of conductivity. This approach enables us to find the form of the averaged equations without directly solving the stochastic equations or invoking the usual assumptions regarding any small parameters. In the common case of a stochastically homogeneous conductivity field we present the exactly averaged new basic nonlocal equation with a unique kernel-vector. We show that in the case of some type of global symmetry (isotropy, transversal isotropy, or orthotropy), we can in the same way derive the exact averaged nonlocal equations with a unique kernel-tensor for three-dimensional and two-dimensional flow. When global symmetry does not exist, the nonlocal equation with a kernel-tensor involves complications and leads to an ill-posed problem.
Interpreting Sky-Averaged 21-cm Measurements
NASA Astrophysics Data System (ADS)
Mirocha, Jordan
2015-01-01
Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions.I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. 
Finally, (3) the independent constraints most likely to aid in the interpretation of global 21-cm signal measurements are detections of Lyman-alpha emitters at high redshifts and constraints on the midpoint of reionization, both of which are among the primary science objectives of ongoing or near-future experiments.
Perceiving the average hue of color arrays
Webster, Jacquelyn; Kay, Paul; Webster, Michael A.
2014-01-01
The average of a color distribution has special significance for color coding (e.g. to estimate the illuminant) but how it depends on the visual representation (e.g. perceptual vs. cone-opponent) or nonlinearities (e.g. categorical coding) is unknown. We measured the perceived average of two colors shown alternated in spatial arrays. Observers adjusted the components until the average equaled a specified reference hue. Matches for red, blue-red, or yellow-green were consistent with the arithmetic mean chromaticity, while blue-green settings deviated toward blue. The settings show little evidence for categorical coding, and cannot be predicted from the scaled appearances of the individual components. PMID:24695184
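One reason averaging colors is subtle is that hue is an angle, so a naive arithmetic mean can land on the opposite side of the hue circle. A generic illustration of the difference (the circular mean below is a standard construction, not the paper's chromaticity-space method, and the degree values are invented):

```python
import numpy as np

# The arithmetic mean of hue angles 350 and 10 degrees is 180 (the opposite
# side of the hue circle), while the perceptually sensible average is 0 (red).
# Averaging unit vectors and taking the resultant's angle avoids the wrap-around.
def circular_mean_deg(angles_deg):
    a = np.deg2rad(np.asarray(angles_deg, dtype=float))
    mean_angle = np.arctan2(np.sin(a).mean(), np.cos(a).mean())
    return (np.rad2deg(mean_angle) + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)

print(np.mean([350, 10]))            # 180.0 -- misleading for hues
print(circular_mean_deg([350, 10]))  # close to 0 (red)
```

The paper's finding that settings match the arithmetic mean chromaticity (not hue angle) for most pairs is exactly the kind of question this distinction raises.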
Books Average Previous Decade of Economic Misery
Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
Global average ozone change from November 1978 to May 1990
NASA Technical Reports Server (NTRS)
Herman, J. R.; McPeters, R.; Stolarski, R.; Larko, D.; Hudson, R.
1991-01-01
A recent recalibration and reprocessing of the total ozone mapping spectrometer (TOMS) data have made possible a new determination of the global average total ozone decrease of 3.5 percent over the 11-year period January 1, 1979, to December 31, 1989, with a 2-sigma error of 1.4 percent. The revised TOMS ozone trend data are in agreement, within error limits, with the average of 39 ground-based Dobson stations and with the world standard Dobson spectrometer 83 at Mauna Loa, Hawaii. Superimposed on the 11-year ozone trend are a possible solar cycle effect, the quasi-biennial oscillation (QBO), and annual and semiannual cycles. Using solar 10.7-cm flux data and 30-mbar Singapore wind data (QBO), a time series has been constructed that reproduces the long-term behavior of the globally averaged ozone. Removal of the apparent solar cycle effect from the global average reduces the net ozone loss to 2.66 ± 1.4 percent per decade. The precise value of the global average ozone trend depends on the latitude range selected, with ranges greater than ±69° emphasizing the larger variations at high latitudes.
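Removing a solar-cycle component from a trend estimate can be sketched as a multiple regression of ozone on time and a solar proxy. The data below are synthetic and the model deliberately minimal (the real analysis also includes QBO, annual, and semiannual terms):

```python
import numpy as np

# Ordinary least squares: ozone ~ a + b*t + c*F10.7, so the fitted b is the
# trend with the solar-cycle proxy regressed out. All numbers are synthetic.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 11.0, 132)                    # years (monthly samples)
f107 = 140 + 70 * np.sin(2 * np.pi * t / 11)       # crude solar-cycle proxy
ozone = 300 - 0.8 * t + 0.02 * f107 + rng.normal(0, 0.3, t.size)

A = np.column_stack([np.ones_like(t), t, f107])    # design matrix [1, t, F10.7]
coef, *_ = np.linalg.lstsq(A, ozone, rcond=None)
print(coef[1])  # recovered trend, close to the true -0.8 per year
```

Because the one-cycle sinusoid is nearly orthogonal to the linear term over the 11-year window, the trend and solar coefficients separate cleanly here; with shorter records they would be strongly degenerate.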
Polarized electron beams at milliampere average current
NASA Astrophysics Data System (ADS)
Poelker, M.
2013-11-01
This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 μA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltages > 350 kV.
Average: the juxtaposition of procedure and context
NASA Astrophysics Data System (ADS)
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
A singularity theorem based on spatial averages
NASA Astrophysics Data System (ADS)
Senovilla, J. M. M.
2007-07-01
Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating Universes, expanding everywhere with a non-vanishing spatial average of the matter variables, show severe geodesic incompleteness in the past. Another way of stating the result is that, under the same conditions, any singularity-free model must have a vanishing spatial average of the energy density (and other physical variables). This is very satisfactory and provides a clear, decisive difference between singular and non-singular cosmologies.
Average magnitude difference function pitch extractor
M. Ross; H. Shaffer; A. Cohen; R. Freudberg; H. Manley
1974-01-01
This paper describes a method for using the average magnitude difference function (AMDF) and associated decision logic to estimate the pitch period of voiced speech sounds. The AMDF is a variation on autocorrelation analysis where, instead of correlating the input speech at various delays (where multiplications and summations are formed at each value of delay), a difference signal is formed
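The AMDF itself is simple to state: for each candidate lag k, average |x[n] - x[n+k]| over the frame; for voiced speech the function dips sharply near multiples of the pitch period, and the decision logic picks the dip. A minimal sketch on a synthetic "voiced" tone (sample rate, frequency, and frame length are illustrative assumptions):

```python
import numpy as np

# Average magnitude difference function: mean |x[n] - x[n+k]| for lags
# k = 1..max_lag. Unlike autocorrelation, no multiplications are needed.
def amdf(x, max_lag):
    n = len(x)
    return np.array([np.abs(x[:n - k] - x[k:]).mean()
                     for k in range(1, max_lag + 1)])

fs = 8000                        # sample rate (Hz), assumed
f0 = 200                         # synthetic "pitch" (Hz), assumed
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * f0 * t)   # stand-in for a voiced speech frame

d = amdf(x, max_lag=60)
pitch_period = np.argmin(d) + 1  # lag (in samples) of the deepest dip
print(pitch_period)              # 40 samples = 8000 / 200
```

Real decision logic must also reject dips at multiples of the true period and handle unvoiced frames, which is what the paper's associated logic addresses.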
Why Johnny Can Be Average Today.
ERIC Educational Resources Information Center
Sturrock, Alan
1997-01-01
During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…
Topological quantization of ensemble averages Emil Prodan
Topological quantization of ensemble averages. Emil Prodan, Department of Physics, Yeshiva University. ...looking for novel manifestations of the topological quantization. As a new application, we show the formalism can be used to probe the existence of edge states.
Model averaging, optimal inference, and habit formation
FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.
2014-01-01
Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724
Average Annual Rainfall over the Globe
ERIC Educational Resources Information Center
Agrawal, D. C.
2013-01-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…
Glenzinski, D.; /Fermilab
2008-01-01
This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world-average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.
Laser Fusion Energy The High Average Power
Laser Fusion Energy and The High Average Power Program. John Sethian, Naval Research Laboratory. ...for Inertial Fusion Energy with lasers, direct drive targets and solid wall chambers. Lasers: DPPSL (LLNL), Kr... Laser(s) goals: 1. Develop technologies that can meet the fusion energy
A Functional Measurement Study on Averaging Numerosity
ERIC Educational Resources Information Center
Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio
2014-01-01
In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…
Geographic gossip: Efficient averaging for sensor networks
Dimakis, Alexandros G.; Sarwate, Anand D.; Wainwright, Martin J.
2008-01-01
processing, gossip algorithms, message-passing algorithms, random geometric graphs; graphs such as the single cycle and regular grid (IEEE Transactions on Signal Processing); graph topology. The averaging problem is an archetypal instance of distributed signal processing,
Science of NHL Hockey: Statistics & Averages
NSDL National Science Digital Library
NBC Learn
2010-10-07
Being a top goalie in the NHL takes more than quick reflexes and nerves of steel, it also requires a firm grip on the numbers. Namely, the key averages and statistics of goaltending. "Science of NHL Hockey" is a 10-part video series produced in partnership with the National Science Foundation and the National Hockey League.
NASA Technical Reports Server (NTRS)
1994-01-01
Time-averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row, the first-stage blade row, and the second-stage vane row of the Rocketdyne Space Shuttle Main Engine two-stage fuel-side turbine. Unsteady pressure envelope measurements for the first blade are also reported. These measurements were made at 10 percent, 50 percent, and 90 percent span on both the pressure and suction surfaces of the first-stage components. Additional Stanton number measurements were made on the first-stage blade platform, blade tip, and shroud, and at 50 percent span on the second vane. A shock tube was used as a short-duration source of heated and pressurized air to which the turbine was subjected. Platinum thin-film heat flux gages were used to obtain the heat flux measurements, while miniature silicon-diaphragm flush-mounted pressure transducers were used to obtain the pressure measurements. The first-stage vane Stanton number distributions are compared with predictions obtained using a version of STAN5 and a quasi-3D Navier-Stokes solution. This same quasi-3D N-S code was also used to obtain predictions for the first blade and the second vane.
Self-averaging in complex brain neuron signals
NASA Astrophysics Data System (ADS)
Bershadskii, A.; Dremencov, E.; Fukayama, D.; Yadid, G.
2002-12-01
Nonlinear statistical properties of the Ventral Tegmental Area (VTA) of the limbic brain are studied in vivo. The VTA plays a key role in the generation of pleasure and in the development of psychological drug addiction. It is shown that spiking time series of the VTA dopaminergic neurons exhibit long-range correlations with self-averaging behavior. This specific VTA phenomenon has no relation to the VTA rewarding function. The latter result reveals the complex role of the VTA in the limbic brain.
Average position in quantum walks with a U(2) coin
Min Li; YOng-Sheng Zhang; Guang-Can Guo
2012-10-11
We investigated discrete-time quantum walks with an arbitrary unitary coin. Here we find that the maximum of the average position varies as sin(α+γ), when the initial state is (|0L⟩ + i|0R⟩)/√2. We prove the result and obtain some symmetry properties of quantum walks with a U(2) coin with |0L⟩ and |0R⟩ as the initial state.
Average and most-probable photon paths in random media.
Polishchuk, A Y; Dolne, J; Liu, F; Alfano, R R
1997-04-01
Time-resolved experiments have revealed that, in contrast to the predictions of conventional diffusion theory, photons select certain curvilinear paths to travel between a source and a detector. Concepts of the average photon paths and Fermat paths are introduced on the basis of the non-Euclidean diffusion equation (NED) to explain experimental results. Comparison of the theory and the experiment demonstrates the potential of the NED to describe nondiffusive features of photon migration in the multiple-scattering regime. PMID:18183224
High average power diode pumped solid state lasers for CALIOPE
Comaskey, B.; Halpin, J.; Moran, B.
1994-07-01
Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers.
On the determination of local instantaneous averages in particulate flow measurements
NASA Technical Reports Server (NTRS)
Vandewall, R. E.; Soo, S. L.
1993-01-01
Determining the instantaneous local average particle density of a gas-particle suspension requires satisfying both the time-scale relation and the volume-scale relation, or its continuum counterpart for time averaging. This procedure was validated by comparing simultaneous velocity and mass flux measurements with laser phase-Doppler measurements.
Average Consensus on General Strongly Connected Digraphs
Cai, Kai
2012-01-01
We study the average consensus problem of multi-agent systems for general network topologies with unidirectional information flow. We propose two (linear) distributed algorithms, deterministic and gossip, respectively for the cases where the inter-agent communication is synchronous and asynchronous. Our contribution is that in both cases, the developed algorithms guarantee state averaging on arbitrary strongly connected digraphs; in particular, this graphical condition does not require that the network be balanced or symmetric, thereby extending many previous results in the literature. The key novelty of our approach is to augment each agent with an additional variable, called "surplus", whose function is to locally record individual state updates. For convergence analysis, we employ graph-theoretic and nonnegative matrix tools, with eigenvalue perturbation theory playing a crucial role.
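The surplus mechanism is specific to this paper; as a hedged illustration of why averaging on an unbalanced digraph needs extra bookkeeping, the sketch below uses the classic push-sum (ratio consensus) scheme, a different, well-known algorithm that also achieves average consensus on any strongly connected digraph. The 4-node graph, initial values, and iteration count are invented for the demo.

```python
import numpy as np

# Out-neighbor lists (self-loops included) of a strongly connected,
# unbalanced 4-node digraph, invented for this demo.
out_nbrs = {0: [0, 1, 2], 1: [1, 2], 2: [2, 3], 3: [3, 0]}
n = len(out_nbrs)

# Column-stochastic mixing matrix: node i splits its mass equally
# among its out-neighbors, so every column sums to 1.
P = np.zeros((n, n))
for i, nbrs in out_nbrs.items():
    for j in nbrs:
        P[j, i] = 1.0 / len(nbrs)

x = np.array([1.0, 2.0, 3.0, 4.0])  # initial states; true average is 2.5
w = np.ones(n)                      # companion weights, initialized to 1

for _ in range(300):
    x = P @ x   # total state mass is conserved (columns sum to 1) ...
    w = P @ w   # ... and so is the total weight mass

ratios = x / w  # every entry converges to the network-wide average
```

The key point mirrors the abstract: an unbalanced digraph destroys sum conservation for naive consensus, so a second tracked quantity (here the weights w, in the paper the surpluses) is needed to recover the exact average.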
Rigid shape matching by segmentation averaging.
Wang, Hongzhi; Oliensis, John
2010-04-01
We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking. PMID:20224119
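The paper's closed-form average over all segmentations is its own contribution; as a hedged toy illustration of the "central segmentation minimizing the average distance" idea, the sketch below picks the medoid among a few binary masks under pixelwise Hamming distance. The masks and distance choice are invented for the demo.

```python
import numpy as np

def central_segmentation(masks):
    """Return the mask minimizing average Hamming distance to the others.

    A medoid standing in for the paper's 'central' segmentation: the
    paper averages over all segmentations in closed form, while this toy
    only chooses among a given set of binary masks."""
    masks = np.asarray(masks, dtype=bool)
    m = len(masks)
    # Pairwise disagreement counts between flattened masks.
    d = np.array([[np.sum(a ^ b) for b in masks] for a in masks])
    avg = d.sum(axis=1) / (m - 1)       # mean distance to the others
    return masks[np.argmin(avg)], avg

# Three noisy segmentations of the same 4x4 object (invented data).
base = np.zeros((4, 4), dtype=bool)
base[1:3, 1:3] = True
m0 = base.copy(); m0[0, 0] = m0[0, 1] = True   # two spurious pixels
m1 = base.copy()                               # clean segmentation
m2 = base.copy(); m2[3, 2] = m2[3, 3] = True   # two other spurious pixels

center, avg = central_segmentation([m0, m1, m2])
# avg is [3.0, 2.0, 3.0]: the clean mask m1 is the central one
```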
jModelTest: phylogenetic model averaging.
Posada, David
2008-07-01
jModelTest is a new program for the statistical selection of models of nucleotide substitution based on "Phyml" (Guindon and Gascuel 2003. A simple, fast, and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst Biol. 52:696-704.). It implements 5 different selection strategies, including "hierarchical and dynamical likelihood ratio tests," the "Akaike information criterion," the "Bayesian information criterion," and a "decision-theoretic performance-based" approach. This program also calculates the relative importance and model-averaged estimates of substitution parameters, including a model-averaged estimate of the phylogeny. jModelTest is written in Java and runs under Mac OSX, Windows, and Unix systems with a Java Runtime Environment installed. The program, including documentation, can be freely downloaded from the software section at http://darwin.uvigo.es. PMID:18397919
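The model-averaging step jModelTest performs over substitution models can be sketched with standard Akaike weights. The log-likelihoods, parameter counts, and per-model kappa estimates below are invented for illustration, not real jModelTest output.

```python
import math

# Hypothetical model fits: (name, maximized log-likelihood, free parameters).
fits = [("JC", -1050.0, 1), ("HKY", -1020.0, 5), ("GTR", -1018.0, 9)]

aic = {name: 2 * k - 2 * lnL for name, lnL, k in fits}
best = min(aic.values())
# Akaike weights: relative support for each model given the data.
raw = {m: math.exp(-0.5 * (a - best)) for m, a in aic.items()}
total = sum(raw.values())
weights = {m: r / total for m, r in raw.items()}

# Model-averaged estimate of a substitution parameter (here a
# hypothetical transition/transversion ratio fitted under each model).
kappa = {"JC": 1.0, "HKY": 2.1, "GTR": 2.3}
kappa_avg = sum(weights[m] * kappa[m] for m in weights)
```

The averaged estimate is dominated by the well-supported models, which is the point of reporting model-averaged parameters rather than the single best model's values.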
Stochastic Games with Average Payoff Criterion
Ghosh, M. K. [Department of Mathematics, Indian Institute of Science, Bangalore 560012 (India); Bagchi, A. [Department of Applied Mathematics, University of Twente, P.O. Box 217, 7500 AE Enschede (Netherlands)
1998-11-15
We study two-person stochastic games with Polish state and compact action spaces and with an average payoff criterion under a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case, the existence of a Nash equilibrium in stationary strategies is established under certain separability conditions.
The Average Velocity in a Queue
ERIC Educational Resources Information Center
Frette, Vidar
2009-01-01
A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…
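The no-overtaking rule in this problem makes each car's actual speed the minimum of its own preferred speed and those of all cars ahead; a hedged sketch with invented speeds:

```python
# Car i travels at the minimum of its own preferred speed and the
# preferred speeds of all cars ahead of it (no overtaking allowed).
def queue_speeds(preferred):
    actual, slowest = [], float("inf")
    for v in preferred:            # listed from the front of the queue
        slowest = min(slowest, v)  # a slow car caps everyone behind it
        actual.append(slowest)
    return actual

preferred = [90, 60, 80, 50, 70]       # km/h, front to back (invented)
actual = queue_speeds(preferred)       # [90, 60, 60, 50, 50]
avg = sum(actual) / len(actual)        # 62.0, below the preferred mean 70.0
```

This makes the non-triviality of the mean concrete: the average actual velocity is a running-minimum average, always at or below the mean of the preferred speeds.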
Method of Successive Weighted Averages (MSWA) and Self-Regulated Averaging Schemes for Solving
Levinson, David M.
patterns from the later iterations. We further develop a self-regulated averaging method, in which the step and auxiliary point. The proposed step size sequences in both MSWA and self-regulated averaging method satisfy by minimizing Fisk's (1980) objective function. Maher (1998) developed alternative optimized step size choice
Code of Federal Regulations, 2013 CFR
2013-07-01
... false Calculation of average fuel economy and average carbon-related exhaust emissions...AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF...Determining Manufacturer's Average Fuel Economy and Manufacturer's Average...
Code of Federal Regulations, 2012 CFR
2012-07-01
... false Calculation of average fuel economy and average carbon-related exhaust emissions...AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF...Determining Manufacturer's Average Fuel Economy and Manufacturer's Average...
The average free volume model for liquids
Yang Yu; Reinhard Krause-Rehberg
2014-04-02
In this work, the molar volume thermal expansion coefficient of 59 room temperature ionic liquids is compared with their van der Waals volume Vw. A regular correlation can be discerned between the two quantities. An average free volume model, which treats the particles as hard cores with attractive forces, is proposed to explain this correlation. A combination of free volume and the Lennard-Jones potential is applied to explain the physical behavior of liquids. Some typical simple liquids (inorganic, organic, metallic, and salt) are introduced to verify this hypothesis. Good agreement between theoretical prediction and experimental data is obtained.
Path-average kernels for long wavelength traveltime tomography
NASA Astrophysics Data System (ADS)
Mosca, I.; Trampert, J.
2009-05-01
Although much effort goes into improving the resolution of tomographic models, investigating their quality has only just started. Probabilistic tomography provides a framework for the quantitative assessment of uncertainties of long-wavelength tomographic models. So far, this technique has been used to invert maps of surface wave phase velocities and normal-mode splitting functions. Including body waves would substantially increase the depth resolution in the lowermost mantle. In surface wave tomography, the construction of phase velocity maps and splitting functions is a well-defined inverse problem, and the depth inversion is less well constrained but characterized by a small number of dimensions suitable for a Monte Carlo search. Traveltime tomography, by contrast, is mostly based on ray theory covering the 3-D Earth, so the dimension of the inverse problem is too large for a Monte Carlo search. The ray-mode duality suggests applying the path-average approximation to body wave traveltimes. In this way the measured traveltime residual as a function of ray parameter can be inverted using path-average kernels, which depend on depth only, similar to surface wave tomography. We investigate the validity of the path-average approximation for delay times in both the forward and the inverse problem using the velocity model S20RTS as well as random models. We numerically illustrate the precision of such kernels compared with ray-theoretic and finite-frequency ones. We further invert traveltime residuals, calculated from Fermat rays, using the path-average kernels. We find that the agreement between classical ray theory and path-average theory is good for long wavelength structures. We suggest that for mapping long wavelength structures, body waves can be inverted in two steps, similar to surface waves, where the ray parameter and the vertical traveltime play the role of frequency and phase velocity, respectively.
Estimates of Random Error in Satellite Rainfall Averages
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.
2003-01-01
Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.
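The sampling-error idea above lends itself to a small Monte Carlo sketch: compare a "true" monthly mean rain rate with the mean of intermittent overpass snapshots. The toy rain process (independent rainy hours with exponential intensities) and the revisit intervals are invented assumptions, not the authors' error model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampling_rmse(revisit_hours, n_trials=500):
    """Monte Carlo rms error of a monthly mean built from snapshots
    taken every `revisit_hours` hours of a synthetic hourly rain series."""
    hours = 30 * 24                                   # one month of 'truth'
    errs = []
    for _ in range(n_trials):
        raining = rng.random(hours) < 0.1             # 10% rainy hours
        rain = raining * rng.exponential(1.0, hours)  # rain intensities
        truth = rain.mean()
        sampled = rain[::revisit_hours].mean()        # overpass snapshots
        errs.append(sampled - truth)
    return float(np.sqrt(np.mean(np.square(errs))))

err_12h = sampling_rmse(12)   # twice-daily coverage
err_48h = sampling_rmse(48)   # every-other-day coverage
# sparser coverage yields a larger rms sampling error
```

With independent hourly values, the rms sampling error scales roughly with one over the square root of the number of overpasses, which is why the intermittent coverage discussed above dominates the monthly-average error budget.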
Group Averaging for de Sitter free fields
Donald Marolf; Ian A. Morrison
2008-10-28
Perturbative gravity about global de Sitter space is subject to linearization-stability constraints. Such constraints imply that quantum states of matter fields couple consistently to gravity only if the matter state has vanishing de Sitter charges; i.e., only if the state is invariant under the symmetries of de Sitter space. As noted by Higuchi, the usual Fock spaces for matter fields contain no de Sitter-invariant states except the vacuum, though a new Hilbert space of de Sitter-invariant states can be constructed via so-called group-averaging techniques. We study this construction for free scalar fields of arbitrary positive mass in any dimension, and for linear vector and tensor gauge fields in any dimension. Our main result is to show in each case that group averaging converges for states containing a sufficient number of particles. We consider general N-particle states with smooth wavefunctions, though we obtain somewhat stronger results when the wavefunctions are finite linear combinations of de Sitter harmonics. Along the way we obtain explicit expressions for general boost matrix elements in a familiar basis.
Radoslav Bozinoski
2009-01-01
Significant research has been performed over the last several years on understanding the unsteady aerodynamics of various fluid flows. Much of this work has focused on quantifying the unsteady, three-dimensional flow field effects which have proven vital to the accurate prediction of many fluid and aerodynamic problems. Up until recently, engineers have predominantly relied on steady-state simulations to analyze the
The Multidimensional Self-Concept: A Comparison of Gifted and Average-Ability Adolescents
Michael C. Pyryt; Sal Mendaglio
1994-01-01
The purpose of this paper is to describe the self-concept of gifted and average-ability adolescents from a multidimensional perspective. Four dimensions of self-concept (academic, social, athletic, and social) were examined using an instrument that incorporates three theoretical perspectives (reflected appraisals, social comparison, and attribution). Ninety-eight junior high school students from a large urban centre in Western Canada participated in the
Tongue Motion Averaging from Contour Sequences
ERIC Educational Resources Information Center
Li, Min; Kambhamettu, Chandra; Stone, Maureen
2005-01-01
In this paper, a method to obtain the best representation of a speech motion from several repetitions is presented. Each repetition is a representation of the same speech, captured at different times by a sequence of ultrasound images, and is composed of a set of 2D spatio-temporal contours. These 2D contours in different repetitions are time aligned…
Average Transverse Momentum Quantities Approaching the Lightfront
NASA Astrophysics Data System (ADS)
Boer, Daniël
2015-01-01
In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the p_T broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of such integrated quantities, using Bessel-weighting and rapidity cut-offs, with the conventional definitions as limiting cases. The regularized quantities are given in terms of integrals over the TMDs of interest that are well-defined and moreover have the advantage of being amenable to lattice evaluations.
Average Gait Differential Image Based Human Recognition
Chen, Jinyan; Liu, Jiansheng
2014-01-01
The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it preserves both the kinetic and the static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption for gait-based recognition. PMID:24895648
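The accumulation of adjacent-frame silhouette differences described above can be sketched in a few lines; the tiny synthetic frames are invented for the demo, and the paper's full pipeline additionally applies 2DPCA to the resulting image.

```python
import numpy as np

def agdi(silhouettes):
    """Average gait differential image: the mean absolute difference of
    adjacent silhouette frames (a minimal sketch of the AGDI idea)."""
    frames = np.asarray(silhouettes, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))  # |frame_{t+1} - frame_t|
    return diffs.mean(axis=0)

# Tiny synthetic sequence: one 'moving limb' pixel sweeping across a row.
f0 = np.array([[0, 0, 0], [1, 0, 0], [0, 0, 0]])
f1 = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
f2 = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]])
img = agdi([f0, f1, f2])
# pixels the limb passes through accumulate difference;
# the static background stays exactly 0
```

This shows the claimed property directly: moving regions (kinetic information) light up, while regions that never change remain zero, preserving the static outline by its absence of difference.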
Dissociable effects of attention and crowding on orientation averaging
Dakin, Steven C.; Bex, Peter J.; Cass, John R.; Watt, Roger J.
2010-01-01
It has been proposed that visual crowding—the breakdown in recognition that occurs when objects are presented in cluttered scenes—reflects a limit imposed by visual attention. We examined this idea in the context of an orientation averaging task, having subjects judge the mean orientation of a set of oriented signal elements either in isolation, or “crowded” by nearby randomly oriented elements. In some conditions, subjects also had to perform an attentionally demanding secondary task. By measuring performance at different levels of signal orientation variability, we show that crowding increases subjects’ local uncertainty (about the orientation of individual elements) but that diverting attention reduces their global efficiency (the effective number of elements they can average over). Furthermore, performance with the same stimulus-sequence, presented multiple times, reveals that crowding does not induce more stimulus-independent variability (as would be predicted by some accounts based on attention). We conclude that crowding and attentional load have dissociable perceptual consequences for orientation averaging, suggesting distinct neural mechanisms for both. For the task we examined, attention can modulate the effects of crowding by changing the efficiency with which information is analyzed by the visual system but since crowding changes local uncertainty, not efficiency, crowding does not reflect an attentional limit. PMID:20053091
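The orientation-averaging task itself (not the authors' crowding or attention model) can be sketched with standard axial statistics: orientations repeat every 180 degrees, so one doubles the angles, averages on the circle, and halves the result. The example angles are invented.

```python
import math

def mean_orientation(degrees):
    """Mean of orientations with 180-degree periodicity: double the
    angles, take the circular mean, then halve (standard axial
    statistics; a sketch of the averaging task, not the observer model)."""
    s = sum(math.sin(math.radians(2 * d)) for d in degrees)
    c = sum(math.cos(math.radians(2 * d)) for d in degrees)
    return math.degrees(math.atan2(s, c)) / 2 % 180

m1 = mean_orientation([10, 20, 30])   # 20.0: the plain case
m2 = mean_orientation([170, 10])      # ~0 (mod 180); a naive arithmetic
                                      # mean would wrongly give 90
```

The doubling step is what makes pooling over elements well defined, which is the "global efficiency" quantity the study manipulates.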
Averaged null energy condition in Loop Quantum Cosmology
Li-Fang Li; Jian-Yang Zhu
2008-12-18
Wormholes and time machines are very interesting objects in general relativity. However, they require exotic matter, which is impossible to obtain at the classical level. But if the quantum effects of gravity are introduced into the stress-energy tensor, these peculiar objects can be constructed self-consistently. Fortunately, loop quantum cosmology (LQC) has the potential to serve as a bridge connecting classical theory and quantum gravity, and therefore provides a simple way to study quantum effects in the semiclassical case. As is well known, loop quantum cosmology is very successful in dealing with the behavior of the early universe. In the early stage, once quantum effects are taken into consideration, inflation is natural because every kind of local energy condition is violated. Similar to the inflationary universe, violation of the averaged null energy condition is a necessary condition for traversable wormholes. In this paper, we investigate the averaged null energy condition in LQC in the framework of the effective Hamiltonian, and find that LQC does violate the averaged null energy condition in the massless-scalar-field-coupled model.
Spatially-averaged oscillatory flow over a rough bed
NASA Astrophysics Data System (ADS)
Coleman, Stephen E.; Nikora, Vladimir I.; Schlicke, Ted
2008-09-01
A rigorous framework involving flow decomposition and averaging is presented, within which the mechanics of rough-(e.g., rippled-) bed oscillatory flows can be better interpreted and understood. Spatially-averaged equations for conservation of fluid mass and momentum are developed for analyses of rapidly-changing bed conditions, e.g., for growing ripples. Where repeated observations of the changing bed conditions are available, the ensemble and spatially-averaged versions of these equations can be used for more detailed analyses of the flow dynamics. The double-averaged (in space and phase or time) equations of mass and momentum conservation are shown to be appropriate for analyses of flows over fixed rough beds and equilibrium ripples. The value of the present framework is highlighted herein by its application to PIV-measured oscillatory-flow velocities, stresses and vorticities over growing and equilibrium wave-induced intermediate-depth orbital-vortex ripples. In particular, discussions are provided regarding the mechanisms by which gravity-induced and pressure-gradient-induced momentum is transferred to the bed, with the analysis framework naturally and explicitly including the combination of the full range of fluid stresses and boundary form and skin friction drag that is important in defining the flow mechanics.
MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
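The periodic structure behind PARMA models can be sketched with a minimal periodic AR(1), x_t = phi[t mod s] * x_{t-1} + e_t, with the moving-average part omitted for brevity. The coefficients, season count, and seed below are invented, and the simple season-by-season regression shown is a stand-in for the paper's likelihood and Yule-Walker estimators.

```python
import numpy as np

rng = np.random.default_rng(42)
phi = [0.9, 0.1, 0.5, 0.7]     # one AR coefficient per 'season' (invented)
n = 5000

# Simulate the periodic AR(1): the persistence of the series changes
# with the season of the time index.
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t % len(phi)] * x[t - 1] + rng.standard_normal()

# Season-by-season least-squares regression of x_t on x_{t-1} recovers
# the periodic coefficients, which a single pooled AR(1) fit would blur.
t = np.arange(1, n)
phi_hat = []
for season in range(len(phi)):
    sel = t[t % len(phi) == season]
    phi_hat.append(np.sum(x[sel] * x[sel - 1]) / np.sum(x[sel - 1] ** 2))
# phi_hat approaches [0.9, 0.1, 0.5, 0.7]
```

The non-stationarity the abstract refers to is visible here: the lag-1 dependence genuinely differs by season, so no single set of ARMA parameters can describe the whole series.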
Constructing the Average Natural History of HIV-1 Infection
NASA Astrophysics Data System (ADS)
Diambra, L.; Capurro, A.; Malta, C. P.
2007-05-01
Many aspects of the natural course of HIV-1 infection remain unclear, despite important efforts towards understanding its long-term dynamics. Using a scaling approach that places the progression markers (viral load, CD4+, CD8+) of many individuals on a single average natural course of disease progression, we introduce the concepts of inter-individual scaling and time scaling. Our quantitative assessment of the natural course of HIV-1 infection indicates that the dynamics of the evolution for individuals who developed AIDS (opportunistic infections) differ from those of individuals who did not develop AIDS. This means that the rate of progression is not relevant for the evolution of the infection.
18 CFR 301.7 - Average System Cost methodology functionalization.
Code of Federal Regulations, 2010 CFR
2010-04-01
...2010-04-01 false Average System Cost methodology functionalization. 301...MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO...POWER ACT § 301.7 Average System Cost methodology functionalization....
40 CFR 1033.710 - Averaging emission credits.
Code of Federal Regulations, 2013 CFR
2013-07-01
...2013-07-01 false Averaging emission credits. 1033.710 Section 1033.710 Protection of Environment ENVIRONMENTAL...Averaging, Banking, and Trading for Certification § 1033.710 Averaging emission credits. (a)...
40 CFR 1033.710 - Averaging emission credits.
Code of Federal Regulations, 2010 CFR
2010-07-01
...2010-07-01 false Averaging emission credits. 1033.710 Section 1033.710 Protection of Environment ENVIRONMENTAL...Averaging, Banking, and Trading for Certification § 1033.710 Averaging emission credits. (a)...
40 CFR 1033.710 - Averaging emission credits.
Code of Federal Regulations, 2012 CFR
2012-07-01
...2012-07-01 false Averaging emission credits. 1033.710 Section 1033.710 Protection of Environment ENVIRONMENTAL...Averaging, Banking, and Trading for Certification § 1033.710 Averaging emission credits. (a)...
40 CFR 1033.710 - Averaging emission credits.
Code of Federal Regulations, 2011 CFR
2011-07-01
...2011-07-01 false Averaging emission credits. 1033.710 Section 1033.710 Protection of Environment ENVIRONMENTAL...Averaging, Banking, and Trading for Certification § 1033.710 Averaging emission credits. (a)...
40 CFR 1033.710 - Averaging emission credits.
Code of Federal Regulations, 2014 CFR
2014-07-01
...2014-07-01 false Averaging emission credits. 1033.710 Section 1033.710 Protection of Environment ENVIRONMENTAL...Averaging, Banking, and Trading for Certification § 1033.710 Averaging emission credits. (a)...
Light-cone averages in a swiss-cheese universe
Valerio Marra; Edward W. Kolb; Sabino Matarrese
2007-12-21
We analyze a toy swiss-cheese cosmological model to study the averaging problem. In our model, the cheese is the EdS model and the holes are constructed from an LTB solution. We study the propagation of photons in the swiss-cheese model, and find a phenomenological homogeneous model to describe observables. Following a fitting procedure based on light-cone averages, we find that the expansion scalar is unaffected by the inhomogeneities. This is because of spherical symmetry. However, the light-cone average of the density as a function of redshift is affected by inhomogeneities. The effect arises because, as the universe evolves, a photon spends more and more time in the (large) voids than in the (thin) high-density structures. The phenomenological homogeneous model describing the light-cone average of the density is similar to the concordance model. Although the sole source in the swiss-cheese model is matter, the phenomenological homogeneous model behaves as if it has a dark-energy component. Finally, we study how the equation of state of the phenomenological model depends on the size of the inhomogeneities, and find that the equation-of-state parameters w_0 and w_a follow a power-law dependence with a scaling exponent equal to unity. That is, the equation of state depends linearly on the distance the photon travels through voids. We conclude that within our toy model, the holes must have a present size of about 250 Mpc to be able to mimic the concordance model.
Light-cone averages in a Swiss-cheese universe
NASA Astrophysics Data System (ADS)
Marra, Valerio; Kolb, Edward W.; Matarrese, Sabino
2008-01-01
We analyze a toy Swiss-cheese cosmological model to study the averaging problem. In our Swiss-cheese model, the cheese is a spatially flat, matter only, Friedmann-Robertson-Walker solution (i.e., the Einstein-de Sitter model), and the holes are constructed from a Lemaître-Tolman-Bondi solution of Einstein's equations. We study the propagation of photons in the Swiss-cheese model, and find a phenomenological homogeneous model to describe observables. Following a fitting procedure based on light-cone averages, we find that the expansion scalar is unaffected by the inhomogeneities (i.e., the phenomenological homogeneous model is the cheese model). This is because of the spherical symmetry of the model; it is unclear whether the expansion scalar will be affected by nonspherical voids. However, the light-cone average of the density as a function of redshift is affected by inhomogeneities. The effect arises because, as the universe evolves, a photon spends more and more time in the (large) voids than in the (thin) high-density structures. The phenomenological homogeneous model describing the light-cone average of the density is similar to the ΛCDM concordance model. It is interesting that, although the sole source in the Swiss-cheese model is matter, the phenomenological homogeneous model behaves as if it has a dark-energy component. Finally, we study how the equation of state of the phenomenological homogeneous model depends on the size of the inhomogeneities, and find that the equation-of-state parameters w0 and wa follow a power-law dependence with a scaling exponent equal to unity. That is, the equation of state depends linearly on the distance the photon travels through voids. We conclude that, within our toy model, the holes must have a present size of about 250 Mpc to be able to mimic the concordance model.
40 CFR Figure 1 to Subpart Qqq of... - Data Summary Sheet for Determination of Average Opacity
Code of Federal Regulations, 2012 CFR
2012-07-01
... 13 2012-07-01 2012-07-01 false Data Summary Sheet for Determination of Average Opacity 1 Figure 1 to Subpart...Fig. 1 Figure 1 to Subpart QQQ of Part 63—Data Summary Sheet for Determination of Average Opacity Clock time Number...
40 CFR Figure 1 to Subpart Qqq of... - Data Summary Sheet for Determination of Average Opacity
Code of Federal Regulations, 2014 CFR
2014-07-01
... 13 2014-07-01 2014-07-01 false Data Summary Sheet for Determination of Average Opacity 1 Figure 1 to Subpart...Fig. 1 Figure 1 to Subpart QQQ of Part 63—Data Summary Sheet for Determination of Average Opacity Clock time Number...
40 CFR Figure 1 to Subpart Qqq of... - Data Summary Sheet for Determination of Average Opacity
Code of Federal Regulations, 2013 CFR
2013-07-01
... 13 2013-07-01 2012-07-01 true Data Summary Sheet for Determination of Average Opacity 1 Figure 1 to Subpart...Fig. 1 Figure 1 to Subpart QQQ of Part 63—Data Summary Sheet for Determination of Average Opacity Clock time Number...
The Event-Related Low-Frequency Activity of Highly and Average Intelligent Children
ERIC Educational Resources Information Center
Liu, Tongran; Shi, Jiannong; Zhao, Daheng; Yang, Jie
2008-01-01
Using time-frequency analysis techniques, we investigated the event-related low-frequency (delta: 0.5-4 Hz; theta: 4-8 Hz) activity in auditory event-related potential (ERP) data of highly and average intelligent children; 18 intellectually gifted children and 18 intellectually average children participated in the present study. Present findings…
A comparison of spatial averaging and Cadzow's method for array wavenumber estimation
Harris, D.B.; Clark, G.A.
1989-10-31
We are concerned with resolving superimposed, correlated seismic waves with small-aperture arrays. The limited time-bandwidth product of transient seismic signals complicates the task. We examine the use of MUSIC and Cadzow's ML estimator with and without subarray averaging for resolution potential. A case study with real data favors the MUSIC algorithm and a multiple event covariance averaging scheme.
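The "subarray averaging" compared above can be sketched as forward spatial smoothing: fully coherent (superimposed) wavefronts collapse the full-array covariance to rank 1, defeating subspace methods such as MUSIC, and averaging the covariances of overlapping subarrays restores the signal-subspace rank. The array geometry and arrival angles below are invented for the demo.

```python
import numpy as np

def steering(theta_deg, n):
    """Steering vector of a half-wavelength-spaced uniform linear array."""
    phase = np.pi * np.sin(np.radians(theta_deg))
    return np.exp(1j * phase * np.arange(n))

N, M = 8, 5                              # full array and subarray sizes
v = steering(-10, N) + steering(25, N)   # two fully coherent arrivals
R = np.outer(v, v.conj())                # noiseless covariance: rank 1

# Forward spatial smoothing: average the L = N - M + 1 overlapping
# subarray covariances to decorrelate the coherent pair.
L = N - M + 1
R_smooth = sum(R[l:l + M, l:l + M] for l in range(L)) / L

ranks = (np.linalg.matrix_rank(R), np.linalg.matrix_rank(R_smooth))
# ranks == (1, 2): after smoothing, a subspace method like MUSIC can
# again resolve both arrivals
```

The cost, visible in the code, is aperture: the smoothed covariance is only M x M, which matters for the small-aperture arrays the abstract is concerned with.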
University of Waterloo Weather Station Summary May 2013 A warm month with average precipitation
Waterloo, University of
University of Waterloo Weather Station Summary, May 2013: a warm month with average precipitation. Most of the precipitation for the month happened during just 3 days (the 10th, 28th and 29th). Even with this average month, we are still about 100 mm over the precipitation we expect at this time
Average structure of incommensurately modulated monoclinic lazurite
Bolotina, N. B.; Rastsvetaeva, R. K. [Russian Academy of Sciences, Shubnikov Institute of Crystallography (Russian Federation)], E-mail: rast@ns.crys.ras.ru; Sapozhnikov, A. N. [Russian Academy of Sciences, Vinogradov Institute of Geochemistry, Siberian Division (Russian Federation)
2006-07-15
The average structure of the monoclinic modification of lazurite Ca1.26Na6.63K0.04[Al6Si6O24](SO4)1.53S0.99Cl0.05 (discovered in the Lake Baikal region), incommensurately modulated along the c axis, is solved by the single-crystal X-ray diffraction method. The unit-cell parameters are a = 9.069(1) Å, b = 12.868(1) Å, c = 12.872(1) Å, γ = 90.19(1)°, sp. gr. Pa, R = 6.9%, 2057 reflections. The AlO4 and SiO4 tetrahedra form a partially ordered framework. The positions in the cavities of the framework are split and randomly occupied by Na and Ca atoms and the SO4, S2, S3, and SO2 anionic groups. The structure of the mineral is compared with the superstructure of triclinic lazurite. Conclusions are drawn about the causes of the incommensurate modulation in monoclinic and other lazurites.
Go with the Flow, Average Holographic Universe
George F. Smoot
2010-03-31
Gravity is a macroscopic manifestation of a microscopic quantum theory of space-time, just as the theories of elasticity and hydrodynamics are the macroscopic manifestations of the underlying quantum theory of atoms. The connection between gravitation and thermodynamics is long and deep. The observation that space-time has a temperature for accelerating observers and horizons is direct evidence that there are underlying microscopic degrees of freedom. The equipartition of energy (the meaning of temperature) in these modes leads one to anticipate that there is also an associated entropy. When this entropy is maximized on a volume of space-time, one retrieves the metric of space-time (i.e., the equations of gravity, e.g., GR). Since the metric satisfies the extremum in entropy on the volume, the volume integral of the entropy can readily be converted to a surface integral via Gauss's theorem. This surface integral is simply an integral of the macroscopic entropy flow, producing the mean-entropy holographic principle. This approach also has the added value that it naturally dispenses with the cosmological constant/vacuum energy problem in gravity, except perhaps for second order quantum effects on the mean surface entropy.
Average Aggression Collection and Housing Methods
Childress, Michael J.
to understand how behavior varies throughout the lifespan of an individual (behavioral ontogeny) and whether individuals are consistent in their behaviors across situations and contexts (Sih et al. 2004a). Behavioral ontogeny occurs when behaviors change for individuals through time. Variation due to behavioral ontogeny is often shaped by natural
Global Average Brightness Temperature for April 2003
NASA Technical Reports Server (NTRS)
2003-01-01
[figure removed for brevity, see original site] Figure 1
This image shows average temperatures in April, 2003, observed by AIRS at an infrared wavelength that senses either the Earth's surface or any intervening cloud. Similar to a photograph of the planet taken with the camera shutter held open for a month, stationary features are captured while those obscured by moving clouds are blurred. Many continental features stand out boldly, such as our planet's vast deserts, and India, now at the end of its long, clear dry season. Also obvious are the high, cold Tibetan plateau to the north of India, and the mountains of North America. The band of yellow encircling the planet's equator is the Intertropical Convergence Zone (ITCZ), a region of persistent thunderstorms and associated high, cold clouds. The ITCZ merges with the monsoon systems of Africa and South America. Higher latitudes are increasingly obscured by clouds, though some features like the Great Lakes, the British Isles and Korea are apparent. The highest latitudes of Europe and Eurasia are completely obscured by clouds, while Antarctica stands out cold and clear at the bottom of the image.
The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.
NASA Astrophysics Data System (ADS)
Maleika, Wojciech
2015-02-01
The paper presents a new method of digital terrain model (DTM) estimation based on modified moving average interpolation. There are many methods that can be employed in DTM creation, such as kriging, inverse distance weighting, nearest neighbour and moving average. The moving average method is not as precise as the others; hence, it is not commonly used in scientific work. Considering the high accuracy, the relatively low time costs, and the huge amount of measurement data collected by multibeam echosounder, however, the moving average method is definitely one of the most promising approaches. In this study, several variants of this method are analysed. An optimization of the moving average method is proposed based on a new module for selecting neighbouring points during the interpolation process: the "growing radius" approach. Test experiments performed on various multibeam echosounder datasets demonstrate the high potential of this modified moving average method for improved DTM generation.
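The "growing radius" idea can be sketched as follows: start from a small search radius around each grid node and enlarge it until a minimum number of soundings falls inside, then average their depths. This is an illustrative reconstruction, not the authors' implementation; the function name, the doubling rule and the `min_pts` threshold are assumptions.

```python
import math

def growing_radius_average(points, node, r0=1.0, r_max=16.0, min_pts=3):
    """Average the depths of points near `node`, enlarging the search
    radius until at least `min_pts` soundings fall inside it."""
    x0, y0 = node
    r = r0
    while r <= r_max:
        depths = [z for (x, y, z) in points
                  if math.hypot(x - x0, y - y0) <= r]
        if len(depths) >= min_pts:
            return sum(depths) / len(depths)
        r *= 2.0  # grow the radius and retry
    return None  # not enough data even at r_max

# Toy soundings (x, y, depth)
pts = [(0, 0, 10.0), (1, 0, 12.0), (0, 1, 14.0), (9, 9, 30.0)]
print(growing_radius_average(pts, (0.5, 0.5)))  # → 12.0 (averages the 3 near points)
```

Compared with a fixed-radius moving average, the growing radius avoids empty neighbourhoods in sparse regions without over-smoothing dense ones.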
Measurement properties of painDETECT by average pain severity
Cappelleri, Joseph C; Bienen, E Jay; Koduru, Vijaya; Sadosky, Alesia
2014-01-01
Background Since the burden of neuropathic pain (NeP) increases with pain severity, it is important to characterize and quantify pain severity when identifying NeP patients. This study evaluated whether painDETECT, a screening questionnaire to identify patients with NeP, can distinguish pain severity. Materials and methods Subjects (n=614, 55.4% male, 71.8% white, mean age 55.5 years) with confirmed NeP were identified during office visits to US community-based physicians. The Brief Pain Inventory – Short Form stratified subjects by mild (score 0–3, n=110), moderate (score 4–6, n=297), and severe (score 7–10, n=207) average pain. Scores on the nine-item painDETECT (seven pain-symptom items, one pain-course item, one pain-irradiation item) range from −1 to 38 (worst NeP); the seven-item painDETECT scores (only pain symptoms) range from 0 to 35. The ability of painDETECT to discriminate average pain-severity levels, based on the average pain item from the Brief Pain Inventory – Short Form (0–10 scale), was evaluated using analysis of variance or covariance models to obtain unadjusted and adjusted (age, sex, race, ethnicity, time since NeP diagnosis, number of comorbidities) mean painDETECT scores. Cumulative distribution functions on painDETECT scores by average pain severity were compared (Kolmogorov–Smirnov test). Cronbach’s alpha assessed internal consistency reliability. Results Unadjusted mean scores were 15.2 for mild, 19.8 for moderate, and 24.0 for severe pain for the nine items, and 14.3, 18.6, and 22.7, respectively, for the seven items. Adjusted nine-item mean scores for mild, moderate, and severe pain were 17.3, 21.3, and 25.3, respectively; adjusted seven-item mean scores were 16.4, 20.1, and 24.0, respectively. All pair-wise comparisons of scores between pain-severity groups showed sizable and statistically significant differences (P<0.0001). Cumulative distribution functions showed distinct separation between severity levels (P<0.0001). 
Cronbach’s alphas were 0.76 and 0.80 for the nine- and seven-item scales, respectively. Conclusion This study provides strong psychometric evidence on the validity and reliability of painDETECT for distinguishing average pain severity in patients with NeP. PMID:25395867
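Cronbach's alpha, used above for internal consistency, is computed from the per-item variances and the variance of the total score. A minimal sketch of that formula (the item data below are invented, not the study's):

```python
def cronbach_alpha(items):
    """items: list of per-item score lists, all the same length (one
    entry per respondent). alpha = k/(k-1) * (1 - sum(var_i)/var_total)."""
    k = len(items)
    n = len(items[0])
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Three hypothetical items scored by four respondents
items = [[2, 3, 4, 5], [1, 3, 4, 4], [2, 2, 5, 5]]
print(round(cronbach_alpha(items), 3))  # → 0.944
```

Values around 0.76–0.80, as reported for the two painDETECT scales, are conventionally read as acceptable-to-good internal consistency.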
Waterloo, University of
for both temperature and precipitation. The daily high temperatures of May were just a little below average (still within the average range). Also on the edge of the average range was the precipitation (63.6 mm) for the month, but this time on the low end of the range. About 40 mm of this precipitation came down between
To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space
Khrennikov, Andrei [International Center for Mathematical Modeling in Physics and Cognitive Sciences, University of Vaexjoe, Vaexjoe S-35195 (Sweden)
2007-01-15
We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'.
Investigating spatial specificity and data averaging in MEG
Brookes, Matthew J.; Zumer, Johanna M.; Stevenson, Claire M.; Hale, Joanne R.; Barnes, Gareth R.; Vrba, Jiri; Morris, Peter G.
2010-01-01
This study shows that the spatial specificity of MEG beamformer estimates of electrical activity can be affected significantly by the way in which covariance estimates are calculated. We define spatial specificity as the ability to extract independent timecourse estimates of electrical brain activity from two separate brain locations in close proximity. Previous analytical and simulated results have shown that beamformer estimates are affected by narrowing the time-frequency window in which covariance estimates are made. Here we build on this by experimentally validating previous results and by investigating the effect of data averaging prior to covariance estimation. In appropriate circumstances, we show that averaging has a marked effect on spatial specificity. However, the averaging process results in ill-conditioned covariance matrices, thus necessitating a suitable matrix regularisation strategy, an example of which is described. We apply our findings to an MEG retinotopic mapping paradigm. A moving visual stimulus is used to elicit brain activation at different retinotopic locations in the visual cortex. This gives the impression of a moving electrical dipolar source in the brain. We show that if appropriate beamformer optimisation is applied, the moving source can be tracked in the cortex. In addition to spatial reconstruction of the moving source, we show that timecourse estimates can be extracted from neighbouring locations of interest in the visual cortex. If appropriate methodology is employed, the sequential activation of separate retinotopic locations can be observed. The retinotopic paradigm represents an ideal platform to test the spatial specificity of source localisation strategies. We suggest that future comparisons of MEG source localisation techniques (e.g. beamformer, minimum norm, Bayesian) could be made using this retinotopic mapping paradigm. PMID:19635575
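One common regularisation for an ill-conditioned covariance matrix is diagonal loading: add a small multiple of the identity before inversion. The sketch below is illustrative only; it is not necessarily the strategy the authors describe, and the loading factor `mu` is an assumption.

```python
import numpy as np

def diagonal_loading(cov, mu=0.05):
    """Regularise a covariance matrix by adding mu * mean(diag) * I,
    which bounds the condition number at the cost of a small bias."""
    n = cov.shape[0]
    return cov + mu * np.trace(cov) / n * np.eye(n)

# Averaging leaves too few effective samples: covariance is rank-deficient
x = np.random.default_rng(0).standard_normal((3, 10))  # 3 trials, 10 sensors
cov = np.cov(x.T)                      # 10x10 but rank <= 2: singular
reg = diagonal_loading(cov)
w = np.linalg.solve(reg, np.ones(10))  # now invertible, so weights exist
```

The trade-off is classic for beamformers: heavier loading stabilises the inverse but blurs spatial resolution, which is why a principled choice of `mu` matters.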
Multifractal detrending moving-average cross-correlation analysis
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Zhou, Wei-Xing
2011-07-01
There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross correlations. Multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on the detrended fluctuation analysis (MFXDFA) method. We develop in this work a class of MFDCCA algorithms based on the detrending moving-average analysis, called MFXDMA. The performances of the proposed MFXDMA algorithms are compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, which have theoretical expressions of their multifractal nature. In all cases, the scaling exponents hxy extracted from the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between the two time series, and the MFXDFA and centered MFXDMA algorithms have comparable performances, which outperform the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparable performances, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q<0 and underperforms when q>0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. For the returns, the centered MFXDMA algorithm gives the best estimates of hxy(q) since its hxy(2) is closest to 0.5, as expected, and the MFXDFA algorithm has the second-best performance. For the volatilities, the forward and backward MFXDMA algorithms give similar results, while the centered MFXDMA and the MFXDFA algorithms fail to extract a rational multifractal nature.
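The detrending-moving-average step underlying these algorithms removes a moving average from the cumulative profile before measuring fluctuations. A minimal one-signal sketch of the backward variant follows (the full MFXDMA averages products of two such residuals over moving windows and moment orders q; the function name and window rule here are mine):

```python
import numpy as np

def dma_fluctuation(x, n):
    """Backward detrending moving average: build the cumulative profile,
    subtract its n-point backward moving average, and return the RMS of
    the residual over the region where the window is full."""
    y = np.cumsum(x - np.mean(x))
    trend = np.array([y[max(0, i - n + 1): i + 1].mean()
                      for i in range(len(y))])
    resid = y[n - 1:] - trend[n - 1:]
    return float(np.sqrt(np.mean(resid ** 2)))

# For uncorrelated noise the fluctuation grows roughly as n**0.5
x = np.random.default_rng(0).standard_normal(4096)
f4, f32 = dma_fluctuation(x, 4), dma_fluctuation(x, 32)
```

Fitting log F(n) against log n over many window sizes yields the scaling exponent; the forward, backward and centered variants differ only in where the moving-average window sits relative to each point.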
Solar flare proton spectrum averaged over the last 5000 years
NASA Technical Reports Server (NTRS)
Hoyt, H. P., Jr.; Walker, R. M.; Zimmerman, D. W.
1973-01-01
The solar flare differential energy spectrum and integral proton flux above 10 MeV averaged over the last several thousand years are derived from thermoluminescence (TL) data. The dose-rate depth profile is obtained using a new TL equilibrium technique which is independent of the TL decay kinetics. The dose-rate depth profile produced by solar flare protons with a differential energy spectrum of the form dJ/dE = KE to the minus gamma power is calculated for arbitrary gamma. The best fit to the TL data in rock 14310 is obtained for gamma = 2.3 plus or minus 0.2 and an omnidirectional (4 pi) integral flux above 10 MeV of 40 to 80 prot/sq cm/sec. The TL half-life is determined to be 2000 yr. These results are compared to those for Na-22 (half-life of 2.6 yr) and Al-26 (half-life of 740,000 yr) obtained by Wahlen et al. (1972) and Rancitelli et al. (1972), and it is concluded that the spectral shape and flux of protons in the interval from 25 to 100 MeV is the same within experimental errors when averaged over these three very different time periods.
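With a differential spectrum dJ/dE = K E^(−γ), the integral flux above a threshold E0 is K E0^(1−γ)/(γ−1), so the fitted γ = 2.3 and the quoted flux above 10 MeV fix the normalisation K. A small check of this arithmetic (the mid-range value 70 and the units are illustrative choices, not from the paper):

```python
def integral_flux(K, gamma, E0):
    """Integral of K * E**(-gamma) from E0 to infinity (requires gamma > 1)."""
    return K * E0 ** (1.0 - gamma) / (gamma - 1.0)

gamma = 2.3
J10 = 70.0   # protons/sq cm/sec above 10 MeV, mid-range of the quoted 40-80
K = J10 * (gamma - 1.0) / 10.0 ** (1.0 - gamma)   # invert the formula for K
print(round(integral_flux(K, gamma, 10.0), 6))    # recovers 70.0
```

The same K and γ then give the flux in any band, e.g. 25 to 100 MeV as a difference of two such integrals, which is the interval compared against the radionuclide data.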
Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
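The mass-density-weighted ensemble average that underlies the APDF is the Favre average: for a variable φ over an ensemble, φ̃ = ⟨ρφ⟩/⟨ρ⟩. A minimal numerical sketch (the ensemble values below are invented for illustration):

```python
def favre_average(rho, phi):
    """Density-weighted ensemble average: sum(rho*phi) / sum(rho)."""
    num = sum(r * p for r, p in zip(rho, phi))
    return num / sum(rho)

rho = [1.0, 1.2, 0.8, 1.0]           # densities of four ensemble members
phi = [300.0, 310.0, 290.0, 305.0]   # e.g. temperature samples
print(favre_average(rho, phi))  # density-weighted mean, above the plain mean 301.25
```

In compressible flow the Favre average is preferred because it absorbs the density fluctuations, so the averaged transport equations keep the same form as their laminar counterparts.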
Averaging techniques for steady and unsteady calculations of a transonic fan stage
NASA Technical Reports Server (NTRS)
Wyss, M. L.; Chima, R. V.; Tweedt, D. L.
1993-01-01
It is often desirable to characterize a turbomachinery flow field with a few lumped parameters such as total pressure ratio or stage efficiency. Various averaging schemes may be used to compute these parameters. The momentum, energy, and area averaging schemes are described and compared. The schemes were compared for two computed solutions of the midspan section of a transonic fan stage: a steady averaging-plane solution in which average rotor outflow conditions were used as stator inflow conditions, and an unsteady rotor-stator interaction solution. The solutions were computed on identical grids using similar Navier-Stokes codes and an algebraic turbulence model. The unsteady solution is described, some unsteady flow phenomena are discussed, and the steady pressure distributions are compared. Despite large unsteady pressure fluctuations on the stator surface, the steady pressure distribution matched the average unsteady distribution almost exactly. Stator wake profiles, stator loss coefficient, and stage efficiency were computed for the two solutions with the three averaging schemes and are compared. In general, the energy averaging scheme gave good agreement between the averaging-plane solution and the time-averaged unsteady solution, even though certain phenomena due to unsteady wake migration were neglected.
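The difference between the averaging schemes amounts to the weight applied across the measurement plane: area averaging weights each cell by its area, while momentum (mass-flux) averaging weights by the local mass flux, so a low-momentum wake counts for less. A toy comparison for a total-pressure profile (all values illustrative; energy averaging, which conserves enthalpy flux, is a third weighting not shown):

```python
def area_average(q, dA):
    """Weight each cell value by its area element."""
    return sum(qi * a for qi, a in zip(q, dA)) / sum(dA)

def mass_average(q, rho_u, dA):
    """Weight each cell value by the local mass flux rho*u through it."""
    w = [r * a for r, a in zip(rho_u, dA)]
    return sum(qi * wi for qi, wi in zip(q, w)) / sum(w)

p0 = [1.00, 0.98, 0.80]   # total pressure; the low value sits in a wake
ru = [1.0, 1.0, 0.3]      # mass flux: the wake carries little flow
dA = [1.0, 1.0, 1.0]
aa = area_average(p0, dA)       # the wake drags the area average down
ma = mass_average(p0, ru, dA)   # the mass average discounts the wake
```

This is why the choice of scheme shifts computed loss coefficients and efficiencies even for the same flow field, the effect the abstract quantifies.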
Who is the average patient presenting with prostate cancer?
Greene, Kirsten L; Cowan, Janet E; Cooperberg, Matthew R; Meng, Maxwell V; DuChane, Janeen; Carroll, Peter R
2005-11-01
Prostate cancer screening, diagnosis, and treatment have changed dramatically in the last 20 years. Patients with newly diagnosed prostate cancer have many treatment options available. We attempted to determine how patient demographics and quality of life (QOL) have changed, and we describe the average patient with newly diagnosed prostate cancer in the early 21st century. From the Cancer of the Prostate Strategic Urologic Research Endeavor (CaPSURE) we identified 3003 men with prostate cancer diagnosed between 1997 and 2003 for whom pretreatment demographic and QOL data were available. All patients completed both the University of California-Los Angeles Prostate Cancer Index (UCLA-PCI) and the Rand Medical Outcomes Study 36-Item Short-Form Health Survey (SF-36) as self-administered questionnaires at the time of diagnosis. We compared demographic variables (age at diagnosis, race/ethnicity, education, number of comorbidities, body mass index [BMI], and insurance type), treatment choice, and pretreatment QOL scores on the SF-36 and UCLA-PCI scales for the periods 1997 to 1999 or 2000 to 2003. Stratified analysis by risk category was performed for demographic and QOL data for the 2 periods. Race/ethnicity and insurance demographics were statistically different for the 2 periods. Low-risk patients also showed a statistically increased BMI in the 2000 to 2003 period. Risk category predicted performance on both inventories, with low-risk patients having better function than intermediate-risk patients and high-risk patients in the areas of urinary bother, bowel function and bother, and sexual function and bother, as well as in many general well-being and emotional health scales on the SF-36. We conclude that the "average" prostate cancer patient is white, 65 years of age, overweight, educated at a college level, and has 1 to 2 comorbidities. Patients report average or above-average pretreatment health-related QOL for all scales based on 2 validated instruments. 
In this cohort, more patients chose radical prostatectomy than any other form of treatment. PMID:16194711
Control of average spacing of OMCVD grown gold nanoparticles
NASA Astrophysics Data System (ADS)
Rezaee, Asad
The field of metallic nanostructures and their applications is rapidly expanding. Noble metals such as silver and gold have historically been used to demonstrate plasmon effects due to their strong resonances, which occur in the visible part of the electromagnetic spectrum. Localized surface plasmon resonance (LSPR) produces an enhanced electromagnetic field at the interface between a gold nanoparticle (Au NP) and the surrounding dielectric. This enhanced field can be used for metal-dielectric interface-sensitive optical interactions that form a powerful basis for optical sensing. In addition to the surrounding material, the LSPR spectral position and width depend on the size, shape, and average spacing between these particles. Au NP LSPR-based sensors reach their highest sensitivity with optimized parameters and usually operate by investigating absorption peak shifts. The absorption peak of randomly deposited Au NPs on surfaces is mostly broad. As a result, the absorption peak shifts upon binding of a material onto Au NPs might not be very clear for further analysis. Therefore, novel methods based on three well-known techniques, self-assembly, ion irradiation, and organometallic chemical vapour deposition (OMCVD), are introduced to control the average spacing between Au NPs. In addition to covalent binding and other advantages of OMCVD-grown Au NPs, interesting optical features due to their non-spherical shapes are presented. The first step towards average-spacing control is to uniformly form self-assembled monolayers (SAMs) of octadecyltrichlorosilane (OTS) as resists for OMCVD Au NPs. The formation and optimization of the OTS SAMs are extensively studied. The optimized resist SAMs are ion-irradiated by a focused ion beam (FIB) and by ions generated by a Tandem accelerator. The irradiated areas are refilled with 3-mercaptopropyl-trimethoxysilane (MPTS) to provide nucleation sites for the OMCVD Au NP growth. 
Each step during sample preparation is monitored by using surface characterization methods such as contact angle measurements, ellipsometry, X-ray photoelectron spectroscopy (XPS), scanning electron microscopy (SEM), atomic force microscopy (AFM), Rutherford backscattering spectroscopy (RBS), UV-Visible spectroscopy, and time-of-flight secondary ion mass spectroscopy (ToF-SIMS). Keywords: Absorption, Array, Average Spacing, Binary Mixture, Density, Deposition, Dose, Fm, Gold Nanoparticle, Growth, Ion Irradiation, LSPR, Nanolithography, Nearest Neighbour Distance, OMCVD, Optical Response, OTS, Polarization, Refilling, Resist, SAM, Self-assembly, SEM Image Analysis, Sensing, Surface, Thin Film, Transparent Substrate.
Perception of Average Value in Multiclass Scatterplots
Gleicher, Michael; Correll, Michael; Nothelfer, Christine; Franconeri, Steven
2014-01-01
The visual system can make highly efficient aggregate judgements about a set of objects, with speed roughly independent of the number of objects considered. While there is a rich literature on these mechanisms and their ramifications for visual summarization tasks, this prior work rarely considers more complex tasks requiring multiple judgements over long periods of time, and has not considered certain critical aggregation types, such as the localization of the mean value of a set of points. In this paper, we explore these questions using a common visualization task as a case study: relative mean value judgements within multi-class scatterplots. We describe how the perception literature provides a set of expected constraints on the task, and evaluate these predictions with a large-scale perceptual study with crowd-sourced participants. Judgements are no harder when each set contains more points; redundant and conflicting encodings, as well as additional sets, do not strongly affect performance; and judgements are harder when using less salient encodings. These results have concrete ramifications for the design of scatterplots. PMID:24051798
Glencross, Deborah K.; Coetzee, Lindi M.; Cassim, Naseem
2014-01-01
Background The South African National Health Laboratory Service (NHLS) responded to HIV treatment initiatives with two-tiered CD4 laboratory services in 2004. Increasing programmatic burden, as more patients access anti-retroviral therapy (ART), has demanded extending CD4 services to meet increasing clinical needs. The aim of this study was to review existing services and develop a service model that integrated laboratory-based and point-of-care testing (POCT), to extend national coverage, improve local turn-around time (TAT) and contain programmatic costs. Methods NHLS Corporate Data Warehouse CD4 data from 60–70 laboratories and 4756 referring health facilities was reviewed for referral laboratory workload, respective referring facility volumes and related TAT, from 2009–2012. Results An integrated tiered service delivery model (ITSDM) is proposed. Tier-1/POCT delivers CD4 testing at single health-clinics providing ART in hard-to-reach areas (<5 samples/day). Laboratory-based testing is extended with Tier-2/POC-Hubs (processing ~30–40 CD4 samples/day), consolidating POCT across 8–10 health-clinics with other HIV-related testing, and Tier-3/‘community’ laboratories, serving ~40 health-clinics and processing ~150 samples/day. Existing Tier-4/‘regional’ laboratories serve ~100 facilities and process <350 samples/day; Tier-5 are high-volume ‘metro’/centralized laboratories (>350–1500 tests/day, serving ~200 health-clinics). Tier-6 provides national support for standardisation, harmonization and quality across the organization. Conclusion The ITSDM offers improved local TAT by extending CD4 services into rural/remote areas with new Tier-3 or Tier-2/POC-Hub services installed in existing community laboratories, most with developed infrastructure. The advantage of lower laboratory CD4 costs and use of existing infrastructure enables subsidization of delivery of more expensive POC services into hard-to-reach districts without reasonable access to a local CD4 laboratory. Full ITSDM implementation across 5 service tiers (as opposed to widespread implementation of POC testing to extend service) can facilitate sustainable ‘full service coverage’ across South Africa, and save more than R125 million in HIV/AIDS programmatic costs. ITSDM hierarchical parental support also assures laboratory/POC management, equipment maintenance, quality control and on-going training between tiers. PMID:25490718
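The tier definitions above can be read as a simple decision rule on expected daily CD4 volume. A hypothetical encoding of that rule (cut-offs taken from the ranges quoted in the abstract; the function name and exact boundary handling are assumptions):

```python
def itsdm_tier(samples_per_day):
    """Map an expected daily CD4 volume to an ITSDM testing tier,
    using the approximate cut-offs quoted in the text."""
    if samples_per_day < 5:
        return 1   # POCT at a single health-clinic
    if samples_per_day <= 40:
        return 2   # POC-Hub consolidating 8-10 clinics
    if samples_per_day <= 150:
        return 3   # 'community' laboratory
    if samples_per_day < 350:
        return 4   # 'regional' laboratory
    return 5       # high-volume 'metro'/centralized laboratory

print([itsdm_tier(n) for n in (3, 30, 100, 300, 800)])  # → [1, 2, 3, 4, 5]
```

Tier-6 is deliberately absent: it is an organisational support layer, not a volume band.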
Evaluation of ensemble averaging methods in 3D ballistocardiography.
Lejeune, L; Caiani, E G; Prisk, G K; Migeotte, P-F
2014-08-01
Ballistocardiography (BCG) is a non-invasive technique which measures the acceleration of a body induced by cardiovascular activity, namely the force exerted by the beating heart. Measuring a BCG in a gravity-free environment provides ideal conditions in which the subject is completely decoupled from its environment. Furthermore, on the ground gravity constrains the measurement to two dimensions, so the non-negligible accelerations taking place in the third dimension are lost. In every experimental situation, the measured BCG signal contains artifacts pertaining to different causes, one of them being undesirable involuntary movements of the subject. Ensemble averaging (EA) tackles the issue of constructing a typical one-cardiac-cycle BCG signal which best represents a longer recording. The present work compares state-of-the-art EA methods and proposes two novel techniques, one taking into account the ECG sub-intervals and the other based on Dynamic Time Warping. The effects of lung volume are also assessed. PMID:25571159
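Classical ensemble averaging, the baseline the novel techniques are compared against, aligns beats on their ECG R-peaks and averages sample-by-sample, suppressing movement artifacts that are uncorrelated with the cardiac cycle. A minimal sketch on synthetic beats (not the authors' ECG-sub-interval or Dynamic Time Warping variants):

```python
import numpy as np

def ensemble_average(signal, r_peaks, beat_len):
    """Stack fixed-length windows starting at each R-peak and average
    them sample-by-sample into one representative cardiac cycle."""
    beats = [signal[r : r + beat_len] for r in r_peaks
             if r + beat_len <= len(signal)]
    return np.mean(beats, axis=0)

rng = np.random.default_rng(1)
template = np.sin(np.linspace(0, 2 * np.pi, 50))          # one clean 'beat'
signal = np.tile(template, 8) + 0.3 * rng.standard_normal(400)
avg = ensemble_average(signal, r_peaks=range(0, 400, 50), beat_len=50)
# Averaging 8 noisy beats suppresses uncorrelated noise by ~1/sqrt(8)
```

The fixed-window assumption is exactly what breaks when heart rate varies, which motivates the sub-interval and time-warping refinements studied in the paper.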
Local versus average field failure criterion in amorphous polymers
NASA Astrophysics Data System (ADS)
Xie, Yuesong; Mao, Yunzhe; Sun, Lin; Koslowski, Marisol
2015-03-01
There is extensive work developing laws that predict yielding in amorphous polymers, ranging from the pioneering experimental work of Sternstein et al (1968 Appl. Polym. Symp. 7 175–99) to the novel molecular dynamics simulations of Jaramillo et al (2012 Phys. Rev. B 85 024114). While atomistic models render damage criteria in terms of local values of the stress and strain fields, experiments provide yield conditions in terms of the average values of these fields. Unfortunately, it is not possible to compare these results directly due to the differences in time and length scales. Here, we use a micromechanical phase-field damage model with parameters calculated from atomistic simulations to connect atomistic and macroscopic scale experiments. The phase-field damage model is used to study failure in composite materials. We find that the yield criterion should be described in terms of local stress and strain fields and cannot be extended directly from applied stress field values to determine yield conditions.
Predicting global average thermospheric temperature changes resulting from auroral heating
NASA Astrophysics Data System (ADS)
Weimer, D. R.; Bowman, B. R.; Sutton, E. K.; Tobiska, W. K.
2011-01-01
The total Poynting flux flowing into both polar hemispheres as a function of time, computed with an empirical model, is compared with measurements of neutral densities in the thermosphere at two altitudes obtained from accelerometers on the CHAMP and GRACE satellites. The Jacchia-Bowman 2008 empirical thermospheric density model (JB2008) is used to facilitate the comparison. This model calculates a background level for the “global nighttime minimum exospheric temperature,” Tc, from solar indices. Corrections to this background level due to auroral heating, ΔTc, are presently computed from the Dst index. A proxy measurement of this temperature difference, ΔTc, is obtained by matching the CHAMP and GRACE density measurements with the JB2008 model. Through the use of a differential equation, the ΔTc correction can be predicted from IMF values. The resulting calculations correlate very well with the orbit-averaged measurements of ΔTc, and correlate better than the values derived from Dst. Results indicate that the thermosphere cools faster following time periods with greater ionospheric heating. The enhanced cooling is likely due to nitric oxide (NO) that is produced at a higher rate in proportion to the ionospheric heating, and this effect is simulated in the differential equations. As the ΔTc temperature correction from this model can be used as a direct substitute for the Dst-derived correction now used in JB2008, it could be possible to predict ΔTc with greater accuracy and lead time.
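The differential equation described, driving ΔTc with a heating input and relaxing it with a cooling rate that strengthens after strong heating (the NO effect), can be sketched as a first-order relaxation. The coefficients and functional form below are assumptions for illustration only, not the authors' fitted model:

```python
def step_dTc(dTc, heating, dt, a=0.01, tau=8.0, k=0.5):
    """One Euler step of d(dTc)/dt = a*heating - dTc/tau_eff, where the
    effective cooling time shortens (k term) during strong heating,
    mimicking enhanced NO cooling. All coefficients are illustrative."""
    tau_eff = tau / (1.0 + k * heating / 100.0)
    return dTc + dt * (a * heating - dTc / tau_eff)

# A 6-hour storm followed by 18 quiet hours: dTc rises, then relaxes
dTc = 0.0
for hour in range(24):
    heating = 200.0 if hour < 6 else 0.0
    dTc = step_dTc(dTc, heating, dt=1.0)
```

Driving such an equation with IMF-derived heating rather than Dst is what lets the correction be predicted with lead time, since the IMF is measured upstream of the magnetosphere.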
Industry's turnaround looks real
NONE
1997-08-01
The paper discusses the industry outlook for the North American gas and oil industries. In a robust Canada, land sales are setting records, drilling is up, and output is rising beyond last year's 21% growth. A perception among US operators that wellhead prices will remain stable is translating into increased spending. The USA, Canada, Mexico, and Cuba are evaluated separately, with brief evaluations of Greenland, Guatemala, Belize, and Costa Rica. Data are presented on drilling activities.
Back on Track: Small-College Turnarounds.
ERIC Educational Resources Information Center
Townsley, Michael K.
2002-01-01
Describes the experience of two small colleges, Chatham and Wesley, that survived the financial distress common to small schools today. Discusses their lessons: reinforce strengths, rebuild the strongest financial source, diversify income, manage debt, and run a disciplined budget and financial management system. (EV)
Why Community Engagement Matters in School Turnaround
ERIC Educational Resources Information Center
McAlister, Sara
2013-01-01
Research shows that an authentically engaged community improves schools--not just by participating in school events, but also by helping to shape reform. Family and community engagement is a proven strategy for strengthening schools. There is also ample evidence that schools serving large populations of students of color and students living in…
"Turnaround" Work Needs Rethinking, New Report Says
ERIC Educational Resources Information Center
Gewerts, Catherine
2007-01-01
According to a new report by Mass Insight Education and Research Institute, a Boston-based research and advocacy group, state, district, and school leaders must link arms to create a different model for turning around the worst-performing schools, including a "protected space" free from many traditional rules. Mass Insight envisions a broad-based…
School Turnaround Success: Focus on Implementation
ERIC Educational Resources Information Center
Manwaring, Robert
2011-01-01
Reform efforts require stakeholder support, but more importantly, a strong leader to see them through. In this article, the author addresses the importance of strong leadership and the role of principals in effecting change. He describes the success story of Harborside Elementary School in Chula Vista, California as one of a growing group of…
Turbulent fluid motion IV-averages, Reynolds decomposition, and the closure problem
NASA Technical Reports Server (NTRS)
Deissler, Robert G.
1992-01-01
Ensemble, time, and space averages as applied to turbulent quantities are discussed, and pertinent properties of the averages are obtained. Those properties, together with Reynolds decomposition, are used to derive the averaged equations of motion and the one- and two-point moment or correlation equations. The terms in the various equations are interpreted. The closure problem of the averaged equations is discussed, and possible closure schemes are considered. Those schemes usually require an input of supplemental information unless the averaged equations are closed by calculating their terms by a numerical solution of the original unaveraged equations. The law of the wall for velocities and temperatures, the velocity- and temperature-defect laws, and the logarithmic laws for velocities and temperatures are derived. Various notions of randomness and their relation to turbulence are considered in light of ergodic theory.
Generation of high average power supercontinuum involving the visible spectrum
NASA Astrophysics Data System (ADS)
Zhao, Lei; Liang, Xiaobao; Li, Chao; Wang, Shiwei; Huang, Zhihua; Wang, Jianjun; Zhu, Qihua
2015-02-01
A high average-power all-fiber supercontinuum laser source is constructed. By integrating a series of techniques, the output average power reaches 65 W with a spectrum covering two octaves, from 540 nm to 2200 nm. To our knowledge, no similar supercontinuum source with such high average power, broadband spectrum, and picosecond pulse width has previously been reported.
Interaction between directional epistasis and average mutational effects
Claus O. Wilke; Christoph Adami
2000-01-01
We investigate the relationship between the average fitness decay due to single mutations and the strength of epistatic interactions in genetic sequences. We observe that epistatic interactions between mutations are correlated to the average fitness decay, both in RNA secondary structure prediction as well as in digital organisms replicating in silico. This correlation implies that during adaptation, epistasis and average
On averaging force curves over heterogeneous surfaces in atomic force microscopy.
Sokolov, I; Kalaparthi, V; Kreshchuk, M; Dokukin, M E
2012-10-01
Atomic force microscopy (AFM) can be used to study mechanics at the nanoscale. Biological surfaces and nanocomposites typically have heterogeneous surfaces, both mechanically and chemically. When studying such surfaces with AFM, one needs to collect a large amount of data to make statistically sound conclusions. It is time- and resource-consuming to process each force curve separately. The analysis of averaged raw force data is a simple and time-saving option, which also averages out the noise and measurement artifacts of the force curves being analyzed. Moreover, some biomedical applications require just an average number per biological cell. Here we investigate such averaging, study the possible artifacts due to the averaging, and demonstrate how to minimize or even avoid them. We analyze two ways of doing the averaging: over the force data for each particular distance (method 1, the most commonly used way), and over the distances for each particular force (method 2). We derive the errors of the methods in finding the true average rigidity modulus. We show that both methods are accurate (the error is <2%) when the heterogeneity of the surface rigidity is small (<50%). When the heterogeneity is large (>100×), method 2 underestimates the average rigidity modulus by a factor of 2, whereas the error of method 1 is only 15%. However, when analyzing different surface chemistry, which reveals itself in the changing long-range forces, the accuracy of the methods behaves oppositely: method 1 can produce a noticeable averaging artifact when deriving the long-range forces, whereas method 2 can be successfully used to derive the averaged long-range force parameters without artifacts. We exemplify our conclusions with a study of human cervical cancer and normal epithelial cells, which demonstrate different degrees of heterogeneity. PMID:22917859
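The two averaging conventions contrasted in the abstract can be sketched directly with interpolation onto common grids; the synthetic monotonic force curves below stand in for real AFM data:

```python
import numpy as np

def average_method1(curves, distances):
    """Method 1: average the force values at each fixed distance."""
    return np.mean([np.interp(distances, d, f) for d, f in curves], axis=0)

def average_method2(curves, forces):
    """Method 2: average the distance values at each fixed force.
    Assumes each curve's force increases monotonically with distance."""
    return np.mean([np.interp(forces, f, d) for d, f in curves], axis=0)

# Two synthetic Hertz-like curves over a common indentation range,
# mimicking "soft" and "stiff" regions of a heterogeneous surface
d = np.linspace(0.0, 1.0, 101)
curves = [(d, 2.0 * d ** 1.5), (d, 4.0 * d ** 1.5)]

f1 = average_method1(curves, d)             # averaged force vs distance
forces = np.linspace(0.0, 2.0, 101)
d2 = average_method2(curves, forces)        # averaged distance vs force
```

Fitting a contact model to `f1` versus fitting it to the inverted `d2` curve is what produces the different biases the paper quantifies.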
P. K. Kundu; T. L. Bell
2002-01-01
A characteristic feature of rainfall statistics is that they in general depend on the space and time scales over which rain data are averaged. As a part of an earlier effort to determine the sampling error of satellite rain averages, a space-time model of rainfall statistics was developed to describe the statistics of gridded rain observed in GATE. The model
Code of Federal Regulations, 2010 CFR
2010-07-01
... false Calculation of average fuel economy and average carbon-related exhaust...CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for Model Year 1978...
Code of Federal Regulations, 2011 CFR
2011-07-01
... false Calculation of average fuel economy and average carbon-related exhaust...CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for Model Year 1978...
Code of Federal Regulations, 2014 CFR
2014-07-01
...refinery or importer average and corporate pool average sulfur level determined? 80.205 Section 80.205... REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual...
Code of Federal Regulations, 2013 CFR
2013-07-01
...refinery or importer average and corporate pool average sulfur level determined? 80.205 Section 80.205... REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual...
Code of Federal Regulations, 2012 CFR
2012-07-01
...refinery or importer average and corporate pool average sulfur level determined? 80.205 Section 80.205... REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual...
A frequency averaging framework for the solution of complex dynamic systems
Lecomte, Christophe
2014-01-01
A frequency averaging framework is proposed for the solution of complex linear dynamic systems. It is remarkable that, while the mid-frequency region is usually very challenging, a smooth transition from low- through mid- and high-frequency ranges is possible and all ranges can now be considered in a single framework. An interpretation of the frequency averaging in the time domain is presented and it is explained that the average may be evaluated very efficiently in terms of system solutions. PMID:24910518
Statistical properties of the moving average price in dollar-yen exchange rates
NASA Astrophysics Data System (ADS)
Ohnishi, Takaaki; Mizuno, Takayuki; Aihara, Kazuyuki; Takayasu, Misako; Takayasu, Hideki
2004-12-01
We introduce a weighted-moving-average analysis for the tick-by-tick data of yen-dollar exchange rates. The weights are determined automatically for the given data by applying the Yule-Walker formula for an autoregressive model. Although the data are non-stationary, the resulting moving average has the convenient property that the deviation around the moving average becomes white noise. The weights decay exponentially with a time scale of less than 2 min, implying that dealers are watching only the very recent market state.
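A sketch of a moving average with exponentially decaying weights of the kind the abstract reports; in the paper the weights are fitted per dataset via Yule-Walker, whereas the decay constant tau here is simply an assumption:

```python
import numpy as np

def ewma(prices, tau, dt=1.0):
    """Moving average with exponentially decaying weights over past
    ticks (lag 0 = most recent). tau is an assumed time scale, not a
    Yule-Walker-fitted value."""
    out = np.empty(len(prices))
    for i in range(len(prices)):
        w = np.exp(-np.arange(i + 1) * dt / tau)
        w /= w.sum()                         # normalize the weights
        out[i] = np.dot(w, prices[i::-1])    # prices[i], prices[i-1], ...
    return out

rng = np.random.default_rng(0)
ticks = 100.0 + np.cumsum(rng.normal(0.0, 0.01, 500))  # synthetic rate series
smooth = ewma(ticks, tau=20.0)
residual = ticks - smooth   # should fluctuate like noise around zero
```

The paper's whiteness claim corresponds to `residual` showing negligible autocorrelation once the weights are fitted correctly.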
Calculating High Speed Centrifugal Compressor Performance from Averaged Measurements
NASA Astrophysics Data System (ADS)
Lou, Fangyuan; Fleming, Ryan; Key, Nicole L.
2012-12-01
To improve the understanding of high performance centrifugal compressors found in modern aircraft engines, the aerodynamics through these machines must be experimentally studied. To accurately capture the complex flow phenomena through these devices, research facilities that can accurately simulate these flows are necessary. One such facility has been recently developed, and it is used in this paper to explore the effects of averaging total pressure and total temperature measurements to calculate compressor performance. Different averaging techniques (including area averaging, mass averaging, and work averaging) have been applied to the data. Results show that there is a negligible difference in both the calculated total pressure ratio and efficiency for the different techniques employed. However, the uncertainty in the performance parameters calculated with the different averaging techniques is significantly different, with area averaging providing the least uncertainty.
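The averaging techniques compared in this study reduce to differently weighted sums over probe readings; the rake geometry and numbers below are hypothetical, not the facility's data:

```python
import numpy as np

def area_average(q, dA):
    """Area average: weight each probe reading by its segment area."""
    return np.sum(q * dA) / np.sum(dA)

def mass_average(q, rho, u, dA):
    """Mass average: weight each reading by the local mass flux rho*u*dA."""
    mdot = rho * u * dA
    return np.sum(q * mdot) / np.sum(mdot)

# Hypothetical rake of 5 total-pressure probes across a duct (made-up values)
dA  = np.full(5, 0.2)                                  # segment areas, m^2
Pt  = np.array([101.0, 103.0, 104.0, 103.0, 101.0])    # total pressure, kPa
rho = np.full(5, 1.2)                                  # density, kg/m^3
u   = np.array([40.0, 55.0, 60.0, 55.0, 40.0])         # velocity, m/s

pa = area_average(Pt, dA)            # 102.4: arithmetic mean (equal areas)
pm = mass_average(Pt, rho, u, dA)    # 102.6: weighted toward the fast core
```

Mass averaging emphasizes the high-velocity core, which is why the paper's finding of near-identical performance values (but different uncertainties) across techniques is notable.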
Application Bayesian Model Averaging method for ensemble system for Poland
NASA Astrophysics Data System (ADS)
Guzikowski, Jakub; Czerwinska, Agnieszka
2014-05-01
The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting (WRF) Model and calibrating these data by means of a Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution short-range ensemble forecasts using meteorological data (temperature) generated by nine WRF model configurations. The WRF models have 35 vertical levels and 2.5 km x 2.5 km horizontal resolution. The main emphasis is that the ensemble members use different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive Probability Density Function (PDF) is a weighted average of the predictive PDFs associated with each individual ensemble member, with weights that reflect the member's relative skill. As a test case we chose a heat wave with convective weather conditions over Poland from 23rd July to 1st August 2013. From 23rd to 29th July 2013, temperatures oscillated around 30 degrees Celsius at many meteorological stations and new temperature records were set. During this time an increase in patients hospitalized with cardiovascular problems was registered. On 29th July 2013, an advection of moist tropical air masses was recorded over Poland, causing a strong convective event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to transport infrastructure, destroyed buildings and trees, and produced injuries and a direct threat to life. A comparison of the meteorological data from the ensemble system with the data recorded at 74 weather stations located in Poland is made. We prepare a set of model-observation pairs. Then, the data obtained from the single ensemble members and the median from the WRF BMA system are evaluated on the basis of the deterministic statistical errors Root Mean Square Error (RMSE) and Mean Absolute Error (MAE).
To evaluate the probabilistic data, the Brier Score (BS) and Continuous Ranked Probability Score (CRPS) were used. Finally, a comparison between the BMA-calibrated data and the data from the ensemble members is displayed.
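The deterministic scores used in this kind of verification have simple closed forms; a minimal sketch with made-up station values, not the study's data:

```python
import numpy as np

def rmse(forecast, obs):
    """Root Mean Square Error of a deterministic forecast."""
    e = np.asarray(forecast) - np.asarray(obs)
    return float(np.sqrt(np.mean(e ** 2)))

def mae(forecast, obs):
    """Mean Absolute Error of a deterministic forecast."""
    e = np.asarray(forecast) - np.asarray(obs)
    return float(np.mean(np.abs(e)))

obs    = [29.5, 31.0, 32.4, 30.2]   # hypothetical station temperatures, deg C
member = [28.0, 30.0, 33.0, 31.0]   # one hypothetical raw ensemble member
```

Because RMSE squares the errors, it penalizes the occasional large miss more heavily than MAE does, and RMSE is always at least as large as MAE for the same pairs.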
Water Triple-Point Comparisons: Plateau Averaging or Peak Value?
NASA Astrophysics Data System (ADS)
Steur, P. P. M.; Dematteis, R.
2014-04-01
With a certain regularity, national metrology institutes conduct comparisons of water triple-point (WTP) cells. The WTP is the most important fixed point for the International Temperature Scale of 1990 (ITS-90). In such comparisons, it is common practice to simply average all the single measured temperature points obtained on a single ice mantle. This practice is quite reasonable whenever the measurements show no time dependence in the results. Ever since the first Supplementary Information for the International Temperature Scale of 1990, published by the Bureau International des Poids et Mesures in Sèvres, it was strongly suggested to wait at least 1 day before taking measurements (now up to 10 days), in order for a newly created ice mantle to stabilize. This stabilization is accompanied by a change in temperature with time. A recent improvement in the sensitivity of resistance measurement enabled the Istituto Nazionale di Ricerca Metrologica to detect more clearly the (possible) change in temperature with time of the WTP on a single ice mantle, as for old borosilicate cells. A limited investigation was performed where the temperature of two cells was monitored day-by-day, from the moment of mantle creation, where it was found that with (old) borosilicate cells it may be counterproductive to wait the usual week before starting measurements. The results are presented and discussed, and it is suggested to adapt the standard procedure for comparisons of WTP cells allowing for a different data treatment with (old) borosilicate cells, because taking the temperature dependence into account will surely reduce the reported differences between cells.
Miller, A.R.
1987-01-01
Increases in high-speed hardware have mandated studies in software techniques to exploit the parallel capabilities. This thesis examines the effects a run-time scheduler has on a multiprocessor. The model consists of directed, acyclic graphs, generated from serial FORTRAN benchmark programs by the parallel compiler Parafrase. A multitasked, multiprogrammed environment is created. Dependencies are generated by the compiler. Tasks are bidimensional, i.e., they may specify both time and processor requests. Processor requests may be folded into execution time by the scheduler. The graphs may arrive at arbitrary time intervals. The general case is NP-hard; thus, a variety of heuristics are examined by a simulator. Multiprogramming demonstrates a greater need for a run-time scheduler than does monoprogramming for a variety of reasons, e.g., greater stress on the processors, a larger number of independent control paths, more variety in the task parameters, etc. The dynamic critical-path series of algorithms perform well. Dynamic critical volume did not add much. Unfortunately, dynamic critical path maximizes turnaround time as well as throughput. Two schedulers are presented that balance throughput and turnaround time. The first requires classification of jobs by type; the second requires selection of a ratio value which is dependent upon system parameters.
Miller, Allan Ray
1987-05-01
Increases in high speed hardware have mandated studies in software techniques to exploit the parallel capabilities. This thesis examines the effects a run-time scheduler has on a multiprocessor. The model consists of directed, acyclic graphs, generated from serial FORTRAN benchmark programs by the parallel compiler Parafrase. A multitasked, multiprogrammed environment is created. Dependencies are generated by the compiler. Tasks are bidimensional, i.e., they may specify both time and processor requests. Processor requests may be folded into execution time by the scheduler. The graphs may arrive at arbitrary time intervals. The general case is NP-hard, thus, a variety of heuristics are examined by a simulator. Multiprogramming demonstrates a greater need for a run-time scheduler than does monoprogramming for a variety of reasons, e.g., greater stress on the processors, a larger number of independent control paths, more variety in the task parameters, etc. The dynamic critical path series of algorithms perform well. Dynamic critical volume did not add much. Unfortunately, dynamic critical path maximizes turnaround time as well as throughput. Two schedulers are presented which balance throughput and turnaround time. The first requires classification of jobs by type; the second requires selection of a ratio value which is dependent upon system parameters. 45 refs., 19 figs., 20 tabs.
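The critical-path priority underlying the best-performing heuristics above reduces to the longest weighted path through the task DAG; the helper below is a hypothetical illustration of that quantity, not the thesis's scheduler:

```python
def critical_path(tasks):
    """Length of the longest (critical) path through a task DAG.
    tasks maps name -> (duration, [dependency names]). A hypothetical
    helper, assuming the graph is acyclic."""
    memo = {}
    def finish(t):
        # Earliest possible finish time of t: its duration plus the
        # latest finish among its dependencies (memoized recursion).
        if t not in memo:
            dur, deps = tasks[t]
            memo[t] = dur + max((finish(d) for d in deps), default=0)
        return memo[t]
    return max(finish(t) for t in tasks)

dag = {"a": (3, []), "b": (2, ["a"]), "c": (4, ["a"]), "d": (1, ["b", "c"])}
length = critical_path(dag)  # 8, via a -> c -> d
```

A dynamic critical-path scheduler recomputes these priorities as tasks complete and new graphs arrive, which is what drives up both throughput and turnaround time.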
Water Molecule Residence Times
NSDL National Science Digital Library
Sill - Earth Systems Science
2010-11-16
How long will a molecule of water stay in a particular reservoir? What is the average time a molecule of water will stay in an ocean? What is the average time a molecule of water will stay in a river? A lake? As groundwater? A glacier? How long will a water vapor molecule stay suspended in the atmosphere? Why is the residence ...
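A back-of-the-envelope answer to these questions uses the steady-state relation residence time = reservoir volume / throughput flux; the reservoir sizes below are rough textbook-scale figures assumed for illustration, not values from this resource:

```python
def residence_time(volume, flux):
    """Average residence time = reservoir volume / throughput flux,
    assuming a well-mixed reservoir at steady state."""
    return volume / flux

# Rough textbook-scale figures, assumed for illustration (km^3, km^3/yr)
atmosphere_volume, atmosphere_flux = 12_900, 496_000
ocean_volume, ocean_flux = 1_338_000_000, 505_000

t_atm_days = residence_time(atmosphere_volume, atmosphere_flux) * 365
t_ocean_years = residence_time(ocean_volume, ocean_flux)
```

The contrast is striking: water vapor cycles through the atmosphere in days, while an average ocean molecule resides for millennia.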
Average daily variations in the magnetic field as observed by ATS-5
NASA Technical Reports Server (NTRS)
Skillman, T. L.
1974-01-01
Hourly averages of the magnetic field components are determined and analyzed using the measurements of the magnetic field monitor aboard the ATS-5. The data covering the time period of September 1969 through September 1971 are sorted and analyzed for various Kp values, geomagnetic latitude of the subsolar point, and local time. Local time variations are harmonically analyzed, and amplitudes and phases are given up to the fourth harmonic.
Average daily variations in the magnetic field as observed by ATS-5
NASA Technical Reports Server (NTRS)
Skillman, T. L.
1972-01-01
Hourly averages of the magnetic field components are determined and analyzed using the measurements, by the magnetic field monitor (MFM) aboard the ATS-5 satellite. The data covering the time period of September 1969 through September 1971 are sorted and analyzed for various Kp values, geomagnetic latitude of the subsolar point, and local time. Local time variations are harmonically analyzed, and amplitudes and phases are given up to the fourth harmonic.
An average interstellar extinction curve for the Large Magellanic Cloud
NASA Technical Reports Server (NTRS)
Fitzpatrick, Edward L.
1986-01-01
LMC regions located more than 0.5 deg from the center of the 30 Doradus nebula have an average UV extinction curve that is about 2 units higher in the far UV than the average Galactic curve. Within the 30 Doradus region, the normalized far-UV extinction is much steeper than the average Galactic or average LMC curves, yet has been considered to be representative of the LMC in general in the past. The present results show typical LMC extinction properties to have much more in common with those in the Milky Way.
Averaging Schemes for Solving Fixed Point and Variational Inequality Problems
Magnanti, Thomas L.
We develop and study averaging schemes for solving fixed point and variational inequality problems. Typically, researchers have established convergence results for solution methods for these problems by establishing ...
Average g-Factors of Anisotropic Polycrystalline Samples
Fishman, Randy Scott [ORNL; Miller, Joel S. [University of Utah
2010-01-01
Due to the lack of suitable single crystals, the average g-factor of anisotropic polycrystalline samples is commonly estimated from either the Curie-Weiss susceptibility or the saturation magnetization. We show that the average g-factor obtained from the Curie constant is always greater than or equal to the average g-factor obtained from the saturation magnetization. The average g-factors are equal only for a single crystal or an isotropic polycrystal. We review experimental results for several compounds containing the anisotropic cation [Fe(C5Me5)2]+ and propose an experiment to test this inequality using a compound with a spinless anion.
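One plausible reading of this inequality (an interpretive sketch, not the paper's derivation): for a powder, the Curie constant samples the orientation average of g squared while the saturation magnetization samples the average of g, so the two estimates compare as a root-mean-square to an arithmetic mean:

```latex
% Orientation (powder) averages, with S the spin quantum number:
C \propto \langle g^{2}\rangle\, S(S+1), \qquad
M_{\mathrm{sat}} \propto \langle g\rangle\, S
\;\;\Longrightarrow\;\;
g_{\mathrm{Curie}} = \sqrt{\langle g^{2}\rangle}
\;\ge\; \langle g\rangle = g_{\mathrm{sat}},
% with equality iff g is orientation-independent, since
% \langle g^{2}\rangle - \langle g\rangle^{2} = \operatorname{Var}(g)\ge 0.
```

On this reading, equality for single crystals and isotropic polycrystals follows because the variance of g over orientations vanishes.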
Beach, R.; Emanuel, M.; Benett, W.; Freitas, B.; Ciarlo, D.; Carlson, N.; Sutton, S.; Skidmore, J.; Solarz, R.
1994-01-01
The average power performance capability of semiconductor diode laser arrays has improved dramatically over the past several years. These performance improvements, combined with cost reductions pursued by LLNL and others in the fabrication and packaging of diode lasers, have continued to reduce the price per average watt of laser diode radiation. Presently, we are at the point where the manufacturers of commercial high average power solid state laser systems used in material processing applications can now seriously consider the replacement of their flashlamp pumps with laser diode pump sources. Additionally, a low cost technique developed and demonstrated at LLNL for optically conditioning the output radiation of diode laser arrays has enabled a new and scalable average power diode-end-pumping architecture that can be simply implemented in diode pumped solid state laser systems (DPSSLs). This development allows the high average power DPSSL designer to look beyond the Nd ion for the first time. Along with high average power DPSSLs which are appropriate for material processing applications, low and intermediate average power DPSSLs are now realizable at low enough costs to be attractive for use in many medical, electronic, and lithographic applications.
Averaging Theory for Description of Environmental Problems: What Have We Learned?
Miller, Cass T.; Schrefler, Bernhard A.
2012-01-01
Advances in Water Resources has been a prime archival source for implementation of averaging theories in changing the scale at which processes of importance in environmental modeling are described. Thus in celebration of the 35th year of this journal, it seems appropriate to assess what has been learned about these theories and about their utility in describing systems of interest. We review advances in understanding and use of averaging theories to describe porous medium flow and transport at the macroscale, an averaged scale that models spatial variability, and at the megascale, an integral scale that only considers time variation of system properties. We detail physical insights gained from the development and application of averaging theory for flow through porous medium systems and for the behavior of solids at the macroscale. We show the relationship between standard models that are typically applied and more rigorous models that are derived using modern averaging theory. We discuss how the results derived from averaging theory that are available can be built upon and applied broadly within the community. We highlight opportunities and needs that exist for collaborations among theorists, numerical analysts, and experimentalists to advance the new classes of models that have been derived. Lastly, we comment on averaging developments for rivers, estuaries, and watersheds. PMID:23393409
Chernyak, Vladimir Y; Klein, John R; Sinitsyn, Nikolai A
2012-04-21
This article studies Markovian stochastic motion of a particle on a graph with a finite number of nodes and periodically time-dependent transition rates that satisfy the detailed balance condition at any time. We show that under general conditions, the currents in the system on average become quantized or fractionally quantized for adiabatic driving at sufficiently low temperature. We develop the quantitative theory of this quantization and interpret it in terms of topological invariants. By implementing the celebrated Kirchhoff theorem we derive a general and explicit formula for the average generated current that serves as an efficient tool for treating the current quantization effects. PMID:22519315
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Turkel, Eli
2006-01-01
We apply an unsteady Reynolds-averaged Navier-Stokes (URANS) solver for the simulation of a synthetic jet created by a single diaphragm piezoelectric actuator in quiescent air. This configuration was designated as Case 1 for the CFDVAL2004 workshop held at Williamsburg, Virginia, in March 2004. Time-averaged and instantaneous data for this case were obtained at NASA Langley Research Center, using multiple measurement techniques. Computational results for this case using one-equation Spalart-Allmaras and two-equation Menter's turbulence models are presented along with the experimental data. The effect of grid refinement, preconditioning and time-step variation are also examined in this paper.
Reach-averaged sediment routing model of a canyon river
NASA Astrophysics Data System (ADS)
Wiele, S. M.; Wilcock, P. R.; Grams, P. E.
2007-02-01
Spatial complexity in channel geometry indicates that accurate prediction of sediment transport requires modeling in at least two dimensions. However, a one-dimensional model may be the only practical or possible alternative, especially for longer river reaches of practical concern in river management or landscape modeling. We have developed a one-dimensional model of the Colorado River through upper Grand Canyon that addresses this problem by reach averaging the channel properties and predicting changes in sand storage using separate source and sink functions coupled to the sand routing model. The model incorporates results from the application of a two-dimensional model of flow, sand transport, and bed evolution, and a new algorithm for setting the near-bed sand boundary condition for sand transported over an exposed bouldery bed. Model predictions were compared to measurements of sand discharge during intermittent tributary inputs and varying discharges controlled by dam releases. The model predictions generally agree well with the timing and magnitude of measured sand discharges but tend to overpredict sand discharge during the early stages of a high release designed to redistribute sand to higher-elevation deposits.
Instantaneous Average Active Power Measurement For Building Energy Management System
R. Micheletti; R. Pieri
Fast and accurate measurement of the instantaneous average active power is useful for a building Energy Management System (EMS) in order to assure quality of service such as continuity, optimize energy consumption, and reduce carbon dioxide emissions. In this paper the problems connected to the measurement of the instantaneous average active power for improving energy usage are discussed, as well as the
14-Day Boxcar averaged Terra-CERES (Reflected Solar Radiation)
NSDL National Science Digital Library
Tom Bridgman
2001-06-20
This animation displays one year of Reflected Solar Radiation (RSR) Terra-CERES data (March 1, 2000 to May 25, 2001) with a 14-day boxcar average. Endpoints have the average re-weighted for the smaller amount of data. The data are 2.5 degree resolution.
The influence of averageness on children's judgments of facial attractiveness.
Vingilis-Jaremko, Larissa; Maurer, Daphne
2013-08-01
We examined developmental changes in the influence of averageness on judgments of facial attractiveness by showing adults and children pairs of individual faces in which one face was transformed 50% toward its group average, whereas the other face was transformed 50% away from that average. In one comparison, adults and 5-year-olds rated the more average faces as more attractive whether the faces were of adult females, 5-year-old boys, or 5-year-old girls. The influence of averageness, however, was weaker in 5-year-olds than in adults. In another comparison, a new group of adults and 9-year-olds rated the more average faces as more attractive for male and female faces of adults, 9-year-olds, and 5-year-olds. The influence of averageness was again weaker for children than for adults, although the strength of 9-year-olds' preference was greater than that of 5-year-olds. Developmental changes may reflect the refinement of an average face prototype as children are exposed to more faces, increased sensitivity as visual perception develops, and/or the greater salience of attractiveness after puberty. PMID:23708730
Using Multiple Representations To Improve Conceptions of Average Speed.
ERIC Educational Resources Information Center
Reed, Stephen K.; Jazo, Linda
2002-01-01
Discusses improving mathematical reasoning through the design of computer microworlds and evaluates a computer-based learning environment that uses multiple representations to improve undergraduate students' conception of average speed. Describes improvement of students' estimates of average speed by using visual feedback from a simulation.…
Proximal projection methods for variational inequalities and Cesáro averaged approximations
Ya. I. Alber
2002-01-01
In this paper, we prove convergence and stability of Cesáro averages generated by the proximal projection method applied to nonlinear equations and variational inequalities in uniformly convex and uniformly smooth Banach spaces. We first consider the stability of the approximations with respect to perturbations of the operator and constraint sets. Weak convergence of Cesáro averages is shown to hold with
US average Phoenix Area(1000sq.km.)
Hall, Sharon J.
[Figure residue: bar chart (scale 0-1400) comparing US average, Phoenix average, Closest to Phoenix, and Phoenix contiguous; a caption fragment notes that autumnal flooding may be partially responsible for this shift.]
The global warming signal is the average of
Jones, Peter JS
The global warming signal is the average of years 70-80 in the increasing-CO2 run minus the average ... represent significant uncertainty in the global warming signal (Fig. 5). The differences at high latitudes ... uncertainty in the isopycnal diffusivity causes uncertainty of up to 50% in the global warming signal
7 CFR 5.2 - Marketing season average price data.
Code of Federal Regulations, 2014 CFR
2014-01-01
...2014-01-01 false Marketing season average price data. 5.2 Section 5.2 Agriculture...Secretary of Agriculture DETERMINATION OF PARITY PRICES § 5.2 Marketing season average price data. It is hereby found that it is...
7 CFR 5.2 - Marketing season average price data.
Code of Federal Regulations, 2013 CFR
2013-01-01
...2013-01-01 false Marketing season average price data. 5.2 Section 5.2 Agriculture...Secretary of Agriculture DETERMINATION OF PARITY PRICES § 5.2 Marketing season average price data. It is hereby found that it is...
Local linear convergence for alternating and averaged nonconvex projections
Paris-Sud XI, UniversitÃ© de
Local linear convergence for alternating and averaged nonconvex projections A.S. Lewis D.R. Luke J. Malick September 16, 2008 Key words: alternating projections, averaged projections, linear convergence ... collection of closed sets having "linearly regular intersection" at a point is crucial in variational
Average Current Mode Control of Switching Power Supplies
Lloyd Dixon
1990-01-01
Current mode control as usually implemented in switching power supplies actually senses and controls peak inductor current. This gives rise to many serious problems, including poor noise immunity, a need for slope compensation, and peak-to-average current errors which the inherently low current loop gain cannot correct. Average current mode control eliminates these problems and may be used effectively to
Calculations of High-Temperature Jet Flow Using Hybrid Reynolds-Averaged Navier-Stokes Formulations
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Elmiligui, Alaa; Girimaji, Sharath S.
2008-01-01
Two multiscale-type turbulence models are implemented in the PAB3D solver. The models are based on modifying the Reynolds-averaged Navier-Stokes equations. The first scheme is a hybrid Reynolds-averaged Navier-Stokes/large-eddy-simulation model using the two-equation k-epsilon model with a Reynolds-averaged Navier-Stokes/large-eddy-simulation transition function dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes model in which the unresolved kinetic energy parameter f(sub k) is allowed to vary as a function of grid spacing and the turbulence length scale. This parameter is estimated based on a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for partially averaged Navier-Stokes. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The parameter f(sub k) varies between zero and one and is equal to one in the viscous sublayer and when the Reynolds-averaged Navier-Stokes turbulent viscosity becomes smaller than the large-eddy-simulation viscosity. The formulation, usage methodology, and validation examples are presented to demonstrate the enhancement of PAB3D's time-accurate turbulence modeling capabilities. The accurate simulation of flow and turbulent quantities will provide a valuable tool for accurate jet noise predictions. Solutions from these models are compared with Reynolds-averaged Navier-Stokes results and experimental data for high-temperature jet flows. The current results show promise for the capability of hybrid Reynolds-averaged Navier-Stokes/large-eddy simulation and partially averaged Navier-Stokes in simulating such flow phenomena.
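The role of f(sub k) described above can be illustrated with a minimal sketch. The clipped grid-based estimate below (f_k ~ c * (delta/Lambda)^(2/3), with c = 3 as an assumed constant) is a common form in the partially averaged Navier-Stokes literature, not the paper's two-stage procedure itself:

```python
def pans_fk(delta, length_scale, c=3.0):
    """Estimate the PANS unresolved-kinetic-energy parameter f_k.

    Assumed grid-based estimate: f_k ~ c * (delta / Lambda)^(2/3),
    clipped to [0, 1], where delta is the local grid spacing and
    Lambda the computed turbulence length scale. f_k = 1 means no
    turbulence is resolved (pure RANS behavior); f_k < 1 means part
    of the spectrum is resolved on this grid.
    """
    if length_scale <= 0.0:
        return 1.0  # no resolvable turbulence: fall back to full RANS
    fk = c * (delta / length_scale) ** (2.0 / 3.0)
    return min(1.0, fk)

# Coarse grid relative to the turbulence length scale -> f_k clipped to 1;
# fine grid -> f_k < 1, so part of the turbulence spectrum is resolved.
print(pans_fk(delta=0.1, length_scale=0.1))    # 1.0 (clipped)
print(pans_fk(delta=0.001, length_scale=0.1))  # ~0.14
```

This mirrors the behavior stated in the abstract: the parameter stays at one where the grid cannot support scale resolution, and drops below one where it can.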
Timing control improves seabed survey data quality
Green, R. [Seateam, Den Helder (Netherlands)]
1996-04-01
Seateam has completed development of and field-proven the Dolphin data acquisition and timing system for high-density surveys offshore. The Dolphin project was initiated to improve quality control of survey sensor data and ensure time synchronization, thus leading to faster turnaround of seabed terrain information. Data received from survey sensors is asynchronous, so the system must provide for data correlation. This includes establishment of data latency, i.e., the time difference between data creation and timing of the message at first-byte arrival at the recording system. Until recently, asynchronous data from multiple sensors was collected by a single computer, regardless of whether it had additional intelligent or non-intelligent serial cards. This computer was fully responsible for time stamping all incoming data, plus associated storage and distribution. Though this initially sufficed and is still applicable to low-density data, increasingly larger data volumes required an associated boost in the capability to time stamp data prior to eventual correction.
LANDSAT-4 horizon scanner full orbit data averages
NASA Technical Reports Server (NTRS)
Stanley, J. P.; Bilanow, S.
1983-01-01
Averages taken over full-orbit data spans of the pitch and roll residual measurement errors of the two conical Earth sensors operating on the LANDSAT 4 spacecraft are described. The variability of these full-orbit averages over representative data throughout the year is analyzed to demonstrate the long-term stability of the sensor measurements. The data analyzed consist of 23 segments of sensor measurements made at 2- to 4-week intervals. Each segment is roughly 24 hours in length. The variation of the full-orbit average as a function of orbit within a day and as a function of day of year is examined. The dependence on day of year is based on associating the start date of each segment with the mean full-orbit average for the segment. The peak-to-peak and standard deviation values of the averages for each data segment are computed, and their variation with day of year is also examined.
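The per-segment statistics described above (segment mean, peak-to-peak spread, standard deviation of the full-orbit averages) can be sketched as follows; the input numbers are hypothetical, not LANDSAT 4 data:

```python
from statistics import mean, pstdev

def segment_stats(orbit_averages):
    """Summarize the full-orbit averages within one ~24-hour data segment.

    Returns the segment mean (the value associated with the segment's
    start date), the peak-to-peak spread, and the standard deviation --
    the three quantities examined for variation with day of year.
    """
    return {
        "mean": mean(orbit_averages),
        "peak_to_peak": max(orbit_averages) - min(orbit_averages),
        "stdev": pstdev(orbit_averages),
    }

# Hypothetical roll-residual full-orbit averages (degrees) for one segment.
stats = segment_stats([0.021, 0.024, 0.019, 0.023, 0.022])
print(stats["mean"], stats["peak_to_peak"])
```

Repeating this over all 23 segments and plotting each statistic against the segment start date gives the day-of-year dependence the abstract examines.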
Analysis of grain boundary dynamics using event detection and cumulative averaging.
Gautam, A; Ophus, C; Lançon, F; Denes, P; Dahmen, U
2015-04-01
To analyze extended time series of high resolution images, we have employed automated frame-by-frame comparisons that are able to detect dynamic changes in the structure of a grain boundary in Au. Using cumulative averaging of images between events allowed high resolution measurements of the atomic relaxation in the interface with sufficient accuracy for comparison with atomistic models. Cumulative averaging was also used to observe the structural rearrangement of atomic columns at a moving step in the grain boundary. The technique of analyzing changing features in high resolution images by averaging between incidents can be used to deconvolute stochastic events that occur at random intervals and on time scales well beyond that accessible to single-shot imaging. PMID:25498139
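The event-detection-plus-averaging idea can be sketched in one dimension. The details below are assumptions for illustration (an "event" is a frame-to-frame change above a threshold; the paper's actual detector operates on high-resolution image series):

```python
def cumulative_averages(frames, threshold):
    """Split a frame series at detected events and average between them.

    Minimal 1-D sketch: an "event" is flagged when the mean absolute
    frame-to-frame change exceeds `threshold`; the frames between
    consecutive events are then cumulatively averaged, suppressing
    noise while each averaged image represents one stable structure.
    """
    segments, current = [], [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        change = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        if change > threshold:          # structural change: start a new segment
            segments.append(current)
            current = []
        current.append(cur)
    segments.append(current)
    # Pixel-wise average of each quiet segment.
    return [[sum(px) / len(seg) for px in zip(*seg)] for seg in segments]

# Two stable configurations separated by one abrupt jump (hypothetical pixels).
frames = [[0, 0], [0, 2], [0, 0], [9, 9], [9, 11], [9, 9]]
avgs = cumulative_averages(frames, threshold=3.0)
print(len(avgs))  # 2 segments detected
```

Averaging only between events is what lets stochastic changes at random intervals be separated from noise, as the abstract describes.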
Optimal Estimation of the Average Areal Rainfall and Optimal Selection of Rain Gauge Locations
NASA Astrophysics Data System (ADS)
Bastin, G.; Lorent, B.; Duqué, C.; Gevers, M.
1984-04-01
We propose a simple procedure for the real-time estimation of the average rainfall over a catchment area. The rainfall is modeled as a two-dimensional random field. The average areal rainfall is computed by a linear unbiased minimum variance estimation method (kriging) which requires knowledge of the variogram of the random field. We propose a time-varying estimator for the variogram which takes into account the influences of both the seasonal variations and the rainfall intensity. Our average areal rainfall estimator has been implemented in practice. We illustrate its application to real data in two river basins in Belgium. Finally, it is shown how the method can be used for the optimal selection of the rain gauge locations in a basin.
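Kriging requires the variogram of the rainfall field, and a natural first step is the empirical variogram from simultaneous gauge readings. The sketch below shows that step only (binning and gauge coordinates are hypothetical; the paper's estimator is additionally time-varying with season and intensity):

```python
import math

def empirical_variogram(gauges, bin_width):
    """Empirical (semi)variogram from simultaneous rain gauge readings.

    gauges: list of (x, y, rainfall) tuples for one time step.
    Returns {lag_bin: gamma}, where gamma(h) = mean of (z_i - z_j)^2 / 2
    over gauge pairs whose separation falls in the bin -- the quantity
    kriging needs before the average areal rainfall can be estimated.
    """
    sums, counts = {}, {}
    n = len(gauges)
    for i in range(n):
        for j in range(i + 1, n):
            xi, yi, zi = gauges[i]
            xj, yj, zj = gauges[j]
            h = math.hypot(xi - xj, yi - yj)
            b = int(h // bin_width)
            sums[b] = sums.get(b, 0.0) + 0.5 * (zi - zj) ** 2
            counts[b] = counts.get(b, 0) + 1
    return {b: sums[b] / counts[b] for b in sums}

# Hypothetical gauges (km, km, mm); nearby gauges should disagree less.
gamma = empirical_variogram([(0, 0, 5.0), (1, 0, 5.5), (10, 0, 9.0)], bin_width=5.0)
print(gamma)
```

A variogram model fitted to these bins then supplies the covariances in the kriging system, whose solution gives the linear unbiased minimum-variance weights for the areal average.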
Long term average rates of large-volume explosive volcanism are not average
NASA Astrophysics Data System (ADS)
Connor, C.; Kiyosugi, K.
2011-12-01
How good are our estimates of long term recurrence rates of large magnitude explosive volcanic eruptions? To investigate this question, we created a data set of all known explosive eruptions in Japan since 1.8 Ma and VEI magnitude 4 or greater. This data set contains 696 explosive eruptions. We use this data set to consider the change in apparent recurrence rate of large volume explosive eruptions through time. Assuming there has been little change in recurrence rate of volcanism since 2.25 Ma, apparent changes are due to erosion of explosive eruption deposits and a lower rate of identification of older deposits preserved in the geologic record. Surprisingly, one half of the eruptions in the data set occurred within the last 65 ka. 77% of the total eruptions occurred since 200 ka; the oldest eruption in the database is 2.25 Ma. Overall, there is a roughly exponential decrease in the numbers of eruptions of a given magnitude identified in the geological record as a function of time. This result clearly indicates that even large magnitude eruptions are significantly under-reported. In addition, percentages of explosive eruptions in the entire data set by eruption magnitude are: VEI 4 (40%), VEI 5 (42%), VEI 6 (13%) and VEI 7 (5%). Because it is reasonable to assume that smaller eruptions occur much more frequently, fewer VEI 4 eruptions than VEI 5 eruptions indicates that small eruptions are missing in this data set. We quantify these variations by plotting survivor functions, noting that there is little change in apparent rate of activity (or the preservation potential of deposits) with geographic and tectonic setting in Japan. These data indicate that eruption probabilities based on long term recurrence rate may underestimate rates of activity. This result also indicates there is considerable uncertainty about the future recurrence rate of large magnitude eruptions, as our best estimates of frequency are based on an unrealistically short record.
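The survivor-function analysis mentioned above can be sketched directly: under a constant true eruption rate the fraction of recorded eruptions at least a given age old should fall linearly with age, so a roughly exponential fall-off signals under-reporting of older deposits. The ages below are hypothetical, not the Japan data set:

```python
def survivor_function(ages_ka):
    """Empirical survivor fraction of preserved eruption deposits by age.

    Returns (age, fraction of recorded eruptions at least that old),
    sorted by age. A constant true rate with perfect preservation gives
    a linear decline; a roughly exponential decline indicates
    progressive loss of older deposits from the geologic record.
    """
    ages = sorted(ages_ka)
    n = len(ages)
    return [(age, (n - i) / n) for i, age in enumerate(ages)]

# Hypothetical deposit ages (ka), heavily weighted toward young eruptions,
# mimicking the reported pattern (half the record younger than 65 ka).
curve = survivor_function([5, 10, 20, 40, 65, 130, 300, 700, 1500, 2250])
print(curve[0], curve[-1])
```

Plotting the second coordinate on a log scale against age makes the exponential under-reporting pattern visible at a glance.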
ERIC Educational Resources Information Center
Saleh, Mohammad; Lazonder, Ard W.; Jong, Ton de
2007-01-01
Average-ability students often do not take full advantage of learning in mixed-ability groups because they hardly engage in the group interaction. This study examined whether structuring collaboration by group roles and ground rules for helping behavior might help overcome this participatory inequality. In a plant biology course, heterogeneously…
Programmable noise bandwidth reduction by means of digital averaging
NASA Technical Reports Server (NTRS)
Poklemba, John J. (Inventor)
1993-01-01
Predetection noise bandwidth reduction is effected by a pre-averager capable of digitally averaging the samples of an input data signal over two or more symbols, the averaging interval being defined by the input sampling rate divided by the output sampling rate. As the averaged sample is clocked to a suitable detector at a much slower rate than the input signal sampling rate the noise bandwidth at the input to the detector is reduced, the input to the detector having an improved signal to noise ratio as a result of the averaging process, and the rate at which such subsequent processing must operate is correspondingly reduced. The pre-averager forms a data filter having an output sampling rate of one sample per symbol of received data. More specifically, selected ones of a plurality of samples accumulated over two or more symbol intervals are output in response to clock signals at a rate of one sample per symbol interval. The pre-averager includes circuitry for weighting digitized signal samples using stored finite impulse response (FIR) filter coefficients. A method according to the present invention is also disclosed.
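The core of the pre-averager, average N input samples per symbol and emit one sample per symbol, can be sketched as follows. Uniform (boxcar) weights are used here for simplicity; the patent also allows stored FIR filter coefficients in place of the plain average:

```python
def pre_average(samples, samples_per_symbol):
    """Boxcar pre-averager: N input samples in, 1 output sample per symbol.

    Averaging over each symbol interval narrows the noise bandwidth by
    the decimation factor before detection, so the detector runs at the
    (much slower) symbol rate with an improved signal-to-noise ratio.
    """
    n = samples_per_symbol
    return [sum(samples[i:i + n]) / n
            for i in range(0, len(samples) - n + 1, n)]

# 4 samples/symbol: two symbols (+1 then -1) with additive noise.
rx = [1.1, 0.9, 1.2, 0.8, -0.9, -1.1, -0.8, -1.2]
print(pre_average(rx, 4))  # zero-mean noise partially cancels in each average
```

The output rate of one sample per symbol is exactly the data-filter behavior the patent describes, and everything downstream of the pre-averager operates at that reduced rate.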
Mariana Gagea; Alina Mariuca Ionescu
2008-01-01
At present, both at European Union and world level, experts are preoccupied to find the best method for the deseasonalisation of a time series that should assure the comparability of statistical data. The present paper follows the line of these researches. In the study, we undertake a comparison of the most representative methods based on moving average filter: moving average
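The moving-average filter at the core of such deseasonalisation methods is the centered 12-month (2x12) moving average, which extracts the trend by spanning exactly one full seasonal cycle. A minimal sketch, with a synthetic series rather than real statistical data:

```python
def centered_ma_12(series):
    """Centered 12-month moving average (2x12 MA) for monthly data.

    With an even window, the average is re-centered by averaging two
    adjacent 12-term means, giving a trend estimate aligned with the
    original months. Returns None where the window does not fit.
    """
    out = [None] * len(series)
    for t in range(6, len(series) - 6):
        m1 = sum(series[t - 6:t + 6]) / 12.0
        m2 = sum(series[t - 5:t + 7]) / 12.0
        out[t] = (m1 + m2) / 2.0
    return out

# A pure 12-month seasonal cycle (summing to zero over a year) on top of a
# flat level of 100: the filter should recover 100 wherever it is defined.
data = [100 + (10 if m % 12 == 0 else -10 / 11) for m in range(48)]
trend = centered_ma_12(data)
print(trend[24])
```

Dividing (or subtracting) the original series by this trend yields the seasonal component, which is the step on which the compared methods chiefly differ.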
Burgan, R.E.; Hartford, R.A.; Eidenshink, J.C.
1996-04-01
Satellite-derived vegetation greenness maps of the contiguous United States have been available to fire managers since 1989. This report describes a new map, departure from average, which is designed to compare current-year vegetation greenness to average greenness for the same time of year, and describes its relationship to fire business.
Thomson scattering in the average-atom approximation.
Johnson, W R; Nilsen, J; Cheng, K T
2012-09-01
The average-atom model is applied to study Thomson scattering of x-rays from warm dense matter with emphasis on scattering by bound electrons. Parameters needed to evaluate the dynamic structure function (chemical potential, average ionic charge, free electron density, bound and continuum wave functions, and occupation numbers) are obtained from the average-atom model. The resulting analysis provides a relatively simple diagnostic for use in connection with x-ray scattering measurements. Applications are given to dense hydrogen, beryllium, aluminum, and titanium plasmas. In the case of titanium, bound states are predicted to modify the spectrum significantly. PMID:23031036
Definition of average path and relativity parameter computation in CASA
NASA Astrophysics Data System (ADS)
Wu, Dawei; Huang, Yan; Chen, Xiaohua; Yu, Chang
2001-09-01
CASA (computer-assisted semen analysis) is a medical system that measures sperm motility and its parameters using image-processing methods. However, no authoritative administrative or academic organization has yet issued a set of criteria for CASA, which limits effective comparison of work between labs and researchers. The average path and the parameters derived from it (average path velocity, amplitude of lateral head displacement, and beat cross frequency) often cannot be compared between systems because of differing algorithms. This paper presents a new algorithm that defines the average path uniquely and computes the three parameters above quickly and conveniently from any real path.
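The system-dependence the abstract complains about can be illustrated with one common (but not standardized) definition of the average path: a moving-average smoothing of the head trajectory, from which the average path velocity (VAP) follows. The window size and track below are assumptions for illustration, not the paper's algorithm:

```python
def average_path(points, window=5):
    """Moving-average smoothing of a sperm head trajectory.

    Each point is replaced by the mean of a `window`-point neighborhood;
    endpoints use a shrunken window. This is exactly the kind of
    algorithm-dependent choice that makes average-path parameters hard
    to compare between CASA systems.
    """
    half = window // 2
    smoothed = []
    for i in range(len(points)):
        seg = points[max(0, i - half):min(len(points), i + half + 1)]
        smoothed.append((sum(p[0] for p in seg) / len(seg),
                         sum(p[1] for p in seg) / len(seg)))
    return smoothed

def path_velocity(path, frame_dt):
    """Average path velocity (VAP): smoothed path length over elapsed time."""
    dist = sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(path, path[1:]))
    return dist / (frame_dt * (len(path) - 1))

# Hypothetical zigzag head track (micrometers) sampled at 30 frames/s.
track = [(0, 0), (1, 1), (2, -1), (3, 1), (4, -1), (5, 0)]
vap = path_velocity(average_path(track), frame_dt=1 / 30)
print(round(vap, 2))
```

Changing `window` changes the VAP for the same raw track, which is why a unique definition of the average path, as the paper proposes, matters for inter-system comparability.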
Experimental demonstration of squeezed-state quantum averaging
Lassen, Mikael; Madsen, Lars Skovgaard; Andersen, Ulrik L. [Department of Physics, Technical University of Denmark, DK-2800 Kongens Lyngby (Denmark); Sabuncu, Metin [Max-Planck-Institute for the Science of Light, Guenther-Scharowsky-Strasse 1, D-91058 Erlangen (Germany); Filip, Radim [Department of Optics, Palacky University, 17 Listopadu 50, CZ-772 07 Olomouc (Czech Republic)
2010-08-15
We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented harmonic mean yields a lower value than the corresponding value obtained for the standard arithmetic-mean strategy. The effect of quantum averaging is experimentally tested for squeezed and thermal states as well as for uncorrelated and partially correlated noise sources. The harmonic-mean protocol can be used to efficiently stabilize a set of squeezed-light sources with statistically fluctuating noise levels.
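The central inequality, that the harmonic mean of quadrature variances never exceeds the arithmetic mean, is easy to check numerically. This is a classical numeric analogue of the averaging rule, not the optical implementation; the variance values are hypothetical:

```python
def arithmetic_mean(values):
    """Standard arithmetic-mean strategy for combining variances."""
    return sum(values) / len(values)

def harmonic_mean(values):
    """Harmonic mean, the quantity the quantum averaging process prepares."""
    return len(values) / sum(1.0 / v for v in values)

# Quadrature variances of two squeezed sources with fluctuating noise
# (hypothetical numbers, in shot-noise units).
variances = [0.5, 2.0]
h, a = harmonic_mean(variances), arithmetic_mean(variances)
print(h, a)  # 0.8 1.25 -- the harmonic mean is never above the arithmetic mean
```

The gap between the two means grows with the spread of the input variances, which is why the harmonic-mean protocol helps most when the noise levels of the squeezed-light sources fluctuate strongly.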
Interaction between directional epistasis and average mutational effects
Claus O. Wilke; Christoph Adami
2001-06-28
We investigate the relationship between the average fitness decay due to single mutations and the strength of epistatic interactions in genetic sequences. We observe that epistatic interactions between mutations are correlated to the average fitness decay, both in RNA secondary structure prediction as well as in digital organisms replicating in silico. This correlation implies that during adaptation, epistasis and average mutational effect cannot be optimized independently. In experiments with RNA sequences evolving on a neutral network, the selective pressure to decrease the mutational load then leads to a reduction of the amount of sequences with strong antagonistic interactions between deleterious mutations in the population.
Averaging Inhomogeneous Universes: Volume, Angle, Line of Sight
Eric V. Linder
1998-01-14
Cosmologies that match in a volume averaged sense need not generally have the same light propagation behaviors. In particular a universe with inhomogeneity may not demonstrate the Friedmann-Robertson-Walker distance-redshift relation even after volume averaging to FRW spacetime. Even the Dyer-Roeder prescription for incorporating inhomogeneity within a universe equivalent to FRW in an angle averaged sense does not guarantee FRW behavior in general. To legitimately use the FRW distance-redshift relation to interpret observations, the physical conditions must match in a line of sight sense (defined herein: most observations do), since light probes the mass distribution or geometry on all length scales.
Crash involvement rates by driver gender and the role of average annual mileage
Dawn L. Massie; Paul E. Green; Kenneth L. Campbell
1997-01-01
The effects of four predictor variables—driver age, driver gender, time of day, and average annual mileage—on crash involvement rates were estimated through the use of multivariate modelling techniques. Separate models were developed for fatal, injury, and property damage only crashes. All four predictor variables proved to be highly significant in explaining variations in observed rates. Rates predicted by the models
Sea Surface Temperature Climatology: Averages of the same month over a number of years.
NSDL National Science Digital Library
IRI/LDEO Climate Data Library
This site presents an interactive animation of the changes in global sea surface temperature averaged over twelve months time. By clicking on an area of the map, users may view a particular month for a specific region, change display parameters and select image output formats.
Rate-Optimal Power Adaptation in Average and Peak Power Constrained Fading Channels
Øien, Geir E.
The well-known water-pouring power adaptation strategy is optimal from a capacity point of view in the case that the optimal power adaptation strategy is water-pouring in time; prior work proposed a variable-rate, variable-power scheme.
Monitoring European average temperature based on the E-OBS gridded data set
NASA Astrophysics Data System (ADS)
Schrier, G.; Besselaar, E. J. M.; Klein Tank, A. M. G.; Verver, G.
2013-06-01
A European average temperature with monthly resolution is constructed based on the E-OBS daily data set, with near-real-time updates for monitoring. Taken together, the European average temperature and the associated gridded daily maps of surface temperature from the E-OBS data set provide a detailed record of European climate variability and change since 1950. Both are based on validated station data provided by the European National Meteorological and Hydrological Services. A quantitative analysis of the uncertainty sources in the European average temperature indicates that the uncertainties due to urbanization, statistical interpolation, and potential inhomogeneities in the input records to E-OBS dominate the total uncertainty estimate. In aggregating the interpolation uncertainty from the daily to the monthly level and then to a European-averaged value, the effective sample size and the effective spatial degrees of freedom are estimated to account for temporal and spatial coherency in the uncertainty estimates. The European average temperature shows that seven of the top 10 warmest years fall in the period since 2000, and that a clear upward trend in annual average temperatures is visible over the last few decades. The most recent year among the top 10 coldest is 1987. The record also shows that warming in Europe is accelerating relative to the warming over the global land masses and, to a lesser extent, relative to the Northern Hemisphere land masses over the period 1980-2010.
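The effective-sample-size idea used in the uncertainty aggregation can be sketched as follows. The numbers are hypothetical; the point is only that correlated values shrink the standard error by sqrt(n_eff), not sqrt(n):

```python
import math

def aggregated_uncertainty(daily_sigma, n_days, n_eff):
    """Standard error of a time-averaged value with temporal correlation.

    Averaging n_days correlated daily values does not shrink the
    uncertainty by sqrt(n_days); with an effective sample size
    n_eff (<= n_days) the standard error is sigma / sqrt(n_eff).
    The same idea applies spatially via effective degrees of freedom.
    """
    assert 0 < n_eff <= n_days
    return daily_sigma / math.sqrt(n_eff)

# 30 daily values but, say, only ~6 effectively independent ones
# (hypothetical): the honest uncertainty is larger than the naive one.
print(aggregated_uncertainty(0.5, 30, 6))
print(aggregated_uncertainty(0.5, 30, 30))
```

This is why estimating the effective sample size and effective spatial degrees of freedom, rather than counting every day and grid cell as independent, matters for the total uncertainty of the European average.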
The tidally averaged momentum balance in a partially and periodically stratified estuary
Stacey, M.T.; Brennan, M.L.; Burau, J.R.; Monismith, S.G.
2010-01-01
Observations of turbulent stresses and mean velocities over an entire spring-neap cycle are used to evaluate the dynamics of tidally averaged flows in a partially stratified estuarine channel. In a depth-averaged sense, the net flow in this channel is up estuary due to interaction of tidal forcing with the geometry of the larger basin. The depth-variable tidally averaged flow has the form of an estuarine exchange flow (downstream at the surface, upstream at depth) and varies in response to the neap-spring transition. The weakening of the tidally averaged exchange during the spring tides appears to be a result of decreased stratification on the tidal time scale rather than changes in bed stress. The dynamics of the estuarine exchange flow are defined by a balance between the vertical divergence of the tidally averaged turbulent stress and the tidally averaged pressure gradient in the lower water column. In the upper water column, tidal stresses are important contributors, particularly during the neap tides. The usefulness of an effective eddy viscosity in the tidally averaged momentum equation is explored, and it is seen that the effective eddy viscosity on the subtidal time scale would need to be negative to close the momentum balance. This is due to the dominant contribution of tidally varying turbulent momentum fluxes, which have no specific relation to the subtidal circulation. Using a water column model, the validity of an effective eddy viscosity is explored; for periodically stratified water columns, a negative effective viscosity is required. ?? 2010 American Meteorological Society.
Design of experiments with very low average replication
Bailey, R. A.
the average replication is much less than two. Agricultural plant-breeding trials: breeding trials involve many new varieties, with very little seed of each.
Effects of spatial variability and scale on areal -average evapotranspiration
NASA Technical Reports Server (NTRS)
Famiglietti, J. S.; Wood, Eric F.
1993-01-01
This paper explores the effect of spatial variability and scale on areally-averaged evapotranspiration. A spatially-distributed water and energy balance model is employed to determine the effect of explicit patterns of model parameters and atmospheric forcing on modeled areally-averaged evapotranspiration over a range of increasing spatial scales. The analysis is performed from the local scale to the catchment scale. The study area is King's Creek catchment, an 11.7 sq km watershed located on the native tallgrass prairie of Kansas. The dominant controls on the scaling behavior of catchment-average evapotranspiration are investigated by simulation, as is the existence of a threshold scale for evapotranspiration modeling, with implications for explicit versus statistical representation of important process controls. It appears that some of our findings are fairly general, and will therefore provide a framework for understanding the scaling behavior of areally-averaged evapotranspiration at the catchment and larger scales.
Tamiflu Cuts 1 Day Off Average Flu Bout, Study Finds
Tamiflu Cuts 1 Day Off Average Flu Bout, Study Finds. Researchers reviewed ... the length of flu symptoms by about a day, and reduces the risk of flu-related complications ...
Does subduction zone magmatism produce average continental crust
NASA Technical Reports Server (NTRS)
Ellam, R. M.; Hawkesworth, C. J.
1988-01-01
The question of whether present day subduction zone magmatism produces material of average continental crust composition, which perhaps most would agree is andesitic, is addressed. It was argued that modern andesitic to dacitic rocks in Andean-type settings are produced by plagioclase fractionation of mantle derived basalts, leaving a complementary residue with low Rb/Sr and a positive Eu anomaly. This residue must be removed, for example by delamination, if the average crust produced in these settings is andesitic. The author argued against this, pointing out the absence of evidence for such a signature in the mantle. Either the average crust is not andesitic, a conclusion the author was not entirely comfortable with, or other crust forming processes must be sought. One possibility is that during the Archean, direct slab melting of basaltic or eclogitic oceanic crust produced felsic melts, which together with about 65 percent mafic material, yielded an average crust of andesitic composition.
Maxwell, Bruce D.
Cumulative Undergraduate Grade Point Average (GPA) worksheet: credits for courses from which the student withdrew failing (Withdraw Failing, "FW") are included in the total credit sum; WP (Withdraw Passing) and WF (Withdraw Failing) grades carry 0 grade points, as do Pass/Fail credits.