These are representative sample records from Science.gov related to your search topic.
For comprehensive and current results, perform a real-time search at Science.gov.

In order to increase satisfaction and improve department performance, Florida Hospital Memorial Medical Center focused on decreasing report turnaround time, with a goal of two hours from completion of the examination by the technologist to a final signed report by the radiologist. Through the introduction of voice recognition software, not only were efficiencies gained, but there was a dramatic reduction in transcription costs, from $30,000 per month to less than $300. Turnaround times were ultimately reduced, thus improving performance and increasing satisfaction. This was made evident by the tracking over time of physician satisfaction scores and HCAHPS scores rating patient satisfaction. PMID:22043733

Recent years have brought about a paradigm shift within NASA and the Space Launch Community regarding the performance of conceptual design. Reliability, maintainability, supportability, and operability are no longer effects of design; they have moved to the forefront and are affecting design. A primary focus of this shift has been a planned decrease in vehicle turnaround time. Potentials for instituting this decrease include attacking the issues of removing, refurbishing, and replacing the engines after each flight. Nevertheless, it is important to understand the operational effects of an engine on turnaround time, ground support personnel, and equipment. One tool for visualizing this relationship involves the creation of a Discrete Event Simulation (DES). A DES model can be used to run a series of trade studies to determine if the engine is meeting its requirements and, if not, what can be altered to bring it into compliance. Using DES, it is possible to look at the ways in which labor requirements, parallel versus serial maintenance, and maintenance scheduling affect the overall turnaround time. A detailed DES model of the Space Shuttle Main Engines (SSME) has been developed. Trades may be performed using the SSME Processing Model to see where maintenance bottlenecks occur and what the benefits (if any) are of increasing the number of personnel or the number and location of facilities, in addition to the trades previously mentioned, all with the goal of optimizing the operational turnaround time and minimizing operational cost. The SSME Processing Model was developed in such a way that it can easily be used as a foundation for developing DES models of other operational or developmental reusable engines. Performing a DES on a developmental engine during the conceptual phase makes it easier to affect the design and make changes to bring about a decrease in turnaround time and costs.
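
The kind of trade study described above can be illustrated with a toy discrete-event sketch. The task durations, crew counts, and the longest-task-first dispatch rule below are illustrative assumptions, not details of the actual SSME Processing Model.

```python
import heapq

def turnaround_time(tasks, crews):
    """Toy DES trade: assign each maintenance task (duration in hours)
    to the earliest-free crew, longest task first; return the makespan."""
    free = [0.0] * crews              # next-free time of each crew
    heapq.heapify(free)
    for dur in sorted(tasks, reverse=True):
        t = heapq.heappop(free)       # earliest-free crew takes the task
        heapq.heappush(free, t + dur)
    return max(free)

# hypothetical engine-maintenance task durations (hours)
tasks = [40, 28, 28, 16, 12, 8]
serial   = turnaround_time(tasks, crews=1)   # serial maintenance: 132 h
parallel = turnaround_time(tasks, crews=3)   # parallel maintenance: 48 h
```

Varying `crews` in such a sketch mimics the labor-requirement and parallel-versus-serial trades the abstract mentions.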

...false Indicator #1, vacancy rate and unit turnaround time. 901.10...10 Indicator #1, vacancy rate and unit turnaround time. This indicator...actual and adjusted vacancy rate (and, if applicable, unit turnaround time), the...

This paper presents steps currently being implemented to reduce the processing time of the Orbiter Main Propulsion System. Given the projected increase in launch rates of the Space Transportation System (STS) in the operational era, average turnaround time from arrival at Kennedy Space Center to subsequent launch will have to be significantly reduced. In many STS flows to date, a pacing system has been the Main Propulsion System consisting of the three Space Shuttle Main Engines (SSME's) and the Main Propulsion Subsystem (MPS) connecting the SSME's to the Orbiter/ground and Orbiter/External Tank interfaces. This paper summarizes procedural, hardware, software, and requirements changes being incorporated at Kennedy Space Center (KSC) to reduce the processing time required for the MPS/SSME systems. Specific examples from each category are cited to illustrate the impact of the improvements on MPS/SSME processing.

Analysis of Defense Waste Processing Facility (DWPF) samples as slurries rather than as dried or vitrified samples is an effective way to reduce sample turnaround times. Slurries can be dissolved with a mixture of concentrated acids to yield solutions for elemental analysis by inductively coupled plasma-atomic emission spectroscopy (ICP-AES). Slurry analyses can be performed in eight hours, whereas analyses of

C. J. Coleman; N. E. Bibler; D. M. Ferrara; M. S. Hay

At the time of the initial Crime Laboratory Improvement (CLIP) grant application, the Biological Science Section of the New York State Police Forensic Investigation Center was in the process of implementing a program with dual objectives which were to (1)...

We prospectively compared image and report delivery times in our Urgent Care Center (UCC) during a film-based practice (1995) and after complete implementation of an electronic imaging practice in 1997. Before switching to a totally electronic and filmless practice, multiple time periods were consistently measured during a 1-week period in May 1995 and then again in a similar week in

Christopher W. T. Mattern; Bernard F. King; Nicholas J. Hangiandreou; Allan Swenson; Lisa L. Jorgenson; William E. Webbles; Trice W. Okrzynski; Bradley J. Erickson; Byrn Williamson; Glenn S. Forbes

Background: Overcrowding and prolonged length of stay in emergency departments (ED) are increasing problems in hospitals. Rapid availability of all laboratory results has an impact on clinical decision-making, admission or discharge decisions, and resource utilisation. An increasing number of our urinary drugs of abuse (DOA) screens had a turnaround time (TAT) of up to 33 days after the discharge of the patient. Materials and methods: Following an audit and a consultation period with clinicians using the service, a policy change was implemented to reduce the use of gas chromatography-mass spectrometry (GCMS): all requests would have a standard immunoassay (IA) test panel undertaken unless GCMS (including medico-legal) analysis was specifically requested. Results: Almost all of the clinicians interviewed had no understanding of DOA screening or of the difference in the information generated between a confirmatory GCMS urine toxicology screen and an IA DOA panel. It appeared none of the patients surveyed in the audit would have had a different clinical decision made if a GCMS had not been undertaken. A post-change audit showed that only 4.3% of drug requests for IA also received confirmatory GCMS testing. The estimated saving after the change was implemented was $127,000 (AU $) in test costs alone over a two-year period. The TAT of GCMS results was reduced to 3–4 days. Conclusion: A laboratory-led behavioural change in test requesting is possible and sustainable, provided the reason is clinically sound and accompanied by consultation and the availability of advice by phone, when requested, on test requesting or interpretation. PMID:24266298

Dimeski, Goce; Silvester, Breeann; Ungerer, Jacobus; Johnson, Leslie; Martin, Jennifer H.

Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation, and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
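
The conventional TDA baseline that FTDA improves on amounts to folding the signal over whole periods and averaging. The minimal sketch below assumes the period is an exact integer number of samples, i.e. the case in which no period-cutting error arises; the function and variable names are ours.

```python
def time_domain_average(signal, period):
    """Classical TDA: fold the signal over complete periods and average.
    Components synchronous with `period` are preserved; asynchronous
    components and noise are attenuated (roughly as 1/sqrt(n))."""
    n = len(signal) // period          # number of complete periods
    return [sum(signal[k * period + i] for k in range(n)) / n
            for i in range(period)]

# a noiseless periodic signal is recovered exactly
pattern = [0.0, 1.0, 0.0, -1.0]
recovered = time_domain_average(pattern * 50, period=4)
```

When the true period is not an integer multiple of the sampling interval, this folding truncates each cycle, which is precisely the period-cutting error the FTDA technique is designed to avoid.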

It is speculated how dark energy in a brane world can help reconcile an infinitely cyclic cosmology with the second law of thermodynamics. A cyclic model is described, in which dark energy with w<-1 equation of state leads to a turnaround at a time, extremely shortly before the would-be big rip, at which both volume and entropy of our Universe decrease by a gigantic factor, while very many independent similarly small contracting universes are spawned. The entropy of our model decreases almost to zero at turnaround but increases for the remainder of the cycle by a vanishingly small amount during contraction, empty of matter, then by a large factor during inflationary expansion. PMID:17359014

We investigate a Lévy walk alternating between velocities ±v0 of opposite sign. The sojourn time probability distribution at large times is a power law lacking its mean or second moment. The first case corresponds to a ballistic regime where the ensemble-averaged mean squared displacement (MSD) at large times is ⟨x²⟩ ∼ t²; the latter corresponds to enhanced diffusion with ⟨x²⟩ ∼ t^γ, 1 < γ < 2. The correlation function and the time-averaged MSD are calculated. In the ballistic case, the deviations of the time-averaged MSD from purely ballistic behavior are shown to be distributed according to a Mittag-Leffler density function. In the enhanced diffusion regime, the fluctuations of the time-averaged MSD vanish at large times, yet very slowly. In both cases we quantify the discrepancy between the time-averaged and ensemble-averaged MSDs.
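
A minimal simulation of such a walk, assuming Pareto-distributed sojourn times with tail index α (1 < α < 2 gives the enhanced-diffusion case); the parameter names and the specific sojourn distribution are our illustrative choices:

```python
import random

def levy_walk(t_max, alpha=1.5, v0=1.0):
    """One realization of a Levy walk: move at +v0 or -v0 for a
    power-law-distributed sojourn time, then redraw the direction.
    random.paretovariate(alpha) has density ~ tau^-(alpha+1), tau >= 1."""
    x, t = 0.0, 0.0
    while t < t_max:
        tau = min(random.paretovariate(alpha), t_max - t)  # truncate last flight
        v = v0 if random.random() < 0.5 else -v0
        x += v * tau
        t += tau
    return x

random.seed(0)
positions = [levy_walk(100.0) for _ in range(200)]
```

A basic invariant of the model is visible immediately: since the speed is always v0, no realization can leave the light cone |x| ≤ v0·t.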

The Analytical Development Section (ADS) and the Statistical Consulting Section (SCS) of the Savannah River National Laboratory (SRNL) are participating in a Six Sigma initiative to improve the Defense Waste Processing Facility (DWPF) Laboratory. The Six Sigma initiative has focused on reducing the analytical turnaroundtime of samples from the Slurry Mix Evaporator (SME) by developing streamlined sampling and analytical

A formulation has been constructed to recover the statistics of the moving average of the scintillation Strehl from a discrete set of measurements. A program of airborne atmospheric propagation measurements was analyzed to find the correlation function of the relative intensity over displaced propagation paths. The variance in continuous moving averages of the relative intensity was then found in terms of the correlation functions. An empirical formulation of the variance of the continuous moving average of the scintillation Strehl has been constructed. The resulting characterization of the variance of the finite time-averaged Strehl ratios is being used to assess the performance of an airborne laser system.

Objective: Folk wisdom has it that early rising is associated with being “healthy, wealthy and wise.” A physiologic explanation may be Wiegand's “Depressiogenic Theory of Sleep,” which posits that excessive REM sleep causes depression. Sleeping late increases REM sleep, and thus may increase depression risk. Published depression prevalence research does not use arising time, but average sunrise time (AST) for

While reauthorization of the Safe Drinking Water Act is pending, many water utilities are preparing to monitor and regulate levels of distribution system constituents that affect water quality. Most frequently, utilities are concerned about average concentrations rather than about tracing a particular constituent's path. Mathematical and computer models, which provide a quick estimate of average concentrations, could play an important role in this effort. Most water quality models deal primarily with isolated events, such as tracing a particular constituent through a distribution system. This article proposes a simple, time-averaging model that obtains average, maximum, and minimum constituent concentrations and ages throughout the network. It also computes percentage flow contribution and percentage constituent concentration. The model is illustrated using two water distribution systems, and results are compared with those obtained using a dynamic water quality model. Both models predict average water quality parameters with no significant deviations; the time-averaging approach is a simple and efficient alternative to the dynamic model.

Reddy, L.S.; Ormsbee, L.E.; Wood, D.J. [Univ. of Kentucky, Lexington, KY (United States). Dept. of Civil Engineering
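
The core bookkeeping in any such network-averaging model is flow-weighted mixing at junctions. A minimal sketch of that single step; the flows, concentrations, and function name are hypothetical, not taken from the article's model:

```python
def mixed_concentration(inflows):
    """Flow-weighted average constituent concentration at a junction:
    C = sum(Q_i * C_i) / sum(Q_i) over (flow, concentration) pairs."""
    total_flow = sum(q for q, _ in inflows)
    return sum(q * c for q, c in inflows) / total_flow

# two pipes feeding one node: 10 L/s at 2.0 mg/L and 30 L/s at 1.0 mg/L
c_node = mixed_concentration([(10.0, 2.0), (30.0, 1.0)])  # -> 1.25 mg/L
```

Repeating this computation node by node under average flow conditions, rather than stepping through time, is the essence of a time-averaging approach as opposed to a dynamic one.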

We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.
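
The TAMSD itself is a simple quadratic functional of the trajectory. The estimator below, applied to a discrete Brownian path with unit-variance Gaussian steps, is a sketch of the random quantity whose distribution the abstract characterizes; the sampling parameters are our choices.

```python
import random

def tamsd(x, lag):
    """Time-averaged mean-square displacement of a sampled trajectory
    at a given lag (in sampling steps), using overlapping windows."""
    n = len(x) - lag
    return sum((x[i + lag] - x[i]) ** 2 for i in range(n)) / n

# discrete Brownian motion: E[TAMSD(lag)] = lag for unit-variance steps,
# while each individual realization fluctuates around that mean
random.seed(42)
x = [0.0]
for _ in range(100_000):
    x.append(x[-1] + random.gauss(0.0, 1.0))
```

For a single finite trajectory, `tamsd(x, lag)` is itself random; quantifying exactly how it is distributed around its mean is what the Laplace-transform result above delivers.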

We consider the problem of nonpreemptive scheduling to minimize average (weighted) completion time, allowing for release dates, parallel machines, and precedence constraints. Recent work has led to constant-factor approximations for this problem, based on solving a preemptive or linear programming relaxation and then using the solution to get an ordering on the jobs. We introduce several new techniques which generalize this basic paradigm. We use these ideas to obtain improved approximation algorithms for one-machine scheduling to minimize average completion time with release dates. In the process, we obtain an optimal randomized on-line algorithm for the same problem that beats a lower bound for deterministic on-line algorithms. We consider extensions to the case of parallel machine scheduling, and for this we introduce two new ideas: first, we show that a preemptive one-machine relaxation is a powerful tool for designing parallel machine scheduling algorithms that simultaneously produce good approximations and have small running times; second, we show that a non-greedy "rounding" of the relaxation yields better approximations than a greedy one. We also prove a general theorem relating the value of one-machine relaxations to that of the schedules obtained for the original m-machine problems. This theorem applies even when there are precedence constraints on the jobs. We apply this result to precedence graphs such as in-trees, out-trees, and series-parallel graphs; these are of particular interest in compiler applications that partly motivated our work.

Chekuri, C.; Motwani, R. [Stanford Univ., CA (United States); Natarajan, B. [Hewlett Packard Labs., Palo Alto, CA (United States); Stein, C. [Dartmouth College, Hannover, NH (United States)
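
In the simplest special case of this objective (one machine, no release dates or precedence constraints) the exact optimum is the classical shortest-processing-time rule. The sketch below illustrates the average-completion-time objective the approximation algorithms target; the job data are invented for illustration.

```python
def average_completion_time(proc_times, order):
    """Average completion time when jobs run in the given order
    on one machine: C_j is the clock value when job j finishes."""
    clock, total = 0, 0
    for j in order:
        clock += proc_times[j]
        total += clock
    return total / len(proc_times)

times = [3, 1, 2]
spt  = sorted(range(len(times)), key=lambda j: times[j])  # SPT: optimal here
fifo = list(range(len(times)))                            # arbitrary order
```

With release dates, precedence constraints, or multiple machines this greedy rule is no longer optimal, which is what makes the relaxation-and-rounding framework of the abstract necessary.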

When the Minneapolis Public Schools first set out to hire turnaround school principals, administrators followed their usual process--which focused largely on reputation and anecdotal support and considered mainly internal candidates. Yet success at the complicated task of turning around the fortunes of a failing school depends on exceptionally…

Despite reports of higher profitability in recent years, hospitals are failing at a faster rate than ever before. Although many hospitals leave decisions regarding revenues and costs to chief financial officers and their staff, this is a recipe for disaster. From research conducted over the last 4 years on hospital bankruptcies and turnarounds, the author found that a common series of actions will help organizations evade collapse. The author explored these turnaround strategies through research and analysis of a variety of hospitals and health systems that had a high probability of immediate financial crisis or collapse. His continued observation and analysis of these hospitals in subsequent years showed that most hospitals never emerge from their bleak financial conditions. However, a few hospital administrations have successfully turned around their organizations. PMID:18450557

An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented.

A review of the main results concerning the algebraic and differential properties of the averaging and coordination operators and the properties of the space-time averages of macroscopic gravity is given. The algebraic and differential properties of the covariant space-time averaging procedure by means of the parallel-transportation averaging bivector operator are analyzed. The structure of the pseudo-Riemannian space-time manifolds of general relativity averaged by means of this procedure is discussed. A comparison of both procedures is given, and directions for further development of space-time averaging procedures for classical physical fields are outlined.

Today, in a sector flooded with $3.5 billion in School Improvement Grant funds and the resulting improvement plans, there's great faith that "turnaround" strategies are a promising way to tackle stubborn problems with persistently low-performing schools. Unlike traditional reform efforts, with their emphasis on incremental improvement, turnarounds…

A simple method for approximating the variance of meteorological time averages is presented. Graphs of the characteristic time between independent estimates, and of the ratio of the variance of time-averaged data to that of unaveraged data, for a first-order autoregressive process are shown.
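
For a first-order autoregressive process with lag-one autocorrelation ρ, both quantities mentioned above have standard closed forms; a sketch in our own notation:

```python
def variance_ratio(rho, n):
    """Exact ratio var(mean of n consecutive AR(1) samples) / var(one sample):
    (1/n^2) * [n + 2 * sum_{k=1}^{n-1} (n - k) * rho**k]."""
    return (n + 2 * sum((n - k) * rho ** k for k in range(1, n))) / n ** 2

def time_between_independent(rho):
    """Large-n characteristic time between effectively independent samples,
    in units of the sampling interval: T0 = (1 + rho) / (1 - rho)."""
    return (1 + rho) / (1 - rho)
```

For large n the ratio behaves like T0/n, i.e. averaging n correlated samples is only as good as averaging n/T0 independent ones.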

An algorithm for the construction of a hash function based on optical time-averaging techniques is proposed in this paper. Algebraic structures of grayscale color functions and time-average operators are constructed. Properties of time-average operators and effects of digital image representation are explored. The fact that the inverse problem of identifying the original grayscale color function from its time-averaged image is ill-posed helps to construct an efficient algorithm for a new class of one-way, collision-free hash functions. The algorithm can be generalized to 2D or even 3D problems.

In many cases, the orbits of deterministic systems displaying highly irregular oscillations yield smoothly converging time averages. It may happen, however, that these time averages do not converge and themselves display wild oscillations. This is analyzed for heteroclinic attractors and hyperbolic strange attractors.

Gemini's Fast Turnaround program is intended to greatly decrease the time from having an idea to acquiring the supporting data. The scheme will offer monthly proposal submission opportunities, and proposals will be reviewed by the principal investigators or co-investigators of other proposals submitted during the same round. Here, we set out the design of the system and outline the plan for its implementation, leading to the launch of a pilot program at Gemini North in January 2015.

Mason, R. E.; Côté, S.; Kissler-Patig, M.; Levenson, N. A.; Adamson, A.; Emmanuel, C.; Crabtree, D.

Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time-average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time-average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small-scale specimen using a time-average microscopic TV holography system.

We investigate "the Wojcik model" introduced and studied by Wojcik et al., which is a one-defect quantum walk (QW) having a single phase at the origin. They reported that giving a phase at one point causes an astonishing effect for localization. There are three types of measures having important roles in the study of QWs: time-averaged limit measure, weak limit measure, and stationary measure. The first two measures imply a coexistence of localized behavior and the ballistic spreading in the QW. As Konno et al. suggested, the time-averaged limit and stationary measures are closely related to each other for some models. In this paper, we focus on a relation between the two measures for the Wojcik model. The stationary measure was already obtained by our previous work. Here, we get the time-averaged limit measure by several methods. Our results show that the stationary measure is a special case of the time-averaged limit measure.

The characterization of a stochastic process from its single random realization is a challenging problem for most single-particle tracking techniques which survey an individual trajectory of a tracer in a complex or viscoelastic medium. We consider two quadratic functionals of the trajectory: the time-averaged mean-square displacement (MSD) and the time-averaged squared root mean-square displacement (SRMS). For a large class of stochastic processes governed by the generalized Langevin equation with arbitrary frictional memory kernel and harmonic potential, the exact formulas for the mean and covariance of these functionals are derived. The formula for the mean value can be directly used for fitting experimental data, e.g., in optical tweezers microrheology. The formula for the variance (and covariance) allows one to estimate the intrinsic fluctuations of measured (or simulated) time-averaged MSD or SRMS for choosing the experimental setup appropriately. We show that the time-averaged SRMS has smaller fluctuations than the time-averaged MSD, in spite of much broader applications of the latter one. The theoretical results are successfully confirmed by Monte Carlo simulations of the Langevin dynamics. We conclude that the use of the time-averaged SRMS would result in a more accurate statistical analysis of individual trajectories and more reliable interpretation of experimental data. PMID:21797312

Introduction HIV/AIDS clinics in resource-limited settings (RLS) face increasing numbers of patients and workforce shortages [1, 2]. To address these challenges, efficient models of care such as pharmacy-only visits (POV) and nurse-only visits (NOV) are recommended [3]. The Makerere University Joint AIDS Program (MJAP), a PEPFAR-funded program providing care to over 42,000 HIV-infected adults, has implemented the POV model since 2009. In this model, stable patients on antiretroviral therapy (ART) with adherence to ART >95% and a Karnofsky score >90% are reviewed by a doctor every four months but visit the pharmacy for ART refills every two months. A study conducted in August 2011 showed low retention in the POV program, with symptomatic diseases, pending CD4 count and complete blood count results, and poor adherence to ART as the major reasons for non-retention. To improve retention in POV, the turnaround time (TAT) for laboratory results (the main reason for non-retention in the previous study) was reduced from one month to one week. In August 2012, the study was repeated to assess the effect of reducing TAT on improving retention one year after patients were placed on POV. Materials and Methods A cohort analysis of data from patients on POV in August 2011 and in August 2012 was done. We compared retention in POV before and after reducing the TAT for laboratory results. Results Retention in POV was 12.0% (95% CI 9.50–14.7) among 619 patients in 2011 (70% females; mean age 33 years, standard deviation (SD) 8.5), compared to 11.1% (95% CI 9.15–13.4) among 888 patients (70% females; mean age 38.3 years, SD 8.9) in 2012 (p=0.59). The main reasons for non-retention in the POV program in 2012 were poor adherence to ART (23%) and missed clinic appointments (14%). Conclusions Reducing TAT for laboratory test results did not improve retention of stable HIV-infected adults on POV in our clinic.
Strategies for improving adherence to ART and keeping clinic appointments need to be employed to balance workload and management of patients without compromising quality of care or patients' clinical, immunological, and adherence outcomes. PMID:25394111

Maselle, Edna; Muhanguzi, Asaph; Muhumuza, Simon; Nansubuga, Jeniffer; Nawavvu, Cecilia; Namusobya, Jeniffer; Kamya, Moses R; Semitala, Fred C

We investigate the long-run behavior of a stochastic replicator process, which describes game dynamics for a symmetric two-player game under aggregate shocks. We establish an averaging principle that relates timeaverages of the process and Nash equilibria of a suitably modified game. Furthermore, a sufficient condition for transience is given in terms of mixed equilibria and definiteness of the payoff matrix. We also present necessary and sufficient conditions for stochastic stability of pure equilibria.

We report an experimental realization of ultracold atoms confined in a time-averaged, adiabatic potential (TAAP). This trapping technique involves using a slowly oscillating (~kHz) bias field to time-average the instantaneous potential given by dressing a bare magnetic potential with a high-frequency (~MHz) magnetic field. The resultant potentials provide a convenient route to a variety of trapping geometries with tunable parameters. We demonstrate the TAAP trap in a standard time-averaged orbiting potential trap with additional Helmholtz coils for the introduction of the radio-frequency dressing field. We have evaporatively cooled 5×10⁴ atoms of ⁸⁷Rb to quantum degeneracy and observed condensate lifetimes of longer than 3 s.

Gildemeister, M.; Nugent, E.; Sherlock, B. E.; Kubasik, M.; Sheard, B. T.; Foot, C. J. [Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU (United Kingdom)

Time-average TV holography is a widely used method for vibration measurement. The method generates speckle-correlation time-averaged J0 fringes that can be used for full-field qualitative visualization of mode shapes at resonant frequencies of an object under harmonic excitation. In order to map the amplitudes of vibration, quantitative evaluation of the time-averaged fringe pattern is desired. A quantitative evaluation procedure based on the phase-shifting technique used in two-beam interferometry has been adopted for this application with some modification. The existing procedure requires a large number of frames to be recorded. We propose a procedure that reduces the number of frames required for the analysis. The TV holographic system used and the experimental results obtained with it on an edge-clamped, sinusoidally excited square aluminium plate sample are discussed.
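
The fringe function being evaluated in such systems is the squared zero-order Bessel function of the vibration amplitude. A sketch of the relation, assuming out-of-plane illumination/observation and a HeNe wavelength; the series implementation of J0 and all names are ours:

```python
import math

def bessel_j0(x):
    """J0 via its power series; adequate for the moderate arguments here."""
    total, term = 1.0, 1.0
    for k in range(1, 60):
        term *= -(x / 2.0) ** 2 / k ** 2
        total += term
    return total

def fringe_intensity(amplitude, wavelength=632.8e-9):
    """Normalized time-averaged fringe intensity for out-of-plane vibration:
    I/I0 = J0(4*pi*a/lambda)**2; dark fringes occur at the zeros of J0."""
    return bessel_j0(4.0 * math.pi * amplitude / wavelength) ** 2

# first dark fringe: amplitude where 4*pi*a/lambda hits J0's first zero (~2.4048)
a_dark = 2.4048 * 632.8e-9 / (4.0 * math.pi)
```

Because the dark-fringe amplitudes follow the (unequally spaced) zeros of J0, a quantitative map of vibration amplitude requires evaluating this fringe function rather than the cosine fringes of ordinary two-beam interferometry.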

The general analytic expression for the temporal spectrum of atmospheric scintillation under the weak-turbulence condition is derived. It is employed to study the characteristics of the temporal spectra for a horizontal uniform path and for the whole-layer, non-uniform atmospheric path. For the non-uniform path, the H-V turbulence model and the Gaussian wind speed model are utilized. It is found that when the receiver diameter is larger than the Fresnel scale (λL)^(1/2), the temporal spectrum of the plane wave has a power-law behavior with a scaling index of −17/3 in the high-frequency range. A change in turbulence strength has little influence on the shape of the temporal spectrum. Based on the characteristics of the temporal spectrum, the aperture-averaging and time-averaging effects on scintillation are analyzed in the frequency domain.

Until 1972 Mexico's officials seemed to believe that the annual population growth rate of 3.5% was not really a problem as long as the economic development rate could be kept ahead of the population growth rate. The General Law of Population of 1947 was actually promoting population growth. It was only during the 1960s that elite opinion shifted, and privately funded family planning programs became increasingly active. By 1972 the population had reached 54 million, from 20 million in 1940. President Echevarria, till then a traditional pronatalist, announced in 1972 the creation of a national family planning program, and, in 1974, the creation of a national population council. Since then the Mexican government has embarked on ambitious programs of mass communication and sex education to make the population receptive to its new attitudes. The plans have worked: by mid-1979 estimates put the population growth rate at 2.9%, within sight of the 2.5% target set for 1982 by new president Lopez Portillo. According to a survey conducted by the National Fertility Survey, by late 1976 41.50% of Mexican women of childbearing age were practicing contraception. Within the 41.50% of active contraceptors, 14.9% use the pill, and 7.8% the IUD. New channels of information and of contraceptive delivery are being explored to reach Mexico's widely scattered rural population, and to bring the average family size down to 2 children from the average of 6.5 children per woman for 1973-1975. The government goal is of a 1% population increase by the year 2000. PMID:12260566

A method is described that allows experimental [Formula: see text] order parameters to be enforced as a time-averaged quantity in molecular dynamics simulations. The two parameters that characterize time-averaged restraining, the memory relaxation time and the weight of the restraining potential energy term in the potential energy function used in the simulation, are systematically investigated based on two model systems, a vector with one end restrained in space and a pentapeptide. For the latter it is shown that the backbone N-H order parameter of individual residues can be enforced such that the spatial fluctuations of quantities depending on atomic coordinates are not significantly perturbed. The applicability to realistic systems is illustrated for the B3 domain of protein G in aqueous solution. PMID:25312596

Hansen, Niels; Heller, Fabian; Schmid, Nathan; van Gunsteren, Wilfred F

Deterministic network models have been attractive media for discussing how dynamical processes depend on network structural features. On the other hand, the heterogeneity of weights affects dynamical processes taking place on networks. In this paper, we present a family of weighted expanded Koch networks based on Koch networks. They originate from an r-polygon, and in each subsequent evolutionary step every node of the current generation produces m r-polygons that include the node, with edge weights scaled by the factor w. We derive closed-form expressions for the average weighted shortest path length (AWSP). In a large network, the AWSP stays bounded as the network order grows (0 < w < 1). We then focus on a particular random walk and the trapping problem on these networks. In more detail, we calculate the average receiving time (ART) exactly. The ART exhibits a sub-linear dependence on network order (0 < w < 1), which implies that nontrivial weighted expanded Koch networks are more efficient than un-weighted expanded Koch networks in receiving information. Besides, the efficiency of receiving information at hub nodes also depends on the parameters m and r. These findings may pave the way for controlling information transportation on general weighted networks.

We report the experimental realization of a versatile ring trap for ultracold atoms. The ring geometry is created by the time-averaged adiabatic potential resulting from the application of an oscillating magnetic bias field to a rf-dressed quadrupole trap. Lifetimes for a Bose-Einstein condensate in the ring exceed 11 s and the ring radius was continuously varied from 50 µm to 262 µm. An efficient method of loading the ring from a conventional time-averaged orbiting potential trap is presented together with a rotation scheme which introduces angular momentum into the system. The ring presents an opportunity to study the superfluid properties of a condensate in a multiply connected geometry and also has applications for matter-wave interferometry.

Sherlock, B. E.; Gildemeister, M.; Owen, E.; Nugent, E.; Foot, C. J. [Clarendon Laboratory, University of Oxford, Parks Road, Oxford, OX1 3PU (United Kingdom)

NAS uses PBS Professional to schedule and manage the workload on Pleiades, an 11,000+ node InfiniBand cluster. At this scale the user experience for quick-turnaround jobs can degrade, which led NAS initially to set up two separate PBS servers, each dedicated to a particular workload. Recently we have employed PBS hooks and scheduler modifications to merge these workloads together under one PBS server, delivering sub-1-minute start times for the quick-turnaround workload and enabling dynamic management of the resources set aside for that workload.
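A PBS Professional hook (written in Python on the server) could steer jobs at submission time. The sketch below shows only the routing decision such a hook might make; the queue names, thresholds, and function are hypothetical illustrations, not NAS's actual configuration or the `pbs` hook API:

```python
# Hypothetical routing logic a PBS "queuejob" hook might implement:
# jobs under a walltime/size threshold go to a quick-turnaround queue.
# All names and cutoffs below are invented for illustration.

QUICK_MAX_WALLTIME_S = 2 * 3600   # assumed cutoff: 2 hours
QUICK_MAX_NODES = 16              # assumed cutoff: small jobs only

def route_job(walltime_s: int, nodes: int) -> str:
    """Return the destination queue name for a job."""
    if walltime_s <= QUICK_MAX_WALLTIME_S and nodes <= QUICK_MAX_NODES:
        return "quick"
    return "normal"

print(route_job(900, 4))      # small, short job -> quick queue
print(route_job(86400, 512))  # large, long job -> normal queue
```

In a real deployment this decision would live inside a server-side hook so that both workloads share one PBS server while the quick queue's resources are managed dynamically.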

Members of the non-profit organization Education Resource Strategies (ERS) have worked for over a decade with leaders of urban school systems to help them organize talent, time and technology to support great schools at scale. One year into the Federal program they are noticing significant differences in district turnaround approaches, engagement…

To identify the changes associated with the school turnaround process, this article reviewed 15 case studies of elementary school turnaround initiatives that sustained improvements for at least two years. Changes were clustered into eight categories: leadership, school policy, programs, organizational processes, staffing, classroom practices,…

"Turnaround Schools and the Leadership They Require" was commissioned by the Canadian Education Association (CEA) in 2006. The research, conducted by Kenneth Leithwood and Tiiu Strauss, was a two-year study of Ontario schools identified as in need of "turnaround" meaning that a significant improvement in school achievement was required. The study…

The U.S. textile industry may serve as a classic study on regeneration through market forces. The industry has recently made a turnaround in profits after having been recognized as an industry that was losing most of its profits to overseas producers. The reasons for the emerging strength of the industry is that it began to innovate after a…

Both state and local school systems, recognizing the strong and growing evidence of pre-kindergarten's effectiveness in closing achievement gaps, are implementing early learning programs as part of their education reform efforts. In low-performing districts, pre-k has emerged as a promising turnaround strategy, reaching children before they become…

...calculate the daily geometric average concentrations of sulfur...determine the daily geometric average percent reduction of potential... (c) If you operate a Class I municipal waste combustion...calculate the daily arithmetic average for concentrations of...

School turnaround has become a popular strategy for improvement of chronically low-performing schools. Research related to school turnaround has relied substantially upon case studies. These case studies often focus on successful turnarounds and report immediate outcomes, failing to provide information about the sustainability of the results. In…

The mandates of the federal No Child Left Behind Law, including the threat of closing a school for underperformance, have led to multiple public school turnaround attempts. Because turnaround is a relatively new area of focus in education, there is limited research on what does and does not work, and even the definition of turnaround is a work in…

Turnaround analysis added a new dimension to the Space Program with the advent of the Space Shuttle. The requirement to turn the flight hardware around in 160 working hours from landing to launch was a significant design driver and a useful tool in forcing the integration of flight and ground systems design to permit an efficient ground operation. Although there was concern that time constraints might increase program costs, the result of the analysis was to minimize facility requirements and simplify operations with resultant cost savings.

The non-uniform heat generation in a cylindrical resistor carrying an alternating electric current is considered. The time-averaged and dimensionless temperature distribution in the resistor is evaluated analytically. Two dimensionless functions are reported in tables which allow one to determine the time-averaged temperature field for arbitrarily chosen values of the physical properties and of the radius of the resistor.

Distributions of time-averaged observables are investigated using deterministic maps with N indifferent fixed points and N-state continuous time random walk processes associated with them. In a weakly chaotic phase, namely when the separation of trajectories is subexponential, the maps are characterized by an infinite invariant density. We find that the infinite density can be used to calculate the distribution of time averages of integrable observables with a formula recently obtained by Rebenshtok and Barkai. As an example we calculate distributions of the average position of the particle and of average occupation fractions. Our work provides the distributional limit theorem for time averages of a wide class of observables that are non-integrable with respect to the infinite invariant density; in other words, it deals with the situation where the Darling-Kac-Aaronson theorem does not hold.
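A minimal numerical sketch of this setting, assuming a standard Pomeau-Manneville-type map x → x + x^z (mod 1) as the map with an indifferent fixed point (the paper's exact maps and parameters are not specified here): in the weakly chaotic regime, time-averaged occupation fractions stay scattered across trajectories instead of converging to one ergodic value.

```python
import random

def pm_map_trajectory(x0, z, n):
    """Iterate a Pomeau-Manneville-type map x -> x + x**z (mod 1)."""
    xs = [x0]
    x = x0
    for _ in range(n - 1):
        x = (x + x ** z) % 1.0
        xs.append(x)
    return xs

def time_average_occupation(xs, a=0.0, b=0.5):
    """Time-averaged occupation fraction of the interval [a, b]."""
    return sum(1 for x in xs if a <= x <= b) / len(xs)

random.seed(1)
# z = 2.5 puts the map in the weakly chaotic (infinite-invariant-density)
# phase; the occupation fractions below are themselves random variables.
fractions = [time_average_occupation(pm_map_trajectory(random.random(), 2.5, 20000))
             for _ in range(5)]
print(fractions)
```

Histogramming such fractions over many trajectories approximates the distribution of time averages that the Rebenshtok-Barkai formula describes analytically.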

On estimating functional average breakthrough curve using time-warping technique and perturbation: In this study, we consider each BTC as a random curve, and use time-warping techniques to align all curves in a time-warped space and compute the sample mean of the curves in the time-warped space directly.

Objective: The number of desaturations determined in recordings of pulse oximeter saturation (SpO2) primarily depends on the time over which values are averaged. As the averaging time in pulse oximeters is not standardized, it varies considerably between centers. To make SpO2 data comparable, it is thus desirable to have a formula that allows conversion between desaturation rates obtained using different averaging times for various desaturation levels and minimal durations. Methods: Oxygen saturation was measured for 170 hours in 12 preterm infants, with a mean of 65 desaturations <90% per hour of arbitrary duration, using a pulse oximeter in a 2–4 s averaging mode. Using 7 different averaging times between 3 and 16 seconds, the raw red-to-infrared data were reprocessed to determine the number of desaturations (D). The whole procedure was carried out for 7 different minimal desaturation durations (≥1, ≥5, ≥10, ≥15, ≥20, ≥25, ≥30 s) below SpO2 threshold values of 80%, 85% or 90% to finally reach a conversion formula. The formula was validated by splitting the infants into two groups of six children each and using one group as a training set and the other as a test set. Results: Based on the linear relationship found between the logarithm of the desaturation rate and the logarithm of the averaging time, the conversion formula is D2 = D1·(T2/T1)^c, where D2 is the desaturation rate for the desired averaging time T2, and D1 is the desaturation rate for the original averaging time T1, with the exponent c depending on the desaturation threshold and the minimal desaturation duration. The median error when applying this formula was 2.6%. Conclusion: This formula enables the conversion of desaturation rates between different averaging times for various desaturation thresholds and minimal desaturation durations. PMID:24489887
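The conversion formula D2 = D1·(T2/T1)^c is simple enough to sketch directly. The exponent value used below is an invented placeholder, not one of the paper's fitted exponents:

```python
def convert_desaturation_rate(d1, t1, t2, c):
    """Convert a desaturation rate d1 obtained with averaging time t1 to the
    rate expected with averaging time t2, using D2 = D1 * (T2/T1)**c.
    The exponent c depends on the SpO2 threshold and minimal duration."""
    return d1 * (t2 / t1) ** c

# Example: 65 desaturations/h measured with 3 s averaging, converted to a
# 16 s averaging time with an assumed exponent c = -0.8 (longer averaging
# smooths out brief events, so the rate drops).
print(convert_desaturation_rate(65.0, 3.0, 16.0, -0.8))
# Identity check: the same averaging time returns the same rate.
print(convert_desaturation_rate(65.0, 3.0, 3.0, -0.8))
```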

Vagedes, Jan; Bialkowski, Anja; Wiechers, Cornelia; Poets, Christian F.; Dietz, Klaus

Process unit startup problems can be minimized by having technical service or process engineers perform equipment turnaround inspections. Most plants have inspectors who are specifically trained to identify corrosion, fatigue and other conditions that lead to deterioration of equipment mechanical integrity. Inspectors are also concerned with the proper assembly of equipment, to ensure that all construction agrees with the field drawings. Training and knowledge of how process equipment actually operates, however, vary among individual inspectors. This, unfortunately, can lead to process equipment performance problems after startup even with equipment that is reportedly in good mechanical condition.

After defining the requirements for future vertical-takeoff, recoverable/reusable vehicle launch pads on the basis of a functional analysis, the conceptual design of a high-turnaround/low-maintenance facility has been performed. The results obtained show that the performance goals are achievable if specific efforts are directed to the optimization of the launch pad's jet deflector. Potential damage is restricted to the support/release jaws and the jet deflector; a maintenance-free 200-launch life may be obtained through steel cladding of critical deflector areas.

Evidence collected over the last 30 years suggests that effective school leaders significantly influence student learning and other aspects of school performance. Documented experience also indicates that individual leaders in failing organizations in various sectors, including education, can effect rapid, dramatic improvements. School turnaround…

The Lévy walk model is a stochastic framework of enhanced diffusion with many applications in physics and biology. Here we investigate the time-averaged mean squared displacement (TAMSD), often used to analyze single-particle tracking experiments. The ballistic phase of the motion is nonergodic and we obtain analytical expressions for the fluctuations of the TAMSD. For enhanced subballistic diffusion we observe numerically apparent ergodicity breaking on long time scales. As observed by Akimoto [Phys. Rev. Lett. 108, 164101 (2012)], deviations of the temporal averages from the ensemble average depend on the initial preparation of the system, and here we quantify this discrepancy from normal diffusive behavior. The time-averaged response to a bias is considered and the resultant generalized Einstein relations are discussed.
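The TAMSD used here can be computed directly from a single trajectory. A minimal sketch (pure Python; the ballistic test trajectory is illustrative, not data from the paper):

```python
def tamsd(x, lag):
    """Time-averaged mean squared displacement at a given lag:
    (1/(N - lag)) * sum_t (x[t + lag] - x[t])**2."""
    n = len(x)
    if not 0 < lag < n:
        raise ValueError("lag must satisfy 0 < lag < len(x)")
    return sum((x[t + lag] - x[t]) ** 2 for t in range(n - lag)) / (n - lag)

# For a purely ballistic trajectory x(t) = v*t the TAMSD is exactly
# v**2 * lag**2, the ballistic scaling discussed above.
v = 2.0
traj = [v * t for t in range(100)]
print(tamsd(traj, 5))  # 4.0 * 25 = 100.0
```

For a Lévy walk one would apply the same estimator to simulated or tracked trajectories and compare the scatter of TAMSD values across realizations with the ensemble average.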

Simulated contaminant breakthrough curves (BTCs) are often used to predict mass arrival at compliance boundaries at waste storage sites. In numerical simulations that involve uncertainties in input parameters, such as randomly heterogeneous rock properties, Monte Carlo simulations are commonly utilized and the mean breakthrough curve is often calculated from the arithmetic average of all realizations. The arithmetic mean breakthrough curve in general overestimates the mass flow rate at early and late time but underestimates the peak mass flow rate. The averaged breakthrough curve usually does not resemble any of the individual breakthrough curves. The reason is that BTCs vary not only in amplitude but also in dynamics (time), and therefore it is not appropriate to take the arithmetic average directly. In this study, we consider each BTC as a random curve, and use time-warping techniques to align all curves in a time-warped space, compute the sample mean of the curves in the time-warped space, and transform the mean back to the original time space. We show that all BTCs are aligned based on the percentile of mass reaching the compliance boundary, and the functional average is the percentile average of all BTCs. The confidence interval of the sample mean curve is estimated using a perturbation approach. The functional average provides an additional metric that can be used to characterize breakthrough behavior in addition to the more traditional median and arithmetic average curves. The method is illustrated using transport simulations at Material Disposal Area G, Los Alamos National Laboratory (LANL) in New Mexico.
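The percentile-based functional averaging can be sketched as follows. This is a simplified stand-in for the paper's time-warping procedure: toy cumulative curves and plain linear interpolation, invented for illustration.

```python
def time_at_percentile(times, cum, p):
    """Linearly interpolate the time at which a monotone cumulative
    breakthrough curve reaches mass fraction p."""
    for i in range(1, len(cum)):
        if cum[i] >= p:
            f = (p - cum[i - 1]) / (cum[i] - cum[i - 1])
            return times[i - 1] + f * (times[i] - times[i - 1])
    return times[-1]

def percentile_average(btcs, percentiles):
    """Average several BTCs by averaging, at each percentile of mass
    arrival, the times at which the curves reach that percentile."""
    return [sum(time_at_percentile(t, c, p) for t, c in btcs) / len(btcs)
            for p in percentiles]

# Two linear cumulative curves: mass arrives over 0-10 and 0-20 time units.
btc1 = ([0.0, 10.0], [0.0, 1.0])
btc2 = ([0.0, 20.0], [0.0, 1.0])
print(percentile_average([btc1, btc2], [0.25, 0.5, 0.75]))
```

Averaging arrival times per percentile, rather than mass per time step, is what keeps the functional mean from smearing the peak the way the arithmetic average does.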

We consider the problem of scheduling jobs with release dates on machines so as to minimize their average weighted completion time. We present the first known polynomial time approximation schemes for several variants of this problem. Our results include PTASs for the case of identical parallel machines and a constant number of unrelated machines with and without

Foto N. Afrati; Evripidis Bampis; Chandra Chekuri; David R. Karger; Claire Kenyon; Sanjeev Khanna; Ioannis Milis; Maurice Queyranne; Martin Skutella; Clifford Stein; Maxim Sviridenko

The definition of the covariant space-time averaging scheme for the objects (tensors, geometric objects, etc.) on differentiable metric manifolds with a volume n-form, which has been proposed for the formulation of macroscopic gravity, is analyzed. An overview of the space-time averaging procedure in Minkowski spacetime is given and comparison between this averaging scheme and that adopted in macroscopic gravity is carried out throughout the paper. Some new results concerning the algebraic structure of the averaging operator are precisely formulated and proved, the main one being that the averaging bilocal operator is idempotent iff it is factorized into a bilocal product of a matrix-valued function on the manifold, taken at a point, by its inverse at another point. The previously proved existence theorems for the averaging and coordination bilocal operators are revisited with more detailed proofs of related results. A number of new results concerning the structure of the volume-preserving averaging operators and the class of proper coordinate systems are given. It is shown, in particular, that such operators are defined on an arbitrary n-dimensional differentiable metric manifold with a volume n-form up to the freedom of (n-1) arbitrary functions of n arguments and 1 arbitrary function of (n-1) arguments. All the results given in this paper are also valid whenever appropriate for affine connection manifolds including (pseudo)-Riemannian manifolds.

This study explores the neoliberal agenda of turnaround school reform efforts in America by examining the application and transformation of a Midwest State Turnaround Schools Project for the public school system. Perceptions of administrators and state-level policy actors are considered. Data were collected from 13 participants during the…

Electronic time-average holograms are convenient for comparing the measured vibration modes of fan blades with those calculated by finite-element models. At the NASA Lewis Research Center, neural networks recently were trained to perform what had been a simple visual comparison of the predictions of the design models with the measurements. Finite-element models were used to train neural networks to recognize damage and strain information encoded in subtle changes in the time-average patterns of cantilevers. But the design-grade finite element models were unable to train the neural networks to detect damage in complex blade shapes. The design-model-generated patterns simply did not agree well enough with the measured patterns. Instead, hybrid-training records, with measured time-average patterns as the input and model-generated strain information as the output, were used to effect successful training.

Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.

Models for exposure assessment of high frequency electromagnetic fields from mobile phone base stations need the technical data of the base stations as input. One of these parameters, the Equivalent Radiated Power (ERP), is a time-varying quantity, depending on communication traffic. In order to determine temporal averages of the exposure, corresponding averages of the ERP have to be available. These can be determined as duty factors, the ratios of the time-averaged power to the maximum output power according to the transmitter setting. We determine duty factors for UMTS from the data of 37 base stations in the Swisscom network. The UMTS base stations sample contains sites from different regions of Switzerland and also different site types (rural/suburban/urban/hotspot). Averaged over all regions and site types, a UMTS duty factor F ≈ 0.32 ± 0.08 for the 24 h-average is obtained, i.e., the average output power corresponds to about a third of the maximum power. We also give duty factors for GSM based on simple approximations and a lower limit for LTE estimated from the base load on the signalling channels. PMID:25105551
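The duty factor defined above is just the time-averaged output power divided by the maximum output power. A minimal sketch with invented sample data (not measurements from the Swisscom network):

```python
def duty_factor(power_samples, p_max):
    """Duty factor: ratio of the time-averaged output power to the
    maximum output power of the transmitter setting."""
    if p_max <= 0:
        raise ValueError("p_max must be positive")
    return sum(power_samples) / len(power_samples) / p_max

# Hypothetical 24 hourly averaged ERP samples (watts) for a base station
# with an assumed 20 W maximum; the values are made up for illustration.
samples = [4, 3, 2, 2, 2, 3, 6, 9, 10, 9, 8, 8,
           9, 8, 8, 7, 8, 9, 10, 9, 7, 6, 5, 4]
print(round(duty_factor(samples, 20.0), 3))
```

Multiplying a maximum-power exposure estimate by such a duty factor converts it into a time-averaged exposure estimate.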

...the information on a quick turnaround basis. ETA will make every effort to coordinate the quick turnaround surveys with other research...Administration. Title: Quick Turnaround Surveys of WIA. OMB Number...Burden Cost for operation and maintenance: $0. Comments...

Average case analysis of disk scheduling, increasing subsequences and space-time geometry (Eitan): arising from the disk scheduling problem with a linear seek function and a probability distribution on the location of I/O requests. The disk scheduling algorithm of Andrews, Bender and Zhang [2], which provides

Time-averaging the semiclassical initial value representation for the calculation of vibrational energy levels (Alexey L. Kaledin and William H. Miller, Department of Chemistry and Kenneth S. Pitzer): particularly so for the calculation of spectral densities from which vibrational energy levels can

Climate Projections Using Bayesian Model Averaging and Space-Time Dependence (K. Sham Bhat, Murali Haran, Adam Terando, and Klaus Keller). Abstract: Projections of future climatic changes are a key input to the design of climate change mitigation and adaptation strategies. Current climate change projections

The Child-Langmuir law limits the steady-state current density across a one-dimensional planar diode. While it is known that the peak current density can surpass this limit when the boundary conditions vary in time, it remains an open question whether the average current can violate the Child-Langmuir limit under time-dependent conditions. For the case where the applied voltage is constant but the electric field at the cathode is allowed to vary in time, one-dimensional particle-in-cell simulations suggest that such a violation is impossible. Although a formal proof is not given, an upper bound on the time-averaged current density is offered.
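For reference, the steady-state Child-Langmuir limit itself can be evaluated directly. The sketch below implements the textbook planar-diode formula J = (4·ε0/9)·sqrt(2e/m)·V^(3/2)/d² (standard material, not a result of the paper):

```python
import math

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg

def child_langmuir_j(voltage, gap):
    """Child-Langmuir space-charge-limited current density (A/m^2)
    for a planar diode of gap spacing `gap` at applied voltage `voltage`."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
        * voltage ** 1.5 / gap ** 2

j1 = child_langmuir_j(1000.0, 1e-3)   # 1 kV across a 1 mm gap
j4 = child_langmuir_j(4000.0, 1e-3)
print(j1)
print(j4 / j1)  # the V**(3/2) scaling gives a factor of 8
```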

Griswold, M. E.; Fisch, N. J. [Princeton Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States); Wurtele, J. S. [Department of Physics, University of California at Berkeley, Berkeley, California 94720 (United States)

Time-distance helioseismology provides information about vector flows in the near-surface layers of the Sun by measuring wave travel times between points on the solar surface. Specific spatial averages of travel times have been proposed for distinguishing between flows in the east-west and north-south directions and measuring the horizontal divergence of the flows. No specific measurement technique has, however, been developed to measure flow vorticity. Here we propose a new measurement technique tailored to measuring the vertical component of vorticity. Fluid vorticity is a fundamental property of solar convection zone dynamics and of rotating turbulent convection in particular. The method consists of measuring the travel time of waves along a closed contour on the solar surface in order to approximate the circulation of the flow along this contour. Vertical vorticity is related to the difference between clockwise and counter-clockwise travel times. We applied the method to characterize the vortical motions ...

Summary Background and objectives Plasma phosphate levels display considerable intraindividual variability. The phosphatonin fibroblast growth factor 23 is a central regulator of plasma phosphate levels, and it has been postulated to be a more stable marker than conventional CKD–mineral and bone disorder parameters. Thus, fibroblast growth factor 23 has been hypothesized to reflect time-averaged plasma phosphate levels in CKD patients. Design, setting, participants, & measurements Among 40 patients from the outpatient dialysis center, serial measurements of plasma calcium and phosphate (before every dialysis session) as well as C-terminal fibroblast growth factor 23, parathyroid hormone, and alkaline phosphatase (one time weekly) were performed over a study period of 4 weeks in November and December of 2011. Intraindividual variability of repeated plasma fibroblast growth factor 23 measurements compared with other CKD–mineral and bone disorder markers was tested, and the association of a single plasma fibroblast growth factor 23 measurement with time-averaged plasma phosphate levels was analyzed. Results Against expectations, intraindividual variability of fibroblast growth factor 23 (median coefficient of variation=27%; interquartile range=20–35) was not lower than variability of plasma phosphate (median coefficient of variation=15%; interquartile range=10–20), parathyroid hormone (median coefficient of variation=24%; interquartile range=15–39), plasma calcium (median coefficient of variation=3%; interquartile range=2–4), or alkaline phosphatase (median coefficient of variation=5%; interquartile range=3–10). Moreover, the correlation between the last fibroblast growth factor 23 measurement after 4 weeks and time-averaged plasma phosphate did not surpass the correlation between the last fibroblast growth factor 23 measurement and a single plasma phosphate value (r=0.67, P<0.001; r=0.76, P<0.001, respectively). 
Conclusions: Surprisingly, fibroblast growth factor 23 was not more closely associated with time-averaged plasma phosphate levels than a single plasma phosphate value, and it did not show lower intraindividual variability than other tested markers of CKD–mineral and bone disorder. Thus, fibroblast growth factor 23 should not be used in clinical practice as a reflector of time-averaged plasma phosphate levels. PMID:23846463

Proxies are sources of climate information which are stored in natural archives (e.g. ice-cores, sediment layers on ocean floors and animals with calcareous marine skeletons). Measuring these proxies produces very short records and mostly involves sampling solid substrates, which is subject to the following two problems: Problem 1: Natural archives are equidistantly sampled at a distance grid along their accretion axis. Starting from these distance series, a time series needs to be constructed, as comparison of different data records is only meaningful on a time grid. The time series will be non-equidistant, as the accretion rate is non-constant. Problem 2: A typical example of sampling solid substrates is drilling. Because of the dimensions of the drill, the holes drilled will not be infinitesimally small. Consequently, samples are not taken at a point in distance, but rather over a volume in distance. This holds for most sampling methods in solid substrates. As a consequence, when the continuous proxy signal is sampled, it will be averaged over the volume of the sample, resulting in an underestimation of the amplitude. Whether this averaging effect is significant depends on the volume of the sample and the variations of interest of the proxy signal. Starting from the measured signal, the continuous signal needs to be reconstructed in order to eliminate these averaging errors. The aim is to provide an efficient identification algorithm to identify the non-linearities in the distance-time relationship, called time base distortions, and to correct for the averaging effects. Because this is a parametric method, an assumption about the proxy signal needs to be made: the proxy record on a time base is assumed to be harmonic; this is an obvious assumption because natural archives often exhibit a seasonal cycle. In a first approach the averaging effects are assumed to be in one direction only, i.e. the direction of the axis on which the measurements were performed.
The measured averaged proxy signal is modeled by the following signal model:

ȳ(n,θ) = ∫_{n-1/2}^{n+1/2} y(m,θ) dm

where m is the position, θ are the unknown parameters and y(m,θ) is the proxy signal we want to identify (the proxy signal as found in the natural archive), which we model as:

y(m,θ) = A_0 + Σ_{k=1}^{H} [ A_k sin(kω t(m)) + A_{k+H} cos(kω t(m)) ]

with t(m):

t(m) = m·T_S + g(m)·T_S

Here T_S = 1/f_S is the sampling period, f_S the sampling frequency, and g(m) the unknown time base distortion (TBD). In this work a splines approximation of the TBD is chosen:

g(m) = Σ_{l=1}^{b} b_l φ_l(m)

where b is a vector of unknown time base distortion parameters, and φ is a set of splines. The estimates of the unknown parameters were obtained with a nonlinear least squares algorithm. The vessel density measured in the mangrove tree R. mucronata was used to illustrate the method. Vessel density is a proxy for rainfall in tropical regions. The proxy data on the newly constructed time base showed a yearly periodicity, which is what we expected, and the correction for the averaging effect increased the amplitude by 11.18%.
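The amplitude underestimation from volume averaging (Problem 2) can be checked numerically: averaging a sinusoid over a sampling window of width d attenuates its amplitude by the factor sin(ωd/2)/(ωd/2). A small sketch with illustrative values (not the paper's data):

```python
import math

def box_average(f, center, width, steps=10000):
    """Numerical average of f over [center - width/2, center + width/2]
    using the midpoint rule."""
    h = width / steps
    return sum(f(center - width / 2 + (i + 0.5) * h)
               for i in range(steps)) / steps

w = 2.0 * math.pi   # one cycle per unit distance
d = 0.25            # sampling window width: a quarter period

# Average the sinusoid over a window centered on its peak (m = 0.25),
# and compare with the analytic sinc-type attenuation factor.
numeric = box_average(lambda m: math.sin(w * m), 0.25, d)
analytic = math.sin(w * d / 2.0) / (w * d / 2.0)
print(numeric, analytic)
```

With a quarter-period window the measured peak is about 90% of the true amplitude, which is exactly the kind of bias the identification algorithm corrects for.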

Beelaerts, V.; De Ridder, F.; Bauwens, M.; Schmitz, N.; Pintelon, R.

34. BOILER HOUSE, COAL CONVEYOR AND TURN-AROUND TRACK FOR COAL CARS (NOTE: COAL CAR No. 6 IN FAR BACK GROUND) - Delaware County Electric Company, Chester Station, Delaware River at South end of Ward Street, Chester, Delaware County, PA

The objectives and accomplishments were to adapt and apply the newly created database of Shuttle/Centaur ground operations. Previously defined turnaround operations analyses were updated for ground-based OTVs (GBOTVs) and space-based OTVs (SBOTVs), design requirements were identified for both OTV and Space Station accommodations hardware, turnaround operations costs were estimated, and a technology development plan was generated to develop the required capabilities. Technical and programmatic data were provided for NASA pertinent to OTV ground and space operations requirements: turnaround operations, task descriptions, timelines and manpower requirements, OTV modular design and booster and Space Station interface requirements, SBOTV accommodations development schedule, cost and turnaround operations requirements, and a technology development plan for ground and space operations and space-based accommodations facilities and support equipment. Significant conclusions are discussed.

The detection of the amplitude distribution in conventional time-average holography is realized through the reconstructed image intensity; however, satisfactory results often cannot be obtained because of noise influences such as speckle noise. As the phase of the zero-order Bessel function of the first kind takes only two values, namely 0 and π, the amplitude distribution can instead be determined through the phase of the reconstructed field. This method is better than the conventional one, but it takes a longer time. This paper presents a new method: by introducing the shearing principle into vibration measurement with time-average digital holography, the amplitude distribution can be detected rapidly by directly using the shearing interferogram to find the phase-stepping region, and this method needs no phase unwrapping operation. Simulation and experimental results show that the time the new method needs for vibration measurement is shorter than that of the reconstructed-field-phase method, and the measurement effect is better than that of the reconstructed image intensity.
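The 0/π phase argument rests on the sign structure of J0, which can be reproduced from its power series. A small sketch (straightforward series evaluation; the sample points are illustrative):

```python
import math

def bessel_j0(x, terms=40):
    """Zero-order Bessel function of the first kind via its power series:
    J0(x) = sum_k (-1)**k * (x/2)**(2k) / (k!)**2."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k * (x / 2.0) ** (2 * k) / math.factorial(k) ** 2
    return total

# In time-average holography the fringe intensity follows J0**2 of the
# vibration-amplitude term; the phase of J0 flips between 0 and pi at
# each zero crossing, the first of which lies near x = 2.405.
print(bessel_j0(0.0))       # 1.0 at zero amplitude
print(bessel_j0(2.0) > 0)   # before the first zero: phase 0
print(bessel_j0(3.0) < 0)   # after the first zero: phase pi
```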

Over the past decade, a number of experiments dealt with the problem of measuring the arrival direction distribution of cosmic rays, looking for information on the propagation mechanisms and the identification of their sources. Any deviation from isotropy may be regarded as a signature of unforeseen or unknown phenomena, especially if well localized in the sky and occurring at low rigidity. This induced experimenters to search for excesses down to angular scales as narrow as 10°, disclosing the issue of properly filtering contributions from wider structures. A solution commonly envisaged was based on time-average methods to determine the reference value of the cosmic-ray flux. Such techniques are nearly insensitive to signals wider than the time window in use, thus allowing us to focus the analysis on medium- and small-scale signals. Nonetheless, the signal often cannot be excluded in the calculation of the reference value, which induces systematic errors. The use of time-average methods recently revealed important discoveries about the medium-scale cosmic-ray anisotropy, present both in the northern and southern hemispheres. It is known that the excess (or deficit) is observed as less intense than in reality and that fake deficit zones are rendered around true excesses, because of the absolute lack of a priori knowledge of which signal is true and which is not. This work is an attempt to critically review the use of time-average-based methods for observing extended features in the cosmic-ray arrival distribution pattern.

Iuppa, R. [Dipartimento di Fisica dell'Universita 'Tor Vergata' di Roma, via della Ricerca Scientifica 1, I-00133 Roma (Italy)]; Di Sciascio, G., E-mail: roberto.iuppa@roma2.infn.it, E-mail: giuseppe.disciascio@roma2.infn.it [Istituto Nazionale di Fisica Nucleare, Sezione di Roma Tor Vergata, via della Ricerca Scientifica 1, I-00133 Roma (Italy)]

A method and apparatus for directly measuring the time-averaged streamwise momentum in a turbulent stream use a probe which has total head response which varies as the cosine-squared of the angle of incidence. The probe has a nose with a slight indentation on its front face for providing the desired response. The method of making the probe incorporates unique design features. Another probe may be positioned in a side-by-side relationship to the first probe to provide a direct measurement of the total pressure. The difference between the two pressures yields the sum of the squares of the cross-stream components of the turbulence level.

We develop a technique for finding the dynamical evolution in time of an averaged density matrix. The result is an equation of evolution that includes an effective Hamiltonian, as well as decoherence terms in Lindblad form. Applying the general equation to harmonic Hamiltonians, we confirm a previous formula for the effective Hamiltonian together with an additional decoherence term which should, in general, be included and whose vanishing provides the criteria for validity of the effective Hamiltonian approach. Finally, we apply the theory to examples of the ac Stark shift and three-level Raman transitions, recovering a decoherence effect in the latter.

Gamel, Omar; James, Daniel F. V. [Department of Physics, University of Toronto, 60 St. George Street, Toronto, Ontario M5S 1A7 (Canada)

Recent, unmixed sediments from mud banks of central Florida Bay were dated using 210Pb/226Ra, and chronologies were verified by comparing sediment lead temporal records with Pb/Ca ratios in annual layers of coral (Montastrea annularis) located on the ocean side of the Florida Keys. Dates of sediment lead peaks (1978 ± 2) accord with prior observations of a 6 year lag between the occurrence of maximum atmospheric lead in 1972 and peak coral lead in 1978. Smaller lags of 1-2 years occur between the maximum atmospheric radionuclide fallout and peaks in sediment temporal records of 137Cs and Pu. Such lags are consequences of system time-averaging (STA), in which atmospherically delivered particle-associated constituents accumulate and mix in a (sedimentary?) reservoir before transferring to permanent sediments and coral. STA model calculations, using time-dependent atmospheric inputs, produced optimized profiles in excellent accord with measured sediment 137Cs, Pu, lead, and coral lead distributions. Derived residence times of these particle tracers (16 ± 1, 15.7 ± 0.7, 19 ± 3, and 16 ± 2 years, respectively) are comparable despite differences in sampling locations, in accumulating media, and in element loading histories and geochemical properties. For a 16 year weighted mean residence time, STA generates the observed 6 year lead peak lag. Evidently, significant levels of nondegradable, particle-associated contaminants can persist in Florida Bay for many decades following elimination of external inputs. Present results, in combination with STA model analysis of previously reported radionuclide profiles, suggest that decade-scale time-averaging may occur widely in recent coastal marine sedimentary environments. Copyright 2000 by the American Geophysical Union.

Robbins, J.A.; Holmes, C.; Halley, R.; Bothner, M.; Shinn, E.; Graney, J.; Keeler, G.; TenBrink, M.; Orlandini, K.A.; Rudnick, D.
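The lag mechanism in the record above can be sketched as a first-order reservoir: atmospheric input mixes in a pool with residence time tau before transfer to the permanent record, so the convolution with an exponential kernel shifts and broadens input peaks. The pulse shape and parameters below are illustrative assumptions, not the authors' STA model.

```python
import numpy as np

def reservoir_record(atmos_input, tau, dt=1.0):
    # Convolve the atmospheric input with an exponential residence-time
    # kernel (mean residence time tau); this is the simplest STA-like model.
    t = np.arange(len(atmos_input)) * dt
    kernel = np.exp(-t / tau) / tau * dt
    return np.convolve(atmos_input, kernel)[:len(atmos_input)]

# a pulse of atmospheric deposition peaking at year 30
years = np.arange(100)
pulse = np.exp(-0.5 * ((years - 30) / 5.0) ** 2)
sediment = reservoir_record(pulse, tau=16.0)
lag = np.argmax(sediment) - np.argmax(pulse)  # sediment peak lags the input by a few years
```

With a 16-year residence time the modeled peak arrives several years after the atmospheric maximum, qualitatively reproducing the kind of lag reported for the lead records.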

Optical investigation of movable microsystem components using time-averaged holography is presented in this paper. It is shown that even harmonic excitation of a non-linear microsystem may result in unpredictable chaotic motion. Analytical relationships between the parameters of the chaotic oscillations and the formation of time-averaged fringes provide deeper insight into the computational and experimental interpretation of time-averaged MEMS holograms. PMID:24451467

Palevicius, Paulius; Ragulskis, Minvydas; Palevicius, Arvydas; Ostasevicius, Vytautas

Spatial mapping of variables that vary in space and time is a common procedure in many research fields. Very often it is of interest to map the time-average or time-integration of the variable over the whole period of interest. Normally, such a map is produced by spatially interpolating the whole period averages of the observed data. An alternative option is

At high Reynolds number, the flow of an incompressible viscous fluid over a lifting surface is a rich blend of fluid dynamic phenomena. Here, boundary layers formed at the leading edge develop over both the suction and pressure sides of the lifting surface, transition to turbulence, separate near the foil's trailing edge, combine in the near wake, and eventually form a turbulent far-field wake. The individual elements of this process have been the subject of much prior work. However, controlled experimental investigations of these flow phenomena and their interaction on a lifting surface at Reynolds numbers typical of heavy-lift aircraft wings or full-size ship propellers (chord-based Reynolds numbers Re_C ~ 10^7-10^8) are largely unavailable. This paper presents results from an experimental effort to identify and measure the dominant features of the flow over a two-dimensional hydrofoil at nominal Re_C values from near one million to more than 50 million. The experiments were conducted in the US Navy's William B. Morgan Large Cavitation Channel with a solid-bronze hydrofoil (2.1 m chord, 3.0 m span, 17 cm maximum thickness) at flow speeds from 0.25 to 18.3 m s⁻¹. The foil section, a modified NACA 16 with a pressure side that is nearly flat and a suction side that terminates in a blunt trailing-edge bevel, approximates the cross-section of a generic naval propeller blade. Time-averaged flow-field measurements drawn from laser-Doppler velocimetry, particle-imaging velocimetry, and static pressure taps were made for two trailing-edge bevel angles (44° and 56°). These velocity and pressure measurements were concentrated in the trailing-edge and near-wake regions, but also include flow conditions upstream and far downstream of the foil, as well as static pressure distributions on the foil surface and test section walls.
Observed Reynolds-number variations in the time-averaged flow over the foil are traced to changes in suction-side boundary-layer transition and separation. Observed Reynolds-number variations in the time-averaged near wake suggest significant changes occur in the dynamic flow in the range of Re_C investigated.

Bourgoyne, Dwayne A.; Hamel, Joshua M.; Ceccio, Steven L.; Dowling, David R.

The averaging of apparent relative dielectric permittivities by time domain reflectometry (TDR) is examined for properties varying along TDR waveguides and in the plane perpendicular to a TDR probe. A square root averaging model with uniform weighting factors describes the measured apparent relative dielectric permittivity for axially varying materials; variations in the transverse plane are described by an inverse averaging
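The square-root averaging model mentioned above has a simple physical reading: each layer along the waveguide contributes travel time in proportion to its refractive index, so sqrt(permittivity) averages linearly. A minimal sketch, with function and argument names of my own choosing:

```python
import numpy as np

def apparent_permittivity(eps_layers, weights=None):
    # Square-root (refractive-index) averaging for layered media along a
    # TDR waveguide: sqrt(eps) is averaged with the layer-length weights,
    # then squared to give the apparent relative permittivity.
    eps = np.asarray(eps_layers, dtype=float)
    if weights is None:
        weights = np.full(eps.size, 1.0 / eps.size)  # equal layer lengths
    return float(np.sum(weights * np.sqrt(eps)) ** 2)
```

For two equal layers of permittivity 1 (air) and 9, the model gives an apparent value of 4, noticeably below the arithmetic mean of 5, which is the characteristic signature of travel-time averaging.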

We report two novel techniques for cooling and rotating Bose-Einstein condensates in a dilute rubidium vapor that highlight the control and versatility afforded over cold atom systems by time-averaged adiabatic potentials (TAAPs). The intrinsic loss channel of the TAAP has been successfully employed to evaporatively cool a sample of trapped atoms to quantum degeneracy. The speed and efficiency of this process compares well with that of conventional forced rf evaporation. In an independent experiment, we imparted angular momentum to a cloud of atoms forming a Bose-Einstein condensate by introducing a rotating elliptical deformation to the TAAP geometry. Triangular lattices of up to 60 vortices were created. All findings reported herein result from straightforward adjustments of the magnetic fields that give rise to the TAAP.

The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.
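The composite-versus-optimal comparison above can be sketched numerically. This is a hypothetical illustration under an assumed exponential signal covariance and white measurement noise (all parameters are mine, not from the paper): the optimal weights solve a small linear system built from the covariances, while the composite average weights all observations equally.

```python
import numpy as np

def signal_cov(dt, variance=1.0, tscale=5.0):
    # assumed exponential covariance of the underlying signal
    return variance * np.exp(-np.abs(dt) / tscale)

t_obs = np.array([1.0, 2.5, 7.0, 19.0])      # irregular observation times
noise_var = 0.3
t_grid = np.linspace(0.0, 30.0, 301)          # dense grid defining the average

# covariance among observations (signal + measurement noise)
C = signal_cov(np.subtract.outer(t_obs, t_obs)) + noise_var * np.eye(len(t_obs))
# covariance between each observation and the time-average over the period
c = signal_cov(np.subtract.outer(t_obs, t_grid)).mean(axis=1)
# variance of the time-average itself
var_avg = signal_cov(np.subtract.outer(t_grid, t_grid)).mean()

w_opt = np.linalg.solve(C, c)                 # minimizes the mean squared error
w_comp = np.full(len(t_obs), 1.0 / len(t_obs))  # composite (simple) average

def mse(w):
    # mean squared error of the linear estimate w @ y of the time-average
    return float(var_avg - 2.0 * w @ c + w @ C @ w)
```

By construction the optimal estimate can never do worse than the composite average, and evaluating `mse` for both weight vectors shows the gap for any assumed covariance model.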

Context. Time-distance helioseismology provides information about vector flows in the near-surface layers of the Sun by measuring wave travel times between points on the solar surface. Specific spatial averages of travel times have been proposed for distinguishing between flows in the east-west and north-south directions and measuring the horizontal divergence of the flows. No specific measurement technique has, however, been developed to measure flow vorticity. Aims: Here we propose a new measurement technique tailored to measuring the vertical component of vorticity. Fluid vorticity is a fundamental property of solar convection zone dynamics and of rotating turbulent convection in particular. Methods: The method consists of measuring the travel time of waves along a closed contour on the solar surface in order to approximate the circulation of the flow along this contour. Vertical vorticity is related to the difference between clockwise and anti-clockwise travel times. Results: We applied the method to characterize the vortical motions of solar convection using helioseismic data from the Helioseismic and Magnetic Imager onboard the Solar Dynamics Observatory (SDO/HMI) and from the Michelson Doppler Imager onboard the Solar and Heliospheric Observatory (SOHO/MDI). Away from the equator, a clear correlation between vertical vorticity and horizontal divergence is detected. Horizontal outflows are associated with negative vorticity in the northern hemisphere and positive vorticity in the southern hemisphere. The signal is much stronger for HMI than for MDI observations. We characterize the spatial power spectrum of the signal by comparison with a noise model. Vertical vorticity at horizontal wavenumbers below 250/R⊙ can be probed with this helioseismic technique.
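The circulation idea underlying this travel-time measurement can be checked with a toy flow. For solid-body rotation (vertical vorticity exactly 2Ω), integrating the tangential velocity around a circle and dividing by the enclosed area recovers the vorticity via Stokes' theorem; in the helioseismic method, the anti-clockwise minus clockwise travel-time difference plays the role of this line integral. The flow and contour below are illustrative assumptions.

```python
import numpy as np

# solid-body rotation u = Omega x r, whose vertical vorticity is 2*Omega
Omega = 1.0e-3
R = 1.0
theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
x, y = R * np.cos(theta), R * np.sin(theta)
ux, uy = -Omega * y, Omega * x

tx, ty = -np.sin(theta), np.cos(theta)        # unit tangent (anti-clockwise)
dl = 2.0 * np.pi * R / theta.size             # arc-length element

circulation = np.sum(ux * tx + uy * ty) * dl  # line integral of u along the contour
vorticity = circulation / (np.pi * R ** 2)    # Stokes: Gamma / enclosed area = 2*Omega
```

The same contour average applied to travel times is what makes the measurement sensitive to vorticity while cancelling divergent flow contributions.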

We examined the kinematics of sediment particles in the proximity of a vertical plate attached to the side wall of a pressurized duct. The laboratory phenomenon investigated was archetypal of local scour near bridge abutments in covered rivers. We conducted a clear-water scour experiment where we measured the temporal evolution of the local erosion depth; the latter was found to follow a logarithmic trend. The concentration and velocity of the moving particles were measured on the basis of digital movies taken at various evolution stages of the process. The spatial pattern of the sediment motion showed that the action of the principal vortex system becomes progressively more evident with the general lowering of the sediment transport intensity detected as the scour hole increases in time. Furthermore, the phenomenology of local sediment transport was analyzed; key findings of the research include (1) the time-averaged values of the sediment transport rate are linked with those of concentration and velocity through relationships which are almost invariant, regardless of the experimental time and of the location within the scour hole; (2) intermittency and irregularity of the sediment fluxes increase as their mean intensity decreases; (3) the relative weight of the temporal fluctuations of concentration and velocity in the time-mean sediment transport activity is significant throughout the entire process. Potential implications of the present results for the modeling of local scour processes in pressurized-flow conditions are discussed. The considerations made are likely to have analogous applicability to local scour with a free surface, since the large-scale evolutions of the erosion depth in the two conditions may not be greatly different. 
Given the qualitative analogy of the present findings with earlier ones obtained for one-dimensional bed load, it is suggested that similar considerations may hold also for the small-scale modeling of a wider class of sediment transport processes.

Discrete maps have been extensively used to model two-dimensional chaotic transport in plasmas and fluids. Here we focus on area-preserving maps describing finite Larmor radius (FLR) effects on E × B chaotic transport in magnetized plasmas with zonal flows perturbed by electrostatic drift waves. FLR effects are included by gyro-averaging the Hamiltonians of the maps which, depending on the zonal flow profile, can have monotonic or non-monotonic frequencies. In the limit of zero Larmor radius, the monotonic frequency map reduces to the standard Chirikov-Taylor map, and, in the case of non-monotonic frequency, the map reduces to the standard nontwist map. We show that in both cases FLR leads to chaos suppression, changes in the stability of fixed points, and robustness of transport barriers. FLR effects are also responsible for changes in the phase space topology and zonal flow bifurcations. Dynamical systems methods based on recurrence time statistics are used to quantify the dependence on the...
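A minimal sketch of the gyro-averaged Chirikov-Taylor map described above (parameter names and values are my assumptions, not from the paper): the gyro-average of the wave phase factor equals the Bessel function J0(k·rho), computed here directly as an average over the gyro-angle, and it multiplies the kick amplitude, so a finite Larmor radius reduces the effective stochasticity parameter.

```python
import numpy as np

def gyro_average_factor(k_rho, n=4096):
    # Average of cos(k_rho * sin(gyro_angle)) over one gyration;
    # this equals the zero-order Bessel function J0(k_rho).
    gyro_angle = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return float(np.cos(k_rho * np.sin(gyro_angle)).mean())

def gyro_standard_map(x, p, K, k_rho=0.0, steps=1000):
    # Standard map with an FLR-reduced kick; k_rho = 0 recovers the
    # ordinary Chirikov-Taylor map.
    K_eff = K * gyro_average_factor(k_rho)
    for _ in range(steps):
        p = p + K_eff * np.sin(x)
        x = (x + p) % (2.0 * np.pi)
    return x, p
```

Near the first zero of J0 (k·rho ≈ 2.405) the effective kick almost vanishes, which is one way to see the chaos suppression the abstract reports.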

A variant of the average derivative estimator is developed to construct an index measuring the effects of a copper smelter on house prices in Tacoma, Washington. The estimated average derivative index is smooth and has lower standard errors than regression coefficients estimated separately across discrete time periods. The dataset spans the time of initial rumors of Superfund site designation, the

Contamination by volatile organic compounds (VOCs) in the environment is an increasing concern, since these compounds are harmful to ecosystems and even to human health; many of them are considered toxic and/or carcinogenic. The main sources of pollution are very diffuse, such as industrial discharges, urban wastewater and accidental spills, and these compounds may be present in many products and processes (i.e., paints, fuels, petroleum products, raw materials, solvents, etc.), making their control difficult. The presence of these compounds in groundwater, influenced by discharges, leachate or effluents of WWTPs, is especially problematic. In recent years, legislation has become increasingly restrictive regarding emissions of these compounds. From an environmental point of view, the European Water Framework Directive (2000/60/EC) sets out some VOCs as priority substances. This binding directive sets guidelines to control compounds such as benzene, chloroform, and carbon tetrachloride at very low concentration levels and with a very high frequency of analysis. The presence of VOCs in the various effluents is often highly variable and discontinuous, since it depends on the variability of the sources of contamination. Therefore, in order to have complete information on the presence of these contaminants and to take preventive measures effectively, continuous monitoring is important, requiring the development of new devices that obtain average concentrations over time. As of today, due to technical limitations, there are no devices on the market that allow continuous sampling of these compounds efficiently, provide detection limits low enough to meet the legal requirements, and are capable of detecting very sporadic discharges of short duration. LABAQUA has developed a device which consists of a small peristaltic pump controlled by an electronic board that governs its operation by pre-programming.
A constant flow passes through a glass cell containing adsorbent material where the VOCs are retained. The adsorbent used, made in LABAQUA, is a mixture of alginic acid and activated carbon. Due to its high permeability it allows the passage and retention of THMs in a suitable way, thus solving many of the problems of other common adsorbents. Also, to avoid degradation of the adsorbent, it is wrapped in a low density polyethylene (LDPE) membrane. After a sampling period of between 1 and 14 days, the adsorbent is collected and analyzed in the laboratory to quantify the VOC average concentration. This device resolves some of the limitations of the classical sampling system (spot samples), since we will take into account the fluctuations in the concentration of VOCs by averaging the same over time. This study presents the results obtained by the device for quantifying the VOCs legislated in the Directive 2000/60/EC. We present the validation of linearity over time and the limits of quantification, as well as the results of sample rate (Rs) obtained for each compound. The results demonstrate the high robustness and high sensitivity of the device. In addition the system has been validated in real waste water samples, comparing the results obtained with this device with the values of classical spot sampling, obtaining excellent results. PMID:24388502

Santiago Sánchez, Noemí; Tejada Alarcón, Sergio; Tortajada Santonja, Rafael; Llorca-Pórcel, Julio

...calculate the daily geometric average concentrations of sulfur...determine the daily geometric average percent reduction of potential... (c) If you operate a Class I municipal waste combustion...calculate the daily arithmetic average for concentrations of...

Three different image-processing methods based on the time-averaged technique were compared using the electronic speckle pattern interferometry (ESPI) technique for vibration measurement. The three methods are the video-signal-addition method, the video-signal-subtraction method, and the amplitude-fluctuation method. Also, errors introduced by using the zero-order Bessel function directly in the analysis of the fringe pattern were investigated. The video-signal-addition method has been the most generally used ESPI technique for vibration measurement. However, without additional image and/or signal-processing procedures, the fringe pattern obtained directly by the video-signal-addition method is rather difficult to observe. The reason for the poor visibility of the experimentally obtained fringe pattern with this method is explained. To increase the fringe pattern's visibility without additional image and/or signal processing, we tried two video-signal-subtraction methods. One of the two is the video-signal-subtraction method that has normally been used in static applications. The other, called the amplitude-fluctuation method, and its associated theory are reported here. PMID:21102866
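The zero-order Bessel function enters time-averaged ESPI because averaging the speckle phase factor exp(i·phi·sin(wt)) over one vibration cycle gives J0(phi), so fringe brightness varies as J0(phi)² with the vibration-induced phase amplitude. A small numerical sketch of that identity (the function name is mine):

```python
import numpy as np

def fringe_function(phase_amp, n=4096):
    # Time-average of the complex phase factor over one harmonic vibration
    # cycle; its squared magnitude is the J0^2 fringe profile of
    # time-averaged holography/ESPI.
    wt = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return float(np.abs(np.exp(1j * phase_amp * np.sin(wt)).mean()) ** 2)
```

Dark fringes fall at the zeros of J0 (the first near phi ≈ 2.405), and because J0² decays with amplitude, higher-order fringes are intrinsically dim, which is part of the visibility problem the abstract discusses.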

In variational data assimilation systems, background error covariances are often estimated from a temporal and spatial average. For a limited area model such as the Aire Limitée Adaptation Dynamique Développement International (ALADIN)/France, the spatial average is calculated over the regional computation domain, which covers western Europe. The purpose of this study is to revise the temporal stationarity assumption by diagnosing

A multi-wavelength inversion method is extended to reconstruct the time-averaged temperature distribution in a non-axisymmetric turbulent unconfined sooting flame from multi-wavelength measured data of low-time-resolution outgoing emission and transmission radiation intensities. Gaussian, β, and uniform distribution probability density functions (PDFs) are used to simulate the turbulent fluctuation of temperature, respectively. The reconstruction of time-averaged temperature consists of three steps.

The modified ogive analysis and the block ensemble average were employed to investigate the impact of the averaging-time extension on the energy balance closure over six land-use types. The modified ogive analysis, which requires a steady-state condition, can extend the averaging time up to a few hours and suggests that an averaging time of 30 min is still overall sufficient for eddy-covariance measurements over low vegetation. The block ensemble average, which does not require a steady-state condition, can extend the averaging time to several days. However, it can improve the energy balance closure for some sites during specific periods, when secondary circulations exist in the vicinity of the sensor. These near-surface secondary circulations mainly transport sensible heat, and when near-ground warm air is transported upward, the sensible heat flux observed by the block ensemble average will increase at longer averaging times. These findings suggest an alternative energy balance correction for a ground-based eddy-covariance measurement, in which the attribution of the residual depends on the ratio of sensible heat flux to the buoyancy flux. The fraction of the residual attributed to the sensible heat flux by this energy balance correction is larger than in the energy balance correction that preserves the Bowen ratio.

Charuchittipan, Doojdao; Babel, Wolfgang; Mauder, Matthias; Leps, Jens-Peter; Foken, Thomas
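The block ensemble average above can be sketched as follows: the time series is split into blocks of a chosen averaging time, the eddy covariance w'T' is computed within each block, and the block covariances are then averaged. This is a minimal sketch with my own names, not the authors' processing chain (which also involves coordinate rotation, detrending choices, etc.).

```python
import numpy as np

def block_flux(w, T, block_len):
    # Split the series into blocks of length block_len, compute the
    # within-block covariance of w and T, then average over blocks.
    n = (len(w) // block_len) * block_len
    wb = w[:n].reshape(-1, block_len)
    Tb = T[:n].reshape(-1, block_len)
    dev_w = wb - wb.mean(axis=1, keepdims=True)   # within-block fluctuations
    dev_T = Tb - Tb.mean(axis=1, keepdims=True)
    return float((dev_w * dev_T).mean())
```

Lengthening `block_len` lets slower motions (e.g. secondary circulations) contribute to the fluctuations instead of being absorbed into the block means, which is how the flux can grow at longer averaging times.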

This article examines the decisions made by one principal in her first year as a school turnaround specialist in a low-performing urban elementary school. Researchers focused on decisions related to the principal's three high-priority concerns: (1) elimination of an ineffective instructional program; (2) creation of a culture of teacher…

Surveys importance of economic and environmental factors in motivation of metro- and nonmetro-origin migrants relocating in 13 Kansas nonmetropolitan nonamenity turnaround counties. Finds employment-related reasons predominate, with economic characteristics of counties a significant factor. Quality of life/environmental reasons were less…

A model for predicting the time-dependent vertical thermal structure of lakes is presented. Radiative heating with depth, mixing induced by the surface wave field, and turbulent energy exchanges are included as lake-averaged processes. The model was designed to be applicable to a wide range of lake sizes. Comparison of predictions with horizontally averaged observations from Lake Ontario during

Time-exposure intensity (averaged) images are commonly used to locate the nearshore sandbar position (xb), based on the cross-shore locations of maximum pixel intensity (xi) of the bright bands in the images. It is not known, however, how the breaking patterns seen in Variance images (i.e. those created through standard deviation of pixel intensity over time) are related to the sandbar locations. We investigated the suitability of both Time-exposure and Variance images for sandbar detection within a multiple bar system on the southern coast of Brazil, and verified the relation between wave breaking patterns, observed as bands of high intensity in these images, and cross-shore profiles of modeled wave energy dissipation (xD). Not only is the Time-exposure maximum pixel intensity location (xi-Ti) well related to xb, but also to the maximum pixel intensity location of Variance images (xi-Va), although the latter was typically located 15 m offshore of the former. In addition, xi-Va was observed to be better associated with xD even though xi-Ti is commonly assumed to mark maximum wave energy dissipation. Significant wave height (Hs) and water level (η) were observed to affect the two types of images in a similar way, with an increase in both Hs and η resulting in xi shifting offshore. This η-induced xi variability is opposite to what is described in the literature, and is likely an indirect effect of higher waves breaking farther offshore during periods of storm surges. Multiple regression models performed on xi, Hs and η allowed the reduction of the residual errors between xb and xi, yielding accurate estimates with most residuals less than 10 m. Additionally, it was found that the sandbar position was best estimated using xi-Ti (xi-Va) when xb was located shoreward (seaward) of its mean position, for both the first and the second bar. Although it is unknown whether this is an indirect hydrodynamic effect or is indeed related to the morphology, we found that this behavior can be explored to optimize sandbar estimation using video imagery, even in the absence of hydrodynamic data. © 2011 Elsevier B.V.

Ultrawideband (UWB) microwave radar imaging techniques for breast cancer detection typically require estimates of the spatially averaged dielectric properties of breast tissue. We propose an algorithm for estimating patient-specific, frequency-dependent average dielectric properties from scattered UWB microwave signals. The algorithm is a variation of an iterative finite-difference time-domain (FDTD) technique for solving the time-domain inverse scattering problem. The assumption is

David W. Winters; Essex J. Bond; Barry D. Van Veen; Susan C. Hagness

A novel approach to coupling rarefied and continuum flow regimes as a single, hybrid model is introduced. The method borrows from techniques used in the simulation of spray flows to interpolate Lagrangian point-particles onto an Eulerian grid in a weight-averaged sense. A brief overview of traditional methods for modeling both rarefied and continuum domains is given, and a review of the literature regarding rarefied/continuum flow coupling is presented. Details of the theoretical development of the method of weighted interpolation are then described. The method evaluates macroscopic properties at the nodes of a CFD grid via the weighted interpolation of all simulated molecules in a set surrounding the node. The weight factor applied to each simulated molecule is the inverse of the linear distance between it and the given node. During development, the method was applied to several preliminary cases, including supersonic flow over an airfoil, subsonic flow over tandem airfoils, and supersonic flow over a backward facing step; all at low Knudsen numbers. The main thrust of the research centered on the time-accurate expansion of a rocket plume into a near-vacuum. The method proves flexible enough to be used with various flow solvers, demonstrated by the use of Fluent as the continuum solver for the preliminary cases and a NASA-developed Large Eddy Simulation research code, WRLES, for the full lunar model. The method is applicable to a wide range of Mach numbers and is completely grid independent, allowing the rarefied and continuum solvers to be optimized for their respective domains without consideration of the other. The work presented demonstrates the validity, and flexibility of the method of weighted interpolation as a novel concept in the field of hybrid flow coupling. The method marks a significant divergence from current practices in the coupling of rarefied and continuum flow domains and offers a kernel on which to base an ongoing field of research. 
It has the potential to significantly increase the flexibility of hybrid rarefied/continuum flow analyses.
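The weighted interpolation at the heart of the hybrid coupling above can be sketched compactly: a macroscopic value at a CFD grid node is the average of the simulated molecules in a surrounding set, each weighted by the inverse of its linear distance to the node. This is a minimal sketch with hypothetical names; the actual solver coupling (Fluent, WRLES) is of course far more involved.

```python
import numpy as np

def node_value(node_xyz, particle_xyz, particle_vals, eps=1e-12):
    # Inverse-distance weighted average of particle-carried values at a
    # grid node; eps guards against a particle sitting exactly on the node.
    d = np.linalg.norm(particle_xyz - node_xyz, axis=1)
    w = 1.0 / (d + eps)
    return float(np.sum(w * particle_vals) / np.sum(w))
```

Because the weights depend only on particle-node distances, the scheme is grid independent in the sense the text describes: the rarefied and continuum solvers never need to share a mesh.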

This paper examines trend uncertainties in layer-average free atmosphere temperatures arising from the use of different trend estimation methods. It also considers statistical issues that arise in assessing the significance of individual trends and of trend differences between data sets. Possible causes of these trends are not addressed. We use data from satellite and radiosonde measurements and from two reanalysis

B. D. Santer; T. M. L. Wigley; J. S. Boyle; D. J. Gaffen; J. J. Hnilo; D. Nychka; D. E. Parker; K. E. Taylor

and human energy expenditure for travel, which is assumed to be constant in time and space. The second one varies little over time and space, as pointed out first by Zahavi (1977), even if disaggregated travel times… From average travel time budgets to daily travel time distributions: an appraisal of two

A major component in the analysis of the Earth's radiation budget is the recovery of daily and monthly averaged radiative parameters using noncontinuous spatial and temporal measurements from polar orbiting satellites. In this study, the accuracy of the top of atmosphere (TOA) shortwave (SW) temporal interpolation model for the Clouds and the Earth's Radiant Energy System (CERES) is investigated using temporally intensive half-hourly TOA fluxes from the CERES/ARM/GEWEX Experiment (CAGEX) over Oklahoma (Charlock et al., 1996).

The underlying mean and variance properties of surface net radiation, sensible-latent heat fluxes and soil heat flux are studied over the densely instrumented grassland region encompassing FIFE. Flux variability is discussed together with the problem of scaling up to area-averaged fluxes. Results are compared and contrasted for cloudy and clear situations and examined for the influence of surface-induced biophysical controls (burn and grazing treatments) and topographic controls (aspect ratios and slope factors).

Smith, E. A.; Hsu, A. Y.; Crosson, W. L.; Field, R. T.; Fritschen, L. J.; Gurney, R. J.; Kanemasu, E. T.; Kustas, W. P.; Nie, D.; Shuttleworth, W. J.

As the U.S. Department of Education prepares to throw $3 billion in one-time money on the table to improve perennially foundering schools, a gulf is emerging between what federal officials would like to see done with the funds and what many districts say is their capacity--and inclination--to deliver. While some districts say the federal largess…

The average groundwater travel time for a 13.2-hectare forested watershed at the Hubbard Brook Experimental Forest in New Hampshire (USA) was computed using a fractal analysis of an unevenly spaced time series of the chloride concentration in the precipitation (1,471 data points) and stream water (2,120 data points). The spectral density of the precipitation and stream water chloride concentrations was plotted against the corresponding wavelength. Each was fit by a power function with exponent beta of 0.5 and 0.7 for precipitation and stream outflow respectively. These beta values fall between a white-noise beta=0 (internally uncorrelated) and a 1/f noise beta=1 (internally weakly correlated). The average travel time through the catchment was calculated using the precipitation beta and stream beta following the methodology proposed in Kirchner (2000), with some modifications. The average travel time for Watershed 6 is estimated to be 3.7 months.
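The spectral exponents quoted above come from fitting a power law to the spectral density. A minimal sketch of that step (the synthetic spectrum below is an assumption for illustration; the full Kirchner-style travel-time calculation additionally compares the precipitation and stream-water spectra):

```python
import numpy as np

def spectral_slope(freqs, power):
    # Fit S(f) ~ f^(-beta) by linear least squares in log-log space and
    # return the exponent beta.
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope

freqs = np.logspace(-3, 0, 200)   # illustrative frequency grid
power = freqs ** -0.7             # a 1/f-like stream-water spectrum
beta = spectral_slope(freqs, power)
```

A precipitation beta near 0 (white noise) and a stream beta closer to 1 indicate that the catchment damps high-frequency tracer variability, which is what carries the travel-time information.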

In this study, we appraised the effects of core excess reactivity and average coolant temperature on the operable time of the Nigeria Research Reactor-1 (NIRR-1), a miniature neutron source reactor (MNSR). The duration of the reactor operating time and fluence depletion under different operation modes, as well as the change in core excess reactivity with the temperature coefficient, were investigated

Y. A. Ahmed; I. B. Mansir; I. Yusuf; G. I. Balogun; S. A. Jonah

Mechanical aspects in the development of an oscillating scan mirror mechanism were reviewed. The mechanism featured a remarkably low level of structural vibration for the impact energies involved in mirror oscillation. Another feature was that energy lost during impact was returned to the mirror by applying torque only during the instant of impact. Because the duration of impact was only about 0.010 second, it was critical that energy losses be minimal, since there was little time to restore them.

The applicability of the so-called truncated Wigner approximation (-W) is extended to multitime averages of Heisenberg field operators. This task splits naturally in two. First, what class of multitime averages the -W approximates and, second, how to proceed if the average in question does not belong to this class. To answer the first question, we develop an (in principle, exact) path-integral approach in phase space based on the symmetric (Weyl) ordering of creation and annihilation operators. These techniques calculate a new class of averages which we call time-symmetric. The -W equations emerge as an approximation within these path-integral techniques. We then show that the answer to the second question is associated with response properties of the system. In fact, for two-time averages, Kubo's renowned formula relating the linear-response function to two-time commutators suffices. The -W is directly generalized to the response properties of the system, allowing one to calculate approximate time-normally-ordered two-time correlation functions with surprising ease. The techniques we develop are demonstrated for the Bose-Hubbard model.

Berg, B.; Plimak, L. I.; Polkovnikov, A.; Olsen, M. K.; Fleischhauer, M.; Schleich, W. P.

to permanent sediments and coral. STA model calculations, using time-dependent atmospheric inputs, produced with STA model analysis of previously reported radionuclide profiles, suggest that decade-scale time mortality, eutrophication, contamination, and decline of faunal populations, have been of concern

tools of Riemannian geometry (curvature computations, geodesic convexity), the time minimal case is associated to a Finsler metric which is not smooth. Nevertheless a qualitative analysis of the geodesic flow, foliated by ellipses of the free motion: X = {K

Simulation of the temperature distribution during the Pulse Electrochemical Machining (PECM) process provides information on system design and guidelines for practical use. The pulses that are applied to the PECM system have to be described on a time scale that can be orders of magnitude smaller than the time scale on which the thermal effects evolve. If the full detail

N. Smets; S. Van Damme; D. De Wilde; G. Weyns; J. Deconinck

Detection and attribution methodologies have been developed over the years to delineate anthropogenic from natural drivers of climate change and impacts. A majority of prior attribution studies, which have used climate model simulations and observations or reanalysis datasets, have found evidence for human-induced climate change. This paper tests the hypothesis that Granger causality can be extracted from the bivariate series of globally averaged land surface temperature (GT) observations and observed CO2 in the atmosphere using a reverse cumulative Granger causality test. This proposed extension of the classic Granger causality test is better suited to handle the multisource nature of the data and provides further statistical rigor. The results from this modified test show evidence for Granger causality from a proxy of total radiative forcing (RC), which in this case is a transformation of atmospheric CO2, to GT. Prior literature failed to extract these results via the standard Granger causality test. A forecasting test shows that a holdout set of GT can be better predicted with the addition of lagged RC as a predictor, lending further credibility to the Granger test results. However, since second-order-differenced RC is neither normally distributed nor variance stationary, caution should be exercised in the interpretation of our results.
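The core of any Granger test is comparing a restricted autoregression of GT against one augmented with lagged RC. The toy sketch below illustrates that comparison with a plain lag-1 F-test on synthetic series (this is the classic test, not the paper's reverse cumulative variant, and all coefficients are invented):

```python
import random

def ols_rss(X, y):
    """Residual sum of squares from ordinary least squares via normal
    equations (Gaussian elimination; fine for a handful of predictors)."""
    k = len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(len(X))) for q in range(k)]
         for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(len(X))) for p in range(k)]
    for col in range(k):                     # elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):           # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return sum((y[i] - sum(X[i][c] * beta[c] for c in range(k))) ** 2
               for i in range(len(X)))

# Synthetic series: gt depends on its own lag and on lagged rc.
random.seed(0)
rc = [random.gauss(0, 1) for _ in range(300)]
gt = [0.0]
for t in range(1, 300):
    gt.append(0.5 * gt[t - 1] + 0.8 * rc[t - 1] + random.gauss(0, 0.1))

y = gt[1:]
restricted = [[1.0, gt[t - 1]] for t in range(1, 300)]               # GT lag only
unrestricted = [[1.0, gt[t - 1], rc[t - 1]] for t in range(1, 300)]  # plus RC lag
rss_r, rss_u = ols_rss(restricted, y), ols_rss(unrestricted, y)
n, q = len(y), 1  # q = number of restrictions
f_stat = ((rss_r - rss_u) / q) / (rss_u / (n - 3))
print(f_stat > 4.0)  # large F means lagged RC improves the GT forecast
```

A large F-statistic rejects the null that lagged RC adds no predictive power, which is the same logic behind the paper's forecasting test on the holdout GT set.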

Kodra, Evan A [ORNL; Chatterjee, Snigdhansu [University of Minnesota; Ganguly, Auroop R [ORNL

The rotating shallow water equations are analyzed asymptotically in the limit of small Rossby and Froude numbers to the time scale longer than the advective time scale by one order in the small parameter. A new intermediate model with the advantage of simplicity provides a correction to the quasi-geostrophic equations for a balanced flow through a compressible, ageostrophic velocity slaved to the linear potential vorticity. Next, a four wave resonance involving three fast gravity waves and a single vortical mode is proposed as a means for gravity wave energy exchange across scales. In the reduced dynamics on the long time scale, a mechanism arises for enhanced gravity wave dissipation of larger scale waves through their interaction with smaller scale waves. The vortical mode acts as a catalyst for this interaction. In this and other experiments, the gravity waves exert no influence on the balanced flow.

is stationary in time and space during one hour and over the area of the LASA. The Auto and Cross-correlation Functions The auto-correlation function is often used in the study of observational data exhibiting some degree of randomness. The auto...

The purpose of this study was to determine if admission selection strategies that utilize cumulative and/or pre-requisite GPA are predictive of timely graduation for associate degree nursing (RN-AD) students. Data were obtained from de-identified records of 437 associate degree nursing students enrolled in three Midwest community colleges from…

Many climatic applications, including detection of climate change, require temperature time series that are free from discontinuities introduced by nonclimatic events such as relocation of weather stations. Although much attention has been devoted to discontinuities in the mean, possible changes in the variance have not been considered. A method is proposed to test and possibly adjust for nonclimatic inhomogeneities in
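A variance discontinuity of the kind described can be screened for with a simple variance-ratio statistic between segments before and after a suspected change point. The sketch below only illustrates that idea on simulated data; it is not the method the abstract proposes:

```python
import random

def variance_ratio(segment_a, segment_b):
    """F-type statistic comparing sample variances of two segments of a
    series; a ratio far from 1 hints at a nonclimatic variance change."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return var(segment_a) / var(segment_b)

random.seed(4)
before = [random.gauss(15.0, 1.0) for _ in range(200)]  # original station site
after = [random.gauss(15.0, 2.0) for _ in range(200)]   # simulated relocation
f = variance_ratio(after, before)
print(f > 2.0)  # variance roughly quadrupled after the simulated move
```

In practice the ratio would be compared against an F distribution with the appropriate degrees of freedom before concluding that an adjustment is needed.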

The present article challenges the assumption that later timing of sexual experiences is unequivocally associated with higher psychosocial adjustment. Data from two representative cross-sectional German studies conducted in 1996 and 2005 were analyzed to examine the psychosocial adjustment of young adults (age 20–29) who had their first sexual experiences early (before age 16), at an average age (between age 16

Claudia M. Haase; Monique Landberg; Christoph Schmidt; Kirsten Lüdke; Rainer K. Silbereisen

This paper presents a real time state estimation algorithm for control and security maintenance of multi-converter DC power electronic systems. Estimation of the state variables of the system includes the effects of ripples and converter switching actions. The proposed state estimator is based on the weighted least squares (WLS) approach. Generalized state space averaging technique in which we consider the
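The WLS estimator referred to above solves x = (H'WH)^-1 H'W z. A minimal sketch for a hypothetical two-state system (the measurement matrix, weights, and readings are invented for illustration):

```python
def wls_estimate(H, W, z):
    """Weighted least squares state estimate x = (H'WH)^-1 H'W z
    for a two-state system (closed-form 2x2 inverse)."""
    m, n = len(H), 2
    # Gain matrix G = H'WH and right-hand side r = H'Wz.
    G = [[sum(H[i][p] * W[i] * H[i][q] for i in range(m)) for q in range(n)]
         for p in range(n)]
    r = [sum(H[i][p] * W[i] * z[i] for i in range(m)) for p in range(n)]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    return [(G[1][1] * r[0] - G[0][1] * r[1]) / det,
            (G[0][0] * r[1] - G[1][0] * r[0]) / det]

# Three noisy measurements of two states x = [1.0, 2.0]; the third meter
# is less trusted, so it gets a smaller weight.
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
z = [1.02, 1.97, 3.10]
W = [1.0, 1.0, 0.25]
x = wls_estimate(H, W, z)
print([round(v, 2) for v in x])  # → [1.04, 1.99]
```

The ripple and switching effects discussed in the abstract enter through the measurement model H and the weights, which this sketch keeps static for clarity.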

This paper presents a real time state estimation algorithm for control and security maintenance of multi-converter MES power electronic systems. Estimation of the state variables of the system includes the effects of ripples and converter switching actions. The proposed state estimator is based on the weighted least squares (WLS) approach. Generalized state space averaging technique in which we consider the

Madan M. Jalla; A. Emadi; Geoffrey A. Williamson; Babak Fahimi

Absolute measurement of small-amplitude vibrations by time-averaged heterodyne holography. This technique enables absolute measurements of sub-nanometric out-of-plane vibration amplitudes. Laser Doppler interferometric methods are commonly used for non-contact measurements of mechanical vibrations. These methods

Condition monitoring of rolling elements bearings is investigated in this paper. Recently (11), we have shown that Time Synchronous Averaging combined with Support Vector Machines can lead to efficient bearing fault diagnosis. But the generalization performance of the SVM- boundaries was strongly affected by the transmission path of the signals. This paper is then concerned with the integration of Independent
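Time Synchronous Averaging itself is simple: slice the signal into cycles of known period and average them sample-by-sample, so components synchronous with the cycle survive while asynchronous noise cancels. A stdlib-only sketch on a synthetic signal (the period and noise level are invented):

```python
import math
import random

def time_synchronous_average(signal, period):
    """Average a signal over consecutive cycles of known length `period`,
    reinforcing synchronous components and attenuating the rest."""
    n_cycles = len(signal) // period
    return [sum(signal[c * period + i] for c in range(n_cycles)) / n_cycles
            for i in range(period)]

random.seed(2)
period = 32
# A periodic fault signature buried in heavy noise across 200 cycles.
clean = [math.sin(2 * math.pi * i / period) for i in range(period)]
noisy = [clean[i % period] + random.gauss(0.0, 1.0) for i in range(200 * period)]
tsa = time_synchronous_average(noisy, period)
err = max(abs(a - b) for a, b in zip(tsa, clean))
print(err < 0.3)  # noise std shrinks by roughly sqrt(200)
```

Features extracted from the averaged cycle (rather than the raw signal) are what would then be fed to a classifier such as the SVM mentioned above.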

Komgom N. Christian; Njuki Mureithi; Aouni Lakis; Marc Thomas

to a nine-month (academic year) basis using a conversion factor of .818. Note C. Cost of living and merit census file, October. Note D. Faculty salaries for 2004 include contracted raises (cost-of-living). FACTBOOK 2013-2014, Faculty and Staff: Average Salary of Full-Time Instructional Faculty by Academic

This study was conducted to ascertain the effects of employment on the academic performance of community college students. The grade point averages of 830 full-time students at Wilkes Community College (North Wilkesboro, N.C.) were analyzed to determine: (1) if working students' GPA's differed significantly from those of the total student…

Air fluorescence measurements of cosmic ray energy must be corrected for attenuation of the atmosphere. In this paper we show that the air-showers themselves can yield a measurement of the aerosol attenuation in terms of optical depth, time-averaged over extended periods. Although the technique lacks statistical power to make the critical hourly measurements that only specialized active instruments can achieve,

R. U. Abbasi; T. Abu-Zayyad; J. F. Amann; G. Archbold; R. Atkins; K. Belov; J. W. Belz; S. BenZvi; D. R. Bergman; J. H. Boyer; C. T. Cannon; Z. Cao; B. M. Connolly; Y. Fedorova; C. B. Finley; W. F. Hanlon; C. M. Hoffman; M. H. Holzscheiter; G. A. Hughes; P. Huntemeyer; C. C. H. Jui; M. A. Kirn; B. C. Knapp; E. C. Loh; N. Manago; E. J. Mannel; K. Martens; J. A. J. Matthews; J. N. Matthews; A. O'Neill; K. Reil; M. D. Roberts; S. R. Schnetzer; M. Seman; G. Sinnis; J. D. Smith; P. Sokolsky; C. Song; R. W. Springer; B. T. Stokes; S. B. Thomas; G. B. Thomson; D. Tupa; S. Westerhoff; L. R. Wiencke; A. Zech

What are regional ocean climatologies? Ocean climatology is a compendium of time-averaged fields of the ocean climatology from data was well ahead of capabilities of ocean numerical models and other practical REGIONAL CLIMATOLOGY OF GREENLAND-ICELAND-NORWEGIAN SEAS 2013 www.nodc.noaa.gov recently upscaled to 1/4°x

Many factory optimization problems, from inventory control to scheduling and reliability, can be formulated as continuous-time Markov decision processes. A primary goal in such problems is to find a gain-optimal policy that minimizes the long-run average cost. This paper describes a new average-reward algorithm called SMART for finding gain-optimal policies in continuous-time semi-Markov decision processes. The paper presents a detailed experimental study of SMART on a large unreliable...
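A SMART-style update adjusts an action value by the immediate reward minus the running average reward rate times the sojourn time. The sketch below applies that update to a made-up two-state SMDP (rewards, sojourn times, and learning parameters are all illustrative, not from the paper):

```python
import random

# A toy SMDP: action 0 is a slow repair with higher reward, action 1 a fast
# repair with lower reward; both return to state 0. Numbers are invented.
def step(state, action):
    if action == 0:
        return 0, 1.0, 4.0   # next state, reward, sojourn time (rate 0.25)
    return 0, 0.5, 1.0       # rate 0.5 per unit time: the better choice

random.seed(3)
Q = {(0, 0): 0.0, (0, 1): 0.0}
rho, total_r, total_t = 0.0, 0.0, 0.0
alpha, epsilon = 0.1, 0.2
state = 0
for _ in range(5000):
    if random.random() < epsilon:
        action = random.choice([0, 1])
    else:
        action = max((0, 1), key=lambda a: Q[(state, a)])
    nxt, r, tau = step(state, action)
    best_next = max(Q[(nxt, 0)], Q[(nxt, 1)])
    # SMART-style relative value update: reward minus average-reward-rate
    # times the sojourn time, plus the best next value.
    Q[(state, action)] += alpha * (r - rho * tau + best_next - Q[(state, action)])
    if action == max((0, 1), key=lambda a: Q[(state, a)]):
        total_r += r            # update the reward-rate estimate only on
        total_t += tau          # greedy actions, as in average-reward SMDP RL
        rho = total_r / total_t
    state = nxt

print(Q[(0, 1)] > Q[(0, 0)])  # learner prefers the higher reward rate
```

Maximizing reward per unit time (gain), rather than total reward per transition, is what distinguishes this average-reward formulation from discounted Q-learning.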

Fetal cardiac time intervals (fCTI) are dependent on fetal growth and development, and may reveal useful information for fetuses affected by growth retardation, structural cardiac defects or long QT syndrome. Fetal cardiac signals with a signal-to-noise ratio (SNR) of at least 15 dB were retrieved from fetal magnetocardiography (fMCG) datasets with a system based on independent component analysis (ICA). An

experimental data.1-3,8,9 The technique relies on surface-generated noise from wind and waves.10,11 Shipping ... and, for correlation times less than a few seconds, the observed signal-to-noise ratio (SNR) agrees ... correlation of ambient noise data from a drifting vertical array has been the subject of much discussion in recent years

We have conducted two field trips in the Argentina provinces of Mendoza and Neuquen to sample back-arc volcanism east of the Andean Cordillera margin. The volcanic activity ranges from Pliocene to very recent time, with a large variety of products, from basalts to highly differentiated lavas. Paleomagnetic volcanic samples were collected at more than 80 sites in this area. Paleomagnetic directions, K-Ar dating, and chemical analysis of this set aim at providing a better understanding of the volcanic activity in this area, but also at contributing to the study of the statistical behavior of the paleomagnetic field over the interval 0-5 Ma. We present new paleomagnetic results from 34 flow units belonging to two volcanic massifs: the Payun Matru and the Cerro Nevado, which each encompass about 300 kyr of activity, from 0 to 300 ka and from 1.3 to 1.0 Ma, respectively. Most samples have NRM intensities between about 1 and 20 A/m and depict progressive removal of magnetization components in a consistent fashion during stepwise AF or thermal demagnetization, allowing us to precisely obtain the characteristic components of magnetization. No transitional directions were recorded. The mean magnetic field direction from this set is compatible with a geocentric axial dipole (GAD) field, but we cannot rule out the presence of a small persistent axial quadrupole (g2^{0}). Finally, these results are compared to other recent records from South America, allowing us to investigate properties of the paleosecular variation, such as the latitudinal dependence of the angular standard deviation (ASD) of virtual geomagnetic poles (VGPs) over the past 5 Myr for this continent.

Carlut, J.; Quidelleur, X.; Germa, A.; Gillot, P.; Tchilinguirian, P.

Time-averaged local heat transfer coefficients were measured during flow boiling of water at atmospheric pressure in a vertical channel of rectangular cross-section 2 mm by 1 mm for ranges of mass flux 57–211 kg/m2s, heat flux 27–160 kW/m2, thermodynamic quality 0–0.3 and inlet subcooling 1–12 K. The heat transfer coefficients were found to increase nearly with the square root of

The dynamics of puff motions in an excited stack-issued transverse jet were studied experimentally in a wind tunnel. The temporal and spatial evolution of the puffs induced by acoustic excitation was examined using the smoke flow visualization method and high-speed particle image velocimetry, with phase-resolved ensemble-averaged velocity fields used to study the velocity, length scales, and vorticity characteristics of the puffs. The time-averaged velocity fields were calculated to analyze the velocity distributions and vorticity contours. The results show that a puff consists of a pair of counter-rotating vortex rings. An initial vortex ring was formed due to a concentration of vorticity at the lee side of the issuing jet at the instant of the mid-oscillation cycle. A vortex ring rotating in the opposite direction to that of the initial vortex ring was subsequently formed at the upwind side of the issuing jet. These two counter-rotating vortex rings formed a "mushroom" vortex pair, which was deflected by the crossflow and traveled downstream along a time-averaged trajectory of zero vorticity. The trajectory was situated far above the time-averaged streamline evolving from the leading edge of the tube. The velocity magnitudes of the vortex rings at the upwind and lee sides decreased with time as the puffs traveled downstream, due to momentum dissipation and entrainment effects. The puffs traveling along the trajectory of zero vorticity caused large velocities to appear above the leading-edge streamline.

Using a second-order helium-cooled superconducting quantum interference device gradiometer as the detector, ultra-low-field nuclear magnetic resonance (ULF-NMR) signals of protons are recorded in an urban environment without magnetic shielding. The homogeneity and stability of the measurement field are investigated. NMR signals of protons are studied at night and during working hours. The Larmor frequency variation caused by the fluctuation of the external magnetic field during daytime reaches around 5 Hz when performing multiple measurements for about 10 min, which seriously affects the results of averaging. In order to improve the performance of the averaged data, we suggest the use of a data processor, i.e. the so-called time-domain frequency correction (TFC). For a 50-times-averaged signal spectrum, the signal-to-noise ratio is enhanced from 30 to 120 when applying TFC while preserving the NMR spectrum linewidth. The TFC is also applied successfully to the measurement data of the hetero-nuclear J-coupling in 2,2,2-trifluoroethanol.

Higher educational institutions face fundamental challenges as they adapt to increasingly competitive markets. This literature review provides a context on markets, failure, and turnarounds, and discusses the literature on the shift of the higher educational industry into competitive markets. The literature on turnarounds in the corporate…

"Central High School," a rural school composed largely of students of color from low income homes, experienced a quick, remarkable turnaround of student academic success, measured by State and Federal accountability standards. This turnaround began with an external restructuring grant initiative that required a state-approved external consultant.…

Maxwell, Gerri M.; Huggins, Kristin S.; Scheurich, James J.

School "turnaround" has received significant attention recently in education literature and policy action, especially as a means to dramatically improve urban education. In current common education usage, "turnaround" refers to the rapid, significant improvement in the academic achievement of persistently low-achieving schools.…

We tested the effects of the herbicide metsulfuron-methyl on growth of the submerged macrophyte Myriophyllum spicatum under laboratory conditions using different exposure scenarios. The exposures of each scenario were comparable in the concentration × time factor, viz., the same 21-d time-weighted average (TWA) concentrations, but variable in peak exposure concentrations (ranging from 0.1 to 21,000 ng a.i. L-1) and exposure periods (1, 3, 7, 14
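The time-weighted average (TWA) used to equalize the scenarios is just concentration times duration, summed and divided by total duration. A small sketch showing how two very different peak profiles can share one TWA (numbers invented, not the study's):

```python
def time_weighted_average(concentrations, durations):
    """Time-weighted average: sum of concentration x duration
    divided by the total duration."""
    total_time = sum(durations)
    return sum(c * d for c, d in zip(concentrations, durations)) / total_time

# Two exposure scenarios with the same 21-d TWA but different peaks:
# scenario A holds 100 ng/L for 21 d; scenario B spikes to 2100 ng/L
# for 1 d and then drops to zero for the remaining 20 d.
a = time_weighted_average([100.0], [21.0])
b = time_weighted_average([2100.0, 0.0], [1.0, 20.0])
print(a == b == 100.0)
```

Equal TWAs with unequal peaks are exactly what lets the study isolate whether toxicity tracks the average dose or the peak exposure.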

J. D. M. Belgers; G. H. Aalderink; G. H. P. Arts; T. C. M. Brock

Blade measurements of time-averaged flux distribution are obtained with and without gas injection for a full-stage rotating turbine. Results are presented along the blade in the flow direction at 10, 50, and 90 percent span locations for both the pressure and suction surfaces; enough measurements were obtained to present spanwise distributions as well. The results suggest that the suction surface laminar flat plate prediction is in reasonable agreement with the data from the stagnation point up to about 10 percent of the wetted distance. The influence of upstream nozzle guide vane injection is to significantly increase the local blade heat flux in the immediate vicinity of the leading edge.

I present the results of an experiment to demonstrate the effect of phase mismatch between an object vibration and a bias vibration in pseudo-phase-step analysis of time-average holographic interferograms of vibration modes. Pseudo-phase-stepping applies conventional phase-step equations to zero-order Bessel function fringes and during phase unwrapping corrects for the errors incurred. A circular disk vibrating in a quadrature combination of its two one-diameter modes was used as a test object and provided a 360 degrees phase distribution. The results indicate that the process has considerable tolerance to phase mismatch. PMID:16912785

Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances where such aerosols can deposit in sensitive regions, potentially causing harm to local ecosystems. However, quantifying the emissions of ammonia from CAFOs through direct measurement is very difficult and costly to perform. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured using inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are only deployed when wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data to modeled emissions from more sophisticated backward Lagrangian stochastic (bLs) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems were deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada) which continuously measured ammonia concentrations instantaneously over a 225-m path. This particular laser is utilized in agricultural settings, and in combination with a bLs model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. 
This study will first compare the ammonia concentrations measured with the Radiello system to those measured with the long-path IR laser. Second, NH3 emissions estimated using the simple inverse model (FIDES) and the time-averaged data will be compared to emissions derived from the bLS model (WindTrax) using the laser-based NH3 data. Results could lead to a more cost-efficient and simpler technique for monitoring ammonia fluxes from CAFOs and other strong areal sources.

Introduction With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions’ impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during “consolidation” and “pre-elimination” phases. Methods Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non-Gaussian, non-stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. Results The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. Conclusions G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low. PMID:23785448
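The advantage of negative-binomial over Gaussian or Poisson assumptions at low counts is overdispersion handling (variance exceeding the mean). A stdlib-only sketch comparing the two log-likelihoods on an invented overdispersed count series (the dispersion parameter is hypothetical, and this is far simpler than the GSARIMA models above):

```python
import math

def poisson_logpmf(k, lam):
    """Log probability of count k under a Poisson with mean lam."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def negbin_logpmf(k, mean, size):
    """Negative binomial with mean `mean` and dispersion `size`
    (variance = mean + mean**2 / size)."""
    p = size / (size + mean)
    return (math.lgamma(k + size) - math.lgamma(size) - math.lgamma(k + 1)
            + size * math.log(p) + k * math.log(1 - p))

# Overdispersed monthly-style case counts (variance far exceeds the mean).
counts = [0, 1, 0, 3, 12, 25, 9, 2, 0, 1, 18, 4]
mean = sum(counts) / len(counts)
ll_pois = sum(poisson_logpmf(k, mean) for k in counts)
ll_nb = sum(negbin_logpmf(k, mean, 0.8) for k in counts)
print(ll_nb > ll_pois)  # the NB likelihood should fit this series better
```

The Poisson forces variance to equal the mean, so it is heavily penalized by both the zeros and the spikes; the extra dispersion parameter is what the negative-binomial G(S)ARIMA models exploit.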

Briet, Olivier J. T.; Amerasinghe, Priyanie H.; Vounatsou, Penelope

Background Primary IgA nephropathy (IgAN) is the most common form of idiopathic glomerulonephritis worldwide. Although most patients are able to achieve remission with the current therapy, a large number of patients will still progress to end-stage renal disease. This study aimed to evaluate kidney disease progression and the risk factors for progression in IgAN patients who achieved remission. Methods Patients from a prospective database with IgAN were included in this study. All the subjects had achieved a complete remission (CR) or partial remission (PR) following 6 months of therapy. Renal survival and the relationship between the clinical parameters and composite renal outcomes were assessed. Results The study comprised 878 IgAN patients recruited between January 2005 and December 2010. Overall, 632 patients were enrolled in this study. The data from the 369 patients who achieved remission were analyzed; the mean follow-up time was 49 months. The median serum creatinine (SCr) concentration at baseline was 91.3 μmol/L, and the time-averaged creatinine (TA-SCr) was 91.8 μmol/L. The mean serum albumin (ALB) level at baseline was 39.4 g/L, and the time-averaged serum albumin (TA-ALB) was 42.1 g/L. Multivariate Cox regression analyses revealed that the TA-ALB and TA-SCr levels were independently associated with the composite renal outcome. The patients with a TA-SCr value > 120 μmol/L and a TA-ALB level

A novel multi-frame particle image velocimetry (PIV) method, able to evaluate a fluid trajectory by means of an ensemble-averaged cross-correlation, is introduced. The method integrates the advantages of state-of-the-art time-resolved PIV (TR-PIV) methods to further enhance both robustness and dynamic range. The fluid trajectory follows a polynomial model with a prescribed order. A set of polynomial coefficients, which maximizes the ensemble-averaged cross-correlation value across the frames, is regarded as the most appropriate solution. To achieve a convergence of the trajectory in terms of polynomial coefficients, an ensemble-averaged cross-correlation map is constructed by sampling cross-correlation values near the predictor trajectory with respect to an imposed change of each polynomial coefficient. A relation between the given change and the corresponding cross-correlation maps, which can be calculated from the ordinary cross-correlation, is derived. A disagreement between the computational domain and the corresponding physical domain is compensated for by introducing the Jacobian matrix based on the image deformation scheme in accordance with the trajectory. An increased cost of the convergence calculation, associated with the nonlinearity of the fluid trajectory, is moderated by means of a V-cycle iteration. To validate the enhancements of the present method, quantitative comparisons with state-of-the-art TR-PIV methods, e.g., the adaptive temporal interval, the multi-frame pyramid correlation and the fluid trajectory correlation, were carried out using synthetically generated particle image sequences. The performances of the tested methods are discussed in algorithmic terms. A high-rate TR-PIV experiment of a flow over an airfoil demonstrates the effectiveness of the present method. It is shown that the present method is capable of reducing random errors in both velocity and material acceleration while suppressing spurious temporal fluctuations due to measurement noise.

Jeon, Young Jin; Chatellier, Ludovic; David, Laurent

Branched DNA (bDNA) is a signal amplification technology used in clinical and research laboratories to quantitatively detect nucleic acids. An overnight incubation is a significant drawback of highly sensitive bDNA assays. The VERSANT® HIV-1 RNA 3.0 Assay (bDNA) (“Versant Assay”) currently used in clinical laboratories was modified to allow shorter target incubation, enabling the viral load assay to be run in a single day. To dramatically reduce the target incubation from 16–18 h to 2.5 h, the composition of only the “Lysis Diluent” solution was modified. Nucleic acid probes in the assay were unchanged. Performance of the modified assay (assay in development; not commercially available) was evaluated and compared to the Versant Assay. Dilution series replicates (>950 results) were used to demonstrate that analytical sensitivity, linearity, accuracy, and precision for the shorter modified assay are comparable to the Versant Assay. HIV RNA-positive clinical specimens (n = 135) showed no significant difference in quantification between the modified assay and the Versant Assay. Equivalent relative quantification of samples of eight genotypes was demonstrated for the two assays. Elevated levels of several potentially interfering endogenous substances had no effect on quantification or specificity of the modified assay. The modified assay with drastically improved turnaround time demonstrates the viability of signal-amplifying technology, such as bDNA, as an alternative to the PCR-based assays dominating viral load monitoring in clinical laboratories. Highly sensitive bDNA assays with a single-day turnaround may be ideal for laboratories with especially stringent cost, contamination, or reliability requirements. PMID:22479381

Baumeister, Mark A.; Zhang, Nan; Beas, Hilda; Brooks, Jesse R.; Canchola, Jesse A.; Cosenza, Carlo; Kleshik, Felix; Rampersad, Vinod; Surtihadi, Johan; Battersby, Thomas R.

We explicitly test the equal-time consistency relation between the angular-averaged bispectrum and the power spectrum of the matter density field, employing a large suite of cosmological $N$-body simulations. This is the lowest-order version of the relations between $(\ell+n)$-point and $n$-point polyspectra, where one averages over the angles of $\ell$ soft modes. This relation depends on two wave numbers, $k'$ in the soft domain and $k$ in the hard domain. We show that it holds up to a good accuracy, when $k'/k \ll 1$ and $k'$ is in the linear regime, while the hard mode $k$ goes from linear ($0.1\,h\,\mathrm{Mpc}^{-1}$) to nonlinear ($1.0\,h\,\mathrm{Mpc}^{-1}$) scales. On scales $k \lesssim 0.4\,h\,\mathrm{Mpc}^{-1}$, we confirm the relation within a $\sim 5\%$ accuracy, even though the bispectrum can already deviate from leading-order perturbation theory by more than $30\%$. We further show that the relation extends up to nonlinear scales, $k \sim 1.0\,h\,\mathrm{Mpc}^{-1}$, within an accuracy of $\sim 10\%$...

Current and projected high-pressure, staged-combustion rocket engine designs, such as the XLR129 and space shuttle main engine (SSME), introduced a type of turnaround duct flowpath wherein the turnaround is accompanied by an increase in mean radius and a decrease in fluid velocity through a subsequent diffuser. This turnaround duct flowpath is needed in the rocket engine to conduct high-pressure turbopump turbine discharge gases toward the main injector while maintaining a compact turbomachinery and powerhead flowpath. Current experience in the SSME shows that this type of turnaround duct can experience flow separation with an attendant increase in flow turbulence levels. Advances in computational fluid dynamics techniques over the last decade produced a number of codes which are applicable to the analysis and prediction of flow field behavior within such ducts. A version of the TEACH code developed at Pratt & Whitney Engineering Division was used to analyze three representative turnaround duct configurations. The study included the effects of fluid properties, inlet velocity flowfields, solution grid mesh size, and numerical solution algorithm upon calculated flow. Results include flow streamline comparisons, velocity profiles, and calculated total pressure losses. The results showed that turnaround ducts of the type studied can exhibit separation but that appropriately designed turnaround ducts can be made to function without suffering from internal flow separation.

The Rapid Response Risk Assessment Turnaround System (R3ATS) is a decision support system that can be used for cost and schedule risk assessment as prescribed by the US Department of Energy (DOE) Order 413.3A, Program and Project Management for the Acquisition of Capital Assets. Unlike the complex and training-intensive project control and accounting risk systems - or naive and statistically incorrect risk assessment approaches - employed throughout the DOE complex, R3ATS is a powerful and yet simple decision support system for conducting project risk assessments. Outputs from R3ATS include: (1) a project risk register which can be periodically updated to regularly monitor and assess a dynamic risk picture, (2) statistically derived and justifiable cost and schedule contingency probability density functions, and (3) inclusion, via Bayesian updating, of significant trigger events that result in project cost, schedule or technical risk events. During FY2007, R3ATS was used by the Oak Ridge Transuranic (TRU) Waste Processing Center (TWPC) project to examine the FY 2008 through FY 2009 budget (approximately $70 million) and the life cycle budget (over $500 million). In less than eight weeks, Oak Ridge DOE personnel and TWPC project management were trained on - and subsequently incorporated - the R3ATS approach, thus demonstrating its significance as a viable and rapid turnaround decision support tool for cost and schedule risk assessment. In conclusion: R3ATS was developed to meet the mission need for a defensible risk-oriented decision support system. We have demonstrated how R3ATS meets DOE ORO budget planning efforts associated with the TWPC life cycle from FY 2008 through FY 2018. We have further identified critical shortcomings in the entire approach that DOE takes to perform risk management.
When we realized that DOE guidance for risk management is naive and can easily produce results that are not reliable or repeatable, we applied a defensible systems engineering approach to develop R3ATS. The TWPC approach to risk management offers a risk-balancing approach for performance management and ensures that future investment in training enables all users to use risk management as a tool for successful TWPC mission accomplishment. (authors)
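R3ATS itself is not described at the implementation level here, but the core idea of producing a statistically derived cost-contingency probability density function from a project risk register can be sketched with a simple Monte Carlo simulation. This is a generic sketch, not R3ATS: the risk register, probabilities, and triangular impact distributions below are illustrative assumptions, and the actual system's Bayesian updating of trigger events is not modeled.

```python
import random

def contingency_distribution(risks, n_trials=20000, seed=1):
    """Monte Carlo sketch of a cost-contingency distribution.

    Each risk is (probability, low, most_likely, high) in $M; when a risk
    fires, its impact is drawn from a triangular distribution.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        total = 0.0
        for p, low, mode, high in risks:
            if rng.random() < p:
                # random.triangular takes (low, high, mode)
                total += rng.triangular(low, high, mode)
        totals.append(total)
    totals.sort()
    return totals

def percentile(sorted_vals, q):
    """Empirical quantile of an already-sorted sample."""
    idx = min(len(sorted_vals) - 1, int(q * len(sorted_vals)))
    return sorted_vals[idx]

# Hypothetical risk register: probabilities and $M impacts are illustrative.
register = [(0.3, 1.0, 2.0, 5.0), (0.5, 0.5, 1.0, 2.0), (0.1, 2.0, 4.0, 10.0)]
dist = contingency_distribution(register)
p80 = percentile(dist, 0.80)  # contingency held at 80% confidence
```

Reporting, say, the 80th percentile of the simulated distribution ties the contingency figure to an explicit confidence level rather than a flat percentage.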

Redus, K. [Redus and Associates, LLC, Oak Ridge, Tennessee (United States); Escher, R. [Epsilon Systems Solutions, Inc., Knoxville, TN (United States)

Back-arc volcanism located to the east of the Andean Cordillera was sampled in the Argentine provinces of Mendoza and Neuquen for paleomagnetic time-averaged field and paleosecular variation investigations. The activity ranges from 2 Ma to very recent time, with a large variety of products, from basalts to highly differentiated lavas. After removal of sites affected by lightning, those with α95 higher than 10°, and combining of nearby sites displaying close directions, we present new paleomagnetic results from 31 flow units belonging to two volcanic massifs: the Payun Matru and the Cerro Nevado. Previous and new K-Ar age determinations constrain the volcanic activity of these massifs from 300 to 0 ka, and from 1.9 to 0.9 Ma, respectively. Most paleomagnetic samples have NRM intensities between about 1 and 20 A/m and depict progressive removal of magnetization components in a consistent fashion during stepwise AF or thermal demagnetization. Nineteen flows yielded a normal direction (declination = 354.8°, inclination = -53.0°, α95 = 6.8°) and 12 flows a reverse direction (declination = 181.0°, inclination = 52.3°, α95 = 5.9°). The combined data yielded a mean direction (declination = 357.3°, inclination = -52.8°, α95 = 4.6°), which is not statistically different from the axial dipole field (g₁⁰) expected at this latitude (36°S). The angular dispersion of virtual geomagnetic poles calculated from flows with normal directions (ASD = 16.5°) compares well with the observed value from global datasets for this site latitude, but flows with reverse directions display a surprisingly low dispersion (ASD = 12.5°). Since most reverse directions were sampled from flows ranging between 1.9 and 0.9 Ma, this can be interpreted as an interval of low paleomagnetic secular variation. Additional data, also with accurate time constraints, are obviously needed to better support this observation.
Finally, no convincing evidence for a complex time-averaged field significantly different from the axial dipole can be supported by this study for the last 2 Myr.
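The mean directions and α95 confidence cones quoted above are conventionally computed with Fisher statistics. A minimal sketch, assuming (declination, inclination) pairs in degrees and reasonably concentrated data (the example directions are illustrative, not the study's flow data):

```python
import math

def dir_to_xyz(dec, inc):
    """Unit vector from declination/inclination in degrees."""
    d, i = math.radians(dec), math.radians(inc)
    return (math.cos(i) * math.cos(d), math.cos(i) * math.sin(d), math.sin(i))

def fisher_mean(directions, p=0.05):
    """Fisher mean direction and 95% confidence cone (alpha95) for a list
    of (declination, inclination) pairs in degrees; assumes n >= 3 and
    reasonably concentrated directions."""
    vecs = [dir_to_xyz(d, i) for d, i in directions]
    sx = sum(v[0] for v in vecs)
    sy = sum(v[1] for v in vecs)
    sz = sum(v[2] for v in vecs)
    n = len(vecs)
    r = math.sqrt(sx * sx + sy * sy + sz * sz)  # resultant vector length
    mean_dec = math.degrees(math.atan2(sy, sx)) % 360.0
    mean_inc = math.degrees(math.asin(sz / r))
    cos_a95 = 1.0 - ((n - r) / r) * ((1.0 / p) ** (1.0 / (n - 1)) - 1.0)
    a95 = math.degrees(math.acos(max(-1.0, min(1.0, cos_a95))))
    return mean_dec, mean_inc, a95
```

Tightly clustered directions give a resultant length r close to n and hence a small α95, which is why the 10° cutoff above screens out poorly determined sites.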

Watershed Best Management Practices (BMPs) are often designed to reduce loading from particle-borne contaminants, but the temporal lag between BMP implementation and improvement in receiving water quality is difficult to assess because particles are only moved downstream episodically, resting for long periods in storage between transport events. A theory is developed that describes the downstream movement of suspended sediment particles accounting for the time particles spend in storage given sediment budget data (by grain size fraction) and information on particle transit times through storage reservoirs. The theory is used to define a suspended sediment transport length scale that describes how far particles are carried during transport events, and to estimate a downstream particle velocity that includes time spent in storage. At 5 upland watersheds of the mid-Atlantic region, transport length scales for silt-clay range from 4 to 60 km, while those for sand range from 0.4 to 113 km. Mean sediment velocities for silt-clay range from 0.0072 km/yr to 0.12 km/yr, while those for sand range from 0.0008 km/yr to 0.20 km/yr, 4–6 orders of magnitude slower than the velocity of water in the channel. These results suggest lag times of 100–1000 years between BMP implementation and effectiveness in receiving waters such as the Chesapeake Bay (where BMPs are located upstream of the characteristic transport length scale). Many particles likely travel much faster than these average values, so further research is needed to determine the complete distribution of suspended sediment velocities in real watersheds.
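The headline lag estimate follows directly from the velocities above: divide a transport distance by the storage-inclusive particle velocity. A minimal sketch using the reported silt-clay velocity range; the 10 km distance and the simplified step-and-store velocity model are illustrative assumptions, not the paper's full theory:

```python
def lag_time_years(distance_km, velocity_km_per_yr):
    """Years for the average particle to travel a given distance downstream."""
    return distance_km / velocity_km_per_yr

def effective_velocity(step_length_km, storage_time_yr, transport_time_yr=0.0):
    """Mean downstream velocity of a particle alternating between short
    transport steps and long storage periods (a simplified random-walk view,
    not the paper's full theory)."""
    return step_length_km / (storage_time_yr + transport_time_yr)

# Reported silt-clay velocity range (0.0072-0.12 km/yr) and an illustrative
# 10 km distance between a BMP and the receiving water:
slow = lag_time_years(10.0, 0.0072)  # about 1400 yr
fast = lag_time_years(10.0, 0.12)    # about 83 yr
```

Because storage times dwarf transport times, the effective velocity is set almost entirely by how long particles rest between events.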

Recent research using NASA Ames AH-1 and OH-58C helicopters, and NASA Glenn test rigs, has shown that in-flight vibration data are typically non-stationary [1-4]. The nature and extent of this non-stationarity is most likely produced by several factors operating simultaneously. The aerodynamic flight environment and pilot commands provide continuously changing inputs, with a complex dynamic response that includes automatic feedback control from the engine regulator. It would appear that the combined effects operate primarily through an induced torque profile, which causes concomitant stress modulation at the individual internal gear meshes in the transmission. This notion is supported by several analyses, which show that upwards of 93% of the vibration signal's variance can be explained by knowledge of torque alone. That this relationship is stronger in an AH-1 than an OH-58, where measured non-stationarity is greater, suggests that the overall mass of the vehicle is an important consideration. In the lighter aircraft, the unsteady aerodynamic influences transmit relatively greater unsteady dynamic forces to the mechanical components, quite possibly contributing to its greater non-stationarity. In a recent paper using OH-58C pinion data [5], the authors have shown that in computing a time synchronous average (TSA) for various single-value metric computations, an effective trade-off can be obtained between sample size and measured stationarity by using data from only a single mesh cycle. A mesh cycle, which is defined as the number of rotations required for the gear teeth to return to their original mating position, has the property of representing all of the discrete phase angles of the opposing gears exactly once in the average. Measured stationarity is probably maximized because a single mesh cycle of the pinion gear occurs over a very short span of time, during which time-dependent non-stationary effects are kept to a minimum.
Clearly, the advantage of local stationarity diminishes as the temporal duration of the cycle increases. This is most evident for a planetary mesh cycle, which can take several minutes to complete.
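The mesh-cycle bookkeeping described above is straightforward to sketch: the cycle length follows from the least common multiple of the tooth counts, and the TSA is a sample-by-sample average over consecutive revolutions. This assumes the signal has already been resampled to a fixed number of samples per shaft revolution (angle-domain resampling), as TSA requires; tooth counts and signal values below are illustrative.

```python
import math

def mesh_cycle_revs(teeth_a, teeth_b):
    """Revolutions of gear A before the same tooth pairs mesh again."""
    return math.lcm(teeth_a, teeth_b) // teeth_a

def time_synchronous_average(signal, samples_per_rev, n_revs):
    """Sample-by-sample average over n_revs consecutive revolutions.

    Assumes the signal has already been resampled to a fixed number of
    samples per shaft revolution (angle-domain resampling)."""
    avg = [0.0] * samples_per_rev
    for r in range(n_revs):
        base = r * samples_per_rev
        for k in range(samples_per_rev):
            avg[k] += signal[base + k]
    return [a / n_revs for a in avg]
```

Averaging over `mesh_cycle_revs(z_pinion, z_gear)` revolutions covers every tooth-pair phase exactly once, which is the single-mesh-cycle TSA discussed above; zero-mean disturbances that are uncorrelated with shaft angle average toward zero.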

Despite the extensive use of Sn-Ag-Cu as a Pb-free solder alloy, its melting point is significantly higher than that of eutectic Sn-Pb solder. Sn-Ag-Bi-In solder is an alternative Pb-free solder, with a melting temperature close to that of eutectic Sn-Pb. This study elucidates the electromigration behavior of Sn-Ag-Bi-In solder and then compares the results with those of the Sn-Ag-Bi system. The behavior of Pb-free Sn-Ag-Bi-In solder strips under electromigration is examined by preparing them in Si (001) U-grooves. The samples are then tested under various temperatures and current densities. Although the compounds thicken near both electrodes with current stressing, the thickness at the anode exceeds that at the cathode. Experimental results of the average failure time indicate that Sn-Ag-Bi-In solder has a longer lifetime than does Sn-Ag-Bi, which is attributed to the ? phase. Additionally, the ? phase dissolved by the current in the early stage replenishes the outgoing atomic flux. These atomic fluxes also enhance the growth of abnormally large particles in the middle of the strips. Field-emission electron probe microanalysis (FE-EPMA) results indicate that the amount of indium is reduced after the ? phase near the cathode is exhausted for extended current stressing time.

We analyze time-averaged spectra from 86 bright gamma-ray bursts from the first 5 years of the Burst And Transient Source Experiment (BATSE) on board the Compton Gamma Ray Observatory to determine whether the lowest energy data are consistent with a standard spectral form fitted to the data at all energies. The BATSE Spectroscopy Detectors have the capability to observe photons as low as 5 keV. Using the gamma-ray burst locations obtained with the BATSE Large Area Detectors, the Spectroscopy Detectors' low-energy response can be modeled accurately. This, together with a postlaunch calibration of the lowest energy Spectroscopy Detector discriminator channel, which can lie in the range 5-20 keV, allows spectral deconvolution over a broad energy range, approx. 5 keV to 2 MeV. The additional coverage allows us to search for evidence of excess emission, or for a deficit, below 20 keV. While no burst has a significant (greater than or equal to 3 sigma) deficit relative to a standard spectral model, we find that 12 bursts have excess low-energy emission, ranging between 1.2 and 5.8 times the model flux, that exceeds 5 sigma in significance. This is evidence for an additional low-energy spectral component in at least some bursts, or for deviations from the power-law spectral form typically used to model gamma-ray bursts at energies below 100 keV.

Preece, R. D.; Briggs, M. S.; Pendleton, G. N.; Paciesas, W. S.; Matteson, J. L.; Band, D. L.; Skelton, R. T.; Meegan, C. A.

Time-weighted average (TWA) passive sampling using solid-phase microextraction (SPME) and gas chromatography was investigated as a new method of collecting, identifying and quantifying contaminants in process gas streams. Unlike previous TWA-SPME techniques using the retracted fiber configuration (fiber within needle) to monitor ambient conditions or relatively stagnant gases, this method was developed for fast-moving process gas streams at temperatures approaching 300 °C. The goal was to develop a consistent and reliable method of analyzing low concentrations of contaminants in hot gas streams without performing time-consuming exhaustive extraction with a slipstream. This work in particular aims to quantify trace tar compounds found in a syngas stream generated from biomass gasification. This paper evaluates the concept of retracted SPME at high temperatures by testing the three essential requirements for TWA passive sampling: (1) zero-sink assumption, (2) consistent and reliable response by the sampling device to changing concentrations, and (3) equal concentrations in the bulk gas stream relative to the face of the fiber syringe opening. Results indicated the method can accurately predict gas stream concentrations at elevated temperatures. Evidence was also discovered to validate the existence of a second boundary layer within the fiber during the adsorption/absorption process. This limits the technique to operating within reasonable mass loadings and loading rates, established by appropriate sampling depths and times for concentrations of interest. A limit of quantification for the benzene model tar system was estimated at 0.02 g m⁻³ (8 ppm) with a limit of detection of 0.5 mg m⁻³ (200 ppb). Using the appropriate conditions, the technique was applied to a pilot-scale fluidized-bed gasifier to verify its feasibility.
Results from this test were in good agreement with literature and prior pilot plant operation, indicating the new method can measure low concentrations of tar in gasification streams. PMID:23398991
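The concentration calculation commonly given for retracted-fiber TWA sampling is Fick's first law applied to the diffusion path inside the needle: the gas-phase concentration follows from the mass adsorbed, the retraction depth Z, the needle opening area A, and the analyte diffusivity D. A minimal sketch under that assumption (the numbers are illustrative, not the paper's calibration):

```python
def twa_concentration(n_mol, diff_cm2_s, area_cm2, path_cm, time_s):
    """TWA gas concentration (mol/cm^3) for a retracted-fiber sampler via
    Fick's first law: C = n * Z / (D * A * t), with n the amount adsorbed,
    Z the retraction depth, A the needle opening area, D the diffusivity."""
    return n_mol * path_cm / (diff_cm2_s * area_cm2 * time_s)

# Illustrative numbers only (not the paper's calibration):
c = twa_concentration(n_mol=1e-9, diff_cm2_s=0.1, area_cm2=0.001,
                      path_cm=0.3, time_s=3600.0)
```

The linearity of mass uptake with C and t is exactly the zero-sink requirement (1) tested above: the relation fails once the sorbent approaches saturation.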

Woolcock, Patrick J; Koziel, Jacek A; Cai, Lingshuang; Johnston, Patrick A; Brown, Robert C

In our previous work, the basilar membrane velocity VBM for a gerbil cochlea was calculated and compared with physiological measurements. The calculated VBM showed excessive phase excursion and, in the active case, a best-frequency place shift of approximately two-fifths of an octave higher. Here we introduce a refined model that uses the time-averaged Lagrangian for the conservative system to resolve the phase excursion issues. To improve the overestimated best-frequency place found in the previous feed-forward active model, we implement in the new model a push-pull mechanism from the outer hair cells and phalangeal process. Using this new model, the VBM for the gerbil cochlea was calculated and compared with animal measurements. The results show excellent agreement for mapping the location of the maximum response to frequency, while the agreement for the response at a fixed point as a function of frequency is excellent for the amplitude and good for the phase. PMID:20485540

Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method, and, focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending those results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
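For scalar weights, the optimal average reduces to a well-known eigenvalue problem: the average quaternion is the eigenvector, associated with the largest eigenvalue, of the weighted sum of quaternion outer products. A NumPy sketch of that scalar-weighted case (the matrix-weighted case in the Note is more general):

```python
import numpy as np

def average_quaternion(quats, weights=None):
    """Optimal average of unit quaternions (rows of `quats`) with scalar
    weights: the eigenvector, for the largest eigenvalue, of
    M = sum_i w_i * q_i q_i^T.  Because q q^T is invariant under q -> -q,
    the sign ambiguity of quaternions is handled automatically."""
    q = np.asarray(quats, dtype=float)
    w = np.ones(len(q)) if weights is None else np.asarray(weights, dtype=float)
    M = (w[:, None] * q).T @ q          # 4x4 symmetric accumulator matrix
    vals, vecs = np.linalg.eigh(M)      # eigenvalues in ascending order
    avg = vecs[:, -1]                   # dominant eigenvector
    return avg / np.linalg.norm(avg)
```

Note that a naive component-wise mean would fail here: q and -q represent the same attitude but would cancel, whereas the outer-product formulation is insensitive to sign flips.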

Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

We develop further the approach to upper and lower bounds in quantum dynamics via complex analysis methods which was introduced by us in a sequence of earlier papers. Here we derive upper bounds for non-time-averaged outside probabilities and moments of the position operator from lower bounds for transfer matrices at complex energies. Moreover, for the time-averaged transport exponents, we present improved lower bounds in the special case of the Fibonacci Hamiltonian. These bounds lead to an optimal description of the time-averaged spreading rate of the fast part of the wavepacket in the large coupling limit. This provides the first example which demonstrates that the time-averaged spreading rates may exceed the upper box-counting dimension of the spectrum.

We tested the effects of the herbicide metsulfuron-methyl on growth of the submerged macrophyte Myriophyllum spicatum under laboratory conditions using different exposure scenarios. The exposures of each scenario were comparable in the concentration × time factor, viz., the same 21-d time-weighted average (TWA) concentrations but variable in peak exposure concentrations (ranging from 0.1 to 21000 ng a.i. L⁻¹) and exposure periods (1, 3, 7, 14 or 21 d). To study the recovery potential of the exposed M. spicatum plants we continued the observation of shoot and root growth for another 21 d in herbicide-free medium, so that the total observation period was 42 d. Non-destructive endpoints, length and number of new shoots and roots, were determined weekly from day 14 onwards. Destructive endpoints, dry weight (DW) of main shoots, new shoots and new roots, were measured at the end of the experiment (t = 42 d). Metsulfuron-methyl exposure in particular inhibited new tissue formation but was not lethal to main shoots. On days 21 and 42 after the start of exposure, EC??/EC?? values for new tissues expressed in terms of peak concentration (= measured concentration during exposure periods of different length) showed large differences between exposure scenarios, in contrast to EC??/EC?? values for days 21 and 42 expressed in terms of 21-d and 42-d TWA concentrations, respectively. At the end of the experiment (day 42), 42-d TWA EC(x) values were remarkably similar between exposure scenarios, while a similar trend could already be observed on day 21 for 21-d TWA EC(x) values. For the macrophyte M. spicatum and exposure to the herbicide metsulfuron-methyl the TWA approach seems appropriate to use in the risk assessment. However, the data from the toxicity experiment suggest that on day 21 the absolute height of the pulse exposure also played a (minor) role in the observed exposure - response relationships. PMID:21875741
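The TWA bookkeeping used to equalize the exposure scenarios is simple: concentration times duration, summed over the observation window. A minimal sketch showing why a short high pulse and a long low exposure can share the same 21-d TWA (the concentrations are chosen for arithmetic clarity, not taken from the study):

```python
def time_weighted_average(pulses, window_days):
    """TWA concentration over an observation window for a pulsed exposure:
    sum of concentration x duration, divided by the window length.
    `pulses` is a list of (concentration, duration_days); the concentration
    is taken as zero outside the pulses."""
    return sum(c * t for c, t in pulses) / window_days

# A 1-d pulse of 2100 ng/L and a 21-d exposure of 100 ng/L are equal in the
# concentration x time sense: both give a 21-d TWA of 100 ng/L.
```

The study's finding is that effect concentrations expressed on this TWA basis collapse across scenarios, whereas the same endpoints expressed as peak concentrations do not.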

Belgers, J D M; Aalderink, G H; Arts, G H P; Brock, T C M

This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating, highly ionized, non-equilibrium cutting arc. The electron-retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data by small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well-defined groups, allowing two quite different averaged electron temperature values to be defined. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream of the nozzle exit. This average included not only a time-average over the fluctuations but also a spatial average along the probe collecting length. The fitting of the high-current region of the characteristic using this electron temperature value, together with the corrections given by the fluctuation analysis, showed a significant departure from local thermal equilibrium in the arc core.
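The temperature extraction rests on the standard exponential expression: in the retarding region the electron current varies as exp((V - V_p)/T_e), so T_e in eV is the inverse slope of ln(I_e) versus probe voltage. A minimal sketch on synthetic data (the voltage range and currents are illustrative, not the measured characteristic):

```python
import math

def electron_temperature_eV(voltages, currents):
    """T_e from the electron-retarding region of a Langmuir probe trace:
    I_e ~ exp((V - V_p) / T_e), so T_e [eV] is the inverse of the
    least-squares slope of ln(I_e) versus probe voltage (collision-free,
    Maxwellian assumption)."""
    n = len(voltages)
    logs = [math.log(i) for i in currents]
    vbar = sum(voltages) / n
    lbar = sum(logs) / n
    slope = (sum((v - vbar) * (l - lbar) for v, l in zip(voltages, logs))
             / sum((v - vbar) ** 2 for v in voltages))
    return 1.0 / slope

# Synthetic retarding-region data for T_e = 0.98 eV (illustrative values):
vs = [-3.0, -2.5, -2.0, -1.5, -1.0]
cs = [math.exp(v / 0.98) for v in vs]
```

On real time-averaged data, the paper's point is that this fit is only trustworthy in the low-current region, where the fluctuations leave the averaged characteristic essentially undisturbed.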

Prevosto, L.; Mancinelli, B. [Grupo de Descargas Eléctricas, Departamento Ing. Electromecánica, Facultad Regional Venado Tuerto (UTN), Laprida 651, Venado Tuerto (2600) Santa Fe (Argentina)]; Kelly, H. [Grupo de Descargas Eléctricas, Departamento Ing. Electromecánica, Facultad Regional Venado Tuerto (UTN), Laprida 651, Venado Tuerto (2600) Santa Fe (Argentina); Instituto de Física del Plasma (CONICET), Departamento de Física, Facultad de Ciencias Exactas y Naturales (UBA) Ciudad Universitaria Pab. I, 1428 Buenos Aires (Argentina)]

PMID:24387432

of a building and three real-time building load forecasting models were developed. They are the first-order autoregressive model, the random walk model and the linear regression model. Finally, the comparison of results shows the random walk model provides the best...
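Two of the forecasters named above are only a few lines each: the random walk predicts the next load as the current one, and the first-order autoregressive model fits y[t+1] = c + phi*y[t] by least squares. A minimal sketch (the load series is hypothetical):

```python
def random_walk_forecast(series):
    """One-step-ahead random walk: the next value is predicted as the
    current one."""
    return series[:-1]

def fit_ar1(series):
    """Least-squares fit of the first-order autoregressive model
    y[t+1] = c + phi * y[t]; returns (c, phi)."""
    x, y = series[:-1], series[1:]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    phi = (sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
           / sum((a - xbar) ** 2 for a in x))
    return ybar - phi * xbar, phi

def rmse(pred, actual):
    """Root-mean-square error of one-step-ahead forecasts."""
    return (sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred)) ** 0.5

# Hypothetical hourly load readings (kW):
load = [100.0, 102.0, 101.0, 105.0, 107.0, 106.0, 108.0]
rw_err = rmse(random_walk_forecast(load), load[1:])
c, phi = fit_ar1(load)
ar_err = rmse([c + phi * y for y in load[:-1]], load[1:])
```

Comparing one-step-ahead RMSE on held-out data, as sketched here, is the usual way such models are ranked.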

This paper deals with the problem of reliable stabilization and H∞ control for a class of continuous-time switched Lipschitz nonlinear systems with actuator failures. We consider the case that actuators suffer “serious failure”: the never-failed actuators cannot stabilize the given system. The differential mean value theorem (DMVT) allows transforming the switched Lipschitz nonlinear systems into switched linear parameter varying (LPV)

Recent studies make one reality clear: While multiple factors can cause a low-performing high school to be in a turnaround situation, every high school that makes dramatic academic improvement has strong, effective school leadership. Turning a school around is no work for novices. It takes a skilled, visionary and proactive principal to pull apart…

The federal School Improvement Grants (SIG) program, to which $3 billion was allocated under the American Recovery and Reinvestment Act of 2009 (ARRA), supports schools attempting to turn around a history of low performance. School turnaround also is a focus of Race to the Top (RTT), another ARRA-supported initiative, which involved a roughly $4…

Miami Central High School technology teacher Frank Houghtaling thinks the connection between theory and application is one reason his students perform better on the Florida Comprehensive Assessment Test (FCAT). The impressive turnaround school drew local and national attention last spring when one of Houghtaling's students, Dagoberto Cruz, won…

Consequences of population turnaround for rural economic development are examined in a 9-county region of Northern Lower Michigan. Data from census reports and 374 usable responses to a questionnaire mailed to a random sample of property owners drawn from 1982 county tax assessment rolls were used to test competing hypotheses about rural…

In recent years, national policymakers have placed new emphasis on "school turnarounds" as a strategy for rapid, dramatic improvement in chronically failing schools, calling on education leaders to turn around performance in the 5,000 lowest-achieving schools nationwide. This goal may seem daunting, given the dismal success rates of school…

An unprecedented amount of attention in recent years has been focused on turning around low-performing schools. Drawing on insights from Tyack and Cuban's (1995) "Tinkering Toward Utopia," the article analyzes the forces behind the school turnaround phenomenon and how they have evolved since passage of the No Child Left Behind Act. The article…

The increase in air travel puts tremendous strain on existing airport facilities, so turnaround times for airplanes are under constant pressure to be reduced. Part of the turnaround time consists of the time required for passengers to board the plane and install themselves in their assigned seats. It seems that this boarding time for passengers is much higher than allowed,

We report the results of single tracer particle tracking by optical tweezers and video microscopy in micellar solutions. From careful analysis in terms of different stochastic models, we show that the polystyrene tracer beads of size 0.52-2.5 μm, after short-time normal diffusion, turn over to perform anomalous diffusion of the form ∼ t^α with α ≈ 0.3. This free anomalous diffusion is ergodic and consistent with a description in terms of the generalized Langevin equation with a power-law memory kernel. With optical tweezers tracking, we unveil a power-law relaxation over several decades in time to the thermal plateau value under the confinement of the harmonic tweezer potential, as predicted previously (Phys. Rev. E 85 021147 (2012)). After the subdiffusive motion in the millisecond range, the motion becomes faster and turns either back to normal Brownian diffusion or to even faster superdiffusion, depending on the size of the tracer beads.
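An anomalous-diffusion exponent of the kind quoted here is typically obtained from a log-log least-squares fit of the mean-squared displacement against lag time. A minimal sketch on synthetic, noise-free subdiffusive data (illustrative, not the measured MSD):

```python
import math

def anomalous_exponent(times, msd):
    """Exponent alpha of MSD ~ t^alpha from a log-log least-squares fit."""
    xs = [math.log(t) for t in times]
    ys = [math.log(m) for m in msd]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))

# Synthetic subdiffusive data with alpha = 0.3:
ts = [1.0, 2.0, 4.0, 8.0, 16.0]
ms = [t ** 0.3 for t in ts]
```

A fitted alpha of 1 corresponds to normal Brownian diffusion, values below 1 to the subdiffusion reported here, and values above 1 to superdiffusion.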

Summary form only given, as follows. Applications of gaseous plasma switching and relativistic ionization front studies can require measurements of microwave complex conductivity spatially integrated over many microwave wavelengths. Time-dependent conductivities and permittivities of 1-atm air plasmas formed by focusing energetic 100-picosecond laser pulses of 1064-nm wavelength into a long section of rectangular waveguide (WR90) have been determined. Laser propagation

is the Traveling Salesman Problem (TSP). In the classical TSP, a 'salesman' starting at a home city is required to visit every city in a given group once and only once and then return to the home city. The objective of this problem is to find the optimal... sequence of cities (usually minimum cost, distance, or time) in which to complete the tour. The TSP can be viewed as a graph theory problem if the cities are identified with nodes of a graph and the 'links' between the cities are identified...
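The classical TSP described above can be solved exactly for small instances by enumerating every permutation of the non-home cities; a minimal sketch (the distance matrix is illustrative):

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exact minimum-cost tour for a small distance matrix: the salesman
    starts and ends at city 0 and visits every other city exactly once.
    O(n!) enumeration - only viable for small n."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# Illustrative 4-city distance matrix:
d = [[0, 1, 4, 3],
     [1, 0, 2, 5],
     [4, 2, 0, 1],
     [3, 5, 1, 0]]
```

The factorial growth of the search space is exactly why practical TSP work relies on heuristics and branch-and-bound rather than enumeration.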

The purpose of this study was to develop a method to use amplitude-binned 4D-CT (A-4D-CT) data for the construction of mid-position CT data and to compare the results with data created from phase-binned 4D-CT (P-4D-CT) data. For the latter purpose we have developed two measures which describe the regularity of the 4D data and we have tried to correlate these measures with the regularity of the external respiration signal. 4D-CT data was acquired for 27 patients on a combined PET-CT scanner. The 4D data were reconstructed twice, using phase and amplitude binning. The 4D frames of each dataset were registered using a quadrature-based optical flow method. After registration the deformation vector field was repositioned to the mid-position. Since amplitude-binned 4D data does not provide temporal information, we corrected the mid-position for the occupancy of the bins. We quantified the differences between the two mid-position datasets in terms of tumour offset and amplitude differences. Furthermore, we measured the standard deviation of the image intensity over the respiration after registration (σ_registration) and the regularity of the deformation vector field ($\overline{\Delta|J|}$) to quantify the quality of the 4D-CT data. These measures were correlated to the regularity of the external respiration signal (σ_signal). The two irregularity measures, $\overline{\Delta|J|}$ and σ_registration, were dependent on each other (p < 0.0001, R² = 0.80 for P-4D-CT, R² = 0.74 for A-4D-CT). For all datasets amplitude binning resulted in lower $\overline{\Delta|J|}$ and σ_registration, and large decreases led to visible quality improvements in the mid-position data. The quantity of artefact decrease was correlated to the irregularity of the external respiratory signal. The average tumour offset between the phase- and amplitude-binned mid-position without occupancy correction was 0.42 mm in the caudal direction (10.6% of the amplitude).
After correction this was reduced to 0.16 mm in the caudal direction (4.1% of the amplitude). Similar relative offsets were found at the diaphragm. We have devised a method to use amplitude-binned 4D-CT to construct a motion model and generate a mid-position planning CT for radiotherapy treatment purposes. We have estimated the systematic offset of this mid-position model relative to a motion model derived from P-4D-CT. We found that A-4D-CT led to a decrease of local artefacts and that this decrease was correlated to the irregularity of the external respiration signal.

Kruis, Matthijs F.; van de Kamer, Jeroen B.; Belderbos, José S. A.; Sonke, Jan-Jakob; van Herk, Marcel

A technique that compensates for low spatial frequency spurious phase changes during an interference experiment is developed; it permits temporal averaging of multiple-phase measurements, made before and after object displacement. The method is tested with phase-stepped real-time holographic interferometry applied to cantilever bending of a piezoelectric bimorph ceramic. Results indicate that temporal averaging of the corrected data significantly reduces the white noise in a phase measurement without incurring systematic errors or sacrificing spatial resolution. White noise is reduced from 3 deg to less than 1 deg using these methods.

An existing technique which enables the estimation of the time domain averages of the tooth meshing vibration of the individual planet and sun gears in an epicyclic gearbox from measured vibration signals has been revised. A key feature of the existing technique is the sampling of the vibration signal within a rectangular window in the time domain when one of the planet gears is close to the vibration transducer. The revised technique permits the use of other window functions, and a detailed analysis shows that the errors in the estimate of the time domain average can be expressed in terms of the window function. Several suitable window functions which enable a reduction in the level of the errors are demonstrated by numerical examples and by the analysis of data from a test on a helicopter gearbox with deliberate damage to one of the planet gears.

Two innovative integrative samplers have been developed enabling high sampling rates unaffected by turbulences (thus avoiding the use of performance reference compounds) and with negligible lag-time values. The first, called the constantly stirred sorbent (CSS), consists of a rotator head that holds the sorbent. The rotation speed given to the head generates a constant turbulence around the sorbent, making it independent of the external hydrodynamics. The second, called the continuous flow integrative sampler (CFIS), consists of a small peristaltic pump which produces a constant flow through a glass cell. The sorbent is located inside this cell. Although different sorbents can be used, poly(dimethylsiloxane) (PDMS) under the commercial twister format (typically used for stir bar sorptive extraction) was evaluated for the sampling of six polycyclic aromatic hydrocarbons and three organochlorine pesticides. These new devices have many analogies with passive samplers but cannot truly be defined as such since they need a small energy supply of around 0.5 W supplied by a battery. Sampling rates from 181 × 10⁻³ to 791 × 10⁻³ L/day were obtained with CSS and 18 × 10⁻³ to 53 × 10⁻³ L/day with CFIS. Limits of detection for these devices are in the range from 0.3 to 544 pg/L with a precision below 20%. An in-field evaluation of both devices was carried out for a 5-day sampling period at the outlet of a waste water treatment plant, with comparable results to those obtained with a classical sampling method. PMID:19540503

Efficient use of feed resources has become a clear challenge for the U.S. pork industry as feed costs continue to be the largest variable expense. The availability of the Illumina Porcine60K BeadChip has greatly facilitated whole-genome association studies to identify chromosomal regions harboring genes influencing those traits. The current study aimed at identifying genomic regions associated with variation in feed efficiency and several production traits in a Duroc terminal sire population, including ADFI, ADG, feed conversion ratio, residual feed intake (RFI), real-time ultrasound back fat thickness (BF), ultrasound muscle depth, intramuscular fat content (IMF), birth weight (BW at birth), and weaning weight (BW at weaning). Single trait association analyses were performed using Bayes B models with 35,140 SNP on 18 autosomes after quality control. Significance of nonoverlapping 1-Mb length windows (n = 2,380) was tested across 3 QTL inference methods: posterior distribution of window variances from Markov chain Monte Carlo, naive Bayes factor, and nonparametric bootstrapping. Genes within the informative QTL regions for the traits were annotated. A region ranging from 136 to 140 Mb (4-Mb length) on SSC 1, approximately 8 Mb upstream of the MC4R gene, was significantly associated with ADFI, ADG, and BF, where SOCS6 and DOK6 are proposed as the most likely candidate genes. Another region affecting BW at weaning was identified on SSC 4 (84-85 Mb), harboring genes previously found to influence both human and cattle height: PLAG1, CHCHD7, RDHE2 (or SDR16C5), MOS, RPS20, LYN, and PENK. No QTL were identified for RFI, IMF, and BW at birth. In conclusion, we have identified several genomic regions associated with traits affecting nutrient utilization that could be considered for future genomic prediction to improve feed utilization. PMID:24962532
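The 1-Mb window summary idea can be illustrated with a toy sketch. This is a hypothetical simplification: real Bayes B analyses summarize posterior samples of window genetic variance, not simple squared effects, and the function name and inputs here are assumptions.

```python
def window_variance_shares(snp_positions_bp, snp_effects, window_bp=1_000_000):
    """Group SNPs into nonoverlapping windows of fixed physical length and
    return each window's share of the summed squared effects -- a crude
    stand-in for the posterior window-variance summaries used with Bayes B.
    """
    totals = {}
    for pos, eff in zip(snp_positions_bp, snp_effects):
        w = pos // window_bp          # window index along the chromosome
        totals[w] = totals.get(w, 0.0) + eff * eff
    grand = sum(totals.values()) or 1.0
    return {w: v / grand for w, v in sorted(totals.items())}
```

Windows capturing an outsized share of the total variance are the candidates carried forward to QTL inference and gene annotation.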

The brain is the body's largest energy consumer, even in the absence of demanding tasks. Electrophysiologists report on-going neuronal firing during stimulation or task in regions beyond those of primary relationship to the perturbation. Although the biological origin of consciousness remains elusive, it is argued that it emerges from complex, continuous whole-brain neuronal collaboration. Despite converging evidence suggesting the whole brain is continuously working and adapting to anticipate and actuate in response to the environment, over the last 20 y, task-based functional MRI (fMRI) has emphasized a localizationist view of brain function, with fMRI showing only a handful of activated regions in response to task/stimulation. Here, we challenge that view with evidence that under optimal noise conditions, fMRI activations extend well beyond areas of primary relationship to the task; and blood-oxygen level-dependent signal changes correlated with task-timing appear in over 95% of the brain for a simple visual stimulation plus attention control task. Moreover, we show that response shape varies substantially across regions, and that whole-brain parcellations based on those differences produce distributed clusters that are anatomically and functionally meaningful, symmetrical across hemispheres, and reproducible across subjects. These findings highlight the exquisite detail lying in fMRI signals beyond what is normally examined, and emphasize both the pervasiveness of false negatives, and how the sparseness of fMRI maps is not a result of localized brain function, but a consequence of high noise and overly strict predictive response models. PMID:22431587

Gonzalez-Castillo, Javier; Saad, Ziad S.; Handwerker, Daniel A.; Inati, Souheil J.; Brenowitz, Noah; Bandettini, Peter A.

The role of unsteady flow processes in establishing the performance of axial flow turbomachinery was investigated. The development of the flow model, as defined by the time-average flow equations associated with the passage of the blade row embedded in a multistage configuration, is presented. The mechanisms for unsteady momentum and energy transport are outlined, and their modeling is discussed. The procedure for simulating unsteady multistage turbomachinery flows is described.

In this work a methodology for high-resolution time-averaged two-dimensional flow mapping of converging flows was explored. Flow of non-transparent, highly concentrated shear-thinning and shear-thickening suspensions was circulated through the entrance flow adapter with an adjustable position of the die entry. The entrance region was scanned with a spatial resolution of 2.7 mm (radial) × 1 mm (axial).

This article looks at the problem of scheduling jobs on multiple machines in both the online and offline settings. It attempts to identify the key ideas in recent work on this problem for different machine models.
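The classic online baseline for this problem, Graham's greedy list scheduling, can be sketched as follows (an illustrative baseline commonly used in this literature, not code taken from the article):

```python
import heapq

def list_schedule(job_sizes, m):
    """Online greedy (Graham list) scheduling: assign each arriving job to
    the currently least-loaded of m identical machines.  This rule is
    (2 - 1/m)-competitive for makespan minimization.

    Returns (makespan, assignment), where assignment[i] is the machine
    chosen for job i.
    """
    loads = [(0.0, k) for k in range(m)]   # (current load, machine id)
    heapq.heapify(loads)
    assignment = []
    for size in job_sizes:
        load, k = heapq.heappop(loads)     # least-loaded machine
        assignment.append(k)
        heapq.heappush(loads, (load + size, k))
    return max(load for load, _ in loads), assignment
```

On the instance [3, 3, 2, 2, 2] with two machines the greedy rule yields makespan 7, while the offline optimum is 6 (3+3 versus 2+2+2), illustrating the online/offline gap the article discusses.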

[Diagram residue: laser stabilization scheme with a PM-fiber ECDL and slave laser diodes at 934 nm, AOM tuning and pulse switching, an SHG cavity producing 467 nm light for the trap, and a frequency comb generator referenced to an H maser / 871 nm reference; the noted excitation linewidth is in the nano-Hertz range and the excitation is accompanied by a large light shift due to off-resonant coupling.]

Georgia's Southern Regional Medical Center used a proven corporate finance approach to dramatically improve its financial position and integrate its strategic and financial planning. Managers throughout the organization were educated about principles of corporate finance. Reliable cash-flow projections were used to create a multiyear glide path to financial stability. Initiatives were tied to specific time frames and quantifiable financial goals and underwent a standardized review process. PMID:12656032

The principal objectives of the Computer-Aided/Automated Fast Turn-Around Laboratory (CAFTAL) for VLSI are: application of cutting-edge computer science and software systems engineering to fast turn-around fabrication in order to develop more productive and flexible new approaches; fast turn-around fabrication of optimized VLSI systems achieved through synergistic integration of system research and device research in aggressive applications such as superfast computers; and investigation of physical limits on submicron VLSI in order to define and explore the most promising technologies. To make a state-of-the-art integrated circuit process more manufacturable, we must be able to understand both the numerous individual process technologies used to fabricate the complete device and the important device, circuit, and system limitations in sufficient detail to monitor and control the overall fabrication sequence. Specifically, we must understand the sensitivity of device, circuit, and system performance to each important step in the fabrication sequence. Moreover, we should be able to predict the manufacturability of an integrated circuit before we actually manufacture it. The salient objective of this program is to enable accurate simulation and control of computer-integrated manufacturing of ultra large scale integrated (ULSI) systems, including millions of submicron transistors in a single silicon chip.

The efficient ground turnaround of the Space Shuttle is critical to the execution of national policies. Research identified four major subsystems within the Vandenberg Ground Turnaround System: the Orbiter Maintenance Subsystem, External Tank Subsystem, Solid Rocket Booster Subsystem, and Launch Pad Subsystem. A Q-GERT simulation of the Ground Turnaround System was conducted to investigate the system and observe the interactions between the major subsystems. The Q-GERT model simulated the integration of Space Shuttle components to determine the subsystem most sensitive to change. The results indicate the Launch Pad Subsystem is the most critical in the Vandenberg Ground Turnaround System. However, further research is recommended in the areas of logistics support, spares availability, and transportation of components.
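The queueing logic behind such a turnaround simulation can be illustrated with a minimal discrete-event sketch. This is a pure-Python stand-in for a Q-GERT or DES tool, with hypothetical task durations and crew pool; it only shows the serial-versus-parallel maintenance trade, not the actual subsystem model.

```python
import heapq

def turnaround_time(task_durations, crews):
    """Tiny discrete-event sketch: independent maintenance tasks processed
    by a limited pool of crews.  crews=1 gives serial maintenance; a larger
    pool approximates parallel maintenance.  Returns total turnaround time.
    """
    free_at = [0.0] * crews        # event list: time each crew becomes free
    heapq.heapify(free_at)
    finish = 0.0
    for d in task_durations:
        t = heapq.heappop(free_at)      # earliest-available crew starts task
        heapq.heappush(free_at, t + d)  # crew is busy until t + d
        finish = max(finish, t + d)
    return finish
```

Comparing the result for crews=1 against a larger pool is the simplest version of the labor and parallel-versus-serial trade studies described above.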

Bureau 2006). The second pattern has been a dramatic shift of the population out of the central cities into the suburbs beginning after World War II. While 76.7% of the urban population lived in central cities in 1910, the proportion was 45.8% in 1970.... In the beginning stages of aggregation, the major concern is more one of insufficient density than overpopulation because, without agglomeration, the costs of communication among members of an aggregate, both in time and energy, rise to a...

In metapopulations in which habitat patches vary in quality and occupancy it can be complicated to calculate the net time-averaged contribution to reproduction of particular populations. Surprisingly, few indices have been proposed for this purpose. We combined occupancy, abundance, frequency of occurrence, and reproductive success to determine the net value of different sites through time and applied this method to a bird of conservation concern. The Tricolored Blackbird (Agelaius tricolor) has experienced large population declines, is the most colonial songbird in North America, is largely confined to California, and breeds itinerantly in multiple habitat types. It has had chronically low reproductive success in recent years. Although young produced per nest have previously been compared across habitats, no study has simultaneously considered site occupancy and reproductive success. Combining occupancy, abundance, frequency of occurrence, reproductive success, and nest failure rate, we found that large colonies in grain fields fail frequently because of nest destruction due to harvest prior to fledging. Consequently, net time-averaged reproductive output is low compared to colonies in non-native Himalayan blackberry or thistles, and native stinging nettles. Cattail marshes have intermediate reproductive output, but their reproductive output might be improved by active management. Harvest of grain-field colonies necessitates either promoting delay of harvest or creating alternative, more secure nesting habitats. Stinging nettle and marsh colonies offer the main potential sources for restoration or native habitat creation. From 2005 to 2011, breeding site occupancy declined 3x faster than new breeding colonies were formed, indicating a rapid decline in occupancy. Total abundance showed a similar decline. Causes of variation in the value for reproduction of nesting substrates and factors behind continuing population declines merit urgent investigation.
The method we employ should be useful in other metapopulation studies for calculating time-averaged reproductive output for different sites. PMID:24817307

Holyoak, Marcel; Meese, Robert J.; Graves, Emily E.

A combined technology approach to rapidly characterizing source area and downgradient groundwater associated with a past fuel spill has been field tested. The purpose of this investigation was to determine the presence and extent of fuel-related compounds or indications of their biodegradation in groundwater. The distance from the source area to be investigated was established by calculating the potential extent of a plume based only on groundwater flow velocities. To accomplish this objective, commercially available technologies were combined and used to rapidly assess the source area and downgradient groundwater associated with the fuel discharge. The source of contamination that was investigated overlies glacial sand and gravel outwash deposits. Historical data suggest that from 1955 to 1970 as many as 1 to 6 million gal of aviation gasoline (AVGAS) were spilled at the study area. Although the remedial investigation (RI) for this study area indicated fuel-related groundwater contamination at the source area, fuel-related contamination was not detected in downgradient monitoring wells. Rapid horizontal groundwater velocities and the 24-year time span from the last reported spill further suggest that a plume of contaminated groundwater could extend several thousand feet downgradient. The lack of contamination downgradient from the source suggests two possibilities: (1) monitoring wells installed during the RI did not intersect the plume or (2) fuel-related compounds had naturally degraded.

Tingle, A.R. [Analysas Corp., Oak Ridge, TN (United States); Baker, L. [Automated Sciences Group, Inc., Oak Ridge, TN (United States); Long, D.D. [Martin Marietta Energy Systems, Inc., Oak Ridge, TN (United States). Hazardous Waste Remedial Actions Program; Miracle, M. [Advanced Sciences, Inc., Oak Ridge, TN (United States)

We report on the manipulation of the center-of-mass motion ('sloshing') of a Bose-Einstein condensate in a time-averaged orbiting potential (TOP) trap. We start with a condensate at rest in the center of a static trapping potential. When suddenly replacing the static trap with a TOP trap centered about the same position, the condensate starts to slosh with an amplitude much larger than the TOP micromotion. We show, both theoretically and experimentally, that the direction of sloshing is related to the initial phase of the rotating magnetic field of the TOP. We show further that the sloshing can be quenched by applying a carefully timed and sized jump in the phase of the rotating field.

Cleary, P. W.; Hijmans, T. W.; Walraven, J. T. M. [Van der Waals-Zeeman Institute of the University of Amsterdam, Valckenierstraat 65, 1018 XE Amsterdam (Netherlands)

Commands incentives on merit; increases city tax / sales revenues; vendor/supplier demand driver; Triple Bottom Line = $$$ + Society + Sustainability. Interim Uses for Vacant Land: Thinking Outside of the Box. Wai Iam Tam, UTA...

Comprehensive studies of radiation belt dynamics using linear prediction filters (LPFs) were first published by Nagai (1988) and Baker et al. (1990). These studies focused primarily on short-term predictions of daily averaged radiation flux at geostationary orbit using either global magnetospheric indices or the speed of the solar wind impinging on the Earth's magnetosphere. Using daily solar wind data from the NSSDC's OMNI database and SAMPEX 2-6 MeV omni-directional electron fluxes at various magnetic L-shells, Vassiliadis et al. (2002) combined linear response functions across L-shells to provide a novel means of visualizing the dynamic response of relativistic radiation belt electrons to solar wind input. Several physical interpretations and implications were gleaned from the results, including a demonstration of strong seasonal and solar cycle dependent variations in the global response function, as well as the existence of an immediate and short-lived, quasi-adiabatic peak in the response functions just outside of the slot region, but inside the heart of the outer electron belt. Our recent studies have demonstrated the ability of autoregressive (AR) filters to remove autocorrelations associated primarily with the diurnal variation observed in radiation belt data sets. Here we extend prior work by the previously mentioned authors by effectively removing diurnal variations in SAMPEX orbit-averaged electron data, and analyzing the linear prediction filters at sub-daily time scales. Several interesting new features become visible only at this higher time resolution, including a substantial negative response in the hours immediately following a solar wind event for L-shells from approximately 4.5 to 8, as well as significant temporal structure to the quasi-adiabatic peak observed by Vassiliadis and colleagues. The main features observed at daily time scales become readily apparent when the filters are smoothed appropriately.

Rigler, E. J.; Baker, D. N.; Vassiliadis, D.; Kanekal, S. G.; Klimas, A. J.
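The prewhitening step described in the abstract above can be illustrated with a minimal lag-1 autoregressive fit. This is an assumed simplification: the study's AR filters are of higher order and are fit to orbit-averaged flux data, whereas this sketch only shows the mechanics of removing an autocorrelated component.

```python
def ar1_residuals(series):
    """Fit x[t] ~ phi * x[t-1] by least squares and return (phi, residuals).

    Subtracting the predicted AR(1) component is a minimal stand-in for the
    prewhitening used to suppress diurnal autocorrelation before estimating
    linear prediction filters.
    """
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1]) or 1.0
    phi = num / den
    resid = [series[t] - phi * series[t - 1] for t in range(1, len(series))]
    return phi, resid
```

For a noiseless geometric decay with ratio 0.5 the fit recovers phi = 0.5 exactly and leaves zero residuals, confirming the estimator removes what it models.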

A molecular Rayleigh scattering based flow diagnostic is developed to measure time-averaged velocity, density, temperature, and turbulence intensity in a 25.4-mm diameter nozzle free jet facility. The spectrum of the Rayleigh scattered light is analyzed using a Fabry-Perot interferometer operated in the static imaging mode. The resulting fringe pattern containing spectral information of the scattered light is recorded using a low noise CCD camera. Nonlinear least squares analysis of the fringe pattern using a kinetic theory model of the Rayleigh scattered light provides estimates of density, velocity, temperature, and turbulence intensity of the gas flow. Resulting flow parameter estimates are presented for an axial scan of subsonic flow at Mach 0.95 for comparison with previously acquired pitot tube data, and axial scans of supersonic flow in an underexpanded screeching jet. The issues related to obtaining accurate turbulence intensity measurements using this technique are discussed.

The purpose of this study is to examine in detail incompressible laminar and turbulent flows inside a turnaround duct with and without guide vanes and to investigate the effects of vanes on the flow characteristics. To perform this study, an implicit finite difference code cast in general curvilinear coordinates is further developed. The code is based on the method of pseudo-compressibility and utilizes an ADI (implicit approximate factorization) algorithm to achieve computational efficiency. A method of segmental sweeping is developed to overcome the multiple-zone problem arising from the imposition of guide vanes. In the present study, several test cases have been computed. These test cases include laminar and turbulent flows inside a turnaround duct without vanes and with two or three guide vanes. The study reveals that: (1) large recirculation zones exist inside the duct if no vanes are present; (2) properly shaped and positioned guide vanes are effective in eliminating flow separation; and (3) laminar and turbulent flows have similar flow features, but turbulent flow has a smaller total pressure drop.

integrative sampler and the solid-phase extraction for estimating herbicide time-weighted average ... for detecting episodic or short-term pollution events (e.g. increased herbicide concentrations during a flood ...

In this paper, theoretical calculations as well as numerical simulations are performed for the time-averaged acoustic force and torque on a rigid cylinder of arbitrary size in a fluid with low viscosity, i.e., the acoustic boundary layer is thin compared to the cylinder radius. An exact analytical solution and its approximation are proposed in the form of an infinite series including Bessel functions. These solutions can be evaluated easily by a mathematical software package such as Mathematica or MATLAB. Three types of incident waves, plane traveling wave, plane standing wave, and dual orthogonal standing waves, are investigated in detail. It is found that for a small particle, the viscous effects for an incident standing wave may be neglected but those for an incident traveling wave are notable. A nonzero viscous torque is experienced by the rigid cylinder when subjected to dual orthogonal standing waves with a phase shift, even when the cylinder is located at equilibrium positions without imposed acoustic forces. Furthermore, numerical simulations are carried out based on a finite volume method (FVM) algorithm to verify the proposed theoretical formulas. The theoretical results and the numerical ones agree with each other very well in all the cases considered. PMID:21682376

In a two-part paper, key computed results from a set of first-of-a-kind numerical simulations on the unsteady interaction of axial compressor stator with upstream rotor wakes and tip leakage vortices are employed to elucidate their impact on the time-averaged performance of the stator. Detailed interrogation of the computed flowfield showed that for both wakes and tip leakage vortices, the impact of these mechanisms can be described on the same physical basis. Specifically, there are two generic mechanisms with significant influence on performance: reversible recovery of the energy in the wakes/tip vortices (beneficial) and the associated nontransitional boundary layer response (detrimental). In the presence of flow unsteadiness associated with rotor wakes and tip vortices, the efficiency of the stator under consideration is higher than that obtained using a mixed-out steady flow approximation. The effects of tip vortices and wakes are of comparable importance. The impact of stator interaction with upstream wakes and vortices depends on the following parameters: axial spacing, loading, and the frequency of wake fluctuations in the rotor frame. At reduced spacing, this impact becomes significant. The most important aspect of the tip vortex is the relative velocity defect and the associated relative total pressure defect, which is perceived by the stator in the same manner as a wake. In Part 2, the focus will be on the interaction of stator with the moving upstream rotor tip and streamwise vortices, the controlling parametric trends, and implications on design.

Valkov, T.V. [Shell International Gas Ltd., London (United Kingdom); Tan, C.S. [Massachusetts Inst. of Tech., Cambridge, MA (United States). Gas Turbine Lab.

A reasonable criterion for the sufficiency of data collection is presented for time-of-flight (TOF) particle desorption mass spectrometry with a single-ion-counting registration system: 1000-2000 counts should be collected in the peak (for complex molecules, such as bovine insulin, mol. wt. 5733.6 u). In this case, provided that all counts in the peak are measured, the standard deviation from the real average mass, due to the statistical error in measuring the isotopic distribution centroid, is within the range 0.03-0.07 u. However, if the centroid is calculated using only those counts above a given level relative to the peak base, an error of ca. 0.5 u or more is produced; the size of the error depends mainly on the height level at which the centroid is measured. This holds for molecules with masses greater than 700 u. Correction values for the peak centroid measured at a given level are given. The validity of the corrections was proved experimentally using an MSBX TOF mass spectrometer with a 252Cf ion source.
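The bias from computing a centroid only from counts above a height level can be demonstrated numerically. The peak shape below is illustrative (not the MSBX correction table): an asymmetric, isotopic-like distribution whose thresholded centroid shifts toward the tall early isotopes.

```python
def centroid(masses, counts, level=0.0):
    """Centroid of a mass peak using only counts at or above `level` times
    the peak maximum.  level=0 uses every count (the recommended practice);
    a higher level reproduces the biased centroid-at-a-height measurement.
    """
    peak = max(counts)
    pairs = [(m, c) for m, c in zip(masses, counts) if c >= level * peak]
    total = sum(c for _, c in pairs)
    return sum(m * c for m, c in pairs) / total
```

For masses [100, 101, 102, 103] with counts [10, 8, 4, 1], the full centroid is 2319/23 ≈ 100.83 u, while clipping at 50% of the peak height drops the heavy-isotope tail and gives 1808/18 ≈ 100.44 u, a bias of ~0.4 u, of the same order as the ca. 0.5 u error described above.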

In a Mach 3.8 wind tunnel, both instantaneous and time-averaged flow structures of different scales around a blunt double-cone with or without supersonic film cooling were visualized via nano-tracer planar laser scattering (NPLS), which has a high spatiotemporal resolution. Three experimental cases with different injection mass flux rates were carried out. Many typical flow structures were clearly shown, such as shock waves, expansion fans, shear layers, mixing layers, and turbulent boundary layers. The analysis of two NPLS images with an interval of 5 μs revealed the temporal evolution characteristics of flow structures. With matched pressures, the laminar length of the mixing layer was longer than that in the case with a larger mass flux rate, but the fully covered region was shorter. Structures like K-H (Kelvin-Helmholtz) vortices were clearly seen in both flows. Without injection, the flow was similar to the supersonic flow over a backward-facing step, the structures were relatively simpler, and there was a longer laminar region. Large scale structures such as hairpin vortices were visualized. In addition, the results were compared in part with the schlieren images captured by others under similar conditions.

The turnaround recommendations and framework in "The Turnaround Challenge" grew out of both new research and synthesis of extensive existing research, as carried out by Mass Insight Education & Research Institute and its partners since September 2005. If the main report is the tip of the proverbial iceberg, this supplement represents at least part…

Calkins, Andrew; Guenther, William; Belfiore, Grace; Lash, Dave

Modern digital designs require high performance and low cost. In this scenario, timing analysis is an essential step for each phase of the integrated circuit design cycle. To minimize the design turn-around time, the ability to correctly predict...

Turbulent mixing generated by shock-driven acceleration of a perturbed interface is simulated using a new multi-component Reynolds-averaged Navier-Stokes (RANS) model closed with a two-equation K-ε model. The model is implemented in a hydrodynamics code using a third-order weighted essentially non-oscillatory finite-difference method for the advection terms and a second-order central difference method for the gradients in the source and diffusion terms. In the present reshocked Richtmyer-Meshkov instability and mixing study, an incident shock is generated in air and progresses into a sulfur hexafluoride test section. The time evolution of the predicted mixing layer widths corresponding to six shock tube test section lengths are compared with experimental measurements and three-dimensional multi-mode numerical simulations. The mixing layer widths are also compared with the analytical self-similar power-law solution of the simplified model equations prior to reshock. A set of model coefficients and initial conditions specific to these six experiments is established, for which the widths before and after reshock agree very well with experimental and numerical simulation data. A second set of general coefficients that accommodates a broader range of incident shock Mach numbers, Atwood numbers, and test section lengths is also established by incorporating additional experimental data and previous RANS modeling. Terms in the budgets of the turbulent kinetic energy and dissipation rate equations are examined to evaluate the relative importance of turbulence production, dissipation, and diffusion mechanisms during mixing. Convergence results for the mixing layer widths, mean fields, and turbulent fields under grid refinement are presented for each of the cases.

Realistic simulations with spatially coherent noise have been run in order to compare the performance of adaptive beamforming (ABF), inverse beamforming (IBF), and conventional beamforming (CBF) for the case of finite averaging times, where the actual spatial coherence of the acoustic field, or covariance matrix, is not known a priori, but must be estimated. These estimation errors cause large errors in the ABF estimate of the directionality of the acoustic field, partly because ABF is a highly nonlinear algorithm. In addition, it is shown that ABF is fundamentally limited in its suppression capability at very low frequency (VLF), based on the sidelobe level of the conventional beampattern in the direction of the noise interferer [G. L. Mohnkern, "Effects of Errors and Limitations on Interference Suppression," NOSC Technical Document 1478, Naval Ocean Systems Center (1989)]. The simulations include a low-level plane wave signal of interest, a stronger noise plane wave interferer, and spatially random background noise. Both IBF and ABF performed significantly better than CBF, and IBF's performance was slightly better than ABF's performance. The performances of IBF and the ABF algorithm, the minimum variance distortionless response (MVDR) [A. H. Nuttall and D. W. Hyde, "Unified Approach to Optimum and Suboptimum Processing for Arrays," USL Report Number 992, Naval Underwater Systems Center, New London, CT (22 April 1969)], were recently compared independently [J. S. D. Solomon, A. J. Knight, and M. V. Greening, "Sonar Array Signal Processing for Sparse Linear Arrays," Defense Science and Technology Organization (DSTO) Technical Report (June 1999)] using measured data, with the result that IBF outperformed MVDR. This result is significant because MVDR requires orders of magnitude more processing power than IBF or CBF. PMID:11108366
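The conventional-beampattern bound mentioned above can be computed directly for a uniform line array. This is a sketch with assumed geometry and parameters (8 elements at half-wavelength spacing); it only evaluates the CBF response that limits ABF suppression, not the adaptive processing itself.

```python
import cmath
import math

def beampattern_db(n, d_over_lambda, steer_deg, look_deg):
    """Power response (dB) of an n-element uniform line array, conventionally
    steered to steer_deg and evaluated at look_deg.  The response toward an
    interferer's bearing bounds the suppression available at VLF.
    """
    def phase(angle_deg, k):
        # Per-element phase for a plane wave from angle_deg (broadside = 0).
        return 2.0 * math.pi * d_over_lambda * k * math.sin(math.radians(angle_deg))

    resp = sum(cmath.exp(1j * (phase(look_deg, k) - phase(steer_deg, k)))
               for k in range(n)) / n
    return 20.0 * math.log10(abs(resp) + 1e-300)
```

At the steering direction the response is 0 dB by construction; evaluating it at the interferer's bearing gives the sidelobe level that sets the suppression floor.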

Purpose: Despite the popularity of school "turnaround" and "portfolio district" management as solutions to low performance, there has been limited research on these strategies. The purpose of this paper is to address this gap by exploring the strategic case of Los Angeles Unified School District's Public School Choice…

Marsh, Julie A.; Strunk, Katharine O.; Bush, Susan

In an era of school reform and high stakes accountability, the major challenge in education is to turn around the nation's lowest-performing schools. National policy provides abundant resources with often restrictive prerequisites. Research indicates that quality principals and teachers are the most important factors for turning around the…

In this paper the author describes the overheating and shutdown of the synthesis gas machine following a biennial turnaround. Inspection revealed damage to the balance piston labyrinths of the IP compressor and an imperfectly balanced rotor. The author gives an explanation of the damage as well as the consequences to production.

This study investigates the economic impact of a "turnaround" school on real property values in its surrounding community as related to the argument introduced by Tiebout in 1956 correlating local public goods, in this case school success, to housing-location decision making. Using single-family home sales found on the Multiple Listing System and…

[Figure residue: average annual cycles for Monterey Bay (MB; orange), the Coastal Upwelling Zone (CUZ; green), and the coastal region; sea surface temperature is warmest offshore and in late summer; 60 m temperature (panel F) is coldest in MB and the CUZ in spring; mixed layer depth (panel D) is shallowest in MB and the CUZ in spring and deep everywhere in winter.]

A thermal probe to average temperature fluctuations over a prolonged period was formed with a temperature sensor embedded inside a solid object of a thermally conducting material. The solid object is held in a position equidistantly spaced apart from the interior surfaces of a closed housing by a mount made of a thermally insulating material. The housing is sealed to trap a vacuum or mass of air inside and thereby prevent transfer of heat directly between the environment outside of the housing and the solid object. Electrical leads couple the temperature sensor with a connector on the outside of the housing. Other solid objects of different sizes and materials may be substituted for the cylindrically-shaped object to vary the time constant of the probe.

The efficiency of producing salable products in the pork industry is largely determined by costs associated with feed and by the amount and quality of lean meat produced. The objectives of this paper were 1) to explore heritability and genetic correlations for growth, feed efficiency, and real-time ultrasound traits using both pedigree and marker information and 2) to assess accuracy of genomic prediction for those traits using Bayes A prediction models in a Duroc terminal sire population. Body weight at birth (BW at birth) and weaning (BW at weaning) and real-time ultrasound traits, including back fat thickness (BF), muscle depth (MD), and intramuscular fat content (IMF), were collected on the basis of farm protocol. Individual feed intake and serial BW records of 1,563 boars obtained from feed intake recording equipment (FIRE; Osborne Industries Inc., Osborne, KS) were edited to obtain growth, feed intake, and feed efficiency traits, including ADG, ADFI, feed conversion ratio (FCR), and residual feed intake (RFI). Correspondingly, 1,047 boars were genotyped using the Illumina PorcineSNP60 BeadChip. The remaining 516 boars, as an independent sample, were genotyped with a low-density GGP-Porcine BeadChip and imputed to 60K. Magnitudes of heritability from pedigree analysis were moderate for growth, feed intake, and ultrasound traits (ranging from 0.44 ± 0.11 for ADG to 0.58 ± 0.09 for BF); heritability estimates were 0.32 ± 0.09 for FCR but only 0.10 ± 0.05 for RFI. Comparatively, heritability estimates using marker information by Bayes A models were about half of those from pedigree analysis, suggesting "missing heritability." 
Moderate positive genetic correlations between growth and feed intake (0.32 ± 0.05) and back fat (0.22 ± 0.04), as well as negative genetic correlations between growth and feed efficiency traits (-0.21 ± 0.08, -0.05 ± 0.07), indicate selection solely on growth traits may lead to an undesirable increase in feed intake, back fat, and reduced feed efficiency. Genetic correlations among growth, feed intake, and FCR assessed by a multiple-trait Bayes A model resulted in increased genetic correlation between ADG and ADFI, a negative correlation between ADFI and FCR, and a positive correlation between ADG and FCR. Accuracies of genomic prediction for the traits investigated, ranging from 9.4% for RFI to 36.5% for BF, were reported that might provide new insight into pig breeding and future selection programs using genomic information. PMID:24671579

The main objective of this paper is to improve the Round Robin (RR) algorithm using dynamic ITS by coalescing it with the Shortest Remaining Time Next (SRTN) algorithm, thus reducing the average waiting time, average turnaround time, and the number of context switches. The original time slice is calculated for each process based on its burst time. This approach is mostly suited for soft real-time systems, where meeting deadlines is desirable to increase performance. The advantage is that processes that are closer to their remaining completion time get more chances to execute and leave the ready queue. This reduces the number of processes in the ready queue by knocking out short jobs relatively faster, in the hope of reducing the average waiting time, turnaround time, and number of context switches. This paper improves the algorithm of [8], and the experimental analysis shows that the proposed algorithm performs better than the algorithms of [6] and [8] when the processes have increasing, decreasing, or random orders of burst time.
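As an illustration of the general idea (not the paper's exact algorithm; the fixed time slice, simultaneous arrivals, and tie-breaking here are assumptions), a minimal Python sketch of an RR scheduler whose ready queue is re-sorted by remaining time each round:

```python
def hybrid_rr_srtn(bursts, quantum=4):
    """Round Robin coalesced with SRTN ordering (illustrative sketch).

    All processes are assumed to arrive at t = 0.  Each round, the ready
    queue is re-sorted by remaining time (the SRTN ingredient), so jobs
    closest to completion run first and leave the queue sooner.  Each
    process then runs for min(quantum, remaining time).
    Returns (avg_waiting_time, avg_turnaround_time, context_switches).
    """
    remaining = dict(enumerate(bursts))
    finish = {}
    t, switches = 0, 0
    while remaining:
        # SRTN ingredient: favour processes closest to completion.
        for pid in sorted(remaining, key=remaining.get):
            run = min(quantum, remaining[pid])
            t += run
            remaining[pid] -= run
            if remaining[pid] == 0:
                del remaining[pid]
                finish[pid] = t
            switches += 1
    n = len(bursts)
    turnaround = [finish[i] for i in range(n)]            # arrival = 0
    waiting = [finish[i] - bursts[i] for i in range(n)]
    return sum(waiting) / n, sum(turnaround) / n, switches

avg_wait, avg_tat, switches = hybrid_rr_srtn([10, 5, 8])
```

With bursts [10, 5, 8] and a 4-unit slice, this yields an average waiting time of 10 and an average turnaround time of about 17.7, versus roughly 12.7 and 20.3 for plain FIFO Round Robin on the same workload.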

A threaded average temperature thermocouple 11 is provided to measure the average temperature of a test situs of a test material 30. A ceramic insulator rod 15 with two parallel holes 17 and 18 through the length thereof is securely fitted in a cylinder 16, which is bored along the longitudinal axis of symmetry of threaded bolt 12. Threaded bolt 12 is composed of material having thermal properties similar to those of test material 30. Leads of a thermocouple wire 20 leading from a remotely situated temperature sensing device 35 are each fed through one of the holes 17 or 18, secured at head end 13 of ceramic insulator rod 15, and exit at tip end 14. Each lead of thermocouple wire 20 is bent into and secured in an opposite radial groove 25 in tip end 14 of threaded bolt 12. Resulting threaded average temperature thermocouple 11 is ready to be inserted into cylindrical receptacle 32. The tip end 14 of the threaded average temperature thermocouple 11 is in intimate contact with receptacle 32. A jam nut 36 secures the threaded average temperature thermocouple 11 to test material 30.

In this paper we apply the concept of radar time (popularised by Bondi in his work on k-calculus) to the well-known relativistic twin `paradox'. Radar time is used to define hypersurfaces of simultaneity for a class of travelling twins, from the `Immediate Turn-around' case, through the `Gradual Turn-around' case, to the `Uniformly Accelerating' case. We show that this definition of simultaneity is independent of choice of coordinates, and assigns a unique time to any event (with which the travelling twin can send and receive signals), resolving some common misconceptions.
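The radar-time construction referred to is the standard one from k-calculus: the travelling twin assigns to a distant event E the time halfway between the emission (τ−) and reception (τ+) of a radar signal reflected at E, along with a radar distance:

```latex
\tau(E) = \tfrac{1}{2}\bigl(\tau_{+} + \tau_{-}\bigr), \qquad
d(E) = \tfrac{c}{2}\bigl(\tau_{+} - \tau_{-}\bigr)
```

Because τ− and τ+ are proper times recorded on the twin's own clock, the assignment involves no choice of coordinates, which is the coordinate-independence the paper emphasizes.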

A simulation study was performed to determine whether a group's average percent correct in a content domain could be accurately estimated for groups taking a single test form and not the entire domain of items. Six Item Response Theory (IRT) -based domain score estimation methods were evaluated, under conditions of few items per content area per…

He suggests starting with Python and Java, because they are easy to learn. The ... This is not just a theoretical question. ... does it, you can expect to do as well as the average big company—that is, to grow ..... And not in the trivial sense that.

This demo provides students with a concrete understanding of the average rate of change for physical situations and for functions described in tabular or graphic form. Animations in gif and Quicktime format are available, as well as Excel programs for classroom or student use.

The objective of this research is to develop turbulence models to predict the flow and heat transfer fields dominated by the curvature effect, such as those encountered in turbine cascades and turn-around ducts. A Navier-Stokes code has been developed using an explicit Runge-Kutta method with a two-layer k-epsilon/ARSM (Algebraic Reynolds Stress Model), Chien's Low Reynolds Number (LRN) k-epsilon model, and Coakley's LRN q-omega model. The near-wall pressure strain correlation term was included in the ARSM. The formulation is applied to Favre-averaged N-S equations, and no thin-layer approximations are made in either the mean flow or turbulence transport equations. Anisotropic scaling of artificial dissipation terms was used. A locally variable time step was also used to improve convergence. Detailed comparisons were made between computations and data measured in a turbine cascade by Arts et al. at the Von Karman Institute. The surface pressure distributions and wake profiles were predicted well by all the models. The blade heat transfer is predicted well by the k-epsilon/ARSM model, as well as by the k-epsilon model. It is found that the onset of boundary layer transition on both surfaces is highly dependent upon the level of local freestream turbulence intensity, which is strongly influenced by the streamline curvature. Detailed computation of the flow in the turn-around duct has been carried out and validated against the data of Monson as well as Sandborn. The computed results at various streamwise locations, both on the concave and convex sides, are compared with flow and turbulence data, including the separation zone on the inner wall. The k-epsilon/ARSM model yielded relatively better results than the two-equation turbulence models. A detailed assessment of the turbulence models has been made with regard to their applicability to curved flows.

We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.

Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

Standard tomographic analyses ignore model uncertainty. It is assumed that a given model generated the data and the task is to estimate the quantum state, or a subset of parameters within that model. Here we apply a model averaging technique to mitigate the risk of overconfident estimates of model parameters in two examples: (1) selecting the rank of the state in tomography and (2) selecting the model for the fidelity decay curve in randomized benchmarking.

The extraction of charm mixing and CP violation parameters requires the combination of many measurements. The latest averages are reviewed and the contribution of individual measurements is discussed. While it is established that the CP-even eigenstate is the shorter lived, the oscillation frequency of charm mesons is still not known to high accuracy. All CP asymmetries are found to be compatible with zero within less than 3 sigma.

A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the governing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations, as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to those of a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method.
The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical engineering applications.
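The two averaging operations at the heart of DMA can be sketched in isolation. This illustrative Python (an assumption-laden toy, ignoring the DNS itself and the coupling correlations) shows a running time average followed by averaging onto a coarser grid:

```python
def running_time_average(samples):
    """First DMA operation: running time average of a fluctuating
    signal -- after each sample, the mean of everything seen so far."""
    avg, out = 0.0, []
    for n, x in enumerate(samples, start=1):
        avg += (x - avg) / n          # incremental mean update
        out.append(avg)
    return out

def volume_average(field, factor):
    """Second DMA operation: average a 1-D field onto a grid coarsened
    by `factor` (len(field) must be divisible by factor)."""
    return [sum(field[i:i + factor]) / factor
            for i in range(0, len(field), factor)]
```

In the full method these averages are applied to the governing equations, so each coarsening stage also produces correlation source terms; none of that machinery appears in this sketch.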

A 48-square-mile area in the southeastern part of the Salt Lake Valley, Utah, was studied to determine if generalized information obtained from geologic maps, water-level maps, and drillers' logs could be used to estimate hydraulic conductivity, porosity, and slope of the potentiometric surface: the three properties needed to calculate average linear velocity of ground water. Estimated values of these properties could be used by water-management and regulatory agencies to compute values of average linear velocity, which could be further used to estimate travel time of ground water along selected flow lines, and thus to determine wellhead protection areas around public-supply wells. The methods used to estimate the three properties are based on assumptions about the drillers' descriptions, the depositional history of the sediments, and the boundary conditions of the hydrologic system. These assumptions were based on geologic and hydrologic information determined from previous investigations. The reliability of the estimated values for hydrologic properties and average linear velocity depends on the accuracy of these assumptions. Hydraulic conductivity of the principal aquifer was estimated by calculating the thickness-weighted average of values assigned to different drillers' descriptions of material penetrated during the construction of 98 wells. Using these 98 control points, the study area was divided into zones representing approximate hydraulic-conductivity values of 20, 60, 100, 140, 180, 220, and 250 feet per day. This range of values is about the same range of values used in developing a ground-water flow model of the principal aquifer in the early 1980s.
Porosity of the principal aquifer was estimated by compiling the range of porosity values determined or estimated during previous investigations of basin-fill sediments, and then using five different values ranging from 15 to 35 percent to delineate zones in the study area that were assumed to be underlain by similar deposits. Delineation of the zones was based on depositional history of the area and the distribution of sediments shown on a surficial geologic map. Water levels in wells were measured twice in 1990: during late winter, when ground-water withdrawals were the least and water levels the highest, and again in late summer, when ground-water withdrawals were the greatest and water levels the lowest. These water levels were used to construct potentiometric-contour maps and subsequently to determine the variability of the slope in the potentiometric surface in the area. Values for the three properties, derived from the described sources of information, were used to produce a map showing the general distribution of average linear velocity of ground water moving through the principal aquifer of the study area. Velocity derived ranged from 0.06 to 144 feet per day with a median of about 3 feet per day. Values were slightly faster for late summer 1990 than for late winter 1990, mainly because increased withdrawal of water during the summer created slightly steeper hydraulic-head gradients between the recharge area near the mountain front and the well fields farther to the west. The fastest average linear-velocity values were located at the mouth of Little Cottonwood Canyon and south of Dry Creek near the mountain front, where the hydraulic conductivity was estimated to be the largest because the drillers described the sediments to be predominantly clean and coarse grained. Both of these areas also had steep slopes in the potentiometric surface.
Other areas where average linear velocity was fast included small areas near pumping wells where the slope in the potentiometric surface was locally steepened. No apparent relation between average linear velocity and porosity could be seen in the mapped distributions of these two properties. Calculation of travel time along a flow line to a well in the southwestern part of the study area during the sum

Freethey, G. W.; Spangler, L. E.; Monheiser, W. J.
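The velocity map described above rests on the standard relation for average linear (seepage) velocity: Darcy flux divided by effective porosity. A minimal sketch, with example values chosen from within the reported ranges for illustration rather than taken from the study:

```python
def average_linear_velocity(conductivity, gradient, porosity):
    """Average linear (seepage) velocity of ground water.

    conductivity -- hydraulic conductivity K, feet per day
    gradient     -- slope of the potentiometric surface (dimensionless)
    porosity     -- effective porosity as a fraction (e.g. 0.25)
    Returns velocity in feet per day: v = K * i / n.
    """
    return conductivity * gradient / porosity

# Hypothetical values within the reported ranges: K = 250 ft/day,
# gradient = 0.02, porosity = 0.15 -> about 33 ft/day.
v = average_linear_velocity(250, 0.02, 0.15)
```

Dividing travel distance along a flow line by this velocity gives the travel-time estimate used for delineating wellhead protection areas.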

We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

Differencing digital elevation models (DEMs) derived from TerraSAR add-on for Digital Elevation Measurements (TanDEM-X) synthetic aperture radar imagery provides a measurement of elevation change over time. On the East Rift Zone (ERZ) of Kīlauea Volcano, Hawai‘i, the effusion of lava causes changes in topography. When these elevation changes are summed over the area of an active lava flow, it is possible to quantify the volume of lava emplaced at the surface during the time spanned by the TanDEM-X data—a parameter that can be difficult to measure across the entirety of an ~100 km2 lava flow field using ground-based techniques or optical remote sensing data. Based on the differences between multiple TanDEM-X-derived DEMs collected days to weeks apart, the mean dense-rock equivalent time-averaged discharge rate of lava at Kīlauea between mid-2011 and mid-2013 was approximately 2 m3/s, which is about half the long-term average rate over the course of Kīlauea's 1983–present ERZ eruption. This result implies that there was an increase in the proportion of lava stored versus erupted, a decrease in the rate of magma supply to the volcano, or some combination of both during this time period. In addition to constraining the time-averaged discharge rate of lava and the rates of magma supply and storage, topographic change maps derived from space-based TanDEM-X data provide insights into the four-dimensional evolution of Kīlauea's ERZ lava flow field. TanDEM-X data are a valuable complement to other space-, air-, and ground-based observations of eruptive activity at Kīlauea and offer great promise at locations around the world for aiding with monitoring not just volcanic eruptions but any hazardous activity that results in surface change, including landslides, floods, earthquakes, and other natural and anthropogenic processes.

The MEMS Exchange is a DARPA-funded research program to provide the domestic MEMS community with an interface between MEMS designers and microfabrication facilities that allows designers to develop and exercise custom process sequences. These process sequences may cross the boundaries separating fabrication facilities in order to maximize process and design freedom. Information transfer within the MEMS-Exchange fabrication network takes

William L. Benard; Kaigham J. Gabriel; Michael A. Huff

We have used the data from six satellite instruments that measure the total column amount of ozone to construct a consistent merged data set extending from late 1978 into 2000. The keys to constructing a merged data set are to minimize potential drift of individual instruments and to accurately establish instrument-to-instrument offsets. We have used the short-wavelength D-pair measurements (306-313 nm) of the SBUV and SBUV/2 instruments near the equator to establish a relatively drift-free record for these instruments. We have then used their overlap with the Nimbus 7 and EP TOMS instruments to establish the relative calibration of the various instruments. We have evaluated the drift uncertainty in our merged ozone data (MOD) set by examining both the individual instrument drift uncertainty and the uncertainty in establishing the instrument-to-instrument differences. We conclude that the instrumental drift uncertainty over the 22-year data record is 0.9 %/decade (2-sigma). We have compared our MOD record with 37 ground stations that have a continuous record over that time period. We have a mean drift with respect to the stations of +0.3 %/decade, which is within 1-sigma of our uncertainty estimate. Using the satellite record as a transfer standard, we can estimate the capability of the ground instruments to establish satellite calibration. Adding the statistical variability of the station drifts with respect to the satellite to an estimate of the overall drift uncertainty of the world standard instrument, we conclude that the stations should be able to be used to establish the drift of the satellite data record to within an uncertainty of 0.6 %/decade (2-sigma). Adding to this an uncertainty due to the incomplete global coverage of the stations, we conclude that the station data should be able to establish the global trend with an uncertainty of about 0.7 %/decade, slightly better than for the satellite record.
We conclude that merging the two records together gives only a slight improvement in the uncertainty. Keeping them separate gives the greater confidence of two independent measures of the ozone trend and potential recovery. We fit the trend in our MOD record through May of 1991 and then extrapolated forward to see if the data at the end of the record were above the statistical model, as a measure of ozone recovery, as was done in the last WMO/UNEP assessment report. Because our data set drifts with respect to the ground stations through May of 1991, we calculated a smaller global trend (-1.1 %/decade) than in the WMO/UNEP report. Our data in 1998 and 1999 were, on average, 2 DU above the extrapolated statistical model with a 2-sigma uncertainty of 6 DU. For the combined mid-latitudes of the northern and southern hemispheres, the data were 5 DU above the extrapolated statistical model with a 2-sigma uncertainty of 10 DU. These may be signs of recovery, but they are still statistically insignificant.

Stolarski, Richard S.; Meeson, Blanche W. (Technical Monitor)

This is an interactive map of California and the Sierra Nevada mountains, showing projected variations in water stored in snowpack, from 1950 to 2090, assuming low or high emission scenarios over that period of time. Interactive can be adjusted to show different months of the year and various climate models, graphed by site.

For a usual-form static pressure probe immersed in a low-speed turbulent/unsteady flow in a meanflow-aligned manner, with the probe head tube diameter d_o small compared to a freestream large-scale turbulence scale length l_t, a relation is presented, and explained, between time-averaged sensed pressure p_i, corresponding Pitot pressure P, normalized flow transverse mean-square fluctuation velocity v'^2/U^2 (where U is the flow mean speed), and (secondarily) normalized flow streamwise mean-square fluctuation velocity u'^2/U^2. A co-occurring relation between P, 0.5ρU^2, and (secondarily) u'^2/U^2 also is presented and explained. Here, both p_i and P are referenced to a flow effective ambient edge pressure p_e. Each relation is limited to lower-turbulence-level flow. These relations, together with connecting hot-wire-anemometry (hwa) u'^2/U^2 determinations, have been used to determine U and v'^2 distributions across low-speed planar and axisymmetric air jets. Example such distributions are considered. Distributions are compared with present-work hwa U and u'^2/U^2 distributions, and other-work hwa, LDA, and PIV distributions. Cross-sectionally inboard of a jet half-meanspeed station, U and v'^2 values obtained using this approach are as accurate as ones on average determined using other methods. Important factors for present determinations are use of a high-accuracy pressure sensor and computer-implemented data-logging sensed-pressure averaging. Particular advantages and disadvantages for this approach are noted. Use of this approach for other flow forms is considered. Effects on determinations of allowing l_t/d_o to be only moderately large, or of allowing probe meanflow yaw misalignments, are considered.

Time-averaged holograms describing the sound-induced motion of the tympanic membrane (TM) in cadaveric preparations from three mammalian species and one live ear were measured using opto-electronic holography. This technique allows rapid measurements of the magnitude of motion of the tympanic membrane surface at frequencies as high as 25 kHz. The holograms measured in response to low and middle-frequency sound stimuli are similar to previously reported time-averaged holograms. However, at higher frequencies (f>4 kHz), our holograms reveal unique TM surface displacement patterns that consist of highly-ordered arrangements of multiple local displacement magnitude maxima, each of which is surrounded by nodal areas of low displacement magnitude. These patterns are similar to modal patterns (two-dimensional standing waves) produced by either the interaction of surface waves traveling in multiple directions or the uniform stimulation of modes of motion that are determined by the structural properties and boundary conditions of the TM. From the ratio of the displacement magnitude peaks to nodal valleys in these apparent surface waves, we estimate a Standing Wave Ratio of at least 4 that is consistent with energy reflection coefficients at the TM boundaries of at least 0.35. It is also consistent with small losses within the uniformly stimulated modal surface waves. We also estimate possible TM surface wave speeds that vary with frequency and species from 20 to 65 m/s, consistent with other estimates in the literature. The presence of standing wave or modal phenomena has previously been intuited from measurements of TM function, but is ignored in some models of tympanic membrane function. Whether these standing waves result either from the interactions of multiple surface waves that travel along the membrane, or by uniformly excited modal displacement patterns of the entire TM surface is still to be determined. PMID:19328841

Rosowski, John J; Cheng, Jeffrey Tao; Ravicz, Michael E; Hulli, Nesim; Hernandez-Montes, Maria; Harrington, Ellery; Furlong, Cosme
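The step above from the Standing Wave Ratio to the energy reflection coefficient follows the standard one-dimensional relation r = (SWR - 1)/(SWR + 1); a short check of the quoted numbers (the relation is a textbook result, not quoted from the paper):

```python
def reflection_from_swr(swr):
    """Amplitude and energy reflection coefficients implied by a
    standing wave ratio, via the standard 1-D relation
    r = (SWR - 1) / (SWR + 1)."""
    r = (swr - 1) / (swr + 1)
    return r, r * r   # energy reflection coefficient = r**2

r, energy = reflection_from_swr(4)   # SWR of at least 4, as reported
```

For SWR = 4 this gives an amplitude reflection coefficient of 0.6 and an energy reflection coefficient of 0.36, consistent with the "at least 0.35" quoted above.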

EVOLUTION OF ADIABATIC INVARIANTS IN STOCHASTIC AVERAGING. DMITRY DOLGOPYAT. Abstract. An averaging problem with Markov fast motion is considered. The diffusive limit is obtained for the evolution of adiabatic invariants. In this case terms neglected in (2) can dramatically alter long-time dynamics.

This resource is a multi-exercise module relating to average speed. It contains an interactive Java simulation featuring a car that the user controls for speed. Graphs of instantaneous and average velocities are simultaneously displayed, helping the user to understand the concept of time-weighted average. The lesson plan available through the Help menu explains how to calculate a weighted average, first with two speeds, and then with multiple. This item is part of a larger collection of simulation based physics models sponsored by the MAP project (Modular Approach to Physics).
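The weighted-average calculation the lesson plan describes amounts to total distance over total time; a minimal sketch (the two-leg example values are illustrative, not taken from the module):

```python
def time_weighted_average_speed(legs):
    """Average speed over a trip given as (speed, duration) pairs:
    total distance divided by total time, i.e. a time-weighted average."""
    total_time = sum(t for _, t in legs)
    total_distance = sum(v * t for v, t in legs)
    return total_distance / total_time

# Illustrative two-speed example: 30 mph for 1 h, then 60 mph for 2 h.
avg = time_weighted_average_speed([(30, 1), (60, 2)])   # 50.0, not 45.0
```

The result of 50 mph, rather than the unweighted mean of 45 mph, is exactly the distinction between a simple average of speeds and a time-weighted one.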

The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.

Long period fiber gratings (LPFGs) have been proposed as label-free optical biosensors for several years. Refractive index changes, which modify the fiber transmission spectrum, are used for evaluating a biochemical interaction that occurs along the grating region. A turn-around point (TAP) LPFG was manufactured to enhance the refractive index sensitivity of these devices. Considering its simplicity and speed with respect to the silanization procedure, the functionalization of the fiber was carried out with Eudragit L100 copolymer. An IgG/anti-IgG immunoassay was implemented for studying the antigen/antibody interaction. A limit of detection lower than 100 μg L-1 was achieved. Based on the same model assay, we compared the resonance wavelength shifts during the injection of 10 mg L-1 anti-IgG antigen between the TAP LPFG and a standard non-TAP one, in which the coupling occurs with a lower order cladding mode, to assess the performance improvement of LPFG-based biosensors.

Long period fiber gratings have been used effectively in the field of biochemical sensing for several years. Compared to other well-known label-free optical approaches, long period gratings (LPGs) take advantage of the typical peculiarities of optical fibers. Coupling the propagating core mode with a high-order cladding mode near its turn-around point (TAP) was the strategy adopted to achieve good performance without additional coatings, except for the sensing and selective biolayer deposited on the fiber. Both the modeling and manufacturing of TAP LPGs were discussed. After the functionalization of the fiber surface with the deposition of a Eudragit L100 copolymer layer followed by immunoglobulin G (IgG) covalent immobilization, an IgG/anti-IgG bioassay was implemented along the grating region and the kinetics of the antibody/antigen interaction was analyzed. A quantitative comparison between a TAP LPG and a non-TAP LPG was carried out to highlight the improvement of the proposed immunosensor. The real effectiveness and feasibility of an LPG-based biosensor were demonstrated by using a complex matrix consisting of human serum, which also confirmed the specificity of the assay, and a limit of detection of 70 μg L-1 (460 pM) was achieved. PMID:24835405

We present an alternative definition of the fuzzy weighted average, in which Zadeh's extension principle is applied to the definition of the non-fuzzy weighted average whose weights are required to be normalised. It is argued that the alternative approach should be preferred over the traditional approach. An algorithm for the computation of the fuzzy weighted average under the alternative approach is presented.
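The crisp map that the fuzzy definition extends can be sketched as follows (a minimal illustration, not the authors' algorithm; Zadeh's extension principle would then lift this map to fuzzy-number inputs):

```python
def weighted_average(values, weights):
    """Crisp weighted average with the weights normalised to sum to 1.

    The fuzzy weighted average of the abstract is obtained by applying
    Zadeh's extension principle to this map, so normalisation is built
    into the definition rather than imposed afterwards."""
    total = sum(weights)
    if total == 0:
        raise ValueError("weights must not sum to zero")
    return sum(v * (w / total) for v, w in zip(values, weights))

# Weights 2 and 6 normalise to 0.25 and 0.75.
wa = weighted_average([1.0, 3.0], [2.0, 6.0])
```

Because the weights are normalised inside the map, scaling all weights by a common factor leaves the result unchanged, which is the property the alternative definition preserves under fuzzification.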

Broek van den Pim; Joost Noppen; K. Demirli; A. Akgunduz

Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.

Due to the short flight of Space Shuttle Columbia, STS-83, in April 1997, NASA chose to refly the same crew, shuttle, and payload on STS-94 in July 1997. This was the first reflight of an entire mission complement. The reflight of the First Microgravity Science Laboratory (MSL-1) on STS-94 required an innovative approach to Space Shuttle payload ground processing. Ground processing time for the Spacelab Module, which served as the laboratory for MSL-1 experiments, was reduced by seventy-five percent. The Spacelab Module is a pressurized facility with avionics and thermal cooling and heating accommodations. Boeing-Huntsville, formerly McDonnell Douglas Aerospace, has been the Spacelab Integration Contractor since 1977. The first Spacelab Module flight was in 1983. An experienced team determined what was required to refurbish the Spacelab Module for reflight. Team members had diverse knowledge, skills, and background. An engineering assessment of subsystems, including mechanical, electrical power distribution, command and data management, and environmental control and life support, was performed. Recommendations for resolution of STS-83 Spacelab in-flight anomalies were provided. Inspections and tests that must be done on critical Spacelab components were identified. This assessment contributed to the successful reflight of MSL-1, the fifteenth Spacelab Module mission.

This applet is an interactive demonstration of average speed and velocity. The user can move the object, while average speed and velocity are calculated and made available through the Data panel. A full lesson plan is available through the Help menu. This item is part of a larger collection of simulation-based physics modules sponsored by the MAP project (Modular Approach to Physics).
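The distinction the applet demonstrates can be sketched as follows (a minimal 1-D illustration, not the applet's code; the sampled positions are invented):

```python
def average_speed_and_velocity(positions, dt):
    """Average speed and average velocity for 1-D motion sampled
    at equal time steps dt.

    Average speed    = total path length / elapsed time.
    Average velocity = net displacement / elapsed time."""
    path = sum(abs(b - a) for a, b in zip(positions, positions[1:]))
    displacement = positions[-1] - positions[0]
    elapsed = dt * (len(positions) - 1)
    return path / elapsed, displacement / elapsed

# Out 4 m and back again in 4 s: the path length is 8 m but the
# displacement is 0 m, so speed is 2 m/s while velocity is 0 m/s.
speed, velocity = average_speed_and_velocity([0, 2, 4, 2, 0], 1.0)
```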

A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average passage flow model describes the time-averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations have been executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.

Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark

PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency-intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have their associated drawbacks. An efficient alternative algorithm, based on a randomized selection policy, has been proposed, demonstrated, confirmed to be efficient and fair on average, and recommended for implementation in PicOS. Simulations were carried out, and performance measures such as Average Waiting Time (AWT) and Average Turnaround Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results show that the randomized algorithm is the most attractive for implementation in PicOS, since it is the fairest and has the lowest AWT and ATT on average among the non-preemptive scheduling algorithms implemented in this paper.
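A rough sketch of how such a comparison might be set up (not the PicOS code; the burst times, the simultaneous-arrival assumption, and the uniform random policy are illustrative assumptions):

```python
import random

def schedule_metrics(bursts, order):
    """Average waiting time (AWT) and average turnaround time (ATT)
    for non-preemptive execution of jobs, all arriving at t = 0,
    run in the given order."""
    t, waits, turnarounds = 0, [], []
    for i in order:
        waits.append(t)        # job i waits until everything before it finishes
        t += bursts[i]
        turnarounds.append(t)  # turnaround = completion time (arrival at t = 0)
    n = len(bursts)
    return sum(waits) / n, sum(turnarounds) / n

bursts = [8, 2, 5, 1]  # invented CPU burst times

# FCFS: run the jobs in arrival order.
fcfs = schedule_metrics(bursts, range(len(bursts)))

# Randomized policy: pick the next job uniformly at random, and
# average the metrics over many simulated runs.
random.seed(0)
runs = [schedule_metrics(bursts, random.sample(range(len(bursts)), len(bursts)))
        for _ in range(10_000)]
rand_awt = sum(r[0] for r in runs) / len(runs)
rand_att = sum(r[1] for r in runs) / len(runs)
```

For any non-preemptive order, ATT exceeds AWT by exactly the mean burst time; the policies differ only in how the waiting is distributed across jobs.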

Students will determine average speeds from data collected and convert units for speed problems. Students try to roll the ball with a prescribed average speed based on intuition. Then, based on unit conversion we will see how accurate the rolls really were.
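The unit conversions the activity calls for can be sketched as follows (a minimal illustration; the rolled-ball numbers are invented):

```python
def ms_to_kmh(v):
    """Convert metres per second to kilometres per hour (multiply by 3.6)."""
    return v * 3.6

def kmh_to_ms(v):
    """Convert kilometres per hour to metres per second."""
    return v / 3.6

def ms_to_mph(v):
    """Convert metres per second to miles per hour (1 mile = 1609.344 m)."""
    return v * 3600.0 / 1609.344

# A ball that rolls 5 m in 2 s has an average speed of 2.5 m/s, i.e. 9 km/h.
avg_ms = 5.0 / 2.0
avg_kmh = ms_to_kmh(avg_ms)
```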

People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These

Average properties of general inhomogeneous cosmological models are discussed in the Newtonian framework. It is shown under which circumstances the average flow reduces to a member of the standard Friedmann-Lemaître cosmologies. Possible choices of global boundary conditions of inhomogeneous cosmologies, as well as consequences for the interpretation of cosmological parameters, are put into perspective.

of indices i such that f(w) ≠ f(w^(i)). Definition 2 (Average sensitivity). The average sensitivity of f with respect to d, bs_{d,w}(f), is the number of blocks S_i such that f(w) ≠ f(w^(S_i)). Definition 4. For a binary string w ∈ {0,1}^n, we denote by w^(i) the string obtained from w by flipping its ith bit.

This article describes the Soddy-Daisy High School in southeastern Tennessee. It used to be that vocational training and a focus on academic studies were considered completely different means of education. But in Soddy-Daisy, Tennessee, the two go hand in hand. Eric Thomas and his brother Mark teach side by side in adjacent rooms, where computer…

Even with great advances in computational techniques and computing power during recent decades, the modeling of unsteady separated flows, such as those encountered in the wake of a re-entry vehicle, continues to be one of the most challenging problems in CFD. Of most interest to the aerothermodynamics community is accurately predicting transient heating loads on the base of a blunt body, which would result in reduced uncertainties and safety margins when designing a re-entry vehicle. However, the prediction of heat transfer can vary widely depending on the turbulence model employed. Therefore, selecting a turbulence model which realistically captures as much of the flow physics as possible will result in improved results. Reynolds-Averaged Navier-Stokes (RANS) models have become increasingly popular due to their good performance with attached flows and the relatively quick turnaround time needed to obtain results. However, RANS methods cannot accurately simulate unsteady separated wake flows, and running direct numerical simulation (DNS) on such complex flows is currently too computationally expensive. Large Eddy Simulation (LES) techniques allow for the computation of the large eddies, which contain most of the Reynolds stress, while modeling the smaller (subgrid) eddies. This results in models which are more computationally expensive than RANS methods, but not as prohibitive as DNS. By complementing an LES approach with a RANS model, a hybrid LES/RANS method resolves the larger turbulent scales away from surfaces with LES and switches to a RANS model inside boundary layers. As pointed out by Bertin et al., this type of hybrid approach has shown a lot of promise for predicting turbulent flows, but work is needed to verify that these models work well in hypersonic flows. The very limited amount of flight and experimental data available presents an additional challenge for researchers.
Recently, a joint study by NASA and CUBRC has focused on collecting heat transfer data on the backshell of a scaled model of the Orion Multi-Purpose Crew Vehicle (MPCV). Heat augmentation effects due to the presence of cavities and RCS jet firings were also investigated. The high quality data produced by this effort present a new set of data which can be used to assess the performance of CFD methods. In this work, a hybrid LES/RANS model developed at North Carolina State University (NCSU) is used to simulate several runs from these experiments and to evaluate the performance of high-fidelity methods as compared to more typical RANS models.

Salazar, Giovanni; Edwards, Jack R.; Amar, Adam J.

A primary application of econophysics is using digital signal processing techniques to filter and predict market data, which is theorized to exhibit random walk motion. An exponential moving average is one tool that physicists use to smooth data from an input signal to identify its trends. The Exponential Moving Average Stock Model implements three types of exponential moving averages and allows the user to change the parameters of each. The model allows the user to view the results of exponential moving averages computed on the New York Stock Exchange daily closing price of six familiar companies. It demonstrates one way that traders use causal filters to smooth market data and forecast the next day's price.
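One common recursive form of the exponential moving average can be sketched as follows (a minimal illustration; the model implements three EMA variants and this shows only the basic form, with invented closing prices):

```python
def ema(prices, alpha):
    """Exponential moving average: a causal smoothing filter in which
    the weight of past samples decays geometrically. alpha is in
    (0, 1]; a larger alpha tracks the input faster but smooths less."""
    out = [prices[0]]  # seed the filter with the first sample
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

closes = [10.0, 11.0, 12.0, 11.0, 10.0]  # invented daily closing prices
smoothed = ema(closes, alpha=0.5)
```

A common convention ties alpha to an N-day span via alpha = 2/(N + 1); the three EMA types in the model presumably differ in choices like this and in how the filter is seeded.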

In 2004, Kuss Middle School became the first school declared "Chronically Underperforming" by the state of Massachusetts. But by 2010, Kuss had transformed itself into a model for schools around the country seeking a comprehensive turnaround strategy. Kuss is using increased learning time as the primary catalyst to accelerate learning, deepen…

The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and it also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing in other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables.

Rats were trained on a mixed fixed-interval schedule in which stimulus A (tone or light) indicated food availability after 10 s and stimulus B (the other stimulus) indicated food availability after 20 s. Testing consisted of non-reinforced probe trials in which the stimulus was A, B, or the compound AB. On single-stimulus trials, rats responded with a peak of activity around the programmed reinforced time. On compound-stimulus trials, rats showed a single scalar peak of responding at a time midway between those for stimulus A and B. These results suggest that when provided with discrepant information regarding the temporal predictability of reinforcement, rats compute an average of the scheduled reinforcement times for the A and B stimuli and use this average to generate an expectation of reward for the compound stimuli. PMID:19594288

Swanton, Dale N.; Gooch, Cynthia M.; Matell, Matthew S.

New publication: Climatological averages for 1981-2010 and 2001-2010 for stations appearing in the monthly bulletin of the Climatological Observers Link (ISBN 978-0-9569485-0-2). Available 1 September; detach and return the form below with payment. Please make cheques payable to 'Climatological Observers Link'.

Homogeneous shear flows (with constant strain rate du/dy) are generated with the Doll's and Sllod algorithms and compared to corresponding inhomogeneous boundary-driven flows. We use one-, two-, and three-dimensional smooth-particle weight functions for computing instantaneous spatial averages. The nonlinear stress differences are small, but significant, in both two and three space dimensions. In homogeneous systems the sign and magnitude of the shear-plane stress difference, P(xx) - P(yy), depend on both the thermostat type and the chosen shear-flow algorithm. The Doll's and Sllod algorithms predict opposite signs for this stress difference, with the Sllod approach definitely wrong, but somewhat closer to the (boundary-driven) truth. Neither of the homogeneous shear algorithms predicts the correct ordering of the kinetic temperatures, T(xx) > T(zz) > T(yy).

Homogeneous shear flows (with constant strain rate dvx/dy) are generated with the Doll's and Sllod algorithms and compared to corresponding inhomogeneous boundary-driven flows. We use one-, two-, and three-dimensional smooth-particle weight functions for computing instantaneous spatial averages. The nonlinear normal-stress differences are small, but significant, in both two and three space dimensions. In homogeneous systems the sign and magnitude of the shear-plane stress difference, Pxx - Pyy, depend on both the thermostat type and the chosen shear-flow algorithm. The Doll's and Sllod algorithms predict opposite signs for this normal-stress difference, with the Sllod approach definitely wrong, but somewhat closer to the (boundary-driven) truth. Neither of the homogeneous shear algorithms predicts the correct ordering of the kinetic temperatures: Txx > Tzz > Tyy.

in a unified light. First, we characterize the convergence properties of averaging algorithms over acyclic. Averaging algorithms have been studied both in continuous time (Olfati-Saber and Murray, 2004; Ren). In (Olfati-Saber and Murray, 2004), averaging algorithms are investigated via graph Laplacians under

In the framework of the elliptic regularization method, the Cauchy problem for the Schrödinger equation with discontinuous degenerating coefficients is associated with a sequence of regularized Cauchy problems and the corresponding regularized dynamical semigroups. We study a divergent sequence of quantum dynamical semigroups as a random process with values in the space of quantum states defined on a measurable space of regularization parameters with a finitely additive measure. The mathematical expectation of the considered processes determined by the Pettis integral defines a family of averaged dynamical transformations. We investigate the semigroup property and the injectivity and surjectivity of the averaged transformations. We establish the possibility of defining the process by its mathematical expectation at two different instants and propose a procedure for approximating an unknown initial state by solutions of a finite set of variational problems on compact sets.

Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic), thus is more directly applicable to our observations. In contrast to previous studies, the averaging is exact, non-perturbative, and includes all non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. Voids aligned along a certain direction give rise to a distance modulus correction which increases with redshift and is caused by cumulative gravitational lensing. That correction is present even for small voids and depends on their density contrast, not on their radius. Averaging over all directions destroys the cumulative lensing correction even in a non-randomized simple cubic lattice of voids. At low redshifts, the average distance modulus correction does not vanish due to the peculiar velocities, despite the photon flux conservation argument. A formula for the maximal possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (a) have approximately constant densities in their interior and walls; and (b) are not in a deep nonlinear regime. The average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximal one after a single void diameter. That is traced to cancellations between the corrections from the fronts and backs of different voids.
The results obtained allow one to readily predict the redshift above which the direction-averaged fluctuation in the Hubble diagram falls below a required precision and suggest a method to extract the background Hubble constant from low redshift data without the need to correct for peculiar velocities.

Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic), thus it is more directly applicable to our observations. Unlike previous studies, the averaging is exact, non-perturbative, and includes all possible non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. For voids aligned in a certain direction, there is a cumulative gravitational lensing correction to the distance modulus that increases with redshift. That correction is present even for small voids and depends on the density contrast of the voids, not on their radius. Averaging over all directions destroys the cumulative correction even in a non-randomized simple cubic lattice of voids. Despite the well known argument for photon flux conservation, the average distance modulus correction at low redshifts is not zero due to the peculiar velocities. A formula for the maximum possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (1) have approximately constant densities in their interior and walls, (2) are not in a deep nonlinear regime. The actual average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximum.
That is traced to cancellations between the corrections coming from the fronts and backs of different voids at the same redshift from the observer. The calculated correction at low redshifts allows one to readily predict the redshift at which the averaged fluctuation in the Hubble diagram is below a required precision and suggests a method to extract the background Hubble constant from low redshift data without the need to correct for peculiar velocities.

We study the evolution of the entanglement of noninteracting qubits coupled to reservoirs under monitoring of the reservoirs by means of continuous measurements. We calculate the average of the concurrence of the qubits' wave function over all quantum trajectories. For two qubits coupled to independent baths subjected to local measurements, this average decays exponentially with a rate depending on the measurement scheme only. This contrasts with the known disappearance of entanglement after a finite time for the density matrix in the absence of measurements. For two qubits coupled to a common bath, the mean concurrence can vanish at discrete times. Our analysis applies to arbitrary quantum jump or quantum state diffusion dynamics in the Markov limit. We discuss the best measurement schemes to protect entanglement in specific examples.

Vogelsberger, S. [Institut Fourier, Universite Joseph Fourier and CNRS, BP 74, F-38402 Saint Martin d'Heres (France); Spehner, D. [Institut Fourier, Universite Joseph Fourier and CNRS, BP 74, F-38402 Saint Martin d'Heres (France); Laboratoire de Physique et Modelisation des Milieux Condenses, Universite Joseph Fourier and CNRS, BP 166, F-38042 Grenoble (France)

Joint wavelet analysis of complete and downsampled series of paleomagnetic and petromagnetic characteristics of rocks in the Matuyama-Jaramillo transitional zone in the Adzhidere section is used to extract paleomagnetic data whose variations are associated with the geomagnetic field alone and data correlating with variations in petromagnetic parameters. It is supposed that this correlation can be caused by an external factor affecting weak variations in the magnetic field and climatic changes reflected in the composition and amount of the ferromagnetic fraction in rocks. Preliminary data are obtained for the characteristic times of field variations at the time of accumulation of rocks in the transitional zone.

This is a task from the Illustrative Mathematics website that is one part of a complete illustration of the standard to which it is aligned. Each task has at least one solution and some commentary that addresses important aspects of the task and its potential use. Here are the first few lines of the commentary for this task: John makes DVDs of his friend's shows. He has realized that, because of his fixed costs, his average cost per DVD depends on the number of DVDs he prod...
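The dependence the commentary describes can be sketched as follows (the dollar figures are invented for illustration and are not from the task):

```python
def average_cost_per_dvd(fixed_cost, unit_cost, quantity):
    """Average cost per DVD = (fixed cost + unit cost * quantity) / quantity.
    The fixed cost is spread over more DVDs as the quantity grows,
    so the average cost falls toward the unit cost."""
    return (fixed_cost + unit_cost * quantity) / quantity

# Hypothetical numbers: $250 of fixed costs, $1.25 of materials per DVD.
ten = average_cost_per_dvd(250, 1.25, 10)      # 26.25 per DVD
hundred = average_cost_per_dvd(250, 1.25, 100) # 3.75 per DVD
```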

Admission factors include GRE scores (including the physics subject test) and 3 letters of recommendation. [Garbled table residue: enrollment, funding, demographic counts, and average time to degree for students starting 2000-2002.]

The current status of the search for indirect CP violation in the neutral D meson system at the B-factories and at LHCb is reported. The indirect CP asymmetry search is performed by the measurement of the proper-time asymmetry ($A_{\\Gamma}$) in decays of $D^0-\\bar{D^0}$ mesons to CP eigenstates, $K^-K^+$ and $\\pi^- \\pi^+$, and by $y_{CP}$, the ratio between the effective lifetime measured in decays to a CP eigenstate and that to the mixed eigenstate $K \\pi$. All results are consistent with the no CP violation hypothesis. The latest world averages for mixing and CP asymmetry in the charm sector evaluated by the Heavy Flavour Averaging Group are presented. The no mixing hypothesis is excluded at more than 12 standard deviations. The search for direct and indirect CP violation in the charm sector is consistent with no CP violation at the 2.0% confidence level.

average-linkage clustering example 1

The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor Thermal Emission Spectrometer and the Mariner 9 Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise, and integration time for both TPF-C and TPF-I/Darwin. PMID:16078866

The principal goal of the NASA Terrestrial Planet Finder (TPF) and ESA Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of the planet Mars to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model which uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions (phase angles) and viewing geometries. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-IR wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the appearance of an increasingly frozen Mars and simulated its detection versus the real Mars with TPF-C and TPF-I as a function of spectral resolving power, signal-to-noise, and integration time.

Giovanna Tinetti; Victoria S. Meadows; David Crisp; William Fong; Thangasamy Velusamy; Heather Snively

Sea Surface Temperature Monthly Average and Anomaly Charts, Northeastern Pacific Ocean, 1947. Part I: Sea surface temperature monthly average charts, northeastern Pacific Ocean. Part II: Sea

The concept of time in the 'clockwork' Newtonian world was irrelevant and has generally been ignored until recently by several generations of physicists since the implementation of quantum mechanics. We will set aside the utility of time as a property relating to physical calculations of events relating to a metric's line element, as an aspect of the transformation of a particle's motion/interaction in a coordinate system, or in relation to thermodynamics etc.; i.e., we will discard all the usual uses of time as a concept used to circularly define physical parameters in terms of other physical parameters, concentrating instead on time as an aspect of the fundamental cosmic topology of our virtual reality, especially as it inseparably relates to the nature and role of the observer in natural science.

Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.
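The zonal averaging underlying these cross sections can be illustrated with a toy grid (invented values on a small latitude-longitude grid; not the report's processing code):

```python
def zonal_average(field):
    """Zonal mean of a 2-D field indexed as field[latitude][longitude]:
    the average over all longitudes at each latitude, which collapses
    a lat-lon map into a single latitude profile."""
    return [sum(row) / len(row) for row in field]

# Invented 3 x 4 temperature grid (K): 3 latitude rows, 4 longitude columns.
t = [[280.0, 282.0, 281.0, 283.0],
     [290.0, 288.0, 289.0, 293.0],
     [300.0, 298.0, 302.0, 300.0]]
t_zonal = zonal_average(t)  # one mean temperature per latitude
```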

Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.

In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by the simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, the increase of polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets with the same size. These polynomials could be interesting for those applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
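The first-order ingredients can be sketched as follows (a minimal illustration, not the paper's estimator: the trailing-window variant of the detrending moving average and the random-walk test case are assumptions made for this sketch):

```python
import math
import random

def moving_average(x, n):
    """Trailing (first-order) moving average over windows of n samples."""
    return [sum(x[i - n + 1:i + 1]) / n for i in range(n - 1, len(x))]

def dma_fluctuation(x, n):
    """Root-mean-square deviation of the series from its trailing moving
    average; for fractional Brownian series this grows roughly as n**H."""
    ma = moving_average(x, n)
    diffs = [x[i + n - 1] - ma[i] for i in range(len(ma))]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Ordinary Brownian motion has H = 0.5, so comparing the fluctuation at
# two window sizes should recover an exponent near 0.5.
random.seed(1)
walk = [0.0]
for _ in range(20000):
    walk.append(walk[-1] + random.gauss(0.0, 1.0))
f64 = dma_fluctuation(walk, 64)
f256 = dma_fluctuation(walk, 256)
H_est = math.log(f256 / f64) / math.log(256 / 64)
```

In the paper's generalization, the window mean here is replaced by a higher-order polynomial fit over the same window, giving trend estimates at shorter effective time scales without changing n.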

The authors consider static and adaptive-decentralized optimisation of the performance of a distributed computing system with a central processor (CP), denoted by S_0, loosely coupled to N peripheral processors (PPs), denoted by S_i, i = 1, ..., N. Each PP receives a stream of Poisson job arrivals with a rate λ_i. The arriving load at S_i is distributed between

The influence of specific absorption rate averaging schemes on the spatial correlation between mass-averaged specific absorption rate and radio-frequency-induced steady-state temperature-rise distributions in the “Visible Human” body model exposed to plane waves in the 30–800 MHz frequency range is investigated through finite-difference time-domain modeling. The averaged specific absorption rate is computed on the basis of the IEEE Std. C95.3-2002 specific

Alexander Razmadze; Levan Shoshiashvili; David Kakulia; Revaz Zaridze; Giorgi Bit-Babik; Antonio Faraone

In this paper we study a Monte Carlo simulation based approach to stochastic discrete optimization problems. The basic idea of such methods is that a random sample is generated and consequently the expected value function is approximated by the corresponding sample average function. The obtained sample average optimization problem is solved, and the procedure is repeated several times until a

Anton J. Kleywegt; Alexander Shapiro; Tito Homem-de-Mello
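The sample average approximation (SAA) procedure the abstract outlines can be sketched on a toy stochastic discrete problem. The problem below (a newsvendor-style cost with Poisson demand) and all parameters are illustrative assumptions, not from the paper:

```python
import numpy as np

def sample_average_solve(cost, candidates, xi_sample):
    """One SAA replication: minimize the sample-average cost over candidates."""
    avg = [np.mean([cost(x, xi) for xi in xi_sample]) for x in candidates]
    return candidates[int(np.argmin(avg))]

# toy newsvendor-style problem: order x units at unit cost 1,
# sell min(x, demand) at price 2, demand xi ~ Poisson(7)
def cost(x, xi):
    return 1.0 * x - 2.0 * min(x, xi)

rng = np.random.default_rng(1)
candidates = list(range(0, 21))
# repeat the SAA procedure several times with fresh samples, as described
solutions = [
    sample_average_solve(cost, candidates, rng.poisson(7, size=2000))
    for _ in range(5)
]
```

With a critical fractile of 0.5, the replications should cluster around the median demand of 7; the spread across replications is exactly the sampling variability the repeated procedure is meant to assess.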

STATE OF CALIFORNIA AREA WEIGHTED AVERAGE CALCULATION WORKSHEET: RESIDENTIAL CEC-WS-2R (Revised 08/09) CALIFORNIA ENERGY COMMISSION Area Weighted Average Calculation Worksheet WS-2R Residential (Page 1 of 1) Site/Time: ____________________ HERS Provider: __________________ 2008 Residential Compliance Forms August 2009 This worksheet should

...Summary Sheet for Determination of Average Opacity 1 Figure 1 to Subpart QQQ of Part...Summary Sheet for Determination of Average Opacity Clock time Number of converters blowing Converter aisle activity Average opacity for 1-minute...

Optimal averaging (OA) is used to compute the area-average seasonal sea surface temperature (SST) for a variety of areas from 1860 to 1989. The OA gives statistically improved averages and the objective assignment of confidence intervals to these averages. For the global average, the early part of the record (1860-1890) and the times of the two world wars have largest

The assembly and maintenance of cilia depends on intraflagellar transport (IFT). Activated IFT motor kinesin-II enters the cilium with loaded IFT particles comprising IFT-A and IFT-B complexes. At the ciliary tip, kinesin-II becomes inactivated, and IFT particles are released. Moreover, the rate of IFT entry is dynamically regulated during cilium assembly. However, the regulatory mechanism of IFT entry and loading/unloading of IFT particles remains elusive. We show that the kinesin-II motor subunit FLA8, a homolog of KIF3B, is phosphorylated on the conserved S663 by a calcium-dependent kinase in Chlamydomonas. This phosphorylation disrupts the interaction between kinesin-II and IFT-B, inactivates kinesin-II and inhibits IFT entry, and is also required for IFT-B unloading at the ciliary tip. Furthermore, our data suggest that the IFT entry rate is controlled by regulation of the cellular level of phosphorylated FLA8. Therefore, FLA8 phosphorylation acts as a molecular switch to control IFT entry and turnaround. PMID:25175706

This report describes a probabilistic structural analysis performed to determine the probabilistic structural response under fluctuating random pressure loads for the Space Shuttle Main Engine (SSME) turnaround vane. It uses a newly developed frequency- and distance-dependent correlation model that has features to model the decay phenomena along the flow and across the flow, with the capability to introduce a phase delay. The analytical results are compared using two computer codes, SAFER (Spectral Analysis of Finite Element Responses) and NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), and with experimentally observed strain gage data. The computer code NESSUS, with an interface to a subset of the Composite Load Spectra (CLS) code, is used for the probabilistic analysis. A fatigue code was used to calculate fatigue damage due to the random pressure excitation. The random variables modeled include engine system primitive variables that influence the operating conditions, convection velocity coefficient, stress concentration factor, structural damping, and thickness of the inner and outer vanes. The need for an appropriate correlation model in addition to the magnitude of the PSD is emphasized. The study demonstrates that correlation characteristics, even under random pressure loads, are capable of causing resonance-like effects for some modes. The study identifies the important variables that contribute to structural alternate stress response and drive the fatigue damage for the new design. Since the alternate stress for the new redesign is less than the endurance limit for the material, the damage due to high-cycle fatigue is negligible.

Rajagopal, Kadambi R.; DebChaudhury, Amitabha; Orient, George

We examine the properties of a recently proposed observationally viable alternative to homogeneous cosmology with smooth dark energy, the timescape cosmology. In the timescape model cosmic acceleration is realized as an apparent effect related to the calibration of clocks and rods of observers in bound systems relative to volume-average observers in an inhomogeneous geometry in ordinary general relativity. The model is based on an exact solution to a Buchert average of the Einstein equations with backreaction. The present paper examines a number of observational tests which will enable the timescape model to be distinguished from homogeneous cosmologies with a cosmological constant or other smooth dark energy, in current and future generations of dark energy experiments. Predictions are presented for comoving distance measures; H(z); the equivalent of the dark energy equation of state, w(z); the Om(z) measure of Sahni, Shafieloo, and Starobinsky; the Alcock-Paczynski test; the baryon acoustic oscillation measure, D_V; the inhomogeneity test of Clarkson, Bassett, and Lu; and the time drift of cosmological redshifts. Where possible, the predictions are compared to recent independent studies of similar measures in homogeneous cosmologies with dark energy. Three separate tests with indications of results in possible tension with the ΛCDM model are found to be consistent with the expectations of the timescape cosmology.

Wiltshire, David L. [Department of Physics and Astronomy, University of Canterbury, Private Bag 4800, Christchurch 8140, New Zealand, and International Center for Relativistic Astrophysics Network (ICRANet), Piazzale della Repubblica 10, Pescara 65121 (Italy)

RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this has provided an average orbit signal quality improvement, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and performance with beam.

Michnoff, R.; Cerniglia, P.; Degen, C.; Hulsart, R.; et al.
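The benefit of averaging over whole periods of the ~10 Hz perturbation, rather than an arbitrary block of turns, can be illustrated with synthetic turn-by-turn data. The revolution frequency, perturbation amplitude, and turn counts below are illustrative assumptions, not RHIC machine parameters:

```python
import numpy as np

# turn-by-turn position with a 10 Hz perturbation on a 1.0 closed orbit
f_rev, f_pert = 78_000.0, 10.0          # revolution and perturbation frequencies
turns = np.arange(200_000)
orbit = 1.0 + 0.5 * np.sin(2 * np.pi * f_pert * turns / f_rev)

def block_average(x, n):
    """Average orbit computed over the first n turns."""
    return float(np.mean(x[:n]))

naive = block_average(orbit, 10_000)                       # arbitrary block
one_period = block_average(orbit, int(f_rev / f_pert))     # one 10 Hz period
many_periods = block_average(orbit, int(20 * f_rev / f_pert))  # 20 periods
```

Averaging over an integer number of perturbation periods recovers the true closed orbit essentially exactly, while the arbitrary 10,000-turn block retains a visible bias from the unresolved 10 Hz oscillation.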

The axial projection of the glutamine synthetase molecule has been reconstructed from electron micrographs of a stained preparation by using a new method of correlation search and averaging. The average over 50 individual molecules appears as a radial pattern with sixfold symmetry. The handedness evident in the average is attributed to nonuniformity of the negative stain. PMID:32653

Frank, J.; Goldfarb, W.; Eisenberg, D.; Baker, T.S.

Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…

Schlottmann, Anne; Harman, Rachel M.; Paine, Julie

Rats were trained on a mixed fixed-interval schedule in which stimulus A (tone or light) indicated food availability after 10 … and responded with a peak of activity around the programmed reinforcement time. On compound-stimulus trials, rats compute an average of the scheduled reinforcement times for the A and B stimuli.

Various theoretical perspectives suggest that marginal changes in the quantity of crime and arrests are related to one another. Unfortunately, they provide little guidance as to the amount of time that is required for these effects to be realized. In this paper, autoregressive integrated moving average (ARIMA) time-series modeling techniques, which necessitate making minimal assumptions concerning the lag structure one
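An ARIMA analysis of this kind is normally run with a statistical package; as a dependency-light sketch, the snippet below fits an ARIMA(1,1,0) model by differencing once and then estimating the AR(1) coefficient with ordinary least squares. The model order and simulated data are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def fit_arima_110(y):
    """Fit ARIMA(1,1,0): difference once (the 'I' step), then OLS AR(1)."""
    dy = np.diff(y)
    x, target = dy[:-1], dy[1:]          # lag-1 regression on the differences
    phi = np.dot(x, target) / np.dot(x, x)
    resid = target - phi * x
    return phi, np.std(resid)

rng = np.random.default_rng(2)
# simulate an ARIMA(1,1,0) series with AR coefficient 0.6 and unit noise
eps = rng.standard_normal(20_000)
dy = np.zeros_like(eps)
for t in range(1, len(eps)):
    dy[t] = 0.6 * dy[t - 1] + eps[t]
y = np.cumsum(dy)

phi_hat, sigma_hat = fit_arima_110(y)
```

Recovering the simulated coefficient from the differenced series is the basic sanity check before applying the same machinery to the crime/arrest series the abstract describes.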

Rising underwater noise levels from shipping have raised concerns regarding chronic impacts to marine fauna. However, there is a lack of consensus over how to average local shipping noise levels for environmental impact assessment. This paper addresses this issue using 110 days of continuous data recorded in the Strait of Georgia, Canada. Probability densities of ~10^7 1-s samples in selected 1/3 octave bands were approximately stationary across one-month subsamples. Median and mode levels varied with averaging time. Mean sound pressure levels averaged in linear space, though susceptible to strong bias from outliers, are most relevant to cumulative impact assessment metrics. PMID:23039575

Merchant, Nathan D; Blondel, Philippe; Dakin, D Tom; Dorocicz, John

Double averaging of the fundamental flow equations is an attractive methodology for investigating spatially heterogeneous flows. The resulting double-averaged equations can be used as a framework for development of turbulence models. In order to fully explore the potential of such models, the stress terms that appear in the momentum equation as a result of each averaging step need to be

A cell averaging method for the Chebyshev approximations of first-order hyperbolic equations in conservation form is described. Formulas are presented for transforming between pointwise data at the collocation points and cell-averaged quantities, and vice versa. This step, trivial for the finite difference and Fourier methods, is nontrivial for the global polynomials used in spectral methods. The cell averaging methods presented are proven stable for linear scalar hyperbolic equations, and numerical simulations of shock-density wave interaction using the new cell-averaging Chebyshev methods are presented.
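The point-value/cell-average transformation that the abstract calls trivial for finite-difference grids can be sketched directly; the Chebyshev case requires the nontrivial formulas of the paper, which are not reproduced here. A minimal uniform-grid illustration, with an invented test function:

```python
import numpy as np

def cell_averages(f, edges):
    """Cell averages of f over cells defined by 'edges' (trapezoid quadrature)."""
    avgs = []
    for a, b in zip(edges[:-1], edges[1:]):
        x = np.linspace(a, b, 201)                # fine quadrature grid per cell
        y = f(x)
        integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
        avgs.append(integral / (b - a))
    return np.array(avgs)

edges = np.linspace(0.0, 1.0, 51)                 # 50 uniform cells
mid = 0.5 * (edges[:-1] + edges[1:])
f = lambda x: np.sin(2 * np.pi * x)

avg = cell_averages(f, edges)
# on a uniform grid, cell averages agree with midpoint values to O(h^2),
# which is why the transformation is 'trivial' for finite-difference methods
err = np.max(np.abs(avg - f(mid)))
```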

The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (Z(C)) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation-reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between Z(C) and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in Z(C) in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower Z(C) tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales. PMID:25165594
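The elemental-ratio calculation of Z(C) described above can be written down directly, using the conventional nominal oxidation states H +1, N −3, O −2, S −2 for a neutral molecule; the helper function and example molecules are illustrative, not from the paper:

```python
def carbon_oxidation_state(c, h, n=0, o=0, s=0):
    """Average oxidation state of carbon, Z_C, for a neutral molecule
    C_c H_h N_n O_o S_s, assigning H +1, N -3, O -2, S -2."""
    return (-h + 3 * n + 2 * o + 2 * s) / c

# sanity checks against well-known values
zc_methane = carbon_oxidation_state(c=1, h=4)             # CH4   -> -4
zc_co2     = carbon_oxidation_state(c=1, h=0, o=2)        # CO2   -> +4
zc_glycine = carbon_oxidation_state(c=2, h=5, n=1, o=2)   # C2H5NO2 -> +1
```

For a protein, the same formula applies to its overall chemical formula, giving the whole-molecule average the abstract compares across subcellular compartments.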

CALIFORNIA ENERGY COMMISSION STAFF FORECAST: AVERAGE RETAIL ELECTRICITY PRICES 2005 TO 2018. Marks, Mignon. Staff Forecast of Average Retail Electricity Prices: 2005 to 2018. California Energy Commission, Draft Staff Report, June 2007, CEC-200-2007-013-SD. Arnold Schwarzenegger, Governor.

Use of zonal average temperatures is common in numerical modeling of stratospheric chemistry. In most cases, this approach is accurate. The exceptions are when a heterogeneous reaction or a thermal process which is highly non-linear in temperature is involved. The differences between use of an average temperature and more realistic temperature are presented for cases where temperature fluctuations are of

Dozens of schools are slated for aggressive interventions over the next three years under a new, multistate effort that aims to clear hurdles that have hindered previous attempts to improve underperforming schools. Education officials in Colorado, Delaware, Illinois, Louisiana, Massachusetts, and New York have agreed to partner with Mass Insight…

If the job of a traditional superintendent is hard, imagine the complexities involved in building a school system from scratch--especially one composed of schools with some of the most intractable educational challenges. That's the task facing education leaders in Michigan and Tennessee, which are building special districts to take over…

Average annual pay levels by industry varied widely across the nation according to this BLS release for 1997, with high average incomes of $49,995 recorded in mining areas and a low average pay of $15,877 recorded in retail trade, an industry largely populated by part-time workers. Regionally, The District of Columbia held the greatest annual pay level at $46,775 in 1997, while South and North Dakota, Montana, Mississippi, and Arkansas retained low annual salaries of $21,645, $22,047, $21,947, $22,772, and $23,268 respectively. Additional comparisons of annual incomes by state and industry for 1996 and 1997 are available at this Average Annual Pay by State and Industry page with four detailed statistical tables.

Given an elliptic curve $E$ defined over $\mathbb{Q}$ and a prime $p$ of good reduction, let $\tilde{E}(\mathbb{F}_p)$ denote the group of $\mathbb{F}_p$-points of the reduction of $E$ modulo $p$, and let $e_p$ denote the exponent of said group. Assuming a certain form of the Generalized Riemann Hypothesis (GRH), we study the average of $e_p$ as $p \le X$ ranges over primes of good reduction, and find that the average exponent essentially equals $p \cdot c_{E}$, where the constant $c_{E} > 0$ depends on $E$. For $E$ without complex multiplication (CM), $c_{E}$ can be written as a rational number (depending on $E$) times a universal constant. Without assuming GRH, we can determine the average exponent when $E$ has CM, as well as give an upper bound on the average in the non-CM case.

Summary In order to formulate the Fundamental Theorem of Natural Selection, Fisher defined the average excess and average effect of a gene substitution. Finding these notions to be somewhat opaque, some authors have recommended reformulating Fisher’s ideas in terms of covariance and regression, which are classical concepts of statistics. We argue that Fisher intended his two averages to express a distinction between correlation and causation. On this view, the average effect is a specific weighted average of the actual phenotypic changes that result from physically changing the allelic states of homologous genes. We show that the statistical and causal conceptions of the average effect, perceived as inconsistent by Falconer, can be reconciled if certain relationships between the genotype frequencies and non-additive residuals are conserved. There are certain theory-internal considerations favouring Fisher’s original formulation in terms of causality; for example, the frequency-weighted mean of the average effects equaling zero at each locus becomes a derivable consequence rather than an arbitrary constraint. More broadly, Fisher’s distinction between correlation and causation is of critical importance to gene-trait mapping studies and the foundations of evolutionary biology. PMID:23938113

We present a new approach for averaging in general relativity and cosmology. After a short review of the theory originally taken from the equivalence problem, we consider two ways of dealing with averaging based on Cartan scalars. We apply the theory for two different Lemaître-Tolman-Bondi models. In the first one, the correlation term behaves as a positive cosmological constant, in the second example, the leading correlation term behaves like spatial curvature. We also show the non-triviality of averaging for linearized monochromatic gravitational wave.

If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics. PMID:18999811

...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.

We present theoretical and experimental results showing that a thermal ghost imaging system can produce images of high quality even when it uses detectors so slow that they respond only to intensity-averaged (that is, ...

This paper highlights the effects of digitization in relation to the effective numerical range over which the signal to be averaged has to be resolved. The effect of quantization noise on the number of averaging cycles is assessed, and it is shown that for averaging weak signals such as the His bundle electrogram, high-resolution analogue-to-digital (A/D) conversion is required so as to prevent a substantial increase in the number of averaging cycles. Also the requirement for adequate low-pass filtering is discussed, and relationships between the numerical range of the A/D conversion process, the order of the low-pass filter, its cut-off frequency and sampling frequency are formulated. PMID:1890821

Instrument that measures an average temperature across a nonuniform temperature profile under steady-state conditions has been developed. The principle of operation is an application of the expansion of a solid material caused by a change in temperature.

We present high-precision measurements of pure element stable isotope pairs that demonstrate mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds was pragmatically based on elemental mass fractions. Our isotopic measurements establish that this approximation has no physical basis. We propose an alternative model to mass fraction averaging, based on the number of electrons or protons, termed "electron fraction," which predicts backscatter yield better than mass fraction averaging. We also present an improved backscatter (electron loss) factor based on a modified electron fraction average for the ZAF atomic number correction that provides a significant analytical improvement, especially where large atomic number corrections are required. PMID:12807672

Donovan, John J; Pingitore, Nicholas E; Westphal, Andrew
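The proposed "electron fraction" averaging can be contrasted with the traditional mass-fraction averaging in a few lines. The weighting scheme below (mass fraction times Z/A, renormalized) follows the abstract's description of counting electrons rather than mass; the example compound and numbers are illustrative:

```python
def mean_z(mass_fractions, Z, A, weighting="electron"):
    """Mean atomic number of a compound under mass-fraction or
    electron-fraction weighting (weights w_i * Z_i / A_i, renormalized)."""
    if weighting == "mass":
        w = list(mass_fractions)
    else:
        w = [wi * zi / ai for wi, zi, ai in zip(mass_fractions, Z, A)]
        total = sum(w)
        w = [wi / total for wi in w]
    return sum(wi * zi for wi, zi in zip(w, Z))

# example: PbS (galena); mass fractions follow from the atomic masses
A_pb, A_s = 207.2, 32.06
m = A_pb + A_s
w = [A_pb / m, A_s / m]
z_mass     = mean_z(w, Z=[82, 16], A=[A_pb, A_s], weighting="mass")
z_electron = mean_z(w, Z=[82, 16], A=[A_pb, A_s], weighting="electron")
```

Because heavy elements carry fewer electrons per unit mass (lower Z/A), the electron-fraction mean Z sits below the mass-fraction mean for a compound like PbS, which is the direction of correction the paper's measurements motivate.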

Averages are meant to convey the essential features of a set of data, or a random variable, in a simple and a concise way. Like any other summary, an average can be misleading, misused and abused; there is a fair amount of literature on this aspect of averages, the book by D. Huff (1973) being a particularly readable account. In one intuitive use of averages there is a source of error which can be quite serious and which is often not recognized. This source of error is illustrated below by a quality control problem, a project, an experiment and a game. A Taylor series expansion gives an insight into the nature of the error.

This study is dedicated to demonstrate the periodicities embedded in the averaged responses of chaotic systems with periodic excitations. Recent studies in the field of non-linear oscillations often found random-like responses for some deterministic non-linear systems with periodic excitations, which were then named “chaotic systems”. However, in this study, by discretizing the initial conditions on a chosen domain and averaging

C.-P. CHAO; Y. Kang; S.-S. Shyr; C.-C. Chou; M.-H. CHU

Nonlinear optical frequency convertors (harmonic generators and optical parametric oscillators) are reviewed with an emphasis on high average power performance and limitations. NLO materials issues and NLO device designs are discussed in reference to several emerging scientific, military, and industrial/commercial applications requiring ≈100 watt average power levels in the visible and infrared spectral regions. Research efforts required to enable practical ≈100 watt class NLO-based laser systems are identified.

The Contrast Transfer Function (CTF) of an image, which modulates images taken from a Transmission Electron Microscope (TEM), is usually determined from the radial average of the power spectrum of the image (Frank, J., Three-dimensional Electron Microscopy of Macromolecular Assemblies, Oxford University Press, Oxford, 2006). The CTF is primarily defined by the defocus. If the defocus estimate is accurate enough then it is possible to demodulate the image, which is popularly known as the CTF correction. However, it is known that the radial average is somewhat attenuated if the image is astigmatic (see Fernando, K.V., Fuller, S.D., 2007. Determination of astigmatism in TEM images. Journal of Structural Biology 157, 189-200) but this distortion due to astigmatism has not been fully studied or understood up to now. We have discovered the exact mathematical relationship between the radial averages of TEM images with and without astigmatism. This relationship is determined by a zeroth order Bessel function of the first kind and hence we can exactly quantify this distortion in the radial averages of signal and power spectra of astigmatic images. The argument to this Bessel function is similar to an aberration function (without the spherical aberration term) except that the defocus parameter is replaced by the differences of the defoci in the major and minor axes of astigmatism. The ill effects due this Bessel function are twofold. Since the zeroth order Bessel function is a decaying oscillatory function, it introduces additional zeros to the radial average and it also attenuates the CTF signal in the radial averages. Using our analysis, it is possible to simulate the effects of astigmatism in radial averages by imposing Bessel functions on idealized radial averages of images which are not astigmatic. We validate our theory using astigmatic TEM images. PMID:18662790

The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower bounds. The upper bound, when used as an approximation for the transmission probability, is unreasonably good and we conjecture that it is asymptotically exact.

Yin Lu; Christian Miniatura; Berthold-Georg Englert
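The distinction between the average transmission ⟨T⟩ and the log-averaged ("typical") value exp⟨ln T⟩ that motivates the paper can be seen in a toy Monte Carlo model. The per-gap transmission distribution below is an arbitrary illustration, not the slab stack's actual physics:

```python
import numpy as np

rng = np.random.default_rng(3)
# toy model: total transmission is a product of independent per-gap factors
n_gaps, n_samples = 20, 10_000
t = rng.uniform(0.5, 1.0, size=(n_samples, n_gaps))
T = np.prod(t, axis=1)

mean_T = np.mean(T)                       # average of the transmission itself
log_mean_T = np.exp(np.mean(np.log(T)))   # the usual log-averaged estimate
```

Because T is a product of random factors, its distribution is strongly skewed: rare high-transmission realizations pull ⟨T⟩ above exp⟨ln T⟩, which is exactly why averaging the probability itself, as the paper does, gives a different answer from averaging its logarithm.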

We consider averaged shelling and coordination numbers of aperiodic tilings. Shelling numbers count the vertices on radial shells around a vertex. Coordination numbers, in turn, count the vertices on coordination shells of a vertex, defined via the graph distance given by the tiling. For the Ammann-Beenker tiling, we find that coordination shells consist of complete shelling orbits, which enables us to calculate averaged coordination numbers for rather large distances explicitly. The relation to topological invariants of tilings is briefly discussed.

I analyze the maps recording the travel-time shifts caused by averaged plasma anomalies under an 'average supergranule', constructed by means of statistical averaging over 5582 individual supergranules with large divergence signals detected in two months of Helioseismic and Magnetic Imager Dopplergrams. By utilizing a three-dimensional validated time-distance inversion code, I measure a peak vertical velocity of 117 ± 2 m s^-1 at depths around 1.2 Mm in the center of the supergranule and a root-mean-square vertical velocity of 21 m s^-1 over the area of the supergranule. A discrepancy between this measurement and the measured surface vertical velocity (a few m s^-1) can be explained by the existence of the large-amplitude vertical flow under the surface of supergranules with large divergence signals, recently suggested by Duvall and Hanasoge.

Svanda, Michal, E-mail: michal@astronomie.cz [Astronomical Institute, Academy of Sciences of the Czech Republic (v.v.i.), Fricova 298, CZ-25165 Ondrejov (Czech Republic)

This paper presents the mathematical basis to optimize the recursive-least-squares (RLS) technique for real-time implementations on digital signal processors (DSPs). The proposed decoupled recursive-least-squares (DRLS) technique is optimized for real-time power system measurements, monitoring, protection, and diagnostics applications. The comparison between the proposed DRLS technique and the conventional RLS shows considerable improvement in terms of reducing the turnaround-time implementation on
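The decoupled DRLS variant is not specified in this snippet, so as a baseline the standard scalar recursive-least-squares recursion with a forgetting factor is sketched below; the data, forgetting factor, and initialization are illustrative assumptions:

```python
import numpy as np

def rls(xs, ys, lam=0.99, delta=100.0):
    """Scalar recursive least squares with forgetting factor lam:
    track theta in y = theta * x + noise, one sample at a time."""
    theta, P = 0.0, delta          # initial estimate and 'covariance'
    for x, y in zip(xs, ys):
        k = P * x / (lam + x * P * x)      # gain
        theta += k * (y - x * theta)       # innovation update
        P = (P - k * x * P) / lam          # covariance update
    return theta

rng = np.random.default_rng(6)
xs = rng.standard_normal(5000)
ys = 2.5 * xs + 0.05 * rng.standard_normal(5000)
theta_hat = rls(xs, ys)
```

The per-sample update is what makes RLS attractive for real-time DSP use; the paper's contribution is restructuring this recursion to cut the turnaround time of the implementation.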

Key figure from the 2007 Intergovernmental Panel on Climate Change (IPCC) report that shows changes in global average surface temperature, global average sea level, and Northern Hemisphere snow cover from as far back as 1850.

IPCC (Intergovernmental Panel on Climate Change) AR4 Synthesis Report

The vacuum averages of the energy densities of massless spin-1/2 and spin-1 fields and of the time component of the Bel-Robinson tensor are evaluated around a cosmic string. The results are transcribed into the Rindler wedge and correlated with other calculations.

Informational Confidence Bounds for Self-Normalized Averages and Applications. Aurélien Garivier. Email: aurelien.garivier@math.univ-toulouse.fr. Abstract: We present deviation bounds for self-normalized averages … One possible solution consists in constructing, at time t, a confidence interval based

An algorithm for automatic averaging of a magnetocardiogram (MCG) is described. Due to the relatively low signal-to-noise ratio in the MCG, the measured MCG data are often averaged to be analyzed. Generally, R-peaks are used as trigger points, which become anchors for superposition and we can obtain an averaged epoch eventually. However, we have to determine several parameters, such as the threshold magnitude for recognizing R-peak, the time-period of the epoch window, and which channel has dominant R-peaks. In order to determine these parameters automatically, we utilize the magnitude histogram of the root-mean-square waveform of all the channels. We can determine the threshold magnitudes for recognizing R-peaks and T-peaks, respectively, by using the characteristic distribution of the MCG signal histogram. Peak detection procedure using these thresholds records all the locations of the R-peaks and T-peaks, thus we get the average latencies of the R-T intervals and the R-R intervals. From these latencies, we estimate the full width of the epoch window. By adding a routine for processing double R-peaks, our algorithm could conduct the MCG averaging sequence fully automatically. The algorithm has been tested on recordings of 40 normal subjects and 15 patients suffering from myocardial ischemia, and we conclude that this algorithm reliably performs the averaging sequence. The MCG recordings are measured by our 62-channel planar gradiometer system in a magnetically shielded room. PMID:16015714

Kim, K; Lee, Y H; Kwon, H; Kim, J M; Kim, I S; Park, Y K
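The R-peak-triggered averaging pipeline described in the abstract above can be sketched in Python. This is a hedged illustration under simplifying assumptions: the paper selects thresholds automatically from the RMS magnitude histogram, whereas this sketch takes the threshold as a parameter, and all function names are invented here.

```python
import math

def rms_waveform(channels):
    """Root-mean-square across all channels at each time sample."""
    n = len(channels)
    return [math.sqrt(sum(ch[i] ** 2 for ch in channels) / n)
            for i in range(len(channels[0]))]

def detect_peaks(rms, threshold, refractory):
    """Local maxima above `threshold`, at least `refractory` samples apart."""
    peaks = []
    for i in range(1, len(rms) - 1):
        if rms[i] >= threshold and rms[i - 1] <= rms[i] > rms[i + 1]:
            if not peaks or i - peaks[-1] >= refractory:
                peaks.append(i)
    return peaks

def average_epochs(signal, triggers, pre, post):
    """Superpose windows around the trigger points and average them sample-wise."""
    windows = [signal[t - pre:t + post] for t in triggers
               if t - pre >= 0 and t + post <= len(signal)]
    return [sum(col) / len(windows) for col in zip(*windows)]
```

In the paper the epoch window width itself comes from the average R-T and R-R latencies; here `pre` and `post` are simply supplied by the caller.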

We are working on a composite thin-disk laser design that can be scaled as a source of high brightness laser power for tactical engagement and other high average power applications. The key component is a diffusion-bonded composite comprising a thin gain-medium and thicker cladding that is strikingly robust and resolves prior difficulties with high average power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high power rods or slabs, the one-dimensional nature of the cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress this last year included: extraction with high beam quality using a telescopic resonator, a heterogeneous thin film coating prescription that meets the unusual requirements demanded by this laser architecture, thermal management with our first generation cooler. Progress was also made in design of a second-generation laser.

The average of a color distribution has special significance for color coding (e.g. to estimate the illuminant) but how it depends on the visual representation (e.g. perceptual vs. cone-opponent) or nonlinearities (e.g. categorical coding) is unknown. We measured the perceived average of two colors shown alternated in spatial arrays. Observers adjusted the components until the average equaled a specified reference hue. Matches for red, blue-red, or yellow-green were consistent with the arithmetic mean chromaticity, while blue-green settings deviated toward blue. The settings show little evidence for categorical coding, and cannot be predicted from the scaled appearances of the individual components. PMID:24695184

Webster, Jacquelyn; Kay, Paul; Webster, Michael A.

For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
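The core computation in the study above, a misery index smoothed with a trailing moving average over the preceding decade, is easy to state in code. A minimal sketch (the function names are mine, not the authors'):

```python
def misery_index(inflation, unemployment):
    """Economic misery index per year: inflation rate plus unemployment rate."""
    return [i + u for i, u in zip(inflation, unemployment)]

def trailing_moving_average(series, window):
    """Average of the previous `window` values; the study's best fit used
    a window of 11 years (the preceding decade)."""
    return [sum(series[i - window:i]) / window
            for i in range(window, len(series) + 1)]
```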

This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 µA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and to strategies to construct a photogun that operates reliably at bias voltage > 350 kV.

Since its introduction, the normalized gain or the g-factor has been widely used in assessing students' performance in pre- and post-tests. The average g-factor can be calculated using either the average scores of the class or individual students' scores. In general, these two calculations produce different results. The nature of these two results is explored for several idealized situations. The results suggest that we may be able to utilize the difference between the two results to extract information on how the population may have changed as a result of instruction.
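The two ways of computing the average normalized gain contrasted above, g from class-average scores versus the mean of individual g values, can be sketched as follows. A minimal illustration; the function names and the 100-point maximum score are assumptions.

```python
def gain_of_averages(pre_scores, post_scores, max_score=100.0):
    """g computed from the class-average pre- and post-test scores."""
    pre = sum(pre_scores) / len(pre_scores)
    post = sum(post_scores) / len(post_scores)
    return (post - pre) / (max_score - pre)

def average_of_gains(pre_scores, post_scores, max_score=100.0):
    """Mean of each student's individual normalized gain."""
    gains = [(post - pre) / (max_score - pre)
             for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)
```

For the same class the two quantities generally differ, which is exactly the difference the abstract proposes to exploit.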

We present methods for extracting an average representation of respiratory dynamics from free-breathing lung MR images. Due to individual variations in respiration and the difficulties of real-time pulmonary imaging, time of image acquisition bears little physiologic meaning. Thus, we reparameterize each individual's expiratory image sequence with respect to normalized lung capacity (area, as a substitute for volume), where 1 represents

Averaged Probabilistic Relational Models. Daniel Wright. June 3, 2002. Abstract: Most real-world data is stored in relational form. In contrast, most statistical learning methods work with "flat" data ... information about the objects and relations, and present an approximation algorithm for reasoning about

Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness, and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724

FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.
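Bayesian model averaging as described above, weighting each model's prediction by its normalized evidence, reduces to a few lines. A sketch under the usual assumptions of equal model priors and known log evidences; the names are mine.

```python
import math

def model_averaged_prediction(predictions, log_evidences):
    """Combine per-model predictions, weighting each model by its posterior
    probability (proportional to its evidence, assuming equal priors)."""
    m = max(log_evidences)                           # stabilize the exponentials
    raw = [math.exp(le - m) for le in log_evidences]
    z = sum(raw)
    weights = [r / z for r in raw]
    averaged = sum(w * p for w, p in zip(weights, predictions))
    return averaged, weights
```

Because the evidence penalizes complexity as well as rewarding accuracy, the weights implement the accuracy-complexity trade-off the abstract emphasizes.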

AVERAGE DENSITIES AND LINEAR RECTIFIABILITY OF MEASURES. P. Mörters. Abstract: We show that a measure on R^d is linearly rectifiable if and only if the lower 1-density is positive and finite ... regularity of the measure µ is intimately related to the behaviour of the densities. We say µ is α-rectifiable

The atmospheric recycling of water is a very important global phenomenon because it not only refreshes the water but also redistributes it over land, oceans, rivers, and lakes. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on average, intercepts 1.74 ×…

Laser Fusion Energy and the High Average Power Program. John Sethian, Naval Research Laboratory. A program for Inertial Fusion Energy with lasers, direct drive targets and solid wall chambers. Lasers: DPSSL (LLNL) and KrF. Goals: 1. Develop technologies that can meet the fusion energy

this approximation holds is inversely proportional to the rate of change of the slow variables. The system is described ... systems with the classical Preisach nonlinearity. Key words: averaging technique, hysteresis, sweeping. Boole Centre for Research in Informatics and Department of Applied Mathematics, University College Cork

INTRODUCTION: To the average American, the image of the common eel conjures up a picture of a snake, with a shudder of revulsion. This unfortunate association of eels with snakes has deprived most Americans of ... of the world. Not only that, but the reptilian association of eels is completely erroneous. The American eel

The initial state parameters s₀ and w₀ are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981, 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…

Average solar wind electron properties based on a 1-year Vela 4 data sample, from May 1967 to May 1968, are presented. Frequency distributions of electron-to-ion temperature ratio, electron thermal anisotropy, and thermal energy flux are presented. The resulting evidence concerning heat transport in the solar wind is discussed.

During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…

Abstract. High-resolution angiography and total average blood flow measurements in the rat retina using optical coherence tomography (OCT) are presented. Recently, an en face integration method, which does not require explicit calculation of vessel angles, was introduced to calculate blood flow using Doppler OCT. Using this method, rapid two-dimensional scanning has been shown to quantify pulsatile flow. However, high scanning speeds sacrifice transverse sampling density and may lead to biased velocity estimates. Alternatively, we show here that by using a volumetric scanning protocol that asynchronously samples a single vessel with respect to the heartbeat, it is possible to obtain accurate time-averaged flow measurements, even without explicit information about the pulsatile waveform. Total average retinal blood flows calculated using either arteries or veins are comparable, supporting the accuracy of this method. PMID:23887484

Average cosmological invariant parameters (ACIPs) are calculated for six groups of BATSE cosmic gamma-ray bursts selected by their peak fluxes on the 1.024 s timescale. The ACIPs represent the average temporal and spectral properties of these events equally in the observer frame of reference and in the co-moving frames of outbursting emitters. The parameters are determined separately for rise fronts and for back slopes of bursts, defined as the time profiles before and after the main peaks, respectively. The ACIPs for the rise fronts are found to be different for different intensity groups, while the ACIPs for the back slopes show no significant dependence on intensity. We conclude that emitters of bursts manifest standard average properties only during the back slopes of bursts.

Mitrofanov, Igor G.; Anfimov, Dimitrij S.; Litvak, Maxim L.; Briggs, Michael S.; Paciesas, W. S.; Pendleton, Geoffrey N.; Preece, Robert D.; Meegan, Charles

Imaging time can be reduced using despeckled tomograms, which have image metrics similar to those obtained by averaging several low-speed tomograms or many high-speed tomograms. Quantitative analysis was used to compare the performance of two speckle denoising approaches, algorithmic despeckling and frame averaging, as applied to retinal OCT images. Human retinal tomograms were acquired from healthy subjects with a research-grade 1060 nm spectral domain UHROCT system with 5 µm axial resolution in the retina. Single cross-sectional retinal tomograms were processed with a novel speckle denoising algorithm and compared with frame-averaged retinal images acquired at the same location. Image quality metrics such as the image SNR and contrast-to-noise ratio (CNR) were evaluated for both cases.

Eichel, Justin A.; Lee, Donghyun D.; Wong, Alexander; Fieguth, Paul W.; Clausi, David A.; Bizheva, Kostadinka K.
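The image quality comparison above rests on simple metrics. The abstract does not give the authors' exact SNR/CNR definitions, so a common formulation is assumed in this sketch, along with a pixel-wise frame averager:

```python
from statistics import mean, pstdev

def contrast_to_noise(roi_pixels, background_pixels):
    """CNR: difference of mean intensities over the background noise level."""
    return (mean(roi_pixels) - mean(background_pixels)) / pstdev(background_pixels)

def frame_average(frames):
    """Pixel-wise average of N repeated tomograms; uncorrelated speckle
    noise is reduced roughly as sqrt(N)."""
    return [sum(px) / len(frames) for px in zip(*frames)]
```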

A signal analysis procedure is described for obtaining time interval parameters of the fetal electrocardiogram as recorded from the maternal abdomen. By applying averaging to the fetal electrocardiogram, the PR interval, QRS duration, and QT interval were quantified. This technique, which includes the subtraction of an averaged maternal ECG waveform using a cross-correlation function and a fast Fourier transform algorithm, enables the detection of all the fetal QRS complexes in spite of their coincidence with the maternal ECGs. Results obtained from 21 pregnant women at the gestational age of 32-41 weeks and an example of a recording with fetal premature ventricular contractions are presented. This method shows an important improvement with respect to detection of fetal heart rate and detection of arrhythmia disturbances in the fetal ECG. The averaging procedure can be used to evaluate long-lived alterations in the fetal ECG. PMID:2208985
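The maternal-ECG subtraction step, aligning an averaged maternal template by cross-correlation and removing it so that the fetal complexes survive in the residual, can be sketched as below. This is a toy illustration: the paper also uses an FFT-based implementation, and every name here is invented.

```python
def best_lag(segment, template):
    """Lag at which the (unnormalized) cross-correlation with the template peaks."""
    n = len(template)
    scores = [sum(segment[lag + i] * template[i] for i in range(n))
              for lag in range(len(segment) - n + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

def subtract_maternal(signal, template, qrs_locations, search=3):
    """Subtract an averaged maternal ECG waveform around each maternal QRS,
    fine-aligned by cross-correlation; the residual exposes fetal complexes."""
    out = list(signal)
    n = len(template)
    for loc in qrs_locations:
        start = loc - search          # search window around the nominal QRS
        segment = out[start:start + n + 2 * search]
        lag = best_lag(segment, template)
        for i in range(n):
            out[start + lag + i] -= template[i]
    return out
```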

Time-averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row, the first-stage blade row, and the second-stage vane row of the Rocketdyne Space Shuttle Main Engine two-stage fuel-side turbine. Unsteady pressure envelope measurements for the first blade are also reported. These measurements were made at 10 percent, 50 percent, and 90 percent span on both the pressure and suction surfaces of the first-stage components. Additional Stanton number measurements were made on the first-stage blade platform, blade tip, and shroud, and at 50 percent span on the second vane. A shock tube was used as a short-duration source of heated and pressurized air to which the turbine was subjected. Platinum thin-film heat flux gages were used to obtain the heat flux measurements, while miniature silicon-diaphragm flush-mounted pressure transducers were used to obtain the pressure measurements. The first-stage vane Stanton number distributions are compared with predictions obtained using a version of STAN5 and a quasi-3D Navier-Stokes solution. This same quasi-3D N-S code was also used to obtain predictions for the first blade and the second vane.

The technology base formed by the development of high peak power simulators, laser drivers, FELs, and ICF drivers from the early 1960s through the late 1980s is being extended to high average power short-pulse machines with the capabilities of supporting new types of manufacturing processes and performing new roles in environmental cleanup applications. This paper discusses a process for identifying and developing possible commercial applications, specifically those requiring very high average power levels of hundreds of kilowatts to perhaps megawatts. The authors discuss specific technology requirements and give examples of application development efforts. The application development work is directed at areas that can possibly benefit from the high specific energies attainable with short-pulse machines.

In this paper we consider the relation between the volume deceleration parameter obtained within the Buchert averaging scheme and the deceleration parameter derived from supernova observation. This work was motivated by recent findings that showed that there are models which despite having Λ = 0 have volume deceleration parameter q_vol < 0. This opens the possibility that back-reaction and averaging effects may be used as an interesting alternative explanation to the dark energy phenomenon. We have calculated q_vol in some Lemaître-Tolman models. For those models which are chosen to be realistic and which fit the supernova data, we find that q_vol > 0, while those models which we have been able to find which exhibit q_vol < 0 turn out to be unrealistic. This indicates that care must be exercised in relating the deceleration parameter to observations.

We sketch the average behaviour of the temperatures and densities of the main components of the ΛCDM universe after inflation. It is modelled as a perfect fluid with dark energy associated with the macroscopic effect of conformal variations of the metric. The main events of the thermal evolution are studied, such as the effect of particle annihilations and decoupling, and the transitions between the eras dominated by different entities. Estimates of the average present epoch temperature of baryonic matter and dark matter composed of neutralinos are given. We study the eventual presence of a sterile neutrino component and find that the sterile neutrino density at the epoch of primordial nucleosynthesis is in agreement with expectations when their evolution starts, at the end of inflation, in temperature equilibrium with the rest of the universe.

jModelTest is a new program for the statistical selection of models of nucleotide substitution based on "Phyml" (Guindon and Gascuel 2003. A simple, fast, and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst Biol. 52:696-704.). It implements 5 different selection strategies, including "hierarchical and dynamical likelihood ratio tests," the "Akaike information criterion," the "Bayesian information criterion," and a "decision-theoretic performance-based" approach. This program also calculates the relative importance and model-averaged estimates of substitution parameters, including a model-averaged estimate of the phylogeny. jModelTest is written in Java and runs under Mac OSX, Windows, and Unix systems with a Java Runtime Environment installed. The program, including documentation, can be freely downloaded from the software section at http://darwin.uvigo.es. PMID:18397919
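The model-averaged estimates jModelTest reports follow the standard information-theoretic recipe: Akaike weights w_i ∝ exp(−ΔAIC_i/2), then a weighted sum of per-model parameter estimates. A sketch of that recipe (not jModelTest's actual code; function names are mine):

```python
import math

def akaike_weights(aic_scores):
    """Akaike weights: w_i proportional to exp(-dAIC_i / 2), normalized."""
    best = min(aic_scores)
    raw = [math.exp(-(a - best) / 2.0) for a in aic_scores]
    z = sum(raw)
    return [r / z for r in raw]

def model_averaged_parameter(estimates, aic_scores):
    """Model-averaged estimate of a parameter across candidate models."""
    w = akaike_weights(aic_scores)
    return sum(wi * e for wi, e in zip(w, estimates))
```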

We study two-person stochastic games on a Polish state and compact action spaces and with average payoff criterion under a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of Nash equilibrium in stationary strategies is established under certain separability conditions.

Ghosh, M. K. [Department of Mathematics, Indian Institute of Science, Bangalore 560012 (India); Bagchi, A. [Department of Applied Mathematics, University of Twente, P.O. Box 217, 7500 AE Enschede (Netherlands)

the multiple-scale analysis, investigate the behaviour of solutions of the Mathieu equation ẍ + (a − 2q cos 2t)x = 0. ... For the equation df/dt − f = f² e^(−t), with ε ≪ 1 and initial condition f(0) = 1, determine an approximation by using the averaging method. 4. For the differential equation d²y/dt² + y + |y′|y′ = 0 with initial conditions y(0) = 1

A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…

The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
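The averaging scheme described above, combining the same calendar month across all years so that a cell is filled wherever any year has valid data, can be sketched in pure Python, with None marking invalid cells. The real processing works on gridded images; here each month is a flat list of cells, and the function name is invented.

```python
def monthly_climatology(yearly_images):
    """Average the same calendar month across all years, cell by cell,
    skipping missing values (None); cells missing in every year stay None."""
    months = len(yearly_images[0])
    out = []
    for m in range(months):
        # Gather each cell's values across all years.
        cells = list(zip(*(img[m] for img in yearly_images)))
        out.append([
            (sum(v for v in vals if v is not None) /
             sum(1 for v in vals if v is not None))
            if any(v is not None for v in vals) else None
            for vals in cells
        ])
    return out
```

Averaging across years in this way both reduces data gaps and suppresses interannual variability, as the abstract notes.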

A numerical procedure is presented for the tabulation of average run lengths (ARLs) of geometric moving average charts. Both one- and two-sided ARLs are given for various settings of the control limits, smoothing constant, and shift in the nominal level of the process mean. Where comparison is possible, the tabulated ARLs are in agreement with those obtained by Roberts [3], [4].
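A geometric (exponentially weighted) moving average chart and a Monte Carlo estimate of its average run length can be sketched as below. This is an illustration, not the paper's tabulation method: the smoothing constant, limit width, and use of the asymptotic EWMA standard deviation are my assumptions.

```python
import math
import random

def ewma_run_length(shift, lam=0.25, limit=3.0, rng=None, max_steps=100000):
    """Steps until the geometric moving average z_t = lam*x_t + (1-lam)*z_{t-1}
    exits +/- limit*sigma_z, for N(shift, 1) observations."""
    rng = rng or random.Random(0)
    sigma_z = math.sqrt(lam / (2.0 - lam))   # asymptotic EWMA std dev (sigma_x = 1)
    z = 0.0
    for t in range(1, max_steps + 1):
        z = lam * rng.gauss(shift, 1.0) + (1.0 - lam) * z
        if abs(z) > limit * sigma_z:
            return t
    return max_steps

def average_run_length(shift, trials=200, **kw):
    """Monte Carlo ARL: mean run length over repeated simulated sequences."""
    rng = random.Random(42)
    return sum(ewma_run_length(shift, rng=rng, **kw) for _ in range(trials)) / trials
```

A shifted process should be flagged far sooner than an in-control one, which is the behavior the tabulated ARLs quantify.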

The purpose of this study was to determine if there were differences in the learning style preferences of gifted, average-ability, and special needs students. One hundred seventy-two elementary school students from two large school jurisdictions from a large urban area in Western Canada were administered the Dunn, Dunn, & Price (1985) Learning Style Inventory (LSI). A multivariate analysis of variance

Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

This study examined effects of contextualized math instruction on problem-solving performance of 17 middle school remedial students and 49 average-achieving prealgebra students. Results showed that both remedial and average students receiving contextualized-problem instruction outperformed comparison groups receiving word-problem instruction.…

In this work, the molar volume thermal expansion coefficient of 59 room temperature ionic liquids is compared with their van der Waals volume Vw. A regular correlation can be discerned between the two quantities. An average free volume model, which treats the particles as hard cores with attractive forces, is proposed to explain this correlation. A combination of free volume and the Lennard-Jones potential is applied to explain the physical phenomena of liquids. Some typical simple liquids (inorganic, organic, metallic, and salt) are introduced to verify this hypothesis. Good agreement between the theoretical predictions and experimental data is obtained.

An average resonance capture study of 124Te was carried out by bombarding samples of 123Te with 2- and 24-keV neutron beams. The complete set of 0+, 1+, 2+ states disclosed by the experiment is consistent with the data of Robinson, Hamilton, and Snelling, demonstrating that there are no undetected states of these spins (especially 0+ states) below about 2500 keV. In particular, proposed 0+ levels at 1156 and 1290 keV are ruled out. This impacts various attempted interpretations in terms of intruder states, U(5), and O(6) symmetries.

Both absolute and differential temperature measurements were simultaneously performed as a function of time for a pixel on a high-temperature, multi-spectral, spatially and temporally varying infrared target simulator. A scanning laser beam was used to maintain a pixel at an on-the-average constant temperature of 520 K. The laser refresh rate of up to 1 kHz resulted in small-amplitude temperature fluctuations with a peak-to-peak amplitude of less than 1 K. The experimental setup to accurately measure the differential and the absolute temperature as a function of time is described.

Perturbative gravity about global de Sitter space is subject to linearization-stability constraints. Such constraints imply that quantum states of matter fields couple consistently to gravity only if the matter state has vanishing de Sitter charges; i.e., only if the state is invariant under the symmetries of de Sitter space. As noted by Higuchi, the usual Fock spaces for matter fields contain no de Sitter-invariant states except the vacuum, though a new Hilbert space of de Sitter-invariant states can be constructed via so-called group-averaging techniques. We study this construction for free scalar fields of arbitrary positive mass in any dimension, and for linear vector and tensor gauge fields in any dimension. Our main result is to show in each case that group averaging converges for states containing a sufficient number of particles. We consider general N-particle states with smooth wavefunctions, though we obtain somewhat stronger results when the wavefunctions are finite linear combinations of de Sitter harmonics. Along the way we obtain explicit expressions for general boost matrix elements in a familiar basis.

In this paper, we investigate the diameter and average path length (APL) of the Sierpinski pentagon based on its recursive construction and self-similar structure. We find that the diameter of the Sierpinski pentagon is just the shortest path length between two nodes of generation 0. Deriving and solving the linear homogeneous recurrence relation the diameter satisfies, we obtain a rigorous solution for the diameter. We also obtain an approximate solution for the APL of the Sierpinski pentagon; both the diameter and the APL grow approximately as a power-law function of the network order N(t), with exponent ln(1+√3)/ln(5). Although the solution for the APL is approximate, it is trusted because we have calculated all terms of the APL exactly except for the compensation Δ_t of the total distance between non-adjacent branches Λ_t^{1,3}, which is obtained approximately by least-squares curve fitting. The compensation Δ_t is only a small part of the total distance between non-adjacent branches and has little effect on the APL. Furthermore, using data obtained by iteration to test the fitting results, we find the relative error for Δ_t is less than 10^-7; hence the approximate solution for the average path length is almost exact.

An approach for synthesizing averaged circuit models for switching converters that realize their respective state-space averaged models is presented. The method proceeds in a systematic fashion by determining appropriate averaged circuit elements that are consistent with the averaged circuit waveforms. The averaged circuit models that are obtained are syntheses of the state-space averaged models for the underlying switched circuits.

A bounce-averaged Monte Carlo operator is presented that simulates bounce-averaged perturbative Lorentz pitch angle scattering of particles in toroidal plasmas, in particular a tokamak. In conjunction with bounce-averaged expressions for the deterministic motion, this operator allows a quick and inexpensive simulation on time scales long compared to a bounce time. An analytically tractable model of transport due to toroidal magnetic field ripple is described.

We consider the classical problem of online job scheduling on uniprocessor and multiprocessor machines. For a given job, we measure the quality of service provided by an algorithm by the stretch of the job, which is defined as the ratio of the amount of time that the job spends in the system to the processing time of the job.
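
The stretch metric defined above is straightforward to express in code (a minimal sketch; the function name is ours):

```python
def stretch(completion_time, release_time, processing_time):
    """Stretch of a job: total time spent in the system divided by the
    job's processing time (>= 1 for any completed job)."""
    return (completion_time - release_time) / processing_time
```

For example, a job released at time 0, completed at time 10, and requiring 2 units of work has stretch 5; a job that runs without delay has stretch 1.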

S. Muthukrishnan; Rajmohan Rajaraman; Anthony Shaheen; Johannes E. Gehrke

In this paper, a method to get the best representation of a speech motion from several repetitions is presented. Each repetition is a representation of the same speech captured at different times by sequence of ultrasound images and is composed of a set of 2D spatio-temporal contours. These 2D contours in different repetitions are time aligned…

Determination of the instantaneous local average particle density of a gas-particle suspension requires satisfying both the time-scale relation and the volume-scale relation, or its continuum counterpart, of time averaging. This procedure was validated by comparing simultaneous velocity and mass-flux measurements with laser phase-Doppler measurements.

In this paper we consider the relation between the volume deceleration parameter obtained within the Buchert averaging scheme and the deceleration parameter derived from supernova observations. This work was motivated by recent findings showing that there are models which, despite $\\Lambda=0$, have volume deceleration parameter $q^{vol} < 0$, suggesting averaging effects as a possible alternative explanation of the dark energy phenomenon. We have calculated $q^{vol}$ in some Lema\\^itre--Tolman models. For those models which are chosen to be realistic and which fit the supernova data, we find that $q^{vol} > 0$, while those models which we have been able to find which exhibit $q^{vol} < 0$ turn out to be unrealistic. This indicates that care must be exercised in relating the deceleration parameter to observations.

In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of such integrated quantities, using Bessel-weighting and rapidity cut-offs, with the conventional definitions as limiting cases. The regularized quantities are given in terms of integrals over the TMDs of interest that are well-defined and moreover have the advantage of being amenable to lattice evaluations.

We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity; that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.

The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by the accumulation of the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it can preserve both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition. PMID:24895648
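
The accumulation step described above can be sketched as follows (our reading of the AGDI construction, with synthetic silhouettes; the function name and array shapes are our own):

```python
import numpy as np

def average_gait_differential_image(silhouettes):
    """Accumulate absolute differences between adjacent silhouette frames
    and average them over the sequence (a sketch of the AGDI)."""
    frames = np.asarray(silhouettes, dtype=float)   # shape (T, H, W)
    diffs = np.abs(np.diff(frames, axis=0))         # (T-1, H, W)
    return diffs.mean(axis=0)
```

A static sequence yields an all-zero AGDI, while regions that change between frames accumulate weight, which is how the feature preserves kinetic information.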

We have completed a database of average-power, laser-induced damage thresholds at 1064 nm on a variety of materials. Measurements were made with a newly constructed laser to provide design input for moderate and high average-power laser projects. The measurements were conducted with 16-ns pulses at pulse-repetition frequencies ranging from 6 to 120 Hz. Samples were typically irradiated for times ranging from a fraction of a second up to 5 minutes (36,000 shots). We tested seven categories of samples which included antireflective coatings, high reflectors, polarizers, single and multiple layers of the same material, bare and overcoated metal surfaces, bare polished surfaces, and bulk materials. The measured damage threshold ranged from <1 J/cm² for some metals to >46 J/cm² for a bare polished glass substrate. 4 refs., 7 figs., 1 tab.

A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
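
The constrained-simulation route mentioned last (average force at discrete locations along the coordinate, then integrate) can be illustrated on a toy one-dimensional profile, where the mean constrained force reduces to minus the gradient of the underlying potential. Everything here (the function names and the quadratic example) is our own illustration, not the authors' systems:

```python
import numpy as np

def free_energy_profile(mean_force, xs):
    """Thermodynamic-integration-style estimate: integrate minus the mean
    force along the coordinate with the trapezoid rule."""
    f = -np.array([mean_force(x) for x in xs])
    steps = 0.5 * (f[1:] + f[:-1]) * np.diff(xs)
    return np.concatenate(([0.0], np.cumsum(steps)))

# toy check: potential U(x) = 2 x^2, so the mean constrained force is -4x
xs = np.linspace(0.0, 1.0, 201)
profile = free_energy_profile(lambda x: -4.0 * x, xs)
```

The end-to-end change profile[-1] recovers U(1) − U(0) = 2, since the trapezoid rule is exact for a linear integrand.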

Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

The average structure of the monoclinic modification of lazurite Ca$_{1.26}$Na$_{6.63}$K$_{0.04}$[Al$_6$Si$_6$O$_{24}$](SO$_4$)$_{1.53}$S$_{0.99}$Cl$_{0.05}$ (discovered in the Lake Baikal region), incommensurately modulated along the c axis, is solved by the single-crystal X-ray diffraction method. The unit-cell parameters are a = 9.069(1) Å, b = 12.868(1) Å, c = 12.872(1) Å, γ = 90.19(1)°, sp. gr. Pa, R = 6.9%, 2057 reflections. The AlO$_4$ and SiO$_4$ tetrahedra form a partially ordered framework. The positions in the cavities of the framework are split and randomly occupied by Na and Ca atoms and the SO$_4$, S$_2$, S$_3$, and SO$_2$ anionic groups. The structure of the mineral is compared with the superstructure of triclinic lazurite. Conclusions are drawn about the causes of the incommensurate modulation in monoclinic and other lazurites.

Bolotina, N. B.; Rastsvetaeva, R. K. [Russian Academy of Sciences, Shubnikov Institute of Crystallography (Russian Federation)], E-mail: rast@ns.crys.ras.ru; Sapozhnikov, A. N. [Russian Academy of Sciences, Vinogradov Institute of Geochemistry, Siberian Division (Russian Federation)

Average power effects relative to the operation of parametric oscillators and amplifiers have been calculated. Temperature gradients have been calculated for both radial and longitudinal heat extraction. In many instances, the thermal load on a parametric oscillator is higher than the thermal load on a parametric amplifier with the same pump power. Having one or both of these wavelengths resonant increases the chances that a generated photon will be absorbed by the nonlinear crystal. Temperature profiles and thermal diffusion time constants have been calculated for Gaussian beams, given the heat-deposition rate. With radial heat extraction the temperature profile can be expressed in a power series or approximated by a Gaussian distribution function.

Many aspects of the natural course of HIV-1 infection remain unclear, despite important efforts towards understanding its long-term dynamics. Using a scaling approach that places progression markers (viral load, CD4+, CD8+) of many individuals on a single average natural course of disease progression, we introduce the concepts of inter-individual scaling and time scaling. Our quantitative assessment of the natural course of HIV-1 infection indicates that the dynamics of evolution for individuals who developed AIDS (opportunistic infections) differ from those for individuals who did not develop AIDS. This means that the rate of progression is not relevant for the infection evolution.

Glycogen is conventionally viewed as an energy reserve that can be rapidly mobilized for ATP production in higher organisms. However, several studies have noted that glycogen with short average chain length in some bacteria is degraded very slowly. In addition, slow utilization of glycogen is correlated with bacterial viability; that is, the slower the glycogen breakdown rate, the longer the bacterial survival time in the external environment under starvation conditions. We call this a durable energy storage mechanism (DESM). In this review, evidence from microbiology, biochemistry, and molecular biology is assembled to support the hypothesis of glycogen as a durable energy storage compound. One method for testing the DESM hypothesis is proposed.

The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.

Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.

A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
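
The seasonal Yule-Walker moment estimation used as the comparison baseline above can be sketched for the simplest case, a period-2 periodic AR(1) model (coefficients, seed, and sample size below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
phi = [0.6, 0.3]          # illustrative period-2 AR(1) coefficients
n = 100_000
x = np.zeros(n)
for t in range(1, n):
    # the active coefficient alternates with the season t mod 2
    x[t] = phi[t % 2] * x[t - 1] + rng.standard_normal()

# seasonal Yule-Walker (moment) estimate of each seasonal coefficient
est = []
for s in range(2):
    t_idx = np.arange(1, n)
    t_idx = t_idx[t_idx % 2 == s]
    est.append(np.sum(x[t_idx] * x[t_idx - 1]) / np.sum(x[t_idx - 1] ** 2))
```

With this much data the two seasonal coefficients are recovered to within a few percent, which is the kind of moment estimate the likelihood-based algorithm of the abstract is compared against.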

We demonstrate second-harmonic vortex generation in atmospheric-pressure air using a tightly focused femtosecond laser beam. The circularly polarized ring-shaped beam of the second harmonic is generated in the air by a fundamental beam of the same circular polarization, while a linearly polarized beam produces a two-lobe beam at the second-harmonic frequency. The achieved normalized conversion efficiency and average second-harmonic power are two orders of magnitude higher than those previously reported and can be increased up to 20 times by an external gas flow. We demonstrate that the frequency doubling originates from the gradient of photoexcited free electrons created by the ponderomotive force.

Beresna, Martynas; Kazansky, Peter G. [Optoelectronics Research Centre, University of Southampton, Southampton SO17 1BJ (United Kingdom); Svirko, Yuri [Department of Physics and Mathematics, University of Joensuu, Joensuu FI-80101 (Finland); Barkauskas, Martynas; Danielius, Romas [Light Conversion Ltd., Vilnius, LT-10223 (Lithuania)

[figure removed for brevity, see original site] Figure 1

This image shows average temperatures in April, 2003, observed by AIRS at an infrared wavelength that senses either the Earth's surface or any intervening cloud. Similar to a photograph of the planet taken with the camera shutter held open for a month, stationary features are captured while those obscured by moving clouds are blurred. Many continental features stand out boldly, such as our planet's vast deserts, and India, now at the end of its long, clear dry season. Also obvious are the high, cold Tibetan plateau to the north of India, and the mountains of North America. The band of yellow encircling the planet's equator is the Intertropical Convergence Zone (ITCZ), a region of persistent thunderstorms and associated high, cold clouds. The ITCZ merges with the monsoon systems of Africa and South America. Higher latitudes are increasingly obscured by clouds, though some features like the Great Lakes, the British Isles and Korea are apparent. The highest latitudes of Europe and Eurasia are completely obscured by clouds, while Antarctica stands out cold and clear at the bottom of the image.

The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.

Many researchers have described cognitive differences between gifted and average-performing children. Regarding strategy use, the gifted advantage is often associated with differences such as greater knowledge of strategies, quicker problem solving, and the ability to use strategies more appropriately. The current study used microgenetic methods…

…the average weight of Connecticut River fish was considerably less (Table 1). The difference … in the Connecticut River basin. Fisheries (Bethesda) 7(6): 2-11. Potter, I. C., F. W. H. Beamish, and B. G. Holdsworth. University of Massachusetts, Amherst, MA 01008. AN IMPROVED OTTER SURFACE…

The possibility of using a quantum system to simulate another one has long been recognized as an important research direction in quantum information and quantum computing. In Ref. [J. Li et al., Nat. Commun. 4, 1420 (2013)], a superconducting circuit (a transmon) was employed to simulate an NMR (nuclear magnetic resonance) effect known as motional averaging. In this paper we analyze the results of numerical integration of the time evolution of the density matrix of a qubit with random frequency fluctuations, and compare the results with those obtained by the method of quantum trajectories. We show that both methods produce equivalent results, although some differences may appear in the range of intermediate modulation frequencies.
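
The trajectory-averaging side of such a comparison can be mimicked classically: average exp(i·phase) over many realizations of a randomly jumping detuning, and the motional-averaging effect (slower dephasing for faster modulation) appears. All parameters below are illustrative, not taken from the cited experiment:

```python
import numpy as np

rng = np.random.default_rng(1)

def averaged_coherence(jump_rate, xi=1.0, dt=0.01, steps=500, n_traj=400):
    """Trajectory average of exp(i * accumulated phase) for a qubit whose
    detuning jumps between +xi and -xi at the given rate (telegraph noise)."""
    phase = np.zeros(n_traj)
    sign = rng.choice([-1.0, 1.0], size=n_traj)
    coherence = []
    for _ in range(steps):
        flips = rng.random(n_traj) < jump_rate * dt
        sign[flips] *= -1.0
        phase += sign * xi * dt
        coherence.append(abs(np.mean(np.exp(1j * phase))))
    return np.array(coherence)

slow = averaged_coherence(jump_rate=0.1)   # slow modulation: strong dephasing
fast = averaged_coherence(jump_rate=50.0)  # fast modulation: motional narrowing
```

When the modulation is much faster than the detuning amplitude, the accumulated phase stays small and the averaged coherence decays far more slowly, which is the motional-averaging signature.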

J. Li; M. P. Silveri; K. S. Kumar; J. -M. Pirkkalainen; A. Vepsäläinen; W. C. Chien; J. Tuorila; M. A. Sillanpää; P. J. Hakonen; E. V. Thuneberg; G. S. Paraoanu

Statistical studies of BATSE gamma-ray bursts are presented: we average the time profiles, aligning the events at their highest peaks. Using the time histories in different energy channels, we summarize the general features of the average intensity and spectral evolution of gamma-ray bursts (GRBs) and compare the average evolution of bright and dim events. While no differences are found between the average intensity histories, the average hardness-ratio histories demonstrate a hardness/brightness correlation. The observed lack of differences between the intensity histories of bright and dim bursts is incompatible with cosmological time dilation effects predicted by simple cosmological models. The combined results of no intensity-history difference but a hardness-history difference between bright and dim bursts place constraints on galactic models for the origin of GRBs.
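
The peak-alignment step described above can be sketched as follows (a simplified version with made-up profiles; real burst time histories would need background subtraction and binning choices not shown here):

```python
import numpy as np

def peak_aligned_average(profiles):
    """Shift each time profile so its highest bin lands at a common index,
    then average bin-wise over however many profiles cover each bin."""
    profiles = [np.asarray(p, dtype=float) for p in profiles]
    center = max(len(p) for p in profiles)
    total = np.zeros(2 * center)
    counts = np.zeros(2 * center)
    for p in profiles:
        shift = center - int(np.argmax(p))
        total[shift:shift + len(p)] += p
        counts[shift:shift + len(p)] += 1
    return np.where(counts > 0, total / np.maximum(counts, 1), 0.0)
```

After alignment the average profile peaks at the common index, so the shapes of rise and decay can be compared across brightness classes without smearing by peak position.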

Mitrofanov, Igor G.; Chernenko, Anton M.; Pozanenko, Alexei S.; Briggs, Michael S.; Paciesas, William S.; Fishman, Gerald J.; Meegan, Charles A.; Sagdeev, Roald Z.

We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'.

Khrennikov, Andrei [International Center for Mathematical Modeling in Physics and Cognitive Sciences, University of Vaexjoe, Vaexjoe S-35195 (Sweden)

We begin by placing the generalized Lagrangian mean (GLM) equations for a compressible adiabatic fluid into the Euler-Poincare (EP) variational framework of fluid dynamics, for an averaged Lagrangian. This is the Lagrangian averaged Euler-Poincare (LAEP) theorem. Next, we derive a set of approximate small amplitude GLM equations (glm equations) at second order in the fluctuating displacement of a Lagrangian trajectory from its mean position. These equations express the linear and nonlinear back-reaction effects on the Eulerian mean fluid quantities by the fluctuating displacements of the Lagrangian trajectories in terms of their Eulerian second moments. The derivation of the glm equations uses the linearized relations between Eulerian and Lagrangian fluctuations, in the tradition of Lagrangian stability analysis for fluids. The glm derivation also uses the method of averaged Lagrangians, in the tradition of wave, mean flow interaction. Next, the new glm EP motion equations for incompressible ideal fluids are compared with the Euler-alpha turbulence closure equations. An alpha model is a GLM (or glm) fluid theory with a Taylor hypothesis closure. Such closures are based on the linearized fluctuation relations that determine the dynamics of the Lagrangian statistical quantities in the Euler-alpha equations. Thus, by using the LAEP theorem, we bridge between the GLM equations and the Euler-alpha closure equations, through the small-amplitude glm approximation in the EP variational framework. We conclude by highlighting a new application of the GLM, glm, and alpha-model results for Lagrangian averaged ideal magnetohydrodynamics. (c) 2002 American Institute of Physics. PMID:12779582

This study investigates whether average political bloggers engage in four distinct activities: informing readers, checking the media, engaging in political advocacy, and gathering money for charitable causes. Findings show that most bloggers inform their readers about articles in the media or draw attention to key posts on other blogs. Somewhat less frequently, most bloggers write posts that detect errors…

Background Since the burden of neuropathic pain (NeP) increases with pain severity, it is important to characterize and quantify pain severity when identifying NeP patients. This study evaluated whether painDETECT, a screening questionnaire to identify patients with NeP, can distinguish pain severity. Materials and methods Subjects (n=614, 55.4% male, 71.8% white, mean age 55.5 years) with confirmed NeP were identified during office visits to US community-based physicians. The Brief Pain Inventory – Short Form stratified subjects by mild (score 0–3, n=110), moderate (score 4–6, n=297), and severe (score 7–10, n=207) average pain. Scores on the nine-item painDETECT (seven pain-symptom items, one pain-course item, one pain-irradiation item) range from −1 to 38 (worst NeP); the seven-item painDETECT scores (only pain symptoms) range from 0 to 35. The ability of painDETECT to discriminate average pain-severity levels, based on the average pain item from the Brief Pain Inventory – Short Form (0–10 scale), was evaluated using analysis of variance or covariance models to obtain unadjusted and adjusted (age, sex, race, ethnicity, time since NeP diagnosis, number of comorbidities) mean painDETECT scores. Cumulative distribution functions on painDETECT scores by average pain severity were compared (Kolmogorov–Smirnov test). Cronbach’s alpha assessed internal consistency reliability. Results Unadjusted mean scores were 15.2 for mild, 19.8 for moderate, and 24.0 for severe pain for the nine items, and 14.3, 18.6, and 22.7, respectively, for the seven items. Adjusted nine-item mean scores for mild, moderate, and severe pain were 17.3, 21.3, and 25.3, respectively; adjusted seven-item mean scores were 16.4, 20.1, and 24.0, respectively. All pair-wise comparisons of scores between pain-severity groups showed sizable and statistically significant differences (P<0.0001). Cumulative distribution functions showed distinct separation between severity levels (P<0.0001). 
Cronbach’s alphas were 0.76 and 0.80 for the nine- and seven-item scales, respectively. Conclusion This study provides strong psychometric evidence on the validity and reliability of painDETECT for distinguishing average pain severity in patients with NeP.

Cappelleri, Joseph C; Bienen, E Jay; Koduru, Vijaya; Sadosky, Alesia

Averaging is a common way to alleviate errors and random fluctuations in measurements and to smooth out data. Averaging also provides a way to merge structured data in a smooth manner. The present paper describes an algorithm to compute averages on matrix Lie groups. In particular, we discuss the case of averaging over the special orthogonal group of matrices…
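
For the special orthogonal group, one common averaging scheme is an iterative tangent-space (Karcher-mean-style) mean. This is a generic textbook method under our own naming, not necessarily the algorithm of the paper above:

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def so3_exp(w):
    """Rodrigues formula: rotation about axis w by angle |w|."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def so3_log(R):
    """Inverse of the exponential map (principal branch)."""
    th = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if th < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return th / (2 * np.sin(th)) * w

def rotation_average(Rs, iters=20):
    """Iteratively re-center the tangent-space mean of the logarithms."""
    mean = Rs[0]
    for _ in range(iters):
        w = np.mean([so3_log(mean.T @ R) for R in Rs], axis=0)
        mean = mean @ so3_exp(w)
    return mean
```

For instance, the average of two rotations by +θ and −θ about the same axis is the identity, which a naive entry-wise matrix average would not even return as a rotation.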

This paper extends averaging theory for power electronic systems to include feedback-controlled converters. New averaging techniques based on the integral-equation description provide theoretical justification for commonly used averaging methods. The new theory provides a basis for answering fundamental questions about the averaging approximation. A ripple estimate expression is presented, along with simulation results for a feedback-controlled converter.

Symbolic analysis methods for the averaged modeling of switching power converters are presented in this paper. A general averaging method suitable for computer-aided modeling is discussed first. Then, a symbolic analysis package that uses this averaging method to automatically generate an analytical averaged model for a switching power converter is described. The package is implemented using a computer algebra system.
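
The state-space-averaging claim underlying the preceding converter abstracts can be checked numerically on the simplest example, an ideal buck converter, whose averaged model predicts a steady-state output of D·Vin. All component values below are invented for illustration:

```python
import numpy as np

# invented component values for an ideal buck converter
Vin, D = 12.0, 0.5                 # input voltage, duty ratio
L, C, R = 100e-6, 100e-6, 10.0     # inductor, capacitor, load
fs, dt, T = 100e3, 1e-7, 0.02      # switching frequency, time step, sim time

iL, vC, out = 0.0, 0.0, []
steps = int(T / dt)
for k in range(steps):
    # ideal switch-node voltage: Vin for the first D fraction of each cycle
    s = Vin if (k * dt * fs) % 1.0 < D else 0.0
    iL += (s - vC) / L * dt        # inductor dynamics
    vC += (iL - vC / R) / C * dt   # capacitor/load dynamics
    out.append(vC)

avg_out = np.mean(out[steps // 2:])   # steady-state average output voltage
```

The simulated switched circuit settles to an average output within the switching ripple of the averaged-model prediction D·Vin = 6 V, illustrating why the averaging approximation is useful for design.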

We are concerned with resolving superimposed, correlated seismic waves with small-aperture arrays. The limited time-bandwidth product of transient seismic signals complicates the task. We examine the use of MUSIC and Cadzow's ML estimator with and without subarray averaging for resolution potential. A case study with real data favors the MUSIC algorithm and a multiple event covariance averaging scheme.

University of Waterloo Weather Station Summary, May 2013: A warm month with average precipitation. Most of the precipitation for the month happened during just 3 days (the 10th, 28th and 29th). Even with this average month, we are still about 100 mm over the precipitation we expect at this time of year.

[Lecture-slide fragments, University of Nevada, Reno: Average-Atom & Static Conductivity. Example: aluminum, density 0.27 g/cc, T = 5 eV. An applied electric field E(t) = F ẑ sin t, A(t) = F ẑ cos t, leads to the time-dependent Schrödinger equation; topics include average-atom static conductivity (Ziman) and the Kubo-Greenwood formula (infrared catastrophe).]

Non-linear oscillatory systems containing a fast phase and a relatively slow phase are considered. A modified averaging method is proposed for the situation in which the slow variables, averaged over the fast phase, do not change. A procedure for separating the variables over substantially longer time intervals with respect to a small parameter is proposed and justified; over these extended time intervals…

Graduation success, grade-point average, and destination major of ten cohorts of students matriculating and subsequently leaving undergraduate engineering programs at nine southeastern universities are studied from 1987-2002. Grade-point averages are frozen at the time students leave engineering to investigate the role of grades in their decision to leave engineering and their choice of a destination major. This study adds…

Matthew W. Ohland; Guili Zhang; Brian Thorndyke; Timothy J. Anderson

Abstract: This article deals with the optimal transfer of a satellite between Keplerian orbits using low thrust and averaging. An important problem in astronautics is to transfer a satellite between elliptic orbits… (the constraint is relaxed, but is satisfied at the end, by adjusting the transfer time). As observed, this led to an averaged…

Time-frequency analysis techniques were used to investigate the event-related low-frequency (delta: 0.5-4 Hz; theta: 4-8 Hz) activity in auditory event-related potential (ERP) data of highly and average intelligent children; 18 intellectually gifted children and 18 intellectually average children participated in the present study. Present findings…

Liu, Tongran; Shi, Jiannong; Zhao, Daheng; Yang, Jie

The visual system can make highly efficient aggregate judgements about a set of objects, with speed roughly independent of the number of objects considered. While there is a rich literature on these mechanisms and their ramifications for visual summarization tasks, this prior work rarely considers more complex tasks requiring multiple judgements over long periods of time, and has not considered certain critical aggregation types, such as the localization of the mean value of a set of points. In this paper, we explore these questions using a common visualization task as a case study: relative mean value judgements within multi-class scatterplots. We describe how the perception literature provides a set of expected constraints on the task, and evaluate these predictions with a large-scale perceptual study with crowd-sourced participants. Judgements are no harder when each set contains more points; redundant and conflicting encodings, as well as additional sets, do not strongly affect performance; and judgements are harder when using less salient encodings. These results have concrete ramifications for the design of scatterplots. PMID:24051798

Gleicher, Michael; Correll, Michael; Nothelfer, Christine; Franconeri, Steven

There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross correlations. The multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on the detrended fluctuation analysis (MFXDFA) method. We develop in this work a class of MFDCCA algorithms based on the detrending moving-average analysis, called MFXDMA. The performances of the proposed MFXDMA algorithms are compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, which have theoretical expressions of the multifractal nature. In all cases, the scaling exponents h(xy) extracted from the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between two time series, and the MFXDFA and centered MFXDMA algorithms have comparative performances, which outperform the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparative performances, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q<0 and underperforms when q>0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. 
For the returns, the centered MFXDMA algorithm gives the best estimates of h(xy)(q) since its h(xy)(2) is closest to 0.5, as expected, and the MFXDFA algorithm has the second best performance. For the volatilities, the forward and backward MFXDMA algorithms give similar results, while the centered MFXDMA and the MFXDFA algorithms fail to extract rational multifractal nature. PMID:21867256
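
The detrending step at the core of the moving-average-based algorithms above can be sketched for the centered variant (a minimal illustration of the residual computation only; the full MFXDMA pipeline with q-order fluctuation functions is not shown):

```python
import numpy as np

def centered_dma_residual(x, n):
    """Detrending step of the centered moving-average (DMA) family:
    subtract the centered n-point moving average, dropping the edges
    where the window is incomplete."""
    x = np.asarray(x, dtype=float)
    trend = np.convolve(x, np.ones(n) / n, mode="same")
    half = n // 2
    return (x - trend)[half:len(x) - half]
```

A centered moving average reproduces any locally linear trend exactly, so the residual of a pure ramp vanishes in the interior; fluctuation functions are then built from such residuals over varying window sizes n.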

There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross correlations. The multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on the detrended fluctuation analysis (MFXDFA) method. We develop in this work a class of MFDCCA algorithms based on the detrending moving-average analysis, called MFXDMA. The performances of the proposed MFXDMA algorithms are compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, which have theoretical expressions of the multifractal nature. In all cases, the scaling exponents hxy extracted from the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between two time series, and the MFXDFA and centered MFXDMA algorithms have comparative performances, which outperform the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparative performances, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithms performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q<0 and underperforms when q>0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. 
For the returns, the centered MFXDMA algorithm gives the best estimates of hxy(q) since its hxy(2) is closest to 0.5, as expected, and the MFXDFA algorithm has the second best performance. For the volatilities, the forward and backward MFXDMA algorithms give similar results, while the centered MFXDMA and the MFXDFA algorithms fail to extract rational multifractal nature.
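The moving-average detrending at the heart of the MFXDMA family can be sketched as follows. This is a deliberately simplified, pointwise illustration (the published estimator partitions the detrended residuals into segments and supports several moving-average placements); the backward moving average, the scale list, and the white-noise test series are arbitrary choices for the sketch.

```python
import numpy as np

def moving_average(x, window):
    # Backward moving average: each output averages the current point
    # and the (window - 1) preceding points.
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

def mfxdma_hxy(x, y, scales, q=2):
    """Pointwise sketch of the MFXDMA scaling exponent h_xy(q) with
    backward moving-average detrending (not the full segment-based
    estimator)."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    logF = []
    for s in scales:
        # Detrend the integrated profiles by their moving averages.
        rx = X[s - 1:] - moving_average(X, s)
        ry = Y[s - 1:] - moving_average(Y, s)
        # q-th order fluctuation of the detrended cross-covariance.
        f2 = np.abs(rx * ry)
        logF.append(np.log(np.mean(f2 ** (q / 2.0)) ** (1.0 / q)))
    # h_xy(q) is the log-log slope of F(s) versus the scale s.
    slope, _ = np.polyfit(np.log(scales), logF, 1)
    return slope

rng = np.random.default_rng(0)
x, y = rng.standard_normal(20000), rng.standard_normal(20000)
# For two uncorrelated white-noise series both profiles are Brownian,
# so the fitted exponent should come out near 0.5.
h2 = mfxdma_hxy(x, y, scales=[16, 32, 64, 128, 256])
```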

Detailed simulation studies, highly resolved in space and time, show that a physical relationship exists among instantaneous soil-moisture values integrated over different soil depths. This dynamic relationship evolves in time as a function of the hydrologic inputs and soil and vegetation characteristics. When depth-averaged soil moisture is sampled at a low temporal frequency, the structure of the relationship breaks down and becomes undetectable. Statistical measures can overcome the limitation of sampling frequency, and predictions of mean and variance for soil moisture can be defined over any soil averaging depth d. For a water-limited ecosystem, a detailed simulation model is used to compute the mean and variance of soil moisture for different averaging depths over a number of growing seasons. We present a framework that predicts the mean of soil moisture as a function of averaging depth given soil moisture over a shallow d and the average daily rainfall reaching the soil.

Puma, Michael J.; Celia, Michael A.; Rodriguez-Iturbe, Ignacio; Guswa, Andrew J.

A time-dependent global (volume-averaged) model is developed for a nitrogen discharge with a time-varying discharge power in the pressure range 1-100 mTorr. The time evolution of the plasma parameters, the electron temperature and the particle densities, is calculated by solving the particle and energy balance equations. The effect of the modulation frequency and the duty ratio on

A frequency dependent computer model based on the two-dimensional vertically averaged shallow-water equations is described for general purpose application in tidally dominated embayments. This model simulates the response of both tides and tidal currents to user-specified geometries and boundary conditions. The mathematical formulation and practical application of the model are discussed in detail. Salient features of the model include the ability to specify: (1) stage at the open boundaries as well as within the model grid, (2) velocities on open boundaries (river inflows and so forth), (3) spatially variable wind stress, and (4) spatially variable bottom friction. Using harmonically analyzed field data as boundary conditions, this model can be used to make real time predictions of tides and tidal currents. (USGS)

The dynamics of magnetic flux trapped in low dc magnetic fields (of the order of the Earth's field) was studied experimentally for the first time in single-crystal YBCO samples with unidirectional twin boundaries in the temperature range near Tc (0.8 < T/Tc < 0.99). Strong pinning in the system of unidirectional planar defects was demonstrated, and a significant deviation from monotonic behavior was established for the averaged effective pinning potential Up(T) for trapped flux of low density. To compare different methods of Jc determination, the field dependences of the magnetization loop width M(H), which are related to the effective pinning and Jc, were obtained, and resistive measurements were carried out on microbridges made from the same single crystals.

The cosmological dynamics of gravitational clustering satisfies an approximate invariance with respect to the cosmological parameters that is often used to simplify analytical computations. We describe how this approximate symmetry gives rise to angular-averaged consistency relations for the matter density correlations. This allows one to write the (ℓ+n) density correlation, with ℓ large-scale linear wave numbers that are integrated over angles, and n fixed small-scale nonlinear wave numbers, in terms of the small-scale n-point density correlation and ℓ prefactors that involve the linear power spectra at the large-scale wave numbers. These relations, which do not vanish for equal-time statistics, go beyond the already known kinematic consistency relations. They could be used to detect primordial non-Gaussianities, modifications of gravity, limitations of galaxy biasing schemes, or to help design analytical models of gravitational clustering.

The soft x-ray continuum radiation in TFTR low density neutral beam discharges can be much lower than its theoretical value obtained by assuming a corona equilibrium. This reduced continuum radiation is caused by an ionization equilibrium shift toward lower states, which strongly changes the value of the average recombination coefficient of metallic impurities, γ̄, even for only slight changes in the average charge, Z̄. The primary agent for this shift is the charge exchange between the highly ionized impurity ions and the neutral hydrogen, rather than impurity transport, because the central density of the neutral hydrogen is strongly enhanced at lower plasma densities with intense beam injection. In the extreme case of low density, high neutral beam power TFTR operation (energetic ion mode) the reduction in γ̄ can be as much as one-half to two-thirds. We calculate the parametric dependence of γ̄ and Z̄ for Ti, Cr, Fe, and Ni impurities on neutral density (equivalent to beam power), electron temperature, and electron density. These values are obtained by using either a one-dimensional impurity transport code (MIST) or a zero-dimensional code with a finite particle confinement time. As an example, we show the variation of γ̄ and Z̄ in different TFTR discharges.

Sesnic, S.S.; Bitter, M.; Hill, K.W.; Hiroe, S.; Hulse, R.; Shimada, M.; Stratton, B.; von Goeler, S.

...TD 9509] RIN 1545-BE23 Farmer and Fisherman Income Averaging AGENCY: Internal Revenue Service (IRS...regulations relating to the averaging of farm and fishing income in computing income tax liability. The regulations reflect changes...

In distributed consensus and averaging algorithms, processors exchange and update certain values ("estimates" or "opinions") by forming a local average with the values of their neighbors. Under suitable conditions, such ...
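A minimal sketch of such a local-averaging update, on a hypothetical 5-node ring network (topology, weights, and initial values are invented for illustration):

```python
import numpy as np

# Each processor repeatedly replaces its estimate with the equal-weight
# average of its own value and its two ring neighbors' values.
n = 5
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
values = np.array([10.0, 0.0, 4.0, 6.0, 5.0])
target = values.mean()  # 5.0

for _ in range(200):
    values = np.array([
        (values[i] + values[neighbors[i][0]] + values[neighbors[i][1]]) / 3.0
        for i in range(n)
    ])

# Because the update matrix is doubly stochastic and the graph is
# connected, every estimate converges to the average of the initial
# values.
```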

...is rounded down to $502. (e) “Deemed” average monthly wage for certain deceased veterans of World War II. Certain deceased veterans of World War II are “deemed” to have an average monthly wage of $160 (see §§ 404.1340...

...for both temperature and precipitation. The daily high temperatures of May were just a little below average, but still within the average range. Also on the edge of the average range was the precipitation (63.6 mm) for the month, but this time on the low end of the range. About 40 mm of this precipitation came down between

2,725 STUDENTS REGISTERED WITH THE OFFICE OF CAREER MANAGEMENT. Women: 42%. Average Annual Salary: $50,220. FULL-TIME MBA STUDENTS: 250 (approximately 125 students in each class). Women: 25%. Minority: 12%. Average Annual Salary: $89,... Average Monthly Internship Salary: $2,676. UNDERGRADUATE STUDENTS BY MAJOR *As

...insurance amount under the average-monthly-wage method...insurance amount under the average-monthly-wage method...First, we find your average monthly wage, as described...we apply any automatic cost-of-living or ad hoc...

Averaging is an important concept which has found numerous applications in general and in pattern recognition and computer vision in particular. In this paper we consider averaging directional vectors of arbitrary dimensions. Given a set of vectors, we intend to compute an average vector which optimally represents the input vectors according to some formal criterion. Several optimisation criteria are formulated.
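One common formal criterion for this problem, maximizing the summed cosine similarity between the average and the inputs (not necessarily one of the criteria formulated in the paper), leads to the normalized vector sum:

```python
import numpy as np

def average_direction(vectors):
    """Average of direction vectors of arbitrary dimension under the
    maximum-summed-cosine criterion: normalize each input, sum, and
    renormalize. (Undefined when the vector sum vanishes; axial data
    without orientation would instead use the dominant eigenvector of
    the scatter matrix.)"""
    V = np.asarray(vectors, dtype=float)
    units = V / np.linalg.norm(V, axis=1, keepdims=True)
    s = units.sum(axis=0)
    return s / np.linalg.norm(s)

# Two perpendicular unit vectors average to the 45-degree direction.
avg = average_direction([[1.0, 0.0], [0.0, 1.0]])
```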

Discussion of impact factors for "Journal Citation Reports" subject categories focuses on the difference between an average of quotients and a global average, obtained as a quotient of averages. Applications in the context of informetrics and scientometrics are given, including journal prices and subject discipline influence scores. (Author/LRW)
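The distinction can be made concrete with a toy category of two journals (all numbers invented for illustration):

```python
# Hypothetical category: per-journal (citations, citable items).
journals = {"A": (100, 20), "B": (10, 80)}

# Average of quotients: the mean of the per-journal impact factors.
avg_of_quotients = sum(c / n for c, n in journals.values()) / len(journals)

# Global average: the quotient of summed citations and summed items.
global_average = (sum(c for c, _ in journals.values())
                  / sum(n for _, n in journals.values()))

# Journal A has impact factor 100/20 = 5.0 and journal B 10/80 = 0.125,
# so the average of quotients is 2.5625, while the global average is
# 110/100 = 1.1: the two summaries disagree whenever journal sizes differ.
```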

Summary form only given. High average power FELs require high average current electron injectors capable of generating high quality electron bunches with a short bunch duration and a repetition rate equal to the frequency of the rf linac. In this talk the use of rf-gated gridded thermionic electron guns is proposed as a solution for high average power operation. In

P. Sprangle; J. Peñano; S. Gold; D. Gordon; A. Ting; B. Hafizi; C. Mitchell

Recently a new concept of shadow price, the average shadow price, based on the average rather than the marginal contribution of a resource, has been developed for pure Integer Linear Programming problems. In this paper we prove that average shadow prices can be used in Mixed Integer Linear Programming problems and that some of their properties are analogous

A Jacobi matrix with matrix entries is a self-adjoint block tridiagonal matrix with invertible blocks on the off-diagonals. Averaging over boundary conditions leads to explicit formulas for the averaged spectral measure which can potentially be useful for spectral analysis. Furthermore another variant of spectral averaging over coupling constants for these operators is presented.

In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.

Herein, we test and compare different techniques to obtain averaged reaction barriers from quantum mechanics/molecular mechanics (QM/MM) simulations based on snapshots taken from molecular dynamics. Reasonable values can be obtained from a fairly small sample of well-chosen snapshots if an exponential averaging, also called Boltzmann averaging, is used. Snapshots with geometries close to the expected transition state are to be picked preferentially. Exponential averaging, arithmetic averaging, and simply taking the minimum barrier are compared to free-energy calculations from umbrella sampling. Three reactions within a protein in a water environment are used as test cases. PMID:25195924
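The exponential (Boltzmann) averaging mentioned above can be sketched as follows; the snapshot barriers and temperature are invented for illustration, and the formula is the standard exponential average, not code from the study:

```python
import math

def boltzmann_average_barrier(barriers_kcal, T=300.0):
    """Exponentially (Boltzmann) averaged barrier from per-snapshot
    QM/MM barriers: dE_eff = -RT * ln( mean( exp(-dE_i / RT) ) ).
    Barriers in kcal/mol; R = 1.9872e-3 kcal/(mol*K)."""
    RT = 0.0019872041 * T
    mean_exp = sum(math.exp(-b / RT) for b in barriers_kcal) / len(barriers_kcal)
    return -RT * math.log(mean_exp)

# Hypothetical snapshot barriers: the exponential average is dominated
# by the lowest barriers, so it lies below the arithmetic mean.
barriers = [12.0, 14.0, 15.0, 18.0]
exp_avg = boltzmann_average_barrier(barriers)
arith_avg = sum(barriers) / len(barriers)  # 14.75
```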

We study the complexity of computing average quantities related to spin systems, such as the mean magnetization and susceptibility in the ferromagnetic Ising model, and the average dimer count (or average size of a matching) in the monomer-dimer model. By establishing connections between the complexity of computing these averages and the location of the complex zeros of the partition function, we show that these averages are #P-hard to compute, and hence, under standard assumptions, computationally intractable. In the case of the Ising model, our approach requires us to prove an extension of the famous Lee-Yang Theorem from the 1950s.

...2011-07-01 false How is the annual refinery or importer average and corporate pool...Standards § 80.205 How is the annual refinery or importer average and corporate pool...level determined? (a) The annual refinery or importer average and corporate...

The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting (WRF) Model and calibrating these data by means of the Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution short-range ensemble forecasts using meteorological data (temperature) generated by nine WRF model configurations. The WRF models have 35 vertical levels and 2.5 km x 2.5 km horizontal resolution. The main point is that each ensemble member uses a different parameterization of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive probability density function (PDF) is a weighted average of the predictive PDFs associated with the individual ensemble members, with weights that reflect each member's relative skill. As a test case we chose a period of heat wave and convective weather conditions over the area of Poland from 23 July to 1 August 2013. From 23 July to 29 July 2013, temperatures oscillated around 30 degrees Celsius at many meteorological stations, and new temperature records were set. During this time an increase in patients hospitalized with cardiovascular problems was registered. On 29 July 2013 an advection of moist tropical air masses was recorded over the area of Poland, causing a strong convection event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to transport infrastructure, destroyed buildings and trees, and led to injuries and a direct threat to life. A comparison of the meteorological data from the ensemble system with data recorded at 74 weather stations located in Poland is made. We prepare a set of model-observation pairs. The data obtained from the single ensemble members and the median from the WRF BMA system are then evaluated on the basis of the deterministic statistical errors Root Mean Square Error (RMSE) and Mean Absolute Error (MAE).
To evaluate the probabilistic data, the Brier Score (BS) and the Continuous Ranked Probability Score (CRPS) were used. Finally, a comparison between the BMA-calibrated data and the data from the ensemble members will be displayed.
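The BMA predictive mixture described above can be sketched as follows, assuming (as is common for temperature) Gaussian member PDFs; the forecasts, weights, and spread below are illustrative values, not taken from the study:

```python
import numpy as np

def bma_pdf(x, forecasts, weights, sigma):
    """BMA predictive PDF as a weighted average of member PDFs. Each
    member PDF is assumed Gaussian, centered on that member's
    (bias-corrected) forecast with spread sigma; the skill weights
    sum to one."""
    x = np.asarray(x, dtype=float)
    pdf = np.zeros_like(x)
    for f, w in zip(forecasts, weights):
        pdf += w * np.exp(-0.5 * ((x - f) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return pdf

# Three hypothetical member temperature forecasts (deg C) and weights.
grid = np.linspace(20.0, 40.0, 401)
p = bma_pdf(grid, forecasts=[29.0, 30.5, 32.0], weights=[0.2, 0.5, 0.3], sigma=1.5)
# The mixture is itself a valid PDF (non-negative, integrates to one).
```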

Ensemble, time, and space averages as applied to turbulent quantities are discussed, and pertinent properties of the averages are obtained. Those properties, together with Reynolds decomposition, are used to derive the averaged equations of motion and the one- and two-point moment or correlation equations. The terms in the various equations are interpreted. The closure problem of the averaged equations is discussed, and possible closure schemes are considered. Those schemes usually require an input of supplemental information unless the averaged equations are closed by calculating their terms by a numerical solution of the original unaveraged equations. The law of the wall for velocities and temperatures, the velocity- and temperature-defect laws, and the logarithmic laws for velocities and temperatures are derived. Various notions of randomness and their relation to turbulence are considered in light of ergodic theory.
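The averaging properties used in such derivations can be illustrated with a minimal Reynolds-decomposition sketch, in which a time average over a synthetic statistically steady record stands in for the ensemble average (valid under the ergodicity assumption the text mentions):

```python
import numpy as np

# Decompose a sampled velocity record u(t) as u = U + u', where U is
# the averaged part and u' the fluctuation, with <u'> = 0 by construction.
rng = np.random.default_rng(1)
u = 2.0 + 0.3 * rng.standard_normal(5000)  # synthetic steady record

U = u.mean()          # time-averaged mean velocity
u_prime = u - U       # fluctuating part

# Properties exploited when averaging the equations of motion:
mean_fluct = u_prime.mean()        # vanishes (to rounding error)
variance = (u_prime ** 2).mean()   # stays positive: fluctuations do not
                                   # average out in products, which is
                                   # the source of the closure problem.
```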

In this paper, we describe a method for measuring the average flow velocity of a sample by means of Nuclear Magnetic Resonance. The method is based on the Carr-Purcell-Meiboom-Gill (CPMG) sequence and does not require the application of any static or pulsed magnetic field gradients beyond the background magnetic field. The technique analyzes the early-time behavior of the echo amplitudes of the CPMG sequence. Measurements of the average flow velocity of water are presented. The experimental results show a linear relationship between the slope/y-intercept ratio of a linear fit of the first echoes in the CPMG sequence and the average flow velocity of the flowing fluid. The proposed method can be implemented in low-cost low-field NMR spectrometers, allowing continuous monitoring of the average velocity of a fluid in almost real time, even if the flow velocity changes rapidly.
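The early-echo analysis described above can be sketched as follows; the echo times and amplitudes are synthetic placeholders, since the calibration between the slope/intercept ratio and velocity depends on the instrument:

```python
import numpy as np

# Fit a straight line to the first few CPMG echo amplitudes and form
# the slope/y-intercept ratio, the quantity that (per the abstract)
# varies linearly with the average flow velocity.
echo_times = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # ms (echo index * tE)
amplitudes = np.array([0.98, 0.955, 0.93, 0.91, 0.885])  # a.u., early echoes

slope, intercept = np.polyfit(echo_times, amplitudes, 1)
ratio = slope / intercept  # calibrated against velocity in the method
```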