Note: This page contains sample records for the topic "average turnaround time."
While these samples are representative of the available content,
they are not comprehensive, nor are they the most current set.
We encourage you to perform a real-time search
to obtain the most current and comprehensive results.
Last update: November 12, 2013.

Improving medical imaging report turnaround times.  


Southern Ohio Medical Center (SOMC), a 232-bed community-based teaching hospital, is equipped with state-of-the-art equipment such as 2 16-slice computed tomography (CT) scanners, 3 MR scanners, 3 ultrasound scanners, 2 digital mammography units, and 3 nuclear medicine cameras. One hundred twenty-six employees--ranging from support personnel to technologists, along with 7 board-certified radiologists--staff the medical imaging department. Procedure volume is approximately 164,000 per year, performed in all American College of Radiology (ACR)-accredited modalities. Filmless since 1998, SOMC's medical imaging department has realized productivity gains, reaching an estimated 164,000 procedures for fiscal year 2005. The catalyst for the department is a robust picture archiving and communication system (PACS). Working with the radiologists, staff, and transcription services, turnaround time was reduced from 13 hours to 9 hours from exam start to report sign-off. Additional technology intervention was essential to further decrease report turnaround time. SOMC served as a beta site for a radiology information system (RIS). The new RIS has allowed the medical imaging department to move from a paper department to a "pseudo paperless" department. Orders, history sheets, consents, and other forms are scanned into the RIS for staff and radiologist use. Requisitions are no longer printed, and staff has access to review workstations to ensure that patients are called back into the department for procedures. This new workflow has also reduced paper traffic within the department. The last piece of the technology puzzle to improve report turnaround time was voice recognition technology. From its implementation, voice recognition enhanced the RIS technology. All of the radiologists began to use the product as soon as it was available. They perform all editing and corrections either by voice command or by manual typing.
The medical imaging department has noted that voice-command corrections and editing are more efficient for the radiologist. The overall impact of decreased radiology report turnaround times is seen not only in medical imaging but also globally within the hospital. SOMC expects to realize a reduction in length of patient stays and a faster process for plotting the course of patient treatment, e.g., faster visits from emergency department (ED) physicians to patients. PMID:15794377

Marquez, Luis O


Real-time aircraft turnaround operations manager  

Microsoft Academic Search

Aircraft turnaround operations are the activities conducted to prepare an inbound aircraft at an airport for a following outbound flight scheduled for the same aircraft. These activities include both the inbound and outbound exchanges of passengers, crew, catering services, cargo, and baggage handling. Hence the importance of following up on all the activities performed

M. Abd Allah Makhloof; M. Elsayed Waheed; Usama A. El-Raouf Badawi



Psychiatric patients turnaround times in the emergency department  

Microsoft Academic Search

BACKGROUND: To analyze the turnaround times of psychiatric patients within the Emergency Department (ED) from registration to discharge or hospitalization in a University Hospital in 2002. METHODS: Data from a one-year period of psychiatric admissions to the emergency service at a University Hospital were monitored and analyzed focused on turnaround times within the ED. Information on patients variables such as

Stefan Kropp; Christoph Andreis; Bert te Wildt; Udo Reulbach; Martin Ohlmeier; Irina Auffarth; Marc Ziegenbein



Improving medical imaging report turnaround times: the role of technology.  


At Southern Ohio Medical Center (SOMC), the medical imaging department and the radiologists expressed a strong desire to improve workflow. This desire was a major motivating factor toward implementing a new RIS and speech recognition technology. The need to monitor workflow in real time and to evaluate productivity and resources necessitated that a new solution be found. A decision was made to roll out both the new RIS product and speech recognition together to maximize the resources available to interface and implement the new solution. Prior to implementation of the new RIS, the medical imaging department operated in a conventional manner, from electronic order entry to paper request. The paper request followed the study through exam completion to the radiologist. SOMC entered into a contract with its PACS vendor to participate in beta testing and clinical trials of a new RIS product for the US market. Backup plans were created in the event the product failed to function as planned--either during the beta testing period or during clinical trials. The last piece of the technology puzzle to improve report turnaround time was voice recognition technology. Speech recognition enhanced the RIS technology as soon as it was implemented. The results show that the project has been a success. The new RIS, combined with speech recognition and the PACS, makes for a very effective solution to patient, exam, and results management in the medical imaging department. PMID:15898577

Marquez, Luis O; Stewart, Howard


Predictable turn-around time for post tape-out flow  

NASA Astrophysics Data System (ADS)

A typical post tape-out data path at an IC fabrication facility has the following major components of software-based processing: Boolean operations before the application of resolution enhancement techniques (RET) and optical proximity correction (OPC); the RET and OPC step [etch retargeting, sub-resolution assist feature (SRAF) insertion, and OPC]; post-OPC/RET Boolean operations; and sometimes, in the same flow, simulation-based verification. There are two objectives that an IC fabrication tapeout flow manager wants to achieve with the flow--predictable completion time and fastest turn-around time (TAT)--and at times they may be competing. There have been studies in the literature modeling the turnaround time from historical data for runs with the same recipe and later using that model to derive the resource allocation for subsequent runs [3]. This approach is more feasible in predominantly simulation-dominated flows, but for an edge-operation-dominated flow it may not be possible, especially if processing acceleration methods such as pattern matching or hierarchical processing are involved. In this paper, we suggest an alternative method of providing a target turnaround time and managing the priority of jobs without doing any upfront resource modeling and resource planning. The methodology then systematically either meets the turnaround-time target or lets the user know as early as possible that it will not be met. This builds on top of the Calibre Cluster Management (CalCM) resource management work previously published [1][2]. The paper describes the initial demonstration of the concept.

Endo, Toshikazu; Park, Minyoung; Ghosh, Pradiptya



Quick-turnaround-time improvement for product development and transfer to mass production  

Microsoft Academic Search

We describe equipment and facility operational methods in a production fab which are designed to achieve quick-turnaround-time (QTAT) manufacturing and ease product transfer from development to mass production. An advanced CIM system with precise lot management is introduced to keep the optimum balance of manufacturing TAT and throughput. Substantial end-user computing reduces the engineering holding time for handling development lots.

Hidetoshi Koike; Fumitomo Matsuoka; Shinichi Hohkibara; Etsuo Fukuda; Kazuhiro Tomioka; Hideshi Miyajima; Kouichi Muraoka; Nobuo Hayasaka; Minoru Kimura



Decreased Turn-Around Time for Forensic Genetic Testing.  

National Technical Information Service (NTIS)

At the time of the initial Crime Laboratory Improvement (CLIP) grant application, the Biological Science Section of the New York State Police Forensic Investigation Center was in the process of implementing a program with dual objectives which were to (1)...

B. Duceman



Reduction in order turnaround time, telephone calls, and trips to the pharmacy by means of facsimile transceivers.  


The facsimile transceiver (FAX) was trialed at Beverly Hospital, Beverly, Massachusetts, in the Intensive Care Unit and the Pediatric Floor from March 13 to April 27, 1987. Beverly Hospital is a 233-bed nonprofit community hospital. The facsimile unit was used to transmit medication orders to the pharmacy over existing telephone lines. The purpose of the trial was to determine the time saved between the writing of the medication order and the arrival of the medication on the nursing unit; this interval was considered the medication turnaround time. During the trial period, a total of 1767 medication order sheets were processed via the FAX machine. The results showed that medication turnaround time improved by an average of 90 minutes for each order placed. In addition, it was found that a total of 60 phone calls and 14 trips to the pharmacy per day were eliminated from both trial units combined. This resulted in an estimated time savings of 6.5 staff hours per day on these two units alone. On a hospital-wide basis, this would result in considerable labor savings for nursing staff on the floors. In addition, supply costs of medication order sheets would be reduced by approximately $3200 per year if the FAX units were used throughout the hospital. As a result of the trial, Beverly Hospital purchased 9 units at a cost of $17,325 and implemented a hospital-wide policy of sending medication orders to the pharmacy using this equipment. This will result in an improved ability of the pharmacy to respond to new orders in a more timely manner, supply cost reductions, and improved use of staff. PMID:10286430

Newberg, D F; Stevens, J



Computerised pathology test order entry reduces laboratory turnaround times and influences tests ordered by hospital clinicians: a controlled before and after study  

PubMed Central

Objective To assess the impact of a computerised pathology order entry system on laboratory turnaround times and test ordering within a teaching hospital. Methods A controlled before and after study compared test assays ordered from 11 wards two months before (n = 97 851) and after (n = 113 762) the implementation of a computerised pathology order entry system (Cerner Millennium Powerchart). Comparisons were made of laboratory turnaround times, frequency of tests ordered and specimens taken, proportions of patients having tests, average number per patient, and percentage of gentamicin and vancomycin specimens labelled as random. Results Intervention wards experienced an average decrease in turnaround of 15.5 minutes/test assay (range 73.8 to 58.3 minutes; p<0.001). Reductions were significant for prioritised and non-prioritised tests, and for those done within and outside business hours. There was no significant change in the average number of tests (p = 0.228) or specimens per patient (p = 0.324), and no change in turnaround time for the control ward (p = 0.218). Use of structured order screens enhanced data provided to laboratories. Removing three test assays from the liver function order set resulted in significantly fewer of these tests being done. Conclusions Computerised order entry systems are an important element in achieving faster test results. These systems can influence test ordering patterns through structured order screens, manipulation of order sets, and analysis of real-time data to assess the impact of such changes, not possible with paper-based systems. The extent to which improvements translate into improved patient outcomes remains to be determined. A potentially limiting factor is clinicians' capacity to respond to, and make use of, faster test results.

Westbrook, J I; Georgiou, A; Dimos, A; Germanos, T



Pathology turnaround time in pediatric oncology: a tool to prepare patients and families for the diagnostic waiting period.  


There are few data on the amount of time patients and families typically wait for pathology results when pediatric malignancy is suspected. The purpose of this study was to determine the average waiting period after diagnostic intervention for pediatric cancer. Pathology reports were reviewed for pediatric patients who received their initial diagnosis and were followed in the Division of Hematology/Oncology from 2007 through 2010. The average turnaround time (TAT) for all pathology (n=266) was 6.9 days. The TAT for pathology results according to diagnosis was 10.1 days for CNS tumors (n=59), 9.7 days for sarcomas (n=40), 5.4 days for lymphomas (n=31), 5.4 days for neuroblastoma (n=13), 7.3 days for kidney tumors (n=11), 7.2 days for thyroid tumors (n=7), 9.4 days for ovarian tumors (n=7), 7.0 days for schwannomas/neurofibromas (n=5), 5.7 days for testicular tumors (n=3), 5.0 days for hepatoblastoma (n=3), and 7.0 days for nasopharyngeal carcinomas (n=2). Overall, the TAT for leukemia was 3.1 days (n=76), with diagnosis by flow cytometry taking 1.2 days and results by bone marrow biopsy taking 4.0 days. The TAT for pediatric oncology pathology after diagnostic intervention varies according to diagnosis. The hope is that this information will better prepare patients and families for the agonizing waiting period associated with diagnosis. PMID:23823114
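As a quick consistency check on the figures above, the per-diagnosis means can be pooled into an overall weighted mean. Note that the categories listed cover 257 of the 266 cases, so this sketch only approximates the reported overall mean of 6.9 days; the numbers are taken directly from the abstract.

```python
# Pooled (case-weighted) mean turnaround time from the per-diagnosis
# means and case counts reported in the abstract above. The listed
# categories cover 257 of the 266 total cases, so the result only
# approximates the reported overall mean of 6.9 days.
groups = {
    "CNS tumors": (10.1, 59), "sarcomas": (9.7, 40), "lymphomas": (5.4, 31),
    "neuroblastoma": (5.4, 13), "kidney tumors": (7.3, 11),
    "thyroid tumors": (7.2, 7), "ovarian tumors": (9.4, 7),
    "schwannomas/neurofibromas": (7.0, 5), "testicular tumors": (5.7, 3),
    "hepatoblastoma": (5.0, 3), "nasopharyngeal carcinomas": (7.0, 2),
    "leukemia": (3.1, 76),
}
total_n = sum(n for _, n in groups.values())
weighted_mean = sum(tat * n for tat, n in groups.values()) / total_n
print(total_n, round(weighted_mean, 1))  # 257 6.7
```

The small gap between 6.7 and the reported 6.9 days is consistent with the nine cases not broken out by diagnosis.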

Forlenza, Christopher J; Levy, Adam S



Practice policy and quality initiatives: decreasing variability in turnaround time for radiographic studies from the emergency department.  


A study was performed to evaluate use of quality improvement techniques to decrease the variability in turnaround time (TAT) for radiology reports on emergency department (ED) radiographs. An interdepartmental improvement team applied multiple interventions. Statistical process control charts were used to evaluate for improvement in mean TAT for ED radiographs, percentage of ED radiographs read within 35 minutes, and standard deviation of the mean TAT. To determine if the changes in the radiology department had an effect on the ED, the average time from when an ED physician first met with the patient to the time when the final treatment decision was made was also measured. There was a significant improvement in mean TAT for ED radiographs (from 23.9 to 14.6 minutes), percentage of ED radiographs read within 35 minutes (from 82.2% to 92.9%), and standard deviation of the mean TAT (from 22.8 to 12.7). The mean time from when an ED physician first met with the patient to the time a final treatment decision was made decreased from 88.7 to 79.8 minutes. Quality improvement techniques were used to decrease mean TAT and the variability in TAT for ED radiographs. This change was associated with an improvement in ED throughput. PMID:23479701
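The statistical process control approach described above can be sketched with an individuals (X) control chart, the standard SPC tool for a stream of single TAT measurements. The TAT values below are hypothetical illustrative data, not the study's data.

```python
import statistics

# Sketch of an individuals (X) control chart for report turnaround
# time (TAT), in the spirit of the study above. The TAT samples are
# hypothetical, purely for illustration.
tat_minutes = [21.0, 18.5, 25.2, 14.9, 30.1, 19.8, 22.4, 16.7, 27.3, 20.5]

mean_tat = statistics.mean(tat_minutes)
# Moving ranges between consecutive points estimate short-term variation.
moving_ranges = [abs(b - a) for a, b in zip(tat_minutes, tat_minutes[1:])]
mr_bar = statistics.mean(moving_ranges)

# Standard individuals-chart limits: mean +/- 2.66 * average moving range.
ucl = mean_tat + 2.66 * mr_bar
lcl = max(0.0, mean_tat - 2.66 * mr_bar)

out_of_control = [t for t in tat_minutes if not (lcl <= t <= ucl)]
print(round(mean_tat, 1), round(ucl, 1), round(lcl, 1), out_of_control)
```

Reducing the standard deviation of TAT, as the study did, narrows these control limits, which is exactly the "decreased variability" the title refers to.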

Towbin, Alexander J; Iyer, Srikant B; Brown, James; Varadarajan, Kartik; Perry, Laurie A; Larson, David B


Turnaround Necessities  

ERIC Educational Resources Information Center

Turning around chronically low-performing schools is challenging work requiring fundamental rethinking of the change process, and a systemic rather than school-by-school approach. Without a doubt, high-impact school leaders are critical to turnaround success, and pockets of success around the country demonstrate this. However, transformational…

Robinson, William S.; Buntrock, LeAnn M.



Integrated post tape outflow for fast design to mask turn-around time  

NASA Astrophysics Data System (ADS)

SMIC is a pure-play IC foundry, and in foundry culture turn-around time is the fabs' foremost concern. Aggressive tape-out schedules required a significant reduction in GDS-to-mask flow run time. The objective of this work is therefore to evaluate an OPC methodology and an integrated mask data preparation (MDP) flow for runtime performance via the so-called 1-IO-tape-out platform, and thereby to achieve a fully automated OPC/MDP flow for production. For the evaluation we chose BEOL layers, since they are hit hardest by runtime performance. Unlike FEOL--where, for example, between the Poly and CT layers there are still some non-critical layers, so OPC mask making and wafer schedules are not so tight--BEOL critical-layer OPC masks (M2, V2, then M3, V3, and so on) come one after another continuously. The integrated flow we evaluated included 4 metal layers with MB-OPC (model-based) and 6 via layers with R-B (rule-based) OPC. Our definition of success for this work was a runtime improvement of at least 2x. At the same time, we could not sacrifice model accuracy, so maintaining equal or better model accuracy and OPC/mask-data output quality was also a must. For MDP, we also tested the advantages of the OASIS format compared with GDS.

Hung, Chi-Yuan; Liu, Qingwei; Zhang, Liguo; Shang, Shumay; Bailey, George E.; Jost, Andrew; Brist, Travis



Averaging facial expression over time  

PubMed Central

The visual system groups similar features, objects, and motion (e.g., Gestalt grouping). Recent work suggests that the computation underlying perceptual grouping may be one of summary statistical representation. Summary representation occurs for low-level features, such as size, motion, and position, and even for high level stimuli, including faces; for example, observers accurately perceive the average expression in a group of faces (J. Haberman & D. Whitney, 2007, 2009). The purpose of the present experiments was to characterize the time-course of this facial integration mechanism. In a series of three experiments, we measured observers’ abilities to recognize the average expression of a temporal sequence of distinct faces. Faces were presented in sets of 4, 12, or 20, at temporal frequencies ranging from 1.6 to 21.3 Hz. The results revealed that observers perceived the average expression in a temporal sequence of different faces as precisely as they perceived a single face presented repeatedly. The facial averaging was independent of temporal frequency or set size, but depended on the total duration of exposed faces, with a time constant of ~800 ms. These experiments provide evidence that the visual system is sensitive to the ensemble characteristics of complex objects presented over time.

Haberman, Jason; Harp, Tom; Whitney, David



Flexible time domain averaging technique  

NASA Astrophysics Data System (ADS)

Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation, and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
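Conventional TDA, the comb-filter baseline that FTDA improves on, can be sketched in a few lines: slice the signal into segments of the (assumed exactly known) period and average them. This is only the baseline; the FTDA method with CZT described above is beyond a short example.

```python
import numpy as np

# Conventional time domain averaging (TDA): average known-period
# segments of a signal, attenuating noise and non-synchronous
# components. This is the comb-filter baseline that FTDA generalizes;
# the period is assumed known exactly here (no period cutting error).
rng = np.random.default_rng(0)
period = 128           # samples per revolution
n_periods = 200

t = np.arange(period * n_periods)
clean = np.sin(2 * np.pi * t / period) + 0.5 * np.sin(6 * np.pi * t / period)
noisy = clean + rng.normal(0.0, 1.0, t.size)

# Average the segments; noise power drops roughly by 1/n_periods.
tda = noisy.reshape(n_periods, period).mean(axis=0)

residual = tda - clean[:period]
print(float(np.sqrt(np.mean(residual**2))))  # far below the noise sigma of 1.0
```

When the assumed period is slightly wrong (the PCE case above), the segments drift out of phase and this simple reshape-and-average smears the waveform, which is precisely the error FTDA is designed to eliminate.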

Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng



Preparing printed circuit boards for rapid turn-around time on a plotter  

SciTech Connect

This document describes the use of the LPKF ProtoMat mill/drill circuit board plotter with the associated CAD/CAM software, BoardMaster and CircuitCAM. At present its primary use at Fermilab's Particle Physics Department is the rapid turnaround of prototype double-sided and single-sided copper-clad printed circuit boards (PCBs). (The plotter is also capable of producing gravure films and engraving aluminum or plastic, although we have not used it for this.) It can make traces 0.004 inch wide with 0.004 inch spacings, which is appropriate for high-density surface mount circuits as well as other through-mounted discrete and integrated components. One of the primary benefits of the plotter is the capability to produce double-sided drilled boards from CAD files in a few hours. However, to achieve this rapid turn-around time, some care must be taken in preparing the files. This document describes how to optimize the process of PCB fabrication. With proper preparation, researchers can often have a completed circuit board in a day's time instead of the week or two wait with usual procedures. It is assumed that the software and hardware are properly installed and that the machinist is acquainted with the Win95 operating system and the basics of the associated software. This paper does not describe its use with pen plotters, lasers, or rubouts. The process of creating a PCB begins with the CAD (computer-aided design) software, usually PCAD or VeriBest. These files are then moved to CAM (computer-aided machining) software, where they are edited and converted into the proper format for running on the ProtoMat plotter. The plotter then performs the actual machining of the board. This document concentrates on the LPKF programs CircuitCam BASIS and BoardMaster for the CAM step. These programs run on a Windows 95 platform and drive an LPKF ProtoMat 93s plotter.

Hawtree, J.



Turnaround in cyclic cosmology.  


It is speculated how dark energy in a brane world can help reconcile an infinitely cyclic cosmology with the second law of thermodynamics. A cyclic model is described in which dark energy with a w<-1 equation of state leads to a turnaround at a time extremely shortly before the would-be big rip, at which both the volume and the entropy of our Universe decrease by a gigantic factor, while very many independent, similarly small contracting universes are spawned. The entropy of our model decreases almost to zero at turnaround but increases for the remainder of the cycle: by a vanishingly small amount during contraction, empty of matter, then by a large factor during inflationary expansion. PMID:17359014

Baum, Lauris; Frampton, Paul H



Time averaging of instantaneous quantities in HYDRA  

SciTech Connect

For turbulent flow the evaluation of direct numerical simulations (DNS), where all scales are resolved, and large-eddy simulations (LES), where only large scales are resolved, is difficult because the results are three-dimensional and transient. To simplify the analysis, the instantaneous flow field can be averaged in time for evaluation and comparison to experimental results. The incompressible Navier-Stokes flow code HYDRA has been modified to calculate time-averaged quantities for both DNS and LES. This report describes how time averages of instantaneous quantities are generated during program execution (i.e., while generating the instantaneous quantities, instead of as a postprocessing operation). The calculations are performed during program execution to avoid storing values at each time step and thus to reduce storage requirements. The methods used in calculating the time-averaged velocities, turbulent intensities ⟨u′²⟩, ⟨v′²⟩, and ⟨w′²⟩, and turbulent shear ⟨u′v′⟩ are outlined; the brackets ⟨⟩ used here represent a time average. The described averaging methods were implemented in the HYDRA code for three-dimensional problem solutions. Also presented is a method for taking the time averages of a number of consecutive intervals and calculating the time average over the sum of the intervals. This method could be used for code restarts or further postprocessing of the time averages from consecutive intervals. It was not used in the HYDRA implementation, but is included here for completeness. In HYDRA, the running sums needed for time averaging are simply written to the restart dump.
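The running-sum scheme described above (accumulate moments during the time loop rather than storing every step) can be sketched as follows. The class and variable names are illustrative, not HYDRA's; the sketch assumes uniform time steps, so the time average reduces to an arithmetic mean.

```python
import numpy as np

# Running-sum time averaging in the spirit of the report above:
# accumulate first and second moments during the time loop instead of
# storing every step, then recover the mean velocity <u> and the
# fluctuation intensity <u'^2> = <u^2> - <u>^2.
# Names are illustrative; uniform time steps are assumed.
class RunningTimeAverage:
    def __init__(self, shape):
        self.n = 0
        self.sum_u = np.zeros(shape)
        self.sum_uu = np.zeros(shape)

    def accumulate(self, u):
        self.n += 1
        self.sum_u += u
        self.sum_uu += u * u

    def mean(self):
        return self.sum_u / self.n

    def fluctuation_intensity(self):
        m = self.mean()
        return self.sum_uu / self.n - m * m

rng = np.random.default_rng(1)
avg = RunningTimeAverage(shape=(4,))
samples = rng.normal(2.0, 0.5, size=(1000, 4))  # synthetic "velocity" history
for u in samples:
    avg.accumulate(u)

print(avg.mean().round(2), avg.fluctuation_intensity().round(2))
```

Only the two running sums need to be written to a restart dump to resume averaging, which is the storage saving the report highlights.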

McCallen, R.C.



Statistics of time averaged atmospheric scintillation  

SciTech Connect

A formulation has been constructed to recover the statistics of the moving average of the scintillation Strehl from a discrete set of measurements. A program of airborne atmospheric propagation measurements was analyzed to find the correlation function of the relative intensity over displaced propagation paths. The variance in continuous moving averages of the relative intensity was then found in terms of the correlation functions. An empirical formulation of the variance of the continuous moving average of the scintillation Strehl has been constructed. The resulting characterization of the variance of the finite time averaged Strehl ratios is being used to assess the performance of an airborne laser system.
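The core relation described above (variance of a moving average expressed through the correlation function) has a simple discrete analogue, sketched below; the formulation here is generic, not the paper's specific empirical fit.

```python
import numpy as np

# Discrete analogue of the relation described above: the variance of an
# N-point moving average expressed through the autocovariance c[k],
#   Var[mean] = (1/N) * sum_{k=-(N-1)}^{N-1} (1 - |k|/N) * c[|k|].
def moving_average_variance(autocov, N):
    k = np.arange(-(N - 1), N)
    return np.sum((1 - np.abs(k) / N) * autocov[np.abs(k)]) / N

# Sanity check on white noise (c[0] = sigma^2, c[k>0] = 0):
# the formula collapses to the classic sigma^2 / N.
sigma2 = 4.0
autocov = np.zeros(64)
autocov[0] = sigma2
print(moving_average_variance(autocov, 16))  # 0.25
```

With a slowly decaying autocovariance (correlated scintillation), the triangular weights keep many lag terms, so the variance of the time-averaged Strehl falls much more slowly than 1/N, which is why the correlation function over displaced paths had to be measured.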

Stroud, P.



Exploring Fish School Algorithm for Improving Turnaround Time: An Experience of Content Retrieval  

Microsoft Academic Search

In the distributed e-learning paradigm, learning pedagogy demands different content retrieval methodologies once a certain boundary of learning is reached. Hence, learners are expected to retrieve the contents, and they need to do so at a substantially faster rate. The overall learning process converges within a finite time, and learners return to the same access point. The present paper proposes

Soumya Banerjee; Santi Caballe



Short Planning Turn-Around Time and High Flexibility of the Swedish Astronomy/Aeronomy Satellite Odin  

NASA Astrophysics Data System (ADS)

The Swedish small satellite Odin combines two different scientific disciplines, astronomy and atmospheric research. It has a 3-axis stabilized, zero momentum, attitude control system that operates in two different modes, an inertial pointing astronomy mode and an atmospheric mode providing pointing/scanning of the Earth limb. The reference attitude for the atmospheric mode is created by a versatile reference attitude generator, which also provides the capability for observing and tracking any celestial or solar system objects. The absolute pointing requirements in the scientific modes are 15 arc-seconds inertial pointing and 1.2 arc-minutes reconstructed attitude in atmospheric mode. One part of the Odin mission is to observe the formation of the ozone holes at the poles in spring and fall; another is to observe moving celestial objects such as comets. The unpredictability of when such observation opportunities occur, together with rapidly changing scientific demands, puts high requirements on a short turn-around time for the Operations Cycle--science demand, planning and command generation, reconstruction of attitude history, feedback, and delivery to scientists. A small, efficient team, including members who were deeply involved in and responsible for the design and development of the system, has been a key to providing this short turn-around time for the Operations Cycle. It is possible to safely reconfigure the satellite for either of the two disciplines on short notice and to have a reconstructed attitude available to the users shortly after the observations have been made. The small, efficient team, together with modular software based on MATLAB, also facilitates rapidly meeting new demands on planning and attitude reconstruction from the users.

Jakobsson, B.; Karlsson, T.; Nylund, M.; Olsson, T.; Vinterhav, E.



The Turnaround Fallacy  

ERIC Educational Resources Information Center

For as long as there have been struggling schools in America's cities, there have been efforts to turn them around. But overall, school turnaround efforts have consistently fallen far short of hopes and expectations. Quite simply, turnarounds are not a scalable strategy for fixing America's troubled urban school systems. Fortunately, findings…

Smarick, Andy



On Reynolds Averaging of Turbulence Time Series  

Microsoft Academic Search

We show that validity of Reynolds averaging for estimating the (ensemble) mean of a turbulence time series requires that the series values be both stationary and uncorrelated. In strict statistical terminology, these two conditions are jointly designated as independent identically distributed (i.i.d.). Moreover, we show that when the series values are correlated, knowledge of the correlation between the values is

George Treviño; Edgar L. Andreas



On Reynolds Averaging of Turbulence Time Series  

NASA Astrophysics Data System (ADS)

We show that validity of Reynolds averaging for estimating the (ensemble) mean of a turbulence time series requires that the series values be both stationary and uncorrelated. In strict statistical terminology, these two conditions are jointly designated as independent identically distributed (i.i.d.). Moreover, we show that when the series values are correlated, knowledge of the correlation between the values is needed to obtain a reliable estimate of the mean. Last, we contend that a viable averaging algorithm must be Reynolds number (Re) dependent, requiring one version for low-Re (Gaussian) turbulence and another for high-Re (non-Gaussian) turbulence. Alternatively, the median (as opposed to the mean) is recommended as a measure of the central tendency of the turbulence probability density function.
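The point about correlated values can be illustrated numerically with an AR(1) series (a stand-in for correlated turbulence data, not the authors' model): the variance of the N-point sample mean exceeds the i.i.d. value σ²/N by roughly (1 + ρ)/(1 − ρ).

```python
import numpy as np

# Illustration of the correlation caveat above: for an AR(1) series the
# variance of the N-point sample mean exceeds the i.i.d. value
# sigma^2 / N by roughly (1 + rho) / (1 - rho) for large N.
rng = np.random.default_rng(2)
rho, sigma, N, n_blocks = 0.8, 1.0, 200, 2000

# One long AR(1) series with stationary variance sigma^2, split into blocks.
eps = rng.normal(0.0, sigma * np.sqrt(1 - rho**2), size=N * n_blocks)
x = np.empty_like(eps)
x[0] = eps[0]
for i in range(1, x.size):
    x[i] = rho * x[i - 1] + eps[i]

block_means = x.reshape(n_blocks, N).mean(axis=1)
observed = block_means.var()
iid_value = sigma**2 / N
inflation = observed / iid_value
print(round(inflation, 1))  # roughly (1 + 0.8) / (1 - 0.8) = 9
```

An averaging scheme that assumes i.i.d. values would therefore report an uncertainty on the mean that is too small by about a factor of three here, which is exactly why the authors require knowledge of the correlation.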

Treviño, George; Andreas, Edgar L.



Time-averaged MSD of Brownian motion  

NASA Astrophysics Data System (ADS)

We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.
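The TAMSD functional itself is simple to state and compute; a sketch for a simulated 1D Brownian trajectory follows. The simulation parameters are illustrative, not taken from the paper.

```python
import numpy as np

# Time-averaged mean-square displacement (TAMSD) of a single trajectory:
#   TAMSD(lag) = mean over t of (x(t + lag) - x(t))^2.
# For 1D Brownian motion with diffusion coefficient D,
# E[TAMSD(lag)] = 2 * D * lag * dt.
def tamsd(x, lag):
    disp = x[lag:] - x[:-lag]
    return np.mean(disp**2)

rng = np.random.default_rng(3)
D, dt, n_steps = 0.5, 1.0, 100_000
x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt), size=n_steps))

lags = [1, 2, 5, 10]
curve = [tamsd(x, lag) for lag in lags]
print([round(v, 2) for v in curve])  # each value near 2 * D * lag * dt
```

The scatter of such single-trajectory estimates around 2·D·lag·dt is precisely the randomness whose distribution the paper characterizes via the Laplace transform and Gamma approximation.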

Andreanov, Alexei; Grebenkov, Denis S.



Time vs. ensemble averages for nonstationary time series  

NASA Astrophysics Data System (ADS)

We analyze whether sliding window time averages applied to stationary increment processes converge to a limit in probability. The question centers on averages, correlations, and densities constructed via time averages of the increment x(t,T)=x(t+T)-x(t), e.g. x(t,T)=ln(p(t+T)/p(t)) in finance and economics, where p(t) is a price, and the assumption is that the increment is distributed independently of t. We apply Tchebychev’s Theorem to the construction of statistical ensembles, and then show that the convergence in probability condition is not satisfied when applied to time averages of functions of stationary increments. We further show that Tchebychev’s Theorem provides the basis for constructing approximate ensemble averages and densities from a single, historic time series where, as in FX markets, the series shows a definite ‘statistical periodicity’. The convergence condition is not satisfied strongly enough for densities and certain averages, but is well-satisfied by specific averages of direct interest. Rates of convergence cannot be established independently of specific models, however. Our analysis shows how to decide which empirical averages to avoid, and which ones to construct.
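The increment construction x(t,T) = ln(p(t+T)/p(t)) and a sliding-window time average of it can be sketched directly; the price series below is a synthetic geometric random walk, purely for illustration.

```python
import numpy as np

# The increment construction discussed above, x(t,T) = ln(p(t+T)/p(t)),
# followed by a sliding-window time average of the increments.
# The price series is synthetic (geometric random walk), not market data.
rng = np.random.default_rng(4)
log_p = np.cumsum(rng.normal(0.0005, 0.01, size=2000))  # ln p(t)
p = np.exp(log_p)

T = 5
x = np.log(p[T:] / p[:-T])          # increments x(t, T)

window = 250
kernel = np.ones(window) / window
sliding_mean = np.convolve(x, kernel, mode="valid")  # window time averages

print(x.shape[0], sliding_mean.shape[0])
```

Whether such window averages converge in probability as the window grows is exactly the question the paper addresses; for stationary increments the answer depends on the correlations among the x(t,T) values.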

McCauley, Joseph L.



Turnaround Momentum Still Fragile  

ERIC Educational Resources Information Center

The federal program providing billions of dollars to help states and districts close or remake some of their worst-performing schools remains a work in progress after two years, with more than 1,200 turnaround efforts under way but still no definitive verdict on its effectiveness. The School Improvement Grant (SIG) program, supercharged by a…

Klein, Alyson



Fairmont Hotels' Turnaround Strategy  

Microsoft Academic Search

In 1994 a wealthy investor purchased controlling interest in Fairmont Hotels, but in 1991 there was no guarantee the five-property chain would even be in operation by 1994. The year 1991 began Fairmont's turnaround when the owners, the Swig family, hired Robert Small, most recently of Walt Disney World hotels, to run the company. Small focused on a top-line strategy

W. Terry Umbreit



Turnaround Principal Competencies  

ERIC Educational Resources Information Center

When the Minneapolis Public Schools first set out to hire turnaround school principals, administrators followed their usual process--which focused largely on reputation and anecdotal support and considered mainly internal candidates. Yet success at the complicated task of turning around the fortunes of a failing school depends on exceptionally…

Steiner, Lucy; Barrett, Sharon Kebschull



Use of the rapid BinaxNOW malaria test in a 24-hour laboratory associated with accurate detection and decreased malaria testing turnaround times in a pediatric setting where malaria is not endemic.  


The impact of implementing the BinaxNOW malaria test was evaluated. From 288 tests, 34 malaria cases were detected. Laboratory turnaround time decreased from 9.8 to 1.7 h for report of any Plasmodium spp., 10.2 to 1.6 h for P. falciparum, and 8.6 to 1.1 h for any result. PMID:23408691

Ota-Sullivan, K; Blecker-Shelly, D L



Human resource turnarounds: advice from the experts  

Microsoft Academic Search

Provides advice for human resource managers gathered from a panel of 24 turnaround managers. A turnaround is a situation in which a company has experienced a period of declining profits, high costs and/or inability to meet its financial obligations. Turnaround managers specialize in the turnaround of firms in financial decline. Presents the stages of a turnaround, human resource problems encountered

John E. Oliver; William B. Fredenberger



Human Capital in Turnaround Schools  

ERIC Educational Resources Information Center

Finding, keeping and supporting great educators presents the single biggest challenge to successful school turnarounds. Without teachers and administrators who bring the needed combination of skills and passion, nothing else will achieve the desired effect. The turnaround model supported by the U.S. Department of Education School Improvement…

Ferris, Kristen



Off the Clock: What More Time Can (and Can't) Do for School Turnarounds. Education Sector Reports  

ERIC Educational Resources Information Center

If less time in the classroom is a cause of poor student performance, can adding more time be the cure? This strategy underlies a major effort to fix the nation's worst public schools. Billions of federal stimulus dollars are being spent to expand learning time on behalf of disadvantaged children. And extended learning time (ELT) is being…

Silva, Elena



Time average vibration fringe analysis using Hilbert transformation  

SciTech Connect

Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.

Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad



Time-averaged quadratic functionals of a Gaussian process  

NASA Astrophysics Data System (ADS)

The characterization of a stochastic process from its single random realization is a challenging problem for most single-particle tracking techniques which survey an individual trajectory of a tracer in a complex or viscoelastic medium. We consider two quadratic functionals of the trajectory: the time-averaged mean-square displacement (MSD) and the time-averaged squared root mean-square displacement (SRMS). For a large class of stochastic processes governed by the generalized Langevin equation with arbitrary frictional memory kernel and harmonic potential, the exact formulas for the mean and covariance of these functionals are derived. The formula for the mean value can be directly used for fitting experimental data, e.g., in optical tweezers microrheology. The formula for the variance (and covariance) allows one to estimate the intrinsic fluctuations of measured (or simulated) time-averaged MSD or SRMS for choosing the experimental setup appropriately. We show that the time-averaged SRMS has smaller fluctuations than the time-averaged MSD, in spite of much broader applications of the latter one. The theoretical results are successfully confirmed by Monte Carlo simulations of the Langevin dynamics. We conclude that the use of the time-averaged SRMS would result in a more accurate statistical analysis of individual trajectories and more reliable interpretation of experimental data.

Grebenkov, Denis S.



Improving the Average Response Time in Collective I/O  

SciTech Connect

In collective I/O, MPI processes exchange requests so that the rearranged requests can result in the shortest file system access time. Scheduling the exchange sequence determines the response time of participating processes. Existing implementations that simply follow the increasing order of file offsets do not necessarily produce the best performance. To minimize the average response time, we propose three scheduling algorithms that consider the number of processes per file stripe and the number of accesses per process. Our experimental results demonstrate improvements of up to 50% in the average response time using two synthetic benchmarks and a high-resolution climate application.

Jin, Chen; Sehrish, Saba; Liao, Wei-keng; Choudhary, Alok; Schuchardt, Karen L.



Scaling of average receiving time and average weighted shortest path on weighted Koch networks  

NASA Astrophysics Data System (ADS)

In this paper we present weighted Koch networks based on classic Koch networks. A new method is used to determine the average receiving time (ART), whose key step is to write the sum of mean first-passage times (MFPTs) for all nodes to absorption at the trap located at a hub node as a recursive relation. We show that the ART exhibits a sublinear or linear dependence on network order. Thus, the weighted Koch networks are more efficient than classic Koch networks in receiving information. Moreover, average weighted shortest path (AWSP) is calculated. In the infinite network order limit, the AWSP depends on the scaling factor. The weighted Koch network grows unbounded but with the logarithm of the network size, while the weighted shortest paths stay bounded.

Dai, Meifeng; Chen, Dandan; Dong, Yujuan; Liu, Jie



Average waiting time assignment. I. The single link case  

Microsoft Academic Search

A system is considered in which V users are competing for the transmission capacity of a link. The users generate messages in a Poisson manner. The message length distribution of each user is arbitrary and may differ for different users. The objective is to investigate nonpreemptive service-time independent scheduling as a means of selectively controlling the average waiting time of




Bacteriorhodopsin-films for dynamic time average interferometry  

Microsoft Academic Search

Bacteriorhodopsin (BR) films have been successfully tested in different holographic applications, e.g. information recording and pattern recognition. In this paper their properties for dynamic holographic interferometry are described and examples from dynamic time average interferometry are presented. A mathematical description of the relations between rise and decay time and their influence on holographic interferometry with BR-films is presented and the

T. Renner; N. Hampp



Time-average TV holography for vibration fringe analysis  

SciTech Connect

Time-average TV holography is a widely used method for vibration measurement. The method generates speckle correlation time-averaged J0 fringes that can be used for full-field qualitative visualization of mode shapes at resonant frequencies of an object under harmonic excitation. In order to map the amplitudes of vibration, quantitative evaluation of the time-averaged fringe pattern is desired. A quantitative evaluation procedure based on the phase-shifting technique used in two-beam interferometry has been adopted for this application with some modification. The existing procedure requires a large number of frames to be recorded for implementation. We propose a procedure that reduces the number of frames required for the analysis. The TV holographic system used and the experimental results obtained with it on an edge-clamped, sinusoidally excited square aluminium plate sample are discussed.

Kumar, Upputuri Paul; Kalyani, Yanam; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad



Scaling of average sending time on weighted Koch networks  

NASA Astrophysics Data System (ADS)

Random walks on weighted complex networks, especially scale-free networks, have attracted considerable interest in the past. But the efficiency of a hub sending information on scale-free small-world networks has been addressed less. In this paper, we study random walks on a class of weighted Koch networks with scaling factor 0 < r <= 1. We derive some basic properties for random walks on the weighted Koch networks, based on which we calculate analytically the average sending time (AST) defined as the average of mean first-passage times (MFPTs) from a hub node to all other nodes, excluding the hub itself. The obtained result displays that for 0 < r < 1 in large networks the AST grows as a power-law function of the network order with the exponent log_4((3r+1)/r), and for r = 1 in large networks the AST grows with network order as N ln N, which is larger than the linear scaling of the average receiving time defined as the average of MFPTs for random walks to a given hub node averaged over all starting points.

Dai, Meifeng; Liu, Jie



H-Coal pilot plant 1980 turnaround report. Revision  

SciTech Connect

Since the start of H-Coal operations on March 11, 1980, the plant has operated for a total of 2949 hours. Of this total time on stream, 2572 hours were expended circulating oil and 377 hours with coal feed to the unit. Approximately 1148 tons of coal were processed during this period. The coal processed was Kentucky No. 11. On November 14, 1980, the H-Coal Pilot Plant was shut down for an overall turnaround maintenance program. During the turnaround period, all operating activities in the 100, 200, and 400 Areas were terminated to enable inspection and, as necessary, repair of all major equipment. Particular emphasis was placed on inspection of the coal liquefaction section (Area 200), which had not been inspected since the initial start-up in January 1980. Additionally, the Sour Water and Sour Gas Sections (Area 400) were thoroughly inspected due to previous evidence of cooling water corrosion in several exchangers. During the turnaround, except for a very brief period, the Waste Oil Recovery Section (Area 600) continued to operate to recover start-up oil. In general, equipment inspection did not reveal any unexpected problems, and the turnaround maintenance work progressed very smoothly. By January 10, 1981, the turnaround maintenance work had been completed and plant start-up was initiated.

Searles, R.C.



Average waiting time profiles of uniform DQDB model  

SciTech Connect

The Distributed Queue Dual Bus (DQDB) system consists of a linear arrangement of N nodes that communicate with each other using two contra-flowing buses; the nodes use an extremely simple protocol to send messages on these buses. This simple, but elegant, system has been found to be very challenging to analyze. We consider a simple and uniform abstraction of this model to highlight the fairness issues in terms of average waiting time. We introduce a new approximation method to analyze the performance of DQDB system in terms of the average waiting time of a node expressed as a function of its position. Our approach abstracts the intimate relationship between the load of the system and its fairness characteristics, and explains all basic behavior profiles of DQDB observed in previous simulation. For the uniform DQDB with equal distance between adjacent nodes, we show that the system operates under three basic behavior profiles and a finite number of their combinations that depend on the load of the network. Consequently, the system is not fair at any load in terms of the average waiting times. In the vicinity of a critical load of 1 - 4/N, the uniform network runs into a state akin to chaos, where its behavior fluctuates from one extreme to the other with a load variation of 2/N. Our analysis is supported by simulation results. We also show that the main theme of the analysis carries over to the general (non-uniform) DQDB; by suitably choosing the inter-node distances, the DQDB can be made fair around some loads, but such system will become unfair as the load changes.

Rao, N.S.V. [Oak Ridge National Lab., TN (United States); Maly, K.; Olariu, S.; Dharanikota, S.; Zhang, L.; Game, D. [Old Dominion Univ., Norfolk, VA (United States). Dept. of Computer Science



Stability of stochastic nonlinear switched systems with average dwell time  

NASA Astrophysics Data System (ADS)

This paper concerns the stability of nonlinear stochastic switched systems with delay. By using the Razumikhin method and the average dwell time approach, some sufficient criteria of stability are derived. It is shown that when the Lyapunov-like functions are decreasing in all active subsystems, the switched system is pth-moment exponentially stable. Moreover, it is also shown that under some conditions the system can be pth-moment stable and global asymptotically stable, where the Lyapunov-like functions are increasing on some intervals. The effectiveness of the proposed results is illustrated by two examples.

Wu, Xiaotai; Yan, Litan; Zhang, Wenbing; Tang, Yang



Operating turnaround strategies during crisis periods: a research on manufacturing firms  

Microsoft Academic Search

Many firms face organizational declines at some point in their life cycles because of both external and internal factors. As an alternative response to times of crisis, operating turnaround strategies aim to enhance a firm's chances of ending the threat and achieving sustainable performance recovery. This study aims to determine whether operating turnaround strategies

Mehmet Tikici; Ece Omay; Neslihan Derin; Seyda Nur Seçkin; Mehmet Cüreoğlu



Time-average fringe method for vibration mode analysis  

NASA Astrophysics Data System (ADS)

A new optical method of vibration mode analysis using time-average fringes is presented in this paper. A sequence of deformed and partly blurred sinusoidal fringe images on the surface of a vibrating membrane is grabbed by a low-sampling-rate commercial CCD camera. By Fourier transform, filtering, and inverse Fourier transform, the vibration mode is obtained from the fundamental component of the Fourier spectrum. The theoretical analysis of this approach is given, and computer simulations and experiments verify its validity. Under different excitation frequencies, the vibration modes of a vibrating surface can be qualitatively analyzed. In the experiment, when the excitation frequency changes continuously, the changing process of the vibration modes of the membrane is observed clearly. Experimental results show that this approach has the features of rapid speed, high accuracy, and a simple experimental setup.

Su, Xianyu; Zhang, Qican; Wen, Yongfu; Xiang, Liqun



Time averaged optical dipole traps for cold atoms  

NASA Astrophysics Data System (ADS)

Achieving high atom populations in optical traps created by laser light far-detuned to the red of an atomic resonance (FORT) is essential for various experiments, especially for the all-optical realization of a Bose-Einstein condensate. However, efficient loading of these traps still remains a challenge. The focus of this talk is to explain a new approach to maximizing optical trap populations. We show that the FORT population (NFORT), which is proportional to the FORT volume, saturates once a certain potential depth is reached. Our goal is to increase the volume and keep the potential depth at its saturation limit. This is accomplished by fast sweeping of the FORT beams while loading takes place. We observe a considerable increase in NFORT for certain amplitudes and frequencies of the sweeping. A detailed study of these time averaged optical traps will be presented.

Ahmadi, Peyman



Average search time for ground-vehicle target detectability  

NASA Astrophysics Data System (ADS)

This paper summarizes N.P. Travnikova's model as a method to compare average search times by military observers using powered optics such as binoculars. Both discrete and continuous scanning methods are considered for target searches. This empirical model quantifies which type of vision system is best suited for the most efficient target detection for a given field of view. An analysis is also provided of the relative importance of target diameter, background luminance, and contrast upon overall detectability, with the results compared to known field test data. The detectability of specific military ground vehicles over a variety of search and target acquisition tasks with several off-the-shelf binoculars is examined. Examples of various types of search studies for a low-contrast target, such as direct fixation on the target and line retrace time effects, are also considered. This paper consists of two sections. The first explains the derivation of the methodology and the limits of its applicability. The second offers a parametric analysis that compares the relative importance of target diameter, background luminance, and contrast upon overall target detectability, with the results compared to field test data.

Goetz, Richard C.; Gerhart, Grant R.; Matchko, Roy M.



Series Overview. Sustaining School Turnaround at Scale. Brief 1  

ERIC Educational Resources Information Center

Members of the non-profit organization Education Resource Strategies (ERS) have worked for over a decade with leaders of urban school systems to help them organize talent, time and technology to support great schools at scale. One year into the Federal program they are noticing significant differences in district turnaround approaches, engagement…

Education Resource Strategies, 2012



Turnaround Schools and the Leadership They Require  

ERIC Educational Resources Information Center

|"Turnaround Schools and the Leadership They Require" was commissioned by the Canadian Education Association (CEA) in 2006. The research, conducted by Kenneth Leithwood and Tiiu Strauss, was a two-year study of Ontario schools identified as in need of "turnaround" meaning that a significant improvement in school achievement was required. The study…

Leithwood, Kenneth; Strauss, Tiiu



Keys to Sustaining Successful School Turnarounds  

ERIC Educational Resources Information Center

To identify the changes associated with the school turnaround process, this article reviewed 15 case studies of elementary school turnaround initiatives that sustained improvements for at least two years. Changes were clustered into eight categories: leadership, school policy, programs, organizational processes, staffing, classroom practices,…

Duke, Daniel L.



School Turnarounds: The Essential Role of Districts  

ERIC Educational Resources Information Center

The inspiration for this book was a crucial observation: that if the school turnaround movement is to have widespread and lasting consequences, it will need to incorporate meaningful district involvement in its efforts. The result is a volume that considers school turnaround efforts at the district level, examining the evidence thus far and…

Zavadsky, Heather



Time dependence of average regional sulfur oxide concentrations  

SciTech Connect

The atmospheric burden of SO2 and SO4 in a large regional area has been calculated as a function of time during the July 1978 SURE intensive period. A formalism is developed in which a highly parameterized description of deposition and chemical conversion is combined with a realistic emissions inventory and with realistic trajectories as calculated with the BNL long-range air quality model, AIRSOX. Temporal variations in the regional atmospheric burden are calculated as being due to variations in the advection of material through the region, with the pollutant being advected having a characteristic half-life for its formation and decay. The calculated atmospheric burdens are divided by a nominal mixing height to derive an average ground-level concentration. Comparison is made with the corresponding concentrations derived from SURE observations. The regions for which these comparisons have been made are in the Midwest and East and are similar in size (approx. 5 x 10^5 km^2) and location to areas commonly affected by regional sulfate episodes.

Kleinman, L I; Carney, T A; Meyers, R E



Loading cold atoms into a time averaged optical dipole trap  

NASA Astrophysics Data System (ADS)

We describe our investigations of optical dipole traps for neutral atoms using high-power CO2 laser beams (FORT). Studies of these quasi-electrostatic traps led us to the creation of a Bose-Einstein condensate (BEC) by all-optical means. We have developed a new technique to increase the optical trap population, improving the spatial and phase-space densities of the atomic cloud. This enhances the evaporative cooling efficiency needed to realize a BEC. The technique is based on fast sweeping of the CO2 beam while loading the atoms from a magneto-optical trap. We have found that the FORT population (NFORT), being proportional to the FORT volume, saturates once a certain potential depth is reached. The goal is to engineer the sweeping amplitude and frequency to increase the effective volume while keeping the potential depth at its saturation limit. A considerable increase in NFORT is observed for a proper choice of the amplitude and frequency of the sweeping. A detailed study of these time averaged optical traps will be presented.

Ahmadi, Peyman



Stability of switched systems with average dwell-time  

Microsoft Academic Search

It is shown that switching among stable linear systems results in a stable system provided that switching is “slow-on-the-average”. In particular, it is proved that exponential stability is achieved when the number of switches in any finite interval grows linearly with the length of the interval, and the growth rate is sufficiently small. Moreover, the exponential stability is uniform over

J. P. Hespanha; A. Stephen Morse
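The "slow-on-the-average" result can be illustrated numerically. The sketch below (an illustrative example with an assumed pair of matrices, not taken from the paper) switches between two individually Schur-stable 2x2 discrete-time systems: switching every step destabilizes the trajectory, while a long dwell time lets it decay.

```python
def matvec(A, v):
    # 2x2 matrix-vector product.
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

def norm(v):
    return (v[0] ** 2 + v[1] ** 2) ** 0.5

# Two individually stable modes (spectral radius 0.9 each), chosen so
# that their one-step alternation is unstable (assumed illustrative pair).
A1 = [[0.9, 1.0], [0.0, 0.9]]
A2 = [[0.9, 0.0], [1.0, 0.9]]

def simulate(dwell, steps, x0=(1.0, 1.0)):
    """Switch between A1 and A2, staying `dwell` steps in each mode,
    and return the final state norm."""
    x = list(x0)
    mode = 0
    for t in range(steps):
        x = matvec(A1 if mode == 0 else A2, x)
        if (t + 1) % dwell == 0:
            mode = 1 - mode
    return norm(x)

fast = simulate(dwell=1, steps=80)   # rapid switching: the state blows up
slow = simulate(dwell=40, steps=80)  # long dwell time: the state decays
print(fast, slow)
```

The contrast between the two runs is the content of the average dwell-time condition: each mode is stable on its own, so stability of the switched system hinges on how often the switches occur.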



Time Series ARIMA Models of Undergraduate Grade Point Average.  

ERIC Educational Resources Information Center

The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…

Rogers, Bruce G.
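The estimation stage of the Box-Jenkins procedure can be reduced to its simplest case for illustration: fitting an AR(1) model, i.e. ARIMA(1,0,0), by ordinary least squares. The synthetic series and the coefficient below are assumptions for the sketch, not the paper's grade-point-average data.

```python
import random

# Synthetic AR(1) series: x[t] = phi * x[t-1] + noise.
random.seed(1)
phi_true = 0.7
x = [0.0]
for _ in range(2000):
    x.append(phi_true * x[-1] + random.gauss(0.0, 1.0))

# Least-squares regression of x[t] on x[t-1] recovers the AR coefficient.
num = sum(x[t - 1] * x[t] for t in range(1, len(x)))
den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
phi_hat = num / den
print(round(phi_hat, 3))
```

In practice the full three-stage procedure also covers model identification (choosing the AR, differencing, and MA orders) and diagnostic checking of the residuals, which this sketch omits.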


Scheduling to Minimize the Average Completion Time of Dedicated Tasks  

Microsoft Academic Search

We propose a polynomial time approximation scheme for scheduling a set of dedicated tasks on a constant number m of processors in order to minimize the sum of completion times, Pm | fix_j | ΣC_j. In addition we give a polynomial time approximation scheme for the weighted preemptive problem with release dates, Pm | fix_j, pmtn, r_j | Σw_j C_j.

Foto N. Afrati; Evripidis Bampis; Aleksei V. Fishkin; Klaus Jansen; Claire Kenyon
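The single-machine analogue of this objective is a useful reference point: with no task conflicts, ordering jobs by shortest processing time first (SPT) minimizes the sum, and hence the average, of completion times. A minimal sketch (an illustration of the objective, not the authors' approximation scheme):

```python
def average_completion_time(processing_times):
    """Average completion time when jobs run back-to-back in the given order."""
    clock, total = 0, 0
    for p in processing_times:
        clock += p          # job finishes at the current clock + its length
        total += clock      # accumulate its completion time
    return total / len(processing_times)

jobs = [5, 1, 3]
fifo = average_completion_time(jobs)           # completions 5, 6, 9 -> 20/3
spt = average_completion_time(sorted(jobs))    # completions 1, 4, 9 -> 14/3
print(fifo, spt)
```

The dedicated-task problem in the record is harder precisely because tasks require fixed processor sets and can conflict, so a simple greedy ordering no longer suffices and approximation schemes are needed.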



Rapid PCR amplification protocols decrease the turn-around time for detection of antibiotic resistance genes in Gram-negative pathogens.  


A previously designed end-point multiplex PCR assay and singleplex assays used to detect β-lactamase genes were evaluated using rapid PCR amplification methodology. Amplification times were 16-18 minutes with an overall detection time of 1.5 hours. Rapid PCR amplifications could decrease the time required to identify resistance mechanisms in Gram-negative organisms. PMID:23891223

Geyer, Chelsie N; Hanson, Nancy D



The time-averaged paleomagnetic field 0–5 Ma  

Microsoft Academic Search

Persistent departures from the geocentric axial dipole field model of the time-averaged paleomagnetic field over the past 5 Myr have been analyzed using oceanic data from deep-sea cores and continental data from igneous rocks and sediments. The data set comprises the equivalent of 9490 spot readings of the field (5831 normal and 3659 reverse) from 930 groups of data.

M. W. McElhinny; P. L. McFadden; R. T. Merrill



A New Mathematical Model of Air Pollutant Concentration Averaging Time and Frequency  

Microsoft Academic Search

A previous mathematical model of air pollutant concentration, averaging time, and frequency gave results close to actual measurements for the relatively short averaging times for which it was used. However, recent calculations show that mathematically unsound results are obtained with the model for averaging times longer than one month. A new mathematical model, described herein, gives mathematically sound results for

R. I. Larsen



Pre-K as School Turnaround Strategy  

ERIC Educational Resources Information Center

Both state and local school systems, recognizing the strong and growing evidence of pre-kindergarten's effectiveness in closing achievement gaps, are implementing early learning programs as part of their education reform efforts. In low-performing districts, pre-k has emerged as a promising turnaround strategy, reaching children before they become…

Buenafe, Anthony Q.



Relegation and Reversion: Longitudinal Analysis of School Turnaround and Decline  

ERIC Educational Resources Information Center

School turnaround has become a popular strategy for improvement of chronically low-performing schools. Research related to school turnaround has relied substantially upon case studies. These case studies often focus on successful turnarounds and report immediate outcomes, failing to provide information about the sustainability of the results. In…

Hochbein, Craig



Trend analysis of total ozone data for turnaround and dynamical contributions  

Microsoft Academic Search

Statistical trend analyses have been performed for monthly zonal average total ozone data from both TOMS and SBUV satellite sources and ground-based instruments over the period 1978–2002 for detection of a “turnaround” in the previous downward trend behavior and hence evidence for the beginning of an ozone recovery. Since other climatic and geophysical changes can impact ozone behavior and can

Gregory C. Reinsel; Alvin J. Miller; Elizabeth C. Weatherhead; Lawrence E. Flynn; Ronald M. Nagatani; George C. Tiao; Donald J. Wuebbles



The consequences of time averaging for measuring temporal species turnover in the fossil record  

NASA Astrophysics Data System (ADS)

Modeling time averaging effects with simple simulations allows us to evaluate the magnitude of change in temporal species turnover that is expected to occur in long (paleoecological) time series with fossil assemblages. Distinguishing different modes of metacommunity dynamics (such as neutral, density-dependent, or trade-off dynamics) with time-averaged fossil assemblages requires scaling up time-averaging effects because the decrease in temporal resolution and the decrease in temporal inter-sample separation (i.e., the two main effects of time averaging) substantially increase community stability relative to assemblages without or with weak time averaging. Large changes in temporal scale that cover centuries to millennia can lead to unprecedented effects on the temporal rate of change in species composition. Temporal variation in species composition monotonically decreases with increasing duration of time averaging in simulated fossil assemblages. Time averaging is also associated with the reduction of species dominance owing to temporal switching in the identity of dominant species. High degrees of time averaging can cause the community parameters of local fossil assemblages to converge to those of the metacommunity rather than to those of individual local non-averaged communities. We find that the low variation in species composition observed among mollusk and ostracod subfossil assemblages can be explained by time averaging alone; low temporal resolution and reduced temporal separation among assemblages in time series can thus explain a substantial part of the reduced variation in species composition relative to unscaled predictions of the neutral model (i.e., that species do not differ in birth, death, and immigration rates on a per capita basis).
The structure of time-averaged assemblages can thus provide important insights into processes that act over larger temporal scales, such as evolution of niches and dispersal, range-limit dynamics, taxon cycles, and speciation modes.

Tomašových, Adam; Kidwell, Susan



The three-dimensional time and volume averaged conservation equations of two-phase flow  

SciTech Connect

The purpose of this paper is to present a concise derivation of the time- and volume-averaged conservation equations of two-phase flow. These equations are in a form compatible with numerical evaluations using advanced-generation, two-fluid computer codes. The approach presented herein is a three-dimensional generalization of the one-dimensional spatial averaging methods previously discussed by Boure and Delhaye and by Banerjee. Readers interested in a detailed description of one-dimensional averaging techniques should consult these references, and the work of Ishii, which includes a rigorous treatment of time averaging. 17 refs., 6 figs., 3 tabs.

Lahey, R.T. Jr.; Drew, D.A. [Rensselaer Polytechnic Institute, Troy, NY (United States)



Distributions of time averages for weakly chaotic systems: The role of infinite invariant density  

NASA Astrophysics Data System (ADS)

Distributions of time averaged observables are investigated using deterministic maps with N indifferent fixed points and N-state continuous time random walk processes associated with them. In a weakly chaotic phase, namely when separation of trajectories is subexponential, maps are characterized by an infinite invariant density. We find that the infinite density can be used to calculate the distribution of time averages of integrable observables with a formula recently obtained by Rebenshtok and Barkai. As an example we calculate distributions of the average position of the particle and average occupation fractions. Our work provides the distributional limit theorem for time averages for a wide class of nonintegrable observables with respect to the infinite invariant density, in other words it deals with the situation where the Darling-Kac-Aaronson theorem does not hold.

Korabel, Nickolay; Barkai, Eli
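The weakly chaotic maps described above can be illustrated with a minimal simulation. The Pomeau-Manneville-type map x → x + x^z (mod 1) used below is a generic stand-in for a map with an indifferent fixed point; the map choice, the parameter z, and the trajectory counts are illustrative assumptions, not the authors' setup:

```python
import random

def time_averaged_position(x, z=2.5, n_steps=20000):
    """Time average of x along one trajectory of the intermittent map
    x -> x + x**z (mod 1), which has an indifferent fixed point at x = 0."""
    total = 0.0
    for _ in range(n_steps):
        x = (x + x ** z) % 1.0
        total += x
    return total / n_steps

# Ensemble of trajectories: in the weakly chaotic phase the time averages
# do not collapse onto a single ergodic value but remain distributed.
random.seed(0)
averages = [time_averaged_position(random.random()) for _ in range(50)]
print(f"min/max time average: {min(averages):.3f} / {max(averages):.3f}")
```

The persistent spread of the time averages across trajectories is the distributional behaviour that the infinite invariant density formalism quantifies.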



Include tech service engineers in turnaround inspections  

SciTech Connect

Process unit startup problems can be minimized by having technical service or process engineers perform equipment turnaround inspections. Most plants have inspectors who are specifically trained to identify corrosion, fatigue and other conditions that lead to deterioration of equipment mechanical integrity. Inspectors are also concerned with the proper assembly of equipment to ensure that all construction agrees with the field drawings. Training and knowledge of how process equipment actually operates often varies among individual inspectors. This, unfortunately, can lead to process equipment performance problems after startup with equipment that is reportedly in good mechanical condition.

Miller, J.E.



Turnaround Necessities: Basic Conditions for an Effective, Sustainable, and Scalable School Turnaround  

ERIC Educational Resources Information Center

Turning around chronically low-performing schools is challenging work requiring fundamental rethinking of the change process, and a systemic rather than school-by-school approach. Without a doubt, high-impact school leaders are critical to turnaround success, and pockets of success around the country demonstrate this. However, transformational…

Robinson, William S.; Buntrock, LeAnn M.



Decomposition-order effects of time integrator on ensemble averages for the Nosé-Hoover thermostat  

NASA Astrophysics Data System (ADS)

Decomposition-order dependence of time development integrator on ensemble averages for the Nosé-Hoover dynamics is discussed. Six integrators were employed for comparison, which were extensions of the velocity-Verlet or position-Verlet algorithm. Molecular dynamics simulations by these integrators were performed for liquid-argon systems with several different time steps and system sizes. The obtained ensemble averages of temperature and potential energy were shifted from correct values depending on the integrators. These shifts increased in proportion to the square of the time step. Furthermore, the shifts could not be removed by increasing the number of argon atoms. We show the origin of these ensemble-average shifts analytically. Our discussion can be applied not only to the liquid-argon system but also to all MD simulations with the Nosé-Hoover thermostat. Our recommended integrators among the six integrators are presented to obtain correct ensemble averages.

Itoh, Satoru G.; Morishita, Tetsuya; Okumura, Hisashi



Calculation of monthly average collector operating time and parasitic energy requirements  

NASA Astrophysics Data System (ADS)

The utilizability concept of Whillier (1953) and Liu and Jordan (1963, 1977) has recently been extended to the calculation of monthly-average useful energy gain of solar collectors. Klein (1978) has presented a utilizability method for determining the useful energy gain of flat-plate collectors. Collares-Pereira and Rabl (1979) present another utilizability method for both flat-plate and concentrating collectors. The considered investigation shows that these utilizability methods can also be used for calculating average collector operating time and parasitic energy use. Attention is given to utilizability, the relationship of collector operating time to utilizability, an estimation of the average critical level, and the estimation of the average daily operating time for the collector fluid circulation pump in a residential water heating system during July in Albuquerque, New Mexico.

Mitchell, J. C.; Theilacker, J. C.; Klein, S. A.


Continuous Time Average Cost Flexible Manufacturing and Operator Scheduling Model Solved by Deconvexificiation over Time.  

National Technical Information Service (NTIS)

A continuous time flexible manufacturing and operator scheduling problem is introduced and solved. The principal concern is with scheduling operators over time to various activities of a manufacturing system with the purpose of optimizing some steady stat...

B. C. Eaves; U. G. Rothblum



Approximation Schemes for Minimizing Average Weighted Completion Time with Release Dates  

Microsoft Academic Search

We consider the problem of scheduling jobs with release dates on machines so as to minimize their average weighted completion time. We present the first known polynomial time approximation schemes for several variants of this problem. Our results include PTASs for the case of identical parallel machines and a constant number of unrelated machines with and without

Foto N. Afrati; Evripidis Bampis; Chandra Chekuri; David R. Karger; Claire Kenyon; Sanjeev Khanna; Ioannis Milis; Maurice Queyranne; Martin Skutella; Clifford Stein; Maxim Sviridenko



Time Series Forecasting of Averaged Data With Efficient Use of Information  

Microsoft Academic Search

Time series methods have been a popular tool for the analysis and forecasting of a large number of data. Very often, the applied approaches' forecasts had limited success, and the main reason was the lack of statistically significant historical information. We focus our attention on three common series, which are formed from the averaging of data collected over a shorter time

Athanasios Sfetsos; Costas Siriopoulos



"Turnaround" as Shock Therapy: Race, Neoliberalism, and School Reform  

ERIC Educational Resources Information Center

"Turnaround" strategies of educational reform promise that school closure, reconstitution, privatizing, and reopening them will bring miraculous results. Questioning the implications, this article situates "turnaround" strategies locally, following the closure of a predominantly minority high school in 2008, in Austin, Texas. The neoliberal…

Johnson, Amanda Walker



Turnaround as Reform: Opportunity for Meaningful Change or Neoliberal Posturing?  

ERIC Educational Resources Information Center

This study explores the neoliberal agenda of turnaround school reform efforts in America by examining the application and transformation of a Midwest State Turnaround Schools Project for the public school system. Perceptions of administrators and state-level policy actors are considered. Data were collected from 13 participants during the…

Mette, Ian M.



Using Competencies to Improve School Turnaround Principal Success  

ERIC Educational Resources Information Center

This paper aims first to shed light on one element of leadership: the characteristics--or "competencies"--of turnaround leaders who succeed in driving rapid, dramatic change. Second, it recounts the elements of support that districts must provide these leaders to enable and sustain a portfolio of successful school turnarounds. Fortunately,…

Steiner, Lucy; Hassel, Emily Ayscue



Average Properties of the Time Behaviour of Gamma-Ray Bursts  

Microsoft Academic Search

The average peak-aligned profile of all bursts detected by BATSE with sufficient data quality has a simple "stretched" exponential shape, F ~ exp[-(t/t_0)^{1/3}], where t is the time measured from the time of the peak flux of the event, and t_0 is a time constant. We study the behaviour of t_0 of both the post-peak and the pre-peak slopes of

Boris Stern; Roland Svensson; Juri Poutanen



Factors affecting the Turnaround Time for Manufacturing, Testing and Release of Cellular Therapy Products prepared at Multiple Sites in support of Multicenter Cardiovascular Regenerative Medicine Protocols - a Cardiovascular Cell Therapy Research Network (CCTRN) Study  

PubMed Central

BACKGROUND Cellular therapy studies are often conducted at multiple clinical sites in order to accrue larger patient numbers. In many cases this necessitates use of localized Good Manufacturing Practices (GMP) facilities to supply the cells. To assure consistent quality, oversight by a quality assurance group is advisable. In this study we report the findings of such a group established as part of the Cardiovascular Cell Therapy Research Network (CCTRN) studies involving use of autologous bone marrow mononuclear cells (ABMMC) to treat myocardial infarction and heart failure. STUDY DESIGN Factors affecting cell manufacturing time were studied in 269 patients enrolled on 3 CCTRN protocols using Sepax-separated ABMMC. The cells were prepared at 5 GMP cell processing facilities and delivered to local treatment sites or more distant satellite Centers. RESULTS Although the Sepax procedure takes only 90 minutes, the total time for processing was approximately seven hours. Contributing to this were incoming testing and device preparation, release testing, patient randomization and product delivery. The out-of-body time (OBT), which was required to be <12 hours, averaged 9 hours. A detailed analysis of practices at each Center revealed a variety of factors that contributed to this OBT. CONCLUSION We conclude that rapid cell enrichment procedures may give a false impression of the time actually required to prepare a cellular therapy product for release and administration. Institutional procedures also differ and can contribute to delays; however, in aggregate it is possible to achieve an overall manufacturing and testing time that is similar at multiple facilities.

Richman, Sara; Gee, Adrian P.; McKenna, David; Traverse, Jay; Henry, Timothy; Fisk, Diann; Pepine, Carl; Bloom, Jeannette; Willerson, James; Prater, Karen; Zhao, David; Koc, Jane Reese; Anwaruddian, Saif; Taylor, Doris; Cogle, Christopher; Moye, Lemuel; Simari, Robert; Skarlatos, Sonia



A diagnostic study of time variations of regionally averaged background error covariances  

NASA Astrophysics Data System (ADS)

In variational data assimilation systems, background error covariances are often estimated from a temporal and spatial average. For a limited area model such as the Aire Limitée Adaptation Dynamique Développement International (ALADIN)/France, the spatial average is calculated over the regional computation domain, which covers western Europe. The purpose of this study is to revise the temporal stationarity assumption by diagnosing time variations of such regionally averaged covariances. This is done through examination of covariance changes as a function of season (winter versus summer), day (in connection with the synoptic situation), and hour (related to the diurnal cycle), with the ALADIN/France regional ensemble Three-Dimensional Variational analysis (3D-Var) system. In summer, compared to winter, average error variances are larger, and spatial correlation functions are sharper horizontally but broader vertically. Daily changes in covariances are particularly strong during the winter period, with larger variances and smaller-scale error structures when an unstable low-pressure system is present in the regional domain. Diurnal variations are also significant in the boundary layer in particular, and, as expected, they tend to be more pronounced in summer. Moreover, the comparison between estimates provided by two independent ensembles indicates that these covariance time variations are estimated in a robust way from a six-member ensemble. All these results support the idea of representing these time variations by using a real-time ensemble assimilation system.

Monteiro, Maria; Berre, LoïK.



Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages.  

National Technical Information Service (NTIS)

A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics that is designed to capture this property, predicts power law scal...

P. K. Kundu; T. L. Bell



On a Life Distribution Family Based on Total Time on Test on Average: NBUTA  

Microsoft Academic Search

A new class of life distributions, namely new better than used in the total time on test on average transform ordering (TTTA), is introduced. The relationship of this class to other classes of life distributions, and closure properties under some reliability operations, are discussed. We provide a simple argument based on stochastic orders that the class is closed under the

A. N. Ahmed; H. M. Hewedi; E. A. Rakha; E. M. Shokry


Impacts of different data averaging times on statistical analysis of distributed domestic photovoltaic systems  

SciTech Connect

The trend of increasing application of distributed generation with solar photovoltaics (PV-DG) suggests that a widespread integration in existing low-voltage (LV) grids is possible in the future. With massive integration in LV grids, a major concern is the possible negative impacts of excess power injection from on-site generation. For power-flow simulations of such grid impacts, an important consideration is the time resolution of demand and generation data. This paper investigates the impact of time averaging on high-resolution data series of domestic electricity demand and PV-DG output and on voltages in a simulated LV grid. Effects of 10-minutely and hourly averaging on descriptive statistics and duration curves were determined. Although time averaging has a considerable impact on statistical properties of the demand in individual households, the impact is smaller on aggregate demand, already smoothed from random coincidence, and on PV-DG output. Consequently, the statistical distribution of simulated grid voltages was also robust against time averaging. The overall judgement is that statistical investigation of voltage variations in the presence of PV-DG does not require higher resolution than hourly. (author)

Widen, Joakim; Waeckelgaard, Ewa [Department of Engineering Sciences, The Aangstroem Laboratory, Uppsala University, P.O. Box 534, SE-751 21 Uppsala (Sweden)]; Paatero, Jukka; Lund, Peter [Advanced Energy Systems, Helsinki University of Technology, P.O. Box 2200, FI-02015 HUT (Finland)]
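The smoothing effect of time averaging described in the abstract above can be sketched with synthetic data. The spiky one-minute "demand" series below is a made-up stand-in for the high-resolution household data in the study, not the actual dataset:

```python
import math
import random

def block_average(series, block):
    """Average consecutive blocks of `block` samples, e.g. sixty
    one-minute samples -> one hourly mean."""
    n = len(series) // block
    return [sum(series[i * block:(i + 1) * block]) / block for i in range(n)]

def std(series):
    """Population standard deviation."""
    m = sum(series) / len(series)
    return math.sqrt(sum((x - m) ** 2 for x in series) / len(series))

# Synthetic one-minute "demand": a 0.5 kW base load with occasional 3 kW
# spikes, standing in for an individual household's load profile.
random.seed(1)
minute = [0.5 + (3.0 if random.random() < 0.05 else 0.0) for _ in range(24 * 60)]
hourly = block_average(minute, 60)

print(f"minute-level std: {std(minute):.3f}")
print(f"hourly std:       {std(hourly):.3f}")  # averaging smooths the spikes
```

The mean is preserved under block averaging, but the dispersion drops sharply, which is the statistical effect the paper quantifies for demand, PV-DG output, and simulated grid voltages.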



Shock structures in time-averaged patterns for the Kuramoto-Sivashinsky equation  


The Kuramoto-Sivashinsky equation with fixed boundary conditions is numerically studied. Shocklike structures appear in the time-averaged patterns for some parameter range of the boundary values. The effective diffusion constant is estimated from the relation between the width and the height of the shock structures. PMID:11138191




Long time-averaged solutions of turbulent flow past a circular cylinder  

Microsoft Academic Search

A critical analysis of numerical predictions of the long time averaged flow past a circular cylinder is presented. The flow regimes considered cover a wide range of Reynolds numbers with an emphasis on the transitional range, i.e. (10^4 to 10^7). In this study calculations are made using the standard k-ε model with an empirically imposed transition criterion. Our own simulations

Ismail Celik; Franklin D. Shaffer



The unbalance and bounds on the average waiting time for periodic routing to one queue  

Microsoft Academic Search

In this paper we introduce the combinatorial notion of unbalance for a periodic zero-one splitting sequence. Using this unbalance we derive an upper bound for the average expected waiting time of jobs which are routed to one queue according to a periodic zero-one splitting sequence. In the companion paper [16] the upper bound will be extended to the routing to

Arie Hordijk; Dinard van der Laan



Optimal Rate Assignment Strategy to Minimize Average Waiting Time in Wireless Networks  

Microsoft Academic Search

In a wireless network that supports multiple flows, allocation of bandwidth resource among the flows is one of the critical problems. Different allocation strategies have been developed based on different optimization objectives. Unfortunately, these objectives may not reflect directly the time needed for a flow to transmit what it wants. In this paper, we define a new objective, average waiting

Hongfei Zeng; Ronghui Hou; King-Shan Lui



An upper bound to time-averaged space-charge limited diode currents  

SciTech Connect

The Child-Langmuir law limits the steady-state current density across a one-dimensional planar diode. While it is known that the peak current density can surpass this limit when the boundary conditions vary in time, it remains an open question of whether the average current can violate the Child-Langmuir limit under time-dependent conditions. For the case where the applied voltage is constant but the electric field at the cathode is allowed to vary in time, one-dimensional particle-in-cell simulations suggest that such a violation is impossible. Although a formal proof is not given, an upper bound on the time-averaged current density is offered.

Griswold, M. E.; Fisch, N. J. [Princeton Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States)]; Wurtele, J. S. [Department of Physics, University of California at Berkeley, Berkeley, California 94720 (United States)]
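For reference, the steady-state Child-Langmuir limit discussed above can be evaluated directly from the standard planar-diode formula; the voltage and gap values used below are purely illustrative:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
Q_E = 1.602176634e-19    # elementary charge, C
M_E = 9.1093837015e-31   # electron mass, kg

def child_langmuir_j(voltage, gap):
    """Child-Langmuir space-charge limited current density (A/m^2) for a
    planar diode: J = (4*eps0/9) * sqrt(2e/m) * V**1.5 / d**2."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * Q_E / M_E) * voltage ** 1.5 / gap ** 2

j1 = child_langmuir_j(1000.0, 0.01)  # 1 kV across a 1 cm gap (illustrative)
j2 = child_langmuir_j(2000.0, 0.01)
print(f"J(1 kV, 1 cm) = {j1:.1f} A/m^2")
print(f"J(2 kV)/J(1 kV) = {j2 / j1:.3f}")  # V**1.5 scaling
```

The paper's question is whether the *time-averaged* current density can exceed this steady-state value when boundary conditions vary in time; the formula itself only bounds the steady state.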



A real-time QRS detection method based on moving-averaging incorporating with wavelet denoising  

Microsoft Academic Search

In this paper, a simple moving average-based computing method for real-time QRS detection is proposed. In addition, for signal preprocessing our detection algorithm also incorporates a wavelet-based denoising procedure to effectively reduce the noise level for electrocardiogram (ECG) data. The overall computational structure of the proposed algorithm allows the QRS detection to be performed and implemented in real-time with high

Szi-wen Chen; Hsiao-chen Chen; Hsiao-lung Chan
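A minimal sketch of moving-average-based peak detection in the spirit of the abstract above, assuming a synthetic spike train in place of real ECG data (the authors' full algorithm also incorporates wavelet denoising, which is omitted here):

```python
import math

def moving_average(signal, window):
    """Causal moving average with a ramp-up over the first `window` samples."""
    out, acc = [], 0.0
    for i, x in enumerate(signal):
        acc += x
        if i >= window:
            acc -= signal[i - window]
        out.append(acc / min(i + 1, window))
    return out

def upward_crossings(signal, threshold):
    """Indices where the signal crosses the threshold from below."""
    return [i for i in range(1, len(signal))
            if signal[i - 1] < threshold <= signal[i]]

# Synthetic trace: a small sinusoidal baseline plus a sharp "QRS" spike
# every 100 samples (a made-up stand-in for denoised ECG data).
sig = [0.1 * math.sin(0.05 * i) + (1.0 if i % 100 == 0 else 0.0)
       for i in range(1, 501)]
energy = [x * x for x in sig]          # squaring emphasises sharp deflections
smooth = moving_average(energy, 5)
beats = upward_crossings(smooth, 0.1)
print(f"detected beats at indices: {beats}")
```

Squaring followed by a short moving average is the classic way to turn sharp QRS deflections into broad energy bumps that a simple threshold can pick out; the window length and threshold here are illustrative choices.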



Perceptually averaging in a continuous visual world: extracting statistical summary representations over time.  


Beyond processing individual features and objects, the visual system can also efficiently summarize scenes--for example, allowing observers to perceive the average size of a group of objects. Extraction of such statistical summary representations (SSRs) is fast and accurate, but researchers do not yet have a clear picture of the circumstances in which they operate. Previous studies have always used discrete input--either spatial arrays of shapes or temporal sequences of shapes presented one at a time. Real-world environments, in contrast, are intrinsically continuous and dynamic. We investigated the ability to compute average size in displays of objects (or sometimes a single object) that changed continuously, expanding and contracting over time. The results indicated that perceptual averaging can operate continuously in dynamic displays--sampling multiple times during a single continuous transformation with no discrete boundaries. Moreover, some dynamic changes (expansion) influence the resulting perceptual averages more than others (contraction), perhaps because of attentional capture. These results collectively illustrate how SSRs may be well adapted to dynamically changing real-world environments. PMID:20424102

Albrecht, Alice R; Scholl, Brian J



Inverse methods for estimating primary input signals from time-averaged isotope profiles  

NASA Astrophysics Data System (ADS)

Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.

Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.
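The minimum-length solution of Am = d described in the abstract above can be sketched as follows. The moving-average matrix A used here is a crude illustrative stand-in for the true amelogenesis averaging kernel, and the sinusoidal "seasonal" input is hypothetical:

```python
import numpy as np

def averaging_matrix(n_input, window):
    """Build A for A m = d: each measured value is the mean of `window`
    consecutive input values -- a simplified stand-in for the temporal and
    spatial averaging of enamel formation and sampling."""
    n_out = n_input - window + 1
    A = np.zeros((n_out, n_input))
    for i in range(n_out):
        A[i, i:i + window] = 1.0 / window
    return A

m_true = np.sin(np.linspace(0.0, 2.0 * np.pi, 40))  # hypothetical seasonal input
A = averaging_matrix(40, 8)
d = A @ m_true                                       # time-averaged "measured" profile

# Minimum-length (minimum-norm) solution of the underdetermined system A m = d.
m_est = np.linalg.pinv(A) @ d
print(f"residual: {np.max(np.abs(d - A @ m_est)):.2e}")
```

Because the system is underdetermined, the pseudoinverse returns the solution of smallest norm that exactly reproduces the averaged profile; the paper's method additionally accounts for measurement error and the specific geometry of amelogenesis.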



Minimum average-cost path for real time 3D coronary artery segmentation of CT images.  


In this paper, we propose a Minimum Average-cost Path (MACP) model for segmenting 3D coronary arteries by minimizing the average edge cost along a path in a discrete 4D graph constructed from image voxels and associated radii. Prim's Minimum Spanning Tree method is used for efficient optimization of the MACP model. The centerline and the radii of the cross sections of the coronary artery are extracted simultaneously during the optimization. The method does not need any image preprocessing steps and has been intensively validated as an effective approach with the Rotterdam Coronary Artery Algorithm Evaluation Framework. The computational cost of the proposed method is particularly low (7.467 seconds per segment, 18.5 mm/s on average), which makes real-time segmentation of the coronary artery possible. The shortcut problem, a classic issue of minimal path techniques, can also be overcome by the proposed method. PMID:22003729

Zhu, Ning; Chung, Albert C S



Measurement of fluid properties using rapid-double-exposure and time-average holographic interferometry  

SciTech Connect

The holographic recording of the time history of a flow feature in three dimensions is discussed. The use of diffuse-illumination holographic interferometry for the three-dimensional visualization of flow features such as shock waves and turbulent eddies is described. The double-exposure and time-average methods are compared using the characteristic function and the results from a flow simulator. A time history requires a large hologram recording rate. Results of holographic cinematography of the shock waves in a flutter cascade are presented as an example. Future directions of this effort, including the availability and development of suitable lasers, are discussed.

Decker, A.J.



Time-average-based Methods for Multi-angular Scale Analysis of Cosmic-Ray Data  

NASA Astrophysics Data System (ADS)

Over the past decade, a number of experiments have dealt with the problem of measuring the arrival direction distribution of cosmic rays, looking for information on propagation mechanisms and the identification of their sources. Any deviation from isotropy may be regarded as a signature of unforeseen or unknown phenomena, especially if well localized in the sky and occurring at low rigidity. This led experimenters to search for excesses down to angular scales as narrow as 10°, raising the issue of properly filtering out contributions from wider structures. A commonly envisaged solution was based on time-average methods to determine the reference value of the cosmic-ray flux. Such techniques are nearly insensitive to signals wider than the time window in use, thus allowing the analysis to focus on medium- and small-scale signals. Nonetheless, the signal often cannot be excluded from the calculation of the reference value, which induces systematic errors. The use of time-average methods has recently revealed important discoveries about the medium-scale cosmic-ray anisotropy, present in both the northern and southern hemispheres. It is known that an excess (or deficit) is observed as less intense than it really is, and that spurious deficit zones are rendered around true excesses, because there is no a priori knowledge of which signal is true and which is not. This work is an attempt to critically review the use of time-average-based methods for observing extended features in the cosmic-ray arrival distribution pattern.

Iuppa, R.; Di Sciascio, G.



Detecting Non-Geocentric Axial Dipole Structure in the Time-Averaged Field  

NASA Astrophysics Data System (ADS)

The approximation of Earth's magnetic field by a geocentric axial dipole (GAD) is central to applications of paleomagnetism in global tectonics. Quantifying non-GAD contributions to the geomagnetic field is not only important because of the consequences for tectonic studies, but is essential to understanding the role of inner core growth and core-mantle boundary influences on field generation. Significant departures from GAD, in particular those that can be represented by a zonal octupole field, have been suggested over long time periods (10^8--10^9 yr) prior to 250 Myr ago. However, biased estimates of the time-averaged field direction are obtained from unit vectors (i.e., in the absence of paleointensity data); deviations of inclination from that predicted by a GAD field are well approximated by a zonal octupole contribution. Simulations from statistical models for paleosecular variation that match 0--5 Myr paleodirection and paleointensity data indicate that the biases in predicted inclinations are on the order of 2°. Increased secular variation would result in larger biases, so this effect may be important during periods of low paleointensity and/or earlier in Earth's history when the inner core was smaller. Over shorter time intervals (10^6 yr), the time-averaged field shows smaller, but observable departures from GAD. Although a zonal quadrupole contribution has been considered robust, proposed longitudinal (non-zonal) structure in the time-averaged field has been challenged on the grounds of inadequate spatial and temporal data coverage, data quality, and contamination by local tectonic effects. It is now possible to compile regional data sets comprising paleodirection and occasionally paleointensity data from tens to hundreds of sites, and spanning the period 0--5 Ma. These include Hawaii, Reunion, Japan, French Polynesia, New Zealand, and North America. 
In addition, a recent sampling program has focused on obtaining paleomagnetic data from 0--5 Ma lava flows from previously under-sampled high latitude and the southern hemisphere regions. New time-averaged field models constructed from 0--5 Ma normal polarity data have improved data coverage in the southern hemisphere and suggest the presence of southern hemisphere flux lobes, undetectable with previous data sets. We use 0--5 Ma data sets, new time-averaged field models, and statistical models to examine the conditions required regionally and globally to detect non-GAD field structure.

Johnson, C. L.; Constable, C. G.; Lawrence, K.
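For reference, the GAD hypothesis discussed in the record above predicts inclination I from geomagnetic latitude λ through the standard dipole relation; only this baseline formula is shown, with non-GAD (e.g. zonal quadrupole and octupole) terms treated as perturbations to it:

```latex
\tan I = 2\tan\lambda
```

Departures of observed time-averaged inclination from this prediction are the primary signal used to detect non-GAD structure when only directional (unit vector) data are available.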



Time-averaged in-line digital holographic interferometry for vibration analysis  

SciTech Connect

Time-averaged in-line digital holography is applied for vibration analysis. In particular, by use of a double-exposure approach, simultaneous determination of vibration mode shape and mean static state deformation during a vibration cycle are obtained. The subtraction of two numerically reconstructed digital holograms recorded at the same resonant frequency but with a small difference in amplitude shows the mixing of Bessel-type time-averaged fringes owing to vibration and of the double-exposure fringes owing to differences in the mean deformation of the object. It is shown that separation of these fringe patterns can be readily accomplished numerically. An experimental demonstration of this effect by use of in-line digital holography for relatively small membranes is demonstrated.

Asundi, Anand; Singh, Vijay Raj



Average discharge rate representation of voice onset time in the chinchilla auditory nerve  

Microsoft Academic Search

Responses of chinchilla auditory-nerve fibers to synthesized stop consonants differing in voice onset time (VOT) were obtained. The syllables, heard as /ga/--/ka/ or /da/--/ta/, were similar to those previously used by others in psychophysical experiments with human and with chinchilla subjects. Average discharge rates of neurons tuned to the frequency region near the first formant generally increased at the onset

Donal G. Sinex; L. P. McDonald



Maximizing average efficiency of process time for pressure die casting in real foundries  

Microsoft Academic Search

Scheduling of casting processes is the problem of determining the number of products that will be manufactured in each casting shift so as to optimize a predetermined objective function. In this paper, we propose a linear programming (LP) model that maximizes the average efficiency of process time for casting in real foundries. The considered casting process is pressure die casting,

Yong Kuk Park; Jung-Min Yang



PIV measurements of the time-averaged flow velocity downstream of flow conditioners in a pipeline  

Microsoft Academic Search

The flow downstream of three different flow conditioners, a tube bundle and two perforated plates, was investigated by measuring the time-averaged, axial velocity component with Particle Image Velocimetry (PIV). The conditioners were exposed to the flow disturbed by a 90° out-of-plane double-bend. The experiments were performed with air flow through a pipeline of 100 mm i.d. and at Reynolds numbers

Th. Schlüter; W. Merzkirch



A fast, time averaging data acquisition system for the PC-AT bus  

Microsoft Academic Search

A data acquisition system designed for magnetic resonance instrumentation is presented. Its performance may be suitable for a broad set of applications where repetitive and noisy signals must be time-averaged in order to increase the signal-to-noise ratio. The system is built from easily available, low cost parts on a standard board for the PC-AT (16-bit) bus. 12-bit data acquisition and

Roberto Ambrosetti; Domenico Ricci



Inverse methods for estimating primary input signals from time-averaged isotope profiles  

Microsoft Academic Search

Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This

Benjamin H. Passey; Thure E. Cerling; Gerard T. Schuster; Todd F. Robinson; Beverly L. Roeder; Stephen K. Krueger



Simple time weighted average level air-monitoring method for sulfur mustard in work places  

Microsoft Academic Search

A simple air-monitoring method was developed and validated for vesicant sulfur mustard (HD) in the work place at time weighted average levels. XAD-2 adsorbent was used for the collection of HD vapours for 8 h followed by liquid desorption with carbon tetrachloride. The trace level analysis was carried out by gas chromatography–photoionization detection and was confirmed by gas chromatography–mass spectrometric

M. V. S Suryanarayana; R. K Shrivastava; Diwakar Pandey; R Vaidyanathaswamy; Sumeet Mahajan; Debdas Bhoumik



Time-averaged and time-dependent energy-related quantities of harmonic waves in inhomogeneous viscoelastic anisotropic media  

Microsoft Academic Search

The energy-flux vector and other energy-related quantities play an important role in various wave propagation problems. In acoustics and seismology, the main attention has been devoted to the time-averaged energy flux of time-harmonic wavefields propagating in non-dissipative, isotropic and anisotropic media. In this paper, we investigate the energy-flux vector and other energy-related quantities of wavefields propagating in

Ivan Psenc ´ ik; Ke Karlovu


Semi-Markov Control Processes with Unknown Holding Times Distribution Under an Average Cost Criterion  

SciTech Connect

This paper deals with a class of semi-Markov control models with Borel state and control spaces, possibly unbounded costs, and unknown holding times distribution F. Assuming that F does not depend on state-action pairs, we combine suitable methods of statistical estimation of the mean holding time with control procedures to construct an average cost optimal Markovian policy π̂ = {f_n}, and an optimal stationary policy f_∞, where f_n converges to f_∞ in the sense of Schäl.

Luque-Vasquez, Fernando; Minjarez-Sosa, J. Adolfo; Rosas-Rosas, Luz del Carmen, E-mail: lcrosas@gauss.mat.uson.m [Universidad de Sonora, Departamento de Matematicas (Mexico)]



Comparison of Approaches for Estimating Time-Averaged Precipitation Using Data from the USA  

NASA Astrophysics Data System (ADS)

Spatial and temporal sampling errors inherent in large-scale, weather-station (raingauge) climatologies of precipitation are evaluated. A primary goal is to assess whether more representative large-scale precipitation climatologies emerge when (i) more station means are included, even when they are based on unequal periods of record, or (ii) fewer station means are included but all are derived from the same period of record. Observations drawn from the Historical Climatology Network (HCN) are used to estimate temporally averaged precipitation over 10-, 20-, and 30-year intervals at 457 stations within the USA. Two strategies for estimating these observed means are examined, one based on temporal substitution within each station record, and the other based on spatial interpolation from surrounding stations. Temporally estimated m-year means were obtained by substituting other m-year means, from within the same station record, for each observed m-year mean, where m is the length of the averaging period of interest. Spatially interpolated m-year means were estimated from m-year means associated with nearby stations. Climatologies containing a greater number of station averages, even if they are computed over unequal averaging periods, appear to better represent the space-time variability in mean precipitation than climatologies containing fewer, but temporally commensurate, station means. Our results (for precipitation-station networks on the order of 50-60 stations per 10^6 km^2) indicate that the within-station-record substitution of means is about 1.3 to 2.5 times more accurate than is interpolation from surrounding station means. Within-station substitution errors - associated with estimating any 10-year mean precipitation from any other 10-year mean, for example - were about 8 per cent of the long-term spatial precipitation mean for the USA, or 67.6 mm.
Spatially interpolated 10-year means, from nearby stations, were in error by more than 10 per cent, or 88.8 mm on average. Much of the space-time variability in mean precipitation was not resolved adequately by the 457 HCN stations, especially high-frequency spatial variability caused by orographic and convective mechanisms. For many regions of the world, temporally homogeneous precipitation station networks are considerably more sparse than in the USA, further degrading the reliability of interpolated and spatially integrated mean precipitation fields derived solely from those networks.

Willmott, Cort J.; Robeson, Scott M.; Janis, Michael J.



Polarization averaged short-time Fourier transform technique for distributed fiber birefringence characterization using Brillouin gain.  


A polarization averaged short-time Fourier transform (PASTFT) technique is developed for distributed fiber birefringence characterization based on counterpropagating stimulated Brillouin scattering (SBS) gain signal. This technique can be used for the birefringence characterization of the general elliptical birefringent fiber. A theoretical model on polarization matching of counterpropagating SBS process is established. The performance of the short-time Fourier transform (STFT) method and the PASTFT technique is analyzed by using the simulation of the theoretical model. Simulation results show that the process of polarization average could effectively reduce the birefringence characterization error caused by the polarization dependence of the local period of SBS gain. A less than 8% normalized root mean square error is achieved for the characterization of the length of the birefringence vector on elliptical birefringent fibers. The PASTFT technique is experimentally verified by the distributed measurement of beat length and differential group delay of a standard single-mode fiber via the Brillouin optical time domain analysis system. PMID:22772108

Xie, Shangran; Chen, Liang; Bao, Xiaoyi



Average relaxation time of internal spectrum for carbosilane dendrimers: Nuclear magnetic resonance studies  

NASA Astrophysics Data System (ADS)

A new theoretical description of the interior mobility of carbosilane dendrimers has been tested. Experiments were conducted using measurements of the 1H NMR spin-lattice relaxation time, T1H, of two-, three- and four-generation carbosilane dendrimers with three different types of terminal groups in dilute chloroform solutions. Temperature dependences of the NMR relaxation rate, 1/T1H, were obtained for the internal CH2 - groups of the dendrimers in the range of 1/T1H maximum, allowing us to directly evaluate the average time of the internal spectrum for each dendrimer. It was found that the temperature of 1/T1H maximum is practically independent of the number of generations, G; therefore, the theoretical prediction was confirmed experimentally. In addition, the average time of the internal spectrum of carbosilane dendrimers was found to be near 0.2 ns at room temperature, and this value correlates well with the values previously obtained for other dendrimer structures using other experimental techniques.

Markelov, Denis A.; Matveev, Vladimir V.; Ingman, Petri; Lähderanta, Erkki; Boiko, Natalia I.



A modeling study of the time-averaged electric currents in the vicinity of isolated thunderstorms  

NASA Astrophysics Data System (ADS)

A thorough examination of the results of a time-dependent computer model of a dipole thunderstorm revealed that there are numerous similarities between the time-averaged electrical properties and the steady state properties of an active thunderstorm. Thus, the electrical behavior of the atmosphere in the vicinity of a thunderstorm can be determined with a formulation similar to what was first described by Holzer and Saxon in 1952. From the Maxwell continuity equation of electric current, a simple analytical equation was derived that expresses a thunderstorm's average current contribution to the global electric circuit in terms of the generator current within the thundercloud, the intracloud lightning current, the cloud-to-ground lightning current, the altitudes of the charge centers, and the conductivity profile of the atmosphere. This equation was found to be nearly as accurate as the more computationally expensive numerical model, even when it is applied to a thunderstorm with a reduced conductivity thundercloud, a time-varying generator current, a varying flash rate, and a changing lightning mix.

Driscoll, Kevin T.; Blakeslee, Richard J.; Baginski, Michael E.






Average discharge rate representation of voice onset time in the chinchilla auditory nerve  

SciTech Connect

Responses of chinchilla auditory-nerve fibers to synthesized stop consonants differing in voice onset time (VOT) were obtained. The syllables, heard as /ga/--/ka/ or /da/--/ta/, were similar to those previously used by others in psychophysical experiments with human and with chinchilla subjects. Average discharge rates of neurons tuned to the frequency region near the first formant generally increased at the onset of voicing, for VOTs longer than 20 ms. These rate increases were closely related to spectral amplitude changes associated with the onset of voicing and with the activation of the first formant; as a result, they provided accurate information about VOT. Neurons tuned to frequency regions near the second and third formants did not encode VOT in their average discharge rates. Modulations in the average rates of these neurons reflected spectral variations that were independent of VOT. The results are compared to other measurements of the peripheral encoding of speech sounds and to psychophysical observations suggesting that syllables with large variations in VOT are heard as belonging to one of only two phonemic categories.

Sinex, D.G.; McDonald, L.P.



Time Average Holography Study of Human Tympanic Membrane with Altered Middle Ear Ossicular Chain  

NASA Astrophysics Data System (ADS)

Computer-assisted time average holographic interferometry was used to study the vibration of the human tympanic membrane (TM) in cadaveric temporal bones before and after alterations of the ossicular chain. Simultaneous laser Doppler vibrometer measurements of stapes velocity were performed to estimate the conductive hearing loss caused by ossicular alterations. The quantified TM motion described from holographic images was correlated with stapes velocity to define relations between TM motion and stapes velocity in various ossicular disorders. The results suggest that motions of the TM are relatively uncoupled from stapes motion at frequencies above 1000 Hz.

Cheng, Jeffrey T.; Ravicz, Michael E.; Rosowski, John J.; Hulli, Nesim; Hernandez-Montes, Maria S.; Furlong, Cosme



Time-averaged quantum dynamics and the validity of the effective Hamiltonian model  

SciTech Connect

We develop a technique for finding the dynamical evolution in time of an averaged density matrix. The result is an equation of evolution that includes an effective Hamiltonian, as well as decoherence terms in Lindblad form. Applying the general equation to harmonic Hamiltonians, we confirm a previous formula for the effective Hamiltonian together with an additional decoherence term that should, in general, be included and whose vanishing provides the criterion for validity of the effective Hamiltonian approach. Finally, we apply the theory to examples of the ac Stark shift and three-level Raman transitions, recovering a decoherence effect in the latter.

Gamel, Omar; James, Daniel F. V. [Department of Physics, University of Toronto, 60 St. George Street, Toronto, Ontario M5S 1A7 (Canada)



Time- and Volume-Averaged Conservation Equations for Multiphase Flow Using Mass-Weighted Velocity and Internal Energy.  

National Technical Information Service (NTIS)

Conservation equations of mass, momentum, and energy for multiphase flow, formulated on the basis of local volume averaging followed by time-averaging for turbulent flows, are presented. They are differential equations of transport with area integrals ass...

B. T. Chao; S. L. Soo; W. T. Sha



Modal identification based on Gaussian continuous time autoregressive moving average model  

NASA Astrophysics Data System (ADS)

A new time-domain modal identification method of the linear time-invariant system driven by the non-stationary Gaussian random force is presented in this paper. The proposed technique is based on the multivariate continuous time autoregressive moving average (CARMA) model. This method can identify physical parameters of a system from the response-only data. To do this, we first transform the structural dynamic equation into the CARMA model, and subsequently rewrite it in the state-space form. Second, we present the exact maximum likelihood estimators of parameters of the continuous time autoregressive (CAR) model by virtue of the Girsanov theorem, under the assumption that the uniformly modulated function is approximately equal to a constant matrix over a very short period of time. Then, based on the relation between the CAR model and the CARMA model, we present the exact maximum likelihood estimators of parameters of the CARMA model. Finally, the modal parameters are identified by the eigenvalue analysis method. Numerical results show that the method we introduced here not only has high precision and robustness, but also has very high computing efficiency. Therefore, it is suitable for real-time modal identification.

Xiuli, Du; Fengquan, Wang



The intensity of the time-averaged geomagnetic field: the last 5 Myr  

NASA Astrophysics Data System (ADS)

The existing database for paleointensity estimates of the ancient geomagnetic field contains more than 1500 data points collected through decades of effort. Despite the huge amount of work put into obtaining these data, there remains a strong bias in the age and global distribution of the data toward very young results from a few locations. Also, few of the data meet strict criteria for reliability and most are of unknown quality. In order to improve the age and spatial distribution of the paleointensity database, we have carried out paleointensity experiments on submarine basaltic glasses from a number of DSDP sites. Of particular interest are the sites that provide paleointensity data spanning the time period 0.3-5 Ma, a time of relatively few high quality published data points. Our new data are concordant with contemporaneous data from the published literature that meet minimum acceptance criteria, and the combined data set yields an average dipole moment of 5.49±2.36×10^22 Am^2. This average value is comparable to the average paleofield for the period 5-160 Ma (4.2±2.3×10^22 Am^2) [T. Juarez, L. Tauxe, J.S. Gee and T. Pick (1998) Nature 394, 878-881] and is substantially less than the value of approximately 8×10^22 Am^2 often quoted for the last 5 Myr (e.g. [McFadden and McElhinny (1982) J. Geomagn. Geoelectr. 34, 163-189; A.T. Goguitchaichvili, M. Prévot and P. Camps (1999) Earth Planet. Sci. Lett. 167, 15-34]).

Juarez, M. T.; Tauxe, L.



Paleosecular variation and time-averaged field recorded in lavas flows from Mexico  

NASA Astrophysics Data System (ADS)

An analysis of secular variation and the time-averaged paleomagnetic field in Mexico, using both newly obtained data and data from the literature, is presented. Interpretation regarding the time-averaged field and secular variation is best constrained after excluding from the dataset all sites from the western part of the Trans-Mexican Volcanic Belt (TMVB) and sites older than Quaternary from its central and eastern parts, which appear to be affected by counterclockwise tectonic rotations. The mean direction among the remaining sites of Quaternary age is Dec = 359.1°, Inc = 31.3°, α95 = 2.0°, N = 157. This direction does not overlap the expected geocentric axial dipole (GAD) direction, but is consistent with a GAD field with a 5% quadrupole contribution. The VGP scatter of this group of sites (12.3°, with lower and upper 95% confidence limits of 11.4° and 13.3°) is close to the value expected from Model G (13.5°).

Mejia, V.; Bohnel, H.; Opdyke, N. D.; Ortega-Rivera, A.; Lee, J. K.; Aranda-Gómez, J.



Damage Detection in Plate Structures Based on Space-time Autoregressive Moving Average Processes  

NASA Astrophysics Data System (ADS)

This paper presents a method for damage detection in plate structures based on space-time series analysis. The plate is divided into several spatial sub-regions, and a space-time autoregressive moving average (STARMA) process is identified from the vibration responses of a certain sub-region. The responses of the other sub-regions are then predicted with the identified model. The prediction errors are used as the criterion to detect and locate damage by comparing the predicted responses with the measured ones. The validity of the methodology is demonstrated on a numerical model of a cantilever plate with cracks. The results show that this method can effectively detect the location and size of the cracks even in the absence of prior knowledge about the undamaged structure.

Hu, X. T.; Qin, Z. Y.; Chu, F. L.



Electrocortical correlates of intra-subject variability in reaction times: average and single-trial analyses.  


Intra-subject variability of reaction times (ISV), generally considered secondary to measures of central tendency, has become increasingly important for cognitive neuroscience. Event-related potential (ERP) research suggests an inverse relationship between ISV and P3b amplitude; however ERP methods' reliance on averaged waveforms may be unsuitable for studying ISV due to potentially distorting effects of latency jitter. The present study investigates the neural bases of ISV through single-trial analysis of P3bs in groups with low and high ISV, using Infomax-rotated principal components analysis of ERP data. Results indicate that while latency jitter contributes to the reduced P3b amplitude seen in average ERPs of high ISV participants, group differences in amplitude survive the use of single-trial methods that are robust to such distortions. A decoupling of P3b and RT was also seen in the high ISV group. The results are discussed in the context of the P3b's dimensionality, and its possible catecholaminergic underpinnings. PMID:21335053

Saville, Christopher W N; Dean, Reem O; Daley, David; Intriligator, James; Boehm, Stephan; Feige, Bernd; Klein, Christoph



The Influence of Record Length and Averaging Time on Flow Duration Curves  

NASA Astrophysics Data System (ADS)

Flow duration curves (FDCs) provide a method for quantifying differences in the flow characteristics of different streams and for examining changes in flow due to the effects of land management. This study examines the effects of length of record, streamflow sampling frequency, and flow increment used for calculating FDCs. FDCs are based on a specific period of record (e.g., 10 years), and can use data of different sampling intervals (e.g., hourly) and different averaging times (e.g., average daily flow). Knowledge of how these factors affect FDCs is important for interpretation. If sufficient data are available, and this period of record represents the long-term flow regime of the stream, the curve can be viewed as a "probability" curve, allowing the estimation of future streamflow characteristics. We examine the length of record needed to adequately approximate the long-term FDC. The effects of season on FDCs are also investigated. The larger objective of this study is to evaluate the effectiveness of best management practices on agricultural watersheds in terms of hydrology and water quality. The analyses presented here provide the basis for extending the duration curve method to include water quality analyses through concentration and load duration curves.

Hicks, N. S.; Bonta, J. V.; Morrison, M. A.; Gburek, W.
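
An FDC of the kind discussed above is built by ranking the flow record and assigning each flow the fraction of time it is equalled or exceeded. The sketch below uses the Weibull plotting position, one common convention; the data and function name are illustrative, not from the study.

```python
import numpy as np

def flow_duration_curve(flows):
    """Return (exceedance_prob, sorted_flows) for a flow record.

    Flows are ranked in descending order; the Weibull plotting position
    m / (n + 1) gives the fraction of time each flow is equalled or exceeded.
    """
    q = np.sort(np.asarray(flows, dtype=float))[::-1]
    n = q.size
    prob = np.arange(1, n + 1) / (n + 1)
    return prob, q

# Hypothetical daily mean flows (m^3/s): the FDC re-expresses this record
# as "flow exceeded p fraction of the time".
daily = [12.0, 3.5, 7.2, 30.1, 5.0, 9.9, 4.4, 6.1, 2.0, 15.3]
p, q = flow_duration_curve(daily)
print(q[0], p[0])  # largest flow and its (smallest) exceedance probability
```

The sensitivity questions the abstract raises (record length, averaging interval) correspond here to changing how many values go into `daily` and whether they are hourly values or daily means.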




SciTech Connect

We present Arecibo time-aligned, total intensity profiles for 46 pulsars over an unusually wide range of radio frequencies and multi-frequency, polarization-angle density diagrams, and/or polarization profiles for 57 pulsars at some or all of the frequencies 50, 111/130, 430, and 1400 MHz. The frequency-dependent dispersion delay has been removed in order to align the profiles for study of their spectral evolution, and wherever possible the profiles of each pulsar are displayed on the same longitude scale. Most of the pulsars within Arecibo's declination range that are sufficiently bright for such spectral or single pulse analysis are included in this survey. The calibrated single pulse sequences and average profiles are available by web download for further study.

Hankins, Timothy H. [Physics Department, New Mexico Tech, Socorro, NM 87801 (United States)]; Rankin, Joanna M. [Physics Department, University of Vermont, Burlington, VT 05401 (United States)]



A Continuous Time Model for Interest Rate with Autoregressive and Moving Average Components  

NASA Astrophysics Data System (ADS)

In this paper we present a multi-factor continuous-time autoregressive moving-average (CARMA) model for the short and forward interest rates. This model provides a more adequate statistical description of the short and forward rate dynamics. We show that this is a tractable term structure model and provide closed-form solutions to bond and bond option prices, bond yields, and the forward rate volatility term structure. We demonstrate the capabilities of our model by calibrating it to market data and show that it can reproduce rather complex shapes of the empirical volatility term structure. In particular, a three-factor CARMA model can easily capture the dynamics of the level, slope, and curvature factors widely documented in term structure models.

Benth, F. E.; Koekebakker, S.; Zakamouline, V.



Time-averaged charge and energy spectra of galactic nuclei with Z ≥ 50  

SciTech Connect

Charge and energy spectra of a group of ultraheavy galactic cosmic-ray nuclei, averaged over a time of 0-76 million years, were obtained by analyzing old tracks in olivines from the Marjalahti meteorite. The olivines were selected from two localities situated at depths of 6 ± 0.6 and 9 ± 1 cm from the preatmospheric surface of the meteorite. Analysis of the two track-length distributions obtained has shown that the energy-decrease exponents γ for the spectrum of the galactic nuclei are 2.2 ± 0.7, 2.5 ± 0.6, and 2.8 ± 0.9 for the atomic number intervals 50 ≤ Z < 70, 70 ≤ Z < 90, and Z ≥ 90, respectively.

Akopova, A.B.; Gogoryan, M.M.; Melkumyan, M.M.; Perelygin, V.P.; Stetsenko, S.G.



A Lossless Compression Method of Time-Series Data Based on Increasing Average of Neighboring Signals  

NASA Astrophysics Data System (ADS)

Golomb-Rice coding is a well-known compression algorithm for sensor data. When time-series data changes drastically with large amplitudes, as with a pulse signal, the Golomb-Rice code length becomes very long. To shorten the code length, the signal amplitude is reduced by computing the differential signal between the raw signal and a similar signal. In this paper, we develop a lossless compression method for time-series data such as sensor data. Traditional methods focus on finding a past signal from which a low-amplitude differential signal can be generated. However, if no past signal sufficiently reduces the amplitude of the differential signal, the compression procedure has little effect. In our approach, a signal that decreases the energy of a pulse signal, or increases the energy of the signal neighboring a pulse, is adopted to generate the differential signals. To select an effective signal, we propose a method for detecting reference signals based on cumulative distribution features of the time-series data. Experimental results confirm that our proposed method generates shorter codes: the code length was reduced to 97% on average, and to as little as 81%, of that of the traditional method.

Takezawa, Tetsuya; Asakura, Koichi; Watanabe, Toyohide
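
The Golomb-Rice-on-differences scheme the abstract builds on can be illustrated with a toy encoder. This is a generic sketch of Rice coding applied to first-order differences, not the paper's reference-signal selection method; all names and the parameter k are assumptions.

```python
def rice_encode(value, k):
    """Golomb-Rice code for a non-negative integer: unary quotient,
    separator bit, then a k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def zigzag(d):
    """Fold signed differences onto non-negative integers:
    0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4."""
    return 2 * d if d >= 0 else -2 * d - 1

def encode_series(samples, k=2):
    """Encode first-order differences of an integer time series with Rice codes."""
    bits = []
    prev = 0
    for x in samples:
        bits.append(rice_encode(zigzag(x - prev), k))
        prev = x
    return "".join(bits)

# A slowly varying series yields small differences and hence short codes;
# a pulse momentarily inflates the quotient, which is the cost the paper
# attacks by choosing a better reference signal.
smooth = [10, 11, 11, 12, 13, 13]
print(len(encode_series(smooth, k=2)))
```

Decoding reverses the mapping: count leading 1-bits for the quotient, read k remainder bits, un-zigzag, and accumulate the differences.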


Arrival-time structure of the time-averaged ambient noise cross-correlation function in an oceanic waveguide.  


Coherent deterministic arrival times can be extracted from the derivative of the time-averaged ambient noise cross-correlation function between two receivers. These coherent arrival times are related to those of the time-domain Green's function between these two receivers and have been observed experimentally in various environments and frequency ranges of interest (e.g., in ultrasonics, seismology, or underwater acoustics). This nonintuitive result can be demonstrated based on a simple time-domain image formulation of the noise cross-correlation function, for a uniform distribution of noise sources in a Pekeris waveguide. This image formulation determines the influence of the noise-source distribution (in range and depth) as well as the dependence on the receiver bandwidth for the arrival-time structure of the derivative of the cross-correlation function. These results are compared with previously derived formulations of the ambient noise cross-correlation function. Practical implications of these results for sea experiments are also discussed. PMID:15704409

Sabra, Karim G; Roux, Philippe; Kuperman, W A



Static and dynamic micro deformable mirror characterization by phase-shifting and time-averaged interferometry  

NASA Astrophysics Data System (ADS)

Since micro deformable mirrors based on Micro-Opto-Electro-Mechanical Systems (MOEMS) technology will be essential in next-generation adaptive optics systems, we are designing, realizing and characterizing building blocks of this key component. An in-house designed tiltable mirror (170×100 μm²) has been processed by surface micromachining in the Cronos foundry, and a dedicated characterization bench has been developed for the complete analysis of building blocks as well as operational deformable mirrors. This modular Twyman-Green interferometer allows high in-plane resolution (4 μm) or a large field of view (40 mm). Out-of-plane measurements are performed with phase-shifting interferometry, showing highly repeatable results (standard deviation < 5 nm). Features such as optical quality and electro-mechanical behavior are extracted from these high-precision three-dimensional component maps. Range is increased without losing accuracy by using two-wavelength phase-shifting interferometry, authorizing large-step measurements such as the 590 nm print-through steps caused by the Cronos process. Dynamic analysis, such as vibration modes and cut-off frequencies, is realized with time-averaged interferometry. A rotation-mode frequency of 31±3 kHz for the micro tiltable mirror, and a resonance with tuned damping at 1.1 kHz for the commercial OKO deformable mirror, are revealed.

Liotard, Arnaud; Zamkotsian, Frederic



Static and dynamic micro deformable mirror characterization by phase-shifting and time-averaged interferometry  

NASA Astrophysics Data System (ADS)

The micro-opto-electro-mechanical systems (MOEMS), based on mature technologies of micro-electronics, are essential in the design of future astronomical instruments. One of these key components is the micro-deformable mirror for wave-front correction. Very challenging topics like the search for exo-planets could greatly benefit from this technology. Design, realization and characterization of micro-deformable mirrors are under way at Laboratoire d'Astrophysique de Marseille (LAM) in collaboration with Laboratoire d'Analyse et d'Architecture des Systèmes (LAAS). In order to measure the surface shape and the deformation parameters during operation of these devices, a high-resolution Twyman-Green interferometer has been developed. Measurements have been done on a tiltable micro-mirror (170×100 μm²) designed by LAM-LAAS and realized by an American foundry, and also on an OKO deformable mirror (15 mm diameter). Static characterization is made by phase-shifting interferometry and dynamic measurements have been made by quantitative time-averaged interferometry. The OKO mirror has an actuator stroke of 370±10 nm for 150 V applied and its resonant frequency is 1170±50 Hz, and the tiltable mirror has a rotation cut-off frequency of 31±3 kHz.

Liotard, Arnaud; Zamkotsian, Frédéric



Cochlear Modeling Using "Time-Averaged Lagrangian" Method: Comparison with VBM, PST, and ZC Measurements  

NASA Astrophysics Data System (ADS)

In this work, basilar membrane velocity (VBM), scala tympani intracochlear pressure (PST), and cochlear input impedance (ZC) for gerbil and chinchilla are computed using a three-dimensional hydrodynamic cochlear model based on 1) a time-averaged Lagrangian, 2) a push-pull mechanism in the active case, and 3) the complex anatomy of the cochlear scalae obtained by micro computed tomography (μCT) scanning and 3-D reconstruction of gerbil and chinchilla temporal bones. The objective of this work is to compare the calculations of the present model with physiological measurements on the gerbil and chinchilla cochleae, such as VBM (Ren and Nuttall [1]), PST (Olson [2]), and ZC (Decraemer et al. [3], Songer and Rosowski [4], Ruggero et al. [5]). A WKB asymptotic method combined with Fourier series expansions is used to provide an efficient simulation. VBM and PST simulation results for the gerbil cochlea show good agreement with the physiological measurements in both magnitude and phase, without large phase excursions. ZC simulations from the gerbil and chinchilla models show reasonably good agreement with measurement.

Yoon, Y.; Kim, N.; Puria, S.; Steele, C. R.



The Turnaround Mindset: Aligning Leadership for Student Success  

ERIC Educational Resources Information Center

This book provides a valuable balance between what one must know and what one must do to turn around low-performing schools. The 3-E framework simplifies this complex process by focusing resources on the environment, the executive, and the execution of the turnaround plan. Central to each of these components is a spotlight on the values…

Fairchild, Tierney Temple; DeMary, Jo Lynne



Can Principals' Emotional Intelligence Matter to School Turnarounds?  

ERIC Educational Resources Information Center

Through the literature review, the study aims to explore the relationship between the emotional intelligence (EI) of principals and the turnarounds of low-performing schools and generate some hypotheses regarding this issue. Rigorous, empirical studies published in the last 15 years are selected and reviewed around three relevant topics: the…

Cai, Qijie



Dollar Turnaround and U.S. Agricultural Exports.  

National Technical Information Service (NTIS)

U.S. agricultural exports have deteriorated substantially in the 1980s. One explanation for this decline is the appreciation of the dollar over the 1979-85 period making U.S. exports less competitive in world markets. In 1986, in a sharp turnaround, the d...

B. Krissoff A. Morey



129. Back side technical facilities passageways between turnaround trans. & ...  

Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

129. Back side technical facilities passageways between turnaround trans. & building no. 104, "plans, sections & elevations" - architectural, AS-BLT AW 36-25-13, sheet 9 of 40, dated 23 November, 1960. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK


Turnaround on the Labour Market Despite Sluggish Export Activity  

Microsoft Academic Search

Austrian industry has not yet overcome the period of stagnation it has experienced since last autumn as a result of falling export orders. However, the outcome of WIFO's cyclical test points to a stabilisation of production expectations. Due to growing domestic demand and a slower increase in the supply of labour, the expected turnaround on the labour market was achieved. Austrian

Ewald Walterskirchen



Turnaround: Leading Stressed Colleges and Universities to Excellence  

ERIC Educational Resources Information Center

Nearly one thousand colleges and universities in the United States face major challenges--from catastrophic hurricanes to loss of accreditation to sagging enrollment. What can leaders of such at-risk institutions do to improve their situation? "Turnaround" gives college and university leaders the tools they need to put their fragile institutions…

Martin, James; Samels, James E.



Key Decisions of a First-Year "Turnaround" Principal  

ERIC Educational Resources Information Center

This article examines the decisions made by one principal in her first year as a school turnaround specialist in a low-performing urban elementary school. Researchers focused on decisions related to the principal's three high-priority concerns: (1) elimination of an ineffective instructional program; (2) creation of a culture of teacher…

Duke, Daniel; Salmonowicz, Michael



Incident invasive breast cancer, geographic location of residence, and reported average time spent outside.  


There have been reports of greater breast cancer incidence and mortality at northern compared with southern latitudes, postulated to be related to vitamin D exposure. Among 71,662 participants in the Women's Health Initiative Observational Study (WHIOS) free of cancer at baseline (1993-1998), associations were explored between incident invasive postmenopausal breast cancer (n = 2,535), over approximately 8.6 years of follow-up, and the following: (a) region of residence at birth, age 15 years, age 35 years; (b) region of residence at WHIOS baseline; and (c) clinic center solar irradiance. Hazard ratios and 95% confidence intervals (CI) for breast cancer were estimated after adjustment for individual level confounders. There was no difference in breast cancer risk by region of earlier life, baseline residence, or solar irradiance measured in Langleys (gm-cal) per cm². There was an observed 15% decreased risk among women residing in areas of low versus high solar irradiance measured in Watts per m² (95% CI, 2-26%). However, the associated P(trend) of 0.20 was not significant. Conversely, women who reported spending on average <30 minutes versus >2 hours outside in daylight year round at WHIOS year 4 follow-up (n = 46,926) had a 20% (95% CI, 2-41%; P(trend) = 0.001) increased risk of breast cancer. In conclusion, region of residence and geographic solar irradiance are not consistently related to risk of breast cancer and may not be sufficient proxy measures for sunlight/vitamin D exposure. The observed association between time spent outside and breast cancer risk supports the hypothesis that vitamin D may protect against breast cancer. PMID:19190147

Millen, Amy E; Pettinger, Mary; Freudenheim, Jo L; Langer, Robert D; Rosenberg, Carol A; Mossavar-Rahmani, Yasmin; Duffy, Christine M; Lane, Dorothy S; McTiernan, Anne; Kuller, Lewis H; Lopez, Ana Maria; Wactawski-Wende, Jean



Characteristics of Occupational Exposure to Benzene during Turnaround in the Petrochemical Industries  

PubMed Central

Objectives The level of benzene exposure in the petrochemical industry during regular operation has been well established, but not during turnaround (TA), where high exposure may occur. In this study, the characteristics of occupational exposure to benzene during TA in petrochemical companies were investigated in order to determine the best management strategies and improve the working environment. This was accomplished by evaluating the exposure level for workers in environments where benzene was being produced or used as an ingredient during the unit process. Methods From 2003 to 2008, a total of 705 workers in three petrochemical companies in Korea were studied. Long- and short-term (< 1 hr) samples were taken during TAs. TA was classified into three stages: shut-down, maintenance and start-up. All work was classified into 12 occupation categories. Results The long-term geometric mean (GM) benzene exposure level was 0.025 (5.82) ppm (0.005-42.120 ppm) and the short-term exposure concentration during TA was 0.020 (17.42) ppm (0.005-61.855 ppm). The proportions of TA samples exceeding the time-weighted average occupational exposure limit (TWA-OEL in Korea, 1 ppm) and the short-term exposure limit (STEL-OEL, 5 ppm) were 4.1% (20 samples of 488) and 6.0% (13 samples of 217), respectively. The results for the benzene exposure levels and the rates of exceeding the OEL were both statistically significant (p < 0.05). Among the 12 job categories of petrochemical workers, mechanical engineers, plumbers, welders, fieldmen and scaffolding workers exhibited long-term samples that exceeded the OEL of benzene, and the rate of exceeding the OEL was statistically significant for the first two occupations (p < 0.05). Conclusion These findings suggest that the work environment must be assessed periodically during non-routine work such as TA.

Shin, Jung-Ah; Lee, Byung-Kyu; Kwon, Jiwoon; Lee, Naroo; Chung, Kwang-Jae; Lee, Jong-Han; Lee, In-Seop; Kang, Seong-Kyu; Jang, Jae-Kil
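The exposure statistics reported above (geometric mean and the fraction of samples exceeding an OEL) are straightforward to compute. A minimal Python sketch with made-up sample values, not the study's data:

```python
import math

def geometric_mean(samples):
    """Geometric mean of positive exposure measurements (ppm)."""
    return math.exp(sum(math.log(x) for x in samples) / len(samples))

def fraction_exceeding(samples, limit):
    """Fraction of samples above an occupational exposure limit."""
    return sum(1 for x in samples if x > limit) / len(samples)

# Illustrative (made-up) long-term benzene samples in ppm
samples = [0.01, 0.02, 0.05, 0.4, 1.5, 0.008, 2.2, 0.03]
gm = geometric_mean(samples)
exceed = fraction_exceeding(samples, 1.0)  # TWA-OEL in Korea: 1 ppm
```

The geometric mean is the conventional summary for occupational exposure data because such measurements are typically log-normally distributed.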



Extending Turnaround Process Research: Important Lessons From the U.S. Civil War  

Microsoft Academic Search

Although research has made progress in building our understanding of effective turnaround strategies, significant gaps remain in our knowledge about the strategic processes managers can employ to formulate and implement these strategies. Using a historical case of organizational turnaround employed by the U.S. government during the U.S. Civil War, the authors generate new insights into the turnaround process while confirming

Franz T. Lohrke; David Ahlstrom; Garry D. Bruton



75 FR 15726 - Comment Request for Information Collection for Quick Turnaround Surveys of the Workforce...  

Federal Register 2010, 2011, 2012, 2013

...for Quick Turnaround Surveys of the Workforce Investment...about quick turnaround surveys of the Workforce Investment...Washington, DC 20210. Telephone number: (202) 693-3680...of quick turnaround surveys in which data will be...level information). Examples of broad topic...



78 FR 48464 - Comment request for Information Collection for Quick Turnaround Surveys of All Statutes and...  

Federal Register 2010, 2011, 2012, 2013

...for Quick Turnaround Surveys of All Statutes and...about quick turnaround surveys for statutes and programs...Washington, DC 20210. Telephone number: (202) 693-3680...of quick turnaround surveys in which data will be...pressing policy interest. Examples of broad topic...



Short-term sandbar variability based on video imagery: Comparison between Time-Average and Time-Variance techniques  

USGS Publications Warehouse

Time-exposure intensity (averaged) images are commonly used to locate the nearshore sandbar position (xb), based on the cross-shore locations of maximum pixel intensity (xi) of the bright bands in the images. It is not known, however, how the breaking patterns seen in Variance images (i.e. those created through standard deviation of pixel intensity over time) are related to the sandbar locations. We investigated the suitability of both Time-exposure and Variance images for sandbar detection within a multiple bar system on the southern coast of Brazil, and verified the relation between wave breaking patterns, observed as bands of high intensity in these images and cross-shore profiles of modeled wave energy dissipation (xD). Not only is Time-exposure maximum pixel intensity location (xi-Ti) well related to xb, but also to the maximum pixel intensity location of Variance images (xi-Va), although the latter was typically located 15m offshore of the former. In addition, xi-Va was observed to be better associated with xD even though xi-Ti is commonly assumed as maximum wave energy dissipation. Significant wave height (Hs) and water level (η) were observed to affect the two types of images in a similar way, with an increase in both Hs and η resulting in xi shifting offshore. This η-induced xi variability has an opposite behavior to what is described in the literature, and is likely an indirect effect of higher waves breaking farther offshore during periods of storm surges. Multiple regression models performed on xi, Hs and η allowed the reduction of the residual errors between xb and xi, yielding accurate estimates with most residuals less than 10m. Additionally, it was found that the sandbar position was best estimated using xi-Ti (xi-Va) when xb was located shoreward (seaward) of its mean position, for both the first and the second bar. Although it is unknown whether this is an indirect hydrodynamic effect or is indeed related to the morphology, we found that this behavior can be explored to optimize sandbar estimation using video imagery, even in the absence of hydrodynamic data. © 2011 Elsevier B.V.

Guedes, R. M. C.; Calliari, L. J.; Holland, K. T.; Plant, N. G.; Pereira, P. S.; Alves, F. N. A.
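The two image products compared above are simple statistics over a video stack: the Time-exposure image is the per-pixel mean over time, and the Variance image is the per-pixel standard deviation. A toy sketch on a synthetic cross-shore intensity stack (the bright breaking band, its location, and all parameters are made up for illustration):

```python
import numpy as np

# Simulated video stack: (time, cross-shore) pixel intensities.
# A synthetic breaking band brightens pixels near x = 40 intermittently.
rng = np.random.default_rng(0)
n_frames, n_x = 200, 100
x = np.arange(n_x)
stack = rng.normal(50, 2, size=(n_frames, n_x))
breaking = np.exp(-((x - 40) ** 2) / 50.0)      # bright band profile
flicker = rng.random(n_frames)[:, None]         # intermittent breaking
stack += 40.0 * breaking * flicker

time_exposure = stack.mean(axis=0)   # Time-exposure (averaged) image
variance_img = stack.std(axis=0)     # Variance image (std over time)

xi_ti = int(np.argmax(time_exposure))  # max-intensity location, Time-exposure
xi_va = int(np.argmax(variance_img))   # max-intensity location, Variance
```

In this toy case both estimators find the band; the paper's point is that on real imagery the two maxima sit at systematically different cross-shore positions relative to the bar.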



Diagnostic quality of time-averaged ECG-gated CT data  

NASA Astrophysics Data System (ADS)

Purpose: ECG-gated CTA allows visualization of the aneurysm and stentgraft during the different phases of the cardiac cycle, although with a lower SNR per cardiac phase than without ECG gating using the same dose. In our institution, abdominal aortic aneurysm (AAA) is evaluated using non-ECG-gated CTA. Some common CT scanners cannot reconstruct a non-gated volume from ECG-gated acquired data. In order to obtain the same diagnostic image quality, we propose offline temporal averaging of the ECG-gated data. This process, though straightforward, is fundamentally different from taking a non-gated scan, and its result will certainly differ as well. The purpose of this study is to quantitatively investigate how well offline averaging approximates a non-gated scan. Method: Non-gated and ECG-gated CT scans have been performed on a phantom (Catphan 500). Afterwards the phases of the ECG-gated CTA data were averaged to create a third dataset. The three sets are compared with respect to noise properties (NPS) and frequency response (MTF). To study motion artifacts identical scans were acquired on a programmable dynamic phantom. Results and Conclusions: The experiments show that the spatial frequency content is not affected by the averaging process. The minor differences observed for the noise properties and motion artifacts are in favor of the averaged data. Therefore the averaged ECG-gated phases can be used for diagnosis. This enables the use of ECG-gating for research on stentgrafts in AAA, without impairing clinical patient care.

Klein, Almar; Oostveen, Luuk J.; Greuter, Marcel J. W.; Hoogeveen, Yvonne; Schultze Kool, Leo J.; Slump, Cornelis H.; Renema, W. Klaas Jan
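The core noise argument for offline temporal averaging can be illustrated numerically: averaging N independent-noise phases reduces the noise standard deviation by roughly sqrt(N). A toy sketch with a synthetic phantom (the phase count, image size, and noise level are made-up illustration values, not the study's acquisition parameters):

```python
import numpy as np

# Hypothetical ECG-gated reconstruction: 10 cardiac phases of a
# 64x64 slice, each phase = true object + independent noise.
rng = np.random.default_rng(1)
n_phases = 10
truth = np.zeros((64, 64))
truth[24:40, 24:40] = 100.0                  # simple square "insert"

phases = truth + rng.normal(0, 20, size=(n_phases, 64, 64))

# Offline temporal averaging of the gated phases: noise std drops by
# roughly sqrt(n_phases), approximating a non-gated acquisition.
averaged = phases.mean(axis=0)

noise_single = (phases[0] - truth).std()
noise_avg = (averaged - truth).std()
```

The study's contribution is to verify that on real scanner data this averaging also preserves the frequency response (MTF) and does not worsen motion artifacts, which the toy model cannot show.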



Input-to-state stability analysis via averaging for parameterized discrete-time systems  

Microsoft Academic Search

The paper studies semi-global practical input-to-state stability (SGP-ISS) of a parameterized family of discrete-time systems that may arise when an approximate discrete-time model of a sampled-data system with disturbances is used for controller design. It is shown under appropriate conditions that if the solutions of the time varying family of discrete-time systems with disturbances converge uniformly on compact time intervals

Wei Wang; Dragan Nesic



Quantification of deep-time orbital forcing by average spectral misfit  

Microsoft Academic Search

Quantification of Milankovitch orbital cyclicity within ancient strata has become a principal tool for refinement of the geologic time scale. However, accurate characterization of the orbital signal in deep time paleoclimate records is commonly challenged by inadequate radiometric time constraints for calibration of the spatial rhythms to temporal periods. This problem can potentially introduce large errors into derivative orbital timescales.




Development of a low-cost system for measuring conditional time-averaged gradients of SO 2 and NH 3  

Microsoft Academic Search

A conditional time-averaged gradient (COTAG) system has been developed to provide direct long-term (weekly to monthly) average flux gradient measurements for a range of trace gases, between land and atmosphere. Over daily periods, atmospheric conditions can range from high stability, where the vertical gradients of ambient concentration are enhanced due to very small diffusivity, to highly unstable conditions, in which

D. Famulari; D. Fowler; E. Nemitz; K. J. Hargreaves; R. L. Storeton-West; G. Rutherford; Y. S. Tang; M. A. Sutton; K. J. Weston



Exploring Granger causality between global average observed time series of carbon dioxide and temperature  

SciTech Connect

Detection and attribution methodologies have been developed over the years to delineate anthropogenic from natural drivers of climate change and impacts. A majority of prior attribution studies, which have used climate model simulations and observations or reanalysis datasets, have found evidence for human-induced climate change. This paper tests the hypothesis that Granger causality can be extracted from the bivariate series of globally averaged land surface temperature (GT) observations and observed CO2 in the atmosphere using a reverse cumulative Granger causality test. This proposed extension of the classic Granger causality test is better suited to handle the multisource nature of the data and provides further statistical rigor. The results from this modified test show evidence for Granger causality from a proxy of total radiative forcing (RC), which in this case is a transformation of atmospheric CO2, to GT. Prior literature failed to extract these results via the standard Granger causality test. A forecasting test shows that a holdout set of GT can be better predicted with the addition of lagged RC as a predictor, lending further credibility to the Granger test results. However, since second-order-differenced RC is neither normally distributed nor variance stationary, caution should be exercised in the interpretation of our results.

Kodra, Evan A [ORNL]; Chatterjee, Snigdhansu [University of Minnesota]; Ganguly, Auroop R [ORNL]
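The classic Granger test underlying the abstract compares a restricted autoregression of y on its own lags against an unrestricted one that adds lags of x, via an F-statistic. A minimal OLS sketch of that standard test on synthetic data (this is the textbook version, not the paper's reverse cumulative variant, and all series here are simulated):

```python
import numpy as np

def granger_f(y, x, p):
    """F-statistic for 'x Granger-causes y' with p lags, via OLS."""
    n = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    ones = np.ones((n - p, 1))
    Xr = np.hstack([ones, lags_y])             # restricted: own lags only
    Xu = np.hstack([ones, lags_y, lags_x])     # unrestricted: + lags of x
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df_num, df_den = p, n - p - Xu.shape[1]
    return ((rss_r - rss_u) / df_num) / (rss_u / df_den)

rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):                        # y is driven by lagged x
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

f_xy = granger_f(y, x, p=2)   # large: lags of x help predict y
f_yx = granger_f(x, y, p=2)   # small: lags of y add little for x
```

A large F in one direction and a small one in the other is the asymmetry the test looks for; the paper's modification addresses the stationarity caveats its abstract mentions.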



Extending Hierarchical Reinforcement Learning to Continuous-Time, Average-Reward, and Multi-Agent Models.  

National Technical Information Service (NTIS)

Hierarchical reinforcement learning (HRL) is a general framework that studies how to exploit the structure of actions and tasks to accelerate policy learning in large domains. Prior work on HRL has been limited to the discrete-time discounted reward semi-...

M. Ghavamzadeh; S. Mahadevan; R. Makar



Inventory Average Costs: Non-Unit Order Sizes and Random Lead Times.  

National Technical Information Service (NTIS)

This paper addresses an inventory system with non-unit order sizes and variable lead times under a continuous review (S-s) policy. Under the assumption of at most one order outstanding, some quite general results are obtained. An example with a specific form...

R. Urbach



When Time Is of the Essence: Averaging, Aspiration, and the Short Run  

Microsoft Academic Search

A variety of experiments have demonstrated that people's choices among gambles differ according to whether the gamble is to be played just once (the unique case) or multiple times (the repeated case). The reason for this difference appears to be that people are acting in each case so as to increase the likelihood that the chosen alternative will leave them

Lola L. Lopes



How complex is the time-averaged geomagnetic field over the past 5 Myr?  

NASA Astrophysics Data System (ADS)

A basic tenet of palaeomagnetism is that the Earth's magnetic field behaves on average like that of a central axial dipole (g_1^0). Nevertheless, the question of possible persistent second-order features is still open. Recently Johnson & Constable (1995, 1996) performed a regularized non-linear inversion and found evidence for persistent non-zonal features. Formal uncertainties would indicate that there are significant (non-zero) terms at least up to degree and order 4. Using a recent compilation of two different data sets from lavas (0 to 5 Ma) and the Johnson & Constable codes, we test the robustness of this result. The data set has been divided into three subsets: the Brunhes polarity data (B), all normal polarity data (N) and all reverse data (R). In each subset of data, a prominent g_2^0, of the order of 5 per cent of g_1^0, is clearly present, as previously established by several authors. In some subsets, smaller terms appear: g_2^2 and g_1^1 in the Brunhes data, h_3^1 and h_2^1 in N, and h_2^1, g_3^0 and g_3^3 in R. A threshold under which terms resulting from the inversion cannot yet be considered as robust appears to be of the order of 300 nT. Indeed, tests show that many terms, which are different for each epoch (B, N or R), may be artefacts due to aliasing because of poor site distribution, or due to the underestimation of a priori errors in the data; these could result from undetected tectonic rotations, non-horizontal palaeoslopes, or viscous overprints. Because of these limitations in resolution, it may not yet be possible to identify robustly terms other than the axial dipole and quadrupole. The persistence of high-latitude flux concentrations, hemispheric asymmetry or normal versus reversed field asymmetry cannot yet be considered as demonstrated.

Carlut, Julie; Courtillot, Vincent



Sensitivity of tropical and mid-latitude time-averaged atmospheric states to interannually varying global ocean surface temperature distribution  

Microsoft Academic Search

A parallel pair of 15-yr integrations was performed with an annual cycle version of the original NCAR Community Climate Model with lower boundary forcing prescribed by the May 1958 to April 1973 monthly mean global ocean surface temperature analyses from Oort (1983), to study the effect of anomalous forcing from ocean surface temperatures on time averaged atmospheric states. The first

Robert M. Chervin



On the relationship between global, hemispheric and latitudinal averaged air surface temperature (GISS time series) and solar activity  

Microsoft Academic Search

The air surface temperature is a basic meteorological parameter and its variation is a primary measure of global, regional and local climate changes. In this work, the global, hemispheric and latitudinal averaged air surface temperature time series, obtained from the NASA/Goddard Institute for Space Studies (GISS), and the Sunspot Number (Rz) for the interval 1880–2005, are decomposed in frequency bands

M. P. Souza Echer; E. Echer; N. R. Rigozo; C. G. M. Brum; D. J. R. Nordemann; W. D Gonzalez


The Effect of Non-Normal Distributions on the Integrated Moving Average Model of Time-Series Analysis.  

ERIC Educational Resources Information Center

The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations; normal,…

Doerann-George, Judith
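The IMA(1,1) model assumed in the record above is z_t = z_{t-1} + a_t - θ·a_{t-1}, with random shocks a_t. The robustness question can be probed by simulating the same model under normal and non-normal shocks; a minimal sketch with illustrative parameter values (θ = 0.6 and the sample size are arbitrary choices, not from the study):

```python
import numpy as np

def simulate_ima(shocks, theta):
    """IMA(1,1): z_t = z_{t-1} + a_t - theta * a_{t-1}."""
    z = np.zeros(len(shocks))
    for t in range(1, len(shocks)):
        z[t] = z[t - 1] + shocks[t] - theta * shocks[t - 1]
    return z

rng = np.random.default_rng(3)
n, theta = 2000, 0.6
normal_shocks = rng.normal(size=n)
skewed_shocks = rng.exponential(size=n) - 1.0   # non-normal, mean 0

z_normal = simulate_ima(normal_shocks, theta)
z_skewed = simulate_ima(skewed_shocks, theta)

# First differences recover the MA(1) structure a_t - theta*a_{t-1};
# their lag-1 autocorrelation should be near -theta / (1 + theta^2).
d = np.diff(z_normal)
lag1_corr = np.corrcoef(d[:-1], d[1:])[0, 1]
```

Comparing estimates from the normal and skewed runs is the empirical-sampling idea the abstract describes.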


A Critical Review of Time-Weighted Average as an Index of Exposure and Dose, and of Its Key Elements  

Microsoft Academic Search

Time-weighted average (TWA) is widely used in research and practice, in occupational health, as an index of exposure and dose. Its key element, CT, where C is concentration of contaminant and T is duration of contamination, is recognizable as Haber's rule. Neither TWA nor similar measurements have been scientifically validated, and it does not seem appropriately named. In the late
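The TWA index criticized above is arithmetically simple: the concentration-duration products C_i·T_i summed over a shift, divided by total time. A minimal sketch with made-up shift data:

```python
def time_weighted_average(intervals):
    """TWA over a shift: sum(C_i * T_i) / sum(T_i).
    intervals: list of (concentration, duration_hours) pairs."""
    total_ct = sum(c * t for c, t in intervals)
    total_t = sum(t for _, t in intervals)
    return total_ct / total_t

# Example 8-hour shift: 2 h at 5 ppm, 4 h at 1 ppm, 2 h at 0 ppm
twa = time_weighted_average([(5, 2), (1, 4), (0, 2)])  # -> 1.75 ppm
```

The review's point is precisely that this C×T form (Haber's rule) treats a short high peak and a long low exposure as equivalent doses, which has never been scientifically validated.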




Improving the time-averaged thermal efficiency of a laser beam by acoustooptic correction of the directional pattern  

NASA Astrophysics Data System (ADS)

Applicability of the acoustooptic method for raising the time-averaged thermal efficiency of laser radiation is substantiated theoretically and confirmed experimentally. The effect produced by laser radiation on materials being processed (laser cutting, welding, engraving, etc.) has a threshold in light intensity. Importantly, a beam with the most frequently used normal (Gaussian) angular distribution of intensity is not optimal from the technological viewpoint. A method proposed for its optimization is based on acoustooptic refraction, i.e., fast nonlinear scanning of the initial beam around its central position, which improves (at certain values of the parameters) the time-averaged angular distribution of the beam intensity. In the experiment, the thermal efficiency of laser radiation was increased severalfold.

Antonov, S. N.; Byshevskii-Konopko, O. A.; Vainer, A. V.; Proklov, V. V.; Rezvov, Yu. G.



New Paleomagnetic Results From Pleistocene and Holocene Extrusives From Argentina: Contribution to Time Averaged Field Models  

NASA Astrophysics Data System (ADS)

We have conducted two field trips in the Argentina provinces of Mendoza and Neuquen to sample back-arc volcanism East of the Andean Cordillera margin. The volcanic activity ranges from Pliocene to very recent time, with a large variety of products, from basalts to highly differentiated lavas. Paleomagnetic volcanic samples were collected at more than 80 sites in this area. Paleomagnetic directions, K-Ar dating and chemical analysis of this set aim both at a better understanding of the volcanic activity in this area and at contributing to the study of the statistical behavior of the paleomagnetic field over the interval 0-5 Ma. We present new paleomagnetic results from 34 flow units belonging to two volcanic massifs: the Payun Matru and the Cerro Nevado, which each encompasses about 300 kyr of activity, from 0 to 300 ka and from 1.3 to 1.0 Ma, respectively. Most samples have NRM intensities between about 1 and 20 A/m and depict progressive removal of magnetization components in a consistent fashion during stepwise AF or thermal demagnetization, allowing us to precisely obtain the characteristic components of magnetization. No transitional directions were recorded. The mean magnetic field direction from this set is compatible with a geocentric axial dipole (GAD) field but we cannot rule out the presence of a small persistent axial quadrupole (g_2^0). Finally, these results are compared to other recent records from South America, allowing us to investigate properties of the paleosecular variation such as the latitudinal dependence of angular standard deviation (ASD) of virtual geomagnetic poles (VGPs) over the past 5 Myr for this continent.

Carlut, J.; Quidelleur, X.; Germa, A.; Gillot, P.; Tchilinguirian, P.



Meeting the challenge of a group practice turnaround.  


Many healthcare organizations that acquired group practices to enhance their market share have found that the practices have not met their financial goals. Turning around a financially troubled, hospital-owned group practice is challenging but not impossible for healthcare organizations that take certain basic actions. Direction, data, desire, dedication, and drive must be present to effect the financial turnaround of a group practice. The healthcare organization needs to evaluate the practice's strategy and operations and identify the issues that are hindering the practice's ability to optimize revenues. Efforts to achieve profitable operations have to be ongoing. PMID:11258274

Porn, L M



Sampled-Data Based Average Consensus Control for Networks of Continuous-Time Integrator Agents With Measurement Noises  

Microsoft Academic Search

In this paper, sampled-data based average-consensus control is considered for networks consisting of continuous-time first-order integrator agents under a noisy distributed communication environment. The impact of the sampling size and the number of network nodes on the system performances is analyzed. The control input of each agent is based only on the information measured at the sampling instants from its

Li Tao; Zhang Jifeng
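The setting the record above analyzes can be sketched numerically: integrator agents update toward the average of their neighbors' states, sampled at discrete instants and corrupted by measurement noise. A toy simulation on a 4-agent ring (graph, sampling period, and noise level are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Sampled-data average consensus for 4 integrator agents on a ring,
# with small measurement noise.
rng = np.random.default_rng(4)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # ring adjacency
D = np.diag(A.sum(axis=1))
L = D - A                                   # graph Laplacian
h = 0.1                                     # sampling period

x = np.array([1.0, 3.0, 5.0, 7.0])          # initial states, average = 4
avg0 = x.mean()
for _ in range(300):
    noisy = x + rng.normal(0, 0.01, size=4)  # noisy sampled measurements
    x = x - h * (L @ noisy)                  # sampled-data consensus update
```

Because the Laplacian's columns sum to zero, the network average is preserved at every step even with noise, while the disagreement decays as long as the sampling period keeps 1 - h·λ inside the unit circle for every nonzero Laplacian eigenvalue λ.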



Phase-resolved and time-averaged puff motions of an excited stack-issued transverse jet  

NASA Astrophysics Data System (ADS)

The dynamics of puff motions in an excited stack-issued transverse jet were studied experimentally in a wind tunnel. The temporal and spatial evolution processes of the puffs induced by acoustic excitation were examined using the smoke flow visualization method and high-speed particle image velocimetry. The temporal and spatial evolutions of the puffs were examined using phase-resolved ensemble-averaged velocity fields and the velocity, length scales, and vorticity characteristics of the puffs were studied. The time-averaged velocity fields were calculated to analyze the velocity distributions and vorticity contours. The results show that a puff consists of a pair of counter-rotating vortex rings. An initial vortex ring was formed due to a concentration of vorticity at the lee side of the issuing jet at the instant of the mid-oscillation cycle. A vortex ring rotating in the opposite direction to that of the initial vortex ring was subsequently formed at the upwind side of the issuing jet. These two counter-rotating vortex rings formed a "mushroom" vortex pair, which was deflected by the crossflow and traveled downstream along a time-averaged trajectory of zero vorticity. The trajectory was situated far above the time-averaged streamline evolving from the leading edge of the tube. The velocity magnitudes of the vortex rings at the upwind and the lee side decreased with time evolution as the puffs traveled downstream due to momentum dissipation and entrainment effects. The puffs traveling along the trajectory of zero vorticity caused large velocities to appear above the leading-edge streamline.

Hsu, C. M.; Huang, R. F.



Improved diagnostic value of combined time and frequency domain analysis of the signal-averaged electrocardiogram after myocardial infarction  

Microsoft Academic Search

BackgroundTime domain analysis (TD) of the signal-averaged electrocardiogram (SAECG) presents a higher incidence of false positives in inferior myocardial infarction (MI), whereas spectral turbulence analysis (STA) suffers from a higher incidence of false positives in anterior MI. We investigated the hypothesis that a combined TD and STA (TD+STA) analysis of the SAECG could improve its predictive accuracy for major arrhythmic

Rafael Vázquez; Edward B Caref; Francisco Torres; Margarita Reina; Aurora Espina; Nabil El-Sherif



A time-averaged model for gas–solids flow in a one-dimensional vertical channel  

SciTech Connect

In this study, we are interested in deriving time-smoothed governing and constitutive equations for gas–solids flow in moderately dense systems where particle–particle collision is the main energy dissipation mechanism. Results obtained from dynamic simulations of a gas–solids flow in a 1D channel are used to show that it is possible to obtain expressions for the time-averaged constitutive relations based on Taylor series expansion. We demonstrate, by comparing with time-averaged transient results, that the 1st term (or laminar) in the series expressions of most non-linear constitutive relations can yield inaccurate quantitative and qualitative results. This means that steady-state models derived by simply removing the partial time derivative from the governing equations are not suitable for gas–solids flows. This study shows that it was necessary to include many terms of the Taylor series expression of non-linear constitutive relations (such as the granular energy dissipation term) due to large-scale oscillations that were computed for all flow variables at all locations in the 1D domain. In some cases, the Taylor series expansion diverged and the Euler transformation was used to improve the convergence of these series. In this moderately dense flow system, turbulence in the gas-phase was found to be just a reaction to turbulence in the solids phase that resulted from the large-scale motion of solids clusters. This resulted in a negative turbulent gas viscosity computed due to the fact that gas (in the horizontal direction) flows only to occupy regions vacated by clusters of solids. The steady-state results obtained using the time-smoothed gas–solids flow model compared well with the time-averaged results obtained using the transient model for all flow variables.

Benyahia, S.



QRS complex duration enhancement as ventricular late potential indicator by signal-averaged ECG using time-amplitude alignments.  


Ventricular late potentials (VLPs) are small-amplitude, short-duration waves that appear at the end of the QRS complex, making the QRS complex duration larger. The signal-averaged electrocardiography (ECG) technique enhances VLPs by averaging beats, assuming noise is the only random variable. However, ECG signals are not completely stationary, and each beat shows different elongations in both time and amplitude. This research proposes using piecewise linear approximation to segment each beat and aligning the beats with the technique known as derivative dynamic time warping, so that the beats are better aligned and the presence of VLPs is consequently enhanced. We recorded high-resolution ECGs (HRECGs) from 50 subjects in supine position with no heart-stroke antecedents. VLPs were created synthetically and added to the HRECGs. Two cases were evaluated: (i) duration of the QRS complexes with VLPs without beat alignment, and (ii) duration of QRS complexes with VLPs using beat alignment in time and amplitude. Considering QRS duration as an indicator of VLP presence, results show that when using beat alignment in time and amplitude it is possible to reach a sensitivity of 0.96 and a specificity of 0.52, as opposed to 0.72 and 0.40, respectively, when using only averaging without beat alignment in time and amplitude. PMID:23446922

Avitia, Roberto L; Reyna, Marco A; Bravo-Zanoguera, Miguel E; Cetto, Lucio A
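Dynamic time warping, the alignment machinery behind the beat-alignment approach above, finds the minimum-cost monotone pairing between two sequences via dynamic programming. A generic DTW sketch on two synthetic "beats" of different lengths (the paper uses the derivative variant on piecewise-linear beat segments; this is the plain textbook version):

```python
import numpy as np

def dtw_path(a, b):
    """Minimal dynamic time warping: cost matrix + backtracked path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the optimal alignment from (n, m) to (1, 1)
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

# Two synthetic 'beats': the same pulse shape, one stretched in time
t = np.linspace(0, 1, 40)
beat1 = np.exp(-((t - 0.5) ** 2) / 0.01)
beat2 = np.exp(-((np.linspace(0, 1, 55) - 0.55) ** 2) / 0.01)
dist, path = dtw_path(beat1, beat2)
```

The returned path maps each sample of one beat to samples of the other, which is what allows beats of unequal duration to be averaged coherently.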



Development of a low-cost system for measuring conditional time-averaged gradients of SO2 and NH3.  


A conditional time-averaged gradient (COTAG) system has been developed to provide direct long-term (weekly to monthly) average flux gradient measurements for a range of trace gases, between land and atmosphere. Over daily periods, atmospheric conditions can range from high stability, where the vertical gradients of ambient concentration are enhanced due to very small diffusivity, to highly unstable conditions, in which concentration gradients are small due to the intense turbulent activity of the surface layer. The large vertical gradients generated by high stability would bias the estimate of the actual flux: to avoid this, the COTAG system samples conditionally, within a carefully refined range of stability. A comparison with a continuous flux gradient system suggested that the removal of stable conditions from the sampling period does not substantially modify the evaluation of the long-term fluxes. PMID:19184491

Famulari, D; Fowler, D; Nemitz, E; Hargreaves, K J; Storeton-West, R L; Rutherford, G; Tang, Y S; Sutton, M A; Weston, K J



General structure for real-time fringe pattern preprocessing and implementation of median filter and average filter on FPGA  

NASA Astrophysics Data System (ADS)

In recent years, many algorithms were proposed for fringe pattern analysis and phase unwrapping, including the median filter, Fourier transform, windowed Fourier transform and wavelet transforms. However, most of them are computationally expensive, which may be a barrier to real-time fringe pattern analysis. In this paper an FPGA-based system and optimization framework for fringe pattern processing is presented. The median filter and average filter are taken as examples to show the performance of the FPGA system over a normal PC in real-time fringe pattern processing. The algorithms have been implemented on a Celoxica RC340 FPGA development board using Handel-C - a C-like language with inherent parallelism making full use of FPGA hardware. Furthermore, an evaluation and comparison of computation speed between the FPGA-based system and a normal PC is given to demonstrate the high potential of our hardware system in fringe pattern analysis.

Gao, Wenjing; Qian, Kemao; Wang, Haixia; Lin, Feng; Seah, Hock Soon; Cheong, Lee Sing



A Sensitive Branched DNA HIV-1 Signal Amplification Viral Load Assay with Single Day Turnaround  

PubMed Central

Branched DNA (bDNA) is a signal amplification technology used in clinical and research laboratories to quantitatively detect nucleic acids. An overnight incubation is a significant drawback of highly sensitive bDNA assays. The VERSANT® HIV-1 RNA 3.0 Assay (bDNA) (“Versant Assay”) currently used in clinical laboratories was modified to allow shorter target incubation, enabling the viral load assay to be run in a single day. To dramatically reduce the target incubation from 16–18 h to 2.5 h, composition of only the “Lysis Diluent” solution was modified. Nucleic acid probes in the assay were unchanged. Performance of the modified assay (assay in development; not commercially available) was evaluated and compared to the Versant Assay. Dilution series replicates (>950 results) were used to demonstrate that analytical sensitivity, linearity, accuracy, and precision for the shorter modified assay are comparable to the Versant Assay. HIV RNA-positive clinical specimens (n = 135) showed no significant difference in quantification between the modified assay and the Versant Assay. Equivalent relative quantification of samples of eight genotypes was demonstrated for the two assays. Elevated levels of several potentially interfering endogenous substances had no effect on quantification or specificity of the modified assay. The modified assay with drastically improved turnaround time demonstrates the viability of signal-amplifying technology, such as bDNA, as an alternative to the PCR-based assays dominating viral load monitoring in clinical laboratories. Highly sensitive bDNA assays with a single day turnaround may be ideal for laboratories with especially stringent cost, contamination, or reliability requirements.

Baumeister, Mark A.; Zhang, Nan; Beas, Hilda; Brooks, Jesse R.; Canchola, Jesse A.; Cosenza, Carlo; Kleshik, Felix; Rampersad, Vinod; Surtihadi, Johan; Battersby, Thomas R.



Time Average Field and Secular Variations of Pleistocene to Recent Lava Flows From the Ruiz-Tolima Volcanic Complex (Colombia)  

NASA Astrophysics Data System (ADS)

Thirty-three Pleistocene to recent lava flows from the Ruiz-Tolima Volcanic Complex (Colombian Andes) have been sampled for time average field (TAF) and paleosecular variation studies. A total of 10 cores were drilled per flow (site) and stepwise AF demagnetization has been carried out. After principal component analysis and mean-site direction calculations, 29 sites (25 and 4 with normal and reverse polarity, respectively), with α95 < 5.5°, were selected for further calculations. The overall mean direction among the sites (D = 1.8°, I = 6.3°, α95 = 5.6°) closely fits (at the 95% confidence level) the expected paleomagnetic direction (at the area of study) of a geomagnetic field composed primarily of a geocentric axial dipole with a 5% axial quadrupole component (I = 5.72°), but also coincides with a simple GAD model. VGP scatter (13°) is similar to that expected from Model G (12.8°).
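The mean direction and α95 statistics quoted in paleomagnetic studies like this one follow Fisher (1953) statistics on the unit sphere. A minimal sketch (the function name and the sample directions are illustrative, not data from the study):

```python
import math

def fisher_mean(directions):
    """Fisher mean of paleomagnetic directions given as
    (declination, inclination) pairs in degrees.
    Returns (mean dec, mean inc, precision parameter k, alpha95)."""
    x = y = z = 0.0
    for dec, inc in directions:
        d, i = math.radians(dec), math.radians(inc)
        x += math.cos(i) * math.cos(d)
        y += math.cos(i) * math.sin(d)
        z += math.sin(i)
    n = len(directions)
    R = math.sqrt(x * x + y * y + z * z)      # resultant vector length
    mean_dec = math.degrees(math.atan2(y, x)) % 360.0
    mean_inc = math.degrees(math.asin(z / R))
    k = (n - 1) / (n - R)                     # precision parameter
    # 95% confidence cone: cos(a95) = 1 - ((n-R)/R) * ((1/0.05)**(1/(n-1)) - 1)
    a95 = math.degrees(math.acos(
        1 - (n - R) / R * ((1 / 0.05) ** (1 / (n - 1)) - 1)))
    return mean_dec, mean_inc, k, a95
```

Tightly clustered site directions yield a large k and a small α95, which is the basis for the site-selection cutoff (α95 < 5.5°) used above.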

Mejia, V.; Sánchez-Duque, A.; Opdyke, N. D.; Huang, K.; Rosales, A.




PubMed Central

In our previous work, the basilar membrane velocity VBM for a gerbil cochlea was calculated and compared with physiological measurements. The calculated VBM showed excessive phase excursion and, in the active case, a best-frequency place shift of approximately two-fifths of an octave higher. Here we introduce a refined model that uses the time-averaged Lagrangian for the conservative system to resolve the phase excursion issues. To improve the overestimated best-frequency place found in the previous feed-forward active model, we implement in the new model a push-pull mechanism from the outer hair cells and phalangeal process. Using this new model, the VBM for the gerbil cochlea was calculated and compared with animal measurements. The results show excellent agreement for mapping the location of the maximum response to frequency, while the agreement for the response at a fixed point as a function of frequency is excellent for the amplitude and good for the phase.




Time-dependent response of a zonally averaged ocean-atmosphere-sea ice model to Milankovitch forcing  

NASA Astrophysics Data System (ADS)

An ocean-atmosphere-sea ice model is developed to explore the time-dependent response of climate to Milankovitch forcing for the time interval 5-3 Myr BP. The ocean component is a zonally averaged model of the circulation in five basins (Arctic, Atlantic, Indian, Pacific, and Southern Oceans). The atmospheric component is a one-dimensional (latitudinal) energy balance model, and the sea-ice component is a thermodynamic model. Two numerical experiments are conducted. The first experiment does not include sea ice and the Arctic Ocean; the second experiment does. Results from the two experiments are used to investigate (1) the response of annual mean surface air and ocean temperatures to Milankovitch forcing, and (2) the role of sea ice in this response. In both experiments, the response of air temperature is dominated by obliquity cycles at most latitudes. On the other hand, the response of ocean temperature varies with latitude and depth. Deep water formed between 45°N and 65°N in the Atlantic Ocean mainly responds to precession. In contrast, deep water formed south of 60°S responds to obliquity when sea ice is not included. Sea ice acts as a time-integrator of summer insolation changes such that annual mean sea-ice conditions mainly respond to obliquity. Thus, in the presence of sea ice, air temperature changes over the sea ice are amplified, and temperature changes in deep water of southern origin are suppressed since water below sea ice is kept near the freezing point.
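The atmospheric component described above is a one-dimensional (latitudinal) energy balance model; the core energy-balance idea can be illustrated with an even simpler zero-dimensional analogue in Python (all constants are standard textbook values, not the paper's):

```python
def equilibrium_temperature(S=342.0, albedo=0.3, A=203.3, B=2.09):
    """Zero-dimensional energy balance: at equilibrium, absorbed shortwave
    S*(1 - albedo) equals linearized outgoing longwave A + B*T.
    S and A in W/m^2, B in W/m^2 per deg C; returns T in deg C."""
    return (S * (1 - albedo) - A) / B

T_eq = equilibrium_temperature()   # near the observed global mean
```

The latitudinal version solved in the paper adds meridional heat transport and an albedo that depends on ice cover, but each latitude band balances energy in this same way.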

Antico, Andrés; Marchal, Olivier; Mysak, Lawrence A.



Determination of Average Failure Time and Microstructural Analysis of Sn-Ag-Bi-In Solder Under Electromigration  

NASA Astrophysics Data System (ADS)

Despite the extensive use of Sn-Ag-Cu as a Pb-free solder alloy, its melting point is significantly higher than that of eutectic Sn-Pb solder. Sn-Ag-Bi-In solder is an alternative Pb-free solder, with a melting temperature close to that of eutectic Sn-Pb. This study elucidates the electromigration behavior of Sn-Ag-Bi-In solder and then compares the results with those of the Sn-Ag-Bi system. The behavior of Pb-free Sn-Ag-Bi-In solder strips under electromigration is examined by preparing them in Si (001) U-grooves. The samples are then tested under various temperatures and current densities. Although the compounds thicken near both electrodes with current stressing, the thickness at the anode exceeds that at the cathode. Experimental results of the average failure time indicate that Sn-Ag-Bi-In solder has a longer lifetime than does Sn-Ag-Bi, which is attributed to the ? phase. Additionally, the ? phase dissolved by the current in the early stage replenishes the outgoing atomic flux. These atomic fluxes also enhance the growth of abnormally large particles in the middle of the strips. Field-emission electron probe microanalysis (FE-EPMA) results indicate that the amount of indium is reduced after the ? phase near the cathode is exhausted for extended current stressing time.

Wu, Albert T.; Sun, K. H.



Examination of a Technique for the Early Detection of Failure in Gears by Signal Processing of the Time Domain Average of the Meshing Vibration.  

National Technical Information Service (NTIS)

An existing technique for the signal processing to the time domain average of the tooth meshing vibration of gears is examined with application to the early detection of failure. It is shown that the regular signal extracted from the time domain average o...

P. D. McFadden



Teacher Pay-for-Performance in School Turnaround: How Bonuses and Differentiated Pay Scales Can Help Support School Turnaround. Meeting the Turnaround Challenge: Strategies, Resources & Tools to Transform a Framework into Practice  

ERIC Educational Resources Information Center

Given the importance of good teaching and leadership for school success, turnaround schools should think carefully about how to structure professional environments that reward and motivate excellence. A system of "Pay-for-Contribution" that includes tools such as hard-to-staff and skill shortage pay, performance pay, and/or retention pay, will…

Mass Insight Education (NJ1), 2009



The time course and relation of positive signal-averaged electrocardiograms by time-domain and spectral temporal mapping analyses after infarction.  


We evaluated the time course of development of positive signal-averaged electrocardiograms (SA-ECGs) by time-domain and Spectral Temporal Mapping (STM) analyses after myocardial infarction in 88 patients without bundle branch block. The incidence of positive SA-ECGs by time-domain analysis peaked at 4 to 8 weeks postinfarction whereas the peak incidence by STM analysis varied from 4 days to 4 to 10 months postinfarction. Positive time-domain SA-ECGs demonstrated a significantly reduced factor of normality (NF) compared with negative time-domain SA-ECGs by X, Z, or vector STM analyses, but marked overlap was present for the standard deviations of positive and negative SA-ECGs in all STM leads. Chi square analysis demonstrated a significant correlation only between X-lead STM analysis and time-domain analysis; however, the two methods were markedly discordant. Although there is a statistically significant relation between time-domain and STM analyses of SA-ECGs, the two analyses are not clinically interchangeable. PMID:7832095

Graceffo, M A; O'Rourke, R A; Hibner, C; Boulet, A J



Paleosecular variation and time-averaged field recorded in late Pliocene-Holocene lava flows from Mexico  

NASA Astrophysics Data System (ADS)

This paper presents results from 13 paleomagnetic sites from an area west of Mexico City and 7 sites from an area of dispersed monogenetic volcanism in the state of San Luis Potosí, accompanied by seven 40Ar/39Ar radiometric dates. An analysis of secular variation and the time-averaged paleomagnetic field in the Trans-Mexican Volcanic Belt (TMVB), using compiled data both newly obtained and from the literature, is presented. Interpretation can best be constrained after excluding from the data set sites that appear to be tectonically affected. The selected data include 187 sites of late Pliocene-Holocene age. The mean direction among these sites is Dec = 358.8°, Inc = 31.6°, α95 = 2.0°, k = 29. This direction does not overlap the expected geocentric axial dipole (GAD) but is consistent with a GAD plus a 5% quadrupole. The virtual geomagnetic pole scatter of this group of sites (12.7°, with lower and upper 95% confidence limits of 11.9° and 14.1°) is consistent with the value expected from Model G (13.6°).

Mejia, V.; Böhnel, H.; Opdyke, N. D.; Ortega-Rivera, M. A.; Lee, J. K. W.; Aranda-Gomez, J. J.



Time average neutralized migma: A colliding beam/plasma hybrid physical state as aneutronic energy source - A review  

NASA Astrophysics Data System (ADS)

A D+ beam of kinetic energy Ti = 0.7 MeV was stored in a ``simple mirror'' magnetic field as self-colliding orbits or migma and neutralized by ambient, oscillating electrons whose bounce frequencies were externally controlled. The space-charge density limit was exceeded by an order of magnitude without instabilities. Three nondestructive diagnostic methods allowed measurements of ion orbit distribution, ion storage times, ion energy distribution, nuclear reaction rate, and reaction product spectrum. Migma formed a disc 20 cm in diameter and 0.5 cm thick. Its ion density was sharply peaked in the center; the ion-to-electron temperature ratio was Ti/Te ~ 10^3; ion-electron temperature equilibrium was never reached. The volume-average and central D+ densities were n = 3.2 × 10^9 cm^-3 and nc = 3 × 10^10 cm^-3 respectively, compared to the space charge limit density nsc = 4 × 10^8 cm^-3. The energy confinement time was τc = 20-30 s, limited by charge-exchange reactions with the residual gas in the vacuum (5 × 10^-9 Torr). The ion energy loss rate was 1.4 keV/s. None of the instabilities that were observed in mirrors at several orders of magnitude lower density occurred. The proton energy spectrum for d + d --> T + p + 4 MeV shows that the deuterons collided at an average crossing angle of 160°. Evidence for exponential density buildup has also been observed. Relative to Migma III results and measured in terms of the product of ion energy E, density n, and confinement time τ, device performance was improved by a factor of 500. Using the central fast ion density, we obtained the triple product Tnτ ≈ 4 × 10^14 keV s cm^-3, which is greater than that of the best fusion devices. The luminosity (collision rate per unit cross section) was ~ 10^29 cm^-2 s^-1, with 0.7 A ion current through the migma center. 
The stabilizing features of migma are: (1) large Larmor radius; (2) small canonical angular momentum; (3) short axial length z (disc shape); (4) nonadiabatic motions in r and z; (5) precession and energy spread; (6) ambipolar potential; (7) radial density gradient; (8) large ion-to-electron temperature ratio; (9) metal walls in z; (10) suitability for external stabilization techniques (small volume, recurrent surfacing of ions); and particularly (11) diamagnetic well. Extrapolation of the results to reactor densities led to a D + 3He reactor ``basic migma disc'' 0.5 m in diameter and 0.25 m thick, suspended in a 10 T field, generating 1.5 MW(th). A 2-MW(e) power plant would consist of 3 such discs in a common volume, referred to as a triplet migmacell. Its specific power is projected to be 1 MW(e)/ton. A large power plant of any size would consist of a large number of migma discs in a common volume. The advantages of such modular systems are: (1) economy of mass production: capital cost per kW will be 25% that of fission and 10% that of DT fusion; (2) plants will be economical for all sizes above 10 kW(e); (3) minimal heat pollution, thanks to direct conversion of the charged ion kinetic energy into electricity; (4) no proliferative potential; and (5) large power-to-weight ratio due to absence of shielding. Anticipated physics problems in density increase are discussed.
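The quoted triple product can be checked directly from the numbers reported in the abstract (a quick arithmetic sketch; E, n, and tau are taken straight from the text):

```python
# Triple product T*n*tau from the reported migma parameters
E_keV = 0.7e3          # D+ kinetic energy: 0.7 MeV expressed in keV
n_central = 3e10       # central fast ion density, cm^-3
tau_s = 20.0           # lower bound of the 20-30 s confinement time

triple = E_keV * n_central * tau_s   # keV * s * cm^-3
```

With the lower confinement-time bound this gives 4.2 × 10^14 keV s cm^-3, consistent with the abstract's "≈ 4 × 10^14".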

Maglich, Bogdan C.



Average Speed  

NSDL National Science Digital Library

This lab is not an inquiry activity but will help students understand the meaning of average velocity. Students do not easily understand from a textbook that an object's velocity changes over a period of acceleration. They have difficulty understanding wh
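The point the lab targets, that average velocity over an interval of acceleration is displacement divided by elapsed time rather than the final speed, can be shown numerically (a hypothetical constant-acceleration example, not part of the lab itself):

```python
def average_velocity(x0, x1, t0, t1):
    """Average velocity = displacement / elapsed time."""
    return (x1 - x0) / (t1 - t0)

# Starting from rest under constant acceleration a, position x = 0.5*a*t^2,
# so the average velocity over [0, t] is exactly half the final velocity a*t.
a, t = 2.0, 5.0                                           # m/s^2, s
v_avg = average_velocity(0.0, 0.5 * a * t * t, 0.0, t)    # 5.0 m/s
v_final = a * t                                           # 10.0 m/s
```

Students often expect the average to equal the final speed; the factor-of-two gap makes the distinction concrete.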

Horton, Michael



Time-averaged acoustic forces acting on a rigid sphere within a wide range of radii in an axisymmetric levitator  

NASA Astrophysics Data System (ADS)

Acoustic levitation is a physical phenomenon that arises when the acoustic radiation pressure is strong enough to overcome gravitational force. It is a nonlinear phenomenon which can be predicted only if higher-order terms are included in the acoustic field calculation. The study of acoustic levitation is usually conducted by solving the linear acoustic equation and bridging the gap with an analytical solution. Only recently has the scientific community shown interest in the full solution of the Navier-Stokes equations with the aim of deeply investigating the acoustic radiation pressure. We present herein a numerical model based on the Finite Volume Method (FVM) and Dynamic Mesh (DM) for the calculation of the acoustic radiation pressure acting on a rigid sphere inside an axisymmetric levitator, the most widely used and investigated type of levitator. In this work, we focus on the third resonance mode. The use of DM is new in the field of acoustic levitation, allowing a more realistic simulation of the phenomenon, since no standing wave has to be imposed as a boundary condition. The radiating plate is modeled as a rigid cylinder moving sinusoidally along the central axis. The time-averaged acoustic force exerted on the sphere is calculated for different radii Rs of the sphere (0.025 to 0.5 wavelengths). It is shown that the acoustic force increases in proportion to Rs^3 for small radii, then decreases when the standing wave condition is violated, and finally rises again in the travelling wave radiation pressure configuration. The numerical model is validated for the inviscid case against a Finite Element Method model of the linear acoustic model based on King's approximation.

Foresti, Daniele; Nabavi, Majid; Poulikakos, Dimos



Time Averaging and Fitting of Nonlinear Metabolic Changes: The Issue of the Time Index Choice Applied to 31P MRS Investigation of Muscle Energetics  

NASA Astrophysics Data System (ADS)

We present an exact analytical method dedicated to fitting time-dependent exponential-like changes in MR spectra. As an illustration, this method has been applied to fitting metabolic changes recorded by 31P MRS in human skeletal muscle occurring during a rest-exercise-recovery protocol. When recording metabolic changes with the accumulative method, the time averaging of the MR signals implies the choice of a time index for fitting any changes in the features of the associated MR spectra. A critical examination of the different ways (constant, linear, and exponential) of choosing the time index is reported. By numerical analysis, we have calculated the errors generated by the three methods and we have compared their sensitivity to noise. In the case of skeletal muscle, both constant and linear methods introduce large and uncontrolled errors for the whole set of metabolic parameters derived from [PCr] changes. In contrast, the exponential method affords a reliable estimation of critical parameters in muscle bioenergetics in both normal and pathological situations. This method is very easy to implement and provides an exact analytical solution to fitting changes in MR spectra recorded by the accumulative method.
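The issue the paper addresses can be illustrated numerically: when an exponential signal is time-averaged over an acquisition block, assigning the averaged value to the block midpoint (the "linear" time index) misstates the kinetics, whereas the "exponential" time index reproduces them exactly. A sketch in Python with an assumed recovery time constant (our illustration, not the paper's data):

```python
import math

def block_average(f, t0, t1, n=1000):
    """Numerical time average of f over [t0, t1] (midpoint rule)."""
    dt = (t1 - t0) / n
    return sum(f(t0 + (i + 0.5) * dt) for i in range(n)) * dt / (t1 - t0)

tau = 30.0                              # s, assumed PCr recovery constant
f = lambda t: math.exp(-t / tau)        # normalized exponential change

# average the signal over a 20 s acquisition block starting at t = 10 s
avg = block_average(f, 10.0, 30.0)

# "linear" index: the value at the block midpoint t = 20 s is NOT the average
midpoint_value = f(20.0)

# "exponential" index: the time whose signal value equals the block average
t_star = -tau * math.log(avg)
```

Because the exponential is convex, the block average always exceeds the midpoint value, so fitting averaged spectra against midpoint times systematically biases the recovered time constant; the exponential index removes that bias by construction.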

Simond, G.; Bendahan, D.; Cozzone, P. J.



Time-weighted average sampling of airborne propylene glycol ethers by a solid-phase microextraction device.  


A solid-phase microextraction (SPME) device was used as a diffusive sampler for airborne propylene glycol ethers (PGEs), including propylene glycol monomethyl ether (PGME), propylene glycol monomethyl ether acetate (PGMEA), and dipropylene glycol monomethyl ether (DPGME). Carboxen-polydimethylsiloxane (CAR/PDMS) SPME fiber was selected for this study. A polytetrafluoroethylene (PTFE) tubing was used as the holder, and the SPME fiber assembly was inserted into the tubing as a diffusive sampler. The diffusion path length and area of the sampler were 0.3 cm and 0.00086 cm(2), respectively. The theoretical sampling constants at 30°C and 1 atm for PGME, PGMEA, and DPGME were 1.50 × 10(-2), 1.23 × 10(-2) and 1.14 × 10(-2) cm(3) min(-1), respectively. For evaluations, known concentrations of PGEs around the threshold limit values/time-weighted average with specific relative humidities (10% and 80%) were generated both by the air bag method and the dynamic generation system, while 15, 30, 60, 120, and 240 min were selected as the time periods for vapor exposures. Comparisons of the SPME diffusive sampling method to Occupational Safety and Health Administration (OSHA) organic Method 99 were performed side-by-side in an exposure chamber at 30°C for PGME. A gas chromatography/flame ionization detector (GC/FID) was used for sample analysis. The experimental sampling constants of the sampler at 30°C were (6.93 ± 0.12) × 10(-1), (4.72 ± 0.03) × 10(-1), and (3.29 ± 0.20) × 10(-1) cm(3) min(-1) for PGME, PGMEA, and DPGME, respectively. The adsorption of chemicals on the stainless steel needle of the SPME fiber was suspected to be one of the reasons why significant differences between theoretical and experimental sampling rates were observed. Correlations between the results for PGME from both SPME device and OSHA organic Method 99 were linear (r = 0.9984) and consistent (slope = 0.97 ± 0.03). Face velocity (0-0.18 m/s) also proved to have no effects on the sampler. 
However, the effects of temperature and humidity have been observed. Therefore, adjustments of experimental sampling constants at different environmental conditions will be necessary. PMID:22651222
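The theoretical sampling constant of a diffusive sampler of this geometry follows Fick's first law, SR = D·A/L. A sketch using the path length and area reported above (the diffusion coefficient value is an assumption chosen to reproduce the quoted PGME constant, not a measured value from the study):

```python
def sampling_constant(D, A, L):
    """Diffusive sampling constant SR = D * A / L (Fick's first law).
    D: vapor diffusion coefficient (cm^2/min), A: diffusion area (cm^2),
    L: diffusion path length (cm). Returns SR in cm^3/min."""
    return D * A / L

def twa_concentration(mass_ng, SR, minutes):
    """Time-weighted average concentration (ng/cm^3) from collected mass."""
    return mass_ng / (SR * minutes)

# geometry from the study: L = 0.3 cm, A = 0.00086 cm^2;
# D ~ 5.23 cm^2/min is an assumed value for PGME vapor
SR = sampling_constant(5.23, 0.00086, 0.3)   # about 1.5e-2 cm^3/min
```

Dividing the analyte mass recovered from the fiber by SR × exposure time then yields the time-weighted average concentration, which is how such samplers are read out.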

Shih, H C; Tsai, S W; Kuo, C H



Conformations of the iduronate ring in short heparin fragments described by time-averaged distance restrained molecular dynamics.  


The polyconformational behavior of l-iduronic acid (L-IdoA2S) in heparin derivatives has been previously analyzed in terms of intra-ring proton-proton vicinal coupling constants ((3)JHH) through mathematical fit of experimental and theoretical values (Ferro DR, Provasoli A, Ragazzi M, Casu B, Torri G, Bossennec V, Perly B, Sinay P, Petitou M, Choay J. 1990. Conformer Populations of l-Iduronic Acid Residues in Glycosaminoglycan Sequences. Carbohydr Res. 195:157-167; Muñoz-García JC, López-Prados J, Angulo J, Díaz-Contreras I, Reichardt N, de Paz JL, Martín-Lomas M, Nieto PM. 2012. Effect of the substituents of the neighboring ring in the conformational equilibrium of iduronate in heparin-like trisaccharides. Chemistry. 18:16319-16331.). However, this methodology is subject to the experimental uncertainties of the J-coupling measurements, the force field deviations, and the goodness of the least-squares fit. In the present work, we have used time-averaged distance restrained molecular dynamics (tar-MD) to largely reduce these errors, which enables accurate quantification of the population of conformers, or puckers, of the L-IdoA2S residue, in a set of eight heparin-like trisaccharides following the general sequence d-glucosamine (GlcN)-IdoA-GlcN, directly from the time evolution of the puckering coordinates θ and φ. Thus, by carrying out tar-MD simulations in explicit water, with the exclusive nuclear overhauser enhancement (NOE)-derived distance H2-H5 of the L-IdoA2S (2)SO conformer as the unique imposed constraint, we have been able to accurately and easily determine the different extents to which the iduronate ring populates the polar ((1)C4 chair) and equatorial ((2)SO skew-boat) areas of the puckering sphere depending on the sulfation pattern of the flanking GlcN residues, under low temperature conditions (278 K). 
The results indicate that 6-O-sulfation at the reducing-end GlcN residue facilitates the (1)C4 to (2)SO transitions of the iduronate ring by augmenting the flexibility of the C2-C3 torsion, driving the conformational equilibrium toward a majority of equatorial conformers. PMID:23903025

Muñoz-García, Juan Carlos; Corzana, Francisco; de Paz, José L; Angulo, Jesús; Nieto, Pedro M



Mapping the time-averaged distribution of combustion-derived air pollutants in the San Francisco Bay Area  

NASA Astrophysics Data System (ADS)

Urban air pollution is an ongoing and complicated problem for both residents and policy makers. This study aims to provide a better understanding of the geographic source and fate of organic pollutants in a dynamic urban environment. Natural and artificial hydrophobic substrates were employed for the passive monitoring and mapping of ground-level organic pollutants in the San Francisco Bay area. We focused specifically on volatile and semi-volatile polycyclic aromatic hydrocarbons (PAHs). These compounds are proxies for a broad range of combustion-related air pollutants derived from local, regional, and global combustion sources. PAHs include several well-studied carcinogens and can be measured easily and accurately across a broad range of concentrations. Estimates of time-integrated vapor phase and particle deposition were made by measuring accumulated PAHs in the leaves of several widely distributed tree species (including Quercus agrifolia and Sequoia sempervirens) and an artificial wax film. Samples were designed to represent pollutant exposure over a period of one to several months. The selective sampling and analysis of hydrophobic substrates provides insight into the average geographic distribution of ground-level air pollutants in a simple and inexpensive way. However, accumulated organics do not directly correlate with human exposure, and the source signature of PAHs may be obscured by transport, deposition, and flux processes. We attempted to address some of these complications by studying 1) PAH accumulation rates within substrates in a controlled microcosm, 2) differences in PAH abundance in different substrate types at the same locality, and 3) samples near long-term high-volume air sampling stations. We also set out to create a map of PAH concentrations based on our measurements. 
This map can be directly compared with interpolated data from high-volume sampling stations and used to address questions concerning atmospheric heterogeneity of these pollutants (i.e., due to both source localization and dominant wind patterns). Our initial results indicate that exposure to PAHs in the bay area is geographically heterogeneous and individual exposure may vary by more than two orders of magnitude. The signatures of PAH contamination also vary considerably, indicating that different sources and differing transport mechanisms may be important at different sites and times.

Yu, C.; Zinniker, D. A.; Moldowan, J.



Lava discharge rate estimates from thermal infrared satellite data for Pacaya volcano, Guatemala: Implications for time-averaged eruption processes  

NASA Astrophysics Data System (ADS)

The Pacaya volcanic complex has been producing lava flows nearly continuously since 1961. Matías (2009) compiled a detailed database including information such as length, surface area, volume, duration, and effusion rates for each of the 248 lava flows that occurred during this time. In this investigation, time-averaged discharge rates (TADR) were estimated for a subset of lava flows using a satellite-based method initially applied to infrared satellite data for Etna by Harris et al. (1997). Satellite-based estimates potentially provide a quicker, safer, and less expensive alternative to ground-based measurements and are therefore valuable for hazard mitigation. The excellent record of recent activity at Pacaya provides a unique opportunity to calibrate results from the satellite-based method by comparing them with reliable ground-based measurements. Imagery from two sensors of differing temporal and spatial resolutions was analyzed in order to produce a comprehensive dataset: MODIS (one image every 6 hours, 1-km pixels) and GOES (one image every 15 minutes, 4-km pixels). As of August 2011, 2403 MODIS and 2642 GOES images have been analyzed. Due to the relatively low intensity of Pacaya's effusive activity, each image was searched manually for volcanic "hot spots". It was found that MODIS data allowed better estimations of TADR than did GOES data. We suggested that the very small, sub-resolution flows typical of Pacaya may have surpassed the limits of low-resolution GOES imagery for this particular application. TADR derived from MODIS data were used to describe and parameterize eruptive cycles, as well as to explore conduit models. A pattern was found over the past two decades of short high-TADR periods followed by longer low-TADR periods. 
We suggested that the low TADR experienced during longer "bleeding" of the conduit may approximate the magma supply rate to the shallow system, while high TADR eruptions may represent the release of volumes collected during phases of non-eruptive degassing. The supply rate was estimated as approximately 6777 m3/day. Since Pacaya is a continuously degassing, open-vent volcano, the supply rate to the conduit can also be estimated via sulfur dioxide gas measurements. Developing results will be discussed.
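The Harris et al. (1997) approach converts a satellite-derived radiant heat flux into a time-averaged discharge rate by dividing by the heat released per unit volume of cooling, partly crystallizing lava. A hedged sketch; the functional form is the standard one, but every parameter value below is an illustrative default, not a value from this study:

```python
def tadr(q_rad, density=2600.0, cp=1150.0, dT=200.0,
         crystal_frac=0.45, latent=3.5e5):
    """Time-averaged discharge rate (m^3/s) from radiant heat flux q_rad (W),
    in the spirit of Harris et al. (1997):
        TADR = Q_rad / (rho * (cp * dT + phi * c_L))
    density: lava density (kg/m^3), cp: specific heat (J/kg/K),
    dT: cooling range (K), crystal_frac: crystallized fraction phi,
    latent: latent heat of crystallization c_L (J/kg).
    All defaults are illustrative assumptions."""
    return q_rad / (density * (cp * dT + crystal_frac * latent))
```

With these assumed parameters a 1 GW radiant flux maps to roughly 1 m^3/s; the large uncertainty in the thermal parameters is one reason ground-based calibration, as at Pacaya, is valuable.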

Morgan, H. A.; Harris, A. J.; Rose, W. I.



Time-averaged near-bed suspended sediment concentrations under waves and currents: Comparison of measured and model estimates  

NASA Astrophysics Data System (ADS)

Profiles of suspended sediment concentration and velocity were measured over a 15-day period at a near-shore site off Santa Cruz, CA in Monterey Bay. The concentration and velocity data were collected with an Acoustic Backscattering System (ABS) and Acoustic Current Profiler (ACP) that were mounted on a bottom tripod. High-resolution bottom scanning sonar was also attached to the tripod to provide images of bed features during the experiment. Hourly time-averaged near-bed concentrations of suspended sediment were calculated from three models and compared with the measurements. Surface waves and currents that were generated by a storm of moderate intensity caused bed stresses that exceeded threshold stress for D50=0.02 cm, the median size of the moderately well-sorted bottom sediment, over a period of about 7 days. Estimates of the concentration at 1 cm above the bottom, Ca1, were obtained using the ABS measurements. These observations have been compared with predictions for the concentration at 1 cm above the bottom, C1. Nielsen's models for reference concentration Co [Nielsen, P., 1986. Suspended sediment concentrations under waves. Coastal Engineering 10, 23-31; Nielsen, P., 1992. Coastal Bottom Boundary Layers and Sediment Transport, Advanced Series on Ocean Engineering. World Scientific, Hackensack, NJ.] are purely wave-based and do not include effects of bottom currents on bed stress and bedform scales. C1 calculated from this model compared well with measured Ca1 when currents were weak and small oscillatory ripples were observed in the sonar images. However, during the 3-day period of highest bottom stresses modeled C1 did not compare well to Ca1. The other two models for C1, Glenn and Grant [Glenn, S.M., Grant, W.D., 1987. A suspended sediment stratification correction for combined wave and current flows. Journal of Geophysical Research 92(C8), 8244-8264.] and van Rijn and Walstra [Van Rijn, L.C., Walstra, D.J.R., 2004. 
Description of TRANSPOR2004 and implementation in Delft3D-ONLINE. Interim Report prepared for DG Rijkswaterstaat, Rijksinstituut voor Kust en Zee. Delft Hydraulics Institute, The Netherlands.], accounted for combined wave-current stresses and included different formulations for predicting bedform scales. C1 calculated from these models compared well with measurements throughout the early and middle period of the storm, but underpredicted measured values during the latter part of the experiment. An alternative method based on dimensional analysis provided a straightforward way to relate C1 to bed stress and sediment parameters. This new relationship based on dimensional analysis explained about 77% of the variability in Ca1 derived from the ABS data.

Cacchione, David A.; Thorne, Peter D.; Agrawal, Yogesh; Nidzieko, Nicholas J.



Solar wind driven radiation belt response functions at 100-min time scales using SAMPEX orbit-averaged electron fluxes  

Microsoft Academic Search

Characterizations of radiation belt dynamics using linear prediction filters (LPFs) were first published by Nagai (1988) and Baker et al. (1990). These studies focused primarily on short-term predictions of daily-averaged radiation fluxes at geostationary orbit using global magnetospheric indices and the speed of the solar wind impinging on the Earth's magnetosphere. Using solar wind data from the NSSDC

D. Baker; J. Rigler; D. Vassiliadis



A Rapid Turnaround Two-Stage Adiabatic Demagnetization Refrigerator for Cooling to 50 mK  

NASA Astrophysics Data System (ADS)

Many research and development programs require rapid access to very low temperatures (~50 mK). For detector development, relatively large experiment volumes are also needed for tests involving integrated detectors and readout amplifiers (which may need to be stationed at a different temperature). To provide this capability in a versatile, fast turnaround system, we have constructed a two-stage adiabatic demagnetization refrigerator (ADR) that is operated in a simple bucket-style dewar. The ADR/cryostat is separated into two concentric inserts; the outer insert supports two 3.3 T magnets, magnet leads and magnetic shielding, and the inner one consists of a hermetic experiment volume coupled to the ADR's salt pills and heat switches. The magnet insert remains in the dewar at all times, while the ADR insert may be inserted and removed even when the dewar is cold. The cooldown from room temperature takes less than 1.5 hours, and cycling of the ADR as little as 30 minutes. Future tests will investigate the use of neon exchange gas to accelerate cooling of the ADR and more thermally isolated components such as wiring and structural supports. Design and operation of the system are discussed.

Shirron, P. J.; Dipirro, M. J.; Panek, J. S.; Francis, J. J.; Warner, B. A.; Jackson, M. L.



Energy Consumption and Its Reduction in the Hydrocyclone Separation Process. II. Time-Averaged and Fluctuating Characteristics of the Turbulent Pressure in a Hydrocyclone  

Microsoft Academic Search

A resistance wire strain gauge system was used to experimentally study time-averaged and fluctuating characteristics of turbulent pressure in a hydrocyclone for the first time. In the main space inside the hydrocyclone, pressure distribution could be described with a mathematical model in which pressure is a function of positional radius. When positional radius decreases, pressure drops, but the radial gradient




Fast-turnaround alkyl nitrate measurements during the PROPHET 1998 summer intensive  

NASA Astrophysics Data System (ADS)

An automated gas chromatographic system for making fast-turnaround measurements of alkyl nitrates was developed and tested. Every 30 min, samples were automatically acquired on a 1/16-inch OD Tenax trap made from fused silica-lined stainless steel tubing and injected by thermal desorption into the gas chromatograph. The system exhibited good chromatographic resolution, minimal breakthrough, and on-trap stability for C3-C5 nitrates. The detection limit of the instrument was <0.1 parts per trillion by volume for C3-C5 nitrates with an overall uncertainty of 30% for isopropyl nitrate and 50% for all other quantified compounds. Overall uncertainty of the method was limited by calibration precision rather than by sampling method or chromatography. The system was deployed in the field as part of the Program for Research on Oxidants: Photochemistry, Emissions, and Transport intensive held at the University of Michigan Biological Station in northern Michigan during the summer of 1998 from July 15 to August 22. Ambient results show a significant dynamic range with absolute levels affected by air mass history and pollution sources. The sum of measured C3-C5 alkyl nitrates obtained during this intensive ranged from 3.45 to 65.8 pptv, with a mean of 18.6 pptv and contributed 1% on average to total NOy, with a range of 0.5% to 3%. Differences in the contribution to NOy are found when the data are filtered according to air mass origin. A positive relationship between n-propyl nitrate and peroxypropionyl nitrate was observed over the course of the intensive.

Ostling, Kristi; Kelly, Brendan; Bird, Stephen; Bertman, Steven; Pippin, Margaret; Thornberry, Troy; Carroll, Mary Anne



Academic Turnarounds: Restoring Vitality to Challenged American Colleges and Universities. ACE/Praeger Series on Higher Education  

ERIC Educational Resources Information Center

This book discusses the early indicators of a college or university's need for a turnaround. It outlines financial trends and other indicators of distress, as well as benchmarks for the various stages of an effective turnaround strategy. The book will help trustees, presidents, and faculty members diagnose whether they are in denial about the…

MacTaggart, Terrence, Ed.



A new approach for analyzing average time complexity of population-based evolutionary algorithms on unimodal problems.  


In the past decades, many theoretical results related to the time complexity of evolutionary algorithms (EAs) on different problems have been obtained. However, there is no general and easy-to-apply approach designed particularly for population-based EAs on unimodal problems. In this paper, we first generalize the concept of the takeover time to EAs with mutation; then we utilize the generalized takeover time to obtain the mean first hitting time of EAs and, thus, propose a general approach for analyzing EAs on unimodal problems. As examples, we consider the so-called (N + N) EAs and show that, on two well-known unimodal problems, LeadingOnes and OneMax, the EAs with bitwise mutation and two commonly used selection schemes need O(n ln n + n²/N) and O(n ln ln n + n ln n/N) generations to find the global optimum, respectively. Beyond these new results, our approach can also be applied directly to obtain results for some population-based EAs on other unimodal problems. Moreover, we also discuss when the general approach is valid to provide tight bounds of the mean first hitting times and when it should be combined with problem-specific knowledge to get tight bounds. This is the first time a general idea for analyzing population-based EAs on unimodal problems has been discussed theoretically. PMID:19336324

Chen, Tianshi; He, Jun; Sun, Guangzhong; Chen, Guoliang; Yao, Xin
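The (N + N) EA with bitwise mutation described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the problem size, population size, and truncation selection scheme are assumptions chosen for brevity.

```python
import random

def one_max(bits):
    # OneMax fitness: the number of ones in the bit string.
    return sum(bits)

def n_plus_n_ea(n_bits=20, pop_size=5, seed=0, max_gens=100000):
    """(N + N) EA: each of N parents produces one offspring by bitwise
    mutation (flip probability 1/n), then the best N of parents plus
    offspring survive (truncation selection). Returns the generation at
    which the global optimum is first hit, or None on timeout."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for gen in range(1, max_gens + 1):
        offspring = []
        for parent in pop:
            # XOR with a Bernoulli(1/n) flip at each bit position.
            child = [b ^ (rng.random() < 1.0 / n_bits) for b in parent]
            offspring.append(child)
        pop = sorted(pop + offspring, key=one_max, reverse=True)[:pop_size]
        if one_max(pop[0]) == n_bits:
            return gen
    return None
```

Counting the returned generation over many seeds gives an empirical analogue of the mean first hitting time analyzed in the paper.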



Time-Averaged Indoor Radon Concentrations and Infiltration RatesSampled in Four U.S. Cities  

SciTech Connect

Indoor radon concentrations, measured in 58 houses over a four to five month period during the winter and spring of 1981-1982, varied from 0.1 to 16 pCi l⁻¹ (4-590 Bq m⁻³). Average infiltration rates were determined for each house over the same period, based on a measurement of the effective leakage area and an infiltration model, and found to range from 0.2 to 2.2 air changes per hour (h⁻¹). Indoor radon concentrations correlated poorly with infiltration rates for houses within each city as well as for the entire sample. Differences in radon entry rates among houses thus appear to be more important than differences in infiltration rates in determining whether a house has high indoor radon levels, consistent with previous indications from grab-sample measurements. Radon entry rates and indoor radon concentrations were generally higher in houses in Fargo, ND and Colorado Springs, CO than in houses in Portland, ME and Charleston, NC.

Doyle, S.M.; Nazaroff, W.W.; Nero, A.V.



Residential Preferences and the Population Turnaround in Nonmetropolitan Areas: A Case Study in Northwestern Wisconsin.  

ERIC Educational Resources Information Center

A study was undertaken in northwestern Wisconsin in 1974 to investigate whether the population turnaround in the 19-county area was an indication of people moving according to residential preference, as might be assumed from national surveys showing desire for urban to rural migration and higher rate of population growth in nonmetropolitan areas.…

Lambert, Virginia A.


Turnaround Principals: An Unmistakable Conclusion--Site-Level Success Stems from Superintendent Support  

ERIC Educational Resources Information Center

The authors of this article were recently privileged to spend five days with a group of 10 talented principals who have been charged with turning around academically low-performing schools in their respective districts. These principals were selected as participants in a Turnaround Specialist Program initiated by Gov. Mark R. Warner of Virginia…

Burbach, Harold J.; Butler, Alfred R., IV



CAD/CAM, Creativity, and Discipline Lead to Turnaround School Success  

ERIC Educational Resources Information Center

Miami Central High School technology teacher Frank Houghtaling thinks the connection between theory and application is one reason his students perform better on the Florida Comprehensive Assessment Test (FCAT). The impressive turnaround school drew local and national attention last spring when one of Houghtaling's students, Dagoberto Cruz, won…

Gorman, Lynn



On the turnaround of stratospheric ozone trends deduced from the reevaluated Umkehr record of Arosa, Switzerland  

Microsoft Academic Search

In this work, we investigate the issue of the turnaround in ozone trends of the recently homogenized Umkehr ozone record of Arosa, Switzerland, which is the longest Umkehr data set, extending from 1956 to date, using different statistical methods. All methods show statistically significant negative ozone trends from 1970 to 1995 in the upper stratosphere (above 32.6 km) throughout the

P. Zanis; E. Maillard; J. Staehelin; C. Zerefos; E. Kosmidis; K. Tourpali; I. Wohltmann



Chronically Low-Performing Schools and Turnaround: Evidence from Three States  

ERIC Educational Resources Information Center

The criteria for determining the student outcomes that define a school as having "turned around" are not well defined, and the definition of turnaround performance varies across studies. Although current policy initiatives offer guidelines for identifying CLP schools, there is no standard definition or methodology in common usage. This paper…

Hansen, Michael; Choi, Kilchan



Empirical research on turnarounds of financial distress firms based on ultimate control  

Microsoft Academic Search

Using a firm-level dataset of financial distress firms from China's A-share stock market over the period 2001-2004, this paper examines the effects of ultimate control rights, cash flow rights and the nature of property rights on the turnaround of financial distress firms. We find that both ultimate control rights and cash flow rights are associated with positive

Yao Zheng; Jin Tian; Tang Yan-feng; Sun Yan



An application of BPR and RCM methods to an oil refinery turnaround process  

Microsoft Academic Search

This works reports on the application of business process re-engineering (BPR) methods to a specific stage in the maintenance activities of the API refinery in Falconara Marittima (AN), i.e. the scheduled annual turnaround. The aim of the study was to analyse the process as it was always implemented up until the year 2001, identifying any related problems and inconsistencies, and

M. Bevilacqua; F. E. Ciarapica; G. Giacchetta; M. Bertolini



How to Know when Dramatic Change Is on Track: Leading Indicators of School Turnarounds  

ERIC Educational Resources Information Center

In recent years, national policymakers have placed new emphasis on "school turnarounds" as a strategy for rapid, dramatic improvement in chronically failing schools, calling on education leaders to turn around performance in the 5,000 lowest-achieving schools nationwide. This goal may seem daunting, given the dismal success rates of school…

Kowal, Julie; Ableidinger, Joe




Microsoft Academic Search

Despite the growing awareness of the importance of researching core strategic resources and capabilities for supporting organisational change, the work that has been done to date has rarely examined and taken into account the relevance of Intellectual Capital (IC) for the success of a company's strategic turnaround program. Moreover, little attention has been given to what encompasses IC and




Turnaround High School Principals: Recruit, Prepare and Empower Leaders of Change. High Schools That Work  

ERIC Educational Resources Information Center

Recent studies make one reality clear: While multiple factors can cause a low-performing high school to be in a turnaround situation, every high school that makes dramatic academic improvement has strong, effective school leadership. Turning a school around is no work for novices. It takes a skilled, visionary and proactive principal to pull…

Schmidt-Davis, Jon; Bottoms, Gene



The Centralizing Role of Terminology: A Consideration of Achievement Gap, NCLB, and School Turnaround  

Microsoft Academic Search

American educational policies over the last 40 years have represented an uninterrupted movement toward centralization. This article analyzes three terms that have become critical to the centralization of American education. More specifically, this article reviews the development, language, and source of the thematic terms and associated language of achievement gap, NCLB, and school turnaround. The author contends that the level of

Coby Meyers



Anomalous diffusion and power-law relaxation of the time averaged mean squared displacement in worm-like micellar solutions  

NASA Astrophysics Data System (ADS)

We report the results of single tracer particle tracking by optical tweezers and video microscopy in micellar solutions. From careful analysis in terms of different stochastic models, we show that the polystyrene tracer beads of size 0.52-2.5 μm, after short-time normal diffusion, turn over to perform anomalous diffusion of the form ~ t^α with α ≈ 0.3. This free anomalous diffusion is ergodic and consistent with a description in terms of the generalized Langevin equation with a power-law memory kernel. With optical tweezers tracking, we unveil a power-law relaxation over several decades in time to the thermal plateau value under the confinement of the harmonic tweezer potential, as predicted previously (Phys. Rev. E 85 021147 (2012)). After the subdiffusive motion in the millisecond range, the motion becomes faster and turns either back to normal Brownian diffusion or to even faster superdiffusion, depending on the size of the tracer beads.

Jeon, Jae-Hyung; Leijnse, Natascha; Oddershede, Lene B.; Metzler, Ralf
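The time-averaged mean squared displacement (TAMSD) central to this record is straightforward to compute from a single trajectory. Below is a minimal sketch under stated assumptions: the trajectory is synthetic one-dimensional Brownian motion (so the fitted exponent should be near 1, not the α ≈ 0.3 reported for the micellar system), and the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def time_averaged_msd(x, lags):
    """TAMSD: delta^2(lag) = <(x(t+lag) - x(t))^2> averaged over start times t,
    evaluated for each requested lag."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

# Synthetic check: ordinary Brownian motion, for which TAMSD ~ lag^1.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=100000))
lags = np.array([1, 2, 4, 8, 16])
msd = time_averaged_msd(x, lags)
# Anomalous-diffusion exponent alpha from the slope of log(TAMSD) vs log(lag).
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]
```

Applied to experimental tracking data in place of the synthetic walk, the same slope fit would recover the subdiffusive exponent discussed in the abstract.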



Time-Averaged Aerodynamic Loads on the Vane Sets of the 40- by 80-Foot and 80- by 120-Foot Wind Tunnel Complex.  

National Technical Information Service (NTIS)

Time-averaged aerodynamic loads are estimated for each of the vane sets in the National Full-Scale Aerodynamic Complex (NFAC). The methods used to compute global and local loads are presented. Experimental inputs used to calculate these loads are based pr...

K. Aoyagi; L. E. Olson; R. L. Peterson; G. K. Yamauchi; J. C. Ross



Phase-adjusted echo time (PATE)-averaging 1H MRS: application for improved glutamine quantification at 2.89 T.  


(1)H MRS investigations have reported altered glutamatergic neurotransmission in a variety of psychiatric disorders. The unraveling of glutamate from glutamine resonances is crucial for the interpretation of these observations, although this remains a challenge at clinical static magnetic field strengths. Glutamate resolution can be improved through an approach known as echo time (TE) averaging, which involves the acquisition and subsequent averaging of multiple TE steps. The process of TE averaging retains the central component of the glutamate methylene multiplet at 2.35 ppm, with the simultaneous attenuation of overlapping phase-modulated coupled resonances of glutamine and N-acetylaspartate. We have developed a novel post-processing approach, termed phase-adjusted echo time (PATE) averaging, for the retrieval of glutamine signals from a TE-averaged (1)H MRS dataset. The method works by the application of an optimal TE-specific phase term, which is derived from spectral simulation, prior to averaging over TE space. The simulation procedures and preliminary in vivo spectra acquired from the human frontal lobe at 2.89 T are presented. Three metabolite normalization schemes were developed to evaluate the frontal lobe test-retest reliability for glutamine measurement in six subjects, and the resulting values were comparable with previous reports for within-subject (9-14%) and inter-subject (14-20%) measures. Using the acquisition parameters and TE range described, glutamine quantification is possible in approximately 10 min. The post-processing methods described can also be applied retrospectively to extract glutamine and glutamate levels from previously acquired TE-averaged (1)H MRS datasets. PMID:22407923

Prescot, Andrew P; Richards, Todd; Dager, Stephen R; Choi, Changho; Renshaw, Perry F
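The core post-processing step in PATE averaging is to apply a TE-specific phase to each spectrum before averaging over TE space. A minimal sketch follows; the linear phase model used in the toy check is purely hypothetical (the paper derives optimal phases from spectral simulation), and all names are illustrative.

```python
import numpy as np

def pate_average(spectra, te_values, phase_fn):
    """Apply a TE-specific phase term to each complex spectrum, then
    average over TE space. phase_fn(te) returns the phase in radians for
    that TE step; in the paper it comes from spectral simulation
    (hypothetical here)."""
    phased = [s * np.exp(1j * phase_fn(te)) for s, te in zip(spectra, te_values)]
    return np.mean(phased, axis=0)

# Toy check: spectra whose phase grows linearly in TE are realigned by
# the matching negative phase, so the average is purely real.
te_values = [30.0, 40.0, 50.0]
spectra = [np.exp(1j * 0.05 * te) * np.ones(4) for te in te_values]
avg = pate_average(spectra, te_values, lambda te: -0.05 * te)
```

With phase_fn set to zero the routine reduces to conventional TE averaging, which is why the paper's method can also be run retrospectively on existing TE-averaged datasets.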



Power scalable mid-infrared supercontinuum generation in ZBLAN fluoride fibers with up to 1.3 watts time-averaged power  

Microsoft Academic Search

Mid-infrared supercontinuum (SC) extending to ~4.0 μm is generated with 1.3 W time-averaged power, the highest power to our knowledge, in ZBLAN (ZrF4-BaF2-LaF3-AlF3-NaF...) fluoride fiber by using cladding-pumped fiber amplifiers and modulated laser diode pulses. We demonstrate the scalability of the SC average power by varying the pump pulse repetition rate while maintaining a similar peak power. Simulation results obtained

Chenan Xia; Malay Kumar; Ming-Yuan Cheng; Ravi S. Hegde; Mohammed N. Islam; Almantas Galvanauskas; Herbert G. Winful; Fred L. Terry Jr.; Mike J. Freeman; Marcel Poulain; Gwenael Mazé



Computing the 7Li NMR chemical shielding of hydrated Li+ using cluster calculations and time-averaged configurations from ab initio molecular dynamics simulations.  


Ab initio molecular dynamics (AIMD) simulations have been used to predict the time-averaged Li NMR chemical shielding for a Li(+) solution. These results are compared to NMR shielding calculations on smaller Li(+)(H(2)O)(n) clusters optimized in either the gas phase or with a polarizable continuum model (PCM) solvent. The trends introduced by the PCM solvent are described and compared to the time-averaged chemical shielding observed in the AIMD simulations where large explicit water clusters hydrating the Li(+) are employed. Different inner- and outer-coordination sphere contributions to the Li NMR shielding are evaluated and discussed. It is demonstrated an implicit PCM solvent is not sufficient to correctly model the Li shielding, and that explicit inner hydration sphere waters are required during the NMR calculations. It is also shown that for hydrated Li(+), the time averaged chemical shielding cannot be simply described by the population-weighted average of coordination environments containing different number of waters. PMID:21701731

Alam, Todd M; Hart, David; Rempe, Susan L B



Measurement of Time-Averaged Particle-Wall Collision Properties Using Particle Tracking Velocimetry, CRADA PC93-006, Final Report  

SciTech Connect

This paper describes an experimental facility to measure time-averaged properties of particle-wall collisions. A particle tracking velocimetry (PTV) system measures particle rebound angles and velocities and impact angles and velocities for specific particles. From these measured properties, values of the normal and tangential coefficients of restitution are derived. The PTV system rapidly measures thousands of particle-wall collisions, enabling determination of time-averaged properties. In this initial work we use 191 µm glass particles colliding with a glass plate at a fixed angle of 17.5° and a velocity of 17.07 m/s. Observations are also made of aspherical particles colliding with gross sliding. In the future, this experimental system will be used with particles and wall materials of industrial significance.

Shaffer, F.; Massah, H.; Sinclair, J.; Shahnam, M.



Novel ultrasound based time averaged flow mapping method for die entry visualization in flow of highly concentrated shear-thinning and shear-thickening suspensions  

Microsoft Academic Search

In this work a methodology for high-resolution time averaged two-dimensional flow mapping of converging flows was explored. Flow of non-transparent, highly concentrated shear-thinning and shear-thickening suspensions was circulating through the entrance flow adapter with adjustable position of the die entry. The entrance region was scanned with the distance resolution of 2.7 mm × 1 mm, radial to axial displacement respectively.

Erich Windhab




Microsoft Academic Search

In this work a methodology for high-resolution time averaged 2D flow mapping (e.g. Takeda, 1999) of converging flows was explored. Flow of non-transparent, highly concentrated shear-thinning and shear-thickening suspensions was circulating through the entrance flow adapter with the adjustable position of the die entry. The entrance region was scanned with a distance resolution of 2.7 mm × 1 mm, radial

Boris Ouriev


Time-averaged acoustic force and torque exerted on an arbitrarily shaped rigid particle in a viscous fluid using boundary element method  

NASA Astrophysics Data System (ADS)

The time-averaged acoustic forces and torques experienced by an arbitrarily shaped rigid particle located in a viscous fluid are formulated analytically and calculated with a boundary element method (BEM) algorithm. The method established in this paper has several advantages: it places no restrictions on the incident wave type or on the size and shape of the particle, and it achieves high efficiency and accuracy in numerical simulations.

Wang, Jingtao; Dual, Jurg



Comparison of the new acceleration spectrum analysis with other time- and frequency-domain analyses of the signal-averaged electrocardiogram  

Microsoft Academic Search

Aim To compare four analysis techniques of the signal-averaged electrocardiogram, including time-domain, spectral temporal mapping, spectral turbulence analysis and the new acceleration spectrum analysis. Methods and Results We studied 634 subjects (77 with bundle branch block) divided into three groups. Group 1 comprised 117 post-myocardial infarction patients tested for inducibility of sustained ventricular tachycardia, and which was induced in

R. Vazquez; E. B. Caref; F. Torres; M. Reina; J. Huet; J. A. Guerrero; N. El-Sherif



Reproducibility of time-domain and three different frequency-domain techniques for the analysis of the signal-averaged electrocardiogram  

Microsoft Academic Search

Because time-domain (TD) analysis of the signal-averaged ECG (SAECG) has limitations that restrict its use, several frequency-domain analysis techniques were developed in an attempt to improve the diagnostic ability of the SAECG. However, it is not known how reliable these techniques are at detecting late potentials. This prospective study compares the short-term reproducibility of 4 analysis techniques: TD analysis,

Rafael Vázquez; Edward B. Caref; Francisco Torres; Margarita Reina; Jose A. Guerrero; Nabil El-Sherif



Compliant structures with time-varying moment of inertia and non-zero averaged momentum and their application in angular rate microsensors  

NASA Astrophysics Data System (ADS)

In this work we introduce a new class of fully compliant structures performing vibratory motion, yet characterized by non-zero averaged momentum, appearing due to time-dependency of the inertial parameters. The work is motivated by microelectromechanical systems (MEMS) applications, where an implementation of unidirectional, non-vibratory motion involving relative motion of parts is not desirable for reliability reasons. Instead of changing the mass, which is challenging on the microscale, the moment of inertia of the proof mass performing tilting vibrations is controlled in such a way that it is higher or lower depending on the sign of the velocity. This results in a non-zero angular momentum averaged over the period. The equations describing the dynamics of a generic structure with a time-varying inertia and in a rotating coordinate frame are derived by using a variational principle. Simple approximate expressions for the averaged momentum and steady tilting angle are obtained and validated numerically. Based on the model results for different operational scenarios, we demonstrate that these devices can be efficiently used in fully compliant actuators and vibratory angular rate sensors (microgyros) with a steady response in a sensing mode ("pseudospinning disk gyros"), as well as in a parametrically excited gyro. The structure can also be viewed as a first step toward the realization of dynamic materials (DM), substances whose material properties may change in space and time.

Krylov, S.; Lurie, K.; Ya'Akobovitz, A.



H-Coal pilot plant 1980 turnaround report. Revision  

Microsoft Academic Search

Since the start of H-Coal operations on March 11, 1980, the plant has operated for a total of 2949 hours. Of this total time on stream, 2572 hours were expended circulating oil and 377 hours with coal feed to the unit. Approximately 1148 tons of coal were processed during this period. The coal processed was Kentucky No. 11. On November 14,




Power scalable mid-infrared supercontinuum generation in ZBLAN fluoride fibers with up to 1.3 watts time-averaged power.  


Mid-infrared supercontinuum (SC) extending to ~4.0 μm is generated with 1.3 W time-averaged power, the highest power to our knowledge, in ZBLAN (ZrF(4)-BaF(2)-LaF(3)-AlF(3)-NaF...) fluoride fiber by using cladding-pumped fiber amplifiers and modulated laser diode pulses. We demonstrate the scalability of the SC average power by varying the pump pulse repetition rate while maintaining a similar peak power. Simulation results obtained by solving the generalized nonlinear Schrödinger equation show that the long wavelength edge of the SC is primarily determined by the peak pump power in the ZBLAN fiber. PMID:19532312

Xia, Chenan; Kumar, Malay; Cheng, Ming-Yuan; Hegde, Ravi S; Islam, Mohammed N; Galvanauskas, Almantas; Winful, Herbert G; Terry, Fred L; Freeman, Mike J; Poulain, Marcel; Mazé, Gwenael



The time-averaged inflammatory disease activity estimates the progression of erosions in MRI of the sacroiliac joints in ankylosing spondylitis.  


A method to estimate the individual ankylosing spondylitis (AS) patient radiological progression of semi-quantitative magnetic resonance imaging (MRI) changes in the sacroiliac joints has not been described yet, which this study examines. Inflammatory disease activity and MRIs of the sacroiliac joints of 38 patients with recent onset established AS were analyzed at baseline and during follow-up. Sacroiliac MRIs were semi-quantitatively assessed using a modification of the "Spondylarthritis Research Consortium of Canada" (SPARCC) method. In each patient, the annual inflammatory disease activity was estimated by the time-averaged C-reactive protein (CRP; mg/l), calculated as the area under the curve. The mean (SD) CRP decreased from 1.3 (1.8) at baseline to 0.5 (0.6) at follow-up MRI (p < 0.04), which has been performed after a mean (SD) disease course of 2.8 (1.5) years. The mean (SD) annual increase (Δ) of SPARCC score from baseline to follow-up MRI was 0.4 (0.4). Baseline individual SPARCC sub-score for bone marrow edema did not statistically significantly correlate with individual ΔSPARCC sub-score for erosions (p = N.S.). The individual AS patient correlation between annual time-averaged inflammatory disease activity and each annual ΔSPARCC sub-score was only statistically significant for erosions (p < 0.01; r = 0.71). Our results show that bone marrow edema and contrast-medium enhancement at baseline do not relate to the progression of erosions but the calculation of the individual patient annual time-averaged inflammatory disease activity allows to estimate the annual progression of erosions in sacroiliac MRIs of patients with AS. PMID:22422197

Wick, Marius C; Grundtman, Cecilia; Weiss, Rüdiger J; Gruber, Johann; Kastlunger, Martin; Jaschke, Werner; Klauser, Andrea S
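The time-averaged CRP used in this record is the area under the CRP-versus-time curve divided by the observation span. A minimal sketch using trapezoidal integration follows; the sample times and CRP values are invented for illustration and are not patient data from the study.

```python
import numpy as np

def time_averaged_crp(times_days, crp_values):
    """Time-averaged CRP (mg/l): area under the CRP curve (trapezoidal
    rule) divided by the length of the observation window."""
    auc = np.trapz(crp_values, times_days)
    return auc / (times_days[-1] - times_days[0])

# Hypothetical example: CRP falling from 2.0 to 0.5 mg/l over one year.
example = time_averaged_crp(np.array([0.0, 180.0, 365.0]),
                            np.array([2.0, 1.0, 0.5]))
```

Because the AUC weights each CRP value by how long it persisted, irregularly spaced visits are handled naturally, which is why an AUC-based average is preferred over a plain mean of visit values.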



Field evaluation of the error arising from inadequate time averaging in the standard use of depth-integrating suspended-sediment samplers  

USGS Publications Warehouse

Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. 
Averaging over time scales >1 minute is likely the minimum duration required to substantially decrease this error. During standard two-way depth integration, a depth-integrating suspended-sediment sampler collects a sample of the water-sediment mixture during two transits at each vertical in a cross section: one transit while moving from the water surface to the bed, and another transit while moving from the bed to the water surface. As the number of transits is doubled at an individual vertical, this error is reduced by ~30 percent in each size class of suspended sediment. For a given size class of suspended sediment, the error arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration depends only on the number of verticals collected, whereas the error arising from inadequate time averaging depends on both the number of verticals collected and the number of transits collected at each vertical. Summing these two errors in quadrature yields a total uncertainty in an equal-discharge-increment (EDI) or equal-width-increment (EWI) measurement of the time-averaged velocity-weighted suspended-sediment concentration in a river cross section (exclusive of any laboratory-processing errors). By virtue of how the number of verticals and transits influences the two individual errors within this total uncertainty, the error arising from inadequate time averaging slightly dominates that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. Adding verticals to an EDI or EWI measurement is slightly more effective in reducing the total uncertainty than adding transits only at each vertical, because a new vertical contributes both temporal and spatial information. 
However, because collection of depth-integrated samples at more transits at each vertical is generally easier and faster than at more verticals, addition of a combination of verticals and transits is likely a more practical approach to reducing the total uncertainty in most field situatio

Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.
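The record's error-budget arithmetic is simple to make concrete: the two independent error sources are summed in quadrature, and doubling transits at a vertical cuts the time-averaging error by about 30 percent. The sketch below illustrates only that arithmetic; the function names and the example error magnitudes are invented for illustration.

```python
import math

def total_uncertainty(spatial_err, temporal_err):
    """Quadrature sum of the cross-stream (spatial) error and the
    time-averaging (temporal) error in an EDI/EWI suspended-sediment
    measurement, as described in the record."""
    return math.sqrt(spatial_err ** 2 + temporal_err ** 2)

def temporal_err_after_doubling(temporal_err, doublings):
    """Each doubling of transits per vertical reduces the time-averaging
    error by ~30 percent (i.e. multiplies it by ~0.7)."""
    return temporal_err * 0.7 ** doublings
```

For example, with hypothetical spatial and temporal errors of 3 and 4 (arbitrary units), the combined uncertainty is 5; one doubling of transits would shrink the temporal term from 4 to about 2.8 before recombining.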



10.5 W Time-Averaged Power Mid-IR Supercontinuum Generation Extending Beyond 4 μm With Direct Pulse Pattern Modulation  

Microsoft Academic Search

A novel, all-fiber-integrated supercontinuum (SC) laser is demonstrated and provides up to 10.5 W time-averaged power with a continuous spectrum from ~0.8 to 4 μm. The SC is generated in a combination of standard single-mode fibers and ZrF4-BaF2-LaF3-AlF3-NaF (ZBLAN) fluoride fibers pumped by a laser-diode-based cladding-pumped fiber amplifier system. The output SC pulse pattern can be modulated by directly modulating

Chenan Xia; Zhao Xu; Mohammed N. Islam; Mike J. Freeman; Andy Zakel; Jeremiah Mauricio



Beyond long memory in heart rate variability: an approach based on fractionally integrated autoregressive moving average time series models with conditional heteroscedasticity.  


Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long-term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation. PMID:23822468

Leite, Argentina; Rocha, Ana Paula; Silva, Maria Eduarda



[Initial and final changes in the signal-averaged QRS in the time and frequency domain in a case of hypertrophic myocardiopathy].  


We report on a patient suffering from asymmetrical hypertrophic cardiomyopathy, with alterations occurring in the signal-averaged electrocardiogram (SAQRS). Recordings of 3-lead orthogonal ECG were obtained and analyzed by a Del Mar Avionics 750A Innovator device. The time-domain analysis showed late potentials and a slow inscription of the initial portion of the SAQRS, which we called "early potentials". In the frequency domain there was a high degree of spectral turbulence at the beginning and at the end of the SAQRS. We discuss the possibility that the slow and fragmented conduction of the initial portion of the SAQRS could be related to the electrical instability of the disease. PMID:8013767

Canciani, B; Mormino, P; Scanavacca, G; Della Rocca, F



Field demonstration of rapid turnaround, multilevel groundwater screening  

SciTech Connect

A combined technology approach to rapidly characterizing source area and downgradient groundwater associated with a past fuel spill has been field tested. The purpose of this investigation was to determine the presence and extent of fuel-related compounds or indications of their biodegradation in groundwater. The distance from the source area to be investigated was established by calculating the potential extent of a plume based only on groundwater flow velocities. To accomplish this objective, commercially available technologies were combined and used to rapidly assess the source area and downgradient groundwater associated with the fuel discharge. The source of contamination that was investigated overlies glacial sand and gravel outwash deposits. Historical data suggest that from 1955 to 1970 as many as 1 to 6 million gal of aviation gasoline (AVGAS) were spilled at the study area. Although the remedial investigation (RI) for this study area indicated fuel-related groundwater contamination at the source area, fuel-related contamination was not detected in downgradient monitoring wells. Rapid horizontal groundwater velocities and the 24-year time span from the last reported spill further suggest that a plume of contaminated groundwater could extend several thousand feet downgradient. The lack of contamination downgradient from the source suggests two possibilities: (1) monitoring wells installed during the RI did not intersect the plume or (2) fuel-related compounds had naturally degraded.

Tingle, A.R. [Analysas Corp., Oak Ridge, TN (United States); Baker, L. [Automated Sciences Group, Inc., Oak Ridge, TN (United States); Long, D.D. [Martin Marietta Energy Systems, Inc., Oak Ridge, TN (United States). Hazardous Waste Remedial Actions Program; Miracle, M. [Advanced Sciences, Inc., Oak Ridge, TN (United States)



Manipulation of a Bose-Einstein condensate by a time-averaged orbiting potential using phase jumps of the rotating field  

SciTech Connect

We report on the manipulation of the center-of-mass motion ('sloshing') of a Bose-Einstein condensate in a time-averaged orbiting potential (TOP) trap. We start with a condensate at rest in the center of a static trapping potential. When suddenly replacing the static trap with a TOP trap centered about the same position, the condensate starts to slosh with an amplitude much larger than the TOP micromotion. We show, both theoretically and experimentally, that the direction of sloshing is related to the initial phase of the rotating magnetic field of the TOP. We show further that the sloshing can be quenched by applying a carefully timed and sized jump in the phase of the rotating field.

Cleary, P. W.; Hijmans, T. W.; Walraven, J. T. M. [Van der Waals-Zeeman Institute of the University of Amsterdam, Valckenierstraat 65, 1018 XE Amsterdam (Netherlands)



Photo-sensitive characteristics of negative resistance turn-around occurring in SIPTH  

NASA Astrophysics Data System (ADS)

Influences of light irradiation on the negative resistance turn-around characteristics of the static induction photosensitive thyristor (SIPTH) have been studied experimentally and theoretically. As the gate current of the SIPTH is increased by light irradiation, the potential barrier in the channel is reduced due to the increase in voltage drop across the gate series resistance. Therefore, the SIPTH can be quickly switched from the blocking state to the conducting state by a relatively low anode voltage. The optimal matching relation for controlling the anode conducting voltage of the SIPTH by light irradiation is also presented.

Tao, Ji; Licheng, Yang; Hairong, Li; Shanhu, He; Siyuan, Li



Detection of baroreceptor activity in rat vagal nerve recording using a multi-channel cuff-electrode and real-time coherent averaging.  


Electrical stimulation of afferent nerve fibers originating from pressure sensors can trigger the baroreflex to reduce blood pressure and might be an alternative to treat patients with hypertension. In this study, baroreceptor compound activity was detected using multi-channel cuff-electrode recordings on rat vagal nerve. In order to isolate the vagal nerve signals from external potentials (such as ECG- and EMG-coupling), a tripolar measuring technique was applied. To eliminate noise and neural signals corresponding to other organs, coherent averaging was used. The baroreceptor-correlated signals appear predominantly in one of the electrode channels, presumably close to the corresponding neural substrate. This localization was done in real-time. PMID:23366660
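Coherent averaging of the kind used above pulls a repeating evoked signal out of uncorrelated noise by averaging trigger-aligned epochs. A minimal sketch on synthetic data (the waveform, noise level, and epoch counts are hypothetical, not the rat recordings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "nerve" recording: a repeating evoked waveform plus noise,
# with a known trigger (e.g. the ECG R-peak) at the start of each epoch.
epoch_len, n_epochs = 50, 400
template = np.sin(np.linspace(0, np.pi, epoch_len))       # evoked shape
recording = (np.tile(template, n_epochs)
             + rng.normal(0, 2.0, epoch_len * n_epochs))  # heavy noise

# Coherent averaging: cut the recording into trigger-aligned epochs
# and average them; uncorrelated noise shrinks as 1/sqrt(N).
epochs = recording.reshape(n_epochs, epoch_len)
averaged = epochs.mean(axis=0)

residual_single = np.std(epochs[0] - template)  # noise in one epoch
residual_avg = np.std(averaged - template)      # noise after averaging
```

With 400 epochs the residual noise drops by roughly a factor of 20, which is why the technique can isolate baroreceptor activity that is invisible in any single sweep.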

Plachta, Dennis T T; Espinosa, Nayeli; Gierthmuehlen, Mortimer; Cota, Oscar; Herrera, Taliana C; Stieglitz, Thomas



A new method of deriving time-averaged tropospheric column ozone over the tropics using total ozone mapping spectrometer (TOMS) radiances: Intercomparison and analysis using TRACE A data  

NASA Astrophysics Data System (ADS)

Error analysis of archived total O3 from total ozone mapping spectrometer (TOMS) (version 6) presented in earlier studies [Hudson and Kim, 1994; Hudson et al., 1995] is extended to include scan angle effects. Daily total O3 maps for the tropics, from the period October 6-21, 1992, are derived from TOMS radiances following correction for these errors. These daily maps, averaged together, show a wavelike feature, which is observed in all latitude bands (10°N to 14°S), underlying sharp peaks which occur at different longitudes depending on the latitude. The wave pattern is used to derive both time-averaged stratospheric and tropospheric O3 fields. The nature of the wave pattern (stratospheric or tropospheric) cannot be determined with certainty due to missing data (no Pacific sondes, no lower stratospheric Stratospheric Aerosol and Gas Experiment (SAGE) ozone for 18 months after the Mt. Pinatubo eruption) and significant uncertainties in the corroborative satellite record in the lower stratosphere (solar backscattered ultraviolet (SBUV), microwave limb sounder (MLS)). However, the time-averaged tropospheric ozone field, based on the assumption that the wave feature is stratospheric, agrees within 10% with ultraviolet differential absorption laser O3 measurements from the DC-8 during the Transport and Atmospheric Chemistry near the Equator-Atlantic (TRACE A) campaign [Browell et al., this issue] and with ozonesonde measurements over Brazzaville, Congo (4°S, 15°W), Ascension Island (8°S, 15°W), and Natal, Brazil (5.5°S, 35°W), for the period October 6-21, 1992. The derived background (nonpolluted) Indian Ocean tropospheric ozone amount, 26 Dobson units (DU), agrees with the cleanest African ozonesonde profiles for September-October 1992. The assumption of a totally tropospheric wave (flat stratosphere) gives 38 DU above the western Indian Ocean and 15-40% disagreements with the sondes. 
Tropospheric column O3 is high from South America to Africa, owing to interaction of dynamics with biomass burning emissions [Thompson et al., this issue (a, b)]. Comparison with fire distributions from the advanced very high resolution radiometer (AVHRR) during October 1992 suggests that tropospheric O3 produced from biomass burning in South America and Africa dominates the O3 budget in the tropical southern hemisphere during the study period.

Kim, J. H.; Hudson, R. D.; Thompson, A. M.



Simulation of the daily average PM 10 concentrations at Ta-Liao with Box-Jenkins time series models and multivariate analysis  

NASA Astrophysics Data System (ADS)

Southern Taiwan has experienced severe PM 10 problems for over a decade. The present paper describes the establishment of a simulation model for the daily average PM 10 concentrations at Ta-Liao, southern Taiwan. The study used a regression with time series error models (RTSE models) (multivariate ARIMA time series model), including an explanatory variable resulting from principal component analyses to complete the PM 10 simulation. Factor 1 estimated from the factor analyses explained 44-49% of the variance, which indicated the important contribution from the neighbor-city PM 10 at Mei-Nung, Lin-Yuang, Zuoying, Chao-Chou, local ozone and NOx. Factor 1 can be interpreted as regional PM 10 plus photochemical reactions. To improve the predictability of extremely high PM 10, different results from the principal component analysis were introduced to the RTSE models. We constructed four kinds of RTSE models: RTSE model without PC, with PC4S (PM 10 at Mei-Nung, Lin-Yuang, Zuoying, and Chao-Chou), with PCTL (meteorological variables and co-pollutants at Ta-Liao), and with PCTL4S (the combination of the above two) and evaluated the statistical model performance. Ozone, dew point temperature, NOx, wind speed, wind directions, and the PC trigger were the significant variables in the RTSE models most of the time. When the neighbor-city PM 10 was included in the PC trigger, the predictability was apparently improved. The closeness of fit with the inclusion of PC4S and PCTL4S was improved by reducing SEE from 0.117 to 0.092. Using the RTSE models with PC4S or PCTL4S, POD was improved by an increase of 33%, FAR was reduced 30%, and CSI was increased 39%, when simulating the daily average PM 10 > 150 µg m-3. Evidently we need to survey source impacts prior to establishing a simulation model. Factor analysis is a useful method to investigate sources that contributed PM 10 to a target site prior to establishing a simulation model.
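The POD, FAR, and CSI scores cited above are standard categorical verification measures built from counts of hits, misses, and false alarms when forecasting an exceedance (e.g. daily PM 10 above a threshold). A minimal sketch with hypothetical counts, not the paper's data:

```python
def skill_scores(hits, misses, false_alarms):
    """Categorical verification scores for exceedance forecasts."""
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    csi = hits / (hits + misses + false_alarms)  # critical success index
    return pod, far, csi

# Hypothetical contingency counts for "PM10 > threshold" forecasts
pod, far, csi = skill_scores(hits=30, misses=10, false_alarms=15)
```

Improving a model raises POD and CSI while lowering FAR, which is exactly the pattern reported for the RTSE models with the PC triggers included.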

Liu, Pao-Wen Grace


Anatomy of a public health agency turnaround: the case of the general health district in Mahoning County.  


A turnaround describes an organization's ability to recover from successive periods of decline. Current and projected declines in US economic conditions continue to place local public health departments at risk of fiscal exigency. This examination focused on turnaround methodologies used by a local public health department to reverse successive periods of operational and financial declines. Illustrations are provided on the value added by implementing financial ratio and trend analysis in addition to using evidence-based private sector turnaround strategies of retrenchment, repositioning, and reorganization. Evidence has shown how the financial analysis and strategies aided in identifying operational weakness and set in motion corrective measures. The Public Health Uniform Data System is introduced along with a list of standards offered for mainstreaming these and other routine stewardship practices to diagnose, predict, and prevent agency declines. PMID:22635191

Honoré, Peggy A; Stefanak, Matthew; Dessens, Scott


Measurement of a time-weighted average concentration of polycyclic aromatic hydrocarbons in aquatic environment using solid phase extraction cartridges and a portable pump  

SciTech Connect

There have been several different methods of measurement for waterborne pollutants. The most frequently utilized methods for sample preparation have been liquid-to-liquid or liquid-to-solid partition. In these methods, pollutants such as polycyclic aromatic hydrocarbons (PAHs) are extracted into organic solvents directly from the sample water, or the pollutants are first adsorbed to a solid-phase adsorbent and subsequently eluted with organic solvents. In either case, the measured level represents only the concentration at the time of sampling, that is, a spot sample. On the other hand, a time-weighted average concentration (TWA) has been used as a determinant to evaluate the atmospheric environment. Estimating the TWA of pollutants in water, however, has been laborious, since frequent spot-sampling is required at the field site. Furthermore, no data on the TWA of PAHs in field water have been published, although the TWA of pollutants could be an important factor in chronic effects on biota. In our previous report, we set up a continuous sampling device, using a Sep-Pak C18 cartridge and a peristaltic pump, which enabled us to measure the TWA of benzo(a)pyrene in an experimental water system. The present paper describes a portable sampling device that can continuously sample PAHs in water. We have evaluated basic characteristics of the sampling device in the laboratory and optimized chromatographic detection of 4 PAHs: fluoranthene, perylene, benzo(b)fluoranthene (BbF), and benzo(a)pyrene (BaP). After these procedures, we brought this sampling device to field water sites to verify its performance. The levels of PAHs were calculated as 24-hr TWAs in water at each site. 9 refs., 1 fig., 1 tab.
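A TWA over discrete sampling intervals is the duration-weighted mean of the measured concentrations. A minimal sketch; the concentrations and durations below are illustrative, not from the study:

```python
def time_weighted_average(samples):
    """TWA concentration from (concentration, duration) pairs:
    sum(c_i * t_i) / sum(t_i)."""
    total_mass = sum(c * t for c, t in samples)
    total_time = sum(t for _, t in samples)
    return total_mass / total_time

# Hypothetical 24-h deployment: concentrations (ng/L) over hour blocks
twa = time_weighted_average([(10.0, 8), (40.0, 4), (20.0, 12)])
```

A continuous sampler integrates this weighting physically, which is why it can report a TWA without repeated spot samples.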

Kira, S. [Okayama Univ. Medical School (Japan); Sakano, M.; Nogami, Y. [Okayama Univ. of Science (Japan)



The time averaged TeV energy spectrum of MKN 501 of the extraordinary 1997 outburst as measured with the stereoscopic Cherenkov telescope system of HEGRA  

NASA Astrophysics Data System (ADS)

During the several months of the outburst of Mkn 501 in 1997, the source was monitored in TeV gamma-rays with the HEGRA stereoscopic system of imaging atmospheric Cherenkov telescopes. Quite remarkably, it turned out that the shapes of the daily gamma-ray energy spectra remained essentially stable throughout the entire state of high activity, despite dramatic flux variations during this period. The derivation of a long-term time-averaged energy spectrum, based on more than 38,000 detected TeV photons, is therefore physically meaningful. The unprecedented gamma-ray statistics combined with the 20% energy resolution of the instrument resulted in the first detection of gamma-rays from an extragalactic source well beyond 10 TeV, and the first high-accuracy measurement of an exponential cutoff in the energy region above 5 TeV, deep into the exponential regime. From 500 GeV to 24 TeV the differential photon spectrum is well approximated by a power law with an exponential cutoff: dN/dE = N_0 (E/1 TeV)^(-alpha) exp(-E/E_0), with N_0 = (10.8 ± 0.2_stat ± 2.1_sys) × 10^(-11) cm^(-2) s^(-1) TeV^(-1), alpha = 1.92 ± 0.03_stat ± 0.20_sys, and E_0 = (6.2 ± 0.4_stat (+2.9 -1.5)_sys) TeV. We summarize the methods for the evaluation of the energy spectrum over a broad dynamic range covering almost two energy decades, and study in detail the principal sources of systematic errors. We also discuss several important astrophysical implications of the observed result concerning the production and absorption mechanisms of gamma-rays in the emitting jet and the modifications of the initial spectrum of TeV radiation due to its interaction with the diffuse extragalactic background radiation.
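The fitted spectral form can be evaluated directly; this sketch uses the best-fit central values quoted in the abstract (statistical and systematic uncertainties are ignored):

```python
import math

def dnde(E, N0=10.8e-11, alpha=1.92, E0=6.2):
    """Power law with exponential cutoff,
    dN/dE = N0 * (E / 1 TeV)^(-alpha) * exp(-E / E0),
    in cm^-2 s^-1 TeV^-1, with E and E0 in TeV."""
    return N0 * E**(-alpha) * math.exp(-E / E0)

# At 1 TeV the power-law factor is 1, so the flux is N0 * exp(-1/E0)
flux_1tev = dnde(1.0)
```

Above a few TeV the exponential factor makes the spectrum fall faster than the pure power law, which is the cutoff signature the measurement established.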

Aharonian, F. A.; Akhperjanian, A. G.; Barrio, J. A.; Bernlöhr, K.; Bojahr, H.; Calle, I.; Contreras, J. L.; Cortina, J.; Daum, A.; Deckers, T.; Denninghoff, S.; Fonseca, V.; Gonzalez, J. C.; Heinzelmann, G.; Hemberger, M.; Hermann, G.; Heß, M.; Heusler, A.; Hofmann, W.; Hohl, H.; Horns, D.; Ibarra, A.; Kankanyan, R.; Kettler, J.; Köhler, C.; Konopelko, A.; Kornmeyer, H.; Kestel, M.; Kranich, D.; Krawczynski, H.; Lampeitl, H.; Lindner, A.; Lorenz, E.; Magnussen, N.; Meyer, H.; Mirzoyan, R.; Moralejo, A.; Padilla, L.; Panter, M.; Petry, D.; Plaga, R.; Plyasheshnikov, A.; Prahl, J.; Pühlhofer, G.; Rauterberg, G.; Renault, C.; Rhode, W.; Röhring, A.; Sahakian, V.; Samorski, M.; Schmele, D.; Schröder, F.; Stamm, W.; Völk, H. J.; Wiebel-Sooth, B.; Wiedner, C.; Willmer, M.; Wittek, W.



School Improvement and Urban Renewal: The Impact of a Turnaround School's Performance on Real Property Values in Its Surrounding Community  

ERIC Educational Resources Information Center

This study investigates the economic impact of a "turnaround" school on real property values in its surrounding community as related to the argument introduced by Tiebout in 1956 correlating local public goods, in this case school success, to housing-location decision making. Using single-family home sales found on the Multiple Listing System and…

Jacobson, Stephen L.; Szczesek, Jill



Real-time cross-sectional averaged void fraction measurements in vertical annulus gas-liquid two-phase flow by neutron radiography and X-ray tomography techniques  

Microsoft Academic Search

A Real-Time Neutron Radiography (RTNR) system and a high speed X-ray Computed Tomography (X-CT) system are used to determine the flow regime, the instantaneous cross-sectional averaged void fraction, and the time averaged void fraction in a vertical annulus flow channel. A standard optical video system is also used to observe the flow regime. The annulus flow channel is operated as

G. D. Harvel; K. Hori; K. Kawanishi; J. S. Chang



Solar Wind-Driven Radiation Belt Response Functions at Sub-Daily Time Scales Using SAMPEX Orbit-Averaged Electron Fluxes  

Microsoft Academic Search

Comprehensive studies of radiation belt dynamics using linear prediction filters (LPFs) were first published by Nagai (1988) and Baker et al. (1990). These studies focused primarily on short-term predictions of daily averaged radiation flux at geostationary orbit using either global magnetospheric indices, or the speed of the solar wind impinging the Earth's magnetosphere. Using daily solar wind data from the

E. J. Rigler; D. N. Baker; D. Vassiliadis; S. G. Kanekal; A. J. Klimas



A new method of deriving time-averaged tropospheric column ozone over the tropics using total ozone mapping spectrometer (TOMS) radiances: Intercomparison and analysis using TRACE A data  

Microsoft Academic Search

Error analysis of archived total O3 from total ozone mapping spectrometer (TOMS) (version 6) presented in earlier studies [Hudson and Kim, 1994; Hudson et al., 1995] is extended to include scan angle effects. Daily total O3 maps for the tropics, from the period October 6-21, 1992, are derived from TOMS radiances following correction for these errors. These daily maps, averaged

J. H. Kim; R. D. Hudson; A. M. Thompson



The Averaging Problem in Cosmology  

NASA Astrophysics Data System (ADS)

This thesis deals with the averaging problem in cosmology, which has gained considerable interest in recent years and is concerned with the correction terms (after averaging inhomogeneities) that appear in the Einstein equations when working on the large scales appropriate for cosmology. It has been claimed in the literature that these terms may account for the phenomenon of dark energy which causes the late-time universe to accelerate. We investigate the nature of these terms by using averaging schemes available in the literature, further developed to be applicable to the problem at hand. We show that the effect of these terms, when calculated carefully, remains negligible and cannot explain the late-time acceleration.

Paranjape, Aseem



The X-ray pulsar A0535+26 in hard X-rays - Average spectrum, pulse-phase spectroscopy, and spectral time variability  

NASA Astrophysics Data System (ADS)

A hard X-ray (30-190 keV) observation of A0535+26 in outburst was performed during a balloon flight on October 4, 1980. Detailed spectral analysis shows that the average photon spectrum of A0535+26 can be well fitted by a Wien law with kT = 7.68 ± 0.035 keV. This spectral law is in good agreement with theoretical model spectra of X-ray pulsars. The energy spectra over 51 phase bins are also well fitted by a Wien law, with kT varying with pulse phase from 7.3 to about 8.5 keV. Apart from this spectral variability, photon spectra averaged over a pulse period strongly suggest that chaotic spectral variability is also present. A simple geometrical model for the X-ray emission and modulation with pulse phase from A0535+26 is presented. This model gives good qualitative agreement with the present observations.

dal Fiume, D.; Frontera, F.; Morelli, E.



Average strain rate in the Italian crust inferred from a permanent GPS network - I. Statistical analysis of the time-series of permanent GPS stations  

NASA Astrophysics Data System (ADS)

Time-series of the horizontal coordinates of 21 GPS stations of the EUREF Permanent Network in the Alpine Mediterranean area with three or more years of continuous tracking have been computed with the intent of estimating velocities and their uncertainties, taking into account the detailed structure of their noise. The power spectral densities demonstrate that coloured noise, mostly flicker (1/f) noise, can be present at frequencies below 6 cycle yr-1, while at higher frequencies the spectrum tends to a regime of white (i.e. frequency-independent) noise. This statistical information is used to obtain more accurate estimates of station velocities and of their uncertainties than by the standard least-squares method. Following an approach well known in the analysis of time-series of frequency standards, the stability of each time-series is computed as a function of time, in the sense of a two-sample Allan variance. The power spectral density of the time-series is used to infer the variance of the change in the slope, with 1σ probability, of two consecutive, equal-length batches of a given time-series, and this as a function of the length of the batch. The power spectral density of each time-series is then converted into an autocorrelation function. Taking into account the correct correlations of pairs of samples as a function of their lag, the slope of each time-series is estimated by the least-squares method, with a non-diagonal weight-matrix. We show that in all the examined cases the uncertainties in the velocities computed taking into account the detailed noise spectrum are larger by a factor of 4 ± 1 than the formal uncertainties obtained by the least-squares method under the assumption of pure white noise. Estimating the slope of a time-series taking into account the autocorrelation of the samples yields velocities not significantly different from those obtained assuming uncorrelated samples. 
We conclude that the reason for the velocity uncertainty estimated by the standard least-squares method being unrealistically small is the neglect of the cumulative effect of uncorrelated and correlated noise. Neglecting the correlated noise does not, however, affect the velocity. Earlier investigations based on more limited (<3 yr) data sets have resulted in non-unique conclusions as to the time decrease of the velocity uncertainty with time, and the noise spectrum of the time-series. We find that the velocity uncertainty does decrease as the time-series increases, and the value of the velocity uncertainty can be predicted from the power spectral density, as the length of the time-series increases. The time-series are finally analysed in the space domain. After removal of common errors, typically represented by sinusoids of annual period, correlation coefficients are computed for pairs of stations and plotted as a function of their distance. We find that the time-series already decorrelate at very short distances (<100 km). This suggests that random errors affecting the coordinates of clusters of stations such as, for example, atmospheric refraction or mismodelling of the orbits are negligible in our time-series. The estimates of the velocities and uncertainties of the permanent stations obtained by spectral analysis form the basis for a subsequent investigation of the present-day, large-scale strain rate field in the Alpine Mediterranean area, which is implied by these scattered surface displacements.
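The two-sample (Allan) variance used above compares means of consecutive equal-length batches. A minimal sketch on synthetic white noise (the series length and averaging factors are hypothetical, not the GPS data):

```python
import numpy as np

def allan_variance(y, m):
    """Two-sample (Allan) variance of series y at averaging factor m:
    0.5 * mean((ybar_{i+1} - ybar_i)^2), where ybar_i are means of
    consecutive non-overlapping blocks of length m."""
    n_blocks = len(y) // m
    means = np.asarray(y[: n_blocks * m]).reshape(n_blocks, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(1)
white = rng.normal(0, 1.0, 4096)

# For pure white noise the Allan variance falls roughly as 1/m;
# flicker noise would instead flatten out at large m.
av1, av16 = allan_variance(white, 1), allan_variance(white, 16)
```

The distinct scaling with batch length for white versus flicker noise is what lets this statistic diagnose the noise content of a coordinate time-series.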

Caporali, Alessandro



Real-time preprocessing for dense 3-D range imaging on the GPU: Defect interpolation, bilateral temporal averaging and guided filtering  

Microsoft Academic Search

Recent advances in range imaging (RI) have enabled dense 3-D scene acquisition in real-time. However, due to physical limitations and the underlying range sampling principles, range data are subject to noise and may contain invalid measurements. Hence, data preprocessing is a prerequisite for practical applications but poses a challenge with respect to real-time constraints. In this paper, we propose a

Jakob Wasza; Sebastian Bauer; Joachim Hornegger



Inversion and space-time-averaging algorithms for ScaRaB (Scanner for the Earth Radiation Budget). Comparison with ERBE  

NASA Astrophysics Data System (ADS)

Establishment of a uniform long-term record of top-of-the atmosphere (TOA) Earth radiation budget (ERB) components, on a scale appropriate to the study of cloud radiation interactions, requires that the data obtained from different observation missions satisfy two basic conditions: (1) the broadband shortwave (SW:0.2-4 µm) and longwave (LW: 4-50 µm) radiances must be demonstrably made on the same absolute scale; and (2) the methods used first to convert the instantaneous (filtered) radiances into (unfiltered) SW and LW radiant fluxes, and then to perform the space-time integrations to yield regional monthly means, must be consistent. Here we consider mainly the second point, with regard to the ScaRaB/Meteor mission in orbit since 25 January 1994 and observing the Earth since 24 February 1994. The objective of this mission is to determine the TOA ERB components and so to provide a continuation of the NASA ERBE scanner mission (November 1984-February 1990). We show how results compatible with ERBE can be obtained by taking into account the instrumental characteristics and the satellite orbit parameters: spectral response of the broadband channels, Earth local time of observation. Considering the spectral response of the ScaRaB broadband channels, we show that no spectral correction is required in the longwave domain, whereas a correction of +4.5% must be applied in the shortwave domain for clear and partly cloudy ocean, in order to compensate for underestimation at the shortest wavelengths. Despite possible differences between ERBE and ScaRaB procedures in values assumed for certain parameters of the scene/cloud identifications, application of these procedures to the same set of ERBE data (spectrally corrected, i.e. unfiltered radiances) shows that scene identification agreement is close to 90% and that, where there is disagreement, resulting differences in LW fluxes are negligible, those in SW fluxes small. 
We show that regional and global mean quantities are in excellent agreement, considering that differences between (ERBS+NOAA-9) and (NOAA-9 only) results may be taken as illustrating time-sampling effects. We find that biases may occur from the undersampling, specifically for the night-time clear-sky estimation over land and desert. Preliminary results using ScaRaB data of March 1994 show that clear-sky regional estimates may be less numerous than in ERBE scanner products, due to either the larger pixel size or the auxiliary parameters used in the scene identification, and that expected uncertainties in the global monthly mean values depend mainly on the instrument radiometric calibration.

Viollier, M.; Kandel, R.; Raberanto, P.



Average Rate of Change  

NSDL National Science Digital Library

This demo provides students with a concrete understanding of the average rate of change for physical situations and for functions described in tabular or graphic form. Animations in gif and Quicktime format are available, as well as Excel programs for classroom or student use.
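The quantity the demo illustrates is the slope of the secant line between two points on a function's graph. A minimal sketch, with an illustrative falling-object example (the function and interval are assumptions, not from the resource):

```python
def average_rate_of_change(f, a, b):
    """Slope of the secant line through (a, f(a)) and (b, f(b)):
    (f(b) - f(a)) / (b - a)."""
    return (f(b) - f(a)) / (b - a)

# Example: average velocity of a falling object with s(t) = 4.9*t^2 meters,
# between t = 1 s and t = 3 s
avg_velocity = average_rate_of_change(lambda t: 4.9 * t * t, 1.0, 3.0)
```

Shrinking the interval [a, b] around a point makes this average rate approach the instantaneous rate, the usual bridge to the derivative.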

Roberts, Lila F.; Hill, David R.



Application of Markov Chain Model to Calculate the Average Number of Times of Use of a Material in Society. An Allocation Methodology for Open-Loop Recycling. Part 1: Methodology Development (7 pp)  

Microsoft Academic Search

Preamble. In this series of two papers, a methodology to calculate the average number of times a material is used in a society from cradle to grave is presented and applied to allocation of environmental impact of virgin material. Part 1 focuses on methodology development and shows how the methodology works with hypothetical examples of material flows. Part 2 presents
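In an absorbing Markov chain, the expected number of visits to each transient state is given by the fundamental matrix N = (I - Q)^(-1). A minimal sketch of how an average number of uses could be computed this way; the transition probabilities below are hypothetical, not the paper's material-flow data:

```python
import numpy as np

# Hypothetical material-flow Markov chain. Transient states:
# 0 = "in use as product A", 1 = "in use as product B";
# the absorbing state (landfill) is implicit. Q holds the
# transient-to-transient transition probabilities.
Q = np.array([[0.0, 0.6],   # A is recycled into B with p = 0.6
              [0.3, 0.0]])  # B is recycled back into A with p = 0.3

# Fundamental matrix: N[i, j] = expected visits to state j starting from i
N = np.linalg.inv(np.eye(2) - Q)

# Average number of times the material is used, entering as product A
avg_uses = N[0].sum()
```

Each use state is visited a finite expected number of times because some probability leaks to the absorbing landfill state at every cycle.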

Yasunari Matsuno; Yoshihiro Adachi; Yasushi Kondo



Short-term reproducibility of time domain, spectral temporal mapping, and spectral turbulence analysis of the signal-averaged electrocardiogram in normal subjects and patients with acute myocardial infarction  

Microsoft Academic Search

The aim of this prospective study was to compare the short-term reproducibility of the signal-averaged ECG (SAECG) with three analysis techniques, conventional time-domain analysis, spectral-temporal mapping (STM), and spectral-turbulence analysis (STA), in a large series of normal subjects and patients with acute myocardial infarction (AMI). Two consecutive SAECGs were recorded in 225 consecutive patients 10.2 ± 2.7 days after AMI

Rafael Vázquez; Edward B. Caref; Francisco Torres; Margarita Reína; Francisco Ortega; Nabil El-Sherif



An Average Lunch?  

NSDL National Science Digital Library

This statistics activity requires students to find the mean using their TI-10 or TI-15 Explorer calculators. The task asks students to read part of Sideways Stories of Wayside School by Louis Sachar and find different combinations of lunches that would give an average price of $1.85. This resource includes a three-page PDF lesson plan with guiding questions, data collection page, and calculator instructions.



Average density in cosmology  

SciTech Connect

The Einstein-Straus (1945) vacuole is here used to represent a bound cluster of galaxies embedded in a standard pressure-free cosmological model, and the average density of the cluster is compared with the density of the surrounding cosmic fluid. The two are nearly but not quite equal, and the more condensed the cluster, the greater the difference. A theoretical consequence of the discrepancy between the two densities is discussed. 25 references.

Bonnor, W.B.



An orbit averaged particle code  

Microsoft Academic Search

A new method for efficient computer simulation of long time-scale plasma physics phenomena is proposed which has proved successful in one- and two-dimensional magneto-inductive particle codes. The method relies on orbit-averaging charge and current densities in Maxwell's equations before solving for the self-consistent electric and magnetic fields in order to filter out unwanted high-frequency oscillations and reduce the number of

B. I. Cohen; T. A. Brengle; D. B. Conley; R. P. Freis



Dynamic Multiscale Averaging (DMA) of Turbulent Flow  

SciTech Connect

A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. 
The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical engineering applications.
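The two averaging operations at the core of DMA, running time averaging followed by volume averaging onto a coarser grid, can be sketched in a few lines. This is a minimal 1-D illustration; the function names, coarsening factor, and sample field are assumptions, not from the record.

```python
import numpy as np

def running_time_average(avg, sample, n):
    """Update a running time average with the n-th sample (n >= 1)."""
    return avg + (sample - avg) / n

def volume_average(fine, factor):
    """Volume-average a 1-D fine-grid field onto a grid coarser by `factor`."""
    return fine.reshape(-1, factor).mean(axis=1)

# Hypothetical fine-grid field: 8 cells coarsened by a factor of 4.
fine = np.array([1.0, 3.0, 5.0, 7.0, 2.0, 4.0, 6.0, 8.0])
coarse = volume_average(fine, 4)  # two coarse cells: [4.0, 5.0]
```

In the full scheme the correlations generated by these two averaging levels would also be accumulated and passed to the coarser mesh as source terms.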

Richard W. Johnson



Americans' Average Radiation Exposure  

SciTech Connect

We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.




Determination of hydrologic properties needed to calculate average linear velocity and travel time of ground water in the principal aquifer underlying the southeastern part of Salt Lake Valley, Utah  

USGS Publications Warehouse

A 48-square-mile area in the southeastern part of the Salt Lake Valley, Utah, was studied to determine if generalized information obtained from geologic maps, water-level maps, and drillers' logs could be used to estimate hydraulic conductivity, porosity, and slope of the potentiometric surface: the three properties needed to calculate average linear velocity of ground water. Estimated values of these properties could be used by water-management and regulatory agencies to compute values of average linear velocity, which could be further used to estimate travel time of ground water along selected flow lines, and thus to determine wellhead protection areas around public-supply wells. The methods used to estimate the three properties are based on assumptions about the drillers' descriptions, the depositional history of the sediments, and the boundary conditions of the hydrologic system. These assumptions were based on geologic and hydrologic information determined from previous investigations. The reliability of the estimated values for hydrologic properties and average linear velocity depends on the accuracy of these assumptions. Hydraulic conductivity of the principal aquifer was estimated by calculating the thickness-weighted average of values assigned to different drillers' descriptions of material penetrated during the construction of 98 wells. Using these 98 control points, the study area was divided into zones representing approximate hydraulic-conductivity values of 20, 60, 100, 140, 180, 220, and 250 feet per day. This range of values is about the same range of values used in developing a ground-water flow model of the principal aquifer in the early 1980s. 
Porosity of the principal aquifer was estimated by compiling the range of porosity values determined or estimated during previous investigations of basin-fill sediments, and then using five different values ranging from 15 to 35 percent to delineate zones in the study area that were assumed to be underlain by similar deposits. Delineation of the zones was based on depositional history of the area and the distribution of sediments shown on a surficial geologic map. Water levels in wells were measured twice in 1990: during late winter when ground-water withdrawals were the least and water levels the highest, and again in late summer, when ground-water withdrawals were the greatest and water levels the lowest. These water levels were used to construct potentiometric-contour maps and subsequently to determine the variability of the slope in the potentiometric surface in the area. Values for the three properties, derived from the described sources of information, were used to produce a map showing the general distribution of average linear velocity of ground water moving through the principal aquifer of the study area. Velocity derived ranged from 0.06 to 144 feet per day with a median of about 3 feet per day. Values were slightly faster for late summer 1990 than for late winter 1990, mainly because increased withdrawal of water during the summer created slightly steeper hydraulic-head gradients between the recharge area near the mountain front and the well fields farther to the west. The fastest average linear-velocity values were located at the mouth of Little Cottonwood Canyon and south of Dry Creek near the mountain front, where the hydraulic conductivity was estimated to be the largest because the drillers described the sediments to be predominantly clean and coarse grained. Both of these areas also had steep slopes in the potentiometric surface. 
Other areas where average linear velocity was fast included small areas near pumping wells where the slope in the potentiometric surface was locally steepened. No apparent relation between average linear velocity and porosity could be seen in the mapped distributions of these two properties. Calculation of travel time along a flow line to a well in the southwestern part of the study area during the sum
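The velocity computation described above combines the three estimated properties in one formula, average linear velocity = hydraulic conductivity × hydraulic gradient / porosity, with conductivity itself a thickness-weighted average over a driller's log. A sketch with hypothetical values drawn from the ranges quoted in the record:

```python
def thickness_weighted_k(k_values, thicknesses):
    """Thickness-weighted average hydraulic conductivity (ft/day)."""
    return sum(k * b for k, b in zip(k_values, thicknesses)) / sum(thicknesses)

def average_linear_velocity(k, gradient, porosity):
    """Average linear (seepage) velocity: Darcy flux divided by porosity."""
    return k * gradient / porosity

# 30 ft of clean coarse material (220 ft/day) over 10 ft of fines (20 ft/day).
k = thickness_weighted_k([220.0, 20.0], [30.0, 10.0])  # 170.0 ft/day
v = average_linear_velocity(k, gradient=0.005, porosity=0.25)  # 3.4 ft/day
```

The result is of the same order as the median of about 3 feet per day reported above.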

Freethey, G. W.; Spangler, L. E.; Monheiser, W. J.



Timing control improves seabed survey data quality  

Microsoft Academic Search

Seateam has completed development and field proving of the Dolphin data acquisition and timing system for high-density surveys offshore. The Dolphin project was initiated to improve quality control of survey sensor data and to ensure time synchronization, thus leading to faster turnaround of seabed terrain information. Data received from survey sensors are asynchronous, so the system must provide for data correlation. This



Geodesic shape-based averaging.  


A new method for the geometrical averaging of labels or landmarks is presented. This method expands the shape-based averaging framework from a Euclidean to a geodesic-based distance, incorporating a spatially varying similarity term as time cost. This framework has unique geometrical properties, making it ideal for propagating very small structures following rigorous labelling protocols. The method is used to automate the seeding and way-pointing of optic radiation tractography in diffusion tensor imaging (DTI). The propagated seeds and waypoints follow a strict clinical protocol by being geometrically constrained to one single slice and by guaranteeing spatial contiguity. The proposed method not only reduces the fragmentation of the propagated areas but also significantly increases the seed positioning accuracy and subsequent tractography results when compared to state-of-the-art label fusion techniques. PMID:23286110

Cardoso, M Jorge; Winston, Gavin; Modat, Marc; Keihaninejad, Shiva; Duncan, John; Ourselin, Sebastien



Mixed messages. Not-for-profit hospitals woo investors with turnaround stories, but don't spare details about future challenges.  


West Penn Allegheny Health System's management team (left) told a turnaround story at the Non-Profit Health Care Investor Conference. The forum, which was launched in 2000 to woo skittish bond investors, reflects hospitals' ongoing challenge to temper their messages for different audiences. PMID:12048840

Jaklevic, Mary Chris



Numerical Computation of Aerodynamics and Heat Transfer in a Turbine Cascade and a Turn-around Duct Using Advanced Turbulence Models.  

National Technical Information Service (NTIS)

The objective of this research is to develop turbulence models to predict the flow and heat transfer fields dominated by the curvature effect such as those encountered in turbine cascades and turn-around ducts. A Navier-Stokes code has been developed usin...

B. Lakshminarayana; J. Luo



Snowpack: Decadal Averages Map  

NSDL National Science Digital Library

This is an interactive map of California and the Sierra Nevada mountains, showing projected variations in water stored in snowpack, from 1950 to 2090, assuming low or high emission scenarios over that period of time. The interactive can be adjusted to show different months of the year and various climate models, graphed by site.

Commission, California E.


Panchromatic averaged stellar populations  

NASA Astrophysics Data System (ADS)

We study how the spectral fitting of galaxies, in terms of light fractions, derived in one spectral region translates into another region, by using results from evolutionary synthesis models. In particular, we examine propagation dependencies on evolutionary population synthesis (EPS) models (GRASIL, GALEV, Maraston and GALAXEV), age, metallicity and stellar evolution tracks over the near-ultraviolet-near-infrared (NUV-NIR, 3500 Å to 2.5 μm) spectral region. Our main results are as follows: as expected, young (t ≲ 400 Myr) stellar population fractions derived in the optical cannot be directly compared to those derived in the NIR, and vice versa. In contrast, intermediate to old age (t ≳ 500 Myr) fractions are similar over the whole spectral region studied. The metallicity has a negligible effect on the propagation of the stellar population fractions derived from NUV to NIR. The same applies to the different EPS models, but restricted to the range between 3800 and 9000 Å. However, a discrepancy between GALEV/Maraston and GRASIL/GALAXEV models occurs in the NIR. Furthermore, the initial mass function is not important for the synthesis propagation. Compared to STARLIGHT synthesis results, our propagation predictions agree at ~95 per cent confidence level in the optical, and ~85 per cent in the NIR. In summary, spectral fitting performed in a restricted spectral range should not be directly propagated from the NIR to the UV/optical, or vice versa. We provide equations and an on-line form [panchromatic averaged stellar population (PaASP)] to be used for this purpose.

Riffel, R.; Bonatto, C.; Cid Fernandes, R.; Pastoriza, M. G.; Balbinot, E.



Application of Lean Six Sigma techniques to optimize hospital laboratory Emergency Department Turnaround time across a multi-hospital system  

Microsoft Academic Search

In January 2005, Indiana University, Purdue University at Indianapolis (IUPUI) and Purdue-Calumet were invited to partner with the Alverno Clinical Labs in adapting Lean Six Sigma methodologies for use within healthcare. Our initial project focused on optimization of lab services for the emergency department at the Saint Margaret Mercy Hospitals in Hammond, IN. This project resulted in reduction of

Heather Hagg; Susan Scachitti; Lash Mapa; Lillie Brandford; Chris Vanni; Catherine Cox



Toward determining cost, quality, and turn-around time of MEMS devices fabricated in a distributed prototyping environment  

Microsoft Academic Search

The MEMS Exchange is a DARPA-funded research program to provide the domestic MEMS community with an interface between MEMS designers and microfabrication facilities that allows designers to develop and exercise custom process sequences. These process sequences may cross the boundaries separating fabrication facilities in order to maximize process and design freedom. Information transfer within the MEMS-Exchange fabrication network takes

William L. Benard; Kaigham J. Gabriel; Michael A. Huff



Reliable and sensitive detection of fragile X (expanded) alleles in clinical prenatal DNA samples with a fast turnaround time.  


This study evaluated a large set of blinded, previously analyzed prenatal DNA samples with a novel, CGG triplet-repeat primed (TP)-PCR assay (Amplidex FMR1 PCR Kit; Asuragen, Austin, TX). This cohort of 67 fetal DNAs contained 18 full mutations (270 to 1100 repeats, including 1 mosaic), 12 premutations (59 to 150 repeats), 9 intermediate mutations (54 to 58 repeats), and 28 normal samples (17 to 50 repeats, including 3 homozygous female samples). TP-PCR accurately identified FMR1 genotypes, ranging from normal to full- mutation alleles, with a 100% specificity (95% CI, 85.0% to 100%) and a 97.4% sensitivity (95% CI, 84.9% to 99.9%) in comparison with Southern blot analysis results. Exact sizing was possible for a spectrum of normal, intermediate, and premutation (up to 150 repeats) alleles, but CGG repeat numbers >200 are only identified as full mutations. All homozygous alleles were correctly resolved. The assay is also able to reproducibly detect a 2.5% premutation and a 3% full-mutation mosaicism in a normal male background, but a large premutation in a full male mutation background was masked when the amount of the latter was >5%. Implementation of this TP-PCR will significantly reduce reflex testing using Southern blot analyses. Additional testing with methylation-informative techniques might still be needed for a few cases with (large) premutations or full mutations. PMID:22921311

Seneca, Sara; Lissens, Willy; Endels, Kristof; Caljon, Ben; Bonduelle, Maryse; Keymolen, Kathleen; De Rademaeker, Marjan; Ullmann, Urielle; Haentjens, Patrick; Van Berkel, Kim; Van Dooren, Sonia



Short Planning TurnAround Time and High Flexibility of the Swedish Astronomy/Aeronomy Satellite Odin  

Microsoft Academic Search

The Swedish small satellite Odin combines two different scientific disciplines, astronomy and atmospheric research. It has a 3-axis stabilized, zero momentum, attitude control system that operates in two different modes, an inertial pointing astronomy mode and an atmospheric mode providing pointing/scanning of the Earth limb. The reference attitude for the atmospheric mode is created by a versatile reference attitude generator,

B. Jakobsson; T. Karlsson; M. Nylund; T. Olsson; E. Vinterhav



Average heterozygosity revisited.  

PubMed Central

The estimate of heterozygosity and proportion of polymorphic loci for 33 red blood cell loci has been updated by the elimination of some loci of questionable status and the addition of data on 33 loci. The new figures for heterozygosity and proportion of polymorphic loci, .105 and .283, respectively, are based on 60 red blood cell loci in populations of European origin. These values are less than those calculated by Lewontin in 1967, and furthermore they do not appear to be reaching an asymptote. At the present time, the red blood cell data and allozyme data for European populations have similar estimates of heterozygosity and proportion of polymorphic loci.
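For readers unfamiliar with the two statistics, a brief sketch of how average heterozygosity and the proportion of polymorphic loci are computed from allele frequencies. The frequencies and the 0.95 polymorphism cutoff here are illustrative conventions, not the study's data.

```python
def expected_heterozygosity(allele_freqs):
    """Expected heterozygosity at one locus: 1 - sum of squared frequencies."""
    return 1.0 - sum(p * p for p in allele_freqs)

def summarize(loci, poly_cutoff=0.95):
    """Average heterozygosity and proportion of polymorphic loci.

    A locus counts as polymorphic if its most common allele has
    frequency <= poly_cutoff (a common, but not universal, convention).
    """
    h_bar = sum(expected_heterozygosity(l) for l in loci) / len(loci)
    p_poly = sum(max(l) <= poly_cutoff for l in loci) / len(loci)
    return h_bar, p_poly

loci = [[1.0], [0.5, 0.5], [0.9, 0.1], [0.98, 0.02]]  # hypothetical loci
h_bar, p_poly = summarize(loci)  # 0.1798, 0.5
```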

Hedrick, P W; Murray, E



Computer-assisted time-averaged holograms of the motion of the surface of the mammalian tympanic membrane with sound stimuli of 0.4-25 kHz.  


Time-averaged holograms describing the sound-induced motion of the tympanic membrane (TM) in cadaveric preparations from three mammalian species and one live ear were measured using opto-electronic holography. This technique allows rapid measurements of the magnitude of motion of the tympanic membrane surface at frequencies as high as 25 kHz. The holograms measured in response to low and middle-frequency sound stimuli are similar to previously reported time-averaged holograms. However, at higher frequencies (f>4 kHz), our holograms reveal unique TM surface displacement patterns that consist of highly-ordered arrangements of multiple local displacement magnitude maxima, each of which is surrounded by nodal areas of low displacement magnitude. These patterns are similar to modal patterns (two-dimensional standing waves) produced by either the interaction of surface waves traveling in multiple directions or the uniform stimulation of modes of motion that are determined by the structural properties and boundary conditions of the TM. From the ratio of the displacement magnitude peaks to nodal valleys in these apparent surface waves, we estimate a Standing Wave Ratio of at least 4 that is consistent with energy reflection coefficients at the TM boundaries of at least 0.35. It is also consistent with small losses within the uniformly stimulated modal surface waves. We also estimate possible TM surface wave speeds that vary with frequency and species from 20 to 65 m/s, consistent with other estimates in the literature. The presence of standing wave or modal phenomena has previously been intuited from measurements of TM function, but is ignored in some models of tympanic membrane function. Whether these standing waves result either from the interactions of multiple surface waves that travel along the membrane, or by uniformly excited modal displacement patterns of the entire TM surface is still to be determined. PMID:19328841

Rosowski, John J; Cheng, Jeffrey Tao; Ravicz, Michael E; Hulli, Nesim; Hernandez-Montes, Maria; Harrington, Ellery; Furlong, Cosme



Short-term reproducibility of time domain, spectral temporal mapping, and spectral turbulence analysis of the signal-averaged electrocardiogram in normal subjects and patients with acute myocardial infarction.  


The aim of this prospective study was to compare the short-term reproducibility of the signal-averaged ECG (SAECG) with three analysis techniques, conventional time-domain analysis, spectral-temporal mapping (STM), and spectral-turbulence analysis (STA), in a large series of normal subjects and patients with acute myocardial infarction (AMI). Two consecutive SAECGs were recorded in 225 consecutive patients 10.2 +/- 2.7 days after AMI and in 85 healthy volunteers. The visual, diagnostic, and quantitative reproducibility of the three techniques was compared. Time-domain analysis was the most reproducible method, having high R2 correlations, statistically fewer inconsistent diagnostic recordings, and statistically smaller differences compared with other techniques. STM was the least reproducible, justifying caution in its current form. Although STA was significantly less reproducible than time-domain analysis, it was also significantly better than STM. Two STA parameters, spectral entropy and interslice correlation mean, showed good reproducibility, suggesting that modification of this analysis technique could be useful in risk stratification. PMID:7484730

Vázquez, R; Caref, E B; Torres, F; Reina, M; Ortega, F; el-Sherif, N



Modular Approach to Physics: Weighted Average  

NSDL National Science Digital Library

This resource is a multi-exercise module relating to average speed. It contains an interactive Java simulation featuring a car that the user controls for speed. Graphs of instantaneous and average velocities are simultaneously displayed, helping the user to understand the concept of time-weighted average. The lesson plan available through the Help menu explains how to calculate a weighted average, first with two speeds, and then with multiple. This item is part of a larger collection of simulation based physics models sponsored by the MAP project (Modular Approach to Physics).
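The weighted average the lesson builds up to reduces to total distance over total time; a minimal sketch with made-up speeds and durations:

```python
def time_weighted_average_speed(speeds, durations):
    """Average speed weighted by the time spent at each speed."""
    total_distance = sum(v * t for v, t in zip(speeds, durations))
    return total_distance / sum(durations)

# 60 km/h for 2 h, then 30 km/h for 1 h: (120 + 30) / 3 = 50 km/h,
# not the unweighted mean of 45 km/h.
avg = time_weighted_average_speed([60.0, 30.0], [2.0, 1.0])
```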

Austin, David; Martin, Brian



Univariate Lp and lp Averaging, 0.  

National Technical Information Service (NTIS)

We present evidence that one can calculate generically combinatorially expensive Lp and lp averages, 0 < p < 1, in polynomial time by restricting the data to come from a wide class of statistical distributions. Our approach differs from the approaches in...
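As background on why these averages are combinatorially expensive: for 0 < p < 1 the objective sum(|x − a|**p) is concave on each interval between data points, so its minimizer sits at one of the data points and a finite search over them suffices. A sketch (the data are made up, and this brute force is not the paper's polynomial-time method):

```python
def lp_average(data, p):
    """The value a minimizing sum(|x - a|**p) for 0 < p < 1.

    The objective is concave between data points, so the minimum
    is attained at a data point; search them all.
    """
    assert 0.0 < p < 1.0
    return min(data, key=lambda a: sum(abs(x - a) ** p for x in data))

data = [0.0, 0.1, 0.2, 10.0]
m = lp_average(data, 0.5)  # 0.1: the outlier at 10.0 barely matters
```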

J. E. Lavery



Orbit-averaged implicit particle codes  

Microsoft Academic Search

The merging of orbit-averaged particle code techniques with recently developed implicit methods to perform numerically stable and accurate particle simulations is reported. Implicitness and orbit averaging can extend the applicability of particle codes to the simulation of long time-scale plasma physics phenomena by relaxing time-step and statistical constraints. Difference equations for an electrostatic model are presented, and analyses of the

B. I. Cohen; R. P. Freis; V. Thomas



Modular Approach to Physics: Average Speed versus Average Velocity  

NSDL National Science Digital Library

This applet is an interactive demonstration of average speed and velocity. The user can move the object, while average speed and velocity is calculated and available through the Data panel. A full lesson plan is available through the Help menu. This item is part of a larger collection of simulation based physics modules sponsored by the MAP project (Modular Approach to Physics).
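The distinction the applet demonstrates is easy to state in code: average speed divides the path length traveled, while average velocity divides the net displacement. A 1-D sketch (positions and times are made up):

```python
def average_speed_and_velocity(positions, times):
    """1-D motion sampled at given times; returns (avg speed, avg velocity)."""
    path = sum(abs(b - a) for a, b in zip(positions, positions[1:]))
    displacement = positions[-1] - positions[0]
    elapsed = times[-1] - times[0]
    return path / elapsed, displacement / elapsed

# Walk 4 m out and 4 m back in 2 s: speed 4 m/s, velocity 0 m/s.
speed, velocity = average_speed_and_velocity([0.0, 4.0, 0.0], [0.0, 1.0, 2.0])
```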

Austin, David; Martin, Brian



Spacetime averaged null energy condition  

SciTech Connect

The averaged null energy condition has known violations for quantum fields in curved space, even when one considers only achronal geodesics. Many such examples involve rapid variation in the stress-energy tensor in the vicinity of the geodesic under consideration, giving rise to the possibility that averaging in additional dimensions would yield a principle universally obeyed by quantum fields. However, after discussing various procedures for additional averaging, including integrating over all dimensions of the manifold, we give here a class of examples that violate any such averaged condition.

Urban, Douglas; Olum, Ken D. [Institute of Cosmology, Department of Physics and Astronomy, Tufts University, Medford, Massachusetts 02155 (United States)



Averaging in spherically symmetric cosmology  

SciTech Connect

The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.

Coley, A. A.; Pelavas, N. [Department of Mathematics and Statistics, Dalhousie University, Halifax, Nova Scotia (Canada)



Engineering a Turnaround  

ERIC Educational Resources Information Center

This article describes the Soddy-Daisy High School in southeastern Tennessee. It used to be that vocational training and a focus on academic studies were considered completely different means of education. But in Soddy-Daisy, Tennessee, the two go hand in hand. Eric Thomas and his brother Mark teach side by side in adjacent rooms, where computer…

Hood, Lucy



The Real Turnaround  

ERIC Educational Resources Information Center

Chilean educator and poet Gabriela Mistral warned that children's needs are immediate and comprise more than just academic concerns. Implementing comprehensive community schools is an increasingly successful approach to taking her warning to heart, particularly in neighborhoods with large immigrant populations. The reason is simple: education does…

Purinton, Ted; Azcoitia, Carlos



Improved averaging for non-null interferometry  

NASA Astrophysics Data System (ADS)

Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
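A simplified stand-in for the map-level rejection step described above: compute each map's RMS deviation from the per-pixel median and drop whole maps that deviate grossly before averaging. The threshold rule and array shapes here are assumptions, not the authors' algorithm.

```python
import numpy as np

def robust_phase_average(maps, k=3.0):
    """Per-pixel mean and standard deviation over a stack of phase maps,
    excluding any map whose RMS deviation from the per-pixel median
    exceeds k times the mean RMS deviation."""
    stack = np.stack(maps)
    median = np.median(stack, axis=0)
    rms = np.sqrt(((stack - median) ** 2).mean(axis=(1, 2)))
    keep = stack[rms <= k * rms.mean()]
    return keep.mean(axis=0), keep.std(axis=0)

# Four clean (hypothetical) maps and one with a gross unwrapping defect.
maps = [np.zeros((2, 2))] * 4 + [np.full((2, 2), 100.0)]
avg, std = robust_phase_average(maps)  # the defective map is rejected
```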

Fleig, Jon F.; Murphy, Paul E.



Evaluations of average level spacings  

SciTech Connect

The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of ¹⁶⁸Er data. 19 figures, 2 tables.
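The naive estimator underlying all of these methods is the mean nearest-neighbor spacing of an observed level sequence; real evaluations must then correct for missed levels and parity mixing, typically via Porter-Thomas fits, as the record stresses. A sketch with made-up level energies:

```python
def average_level_spacing(levels):
    """Mean nearest-neighbor spacing of a level sequence (naive estimator)."""
    s = sorted(levels)
    return (s[-1] - s[0]) / (len(s) - 1)

d_avg = average_level_spacing([1.2, 3.1, 5.4, 6.9, 9.2])  # eV; -> 2.0
```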

Liou, H.I.



On generalized averaged Gaussian formulas  

NASA Astrophysics Data System (ADS)

We present a simple numerical method for constructing the optimal (generalized) averaged Gaussian quadrature formulas which are the optimal stratified extensions of Gauss quadrature formulas. These extensions exist in many cases in which real positive Kronrod formulas do not exist. For the Jacobi weight functions w(x) ≡ w^(α,β)(x) = (1−x)^α(1+x)^β (α, β > −1) we give a necessary and sufficient condition on the parameters α and β such that the optimal averaged Gaussian quadrature formulas are internal.

Spalevic, Miodrag M.



Average Cost of Common Schools.  

ERIC Educational Resources Information Center

The paper shows costs of elementary and secondary schools applicable to Oklahoma rural areas, including the long-run average cost curve, which indicates the minimum per student cost for educating various numbers of students, and the application of the cost curves in determining the optimum school district size. In a stratified sample, the school…

White, Fred; Tweeten, Luther


Neutron average cross sections of Np237  

NASA Astrophysics Data System (ADS)

This work reports Np237 neutron resonance parameters obtained from the simultaneous analysis of time-of-flight data measured at the GELINA, ORELA, KURRI, and LANSCE facilities. A statistical analysis of these resonances relying on average R-matrix and optical model calculations was used to establish consistent l-dependent average resonance parameters involved in the description of the unresolved resonance range of the Np237 neutron cross sections. For neutron orbital angular momentum l=0, we obtained an average radiation width ⟨Γγ⟩ = 39.3±1.0 meV, a neutron strength function 10⁴S₀ = 1.02±0.14, a mean level spacing D₀ = 0.60±0.03 eV, and a potential scattering length R′ = 9.8±0.1 fm.

Noguere, G.



Average trajectory of returning walks.  


We compute the average shape of trajectories of some one-dimensional stochastic processes x(t) in the (t,x) plane during an excursion, i.e., between two successive returns to a reference value, finding that it obeys a scaling form. For uncorrelated random walks the average shape is semicircular, independent from the single increments distribution, as long as it is symmetric. Such universality extends to biased random walks and Lévy flights, with the exception of a particular class of biased Lévy flights. Adding a linear damping term destroys scaling and leads asymptotically to flat excursions. The introduction of short and long ranged noise correlations induces nontrivial asymmetric shapes, which are studied numerically. PMID:15169006
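The semicircular-shape claim for the unbiased walk can be checked with a quick Monte Carlo: collect excursions between successive returns to zero, rescale each to unit duration, and average |x|. The bin count, interpolation, step count, and seed are illustrative choices, not the paper's procedure.

```python
import random

def average_excursion_shape(n_steps, n_bins=10, seed=1):
    """Average |x| profile of random-walk excursions (segments between
    successive returns to 0), each rescaled to unit duration."""
    random.seed(seed)
    shape, count = [0.0] * n_bins, 0
    x, path = 0, [0]
    for _ in range(n_steps):
        x += random.choice((-1, 1))
        path.append(x)
        if x == 0:  # excursion finished: rescale onto [0, 1] and accumulate
            last = len(path) - 1
            for i in range(n_bins):
                s = (i + 0.5) / n_bins * last
                j, f = int(s), s - int(s)
                shape[i] += abs((1 - f) * path[j] + f * path[min(j + 1, last)])
            count += 1
            path = [0]  # start the next excursion
    return [v / count for v in shape]

profile = average_excursion_shape(200_000)  # humped, roughly semicircular
```

With enough steps the profile is symmetric and peaks near the middle, consistent with the semicircle reported for uncorrelated symmetric walks.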

Colaiori, Francesca; Baldassarri, Andrea; Castellano, Claudio



Robust averaging during perceptual judgment  

PubMed Central

An optimal agent will base judgments on the strength and reliability of decision-relevant evidence. However, previous investigations of the computational mechanisms of perceptual judgments have focused on integration of the evidence mean (i.e., strength), and overlooked the contribution of evidence variance (i.e., reliability). Here, using a multielement averaging task, we show that human observers process heterogeneous decision-relevant evidence more slowly and less accurately, even when signal strength, signal-to-noise ratio, category uncertainty, and low-level perceptual variability are controlled for. Moreover, observers tend to exclude or downweight extreme samples of perceptual evidence, as a statistician might exclude an outlying data point. These phenomena are captured by a probabilistic optimal model in which observers integrate the log odds of each choice option. Robust averaging may have evolved to mitigate the influence of untrustworthy evidence in perceptual judgments.

de Gardelle, Vincent; Summerfield, Christopher



Simulation of two- and three-dimensional dense-fluid shear flows via nonequilibrium molecular dynamics: Comparison of time-and-space-averaged stresses from homogeneous Doll's and Sllod shear algorithms with those from boundary-driven shear  

NASA Astrophysics Data System (ADS)

Homogeneous shear flows (with constant strain rate dv_x/dy) are generated with the Doll's and Sllod algorithms and compared to corresponding inhomogeneous boundary-driven flows. We use one-, two-, and three-dimensional smooth-particle weight functions for computing instantaneous spatial averages. The nonlinear normal-stress differences are small, but significant, in both two and three space dimensions. In homogeneous systems the sign and magnitude of the shear-plane stress difference, P_xx - P_yy, depend on both the thermostat type and the chosen shear-flow algorithm. The Doll's and Sllod algorithms predict opposite signs for this normal-stress difference, with the Sllod approach definitely wrong, but somewhat closer to the (boundary-driven) truth. Neither of the homogeneous shear algorithms predicts the correct ordering of the kinetic temperatures: T_xx > T_zz > T_yy.

Hoover, Wm. G.; Hoover, Carol G.; Petravic, Janka



Average entanglement for Markovian quantum trajectories  

SciTech Connect

We study the evolution of the entanglement of noninteracting qubits coupled to reservoirs under monitoring of the reservoirs by means of continuous measurements. We calculate the average of the concurrence of the qubits' wave function over all quantum trajectories. For two qubits coupled to independent baths subjected to local measurements, this average decays exponentially with a rate depending on the measurement scheme only. This contrasts with the known disappearance of entanglement after a finite time for the density matrix in the absence of measurements. For two qubits coupled to a common bath, the mean concurrence can vanish at discrete times. Our analysis applies to arbitrary quantum jump or quantum state diffusion dynamics in the Markov limit. We discuss the best measurement schemes to protect entanglement in specific examples.

Vogelsberger, S. [Institut Fourier, Universite Joseph Fourier and CNRS, BP 74, F-38402 Saint Martin d'Heres (France); Spehner, D. [Institut Fourier, Universite Joseph Fourier and CNRS, BP 74, F-38402 Saint Martin d'Heres (France); Laboratoire de Physique et Modelisation des Milieux Condenses, Universite Joseph Fourier and CNRS, BP 166, F-38042 Grenoble (France)



Designing a real-time ramp arrival prediction tool  

Microsoft Academic Search

Airline efficiency may be increased by the timely dissemination of accurate ramp time-of-arrival predictions for inbound aircraft. Potential benefits include improvements to gate management, ramp management, and personnel/equipment resource allocation. These benefits can lead to shorter aircraft turn-around times and reduced airline operating costs. Collaboration with Federal Express has identified the following criteria for a real-time prediction system: a gate

J. Legge



The Berkeley Earth Surface Temperature Averaging Methodology  

NASA Astrophysics Data System (ADS)

The Berkeley Earth Surface Temperature project has developed a new analysis framework for handling surface temperature records and constructing global temperature averages and maps. This has allowed us to achieve greater detail with less uncertainty than prior efforts. This framework adapts a weighted least-squares approach with spatial Kriging to allow us to utilize even very short and discontinuous temperature records. Further, we automate the process of assessing station reliability to allow data of unknown reliability to be included in the analysis. For the first time, this means that nearly all of the Earth's ~40 thousand weather station time series can be directly incorporated into the construction of global temperature averages, rather than the 5-10 thousand long time series traditionally used. Applying the Berkeley Earth techniques, we broadly confirm the temperature histories presented by prior groups. However, the improved methodology allows the uncertainties to be reduced (often by 50%) and also has allowed the instrumental temperature record to be extended back to 1800. This session will present the Berkeley Earth Averaging Framework and results, with emphasis on the techniques employed.

Rohde, R. A.; Brillinger, D.; Curry, J. A.; Groom, D.; Jacobsen, B.; Muller, R. A.; Perlmutter, S.; Rosenfeld, A.; Wickham, C.; Wurtele, J.



Wavelet analysis of paleomagnetic data: 1. Characteristic average times (5 10 kyr) of variations in the geomagnetic field during and immediately before and after the Early Jaramillo reversal (Western Turkmenistan)  

NASA Astrophysics Data System (ADS)

Joint wavelet analysis of complete and downsampled series of paleomagnetic and petromagnetic characteristics of rocks in the Matuyama-Jaramillo transitional zone in the Adzhidere section is used to extract paleomagnetic data whose variations are associated with the geomagnetic field alone and data correlating with variations in petromagnetic parameters. It is supposed that this correlation can be caused by an external factor affecting weak variations in the magnetic field and climatic changes reflected in the composition and amount of the ferromagnetic fraction in rocks. Preliminary data are obtained for the characteristic times of field variations at the time of accumulation of rocks in the transitional zone.

Gurarii, G. Z.; Aleksyutin, M. V.; Ataev, N.



Achronal averaged null energy condition  

SciTech Connect

The averaged null energy condition (ANEC) requires that the integral over a complete null geodesic of the stress-energy tensor projected onto the geodesic tangent vector is never negative. This condition is sufficient to prove many important theorems in general relativity, but it is violated by quantum fields in curved spacetime. However there is a weaker condition, which is free of known violations, requiring only that there is no self-consistent spacetime in semiclassical gravity in which ANEC is violated on a complete, achronal null geodesic. We indicate why such a condition might be expected to hold and show that it is sufficient to rule out closed timelike curves and wormholes connecting different asymptotically flat regions.

Graham, Noah; Olum, Ken D. [Department of Physics, Middlebury College, Middlebury, Vermont 05753 (United States) and Center for Theoretical Physics, Laboratory for Nuclear Science, and Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Institute of Cosmology, Department of Physics and Astronomy, Tufts University, Medford, Massachusetts 02155 (United States)




ERIC Educational Resources Information Center

Presents brief annotations of 41 children's books that explore the concept of time from varied perspectives to elucidate some of its meanings for children. Groups books to focus on milestones in children's lives; how they enjoy spending their time; visiting the past in suspending time; recognizing different ways to mark time; celebrating special…

Matthews, Susan; Reid, Rebecca; Sylvan, Anne; Woolard, Linda; Freeman, Evelyn B.



Rapid Detection and Differentiation of Human Pathogenic Orthopox Viruses by a Fluorescence Resonance Energy Transfer Real-Time PCR Assay  

Microsoft Academic Search

Background: The orthopox viruses that are pathogenic for humans include variola major virus (VAR), monkeypox virus (MPV), cowpox virus (CPV), and to a lesser extent, camelpox virus (CML) and vaccinia virus (VAC). PCR is a powerful tool to detect and differentiate orthopox viruses, and real-time PCR has the further advantages of rapid turnaround time, low risk of contamination,

Marcus Panning; Marcel Asper; Stefanie Kramme; Herbert Schmitz; Christian Drosten



Scaling crossover for the average avalanche shape  

NASA Astrophysics Data System (ADS)

Universality and the renormalization group claim to predict all behavior on long length and time scales asymptotically close to critical points. In practice, large simulations and heroic experiments have been needed to unambiguously test and measure the critical exponents and scaling functions. We announce here the measurement and prediction of universal corrections to scaling, applied to the temporal average shape of Barkhausen noise avalanches. We bypass the confounding factors of time-retarded interactions (eddy currents) by measuring thin permalloy films, and bypass thresholding effects and amplifier distortions by applying Wiener deconvolution. We show experimental shapes that are approximately symmetric, and measure the leading corrections to scaling. We solve a mean-field theory for the magnetization dynamics and calculate the relevant demagnetizing-field correction to scaling, showing qualitative agreement with the experiment. In this way, we move toward a quantitative theory useful at smaller time and length scales and farther from the critical point.
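The Wiener deconvolution step mentioned above (used to undo thresholding effects and amplifier distortions) has a standard textbook form that can be sketched in a few lines; the constant signal-to-noise regularization below is an assumption of this sketch, not the paper's actual calibration:

```python
import numpy as np

def wiener_deconvolve(y, h, snr=100.0):
    """Textbook frequency-domain Wiener deconvolution of a measured signal y
    by a known response h.  Regularizing with a constant signal-to-noise
    ratio is an assumption of this sketch, not the paper's calibration."""
    n = len(y)
    H = np.fft.fft(h, n)                            # transfer function of the response
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener inverse filter
    return np.real(np.fft.ifft(G * np.fft.fft(y)))
```

With a high assumed SNR, a pulse that has been smeared by an exponential instrument response is recovered to high accuracy; lowering `snr` trades resolution for noise suppression.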

Papanikolaou, Stefanos; Bohn, Felipe; Sommer, Rubem L.; Durin, Gianfranco; Zapperi, Stefano; Sethna, James P.



Changes in Average Trip Length.  

National Technical Information Service (NTIS)

The recommendations made by the Chicago Area Transportation Study, CATS, for the 1980 transportation requirements of its study area have been based in part upon an estimate of the total daily mileage which will be traveled within the area by that time. Th...

W. D. Stoll



Averaged run-and-tumble walks  

NASA Astrophysics Data System (ADS)

A random walk consisting of a run phase at constant speed interrupted by tumble events is analyzed and analytically solved for arbitrary time distributions. A general expression is given for the Laplace-Fourier transform of the probability density function and for the mean square displacement averaging over initial conditions. Run-and-tumble bacteria and Lévy walks are considered as particular cases. The effects of an underlying Brownian noise are also discussed. Derived expressions can be used for a direct comparison with experimentally measured quantities.
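A minimal Monte Carlo sketch of the simplest special case (2D, constant speed, Poissonian tumbling with complete reorientation; all parameter values hypothetical) reproduces the diffusive mean square displacement that the general analytical expressions contain:

```python
import math
import random

def run_and_tumble_msd(n_walkers=2000, t_max=50.0, speed=1.0, tumble_rate=1.0, seed=0):
    """Monte Carlo estimate of the mean square displacement (MSD) of a 2D
    run-and-tumble walker: straight runs at constant speed, exponentially
    distributed run times, and a fresh uniformly random direction after
    every tumble.  All defaults are hypothetical illustration values."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        x = y = t = 0.0
        while t < t_max:
            # Truncate the last run so every trajectory ends exactly at t_max.
            dt = min(rng.expovariate(tumble_rate), t_max - t)
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x += speed * dt * math.cos(theta)
            y += speed * dt * math.sin(theta)
            t += dt
        total += x * x + y * y
    return total / n_walkers
```

For times much longer than the mean run time the walk is diffusive, with MSD approaching 2·speed²·t/tumble_rate (about 98 for the defaults above) up to sampling error.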

Angelani, L.



Coherent averaging in the frequency domain  

NASA Astrophysics Data System (ADS)

Quantum-mechanical evolution of systems with periodic time-modulated Hamiltonians is often described by effective interactions. Such average Hamiltonians, calculated as few terms of an expansion in powers of the interaction, are sometimes difficult to relate to experimental observations. We propose a frequency-domain approach to this problem, which offers certain advantages and produces an approximate solution for the density matrix, better linked to measurable quantities. The formalism is suitable for calculating the intensities of narrowed spectral peaks. Fast magic-angle-spinning NMR spectra of solids are used to experimentally illustrate the method.

Khitrin, A. K.; Xu, Jiadi; Ramamoorthy, Ayyalusamy



Coherent averaging in the frequency domain  

PubMed Central

Quantum-mechanical evolution of systems with periodic time-modulated Hamiltonians is often described by effective interactions. Such average Hamiltonians, calculated as few terms of an expansion in powers of the interaction, are sometimes difficult to relate to experimental observations. We propose a frequency-domain approach to this problem, which offers certain advantages and produces an approximate solution for the density matrix, better linked to measurable quantities. The formalism is suitable for calculating the intensities of narrowed spectral peaks. Fast magic-angle-spinning NMR spectra of solids are used to experimentally illustrate the method.

Khitrin, A. K.; Xu, Jiadi; Ramamoorthy, Ayyalusamy



Average deployments versus missile and defender parameters  

SciTech Connect

This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with two-fold less effectiveness. The next 10% could double constellation sizes. 5 refs., 7 figs.

Canavan, G.H.



Time-averaged field at the equator: Results from Ecuador  

NASA Astrophysics Data System (ADS)

Seventy sites were collected from Pliocene to recent lavas in Ecuador between 0.19° and 1.1°S latitude. Lightning affected many of the sites sampled, and the final data set consisted of 51 sites with α95 of 10° or less; 21 flows yielded normal directions of magnetization (declination = 354.6°, inclination = -5°, α95 = 7.2°) and 30 yielded reverse (declination = 183.5°, inclination = 5.6°, α95 = 5°). These sites pass a reversal test at a high level. The combined data give a direction of declination = 359.9°, inclination = -5.4°, and α95 = 4.2°, which is just significantly different from the GAD field but is in agreement with a GAD field plus 5% quadrupole field. Mean VGPs are not significantly different from the geographic poles. The ASD for the combined sites is 13.3°, which is in agreement with model "G." Paleointensity measurements yield values of VADM, which range from 1.39 × 10^22 to 22.6 × 10^22 Am^2. The mean value for the reverse field is 3.2 × 10^22 Am^2, while for the normal field it is 10.2 × 10^22 Am^2. The 40Ar/39Ar dates on selected sites show that they are 2.6 Ma and younger. The age dates and magnetic polarity indicate that the Chacana structure is Brunhes/Matuyama in age.

Opdyke, Neil D.; Hall, Minard; Mejia, Victoria; Huang, Kainian; Foster, David A.



Effect of wind averaging time on wind erosivity estimation  

Technology Transfer Automated Retrieval System (TEKTRAN)

The Wind Erosion Prediction System (WEPS) and Revised Wind Erosion Equation (RWEQ) are widely used for estimating the wind-induced soil erosion at a field scale. Wind is the principal erosion driver in the two models. The wind erosivity, which describes the capacity of wind to cause soil erosion, is ...


Averaging of nonlinearity-managed pulses  

SciTech Connect

We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons.

Zharnitsky, Vadim; Pelinovsky, Dmitry [Department of Mathematics, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Department of Mathematics, McMaster University, Hamilton, Ontario L8S 4K1 (Canada)



Averaged model of inverter-fed ultrasonic motors  

Microsoft Academic Search

An averaged model for the most advanced traveling wave type ultrasonic motor drive is presented using a generalized averaging method. The modeling approach reflects the interesting dynamic behavior of the drive's ultrasonic vibrations by time-varying fundamental Fourier coefficients and coincides with the realized measurement system. Hence, the novel model is predestined for optimized control schemes of the drive using two-phase

J. Maas; H. Grotstollen



Delineating the Average Rate of Change in Longitudinal Models  

ERIC Educational Resources Information Center

The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always…

Kelley, Ken; Maxwell, Scott E.



Estimating average regression effect under non-proportional hazards  

Microsoft Academic Search

SUMMARY We present an estimator of average regression effect under a non-proportional hazards model, where the regression effect of the covariates on the log hazard ratio changes with time. In the absence of censoring, the new estimate coincides with the usual partial likelihood estimate, both estimates being consistent for a parameter having an interpretation as an average population regression




Idle Resource Supplement Model and Validity Time Designation Model with Reliability Measurement in Grid Computing  

NASA Astrophysics Data System (ADS)

Grid computing provides high performance like a supercomputer through the sharing and use of distributed heterogeneous computing resources. Grid computing processing time and cost vary widely, since a grid user or a grid middleware can select from a variety of distributed heterogeneous resources. Therefore, grid computing absolutely needs a grid resource management method and model. In this paper, we propose two types of resource management model with resource reliability. The first is the idle resource supplement model, which adds idle resources when the existing resources cannot process the submitted jobs. The second is the validity time designation model, which takes the grid user into account: jobs are processed during a validity time decided by the grid user. This paper evaluates system performance in terms of utilization, job-loss rate, and average turn-around time, and compares the experimental results of our models with those of existing models such as a random model and a round-robin model. The experimental results demonstrate that the two models based on resource reliability measurement improve resource utilization and provide reliable job processing; we expect that the proposed models will improve grid computing QoS.
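The three evaluation metrics named in this abstract could be computed from per-job records along the following lines; the record schema and field names are invented for illustration and are not taken from the paper:

```python
def grid_metrics(jobs, horizon, n_resources):
    """Sketch of the three evaluation metrics named in the abstract,
    computed from per-job records.  The schema (submit/start/finish
    times, finish=None for a lost job) is a hypothetical illustration."""
    done = [j for j in jobs if j["finish"] is not None]
    avg_turnaround = sum(j["finish"] - j["submit"] for j in done) / len(done)
    busy_time = sum(j["finish"] - j["start"] for j in done)
    return {
        "avg_turnaround": avg_turnaround,                     # submit -> finish, averaged
        "utilization": busy_time / (horizon * n_resources),   # fraction of capacity used
        "job_loss_rate": (len(jobs) - len(done)) / len(jobs),
    }
```

For example, two completed jobs with turnaround times 5 and 6 plus one lost job give an average turnaround of 5.5 and a job-loss rate of 1/3.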

Park, Da Hye; Jang, Sung Ho; Noh, Chang Hyeon; Lee, Jong Sik


Average observational quantities in the timescape cosmology  

SciTech Connect

We examine the properties of a recently proposed observationally viable alternative to homogeneous cosmology with smooth dark energy, the timescape cosmology. In the timescape model cosmic acceleration is realized as an apparent effect related to the calibration of clocks and rods of observers in bound systems relative to volume-average observers in an inhomogeneous geometry in ordinary general relativity. The model is based on an exact solution to a Buchert average of the Einstein equations with backreaction. The present paper examines a number of observational tests which will enable the timescape model to be distinguished from homogeneous cosmologies with a cosmological constant or other smooth dark energy, in current and future generations of dark energy experiments. Predictions are presented for comoving distance measures; H(z); the equivalent of the dark energy equation of state, w(z); the Om(z) measure of Sahni, Shafieloo, and Starobinsky; the Alcock-Paczynski test; the baryon acoustic oscillation measure, D{sub V}; the inhomogeneity test of Clarkson, Bassett, and Lu; and the time drift of cosmological redshifts. Where possible, the predictions are compared to recent independent studies of similar measures in homogeneous cosmologies with dark energy. Three separate tests with indications of results in possible tension with the {lambda}CDM model are found to be consistent with the expectations of the timescape cosmology.

Wiltshire, David L. [Department of Physics and Astronomy, University of Canterbury, Private Bag 4800, Christchurch 8140, New Zealand, and International Center for Relativistic Astrophysics Network (ICRANet), Piazzale le della Repubblica 10, Pescara 65121 (Italy)



Below-Average, Average, and Above-Average Readers Engage Different and Similar Brain Regions while Reading  

ERIC Educational Resources Information Center

Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…

Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri



Optimal averaging of seasonal sea surface temperatures and associated confidence intervals (1860-1989)  

Microsoft Academic Search

Optimal averaging (OA) is used to compute the area-average seasonal sea surface temperature (SST) for a variety of areas from 1860 to 1989. The OA gives statistically improved averages and the objective assignment of confidence intervals to these averages. For the global average, the early part of the record (1860-1890) and the times of the two world wars have largest
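Optimal averaging generalizes the familiar inverse-variance weighted mean, which already shows how a statistically improved average and an objective confidence interval emerge together; the following is a minimal sketch of that simpler special case only, ignoring the spatial covariances that OA actually uses:

```python
def weighted_average(values, variances):
    """Inverse-variance weighted average of independent estimates, with its
    1-sigma uncertainty.  This is the simplest instance of 'statistically
    improved average plus objective confidence interval'; full optimal
    averaging additionally accounts for spatial covariances."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, values)) / total
    return mean, (1.0 / total) ** 0.5   # combined standard error
```

Observations with large variances contribute little to the mean, and the combined standard error shrinks as more independent observations are added.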

Thomas M. Smith; C. F. Ropelewski; R. W. Reynolds



Physical effects of consistent cosmological averaging  

NASA Astrophysics Data System (ADS)

We use cosmological perturbation theory to study the backreaction effects of a self-consistent and well-defined cosmological averaging on the dynamics and the evolution of the Universe. Working with a perturbed Friedmann-Lemaître-Robertson-Walker Einstein–de Sitter cosmological solution in a comoving volume-preserving gauge, we compute the expressions for the expansion scalar and deceleration parameter to second order, which we use to characterize the backreaction. We find that the fractional shift in the Hubble parameter with respect to the input background cosmological model is of order 10^-5, which leads to an Ω_eff of the order of a few times 10^-5. In addition, we find that an appropriate measure of the fractional shift in the deceleration parameter, Q̄, is very large.

Brown, Iain A.; Coley, Alan A.; Herman, D. Leigh; Latta, Joey



40 CFR 86.449 - Averaging provisions.  

Code of Federal Regulations, 2011 CFR

...States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family emission ... (1) Make sure your report includes the following things: (i) Calculate in detail your average emission level...



RHIC BPM System Average Orbit Calculations.  

National Technical Information Service (NTIS)

RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contr...

C. Degen M. Minty P. Cerniglia R. Hulsart R. Michnoff



RHIC BPM system average orbit calculations  

SciTech Connect

RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this has provided an average orbit signal quality improvement, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and performance with beam.
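The progression described here, from a fixed 10000-turn average to averaging over whole ~10 Hz perturbation periods, can be illustrated with a toy boxcar average over synthetic turn-by-turn data; the revolution frequency and signal model below are assumptions for illustration, not the actual BPM firmware algorithm:

```python
import math

def average_orbit(positions, n_turns):
    """Boxcar average of turn-by-turn beam positions over a programmable
    number of turns (illustrative sketch only, not the BPM firmware)."""
    return sum(positions[:n_turns]) / n_turns

# Synthetic turn-by-turn data: a 2.0 mm closed orbit plus a 10 Hz
# perturbation, sampled at an assumed ~78 kHz revolution frequency.
f_rev, f_pert = 78_000.0, 10.0
orbit = [2.0 + 0.5 * math.sin(2 * math.pi * f_pert * k / f_rev)
         for k in range(20_000)]

turns_per_period = int(f_rev / f_pert)   # 7800 turns per 10 Hz period
err_10000 = abs(average_orbit(orbit, 10_000) - 2.0)
err_period = abs(average_orbit(orbit, turns_per_period) - 2.0)
```

A window matching an integer number of perturbation periods cancels the 10 Hz component entirely, whereas the fixed 10000-turn window (about 1.28 periods here) leaves a residual offset, which is the motivation for averaging over single or many whole periods.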

Michnoff,R.; Cerniglia, P.; Degen, C.; Hulsart, R.; et al.



Synchronous averaging of epicyclic sun gear vibration  

US Patent & Trademark Office Database

In the method of detecting a localized sun gear fault, in the operation of an epicyclic gear train having ring, planet and sun gears, and a planet carrier, the steps that include detecting sun gear vibrations transmitted through each planet gear, computing separated averages of such detected vibrations, phase shifting the averages to account for the differences in gear meshing positions, and re-combining the phase shifted averages to produce a modified average value of the sun gear vibration.
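The final two claimed steps (phase shifting the separated averages and recombining them) can be sketched schematically as follows; representing each planet's separated average as a circularly shiftable sampled signal, and the shift values themselves, are hypothetical illustrations rather than the patent's actual signal processing:

```python
def recombine(separated_averages, shifts):
    """Recombine per-planet separated vibration averages into a single
    modified sun gear average: circularly shift each planet's average to a
    common sun-gear angular reference, then average across planets.
    (Schematic reading of the claimed steps; shifts are hypothetical.)"""
    n = len(separated_averages[0])
    out = [0.0] * n
    for sig, shift in zip(separated_averages, shifts):
        for i in range(n):
            out[i] += sig[(i + shift) % n]   # undo this planet's meshing offset
    return [v / len(separated_averages) for v in out]
```

When the shifts correctly account for the different gear meshing positions, the per-planet copies of a localized fault signature realign and reinforce instead of smearing out.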

Blunt; David Mark (Fishermans Bend, AU)



The average mental age of adults  

Microsoft Academic Search

Investigates the average mental age of adults by comparing the average intelligence of 9500 native whites, 3700 foreign-born men, and 3300 Negroes. The intelligence of the first group was judged on the basis of Army Mental Tests and that of the two latter groups on the basis of the Surgeon General's report. The average mental age of adults was found

Edgar A. Doll



Attitude stabilization of averaging magnetic torque  

Microsoft Academic Search

The stability problems of two magnetic control systems are discussed. One uses a three-axis magnetic actuator only; the other includes an additional momentum-biased wheel. The so-called orbit-averaged magnetic controllers, derived from the orbit-averaged magnetic field, show an ability to ensure the global attitude stabilization. This is supported theoretically by the averaging method and Lyapunov theories, and also illustrated by the

Lin Huang; Wuxing Jing



Autoregressive Integrated Moving Average (ARIMA) Models for Birth Forecasting  

Microsoft Academic Search

Autoregressive integrated moving average (ARIMA) models are developed for birth time series, and their relationship with classical models for population growth is investigated. Parsimonious versions for the ARIMA models are obtained which retain the most important pieces of information including the length of generation of the population. The technique is applied to birth time series data for Norway.

João Luiz Maurity Saboia



Spacetime averaging of exotic singularity universes  

NASA Astrophysics Data System (ADS)

Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

Dąbrowski, Mariusz P.



Geographic Gossip: Efficient Averaging for Sensor Networks  

NASA Astrophysics Data System (ADS)

Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log n}} \log \epsilon^{-1})$ radio transmissions, which yields a $\sqrt{\frac{n}{\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
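For contrast, the standard randomized pairwise gossip baseline that this scheme improves upon can be sketched in a few lines (the graph and iteration count below are illustrative choices):

```python
import random

def pairwise_gossip(values, edges, iters=20000, seed=1):
    """Standard randomized pairwise gossip (the baseline the paper improves
    on): at each step a uniformly random edge's two endpoints replace their
    values with the pair average, so all values converge to the global mean
    while the total sum is conserved."""
    rng = random.Random(seed)
    v = list(values)
    for _ in range(iters):
        i, j = rng.choice(edges)
        v[i] = v[j] = (v[i] + v[j]) / 2.0
    return v

# Illustrative ring of 8 sensors holding the readings 0..7 (mean 3.5).
edges = [(i, (i + 1) % 8) for i in range(8)]
vals = pairwise_gossip(range(8), edges)
```

On poorly mixing topologies such as rings and grids, many iterations are spent shuffling nearly identical values between neighbors, which is exactly the inefficiency that geographic routing is designed to bypass.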

Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.


Improving consensus structure by eliminating averaging artifacts  

PubMed Central

Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which could also benefit from our approach.
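The averaging artifact targeted here is easy to reproduce in a toy example: naively averaging per-atom coordinates of two rotated conformations shortens the bond below its physical length (the two-atom "molecule" below is invented purely for illustration):

```python
import math

def naive_average(confs):
    """Per-atom arithmetic mean of 3D coordinates across an ensemble --
    the step that introduces the local-geometry artifacts discussed above."""
    n_atoms = len(confs[0])
    return [tuple(sum(c[i][k] for c in confs) / len(confs) for k in range(3))
            for i in range(n_atoms)]

# Two conformations of a hypothetical two-atom "molecule" with a 1.5 A bond,
# the second rotated 90 degrees about the first atom.
c1 = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
c2 = [(0.0, 0.0, 0.0), (0.0, 1.5, 0.0)]
avg = naive_average([c1, c2])
print(math.dist(avg[0], avg[1]))  # ~1.06: an unphysically shortened bond
```

The averaged bond length is 1.5/√2 ≈ 1.06 rather than 1.5, which is the kind of unrealistic local geometry the Monte Carlo refinement is designed to remove.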

KC, Dukka B



Saturn kilometric radiation: Average and statistical properties  

NASA Astrophysics Data System (ADS)

Since Cassini entered Saturn's magnetosphere in July 2004, the auroral Saturnian kilometric radiation (SKR), which dominates the kronian radio spectrum, is observed quasi-continuously. Consecutive orbits of the spacecraft covered distances to Saturn down to 1.3 Saturn radii, all local times and, since December 2006, latitudes as high as 60°. On the basis of carefully calibrated and cleaned long-term time series and dynamic spectra, we analyze the average properties and characteristics of the SKR over 2.75 years starting at Cassini's Saturn orbit insertion. This study confirms and expands previous results from Voyager 1 and 2 studies in the 1980s: the SKR spectrum is found to extend from a few kHz to 1200 kHz; extraordinary mode emission dominates, i.e., left-handed (LH) from the southern kronian hemisphere and right-handed (RH) from the northern one, for which we measure directly a degree of circular polarization up to 100%; the variable visibility of SKR along Cassini's orbit is consistent with sources at or close to the local electron cyclotron frequency fce, in the Local Time (LT) sector 09 h-12 h, and at latitudes ≥70°, with emission beamed along hollow cones centered on the local magnetic field vector; this anisotropic beaming results in the existence of an equatorial radio shadow zone, whose extent is quantified as a function of frequency; it also causes the systematic disappearance of emission at high latitudes above 200 kHz and below 30 kHz. 
In addition, we obtain new results on SKR: LH and RH intensity variations are found to match together at all timescales ≥30 min; moreover their spectra are found to be conjugated as a function of the latitude of the observer; we use this conjugacy to merge LH and RH spectra and derive pronounced systematic dependences of the SKR spectrum as a function of the spacecraft latitude and LT (that will be the input of a subsequent modeling study); we identify for the first time ordinary mode SKR emission; finally, in addition to the SKR and n-SMR components, we discuss the narrowband kilometric component (named here n-SKR) which extends mainly between 10 and 40 kHz, preferentially observed from high latitudes.

Lamy, L.; Zarka, P.; Cecconi, B.; Prangé, R.; Kurth, W. S.; Gurnett, D. A.



A note on generalized averaged Gaussian formulas  

NASA Astrophysics Data System (ADS)

We have recently proposed a very simple numerical method for constructing the averaged Gaussian quadrature formulas. These formulas exist in many more cases than the real positive Gauss–Kronrod formulas. In this note we try to answer whether the averaged Gaussian formulas are an adequate alternative to the corresponding Gauss–Kronrod quadrature formulas, to estimate the remainder term of a Gaussian rule.
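To make the construction concrete (recalled here from Laurie's anti-Gauss construction in the general quadrature literature, not from this note itself): the anti-Gauss rule A_{n+1} is built so that its error cancels the error of the n-point Gauss rule G_n on low-degree polynomials, and the averaged formula is their mean:

```latex
I(p) - A_{n+1}(p) = -\bigl(I(p) - G_n(p)\bigr), \quad \deg p \le 2n+1,
\qquad
\widehat{G}_{2n+1} = \tfrac{1}{2}\bigl(G_n + A_{n+1}\bigr),
```

so that $|G_n(f) - \widehat{G}_{2n+1}(f)|$ can serve as an estimate of the remainder of $G_n$ when a real positive Gauss–Kronrod extension does not exist.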

Spalevic, Miodrag



Bayes model averaging with selection of regressors  

Microsoft Academic Search

When a number of distinct models contend for use in prediction, the choice of a single model can offer rather unstable predictions. In regression, stochastic search variable selection with Bayesian model averaging offers a cure for this robustness issue but at the expense of requiring very many predictors. Here we look at Bayes model averaging incorporating variable selection for prediction.

P. J. Brown; M. Vannucci; T. Fearn



Average Transmission Probability of a Random Stack  

ERIC Educational Resources Information Center

The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…

Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg



Generalized averaging method for power conversion circuits  

Microsoft Academic Search

A more general averaging procedure that encompasses state-space averaging and that is potentially applicable to a much broader class of circuits and systems is presented. Examples of its application in resonant and PWM power converters are presented. The technique is shown to be effective on a number of examples, including resonant-type converters. The approach offers refinements to the theory

Seth R. Sanders; J. M. Noworolski; X. Z. Liu; G. C. Verghese



Evaluating template bias when synthesizing population averages  

NASA Astrophysics Data System (ADS)

Establishing the average shape and spatial variability for a set of similar anatomical objects is important for detecting and discriminating morphological differences between populations. This may be done using deformable templates to synthesize a 3D CT/MRI image of the average anatomy from a set of CT/MRI images collected from a population of similar anatomical objects. This paper investigates the error associated with the choice of template selected from the population used to synthesize the average population shape. Population averages were synthesized for a population of five infant skulls with sagittal synostosis and a population of six normal adult brains using a consistent linear-elastic image registration algorithm. Each data set from the populations was used as the template to synthesize a population average. This resulted in five different population averages for the skull population and six different population averages for the brain population. The displacement variance distance from a skull within the population to the other skulls in the population ranged from 5.5 to 9.9 mm² while the displacement variance distance from the synthesized average skulls to the population ranged from 2.2 to 2.7 mm². The displacement variance distance from a brain within the population to the other brains in the population ranged from 9.3 to 14.2 mm² while the displacement variance distance from the synthesized average brains to the population ranged from 3.2 to 3.6 mm². These results suggest that there was no significant difference between the choice of template with respect to the shape of the synthesized average data set for these two populations.

Carlson, Blake L.; Christensen, Gary E.; Johnson, Hans J.; Vannier, Michael W.



The causal meaning of Fisher's average effect  

PubMed Central

Summary In order to formulate the Fundamental Theorem of Natural Selection, Fisher defined the average excess and average effect of a gene substitution. Finding these notions to be somewhat opaque, some authors have recommended reformulating Fisher’s ideas in terms of covariance and regression, which are classical concepts of statistics. We argue that Fisher intended his two averages to express a distinction between correlation and causation. On this view, the average effect is a specific weighted average of the actual phenotypic changes that result from physically changing the allelic states of homologous genes. We show that the statistical and causal conceptions of the average effect, perceived as inconsistent by Falconer, can be reconciled if certain relationships between the genotype frequencies and non-additive residuals are conserved. There are certain theory-internal considerations favouring Fisher’s original formulation in terms of causality; for example, the frequency-weighted mean of the average effects equaling zero at each locus becomes a derivable consequence rather than an arbitrary constraint. More broadly, Fisher’s distinction between correlation and causation is of critical importance to gene-trait mapping studies and the foundations of evolutionary biology.




Turnaround Project Signs Six States  

ERIC Educational Resources Information Center

Dozens of schools are slated for aggressive interventions over the next three years under a new, multistate effort that aims to clear hurdles that have hindered previous attempts to improve underperforming schools. Education officials in Colorado, Delaware, Illinois, Louisiana, Massachusetts, and New York have agreed to partner with Mass Insight…

Maxwell, Lesli A.



Turnaround team revitalizes mining operations  

Microsoft Academic Search

Starting from scratch, the Broe Companies' Century Coal trains the next generation. The article describes how the newly created Century Coal LLC, controlled by Denver-based Broe Companies investment firm, is rebuilding and expanding its highwall mining operations, increasing production from a current 1 million tons to 5 to 6 million tons in 2006 and 2007. The company has a $100,000




Districts Created to Steer "Turnarounds"  

ERIC Educational Resources Information Center

If the job of a traditional superintendent is hard, imagine the complexities involved in building a school system from scratch--especially one composed of schools with some of the most intractable educational challenges. That's the task facing education leaders in Michigan and Tennessee, which are building special districts to take over…

Samuels, Christina A.



Turnaround team revitalizes mining operations  

SciTech Connect

Starting from scratch, the Broe Companies' Century Coal trains the next generation. The article describes how the newly created Century Coal LLC, controlled by the Denver-based Broe Companies investment firm, is rebuilding and expanding its highwall mining operations, increasing production from a current 1 million tons to 5 to 6 million tons in 2006 and 2007. The company has a $100,000 outreach program with 95% of these funds going to local communities. Present coal mining operations are spread around Bell, Clay, Harlan, Knox, and Leslie Counties. A priority is the renovation of the WenLar preparation plant in Bell County. 5 photos.

Buchsbaum, L.



Cosmic inhomogeneities and averaged cosmological dynamics.  


If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics. PMID:18999811

Paranjape, Aseem; Singh, T P



A flexible signal-averaging system for cardiac waveforms.  


The repetitive nature of cardiac waveforms renders them suitable for processing by signal averaging. A flexible system is described, based on a general-purpose digital computer, capable of averaging a variety of cardiac signals in excess of 0.5 µV containing frequency components below a theoretical Nyquist limit of 5 kHz. Important features of the system include real-time processing capability at a high level of interactive control, and the facility to trigger the averaging process accurately from any part of either the data or a synchronous reference waveform. Application of the system to electrocardiographic and phonocardiographic signals at conventional gain has enabled clinically useful records to be obtained in the face of obtrusive environmental noise. The averaging of signals of a similar nature after considerably increased amplification has revealed cardiac activity unseen in conventional records in a total of 81 subjects. PMID:7359894

Vincent, R; English, M J; Mackintosh, A F; Stroud, N; Chamberlain, D A; Woollons, D J
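The averaging principle behind such systems can be illustrated with a short sketch (a toy model with assumed numbers, not the authors' implementation): aligned repetitions of a repetitive waveform are averaged point by point, so the deterministic signal is preserved while uncorrelated noise falls roughly as 1/√N.

```python
import math
import random

# Toy model (not the authors' system): a repetitive "cardiac" waveform
# buried in Gaussian noise, recovered by point-by-point epoch averaging.
random.seed(1)
PERIOD = 100
template = [math.sin(2 * math.pi * i / PERIOD) for i in range(PERIOD)]

def noisy_epoch():
    """One triggered sweep: the waveform plus zero-mean noise."""
    return [s + random.gauss(0.0, 0.5) for s in template]

def average_epochs(n):
    """Point-by-point average of n aligned sweeps."""
    epochs = [noisy_epoch() for _ in range(n)]
    return [sum(col) / n for col in zip(*epochs)]

def rms_error(trace):
    """Root-mean-square deviation from the noise-free template."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(trace, template)) / PERIOD)

single = rms_error(noisy_epoch())         # residual noise of one sweep
averaged = rms_error(average_epochs(64))  # roughly 8x smaller after 64 sweeps
```

The same 1/√N behavior is why the authors can pull microvolt-level activity out of noise that dwarfs it in any single sweep.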



Implication of spatial averaging in complex terrain wind studies  

SciTech Connect

Studies of wind over complex terrain have been conducted at two different times at the Geysers geothermal region of Northern California. These studies have included optical anemometers with spatial averaging (Lawrence et al., 1972) over path lengths of 0.6 to 1 km and conventional cup-vane anemometers. Most of the work described in this paper resulted from field studies conducted as part of ASCOT (Dickerson and Gudiksen, 1980) experiments in July 1979 and September 1980. It represents an effort to use measurements of spatially averaged wind to obtain the average drainage wind speed component down a small sloping valley with few locations with reasonable instrument exposure accessible to cup anemometers. By comparing the point and spatially averaged wind measurements, it was hoped that data periods could be chosen for numerical wind field model testing when the two measurements were similar.

Porch, W.M.



The uncertainty of simple spatial averages using rain gauge networks  

SciTech Connect

A technique is presented whereby the standard error of a simple spatial average can be computed given any spatial configuration of measuring sites. The method should prove useful to those wishing to assess the time-averaged performance of a network of rain gauges where the site locations are fixed within the averaging area. It is demonstrated that in addition to the site density and the spatial structure of the measured variable, the spatial geometry of the network with respect to the averaging area must be accounted for in the standard error expression. By incorporating estimates of the long-term population mean in the method, bias and random error in the sample statistics are minimized. 18 refs., 9 figs.

Morrissey, M.L. [Univ. of Oklahoma, Norman, OK (United States)]; Maliekal, J.A. [State Univ. of New York College, Brockport, NY (United States)]; Greene, J.S. [Univ. of Delaware, Newark, DE (United States)]; and others



Modification of turbulent transport by orbit averaging.  

National Technical Information Service (NTIS)

The effect on plasma turbulence of orbit averaging by thermal ions is considered, and illustrated for two modes of potential importance for tokamaks. The effect can reduce the ion response below that in earlier treatments, modifying the predicted mode gro...

H. E. Mynick; S. J. Zweben



40 CFR 76.11 - Emissions averaging.  

Code of Federal Regulations, 2013 CFR

...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...



Reasonable Averages That Give Wrong Answers  

NSDL National Science Digital Library

Averages are meant to convey the essential features of a set of data, or a random variable, in a simple and concise way. Like any other summary, an average can be misleading, misused, and abused; there is a fair amount of literature on this aspect of averages, the book by D. Huff (1973) being a particularly readable account. In one intuitive use of averages there is a source of error which can be quite serious and which is often not recognized. This source of error is illustrated below by a quality control problem, a project, an experiment, and a game. A Taylor series expansion gives an insight into the nature of the error.

Shahani, A. K. (Arjan Kewalram)
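A minimal numeric instance of the kind of error the article discusses (my own hypothetical numbers, not an example from the paper): averaging the data and then applying a nonlinear function is not the same as averaging the function of the data.

```python
# Hypothetical trip: two legs of equal distance driven at 30 and 60 mph.
speeds = [30, 60]

# "Reasonable" average: the arithmetic mean of the two speeds.
naive = sum(speeds) / len(speeds)                      # 45.0

# Correct average speed over equal distances: the harmonic mean,
# because the slow leg takes twice as long as the fast one.
harmonic = len(speeds) / sum(1 / v for v in speeds)    # ≈ 40.0
```

The 5 mph gap is exactly the Taylor-expansion effect the abstract alludes to: for a convex function such as 1/v, the average of the function exceeds the function of the average.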



Average Passenger Occupancy (APO) in Your Community.  

ERIC Educational Resources Information Center

Provides details of an activity in which students in grades 4-10 determine the Average Passenger Occupancy (APO) in their community and develop, administer, and analyze a survey to determine attitudes toward carpooling. (DDR)

Stenstrup, Al



Approximation Algorithms for Minimizing Average Distortion  

Microsoft Academic Search

This paper considers embeddings f of arbitrary finite metrics into the line metric (the real line) so that none of the distances is shrunk by the embedding f; the quantity of interest is the factor by which the average distance in the metric is stretched. We call this quantity the average distortion of the non-contracting map f. We prove that finding the

Kedar Dhamdhere; Anupam Gupta; R. Ravi



Multifrequency averaging of DC/DC converters  

Microsoft Academic Search

This paper presents some of the issues involved in applying frequency-selective averaging to modeling the dynamic behavior of pulsewidth-modulated (PWM) DC-DC converters. We use the boost converter as an example to show the details involved in deriving some novel extended averaged models, and we use simulations to highlight the accuracy of the models even when traditional small-ripple conditions are not

V. A. Caliskan; O. C. Verghese; A. M. Stankovic



Monthly average polar sea-ice concentration  

USGS Publications Warehouse

The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.

Schweitzer, Peter N.




SciTech Connect

I analyze the maps recording the travel-time shifts caused by averaged plasma anomalies under an 'average supergranule', constructed by means of statistical averaging over 5582 individual supergranules with large divergence signals detected in two months of Helioseismic and Magnetic Imager Dopplergrams. By utilizing a three-dimensional validated time-distance inversion code, I measure a peak vertical velocity of 117 ± 2 m s⁻¹ at depths around 1.2 Mm in the center of the supergranule and a root-mean-square vertical velocity of 21 m s⁻¹ over the area of the supergranule. A discrepancy between this measurement and the measured surface vertical velocity (a few m s⁻¹) can be explained by the existence of the large-amplitude vertical flow under the surface of supergranules with large divergence signals, recently suggested by Duvall and Hanasoge.

Svanda, Michal [Astronomical Institute, Academy of Sciences of the Czech Republic (v.v.i.), Fricova 298, CZ-25165 Ondrejov (Czech Republic)]



Changes in global average surface temperature, global average sea level, and northern hemisphere snow cover  

NSDL National Science Digital Library

Key figure from the 2007 Intergovernmental Panel on Climate Change (IPCC) report that shows changes in global average surface temperature, global average sea level, and Northern Hemisphere snow cover from as far back as 1850.

IPCC (Intergovernmental Panel on Climate Change) AR4 Synthesis Report


Grade Point Average and Changes in (Great) Grade Expectations.  

ERIC Educational Resources Information Center

Examines student grade expectations throughout a semester in which students offered their expectations three times during the course: (1) within the first week; (2) midway through the semester; and (3) the week before the final examination. Finds that their expectations decreased, stating that their cumulative grade point average was related to…

Wendorf, Craig A.



Analytical Average Model of Studies on Orbital Resonance (Abstract Only),  

National Technical Information Service (NTIS)

For a long time, from the aspect of gravitation, there have been two mathematical models for studying orbital resonance: the practical and the averaged models. It is not convenient to use the practical model because it is devoid of simplifications in the dynamic system...

X. Liao; L. Liu



When Is the Local Average Treatment Close to the Average? Evidence from Fertility and Labor Supply  

ERIC Educational Resources Information Center

The local average treatment effect (LATE) may differ from the average treatment effect (ATE) when those influenced by the instrument are not representative of the overall population. Heterogeneity in treatment effects may imply that parameter estimates from 2SLS are uninformative regarding the average treatment effect, motivating a search for…

Ebenstein, Avraham




SciTech Connect

The objective of this calculation is to generate source terms for each type of Department of Energy (DOE) spent nuclear fuel (SNF) canister that may be disposed of at the potential repository at Yucca Mountain. The scope of this calculation is limited to generating source terms for average DOE SNF canisters, and is not intended to be used for subsequent calculations requiring bounding source terms. This calculation is to be used in future Performance Assessment calculations, or other shielding or thermal calculations requiring average source terms.

K. L. Goluoglu



Greedy grid scheduling algorithm in dynamic job submission environment  

Microsoft Academic Search

A grid is a homogeneous or heterogeneous environment of resources. Grid scheduling is a technique by which resources are efficiently utilized to meet user demands. From the user's perspective, minimization of job turnaround time is the main challenge. Most of the minimization algorithms are implemented in homogeneous resource environments. In this paper a scheduling algorithm is presented to minimize average turnaround

Saumitra Singh; Krishna Kant
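As a concrete illustration of the quantity being minimized (a sketch with made-up burst times, not the paper's grid algorithm): for jobs that all arrive together and run back to back, each job's turnaround time is simply its completion time, and ordering jobs shortest-first minimizes the average.

```python
def average_turnaround(burst_times):
    """Average turnaround time for jobs arriving at t=0 and run in order."""
    finish = 0.0
    total = 0.0
    for burst in burst_times:
        finish += burst   # this job completes once all earlier ones have run
        total += finish   # turnaround = completion time - arrival time (0)
    return total / len(burst_times)

jobs = [8, 4, 9, 5]                      # hypothetical burst times
fcfs = average_turnaround(jobs)          # submission order -> 16.75
sjf = average_turnaround(sorted(jobs))   # shortest-job-first -> 14.0
```

Shortest-job-first wins here because every job waiting behind a short job waits less; greedy grid schedulers exploit the same intuition under dynamic arrivals.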



Rapid identification of pathogens directly from blood culture bottles by Bruker matrix-assisted laser desorption ionization-time of flight mass spectrometry versus routine methods.  


The use of matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) for identification of microorganisms directly from blood culture opens an exciting dimension for microbiologists. We evaluated the performance of the Bruker SepsiTyper kit™ (STK) for direct identification of bacteria from positive blood cultures. This was done in parallel with conventional methods. Nonrepetitive positive blood cultures from 160 consecutive patients were prospectively evaluated by both methods. Of 160 positive blood cultures, the STK identified 114 (75.6%) isolates and the routine conventional method 150 (93%). Thirty-six isolates were misidentified or not identified by the kit. Of these, 5 had a score of >2.000 and 31 had an unreliable low score of <1.7. Four of 8 yeasts were identified correctly. The average turnaround time using the STK was 35 min, including extraction steps, versus 30:12 to 36:12 h with the routine method. The STK holds promise for timely management of bacteremic patients. PMID:23726652

Jamal, Wafaa; Saleem, Rola; Rotimi, Vincent O



Time savings and energy efficiency through alternate electric motor rewind methods  

Microsoft Academic Search

The purpose of this paper is to introduce techniques for turnaround time reduction using nontraditional electric motor rewind techniques. Mechanical stripping, automated winding, and trickle varnish impregnation are all methods that can improve shop work flow and repaired motor life, and reduce end-user downtime. The electromechanical properties and efficiency levels of rewound three-phase motors are also maintained. For this paper, we

Howard W. Penrose; Barry Bauer



40 CFR 89.204 - Averaging.  

Code of Federal Regulations, 2012 CFR

...exceedance of a nonroad engine family caused by a NOX FEL above the applicable emission standard. NOX credits used in averaging may be obtained...a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...



Average Values of Mean Squares in Factorials  

Microsoft Academic Search

The assumptions appropriate to the application of analysis of variance to specific examples, and the effects of these assumptions on the resulting interpretations, are today a matter of very active discussion. Formulas for average values of mean squares play a central role in this problem, as do assumptions about interactions. This paper presents formulas for crossed (and, incidentally, for nested

Jerome Cornfield; John W. Tukey



World average top-quark mass  

SciTech Connect

This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.

Glenzinski, D.; /Fermilab



Gauge invariant averages for the cosmological backreaction  

NASA Astrophysics Data System (ADS)

We show how to provide suitable gauge invariant prescriptions for the classical spatial averages (resp. quantum expectation values) that are needed in the evaluation of classical (resp. quantum) backreaction effects. We also present examples illustrating how the use of gauge invariant prescriptions can avoid interpretation problems and prevent misleading conclusions.

Gasperini, M.; Marozzi, G.; Veneziano, G.



Average magnitude difference function pitch extractor  

Microsoft Academic Search

This paper describes a method for using the average magnitude difference function (AMDF) and associated decision logic to estimate the pitch period of voiced speech sounds. The AMDF is a variation on autocorrelation analysis where, instead of correlating the input speech at various delays (where multiplications and summations are formed at each value of delay), a difference signal is formed

M. Ross; H. Shaffer; A. Cohen; R. Freudberg; H. Manley
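The core computation can be sketched in a few lines (a toy on a pure sine with an assumed period; the paper's decision logic is not reproduced): for each candidate lag, average the absolute difference between the signal and its delayed copy, and take the lag with the deepest valley as the pitch period.

```python
import math

def amdf(signal, lag):
    """Average magnitude difference at a given lag (no multiplications needed)."""
    n = len(signal) - lag
    return sum(abs(signal[i] - signal[i + lag]) for i in range(n)) / n

# Synthetic "voiced" signal with a known period of 50 samples.
PERIOD = 50
x = [math.sin(2 * math.pi * i / PERIOD) for i in range(400)]

# Search a lag range that brackets the expected pitch; the AMDF has a
# null (valley) where the lag matches the period.
best_lag = min(range(20, 80), key=lambda lag: amdf(x, lag))
```

The absence of multiplications is the point of the AMDF relative to autocorrelation: only subtractions, absolute values, and sums are needed per lag.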



Modification of turbulent transport by orbit averaging  

Microsoft Academic Search

The effect on plasma turbulence of orbit averaging by thermal ions is considered. This effect can reduce the ion response, modifying the mode growth rate, which in turn changes the turbulent transport. For the trapped electron mode, this mechanism provides a potential explanation of the observed more favourable scaling of χi with Ti in supershots than in L-modes. Also, for

H. E. Mynick; S. J. Zweben



Why Johnny Can Be Average Today.  

ERIC Educational Resources Information Center

During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…

Sturrock, Alan



Initial Conditions in the Averaging Cognitive Model  

ERIC Educational Resources Information Center

The initial state parameters s[subscript 0] and w[subscript 0] are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…

Noventa, S.; Massidda, D.; Vidotto, G.



New applications for high average power beams.  

National Technical Information Service (NTIS)

The technology base formed by the development of high peak power simulators, laser drivers, FEL's, and ICF drivers from the early 60's through the late 80's is being extended to high average power short-pulse machines with the capabilities of supporting n...

E. L. Neau; B. N. Turman; E. L. Patterson




SciTech Connect

Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness, and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.




Boundedness and Dimension for Weighted Average Functions.  

National Technical Information Service (NTIS)

The paper considers a weighted average property of the type u(x₀) = (∫_B uw dx) / (∫_B w dx), where B is a ball in Eⁿ with center x₀. A lemma constructing such functions is presented from which it follows th...

D. P. Stanford



Comparison of mouse brain DTI maps using K-space average, image-space average, or no average approach.  


Diffusion tensor imaging (DTI) is achieved by collecting a series of diffusion-weighted images (DWIs). Signal averaging of multiple repetitions can be performed in k-space (k-avg) or in image space (m-avg) to improve the image quality. Alternatively, one can treat each acquisition as an independent image and use all of the data to reconstruct the DTI without doing any signal averaging (no-avg). To compare these three approaches, in this study, in vivo DTI data were collected from five normal mice. Noisy data with signal-to-noise ratios (SNR) that varied between five and 30 (before averaging) were then simulated. The DTI indices, including relative anisotropy (RA), trace of diffusion tensor (TR), axial diffusivity, and radial diffusivity, derived from the k-avg, m-avg, and no-avg, were then compared in the corpus callosum white matter, cortex gray matter, and the ventricles. We found that k-avg and m-avg enhanced the SNR of DWI with no significant differences. However, k-avg produced lower RA in the white matter and higher RA in the gray matter, compared to the m-avg and no-avg, regardless of SNR. The latter two produced similar DTI quantifications. We concluded that k-avg is less preferred for DTI brain imaging. PMID:23988782

Sun, Shu-Wei; Mei, Jennifer; Tuel, Keelan
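The k-space versus image-space distinction can be mimicked with a toy complex-signal experiment (my own illustration with assumed noise levels, not the study's reconstruction pipeline): averaging complex samples before taking the magnitude preserves the true amplitude, while averaging magnitudes of noisy complex samples carries a positive, Rician-like bias.

```python
import random

random.seed(0)
TRUE_SIGNAL = 1.0 + 0.0j     # noise-free complex value at one k-space point
REPS = 5000                  # number of repetitions being averaged

# Each repetition adds independent zero-mean noise to both complex components.
samples = [TRUE_SIGNAL + complex(random.gauss(0, 0.8), random.gauss(0, 0.8))
           for _ in range(REPS)]

# k-avg analogue: average the complex data first, then take the magnitude.
k_avg = abs(sum(samples) / REPS)             # close to the true amplitude 1.0

# m-avg analogue: take magnitudes first, then average them.
m_avg = sum(abs(s) for s in samples) / REPS  # biased high at low SNR
```

The bias arises because the magnitude operation rectifies noise; at high SNR the two approaches converge, which is consistent with the averaging comparison being most delicate in low-SNR tissue.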



Despeckling vs averaging of retinal UHROCT tomograms: advantages and limitations  

NASA Astrophysics Data System (ADS)

Imaging time can be reduced using despeckled tomograms, which have similar image metrics to those obtained by averaging several low speed tomograms or many high speed tomograms. Quantitative analysis was used to compare the performance of two speckle denoising approaches, algorithmic despeckling and frame averaging, as applied to retinal OCT images. Human retinal tomograms were acquired from healthy subjects with a research grade 1060 nm spectral domain UHROCT system with 5 µm axial resolution in the retina. Single cross-sectional retinal tomograms were processed with a novel speckle denoising algorithm and compared with frame averaged retinal images acquired at the same location. Image quality metrics such as the image SNR and contrast-to-noise ratio (CNR) were evaluated for both cases.

Eichel, Justin A.; Lee, Donghyun D.; Wong, Alexander; Fieguth, Paul W.; Clausi, David A.; Bizheva, Kostadinka K.



Neutron average cross sections of ²³⁷Np  

SciTech Connect

This work reports ²³⁷Np neutron resonance parameters obtained from the simultaneous analysis of time-of-flight data measured at the GELINA, ORELA, KURRI, and LANSCE facilities. A statistical analysis of these resonances relying on average R-matrix and optical model calculations was used to establish consistent l-dependent average resonance parameters involved in the description of the unresolved resonance range of the ²³⁷Np neutron cross sections. For neutron orbital angular momentum l=0, we obtained an average radiation width of 39.3 ± 1.0 meV, a neutron strength function 10⁴S₀ = 1.02 ± 0.14, a mean level spacing D₀ = 0.60 ± 0.03 eV, and a potential scattering length R′ = 9.8 ± 0.1 fm.

Noguere, G. [Atomic Energy Commission (CEA), DEN Cadarache, F-13108 Saint Paul Les Durance (France)



New applications for high average power beams  

SciTech Connect

The technology base formed by the development of high peak power simulators, laser drivers, FELs, and ICF drivers from the early 60s through the late 80s is being extended to high average power short-pulse machines with the capabilities of supporting new types of manufacturing processes and performing new roles in environmental cleanup applications. This paper discusses a process for identifying and developing possible commercial applications, specifically those requiring very high average power levels of hundreds of kilowatts to perhaps megawatts. The authors discuss specific technology requirements and give examples of application development efforts. The application development work is directed at areas that can possibly benefit from the high specific energies attainable with short pulse machines.

Neau, E.L.; Turman, B.N.; Patterson, E.L.



Average-case analysis of greedy pursuit  

NASA Astrophysics Data System (ADS)

Recent work on sparse approximation has focused on the theoretical performance of algorithms for random inputs. This average-case behavior is typically far better than the behavior for the worst inputs. Moreover, an average-case analysis fits naturally with the type of signals that arise in certain applications, such as wireless communications. This paper describes what is currently known about the performance of greedy pursuit algorithms with random inputs. In particular, it gives a new result for the performance of Orthogonal Matching Pursuit (OMP) for sparse signals contaminated with random noise, and it explains recent work on recovering sparse signals from random measurements via OMP. The paper also provides a list of open problems to stimulate further research.

Tropp, Joel A.



New applications for high average power beams  

NASA Astrophysics Data System (ADS)

The technology base formed by the development of high peak power simulators, laser drivers, FEL's, and ICF drivers from the early 60's through the late 80's is being extended to high average power short-pulse machines with the capabilities of supporting new types of manufacturing processes and performing new roles in environmental cleanup applications. This paper discusses a process for identifying and developing possible commercial applications, specifically those requiring very high average power levels of hundreds of kilowatts to perhaps megawatts. The authors discuss specific technology requirements and give examples of application development efforts. The application development work is directed at areas that can possibly benefit from the high specific energies attainable with short pulse machines.

Neau, E. L.; Turman, B. N.; Patterson, E. L.



High-average-power exciplex laser system  

NASA Astrophysics Data System (ADS)

The LUX high-average-power high-PRF exciplex laser (EL) system being developed at the Institut de Mecanique des Fluides de Marseille is characterized, and some preliminary results are presented. The fundamental principles and design criteria of ELs are reviewed, and the LUX components are described and illustrated, including a closed-circuit subsonic wind tunnel and a 100-kW-average power 1-kHz-PRF power pulser providing avalanche-discharge preionization by either an electron beam or an X-ray beam. Laser energy of 50 mJ has been obtained at wavelength 308 nm in the electron-beam mode (14.5 kV) using a 5300/190/10 mixture of Ne/Xe/HCl at pressure 1 bar.

Sentis, M.


The Average Velocity in a Queue  

ERIC Educational Resources Information Center

A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…

Frette, Vidar
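The queue model in this record is easy to simulate. The sketch below is an illustrative toy (the function names and the speed values are invented for the example, not taken from the paper): with no overtaking, each car's actual speed is the minimum of its own maximum speed and the speeds of all cars ahead of it.

```python
def queue_velocities(max_speeds):
    """Actual speeds in a no-overtaking queue; max_speeds is ordered
    from the front of the queue to the back."""
    actual, current = [], float("inf")
    for v in max_speeds:
        current = min(current, v)  # a car can never exceed the cars ahead
        actual.append(current)
    return actual

def average_velocity(max_speeds):
    """Mean of the actual (not the desired) speeds in the queue."""
    actual = queue_velocities(max_speeds)
    return sum(actual) / len(actual)

# A slow lead car caps the whole queue behind it:
print(queue_velocities([50, 90, 70, 110]))  # [50, 50, 50, 50]
```

Note that the average depends on the ordering, not just on the set of maximum speeds, which is what makes the mean non-trivial.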



Stochastic Games with Average Payoff Criterion  

SciTech Connect

We study two-person stochastic games on a Polish state and compact action spaces and with average payoff criterion under a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of Nash equilibrium in stationary strategies is established under certain separability conditions.

Ghosh, M. K. [Department of Mathematics, Indian Institute of Science, Bangalore 560012 (India); Bagchi, A. [Department of Applied Mathematics, University of Twente, P.O. Box 217, 7500 AE Enschede (Netherlands)



Geometric Orbital Transfer Using Averaging Techniques  

Microsoft Academic Search

In this article, we focus on the coplanar orbital transfer of a satellite between Keplerian orbits, with low propulsion. We compute and analyze averaged systems associated with the energy minimization problem with the full control or control oriented in a single direction, tangential-normal or radial-orthoradial. This allows one to understand the properties of each control direction in the optimal control problems.

B. Bonnard; G. Janin



Gridded electron guns for high average power  

Microsoft Academic Search

The design and performance of electron guns producing high-average-power beams which are controlled by an intercepting-type grid are described. The design consists of a modification of the design of Pierce-type electron guns to allow for the effect of grid insertion, and calculation of the amplification factors based on the potential distribution. The performance is evaluated in terms of the beam

H. E. Gallagher



Average Transitive Trustworthy Degrees for Trustworthy Networks  

Microsoft Academic Search

The goal of this paper is to investigate measurement models of transitive trustworthiness for trustworthy networks. The reals in the interval [0,1] are used to define trust degrees, so that the larger the trust degree, the more trustworthy the node. A basic contribution is to introduce the average algorithm for the measurement of the transitive trustworthiness

Yixiang Chen; Min Zhang; Hong Zhu; Tian-ming Bu



Averaging Processes in Children's Judgment of Happiness  

Microsoft Academic Search

Children of 6–7 years of age judged the happiness of stimulus children on the basis of the nature of their father and mother. In Experiment I (N = 36), it was predicted and found, consistent with the averaging rule of stimulus combination, that a pair of polar and mildly polar or neutral parents would contribute less to the happiness of the child than

Ramadhar Singh; Usha R. Sidana; Prabha Srivastava



On the high correlation between long-term averages of solar wind speed and geomagnetic activity  

Microsoft Academic Search

Six-month and yearly averages of solar wind speed from 1962 to 1975 are shown to be highly correlated with geomagnetic activity as measured by averages of the AP index. On the same time scale the correlation between the southward component of the interplanetary magnetic field and geomagnetic activity is poor. Previous studies with hourly averages give opposite results. The better

N. U. Crooker; J. Feynman; J. T. Gosling



The Monthly-Averaged and Yearly-Averaged Cosine Effect Factor of a Heliostat Field  

Microsoft Academic Search

Analyses are given to determine the monthly-averaged and yearly-averaged daily cosine effect factor for a heliostat field. The analyses showed that this factor depends on the latitude angle of the site, the month, and the relative position of the heliostat with respect to the tower. The position of the heliostat is defined in terms of its distance from the tower

Omar M. Al-Rabghi; Moustafa M. Elsayed



Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.  

ERIC Educational Resources Information Center

Parents of 41 children who had been given an individually administered intelligence test were contacted 19 months after testing. Parents of average-IQ children were less accurate in their memory of test results. Children with above-average IQ experienced extremely low frequencies of sibling rivalry, conceit, or pressure. (Author/HLM)

Dirks, Jean; And Others



Average Run Lengths of Geometric Moving Average Charts by Numerical Methods  

Microsoft Academic Search

A numerical procedure is presented for the tabulation of average run lengths (ARL's) of geometric moving average charts. Both one- and two-sided ARL's are given for various settings of the control limits, smoothing constant, and shift in the nominal level of the process mean. Where comparison is possible, the tabulated ARL's are in agreement with those obtained by Roberts [3], [4

P. B. Robinson; T. Y. Ho
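The average run length in this record can also be estimated by Monte Carlo simulation rather than the numerical procedure the authors tabulate. The sketch below is a rough illustration (the parameter values are arbitrary choices for the example, not from the paper): a geometric (exponentially weighted) moving average chart is run on unit-variance normal data until it signals.

```python
import random
import statistics

def ewma_run_length(lam=0.2, L=2.7, shift=0.0, rng=None):
    """Run length of a geometric moving average chart for a
    unit-variance normal process; `shift` is the shift in the process
    mean. Uses the steady-state control limits +/- L*sqrt(lam/(2-lam))."""
    rng = rng or random.Random()
    limit = L * (lam / (2 - lam)) ** 0.5
    z, n = 0.0, 0
    while True:
        n += 1
        z = (1 - lam) * z + lam * rng.gauss(shift, 1.0)
        if abs(z) > limit:
            return n

def average_run_length(trials=500, **kw):
    """ARL estimated as the mean run length over repeated seeded runs."""
    rng = random.Random(1)
    return statistics.mean(ewma_run_length(rng=rng, **kw) for _ in range(trials))

# A shifted mean is detected far sooner than a false alarm occurs
# for an in-control process:
print(average_run_length(shift=1.0) < average_run_length(shift=0.0))  # True
```

This trades the accuracy of numerical integration for simplicity; the simulated ARL converges to the tabulated value as the number of trials grows.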



Average level crossing rate and average outage duration of generalized selection combining  

Microsoft Academic Search

The paper presents an analytical methodology for the evaluation of the average level crossing rate and the average outage duration of generalized selection combining. More specifically, it presents a general approach when this scheme is employed over independent, but not necessarily identically distributed, fading channels. The general approach is then specialized to independent identically distributed Rayleigh fading paths. Selected numerical

Lin Yang; Mohamed-Slim Alouini



Self-averaging in complex brain neuron signals  

NASA Astrophysics Data System (ADS)

Nonlinear statistical properties of the Ventral Tegmental Area (VTA) of the limbic brain are studied in vivo. The VTA plays a key role in the generation of pleasure and in the development of psychological drug addiction. It is shown that spiking time series of the VTA dopaminergic neurons exhibit long-range correlations with self-averaging behavior. This specific VTA phenomenon has no relation to the VTA rewarding function. This last result reveals the complex role of the VTA in the limbic brain.

Bershadskii, A.; Dremencov, E.; Fukayama, D.; Yadid, G.



Averaged hole mobility model of biaxially strained Si  

NASA Astrophysics Data System (ADS)

We aim to establish a model of the averaged hole mobility of strained Si grown on (001), (101), and (111) relaxed Si1-xGex substrates. The results obtained from our calculation show that the hole mobility values corresponding to strained Si (001), (101), and (111) increase by at most about three, two, and one times, respectively, in comparison with unstrained Si. The results can provide a valuable reference for the understanding and design of strained-Si-based device physics.

Jianjun, Song; He, Zhu; Jinyong, Yang; Heming, Zhang; Rongxi, Xuan; Huiyong, Hu




Microsoft Academic Search

A high pulse- and average-power, low-induction load with a built-in divider is described in this report. The load has a nominal resistance of 25 Ohm and is designed to operate with a repetition rate of up to 50 Hz at a pulse duration (FWHM) of 100 ns, a rise/fall time of 50 ns, and a pulse amplitude of up to

B. I. Grishanov; F. V. Podgorny



Averaging spherically symmetric spacetimes in general relativity  

SciTech Connect

We discuss the averaging problem in general relativity, using the form of the macroscopic gravity equations in the case of spherical symmetry in volume preserving coordinates. In particular, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. On cosmological scales, the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background is found to be of the form of a spatial curvature. On astrophysical scales the correlation tensor can be interpreted as the sum of a spatial curvature and an anisotropic fluid. We briefly discuss the physical implications of these results.

Coley, A. A.; Pelavas, N. [Department of Mathematics and Statistics, Dalhousie University, Halifax, Nova Scotia (Canada)



Average Lyman absorption from simulated QSO spectra  

NASA Astrophysics Data System (ADS)

Average flux decrements below Ly-alpha and above Ly-beta, D(A), and between Ly-beta and the Lyman limit, D(B), of QSOs are computed from synthetic spectra as a function of emission redshift on the basis of the presently known statistics of absorption lines. These simulations indicate that a steepening of the intrinsic spectrum of the QSO up to the Lyman limit is not required in order to explain the observed run of D(B)/D(A) with redshift, and that the average observed absorption is consistent with absorption-line statistics.

Giallongo, E.; Gratton, R.; Trevese, D.



High average power diode pumped solid state lasers for CALIOPE  

SciTech Connect

Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers.

Comaskey, B.; Halpin, J.; Moran, B.



Mexican input in the time-averaged field global database and geomagnetic instability Time Scale  

NASA Astrophysics Data System (ADS)

The Trans-Mexican Volcanic Belt (TMVB) is one of the largest continental volcanic arcs of the North American plate, spanning about 1000 km from the Pacific to the Gulf of Mexico. I report here a detailed paleomagnetic study of more than 400 oriented samples coming from 68 independent cooling units. All these sites were recently dated by means of 40Ar-39Ar systematics and span from 5 Ma to the present. Most samples belong to the western part of the TMVB, while 18 sites were collected at the Gulf of Mexico. Rock-magnetic experiments, which included continuous susceptibility and hysteresis measurements, point to simple magnetic mineralogy. In most cases, the remanence is carried by Ti-poor titanomagnetite of pseudo-single-domain magnetic structure. The characteristic paleodirections are successfully isolated for 62 units.

Gogichaishvili, A.



Average properties of geomagnetic storms in 1932-2009  

NASA Astrophysics Data System (ADS)

We investigate the average properties of geomagnetic storms using the global and local Dxt indices at four Dst stations in 1932-2009. Imposing the condition of complete data availability during storms, our study includes 1268/362/134/59 storms with Dxt minimum less than -50/-100/-150/-200 nT, respectively. The global Dxt minima were, on average, -94/-156/-216/-275 nT, while the deepest storm-time local Dxt minima were -137/-214/-285/-350 nT. Accordingly, the local Dxt minima are typically 25-30% stronger than the global Dxt minima. The distribution of the largest storm-time disturbances is strongly peaked at 18 local time (LT), challenging local midnight as the dominant ion source. Relative timing of local minima verifies that stations at an earlier LT hour observe their minimum a couple of hours after the deepest minimum, in agreement with the westward drift of ions. Storm-time maximum asymmetries were found to increase with storm intensity level, from about 70 nT to 150 nT for -50 to -200 nT storms. However, strong storms are relatively more symmetric than weak storms when compared to the typical level of local disturbance. During individual storms the asymmetry can be more than 200 nT. The rate of evolution of storm-time asymmetry is found to be roughly twice as fast for large storms. We emphasize that the unique database of local Dxt indices proves to be very useful in studying the average spatial distribution and temporal evolution of storms.

Yakovchouk, O. S.; Mursula, K.; Holappa, L.; Veselovsky, I. S.; Karinen, A.



Motional averaging in a superconducting qubit.  


Superconducting circuits with Josephson junctions are promising candidates for developing future quantum technologies. Of particular interest is to use these circuits to study effects that typically occur in complex condensed-matter systems. Here we employ a superconducting quantum bit--a transmon--to perform an analogue simulation of motional averaging, a phenomenon initially observed in nuclear magnetic resonance spectroscopy. By modulating the flux bias of a transmon with controllable pseudo-random telegraph noise we create a stochastic jump of its energy level separation between two discrete values. When the jumping is faster than a dynamical threshold set by the frequency displacement of the levels, the initially separate spectral lines merge into a single, narrow, motional-averaged line. With sinusoidal modulation a complex pattern of additional sidebands is observed. We show that the modulated system remains quantum coherent, with modified transition frequencies, Rabi couplings, and dephasing rates. These results represent the first steps towards more advanced quantum simulations using artificial atoms. PMID:23361011

Li, Jian; Silveri, M P; Kumar, K S; Pirkkalainen, J-M; Vepsäläinen, A; Chien, W C; Tuorila, J; Sillanpää, M A; Hakonen, P J; Thuneberg, E V; Paraoanu, G S




SciTech Connect

A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths is programmed into gate arrays; the output of the matrix is an allowable maximum average power limit. This power limit is enforced by the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron path) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.

Kevin Jordan; Trent Allison; Richard Evans; James Coleman; Albert Grippo



Climatology of globally averaged thermospheric mass density  

NASA Astrophysics Data System (ADS)

We present a climatological analysis of daily globally averaged density data, derived from orbit data and covering the years 1967-2007, along with an empirical Global Average Mass Density Model (GAMDM) that encapsulates the 1986-2007 data. The model represents density as a function of the F10.7 solar radio flux index, the day of year, and the Kp geomagnetic activity index. We discuss in detail the dependence of the data on each of the input variables, and demonstrate that all of the terms in the model represent consistent variations in both the 1986-2007 data (on which the model is based) and the independent 1967-1985 data. We also analyze the uncertainty in the results, and quantify how the variance in the data is apportioned among the model terms. We investigate the annual and semiannual variations of the data and quantify the amplitude, height dependence, solar cycle dependence, and interannual variability of these oscillatory modes. The auxiliary material includes Fortran 90 code for evaluating GAMDM.

Emmert, J. T.; Picone, J. M.



Average neutronic properties of prompt fission products  

SciTech Connect

Calculations of the average neutronic properties of the ensemble of fission products produced by fast-neutron fission of ²³⁵U and ²³⁹Pu, where the properties are determined before the first beta decay of any of the fragments, are described. For each case we approximate the ensemble by a weighted average over 10 selected nuclides, whose properties we calculate using nuclear-model parameters deduced from the systematic properties of other isotopes of the same elements as the fission fragments. The calculations were performed primarily with the COMNUC and GNASH statistical-model codes. The results, available in ENDF/B format, include cross sections, angular distributions of neutrons, and spectra of neutrons and photons, for incident-neutron energies between 10⁻⁵ eV and 20 MeV. Over most of this energy range, we find that the capture cross section of ²³⁹Pu fission fragments is systematically a factor of two to five greater than that for ²³⁵U fission fragments.

Foster, D.G. Jr.; Arthur, E.D.



The average optical depth of disc galaxies  

NASA Astrophysics Data System (ADS)

The average optical depth of disc galaxies is calculated assuming that their infrared output originates from the dust reprocessing of stellar light. Essentially, the luminosity functions at both optical (B-band) and FIR (IRAS) wavelengths are integrated to give a local luminosity density in the optical and FIR. This method was first proposed by Saunders et al., who found a B-band optical depth of tau_B=0.26. In this paper we reconsider the method and employ more realistic models. We conclude that on average a B-band photon originates from a region with an optical depth of tau_B~0.9, a value that indicates that galaxies have high rather than low optical depths. Adjusting the FIR luminosity for flux at wavelengths longer than those detected by IRAS could increase this value further (to tau_B>1). Interpreting this, a galaxy with the dust distributed in a similar manner to the stars would have a central face-on optical depth of tau_B>2.5, and this `typical' galaxy would become optically thin at ~1 scalelength from the centre.

Trewhella, M.; Davies, J. I.; Disney, M. J.; Jones, H. G. W.



Impact of averaged photolysis rates on stratospheric chemical models  

SciTech Connect

An evaluation is made of the effect of two approximations to the diurnal variation of photolysis rates on the simulation of stratospheric chemistry. The daylight average approximation uses photolysis rates that are given a value representative of the daylight average over that portion of the day that the Sun is shining, and zero otherwise. Although this distorts the diurnal cycle, the zonal average concentrations using this approximation are within about 5% of those from the fully resolved diurnal cycle for most of the important species in the stratosphere. Larger discrepancies occur in the partitioning of inorganic chlorine, although even this is in error by only about 10% for the dominant species. The 24-hour average photolysis approximation gives perpetual daylight except in the polar night. The error associated with this is quite large, especially for those species such as NO₃ and N₂O₅ that are produced during night. Over an integration time of a month, it can lead to differences in the simulated concentrations of major species such as ozone of the order of 20% or more. 18 refs., 9 figs., 4 tabs.

Smith, A.K. [National Center for Atmospheric Research, Boulder, CO (United States)



Tongue Motion Averaging from Contour Sequences  

ERIC Educational Resources Information Center

In this paper, a method to get the best representation of a speech motion from several repetitions is presented. Each repetition is a representation of the same speech captured at different times by sequence of ultrasound images and is composed of a set of 2D spatio-temporal contours. These 2D contours in different repetitions are time aligned…

Li, Min; Kambhamettu, Chandra; Stone, Maureen



The Multidimensional Self-Concept: A Comparison of Gifted and Average-Ability Adolescents  

Microsoft Academic Search

The purpose of this paper is to describe the self-concept of gifted and average-ability adolescents from a multidimensional perspective. Four dimensions of self-concept (academic, social, athletic, and physical) were examined using an instrument that incorporates three theoretical perspectives (reflected appraisals, social comparison, and attribution). Ninety-eight junior high school students from a large urban centre in Western Canada participated in the

Michael C. Pyryt; Sal Mendaglio



Determinants of underachievement as perceived by gifted, above-average, and average Black students  

Microsoft Academic Search

This study examined determinants of underachievement as perceived by 148 Black fifth- and sixth-grade students in an urban school district. Explored were the respective influences of social, psychological, and cultural determinants of underachievement as perceived by Black students in three different academic programs (gifted, above-average, and average). Students were placed into groups based on profiles of achievement behavior (Types I through

Donna Y. Ford



Asymmetric network connectivity using weighted harmonic averages  

NASA Astrophysics Data System (ADS)

We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.

Morrison, Greg; Mahadevan, L.
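The weighted harmonic average at the heart of the measure described above has a simple closed form. The sketch below shows only that averaging step, with invented example weights; the full iterative GEN construction of the paper is not reproduced here.

```python
def weighted_harmonic_mean(values, weights):
    """sum(w) / sum(w / v): dominated by the smallest values, which is
    why harmonic averaging rewards a few very close connections over
    many distant ones."""
    return sum(weights) / sum(w / v for v, w in zip(values, weights))

# Hypothetical distances to two collaborators, weighted by tie strength.
# The strong, short tie dominates the felt "closeness":
print(weighted_harmonic_mean([1.0, 10.0], [5.0, 1.0]))  # ~1.18
```

Because each node averages over its own ties, node i's felt closeness to j need not equal j's felt closeness to i, which is the asymmetry the abstract exploits.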



Average power laser experiment (APLE) design  

NASA Astrophysics Data System (ADS)

We describe the details and the design requirements for the 100 kW CW radio frequency free electron laser at 10 μm to be built at Boeing Aerospace and Electronics Division in Seattle with the collaboration of Los Alamos National Laboratory. APLE is a single-accelerator master-oscillator and power-amplifier (SAMOPA) device. The goal of this experiment is to demonstrate a fully operational RF-FEL at 10 μm with an average power of 100 kW. The approach and wavelength were chosen on the basis of maximum cost effectiveness, including utilization of existing hardware and reasonable risk, and potential for future applications. Current plans call for an initial oscillator power demonstration in the fall of 1994 and full SAMOPA operation by December 1995.

Parazzoli, C. G.; Rodenburg, R. E.; Dowell, D. H.; Greegor, R. B.; Kennedy, R. C.; Romero, J. B.; Siciliano, J. A.; Tong, K.-O.; Vetter, A. M.; Adamski, J. L.; Pistoresi, D. J.; Shoffstall, D. R.; Quimby, D. C.



The entropy in finite N-unit nonextensive systems: The normal average and q-average  

NASA Astrophysics Data System (ADS)

We discuss the Tsallis entropy in finite N-unit nonextensive systems by using the multivariate q-Gaussian probability distribution functions (PDFs) derived by the maximum entropy method with the normal average and the q-average (q: the entropic index). The Tsallis entropy obtained by the q-average has an exponential N dependence: S_q(N)/N ≃ e^{(1-q) N S_1(1)} for large N (≫ 1/(1-q) > 0). In contrast, the Tsallis entropy obtained by the normal average is given by S_q(N)/N ≃ 1/[(q-1)N] for large N (≫ 1/(q-1) > 0). The N dependences of the Tsallis entropy obtained by the q- and normal averages are generally quite different, although both results are in fairly good agreement for |q-1| ≪ 1. The validity of the factorization approximation (FA) to PDFs, which has been commonly adopted in the literature, is examined. We have calculated correlations defined by C_m = ⟨(δx_i δx_j)^m⟩ − ⟨(δx_i)^m⟩⟨(δx_j)^m⟩ for i ≠ j, where δx_i = x_i − ⟨x_i⟩ and the bracket ⟨·⟩ stands for the normal or q-average. The first-order correlation (m = 1) expresses the intrinsic correlation, and higher-order correlations with m ≥ 2 include the nonextensivity-induced correlation, whose physical origin is elucidated in the superstatistics.

Hasegawa, Hideo
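For a discrete distribution the Tsallis entropy used in this record has a simple closed form. The sketch below computes it for a small example; the N-unit q-Gaussian machinery of the paper is not reproduced, only the entropy definition itself.

```python
import math

def tsallis_entropy(p, q):
    """S_q = (1 - sum_i p_i**q) / (q - 1); reduces to the
    Boltzmann-Gibbs-Shannon entropy in the limit q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

# Uniform two-state distribution: S_2 = 1 - 2*(1/2)**2 = 0.5,
# while the q -> 1 limit recovers ln 2.
print(tsallis_entropy([0.5, 0.5], 2.0))  # 0.5
print(tsallis_entropy([0.5, 0.5], 1.0))  # ~0.693
```

The nonextensivity shows up in how S_q combines across independent subsystems, which is exactly the N dependence the abstract analyzes.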



Noise reduction of video imagery through simple averaging  

NASA Astrophysics Data System (ADS)

Examiners in the Special Photographic Unit of the Federal Bureau of Investigation Laboratory Division conduct examinations of questioned photographic evidence of all types, including surveillance imagery recorded on film and video tape. A primary type of examination includes side-by-side comparisons, in which unknown objects or people depicted in the questioned images are compared with known objects recovered from suspects or with photographs of suspects themselves. Most imagery received in the SPU for such comparisons originates from time-lapse video or film systems. In such circumstances, the delay between sequential images is so great that standard image summing and/or averaging techniques are useless as a means of improving image detail in questioned subjects or objects without also resorting to processing-intensive pattern reconstruction algorithms. Occasionally, however, the receipt of real-time video imagery will include a questioned object at rest. In such cases, it is possible to use relatively simple image averaging techniques as a means of reducing transient noise in the images, without further compromising the already-poor resolution inherent in most video surveillance images. This paper presents an example of one such case in which multiple images were averaged to reduce the transient noise to a sufficient degree to permit the positive identification of a vehicle based upon the presence of scrape marks and dents on the side of the vehicle.

Vorder Bruegge, Richard W.
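The frame-averaging technique this record describes is just a pixel-wise mean over registered frames. The sketch below uses plain lists as grayscale frames and synthetic data (not case imagery); zero-mean transient noise in the average shrinks roughly as 1/sqrt(N) with N frames.

```python
import random

def average_frames(frames):
    """Pixel-wise mean of N registered frames of identical size."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]

# Synthetic example: a flat gray scene (value 100) observed through
# 64 frames of additive transient noise (std 10).
rng = random.Random(0)
frames = [[[100 + rng.gauss(0, 10) for _ in range(4)] for _ in range(4)]
          for _ in range(64)]
avg = average_frames(frames)
mean_value = sum(sum(row) for row in avg) / 16
```

With 64 frames the per-pixel noise standard deviation drops from 10 to about 1.25, which is why the technique works only for a stationary object, as the abstract stresses.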



Improved MCMAC with momentum, neighborhood, and averaged trapezoidal output.  


An improved modified cerebellar articulation controller (MCMAC) neural control algorithm with better learning and recall processes, using momentum, neighborhood learning, and averaged trapezoidal output, is proposed in this paper. The learning and recall processes of MCMAC are investigated using the characteristic surface of MCMAC and the control action exerted in controlling a continuously variable transmission (CVT). Extensive experimental results demonstrate a significant improvement, with reduced training time and an extended range of trained MCMAC cells. The improvement in the recall process using the averaged trapezoidal output (MCMAC-ATO) is contrasted against the original MCMAC using the square of the Pearson product moment correlation coefficient. Experimental results show that the new recall process has significantly reduced the fluctuations in the control action of the MCMAC and partially addressed the problem associated with the resolution of the MCMAC memory array. PMID:18252380

Ang, K K; Chai, Q



Database of average-power damage thresholds at 1064 nm  

SciTech Connect

We have completed a database of average-power, laser-induced damage thresholds at 1064 nm on a variety of materials. Measurements were made with a newly constructed laser to provide design input for moderate and high average-power laser projects. The measurements were conducted with 16-ns pulses at pulse-repetition frequencies ranging from 6 to 120 Hz. Samples were typically irradiated for times ranging from a fraction of a second up to 5 minutes (36,000 shots). We tested seven categories of samples which included antireflective coatings, high reflectors, polarizers, single and multiple layers of the same material, bare and overcoated metal surfaces, bare polished surfaces, and bulk materials. The measured damage thresholds ranged from <1 J/cm² for some metals to >46 J/cm² for a bare polished glass substrate. 4 refs., 7 figs., 1 tab.

Rainer, F.; Hildum, E.A.; Milam, D.



Results from the average power laser experiment photocathode injector test  

NASA Astrophysics Data System (ADS)

Tests of the electron beam injector for the Boeing/Los Alamos Average Power Laser Experiment (APLE) have demonstrated first time operation of a photocathode RF gun accelerator at 25% duty factor. This exceeds previous photocathode operation by three orders of magnitude. The success of these tests was dependent upon the development of reliable and efficient photocathode preparation and processing. This paper describes the fabrication details for photocathodes with quantum efficiencies up to 12% which were used during electron beam operation. Measurements of photocathode lifetime as it depends upon the presence of water vapor are also presented. Observations of photocathode quantum efficiency rejuvenation and extended lifetime in the RF cavities are described. The importance of these effects upon photocathode lifetime during high average power operation are discussed.

Dowell, D. H.; Bethel, S. Z.; Friddell, K. D.



Image segmentation using intraregion averaging techniques  

NASA Astrophysics Data System (ADS)

An iterative approach that partitions an image into piecewise-constant regions is presented. Each iteration consists of three steps. The first step extracts edges from the image. The extracted edges, which must exhibit high connectivity, are computed using a Laplacian-like morphological edge detector in the first iteration and a simple gradient thresholding in subsequent iterations. After the first iteration, the edge extractor operates on a piecewise-constant image for which the edge detection problem is well defined and well posed. In the second step, a fast averaging of connected pixels within closed boundaries defines the regions in the image. Finally, edge pixels (both true and spurious edge pixels) are each assigned to an underlying region. The iterative application of the three steps simplifies an input image in a diffusion-like manner. The advantages of this algorithm are that both local and region-based information is incorporated, convergence occurs in a few iterations, and the component operations are relatively simple.

Crespo, Jose; Schafer, Ronald W.; Maojo, Victor



Forecasting Sales by Exponentially Weighted Moving Averages  

Microsoft Academic Search

The growing use of computers for mechanized inventory control and production planning has brought with it the need for explicit forecasts of sales and usage for individual products and materials. These forecasts must be made on a routine basis for thousands of products, so that they must be made quickly, and, both in terms of computing time and information storage,
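Winters' full method adds trend and seasonal corrections; as a minimal sketch of the core exponentially weighted recurrence only (a single smoothing constant, no trend or seasonality; the function name and interface are illustrative):

```python
def ewma_forecast(series, alpha):
    """One-step-ahead forecast via simple exponential smoothing.

    The smoothed level is a weighted average that discounts older
    observations geometrically, so only the current level needs to
    be stored per product (cheap in both time and storage, which is
    the abstract's stated motivation)."""
    if not 0 < alpha <= 1:
        raise ValueError("alpha must be in (0, 1]")
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level
```

Because each update touches one stored value, thousands of product forecasts can be refreshed per period with constant memory per product.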

Peter R. Winters



Nihm's Law Only Perfect on the Average  

Microsoft Academic Search

Any theory which not only claims to fit the data better than existing laws but obtains a perfect fit every time should call forth a certain amount of skepticism in the mind of any critical reader. Nihm's (November 1976) polynomial law of sensation makes such claims and thus deserves severe scrutiny. Small differences in Pearson correlation coefficients may represent large

Ima Göst Ryeta



The consequences of average curve generation: implications for biomechanics data.  


One method of understanding the general mechanical response of a complex system such as a vehicle, a human surrogate, a bridge, a boat, a plane, etc., is to subject it to an input, such as an impact, and obtain the response time-histories. The responses can be accelerations, velocities, strains, etc. In general, when experiments of this type are run, the responses are contaminated by sample-to-sample variation, test-to-test variability, random noise, instrumentation noise, and noise from unknown sources. One common method of addressing the noise in the system to obtain the underlying response is to run multiple tests on different samples that represent the same system and add them together, obtaining an average. This functionally reduces the random noise. However, if the fundamental response of each sample is not the same, then it is not altogether clear what the average represents. It may not capture the underlying physics. This paper evaluates the use of transducer time-histories for developing an underlying response when there is variation in the time-histories that is not due to random noise, but to a fundamental aspect of the response. Although the examples used are from NCAP tests, the analysis has direct application to the development of Anthropomorphic Test Devices (ATDs) when the underlying response to which the ATD is designed is obtained from impact tests on Post Mortem Human Surrogates. PMID:21512901
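The paper's caveat, that averaging responses which differ in a fundamental way (e.g. in timing) can blur the underlying physics, can be illustrated with a toy example; the data and function name here are invented for illustration:

```python
def average_histories(histories):
    """Point-by-point average of equal-length time histories."""
    n = len(histories)
    return [sum(samples) / n for samples in zip(*histories)]

# Two pulses of identical shape but shifted in time: the average
# has a lower, broader peak than either individual response, so it
# does not represent the shape of any single test.
a = [0.0, 2.0, 0.0, 0.0]
b = [0.0, 0.0, 2.0, 0.0]
avg = average_histories([a, b])
```

Here the averaged peak is half the true peak amplitude, even though no random noise is present.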

Nusholtz, Guy S; Hsu, Timothy P; Gracián Luna, Manuel Alejandro; Didomenico, Laura; Kochekseraii, Sadech Babaii



Comparison of two matrix-assisted laser desorption ionization-time of flight mass spectrometry methods with conventional phenotypic identification for routine identification of bacteria to the species level.  


Bacterial identification relies primarily on culture-based methodologies requiring 24 h for isolation and an additional 24 to 48 h for species identification. Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) is an emerging technology newly applied to the problem of bacterial species identification. We evaluated two MALDI-TOF MS systems with 720 consecutively isolated bacterial colonies under routine clinical laboratory conditions. Isolates were analyzed in parallel on both devices, using the manufacturers' default recommendations. We compared MS with conventional biochemical test system identifications. Discordant results were resolved with "gold standard" 16S rRNA gene sequencing. The first MS system (Bruker) gave high-confidence identifications for 680 isolates, of which 674 (99.1%) were correct; the second MS system (Shimadzu) gave high-confidence identifications for 639 isolates, of which 635 (99.4%) were correct. Had MS been used for initial testing and biochemical identification used only in the absence of high-confidence MS identifications, the laboratory would have saved approximately US$5 per isolate in marginal costs and reduced average turnaround time by more than an 8-h shift, with no loss in accuracy. Our data suggest that implementation of MS as a first test strategy for one-step species identification would improve timeliness and reduce isolate identification costs in clinical bacteriology laboratories now. PMID:20164271

Cherkaoui, Abdessalam; Hibbs, Jonathan; Emonet, Stéphane; Tangomo, Manuela; Girard, Myriam; Francois, Patrice; Schrenzel, Jacques



On the vertically averaged balance equation of atmospheric trace constituents  

NASA Astrophysics Data System (ADS)

The vertically averaged balance equation of atmospheric trace constituents that customarily serves as a basis for one- and two-layer (box) models of the atmospheric boundary layer and the whole troposphere is derived and discussed. It is shown that this kind of balance equation is accompanied by several prominent shortcomings, so that the capability of such models to predict real situations of photochemical smog formation and depletion is strongly limited. Three of these shortcomings are theoretically elucidated, namely (1) the vertically averaged source and sink terms owing to chemical reactions, (2) the parameterisation of the dry deposition and exhalation fluxes at the earth's surface that serve as lower boundary conditions, and (3) the parameterisation of the upper boundary conditions. Even though the consumption of computing time is much smaller than that of sophisticated Eulerian air pollution models like ADOM, DRAIS, EURAD, RADM, and TADAP, we have to conclude from our theoretical results that one- and two-layer (box) models should not be considered a true alternative to such air pollution models. We also suggest that the influence of these shortcomings on the predicted results has to be estimated and evaluated in a reliable manner before vertically averaged balance equations of atmospheric trace constituents are introduced into general circulation models to investigate the influence of air pollution on climate change on the basis of long-term simulations.
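For orientation, a generic one-layer (box) balance of the kind the paper critiques can be written as follows; the notation is illustrative, not the paper's:

```latex
% Balance for the vertically averaged mixing ratio \bar{c} over a
% layer of depth h: E is the surface emission (exhalation) flux,
% v_d a dry-deposition velocity, F_{top} the flux through the layer
% top, and S(c) the chemical source/sink term.
h\,\frac{d\bar{c}}{dt} = E - v_d\,\bar{c} + F_{\mathrm{top}} + h\,\overline{S(c)},
\qquad \overline{S(c)} \neq S(\bar{c}) \ \text{for nonlinear chemistry.}
```

The inequality on the right is shortcoming (1): for nonlinear reaction terms, the average of the source does not equal the source evaluated at the average, so a box model that uses $S(\bar{c})$ misrepresents the chemistry.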

Kramm, G.; Dlugi, R.; Mölders, N.


Averaged null energy condition in loop quantum cosmology  

SciTech Connect

Wormholes and time machines are objects of great interest in general relativity. However, supporting them requires exotic matter, which is impossible at the classical level. Semiclassical gravity introduces the quantum effects into the stress-energy tensor and constructs many self-consistent wormholes. But they are not traversable due to the averaged null energy condition. Loop quantum gravity (LQG) significantly modifies the Einstein equation in the deep quantum region. If we write the modified Einstein equation in the form of the standard one but with an effective stress-energy tensor, it is convenient to analyze the geometry in LQG through the energy condition. Loop quantum cosmology (LQC), an application of LQG, has an effective stress-energy tensor which violates some kinds of local energy conditions. So it is natural that inflation emerges in LQC. In this paper, we investigate the averaged null energy condition in LQC in the framework of the effective Hamiltonian, and we find that the effective stress-energy tensor in LQC violates the averaged null energy condition in the massless scalar field coupled model.

Li Lifang; Zhu Jianyang [Department of Physics, Beijing Normal University, Beijing 100875 (China)



Quantized Average Consensus on Gossip Digraphs with Reduced Computation  

NASA Astrophysics Data System (ADS)

The authors have recently proposed a class of randomized gossip algorithms which solve the distributed averaging problem on directed graphs, with the constraint that each node has an integer-valued state. The essence of this algorithm is to maintain local records, called “surplus”, of individual state updates, thereby achieving quantized average consensus even though the state sum of all nodes is not preserved. In this paper we study a modified version of this algorithm, whose feature is primarily in reducing both computation and communication effort. Concretely, each node needs to update fewer local variables, and can transmit surplus by requiring only one bit. Under this modified algorithm we prove that reaching the average is ensured for arbitrary strongly connected graphs. The condition of arbitrary strong connection is less restrictive than those known in the literature for either real-valued or quantized states; in particular, it does not require the special structure on the network called balanced. Finally, we provide numerical examples to illustrate the convergence result, with emphasis on convergence time analysis.
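The surplus-based, quantized algorithm of the paper is more elaborate; as a minimal sketch of the underlying idea only, here is a real-valued pairwise gossip iteration on an undirected graph (no quantization, no surplus variables; all names are illustrative):

```python
import random

def gossip_average(values, edges, rounds, seed=0):
    """Randomized pairwise gossip: each round, a random edge is
    chosen and its two endpoints replace their states with the
    midpoint. The global sum is preserved, so on a connected graph
    the states converge to the initial average."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i, j = rng.choice(edges)
        m = (x[i] + x[j]) / 2
        x[i] = x[j] = m
    return x

# Path graph 0-1-2-3; the initial average is 3.0.
state = gossip_average([0.0, 0.0, 4.0, 8.0], [(0, 1), (1, 2), (2, 3)], 2000)
```

The paper's contribution is precisely to recover this averaging behaviour when states are integer-valued and the graph is a digraph, where the pairwise midpoint update (and sum preservation) is no longer available.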

Cai, Kai; Ishii, Hideaki


Calculating ensemble averaged descriptions of protein rigidity without sampling.  


Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, all possible number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability. PMID:22383947

González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J



Implication of spatial averaging in complex-terrain wind studies  

SciTech Connect

Studies of wind over complex terrain have been conducted at three times and two locations in Northern California. Instrumentation included conventional cup-vane anemometers and optical anemometers with spatial averaging over path lengths of 0.6--1 km. Autospectra of the path-normal component of wind from the cup-vane and optical anemometers show consistent differences in slope for periods shorter than four hours. The spectral differences are associated more with changes in wind direction than with changes in wind speed.

Porch, W.M.



Constructing the Average Natural History of HIV-1 Infection  

NASA Astrophysics Data System (ADS)

Many aspects of the natural course of the HIV-1 infection remain unclear, despite important efforts towards understanding its long-term dynamics. Using a scaling approach that places progression markers (viral load, CD4+, CD8+) of many individuals on a single average natural course of disease progression, we introduce the concept of inter-individual scaling and time scaling. Our quantitative assessment of the natural course of HIV-1 infection indicates that the dynamics of the evolution for the individual that developed AIDS (opportunistic infections) is different from that of the individual that did not develop AIDS. This means that the rate of progression is not relevant for the infection evolution.

Diambra, L.; Capurro, A.; Malta, C. P.



High average power second harmonic generation in air  

SciTech Connect

We demonstrate second harmonic vortex generation in atmospheric pressure air using a tightly focused femtosecond laser beam. The circularly polarized ring-shaped beam of the second harmonic is generated in the air by a fundamental beam of the same circular polarization, while a linearly polarized beam produces a two-lobe beam at the second harmonic frequency. The achieved normalized conversion efficiency and average second harmonic power are two orders of magnitude higher compared to those previously reported and can be increased up to 20 times by external gas flow. We demonstrate that the frequency doubling originates from the gradient of photoexcited free electrons created by the ponderomotive force.

Beresna, Martynas; Kazansky, Peter G. [Optoelectronics Research Centre, University of Southampton, Southampton SO17 1BJ (United Kingdom); Svirko, Yuri [Department of Physics and Mathematics, University of Joensuu, Joensuu FI-80101 (Finland); Barkauskas, Martynas; Danielius, Romas [Light Conversion Ltd., Vilnius, LT-10223 (Lithuania)



Averaged null energy condition in a classical curved background  

NASA Astrophysics Data System (ADS)

The averaged null energy condition (ANEC) states that the integral along a complete null geodesic of the projection of the stress-energy tensor onto the tangent vector to the geodesic cannot be negative. Exotic spacetimes, such as those that allow wormholes or the construction of time machines, are possible in general relativity only if ANEC is violated along achronal geodesics. Starting from a conjecture that flat-space quantum inequalities apply with small corrections in spacetimes with small curvature, we prove that ANEC is obeyed by a minimally coupled, free quantum scalar field on any achronal null geodesic surrounded by a tubular neighborhood whose curvature is produced by a classical source.
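In symbols, with $k^{\mu}$ the tangent to a complete null geodesic and $\lambda$ an affine parameter along it, the condition described in the first sentence reads:

```latex
\int_{-\infty}^{+\infty} T_{\mu\nu}\, k^{\mu} k^{\nu} \, d\lambda \;\geq\; 0
```

(for the quantum field case, $T_{\mu\nu}$ is replaced by its expectation value $\langle T_{\mu\nu} \rangle$).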

Kontou, Eleni-Alexandra; Olum, Ken D.



Comparison of Actual Monthly Average Temperatures with 30-Year Average Monthly Temperatures.  

National Technical Information Service (NTIS)

The DOE recently completed a study of regional and seasonal influences on fuel economy shortfall. In a more recent study, the influence of vehicle use intensity, or average miles per day (AMPD), on fuel economy was determined. In both the regionalization ...



20 CFR 226.62 - Computing average monthly compensation.  

Code of Federal Regulations, 2013 CFR

...Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation is computed by first determining the employee's highest 60 months of railroad compensation (disregarding...
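A literal reading of the quoted rule can be sketched as follows; the truncated "disregarding..." clause and any other adjustments in the regulation are not modelled, and the function name is hypothetical:

```python
def average_monthly_compensation(monthly_compensation, months=60):
    """Average of the employee's highest `months` months of railroad
    compensation (a simplified reading of 20 CFR 226.62; the
    regulation's disregard/cap rules are not modelled here)."""
    if len(monthly_compensation) < months:
        raise ValueError("need at least %d months of data" % months)
    highest = sorted(monthly_compensation, reverse=True)[:months]
    return sum(highest) / months
```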



A Novel Method for Averaging Equations of Motion.  

National Technical Information Service (NTIS)

Most perturbation theories are based on successively averaging out those angular variables having increasingly longer periods. Sometimes even the straightforward application of averaging is not simple because the integrals required to average certain func...

F. Morrison



Light-cone averages in a Swiss-cheese universe  

NASA Astrophysics Data System (ADS)

We analyze a toy Swiss-cheese cosmological model to study the averaging problem. In our Swiss-cheese model, the cheese is a spatially flat, matter only, Friedmann-Robertson-Walker solution (i.e., the Einstein-de Sitter model), and the holes are constructed from a Lemaître-Tolman-Bondi solution of Einstein’s equations. We study the propagation of photons in the Swiss-cheese model, and find a phenomenological homogeneous model to describe observables. Following a fitting procedure based on light-cone averages, we find that the expansion scalar is unaffected by the inhomogeneities (i.e., the phenomenological homogeneous model is the cheese model). This is because of the spherical symmetry of the model; it is unclear whether the expansion scalar will be affected by nonspherical voids. However, the light-cone average of the density as a function of redshift is affected by inhomogeneities. The effect arises because, as the universe evolves, a photon spends more and more time in the (large) voids than in the (thin) high-density structures. The phenomenological homogeneous model describing the light-cone average of the density is similar to the ΛCDM concordance model. It is interesting that, although the sole source in the Swiss-cheese model is matter, the phenomenological homogeneous model behaves as if it has a dark-energy component. Finally, we study how the equation of state of the phenomenological homogeneous model depends on the size of the inhomogeneities, and find that the equation-of-state parameters w0 and wa follow a power-law dependence with a scaling exponent equal to unity. That is, the equation of state depends linearly on the distance the photon travels through voids. We conclude that, within our toy model, the holes must have a present size of about 250 Mpc to be able to mimic the concordance model.

Marra, Valerio; Kolb, Edward W.; Matarrese, Sabino



Determination of the average lifetime of b-baryons  

NASA Astrophysics Data System (ADS)

The average lifetime of b-baryons has been studied using 3×10⁶ hadronic Z⁰ decays collected by the DELPHI detector at LEP. Three methods have been used, based on the measurement of different observables: the proper decay time distribution of 206 vertices reconstructed with a Λ, a lepton and an oppositely charged pion; the impact parameter distribution of 441 muons with high transverse momentum accompanied by a Λ in the same jet; and the proper decay time distribution of 125 Λc-lepton decay vertices with the Λc exclusively reconstructed through its pKπ, pK⁰ and Λ3π decay modes. The combined result is τ(b-baryon) = (1.254 +0.121/-0.109 (stat) ± 0.04 (syst) +0.03/-0.05 (syst)) ps, where the first systematic error is due to experimental uncertainties and the second to the uncertainties in the modelling of the b-baryon production and semi-leptonic decay. Including the measurement recently published by DELPHI based on a sample of proton-muon vertices, the average b-baryon lifetime is τ(b-baryon) = (1.255 +0.115/-0.102 (stat) ± 0.05 (syst)) ps.

Abreu, P.; Adam, W.; Adye, T.; Agasi, E.; Ajinenko, I.; Aleksan, R.; Alekseev, G. D.; Alemany, R.; Allport, P. P.; Almehed, S.; Amaldi, U.; Amato, S.; Andreazza, A.; Andrieux, M. L.; Antilogus, P.; Apel, W.-D.; Arnoud, Y.; Asman, B.; Augustin, J.-E.; Augustinus, A.; Baillon, P.; Bambade, P.; Barao, F.; Barate, R.; Barbi, M.; Bardin, D. Y.; Baroncelli, A.; Barring, O.; Barrio, J. A.; Bartl, W.; Bates, M. J.; Battaglia, M.; Baubillier, M.; Baudot, J.; Becks, K.-H.; Begalli, M.; Beilliere, P.; Belokopytov, Yu.; Belous, K.; Benvenuti, A. C.; Berggren, M.; Bertrand, D.; Bianchi, F.; Bigi, M.; Bilenky, M. S.; Billoir, P.; Bloch, D.; Blume, M.; Blyth, S.; Bolognese, T.; Bonesini, M.; Bonivento, W.; Booth, P. S. L.; Borisov, G.; Bosio, C.; Bosworth, S.; Botner, O.; Boudinov, E.; Bouquet, B.; Bourdarios, C.; Bowcock, T. J. V.; Bozzo, M.; Branchini, P.; Brand, K. D.; Brenke, T.; Brenner, R. A.; Bricman, C.; Brillault, L.; Brown, R. C. A.; Bruckman, P.; Brunet, J.-M.; Bugge, L.; Buran, T.; Burgsmueller, T.; Buschmann, P.; Buys, A.; Cabrera, S.; Caccia, M.; Calvi, M.; Camacho Rozas, A. J.; Camporesi, T.; Canale, V.; Canepa, M.; Cankocak, K.; Cao, F.; Carena, F.; Carroll, L.; Caso, C.; Castillo Gimenez, M. V.; Cattai, A.; Cavallo, F. R.; Cerrito, L.; Chabaud, V.; Chapkin, M.; Charpentier, Ph.; Chaussard, L.; Chauveau, J.; Checchia, P.; Chelkov, G. A.; Chen, M.; Chierici, R.; Chliapnikov, P.; Chochula, P.; Chorowicz, V.; Chudoba, J.; Cindro, V.; Collins, P.; Contreras, J. L.; Contri, R.; Cortina, E.; Cosme, G.; Cossutti, F.; Crawley, H. B.; Crennell, D.; Crosetti, G.; Maestro, J. Cuevas; Czellar, S.; Dahl-Jensen, E.; Dahm, J.; Dalmagne, B.; Dam, M.; Damgaard, G.; Dauncey, P. 
D.; Davenport, M.; da Silva, W.; Defoix, C.; Deghorain, A.; Della Ricca, G.; Delpierre, P.; Demaria, N.; de Angelis, A.; de Boer, W.; de Brabandere, S.; de Clercq, C.; de La Vaissiere, C.; de Lotto, B.; de Min, A.; de Paula, L.; de Saint-Jean, C.; Dijkstra, H.; di Ciaccio, L.; Djama, F.; Dolbeau, J.; Donszelmann, M.; Doroba, K.; Dracos, M.; Drees, J.; Drees, K. A.; Dris, M.; Dufour, Y.; Edsall, D.; Ehret, R.; Eigen, G.; Ekelof, T.; Ekspong, G.; Elsing, M.; Engel, J.-P.; Ershaidat, N.; Erzen, B.; Falk, E.; Fassouliotis, D.; Feindt, M.; Ferrer, A.; Filippas, T. A.; Firestone, A.; Fischer, P. A.; Foeth, H.; Fokitis, E.; Fontanelli, F.; Formenti, F.; Franek, B.; Frenkiel, P.; Fries, D. C.; Frodesen, A. G.; Fruhwirth, R.; Fulda-Quenzer, F.; Fuster, J.; Galloni, A.; Gamba, D.; Gandelman, M.; Garcia, C.; Garcia, J.; Gaspar, C.; Gasparini, U.; Gavillet, Ph.; Gazis, E. N.; Gele, D.; Gerber, J.-P.; Gerdyukov, L.; Gibbs, M.; Gokieli, R.; Golob, B.; Gopal, G.; Gorn, L.; Gorski, M.; Gouz, Yu.; Gracco, V.; Graziani, E.; Grosdidier, G.; Grzelak, K.; Gumenyuk, S.; Gunnarsson, P.; Gunther, M.; Guy, J.; Hahn, F.; Hahn, S.; Hajduk, Z.; Hallgren, A.; Hamacher, K.; Hao, W.; Harris, F. J.; Hedberg, V.; Henriques, R.; Hernandez, J. J.; Herquet, P.; Herr, H.; Hessing, T. L.; Higon, E.; Hilke, H. J.; Hill, T. S.; Holmgren, S.-O.; Holt, P. J.; Holthuizen, D.; Hoorelbeke, S.; Houlden, M.; Hrubec, J.; Huet, K.; Hultqvist, K.; Jackson, J. N.; Jacobsson, R.; Jalocha, P.; Janik, R.; Jarlskog, Ch.; Jarlskog, G.; Jarry, P.; Jean-Marie, B.; Johansson, E. K.; Jonsson, L.; Jonsson, P.; Joram, C.; Juillot, P.; Kaiser, M.; Kapusta, F.; Karafasoulis, K.; Karlsson, M.; Karvelas, E.; Katsanevas, S.; Katsoufis, E. C.; Keranen, R.; Khokhlov, Yu.; Khomenko, B. A.; Khovanski, N. N.; King, B.; Kjaer, N. J.; Klein, H.; Klovning, A.; Kluit, P.; Koene, B.; Kokkinias, P.; Koratzinos, M.; Korcyl, K.; Kourkoumelis, C.; Kouznetsov, O.; Kramer, P. 
H.; Krammer, M.; Kreuter, C.; Kronkvist, I.; Krumstein, Z.; Krupinski, W.; Kubinec, P.; Kucewicz, W.; Kurvinen, K.; Lacasta, C.; Laktineh, I.; Lamblot, S.; Lamsa, J. W.; Lanceri, L.; Lane, D. W.; Langefeld, P.; Last, I.; Laugier, J.-P.; Lauhakangas, R.; Leder, G.; Ledroit, F.; Lefebure, V.; Legan, C. K.; Leitner, R.; Lemoigne, Y.; Lemonne, J.; Lenzen, G.; Lepeltier, V.; Lesiak, T.; Liko, D.; Lindner, R.; Lipniacka, A.; Lippi, I.; Loerstad, B.; Loken, J. G.; Lopez, J. M.; Loukas, D.; Lutz, P.; Lyons, L.; MacNaughton, J.; Maehlum, G.; Maio, A.; Malychev, V.; Mandl, F.; Marco, J.; Marco, R.; Marechal, B.; Margoni, M.; Marin, J.-C.; Mariotti, C.; Markou, A.; Maron, T.; Martinez-Rivero, C.; Martinez-Vidal, F.; Marti I Garcia, S.; Masik, J.; Matorras, F.; Matteuzzi, C.; Matthiae, G.; Mazzucato, M.; Mc Cubbin, M.; Mc Kay, R.; Mc Nulty, R.; Medbo, J.; Merk, M.; Meroni, C.; Meyer, S.; Meyer, W. T.; Michelotto, M.; Migliore, E.; Mirabito, L.; Mitaroff, W. A.; Mjoernmark, U.; Moa, T.; Moeller, R.; Moenig, K.; Monge, M. R.; Morettini, P.; Mueller, H.; Mundim, L. M.; Murray, W. J.; Muryn, B.; Myatt, G.; Naraghi, F.; Navarria, F. L.; Navas, S.; Nawrocki, K.; Negri, P.



Opinion formation on adaptive networks with intensive average degree  

NASA Astrophysics Data System (ADS)

We study the evolution of binary opinions on a simple adaptive network of N nodes. At each time step, a randomly selected node updates its state (“opinion”) according to the majority opinion of the nodes that it is linked to; subsequently, all links are reassigned with probability p˜ (q˜) if they connect nodes with equal (opposite) opinions. In contrast to earlier work, we ensure that the average connectivity (“degree”) of each node is independent of the system size (“intensive”), by choosing p˜ and q˜ to be of O(1/N). Using simulations and analytic arguments, we determine the final steady states and the relaxation into these states for different system sizes. We find two absorbing states, characterized by perfect consensus, and one metastable state, characterized by a population split evenly between the two opinions. The relaxation time of this state grows exponentially with the number of nodes, N. A second metastable state, found in the earlier studies, is no longer observed.
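A stripped-down sketch of the asynchronous majority update described above, with the adaptive link rewiring omitted (fixed graph, no p˜/q˜ reassignment; function and variable names are illustrative):

```python
import random

def majority_dynamics(state, adj, steps, seed=0):
    """Asynchronous majority rule: at each step a random node adopts
    the majority opinion (0 or 1) of its neighbours; ties leave its
    opinion unchanged. The rewiring step of the adaptive-network
    model is not implemented here."""
    rng = random.Random(seed)
    s = list(state)
    for _ in range(steps):
        i = rng.randrange(len(s))
        ones = sum(s[j] for j in adj[i])
        if 2 * ones > len(adj[i]):
            s[i] = 1
        elif 2 * ones < len(adj[i]):
            s[i] = 0
    return s

# Complete graph on 5 nodes with a 4-1 initial majority for opinion 1:
adj = {i: [j for j in range(5) if j != i] for i in range(5)}
final = majority_dynamics([1, 1, 1, 1, 0], adj, 1000)
```

With a clear initial majority on a dense graph, the dynamics quickly reach the all-1 consensus, one of the two absorbing states the paper identifies.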

Schmittmann, B.; Mukhopadhyay, Abhishek



To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space  

SciTech Connect

We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'.

Khrennikov, Andrei [International Center for Mathematical Modeling in Physics and Cognitive Sciences, University of Vaexjoe, Vaexjoe S-35195 (Sweden)



Digital Average Current-Mode Control of PWM DC–DC Converters Without Current Sensors  

Microsoft Academic Search

This paper introduces a digital average current-mode control technique for pulsewidth modulation dc-dc converters that relies only on voltage sampling. The proposed approach is to estimate the inductor current using a first-order discrete-time low-pass filter; therefore, the controller can calculate the average inductor current in every switching cycle. As a novel technique of predictive average current control, it has been investigated by choosing
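The excerpt does not give the estimator's details; a generic first-order discrete-time low-pass filter of the kind mentioned can be sketched as follows (the interface and smoothing constant are illustrative, not the paper's design):

```python
def lowpass(samples, alpha, y0=0.0):
    """First-order discrete-time low-pass filter:
        y[n] = y[n-1] + alpha * (x[n] - y[n-1]),  0 < alpha <= 1.
    Applied once per switching cycle to a current estimate, it
    smooths the switching ripple toward the cycle-average value."""
    out, y = [], y0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out
```

For a constant input, the output converges geometrically to that input; the choice of alpha trades ripple rejection against response speed, which is the usual design consideration for such an estimator.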

Ying Qiu; Helen Liu; Xiyou Chen



What do bloggers do: an average day on an average political blog  

Microsoft Academic Search

This study investigates whether average political bloggers engage in four distinct activities: informing readers, checking the media, engaging in political advocacy, and gathering money for charitable causes. Findings show that most bloggers inform their readers about articles in the media or draw attention to key posts on other blogs. Somewhat less frequently, most bloggers write posts that detect errors

Laura McKenna; Antoinette Pole



Perception of average value in multiclass scatterplots.  


The visual system can make highly efficient aggregate judgements about a set of objects, with speed roughly independent of the number of objects considered. While there is a rich literature on these mechanisms and their ramifications for visual summarization tasks, this prior work rarely considers more complex tasks requiring multiple judgements over long periods of time, and has not considered certain critical aggregation types, such as the localization of the mean value of a set of points. In this paper, we explore these questions using a common visualization task as a case study: relative mean value judgements within multi-class scatterplots. We describe how the perception literature provides a set of expected constraints on the task, and evaluate these predictions with a large-scale perceptual study with crowd-sourced participants. Judgements are no harder when each set contains more points; redundant and conflicting encodings, as well as additional sets, do not strongly affect performance; and judgements are harder when using less salient encodings. These results have concrete ramifications for the design of scatterplots. PMID:24051798

Gleicher, Michael; Correll, Michael; Nothelfer, Christine; Franconeri, Steven



Stability theory for adaptive systems: Methods of averaging and persistency of excitation  

Microsoft Academic Search

A method of averaging is developed for the stability analysis of linear differential equations with small time-varying coefficients which do not necessarily possess a (global) average. The technique is then applied to determine the stability of a linear equation which arises in the study of adaptive systems where the adaptive parameters are slowly varying. The stability conditions are stated in

R. L. Kosut; B. D. O. Anderson; I. Mareels



Predictive data mining on Average Global Temperature using variants of ARIMA models  

Microsoft Academic Search

This paper analyzes and predicts the Average Global Temperature time series data. Three different variants of ARIMA models: Basic ARIMA, Trend based ARIMA and Wavelet based ARIMA have been used to predict the average global temperature. Out of all the three linear models, it has been observed that Trend based ARIMA method outperforms basic ARIMA method and Wavelet based ARIMA

C. Narendra Babu; B. Eswara Reddy



The influence of the averaging period on calculation of air pollution using a puff model  

Microsoft Academic Search

The main goal of this paper is to assess differences in calculations of air pollution using standard, one hour wind averages and shorter time averages of ten minutes. A puff model has been used to estimate concentrations of a passive substance for four days in January, March, June and September as representatives of variations of wind and stability during a

Popov Zlatica


The Event-Related Low-Frequency Activity of Highly and Average Intelligent Children  

ERIC Educational Resources Information Center

Using time-frequency analysis techniques to investigate the event-related low-frequency (delta: 0.5-4 Hz; theta: 4-8 Hz) activity of auditory event-related potentials (ERPs) data of highly and average intelligent children, 18 intellectually gifted children, and 18 intellectually average children participated in the present study. Present findings…

Liu, Tongran; Shi, Jiannong; Zhao, Daheng; Yang, Jie



Enhanced Offset Averaging Technique for Flash ADC Design  

Microsoft Academic Search

This paper presents a new combined AC/DC-coupled output averaging technique for input amplifier design of flash analog-to-digital converters (ADC). The new offset averaging design technique takes full advantage of traditional DC-coupled resistance averaging and AC-coupled capacitance averaging techniques to minimize offset-induced ADC nonlinearities. Circuit analysis allows selection of optimum resistance and capacitance averaging factors to achieve maximum offset reduction in

Siqiang Fan; He Tang; Hui Zhao; Xin Wang; Albert Wang; Bin Zhao; Gary G Zhang



A high average power electro-optic switch using KTP  

SciTech Connect

High damage threshold, high thermal conductivity, and small thermo-optic coefficients make KTiOPO₄ (KTP) an attractive material for use in a high average power Q-switch. However, electro-chromic damage and refractive index homogeneity have prevented the utilization of KTP in such a device in the past. This work shows that electro-chromic damage is effectively suppressed using capacitive coupling, and a KTP crystal can be Q-switched for 1.5 × 10⁹ shots without any detectable electro-chromic damage. In addition, KTP with the high uniformity and large aperture size needed for a KTP electro-optic Q-switch can be obtained from flux crystals grown at constant temperature. A thermally compensated, dual crystal KTP Q-switch, which successfully produced 50 mJ pulses with a pulse width of 8 ns (FWHM), has been constructed. In addition, in off-line testing the Q-switch showed less than 7% depolarization at an average power loading of 3.2 kW/cm².

Ebbers, C.A.; Cook, W.M.; Velsko, S.P.



Topology, Delocalization via Average Symmetry and the Symplectic Anderson Transition  

NASA Astrophysics Data System (ADS)

A field theory of the Anderson transition in two dimensional disordered systems with spin-orbit interactions and time-reversal symmetry is developed, in which localization is driven by the proliferation of vortex-like topological defects. The sign of the vortex fugacity determines the Z2 topological class of the localized phase. There are two distinct fixed points with the same critical exponents, corresponding to transitions from a metal to an insulator and a topological insulator respectively. The critical conductivity and correlation length exponent of these transitions are computed in an N = 1 - ε expansion in the number of replicas, where for small ε the critical points are perturbatively connected to the Kosterlitz-Thouless critical point. Delocalized states, which arise at the surface of weak topological insulators and topological crystalline insulators, occur because vortex proliferation is forbidden due to the presence of symmetries that are violated by disorder, but are restored by disorder averaging.

Kane, Charles; Fu, Liang



Topology, delocalization via average symmetry and the symplectic Anderson transition.  


A field theory of the Anderson transition in two-dimensional disordered systems with spin-orbit interactions and time-reversal symmetry is developed, in which the proliferation of vortexlike topological defects is essential for localization. The sign of vortex fugacity determines the Z(2) topological class of the localized phase. There are two distinct fixed points with the same critical exponents, corresponding to transitions from a metal to an insulator and a topological insulator, respectively. The critical conductivity and correlation length exponent of these transitions are computed in an N=1-ε expansion in the number of replicas, where for small ε the critical points are perturbatively connected to the Kosterlitz-Thouless critical point. Delocalized states, which arise at the surface of weak topological insulators and topological crystalline insulators, occur because vortex proliferation is forbidden due to the presence of symmetries that are violated by disorder, but are restored by disorder averaging. PMID:23368359

Fu, Liang; Kane, C L



A vertically averaged spectral model for tidal circulation in estuaries  

USGS Publications Warehouse

A frequency dependent computer model based on the two-dimensional vertically averaged shallow-water equations is described for general purpose application in tidally dominated embayments. This model simulates the response of both tides and tidal currents to user-specified geometries and boundary conditions. The mathematical formulation and practical application of the model are discussed in detail. Salient features of the model include the ability to specify: (1) stage at the open boundaries as well as within the model grid, (2) velocities on open boundaries (river inflows and so forth), (3) spatially variable wind stress, and (4) spatially variable bottom friction. Using harmonically analyzed field data as boundary conditions, this model can be used to make real time predictions of tides and tidal currents. (USGS)

Burau, J. R.; Cheng, R. T.



Spatial averaging algorithms for ultrasonic inspection of austenitic stainless steel welds  

SciTech Connect

Interpretation of ultrasonic inspection data from stainless steel welds is difficult because the signal-to-noise ratio is very low. The three main reasons for this are the granular structure of the weld, the high attenuation of stainless steel, and electronic noise. Averaging in time at the same position in space reduces electronic noise, but does not reduce ultrasonic noise from grain boundary scattering. Averaging waveforms from different spatial positions helps reduce grain noise, but desired signals can destructively interfere if they shift in time. If the defect geometry is known, the ultrasonic waveforms can be shifted before averaging, ensuring signal reinforcement. The simplest geometry results in a linear time shift. An averaging algorithm has been developed which finds the optimum shift. This algorithm computes the averaged, or composite, waveform as a function of the time shift. The optimum occurs when signals from a reflector become aligned in time, producing a large-amplitude composite waveform. This algorithm works very well, but requires significant computer time and storage. This paper discusses this linear shift averaging algorithm, and considers an implementation using frequency domain techniques. Also, data from several weld defects are presented and analyzed.
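
The linear-shift search described in this record can be sketched in a few lines; the function below is an illustrative assumption of the scheme (exhaustive search over a circular per-position shift, optimum chosen by peak composite amplitude), not the authors' implementation:

```python
def linear_shift_average(waveforms, max_shift):
    """Search for the linear time shift that best aligns waveforms from
    adjacent spatial positions, then average them.

    For a candidate per-position shift s, waveform i is circularly shifted
    by i*s samples before averaging; the optimum s is the one whose
    composite (averaged) waveform has the largest peak amplitude, i.e. the
    shift at which reflector signals reinforce rather than cancel.
    """
    n, length = len(waveforms), len(waveforms[0])
    best_peak, best_shift, best_composite = float("-inf"), 0, None
    for s in range(-max_shift, max_shift + 1):
        composite = [0.0] * length
        for i, w in enumerate(waveforms):
            off = (i * s) % length              # circular shift by i*s samples
            for t in range(length):
                composite[t] += w[(t - off) % length]
        composite = [v / n for v in composite]
        peak = max(abs(v) for v in composite)
        if peak > best_peak:
            best_peak, best_shift, best_composite = peak, s, composite
    return best_shift, best_composite
```

For echoes that march linearly across positions, the search recovers the slope at which all echoes stack constructively.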

Horn, J. E.; Cooper, C.S.; Michaels, T.E.



Evaluation of soft x-ray average recombination coefficient and average charge for metallic impurities in beam-heated plasmas  

SciTech Connect

The soft x-ray continuum radiation in TFTR low-density neutral-beam discharges can be much lower than its theoretical value obtained by assuming a corona equilibrium. This reduced continuum radiation is caused by an ionization equilibrium shift toward lower states, which strongly changes the value of the average recombination coefficient of metallic impurities, γ̄, even for only slight changes in the average charge, Z̄. The primary agent for this shift is the charge exchange between the highly ionized impurity ions and the neutral hydrogen, rather than impurity transport, because the central density of the neutral hydrogen is strongly enhanced at lower plasma densities with intense beam injection. In the extreme case of low-density, high-neutral-beam-power TFTR operation (energetic ion mode) the reduction in γ̄ can be as much as one-half to two-thirds. We calculate the parametric dependence of γ̄ and Z̄ for Ti, Cr, Fe, and Ni impurities on neutral density (equivalent to beam power), electron temperature, and electron density. These values are obtained by using either a one-dimensional impurity transport code (MIST) or a zero-dimensional code with a finite particle confinement time. As an example, we show the variation of γ̄ and Z̄ in different TFTR discharges.

Sesnic, S.S.; Bitter, M.; Hill, K.W.; Hiroe, S.; Hulse, R.; Shimada, M.; Stratton, B.; von Goeler, S.



Long-Run Performance of Bayesian Model Averaging.  

National Technical Information Service (NTIS)

Hjort and Claeskens (HC) argue that statistical inference conditional on a single selected model underestimates uncertainty, and that model averaging is the way to remedy this; we strongly agree. They point out that Bayesian model averaging (BMA) has been...

A. E. Raftery; Y. Zheng



Computer for Averaging Transients to Determine EEG Evoked Potentials.  

National Technical Information Service (NTIS)

A 'Computer for Averaging Transients' (CAT) has been designed, constructed, and tested; the CAT is capable of averaging overall electroencephalogram (EEG) patterns following repeated stimuli, thereby separating the EEG response waveform from the many unre...

F. Cruger



Calculation of Monthly Average Insolation on Tilted Surfaces.  

National Technical Information Service (NTIS)

Several simplified design procedures for solar energy systems require monthly average meteorological data. Monthly average daily totals of the solar radiation incident on a horizontal surface are available. However, radiation data on tilted surfaces, requ...

S. A. Klein



Error Bounds in the Averaging of Hybrid Systems.  

National Technical Information Service (NTIS)

This reprint analyzes the errors introduced by the averaging of hybrid systems. These systems involve linear systems which can take a number of different realizations based on the state of an underlying finite state process. The averaging technique (based...

J. Essine; A. H. Haddad



The average ionospheric electrodynamics for the different substorm phases  

NASA Astrophysics Data System (ADS)

The average patterns of the electrostatic potential, current vectors, and Joule heating in the polar ionosphere, as well as the associated field-aligned currents, are determined for a quiet time, the growth phase, the expansion phase, the peak epoch, and the recovery phase of substorms. For this purpose, the Kamide-Richmond-Matsushita magnetogram-inversion algorithm is applied to a data set (for March 17, 18, and 19, 1978) from the six meridian magnetometer chains (the total number of magnetometer stations being 71) which were operated during the period of the International Magnetospheric Study (IMS). This is the first attempt at obtaining, on the basis of individual substorms, the average pattern of substorm quantities in the polar ionosphere for the different epochs. The main results are as follows: (1) The substorm-time current patterns over the entire polar region consist of two components. The first one is related to the two-cell convection pattern, and the second one is the westward electrojet in the dark sector which is related to the wedge current. (2) Time variations of the two components for the four substorm epochs are shown to be considerably different. (3) The dependence of these differences on the ionospheric electric field and the conductivities (Hall and Pedersen) is identified. (4) It is shown that the large-scale two-cell pattern in the electric potential is dominant during the growth phase of substorms. (5) The expansion phase is characterized by the appearance of a strong westward electrojet, which is added to the two-cell pattern. (6) The large-scale potential pattern becomes complicated during the recovery phase of substorms, but the two-cell pattern appears to be relatively dominant again during their late recovery as the wedge current subsides. These and many other earlier results are consistent with the present ones, which are more quantitatively and comprehensively demonstrated in our global study. 
Thus the two components are tentatively identified as the directly driven and the unloading components, respectively, although there is some indication that both components are actually coupled in the ionosphere. In the present paper we show that the directly driven component is present throughout the lifetime of substorms, becoming a dominant feature during the recovery phase of substorms as the unloading component wanes. The fact that the two components exist and that their time variations are different indicates that different processes are in progress even for the same value, say, -500 nT, of AL and that we must be cautious in using a single electrojet index, such as AL, in ordering satellite data.

Kamide, Y.; Sun, W.; Akasofu, S.-I.



Stability theory of the orbit-averaged Boltzmann equation  

Microsoft Academic Search

Consideration is given to the relation between the thermal runaway predicted in models of the collisional evolution of stellar systems and the onset of linear instability in equilibrium solutions of the Boltzmann equation obtained by averaging over the stellar orbits. The orbit-averaged Boltzmann equation is obtained by instantaneously averaging, over orbits of fixed energy and angular momentum, the Boltzmann equation

J. R. Ipser; H. E. Kandrup



20 CFR 404.220 - Average-monthly-wage method.  

Code of Federal Regulations, 2013 CFR

...2013-04-01 false Average-monthly-wage method. 404.220 Section 404.220 Employees...Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this...



Interaction between directional epistasis and average mutational effects  

Microsoft Academic Search

We investigate the relationship between the average fitness decay due to single mutations and the strength of epistatic interactions in genetic sequences. We observe that epistatic interactions between mutations are correlated to the average fitness decay, both in RNA secondary structure prediction as well as in digital organisms replicating in silico. This correlation implies that during adaptation, epistasis and average

Claus O. Wilke; Christoph Adami



Calculation of monthly average insolation on tilted surfaces  

Microsoft Academic Search

Several simplified design procedures for solar energy systems require monthly average meteorological data. Monthly average daily totals of the solar radiation incident on a horizontal surface are available. However, radiation data on tilted surfaces, required by the design procedures, are generally not available. A simple method of estimating the average daily radiation for each calendar month on surfaces facing directly

S. A. Klein



Robustness of spatial average equalization: A statistical reverberation model approach  

Microsoft Academic Search

Traditionally, multiple listener room equalization is performed to improve sound quality at all listeners, during audio playback, in a multiple listener environment (e.g., movie theaters, automobiles, etc.). A typical way of doing multiple listener equalization is through spatial averaging, where the room responses are averaged spatially between positions and an inverse equalization filter is found from the spatially averaged result.

Sunil Bharitkar; Philip Hilmes; Chris Kyriakakis



Substance Use by Adolescents on an Average Day Is Alarming  


... Substance use by adolescents on an average day is alarming. On an average day, 881,684 teenagers aged 12 to 17 smoked ... (SAMHSA). The report also says that on an average day 646,707 adolescents smoked marijuana and 457,672 ...


Average storm duration and seasonal precipitation rates for the northeast sector of the United States  

SciTech Connect

Regional and seasonal averages of storm duration and precipitation rates for the northeast sector of the United States are needed as input variables in models aimed at describing precipitation processes relating to acid rain and other problems. Hourly precipitation data for 78 first-order weather stations for four summer months and three winter months were used to calculate the regional average storm duration and average precipitation rates over the 19 northeastern states north of 36°N latitude and east of 92°W longitude. Average storm durations for the summer (June-August) and winter (December-February) seasons are calculated based on a scheme that allows a period of up to three dry hours in summer storms, and up to six dry hours in winter storms, to be included as part of the same storm. Thus, each storm consists of one or more precipitation periods or events. The arithmetic mean of event precipitation rates was used to determine the average precipitation rates for summer and winter season storms. The data suggest a preferred summer storm time duration of about three hours and a preferred winter storm time scale of about 26 hours. The summer storms typically consist of one rain event lasting about three hours with rain rates averaging 2.5 mm h⁻¹. The 26-hour winter storm averages about three precipitation periods (or rain bands). Each period lasts about six hours and has an average precipitation rate of 0.9 mm h⁻¹.
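
The dry-gap merging scheme in this record is a simple interval-grouping rule; a hypothetical sketch of it (function name and interface are assumptions for illustration):

```python
def group_storms(wet_hours, max_dry_gap):
    """Group hourly precipitation observations into storms.

    wet_hours: sorted hour indices with measurable precipitation.
    Wet hours separated by at most max_dry_gap dry hours (3 for summer,
    6 for winter in the scheme above) are merged into the same storm.
    Returns (start_hour, end_hour) intervals, one per storm.
    """
    storms = []
    for h in wet_hours:
        if storms and h - storms[-1][1] - 1 <= max_dry_gap:
            storms[-1][1] = h                 # gap short enough: extend storm
        else:
            storms.append([h, h])             # dry gap too long: new storm
    return [tuple(s) for s in storms]
```

With the summer threshold of three dry hours, rain at hours 0-2 and again at hour 6 counts as one storm, while rain at hour 15 starts a new one.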

Thorp, J.M.; Scott, B.C.



Bilateral averaging over receiving and transmitting areas for accurate measurements of sector average signal strength inside buildings  

Microsoft Academic Search

In defining path loss for narrowband systems, it is common to introduce the concept of the sector or local average in order to remove the rapid (Rayleigh or Rician) fading. As originally developed for cellular mobile radio (CMR) systems, the sector average is obtained from the spatial average of the received signal as the mobile traverses a path of 20

Walter Honcharenko; Henry L. Bertoni; James L. Dailing



On the average outage rate and average outage duration of wireless communication systems with multiple cochannel interferers  

Microsoft Academic Search

This paper studies the average outage rate [or average level crossing rate (LCR)] and average outage duration (AOD) of wireless communication systems subject to cochannel interference. In particular, it presents closed-form expressions for the LCR and AOD when a minimum desired signal power requirement is specified for satisfactory reception. The results are quite general and account for systems operating over

Lin Yang; Mohamed-slim Alouini



Lack of self-averaging and family trees  

NASA Astrophysics Data System (ADS)

We consider a large population of asexually reproducing individuals in the absence of selective pressure. The population size is held constant by the environment. We find that distances between individuals (time from the last common ancestor) exhibit highly non-trivial properties. In particular, their distribution in a single population is random even in the thermodynamic limit, i.e., there is a lack of self-averaging. As a result, not only are distances different for different pairs of individuals, but the mean distance of the individuals of a given population is also different at different times. All computed quantities are parameter-free and scale only linearly with the population size. Results in this paper may have some relevance in the `Out of Africa/Multi-regional' debate about the origin of modern man. In fact, the recovery of mitochondrial DNA from Neandertal fossils at three different loci: Feldhofer (Germany), Mezmaiskaya (Northern Caucasus), Vindija (Croatia), permitted comparison of Neandertal/Neandertal distances with Neandertal/modern and modern/modern ones.

Serva, Maurizio



Sorted averaging--principle and application to auditory brainstem responses.  


A new averaging method for evoked potentials, called sorted averaging, is presented. The method requires an ensemble of sweeps stored in memory and is based on the principle of interchangeability of individual sweeps within this ensemble. Sorted averaging is applied by sorting all sweeps according to their estimated background noise and successively averaging this sorted ensemble, starting with low-noise sweeps. The signal-to-noise power ratio (SNR2) is estimated by calculating the inverse single-point variance (Elberling & Don, 1984). This SNR2 estimate increases linearly with the number of sweeps for standard averaging and shows a clear maximum for sorted averaging. Auditory brainstem responses to click stimuli at 70 dB nHL in 20 normal hearing subjects were recorded and 4000 individual sweeps during each run were stored for analysis. In an off-line analysis, SNR2 for standard averaging with 10 microV artefact rejection, for weighted averaging and for sorted averaging were calculated. Sorted averaging was found to yield a significantly higher SNR2. PMID:10489863
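
A minimal sketch of the sorted-averaging principle follows; the noise estimator is left as a caller-supplied function, and the inverse single-point-variance SNR² estimator of Elberling & Don is not reproduced here, so treat the interface as an illustrative assumption:

```python
def sorted_average(sweeps, noise_of):
    """Sorted averaging: order the stored sweeps by their estimated
    background noise (noise_of maps a sweep to its noise estimate), then
    form cumulative averages starting with the low-noise sweeps.

    Returns the list of cumulative averages; the caller evaluates an SNR^2
    estimate on each and keeps the maximum, which per the record is a
    clear peak (unlike standard averaging, where SNR^2 keeps growing with
    the number of sweeps).
    """
    ordered = sorted(sweeps, key=noise_of)
    running = [0.0] * len(ordered[0])
    averages = []
    for k, sweep in enumerate(ordered, start=1):
        running = [r + s for r, s in zip(running, sweep)]
        averages.append([v / k for v in running])
    return averages
```

Because each cumulative average only adds one sweep to the previous one, the whole family of candidate averages costs no more than a single pass over the sorted ensemble.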

Mühler, R; von Specht, H



Paleosecular variation and the average geomagnetic field at ±20° latitude  

NASA Astrophysics Data System (ADS)

We assembled a new paleomagnetic directional data set from lava flows and thin dikes for four regions centered on ±20° latitude: Hawaii, Mexico, the South Pacific, and Reunion. We investigate geomagnetic field behavior over the past 5 Myr and address whether geographical differences are recorded by our data set. We include inclination data from other globally distributed sites with the ±20° data to determine the best fitting time-averaged field (TAF) for a two-parameter longitudinally symmetric (zonal) model. Values for our model parameters, the axial quadrupole and octupole terms, are 4% and 6% of the axial dipole, respectively. Our estimate of the quadrupole term is compatible with most previous studies of deviations from a geocentric axial dipole (GAD) field. Our estimated octupole term is larger than that from normal polarity continental and igneous rocks, and oceanic sediments, but consistent with that from reversed polarity continental and igneous rocks. The variance reduction compared with a GAD field is ˜12%, and the remaining signal is attributed to paleosecular variation (PSV). We examine PSV at ±20° using virtual geomagnetic pole (VGP) dispersion and comparisons of directional distributions with simulations from two statistical models. Regionally, the Hawaii and Reunion data sets lack transitional magnetic directions and have similar inclination anomalies and VGP dispersion. In the Pacific hemisphere, Hawaii has a large inclination anomaly, and the South Pacific exhibits high PSV. The deviation of the TAF from a GAD contradicts earlier ideas of a "Pacific dipole window," and the strong regional PSV in the South Pacific contrasts with the generally low secular variation found on short timescales. The TAF and PSV at Hawaii and Reunion are distinct from values for the South Pacific and Mexico, demonstrating the need for time-averaged and paleosecular variation models that can describe nonzonal field structures. 
Investigations of zonal statistical PSV models reveal that recent models are incompatible with the empirical ±20° directional distributions and cannot fit the data by simply adjusting relative variance contributions to the PSV. The ±20° latitude data set also suggests less PSV and smaller persistent deviations from a geocentric axial dipole field during the Brunhes.

Lawrence, K. P.; Constable, C. G.; Johnson, C. L.



Averaging-invariance of compressible Navier-Stokes equation  

NASA Astrophysics Data System (ADS)

While the averaging-invariance property of the incompressible Navier-Stokes (iNS) equations is well documented, there is a need to formally establish the property for the compressible Navier-Stokes (cNS) equations. We put forth two new weighted-moment definitions and derive the averaging-invariant form of the continuity, momentum, and energy equations for a general compressible flow. The averaging-invariant equations have the form of the Favre-averaged Navier-Stokes (FANS) equations and reduce to them in the appropriate limit. Furthermore, we derive the averaging-invariant forms of the evolution equations of various turbulent fluxes encountered in compressible turbulence. This formalization of the averaging-invariance property is expected to contribute toward developing mathematically rigorous RANS-LES hybrid and/or bridging models.

Suman, Sawan; Girimaji, Sharath



It's about time. The mistake we make in this crazy business is to think that the small-time components are what matter the most.  


Few things weave their way through all aspects of what we do like the clock. Make your response times (less than 8 minutes 59 seconds, 90% of the time), have fast hospital turnaround, do two minutes of compressions prior to shock, ventilate if you can't get the tube in 15 seconds, transport people who are shot to the trauma center in less than 10 minutes after you reach them, give your estimated time of arrival to the hospital, count the minutes between contractions, give epi every 5 minutes, show up to work on time, donate time to plan the holiday party, work overtime, take time to relax, make time for your family, and, and, and. It's time we had a frank talk about the stress, myths, tricks and traps of the most common unit of measurement in EMS. PMID:18814741

Taigman, Mike



Nonlocal thermodynamic equilibrium self-consistent average-atom model for plasma physics.  


A time-dependent collisional-radiative average-atom model is presented to study statistical properties of highly charged ion plasmas in off-equilibrium conditions. The time evolution of electron populations and the electron covariance matrix is obtained as approximate solutions of a master equation. Atomic structure is described either with a screened-hydrogenic model including l splitting, or by calculating one-electron states in a self-consistent average-atom potential. Collisional and radiative excitation/deexcitation and ionization/recombination rates, as well as autoionization and dielectronic recombination rates, are formulated within the average-configuration framework. Local thermodynamic equilibrium is obtained as a specific steady-state solution. The influence of atomic structure and the role of autoionization and dielectronic recombination processes are studied by calculating steady-state average ionization and ionization variance of hot plasmas with or without radiation field. PMID:11308579

Faussurier, G; Blancard, C; Berthier, E



Orbit-averaged guiding-center Fokker-Planck operator  

Microsoft Academic Search

A general orbit-averaged guiding-center Fokker-Planck operator suitable for the numerical analysis of transport processes in axisymmetric magnetized plasmas is presented. The orbit-averaged guiding-center operator describes transport processes in a three-dimensional guiding-center invariant space: the orbit-averaged magnetic-flux invariant ψ, the minimum-B pitch-angle coordinate ξ₀, and the momentum magnitude p.

A. J. Brizard; J. Decker; Y. Peysson; F.-X. Duthoit



An Excel macro for transformed and weighted averaging  

Microsoft Academic Search

An Excel macro is presented for averaging spreadsheet data. The macro has several special features: (1) The data are weighted by the inverse variance of each datum to decrease the contribution of noisy outliers. (2) There is a provision for a power or a log transform of the data before averaging. The rationale for transforming the data before averaging is discussed
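
The two features of the macro reduce to short formulas. The sketch below is an assumption of how they combine (in particular, reusing the raw-scale variances on the transformed scale is a simplification, not necessarily what the macro does):

```python
import math

def weighted_average(values, variances):
    """Feature 1: inverse-variance weighted mean. Each datum is weighted
    by 1/variance, shrinking the contribution of noisy outliers."""
    weights = [1.0 / v for v in variances]
    return sum(w * x for w, x in zip(weights, values)) / sum(weights)

def transformed_weighted_average(values, variances,
                                 transform=math.log, inverse=math.exp):
    """Feature 2: apply a transform (e.g. log) before averaging, then map
    the result back. With a log transform and equal weights this yields
    the geometric mean."""
    transformed = [transform(x) for x in values]
    return inverse(weighted_average(transformed, variances))
```

For example, two equally noisy data 0 and 10 with variances 1 and 4 average to 2.0 rather than 5.0, because the noisier datum is down-weighted fourfold.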

Stanley A. Klein



Orbit-averaged guiding-center Fokker-Planck operator  

SciTech Connect

A general orbit-averaged guiding-center Fokker-Planck operator suitable for the numerical analysis of transport processes in axisymmetric magnetized plasmas is presented. The orbit-averaged guiding-center operator describes transport processes in a three-dimensional guiding-center invariant space: the orbit-averaged magnetic-flux invariant ψ, the minimum-B pitch-angle coordinate ξ₀, and the momentum magnitude p.

Brizard, A. J. [Department of Chemistry and Physics, Saint Michael's College, Colchester, Vermont 05439 (United States)]; Decker, J.; Peysson, Y.; Duthoit, F.-X. [CEA, IRFM, Saint-Paul-lez-Durance F-13108 (France)]



Average Acceleration of the Earth's Rotation and the Viscosity of the Deep Mantle  

Microsoft Academic Search

The acceleration of the earth's rotation averaged over the past 3000 years has been restudied using a new analysis of the ancient eclipses. The relaxation time for a second-order harmonic distortion of the earth has been calculated to fall in the range 870-1600 years. This short relaxation time seems to require a viscosity for the deep mantle of ~10 =

R. H. Dicke



7 CFR 701.117 - Average adjusted gross income limitation.  

Code of Federal Regulations, 2013 CFR




Average g-Factors of Anisotropic Polycrystalline Samples  

SciTech Connect

Due to the lack of suitable single crystals, the average g-factors of anisotropic polycrystalline samples are commonly estimated from either the Curie-Weiss susceptibility or the saturation magnetization. We show that the average g-factor obtained from the Curie constant is always greater than or equal to the average g-factor obtained from the saturation magnetization. The average g-factors are equal only for a single crystal or an isotropic polycrystal. We review experimental results for several compounds containing the anisotropic cation [Fe(C5Me5)2]+ and propose an experiment to test this inequality using a compound with a spinless anion.
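
The inequality in this record can be illustrated numerically: the Curie constant scales with g², so its powder average is a root-mean-square, while the saturation magnetization averages g linearly, and RMS ≥ mean with equality only in the isotropic case. The simplified powder-averaging formulas below are illustrative assumptions, not the paper's full treatment:

```python
import math

def g_from_curie(gs):
    """Average g inferred from the Curie constant, which scales with g^2:
    the root-mean-square of the principal g values (simplified)."""
    return math.sqrt(sum(g * g for g in gs) / len(gs))

def g_from_saturation(gs):
    """Average g inferred from the saturation magnetization, which scales
    linearly with g: the arithmetic mean (simplified)."""
    return sum(gs) / len(gs)
```

For an anisotropic set such as g = (2, 4) the Curie estimate exceeds the saturation estimate; for an isotropic set the two coincide, matching the equality condition stated in the record.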

Fishman, Randy Scott [ORNL]; Miller, Joel S. [University of Utah]



Implementation of a semi-implicit orbit-averaged gyrokinetic particle code  

Microsoft Academic Search

A gyrokinetic particle simulation code devised for investigating self-consistent phenomena in a strongly magnetized plasma is used to implement a semi-implicit orbit-averaged time-integration algorithm whose semi-implicit aspect relaxes the timestep constraints required to ensure numerical stability, while its orbit averaging serves to relax the statistical constraints for kinetic simulation. This algorithm should be more efficient than particle-in-cell plasma simulation algorithms

Bruce I. Cohen; Timothy J. Williams



Comparison of the prevalence index and average wetland values for identification of wetland vegetation  

SciTech Connect

Prevalence index values (PIVs; FICWD, 1989) and average wetland values (AWVs) for all species present were compared for three wetland gas pipeline rights-of-way (ROWs) and adjacent natural areas. The similarities in results using these two indicator values suggest that an average wetland value may offer a simpler, less time-consuming method of evaluating the vegetation of a study site as an indication of wetness. Both PIVs and AWVs are presented for the ROWs and the adjacent natural area at each site.
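
The contrast in this record is between an abundance-weighted and an unweighted mean of species wetland indicator values. The sketch below is a simplified stand-in for the FICWD (1989) prevalence index, whose exact weighting scheme is not reproduced here:

```python
def prevalence_index(abundances, indicator_values):
    """Abundance-weighted mean of species wetland indicator values
    (a simplified stand-in for the FICWD 1989 prevalence index)."""
    total = sum(abundances)
    return sum(a * v for a, v in zip(abundances, indicator_values)) / total

def average_wetland_value(indicator_values):
    """Unweighted mean over all species present -- the simpler, less
    time-consuming statistic the record compares against."""
    return sum(indicator_values) / len(indicator_values)
```

The unweighted value needs only a species list, not cover estimates, which is why the record calls it less time-consuming to obtain.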

Zimmerman, R.E.; Shem, L.M.; Gowdy, M.J. (Argonne National Lab., IL (United States)); Van Dyke, G.D. (Trinity Christian Coll., Palos Heights, IL (United States)); Hackney, C.T. (North Carolina Univ., Wilmington, NC (United States))




A numerical model of the zonally averaged dynamical and chemical structure of the middle atmosphere  

Microsoft Academic Search

A two-dimensional, time-dependent model has been constructed to study the zonally averaged structure of the middle atmosphere (16-116 km) allowing interaction among dynamics, radiation and photochemistry. The zonally averaged dynamics are governed by a stream function equation for the residual Eulerian meridional circulation wherein the effects of wave transience and dissipation have been neglected. The resulting circulation is thus driven

Rolando R. Garcia; Susan Solomon



Definitions and Applications of Dynamic Average Models for Analysis of Power Systems  

Microsoft Academic Search

Modeling and analysis of power and power-electronic systems' transients using digital programs enables testing new design concepts in modern electric grid and many industrial and commercial products and applications. This paper gives an overview of dynamic average-value modeling techniques for representing static switching converters for the system-level studies. Concepts and desirable properties of averaged models for conducting large-signal time-domain transient

S. Chiniforoosh; J. Jatskevich; A. Yazdani; V. Sood; V. Dinavahi; J. A. Martinez; A. Ramirez



The average common substring approach to phylogenomic reconstruction.  


We describe a novel method for efficient reconstruction of phylogenetic trees, based on sequences of whole genomes or proteomes, whose lengths may vary greatly. The core of our method is a new measure of pairwise distances between sequences. This measure is based on computing the average lengths of maximum common substrings, which is intrinsically related to information-theoretic tools (Kullback-Leibler relative entropy). We present an algorithm for efficiently computing these distances. In principle, the distance between two sequences of length l can be calculated in O(l) time. We implemented the algorithm using suffix arrays; our implementation is fast enough to enable the construction of the proteome phylogenomic tree for hundreds of species and the genome phylogenomic forest for almost two thousand viruses. An initial analysis of the results exhibits a remarkable agreement with "acceptable phylogenetic and taxonomic truth." To assess our approach, our results were compared to the traditional (single-gene or protein-based) maximum likelihood method. The obtained trees were compared to implementations of a number of alternative approaches, including two that were previously published in the literature, and to the published results of a third approach. Comparing their outcome and
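
The average-common-substring statistic at the heart of this record can be prototyped naively; the function below is a sketch of the statistic only, not the suffix-array algorithm or the full distance (which adds a normalization the record does not spell out):

```python
def average_common_substring(x, y):
    """For every start position i in x, find the length of the longest
    prefix of x[i:] that occurs somewhere in y, and average those lengths.
    This is the naive, quadratic form of the statistic; the paper's
    suffix-array implementation reaches the linear-time behaviour
    mentioned in the record.
    """
    total = 0
    for i in range(len(x)):
        k = 0
        while i + k < len(x) and x[i:i + k + 1] in y:
            k += 1
        total += k
    return total / len(x)
```

Longer shared substrings push the average up, so more closely related sequences score higher, and the score is inverted/normalized to obtain a distance.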