Turnaround Time Modeling for Conceptual Rocket Engines
NASA Technical Reports Server (NTRS)
Nix, Michael; Staton, Eric J.
2004-01-01
Recent years have brought about a paradigm shift within NASA and the Space Launch Community regarding the performance of conceptual design. Reliability, maintainability, supportability, and operability are no longer effects of design; they have moved to the forefront and are affecting design. A primary focus of this shift has been a planned decrease in vehicle turnaround time. Potential means of instituting this decrease include attacking the issues of removing, refurbishing, and replacing the engines after each flight. Regardless, it is important to understand the operational effects of an engine on turnaround time, ground support personnel, and equipment. One tool for visualizing this relationship is a Discrete Event Simulation (DES). A DES model can be used to run a series of trade studies to determine whether the engine is meeting its requirements and, if not, what can be altered to bring it into compliance. Using DES, it is possible to examine how labor requirements, parallel versus serial maintenance, and maintenance scheduling affect the overall turnaround time. A detailed DES model of the Space Shuttle Main Engines (SSME) has been developed. Trades may be performed using the SSME Processing Model to see where maintenance bottlenecks occur and what the benefits (if any) are of increasing the number of personnel or the number and location of facilities, in addition to the trades previously mentioned, all with the goal of optimizing operational turnaround time and minimizing operational cost. The SSME Processing Model was developed in such a way that it can easily serve as a foundation for DES models of other operational or developmental reusable engines. Performing a DES on a developmental engine during the conceptual phase makes it easier to influence the design and make changes that decrease turnaround time and costs.
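The serial-versus-parallel maintenance trade that such a DES explores can be illustrated with a toy scheduler. This is only a minimal sketch, not the SSME Processing Model itself: the three-engine task list, the 30-hour durations, and the `turnaround` function are invented for illustration.

```python
import heapq

def turnaround(tasks, crews):
    """Makespan of independent maintenance tasks (hours) processed by a
    limited crew pool, assigning each task to the earliest-free crew."""
    free_at = [0.0] * crews              # time at which each crew is free
    heapq.heapify(free_at)
    finish = 0.0
    for duration in tasks:
        start = heapq.heappop(free_at)   # earliest-available crew
        end = start + duration
        finish = max(finish, end)
        heapq.heappush(free_at, end)
    return finish

# Three engines, one remove-refurbish-reinstall task each;
# the 30-hour duration is purely illustrative.
engine_tasks = [30.0, 30.0, 30.0]
print(turnaround(engine_tasks, crews=1))  # serial maintenance: 90.0
print(turnaround(engine_tasks, crews=3))  # parallel maintenance: 30.0
```

A full DES would add stochastic task durations and facility constraints, but even this greedy list scheduler exposes the labor-versus-turnaround trade the abstract describes.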
Turnaround Time and Market Capacity in Contract Cheating
ERIC Educational Resources Information Center
Wallace, Melisa J.; Newton, Philip M.
2014-01-01
Contract cheating is the process whereby students auction off the opportunity for others to complete assignments for them. It is an apparently widespread yet under-researched problem. One suggested strategy to prevent contract cheating is to shorten the turnaround time between the release of assignment details and the submission date, thus making…
Interlibrary Loan Turnaround Times in Science and Engineering.
ERIC Educational Resources Information Center
Horton, Weldon, Jr.
1989-01-01
Describes the use of fixed point analysis procedures at King Fahd University of Petroleum and Minerals to determine as narrow a range as possible of interlibrary loan turnaround times in science and engineering subjects. The findings are discussed in terms of the complexity of interlibrary loan factors and items determined as relevant for further…
Factors that impact turnaround time of surgical pathology specimens in an academic institution.
Patel, Samip; Smith, Jennifer B; Kurbatova, Ekaterina; Guarner, Jeannette
2012-09-01
Turnaround time of laboratory results is important for customer satisfaction. The College of American Pathologists' checklist requires an analytic turnaround time of 2 days or less for most routine cases and lets every hospital define what a routine specimen is. The objective of this study was to analyze which factors impact the turnaround time of nonbiopsy surgical pathology specimens. We calculated the turnaround time from receipt to verification of results (adjusted for weekends and holidays) for all nonbiopsy surgical specimens during a 2-week period. Factors studied included tissue type, number of slides per case, decalcification, immunohistochemistry, consultations with other pathologists, and diagnosis. Univariate and multivariate analyses were performed. A total of 713 specimens were analyzed; 551 (77%) were verified within 2 days and 162 (23%) in 3 days or more. Lung, gastrointestinal, breast, and genitourinary specimens showed the highest percentages of cases signed out in 3 days or more. Diagnosis of malignancy (including staging of the neoplasia), consultation with other pathologists, having had a frozen section, and use of immunohistochemical stains were significantly associated with increased turnaround time in univariate analysis. Decalcification was not associated with increased turnaround time. In multivariate analysis, consultation with other pathologists, use of immunohistochemistry, diagnosis of malignancy, and the number of slides studied continued to be significantly associated with prolonged turnaround time. Our findings suggest that diagnosis of malignancy is central to prolonging the turnaround time of surgical pathology specimens; thus, institutions that serve cancer centers will have longer turnaround times than those that do not. PMID:22406373
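The study's turnaround metric, "adjusted for weekends and holidays", can be made concrete with a small helper. This is a hedged sketch of one plausible way to compute such a metric; the function name, holiday handling, and example dates are assumptions, not taken from the paper.

```python
from datetime import date, timedelta

def working_day_turnaround(received, verified, holidays=()):
    """Working days from specimen receipt to verification, skipping
    weekends and any supplied holidays: one way a turnaround time
    'adjusted for weekends and holidays' might be computed."""
    if verified < received:
        raise ValueError("verification precedes receipt")
    days, d = 0, received
    while d < verified:
        d += timedelta(days=1)
        if d.weekday() < 5 and d not in holidays:  # Mon-Fri, not a holiday
            days += 1
    return days

# Received on a Friday, verified the following Monday: 1 working day.
print(working_day_turnaround(date(2012, 9, 7), date(2012, 9, 10)))  # 1
```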
24 CFR 901.10 - Indicator #1, vacancy rate and unit turnaround time.
Code of Federal Regulations, 2010 CFR
2010-04-01
...of the adjusted vacancy rate and turnaround time...calculating the adjusted vacancy rate or, if necessary...conditions. (iv) Natural disasters. (v) Insufficient...use the actual vacancy rate, the adjusted...
Reducing the turnaround time for the Shuttle Orbiter main propulsion system at Kennedy Space Center
NASA Technical Reports Server (NTRS)
Bilardo, V. J., Jr.
1983-01-01
This paper presents steps currently being implemented to reduce the processing time of the Orbiter Main Propulsion System. Given the projected increase in launch rates of the Space Transportation System (STS) in the operational era, average turnaround time from arrival at Kennedy Space Center to subsequent launch will have to be significantly reduced. In many STS flows to date, a pacing system has been the Main Propulsion System consisting of the three Space Shuttle Main Engines (SSME's) and the Main Propulsion Subsystem (MPS) connecting the SSME's to the Orbiter/ground and Orbiter/External Tank interfaces. This paper summarizes procedural, hardware, software, and requirements changes being incorporated at Kennedy Space Center (KSC) to reduce the processing time required for the MPS/SSME systems. Specific examples from each category are cited to illustrate the impact of the improvements on MPS/SSME processing.
A strategy for reducing turnaround time in design optimization using a distributed computer system
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Padula, Sharon L.; Rogers, James L.
1988-01-01
There is a need to explore methods for reducing the lengthy computer turnaround (clock) time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means of reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.
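The strategy of farming decomposed, independent analysis portions out in parallel can be sketched in miniature. The `analyze` stand-in and pool size below are invented; a thread pool stands in for the paper's network of smaller computers.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(case):
    """Stand-in for one independent portion of a decomposed analysis
    (e.g., one load case of a structural optimization)."""
    return sum(i * i for i in range(case * 1000))

def run_serial(cases):
    return [analyze(c) for c in cases]

def run_parallel(cases, workers=4):
    # The paper ran portions on a network of smaller computers; a
    # thread pool stands in for that pool of machines here (real
    # CPU-bound analyses would use processes or separate hosts).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyze, cases))

print(run_serial([1, 2, 3]) == run_parallel([1, 2, 3]))  # True
```

The key property exploited is that the decomposed analyses share no state, so results are identical whether run serially or in parallel; only the wall-clock turnaround changes.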
2011-01-01
Background Assessing turnaround times can help to analyse workflows in hospital information systems. This paper presents a systematic review of the literature concerning different turnaround time definitions. Our objectives were to collect the relevant literature on such process times in hospitals and their respective domains, then to analyse the existing definitions and summarise them in an appropriate format. Methods Our search strategy was based on Pubmed queries and manual reviews of the bibliographies of retrieved articles. Studies were included if precise definitions of turnaround times were available. A generic timeline was designed through a consensus process to provide an overview of these definitions. Results More than 1000 articles were analysed, resulting in 122 included papers, from which 162 turnaround time definitions in different clinical domains were identified. Starting and end points vary between these domains. To illustrate these turnaround time definitions, a generic timeline was constructed using preferred terms derived from the identified definitions. The consensus process resulted in the following 15 terms: admission, order, biopsy/examination, receipt of specimen in laboratory, procedure completion, interpretation, dictation, transcription, verification, report available, delivery, physician views report, treatment, discharge and discharge letter sent. Based on this analysis, several standard terms for turnaround time definitions are proposed. Conclusion Using turnaround times to benchmark clinical workflows is still difficult, because even within the same clinical domain many different definitions exist. Mapping turnaround time definitions to a generic timeline is feasible. PMID:21609424
An automated resource management system to improve production tapeout turn-around time
NASA Astrophysics Data System (ADS)
Guo, Eric; Liu, Qingwei; Zhu, Sherry; Wu, Jason; Tsai, Jenny; Lu, Junwei; Simmons, Mark C.
2013-03-01
In today's semiconductor industry, both the pure-play and independent device manufacturer (IDM) foundries are constantly and rigorously competing for market share. The acknowledged benefit for customers who partner with these foundries includes a reduced cost-of-ownership, along with the underwritten agreement of meeting or exceeding an aggressive time-to-market schedule. Because the Semiconductor Manufacturing International Corporation (SMIC) is one of the worldwide forerunners in the foundry industry, one of its primary concerns is ensuring continual improvement in its fabs' turnaround time (TAT), especially given that newer technology nodes and their associated processes are increasing in complexity, and consequently, in their time-to-process. In assessing current runtime data trends at the 65nm and 40nm technology nodes, it was hypothesized that hardware and software utilization improvements could achieve a reduced overall TAT. By running an experiment using the Mentor Graphics Calibre® Cluster Manager (CalCM) software, SMIC was able to demonstrate just over a 30% aggregate TAT improvement in conjunction with a greater than 90% average utilization of all hardware resources. This paper describes the experimental setup and procedures that produced the reported results.
Integrated post tape outflow for fast design to mask turn-around time
Chi-Yuan Hung; Qingwei Liu; Liguo Zhang; Shumay Shang; George E. Bailey; Andrew Jost; Travis Brist
2005-01-01
SMIC is a pure-play IC foundry, and in foundry culture turnaround time is fabs' foremost concern; aggressive tape-out schedules require a significant reduction in the GDS-to-mask flow runtime. The objective of this work is therefore to evaluate an OPC methodology and an integrated mask data preparation flow for runtime performance via the so-called 1-IO-tape-out platform. By…
Sputum smear microscopy referral rates and turnaround time in the Tonga Islands
Bissell, K.; Vivili, P.; Gounder, S.; Hill, P. C.
2014-01-01
Setting: The National Tuberculosis Programme and the National Reference Laboratory, located on the main island of Tonga, Tongatapu, and three district hospital laboratories located on the outer islands. Objectives: To compare Tongatapu with Tonga's outer islands with respect to sputum referral rates, numbers of samples per patient, sample quality, test results and time from sending sample to obtaining results and from obtaining results to treatment initiation. Design: Retrospective study involving record review of laboratory and TB treatment registers in Tonga's four hospitals from 2003 to 2012. Results: Of 3078 sputum samples submitted, 71.7% were of good quality. Sputum referral rates on Tongatapu were nearly twice as high as those on the outer islands (353 vs. 180 per 100 000 population). The mean smear turnaround times on Tongatapu and the outer islands were respectively 4.02 and 4.11 days. Of 83 positive cases, 91.2% were treated within a day on Tongatapu compared with 80% on the outer islands. Conclusions: Referral rates for sputum smear testing differed between the main and the outer islands in Tonga, but turnaround times did not. Records of sputum quality and dates had limitations, necessitating interventions with TB-specific laboratory guidelines and registers. Further research is required to understand the differences in referral rates.
Insertable system for fast turnaround time microwave experiments in a dilution refrigerator
NASA Astrophysics Data System (ADS)
Ong, Florian R.; Orgiazzi, Jean-Luc; de Waard, Arlette; Frossati, Giorgio; Lupascu, Adrian
2012-09-01
Microwave experiments in dilution refrigerators are a central tool in the field of superconducting quantum circuits and other research areas. Until now, this type of experiment has relied on attaching a device to the mixing chamber of a dilution refrigerator. The minimum turnaround time in this case is a few days, as required by cooling down and warming up the entire refrigerator. We developed a new approach, in which a suitable sample holder is attached to a cold-insertable probe and brought in contact with transmission lines permanently mounted inside the cryostat. The total turnaround time is 8 h if the target temperature is 80 mK. The lowest attainable temperature is 30 mK. Our system can accommodate up to six transmission lines, with a measurement bandwidth tested from zero frequency to 12 GHz. This bandwidth is limited by low-pass components in the setup; we expect the intrinsic bandwidth to be at least 18 GHz. We present our setup, discuss the experimental procedure, and give examples of experiments enabled by this system. This new measurement method will have a major impact on systematic ultra-low temperature studies using microwave signals, including those requiring quantum coherence.
PMID:23020391
Method for assessment of laboratory turnaround times: comparison before, during, and after analysis.
Smellie, W S; Johnston, J; Galloway, P J
1994-01-01
AIMS--To establish a mechanism to examine the components of turnaround time in a representative cross-section of laboratory users; and to identify potential areas for improvement. METHODS--Information was collected manually from result reports received by eight laboratory users: three wards in the main hospital, four GP practices, and one local psychiatric hospital. This was combined with data from the departmental computer files to create a spreadsheet detailing different time points in the processing of a specimen, from venepuncture to receipt of result report. RESULTS--At the main hospital, 80% of samples arrived within two hours of venesection and 95% by four hours; 75% of samples were analysed within two hours; 85% of results arrived in the wards within six hours of printing, although 12% took more than 18 hours to arrive; median overall time six hours. At the satellite (psychiatric) hospital, all samples arrived within seven hours of venesection; 45% were analysed within two hours--the rest the following morning; there were highly variable post-analytical times, minimum 18 hours, maximum 122 hours; the median overall time was 69 hours. Twenty five per cent of samples from GPs took more than 20 hours to arrive; 75% were analysed within two hours, the rest took over 18 hours--waiting overnight; the post-analytical times were highly variable, minimum 22 hours, maximum 122 hours; the median overall time was 50 hours. CONCLUSIONS--The method is easily repeatable and demonstrates the need for local improvement in the post-analytical period. Although specific to the individual data handling system for one laboratory, this method may be used as a basis for other laboratories in pathology disciplines to undertake a representative assessment of turnaround times for different groups of laboratory users. PMID:8089210
Deo, Sarang; Crea, Lindy; Quevedo, Jorge; Lehe, Jonathan; Vojnov, Lara; Peter, Trevor; Jani, Ilesh
2015-09-01
The objective of this study was to quantify the impact of a new technology for communicating the results of an infant HIV diagnostic test on test turnaround time, and to quantify the association between late delivery of test results and patient loss to follow-up. We used data collected during a pilot implementation of General Packet Radio Service (GPRS) printers for communicating results in the early infant diagnosis program in Mozambique from 2008 through 2010. Our dataset comprised 1757 patient records, of which 767 were from before and 990 from after implementation of the expedited results delivery system. We used a multivariate logistic regression model to determine the association between late result delivery (more than 30 days between sample collection and result delivery to the health facility) and the probability of result collection by the infant's caregiver. We used a sample selection model to determine the association between late result delivery to the facility and further delay in collection of results by the caregiver. The mean test turnaround time fell from 68.13 to 41.05 days after implementation of the expedited results delivery system. Caregivers collected only 665 (37.8%) of the 1757 results. After controlling for confounders, late delivery of results was associated with a reduction of approximately 18% (0.44 vs. 0.36; P < 0.01) in the probability of results being collected by the caregivers (odds ratio = 0.67, P < 0.05). Late delivery of results was also associated with a further average increase of 20.91 days in the delay before collection of results (P < 0.01). Early infant diagnosis program managers should further evaluate the cost-effectiveness of operational interventions (eg, GPRS printers) that reduce delays. PMID:26068719
Turnaround Prediction with Stochastic Process Times and Airport Specific Delay Pattern
Oreschko, Bernd
…for highly reliable turnaround modeling and proof of concept with specific airport delays. Operators and manufacturers derive total turnaround time (TTT) from deterministic sub-process durations (de-boarding, fueling, etc.) and their simple addition. This is not an accurate representation of reality, because each of these sub…
Preparing printed circuit boards for rapid turn-around time on a plotter
Hawtree, J.
1998-01-01
This document describes the use of the LPKF ProtoMat mill/drill circuit board plotter with the associated CAD/CAM software BoardMaster and CircuitCAM. At present its primary use here at Fermilab's Particle Physics Department is the rapid turnaround of prototype double-sided and single-sided copper-clad printed circuit boards (PCBs). (The plotter is also capable of producing gravure films and engraving aluminum or plastic, although we have not used it for this.) It can make traces 0.004 inch wide with 0.004 inch spacings, which is appropriate for high-density surface-mount circuits as well as other through-mounted discrete and integrated components. One of the primary benefits of the plotter is the capability to produce double-sided drilled boards from CAD files in a few hours. However, to achieve this rapid turnaround time, some care must be taken in preparing the files. This document describes how to optimize the process of PCB fabrication. With proper preparation, researchers can often have a completed circuit board in a day's time instead of the week or two wait of the usual procedures. It is assumed that the software and hardware are properly installed and that the machinist is acquainted with the Win95 operating system and the basics of the associated software. This paper does not describe its use with pen plotters, lasers, or rubouts. The process of creating a PCB begins with the CAD (computer-aided design) software, usually PCAD or VeriBest. These files are then moved to CAM (computer-aided machining) software, where they are edited and converted into the proper format for running on the ProtoMat plotter. The plotter then performs the actual machining of the board. This document concentrates on the LPKF programs CircuitCAM BASIS and BoardMaster for the CAM step. These programs run on a Windows 95 platform driving an LPKF ProtoMat 93s plotter.
Flexible time domain averaging technique
NASA Astrophysics Data System (ADS)
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter: it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves calculation efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation, and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
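For contrast with the FTDA described above, conventional time domain averaging, the comb-filter baseline the paper starts from, can be sketched in a few lines. The signal, period, and noise level are illustrative assumptions; this is the baseline method, not the paper's FTDA.

```python
import math, random

def time_domain_average(signal, period):
    """Classic TDA: split the signal into whole periods and average
    them sample-by-sample; components not synchronous with the chosen
    period are attenuated (comb-filter behaviour)."""
    m = len(signal) // period               # number of complete periods
    return [sum(signal[k * period + n] for k in range(m)) / m
            for n in range(period)]

random.seed(0)
period = 8
clean = [math.sin(2 * math.pi * n / period) for n in range(period)]
# 200 revolutions of the clean waveform buried in zero-mean noise.
noisy = [clean[n % period] + random.uniform(-1.0, 1.0)
         for n in range(period * 200)]
avg = time_domain_average(noisy, period)
err = max(abs(a - c) for a, c in zip(avg, clean))
print(err < 0.3)  # averaging 200 periods strongly suppresses the noise
```

Note the assumption of an exactly known integer period; the PCE discussed in the abstract arises precisely when the true period does not align with the sampling, which is the error the FTDA is designed to remove.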
Kallen, Michael E; Sim, Myung S; Radosavcev, Bryan L; Humphries, Romney M; Ward, Dawn C; Apple, Sophia K
2015-10-01
Breast pathology relies on gross dissection for accurate diagnostic work, but challenges can necessitate submission of high tissue volumes resulting in excess labor, laboratory costs, and delays. To address these issues, a quality initiative was created through implementation of the Faxitron PathVision specimen radiography system as part of the breast gross dissection protocol; this report documents its impact on workflow and clinical care. Retrospective data from 459 patients who underwent simple or modified radical mastectomy at our institution between May 2012 and December 2014 were collected. Comparison was made between the mastectomy specimen control group before radiography use (233 patients, 340 breasts) and the Faxitron group that underwent postoperative radiography (226 patients, 338 breasts). We observed a statistically significant decrease in mean number of blocks between control and Faxitron groups (47.0 vs 39.7 blocks; P<.0001), for calculated cost savings of US $146 per mastectomy. A statistically significant decrease in pathology report turnaround time was also observed (4.2 vs 3.8 days; P=.038). Postoperative mastectomy specimen radiography has increased workflow efficiency and decreased histology costs and pathology report turnaround time. These findings may underestimate actual benefits and highlight the importance of quality improvement projects in anatomical pathology. PMID:26277043
Turnaround in Cyclic Cosmology
Baum, Lauris; Frampton, Paul H. [University of North Carolina, Chapel Hill, North Carolina 27599-3255 (United States)
2007-02-16
It is speculated how dark energy in a brane world can help reconcile an infinitely cyclic cosmology with the second law of thermodynamics. A cyclic model is described in which dark energy with a w<-1 equation of state leads to a turnaround at a time extremely shortly before the would-be big rip, at which both the volume and entropy of our Universe decrease by a gigantic factor, while very many similarly small independent contracting universes are spawned. The entropy of our model decreases almost to zero at turnaround, then increases for the remainder of the cycle: by a vanishingly small amount during the matter-empty contraction, and by a large factor during inflationary expansion.
NASA Astrophysics Data System (ADS)
Yamazaki, Kenji; Yamaguchi, Hiroshi
2015-06-01
The three-dimensional (3D) electron beam (EB) lithography technique devised by the authors, which uses EB writing on a sample from various directions, was renovated. Although the original technique was effective for 3D nanofabrication in hard materials, it had two disadvantages: a long turnaround time (TAT) and 3D alignment accuracy insufficient for its high resolution. The renovated technique uses new methods of focusing and fine rotation adjustment in an EB writer with higher positioning accuracy, in addition to a rotation drive with higher rotation accuracy than the previous apparatus. These have enabled us to significantly reduce the TAT and obtain sufficiently high 3D alignment accuracy. Moreover, 3D resist coating on all four side faces of a square microblock formed on a substrate, which is very useful for 3D nanofabrication, is demonstrated. The renovation is accelerating the technique's application to various 3D nanodevices.
Random time averaged diffusivities for Lévy walks
NASA Astrophysics Data System (ADS)
Froemberg, D.; Barkai, E.
2013-07-01
We investigate a Lévy walk alternating between velocities ±v0 of opposite sign. The sojourn time probability distribution at large times is a power law lacking either its mean or its second moment. The first case corresponds to a ballistic regime where the ensemble averaged mean squared displacement (MSD) at large times is ⟨x²⟩ ∼ t²; the latter corresponds to enhanced diffusion with ⟨x²⟩ ∼ t^ν, 1 < ν < 2. The correlation function and the time averaged MSD are calculated. In the ballistic case, the deviations of the time averaged MSD from purely ballistic behavior are shown to be distributed according to a Mittag-Leffler density function. In the enhanced diffusion regime, the fluctuations of the time averaged MSD vanish at large times, yet very slowly. In both cases we quantify the discrepancy between the time averaged and ensemble averaged MSDs.
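A Lévy walk of the kind described, with alternating velocities and power-law sojourn times, can be simulated directly. The parameter values (v0, alpha, t0) and sample counts below are illustrative assumptions; this sketch only demonstrates an ensemble averaged MSD estimate, not the paper's analytical results.

```python
import random

def levy_walk_position(T, v0=1.0, alpha=1.5, t0=1.0, rng=random):
    """Position at time T of a Lévy walk alternating between +v0 and
    -v0, with sojourn times drawn from a Pareto law psi(t) ~ t^(-1-alpha);
    for 1 < alpha < 2 the mean exists but the second moment diverges
    (the enhanced-diffusion case of the abstract)."""
    x, t = 0.0, 0.0
    v = v0 if rng.random() < 0.5 else -v0
    while t < T:
        u = 1.0 - rng.random()              # uniform in (0, 1]
        tau = t0 * u ** (-1.0 / alpha)      # Pareto sojourn time, >= t0
        if tau >= T - t:
            x += v * (T - t)                # truncate the final leg at T
            break
        x += v * tau
        t += tau
        v = -v                              # velocity flips sign
    return x

random.seed(1)
T = 1000.0
samples = [levy_walk_position(T) for _ in range(500)]
msd = sum(x * x for x in samples) / len(samples)
print(0.0 < msd <= T * T)  # speed is v0, so |x(T)| can never exceed v0*T
```

Estimating the growth exponent ν from such samples requires MSDs at several values of T and a log-log fit; a single ensemble at one T, as here, only illustrates the sampling procedure.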
Mehra, Bhanu; Rawat, Deepti; Saxena, Shikhar
2015-01-01
Background Timeliness of reporting is of utmost importance to limit the spread of syphilis. The present analysis was undertaken to evaluate the turnaround time of syphilis testing (mainly the Venereal Disease Research Laboratory, or VDRL, test) in a sexually transmitted infections (STI) clinic in India; to find out the possible reasons for delay; to describe the trends of clinical indications for syphilis testing in an STI clinic; to assess the frequency of a positive syphilis serology among STI clinic attendees; and to analyse the follow-up rates of VDRL report collection. Materials and Methods Two hundred consecutive VDRL requests received at the serology laboratory of a tertiary care health facility from the STI clinic of the linked hospital were prospectively analysed to evaluate the above parameters. Results For the 200 requests audited, the mean absolute turnaround time of the VDRL test was 7.46±2.81 days. The mean durations of the pre-laboratory, laboratory and post-laboratory phases were 0, 4.69±2.13 and 2.77±2.51 days respectively. The interval from specimen receipt to performance of tests (mean duration=4.25±1.96 days) was the major contributor to the long VDRL turnaround time. The common indications for syphilis testing in STI clinic attendees were lower abdominal pain (33%), vaginal discharge (26.5%) and genital ulcer disease (9%); and the follow-up rate for report collection was 71%. Conclusion Our study highlights the strong need to shift to alternative testing methods, mainly rapid point-of-care procedures, for the serodiagnosis of syphilis in order to circumvent the problems of long turnaround time and low patient follow-up rates.
Time averaging of instantaneous quantities in HYDRA
McCallen, R.C.
1996-09-01
For turbulent flow, the evaluation of direct numerical simulations (DNS), where all scales are resolved, and large-eddy simulations (LES), where only large scales are resolved, is difficult because the results are three-dimensional and transient. To simplify the analysis, the instantaneous flow field can be averaged in time for evaluation and comparison to experimental results. The incompressible Navier-Stokes flow code HYDRA has been modified to calculate time-averaged quantities for both DNS and LES. This report describes how time averages of instantaneous quantities are generated during program execution (i.e., while generating the instantaneous quantities, instead of as a postprocessing operation). The calculations are performed during program execution to avoid storing values at each time step and thus to reduce storage requirements. The methods used in calculating the time-averaged velocities, the turbulent intensities ⟨u′²⟩, ⟨v′²⟩, and ⟨w′²⟩, and the turbulent shear ⟨u′v′⟩ are outlined; the brackets ⟨⟩ used here represent a time average. The described averaging methods were implemented in the HYDRA code for three-dimensional problem solutions. Also presented is a method for taking the time averages of a number of consecutive intervals and calculating the time average for the sum of the intervals. This method could be used for code restarts or further postprocessing of the time averages from consecutive intervals. It was not used in the HYDRA implementation, but is included here for completeness. In HYDRA, the running sums needed for time averaging are simply written to the restart dump.
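The running-sum approach, accumulating time averages during execution and merging sums from consecutive intervals (e.g., across code restarts), can be sketched as follows. The class and its names are invented for illustration; this is not HYDRA's actual code.

```python
class RunningTimeAverage:
    """Accumulate time averages of a fluctuating quantity during the
    run itself, instead of storing every time step. Sums are weighted
    by the step size dt so variable time steps are handled correctly."""
    def __init__(self):
        self.t = 0.0        # total averaged time
        self.su = 0.0       # integral of u dt
        self.suu = 0.0      # integral of u*u dt
    def add(self, u, dt):
        self.t += dt
        self.su += u * dt
        self.suu += u * u * dt
    def mean(self):
        return self.su / self.t
    def variance(self):
        # <u'^2> = <u^2> - <u>^2
        m = self.mean()
        return self.suu / self.t - m * m
    def merge(self, other):
        # Combine running sums from consecutive intervals (e.g. across
        # restarts) by adding the raw sums before dividing by total time.
        out = RunningTimeAverage()
        out.t = self.t + other.t
        out.su = self.su + other.su
        out.suu = self.suu + other.suu
        return out

a = RunningTimeAverage()
for u in (1.0, 3.0):
    a.add(u, dt=0.5)
print(a.mean(), a.variance())  # 2.0 1.0
```

Only the three running sums need to be written to a restart dump, which is exactly why the abstract notes that storage requirements drop compared with saving every instantaneous field.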
Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.
Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J
2012-08-01
Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors. PMID:22270786
Rapid turnaround scanning probe nanolithography.
Paul, Philip C; Knoll, Armin W; Holzner, Felix; Despont, Michel; Duerig, Urs
2011-07-01
Scanning probe nanolithography (SPL) has demonstrated its potential in a variety of applications like 3D nanopatterning, 'direct development' lithography, dip-pen deposition or patterning of self-assembled monolayers. One of the main issues holding back SPL has been the limited throughput for patterning and imaging. Here we present a complete lithography and metrology system based on thermomechanical writing into organic resists. Metrology is carried out using a thermoelectric topography sensing method. More specifically, we demonstrate a system with a patterning pixel clock of 500 kHz, 20 mm s⁻¹ linear scan speed, a positioning accuracy of 10 nm, a read-back frequency bandwidth of 100,000 line-pairs s⁻¹ and a turnaround time from patterning to qualifying metrology of 1 min. Thus, we demonstrate a nanolithography system capable of implementing rapid turnaround. PMID:21602616
Rapid turnaround scanning probe nanolithography
NASA Astrophysics Data System (ADS)
Paul, Philip C.; Knoll, Armin W.; Holzner, Felix; Despont, Michel; Duerig, Urs
2011-07-01
Scanning probe nanolithography (SPL) has demonstrated its potential in a variety of applications like 3D nanopatterning, 'direct development' lithography, dip-pen deposition or patterning of self-assembled monolayers. One of the main issues holding back SPL has been the limited throughput for patterning and imaging. Here we present a complete lithography and metrology system based on thermomechanical writing into organic resists. Metrology is carried out using a thermoelectric topography sensing method. More specifically, we demonstrate a system with a patterning pixel clock of 500 kHz, 20 mm s⁻¹ linear scan speed, a positioning accuracy of 10 nm, a read-back frequency bandwidth of 100,000 line-pairs s⁻¹ and a turnaround time from patterning to qualifying metrology of 1 min. Thus, we demonstrate a nanolithography system capable of implementing rapid turnaround.
Measuring Time-Averaged Blood Pressure
NASA Technical Reports Server (NTRS)
Rothman, Neil S.
1988-01-01
Device measures time-averaged component of absolute blood pressure in artery. Includes compliant cuff around artery and external monitoring unit. Ceramic construction in monitoring unit suppresses ebb and flow of pressure-transmitting fluid in sensor chamber. Transducer measures only static component of blood pressure.
Hospital turnarounds: agents, approaches, alchemy.
Freed, David H
2005-01-01
Hospital turnarounds are an increasing phenomenon in an era of unrelenting financial pressure on the health care delivery system. However, the literature about hospital turnarounds contains more theories and case reports than research studies, and repeat turnarounds suggest that there is inadequate evidence about what really works. The essential role of the turnaround agent in strategy formulation and especially strategy implementation is the subject of this article. It casts a wide net over the literature of strategy, change, leadership, and management. The article traces the nature and evolution of hospital turnarounds plus the agents that effect them; summarizes the turnaround agent's 10 most potent approaches to strategy analysis and synthesis; and outlines 3 research questions about first, the recurring need for hospital turnarounds altogether; second, longitudinal evidence to support different turnaround approaches; and third, the mechanisms by which the turnaround agent both changes and is changed by the distressed hospital. PMID:15923921
Angeletti, Silvia; Dicuonzo, Giordano; D'Agostino, Alfio; Avola, Alessandra; Crea, Francesca; Palazzo, Carlo; Dedej, Etleva; De Florio, Lucia
2015-08-01
A comparative evaluation of the turnaround time (TAT) of positive blood cultures before and after the introduction of matrix-assisted laser desorption-ionization time-of-flight mass spectrometry (MALDI-TOF MS) into the laboratory routine was performed. A total of 643 positive blood cultures were collected, 310 before and 333 after introduction of the MALDI-TOF technique. In the post-MALDI-TOF period, the median blood culture TAT decreased from 73.53 hours to 71.73 for Gram-positive organisms, from 64.09 hours to 63.59 for Gram-negative organisms and from 115.7 hours to 47.62 for anaerobes. MALDI-TOF significantly decreased the TAT for anaerobes, for which antimicrobial susceptibility testing is not routinely performed. Furthermore, the major advantage of the MALDI-TOF introduction was the decrease in the time for pathogen identification (TID) independently of the species, with an improvement of 93% for Gram-positive organisms, 86% for Gram-negative organisms and 95% for anaerobes. In addition, high species-level identification rates and cost savings compared with conventional methods were achieved after the MALDI-TOF introduction. PMID:26147149
Time-dependent angularly averaged inverse transport
Guillaume Bal; Alexandre Jollivet
2009-05-07
This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain.
Turnaround Momentum Still Fragile
ERIC Educational Resources Information Center
Klein, Alyson
2012-01-01
The federal program providing billions of dollars to help states and districts close or remake some of their worst-performing schools remains a work in progress after two years, with more than 1,200 turnaround efforts under way but still no definitive verdict on its effectiveness. The School Improvement Grant (SIG) program, supercharged by a…
Edwards, Thomas
2005-09-01
The Analytical Development Section (ADS) and the Statistical Consulting Section (SCS) of the Savannah River National Laboratory (SRNL) are participating in a Six Sigma initiative to improve the Defense Waste Processing Facility (DWPF) Laboratory. The Six Sigma initiative has focused on reducing the analytical turnaround time of samples from the Slurry Mix Evaporator (SME) by developing streamlined sampling and analytical methods [1]. The objective of Phase I was to evaluate the sub-sampling of a larger sample bottle and the performance of a cesium carbonate (Cs₂CO₃) digestion method. Successful implementation of the Cs₂CO₃ fusion method in the DWPF would have important time savings and convenience benefits because this single digestion would replace the dual digestion scheme now used. A single digestion scheme would result in more efficient operations in both the DWPF shielded cells and the inductively coupled plasma-atomic emission spectroscopy (ICP-AES) laboratory. By taking a small aliquot of SME slurry from a large sample bottle and dissolving the vitrified SME sample with carbonate fusion methods, an analytical turnaround time reduction from 27 hours to 9 hours could be realized in the DWPF. This analytical scheme has the potential for not only dramatically reducing turnaround times, but also streamlining operations to minimize wear and tear on critical shielded cell components that are prone to fail, including the Hydragard™ sampling valves and manipulators. Favorable results from the Phase I tests [2] led to the recommendation for a Phase II effort as outlined in the DWPF Technical Task Request (TTR) [3]. There were three major tasks outlined in the TTR, and SRNL issued a Task Technical and QA Plan [4] with a corresponding set of three major task activities: (1) Compare weight percent (wt%) total solids measurements of large volume samples versus peanut vial samples. (2) Evaluate Cs₂CO₃ and K₂CO₃ fusion methods using DWPF simulated glass (part 1) and (to the extent possible) using DWPF radioactive glass (part 2). (3) Recommend specific equipment to implement the new methods and work with SRNL's Equipment Engineering Section (EES) to ensure compatibility of the equipment with the DWPF shielded cells. The focus of this report is major task activity No. 2 (parts 1 and 2). The other two major task activities are to be addressed separately. The measurements supporting task activity No. 1 are being conducted at the DWPF. EES is currently designing and fabricating the DWPF cell equipment needed for the new method as part of major task activity No. 3.
Ensemble vs. time averages in financial time series analysis
NASA Astrophysics Data System (ADS)
Seemann, Lars; Hua, Jia-Chen; McCauley, Joseph L.; Gunaratne, Gemunu H.
2012-12-01
Empirical analysis of financial time series suggests that the underlying stochastic dynamics are not only non-stationary, but also exhibit non-stationary increments. However, financial time series are commonly analyzed using the sliding interval technique that assumes stationary increments. We propose an alternative approach that is based on an ensemble over trading days. To determine the effects of time averaging techniques on analysis outcomes, we create an intraday activity model that exhibits periodic variable diffusion dynamics and we assess the model data using both ensemble and time averaging techniques. We find that ensemble averaging techniques detect the underlying dynamics correctly, whereas sliding intervals approaches fail. As many traded assets exhibit characteristic intraday volatility patterns, our work implies that ensemble averages approaches will yield new insight into the study of financial markets’ dynamics.
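The difference between the two approaches can be seen on synthetic data with a periodic intraday volatility pattern (a toy model for illustration, not the authors' dataset or code):

```python
import numpy as np

rng = np.random.default_rng(42)
n_days, n_ticks = 500, 390                 # e.g. one-minute returns per trading day
t = np.arange(n_ticks)
sigma = 1.0 + 0.5 * np.cos(2 * np.pi * t / n_ticks)    # intraday volatility cycle
returns = rng.normal(size=(n_days, n_ticks)) * sigma   # non-stationary increments

# Ensemble average: statistics at a fixed time of day, taken over trading days.
ens_var = returns.var(axis=0)              # recovers sigma**2 tick by tick

# Sliding-interval (time) average: one long series, windows assumed stationary.
flat = returns.ravel()
win = 2 * n_ticks
slide_var = np.array([flat[i:i + win].var()
                      for i in range(0, flat.size - win, n_ticks)])
```

Here `ens_var` tracks the intraday pattern `sigma**2`, while every sliding window sees roughly the same mixture (about the cycle-averaged variance of 1.125), so the time-of-day structure is washed out.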
Off the Clock: What More Time Can (and Can't) Do for School Turnarounds. Education Sector Reports
ERIC Educational Resources Information Center
Silva, Elena
2012-01-01
If less time in the classroom is a cause of poor student performance, can adding more time be the cure? This strategy underlies a major effort to fix the nation's worst public schools. Billions of federal stimulus dollars are being spent to expand learning time on behalf of disadvantaged children. And extended learning time (ELT) is being proposed…
Ensemble averaging vs. time averaging in molecular dynamics simulations of thermal conductivity
NASA Astrophysics Data System (ADS)
Gordiz, Kiarash; Singh, David J.; Henry, Asegun
2015-01-01
In this report, we compare time averaging and ensemble averaging as two different methods for phase space sampling in molecular dynamics (MD) calculations of thermal conductivity. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium MD. We introduce two different schemes for the ensemble averaging approach and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical MD, the approaches used for generating independent trajectories may find their greatest utility in computationally expensive simulations such as first principles MD. For such simulations, where each time step is costly, time averaging can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each trajectory is independent. For this reason, particularly when using massively parallel architectures, ensemble averaging can result in much shorter simulation times (~100-200X), but exhibits similar overall computational effort.
TIME INVARIANT MULTI ELECTRODE AVERAGING FOR BIOMEDICAL SIGNALS.
Orellana, R Martinez; Erem, B; Brooks, D H
2013-12-31
One of the biggest challenges in averaging ECG or EEG signals is to overcome temporal misalignments and distortions due to uncertain timing or complex non-stationary dynamics. Standard methods average individual leads over a collection of epochs on a time-sample by time-sample basis, even when multi-electrode signals are available. Here we propose a method that averages multi-electrode recordings simultaneously by using spatial patterns, without relying on time or frequency. PMID:24465170
Fei, Yang; Zeng, Rong; Wang, Wei; He, Falin; Zhong, Kun
2015-01-01
Introduction To investigate the state of the art of intra-laboratory turnaround time (intra-TAT), to provide suggestions for improvement, and to find out whether laboratories accredited by the International Organization for Standardization (ISO) 15189 or the College of American Pathologists (CAP) show better performance on intra-TAT than non-accredited ones. Materials and methods 479 Chinese clinical laboratories participating in the external quality assessment programs for chemistry, blood gas, and haematology tests organized by the National Centre for Clinical Laboratories in China were included in our study. The questionnaires requested general information and the median intra-TAT of routine and stat tests over the previous week. Results The response rates for clinical biochemistry, blood gas, and haematology testing were 36% (479 / 1307), 38% (228 / 598), and 36% (449 / 1250), respectively. More than 50% of laboratories indicated that they had set up intra-TAT median goals, and almost 60% of laboratories declared they monitored intra-TAT for every analyte they performed. Among all analytes we investigated, the intra-TAT of haematology analytes was shorter than that of biochemistry analytes, while the intra-TAT of blood gas analytes was the shortest. There were significant differences between median intra-TAT on different days of the week for routine tests. However, there were no significant differences in median intra-TAT reported by accredited laboratories and non-accredited laboratories. Conclusions Many laboratories in China are aware of intra-TAT control and are making an effort to reach the target. There is still room for improvement. Accredited laboratories perform better on intra-TAT monitoring and target setting than non-accredited ones, but there are no significant differences in the median intra-TAT they report. PMID:26110033
Applicability of time-average moiré techniques for chaotic oscillations.
Ragulskis, Minvydas; Sanjuan, Miguel A F; Saunoriene, Loreta
2007-09-01
Applicability of time-average moiré techniques for chaotic oscillations is analyzed in this paper. An envelope function characterizing the rate of convergence of time-averaged color intensity is derived. A study of the relationship between Lyapunov exponents and gray-scale color intensity provides insight into the process of chaotic contrast modulation and confirms that chaotic oscillations do not produce time-average moiré fringes. PMID:17930324
Maselle, Edna; Muhanguzi, Asaph; Muhumuza, Simon; Nansubuga, Jeniffer; Nawavvu, Cecilia; Namusobya, Jeniffer; Kamya, Moses R; Semitala, Fred C
2014-01-01
Introduction HIV/AIDS clinics in resource limited settings (RLS) face increasing numbers of patients and workforce shortages [1, 2]. To address these challenges, efficient models of care like pharmacy only visits (POV) and nurse only visits (NOV) are recommended [3]. The Makerere University Joint AIDS Program (MJAP), a PEPFAR-funded program providing care to over 42,000 HIV-infected adults, has implemented the POV model since 2009. In this model, stable patients on antiretroviral therapy (ART) with adherence to ART >95% and Karnofsky score >90% are reviewed by a doctor every four months but visit the pharmacy for ART re-fills every two months. A study conducted in August 2011 showed low retention on the POV program, with symptomatic diseases, pending CD4 count and complete blood count results, and poor adherence to ART as the major reasons for the non-retention in the POV program. To improve retention on POV, the TAT (Turnaround Time) for laboratory results (the main reason for non-retention in the previous study) was reduced from one month to one week. In August 2012, the study was repeated to assess the effect of reducing TAT on improving retention one year after patients were placed on POV. Materials and Methods A cohort analysis of data from patients on POV in August 2011 and in August 2012 was done. We compared retention on POV before and after reducing the TAT for laboratory results. Results Retention on POV was 12.0% (95% CI 9.50–14.7) among 619 patients in 2011 (70% females, mean age 33 years, standard deviation (SD) 8.5) compared to 11.1% (95% CI 9.15–13.4) among 888 patients (70% females, mean age 38.3 years, SD 8.9) in 2012 (p=0.59). The main reasons for non-retention on the POV program in 2012 were poor adherence to ART (23%) and missed clinic appointments (14%). Conclusions Reducing TAT for laboratory test results did not improve retention of stable HIV-infected adults on POV in our clinic. Strategies for improving adherence to ART and keeping clinic appointments need to be employed to balance workload and management of patients without compromising quality of care or patients' clinical, immunological and adherence outcomes. PMID:25394111
Average Weighted Receiving Time of Weighted Tetrahedron Koch Networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Zhang, Danping; Ye, Dandan; Zhang, Cheng; Li, Lei
2015-07-01
We introduce weighted tetrahedron Koch networks with infinite weight factors, which are a generalization of the finite ones. The term of weighted time is defined for the first time in this paper. The mean weighted first-passage time (MWFPT) and the average weighted receiving time (AWRT) are defined by weighted time accordingly. We study the AWRT with a weight-dependent walk. Results show that the AWRT for a nontrivial weight factor sequence grows sublinearly with the network order. To investigate the reason for this sublinearity, the average receiving time (ART) for four cases is discussed.
Scaling of average receiving time and average weighted shortest path on weighted Koch networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Chen, Dandan; Dong, Yujuan; Liu, Jie
2012-12-01
In this paper we present weighted Koch networks based on classic Koch networks. A new method is used to determine the average receiving time (ART), whose key step is to write the sum of mean first-passage times (MFPTs) for all nodes to absorption at the trap located at a hub node as a recursive relation. We show that the ART exhibits a sublinear or linear dependence on network order. Thus, the weighted Koch networks are more efficient than classic Koch networks in receiving information. Moreover, average weighted shortest path (AWSP) is calculated. In the infinite network order limit, the AWSP depends on the scaling factor. The weighted Koch network grows unbounded but with the logarithm of the network size, while the weighted shortest paths stay bounded.
Time-averaged photon-counting digital holography.
Demoli, Nazif; Skenderović, Hrvoje; Stipčević, Mario
2015-09-15
Time-averaged holography initially used photo-emulsions and later digital photo-sensitive arrays to record holograms. We extend the recording possibilities by utilizing a photon-counting camera, and we further investigate the possibility of obtaining accurate hologram reconstructions in rather severe experimental conditions. To achieve this, we derived an expression for the fringe function comprising the main parameters affecting the hologram recording. The influence of the main parameters, namely the exposure time and the number of averaged holograms, is analyzed by simulations and experiments. It is demonstrated that taking long exposure times can be avoided by averaging over many holograms with exposure times much shorter than the vibration cycle. Conditions in which the signal-to-noise ratio in reconstructed holograms can be substantially increased are provided. PMID:26371907
Time-averaged transport coefficients in photochemical modeling - A conjecture
NASA Astrophysics Data System (ADS)
Weinstock, J.
1989-10-01
It has been pointed out by several authors that constituent diffusivities associated with breaking gravity waves vary dramatically with the wave saturation amplitude. This variation introduces an uncertainty for the value to be used for diffusivity. In principle, this particular uncertainty may be avoided if an appropriate time average is used and the corresponding wave statistics are known. A simple hypothesis is put forward regarding what average is appropriate. A limitation is that turbulence scales be less than constituent scales.
Time-average TV holography for vibration fringe analysis.
Kumar, Upputuri Paul; Kalyani, Yanam; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad
2009-06-01
Time-average TV holography is a widely used method for vibration measurement. The method generates speckle correlation time-averaged J0 fringes that can be used for full-field qualitative visualization of mode shapes at resonant frequencies of an object under harmonic excitation. In order to map the amplitudes of vibration, quantitative evaluation of the time-averaged fringe pattern is desired. A quantitative evaluation procedure based on the phase-shifting technique used in two-beam interferometry has also been adopted for this application with some modification. The existing procedure requires a large number of frames to be recorded for implementation. We propose a procedure that reduces the number of frames required for the analysis. The TV holographic system used and the experimental results obtained with it on an edge-clamped, sinusoidally excited square aluminium plate sample are discussed. PMID:19488123
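The J0 fringe function mentioned above arises because the interference term, averaged over a full vibration cycle, satisfies ⟨cos(a sin ωt)⟩ = J0(a), the zeroth-order Bessel function. A quick numerical check of this identity (generic, not the authors' processing code):

```python
import numpy as np
from math import factorial

def time_averaged_fringe(a, n=200000):
    """Average cos(a*sin(wt)) over one vibration cycle; tends to J0(a)."""
    wt = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return float(np.cos(a * np.sin(wt)).mean())

def j0_series(a, terms=30):
    """Truncated power series J0(a) = sum_k (-1)^k (a/2)^(2k) / (k!)^2."""
    return sum((-1) ** k * (a / 2.0) ** (2 * k) / factorial(k) ** 2
               for k in range(terms))
```

The first zero of J0, near a ≈ 2.405, is what produces the dark fringes: vibration amplitudes whose argument lands there average the interference term to zero.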
Trapping ultracold atoms in a time-averaged adiabatic potential
Gildemeister, M.; Nugent, E.; Sherlock, B. E.; Kubasik, M.; Sheard, B. T.; Foot, C. J.
2010-03-15
We report an experimental realization of ultracold atoms confined in a time-averaged, adiabatic potential (TAAP). This trapping technique involves using a slowly oscillating (~kHz) bias field to time-average the instantaneous potential given by dressing a bare magnetic potential with a high-frequency (~MHz) magnetic field. The resultant potentials provide a convenient route to a variety of trapping geometries with tunable parameters. We demonstrate the TAAP trap in a standard time-averaged orbiting potential trap with additional Helmholtz coils for the introduction of the radio frequency dressing field. We have evaporatively cooled 5×10⁴ atoms of ⁸⁷Rb to quantum degeneracy and observed condensate lifetimes of longer than 3 s.
Turnaround radius in modified gravity
Faraoni, Valerio
2015-01-01
In an accelerating universe in General Relativity there is a maximum radius above which a shell of test particles cannot collapse, but is dispersed by the cosmic expansion. This radius could be used in conjunction with observations of large structures to constrain the equation of state of the universe. We extend the concept of turnaround radius to modified theories of gravity for which the gravitational slip is non-vanishing.
Determining average path length and average trapping time on generalized dual dendrimer
NASA Astrophysics Data System (ADS)
Li, Ling; Guan, Jihong
2015-03-01
Dendrimers have a wide range of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., when the trap is placed on a central node and when the trap is uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
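For small instances, both quantities can be checked by brute force: the APL via breadth-first search over all pairs, and the ATT for a fixed trap by solving the linear system (I - Q)t = 1 over the non-trap nodes, where Q is the walk's transition matrix restricted to those nodes. A generic sketch (not the paper's closed-form derivation for Husimi cacti):

```python
import numpy as np
from collections import deque

def apl(adj):
    """Average shortest-path length over all ordered node pairs (BFS from each source)."""
    n = len(adj)
    total = 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist)
    return total / (n * (n - 1))

def att(adj, trap=0):
    """Mean first-passage time to the trap for an unbiased walk, averaged over starts."""
    n = len(adj)
    others = [v for v in range(n) if v != trap]
    P = np.zeros((n, n))
    for u in range(n):
        for v in adj[u]:
            P[u, v] = 1.0 / len(adj[u])
    Q = P[np.ix_(others, others)]
    t = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    return float(t.mean())
```

On a star graph with the trap at the hub every leaf is absorbed in one step (ATT = 1); on a 3-node path with the trap at one end the mean first-passage times are 3 and 4, so ATT = 3.5.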
Scaling of average sending time on weighted Koch networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Liu, Jie
2012-10-01
Random walks on weighted complex networks, especially scale-free networks, have attracted considerable interest in the past. But the efficiency of a hub sending information on scale-free small-world networks has been addressed less. In this paper, we study random walks on a class of weighted Koch networks with scaling factor 0 < r ≤ 1. We derive some basic properties for random walks on the weighted Koch networks, based on which we calculate analytically the average sending time (AST) defined as the average of mean first-passage times (MFPTs) from a hub node to all other nodes, excluding the hub itself. The obtained result displays that for 0 < r < 1 in large networks the AST grows as a power-law function of the network order with the exponent log₄((3r+1)/r), and for r = 1 in large networks the AST grows with network order as N ln N, which is larger than the linear scaling of the average receiving time defined as the average of MFPTs for random walks to a given hub node averaged over all starting points.
An Advanced Time Averaging Modelling Technique for Power Electronic Circuits
NASA Astrophysics Data System (ADS)
Jankuloski, Goce
For stable and efficient performance of power converters, a good mathematical model is needed. This thesis presents a new modelling technique for DC/DC and DC/AC Pulse Width Modulated (PWM) converters. The new model is more accurate than the existing modelling techniques such as State Space Averaging (SSA) and Discrete Time Modelling. Unlike the SSA model, the new modelling technique, the Advanced Time Averaging Model (ATAM) includes the averaging dynamics of the converter's output. In addition to offering enhanced model accuracy, application of linearization techniques to the ATAM enables the use of conventional linear control design tools. A controller design application demonstrates that a controller designed based on the ATAM outperforms one designed using the ubiquitous SSA model. Unlike the SSA model, ATAM for DC/AC augments the system's dynamics with the dynamics needed for subcycle fundamental contribution (SFC) calculation. This allows for controller design that is based on an exact model.
ERIC Educational Resources Information Center
Mass Insight Education (NJ1), 2011
2011-01-01
Organized by the key conditions areas for turnaround, "People, Program, Time and Money," this tool offers sample language for each contract element to serve as a model for modifications from a traditional CBA that may support a district's turnaround efforts. Sample language is offered from existing provisions in district-wide collective bargaining…
EFFECT OF TIME AVERAGING ON ESTIMATION OF PHOTOVOLTAIC SYSTEM PERFORMANCE
Hansen, Clifford W. (Sandia National Laboratories)
Power from proposed photovoltaic power systems is commonly estimated by a performance model… Power from photovoltaic (PV) systems can be calculated using one of several models…
Combining Quick-Turnaround and Batch Workloads at Scale
NASA Technical Reports Server (NTRS)
Matthews, Gregory A.
2012-01-01
NAS uses PBS Professional to schedule and manage the workload on Pleiades, an 11,000+ node InfiniBand cluster. At this scale the user experience for quick-turnaround jobs can degrade, which led NAS initially to set up two separate PBS servers, each dedicated to a particular workload. Recently we have employed PBS hooks and scheduler modifications to merge these workloads together under one PBS server, delivering sub-1-minute start times for the quick-turnaround workload, and enabling dynamic management of the resources set aside for that workload.
NASA Astrophysics Data System (ADS)
Wu, Zikai; Hou, Baoyu; Zhang, Hongjuan; Jin, Feng
2014-04-01
Deterministic network models have been attractive media for discussing how dynamical processes depend on network structural features. On the other hand, the heterogeneity of weights affects dynamical processes taking place on networks. In this paper, we present a family of weighted expanded Koch networks based on Koch networks. They originate from an r-polygon, and in each subsequent evolutionary step every node of the current generation produces m r-polygons that include the node, with edge weights scaled by a factor w. We derive closed-form expressions for the average weighted shortest path length (AWSP). In large networks, the AWSP stays bounded as the network order grows (0 < w < 1). Then, we focus on a special random walk and trapping issue on the networks. In more detail, we calculate exactly the average receiving time (ART). The ART exhibits a sub-linear dependence on network order (0 < w < 1), which implies that nontrivial weighted expanded Koch networks are more efficient than un-weighted expanded Koch networks in receiving information. Besides, the efficiency of receiving information at hub nodes also depends on the parameters m and r. These findings may pave the way for controlling information transportation on general weighted networks.
Series Overview. Sustaining School Turnaround at Scale. Brief 1
ERIC Educational Resources Information Center
Education Resource Strategies, 2012
2012-01-01
Members of the non-profit organization Education Resource Strategies (ERS) have worked for over a decade with leaders of urban school systems to help them organize talent, time and technology to support great schools at scale. One year into the Federal program they are noticing significant differences in district turnaround approaches, engagement…
Turnaround Schools and the Leadership They Require
ERIC Educational Resources Information Center
Leithwood, Kenneth; Strauss, Tiiu
2008-01-01
"Turnaround Schools and the Leadership They Require" was commissioned by the Canadian Education Association (CEA) in 2006. The research, conducted by Kenneth Leithwood and Tiiu Strauss, was a two-year study of Ontario schools identified as in need of "turnaround" meaning that a significant improvement in school achievement was required. The study…
Time-averaged empirical models of chromospheric features
NASA Astrophysics Data System (ADS)
Avrett, E. H.
2000-05-01
Vernazza, Avrett, and Loeser (1981) used Skylab observations of the inhomogeneous structure of the quiet Sun in the EUV wavelength range 40-140 nm to determine separate chromospheric models for six observed brightness components, ranging from dark cell centers to very bright network elements. The intensity at the head of the C I continuum at 110 nm varies by a factor of 4 between cell center and bright network, but the corresponding temperature in the middle chromosphere responsible for this emission varies from about 6000 K to 7000 K. These observations and recent ones from the SUMER instrument on SOHO show variations with time, but the time variations appear to be smaller than the spatial variations. If so, time-averaged models for each separate brightness component seem to be justified as a first approximation. Using separate models for each component seems a reasonable approximation since the cell and network features are large compared with the thickness of the chromosphere. Large excursions of the chromospheric temperature on short time scales appear to be ruled out given the moderate temporal variations of EUV continuum and line intensities observed to date. The EUV observations seem to indicate a chromospheric temperature rise at all locations and at all times.
The Time-Averaged Field Investigations Project: A Synthesis
NASA Astrophysics Data System (ADS)
Johnson, C. L.; Constable, C.; Tauxe, L.; Lawrence, K.; Staudigel, H.; Brown, L.; Mejia, V.; Opdyke, N.; Coe, R.; Stone, D.
2005-12-01
The Time-Averaged Field Investigations (TAFI) project is a multi-institutional effort to study the behavior of Earth's magnetic field over the past 5 Ma. Paleomagnetic directions from over 800 lava flows and dikes at 15 globally distributed locations have been obtained. Radiometric and paleointensity measurements have been performed on suitable specimens from individual flows. The new data significantly improve geographical coverage of the paleofield, particularly in the southern hemisphere and at high latitudes. We have integrated data collected under the TAFI project with the highest quality paleosecular variation (PSV) data available from the IAGA PSVRL, Global Paleomagnetic Database, and with regional compilations of published PSV data for Hawaii, Mexico, the South Pacific, Reunion, and the western United States. The data are compatible with a geocentric axial dipole average field, but also permit small but persistent axial quadrupole (g_2^0) and axial octupole (g_3^0) terms. We report on latitudinal variations in PSV diagnostics from the TAFI data. Where sufficient data are available regionally we compare statistical distributions of paleodirections with those predicted by PSV models.
Time-weighted averaging for nitrous oxide: an automated method.
McGill, W A; Rivera, O; Howard, R
1980-11-01
An automated method of obtaining a time-weighted average of nitrous oxide levels in an operating room was compared with a standard method. The automated method consisted of electronic integration of the voltage output of a nitrous oxide analyzer using a multimeter-microprocessor. The standard method utilized a bag and pump to collect a room air sample, which was subsequently analyzed with a nitrous oxide analyzer. There was a high degree of correlation (r = 0.99) between the two methods. It is concluded that the automated method is an accurate alternative and offers institutions a simple, cost-effective method of monitoring and documenting results of pollution control programs in anesthetizing locations. PMID:7425378
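The electronic integration described above amounts to integrating the analyzer's output over time and dividing by the elapsed duration. A minimal sketch of that time-weighted averaging calculation, with hypothetical sample times and N2O readings (not data from the study):

```python
def time_weighted_average(times, values):
    """Trapezoidal integral of values over times, divided by the duration."""
    if len(times) < 2:
        raise ValueError("need at least two samples")
    integral = 0.0
    for i in range(len(times) - 1):
        # trapezoid between consecutive samples
        integral += 0.5 * (values[i] + values[i + 1]) * (times[i + 1] - times[i])
    return integral / (times[-1] - times[0])

times = [0, 10, 20, 30, 40, 50, 60]       # minutes into the procedure
ppm   = [30, 120, 300, 250, 180, 90, 30]  # hypothetical N2O levels (ppm)
twa = time_weighted_average(times, ppm)
print(round(twa, 1))  # 161.7 ppm over the hour
```

Note the TWA weights each reading by the time interval it represents, so unevenly spaced samples are handled the same way.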
Comment on "Turnaround in Cyclic Cosmology"
Xin Zhang
2008-04-02
We comment on a recent paper by L. Baum and P. H. Frampton [Phys. Rev. Lett. 98, 071301 (2007)] where it was argued that the entropy problem can be resolved in a peculiar cyclic universe model through a deflation mechanism (i.e., the universe is fragmented into many disconnected causal patches at the turnaround). We point out that in this cyclic model the Hubble length becomes infinite at the turnaround, and thus the deflation scenario is not valid.
Time Series ARIMA Models of Undergraduate Grade Point Average.
ERIC Educational Resources Information Center
Rogers, Bruce G.
The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
Unocal restarts Parachute Creek plant after turnaround
Not Available
1986-12-01
In late November, Unocal Corporation restarted its Parachute Creek shale oil project after a two-month turnaround, or overhaul operation. The plant is again operating at about 50% of its 10,000 barrel/day design capacity. This is the maximum rate which has been achieved for long periods of time. Unocal has been continuing to explore the possibility of adding a fluidized bed combustor to the retorting system. A decision on whether to proceed with building the FBC is expected to be made by July, 1987. The Colorado Mined Land Reclamation Board has allowed Unocal to make changes at Parachute Creek to better accommodate the handling of spent shale. These changes are described. Operating problems at the plant from start-up and modifications required are reviewed. Upgrading of the shale oil syncrude cannot be done at the Gary, Indiana refinery as planned. Current plans are to truck it to Utah, place it in a pipeline to Midland, Texas, then pipe it through Cushing, Oklahoma to the Unocal refinery in Lemont, Illinois.
Great comebacks: the 1992 turnaround contest winners.
Johnsson, J; Anderson, H J
1992-07-20
Turning a hospital around takes hard work, determination, and guts. The winners of this year's Hospital Turn-around of the Year Contest, sponsored by Hospitals and Coopers & Lybrand, demonstrate that a successful turn-around is about much more than money, and that it can't be judged a success based solely on financial results. The management team of each winning hospital showed imagination and courage in tackling their hospitals' problems. Congratulations to this year's winners and finalists for a job well done. PMID:1634214
On the average sojourn time under M/M/1/SRPT
Bansal, Nikhil
We study an M/M/1 queueing system under the Shortest Remaining Processing Time (SRPT) policy. We show that the average sojourn time… Keywords: average sojourn time, SRPT, queueing theory, heavy traffic.
Late Pleistocene Nannofossil Assemblages and Time-Averaging in the Fossil Record [IODP Site U1419]
Patrick, Amanda Kacy
2014-09-21
When looking at geologically rapid climatic changes, such as glacial/interglacial cycles, the true response of organisms may be obscured in the fossil record due to time averaging. Time averaging can also impact the rates of origination...
In Vivo T2 Relaxation Time Measurement with Echo-Time Averaging
Prescot, Andrew P.; Shi, Xianfeng; Choi, Changho; Renshaw, Perry F.
2014-01-01
The accuracy of metabolite concentrations measured using in vivo proton (1H) magnetic resonance spectroscopy (MRS) is enhanced following correction for spin-spin (T2) relaxation effects. In addition, metabolite proton T2 relaxation times provide unique information regarding cellular environment and molecular mobility. Echo-time (TE) averaging 1H MRS involves the collection and averaging of multiple TE steps that greatly simplifies resulting spectra due to the attenuation of spin-coupled and macromolecule resonances. Given the simplified spectral appearance and inherent metabolite T2 relaxation information, the aim of the present proof-of-concept study was to develop a novel data processing scheme to estimate metabolite T2 relaxation times from TE-averaged 1H MRS data. Spectral simulations are used to validate the proposed TE-averaging methods for estimating methyl proton T2 relaxation times for N-acetyl aspartate, total creatine, and choline-containing compounds. The utility of the technique and its reproducibility are demonstrated using data obtained in vivo from the posterior-occipital cortex of ten healthy control subjects. Compared to standard methods, distinct advantages of this approach include built-in macromolecule resonance attenuation, in vivo T2 estimates closer to reported values when maximum TE ≥ T2, and the potential for T2 calculation of metabolite resonances otherwise inseparable in standard 1H MRS spectra recorded in vivo. PMID:24865447
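The core of any multi-TE T2 estimate is fitting a mono-exponential decay S(TE) = S0·exp(-TE/T2) to signal amplitudes across echo times. A minimal sketch of that fit via log-linear least squares, on noiseless synthetic data (this is not the authors' TE-averaging processing scheme; the TE values and the "true" T2 are hypothetical):

```python
import math

def estimate_t2(te_ms, amplitudes):
    """T2 (ms) from the least-squares slope of ln(S) versus TE."""
    x = te_ms
    y = [math.log(a) for a in amplitudes]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return -1.0 / slope          # slope of ln(S) vs TE is -1/T2

true_t2 = 240.0                              # ms, hypothetical metabolite T2
te = [30.0, 60.0, 120.0, 240.0, 480.0]       # echo times (ms)
signal = [math.exp(-t / true_t2) for t in te]  # noiseless synthetic decay
print(round(estimate_t2(te, signal), 1))     # recovers 240.0
```

With noisy in vivo data a weighted or nonlinear fit is usually preferred, but the log-linear version shows the relationship the abstract relies on.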
Turnaround Planning for Integrated Chemical Sites
Grossmann, Ignacio E.
Turnaround Planning for Integrated Chemical Sites. Satya Amaran, Tong Zhang, Sreekanth Rajagopalan, Nick Sahinidis (CMU); Bikram Sharda, Matt Morrison, Scott Bury, Scott Miller, John Wassick (The Dow Chemical Company). Motivation: large companies spend on the order of hundreds of millions of dollars on turnarounds annually.
Relegation and Reversion: Longitudinal Analysis of School Turnaround and Decline
ERIC Educational Resources Information Center
Hochbein, Craig
2012-01-01
School turnaround has become a popular strategy for improvement of chronically low-performing schools. Research related to school turnaround has relied substantially upon case studies. These case studies often focus on successful turnarounds and report immediate outcomes, failing to provide information about the sustainability of the results. In…
Critical variables of venture turnarounds: a liabilities approach
M. Pretorius; G. T. D. Holtzhauzen
Potential failure is a threat that businesses face during any stage of their life cycle, while turnaround from decline is essential to protect them from failure. The scientific literature on turnaround focuses on both strategy and process. This study's line of enquiry first reviews the documented research (both theoretical and empirical) encompassing the phenomenon of 'turnaround'. The methodology applied is…
On time-dependent perturbation theory in matrix mechanics and time averaging
NASA Astrophysics Data System (ADS)
Casas, Fernando
2015-09-01
The time-dependent quantum perturbation theory developed by Born, Heisenberg and Jordan in 1926 is revisited. We show that it not only reproduces the standard theory formulated in the interaction picture, but also allows one to construct more accurate approximations if time averaging techniques are employed. The theory can be rendered unitary even if the expansion is truncated by using a transformation previously suggested by Heisenberg. We illustrate the main features of the procedure on a simple example which clearly shows its advantages in comparison with the standard perturbation theory.
Experimental and theoretical studies of time-averaged and time resolved rotor heat transfer
NASA Technical Reports Server (NTRS)
Dunn, M. G.; Rae, W. J.; Rigby, D. L.
1987-01-01
Efforts in support of the Space Shuttle Main Engine (SSME) structural durability program have concentrated on obtaining detailed time-averaged and time-resolved (or phase-locked) measurements on a full-scale rotating turbine both with and without cold gas injection and on theoretical studies designed to improve the prediction capability for these turbine flows. The experimental efforts have concentrated on use of the Garrett TFE 731-2 hp turbine. However, it has been possible to apply the theoretical efforts to predicting heat-flux distributions obtained for two additional turbines - i.e., (1) the Garrett low aspect ratio turbine (LART) and (2) the Teledyne 702 turbine. The experimental technique is the short duration, shock-tunnel approach, in which fast-response, thin-film resistance thermometers are used to measure surface temperature histories at prescribed locations on the turbine component parts. Heat-flux values are then inferred from the temperature histories by using standard data reduction procedures. The turbine being used is the Garrett TFE 731-2 hp stage, and both the nozzle guide vanes and the rotor blades are heavily instrumented with thin-film heat-flux gauges. Depending on how the data from a particular heat-flux gauge are recorded, one can get either time-resolved (or phase-locked) or time-averaged results. Both types of data are illustrated.
Concurrent Average Memory Access Time
Sun, Xian-He; Wang, Dawei (Illinois Institute of Technology)
…memory concurrency is a vital and timely task for data-intensive applications. Traditional memory performance metrics, such as Average Memory Access Time (AMAT), are designed for sequential data accesses and have inherent…
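The classical (sequential) AMAT metric named in this excerpt is a simple recurrence: AMAT = hit_time + miss_rate × miss_penalty, where the miss penalty of one cache level is itself the AMAT of the next. A sketch with hypothetical cycle counts and miss rates (the paper's concurrent extension, C-AMAT, is not reproduced here):

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average Memory Access Time in cycles for one cache level."""
    return hit_time + miss_rate * miss_penalty

# Two-level hierarchy: the L1 miss penalty is the AMAT of L2 + memory.
l2_amat = amat(hit_time=10, miss_rate=0.10, miss_penalty=100)    # 20 cycles
l1_amat = amat(hit_time=1, miss_rate=0.05, miss_penalty=l2_amat)  # 2 cycles
print(l2_amat, l1_amat)
```

The nesting makes clear why small miss-rate improvements at upper levels dominate average access time.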
Turgeman, Lior; Fixler, Dror
2013-06-01
In fluorescence fluctuation polarization sensitive experiments, the limitations associated with detecting the rotational timescale are usually eliminated by applying fluorescence correlation spectroscopy analysis. In this paper, the variance of the time-averaged fluorescence intensity extracted from the second moment of the measured fluorescence intensity is analyzed in the short time limit, before fluctuations resulting from rotational diffusion average out. Since rotational correlation times of fluorescence molecules are typically much lower than the temporal resolution of the system, independently of the time bins used, averaging over an ensemble of time-averaged trajectories was performed in order to construct the time-averaged intensity distribution, thus improving the signal-to-noise ratio. Rotational correlation times of fluorescein molecules in different viscosities of the medium within the range of the anti-bunching time (1-10 ns) were then extracted using this method. PMID:23760786
Leading a supply chain turnaround.
Slone, Reuben E
2004-10-01
Just five years ago, salespeople at Whirlpool were in the habit of referring to their supply chain organization as the "sales disablers." Now the company excels at getting products to the right place at the right time--while managing to keep inventories low. How did that happen? In this first-person account, Reuben Slone, Whirlpool's vice president of Global Supply Chain, describes how he and his colleagues devised the right supply chain strategy, sold it internally, and implemented it. Slone insisted that the right focal point for the strategy was the satisfaction of consumers at the end of the supply chain. Most supply chain initiatives do the opposite: They start with the realities of a company's manufacturing base and proceed from there. Through a series of interviews with trade customers large and small, his team identified 27 different capabilities that drove industry perceptions of Whirlpool's performance. Knowing it was infeasible to aim for world-class performance across all of them, Slone weighed the costs of excelling at each and found the combination of initiatives that would provide overall competitive advantage. A highly disciplined project management office and broad training in project management were key to keeping work on budget and on benefit. Slone set an intense pace--three "releases" of new capabilities every month--that the group maintains to this day. Lest this seem like a technology story, however, Slone insists it is just as much a "talent renaissance." People are proud today to be part of Whirlpool's supply chain organization, and its new generation of talent will give the company a competitive advantage for years to come. PMID:15559580
A Measurement of Time-Averaged Aerosol Optical Depth using Air… (arXiv:astro-ph/0601091 v1, 4 Jan 2006)
…in terms of optical depth, time-averaged over extended periods. Although the technique lacks the statistical power to make the critical hourly measurements that only specialized active instruments can achieve, we…
Scheduling to Minimize Average Completion Time Revisited: Deterministic On-line Algorithms
Megow, Nicole
2004-02-06
We consider the scheduling problem of minimizing the average weighted completion time on identical parallel machines when jobs are arriving over time. For both the preemptive and the nonpreemptive setting, we show that ...
Manned Orbital Transfer Vehicle (MOTV). Volume 5: Turnaround analysis
NASA Technical Reports Server (NTRS)
Boyland, R. E.; Sherman, S. W.; Morfin, H. W.
1979-01-01
The development of a low-cost reliable turnaround process to employ the MOTV in enhancing the utilization of the geosynchronous space region is analyzed. It is indicated that a routine effective turnaround/maintenance plan must make maximum use of flight data for maintenance planning, a high degree of test automation, and MOTV maintainability features in order to minimize tests, facilitate repair, and reduce manpower requirements. An effective turnaround plan provides a payback of reduced risks.
ERIC Educational Resources Information Center
Public Impact, 2008
2008-01-01
Evidence collected over the last 30 years suggests that effective school leaders significantly influence student learning and other aspects of school performance. Documented experience also indicates that individual leaders in failing organizations in various sectors, including education, can effect rapid, dramatic improvements. School turnaround…
Bayesian Model Averaging Naive Bayes (BMA-NB): Averaging over an Exponential Number of Feature Models
Sanner, Scott
…that it is indeed possible to exactly evaluate BMA over the exponentially-sized powerset of NB feature models in linear…
The intensity of the time-averaged geomagnetic field: the last 5 Myr
Juarez, M. T.; Tauxe, L. (Universiteit Utrecht)
…estimates of the ancient geomagnetic field contains more than 1500 data points collected through decades… the first to estimate the average strength of the geomagnetic field. Based on the paleointensity data…
The State Role in School Turnaround: Emerging Best Practices
ERIC Educational Resources Information Center
Rhim, Lauren Morando, Ed.; Redding, Sam, Ed.
2014-01-01
This publication explores the role of the state education agency (SEA) in school turnaround efforts. An emphasis is placed on practical application of research and best practices related to the SEA's critical leadership role in driving and supporting successful school turnaround efforts. The publication is organized around the four goals of…
Long-term Turnaround Planning for Integrated Chemical Sites
Grossmann, Ignacio E.
Long-term Turnaround Planning for Integrated Chemical Sites. Satya Amaran, Nick Sahinidis (CMU); Bikram Sharda, Matt Morrison, Scott Bury, Scott Miller, John Wassick (The Dow Chemical Company). Motivation: large companies spend on the order of hundreds of millions of dollars on turnarounds annually.
Using Competencies to Improve School Turnaround Principal Success
ERIC Educational Resources Information Center
Steiner, Lucy; Hassel, Emily Ayscue
2011-01-01
This paper aims first to shed light on one element of leadership: the characteristics--or "competencies"--of turnaround leaders who succeed in driving rapid, dramatic change. Second, it recounts the elements of support that districts must provide these leaders to enable and sustain a portfolio of successful school turnarounds. Fortunately, decades…
Turnaround as Reform: Opportunity for Meaningful Change or Neoliberal Posturing?
ERIC Educational Resources Information Center
Mette, Ian M.
2013-01-01
This study explores the neoliberal agenda of turnaround school reform efforts in America by examining the application and transformation of a Midwest State Turnaround Schools Project for the public school system. Perceptions of administrators and state-level policy actors are considered. Data were collected from 13 participants during the…
"Turnaround" as Shock Therapy: Race, Neoliberalism, and School Reform
ERIC Educational Resources Information Center
Johnson, Amanda Walker
2013-01-01
"Turnaround" strategies of educational reform promise that closing schools, reconstituting or privatizing them, and reopening them will bring miraculous results. Questioning the implications, this article situates "turnaround" strategies locally, following the closure of a predominantly minority high school in 2008 in Austin, Texas. The neoliberal…
A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, Thomas L.
2003-01-01
A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics, designed to capture this property, predicts power-law scaling behavior for the second-moment statistics of area-averaged rain rate on the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.
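The dependence of second-moment statistics on the averaging scale L can be demonstrated with a deliberately idealized experiment (not the paper's spectral model): for a spatially uncorrelated synthetic rain field, the variance of the rain rate averaged over disjoint L × L boxes falls off as a power law, here 1/L².

```python
import random

random.seed(0)
N = 256
# Synthetic, spatially uncorrelated "rain rate" field (exponential marginals).
field = [[random.expovariate(1.0) for _ in range(N)] for _ in range(N)]

def box_average_variance(field, L):
    """Variance of the field averaged over disjoint L x L boxes."""
    means = []
    for i in range(0, N, L):
        for j in range(0, N, L):
            block = [field[i + a][j + b] for a in range(L) for b in range(L)]
            means.append(sum(block) / (L * L))
    m = sum(means) / len(means)
    return sum((x - m) ** 2 for x in means) / len(means)

v2 = box_average_variance(field, 2)
v8 = box_average_variance(field, 8)
# Quadrupling L should cut the variance by roughly 16x for this field.
print(round(v2 / v8, 1))
```

Real rain fields are spatially correlated, which is exactly what changes the scaling exponent and what the spectral model is built to capture.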
NASA Astrophysics Data System (ADS)
Drew, G. H.; Smith, R.; Gerard, V.; Burge, C.; Lowe, M.; Kinnersley, R.; Sneath, R.; Longhurst, P. J.
Odour emissions are episodic, characterised by periods of high emission rates interspersed with periods of low emissions. It is frequently the short-term, high-concentration peaks that cause annoyance in the surrounding population. Dispersion modelling is accepted as a useful tool for odour impact assessment, and two approaches can be adopted. The first, modelling the hourly average concentration, can underestimate peak odour concentrations, resulting in annoyance and complaints. The second involves the use of short averaging times. This study assesses the appropriateness of using different averaging times to model the dispersion of odour from a landfill site. We also examine perception of odour in the community in conjunction with the modelled odour dispersal, using community monitors to record incidents of odour. The results show that with the shorter averaging times, the modelled pattern of dispersal reflects the pattern of observed odour incidents recorded in the community monitoring database, with the modelled odour dispersing further in a north-easterly direction. Therefore, the current regulatory method of dispersion modelling, using hourly averaging times, is less successful at capturing peak concentrations and does not capture the pattern of odour emission indicated by the community monitoring database. The use of short averaging times is therefore of greater value in predicting the likely nuisance impact of an odour source and in framing appropriate regulatory controls.
Back on track. Leading a hospital turnaround.
1993-01-01
Taking a failing hospital from the brink of financial ruin and turning it into a thriving, profitable entity takes more than a dose of good luck. It takes a strong leader who has the vision, determination and skill to execute a successful turnaround and put that hospital back on track. In addition to addressing financial factors, an effective leader will promote a good working relationship between the hospital management, board, staff, patients, caregivers and community members. Following are seven California hospitals that have all faced financial crises in recent years. These hospitals have made the transition out of the red and into the black with the kind of leadership that relies on trust, teamwork, common sense and ingenuity. PMID:10128881
Functional Convex Averaging and Synchronization for Time-Warped Random Curves
Liu, Xueli; Müller, Hans-Georg
Data that can best be described as a sample of curves are now fairly common in science and engineering. …statistics such as a functional mean. Observed random functions in warped time space are represented…
Time domain averaging and correlation-based improved spectrum sensing method for cognitive radio
NASA Astrophysics Data System (ADS)
Li, Shenghong; Bi, Guoan
2014-12-01
Based on the combination of time domain averaging and correlation, we propose an effective time domain averaging and correlation-based spectrum sensing (TDA-C-SS) method used in very low signal-to-noise ratio (SNR) environments. With the assumption that the received signals from the primary users are deterministic, the proposed TDA-C-SS method processes the received samples by a time averaging operation to improve the SNR. Correlation operation is then performed with a correlation matrix to determine the existence of the primary signal in the received samples. The TDA-C-SS method does not need any prior information on the received samples and the associated noise power to achieve improved sensing performance. Simulation results are presented to show the effectiveness of the proposed TDA-C-SS method.
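A hedged sketch of the two ingredients named above, time-domain averaging followed by correlation, is given below. It is not the authors' TDA-C-SS algorithm: the reference waveform, frame length, noise level, and detection threshold are all hypothetical, and a single normalized correlation statistic stands in for the paper's correlation-matrix test.

```python
import math
import random

random.seed(1)
N, M = 64, 400                 # samples per frame, frames averaged
# Known deterministic primary-signal waveform (assumed, per the abstract).
ref = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]

def sense(signal_present, amplitude=0.1, noise_std=1.0):
    """Average M noisy frames, then correlate the average with ref."""
    acc = [0.0] * N
    for _ in range(M):         # time-domain averaging raises SNR by ~sqrt(M)
        for n in range(N):
            s = amplitude * ref[n] if signal_present else 0.0
            acc[n] += s + random.gauss(0.0, noise_std)
    avg = [a / M for a in acc]
    # Normalized correlation of the averaged frame with the reference.
    corr = sum(a * r for a, r in zip(avg, ref)) \
        / math.sqrt(sum(r * r for r in ref))
    return abs(corr)

threshold = 0.2                # hypothetical decision threshold
present, absent = sense(True), sense(False)
print(present > threshold, absent > threshold)
```

Even at an amplitude well below the per-sample noise floor, the averaged-then-correlated statistic separates the two hypotheses cleanly, which is the point of combining the two operations.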
NASA Astrophysics Data System (ADS)
Martin, Nathan; Wrist, Andrew; Zhang, Zheng; Hubner, James
2012-11-01
Air flow over flexible membrane wings can induce vibration. The vibrating nature and the time-averaged curvature of a membrane wing may separately contribute to its improved aerodynamic characteristics compared to a flat plate of similar planform. To assess the effect of the time-averaged shape, a comparison of vibrating membrane wings and the corresponding time-averaged shapes of an aspect-ratio-two planform was conducted for membrane pre-tensions of 1%, 2%, and 4% and various cell aspect ratios. The membrane displacements were recorded using digital image correlation for each model at 6° and 18° angles of attack. The displacements were averaged, imported into CAD software, and printed using rapid prototyping equipment. The lift, drag, and pitching moment coefficients were acquired through wind tunnel testing at Reynolds number 50,000. The results indicate that membrane wings generate more drag but are more efficient than their time-averaged shapes due to greater lift. Funding was provided by NSF REU Grant 1062611 and AFOSR Grant FA9950-10-1-0152.
A fast, time averaging data acquisition system for the PC-AT bus
NASA Astrophysics Data System (ADS)
Ambrosetti, Roberto; Ricci, Domenico
1991-09-01
A data acquisition system designed for magnetic resonance instrumentation is presented. Its performance may be suitable for a broad set of applications where repetitive and noisy signals must be time-averaged in order to increase the signal-to-noise ratio. The system is built from easily available, low-cost parts on a standard board for the PC-AT (16-bit) bus. 12-bit data acquisition and 32-bit time averaging take place in the memory of the PC under control of its CPU. The sustained data rate, including on-line signal averaging, can be as high as 250 kHz (500 kHz with interlaced data acquisitions). Timing is accomplished by on-board software-programmable hardware timers. The CPU is then free for other tasks during the time, if any, between acquisition scans. Thus on-line display of the time-averaged data and their FT spectrum may be performed without impairing the timing accuracy, which is limited by the 10-MHz board clock to 100 ns. All functions of the interface are controlled by machine-language routines callable from within QuickBASIC programs.
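The accumulate-and-divide scheme described above (summing repetitive sweeps into a wide accumulator, then dividing by the scan count) can be sketched numerically. The waveform, noise level, and scan count below are hypothetical; the point is the roughly sqrt(N)-fold SNR gain from averaging N scans.

```python
import math
import random

random.seed(2)
N_SAMPLES, N_SCANS = 128, 256
truth = [math.cos(2 * math.pi * 3 * n / N_SAMPLES) for n in range(N_SAMPLES)]

acc = [0.0] * N_SAMPLES        # plays the role of the 32-bit accumulator
for _ in range(N_SCANS):
    for n in range(N_SAMPLES): # one noisy repetitive sweep per scan
        acc[n] += truth[n] + random.gauss(0.0, 0.5)
avg = [a / N_SCANS for a in acc]

def rms_error(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

# Compare one raw sweep against the 256-scan average.
single = [truth[n] + random.gauss(0.0, 0.5) for n in range(N_SAMPLES)]
ratio = rms_error(single, truth) / rms_error(avg, truth)
print(round(ratio, 1))         # close to sqrt(256) = 16
```

In the real board the accumulator is integer and 32 bits wide precisely so that many 12-bit samples can be summed without overflow before the division.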
Bürgi, Alfred; Scanferla, Damiano; Lehmann, Hugo
2014-01-01
Models for exposure assessment of high frequency electromagnetic fields from mobile phone base stations need the technical data of the base stations as input. One of these parameters, the Equivalent Radiated Power (ERP), is a time-varying quantity, depending on communication traffic. In order to determine temporal averages of the exposure, corresponding averages of the ERP have to be available. These can be determined as duty factors, the ratios of the time-averaged power to the maximum output power according to the transmitter setting. We determine duty factors for UMTS from the data of 37 base stations in the Swisscom network. The UMTS base stations sample contains sites from different regions of Switzerland and also different site types (rural/suburban/urban/hotspot). Averaged over all regions and site types, a UMTS duty factor F ≈ 0.32 ± 0.08 for the 24 h-average is obtained, i.e., the average output power corresponds to about a third of the maximum power. We also give duty factors for GSM based on simple approximations and a lower limit for LTE estimated from the base load on the signalling channels. PMID:25105551
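The duty-factor definition used above is just the ratio of time-averaged power to the maximum configured output power. A minimal sketch with hypothetical hourly ERP readings for one made-up base station (not data from the Swisscom sample):

```python
def duty_factor(power_samples_w, max_power_w):
    """Ratio of the time-averaged power to the maximum output power."""
    return sum(power_samples_w) / len(power_samples_w) / max_power_w

# 24 hypothetical hourly ERP readings (W) for a station with 20 W maximum.
hourly_w = [2, 2, 1, 1, 1, 2, 4, 8, 10, 12, 12, 11,
            10, 11, 12, 12, 11, 10, 9, 8, 6, 5, 4, 3]
print(round(duty_factor(hourly_w, 20.0), 2))
```

Multiplying a worst-case (maximum-power) exposure model by such a factor converts it into a 24 h time-averaged exposure estimate.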
Dynamic stability of the time-averaged orbiting potential trap: Exact classical analysis
NASA Astrophysics Data System (ADS)
Gov, S.; Shtrikman, S.
1999-08-01
We calculate exactly the modes of motion of the time-averaged orbiting potential (TOP) trap with its four degrees of freedom, namely the three translations and the spin, taken into account. We find that, when gravity is neglected, there are two parameters in the problem, namely the angular velocity of the rotating field and its strength. We present the stability diagram in these parameters. We find that the mode frequencies calculated from the time-averaged potential model used by the inventors of the TOP are an excellent approximation to our exact results. However, for other parameters, this may not be the case.
Some opportunities for vibration analysis with time averaging in digital Fresnel holography.
Picart, Pascal; Leval, Julien; Mounier, Denis; Gougeon, Samuel
2005-01-20
Features offered by the combination of time averaging and digital Fresnel holography are investigated. In particular, we introduce the concept of the zero-crossing phase of Bessel fringes, which allows a highly contrasted determination of the dark fringes in the hologram. We discuss some particularities of the digital reconstruction and show how time-averaged digital holography can be used to study vibration drifts. Experimental results are presented for a loudspeaker under sinusoidal excitation; digital and analog holography are also compared. PMID:15717822
Climate Projections Using Bayesian Model Averaging and Space-Time Dependence
Haran, Murali
Climate Projections Using Bayesian Model Averaging and Space-Time Dependence. K. Sham Bhat, Murali Haran, Adam Terando, and Klaus Keller. Abstract: Projections of future climatic changes are a key input to the design of climate change mitigation and adaptation strategies. Current climate change projections…
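The core of Bayesian model averaging is a posterior-weighted average of the individual models' projections, with weights proportional to how well each model explains the data. A hedged sketch of that combination step (not the authors' space-time model; the marginal likelihoods and warming projections below are made up):

```python
def bma(projections, marginal_likelihoods):
    """BMA point projection: likelihood-weighted average of model outputs."""
    z = sum(marginal_likelihoods)
    weights = [l / z for l in marginal_likelihoods]   # posterior model weights
    mean = sum(w * p for w, p in zip(weights, projections))
    return weights, mean

# Three hypothetical climate models projecting warming (deg C), with
# hypothetical marginal likelihoods from a hindcast comparison.
proj = [1.8, 2.4, 3.1]
lik = [0.2, 0.5, 0.3]
weights, mean = bma(proj, lik)
print([round(w, 2) for w in weights], round(mean, 2))
```

A full BMA also propagates within-model uncertainty, so the predictive distribution is a weighted mixture rather than a single number; the sketch shows only the mean.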
Time-averaged fluxes of lead and fallout radionuclides to sediments in Florida Bay
Robbins, J. A., et al. …dated using 210Pb/226Ra, and chronologies were verified by comparing sediment lead temporal records… of the Florida Keys. Dates of sediment lead peaks (1978 ± 2) accord with prior observations of a 6-year lag between…
Widen, Joakim; Waeckelgaard, Ewa [Department of Engineering Sciences, The Aangstroem Laboratory, Uppsala University, P.O. Box 534, SE-751 21 Uppsala (Sweden); Paatero, Jukka; Lund, Peter [Advanced Energy Systems, Helsinki University of Technology, P.O. Box 2200, FI-02015 HUT (Finland)
2010-03-15
The trend of increasing application of distributed generation with solar photovoltaics (PV-DG) suggests that a widespread integration in existing low-voltage (LV) grids is possible in the future. With massive integration in LV grids, a major concern is the possible negative impacts of excess power injection from on-site generation. For power-flow simulations of such grid impacts, an important consideration is the time resolution of demand and generation data. This paper investigates the impact of time averaging on high-resolution data series of domestic electricity demand and PV-DG output and on voltages in a simulated LV grid. The effects of 10-minute and hourly averaging on descriptive statistics and duration curves were determined. Although time averaging has a considerable impact on statistical properties of the demand in individual households, the impact is smaller on aggregate demand, already smoothed from random coincidence, and on PV-DG output. Consequently, the statistical distribution of simulated grid voltages was also robust against time averaging. The overall judgement is that statistical investigation of voltage variations in the presence of PV-DG does not require higher resolution than hourly. (author)
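The effect of block averaging on a high-resolution series can be sketched in a few lines; the 1-minute household demand below is synthetic, and the base load and spike statistics are illustrative assumptions rather than the paper's measured data:

```python
import numpy as np

def block_average(x, n):
    """Average consecutive blocks of n samples (e.g. 1-min data to 10-min or hourly)."""
    m = len(x) // n
    return x[:m * n].reshape(m, n).mean(axis=1)

rng = np.random.default_rng(0)
minutes = 24 * 60
# Hypothetical 1-minute demand: constant base load plus sparse appliance spikes.
demand = 0.2 + (rng.random(minutes) < 0.02) * rng.uniform(1.0, 2.0, minutes)

ten_min = block_average(demand, 10)   # 10-minute averages
hourly = block_average(demand, 60)    # hourly averages

# Averaging preserves the mean of the series but shrinks its variability,
# mirroring the paper's observation that statistics smooth out with averaging.
print(demand.std(), ten_min.std(), hourly.std())
```

The ordering of the printed spreads (hourly below 10-minute below 1-minute) illustrates why hourly resolution can be sufficient for the aggregate voltage statistics studied in the paper.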
An upper bound to time-averaged space-charge limited diode currents
NASA Astrophysics Data System (ADS)
Griswold, M. E.; Fisch, N. J.; Wurtele, J. S.
2010-11-01
The Child-Langmuir law limits the steady-state current density across a one-dimensional planar diode. While it is known that the peak current density can surpass this limit when the boundary conditions vary in time, it remains an open question of whether the average current can violate the Child-Langmuir limit under time-dependent conditions. For the case where the applied voltage is constant but the electric field at the cathode is allowed to vary in time, one-dimensional particle-in-cell simulations suggest that such a violation is impossible. Although a formal proof is not given, an upper bound on the time-averaged current density is offered.
Where the world stands still: turnaround as a strong test of ΛCDM cosmology
Pavlidou, V.; Tomaras, T.N., E-mail: pavlidou@physics.uoc.gr, E-mail: tomaras@physics.uoc.gr [Department of Physics and ITCP, University of Crete, 71003 Heraklion (Greece)
2014-09-01
Our intuitive understanding of cosmic structure formation works best on scales small enough that isolated, bound, relaxed gravitating systems are no longer adjusting their radius, and large enough that space and matter follow the average expansion of the Universe. Yet one of the most robust predictions of ΛCDM cosmology concerns the scale that separates these limits: the turnaround radius, which is the non-expanding shell furthest away from the center of a bound structure. We show that the maximum possible value of the turnaround radius within the framework of the ΛCDM model is, for a given mass M, equal to (3GM/Λc^2)^(1/3), with G Newton's constant and c the speed of light, independently of cosmic epoch, exact nature of dark matter, or baryonic effects. We discuss the possible use of this prediction as an observational test for ΛCDM cosmology. Current data appear to favor ΛCDM over alternatives with local inhomogeneities and no Λ. However, there exist several local-universe structures that have, within errors, reached their limiting size. With improved determinations of their turnaround radii and the enclosed mass, these objects may challenge the limit and ΛCDM cosmology.
Inverse methods for estimating primary input signals from time-averaged isotope profiles
NASA Astrophysics Data System (ADS)
Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.
2005-08-01
Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
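The minimum-length solution of the linear system Am = d described above can be sketched with a pseudoinverse; the simple moving-average matrix, its window width, and the sinusoidal input below are illustrative stand-ins for the true amelogenesis and sampling kernel, not the paper's actual matrix:

```python
import numpy as np

# Hypothetical averaging matrix A: each measured sample is a moving average of
# several consecutive input values (mimicking enamel maturation plus sampling).
n_in, width = 40, 8
n_out = n_in - width + 1
A = np.zeros((n_out, n_in))
for i in range(n_out):
    A[i, i:i + width] = 1.0 / width

# Synthetic seasonal input signal and its time-averaged "measured" profile.
m_true = np.sin(2 * np.pi * np.arange(n_in) / 20)
d = A @ m_true                      # averaging damps the input amplitude

# Minimum-length (minimum-norm) solution of A m = d via the pseudoinverse.
m_est = np.linalg.pinv(A) @ d

# The recovered input reproduces the measured profile when re-averaged.
print(np.abs(A @ m_est - d).max())
```

Among all vectors satisfying Am = d exactly, the pseudoinverse picks the one of smallest Euclidean norm, which is one common choice when the system is underdetermined.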
Real-time fluid transport characterization through direct acquisition of the averaged propagator
NASA Astrophysics Data System (ADS)
Kittler, Wilfred; Hunter, Mark; Galvosas, Petrik
2015-08-01
Brownian motion (diffusion) and coherent flow are fundamental for many natural and artificial processes. Therefore, its accurate measurement and description is highly desirable in many areas of science, engineering, and technology. Currently available methods commonly involve multiple experiments and substantial processing of acquired data. This contribution proposes a theoretical and experimental framework that enables one to directly examine the dynamics of fluid matter subject to diffusion and flow through the acquisition of the so-called averaged propagator. This statistical function holds all information on particle mobility due to flow and diffusion averaged over the observed fluid. The proposed method is based on a single instantaneous nuclear magnetic resonance measurement event. It also removes the need for data postprocessing by capturing the averaged propagator directly as the acquired signal, which enables the monitoring of diffusion and flow in real time.
Zujović, Zoran D; Bowmaker, Graham A
2006-08-01
A relatively rapid phase alternation of the effective field in the time-averaged precession frequency (TAPF) sequence results in averaging of the proton RF spin-lock field. The spin-locking of the proton magnetization becomes less efficient, which shortens T1ρ(H), the proton spin-lattice relaxation time in the rotating frame. The relaxation time also depends on the ratio of the τ1 and τ2 intervals, i.e. τ1/τ2, and not only on the number of τc = τ1 + τ2 blocks, i.e. the number of phase transients. Experiments are performed on solid samples of ferrocene and glycine, and for some time intervals T1ρ(H) is shortened by factors of 9-100 compared to the relaxation times obtained in the standard experiment. PMID:16757196
Average Time Results - Child - Sample Size 500
Brown, Laura E.
[Figure residue: bar chart comparing average runtimes of the MMHC, OR1 (k=5, 10, 20), OR2 (k=5, 10, 20), SC (k=5, 10), GS, PC, TPDA, and GES algorithms on the Child network with sample size 500.]
Convergence of time averages of weak solutions of the three-dimensional Navier-Stokes equations
Ciprian Foias; Ricardo M. S. Rosa; Roger Temam
2015-03-11
Using the concept of stationary statistical solution, which generalizes the notion of invariant measure, it is proved that, in a suitable sense, time averages of almost every Leray-Hopf weak solution of the three-dimensional incompressible Navier-Stokes equations converge as the averaging time goes to infinity. This system of equations is not known to be globally well-posed, and the above result answers a long-standing problem, extending to this system a classical result from ergodic theory. It is also shown that, from a measure-theoretic point of view, the stationary statistical solution obtained from a generalized limit of time averages is independent of the choice of the generalized limit. Finally, any Borel subset of the phase space with positive measure with respect to a stationary statistical solution is such that for almost all initial conditions in that Borel set and for at least one Leray-Hopf weak solution starting with that initial condition, the corresponding orbit is recurrent to that Borel subset and its mean sojourn time within that Borel subset is strictly positive.
A Stochastic Model of Space-Time Variability of Tropical Rainfall: I. Statistics of Spatial Averages
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)
2002-01-01
Global maps of rainfall are of great importance in connection with modeling of the Earth's climate. Comparison between the maps of rainfall predicted by computer-generated climate models and observations provides a sensitive test for these models. To make such a comparison, one typically needs the total precipitation amount over a large area, which could be hundreds of kilometers in size, over extended periods of time, of order days or months. This presents a difficult problem, since rain varies greatly from place to place as well as in time. Remote sensing methods using ground radar or satellites detect rain over a large area by essentially taking a series of snapshots at infrequent intervals and indirectly deriving the average rain intensity within a collection of pixels, usually several kilometers in size. They measure the area average of rain at a particular instant. Rain gauges, on the other hand, record rain accumulation continuously in time but only over a very small area tens of centimeters across, say, the size of a dinner plate. They measure only a time average at a single location. In making use of either method one needs to fill in the gaps in the observation, either the gaps in the area covered or the gaps in time of observation. This involves using statistical models to obtain information about the rain that is missed from what is actually detected. This paper investigates such a statistical model and validates it with rain data collected over the tropical western Pacific from shipborne radars during TOGA COARE (Tropical Oceans Global Atmosphere Coupled Ocean-Atmosphere Response Experiment). The model incorporates a number of commonly observed features of rain. While rain varies rapidly with location and time, the variability diminishes when averaged over larger areas or longer periods of time. Moreover, rain is patchy in nature: at any instant, on average only a certain fraction of the observed pixels contain rain.
The fraction of area covered by rain decreases as the size of a pixel becomes smaller. This means that within what looks like a patch of rainy area in a coarse-resolution view with larger pixel size, one finds clusters of rainy and dry patches when viewed on a finer scale. The model makes definite predictions about how these and other related statistics depend on the pixel size. These predictions were found to agree well with data. In a subsequent second part of the work we plan to test the model with rain gauge data collected during the TRMM (Tropical Rainfall Measuring Mission) ground validation campaign.
P2D-1 High Contrast Ultrasound Imaging by Motion-Compensated Time-Averaging Method
H. Yoshikawa; T. Azuma; K. Sasaki; K. Kawabata; C. Xu; Siddhartha Sikdar; Ravi Managuli; Yongmin Kim; S. Umemura
2006-01-01
Motion-compensated time-averaged imaging (MTI), a method of accumulating frames with a motion-compensation technique based on small subregions, was applied to improve power Doppler and B-mode imaging. In our previous study, MTI was shown to clearly image vasculature with contrast agents, without image blurring, even when the target tissue was moving [Yoshikawa, H, 2006]. The purpose of this
Analysis of cardiac left-ventricular volume based on time warping averaging
E. G. Caiani; A. Porta; G. Baselli; M. Turiel; S. Muzzupappa; M. Pagani; A. Malliani; S. Cerutti
2002-01-01
The cardiac left-ventricular (LV) volume signal, obtained by acoustic quantification, is affected by noise and respiratory modulation, resulting in a large beat-to-beat variability that affects the computation of LV function indices. A new method is proposed to improve the evaluation of LV indices by applying a signal-averaging technique based on dynamic time warping to consecutive LV volume waveforms. Volume
Effects of time averaging on optical scintillation in a ground-to-satellite atmospheric propagation.
Toyoshima, M; Araki, K
2000-04-20
The temporal characteristics of the variance of turbulence-induced log-intensity fluctuations are obtained. The variance of the optical fluctuation is reduced when the optical signals are integrated in a photodetector, and we express the index of reduction (called the time-averaging factor) using an autocovariance function of the optical fluctuation. The optical fluctuations for a ground-to-satellite path are caused by both atmospheric turbulence and the beam-pointing jitter error of the optical transmitter. The turbulence-induced optical scintillation can be discriminated from the fluctuation due to the beam-pointing jitter error; a comparison based on the probability density function of the optical signal shows good agreement. The temporal autocovariance functions of optical scintillation are obtained and used to calculate the time-averaging factor. The analytically expected effects of time averaging are verified by the experimental results. These estimates contribute to the link-budget design for the optical tracking channel through atmospheric turbulence. PMID:18345087
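As a rough sketch of the idea, a variance-reduction factor for a finite integration time T can be computed from the normalized autocovariance. This is a standard textbook expression that may differ in normalization from the paper's exact definition of the time-averaging factor, and the exponential covariance and 5 ms correlation time below are assumptions:

```python
import numpy as np

def averaging_factor(cov, T, n=20000):
    """Variance reduction for a T-second integration of a stationary process
    with autocovariance cov(tau): (2/T^2) * integral_0^T (T - tau) cov(tau) dtau,
    normalized by cov(0)."""
    tau = np.linspace(0.0, T, n)
    w = (T - tau) * cov(tau)
    dt = tau[1] - tau[0]
    integral = ((w[0] + w[-1]) / 2.0 + w[1:-1].sum()) * dt   # trapezoid rule
    return 2.0 * integral / (T**2 * cov(0.0))

tau_c = 5e-3                                  # assumed correlation time (5 ms)
cov = lambda t: np.exp(-np.abs(t) / tau_c)    # assumed exponential autocovariance

# Longer integration times give stronger suppression of the scintillation variance.
for T in (1e-4, 1e-3, 1e-2, 1e-1):
    print(T, averaging_factor(cov, T))
```

For an exponential autocovariance this has the closed form 2(tau_c/T)^2 (T/tau_c - 1 + exp(-T/tau_c)), which tends to 1 for integration times much shorter than the correlation time.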
Real-Time Impulse Noise Suppression from Images Using an Efficient Weighted-Average Filtering
NASA Astrophysics Data System (ADS)
Hosseini, Hossein; Hessar, Farzad; Marvasti, Farokh
2015-08-01
In this paper, we propose a method for real-time high density impulse noise suppression from images. In our method, we first apply an impulse detector to identify the corrupted pixels and then employ an innovative weighted-average filter to restore them. The filter takes the nearest neighboring interpolated image as the initial image and computes the weights according to the relative positions of the corrupted and uncorrupted pixels. Experimental results show that the proposed method outperforms the best existing methods in both PSNR measure and visual quality and is quite suitable for real-time applications.
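A minimal sketch of the detect-then-restore idea, not the authors' exact detector or weighting scheme: extreme-valued pixels are flagged as impulses and replaced by a distance-weighted average of uncorrupted pixels in a small window. The window radius and inverse-distance weights are illustrative assumptions:

```python
import numpy as np

def restore_impulse(img, low=0, high=255):
    """Flag pixels at the extreme values as corrupted and replace each by a
    distance-weighted average of uncorrupted pixels in its neighborhood."""
    corrupted = (img == low) | (img == high)
    out = img.astype(float).copy()
    h, w = img.shape
    r = 2  # window radius (assumed)
    for y, x in zip(*np.nonzero(corrupted)):
        ys, xs = np.mgrid[max(0, y - r):min(h, y + r + 1),
                          max(0, x - r):min(w, x + r + 1)]
        ok = ~corrupted[ys, xs]
        if ok.any():
            d = np.hypot(ys - y, xs - x)[ok]
            wgt = 1.0 / (d + 1e-9)          # closer uncorrupted pixels weigh more
            out[y, x] = np.sum(wgt * img[ys, xs][ok].astype(float)) / wgt.sum()
    return out

clean = np.full((8, 8), 100, dtype=np.uint8)
noisy = clean.copy()
noisy[3, 3] = 255                            # one salt pixel
print(restore_impulse(noisy)[3, 3])          # restored value is close to 100
```

The paper's method additionally seeds the restoration with a nearest-neighbor interpolated image, which matters at the high noise densities it targets; this sketch omits that step.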
Luque-Vasquez, Fernando Minjarez-Sosa, J. Adolfo Rosas-Rosas, Luz del Carmen
2010-06-15
This paper deals with a class of semi-Markov control models with Borel state and control spaces, possibly unbounded costs, and unknown holding-time distribution F. Assuming that F does not depend on state-action pairs, we combine suitable methods of statistical estimation of the mean holding time with control procedures to construct an average-cost optimal Markovian policy π̂ = {f_n} and an optimal stationary policy {f_∞}, where f_n converges to f_∞ in the sense of Schäl.
Analytical study of flow phenomena in SSME turnaround duct geometries
NASA Astrophysics Data System (ADS)
McLallin, K. L.
1985-04-01
The SSME fuel turbopump hot gas manifold was identified as a source of loss and flow distortion which significantly affects the performance and durability of both the drive turbine and the LOX injector area of the main combustion chamber. Two current SSME geometries were studied, the full power level (FPL) and the first manned orbital flight (FMOF) configuration. The effects of turnaround duct geometry on flow losses and distortions, by varying wall curvature and flow area variation in the 180 deg turnaround region were examined. The effects of the duct inlet flow phenomena such as the radial distortion of the inlet flow and inlet swirl level on turnaround duct performance were also investigated. It is shown that of the two current geometries, the FMOF configuration had lower pressure losses and generated less flow distortion, but had a small flow separation bubble at the 180 deg turnaround exit. It is found that by optimizing wall curvature and flow diffusion in the turnaround, improved duct performance can be achieved.
TIME-AVERAGE-BASED METHODS FOR MULTI-ANGULAR SCALE ANALYSIS OF COSMIC-RAY DATA
Iuppa, R. [Dipartimento di Fisica dell'Università 'Tor Vergata' di Roma, via della Ricerca Scientifica 1, I-00133 Roma (Italy)]; Di Sciascio, G., E-mail: roberto.iuppa@roma2.infn.it, E-mail: giuseppe.disciascio@roma2.infn.it [Istituto Nazionale di Fisica Nucleare, Sezione di Roma Tor Vergata, via della Ricerca Scientifica 1, I-00133 Roma (Italy)]
2013-04-01
Over the past decade, a number of experiments dealt with the problem of measuring the arrival direction distribution of cosmic rays, looking for information on the propagation mechanisms and the identification of their sources. Any deviation from isotropy may be regarded as a signature of unforeseen or unknown phenomena, especially if well localized in the sky and occurring at low rigidity. This induced experimenters to search for excesses down to angular scales as narrow as 10°, raising the issue of properly filtering contributions from wider structures. A solution commonly envisaged was based on time-average methods to determine the reference value of the cosmic-ray flux. Such techniques are nearly insensitive to signals wider than the time window in use, thus allowing the analysis to focus on medium- and small-scale signals. Nonetheless, the signal often cannot be excluded in the calculation of the reference value, which induces systematic errors. The use of time-average methods recently revealed important discoveries about the medium-scale cosmic-ray anisotropy, present both in the northern and southern hemispheres. It is known that the excess (or deficit) is observed as less intense than in reality and that fake deficit zones are rendered around true excesses because of the absolute lack of a priori knowledge of which signal is true and which is not. This work is an attempt to critically review the use of time-average-based methods for observing extended features in the cosmic-ray arrival distribution pattern.
NASA Astrophysics Data System (ADS)
Young, Christopher; Lucca Fabris, Andrea; Gascon, Nicolas; Cappelli, Mark
2014-10-01
An extensive experimental campaign characterizes a 70 mm diameter stationary plasma thruster operating on xenon in the 200-500 W power range. This study resolves both time-averaged properties and oscillatory phenomena in the plasma discharge. Specifically, we explore the time variation of the plume ion velocity field referenced to periodic discharge current oscillations using time-synchronized laser induced fluorescence (LIF) measurements. This LIF scheme relies on a triggered signal acquisition gate locked at a given phase of the current oscillation period. The laser is modulated at a characteristic frequency and homodyne detection through a lock-in amplifier extracts the induced fluorescence signal out of the bright background emission. This work is sponsored by the U.S. Air Force Office of Scientific Research with Dr. Mitat Birkan as program manager. CVY acknowledges support from the DOE NNSA Stewardship Science Graduate Fellowship under Contract DE-FC52-08NA28752.
Time-Averaged and Oscillatory Characterization of a Hall Plasma Discharge
NASA Astrophysics Data System (ADS)
Young, Chris V.; Lucca Fabris, Andrea; Gascon, Nicolas; Cappelli, Mark A.
2014-10-01
We characterize a 70 mm diameter stationary plasma thruster operating on xenon at 200-500 W using nonintrusive laser measurements. This study resolves both time-averaged properties and oscillatory phenomena in the plasma discharge. Specifically, we explore how the plume ion velocity field evolves in time with respect to periodic discharge current oscillations using time-synchronized laser induced fluorescence (LIF) techniques. In this LIF scheme, a triggered signal acquisition gate is locked at a given phase of the current oscillation period, allowing for drift in the oscillation. The laser is modulated at a characteristic frequency and the induced fluorescence signal is extracted out of the bright background emission using homodyne detection with a lock-in amplifier.
A Simple Law for the Average Time History of Gamma-Ray Bursts and Their Time Dilations
Boris E. Stern
1995-09-29
Individual gamma-ray bursts (GRBs) have very diverse time behavior, from a single pulse to a long complex sequence of chaotic pulses of different timescales. I studied light curves of GRBs using data from the CGRO's BATSE experiment and found that the average post-peak time history for a sample of 460 bursts obeys a unique and simple analytical law: $I \sim \exp(-(t/t_0)^{1/3})$, where $t$ is time measured from the peak of the event and $t_0$ is a constant ranging from 0.3 sec for strong bursts to $\sim 1$ sec for weak bursts. The average peak-aligned profile follows this law with good accuracy in the whole range available for analysis (from fractions of a second to $\sim$ 150 seconds after the peak). Such a law, with a single time constant characterising the overall sample of GRBs, should have important physical meaning. The dependence of $t_0$ on the brightness of GRBs is presented. The fact that $t_0$ depends on the brightness apparently confirms the recently discovered effect of time dilation of weak bursts, which has a possible cosmological interpretation. The time dilation is detected at a confidence level of $7\sigma$ and is slightly larger than was previously reported.
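The quoted law is straightforward to evaluate; the two t0 values below follow the strong-burst and weak-burst constants quoted in the abstract:

```python
import numpy as np

def avg_profile(t, t0):
    """Stern's average post-peak GRB light curve, I(t) = exp(-(t/t0)**(1/3))."""
    return np.exp(-(np.asarray(t, float) / t0) ** (1.0 / 3.0))

t = np.array([0.0, 1.0, 10.0, 150.0])   # seconds after the peak
strong = avg_profile(t, 0.3)            # t0 ~ 0.3 s for strong bursts
weak = avg_profile(t, 1.0)              # t0 ~ 1 s for weak bursts

# At every post-peak time the weak-burst profile sits above the strong-burst
# one, i.e. it decays more slowly: the time-dilation effect discussed above.
print(strong)
print(weak)
```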
NASA Astrophysics Data System (ADS)
Markelov, Denis A.; Matveev, Vladimir V.; Ingman, Petri; Lähderanta, Erkki; Boiko, Natalia I.
2011-09-01
A new theoretical description of the interior mobility of carbosilane dendrimers has been tested. Experiments were conducted using measurements of the 1H NMR spin-lattice relaxation time, T1H, of two-, three- and four-generation carbosilane dendrimers with three different types of terminal groups in dilute chloroform solutions. Temperature dependences of the NMR relaxation rate, 1/T1H, were obtained for the internal CH2 groups of the dendrimers in the range of the 1/T1H maximum, allowing us to directly evaluate the average time of the internal spectrum for each dendrimer. It was found that the temperature of the 1/T1H maximum is practically independent of the number of generations, G; therefore, the theoretical prediction was confirmed experimentally. In addition, the average time of the internal spectrum of carbosilane dendrimers was found to be near 0.2 ns at room temperature, and this value correlates well with the values previously obtained for other dendrimer structures using other experimental techniques.
Asynchronous input gear damage diagnosis using time averaging and wavelet filtering
NASA Astrophysics Data System (ADS)
Jafarizadeh, M. A.; Hassannejad, R.; Ettefagh, M. M.; Chitsaz, S.
2008-01-01
Vibration signals are often used for fault diagnosis in mechanical systems because they contain dynamic information about mechanical elements. Vibration signals from a gearbox are usually noisy and the signal-to-noise ratio (SNR) is so low that feature extraction of signal components is very difficult, especially in practical situations. One solution to this problem is applying signal time-averaging techniques in the time domain for signal denoising, but this method is only possible when the gearbox input shaft rotation is constant or synchronous. In this paper, a new noise-canceling method, based on the time-averaging method for asynchronous input, is developed, and then a complex Morlet wavelet is implemented for feature extraction and diagnosis of different kinds of local gear damage. The complex Morlet wavelet used in this work is adaptive because its parameters are not fixed. The proposed method is implemented on a simulated signal and on a real test rig of a Yamaha motorcycle gearbox. Both simulation and experimental results have proved that the method is very promising in analysis of the signal and fault diagnosis of gearboxes.
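For the synchronous (constant-speed) case that the paper generalizes, classical time-synchronous averaging can be sketched as follows; the tooth count, record length, and noise level are illustrative assumptions:

```python
import numpy as np

def synchronous_average(signal, period):
    """Classical time-synchronous averaging: fold the vibration record into
    revolutions of `period` samples and average, attenuating asynchronous noise."""
    n = len(signal) // period
    return signal[:n * period].reshape(n, period).mean(axis=0)

rng = np.random.default_rng(2)
period, revs = 200, 100
phase = 2 * np.pi * np.arange(period) / period
mesh = np.sin(8 * phase)                       # gear-mesh component, 8 teeth (assumed)
raw = np.tile(mesh, revs) + rng.normal(0.0, 1.0, period * revs)

avg = synchronous_average(raw, period)
# Residual noise power drops roughly by the number of averaged revolutions.
print(np.mean((avg - mesh) ** 2))
```

When the shaft speed drifts, the revolutions no longer line up sample-for-sample, which is exactly the asynchronous-input problem the paper's method addresses before applying the wavelet analysis.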
Berman, G P; Chernobrod, B M; Gorshkov, V N; Lizon, D C; Moody, D I; Nguyen, D C; Torous, S V
2009-01-01
We demonstrate experimentally and numerically that the application of a partially coherent beam (PCB) in combination with time averaging leads to a significant reduction in the scintillation index. We use a simplified experimental approach in which the atmospheric turbulence is simulated by a phase diffuser. The role of the speckle size, the amplitude of the phase modulation, and the strength of the atmospheric turbulence are examined. We obtain good agreement between our numerical simulations and our experimental results. This study provides a useful foundation for future applications of PCB-based methods of scintillation reduction in physical atmospheres.
Time Average Holography Study of Human Tympanic Membrane with Altered Middle Ear Ossicular Chain
NASA Astrophysics Data System (ADS)
Cheng, Jeffrey T.; Ravicz, Michael E.; Rosowski, John J.; Hulli, Nesim; Hernandez-Montes, Maria S.; Furlong, Cosme
2009-02-01
Computer-assisted time average holographic interferometry was used to study the vibration of the human tympanic membrane (TM) in cadaveric temporal bones before and after alterations of the ossicular chain. Simultaneous laser Doppler vibrometer measurements of stapes velocity were performed to estimate the conductive hearing loss caused by ossicular alterations. The quantified TM motion described from holographic images was correlated with stapes velocity to define relations between TM motion and stapes velocity in various ossicular disorders. The results suggest that motions of the TM are relatively uncoupled from stapes motion at frequencies above 1000 Hz.
Palevicius, Paulius; Ragulskis, Minvydas; Palevicius, Arvydas; Ostasevicius, Vytautas
2014-01-01
Movable microsystem components are investigated optically using time-averaged holography in this paper. It is shown that even a harmonic excitation of a non-linear microsystem may result in unpredictable chaotic motion. Analytical relations between the parameters of the chaotic oscillations and the formation of time-averaged fringes provide a deeper insight into computational and experimental interpretation of time-averaged MEMS holograms. PMID:24451467
Comani, Silvia; Alleva, Giovanna
2007-01-01
Fetal cardiac time intervals (fCTI) are dependent on fetal growth and development, and may reveal useful information for fetuses affected by growth retardation, structural cardiac defects or long QT syndrome. Fetal cardiac signals with a signal-to-noise ratio (SNR) of at least 15 dB were retrieved from fetal magnetocardiography (fMCG) datasets with a system based on independent component analysis (ICA). An automatic method was used to detect the onset and offset of the cardiac waves on single cardiac cycles of each signal, and the fCTI were quantified for each heartbeat; long rhythm strips were used to calculate average fCTI and their variability for single fetal cardiac signals. The aim of this work was to compare the outcomes of this system with the estimates of fCTI obtained with a classical method based on the visual inspection of averaged beats. No fCTI variability can be measured from averaged beats. A total of 25 fMCG datasets (fetal age from 22 to 37 weeks) were evaluated, and 1768 cardiac cycles were used to compute fCTI. The real differences between the values obtained with single-cycle analysis and visual inspection of averaged beats were very small for all fCTI. They were comparable with the signal resolution (±1 ms) for the QRS complex and QT interval, and always <5 ms for the PR interval, ST segment and T wave. The coefficients of determination between the fCTI estimated with the two methods ranged between 0.743 and 0.917. Conversely, inter-observer differences were larger, and the related coefficients of determination ranged between 0.463 and 0.807, confirming the high performance of the automated single-cycle analysis, which is also rapid and unaffected by observer-dependent bias. PMID:17151419
Time-averaged fluxes of lead and fallout radionuclides to sediments in Florida Bay
Robbins, J.A.; Holmes, C.; Halley, R.; Bothner, M.; Shinn, E.; Graney, J.; Keeler, G.; TenBrink, M.; Orlandini, K.A.; Rudnick, D.
2000-01-01
Recent, unmixed sediments from mud banks of central Florida Bay were dated using 210Pb/226Ra, and chronologies were verified by comparing sediment lead temporal records with Pb/Ca ratios in annual layers of coral (Montastrea annularis) located on the ocean side of the Florida Keys. Dates of sediment lead peaks (1978 ± 2) accord with prior observations of a 6 year lag between the occurrence of maximum atmospheric lead in 1972 and peak coral lead in 1978. Smaller lags of 1-2 years occur between the maximum atmospheric radionuclide fallout and peaks in sediment temporal records of 137Cs and Pu. Such lags are consequences of system time averaging (STA), in which atmospherically delivered particle-associated constituents accumulate and mix in a (sedimentary?) reservoir before transferring to permanent sediments and coral. STA model calculations, using time-dependent atmospheric inputs, produced optimized profiles in excellent accord with measured sediment 137Cs, Pu, lead, and coral lead distributions. Derived residence times of these particle tracers (16 ± 1, 15.7 ± 0.7, 19 ± 3, and 16 ± 2 years, respectively) are comparable despite differences in sampling locations, in accumulating media, and in element loading histories and geochemical properties. For a 16 year weighted mean residence time, STA generates the observed 6 year lead peak lag. Evidently, significant levels of nondegradable, particle-associated contaminants can persist in Florida Bay for many decades following elimination of external inputs. Present results, in combination with STA model analysis of previously reported radionuclide profiles, suggest that decade-scale time averaging may occur widely in recent coastal marine sedimentary environments. Copyright 2000 by the American Geophysical Union.
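The system-time-averaging idea can be sketched as a first-order reservoir: a hypothetical single-exponential residence-time kernel (one possible reading of the model, not necessarily the authors' exact formulation) convolved with a stylized atmospheric input produces a multi-year lag between atmospheric and sediment peaks. The Gaussian input history is illustrative, so the resulting lag differs from the paper's 6 year value:

```python
import numpy as np

def reservoir_output(inp, tau):
    """Convolve an annual input series with a first-order reservoir kernel
    (1/tau) * exp(-t/tau), mean residence time tau in years (assumed model)."""
    t = np.arange(len(inp), dtype=float)
    kernel = np.exp(-t / tau) / tau
    out = np.convolve(inp, kernel)[:len(inp)]
    return out / out.max()

# Stylized atmospheric lead input peaking in 1972 (years 1950-2000).
years = np.arange(1950, 2001)
inp = np.exp(-0.5 * ((years - 1972) / 8.0) ** 2)

out = reservoir_output(inp, tau=16.0)
lag = years[np.argmax(out)] - years[np.argmax(inp)]
print(lag)   # a multi-year lag between atmospheric and sediment peaks
```

The exact lag depends on the shape of the input history as well as on tau, which is why the paper fits the model to the measured 137Cs, Pu, and lead profiles rather than assuming a fixed offset.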
High-frequency sampling and kernel estimation for continuous-time moving average processes
Brockwell, Peter; Klüppelberg, Claudia
2011-01-01
Interest in continuous-time processes has increased rapidly in recent years, largely because of the high-frequency data available in many areas of application, particularly in finance and turbulence. We develop a method for estimating the kernel function of a continuous-time moving average (CMA) process $Y$ which takes advantage of the high frequency of the data. In order to do so we examine the relation between the CMA process $Y$ and the discrete-time process $Y^\Delta$ obtained by sampling $Y$ at times which are integer multiples of some small positive $\Delta$. In particular we derive asymptotic results as $\Delta \downarrow 0$ which generalize results of \cite{bfk:2011:1} for high-frequency sampling of CARMA processes. We propose an estimator of the continuous-time kernel based on observations of $Y^\Delta$, investigate its properties and illustrate its performance using simulated data. Particular attention is paid to the performance of the estimator as $\Delta \downarrow 0$. Time-domain and frequency-doma...
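The relation between a CMA process and its sampled version can be illustrated with the simplest case, the Ornstein-Uhlenbeck (CAR(1)) process: sampling at spacing $\Delta$ yields an AR(1) series with coefficient $e^{-\lambda\Delta}$. This is a standard textbook fact used as a sketch, not the paper's general kernel estimator:

```python
import numpy as np

def sample_ou(lam, sigma, dt, n, seed=0):
    """Exact discrete-time sampling of an OU (CAR(1)) process at spacing dt."""
    rng = np.random.default_rng(seed)
    a = np.exp(-lam * dt)                            # AR(1) coefficient of the sampled series
    noise_sd = sigma * np.sqrt((1 - a**2) / (2 * lam))
    eps = rng.normal(0.0, noise_sd, n)
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma / np.sqrt(2 * lam))  # stationary initial condition
    for i in range(1, n):
        x[i] = a * x[i - 1] + eps[i]
    return x

def lag1_autocorr(x):
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

lam, dt = 2.0, 0.05
x = sample_ou(lam, 1.0, dt, 200_000)
print(lag1_autocorr(x), np.exp(-lam * dt))  # estimate vs theoretical e^(-lambda*dt)
```

As $\Delta \downarrow 0$ the AR(1) coefficient approaches 1, which is the regime the paper's asymptotics address.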
Time-averaged current analysis of a thunderstorm using ground-based measurements
NASA Technical Reports Server (NTRS)
Driscoll, Kevin T.; Blakeslee, Richard J.; Koshak, William J.
1994-01-01
The amount of upward current provided to the ionosphere by a thunderstorm that appeared over the Kennedy Space Center (KSC) on July 11, 1978, is reexamined using an analytic equation that describes a bipolar thunderstorm's current contribution to the global circuit in terms of its generator current, lightning currents, the altitudes of its charge centers, and the conductivity profile of the atmosphere. Ground-based measurements, which were obtained from a network of electric field mills positioned at various distances from the thunderstorm, were used to characterize the electrical activity inside the thundercloud. The location of the lightning discharges, the type of lightning, and the amount of charge neutralized during this thunderstorm were computed through a least squares inversion of the measured changes in the electric fields following each lightning discharge. These measurements provided the information necessary to implement the analytic equation, and consequently, a time-averaged estimate of this thunderstorm's current contribution to the global circuit was calculated. From these results the amount of conduction current supplied to the ionosphere by this small thunderstorm was computed to be less than 25% of the time-averaged generator current that flowed between the two vertically displaced charge centers.
NASA Astrophysics Data System (ADS)
Basu, Santasri; McCrae, Jack E.; Fiorino, Steven T.
2015-05-01
A time-lapse imaging experiment was conducted to monitor the effects of the atmosphere over a period of time. A tripod-mounted digital camera captured images of a distant building every minute. Correlation techniques were used to calculate the position shifts between the images. Two factors cause shifts between the images: atmospheric turbulence, which makes the images move randomly and quickly, and changes in the average refractive-index gradient along the path, which make the images move vertically, more slowly, and perhaps in noticeable correlation with solar heating and other weather conditions. A technique for estimating the path-averaged C_n^2 from the random component of the image motion is presented here. The technique uses a derived set of weighting functions that depend on the size of the imaging aperture and the patch size in the image whose motion is being tracked. Since this technique is phase based, it can be applied to strong-turbulence paths where traditional irradiance-based techniques suffer from saturation effects.
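The correlation step — locating the shift between two image patches — can be sketched in one dimension: the displacement is the argmax of the cross-correlation, conveniently computed with FFTs. This is an illustrative toy with an integer shift, not the authors' weighting-function analysis or subpixel tracker:

```python
import numpy as np

# A synthetic "image patch" row and a copy displaced by a known amount.
rng = np.random.default_rng(1)
ref = rng.normal(size=512)
shift_true = 7
moved = np.roll(ref, shift_true)

# Circular cross-correlation via FFT; its argmax recovers the shift.
xcorr = np.fft.ifft(np.fft.fft(moved) * np.conj(np.fft.fft(ref))).real
shift_est = int(np.argmax(xcorr))
if shift_est > len(ref) // 2:  # map the circular index to a signed shift
    shift_est -= len(ref)
print(shift_est)  # 7
```

Tracking such shifts frame to frame separates the fast random (turbulence) component from the slow vertical (refraction) component described above.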
NASA Technical Reports Server (NTRS)
Chelton, Dudley B.; Schlax, Michael G.
1991-01-01
The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a general formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes the suboptimal estimate a viable practical alternative to the composite average method generally employed at present.
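The composite-versus-optimal comparison can be made concrete by computing the theoretical mean squared errors of both linear estimates for a signal with an assumed exponential autocovariance. The covariance model, sampling times, and parameter values below are hypothetical stand-ins, not the paper's chlorophyll statistics:

```python
import numpy as np

def estimator_mses(t_obs, T, sig_s2, tau, sig_n2):
    """Theoretical MSEs of two linear estimates of the time average over [0, T]
    of a signal with autocovariance sig_s2*exp(-|dt|/tau), observed at irregular
    times t_obs with additive white measurement noise of variance sig_n2."""
    t = np.asarray(t_obs, dtype=float)
    m = len(t)
    # Covariance matrix of the observations
    C = sig_s2 * np.exp(-np.abs(t[:, None] - t[None, :]) / tau) + sig_n2 * np.eye(m)
    # Covariance of each observation with the true time average (closed form)
    c = sig_s2 * (tau / T) * (2 - np.exp(-t / tau) - np.exp(-(T - t) / tau))
    # Variance of the true time average
    r = tau / T
    var_avg = 2 * sig_s2 * r**2 * (T / tau - 1 + np.exp(-T / tau))
    # Composite average: equal weights on all observations
    w_comp = np.full(m, 1.0 / m)
    mse_comp = var_avg - 2 * (w_comp @ c) + w_comp @ C @ w_comp
    # Optimal (Gauss-Markov) linear estimate: weights w = C^{-1} c
    w_opt = np.linalg.solve(C, c)
    mse_opt = var_avg - c @ w_opt
    return mse_comp, mse_opt

t_obs = [0.5, 1.1, 4.0, 4.2, 9.3]  # hypothetical irregular sampling times
mse_c, mse_o = estimator_mses(t_obs, T=10.0, sig_s2=1.0, tau=3.0, sig_n2=0.2)
print(mse_c, mse_o)  # the optimal estimate never does worse
```

By construction the optimal weights minimize the quadratic MSE, so `mse_o <= mse_c` for any sampling pattern; the paper's point is that suboptimal weights built from approximate statistics come close to this bound.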
Key Decisions of a First-Year "Turnaround" Principal
ERIC Educational Resources Information Center
Duke, Daniel; Salmonowicz, Michael
2010-01-01
This article examines the decisions made by one principal in her first year as a school turnaround specialist in a low-performing urban elementary school. Researchers focused on decisions related to the principal's three high-priority concerns: (1) elimination of an ineffective instructional program; (2) creation of a culture of teacher…
Negotiation Areas for "Transformation" and "Turnaround" Intervention Models
ERIC Educational Resources Information Center
Mass Insight Education (NJ1), 2011
2011-01-01
To receive School Improvement Grant (SIG) funding, districts must submit an application to the state that outlines their strategic plan to implement one of four intervention models in their persistently lowest-achieving schools. The four intervention models include: (1) School Closure; (2) Restart; (3) Turnaround; and (4) Transformation. The…
Can Principals' Emotional Intelligence Matter to School Turnarounds?
ERIC Educational Resources Information Center
Cai, Qijie
2011-01-01
Through the literature review, the study aims to explore the relationship between the emotional intelligence (EI) of principals and the turnarounds of low-performing schools and generate some hypotheses regarding this issue. Rigorous, empirical studies published in the last 15 years are selected and reviewed around three relevant topics: the…
The Turnaround Mindset: Aligning Leadership for Student Success
ERIC Educational Resources Information Center
Fairchild, Tierney Temple; DeMary, Jo Lynne
2011-01-01
This book provides a valuable balance between what one must know and what one must do to turn around low-performing schools. The 3-E framework simplifies this complex process by focusing resources on the environment, the executive, and the execution of the turnaround plan. Central to each of these components is a spotlight on the values supporting…
A miniature turn-around for distributed temperature sensing
NASA Astrophysics Data System (ADS)
Sun, Xiaoguang; Burgess, David T.; Bedard, Kyle; Li, Jie; Hines, Mike
2015-05-01
In many fiber optic distributed temperature sensing (DTS) systems, a dual-ended configuration can correct the temperature measurement error associated with wavelength dependent loss (WDL) of the optical fiber and can provide a more accurate temperature measurement in comparison with a single-ended fiber system. In this configuration, two pieces of fiber are laid parallel to each other and connected at the distal end by a turn-around device, creating a U-shaped optical path that provides accessibility to both legs from the proximal end of the system. In many applications, tightly confined spaces constrain the fiber bend diameter and thus the size of the turn-around device. In this paper we report a miniature turn-around built with a short section of a graded index (GI) fiber. The device measures less than 300 µm in diameter and less than 2 mm in length. The insertion loss of the miniature turn-around is measured and compared with theoretical simulations.
Rethinking the nonmetropolitan turnaround: renewed rural growth or extended urbanization?
Wang, Xiaodong
2007-04-25
This dissertation proposes a new, synthesized perspective for explaining the "Nonmetropolitan Turnaround" in the 1970s and 1990s. By studying the definition of urbanization carefully, using the human ecological perspective, many processes...
Turnaround: Leading Stressed Colleges and Universities to Excellence
ERIC Educational Resources Information Center
Martin, James; Samels, James E.
2008-01-01
Nearly one thousand colleges and universities in the United States face major challenges--from catastrophic hurricanes to loss of accreditation to sagging enrollment. What can leaders of such at-risk institutions do to improve their situation? "Turnaround" gives college and university leaders the tools they need to put their fragile institutions…
Long-term Turnaround Planning for Integrated Chemical Sites
Grossmann, Ignacio E.
Amaran, Satya; Zhang, Tong; Sahinidis, Nick (CMU); Sharda, Bikram; Morrison, Matt; Bury, Scott; Miller, Scott; Wassick, John (The Dow Chemical Company)
Problem statement: exploit network interactions, storage availability, and prices.
129. Back side technical facilities passageways between turnaround trans. & ...
129. Back side technical facilities passageways between turnaround trans. & building no. 104, "plans, sections & elevations" - architectural, AS-BLT AW 36-25-13, sheet 9 of 40, dated 23 November, 1960. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
Policy Perspective: School Turnaround in England. Utilizing the Private Sector
ERIC Educational Resources Information Center
Corbett, Julie
2014-01-01
This paper, written by strategic partner of the Center on School Turnaround (CST), Julie Corbett, provides research and examples on England's approach to turning around its lowest performing schools. The English education system utilizes private vendors to support chronically low-performing schools and districts. The introduction is followed by…
Turnaround and Closure Rates in the Charter and District Sectors
ERIC Educational Resources Information Center
Stuit, David
2012-01-01
This article examines the turnaround and closure rates of charter and district schools in 10 states to determine if one school sector has done comparatively better than the other at addressing chronically low-performing schools. Low-performing charter and district schools were identified in 2003-04 based on their reading and math proficiency rates…
California Turnaround Schools: An Analysis of School Improvement Grant Effectiveness
ERIC Educational Resources Information Center
Graham, Khalil N.
2013-01-01
The purpose of this study was to evaluate the effectiveness of School Improvement Grants (SIGs) in the state of California (CA) in increasing student achievement using the turnaround implementation model. The American Recovery and Reinvestment Act of 2009 (ARRA) included educational priorities focused on fixing America's lowest achieving schools.…
A Measurement of Time-Averaged Aerosol Optical Depth using Air... (arXiv:astro-ph/0601091v1, 4 Jan 2006)
...of the aerosol attenuation in terms of optical depth, time-averaged over extended periods. Although the technique lacks statistical power to make the critical hourly measurements that only specialized active...
Estimating the average time for inter-continental transport of air pollutants
Liu, Junfeng; Mauzerall, Denise L.
2005
...of air pollution despite the fact that significant quantities of pollutants may be transported outside...
The time course of top-down control on saccade averaging.
Heeman, J; Theeuwes, J; Van der Stigchel, S
2014-07-01
When objects in a visual scene are positioned in close proximity, eye movements to these objects tend to land at an intermediate location between the objects (i.e., the global effect). This effect is most pronounced for short latency saccades and is therefore believed to be reflexive and dominantly controlled by bottom-up information. At longer latencies this effect can be modulated by top-down factors. The current study established the time course at which top-down information starts to have an influence on bottom-up averaging. In a standard global effect task two peripheral stimuli (a red and a green abrupt onset) were positioned within an angular distance of 20°. In the condition in which observers received no specific target instruction, the eyes landed in between the red and green element establishing the classic global effect. However, when observers were instructed to make a saccade to the red element during a whole block or when the target color varied from trial-to-trial (red or green), a clear effect of the target instruction on the accuracy of the landing position of the primary saccade was found. With increasing saccade latencies, the eyes landed closer to the instructed target. Crucially, however, this effect was even seen for the shortest saccade latencies (as early as 200 ms), suggesting that saccade averaging is affected early on by top-down processes. PMID:24732567
ARECIBO MULTI-FREQUENCY TIME-ALIGNED PULSAR AVERAGE-PROFILE AND POLARIZATION DATABASE
Hankins, Timothy H. [Physics Department, New Mexico Tech, Socorro, NM 87801 (United States); Rankin, Joanna M. [Physics Department, University of Vermont, Burlington, VT 05401 (United States)], E-mail: thankins@nrao.edu, E-mail: Joanna.Rankin@uvm.edu
2010-01-15
We present Arecibo time-aligned, total intensity profiles for 46 pulsars over an unusually wide range of radio frequencies and multi-frequency, polarization-angle density diagrams, and/or polarization profiles for 57 pulsars at some or all of the frequencies 50, 111/130, 430, and 1400 MHz. The frequency-dependent dispersion delay has been removed in order to align the profiles for study of their spectral evolution, and wherever possible the profiles of each pulsar are displayed on the same longitude scale. Most of the pulsars within Arecibo's declination range that are sufficiently bright for such spectral or single pulse analysis are included in this survey. The calibrated single pulse sequences and average profiles are available by web download for further study.
Shear banding in a lyotropic lamellar phase. I. Time-averaged velocity profiles
NASA Astrophysics Data System (ADS)
Salmon, Jean-Baptiste; Manneville, Sébastien; Colin, Annie
2003-11-01
Using velocity profile measurements based on dynamic light scattering and coupled to structural and rheological measurements in a Couette cell, we present evidence for a shear banding scenario in the shear flow of the onion texture of a lyotropic lamellar phase. Time-averaged measurements clearly show the presence of structural shear banding in the vicinity of a shear-induced transition, associated with the nucleation and growth of a highly sheared band in the flow. Our experiments also reveal the presence of slip at the walls of the Couette cell. Using a simple mechanical approach, we demonstrate that our data confirm the classical assumption of the shear banding picture, in which the interface between bands lies at a given stress σ*. We also outline the presence of large temporal fluctuations of the flow field, which are the subject of the second part of this paper [Salmon et al., Phys. Rev. E 68, 051504 (2003)].
Shear-banding in a lyotropic lamellar phase, Part 1: Time-averaged velocity profiles
J-. B. Salmon; S. Manneville; A. Colin
2003-07-24
Using velocity profile measurements based on dynamic light scattering and coupled to structural and rheological measurements in a Couette cell, we present evidence for a shear-banding scenario in the shear flow of the onion texture of a lyotropic lamellar phase. Time-averaged measurements clearly show the presence of structural shear-banding in the vicinity of a shear-induced transition, associated with the nucleation and growth of a highly sheared band in the flow. Our experiments also reveal the presence of slip at the walls of the Couette cell. Using a simple mechanical approach, we demonstrate that our data confirm the classical assumption of the shear-banding picture, in which the interface between bands lies at a given stress $\sigma^\star$. We also outline the presence of large temporal fluctuations of the flow field, which are the subject of the second part of this paper [Salmon et al., submitted to Phys. Rev. E].
NASA Astrophysics Data System (ADS)
Umemura, K.; Ebina, K.
2014-12-01
Climate is the average of weather over some time period and shows characteristic behavior on each time scale. In paleoclimatic research, values of climate variables are measured from proxies that give time series of time-averaged variables. Therefore, understanding the dynamics of time-averaged variables is important for investigating climate variations through different time scales. In our recent study, we formulated how the stochastic dynamics change with the averaging time interval, using a one-dimensional first-order stochastic differential equation containing parametrically controlled terms of a deterministic single-well or double-well potential force and a random force. The dynamics of the time-averaged variable are described by a conditional probability density function. In the single-well case, the function is analytically derived as a normal distribution with scaling parameters. In the double-well case, the function is obtained as a skew generalized normal distribution through numerical simulations. The mathematical framework of the stochastic dynamics of time-averaged variables is general and applicable to the analysis of many kinds of climate time series. In this study, we apply the above framework to the analysis of proxy data from ice cores and discuss the time scaling of past climate variations. We test several models to infer the optimal model description for the data.
Analysis of the time-averaged properties of sediment motion in a local scour process
NASA Astrophysics Data System (ADS)
Radice, Alessio; Porta, Giovanni; Franzetti, Silvio
2009-03-01
We examined the kinematics of sediment particles in the proximity of a vertical plate attached to the side wall of a pressurized duct. The laboratory phenomenon investigated was archetypal of local scour near bridge abutments in covered rivers. We conducted a clear-water scour experiment where we measured the temporal evolution of the local erosion depth; the latter was found to follow a logarithmic trend. The concentration and velocity of the moving particles were measured on the basis of digital movies taken at various evolution stages of the process. The spatial pattern of the sediment motion showed that the action of the principal vortex system becomes progressively more evident with the general lowering of the sediment transport intensity detected as the scour hole increases in time. Furthermore, the phenomenology of local sediment transport was analyzed; key findings of the research include (1) the time-averaged values of the sediment transport rate are linked with those of concentration and velocity through relationships which are almost invariant, regardless of the experimental time and of the location within the scour hole; (2) intermittency and irregularity of the sediment fluxes increase as their mean intensity decreases; (3) the relative weight of the temporal fluctuations of concentration and velocity in the time-mean sediment transport activity is significant throughout the entire process. Potential implications of the present results for the modeling of local scour processes in pressurized-flow conditions are discussed. The considerations made are likely to have analogous applicability to local scour with a free surface, since the large-scale evolutions of the erosion depth in the two conditions may not be greatly different. 
Given the qualitative analogy of the present findings with earlier ones obtained for one-dimensional bed load, it is suggested that similar considerations may hold also for the small-scale modeling of a wider class of sediment transport processes.
NASA Technical Reports Server (NTRS)
Triedman, J. K.; Perrott, M. H.; Cohen, R. J.; Saul, J. P.
1995-01-01
Fourier-based techniques are mathematically noncausal and are therefore limited in their application to feedback-containing systems, such as the cardiovascular system. In this study, a mathematically causal time domain technique, autoregressive moving average (ARMA) analysis, was used to parameterize the relations of respiration and arterial blood pressure to heart rate in eight humans before and during total cardiac autonomic blockade. Impulse-response curves thus generated showed the relation of respiration to heart rate to be characterized by an immediate increase in heart rate of 9.1 ± 1.8 beats·min⁻¹·l⁻¹, followed by a transient mild decrease in heart rate to -1.2 ± 0.5 beats·min⁻¹·l⁻¹ below baseline. The relation of blood pressure to heart rate was characterized by a slower decrease in heart rate of -0.5 ± 0.1 beats·min⁻¹·mmHg⁻¹, followed by a gradual return to baseline. Both of these relations nearly disappeared after autonomic blockade, indicating autonomic mediation. Maximum values obtained from the respiration to heart rate impulse responses were also well correlated with frequency domain measures of high-frequency "vagal" heart rate control (r = 0.88). ARMA analysis may be useful as a time domain representation of autonomic heart rate control for cardiovascular modeling.
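The impulse-response curves described above follow from a simple recursion once ARMA coefficients are known. The sketch below uses generic, illustrative coefficients and a common ordering convention; it is not the model fitted to the cardiovascular data:

```python
def arma_impulse_response(ar, ma, n):
    """First n samples of the impulse response of the ARMA filter
    y[t] = sum_k ar[k] * y[t-1-k] + sum_k ma[k] * x[t-k]
    (ar: autoregressive coefficients, ma: moving-average coefficients)."""
    h = []
    for t in range(n):
        v = ma[t] if t < len(ma) else 0.0   # direct MA contribution of the impulse
        for k, a in enumerate(ar):
            if t - 1 - k >= 0:
                v += a * h[t - 1 - k]       # feedback from earlier output samples
        h.append(v)
    return h

# A first-order AR with coefficient 0.5 decays geometrically:
print(arma_impulse_response([0.5], [1.0], 5))  # [1.0, 0.5, 0.25, 0.125, 0.0625]
```

Causality is built in: each output sample depends only on the current input and past outputs, which is the property that motivates ARMA over Fourier methods for feedback systems.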
Area-preserving maps models of gyro-averaged ${\\bf E} \\times {\\bf B}$ chaotic transport
J. D. da Fonseca; D. del-Castillo-Negrete; I. L. Caldas
2014-09-10
Discrete maps have been extensively used to model 2-dimensional chaotic transport in plasmas and fluids. Here we focus on area-preserving maps describing finite Larmor radius (FLR) effects on ${\\bf E} \\times {\\bf B}$ chaotic transport in magnetized plasmas with zonal flows perturbed by electrostatic drift waves. FLR effects are included by gyro-averaging the Hamiltonians of the maps which, depending on the zonal flow profile, can have monotonic or non-monotonic frequencies. In the limit of zero Larmor radius, the monotonic frequency map reduces to the standard Chirikov-Taylor map, and, in the case of non-monotonic frequency, the map reduces to the standard nontwist map. We show that in both cases FLR leads to chaos suppression, changes in the stability of fixed points, and robustness of transport barriers. FLR effects are also responsible for changes in the phase space topology and zonal flow bifurcations. Dynamical systems methods based on recurrence time statistics are used to quantify the dependence on the Larmor radius of the threshold for the destruction of transport barriers.
Scaling of Average Weighted Receiving Time on Double-Weighted Koch Networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Ye, Dandan; Hou, Jie; Li, Xingyi
2015-03-01
In this paper, we introduce a model of the double-weighted Koch networks based on actual road networks, depending on the two weight factors w, r ∈ (0, 1]. The double weights represent the capacity-flowing weight and the cost-traveling weight, respectively. Denote by w^F_ij the capacity-flowing weight connecting the nodes i and j, and denote by w^C_ij the cost-traveling weight connecting the nodes i and j. Let w^F_ij be related to the weight factor w, and let w^C_ij be related to the weight factor r. This paper assumes that the walker, at each step, starting from its current node, moves to any of its neighbors with probability proportional to the capacity-flowing weight of the edge linking them. The weighted time for two adjacent nodes is the cost-traveling weight connecting the two nodes. We define the average weighted receiving time (AWRT) on the double-weighted Koch networks. The obtained result displays that, in the large network, the AWRT grows as a power-law function of the network order with the exponent θ(w, r) = ½ log2(1 + 3wr). We show that the AWRT exhibits a sublinear or linear dependence on network order. Thus, the double-weighted Koch networks are more efficient than classic Koch networks in receiving information.
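The quoted exponent can be checked directly: since w, r ≤ 1 implies 1 + 3wr ≤ 4, the exponent never exceeds 1, which matches the sublinear-or-linear claim. (The symbol θ for the exponent is assumed here.)

```python
import math

def awrt_exponent(w, r):
    """Scaling exponent theta(w, r) = (1/2) * log2(1 + 3*w*r) for the AWRT
    on double-weighted Koch networks, per the abstract above."""
    assert 0 < w <= 1 and 0 < r <= 1
    return 0.5 * math.log2(1 + 3 * w * r)

# At w = r = 1 the exponent is exactly 1 (linear in network order);
# any smaller weights give an exponent below 1 (sublinear scaling).
print(awrt_exponent(1, 1), awrt_exponent(0.5, 0.5))
```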
NASA Astrophysics Data System (ADS)
Mao, Yijun; Xu, Chen; Qi, Datong
2015-02-01
A vector aeroacoustics method is developed to analyze the acoustic energy flow path from the rotating source. In this method, the instantaneous and time-averaged active acoustic intensity vectors are evaluated from the time-domain and frequency-domain acoustic pressure and acoustic velocity formulations, respectively. With the above method, the acoustic intensity vectors and the acoustic energy streamlines are visualized to investigate the propagation feature of the noise radiated from the monopole and dipole point sources and the rotor in subsonic rotation. The result reveals that a portion of the acoustic energy spirals many circles before moving towards the far field, and another portion of the acoustic energy firstly flows inward along the radial direction and then propagates along the axial direction. Further, an acoustic black hole exists in the plane of source rotation, from which the acoustic energy cannot escape once the acoustic energy flows into it. Moreover, by visualizing the acoustic intensity field around the rotating sources, the acoustic-absorption performance of the acoustic liner built in the casing and centerbody is discussed.
New device for time-averaged measurement of volatile organic compounds (VOCs).
Santiago Sánchez, Noemí; Tejada Alarcón, Sergio; Tortajada Santonja, Rafael; Llorca-Pórcel, Julio
2014-07-01
Contamination by volatile organic compounds (VOCs) in the environment is an increasing concern, since these compounds are harmful to ecosystems and even to human health; many of them are considered toxic and/or carcinogenic. The main sources of pollution are very diffuse focal points such as industrial discharges, urban water and accidental spills, as these compounds may be present in many products and processes (i.e., paints, fuels, petroleum products, raw materials, solvents, etc.), making their control difficult. The presence of these compounds in groundwater, influenced by discharges, leachate or effluents of WWTPs, is especially problematic. In recent years, legislation has become increasingly restrictive regarding the emissions of these compounds. From an environmental point of view, the European Water Framework Directive (2000/60/EC) sets out some VOCs as priority substances. This binding directive sets guidelines to control compounds such as benzene, chloroform, and carbon tetrachloride at very low concentrations and with a very high frequency of analysis. The presence of VOCs in the various effluents is often highly variable and discontinuous, since it depends on the variability of the sources of contamination. Therefore, in order to have complete information on the presence of these contaminants and to take effective preventive measures, continuous monitoring is important, requiring the development of new devices that obtain average concentrations over time. To date, due to technical limitations, no devices on the market allow efficient continuous sampling of these compounds with detection limits low enough to meet the legal requirements and with the ability to capture very sporadic, short-duration discharges. LABAQUA has developed a device which consists of a small peristaltic pump controlled by an electronic board that governs its operation according to a pre-programmed schedule.
A constant flow passes through a glass cell containing adsorbent material where the VOCs are retained. The adsorbent used, made in LABAQUA, is a mixture of alginic acid and activated carbon. Its high permeability allows the passage and retention of THMs in a suitable way, thus solving many of the problems of other common adsorbents. To avoid degradation of the adsorbent, it is wrapped in a low-density polyethylene (LDPE) membrane. After a sampling period of between 1 and 14 days, the adsorbent is collected and analyzed in the laboratory to quantify the average VOC concentration. This device resolves some of the limitations of the classical sampling system (spot samples), since it takes into account the fluctuations in the concentration of VOCs by averaging them over time. This study presents the results obtained by the device for quantifying the VOCs legislated in Directive 2000/60/EC. We present the validation of linearity over time and the limits of quantification, as well as the sampling rates (Rs) obtained for each compound. The results demonstrate the high robustness and high sensitivity of the device. In addition, the system has been validated on real wastewater samples; the results obtained with this device compare excellently with the values from classical spot sampling. PMID:24388502
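Time-averaged samplers of this kind typically convert the mass retained on the adsorbent into a time-weighted average concentration via the compound-specific sampling rate Rs. The sketch below uses the standard passive-sampler relation with purely illustrative numbers; it is not LABAQUA's calibration:

```python
def twa_concentration(mass_ng, rs_ml_per_min, minutes):
    """Time-weighted average concentration (ng/mL) from the mass retained on
    the adsorbent, the compound's sampling rate Rs, and the exposure time.
    Illustrative only: real Rs values must be calibrated per compound."""
    return mass_ng / (rs_ml_per_min * minutes)

# e.g. 140 ng retained over a 7-day deployment at an assumed Rs of 0.02 mL/min
days = 7
c_avg = twa_concentration(140.0, 0.02, days * 24 * 60)
print(c_avg)  # average concentration over the whole deployment, ng/mL
```

Because the result is an average over the full deployment, a short spike and a steady low-level discharge that deliver the same mass are indistinguishable — exactly the trade-off against spot sampling discussed above.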
Detecting macroeconomic phases in the Dow Jones Industrial Average time series
NASA Astrophysics Data System (ADS)
Wong, Jian Cheng; Lian, Heng; Cheong, Siew Ann
2009-11-01
In this paper, we perform statistical segmentation and clustering analysis of the Dow Jones Industrial Average (DJI) time series between January 1997 and August 2008. Modeling the index movements and log-index movements as stationary Gaussian processes, we find a total of 116 and 119 statistically stationary segments respectively. These can then be grouped into between five and seven clusters, each representing a different macroeconomic phase. The macroeconomic phases are distinguished primarily by their volatilities. We find that the US economy, as measured by the DJI, spends most of its time in a low-volatility phase and a high-volatility phase. The former can be roughly associated with economic expansion, while the latter contains the economic contraction phase in the standard economic cycle. Both phases are interrupted by a moderate-volatility market correction phase, but extremely-high-volatility market crashes are found mostly within the high-volatility phase. From the temporal distribution of various phases, we see a high-volatility phase from mid-1998 to mid-2003, and another starting mid-2007 (the current global financial crisis). Transitions from the low-volatility phase to the high-volatility phase are preceded by a series of precursor shocks, whereas the transition from the high-volatility phase to the low-volatility phase is preceded by a series of inverted shocks. The time scale for both types of transitions is about a year. We also identify the July 1997 Asian Financial Crisis to be the trigger for the mid-1998 transition, and an unnamed May 2006 market event related to corrections in the Chinese markets to be the trigger for the mid-2007 transition.
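The segmentation idea — splitting the index series into statistically stationary Gaussian pieces distinguished by volatility — can be sketched with a single change-point search on synthetic returns. This is a minimal toy, not the authors' full recursive segmentation and clustering procedure:

```python
import numpy as np

def split_by_volatility(returns, min_seg=30):
    """Find the split point maximizing the two-segment Gaussian log-likelihood
    (zero mean, per-segment MLE variance): a one-step volatility segmentation."""
    r = np.asarray(returns, dtype=float)
    n = len(r)

    def seg_ll(x):
        v = x.var()
        return -0.5 * len(x) * (np.log(2 * np.pi * v) + 1)

    best_k, best_ll = None, -np.inf
    for k in range(min_seg, n - min_seg):
        ll = seg_ll(r[:k]) + seg_ll(r[k:])
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k

rng = np.random.default_rng(42)
calm = rng.normal(0, 0.5, 500)  # synthetic low-volatility phase
wild = rng.normal(0, 2.0, 500)  # synthetic high-volatility phase
k = split_by_volatility(np.concatenate([calm, wild]))
print(k)  # detected change point, close to the true boundary at 500
```

Applying such splits recursively and clustering the segment variances yields phase labels of the kind the abstract describes (low-volatility expansion, high-volatility contraction, and so on).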
NASA Astrophysics Data System (ADS)
Yoon, Y.; Kim, N.; Puria, S.; Steele, C. R.
2009-02-01
In this work, basilar membrane velocity (VBM), scala tympani intracochlear pressure (PST), and cochlear input impedance (ZC) for gerbil and chinchilla are computed with a three-dimensional hydrodynamic cochlear model that incorporates 1) a time-averaged Lagrangian, 2) a push-pull mechanism in the active case, and 3) the complex anatomy of the cochlear scalae obtained by micro computed tomography (µCT) scanning and 3-D reconstruction of gerbil and chinchilla temporal bones. The objective of this work is to compare the calculations with physiological measurements on the gerbil and chinchilla cochleae, such as VBM (Ren and Nuttall [1]), PST (Olson [2]), and ZC (Decraemer et al. [3], Songer and Rosowski [4], Ruggero et al. [5]). A WKB asymptotic method combined with Fourier series expansions is used to provide an efficient simulation. The VBM and PST simulation results for the gerbil cochlea show good agreement with the physiological measurements in both magnitude and phase, without large phase excursions. The ZC simulations from the gerbil and chinchilla models show reasonably good agreement with measurement.
Mercury's Time-Averaged and Induced Magnetic Fields from MESSENGER Observations
NASA Astrophysics Data System (ADS)
Johnson, C. L.; Winslow, R. M.; Anderson, B. J.; Purucker, M. E.; Korth, H.; Al Asad, M. M.; Slavin, J. A.; Baker, D. N.; Hauck, S. A.; Phillips, R. J.; Zuber, M. T.; Solomon, S. C.
2012-12-01
Observations from MESSENGER's Magnetometer (MAG) have allowed the construction of a baseline, time-averaged model for Mercury's magnetosphere. The model, constructed with the approximation that the magnetospheric shape can be represented as a paraboloid, includes two external (magnetopause and magnetotail) current systems and an internal (dipole) field. We take advantage of the geometry of the orbital MAG data to constrain all but one of the model parameters, and their ranges, directly from the observations. These parameters are then used as a priori constraints in the magnetospheric model, and the remaining parameter, the dipole moment, is estimated from a grid search. The model provides an excellent fit to the MAG observations, with a root-mean-square misfit of less than 20 nT globally. The mean distance from the planetary dipole origin to the magnetopause subsolar point, RSS, is 1.45 RM (where RM = 2440 km), and the mean planetary dipole moment is 190 nT·RM^3. Temporal variations in the global-scale magnetic fields result from changes in solar wind ram pressure, Pram, at Mercury that arise from the planet's 88-day eccentric orbit around the Sun and from transient, rapid changes in solar wind conditions. For a constant planetary dipole moment, RSS varies as Pram^(-1/6). However, magnetopause crossings obtained from several Mercury years of MESSENGER observations indicate that RSS is proportional to Pram^(-1/α), where α is greater than 6, suggesting induction in Mercury's highly conducting metallic interior. We obtain an effective dipole moment that varies by up to ~15% about its mean value. We further investigate the periodic 88-day induction signature and use the paraboloid model to describe the spatial structure in the inducing magnetopause field, together with estimates for the outer radius of Mercury's liquid core and possible overlying solid iron sulfide layer, to calculate induced core fields.
The baseline magnetospheric model is adapted to include the 88-day periodic induction signature, and residuals to this time-varying global model from magnetically quiet orbits are then used to investigate structure at higher degree and order in the internal and external fields.
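The pressure-balance scaling quoted in this abstract can be made concrete with a short numeric sketch: RSS ∝ Pram^(-1/6) for a rigid dipole, and a stiffer response (larger α) when induction opposes compression. The mean standoff of 1.45 RM is from the abstract; the reference pressure of 10 nPa and the α = 8 induction-like case are assumed for illustration only.

```python
# Subsolar magnetopause standoff vs. solar wind ram pressure,
# R_ss = r_ref * (P_ram / P_ref)^(-1/alpha), in Mercury radii R_M.
def subsolar_distance(p_ram, p_ref=10.0, r_ref=1.45, alpha=6.0):
    """Standoff distance in R_M; p_ram and p_ref in the same units."""
    return r_ref * (p_ram / p_ref) ** (-1.0 / alpha)

# Mercury's eccentric orbit roughly doubles P_ram from aphelion to perihelion.
r_rigid = subsolar_distance(20.0, alpha=6.0)    # vacuum-dipole response
r_induct = subsolar_distance(20.0, alpha=8.0)   # stiffer, induction-like response
```

For the same pressure doubling, the induction-like case compresses less (r_induct > r_rigid), which is the qualitative signature described in the abstract.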
Upper Bounds for the Average Error Probability of a Time-Hopping Wideband System
Gubner, John A.
…A receiver structure is proposed in which collisions with interfering users are discarded. … receiver structure is presented and its signal processing described. An exact formula for the average … a comparison of the bounds and formulae developed throughout the paper. With reasonable numerical assumptions …
Toint, Philippe
From average travel time budgets to daily travel time distributions: an appraisal of two … Keywords: travel time, distributions, travel behaviour. … and human energy expenditure for travel, which is assumed to be constant in time and space. The second one …
Design Considerations of the Final Turnaround Regions for the CLIC Drive Beam
Apsimon, R; Latina, A; Schulte, D; Uythoven, J
2014-01-01
The optics design of the final turnaround regions for the CLIC drive beam is presented. This includes the extraction region, the turnaround loop and the phase feed forward chicane for correcting errors on the bunch phase. The design specifications of the kicker and septum magnets are provided. Tracking simulations and detailed studies of coherent and incoherent synchrotron radiation have been used to optimise the optics in the turnaround region in order to minimise transverse and longitudinal emittance growth.
Area-averaged surface fluxes and their time-space variability over the FIFE experimental domain
NASA Astrophysics Data System (ADS)
Smith, E. A.; Hsu, A. Y.; Crosson, W. L.; Field, R. T.; Fritschen, L. J.; Gurney, R. J.; Kanemasu, E. T.; Kustas, W. P.; Nie, D.; Shuttleworth, W. J.; Stewart, J. B.; Verma, S. B.; Weaver, H. L.; Wesely, M. L.
1992-11-01
The underlying mean and variance properties of surface net radiation, soil heat flux, and sensible-latent heat fluxes are examined over the densely instrumented grassland region encompassing the First ISLSCP Field Experiment (FIFE). Twenty-two surface flux stations at 20 sites were deployed during the four 1987 intensive field campaigns (IFCs). Flux variability is addressed together with the problem of scaling up to area-averaged fluxes. Successful parameterization of area-averaged fluxes in atmospheric models is based on accounting for internal spatial and temporal scales correctly. Mean and variance properties of fluxes are examined in both daily and diurnally averaged frameworks. Results are compared and contrasted for clear and cloudy situations and checked for the influence of surface-induced biophysical controls (burn and grazing treatments) and topographic controls (slope factors and aspect ratios). Examination of the sensitivity of domain-averaged fluxes to different averaging procedures demonstrates that this may be an important consideration. 
The results reveal six key features of the 1987 surface fluxes: (1) cloudiness variability and ample rainfall throughout the growing season led to near-consistency in flux magnitudes during the first three IFCs; (2) burn treatment, grazing conditions, and topography have clearly delineated influences on the diurnal cycle flux amplitudes but do not alter the evaporative fraction significantly; (3) cloudiness is the major control on flux variability in terms of both mean and variance properties but has little impact on the Bowen ratio or evaporative fraction; (4) spatial weighting of fluxes based on a biophysical-topographical cross stratification generates a measurable bias with respect to straight arithmetic averaging (up to 20 W m^-2 in available heating); (5) structure function analysis demonstrates significant underlying spatial autocorrelation structure in the fluxes, but the observed distance dependence is due to cloudiness controls, not surface controls; (6) Monte Carlo analysis of high-resolution vegetation indices obtained from SPOT satellite measurements suggests that the mean domain amplitudes of the diurnal sensible and latent heat flux cycles can be biased by up to 30-40 W m^-2 by repositioning the 20 site locations within the experimental domain.
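The sensitivity to averaging procedure noted in point (4) can be sketched with a toy comparison of straight arithmetic averaging of station fluxes against weighting by the areal fraction of the surface class each station represents. The flux values, station counts, and class fractions below are invented for illustration; only the mechanism mirrors the abstract.

```python
import numpy as np

fluxes = np.array([420.0, 380.0, 300.0])   # W m^-2 for burned, grazed, ungrazed
n_sites = np.array([10, 6, 4])             # stations sampling each class
area_frac = np.array([0.2, 0.3, 0.5])      # true areal fractions of each class

# Arithmetic mean over stations over-weights the heavily sampled classes.
arithmetic = np.repeat(fluxes, n_sites).mean()
# Area-weighted mean reflects what the domain actually looks like.
weighted = (fluxes * area_frac).sum()
bias = arithmetic - weighted               # W m^-2
```

Because the high-flux burned class is over-sampled relative to its areal fraction, the arithmetic average here (384 W m^-2) exceeds the area-weighted average (348 W m^-2), illustrating why the choice of averaging procedure matters.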
NASA Technical Reports Server (NTRS)
Mielke, Amy F.; Seasholtz, Richard G.; Elam, Krisie A.; Panda, Jayanta
2004-01-01
Measurement of time-averaged velocity, density, temperature, and turbulence in gas flows using a nonintrusive, point-wise measurement technique based on molecular Rayleigh scattering is discussed. Subsonic and supersonic flows in a 25.4-mm diameter free jet facility were studied. The developed instrumentation utilizes a Fabry-Perot interferometer to spectrally resolve molecularly scattered light from a laser beam passed through a gas flow. The spectrum of the scattered light contains information about the velocity, density, and temperature of the gas. The technique uses a slow-scan, low-noise, 16-bit CCD camera to record images of the fringes formed by Rayleigh scattered light passing through the interferometer. A kinetic theory model of the Rayleigh scattered light is used in a nonlinear least-squares fitting routine to estimate the unknown parameters from the fringe images. Extracting turbulence information from the fringe image data proved to be a challenge, since the fringe is broadened not only by turbulence but also by thermal fluctuations and by aperture effects from collecting light over a range of scattering angles. Figure 1 illustrates the broadening, due to aperture effects and turbulence, of a Rayleigh spectrum typical of the flow conditions observed in this work, for a scattering angle, chi(sub s), of 90 degrees, f/3.67 collection optics, a mean flow velocity, u(sub k), of 300 m/s, and turbulent velocity fluctuations, sigma(sub uk), of 55 m/s. The greatest difficulty in processing the image data was decoupling the thermal and turbulence broadening in the spectrum. To aid in this endeavor, it was necessary to seed the ambient air with smoke and dust particulates, taking advantage of the turbulence broadening in the Mie scattering component of the spectrum of the collected light (not shown in the figure). The primary jet flow was not seeded due to the difficulty of the task.
For measurement points lacking particles, velocity, density, and temperature information could be reliably recovered; however, the turbulence estimates contained significant uncertainty. Resulting flow parameter estimates are presented for surveys of Mach 0.6, 0.95, and 1.4 jet flows. Velocity, density, and temperature were determined with accuracies of 5 m/s, 1.5%, and 1%, respectively, in flows with no particles present, and with accuracies of 5 m/s, 1-4%, and 2% in flows with particles. Comparison with hotwire data for the Mach 0.6 condition demonstrated turbulence estimates with accuracies of about 5 m/s outside the jet core, where Mie scattering from dust/smoke particulates aided in the estimation of turbulence. Turbulence estimates could not be recovered with any significant accuracy at measurement points where no particles were present.
Daily Average Consumption of 2 Long-Acting Opioids: An Interrupted Time Series Analysis
Puenpatom, R. Amy; Szeinbach, Sheryl L.; Ma, Larry; Ben-Joseph, Rami H.; Summers, Kent H.
2012-01-01
Background Oxycodone controlled release (CR) and oxymorphone extended release (ER) are frequently prescribed long-acting opioids, which are approved for twice-daily dosing. The US Food and Drug Administration approved a reformulated crush-resistant version of oxycodone CR in April 2010. Objective To compare the daily average consumption (DACON) for oxycodone CR and for oxymorphone ER before and after the introduction of the reformulated, crush-resistant version of oxycodone CR. Methods This was a retrospective claims database analysis using pharmacy claims from the MarketScan database for the period from January 2010 through March 2011. The interrupted time series analysis was used to evaluate the impact of the introduction of reformulated oxycodone CR on the DACON of the 2 drugs—oxycodone CR and oxymorphone ER. The source of the databases included private-sector health data from more than 150 medium and large employers. All prescription claims containing oxycodone CR and oxymorphone ER dispensed to members from January 1, 2010, to March 31, 2011, were included in the analysis. Prescription claims containing duplicate National Drug Codes, missing member identification, invalid quantities or inaccurate days supply of either drug, and DACON values of <1 and >500 were removed. Results The database yielded 483,063 prescription claims for oxycodone CR and oxymorphone ER from January 1, 2010, to March 31, 2011. The final sample consisted of 411,404 oxycodone CR prescriptions (traditional and reformulated) dispensed to 85,150 members and 62,656 oxymorphone ER prescriptions dispensed to 11,931 members. Before the introduction of reformulated oxycodone CR, DACON values for the highest strength available for each of the 2 drugs were 0.51 tablets higher for oxycodone CR than for oxymorphone ER, with mean DACON values of 3.5 for oxycodone CR and 3.0 for oxymorphone ER (P <.001). 
The differences of mean DACON between the 2 drugs for all lower strengths were 0.46 tablets, with mean DACON values of 2.7 for oxycodone CR and 2.3 for oxymorphone ER (P <.001). After the introduction of the new formulation, the difference in mean DACON between the 2 drugs was slightly lower: 0.45 tablets for the highest-strength and 0.40 tablets for the lower-strength pairs. Regression analyses showed that the immediate and overall impact of the reformulation of oxycodone CR on the DACON of oxycodone CR was minimal, whereas no changes were seen in the DACON of oxymorphone ER. The estimated DACON for oxycodone CR decreased by 0.1 tablets, or 3.7% (P <.001), 6 months after the new formulation was introduced. Conclusion The mean DACON was 0.4 tablets per day higher for oxycodone CR compared with oxymorphone ER for all dosage strengths for the entire study period. After the introduction of the reformulated oxycodone CR, the DACON for this drug was slightly mitigated; however, there was a minimal impact on the mean differences between oxycodone CR and oxymorphone ER. PMID:24991311
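The interrupted-time-series design used in this study can be sketched as a level-and-slope-change OLS regression, y_t = b0 + b1·t + b2·step_t + b3·(t − t0)·step_t + e_t, where step_t switches on at the intervention. The monthly DACON values and intervention month below are synthetic, not the study data; only the model form follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(15.0)                        # months, Jan 2010 .. Mar 2011
t0 = 4.0                                   # hypothetical reformulation month
step = (t >= t0).astype(float)

# Synthetic DACON with a small trend and a -0.10 tablet level drop at t0.
y = 3.5 - 0.005 * t - 0.10 * step + rng.normal(0.0, 0.01, t.size)

# Design matrix: intercept, trend, level change, slope change.
X = np.column_stack([np.ones_like(t), t, step, (t - t0) * step])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change = beta[2]                     # immediate shift in mean DACON
```

The fitted level-change coefficient recovers the built-in drop, which is the quantity the interrupted time series analysis uses to measure the immediate impact of the reformulation.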
Area-averaged surface fluxes and their time-space variability over the FIFE experimental domain
NASA Technical Reports Server (NTRS)
Smith, E. A.; Hsu, A. Y.; Crosson, W. L.; Field, R. T.; Fritschen, L. J.; Gurney, R. J.; Kanemasu, E. T.; Kustas, W. P.; Nie, D.; Shuttleworth, W. J.
1992-01-01
The underlying mean and variance properties of surface net radiation, sensible-latent heat fluxes and soil heat flux are studied over the densely instrumented grassland region encompassing FIFE. Flux variability is discussed together with the problem of scaling up to area-averaged fluxes. Results are compared and contrasted for cloudy and clear situations and examined for the influence of surface-induced biophysical controls (burn and grazing treatments) and topographic controls (aspect ratios and slope factors).
Meeting the challenge of a group practice turnaround.
Porn, L M
2001-03-01
Many healthcare organizations that acquired group practices to enhance their market share have found that the practices have not met their financial goals. Turning around a financially troubled, hospital-owned group practice is challenging but not impossible for healthcare organizations that take certain basic actions. Direction, data, desire, dedication, and drive must be present to effect the financial turnaround of a group practice. The healthcare organization needs to evaluate the practice's strategy and operations and identify the issues that are hindering the practice's ability to optimize revenues. Efforts to achieve profitable operations have to be ongoing. PMID:11258274
Madan M. Jalla; A. Emadi; Geoffrey A. Williamson; Babak Fahimi
2004-01-01
This paper presents a real-time state estimation algorithm for control and security maintenance of multi-converter MES power electronic systems. Estimation of the state variables of the system includes the effects of ripples and converter switching actions. The proposed state estimator is based on the weighted least squares (WLS) approach. Generalized state-space averaging technique in which we consider the …
Srinivasan, Manoj
Walking, running, and resting under time, distance, and average speed constraints: optimality of walk-run-rest mixtures. Leroy L. Long III and Manoj Srinivasan, Mechanical and Aerospace Engineering. … to running beyond a characteristic transition speed. Here, we study human choice between walking and running …
Real-Time Forecast Model Analysis of Daily Average Building Load for a Thermal Storage System Control
Song, L.; Joo, I. S.; Guwana, S.
2009-01-01
Real-Time Forecast Model Analysis of Daily Average Building Load for a Thermal Storage System Control. Li Song, Ph.D., P.E., School of Aerospace and Mechanical Engineering, University of Oklahoma; Ik-Seong Joo, Ph.D., P.E.; Subroto…
Minsuk Leet; Sang Lyul Mint; Chang Yun Park; Young Hyun Baet; Heonshik Shint; Chong-sang Kim
1993-01-01
One of the obstacles to using RISC processors in a hard real-time environment is the unpredictability of caches. This unpredictability stems from basing them on a design that tries to optimize the average-case execution time. In this paper, we propose a dual-mode instruction prefetch scheme as an alternative to instruction caching schemes. In the proposed scheme, a thread is associated with each instruction block. The thread …
ERIC Educational Resources Information Center
Deal, Willard M., Jr.; And Others
This study was conducted to ascertain the effects of employment on the academic performance of community college students. The grade point averages of 830 full-time students at Wilkes Community College (North Wilkesboro, N.C.) were analyzed to determine: (1) if working students' GPA's differed significantly from those of the total student…
An upper bound to time-averaged space-charge limited diode currents
Griswold, M. E.; Wurtele, Jonathan
…an open question of whether the average current can violate the Child-Langmuir limit under time… Received 20 July 2010; accepted 27 September 2010; published online 1 November 2010. … is offered. © 2010 American Institute of Physics. doi:10.1063/1.3503661. The Child-Langmuir law gives…
Brandt, P N
2008-01-01
The decrease in the rms contrast of time-averaged images with the averaging time is compared between four datasets: (1) a series of solar granulation images recorded at La Palma in 1993; (2) a series of artificial granulation images obtained in numerical simulations by Rieutord et al. (2002); (3) a similar series computed by Steffen and his colleagues (see Wedemeyer et al., 2004); (4) a random field with some parameters typical of the granulation, constructed by Rast (2002). In addition, (5) a sequence of images was obtained from real granulation images using a temporal and spatial shuffling procedure, and the contrast of the average of n images from this sequence as a function of n is analysed. The series (1) of real granulation images exhibits a considerably slower contrast decrease than do both the series (3) of simulated granulation images and the series (4) of random fields. Starting from some relatively short averaging times t, the behaviour of the contrast in series (3) and (4) resembles the t^{-1/2} statistical law …
ERIC Educational Resources Information Center
Maxwell, Gerri M.; Huggins, Kristin S.; Scheurich, James J.
2010-01-01
"Central High School," a rural school composed largely of students of color from low income homes, experienced a quick, remarkable turnaround of student academic success, measured by State and Federal accountability standards. This turnaround began with an external restructuring grant initiative that required a state-approved external consultant.…
School Turnarounds: Resisting the Hype, Giving Them Hope. Education Outlook No. 2
ERIC Educational Resources Information Center
Hess, Frederick M.; Gift, Thomas
2009-01-01
Education reformers are abuzz over school "turnarounds," a simple idea that has undeniable appeal. School turnarounds offer the opportunity to take familiar educational institutions and improve them through coaching, mentoring, capacity building, best practices, and other existing tools. Unlike most reform efforts, which focus on incremental…
Higher Education in Competitive Markets: Literature on Organizational Decline and Turnaround
ERIC Educational Resources Information Center
Paul, David A.
2005-01-01
Higher educational institutions face fundamental challenges as they adapt to increasingly competitive markets. This literature review provides a context on markets, failure, and turnarounds, and discusses the literature on the shift of the higher educational industry into competitive markets. The literature on turnarounds in the corporate…
High-Turnaround and Flexibility in Design and Construction of Mass Housing
Tommelein, Iris D.
Amarjit Singh, Rick Barnes. Proceedings IGLC-7, 181. … this challenge requires unconventional and innovative methods of housing construction. Only modern methods …
School Turnaround Fever: The Paradoxes of a Historical Practice Promoted as a New Reform
ERIC Educational Resources Information Center
Peck, Craig; Reitzug, Ulrich C.
2014-01-01
School "turnaround" has received significant attention recently in education literature and policy action, especially as a means to dramatically improve urban education. In current common education usage, "turnaround" refers to the rapid, significant improvement in the academic achievement of persistently low-achieving schools.…
Dancing in a Minefield: An Analysis of Turnaround Specialists in Arizona Schools
ERIC Educational Resources Information Center
McMillie, Kyann L.
2010-01-01
In 2008, educational leaders from the Arizona Department of Education (ADE) assigned a group of turnaround specialists to work in four failing public schools in a large, urban school district in Phoenix, Arizona in hopes of improving those schools. The utilization of turnaround specialists in failing schools was Arizona's method of enacting…
Empirical wind model for the middle and lower atmosphere. Part 1: Local time average
NASA Technical Reports Server (NTRS)
Hedin, A. E.; Fleming, E. L.; Manson, A. H.; Schmidlin, F. J.; Avery, S. K.; Franke, S. J.
1993-01-01
The HWM90 thermospheric wind model was revised in the lower thermosphere and extended into the mesosphere and lower atmosphere to provide a single analytic model for calculating zonal and meridional wind profiles representative of the climatological average for various geophysical conditions. Gradient winds from CIRA-86 plus rocket soundings, incoherent scatter radar, MF radar, and meteor radar provide the data base and are supplemented by previous data driven model summaries. Low-order spherical harmonics and Fourier series are used to describe the major variations throughout the atmosphere including latitude, annual, semiannual, and longitude (stationary wave 1). The model represents a smoothed compromise between the data sources. Although agreement between various data sources is generally good, some systematic differences are noted, particularly near the mesopause. Root mean square differences between data and model are on the order of 15 m/s in the mesosphere and 10 m/s in the stratosphere for zonal wind, and 10 m/s and 4 m/s, respectively, for meridional wind.
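The kind of low-order Fourier representation described in this abstract can be sketched for a single height and latitude: a mean plus annual and semiannual harmonics fitted by least squares. The synthetic "zonal wind" values below are illustrative, not HWM data; the model structure (harmonics in day of year) is the point.

```python
import numpy as np

doy = np.arange(0.0, 365.0, 5.0)                  # day of year samples
w = 2 * np.pi * doy / 365.0

# Synthetic zonal wind: mean + annual + semiannual terms (m/s).
u = 20.0 + 15.0 * np.cos(w - 0.5) + 5.0 * np.cos(2 * w + 1.0)

# Least-squares fit of the same low-order harmonic basis.
X = np.column_stack([np.ones_like(w),
                     np.cos(w), np.sin(w),        # annual harmonic
                     np.cos(2 * w), np.sin(2 * w)])  # semiannual harmonic
c, *_ = np.linalg.lstsq(X, u, rcond=None)
u_fit = X @ c
rms = np.sqrt(np.mean((u - u_fit) ** 2))          # residual, m/s
```

With real observations the residual would be nonzero (the abstract quotes ~10-15 m/s rms in the mesosphere); here the basis matches the generating model exactly, so the residual is numerically zero.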
Kodra, Evan A; Chatterjee, Snigdhansu; Ganguly, Auroop R
2010-01-01
Detection and attribution methodologies have been developed over the years to separate anthropogenic from natural drivers of climate change and impacts. A majority of prior attribution studies, which have used climate model simulations and observations or reanalysis datasets, have found evidence for human-induced climate change. This paper tests the hypothesis that Granger causality can be extracted from the bivariate series of globally averaged land surface temperature (GT) observations and observed CO2 in the atmosphere using a reverse cumulative Granger causality test. This proposed extension of the classic Granger causality test is better suited to handle the multisource nature of the data and provides further statistical rigor. The results from this modified test show evidence for Granger causality from a proxy of total radiative forcing (RC), which in this case is a transformation of atmospheric CO2, to GT. Prior literature failed to extract these results via the standard Granger causality test. A forecasting test shows that a holdout set of GT can be better predicted with the addition of lagged RC as a predictor, lending further credibility to the Granger test results. However, since second-order-differenced RC is neither normally distributed nor variance stationary, caution should be exercised in the interpretation of our results.
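The classic bivariate Granger-causality F-test that the paper's method extends can be sketched with plain OLS: does adding lags of x improve an autoregressive prediction of y? The series below are synthetic, with x driving y by construction; this is not the paper's reverse cumulative test or its data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 300, 2                                  # sample size, lag order
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(p, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def rss(X, z):
    """Residual sum of squares of an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    r = z - X @ beta
    return float(r @ r)

target = y[p:]
Z_r = np.column_stack([np.ones(n - p), y[1:-1], y[:-2]])   # y lags only
Z_f = np.column_stack([Z_r, x[1:-1], x[:-2]])              # plus x lags
F = ((rss(Z_r, target) - rss(Z_f, target)) / p) \
    / (rss(Z_f, target) / (n - p - Z_f.shape[1]))
# A large F rejects "x does not Granger-cause y".
```

Because x drives y here, adding the x lags sharply reduces the residual sum of squares and F is far above any conventional critical value.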
Creeden, Daniel; McCarthy, John C; Ketteridge, Peter A; Schunemann, Peter G; Southward, Timothy; Komiak, James J; Chicklis, Evan P
2007-05-14
We have modeled and demonstrated a scalable, compact, fiber-pumped terahertz source based on difference frequency mixing (DFM) in zinc germanium phosphide (ZGP) capable of producing high-average-power and high-peak-power pulses. Currently, our terahertz source produces 2 mW of average THz power and >40 W of peak power with sub-nanosecond pulses at a repetition rate of 100 kHz in the range of 2-3 THz, without cryogenic cooling or ultra-fast optics. This high-average-power laser-based terahertz output enables the real-time imaging of concealed objects using an off-the-shelf uncooled microbolometer focal-plane array. With this THz system, we have imaged objects concealed inside a common shipping envelope, demonstrating the potential of compact laser-based terahertz sources for use in security screening applications. PMID:19546954
Amit, Hagay
2006-01-01
Time-average and time-dependent parts of core flow. Hagay Amit, Peter Olson, Department of Earth and Planetary Sciences, Johns Hopkins University. Physics of the Earth and Planetary Interiors 155 (2006) 120-139. …is generated by fluid flow in the Earth's metallic liquid outer core. This process, the geodynamo, has been…
Coherent averaging of the passive fathometer response using short correlation time
Gerstoft, Peter
…experimental data [13,8,9]. The technique relies on surface-generated noise from wind and waves [10,11]. Shipping … and, for correlation times less than a few seconds, the observed signal-to-noise ratio (SNR) agrees … cross-correlation of ambient noise data from a drifting vertical array has been the subject of much discussion in recent years.
…associated with high-value, highly time-sensitive products, are the most demanding activities in terms of … the efficiency of routes, reducing service areas, and significantly increasing distance travelled (Figliozzi …) … not yet been studied in the literature. The existing body of literature has mostly focused …
Pokorski, Krzysztof; Patorski, Krzysztof
2010-07-01
An application of the continuous wavelet transform to modulation extraction of additive moiré fringes and time-average patterns is proposed. We present numerical studies of the influence of various parameters of the wavelet transformation itself and of the fringe pattern under study on the demodulation results. To facilitate the task of demodulating a signal with zero-crossing values, a two-frame approach for wavelet ridge extraction is proposed. Experimental studies of vibration mode patterns by time-average interferometry provide excellent verification of the numerical findings. They compare very well with the results of our previous investigations using the temporal phase-shifting method, widely considered the most accurate one. Eliminating the need for phase shifting represents a significant simplification of the experimental procedure. PMID:20648129
Autonomous Robotic Refueling System (ARRS) for rapid aircraft turnaround
NASA Astrophysics Data System (ADS)
Williams, O. R.; Jackson, E.; Rueb, K.; Thompson, B.; Powell, K.
An autonomous robotic refueling system is being developed to achieve rapid aircraft turnaround, notably during combat operations. The proposed system includes a gantry positioner with sufficient reach to position a robotic arm that performs the refueling tasks; a six-degree-of-freedom manipulator equipped with a remote center of compliance, a torque sensor, and a gripper that can handle standard tools; a computer vision system to locate and guide the refueling nozzle, inspect the nozzle, and avoid collisions; and an operator interface with video and graphics display. The control system software will include components designed for trajectory planning and generation, collision detection, sensor interfacing, sensory processing, and human interfacing. The robotic system will be designed so that upgrading it to perform additional tasks will be relatively straightforward.
Ultra-stable laser with average fractional frequency drift rate below $5\times10^{-19}/\mathrm{s}$
Hagemann, Christian; Lisdat, Christian; Falke, Stephan; Legero, Thomas; Sterr, Uwe; Riehle, Fritz; Martin, Michael J; Ye, Jun
2014-01-01
Cryogenic single-crystal optical cavities have the potential to provide the highest dimensional stability. We have investigated the long-term performance of an ultra-stable laser system which is stabilized to a single-crystal silicon cavity operated at 124 K. Utilizing a frequency comb, the laser is compared to a hydrogen maser that is referenced to a primary caesium fountain standard and to the $^{87}\mathrm{Sr}$ optical lattice clock at PTB. With fractional frequency instabilities of $\sigma_y(\tau)\leq2\times10^{-16}$ for averaging times of $\tau=60\mathrm{~s}$ to $1000\mathrm{~s}$ and $\sigma_y(1 \mathrm{d})\leq 2\times10^{-15}$, the stability of this laser, without any aid from an atomic reference, surpasses the best microwave standards for short averaging times and is competitive with the best hydrogen masers for longer times of one day. The comparison of the modeled thermal response of the cavity with measured data indicates a fractional frequency drift below $5\times 10^{-19}/\mathrm{s}$, which we do not expect…
NASA Astrophysics Data System (ADS)
Zhang, B.; Billings, S. A.
2015-08-01
Although a vast number of techniques for the identification of nonlinear discrete-time systems have been introduced, the identification of continuous-time nonlinear systems is still extremely difficult. In this paper, the Nonlinear Difference Equation with Moving Average noise (NDEMA) model which is a general representation of nonlinear systems and contains, as special cases, both continuous-time and discrete-time models, is first proposed. Then based on this new representation, a systematic framework for the identification of nonlinear continuous-time models is developed. The new approach can not only detect the model structure and estimate the model parameters, but also work for noisy nonlinear systems. Both simulation and experimental examples are provided to illustrate how the new approach can be applied in practice.
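A minimal discrete-time special case of the model class described above is least-squares parameter estimation for a nonlinear difference equation of assumed known structure, y_t = a·y_{t-1} + b·y_{t-1}^3 + c·u_{t-1} + e_t. Structure detection and the continuous-time part of the NDEMA framework are not shown; the coefficients and input are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
u = rng.uniform(-1.0, 1.0, n)              # input signal
y = np.zeros(n)
for t in range(1, n):                      # simulate the "true" system
    y[t] = (0.5 * y[t - 1] - 0.3 * y[t - 1] ** 3
            + 0.8 * u[t - 1] + 0.01 * rng.normal())

# Regression on the assumed model terms; theta estimates [a, b, c].
X = np.column_stack([y[:-1], y[:-1] ** 3, u[:-1]])
theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
```

With low noise the least-squares estimates recover the generating coefficients (0.5, -0.3, 0.8) closely; in the paper's setting the same idea is embedded in a framework that also detects which terms belong in the model.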
Quantifying the average of the time-varying hazard ratio via a class of transformations.
Chen, Qingxia; Zeng, Donglin; Ibrahim, Joseph G; Chen, Ming-Hui; Pan, Zhiying; Xue, Xiaodong
2015-04-01
The hazard ratio derived from the Cox model is a commonly used summary statistic to quantify a treatment effect with a time-to-event outcome. The proportional hazards assumption of the Cox model, however, is frequently violated in practice and many alternative models have been proposed in the statistical literature. Unfortunately, the regression coefficients obtained from different models are often not directly comparable. To overcome this problem, we propose a family of weighted hazard ratio measures that are based on the marginal survival curves or marginal hazard functions, and can be estimated using readily available output from various modeling approaches. The proposed transformation family includes the transformations considered by Schemper et al. (Statist Med 28:2473-2489, 2009) as special cases. In addition, we propose a novel estimate of the weighted hazard ratio based on the maximum departure from the null hypothesis within the transformation family, and develop a Kolmogorov-Smirnov type of test statistic based on this estimate. Simulation studies show that when the hazard functions of two groups either converge or diverge, this new estimate yields a more powerful test than tests based on the individual transformations recommended in Schemper et al. (Statist Med 28:2473-2489, 2009), with a similar magnitude of power loss when the hazards cross. The proposed estimates and test statistics are applied to a colorectal cancer clinical trial. PMID:25073864
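One member of the weighted-hazard-ratio family described above can be sketched as a weighted time-average of the log hazard ratio computed from two marginal hazard functions, with weights proportional to the pooled probability of still being at risk. The hazard functions below are synthetic and deliberately non-proportional (they cross); the specific weight choice is an assumption for illustration, not the paper's recommended transformation.

```python
import numpy as np

t = np.linspace(0.0, 5.0, 200)
h1 = 0.30 + 0.05 * t            # hazard, group 1 (increasing)
h2 = 0.50 - 0.04 * t            # hazard, group 2 (decreasing): hazards cross

def surv(h, t):
    """Survival S(t) = exp(-cumulative hazard), trapezoidal rule."""
    H = np.concatenate([[0.0],
                        np.cumsum(0.5 * (h[1:] + h[:-1]) * np.diff(t))])
    return np.exp(-H)

# Weight by the pooled at-risk probability, then average log(h1/h2).
w = 0.5 * (surv(h1, t) + surv(h2, t))
log_whr = np.sum(w * np.log(h1 / h2)) / np.sum(w)
weighted_hr = np.exp(log_whr)
```

Early times, where more subjects remain at risk, dominate the weight, so the summary favors the early hazard advantage of group 1 even though the hazards later cross; different weight functions in the family trade off early versus late differences.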
Austin G. Fowler
2014-10-10
Consider a 2-D square array of qubits of extent $L\\times L$. We provide a proof that the minimum weight perfect matching problem associated with running a particular class of topological quantum error correction codes on this array can be exactly solved with a 2-D square array of classical computing devices, each of which is nominally associated with a fixed number $N$ of qubits, in constant average time per round of error detection independent of $L$ provided physical error rates are below fixed nonzero values, and other physically reasonable assumptions. This proof is applicable to the fully fault-tolerant case only, not the case of perfect stabilizer measurements.
NASA Astrophysics Data System (ADS)
Brandt, P. N.; Getling, A. V.
2008-06-01
The decrease in the rms contrast of time-averaged images with the averaging time is compared between four data sets: (1) a series of solar granulation images recorded at La Palma in 1993, (2) a series of artificial granulation images obtained in numerical simulations by Rieutord et al. (Nuovo Cimento 25, 523, 2002), (3) a similar series computed by Steffen and his colleagues (see Wedemeyer et al. in Astron. Astrophys. 414, 1121, 2004), (4) a random field with some parameters typical of the granulation, constructed by Rast (Astron. Astrophys. 392, L13, 2002). In addition, (5) a sequence of images was obtained from real granulation images by using a temporal and spatial shuffling procedure, and the contrast of the average of n images from this sequence as a function of n is analysed. The series (1) of real granulation images exhibits a considerably slower contrast decrease than do both the series (3) of simulated granulation images and the series (4) of random fields. Starting from some relatively short averaging times t, the behaviour of the contrast in series (3) and (4) resembles the t^{-1/2} statistical law, whereas the shuffled series (5) obeys the n^{-1/2} law from n=2 on. Series (2) demonstrates a peculiarly slow decline of contrast, which could be attributed to particular properties of the boundary conditions used in the simulations. Comparisons between the analysed contrast-variation laws indicate quite definitely that the brightness field of solar granulation contains a long-lived component, which could be associated with locally persistent dark intergranular holes and/or with the presence of quasi-regular structures. The suggestion that the random field (4) successfully reproduces the contrast-variation law for the real granulation (Rast in Astron. Astrophys. 392, L13, 2002) can be dismissed.
P. N. Brandt; A. V. Getling
2008-02-01
The decrease in the rms contrast of time-averaged images with the averaging time is compared between four datasets: (1) a series of solar granulation images recorded at La Palma in 1993; (2) a series of artificial granulation images obtained in numerical simulations by Rieutord et al. (2002); (3) a similar series computed by Steffen and his colleagues (see Wedemeyer et al., 2004); (4) a random field with some parameters typical of the granulation, constructed by Rast (2002). In addition, (5) a sequence of images was obtained from real granulation images using a temporal and spatial shuffling procedure, and the contrast of the average of n images from this sequence as a function of n is analysed. The series (1) of real granulation images exhibits a considerably slower contrast decrease than do both the series (3) of simulated granulation images and the series (4) of random fields. Starting from some relatively short averaging times t, the behaviour of the contrast in series (3) and (4) resembles the t^{-1/2} statistical law, while the shuffled series (5) obeys the n^{-1/2} law from n = 2 on. Series (2) demonstrates a peculiarly slow decline of contrast, which could be attributed to particular properties of the boundary conditions used in the simulations. Comparisons between the analysed contrast-variation laws indicate quite definitely that the brightness field of solar granulation contains a long-lived component, which could be associated with locally persistent dark intergranular holes and/or with the presence of quasi-regular structures. The suggestion that the random field (4) successfully reproduces the contrast-variation law for the real granulation (Rast, 2002) can be dismissed.
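The n^{-1/2} law quoted for the shuffled series (5) is just the statistics of averaging independent frames, which is easy to check numerically. A minimal sketch in Python (synthetic Gaussian frames, not the La Palma data; the frame size and noise level are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def rms_contrast(img):
    """rms intensity contrast of an image: std / mean."""
    return img.std() / img.mean()

# 64 statistically independent synthetic "granulation" frames
frames = rng.normal(loc=1.0, scale=0.1, size=(64, 128, 128))

# contrast of the average of n frames, for n growing by factors of 4
contrasts = [rms_contrast(frames[:n].mean(axis=0)) for n in (1, 4, 16, 64)]

# under the n**-1/2 law, each 4-fold increase in n halves the contrast
ratios = [contrasts[i] / contrasts[i + 1] for i in range(3)]
print(ratios)  # each ratio close to 2
```

Independence is the key assumption here: the real granulation series violates it through its long-lived component, which is exactly why its contrast declines more slowly than n^{-1/2}.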
Destainville, Nicolas; Salomé, Laurence
2006-01-15
Single-molecule tracking is a powerful way to look at the dynamic organization of plasma membranes. However, there are some limitations to its use. For example, it was recently observed, using numerical simulation, that time-averaging effects inherent to the exposure time of detectors are likely to bias the apparent motion of molecules confined in microdomains. Here, we solve this apparently limiting issue analytically. We explore this phenomenon by calculating its effects on the observed diffusion coefficients and domain sizes. We demonstrate that the real parameters can be easily recovered from the measured apparent ones. Interestingly, we find that single-molecule tracking can be used to explore events occurring at a timescale smaller than the exposure time. PMID:16299068
Nicolas Destainville; Laurence Salome
2006-01-09
Single-molecule tracking is a powerful way to look at the dynamic organization of plasma membranes. However, there are some limitations to its use. For example, it was recently observed, using numerical simulation, that time-averaging effects inherent to the exposure time of detectors are likely to bias the apparent motion of molecules confined in microdomains. Here, we solve this apparently limiting issue analytically. We explore this phenomenon by calculating its effects on the observed diffusion coefficients and domain sizes. We demonstrate that the real parameters can be easily recovered from the measured apparent ones. Interestingly, we find that single-molecule tracking can be used to explore events occurring at a timescale smaller than the exposure time.
P. Moskal; N. G. Sharma; M. Silarski; T. Bednarski; P. Białas; J. Bułka; E. Czerwiński; A. Gajos; D. Kamińska; L. Kapłon; A. Kochanowski; G. Korcyl; J. Kowal; P. Kowalski; T. Kozik; W. Krzemień; E. Kubicz; Sz. Niedźwiecki; M. Pałka; L. Raczyński; Z. Rudy; O. Rundel; P. Salabura; A. Słomski; J. Smyrski; A. Strzelecki; A. Wieczorek; W. Wiślicki; I. Wochlik; M. Zieliński; N. Zoń
2015-08-06
In this article we present a novel method of hit time and hit position reconstruction in long scintillator detectors. We take advantage of the fact that for this kind of detector the amplitude and shape of the registered signals depend strongly on the position at which the particle hits the detector. The reconstruction is based on determining the degree of similarity between measured signals and averaged signals stored in a library for a set of well-defined positions along the scintillator. Preliminary results of the validation of the introduced method with experimental data obtained by means of the double-strip prototype of the J-PET detector are presented.
NASA Astrophysics Data System (ADS)
Elbarghathi, F.; Wang, T.; Zhen, D.; Gu, F.; Ball, A.
2012-05-01
Vibration signals from a gearbox are usually very noisy, which makes it difficult to find reliable symptoms of a fault in a multistage gearbox. This paper explores the use of the time synchronous average (TSA) to suppress the noise and the continuous wavelet transform (CWT) to enhance the non-stationary nature of the fault signal for more accurate fault diagnosis. The results obtained in diagnosing an incipient gear breakage show that fault diagnosis results can be improved by using an appropriate wavelet. Moreover, a new scheme based on the level of wavelet coefficient amplitudes of baseline data alone, without faulty data samples, is suggested for selecting an optimal wavelet.
Patorski, Krzysztof; Trusiak, Maciej
2013-07-15
Time-averaged fringe patterns in vibration testing of MEMS (microelectromechanical systems) are unaffected by carrier displacements. They are additive, superimposition-type moirés. These features, and the vulnerability of the Hilbert transform to additive trends, are utilized for visualization of the centers of dark Bessel fringes. Two frames with a shifted carrier are subtracted for background and noise correction. Two normalized images of this pattern are calculated with slightly different bias levels and subtracted. The method does not require precise phase shifting between the two frames, a cosinusoidal carrier, or linear recording. It enables detecting light power variations and phase-shifting nonuniformities. Synthetic and experimental results corroborate the robustness of the method. PMID:23938535
Moskal, P; Silarski, M; Bednarski, T; Białas, P; Bułka, J; Czerwiński, E; Gajos, A; Kamińska, D; Kapłon, L; Kochanowski, A; Korcyl, G; Kowal, J; Kowalski, P; Kozik, T; Krzemień, W; Kubicz, E; Niedźwiecki, Sz; Pałka, M; Raczyński, L; Rudy, Z; Rundel, O; Salabura, P; Słomski, A; Smyrski, J; Strzelecki, A; Wieczorek, A; Wiślicki, W; Wochlik, I; Zieliński, M; Zoń, N
2015-01-01
In this article we present a novel method of hit time and hit position reconstruction in long scintillator detectors. We take advantage of the fact that for this kind of detector the amplitude and shape of the registered signals depend strongly on the position at which the particle hits the detector. The reconstruction is based on determining the degree of similarity between measured signals and averaged signals stored in a library for a set of well-defined positions along the scintillator. Preliminary results of the validation of the introduced method with experimental data obtained by means of the double-strip prototype of the J-PET detector are presented.
NASA Astrophysics Data System (ADS)
Park, J. J.
2009-12-01
Frequency-dependent coherence between atmospheric CO2 and historical temperatures reveals climate feedbacks within Earth's carbon cycle. Kuo et al (1990) showed that CO2 lagged global-average temperatures by 5 months at interannual periods during 1958-1988, but this relationship has changed over time. Since 1979, at Mauna Loa and other observation sites, interannual coherence exhibits a 90° phase lag that suggests a direct correlation between temperatures and the time-derivative of CO2, not a simple time-lagged response. The coherence transition can be explained if the response time of CO2 to global temperature fluctuations has lengthened from 5-6 months to at least 15 months. This longer response time may reflect saturation of the oceanic carbon sink, but a transient shift in ocean circulation may also play a role. Coherent annual-cycle fluctuations in CO2 and temperature are evident in the 1958-1988 time series, but not the 1979-2008 interval. Coherence of CO2 with gridpoint temperature anomalies suggests that interannual temperature-CO2 correlations are dominated by the tropical oceans. The large influence of the terrestrial biosphere on atmospheric CO2, therefore, is not mediated significantly by interannual temperature anomalies. Kuo, C., C. Lindberg, and D. J. Thomson (1990), Coherence established between atmospheric carbon dioxide and global temperature, Nature, 343, 709-714.
NASA Astrophysics Data System (ADS)
Ben-David, Avishai; Vanderbeek, Richard G.; Gotoff, Steven W.; D'Amico, Francis M.
1997-10-01
The effects of flight geometry, signal averaging, and time-lag correlation coefficient on airborne CO2 DIAL lidar measurements are shown in simulations and field measurements. These factors have implications for multi-vapor measurements and also for measuring a single vapor with a wide absorption spectrum, for which one would like to make DIAL measurements at many wavelengths across the absorption spectrum of the gas. Thus it is of interest to know how many wavelengths and how many groups of wavelengths can be used effectively in DIAL measurements. Our data indicate that for our lidar about 80 wavelengths can be used for DIAL measurements of a stationary vapor. The lidar signal is composed of fluctuations with three time scales: a very short time scale due to system noise, which is faster than the data acquisition sampling rate of the receiver; a medium time scale due to atmospheric turbulence; and a long time scale due to slow atmospheric transmission drift from aerosol inhomogeneities. The decorrelation time scale of fluctuations for airborne lidar measurements depends on the flight geometry.
ERIC Educational Resources Information Center
Mass Insight Education (NJ1), 2009
2009-01-01
Given the importance of good teaching and leadership for school success, turnaround schools should think carefully about how to structure professional environments that reward and motivate excellence. A system of "Pay-for-Contribution" that includes tools such as hard-to-staff and skill shortage pay, performance pay, and/or retention pay, will…
Rapid Turnaround of Costing/Designing of Space Missions Operations
NASA Technical Reports Server (NTRS)
Kudrle, Paul D.; Welz, Gregory A.; Basilio, Eleanor
2008-01-01
The Ground Segment Team (GST), at NASA's Jet Propulsion Laboratory in Pasadena, California, provides high-level mission operations concepts and cost estimates for projects that are in the formulation phase. GST has developed a tool to track costs, assumptions, and mission requirements, and to rapidly turn around estimates for mission operations, ground data systems, and tracking for deep space and near Earth missions. Estimates that would often take several weeks to generate are now generated in minutes through the use of an integrated suite of cost models. The models were developed through interviews with domain experts in areas of Mission Operations, including but not limited to: systems engineering, payload operations, tracking resources, mission planning, navigation, telemetry and command, and ground network infrastructure. Data collected during interviews were converted into parametric cost models and integrated into one tool suite. The tool has been used on a wide range of missions, from small Earth orbiters to flagship missions like Cassini. The tool is an aid to project managers and mission planners as they consider different scenarios during the proposal and early development stages of their missions. The tool is also used for gathering cost-related requirements and assumptions and for conducting integrated analysis of multiple missions.
Agile turnaround using post-stall maneuvers for tail-sitter VTOL UAVs
Takaaki Matsumoto; Atsushi Konno; Ren Suzuki; Atsushi Oosedo; Kenta Go; Masaru Uchiyama
2010-01-01
Miniature vertical take-off and landing unmanned aerial vehicles (VTOL UAVs) alone make possible various missions, such as surveillance in partially destroyed buildings and over broad hazard areas where many obstacles exist. In such missions, agile turnaround using post-stall maneuvers is useful for avoiding obstacles. This paper discusses agile turnaround strategies utilizing post-stall maneuvers for tail-sitter VTOL UAVs. Two agile turn strategies
NASA Astrophysics Data System (ADS)
Shonkwiler, K. B.; Ham, J. M.; Williams, C. M.
2013-12-01
Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances where such aerosols can deposit in sensitive regions, potentially causing harm to local ecosystems. However, quantifying the emissions of ammonia from CAFOs through direct measurement is very difficult and costly to perform. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured using inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are only deployed when wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data to modeled emissions from more sophisticated backward Lagrangian stochastic (bLs) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems were deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada) which continuously measured ammonia concentrations instantaneously over a 225-m path. This particular laser is utilized in agricultural settings, and in combination with a bLs model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. 
This study will first compare the ammonia concentrations measured with the Radiello system to those measured with the long-path IR laser. Second, NH3 emissions estimated using the simple inverse model (FIDES) and the time-averaged data will be compared to emissions derived from the bLs model (WindTrax) using the laser-based NH3 data. Results could lead to a more cost-efficient and simpler technique for monitoring ammonia fluxes from CAFOs and other strong areal sources.
Long, Leroy L.; Srinivasan, Manoj
2013-01-01
On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk–run mixture at intermediate speeds and a walk–rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients—a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk–run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill. PMID:23365192
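The role of non-convexity in the argument above can be illustrated with a toy cost curve. The sketch below uses a made-up two-branch energy-per-distance function, not the measured metabolic curves; it only shows why, on a non-convex curve, a walk-run mixture can undercut steady motion at an intermediate speed:

```python
def e(v):
    # hypothetical energy cost per metre at speed v (J/m):
    # a "walking" branch cheap at low speed and a "running" branch
    # cheap at high speed; their lower envelope is non-convex
    walk = 2.0 + (v - 1.2) ** 2
    run = 3.0 + 0.5 * (v - 3.0) ** 2
    return min(walk, run)

v1, v2, v_avg = 1.2, 3.0, 2.1   # walk speed, run speed, required average

# time fractions t1, t2 with t1 + t2 = 1 and t1*v1 + t2*v2 = v_avg
t2 = (v_avg - v1) / (v2 - v1)
t1 = 1.0 - t2

# energy per metre of the mixture: total energy / total distance
mix_cost = (t1 * e(v1) * v1 + t2 * e(v2) * v2) / v_avg
steady_cost = e(v_avg)
print(mix_cost < steady_cost)  # True: the mixture is cheaper here
```

On a convex cost curve the inequality would reverse, which is why steady locomotion is usually assumed optimal; the paper's point is that the measured curves are not convex everywhere.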
NASA Astrophysics Data System (ADS)
Shimada, Ken-ichi; Ishii, Toshiki; Hoshizawa, Taku; Takashima, Yuzuru
2015-09-01
Mechanical instabilities during recording, and the resulting degradation of signal quality, in a high-density, high-data-transfer-rate holographic data storage system (HDSS) are among the obstacles preventing the technology from becoming a stable system. We analytically formulated the effects of mechanical instabilities of a Galvano mirror and spindle motor on the HDSS by incorporating the concept of time-averaged holography. Mechanical parameters such as the amplitude and frequency of mechanical oscillation are related to optical parameters such as the amplitude and phase of the reference and signal beams. In particular, the analytical formulation led to a new method of optical post-compensation for mechanical instability during hologram recording. The optical post-compensation method enables a robust implementation of HDSS against mechanical instabilities.
Mumbare, Sachin S; Gosavi, Shriram; Almale, Balaji; Patil, Aruna; Dhakane, Supriya; Kadu, Aniruddha
2014-01-01
Background: India's National Family Welfare Programme is dominated by sterilization, particularly tubectomy. Sterilization, being a terminal method of contraception, decides the final number of children for that couple. Many studies have shown a declining trend in the average number of living children at the time of sterilization over a short period of time. This study was therefore planned to perform time series analysis of the average number of children at the time of terminal contraception, to forecast the same until 2020, and to compare the rates of change in various subgroups of the population. Materials and Methods: Data were preprocessed in MS Access 2007 by creating and running SQL queries. After testing the stationarity of every series with the augmented Dickey-Fuller test, time series analysis and forecasting were done using the best-fit Box-Jenkins ARIMA (p, d, q) nonseasonal model. To compare the rates of change of the average number of children at sterilization in various subgroups, analysis of covariance (ANCOVA) was applied. Results: Forecasting showed that the replacement level of 2.1 total fertility rate (TFR) will be achieved in 2018 for couples opting for sterilization. The same will be achieved in 2020, 2016, 2018, and 2019 for rural areas, urban areas, Hindu couples, and Buddhist couples, respectively. It will not be achieved until 2020 in Muslim couples. Conclusion: Every stratum of the population showed the declining trend. The decline for male children and in rural areas was significantly faster than the decline for female children and in urban areas, respectively. The decline was not significantly different among Hindu, Muslim, and Buddhist couples. PMID:25364146
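The Box-Jenkins fitting in the study was done on the real programme data; the mechanics can be illustrated with a minimal ARIMA(1,1,0)-style sketch in Python (the declining series below is invented): difference once to remove the trend, fit an AR(1) coefficient by least squares, forecast the differences, and cumulate back to the original scale.

```python
import numpy as np

# invented series of "average children at sterilization" by year
y = np.array([3.9, 3.8, 3.65, 3.55, 3.42, 3.33, 3.22, 3.14, 3.05, 2.98])

d = np.diff(y)                               # first difference removes the trend
phi = (d[1:] @ d[:-1]) / (d[:-1] @ d[:-1])   # AR(1) coefficient by least squares

# forecast the next 5 differences, then cumulate back (the "I" in ARIMA)
steps, last = [], d[-1]
for _ in range(5):
    last = phi * last
    steps.append(last)
forecast = y[-1] + np.cumsum(steps)
print(forecast)  # continues the decline, flattening as the AR term decays
```

A production fit would also estimate an intercept, select (p, d, q) by an information criterion, and report confidence intervals, as dedicated ARIMA software does.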
NASA Astrophysics Data System (ADS)
Curt, Petersen F.; Bodnar, Michael R.; Ortiz, Fernando E.; Carrano, Carmen J.; Kelmelis, Eric J.
2009-02-01
While imaging over long distances is critical to a number of security and defense applications, such as homeland security and launch tracking, current optical systems are limited in resolving power. This is largely a result of the turbulent atmosphere in the path between the region under observation and the imaging system, which can severely degrade captured imagery. There are a variety of post-processing techniques capable of recovering this obscured image information; however, the computational complexity of such approaches has prohibited real-time deployment and hampers the usability of these technologies in many scenarios. To overcome this limitation, we have designed and manufactured an embedded image processing system based on commodity hardware which can compensate for these atmospheric disturbances in real-time. Our system consists of a reformulation of the average bispectrum speckle method coupled with a high-end FPGA processing board, and employs modular I/O capable of interfacing with most common digital and analog video transport methods (composite, component, VGA, DVI, SDI, HD-SDI, etc.). By leveraging the custom, reconfigurable nature of the FPGA, we have achieved performance twenty times faster than a modern desktop PC, in a form-factor that is compact, low-power, and field-deployable.
NASA Astrophysics Data System (ADS)
Chen, XinJian
2007-11-01
This paper presents a laterally averaged two-dimensional trajectory model for narrow rivers and estuaries and its use for estimating transport time scales in the Alafia River estuary, Florida. The model simulates trajectories of conservative particles in narrow rivers and estuaries with variable width. In the model, the advective movement of the particle is calculated using a high-order approach involving multiple sub-timesteps, while the diffusive movement of the particle is modeled using a random walk procedure. A reflection routine is used to prevent particles from crossing the bottom and free-surface boundaries. The trajectory model was validated against the analytical solution for an idealized advection-diffusion case before it was applied to the Alafia River estuary. The velocity field at each time step was provided by a hydrodynamic model named LAMFE that has been well calibrated for the Alafia River estuary. The trajectory model was run for 16 constant freshwater flow rates at the upstream boundary to calculate both the estuarine residence time (ERT) and the age of fresh water entering the system from the upstream boundary. Depending on the rate of the freshwater inflow, it was found that the ERT of the estuary varies roughly between 2 and 10 days, while the age of water at the most downstream cross section varies between less than 2 days and about 7 days. An empirical relationship between ERT and the freshwater inflow at the upstream boundary is obtained as ERT = 49.8 + 226.5e^(-0.0113Q).
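A minimal 1-D version of the advection-plus-random-walk scheme with boundary reflection described above can be sketched as follows (all parameter values are illustrative placeholders, not those of the Alafia River model):

```python
import numpy as np

rng = np.random.default_rng(1)

L = 1000.0   # domain length (m), hypothetical
u = 0.05     # advective velocity (m/s), hypothetical
K = 0.5      # diffusivity (m^2/s), hypothetical
dt = 10.0    # sub-timestep (s)

def step(x):
    # advection plus random-walk diffusion: dx = u*dt + sqrt(2*K*dt)*xi
    x = x + u * dt + np.sqrt(2.0 * K * dt) * rng.standard_normal(x.shape)
    # reflection routine: fold particles that crossed a boundary back inside
    x = np.where(x < 0.0, -x, x)
    x = np.where(x > L, 2.0 * L - x, x)
    return x

x = np.zeros(500)        # release 500 particles at the upstream end (x = 0)
for _ in range(1000):    # 10^4 s of simulated transport
    x = step(x)

print(x.mean())  # mean displacement near u * t = 500 m
```

Transit-time statistics such as the ERT then follow from recording when each particle first leaves the domain, rather than from the snapshot positions shown here.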
NASA Astrophysics Data System (ADS)
Espinar, B.; Blanc, P.; Wald, L.; Hoyer-Klick, C.; Schroedter-Homscheidt, M.; Wanderer, T.
2012-04-01
Meteorological data measured by ground stations are often a key element in the development and validation of methods exploiting satellite images. These data are considered a reference against which satellite-derived estimates are compared. Long-term radiation and meteorological measurements are available from a large number of measuring stations. However, close examination of the data often reveals a lack of quality, often for extended periods of time. This lack of quality has been the reason, in many cases, for the rejection of large amounts of available data. Data quality must be checked before use in order to guarantee the inputs for the methods used in modelling, monitoring, forecasting, etc. To control their quality, data should be submitted to several conditions or tests. After this checking, data that are not flagged by any of the tests are released as plausible. In this work, a bibliographical survey of quality control tests was performed for the common meteorological variables (ambient temperature, relative humidity and wind speed) and for the usual solar radiometric variables (horizontal global and diffuse components of the solar radiation and the beam normal component). The different tests have been grouped according to the variable and the averaging period (sub-hourly, hourly, daily and monthly averages). The quality tests may be classified as follows: • Range checks: tests that verify that values are within a specific range. There are two types of range checks, those based on extrema and those based on rare observations. • Step checks: tests aimed at detecting unrealistic jumps or stagnation in the time series. • Consistency checks: tests that verify the relationship between two or more time series. The gathered quality tests are applicable for all latitudes, as they have not been optimized regionally or seasonally, with the aim of being generic.
They have been applied to ground measurements in several geographic locations, which resulted in the detection of some control tests that are no longer adequate, for various reasons. After the modification of some tests, based on our experience, a set of quality control tests is now presented, updated according to technological advances and classified. The presented set of quality tests allows radiation and meteorological data to be checked in order to assess their plausibility for use as inputs in theoretical or empirical methods for scientific research. The research leading to these results has partly received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement no. 262892 (ENDORSE project).
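The three families of checks can be sketched for an hourly ambient-temperature series as follows (the thresholds are illustrative placeholders, not the values collected in the survey):

```python
import numpy as np

def qc_flags(t_air, t_dew):
    """Flag implausible hourly air-temperature values (True = flagged)."""
    t_air = np.asarray(t_air, dtype=float)
    t_dew = np.asarray(t_dew, dtype=float)
    flags = np.zeros(t_air.size, dtype=bool)

    # 1. Range check: physically implausible extrema (placeholder limits, deg C)
    flags |= (t_air < -60.0) | (t_air > 55.0)

    # 2. Step check: unrealistic jump between consecutive hourly values
    flags |= np.abs(np.diff(t_air, prepend=t_air[0])) > 10.0

    # 3. Consistency check against a second series:
    #    dew point cannot exceed air temperature
    flags |= t_dew > t_air

    return flags  # unflagged data are released as plausible

t_air = [12.1, 12.4, 30.0, 12.8, 90.0, 13.0]
t_dew = [8.0, 8.1, 8.3, 14.0, 8.2, 8.4]
print(qc_flags(t_air, t_dew))
```

A real implementation would also test for stagnation (the other half of the step-check family) and use limits tuned per variable and averaging period.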
NASA Astrophysics Data System (ADS)
Pizzuto, James; Schenk, Edward R.; Hupp, Cliff R.; Gellis, Allen; Noe, Greg; Williamson, Elyse; Karwan, Diana L.; O'Neal, Michael; Marquard, Julia; Aalto, Rolf; Newbold, Denis
2014-02-01
Watershed Best Management Practices (BMPs) are often designed to reduce loading from particle-borne contaminants, but the temporal lag between BMP implementation and improvement in receiving water quality is difficult to assess because particles are only moved downstream episodically, resting for long periods in storage between transport events. A theory is developed that describes the downstream movement of suspended sediment particles accounting for the time particles spend in storage given sediment budget data (by grain size fraction) and information on particle transit times through storage reservoirs. The theory is used to define a suspended sediment transport length scale that describes how far particles are carried during transport events, and to estimate a downstream particle velocity that includes time spent in storage. At 5 upland watersheds of the mid-Atlantic region, transport length scales for silt-clay range from 4 to 60 km, while those for sand range from 0.4 to 113 km. Mean sediment velocities for silt-clay range from 0.0072 km/yr to 0.12 km/yr, while those for sand range from 0.0008 km/yr to 0.20 km/yr, 4-6 orders of magnitude slower than the velocity of water in the channel. These results suggest lag times of 100-1000 years between BMP implementation and effectiveness in receiving waters such as the Chesapeake Bay (where BMPs are located upstream of the characteristic transport length scale). Many particles likely travel much faster than these average values, so further research is needed to determine the complete distribution of suspended sediment velocities in real watersheds.
Pizzuto, James; Schenk, Edward R.; Hupp, Cliff R.; Gellis, Allen; Noe, Greg; Williamson, Elyse; Karwan, Diana L.; O'Neal, Michael; Marquard, Julia; Aalto, Rolf; Newbold, Denis
2014-01-01
Watershed Best Management Practices (BMPs) are often designed to reduce loading from particle-borne contaminants, but the temporal lag between BMP implementation and improvement in receiving water quality is difficult to assess because particles are only moved downstream episodically, resting for long periods in storage between transport events. A theory is developed that describes the downstream movement of suspended sediment particles accounting for the time particles spend in storage given sediment budget data (by grain size fraction) and information on particle transit times through storage reservoirs. The theory is used to define a suspended sediment transport length scale that describes how far particles are carried during transport events, and to estimate a downstream particle velocity that includes time spent in storage. At 5 upland watersheds of the mid-Atlantic region, transport length scales for silt-clay range from 4 to 60 km, while those for sand range from 0.4 to 113 km. Mean sediment velocities for silt-clay range from 0.0072 km/yr to 0.12 km/yr, while those for sand range from 0.0008 km/yr to 0.20 km/yr, 4–6 orders of magnitude slower than the velocity of water in the channel. These results suggest lag times of 100–1000 years between BMP implementation and effectiveness in receiving waters such as the Chesapeake Bay (where BMPs are located upstream of the characteristic transport length scale). Many particles likely travel much faster than these average values, so further research is needed to determine the complete distribution of suspended sediment velocities in real watersheds.
BATSE Observations of Gamma-Ray Burst Spectra. Part 3; Low-Energy Behavior of Time-Averaged Spectra
NASA Technical Reports Server (NTRS)
Preece, R. D.; Briggs, M. S.; Pendleton, G. N.; Paciesas, W. S.; Matteson, J. L.; Band, D. L.; Skelton, R. T.; Meegan, C. A.
1996-01-01
We analyze time-averaged spectra from 86 bright gamma-ray bursts from the first 5 years of the Burst And Transient Source Experiment (BATSE) on board the Compton Gamma Ray Observatory to determine whether the lowest energy data are consistent with a standard spectral form fit to the data at all energies. The BATSE Spectroscopy Detectors have the capability to observe photons as low as 5 keV. Using the gamma-ray burst locations obtained with the BATSE Large Area Detectors, the Spectroscopy Detectors' low-energy response can be modeled accurately. This, together with a postlaunch calibration of the lowest energy Spectroscopy Detector discriminator channel, which can lie in the range 5-20 keV, allows spectral deconvolution over a broad energy range, approx. 5 keV to 2 MeV. The additional coverage allows us to search for evidence of excess emission, or for a deficit, below 20 keV. While no burst has a significant (greater than or equal to 3 sigma) deficit relative to a standard spectral model, we find that 12 bursts have excess low-energy emission, ranging between 1.2 and 5.8 times the model flux, that exceeds 5 sigma in significance. This is evidence for an additional low-energy spectral component in at least some bursts, or for deviations from the power-law spectral form typically used to model gamma-ray bursts at energies below 100 keV.
Keith S Cover
2008-03-20
While the multiexponential nature of T2 decays measured in vivo is well known, characterizing T2 decays by a single time constant is still very useful when differentiating among structures and pathologies in MRI images. A novel, robust, fast and very simple method is presented for both estimating and displaying the average time constant for the T2 decay of each pixel from a multiecho MRI sequence. The average time constant is calculated from the average of the values measured from the T2 decay over many echoes. For a monoexponential decay, the normalized decay average varies monotonically with the time constant. Therefore, it is simple to map any normalized decay average to an average time constant. This method takes advantage of the robustness of the normalized decay average to both artifacts and multiexponential decays. Color intensity projections (CIPs) were used to display 32 echoes acquired at a 10 ms spacing as a single color image. The brightness of each pixel in each color image was determined by the intensity of the corresponding pixel in the earliest image and the hue was determined by the normalized decay average. Examples demonstrate the effectiveness of using CIPs to display the results of a multiecho sequence for a healthy subject and a multiple sclerosis patient.
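The mapping described above can be sketched directly: for a monoexponential decay the normalized decay average is a monotone function of T2, so any measured average maps back to a unique average time constant. A minimal sketch assuming 32 echoes at 10 ms spacing; the numerical inversion by bisection is an illustrative choice, not the paper's implementation:

```python
import math

ECHO_TIMES = [10.0 * k for k in range(1, 33)]  # 32 echoes, 10 ms spacing

def normalized_decay_average(t2_ms, echo_times=ECHO_TIMES):
    # average of the decay values over all echoes, normalized by the first echo
    s = [math.exp(-t / t2_ms) for t in echo_times]
    return sum(v / s[0] for v in s) / len(s)

def average_t2(decay_average, lo=1.0, hi=5000.0, echo_times=ECHO_TIMES):
    # the normalized decay average grows monotonically with T2, so a measured
    # value can be mapped back to an average T2 by simple bisection
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if normalized_decay_average(mid, echo_times) < decay_average:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In practice the measured, artifact-robust decay average would be fed into `average_t2` pixel by pixel to build the hue channel of the CIP display.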
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Another approach, focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, computes the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
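For the scalar-weighted case, the optimal average is the dominant eigenvector of the weighted sum of quaternion outer products, a formulation that is sign-invariant (q and -q represent the same attitude). A minimal sketch using power iteration; the Note's general matrix-weighted solution is more involved:

```python
import math

def average_quaternion(quats, weights):
    # accumulate M = sum_i w_i * q_i q_i^T; the optimal average quaternion is
    # the eigenvector of M with the largest eigenvalue (invariant to the sign
    # ambiguity of each input quaternion)
    M = [[0.0] * 4 for _ in range(4)]
    for q, w in zip(quats, weights):
        for i in range(4):
            for j in range(4):
                M[i][j] += w * q[i] * q[j]
    # dominant eigenvector by power iteration (M is symmetric positive semidefinite)
    v = [1.0, 0.0, 0.0, 0.0]
    for _ in range(200):
        v = [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]
        n = math.sqrt(sum(x * x for x in v))
        v = [x / n for x in v]
    return v
```

Feeding in the same rotation with opposite quaternion signs, for instance, still returns a unit quaternion for that rotation, which a naive component-wise mean would not.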
A Rapid Turnaround Two-Stage Adiabatic Demagnetization Refrigerator for Cooling to 50 mK
NASA Astrophysics Data System (ADS)
Shirron, P. J.; DiPirro, M. J.; Panek, J. S.; Francis, J. J.; Warner, B. A.; Jackson, M. L.
2006-04-01
Many research and development programs require rapid access to very low temperatures (~50 mK). For detector development, relatively large experiment volumes are also needed for tests involving integrated detectors and readout amplifiers (which may need to be stationed at a different temperature). To provide this capability in a versatile, fast turnaround system, we have constructed a two-stage adiabatic demagnetization refrigerator (ADR) that is operated in a simple bucket-style dewar. The ADR/cryostat is separated into two concentric inserts; the outer insert supports two 3.3 T magnets, magnet leads and magnetic shielding, and the inner one consists of a hermetic experiment volume coupled to the ADR's salt pills and heat switches. The magnet insert remains in the dewar at all times, while the ADR insert may be inserted and removed even when the dewar is cold. The cooldown from room temperature takes less than 1.5 hours, and cycling of the ADR as little as 30 minutes. Future tests will investigate the use of neon exchange gas to accelerate cooling of the ADR and more thermally isolated components such as wiring and structural supports. Design and operation of the system are discussed.
Kilburn, Tina R.; Eriksen, Hanne-Lise Falgreen; Underbjerg, Mette; Thorsen, Poul; Mortensen, Erik Lykke; Landrø, Nils Inge; Bakketeig, Leiv S.; Grove, Jakob; Sværke, Claus; Kesmodel, Ulrik Schiøler
2015-01-01
Background Deficits in information processing may be a core deficit after fetal alcohol exposure. This study was designed to investigate the possible effects of weekly low to moderate maternal alcohol consumption and binge drinking episodes in early pregnancy on choice reaction time (CRT) and information processing time (IPT) in young children. Method Participants were sampled based on maternal alcohol consumption during pregnancy. At the age of 60–64 months, 1,333 children were administered a modified version of the Sternberg paradigm to assess CRT and IPT. In addition, a test of general intelligence (WPPSI-R) was administered. Results Adjusted for a wide range of potential confounders, this study showed no significant effects of average weekly maternal alcohol consumption during pregnancy on CRT or IPT. There was, however, an indication of slower CRT associated with binge drinking episodes in gestational weeks 1–4. Conclusion This study observed no significant effects of average weekly maternal alcohol consumption during pregnancy on CRT or IPT as assessed by the Sternberg paradigm. However, there were some indications of CRT being associated with binge drinking during very early pregnancy. Further large-scale studies are needed to investigate effects of different patterns of maternal alcohol consumption on basic cognitive processes in offspring. PMID:26382068
Dunn, M.G.; Kim, J. (Calspan-UB Research Center, Buffalo, NY (United States)); Boyle, R.J. (NASA Lewis Research Center, Cleveland, OH (United States)); Civinskas, K.C.
1994-01-01
Time-averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row and the first-stage blade row of the Rocketdyne Space Shuttle Main Engine two-stage fuel-side turbine. These measurements were made at 10, 50, and 90 percent span on both the pressure and suction surfaces of the component. Stanton-number distributions are also reported for the second-stage vane at 50 percent span. A shock tube is used as a short-duration source of heated and pressurized air to which the turbine is subjected. Platinum thin-film gages are used to obtain the heat-flux measurements and miniature silicone-diaphragm pressure transducers are used to obtain the surface pressure measurements. The first-stage vane Stanton number distributions are compared with predictions obtained using a quasi-three dimensional Navier-Stokes solution and a version of STAN5. This same N-S technique was also used to obtain predictions for the first blade and the second vane.
NASA Technical Reports Server (NTRS)
Dunn, M. G.; Kim, J.; Civinskas, K. C.; Boyle, R. J.
1992-01-01
Time-averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row and the first-stage blade row of the Rocketdyne Space Shuttle Main Engine two-stage fuel-side turbine. These measurements were made at 10 percent, 50 percent, and 90 percent span on both the pressure and suction surfaces of the component. Stanton-number distributions are also reported for the second-stage vane at 50 percent span. A shock tube is used as a short-duration source of heated and pressurized air to which the turbine is subjected. Platinum thin-film gages are used to obtain the heat-flux measurements and miniature silicone-diaphragm pressure transducers are used to obtain the surface pressure measurements. The first-stage vane Stanton number distributions are compared with predictions obtained using a quasi-3D Navier-Stokes solution and a version of STAN5. This same N-S technique was also used to obtain predictions for the first blade and the second vane.
David Damanik; Serguei Tcheremchantsev
2008-01-22
We develop further the approach to upper and lower bounds in quantum dynamics via complex analysis methods which was introduced by us in a sequence of earlier papers. Here we derive upper bounds for non-time averaged outside probabilities and moments of the position operator from lower bounds for transfer matrices at complex energies. Moreover, for the time-averaged transport exponents, we present improved lower bounds in the special case of the Fibonacci Hamiltonian. These bounds lead to an optimal description of the time-averaged spreading rate of the fast part of the wavepacket in the large coupling limit. This provides the first example which demonstrates that the time-averaged spreading rates may exceed the upper box-counting dimension of the spectrum.
Shih, H C; Tsai, S W; Kuo, C H
2012-01-01
A solid-phase microextraction (SPME) device was used as a diffusive sampler for airborne propylene glycol ethers (PGEs), including propylene glycol monomethyl ether (PGME), propylene glycol monomethyl ether acetate (PGMEA), and dipropylene glycol monomethyl ether (DPGME). A Carboxen-polydimethylsiloxane (CAR/PDMS) SPME fiber was selected for this study. Polytetrafluoroethylene (PTFE) tubing was used as the holder, and the SPME fiber assembly was inserted into the tubing as a diffusive sampler. The diffusion path length and area of the sampler were 0.3 cm and 0.00086 cm², respectively. The theoretical sampling constants at 30°C and 1 atm for PGME, PGMEA, and DPGME were 1.50 × 10⁻², 1.23 × 10⁻², and 1.14 × 10⁻² cm³ min⁻¹, respectively. For evaluations, known concentrations of PGEs around the threshold limit values/time-weighted average at specific relative humidities (10% and 80%) were generated by both the air bag method and a dynamic generation system, while 15, 30, 60, 120, and 240 min were selected as the time periods for vapor exposures. Comparisons of the SPME diffusive sampling method to Occupational Safety and Health Administration (OSHA) organic Method 99 were performed side-by-side in an exposure chamber at 30°C for PGME. A gas chromatograph with flame ionization detector (GC/FID) was used for sample analysis. The experimental sampling constants of the sampler at 30°C were (6.93 ± 0.12) × 10⁻¹, (4.72 ± 0.03) × 10⁻¹, and (3.29 ± 0.20) × 10⁻¹ cm³ min⁻¹ for PGME, PGMEA, and DPGME, respectively. Adsorption of chemicals on the stainless steel needle of the SPME fiber was suspected to be one reason for the significant differences observed between theoretical and experimental sampling rates. Correlations between the results for PGME from the SPME device and OSHA organic Method 99 were linear (r = 0.9984) and consistent (slope = 0.97 ± 0.03). Face velocity (0-0.18 m/s) also proved to have no effect on the sampler. However, effects of temperature and humidity were observed; therefore, adjustment of the experimental sampling constants to different environmental conditions will be necessary. PMID:22651222
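The theoretical constants quoted above follow Fick's first law for a diffusive sampler: sampling rate SR = D·A/L, where D is the analyte's diffusion coefficient, A the cross-sectional area, and L the diffusion path length. A minimal sketch; the diffusion coefficient used in the test is back-calculated from the paper's numbers, not an independently sourced value:

```python
def sampling_constant(d_cm2_min, area_cm2, path_cm):
    # Fick's-law sampling rate of a diffusive sampler: SR = D * A / L [cm^3/min]
    return d_cm2_min * area_cm2 / path_cm

def twa_concentration(mass_ug, sr_cm3_min, minutes):
    # time-weighted average concentration from the mass collected on the
    # fiber over the exposure period [ug/cm^3]
    return mass_ug / (sr_cm3_min * minutes)
```

With A = 0.00086 cm² and L = 0.3 cm as reported, a D of roughly 5.2 cm²/min reproduces the quoted PGME constant of 1.50 × 10⁻² cm³/min.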
ERIC Educational Resources Information Center
Cucchiara, Maia Bloomfield; Rooney, Erin; Robertson-Kraft, Claire
2015-01-01
School turnaround--a reform strategy that strives for quick and dramatic transformation of low-performing schools--has gained prominence in recent years. This study uses interviews and focus groups conducted with 86 teachers in 13 schools during the early stages of school turnaround in a large urban district to examine teachers' perceptions of the…
Blowing the Whistle: How IS Auditors Can Turnaround Troubled Software Projects -- And Why
Given that IS auditors have a responsibility to report bad news--or blow the whistle--on a project, one may reasonably ask: "In practice, how
ERIC Educational Resources Information Center
MacTaggart, Terrence, Ed.
2007-01-01
This book discusses the early indicators of a college or university's need for a turnaround. It outlines financial trends and other indicators of distress, as well as benchmarks for the various stages of an effective turnaround strategy. The book will help trustees, presidents, and faculty members diagnose whether they are in denial about the true…
Julien M. E. Fraïsse; Daniel Braun
2015-04-13
We investigate in detail a recently introduced "coherent averaging scheme" in terms of its usefulness for achieving Heisenberg limited sensitivity in the measurement of different parameters. In the scheme, $N$ quantum probes in a product state interact with a quantum bus. Instead of measuring the probes directly and then averaging as in classical averaging, one measures the quantum bus or the entire system and tries to estimate the parameters from these measurement results. Combining analytical results from perturbation theory and an exactly solvable dephasing model with numerical simulations, we draw a detailed picture of the scaling of the best achievable sensitivity with $N$, the dependence on the initial state, the interaction strength, the part of the system measured, and the parameter under investigation.
Yan Zhao; Pavel Belov; Yang Hao
2007-01-01
The accuracy of finite-difference time-domain (FDTD) modelling of left-handed metamaterials (LHMs) is dramatically improved by using an averaging technique along the boundaries of the LHM slabs. The material frequency dispersion of the LHMs is taken into account using auxiliary differential equation (ADE) based dispersive FDTD methods. The dispersive FDTD method with averaged permittivity along the material boundaries is implemented for
CAD/CAM, Creativity, and Discipline Lead to Turnaround School Success
ERIC Educational Resources Information Center
Gorman, Lynn
2012-01-01
Miami Central High School technology teacher Frank Houghtaling thinks the connection between theory and application is one reason his students perform better on the Florida Comprehensive Assessment Test (FCAT). The impressive turnaround school drew local and national attention last spring when one of Houghtaling's students, Dagoberto Cruz, won…
Participatory Democracy and Struggling Schools: Making Space for Youth in School Turnarounds
ERIC Educational Resources Information Center
Kirshner, Ben; Jefferson, Anton
2015-01-01
Background/Context:Federal policy, as codified in Race to the Top (RTT) funding guidelines, outlines four types of intervention: turnaround, restart, closure, and transformation. RTT has embraced a technocratic paradigm for school reform that frames choice less as the opportunity for the public to deliberate about what it wants from its schools…
Choosing a School Turnaround Provider. Lessons Learned. Volume 1, Issue 3
ERIC Educational Resources Information Center
Lockwood, Anne Turnbaugh; Fleischman, Steve
2010-01-01
Droves of school turnaround providers are chasing the massive federal infusion of funds flowing into failing schools. They arrive armed with glossy materials, impressive sounding claims, and, often, citing their prior relationships or experiences with one's school to support their promises of great service and impressive outcomes. But, are their…
Turnaround radius in an accelerated universe with quasi-local mass
NASA Astrophysics Data System (ADS)
Faraoni, Valerio; Lapierre-Léonard, Marianne; Prain, Angus
2015-10-01
We apply the Hawking-Hayward quasi-local energy construct to obtain in a rigorous way the turnaround radius of cosmic structures in General Relativity. A splitting of this quasi-local mass into local and cosmological parts describes the interplay between local attraction and cosmological expansion.
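In the ΛCDM limit the turnaround radius of a structure of mass M reduces to the well-known expression R_ta = (3GM/(Λc²))^(1/3), the radius at which cosmological expansion balances local attraction. A minimal numerical sketch in SI units; the Λ value is an assumed approximate observational figure, and the paper's quasi-local-mass refinements are not captured:

```python
def turnaround_radius_m(mass_kg):
    # R_ta = (3 G M / (Lambda c^2))**(1/3): beyond this radius the
    # cosmological expansion dominates the structure's gravitational pull
    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8     # speed of light, m/s
    lam = 1.1e-52   # cosmological constant, m^-2 (assumed value)
    return (3.0 * G * mass_kg / (lam * c * c)) ** (1.0 / 3.0)
```

For a Milky Way-scale mass (~10¹² solar masses, ~2 × 10⁴² kg) this gives a turnaround radius of order a megaparsec.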
ERIC Educational Resources Information Center
Stuit, David A.
2010-01-01
This study investigates the successes of the charter and district sectors in eliminating bad schools via dramatic turnarounds in performance and/or shutdowns. It identified 2,025 low-performing charter and district schools across ten states, each of which is home to a sizable number of charter schools. These particular schools were tracked from…
Tinkering and Turnarounds: Understanding the Contemporary Campaign to Improve Low-Performing Schools
ERIC Educational Resources Information Center
Duke, Daniel L.
2012-01-01
An unprecedented amount of attention in recent years has been focused on turning around low-performing schools. Drawing on insights from Tyack and Cuban's (1995) "Tinkering Toward Utopia," the article analyzes the forces behind the school turnaround phenomenon and how they have evolved since passage of the No Child Left Behind Act. The article…
A Case Study of Change Strategies Implemented in a Turnaround Elementary School
ERIC Educational Resources Information Center
Colson, Jo Ann
2012-01-01
This case study examined the change strategies in a turnaround school at the elementary level to understand and describe how change occurred and was sustained at this campus. This study examined the factors which contributed to the change in academic success of students, examined beliefs about change that led to the change process, identified the…
ERIC Educational Resources Information Center
Herman, Rebecca; Graczewski, Cheryl; James-Burdumy, Susanne; Murray, Matthew; Perez-Johnson, Irma; Tanenbaum, Courtney
2013-01-01
The federal School Improvement Grants (SIG) program, to which $3 billion were allocated under the American Recovery and Reinvestment Act of 2009 (ARRA), supports schools attempting to turn around a history of low performance. School turnaround also is a focus of Race to the Top (RTT), another ARRA-supported initiative, which involved a roughly $4…
Prevosto, L.; Mancinelli, B.; Kelly, H.; Instituto de Física del Plasma , Departamento de Física, Facultad de Ciencias Exactas y Naturales Ciudad Universitaria Pab. I, 1428 Buenos Aires
2013-12-15
This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating, highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well-defined groups, defining two quite different averaged electron temperature values. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the fluctuations but also a spatial average along the probe collecting length. The fitting of the high-current region of the characteristic using this electron temperature value, together with the corrections given by the fluctuation analysis, showed a relevant departure from local thermal equilibrium in the arc core.
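In the retarding region the electron current is exponential in the probe bias, I_e ∝ exp(V/T_e) with T_e in eV, so the electron temperature follows from the inverse slope of ln(I_e) versus V. A minimal least-squares sketch on synthetic data; the paper's fluctuation and spatial-averaging corrections are not modeled:

```python
import math

def electron_temperature_ev(volts, currents):
    # fit ln(I_e) = a + V / T_e in the exponential retarding region;
    # the inverse of the fitted slope is the electron temperature in eV
    ys = [math.log(i) for i in currents]
    n = len(volts)
    mx = sum(volts) / n
    my = sum(ys) / n
    slope = (sum((v - mx) * (y - my) for v, y in zip(volts, ys))
             / sum((v - mx) ** 2 for v in volts))
    return 1.0 / slope
```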
NASA Astrophysics Data System (ADS)
Vandenrijt, Jean-François; Thizy, Cédric; Georges, Marc P.
2015-08-01
We present an investigation of interferometric methods for vibration mode identification to be applied on shakers in an industrial environment. We consider long-wave infrared (LWIR) ESPI in time-averaged mode with phase stepping, which transforms the Bessel fringes typical of time averaging into phase values that provide a better means of identifying vibration mode shapes. The long laser wavelength allows larger vibration amplitudes to be measured than is possible in visible light, and also lowers the sensitivity to external perturbations. Time-averaged, phase-stepped shearography in the visible is also used as an alternative to LWIR ESPI for working in industrial environments. Although not demonstrated here, an interesting feature for future investigations of LWIR ESPI is its potential for simultaneous temperature measurement during vibration.
Reducing barriers to timely MR imaging scheduling.
Wessman, Brooke V; Moriarity, Andrew K; Ametlli, Vanda; Kastan, David J
2014-01-01
Scheduling a magnetic resonance (MR) imaging study at the authors' large health system in 2011 required considerable preparation before an appointment time was given to a patient. Difficulties in promptly scheduling appointments resulted from the varying time required for examinations, depending on the requested protocol, availability of appropriate MR imaging equipment, examination timing, prior insurance authorization verification, and proper patient screening. These factors contributed to a backlog of patients to schedule that regularly exceeded 300. A multidisciplinary process-improvement team was assembled to improve the turnaround time for scheduling an outpatient MR imaging examination (the interval between the time when the order was received and the time when the patient was informed about the MR imaging appointment). Process improvements targeted by the team included protocol turnaround time, schedule standardization, schedule intervals, examination timing, service standards, and scheduling redesign. Using lean methods and multiple plan-do-check-act cycles, the time to schedule an outpatient MR imaging examination improved from 117 hours to 33 hours, a 72% reduction, during the 9-month study period in 2011-2012. The number of patients in the scheduling queue was reduced by 90%. Overall MR imaging examinations within the specific patient population studied increased from 773 patient studies during the first month of intervention to 1444 studies the following month and averaged over 1279 patient studies per month throughout the study. PMID:25384302
Iuliano-Burns, S; Mirwald, R L; Bailey, D A
2001-01-01
Height, weight, and tissue accrual were determined in 60 male and 53 female adolescents measured annually over six years using standard anthropometry and dual-energy X-ray absorptiometry (DXA). Annual velocities were derived, and the ages and magnitudes of peak height and peak tissue velocities were determined using a cubic spline fit to individual data. Individuals were rank ordered on the basis of sex and age at peak height velocity (PHV) and then divided into quartiles: early (lowest quartile), average (middle two quartiles), and late (highest quartile) maturers. Sex- and maturity-related comparisons in ages and magnitudes of peak height and peak tissue velocities were made. Males reached peak velocities significantly later than females for all tissues and had significantly greater magnitudes at peak. The age at PHV was negatively correlated with the magnitude of PHV in both sexes. At a similar maturity point (age at PHV) there were no differences in weight or fat mass among maturity groups in both sexes. Late maturing males, however, accrued more bone mineral and lean mass and were taller at the age of PHV compared to early maturers. Thus, maturational status (early, average, or late maturity) as indicated by age at PHV is inversely related to the magnitude of PHV in both sexes. At a similar maturational point there are no differences between early and late maturers for weight and fat mass in boys and girls. PMID:11466961
ERIC Educational Resources Information Center
Sawyer, Richard
2013-01-01
Education officials and journalists frequently track changes over time in the average ACT® College Readiness Assessment Composite scores and ACT College Readiness Benchmark attainment rates of individual high schools. Using standard statistical methods, I examined how often changes in these statistics are unambiguously positive or negative, rather…
Averaging undifferentiable monitored parameters
A. D. Bolychevtsev; L. B. Bystritskaya; A. L. Oksman; V. G. Onoprienko; A. I. Yatsenko
1987-01-01
Averaging monitored parameters over time intervals is an essential part of data processing in calculating the indices for the performance of equipment in an industrial organization: power station, gas transport organization, and so on. The averaging is performed discretely by data-acquisition systems (DAS) or other such facilities, which periodically accumulate the input data on the current parameters and process them.
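Discrete averaging of a sampled parameter is typically computed with a trapezoidal rule over the acquisition interval, which also handles irregular sampling. A minimal sketch of what such a DAS might compute (names are illustrative):

```python
def time_average(times, values):
    # trapezoidal time-average of a (possibly irregularly) sampled parameter:
    # integrate piecewise-linearly, then divide by the interval length
    area = sum(0.5 * (values[i] + values[i + 1]) * (times[i + 1] - times[i])
               for i in range(len(times) - 1))
    return area / (times[-1] - times[0])
```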
Alam, Todd M; Hart, David; Rempe, Susan L B
2011-08-14
Ab initio molecular dynamics (AIMD) simulations have been used to predict the time-averaged Li NMR chemical shielding for a Li⁺ solution. These results are compared to NMR shielding calculations on smaller Li⁺(H₂O)ₙ clusters optimized in either the gas phase or with a polarizable continuum model (PCM) solvent. The trends introduced by the PCM solvent are described and compared to the time-averaged chemical shielding observed in the AIMD simulations, where large explicit water clusters hydrating the Li⁺ are employed. Different inner- and outer-coordination sphere contributions to the Li NMR shielding are evaluated and discussed. It is demonstrated that an implicit PCM solvent is not sufficient to correctly model the Li shielding, and that explicit inner-hydration-sphere waters are required during the NMR calculations. It is also shown that for hydrated Li⁺, the time-averaged chemical shielding cannot be simply described by the population-weighted average of coordination environments containing different numbers of waters. PMID:21701731
NASA Astrophysics Data System (ADS)
Jarochowska, Emilia; Munnecke, Axel
2015-01-01
Stable carbon isotope curves are used as a precise stratigraphic tool in the Paleozoic, even though they are commonly based on a shallow-water carbonate record characterized by low stratigraphic completeness. Identification of episodes of large-scale redeposition and erosion may improve δ13Ccarb-based correlations. Here, a series of at least three episodes of high-energy onshore redeposition is described from the Makarivka Member (new unit) of the Ustya Formation from the Homerian (middle Silurian) of Podolia, Ukraine. The Makarivka Member is emplaced within a tidal flat succession. Its most prominent part is divided into a lower polymictic conglomerate of sand- to boulder-sized clasts representing a range of subtidal facies, and an upper heterolithic unit composed of grainstone and mudstone laminae. The aim of the study is to identify the mechanism of deposition of the allochthonous conglomeratic material in this Member. Based on analogies with recent tsunami deposits, the conglomerate is interpreted to reflect the strongest landward-directed current in the tsunami run-up phase, and the heterolith to reflect alternating high-density landward currents, stagnant intervals allowing mud and land-derived debris to settle, and backwash flows. The tsunamite was deposited during an interval of decreasing isotopic values of the Mulde excursion, a global δ13C excursion reaching +5.2‰ in the studied sections. Clast redeposition in an interval characterized by rapidly changing δ13Ccarb offers the opportunity to evaluate the degree of temporal and spatial averaging caused by the tsunami. The clasts in the polymictic conglomerate show scattered δ13Ccarb values (-0.3‰ to +2.1‰) compared to homogeneous (1.3‰ to 1.6‰) values in the matrix. The presence of clasts characterized by low δ13Ccarb values is explained by their decrease with bathymetry rather than erosion of pre-excursion strata, whereas high values characterize material entrained from the sea floor and strata directly underlying the tsunamite. Close (1.3‰ and 1.5‰) average δ13Ccarb values suggest that the matrix of the conglomerate is potentially a product of clast grinding.
A Q-GERT analysis of the space shuttle ground turnaround system at Vandenberg Air Force Base
NASA Astrophysics Data System (ADS)
Graham, S.; Jones, T. W.
1982-09-01
The efficient ground turnaround of the Space Shuttle is critical to the execution of national policies. Research identified four major subsystems within the Vandenberg Ground Turnaround System; the Orbiter Maintenance Subsystem, External Tank Subsystem, Solid Rocket Booster Subsystem, and Launch Pad Subsystem. A Q-GERT simulation of the ground Turnaround System was conducted to investigate the system and observe the interactions between the major subsystems. The Q-GERT model simulated the integration of Space Shuttle components to determine the subsystem most sensitive to change. The results indicate the Launch Pad Subsystem is the most critical in the Vandenberg Ground Turnaround System. However, further research is recommended in areas of logistics support, spares availability, and transportation of components.
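The subsystem interactions described above can be sketched as a tiny event-driven network: each subsystem is a task with a duration and prerequisites, and the turnaround time is the completion time of the final integration node. A minimal discrete-event sketch in the spirit of a Q-GERT network; the durations below are illustrative placeholders, not the study's data:

```python
import heapq

def simulate(tasks):
    """tasks: {name: (duration_h, [prerequisite names])}.
    Event-driven pass that returns the finish time of each task
    (equivalently, a critical-path computation over the network)."""
    done, scheduled = {}, set()
    events = []
    for name, (dur, pre) in tasks.items():
        if not pre:                       # tasks with no prerequisites start at t=0
            heapq.heappush(events, (dur, name))
            scheduled.add(name)
    while events:
        t, name = heapq.heappop(events)   # advance to the earliest finishing task
        done[name] = t
        for nxt, (dur, pre) in tasks.items():
            if nxt not in scheduled and all(p in done for p in pre):
                start = max(done[p] for p in pre)
                heapq.heappush(events, (start + dur, nxt))
                scheduled.add(nxt)
    return done

# parallel subsystem processing followed by serial pad integration (illustrative hours)
shuttle = {
    "orbiter": (160.0, []),
    "external_tank": (40.0, []),
    "srb": (80.0, []),
    "launch_pad": (100.0, ["orbiter", "external_tank", "srb"]),
}
```

With these placeholder numbers the pad waits on the slowest subsystem, so total turnaround is 160 + 100 hours, which is the kind of bottleneck sensitivity the Q-GERT study probed.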
ERIC Educational Resources Information Center
Smith, Scott G.
2015-01-01
In this article, Scott Smith presents an innocent problem (Problem 12 of the May 2001 Calendar from "Mathematics Teacher" ("MT" May 2001, vol. 94, no. 5, p. 384) that was transformed by several timely "what if?" questions into a rewarding investigation of some interesting mathematics. These investigations led to two…
The challenge and the future of health care turnaround plans: evidence from the Italian experience.
Ferrè, Francesca; Cuccurullo, Corrado; Lega, Federico
2012-06-01
Over the last two decades, health policy and governance in Italy have undergone decentralisation at the regional level. The central government was expected to play a guiding role in defining minimum care standards and controlling health expenditures at the regional level in order to keep the entire Italian National Health System (INHS) on track. Although health performance trends have been consistent across regions, public health expenditures have been variable and contributed to a cumulative deficit of 38 billion Euros from 2001 to 2010. To address the deficit, the government called for a resolution introducing a partial bail-out plan and later institutionalised a process to facilitate a turnaround. The upturn started with the development of a formal regional turnaround plan that proposed strategic actions to address the structural determinants of costs. The effectiveness of this tool was widely questioned, and many critics suggested that it was focused more on methods to address short-term issues than on the long-term strategic reconfiguration that is required for regional health systems to ultimately address the structural causes of deficits. We propose an interpretative framework to understand the advantages and disadvantages of turnaround plans, and we apply the findings to the development of policy recommendations for the structure, methods, processes and contexts of the implementation of this tool. PMID:22482493
NASA Astrophysics Data System (ADS)
Jeon, Jae-Hyung; Leijnse, Natascha; Oddershede, Lene B.; Metzler, Ralf
2013-04-01
We report the results of single tracer particle tracking by optical tweezers and video microscopy in micellar solutions. From careful analysis in terms of different stochastic models, we show that the polystyrene tracer beads of size 0.52-2.5 μm after short-time normal diffusion turn over to perform anomalous diffusion of the form
Jo Vandesompele; Katleen De Preter; Filip Pattyn; Bruce Poppe; Nadine Van Roy; Anne De Paepe; Frank Speleman
2002-01-01
BACKGROUND: Gene-expression analysis is increasingly important in biological research, with real-time reverse transcription PCR (RT-PCR) becoming the method of choice for high-throughput and accurate expression profiling of selected genes. Given the increased sensitivity, reproducibility and large dynamic range of this methodology, the requirements for a proper internal control gene for normalization have become increasingly stringent. Although housekeeping gene expression has
Shaffer, F.; Massah, H.; Sinclair, J.; Shahnam, M.
1995-04-30
This paper describes an experimental facility to measure time-averaged properties of particle-wall collisions. A particle tracking velocimetry (PTV) system measures particle rebound angles and velocities and impact angles and velocities for specific particles. From these measured properties, values of the normal and tangential coefficients of restitution are derived. The PTV system rapidly measures thousands of particle-wall collisions, enabling determination of time-averaged properties. In this initial work we use 191 μm glass particles colliding with a glass plate at a fixed angle of 17.5° and a velocity of 17.07 m/s. Observations are also made of aspherical particles colliding with gross sliding. In the future, this experimental system will be used with particles and wall materials of industrial significance.
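The derivation of restitution coefficients from measured impact and rebound angles and velocities can be sketched in a few lines. This assumes the usual decomposition into wall-normal and wall-tangential velocity components, with angles measured from the wall surface; the 17.07 m/s / 17.5° impact matches the paper, but the rebound values are made up for illustration.

```python
import math

def restitution_coefficients(v_in, angle_in_deg, v_out, angle_out_deg):
    """Normal and tangential coefficients of restitution for one particle-wall
    collision. Angles are measured from the wall surface (assumed convention)."""
    a_in = math.radians(angle_in_deg)
    a_out = math.radians(angle_out_deg)
    e_n = (v_out * math.sin(a_out)) / (v_in * math.sin(a_in))  # normal components
    e_t = (v_out * math.cos(a_out)) / (v_in * math.cos(a_in))  # tangential components
    return e_n, e_t

# Impact conditions from the paper; hypothetical rebound speed and angle
e_n, e_t = restitution_coefficients(17.07, 17.5, 14.0, 15.0)
print(round(e_n, 3), round(e_t, 3))
```

Time-averaged properties would then follow by averaging e_n and e_t over the thousands of collisions the PTV system records.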
Using corporate finance to engineer an organizational turnaround.
Sussman, Jason H; Dziesinski, Ray R
2002-11-01
Georgia's Southern Regional Medical Center used a proven corporate finance approach to dramatically improve its financial position and integrate its strategic and financial planning. Managers throughout the organization were educated about principles of corporate finance. Reliable cash-flow projections were used to create a multiyear glide path to financial stability. Initiatives were tied to specific time frames and quantifiable financial goals and underwent a standardized review process. PMID:12656032
NASA Technical Reports Server (NTRS)
Adamczyk, John J.
1996-01-01
The role of unsteady flow processes in establishing the performance of axial flow turbomachinery was investigated. The development of the flow model, as defined by the time average flow equations associated with the passage of the blade row embedded in a multistage configuration, is presented. The mechanisms for unsteady momentum and energy transport is outlined. The modeling of the unsteady momentum and energy transport are discussed. The procedure for simulating unsteady multistage turbomachinery flows is described.
Robert H. Litzenberger; Cherukuri U. Rao
1971-01-01
This paper develops an econometric model of the valuation of electric utility shares. This model, based upon the Sharpe-Lintner capital market theory, yields indirect estimates of the marginal rate of time preference and average risk aversion of investors in electric utility shares during the period 1960-66. In general, the empirical findings are consistent with the Sharpe-Lintner positive theory of the
Restructuring and the Nonmetropolitan Turnaround: The California Evidence
Warf, Barney L.
1986-01-01
in the state's nonmetropolitan counties. The principal locus of growth shifted in the 1970s from the large coastal cities to the foothills of the Sierra Nevada, especially to the northern part of the Central Valley (Allen 1977, Sokolow 1977). Figures 1 and 2.... Roughly one-half million persons, about 2 percent of California's population, moved to the state's 33 nonmetropolitan counties in the 1970s (Bradshaw and Blakely 1979). For the first time since the nineteenth-century Gold Rush, the northern fifty counties...
Jiao, S; Maltecca, C; Gray, K A; Cassady, J P
2014-07-01
Efficient use of feed resources has become a clear challenge for the U.S. pork industry as feed costs continue to be the largest variable expense. The availability of the Illumina Porcine60K BeadChip has greatly facilitated whole-genome association studies to identify chromosomal regions harboring genes influencing those traits. The current study aimed at identifying genomic regions associated with variation in feed efficiency and several production traits in a Duroc terminal sire population, including ADFI, ADG, feed conversion ratio, residual feed intake (RFI), real-time ultrasound back fat thickness (BF), ultrasound muscle depth, intramuscular fat content (IMF), birth weight (BW at birth), and weaning weight (BW at weaning). Single-trait association analyses were performed using Bayes B models with 35,140 SNP on 18 autosomes after quality control. The significance of nonoverlapping 1-Mb windows (n = 2,380) was tested across 3 QTL inference methods: the posterior distribution of window variances from Markov chain Monte Carlo, naive Bayes factor, and nonparametric bootstrapping. Genes within the informative QTL regions for the traits were annotated. A region ranging from 166 to 170 Mb (4-Mb length) on SSC 1, approximately 8 Mb upstream of the MC4R gene, was significantly associated with ADFI, ADG, and BF, where SOCS6 and DOK6 are proposed as the most likely candidate genes. Another region affecting BW at weaning was identified on SSC 4 (84-85 Mb), harboring genes previously found to influence both human and cattle height: PLAG1, CHCHD7, RDHE2 (or SDR16C5), MOS, RPS20, LYN, and PENK. No QTL were identified for RFI, IMF, and BW at birth. In conclusion, we have identified several genomic regions associated with traits affecting nutrient utilization that could be considered for future genomic prediction to improve feed utilization. PMID:24962532
Field demonstration of rapid turnaround, multilevel groundwater screening
Tingle, A.R.; Baker, L.; Long, D.D.; Miracle, M.
1994-09-01
A combined technology approach to rapidly characterizing source area and downgradient groundwater associated with a past fuel spill has been field tested. The purpose of this investigation was to determine the presence and extent of fuel-related compounds or indications of their biodegradation in groundwater. The distance from the source area to be investigated was established by calculating the potential extent of a plume based only on groundwater flow velocities. To accomplish this objective, commercially available technologies were combined and used to rapidly assess the source area and downgradient groundwater associated with the fuel discharge. The source of contamination that was investigated overlies glacial sand and gravel outwash deposits. Historical data suggest that from 1955 to 1970 as many as 1 to 6 million gal of aviation gasoline (AVGAS) were lost at the study area. Although the remedial investigation (RI) for this study area indicated fuel-related groundwater contamination at the source area, fuel-related contamination was not detected in downgradient monitoring wells. Rapid horizontal groundwater velocities and the 24-year time span from the last reported spill further suggest that a plume of contaminated groundwater could extend several thousand feet downgradient. The lack of contamination downgradient from the source suggests two possibilities: (1) monitoring wells installed during the RI did not intersect the plume or (2) fuel-related compounds had naturally degraded.
Holyoak, Marcel; Meese, Robert J; Graves, Emily E
2014-01-01
In metapopulations in which habitat patches vary in quality and occupancy it can be complicated to calculate the net time-averaged contribution to reproduction of particular populations. Surprisingly, few indices have been proposed for this purpose. We combined occupancy, abundance, frequency of occurrence, and reproductive success to determine the net value of different sites through time and applied this method to a bird of conservation concern. The Tricolored Blackbird (Agelaius tricolor) has experienced large population declines, is the most colonial songbird in North America, is largely confined to California, and breeds itinerantly in multiple habitat types. It has had chronically low reproductive success in recent years. Although young produced per nest have previously been compared across habitats, no study has simultaneously considered site occupancy and reproductive success. Combining occupancy, abundance, frequency of occurrence, reproductive success and nest failure rate, we found that large colonies in grain fields fail frequently because of nest destruction due to harvest prior to fledging. Consequently, net time-averaged reproductive output is low compared to colonies in non-native Himalayan blackberry or thistles, and native stinging nettles. Cattail marshes have intermediate reproductive output, but their reproductive output might be improved by active management. Harvest of grain-field colonies necessitates either promoting delay of harvest or creating alternative, more secure nesting habitats. Stinging nettle and marsh colonies offer the main potential sources for restoration or native habitat creation. From 2005-2011 breeding site occupancy declined 3x faster than new breeding colonies were formed, indicating a rapid decline in occupancy. Total abundance showed a similar decline. Causes of variation in the value for reproduction of nesting substrates and factors behind continuing population declines merit urgent investigation.
The method we employ should be useful in other metapopulation studies for calculating time-averaged reproductive output for different sites. PMID:24817307
NASA Technical Reports Server (NTRS)
Jessee, R. D.
1970-01-01
Averaging circuit provides a secondary control signal during inoperative periods of an intermittent primary control system. It can also provide an average pulse rate over a fixed time interval, such as in a digital frequency meter.
Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.
2011-01-01
Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. 
Averaging over time scales >1 minute is the likely minimum duration required to result in substantial decreases in this error. During standard two-way depth integration, a depth-integrating suspended-sediment sampler collects a sample of the water-sediment mixture during two transits at each vertical in a cross section: one transit while moving from the water surface to the bed, and another transit while moving from the bed to the water surface. As the number of transits is doubled at an individual vertical, this error is reduced by ~30 percent in each size class of suspended sediment. For a given size class of suspended sediment, the error arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration depends only on the number of verticals collected, whereas the error arising from inadequate time averaging depends on both the number of verticals collected and the number of transits collected at each vertical. Summing these two errors in quadrature yields a total uncertainty in an equal-discharge-increment (EDI) or equal-width-increment (EWI) measurement of the time-averaged velocity-weighted suspended-sediment concentration in a river cross section (exclusive of any laboratory-processing errors). By virtue of how the number of verticals and transits influences the two individual errors within this total uncertainty, the error arising from inadequate time averaging slightly dominates that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. Adding verticals to an EDI or EWI measurement is slightly more effective in reducing the total uncertainty than adding transits only at each vertical, because a new vertical contributes both temporal and spatial information. 
However, because collection of depth-integrated samples at more transits at each vertical is generally easier and faster than at more verticals, addition of a combination of verticals and transits is likely a more practical approach to reducing the total uncertainty in most field situations.
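The quadrature combination of the two error sources described above, together with the roughly 30 percent reduction in the time-averaging error per doubling of transits, can be sketched as follows; the numerical error values are hypothetical and for illustration only.

```python
import math

def total_uncertainty(err_time_avg, err_spatial):
    """Combine the time-averaging error and the cross-stream spatial-structure
    error in quadrature, as described for EDI/EWI measurements."""
    return math.hypot(err_time_avg, err_spatial)

def time_error_after_doublings(err_time_avg, n_doublings):
    """Each doubling of transits per vertical reduces the time-averaging error
    by ~30 percent, i.e. a factor of 1/sqrt(2), consistent with the text."""
    return err_time_avg * (1.0 / math.sqrt(2.0)) ** n_doublings

# Hypothetical relative errors for one suspended-sediment size class
e_time, e_space = 0.12, 0.09
print(round(total_uncertainty(e_time, e_space), 4))
print(round(total_uncertainty(time_error_after_doublings(e_time, 1), e_space), 4))
```

Because the spatial error shrinks only with added verticals while the time-averaging error shrinks with both, comparing these two outputs shows why adding a vertical (which buys both kinds of information) beats adding transits alone.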
NASA Astrophysics Data System (ADS)
Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria
2013-06-01
Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
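A full ARFIMA-GARCH fit is normally delegated to a statistical package; as a minimal self-contained sketch of just the conditional-variance half of the model, the GARCH(1,1) recursion can be written directly. All parameter values below are hypothetical, and the simulated series stands in for a detrended HRV series.

```python
import numpy as np

def garch11_cond_variance(returns, omega, alpha, beta):
    """Conditional-variance recursion of a GARCH(1,1) model:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()  # a common initialization choice
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Hypothetical parameters; alpha + beta < 1 keeps the variance process stationary
rng = np.random.default_rng(0)
r = rng.standard_normal(500) * 0.1
s2 = garch11_cond_variance(r, omega=0.001, alpha=0.05, beta=0.9)
print(s2.shape)
```

In the ARFIMA-GARCH setting of the abstract, the ARFIMA part would first remove the long-memory structure in the mean, and a recursion like this would then track the time-varying conditional volatility of the residuals.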
Buckley, M.
2011-01-01
Slide fragments recovered from the presentation: Union Station revitalization; San Juan Science City BioScience strategy; changing paradigms in planning by governments for strategic industries on underutilized sites; government-sponsored initiatives; San Juan Science City, 83 acres (85.34 cuerdas), linking Centro Medico to Science City; Science City district illustrative plan with central park, civic plaza, and town square.
ERIC Educational Resources Information Center
McLester, Susan
2011-01-01
The Obama administration has grand hopes for turning around the nation's lowest-performing schools, in part by allocating $3.5 billion for School Improvement Grants. Unfortunately, there simply aren't enough qualified principals to replace those mandated to be fired under two of the four school improvement models that the federal government says…
NASA Technical Reports Server (NTRS)
Kim, J. H.; Hudson, R. D.; Thompson, A. M.
1996-01-01
An error analysis of archived total O3 from the total ozone mapping spectrometer (TOMS) (version 6) is presented. Daily total O3 maps for the tropics, from the period October 6-21, 1992, are derived from TOMS radiances following correction for these errors. These daily maps, averaged together, show a wavelike feature, which is observed in all latitude bands, underlying sharp peaks which occur at different longitudes depending on the latitude. The wave pattern is used to derive both time-averaged stratospheric and tropospheric O3 fields. The nature of the wave pattern (stratospheric or tropospheric) cannot be determined with certainty due to missing data (no Pacific sondes, no lower stratospheric Stratospheric Aerosol and Gas Experiment (SAGE) ozone for 18 months after the Mt. Pinatubo eruption) and significant uncertainties in the corroborative satellite record in the lower stratosphere (solar backscattered ultraviolet (SBUV), microwave limb sounder (MLS)). However, the time-averaged tropospheric ozone field, based on the assumption that the wave feature is stratospheric, agrees within 10% with ultraviolet differential absorption laser O3 measurements from the DC-8 during the Transport and Atmospheric Chemistry near the Equator-Atlantic (TRACE A) campaign and with ozonesonde measurements over Brazzaville, Congo, Ascension Island, and Natal, Brazil, for the period October 6-21, 1992. The derived background (nonpolluted) Indian Ocean tropospheric ozone amount, 26 Dobson units (DU), agrees with the cleanest African ozonesonde profiles for September-October 1992. The assumption of a totally tropospheric wave (flat stratosphere) gives 38 DU above the western Indian Ocean and 15-40% disagreements with the sondes. Tropospheric column O3 is high from South America to Africa, owing to the interaction of dynamics with biomass burning emissions.
Comparison with fire distributions from the advanced very high resolution radiometer (AVHRR) during October 1992 suggests that tropospheric O3 produced from biomass burning in South America and Africa dominates the O3 budget in the tropical southern hemisphere during the study period.
NASA Astrophysics Data System (ADS)
Mark, William D.
2015-10-01
The transmission-error frequency spectrum of meshing gear pairs, operating at constant speed and constant loading, is decomposed into harmonics arising from the fundamental period of the gear pair, rotational harmonics of the individual gears of the pair, and tooth-meshing harmonics. In the case of hunting-tooth gear pairs, no rotational harmonics from the individual gears, other than the tooth-meshing harmonics, are shown to occur at the same frequencies. Time-synchronous averages utilizing a number of contiguous revolutions of the gear of interest equal to an integer multiple of the number of teeth on the mating gear are shown to eliminate non-tooth-meshing transmission-error rotational-harmonic contributions from the mating gear, and those from the gear pair, in the case of hunting-tooth gear pairs, and to minimize these contributions in the case of non-hunting-tooth gear pairs. An example computation is shown to illustrate the effectiveness of the suggested time-synchronous-averaging procedure.
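The averaging rule described above, using a number of contiguous revolutions equal to an integer multiple of the mating gear's tooth count, can be sketched as a reshape-and-average over revolutions. The signal, sample counts, and tooth count below are synthetic and hypothetical.

```python
import numpy as np

def time_synchronous_average(signal, samples_per_rev, n_mate_teeth, k=1):
    """Average k * n_mate_teeth contiguous revolutions of the gear of interest,
    so the mating gear completes a whole number of turns and its non-tooth-meshing
    rotational harmonics average toward zero, as suggested in the text."""
    n_revs = k * n_mate_teeth
    needed = n_revs * samples_per_rev
    if len(signal) < needed:
        raise ValueError("signal too short for the requested number of revolutions")
    revs = np.asarray(signal[:needed], dtype=float).reshape(n_revs, samples_per_rev)
    return revs.mean(axis=0)

# Synthetic example: a once-per-rev pattern buried in zero-mean noise
rng = np.random.default_rng(1)
spr, teeth = 64, 35  # hypothetical samples per revolution and mating-gear teeth
base = np.sin(2 * np.pi * np.arange(spr) / spr)
sig = np.tile(base, teeth) + 0.3 * rng.standard_normal(spr * teeth)
tsa = time_synchronous_average(sig, spr, teeth)
print(tsa.shape)
```

In practice the signal would first be resampled to a fixed number of samples per revolution of the gear of interest (e.g. using a shaft tachometer) before this average is formed.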
NASA Astrophysics Data System (ADS)
Kosnik, Matthew A.; Hua, Quan; Kaufman, Darrell S.; Zawadzki, Atun
2015-03-01
Carbon-14 calibrated amino acid racemization (14C/AAR) data and lead-210 (210Pb) data are used to examine sediment accumulation rates, stratigraphic order, and the extent of time-averaging in sediments collected from the One Tree Reef lagoon (southern Great Barrier Reef, Australia). The top meter of lagoonal sediment preserves a stratigraphically ordered deposit spanning the last 600 yrs. Despite different assumptions, the 210Pb and 14C/AAR chronologies are remarkably similar indicating consistency in sedimentary processes across sediment grain sizes spanning more than three orders of magnitude (0.1-10 mm). Estimates of long-term sediment accumulation rates range from 2.2 to 1.2 mm yr-1. Molluscan time-averaging in the taphonomically active zone is 19 yrs, whereas below the depth of final burial (~15 cm), it is ~110 yrs/5 cm layer. While not a high-resolution paleontological record, this reef lagoon sediment is suitable for paleoecological studies spanning the period of Western colonization and development. This sedimentary deposit, and others like it, should be useful, albeit not ideal, for quantifying anthropogenic impacts on coral reef systems.
Cleary, P. W.; Hijmans, T. W.; Walraven, J. T. M. [Van der Waals-Zeeman Institute of the University of Amsterdam, Valckenierstraat 65, 1018 XE Amsterdam (Netherlands)
2010-12-15
We report on the manipulation of the center-of-mass motion ('sloshing') of a Bose-Einstein condensate in a time-averaged orbiting potential (TOP) trap. We start with a condensate at rest in the center of a static trapping potential. When suddenly replacing the static trap with a TOP trap centered about the same position, the condensate starts to slosh with an amplitude much larger than the TOP micromotion. We show, both theoretically and experimentally, that the direction of sloshing is related to the initial phase of the rotating magnetic field of the TOP. We show further that the sloshing can be quenched by applying a carefully timed and sized jump in the phase of the rotating field.
Maximum and minimum sensitizable timing analysis using data dependent delays
Singh, Karandeep
2007-09-17
Modern digital designs require high performance and low cost. In this scenario, timing analysis is an essential step for each phase of the integrated circuit design cycle. To minimize the design turn-around time, the ability ...
NASA Astrophysics Data System (ADS)
Schmidt, Anja; Carslaw, Kenneth; Mann, Graham; Merikanto, Joonas
2010-05-01
Volcanoes are a strong source of sulphur dioxide (SO2) with time-averaged emission inventories (e.g. Andres and Kasgnoc, 1998) indicating that volcanoes account for around 40% of the total annual SO2 flux in the pre-industrial atmosphere. We use a global aerosol microphysics model (GLOMAP-mode) to quantify the contribution of time-averaged volcanic sulphur emissions (from both continuous passive degassing and explosive volcanoes) on the global cloud condensation nuclei (CCN) budget. GLOMAP-mode is capable of simulating microphysical processes, such as binary homogeneous nucleation, hygroscopic growth, coagulation, condensation, cloud processing (oxidation of dissolved SO2 to SO4 in cloud droplets), as well as dry and wet deposition. For this study we use a sulphur chemistry scheme which includes 7 species (DMS, DMSO, MSA, SO2, H2SO4, COS, CS2). The runs were conducted using four internally mixed aerosol components, sulphate (SO4), sea salt, black carbon (BC) and organic carbon (OC). We simulated the impact of volcanic degassing in a pre-industrial setting (i.e. using 1750 BC and OC emissions in the absence of any anthropogenic emissions) using the volcanic emission inventory by Dentener et al. (2006). This volcanic inventory is based on datasets by Andres and Kasgnoc (1998) and Halmer et al. (2002) and accounts for an annual flux of ~13 Tg(S) of volcanic SO2. Our simulations suggest that volcanic degassing contributes on average ~50 CCN (>35nm in radius) per cubic centimetre to the annual zonal mean CCN concentrations in the tropical boundary layer. The simulations also reveal complex changes in annual zonal mean total particle concentrations (CN). CN concentrations are more than double in large parts of the tropical boundary layer when comparing the unperturbed run (i.e. without volcanic degassing) to the run featuring time-averaged volcanic degassing. 
However, the simulations also reveal that the additional SO2 and its subsequent conversion to sulphate aerosol enhances microphysical processes such as the coagulation of existing and/or entrained particles in upper parts of the atmosphere and in parts of the Southern Hemisphere boundary layer. Our model simulations suggest that volcanoes were a very important source of CCN in the pre-industrial atmosphere. Equatorwards of about 50° volcanoes can sustain CCN concentrations in excess of 50 cm-3, and up to 500 cm-3 over large tropical regions, even when the other main sulphur source (from DMS) is eliminated. Andres, R. J. and Kasgnoc, A. D.: A time-averaged inventory of subaerial volcanic sulfur emissions, J. Geophys. Res., 103, 25 251-25 262, 1998. Dentener, F., Kinne, S., Bond, T., Boucher, O., Cofala, J., Generoso, S., Ginoux, P., Gong, S., Hoelzemann, J. J., Ito, A., Marelli, L., Penner, J. E., Putaud, J. P., Textor, C., Schulz, M., van der Werf, G. R., and Wilson, J.: Emissions of primary aerosol and precursor gases in the years 2000 and 1750 prescribed data-sets for AeroCom, Atmos. Chem. Phys., 6, 4321-4344, 2006. Halmer, M., Schmincke, H., and Graf, H.-F.: The annual volcanic gas input into the atmosphere, in particular into the stratosphere, A global data-set for the past 100 years, J. Volca. Geotherm. Res., 115, 511-528, 2002.
ERIC Educational Resources Information Center
Calkins, Andrew; Guenther, William; Belfiore, Grace; Lash, Dave
2007-01-01
The turnaround recommendations and framework in "The Turnaround Challenge" grew out of both new research and synthesis of extensive existing research, as carried out by Mass Insight Education & Research Institute and its partners since September 2005. If the main report is the tip of the proverbial iceberg, this supplement represents at least part…
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2001-01-01
Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
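The Poisson reasoning in this abstract can be reproduced in a few lines. The VEI>=4 rate of roughly 6.5 eruptions per decade comes from the text; the VEI>=5 and VEI>=6 decadal rates below are back-solved from the quoted probabilities and are illustrative only.

```python
import math

def prob_at_least_one(rate_per_decade):
    """P(N >= 1) for a Poisson count with the given expected events per decade:
    1 - P(N = 0) = 1 - exp(-lambda)."""
    return 1.0 - math.exp(-rate_per_decade)

# Decadal rates: VEI>=4 from the text; VEI>=5 and VEI>=6 back-solved from the
# quoted 49% and 18% probabilities (illustrative assumptions, not source data)
rates = [(4, 6.5), (5, -math.log(1 - 0.49)), (6, -math.log(1 - 0.18))]
for vei, lam in rates:
    print(f"VEI>={vei}: {prob_at_least_one(lam):.2%}")
```

With a rate of 6.5 per decade, the chance of at least one VEI>=4 eruption in ten years exceeds 99.8 percent, consistent with the ">99 percent" figure quoted above.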
NASA Technical Reports Server (NTRS)
Aoyagi, Kiyoshi; Olson, Lawrence E.; Peterson, Randall L.; Yamauchi, Gloria K.; Ross, James C.; Norman, Thomas R.
1987-01-01
Time-averaged aerodynamic loads are estimated for each of the vane sets in the National Full-Scale Aerodynamic Complex (NFAC). The methods used to compute global and local loads are presented. Experimental inputs used to calculate these loads are based primarily on data obtained from tests conducted in the NFAC 1/10-Scale Vane-Set Test Facility and from tests conducted in the NFAC 1/50-Scale Facility. For those vane sets located directly downstream of either the 40- by 80-ft test section or the 80- by 120-ft test section, aerodynamic loads caused by the impingement of model-generated wake vortices and model-generated jet and propeller wakes are also estimated.
NASA Technical Reports Server (NTRS)
Mielke, Amy F.; Seasholtz, Richard G.; Elam, Kristie A.; Panda, Jayanta
2004-01-01
A molecular Rayleigh scattering based flow diagnostic is developed to measure time average velocity, density, temperature, and turbulence intensity in a 25.4-mm diameter nozzle free jet facility. The spectrum of the Rayleigh scattered light is analyzed using a Fabry-Perot interferometer operated in the static imaging mode. The resulting fringe pattern containing spectral information of the scattered light is recorded using a low noise CCD camera. Nonlinear least squares analysis of the fringe pattern using a kinetic theory model of the Rayleigh scattered light provides estimates of density, velocity, temperature, and turbulence intensity of the gas flow. Resulting flow parameter estimates are presented for an axial scan of subsonic flow at Mach 0.95 for comparison with previously acquired pitot tube data, and axial scans of supersonic flow in an underexpanded screeching jet. The issues related to obtaining accurate turbulence intensity measurements using this technique are discussed.
ERIC Educational Resources Information Center
Marsh, Julie A.; Strunk, Katharine O.; Bush, Susan
2013-01-01
Purpose: Despite the popularity of school "turnaround" and "portfolio district" management as solutions to low performance, there has been limited research on these strategies. The purpose of this paper is to address this gap by exploring the strategic case of Los Angeles Unified School District's Public School Choice…
ERIC Educational Resources Information Center
Jacobson, Stephen L.; Szczesek, Jill
2013-01-01
This study investigates the economic impact of a "turnaround" school on real property values in its surrounding community as related to the argument introduced by Tiebout in 1956 correlating local public goods, in this case school success, to housing-location decision making. Using single-family home sales found on the Multiple Listing System and…
Contribution of the demographic factor to the migration turnarounds in Japan, Sweden and Canada.
Ishikawa, Y
1999-01-01
"It is well known that similar changes of migration pattern occurred during recent decades in most developed countries; core regions recorded a greatly decreased net in-migration in the 1970s, but increased net in-migration in the 1980s. Based on shift-share analysis of the migration data of Japan, Sweden and Canada, this paper reveals that changing cohort size, especially related to the fertility fluctuation of baby boom and bust, has played an important role in the turnarounds. Furthermore, it is elucidated from a series of correlation analyses that this demographic factor was closely linked with labour-market restructuring in terms of industrial/occupational employment changes." PMID:12294867
Orbit-on-demand vehicles - Mission possible, but one-day turnaround?
NASA Technical Reports Server (NTRS)
Talay, T. A.; Morris, W. D.; Lemessurier, R. W.
1985-01-01
The next generation launch-on-demand manned orbital vehicles will ideally be able to perform plane change maneuvers for rendezvous. Plane-changes of, e.g., 11 deg, will require a 3000 fps on-orbit maneuver velocity. Any plane-change capabilities beyond 800 fps rapidly increase the necessary vehicle weight. Sensitivity studies of the payload weight/vehicle weight, the presence or absence of air-breathing engines, high acceleration, etc., indicated that advances are required in the payload stowage pallet, thermal protection system, fault detection electronics, line-replaceable units, and accessibility to realize one-day turnaround. The spacecraft would also need autonomous systems checks and launch from minimum support sites reached by air ferry. A horizontal take-off, two-stage subsonic launch vehicle so far offers the lowest weight, maximum flexibility and minimum support combination of all configurations under consideration.
Kira, S. [Okayama Univ. Medical School (Japan)] [Okayama Univ. Medical School (Japan); Sakano, M.; Nogami, Y. [Okayama Univ. of Science (Japan)] [Okayama Univ. of Science (Japan)
1997-06-01
There have been several different methods of measurement for waterborne pollutants. The most frequently utilized methods for sample preparation have been liquid-to-liquid or liquid-to-solid partition. In these methods, pollutants such as polycyclic aromatic hydrocarbons (PAHs) are extracted into organic solvents directly from the sample water, or the pollutants are first adsorbed onto a solid-phase adsorbent and subsequently eluted with organic solvents. In either case, the measured level represents the concentration at the time of sampling, namely a spot sample. On the other hand, a time-weighted average concentration (TWA) has been used as a determinant to evaluate the atmospheric environment. But estimating the TWA of pollutants in water has been laborious, since frequent spot-sampling of water is required at a field site. Further, no data on the TWA of PAHs in field water have been published, although the TWA of pollutants could be an important factor for chronic effects on biota. In our previous report, we set up a continuous sampling device, using a Sep-Pak C18 cartridge and a peristaltic pump, which enabled us to measure the TWA of benzo(a)pyrene in an experimental water system. The present paper describes a portable sampling device that can continuously sample PAHs in water. We have evaluated the basic characteristics of the sampling device in the laboratory and optimized chromatographic detection of 4 PAHs: fluoranthene, perylene, benzo(b)fluoranthene (BbF) and benzo(a)pyrene (BaP). After these procedures, we brought this sampling device to field water sites to verify its performance. The levels of PAHs were calculated as TWA over a 24-hr period in water at a site. 9 refs., 1 fig., 1 tab.
Baimatova, Nassiba; Koziel, Jacek A; Kenessov, Bulat
2015-05-11
A new and simple method for benzene, toluene, ethylbenzene and o-xylene (BTEX) quantification in vehicle exhaust was developed based on diffusion-controlled extraction onto a retracted solid-phase microextraction (SPME) fiber coating. The rationale was to develop a method based on existing and proven SPME technology that is feasible for field adaptation in developing countries. Passive sampling with the SPME fiber retracted into the needle extracted nearly two orders of magnitude less mass (n) than the exposed fiber (outside the needle), and sampling occurred in a time-weighted averaging (TWA) mode. Both the sampling time (t) and the fiber retraction depth (Z) were adjusted to quantify a wider range of Cgas. Extraction and quantification are conducted in a non-equilibrium mode. The effects of Cgas, t, Z and T were tested. In addition, the contribution of n extracted by the metallic surfaces of the needle assembly without the SPME coating was studied, as was the effect of sample storage time on the loss of n. Retracted TWA-SPME extractions followed the theoretical model. The extracted n of BTEX was proportional to Cgas, t, Dg and T, and inversely proportional to Z. Method detection limits were 1.8, 2.7, 2.1 and 5.2 mg m(-3) (0.51, 0.83, 0.66 and 1.62 ppm) for BTEX, respectively. The contribution of extraction onto metallic surfaces was reproducible and was influenced by Cgas and t, and less so by T and Z. The new method was applied to measure BTEX in the exhaust gas of a 1995 Ford Crown Victoria and compared with a whole-gas and direct-injection method. PMID:25911428
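The theoretical model mentioned for retracted-fiber TWA sampling is commonly written via Fick's first law, n = (Dg·A/Z)·C·t, so the concentration is recovered as C = n·Z/(Dg·A·t). A minimal sketch under that assumption (the symbols match the abstract; the numeric values are illustrative):

```python
# Hedged sketch of the retracted-SPME TWA model via Fick's first law:
# extracted mass n = (Dg * A / Z) * C * t, inverted to recover the
# time-weighted average concentration C. Values are illustrative.

def twa_from_mass(n, Z, Dg, A, t):
    """TWA gas concentration from extracted mass n, retraction depth Z,
    gas-phase diffusivity Dg, needle opening area A, sampling time t."""
    return n * Z / (Dg * A * t)

# Consistent with the abstract: n grows with Cgas, t and Dg, and falls
# as the retraction depth Z increases (for fixed C, doubling Z halves n).
c = twa_from_mass(n=2.0, Z=1.0, Dg=1.0, A=1.0, t=2.0)
```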
NASA Astrophysics Data System (ADS)
Moran-Lopez, Tiberius; Schilling, Oleg
2013-11-01
Reshocked Richtmyer-Meshkov turbulent mixing of gases with various Atwood numbers and shock Mach numbers is simulated using a third-order weighted essentially nonoscillatory implementation of a K-ε multicomponent Reynolds-averaged Navier-Stokes model. First, mixing layer widths from simulations with Mach number Ma = 1.20, Atwood number At = 0.67 (air/SF6), and different times of reshock are shown to be in very good agreement with the experimental data of Leinov et al. [J. Fluid Mech. 626, 449 (2009)]. Second, widths from simulations with Ma = 1.50 and At = ±0.21, ±0.67 and ±0.87 (corresponding to air/CO2, air/SF6 and H2/air) are compared to the large-eddy simulation data of Lombardini et al. [J. Fluid Mech. 670, 439 (2011)] and discussed. Budgets of the turbulent transport equations are considered to elucidate the mechanisms contributing to turbulent mixing in reshocked Richtmyer-Meshkov instability. Convergence of the mixing layer widths, mean fields, and turbulent fields under grid refinement is also assessed. This work was funded by the U.S. Department of Energy NNSA under the Predictive Science Academic Alliances Program by grant DE-FC52-08NA28616 and performed under the auspices of the DOE by LLNL under Contract DE-AC52-07NA27344.
Wójcik, J; Kujawska, T; Nowicki, A; Lewin, P A
2008-12-01
The primary goal of this work was to verify experimentally the applicability of the recently introduced time-averaged wave envelope (TAWE) method [J. Wójcik, A. Nowicki, P.A. Lewin, P.E. Bloomfield, T. Kujawska, L. Filipczyński, Wave envelopes method for description of nonlinear acoustic wave propagation, Ultrasonics 44 (2006) 310-329] as a tool for fast prediction of four-dimensional (4D) pulsed nonlinear pressure fields from arbitrarily shaped acoustic sources in attenuating media. The experiments were performed in water at the fundamental frequency of 2.8 MHz for spherically focused (focal length F = 80 mm) square (20 x 20 mm) and rectangular (10 x 25 mm) sources similar to those used in the design of 1D linear arrays operating with ultrasonic imaging systems. The experimental results obtained with 10-cycle tone bursts at three different excitation levels corresponding to linear, moderately nonlinear and highly nonlinear propagation conditions (0.045, 0.225 and 0.45 MPa on-source pressure amplitude, respectively) were compared with those yielded by the TAWE approach. The comparison of the experimental results and numerical simulations showed that the TAWE approach is well suited to predict (to within ±1 dB) both the spatial-temporal and spatial-spectral pressure variations in pulsed nonlinear acoustic beams. The results also indicated that the TAWE approach shortens computation time in comparison with the time needed to predict the full 4D pulsed nonlinear acoustic field using a conventional (Fourier-series) approach [P.T. Christopher, K.J. Parker, New approaches to nonlinear diffractive field propagation, J. Acoust. Soc. Am. 90 (1) (1991) 488-499].
The reduction in computation time depends on several parameters, including the source geometry, dimensions, fundamental resonance frequency, excitation level, and the strength of the medium nonlinearity. For the non-axisymmetric focused transducers mentioned above, excited by tone bursts corresponding to moderately nonlinear and highly nonlinear conditions, the computations took 3 and 12 h, respectively, on a 32-bit laptop PC with a 1.5 GHz processor and only 2 GB of RAM. Such prediction of the full 4D pulsed field is not possible with the conventional Fourier-series scheme, as it would require increasing the RAM by at least 2 orders of magnitude. PMID:18474387
Nuttall; Wilson
2000-11-01
Realistic simulations with spatially coherent noise have been run in order to compare the performance of adaptive beamforming (ABF), inverse beamforming (IBF), and conventional beamforming (CBF) for the case of finite-averaging times, where the actual spatial coherence of the acoustic field, or covariance matrix, is not known a priori, but must be estimated. These estimation errors cause large errors in the ABF estimate of the directionality of the acoustic field, partly because ABF is a highly nonlinear algorithm. In addition, it is shown that ABF is fundamentally limited in its suppression capability at very low frequency (VLF), based on the sidelobe level of the conventional beampattern in the direction of the noise interferer [G. L. Mohnkern, "Effects of Errors and Limitations on Interference Suppression," NOSC Technical Document 1478, Naval Ocean Systems Center (1989)]. The simulations include a low-level plane wave signal of interest, a stronger noise plane wave interferer, and spatially random background noise. Both IBF and ABF performed significantly better than CBF, and IBF's performance was slightly better than ABF's performance. The performances of IBF and the ABF algorithm, the minimum variance distortionless response (MVDR) [A. H. Nuttall and D. W. Hyde, "Unified Approach to Optimum and Suboptimum Processing for Arrays," USL Report Number 992, Naval Underwater Systems Center, New London, CT (22 April 1969)] were recently compared independently [J. S. D. Solomon, A. J. Knight, and M. V. Greening, "Sonar Array Signal Processing for Sparse Linear Arrays," Defense Science and Technology Organization (DSTO) Technical Report (June 1999)] using measured data, with the result that IBF outperformed MVDR. This result is significant because MVDR requires orders of magnitude more processing power than IBF or CBF. PMID:11108366
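As a concrete reference for two of the beamformers being compared, here is a minimal sketch of conventional (delay-and-sum) and MVDR weights for a uniform line array; the array size, interferer direction, and noise levels are illustrative assumptions, not parameters from the simulations described above.

```python
import numpy as np

# Minimal sketch: conventional (CBF) vs. minimum variance distortionless
# response (MVDR) weights for an N-element uniform line array.
# Geometry, interferer direction, and noise levels are illustrative only.

def steering_vector(n_elems, spacing_wl, sin_theta):
    """Plane-wave steering vector; spacing_wl = element spacing in wavelengths."""
    k = 2.0 * np.pi * spacing_wl * sin_theta
    return np.exp(1j * k * np.arange(n_elems))

def cbf_weights(a):
    return a / len(a)                     # conventional: match the steering vector

def mvdr_weights(R, a):
    Ri_a = np.linalg.solve(R, a)          # R^{-1} a
    return Ri_a / (a.conj() @ Ri_a)       # w = R^{-1} a / (a^H R^{-1} a)

n = 8
a_sig = steering_vector(n, 0.5, 0.0)      # look direction: broadside
a_int = steering_vector(n, 0.5, 0.3)      # interferer off broadside
# Covariance matrix: strong plane-wave interferer plus white noise.
R = 100.0 * np.outer(a_int, a_int.conj()) + np.eye(n)

w = mvdr_weights(R, a_sig)
# Distortionless constraint: unit response in the look direction, while
# the interferer direction is suppressed far below the CBF sidelobe.
```

Because the MVDR weights depend on the inverse of the estimated covariance matrix, finite-averaging errors in R propagate nonlinearly into the output, which is the sensitivity the abstract discusses.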
Residual life assessment of the SSME/ATD HPOTP turnaround duct (TAD)
NASA Technical Reports Server (NTRS)
Gross, R. Steven
1996-01-01
This paper is concerned with the prediction of the low cycle thermal fatigue behavior of a component in a developmental (ATD) high pressure liquid oxygen turbopump (HPOTP) for the Space Shuttle Main Engine (SSME). This component is called the Turnaround Duct (TAD). The TAD is a complex single-piece casting of MAR-M-247 material. Its function is to turn the hot turbine exhaust gas (1200 F hydrogen-rich gas stream) so that it can exhaust radially out of the turbopump. In very simple terms, the TAD consists of two rings connected axially by 22 hollow airfoil-shaped struts, with turning vanes placed at the top, middle, and bottom of each strut. The TAD is attached to the other components of the pump via bolts passing through 14 of the 22 struts. Of the remaining 8 struts, four are equally spaced (at 90-deg intervals) and contain a cooling tube through which liquid hydrogen passes on its way to cool the shaft bearing assemblies. The remaining 4 struts are empty. One of the pump units in the certification test series was destructively examined after 22 test firings. Substantial axial cracking was found in two of the struts that contain cooling tubes. None of the other 20 struts showed any sign of internal cracking. This unusual low cycle thermal fatigue behavior within the two cooling-tube struts is the focus of this study.
Using Adaptive Turnaround Documents to Electronically Acquire Structured Data in Clinical Settings
Biondich, Paul G.; Anand, Vibha; Downs, Stephen M.; McDonald, Clement J.
2003-01-01
We developed adaptive turnaround documents (ATDs) to address longstanding challenges inherent in acquiring structured data at the point of care. These computer-generated paper forms both request and receive patient-tailored information intended specifically for electronic storage. In our pilot, we evaluated the usability, accuracy, and user acceptance of an ATD designed to enrich a pediatric preventive care decision support system. The system had an overall digit recognition rate of 98.6% (95% CI: 98.3 to 98.9) and a mark-sense accuracy of 99.2% (95% CI: 99.1 to 99.3). More importantly, the system reliably extracted all data from 56.6% (95% CI: 53.3 to 59.9) of our pilot forms without the need for a verification step. These results translate to a minimal workflow burden for end users. This suggests that ATDs can serve as an inexpensive, workflow-sensitive means of structured data acquisition in the clinical setting. PMID:14728139
Unimodular Gravity and Averaging
NASA Astrophysics Data System (ADS)
Coley, Alan
The question of the averaging of inhomogeneous spacetimes in cosmology is important for the correct interpretation of cosmological data. In this paper we suggest a conceptually simpler approach to averaging in cosmology based on the averaging of scalars within unimodular gravity. As an illustration, we consider the example of an exact spherically symmetric dust model, and show that within this approach averaging introduces correlations (corrections) to the effective dynamical evolution equation in the form of a spatial curvature term.
Estimating Average Wind Velocity Along a Trajectory
NASA Technical Reports Server (NTRS)
Bertsch, P.
1986-01-01
Average Wind Velocity (VWAVE) program calculates average wind velocity over time for a particular vehicle trajectory. Calculation is based on a wind profile, which gives wind magnitude at various altitudes. A simple average of the wind profile over altitude does not correlate well with the actual apparent effect of the wind: some wind profiles with low average velocities are more severe than some with high average velocities. VWAVE is written in FORTRAN V for interactive execution.
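The distinction drawn above, averaging over flight time rather than over altitude, can be sketched in a few lines; this is a hedged illustration of the idea, not VWAVE's actual FORTRAN source, and the trajectory and profile values are made up.

```python
import numpy as np

# Hedged sketch of the idea behind VWAVE (not its actual FORTRAN source):
# rather than averaging the wind profile over altitude, average the wind
# the vehicle actually sees over flight time, by sampling the profile at
# the altitude occupied at each (uniformly spaced) time step.

def average_wind_over_time(altitude_ft, profile_alt_ft, profile_wind_fps):
    """Time average of wind magnitude along an altitude-vs-time trajectory
    sampled at uniform time steps."""
    wind_seen = np.interp(altitude_ft, profile_alt_ft, profile_wind_fps)
    return float(wind_seen.mean())

# Illustrative data: a vehicle climbing at a constant rate through a
# layered wind profile (altitudes in ft, winds in fps).
t = np.linspace(0.0, 100.0, 201)                 # s
alt = 1000.0 * t                                 # ft
prof_alt = np.array([0.0, 25e3, 50e3, 100e3])
prof_wind = np.array([10.0, 40.0, 120.0, 30.0])
vwave = average_wind_over_time(alt, prof_alt, prof_wind)
```

A trajectory that lingers in the high-wind layer gets a high time average even if the profile's altitude average is modest, which is exactly why the two averages can disagree.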
Peter Kellman; Christophe Chefd'hotel; Christine H. Lorenz; Christine Mancini; Andrew E. Arai; Elliot R. McVeigh
2008-01-01
Real-time imaging may be clinically important in patients with congestive heart failure, arrhythmias, or in pediatric cases. However, real-time imaging typically has compromised spatial and temporal resolution compared with gated, segmented studies. To combine the best features of both types of imaging, a new method is proposed that uses parallel imaging to improve temporal resolution of real-time acquired images at
Herrinton, Lisa J; Liu, Liyan; Altschuler, Andrea; Dell, Richard; Rabrenovich, Violeta; Compton-Phillips, Amy L
2015-01-01
The cost to build and maintain traditional registries for many dire, complex, low-frequency conditions is prohibitive. The authors used accessible technology to develop a platform that would generate miniregistries (small, routinely updated datasets) for surveillance, to identify patients who were missing selected utilization, and to influence clinicians to change practices to improve care. The platform, tested in 5 medical specialty departments, enabled the specialists to rapidly and effectively communicate clinical questions, knowledge of disease, clinical workflows, and improvement opportunities. Each miniregistry required 1 to 2 hours of collaboration by a specialist. Turnaround was 1 to 14 days. PMID:25785640
NASA Astrophysics Data System (ADS)
Lakshminarayana, B.; Luo, J.
1993-07-01
The objective of this research is to develop turbulence models to predict the flow and heat transfer fields dominated by curvature effects, such as those encountered in turbine cascades and turn-around ducts. A Navier-Stokes code has been developed using an explicit Runge-Kutta method with a two-layer k-epsilon/ARSM (Algebraic Reynolds Stress Model), Chien's Low Reynolds Number (LRN) k-epsilon model, and Coakley's LRN q-omega model. The near-wall pressure-strain correlation term was included in the ARSM. The formulation is applied to the Favre-averaged N-S equations, and no thin-layer approximations are made in either the mean flow or the turbulence transport equations. Anisotropic scaling of the artificial dissipation terms was used, and a locally variable time step was used to improve convergence. Detailed comparisons were made between computations and data measured in a turbine cascade by Arts et al. at the Von Karman Institute. The surface pressure distributions and wake profiles were predicted well by all the models. The blade heat transfer is predicted well by the k-epsilon/ARSM model, as well as by the k-epsilon model. It was found that the onset of boundary layer transition on both surfaces is highly dependent upon the level of local freestream turbulence intensity, which is strongly influenced by the streamline curvature. A detailed computation of the flow in the turn-around duct has been carried out and validated against the data of Monson as well as Sandborn. The computed results at various streamwise locations on both the concave and convex sides are compared with flow and turbulence data, including the separation zone on the inner wall. The k-epsilon/ARSM model yielded relatively better results than the two-equation turbulence models. A detailed assessment of the turbulence models has been made with regard to their applicability to curved flows.
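The explicit Runge-Kutta time stepping mentioned above can be shown in miniature; this is the classical fourth-order scheme on a scalar ODE, not the Navier-Stokes solver itself, and the test equation is purely illustrative.

```python
# Miniature illustration of explicit Runge-Kutta time stepping (the
# classical 4th-order scheme on a scalar ODE), not the Navier-Stokes
# solver described above.

def rk4_step(f, t, y, h):
    """One classical RK4 step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Integrate dy/dt = -y from y(0) = 1 to t = 1; exact answer is exp(-1).
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
```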
Determining GPS average performance metrics
NASA Technical Reports Server (NTRS)
Moore, G. V.
1995-01-01
Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego than over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.
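The latitude-belt dwell-time weighting behind the Tierra del Fuego example can be sketched with the standard closed form for a circular orbit: time per unit latitude is proportional to cos φ / sqrt(sin²i − sin²φ), which grows without bound as φ approaches the inclination. The GPS inclination of 55 deg is factual; the city latitudes below are approximate illustrative inputs.

```python
import math

# Sketch of latitude dwell-time weighting for a satellite in a circular
# orbit of inclination i: time spent per unit latitude phi is
# proportional to cos(phi) / sqrt(sin(i)^2 - sin(phi)^2), diverging as
# phi -> +/-i. City latitudes below are approximate and illustrative.

def dwell_density(phi_deg, incl_deg=55.0):
    """Relative time per unit latitude for a circular orbit (|phi| < i)."""
    phi, i = math.radians(phi_deg), math.radians(incl_deg)
    return math.cos(phi) / math.sqrt(math.sin(i) ** 2 - math.sin(phi) ** 2)

# GPS satellites (i = 55 deg) dwell far longer over Tierra del Fuego
# (~54.8 deg S) than over Hawaii (~21 deg N) or Singapore (~1 deg N).
ratio = dwell_density(54.8) / dwell_density(21.0)
```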
NASA Astrophysics Data System (ADS)
Ferraris, Stefano; Agnese, Carmelo; Baiamonte, Giorgio; Canone, Davide; Previati, Maurizio; Cat Berro, Daniele; Mercalli, Luca
2015-04-01
Modeling of rainfall statistical structure represents an important research area in hydrology, meteorology, atmospheric physics and climatology, because of its several theoretical and practical implications. The statistical inference of the alternation of wet periods (WP) and dry periods (DP) in daily rainfall records can be achieved through the modelling of the inter-arrival time series (IT), defined as the succession of times elapsed between a rainy day and the one immediately preceding it. It has been shown previously that the statistical structure of IT can be well described by the 3-parameter Lerch distribution (Lch). In this work, Lch was successfully applied to IT data belonging to a sub-alpine area (Piemonte and Valle d'Aosta, NW Italy); furthermore, the same statistical procedure was applied to the daily rainfall depths associated with the ITs. The analysis was carried out for 26 long daily rainfall series (≥ 90 yr of observations). The main objective of this work was to detect temporal trends in some features describing the statistical structure of both the inter-arrival time series (IT) and the associated rainfall depth (H). Each time series was divided into five-year subsets, and for each of them the Lch parameters were estimated, so as to extend the trend analysis to some high quantiles.
Stochastic averaging of GRACE data
BALAJI DEVARAJU, Nico Sneeuw
Geodätisches Institut, Universität Stuttgart
Averaging operators may be deterministic (Gaussian) or stochastic (Wiener), and at the same time isotropic or anisotropic. Deterministic operators depend on an averaging radius; to perform stochastic averaging, the desired signal structure must be specified.
Ukawa, Shigekazu; Tamakoshi, Akiko; Yatsuya, Hiroshi; Yamagishi, Kazumasa; Ando, Masahiko; Iso, Hiroyasu
2015-01-01
Background: Sedentary behavior is associated with cardiovascular disease, diabetes mellitus, and cancer morbidity, and watching television (TV) is an important sedentary behavior. The aim of this study is to clarify the association between TV viewing time and chronic obstructive pulmonary disease (COPD)-related mortality in Japanese adults. Methods: Using the Cox proportional hazard model, we assessed COPD-related mortality by TV viewing time in a national cohort of 33 414 men and 43 274 women without cancer, stroke, myocardial infarction, or tuberculosis at baseline (1988-1990). Results: The median follow-up was 19.4 years; 244 men and 34 women died of COPD. Men watching ≥4 hours/day of TV were more likely to die of COPD than those watching <2 hours/day (hazard ratio 1.63; 95% confidence interval, 1.04-2.55), independent of major confounders. No association was found in women. Conclusions: Avoiding a sedentary lifestyle, particularly prolonged TV viewing, may help in preventing death from COPD among men. PMID:25947581
ERIC Educational Resources Information Center
Picucci, Ali Callicoatte; Brownson, Amanda; Kahlert, Rahel; Sobel, Andrew
These case studies described how seven high-poverty middle schools were able to demonstrate strong academic improvement in a short amount of time so they were performing at levels consistent with, and often better than, higher-income schools in their states. These case studies focus on procedural knowledge that may prove useful to other schools…
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. 
The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical engineering applications.
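The two averaging operations in DMA can be sketched in miniature. The 1D arrays, window sizes, and random fields below are illustrative assumptions, and the single coupling correlation shown is the generic <uv> - <u><v> form rather than the full set of correlations in the actual method.

```python
import numpy as np

# Miniature sketch of the two DMA averaging operations on 1D data:
# (1) running time averaging of fine-scale fields, then
# (2) volume (block) averaging onto a coarser grid, with a coupling
# correlation <uv> - <u><v> computed from the fine fields.
# Field sizes and the single correlation shown are illustrative only.

def running_time_average(samples):
    """Mean over the time axis (axis 0) of shape (nt, nx) data."""
    return samples.mean(axis=0)

def volume_average(field, block):
    """Block-average a 1D field onto a grid coarser by factor `block`."""
    return field.reshape(-1, block).mean(axis=1)

rng = np.random.default_rng(0)
u = rng.normal(size=(1000, 64))            # fine-scale field u(t, x)
v = rng.normal(size=(1000, 64)) + 0.5 * u  # field correlated with u

u_t, v_t, uv_t = (running_time_average(f) for f in (u, v, u * v))
# Coupling correlation on the coarse grid: <uv> - <u><v>, which would be
# added as a source term in the next-coarser-mesh computation.
correlation = volume_average(uv_t - u_t * v_t, block=8)
```

Because the correlation is computed directly from the fine fields rather than from a closure model, it carries the scale-coupling information described in the abstract.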
Jiao, S; Maltecca, C; Gray, K A; Cassady, J P
2014-04-28
Efficient use of feed resources has become a clear challenge for the US pork industry as feed costs continue to be the largest variable expense. The availability of the Illumina Porcine60K BeadChip has greatly facilitated whole-genome association studies to identify chromosomal regions harboring genes influencing those traits. The current study aimed at identifying genomic regions associated with variation in feed efficiency and several production traits in a Duroc terminal sire population, including ADFI, ADG, feed conversion ratio (FCR), residual feed intake (RFI), real-time ultrasound back fat thickness (BF), muscle depth (MD), intramuscular fat content (IMF), birth weight (BW at birth) and weaning weight (BW at weaning). Single-trait association analyses were performed using Bayes-B models with 35,140 SNP on 18 autosomes after quality control. Significance of non-overlapping 1-Mb windows (n = 2,380) was tested across 3 QTL inference methods: posterior distribution of window variances from Markov chain Monte Carlo, naive Bayes factor and non-parametric bootstrapping. Genes within the informative QTL regions for the traits were annotated. A 4-Mb region on SSC 1 (166-140 Mb), approximately 8 Mb upstream of the MC4R gene, was significantly associated with ADFI, ADG and BF; SOCS6 and DOK6 are proposed as the most likely candidate genes. Another region affecting BW at weaning was identified on SSC 4 (84-85 Mb), harboring genes previously found to influence both human and cattle height: PLAG1, CHCHD7, RDHE2 (or SDR16C5), MOS, RPS20, LYN and PENK. No QTL were identified for RFI, IMF and BW at birth. In conclusion, we have identified several genomic regions associated with traits affecting nutrient utilization that could be considered for future genomic prediction to improve feed utilization. PMID:24778334
Covariant approximation averaging
NASA Astrophysics Data System (ADS)
Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2015-06-01
We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
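The structure of the unbiased AMA estimator can be sketched generically: average the cheap approximation over many source positions, then add a bias-correction term from a few expensive exact solves. The mock observables and noise levels below are illustrative stand-ins for correlation functions, not lattice data.

```python
import numpy as np

# Generic sketch of the all-mode averaging (AMA) estimator:
#   O_AMA = <O_approx over many (cheap) measurements>
#         + <O_exact - O_approx over a few (expensive) measurements>.
# The second term removes the approximation's bias, so the estimator is
# unbiased while most of the cost goes into the cheap part.
# The mock observables below are illustrative numbers only.

def ama_estimate(approx_all, exact_few, approx_few):
    """Unbiased AMA estimate from cheap and expensive measurements."""
    return approx_all.mean() + (exact_few - approx_few).mean()

rng = np.random.default_rng(1)
truth = 2.0
bias = 0.3                                   # systematic error of the approximation
exact = truth + 0.1 * rng.normal(size=128)   # exact solves (few are used)
approx = exact - bias + 0.01 * rng.normal(size=128)

est = ama_estimate(approx_all=approx, exact_few=exact[:8], approx_few=approx[:8])
# The approximation's bias cancels between the two terms.
```

The variance reduction comes from the cheap term: its statistical error shrinks with the large number of approximate measurements, while the correction term is small and cheap to estimate precisely.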
Vibration analysis by time-average holography
NASA Technical Reports Server (NTRS)
Aprahamian, R.; Evensen, D. A.
1971-01-01
Method photographs vibration modes as high as 100 kHz. Technique also measures vibration decay and damping characteristics. Compensation techniques for structural size limitations and for background noise excitations are discussed.
Searching for the Beginning of the Ozone Turnaround Using a 22-Year Merged Satellite Data Set
NASA Technical Reports Server (NTRS)
Stolarski, Richard S.; Meeson, Blanche W. (Technical Monitor)
2001-01-01
We have used the data from six satellite instruments that measure the total column amount of ozone to construct a consistent merged data set extending from late 1978 into 2000. The keys to constructing a merged data set are to minimize potential drift of individual instruments and to accurately establish instrument-to-instrument offsets. We have used the short-wavelength D-pair measurements (306nm-313nm) of the SBUV and SBUV/2 instruments near the equator to establish a relatively drift-free record for these instruments. We have then used their overlap with the Nimbus 7 and EP TOMS instruments to establish the relative calibration of the various instruments. We have evaluated the drift uncertainty in our merged ozone data (MOD) set by examining both the individual instrument drift uncertainty and the uncertainty in establishing the instrument-to-instrument differences. We conclude that the instrumental drift uncertainty over the 22-year data record is 0.9 %/decade (2-sigma). We have compared our MOD record with 37 ground stations that have a continuous record over that time period. We have a mean drift with respect to the stations of +0.3 %/decade, which is within 1-sigma of our uncertainty estimate. Using the satellite record as a transfer standard, we can estimate the capability of the ground instruments to establish satellite calibration. Adding the statistical variability of the station drifts with respect to the satellite to an estimate of the overall drift uncertainty of the world standard instrument, we conclude that the stations should be able to be used to establish the drift of the satellite data record to within an uncertainty of 0.6 %/decade (2-sigma). Adding to this an uncertainty due to the incomplete global coverage of the stations, we conclude that the station data should be able to establish the global trend with an uncertainty of about 0.7 %/decade, slightly better than for the satellite record.
We conclude that merging the two records together gives only a slight improvement in the uncertainty. Keeping them separate gives the greater confidence of two independent measures of the ozone trend and potential recovery. We fit the trend in our MOD record through May of 1991 and then extrapolated forward to see if the data at the end of the record were above the statistical model, as a measure of ozone recovery, as was done in the last WMO/UNEP assessment report. Because our data set drifts with respect to the ground stations through May of 1991, we calculated a smaller global trend (-1.1 %/decade) than in the WMO/UNEP report. Our data in 1998 and 1999 were, on average, 2 DU above the extrapolated statistical model with a 2-sigma uncertainty of 6 DU. For the combined mid-latitudes of the northern and southern hemispheres, the data were 5 DU above the extrapolated statistical model with a 2-sigma uncertainty of 10 DU. These may be signs of recovery, but they are still statistically insignificant.
NASA Astrophysics Data System (ADS)
Siegel, Edward Carl-Ludwig; Newcomb, Simon; Strutt-Rayleigh, John William; Poincare, Henri; Weyl, Hermann; Benford, Frederick; Antonoff, Marvin
2015-03-01
DIGIT-PHYSICS: DIGITS?: For a Very Long Time Giving Us All The FINGER!!!: CONTRA Wigner,``On the Unreasonable Effectiveness of Physics in Mathematics!'' A Surprise in Theoretical/Experimental Physics and/or Ostensibly Pure-Mathematics: PHYSICS: Quantum-Mechanics/Statistical-.Mechanics. DIGITS-LAW(S); DIGITS' ostensibly ``pure-mathematics' 1:1-map onto the QUANTUM!!! [Google:''http://www.benfordonline.net/ list/ chronological'']: Newcomb[Am.J.Math.4,39(1881)]-Poincare[Calcul des Probabilité(1912)]-Weyl[Math.Ann., 77, 313(1916)-Benford[J.Am.Phil Soc,78,115 (1938)]-..-Antonoff/Siegel[AMS Joint-Mtg.,San Diego(2002)-abs.# 973-60-124] empirical inter-digit{on-ANY/ALL averageS)
= log10(1 + 1/d) = log10([d+1]/d), which upon algebraic inversion is d = 1/[10^P - 1] = 1/[e^(2.303...P) - 1], where P is the inter-digit probability, a Bose-Einstein-like 1/[e^x - 1] form: Digits Are Bosons Are Quanta Because (On Average) Quanta and Bosons Are and Always Were Digits!!! (Ex: atom energy-levels numbering: 0,...,9) ANY/ALL QUANTUM-physics [Planck(1901)-Einstein(1905)-Bose(1924)-Einstein(1925)-vs.-Fermi(1927)-Dirac(1927)-...] is and always was Newcomb (1881) DIGIT-physics!!!
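The inter-digit law quoted in this record is the Newcomb-Benford law. A short numerical check of the formula and its algebraic inversion (function names are illustrative):

```python
import math

def benford_prob(d: int) -> float:
    """Newcomb-Benford probability of leading digit d: P(d) = log10(1 + 1/d)."""
    return math.log10(1 + 1 / d)

def digit_from_prob(p: float) -> float:
    """Algebraic inversion of the law: d = 1 / (10**p - 1)."""
    return 1 / (10 ** p - 1)

# Digits 1..9 exhaust the probability: the terms telescope to log10(10) = 1
probs = {d: round(benford_prob(d), 3) for d in range(1, 10)}
print(probs)          # leading digit 1 occurs ~30.1% of the time
print(round(digit_from_prob(benford_prob(3))))  # inversion recovers 3
```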
Freethey, G.W.; Spangler, L.E.; Monheiser, W.J.
1994-01-01
A 48-square-mile area in the southeastern part of the Salt Lake Valley, Utah, was studied to determine if generalized information obtained from geologic maps, water-level maps, and drillers' logs could be used to estimate hydraulic conductivity, porosity, and slope of the potentiometric surface: the three properties needed to calculate average linear velocity of ground water. Estimated values of these properties could be used by water-management and regulatory agencies to compute values of average linear velocity, which could be further used to estimate travel time of ground water along selected flow lines, and thus to determine wellhead protection areas around public-supply wells. The methods used to estimate the three properties are based on assumptions about the drillers' descriptions, the depositional history of the sediments, and the boundary conditions of the hydrologic system. These assumptions were based on geologic and hydrologic information determined from previous investigations. The reliability of the estimated values for hydrologic properties and average linear velocity depends on the accuracy of these assumptions. Hydraulic conductivity of the principal aquifer was estimated by calculating the thickness-weighted average of values assigned to different drillers' descriptions of material penetrated during the construction of 98 wells. Using these 98 control points, the study area was divided into zones representing approximate hydraulic-conductivity values of 20, 60, 100, 140, 180, 220, and 250 feet per day. This range of values is about the same range of values used in developing a ground-water flow model of the principal aquifer in the early 1980s.
Porosity of the principal aquifer was estimated by compiling the range of porosity values determined or estimated during previous investigations of basin-fill sediments, and then using five different values ranging from 15 to 35 percent to delineate zones in the study area that were assumed to be underlain by similar deposits. Delineation of the zones was based on depositional history of the area and the distribution of sediments shown on a surficial geologic map. Water levels in wells were measured twice in 1990: during late winter when ground-water withdrawals were the least and water levels the highest, and again in late summer, when ground-water withdrawals were the greatest and water levels the lowest. These water levels were used to construct potentiometric-contour maps and subsequently to determine the variability of the slope in the potentiometric surface in the area. Values for the three properties, derived from the described sources of information, were used to produce a map showing the general distribution of average linear velocity of ground water moving through the principal aquifer of the study area. Velocity derived ranged from 0.06 to 144 feet per day with a median of about 3 feet per day. Values were slightly faster for late summer 1990 than for late winter 1990, mainly because increased withdrawal of water during the summer created slightly steeper hydraulic-head gradients between the recharge area near the mountain front and the well fields farther to the west. The fastest average linear-velocity values were located at the mouth of Little Cottonwood Canyon and south of Dry Creek near the mountain front, where the hydraulic conductivity was estimated to be the largest because the drillers described the sediments to be predominantly clean and coarse grained. Both of these areas also had steep slopes in the potentiometric surface.
Other areas where average linear velocity was fast included small areas near pumping wells where the slope in the potentiometric surface was locally steepened. No apparent relation between average linear velocity and porosity could be seen in the mapped distributions of these two properties. Calculation of travel time along a flow line to a well in the southwestern part of the study area during the sum
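The calculation this study describes reduces to the standard average-linear-velocity relation v = K·i/n (hydraulic conductivity times potentiometric slope, divided by porosity). A minimal sketch, with illustrative values drawn from the ranges quoted in the abstract rather than from any specific site:

```python
def average_linear_velocity(hydraulic_conductivity_ft_per_day: float,
                            potentiometric_slope: float,
                            porosity: float) -> float:
    """Average linear velocity v = K * i / n (Darcy flux divided by porosity)."""
    return hydraulic_conductivity_ft_per_day * potentiometric_slope / porosity

# Illustrative zone: high-conductivity coarse-grained deposits, steep gradient
v = average_linear_velocity(250, 0.01, 0.15)
print(f"{v:.2f} ft/day")  # 16.67, within the 0.06-144 ft/day range reported
```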
Chen, Guang-Hong; Li, Yinsheng
2015-01-01
Purpose: In x-ray computed tomography (CT), a violation of the Tuy data sufficiency condition leads to limited-view artifacts. In some applications, it is desirable to use data corresponding to a narrow temporal window to reconstruct images with reduced temporal-average artifacts. However, the need to reduce temporal-average artifacts in practice may result in a violation of the Tuy condition and thus undesirable limited-view artifacts. In this paper, the authors present a new iterative reconstruction method, synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON), to eliminate limited-view artifacts using data acquired within an ultranarrow temporal window that severely violates the Tuy condition. Methods: In time-resolved contrast enhanced CT acquisitions, image contrast dynamically changes during data acquisition. Each image reconstructed from data acquired in a given temporal window represents one time frame and can be denoted as an image vector. Conventionally, each individual time frame is reconstructed independently. In this paper, all image frames are grouped into a spatial–temporal image matrix and are reconstructed together. Rather than the spatial and/or temporal smoothing regularizers commonly used in iterative image reconstruction, the nuclear norm of the spatial–temporal image matrix is used in SMART-RECON to regularize the reconstruction of all image time frames. This regularizer exploits the low-dimensional structure of the spatial–temporal image matrix to mitigate limited-view artifacts when an ultranarrow temporal window is desired in some applications to reduce temporal-average artifacts. Both numerical simulations in two dimensional image slices with known ground truth and in vivo human subject data acquired in a contrast enhanced cone beam CT exam have been used to validate the proposed SMART-RECON algorithm and to demonstrate the initial performance of the algorithm. 
Reconstruction errors and temporal fidelity of the reconstructed images were quantified using the relative root mean square error (rRMSE) and the universal quality index (UQI) in numerical simulations. The performance of the SMART-RECON algorithm was compared with that of the prior image constrained compressed sensing (PICCS) reconstruction quantitatively in simulations and qualitatively in the human subject exam. Results: In numerical simulations, the 240° short scan angular span was divided into four consecutive 60° angular subsectors. SMART-RECON enables four high temporal fidelity images without limited-view artifacts. The average rRMSE is 16%, and the UQIs are 0.96 and 0.95 for the two local regions of interest, respectively. In contrast, the corresponding average rRMSE and UQIs are 25%, 0.78, and 0.81, respectively, for the PICCS reconstruction. Note that only one filtered backprojection image can be reconstructed from the same data set, with an average rRMSE of 45% and UQIs of 0.71 and 0.79, respectively, to benchmark reconstruction accuracies. For in vivo contrast enhanced cone beam CT data acquired from a short scan angular span of 200°, three 66° angular subsectors were used in SMART-RECON. The results demonstrated clear contrast difference in three SMART-RECON reconstructed image volumes without limited-view artifacts. In contrast, for the same angular sectors, PICCS cannot reconstruct images without limited-view artifacts and with clear contrast difference in three reconstructed image volumes. Conclusions: In time-resolved CT, the proposed SMART-RECON method provides a new method to eliminate limited-view artifacts using data acquired in an ultranarrow temporal window, which corresponds to approximately 60° angular subsectors. PMID:26233197
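The SMART-RECON regularizer described above is the nuclear norm of the spatial-temporal image matrix. A toy sketch of that quantity; the rank-1 "frames" matrix is an assumed stand-in for time frames that differ mainly in contrast level, not data from the paper:

```python
import numpy as np

def nuclear_norm(spatio_temporal_matrix: np.ndarray) -> float:
    """Nuclear norm = sum of singular values; SMART-RECON penalizes this
    to exploit the low-rank structure of the matrix of image time frames."""
    return float(np.sum(np.linalg.svd(spatio_temporal_matrix, compute_uv=False)))

# Toy stand-in: four frames, each column a scaled copy of one base image (rank 1)
rng = np.random.default_rng(0)
base = rng.standard_normal(32 * 32)
scales = np.array([1.0, 1.3, 1.7, 2.2])
frames = np.outer(base, scales)          # (1024, 4) spatial-temporal matrix

# For a rank-1 matrix the nuclear norm equals its single singular value
print(abs(nuclear_norm(frames) - np.linalg.norm(base) * np.linalg.norm(scales)) < 1e-8)  # True
```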
NASA Astrophysics Data System (ADS)
Poland, Michael P.
2014-07-01
Differencing digital elevation models (DEMs) derived from TerraSAR add-on for Digital Elevation Measurements (TanDEM-X) synthetic aperture radar imagery provides a measurement of elevation change over time. On the East Rift Zone (ERZ) of Kīlauea Volcano, Hawai‘i, the effusion of lava causes changes in topography. When these elevation changes are summed over the area of an active lava flow, it is possible to quantify the volume of lava emplaced at the surface during the time spanned by the TanDEM-X data—a parameter that can be difficult to measure across the entirety of an ~100 km2 lava flow field using ground-based techniques or optical remote sensing data. Based on the differences between multiple TanDEM-X-derived DEMs collected days to weeks apart, the mean dense-rock equivalent time-averaged discharge rate of lava at Kīlauea between mid-2011 and mid-2013 was approximately 2 m3/s, which is about half the long-term average rate over the course of Kīlauea's 1983-present ERZ eruption. This result implies that there was an increase in the proportion of lava stored versus erupted, a decrease in the rate of magma supply to the volcano, or some combination of both during this time period. In addition to constraining the time-averaged discharge rate of lava and the rates of magma supply and storage, topographic change maps derived from space-based TanDEM-X data provide insights into the four-dimensional evolution of Kīlauea's ERZ lava flow field. TanDEM-X data are a valuable complement to other space-, air-, and ground-based observations of eruptive activity at Kīlauea and offer great promise at locations around the world for aiding with monitoring not just volcanic eruptions but any hazardous activity that results in surface change, including landslides, floods, earthquakes, and other natural and anthropogenic processes.
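The volume estimate described above sums DEM elevation differences over the flow area and divides by the acquisition interval. A hedged sketch of that bookkeeping; the grid size, pixel spacing, interval, and dense-rock-equivalent correction factor are assumptions for illustration, not values from the study:

```python
import numpy as np

def time_averaged_discharge(dem_before: np.ndarray, dem_after: np.ndarray,
                            pixel_area_m2: float, interval_s: float,
                            dre_factor: float = 0.75) -> float:
    """Bulk volume gain from DEM differencing, converted to a dense-rock
    equivalent (DRE) time-averaged discharge rate in m^3/s.
    dre_factor is an assumed vesicularity correction."""
    dh = dem_after - dem_before                    # elevation change per pixel (m)
    bulk_volume = float(np.sum(dh[dh > 0])) * pixel_area_m2
    return dre_factor * bulk_volume / interval_s

# Toy example: 2 m of new lava over a 10x10-pixel patch, 12 m pixels, 11 days apart
before = np.zeros((50, 50))
after = before.copy()
after[:10, :10] += 2.0
rate = time_averaged_discharge(before, after, 12.0 ** 2, 11 * 86400)
print(f"{rate:.3f} m^3/s DRE")
```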
Americans' Average Radiation Exposure
NA
2000-08-11
We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.
Ensemble averaging of acoustic data
NASA Technical Reports Server (NTRS)
Stefanski, P. K.
1982-01-01
A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions for the program. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.
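The program's core operation, ensemble averaging of repeated synchronously-sampled records, can be sketched as follows; the test signal, noise level, and record count are illustrative assumptions:

```python
import numpy as np

def ensemble_average(records: np.ndarray) -> np.ndarray:
    """Average N time records point by point: the coherent signal is preserved
    while uncorrelated noise drops roughly as 1/sqrt(N)."""
    return records.mean(axis=0)

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t)                       # 5 Hz tone
records = signal + rng.standard_normal((256, t.size))    # 256 noisy records
avg = ensemble_average(records)
residual = float(np.std(avg - signal))
print(residual < 0.1)  # True: unit-variance noise reduced to ~1/16
```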
An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System
NASA Astrophysics Data System (ADS)
Helmy, Tarek; Fatai, Anifowose; Sallam, El-Sayed
PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency in intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have associated drawbacks. An efficient alternative algorithm, based on a randomized selection policy, has been proposed, demonstrated, confirmed for efficiency and fairness on the average, and recommended for implementation in PicOS. Simulations were carried out, and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results show that the randomized algorithm is the most attractive for implementation in PicOS, since it is the fairest and has the least AWT and ATT on average over the other non-preemptive scheduling algorithms implemented in this paper.
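A minimal sketch of a non-preemptive randomized selection policy and the AWT/ATT measures the paper uses, assuming all processes are ready at time zero; the burst times are hypothetical, not values from the paper:

```python
import random

def randomized_schedule(burst_times, seed=42):
    """Non-preemptively run processes, picking the next one uniformly at
    random from the ready list; returns (AWT, ATT) assuming arrival at t=0."""
    rng = random.Random(seed)
    ready = list(burst_times)
    clock = total_wait = total_turnaround = 0
    while ready:
        burst = ready.pop(rng.randrange(len(ready)))
        total_wait += clock          # waiting time = start time (arrival at 0)
        clock += burst
        total_turnaround += clock    # turnaround time = completion time
    n = len(burst_times)
    return total_wait / n, total_turnaround / n

awt, att = randomized_schedule([8, 4, 9, 5])   # hypothetical burst times (ms)
print(f"AWT={awt:.2f} ms, ATT={att:.2f} ms")
```

Note that for any ordering, ATT exceeds AWT by exactly the mean burst time, which is a quick sanity check on the bookkeeping.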
2001-04-12
A lot of people could have been having this idea at the same time, of course, ... turned out to be a definite yes: Lisp gave us a great advantage over competitors ... bet on the wrong technology, your competitors will crush you. ... work the best. ... We eventually had many competitors, on the order of twenty to thirty of them, but.
Snowpack: Decadal Averages Map
NSDL National Science Digital Library
California Energy Commission
This is an interactive map of California and the Sierra Nevada mountains, showing projected variations in water stored in snowpack, from 1950 to 2090, assuming low or high emission scenarios over that period of time. Interactive can be adjusted to show different months of the year and various climate models, graphed by site.
Few-Electron Quantum Dot in a Fast Turnaround Dilution Refrigerator
Zumbühl, Dominik
to be a promising approach. In recent years, many of the elements necessary for quantum computation have been demonstrated. Spin states are long-lived, i.e. spins have long relaxation times [5] (decoherence times of up to one […]). 2.1 Sample: The sample is an in-house fabricated device with evaporated TiAu gates on a GaAs-AlGaAs heterostructure
IgG/anti-IgG immunoassay based on a turn-around point long period grating
NASA Astrophysics Data System (ADS)
Chiavaioli, F.; Biswas, P.; Trono, C.; Giannetti, A.; Tombelli, S.; Bandyopadhyay, S.; Basumallick, N.; Dasgupta, K.; Baldini, F.
2014-02-01
Long period fiber gratings (LPFGs) have been proposed as label-free optical biosensors for a few years. Refractive index changes, which modify the fiber transmission spectrum, are used for evaluating a biochemical interaction that occurs along the grating region. A turn-around point (TAP) LPFG was manufactured for enhancing the refractive index sensitivity of these devices. Considering the simplicity and the fast process with respect to the silanization procedure, the functionalization of the fiber was carried out with the Eudragit L100 copolymer. An IgG/anti-IgG immunoassay was implemented for studying the antigen/antibody interaction. A limit of detection lower than 100 μg L-1 was achieved. Based on the same model assay, we compared the resonance wavelength shifts during the injection of 10 mg L-1 anti-IgG antigen between the TAP LPFG and a standard non-TAP one, in which the coupling occurs with a lower order cladding mode, as a performance improvement of the LPFG-based biosensors.
Rosowski, John J.; Cheng, Jeffrey Tao; Ravicz, Michael E.; Hulli, Nesim; Hernandez-Montes, Maria; Harrington, Ellery; Furlong, Cosme
2009-01-01
Time-averaged holograms describing the sound-induced motion of the tympanic membrane (TM) in cadaveric preparations from three mammalian species and one live ear were measured using opto-electronic holography. This technique allows rapid measurements of the magnitude of motion of the tympanic membrane surface at frequencies as high as 25 kHz. The holograms measured in response to low and middle-frequency sound stimuli are similar to previously reported time-averaged holograms. However, at higher frequencies (f > 4 kHz), our holograms reveal unique TM surface displacement patterns that consist of highly-ordered arrangements of multiple local displacement magnitude maxima, each of which is surrounded by nodal areas of low displacement magnitude. These patterns are similar to modal patterns (two-dimensional standing waves) produced by either the interaction of surface waves traveling in multiple directions or the uniform stimulation of modes of motion that are determined by the structural properties and boundary conditions of the TM. From the ratio of the displacement magnitude peaks to nodal valleys in these apparent surface waves, we estimate a Standing Wave Ratio of at least 4 that is consistent with energy reflection coefficients at the TM boundaries of at least 0.35. It is also consistent with small losses within the uniformly stimulated modal surface waves. We also estimate possible TM surface wave speeds that vary with frequency and species from 20 to 65 m/s, consistent with other estimates in the literature. The presence of standing wave or modal phenomena has previously been intuited from measurements of TM function, but is ignored in some models of tympanic membrane function. Whether these standing waves result either from the interactions of multiple surface waves that travel along the membrane, or by uniformly excited modal displacement patterns of the entire TM surface is still to be determined. PMID:19328841
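The reflection estimate in the abstract follows from the standing wave ratio: pressure reflection magnitude |r| = (SWR − 1)/(SWR + 1), and the energy reflection coefficient is |r|². A quick check that SWR = 4 gives the quoted value of roughly 0.35:

```python
def energy_reflection_from_swr(swr: float) -> float:
    """Energy reflection coefficient |r|**2 from a standing wave ratio,
    with |r| = (SWR - 1) / (SWR + 1)."""
    r = (swr - 1) / (swr + 1)
    return r * r

print(round(energy_reflection_from_swr(4.0), 2))  # 0.36, consistent with >= 0.35
```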
NASA Astrophysics Data System (ADS)
Becker, Martin Andrew
1997-11-01
Unconformities and fossil-rich layers are common elements in the stratigraphic architecture of upper Cretaceous sediments exposed on both the Atlantic and Eastern Gulf Coastal Plains. Contacts between the Eutaw Formation and Tombigbee Sands Member in Alabama, the Blufftown Formation and Cusseta Sands in Georgia and the Wenonah-Mt. Laurel and Navesink Formations in New Jersey are marked by erosional surfaces with overlying blankets and lenses of macrofossil residuum. These contacts correspond to bounding unconformities and transgressive lags separating Santonian-Campanian, lower Campanian-upper Campanian and Campanian-Maastrichtian depositional sequences. Regression and subsequent transgression of sea level at the top of these depositional sequences resulted in hydrodynamic sorting of sediments and fossils that had previously accumulated in shelf and lower shoreface paleoenvironments. Remobilization of sediments by shoreface retreat reworked fossil hard-parts, which became concentrated above erosional surfaces as sea level rose. Because of the abundance of chondrichthyan, pelecypod and ammonite fossils, these lags have great biostratigraphic significance and provide a basis for examining time averaging in macrofossil zonation. Chondrichthyan teeth are composed of extremely durable and highly insoluble biogenic apatite. This tooth apatite accurately records the 87Sr/86Sr isotopic signature of seawater, from which the numerical age of the teeth can be calculated using published age/concentration data. Teeth (e.g. Squalicorax kaupi, Scapanorhynchus texanus) from Santonian-Campanian lag deposits at the contact of the Eutaw Formation and Tombigbee Sands Member in Alabama yield approximate ages of 85-81 Ma. Teeth from lower-upper Campanian lag deposits at the contact of the Blufftown Formation and Cusseta Sands in Georgia yield approximate ages of 83-75 Ma. Teeth from Campanian-Maastrichtian lag deposits at the contact of the Wenonah-Mt.
Laurel and Navesink Formations in New Jersey yield approximate ages of 80-76 Ma. Isotopic age determinations from these chondrichthyan teeth indicate that average hiatuses of approximately 3-7 million years occurred during the development of lag accumulations and transgressive unconformities. Santonian, Campanian and Maastrichtian macrofossils analyzed in this study are hydrodynamically stable components representing time-averaged fossil assemblages sorted together by physical processes and are not life cohorts. The abrupt appearance and disappearance of organisms found in upper Cretaceous lag deposits of the Atlantic and Eastern Gulf Coastal Plains are artifacts of the physical sorting processes associated with sea-level cyclicity.
ERIC Educational Resources Information Center
Conrad, Jill K.
2013-01-01
This dissertation examines the levels of and relationships between technical leadership, adaptive leadership, and defined autonomy among Denver school leaders along with their combined effects on school growth gains over time. Thirty principals provided complete responses to an online survey that included existing scales for technical leadership,…
Arithmetic averaging: A versatile technique for smoothing and trend removal
Clark, E.L.
1993-12-31
Arithmetic averaging is simple, stable, and can be very effective in attenuating the undesirable components in a complex signal, thereby providing smoothing or trend removal. An arithmetic average is easy to calculate. However, the resulting modifications to the data, in both the time and frequency domains, are not well understood by many experimentalists. This paper discusses the following aspects of averaging: (1) types of averages -- simple, cumulative, and moving; and (2) time and frequency domain effects of the averaging process.
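The three average types the paper lists (simple, cumulative, and moving) can be sketched directly; the sample data are illustrative:

```python
def simple_average(x):
    """One number summarizing the whole record."""
    return sum(x) / len(x)

def cumulative_average(x):
    """Running mean of all samples seen so far."""
    out, total = [], 0.0
    for i, v in enumerate(x, 1):
        total += v
        out.append(total / i)
    return out

def moving_average(x, window):
    """Unweighted sliding-window mean: a time-domain smoothing /
    trend-removal operation (acts as a low-pass filter)."""
    return [sum(x[i:i + window]) / window for i in range(len(x) - window + 1)]

data = [1, 2, 3, 4, 5, 6]
print(simple_average(data))       # 3.5
print(cumulative_average(data))   # [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
print(moving_average(data, 3))    # [2.0, 3.0, 4.0, 5.0]
```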
Differential absorption lidar signal averaging
NASA Technical Reports Server (NTRS)
Grant, William B.; Brothers, Alan M.; Bogan, James R.
1988-01-01
This paper presents experimental results using an atmospheric backscatter dual CO2 laser DIAL. It is shown that DIAL signals can be averaged to obtain an N^(-1/2) decrease in the standard deviation of the ratio of backscattered returns from two lasers, where N is the number of DIAL signals averaged, and that such a lidar system can make measurements of gas concentrations with a precision of 0.7 percent in absorptance over 75 m in a short measurement time when the signal strength is high. Factors that eventually limit the rate of improvement in the SNR, such as changes in the ratio of the absorption and/or backscatter at the two laser frequencies and background noise, are discussed. In addition, it is noted that DIAL measurements made using hard-target backscatter often show departures from the N^(-1/2) improvement in the standard deviation because they are further limited by the combined effects of atmospheric turbulence and speckle (since the relative reproducibility of the speckle pattern on the receiver gives rise to correlations of the lidar signals).
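The N^(-1/2) averaging behavior for uncorrelated shots can be illustrated with a toy Monte Carlo of the on/off return ratio; the ratio value, noise level, and trial count are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_RATIO = 0.9   # assumed on-line/off-line backscatter ratio

def ratio_std(n_shots: int, trials: int = 2000) -> float:
    """Std dev of the n_shots-averaged on/off return ratio, estimated
    over many independent trials with uncorrelated shot noise."""
    on = TRUE_RATIO + 0.05 * rng.standard_normal((trials, n_shots))
    off = 1.0 + 0.05 * rng.standard_normal((trials, n_shots))
    return float(np.std((on / off).mean(axis=1)))

s1, s100 = ratio_std(1), ratio_std(100)
print(s1 / s100)   # ~10, the N**(-1/2) improvement for N = 100
```

Correlated noise (e.g. speckle on a hard target, as the abstract notes) would break the independence assumption and flatten this improvement.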
Light propagation in the averaged universe
Bagheri, Samae; Schwarz, Dominik J. E-mail: dschwarz@physik.uni-bielefeld.de
2014-10-01
Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.
The Hubble rate in averaged cosmology
Umeh, Obinna; Larena, Julien; Clarkson, Chris E-mail: julien.larena@gmail.com
2011-03-01
The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H_0, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate and the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions.
Herrinton, Lisa J; Liu, Liyan; Altschuler, Andrea; Dell, Richard; Rabrenovich, Violeta; Compton-Phillips, Amy L
2015-01-01
Context: Disease registries enable priority setting and batching of clinical tasks, such as reaching out to patients who have missed a routine laboratory test. Building disease registries requires collaboration among professionals in medicine, population science, and information technology. Specialty care addresses many complex, uncommon conditions, and these conditions are diverse. The cost to build and maintain traditional registries for many diverse, complex, low-frequency conditions is prohibitive. Objective: To develop and to test the Specialty Miniregistries platform, a collaborative interface designed to streamline the medical specialist’s contributions to the science and management of population health. Design: We used accessible technology to develop a platform that would generate miniregistries (small, routinely updated datasets) for surveillance, to identify patients who were missing expected utilization, and to influence clinicians and others to change practices to improve care. The platform was composed of staff, technology, and structured collaborations, organized into a workflow. The platform was tested in five medical specialty departments. Main Outcome Measure: Proof of concept. Results: The platform enabled medical specialists to rapidly and effectively communicate clinical questions, knowledge of disease, clinical workflows, and improvement opportunities. Their knowledge was used to build and to deploy the miniregistries. Each miniregistry required 1 to 2 hours of collaboration by a medical specialist. Turnaround was 1 to 14 days. Conclusions: The Specialty Miniregistries platform is useful for low-volume questions that often occur in specialty care, and it requires low levels of investment. The efficient organization of information workers to support accountable care is an emerging question. PMID:25785640
Computation of flows in a turn-around duct and a turbine cascade using advanced turbulence models
NASA Technical Reports Server (NTRS)
Lakshminarayana, B.; Luo, J.
1993-01-01
Numerical investigation has been carried out to evaluate the capability of the Algebraic Reynolds Stress Model (ARSM) and the Nonlinear Stress Model (NLSM) to predict strongly curved turbulent flow in a turn-around duct (TAD). The ARSM includes the near-wall damping term of pressure-strain correlation phi(sub ij,w), which enables accurate prediction of individual Reynolds stress components in wall flows. The TAD mean flow quantities are reasonably well predicted by various turbulence models. The ARSM yields better predictions for both the mean flow and the turbulence quantities than the NLSM and the k-epsilon (k = turbulent kinetic energy, epsilon = dissipation rate of k) model. The NLSM also shows slight improvement over the k-epsilon model. However, all the models fail to capture the recovery of the flow from strong curvature effects. The formulation for phi(sub ij,w) appears to be incorrect near the concave surface. The hybrid k-epsilon/ARSM, Chien's k-epsilon model, and Coakley's q-omega (q = the square root of k, omega = epsilon/k) model have also been employed to compute the aerodynamics and heat transfer of a transonic turbine cascade. The surface pressure distributions and the wake profiles are predicted well by all the models. The k-epsilon model and the k-epsilon/ARSM model provide better predictions of heat transfer than the q-omega model. The k-epsilon/ARSM solutions show significant differences in the predicted skin friction coefficients, heat transfer rates and the cascade performance parameters, as compared to the k-epsilon model. The k-epsilon/ARSM model appears to capture, qualitatively, the anisotropy associated with by-pass transition.
Large Eddy/Reynolds-Averaged Navier-Stokes Simulations of CUBRC Base Heating Experiments
NASA Technical Reports Server (NTRS)
Salazar, Giovanni; Edwards, Jack R.; Amar, Adam J.
2012-01-01
Even with great advances in computational techniques and computing power during recent decades, the modeling of unsteady separated flows, such as those encountered in the wake of a re-entry vehicle, continues to be one of the most challenging problems in CFD. Of most interest to the aerothermodynamics community is accurately predicting transient heating loads on the base of a blunt body, which would result in reduced uncertainties and safety margins when designing a re-entry vehicle. However, the prediction of heat transfer can vary widely depending on the turbulence model employed. Therefore, selecting a turbulence model which realistically captures as much of the flow physics as possible will result in improved predictions. Reynolds-Averaged Navier-Stokes (RANS) models have become increasingly popular due to their good performance with attached flows and the relatively quick turnaround time to obtain results. However, RANS methods cannot accurately simulate unsteady separated wake flows, and running direct numerical simulation (DNS) on such complex flows is currently too computationally expensive. Large Eddy Simulation (LES) techniques allow for the computation of the large eddies, which contain most of the Reynolds stress, while modeling the smaller (subgrid) eddies. This results in models which are more computationally expensive than RANS methods, but not as prohibitive as DNS. By complementing an LES approach with a RANS model, a hybrid LES/RANS method resolves the larger turbulent scales away from surfaces with LES and switches to a RANS model inside boundary layers. As pointed out by Bertin et al., this type of hybrid approach has shown a lot of promise for predicting turbulent flows, but work is needed to verify that these models work well in hypersonic flows. The very limited amount of flight and experimental data available presents an additional challenge for researchers.
Recently, a joint study by NASA and CUBRC has focused on collecting heat transfer data on the backshell of a scaled model of the Orion Multi-Purpose Crew Vehicle (MPCV). Heat augmentation effects due to the presence of cavities and RCS jet firings were also investigated. The high quality data produced by this effort presents a new set of data which can be used to assess the performance of CFD methods. In this work, a hybrid LES/RANS model developed at North Carolina State University (NCSU) is used to simulate several runs from these experiments and evaluate the performance of high fidelity methods as compared to more typical RANS models.
Kuss Middle School: Expanding Time to Accelerate School Improvement
ERIC Educational Resources Information Center
Massachusetts 2020, 2012
2012-01-01
In 2004, Kuss Middle School became the first school declared "Chronically Underperforming" by the state of Massachusetts. But by 2010, Kuss had transformed itself into a model for schools around the country seeking a comprehensive turnaround strategy. Kuss is using increased learning time as the primary catalyst to accelerate learning, deepen…
Hebner, G A; Riley, M E; Greenberg, K E
2002-10-01
A method for measuring the interactions of dust particles within a two-dimensional (2D) layer in a plasma is presented, along with the use of dust as a probe for determining plasma presheath electric fields. Particle interactions were directly determined from the lateral compression of two-dimensional plasma dust crystals confined in a parabolic potential well. The shape of the parabolic potential well was confirmed by observing trajectories of single particles falling within the well. Particle trajectories were in good agreement with values calculated using gas temperatures derived from laser-induced-fluorescence measurements of the argon metastable velocity distributions and assuming diffusive gas scattering. Measured particle separations combined with an equation of state for the crystal were used to derive values for the plasma screening length and the charge on the particles. Screening lengths and charges were measured for a range of plasma power, pressure, and particle diameter (mass). Analysis of the particle heights and charges was used to derive the time-averaged electric fields at the edge of the rf-driven plasma sheath. Measured electric fields were between 5 and 22 V/cm. For smaller particle diameters, the ion wind force was comparable to the force of gravity. These measurements showed that the particles are confined to the bulk-plasma side of the classical Bohm point. PMID:12443330
Averaging Models: Parameters Estimation with the R-Average Procedure
ERIC Educational Resources Information Center
Vidotto, G.; Massidda, D.; Noventa, S.
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…
Physical Theories with Average Symmetry
Roberto C. Alamino
2013-05-03
This Letter probes the existence of physical laws invariant only on average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise, and average symmetry is introduced by considering functions which are invariant only on average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's theorem with dissipative currents. The relation of this to possible violations of physical symmetries, such as Lorentz invariance in some quantum gravity theories, is briefly discussed.
ERIC Educational Resources Information Center
Purinton, Ted; Azcoitia, Carlos
2011-01-01
Chilean educator and poet Gabriela Mistral warned that children's needs are immediate and comprise more than just academic concerns. Implementing comprehensive community schools is an increasingly successful approach to taking her warning to heart, particularly in neighborhoods with large immigrant populations. The reason is simple: education does…
Average Speed and Unit Conversion
NSDL National Science Digital Library
2009-01-01
Students will determine average speeds from collected data and convert units for speed problems. Students try to roll a ball with a prescribed average speed based on intuition; then, using unit conversion, they see how accurate the rolls really were.
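The activity's two computations can be sketched in a few lines (the 12 m roll in 4 s is a made-up example, not taken from the lesson):

```python
def average_speed(distance_m, time_s):
    """Average speed in metres per second."""
    return distance_m / time_s

def mps_to_kmh(v_mps):
    """Unit conversion: metres per second to kilometres per hour."""
    return v_mps * 3.6

v = average_speed(12.0, 4.0)   # e.g. a ball rolling 12 m in 4 s -> 3.0 m/s
print(v, "m/s")
print(mps_to_kmh(v), "km/h")   # about 10.8 km/h
```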
Instrument to average 100 data sets
NASA Technical Reports Server (NTRS)
Tuma, G. B.; Birchenough, A. G.; Rice, W. J.
1977-01-01
An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points which are displayed on an oscilloscope screen to facilitate recording, and it is available in real time. Input can be any parameter which is expressed as a ±10 volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. This instrument has been used successfully on a 1975 Chevrolet V8 engine and on a Continental 6-cylinder aircraft engine. While this instrument was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
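A software analogue of the instrument's core operation, averaging a cyclic waveform over 100 cycles into 2048 discrete points, can be sketched as follows. The synthetic "engine parameter" (a sine per cycle plus cycle-to-cycle noise) is an assumption for illustration only:

```python
import numpy as np

def average_cycles(signal, cycles=100, points=2048):
    """Average a cyclically varying waveform over `cycles` repetitions,
    describing the result by `points` discrete samples (mirroring the
    instrument's 2048 points over 100 cycles)."""
    per_cycle = len(signal) // cycles
    # one row per cycle
    rows = signal[:cycles * per_cycle].reshape(cycles, per_cycle)
    # resample each cycle to a common grid of `points` samples
    x_old = np.linspace(0.0, 1.0, per_cycle)
    x_new = np.linspace(0.0, 1.0, points)
    resampled = np.array([np.interp(x_new, x_old, row) for row in rows])
    return resampled.mean(axis=0)

# synthetic parameter: 100 cycles of a sine with cycle-to-cycle noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 100 * 2 * np.pi, 100 * 500)
noisy = np.sin(t) + 0.3 * rng.standard_normal(t.size)
avg = average_cycles(noisy, cycles=100, points=2048)
```

Averaging suppresses the cycle-to-cycle noise by roughly a factor of ten (the square root of the 100 cycles), leaving the underlying curve.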
Tongue motion averaging from contour sequences.
Li, Min; Kambhamettu, Chandra; Stone, Maureen
2005-01-01
In this paper, a method to obtain the best representation of a speech motion from several repetitions is presented. Each repetition is a representation of the same speech captured at a different time by a sequence of ultrasound images and is composed of a set of 2D spatio-temporal contours. These 2D contours from different repetitions are first time-aligned by a shape-based dynamic programming (DP) method. The best representation of the speech motion is then obtained by averaging the time-aligned contours from the different repetitions. Procrustes analysis is used to measure contour similarity in the time alignment process and to compute the averaged best representation. To obtain the point correspondence for Procrustes analysis, a nonrigid point correspondence recovery method based on a local stretching model and a global constraint is developed. Synthetic validations and experiments on real tongue motion are also presented. PMID:16206480
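The Procrustes-based averaging step can be sketched as below. This is a simplification: it assumes point correspondence is already known (the paper recovers it with a nonrigid matching method) and uses ordinary similarity Procrustes alignment, not the authors' full pipeline:

```python
import numpy as np

def procrustes_align(X, Y):
    """Align contour Y (N x 2) to contour X by translation, scale and
    rotation (ordinary Procrustes). Points are assumed to correspond
    index-to-index; returns the aligned copy of Y."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)
    Yc = Yc / np.linalg.norm(Yc)
    U, s, Vt = np.linalg.svd(Xc.T @ Yc)
    R = U @ Vt                     # optimal rotation
    return s.sum() * (Yc @ R.T)    # optimally scaled and rotated Y

def average_contour(contours):
    """Average a list of corresponding contours after aligning each
    one to the (normalized) first contour."""
    ref = contours[0] - contours[0].mean(axis=0)
    ref = ref / np.linalg.norm(ref)
    aligned = [procrustes_align(ref, C) for C in contours]
    return np.mean(aligned, axis=0)
```

If one repetition is an exact similarity transform of another, alignment recovers it up to the shared normalization, so the average reduces to the common shape.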
Improved averaging for non-null interferometry
NASA Astrophysics Data System (ADS)
Fleig, Jon F.; Murphy, Paul E.
2013-09-01
Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
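The idea of rejecting defective maps and pruning unreliable pixels before averaging can be illustrated with a small sketch. The median/MAD outlier test and the two thresholds below are assumptions for illustration, not the authors' published algorithm:

```python
import numpy as np

def robust_phase_average(maps, pixel_thresh=3.0, map_reject_frac=0.1):
    """Robust per-pixel average and standard deviation of a stack of
    phase maps (2-D arrays). Maps dominated by outlier pixels (large-area
    defects) are rejected outright; isolated outlier pixels (small-area
    defects) are pruned from otherwise useful maps."""
    stack = np.array(maps, dtype=float)
    median = np.nanmedian(stack, axis=0)
    # robust per-pixel spread via the median absolute deviation
    mad = np.nanmedian(np.abs(stack - median), axis=0) + 1e-12
    outlier = np.abs(stack - median) > pixel_thresh * 1.4826 * mad
    # reject whole maps where outliers dominate
    frac = outlier.reshape(stack.shape[0], -1).mean(axis=1)
    keep = frac < map_reject_frac
    kept = stack[keep]
    kept[outlier[keep]] = np.nan     # prune small-area defects
    return np.nanmean(kept, axis=0), np.nanstd(kept, axis=0)
```

A single map with a large-area defect is dropped entirely, so it no longer spoils the average or biases the variability estimate.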
AVERAGE PREDICTIVE COMPARISONS FOR MODELS
Gelman, Andrew
on the values of the predictors. We consider various definitions based on averages over a population of interest; one example measures whether a convicted felon received a prison sentence.
Designing Software for Modular Static Average-case Analysis
M. Schellekens; D. Early; E. Popovici
2009-01-01
MOQA is a new domain-specific programming language to design software for which the average-case time analysis of its programs is guaranteed to be modular. Time in this context refers to a broad notion of cost, which can be used to estimate the actual running time, but also other quantitative information such as power consumption, while modularity means that the average
High average power pockels cell
Daly, Thomas P. (Pleasanton, CA)
1991-01-01
A high average power pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.
Averaging of temporal memories by rats.
Swanton, Dale N; Gooch, Cynthia M; Matell, Matthew S
2009-07-01
Rats were trained on a mixed fixed-interval schedule in which stimulus A (tone or light) indicated food availability after 10 s and stimulus B (the other stimulus) indicated food availability after 20 s. Testing consisted of nonreinforced probe trials in which the stimulus was A, B, or the compound AB. On single-stimulus trials, rats responded with a peak of activity around the programmed reinforced time. On compound-stimulus trials, rats showed a single scalar peak of responding at a time midway between those for stimulus A and B. These results suggest that when provided with discrepant information regarding the temporal predictability of reinforcement, rats compute an average of the scheduled reinforcement times for the A and B stimuli and use this average to generate an expectation of reward for the compound stimuli. PMID:19594288
Exploiting scale dependence in cosmological averaging
Mattsson, Teppo; Ronkainen, Maria E-mail: maria.ronkainen@helsinki.fi
2008-02-15
We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre-Tolman-Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z < 2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z ≈ 2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity-induced illusion.
Evaluations of average level spacings
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of ¹⁶⁸Er data. 19 figures, 2 tables.
Average luminosity distance in inhomogeneous universes
Kostov, Valentin
2010-04-01
Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic averaging), and is thus more directly applicable to our observations. In contrast to previous studies, the averaging is exact, non-perturbative, and includes all non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese, which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovae inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. Voids aligned along a certain direction give rise to a distance modulus correction which increases with redshift and is caused by cumulative gravitational lensing. That correction is present even for small voids and depends on their density contrast, not on their radius. Averaging over all directions destroys the cumulative lensing correction even in a non-randomized simple cubic lattice of voids. At low redshifts, the average distance modulus correction does not vanish due to the peculiar velocities, despite the photon flux conservation argument. A formula for the maximal possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that (a) have approximately constant densities in their interior and walls and (b) are not in a deep nonlinear regime. The average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximal one after a single void diameter. That is traced to cancellations between the corrections from the fronts and backs of different voids.
The results obtained allow one to readily predict the redshift above which the direction-averaged fluctuation in the Hubble diagram falls below a required precision and suggest a method to extract the background Hubble constant from low redshift data without the need to correct for peculiar velocities.
ERIC Educational Resources Information Center
Murphy, Joseph
2008-01-01
In this article, we review research from the organizational sciences to develop lessons for educators and policy makers. The approach is an integrative review of the literature. We employ a comprehensive process to unpack and make sense of the turnaround literature from the organizational sciences. We rely on strategies appropriate for document…
Average entanglement for Markovian quantum trajectories
Vogelsberger, S. [Institut Fourier, Universite Joseph Fourier and CNRS, BP 74, F-38402 Saint Martin d'Heres (France); Spehner, D. [Institut Fourier, Universite Joseph Fourier and CNRS, BP 74, F-38402 Saint Martin d'Heres (France); Laboratoire de Physique et Modelisation des Milieux Condenses, Universite Joseph Fourier and CNRS, BP 166, F-38042 Grenoble (France)
2010-11-15
We study the evolution of the entanglement of noninteracting qubits coupled to reservoirs under monitoring of the reservoirs by means of continuous measurements. We calculate the average of the concurrence of the qubits' wave function over all quantum trajectories. For two qubits coupled to independent baths subjected to local measurements, this average decays exponentially with a rate depending on the measurement scheme only. This contrasts with the known disappearance of entanglement after a finite time for the density matrix in the absence of measurements. For two qubits coupled to a common bath, the mean concurrence can vanish at discrete times. Our analysis applies to arbitrary quantum jump or quantum state diffusion dynamics in the Markov limit. We discuss the best measurement schemes to protect entanglement in specific examples.
Terrestrial kilometric radiation: 3. Average spectral properties
NASA Technical Reports Server (NTRS)
Kaiser, M. L.; Alexander, J. K.
1976-01-01
A study is presented of the average spectral properties of terrestrial kilometric radiation (TKR) derived from observations made by radio astronomy experiments onboard the IMP-6 and RAE-2 spacecraft. As viewed from near the equatorial plane, TKR is most intense and most often observed in the 21-24 hr local time zone and is rarely seen in the 09-12 hr zone. The peak flux density usually occurs near 240 kHz, but there is evidence that the peak occurs at a somewhat lower frequency on the dayside. The frequency of the peak in the average flux spectrum varies inversely with increasing substorm activity as inferred from the auroral electrojet index (AE) from a maximum near 300 kHz during very quiet times to a minimum below 200 kHz during very disturbed times. The absolute flux levels in the 100-600 kHz TKR band increase significantly with increasing AE. The average power associated with a particular source region seems to decrease rapidly with increasing source altitude.
Average configuration of the induced Venus magnetotail
McComas, D.J.; Spence, H.E.; Russell, C.T.
1985-01-01
In this paper we discuss the interaction of the solar wind flow with Venus and describe the morphology of magnetic field line draping in the Venus magnetotail. In particular, we describe the importance of the interplanetary magnetic field (IMF) X-component in controlling the configuration of field draping in this induced magnetotail, and using the results of a recently developed technique, we examine the average magnetic configuration of this magnetotail. The derived J × B forces must balance the average, steady state acceleration of, and pressure gradients in, the tail plasma. From this relation the average tail plasma velocity, lobe and current sheet densities, and average ion temperature have been derived. In this study we extend these results by making a connection between the derived consistent plasma flow speed and density, and the observational energy/charge range and sensitivity of the Pioneer Venus Orbiter (PVO) plasma analyzer, and demonstrate that if the tail is principally composed of O⁺, the bulk of the plasma should not be observable much of the time that the PVO is within the tail. Finally, we examine the importance of solar wind slowing upstream of the obstacle and its implications for the temperature of pick-up planetary ions, compare the derived ion temperatures with their theoretical maximum values, and discuss the implications of this process for comets and AMPTE-type releases.
Averaging Robertson-Walker Cosmologies
Iain A. Brown; Georg Robbers; Juliane Behrend
2009-09-10
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the Buchert averaging formalism to linear Robertson-Walker universes containing matter, radiation and dark energy and evaluate numerically the discrepancies between the assumed and the averaged behaviour, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h=0.701. For the LCDM concordance model, the backreaction is of the order of Omega_eff~4x10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff<-1/3 can be found for strongly phantom models.
NASA Astrophysics Data System (ADS)
Park, Da Hye; Jang, Sung Ho; Noh, Chang Hyeon; Lee, Jong Sik
Grid computing provides high performance like a supercomputer through sharing and using distributed heterogeneous computing resources. Grid computing processing time and cost vary widely, since a grid user or grid middleware can select from a variety of distributed heterogeneous resources. Therefore, grid computing absolutely needs a resource management method and model. In this paper, we propose two types of resource management model based on resource reliability. The first is the idle resource supplement model, whose key point is that it adds idle resources when existing resources cannot process jobs. The second is the validity time designation model, which considers grid users: it processes jobs during a validity time decided by the grid user. This paper evaluates system performance in terms of utilization, job-loss rate, and average turnaround time, and compares experimental results of our models with those of existing models such as a random model and a round-robin model. The experimental results demonstrate that the two models based on resource reliability measurement improve resource utilization and provide reliable job processing. We expect that our proposed models improve grid computing QoS.
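The validity time designation idea can be sketched as a toy single-resource simulation. The job sizes, capacity, and the simplification that all jobs are submitted at time zero are assumptions for illustration, not the paper's simulation setup:

```python
def validity_time_model(jobs, capacity, validity):
    """Toy sketch of a validity-time designation policy: each job needs
    jobs[i] units of work on one resource of the given capacity
    (work units per second); a job is dropped (job loss) if it cannot
    finish within the user-designated validity window. Returns the
    job-loss rate and the average turnaround time of completed jobs."""
    t = 0.0
    done, lost, turnarounds = 0, 0, []
    for work in jobs:
        runtime = work / capacity
        if t + runtime <= validity:   # job fits inside the window
            t += runtime
            done += 1
            turnarounds.append(t)     # all jobs submitted at time 0
        else:
            lost += 1
    loss_rate = lost / len(jobs)
    avg_turnaround = sum(turnarounds) / done if done else float('inf')
    return loss_rate, avg_turnaround

# four unit jobs, unit capacity, 2.5 s validity window
loss, avg_t = validity_time_model([1.0, 1.0, 1.0, 1.0], 1.0, 2.5)
```

With a 2.5 s window only the first two unit jobs complete, giving a job-loss rate of 0.5 and an average turnaround of 1.5 s, which shows how the validity time trades job losses against bounded turnaround.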
NASA Astrophysics Data System (ADS)
Hoover, Wm. G.; Hoover, Carol G.; Petravic, Janka
2008-10-01
Homogeneous shear flows (with constant strainrate dvx/dy ) are generated with the Doll’s and Sllod algorithms and compared to corresponding inhomogeneous boundary-driven flows. We use one-, two-, and three-dimensional smooth-particle weight functions for computing instantaneous spatial averages. The nonlinear normal-stress differences are small, but significant, in both two and three space dimensions. In homogeneous systems the sign and magnitude of the shearplane stress difference, Pxx-Pyy , depend on both the thermostat type and the chosen shearflow algorithm. The Doll’s and Sllod algorithms predict opposite signs for this normal-stress difference, with the Sllod approach definitely wrong, but somewhat closer to the (boundary-driven) truth. Neither of the homogeneous shear algorithms predicts the correct ordering of the kinetic temperatures: Txx>Tzz>Tyy .
NSDL National Science Digital Library
This is a task from the Illustrative Mathematics website that is one part of a complete illustration of the standard to which it is aligned. Each task has at least one solution and some commentary that addresses important aspects of the task and its potential use. Here are the first few lines of the commentary for this task: John makes DVDs of his friend’s shows. He has realized that, because of his fixed costs, his average cost per DVD depends on the number of DVDs he prod...
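The fixed-cost effect the task describes is easy to show numerically. The dollar figures below are made up for illustration; the task's own numbers are not given here:

```python
def average_cost(n, fixed=250.0, per_dvd=1.5):
    """Average cost per DVD: fixed setup costs spread over n DVDs
    plus a constant per-DVD cost (hypothetical figures)."""
    return (fixed + per_dvd * n) / n

print(average_cost(10))    # 26.5 -- fixed costs dominate for small runs
print(average_cost(1000))  # 1.75 -- average approaches the per-DVD cost
```

As n grows, the fixed cost is spread over more DVDs, so the average cost decreases toward the per-DVD cost, which is exactly the dependence the task asks students to explore.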
Real-Time Patient Survey Data During Routine Clinical Activities for Rapid-Cycle Quality Improvement
Jones, Robert E
2015-01-01
Background: Surveying patients is increasingly important for evaluating and improving health care delivery, but practical survey strategies during routine care activities have not been available. Objective: We examined the feasibility of conducting routine patient surveys in a primary care clinic using commercially available technology (Web-based survey creation, deployment on tablet computers, cloud-based management of survey data) to expedite and enhance several steps in data collection and management for rapid quality improvement cycles. Methods: We used a Web-based data management tool (survey creation, deployment on tablet computers, real-time data accumulation and display of survey results) to conduct four patient surveys during routine clinic sessions over a one-month period. Each survey consisted of three questions and focused on a specific patient care domain (dental care, waiting room experience, care access/continuity, Internet connectivity). Results: Of the 727 available patients during clinic survey days, 316 patients (43.4%) attempted the survey, and 293 (40.3%) completed it. For the four 3-question surveys, the overall average time per survey was 40.4 seconds, with a range of 5.4 to 20.3 seconds for individual questions. Yes/no questions took less time than multiple-choice questions (average 9.6 seconds versus 14.0 seconds). Average response time showed no clear pattern by order of questions or by proctor strategy, but monotonically increased with the number of words in the question (<20 words, 21-30 words, >30 words): 8.0, 11.8, and 16.8 seconds, respectively. Conclusions: This technology-enabled data management system helped capture patient opinions and accelerate turnaround of survey data, with minimal impact on a busy primary care clinic.
This new model of patient survey data management is feasible and sustainable in a busy office setting, supports and engages clinicians in the quality improvement process, and harmonizes with the vision of a learning health care system. PMID:25768807
Hierarchical Average Reward Reinforcement Learning
Massachusetts at Amherst, University of
autonomous guided vehicle (AGV) scheduling problem and a modified version of the well-known Taxi problem. The other two domains are larger real-world single-agent and multiagent AGV scheduling problems. We model these AGV scheduling tasks using both discrete-time and continuous-time models and compare the performance
Self-similarity of higher-order moving averages
NASA Astrophysics Data System (ADS)
Arianos, Sergio; Carbone, Anna; Türk, Christian
2011-10-01
In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by the simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window; thus trends at different time scales can be obtained on data sets of the same size. These polynomials could be interesting for applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
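The first-order detrending moving average method the paper generalizes can be sketched as follows: compute the variance of the series around its n-point moving average and read H off the scaling sigma_DMA(n) ~ n^H. The higher-order polynomial generalization itself is not implemented here, and the window list is an arbitrary choice:

```python
import numpy as np

def dma_variance(y, n):
    """Variance of the series around its trailing n-point (first-order)
    moving average, as in the detrending moving average method."""
    kernel = np.ones(n) / n
    ma = np.convolve(y, kernel, mode='valid')
    resid = y[n - 1:] - ma          # align each point with its trailing MA
    return np.mean(resid ** 2)

def hurst_dma(y, windows=(4, 8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent H from sigma_DMA(n) ~ n^H by a
    log-log fit over the chosen windows."""
    logs = [(np.log(n), 0.5 * np.log(dma_variance(y, n))) for n in windows]
    x, z = zip(*logs)
    H, _ = np.polyfit(x, z, 1)      # slope of the log-log fit
    return H

rng = np.random.default_rng(1)
walk = np.cumsum(rng.standard_normal(20000))   # Brownian series, H = 0.5
```

For an ordinary Brownian series the estimate should come out near H = 0.5, the known value.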
Measuring complexity through average symmetry
NASA Astrophysics Data System (ADS)
Alamino, Roberto C.
2015-07-01
This work introduces a complexity measure which addresses some conflicting issues between existing ones by using a new principle—measuring the average amount of symmetry broken by an object. It attributes low (although different) complexity to either deterministic or random homogeneous densities and higher complexity to the intermediate cases. This new measure is easily computable, breaks the coarse graining paradigm and can be straightforwardly generalized, including to continuous cases and general networks. By applying this measure to a series of objects, it is shown that it can be consistently used for both small scale structures with exact symmetry breaking and large scale patterns, for which, differently from similar measures, it consistently discriminates between repetitive patterns, random configurations and self-similar structures.
Topological quantization of ensemble averages
NASA Astrophysics Data System (ADS)
Prodan, Emil
2009-02-01
We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schrödinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states.
Disk-averaged synthetic spectra of Mars
NASA Technical Reports Server (NTRS)
Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.
Auto-exploratory average reward reinforcement learning
Ok, DoKyeong; Tadepalli, P. [Oregon State Univ., Corvallis, OR (United States)]
1996-12-31
We introduce a model-based average reward Reinforcement Learning method called H-learning and compare it with its discounted counterpart, Adaptive Real-Time Dynamic Programming, in a simulated robot scheduling task. We also introduce an extension to H-learning, which automatically explores the unexplored parts of the state space, while always choosing greedy actions with respect to the current value function. We show that this "Auto-exploratory H-learning" performs better than the original H-learning under previously studied exploration methods such as random, recency-based, or counter-based exploration.
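A minimal sketch of the average-reward idea behind this family of methods. The paper's actual H-learning is model-based; the tabular update, the toy MDP, and all constants below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

# Average-reward RL sketch (assumptions, not the paper's exact H-learning):
#   rho  <- rho + beta * (r - rho)                      (on greedy steps)
#   h[s] <- h[s] + alpha * (r - rho + h[s'] - h[s])     (bias/relative value)
rng = np.random.default_rng(0)

# Toy 2-state, 2-action MDP (invented): P[s, a] = next state, R[s, a] = reward.
P = np.array([[0, 1], [0, 1]])
R = np.array([[1.0, 0.0], [0.0, 2.0]])

h = np.zeros(2)            # bias (relative) values
rho = 0.0                  # running estimate of the average reward
alpha, beta = 0.1, 0.01
s = 0
for _ in range(100_000):
    greedy = int(np.argmax(R[s] + h[P[s]]))
    a = int(rng.integers(2)) if rng.random() < 0.1 else greedy
    s2, r = P[s, a], R[s, a]
    if a == greedy:                      # update rho only on greedy actions
        rho += beta * (r - rho)
    h[s] += alpha * (r - rho + h[s2] - h[s])
    s = s2
# The optimal policy loops on state 1 for reward 2, so rho approaches 2.
```

Note how the exploration bonus question the paper addresses arises here: greedy action selection alone can lock onto the reward-1 loop early on, which is why the authors add automatic exploration of unvisited states.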
SEA SURFACE TEMPERATURE MONTHLY AVERAGE AND ANOMALY CHARTS, NORTHEASTERN PACIFIC OCEAN, 1947
Part I--Sea surface temperature monthly average charts, northeastern Pacific Ocean. Part II--Sea
Monthly Average Temperature for Boston, MA
NSDL National Science Digital Library
The phenomenon is monthly average temperature data for Boston, MA from March 1872 until September 2000. In addition to monthly averages, the National Weather Service table also shows the yearly average temperature.
Empty and Elusive Averages in Performance Measurement.
ERIC Educational Resources Information Center
Bonetti, S. M.
1992-01-01
Two pitfalls are identified in the use of arithmetic averages for performance measurement in higher education. First, asymmetric average targeting rules involve a fallacy of composition; and, second, comparisons with adjusted averages involve a serious methodological error. (DB)
Delineating the Average Rate of Change in Longitudinal Models
ERIC Educational Resources Information Center
Kelley, Ken; Maxwell, Scott E.
2008-01-01
The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…
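A hedged numerical illustration of the distinction the article draws (data invented, not from the article): when change is nonlinear, the slope of a fitted straight line and the endpoint definition of the average rate of change, (y_T - y_0)/(t_T - t_0), need not agree.

```python
import numpy as np

# Invented longitudinal data with nonlinear (exponential) growth.
t = np.arange(5.0)        # measurement occasions 0..4
y = 2.0 ** t              # observed values: 1, 2, 4, 8, 16

arc = (y[-1] - y[0]) / (t[-1] - t[0])   # endpoint average rate of change: 3.75
slope = np.polyfit(t, y, 1)[0]          # straight-line (OLS) slope: 3.6
```

Only under strictly straight-line change do the two coincide, which is the special case in which interpreting the fitted slope as the average rate of change is safe.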
Markov Decision Evolutionary Games with Expected Average Fitness
E. Altman, INRIA, MAESTRO Group, BP 1228, F-84911 Avignon Cedex, France
Aim: To model and characterize evolutionary games with time average fitness. Mathematical methods: The main mathematical tool is occupation measures (expected
Cosmic clocks, cosmic variance and cosmic averages
David L. Wiltshire
2007-11-01
Cosmic acceleration is explained quantitatively, purely in general relativity, as an apparent effect due to quasilocal gravitational energy differences that arise in the decoupling of bound systems from the global expansion of the universe. "Dark energy" is recognised as a misidentification of those aspects of gravitational energy which by virtue of the equivalence principle cannot be localised, namely gradients in the energy associated with the expansion of space and spatial curvature variations in an inhomogeneous universe, as we observe. Gravitational energy differences between observers in bound systems, such as galaxies, and volume-averaged comoving locations within voids in freely expanding space can be so large that the time dilation between the two significantly affects the parameters of any effective homogeneous isotropic model one fits to the universe. A new approach to cosmological averaging is presented, which implicitly solves the Sandage-de Vaucouleurs paradox. When combined with a nonlinear scheme for cosmological evolution with back-reaction via the Buchert equations, a new observationally viable quantitative model of the universe is obtained. The expansion age is increased, allowing more time for structure formation. The baryon density fraction obtained from primordial nucleosynthesis bounds can be significantly larger, yet consistent with primordial lithium abundance measurements. The angular scale of the first Doppler peak in the CMB anisotropy spectrum fits the new model despite an average negative spatial curvature at late epochs, resolving the anomaly associated with ellipticity in the CMB anisotropies. A number of other testable consequences are discussed, with the potential to profoundly change the whole of theoretical and observational cosmology. [Abridged]
Obligatory and adaptive averaging in visual short-term memory.
Dubé, Chad; Sekuler, Robert
2015-04-01
Visual memory can draw upon averaged perceptual representations, a dependence that could be both adaptive and obligatory. In support of this idea, we review a wide range of evidence, including findings from our own lab. This evidence shows that time- and space-averaged memory representations influence detection and recognition responses, and do so without instruction to compute or report an average. Some of the work reviewed exploits fine-grained measures of retrieval from visual short-term memory to closely track the influence of stored averages on recall and recognition of briefly presented visual textures. Results show that reliance on perceptual averages is greatest when memory resources are taxed or when subjects are uncertain about the fidelity of their memory representation. We relate these findings to models of how summary statistics impact visual short-term memory, and discuss a neural signature for contexts in which perceptual averaging exerts maximal influence. PMID:26406353
On the Choice of Average Solar Zenith Angle
Cronin, Timothy W.
Idealized climate modeling studies often choose to neglect spatiotemporal variations in solar radiation, but doing so comes with an important decision about how to average solar radiation in space and time. Since both ...
Global atmospheric circulation statistics: Four year averages
NASA Technical Reports Server (NTRS)
Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.
1987-01-01
Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.
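Computationally, the zonally averaged fields described above reduce to a mean over longitude at each latitude and pressure level. A minimal sketch (grid dimensions beyond the report's 18 levels, and the temperature values, are invented):

```python
import numpy as np

# Zonal averaging: a mean over longitude at each latitude/pressure level.
# 18 levels as in the report; latitude/longitude counts and values invented.
rng = np.random.default_rng(0)
levels, lats, lons = 18, 46, 72
T = 250.0 + 30.0 * rng.random((levels, lats, lons))   # temperature, K

T_zonal = T.mean(axis=-1)            # (levels, lats): zonal-mean temperature
eddy = T - T_zonal[..., None]        # eddy departures from the zonal mean
```

The eddy fields average to zero around each latitude circle by construction, which is what makes the standing/transient eddy heat and momentum flux decomposition well defined.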
Average deployments versus missile and defender parameters
Canavan, G.H.
1991-03-01
This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with two-fold less effectiveness. The next 10% could double constellation sizes. 5 refs., 7 figs.
NASA Technical Reports Server (NTRS)
Rajagopal, Kadambi R.; DebChaudhury, Amitabha; Orient, George
2000-01-01
This report describes a probabilistic structural analysis performed to determine the probabilistic structural response under fluctuating random pressure loads for the Space Shuttle Main Engine (SSME) turnaround vane. It uses a newly developed frequency and distance dependent correlation model that has features to model the decay phenomena along the flow and across the flow with the capability to introduce a phase delay. The analytical results are compared using two computer codes, SAFER (Spectral Analysis of Finite Element Responses) and NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), and with experimentally observed strain gage data. The computer code NESSUS, with an interface to a subset of the Composite Load Spectra (CLS) code, is used for the probabilistic analysis. A fatigue code was used to calculate fatigue damage due to the random pressure excitation. The random variables modeled include engine system primitive variables that influence the operating conditions, convection velocity coefficient, stress concentration factor, structural damping, and thickness of the inner and outer vanes. The need for an appropriate correlation model in addition to the magnitude of the PSD is emphasized. The study demonstrates that correlation characteristics even under random pressure loads are capable of causing resonance-like effects for some modes. The study identifies the important variables that contribute to the structural alternate stress response and drive the fatigue damage for the new design. Since the alternate stress for the new redesign is less than the endurance limit for the material, the damage due to high-cycle fatigue is negligible.
Average Lorentz self-force from electric field lines
NASA Astrophysics Data System (ADS)
Aashish, Sandeep; Haque, Asrarul
2015-09-01
We generalize the derivation of the electromagnetic fields of a charged particle moving with a constant acceleration (Singal 2011 Am. J. Phys. 79 1036) to a variable acceleration (piecewise constant) over a small finite time interval, using Coulomb's law, relativistic transformations of electromagnetic fields, and Thomson's construction (Thomson 1904 Electricity and Matter (New York: Charles Scribner's Sons) ch 3). We derive the average Lorentz self-force for a charged particle in arbitrary non-relativistic motion by averaging the fields at the retarded time.
Averaging of nonlinearity-managed pulses.
Zharnitsky, Vadim; Pelinovsky, Dmitry
2005-09-01
We consider the nonlinear Schrödinger equation with nonlinearity management, which describes Bose-Einstein condensates under Feshbach resonance. Using averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons. PMID:16253000
Greninger, Alexander L; Naccache, Samia N; Federman, Scot; Yu, Guixia; Mbala, Placide; Bres, Vanessa; Stryke, Doug; Bouquet, Jerome; Somasekar, Sneha; Linnen, Jeffrey M; Dodd, Roger; Mulembakani, Prime; Schneider, Bradley S; Muyembe-Tamfum, Jean-Jacques; Stramer, Susan L; Chiu, Charles Y
2015-01-01
We report unbiased metagenomic detection of chikungunya virus (CHIKV), Ebola virus (EBOV), and hepatitis C virus (HCV) from four human blood samples by MinION nanopore sequencing coupled to a newly developed, web-based pipeline for real-time bioinformatics analysis on a computational server or laptop (MetaPORE). At titers ranging from 10^7 to 10^8 copies per milliliter, reads to EBOV from two patients with acute hemorrhagic fever and CHIKV from an asymptomatic blood donor were detected within 4 to 10 min of data acquisition, while lower-titer HCV (1 × 10^5 copies per milliliter) was detected within 40 min. Analysis of mapped nanopore reads alone, despite an average individual error rate of 24% (range 8-49%), permitted identification of the correct viral strain in all four isolates, and 90% of the genome of CHIKV was recovered with 97-99% accuracy. Using nanopore sequencing, metagenomic detection of viral pathogens directly from clinical samples was performed within an unprecedented <6 hr sample-to-answer turnaround time, and in a timeframe amenable to actionable clinical and public health diagnostics. PMID:26416663
Adaptive common average filtering for myocontrol applications.
Rehbaum, Hubertus; Farina, Dario
2015-02-01
The use of electromyography (EMG) for the control of upper-limb prostheses has received great interest in neurorehabilitation engineering for decades. Important advances have been made in the development of machine learning algorithms for myocontrol. This paper describes a novel adaptive filter for EMG preprocessing to be applied as a conditioning stage for optimal subsequent information extraction. The aim of this filter is to improve both the quality (signal-to-noise ratio) and the selectivity of the EMG recordings. The filter is based on the classic common average reference (CAR), often used in EEG processing. However, while CAR is stationary, the proposed filter, referred to as the adaptive common average reference (ACAR), is signal-dependent and its spatial transfer function is adapted over time. The ACAR filter is evaluated in this study for noise reduction and selectivity. Furthermore, it is shown that its application improves the performance of both pattern recognition and regression methods for myoelectric control. It is concluded that the proposed novel filter for EMG conditioning is a useful preprocessing tool in myocontrol applications. PMID:25388778
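A minimal sketch of the classic, stationary common average reference that ACAR generalizes: each channel minus the mean of all channels. The adaptive, signal-dependent weighting described in the paper is not reproduced here, and the signals below are invented.

```python
import numpy as np

# Classic (stationary) common average reference: subtract the across-channel
# mean from every channel. ACAR replaces this fixed mean with adaptive,
# signal-dependent weights (not shown).
def car(emg: np.ndarray) -> np.ndarray:
    """emg: (channels, samples) array -> CAR-filtered array."""
    return emg - emg.mean(axis=0, keepdims=True)

# Common-mode interference identical on all channels is removed exactly.
t = np.linspace(0, 1, 1000)
noise = 0.5 * np.sin(2 * np.pi * 50 * t)          # shared 50 Hz artifact
signals = np.vstack([np.sin(2 * np.pi * f * t) for f in (3.0, 7.0, 11.0)])
filtered = car(signals + noise)
```

Because the 50 Hz term is common to every channel, `car(signals + noise)` equals `car(signals)`: the shared artifact cancels, which is the noise-reduction property the adaptive version then trades off against selectivity.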
Average observational quantities in the timescape cosmology
Wiltshire, David L. [Department of Physics and Astronomy, University of Canterbury, Private Bag 4800, Christchurch 8140, New Zealand, and International Center for Relativistic Astrophysics Network (ICRANet), Piazzale della Repubblica 10, Pescara 65121 (Italy)]
2009-12-15
We examine the properties of a recently proposed observationally viable alternative to homogeneous cosmology with smooth dark energy, the timescape cosmology. In the timescape model cosmic acceleration is realized as an apparent effect related to the calibration of clocks and rods of observers in bound systems relative to volume-average observers in an inhomogeneous geometry in ordinary general relativity. The model is based on an exact solution to a Buchert average of the Einstein equations with backreaction. The present paper examines a number of observational tests which will enable the timescape model to be distinguished from homogeneous cosmologies with a cosmological constant or other smooth dark energy, in current and future generations of dark energy experiments. Predictions are presented for comoving distance measures; H(z); the equivalent of the dark energy equation of state, w(z); the Om(z) measure of Sahni, Shafieloo, and Starobinsky; the Alcock-Paczynski test; the baryon acoustic oscillation measure, D_V; the inhomogeneity test of Clarkson, Bassett, and Lu; and the time drift of cosmological redshifts. Where possible, the predictions are compared to recent independent studies of similar measures in homogeneous cosmologies with dark energy. Three separate tests with indications of results in possible tension with the ΛCDM model are found to be consistent with the expectations of the timescape cosmology.
NSDL National Science Digital Library
In this activity learners explore the connections of digital time displays with numeric and geometric properties. Students look for times that have bilateral or rotational symmetry, or have a certain digital sum, etc. Ideas for implementation, extension and support are included.
Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?
Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.
2013-06-17
Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
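A toy numerical illustration of the sampling point (numbers invented, not TCAP data): a quantity with roughly a 20% diurnal swing averages out over 24 h, while a single sparse sample can miss the daily mean substantially.

```python
import numpy as np

# Invented diurnal cycle of aerosol optical depth with a ~20% swing around
# a 0.15 baseline, sampled hourly over one day.
hours = np.arange(24)
aod = 0.15 * (1 + 0.2 * np.sin(2 * np.pi * (hours - 6) / 24))

daily_mean = aod.mean()                    # full 24-h average: 0.15 exactly
sparse = aod[14]                           # one early-afternoon sample
relative_error = abs(sparse - daily_mean) / daily_mean   # ~17% here
```

The full-day mean recovers the baseline because the sinusoidal swing cancels over a complete cycle; the single sample lands near the diurnal peak and overestimates the mean by roughly 17%, the kind of bias the paper quantifies for the computed DARF.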
Do diurnal aerosol changes affect daily average radiative forcing?
NASA Astrophysics Data System (ADS)
Kassianov, Evgueni; Barnard, James; Pekour, Mikhail; Berg, Larry K.; Michalsky, Joseph; Lantz, Kathy; Hodges, Gary
2013-06-01
Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
Code of Federal Regulations, 2010 CFR
2010-07-01
...ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE NONROAD COMPRESSION-IGNITION ENGINES Averaging, Banking, and Trading Provisions § 89.204 Averaging. (a) Requirements for...
RHIC BPM system average orbit calculations
Michnoff, R.; Cerniglia, P.; Degen, C.; Hulsart, R.; et al.
2009-05-04
RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this has provided an average orbit signal quality improvement, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and performance with beam.
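A hedged sketch of the two averaging schemes described (not the actual BPM firmware): a block average over a programmable number of turns versus a continuous running mean spanning many perturbation periods. RHIC's revolution frequency is roughly 78 kHz, so one 10 Hz period is on the order of 7800 turns; all numbers below are illustrative.

```python
import math

def block_average(positions, n_turns):
    """Mean position over the last n_turns samples (programmable-turn mode)."""
    window = positions[-n_turns:]
    return sum(window) / len(window)

def continuous_average(positions):
    """Running mean updated every turn, covering many perturbation periods."""
    avg = 0.0
    for n, x in enumerate(positions, start=1):
        avg += (x - avg) / n
    return avg

# A closed orbit at +1.2 mm with a 0.3 mm, 10 Hz-like oscillation on top
# (~7800 turns per period at RHIC's revolution frequency).
turns = [1.2 + 0.3 * math.sin(2 * math.pi * k / 7800) for k in range(78000)]

exact = block_average(turns, 7800)     # window = exactly one period: unbiased
biased = block_average(turns, 7000)    # slightly wrong period: residual ripple
long_run = continuous_average(turns)   # 10 full periods: robust to the ripple
```

The single-period block average is only unbiased when the window matches the perturbation period exactly; averaging over many periods removes that sensitivity, which is the motivation the paper gives for the continuous calculation.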
NASA Astrophysics Data System (ADS)
Amoroso, Richard L.
2013-09-01
The concept of time in the 'clockwork' Newtonian world was irrelevant, and has generally been ignored until recently by several generations of physicists since the implementation of quantum mechanics. We will set aside the utility of time as a property relating to physical calculations of events along a metric's line element, as an aspect of the transformation of a particle's motion/interaction in a coordinate system, or in relation to thermodynamics, etc.; that is, we will discard all the usual uses of time as a concept used to circularly define physical parameters in terms of other physical parameters, concentrating instead on time as an aspect of the fundamental cosmic topology of our virtual reality, especially as it inseparably relates to the nature and role of the observer in natural science.
Averaged dynamics of ultra-relativistic charged particle beams
Ricardo Gallego Torromé
2012-06-19
In this thesis, we consider the suitability of using the charged cold fluid model in the description of ultra-relativistic beams. The method that we have used is the following. Firstly, the necessary notions of kinetic theory and differential geometry of second order differential equations are explained. Then an averaging procedure is applied to a connection associated with the Lorentz force equation. The result of this averaging is an affine connection on the space-time manifold. The corresponding geodesic equation defines the averaged Lorentz force equation. We prove that for ultra-relativistic beams described by narrow distribution functions, the solutions of both equations are similar. This fact justifies the replacement of the Lorentz force equation by the simpler averaged Lorentz force equation. After this, for each of these models we associate the corresponding kinetic model, based on the Vlasov equation and the averaged Vlasov equation respectively. The averaged Vlasov equation is simpler than the original Vlasov equation. This fact allows us to prove that the differential operation defining the averaged charged cold fluid equation is controlled by the diameter of the distribution function, by powers of the energy of the beam and by the time of evolution t. We show that the Vlasov equation and the averaged Vlasov equation have similar solutions when the initial conditions are the same. Finally, as an application of the averaged Lorentz force equation we re-derive the beam dynamics formalism used in accelerator physics from the Jacobi equation of the averaged Lorentz force equation.
NASA Astrophysics Data System (ADS)
Mattmann, C. A.
2014-12-01
The JPL Airborne Snow Observatory (ASO) is an integrated LIDAR and Spectrometer measuring snow depth and rate of snow melt in the Sierra Nevada, specifically the Tuolumne River Basin, Sierra Nevada, California above the O'Shaughnessy Dam of the Hetch Hetchy reservoir, and the Uncompahgre Basin, Colorado, amongst other sites. The ASO data was delivered to water resource managers from the California Department of Water Resources in under 24 hours from the time that the Twin Otter aircraft landed in Mammoth Lakes, CA to the time disks were plugged into the ASO Mobile Compute System (MCS) deployed at the Sierra Nevada Aquatic Research Laboratory (SNARL) near the airport. ASO performed weekly flights and each flight took between 500 GB to 1 Terabyte of raw data, which was then processed from level 0 data products all the way to full level 4 maps of Snow Water Equivalent, albedo mosaics, and snow depth from LIDAR. These data were produced by Interactive Data Language (IDL) algorithms which were then unobtrusively and automatically integrated into an Apache OODT and Apache Tika based Big Data processing system. Data movement was both electronic and physical, including novel uses of LaCie 1 and 2 TeraByte (TB) data bricks and deployment in rugged terrain. The MCS was controlled remotely from the Jet Propulsion Laboratory, California Institute of Technology (JPL) in Pasadena, California on behalf of the National Aeronautics and Space Administration (NASA). Communication was aided through the use of novel Internet Relay Chat (IRC) command and control mechanisms and through the use of the Notifico open source communication tools. This talk will describe the high powered, and light-weight Big Data processing system that we developed for ASO and its implications more broadly for airborne missions at NASA and throughout the government.
The lessons learned from ASO show the potential to have a large impact in the development of Big Data processing systems in the years to come.
Averaging in LRS class II spacetimes
NASA Astrophysics Data System (ADS)
Kašpar, Petr; Svítek, Otakar
2015-02-01
We generalize Buchert's averaged equations (Gen Relativ Gravit 32:105, 2000; Gen Relativ Gravit 33:1381, 2001) to the LRS class II dust model in the sense that all Einstein equations are averaged, not only the trace part. We derive the relevant averaged equations and investigate backreaction on the expansion and shear scalars in an approximate LTB model. Finally, we propose a way to close the system of averaged equations.
Little's Law: L = time average number in line or system
Whitt, Ward
Example: a certain cubic meter of air in central Minnesota, with automated instruments continuously monitoring the boundary of the "invisible cube"; crossings occur at an overall rate of 0.061 per hour, either going in or going out.
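The excerpt is from notes on Little's Law, L = λW: the time-average number in a system equals the arrival rate times the average time each item spends inside. A quick check using the quoted 0.061/hour rate and an invented residence time:

```python
# Little's Law: L = lambda * W. The 0.061/hour crossing rate is from the
# excerpt; the 8-hour mean residence time is an invented example value.
arrival_rate = 0.061           # arrivals into the system, per hour
avg_time_inside = 8.0          # hypothetical mean residence time, hours

L = arrival_rate * avg_time_inside          # time-average number inside

# Deterministic sanity check: with one arrival every 1/0.061 h, each staying
# 8 h, the system holds one item for 8 h out of each ~16.39 h cycle.
cycle = 1.0 / arrival_rate
occupancy = avg_time_inside / cycle         # fraction of time occupied
```

The law needs no distributional assumptions, which is why the notes can apply it to something as odd as molecules crossing an imaginary cube of air.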
Angular average of time-harmonic transport solutions
Guillaume Bal
Applications include optical tomography, a medical imaging technique with frequency-modulated sources in human tissues [1]. Department of Applied Physics and Applied Mathematics, Columbia University; Laboratoire de Physique Th
Effect of wind averaging time on wind erosivity estimation
Technology Transfer Automated Retrieval System (TEKTRAN)
The Wind Erosion Prediction System (WEPS) and Revised Wind Erosion Equation (RWEQ) are widely used for estimating the wind-induced soil erosion at a field scale. Wind is the principal erosion driver in the two models. The wind erosivity, which describes the capacity of wind to cause soil erosion is ...
Markov Decision Evolutionary Games with Time Average Expected Fitness Criterion
Yezekael Hayel (Yezekael.Hayel@univ-avignon.fr), Hamidou Tembine, Rachid (LIA/CERI, University of Avignon, 339 chemin des Meinajaries, Agroparc BP 1228, F-84911 Avignon Cedex, France)
A Theoretical Analysis of the Average Time-Complexity of
van Dongen, Marc
Arc-consistency algorithms carry out support-checks to find out about the properties of CSPs. For a value a in D(x) (respectively b in D(y)) they will never carry out the check (a, b) in C(x, y) if a support for a (b) is already known. They use arc-heuristics to select the constraint that will be used for the next support-check. They use
Average-Case Complexity Theory and Polynomial-Time Reductions
Krovi, Venkat
Predictabilty of time averages: The influence of the boundary forcing
NASA Technical Reports Server (NTRS)
Shukla, J.
1982-01-01
The physical mechanisms through which changes in the boundary forcings of SST, soil moisture, albedo, sea ice, and snow influence the atmospheric circulation are discussed. Results of numerical experiments conducted with the GLAS climate model to determine the sensitivity of the model atmosphere to changes in boundary conditions of SST, soil moisture, and albedo over limited regions are discussed. It is found that changes in SST and soil moisture in the tropics produce large changes in the atmospheric circulation and rainfall over the tropics as well as over mid-latitudes.
Exploiting Bandwidth to Reduce Average Memory Access Time in Scalable
Bianchini, Ricardo
Supervised by Professor Thomas J. LeBlanc, Department of Computer Science, The College of Arts and Sciences.
Averaging and Adding in Children's Worth Judgements
ERIC Educational Resources Information Center
Schlottmann, Anne; Harman, Rachel M.; Paine, Julie
2012-01-01
Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…
Apkarian, A. Vania
into standard space and averaged across subjects. Comparisons between the groups were done using an unpaired t-test. Groups after treatment show greater connectivity between DLPFC and affective regions (DMPFC, insula) and cognitive regions; anxiety is linked with brain regions identified in the CBP persisting group (component 1
NASA Technical Reports Server (NTRS)
Moore, Franklin K; Ostrach, Simon
1957-01-01
The time-average characteristics of boundary layers over a flat plate in nearly quasi-steady flow are determined. The plate may be either insulated or isothermal. The time averages are found without specifying the plate velocity explicitly except that it is positive and has an average value.
Jamal, Wafaa; Saleem, Rola; Rotimi, Vincent O
2013-08-01
The use of matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) for identification of microorganisms directly from blood culture offers an exciting new dimension to microbiologists. We evaluated the performance of the Bruker SepsiTyper kit™ (STK) for direct identification of bacteria from positive blood cultures, in parallel with conventional methods. Nonrepetitive positive blood cultures from 160 consecutive patients were prospectively evaluated by both methods. Of 160 positive blood cultures, the STK identified 114 (75.6%) isolates and the routine conventional method 150 (93%). Thirty-six isolates were misidentified or not identified by the kit. Of these, 5 had a score of >2.000 and 31 had an unreliable low score of <1.7. Four of 8 yeasts were identified correctly. The average turnaround time using the STK was 35 min, including extraction steps, versus 30:12 to 36:12 h with the routine method. The STK holds promise for timely management of bacteremic patients. PMID:23726652
High average power active-mirror amplifier
D. C. Brown; K. K. Lee; R. Bowman; J. Menders; J. Kuper
1986-01-01
Operation of the first high average power Nd:glass active-mirror amplifier, a scalable laser device that may be used to configure solid-state laser systems with high average power output into the kilowatt regime, is reported. An extractable average power of over 120 W was achieved at the device laser material fracture limit, and at a repetition rate of 5 Hz.
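As a quick sanity check on the reported figures (the per-pulse energy below is a derived number, not one stated in the abstract):

```python
# Average power = pulse energy * repetition rate, so 120 W extracted at a
# 5 Hz repetition rate implies roughly 24 J per pulse (derived, not stated).
avg_power_w = 120.0
rep_rate_hz = 5.0
energy_per_pulse_j = avg_power_w / rep_rate_hz
```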
High average power active-mirror amplifier.
Brown, D C; Bowman, R; Kuper, J; Lee, K K; Menders, J
1986-03-01
We report operation of the first high average power Nd:glass active-mirror amplifier, a scalable laser device that may be used to configure solid-state laser systems with high average power output into the kilowatt regime. An extractable average power of over 120 W was achieved at the device laser material fracture limit and at a repetition rate of 5 Hz. PMID:18231222
High average power active-mirror amplifier
Brown, D.C.; Bowman, R.; Kuper, J.; Lee, K.K.; Menders, J.
1986-03-01
We report operation of the first high average power Nd:glass active-mirror amplifier, a scalable laser device that may be used to configure solid-state laser systems with high average power output into the kilowatt regime. An extractable average power of over 120 W was achieved at the device laser material fracture limit and at a repetition rate of 5 Hz.
Moving average process underlying the holographic-optical-tweezers experiments
NASA Astrophysics Data System (ADS)
Ślęzak, Jakub; Drobczyński, Sławomir; Weron, Karina; Masajada, Jan
2013-12-01
We study statistical properties of recordings that contain the time-dependent positions of a bead trapped in optical tweezers. Analysis of such time series indicates that the commonly accepted model, the autoregressive process of first order, is not sufficient to fit the data. We show the presence of a first-order moving average part in the dynamical model of the system and explain its origin as the influence of the high-frequency CCD camera on the measurements. The proposed autoregressive moving average model reflects all statistical features of the high-frequency recording data.
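The mechanism described in this abstract can be illustrated with a small deterministic calculation (my own sketch, not the authors' code; the two-frame camera average is an assumed simplification): if the bead position follows an AR(1) process and the camera reports the average of two consecutive positions, the recorded series becomes ARMA(1,1), with an inflated lag-1 autocorrelation but unchanged decay ratio at higher lags.

```python
# Theoretical autocorrelation of a camera-averaged AR(1) process.
# If x_t is AR(1) with coefficient phi, and the camera records
# y_t = (x_t + x_{t-1}) / 2, then y_t is ARMA(1,1): rho(1) rises
# from phi to (1 + phi)/2, while rho(k+1)/rho(k) = phi for k >= 1.

def ar1_autocov(phi, k):
    """Autocovariance of AR(1) at lag k (unit variance scaling)."""
    return phi ** abs(k)

def averaged_autocov(phi, k):
    """Autocovariance of y_t = (x_t + x_{t-1})/2 at lag k."""
    g = ar1_autocov
    return (2 * g(phi, k) + g(phi, k - 1) + g(phi, k + 1)) / 4.0

def averaged_acf(phi, k):
    """Autocorrelation of the camera-averaged series at lag k."""
    return averaged_autocov(phi, k) / averaged_autocov(phi, 0)

phi = 0.8
rho1 = averaged_acf(phi, 1)                       # (1 + phi)/2 = 0.9, not 0.8
ratio = averaged_acf(phi, 2) / averaged_acf(phi, 1)  # back to phi = 0.8
```

The lag-1 inflation with a geometric tail at higher lags is exactly the statistical signature of a moving-average part superposed on the autoregressive dynamics.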
Turnaround Project Signs Six States
ERIC Educational Resources Information Center
Maxwell, Lesli A.
2010-01-01
Dozens of schools are slated for aggressive interventions over the next three years under a new, multistate effort that aims to clear hurdles that have hindered previous attempts to improve underperforming schools. Education officials in Colorado, Delaware, Illinois, Louisiana, Massachusetts, and New York have agreed to partner with Mass Insight…
Turnaround Team Racing Summer's Clock
ERIC Educational Resources Information Center
Maxwell, Lesli A.
2010-01-01
A few weeks before a new team of teachers was to report to the former Shawnee High School in Louisville, Kentucky, Principal Keith Look discovered the master schedule for the 2010-11 school year to be in total disarray. More than 100 Shawnee students--all of them juniors and seniors--had been enrolled in classes with no connection to the credits…
Rural Turnaround: Challenges and Opportunities
ERIC Educational Resources Information Center
WestEd, 2014
2014-01-01
Amargosa Valley School in Nevada could be considered representative of almost any preK-8 school that serves a large proportion of socioeconomically disadvantaged students and is focused on turning around persistently low achievement. Like other schools supported by federal School Improvement Grants, Amargosa is beginning to implement a reform plan…
Short-Term Auditory Memory of Above-Average and Below-Average Grade Three Readers.
ERIC Educational Resources Information Center
Caruk, Joan Marie
To determine if performance on short term auditory memory tasks is influenced by reading ability or sex differences, 62 third grade reading students (16 above average boys, 16 above average girls, 16 below average boys, and 14 below average girls) were administered four memory tests--memory for consonant names, memory for words, memory for…
Average Transmission Probability of a Random Stack
ERIC Educational Resources Information Center
Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg
2010-01-01
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
Determinants of College Grade Point Averages
ERIC Educational Resources Information Center
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by…
Annealing Between Distributions by Averaging Moments
Toronto, University of
Annealing between distributions by averaging moments. Roger Grosse, Dept. of Computer Science (joint work with Chris J. Maddison and Ruslan Salakhutdinov).
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
...divided by the quantity of aluminum produced during the period...comprising the averaging group. (2) To determine...total emissions by total aluminum production. (3...total emissions by total aluminum production. (c...making up each averaging group shall not exceed...
Scalar averaging in Szekeres dust models
NASA Astrophysics Data System (ADS)
Sussman, Roberto A.
2013-07-01
We consider a formalism of weighed proper volume scalar averages (the "q-average") for the study of quasi-spherical Szekeres models. We show that the q-average of the main fluid flow covariant scalars are spherically symmetric and satisfy FLRW evolution laws, so that fluctuations and perturbations with respect to these averages provide a full description of the deviation of the models from homogeneity and spherical symmetry. The main proper tensors of the models are given in terms of these fluctuations, with the averages of scalar invariant contractions expressed as second order statistical moments of the density and Hubble scalar expansion. We discuss a possible application of this formalism in connection to a gravitational entropy functional in which entropy production is directly related to a negative statistical correlation between density and velocity fluctuations.
Flexibility of spatial averaging in visual perception
Lombrozo, Tania; Judson, Jeff; MacLeod, Donald I.A
2005-01-01
The classical receptive field (RF) concept—the idea that a visual neuron responds to fixed parts and properties of a stimulus—has been challenged by a series of recent physiological results. Here, we extend these findings to human vision, demonstrating that the extent of spatial averaging in contrast perception is also flexible, depending strongly on stimulus contrast and uniformity. At low contrast, spatial averaging is greatest (about 11 min of arc) within uniform regions such as edges, as expected if the relevant neurons have orientation-selective RFs. At high contrast, spatial averaging is minimal. These results can be understood if the visual system is balancing a trade-off between noise reduction, which favours large areas of averaging, and detail preservation, which favours minimal averaging. Two distinct populations of neurons with hard-wired RFs could account for our results, as could the more intriguing possibility of dynamic, contrast-dependent RFs. PMID:15870034
INVERSIONS FOR AVERAGE SUPERGRANULAR FLOWS USING FINITE-FREQUENCY KERNELS
Svanda, Michal, E-mail: michal@astronomie.cz [Astronomical Institute, Academy of Sciences of the Czech Republic (v.v.i.), Fricova 298, CZ-25165 Ondrejov (Czech Republic)
2012-11-10
I analyze the maps recording the travel-time shifts caused by averaged plasma anomalies under an 'average supergranule', constructed by means of statistical averaging over 5582 individual supergranules with large divergence signals detected in two months of Helioseismic and Magnetic Imager Dopplergrams. By utilizing a three-dimensional validated time-distance inversion code, I measure a peak vertical velocity of 117 ± 2 m s⁻¹ at depths around 1.2 Mm in the center of the supergranule and a root-mean-square vertical velocity of 21 m s⁻¹ over the area of the supergranule. A discrepancy between this measurement and the measured surface vertical velocity (a few m s⁻¹) can be explained by the existence of the large-amplitude vertical flow under the surface of supergranules with large divergence signals, recently suggested by Duvall and Hanasoge.
Grade Point Average and Changes in (Great) Grade Expectations.
ERIC Educational Resources Information Center
Wendorf, Craig A.
2002-01-01
Examines student grade expectations throughout a semester in which students offered their expectations three times during the course: (1) within the first week; (2) midway through the semester; and (3) the week before the final examination. Finds that their expectations decreased stating that their cumulative grade point average was related to the…
Using Hierarchical Decomposition to Speed Up Average Consensus
Johansson, Karl Henrik
Using Hierarchical Decomposition to Speed Up Average Consensus. Michael Epstein, Kevin Lynch, Karl H. Johansson. Exploiting the increased convergence speed of the smaller subgraphs, the authors show how this scheme can achieve faster overall convergence; Yang et al. (2006) previously attempted to speed up the time to convergence.
DEFORMOTION Deforming Motion, Shape Average and the Joint
Soatto, Stefano
A "shape average" is defined in order to track moving and deforming objects through time: the jellyfish of Fig. 1, for example, moves in an overall direction while "locally deforming," raising the question (Figs. 5, 6) of how to describe the "motion" of such an object.
Anomalous transport and observable average in the standard map
Lydia Bouchara; Ouerdia Ourrad; Sandro Vaienti; Xavier Leoncini
2015-09-02
The distribution of finite-time observable averages and transport in low-dimensional Hamiltonian systems is studied. Finite-time observable average distributions are computed, from which an exponent $\alpha$, characterizing how the maximum of the distributions scales with time, is extracted. To link this exponent to transport properties, the characteristic exponent $\mu(q)$ of the time evolution of the different moments of order $q$ related to transport is computed. The standard map is used as a testbed for our study. The stochasticity parameter $K$ is chosen so that phase space is either mixed, with a chaotic sea and islands of stability, or covered by a chaotic sea alone. Our observations lead to a proposed law relating the slope at $q=0$ of the function $\mu(q)$ to the exponent $\alpha$.
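The testbed named in this abstract, the Chirikov standard map, together with a finite-time observable average along one orbit, can be sketched as follows (an illustrative toy, not the authors' code; the observable and parameter values are my own choices):

```python
import math

# Chirikov standard map on the torus:
#   p_{n+1}     = p_n + K * sin(theta_n)   (mod 2*pi)
#   theta_{n+1} = theta_n + p_{n+1}        (mod 2*pi)

def standard_map_orbit(theta0, p0, K, n_steps):
    """Return the orbit as a list of (theta, p) pairs."""
    theta, p = theta0, p0
    orbit = []
    for _ in range(n_steps):
        p = (p + K * math.sin(theta)) % (2 * math.pi)
        theta = (theta + p) % (2 * math.pi)
        orbit.append((theta, p))
    return orbit

def finite_time_average(observable, orbit):
    """Average an observable f(theta, p) over a finite orbit segment."""
    return sum(observable(th, p) for th, p in orbit) / len(orbit)

# Finite-time average of cos(theta) over one orbit at large K
# (K = 6.0 puts the map well into the chaotic-sea regime).
avg = finite_time_average(lambda th, p: math.cos(th),
                          standard_map_orbit(1.0, 0.5, K=6.0, n_steps=10_000))
```

Repeating this over many initial conditions yields the distribution of finite-time averages whose scaling the paper characterizes with the exponent $\alpha$.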
Lagged average predictions in a predictability experiment
NASA Technical Reports Server (NTRS)
Roads, John O.
1988-01-01
Lagged average predictions are examined here within the context of an idealized predictability experiment. Lagged predictions contribute to making better forecasts than the forecasts obtained from using only the latest initial state. Analytic models suggest that lagged predictions contribute the greatest amount when the error growth rates are small. Little dependence upon the magnitude of the initial error is found if the growth rates remain constant. It is also shown how lagged average forecasts can be used to predict the error. Discriminating forecasts made only when the error is predicted to be small are shown to have much better than average skill.
Cosmic inhomogeneities and averaged cosmological dynamics.
Paranjape, Aseem; Singh, T P
2008-10-31
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics. PMID:18999811
GROUP ACTION INDUCED AVERAGING FOR HARDI PROCESSING
Çetingül, H. Ertan; Afsari, Bijan; Wright, Margaret J.; Thompson, Paul M.; Vidal, Rene
2012-01-01
We consider the problem of processing high angular resolution diffusion images described by orientation distribution functions (ODFs). Prior work showed that several processing operations, e.g., averaging, interpolation and filtering, can be reduced to averaging in the space of ODFs. However, this approach leads to anatomically erroneous results when the ODFs to be processed have very different orientations. To address this issue, we propose a group action induced distance for averaging ODFs, which leads to a novel processing framework on the spaces of orientation (the space of 3D rotations) and shape (the space of ODFs with the same orientation). Experiments demonstrate that our framework produces anatomically meaningful results. PMID:22903055
Spacetime Average Density (SAD) Cosmological Measures
Don N. Page
2014-10-22
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2012 CFR
2012-07-01
...Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 ...certification averaging program. Include only motorcycles certified under this subpart and intended...for which you manufacture or import motorcycles. (d) Calculate your...
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2014 CFR
2014-07-01
...Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 ...certification averaging program. Include only motorcycles certified under this subpart and intended...for which you manufacture or import motorcycles. (d) Calculate your...
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2013 CFR
2013-07-01
...Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 ...certification averaging program. Include only motorcycles certified under this subpart and intended...for which you manufacture or import motorcycles. (d) Calculate your...
Thermal ghost imaging with averaged speckle patterns
Shapiro, Jeffrey H.
We present theoretical and experimental results showing that a thermal ghost imaging system can produce images of high quality even when it uses detectors so slow that they respond only to intensity-averaged (that is, ...
Convergence speed in distributed consensus and averaging
Olshevsky, Alexander
We study the convergence speed of distributed iterative algorithms for the consensus and averaging problems, with emphasis on the latter. We first consider the case of a fixed communication topology. We show that a simple ...
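The averaging iteration studied in this thesis can be sketched in toy form (my own illustration, not Olshevsky's algorithm; the ring topology and lazy Metropolis-style weights are assumed choices): each node repeatedly replaces its value with a convex combination of its own and its neighbors' values, and with symmetric doubly stochastic weights the values converge to the average of the initial values.

```python
# Distributed average consensus on a fixed 5-node ring.
# Weights (0.25, 0.5, 0.25) form a symmetric, doubly stochastic
# matrix, so the node sum is conserved and every node converges
# to the global average; the spectral gap sets the convergence speed.

def consensus_step(x, w_self=0.5, w_nbr=0.25):
    """One synchronous update: mix with left and right ring neighbors."""
    n = len(x)
    return [w_self * x[i] + w_nbr * x[(i - 1) % n] + w_nbr * x[(i + 1) % n]
            for i in range(n)]

def run_consensus(x, iters=200):
    for _ in range(iters):
        x = consensus_step(x)
    return x

x0 = [1.0, 5.0, 3.0, 9.0, 2.0]
final = run_consensus(x0)
target = sum(x0) / len(x0)  # the exact average, 4.0
```

On a ring the second-largest eigenvalue of the weight matrix approaches 1 as the graph grows, which is precisely why convergence speed as a function of topology is the central question of the thesis.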
Spacetime Average Density (SAD) cosmological measures
Page, Don N.
2014-11-01
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2014 CFR
2014-07-01
...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2012 CFR
2012-07-01
...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
...76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General provisions. In lieu of...
Averaging Sampled Sensor Outputs To Detect Failures
NASA Technical Reports Server (NTRS)
Panossian, Hagop V.
1990-01-01
Fluctuating signals smoothed by taking consecutive averages. Sampling-and-averaging technique processes noisy or otherwise erratic signals from number of sensors to obtain indications of failures in complicated system containing sensors. Used under both transient and steady-state conditions. Useful in monitoring automotive engines, chemical-processing plants, powerplants, and other systems in which outputs of sensors contain noise or other fluctuations in measured quantities.
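The sampling-and-averaging idea summarized above can be sketched as follows (a minimal illustration, not the NASA implementation; the window size, tolerance band, and sample values are all assumed for the example): smooth each sensor's samples with consecutive trailing averages, then flag a failure when the smoothed value drifts outside a tolerance band around the expected reading.

```python
# Smooth noisy sensor samples with consecutive (trailing) averages,
# then flag a failure if any smoothed value leaves the tolerance band.

def smooth(samples, window=4):
    """Trailing moving average over up to `window` consecutive samples."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def detect_failure(samples, expected, tol, window=4):
    """True if the smoothed signal ever drifts more than tol from expected."""
    return any(abs(v - expected) > tol for v in smooth(samples, window))

healthy = [10.1, 9.8, 10.2, 9.9, 10.0, 10.1, 9.9, 10.0]   # noise only
failing = [10.1, 9.9, 10.2, 12.5, 12.8, 13.1, 12.9, 13.0] # genuine drift
```

Averaging before thresholding is what keeps momentary noise spikes from raising false alarms while a sustained drift still trips the detector.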
Heuristic approach to capillary pressures averaging
Coca, B.P.
1980-10-01
Several methods are available to average capillary pressure curves. Among these are the J-curve method and regression equations of the wetting-fluid saturation in porosity and permeability (capillary pressure held constant). While the regression equations seem completely empirical, the J-curve method seems theoretically sound, since its expression is based on a relation between the average capillary radius and the permeability-porosity ratio. An analysis is given of each of these methods.
Applications of high average power nonlinear optics
Velsko, S.P.; Krupke, W.F.
1996-02-05
Nonlinear optical frequency convertors (harmonic generators and optical parametric oscillators) are reviewed with an emphasis on high average power performance and limitations. NLO materials issues and NLO device designs are discussed in reference to several emerging scientific, military, and industrial commercial applications requiring ~100 watt average power levels in the visible and infrared spectral regions. Research efforts required to enable practical ~100 watt class NLO-based laser systems are identified.
Method of averaging in Clifford algebras
D. S. Shirokov
2015-06-19
In this paper we consider different operators acting on Clifford algebras. We consider the Reynolds operator of Salingaros' vee group; this operator "averages" an action of Salingaros' vee group on the Clifford algebra. We also consider the conjugate action on the Clifford algebra, and present a relation between these operators and projection operators onto fixed subspaces of Clifford algebras. Using the method of averaging, we present solutions of a system of commutator equations.
Monthly average polar sea-ice concentration
Schweitzer, Peter N.
1995-01-01
The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.
Average diurnal variation of summer lightning over the Florida peninsula
NASA Technical Reports Server (NTRS)
Maier, L. M.; Krider, E. P.; Maier, M. W.
1984-01-01
Data derived from a large network of electric field mills are used to determine the average diurnal variation of lightning in a Florida seacoast environment. The variation at the NASA Kennedy Space Center and the Cape Canaveral Air Force Station area is compared with standard weather observations of thunder, and the variation of all discharges in this area is compared with the statistics of cloud-to-ground flashes over most of the South Florida peninsula and offshore waters. The results show average diurnal variations that are consistent with statistics of thunder start times and the times of maximum thunder frequency, but that the actual lightning tends to stop one to two hours before the recorded thunder. The variation is also consistent with previous determinations of the times of maximum rainfall and maximum rainfall rate.
Evolution of the average avalanche shape with the universality class.
Laurson, Lasse; Illa, Xavier; Santucci, Stéphane; Tore Tallakstad, Ken; Måløy, Knut Jørgen; Alava, Mikko J
2013-01-01
A multitude of systems ranging from the Barkhausen effect in ferromagnetic materials to plastic deformation and earthquakes respond to slow external driving by exhibiting intermittent, scale-free avalanche dynamics or crackling noise. The avalanches are power-law distributed in size, and have a typical average shape: these are the two most important signatures of avalanching systems. Here we show how the average avalanche shape evolves with the universality class of the avalanche dynamics by employing a combination of scaling theory, extensive numerical simulations and data from crack propagation experiments. It follows a simple scaling form parameterized by two numbers, the scaling exponent relating the average avalanche size to its duration and a parameter characterizing the temporal asymmetry of the avalanches. The latter reflects a broken time-reversal symmetry in the avalanche dynamics, emerging from the local nature of the interaction kernel mediating the avalanche dynamics. PMID:24352571
Average Cosmological Invariant Parameters of Cosmic Gamma Ray Bursts
NASA Technical Reports Server (NTRS)
Mitrofanov, Igor G.; Anfimov, Dimitrij S.; Litvak, Maxim L.; Briggs, Michael S.; Paciesas, W. S.; Pendleton, Geoffrey N.; Preece, Robert D.; Meegan, Charles
1998-01-01
Average cosmological invariant parameters (ACIPs) are calculated for six groups of BATSE cosmic gamma-ray bursts selected by their peak fluxes on the 1.024s timescale. The ACIPs represent the average temporal and spectral properties of these events equally in the observer frame of reference and in the co-moving frames of outbursting emitters. The parameters are determined separately for rise fronts and for back slopes of bursts, defined as the time profiles before and after the main peaks, respectively. The ACIPs for the rise fronts are found to be different for different intensity groups, while the ACIPs for the back slopes show no significant dependence on intensity. We conclude that emitters of bursts manifest standard average properties only during the back slopes of bursts.
Average Cosmological Invariant Parameters of Cosmic Gamma-Ray Bursts
NASA Technical Reports Server (NTRS)
Mitrofanov, Igor G.; Anfimov, Dmitrij S.; Litvak, Maxim L.; Briggs, Michael S.; Paciesas, William S.; Pendleton, Geoffrey N.; Preece, Robert D.; Meegan, Charles A.
1999-01-01
Average cosmological invariant parameters (ACIPs) are calculated for six groups of BATSE cosmic gamma-ray bursts selected by their peak fluxes on the 1.024 S timescale. The ACIPs represent the average temporal and spectral properties of these events equally in the observer frame of reference and in the comoving frames of outbursting emitters. The parameters are determined separately for rise fronts and for back slopes of bursts, defined as the time profiles before and after the main peaks, respectively. The ACIPs for the rise fronts are found to be different for different intensity groups, while the ACIPs for the back slopes show no significant dependence on intensity. We conclude that emitters of bursts manifest standard average properties only during the back slopes of bursts.
A Spectral Estimate of Average Slip in Earthquakes
NASA Astrophysics Data System (ADS)
Boatwright, J.; Hanks, T. C.
2014-12-01
We demonstrate that the high-frequency acceleration spectral level a₀ of an ω-square source spectrum is directly proportional to the average slip of the earthquake Δū divided by the travel time to the station r/β and multiplied by the radiation pattern Fₛ: a₀ = 1.37 Fₛ (β/r) Δū. This simple relation is robust but depends implicitly on the assumed relation between the corner frequency and source radius, which we take from the Brune (1970, JGR) model. We use this relation to estimate average slip by fitting spectral ratios with smaller earthquakes as empirical Green's functions. For a pair of Mw = 1.8 and 1.2 earthquakes in Parkfield, we fit the spectral ratios published by Nadeau et al. (1994, BSSA) to obtain 0.39 and 0.10 cm. For the Mw = 3.9 earthquake that occurred on Oct 29, 2012, at the Pinnacles, we fit spectral ratios formed with respect to an Md = 2.4 aftershock to obtain 4.4 cm. Using the Sato and Hirasawa (1973, JPE) model instead of the Brune model increases the estimates of average slip by 75%. These estimates of average slip are factors of 5-40 (or 3-23) times less than the average slips of 3.89 cm and 23.3 cm estimated by Nadeau and Johnson (1998, BSSA) from the slip rates, average seismic moments, and recurrence intervals for the two sequences to which they associate these earthquakes. The most reasonable explanation for this discrepancy is that the stress release and rupture processes of these earthquakes are strongly heterogeneous. However, the fits to the spectral ratios do not indicate that the spectral shapes are distorted in the first two octaves above the corner frequency.
Volume averaging in the quasispherical Szekeres model
NASA Astrophysics Data System (ADS)
Bolejko, Krzysztof
2009-07-01
This paper considers volume averaging in the quasispherical Szekeres model. Volume averaging became of considerable interest after it was shown that the volume acceleration calculated within the averaging framework can be positive even when the local expansion rate decelerates. This issue was intensively studied within spherically symmetric models. However, since our Universe is not spherically symmetric, a similar analysis is needed in non-symmetrical models. This paper presents the averaging analysis within the quasispherical Szekeres model, which is a non-symmetrical generalisation of the spherically symmetric Lemaître-Tolman family of models. In the quasispherical Szekeres model the distribution of mass over a surface of constant t and r has the form of a mass dipole superposed on a monopole. This paper shows that when calculating the volume acceleration, ä, within the Szekeres model, the dipole does not contribute to the final result; hence ä depends only on the monopole configuration. Thus, volume averaging within the Szekeres model leads to literally the same solutions as those obtained within the Lemaître-Tolman model.
Global average ozone change from November 1978 to May 1990
NASA Technical Reports Server (NTRS)
Herman, J. R.; Mcpeters, R.; Stolarki, R.; Larko, D.; Hudson, R.
1991-01-01
A recent recalibration and reprocessing of the total ozone mapping spectrometer (TOMS) data have made possible a new determination of the global average total ozone decrease of 3.5 percent over the 11-year period January 1, 1979, to December 31, 1989, with a 2-sigma error of 1.4 percent. The revised TOMS ozone trend data are in agreement, within error limits, with the average of 39 ground-based Dobson stations and with the world standard Dobson spectrometer 83 at Mauna Loa, Hawaii. Superimposed on the 11-year ozone trend are a possible solar cycle effect, the quasi-biennial oscillation (QBO), and annual and semiannual cycles. Using solar 10.7-cm flux data and 30-mbar Singapore wind data (QBO), a time series has been constructed that reproduces the long-term behavior of the globally averaged ozone. Removal of the apparent solar cycle effect from the global average reduces the net ozone loss to 2.66 ± 1.4 percent per decade. The precise value of the global average ozone trend depends on the latitude range selected, with ranges greater than ±69° emphasizing the larger variations at high latitudes.
Interpreting Sky-Averaged 21-cm Measurements
NASA Astrophysics Data System (ADS)
Mirocha, Jordan
2015-01-01
Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions.I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. 
Finally, (3) the independent constraints most likely to aid in the interpretation of global 21-cm signal measurements are detections of Lyman Alpha Emitters at high redshifts and constraints on the midpoint of reionization, both of which are among the primary science objectives of ongoing or near-future experiments.
Perceiving the average hue of color arrays
Webster, Jacquelyn; Kay, Paul; Webster, Michael A.
2014-01-01
The average of a color distribution has special significance for color coding (e.g. to estimate the illuminant) but how it depends on the visual representation (e.g. perceptual vs. cone-opponent) or nonlinearities (e.g. categorical coding) is unknown. We measured the perceived average of two colors shown alternated in spatial arrays. Observers adjusted the components until the average equaled a specified reference hue. Matches for red, blue-red, or yellow-green were consistent with the arithmetic mean chromaticity, while blue-green settings deviated toward blue. The settings show little evidence for categorical coding, and cannot be predicted from the scaled appearances of the individual components. PMID:24695184
Books Average Previous Decade of Economic Misery
Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
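The correlation setup described above can be sketched numerically. The inflation and unemployment series below are synthetic placeholders (the real data are not reproduced here), and `trailing_average` is an illustrative helper name, not the authors' code:

```python
# Sketch of a trailing-decade moving average of the economic misery index
# (inflation + unemployment), using synthetic data.
import random

def misery_index(inflation, unemployment):
    # Economic misery index: sum of inflation and unemployment rates.
    return [i + u for i, u in zip(inflation, unemployment)]

def trailing_average(series, window):
    # Moving average over the previous `window` years (trailing, inclusive).
    return [sum(series[t - window + 1:t + 1]) / window
            for t in range(window - 1, len(series))]

random.seed(0)
years = 80
inflation = [random.uniform(0, 10) for _ in range(years)]    # synthetic %
unemployment = [random.uniform(3, 12) for _ in range(years)]  # synthetic %
misery = misery_index(inflation, unemployment)
smoothed = trailing_average(misery, 11)  # the paper's best-fit window
print(len(smoothed))  # → 70 smoothed values for 80 input years
```

In the paper, a series like `smoothed` is what the literary misery index is correlated against, with the goodness of fit peaking at an 11-year window.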
Matrix averages relating to the Ginibre ensembles
Peter J. Forrester; Eric M. Rains
2009-07-02
The theory of zonal polynomials is used to compute the average of a Schur polynomial of argument $AX$, where $A$ is a fixed matrix and $X$ is from the real Ginibre ensemble. This generalizes a recent result of Sommers and Khoruzhenko [J. Phys. A 42 (2009), 222002], and furthermore allows analogous results to be obtained for the complex and real quaternion Ginibre ensembles. As applications, the positive integer moments of the general variance Ginibre ensembles are computed in terms of generalized hypergeometric functions; these are written in terms of averages over matrices of the same size as the moment to give duality formulas, and the averages of the power sums of the eigenvalues are expressed as finite sums of zonal polynomials.
Matrix averages relating to Ginibre ensembles
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Rains, Eric M.
2009-09-01
The theory of zonal polynomials is used to compute the average of a Schur polynomial of argument AX, where A is a fixed matrix and X is from the real Ginibre ensemble. This generalizes a recent result of Sommers and Khoruzhenko (2009 J. Phys. A: Math. Theor. 42 222002), and furthermore allows analogous results to be obtained for the complex and real quaternion Ginibre ensembles. As applications, the positive integer moments of the general variance Ginibre ensembles are computed in terms of generalized hypergeometric functions; these are written in terms of averages over matrices of the same size as the moment to give duality formulas, and the averages of the power sums of the eigenvalues are expressed as finite sums of zonal polynomials.
An improved moving average technical trading rule
NASA Astrophysics Data System (ADS)
Papailias, Fotis; Thomakos, Dimitrios D.
2015-06-01
This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance of this modified strategy differ from those of the standard approach, with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting a smaller maximum drawdown and shorter drawdown duration than the standard strategy.
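A minimal 'long only' sketch of this idea follows. The percentage trailing stop (`stop_frac`) is an illustrative assumption; the abstract does not specify the paper's actual dynamic-threshold rule:

```python
# Generic moving-average cross-over with a trailing stop (long only).
# Buy when price crosses above its simple moving average; exit when price
# falls stop_frac below the highest price seen since entry.

def sma(prices, n):
    return [sum(prices[i - n + 1:i + 1]) / n if i >= n - 1 else None
            for i in range(len(prices))]

def run_strategy(prices, n=3, stop_frac=0.05):
    ma = sma(prices, n)
    in_pos, peak, trades = False, 0.0, []
    for p, m in zip(prices, ma):
        if m is None:
            continue
        if not in_pos and p > m:                 # cross-over 'buy' signal
            in_pos, peak = True, p
            trades.append(('buy', p))
        elif in_pos:
            peak = max(peak, p)                  # ratchet the threshold up
            if p < peak * (1 - stop_frac):       # dynamic trailing stop
                in_pos = False
                trades.append(('sell', p))
    return trades

prices = [10, 10, 10, 11, 12, 13, 12.9, 12.0, 11.5, 11.0]
print(run_strategy(prices))  # → [('buy', 11), ('sell', 12.0)]
```

The stop here rises with the running peak, which is the sense in which the exit threshold is "dynamic" rather than a fixed stop-loss.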
A singularity theorem based on spatial averages
José M. M. Senovilla
2007-05-06
Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating everywhere expanding universes with non-vanishing spatial average of the matter variables are severely geodesically incomplete to the past. Another way of stating the result is that, under the same conditions, any singularity-free model must have a vanishing spatial average of the energy density (and other physical variables). This is very satisfactory and provides a clear decisive difference between singular and non-singular cosmologies.
Polarized electron beams at milliampere average current
Poelker, Matthew [JLAB
2013-11-01
This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 uA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltage > 350 kV.
Implicit δf Lorentz Ion Sub-Cycling and Orbit Averaging
NASA Astrophysics Data System (ADS)
Parker, Scott; Sturdevant, Benjamin; Chen, Yang; Hause, Benjamin
2014-10-01
A second order, implicit Lorentz ion drift-kinetic electron model has been developed to study low-frequency, quasi-neutral plasmas. This model is useful, for example, as an alternative to gyrokinetics in the tokamak edge region where gradient scale lengths are short. In the presence of a strong guide field, however, the applicability of the model is limited due to the time step size required to fully resolve the ion gyromotion. The aim of this research is to develop GPU accelerated sub-cycling and orbit averaging methods to be used with the Lorentz ion model making its utilization more viable. Sub-cycling pushes computational particles independently over several micro time steps for each macro time step interval over which the fields are advanced. Orbit averaging uses the deposition data from the sub-cycled particles to obtain time averaged source terms used in the field solving stage. This provides a filtering effect, allowing for clean simulations at low frequencies. Simulation results and analysis for an ion acoustic model are presented along with performance results for GPUs.
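The sub-cycling/orbit-averaging loop structure described above can be sketched as follows. The toy harmonic "gyro" push and all names here are illustrative stand-ins, not the Lorentz ion model itself:

```python
# Schematic of sub-cycling with orbit averaging: particles advance over
# `nsub` micro time steps per macro (field) step, and their depositions are
# time-averaged before being handed to the field solve.

def macro_step(particles, omega, dt_macro, nsub):
    dt = dt_macro / nsub        # micro step resolving the fast gyromotion
    deposit = 0.0
    for _ in range(nsub):
        for p in particles:
            # toy circular (gyro-like) motion: x'' = -omega^2 x
            p["vx"] -= omega**2 * p["x"] * dt
            p["x"] += p["vx"] * dt
        deposit += sum(p["x"] for p in particles)  # accumulate each micro step
    return deposit / nsub       # orbit-averaged source term for the fields

# A symmetric particle pair: the averaged deposition cancels exactly.
parts = [{"x": 1.0, "vx": 0.0}, {"x": -1.0, "vx": 0.0}]
src = macro_step(parts, omega=10.0, dt_macro=0.628, nsub=64)
print(src == 0.0)  # → True
```

The averaging over micro steps is what provides the filtering effect mentioned in the abstract: fast gyro-scale oscillations in the deposition are smoothed out before the (slower) field advance.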
A bounce-averaged Monte Carlo collision operator and ripple transport in a tokamak
Albert, J.M.; Boozer, A.H.
1986-09-01
A bounce-averaged Monte Carlo operator is presented that simulates bounce-averaged perturbative Lorentz pitch angle scattering of particles in toroidal plasmas, in particular a tokamak. In conjunction with bounce-averaged expressions for the deterministic motion, this operator allows a quick and inexpensive simulation on time scales long compared to a bounce time. An analytically tractable model of transport due to toroidal magnetic field ripple is described.
HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.
BEN-ZVI, ILAN; DAYRAN, D.; LITVINENKO, V.
2005-08-21
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.
Science of NHL Hockey: Statistics & Averages
NSDL National Science Digital Library
NBC Learn
2010-10-07
Being a top goalie in the NHL takes more than quick reflexes and nerves of steel; it also requires a firm grip on the numbers, namely the key averages and statistics of goaltending. "Science of NHL Hockey" is a 10-part video series produced in partnership with the National Science Foundation and the National Hockey League.
Averaging Theory for Non-linear Oscillators
Aritra Sinha
2015-06-24
I first discuss how averaging theory can be an effective tool for solving weakly non-linear oscillators. I then apply this technique to the Van der Pol oscillator and extend the stability criterion of the Van der Pol oscillator to any integer n (odd or even).
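For reference, the standard averaging calculation for the classical (cubic) Van der Pol oscillator, the base case the abstract generalizes, runs as follows:

```latex
% Van der Pol oscillator, 0 < \epsilon \ll 1:
%   \ddot{x} + x = \epsilon\,(1 - x^2)\,\dot{x}.
% Substitute x = r(t)\cos(t + \varphi(t)) and average the slowly varying
% amplitude and phase over one period of the fast oscillation:
\begin{align}
  \dot{r} &= \frac{\epsilon r}{2}\left(1 - \frac{r^2}{4}\right),
  &
  \dot{\varphi} &= 0.
\end{align}
% The averaged amplitude equation has fixed points r = 0 (unstable) and
% r = 2 (stable), so trajectories approach a limit cycle of amplitude 2.
```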
A Functional Measurement Study on Averaging Numerosity
ERIC Educational Resources Information Center
Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio
2014-01-01
In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…
Initial Conditions in the Averaging Cognitive Model
ERIC Educational Resources Information Center
Noventa, S.; Massidda, D.; Vidotto, G.
2010-01-01
The initial state parameters s₀ and w₀ are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…
Glenzinski, D.; /Fermilab
2008-01-01
This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.
Model averaging, optimal inference, and habit formation
FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.
2014-01-01
Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724
Analytics for geometric average trigger reset options
Lyuu, Yuh-Dauh
…respectively) in stock price. This makes a reset option useful for portfolio insurance. … (1) to mitigate the possibility of stock price manipulation, especially for thinly traded … in Taiwan, issued two average reset options on the Taipei Stock Exchange in 1999. Standard reset options…
Average Annual Rainfall over the Globe
ERIC Educational Resources Information Center
Agrawal, D. C.
2013-01-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…
Why Johnny Can Be Average Today.
ERIC Educational Resources Information Center
Sturrock, Alan
1997-01-01
During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…
High average-power induction linacs
Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.
1989-03-15
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ~50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs.
Average Values of Mean Squares in Factorials
Jerome Cornfield; John W. Tukey
1956-01-01
The assumptions appropriate to the application of analysis of variance to specific examples, and the effects of these assumptions on the resulting interpretations, are today a matter of very active discussion. Formulas for average values of mean squares play a central role in this problem, as do assumptions about interactions. This paper presents formulas for crossed (and, incidentally, for nested
Average Annual Rainfall Over the Globe
NASA Astrophysics Data System (ADS)
Agrawal, D. C.
2013-12-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74×10^17 J of solar radiation per second and it is divided over various channels as given in Table 1. It keeps our planet warm and maintains its average temperature of 288 K with the help of the atmosphere in such a way that life can survive. It also recycles the water in the oceans/rivers/lakes by initial evaporation and subsequent precipitation; the average annual rainfall over the globe is around one meter. According to M. King Hubbert the amount of solar power going into the evaporation and precipitation channel is 4.0×10^16 W. Students can verify the value of average annual rainfall over the globe by utilizing this part of solar energy. This activity is described in the next section.
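The verification the abstract invites is a one-line energy balance: annual evaporation energy divided by latent heat gives the mass of water cycled, and dividing by water density and the Earth's surface area gives the depth. The constants below are standard textbook values; the choice of latent heat (2.45×10^6 J/kg, near-surface evaporation) is an assumption:

```python
# Back-of-envelope check of ~1 m average annual rainfall from the
# evaporation/precipitation power quoted in the abstract.
P = 4.0e16                 # W going into evaporation/precipitation (Hubbert)
L = 2.45e6                 # J/kg, latent heat of vaporization of water
rho = 1.0e3                # kg/m^3, density of liquid water
A = 5.1e14                 # m^2, surface area of the Earth
seconds_per_year = 3.156e7

mass_evaporated = P * seconds_per_year / L   # kg of water cycled per year
depth = mass_evaporated / (rho * A)          # spread uniformly over the globe
print(round(depth, 2))  # → 1.01, i.e. about one meter of rain per year
```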
Average position in quantum walks with a U(2) coin
Min Li; YOng-Sheng Zhang; Guang-Can Guo
2012-10-11
We investigate discrete-time quantum walks with an arbitrary unitary coin. Here we discover that the average position $\langle x\rangle = \langle x\rangle_{\max}\sin(\alpha+\gamma)$ when the initial state is $\frac{1}{\sqrt{2}}(|0L\rangle + i|0R\rangle)$. We prove the result and obtain some symmetry properties of quantum walks with a U(2) coin with $|0L\rangle$ and $|0R\rangle$ as the initial states.
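The symmetry of such walks can be checked numerically. This sketch assumes the Hadamard coin (a special case of a U(2) coin) and the initial state (|0L⟩ + i|0R⟩)/√2 from the abstract, for which the position distribution is known to be left-right symmetric, so ⟨x⟩ stays at zero:

```python
# Discrete-time quantum walk on the line with a 2x2 coin.
# State: {position: (amplitude_L, amplitude_R)}.
import math

def step(amps, coin):
    (a, b), (c, d) = coin
    new = {}
    for x, (l, r) in amps.items():
        cl, cr = a * l + b * r, c * l + d * r    # apply the coin
        new.setdefault(x - 1, [0, 0])[0] += cl   # L component shifts left
        new.setdefault(x + 1, [0, 0])[1] += cr   # R component shifts right
    return {x: (v[0], v[1]) for x, v in new.items()}

def mean_position(amps):
    return sum(x * (abs(l)**2 + abs(r)**2) for x, (l, r) in amps.items())

h = 1 / math.sqrt(2)
hadamard = ((h, h), (h, -h))
amps = {0: (1 / math.sqrt(2), 1j / math.sqrt(2))}  # (|0L> + i|0R>)/sqrt(2)
for _ in range(50):
    amps = step(amps, hadamard)
print(abs(mean_position(amps)) < 1e-6)  # → True: the walk stays symmetric
```

A general U(2) coin introduces the phase dependence on α and γ that moves ⟨x⟩ away from zero.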
Self-averaging in complex brain neuron signals
NASA Astrophysics Data System (ADS)
Bershadskii, A.; Dremencov, E.; Fukayama, D.; Yadid, G.
2002-12-01
Nonlinear statistical properties of the Ventral Tegmental Area (VTA) of the limbic brain are studied in vivo. The VTA plays a key role in the generation of pleasure and in the development of psychological drug addiction. It is shown that spiking time series of the VTA dopaminergic neurons exhibit long-range correlations with self-averaging behavior. This specific VTA phenomenon has no relation to the VTA rewarding function. This last result reveals the complex role of the VTA in the limbic brain.
NASA Technical Reports Server (NTRS)
1994-01-01
Time-averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row, the first-stage blade row, and the second-stage vane row of the Rocketdyne Space Shuttle Main Engine two-stage fuel-side turbine. Unsteady pressure envelope measurements for the first blade are also reported. These measurements were made at 10 percent, 50 percent, and 90 percent span on both the pressure and suction surfaces of the first-stage components. Additional Stanton number measurements were made on the first-stage blade platform, blade tip, and shroud, and at 50 percent span on the second vane. A shock tube was used as a short-duration source of heated and pressurized air to which the turbine was subjected. Platinum thin-film heat flux gages were used to obtain the heat flux measurements, while miniature silicon-diaphragm flush-mounted pressure transducers were used to obtain the pressure measurements. The first-stage vane Stanton number distributions are compared with predictions obtained using a version of STAN5 and a quasi-3D Navier-Stokes solution. This same quasi-3D N-S code was also used to obtain predictions for the first blade and the second vane.
Comparison of two different methods for coherent averaging in online ECG analysis
K. Egorouchkina; M. Braecklein; L. Pang; I. Tchoudovski; W. Kellermann; A. Bolz
2005-01-01
Coherent averaging is an often-used method in ECG systems for noise removal. Most algorithms are developed for offline data analysis, with either automatic or manual pattern determination for cycle averaging. These algorithms cannot be used for a wearable long-term ECG analysis system, which processes the biosignal continuously in real time, because the continuous analysis needs…
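Coherent averaging itself is simple to sketch: epochs aligned on a fiducial point (e.g., the R-peak) are averaged sample-by-sample, attenuating uncorrelated noise by roughly 1/√N while preserving the repeating waveform. The "cycle" and noise below are synthetic stand-ins, not ECG data:

```python
# Coherent (synchronized) averaging of aligned epochs.
import math
import random

random.seed(1)
template = [math.sin(2 * math.pi * i / 40) for i in range(40)]  # one "cycle"
# 200 aligned epochs of the same cycle buried in Gaussian noise (sigma = 0.5):
epochs = [[s + random.gauss(0, 0.5) for s in template] for _ in range(200)]

# Average sample-by-sample across epochs:
averaged = [sum(cycle[i] for cycle in epochs) / len(epochs)
            for i in range(len(template))]

residual = max(abs(a - t) for a, t in zip(averaged, template))
print(residual < 0.2)  # → True: noise suppressed well below its 0.5 level
```

The residual noise here scales as 0.5/√200 ≈ 0.035, which is the 1/√N suppression that makes the method attractive; the real-time difficulty the abstract points to is obtaining the alignment continuously, not the averaging step.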
High average power diode pumped solid state lasers for CALIOPE
Comaskey, B.; Halpin, J.; Moran, B.
1994-07-01
Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers.
Temporal averaging of atmospheric turbulence-induced optical scintillation.
Yura, H T; Beck, S M
2015-08-24
Based on the Rytov approximation, we have developed a general expression for the temporally averaged variance of irradiance under weak scintillation conditions. The present analysis provides, for what we believe is the first time, a firm theoretical basis for the often-observed reduction of irradiance fluctuations of an optical beam due to atmospheric turbulence. Accurate elementary analytic approximations are presented here for plane, spherical, and beam waves for predicting the averaging times required to obtain an arbitrary value of the ratio of the standard deviation to the mean of an optical beam propagating through an arbitrary path in the atmosphere. In particular, a novel application of differential absorption measurement for the purpose of measuring column-integrated concentrations of various so-called greenhouse gas (GHG) atmospheric components is considered, where the results of our analysis indicate that relatively short averaging times, on the order of a few seconds, are required to reduce the irradiance fluctuations to a value precise enough for GHG measurements of value to climate-related studies. PMID:26368254
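The underlying scaling, that the standard-deviation-to-mean ratio of an averaged signal falls as the square root of the number of effectively independent samples, can be illustrated with synthetic data. The lognormal stand-in for irradiance is an assumption for illustration, not the paper's turbulence model:

```python
# Averaging reduces the std/mean ratio of fluctuating "irradiance" samples
# roughly as 1/sqrt(N) for N independent samples per average.
import math
import random

random.seed(2)
# Weak-scintillation irradiance is roughly lognormal; synthetic samples:
samples = [random.lognormvariate(0, 0.3) for _ in range(100000)]

def std_over_mean(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m)**2 for x in xs) / len(xs)
    return math.sqrt(var) / m

raw = std_over_mean(samples)
block = 100  # average over blocks of 100 independent samples
averaged = std_over_mean([sum(samples[i:i + block]) / block
                          for i in range(0, len(samples), block)])
print(raw / averaged)  # close to sqrt(100) = 10
```

In the atmospheric problem the effective N is set by the averaging time divided by the turbulence correlation time, which is why a few seconds can suffice for the GHG application.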
Rigid shape matching by segmentation averaging.
Wang, Hongzhi; Oliensis, John
2010-04-01
We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking. PMID:20224119
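The "central" element minimizing the average distance to all members of a set is a medoid. This generic sketch uses scalar stand-ins and an absolute-difference metric, not the paper's segmentation distance or its closed-form average over segmentations:

```python
# Medoid: the set member minimizing the average (equivalently, total)
# distance to all members, a generic stand-in for the paper's "central"
# segmentation.

def medoid(items, dist):
    return min(items, key=lambda a: sum(dist(a, b) for b in items))

# Toy example: 1-D "segmentations" compared by absolute difference.
points = [1.0, 2.0, 2.5, 3.0, 10.0]
central = medoid(points, lambda a, b: abs(a - b))
print(central)  # → 2.5, robust to the 10.0 outlier
```

Note the contrast with the mean (3.7 here), which the outlier drags away from the cluster; choosing the minimizer from within the set is what makes the central element robust to unreliable members.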
Slater Averaged Pseudopotential and Its Improvements
NASA Astrophysics Data System (ADS)
Miao, Maosheng
2001-03-01
We demonstrate that the optimized effective potential (OEP) method, which can be viewed as a way of constructing an orbital-independent potential from known orbital-dependent potentials, is valid for pseudopotentials. It is further proved that for most group I and II elements, as well as elements with large radius, the Slater averaged pseudopotential, which is local and orbital independent, is applicable with very good transferability. A Heine-Abarenkov (HA) correction is proposed to make the pseudopotential workable for other elements, especially the first-row atoms. Furthermore, the combination of the Slater averaged potential and the Bachelet-Hamann-Schlüter (BHS) construction produces a new family of first-principles norm-conserving pseudopotentials.
Average gluon and quark jet multiplicities
A. V. Kotikov
2014-11-30
We present the results of [1,2] for computing the QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new results stem from recent progress in timelike small-x resummation obtained in the MSbar factorization scheme. They depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets demonstrates, by its goodness, how our results solve a longstanding problem of QCD. Including all the available theoretical input within our approach, alpha_s(M_Z) = 0.1199 +- 0.0026 has been obtained in the MSbar scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of ln x terms through the NNLL level and of ln Q^2 terms by the renormalization group. This result is in excellent agreement with the present world average.
Average plasma environment at geosynchronous orbit
NASA Technical Reports Server (NTRS)
Su, S. Y.; Konradi, A.
1979-01-01
The average plasma environment at geosynchronous orbit (GSO) is derived from a whole year's worth of plasma data obtained by the UCSD electrostatic electrometer on board ATS 5. The result is primarily intended for use as a general reference for engineers designing a large spacecraft to be flown at GSO. A simple mathematical formula using a 3rd order polynomial is found to be adequate for representing the yearly averaged particle energy spectrum from 70 to 41,000 eV under different geomagnetic conditions. Furthermore, correlation analyses with the geomagnetic planetary index Kp and with the auroral electrojet index AE were carried out in the hope that the ground observations of the geomagnetic field variations can be used to predict the plasma variations in space. Unfortunately, the results indicate that such forecasting is not feasible by use of these two popular geomagnetic parameters alone.
New applications for high average power beams
Neau, E.L.; Turman, B.N.; Patterson, E.L.
1993-08-01
The technology base formed by the development of high peak power simulators, laser drivers, FELs, and ICF drivers from the early 1960s through the late 1980s is being extended to high average power short-pulse machines with the capability of supporting new types of manufacturing processes and performing new roles in environmental cleanup applications. This paper discusses a process for identifying and developing possible commercial applications, specifically those requiring very high average power levels of hundreds of kilowatts to perhaps megawatts. The authors discuss specific technology requirements and give examples of application development efforts. The application development work is directed at areas that can possibly benefit from the high specific energies attainable with short-pulse machines.
Impact of averaged photolysis rates on stratospheric chemical models
Smith, A.K.
1995-06-20
An evaluation is made of the effect of two approximations to the diurnal variation of photolysis rates on the simulation of stratospheric chemistry. The daylight-average approximation uses photolysis rates that are given a value representative of the daylight average over that portion of the day that the Sun is shining, and zero otherwise. Although this distorts the diurnal cycle, the zonal average concentrations using this approximation are within about 5% of those from the fully resolved diurnal cycle for most of the important species in the stratosphere. Larger discrepancies occur in the partitioning of inorganic chlorine, although even this is in error by only about 10% for the dominant species. The 24-hour average photolysis approximation gives perpetual daylight except in the polar night. The error associated with this is quite large, especially for those species, such as NO₃ and N₂O₅, that are produced during night. Over an integration time of a month, it can lead to differences in the simulated concentrations of major species such as ozone of the order of 20% or more. 18 refs., 9 figs., 4 tabs.
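The daylight-average approximation can be illustrated for a single species with constant production and photolytic loss. The half-sine daytime profile, the rate constants, and the function names are illustrative assumptions, not the paper's chemistry scheme:

```python
# Compare a resolved diurnal photolysis cycle against the daylight-average
# approximation for dC/dt = P - J(t) * C (forward Euler, 30 days).
import math

def integrate(j_profile, days=30, dt=60.0):
    P, C, t = 1.0e-3, 0.0, 0.0
    total = days * 86400
    while t < total:
        C += (P - j_profile(t % 86400) * C) * dt
        t += dt
    return C  # concentration at local midnight after `days` days

J0 = 5e-5  # peak photolysis rate, 1/s (illustrative)

def resolved(s):
    # Daytime 06:00-18:00; J follows a half-sine over the daylight hours.
    frac = (s - 6 * 3600) / (12 * 3600)
    return J0 * math.sin(math.pi * frac) if 0 <= frac <= 1 else 0.0

Jday = J0 * 2 / math.pi  # daylight average of the half-sine profile
def daylight_avg(s):
    return Jday if 6 * 3600 <= s <= 18 * 3600 else 0.0

c1, c2 = integrate(resolved), integrate(daylight_avg)
print(abs(c1 - c2) / c1 < 0.05)  # → True: within ~5%, as in the abstract
```

The two profiles deposit the same integrated photolysis each day, so for a species with simple linear loss the approximation mainly distorts the within-day shape, not the day-to-day balance.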
Evolution of the average steepening factor for nonlinearly propagating waves.
Muhlestein, Michael B; Gee, Kent L; Neilsen, Tracianne B; Thomas, Derek C
2015-02-01
Difficulties arise in attempting to discern the effects of nonlinearity in near-field jet-noise measurements due to the complicated source structure of high-velocity jets. This article describes a measure that may be used to help quantify the effects of nonlinearity on waveform propagation. This measure, called the average steepening factor (ASF), is the ratio of the average positive slope in a time waveform to the average negative slope. The ASF is the inverse of the wave steepening factor defined originally by Gallagher [AIAA Paper No. 82-0416 (1982)]. An analytical description of the ASF evolution is given for benchmark cases: initially sinusoidal plane waves propagating through lossless and thermoviscous media. The effects of finite sampling rates and measurement noise on ASF estimation from measured waveforms are discussed. The evolution of initially broadband Gaussian noise and signals propagating in media with realistic absorption is described using numerical and experimental methods. The ASF is found to be relatively sensitive to measurement noise but is a relatively robust measure for limited sampling rates. The ASF is found to increase more slowly for initially Gaussian noise signals than for initially sinusoidal signals of the same level, indicating that the average distortion within noise waveforms occurs more slowly. PMID:25697999
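The ASF definition translates directly into code. This sketch assumes uniformly sampled data and simply partitions finite-difference slopes by sign; for an undistorted sine the up and down slopes balance and the ASF is 1, while steepened (shock-like) waveforms give values above 1:

```python
# Average steepening factor: mean positive slope over |mean negative slope|.
import math

def average_steepening_factor(samples, dt):
    slopes = [(b - a) / dt for a, b in zip(samples, samples[1:])]
    pos = [s for s in slopes if s > 0]
    neg = [s for s in slopes if s < 0]
    return (sum(pos) / len(pos)) / abs(sum(neg) / len(neg))

dt = 1e-4                                  # 10 kHz sampling
sine = [math.sin(2 * math.pi * 50 * i * dt) for i in range(2001)]  # 10 periods
print(round(average_steepening_factor(sine, dt), 3))  # → 1.0 for a pure sine
```

The sensitivity to measurement noise noted in the abstract is visible in the structure of the estimator: noise flips the sign of small slopes near the waveform extrema, moving samples between the two averages.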
Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.
ERIC Educational Resources Information Center
Dirks, Jean; And Others
1983-01-01
Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)
The Average Velocity in a Queue
ERIC Educational Resources Information Center
Frette, Vidar
2009-01-01
A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…
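The no-overtaking rule means each car travels at the minimum of its own maximum speed and the maximum speeds of all cars ahead of it. A minimal sketch of the resulting average velocity; the speed values are invented for illustration:

```python
def queue_average_velocity(max_speeds):
    """Average velocity in a no-overtaking queue.

    max_speeds is ordered front-to-back; each car's actual speed is the
    minimum of its own maximum speed and those of all cars ahead of it.
    """
    speeds = []
    slowest_ahead = float("inf")
    for v in max_speeds:
        slowest_ahead = min(slowest_ahead, v)
        speeds.append(slowest_ahead)
    return sum(speeds) / len(speeds)

# Car at 60 limits the 80 behind it; the 50 then limits the 90:
# actual speeds are 60, 60, 50, 50, so the average is 55.0.
avg = queue_average_velocity([60, 80, 50, 90])
```

Drawing the maximum speeds at random and averaging over many queues reproduces the non-trivial mean the abstract alludes to.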
Efficient brightness averaging of heterogeneous achromatic patches.
Kimura, Eiji; Takano, Yusuke
2015-09-01
Mean brightness in a variegated region may work as a clue to illumination intensity over the region and play an important role in the perception of object lightness. This study investigated whether brightness can be efficiently averaged for heterogeneous achromatic patches. Experiment 1 investigated discrimination thresholds for mean brightness between two arrays of 12 heterogeneous patches of different luminances. The thresholds were compared to brightness discrimination thresholds between two arrays of 12 homogeneous patches and to those between two single patches. The two arrays (or patches) were simultaneously presented for 200 msec and followed by a pattern mask. Results showed that mean brightness judgments for heterogeneous arrays were as accurate as simple brightness comparison for single patches, although they were slightly worse than brightness judgments for homogeneous arrays. This finding is consistent with efficient brightness averaging of different luminance patches. However, additional experiments revealed that inexperienced naive observers may use shortcuts for mean brightness judgments; they tended to choose as the brighter array the one containing the highest-luminance patch or the one consisting of the larger number of patches. To investigate the effects of these confounding factors, Experiment 2 measured discrimination thresholds for mean brightness between two arrays composed of different numbers of heterogeneous patches (6 vs. 12 or 9 vs. 12). The highest-luminance patch was included in the array consisting of either the smaller or the larger number of patches, and thus using this clue for mean judgments would lead to highly biased thresholds. Results were consistent with brightness averaging, but a small bias (varying in magnitude among observers) was found to choose the array containing the highest-luminance patch.
Overall, the present findings suggest that brightness can be efficiently averaged, but with greater weight given to the highest luminance. Meeting abstract presented at VSS 2015. PMID:26326319
Stochastic Games with Average Payoff Criterion
Ghosh, M. K.; Bagchi, A.
1998-11-15
We study two-person stochastic games on a Polish state and compact action spaces and with average payoff criterion under a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of Nash equilibrium in stationary strategies is established under certain separability conditions.
Fast Optimal Transport Averaging of Neuroimaging Data.
Gramfort, A; Peyré, G; Cuturi, M
2015-01-01
Knowing how the human brain is anatomically and functionally organized at the level of a group of healthy individuals or patients is the primary goal of neuroimaging research. Yet computing an average of brain imaging data defined over a voxel grid or a triangulation remains a challenge. Data are large, the geometry of the brain is complex, and between-subject variability leads to spatially or temporally non-overlapping effects of interest. To address the problem of variability, data are commonly smoothed before performing a linear group averaging. In this work we build on ideas originally introduced by Kantorovich to propose a new algorithm that can efficiently average non-normalized data defined over arbitrary discrete domains using transportation metrics. We show how Kantorovich means can be linked to Wasserstein barycenters in order to take advantage of an entropic smoothing approach. This leads to a smooth convex optimization problem and an algorithm with strong convergence guarantees. We illustrate the versatility of this tool and its empirical behavior on functional neuroimaging data, functional MRI and magnetoencephalography (MEG) source estimates, defined on voxel grids and triangulations of the folded cortical surface. PMID:26221679
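The entropic-smoothing idea behind Wasserstein barycenters can be sketched for 1-D histograms using iterative Bregman projections. This is a generic textbook scheme, not the authors' actual algorithm; the grid, squared-distance cost, and regularization strength are all illustrative assumptions:

```python
import numpy as np

def entropic_barycenter(hists, weights, x, eps=1e-2, n_iter=200):
    """Entropy-regularized Wasserstein barycenter of 1-D histograms on grid x
    (iterative Bregman projections; a sketch of the Sinkhorn-style approach)."""
    C = (x[:, None] - x[None, :]) ** 2            # squared-distance ground cost
    K = np.exp(-C / eps)                          # Gibbs kernel
    v = [np.ones_like(x) for _ in hists]
    for _ in range(n_iter):
        u = [a / (K @ vk) for a, vk in zip(hists, v)]
        # weighted geometric mean of the barycenter-side marginals
        b = np.exp(sum(w * np.log(vk * (K.T @ uk))
                       for w, uk, vk in zip(weights, u, v)))
        v = [b / (K.T @ uk) for uk in u]
    return b / b.sum()

# Sanity check: the barycenter of two identical histograms is a (smoothed,
# still normalized and symmetric) version of that histogram.
x = np.linspace(0.0, 1.0, 50)
a = np.exp(-((x - 0.5) ** 2) / 0.01)
a /= a.sum()
bar = entropic_barycenter([a, a], [0.5, 0.5], x)
```

The entropic regularization is what makes the problem smooth and convex, at the price of some blurring controlled by `eps`.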
Digital Averaging Phasemeter for Heterodyne Interferometry
NASA Technical Reports Server (NTRS)
Johnson, Donald; Spero, Robert; Shaklan, Stuart; Halverson, Peter; Kuhnert, Andreas
2004-01-01
A digital averaging phasemeter has been built for measuring the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. This phasemeter performs well enough to enable interferometric measurements of distance with accuracy of the order of 100 pm and with the ability to track distance as it changes at a speed of as much as 50 cm/s. This phasemeter is unique in that it is a single, integral system capable of performing three major functions that, heretofore, have been performed by separate systems: (1) measurement of the fractional-cycle phase difference, (2) counting of multiple cycles of phase change, and (3) averaging of phase measurements over multiple cycles for improved resolution. This phasemeter also offers the advantage of making repeated measurements at a high rate: the phase is measured on every heterodyne cycle. Thus, for example, in measuring the relative phase of two signals having a heterodyne frequency of 10 kHz, the phasemeter would accumulate 10,000 measurements per second. At this high measurement rate, an accurate average phase determination can be made more quickly than is possible at a lower rate.
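The three functions the phasemeter integrates can be mimicked on simulated per-cycle phase samples. A minimal sketch only; the phase rate and block size are arbitrary illustrations, not the instrument's parameters:

```python
import numpy as np

# One fractional-phase sample per heterodyne cycle (e.g. 10,000 samples/s
# at a 10 kHz heterodyne frequency).
n = np.arange(10_000)
true_phase = 0.017 * n                        # radians; steadily changing distance

wrapped = np.angle(np.exp(1j * true_phase))   # (1) fractional-cycle phase measurement
unwrapped = np.unwrap(wrapped)                # (2) counting multiple cycles of change
block_avg = unwrapped.reshape(-1, 100).mean(axis=1)  # (3) averaging for resolution
```

Unwrapping recovers the full multi-cycle phase exactly as long as the phase changes by less than half a cycle between samples, and block averaging then trades time resolution for phase resolution.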
Estimates of Random Error in Satellite Rainfall Averages
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.
2003-01-01
Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.
Modified Averaging Processes in Cosmology and the Structured FRW model
Shahram Khosravi; Ehsan Kourkchi; Reza Mansouri
2009-07-22
We study the volume averaging of inhomogeneous metrics within GR and discuss its shortcomings such as gauge dependence, singular behavior as a result of caustics, and causality violations. To remedy these shortcomings, we suggest some modifications to this method. As a case study we focus on the inhomogeneous structured FRW model based on a flat LTB metric. The effect of averaging is then studied in terms of an effective backreaction fluid. It is shown that, contrary to the claims in the literature, the backreaction fluid behaves like a dark matter component, instead of dark energy, having a density of the order of 10^{-5} times the matter density, and most importantly, it is gauge dependent.
Improved MCMAC with momentum, neighborhood, and averaged trapezoidal output.
Ang, K K; Chai, Q
2000-01-01
An improved modified cerebellar articulation controller (MCMAC) neural control algorithm with better learning and recall processes using momentum, neighborhood learning, and averaged trapezoidal output is proposed in this paper. The learning and recall processes of MCMAC are investigated using the characteristic surface of MCMAC and the control action exerted in controlling a continuously variable transmission (CVT). Extensive experimental results demonstrate a significant improvement with reduced training time and an extended range of trained MCMAC cells. The improvement in the recall process using the averaged trapezoidal output (MCMAC-ATO) is contrasted against the original MCMAC using the square of the Pearson product moment correlation coefficient. Experimental results show that the new recall process has significantly reduced the fluctuations in the control action of the MCMAC and partially addressed the problem associated with the resolution of the MCMAC memory array. PMID:18252380
High-average-power industrial laser for precision machining
NASA Astrophysics Data System (ADS)
Burchman, David A.; Hoffmaster, Dave; Machan, Jason P.; Rust, Tom; Marabella, Leonard J.; Zamel, James M.
1998-09-01
High brightness, high average power, diode-pumped Nd:YAG solid state lasers (DPSSL) are being developed by TRW as part of the Precision Laser Machining Technology Reinvestment Program. The use of diode pump arrays in place of flashlamps, and a zig-zag slab geometry, allow lasers to be scaled to power levels beyond the current generation of lamp-pumped rod lasers while providing excellent beam quality. The efficiency is 3 - 4 times better using diode arrays in place of flashlamps, resulting in less waste heat in the laser medium and reduced optical aberrations. The corresponding beam quality provides more than an order of magnitude increase in the average intensity available at the workpiece, thus enabling new machining capabilities.
Quetelet, the average man and medical knowledge.
Caponi, Sandra
2013-08-14
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine. PMID:23970171
Is dark energy an effect of averaging?
Nan Li; Marina Seikel; Dominik J. Schwarz
2008-01-22
The present standard model of cosmology states that the known particles carry only a tiny fraction of the total mass and energy of the Universe. Rather, unknown dark matter and dark energy are the dominant contributions to the cosmic energy budget. We review the logic that leads to the postulated dark energy and present an alternative point of view, in which the puzzle may be solved by properly taking into account the influence of cosmic structures on global observables. We illustrate the effect of averaging on the measurement of the Hubble constant.
Averaging with B-series
Murua, Ander
Slide excerpt on backward error analysis of highly oscillatory systems (equations garbled in extraction): if the original system is Hamiltonian, then the averaged equation is Hamiltonian with a modified Hamiltonian given by a series expansion in the small parameter; cf. averaging theory [Sanders, Verhulst, Murdock].
Collimation of average multiplicity in QCD jets
NASA Astrophysics Data System (ADS)
Arleo, François; Pérez Ramos, Redamy
2009-11-01
The collimation of average multiplicity inside quark and gluon jets is investigated in perturbative QCD in the modified leading logarithmic approximation (MLLA). The role of higher order corrections accounting for energy conservation and the running of the coupling constant leads to smaller multiplicity collimation as compared to leading logarithmic approximation (LLA) results. The collimation of jets produced in heavy-ion collisions has also been explored by using medium-modified splitting functions enhanced in the infrared sector. As compared to elementary collisions, the angular distribution of the jet multiplicity is found to broaden in QCD media at all energy scales.
Averaging Spherically Symmetric Spacetimes in General Relativity
A. A. Coley; N. Pelavas
2006-06-21
We discuss the averaging problem in general relativity, using the form of the macroscopic gravity equations in the case of spherical symmetry in volume preserving coordinates. In particular, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. On cosmological scales, the correlation tensor in a Friedmann-Lema\\^{\\i}tre-Robertson-Walker (FLRW) background is found to be of the form of a spatial curvature. On astrophysical scales the correlation tensor can be interpreted as the sum of a spatial curvature and an anisotropic fluid. We briefly discuss the physical implications of these results.
Benkler, Erik; Sterr, Uwe
2015-01-01
The power spectral density in the Fourier frequency domain, and the different variants of the Allan deviation (ADEV) in dependence on the averaging time, are well established tools to analyse the fluctuation properties and frequency instability of an oscillatory signal. It is often supposed that the statistical uncertainty of a measured average frequency is given by the ADEV at a well considered averaging time. However, this approach requires further mathematical justification and refinement, which has already been done regarding the original ADEV for certain noise types. Here we provide the necessary background to use the modified Allan deviation (modADEV) and other two-sample deviations to determine the uncertainty of weighted frequency averages. The type of two-sample deviation used to determine the uncertainty depends on the method used for determination of the average. We find that the modADEV, which is connected with $\Lambda$-weighted averaging, and the two-sample deviation associated with a linear phase regr...
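As background for such analyses, the overlapping Allan deviation itself is easy to compute from fractional-frequency data. A generic textbook estimator, not the paper's weighted-average machinery; the alternating toy data are purely illustrative:

```python
import numpy as np

def overlapping_adev(y, m):
    """Overlapping Allan deviation of fractional-frequency samples y
    at averaging factor m (i.e. tau = m * tau0)."""
    y = np.asarray(y, dtype=float)
    # averages of y over windows of length m, at every starting index
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")
    d = ybar[m:] - ybar[:-m]          # differences of adjacent m-averages
    return np.sqrt(0.5 * np.mean(d ** 2))

# Toy data alternating between +1 and -1: adjacent single samples differ
# by 2, while every 2-sample average is exactly zero.
y = np.tile([1.0, -1.0], 500)
```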
A comparison of time-shared vs. batch development of space software
NASA Technical Reports Server (NTRS)
Forthofer, M.
1977-01-01
In connection with a study regarding the ground support software development for the Space Shuttle, an investigation was conducted concerning the most suitable software development techniques to be employed. A time-sharing 'trial period' was used to determine whether or not time-sharing would be a cost-effective software development technique for the Ground Based Shuttle system. It was found that time-sharing substantially improved job turnaround and programmer access to the computer for the representative group of ground support programmers. Moreover, this improvement resulted in an estimated saving of over fifty programmer days during the trial period.