Baumgardner, H W; Boyer, B P; Cavenaugh, B; Keller, C
1992-01-01
Advances in instrumentation have enabled laboratories to deliver results much more quickly. But as equipment has advanced, so have expectations for turnaround time (TAT), driving up costs in the process. Laboratories must balance the needs of their clients against the need to cover their costs. In this issue, we asked our respondents: How do you address the issue of turnaround time? PMID:10118449
Hawkins, Robert C
2007-01-01
Turnaround time (TAT) is one of the most noticeable signs of laboratory service and is often used as a key performance indicator of laboratory performance. This review summarises the literature regarding laboratory TAT, focusing on the different definitions, measures, expectations, published data, associations with clinical outcomes and approaches to improve TAT. It aims to provide a consolidated source of benchmarking data useful to the laboratory in setting TAT goals and to encourage introduction of TAT monitoring for continuous quality improvement. A 90% completion time (sample registration to result reporting) of <60 minutes for common laboratory tests is suggested as an initial goal for acceptable TAT. PMID:18392122
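The suggested benchmark (90% of common tests completed within 60 minutes, registration to reporting) reduces to a percentile calculation. A minimal sketch in Python; the function name and the TAT values are illustrative, not taken from the review:

```python
import math

def completion_percentile(tat_minutes, pct=0.90):
    """Return the TAT (nearest-rank percentile) below which `pct` of results fall."""
    ordered = sorted(tat_minutes)
    rank = math.ceil(pct * len(ordered)) - 1  # nearest-rank method, 0-based index
    return ordered[rank]

tats = [12, 25, 31, 44, 47, 52, 55, 58, 61, 95]  # fabricated TATs in minutes
p90 = completion_percentile(tats)                # 90% completion time
meets_goal = p90 < 60                            # check against the <60 min goal
```

With these invented values the 90% completion time is 61 minutes, so the goal is missed; real monitoring would run this per test type and per reporting period.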
Emergency Department Overcrowding and Ambulance Turnaround Time
Lee, Yu Jin; Shin, Sang Do; Lee, Eui Jung; Cho, Jin Seong; Cha, Won Chul
2015-01-01
Objective The aims of this study were to describe overcrowding in regional emergency departments in Seoul, Korea and evaluate the effect of crowding on ambulance turnaround time. Methods This study was conducted between January 2010 and December 2010. Patients who were transported by 119-responding ambulances to 28 emergency centers within Seoul were eligible for enrollment. Overcrowding was defined as the average occupancy rate, which was equal to the average number of patients staying in an emergency department (ED) for 4 hours divided by the number of beds in the ED. After selecting groups for final analysis, multi-level regression modeling (MLM) was performed with random effects for EDs to evaluate associations between occupancy rate and turnaround time. Results Between January 2010 and December 2010, 163,659 patients transported to 28 EDs were enrolled. The median occupancy rate was 0.42 (range: 0.10-1.94; interquartile range (IQR): 0.20-0.76). Overcrowded EDs were more likely to have older patients, those with normal mentality, and non-trauma patients. Overcrowded EDs were more likely to have longer turnaround intervals and traveling distances. The MLM analysis showed that an increase of 1% in occupancy rate was associated with a 0.02-minute decrease in turnaround interval (95% CI: 0.01 to 0.03). In subgroup analyses limited to EDs with occupancy rates over 100%, we also observed a 0.03-minute decrease in turnaround interval per 1% increase in occupancy rate (95% CI: 0.01 to 0.05). Conclusions In this study, we found wide variation in emergency department crowding in a metropolitan Korean city. Our data indicate that ED overcrowding is negatively associated with turnaround interval, with very small practical significance. PMID:26115183
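The crowding measure defined above (average ED census over a 4-hour window divided by bed count) is simple to compute. A hedged sketch with invented census figures:

```python
def occupancy_rate(census_counts, n_beds):
    """Average occupancy rate: mean number of patients present / number of ED beds."""
    return sum(census_counts) / len(census_counts) / n_beds

# Fabricated hourly census over a 4-hour window in a hypothetical 20-bed ED
rate = occupancy_rate([18, 22, 25, 19], 20)
overcrowded = rate > 1.0  # occupancy above 100%, as in the study's subgroup analysis
```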
Turnaround Time Modeling for Conceptual Rocket Engines
NASA Technical Reports Server (NTRS)
Nix, Michael; Staton, Eric J.
2004-01-01
Recent years have brought about a paradigm shift within NASA and the Space Launch Community regarding the performance of conceptual design. Reliability, maintainability, supportability, and operability are no longer effects of design; they have moved to the forefront and are affecting design. A primary focus of this shift has been a planned decrease in vehicle turnaround time. Potentials for instituting this decrease include attacking the issues of removing, refurbishing, and replacing the engines after each flight. Nevertheless, it is important to understand the operational effects of an engine on turnaround time, ground support personnel, and equipment. One tool for visualizing this relationship involves the creation of a Discrete Event Simulation (DES). A DES model can be used to run a series of trade studies to determine if the engine is meeting its requirements, and, if not, what can be altered to bring it into compliance. Using DES, it is possible to look at the ways in which labor requirements, parallel maintenance versus serial maintenance, and maintenance scheduling affect the overall turnaround time. A detailed DES model of the Space Shuttle Main Engines (SSME) has been developed. Trades may be performed using the SSME Processing Model to see where maintenance bottlenecks occur and what the benefits (if any) are of increasing the number of personnel, or the number and location of facilities, in addition to the trades previously mentioned, all with the goal of optimizing the operational turnaround time and minimizing operational cost. The SSME Processing Model was developed in such a way that it can easily be used as a foundation for developing DES models of other operational or developmental reusable engines. Performing a DES on a developmental engine during the conceptual phase makes it easier to affect the design and make changes to bring about a decrease in turnaround time and costs.
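A discrete event simulation of the kind described can be sketched with a simple event queue. This toy model (not the NASA SSME Processing Model) contrasts serial versus parallel maintenance by assigning tasks greedily to the earliest-free crew; task durations are fabricated:

```python
import heapq

def turnaround(task_durations, crews):
    """Simulate `crews` maintenance crews working a task list; return the makespan."""
    busy_until = [0.0] * crews              # next free time for each crew
    heapq.heapify(busy_until)
    for d in task_durations:
        start = heapq.heappop(busy_until)   # earliest available crew takes the task
        heapq.heappush(busy_until, start + d)
    return max(busy_until)

tasks = [8, 6, 4, 4, 2]                 # hours per maintenance task, fabricated
serial = turnaround(tasks, crews=1)     # all tasks in sequence
parallel = turnaround(tasks, crews=2)   # two crews working in parallel
```

Doubling the crews halves the makespan here only because the fabricated tasks pack evenly; a real DES would also model precedence constraints, facilities, and scheduling.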
Turnaround Time and Market Capacity in Contract Cheating
ERIC Educational Resources Information Center
Wallace, Melisa J.; Newton, Philip M.
2014-01-01
Contract cheating is the process whereby students auction off the opportunity for others to complete assignments for them. It is an apparently widespread yet under-researched problem. One suggested strategy to prevent contract cheating is to shorten the turnaround time between the release of assignment details and the submission date, thus making…
Interlibrary Loan Turnaround Times in Science and Engineering.
ERIC Educational Resources Information Center
Horton, Weldon, Jr.
1989-01-01
Describes the use of fixed point analysis procedures at King Fahd University of Petroleum and Minerals to determine as narrow a range as possible of interlibrary loan turnaround times in science and engineering subjects. The findings are discussed in terms of the complexity of interlibrary loan factors and items determined as relevant for further…
Monitoring operating room turnaround time: a retrospective analysis.
Scagliarini, Michele; Apreda, Mariarosaria; Wienand, Ulrich; Valpiani, Giorgia
2016-04-18
Purpose - Operating room (OR) turnaround time is a key process indicator for hospital business management: delays lead to fewer surgical interventions per day, with a consequent increase in costs and decrease in efficiency. The purpose of this paper is to increase understanding by assessing the process' steady-state behaviour and identifying changes that indicate either improvement or deterioration in quality. Design/methodology/approach - With this purpose, the authors retrospectively applied Shewhart control charts and exponentially weighted moving average control charts to data extracted from a hospital information system. Findings - The results showed that statistical process control is able to identify steady-state process behaviour and to detect positive or negative changes in process performance. In particular, the authors detected a deterioration in process performance coinciding with the change in the operating room patient transfer staff. Practical implications - This study showed that statistical quality control is a valuable tool for monitoring performance indicators. Currently, hospital managers are designing an OR dashboard which also includes the control charts. Originality/value - The paper highlights the control chart application to organizational indicators, allowing an objective OR system performance assessment. PMID:27120511
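The monitoring approach described (Shewhart plus EWMA charts over OR turnaround data) can be sketched as follows. This is a simplified individuals chart using mean ± 3 SD rather than moving-range-based limits, and all values are fabricated:

```python
from statistics import mean, stdev

def shewhart_limits(x):
    """Simplified control limits: mean plus/minus three standard deviations."""
    m, s = mean(x), stdev(x)
    return m - 3 * s, m + 3 * s

def ewma(x, lam=0.2):
    """Exponentially weighted moving average with smoothing constant `lam`."""
    z = [x[0]]
    for v in x[1:]:
        z.append(lam * v + (1 - lam) * z[-1])
    return z

tat = [32, 35, 30, 33, 31, 34, 45, 46, 47]  # minutes, fabricated shift at the end
lcl, ucl = shewhart_limits(tat)
smoothed = ewma(tat)
drifting = smoothed[-1] > mean(tat[:6])     # EWMA reveals the upward drift
```

The EWMA reacts to the sustained shift in the last three observations faster than a Shewhart chart would, which is why the paper pairs the two.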
Alshieban, Saeed; Al-Surimi, Khaled
2015-01-01
Turnaround time is an important quality indicator in surgical pathology. Retrospective analysis of three data points in September 2014, January 2015, and February 2015 showed that on average, about a quarter (24%) of routine surgical pathology cases (26%, 19%, and 27% respectively) are not reported on time and do not meet the accepted level of the College of American Pathologists' (CAP) standard turnaround time, which states that at least 90% of routine surgical pathology cases should be reported and verified within two days. Our daily observation, supported by a root cause analysis exercise, revealed that subprocesses including slide allocation and delivery to pathologists, slide review by pathologists, report editing by transcriptionists, and report verification by pathologists are where most delays occur. Thus, to tackle these process issues we developed a quality improvement project using the Model for Improvement to streamline the sample flow process and avoid unjustified reporting delay. The improvement ideas included developing a time log sheet to be attached to all pathology requests, and sending a daily reminder email followed by a phone call to all pathologists to verify completed reports on the same day. These intervention elements were tested using multiple PDSA cycles, resulting in a very noticeable improvement, with more than 94% of all routine cases reported in less than two days, meeting and exceeding the CAP standards. Such noticeable improvement in turnaround of surgical pathology reports will eventually lead to improving the quality and safety of patient care outcomes, including diagnosing patients on time, developing the appropriate treatment plan, and avoiding unjustified complications resulting in morbidity and mortality due to delayed reports. PMID:26734438
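The CAP criterion quoted above (at least 90% of routine cases verified within two days) reduces to a simple compliance check. A sketch with invented per-case reporting times:

```python
def cap_compliance(report_days, limit_days=2, goal=0.90):
    """Return (fraction reported within limit, whether the CAP goal is met)."""
    within = sum(1 for d in report_days if d <= limit_days)
    frac = within / len(report_days)
    return frac, frac >= goal

days = [1, 1, 2, 2, 1, 3, 1, 2, 1, 1]  # fabricated reporting days per case
rate, meets_cap = cap_compliance(days)
```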
A strategy for reducing turnaround time in design optimization using a distributed computer system
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Padula, Sharon L.; Rogers, James L.
1988-01-01
There is a need to explore methods for reducing lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means of reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.
2011-01-01
Background Assessing turnaround times can help to analyse workflows in hospital information systems. This paper presents a systematic review of literature concerning different turnaround time definitions. Our objectives were to collect relevant literature with respect to this kind of process times in hospitals and their respective domains. We then analysed the existing definitions and summarised them in an appropriate format. Methods Our search strategy was based on Pubmed queries and manual reviews of the bibliographies of retrieved articles. Studies were included if precise definitions of turnaround times were available. A generic timeline was designed through a consensus process to provide an overview of these definitions. Results More than 1000 articles were analysed and resulted in 122 papers. Of those, 162 turnaround time definitions in different clinical domains were identified. Starting and end points vary between these domains. To illustrate those turnaround time definitions, a generic timeline was constructed using preferred terms derived from the identified definitions. The consensus process resulted in the following 15 terms: admission, order, biopsy/examination, receipt of specimen in laboratory, procedure completion, interpretation, dictation, transcription, verification, report available, delivery, physician views report, treatment, discharge and discharge letter sent. Based on this analysis, several standard terms for turnaround time definitions are proposed. Conclusion Using turnaround times to benchmark clinical workflows is still difficult, because even within the same clinical domain many different definitions exist. Mapping of turnaround time definitions to a generic timeline is feasible. PMID:21609424
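The generic timeline proposed in the review makes TAT computation a matter of picking two named milestones. A sketch using a few of the 15 consensus terms as dictionary keys; the timestamps are invented:

```python
from datetime import datetime

def tat_minutes(milestones, start, end):
    """TAT in minutes between two named milestones on the generic timeline."""
    return (milestones[end] - milestones[start]).total_seconds() / 60

# One fabricated case, keyed by consensus milestone terms
case = {
    "order":                             datetime(2011, 5, 2, 8, 0),
    "receipt of specimen in laboratory": datetime(2011, 5, 2, 8, 40),
    "verification":                      datetime(2011, 5, 2, 9, 25),
    "report available":                  datetime(2011, 5, 2, 9, 30),
}
lab_tat = tat_minutes(case, "receipt of specimen in laboratory", "report available")
```

Because start and end points vary across clinical domains, naming them explicitly (rather than reporting a bare "TAT") is exactly what the review argues benchmarking needs.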
24 CFR 901.10 - Indicator #1, vacancy rate and unit turnaround time.
Code of Federal Regulations, 2010 CFR
2010-04-01
…Indicator #1, vacancy rate and unit turnaround time. This indicator examines the vacancy rate, a PHA's progress in reducing…. This indicator has a weight of x2. (a) For the calculation of the actual and adjusted vacancy rate…
An automated resource management system to improve production tapeout turn-around time
NASA Astrophysics Data System (ADS)
Guo, Eric; Liu, Qingwei; Zhu, Sherry; Wu, Jason; Tsai, Jenny; Lu, Junwei; Simmons, Mark C.
2013-03-01
In today's semiconductor industry, both the pure-play and independent device manufacturer (IDM) foundries are constantly and rigorously competing for market share. The acknowledged benefit for customers who partner with these foundries includes a reduced cost-of-ownership, along with the underwritten agreement of meeting or exceeding an aggressive time-to-market schedule. Because the Semiconductor Manufacturing International Corporation (SMIC) is one of the world-wide forerunners in the foundry industry, one of its primary concerns is ensuring continual improvement in its fab's turnaround time (TAT), especially given that newer technology nodes and their associated processes are increasing in complexity, and consequently, in their time-to-process. In assessing current runtime data trends at the 65nm and 40nm technology nodes, it was hypothesized that hardware and software utilization improvements could accomplish a reduced overall TAT. By running an experiment using the Mentor Graphics Calibre® Cluster Manager (CalCM) software, SMIC was able to demonstrate just over a 30% aggregate TAT improvement in conjunction with a greater than 90% average utilization of all hardware resources. This paper describes the experimental setup and procedures that predicated the reported results.
McGann, Patrick; Bunin, Jessica L; Snesrud, Erik; Singh, Seema; Maybank, Rosslyn; Ong, Ana C; Kwak, Yoon I; Seronello, Scott; Clifford, Robert J; Hinkle, Mary; Yamada, Stephen; Barnhill, Jason; Lesho, Emil
2016-07-01
Whole genome sequencing (WGS) is increasingly employed in clinical settings, though few assessments of turnaround times (TAT) have been performed in real time. In this study, WGS was used to investigate an unfolding outbreak of vancomycin-resistant Enterococcus faecium (VRE) among 3 patients in the ICU of a tertiary care hospital. Including overnight culturing, a TAT of just 48.5 h for a comprehensive report was achievable using an Illumina MiSeq benchtop sequencer. WGS revealed that isolates from patients 2 and 3 differed from that of patient 1 by a single nucleotide polymorphism (SNP), indicating nosocomial transmission. However, the unparalleled resolution provided by WGS suggested that nosocomial transmission involved two separate events from patient 1 to patients 2 and 3, rather than the linear transmission suspected from the time line. Rapid TATs are achievable using WGS in the clinical setting and can provide an unprecedented level of resolution for outbreak investigations. PMID:27185645
Timeliness “at a glance”: assessing the turnaround time through the six sigma metrics
Ialongo, Cristiano; Bernardini, Sergio
2016-01-01
Almost thirty years of systematic analysis have proven turnaround time to be a fundamental dimension for the clinical laboratory. Several indicators are available to date to assess and report quality with respect to timeliness, but they sometimes lack communicative immediacy and accuracy. Six sigma is a paradigm developed within the industrial domain for assessing quality and addressing goals and issues. The sigma level computed through the Z-score method is a simple and straightforward tool which delivers quality on a universal dimensionless scale and can handle non-normal data. Herein we report our preliminary experience in using the sigma level to assess the change in urgent (STAT) test turnaround time due to the implementation of total automation. We found the Z-score method to be a valuable and easy-to-use method for assessing and communicating the quality level of laboratory timeliness, providing good correspondence with the actual change in efficiency that was observed retrospectively. PMID:27019886
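The Z-score route from a defect fraction (here, the share of STAT results exceeding the TAT limit) to a sigma level can be sketched with the inverse normal CDF. Whether the authors include the conventional 1.5-sigma long-term shift is an assumption on our part, and the defect fraction below is fabricated:

```python
from statistics import NormalDist

def sigma_level(defect_fraction, long_term_shift=1.5):
    """Short-term sigma level from a defect fraction via the inverse normal CDF."""
    return NormalDist().inv_cdf(1 - defect_fraction) + long_term_shift

level = sigma_level(0.0668)  # ~6.7% of STAT results late (invented figure)
```

Under this convention a 6.7% defect fraction corresponds to roughly a 3-sigma process, which is the kind of "at a glance" scale the authors advocate.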
Insertable system for fast turnaround time microwave experiments in a dilution refrigerator.
Ong, Florian R; Orgiazzi, Jean-Luc; de Waard, Arlette; Frossati, Giorgio; Lupascu, Adrian
2012-09-01
Microwave experiments in dilution refrigerators are a central tool in the field of superconducting quantum circuits and other research areas. This type of experiments relied so far on attaching a device to the mixing chamber of a dilution refrigerator. The minimum turnaround time in this case is a few days as required by cooling down and warming up the entire refrigerator. We developed a new approach, in which a suitable sample holder is attached to a cold-insertable probe and brought in contact with transmission lines permanently mounted inside the cryostat. The total turnaround time is 8 h if the target temperature is 80 mK. The lowest attainable temperature is 30 mK. Our system can accommodate up to six transmission lines, with a measurement bandwidth tested from zero frequency to 12 GHz. This bandwidth is limited by low-pass components in the setup; we expect the intrinsic bandwidth to be at least 18 GHz. We present our setup, discuss the experimental procedure, and give examples of experiments enabled by this system. This new measurement method will have a major impact on systematic ultra-low temperature studies using microwave signals, including those requiring quantum coherence. PMID:23020391
Averaging facial expression over time
Haberman, Jason; Harp, Tom; Whitney, David
2010-01-01
The visual system groups similar features, objects, and motion (e.g., Gestalt grouping). Recent work suggests that the computation underlying perceptual grouping may be one of summary statistical representation. Summary representation occurs for low-level features, such as size, motion, and position, and even for high level stimuli, including faces; for example, observers accurately perceive the average expression in a group of faces (J. Haberman & D. Whitney, 2007, 2009). The purpose of the present experiments was to characterize the time-course of this facial integration mechanism. In a series of three experiments, we measured observers’ abilities to recognize the average expression of a temporal sequence of distinct faces. Faces were presented in sets of 4, 12, or 20, at temporal frequencies ranging from 1.6 to 21.3 Hz. The results revealed that observers perceived the average expression in a temporal sequence of different faces as precisely as they perceived a single face presented repeatedly. The facial averaging was independent of temporal frequency or set size, but depended on the total duration of exposed faces, with a time constant of ~800 ms. These experiments provide evidence that the visual system is sensitive to the ensemble characteristics of complex objects presented over time. PMID:20053064
Deo, Sarang; Crea, Lindy; Quevedo, Jorge; Lehe, Jonathan; Vojnov, Lara; Peter, Trevor; Jani, Ilesh
2015-09-01
The objective of this study was to quantify the impact of a new technology for communicating the results of an infant HIV diagnostic test on test turnaround time, and to quantify the association between late delivery of test results and patient loss to follow-up. We used data collected during a pilot implementation of General Packet Radio Service (GPRS) printers for communicating results in the early infant diagnosis program in Mozambique from 2008 through 2010. Our dataset comprised 1757 patient records, of which 767 were from before and 990 from after implementation of the expedited results delivery system. We used a multivariate logistic regression model to determine the association between late result delivery (more than 30 days between sample collection and result delivery to the health facility) and the probability of result collection by the infant's caregiver. We used a sample selection model to determine the association between late result delivery to the facility and further delay in collection of results by the caregiver. The mean test turnaround time decreased from 68.13 to 41.05 days after implementation of the expedited results delivery system. Caregivers collected only 665 (37.8%) of the 1757 results. After controlling for confounders, late delivery of results was associated with a reduction of approximately 18% (0.44 vs. 0.36; P < 0.01) in the probability of results being collected by the caregivers (odds ratio = 0.67, P < 0.05). Late delivery of results was also associated with a further average increase of 20.91 days of delay in collection of results (P < 0.01). Early infant diagnosis program managers should further evaluate the cost-effectiveness of operational interventions (eg, GPRS printers) that reduce delays. PMID:26068719
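The reported association can be illustrated with a plain 2x2 odds ratio for result collection given late (>30 day) delivery. The counts below are fabricated, chosen only so the ratio lands near the paper's reported OR of 0.67; the study itself used a multivariate model with confounder adjustment:

```python
def odds_ratio(collected_late, missed_late, collected_ontime, missed_ontime):
    """Unadjusted odds ratio from a 2x2 table of collection vs. delivery timing."""
    return (collected_late / missed_late) / (collected_ontime / missed_ontime)

# Fabricated counts, not the study's data
or_ = odds_ratio(180, 420, 230, 360)
late_worse = or_ < 1  # OR below 1: late delivery lowers the odds of collection
```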
Flexible time domain averaging technique
NASA Astrophysics Data System (ADS)
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to a different extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which can improve calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
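The comb-filter baseline that FTDA improves on, classical time domain averaging, is straightforward to sketch: average an integer number of signal periods so that asynchronous noise cancels while the periodic component survives. This is the conventional method only, not the CZT-based FTDA itself, and the signal is fabricated:

```python
def time_domain_average(signal, period):
    """Average whole periods of `signal`; `period` is the period in samples."""
    n = len(signal) // period  # number of complete periods available
    return [sum(signal[k * period + i] for k in range(n)) / n
            for i in range(period)]

sig = [1, 4, 1, 4, 1, 4, 1, 4]       # noiseless 2-sample-period signal, fabricated
avg = time_domain_average(sig, 2)    # one averaged period
```

With noise added, the averaged period converges to the true waveform as the number of periods grows; the PCE the paper targets arises when the true period is not an integer number of samples.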
Additional technician tasks and turnaround time in the clinical Stat laboratory
Salinas, Maria; López-Garrigós, Maite; Flores, Emilio; Leiva-Salinas, Maria; Lillo, Rosa; Leiva-Salinas, Carlos
2016-01-01
Introduction Many additional tasks in the Stat laboratory (SL) increase the workload. It is necessary to control them because they can affect the service provided by the laboratory. Our aim is to quantify these tasks, study their evolution over a 10-year period, and compare turnaround times (TAT) in the summer period to the rest of the year. Materials and methods Additional tasks were classified as “additional test request” and “additional sample”. We collected those incidences from the laboratory information system (LIS) and calculated their evolution over time. We also calculated the monthly TAT for troponin for Emergency department (ED) patients, as the difference between the verification time and the LIS registration time. A median time of 30 minutes was our indicator target. TAT results and test workloads in summer were compared to the rest of the year. Results Over a 10-year period, the technologists in the SL performed 51,385 additional tasks, a median of 475 per month. The workload was significantly higher during the summer (45,496 tests) than the rest of the year (44,555 tests) (P = 0.019). The troponin TAT did not show this variation between summer and the rest of the year, always complying with our 30-minute indicator target. Conclusion The technicians accomplished a significant number of additional tasks, and the workload kept increasing over the 10-year period. This did not affect the TAT results. PMID:27346970
Efficiency of an Automated Reception and Turnaround Time Management System for the Phlebotomy Room
Yun, Soon Gyu; Park, Eun Su; Bang, Hae In; Kang, Jung Gu
2016-01-01
Background Recent advances in laboratory information systems have largely been focused on automation. However, the phlebotomy services have not been completely automated. To address this issue, we introduced an automated reception and turnaround time (TAT) management system, for the first time in Korea, whereby the patient's information is transmitted directly to the actual phlebotomy site and the TAT for each phlebotomy step can be monitored at a glance. Methods The GNT5 system (Energium Co., Ltd., Korea) was installed in June 2013. The automated reception and TAT management system has been in operation since February 2014. Integration of the automated reception machine with the GNT5 allowed for direct transmission of laboratory order information to the GNT5 without involving any manual reception step. We used the mean TAT from reception to actual phlebotomy as the parameter for evaluating the efficiency of our system. Results Mean TAT decreased from 5:45 min to 2:42 min after operationalization of the system. The mean number of patients in queue decreased from 2.9 to 1.0. Further, the number of cases taking more than five minutes from reception to phlebotomy, defined as the defect rate, decreased from 20.1% to 9.7%. Conclusions The use of automated reception and TAT management system was associated with a decrease of overall TAT and an improved workflow at the phlebotomy room. PMID:26522759
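The "defect rate" metric used in the study (share of patients waiting more than five minutes from reception to phlebotomy) is a one-line calculation; the wait times below are invented:

```python
def defect_rate(wait_minutes, limit=5.0):
    """Fraction of patients whose reception-to-phlebotomy wait exceeds `limit`."""
    return sum(1 for w in wait_minutes if w > limit) / len(wait_minutes)

# Fabricated reception-to-phlebotomy waits, in minutes
waits = [2.4, 3.1, 5.5, 1.9, 6.2, 2.7, 3.3, 4.8, 2.1, 7.0]
rate = defect_rate(waits)
```

Tracking this alongside the mean TAT, as the study does, guards against an acceptable average hiding a long tail of slow cases.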
Ialongo, Cristiano; Porzio, Ottavia; Giambini, Ilio; Bernardini, Sergio
2016-06-01
The transition to total automation represents the greatest leap for a clinical laboratory, characterized by a totally new philosophy of process management. We have investigated the impact of total automation on core laboratory efficiency and its effects on the clinical services related to STAT tests. For this purpose, a 47-month retrospective study based on the analysis of 44,212 records of STAT cardiac troponin I (CTNI) tests was performed. The core laboratory reached a new efficiency level 3 months after the implementation of total automation. Median turnaround time (TAT) was reduced by 14.9±1.5 min for the emergency department (p < 0.01), reaching 41.6±1.2 min. In non-emergency departments, median TAT was reduced by 19.8±2.2 min (p < 0.01), reaching 52±1.3 min. There was no change in the volume of ordered STAT CTNI tests by the emergency department (p = 0.811), whereas for non-emergency departments there was a reduction of 115.7±50 monthly requests on average (p = 0.026). The volume of ordered tests decreased only in time frames of the regular shift following the morning round. Thus, total automation significantly improves the core laboratory efficiency in terms of TAT. As a consequence, the volume of STAT tests ordered by hospital departments (except for the emergency department) decreased due to reduced duplicated requests. PMID:25882188
Georgiou, A.; Lymer, S.; Hordern, A.; Ridley, L.; Westbrook, J.
2015-01-01
Summary Objectives To assess the impact of introducing a new Picture Archiving and Communication System (PACS) and Radiology Information System (RIS) on: (i) Medical Imaging work processes; and (ii) turnaround times (TATs) for x-ray and CT scan orders initiated in the Emergency Department (ED). Methods We employed a mixed method study design comprising: (i) semi-structured interviews with Medical Imaging Department staff; and (ii) retrospectively extracted ED data before (March/April 2010) and after (March/April 2011 and 2012) the introduction of a new PACS/RIS. TATs were calculated as: processing TAT (median time from image ordering to examination) and reporting TAT (median time from examination to final report). Results Reporting TAT for x-rays decreased significantly after introduction of the new PACS/RIS; from a median of 76 hours to 38 hours per order (p<.0001) for patients discharged from the ED, and from 84 hours to 35 hours (p<.0001) for patients admitted to hospital. Medical Imaging staff reported that the changeover to the new PACS/RIS led to gains in efficiency, particularly regarding the accessibility of images and patient-related information. Nevertheless, assimilation of the new PACS/RIS with existing Departmental work processes was considered inadequate and in some instances unsafe. Issues highlighted related to the synchronization of work tasks (e.g., porter arrangements) and the material set up of the work place (e.g., the number and location of computers). Conclusions The introduction of new health IT can be a “double-edged sword” providing improved efficiency but at the same time introducing potential hazards affecting the effectiveness of the Medical Imaging Department. PMID:26448790
Policy change to improve pathology turnaround time and reduce costs – possible to do both?
Dimeski, Goce; Silvester, Breeann; Ungerer, Jacobus; Johnson, Leslie; Martin, Jennifer H.
2013-01-01
Background: Overcrowding and prolonged length of stay in emergency departments (ED) are increasing problems in hospitals. Rapid availability of all laboratory results has an impact on clinical decision-making, admission or discharge decisions and resource utilisation. An increasing number of our urinary drugs of abuse (DOA) screens had a turnaround time (TAT) of up to 33 days after the discharge of the patient. Materials and methods: Following an audit and a consultation period with clinicians using the service, a policy change was implemented to reduce the use of gas chromatography-mass spectrometry (GCMS): all requests would have a standard immunoassay (IA) test panel undertaken unless GCMS (including medico-legal) analysis was specifically requested. Results: Almost all of the clinicians interviewed had no understanding of DOA screening or the difference in the information generated between a confirmatory GCMS urine toxicology screen and an IA DOA panel. It appeared none of the patients surveyed in the audit would have had a different clinical decision made if a GCMS had not been undertaken. A post-change audit showed only 4.3% of drug requests for IA also received confirmatory GCMS testing. The estimated saving post-implementation was $127,000 (AU $) in test costs alone over a two-year period. The TAT of GCMS results was reduced to 3-4 days. Conclusion: A laboratory-led behavioural change in test requesting is possible and sustainable, provided the reason is clinically sound and accompanied by consultation and the availability of advice by phone, when requested, on test requesting or interpretation. PMID:24266298
Preparing printed circuit boards for rapid turn-around time on a plotter
Hawtree, J.
1998-01-01
This document describes the use of the LPKF ProtoMat mill/drill unit, a circuit board plotter, with the associated CAD/CAM software BoardMaster and CircuitCAM. At present its primary use at Fermilab's Particle Physics Department is the rapid turnaround of prototype double-sided and single-sided copper-clad printed circuit boards (PCBs). (The plotter is also capable of producing gravure films and engraving aluminum or plastic, although we have not used it for this.) It has the capability of making traces 0.004 inch wide with 0.004 inch spacings, which is appropriate for high-density surface-mount circuits as well as other through-mounted discrete and integrated components. One of the primary benefits of the plotter is the capability to produce double-sided drilled boards from CAD files in a few hours. However, to achieve this rapid turnaround time, some care must be taken in preparing the files. This document describes how to optimize the process of PCB fabrication. With proper preparation, researchers can often have a completed circuit board in a day's time instead of the week or two wait of the usual procedures. It is assumed that the software and hardware are properly installed and that the machinist is acquainted with the Win95 operating system and the basics of the associated software. This paper does not describe its use with pen plotters, lasers or rubouts. The process of creating a PCB begins with the CAD (computer-aided design) software, usually PCAD or VeriBest. These files are then moved to CAM (computer-aided machining), where they are edited and converted into the proper format for running on the ProtoMat plotter. The plotter then performs the actual machining of the board. This document concentrates on the LPKF programs CircuitCam BASIS and BoardMaster for the CAM software. These programs run on a Windows 95 platform and drive an LPKF ProtoMat 93s plotter.
Statistics of time averaged atmospheric scintillation
Stroud, P.
1994-02-01
A formulation has been constructed to recover the statistics of the moving average of the scintillation Strehl from a discrete set of measurements. A program of airborne atmospheric propagation measurements was analyzed to find the correlation function of the relative intensity over displaced propagation paths. The variance in continuous moving averages of the relative intensity was then found in terms of the correlation functions. An empirical formulation of the variance of the continuous moving average of the scintillation Strehl has been constructed. The resulting characterization of the variance of the finite time averaged Strehl ratios is being used to assess the performance of an airborne laser system.
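The key step described above, expressing the variance of a continuous moving average through the correlation function, is presumably a form of the standard identity for a wide-sense-stationary record $I(t)$ with autocovariance $C(\tau)$:

```latex
\operatorname{Var}\!\left[\frac{1}{T}\int_{0}^{T} I(t)\,\mathrm{d}t\right]
  = \frac{1}{T^{2}}\int_{0}^{T}\!\!\int_{0}^{T} C(t_{1}-t_{2})\,\mathrm{d}t_{1}\,\mathrm{d}t_{2}
  = \frac{2}{T}\int_{0}^{T}\left(1-\frac{\tau}{T}\right) C(\tau)\,\mathrm{d}\tau
```

so correlation functions measured over displaced propagation paths determine the variance of any finite-time-averaged Strehl directly.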
Measuring Time-Averaged Blood Pressure
NASA Technical Reports Server (NTRS)
Rothman, Neil S.
1988-01-01
Device measures time-averaged component of absolute blood pressure in artery. Includes compliant cuff around artery and external monitoring unit. Ceramic construction in monitoring unit suppresses ebb and flow of pressure-transmitting fluid in sensor chamber. Transducer measures only static component of blood pressure.
Attractors and Time Averages for Random Maps
NASA Astrophysics Data System (ADS)
Araujo, Vitor
2006-07-01
Considering random noise in finite dimensional parameterized families of diffeomorphisms of a compact finite dimensional boundaryless manifold M, we show the existence of time averages for almost every orbit of each point of M, imposing mild conditions on the families. Moreover these averages are given by a finite number of physical absolutely continuous stationary probability measures. We use this result to deduce that situations with infinitely many sinks and Henon-like attractors are not stable under random perturbations, e.g., Newhouse's and Colli's phenomena in the generic unfolding of a quadratic homoclinic tangency by a one-parameter family of diffeomorphisms.
Ensemble vs. time averages in financial time series analysis
NASA Astrophysics Data System (ADS)
Seemann, Lars; Hua, Jia-Chen; McCauley, Joseph L.; Gunaratne, Gemunu H.
2012-12-01
Empirical analysis of financial time series suggests that the underlying stochastic dynamics are not only non-stationary, but also exhibit non-stationary increments. However, financial time series are commonly analyzed using the sliding interval technique, which assumes stationary increments. We propose an alternative approach that is based on an ensemble over trading days. To determine the effects of time averaging techniques on analysis outcomes, we create an intraday activity model that exhibits periodic variable diffusion dynamics and we assess the model data using both ensemble and time averaging techniques. We find that ensemble averaging techniques detect the underlying dynamics correctly, whereas sliding-interval approaches fail. As many traded assets exhibit characteristic intraday volatility patterns, our work implies that ensemble averaging approaches will yield new insight into the study of financial markets' dynamics.
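The contrast between the two averaging techniques can be reproduced with a toy model. The stdlib-Python sketch below (the periodic diffusion profile and all parameters are invented for illustration, not taken from the paper) draws Gaussian increments whose variance follows an intraday cycle, then compares the ensemble variance at fixed intraday times against a single pooled, sliding-style variance:

```python
import math
import random
from statistics import pvariance

random.seed(1)
DAYS, T = 2000, 100   # synthetic ensemble: trading days x intraday steps

def sigma(t):
    # assumed periodic intraday diffusion profile (illustrative only)
    return 1.0 + 0.8 * math.sin(2.0 * math.pi * t / T)

# increments[d][t]: increment at intraday time t on day d
increments = [[random.gauss(0.0, sigma(t)) for t in range(T)] for _ in range(DAYS)]

# Ensemble average: variance across days at each fixed intraday time t
ens_var = [pvariance([increments[d][t] for d in range(DAYS)]) for t in range(T)]

# Sliding-interval style time average: pool all increments within days
time_var = pvariance([x for day in increments for x in day])

peak, trough = T // 4, 3 * T // 4   # where sin(...) = +1 and -1
print(ens_var[peak], time_var, ens_var[trough])
```

The ensemble estimate resolves the intraday cycle (large variance at the peak, small at the trough), while the pooled time average collapses it to one intermediate number.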
Imoh, Lucius C; Mutale, Mubanga; Parker, Christopher T; Erasmus, Rajiv T; Zemlin, Annalise E
2016-01-01
Introduction Timeliness of laboratory results is crucial to patient care and outcome. Monitoring turnaround times (TAT), especially for emergency tests, is important to measure the effectiveness and efficiency of laboratory services. Laboratory-based clinical audits reveal opportunities for improving quality. Our aim was to identify the most critical steps causing a high TAT for cerebrospinal fluid (CSF) chemistry analysis in our laboratory. Materials and methods A 6-month retrospective audit was performed. The duration of each operational phase across the laboratory work flow was examined. A process-mapping audit trail of 60 randomly selected requests with a high TAT was conducted and reasons for high TAT were tested for significance. Results A total of 1505 CSF chemistry requests were analysed. Transport of samples to the laboratory was primarily responsible for the high average TAT (median TAT = 170 minutes). Labelling accounted for most delays within the laboratory (median TAT = 71 minutes) with most delays occurring after regular work hours (P < 0.05). CSF chemistry requests without the appropriate number of CSF sample tubes were significantly associated with delays in movement of samples from the labelling area to the technologist’s work station (caused by a preference for microbiological testing prior to CSF chemistry). Conclusion A laboratory-based clinical audit identified sample transportation, work shift periods and use of inappropriate CSF sample tubes as drivers of high TAT for CSF chemistry in our laboratory. The results of this audit will be used to change pre-analytical practices in our laboratory with the aim of improving TAT and customer satisfaction. PMID:27346964
Edwards, Thomas
2005-09-01
The Analytical Development Section (ADS) and the Statistical Consulting Section (SCS) of the Savannah River National Laboratory (SRNL) are participating in a Six Sigma initiative to improve the Defense Waste Processing Facility (DWPF) Laboratory. The Six Sigma initiative has focused on reducing the analytical turnaround time of samples from the Slurry Mix Evaporator (SME) by developing streamlined sampling and analytical methods [1]. The objective of Phase I was to evaluate the sub-sampling of a larger sample bottle and the performance of a cesium carbonate (Cs{sub 2}CO{sub 3}) digestion method. Successful implementation of the Cs{sub 2}CO{sub 3} fusion method in the DWPF would have important time savings and convenience benefits because this single digestion would replace the dual digestion scheme now used. A single digestion scheme would result in more efficient operations in both the DWPF shielded cells and the inductively coupled plasma--atomic emission spectroscopy (ICP-AES) laboratory. By taking a small aliquot of SME slurry from a large sample bottle and dissolving the vitrified SME sample with carbonate fusion methods, an analytical turnaround time reduction from 27 hours to 9 hours could be realized in the DWPF. This analytical scheme has the potential for not only dramatically reducing turnaround times, but also streamlining operations to minimize wear and tear on critical shielded cell components that are prone to fail, including the Hydragard{trademark} sampling valves and manipulators. Favorable results from the Phase I tests [2] led to the recommendation for a Phase II effort as outlined in the DWPF Technical Task Request (TTR) [3]. There were three major tasks outlined in the TTR, and SRNL issued a Task Technical and QA Plan [4] with a corresponding set of three major task activities: (1) Compare weight percent (wt%) total solids measurements of large volume samples versus peanut vial samples. (2) Evaluate Cs{sub 2}CO{sub 3} and K{sub 2}CO{sub 3
Turnaround Principal Competencies
ERIC Educational Resources Information Center
Steiner, Lucy; Barrett, Sharon Kebschull
2012-01-01
When the Minneapolis Public Schools first set out to hire turnaround school principals, administrators followed their usual process--which focused largely on reputation and anecdotal support and considered mainly internal candidates. Yet success at the complicated task of turning around the fortunes of a failing school depends on exceptionally…
Turnaround Momentum Still Fragile
ERIC Educational Resources Information Center
Klein, Alyson
2012-01-01
The federal program providing billions of dollars to help states and districts close or remake some of their worst-performing schools remains a work in progress after two years, with more than 1,200 turnaround efforts under way but still no definitive verdict on its effectiveness. The School Improvement Grant (SIG) program, supercharged by a…
Wesolowski, Laura G; Nasrullah, Muazzam; Coombs, Robert W; Rosenberg, Eric; Ethridge, Steven F; Hutchinson, Angela B; Dragavon, Joan; Rychert, Jennifer; Nolte, Frederick S; Madory, James E; Werner, Barbara G
2016-01-01
Background. To improve clinical and public health outcomes through early human immunodeficiency virus (HIV) detection, fourth-generation antigen/antibody immunoassay (4IA) and supplemental testing results must be returned rapidly. Methods. We examined HIV testing data at Harborview Medical Center (HMC), Massachusetts General Hospital (MGH), and the Medical University of South Carolina (MUSC), which used 4IA and supplemental antibody and nucleic acid tests (NATs). At MGH and MUSC, HIV-1 Western blot (WB) and HIV-2 testing were conducted at a reference laboratory. We compared time from specimen collection to laboratory result for established (positive WB) and acute infections (reactive 4IA, negative/indeterminate WB, detectable NAT), and we calculated testing cost per positive-test result. Results. From 3731 (MUSC) to 19 774 (MGH) tests were conducted; 0.01% (MGH) to 0.05% (HMC) were acute infections. Each laboratory had reactive 4IA, WB-negative, or indeterminate specimens without NAT (ie, potential acute infections). Time to result was 1.5 (HMC) to 5.2 days (MGH) for acute and 1.0 (HMC) to 5.2 days (MGH) for established infections. Costs were $1054 (MGH) to $1521 (MUSC). Conclusions. Conducting supplemental testing in-house lowered turnaround times, which may be further reduced with rapid HIV-1/HIV-2 differentiation tests. Hospitals may benefit from quantitative NATs not requiring physician orders, so all potential acute infections receive NAT. PMID:26798766
Time average vibration fringe analysis using Hilbert transformation
Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad
2010-10-20
Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
Distribution of time-averaged observables for weak ergodicity breaking.
Rebenshtok, A; Barkai, E
2007-11-23
We find a general formula for the distribution of time-averaged observables for systems modeled according to the subdiffusive continuous time random walk. For Gaussian random walks coupled to a thermal bath we recover ergodicity and Boltzmann's statistics, while for the anomalous subdiffusive case a weakly nonergodic statistical mechanical framework is constructed, which is based on Lévy's generalized central limit theorem. As an example we calculate the distribution of X, the time average of the position of the particle, for unbiased and uniformly biased particles, and show that X exhibits large fluctuations compared with the ensemble average.
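Weak ergodicity breaking of this kind is easy to see numerically. The following stdlib-Python sketch (an illustrative simulation with made-up parameters, not the paper's analytical calculation) runs an unbiased subdiffusive CTRW with power-law waiting times and shows that the time-averaged position scatters strongly from one trajectory to the next:

```python
import random

random.seed(2)
ALPHA, T_TOTAL = 0.5, 1e6   # waiting-time exponent < 1: subdiffusive regime

def power_law_wait():
    # heavy-tailed waiting time with P(tau > t) ~ t**(-ALPHA), tau >= 1
    return (1.0 - random.random()) ** (-1.0 / ALPHA)

def time_averaged_position(t_total):
    t = x = 0
    integral = 0.0
    while t < t_total:
        tau = min(power_law_wait(), t_total - t)
        integral += x * tau              # particle sits at x during the wait
        t += tau
        x += random.choice((-1, 1))      # unbiased jump after the wait
    return integral / t_total

# The time average of position varies wildly between trajectories:
# single long waiting periods dominate each individual time average.
xbars = [time_averaged_position(T_TOTAL) for _ in range(50)]
spread = max(xbars) - min(xbars)
print(f"trajectory-to-trajectory spread of the time-averaged position: {spread:.1f}")
```

For an ergodic process this spread would shrink as the observation time grows; here the long sojourns keep it broad, which is the signature of the nonergodic distribution the paper derives.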
Assessing School Turnaround: Evidence from Ohio
ERIC Educational Resources Information Center
Player, Daniel; Katz, Veronica
2016-01-01
Policy makers have struggled to find successful approaches to address concentrated, persistent low school achievement. While NCLB and the School Improvement Grant (SIG) program have devoted significant time and attention to turnaround, very little empirical evidence substantiates whether and how these efforts work. This study employs a comparative…
Off the Clock: What More Time Can (and Can't) Do for School Turnarounds. Education Sector Reports
ERIC Educational Resources Information Center
Silva, Elena
2012-01-01
If less time in the classroom is a cause of poor student performance, can adding more time be the cure? This strategy underlies a major effort to fix the nation's worst public schools. Billions of federal stimulus dollars are being spent to expand learning time on behalf of disadvantaged children. And extended learning time (ELT) is being proposed…
Average Weighted Receiving Time of Weighted Tetrahedron Koch Networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Zhang, Danping; Ye, Dandan; Zhang, Cheng; Li, Lei
2015-07-01
We introduce weighted tetrahedron Koch networks with infinite weight factors, which are a generalization of the finite ones. The notion of weighted time is first defined in this paper. The mean weighted first-passage time (MWFPT) and the average weighted receiving time (AWRT) are defined by weighted time accordingly. We study the AWRT with weight-dependent walks. Results show that the AWRT for a nontrivial weight factor sequence grows sublinearly with the network order. To investigate the reason for this sublinearity, the average receiving time (ART) for four cases is discussed.
Human Capital in Turnaround Schools
ERIC Educational Resources Information Center
Ferris, Kristen
2012-01-01
Finding, keeping and supporting great educators presents the single biggest challenge to successful school turnarounds. Without teachers and administrators who bring the needed combination of skills and passion, nothing else will achieve the desired effect. The turnaround model supported by the U.S. Department of Education School Improvement Grant…
Modeling an Application's Theoretical Minimum and Average Transactional Response Times
Paiz, Mary Rose
2015-04-01
The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that result from unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
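The lower-threshold idea can be sketched without the full extreme-value machinery. The stdlib-Python toy below (synthetic lognormal latencies with invented parameters; a crude empirical-quantile stand-in for the non-stationary GEV fit the report actually uses) collects daily minima and sets the lower threshold at a low quantile of their distribution:

```python
import random

random.seed(3)

# Synthetic transactional response times (ms): 200 days x 500 transactions.
# The latency model and all parameters are invented; the report itself fits
# a non-stationary GEV to the daily minima rather than taking a raw quantile.
def one_day(n=500):
    return [random.lognormvariate(4.0, 0.3) for _ in range(n)]

daily_minima = [min(one_day()) for _ in range(200)]

def quantile(xs, q):
    s = sorted(xs)
    return s[int(q * (len(s) - 1))]

# Lower threshold: a transaction completing faster than this is flagged
lower_threshold = quantile(daily_minima, 0.05)
print(f"lower threshold on daily minimum response time: {lower_threshold:.1f} ms")
```

Block minima (here, per day) are exactly the quantity a GEV distribution models, which is why the report's approach is the principled version of this quantile rule.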
Averaging analysis for discrete time and sampled data adaptive systems
NASA Technical Reports Server (NTRS)
Fu, Li-Chen; Bai, Er-Wei; Sastry, Shankar S.
1986-01-01
Earlier continuous time averaging theorems are extended to the nonlinear discrete time case. Theorems for the convergence analysis of discrete time adaptive identification and control systems are presented. Instability theorems are also derived and used for the study of robust stability and instability of adaptive control schemes applied to sampled data systems. As a by-product, the effects of sampling on unmodeled dynamics in continuous time systems are also studied.
Time-averaged photon-counting digital holography.
Demoli, Nazif; Skenderović, Hrvoje; Stipčević, Mario
2015-09-15
Time-averaged holography has used photographic emulsions (in its early stages) and digital photosensitive arrays (later) to record holograms. We extend the recording possibilities by utilizing a photon-counting camera, and we further investigate the possibility of obtaining accurate hologram reconstructions under rather severe experimental conditions. To achieve this, we derived an expression for the fringe function comprising the main parameters affecting the hologram recording. The influence of the main parameters, namely the exposure time and the number of averaged holograms, is analyzed by simulations and experiments. It is demonstrated that long exposure times can be avoided by averaging over many holograms with exposure times much shorter than the vibration cycle. Conditions in which the signal-to-noise ratio in reconstructed holograms can be substantially increased are provided. PMID:26371907
An averaging analysis of discrete-time indirect adaptive control
NASA Technical Reports Server (NTRS)
Phillips, Stephen M.; Kosut, Robert L.; Franklin, Gene F.
1988-01-01
An averaging analysis of indirect, discrete-time, adaptive control systems is presented. The analysis results in a signal-dependent stability condition and accounts for unmodeled plant dynamics as well as exogenous disturbances. This analysis is applied to two discrete-time adaptive algorithms: an unnormalized gradient algorithm and a recursive least-squares (RLS) algorithm with resetting. Since linearization and averaging are used for the gradient analysis, a local stability result valid for small adaptation gains is found. For RLS with resetting, the assumption is that there is a long time between resets. The results for the two algorithms are virtually identical, emphasizing their similarities in adaptive control.
Average waiting time in FDDI networks with local priorities
NASA Technical Reports Server (NTRS)
Gercek, Gokhan
1994-01-01
A method is introduced to compute the average queuing delay experienced by different priority group messages in an FDDI node. It is assumed that no FDDI MAC layer priorities are used. Instead, a priority structure is introduced to the messages locally at a higher protocol layer (e.g. the network layer). Such a method was planned for use in the Space Station Freedom FDDI network. Conservation of the average waiting time is used as the key concept in computing average queuing delays. It is shown that local priority assignments are feasible, especially when the traffic distribution in the FDDI network is asymmetric.
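The conservation principle invoked above is usually written as Kleinrock's conservation law. For a non-preemptive, work-conserving M/G/1 queue with priority classes $i$ (arrival rate $\lambda_i$, utilization $\rho_i$, mean waiting time $W_i$, service time $S_i$) the textbook form reads (this is the standard statement, not necessarily the exact variant used in the paper):

```latex
\sum_{i} \rho_i W_i \;=\; \frac{\rho\, W_0}{1-\rho},
\qquad W_0 = \frac{1}{2}\sum_{i} \lambda_i\, \mathrm{E}\!\left[S_i^{2}\right],
\qquad \rho = \sum_{i} \rho_i .
```

Because the weighted sum of class delays is invariant under the scheduling discipline, raising one class's priority lowers its $W_i$ only at the expense of the others, which is what makes the per-priority average delays computable from a single constraint.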
Determining average path length and average trapping time on generalized dual dendrimer
NASA Astrophysics Data System (ADS)
Li, Ling; Guan, Jihong
2015-03-01
Dendrimers have a wide range of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., the trap placed on a central node and the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
Instantaneous, phase-averaged, and time-averaged pressure from particle image velocimetry
NASA Astrophysics Data System (ADS)
de Kat, Roeland
2015-11-01
Recent work on pressure determination using velocity data from particle image velocimetry (PIV) resulted in approaches that allow for instantaneous and volumetric pressure determination. However, applying these approaches is not always feasible (e.g. due to resolution, access, or other constraints) or desired. In those cases pressure determination approaches using phase-averaged or time-averaged velocity provide an alternative. To assess the performance of these different pressure determination approaches against one another, they are applied to a single data set and their results are compared with each other and with surface pressure measurements. For this assessment, the data set of a flow around a square cylinder (de Kat & van Oudheusden, 2012, Exp. Fluids 52:1089-1106) is used. RdK is supported by a Leverhulme Trust Early Career Fellowship.
Time-average TV holography for vibration fringe analysis
Kumar, Upputuri Paul; Kalyani, Yanam; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad
2009-06-01
Time-average TV holography is a widely used method for vibration measurement. The method generates speckle-correlation time-averaged J0 fringes that can be used for full-field qualitative visualization of mode shapes at resonant frequencies of an object under harmonic excitation. In order to map the amplitudes of vibration, quantitative evaluation of the time-averaged fringe pattern is desired. A quantitative evaluation procedure based on the phase-shifting technique used in two-beam interferometry has also been adopted for this application with some modification. The existing procedure requires a large number of frames to be recorded for implementation. We propose a procedure that reduces the number of frames required for the analysis. The TV holographic system used and the experimental results obtained with it on an edge-clamped, sinusoidally excited square aluminium plate sample are discussed.
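The J0 fringes mentioned above follow the classical Powell-Stetson result for time-average holography: for sinusoidal out-of-plane vibration of amplitude $a(x,y)$, with illumination and observation close to the surface normal, the reconstructed intensity is modulated as

```latex
I(x,y) \;\propto\; I_0(x,y)\, J_0^{2}\!\left(\frac{4\pi}{\lambda}\, a(x,y)\right)
```

so the dark fringes lie at the zeros of the Bessel function $J_0$ and trace contours of equal vibration amplitude; quantitative procedures such as the phase-shifting one discussed here recover $a(x,y)$ between those contours.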
Trapping ultracold atoms in a time-averaged adiabatic potential
Gildemeister, M.; Nugent, E.; Sherlock, B. E.; Kubasik, M.; Sheard, B. T.; Foot, C. J.
2010-03-15
We report an experimental realization of ultracold atoms confined in a time-averaged, adiabatic potential (TAAP). This trapping technique involves using a slowly oscillating ({approx}kHz) bias field to time-average the instantaneous potential given by dressing a bare magnetic potential with a high-frequency ({approx}MHz) magnetic field. The resultant potentials provide a convenient route to a variety of trapping geometries with tunable parameters. We demonstrate the TAAP trap in a standard time-averaged orbiting potential trap with additional Helmholtz coils for the introduction of the radio frequency dressing field. We have evaporatively cooled 5x10{sup 4} atoms of {sup 87}Rb to quantum degeneracy and observed condensate lifetimes of longer than 3 s.
Nguyen, V A T; Nguyen, H Q; Vu, T T; Nguyen, N A T; Duong, C M; Tran, T H T; Nguyen, H V; Dang, D A; Bañuls, A-L
2015-12-01
Multidrug-resistant tuberculosis is a major issue worldwide; however, accessibility to drug susceptibility testing (DST) is still limited in developing countries, owing to high costs and complexity. We developed a proportion method on 12-well microplates for DST. The assay reduced the time to results to <12 days and <10 days when bacterial growth was checked with the naked eye or a microscope, respectively. Comparison with the Canetti-Grosset method showed that the results of the two assays almost overlapped (kappa index 0.98 (95% CI 0.91-1.00) for isoniazid, rifampicin, streptomycin; and kappa index 0.92 (95% CI 0.85-0.99) for ethambutol). The sequencing of genes involved in drug resistance showed similar level of phenotype-genotype agreement between techniques. Finally, measurement of the MICs of rifampicin and ethambutol suggests that the currently used critical ethambutol concentration should be revised, and that the current molecular drug susceptibility tests for rifampicin need to be re-evaluated, as in vitro rifampicin-sensitive isolates could harbour drug resistance-associated mutation(s). PMID:26348263
An Advanced Time Averaging Modelling Technique for Power Electronic Circuits
NASA Astrophysics Data System (ADS)
Jankuloski, Goce
For stable and efficient performance of power converters, a good mathematical model is needed. This thesis presents a new modelling technique for DC/DC and DC/AC Pulse Width Modulated (PWM) converters. The new model is more accurate than the existing modelling techniques such as State Space Averaging (SSA) and Discrete Time Modelling. Unlike the SSA model, the new modelling technique, the Advanced Time Averaging Model (ATAM) includes the averaging dynamics of the converter's output. In addition to offering enhanced model accuracy, application of linearization techniques to the ATAM enables the use of conventional linear control design tools. A controller design application demonstrates that a controller designed based on the ATAM outperforms one designed using the ubiquitous SSA model. Unlike the SSA model, ATAM for DC/AC augments the system's dynamics with the dynamics needed for subcycle fundamental contribution (SFC) calculation. This allows for controller design that is based on an exact model.
NASA Astrophysics Data System (ADS)
Wu, Zikai; Hou, Baoyu; Zhang, Hongjuan; Jin, Feng
2014-04-01
Deterministic network models have been attractive media for discussing the dependence of dynamical processes on network structural features. On the other hand, the heterogeneity of weights affects dynamical processes taking place on networks. In this paper, we present a family of weighted expanded Koch networks based on Koch networks. They originate from an r-polygon, and at each subsequent evolutionary step every node of the current generation produces m r-polygons that include the node, with weighted edges scaled by a factor w. We derive closed-form expressions for the average weighted shortest path length (AWSP). In a large network, the AWSP stays bounded as the network order grows (0 < w < 1). Then, we focus on a special random walk and trapping issue on the networks. In more detail, we calculate exactly the average receiving time (ART). The ART exhibits a sub-linear dependence on the network order (0 < w < 1), which implies that nontrivial weighted expanded Koch networks are more efficient than unweighted expanded Koch networks in receiving information. Besides, the efficiency of receiving information at hub nodes also depends on the parameters m and r. These findings may pave the way for controlling information transportation on general weighted networks.
Making Sense of School Turnarounds
ERIC Educational Resources Information Center
Hess, Frederick M.
2012-01-01
Today, in a sector flooded with $3.5 billion in School Improvement Grant funds and the resulting improvement plans, there's great faith that "turnaround" strategies are a promising way to tackle stubborn problems with persistently low-performing schools. Unlike traditional reform efforts, with their emphasis on incremental improvement, turnarounds…
Time Series ARIMA Models of Undergraduate Grade Point Average.
ERIC Educational Resources Information Center
Rogers, Bruce G.
The Auto-Regressive Integrated Moving Average (ARIMA) models, often referred to as Box-Jenkins models, are regression methods for analyzing sequentially dependent observations in large data sets. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
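The estimation stage of the Box-Jenkins procedure can be illustrated on the simplest member of the family, an AR(1) model. The stdlib-Python sketch below (synthetic data and an invented coefficient; real ARIMA fitting would use a statistics library) recovers the autoregressive coefficient from the lag-1 autocorrelation, which is the Yule-Walker equation for AR(1):

```python
import random

random.seed(4)

# Estimation stage for an AR(1) model x_t = PHI * x_{t-1} + e_t.
# PHI and the noise model are invented for this illustration.
PHI, N = 0.7, 5000
x = [0.0]
for _ in range(N):
    x.append(PHI * x[-1] + random.gauss(0.0, 1.0))

mean = sum(x) / len(x)
dev = [v - mean for v in x]
var = sum(d * d for d in dev) / len(dev)
lag1 = sum(dev[t] * dev[t - 1] for t in range(1, len(dev))) / len(dev)
phi_hat = lag1 / var   # Yule-Walker estimate: lag-1 autocorrelation
print(f"true phi = {PHI}, estimated phi = {phi_hat:.2f}")
```

Identification (choosing the model orders from autocorrelation plots) and diagnosis (checking residuals for remaining structure) bracket this estimation step in the full three-stage procedure.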
ERIC Educational Resources Information Center
Mass Insight Education (NJ1), 2011
2011-01-01
Organized by the key conditions areas for turnaround, "People, Program, Time and Money," this tool offers sample language for each contract element to serve as a model for modifications from a traditional CBA that may support a district's turnaround efforts. Sample language is offered from existing provisions in district-wide collective bargaining…
Combining Quick-Turnaround and Batch Workloads at Scale
NASA Technical Reports Server (NTRS)
Matthews, Gregory A.
2012-01-01
NAS uses PBS Professional to schedule and manage the workload on Pleiades, an 11,000+ node IB cluster. At this scale the user experience for quick-turnaround jobs can degrade, which led NAS initially to set up two separate PBS servers, each dedicated to a particular workload. Recently we have employed PBS hooks and scheduler modifications to merge these workloads together under one PBS server, delivering sub-1-minute start times for the quick-turnaround workload, and enabling dynamic management of the resources set aside for that workload.
Series Overview. Sustaining School Turnaround at Scale. Brief 1
ERIC Educational Resources Information Center
Education Resource Strategies, 2012
2012-01-01
Members of the non-profit organization Education Resource Strategies (ERS) have worked for over a decade with leaders of urban school systems to help them organize talent, time and technology to support great schools at scale. One year into the Federal program they are noticing significant differences in district turnaround approaches, engagement…
School Turnarounds: The Essential Role of Districts
ERIC Educational Resources Information Center
Zavadsky, Heather
2012-01-01
The inspiration for this book was a crucial observation: that if the school turnaround movement is to have widespread and lasting consequences, it will need to incorporate meaningful district involvement in its efforts. The result is a volume that considers school turnaround efforts at the district level, examining the evidence thus far and…
Time-averaged and time-dependent computations of isothermal flowfields in a centerbody combustor
NASA Astrophysics Data System (ADS)
Krishnamurthy, L.; Raju, M. S.; Creed, M. J.; Memering, J. N.
1984-12-01
A numerical investigation of the near-wake region in a ducted bluff-body combustor by finite-difference computations is reported. The numerical predictions are based upon: (1) the Reynolds-averaged Navier-Stokes equations and the k-epsilon turbulence model; and (2) the time-dependent, compressible Navier-Stokes equations. The standard k-epsilon turbulence model was modified to account for the effect of streamline curvature and for the preferential influence of normal stresses. The time-averaged calculations addressed the turbulent mixing under isothermal conditions in: (1) the large- and small-scale centerbody combustor configurations, due to annular air flow and central CO2 flow; and (2) the two-dimensional mixing-layer configuration, due to two streams of nitrogen at different velocities. The time-dependent calculations addressed the near-wake flowfield of the large-scale centerbody combustor configuration with only the annular air stream present. The Reynolds-averaged predictions examined the influence of the turbulence model corrections and geometric scale under varying annular and central flows on: (1) the axial and radial distributions of the mean and fluctuating components of the axial and radial velocities and of the mean CO2 concentrations; and (2) the axial and radial locations of the vortex center, as well as the magnitude and location of the minimum centerline mean axial velocity. Comparison of the predicted results with experimental data emphasizes and clarifies the complex flowfield interactions of the recirculating near-wake region.
Experimental and theoretical studies of time-averaged and time resolved rotor heat transfer
NASA Technical Reports Server (NTRS)
Dunn, M. G.; Rae, W. J.; Rigby, D. L.
1987-01-01
Efforts in support of the Space Shuttle Main Engine (SSME) structural durability program have concentrated on obtaining detailed time-averaged and time-resolved (or phase-locked) measurements on a full-scale rotating turbine both with and without cold gas injection and on theoretical studies designed to improve the prediction capability for these turbine flows. The experimental efforts have concentrated on use of the Garrett TFE 731-2 hp turbine. However, it has been possible to apply the theoretical efforts to predicting heat-flux distributions obtained for two additional turbines - i.e., (1) the Garrett low aspect ratio turbine (LART) and (2) the Teledyne 702 turbine. The experimental technique is the short duration, shock-tunnel approach, in which fast-response, thin-film resistance thermometers are used to measure surface temperature histories at prescribed locations on the turbine component parts. Heat-flux values are then inferred from the temperature histories by using standard data reduction procedures. The turbine being used is the Garrett TFE 731-2 hp stage, and both the nozzle guide vanes and the rotor blades are heavily instrumented with thin-film heat-flux gauges. Depending on how the data from a particular heat-flux gauge are recorded, one can get either time-resolved (or phase-locked) or time-averaged results. Both types of data are illustrated.
Sustainment of Fine Particle Cloud by Means of Time-Averaged Particle Driving Force in Plasmas
Gohda, Takuma; Iizuka, Satoru
2008-09-07
We have succeeded in sustaining a fine-particle cloud by using a time-averaged particle driving (TAPD) method in an RF discharge plasma. The particles feel only a time-averaged force when the period of the pulses applied to the point electrodes is shorter than the particle response time. The particles are transported to the midpoint between two point electrodes.
Infinite invariant density determines statistics of time averages for weak chaos.
Korabel, N; Barkai, E
2012-02-10
Weakly chaotic nonlinear maps with marginal fixed points have an infinite invariant measure. Time averages of integrable and nonintegrable observables remain random even in the long time limit. Temporal averages of integrable observables are described by the Aaronson-Darling-Kac theorem. We find the distribution of time averages of nonintegrable observables, for example, the time-averaged position of the particle, x̄. We show how this distribution is related to the infinite invariant density. We establish four identities between amplitude ratios controlling the statistics of the problem. PMID:22401047
ERIC Educational Resources Information Center
Strunk, Katharine O.; Marsh, Julie A.; Hashim, Ayesha K.; Bush-Mecenas, Susan; Weinstein, Tracey
2016-01-01
We examine the Los Angeles Unified School District's Public School Choice Initiative (PSCI), which sought to turn around the district's lowest-performing schools. We ask whether school turnaround impacted student outcomes, and what explains variations in outcomes across reform cohorts. We use a Comparative Interrupted Time Series approach using…
Unocal restarts Parachute Creek plant after turnaround
Not Available
1986-12-01
In late November, Unocal Corporation restarted its Parachute Creek shale oil project after a two-month turnaround, or overhaul operation. The plant is again operating at about 50% of its 10,000 barrel/day design capacity. This is the maximum rate that has been sustained for long periods of time. Unocal has been continuing to explore the possibility of adding a fluidized bed combustor to the retorting system. A decision on whether to proceed with building the FBC is expected to be made by July 1987. The Colorado Mined Land Reclamation Board has allowed Unocal to make changes at Parachute Creek to better accommodate the handling of spent shale. These changes are described. Operating problems at the plant from start-up and modifications required are reviewed. Upgrading of the shale oil syncrude cannot be done at the Gary, Indiana refinery as planned. Current plans are to truck it to Utah, place it in a pipeline to Midland, Texas, then pipe it through Cushing, Oklahoma to the Unocal refinery in Lemont, Illinois.
Code of Federal Regulations, 2014 CFR
2014-07-01
... averaging times and units? (a) Use Equation 1 in § 60.2975 to calculate emissions at 7 percent oxygen. (b) Use Equation 2 in § 60.2975 to calculate the 12-hour rolling averages for concentrations of...
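The oxygen correction and rolling average described above are simple arithmetic. A minimal sketch, assuming the standard dilution-correction form (the constants and function names here are illustrative; verify against the actual Equations 1 and 2 in § 60.2975 before any regulatory use):

```python
def correct_to_7pct_o2(c_meas, o2_pct):
    """Correct a measured concentration to a 7 percent oxygen basis.

    Uses the standard dilution-correction form
    C_adj = C_meas * (20.9 - 7) / (20.9 - %O2).
    """
    return c_meas * (20.9 - 7.0) / (20.9 - o2_pct)


def rolling_average(hourly_values, window=12):
    """12-hour rolling averages over a sequence of hourly concentrations."""
    return [sum(hourly_values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(hourly_values))]
```

At 7 percent measured oxygen the correction factor is 1, so the concentration is unchanged; at higher oxygen (more dilution air) the corrected value is scaled up.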
Code of Federal Regulations, 2010 CFR
2010-07-01
... of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method... dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use EPA Reference...
Code of Federal Regulations, 2014 CFR
2014-07-01
... part, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in... potential sulfur dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use...
A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, Thomas L.
2003-01-01
A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics, designed to capture this property, predicts power-law scaling behavior for the second-moment statistics of area-averaged rain rate as the averaging length scale L approaches zero. In the present work a more efficient method of estimating the model parameters is presented and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.
Local volume-time averaged equations of motion for dispersed, turbulent, multiphase flows
Sha, W.T.; Slattery, J.C.
1980-11-01
In most flows of liquids and their vapors, the phases are dispersed randomly in both space and time. These dispersed flows can be described only statistically or in terms of averages. Local volume-time averaging is used here to derive a self-consistent set of equations governing momentum and energy transfer in dispersed, turbulent, multiphase flows. The empiricisms required for use with these equations are the subject of current research.
Turnarounds to Transfer: Design beyond the Modes
ERIC Educational Resources Information Center
Eddy, Jennifer
2014-01-01
In "Turnarounds to Transfer," teachers design a collection of tasks toward the summative performance goal but go beyond the Communicative mode criteria: they must assess for transfer. Transfer design criteria must include a complexity or variation that makes learners engage critical thinking skills and call upon a repertoire of knowledge…
Pre-K as School Turnaround Strategy
ERIC Educational Resources Information Center
Buenafe, Anthony Q.
2011-01-01
Both state and local school systems, recognizing the strong and growing evidence of pre-kindergarten's effectiveness in closing achievement gaps, are implementing early learning programs as part of their education reform efforts. In low-performing districts, pre-k has emerged as a promising turnaround strategy, reaching children before they become…
Textiles, Tariffs, and Turnarounds: Profits Improved.
ERIC Educational Resources Information Center
Aronoff, Craig
1986-01-01
The U.S. textile industry may serve as a classic study on regeneration through market forces. The industry has recently made a turnaround in profits after having been recognized as an industry that was losing most of its profits to overseas producers. The reason for the emerging strength of the industry is that it began to innovate after a…
Medical school dean as a turnaround agent.
Sachs, Benjamin P; Krane, N Kevin; Kahn, Marc J
2008-08-01
Taking on the role as a new medical school Dean in a new city after Hurricane Katrina posed many challenges. To facilitate turnaround, 3 principles were applied: hit the ground running, promote community involvement, and gain a common vision for the future. This article describes Tulane University's process for implementing change and expands on its vision for the future. PMID:18703920
Bewerunge, Jörg; Ladadwa, Imad; Platten, Florian; Zunke, Christoph; Heuer, Andreas; Egelhaaf, Stefan U
2016-07-28
Anomalous diffusion is a ubiquitous phenomenon in complex systems. It is often quantified using time- and ensemble-averages to improve statistics, although time averages represent a non-local measure in time and hence can be difficult to interpret. We present a detailed analysis of the influence of time- and ensemble-averages on dynamical quantities by investigating Brownian particles in a rough potential energy landscape (PEL). Initially, the particle ensemble is randomly distributed, but the occupancy of energy values evolves towards the equilibrium distribution. This relaxation manifests itself in the time evolution of time- and ensemble-averaged dynamical measures. We use Monte Carlo simulations to study particle dynamics in a potential with a Gaussian distribution of energy values, where the long-time limit of the diffusion coefficient is known from theory. In our experiments, individual colloidal particles are exposed to a laser speckle pattern inducing a non-Gaussian roughness and are followed by optical microscopy. The relaxation depends on the kind and degree of roughness of the PEL. It can be followed and quantified by the time- and ensemble-averaged mean squared displacement. Moreover, the heterogeneity of the dynamics is characterized using single-trajectory analysis. The results of this work are relevant for the correct interpretation of single-particle tracking experiments in general. PMID:27353405
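The distinction between time- and ensemble-averaged measures can be illustrated with a minimal sketch for free Brownian motion, the ergodic baseline against which the relaxation effects described above are judged. All parameter values here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an ensemble of 1-D Brownian trajectories (D = 0.5, dt = 1).
n_traj, n_steps, dt = 200, 1000, 1.0
steps = rng.normal(0.0, np.sqrt(dt), size=(n_traj, n_steps))
x = np.cumsum(steps, axis=1)


def ta_msd(traj, lag):
    """Time-averaged MSD of one trajectory: average over all start times."""
    d = traj[lag:] - traj[:-lag]
    return np.mean(d ** 2)


def ea_msd(ensemble, lag):
    """Ensemble-averaged MSD: average squared displacement over trajectories."""
    d = ensemble[:, lag] - ensemble[:, 0]
    return np.mean(d ** 2)


lag = 10
ta = np.mean([ta_msd(t, lag) for t in x])  # mean of the time averages
ea = ea_msd(x, lag)
# For ergodic Brownian motion both estimates approach 2*D*lag*dt = 10;
# in a rough potential landscape they can differ until equilibration.
```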
Relegation and Reversion: Longitudinal Analysis of School Turnaround and Decline
ERIC Educational Resources Information Center
Hochbein, Craig
2012-01-01
School turnaround has become a popular strategy for improvement of chronically low-performing schools. Research related to school turnaround has relied substantially upon case studies. These case studies often focus on successful turnarounds and report immediate outcomes, failing to provide information about the sustainability of the results. In…
School Turnaround: Cristo Rey Boston High School Case Study
ERIC Educational Resources Information Center
Thielman, Jeff
2012-01-01
The mandates of the federal No Child Left Behind Law, including the threat of closing a school for underperformance, have led to multiple public school turnaround attempts. Because turnaround is a relatively new area of focus in education, there is limited research on what does and does not work, and even the definition of turnaround is a work in…
Thermal motion in proteins: Large effects on the time-averaged interaction energies
NASA Astrophysics Data System (ADS)
Goethe, Martin; Fita, Ignacio; Rubi, J. Miguel
2016-03-01
As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence, also interaction energies (i.e. the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. As a result, time-averaged interaction energies generally do not coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies are typically smoother functions of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data of a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by several tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependency of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structure on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can largely be increased by introducing environment-specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.
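The smoothing effect itself is easy to reproduce numerically: averaging a curved pair-potential over fluctuating distances differs from evaluating it at the mean distance. A minimal sketch assuming a 12-6 Lennard-Jones form and Gaussian distance fluctuations (all parameter values are illustrative, not the paper's):

```python
import random

def lj(r, eps=0.1, sigma=3.4):
    """12-6 Lennard-Jones pair-potential (eps in kcal/mol, sigma in Angstrom)."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

random.seed(1)
r_avg, r_std = 4.0, 0.3  # assumed average distance and thermal fluctuation

# Time-averaged energy: average the potential over fluctuating distances
# (distances clipped at 3.0 A to avoid the diverging repulsive core).
samples = [lj(max(random.gauss(r_avg, r_std), 3.0)) for _ in range(100_000)]
e_time_avg = sum(samples) / len(samples)

# Naive estimate: the potential evaluated at the average distance.
e_at_avg = lj(r_avg)

# The gap between the two is the thermal smoothing effect discussed above.
smoothing = e_time_avg - e_at_avg
```

Because the potential is convex around this distance, the time-averaged energy sits above the potential at the average distance, in line with the paper's observation that the two quantities generally differ.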
Neural Networks Used to Compare Designed and Measured Time-Average Patterns
NASA Technical Reports Server (NTRS)
Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.
1999-01-01
Electronic time-average holograms are convenient for comparing the measured vibration modes of fan blades with those calculated by finite-element models. At the NASA Lewis Research Center, neural networks recently were trained to perform what had been a simple visual comparison of the predictions of the design models with the measurements. Finite-element models were used to train neural networks to recognize damage and strain information encoded in subtle changes in the time-average patterns of cantilevers. But the design-grade finite element models were unable to train the neural networks to detect damage in complex blade shapes. The design-model-generated patterns simply did not agree well enough with the measured patterns. Instead, hybrid-training records, with measured time-average patterns as the input and model-generated strain information as the output, were used to effect successful training.
Time domain averaging and correlation-based improved spectrum sensing method for cognitive radio
NASA Astrophysics Data System (ADS)
Li, Shenghong; Bi, Guoan
2014-12-01
Based on the combination of time domain averaging and correlation, we propose an effective time domain averaging and correlation-based spectrum sensing (TDA-C-SS) method used in very low signal-to-noise ratio (SNR) environments. With the assumption that the received signals from the primary users are deterministic, the proposed TDA-C-SS method processes the received samples by a time averaging operation to improve the SNR. Correlation operation is then performed with a correlation matrix to determine the existence of the primary signal in the received samples. The TDA-C-SS method does not need any prior information on the received samples and the associated noise power to achieve improved sensing performance. Simulation results are presented to show the effectiveness of the proposed TDA-C-SS method.
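A rough sketch of such a pipeline (coherent segment averaging followed by a correlation-based test statistic) might look as follows; the threshold, period, and signal model are illustrative assumptions, not the authors' exact TDA-C-SS formulation:

```python
import numpy as np

rng = np.random.default_rng(42)

def tda_c_ss(samples, period, threshold):
    """Sketch of time-domain-averaging + correlation spectrum sensing.

    Averaging segments of length `period` coherently raises the SNR of a
    deterministic periodic primary signal; the averaged segment's peak
    autocorrelation magnitude then serves as the detection statistic.
    """
    n_seg = len(samples) // period
    segs = samples[:n_seg * period].reshape(n_seg, period)
    avg = segs.mean(axis=0)                       # time-domain averaging
    stat = np.abs(np.correlate(avg, avg, mode="full")).max() / period
    return stat > threshold, stat

# Deterministic primary signal buried in strong noise (SNR well below 0 dB).
period, n_seg = 64, 500
tone = 0.2 * np.sin(2 * np.pi * 5 * np.arange(period) / period)
signal = np.tile(tone, n_seg) + rng.normal(0.0, 1.0, period * n_seg)
noise_only = rng.normal(0.0, 1.0, period * n_seg)

present, _ = tda_c_ss(signal, period, threshold=0.01)
absent, _ = tda_c_ss(noise_only, period, threshold=0.01)
```

Averaging 500 segments suppresses the noise power by a factor of 500 while the deterministic component is preserved, which is what makes detection feasible at very low SNR.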
Bürgi, Alfred; Scanferla, Damiano; Lehmann, Hugo
2014-01-01
Models for exposure assessment of high frequency electromagnetic fields from mobile phone base stations need the technical data of the base stations as input. One of these parameters, the Equivalent Radiated Power (ERP), is a time-varying quantity, depending on communication traffic. In order to determine temporal averages of the exposure, corresponding averages of the ERP have to be available. These can be determined as duty factors, the ratios of the time-averaged power to the maximum output power according to the transmitter setting. We determine duty factors for UMTS from the data of 37 base stations in the Swisscom network. The UMTS base stations sample contains sites from different regions of Switzerland and also different site types (rural/suburban/urban/hotspot). Averaged over all regions and site types, a UMTS duty factor F ≈ 0.32 ± 0.08 for the 24 h-average is obtained, i.e., the average output power corresponds to about a third of the maximum power. We also give duty factors for GSM based on simple approximations and a lower limit for LTE estimated from the base load on the signalling channels. PMID:25105551
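The duty factor itself is just the ratio of time-averaged output power to maximum output power; a minimal sketch with made-up numbers (not the Swisscom measurements):

```python
def duty_factor(power_samples, p_max):
    """Duty factor: time-averaged radiated power over maximum output power."""
    return sum(power_samples) / (len(power_samples) * p_max)

# Hypothetical 24 h of hourly averaged output power for a 20 W transmitter,
# with higher daytime traffic than at night.
p_max = 20.0
hourly_power = [4.0] * 6 + [8.0] * 12 + [4.0] * 6   # night / day / night
f = duty_factor(hourly_power, p_max)
```

For these illustrative numbers the 24 h duty factor comes out to 0.3, i.e. the average output power is about a third of the maximum, comparable in magnitude to the UMTS value reported above.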
NASA Astrophysics Data System (ADS)
Bozinoski, Radoslav
Significant research has been performed over the last several years on understanding the unsteady aerodynamics of various fluid flows. Much of this work has focused on quantifying the unsteady, three-dimensional flow field effects which have proven vital to the accurate prediction of many fluid and aerodynamic problems. Until recently, engineers have predominantly relied on steady-state simulations to analyze the inherently three-dimensional flow structures that are prevalent in many of today's "real-world" problems. Increases in computational capacity and the development of efficient numerical methods can change this and allow for the solution of the unsteady Reynolds-Averaged Navier-Stokes (RANS) equations for practical three-dimensional aerodynamic applications. An integral part of this capability has been the performance and accuracy of the turbulence models coupled with advanced parallel computing techniques. This report begins with a brief literature survey of the role fully three-dimensional, unsteady, Navier-Stokes solvers have in the current state of numerical analysis. Next, the process of creating a baseline three-dimensional Multi-Block FLOw procedure called MBFLO3 is presented. Solutions for an inviscid circular-arc bump, laminar flat plate, laminar cylinder, and turbulent flat plate are then presented. Results show good agreement with available experimental, numerical, and theoretical data. Scalability data for the parallel version of MBFLO3 are presented and show efficiencies of 90% and higher for processes of no less than 100,000 computational grid points. Next, the description and implementation techniques used for several turbulence models are presented. Following the successful implementation of the URANS and DES procedures, validation data for separated, non-reattaching flows over a NACA 0012 airfoil, a wall-mounted hump, and a wing-body junction geometry are presented. Results for the NACA 0012 showed significant improvement in flow predictions
Violation of Homogeneity of Variance Assumption in the Integrated Moving Averages Time Series Model.
ERIC Educational Resources Information Center
Gullickson, Arlen R.; And Others
This study is an analysis of the robustness of the Box-Tiao integrated moving averages model for analysis of time-series quasi-experiments. One of the assumptions underlying the Box-Tiao model is that all N values of α_t come from the same population, which has variance σ². The robustness was studied only in terms of…
A diagnostic study of time variations of regionally averaged background error covariances
NASA Astrophysics Data System (ADS)
Monteiro, Maria; Berre, Loïk
2010-12-01
In variational data assimilation systems, background error covariances are often estimated from a temporal and spatial average. For a limited area model such as the Aire Limitée Adaptation Dynamique Développement International (ALADIN)/France, the spatial average is calculated over the regional computation domain, which covers western Europe. The purpose of this study is to revise the temporal stationarity assumption by diagnosing time variations of such regionally averaged covariances. This is done through examination of covariance changes as a function of season (winter versus summer), day (in connection with the synoptic situation), and hour (related to the diurnal cycle), with the ALADIN/France regional ensemble Three-Dimensional Variational analysis (3D-Var) system. In summer, compared to winter, average error variances are larger, and spatial correlation functions are sharper horizontally but broader vertically. Daily changes in covariances are particularly strong during the winter period, with larger variances and smaller-scale error structures when an unstable low-pressure system is present in the regional domain. Diurnal variations are also significant in the boundary layer in particular, and, as expected, they tend to be more pronounced in summer. Moreover, the comparison between estimates provided by two independent ensembles indicates that these covariance time variations are estimated in a robust way from a six-member ensemble. All these results support the idea of representing these time variations by using a real-time ensemble assimilation system.
An upper bound to time-averaged space-charge limited diode currents
Griswold, M. E.; Fisch, N. J.; Wurtele, J. S.
2010-11-15
The Child-Langmuir law limits the steady-state current density across a one-dimensional planar diode. While it is known that the peak current density can surpass this limit when the boundary conditions vary in time, it remains an open question of whether the average current can violate the Child-Langmuir limit under time-dependent conditions. For the case where the applied voltage is constant but the electric field at the cathode is allowed to vary in time, one-dimensional particle-in-cell simulations suggest that such a violation is impossible. Although a formal proof is not given, an upper bound on the time-averaged current density is offered.
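The steady-state bound discussed here is the classical Child-Langmuir expression, J = (4ε₀/9)·√(2e/m)·V^(3/2)/d²; a quick numerical check in SI units (CODATA constants):

```python
import math

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19  # elementary charge, C
M_E = 9.1093837015e-31      # electron mass, kg

def child_langmuir_j(voltage, gap):
    """Steady-state space-charge-limited current density (A/m^2)
    for a one-dimensional planar diode:
    J = (4*eps0/9) * sqrt(2e/m) * V**1.5 / d**2."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
        * voltage ** 1.5 / gap ** 2

# Example: 1 kV across a 1 cm gap.
j = child_langmuir_j(1.0e3, 1.0e-2)
```

For these values J comes out to roughly 7.4e2 A/m². The paper's question is whether the time average of J can exceed this value under time-dependent boundary conditions; the simulations cited above suggest it cannot.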
Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam
2015-01-01
To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902
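The constrained choice of schedule buffer can be sketched as follows, using a normal transit-time model truncated at zero and a simple linear cost standing in for the paper's general cost-benefit functions; all numbers and names here are hypothetical:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def punctuality(buffer, mu, sigma, lo=0.0):
    """P(transit time <= mu + buffer) for a normal transit time
    with mean mu and std sigma, truncated below at lo."""
    z = lambda v: norm_cdf((v - mu) / sigma)
    return (z(mu + buffer) - z(lo)) / (1.0 - z(lo))

def optimal_buffer(mu, sigma, target_p, cost_per_min, step=0.1):
    """Smallest schedule buffer (minutes) meeting the punctuality target,
    returned with its linear operating cost."""
    b = 0.0
    while punctuality(b, mu, sigma) < target_p:
        b += step
    return b, b * cost_per_min

# Hypothetical service: 30 min average transit, 4 min std, 95% punctuality.
buf, cost = optimal_buffer(mu=30.0, sigma=4.0, target_p=0.95, cost_per_min=12.0)
```

This makes the trade-off in the abstract concrete: tightening the punctuality target pushes the required buffer (and hence cost) up the tail of the transit-time distribution.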
Testing ΛCDM cosmology at turnaround: where to look for violations of the bound?
NASA Astrophysics Data System (ADS)
Tanoglidis, D.; Pavlidou, V.; Tomaras, T. N.
2015-12-01
In ΛCDM cosmology, structure formation is halted shortly after dark energy dominates the mass/energy budget of the Universe. A manifestation of this effect is that in such a cosmology the turnaround radius, the non-expanding mass shell furthest away from the center of a structure, has an upper bound. Recently, a new, local test for the existence of dark energy in the form of a cosmological constant was proposed based on this turnaround bound. Before designing an experiment that, through high-precision determination of masses and, independently, turnaround radii, will challenge ΛCDM cosmology, we have to answer two important questions: first, when are turnaround-scale structures predicted to be close enough to their maximum size, so that a possible violation of the bound may be observable; and second, which is the best mass scale to target for possible violations of the bound. These are the questions we address in the present work. Using the Press-Schechter formalism, we find that turnaround structures have in practice already stopped forming, and consequently, the turnaround radius of structures must be very close to the maximum value today. We also find that the mass scale of ~10^13 solar masses characterizes the turnaround structures that start to form in a statistically important number density today, and even at an infinite time in the future, since structure formation has almost stopped. This mass scale also separates turnaround structures with qualitatively different cosmological evolution: smaller structures are no longer readjusting their mass distribution inside the turnaround scale, they asymptotically approach their ultimate abundance from higher values, and they are common enough to have, at some epoch, experienced major mergers with structures of comparable mass; larger structures exhibit the opposite behavior. We call this mass scale the transitional mass scale and we argue that it is optimal for the purpose outlined above. As a corollary, we
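The turnaround bound referred to above is commonly written R_max = (3GM/Λc²)^(1/3); a quick numerical check at the transitional mass scale (constant values approximate, Λ taken near the Planck best-fit):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
LAMBDA = 1.1e-52   # cosmological constant, m^-2 (approximate)
M_SUN = 1.989e30   # solar mass, kg
MPC = 3.086e22     # megaparsec, m

def max_turnaround_radius(mass_kg):
    """Upper bound on the turnaround radius in LCDM:
    R_max = (3 * G * M / (Lambda * c**2)) ** (1/3)."""
    return (3.0 * G * mass_kg / (LAMBDA * C * C)) ** (1.0 / 3.0)

# The transitional mass scale discussed above, ~1e13 solar masses:
r = max_turnaround_radius(1.0e13 * M_SUN) / MPC
```

For a 10^13 solar-mass structure this gives a maximum turnaround radius of roughly 2.4 Mpc, which sets the scale an observational test of the bound would need to resolve.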
Inverse methods for estimating primary input signals from time-averaged isotope profiles
NASA Astrophysics Data System (ADS)
Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.
2005-08-01
Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
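The linear system Am = d described above can be inverted with a minimum-length (minimum-norm) solution. A minimal sketch, assuming a simple boxcar averaging kernel as a stand-in for the paper's measured amelogenesis geometry (the matrix shape, averaging span, and step-change diet input are illustrative assumptions):

```python
import numpy as np

def averaging_matrix(n_samples, n_input, span):
    """Each measured value d_i is the mean of `span` consecutive input
    values; a crude stand-in for temporal averaging during enamel
    maturation plus laboratory sampling."""
    A = np.zeros((n_samples, n_input))
    for i in range(n_samples):
        start = i * (n_input - span) // max(n_samples - 1, 1)
        A[i, start:start + span] = 1.0 / span
    return A

# Hypothetical input: a step change in diet (e.g. d13C from -8 to -20 permil)
m_true = np.concatenate([np.full(20, -8.0), np.full(20, -20.0)])
A = averaging_matrix(12, 40, 10)
d = A @ m_true                  # time-averaged "measured" profile
m_est = np.linalg.pinv(A) @ d   # minimum-length solution of A m = d
```

The pseudoinverse returns the minimum-norm member of the infinite solution set of the underdetermined system; the recovered input reproduces the measured profile exactly and, as in the paper's synthetic tests, recovers the step structure of the original signal in smoothed form.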
Leading a supply chain turnaround.
Slone, Reuben E
2004-10-01
Just five years ago, salespeople at Whirlpool were in the habit of referring to their supply chain organization as the "sales disablers." Now the company excels at getting products to the right place at the right time--while managing to keep inventories low. How did that happen? In this first-person account, Reuben Slone, Whirlpool's vice president of Global Supply Chain, describes how he and his colleagues devised the right supply chain strategy, sold it internally, and implemented it. Slone insisted that the right focal point for the strategy was the satisfaction of consumers at the end of the supply chain. Most supply chain initiatives do the opposite: They start with the realities of a company's manufacturing base and proceed from there. Through a series of interviews with trade customers large and small, his team identified 27 different capabilities that drove industry perceptions of Whirlpool's performance. Knowing it was infeasible to aim for world-class performance across all of them, Slone weighed the costs of excelling at each and found the combination of initiatives that would provide overall competitive advantage. A highly disciplined project management office and broad training in project management were key to keeping work on budget and on benefit. Slone set an intense pace--three "releases" of new capabilities every month--that the group maintains to this day. Lest this seem like a technology story, however, Slone insists it is just as much a "talent renaissance." People are proud today to be part of Whirlpool's supply chain organization, and its new generation of talent will give the company a competitive advantage for years to come. PMID:15559580
Decker, A.J.
1984-06-01
The holographic recording of the time history of a flow feature in three dimensions is discussed. The use of diffuse illumination holographic interferometry for the three-dimensional visualization of flow features such as shock waves and turbulent eddies is described. The double-exposure and time-average methods are compared using the characteristic function and the results from a flow simulator. A time history requires a large hologram recording rate. Results of holographic cinematography of the shock waves in a flutter cascade are presented as an example. Future directions of this effort, including the availability and development of suitable lasers, are discussed.
Mean square average-consensus for multi-agent systems with measurement noise and time delay
NASA Astrophysics Data System (ADS)
Sun, Fenglan; Guan, Zhi-Hong; Ding, Li; Wang, Yan-Wu
2013-06-01
Mean square average consensus for multi-agent systems with measurement noise and time delay under fixed digraph is studied in this article. The time-varying consensus-gain is introduced to attenuate the measurement noise. By combining the tools of algebraic graph theory, matrix theory and stochastic analysis, consensus protocols for multi-agent systems with measurement noise and time delay are elaborately analysed. The example and simulation results are given to illustrate the effectiveness of the obtained theoretical results. Moreover, the simulations demonstrate that, the proper consensus-gain function in the consensus protocol is the necessary and sufficient condition for the convergence of the multi-agent systems.
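The role of the time-varying consensus gain can be illustrated with a toy simulation: a three-agent complete graph with additive Gaussian measurement noise. The gain sequence a_k = 1/(k+3) below is an illustrative assumption (not the paper's protocol), chosen so that the sum of a_k diverges (consensus is reached) while the sum of a_k² converges (the injected noise stays attenuated), the standard stochastic-approximation condition:

```python
import random

random.seed(1)

def consensus_with_noise(x0, steps, sigma):
    """Average-consensus iteration with noisy neighbour measurements and
    a decaying gain a_k = 1/(k+3)."""
    x = list(x0)
    n = len(x)
    for k in range(steps):
        a = 1.0 / (k + 3)
        nxt = []
        for i in range(n):
            # each agent sees noisy measurements of every other agent
            drift = sum((x[j] + random.gauss(0.0, sigma)) - x[i]
                        for j in range(n) if j != i)
            nxt.append(x[i] + a * drift)
        x = nxt
    return x

x = consensus_with_noise([0.0, 5.0, 10.0], 500, 0.05)
```

With a constant gain the noise variance would not die out; the decaying gain drives the agents to a common value close to the initial average (5.0 here).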
Bias Corrections for Regional Estimates of the Time-averaged Geomagnetic Field
NASA Astrophysics Data System (ADS)
Constable, C.; Johnson, C. L.
2009-05-01
We assess two sources of bias in the time-averaged geomagnetic field (TAF) and paleosecular variation (PSV): inadequate temporal sampling, and the use of unit vectors in deriving temporal averages of the regional geomagnetic field. For the first question, we use statistical resampling of existing data sets to minimize and correct for bias arising from uneven temporal sampling in studies of the TAF and its PSV. The techniques are illustrated using data derived from Hawaiian lava flows for 0-5 Ma: directional observations are an updated version of a previously published compilation of paleomagnetic directional data centered on ±20° latitude by Lawrence et al. (2006); intensity data are drawn from Tauxe & Yamazaki (2007). We conclude that poor temporal sampling can produce biased estimates of TAF and PSV, and that resampling to an appropriate statistical distribution of ages reduces this bias. We suggest that similar resampling be attempted as a bias correction for all regional paleomagnetic data to be used in TAF and PSV modeling. The second potential source of bias is the use of directional data in place of full vector data to estimate the average field. This is investigated for the full vector subset of the updated Hawaiian data set. Lawrence, K.P., C.G. Constable, and C.L. Johnson, 2006, Geochem. Geophys. Geosyst., 7, Q07007, DOI 10.1029/2005GC001181. Tauxe, L., & Yamazaki, 2007, Treatise on Geophysics, 5, Geomagnetism, Elsevier, Amsterdam, Chapter 13, p. 509.
Convergence of Time Averages of Weak Solutions of the Three-Dimensional Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Foias, Ciprian; Rosa, Ricardo M. S.; Temam, Roger M.
2015-08-01
Using the concept of stationary statistical solution, which generalizes the notion of invariant measure, it is proved that, in a suitable sense, time averages of almost every Leray-Hopf weak solution of the three-dimensional incompressible Navier-Stokes equations converge as the averaging time goes to infinity. This system of equations is not known to be globally well-posed, and the above result answers a long-standing problem, extending to this system a classical result from ergodic theory. It is also shown that, from a measure-theoretic point of view, the stationary statistical solution obtained from a generalized limit of time averages is independent of the choice of the generalized limit. Finally, any Borel subset of the phase space with positive measure with respect to a stationary statistical solution is such that for almost all initial conditions in that Borel set and for at least one Leray-Hopf weak solution starting with that initial condition, the corresponding orbit is recurrent to that Borel subset and its mean sojourn time within that Borel subset is strictly positive.
Time-average-based Methods for Multi-angular Scale Analysis of Cosmic-Ray Data
NASA Astrophysics Data System (ADS)
Iuppa, R.; Di Sciascio, G.
2013-04-01
Over the past decade, a number of experiments have dealt with the problem of measuring the arrival direction distribution of cosmic rays, looking for information on the propagation mechanisms and the identification of their sources. Any deviation from isotropy may be regarded as a signature of unforeseen or unknown phenomena, especially if well localized in the sky and occurring at low rigidity. This has induced experimenters to search for excesses down to angular scales as narrow as 10°, raising the issue of properly filtering out contributions from wider structures. A commonly envisaged solution is based on time-average methods to determine the reference value of the cosmic-ray flux. Such techniques are nearly insensitive to signals wider than the time window in use, thus allowing the analysis to focus on medium- and small-scale signals. Nonetheless, the signal often cannot be excluded from the calculation of the reference value, which induces systematic errors. The use of time-average methods has recently led to important discoveries about the medium-scale cosmic-ray anisotropy, present in both the northern and southern hemispheres. It is known that excesses (or deficits) are observed as less intense than they really are, and that fake deficit zones appear around true excesses, because there is no a priori knowledge of which signal is true and which is not. This work is an attempt to critically review the use of time-average-based methods for observing extended features in the cosmic-ray arrival distribution pattern.
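The filtering behaviour described here can be sketched in a few lines: take the reference flux at each time bin to be the running mean over a surrounding window, so that only structures narrower than the window survive in the residual, and artificial deficits appear around true excesses. The window length and the synthetic signal below are illustrative assumptions:

```python
import math

def residual_after_time_average(counts, half_window):
    """Subtract a running-mean reference from the counts; structures wider
    than the window are absorbed into the reference and filtered out."""
    n = len(counts)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        ref = sum(counts[lo:hi]) / (hi - lo)
        out.append(counts[i] - ref)
    return out

# Synthetic rate: flat background + wide sinusoidal modulation + narrow excess
counts = [100.0 + 10.0 * math.sin(2 * math.pi * i / 200)
          + (20.0 if 95 <= i < 105 else 0.0) for i in range(400)]
res = residual_after_time_average(counts, 15)
```

In the residual, the wide sinusoidal structure is gone, the narrow excess survives but is attenuated, and the bins flanking it dip below zero: exactly the fake-deficit artifact the review discusses.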
A Stochastic Model of Space-Time Variability of Tropical Rainfall: I. Statistics of Spatial Averages
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)
2002-01-01
Global maps of rainfall are of great importance in modeling of the Earth's climate. Comparing the rainfall maps predicted by computer climate models with observations provides a sensitive test of these models. To make such a comparison, one typically needs the total precipitation amount over a large area, which could be hundreds of kilometers in size, over extended periods of time of order days or months. This presents a difficult problem, since rain varies greatly from place to place as well as in time. Remote sensing methods using ground radar or satellites detect rain over a large area by essentially taking a series of snapshots at infrequent intervals and indirectly deriving the average rain intensity within a collection of pixels, usually several kilometers in size. They measure an area average of rain at a particular instant. Rain gauges, on the other hand, record rain accumulation continuously in time but only over a very small area tens of centimeters across, say, the size of a dinner plate. They measure only a time average at a single location. In making use of either method one needs to fill in the gaps in the observation - either the gaps in the area covered or the gaps in time of observation. This involves using statistical models to obtain information about the rain that is missed from what is actually detected. This paper investigates such a statistical model and validates it with rain data collected over the tropical western Pacific from shipborne radars during TOGA COARE (Tropical Oceans Global Atmosphere Coupled Ocean-Atmosphere Response Experiment). The model incorporates a number of commonly observed features of rain. While rain varies rapidly with location and time, the variability diminishes when averaged over larger areas or longer periods of time. Moreover, rain is patchy in nature - at any instant, on average only a certain fraction of the observed pixels contain rain. The fraction of area covered by
Deepan, B; Quan, C; Tay, C J
2016-08-01
In this paper, a novel technique for quantitative vibration analysis using time-average electronic speckle pattern interferometry is proposed. An amplitude-varied time-average refreshing reference frame method is used to capture a fringe pattern with a better fringe contrast than the conventional reference frame technique. The recorded fringe patterns with improved contrast provide better mode shape visibility and are easier to process. A derivative-based regularized phase tracker model is used to retrieve vibration amplitudes from a single fringe pattern. The method does not require a phase shifter to obtain the mode shape or amplitude. The method provides unwrapped amplitude and amplitude derivatives maps directly, so a separate phase unwrapping process is not required. Experimental work is carried out using a circular aluminum plate test specimen and the results are compared with a finite element method modal analysis. Both experimental and numerical results show that the proposed method is robust and accurate. PMID:27505366
H∞ Adaptive tracking control for switched systems based on an average dwell-time method
NASA Astrophysics Data System (ADS)
Wu, Caiyun; Zhao, Jun
2015-10-01
This paper investigates the H∞ state tracking model reference adaptive control (MRAC) problem for a class of switched systems using an average dwell-time method. First, a stability criterion is established for the switched reference model. Then, an adaptive controller is designed and the state tracking control problem is converted into a stability analysis. The global practical stability of the error switched system can be guaranteed under a class of switching signals characterised by an average dwell time. Consequently, sufficient conditions for the solvability of the H∞ state tracking MRAC problem are derived. An example of a highly manoeuvrable aircraft technology vehicle is given to demonstrate the feasibility and effectiveness of the proposed design method.
Decision-aided ICI mitigation with time-domain average approximation in CO-OFDM
NASA Astrophysics Data System (ADS)
Ren, Hongliang; Cai, Jiaxing; Ye, Xin; Lu, Jin; Cao, Quanjun; Guo, Shuqin; Xue, Lin-lin; Qin, Yali; Hu, Weisheng
2015-07-01
We introduce and investigate the feasibility of a novel iterative blind phase-noise inter-carrier interference (ICI) mitigation scheme for coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. The ICI mitigation is performed through the combination of frequency-domain symbol decision-aided estimation and a time-average approximation of the ICI phase noise. An additional initial decision process with a suitable threshold is introduced in order to suppress decision-error symbols. Our proposed ICI mitigation scheme proves effective in removing ICI for a simulated CO-OFDM system with a 16-QAM modulation format. At slightly higher computational complexity, it outperforms the time-domain average blind ICI (Avg-BL-ICI) algorithm at relatively wide laser linewidths and high OSNR.
Long, Lijun; Zhao, Jun
2015-07-01
This paper investigates the problem of adaptive neural tracking control via output-feedback for a class of switched uncertain nonlinear systems without the measurements of the system states. The unknown control signals are approximated directly by neural networks. A novel adaptive neural control technique for the problem studied is set up by exploiting the average dwell time method and backstepping. A switched filter and different update laws are designed to reduce the conservativeness caused by adoption of a common observer and a common update law for all subsystems. The proposed controllers of subsystems guarantee that all closed-loop signals remain bounded under a class of switching signals with average dwell time, while the output tracking error converges to a small neighborhood of the origin. As an application of the proposed design method, adaptive output feedback neural tracking controllers for a mass-spring-damper system are constructed. PMID:25122844
Time series autoregressive integrated moving average modeling of test-day milk yields of dairy ewes.
Macciotta, N P; Cappio-Borlino, A; Pulina, G
2000-05-01
Monthly test-day milk yields of 1200 dairy Sarda ewes were analyzed by time-series methods. Autocorrelation functions were calculated for lactations within parity classes and altitude of location of flocks. Spectral analysis of the data series was carried out by Fourier transformation, and different Box-Jenkins autoregressive integrated moving average models were fitted. The separation of deterministic and stochastic components highlighted the autoregressive character of the milk production pattern. The forecasting power of autoregressive integrated moving average models was tested by predicting total milk production for a standardized lactation length of 225 d from only a few test-day records. Results indicated a greater forecasting capacity in comparison with standard methods and suggested further development of time-series analysis for studying lactation curves with more sophisticated methods, such as wavelet decomposition and neural network models. PMID:10821585
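Full ARIMA fitting is normally delegated to a statistics package; as a minimal stdlib sketch of the autoregressive idea behind the forecasts above, here is the AR(1) special case. The coefficients and the synthetic "test-day yield" series are illustrative assumptions, not the paper's fitted values:

```python
import random

random.seed(0)

def fit_ar1(y):
    """Least-squares estimate of (c, phi) in y_t = c + phi * y_{t-1} + e_t."""
    x, z = y[:-1], y[1:]
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    phi = (sum((a - mx) * (b - mz) for a, b in zip(x, z))
           / sum((a - mx) ** 2 for a in x))
    return mz - phi * mx, phi

def forecast(last, c, phi, horizon):
    """Iterate the fitted recursion to extend the series beyond the data."""
    out = []
    for _ in range(horizon):
        last = c + phi * last
        out.append(last)
    return out

# Synthetic series: AR(1) with stationary mean 2.0 and phi = 0.7
y, v = [], 2.0
for _ in range(200):
    v = 0.6 + 0.7 * v + random.gauss(0.0, 0.1)
    y.append(v)

c, phi = fit_ar1(y)
pred = forecast(y[-1], c, phi, 30)
```

Forecasting a whole lactation from a few test-day records works exactly this way: the fitted recursion is iterated forward and the predicted values summed.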
Time-averaged in-line digital holographic interferometry for vibration analysis
Asundi, Anand; Singh, Vijay Raj
2006-04-10
Time-averaged in-line digital holography is applied for vibration analysis. In particular, by use of a double-exposure approach, simultaneous determination of the vibration mode shape and the mean static deformation during a vibration cycle is obtained. The subtraction of two numerically reconstructed digital holograms recorded at the same resonant frequency but with a small difference in amplitude shows the mixing of Bessel-type time-averaged fringes owing to vibration and of double-exposure fringes owing to differences in the mean deformation of the object. It is shown that separation of these fringe patterns can be readily accomplished numerically. This effect is demonstrated experimentally by use of in-line digital holography on relatively small membranes.
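The "Bessel-type time-averaged fringes" mentioned here follow the classic |J0|² law: for sinusoidal out-of-plane vibration of amplitude a, the time-averaged fringe intensity is proportional to J0(4πa/λ)². A minimal sketch, assuming a HeNe wavelength and the standard normal-illumination, normal-observation geometry factor of 4π/λ:

```python
import math

def bessel_j0(x, n=2000):
    """J0 via its integral representation,
    J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt  (midpoint rule)."""
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n)) * h / math.pi

def fringe_intensity(amplitude_m, wavelength_m=633e-9):
    """Relative time-averaged fringe intensity for vibration amplitude a:
    I/I0 = J0(4*pi*a/lambda)**2."""
    return bessel_j0(4.0 * math.pi * amplitude_m / wavelength_m) ** 2

# Dark fringes sit at the zeros of J0; the first zero is at 4*pi*a/lambda ~ 2.405,
# i.e. an amplitude of roughly 0.19 * lambda
a_dark = 2.405 * 633e-9 / (4.0 * math.pi)
```

Nodal regions (zero amplitude) stay bright, and each successive dark fringe marks the next zero of J0, which is how amplitude maps are read off such interferograms.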
Manned Orbital Transfer Vehicle (MOTV). Volume 5: Turnaround analysis
NASA Technical Reports Server (NTRS)
Boyland, R. E.; Sherman, S. W.; Morfin, H. W.
1979-01-01
The development of a low-cost, reliable turnaround process to employ the MOTV in enhancing the utilization of the geosynchronous space region is analyzed. It is indicated that a routine, effective turnaround/maintenance plan must make maximum use of flight data for maintenance planning, a high degree of test automation, and MOTV maintainability features in order to minimize tests, facilitate repair, and reduce manpower requirements. An effective turnaround plan pays back in reduced risk.
The Average Density of Extrasolar Habitable Planets Over Cosmological Time Scales
NASA Astrophysics Data System (ADS)
von Bloh, W.; Franck, S.; Bounama, C.; Schellnhuber, H. J.
A general modelling scheme for assessing the suitability for life on any Earth-like extrasolar planet is presented. This approach is based on an integrated Earth system analysis in order to calculate the habitable zone in main-sequence-star planetary systems. Within this model the evolution of the habitable zone over geological time scales is straightforward to calculate and allows an estimate of the probability that an Earth-like planet is within the habitable zone of an extrasolar planetary system. The probability depends explicitly on the time since planet formation. A new attempt by Lineweaver (2001) to estimate the formation rate of Earth-like planets over cosmological time scales is applied to calculate the average density of habitable planets as a function of time. This approach is based on a quantitative determination of metallicity from star formation rates as an ingredient for forming Earth-like planets. Combining this result with our estimations of extrasolar habitable zones yields the average density of habitable planets over cosmological time scales. We find that there was a maximum density of habitable planets at the time of Earth's origin.
Effects of Time Averaging on Optical Scintillation in a Ground-to-Satellite Atmospheric Propagation
NASA Astrophysics Data System (ADS)
Toyoshima, Morio; Araki, Kenichi
2000-04-01
The temporal characteristics of the variance of turbulence-induced log-intensity fluctuations are obtained. The variance of the optical fluctuation is reduced when the optical signals are integrated in a photodetector, and we express the index of reduction (called the time-averaging factor) by using an autocovariance function of the optical fluctuation. The optical fluctuations over a ground-to-satellite path are caused by both atmospheric turbulence and the beam-pointing jitter error of the optical transmitter. The turbulence-induced optical scintillation can be discriminated from the fluctuation due to the beam-pointing jitter error; a comparison based on the probability density function of the optical signal shows good agreement. The temporal autocovariance functions of optical scintillation are obtained and used to calculate the time-averaging factor. The analytically expected effects of time averaging are verified by the experimental results. These estimates contribute to the link budget design for the optical tracking channel through atmospheric turbulence.
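The time-averaging factor defined from the autocovariance can be sketched numerically. For a stationary fluctuation with normalized autocovariance ρ(τ), the variance of a signal integrated over T seconds is reduced by F(T) = (2/T) ∫₀ᵀ (1 - τ/T) ρ(τ) dτ, a standard result for averaged stationary processes. The exponential autocovariance and 1 ms correlation time below are illustrative assumptions, not values from this paper:

```python
import math

def time_averaging_factor(rho, T, n=4000):
    """Variance reduction for a signal integrated over time T:
    F(T) = (2/T) * integral_0^T (1 - tau/T) * rho(tau) d tau  (midpoint rule)."""
    h = T / n
    s = sum((1.0 - (k + 0.5) * h / T) * rho((k + 0.5) * h) for k in range(n))
    return 2.0 * s * h / T

# Illustrative scintillation autocovariance: exponential, 1 ms correlation time
rho = lambda tau, tc=1e-3: math.exp(-tau / tc)

f_fast = time_averaging_factor(rho, 1e-6)  # T << tc: essentially no averaging
f_slow = time_averaging_factor(rho, 1e-1)  # T >> tc: variance down to ~2*tc/T
```

A detector much faster than the scintillation sees the full variance (factor near 1); a slow detector averages it down, which is the effect the paper folds into the link budget.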
Effects of time averaging on optical scintillation in a ground-to-satellite atmospheric propagation.
Toyoshima, M; Araki, K
2000-04-20
The temporal characteristics of the variance of turbulence-induced log-intensity fluctuations are obtained. The variance of the optical fluctuation is reduced when the optical signals are integrated in a photodetector, and we express the index of reduction (called the time-averaging factor) by using an autocovariance function of the optical fluctuation. The optical fluctuations over a ground-to-satellite path are caused by both atmospheric turbulence and the beam-pointing jitter error of the optical transmitter. The turbulence-induced optical scintillation can be discriminated from the fluctuation due to the beam-pointing jitter error; a comparison based on the probability density function of the optical signal shows good agreement. The temporal autocovariance functions of optical scintillation are obtained and used to calculate the time-averaging factor. The analytically expected effects of time averaging are verified by the experimental results. These estimates contribute to the link budget design for the optical tracking channel through atmospheric turbulence. PMID:18345087
NASA Astrophysics Data System (ADS)
Buchholz, Max; Grossmann, Frank; Ceotto, Michele
2016-03-01
A mixed semiclassical initial value representation expression for spectroscopic calculations is derived. The formulation takes advantage of the time-averaging filtering and the hierarchical properties of different trajectory-based propagation methods. A separable approximation is then introduced that greatly reduces (by about an order of magnitude) the computational cost compared with a full Herman-Kluk time-averaging semiclassical calculation for the same systems. The expression is exact for the harmonic case and is tested numerically for a Morse potential coupled to one or two additional harmonic degrees of freedom. Results are compared to full Herman-Kluk time-averaging calculations and exact quantum wavepacket propagations. We found the peak positions of the mixed semiclassical approximations to be always in very good agreement with full quantum calculations, while overtone peak intensities are lower than the exact ones. Given the reduced computational effort required by this new mixed semiclassical approximation, we believe the present method makes spectroscopic calculations feasible for higher-dimensional systems than were accessible before.
Buchholz, Max; Grossmann, Frank; Ceotto, Michele
2016-03-01
A mixed semiclassical initial value representation expression for spectroscopic calculations is derived. The formulation takes advantage of the time-averaging filtering and the hierarchical properties of different trajectory-based propagation methods. A separable approximation is then introduced that greatly reduces (by about an order of magnitude) the computational cost compared with a full Herman-Kluk time-averaging semiclassical calculation for the same systems. The expression is exact for the harmonic case and is tested numerically for a Morse potential coupled to one or two additional harmonic degrees of freedom. Results are compared to full Herman-Kluk time-averaging calculations and exact quantum wavepacket propagations. We found the peak positions of the mixed semiclassical approximations to be always in very good agreement with full quantum calculations, while overtone peak intensities are lower than the exact ones. Given the reduced computational effort required by this new mixed semiclassical approximation, we believe the present method makes spectroscopic calculations feasible for higher-dimensional systems than were accessible before. PMID:26957152
Quick turnaround facilities for reusable rocket launchers
NASA Astrophysics Data System (ADS)
Lacaze, H.; Bombled, J. P.
After the requirements for future vertical-takeoff, recoverable/reusable vehicle launch pads were defined on the basis of a functional analysis, the conceptual design of a high-turnaround/low-maintenance facility was performed. The results obtained show that performance goals are achievable if specific efforts are directed to the optimization of the launch pad's jet deflector. Potential damage is restricted to the support/release jaws and the jet deflector; a maintenance-free 200-launch life may be obtained through steel cladding of critical deflector areas.
Methods of phase reconstruction for time-averaging electronic speckle pattern interferometry.
Statsenko, T; Chatziioannou, V; Moore, T; Kausel, W
2016-03-10
Electronic speckle pattern interferometry is useful for the qualitative depiction of the deformation profile of harmonically vibrating objects. However, extending the process to achieve quantitative results requires unwrapping the phase in the interferogram, which contains significant noise due to the speckle. Two methods to achieve accurate phase information from time-averaged speckle pattern interferograms are presented. The first is based on a direct inverse of the regions within corresponding phase intervals, and the second is based on optimization of four independent parameters. The optimization method requires less time than more commonly used algorithms and shows higher precision of the resulting surface displacement. PMID:26974782
Luque-Vasquez, Fernando; Minjarez-Sosa, J. Adolfo; Rosas-Rosas, Luz del Carmen
2010-06-15
This paper deals with a class of semi-Markov control models with Borel state and control spaces, possibly unbounded costs, and unknown holding-time distribution F. Assuming that F does not depend on state-action pairs, we combine suitable methods of statistical estimation of the mean holding time with control procedures to construct an average-cost optimal Markovian policy π̂ = {f_n} and an optimal stationary policy {f_∞}, where f_n converges to f_∞ in the sense of Schäl.
ERIC Educational Resources Information Center
Robinson, William S.; Buntrock, LeAnn M.
2011-01-01
Turning around chronically low-performing schools is challenging work requiring fundamental rethinking of the change process, and a systemic rather than school-by-school approach. Without a doubt, high-impact school leaders are critical to turnaround success, and pockets of success around the country demonstrate this. However, transformational and…
ERIC Educational Resources Information Center
Public Impact, 2008
2008-01-01
This toolkit includes these separate sections: (1) Selection Preparation Guide; (2) Day-of-Interview Tools; (3) Candidate Rating Tools; and (4) Candidate Comparison and Decision Tools. Each of the sections is designed to be used at different stages of the selection process. The first section provides turnaround teacher competencies that are the…
NASA Astrophysics Data System (ADS)
Özgür, Evren; Koçak, Kasım
2016-04-01
According to the nonlinear dynamical systems approach, the time evolution of a system can be represented by its trajectories in phase space. This phase space is spanned by the state variables that are necessary to determine the time evolution of the system. Atmospheric processes cannot be represented by linear approaches because of their dependence on numerous independent variables. Since small changes in initial conditions lead to significant differences in prediction, long-term prediction of meteorological variables is not possible. This situation is described by the term "sensitive dependence on initial conditions". In this study, we determine the average prediction time for different atmospheric variables by applying a nonlinear approach. The first step of the method is to reconstruct the phase space, which requires two parameters: the time delay and the embedding dimension. The Mutual Information Function (MIF) can be used to determine the optimum time delay; the MIF accounts for both linear and nonlinear dependencies in a given time series. To define the phase space, the embedding dimension must also be identified correctly. The embedding dimension is the number of state variables necessary to describe the dynamics of the system, and the algorithm used to determine it is False Nearest Neighbors (FNN). After constructing the phase space using the time delay and embedding dimension, the maximum Lyapunov exponent is computed. The Lyapunov exponent quantifies the exponential divergence or convergence of nearby orbits in phase space; a dynamical system with a positive Lyapunov exponent is chaotic. Because meteorological variables are controlled by large numbers of independent variables, their time series may be produced by a chaotic dynamical system. Using the phase space and the maximum Lyapunov exponent value, average prediction times of different parameters were calculated.
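The last step, the maximum Lyapunov exponent, is what fixes the average prediction time (roughly 1/λ is the e-folding horizon for forecast error). A minimal sketch on the logistic map, where the exponent is known analytically; the map is only a stand-in for an atmospheric system, and for measured series one would use a phase-space estimator such as Rosenstein's method rather than the analytic derivative used here:

```python
import math

def lyapunov_logistic(r, x0=0.3, n=100000, burn=1000):
    """Largest Lyapunov exponent of x -> r*x*(1-x) as the orbit average
    of ln|f'(x)| = ln|r*(1 - 2x)|."""
    x = x0
    for _ in range(burn):            # discard transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        total += math.log(abs(r * (1.0 - 2.0 * x)))
    return total / n

lam = lyapunov_logistic(4.0)   # analytic value for r = 4: ln 2
horizon = 1.0 / lam            # e-folding prediction horizon, in map iterations
```

A positive λ confirms chaos; the larger it is, the shorter the window over which any forecast of the system remains useful.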
High Resolution observation of Ark 564: I. Time-average spectrum
NASA Astrophysics Data System (ADS)
Ramírez, J.; Komossa, S.
2010-07-01
We present an analysis of a 100 ks X-ray spectrum of the Narrow-Line Seyfert 1 galaxy Ark 564, taken with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra X-ray telescope. We fit several spectral continuum models to the time-averaged X-ray spectrum of this galaxy and study the possible origin of the steep soft X-ray spectrum. We also characterize the average intrinsic absorption of the system based on photoionization modeling. Using χ2 statistics, we compare several of these spectral models and arrive at possible solutions for the description of (1) the intrinsic emission continuum and (2) the intrinsic absorption of the system.
On simulating flow with multiple time scales using a method of averages
Margolin, L.G.
1997-12-31
The author presents a new computational method based on averaging to efficiently simulate certain systems with multiple time scales. He first develops the method in a simple one-dimensional setting and employs linear stability analysis to demonstrate numerical stability. He then extends the method to multidimensional fluid flow. His method of averages does not depend on explicit splitting of the equations nor on modal decomposition. Rather he combines low order and high order algorithms in a generalized predictor-corrector framework. He illustrates the methodology in the context of a shallow fluid approximation to an ocean basin circulation. He finds that his new method reproduces the accuracy of a fully explicit second-order accurate scheme, while costing less than a first-order accurate scheme.
Asymmetric multifractal detrending moving average analysis in time series of PM2.5 concentration
NASA Astrophysics Data System (ADS)
Zhang, Chen; Ni, Zhiwei; Ni, Liping; Li, Jingming; Zhou, Longfei
2016-09-01
In this paper, we propose the asymmetric multifractal detrending moving average analysis (A-MFDMA) method to explore asymmetric correlations in non-stationary time series. The proposed method is applied to explore the asymmetric correlation of PM2.5 daily average concentrations during uptrends and downtrends in China. In addition, shuffling and phase randomization procedures are applied to detect the sources of multifractality. The results show that asymmetric correlations exist and that these correlations are multifractal. Further, the multifractal scaling behavior of the Chinese PM2.5 series is caused not only by long-range correlation but also by the fat-tailed distribution, with the fat-tailed distribution being the major source of multifractality.
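The two surrogate tests mentioned here are standard and can be sketched as below; the helper names and the toy fat-tailed series are illustrative assumptions, not the paper's data. Shuffling destroys temporal correlations while preserving the amplitude distribution, so multifractality that survives it is attributed to fat tails; phase randomization preserves the linear correlation structure while Gaussianizing the amplitudes, isolating the correlation contribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def shuffled_surrogate(x, rng):
    """Destroys temporal correlations but preserves the amplitude
    distribution exactly: isolates multifractality from fat tails."""
    return rng.permutation(x)

def phase_randomized_surrogate(x, rng):
    """Preserves the power spectrum (linear correlations) while
    randomizing Fourier phases, which Gaussianizes the amplitudes:
    isolates the correlation contribution to multifractality."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    phases[0] = 0.0    # keep the DC bin real
    phases[-1] = 0.0   # keep the Nyquist bin real (even length)
    Xs = np.abs(X) * np.exp(1j * phases)
    return np.fft.irfft(Xs, n=len(x))

x = rng.standard_t(df=3, size=4096)   # fat-tailed toy series
sh = shuffled_surrogate(x, rng)
pr = phase_randomized_surrogate(x, rng)
print(np.allclose(np.sort(sh), np.sort(x)))  # True: same marginal distribution
```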
TIME-AVERAGE-BASED METHODS FOR MULTI-ANGULAR SCALE ANALYSIS OF COSMIC-RAY DATA
Iuppa, R.; Di Sciascio, G.
2013-04-01
Over the past decade, a number of experiments have dealt with the problem of measuring the arrival direction distribution of cosmic rays, looking for information on the propagation mechanisms and the identification of their sources. Any deviation from isotropy may be regarded as a signature of unforeseen or unknown phenomena, especially if well localized in the sky and occurring at low rigidity. This induced experimenters to search for excesses down to angular scales as narrow as 10°, raising the issue of properly filtering out contributions from wider structures. A commonly envisaged solution is based on time-average methods to determine the reference value of the cosmic-ray flux. Such techniques are nearly insensitive to signals wider than the time window in use, thus allowing the analysis to focus on medium- and small-scale signals. Nonetheless, the signal often cannot be excluded from the calculation of the reference value, which introduces systematic errors. The use of time-average methods recently led to important discoveries about the medium-scale cosmic-ray anisotropy, present in both the northern and southern hemispheres. It is known that an excess (or deficit) is observed as less intense than it really is, and that spurious deficit zones appear around true excesses, because there is no a priori knowledge of which signal is true and which is not. This work critically reviews the use of time-average-based methods for observing extended features in the cosmic-ray arrival distribution pattern.
ATS simultaneous and turnaround ranging experiments
NASA Technical Reports Server (NTRS)
Watson, J. S.; Putney, B. H.
1971-01-01
This report explains the data reduction and spacecraft position determination used in conjunction with two ATS experiments, trilateration and turnaround ranging, and describes in detail a multilateration program used for part of the data reduction process. The process described determines the inertial position of the satellite and formats input for related programs. In the trilateration procedure, a geometric determination of satellite position is made from near-simultaneous range measurements taken by three different tracking stations. Turnaround ranging involves two stations: the master station transmits a signal to the satellite, which retransmits it to the slave station; the slave station turns the signal around and sends it back through the satellite to the master station. The results of the satellite position computations using the multilateration program are compared to results of other position determination programs used at Goddard. All programs give nearly the same results, which indicates that, because of its simplicity and computational speed, the trilateration technique is useful in obtaining spacecraft positions for near-synchronous satellites.
NASA Astrophysics Data System (ADS)
Gan, Qifeng; Seoud, Lama; Ben Tahar, Houssem; Langlois, J. M. Pierre
2016-04-01
Spatial Averaging Filters (SAFs) are extensively used in image processing for image smoothing and denoising. Their latest implementations already achieve constant-time computational complexity regardless of kernel size. However, all existing O(1) algorithms require additional memory for temporary data storage. In order to minimize memory usage in embedded systems, we introduce a new two-dimensional recursive SAF. It uses previously computed pixel values along both rows and columns to calculate the current one, achieving constant-time computational complexity without any additional memory. Experimental comparisons with previous SAF implementations show that the proposed 2D-recursive SAF requires no additional memory while offering computational time similar to the most efficient existing SAF algorithm. These features make it especially suitable for embedded systems with limited memory capacity.
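The recursive idea behind such filters (each output reuses the previous output, so per-pixel cost is independent of kernel size) can be illustrated with a 1D running-sum filter applied separably. This is a sketch of the general O(1) box-filter principle only; the paper's specific memory-free 2D recursion is not reproduced, and the function names and border handling below are assumptions.

```python
import numpy as np

def recursive_box_filter_1d(x, radius):
    """O(1)-per-sample moving average: each output is the previous
    window sum plus the sample entering the window minus the one
    leaving. Borders are handled by index clamping (replication)."""
    n = len(x)
    w = 2 * radius + 1
    y = np.empty(n, dtype=float)
    s = sum(x[min(max(j, 0), n - 1)] for j in range(-radius, radius + 1))
    y[0] = s / w
    for i in range(1, n):
        s += x[min(i + radius, n - 1)] - x[max(i - radius - 1, 0)]
        y[i] = s / w
    return y

def box_filter_2d(img, radius):
    """Separable smoothing: run the recursive filter along rows,
    then along columns (this version does use row buffers, unlike
    the paper's memory-free recursion)."""
    rows = np.apply_along_axis(recursive_box_filter_1d, 1, img, radius)
    return np.apply_along_axis(recursive_box_filter_1d, 0, rows, radius)

img = np.arange(25, dtype=float).reshape(5, 5)
sm = box_filter_2d(img, radius=1)
print(sm[2, 2])  # 12.0: the mean of the 3x3 block centred on 12
```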
NASA Astrophysics Data System (ADS)
Young, Christopher; Lucca Fabris, Andrea; Gascon, Nicolas; Cappelli, Mark
2014-10-01
An extensive experimental campaign characterizes a 70 mm diameter stationary plasma thruster operating on xenon in the 200--500 W power range. This study resolves both time-averaged properties and oscillatory phenomena in the plasma discharge. Specifically, we explore the time variation of the plume ion velocity field referenced to periodic discharge current oscillations using time-synchronized laser induced fluorescence (LIF) measurements. This LIF scheme relies on a triggered signal acquisition gate locked at a given phase of the current oscillation period. The laser is modulated at a characteristic frequency and homodyne detection through a lock-in amplifier extracts the induced fluorescence signal out of the bright background emission. This work is sponsored by the U.S. Air Force Office of Scientific Research with Dr. Mitat Birkan as program manager. CVY acknowledges support from the DOE NNSA Stewardship Science Graduate Fellowship under Contract DE-FC52-08NA28752.
Time-Averaged and Oscillatory Characterization of a Hall Plasma Discharge
NASA Astrophysics Data System (ADS)
Young, Chris V.; Lucca Fabris, Andrea; Gascon, Nicolas; Cappelli, Mark A.
2014-10-01
We characterize a 70 mm diameter stationary plasma thruster operating on xenon at 200-500 W using nonintrusive laser measurements. This study resolves both time-averaged properties and oscillatory phenomena in the plasma discharge. Specifically, we explore how the plume ion velocity field evolves in time with respect to periodic discharge current oscillations using time-synchronized laser induced fluorescence (LIF) techniques. In this LIF scheme, a triggered signal acquisition gate is locked at a given phase of the current oscillation period, allowing for drift in the oscillation. The laser is modulated at a characteristic frequency and the induced fluorescence signal is extracted out of the bright background emission using homodyne detection with a lock-in amplifier.
NASA Astrophysics Data System (ADS)
Zhang, Shengli; Tang, Jiong
2016-04-01
The gearbox is one of the most vulnerable subsystems in a wind turbine; its health significantly affects the efficiency and function of the entire system. Vibration-based fault diagnosis methods are widely applied. However, vibration signals are always contaminated by noise arising from data acquisition errors, structural geometric errors, operational errors, etc. As a result, it is difficult to identify potential gear failures directly from vibration signals, especially for early-stage faults. This paper applies the synchronous averaging technique in the time-frequency domain to remove non-synchronous noise and enhance fault-related time-frequency features. The enhanced time-frequency information is further employed in gear fault classification and identification through feature extraction algorithms including Kernel Principal Component Analysis (KPCA), Multilinear Principal Component Analysis (MPCA), and Locally Linear Embedding (LLE). Results show that the LLE approach is the most effective for classifying and identifying the different gear faults.
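The synchronous-averaging step such methods build on can be sketched in its basic time-domain form (this paper applies it in the time-frequency domain, which is not reproduced here). The sample counts, tone, and noise level below are illustrative assumptions:

```python
import numpy as np

def time_synchronous_average(signal, period):
    """Average whole revolutions of a signal sampled at a fixed number
    of samples per shaft period: components synchronous with the shaft
    reinforce, while non-synchronous noise averages toward zero."""
    n_rev = len(signal) // period
    return signal[: n_rev * period].reshape(n_rev, period).mean(axis=0)

# toy example: an 8x per-revolution gear-mesh tone buried in noise
rng = np.random.default_rng(1)
period = 128                                   # samples per revolution (assumed)
t = np.arange(period * 200)
mesh = np.sin(2 * np.pi * 8 * t / period)
noisy = mesh + 2.0 * rng.standard_normal(len(t))
tsa = time_synchronous_average(noisy, period)
# residual noise drops roughly as 1/sqrt(n_rev) while the tone survives
resid = tsa - np.sin(2 * np.pi * 8 * np.arange(period) / period)
print(np.std(resid) < 0.5)  # True
```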
A modeling study of the time-averaged electric currents in the vicinity of isolated thunderstorms
NASA Technical Reports Server (NTRS)
Driscoll, Kevin T.; Blakeslee, Richard J.; Baginski, Michael E.
1992-01-01
A thorough examination of the results of a time-dependent computer model of a dipole thunderstorm revealed that there are numerous similarities between the time-averaged electrical properties and the steady-state properties of an active thunderstorm. Thus, the electrical behavior of the atmosphere in the vicinity of a thunderstorm can be determined with a formulation similar to what was first described by Holzer and Saxon (1952). From the Maxwell continuity equation of electric current, a simple analytical equation was derived that expresses a thunderstorm's average current contribution to the global electric circuit in terms of the generator current within the thundercloud, the intracloud lightning current, the cloud-to-ground lightning current, the altitudes of the charge centers, and the conductivity profile of the atmosphere. This equation was found to be nearly as accurate as the more computationally expensive numerical model, even when it is applied to a thunderstorm with a reduced conductivity thundercloud, a time-varying generator current, a varying flash rate, and a changing lightning mix.
Average discharge rate representation of voice onset time in the chinchilla auditory nerve
Sinex, D.G.; McDonald, L.P.
1988-05-01
Responses of chinchilla auditory-nerve fibers to synthesized stop consonants differing in voice onset time (VOT) were obtained. The syllables, heard as /ga/--/ka/ or /da/--/ta/, were similar to those previously used by others in psychophysical experiments with human and with chinchilla subjects. Average discharge rates of neurons tuned to the frequency region near the first formant generally increased at the onset of voicing, for VOTs longer than 20 ms. These rate increases were closely related to spectral amplitude changes associated with the onset of voicing and with the activation of the first formant; as a result, they provided accurate information about VOT. Neurons tuned to frequency regions near the second and third formants did not encode VOT in their average discharge rates. Modulations in the average rates of these neurons reflected spectral variations that were independent of VOT. The results are compared to other measurements of the peripheral encoding of speech sounds and to psychophysical observations suggesting that syllables with large variations in VOT are heard as belonging to one of only two phonemic categories.
Probe shapes that measure time-averaged streamwise momentum and cross-stream turbulence intensity
NASA Technical Reports Server (NTRS)
Rossow, Vernon J. (Inventor)
1993-01-01
A method and apparatus for directly measuring the time-averaged streamwise momentum in a turbulent stream use a probe whose total-head response varies as the cosine squared of the angle of incidence. The probe has a nose with a slight indentation on its front face to provide the desired response, and the method of making the probe incorporates unique design features. A second probe may be positioned side by side with the first to provide a direct measurement of the total pressure. The difference between the two pressures yields the sum of the squares of the cross-stream components of the turbulence level.
Time-averaged fluxes of lead and fallout radionuclides to sediments in Florida Bay
Robbins, J.A.; Holmes, C.; Halley, R.; Bothner, M.; Shinn, E.; Graney, J.; Keeler, G.; TenBrink, M.; Orlandini, K.A.; Rudnick, D.
2000-01-01
Recent, unmixed sediments from mud banks of central Florida Bay were dated using 210Pb/226Ra, and chronologies were verified by comparing sediment lead temporal records with Pb/Ca ratios in annual layers of coral (Montastrea annularis) located on the ocean side of the Florida Keys. Dates of sediment lead peaks (1978 ± 2) accord with prior observations of a 6 year lag between the occurrence of maximum atmospheric lead in 1972 and peak coral lead in 1978. Smaller lags of 1-2 years occur between the maximum atmospheric radionuclide fallout and peaks in sediment temporal records of 137Cs and Pu. Such lags are consequences of system time averaging (STA) in which atmospherically delivered particle-associated constituents accumulate and mix in a (sedimentary?) reservoir before transferring to permanent sediments and coral. STA model calculations, using time-dependent atmospheric inputs, produced optimized profiles in excellent accord with measured sediment 137Cs, Pu, lead, and coral lead distributions. Derived residence times of these particle tracers (16 ± 1, 15.7 ± 0.7, 19 ± 3, and 16 ± 2 years, respectively) are comparable despite differences in sampling locations, in accumulating media, and in element loading histories and geochemical properties. For a 16 year weighted mean residence time, STA generates the observed 6 year lead peak lag. Evidently, significant levels of nondegradable, particle-associated contaminants can persist in Florida Bay for many decades following elimination of external inputs. Present results, in combination with STA model analysis of previously reported radionuclide profiles, suggest that decade-scale time averaging may occur widely in recent coastal marine sedimentary environments. Copyright 2000 by the American Geophysical Union.
Turnaround as Reform: Opportunity for Meaningful Change or Neoliberal Posturing?
ERIC Educational Resources Information Center
Mette, Ian M.
2013-01-01
This study explores the neoliberal agenda of turnaround school reform efforts in America by examining the application and transformation of a Midwest State Turnaround Schools Project for the public school system. Perceptions of administrators and state-level policy actors are considered. Data were collected from 13 participants during the…
The Effect of Locus of Control on School Turnaround
ERIC Educational Resources Information Center
Walston, Bradford
2012-01-01
This research focused on the school turnaround process in six turnaround elementary schools located in urban and rural areas of the state of North Carolina. The purpose of the study was to learn about the challenges facing the six schools, the process of improving student achievement, and, more specifically, the degree to which adaptive leadership…
"Turnaround" as Shock Therapy: Race, Neoliberalism, and School Reform
ERIC Educational Resources Information Center
Johnson, Amanda Walker
2013-01-01
"Turnaround" strategies of educational reform promise that school closure, reconstitution, privatizing, and reopening them will bring miraculous results. Questioning the implications, this article situates "turnaround" strategies locally, following the closure of a predominantly minority high school in 2008, in Austin, Texas. The neoliberal…
The State Role in School Turnaround: Emerging Best Practices
ERIC Educational Resources Information Center
Rhim, Lauren Morando, Ed.; Redding, Sam, Ed.
2014-01-01
This publication explores the role of the state education agency (SEA) in school turnaround efforts. An emphasis is placed on practical application of research and best practices related to the SEA's critical leadership role in driving and supporting successful school turnaround efforts. The publication is organized around the four goals of…
Theoretical Considerations and Standards for the Use of Turnarounds.
ERIC Educational Resources Information Center
Olson, Clark D.
The term "turnaround" has taken a permanent place in intercollegiate debate jargon. All too often, first affirmative rebuttalists charge "turnaround" for every plan or response they do not know how to label properly. After so many "false alarms," judges are too weary or aggravated to notice the real thing, and many judges are commenting…
Modal identification based on Gaussian continuous time autoregressive moving average model
NASA Astrophysics Data System (ADS)
Xiuli, Du; Fengquan, Wang
2010-09-01
A new time-domain modal identification method of the linear time-invariant system driven by the non-stationary Gaussian random force is presented in this paper. The proposed technique is based on the multivariate continuous time autoregressive moving average (CARMA) model. This method can identify physical parameters of a system from the response-only data. To do this, we first transform the structural dynamic equation into the CARMA model, and subsequently rewrite it in the state-space form. Second, we present the exact maximum likelihood estimators of parameters of the continuous time autoregressive (CAR) model by virtue of the Girsanov theorem, under the assumption that the uniformly modulated function is approximately equal to a constant matrix over a very short period of time. Then, based on the relation between the CAR model and the CARMA model, we present the exact maximum likelihood estimators of parameters of the CARMA model. Finally, the modal parameters are identified by the eigenvalue analysis method. Numerical results show that the method we introduced here not only has high precision and robustness, but also has very high computing efficiency. Therefore, it is suitable for real-time modal identification.
Geomagnetic field model for the last 5 My: time-averaged field and secular variation
NASA Astrophysics Data System (ADS)
Hatakeyama, Tadahiro; Kono, Masaru
2002-11-01
The structure of the geomagnetic field has been studied using paleomagnetic direction data of the last 5 million years obtained from lava flows. The method we used is the nonlinear version, similar to the works of Gubbins and Kelly [Nature 365 (1993) 829], Johnson and Constable [Geophys. J. Int. 122 (1995) 488; Geophys. J. Int. 131 (1997) 643], and Kelly and Gubbins [Geophys. J. Int. 128 (1997) 315], but we determined the time-averaged field (TAF) and the paleosecular variation (PSV) simultaneously. As pointed out in our previous work [Earth Planet. Space 53 (2001) 31], the observed mean field directions are affected by the fluctuation of the field, as described by the PSV model. This effect is not excessively large but cannot be neglected when considering the mean field. We propose that the new TAF+PSV model is a better representation of the ancient magnetic field, since both the average and the fluctuation of the field are consistently explained. In the inversion procedure, we used direction cosines instead of inclinations and declinations, as the latter quantities show singular or unstable behavior at high latitudes. The obtained model gives a reasonably good fit to the observed means and variances of the direction cosines. In the TAF model, the geocentric axial dipole term (g10) is the dominant component; it is much more pronounced than in the present magnetic field. The equatorial dipole component is quite small after averaging over time. The model shows a very smooth spatial variation; the nondipole components also seem to be averaged out quite effectively over time. Among the other coefficients, the geocentric axial quadrupole term (g20) is significantly larger than the other components. On the other hand, the axial octupole term (g30) is much smaller than in a TAF model excluding the PSV effect. It is likely that the effect of PSV is most clearly seen in this term, which is consistent with the conclusion reached in our previous work. The PSV
NASA Astrophysics Data System (ADS)
Basu, Santasri; McCrae, Jack E.; Fiorino, Steven T.
2015-05-01
A time-lapse imaging experiment was conducted to monitor the effects of the atmosphere over a period of time. A tripod-mounted digital camera captured images of a distant building every minute, and correlation techniques were used to calculate the position shifts between the images. Two factors cause shifts between the images: atmospheric turbulence, which moves the images randomly and quickly, and changes in the average refractive index gradient along the path, which move the images vertically, more slowly, and perhaps in noticeable correlation with solar heating and other weather conditions. A technique for estimating the path-averaged Cn2 from the random component of the image motion is presented here. The technique uses a derived set of weighting functions that depend on the size of the imaging aperture and the patch size in the image whose motion is being tracked. Since this technique is phase based, it can be applied to strong-turbulence paths where traditional irradiance-based techniques suffer from saturation effects.
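The correlation step for measuring frame-to-frame image shifts can be sketched with a standard FFT cross-correlation; the function name, image size, and noise-image test pattern are assumptions, and real data would additionally need sub-pixel interpolation around the correlation peak.

```python
import numpy as np

def register_shift(a, b):
    """Estimate the integer (row, col) shift of image a relative to
    image b via circular FFT cross-correlation; the random part of
    such shifts over many frames carries the turbulence signal."""
    A = np.fft.fft2(a - a.mean())
    B = np.fft.fft2(b - b.mean())
    corr = np.fft.ifft2(A * np.conj(B)).real
    idx = np.unravel_index(corr.argmax(), corr.shape)
    # wrap circular peak indices into signed shifts
    return tuple(i - s if i > s // 2 else i for i, s in zip(idx, corr.shape))

rng = np.random.default_rng(2)
img = rng.standard_normal((64, 64))
shifted = np.roll(np.roll(img, 3, axis=0), -2, axis=1)
print(register_shift(shifted, img))  # (3, -2)
```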
Enhancing MALDI Time-Of-Flight Mass Spectrometer Performance through Spectrum Averaging
Mitchell, Morgan; Mali, Sujina; King, Charles C.; Bark, Steven J.
2015-01-01
Matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) mass spectrometers are simple and robust mass spectrometers used for analysis of biologically relevant molecules in diverse fields including pathogen identification, imaging mass spectrometry, and natural products chemistry. Despite high nominal resolution and accuracy, we have observed significant variability where 30–50% of individual replicate measurements have errors in excess of 5 parts-per-million, even when using 5-point internal calibration. Increasing the number of laser shots for each spectrum did not resolve this observed variability. What is responsible for our observed variation? Using a modern MALDI-TOF/TOF instrument, we evaluated contributions to variability. Our data suggest a major component of variability is binning of the raw flight time data by the electronics and clock speed of the analog-to-digital (AD) detection system, which requires interpolation by automated peak fitting algorithms and impacts both calibration and the observed mass spectrum. Importantly, the variation observed is predominantly normal in distribution, which implies multiple components contribute to the observed variation and suggests a method to mitigate this variability through spectrum averaging. Restarting the acquisition impacts each spectrum within the electronic error of the AD detector system and defines a new calibration function. Therefore, averaging multiple independent spectra and not a larger number of laser shots leverages this inherent binning error to mitigate variability in accurate MALDI-TOF mass measurements. PMID:25798583
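The paper's central recommendation, averaging several independently restarted spectra rather than accumulating more laser shots in one, rests on independent per-spectrum calibration errors shrinking as 1/sqrt(n). A toy numerical check with an assumed 5 ppm per-spectrum scatter and a hypothetical m/z value illustrates the scaling:

```python
import numpy as np

rng = np.random.default_rng(3)
TRUE_MZ = 1000.0     # hypothetical analyte m/z (assumed)
PPM_SIGMA = 5.0      # assumed per-spectrum calibration scatter, in ppm

def measured_mz(n_spectra):
    """Each acquisition restart re-bins the flight times and redefines
    the calibration, so per-spectrum errors are treated as independent;
    averaging n independent spectra shrinks the scatter ~ 1/sqrt(n)."""
    errors = rng.normal(0.0, PPM_SIGMA * 1e-6 * TRUE_MZ, size=n_spectra)
    return float(np.mean(TRUE_MZ + errors))

single = np.array([measured_mz(1) for _ in range(2000)])
averaged16 = np.array([measured_mz(16) for _ in range(2000)])
ratio = np.std(single) / np.std(averaged16)
print(ratio > 3.0)  # True: the ratio should sit near sqrt(16) = 4
```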
NASA Astrophysics Data System (ADS)
Ha, Jong M.; Youn, Byeng D.; Oh, Hyunseok; Han, Bongtae; Jung, Yoongho; Park, Jungho
2016-03-01
We propose autocorrelation-based time synchronous averaging (ATSA) to cope with the challenges associated with the current practice of time synchronous averaging (TSA) for planet gears in the planetary gearboxes of wind turbines (WTs). An autocorrelation function that represents the physical interactions between the ring, sun, and planet gears in the gearbox is used to define the optimal shape and range of the window function for TSA based on actual kinetic responses. The proposed ATSA offers two distinctive features: (1) data-efficient TSA processing and (2) prevention of signal distortion during the TSA process. An order analysis with the ATSA signals is thus expected to significantly improve the efficiency and accuracy of fault diagnostics for planet gears in planetary gearboxes. Two case studies demonstrate the effectiveness of the proposed method: an analytical signal from a simulation and a signal measured from a 2 kW WT testbed. The results show that the proposed method outperforms conventional TSA methods in condition monitoring of the planetary gearbox when the amount of available stationary data is limited.
Calculations of the time-averaged local heat transfer coefficients in circulating fluidized bed
Dai, T.H.; Qian, R.Z.; Ai, Y.F.
1999-04-01
The potential to burn a wide variety of fuels and the reduced emission of pollutant gases, mainly SOx and NOx, have inspired investigators around the world to conduct research on circulating fluidized bed (CFB) technology at a brisk pace. An accurate understanding of heat transfer to the bed walls is required for the proper design of CFB boilers. To develop an optimum economic design of the boiler, it is also necessary to know how the heat transfer coefficient depends on different design and operating parameters. It is impossible to run experiments under all operating conditions, so mathematical model prediction is a valuable alternative. Based on the cluster renewal theory of heat transfer in circulating fluidized beds, a mathematical model for predicting the time-averaged local bed-to-wall heat transfer coefficients is developed. The effects of the axial distribution of the bed density on the time-averaged local heat transfer coefficients are taken into account by dividing the bed into a series of sections along its height, and assumptions are made about the formation and falling of clusters on the wall. The model predictions are in acceptable agreement with the published data.
Asynchronous H∞ filtering for linear switched systems with average dwell time
NASA Astrophysics Data System (ADS)
Wang, Bo; Zhang, Hongbin; Wang, Gang; Dang, Chuangyin
2016-09-01
This paper is concerned with the H∞ filtering problem for a class of continuous-time linear switched systems with asynchronous behaviour, where 'asynchronous' means that the switching of the filters to be designed lags the switching of the system modes. By using Lyapunov-like functions and the average dwell time technique, a sufficient condition is obtained that guarantees asymptotic stability with a weighted H∞ performance index for the filtering error system. Moreover, the results are formulated as linear matrix inequalities that are numerically tractable, and the filter design problem is thereby solved. Finally, an illustrative numerical example shows the effectiveness of the results.
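The average dwell time constraint invoked in such results is commonly stated as N(t0, t) <= N0 + (t - t0)/tau_a for every observation window, where N counts the switches in the window and N0 is the chatter bound. A sketch of a checker for a given switching sequence (the function name and example sequences are illustrative assumptions, not from the paper):

```python
def satisfies_adt(switch_times, tau_a, n0=1.0):
    """Return True if the switching instants satisfy the average
    dwell time condition N(t_i, t_j] <= n0 + (t_j - t_i)/tau_a
    for every pair of instants over the observed horizon."""
    ts = [0.0] + sorted(switch_times)
    for i in range(len(ts)):
        for j in range(i, len(ts)):
            n_switches = j - i  # switches occurring in (ts[i], ts[j]]
            if n_switches > n0 + (ts[j] - ts[i]) / tau_a:
                return False
    return True

# evenly spaced switching easily meets tau_a = 0.9 ...
print(satisfies_adt([1.0, 2.0, 3.0, 4.0], tau_a=0.9))  # True
# ... while an early burst of fast switches violates it
print(satisfies_adt([0.1, 0.2, 0.3, 4.0], tau_a=0.9))  # False
```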
Bose-Einstein condensation in large time-averaged optical ring potentials
NASA Astrophysics Data System (ADS)
Bell, Thomas A.; Glidden, Jake A. P.; Humbert, Leif; Bromley, Michael W. J.; Haine, Simon A.; Davis, Matthew J.; Neely, Tyler W.; Baker, Mark A.; Rubinsztein-Dunlop, Halina
2016-03-01
Interferometric measurements with matter waves are established techniques for sensitive gravimetry, rotation sensing, and measurement of surface interactions, but compact interferometers will require techniques based on trapped geometries. In a step towards the realisation of matter wave interferometers in toroidal geometries, we produce a large, smooth ring trap for Bose-Einstein condensates using rapidly scanned time-averaged dipole potentials. The trap potential is smoothed by using the atom distribution as input to an optical intensity correction algorithm. Smooth rings with a diameter up to 300 μm are demonstrated. We experimentally observe and simulate the dispersion of condensed atoms in the resulting potential, with good agreement serving as an indication of trap smoothness. Under time of flight expansion we observe low energy excitations in the ring, which serves to constrain the lower frequency limit of the scanned potential technique. The resulting ring potential will have applications as a waveguide for atom interferometry and studies of superfluidity.
ERIC Educational Resources Information Center
Scott, Caitlin; Lasley, Nora
2014-01-01
In 2014, state and national leaders found many aspects of turning around America's low-performing schools even more daunting than in the previous year. These views were revealed in the Center on School Turnaround's (CST's) 2014 February/March survey of school turnaround leaders in State Education Agencies (SEA) and directors of the nation's…
Coherent averaging of the passive fathometer response using short correlation time.
Traer, James; Gerstoft, Peter
2011-12-01
The passive fathometer algorithm was applied to data from two drifting array experiments in the Mediterranean, Boundary 2003 and 2004. The passive fathometer response was computed with correlation times from 0.34 to 90 s and, for correlation times less than a few seconds, the observed signal-to-noise ratio (SNR) agrees with a 1D model of the SNR of the passive fathometer response in an ideal waveguide. In the 2004 experiment, the fathometer response showed that the array depth varied periodically with an amplitude of 1 m and a period of 7 s, consistent with wave-driven motion of the array. This introduced a destructive interference that prevented the SNR from growing with increasing correlation time. A peak-tracking algorithm applied to the fathometer response of the experimental data was used to remove this motion, allowing the coherent passive fathometer response to be averaged over several minutes without destructive interference. Multirate adaptive beamforming, using a 90 s correlation time to form adaptive steer vectors applied to 0.34 s data snapshots, increases the SNR of the passive fathometer response. PMID:22225020
Time-averaged flow over a hydrofoil at high Reynolds number
NASA Astrophysics Data System (ADS)
Bourgoyne, Dwayne A.; Hamel, Joshua M.; Ceccio, Steven L.; Dowling, David R.
2003-12-01
At high Reynolds number, the flow of an incompressible viscous fluid over a lifting surface is a rich blend of fluid dynamic phenomena. Here, boundary layers formed at the leading edge develop over both the suction and pressure sides of the lifting surface, transition to turbulence, separate near the foil's trailing edge, combine in the near wake, and eventually form a turbulent far-field wake. The individual elements of this process have been the subject of much prior work. However, controlled experimental investigations of these flow phenomena and their interaction on a lifting surface at Reynolds numbers typical of heavy-lift aircraft wings or full-size ship propellers (chord-based Reynolds numbers Re_C of roughly 10^7 to 10^8) are largely unavailable. This paper presents results from an experimental effort to identify and measure the dominant features of the flow over a two-dimensional hydrofoil at nominal Re_C values from near one million to more than 50 million. The experiments were conducted in the US Navy's William B. Morgan Large Cavitation Channel with a solid-bronze hydrofoil (2.1 m chord, 3.0 m span, 17 cm maximum thickness) at flow speeds from 0.25 to 18.3 m/s. The foil section, a modified NACA 16 with a pressure side that is nearly flat and a suction side that terminates in a blunt trailing-edge bevel, approximates the cross-section of a generic naval propeller blade. Time-averaged flow-field measurements drawn from laser-Doppler velocimetry, particle-imaging velocimetry, and static pressure taps were made for two trailing-edge bevel angles (44° and 56°). These velocity and pressure measurements were concentrated in the trailing-edge and near-wake regions, but also include flow conditions upstream and far downstream of the foil, as well as static pressure distributions on the foil surface and test section walls. Observed Reynolds-number variations in the time-averaged flow over the foil are traced to changes in suction-side boundary
Palevicius, Paulius; Ragulskis, Minvydas; Palevicius, Arvydas; Ostasevicius, Vytautas
2014-01-01
Movable microsystem components are investigated optically using time-averaged holography in this paper. It is shown that even harmonic excitation of a non-linear microsystem may result in unpredictable chaotic motion. Analytical relations between the parameters of the chaotic oscillations and the formation of time-averaged fringes provide a deeper insight into the computational and experimental interpretation of time-averaged MEMS holograms. PMID:24451467
Effect of spatial averaging on multifractal properties of meteorological time series
NASA Astrophysics Data System (ADS)
Hoffmann, Holger; Baranowski, Piotr; Krzyszczak, Jaromir; Zubik, Monika
2016-04-01
Introduction The process-based models for large-scale simulations require input of agro-meteorological quantities that are often in the form of time series of coarse spatial resolution. Therefore, the knowledge about their scaling properties is fundamental for transferring locally measured fluctuations to larger scales and vice-versa. However, the scaling analysis of these quantities is complicated due to the presence of localized trends and non-stationarities. Here we assess how spatially aggregating meteorological data to coarser resolutions affects the data's temporal scaling properties. While it is known that spatial aggregation may affect spatial data properties (Hoffmann et al., 2015), it is unknown how it affects temporal data properties. Therefore, the objective of this study was to characterize the aggregation effect (AE) with regard to both temporal and spatial input data properties, considering scaling properties (i.e. statistical self-similarity) of the chosen agro-meteorological time series through multifractal detrended fluctuation analysis (MFDFA). Materials and Methods Time series covering the years 1982-2011 were spatially averaged from 1 km to 10, 25, 50 and 100 km resolution to assess the impact of spatial aggregation. Daily minimum, mean and maximum air temperature (2 m), precipitation, global radiation, wind speed and relative humidity (Zhao et al., 2015) were used. To reveal the multifractal structure of the time series, we used the procedure described in Baranowski et al. (2015). The diversity of the studied multifractals was evaluated by the parameters of the time series spectra. In order to analyse differences in multifractal properties relative to the 1 km resolution grids, data at coarser resolutions were disaggregated to 1 km. Results and Conclusions Analysing the effect of spatial averaging on multifractal properties, we observed that the spatial patterns of the multifractal spectrum (MS) of all meteorological variables differed from the 1 km grids and the MS parameters were biased
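The core MFDFA step can be sketched compactly (a monofractal check at q = 2; the window sizes, detrending order, and white-noise test series are illustrative assumptions, not the study's data):

```python
# Minimal MFDFA sketch: profile, windowed polynomial detrending,
# fluctuation function F_q(s), and the generalized Hurst exponent h(q).
import numpy as np

def mfdfa_h(x, scales, q=2, order=1):
    """Return the generalized Hurst exponent h(q) of series x."""
    y = np.cumsum(x - np.mean(x))              # profile of the series
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        f2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, order)   # local polynomial trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        f2 = np.array(f2)
        if q == 0:                             # logarithmic average for q = 0
            F.append(np.exp(0.5 * np.mean(np.log(f2))))
        else:
            F.append(np.mean(f2 ** (q / 2.0)) ** (1.0 / q))
    # h(q) is the log-log slope of F_q(s) versus the scale s
    return float(np.polyfit(np.log(scales), np.log(F), 1)[0])

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)
scales = np.array([16, 32, 64, 128, 256])
h2 = mfdfa_h(white, scales, q=2)
```

For uncorrelated noise the fitted h(2) should be near 0.5; multifractality would show up as a spread of h(q) across different moments q.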
ARECIBO MULTI-FREQUENCY TIME-ALIGNED PULSAR AVERAGE-PROFILE AND POLARIZATION DATABASE
Hankins, Timothy H.; Rankin, Joanna M. E-mail: Joanna.Rankin@uvm.edu
2010-01-15
We present Arecibo time-aligned, total intensity profiles for 46 pulsars over an unusually wide range of radio frequencies and multi-frequency, polarization-angle density diagrams, and/or polarization profiles for 57 pulsars at some or all of the frequencies 50, 111/130, 430, and 1400 MHz. The frequency-dependent dispersion delay has been removed in order to align the profiles for study of their spectral evolution, and wherever possible the profiles of each pulsar are displayed on the same longitude scale. Most of the pulsars within Arecibo's declination range that are sufficiently bright for such spectral or single pulse analysis are included in this survey. The calibrated single pulse sequences and average profiles are available by web download for further study.
Time-Averaged Adiabatic Potentials: Versatile Matter-Wave Guides and Atom Traps
NASA Astrophysics Data System (ADS)
Lesanovsky, Igor; von Klitzing, Wolf
2007-08-01
We demonstrate a novel class of trapping potentials, time-averaged adiabatic potentials (TAAP), which allows the generation of a large variety of traps for quantum gases and matter-wave guides for atom interferometers. Examples include stacks of pancakes, rows of cigars, and multiple rings or sickles. The traps can be coupled through controllable tunneling barriers or merged altogether. We present analytical expressions for pancake-, cigar-, and ring-shaped traps. The ring geometry is of particular interest for guided matter-wave interferometry as it provides a perfectly smooth waveguide of widely tunable diameter and thus adjustable sensitivity of the interferometer. The flexibility of the TAAP would make possible the use of Bose-Einstein condensates as coherent matter waves in large-area atom interferometers.
Time-averaged adiabatic potentials: versatile matter-wave guides and atom traps.
Lesanovsky, Igor; von Klitzing, Wolf
2007-08-24
We demonstrate a novel class of trapping potentials, time-averaged adiabatic potentials (TAAP), which allows the generation of a large variety of traps for quantum gases and matter-wave guides for atom interferometers. Examples include stacks of pancakes, rows of cigars, and multiple rings or sickles. The traps can be coupled through controllable tunneling barriers or merged altogether. We present analytical expressions for pancake-, cigar-, and ring-shaped traps. The ring geometry is of particular interest for guided matter-wave interferometry as it provides a perfectly smooth waveguide of widely tunable diameter and thus adjustable sensitivity of the interferometer. The flexibility of the TAAP would make possible the use of Bose-Einstein condensates as coherent matter waves in large-area atom interferometers. PMID:17930945
Non-linear adaptive sliding mode switching control with average dwell-time
NASA Astrophysics Data System (ADS)
Yu, Lei; Zhang, Maoqing; Fei, Shumin
2013-03-01
In this article, an adaptive integral sliding mode control scheme is addressed for switched non-linear systems in the presence of model uncertainties and external disturbances. The control law includes two parts: a sliding mode controller for the reduced model of the plant and a compensation controller to deal with the parameter uncertainties of the non-linear systems. The adaptive update laws have been derived from the switched multiple Lyapunov function method, and an admissible switching signal is given using the average dwell-time technique. The simplicity of the proposed control scheme facilitates its implementation, and the overall control scheme guarantees global asymptotic stability in the Lyapunov sense such that the sliding surface of the control system is well reached. Simulation results are presented to demonstrate the effectiveness and feasibility of the proposed approach.
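The average dwell-time constraint itself is easy to state in code. This checker is a sketch of the standard textbook bound N(t0, t) <= N0 + (t - t0)/tau_a (the bound form, the chatter bound N0, and all names are assumptions, not taken from the article):

```python
# Check whether a switching signal satisfies the average dwell-time bound
# on every window [t0, t]: the number of switches N(t0, t) must not
# exceed N0 + (t - t0) / tau_a.
def satisfies_adt(switch_times, t_end, tau_a, n0=1.0):
    """Return True if the switching instants respect average dwell time tau_a."""
    pts = [0.0] + list(switch_times) + [t_end]
    for i, t0 in enumerate(pts):
        for t in pts[i:]:
            # switches strictly inside (t0, t]
            n = sum(1 for s in switch_times if t0 < s <= t)
            if n > n0 + (t - t0) / tau_a:
                return False
    return True

slow = [1.0, 2.5, 4.0, 5.5]          # roughly one switch per 1.5 s
ok = satisfies_adt(slow, 6.0, tau_a=1.0)
fast = [0.1, 0.2, 0.3, 0.4]          # four switches within 0.4 s
bad = satisfies_adt(fast, 6.0, tau_a=1.0)
```

The slow signal is admissible for tau_a = 1 s, while the burst of four rapid switches violates the bound on the short window containing them.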
A Continuous Time Model for Interest Rate with Autoregressive and Moving Average Components
NASA Astrophysics Data System (ADS)
Benth, F. E.; Koekebakker, S.; Zakamouline, V.
2010-09-01
In this paper we present a multi-factor continuous-time autoregressive moving-average (CARMA) model for the short and forward interest rates. This model is able to provide a more adequate statistical description of the short and forward rate dynamics. We show that this is a tractable term structure model and provide closed-form solutions to bond and bond option prices, bond yields, and the forward rate volatility term structure. We demonstrate the capabilities of our model by calibrating it to market data and show that it can reproduce rather complex shapes of the empirical volatility term structure. In particular, a three-factor CARMA model can easily capture the dynamics of the level, slope, and curvature factors widely documented in term structure models.
Long term average distribution of O+ in the quiet-time terrestrial magnetosphere
NASA Astrophysics Data System (ADS)
Liu, Wenlong; Wang, Jing; Cao, Jinbin
2016-06-01
Cluster measurements from 2001 to 2011 provide a unique opportunity to study the characteristics of O+ with full spatial coverage between 4 and 19 RE, especially in the mid-latitude region. Three-dimensional spatial distributions of averaged omnidirectional O+ differential fluxes in three energy channels (E1: 40-136 eV; E2: 136 eV-3 keV; E3: 3-30 keV) during quiet times (Dst > -20 nT) are presented in this paper. Comparing the distributions of O+ in the three energy channels helps us to better understand the transport and energization of O+. Consistent with previous studies, it is suggested that during quiet times O+ is heated in the high-altitude cusp and mantle, and part of this heated population is transported through the lobes to the plasma sheet, where it is further heated and accelerated. The distributions presented provide important background information for relevant simulation and observation studies of O+ behavior during storm and non-storm times.
Time-dependent density functional theory with twist-averaged boundary conditions
NASA Astrophysics Data System (ADS)
Schuetrumpf, B.; Nazarewicz, W.; Reinhard, P.-G.
2016-05-01
Background: Time-dependent density functional theory is widely used to describe excitations of many-fermion systems. In its many applications, a three-dimensional (3D) coordinate-space representation is used, and infinite-domain calculations are limited to a finite volume represented by a spatial box. For finite quantum systems (atoms, molecules, nuclei, hadrons), the commonly used periodic or reflecting boundary conditions introduce spurious quantization of the continuum states and artificial reflections from the boundary, and hence an incorrect treatment of evaporated particles. Purpose: The finite-volume artifacts for finite systems can be practically cured by invoking an absorbing potential in a boundary region sufficiently far from the described system. However, such absorption cannot be applied in calculations of infinite matter (crystal electrons, quantum fluids, neutron star crust), which suffer from unphysical effects stemming from the finite computational box used. Here, twist-averaged boundary conditions (TABC) have been used successfully to diminish the finite-volume effects. In this work, we extend TABC to time-dependent modes. Method: We use the 3D time-dependent density functional framework with the Skyrme energy density functional. The practical calculations are carried out for small- and large-amplitude electric dipole and quadrupole oscillations of 16O. We apply and compare three kinds of boundary conditions: periodic, absorbing, and twist-averaged. Results: Calculations employing absorbing boundary conditions (ABC) and TABC are superior to those based on periodic boundary conditions. For low-energy excitations, TABC and ABC variants yield very similar results. With only four twist phases per spatial direction in TABC, one obtains an excellent reduction of spurious fluctuations. In the nonlinear regime, one has to deal with evaporated particles. In TABC, the floating nucleon gas remains in the box; the amount of nucleons in the gas is found to be
NASA Technical Reports Server (NTRS)
Chelton, Dudley B.; Schlax, Michael G.
1991-01-01
The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes suboptimal estimation a viable practical alternative to the composite average method generally employed at present.
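The contrast between the composite average and the optimal (minimum mean-squared-error) linear estimate can be sketched numerically. The exponential covariance model, all parameters, and variable names below are illustrative assumptions, not the paper's data:

```python
# Composite average vs. optimal linear estimate of a time average from
# irregular, noisy samples of a correlated Gaussian signal.
import numpy as np

rng = np.random.default_rng(1)
T, n_grid = 30.0, 301
t_grid = np.linspace(0.0, T, n_grid)
tau, sig2, noise2 = 5.0, 1.0, 0.5          # corr. scale, signal, noise var.

def cov(a, b):
    return sig2 * np.exp(-np.abs(a[:, None] - b[None, :]) / tau)

L_chol = np.linalg.cholesky(cov(t_grid, t_grid) + 1e-9 * np.eye(n_grid))
n_trials, n_obs = 400, 8
err_comp, err_opt = [], []
for _ in range(n_trials):
    s = L_chol @ rng.standard_normal(n_grid)   # one signal realization
    truth = s.mean()                           # the time-averaged quantity
    idx = np.sort(rng.choice(n_grid, n_obs, replace=False))
    t_obs = t_grid[idx]
    d = s[idx] + np.sqrt(noise2) * rng.standard_normal(n_obs)
    err_comp.append((d.mean() - truth) ** 2)   # composite average
    # optimal weights w = C_dd^{-1} C_ad from the known covariances
    C_dd = cov(t_obs, t_obs) + noise2 * np.eye(n_obs)
    C_ad = cov(t_grid, t_obs).mean(axis=0)     # cov(average, each datum)
    w = np.linalg.solve(C_dd, C_ad)
    err_opt.append((w @ d - truth) ** 2)

mse_comp, mse_opt = float(np.mean(err_comp)), float(np.mean(err_opt))
```

Because the composite average is itself one particular linear estimate (equal weights), the minimum-MSE weights can only do better on average, and the gap grows with measurement noise and sampling irregularity.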
NASA Astrophysics Data System (ADS)
Li, Jian; Yang, Guang-Hong
2013-12-01
This article is concerned with the problem of fault detection and isolation (FDI) for discrete-time switched linear systems based on the average dwell-time method. The proposed FDI framework consists of a bank of FDI filters, which are divided into N groups for N subsystems. The FDI filters belonging to one group correspond to the faults for a subsystem, and generate a residual signal to guarantee the fault sensitivity performance for that subsystem, the fault attenuation performance for the other subsystems and the disturbance attenuation performance for all subsystems. Different from employing weighting matrices to restrict the frequency ranges of faults for each subsystem, the finite-frequency H- performance for switched systems is first defined. Sufficient conditions are established by linear matrix inequalities (LMIs), and the filter gains are characterised in terms of the solution of a convex optimisation problem. Two examples are used to demonstrate the effectiveness of the proposed design method.
NASA Astrophysics Data System (ADS)
Akimoto, Takuma; Yamamoto, Eiji
2016-06-01
We consider the Langevin equation with dichotomously fluctuating diffusivity, where the diffusion coefficient changes dichotomously over time, in order to study fluctuations of time-averaged observables in temporally heterogeneous diffusion processes. We find that the time-averaged mean-square displacement (TMSD) can be represented by the occupation time of a state in the asymptotic limit of the measurement time and hence occupation time statistics is a powerful tool for calculating the TMSD in the model. We show that the TMSD increases linearly with time (normal diffusion) but the time-averaged diffusion coefficients are intrinsically random when the mean sojourn time for one of the states diverges, i.e., intrinsic nonequilibrium processes. Thus, we find that temporally heterogeneous environments provide anomalous fluctuations of time-averaged diffusivity, which have relevance to large fluctuations of the diffusion coefficients obtained by single-particle-tracking trajectories in experiments.
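The model's key observable can be sketched directly, assuming illustrative parameters (diffusivity values, sojourn time, and step counts are not the paper's): a Langevin walk whose diffusivity switches dichotomously with exponential sojourn times, and its time-averaged mean-square displacement (TAMSD).

```python
# Langevin dynamics with dichotomously fluctuating diffusivity, plus the
# time-averaged MSD; normal diffusion shows up as a log-log slope of ~1.
import numpy as np

rng = np.random.default_rng(2)
dt, n_steps = 0.01, 200_000
D = (0.1, 1.0)                       # the two diffusivity states
mean_sojourn = 50.0                  # mean sojourn time in each state

# dichotomous diffusivity trajectory D(t) with exponential sojourn times
state, t_next, t = 0, rng.exponential(mean_sojourn), 0.0
Dt = np.empty(n_steps)
for i in range(n_steps):
    if t >= t_next:
        state = 1 - state
        t_next = t + rng.exponential(mean_sojourn)
    Dt[i] = D[state]
    t += dt

# overdamped Langevin step: dx = sqrt(2 D(t) dt) * Gaussian noise
steps = np.sqrt(2.0 * Dt * dt) * rng.standard_normal(n_steps)
x = np.concatenate(([0.0], np.cumsum(steps)))

def tamsd(traj, lag):
    """Time-averaged mean-square displacement at a given lag (in steps)."""
    return float(np.mean((traj[lag:] - traj[:-lag]) ** 2))

lags = np.array([10, 20, 40, 80])
msd = np.array([tamsd(x, int(L)) for L in lags])
slope = float(np.polyfit(np.log(lags * dt), np.log(msd), 1)[0])
```

With a long measurement time the TAMSD grows linearly (slope near 1), while repeating the experiment with different diffusivity trajectories would scatter the prefactor, mirroring the random time-averaged diffusion coefficients discussed above.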
NASA Astrophysics Data System (ADS)
Mao, Yijun; Xu, Chen; Qi, Datong
2015-02-01
A vector aeroacoustics method is developed to analyze the acoustic energy flow path from a rotating source. In this method, the instantaneous and time-averaged active acoustic intensity vectors are evaluated from the time-domain and frequency-domain acoustic pressure and acoustic velocity formulations, respectively. With the above method, the acoustic intensity vectors and the acoustic energy streamlines are visualized to investigate the propagation features of the noise radiated from monopole and dipole point sources and from a rotor in subsonic rotation. The results reveal that a portion of the acoustic energy spirals through many circles before moving towards the far field, while another portion first flows inward along the radial direction and then propagates along the axial direction. Further, an acoustic black hole exists in the plane of source rotation, from which the acoustic energy cannot escape once it flows in. Moreover, by visualizing the acoustic intensity field around the rotating sources, the acoustic-absorption performance of the acoustic liner built into the casing and centerbody is discussed.
Modified box dimension and average weighted receiving time on the weighted fractal networks
Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xi, Lifeng; Su, Weiyi
2015-01-01
In this paper a family of weighted fractal networks, in which the weights of edges have been assigned different values with a certain scale, is studied. For the weighted fractal networks the definition of the modified box dimension is introduced, and a rigorous proof of its existence is given. Then, the modified box dimension, depending on the weight factor and the number of copies, is deduced. We assume that the walker, at each step, starting from its current node, moves uniformly to any of its nearest neighbors. The weighted time for two adjacent nodes is the weight connecting the two nodes. The average weighted receiving time (AWRT) is then defined accordingly. The obtained remarkable result displays that in the large network, when the weight factor is larger than the number of copies, the AWRT grows as a power-law function of the network order with the exponent being the reciprocal of the modified box dimension. This result shows that the efficiency of the trapping process depends on the modified box dimension: the larger the value of the modified box dimension, the more efficient the trapping process is. PMID:26666355
Time-weighted average SPME analysis for in planta determination of cVOCs.
Sheehan, Emily M; Limmer, Matt A; Mayer, Philipp; Karlson, Ulrich Gosewinkel; Burken, Joel G
2012-03-20
The potential of phytoscreening for plume delineation at contaminated sites has promoted interest in innovative, sensitive contaminant sampling techniques. Solid-phase microextraction (SPME) methods have been developed, offering quick, undemanding, noninvasive sampling without the use of solvents. In this study, time-weighted average SPME (TWA-SPME) sampling was evaluated for in planta quantification of chlorinated solvents. TWA-SPME was found to have increased sensitivity over headspace and equilibrium SPME sampling. Using a variety of chlorinated solvents and a polydimethylsiloxane/carboxen (PDMS/CAR) SPME fiber, most compounds exhibited near linear or linear uptake over the sampling period. Smaller, less hydrophobic compounds exhibited more nonlinearity than larger, more hydrophobic molecules. Using a specifically designed in planta sampler, field sampling was conducted at a site contaminated with chlorinated solvents. Sampling with TWA-SPME produced instrument responses ranging from 5 to over 200 times higher than headspace tree core sampling. This work demonstrates that TWA-SPME can be used for in planta detection of a broad range of chlorinated solvents and methods can likely be applied to other volatile and semivolatile organic compounds. PMID:22332592
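The quantification step behind TWA passive sampling is commonly the relation C_TWA = n / (R_s * t): sorbed mass divided by sampling rate times exposure time. The helper below assumes that standard relation with illustrative units and values, not numbers from the paper:

```python
# Time-weighted average concentration from a passive sampler, assuming
# the standard TWA relation C = n / (R_s * t).
def twa_concentration(mass_ng, sampling_rate_ml_per_min, minutes):
    """Return the TWA concentration in ng/mL."""
    return mass_ng / (sampling_rate_ml_per_min * minutes)

# hypothetical example: 12 ng sorbed over 60 min at an uptake of 0.5 mL/min
c_twa = twa_concentration(12.0, 0.5, 60.0)
```

The linear-uptake requirement noted in the abstract matters here: the relation only holds while the fiber is far from equilibrium, which is why the smaller, less hydrophobic analytes showing nonlinear uptake need shorter exposures or a calibration correction.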
Scaling of Average Weighted Receiving Time on Double-Weighted Koch Networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Ye, Dandan; Hou, Jie; Li, Xingyi
2015-03-01
In this paper, we introduce a model of double-weighted Koch networks based on actual road networks, depending on two weight factors w, r ∈ (0, 1]. The double weights represent the capacity-flowing weight and the cost-traveling weight, respectively. Denote by w^F_ij the capacity-flowing weight connecting nodes i and j, and by w^C_ij the cost-traveling weight connecting nodes i and j. Let w^F_ij be related to the weight factor w, and let w^C_ij be related to the weight factor r. This paper assumes that the walker, at each step, starting from its current node, moves to any of its neighbors with probability proportional to the capacity-flowing weight of the edge linking them. The weighted time for two adjacent nodes is the cost-traveling weight connecting the two nodes. We define the average weighted receiving time (AWRT) on the double-weighted Koch networks. The obtained result displays that in the large network, the AWRT grows as a power-law function of the network order with the exponent represented by θ(w, r) = (1/2) log_2(1 + 3wr). We show that the AWRT exhibits a sublinear or linear dependence on network order. Thus, the double-weighted Koch networks are more efficient than classic Koch networks in receiving information.
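The AWRT idea can be sketched on a generic small weighted graph rather than the Koch construction: the walker moves with probability proportional to the edge weight, each step costs the traversed edge's weight, and the mean weighted time to the trap solves a linear system. For simplicity a single weight matrix plays both the capacity and cost roles here; the graph and all numbers are illustrative assumptions.

```python
# Mean weighted receiving time to a trap node on a toy weighted graph.
import numpy as np

# symmetric weight matrix; node 0 is the trap (receiving node)
W = np.array([[0.0, 2.0, 1.0, 0.0],
              [2.0, 0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0, 2.0],
              [0.0, 1.0, 2.0, 0.0]])
n = W.shape[0]
P = W / W.sum(axis=1, keepdims=True)   # move prob. proportional to weight

# T_i = sum_j P_ij * (W_ij + T_j) with T_0 = 0, i.e.
# (I - P_ff) T = expected one-step cost, restricted to non-trap nodes
free = np.arange(1, n)
A = np.eye(n - 1) - P[np.ix_(free, free)]
b = (P * W).sum(axis=1)[free]          # expected cost of a single step
T = np.linalg.solve(A, b)              # mean weighted time from each node

awrt = float(T.mean())                 # average over non-trap start nodes
```

On fractal constructions like the Koch networks, repeating this for successive generations and fitting awrt against network order would expose the power-law exponent discussed above.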
NASA Technical Reports Server (NTRS)
Triedman, J. K.; Perrott, M. H.; Cohen, R. J.; Saul, J. P.
1995-01-01
Fourier-based techniques are mathematically noncausal and are therefore limited in their application to feedback-containing systems, such as the cardiovascular system. In this study, a mathematically causal time domain technique, autoregressive moving average (ARMA) analysis, was used to parameterize the relations of respiration and arterial blood pressure to heart rate in eight humans before and during total cardiac autonomic blockade. Impulse-response curves thus generated showed the relation of respiration to heart rate to be characterized by an immediate increase in heart rate of 9.1 +/- 1.8 beats.min-1.l-1, followed by a transient mild decrease in heart rate to -1.2 +/- 0.5 beats.min-1.l-1 below baseline. The relation of blood pressure to heart rate was characterized by a slower decrease in heart rate of -0.5 +/- 0.1 beats.min-1.mmHg-1, followed by a gradual return to baseline. Both of these relations nearly disappeared after autonomic blockade, indicating autonomic mediation. Maximum values obtained from the respiration to heart rate impulse responses were also well correlated with frequency domain measures of high-frequency "vagal" heart rate control (r = 0.88). ARMA analysis may be useful as a time domain representation of autonomic heart rate control for cardiovascular modeling.
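The time-domain idea reduces to a minimal sketch with synthetic data: fit an autoregressive model with an exogenous input by least squares, then read off the impulse response by feeding the fitted model a unit impulse. The model order, the coupling y_t = 0.5*y_{t-1} + u_t, and all names are assumptions for illustration, not the study's physiology:

```python
# Least-squares ARX fit and its impulse response on a synthetic system.
import numpy as np

rng = np.random.default_rng(3)
n = 500
u = rng.standard_normal(n)               # exogenous input ("respiration")
y = np.zeros(n)                          # output ("heart rate")
for t in range(1, n):                    # true system: y_t = 0.5 y_{t-1} + u_t
    y[t] = 0.5 * y[t - 1] + u[t]

# least-squares fit of y_t = a * y_{t-1} + b * u_t
X = np.column_stack([y[:-1], u[1:]])
a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]

# impulse response of the fitted model: propagate a unit input impulse
ir = np.zeros(8)
ir[0] = b
for t in range(1, 8):
    ir[t] = a * ir[t - 1]
```

Because the synthetic data are noiseless the fit recovers the true coefficients essentially exactly; with real, noisy data one would add more lags and regularize, but the causal impulse-response readout works the same way.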
Detecting macroeconomic phases in the Dow Jones Industrial Average time series
NASA Astrophysics Data System (ADS)
Wong, Jian Cheng; Lian, Heng; Cheong, Siew Ann
2009-11-01
In this paper, we perform statistical segmentation and clustering analysis of the Dow Jones Industrial Average (DJI) time series between January 1997 and August 2008. Modeling the index movements and log-index movements as stationary Gaussian processes, we find a total of 116 and 119 statistically stationary segments respectively. These can then be grouped into between five and seven clusters, each representing a different macroeconomic phase. The macroeconomic phases are distinguished primarily by their volatilities. We find that the US economy, as measured by the DJI, spends most of its time in a low-volatility phase and a high-volatility phase. The former can be roughly associated with economic expansion, while the latter contains the economic contraction phase in the standard economic cycle. Both phases are interrupted by a moderate-volatility market correction phase, but extremely-high-volatility market crashes are found mostly within the high-volatility phase. From the temporal distribution of various phases, we see a high-volatility phase from mid-1998 to mid-2003, and another starting mid-2007 (the current global financial crisis). Transitions from the low-volatility phase to the high-volatility phase are preceded by a series of precursor shocks, whereas the transition from the high-volatility phase to the low-volatility phase is preceded by a series of inverted shocks. The time scale for both types of transitions is about a year. We also identify the July 1997 Asian Financial Crisis to be the trigger for the mid-1998 transition, and an unnamed May 2006 market event related to corrections in the Chinese markets to be the trigger for the mid-2007 transition.
NASA Astrophysics Data System (ADS)
Yoon, Y.; Kim, N.; Puria, S.; Steele, C. R.
2009-02-01
In this work, basilar membrane velocity (VBM), scala tympani intracochlear pressure (PST), and cochlear input impedance (ZC) for gerbil and chinchilla are implemented with a three-dimensional hydrodynamic cochlear model using 1) a time-averaged Lagrangian, 2) a push-pull mechanism in the active case, and 3) the complex anatomy of the cochlear scalae obtained by micro computed tomography (μCT) scanning and 3-D reconstruction of gerbil and chinchilla temporal bones. The objective of this work is to compare the calculations of the present model with physiological measurements of the gerbil and chinchilla cochleae, such as VBM (Ren and Nuttall [1]), PST (Olson [2]), and ZC (Decraemer et al. [3], Songer and Rosowski [4], Ruggero et al. [5]). A WKB asymptotic method combined with Fourier series expansions is used to provide an efficient simulation. The VBM and PST simulation results for the gerbil cochlea show good agreement in both magnitude and phase with the physiological measurements, without large phase excursions. The ZC simulations from the gerbil and chinchilla models show reasonably good agreement with measurement.
NASA Astrophysics Data System (ADS)
Liotard, Arnaud; Muratet, Sylvaine; Zamkotsian, Frédéric; Fourniols, Jean-Yves
2005-01-01
Since micro deformable mirrors based on Micro-Opto-Electro-Mechanical Systems (MOEMS) technology will be essential in next-generation adaptive optics systems, we are designing, realizing, characterizing and modeling this key component. Actuators and a continuous-membrane micro deformable mirror (3×3 actuators, 600×600 μm²) have been designed in-house and processed by surface micromachining in the Cronos foundry. A dedicated characterization bench has been developed for the complete analysis. This Twyman-Green interferometer allows high in-plane resolution (4 μm) or a large field of view (40 mm). Out-of-plane measurements are performed with phase-shifting interferometry, showing highly repeatable results (standard deviation < 5 nm). Features such as optical quality or electro-mechanical behavior are extracted from these high-precision three-dimensional component maps, and FEM models can be fitted. Dynamic analyses such as vibration modes and cut-off frequency are realized with a time-averaged interferometer. The deformable mirror exhibits a 350 nm stroke at 35 volts on the central actuator. This limited stroke could be overcome by changing the component material; promising actuators are made with polymers.
Selection of Ka-Band Transponder Turnaround Frequency Ratio
NASA Technical Reports Server (NTRS)
Koukos, J.
1993-01-01
The Consultative Committee for Space Data Systems has issued recommendations specifying transponder turnaround frequency ratios for S-band and X-band coherent earth-to-space and space-to-earth links.
Turnaround operations analysis for OTV. Volume 2: Detailed technical report
NASA Technical Reports Server (NTRS)
1988-01-01
The objectives and accomplishments were to adapt and apply the newly created database of Shuttle/Centaur ground operations. Previously defined turnaround operations analyses were updated for ground-based OTVs (GBOTVs) and space-based OTVs (SBOTVs), design requirements were identified for both OTV and Space Station accommodations hardware, turnaround operations costs were estimated, and a technology development plan was generated to develop the required capabilities. Technical and programmatic data were provided for NASA pertinent to OTV ground and space operations requirements, turnaround operations, task descriptions, timelines and manpower requirements, OTV modular design and booster and Space Station interface requirements, the SBOTV accommodations development schedule, cost and turnaround operations requirements, and a technology development plan for ground and space operations and space-based accommodation facilities and support equipment. Significant conclusions are discussed.
34. BOILER HOUSE, COAL CONVEYOR AND TURNAROUND TRACK FOR COAL ...
34. BOILER HOUSE, COAL CONVEYOR AND TURN-AROUND TRACK FOR COAL CARS (NOTE: COAL CAR No. 6 IN FAR BACKGROUND) - Delaware County Electric Company, Chester Station, Delaware River at South end of Ward Street, Chester, Delaware County, PA
NASA Astrophysics Data System (ADS)
Wu, Yue-Chao; Zhao, Bin; Lee, Soo-Y.
2016-02-01
Femtosecond stimulated Raman spectroscopy (FSRS) on the Stokes side arises from a third-order polarization, P(3)(t), which is given by an overlap of a first-order wave packet, |Ψ_2^(1)(p_u, t)⟩, prepared by a narrow-band (ps) Raman pump pulse, E_pu(t), on the upper electronic e_2 potential energy surface (PES), with a second-order wave packet, ⟨Ψ_1^(2)(p_r*, p_u, t)|, that is prepared on the lower electronic e_1 PES by a broadband (fs) probe pulse, E_pr(t), acting on the first-order wave packet. In off-resonant FSRS, |Ψ_2^(1)(p_u, t)⟩ spatially resembles the zeroth-order wave packet |Ψ_1^(0)(t)⟩ on the lower PES, but with a force on |Ψ_2^(1)(p_u, t)⟩ along the coordinates of the reporter modes due to displacements in the equilibrium position, so that ⟨Ψ_1^(2)(p_r*, p_u, t)| will oscillate along those coordinates, giving rise to similar oscillations in P(3)(t) with the frequencies of the reporter modes. So, by recovering P(3)(t) from the FSRS spectrum, we are able to deduce information on the time-dependent quantum-mechanical wave-packet-averaged frequencies, ω̄_j(t), of the reporter modes j along the trajectory of |Ψ_1^(0)(t)⟩. The observable FSRS Raman gain is related to the imaginary part of P(3)(ω). The imaginary and real parts of P(3)(ω) are related by the Kramers-Kronig relation. Hence, from the FSRS Raman gain, we can obtain the complex P(3)(ω), whose Fourier transform then gives us the complex P(3)(t) to analyze for ω̄_j(t). We apply the theory, first, to a two-dimensional model system with one conformational mode of low frequency and one reporter vibrational mode of higher frequency with good results, and then we apply it to the time-resolved FSRS spectra of the cis-trans isomerization of retinal in rhodopsin [P. Kukura et al., Science 310, 1006 (2005)]. We obtain the vibrational
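The Kramers-Kronig step invoked above has a compact numerical analogue, sketched here on a synthetic causal response (all signals are illustrative, not the paper's data pipeline): because P(t) vanishes for t < 0, its full spectrum, and hence Re P(ω), is recoverable from Im P(ω) alone.

```python
# Discrete Kramers-Kronig demonstration: rebuild Re P(w) from Im P(w)
# using causality of the time-domain response.
import numpy as np

n = 1024
m = n // 2
t = np.arange(m) * 0.05
p = np.zeros(n)
p[:m] = np.exp(-t / 2.0) * np.sin(5.0 * t)   # causal: zero on the "t < 0" half
P = np.fft.fft(p)

# Im P(w) is the spectrum of the odd part of p(t) (up to a factor of i)
p_odd = np.fft.ifft(1j * P.imag).real
# causality gives p(t) = 2 * p_odd(t) for t > 0, so the full signal --
# and therefore Re P(w) -- follows from Im P(w) alone
p_rec = np.zeros(n)
p_rec[1:m] = 2.0 * p_odd[1:m]
P_rec = np.fft.fft(p_rec)
max_err = float(np.max(np.abs(P_rec.real - P.real)))
```

This is the discrete counterpart of the continuum Hilbert-transform pair relating Re P(ω) and Im P(ω); in the paper's workflow the measured Raman gain plays the role of the imaginary part.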
NASA Astrophysics Data System (ADS)
Santer, B. D.; Wigley, T. M. L.; Boyle, J. S.; Gaffen, D. J.; Hnilo, J. J.; Nychka, D.; Parker, D. E.; Taylor, K. E.
2000-03-01
This paper examines trend uncertainties in layer-average free atmosphere temperatures arising from the use of different trend estimation methods. It also considers statistical issues that arise in assessing the significance of individual trends and of trend differences between data sets. Possible causes of these trends are not addressed. We use data from satellite and radiosonde measurements and from two reanalysis projects. To facilitate intercomparison, we compute from reanalyses and radiosonde data temperatures equivalent to those from the satellite-based Microwave Sounding Unit (MSU). We compare linear trends based on minimization of absolute deviations (LA) and minimization of squared deviations (LS). Differences are generally less than 0.05°C/decade over 1959-1996. Over 1979-1993, they exceed 0.10°C/decade for lower tropospheric time series and 0.15°C/decade for the lower stratosphere. Trend fitting by the LA method can degrade the lower-tropospheric trend agreement of 0.03°C/decade (over 1979-1996) previously reported for the MSU and radiosonde data. In assessing trend significance we employ two methods to account for temporal autocorrelation effects. With our preferred method, virtually none of the individual 1979-1993 trends in deep-layer temperatures are significantly different from zero. To examine trend differences between data sets we compute 95% confidence intervals for individual trends and show that these overlap for almost all data sets considered. Confidence intervals for lower-tropospheric trends encompass both zero and the model-projected trends due to anthropogenic effects. We also test the significance of a trend in d(t), the time series of differences between a pair of data sets. Use of d(t) removes variability common to both time series and facilitates identification of small trend differences. This more discerning test reveals that roughly 30% of the data set comparisons have significant differences in lower-tropospheric trends
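The LS/LA sensitivity examined above can be illustrated on synthetic data (the numbers below are made up, not the paper's): a least-squares fit is pulled by a single outlier, while a least-absolute-deviations fit, here approximated by iteratively reweighted least squares, is far more resistant:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1979, 1994)                       # a 1979-1993 window, as in the paper
temp = 0.05 * (t - t[0]) + rng.normal(0, 0.05, t.size)  # true trend 0.05 deg/yr
temp[3] += 3.0                                   # one large spike (illustrative)

A = np.vstack([t - t.mean(), np.ones(t.size)]).T

# LS: minimize squared deviations
ls_slope = np.linalg.lstsq(A, temp, rcond=None)[0][0]

# LA: minimize absolute deviations via iteratively reweighted least squares
w = np.ones_like(temp)
for _ in range(200):
    beta = np.linalg.lstsq(A * w[:, None], temp * w, rcond=None)[0]
    r = temp - A @ beta
    w = 1.0 / np.sqrt(np.maximum(np.abs(r), 1e-8))  # w^2 * r^2 = |r|
la_slope = beta[0]
```

The LA slope stays near the true trend while the LS slope is biased by the outlier, which is the mechanism behind the method-dependent trend differences reported above.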
Redshift-space equal-time angular-averaged consistency relations of the gravitational dynamics
NASA Astrophysics Data System (ADS)
Nishimichi, Takahiro; Valageas, Patrick
2015-12-01
We present the redshift-space generalization of the equal-time angular-averaged consistency relations between (ℓ+n)- and n-point polyspectra (i.e., the Fourier counterparts of correlation functions) of the cosmological matter density field. Focusing on the case of the ℓ=1 large-scale mode and n small-scale modes, we use an approximate symmetry of the gravitational dynamics to derive explicit expressions that hold beyond the perturbative regime, including both the large-scale Kaiser effect and the small-scale fingers-of-god effects. We explicitly check these relations, both perturbatively, for the lowest-order version that applies to the bispectrum, and nonperturbatively, for all orders but for the one-dimensional dynamics. Using a large ensemble of N-body simulations, we find that our relation on the bispectrum in the squeezed limit (i.e., the limit where one wave number is much smaller than the other two) is valid to better than 20% up to 1 h Mpc⁻¹, for both the monopole and quadrupole at z = 0.35, in a ΛCDM cosmology. Additional simulations done for the Einstein-de Sitter background suggest that these discrepancies mainly come from the breakdown of the approximate symmetry of the gravitational dynamics. For practical applications, we introduce a simple ansatz to estimate the new derivative terms in the relation using only observables. Although the relation holds worse after using this ansatz, we can still recover it within 20% up to 1 h Mpc⁻¹, at z = 0.35 for the monopole. On larger scales, k = 0.2 h Mpc⁻¹, it still holds within the statistical accuracy of idealized simulations of volume ~8 h⁻³ Gpc³ without shot-noise error.
Paleosecular Variation and Time-Averaged Field Behavior: Global and Regional Signatures
NASA Astrophysics Data System (ADS)
Johnson, C. L.; Cromwell, G.; Tauxe, L.; Constable, C.
2012-12-01
We use an updated global dataset of directional and intensity data from lava flows to investigate time-averaged field (TAF) and paleosecular variation (PSV) signatures regionally and globally. The data set includes observations from the past 10 Ma, but we focus our investigations on the field structure over the past 5 Ma, in particular during the Brunhes and Matuyama. We restrict our analyses to sites with at least 5 samples (all of which have been stepwise demagnetized), and for which the estimate of the Fisher precision parameter, k, is at least 50. The data set comprises 1572 sites from the past 5 Ma that span latitudes 78°S to 71°N; of these ~40% are from the Brunhes chron and ~20% are from the Matuyama chron. Age control at the site level is variable because radiometric dates are available for only about one third of our sites. New TAF models for the Brunhes show longitudinal structure. In particular, high latitude flux lobes are observed, constrained by improved data sets from N. and S. America, Japan, and New Zealand. We use resampling techniques to examine possible biases in the TAF and PSV incurred by uneven temporal sampling and the limited age information available for many sites. Results from Hawaii indicate that resampling of the paleodirectional data onto a uniform temporal distribution, incorporating site ages and age errors, leads to a TAF estimate for the Brunhes that is close to that reported for the actual data set, but a PSV estimate (virtual geomagnetic pole dispersion) that is increased relative to that obtained from the unevenly sampled data. The global distribution of sites in our dataset allows us to investigate possible hemispheric asymmetries in field structure, in particular differences between north and south high latitude field behavior and low latitude differences between the Pacific and Atlantic hemispheres.
Where the world stands still: turnaround as a strong test of ΛCDM cosmology
Pavlidou, V.; Tomaras, T.N. E-mail: tomaras@physics.uoc.gr
2014-09-01
Our intuitive understanding of cosmic structure formation works best in scales small enough so that isolated, bound, relaxed gravitating systems are no longer adjusting their radius; and large enough so that space and matter follow the average expansion of the Universe. Yet one of the most robust predictions of ΛCDM cosmology concerns the scale that separates these limits: the turnaround radius, which is the non-expanding shell furthest away from the center of a bound structure. We show that the maximum possible value of the turnaround radius within the framework of the ΛCDM model is, for a given mass M, equal to (3GM/Λc²)^{1/3}, with G Newton's constant and c the speed of light, independently of cosmic epoch, exact nature of dark matter, or baryonic effects. We discuss the possible use of this prediction as an observational test for ΛCDM cosmology. Current data appear to favor ΛCDM over alternatives with local inhomogeneities and no Λ. However there exist several local-universe structures that have, within errors, reached their limiting size. With improved determinations of their turnaround radii and the enclosed mass, these objects may challenge the limit and ΛCDM cosmology.
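As a quick sanity check on the scale of this bound, the sketch below evaluates (3GM/Λc²)^{1/3} for a 10^15 solar-mass structure; the constants are standard approximate values, not taken from the paper:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
LAMBDA = 1.1e-52     # m^-2, approximate cosmological constant
M_SUN = 1.989e30     # kg
MPC = 3.086e22       # m

def max_turnaround_radius(mass_kg):
    # R_max = (3 G M / (Lambda c^2))^(1/3), independent of epoch
    return (3.0 * G * mass_kg / (LAMBDA * c * c)) ** (1.0 / 3.0)

R_Mpc = max_turnaround_radius(1e15 * M_SUN) / MPC   # order 10 Mpc
```

A cluster-scale mass of 10^15 M_sun gives a maximum turnaround radius of roughly 11 Mpc, which is the kind of scale against which the local-universe structures mentioned above are compared.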
Guedes, R.M.C.; Calliari, L.J.; Holland, K.T.; Plant, N.G.; Pereira, P.S.; Alves, F.N.A.
2011-01-01
Time-exposure intensity (averaged) images are commonly used to locate the nearshore sandbar position (xb), based on the cross-shore locations of maximum pixel intensity (xi) of the bright bands in the images. It is not known, however, how the breaking patterns seen in Variance images (i.e. those created through the standard deviation of pixel intensity over time) are related to the sandbar locations. We investigated the suitability of both Time-exposure and Variance images for sandbar detection within a multiple bar system on the southern coast of Brazil, and verified the relation between wave breaking patterns, observed as bands of high intensity in these images, and cross-shore profiles of modeled wave energy dissipation (xD). Not only is the Time-exposure maximum pixel intensity location (xi-Ti) well related to xb, but also to the maximum pixel intensity location of Variance images (xi-Va), although the latter was typically located 15 m offshore of the former. In addition, xi-Va was observed to be better associated with xD even though xi-Ti is commonly assumed as the maximum wave energy dissipation. Significant wave height (Hs) and water level (η) were observed to affect the two types of images in a similar way, with an increase in both Hs and η resulting in xi shifting offshore. This η-induced xi variability has an opposite behavior to what is described in the literature, and is likely an indirect effect of higher waves breaking farther offshore during periods of storm surges. Multiple regression models performed on xi, Hs and η allowed the reduction of the residual errors between xb and xi, yielding accurate estimates with most residuals less than 10 m. Additionally, it was found that the sandbar position was best estimated using xi-Ti (xi-Va) when xb was located shoreward (seaward) of its mean position, for both the first and the second bar. Although it is unknown whether this is an indirect hydrodynamic effect or is indeed related to the morphology, we found that this
A Weight-Averaged Interpolation Method for Coupling Time-Accurate Rarefied and Continuum Flows
NASA Astrophysics Data System (ADS)
Diaz, Steven William
A novel approach to coupling rarefied and continuum flow regimes as a single, hybrid model is introduced. The method borrows from techniques used in the simulation of spray flows to interpolate Lagrangian point-particles onto an Eulerian grid in a weight-averaged sense. A brief overview of traditional methods for modeling both rarefied and continuum domains is given, and a review of the literature regarding rarefied/continuum flow coupling is presented. Details of the theoretical development of the method of weighted interpolation are then described. The method evaluates macroscopic properties at the nodes of a CFD grid via the weighted interpolation of all simulated molecules in a set surrounding the node. The weight factor applied to each simulated molecule is the inverse of the linear distance between it and the given node. During development, the method was applied to several preliminary cases, including supersonic flow over an airfoil, subsonic flow over tandem airfoils, and supersonic flow over a backward facing step; all at low Knudsen numbers. The main thrust of the research centered on the time-accurate expansion of a rocket plume into a near-vacuum. The method proves flexible enough to be used with various flow solvers, demonstrated by the use of Fluent as the continuum solver for the preliminary cases and a NASA-developed Large Eddy Simulation research code, WRLES, for the full lunar model. The method is applicable to a wide range of Mach numbers and is completely grid independent, allowing the rarefied and continuum solvers to be optimized for their respective domains without consideration of the other. The work presented demonstrates the validity, and flexibility of the method of weighted interpolation as a novel concept in the field of hybrid flow coupling. The method marks a significant divergence from current practices in the coupling of rarefied and continuum flow domains and offers a kernel on which to base an ongoing field of research. It has the
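The node-averaging rule described above (each simulated molecule contributes to a CFD node with a weight equal to the inverse of its linear distance from that node) can be sketched as follows; the function and sample numbers are illustrative, not taken from the dissertation's solver:

```python
import numpy as np

def node_average(node, molecules, values):
    """Weight-averaged macroscopic property at a grid node: each simulated
    molecule contributes with weight 1/d, d = linear distance to the node."""
    d = np.linalg.norm(molecules - node, axis=1)
    w = 1.0 / np.maximum(d, 1e-12)          # guard against coincident points
    return float(np.sum(w * values) / np.sum(w))

# Two molecules at distances 1 and 3 from the node, carrying velocities 0 and 4:
node = np.array([0.0, 0.0])
mols = np.array([[1.0, 0.0], [3.0, 0.0]])
vals = np.array([0.0, 4.0])
u = node_average(node, mols, vals)           # weights 1 and 1/3
```

Because the weights depend only on distances, the rule is grid independent, which is why the rarefied and continuum solvers can be meshed and optimized separately, as the abstract notes.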
Time-Averaged Velocity, Temperature and Density Surveys of Supersonic Free Jets
NASA Technical Reports Server (NTRS)
Panda, Jayanta; Seasholtz, Richard G.; Elam, Kristie A.; Mielke, Amy F.
2005-01-01
A spectrally resolved molecular Rayleigh scattering technique was used to simultaneously measure axial component of velocity U, static temperature T, and density p in unheated free jets at Mach numbers M = 0.6,0.95, 1.4 and 1.8. The latter two conditions were achieved using contoured convergent-divergent nozzles. A narrow line-width continuous wave laser was passed through the jet plumes and molecular scattered light from a small region on the beam was collected and analyzed using a Fabry-Perot interferometer. The optical spectrum analysis air density at the probe volume was determined by monitoring the intensity variation of the scattered light using photo-multiplier tubes. The Fabry-Perot interferometer was operated in the imaging mode, whereby the fringe formed at the image plane was captured by a cooled CCD camera. Special attention was given to remove dust particles from the plume and to provide adequate vibration isolation to the optical components. The velocity profiles from various operating conditions were compared with that measured by a Pitot tube. An excellent comparison within 5m's demonstrated the maturity of the technique. Temperature was measured least accurately, within 10K, while density was measured within 1% uncertainty. The survey data consisted of centerline variations and radial profiles of time-averaged U, T and p. The static temperature and density values were used to determine static pressure variations inside the jet. The data provided a comparative study of jet growth rates with increasing Mach number. The current work is part of a data-base development project for Computational Fluid Dynamics and Aeroacoustics codes that endeavor to predict noise characteristics of high speed jets. A limited amount of far field noise spectra from the same jets are also presented. Finally, a direct experimental validation was obtained for the Crocco-Busemann equation which is commonly used to predict temperature and density profiles from known velocity
Diagnostic quality of time-averaged ECG-gated CT data
NASA Astrophysics Data System (ADS)
Klein, Almar; Oostveen, Luuk J.; Greuter, Marcel J. W.; Hoogeveen, Yvonne; Schultze Kool, Leo J.; Slump, Cornelis H.; Renema, W. Klaas Jan
2009-02-01
Purpose: ECG-gated CTA allows visualization of the aneurysm and stentgraft during the different phases of the cardiac cycle, although with a lower SNR per cardiac phase than without ECG gating using the same dose. In our institution, abdominal aortic aneurysm (AAA) is evaluated using non-ECG-gated CTA. Some common CT scanners cannot reconstruct a non-gated volume from ECG-gated acquired data. In order to obtain the same diagnostic image quality, we propose offline temporal averaging of the ECG-gated data. This process, though straightforward, is fundamentally different from taking a non-gated scan, and its result will certainly differ as well. The purpose of this study is to quantitatively investigate how good off-line averaging approximates a non-gated scan. Method: Non-gated and ECG-gated CT scans have been performed on a phantom (Catphan 500). Afterwards the phases of the ECG-gated CTA data were averaged to create a third dataset. The three sets are compared with respect to noise properties (NPS) and frequency response (MTF). To study motion artifacts identical scans were acquired on a programmable dynamic phantom. Results and Conclusions: The experiments show that the spatial frequency content is not affected by the averaging process. The minor differences observed for the noise properties and motion artifacts are in favor of the averaged data. Therefore the averaged ECG-gated phases can be used for diagnosis. This enables the use of ECG-gating for research on stentgrafts in AAA, without impairing clinical patient care.
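The offline averaging step itself is just a mean over the reconstructed cardiac phases. The toy sketch below (synthetic arrays, not scanner data) shows the expected roughly sqrt(N) noise reduction that motivates using the averaged volume for diagnosis:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.normal(0, 1, (64, 64))                    # hypothetical static phantom slice
phases = truth + rng.normal(0, 5.0, (10, 64, 64))     # 10 gated phases, low SNR each

avg = phases.mean(axis=0)                             # offline temporal average

noise_single = np.std(phases[0] - truth)              # ~5
noise_avg = np.std(avg - truth)                       # ~5 / sqrt(10)
```

For 10 phases the residual noise drops by about a factor of 3, consistent with the abstract's point that the averaged ECG-gated data approaches non-gated image quality at the same dose.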
NASA Technical Reports Server (NTRS)
Chung, William; Chachad, Girish; Hochstetler, Ronald
2016-01-01
The Integrated Gate Turnaround Management (IGTM) concept was developed to improve gate turnaround performance at the airport by leveraging relevant historical data to support optimization of airport gate operations (taxi to the gate, gate services, push back, taxi to the runway, and takeoff) based on available resources, constraints, and uncertainties. By analyzing gate-operation events, the primary performance-dependent attributes of these events were identified for the historical data analysis, so that performance models could be developed under uncertainty to support descriptive, predictive, and prescriptive functions. A system architecture was developed to examine system requirements in support of such a concept. An IGTM prototype was developed to demonstrate the concept, using a distributed network and collaborative decision tools for stakeholders to meet on-time pushback performance under uncertainties.
Schmidt, R E; Yu, M J; McDougal, D B
1980-09-01
Reversal of direction (turnaround) of axonal transport of particle-specific enzyme activities was studied at a ligature placed on rat sciatic nerve. In the principal experiment, the ligature remained on the nerve in vivo several hours, allowing enzyme activities (acetylcholinesterase, acid phosphatase, and monoamine oxidase) to accumulate immediately proximal to the tie. The nerve was then tied a second time, proximal to the first tie, and incubated in vitro for several more hours. Accumulation of enzyme activities just distal to the second tie was measured. This second accumulation, of activities traveling in the retrograde direction, was shown to be the result of turnaround in several ways. (1) The increase in activity distal to the second tie was equal to the decrease in activity proximal to the first. (2) The increase in enzyme activities distal to the second tie was greatly reduced when the accumulation proximal to the first tie was trapped by placing a third tie between the first and second ties. (3) It was shown that the activity that accumulated distal to the second tie could not have been in retrograde motion at the time of the first tie. (4) Accumulation distal to the second tie was not a function of the length of nerve segment included between the two ties. In contrast to the consistent occurrence of turnaround of orthograde flow, turnaround of retrograde flow could not be demonstrated. Turnaround transport was blocked by incubation in the cold and in the presence of NaCN or vinblastine. The turnaround process operated on all three enzymes studied, suggesting that it operates on lysosomes and mitochondria, as well as on the endoplasmic reticulum-like material bearing acetylcholinesterase. Evidence for the participation of the transport process in the renewal of AChE in the distal portions of the axon was obtained in experiments using diisopropylphosphorofluoridate and cycloheximide. PMID:6161227
NASA Technical Reports Server (NTRS)
Mielke, Amy F.; Seasholtz, Richard G.; Elam, Krisie A.; Panda, Jayanta
2004-01-01
Measurement of time-averaged velocity, density, temperature, and turbulence in gas flows using a nonintrusive, point-wise measurement technique based on molecular Rayleigh scattering is discussed. Subsonic and supersonic flows in a 25.4-mm diameter free jet facility were studied. The developed instrumentation utilizes a Fabry-Perot interferometer to spectrally resolve molecularly scattered light from a laser beam passed through a gas flow. The spectrum of the scattered light contains information about velocity, density, and temperature of the gas. The technique uses a slow scan, low noise 16-bit depth CCD camera to record images of the fringes formed by Rayleigh scattered light passing through the interferometer. A kinetic theory model of the Rayleigh scattered light is used in a nonlinear least squares fitting routine to estimate the unknown parameters from the fringe images. The ability to extract turbulence information from the fringe image data proved to be a challenge since the fringe is broadened by not only turbulence, but also thermal fluctuations and aperture effects from collecting light over a range of scattering angles. Figure 1 illustrates broadening of a Rayleigh spectrum typical of flow conditions observed in this work due to aperture effects and turbulence for a scattering angle, χs, of 90 degrees, f/3.67 collection optics, mean flow velocity, uk, of 300 m/s, and turbulent velocity fluctuations, σuk, of 55 m/s. The greatest difficulty in processing the image data was decoupling the thermal and turbulence broadening in the spectrum. To aid in this endeavor, it was necessary to seed the ambient air with smoke and dust particulates; taking advantage of the turbulence broadening in the Mie scattering component of the spectrum of the collected light (not shown in the figure). The primary jet flow was not seeded due to the difficulty of the task. For measurement points lacking particles, velocity, density, and temperature
Discrete Randomness in Discrete Time Quantum Walk: Study Via Stochastic Averaging
NASA Astrophysics Data System (ADS)
Ellinas, D.; Bracken, A. J.; Smyrnakis, I.
2012-10-01
The role of classical noise in quantum walks (QW) on the integers is investigated in the form of a discrete dichotomic random variable affecting the reshuffling matrix, parametrized as an SU(2)/U(1) coset element. Analysis in terms of quantum statistical moments and generating functions, derived from the completely positive trace preserving (CPTP) map governing the evolution, reveals a pronounced eventual transition in the walk's diffusion mode, from a quantum ballistic regime with rate O(t) to a classical diffusive regime with rate O(√t), when the condition (strength of noise parameter)² × (number of steps) = 1 is satisfied. The role of classical randomness is studied, showing that the randomized QW, when treated on the stochastic-average level by means of an appropriate CPTP averaging map, turns out to be equivalent to a novel quantized classical walk without randomness. This result emphasizes the dual role of quantization/randomization in the context of the classical random walk.
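The ballistic O(t) spread of the noise-free walk is easy to verify numerically. The sketch below simulates a standard Hadamard-coin discrete-time QW (not the paper's randomized CPTP construction) and measures the position standard deviation, which grows linearly with step count rather than as √t:

```python
import numpy as np

def hadamard_walk_std(steps):
    """Position standard deviation of a Hadamard-coin QW on the integers,
    started at the origin with coin state |0>."""
    n = 2 * steps + 1
    amp = np.zeros((n, 2), dtype=complex)     # (position, coin) amplitudes
    amp[steps, 0] = 1.0
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        amp = amp @ H.T                        # coin toss
        new = np.zeros_like(amp)
        new[1:, 0] = amp[:-1, 0]               # coin 0 shifts right
        new[:-1, 1] = amp[1:, 1]               # coin 1 shifts left
        amp = new
    p = np.sum(np.abs(amp) ** 2, axis=1)       # position distribution
    x = np.arange(-steps, steps + 1)
    mean = np.sum(p * x)
    return float(np.sqrt(np.sum(p * (x - mean) ** 2)))
```

After 100 steps the spread is roughly half the step count, far above the √t ≈ 10 of a classical random walk; the paper's point is that dichotomic noise eventually collapses this ballistic scaling back to the classical rate.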
Code of Federal Regulations, 2010 CFR
2010-07-01
... geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in appendix A of this part, section 5.4, to determine the daily geometric average percent reduction of potential sulfur dioxide emissions. (c) If you operate...
Code of Federal Regulations, 2013 CFR
2013-07-01
... geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in appendix A of this part, section 5.4, to determine the daily geometric average percent reduction of potential sulfur dioxide emissions. (c) If you operate...
Code of Federal Regulations, 2014 CFR
2014-07-01
... geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in appendix A of this part, section 5.4, to determine the daily geometric average percent reduction of potential sulfur dioxide emissions. (c) If you operate...
Code of Federal Regulations, 2011 CFR
2011-07-01
... geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in appendix A of this part, section 5.4, to determine the daily geometric average percent reduction of potential sulfur dioxide emissions. (c) If you operate...
NASA Astrophysics Data System (ADS)
Maksimchuk, A.; Nantel, M.; Workman, J.; Umstadter, D.; Mourou, G.; Pikuz, S.
1996-11-01
We have developed an averaging picosecond x-ray streak camera that uses a DC-biased semi-insulating GaAs photoconductive switch as a generator of a high-voltage ramp for the deflection plates of the streak camera. The streak camera is operated at a sweep speed of up to 8 ps/mm, with a shot-to-shot jitter of 1 ps. The streak camera driven by this switch is absolutely synchronized with the femtosecond laser pulse and allows the direct accumulation of x-ray temporal signals at a 10 Hz repetition rate. The streak camera has been used for the investigation of the temporal behavior of keV x-ray lines and spectrally resolved XUV emission from a solid target at a laser intensity of 10^17 W/cm^2. This work is supported by NSF under Grant STC PHY 8920108.
NASA Astrophysics Data System (ADS)
Yang, Mino; Skinner, J. L.
2011-10-01
The time-averaging approximation (TAA), originally developed to calculate vibrational line shapes for coupled chromophores using mixed quantum/classical methods, is reformulated. In the original version of the theory, time averaging was performed for the full one-exciton Hamiltonian, while herein the time averaging is performed on the coupling (off-diagonal) Hamiltonian in the interaction picture. As a result, the influence of the dynamic fluctuations of the transition energies is more accurately described. We compare numerical results of the two versions of the TAA with numerically exact results for the vibrational absorption line shape of the OH stretching modes in neat water. It is shown that the TAA in the interaction picture yields theoretical line shapes that are in better agreement with exact results.
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1977-01-01
Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
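For context, the baseline that fluctuating-plume formulations extend is the standard time-average Gaussian plume from an elevated point source. The sketch below is that textbook formula with illustrative numbers (Q, u, H, and the dispersion parameters σy, σz are all assumptions, not values from the paper):

```python
import math

def gaussian_plume(Q, u, y, z, H, sig_y, sig_z):
    """Time-average concentration downwind of an elevated point source
    (Gaussian plume with ground reflection); x-dependence enters via sig_y, sig_z."""
    lateral = math.exp(-y * y / (2 * sig_y ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2 * sig_z ** 2))
                + math.exp(-(z + H) ** 2 / (2 * sig_z ** 2)))
    return Q / (2 * math.pi * u * sig_y * sig_z) * lateral * vertical

# Ground-level, centerline concentration for an illustrative release
c = gaussian_plume(Q=1.0, u=5.0, y=0.0, z=0.0, H=50.0, sig_y=30.0, sig_z=20.0)
```

In the estimation-theory framing above, the plume parameters (source strength, effective height, dispersion coefficients) become components of the state vector to be estimated from air quality measurements.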
Empirical wind model for the middle and lower atmosphere. Part 1: Local time average
NASA Technical Reports Server (NTRS)
Hedin, A. E.; Fleming, E. L.; Manson, A. H.; Schmidlin, F. J.; Avery, S. K.; Franke, S. J.
1993-01-01
The HWM90 thermospheric wind model was revised in the lower thermosphere and extended into the mesosphere and lower atmosphere to provide a single analytic model for calculating zonal and meridional wind profiles representative of the climatological average for various geophysical conditions. Gradient winds from CIRA-86 plus rocket soundings, incoherent scatter radar, MF radar, and meteor radar provide the data base and are supplemented by previous data driven model summaries. Low-order spherical harmonics and Fourier series are used to describe the major variations throughout the atmosphere including latitude, annual, semiannual, and longitude (stationary wave 1). The model represents a smoothed compromise between the data sources. Although agreement between various data sources is generally good, some systematic differences are noted, particularly near the mesopause. Root mean square differences between data and model are on the order of 15 m/s in the mesosphere and 10 m/s in the stratosphere for zonal wind, and 10 m/s and 4 m/s, respectively, for meridional wind.
Kodra, Evan A; Chatterjee, Snigdhansu; Ganguly, Auroop R
2010-01-01
Detection and attribution methodologies have been developed over the years to delineate anthropogenic from natural drivers of climate change and impacts. A majority of prior attribution studies, which have used climate model simulations and observations or reanalysis datasets, have found evidence for human-induced climate change. This paper tests the hypothesis that Granger causality can be extracted from the bivariate series of globally averaged land surface temperature (GT) observations and observed CO2 in the atmosphere using a reverse cumulative Granger causality test. This proposed extension of the classic Granger causality test is better suited to handle the multisource nature of the data and provides further statistical rigor. The results from this modified test show evidence for Granger causality from a proxy of total radiative forcing (RC), which in this case is a transformation of atmospheric CO2, to GT. Prior literature failed to extract these results via the standard Granger causality test. A forecasting test shows that a holdout set of GT can be better predicted with the addition of lagged RC as a predictor, lending further credibility to the Granger test results. However, since second-order-differenced RC is neither normally distributed nor variance stationary, caution should be exercised in the interpretation of our results.
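The classic bivariate Granger test that the paper extends compares a restricted autoregression of the target series against an unrestricted one that adds lags of the candidate driver. A minimal sketch on synthetic series (not the paper's reverse cumulative variant, and not its data):

```python
import numpy as np

def granger_f(y, x, lag):
    """F statistic: do lagged values of x improve an autoregression of y?
    (Plain bivariate Granger test via restricted vs. unrestricted OLS.)"""
    n = len(y)
    Y = y[lag:]
    own = np.column_stack([y[lag - k: n - k] for k in range(1, lag + 1)])
    other = np.column_stack([x[lag - k: n - k] for k in range(1, lag + 1)])
    ones = np.ones((len(Y), 1))
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r = rss(np.hstack([ones, own]))            # y lags only
    rss_u = rss(np.hstack([ones, own, other]))     # plus x lags
    df = len(Y) - (1 + 2 * lag)
    return ((rss_r - rss_u) / lag) / (rss_u / df)

# Synthetic example: x drives y with a one-step lag
rng = np.random.default_rng(42)
x = np.zeros(500); y = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.7 * x[t - 1] + rng.normal(0, 1)
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal(0, 1)

f_fwd = granger_f(y, x, lag=2)   # x -> y: large F
f_rev = granger_f(x, y, lag=2)   # y -> x: near chance level
```

The forward statistic is far above any conventional critical value while the reverse one is not, the asymmetry that a Granger-style analysis of the RC-to-GT direction is designed to detect.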
ERIC Educational Resources Information Center
Jackson, Delores J.
2010-01-01
The purpose of this study was to determine if admission selection strategies that utilize cumulative and/or pre-requisite GPA are predictive of timely graduation for associate degree nursing (RN-AD) students. Data were obtained from de-identified records of 437 associate degree nursing students enrolled in three Midwest community colleges from…
ERIC Educational Resources Information Center
Heffez, Jack
To determine what effects employment will have on high school students' grade point averages and rate of school attendance, the author involved fifty-six students in an experiment. Twenty-eight students were employed part-time under the Youth Incentive Entitlement Project (YIEP). The twenty-eight students in the control group were eligible for…
NASA Astrophysics Data System (ADS)
Kumar, T.; Cahay, M.; Roenker, K.
1996-11-01
The average base transit time is computed using a current impulse response technique for three typical abrupt Npn heterojunction bipolar transistors as a function of the emitter-base bias, VBE. This technique is based on a hybrid model of carrier transport incorporating a quantum-mechanical analysis of carrier injection at the emitter-base junction and a Monte Carlo analysis of base transport. For typical AlGaAs/GaAs and InP/InGaAs structures, the base transit time first increases with VBE, reaches a maximum, and then decreases towards a value close to the one predicted using a semi-Maxwellian injection of carriers into the base at an energy equal to the emitter-base conduction band spike. For a typical InAlAs/InGaAs structure, the average base transit time is found to decrease with an increase in VBE. For all structures, we show that there is a correlation between the bias dependence of the average base transit time and the bias dependence of the average number of collisions per carrier (calculated for carriers transmitted across the base).
ERIC Educational Resources Information Center
Doerann-George, Judith
The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations; normal,…
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 6 2010-07-01 2010-07-01 false How do I convert my 1-hour arithmetic... SOURCES Operator Training and Qualification Monitoring § 60.2943 How do I convert my 1-hour arithmetic... emissions at 7 percent oxygen. (b) Use Equation 2 in § 60.2975 to calculate the 12-hour rolling averages...
The Rural Turnaround in Ohio: Some Evidence Related to Implications.
ERIC Educational Resources Information Center
Thomas, Donald W.; Bachtel, Douglas C.
Limited to residents of small towns, villages and the rural open country, the study assessed the implications of the rural turnaround in the southern Ohio counties of Athens, Gallia, Jackson, Meigs, and Vinton. All five counties experienced outmigration in the 1950s, and all but Athens County lost population through outmigration in the 1960s. In…
The BBSome controls IFT assembly and turnaround in cilia.
Wei, Qing; Zhang, Yuxia; Li, Yujie; Zhang, Qing; Ling, Kun; Hu, Jinghua
2012-09-01
The bidirectional movement of intraflagellar transport (IFT) particles, which are composed of motors, IFT-A and IFT-B subcomplexes, and cargoes, is required for the biogenesis and signalling of cilia(1,2). A successful IFT cycle depends on the proper assembly of the massive IFT particle at the ciliary base and its turnaround from anterograde to retrograde transport at the ciliary tip. However, how IFT assembly and turnaround are regulated in vivo remains elusive. From a whole-genome mutagenesis screen in Caenorhabditis elegans, we identified two hypomorphic mutations in dyf-2 and bbs-1 as the only mutants showing normal anterograde IFT transport but defective IFT turnaround at the ciliary tip. Further analyses revealed that the BBSome (refs 3, 4), a group of conserved proteins affected in human Bardet-Biedl syndrome(5) (BBS), assembles IFT complexes at the ciliary base, then binds to the anterograde IFT particle in a DYF-2- (an orthologue of human WDR19) and BBS-1-dependent manner, and lastly reaches the ciliary tip to regulate proper IFT recycling. Our results identify the BBSome as the key player regulating IFT assembly and turnaround in cilia. PMID:22922713
Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround
ERIC Educational Resources Information Center
Peurach, Donald J.; Neumerski, Christine M.
2015-01-01
The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…
129. Back side technical facilities passageways between turnaround trans. & ...
129. Back side technical facilities passageways between turnaround trans. & building no. 104, "plans, sections & elevations" - architectural, AS-BLT AW 36-25-13, sheet 9 of 40, dated 23 November, 1960. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
The Turnaround Mindset: Aligning Leadership for Student Success
ERIC Educational Resources Information Center
Fairchild, Tierney Temple; DeMary, Jo Lynne
2011-01-01
This book provides a valuable balance between what one must know and what one must do to turn around low-performing schools. The 3-E framework simplifies this complex process by focusing resources on the environment, the executive, and the execution of the turnaround plan. Central to each of these components is a spotlight on the values supporting…
Turnaround: Leading Stressed Colleges and Universities to Excellence
ERIC Educational Resources Information Center
Martin, James; Samels, James E.
2008-01-01
Nearly one thousand colleges and universities in the United States face major challenges--from catastrophic hurricanes to loss of accreditation to sagging enrollment. What can leaders of such at-risk institutions do to improve their situation? "Turnaround" gives college and university leaders the tools they need to put their fragile institutions…
Turnaround and Closure Rates in the Charter and District Sectors
ERIC Educational Resources Information Center
Stuit, David
2012-01-01
This article examines the turnaround and closure rates of charter and district schools in 10 states to determine if one school sector has done comparatively better than the other at addressing chronically low-performing schools. Low-performing charter and district schools were identified in 2003-04 based on their reading and math proficiency rates…
California Turnaround Schools: An Analysis of School Improvement Grant Effectiveness
ERIC Educational Resources Information Center
Graham, Khalil N.
2013-01-01
The purpose of this study was to evaluate the effectiveness of School Improvement Grants (SIGs) in the state of California (CA) in increasing student achievement using the turnaround implementation model. The American Recovery and Reinvestment Act of 2009 (ARRA) included educational priorities focused on fixing America's lowest achieving schools.…
Democratic School Turnarounds: Pursuing Equity and Learning from Evidence
ERIC Educational Resources Information Center
Trujillo, Tina; Renée, Michelle
2013-01-01
The report "Democratic School Turnarounds" considers the democratic tensions inherent in the federal School Improvement Grant (SIG) policy's market-based school reforms and critiques the research base underlying many of these reforms. It concludes with a set of recommendations that re-center the purposes of public education…
Negotiation Areas for "Transformation" and "Turnaround" Intervention Models
ERIC Educational Resources Information Center
Mass Insight Education (NJ1), 2011
2011-01-01
To receive School Improvement Grant (SIG) funding, districts must submit an application to the state that outlines their strategic plan to implement one of four intervention models in their persistently lowest-achieving schools. The four intervention models include: (1) School Closure; (2) Restart; (3) Turnaround; and (4) Transformation. The…
Policy Perspective: School Turnaround in England. Utilizing the Private Sector
ERIC Educational Resources Information Center
Corbett, Julie
2014-01-01
This paper, written by strategic partner of the Center on School Turnaround (CST), Julie Corbett, provides research and examples on England's approach to turning around its lowest performing schools. The English education system utilizes private vendors to support chronically low-performing schools and districts. The introduction is followed by…
Time series analysis to monitor and assess water resources: a moving average approach.
Reghunath, Rajesh; Murthy, T R Sreedhara; Raghavan, B R
2005-10-01
An understanding of the behavior of the groundwater body and its long-term trends are essential for making any management decision in a given watershed. Geostatistical methods can effectively be used to derive the long-term trends of the groundwater body. Here an attempt has been made to find out the long-term trends of the water table fluctuations of a river basin through a time series approach. The method was found to be useful for demarcating the zones of discharge and of recharge of an aquifer. The recharge of the aquifer is attributed to the return flow from applied irrigation. In the study area, farmers mainly depend on borewells for water and water is pumped from the deep aquifer indiscriminately. The recharge of the shallow aquifer implies excessive pumping of the deep aquifer. Necessary steps have to be taken immediately at appropriate levels to control the irrational pumping of deep aquifer groundwater, which is needed as a future water source. The study emphasizes the use of geostatistics for the better management of water resources and sustainable development of the area. PMID:16240189
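The moving-average smoothing described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' exact procedure; the 12-month window and the synthetic water-table series are assumptions chosen for demonstration.

```python
import numpy as np

def moving_average(series, window):
    """Centered moving average; output is shorter by window - 1 points."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

# Illustrative monthly water-table depths (m below ground) with a slow trend
rng = np.random.default_rng(0)
months = np.arange(120)
depths = 5.0 + 0.01 * months + rng.normal(0, 0.3, size=months.size)

# A 12-month window suppresses seasonal fluctuation, exposing the trend
trend = moving_average(depths, window=12)
```

Comparing the first and last values of `trend` then indicates whether the water table is rising or falling over the record, the kind of long-term behavior the study uses to demarcate recharge and discharge zones.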
Quantifying the Average of the Time-varying Hazard Ratio via a Class of Transformations
CHEN, QINGXIA; ZENG, DONGLIN; IBRAHIM, JOSEPH G.; CHEN, MING-HUI; PAN, ZHIYING; XUE, XIAODONG
2014-01-01
The hazard ratio derived from the Cox model is a commonly used summary statistic to quantify a treatment effect with a time-to-event outcome. The proportional hazards assumption of the Cox model, however, is frequently violated in practice and many alternative models have been proposed in the statistical literature. Unfortunately, the regression coefficients obtained from different models are often not directly comparable. To overcome this problem, we propose a family of weighted hazard ratio measures that are based on the marginal survival curves or marginal hazard functions, and can be estimated using readily available output from various modeling approaches. The proposed transformation family includes the transformations considered by [18] as special cases. In addition, we propose a novel estimate of the weighted hazard ratio based on the maximum departure from the null hypothesis within the transformation family, and develop a Kolmogorov–Smirnov type of test statistic based on this estimate. Simulation studies show that when the hazard functions of two groups either converge or diverge, this new estimate yields a more powerful test than tests based on the individual transformations recommended in [18], with a similar magnitude of power loss when the hazards cross. The proposed estimates and test statistics are applied to a colorectal cancer clinical trial. PMID:25073864
ERIC Educational Resources Information Center
Murphy, Joseph; Meyers, Coby V.
2009-01-01
In this article, we provide a grounded narrative of capacity building in the turnaround equation by exploring the turnaround literature outside of education and applying it to troubled schools. Our analysis is based upon reviews of: (1) 14 comprehensive, historical volumes that examine the turnaround phenomenon; (2) 16 book-length analyses of…
NASA Astrophysics Data System (ADS)
Uneyama, Takashi; Miyaguchi, Tomoshige; Akimoto, Takuma
2015-09-01
The mean-square displacement (MSD) is widely utilized to study the dynamical properties of stochastic processes. The time-averaged MSD (TAMSD) provides some information on the dynamics which cannot be extracted from the ensemble-averaged MSD. In particular, the relative standard deviation (RSD) of the TAMSD can be utilized to study the long-time relaxation behavior. In this work, we consider a class of Langevin equations which are multiplicatively coupled to time-dependent and fluctuating diffusivities. Various interesting dynamics models such as entangled polymers and supercooled liquids can be interpreted as the Langevin equations with time-dependent and fluctuating diffusivities. We derive a general formula for the RSD of the TAMSD for the Langevin equation with the time-dependent and fluctuating diffusivity. We show that the RSD can be expressed in terms of the correlation function of the diffusivity. The RSD exhibits a crossover in the long-time region. The crossover time is related to a weighted average relaxation time for the diffusivity. Thus the crossover time gives some information on the relaxation time of the fluctuating diffusivity which cannot be extracted from the ensemble-averaged MSD. We discuss the universality and possible applications of the formula via some simple examples.
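The TAMSD and its RSD across trajectories can be computed numerically as below. This sketch uses ordinary Brownian motion rather than the fluctuating-diffusivity Langevin models of the abstract; trajectory counts and the lag are illustrative assumptions.

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged MSD of a single trajectory x at a given lag (steps)."""
    disp = x[lag:] - x[:-lag]
    return np.mean(disp ** 2)

rng = np.random.default_rng(1)
n_traj, n_steps, lag = 200, 1000, 10

# Ordinary Brownian motion: cumulative sums of unit-variance increments
trajs = np.cumsum(rng.normal(size=(n_traj, n_steps)), axis=1)

values = np.array([tamsd(x, lag) for x in trajs])
rsd = values.std() / values.mean()  # relative standard deviation of the TAMSD
```

For Brownian motion the mean TAMSD at lag Δ approaches Δ (in these units), and the RSD decays with measurement time T, roughly as sqrt(4Δ/3T); a nonvanishing long-time plateau of the RSD is the signature of fluctuating diffusivity that the paper exploits.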
Apollo/Saturn 5 space vehicle countdown. Volume 2: Turnaround from scrub
NASA Technical Reports Server (NTRS)
1972-01-01
The procedures required to prepare a space vehicle for subsequent launch attempt after cancelling lift-off activities are presented. The cancellation must occur after the start of cryogenic loading, but prior to initiation of ignition sequences. The sequence of operations necessary to return the space vehicle to a predetermined configuration at which time the launch count can be resumed or rescheduled for following launch opportunities is developed. The considerations and constraints that are the basis for the scrub/turnaround times are described.
Average-atom treatment of relaxation time in x-ray Thomson scattering from warm dense matter
NASA Astrophysics Data System (ADS)
Johnson, W. R.; Nilsen, J.
2016-03-01
The influence of finite relaxation times on Thomson scattering from warm dense plasmas is examined within the framework of the average-atom approximation. Presently most calculations use the collision-free Lindhard dielectric function to evaluate the free-electron contribution to the Thomson cross section. In this work, we use the Mermin dielectric function, which includes relaxation time explicitly. The relaxation time is evaluated by treating the average atom as an impurity in a uniform electron gas and depends critically on the transport cross section. The calculated relaxation rates agree well with values inferred from the Ziman formula for the static conductivity and also with rates inferred from a fit to the frequency-dependent conductivity. Transport cross sections determined by the phase-shift analysis in the average-atom potential are compared with those evaluated in the commonly used Born approximation. The Born approximation converges to the exact cross sections at high energies; however, differences that occur at low energies lead to corresponding differences in relaxation rates. The relative importance of including relaxation time when modeling x-ray Thomson scattering spectra is examined by comparing calculations of the free-electron dynamic structure function for Thomson scattering using Lindhard and Mermin dielectric functions. Applications are given to warm dense Be plasmas, with temperatures ranging from 2 to 32 eV and densities ranging from 2 to 64 g/cc.
Radke, Christopher D; McManamen, J Patrick; Kastengren, Alan L; Halls, Benjamin R; Meyer, Terrence R
2015-05-01
A method for quantitative measurements of gas and liquid distributions is demonstrated using simultaneous x-ray fluorescence and radiography of both phases in an atomizing coaxial spray. Synchrotron radiation at 10.1 keV from the Advanced Photon Source at Argonne National Laboratory is used for x-ray fluorescence of argon gas and two tracer elements seeded into the liquid stream. Simultaneous time-resolved x-ray radiography combined with time-averaged dual-tracer fluorescence measurements enabled corrections for reabsorption of x-ray fluorescence photons for accurate, line-of-sight averaged measurements of the distribution of the gas and liquid phases originating from the atomizing nozzle. PMID:25927776
Time-resolved and time-averaged stereo-PIV measurements of a unit-ratio cavity
NASA Astrophysics Data System (ADS)
Immer, Marc; Allegrini, Jonas; Carmeliet, Jan
2016-06-01
An experimental setup was developed to perform wind tunnel measurements on a unit-ratio, 2D open cavity under perpendicular incident flow. The open cavity is characterized by a mixing layer at the cavity top, that divides the flow field into a boundary layer flow and a cavity flow. Instead of precisely replicating a specific type of inflow, such as a turbulent flat plate boundary layer or an atmospheric boundary layer, the setup is capable of simulating a wide range of inflow profiles. This is achieved by using triangular spires as upstream turbulence generators, which can modify the otherwise laminar inflow boundary layer to be moderately turbulent and stationary, or heavily turbulent and intermittent. Measurements were performed by means of time-resolved stereo PIV. The cavity shear layer is analyzed in detail using flow statistics, spectral analysis, and space-time plots. The ability of the setup to generate typical cavity flow cases is demonstrated for characteristic inflow boundary layers, laminar and turbulent. Each case is associated with a distinct shear-layer flow phenomenon: self-sustained oscillations for the former and Kelvin-Helmholtz instabilities for the latter. Additionally, large spires generate a highly turbulent wake flow, resulting in a significantly different cavity flow. Large turbulent sweep and ejection events in the wake flow suppress the typical shear layer and sporadic near wall sweep events generate coherent vortices at the upstream edge.
Lepere, A. J.; Slack-Smith, L. M.
2002-01-01
Intravenous sedation has been used in dentistry for many years because of its perceived advantages over general anesthesia, including shorter recovery times. However, there is limited literature available on recovery from intravenous dental sedation, particularly in the private general practice setting. The aim of this study was to describe the recovery times when sedation was conducted in private dental practice and to consider this in relation to age, weight, procedure type, and procedure time. The data were extracted from the intravenous sedation records available with 1 general anesthesia-trained dental practitioner who provides ambulatory sedation services to a number of private general dental practices in the Perth, Western Australia Metropolitan Area. Standardized intravenous sedation techniques as well as clear standardized discharge criteria were utilized. The sedatives used were fentanyl, midazolam, and propofol. Results from 85 patients produced an average recovery time of 19 minutes. Recovery time was not associated with the type or length of dental procedures performed. PMID:15384295
Leclercq, Mathieu; Karray, Mayssa; Isnard, Vincent; Gautier, François; Picart, Pascal
2013-01-01
This paper proposes a first attempt to visualize and analyze the vibrations induced by a bone-conduction device and propagating at the surface of the skin of a human face. The method is based on a new approach in a so-called quasi-time-averaging regime, resulting in the retrieval of the vibration amplitude and phase from a sequence of digital Fresnel holograms recorded with a high image rate. The design of the algorithm depends on the ratio between the exposure time and the vibration period. The results show the propagation of vibrations at the skin surface, and quantitative analysis is achieved by the proposed approach. PMID:23292386
NASA Astrophysics Data System (ADS)
Qiu, Long-Qing; Liu, Chao; Dong, Hui; Xu, Lu; Zhang, Yi; Hans-Joachim, Krause; Xie, Xiao-Ming; Andreas, Offenhäusser
2012-10-01
Using a second-order helium-cooled superconducting quantum interference device gradiometer as the detector, ultra-low-field nuclear magnetic resonance (ULF-NMR) signals of protons are recorded in an urban environment without magnetic shielding. The homogeneity and stability of the measurement field are investigated. NMR signals of protons are studied at night and during working hours. The Larmor frequency variation caused by the fluctuation of the external magnetic field during daytime reaches around 5 Hz when performing multiple measurements for about 10 min, which seriously affects the results of averaging. In order to improve the performance of the averaged data, we suggest the use of a data processor, i.e. the so-called time-domain frequency correction (TFC). For a 50-times averaged signal spectrum, the signal-to-noise ratio is enhanced from 30 to 120 when applying TFC while preserving the NMR spectrum linewidth. The TFC is also applied successfully to the measurement data of the hetero-nuclear J-coupling in 2,2,2-trifluoroethanol.
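The time-domain frequency correction (TFC) idea, realigning each scan's drifting Larmor frequency before averaging, can be sketched as follows. This is an illustrative reconstruction, not the authors' published algorithm; the sampling rate, drift range, decay time, and noise level are all assumptions.

```python
import numpy as np

fs, n = 1000.0, 4096               # sampling rate (Hz) and samples per scan
t = np.arange(n) / fs
rng = np.random.default_rng(2)

def correct_and_average(scans, f_ref):
    """Estimate each scan's dominant frequency from its spectrum, remove
    the drift by time-domain demodulation, then average the aligned scans."""
    freqs = np.fft.fftfreq(n, 1 / fs)
    corrected = []
    for s in scans:
        f_peak = freqs[np.argmax(np.abs(np.fft.fft(s)))]
        corrected.append(s * np.exp(-2j * np.pi * (f_peak - f_ref) * t))
    return np.mean(corrected, axis=0)

# 50 noisy scans of a decaying oscillation whose frequency drifts by a few Hz,
# mimicking Larmor-frequency fluctuation between repeated measurements
scans = [
    np.exp(2j * np.pi * (100.0 + rng.uniform(-2.5, 2.5)) * t) * np.exp(-t)
    + 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))
    for _ in range(50)
]

avg = correct_and_average(scans, f_ref=100.0)
naive = np.mean(scans, axis=0)     # averaging without frequency correction
```

Without correction the drifting phases cancel and the averaged peak collapses; after demodulating each scan to a common reference frequency the coherent average retains the full peak, which is the SNR gain the paper reports.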
NASA Astrophysics Data System (ADS)
Sakhel, Asaad R.
2016-07-01
The dynamics of a Bose-Einstein condensate are examined numerically in the presence of a one-dimensional bichromatic optical lattice (BCOL) with external harmonic confinement in the strongly interacting regime. The condensate is excited by a focusing stirring red laser. Two realizations of the BCOL are considered, one with a rational and the other with an irrational ratio of the two constituting wavelengths. The system is simulated by the time-dependent Gross-Pitaevskii equation that is solved using the Crank-Nicolson method in real time. It is found that for a weak BCOL, the long-time averaged physical observables of the condensate respond only very weakly (or not at all) to changes in the secondary OL depth V1, showing that under these conditions the harmonic trap plays a dominant role in governing the dynamics. However, for a much larger strength of the BCOL, the response is stronger as it begins to compete with the external harmonic trap, such that the frequency of Bloch oscillations of the bosons rises with V1, yielding higher time-averages. Qualitatively there is no difference between the dynamics of the condensate resulting from the use of a rational or irrational ratio of the wavelengths, since the external harmonic trap washes out the distinction. It is further found that in the presence of an external harmonic trap, the BCOL acts in favor of superflow.
NASA Astrophysics Data System (ADS)
Jia, Hongwei; Zhao, Jun
2016-08-01
The output regulation problem of switched linear multi-agent systems with stabilisable and unstabilisable subsystems is investigated in this paper. A sufficient condition for the solvability of the problem is given. Owing to the characteristics of switched multi-agent systems, even if each agent has its own dwell time, the multi-agent systems, if viewed as an overall switched system, may not have a dwell time. To overcome this difficulty, we present a new approach, called an agent-dependent average dwell time method. Due to the limited information exchange between agents, a distributed dynamic observer network for agents is provided. Further, a distributed dynamic controller based on observer is designed. Finally, simulation results show the effectiveness of the proposed solutions.
Turnaround Aid Raising Hopes, Also Concerns
ERIC Educational Resources Information Center
Klein, Alyson
2009-01-01
As the U.S. Department of Education prepares to throw $3 billion in one-time money on the table to improve perennially foundering schools, a gulf is emerging between what federal officials would like to see done with the funds and what many districts say is their capacity--and inclination--to deliver. While some districts say the federal largess…
NASA Astrophysics Data System (ADS)
Chainakun, P.; Young, A. J.; Kara, E.
2016-08-01
General relativistic ray tracing simulations of the time-averaged spectrum and energy-dependent time delays in active galactic nuclei (AGN) are presented. We model the lamp-post geometry in which the accreting gas is illuminated by an X-ray source located on the rotation axis of the black hole. The spectroscopic features imprinted in the reflection component are modelled using REFLIONX. The associated time delays after the direct continuum, known as reverberation lags, are computed including the full effects of dilution and ionization gradients on the disc. We perform, for the first time, simultaneous fitting of the time-averaged and lag-energy spectra in three AGN: Mrk 335, IRAS 13224-3809 and Ark 564 observed with XMM-Newton. The best-fitting source height and central mass of each AGN partly agree with those previously reported. We find that including the ionization gradient in the model naturally explains lag-energy observations in which the 3 keV and 7-10 keV bands precede other bands. To obtain the clear 3 keV and 7-10 keV dips in the lag-energy profile, the model requires either a source height >5 rg, or a disc that is highly ionized at small radii and is colder further out. We also show that fitting the lag or the mean spectra alone can lead to different results and interpretations. This is therefore important to combine the spectral and timing data in order to find the plausible but self-consistent fits which are achievable with our model.
Aarts, B E; Convens, J; Bronkhorst, E M; Kuijpers-Jagtman, A M; Fudalej, P S
2015-12-01
Placement of a single-tooth implant should be performed when a patient's facial growth has ceased. In this retrospective observational study, we evaluated whether there was a difference in the timing of cessation of craniofacial growth in short, average, and long facial types. Based on the value of the angle between cranial base and mandibular plane (SN/MP angle), three groups comprising 48 subjects with short facial type (SF; SN/MP ≤28°), 77 with average facial type (AF; SN/MP ≥31.5° and ≤34.5°), and 44 with long facial type (LF; SN/MP ≥38°) were selected. Facial growth was assessed on lateral cephalograms taken at 15.4 years of age, and 2, 5, and 10 years later. Variables were considered to be stable when the difference between two successive measurements was less than 1 mm or 1°. We found no difference between facial types in the timing of cessation of facial growth. Depending on the variable, the mean age when variables became stable ranged from 18.0 years (Is-Pal in LF group) to 22.0 years (SN/MP in LF group). However, facial growth continued at the last follow-up in approximately 20% of subjects. This study demonstrates that facial type is not associated with the timing of cessation of facial growth. PMID:26548528
NASA Astrophysics Data System (ADS)
Ben-David, Avishai; Vanderbeek, Richard G.; Gotoff, Steven W.; D'Amico, Francis M.
1997-10-01
The effects of flight geometry, signal averaging and time-lag correlation coefficient on airborne CO2 DIAL lidar measurements are shown in simulations and field measurements. These factors have implications for multi-vapor measurements and also for measuring a single vapor with a wide absorption spectrum for which one would like to make DIAL measurements at many wavelengths across the absorption spectrum of the gas. Thus it is of interest to know how many wavelengths and how many groups of wavelengths can be used effectively in DIAL measurements. Our data indicate that for our lidar about 80 wavelengths can be used for DIAL measurements of a stationary vapor. The lidar signal is composed of fluctuations with three time scales: a very short time scale due to system noise which is faster than the data acquisition sampling rate of the receiver, a medium time scale due to atmospheric turbulence, and a long time scale due to slow atmospheric transmission drift from aerosol inhomogeneities. The decorrelation time scale of fluctuations for airborne lidar measurements depends on the flight geometry.
Yasugi, T; Kawai, T; Mizunuma, K; Horiguchi, S; Iguchi, H; Ikeda, M
1992-01-01
A diffusive sampling method with water as absorbent was examined in comparison with 3 conventional methods of diffusive sampling with carbon cloth as absorbent, pumping through National Institute of Occupational Safety and Health (NIOSH) charcoal tubes, and pumping through NIOSH silica gel tubes to measure the time-weighted average concentration of dimethylformamide (DMF). DMF vapors of constant concentrations at 3-110 ppm were generated by bubbling air at constant velocities through liquid DMF followed by dilution with fresh air. Both types of diffusive samplers could either absorb or adsorb DMF in proportion to time (0.25-8 h) and concentration (3-58 ppm), except that the DMF adsorbed was below the measurable amount when carbon cloth samplers were exposed at 3 ppm for less than 1 h. When both diffusive samplers were loaded with DMF and kept in fresh air, the DMF in water samplers stayed unchanged for at least 12 h. The DMF in carbon cloth samplers showed a decay with a half-time of 14.3 h. When the carbon cloth was taken out immediately after termination of DMF exposure, wrapped in aluminum foil, and kept refrigerated, however, there was no measurable decrease in DMF for at least 3 weeks. When the air was drawn at 0.2 l/min, a breakthrough of the silica gel tube took place at about 4,000 ppm·min (as the lower 95% confidence limit), whereas charcoal tubes could tolerate even heavier exposures, suggesting that both tubes are fit to measure the 8-h time-weighted average of DMF at 10 ppm. PMID:1577523
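The 8-h time-weighted average (TWA) these samplers report is the exposure-duration-weighted mean concentration. A minimal sketch of the standard computation follows; the concentrations and durations are hypothetical, and this uses the common convention of dividing by the total sampled time.

```python
def time_weighted_average(intervals):
    """TWA concentration from (concentration_ppm, duration_h) pairs."""
    total_exposure = sum(c * t for c, t in intervals)
    total_time = sum(t for _, t in intervals)
    return total_exposure / total_time

# Illustrative 8-h shift: 4 h at 12 ppm, 2 h at 6 ppm, 2 h at 3 ppm
twa = time_weighted_average([(12, 4), (6, 2), (3, 2)])  # 66 ppm·h / 8 h
```

A passive diffusive sampler performs this integration physically: the mass collected is proportional to the concentration-time product, so a single analysis yields the TWA directly.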
On the turn-around phenomenon in n-MOS transistors under NBTI conditions
NASA Astrophysics Data System (ADS)
Benabdelmoumene, A.; Djezzar, B.; Chenouf, A.; Tahi, H.; Zatout, B.; Kechouane, M.
2016-07-01
We have experimentally analyzed the negative bias temperature instability (NBTI) stress/recovery cycle on n-channel metal oxide semiconductor field effect transistors (n-MOSFETs). Data obtained by current-voltage (I-V) and charge pumping (CP) techniques have revealed a turn-around phenomenon in both threshold voltage shift (ΔVth) and maximum CP current shift (ΔICP-Max). This allows us to separate the evolution of interface traps (permanent) and near-interfacial oxide traps "border traps" (recoverable) as a function of the stress time. The ability of separation comes from the fact that interface and oxide traps induce opposite shifts in ΔVth. In contrast to NBTI/n-MOSFET, NBTI/p-MOSFET is unable to achieve trap separation because both trap types induce shifts in the same direction. Exploiting the turn-around effect, we have been able to follow the evolution of the degradation over the stress time. NBTI stress/relaxation cycle CP measurements on n-MOSFET have shown a progressive creation of traps, starting from the interface traps and extending to near-interfacial traps. This new and simple procedure will give a deeper insight into the dynamics of trap build-up under NBTI conditions.
NASA Astrophysics Data System (ADS)
Shonkwiler, K. B.; Ham, J. M.; Williams, C. M.
2013-12-01
Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances where such aerosols can deposit in sensitive regions, potentially causing harm to local ecosystems. However, quantifying the emissions of ammonia from CAFOs through direct measurement is very difficult and costly to perform. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured using inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are only deployed when wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data to modeled emissions from more sophisticated backward Lagrangian stochastic (bLs) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems were deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada) which continuously measured ammonia concentrations instantaneously over a 225-m path. This particular laser is utilized in agricultural settings, and in combination with a bLs model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. This study will first
NASA Technical Reports Server (NTRS)
Chung, William W.; Ingram, Carla D.; Ahlquist, Douglas Kurt; Chachad, Girish H.
2016-01-01
"Gate turnaround" plays a key role in National Airspace System (NAS) gate-to-gate performance: it receives aircraft when they reach their destination airport and delivers aircraft back into the NAS upon departure from the gate and subsequent takeoff. The time spent at the gate in meeting the planned departure time is influenced by many factors, often with considerable uncertainty: weather, early or late arrivals, disembarking and boarding of passengers, unloading and reloading of cargo, aircraft logistics and maintenance services, ground handling, traffic in ramp and movement areas during taxi-in and taxi-out, and departure queue management for takeoff are encountered on a daily basis. The Integrated Gate Turnaround Management (IGTM) concept leverages relevant historical data to support optimization of gate operations, including arrival, time at the gate, and departure, subject to constraints (e.g., available gates at arrival, ground crew and equipment for the gate turnaround, and over-capacity demand at departure), and collaborative decision making. The IGTM concept provides effective information services and decision tools to stakeholders, such as airline dispatchers, gate agents, airport operators, ramp controllers, and air traffic control (ATC) traffic managers and ground controllers, to mitigate uncertainties arising from both nominal and off-nominal airport gate operations. IGTM will give NAS stakeholders customized decision-making tools through a user interface (UI) by leveraging historical data (Big Data), net-enabled Air Traffic Management (ATM) live data, and analytics according to dependencies among NAS parameters, so that stakeholders can manage and optimize NAS performance in the gate-turnaround domain. The application will give stakeholders predictable results based on past and current NAS performance according to decision trees selected through the UI. The predictable results are generated based on analysis of the
Briët, Olivier J. T.; Amerasinghe, Priyanie H.; Vounatsou, Penelope
2013-01-01
Introduction With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions’ impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during “consolidation” and “pre-elimination” phases. Methods Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non-Gaussian, non-stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. Results The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. Conclusions G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low. PMID:23785448
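The observation-driven, negative-binomial character of these models can be conveyed with a minimal sketch (this is not the paper's GSARIMA implementation; the recursion, link function and parameter values below are simplified assumptions):

```python
import numpy as np

# Minimal observation-driven count model in the GARMA spirit: the log of the
# conditional mean follows an AR(1)-type recursion in the (shifted) log
# counts, and observations are negative binomial so the variance can exceed
# the mean.  Parameter values are illustrative, not fitted to any series.
def simulate_garma_nb(n, beta0=1.0, phi=0.6, dispersion=5.0, seed=0):
    rng = np.random.default_rng(seed)
    y = np.zeros(n, dtype=int)
    for t in range(1, n):
        mu = np.exp(beta0 + phi * np.log(y[t - 1] + 1))  # conditional mean
        # NB parameterised by size r and success probability p, mean r(1-p)/p
        r = dispersion
        p = r / (r + mu)
        y[t] = rng.negative_binomial(r, p)
    return y

series = simulate_garma_nb(200)
print(series[:10])
```

The point the abstract makes is visible here: the model never leaves the nonnegative integers, unlike a Gaussian approximation, which matters most when counts approach zero.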
Fang, Xin; Li, Runkui; Kan, Haidong; Bottai, Matteo; Fang, Fang
2016-01-01
Objective To demonstrate an application of Bayesian model averaging (BMA) with generalised additive mixed models (GAMM) and provide a novel modelling technique to assess the association between inhalable coarse particles (PM10) and respiratory mortality in time-series studies. Design A time-series study using regional death registry between 2009 and 2010. Setting 8 districts in a large metropolitan area in Northern China. Participants 9559 permanent residents of the 8 districts who died of respiratory diseases between 2009 and 2010. Main outcome measures Per cent increase in daily respiratory mortality rate (MR) per interquartile range (IQR) increase of PM10 concentration and corresponding 95% confidence interval (CI) in single-pollutant and multipollutant (including NOx, CO) models. Results The Bayesian model averaged GAMM (GAMM+BMA) and the optimal GAMM of PM10, multipollutants and principal components (PCs) of multipollutants showed comparable results for the effect of PM10 on daily respiratory MR, that is, one IQR increase in PM10 concentration corresponded to 1.38% vs 1.39%, 1.81% vs 1.83% and 0.87% vs 0.88% increase, respectively, in daily respiratory MR. However, GAMM+BMA gave slightly but noticeably wider CIs for the single-pollutant model (−1.09 to 4.28 vs −1.08 to 3.93) and the PCs-based model (−2.23 to 4.07 vs −2.03 to 3.88). The CIs of the multiple-pollutant model from the two methods are similar, that is, −1.12 to 4.85 versus −1.11 to 4.83. Conclusions The BMA method may represent a useful tool for modelling uncertainty in time-series studies when evaluating the effect of air pollution on fatal health outcomes. PMID:27531727
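The averaging idea itself can be sketched with BIC-based weights, a common approximation to posterior model probabilities; the wider intervals arise because the averaged variance adds the between-model spread. The numbers below are illustrative, and the paper's GAMM+BMA machinery is far richer:

```python
import numpy as np

# Hedged sketch of BIC-weighted Bayesian model averaging of an effect
# estimate: each candidate model contributes its coefficient, weighted by
# exp(-BIC/2) normalised across models.  The averaged variance includes the
# between-model term (beta_i - beta_bar)^2, which widens the interval.
def bma_effect(betas, ses, bics):
    betas, ses, bics = map(np.asarray, (betas, ses, bics))
    w = np.exp(-0.5 * (bics - bics.min()))
    w /= w.sum()
    beta_bar = np.sum(w * betas)
    var_bar = np.sum(w * (ses**2 + (betas - beta_bar) ** 2))
    return beta_bar, np.sqrt(var_bar)

# Illustrative per-IQR percent increases from three hypothetical models:
b, s = bma_effect([1.38, 1.81, 0.87], [1.30, 1.50, 1.55], [2100.0, 2102.5, 2104.0])
print(round(b, 2), round(s, 2))
```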
Fluid trajectory evaluation based on an ensemble-averaged cross-correlation in time-resolved PIV
NASA Astrophysics Data System (ADS)
Jeon, Young Jin; Chatellier, Ludovic; David, Laurent
2014-07-01
A novel multi-frame particle image velocimetry (PIV) method, able to evaluate a fluid trajectory by means of an ensemble-averaged cross-correlation, is introduced. The method integrates the advantages of state-of-the-art time-resolved PIV (TR-PIV) methods to further enhance both robustness and dynamic range. The fluid trajectory follows a polynomial model of prescribed order. A set of polynomial coefficients, which maximizes the ensemble-averaged cross-correlation value across the frames, is regarded as the most appropriate solution. To achieve convergence of the trajectory in terms of polynomial coefficients, an ensemble-averaged cross-correlation map is constructed by sampling cross-correlation values near the predictor trajectory with respect to an imposed change of each polynomial coefficient. A relation between the given change and the corresponding cross-correlation maps, which can be calculated from the ordinary cross-correlation, is derived. A disagreement between the computational domain and the corresponding physical domain is compensated by introducing the Jacobian matrix based on the image-deformation scheme in accordance with the trajectory. The increased cost of the convergence calculation, associated with the nonlinearity of the fluid trajectory, is moderated by means of a V-cycle iteration. To validate the enhancements of the present method, quantitative comparisons with state-of-the-art TR-PIV methods, e.g., the adaptive temporal interval, the multi-frame pyramid correlation and the fluid trajectory correlation, were carried out using synthetically generated particle image sequences. The performances of the tested methods are discussed in algorithmic terms. A high-rate TR-PIV experiment of a flow over an airfoil demonstrates the effectiveness of the present method. It is shown that the present method is capable of reducing random errors in both velocity and material acceleration while suppressing spurious temporal fluctuations due to measurement noise.
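The core idea of averaging correlation maps over several frame pairs before locating the peak can be sketched for the simplest case of a constant displacement (the polynomial-trajectory sampling and Jacobian correction of the actual method are omitted; the synthetic images below are an assumption for illustration):

```python
import numpy as np

# Simplified illustration of ensemble-averaged cross-correlation: instead of
# locating a displacement peak in each frame pair separately, correlation
# maps from several pairs are averaged first, which reinforces the true peak
# and suppresses uncorrelated noise.
def correlation_map(a, b):
    fa = np.fft.fft2(a - a.mean())
    fb = np.fft.fft2(b - b.mean())
    return np.fft.fftshift(np.fft.ifft2(fa.conj() * fb).real)

rng = np.random.default_rng(1)
size, shift = 32, 3                       # particles move 3 px/frame in x
base = rng.random((size, size))
frames = [np.roll(base, shift * k, axis=1) + 0.1 * rng.random((size, size))
          for k in range(4)]

avg = sum(correlation_map(frames[i], frames[i + 1]) for i in range(3)) / 3
dy, dx = np.unravel_index(np.argmax(avg), avg.shape)
print(dx - size // 2, dy - size // 2)     # recovered displacement per frame
```

Averaging three maps here plays the same noise-suppressing role that the ensemble average over a trajectory plays in the paper's method.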
Long, Leroy L.; Srinivasan, Manoj
2013-01-01
On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk–run mixture at intermediate speeds and a walk–rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients—a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk–run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill. PMID:23365192
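The non-convexity argument can be reproduced numerically with a toy metabolic-rate curve: when the cost curve e(v) is non-convex, time-sharing two speeds lies on a chord below the curve, so a walk-run mixture beats holding the required average speed. The curve and all numbers below are invented for illustration and are not the paper's measured costs:

```python
import numpy as np

# Toy non-convex metabolic rate (energy per unit time) with an efficient
# "walk" branch near 1.2 m/s and an efficient "run" branch near 3.0 m/s.
def met_rate(v):
    walk = 2.0 + 3.0 * (v - 1.2) ** 2
    run = 4.0 + 1.0 * (v - 3.0) ** 2
    return np.minimum(walk, run)

# Cover distance D in total time T with a two-speed mixture: spend t1 at v1
# and t2 = T - t1 at v2, minimising total energy t1*e(v1) + t2*e(v2).
def best_mixture(D, T, vgrid):
    best = (float("inf"), None)
    for v1 in vgrid:
        for v2 in vgrid:
            if not (v1 * T <= D <= v2 * T):      # need v1 <= D/T <= v2
                continue
            t1 = (v2 * T - D) / (v2 - v1) if v2 > v1 else T
            t2 = T - t1
            cost = t1 * met_rate(v1) + t2 * met_rate(v2)
            if cost < best[0]:
                best = (cost, (v1, v2, t1, t2))
    return best

vgrid = np.linspace(0.5, 4.0, 71)
cost_mix, plan = best_mixture(D=200.0, T=100.0, vgrid=vgrid)
cost_steady = 100.0 * met_rate(200.0 / 100.0)    # hold 2 m/s throughout
print(cost_mix < cost_steady)                    # mixture beats steady pace
```

With a convex cost curve the steady speed would always win; the mixture's advantage is exactly the signature of non-convexity the abstract describes.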
NASA Astrophysics Data System (ADS)
Sánchez-Duque, A.; Mejia, V.; Opdyke, N. D.; Huang, K.; Rosales-Rivera, A.
2016-02-01
Paleomagnetic results obtained from 47 Plio-Pleistocene volcanic flows from the Ruiz-Tolima Volcanic Chain (Colombia) are presented. The mean direction of magnetization among these flows, which comprise normal (n = 43) and reversed (n = 4) polarities, is Dec = 1.8°, Inc = 3.2°, α95 = 5.0°, and κ = 18.4. This direction of magnetization coincides with GAD plus a small persistent axial quadrupolar component (around 5%) at the site-average latitude (4.93°). This agreement is robust after applying several selection criteria (α95 < 10°; α95 < 5.5°; polarities: normal, reversed, and tentatively transitional). The data are in agreement with Model G proposed by McElhinny and McFadden (1997), and the fit is improved when sites tentatively identified as transitional (two that otherwise have normal polarity) are excluded from the calculations. The compliance observed with the above-mentioned time-averaged field and paleosecular variation models is also observed in many recent similar studies from low latitudes, with the exception of results from the Galapagos Islands, which coincide with GAD and tend to be near-sided.
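The reported Dec, Inc, α95 and κ are standard Fisher statistics. A sketch of how they are computed from site-mean directions (the demo directions are synthetic, not the paper's data):

```python
import numpy as np

# Fisher (1953) statistics for a set of directions given as declination and
# inclination in degrees: sum the unit vectors, and the resultant length R
# gives the precision parameter kappa and the 95% confidence cone alpha95.
def fisher_mean(decs_deg, incs_deg):
    d, i = np.radians(decs_deg), np.radians(incs_deg)
    x = np.cos(i) * np.cos(d)
    y = np.cos(i) * np.sin(d)
    z = np.sin(i)
    n = len(d)
    rx, ry, rz = x.sum(), y.sum(), z.sum()
    r = np.sqrt(rx**2 + ry**2 + rz**2)
    dec = np.degrees(np.arctan2(ry, rx)) % 360
    inc = np.degrees(np.arcsin(rz / r))
    kappa = (n - 1) / (n - r)
    a95 = np.degrees(np.arccos(
        1 - (n - r) / r * ((1 / 0.05) ** (1 / (n - 1)) - 1)))
    return dec, inc, kappa, a95

# Four tightly clustered synthetic directions around Dec = 0, Inc = 45:
dec, inc, kappa, a95 = fisher_mean(np.array([359.0, 0.0, 1.0, 0.0]),
                                   np.array([44.0, 45.0, 46.0, 45.0]))
print(round(inc, 1))  # inclination close to 45
```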
NASA Astrophysics Data System (ADS)
Shimada, Ken-ichi; Ishii, Toshiki; Hoshizawa, Taku; Takashima, Yuzuru
2015-09-01
Mechanical instabilities during recording, and the associated degradation of signal quality, in a high-density, high-data-transfer-rate holographic data storage system (HDSS) are among the obstacles preventing the technology from becoming a stable system. We analytically formulated the effects of mechanical instabilities of a galvano mirror and spindle motor on the HDSS by incorporating the concept of time-averaged holography. Mechanical parameters such as the amplitude and frequency of mechanical oscillation are related to optical parameters such as the amplitude and phase of the reference and signal beams. In particular, the analytical formulation led to a new method of optical post-compensation for mechanical instability during hologram recording. The optical post-compensation method enables a robust implementation of HDSS against mechanical instabilities.
Zhao, Xudong; Yin, Yunfei; Niu, Ben; Zheng, Xiaolong
2016-08-01
In this paper, the problem of switching stabilization for a class of switched nonlinear systems is studied by using average dwell time (ADT) switching, where the subsystems are possibly all unstable. First, a new concept of ADT is given, which is different from the traditional definition of ADT. Based on the new proposed switching signals, a sufficient condition of stabilization for switched nonlinear systems with unstable subsystems is derived. Then, the T-S fuzzy modeling approach is applied to represent the underlying nonlinear system to make the obtained condition easily verified. A novel multiple quadratic Lyapunov function approach is also proposed, by which some conditions are provided in terms of a set of linear matrix inequalities to guarantee the derived T-S fuzzy system to be asymptotically stable. Finally, a numerical example is given to demonstrate the effectiveness of our developed results. PMID:26316288
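The ADT constraint itself is easy to state and check numerically. A sketch using the traditional bound N(t0, t) ≤ N0 + (t − t0)/τa (the paper's new ADT concept differs from this definition; the switching signals below are invented):

```python
# Check whether a switching signal satisfies an average dwell time tau_a with
# chatter bound N0: on every interval [t0, t], the number of switches must
# obey N(t0, t) <= N0 + (t - t0) / tau_a.  Checking at the event times is
# enough for this sketch, since violations are worst on such intervals.
def satisfies_adt(switch_times, t_end, tau_a, n0=1.0):
    pts = [0.0] + list(switch_times) + [t_end]
    for a in range(len(pts)):
        for b in range(a + 1, len(pts)):
            t0, t = pts[a], pts[b]
            n_switches = sum(1 for s in switch_times if t0 < s <= t)
            if n_switches > n0 + (t - t0) / tau_a:
                return False
    return True

print(satisfies_adt([1.0, 2.5, 4.0], t_end=5.0, tau_a=1.0))      # True
print(satisfies_adt([1.0, 1.1, 1.2, 1.3], t_end=5.0, tau_a=1.0))  # False
```

Signals that switch slowly on average pass; a burst of rapid switches, as in the second call, violates the bound even though the long-run rate is low.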
Time-averaged distributions of solute and solvent motions: exploring proton wires of GFP and PfM2DH.
Velez-Vega, Camilo; McKay, Daniel J J; Aravamuthan, Vibhas; Pearlstein, Robert; Duca, José S
2014-12-22
Proton translocation pathways of selected variants of the green fluorescent protein (GFP) and Pseudomonas fluorescens mannitol 2-dehydrogenase (PfM2DH) were investigated via an explicit solvent molecular dynamics-based analysis protocol that allows for a direct quantitative relationship between a crystal structure and its time-averaged solute-solvent structure obtained from simulation. Our study of GFP is in good agreement with previous research suggesting that the proton released from the chromophore upon photoexcitation can diffuse through an extended internal hydrogen bonding network that allows for the proton to exit to bulk or be recaptured by the anionic chromophore. Conversely for PfM2DH, we identified the most probable ionization states of key residues along the proton escape channel from the catalytic site to bulk solvent, wherein the solute and high-density solvent crystal structures of binary and ternary complexes were properly reproduced. Furthermore, we proposed a plausible mechanism for this proton translocation process that is consistent with the state-dependent structural shifts observed in our analysis. The time-averaged structures generated from our analyses facilitate validation of MD simulation results and provide a comprehensive profile of the dynamic all-occupancy solvation network within and around a flexible solute, from which detailed hydrogen-bonding networks can be inferred. In this way, potential drawbacks arising from the elucidation of these networks by examination of static crystal structures or via alternate rigid-protein solvation analysis procedures can be overcome. Complementary studies aimed at the effective use of our methodology for alternate implementations (e.g., ligand design) are currently underway. PMID:25405925
NASA Technical Reports Server (NTRS)
Adeleye, Sanya; Chung, Christopher
2006-01-01
Commercial aircraft undergo a significant number of maintenance and logistical activities during the turnaround operation at the departure gate. By analyzing the sequencing of these activities, more effective turnaround contingency plans may be developed for logistical and maintenance disruptions. Turnaround contingency plans are particularly important as any kind of delay in a hub based system may cascade into further delays with subsequent connections. The contingency sequencing of the maintenance and logistical turnaround activities were analyzed using a combined network and computer simulation modeling approach. Experimental analysis of both current and alternative policies provides a framework to aid in more effective tactical decision making.
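The network side of such an analysis rests on critical-path computation over the activity precedence graph: the longest chain of dependent activities sets the minimum turnaround time, and a disruption only delays departure if it pushes some chain past that length. A minimal sketch with invented activities and durations:

```python
import functools

# Hypothetical turnaround activities (minutes) and precedence constraints;
# real turnaround networks are larger, but the computation is the same.
durations = {"deboard": 15, "cater": 20, "clean": 25, "fuel": 30, "board": 25}
preds = {"deboard": [], "cater": ["deboard"], "clean": ["deboard"],
         "fuel": [], "board": ["cater", "clean", "fuel"]}

@functools.lru_cache(maxsize=None)
def earliest_finish(act):
    # Earliest finish = longest path to this activity plus its own duration.
    start = max((earliest_finish(p) for p in preds[act]), default=0)
    return start + durations[act]

turnaround = max(earliest_finish(a) for a in durations)
print(turnaround)  # 65: deboard -> clean -> board is the critical path
```

Contingency planning then amounts to re-running this computation with perturbed durations to see which disruptions actually move the departure time.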
Project teams produce successful turnaround for Illinois hospital.
2003-10-01
When Jay Kreuzer was hired as president and CEO of West Suburban Health Care, it didn't take him long to realize the organization was headed in the wrong direction. The not-for-profit system, which includes a 258-bed medical center, was projected to end fiscal year 2001 with a loss of $19 million. Kreuzer put together a team that implemented an organization-wide performance improvement effort. In just two years the turnaround has been completed, as West Suburban ended fiscal year 2003 with a small surplus. PMID:14650979
Mumbare, Sachin S; Gosavi, Shriram; Almale, Balaji; Patil, Aruna; Dhakane, Supriya; Kadu, Aniruddha
2014-01-01
Background: India's National Family Welfare Programme is dominated by sterilization, particularly tubectomy. Sterilization, being a terminal method of contraception, decides the final number of children for that couple. Many studies have shown a declining trend in the average number of living children at the time of sterilization over a short period of time. This study was therefore planned to carry out a time series analysis of the average number of children at the time of terminal contraception, to forecast the same until 2020, and to compare the rates of change in various subgroups of the population. Materials and Methods: Data were preprocessed in MS Access 2007 by creating and running SQL queries. After testing the stationarity of every series with the augmented Dickey-Fuller test, time series analysis and forecasting were done using the best-fit Box-Jenkins ARIMA (p, d, q) nonseasonal model. To compare the rates of change of the average number of children at sterilization in various subgroups, analysis of covariance (ANCOVA) was applied. Results: Forecasting showed that the replacement level of 2.1 total fertility rate (TFR) will be achieved in 2018 for couples opting for sterilization. The same will be achieved in 2020, 2016, 2018, and 2019 for rural areas, urban areas, Hindu couples, and Buddhist couples, respectively. It will not be achieved by 2020 in Muslim couples. Conclusion: Every stratum of the population showed the declining trend. The decline was significantly faster for male children than for female children, and faster in rural areas than in urban areas. The decline was not significantly different among Hindu, Muslim, and Buddhist couples. PMID:25364146
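A minimal hand-rolled ARIMA(1,1,0) conveys the Box-Jenkins workflow on a declining series: difference once to remove trend, fit an AR(1) by least squares, and forecast by undoing the differencing. The data are illustrative stand-ins, and the study itself selected best-fit (p, d, q) orders per subgroup:

```python
import numpy as np

# Hand-rolled ARIMA(1,1,0): AR(1) on the first-differenced series, with
# forecasts accumulated back onto the level of the original series.
def arima_110_forecast(y, steps):
    d = np.diff(y)
    x, z = d[:-1], d[1:]
    phi = np.dot(x, z) / np.dot(x, x)      # least-squares AR(1) coefficient
    last, last_diff, out = y[-1], d[-1], []
    for _ in range(steps):
        last_diff = phi * last_diff        # forecast next difference
        last = last + last_diff            # undo the differencing
        out.append(last)
    return np.array(out)

# Illustrative declining series approaching a floor:
y = np.array([3.4, 3.25, 3.12, 3.01, 2.92, 2.84, 2.77, 2.71])
fc = arima_110_forecast(y, steps=4)
print(np.all(np.diff(fc) < 0))  # forecasts keep declining, ever more slowly
```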
Pizzuto, James; Schenk, Edward R.; Hupp, Cliff R.; Gellis, Allen; Noe, Greg; Williamson, Elyse; Karwan, Diana L.; O'Neal, Michael; Marquard, Julia; Aalto, Rolf; Newbold, Denis
2014-01-01
Watershed Best Management Practices (BMPs) are often designed to reduce loading from particle-borne contaminants, but the temporal lag between BMP implementation and improvement in receiving water quality is difficult to assess because particles are only moved downstream episodically, resting for long periods in storage between transport events. A theory is developed that describes the downstream movement of suspended sediment particles accounting for the time particles spend in storage given sediment budget data (by grain size fraction) and information on particle transit times through storage reservoirs. The theory is used to define a suspended sediment transport length scale that describes how far particles are carried during transport events, and to estimate a downstream particle velocity that includes time spent in storage. At 5 upland watersheds of the mid-Atlantic region, transport length scales for silt-clay range from 4 to 60 km, while those for sand range from 0.4 to 113 km. Mean sediment velocities for silt-clay range from 0.0072 km/yr to 0.12 km/yr, while those for sand range from 0.0008 km/yr to 0.20 km/yr, 4–6 orders of magnitude slower than the velocity of water in the channel. These results suggest lag times of 100–1000 years between BMP implementation and effectiveness in receiving waters such as the Chesapeake Bay (where BMPs are located upstream of the characteristic transport length scale). Many particles likely travel much faster than these average values, so further research is needed to determine the complete distribution of suspended sediment velocities in real watersheds.
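The headline lag estimate is simple arithmetic on the derived quantities: an average particle velocity that includes storage turns a distance to the receiving water into a lag time. Using values from the abstract:

```python
# Lag between BMP implementation and effect downstream: distance divided by
# the storage-inclusive average particle velocity reported in the abstract.
def lag_years(distance_km, velocity_km_per_yr):
    return distance_km / velocity_km_per_yr

# Silt-clay travelling 12 km at the fast and slow ends of the reported range:
print(lag_years(12, 0.12), lag_years(12, 0.0072))  # ~100 to ~1700 years
```

The spread of the reported velocity range is why the abstract can only bound lag times to orders of magnitude.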
NASA Technical Reports Server (NTRS)
Huff, Edward M.; Mosher, Marianne; Barszcz, Eric
2002-01-01
Recent research using NASA Ames AH-1 and OH-58C helicopters, and NASA Glenn test rigs, has shown that in-flight vibration data are typically non-stationary [1-4]. The nature and extent of this non-stationarity is most likely produced by several factors operating simultaneously. The aerodynamic flight environment and pilot commands provide continuously changing inputs, with a complex dynamic response that includes automatic feedback control from the engine regulator. It would appear that the combined effects operate primarily through an induced torque profile, which causes concomitant stress modulation at the individual internal gear meshes in the transmission. This notion is supported by several analyses, which show that upwards of 93% of the vibration signal's variance can be explained by knowledge of torque alone. That this relationship is stronger in an AH-1 than in an OH-58, where measured non-stationarity is greater, suggests that the overall mass of the vehicle is an important consideration. In the lighter aircraft, the unsteady aerodynamic influences transmit relatively greater unsteady dynamic forces to the mechanical components, quite possibly contributing to its greater non-stationarity. In a recent paper using OH-58C pinion data [5], the authors have shown that in computing a time synchronous average (TSA) for various single-value metric computations, an effective trade-off can be obtained between sample size and measured stationarity by using data from only a single mesh cycle. A mesh cycle, which is defined as the number of rotations required for the gear teeth to return to their original mating position, has the property of representing all of the discrete phase angles of the opposing gears exactly once in the average. Measured stationarity is probably maximized because a single mesh cycle of the pinion gear occurs over a very short span of time, during which time-dependent non-stationary effects are kept to a minimum. Clearly, the advantage of local
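Time synchronous averaging itself can be sketched in a few lines (the signal below is synthetic; real TSA resamples the signal against a shaft tachometer rather than assuming a fixed number of samples per cycle):

```python
import numpy as np

# Time synchronous averaging (TSA): slice the vibration signal into
# equal-length revolutions (or mesh cycles) and average them, so components
# synchronous with rotation reinforce while asynchronous noise averages out.
def tsa(signal, samples_per_cycle):
    n_cycles = len(signal) // samples_per_cycle
    cycles = np.reshape(signal[: n_cycles * samples_per_cycle],
                        (n_cycles, samples_per_cycle))
    return cycles.mean(axis=0)

rng = np.random.default_rng(0)
spc = 64
tooth_mesh = np.sin(2 * np.pi * 8 * np.arange(spc) / spc)  # 8 mesh events/rev
noisy = np.tile(tooth_mesh, 50) + rng.normal(0, 1.0, 50 * spc)
avg = tsa(noisy, spc)
print(np.abs(avg - tooth_mesh).max())  # far below the raw noise level
```

The paper's trade-off is visible in the `n_cycles` choice: more cycles suppress more noise, but a longer record spans more of the non-stationary behavior being measured.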
BATSE Observations of Gamma-Ray Burst Spectra. Part 3; Low-Energy Behavior of Time-Averaged Spectra
NASA Technical Reports Server (NTRS)
Preece, R. D.; Briggs, M. S.; Pendleton, G. N.; Paciesas, W. S.; Matteson, J. L.; Band, D. L.; Skelton, R. T.; Meegan, C. A.
1996-01-01
We analyze time-averaged spectra from 86 bright gamma-ray bursts from the first 5 years of the Burst And Transient Source Experiment (BATSE) on board the Compton Gamma Ray Observatory to determine whether the lowest energy data are consistent with a standard spectral form fit to the data at all energies. The BATSE Spectroscopy Detectors have the capability to observe photons at energies as low as 5 keV. Using the gamma-ray burst locations obtained with the BATSE Large Area Detectors, the Spectroscopy Detectors' low-energy response can be modeled accurately. This, together with a postlaunch calibration of the lowest energy Spectroscopy Detector discriminator channel, which can lie in the range 5-20 keV, allows spectral deconvolution over a broad energy range, approx. 5 keV to 2 MeV. The additional coverage allows us to search for evidence of excess emission, or for a deficit, below 20 keV. While no burst has a significant (greater than or equal to 3 sigma) deficit relative to a standard spectral model, we find that 12 bursts have excess low-energy emission, ranging between 1.2 and 5.8 times the model flux, that exceeds 5 sigma in significance. This is evidence for an additional low-energy spectral component in at least some bursts, or for deviations from the power-law spectral form typically used to model gamma-ray bursts at energies below 100 keV.
NASA Astrophysics Data System (ADS)
Cromwell, G.; Johnson, C. L.; Tauxe, L.; Constable, C.; Jarboe, N.
2015-12-01
Previous paleosecular variation (PSV) and time-averaged field (TAF) models draw on compilations of paleodirectional data that lack equatorial and high-latitude sites and use latitudinal virtual geomagnetic pole (VGP) cutoffs designed to remove transitional field directions. We present a new selected global dataset (PSV10) of paleodirectional data spanning the last 10 Ma. We include all results calculated with modern laboratory methods, regardless of site VGP colatitude, that meet statistically derived selection criteria. We exclude studies that target transitional field states or identify significant tectonic effects, and correct for any bias from serial correlation by averaging directions from sequential lava flows. PSV10 has an improved global distribution compared with previous compilations, comprising 1519 sites from 71 studies. VGP dispersion in PSV10 varies with latitude, exhibiting substantially higher values in the southern hemisphere than at corresponding northern latitudes. Inclination anomaly estimates at many latitudes are within error of an expected GAD field, but significant negative anomalies are found at equatorial and mid-northern latitudes. Current PSV models Model G and TK03 do not fit the observed PSV or TAF latitudinal behavior in PSV10, or in subsets of normal and reverse polarity data, particularly for southern hemisphere sites. Attempts to fit these observations with simple modifications to TK03 yielded slight statistical improvements, but the misfit still exceeds acceptable levels. The root-mean-square misfit of TK03 (and subsequent iterations) is substantially lower for the normal polarity subset of PSV10 than for the reverse polarity data. Two-thirds of the data in PSV10 are of normal polarity, most of which are from the last 5 Ma, so we develop a new TAF model using this subset of data. We use the resulting TAF model to explore whether new statistical PSV models can better describe our new global compilation.
Woolcock, Patrick J; Koziel, Jacek A; Cai, Lingshuang; Johnston, Patrick A; Brown, Robert C
2013-03-15
Time-weighted average (TWA) passive sampling using solid-phase microextraction (SPME) and gas chromatography was investigated as a new method of collecting, identifying and quantifying contaminants in process gas streams. Unlike previous TWA-SPME techniques using the retracted fiber configuration (fiber within needle) to monitor ambient conditions or relatively stagnant gases, this method was developed for fast-moving process gas streams at temperatures approaching 300 °C. The goal was to develop a consistent and reliable method of analyzing low concentrations of contaminants in hot gas streams without performing time-consuming exhaustive extraction with a slipstream. This work in particular aims to quantify trace tar compounds found in a syngas stream generated from biomass gasification. This paper evaluates the concept of retracted SPME at high temperatures by testing the three essential requirements for TWA passive sampling: (1) zero-sink assumption, (2) consistent and reliable response by the sampling device to changing concentrations, and (3) equal concentrations in the bulk gas stream relative to the face of the fiber syringe opening. Results indicated the method can accurately predict gas stream concentrations at elevated temperatures. Evidence was also discovered to validate the existence of a second boundary layer within the fiber during the adsorption/absorption process. This limits the technique to operating within reasonable mass loadings and loading rates, established by appropriate sampling depths and times for concentrations of interest. A limit of quantification for the benzene model tar system was estimated at 0.02 g m(-3) (8 ppm) with a limit of detection of 0.5 mg m(-3) (200 ppb). Using the appropriate conditions, the technique was applied to a pilot-scale fluidized-bed gasifier to verify its feasibility. Results from this test were in good agreement with literature and prior pilot plant operation, indicating the new method can measure low
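The TWA relation behind retracted-fiber sampling is Fick's first law integrated over the sampling interval: the mass loaded is m = D(A/Z)Ct, so the time-weighted average concentration follows from the mass uptake. A sketch with illustrative values (not the paper's calibration):

```python
# Retracted-fiber TWA-SPME relation from Fick's first law: mass loaded
# m = D * (A / Z) * C * t, where D is the analyte diffusion coefficient in
# the gas, A the needle opening area, Z the retraction depth, and t the
# sampling time.  Inverting gives the time-weighted average concentration.
# All input values below are illustrative assumptions.
def twa_concentration(mass_ng, z_cm, d_cm2_s, a_cm2, t_s):
    """TWA concentration in ng/cm^3 from mass adsorbed on a retracted fiber."""
    return mass_ng * z_cm / (d_cm2_s * a_cm2 * t_s)

c = twa_concentration(mass_ng=50.0, z_cm=0.3, d_cm2_s=0.08,
                      a_cm2=4.6e-4, t_s=600.0)
print(round(c, 1), "ng/cm^3")
```

The zero-sink assumption tested in the paper is what licenses treating the fiber as a perfect absorber in this relation; the second boundary layer they report is a departure from it at high mass loadings.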
Wolthaus, J. W. H.; Sonke, J.-J.; Herk, M. van; Damen, E. M. F.
2008-09-15
for the clearly visible features (e.g., tumor and diaphragm). The shape of the tumor, with respect to that of the BH CT scan, was better represented by the MidP reconstructions than any of the 4D CT frames (including MidV; reduction of 'shape differences' was 66%). The MidP scans contained about one-third the noise of individual 4D CT scan frames. Conclusions: We implemented an accurate method to estimate the motion of structures in a 4D CT scan. Subsequently, a novel method to create a midposition CT scan (time-weighted average of the anatomy) for treatment planning with reduced noise and artifacts was introduced. Tumor shape and position in the MidP CT scan represents that of the BH CT scan better than MidV CT scan and, therefore, was found to be appropriate for treatment planning.
NASA Technical Reports Server (NTRS)
Dunn, M. G.; Kim, J.; Civinskas, K. C.; Boyle, R. J.
1992-01-01
Time-averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row and the first-stage blade row of the Rocketdyne Space Shuttle Main Engine two-stage fuel-side turbine. These measurements were made at 10 percent, 50 percent, and 90 percent span on both the pressure and suction surfaces of the component. Stanton-number distributions are also reported for the second-stage vane at 50 percent span. A shock tube is used as a short-duration source of heated and pressurized air to which the turbine is subjected. Platinum thin-film gages are used to obtain the heat-flux measurements and miniature silicone-diaphragm pressure transducers are used to obtain the surface pressure measurements. The first-stage vane Stanton number distributions are compared with predictions obtained using a quasi-3D Navier-Stokes solution and a version of STAN5. This same N-S technique was also used to obtain predictions for the first blade and the second vane.
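The Stanton number reported from such thin-film gauges is the nondimensional heat-transfer coefficient St = q / (ρ U cp (T0 − Tw)), with q the measured heat flux and T0 − Tw the driving temperature difference. A sketch with illustrative shock-tunnel-like values (not the paper's data):

```python
# Stanton number from a measured surface heat flux: the heat flux is
# nondimensionalised by the freestream enthalpy flux and the driving
# temperature difference.  All inputs below are illustrative.
def stanton(q_w_m2, rho, u, cp, t0, tw):
    """St = q / (rho * U * cp * (T0 - Tw))."""
    return q_w_m2 / (rho * u * cp * (t0 - tw))

st = stanton(q_w_m2=2.0e5, rho=1.1, u=250.0, cp=1005.0, t0=600.0, tw=300.0)
print(f"{st:.4f}")
```

Short-duration facilities like the shock tube used here measure q directly, which is why results are naturally reported as Stanton-number distributions rather than temperatures.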
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, related work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
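The eigenvector method underlying this line of work can be sketched directly: the average of weighted unit quaternions is the eigenvector with the largest eigenvalue of the weighted sum of outer products, which correctly treats q and −q as the same rotation. This is a minimal scalar-weighted version; the Note's matrix-weighted generalization is not reproduced:

```python
import numpy as np

# Average of weighted unit quaternions as the dominant eigenvector of
# M = sum_i w_i * q_i q_i^T.  Because M is unchanged when any q_i flips
# sign, the method is insensitive to the q/-q ambiguity of quaternions.
def average_quaternions(quats, weights=None):
    q = np.asarray(quats, dtype=float)
    w = np.ones(len(q)) if weights is None else np.asarray(weights, float)
    m = sum(wi * np.outer(qi, qi) for wi, qi in zip(w, q))
    vals, vecs = np.linalg.eigh(m)
    return vecs[:, np.argmax(vals)]   # unit eigenvector; overall sign arbitrary

# Averaging q and -q (the same attitude) recovers that attitude:
q1 = np.array([1.0, 0.0, 0.0, 0.0])
avg = average_quaternions([q1, -q1])
print(np.abs(avg))  # [1. 0. 0. 0.]
```

A naive component-wise mean of q and −q would give zero; the outer-product formulation is exactly what avoids that failure.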
Kilburn, Tina R.; Eriksen, Hanne-Lise Falgreen; Underbjerg, Mette; Thorsen, Poul; Mortensen, Erik Lykke; Landrø, Nils Inge; Bakketeig, Leiv S.; Grove, Jakob; Sværke, Claus; Kesmodel, Ulrik Schiøler
2015-01-01
Background Deficits in information processing may be a core deficit after fetal alcohol exposure. This study was designed to investigate the possible effects of weekly low to moderate maternal alcohol consumption and binge drinking episodes in early pregnancy on choice reaction time (CRT) and information processing time (IPT) in young children. Method Participants were sampled based on maternal alcohol consumption during pregnancy. At the age of 60–64 months, 1,333 children were administered a modified version of the Sternberg paradigm to assess CRT and IPT. In addition, a test of general intelligence (WPPSI-R) was administered. Results Adjusted for a wide range of potential confounders, this study showed no significant effects of average weekly maternal alcohol consumption during pregnancy on CRT or IPT. There was, however, an indication of slower CRT associated with binge drinking episodes in gestational weeks 1–4. Conclusion This study observed no significant effects of average weekly maternal alcohol consumption during pregnancy on CRT or IPT as assessed by the Sternberg paradigm. However, there were some indications of CRT being associated with binge drinking during very early pregnancy. Further large-scale studies are needed to investigate effects of different patterns of maternal alcohol consumption on basic cognitive processes in offspring. PMID:26382068
School Turnarounds: Resisting the Hype, Giving Them Hope. Education Outlook No. 2
ERIC Educational Resources Information Center
Hess, Frederick M.; Gift, Thomas
2009-01-01
Education reformers are abuzz over school "turnarounds," a simple idea that has undeniable appeal. School turnarounds offer the opportunity to take familiar educational institutions and improve them through coaching, mentoring, capacity building, best practices, and other existing tools. Unlike most reform efforts, which focus on incremental…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-30
... Development and Research, Attention: Richard Muller, 200 Constitution Avenue, NW., Room N-5641, Washington, DC... will be requested in the quick turnaround surveys is not otherwise available. Other research and... coordinate the quick turnaround surveys with other research it is conducting, in order to ease the burden...
On the Edge: A Study of Small Private Colleges That Have Made a Successful Financial Turnaround
ERIC Educational Resources Information Center
Carey, Amy Bragg
2013-01-01
This dissertation was a qualitative research study regarding two small private universities and their process of transformation from an institution headed toward closure to a successful turnaround. The primary questions that guided the study included the factors and persons that contributed to the institutional turnaround, the issues and…
On the Edge: A Study of Small Private Colleges That Have Made a Successful Financial Turnaround
ERIC Educational Resources Information Center
Carey, Amy Bragg
2014-01-01
This article describes a qualitative study that involved two small private universities, examining their process of transformation from institutions headed toward closure to institutions that underwent a successful turnaround. The primary questions that guided the study included the issues and circumstances that led to the need for a turnaround,…
School Turnaround Fever: The Paradoxes of a Historical Practice Promoted as a New Reform
ERIC Educational Resources Information Center
Peck, Craig; Reitzug, Ulrich C.
2014-01-01
School "turnaround" has received significant attention recently in education literature and policy action, especially as a means to dramatically improve urban education. In current common education usage, "turnaround" refers to the rapid, significant improvement in the academic achievement of persistently low-achieving schools.…
NASA Astrophysics Data System (ADS)
Maglich, Bogdan C.
1988-08-01
A D+ beam of kinetic energy Ti = 0.7 MeV was stored in a "simple mirror" magnetic field as self-colliding orbits or migma and neutralized by ambient, oscillating electrons whose bounce frequencies were externally controlled. Space charge density was exceeded by an order of magnitude without instabilities. Three nondestructive diagnostic methods allowed measurements of ion orbit distribution, ion storage times, ion energy distribution, nuclear reaction rate, and reaction product spectrum. Migma formed a disc 20 cm in diameter and 0.5 cm thick. Its ion density was sharply peaked in the center; the ion-to-electron temperature ratio was Ti/Te ≈ 10^3; ion-electron temperature equilibrium was never reached. The volume-average and central D+ densities were n = 3.2 × 10^9 cm^-3 and nc = 3 × 10^10 cm^-3 respectively, compared to the space charge limit density nsc = 4 × 10^8 cm^-3. The energy confinement time was τc = 20-30 s, limited by charge exchange reactions with the residual gas in the vacuum (5 × 10^-9 Torr). The ion energy loss rate was 1.4 keV/s. None of the instabilities that were observed in mirrors at several orders of magnitude lower density occurred. The proton energy spectrum for d + d → T + p + 4 MeV shows that deuterons collided at an average crossing angle of 160°. Evidence for exponential density buildup has also been observed. Relative to Migma III results and measured in terms of the product of ion energy E, density n, and confinement time τ, device performance was improved by a factor of 500. Using the central fast ion density, we obtained the triple product Tnτ ≅ 4 × 10^14 keV s cm^-3, which is greater than that of the best fusion devices. The luminosity (collision rate per unit cross section) was ~10^29 cm^-2 s^-1, with 0.7 A ion current through the migma center. The stabilizing features of migma are: (1) large Larmor radius; (2) small canonical angular momentum; (3) short axial length z (disc shape); (4) nonadiabatic motions in r and z
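The quoted triple product follows directly from the stated numbers (Ti = 0.7 MeV, central density 3 × 10^10 cm^-3, confinement time 20-30 s); a quick arithmetic check:

```python
# Triple product from the quoted Migma IV numbers
T_keV = 700.0       # D+ kinetic energy, 0.7 MeV expressed in keV
n_c = 3.0e10        # central fast-ion density (cm^-3)
tau_s = 20.0        # energy confinement time (s), lower end of 20-30 s
triple = T_keV * n_c * tau_s   # keV * s * cm^-3
```

With the lower confinement-time bound this gives 4.2 × 10^14 keV s cm^-3, consistent with the quoted Tnτ ≅ 4 × 10^14.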
Li, Peng; Cheng, Yuxuan; Li, Pei; Zhou, Liping; Ding, Zhihua; Ni, Yang; Pan, Cong
2016-09-01
The current temporal, wavelength, angular, and spatial averaging approaches trade imaging time and resolution for multiple independent measurements that improve the flow contrast in optical coherence tomography angiography (OCTA). We find that these averaging approaches are equivalent in principle, offering almost the same flow contrast enhancement as the number of averages increases. Based on this finding, we propose a hybrid averaging strategy for contrast enhancement by cost apportionment. We demonstrate that, compared with any individual approach, the hybrid averaging is able to offer a desired flow contrast without severe degradation of imaging time and resolution. Making use of the extended range of a VCSEL-based swept-source OCT, an angular averaging approach by path length encoding is also demonstrated for flow contrast enhancement. PMID:27607943
NASA Astrophysics Data System (ADS)
Tomasovych, Adam; Gallmetzer, Ivo; Haselmair, Alexandra; Kaufman, Darrell S.; Zuschin, Martin
2016-04-01
Stratigraphic changes in temporal resolution of fossil assemblages and the degree of their stratigraphic mixing in Holocene deposits are of high importance in paleoecology, conservation paleobiology and paleoclimatology. However, few studies have quantified downcore changes in time averaging and in stratigraphic disorder on the basis of dating of multiple shells occurring in individual stratigraphic layers. Here, we investigate downcore changes in the frequency distribution of postmortem ages of the infaunal bivalve Gouldia minima in two ~150 cm-thick piston cores (separated by more than 1 km) in the northern Adriatic Sea, close to the Slovenian city of Piran, at a depth of 24 m. We use radiocarbon-calibrated amino acid racemization to obtain postmortem ages of 564 shells, and quantify age-frequency distributions in 4-5 cm-thick stratigraphic intervals (with 20-30 specimens sampled per interval). The inter-quartile range for individual 4-5 cm-thick layers varies between 850 and 1,700 years, and the range encompassing 95% of age data varies between 2,000 and 5,000 years in both cores. The uppermost sediments (20 cm) are age-homogenized and show that the median age of shells is ~700-800 years. The interval between 20 and 90 cm shows a gradual increase in median age from ~2,000 to ~5,000 years, with maximum age ranging to ~8,000 years. However, the lowermost parts of both cores show a significant disorder, with median ages of 3,100-3,300 years. This temporal disorder implies that many shells were displaced vertically by ~1 m. Absolute and proportional abundance of the bivalve Gouldia minima increases strongly towards the tops of both cores. We hypothesize that such an increase in abundance, when coupled with depth-declining reworking, can explain stratigraphic disorder, because numerically abundant young shells from the top of the core were more likely buried to larger sediment depths than less frequent shells at intermediate sediment depths.
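The per-layer statistics reported here (median age, inter-quartile range, 95% age range) are quantiles of the shell age-frequency distribution; a sketch with synthetic ages standing in for the dated Gouldia minima shells:

```python
import numpy as np

def layer_time_averaging(ages):
    """Summarize time averaging within one stratigraphic layer from the
    postmortem ages (years) of the individually dated shells it contains."""
    ages = np.asarray(ages, dtype=float)
    q2_5, q25, median, q75, q97_5 = np.percentile(ages, [2.5, 25, 50, 75, 97.5])
    return {
        "median": median,
        "iqr": q75 - q25,          # inter-quartile range of shell ages
        "range95": q97_5 - q2_5,   # range encompassing 95% of the ages
    }

# Hypothetical ages for ~25 shells from one 4-5 cm interval
rng = np.random.default_rng(0)
stats = layer_time_averaging(rng.gamma(2.0, 800.0, size=25))
```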
MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard
2016-01-01
Purpose Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600
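Whole-brain polynomial correction, as commonly implemented, fits a low-order polynomial to the phase (velocity) values in static tissue and subtracts the fitted background everywhere; a hedged sketch of the idea (first-order fit, synthetic data; not the authors' pipeline):

```python
import numpy as np

def polynomial_phase_correction(velocity, static_mask, order=1):
    """Fit a first-order 2D polynomial (a plane, v = a + b*x + c*y) to
    velocity values in static tissue and subtract the fitted background.
    A sketch of whole-brain polynomial correction; real implementations
    vary in order and masking."""
    ny, nx = velocity.shape
    y, x = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([np.ones(static_mask.sum()),
                         x[static_mask], y[static_mask]])
    coef, *_ = np.linalg.lstsq(A, velocity[static_mask], rcond=None)
    background = coef[0] + coef[1] * x + coef[2] * y
    return velocity - background

# Synthetic case: a pure eddy-current-like linear offset, no true flow
y, x = np.mgrid[0:32, 0:32]
offset = 0.01 * x - 0.005 * y + 0.2
mask = np.ones((32, 32), dtype=bool)
corrected = polynomial_phase_correction(offset, mask)
```

For a purely linear phase error the fitted plane removes the bias exactly, which is why the background-tissue measurements above improve with correction.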
NASA Astrophysics Data System (ADS)
Foresti, Daniele; Nabavi, Majid; Poulikakos, Dimos
2012-05-01
Acoustic levitation is a physical phenomenon that arises when the acoustic radiation pressure is strong enough to overcome the gravitational force. It is a nonlinear phenomenon which can be predicted only if higher order terms are included in the acoustic field calculation. The study of acoustic levitation is usually conducted by solving the linear acoustic equation and bridging the gap with an analytical solution. Only recently has the scientific community shown interest in the full solution of the Navier-Stokes equations with the aim of investigating the acoustic radiation pressure in depth. We present herein a numerical model based on the Finite Volume Method (FVM) and Dynamic Mesh (DM) for the calculation of the acoustic radiation pressure acting on a rigid sphere inside an axisymmetric levitator, which is the most widely used and investigated type of levitator. In this work, we focus on the third resonance mode. The use of DM is new in the field of acoustic levitation, allowing a more realistic simulation of the phenomenon, since no standing wave has to be imposed as a boundary condition. The radiating plate is modeled as a rigid cylinder moving sinusoidally along the central axis. The time-averaged acoustic force exerted on the sphere is calculated for different sphere radii Rs (0.025 to 0.5 wavelengths). It is shown that the acoustic force increases in proportion to Rs^3 for small radii, then decreases when the standing wave condition is violated, and finally rises again in the travelling wave radiation pressure configuration. The numerical model is validated for the inviscid case against a Finite Element Method model of the linear acoustic problem based on King's approximation.
NASA Astrophysics Data System (ADS)
Simond, G.; Bendahan, D.; Cozzone, P. J.
2001-03-01
We present an exact analytical method dedicated to fitting time-dependent exponential-like changes in MR spectra. As an illustration, this method has been applied to fitting metabolic changes recorded by 31P MRS in human skeletal muscle occurring during a rest-exercise-recovery protocol. When recording metabolic changes with the accumulative method, the time averaging of the MR signals implies the choice of a time index for fitting any changes in the features of the associated MR spectra. A critical examination of the different ways (constant, linear, and exponential) of choosing the time index is reported. By numerical analysis, we have calculated the errors generated by the three methods and we have compared their sensitivity to noise. In the case of skeletal muscle, both constant and linear methods introduce large and uncontrolled errors for the whole set of metabolic parameters derived from [PCr] changes. In contrast, the exponential method affords a reliable estimation of critical parameters in muscle bioenergetics in both normal and pathological situations. This method is very easy to implement and provides an exact analytical solution to fitting changes in MR spectra recorded by the accumulative method.
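For an exponentially changing signal averaged over an acquisition window [t0, t0 + T], the standard continuous-averaging result assigns the averaged spectrum the time at which the exponential equals its own window average, t_eff = t0 − τ ln[(τ/T)(1 − e^(−T/τ))]. A numerical check (the parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def exponential_time_index(t0, T, tau):
    """Time at which exp(-t/tau) equals its own average over the
    acquisition window [t0, t0 + T] (the 'exponential' time index)."""
    return t0 - tau * np.log((tau / T) * (1.0 - np.exp(-T / tau)))

tau, t0, T = 40.0, 10.0, 20.0   # illustrative time constant and window (s)
t_eff = exponential_time_index(t0, T, tau)

# The window average of exp(-t/tau), from the closed-form integral
window_avg = (tau / T) * (np.exp(-t0 / tau) - np.exp(-(t0 + T) / tau))
```

Evaluating exp(−t_eff/τ) reproduces window_avg exactly, which is what makes this choice of time index free of the systematic error incurred by the constant or linear (mid-window) choices.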
Shih, H C; Tsai, S W; Kuo, C H
2012-01-01
A solid-phase microextraction (SPME) device was used as a diffusive sampler for airborne propylene glycol ethers (PGEs), including propylene glycol monomethyl ether (PGME), propylene glycol monomethyl ether acetate (PGMEA), and dipropylene glycol monomethyl ether (DPGME). Carboxen-polydimethylsiloxane (CAR/PDMS) SPME fiber was selected for this study. A polytetrafluoroethylene (PTFE) tubing was used as the holder, and the SPME fiber assembly was inserted into the tubing as a diffusive sampler. The diffusion path length and area of the sampler were 0.3 cm and 0.00086 cm^2, respectively. The theoretical sampling constants at 30°C and 1 atm for PGME, PGMEA, and DPGME were 1.50 × 10^-2, 1.23 × 10^-2 and 1.14 × 10^-2 cm^3 min^-1, respectively. For evaluations, known concentrations of PGEs around the threshold limit values/time-weighted average with specific relative humidities (10% and 80%) were generated both by the air bag method and the dynamic generation system, while 15, 30, 60, 120, and 240 min were selected as the time periods for vapor exposures. Comparisons of the SPME diffusive sampling method to Occupational Safety and Health Administration (OSHA) organic Method 99 were performed side-by-side in an exposure chamber at 30°C for PGME. A gas chromatography/flame ionization detector (GC/FID) was used for sample analysis. The experimental sampling constants of the sampler at 30°C were (6.93 ± 0.12) × 10^-1, (4.72 ± 0.03) × 10^-1, and (3.29 ± 0.20) × 10^-1 cm^3 min^-1 for PGME, PGMEA, and DPGME, respectively. The adsorption of chemicals on the stainless steel needle of the SPME fiber was suspected to be one of the reasons why significant differences between theoretical and experimental sampling rates were observed. Correlations between the results for PGME from both SPME device and OSHA organic Method 99 were linear (r = 0.9984) and consistent (slope = 0.97 ± 0.03). Face velocity (0-0.18 m/s) also proved to have no effects on the sampler
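The theoretical sampling constant of a diffusive sampler follows Fick's first law, S = D·A/L, with the stated cross-sectional area (0.00086 cm^2) and diffusion path length (0.3 cm); the diffusion coefficient below is a hypothetical value chosen to reproduce the reported theoretical PGME constant:

```python
def sampling_constant(D_cm2_per_min, area_cm2=0.00086, path_cm=0.3):
    """Theoretical diffusive-sampler sampling constant from Fick's first
    law: S = D * A / L, in cm^3/min. Geometry defaults are the values
    stated for the SPME sampler above."""
    return D_cm2_per_min * area_cm2 / path_cm

# Hypothetical gas-phase diffusion coefficient (cm^2/min) chosen so that
# S matches the reported theoretical PGME constant of ~1.50e-2 cm^3/min
S_pgme = sampling_constant(5.23)
```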
Autonomous Robotic Refueling System (ARRS) for rapid aircraft turnaround
NASA Astrophysics Data System (ADS)
Williams, O. R.; Jackson, E.; Rueb, K.; Thompson, B.; Powell, K.
An autonomous robotic refuelling system is being developed to achieve rapid aircraft turnaround, notably during combat operations. The proposed system includes a gantry positioner with sufficient reach to position a robotic arm that performs the refuelling tasks; a six degree of freedom manipulator equipped with a remote center of compliance, torque sensor, and a gripper that can handle standard tools; a computer vision system to locate and guide the refuelling nozzle, inspect the nozzle, and avoid collisions; and an operator interface with video and graphics display. The control system software will include components designed for trajectory planning and generation, collision detection, sensor interfacing, sensory processing, and human interfacing. The robotic system will be designed so that upgrading to perform additional tasks will be relatively straightforward.
ERIC Educational Resources Information Center
Player, Daniel; Hambrick Hitt, Dallas; Robinson, William
2014-01-01
This guide provides state education agencies (SEAs) and districts (LEAs) with guidance about how to assess the district's readiness to support school turnaround initiatives. Often, school turnaround efforts focus only on the school's structure and leadership. Rarely do policymakers or practitioners think about school turnaround as a system-level…
NASA Astrophysics Data System (ADS)
Liu, Yang-Wei; Liu, Bao-Jie; Lu, Li-Peng
2012-04-01
The average-passage equation system (APES) provides a rigorous mathematical framework for accounting for unsteady blade row interaction through multistage compressors in a steady-state environment by introducing deterministic correlations (DC) that need to be modeled to close the equation system. The primary purpose of this study was to provide insight into the DC characteristics and the influence of DC on the time-averaged flow field of the APES. In Part 2 of this two-part paper, the influence of DC on the time-averaged flow field was systematically studied. Several time-averaging computations were conducted with various boundary conditions and DC for the downstream stator in a transonic compressor stage, employing the CFD solver developed in Part 1 of this two-part paper. These results were compared with the time-averaged unsteady flow field and the steady one. The study indicated that the circumferentially averaged DC can account for the major part of the unsteady effects on the spanwise redistribution of flow in compressors. Furthermore, it demonstrated that both deterministic stresses and deterministic enthalpy fluxes are necessary to reproduce the time-averaged flow field.
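A deterministic correlation is the averaging residue of two deterministic (blade-passing-periodic) signals, DC_uv = ⟨uv⟩ − ⟨u⟩⟨v⟩, analogous in form to a Reynolds stress; a minimal sketch with synthetic signals:

```python
import numpy as np

def deterministic_stress(u, v):
    """Deterministic correlation of two periodic signals sampled over one
    blade-passing period: <u v> - <u><v>."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return np.mean(u * v) - np.mean(u) * np.mean(v)

# Synthetic in-phase blade-passing fluctuations about mean values
t = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
u = 100.0 + 5.0 * np.sin(t)   # e.g. axial velocity (m/s)
v = 20.0 + 2.0 * np.sin(t)    # e.g. tangential velocity (m/s)
dc_uv = deterministic_stress(u, v)
```

For these in-phase sinusoids the correlation is (5 × 2)/2 = 5, the nonzero residue that a steady (time-averaged) solver must model rather than resolve.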
A Sensitive Branched DNA HIV-1 Signal Amplification Viral Load Assay with Single Day Turnaround
Baumeister, Mark A.; Zhang, Nan; Beas, Hilda; Brooks, Jesse R.; Canchola, Jesse A.; Cosenza, Carlo; Kleshik, Felix; Rampersad, Vinod; Surtihadi, Johan; Battersby, Thomas R.
2012-01-01
Branched DNA (bDNA) is a signal amplification technology used in clinical and research laboratories to quantitatively detect nucleic acids. An overnight incubation is a significant drawback of highly sensitive bDNA assays. The VERSANT® HIV-1 RNA 3.0 Assay (bDNA) (“Versant Assay”) currently used in clinical laboratories was modified to allow shorter target incubation, enabling the viral load assay to be run in a single day. To dramatically reduce the target incubation from 16–18 h to 2.5 h, composition of only the “Lysis Diluent” solution was modified. Nucleic acid probes in the assay were unchanged. Performance of the modified assay (assay in development; not commercially available) was evaluated and compared to the Versant Assay. Dilution series replicates (>950 results) were used to demonstrate that analytical sensitivity, linearity, accuracy, and precision for the shorter modified assay are comparable to the Versant Assay. HIV RNA-positive clinical specimens (n = 135) showed no significant difference in quantification between the modified assay and the Versant Assay. Equivalent relative quantification of samples of eight genotypes was demonstrated for the two assays. Elevated levels of several potentially interfering endogenous substances had no effect on quantification or specificity of the modified assay. The modified assay with drastically improved turnaround time demonstrates the viability of signal-amplifying technology, such as bDNA, as an alternative to the PCR-based assays dominating viral load monitoring in clinical laboratories. Highly sensitive bDNA assays with a single day turnaround may be ideal for laboratories with especially stringent cost, contamination, or reliability requirements. PMID:22479381
Atwell, J.; Evans, C.; Francoeur, T.L.; Guerra, R.
1995-12-31
This paper addresses the use of rapid turnaround, heated headspace/gas chromatography (GC) to support a soil remediation project in the state of New Jersey. In the past, rapid turnaround, heated headspace/GC procedures have been used primarily as a screening tool to delineate areas of volatile organic compound (VOC) contamination on-site. For this project, the heated headspace/GC method was the primary analytical tool used to support a large soil remediation project. This paper reviews the project goals, presents the analytical protocol, presents internal quality assurance/quality control (QA/QC), evaluates laboratory split data, and discusses the advantages and disadvantages of this rapid turnaround method.
Wang, Huapei; Kent, Dennis V.; Rochette, Pierre
2015-01-01
The geomagnetic field is predominantly dipolar today, and high-fidelity paleomagnetic mean directions from all over the globe strongly support the geocentric axial dipole (GAD) hypothesis for the past few million years. However, the bulk of paleointensity data fails to coincide with the axial dipole prediction of a factor-of-2 equator-to-pole increase in mean field strength, leaving the core dynamo process an enigma. Here, we obtain a multidomain-corrected Pliocene–Pleistocene average paleointensity of 21.6 ± 11.0 µT recorded by 27 lava flows from the Galapagos Archipelago near the Equator. Our new result in conjunction with a published comprehensive study of single-domain–behaved paleointensities from Antarctica (33.4 ± 13.9 µT) that also correspond to GAD directions suggests that the overall average paleomagnetic field over the past few million years has indeed been dominantly dipolar in intensity yet only ∼60% of the present-day field strength, with a long-term average virtual axial dipole magnetic moment of the Earth of only 4.9 ± 2.4 × 10^22 A⋅m^2. PMID:26598664
Wang, Huapei; Kent, Dennis V; Rochette, Pierre
2015-12-01
The geomagnetic field is predominantly dipolar today, and high-fidelity paleomagnetic mean directions from all over the globe strongly support the geocentric axial dipole (GAD) hypothesis for the past few million years. However, the bulk of paleointensity data fails to coincide with the axial dipole prediction of a factor-of-2 equator-to-pole increase in mean field strength, leaving the core dynamo process an enigma. Here, we obtain a multidomain-corrected Pliocene-Pleistocene average paleointensity of 21.6 ± 11.0 µT recorded by 27 lava flows from the Galapagos Archipelago near the Equator. Our new result in conjunction with a published comprehensive study of single-domain-behaved paleointensities from Antarctica (33.4 ± 13.9 µT) that also correspond to GAD directions suggests that the overall average paleomagnetic field over the past few million years has indeed been dominantly dipolar in intensity yet only ∼60% of the present-day field strength, with a long-term average virtual axial dipole magnetic moment of the Earth of only 4.9 ± 2.4 × 10^22 A⋅m^2. PMID:26598664
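A virtual axial dipole moment converts a site-mean intensity B at geomagnetic latitude λ into an equivalent geocentric dipole moment via m = 4π r³ B / (μ0 √(1 + 3 sin²λ)). With the two mean intensities above (the Antarctic latitude used here is an illustrative assumption), the average lands near the quoted ~4.9 × 10^22 A⋅m²:

```python
import math

MU0 = 4.0e-7 * math.pi   # vacuum permeability (T*m/A)
R_EARTH = 6.371e6        # Earth radius (m)

def vadm(B_tesla, lat_deg):
    """Virtual axial dipole moment (A*m^2) from field intensity B and
    geomagnetic latitude: m = 4*pi*r^3*B / (mu0*sqrt(1 + 3*sin^2(lat)))."""
    s = math.sin(math.radians(lat_deg))
    return 4.0 * math.pi * R_EARTH**3 * B_tesla / (MU0 * math.sqrt(1.0 + 3.0 * s * s))

m_galapagos = vadm(21.6e-6, 0.0)     # equatorial site
m_antarctica = vadm(33.4e-6, -78.0)  # assumed high-latitude site
m_mean = 0.5 * (m_galapagos + m_antarctica)
```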
Rapid Turnaround of Costing/Designing of Space Missions Operations
NASA Technical Reports Server (NTRS)
Kudrle, Paul D.; Welz, Gregory A.; Basilio, Eleanor
2008-01-01
The Ground Segment Team (GST), at NASA's Jet Propulsion Laboratory in Pasadena, California, provides high-level mission operations concepts and cost estimates for projects that are in the formulation phase. GST has developed a tool to track costs, assumptions, and mission requirements, and to rapidly turn around estimates for mission operations, ground data systems, and tracking for deep space and near Earth missions. Estimates that would often take several weeks to generate are now generated in minutes through the use of an integrated suite of cost models. The models were developed through interviews with domain experts in areas of Mission Operations, including but not limited to: systems engineering, payload operations, tracking resources, mission planning, navigation, telemetry and command, and ground network infrastructure. Data collected during interviews were converted into parametric cost models and integrated into one tool suite. The tool has been used on a wide range of missions from small Earth orbiters, to flagship missions like Cassini. The tool is an aid to project managers and mission planners as they consider different scenarios during the proposal and early development stages of their missions. The tool is also used for gathering cost related requirements and assumptions and for conducting integrated analysis of multiple missions.
NASA Astrophysics Data System (ADS)
Scarponi, D.; Kaufman, D.; Bright, J.; Kowalewski, M.
2009-04-01
Single fossiliferous beds contain biotic remnants that commonly vary in age over a time span of hundreds to thousands of years. Multiple recent studies suggest that such temporal mixing is a widespread phenomenon in marine depositional systems. This research focuses on quantitative estimates of temporal mixing obtained by direct dating of individual corbulid bivalve shells (Lentidium mediterraneum and Corbula gibba) from Po plain marine units of the Holocene 4th-order depositional sequence, including the Transgressive Systems Tract [TST] and Highstand Systems Tract [HST]. These units display a distinctive succession of facies consisting of brackish to marginal marine retrogradational deposits (early TST), overlain by fully marine fine to coarse gray sands (late TST), and capped with progradational deltaic clays and sands (HST). More than 300 corbulid specimens, representing 19 shell-rich horizons evenly distributed along the depositional sequence and sampled from 9 cores, have been dated by means of aspartic acid racemization calibrated using 23 AMS-radiocarbon dates (14 dates for Lentidium mediterraneum and 9 dates for Corbula gibba). The results indicate that the scale of time-averaging is comparable when similar depositional environments from the same systems tract are compared across cores. However, time averaging is notably different when similar depositional environments from TST and HST segments of the sequence are compared. Specifically, late HST horizons (n=8) display relatively low levels of time-averaging: the mean within-horizon range of shell ages is 537 years and the standard deviation averages 165 years. In contrast, late TST horizons (n=7) are dramatically more time-averaged: a mean range of 5104 years and a mean standard deviation of 1420 years. Thus, late TST horizons experience an order of magnitude more time averaging than environmentally comparable late HST horizons. In conclusion, the HST and TST systems tracts of the Po Plain display
Time-Averaged Indoor Radon Concentrations and Infiltration Rates Sampled in Four U.S. Cities
Doyle, S.M.; Nazaroff, W.W.; Nero, A.V.
1983-09-01
Indoor radon concentrations, measured in 58 houses over a four to five month period during the winter and spring of 1981-1982, varied from 0.1 to 16 pCi l^-1 (4-590 Bq m^-3). Average infiltration rates were determined for each house over the same period, based on a measurement of the effective leakage area and an infiltration model, and found to range from 0.2 to 2.2 air changes per hour (hr^-1). Indoor radon concentrations correlated poorly with infiltration rates for houses within each city as well as for the entire sample. Differences in radon entry rates among houses thus appear to be more important than differences in infiltration rates in determining whether a house has high indoor radon levels, consistent with previous indications from grab-sample measurements. Radon entry rates and indoor radon concentrations were generally higher in houses in Fargo, ND and Colorado Springs, CO than in houses in Portland, ME and Charleston, NC.
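The conclusion that entry rate dominates over infiltration is consistent with a single-zone mass balance, where the steady-state concentration is C = E / (V(ach + λ_Rn)); a sketch with hypothetical house parameters:

```python
def steady_radon(entry_rate_bq_per_h, volume_m3, ach_per_h,
                 radon_decay_per_h=7.55e-3):
    """Single-zone steady-state indoor radon concentration (Bq/m^3):
    C = E / (V * (ach + lambda)). Radon decay (half-life ~3.8 d) is
    small next to typical infiltration rates of 0.2-2.2 hr^-1."""
    return entry_rate_bq_per_h / (volume_m3 * (ach_per_h + radon_decay_per_h))

# Hypothetical houses: same infiltration, 10x different entry rates
low = steady_radon(entry_rate_bq_per_h=5_000.0, volume_m3=250.0, ach_per_h=0.5)
high = steady_radon(entry_rate_bq_per_h=50_000.0, volume_m3=250.0, ach_per_h=0.5)
```

At fixed air-change rate the concentration scales linearly with entry rate, so house-to-house spread in entry rate maps directly onto concentration spread, as the study observed.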
Prevosto, L.; Mancinelli, B.; Kelly, H.
2013-12-15
This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating, highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well-defined groups, allowing two quite different averaged electron temperature values to be defined. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the time fluctuations but also a spatial-average along the probe collecting length. The fitting of the high-current region of the characteristic using this electron temperature value together with the corrections given by the fluctuation analysis showed a relevant departure from local thermal equilibrium in the arc core.
Prevosto, L; Kelly, H; Mancinelli, B
2013-12-01
This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating, highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well-defined groups, allowing two quite different averaged electron temperature values to be defined. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the time fluctuations but also a spatial-average along the probe collecting length. The fitting of the high-current region of the characteristic using this electron temperature value together with the corrections given by the fluctuation analysis showed a relevant departure from local thermal equilibrium in the arc core. PMID:24387432
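In the retarding region the electron current follows I ∝ exp((V − Vp)/Te) for Te expressed in eV, so the electron temperature is the inverse slope of ln I versus V; a minimal fitting sketch with synthetic data (not the arc measurements):

```python
import numpy as np

def electron_temperature_ev(voltages, currents):
    """Te (eV) from the exponential retarding region of a Langmuir probe
    characteristic: the slope of ln(I) vs V equals 1/Te for Te in eV."""
    slope, _intercept = np.polyfit(voltages, np.log(currents), 1)
    return 1.0 / slope

# Synthetic retarding-region data generated with Te = 0.98 eV
V = np.linspace(-4.0, -1.0, 30)    # probe bias below plasma potential (V)
I = 1e-3 * np.exp(V / 0.98)        # electron current (A), noise-free
te = electron_temperature_ev(V, I)
```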
ERIC Educational Resources Information Center
Smith, Scott G.
2015-01-01
In this article, Scott Smith presents an innocent problem (Problem 12 of the May 2001 Calendar from "Mathematics Teacher" ("MT," May 2001, vol. 94, no. 5, p. 384)) that was transformed by several timely "what if?" questions into a rewarding investigation of some interesting mathematics. These investigations led to two…
ERIC Educational Resources Information Center
Sawyer, Richard
2013-01-01
Education officials and journalists frequently track changes over time in the average ACT® College Readiness Assessment Composite scores and ACT College Readiness Benchmark attainment rates of individual high schools. Using standard statistical methods, I examined how often changes in these statistics are unambiguously positive or negative, rather…
Bruna, C; Vado, A; Rossetti, G; Racca, E; Borello, V; Cherasco, E; Isoardi, D; Uslenghi, E
1996-12-01
Several studies showed that time domain analysis of the signal-averaged ECG may identify groups of patients at low and high risk for arrhythmic events after myocardial infarction (MI). However, the signal averaging methods were not uniform and the definition of an abnormal signal-averaged ECG was empiric. To identify the best quantitative signal-averaged variable for predicting arrhythmic events (sustained ventricular tachycardia, ventricular fibrillation and witnessed, instantaneous death), 262 patients surviving acute MI were prospectively evaluated. Twelve clinical variables, left ventricular ejection fraction (LVEF), complex ventricular arrhythmias (CVA) on Holter monitoring and three conventional signal-averaged variables (at either 25-250 or 40-250 Hz) were entered into a Cox proportional hazards regression model. During a mean follow-up of 20.3 ± 13.7 months, 16 (6.1%) patients had arrhythmic events. All six signal-averaged variables were independent predictors of arrhythmic events, and a filtered QRS duration (fQRSD) ≥120 ms at 40 Hz high-pass filtering proved the most predictive. In a regression analysis including the best signal-averaged variable, LVEF and CVA, only fQRSD ≥120 ms at 40 Hz and LVEF independently predicted arrhythmic events. Sensitivity, specificity, positive predictive value and odds ratio for fQRSD ≥120 ms at 40 Hz were 63%, 90%, 29% and 11, respectively; for the combination of fQRSD ≥120 ms at 40 Hz and LVEF < 40%, they were 73%, 95%, 47% and 39, respectively. In conclusion, fQRSD ≥120 ms at 40 Hz best predicts arrhythmic events in the post-infarction period. The combination of signal-averaged ECG and LVEF is recommended to stratify patients at risk of arrhythmic events after MI. PMID:9031532
NASA Technical Reports Server (NTRS)
Ovryn, B.; Haacke, E. M.
1993-01-01
A technique that compensates for low spatial frequency spurious phase changes during an interference experiment is developed; it permits temporal averaging of multiple-phase measurements, made before and after object displacement. The method is tested with phase-stepped real-time holographic interferometry applied to cantilever bending of a piezoelectric bimorph ceramic. Results indicate that temporal averaging of the corrected data significantly reduces the white noise in a phase measurement without incurring systematic errors or sacrificing spatial resolution. White noise is reduced from 3 deg to less than 1 deg using these methods.
NASA Astrophysics Data System (ADS)
Ouriev (Ur'ev), Boris; Windhab, Erich
2003-01-01
In this work a methodology for high-resolution, time-averaged two-dimensional flow mapping of converging flows was explored. Non-transparent, highly concentrated shear-thinning and shear-thickening suspensions circulated through an entrance flow adapter with an adjustable die-entry position. The entrance region was scanned with a spatial resolution of 2.7 mm (radial) × 1 mm (axial). The time-averaged flow map was composed from one-dimensional flow profiles measured along the ultrasonic sensor beam using the ultrasonic pulsed echo Doppler technique. Prior to visualizing the die entry, the flow properties were investigated using a novel in-line, non-invasive measuring technique that combines ultrasound velocity profiling with the pressure-difference method. The rheological flow properties were derived from simultaneous recording and on-line analysis of the velocity profiles across the tube channel and the related radial shear stress profiles calculated from the pressure loss along the flow channel. For the first time, the entrance flow of a shear-thickening suspension could be visualized. The flows of the investigated model suspensions were compared qualitatively. This method opens the way to time-averaged flow mapping of viscoelastic and viscous, non-transparent, multiphase and highly concentrated fluids.
Shaffer, F.; Massah, H.; Sinclair, J.; Shahnam, M.
1995-04-30
This paper describes an experimental facility to measure time-averaged properties of particle-wall collisions. A particle tracking velocimetry (PTV) system measures particle rebound angles and velocities and impact angles and velocities for specific particles. From these measured properties, values of the normal and tangential coefficients of restitution are derived. The PTV system rapidly measures thousands of particle-wall collisions, enabling determination of time-averaged properties. In this initial work we use 191 µm glass particles colliding with a glass plate at a fixed angle of 17.5° and a velocity of 17.07 m/s. Observations are also made of aspherical particles colliding with gross sliding. In the future, this experimental system will be used with particles and wall materials of industrial significance.
Fast-turnaround alkyl nitrate measurements during the PROPHET 1998 summer intensive
NASA Astrophysics Data System (ADS)
Ostling, Kristi; Kelly, Brendan; Bird, Stephen; Bertman, Steven; Pippin, Margaret; Thornberry, Troy; Carroll, Mary Anne
2001-10-01
An automated gas chromatographic system for making fast-turnaround measurements of alkyl nitrates was developed and tested. Every 30 min, samples were automatically acquired on a 1/16-inch OD Tenax trap made from fused silica-lined stainless steel tubing and injected by thermal desorption into the gas chromatograph. The system exhibited good chromatographic resolution, minimal breakthrough, and on-trap stability for C3-C5 nitrates. The detection limit of the instrument was <0.1 parts per trillion by volume for C3-C5 nitrates with an overall uncertainty of 30% for isopropyl nitrate and 50% for all other quantified compounds. Overall uncertainty of the method was limited by calibration precision rather than by sampling method or chromatography. The system was deployed in the field as part of the Program for Research on Oxidants: Photochemistry, Emissions, and Transport intensive held at the University of Michigan Biological Station in northern Michigan during the summer of 1998 from July 15 to August 22. Ambient results show a significant dynamic range with absolute levels affected by air mass history and pollution sources. The sum of measured C3-C5 alkyl nitrates obtained during this intensive ranged from 3.45 to 65.8 pptv, with a mean of 18.6 pptv and contributed 1% on average to total NOy, with a range of 0.5% to 3%. Differences in the contribution to NOy are found when the data are filtered according to air mass origin. A positive relationship between n-propyl nitrate and peroxypropionyl nitrate was observed over the course of the intensive.
NASA Astrophysics Data System (ADS)
Khokhlov, A.; Hulot, G.; Johnson, C. L.
2013-12-01
It is well known that the geometry of the recent time-averaged paleomagnetic field (TAF) is very close to that of a geocentric axial dipole (GAD). However, many TAF models recovered from averaging lava flow paleomagnetic directional data (the most numerous and reliable of all data) suggest that significant additional terms, in particular quadrupolar (G20) and octupolar (G30) zonal terms, likely contribute. The traditional way in which most such TAF models are recovered uses an empirical estimate for paleosecular variation (PSV) that is subject to limitations imposed by the limited age information available for such data. In this presentation, we will report on a new way to recover the TAF, using an inverse modeling approach based on the so-called Giant Gaussian Process (GGP) description of the TAF and PSV, and various statistical tools we recently made available (see Khokhlov and Hulot, Geophysical Journal International, 2013, doi: 10.1093/gji/ggs118). First results based on high quality data published from the Time-Averaged Field Investigations project (see Johnson et al., G-cubed, 2008, doi:10.1029/2007GC001696) clearly show that both the G20 and G30 terms are very well constrained, and that optimum values fully consistent with the data can be found. These promising results lay the groundwork for use of the method with more extensive data sets, to search for possible additional non-zonal departures of the TAF from the GAD.
Mineo, Placido; Vitalini, Daniele; Scamporrino, Emilio; Bazzano, Sebastiano; Alicata, Rossana
2005-01-01
The dependence of the calculated average molecular mass of a polyethylene glycol with a large polydispersity on the instrumental parameters adopted in the acquisition of mass spectra using delayed extraction matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (DE MALDI-TOFMS) was investigated. It has been shown that a combined effect of delay times and potential gradients can act on the ion cloud in the source chamber affecting both mass resolution and average molecular mass value of the analyzed polymeric sample. Also examined was a blend of two different polymers (a PEG and a PMMA commercial sample having a similar average molecular mass), which presents an additional problem concerning the discrimination among the different polymer species as a function of the experimental conditions. In this work, the best instrumental conditions to obtain both good resolution and a correct average molecular mass for the examined polydisperse sample are reported. PMID:16134231
ERIC Educational Resources Information Center
Herman, Rebecca; Huberman, Mette
2012-01-01
The TALPS study aims to build on the existing research base to develop promising methodologies to identify chronically low-performing and turnaround schools, as well as to identify promising strategies for turning around chronically low-performing schools. By looking specifically at schools identified as turnaround, in comparison to nonturnaround…
ERIC Educational Resources Information Center
Cucchiara, Maia Bloomfield; Rooney, Erin; Robertson-Kraft, Claire
2015-01-01
School turnaround--a reform strategy that strives for quick and dramatic transformation of low-performing schools--has gained prominence in recent years. This study uses interviews and focus groups conducted with 86 teachers in 13 schools during the early stages of school turnaround in a large urban district to examine teachers' perceptions of the…
ERIC Educational Resources Information Center
Murphy, Joseph
2008-01-01
Purpose: The purpose of this paper is to review the literature from the organizational sciences to develop a grounded narrative of turnaround leadership. Design/methodology/approach: The paper is a review of literature, which employs a ten-step process to explore and make sense of the turnaround literature from the organizational sciences. The…
ERIC Educational Resources Information Center
MacTaggart, Terrence, Ed.
2007-01-01
This book discusses the early indicators of a college or university's need for a turnaround. It outlines financial trends and other indicators of distress, as well as benchmarks for the various stages of an effective turnaround strategy. The book will help trustees, presidents, and faculty members diagnose whether they are in denial about the true…
Holyoak, Marcel; Meese, Robert J.; Graves, Emily E.
2014-01-01
In metapopulations in which habitat patches vary in quality and occupancy it can be complicated to calculate the net time-averaged contribution to reproduction of particular populations. Surprisingly, few indices have been proposed for this purpose. We combined occupancy, abundance, frequency of occurrence, and reproductive success to determine the net value of different sites through time and applied this method to a bird of conservation concern. The Tricolored Blackbird (Agelaius tricolor) has experienced large population declines, is the most colonial songbird in North America, is largely confined to California, and breeds itinerantly in multiple habitat types. It has had chronically low reproductive success in recent years. Although young produced per nest have previously been compared across habitats, no study has simultaneously considered site occupancy and reproductive success. Combining occupancy, abundance, frequency of occurrence, reproductive success and nest failure rate, we found that large colonies in grain fields fail frequently because of nest destruction due to harvest prior to fledging. Consequently, net time-averaged reproductive output is low compared to colonies in non-native Himalayan blackberry or thistles, and native stinging nettles. Cattail marshes have intermediate reproductive output, but their reproductive output might be improved by active management. Harvest of grain-field colonies necessitates either promoting delay of harvest or creating alternative, more secure nesting habitats. Stinging nettle and marsh colonies offer the main potential sources for restoration or native habitat creation. From 2005–2011 breeding site occupancy declined 3x faster than new breeding colonies were formed, indicating a rapid decline in occupancy. Total abundance showed a similar decline. Causes of variation in the value for reproduction of nesting substrates and factors behind continuing population declines merit urgent investigation. The method
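The abstract names the ingredients of the index but not its formula. One plausible way to combine them into a net time-averaged reproductive output per site is the simple product sketched below; the function name, formula, and all numbers are invented for illustration, not taken from the paper:

```python
# Hypothetical sketch: net time-averaged reproductive output of a site as
# occupancy x abundance x per-nest success x probability of avoiding failure.
# The paper's actual index may differ; this only illustrates the combination.
def net_reproductive_output(occupancy, mean_abundance, young_per_nest, failure_rate):
    """Expected young produced per year at a site, time-averaged."""
    return occupancy * mean_abundance * young_per_nest * (1.0 - failure_rate)

# Invented numbers: a large grain-field colony destroyed by harvest 90% of
# the time versus a smaller blackberry colony with far lower nest failure.
grain = net_reproductive_output(0.6, 5000, 1.2, 0.9)
blackberry = net_reproductive_output(0.6, 1500, 1.2, 0.2)
print(grain < blackberry)   # frequent failure outweighs colony size: True
```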
NASA Astrophysics Data System (ADS)
Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria
2013-06-01
Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and to estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long-term HRV series available at PhysioNet, leading to discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
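The long-memory-removal step of an ARFIMA fit can be sketched via fractional differencing: the operator (1−B)^d has binomial weights w_0 = 1, w_k = w_{k−1}(k−1−d)/k. A minimal numpy sketch, with function names of my own choosing (the paper's estimation pipeline is not given in the abstract):

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n binomial weights of the fractional-difference operator
    (1-B)^d, used to remove long memory of order d (the ARFIMA step)."""
    w = np.zeros(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply (1-B)^d to series x, truncating weights at the series start."""
    w = frac_diff_weights(d, len(x))
    return np.array([np.dot(w[:k + 1][::-1], x[:k + 1]) for k in range(len(x))])

print(np.round(frac_diff_weights(0.4, 5), 3))
```

As a sanity check, d = 1 reduces the operator to the ordinary first difference, and 0 < d < 1/2 gives the slowly decaying weights characteristic of long memory.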
Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.
2011-01-01
Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. Averaging over time scales >1 minute is the likely minimum duration required
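The statistical point underlying the fourth error source, that the random error of a minimally time-averaged sample shrinks roughly as 1/√T with averaging duration T, can be illustrated with synthetic data (the concentration, noise level, and sample counts below are invented, not the Colorado River data):

```python
import numpy as np

# Illustrative only: the standard error of a mean-concentration estimate
# falls roughly as 1/sqrt(T) with averaging duration T, which is why
# minimally time-averaged depth-integrated samples are the noisiest.
rng = np.random.default_rng(42)
true_conc = 100.0                                   # hypothetical mean (mg/L)
samples = rng.normal(true_conc, 30.0, size=(10_000, 64))
for T in (1, 4, 16, 64):                            # averaging duration (samples)
    err = samples[:, :T].mean(axis=1).std()
    print(T, round(err, 2))                         # error shrinks ~ 1/sqrt(T)
```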
Canciani, B; Mormino, P; Scanavacca, G; Della Rocca, F
1994-02-01
We report on a patient suffering from asymmetrical hypertrophic cardiomyopathy, with alterations occurring in the signal-averaged electrocardiogram (SAQRS). Recordings of 3-lead orthogonal ECG were obtained and analyzed with a Del Mar Avionics 750A Innovator device. Time domain analysis showed late potentials and a slow inscription of the initial portion of the SAQRS, which we called "early potentials". In the frequency domain there was a high degree of spectral turbulence at the beginning and at the end of the SAQRS. We discuss the possibility that the slow and fragmented conduction in the initial portion of the SAQRS could be related to the electrical instability of the disease. PMID:8013767
Chronically Low-Performing Schools and Turnaround: Evidence from Three States
ERIC Educational Resources Information Center
Hansen, Michael; Choi, Kilchan
2012-01-01
The criteria for determining the student outcomes that define a school as having "turned around" are not well defined, and the definition of turnaround performance varies across studies. Although current policy initiatives offer guidelines for identifying CLP schools, there is no standard definition or methodology in common usage. This paper…
Achieving Exact and Constant Turnaround Ratio in a DDS-Based Coherent Transponder
NASA Technical Reports Server (NTRS)
D'Addario, Larry R.
2011-01-01
A report describes a non-standard direct digital synthesizer (DDS) implementation that can be used as part of a coherent transponder so as to allow any rational turnaround ratio to be exactly achieved and maintained while the received frequency varies. (A coherent transponder is a receiver-transmitter in which the transmitted carrier is locked to a pre-determined multiple of the received carrier's frequency and phase. That multiple is called the turnaround ratio.) The report also describes a general model for coherent transponders that are partly digital. A partially digital transponder is one in which analog signal processing is used to convert the signals between high frequencies at which they are radiated and relatively low frequencies at which they are converted to or from digital form, with most of the complex processing performed digitally. There is a variety of possible architectures for such a transponder, and different ones can be selected by choosing different parameter values in the general model. Such a transponder uses a DDS to create a low-frequency quasi-sinusoidal signal that tracks the received carrier's phase, and another DDS to generate an IF or near-baseband version of the transmitted carrier. With conventional DDS implementations, a given turnaround ratio can be achieved only approximately, and the error varies slightly as the received frequency changes. The non-conventional implementation employed here allows any rational turnaround ratio to be exactly maintained.
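The core idea, keeping a rational ratio exact through integer arithmetic rather than approximating it with a fixed-point tuning word, can be sketched with Python's `fractions` module. The 880/749 ratio and the received frequency below are illustrative choices, not values from the report:

```python
from fractions import Fraction

# Illustrative sketch: perform the turnaround multiplication in exact
# rational arithmetic, so the ratio never drifts as the received
# frequency varies (the property the non-standard DDS achieves in hardware).
TURNAROUND = Fraction(880, 749)   # example rational turnaround ratio

def transmit_frequency(received_hz):
    """Exact transmitted carrier frequency for a coherent transponder."""
    return Fraction(received_hz) * TURNAROUND

f_rx = Fraction(7_162_312_500)            # received carrier (Hz), invented
f_tx = transmit_frequency(f_rx)
print(f_tx / f_rx == TURNAROUND)          # ratio is exact: True
```

A conventional DDS would instead round f_tx to the nearest representable tuning word, introducing the small, frequency-dependent error the report eliminates.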
Turnaround, Transformational, or Transactional Leadership: An Ethical Dilemma in School Reform
ERIC Educational Resources Information Center
Mette, Ian M.; Scribner, Jay P.
2014-01-01
This case was written for school leaders, specifically building-level principals and central office administrators attempting to implement school turnaround reform efforts. Often, leaders who embark on this type of organizational change work in intense environments that produce high levels of pressure to demonstrate improvement in student…
The Lay of the Land: State Practices and Needs for Supporting School Turnaround
ERIC Educational Resources Information Center
Scott, Caitlin; Lasley, Nora
2013-01-01
The goal of the Center on School Turnaround (CST) is to provide technical assistance on research-based practices and emerging promising practices that will increase the capacity of states to support their districts in turning around the lowest-performing schools. When the CST opened its doors in October 2012, it began its work by asking the…
ERIC Educational Resources Information Center
Stuit, David A.
2010-01-01
This study investigates the successes of the charter and district sectors in eliminating bad schools via dramatic turnarounds in performance and/or shutdowns. It identified 2,025 low-performing charter and district schools across ten states, each of which is home to a sizable number of charter schools. These particular schools were tracked from…
A Case Study of Change Strategies Implemented in a Turnaround Elementary School
ERIC Educational Resources Information Center
Colson, Jo Ann
2012-01-01
This case study examined the change strategies in a turnaround school at the elementary level to understand and describe how change occurred and was sustained at this campus. This study examined the factors which contributed to the change in academic success of students, examined beliefs about change that led to the change process, identified the…
ERIC Educational Resources Information Center
Schmidt-Davis, Jon; Bottoms, Gene
2012-01-01
Recent studies make one reality clear: While multiple factors can cause a low-performing high school to be in a turnaround situation, every high school that makes dramatic academic improvement has strong, effective school leadership. Turning a school around is no work for novices. It takes a skilled, visionary and proactive principal to pull apart…
Participatory Democracy and Struggling Schools: Making Space for Youth in School Turnarounds
ERIC Educational Resources Information Center
Kirshner, Ben; Jefferson, Anton
2015-01-01
Background/Context:Federal policy, as codified in Race to the Top (RTT) funding guidelines, outlines four types of intervention: turnaround, restart, closure, and transformation. RTT has embraced a technocratic paradigm for school reform that frames choice less as the opportunity for the public to deliberate about what it wants from its schools…
Choosing a School Turnaround Provider. Lessons Learned. Volume 1, Issue 3
ERIC Educational Resources Information Center
Lockwood, Anne Turnbaugh; Fleischman, Steve
2010-01-01
Droves of school turnaround providers are chasing the massive federal infusion of funds flowing into failing schools. They arrive armed with glossy materials, impressive sounding claims, and, often, citing their prior relationships or experiences with one's school to support their promises of great service and impressive outcomes. But, are their…
Tinkering and Turnarounds: Understanding the Contemporary Campaign to Improve Low-Performing Schools
ERIC Educational Resources Information Center
Duke, Daniel L.
2012-01-01
An unprecedented amount of attention in recent years has been focused on turning around low-performing schools. Drawing on insights from Tyack and Cuban's (1995) "Tinkering Toward Utopia," the article analyzes the forces behind the school turnaround phenomenon and how they have evolved since passage of the No Child Left Behind Act. The article…
ERIC Educational Resources Information Center
Herman, Rebecca; Graczewski, Cheryl; James-Burdumy, Susanne; Murray, Matthew; Perez-Johnson, Irma; Tanenbaum, Courtney
2013-01-01
The federal School Improvement Grants (SIG) program, to which $3 billion were allocated under the American Recovery and Reinvestment Act of 2009 (ARRA), supports schools attempting to turn around a history of low performance. School turnaround also is a focus of Race to the Top (RTT), another ARRA-supported initiative, which involved a roughly $4…
State Capacity to Support School Turnaround. NCEE Evaluation Brief. NCEE 2015-4012
ERIC Educational Resources Information Center
Tanenbaum, Courtney; Boyle, Andrea; Graczewski, Cheryl; James-Burdumy, Susanne; Dragoset, Lisa; Hallgren, Kristin
2015-01-01
One objective of the U.S. Department of Education's (ED) School Improvement Grants (SIG) and Race to the Top (RTT) program is to help states enhance their capacity to support the turnaround of low-performing schools. This capacity may be important, given how difficult it is to produce substantial and sustained achievement gains in low-performing…
CAD/CAM, Creativity, and Discipline Lead to Turnaround School Success
ERIC Educational Resources Information Center
Gorman, Lynn
2012-01-01
Miami Central High School technology teacher Frank Houghtaling thinks the connection between theory and application is one reason his students perform better on the Florida Comprehensive Assessment Test (FCAT). The impressive turnaround school drew local and national attention last spring when one of Houghtaling's students, Dagoberto Cruz, won…
ERIC Educational Resources Information Center
Brown, Angela Antuanette
2012-01-01
The specific purpose of this study was to identify the practices used by leaders of National Blue Ribbon Awarded Schools to successfully turnaround a school or to maintain proficient achievement scores within the school. The school principals answered a four part questionnaire for this study. Part 1 of the questionnaire asked about the extent to…
How to Know when Dramatic Change Is on Track: Leading Indicators of School Turnarounds
ERIC Educational Resources Information Center
Kowal, Julie; Ableidinger, Joe
2011-01-01
In recent years, national policymakers have placed new emphasis on "school turnarounds" as a strategy for rapid, dramatic improvement in chronically failing schools, calling on education leaders to turn around performance in the 5,000 lowest-achieving schools nationwide. This goal may seem daunting, given the dismal success rates of school…
NASA Technical Reports Server (NTRS)
Kim, J. H.; Hudson, R. D.; Thompson, A. M.
1996-01-01
An error analysis of archived total O3 from the total ozone mapping spectrometer (TOMS) (version 6) is presented. Daily total O3 maps for the tropics, from the period October 6-21, 1992, are derived from TOMS radiances following correction for these errors. These daily maps, averaged together, show a wavelike feature, observed in all latitude bands, underlying sharp peaks that occur at different longitudes depending on the latitude. The wave pattern is used to derive both time-averaged stratospheric and tropospheric O3 fields. The nature of the wave pattern (stratospheric or tropospheric) cannot be determined with certainty due to missing data (no Pacific sondes, no lower stratospheric Stratospheric Aerosol and Gas Experiment (SAGE) ozone for 18 months after the Mt. Pinatubo eruption) and significant uncertainties in the corroborative satellite record in the lower stratosphere (solar backscattered ultraviolet (SBUV), microwave limb sounder (MLS)). However, the time-averaged tropospheric ozone field, based on the assumption that the wave feature is stratospheric, agrees within 10% with ultraviolet differential absorption laser O3 measurements from the DC-8 during the Transport and Atmospheric Chemistry near the Equator-Atlantic (TRACE A) campaign and with ozonesonde measurements over Brazzaville, Congo, Ascension Island, and Natal, Brazil, for the period October 6-21, 1992. The derived background (nonpolluted) Indian Ocean tropospheric ozone amount, 26 Dobson units (DU), agrees with the cleanest African ozonesonde profiles for September-October 1992. The assumption of a totally tropospheric wave (flat stratosphere) gives 38 DU above the western Indian Ocean and 15-40% disagreements with the sondes. Tropospheric column O3 is high from South America to Africa, owing to interaction of dynamics with biomass burning emissions. Comparison with fire distributions from the advanced very high resolution radiometer (AVHRR) during October 1992 suggests that tropospheric O3 produced from biomass
NASA Astrophysics Data System (ADS)
Kosnik, Matthew A.; Hua, Quan; Kaufman, Darrell S.; Zawadzki, Atun
2015-03-01
Carbon-14 calibrated amino acid racemization (14C/AAR) data and lead-210 (210Pb) data are used to examine sediment accumulation rates, stratigraphic order, and the extent of time-averaging in sediments collected from the One Tree Reef lagoon (southern Great Barrier Reef, Australia). The top meter of lagoonal sediment preserves a stratigraphically ordered deposit spanning the last 600 yrs. Despite different assumptions, the 210Pb and 14C/AAR chronologies are remarkably similar indicating consistency in sedimentary processes across sediment grain sizes spanning more than three orders of magnitude (0.1-10 mm). Estimates of long-term sediment accumulation rates range from 2.2 to 1.2 mm yr-1. Molluscan time-averaging in the taphonomically active zone is 19 yrs, whereas below the depth of final burial (~15 cm), it is ~110 yrs/5 cm layer. While not a high-resolution paleontological record, this reef lagoon sediment is suitable for paleoecological studies spanning the period of Western colonization and development. This sedimentary deposit, and others like it, should be useful, albeit not ideal, for quantifying anthropogenic impacts on coral reef systems.
NASA Astrophysics Data System (ADS)
Mark, William D.
2015-10-01
The transmission-error frequency spectrum of meshing gear pairs, operating at constant speed and constant loading, is decomposed into harmonics arising from the fundamental period of the gear pair, rotational harmonics of the individual gears of the pair, and tooth-meshing harmonics. In the case of hunting-tooth gear pairs, no rotational harmonics from the individual gears, other than the tooth-meshing harmonics, are shown to occur at the same frequencies. Time-synchronous averaging over a number of contiguous revolutions of the gear of interest equal to an integer multiple of the number of teeth on the mating gear is shown to eliminate non-tooth-meshing transmission-error rotational-harmonic contributions from the mating gear, and those from the gear pair, in the case of hunting-tooth gear pairs, and to minimize these contributions in the case of non-hunting-tooth gear pairs. An example computation illustrates the effectiveness of the suggested time-synchronous-averaging procedure.
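The suggested procedure can be sketched directly: segment the signal into one-revolution frames of the gear of interest and average a number of frames equal to an integer multiple of the mating gear's tooth count. A minimal numpy illustration with an invented 20/31 hunting-tooth pair (tooth counts, sampling, and signal content are all hypothetical):

```python
import numpy as np

def time_synchronous_average(signal, samples_per_rev, n_revs):
    """Average n_revs contiguous one-revolution frames of the signal;
    harmonics not locked to the chosen period average toward zero."""
    frames = signal[:samples_per_rev * n_revs].reshape(n_revs, samples_per_rev)
    return frames.mean(axis=0)

# Hypothetical 20-tooth gear meshing with a 31-tooth mate. Averaging over
# 31 revolutions of the 20-tooth gear (an integer multiple of the mate's
# tooth count) cancels the mate's non-tooth-meshing rotational harmonic.
samples_per_rev, mate_teeth = 256, 31
t = np.arange(samples_per_rev * mate_teeth) / samples_per_rev  # time in revs
meshing = np.cos(2 * np.pi * 20 * t)               # tooth-meshing harmonic (kept)
mate = np.cos(2 * np.pi * (20 / mate_teeth) * t)   # mate rotational harmonic (removed)
tsa = time_synchronous_average(meshing + mate, samples_per_rev, mate_teeth)
```

Because the mate's harmonic completes 20 cycles per 31 revolutions, its phase samples 31 equally spaced points around the circle across the averaged frames and sums to zero, while the tooth-meshing harmonic is identical in every frame and survives intact.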
NASA Astrophysics Data System (ADS)
Flynn, Michael R.; Eisner, Alfred D.
2004-04-01
This paper presents verification and validation results for the time-averaged, three-dimensional velocity field immediately downstream of a finite elliptic cylinder at a Reynolds number of 1.35 × 10^4. Numerical simulations were performed with the finite element package, Fidap, using the steady state, standard k-epsilon model. The ratio of the cylinder height to the major axis of the elliptical cross section is 5.0; the aspect ratio of the cross section is 0.5625. This particular geometry is selected as a crude surrogate for the human form in consideration of further applied occupational and environmental health studies. Predictions of the velocity and turbulence kinetic energy fields in the very near-wake are compared to measurements taken in a wind tunnel using laser Doppler anemometry. Results show that at all locations where a reliable grid convergence index can be calculated there is not a demonstrable difference between simulated and measured values. The overall topology of the time-averaged flow field is reasonably well predicted, although the simulated near-wake is narrower than the measured one.
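The grid convergence index mentioned above can be computed with Roache's standard three-grid procedure; below is a sketch assuming a constant refinement ratio, with the conventional factor of safety 1.25 (a generic formulation, not values taken from this paper).

```python
import math

def gci_fine(f1, f2, f3, r=2.0, fs=1.25):
    """Roache grid convergence index on the fine grid.
    f1, f2, f3: solutions on fine, medium, coarse grids at
    constant refinement ratio r; fs: factor of safety."""
    # observed order of convergence from the three solutions
    p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)
    rel_err = abs((f2 - f1) / f1)
    return fs * rel_err / (r ** p - 1.0), p
```

A small GCI indicates the fine-grid solution is in the asymptotic convergence range, which is the precondition for the comparison with measurements described above.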
Cleary, P. W.; Hijmans, T. W.; Walraven, J. T. M.
2010-12-15
We report on the manipulation of the center-of-mass motion ('sloshing') of a Bose-Einstein condensate in a time-averaged orbiting potential (TOP) trap. We start with a condensate at rest in the center of a static trapping potential. When suddenly replacing the static trap with a TOP trap centered about the same position, the condensate starts to slosh with an amplitude much larger than the TOP micromotion. We show, both theoretically and experimentally, that the direction of sloshing is related to the initial phase of the rotating magnetic field of the TOP. We show further that the sloshing can be quenched by applying a carefully timed and sized jump in the phase of the rotating field.
Durantin, Gautier; Scannella, Sebastien; Gateau, Thibault; Delorme, Arnaud; Dehais, Frederic
2014-01-01
Real-time solutions for noise reduction and signal processing represent a central challenge for the development of Brain Computer Interfaces (BCI). In this paper, we introduce the Moving Average Convergence Divergence (MACD) filter, a tunable digital passband filter used in financial market analysis, for online noise reduction and onset detection without a preliminary learning phase. MACD performance was tested and benchmarked against other filters using data collected with functional Near Infrared Spectroscopy (fNIRS) during a digit sequence memorization task. The filter performed well on filtering and real-time peak activity onset detection compared with other techniques. Therefore, MACD could be implemented for efficient BCI design using fNIRS. PMID:25570400
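For readers unfamiliar with the filter, here is a minimal sketch of the standard MACD construction borrowed from market analysis (difference of two exponential moving averages plus a signal line, with the conventional 12/26/9 spans); this is the generic indicator, not the authors' fNIRS pipeline.

```python
def ema(x, span):
    """Exponential moving average with smoothing alpha = 2/(span+1)."""
    alpha = 2.0 / (span + 1.0)
    out = [x[0]]
    for v in x[1:]:
        out.append(alpha * v + (1.0 - alpha) * out[-1])
    return out

def macd(x, fast=12, slow=26, signal=9):
    """MACD line, signal line, and histogram for a sample sequence x."""
    fast_e, slow_e = ema(x, fast), ema(x, slow)
    line = [f - s for f, s in zip(fast_e, slow_e)]   # fast EMA - slow EMA
    sig = ema(line, signal)                          # smoothed MACD line
    hist = [m - s for m, s in zip(line, sig)]        # onset indicator
    return line, sig, hist
```

Because the fast EMA tracks the input more closely than the slow one, the MACD line swings positive on a rising signal, which is what makes it usable as a passband-like onset detector.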
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2001-01-01
Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
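The quoted probabilities follow from the Poisson model: with an expected rate of λ events per decade, P(at least one event) = 1 − e^(−λ). A sketch, assuming λ ≈ 7 for VEI>=4 as suggested above (the VEI>=5 rate below is back-computed from the quoted 49% and is an illustrative assumption):

```python
import math

def prob_at_least_one(lam):
    """P(N >= 1) for N ~ Poisson(lam), i.e. 1 - P(N = 0)."""
    return 1.0 - math.exp(-lam)
```

With λ = 7 this gives about 0.999, consistent with the ">99 percent" figure for at least one VEI>=4 eruption in a decade.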
NASA Technical Reports Server (NTRS)
Mielke, Amy F.; Seasholtz, Richard G.; Elam, Kristie A.; Panda, Jayanta
2004-01-01
A molecular Rayleigh scattering based flow diagnostic is developed to measure time average velocity, density, temperature, and turbulence intensity in a 25.4-mm diameter nozzle free jet facility. The spectrum of the Rayleigh scattered light is analyzed using a Fabry-Perot interferometer operated in the static imaging mode. The resulting fringe pattern containing spectral information of the scattered light is recorded using a low noise CCD camera. Nonlinear least squares analysis of the fringe pattern using a kinetic theory model of the Rayleigh scattered light provides estimates of density, velocity, temperature, and turbulence intensity of the gas flow. Resulting flow parameter estimates are presented for an axial scan of subsonic flow at Mach 0.95 for comparison with previously acquired pitot tube data, and axial scans of supersonic flow in an underexpanded screeching jet. The issues related to obtaining accurate turbulence intensity measurements using this technique are discussed.
NASA Technical Reports Server (NTRS)
Aoyagi, Kiyoshi; Olson, Lawrence E.; Peterson, Randall L.; Yamauchi, Gloria K.; Ross, James C.; Norman, Thomas R.
1987-01-01
Time-averaged aerodynamic loads are estimated for each of the vane sets in the National Full-Scale Aerodynamic Complex (NFAC). The methods used to compute global and local loads are presented. Experimental inputs used to calculate these loads are based primarily on data obtained from tests conducted in the NFAC 1/10-Scale Vane-Set Test Facility and from tests conducted in the NFAC 1/50-Scale Facility. For those vane sets located directly downstream of either the 40- by 80-ft test section or the 80- by 120-ft test section, aerodynamic loads caused by the impingement of model-generated wake vortices and model-generated jet and propeller wakes are also estimated.
Valderrama, Joaquin T; de la Torre, Angel; Medina, Carlos; Segura, Jose C; Thornton, A Roger D
2016-03-01
The recording of auditory evoked potentials (AEPs) at fast rates allows the study of neural adaptation, improves accuracy in estimating hearing threshold and may help diagnose certain pathologies. Stimulation sequences used to record AEPs at fast rates must be designed with a certain jitter, i.e., they cannot be strictly periodic. Some authors believe that stimuli from wide-jittered sequences may evoke auditory responses of different morphology, in which case the time-invariance assumption would not hold. This paper describes a methodology that can be used to analyze the time-invariance assumption in jittered stimulation sequences. The proposed method [Split-IRSA] is based on an extended version of the iterative randomized stimulation and averaging (IRSA) technique, including selective processing of sweeps according to a predefined criterion. The fundamentals, the mathematical basis and relevant implementation guidelines of this technique are presented in this paper. The results of this study show that Split-IRSA presents adequate performance and that both fast and slow mechanisms of adaptation influence the evoked-response morphology; thus both mechanisms should be considered when time-invariance is assumed. The significance of these findings is discussed. PMID:26778545
ERIC Educational Resources Information Center
Walker, Michele
2010-01-01
The purpose of this study was to observe, describe, and analyze the aspects of school capacity within an urban turnaround school. School capacity factors included: "Teacher Knowledge, Skills and Disposition"; "Professional Community"; "Program Coherence"; "Technical Resources"; and "Principal…
Albian, Joseph M; Geihsler, Debra A; Sacks, Lee B; Schmidt, Daniel P
2002-03-25
This is the second installment in a series of group discussions by top executives on key issues in healthcare today. Straight Talk is presented by Modern Healthcare and PricewaterhouseCoopers. This session tackles the subject of physician practice turnaround in an Integrated Delivery System. The discussion was held on March 5, 2002, at Modern Healthcare's Chicago headquarters and was moderated by Charles S. Lauer. PMID:11951354
Baimatova, Nassiba; Koziel, Jacek A; Kenessov, Bulat
2015-05-11
A new and simple method for benzene, toluene, ethylbenzene and o-xylene (BTEX) quantification in vehicle exhaust was developed based on diffusion-controlled extraction onto a retracted solid-phase microextraction (SPME) fiber coating. The rationale was to develop a method based on existing and proven SPME technology that is feasible for field adaptation in developing countries. Passive sampling with the SPME fiber retracted into the needle extracted nearly two orders of magnitude less mass (n) compared with the exposed fiber (outside the needle), and sampling was in a time-weighted averaging (TWA) mode. Both the sampling time (t) and fiber retraction depth (Z) were adjusted to quantify a wider range of Cgas. Extraction and quantification are conducted in a non-equilibrium mode. Effects of Cgas, t, Z and T were tested. In addition, the contribution of n extracted by the metallic surfaces of the needle assembly without SPME coating was studied. Effects of sample storage time on n loss were studied. Retracted TWA-SPME extractions followed the theoretical model. Extracted n of BTEX was proportional to Cgas, t, Dg and T, and inversely proportional to Z. Method detection limits were 1.8, 2.7, 2.1 and 5.2 mg m(-3) (0.51, 0.83, 0.66 and 1.62 ppm) for BTEX, respectively. The contribution of extraction onto metallic surfaces was reproducible and influenced by Cgas and t, and less so by T and Z. The new method was applied to measure BTEX in the exhaust gas of a 1995 Ford Crown Victoria and compared with a whole gas and direct injection method. PMID:25911428
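The proportionalities stated above (n proportional to Cgas, t and Dg, and inversely proportional to Z) follow from Fick's first law for steady diffusion along the needle bore; a sketch of the standard retracted-fiber TWA relation, with hypothetical parameter values (function name and numbers are illustrative, not from this paper):

```python
def twa_concentration(n, z, d_g, area, t):
    """Time-weighted average gas concentration from mass n collected by
    an SPME fiber retracted a distance z into a needle of cross-sectional
    opening `area`, per Fick's first law for steady diffusion:
        n = d_g * area * C * t / z   =>   C = n * z / (d_g * area * t)
    """
    return n * z / (d_g * area * t)
```

Doubling the retraction depth z halves the diffusive uptake rate, which is how the authors tuned the method to cover a wider concentration range without reaching equilibrium.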
NASA Astrophysics Data System (ADS)
Raju, M. S.; Krishnamurthy, L.
1986-07-01
The near-wake region in a ducted bluff-body combustor was investigated by finite-difference computations. The numerical predictions are based upon the time-independent, Reynolds-averaged Navier-Stokes equations and the k-epsilon turbulence model. The steady-state calculations address both nonreacting and reacting flowfields in a novel configuration to more realistically simulate some of the essential features of the primary zone of a gas turbine combustion chamber. This configuration is characterized by turbulent mixing and combustion in the recirculating near-wake region downstream of an axisymmetric bluff body due to two annular air streams--an outer swirl-free flow and an inner swirling flow--and a central fuel jet. The latter contains propane for reacting flows and carbon dioxide for nonreacting flows. In view of the large number of geometrical and flow parameters involved, the reported results are concerned with only a limited parametric examination with the major emphasis being on nonreacting flows. Questions addressed for a particular set of geometric parameters include the effects of variation of mass flow rates in all three streams and the influence of swirl in the middle stream. Reacting computations investigate the influence of swirl on combustion, as well as that of combustion on the flowfield.
Using corporate finance to engineer an organizational turnaround.
Sussman, Jason H; Dziesinski, Ray R
2002-11-01
Georgia's Southern Regional Medical Center used a proven corporate finance approach to dramatically improve its financial position and integrate its strategic and financial planning. Managers throughout the organization were educated about principles of corporate finance. Reliable cash-flow projections were used to create a multiyear glide path to financial stability. Initiatives were tied to specific time frames and quantifiable financial goals and underwent a standardized review process. PMID:12656032
NASA Astrophysics Data System (ADS)
Harris, Brent J.; Steber, Amanda L.; Pate, Brooks H.
2013-06-01
The design and performance of a 260-295 GHz segmented chirped-pulse Fourier transform mm-wave spectrometer is presented. The spectrometer uses an arbitrary waveform generator to create an excitation and detection waveform. The excitation waveform is a series of chirped pulses with 720 MHz bandwidth at mm-wave and about 200 ns pulse duration. The excitation pulses are produced using an x24 active multiplier chain with a peak power of 30 mW. Following a chirped pulse excitation, the molecular emission from all transitions in the excitation bandwidth is detected using heterodyne detection. The free induction decay (FID) is collected for about 1.5 microseconds and each segment measurement time period is 2 microseconds. The local oscillator for the detection in each segment is also created from the arbitrary waveform generator. The full excitation waveform contains 50 segments that scan the chirped pulse frequency and LO frequency across the 260-295 GHz frequency range in a total measurement time of 100 microseconds. The FID from each measurement segment is digitized at 4 GSamples/s, for a record length of 400 kpts. Signal averaging is performed by accumulating the FID signals from each sweep through the spectrum in a 32-bit FPGA. This allows the acquisition of 16 million sequential 260-295 GHz spectra in real time. The final spectrum is produced from fast Fourier transform of the FID in each measurement segment with the frequency calculated using the segment's LO frequency. The agility of the arbitrary waveform generator light source makes it possible to perform several coherent spectroscopic measurements to speed the analysis of the spectrum. In particular, high-sensitivity double-resonance measurements can be performed by applying a "pi-pulse" to a selected molecular transition and observing the changes to all other transitions in the 260-295 GHz frequency range of the spectrometer. In this mode of operation, up to 50 double-resonance frequencies can be used in each
NASA Astrophysics Data System (ADS)
Loubet, Benjamin; Carozzi, Marco
2015-04-01
Tropospheric ammonia (NH3) is a key player in atmospheric chemistry and its deposition is a threat to the environment (ecosystem eutrophication, soil acidification and reduction in species biodiversity). Most global NH3 emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emissions from a homogeneous source of known geometry. When the emission derives from different sources inside the measured footprint, the emission should be treated as a multi-source problem. This work aims at estimating whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments, composed of small fields (typically squares of 25 m side) located close to each other, using low-cost NH3 measurements (diffusion samplers). To that end, a numerical experiment was designed with a combination of 3 x 3 square field sources (625 m2) and a set of sensors placed at the centre of each field at several heights, as well as at 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian stochastic model (WindTrax) and a Gaussian-like model (FIDES). The concentrations were averaged over various integration times (3 hours to 28 days) to mimic the diffusion sampler behaviour under several sampling strategies. The sources were then inferred by inverse modelling using the averaged concentrations and the same models in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of NH3 emissions to surface temperature. A combination of emission patterns (constant, linear decreasing, exponential decreasing and Gaussian type) and strengths was used to evaluate the uncertainty of the inversion method. Each numerical experiment covered a period of 28
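A minimal sketch of the multi-source inversion step: with a dispersion matrix M whose entry (i, j) is the modelled concentration at sensor i per unit strength of source j, the source-strength vector is recovered by least squares. This is the generic linear formulation underlying such inversions, not the WindTrax/FIDES implementation, and all names here are hypothetical.

```python
import numpy as np

def infer_sources(dispersion_matrix, concentrations):
    """Solve C = M @ S for source strengths S in the least-squares
    sense, given modelled unit-source dispersion matrix M and
    measured (time-averaged) concentrations C."""
    s, *_ = np.linalg.lstsq(dispersion_matrix, concentrations, rcond=None)
    return s
```

With more sensors than sources (as in the 3 x 3 field layout with multiple heights and downwind positions), the system is overdetermined and the least-squares solution averages out sensor noise.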
NASA Astrophysics Data System (ADS)
Morán-López, J. T.; Schilling, O.
2014-05-01
Turbulent mixing generated by shock-driven acceleration of a perturbed interface is simulated using a new multi-component Reynolds-averaged Navier-Stokes (RANS) model closed with a two-equation turbulence model. The model is implemented in a hydrodynamics code using a third-order weighted essentially non-oscillatory finite-difference method for the advection terms and a second-order central difference method for the gradients in the source and diffusion terms. In the present reshocked Richtmyer-Meshkov instability and mixing study, an incident shock is generated in air and progresses into a sulfur hexafluoride test section. The time evolution of the predicted mixing layer widths corresponding to six shock tube test section lengths are compared with experimental measurements and three-dimensional multi-mode numerical simulations. The mixing layer widths are also compared with the analytical self-similar power-law solution of the simplified model equations prior to reshock. A set of model coefficients and initial conditions specific to these six experiments is established, for which the widths before and after reshock agree very well with experimental and numerical simulation data. A second set of general coefficients that accommodates a broader range of incident shock Mach numbers, Atwood numbers, and test section lengths is also established by incorporating additional experimental data and previous RANS modeling. Terms in the budgets of the turbulent kinetic energy and dissipation rate equations are examined to evaluate the relative importance of turbulence production, dissipation and diffusion mechanisms during mixing. Convergence results for the mixing layer widths, mean fields, and turbulent fields under grid refinement are presented for each of the cases.
Field demonstration of rapid turnaround, multilevel groundwater screening
Tingle, A.R.; Baker, L.; Long, D.D.; Miracle, M.
1994-09-01
A combined technology approach to rapidly characterizing source area and downgradient groundwater associated with a past fuel spill has been field tested. The purpose of this investigation was to determine the presence and extent of fuel-related compounds or indications of their biodegradation in groundwater. The distance from the source area to be investigated was established by calculating the potential extent of a plume based only on groundwater flow velocities. To accomplish this objective, commercially available technologies were combined and used to rapidly assess the source area and downgradient groundwater associated with the fuel discharge. The source of contamination that was investigated overlies glacial sand and gravel outwash deposits. Historical data suggest that from 1955 to 1970 as many as 1 to 6 million gal of aviation gasoline (AVGAS) were lost at the study area. Although the remedial investigation (RI) for this study area indicated fuel-related groundwater contamination at the source area, fuel-related contamination was not detected in downgradient monitoring wells. Rapid horizontal groundwater velocities and the 24-year time span from the last reported spill further suggest that a plume of contaminated groundwater could extend several thousand feet downgradient. The lack of contamination downgradient from the source suggests two possibilities: (1) monitoring wells installed during the RI did not intersect the plume or (2) fuel-related compounds had naturally degraded.
Fresh policies and procedures, transparency fuel ED turnaround.
2016-05-01
The ED at Banner Payson Medical Center in Payson, AZ, has charted dramatic improvements on key metrics through a range of staff and policy changes. In just a few months, the ED has halved wait times, patient satisfaction has improved, and daily volume is up. Administrators say the secret to the success of the effort is a move to be transparent by posting key metrics regarding patient flow, a tactic that has helped the team pull together and feel a sense of accomplishment when performance goals are achieved. Administrators adjusted staff schedules to better match patient volume patterns in the ED, and they added staff to help nurses during peak hours. A new provider group enlisted the assistance of scribes during peak hours to manage the documentation workload while also enabling physicians to focus more on patient interaction. Physicians hold end-of-shift huddles to review successes and challenges, and to improve physician/nurse communication. The tactic also helps develop the staff from an educational standpoint. PMID:27266001
ERIC Educational Resources Information Center
McLester, Susan
2011-01-01
The Obama administration has grand hopes for turning around the nation's lowest-performing schools, in part by allocating $3.5 billion for School Improvement Grants. Unfortunately, there simply aren't enough qualified principals to replace those mandated to be fired under two of the four school improvement models that the federal government says…
NASA Astrophysics Data System (ADS)
Giassi, Davide; Long, Marshall B.
2016-08-01
Two alternative image readout approaches are demonstrated to improve the signal-to-noise ratio (SNR) in temporally resolved laser-based imaging experiments of turbulent phenomena. The first method exploits the temporal decay characteristics of the phosphor screens of image intensifiers when coupled to an interline-transfer CCD camera operated in double-frame mode. Specifically, the light emitted by the phosphor screen, which has a finite decay constant, is equally distributed and recorded over the two sequential frames of the detector so that an averaged image can be reconstructed. The characterization of both detector and image intensifier showed that the technique preserves the correct quantitative information, and its applicability to reactive flows was verified using planar Rayleigh scattering and tested with the acquisition of images of both steady and turbulent partially premixed methane/air flames. The comparison between conventional Rayleigh results and the averaged ones showed that the SNR of the averaged image is higher than the conventional one; with the setup used in this work, the gain in SNR was seen to approach 30 %, for both the steady and turbulent cases. The second technique uses the two-frame readout of an interline-transfer CCD to increase the image SNR based on high dynamic range imaging, and it was tested in an unsteady non-reactive flow of Freon-12 injected in air. The result showed a 15 % increase in the SNR of the low-pixel-count regions of an image, when compared to the pixels of a conventionally averaged one.
Honoré, Peggy A; Stefanak, Matthew; Dessens, Scott
2012-01-01
A turnaround describes an organization's ability to recover from successive periods of decline. Current and projected declines in US economic conditions continue to place local public health departments at risk of fiscal exigency. This examination focused on turnaround methodologies used by a local public health department to reverse successive periods of operational and financial declines. Illustrations are provided on the value added by implementing financial ratio and trend analysis in addition to using evidence-based private sector turnaround strategies of retrenchment, repositioning, and reorganization. Evidence has shown how the financial analysis and strategies aided in identifying operational weakness and set in motion corrective measures. The Public Health Uniform Data System is introduced along with a list of standards offered for mainstreaming these and other routine stewardship practices to diagnose, predict, and prevent agency declines. PMID:22635191
The Race Is On To Shorten the Turnaround Time for Diagnosis of Multidrug-Resistant Tuberculosis
Somoskovi, Akos
2015-01-01
To realize the most benefit from multidrug-resistant tuberculosis (MDR-TB) screening, all nucleic acid amplification test (NAAT)-positive respiratory specimens should be universally tested. Once an MDR-TB diagnosis is established, additional testing is warranted to provide details about the detected mutations. The lab-on-chip technology described by A. M. Cabibbe et al. (J Clin Microbiol 53:3876–3880, 2015, http://dx.doi.org/10.1128/JCM.01824-15) potentially provides this much needed information. PMID:26378276
Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Schuman, Joel S.
2016-01-01
Purpose To develop a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Methods Twenty-one eyes of 21 healthy volunteers were scanned with a noneye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to nonframe-averaged images with voxel resampling and adding amplitude deviation with 15-time repetitions. Signal-to-noise (SNR), contrast-to-noise ratios (CNR), and the distance between the end of visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. Results All virtual-averaged nonframe-averaged images showed notable improvement and clear resemblance to active eye-tracking frame-averaged images. Signal-to-noise and CNR were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that became not significant after processing. Conclusion The virtual averaging method successfully improved nontracking nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Translational Relevance Virtual averaging may enable detailed retinal structure studies on images acquired using a mixture of nonframe-averaged and frame-averaged OCT devices without concern about systematic differences in both qualitative and quantitative aspects. PMID:26835180
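The paper's virtual averaging emulates multi-frame averaging from a single frame; the generic principle it builds on, that averaging N repeated frames reduces uncorrelated noise by roughly a factor of sqrt(N), can be sketched as follows (synthetic data, not OCT scans, and not the authors' voxel-resampling algorithm):

```python
import numpy as np

def frame_average(frames):
    """Average N repeated frames; uncorrelated zero-mean noise in the
    result falls by about 1/sqrt(N) relative to a single frame."""
    return np.mean(np.asarray(frames), axis=0)
```

With 15 repetitions, as used above, the expected noise reduction is about sqrt(15) ≈ 3.9, on the order of the SNR gain the authors report (30.5 to 47.6 dB corresponds to a large amplitude-domain improvement, so the analogy is qualitative only).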
Chrien, R.E.
1986-10-01
The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.
The Averaging Problem in Cosmology
NASA Astrophysics Data System (ADS)
Paranjape, Aseem
2009-06-01
This thesis deals with the averaging problem in cosmology, which has gained considerable interest in recent years, and is concerned with correction terms (after averaging inhomogeneities) that appear in the Einstein equations when working on the large scales appropriate for cosmology. It has been claimed in the literature that these terms may account for the phenomenon of dark energy which causes the late time universe to accelerate. We investigate the nature of these terms by using averaging schemes available in the literature and further developed to be applicable to the problem at hand. We show that the effect of these terms when calculated carefully, remains negligible and cannot explain the late time acceleration.
ERIC Educational Resources Information Center
Calkins, Andrew; Guenther, William; Belfiore, Grace; Lash, Dave
2007-01-01
The turnaround recommendations and framework in "The Turnaround Challenge" grew out of both new research and synthesis of extensive existing research, as carried out by Mass Insight Education & Research Institute and its partners since September 2005. If the main report is the tip of the proverbial iceberg, this supplement represents at least part…
ERIC Educational Resources Information Center
Jones, Stephanie; Clarke, Lane; Enriquez, Grace
2009-01-01
This book demonstrates a five-part framework for teachers, reading specialists, and literacy coaches who want to help their least engaged students become powerful readers. Merging theory and practice, the guide offers successful strategies to reach your "struggling" learners. The authors show how you can "turn-around" your instructional practice,…
ERIC Educational Resources Information Center
Jacobson, Stephen L.; Szczesek, Jill
2013-01-01
This study investigates the economic impact of a "turnaround" school on real property values in its surrounding community as related to the argument introduced by Tiebout in 1956 correlating local public goods, in this case school success, to housing-location decision making. Using single-family home sales found on the Multiple Listing System and…
ERIC Educational Resources Information Center
Marsh, Julie A.; Strunk, Katharine O.; Bush, Susan
2013-01-01
Purpose: Despite the popularity of school "turnaround" and "portfolio district" management as solutions to low performance, there has been limited research on these strategies. The purpose of this paper is to address this gap by exploring the strategic case of Los Angeles Unified School District's Public School Choice…
ERIC Educational Resources Information Center
Reyes, Augustina; Garcia, Andres
2014-01-01
In an era of school reform and high-stakes accountability, the major challenge in education is to turn around the nation's lowest-performing schools. National policy provides abundant resources, often with restrictive prerequisites. Research indicates that quality principals and teachers are the most important factors for turning around the…
ERIC Educational Resources Information Center
Carlson, Robert V.
This paper presents a case study of a rural community exhibiting a dramatic turnaround in community support for a new school bond issue. Demographic change was partly responsible for the change in community attitudes, with two waves of immigration altering the long-term conservative orientation of this community. After a series of failed…
ERIC Educational Resources Information Center
Ylimaki, Rose M.; Brunderman, Lynnette; Bennett, Jeffrey V.; Dugan, Thad
2014-01-01
Today's accountability policies and changing demographics have created conditions in which leaders must rapidly build school capacity and improve outcomes in culturally diverse schools. This article presents findings from a mixed-methods evaluation of an Arizona Turnaround Leadership Development Project. The project drew on studies of…
NASA Astrophysics Data System (ADS)
Zanis, P.; Maillard, E.; Staehelin, J.; Zerefos, C.; Kosmidis, E.; Tourpali, K.; Wohltmann, I.
2006-11-01
In this work, we investigate the issue of the turnaround in ozone trends of the recently homogenized Umkehr ozone record of Arosa, Switzerland, which is the longest Umkehr data set, extending from 1956 to date, using different statistical methods. All methods show statistically significant negative ozone trends from 1970 to 1995 in the upper stratosphere (above 32.6 km) throughout the course of the year as well as in the lower stratosphere (below 23.5 km) mainly during winter to spring, which can be partially attributed to dynamical changes. Over the recent period (1996-2004) the year-round trends in the lower stratosphere become positive and are more positive during the winter to spring period. The results also show changes in upper stratospheric ozone trends after 1996, which are, however, not statistically significant at the 95% level if aerosol correction is applied to the retrieved data. This lack of significant trend changes during the recent period in the upper stratosphere is regionally coherent with recent results derived from upper stratospheric ozone data recorded by lidars, microwave radiometers, and satellite instruments at an adjacent location. Although the positive change in trends after 1996 both for upper and lower stratospheric ozone is in line with the reduction of the emissions of ozone-depleting substances from the successful implementation of the Montreal Protocol and its amendments, we recommend, because of lack of significance for the upper stratospheric trends, repeating this analysis in a few years in order to overcome ambiguous results for documentation of the turnaround of upper stratospheric ozone.
NASA Astrophysics Data System (ADS)
Kim, Seung-Kyu; Park, Jong-Eun
2014-06-01
Despite remarkable achievements for some chemicals, field-measurement techniques remain underdeveloped for volatile hydrophobic organic chemicals (HOCs) that are the subject of international concern. This study assesses the applicability of passive air sampling (PAS) by comparing PUF-PAS with a modified SIP-PAS, made by impregnating XAD-4 powder into PUF; it reviews the principles of PAS, screens sensitive parameters, and determines the uncertainty range of PAS-derived concentrations. The PAS air sampling rate determined in this study, corrected against a co-deployed low-volume active air sampler (LAS) using neutral PFCs as model chemicals, was ~1.2 m3 day-1. Our assessment shows that the improved sorption capacity of a SIP lengthens the PAS deployment duration by expanding the linear uptake range, thereby enlarging the effective air sampling volume and the detection frequency of chemicals at trace levels. Consequently, volatile chemicals can be collected over sufficiently long times without reaching equilibrium when using SIP, which is not possible with PUF. The parameter to which the PAS-derived air concentration (CA) is most sensitive is the air-side mass transfer coefficient (kA), implying the necessity of spiking depuration chemicals (DCs), because this parameter is strongly related to meteorological conditions. Uncertainty in partition coefficients (KPSM-A or KOA) influences the PAS-derived CA more strongly for chemicals with lower KPSM-A. The PAS-derived CA carries an uncertainty range of roughly half to three times the calculated value. This work is expected to establish solid grounds for improving field-measurement techniques for HOCs.
Exact averaging of laminar dispersion
NASA Astrophysics Data System (ADS)
Ratnakar, Ram R.; Balakotaiah, Vemuri
2011-02-01
We use the Liapunov-Schmidt (LS) technique of bifurcation theory to derive a low-dimensional model for laminar dispersion of a nonreactive solute in a tube. The LS formalism leads to an exact averaged model, consisting of the governing equation for the cross-section averaged concentration, along with the initial and inlet conditions, to all orders in the transverse diffusion time. We use the averaged model to analyze the temporal evolution of the spatial moments of the solute and show that they do not have the centroid displacement or variance deficit predicted by the coarse-grained models derived by other methods. We also present a detailed analysis of the first three spatial moments for short and long times as a function of the radial Peclet number and identify three clearly defined time intervals for the evolution of the solute concentration profile. By examining the skewness in some detail, we show that the skewness increases initially, attains a maximum for time scales of the order of transverse diffusion time, and the solute concentration profile never attains the Gaussian shape at any finite time. Finally, we reason that there is a fundamental physical inconsistency in representing laminar (Taylor) dispersion phenomena using truncated averaged models in terms of a single cross-section averaged concentration and its large scale gradient. Our approach evaluates the dispersion flux using a local gradient between the dominant diffusive and convective modes. We present and analyze a truncated regularized hyperbolic model in terms of the cup-mixing concentration for the classical Taylor-Aris dispersion that has a larger domain of validity compared to the traditional parabolic model. By analyzing the temporal moments, we show that the hyperbolic model has no physical inconsistencies that are associated with the parabolic model and can describe the dispersion process to first order accuracy in the transverse diffusion time.
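The moment analysis described in this abstract can be illustrated numerically. Below is a minimal sketch of computing the first three spatial moments and the skewness of a concentration profile; the Gamma-like profile c(x) = x·exp(−x), the grid, and the tolerances are illustrative assumptions, not the paper's flow problem:

```python
import numpy as np

def spatial_moments(x, c):
    """Mass, centroid, variance, and skewness of a profile c(x) on a uniform grid."""
    dx = x[1] - x[0]
    m0 = c.sum() * dx                              # zeroth moment (total mass)
    mean = (x * c).sum() * dx / m0                 # first moment (centroid)
    var = ((x - mean) ** 2 * c).sum() * dx / m0    # second central moment
    mu3 = ((x - mean) ** 3 * c).sum() * dx / m0    # third central moment
    return m0, mean, var, mu3 / var ** 1.5         # last entry is the skewness

x = np.linspace(0.0, 20.0, 4001)
c = x * np.exp(-x)   # Gamma(k=2) shape: mean 2, variance 2, skewness sqrt(2)
m0, mean, var, skew = spatial_moments(x, c)
```

For a Gaussian profile the returned skewness would be ~0; the paper's finding that the profile never becomes Gaussian at finite time corresponds to this third-moment diagnostic never reaching zero.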
NASA Astrophysics Data System (ADS)
Rung-Arunwan, Tawat; Siripunvaraporn, Weerachai; Utada, Hisashi
2016-04-01
Through a large number of magnetotelluric (MT) observations conducted in a study area, one can obtain regional one-dimensional (1-D) features of the subsurface electrical conductivity structure simply by taking the geometric average of determinant invariants of observed impedances. This method was proposed by Berdichevsky and coworkers, which is based on the expectation that distortion effects due to near-surface electrical heterogeneities will be statistically smoothed out. A good estimation of a regional mean 1-D model is useful, especially in recent years, to be used as a priori (or a starting) model in 3-D inversion. However, the original theory was derived before the establishment of the present knowledge on galvanic distortion. This paper, therefore, reexamines the meaning of the Berdichevsky average by using the conventional formulation of galvanic distortion. A simple derivation shows that the determinant invariant of distorted impedance and its Berdichevsky average is always downward biased by the distortion parameters of shear and splitting. This means that the regional mean 1-D model obtained from the Berdichevsky average tends to be more conductive. As an alternative rotational invariant, the sum of the squared elements (ssq) invariant is found to be less affected by bias from distortion parameters; thus, we conclude that its geometric average would be more suitable for estimating the regional structure. We find that the combination of determinant and ssq invariants provides parameters useful in dealing with a set of distorted MT impedances.
Averaging the inhomogeneous universe
NASA Astrophysics Data System (ADS)
Paranjape, Aseem
2012-03-01
A basic assumption of modern cosmology is that the universe is homogeneous and isotropic on the largest observable scales. This greatly simplifies Einstein's general relativistic field equations applied at these large scales, and allows a straightforward comparison between theoretical models and observed data. However, Einstein's equations should ideally be imposed at length scales comparable to, say, the solar system, since this is where these equations have been tested. We know that at these scales the universe is highly inhomogeneous. It is therefore essential to perform an explicit averaging of the field equations in order to apply them at large scales. It has long been known that due to the nonlinear nature of Einstein's equations, any explicit averaging scheme will necessarily lead to corrections in the equations applied at large scales. Estimating the magnitude and behavior of these corrections is a challenging task, due to difficulties associated with defining averages in the context of general relativity (GR). It has recently become possible to estimate these effects in a rigorous manner, and we will review some of the averaging schemes that have been proposed in the literature. A tantalizing possibility explored by several authors is that the corrections due to averaging may in fact account for the apparent acceleration of the expansion of the universe. We will explore this idea, reviewing some of the work done in the literature to date. We will argue, however, that this rather attractive idea is in fact not viable as a solution of the dark energy problem when confronted with observational constraints.
Determining GPS average performance metrics
NASA Technical Reports Server (NTRS)
Moore, G. V.
1995-01-01
Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego as over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air, and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude-belt dwell times among longitude points is used to compute average performance metrics. These metrics include the average number of GPS vehicles visible; relative average accuracies in the radial, in-track, and cross-track (or radial, north/south, east/west) directions; and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location, and understand performance trends among various users.
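The latitude-belt dwell times behind the Tierra del Fuego comparison can be sketched from the standard circular-orbit result that the time density over latitude φ is p(φ) = cos φ / (π·sqrt(sin²i − sin²φ)) for inclination i; the 55° inclination is the nominal GPS value, and the midpoint-rule discretization below is my own illustrative choice, not the paper's tool:

```python
import math

def latitude_dwell_fraction(lat_lo_deg, lat_hi_deg, incl_deg, steps=10000):
    """Fraction of an orbital period a circular-orbit satellite spends between
    two latitudes, from p(lat) = cos(lat) / (pi * sqrt(sin^2 i - sin^2 lat))."""
    i = math.radians(incl_deg)
    lo, hi = math.radians(lat_lo_deg), math.radians(lat_hi_deg)
    total, dphi = 0.0, (hi - lo) / steps
    for k in range(steps):
        phi = lo + (k + 0.5) * dphi   # midpoint rule; integrand is singular at +/-i
        total += (math.cos(phi)
                  / (math.pi * math.sqrt(math.sin(i) ** 2 - math.sin(phi) ** 2))
                  * dphi)
    return total
```

The density diverges toward the inclination latitude, which is why a satellite at 55° inclination spends far more time per degree of latitude near ±55° (Tierra del Fuego) than near 20° (Hawaii).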
Vibration analysis by time-average holography
NASA Technical Reports Server (NTRS)
Aprahamian, R.; Evensen, D. A.
1971-01-01
Method photographs vibration modes as high as 100 kHz. Technique also measures vibration decay and damping characteristics. Compensation techniques for structural size limitations and for background noise excitations are discussed.
NASA Astrophysics Data System (ADS)
Hulot, G.; Khokhlov, A.; Johnson, C. L.
2012-12-01
It is well known that the geometry of the recent time-averaged paleomagnetic field (TAF) is very close to that of a geocentric axial dipole (GAD). Yet numerous numerical dynamo simulations show that some departures from such a simple geometry are to be expected, not least because of the heterogeneous thermal core-mantle boundary conditions that the convecting mantle imposes on the geodynamo. Indeed, many TAF models recovered from averaging lava flow paleomagnetic directional data (the most numerous and reliable of all data) suggest this is the case. However, assessing the significance of such minor departures from the GAD is particularly challenging, because non-linear directional data are sensitive not only to the time-averaged component of the field but also to its time-fluctuating component, known as the paleosecular variation (PSV). This means that in addition to data errors, PSV must also be taken into account when assessing any claims, based on lava flow directional data, of departures of the TAF from the GAD. Furthermore, because of limited age information for these data, it is necessary to assess departures from the GAD by resorting to a statistical approach. We report recent progress using an approach we have suggested and further developed (Khokhlov et al., Geophysical Journal International, 2001, 2006) to test the compatibility of combined time-averaged (TAF) and paleosecular variation (PSV) field models against any lava flow paleomagnetic database, assuming that these TAF and PSV models are defined within the Giant Gaussian Process statistical framework. In particular, we will show how sensitive statistical measures of the compatibility of a combined set of TAF and PSV models with a given directional database can be defined. These measures can be used to test published TAF and PSV models with updated 0-5 Ma lava flow paleomagnetic data sets. They also lay the groundwork for designing inverse methods better suited to seek the minimum required
Residual life assessment of the SSME/ATD HPOTP turnaround duct (TAD)
NASA Technical Reports Server (NTRS)
Gross, R. Steven
1996-01-01
This paper is concerned with the prediction of the low cycle thermal fatigue behavior of a component in a developmental (ATD) high pressure liquid oxygen turbopump (HPOTP) for the Space Shuttle Main Engine (SSME). This component is called the Turnaround Duct (TAD). The TAD is a complex single-piece casting of MAR-M-247 material. Its function is to turn the hot turbine exhaust gas (a 1200 F hydrogen-rich gas stream) so that it can exhaust radially out of the turbopump. In very simple terms, the TAD consists of two rings connected axially by 22 hollow airfoil-shaped struts, with turning vanes placed at the top, middle, and bottom of each strut. The TAD is attached to the other components of the pump via bolts passing through 14 of the 22 struts. Of the remaining eight struts, four are equally spaced (at 90 deg intervals) and contain a cooling tube through which liquid hydrogen passes on its way to cool the shaft bearing assemblies. The remaining four struts are empty. One of the pump units in the certification test series was destructively examined after 22 test firings. Substantial axial cracking was found in two of the struts which contain cooling tubes. None of the other 20 struts showed any sign of internal cracking. This unusual low cycle thermal fatigue behavior within the two cooling-tube struts is the focus of this study.
A Bound Violation on the Galaxy Group Scale: The Turn-around Radius of NGC 5353/4
NASA Astrophysics Data System (ADS)
Lee, Jounghun; Kim, Suk; Rey, Soo-Chang
2015-12-01
The first observational evidence for the violation of the maximum turn-around radius on the galaxy group scale is presented. The NGC 5353/4 group is chosen as an ideal target for our investigation of the bound-violation because of its proximity, low-density environment, optimal mass scale, and the existence of a nearby thin straight filament. Using the observational data on the line-of-sight velocities and three-dimensional distances of the filament galaxies located in the bound zone of the NGC 5353/4 group, we construct their radial velocity profile as a function of separation distance from the group center and then compare it to the analytic formula obtained empirically by Falco et al. to find the best-fit value of an adjustable parameter with the help of the maximum likelihood method. The turn-around radius of NGC 5353/4 is determined to be the separation distance where the adjusted analytic formula for the radial velocity profile yields zero. The estimated turn-around radius of NGC 5353/4 turned out to substantially exceed the upper limit predicted by the spherical model based on the ΛCDM cosmology. Even when the restrictive condition of spherical symmetry is released, the estimated value is found to be only marginally consistent with the ΛCDM expectation.
Covariant approximation averaging
NASA Astrophysics Data System (ADS)
Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2015-06-01
We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
Bonnor, W.B.
1987-05-01
The Einstein-Straus (1945) vacuole is here used to represent a bound cluster of galaxies embedded in a standard pressure-free cosmological model, and the average density of the cluster is compared with the density of the surrounding cosmic fluid. The two are nearly but not quite equal, and the more condensed the cluster, the greater the difference. A theoretical consequence of the discrepancy between the two densities is discussed. 25 references.
On radar time and the twin ``paradox''
NASA Astrophysics Data System (ADS)
Dolby, Carl E.; Gull, Stephen F.
2001-12-01
In this paper we apply the concept of radar time (popularized by Bondi in his work on k calculus) to the well-known relativistic twin "paradox." Radar time is used to define hypersurfaces of simultaneity for a class of traveling twins, from the "immediate turn-around" case, through the "gradual turn-around" case, to the "uniformly accelerating" case. We show that this definition of simultaneity is independent of choice of coordinates, and assigns a unique time to any event (with which the traveling twin can send and receive signals), resolving some common misconceptions.
NASA Astrophysics Data System (ADS)
Siegel, Edward Carl-Ludwig; Newcomb, Simon; Strutt-Rayleigh, John William; Poincare, Henri; Weyl, Hermann; Benford, Frederick; Antonoff, Marvin
2015-03-01
DIGIT-PHYSICS: Digits?: for a very long time giving us all the FINGER!!! CONTRA Wigner, "On the Unreasonable Effectiveness of Physics in Mathematics!" A surprise in theoretical/experimental physics and/or ostensibly pure mathematics: PHYSICS: quantum mechanics/statistical mechanics DIGITS-LAW(S); the digits' ostensibly "pure-mathematics" 1:1 map onto the QUANTUM!!! [Google: "http://www.benfordonline.net/list/chronological"]: Newcomb [Am. J. Math. 4, 39 (1881)], Poincaré [Calcul des Probabilités (1912)], Weyl [Math. Ann. 77, 313 (1916)], Benford [J. Am. Phil. Soc. 78, 115 (1938)], ..., Antonoff/Siegel [AMS Joint Mtg., San Diego (2002), abs. #973-60-124]: the empirical inter-digit law (on any/all averages)
P(d) = log10(1 + 1/d) = log10((d + 1)/d), which upon algebraic inversion is d = 1/(10^P(d) - 1) = 1/(e^(2.303 P(d)) - 1) = 1/(e^(<ω>) - 1): Digits Are Bosons Are Quanta Because (On Average) Quanta and Bosons Are and Always Were Digits!!! (Ex: atomic energy-level numbering: 0, ..., 9.) ANY/ALL QUANTUM physics [Planck (1901), Einstein (1905), Bose (1924), Einstein (1925), vs. Fermi (1927), Dirac (1927), ...] is and always was Newcomb (1881) DIGIT-physics!!!
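The inter-digit law invoked in this abstract is the Newcomb-Benford first-digit law, P(d) = log10(1 + 1/d). A minimal numerical check is sketched below; the choice of 2**n as test data is my own, a well-known Benford-conforming sequence, not the authors':

```python
import math
from collections import Counter

def benford_p(d):
    """Newcomb-Benford probability that the leading decimal digit is d (1-9)."""
    return math.log10(1 + 1 / d)

# Empirical leading-digit frequencies of 2**n for n = 0..999.
counts = Counter(int(str(2 ** n)[0]) for n in range(1000))
freq = {d: counts[d] / 1000 for d in range(1, 10)}
```

Inverting P(d) = log10(1 + 1/d) gives d = 1/(10**P - 1), the Bose-Einstein-like occupation form the abstract alludes to.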
Freethey, G.W.; Spangler, L.E.; Monheiser, W.J.
1994-01-01
A 48-square-mile area in the southeastern part of the Salt Lake Valley, Utah, was studied to determine if generalized information obtained from geologic maps, water-level maps, and drillers' logs could be used to estimate hydraulic conductivity, porosity, and slope of the potentiometric surface: the three properties needed to calculate average linear velocity of ground water. Estimated values of these properties could be used by water-management and regulatory agencies to compute values of average linear velocity, which could be further used to estimate travel time of ground water along selected flow lines, and thus to determine wellhead protection areas around public-supply wells. The methods used to estimate the three properties are based on assumptions about the drillers' descriptions, the depositional history of the sediments, and the boundary conditions of the hydrologic system. These assumptions were based on geologic and hydrologic information determined from previous investigations. The reliability of the estimated values for hydrologic properties and average linear velocity depends on the accuracy of these assumptions. Hydraulic conductivity of the principal aquifer was estimated by calculating the thickness-weighted average of values assigned to different drillers' descriptions of material penetrated during the construction of 98 wells. Using these 98 control points, the study area was divided into zones representing approximate hydraulic-conductivity values of 20, 60, 100, 140, 180, 220, and 250 feet per day. This range of values is about the same range of values used in developing a ground-water flow model of the principal aquifer in the early 1980s. Porosity of the principal aquifer was estimated by compiling the range of porosity values determined or estimated during previous investigations of basin-fill sediments, and then using five different values ranging from 15 to 35 percent to delineate zones in the study area that were assumed to
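The thickness-weighted averaging of drillers' log intervals described above can be sketched as follows; the example log and its material names are hypothetical, though the conductivity values are drawn from the 20-250 ft/day range quoted in the abstract:

```python
def thickness_weighted_k(intervals):
    """Thickness-weighted average hydraulic conductivity (ft/day).

    intervals: list of (thickness_ft, k_ft_per_day) tuples, one per
    material description in a driller's log.
    """
    total_thickness = sum(t for t, _ in intervals)
    return sum(t * k for t, k in intervals) / total_thickness

# hypothetical log: 40 ft of gravel, 25 ft of sand, 35 ft of clayey sand
log = [(40.0, 250.0), (25.0, 140.0), (35.0, 20.0)]
k_avg = thickness_weighted_k(log)   # 142.0 ft/day for this example log
```

The weighting ensures that a thick low-conductivity interval pulls the average down in proportion to its share of the screened thickness.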
Chen, Guang-Hong; Li, Yinsheng
2015-01-01
Purpose: In x-ray computed tomography (CT), a violation of the Tuy data sufficiency condition leads to limited-view artifacts. In some applications, it is desirable to use data corresponding to a narrow temporal window to reconstruct images with reduced temporal-average artifacts. However, the need to reduce temporal-average artifacts in practice may result in a violation of the Tuy condition and thus undesirable limited-view artifacts. In this paper, the authors present a new iterative reconstruction method, synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON), to eliminate limited-view artifacts using data acquired within an ultranarrow temporal window that severely violates the Tuy condition. Methods: In time-resolved contrast enhanced CT acquisitions, image contrast dynamically changes during data acquisition. Each image reconstructed from data acquired in a given temporal window represents one time frame and can be denoted as an image vector. Conventionally, each individual time frame is reconstructed independently. In this paper, all image frames are grouped into a spatial–temporal image matrix and are reconstructed together. Rather than the spatial and/or temporal smoothing regularizers commonly used in iterative image reconstruction, the nuclear norm of the spatial–temporal image matrix is used in SMART-RECON to regularize the reconstruction of all image time frames. This regularizer exploits the low-dimensional structure of the spatial–temporal image matrix to mitigate limited-view artifacts when an ultranarrow temporal window is desired in some applications to reduce temporal-average artifacts. Both numerical simulations in two dimensional image slices with known ground truth and in vivo human subject data acquired in a contrast enhanced cone beam CT exam have been used to validate the proposed SMART-RECON algorithm and to demonstrate the initial performance of the algorithm. Reconstruction errors and temporal fidelity
Americans' Average Radiation Exposure
NA
2000-08-11
We live with radiation every day. We receive radiation exposure from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
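The two averaging operations at the core of DMA, a running time average maintained during the fine-scale simulation followed by volume averaging onto a coarser grid, can be sketched as below. The field, grid sizes, and coarsening factor are illustrative assumptions, not values from the report:

```python
import numpy as np

def running_time_average(avg, sample, n):
    """Update a running mean with the n-th sample (n >= 1)."""
    return avg + (sample - avg) / n

def volume_average(field, factor):
    """Block-average a 2-D field onto a grid coarsened by `factor` per direction."""
    ny, nx = field.shape
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

# Sketch: time-average a fluctuating 8x8 fine-grid field over 100 steps,
# then volume-average it onto a 2x2 coarse grid.
rng = np.random.default_rng(0)
avg = np.zeros((8, 8))
for n in range(1, 101):
    snapshot = 1.0 + 0.1 * rng.standard_normal((8, 8))   # mean flow + fluctuations
    avg = running_time_average(avg, snapshot, n)
coarse = volume_average(avg, 4)
```

In the actual method, products of fluctuating quantities would also be averaged, and the resulting correlations passed as source terms to the coarse-grid computation; this sketch shows only the averaging operators themselves.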
Poland, Michael P.
2014-01-01
Differencing digital elevation models (DEMs) derived from TerraSAR-X add-on for Digital Elevation Measurement (TanDEM-X) synthetic aperture radar imagery provides a measurement of elevation change over time. On the East Rift Zone (ERZ) of Kīlauea Volcano, Hawai‘i, the effusion of lava causes changes in topography. When these elevation changes are summed over the area of an active lava flow, it is possible to quantify the volume of lava emplaced at the surface during the time spanned by the TanDEM-X data, a parameter that can be difficult to measure across the entirety of an ~100 km2 lava flow field using ground-based techniques or optical remote sensing data. Based on the differences between multiple TanDEM-X-derived DEMs collected days to weeks apart, the mean dense-rock equivalent time-averaged discharge rate of lava at Kīlauea between mid-2011 and mid-2013 was approximately 2 m3/s, which is about half the long-term average rate over the course of Kīlauea's 1983-present ERZ eruption. This result implies that there was an increase in the proportion of lava stored versus erupted, a decrease in the rate of magma supply to the volcano, or some combination of both during this time period. In addition to constraining the time-averaged discharge rate of lava and the rates of magma supply and storage, topographic change maps derived from space-based TanDEM-X data provide insights into the four-dimensional evolution of Kīlauea's ERZ lava flow field. TanDEM-X data are a valuable complement to other space-, air-, and ground-based observations of eruptive activity at Kīlauea and offer great promise at locations around the world for aiding with monitoring not just volcanic eruptions but any hazardous activity that results in surface change, including landslides, floods, earthquakes, and other natural and anthropogenic processes.
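The discharge-rate calculation implied above, summed elevation change times pixel area divided by elapsed time, can be sketched as follows; the DEM values, pixel size, repeat interval, and the dense-rock-equivalent factor are all hypothetical:

```python
def mean_discharge_m3_per_s(dem_old, dem_new, pixel_area_m2, dt_days, dre_factor=1.0):
    """Time-averaged lava discharge rate from two differenced DEMs.

    dem_old, dem_new: flat sequences of pixel elevations (m) over the flow field.
    dre_factor: bulk-to-dense-rock-equivalent conversion (assumed value here).
    """
    dv = sum((h_new - h_old) * pixel_area_m2
             for h_old, h_new in zip(dem_old, dem_new))   # net volume change, m^3
    return dre_factor * dv / (dt_days * 86400.0)

# hypothetical 4-pixel flow field, 12 m x 12 m pixels, 11-day repeat interval
dem_old = [120.0, 121.5, 119.0, 122.0]
dem_new = [123.0, 122.5, 121.0, 122.0]
q = mean_discharge_m3_per_s(dem_old, dem_new, pixel_area_m2=144.0, dt_days=11.0)
```

In practice the sum would run over millions of pixels and the vertical differences would carry radar-derived uncertainties, but the volume-over-time structure of the estimate is the same.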
Averaging procedures for flow within vegetation canopies
NASA Astrophysics Data System (ADS)
Raupach, M. R.; Shaw, R. H.
1982-01-01
Most one-dimensional models of flow within vegetation canopies are based on horizontally averaged flow variables. This paper formalizes the horizontal averaging operation. Two averaging schemes are considered: pure horizontal averaging at a single instant, and time averaging followed by horizontal averaging. These schemes produce different forms for the mean and turbulent kinetic energy balances, and especially for the ‘wake production’ term describing the transfer of energy from large-scale motion to wake turbulence by form drag. The differences are primarily due to the appearance, in the covariances produced by the second scheme, of dispersive components arising from the spatial correlation of time-averaged flow variables. The two schemes are shown to coincide if these dispersive fluxes vanish.
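The dispersive covariance that distinguishes the two schemes can be made concrete: with angle brackets denoting the horizontal average and overbars time averages, <u̅w̅> = <u̅><w̅> + <u̅″w̅″>, where ″ is the spatial deviation of a time-averaged field. A numerical sketch with synthetic, spatially correlated time-mean fields (the data are invented for illustration):

```python
import numpy as np

def horiz_avg(field):
    """Horizontal average over sampling points at one height."""
    return field.mean()

rng = np.random.default_rng(1)
# time-averaged u and w at 1000 horizontal positions within a canopy layer
u_bar = 2.0 + 0.5 * rng.standard_normal(1000)
w_bar = 0.1 + 0.3 * u_bar + 0.2 * rng.standard_normal(1000)

u_dev = u_bar - horiz_avg(u_bar)   # spatial deviations of the time means
w_dev = w_bar - horiz_avg(w_bar)
dispersive_flux = horiz_avg(u_dev * w_dev)

# identity: <u_bar * w_bar> = <u_bar><w_bar> + dispersive flux
total = horiz_avg(u_bar * w_bar)
```

The first scheme (a single instantaneous horizontal average) never forms the time means, so this dispersive term does not appear in its covariances; the two schemes coincide only when the spatial correlation of the time-averaged fields vanishes.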
Averaging Robertson-Walker cosmologies
NASA Astrophysics Data System (ADS)
Brown, Iain A.; Robbers, Georg; Behrend, Juliane
2009-04-01
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff⁰ ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.
Ensemble averaging of acoustic data
NASA Technical Reports Server (NTRS)
Stefanski, P. K.
1982-01-01
A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions for the program. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System, consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.
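The core operation such a program performs, ensemble averaging of repeated records to suppress uncorrelated noise, can be sketched as follows; the signal, noise level, and record count below are illustrative assumptions, not values from the documentation:

```python
import numpy as np

def ensemble_average(records):
    """Point-by-point average of N aligned records (an N x M array);
    uncorrelated noise is reduced by roughly a factor of sqrt(N)."""
    return np.mean(records, axis=0)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 512, endpoint=False)
signal = np.sin(2.0 * np.pi * 5.0 * t)
records = signal + 0.5 * rng.standard_normal((64, 512))   # 64 noisy sweeps

avg = ensemble_average(records)
residual_rms = float(np.sqrt(np.mean((avg - signal) ** 2)))   # ~0.5 / sqrt(64)
```

Averaging must be done on records aligned to a common trigger; averaging unaligned sweeps would also attenuate the signal itself.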
Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average
ERIC Educational Resources Information Center
DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.
2007-01-01
Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…
Rosowski, John J.; Cheng, Jeffrey Tao; Ravicz, Michael E.; Hulli, Nesim; Hernandez-Montes, Maria; Harrington, Ellery; Furlong, Cosme
2009-01-01
Time-averaged holograms describing the sound-induced motion of the tympanic membrane (TM) in cadaveric preparations from three mammalian species and one live ear were measured using opto-electronic holography. This technique allows rapid measurements of the magnitude of motion of the tympanic membrane surface at frequencies as high as 25 kHz. The holograms measured in response to low and middle-frequency sound stimuli are similar to previously reported time-averaged holograms. However, at higher frequencies (f > 4 kHz), our holograms reveal unique TM surface displacement patterns that consist of highly-ordered arrangements of multiple local displacement magnitude maxima, each of which is surrounded by nodal areas of low displacement magnitude. These patterns are similar to modal patterns (two-dimensional standing waves) produced by either the interaction of surface waves traveling in multiple directions or the uniform stimulation of modes of motion that are determined by the structural properties and boundary conditions of the TM. From the ratio of the displacement magnitude peaks to nodal valleys in these apparent surface waves, we estimate a Standing Wave Ratio of at least 4 that is consistent with energy reflection coefficients at the TM boundaries of at least 0.35. It is also consistent with small losses within the uniformly stimulated modal surface waves. We also estimate possible TM surface wave speeds that vary with frequency and species from 20 to 65 m/s, consistent with other estimates in the literature. The presence of standing wave or modal phenomena has previously been intuited from measurements of TM function, but is ignored in some models of tympanic membrane function. Whether these standing waves result either from the interactions of multiple surface waves that travel along the membrane, or by uniformly excited modal displacement patterns of the entire TM surface is still to be determined. PMID:19328841
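The link between the standing wave ratio and the reflection coefficient quoted above is the standard relation |R| = (SWR - 1)/(SWR + 1), with the energy reflection coefficient |R|². A minimal sketch reproducing the abstract's numbers:

```python
def pressure_reflection_from_swr(swr):
    """Magnitude of the pressure reflection coefficient for a given
    standing wave ratio: |R| = (SWR - 1) / (SWR + 1)."""
    return (swr - 1.0) / (swr + 1.0)

r = pressure_reflection_from_swr(4.0)   # SWR of at least 4, as estimated above
energy_reflection = r ** 2              # 0.36, consistent with the quoted >= 0.35
```

An SWR of 1 (no nodes) corresponds to zero reflection; larger SWR values imply stronger reflection at the membrane boundaries.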
Averaging Robertson-Walker cosmologies
Brown, Iain A.; Robbers, Georg; Behrend, Juliane E-mail: G.Robbers@thphys.uni-heidelberg.de
2009-04-15
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.
Average-cost based robust structural control
NASA Technical Reports Server (NTRS)
Hagood, Nesbitt W.
1993-01-01
A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.
Rackes, A; Waring, M S
2016-08-01
We used existing data to develop distributions of time-averaged air exchange rates (AER), whole-building 'effective' emission rates of volatile organic compounds (VOC), and other variables for use in Monte Carlo analyses of U.S. offices. With these, we explored whether long-term VOC emission rates were related to the AER over the sector, as has been observed in the short term for some VOCs in single buildings. We fit and compared two statistical models to the data. In the independent emissions model (IEM), emissions were unaffected by other variables, while in the dependent emissions model (DEM), emissions responded to the AER via coupling through a conceptual boundary layer between the air and a lumped emission source. For 20 of 46 VOCs, the DEM was preferable to the IEM and emission rates, though variable, were higher in buildings with higher AERs. Most oxygenated VOCs and some alkanes were well fit by the DEM, while nearly all aromatics and halocarbons were independent. Trends by vapor pressure suggested multiple mechanisms could be involved. The factors of temperature, relative humidity, and building age were almost never associated with effective emission rates. Our findings suggest that effective emissions in real commercial buildings will be difficult to predict from deterministic experiments or models. PMID:26010216
ERIC Educational Resources Information Center
Abbott, Laura Lynn Tanner
2014-01-01
The purpose of this quantitative non-experimental predictive study was to determine if CIA alignment factors and related district support systems are associated with student achievement to enable the turnaround of schools in crisis. This study aimed to utilize the District Snapshot Tool to determine if the district systems that support CIA…
Orbit-averaged implicit particle codes
NASA Astrophysics Data System (ADS)
Cohen, B. I.; Freis, R. P.; Thomas, V.
1982-03-01
The merging of orbit-averaged particle code techniques with recently developed implicit methods to perform numerically stable and accurate particle simulations is reported. Implicitness and orbit averaging can extend the applicability of particle codes to the simulation of long time-scale plasma physics phenomena by relaxing time-step and statistical constraints. Difference equations for an electrostatic model are presented, and analyses of the numerical stability of each scheme are given. Simulation examples are presented for a one-dimensional electrostatic model. Schemes are constructed that are stable at large time step, require fewer particles, and, hence, reduce input-output and memory requirements. Orbit averaging, however, in the unmagnetized electrostatic models tested so far is not as successful as in cases where there is a magnetic field. Methods are suggested in which orbit averaging should achieve more significant improvements in code efficiency.
Seneca, Sara; Lissens, Willy; Endels, Kristof; Caljon, Ben; Bonduelle, Maryse; Keymolen, Kathleen; De Rademaeker, Marjan; Ullmann, Urielle; Haentjens, Patrick; Van Berkel, Kim; Van Dooren, Sonia
2012-11-01
This study evaluated a large set of blinded, previously analyzed prenatal DNA samples with a novel, CGG triplet-repeat primed (TP)-PCR assay (Amplidex FMR1 PCR Kit; Asuragen, Austin, TX). This cohort of 67 fetal DNAs contained 18 full mutations (270 to 1100 repeats, including 1 mosaic), 12 premutations (59 to 150 repeats), 9 intermediate mutations (54 to 58 repeats), and 28 normal samples (17 to 50 repeats, including 3 homozygous female samples). TP-PCR accurately identified FMR1 genotypes, ranging from normal to full-mutation alleles, with a 100% specificity (95% CI, 85.0% to 100%) and a 97.4% sensitivity (95% CI, 84.9% to 99.9%) in comparison with Southern blot analysis results. Exact sizing was possible for a spectrum of normal, intermediate, and premutation (up to 150 repeats) alleles, but CGG repeat numbers >200 are only identified as full mutations. All homozygous alleles were correctly resolved. The assay is also able to reproducibly detect a 2.5% premutation and a 3% full-mutation mosaicism in a normal male background, but a large premutation in a full-mutation male background was masked when the amount of the latter was >5%. Implementation of this TP-PCR will significantly reduce reflex testing using Southern blot analyses. Additional testing with methylation-informative techniques might still be needed for a few cases with (large) premutations or full mutations. PMID:22921311
Light propagation in the averaged universe
Bagheri, Samae; Schwarz, Dominik J. E-mail: dschwarz@physik.uni-bielefeld.de
2014-10-01
Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.
The Hubble rate in averaged cosmology
Umeh, Obinna; Larena, Julien; Clarkson, Chris E-mail: julien.larena@gmail.com
2011-03-01
The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H_0, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate and the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions.
Averaging Models: Parameters Estimation with the R-Average Procedure
ERIC Educational Resources Information Center
Vidotto, G.; Massidda, D.; Noventa, S.
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…
Searching for the Beginning of the Ozone Turnaround Using a 22-Year Merged Satellite Data Set
NASA Technical Reports Server (NTRS)
Stolarski, Richard S.; Meeson, Blanche W. (Technical Monitor)
2001-01-01
We have used the data from six satellite instruments that measure the total column amount of ozone to construct a consistent merged data set extending from late 1978 into 2000. The keys to constructing a merged data set are to minimize potential drift of individual instruments and to accurately establish instrument-to-instrument offsets. We have used the short-wavelength D-pair measurements (306nm-313nm) of the SBUV and SBUV/2 instruments near the equator to establish a relatively drift-free record for these instruments. We have then used their overlap with the Nimbus 7 and EP TOMS instruments to establish the relative calibration of the various instruments. We have evaluated the drift uncertainty in our merged ozone data (MOD) set by examining both the individual instrument drift uncertainty and the uncertainty in establishing the instrument-to-instrument differences. We conclude that the instrumental drift uncertainty over the 22-year data record is 0.9 %/decade (2-sigma). We have compared our MOD record with 37 ground stations that have a continuous record over that time period. We have a mean drift with respect to the stations of +0.3 %/decade which is within 1-sigma of our uncertainty estimate. Using the satellite record as a transfer standard, we can estimate the capability of the ground instruments to establish satellite calibration. Adding the statistical variability of the station drifts with respect to the satellite to an estimate of the overall drift uncertainty of the world standard instrument, we conclude that the stations should be able to establish the drift of the satellite data record to within an uncertainty of 0.6 %/decade (2-sigma). Adding to this an uncertainty due to the incomplete global coverage of the stations, we conclude that the station data should be able to establish the global trend with an uncertainty of about 0.7 %/decade, slightly better than for the satellite record. We conclude that merging the two records together
Arithmetic averaging: A versatile technique for smoothing and trend removal
Clark, E.L.
1993-12-31
Arithmetic averaging is simple, stable, and can be very effective in attenuating the undesirable components in a complex signal, thereby providing smoothing or trend removal. An arithmetic average is easy to calculate. However, the resulting modifications to the data, in both the time and frequency domains, are not well understood by many experimentalists. This paper discusses the following aspects of averaging: (1) types of averages -- simple, cumulative, and moving; and (2) time and frequency domain effects of the averaging process.
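The three average types the abstract names can be sketched in a few lines; the signal, noise level, and 11-sample window below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Synthetic noisy signal (illustrative, not from the paper).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
signal = np.sin(2 * np.pi * 3 * t) + 0.3 * rng.standard_normal(t.size)

# Simple average: one number summarizing the whole record.
simple = signal.mean()

# Cumulative average: the mean of all samples up to each point;
# useful for watching a running estimate settle.
cumulative = np.cumsum(signal) / np.arange(1, signal.size + 1)

# Moving average: the mean over a sliding window, which attenuates
# components with periods shorter than the window -- i.e., smoothing.
window = 11
moving = np.convolve(signal, np.ones(window) / window, mode="valid")
```

The moving average is the smoothing operation the article discusses: in the frequency domain it acts as a low-pass filter, which is why the window length must be chosen with the periods of the unwanted components in mind.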
Averaging Internal Consistency Reliability Coefficients
ERIC Educational Resources Information Center
Feldt, Leonard S.; Charter, Richard A.
2006-01-01
Seven approaches to averaging reliability coefficients are presented. Each approach starts with a unique definition of the concept of "average," and no approach is more correct than the others. Six of the approaches are applicable to internal consistency coefficients. The seventh approach is specific to alternate-forms coefficients. Although the…
The Average of Rates and the Average Rate.
ERIC Educational Resources Information Center
Lindstrom, Peter
1988-01-01
Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
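For the average-rate problems the abstract mentions, the harmonic mean is the right notion of "average rate" when equal distances are covered at different speeds. A small sketch with made-up numbers (30 mph out, 60 mph back; not taken from the article):

```python
# Hypothetical trip: equal distances traveled at two different speeds.
speeds = [30.0, 60.0]  # mph, one value per leg

arithmetic_mean = sum(speeds) / len(speeds)                 # 45.0 -- misleading
harmonic_mean = len(speeds) / sum(1.0 / v for v in speeds)  # ~40.0 -- true average rate

# Sanity check from first principles: total distance / total time.
distance_each_way = 60.0                                # miles
total_time = sum(distance_each_way / v for v in speeds) # 2 h + 1 h = 3 h
average_rate = 2 * distance_each_way / total_time       # 40.0 mph
```

The slower leg takes longer and therefore carries more weight in the trip average, which is exactly what the harmonic mean encodes and the arithmetic mean ignores.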
High average power pockels cell
Daly, Thomas P.
1991-01-01
A high average power pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.
Large Eddy/Reynolds-Averaged Navier-Stokes Simulations of CUBRC Base Heating Experiments
NASA Technical Reports Server (NTRS)
Salazar, Giovanni; Edwards, Jack R.; Amar, Adam J.
2012-01-01
Even with great advances in computational techniques and computing power during recent decades, the modeling of unsteady separated flows, such as those encountered in the wake of a re-entry vehicle, continues to be one of the most challenging problems in CFD. Of most interest to the aerothermodynamics community is accurately predicting transient heating loads on the base of a blunt body, which would result in reduced uncertainties and safety margins when designing a re-entry vehicle. However, the prediction of heat transfer can vary widely depending on the turbulence model employed. Therefore, selecting a turbulence model which realistically captures as much of the flow physics as possible will result in improved results. Reynolds Averaged Navier Stokes (RANS) models have become increasingly popular due to their good performance with attached flows, and the relatively quick turnaround time to obtain results. However, RANS methods cannot accurately simulate unsteady separated wake flows, and running direct numerical simulation (DNS) on such complex flows is currently too computationally expensive. Large Eddy Simulation (LES) techniques allow for the computation of the large eddies, which contain most of the Reynolds stress, while modeling the smaller (subgrid) eddies. This results in models which are more computationally expensive than RANS methods, but not as prohibitive as DNS. By complementing an LES approach with a RANS model, a hybrid LES/RANS method resolves the larger turbulent scales away from surfaces with LES, and switches to a RANS model inside boundary layers. As pointed out by Bertin et al., this type of hybrid approach has shown a lot of promise for predicting turbulent flows, but work is needed to verify that these models work well in hypersonic flows. The very limited amounts of flight and experimental data available present an additional challenge for researchers. Recently, a joint study by NASA and CUBRC has focused on collecting heat transfer data
40 CFR Figure 1 to Subpart Qqq of... - Data Summary Sheet for Determination of Average Opacity
Code of Federal Regulations, 2011 CFR
2011-07-01
Data summary sheet columns: clock time; number of converters blowing; converter aisle activity; average opacity for 1-minute interval (percent); average opacity for 1-minute interval blowing without visible emission interferences (percent).
IgG/anti-IgG immunoassay based on a turn-around point long period grating
NASA Astrophysics Data System (ADS)
Chiavaioli, F.; Biswas, P.; Trono, C.; Giannetti, A.; Tombelli, S.; Bandyopadhyay, S.; Basumallick, N.; Dasgupta, K.; Baldini, F.
2014-02-01
Long period fiber gratings (LPFGs) have been proposed as label-free optical biosensors for a few years. Refractive index changes, which modify the fiber transmission spectrum, are used for evaluating a biochemical interaction that occurs along the grating region. A turn-around point (TAP) LPFG was manufactured for enhancing the refractive index sensitivity of these devices. Considering the simplicity and the fast process with respect to the silanization procedure, the functionalization of the fiber was carried out with Eudragit L100 copolymer. An IgG/anti-IgG immunoassay was implemented for studying the antigen/antibody interaction. A limit of detection lower than 100 μg L-1 was achieved. Based on the same model assay, we compared the resonance wavelength shifts during the injection of 10 mg L-1 anti-IgG antigen between the TAP LPFG and a standard non-TAP one, in which the coupling occurs with a lower order cladding mode, demonstrating the performance improvement of TAP LPFG-based biosensors.
Vocal attractiveness increases by averaging.
Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal
2010-01-26
Vocal attractiveness has a profound influence on listeners-a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]-with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4]-e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence that vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. PMID:20129047
Instrument to average 100 data sets
NASA Technical Reports Server (NTRS)
Tuma, G. B.; Birchenough, A. G.; Rice, W. J.
1977-01-01
An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points which are displayed on an oscilloscope screen to facilitate recording and is available in real time. Input can be any parameter which is expressed as a ±10-volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. This instrument has been used successfully on a 1975 Chevrolet V8 engine, and on a Continental 6-cylinder aircraft engine. While this instrument was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
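The point-by-point cycle averaging the instrument performs can be imitated in software; the waveform and noise level below are stand-ins, and only the 2048-point, 100-cycle figures come from the abstract:

```python
import numpy as np

points_per_cycle = 2048  # resolution quoted in the abstract
n_cycles = 100           # cycles averaged, as in the abstract

# Stand-in "true" engine curve, e.g. a cylinder-pressure trace (illustrative).
theta = np.linspace(0.0, 2.0 * np.pi, points_per_cycle, endpoint=False)
true_curve = np.sin(theta)

# Each measured cycle = true curve + independent cycle-to-cycle noise.
rng = np.random.default_rng(1)
cycles = true_curve + 0.5 * rng.standard_normal((n_cycles, points_per_cycle))

# The instrument's operation: average the 100 cycles point by point.
average_curve = cycles.mean(axis=0)

# Averaging N independent cycles shrinks random variation by about sqrt(N).
single_cycle_err = np.abs(cycles[0] - true_curve).mean()
averaged_err = np.abs(average_curve - true_curve).mean()
```

With 100 cycles the random component drops by roughly a factor of ten relative to a single cycle, which is what makes the averaged curve usable despite cycle-to-cycle variation.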
Evaluations of average level spacings
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables.
On generalized averaged Gaussian formulas
NASA Astrophysics Data System (ADS)
Spalevic, Miodrag M.
2007-09-01
We present a simple numerical method for constructing the optimal (generalized) averaged Gaussian quadrature formulas which are the optimal stratified extensions of Gauss quadrature formulas. These extensions exist in many cases in which real positive Kronrod formulas do not exist. For the Jacobi weight functions w(x) ≡ w^(α,β)(x) = (1-x)^α (1+x)^β (α, β > -1) we give a necessary and sufficient condition on the parameters α and β such that the optimal averaged Gaussian quadrature formulas are internal.
ERIC Educational Resources Information Center
Conrad, Jill K.
2013-01-01
This dissertation examines the levels of and relationships between technical leadership, adaptive leadership, and defined autonomy among Denver school leaders along with their combined effects on school growth gains over time. Thirty principals provided complete responses to an online survey that included existing scales for technical leadership,…
Polyhedral Painting with Group Averaging
ERIC Educational Resources Information Center
Farris, Frank A.; Tsao, Ryan
2016-01-01
The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…
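The group-averaging idea is simple to state: averaging an arbitrary function over a finite symmetry group yields a function invariant under that group. A minimal planar sketch (my illustration, using the cyclic rotation group C4 rather than a polyhedral group acting on the sphere):

```python
import numpy as np

def f(x, y):
    """An arbitrary 'paint' function with no symmetry of its own."""
    return np.sin(3.0 * x) + 0.5 * np.cos(2.0 * y)

def c4_average(x, y):
    """Average f over the four rotations of C4; the result is invariant
    under any 90-degree rotation of the input point."""
    total = 0.0
    for k in range(4):
        # Rotate (x, y) by k * 90 degrees, evaluate f there, accumulate.
        c, s = np.cos(k * np.pi / 2), np.sin(k * np.pi / 2)
        total += f(c * x - s * y, s * x + c * y)
    return total / 4.0
```

Replacing C4 with the rotation group of a polyhedron, and planar points with points on the sphere, yields colorings carrying that polyhedron's symmetries, which is the construction the article describes.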
Averaged Electroencephalic Audiometry in Infants
ERIC Educational Resources Information Center
Lentz, William E.; McCandless, Geary A.
1971-01-01
Normal, preterm, and high-risk infants were tested at 1, 3, 6, and 12 months of age using averaged electroencephalic audiometry (AEA) to determine the usefulness of AEA as a measurement technique for assessing auditory acuity in infants, and to delineate some of the procedural and technical problems often encountered. (KW)
Averaging inhomogeneous cosmologies - a dialogue.
NASA Astrophysics Data System (ADS)
Buchert, T.
The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.
Average Cost of Common Schools.
ERIC Educational Resources Information Center
White, Fred; Tweeten, Luther
The paper shows costs of elementary and secondary schools applicable to Oklahoma rural areas, including the long-run average cost curve which indicates the minimum per student cost for educating various numbers of students and the application of the cost curves determining the optimum school district size. In a stratified sample, the school…
Interlibrary Loan Time and Motion Study, Colorado Western Slope.
ERIC Educational Resources Information Center
Thomas, Sharon D.
This report, which investigates turnaround time for interlibrary loans, presents a 1-month study of the interlibrary loan (ILL) process operating in the Western Slope areas of Colorado during 1980. It comprises introductory material presenting the importance, scope and limitations of the study, problem statement, hypothesis and term definitions; a…
Kuss Middle School: Expanding Time to Accelerate School Improvement
ERIC Educational Resources Information Center
Massachusetts 2020, 2012
2012-01-01
In 2004, Kuss Middle School became the first school declared "Chronically Underperforming" by the state of Massachusetts. But by 2010, Kuss had transformed itself into a model for schools around the country seeking a comprehensive turnaround strategy. Kuss is using increased learning time as the primary catalyst to accelerate learning, deepen…
Herrinton, Lisa J; Liu, Liyan; Altschuler, Andrea; Dell, Richard; Rabrenovich, Violeta; Compton-Phillips, Amy L
2015-01-01
Context: Disease registries enable priority setting and batching of clinical tasks, such as reaching out to patients who have missed a routine laboratory test. Building disease registries requires collaboration among professionals in medicine, population science, and information technology. Specialty care addresses many complex, uncommon conditions, and these conditions are diverse. The cost to build and maintain traditional registries for many diverse, complex, low-frequency conditions is prohibitive. Objective: To develop and to test the Specialty Miniregistries platform, a collaborative interface designed to streamline the medical specialist’s contributions to the science and management of population health. Design: We used accessible technology to develop a platform that would generate miniregistries (small, routinely updated datasets) for surveillance, to identify patients who were missing expected utilization, and to influence clinicians and others to change practices to improve care. The platform was composed of staff, technology, and structured collaborations, organized into a workflow. The platform was tested in five medical specialty departments. Main Outcome Measure: Proof of concept. Results: The platform enabled medical specialists to rapidly and effectively communicate clinical questions, knowledge of disease, clinical workflows, and improvement opportunities. Their knowledge was used to build and to deploy the miniregistries. Each miniregistry required 1 to 2 hours of collaboration by a medical specialist. Turnaround was 1 to 14 days. Conclusions: The Specialty Miniregistries platform is useful for low-volume questions that often occur in specialty care, and it requires low levels of investment. The efficient organization of information workers to support accountable care is an emerging question. PMID:25785640
Computation of flows in a turn-around duct and a turbine cascade using advanced turbulence models
NASA Technical Reports Server (NTRS)
Lakshminarayana, B.; Luo, J.
1993-01-01
Numerical investigation has been carried out to evaluate the capability of the Algebraic Reynolds Stress Model (ARSM) and the Nonlinear Stress Model (NLSM) to predict strongly curved turbulent flow in a turn-around duct (TAD). The ARSM includes the near-wall damping term of pressure-strain correlation phi(sub ij,w), which enables accurate prediction of individual Reynolds stress components in wall flows. The TAD mean flow quantities are reasonably well predicted by various turbulence models. The ARSM yields better predictions for both the mean flow and the turbulence quantities than the NLSM and the k-epsilon (k = turbulent kinetic energy, epsilon = dissipation rate of k) model. The NLSM also shows slight improvement over the k-epsilon model. However, all the models fail to capture the recovery of the flow from strong curvature effects. The formulation for phi(sub ij,w) appears to be incorrect near the concave surface. The hybrid k-epsilon/ARSM, Chien's k-epsilon model, and Coakley's q-omega (q = the square root of k, omega = epsilon/k) model have also been employed to compute the aerodynamics and heat transfer of a transonic turbine cascade. The surface pressure distributions and the wake profiles are predicted well by all the models. The k-epsilon model and the k-epsilon/ARSM model provide better predictions of heat transfer than the q-omega model. The k-epsilon/ARSM solutions show significant differences in the predicted skin friction coefficients, heat transfer rates and the cascade performance parameters, as compared to the k-epsilon model. The k-epsilon/ARSM model appears to capture, qualitatively, the anisotropy associated with by-pass transition.
Reflight of the First Microgravity Science Laboratory: Quick Turnaround of a Space Shuttle Mission
NASA Technical Reports Server (NTRS)
Simms, Yvonne
1998-01-01
Due to the short flight of Space Shuttle Columbia, STS-83, in April 1997, NASA chose to refly the same crew, shuttle, and payload on STS-94 in July 1997. This was the first reflight of an entire mission complement. The reflight of the First Microgravity Science Laboratory (MSL-1) on STS-94 required an innovative approach to Space Shuttle payload ground processing. Ground processing time for the Spacelab Module, which served as the laboratory for MSL-1 experiments, was reduced by seventy-five percent. The Spacelab Module is a pressurized facility with avionics and thermal cooling and heating accommodations. Boeing-Huntsville, formerly McDonnell Douglas Aerospace, has been the Spacelab Integration Contractor since 1977. The first Spacelab Module flight was in 1983. An experienced team determined what was required to refurbish the Spacelab Module for reflight. Team members had diverse knowledge, skills, and background. An engineering assessment of subsystems, including mechanical, electrical power distribution, command and data management, and environmental control and life support, was performed. Recommendations for resolution of STS-83 Spacelab in-flight anomalies were provided. Inspections and tests that must be done on critical Spacelab components were identified. This assessment contributed to the successful reflight of MSL-1, the fifteenth Spacelab Module mission.
Average luminosity distance in inhomogeneous universes
Kostov, Valentin
2010-04-01
Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic), and is thus more directly applicable to our observations. In contrast to previous studies, the averaging is exact, non-perturbative, and includes all non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. Voids aligned along a certain direction give rise to a distance modulus correction which increases with redshift and is caused by cumulative gravitational lensing. That correction is present even for small voids and depends on their density contrast, not on their radius. Averaging over all directions destroys the cumulative lensing correction even in a non-randomized simple cubic lattice of voids. At low redshifts, the average distance modulus correction does not vanish due to the peculiar velocities, despite the photon flux conservation argument. A formula for the maximal possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (a) have approximately constant densities in their interior and walls; and (b) are not in a deep nonlinear regime. The average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximal one after a single void diameter. That is traced to cancellations between the corrections from the fronts and backs of different voids. The results obtained
Orbit Averaging in Perturbed Planetary Rings
NASA Astrophysics Data System (ADS)
Stewart, Glen R.
2015-11-01
The orbital period is typically much shorter than the time scale for dynamical evolution of large-scale structures in planetary rings. This large separation in time scales motivates the derivation of reduced models by averaging the equations of motion over the local orbit period (Borderies et al. 1985, Shu et al. 1985). A more systematic procedure for carrying out the orbit averaging is to use Lie transform perturbation theory to remove the dependence on the fast angle variable from the problem order-by-order in epsilon, where the small parameter epsilon is proportional to the fractional radial distance from exact resonance. This powerful technique has been developed and refined over the past thirty years in the context of gyrokinetic theory in plasma physics (Brizard and Hahm, Rev. Mod. Phys. 79, 2007). When the Lie transform method is applied to resonantly forced rings near a mean motion resonance with a satellite, the resulting orbit-averaged equations contain the nonlinear terms found previously, but also contain additional nonlinear self-gravity terms of the same order that were missed by Borderies et al. and by Shu et al. The additional terms result from the fact that the self-consistent gravitational potential of the perturbed rings modifies the orbit-averaging transformation at nonlinear order. These additional terms are the gravitational analog of electrostatic ponderomotive forces caused by large amplitude waves in plasma physics. The revised orbit-averaged equations are shown to modify the behavior of nonlinear density waves in planetary rings compared to the previously published theory. This research was supported by NASA's Outer Planets Research program.
The modulated average structure of mullite.
Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X
2015-06-01
Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in case of 3:2 mullite. In any of these, the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3 a weak average modulation results with slightly varying average occupation factors for the tetrahedral units. As a result, the real
Averaging on Earth-Crossing Orbits
NASA Astrophysics Data System (ADS)
Gronchi, G. F.; Milani, A.
The orbits of planet-crossing asteroids (and comets) can undergo close approaches and collisions with some major planet. This introduces a singularity in the N-body Hamiltonian, and the averaging of the equations of motion, traditionally used to compute secular perturbations, is undefined. We show that it is possible to define in a rigorous way some generalised averaged equations of motion, in such a way that the generalised solutions are unique and piecewise smooth. This is obtained, both in the planar and in the three-dimensional case, by means of the method of extraction of the singularities by Kantorovich. The modified distance used to approximate the singularity is the one used by Wetherill in his method to compute probability of collision. Some examples of averaged dynamics have been computed; a systematic exploration of the averaged phase space to locate the secular resonances should be the next step. Alice sighed wearily. "I think you might do something better with the time," she said, "than waste it asking riddles with no answers" (Alice in Wonderland, L. Carroll)
Average configuration of the induced Venus magnetotail
McComas, D.J.; Spence, H.E.; Russell, C.T.
1985-01-01
In this paper we discuss the interaction of the solar wind flow with Venus and describe the morphology of magnetic field line draping in the Venus magnetotail. In particular, we describe the importance of the interplanetary magnetic field (IMF) X-component in controlling the configuration of field draping in this induced magnetotail, and using the results of a recently developed technique, we examine the average magnetic configuration of this magnetotail. The derived J x B forces must balance the average, steady state acceleration of, and pressure gradients in, the tail plasma. From this relation the average tail plasma velocity, lobe and current sheet densities, and average ion temperature have been derived. In this study we extend these results by making a connection between the derived consistent plasma flow speed and density, and the observational energy/charge range and sensitivity of the Pioneer Venus Orbiter (PVO) plasma analyzer, and demonstrate that if the tail is principally composed of O+, the bulk of the plasma should not be observable much of the time that the PVO is within the tail. Finally, we examine the importance of solar wind slowing upstream of the obstacle and its implications for the temperature of pick-up planetary ions, compare the derived ion temperatures with their theoretical maximum values, and discuss the implications of this process for comets and AMPTE-type releases.
Average shape of fluctuations for subdiffusive walks
NASA Astrophysics Data System (ADS)
Yuste, S. B.; Acedo, L.
2004-03-01
We study the average shape of fluctuations for subdiffusive processes, i.e., processes with uncorrelated increments but where the waiting time distribution has a broad power-law tail. This shape is obtained analytically by means of a fractional diffusion approach. We find that, in contrast with processes where the waiting time between increments has finite variance, the fluctuation shape is no longer a semicircle: it tends to adopt a tablelike form as the subdiffusive character of the process increases. The theoretical predictions are compared with numerical simulation results.
ERIC Educational Resources Information Center
Olson, Cathy Applefeld
2010-01-01
It's true that each school and each classroom presents its own unique challenges. But Kevin Mixon--who in October traded his instrumental music instructor duties at Blodgett School on the west side of Syracuse, New York, for a job as fine arts coordinator for Syracuse City Schools--knows from experience that some schools are far more challenging…
ERIC Educational Resources Information Center
Hood, Lucy
2006-01-01
This article describes the Soddy-Daisy High School in southeastern Tennessee. It used to be that vocational training and a focus on academic studies were considered completely different means of education. But in Soddy-Daisy, Tennessee, the two go hand in hand. Eric Thomas and his brother Mark, teach side by side in adjacent rooms, where computer…
Disk-averaged synthetic spectra of Mars
NASA Technical Reports Server (NTRS)
Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.
Disk-averaged synthetic spectra of Mars.
Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-08-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin. PMID:16078866
Auto-exploratory average reward reinforcement learning
Ok, DoKyeong; Tadepalli, P.
1996-12-31
We introduce a model-based average reward Reinforcement Learning method called H-learning and compare it with its discounted counterpart, Adaptive Real-Time Dynamic Programming, in a simulated robot scheduling task. We also introduce an extension to H-learning, which automatically explores the unexplored parts of the state space, while always choosing greedy actions with respect to the current value function. We show that this "Auto-exploratory H-learning" performs better than the original H-learning under previously studied exploration methods such as random, recency-based, or counter-based exploration.
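The abstract contrasts discounted and average-reward criteria. As a hedged sketch of the average-reward optimality criterion only (not the authors' model-based H-learning, which additionally learns transition and reward models online while exploring), here is relative value iteration on a hypothetical two-state MDP:

```python
# Sketch of the average-reward criterion via relative value iteration
# on a toy deterministic MDP. The MDP below is an illustrative example,
# not the robot scheduling task from the paper.

# Two states, two actions: a=0 "stay", a=1 "move".
NEXT = [[0, 1], [1, 0]]          # next_state[s][a]
REW = [[1.0, 0.0], [2.0, 0.0]]   # reward[s][a]

def relative_value_iteration(iters=100, ref=0):
    """Return (gain, h): optimal average reward and relative values."""
    h = [0.0, 0.0]
    gain = 0.0
    for _ in range(iters):
        # One Bellman backup, then subtract the reference state's value
        # so the iterates stay bounded (standard RVI normalization).
        th = [max(REW[s][a] + h[NEXT[s][a]] for a in (0, 1)) for s in (0, 1)]
        gain = th[ref]
        h = [v - gain for v in th]
    return gain, h

gain, h = relative_value_iteration()
print(gain, h)  # optimal average reward is 2.0 (move to state 1, then stay)
```

The gain of 2.0 corresponds to the policy that reaches state 1 and sits there collecting reward 2 per step, which dominates the long-run average regardless of any finite transient.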
NASA Astrophysics Data System (ADS)
Amoroso, Richard L.
2013-09-01
The concept of time in the 'clockwork' Newtonian world was irrelevant, and it has generally been ignored until recently by several generations of physicists since the implementation of quantum mechanics. We will set aside the utility of time as a property relating to physical calculations of events relating to a metric's line element, or as an aspect of the transformation of a particle's motion/interaction in a coordinate system, or in relation to thermodynamics, etc.; i.e., we will discard all the usual uses of time as a concept used to circularly define physical parameters in terms of other physical parameters, concentrating instead on time as an aspect of the fundamental cosmic topology of our virtual reality, especially as it inseparably relates to the nature and role of the observer in natural science.
Predictability of time averages: The influence of the boundary forcing
NASA Technical Reports Server (NTRS)
Shukla, J.
1982-01-01
The physical mechanisms through which changes in the boundary forcings of SST, soil moisture, albedo, sea ice, and snow influence the atmospheric circulation are discussed. Results of numerical experiments conducted with the GLAS climate model to determine the sensitivity of the model atmosphere to changes in boundary conditions of SST, soil moisture, and albedo over limited regions are discussed. It is found that changes in SST and soil moisture in the tropics produce large changes in the atmospheric circulation and rainfall over the tropics as well as over mid-latitudes.
Effect of wind averaging time on wind erosivity estimation
Technology Transfer Automated Retrieval System (TEKTRAN)
The Wind Erosion Prediction System (WEPS) and Revised Wind Erosion Equation (RWEQ) are widely used for estimating the wind-induced soil erosion at a field scale. Wind is the principal erosion driver in the two models. The wind erosivity, which describes the capacity of wind to cause soil erosion is ...
Global atmospheric circulation statistics: Four year averages
NASA Technical Reports Server (NTRS)
Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.
1987-01-01
Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.
Average deployments versus missile and defender parameters
Canavan, G.H.
1991-03-01
This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with two-fold less effectiveness. The next 10% could double constellation sizes. 5 refs., 7 figs.
Transforming Schools through Expanded Learning Time: Orchard Gardens K-8 Pilot School. Update 2013
ERIC Educational Resources Information Center
Chan, Roy
2013-01-01
For years, Orchard Gardens K-8 Pilot School was plagued by low student achievement and high staff turnover. Then, in 2010, with an expanded school schedule made possible through federal funding, Orchard Gardens began a remarkable turnaround. Today, the school is demonstrating how increased learning time, combined with other key turnaround…
Delineating the Average Rate of Change in Longitudinal Models
ERIC Educational Resources Information Center
Kelley, Ken; Maxwell, Scott E.
2008-01-01
The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…
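The distinction the abstract draws can be made concrete: the average rate of change (ARC) is the total change divided by the elapsed time, while the straight-line change model's slope is a least-squares fit, and the two diverge for nonlinear trajectories. A minimal sketch with an illustrative cubic trajectory (not data from the article):

```python
# ARC vs. straight-line slope for a nonlinear longitudinal trajectory.
# The cubic growth curve is an illustrative example only.

def arc(times, values):
    """Average rate of change: total change divided by elapsed time."""
    return (values[-1] - values[0]) / (times[-1] - times[0])

def ols_slope(times, values):
    """Slope of the straight-line (least-squares) change model."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    num = sum((t - mt) * (v - mv) for t, v in zip(times, values))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

t = [0, 1, 2, 3, 4]
y = [ti ** 3 for ti in t]     # cubic growth: y = t^3
print(arc(t, y))              # 16.0
print(ols_slope(t, y))        # 15.4
```

For linear change the two quantities coincide, which is why the slope is sometimes misread as "the" average rate of change; the cubic example shows they need not agree.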
40 CFR Figure 1 to Subpart Qqq of... - Data Summary Sheet for Determination of Average Opacity
Code of Federal Regulations, 2013 CFR
2013-07-01
... of Average Opacity Clock time Number of converters blowing Converter aisle activity Average opacity... or no) Average opacity for 1-minute interval blowing without visible emission interferences(percent)...
Average observational quantities in the timescape cosmology
Wiltshire, David L.
2009-12-15
We examine the properties of a recently proposed observationally viable alternative to homogeneous cosmology with smooth dark energy, the timescape cosmology. In the timescape model cosmic acceleration is realized as an apparent effect related to the calibration of clocks and rods of observers in bound systems relative to volume-average observers in an inhomogeneous geometry in ordinary general relativity. The model is based on an exact solution to a Buchert average of the Einstein equations with backreaction. The present paper examines a number of observational tests which will enable the timescape model to be distinguished from homogeneous cosmologies with a cosmological constant or other smooth dark energy, in current and future generations of dark energy experiments. Predictions are presented for comoving distance measures; H(z); the equivalent of the dark energy equation of state, w(z); the Om(z) measure of Sahni, Shafieloo, and Starobinsky; the Alcock-Paczynski test; the baryon acoustic oscillation measure, D_V; the inhomogeneity test of Clarkson, Bassett, and Lu; and the time drift of cosmological redshifts. Where possible, the predictions are compared to recent independent studies of similar measures in homogeneous cosmologies with dark energy. Three separate tests with indications of results in possible tension with the ΛCDM model are found to be consistent with the expectations of the timescape cosmology.
ERIC Educational Resources Information Center
Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri
2006-01-01
Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…
ERIC Educational Resources Information Center
Murphy, Joseph
2008-01-01
In this article, we review research from the organizational sciences to develop lessons for educators and policy makers. The approach is an integrative review of the literature. We employ a comprehensive process to unpack and make sense of the turnaround literature from the organizational sciences. We rely on strategies appropriate for document…
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
RHIC BPM system average orbit calculations
Michnoff, R.; Cerniglia, P.; Degen, C.; Hulsart, R.; et al.
2009-05-04
RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this has provided an average orbit signal quality improvement, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and performance with beam.
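The benefit of averaging over full perturbation periods rather than a fixed turn count can be sketched numerically: a block of turns that spans a non-integer number of ~10 Hz periods leaves a residual offset, while an average over whole periods cancels the perturbation. The revolution frequency, amplitude, and closed-orbit value below are illustrative placeholders, not RHIC machine parameters:

```python
import math

# Toy model of a BPM position signal: a true closed orbit plus a ~10 Hz
# perturbation, sampled once per turn. All numbers are assumptions.

F_REV = 78000.0       # assumed revolution frequency [Hz]
F_PERT = 10.0         # ~10 Hz orbit perturbation
CLOSED_ORBIT = 1.25   # assumed true closed-orbit position [mm]

def position(turn):
    t = turn / F_REV
    return CLOSED_ORBIT + 0.5 * math.sin(2 * math.pi * F_PERT * t)

def average_orbit(n_turns, start=0):
    return sum(position(start + k) for k in range(n_turns)) / n_turns

one_period = int(F_REV / F_PERT)       # turns per perturbation period
print(average_orbit(10000))            # fixed 10000-turn block average
print(average_orbit(50 * one_period))  # average over 50 full periods
```

The 10000-turn block spans about 1.28 perturbation periods and so retains a visible bias, whereas the many-period average recovers the closed orbit, which is the motivation for matching the averaging window to the perturbation in the continuous calculation.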
Spectral averaging techniques for Jacobi matrices
Rio, Rafael del; Martinez, Carmen; Schulz-Baldes, Hermann
2008-02-15
Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner-type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.
Averaging and Adding in Children's Worth Judgements
ERIC Educational Resources Information Center
Schlottmann, Anne; Harman, Rachel M.; Paine, Julie
2012-01-01
Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…
The balanced survivor average causal effect.
Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken
2013-01-01
Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure. PMID:23658214
Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?
Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.
2013-06-17
Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
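The sampling effect described above can be illustrated with a toy model (not the TCAP radiative-transfer calculation): applying a single afternoon snapshot of aerosol loading to the whole day biases the 24-h average relative to using the full diurnal cycle. All functional forms and magnitudes below are assumptions:

```python
import math

# Toy illustration of sparse-sampling bias in a 24-h average forcing.
# Forcing is taken as proportional to -AOD * insolation, in arbitrary
# units; the diurnal cycles are invented for illustration.

def aod(hour):
    """Assumed diurnal aerosol loading: ~20% swing around a mean of 0.15."""
    return 0.15 * (1 + 0.2 * math.sin(2 * math.pi * (hour - 9) / 24))

def insolation(hour):
    """Crude daylight weighting, zero at night."""
    return max(0.0, math.sin(math.pi * (hour - 6) / 12))

def daily_forcing(aod_of_hour):
    """24-h average of the toy forcing for a given AOD time series."""
    return sum(-aod_of_hour(h) * insolation(h) for h in range(24)) / 24

full = daily_forcing(aod)                     # hourly-resolved aerosol
snapshot = daily_forcing(lambda h: aod(15))   # one mid-afternoon sample
print(full, snapshot)
```

Because the single mid-afternoon sample catches the aerosol near its diurnal peak, the snapshot overestimates the magnitude of the daily average forcing, mirroring the paper's point that sparse temporal sampling can produce large errors.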
Do diurnal aerosol changes affect daily average radiative forcing?
NASA Astrophysics Data System (ADS)
Kassianov, Evgueni; Barnard, James; Pekour, Mikhail; Berg, Larry K.; Michalsky, Joseph; Lantz, Kathy; Hodges, Gary
2013-06-01
Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
Real-Time Patient Survey Data During Routine Clinical Activities for Rapid-Cycle Quality Improvement
Jones, Robert E
2015-01-01
Background Surveying patients is increasingly important for evaluating and improving health care delivery, but practical survey strategies during routine care activities have not been available. Objective We examined the feasibility of conducting routine patient surveys in a primary care clinic using commercially available technology (Web-based survey creation, deployment on tablet computers, cloud-based management of survey data) to expedite and enhance several steps in data collection and management for rapid quality improvement cycles. Methods We used a Web-based data management tool (survey creation, deployment on tablet computers, real-time data accumulation and display of survey results) to conduct four patient surveys during routine clinic sessions over a one-month period. Each survey consisted of three questions and focused on a specific patient care domain (dental care, waiting room experience, care access/continuity, Internet connectivity). Results Of the 727 available patients during clinic survey days, 316 patients (43.4%) attempted the survey, and 293 (40.3%) completed the survey. For the four 3-question surveys, the overall average time per survey was 40.4 seconds, with individual questions ranging from 5.4 to 20.3 seconds. Yes/No questions took less time than multiple choice questions (9.6 versus 14.0 seconds on average). Average response time showed no clear pattern by order of questions or by proctor strategy, but monotonically increased with the number of words in the question (<20 words, 21-30 words, >30 words): 8.0, 11.8, and 16.8 seconds, respectively. Conclusions This technology-enabled data management system helped capture patient opinions and accelerate turnaround of survey data, with minimal impact on a busy primary care clinic. This new model of patient survey data management is feasible and sustainable in a busy office setting, supports and engages clinicians in the quality improvement process, and harmonizes with the vision of a learning health
Averaging of Backscatter Intensities in Compounds
Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.
2002-01-01
Low uncertainty measurements on pure element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass fraction averaging, based on the number of electrons or protons, termed “electron fraction,” which predicts backscatter yield better than mass fraction averaging.
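The proposed electron-fraction weighting can be sketched directly: convert mass fractions w_i to electron fractions via w_i·Z_i/A_i (normalized), then take the weighted average of pure-element backscatter coefficients. The composition and backscatter yields below are illustrative placeholders, not values from the paper:

```python
# Sketch of electron-fraction vs. mass-fraction averaging of backscatter
# coefficients. Element data and eta values are illustrative only.

def electron_fractions(mass_fracs, Z, A):
    """Electron fractions e_i proportional to w_i * Z_i / A_i."""
    raw = [w * z / a for w, z, a in zip(mass_fracs, Z, A)]
    total = sum(raw)
    return [r / total for r in raw]

def average_backscatter(fracs, eta):
    """Weighted average of pure-element backscatter coefficients."""
    return sum(f * e for f, e in zip(fracs, eta))

# Hypothetical binary compound of an Fe-like and an O-like element.
w = [0.7, 0.3]            # mass fractions
Z = [26, 8]               # atomic numbers
A = [55.845, 15.999]      # atomic masses
eta = [0.28, 0.07]        # assumed pure-element backscatter yields

e = electron_fractions(w, Z, A)
print(average_backscatter(w, eta))   # mass-fraction average
print(average_backscatter(e, eta))   # electron-fraction average
```

Because the light element contributes more electrons per unit mass (higher Z/A), its weight grows under electron-fraction averaging, shifting the predicted compound yield relative to the mass-fraction estimate.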
Neutron resonance averaging with filtered beams
Chrien, R.E.
1985-01-01
Neutron resonance averaging using filtered beams from a reactor source has proven to be an effective nuclear structure tool within certain limitations. These limitations are imposed by the nature of the averaging process, which produces fluctuations in radiative intensities. The fluctuations have been studied quantitatively. Resonance averaging also gives us information about initial or capture state parameters, in particular the photon strength function. Suitable modifications of the filtered beams are suggested for the enhancement of non-resonant processes.
Spatial limitations in averaging social cues.
Florey, Joseph; Clifford, Colin W G; Dakin, Steven; Mareschal, Isabelle
2016-01-01
The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers' ability to average social cues with their averaging of a non-social cue. Estimates of observers' internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging are examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589
Spectral and parametric averaging for integrable systems
NASA Astrophysics Data System (ADS)
Ma, Tao; Serota, R. A.
2015-05-01
We analyze two theoretical approaches to ensemble averaging for integrable systems in quantum chaos, spectral averaging (SA) and parametric averaging (PA). For SA, we introduce a new procedure, namely, rescaled spectral averaging (RSA). Unlike traditional SA, it can describe the correlation function of spectral staircase (CFSS) and produce persistent oscillations of the interval level number variance (IV). PA, while not as accurate as RSA for the CFSS and IV, can also produce persistent oscillations of the global level number variance (GV) and better describes saturation level rigidity as a function of the running energy. Overall, PA is the most reliable method for a wide range of statistics.
Phase-compensated averaging for analyzing electroencephalography and magnetoencephalography epochs.
Matani, Ayumu; Naruse, Yasushi; Terazono, Yasushi; Iwasaki, Taro; Fujimaki, Norio; Murata, Tsutomu
2010-05-01
Stimulus-locked averaging for electroencephalography and/or magnetoencephalography (EEG/MEG) epochs cancels out ongoing spontaneous activities by treating them as noise. However, such spontaneous activities are the object of interest for EEG/MEG researchers who study phase-related phenomena, e.g., long-distance synchronization, phase-reset, and event-related synchronization/desynchronization (ERD/ERS). We propose a complex-weighted averaging method, called phase-compensated averaging, to investigate phase-related phenomena. In this method, any EEG/MEG channel is used as a trigger for averaging by setting the instantaneous phases at the trigger timings to 0 so that cross-channel averages are obtained. First, we evaluated the fundamental characteristics of this method by performing simulations. The results showed that this method could selectively average ongoing spontaneous activity phase-locked in each channel; that is, it evaluates the directional phase-synchronizing relationship between channels. We then analyzed flash evoked potentials. This method clarified the directional phase-synchronizing relationship from the frontal to occipital channels and recovered another piece of information, perhaps regarding the sequence of experiments, which is lost when using only conventional averaging. This method can also be used to reconstruct EEG/MEG time series to visualize long-distance synchronization and phase-reset directly, and on the basis of the potentials, ERS/ERD can be explained as a side effect of phase-reset. PMID:20172813
Averaging underwater noise levels for environmental assessment of shipping.
Merchant, Nathan D; Blondel, Philippe; Dakin, D Tom; Dorocicz, John
2012-10-01
Rising underwater noise levels from shipping have raised concerns regarding chronic impacts to marine fauna. However, there is a lack of consensus over how to average local shipping noise levels for environmental impact assessment. This paper addresses this issue using 110 days of continuous data recorded in the Strait of Georgia, Canada. Probability densities of ~10^7 1-s samples in selected 1/3 octave bands were approximately stationary across one-month subsamples. Median and mode levels varied with averaging time. Mean sound pressure levels averaged in linear space, though susceptible to strong bias from outliers, are most relevant to cumulative impact assessment metrics. PMID:23039575
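The distinction drawn here between averaging decibel values directly and averaging in linear (power) space is standard acoustics and can be illustrated with a small sketch; the sample values are invented, not the Strait of Georgia data:

```python
import math

def mean_spl_linear(levels_db):
    # Convert each dB sample to linear power, average, convert back to dB.
    powers = [10.0 ** (l / 10.0) for l in levels_db]
    return 10.0 * math.log10(sum(powers) / len(powers))

def mean_of_db(levels_db):
    # Naive arithmetic mean of the dB values (a geometric mean of power).
    return sum(levels_db) / len(levels_db)

# 100 one-second samples: steady ambient at 100 dB plus one loud ship passage.
samples = [100.0] * 99 + [130.0]
print(round(mean_spl_linear(samples), 1))   # 110.4 -- the outlier dominates
print(round(mean_of_db(samples), 1))        # 100.3 -- the outlier barely moves it
```

This is exactly the trade-off the abstract notes: the linear-space mean tracks cumulative acoustic energy (relevant for impact metrics) but is strongly biased by rare loud events, while the mean of the dB values is robust but underweights them.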
Direct Statistical Simulation: Ensemble Averaging and Basis Reduction
NASA Astrophysics Data System (ADS)
Allawala, Altan; Marston, Brad
2015-11-01
Low-order statistics of models of geophysical fluids may be directly accessed by solving the equations of motion for the equal-time cumulants themselves. We investigate a variant of the second-order cumulant expansion (CE2) in which zonal averaging is replaced by ensemble averaging. Proper orthogonal decomposition (POD) of the second cumulant is used to reduce the dimensionality of the problem. The approach is tested on a quasi-geostrophic 2-layer baroclinic model of planetary atmospheres by comparison to the traditional approach of accumulating statistics via numerical simulation, and to zonally averaged CE2. Supported in part by NSF DMR-1306806 and NSF CCF-1048701.
Modelling and designing digital control systems with averaged measurements
NASA Technical Reports Server (NTRS)
Polites, Michael E.; Beale, Guy O.
1988-01-01
An account is given of the control systems engineering methods applicable to the design of digital feedback controllers for aerospace deterministic systems in which the output, rather than being an instantaneous measure of the system at the sampling instants, instead represents an average measure of the system over the time interval between samples. The averaging effect can be included during the modeling of the plant, thereby obviating the iteration of design/simulation phases.
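The central modeling idea, a sensor that reports the average of the output over the sample interval rather than its value at the sampling instant, can be sketched for a hypothetical first-order plant (an illustration of the averaged-measurement effect, not the paper's aerospace design procedure):

```python
import math

# First-order plant x' = -a*(x - u) with zero-order-hold input u. A sensor
# that integrates over each sample interval T reports the interval average
# of x(t), not the instantaneous sample x(kT).
a, T = 2.0, 0.1

def step(x, u):
    e = math.exp(-a * T)
    x_next = e * x + (1.0 - e) * u                 # exact discrete update
    # Interval average of the exact solution x(t) = u + (x - u)*exp(-a*t):
    y_avg = u + (x - u) * (1.0 - e) / (a * T)
    return x_next, y_avg

x = 0.0
for k in range(5):
    x, y = step(x, 1.0)                            # unit-step input
    print(k, round(x, 4), round(y, 4))             # averaged y lags the sample
```

Because the averaged measurement lags the instantaneous state during transients, a controller designed against instantaneous samples will see extra effective delay; folding the averaging into the plant model, as the abstract suggests, accounts for this at design time.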
Average oxidation state of carbon in proteins
Dick, Jeffrey M.
2014-01-01
The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (ZC) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation–reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between ZC and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in ZC in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower ZC tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales. PMID:25165594
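The "elemental ratio" calculation of ZC can be sketched as follows. The formula assumes the conventional fixed oxidation states H = +1, N = -3, O = -2, S = -2 and solves the charge balance for carbon; the helper function is illustrative, not the paper's code:

```python
def carbon_oxidation_state(counts, charge=0):
    # Average oxidation state of carbon from a chemical formula, using fixed
    # states H = +1, N = -3, O = -2, S = -2 and solving the charge balance:
    #   Zc = (Z - nH + 3*nN + 2*nO + 2*nS) / nC
    return (charge - counts.get("H", 0) + 3 * counts.get("N", 0)
            + 2 * counts.get("O", 0) + 2 * counts.get("S", 0)) / counts["C"]

print(carbon_oxidation_state({"C": 2, "H": 5, "N": 1, "O": 2}))  # glycine: 1.0
print(carbon_oxidation_state({"C": 1, "H": 4}))                  # methane: -4.0
print(carbon_oxidation_state({"C": 1, "O": 2}))                  # CO2: 4.0
```

Summing this over a protein's amino acid composition, weighted by carbon counts, gives the per-protein ZC compared across subcellular compartments in the paper.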
Whatever Happened to the Average Student?
ERIC Educational Resources Information Center
Krause, Tom
2005-01-01
Mandated state testing, college entrance exams and their perceived need for higher and higher grade point averages have raised the anxiety levels felt by many of the average students. Too much focus is placed on state test scores and college entrance standards with not enough focus on the true level of the students. The author contends that…
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2014 CFR
2014-07-01
... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2012 CFR
2012-07-01
... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General provisions. In lieu of complying with the...
A note on generalized averaged Gaussian formulas
NASA Astrophysics Data System (ADS)
Spalevic, Miodrag
2007-11-01
We have recently proposed a very simple numerical method for constructing the averaged Gaussian quadrature formulas. These formulas exist in many more cases than the real positive Gauss-Kronrod formulas. In this note we try to answer whether the averaged Gaussian formulas are an adequate alternative to the corresponding Gauss-Kronrod quadrature formulas, to estimate the remainder term of a Gaussian rule.
Determinants of College Grade Point Averages
ERIC Educational Resources Information Center
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even continued enrollment at a university. However, GPAs are determined not only by student ability but also by…
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
Average Transmission Probability of a Random Stack
ERIC Educational Resources Information Center
Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg
2010-01-01
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
New results on averaging theory and applications
NASA Astrophysics Data System (ADS)
Cândido, Murilo R.; Llibre, Jaume
2016-08-01
The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations to finding the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function at it is zero, the classical averaging theory does not provide information about the periodic solution associated with a non-simple zero. Here we provide sufficient conditions in order that the averaging theory can be applied also to non-simple zeros for studying their associated periodic solutions. Additionally, we give two applications of this new result for studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.
Short-Term Auditory Memory of Above-Average and Below-Average Grade Three Readers.
ERIC Educational Resources Information Center
Caruk, Joan Marie
To determine if performance on short term auditory memory tasks is influenced by reading ability or sex differences, 62 third grade reading students (16 above average boys, 16 above average girls, 16 below average boys, and 14 below average girls) were administered four memory tests--memory for consonant names, memory for words, memory for…
Clarifying the Relationship between Average Excesses and Average Effects of Allele Substitutions.
Alvarez-Castro, José M; Yang, Rong-Cai
2012-01-01
Fisher's concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance. PMID:22509178
Physics of the spatially averaged snowmelt process
NASA Astrophysics Data System (ADS)
Horne, Federico E.; Kavvas, M. Levent
1997-04-01
It has been recognized that the snowmelt models developed in the past do not fully meet current prediction requirements. Part of the reason is that they do not account for the spatial variation in the dynamics of the spatially heterogeneous snowmelt process. Most of the current physics-based distributed snowmelt models utilize point-location-scale conservation equations which do not represent the spatially varying snowmelt dynamics over a grid area that surrounds a computational node. In this study, to account for the spatial heterogeneity of the snowmelt dynamics, areally averaged mass and energy conservation equations for the snowmelt process are developed. As a first step, energy and mass conservation equations that govern the snowmelt dynamics at a point location are averaged over the snowpack depth, resulting in depth averaged equations (DAE). In this averaging, it is assumed that the snowpack has two layers. Then, the point location DAE are averaged over the snowcover area. To develop the areally averaged equations of the snowmelt physics, we make the fundamental assumption that the snowmelt process is spatially ergodic. The snow temperature and the snow density are considered as the stochastic variables. The areally averaged snowmelt equations are obtained in terms of their corresponding ensemble averages. Only the first two moments are considered. A numerical solution scheme (Runge-Kutta) is then applied to solve the resulting system of ordinary differential equations. This equation system is solved for the areal mean and areal variance of snow temperature and of snow density, for the areal mean of snowmelt, and for the areal covariance of snow temperature and snow density. The developed model is tested using Scott Valley (Siskiyou County, California) snowmelt and meteorological data. The performance of the model in simulating the observed areally averaged snowmelt is satisfactory.
Cosmic Inhomogeneities and Averaged Cosmological Dynamics
NASA Astrophysics Data System (ADS)
Paranjape, Aseem; Singh, T. P.
2008-10-01
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a “dark energy.” However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be “no.” Averaging effects negligibly influence the cosmological dynamics.
Average shape of transport-limited aggregates.
Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z
2005-08-12
We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry. PMID:16196793
NASA Technical Reports Server (NTRS)
Rajagopal, Kadambi R.; DebChaudhury, Amitabha; Orient, George
2000-01-01
This report describes a probabilistic structural analysis performed to determine the probabilistic structural response under fluctuating random pressure loads for the Space Shuttle Main Engine (SSME) turnaround vane. It uses a newly developed frequency and distance dependent correlation model that has features to model the decay phenomena along the flow and across the flow with the capability to introduce a phase delay. The analytical results are compared using two computer codes SAFER (Spectral Analysis of Finite Element Responses) and NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) and with experimentally observed strain gage data. The computer code NESSUS with an interface to a subset of the Composite Load Spectra (CLS) code is used for the probabilistic analysis. A fatigue code was used to calculate fatigue damage due to the random pressure excitation. The random variables modeled include engine system primitive variables that influence the operating conditions, convection velocity coefficient, stress concentration factor, structural damping, and thickness of the inner and outer vanes. The need for an appropriate correlation model in addition to magnitude of the PSD is emphasized. The study demonstrates that correlation characteristics even under random pressure loads are capable of causing resonance-like effects for some modes. The study identifies the important variables that contribute to structural alternate stress response and drive the fatigue damage for the new design. Since the alternate stress for the new redesign is less than the endurance limit for the material, the damage due to high-cycle fatigue is negligible.
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
Code of Federal Regulations, 2013 CFR
2013-07-01
... offset by positive credits from engine families below the applicable emission standard, as allowed under the provisions of this subpart. Averaging of credits in this manner is used to determine...
Code of Federal Regulations, 2010 CFR
2010-07-01
... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...
Code of Federal Regulations, 2011 CFR
2011-07-01
... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...
Code of Federal Regulations, 2013 CFR
2013-07-01
... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...
Code of Federal Regulations, 2012 CFR
2012-07-01
... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...
Code of Federal Regulations, 2014 CFR
2014-07-01
... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+NOX FEL or a PM FEL above the applicable...
Total-pressure averaging in pulsating flows.
NASA Technical Reports Server (NTRS)
Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.
1972-01-01
A number of total-pressure tubes were tested in a nonsteady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered within the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles.
Stochastic Averaging of Duhem Hysteretic Systems
NASA Astrophysics Data System (ADS)
YING, Z. G.; ZHU, W. Q.; NI, Y. Q.; KO, J. M.
2002-06-01
The response of a Duhem hysteretic system to externally and/or parametrically non-white random excitations is investigated by using the stochastic averaging method. A class of integrable Duhem hysteresis models covering many existing hysteresis models is identified and the potential energy and dissipated energy of the Duhem hysteretic component are determined. The Duhem hysteretic system under random excitations is replaced equivalently by a non-hysteretic non-linear random system. The averaged Ito stochastic differential equation for the total energy is derived and the Fokker-Planck-Kolmogorov equation associated with the averaged Ito equation is solved to yield the stationary probability density of total energy, from which the statistics of the system response can be evaluated. It is observed that the numerical results obtained using the stochastic averaging method are in good agreement with those from digital simulation.
Geologic analysis of averaged magnetic satellite anomalies
NASA Technical Reports Server (NTRS)
Goyal, H. K.; Vonfrese, R. R. B.; Ridgway, J. R.; Hinze, W. J.
1985-01-01
To investigate relative advantages and limitations for quantitative geologic analysis of magnetic satellite scalar anomalies derived from arithmetic averaging of orbital profiles within equal-angle or equal-area parallelograms, the anomaly averaging process was simulated by orbital profiles computed from spherical-earth crustal magnetic anomaly modeling experiments using Gauss-Legendre quadrature integration. The results indicate that averaging can provide reasonable values at satellite elevations, where contributing error factors within a given parallelogram include the elevation distribution of the data, and orbital noise and geomagnetic field attributes. Various inversion schemes including the use of equivalent point dipoles are also investigated as an alternative to arithmetic averaging. Although inversion can provide improved spherical grid anomaly estimates, these procedures are problematic in practice where computer scaling difficulties frequently arise due to a combination of factors including large source-to-observation distances ( 400 km), high geographic latitudes, and low geomagnetic field inclinations.
Spacetime Average Density (SAD) cosmological measures
Page, Don N.
2014-11-01
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the SAD of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.
Total pressure averaging in pulsating flows
NASA Technical Reports Server (NTRS)
Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.
1972-01-01
A number of total-pressure tubes were tested in a non-steady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered within the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure, and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles. The experiments were performed at a pressure level of 1 bar, for Mach number up to near 1, and frequencies up to 3 kHz.
Monthly average polar sea-ice concentration
Schweitzer, Peter N.
1995-01-01
The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.
Heuristic approach to capillary pressures averaging
Coca, B.P.
1980-10-01
Several methods are available to average capillary pressure curves. Among these are the J-curve method and regression equations of wetting-fluid saturation on porosity and permeability (capillary pressure held constant). While the regression equations seem completely empirical, the J-curve method appears theoretically sound because its expression is based on a relation between the average capillary radius and the permeability-porosity ratio. An analysis is given of each of these methods.
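The J-curve referred to here is the Leverett J-function, which normalizes capillary pressure by sqrt(k/phi) and the interfacial-tension term so that curves from samples of different permeability and porosity collapse together. A minimal sketch, with invented sample values (units must simply be consistent):

```python
import math

def leverett_j(pc, sigma, theta_deg, k, phi):
    # Leverett J-function: dimensionless capillary pressure.
    #   J = Pc * sqrt(k / phi) / (sigma * cos(theta))
    # e.g. Pc in Pa, sigma in N/m, k in m^2, phi as a fraction.
    return pc * math.sqrt(k / phi) / (sigma * math.cos(math.radians(theta_deg)))

# Invented sandstone-like sample: Pc = 10 kPa, sigma = 0.03 N/m, theta = 0,
# k = 100 mD (~1e-13 m^2), phi = 0.2.
print(leverett_j(1.0e4, 0.03, 0.0, 1.0e-13, 0.2))
```

Because J is linear in Pc, an average J-curve built from several samples can be rescaled back to a capillary pressure curve for any target permeability-porosity pair, which is the theoretical appeal the abstract notes.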
NASA Technical Reports Server (NTRS)
Moore, Franklin K; Ostrach, Simon
1957-01-01
The time-average characteristics of boundary layers over a flat plate in nearly quasi-steady flow are determined. The plate may be either insulated or isothermal. The time averages are found without specifying the plate velocity explicitly except that it is positive and has an average value.
Explicit cosmological coarse graining via spatial averaging
NASA Astrophysics Data System (ADS)
Paranjape, Aseem; Singh, T. P.
2008-01-01
The present matter density of the Universe, while highly inhomogeneous on small scales, displays approximate homogeneity on large scales. We propose that whereas it is justified to use the Friedmann-Lemaître-Robertson-Walker (FLRW) line element (which describes an exactly homogeneous and isotropic universe) as a template to construct luminosity distances in order to compare observations with theory, the evolution of the scale factor in such a construction must be governed not by the standard Einstein equations for the FLRW metric, but by the modified Friedmann equations derived by Buchert (Gen Relat Gravit 32:105, 2000; 33:1381, 2001) in the context of spatial averaging in Cosmology. Furthermore, we argue that this scale factor, defined in the spatially averaged cosmology, will correspond to the effective FLRW metric provided the size of the averaging domain coincides with the scale at which cosmological homogeneity arises. This allows us, in principle, to compare predictions of a spatially averaged cosmology with observations, in the standard manner, for instance by computing the luminosity distance versus redshift relation. The predictions of the spatially averaged cosmology would in general differ from standard FLRW cosmology, because the scale-factor now obeys the modified FLRW equations. This could help determine, by comparing with observations, whether or not cosmological inhomogeneities are an alternative explanation for the observed cosmic acceleration.
Factors Influencing Grade Point Averages at a Community College.
ERIC Educational Resources Information Center
Johnson, Marvin L.; Walberg, Herbert J.
1989-01-01
Examines the applicability of Walberg's model of educational productivity to a community college setting. Finds that prior achievement, use of out-of-school time, motivation, social context of the classroom, and age have positive effects on grade point average, while quantity of instruction and emphasis on education at home have negative effects.…
Designing a Response Scale to Improve Average Group Response Reliability
ERIC Educational Resources Information Center
Davies, Randall
2008-01-01
Creating surveys is a common task in evaluation research; however, designing a survey instrument to gather average group response data that can be interpreted in a meaningful way over time can be challenging. When surveying groups of people for the purpose of longitudinal analysis, the reliability of the result is often determined by the response…
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English-language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of the inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
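The quantities the abstract combines are simple to compute. A minimal sketch, using hypothetical yearly data (the function names and the toy figures below are illustrative, not the authors' dataset):

```python
def misery_index(inflation, unemployment):
    """Economic misery index for one year: inflation + unemployment (both in %)."""
    return inflation + unemployment

def trailing_moving_average(series, window):
    """Moving average over the `window` most recent values, current year included.

    The paper's best fit used an 11-year trailing window; any window works here.
    """
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical yearly (inflation %, unemployment %) pairs:
econ = [misery_index(i, u) for i, u in [(3.0, 5.0), (4.5, 6.0), (2.0, 7.5), (1.5, 4.0)]]
smoothed = trailing_moving_average(econ, window=3)
```

A literary misery index series would then be correlated against `smoothed` rather than against the raw yearly values.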
High Average Power Yb:YAG Laser
Zapata, L E; Beach, R J; Payne, S A
2001-05-23
We are working on a composite thin-disk laser design that can be scaled as a source of high-brightness laser power for tactical engagement and other high average power applications. The key component is a diffusion-bonded composite comprising a thin gain medium and thicker cladding that is strikingly robust and resolves prior difficulties with high average power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high power rods or slabs, the one-dimensional nature of the cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress this last year included: extraction with high beam quality using a telescopic resonator; a heterogeneous thin-film coating prescription that meets the unusual requirements demanded by this laser architecture; and thermal management with our first-generation cooler. Progress was also made in the design of a second-generation laser.
40 CFR Figure 1 to Subpart Qqq of... - Data Summary Sheet for Determination of Average Opacity
Code of Federal Regulations, 2014 CFR
2014-07-01
... for Determination of Average Opacity Clock time Number of converters blowing Converter aisle activity... interval?(yes or no) Average opacity for 1-minute interval blowing without visible emission...
40 CFR Figure 1 to Subpart Qqq of... - Data Summary Sheet for Determination of Average Opacity
Code of Federal Regulations, 2012 CFR
2012-07-01
... for Determination of Average Opacity Clock time Number of converters blowing Converter aisle activity... interval?(yes or no) Average opacity for 1-minute interval blowing without visible emission...
An improved moving average technical trading rule
NASA Astrophysics Data System (ADS)
Papailias, Fotis; Thomakos, Dimitrios D.
2015-06-01
This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
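The mechanics of a long-only cross-over rule with a dynamic trailing stop can be sketched as follows. This is an illustrative reading of the idea, not the authors' exact specification; the window length, the stop fraction, and the exit rule (exit when price falls a fixed fraction below its peak since entry) are assumptions:

```python
def ma_crossover_with_trailing_stop(prices, window=3, stop_frac=0.05):
    """Long-only moving-average cross-over with a dynamic trailing stop.

    Enter when price closes above its moving average; while long, track the
    highest price since entry and exit when price drops below
    (1 - stop_frac) * peak. Returns a list of (buy_index, sell_index) trades.
    """
    trades, in_pos, entry, peak = [], False, None, None
    for i in range(window - 1, len(prices)):
        ma = sum(prices[i - window + 1:i + 1]) / window
        p = prices[i]
        if not in_pos and p > ma:           # cross-over 'buy' signal
            in_pos, entry, peak = True, i, p
        elif in_pos:
            peak = max(peak, p)
            if p < (1 - stop_frac) * peak:  # dynamic trailing-stop exit
                trades.append((entry, i))
                in_pos = False
    return trades
```

The trailing stop is what distinguishes this from the plain cross-over rule: the exit threshold ratchets upward with the running peak instead of waiting for a downward MA cross.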
Polarized electron beams at milliampere average current
Poelker, Matthew
2013-11-01
This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 µA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltages > 350 kV.
Average: the juxtaposition of procedure and context
NASA Astrophysics Data System (ADS)
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
Mean Element Propagations Using Numerical Averaging
NASA Technical Reports Server (NTRS)
Ely, Todd A.
2009-01-01
The long-term evolution characteristics (and stability) of an orbit are best characterized using a mean element propagation of the perturbed two-body variational equations of motion. The averaging process eliminates short-period terms, leaving only secular and long-period effects. In this study, a non-traditional approach is taken that averages the variational equations using adaptive numerical techniques and then numerically integrates the resulting equations of motion. Doing this avoids the Fourier series expansions and truncations required by the traditional analytic methods. The resultant numerical techniques can be easily adapted to propagations at most solar system bodies.
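The core operation, numerically averaging a rate over one orbit so that short-period terms drop out, can be sketched in a toy form. This is a minimal illustration of the averaging step only (fixed-step trapezoidal quadrature rather than the adaptive techniques the abstract describes, and a made-up rate function):

```python
import math

def orbit_average(rate, n=1000):
    """Numerically average a 2*pi-periodic rate function over one orbit:
    <f> = (1/2*pi) * integral of f(theta) from 0 to 2*pi,
    via the composite trapezoidal rule. Short-period terms integrate to
    (nearly) zero, leaving the secular rate.
    """
    h = 2 * math.pi / n
    total = 0.5 * (rate(0.0) + rate(2 * math.pi))
    total += sum(rate(k * h) for k in range(1, n))
    return total * h / (2 * math.pi)

# Toy rate: a secular drift of 0.3 plus a short-period oscillation.
mean_rate = orbit_average(lambda th: 0.3 + 0.8 * math.sin(th))
```

The averaged value recovers only the secular drift; a mean element propagator would then integrate such averaged rates with a much larger time step than the full equations allow.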
NASA Astrophysics Data System (ADS)
Mattmann, C. A.
2014-12-01
The JPL Airborne Snow Observatory (ASO) is an integrated LIDAR and spectrometer measuring snow depth and rate of snow melt in the Sierra Nevada, specifically the Tuolumne River Basin, Sierra Nevada, California, above the O'Shaughnessy Dam of the Hetch Hetchy reservoir, and the Uncompahgre Basin, Colorado, amongst other sites. The ASO data was delivered to water resource managers from the California Department of Water Resources in under 24 hours from the time that the Twin Otter aircraft landed in Mammoth Lakes, CA to the time disks were plugged in to the ASO Mobile Compute System (MCS) deployed at the Sierra Nevada Aquatic Research Laboratory (SNARL) near the airport. ASO performed weekly flights, and each flight produced between 500 GB and 1 TB of raw data, which was then processed from level 0 data products all the way to full level 4 maps of Snow Water Equivalent, albedo mosaics, and snow depth from LIDAR. These data were produced by Interactive Data Language (IDL) algorithms which were then unobtrusively and automatically integrated into an Apache OODT and Apache Tika based Big Data processing system. Data movement was both electronic and physical, including novel uses of LaCie 1 and 2 terabyte (TB) data bricks and deployment in rugged terrain. The MCS was controlled remotely from the Jet Propulsion Laboratory, California Institute of Technology (JPL) in Pasadena, California on behalf of the National Aeronautics and Space Administration (NASA). Communication was aided through the use of novel Internet Relay Chat (IRC) command and control mechanisms and through the use of the Notifico open source communication tools. This talk will describe the high-powered, lightweight Big Data processing system that we developed for ASO and its implications more broadly for airborne missions at NASA and throughout the government. The lessons learned from ASO show the potential to have a large impact in the development of Big Data processing systems in the years
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the new FEL. Manufacturers must test the motorcycles according to 40 CFR part 1051, subpart D...) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 Averaging provisions. (a) This section describes...
A Functional Measurement Study on Averaging Numerosity
ERIC Educational Resources Information Center
Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio
2014-01-01
In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…
Cryo-Electron Tomography and Subtomogram Averaging.
Wan, W; Briggs, J A G
2016-01-01
Cryo-electron tomography (cryo-ET) allows 3D volumes to be reconstructed from a set of 2D projection images of a tilted biological sample. It allows densities to be resolved in 3D that would otherwise overlap in 2D projection images. Cryo-ET can be applied to resolve structural features in complex native environments, such as within the cell. Analogous to single-particle reconstruction in cryo-electron microscopy, structures present in multiple copies within tomograms can be extracted, aligned, and averaged, thus increasing the signal-to-noise ratio and resolution. This reconstruction approach, termed subtomogram averaging, can be used to determine protein structures in situ. It can also be applied to facilitate more conventional 2D image analysis approaches. In this chapter, we provide an introduction to cryo-ET and subtomogram averaging. We describe the overall workflow, including tomographic data collection, preprocessing, tomogram reconstruction, subtomogram alignment and averaging, classification, and postprocessing. We consider theoretical issues and practical considerations for each step in the workflow, along with descriptions of recent methodological advances and remaining limitations. PMID:27572733
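The statistical payoff of subtomogram averaging is that the signal adds coherently across aligned copies while uncorrelated noise averages toward zero, so the noise in the mean shrinks like 1/sqrt(N). A toy one-dimensional sketch of that step alone (it assumes perfect alignment and Gaussian noise; real pipelines must also handle alignment search, CTF correction, and the missing wedge):

```python
import random

random.seed(0)
signal = [1.0, -2.0, 0.5, 3.0, -1.0]  # "true" density values along one voxel row

def noisy_copy(sig, sigma=2.0):
    """One aligned subtomogram: the signal plus independent Gaussian noise."""
    return [s + random.gauss(0.0, sigma) for s in sig]

def subtomogram_average(copies):
    """Voxel-wise mean over aligned copies; noise std shrinks like 1/sqrt(N)."""
    n = len(copies)
    return [sum(c[i] for c in copies) / n for i in range(len(copies[0]))]

avg = subtomogram_average([noisy_copy(signal) for _ in range(400)])
rms_err = (sum((a - s) ** 2 for a, s in zip(avg, signal)) / len(signal)) ** 0.5
```

With 400 copies the residual noise is roughly sigma/20, far below the single-copy noise level; in a real tomogram the same principle raises the attainable resolution.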
Initial Conditions in the Averaging Cognitive Model
ERIC Educational Resources Information Center
Noventa, S.; Massidda, D.; Vidotto, G.
2010-01-01
The initial state parameters s_0 and w_0 are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982), but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…
Averaging models for linear piezostructural systems
NASA Astrophysics Data System (ADS)
Kim, W.; Kurdila, A. J.; Stepanyan, V.; Inman, D. J.; Vignola, J.
2009-03-01
In this paper, we consider a linear piezoelectric structure which employs a fast-switched, capacitively shunted subsystem to yield a tunable vibration absorber or energy harvester. The dynamics of the system is modeled as a hybrid system, where the switching law is considered as a control input and the ambient vibration is regarded as an external disturbance. It is shown that under mild assumptions of existence and uniqueness of the solution of this hybrid system, averaging theory can be applied, provided that the original system dynamics is periodic. The resulting averaged system is controlled by the duty cycle of a driven pulse-width modulated signal. The response of the averaged system approximates the performance of the original fast-switched linear piezoelectric system. It is analytically shown that the averaging approximation can be used to predict the electromechanically coupled system modal response as a function of the duty cycle of the input switching signal. This prediction is experimentally validated for the system consisting of a piezoelectric bimorph connected to an electromagnetic exciter. Experimental results show that the analytical predictions are observed in practice over a fixed "effective range" of switching frequencies. The same experiments show that the response of the switched system is insensitive to an increase in switching frequency above the effective frequency range.
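The averaging idea can be illustrated with a deliberately simplified model. Assume (hypothetically; these are not the paper's parameters) that fast switching toggles the structure's effective stiffness between two values, so averaging theory replaces the hybrid dynamics by the duty-cycle-weighted vector field:

```python
import math

# Minimal sketch of averaging a fast-switched linear oscillator (hypothetical
# parameters, not the authors' experimental setup). For switching much faster
# than the mechanical period, the averaged system sees the duty-cycle-weighted
# stiffness, so the resonance is tuned continuously by the duty cycle.
m = 1.0                  # modal mass (kg)
k1, k2 = 100.0, 150.0    # effective stiffness in the two switch states (N/m)

def averaged_natural_frequency(duty):
    """Natural frequency (rad/s) of the averaged system for duty in [0, 1]."""
    k_avg = duty * k1 + (1.0 - duty) * k2   # duty-cycle-averaged stiffness
    return math.sqrt(k_avg / m)

# The averaged resonance interpolates between the two switched extremes:
w_lo, w_hi = averaged_natural_frequency(1.0), averaged_natural_frequency(0.0)
```

This captures why the duty cycle of the pulse-width modulated switching signal acts as the control input: sweeping it moves the averaged resonance between the two switched limits.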
A Measure of the Average Intercorrelation
ERIC Educational Resources Information Center
Meyer, Edward P.
1975-01-01
Bounds are obtained for a coefficient proposed by Kaiser as a measure of average correlation and the coefficient is given an interpretation in the context of reliability theory. It is suggested that the root-mean-square intercorrelation may be a more appropriate measure of degree of relationships among a group of variables. (Author)
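The contrast the abstract draws is easy to make concrete: opposite-signed correlations cancel in the arithmetic mean but not in the root-mean-square. A minimal sketch (the function name is illustrative, not Kaiser's coefficient itself):

```python
def mean_and_rms_intercorrelation(R):
    """Given a symmetric correlation matrix R (list of lists), return the
    arithmetic mean and the root-mean-square of the off-diagonal r_ij.
    The RMS weights positive and negative correlations alike.
    """
    k = len(R)
    offdiag = [R[i][j] for i in range(k) for j in range(i + 1, k)]
    mean_r = sum(offdiag) / len(offdiag)
    rms_r = (sum(r * r for r in offdiag) / len(offdiag)) ** 0.5
    return mean_r, rms_r

# Opposite-signed correlations cancel in the mean but not in the RMS:
mean_r, rms_r = mean_and_rms_intercorrelation(
    [[1.0, 0.6, -0.6],
     [0.6, 1.0, 0.0],
     [-0.6, 0.0, 1.0]])
```

Here the mean intercorrelation is zero even though two of the three pairs are strongly related, which is exactly the situation where an RMS-type measure is more informative.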