Science.gov

Sample records for average turnaround time

  1. Emergency Department Overcrowding and Ambulance Turnaround Time

    PubMed Central

    Lee, Yu Jin; Shin, Sang Do; Lee, Eui Jung; Cho, Jin Seong; Cha, Won Chul

    2015-01-01

Objective The aims of this study were to describe overcrowding in regional emergency departments in Seoul, Korea and evaluate the effect of crowdedness on ambulance turnaround time. Methods This study was conducted between January 2010 and December 2010. Patients who were transported by 119-responding ambulances to 28 emergency centers within Seoul were eligible for enrollment. Overcrowding was defined as the average occupancy rate, which was equal to the average number of patients staying in an emergency department (ED) for 4 hours divided by the number of beds in the ED. After selecting groups for final analysis, multi-level regression modeling (MLM) was performed with random effects for EDs to evaluate associations between occupancy rate and turnaround time. Results Between January 2010 and December 2010, 163,659 patients transported to 28 EDs were enrolled. The median occupancy rate was 0.42 (range: 0.10-1.94; interquartile range (IQR): 0.20-0.76). Overcrowded EDs were more likely to have older patients, patients with normal mental status, and non-trauma patients. Overcrowded EDs were also more likely to have longer turnaround intervals and travel distances. The MLM analysis showed that an increase of 1% in occupancy rate was associated with a 0.02-minute decrease in turnaround interval (95% CI: 0.01 to 0.03). In subgroup analyses limited to EDs with occupancy rates over 100%, we also observed a 0.03-minute decrease in turnaround interval per 1% increase in occupancy rate (95% CI: 0.01 to 0.05). Conclusions In this study, we found wide variation in emergency department crowding in a metropolitan Korean city. Our data indicate that ED overcrowding is negatively associated with turnaround interval, with very small practical significance. PMID:26115183
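The occupancy-rate definition above lends itself to a one-line computation. The sketch below is purely illustrative (bed counts and census figures are invented, and the 4-hour window is interpreted here as four hourly census readings):

```python
# Hypothetical sketch of the ED occupancy-rate definition described above:
# average number of patients present over a 4-hour window divided by ED beds.

def occupancy_rate(patient_counts, n_beds):
    """patient_counts: census readings over a 4-hour window (e.g., hourly)."""
    return (sum(patient_counts) / len(patient_counts)) / n_beds

# An ED with 20 beds averaging 31 patients present is overcrowded (rate > 1).
rate = occupancy_rate([28, 30, 32, 34], 20)
print(round(rate, 2))  # 1.55
```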

  2. Turnaround Time Modeling for Conceptual Rocket Engines

    NASA Technical Reports Server (NTRS)

    Nix, Michael; Staton, Eric J.

    2004-01-01

Recent years have brought about a paradigm shift within NASA and the Space Launch Community regarding the performance of conceptual design. Reliability, maintainability, supportability, and operability are no longer effects of design; they have moved to the forefront and are affecting design. A primary focus of this shift has been a planned decrease in vehicle turnaround time. Potential means of instituting this decrease include attacking the issues of removing, refurbishing, and replacing the engines after each flight. Nevertheless, it is important to understand the operational effects of an engine on turnaround time, ground support personnel and equipment. One tool for visualizing this relationship involves the creation of a Discrete Event Simulation (DES). A DES model can be used to run a series of trade studies to determine if the engine is meeting its requirements, and, if not, what can be altered to bring it into compliance. Using DES, it is possible to look at the ways in which labor requirements, parallel versus serial maintenance, and maintenance scheduling affect the overall turnaround time. A detailed DES model of the Space Shuttle Main Engines (SSME) has been developed. Trades may be performed using the SSME Processing Model to see where maintenance bottlenecks occur and what the benefits (if any) are of increasing the number of personnel or the number and location of facilities, in addition to the trades previously mentioned, all with the goal of optimizing the operational turnaround time and minimizing operational cost. The SSME Processing Model was developed in such a way that it can easily be used as a foundation for developing DES models of other operational or developmental reusable engines. Performing a DES on a developmental engine during the conceptual phase makes it easier to affect the design and make changes that bring about a decrease in turnaround time and costs.
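As a rough illustration of the kind of trade a DES supports, the toy event-driven simulation below models maintenance crews as a shared resource and compares one crew against three. It ignores task precedence, models only crew contention, and is in no way the SSME Processing Model; all durations are invented:

```python
import heapq

# Toy event-driven simulation: maintenance jobs compete for a limited pool
# of crews. Durations (hours) and crew counts are invented for illustration;
# task precedence (e.g., refurbish before replace) is deliberately ignored.
def simulate(jobs, n_crews):
    """jobs: list of task durations (hours). Returns overall turnaround."""
    crews = [0.0] * n_crews            # time at which each crew becomes free
    heapq.heapify(crews)
    finish = 0.0
    for duration in jobs:
        start = heapq.heappop(crews)   # earliest available crew takes the job
        end = start + duration
        heapq.heappush(crews, end)
        finish = max(finish, end)
    return finish

jobs = [8, 16, 16, 16, 6]              # remove, 3 refurb subtasks, replace
print(simulate(jobs, 1))  # 62.0 (serial: one crew)
print(simulate(jobs, 3))  # 24.0 (parallel crews)
```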

  3. Improving Reports Turnaround Time: An Essential Healthcare Quality Dimension.

    PubMed

    Khan, Mustafa; Khalid, Parwaiz; Al-Said, Youssef; Cupler, Edward; Almorsy, Lamia; Khalifa, Mohamed

    2016-01-01

Turnaround time is one of the most important healthcare performance indicators. King Faisal Specialist Hospital and Research Center in Jeddah, Saudi Arabia worked on reducing the report turnaround time of the neurophysiology lab from more than two weeks to only five working days for 90% of cases. The main quality improvement methodology used was the FOCUS PDCA. Using root cause analysis, Pareto analysis and qualitative survey methods, the main factors contributing to the delay in turnaround time and the suggested improvement strategies were identified and implemented, through restructuring transcriptionists' daily tasks, rescheduling physicians' time, alerting physicians to new reports, engaging consultants, coordinating consistently, and prioritizing critical reports. After implementation, 92% of reports were verified within 5 days, compared to only 6% before implementation; 7% of reports were verified in 5 days to 2 weeks, and only 1% of reports needed more than 2 weeks, compared to 76% before implementation. PMID:27350505

  4. Emergency care center turnaround time--an improvement story.

    PubMed

    Gelrud, Joan; Burroughs, Helen; Koterwas, Joanne

    2008-01-01

    Emergency department overcrowding is a nationally recognized barrier to patient safety. Other obstacles to efficiency and adequate care in emergency rooms include lengthy patient waits and side-tracked ambulances. This article explores one community hospital's approach to significantly decreasing emergency visit turnaround times while increasing patient satisfaction.

  5. Turnaround Time and Market Capacity in Contract Cheating

    ERIC Educational Resources Information Center

    Wallace, Melisa J.; Newton, Philip M.

    2014-01-01

    Contract cheating is the process whereby students auction off the opportunity for others to complete assignments for them. It is an apparently widespread yet under-researched problem. One suggested strategy to prevent contract cheating is to shorten the turnaround time between the release of assignment details and the submission date, thus making…

  6. Monitoring operating room turnaround time: a retrospective analysis.

    PubMed

    Scagliarini, Michele; Apreda, Mariarosaria; Wienand, Ulrich; Valpiani, Giorgia

    2016-04-18

Purpose - Operating room (OR) turnaround time is a key process indicator for hospital business management: delays lead to a reduced number of surgical interventions per day, with a consequent increase in costs and decrease in efficiency. The purpose of this paper is to increase understanding by assessing the process's steady-state behaviour and identifying changes that indicate either improvement or deterioration in quality. Design/methodology/approach - With this purpose, the authors retrospectively applied Shewhart control charts and exponentially weighted moving average (EWMA) control charts to data extracted from a hospital information system. Findings - The results showed that statistical process control is able to identify steady-state process behaviour and to detect positive or negative changes in process performance. In particular, the authors detected a deterioration in process performance coinciding with a change in the operating room patient transfer staff. Practical implications - This study showed that statistical quality control is a valuable tool for monitoring performance indicators. Currently, hospital managers are designing an OR dashboard which also includes the control charts. Originality/value - The paper highlights the application of control charts to organizational indicators, allowing an objective assessment of OR system performance. PMID:27120511
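For readers unfamiliar with EWMA charts, the sketch below applies the standard EWMA signalling rule to a series of turnaround times. The baseline target, sigma, smoothing constant and data are all invented for illustration, not taken from the hospital's data:

```python
# Sketch of an EWMA control chart on turnaround times (minutes).
# A point signals when the smoothed statistic drifts beyond L standard
# deviations of the EWMA statistic from the in-control target.
def ewma_signals(x, target, sigma, lam=0.2, L=3):
    z, flags = target, []
    for i, xi in enumerate(x, start=1):
        z = lam * xi + (1 - lam) * z
        # variance of the EWMA statistic at step i
        var = (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * i)) * sigma ** 2
        flags.append(abs(z - target) > L * var ** 0.5)   # True -> signal
    return flags

# Baseline: turnaround averaging 33 min with sigma 2 (invented numbers);
# a sustained shift upward begins at the 7th observation.
times = [32, 35, 31, 34, 33, 36, 45, 47, 49, 51]
print(ewma_signals(times, target=33.0, sigma=2.0))
```

The chart flags the last four points, illustrating how an EWMA detects a sustained shift in process performance such as the staffing change described above.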

  7. Improving medical imaging report turnaround times: the role of technology.

    PubMed

    Marquez, Luis O; Stewart, Howard

    2005-01-01

At Southern Ohio Medical Center (SOMC), the medical imaging department and the radiologists expressed a strong desire to improve workflow. The improved workflow was a major motivating factor toward implementing a new RIS and speech recognition technology. The need to monitor workflow in a real-time fashion and to evaluate productivity and resources necessitated that a new solution be found. A decision was made to roll out both the new RIS product and speech recognition together, to maximize the resources needed to interface and implement the new solution. Prior to implementation of the new RIS, the medical imaging department operated in a conventional electronic-order-entry to paper-request manner. The paper request followed the study through exam completion to the radiologist. SOMC entered into a contract with its PACS vendor to participate in beta testing and clinical trials for a new RIS product for the US market. Backup plans were created in the event the product failed to function as planned--either during the beta testing period or during clinical trials. The last piece of the technology puzzle to improve report turnaround time was speech recognition technology. Speech recognition enhanced the RIS technology as soon as it was implemented. The results show that the project has been a success. The new RIS, combined with speech recognition and the PACS, makes for a very effective solution to patient, exam, and results management in the medical imaging department.

  8. Turnaround Time (TAT): Difference in Concept for Laboratory and Clinician.

    PubMed

    Pati, Hara P; Singh, Gurmeet

    2014-06-01

Lab investigations are essential in patient management, and the quality of test reports is emphasized. But another aspect of quality is often overlooked: timeliness, which is expressed as turnaround time (TAT). Laboratory services are mostly directed at providing a rapid, reliable report at a reasonable cost. However, most laboratories put undue stress on reliability alone, whereas the clinician gives more stress to how soon (TAT) a report will be available. There is no clear definition of TAT, that is, of which period should be included in determining TAT for a specific test. For laboratory personnel, it would run from the time of receipt of the sample in the laboratory until the report is generated. For a clinician, however, it would more appropriately run from the time of his/her requisition of a test until the report reaches him/her. TAT is not the same for routine tests as for STAT/urgent tests, and it differs again for ICU/emergency services. Causes of poor satisfaction among lab users include both routine and STAT test TAT, and STAT test TAT is considered by the majority the most important indicator of a laboratory's functioning. Hospital computerization, with a record of the times of test request, sample collection, report generation and receipt of the report by the clinician, would help in generating TAT. Analyzing TAT outliers in a lab gives insight into the causes of delay in TAT and the areas needing improvement. Laboratories in developing countries are yet to use TAT and analyze it for laboratory improvement. PMID:24839360

  10. A strategy for reducing turnaround time in design optimization using a distributed computer system

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

There is a need to explore methods for reducing the lengthy computer turnaround (clock) time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means of reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.
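The strategy of farming independent analyses out to separate processors can be sketched with Python's multiprocessing module. The `analysis` function below is a stand-in placeholder, not the structural optimization code from the paper:

```python
# Sketch of the strategy described above: run independent analysis cases in
# worker processes so portions of the computation proceed in parallel.
from multiprocessing import Pool

def analysis(design_point):
    # placeholder for an expensive, independent engineering analysis
    return sum(x * x for x in design_point)

if __name__ == "__main__":
    design_points = [(1, 2), (3, 4), (5, 6)]
    with Pool(processes=3) as pool:
        # each design point is evaluated concurrently; order is preserved
        results = pool.map(analysis, design_points)
    print(results)  # [5, 25, 61]
```

This mirrors the paper's observation: the approach pays off when the problem decomposes into independent analyses, since each worker needs no data from the others while it runs.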

  11. Predictable turn-around time for post tape-out flow

    NASA Astrophysics Data System (ADS)

    Endo, Toshikazu; Park, Minyoung; Ghosh, Pradiptya

    2012-03-01

A typical post-tape-out data path at the IC fabrication facility has the following major components of software-based processing: Boolean operations before the application of resolution enhancement techniques (RET) and optical proximity correction (OPC); the RET and OPC step [etch retargeting, sub-resolution assist feature (SRAF) insertion, and OPC]; post-OPC/RET Boolean operations; and, sometimes in the same flow, simulation-based verification. There are two objectives that an IC fabrication tapeout flow manager wants to achieve with the flow: predictable completion time and fastest turn-around time (TAT). At times they may be competing. There have been studies in the literature modeling the turnaround time from historical data for runs with the same recipe and later using that model to derive the resource allocation for subsequent runs [3]. This approach is more feasible for predominantly simulation-dominated tools, but for an edge-operation-dominated flow it may not be possible, especially if processing-acceleration methods such as pattern matching or hierarchical processing are involved. In this paper, we suggest an alternative method of providing a target turnaround time and managing the priority of jobs without doing any upfront resource modeling and resource planning. The methodology then systematically either meets the turnaround-time target or lets the user know as soon as possible that it will not. This builds on top of the Calibre Cluster Management (CalCM) resource management work previously published [1][2]. The paper describes an initial demonstration of the concept.

  12. Mapping Turnaround Times (TAT) to a Generic Timeline: A Systematic Review of TAT Definitions in Clinical Domains

    PubMed Central

    2011-01-01

Background Assessing turnaround times can help to analyse workflows in hospital information systems. This paper presents a systematic review of literature concerning different turnaround time definitions. Our objectives were to collect relevant literature with respect to this kind of process time in hospitals and their respective domains. We then analysed the existing definitions and summarised them in an appropriate format. Methods Our search strategy was based on Pubmed queries and manual reviews of the bibliographies of retrieved articles. Studies were included if precise definitions of turnaround times were available. A generic timeline was designed through a consensus process to provide an overview of these definitions. Results More than 1000 articles were screened, of which 122 papers were included. From those, 162 turnaround time definitions in different clinical domains were identified. Starting and end points vary between these domains. To illustrate those turnaround time definitions, a generic timeline was constructed using preferred terms derived from the identified definitions. The consensus process resulted in the following 15 terms: admission, order, biopsy/examination, receipt of specimen in laboratory, procedure completion, interpretation, dictation, transcription, verification, report available, delivery, physician views report, treatment, discharge and discharge letter sent. Based on this analysis, several standard terms for turnaround time definitions are proposed. Conclusion Using turnaround times to benchmark clinical workflows is still difficult, because even within the same clinical domain many different definitions exist. Mapping of turnaround time definitions to a generic timeline is feasible. PMID:21609424
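Once milestones are mapped to such a generic timeline, any TAT definition reduces to a difference between two recorded timestamps. A minimal sketch, with invented event names and times (the milestone keys loosely follow the preferred terms above):

```python
# Sketch: one case's milestones recorded against a generic timeline, from
# which different TAT definitions become start/end timestamp differences.
from datetime import datetime

events = {
    "order":            datetime(2011, 5, 2, 8, 15),
    "receipt_in_lab":   datetime(2011, 5, 2, 8, 40),
    "verification":     datetime(2011, 5, 2, 9, 55),
    "physician_views":  datetime(2011, 5, 2, 11, 10),
}

def tat_minutes(events, start, end):
    return (events[end] - events[start]).total_seconds() / 60

# Same case, two different TAT definitions:
print(tat_minutes(events, "receipt_in_lab", "verification"))  # 75.0
print(tat_minutes(events, "order", "physician_views"))        # 175.0
```

The two very different numbers for the same case illustrate the paper's point: benchmarking TATs across studies is meaningless unless the start and end milestones are stated.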

  13. Real time application of whole genome sequencing for outbreak investigation - What is an achievable turnaround time?

    PubMed

    McGann, Patrick; Bunin, Jessica L; Snesrud, Erik; Singh, Seema; Maybank, Rosslyn; Ong, Ana C; Kwak, Yoon I; Seronello, Scott; Clifford, Robert J; Hinkle, Mary; Yamada, Stephen; Barnhill, Jason; Lesho, Emil

    2016-07-01

Whole genome sequencing (WGS) is increasingly employed in clinical settings, though few assessments of turnaround times (TAT) have been performed in real time. In this study, WGS was used to investigate an unfolding outbreak of vancomycin-resistant Enterococcus faecium (VRE) among 3 patients in the ICU of a tertiary care hospital. Including overnight culturing, a TAT of just 48.5 h for a comprehensive report was achievable using an Illumina MiSeq benchtop sequencer. WGS revealed that isolates from patients 2 and 3 differed from that of patient 1 by a single nucleotide polymorphism (SNP), indicating nosocomial transmission. However, the unparalleled resolution provided by WGS suggested that nosocomial transmission involved two separate events from patient 1 to patients 2 and 3, and not the linear transmission suspected from the timeline. Rapid TATs are achievable using WGS in the clinical setting and can provide an unprecedented level of resolution for outbreak investigations. PMID:27185645

  14. Timeliness “at a glance”: assessing the turnaround time through the six sigma metrics.

    PubMed

    Ialongo, Cristiano; Bernardini, Sergio

    2016-01-01

Almost thirty years of systematic analysis have proven the turnaround time to be a fundamental dimension for the clinical laboratory. Several indicators are available to date to assess and report quality with respect to timeliness, but they sometimes lack communicative immediacy and accuracy. Six sigma is a paradigm developed within the industrial domain for assessing quality and addressing goals and issues. The sigma level computed through the Z-score method is a simple and straightforward tool which delivers quality on a universal dimensionless scale and allows non-normal data to be handled. Herein we report our preliminary experience in using the sigma level to assess the change in urgent (STAT) test turnaround time due to the implementation of total automation. We found that the Z-score method is a valuable and easy-to-use method for assessing and communicating the quality level of laboratory timeliness, providing a good correspondence with the actual change in efficiency which was retrospectively observed.
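A common way to express timeliness as a sigma level is to treat the fraction of tests missing the TAT target as a defect rate and map it through the standard normal quantile, adding the conventional 1.5-sigma shift. The sketch below uses invented counts, not the authors' data, and shows that convention rather than the authors' exact computation:

```python
# Sketch: sigma level from a TAT "defect" rate (tests missing the target),
# via the standard normal quantile plus the conventional 1.5-sigma shift.
from statistics import NormalDist

def sigma_level(n_defects, n_total, shift=1.5):
    p_defect = n_defects / n_total
    return NormalDist().inv_cdf(1 - p_defect) + shift

# 120 of 10,000 STAT tests exceeded the TAT target (invented counts);
# this corresponds to a sigma level of roughly 3.8.
print(round(sigma_level(120, 10_000), 2))
```

The appeal noted in the abstract is visible here: a single dimensionless number summarizes timeliness regardless of the test or the TAT target chosen.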

  15. Reduction in order turnaround time, telephone calls, and trips to the pharmacy by means of facsimile transceivers.

    PubMed

    Newberg, D F; Stevens, J

    1988-02-01

The facsimile transceiver (FAX) was trialed at Beverly Hospital, Beverly, Massachusetts, in the Intensive Care Unit and the Pediatric Floor from March 13 to April 27, 1987. Beverly Hospital is a 233-bed nonprofit community hospital. The facsimile unit was used to transmit medication orders to the pharmacy over existing telephone lines. The purpose of the trial was to determine the time saved between the writing of the medication order and the arrival of the medication on the nursing unit; this time was considered the medication turnaround time. During the trial period, a total of 1767 medication order sheets were processed via the FAX machine. The results of the trial showed that medication turnaround time improved by an average of 90 minutes for each order placed. In addition, it was found that a total of 60 phone calls and 14 trips to the pharmacy per day were eliminated from both trial units combined. This resulted in an estimated time savings of 6.5 staff hours per day on these two units alone; on a hospital-wide basis, this would result in considerable labor savings for nursing staff on the floors. In addition, supply costs of medication order sheets would be reduced by approximately $3200 per year if FAX units were used throughout the hospital. As a result of the trial, Beverly Hospital purchased 9 units at a cost of $17,325 and implemented a hospital-wide policy of sending medication orders to the pharmacy using this equipment. This will result in an improved ability of the pharmacy to respond to new orders in a more timely manner, supply cost reductions, and improved use of staff.

  16. Implementation and Operational Research: Expedited Results Delivery Systems Using GPRS Technology Significantly Reduce Early Infant Diagnosis Test Turnaround Times.

    PubMed

    Deo, Sarang; Crea, Lindy; Quevedo, Jorge; Lehe, Jonathan; Vojnov, Lara; Peter, Trevor; Jani, Ilesh

    2015-09-01

The objective of this study was to quantify the impact of a new technology for communicating the results of an infant HIV diagnostic test on test turnaround time, and to quantify the association between late delivery of test results and patient loss to follow-up. We used data collected during a pilot implementation of General Packet Radio Service (GPRS) printers for communicating results in the early infant diagnosis program in Mozambique from 2008 through 2010. Our dataset comprised 1757 patient records, of which 767 were from before and 990 from after implementation of the expedited results delivery system. We used a multivariate logistic regression model to determine the association between late result delivery (more than 30 days between sample collection and result delivery to the health facility) and the probability of result collection by the infant's caregiver. We used a sample selection model to determine the association between late result delivery to the facility and further delay in collection of results by the caregiver. The mean test turnaround time was reduced from 68.13 to 41.05 days after implementation of the expedited results delivery system. Caregivers collected only 665 (37.8%) of the 1757 results. After controlling for confounders, late delivery of results was associated with a reduction of approximately 18% (0.44 vs. 0.36; P < 0.01) in the probability of results being collected by the caregivers (odds ratio = 0.67, P < 0.05). Late delivery of results was also associated with a further average increase of 20.91 days of delay in collection of results (P < 0.01). Early infant diagnosis program managers should further evaluate the cost-effectiveness of operational interventions (e.g., GPRS printers) that reduce delays.
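The odds ratio reported above relates probabilities through their odds. The sketch below shows only that arithmetic on round illustrative probabilities; it does not reproduce the study's confounder-adjusted estimate:

```python
# Sketch of odds-ratio arithmetic: convert each group's probability of
# result collection into odds, then take the ratio. Probabilities here are
# illustrative round numbers, not the study's adjusted estimates.
def odds(p):
    return p / (1 - p)

def odds_ratio(p_exposed, p_unexposed):
    return odds(p_exposed) / odds(p_unexposed)

# e.g., collection probability 0.36 with late delivery vs. 0.44 without
print(round(odds_ratio(0.36, 0.44), 2))  # 0.72
```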

  17. Flexible time domain averaging technique

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics, which may be caused by certain faults such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction of rotating machinery.
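Conventional TDA, the baseline that FTDA improves on, can be sketched directly: average the signal over aligned periods so that non-synchronous noise cancels while the periodic component survives. The signal below is synthetic, with an exactly known integer period (so the sketch does not exhibit the period cutting error the paper addresses):

```python
# Sketch of conventional time domain averaging on a synthetic signal:
# a sinusoid with known period 64 buried in unit-variance Gaussian noise.
import math, random

random.seed(0)
period, n_periods = 64, 200
signal = [math.sin(2 * math.pi * (i % period) / period) + random.gauss(0, 1)
          for i in range(period * n_periods)]

# time domain average: mean of the signal over all aligned periods
tda = [sum(signal[k * period + j] for k in range(n_periods)) / n_periods
       for j in range(period)]

# residual noise shrinks roughly by a factor of sqrt(n_periods)
residual = [tda[j] - math.sin(2 * math.pi * j / period) for j in range(period)]
rms = math.sqrt(sum(r * r for r in residual) / period)
print(rms < 0.3)  # noise std drops from 1 to about 1/sqrt(200)
```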

  18. Additional technician tasks and turnaround time in the clinical Stat laboratory

    PubMed Central

    Salinas, Maria; López-Garrigós, Maite; Flores, Emilio; Leiva-Salinas, Maria; Lillo, Rosa; Leiva-Salinas, Carlos

    2016-01-01

Introduction Many additional tasks in the Stat laboratory (SL) increase the workload. It is necessary to control them because they can affect the service provided by the laboratory. Our aim is to quantify these tasks, study their evolution over a 10-year period, and compare turnaround times (TAT) in the summer period to the rest of the year. Materials and methods Additional tasks were classified as "additional test request" and "additional sample". We collected those incidences from the laboratory information system (LIS), and calculated their evolution over time. We also calculated the monthly TAT for troponin for Emergency Department (ED) patients, as the difference between the verification time and the LIS registration time. A median time of 30 minutes was our indicator target. TAT results and test workload in summer were compared to the rest of the year. Results Over the 10-year period, the technologists in the SL performed 51,385 additional tasks, a median of 475 per month. The workload was significantly higher during the summer (45,496 tests) than the rest of the year (44,555 tests) (P = 0.019). The troponin TAT did not show this variation between summer and the rest of the year, always complying with our 30-minute indicator target. Conclusion The technicians accomplished a significant number of additional tasks, and the workload kept increasing over the 10-year period. This did not affect the TAT results. PMID:27346970
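The troponin TAT indicator described above is a median of timestamp differences checked against a 30-minute target. A minimal sketch, with invented timestamps:

```python
# Sketch of the indicator above: troponin TAT = verification time minus
# LIS registration time, with the monthly median checked against a
# 30-minute target. Timestamps are invented for illustration.
from datetime import datetime
from statistics import median

registered = [datetime(2016, 7, 1, 10, 0), datetime(2016, 7, 1, 11, 0),
              datetime(2016, 7, 1, 12, 0)]
verified   = [datetime(2016, 7, 1, 10, 26), datetime(2016, 7, 1, 11, 33),
              datetime(2016, 7, 1, 12, 28)]

tats = [(v - r).total_seconds() / 60 for r, v in zip(registered, verified)]
med = median(tats)
print(med, med <= 30)  # 28.0 True
```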

  19. Total Automation for the Core Laboratory: Improving the Turnaround Time Helps to Reduce the Volume of Ordered STAT Tests.

    PubMed

    Ialongo, Cristiano; Porzio, Ottavia; Giambini, Ilio; Bernardini, Sergio

    2016-06-01

    The transition to total automation represents the greatest leap for a clinical laboratory, characterized by a totally new philosophy of process management. We have investigated the impact of total automation on core laboratory efficiency and its effects on the clinical services related to STAT tests. For this purpose, a 47-month retrospective study based on the analysis of 44,212 records of STAT cardiac troponin I (CTNI) tests was performed. The core laboratory reached a new efficiency level 3 months after the implementation of total automation. Median turnaround time (TAT) was reduced by 14.9±1.5 min for the emergency department (p < 0.01), reaching 41.6±1.2 min. In non-emergency departments, median TAT was reduced by 19.8±2.2 min (p < 0.01), reaching 52±1.3 min. There was no change in the volume of ordered STAT CTNI tests by the emergency department (p = 0.811), whereas for non-emergency departments there was a reduction of 115.7±50 monthly requests on average (p = 0.026). The volume of ordered tests decreased only in time frames of the regular shift following the morning round. Thus, total automation significantly improves the core laboratory efficiency in terms of TAT. As a consequence, the volume of STAT tests ordered by hospital departments (except for the emergency department) decreased due to reduced duplicated requests.

  20. The Impact of a Health IT Changeover on Medical Imaging Department Work Processes and Turnaround Times

    PubMed Central

    Georgiou, A.; Lymer, S.; Hordern, A.; Ridley, L.; Westbrook, J.

    2015-01-01

    Summary Objectives To assess the impact of introducing a new Picture Archiving and Communication System (PACS) and Radiology Information System (RIS) on: (i) Medical Imaging work processes; and (ii) turnaround times (TATs) for x-ray and CT scan orders initiated in the Emergency Department (ED). Methods We employed a mixed method study design comprising: (i) semi-structured interviews with Medical Imaging Department staff; and (ii) retrospectively extracted ED data before (March/April 2010) and after (March/April 2011 and 2012) the introduction of a new PACS/RIS. TATs were calculated as: processing TAT (median time from image ordering to examination) and reporting TAT (median time from examination to final report). Results Reporting TAT for x-rays decreased significantly after introduction of the new PACS/RIS; from a median of 76 hours to 38 hours per order (p<.0001) for patients discharged from the ED, and from 84 hours to 35 hours (p<.0001) for patients admitted to hospital. Medical Imaging staff reported that the changeover to the new PACS/RIS led to gains in efficiency, particularly regarding the accessibility of images and patient-related information. Nevertheless, assimilation of the new PACS/RIS with existing Departmental work processes was considered inadequate and in some instances unsafe. Issues highlighted related to the synchronization of work tasks (e.g., porter arrangements) and the material set up of the work place (e.g., the number and location of computers). Conclusions The introduction of new health IT can be a “double-edged sword” providing improved efficiency but at the same time introducing potential hazards affecting the effectiveness of the Medical Imaging Department. PMID:26448790

  1. Policy change to improve pathology turnaround time and reduce costs – possible to do both?

    PubMed Central

    Dimeski, Goce; Silvester, Breeann; Ungerer, Jacobus; Johnson, Leslie; Martin, Jennifer H.

    2013-01-01

Background: Overcrowding and prolonged length of stay in emergency departments (ED) are increasing problems in hospitals. Rapid availability of all laboratory results has an impact on clinical decision-making, admission or discharge decisions and resource utilisation. An increasing number of our urinary drugs of abuse (DOA) screens had a turnaround time (TAT) of up to 33 days after the discharge of the patient. Materials and methods: Following an audit and a consultation period with clinicians using the service, a policy change was implemented to reduce the use of gas chromatography-mass spectrometry (GCMS): all requests would have a standard immunoassay (IA) test panel undertaken unless GCMS analysis (including for medico-legal purposes) was specifically requested. Results: Almost all of the clinicians interviewed had no understanding of DOA screening or of the difference in information generated between a confirmatory GCMS urine toxicology screen and an IA DOA panel. It appeared that for none of the patients surveyed in the audit would a different clinical decision have been made if a GCMS had not been undertaken. A post-change audit showed that only 4.3% of drug requests for IA also received confirmatory GCMS testing. The estimated saving after the change was implemented was AU$127,000 in test costs alone over a two-year period. The TAT of GCMS results was reduced to 3-4 days. Conclusion: A laboratory-led behavioural change in test requesting is possible and sustainable, provided the reason is clinically sound and is accompanied by consultation and the availability of advice by phone on test requesting or interpretation. PMID:24266298

  2. Preparing printed circuit boards for rapid turn-around time on a plotter

    SciTech Connect

    Hawtree, J.

    1998-01-01

    This document describes the use of the LPKF ProtoMat mill/drill circuit-board plotter with the associated CAD/CAM software BoardMaster and CircuitCAM. At present its primary use here at Fermilab's Particle Physics Department is the rapid turn-around of prototype double-sided and single-sided copper-clad printed circuit boards (PCBs). (The plotter is also capable of producing gravure films and engraving aluminum or plastic, although we have not used it for this.) It can make traces 0.004 inch wide with 0.004 inch spacings, which is appropriate for high-density surface-mount circuits as well as other through-mounted discrete and integrated components. One of the primary benefits of the plotter is the capability to produce double-sided drilled boards from CAD files in a few hours. However, to achieve this rapid turn-around time, some care must be taken in preparing the files. This document describes how to optimize the process of PCB fabrication. With proper preparation, researchers can often have a completed circuit board in a day's time instead of the week-or-two wait with the usual procedures. It is assumed that the software and hardware are properly installed and that the machinist is acquainted with the Win95 operating system and the basics of the associated software. This paper does not describe its use with pen plotters, lasers or rubouts. The process of creating a PCB begins with the CAD (computer-aided design) software, usually PCAD or VeriBest. These files are then moved to CAM (computer-aided machining), where they are edited and converted into the proper format for running on the ProtoMat plotter. The plotter then performs the actual machining of the board. This document concentrates on the LPKF programs CircuitCAM BASIS and BoardMaster for the CAM software. These programs run on a Windows 95 platform and drive an LPKF ProtoMat 93s plotter.

  3. A quality initiative of postoperative radiographic imaging performed on mastectomy specimens to reduce histology cost and pathology report turnaround time.

    PubMed

    Kallen, Michael E; Sim, Myung S; Radosavcev, Bryan L; Humphries, Romney M; Ward, Dawn C; Apple, Sophia K

    2015-10-01

    Breast pathology relies on gross dissection for accurate diagnostic work, but challenges can necessitate submission of high tissue volumes, resulting in excess labor, laboratory costs, and delays. To address these issues, a quality initiative was created through implementation of the Faxitron PathVision specimen radiography system as part of the breast gross dissection protocol; this report documents its impact on workflow and clinical care. Retrospective data from 459 patients who underwent simple or modified radical mastectomy at our institution between May 2012 and December 2014 were collected. Comparison was made between the mastectomy specimen control group before radiography use (233 patients, 340 breasts) and the Faxitron group that underwent postoperative radiography (226 patients, 338 breasts). We observed a statistically significant decrease in the mean number of blocks between the control and Faxitron groups (47.0 vs 39.7 blocks; P<.0001), for a calculated cost saving of US $146 per mastectomy. A statistically significant decrease in pathology report turnaround time was also observed (4.2 vs 3.8 days; P=.038). Postoperative mastectomy specimen radiography has increased workflow efficiency and decreased histology costs and pathology report turnaround time. These findings may underestimate actual benefits and highlight the importance of quality improvement projects in anatomical pathology.
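    The reported figures are internally consistent; a quick sketch in Python of the per-block histology cost implied by the abstract, assuming the US $146 saving is driven entirely by the reduction in mean block count (the per-block cost itself is not stated in the abstract):

    ```python
    # Implied per-block histology cost from the abstract's own figures.
    # Assumption: the US $146 saving per mastectomy comes entirely from
    # the reduction in mean block count (47.0 -> 39.7 blocks).
    blocks_control = 47.0    # mean blocks per mastectomy, control group
    blocks_faxitron = 39.7   # mean blocks per mastectomy, Faxitron group
    savings_per_mastectomy = 146.0  # US $, as reported

    implied_cost_per_block = savings_per_mastectomy / (blocks_control - blocks_faxitron)
    print(f"implied histology cost per block: ${implied_cost_per_block:.2f}")
    ```

    The 7.3-block reduction works out to a round $20 per block under this assumption.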

  4. Time-averaging water quality assessment

    SciTech Connect

    Reddy, L.S.; Ormsbee, L.E.; Wood, D.J.

    1995-07-01

    While reauthorization of the Safe Drinking Water Act is pending, many water utilities are preparing to monitor and regulate levels of distribution system constituents that affect water quality. Most frequently, utilities are concerned about average concentrations rather than about tracing a particular constituent's path. Mathematical and computer models, which provide a quick estimate of average concentrations, could play an important role in this effort. Most water quality models deal primarily with isolated events, such as tracing a particular constituent through a distribution system. This article proposes a simple, time-averaging model that obtains average, maximum, and minimum constituent concentrations and ages throughout the network. It also computes percentage flow contribution and percentage constituent concentration. The model is illustrated using two water distribution systems, and results are compared with those obtained using a dynamic water quality model. Both models predict average water quality parameters with no significant deviations; the time-averaging approach is a simple and efficient alternative to the dynamic model.

  5. Attractors and Time Averages for Random Maps

    NASA Astrophysics Data System (ADS)

    Araujo, Vitor

    2006-07-01

    Considering random noise in finite dimensional parameterized families of diffeomorphisms of a compact finite dimensional boundaryless manifold M, we show the existence of time averages for almost every orbit of each point of M, imposing mild conditions on the families. Moreover these averages are given by a finite number of physical absolutely continuous stationary probability measures. We use this result to deduce that situations with infinitely many sinks and Henon-like attractors are not stable under random perturbations, e.g., Newhouse's and Colli's phenomena in the generic unfolding of a quadratic homoclinic tangency by a one-parameter family of diffeomorphisms.

  6. Laboratory Automation and Intra-Laboratory Turnaround Time: Experience at the University Hospital Campus Bio-Medico of Rome.

    PubMed

    Angeletti, Silvia; De Cesaris, Marina; Hart, Jonathan George; Urbano, Michele; Vitali, Massimiliano Andrea; Fragliasso, Fulvio; Dicuonzo, Giordano

    2015-12-01

    Intra-laboratory turnaround time (TAT) is a key indicator of laboratory performance. Improving TAT is a complex task requiring staff education, equipment acquisition, and adequate TAT monitoring. The aim of the present study was to evaluate intra-laboratory TAT after laboratory automation implementation (June 2013-June 2014) and to compare it to that in the preautomation period (July 2012-May 2013). Intra-laboratory TAT was evaluated both as the mean TAT registered and as the percentage of outlier (OP) exams. The mean TAT was 36, 38, and 34 min during the study periods, respectively. These values met the goal TAT, established at 45 min. The OP, calculated at 45 min as well as at 60 min, decreased from 26% to 21% and from 11% to 5%, respectively. A focused analysis of the blood cell count, troponin I, and prothrombin time (PT) tests showed that the TAT improvement was more evident for tests requiring a longer preanalytical process. The follow-up of TAT from June 2013 to June 2014 revealed a reduction in both the mean TAT and the OP exams after automation implementation, and showed that automation more strongly affects tests whose preanalytical phase includes centrifugation of the sample, such as troponin I and PT.
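    The two indicators used in this study, mean TAT and percentage of outlier exams against a goal, are straightforward to compute; a minimal sketch, using hypothetical TAT values (the data below are illustrative, not from the study):

    ```python
    from statistics import mean

    def tat_summary(tats_min, goals=(45, 60)):
        """Mean intra-laboratory TAT and percentage of outlier (OP) exams
        exceeding each goal TAT (the study's two cut-offs)."""
        op = {g: 100.0 * sum(t > g for t in tats_min) / len(tats_min) for g in goals}
        return mean(tats_min), op

    # Hypothetical TATs in minutes for a batch of exams (illustrative only)
    tats = [22, 30, 34, 36, 41, 44, 47, 52, 58, 66]
    avg, op = tat_summary(tats)
    print(f"mean TAT = {avg:.0f} min; OP@45 = {op[45]:.0f}%; OP@60 = {op[60]:.0f}%")
    ```

    Tracking OP alongside the mean matters because a few very slow exams can hide inside an acceptable average.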

  7. 24 CFR 901.10 - Indicator #1, vacancy rate and unit turnaround time.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... and a new lease takes effect for units re-occupied during the PHA's assessed fiscal year, is less than... associated with a vacant unit after construction work has been completed or after the time period for placing... conditions of being a vacant unit undergoing modernization: and (B) The date on which construction work...

  8. Short Planning Turn-Around Time and High Flexibility of the Swedish Astronomy/Aeronomy Satellite Odin

    NASA Astrophysics Data System (ADS)

    Jakobsson, B.; Karlsson, T.; Nylund, M.; Olsson, T.; Vinterhav, E.

    2002-01-01

    The Swedish small satellite Odin combines two different scientific disciplines, astronomy and atmospheric research. It has a 3-axis stabilized, zero-momentum attitude control system that operates in two different modes: an inertial-pointing astronomy mode and an atmospheric mode providing pointing/scanning of the Earth limb. The reference attitude for the atmospheric mode is created by a versatile reference attitude generator, which also provides the capability for observing and tracking any celestial or solar system object. The absolute pointing requirements in the scientific modes are 15 arc-seconds inertial pointing and 1.2 arc-minutes reconstructed attitude in atmospheric mode. One part of the Odin mission is to observe the formation of the ozone holes at the poles in spring and fall; another is to observe moving celestial objects such as comets. The unpredictability of when such observation opportunities occur, together with rapidly changing scientific demands, puts high requirements on a short turn-around time for the Operations Cycle: science demand, planning and command generation, reconstruction of attitude history, feedback and delivery to scientists. A small, efficient team, including members who were deeply involved in and responsible for the design and development of the system, has been a key to providing this short turn-around time for the Operations Cycle. It is possible to safely reconfigure the satellite for either discipline on short notice and to have a reconstructed attitude available to the users shortly after the observations have been made. The small, efficient team, together with modular software based on MATLAB, also makes it possible to rapidly meet new demands on planning and attitude reconstruction from the users.

  9. The Turnaround Fallacy

    ERIC Educational Resources Information Center

    Smarick, Andy

    2010-01-01

    For as long as there have been struggling schools in America's cities, there have been efforts to turn them around. But overall, school turnaround efforts have consistently fallen far short of hopes and expectations. Quite simply, turnarounds are not a scalable strategy for fixing America's troubled urban school systems. Fortunately, findings from…

  10. Laboratory-based clinical audit as a tool for continual improvement: an example from CSF chemistry turnaround time audit in a South-African teaching hospital

    PubMed Central

    Imoh, Lucius C; Mutale, Mubanga; Parker, Christopher T; Erasmus, Rajiv T; Zemlin, Annalise E

    2016-01-01

    Introduction Timeliness of laboratory results is crucial to patient care and outcome. Monitoring turnaround times (TAT), especially for emergency tests, is important to measure the effectiveness and efficiency of laboratory services. Laboratory-based clinical audits reveal opportunities for improving quality. Our aim was to identify the most critical steps causing a high TAT for cerebrospinal fluid (CSF) chemistry analysis in our laboratory. Materials and methods A 6-month retrospective audit was performed. The duration of each operational phase across the laboratory work flow was examined. A process-mapping audit trail of 60 randomly selected requests with a high TAT was conducted and reasons for high TAT were tested for significance. Results A total of 1505 CSF chemistry requests were analysed. Transport of samples to the laboratory was primarily responsible for the high average TAT (median TAT = 170 minutes). Labelling accounted for most delays within the laboratory (median TAT = 71 minutes) with most delays occurring after regular work hours (P < 0.05). CSF chemistry requests without the appropriate number of CSF sample tubes were significantly associated with delays in movement of samples from the labelling area to the technologist’s work station (caused by a preference for microbiological testing prior to CSF chemistry). Conclusion A laboratory-based clinical audit identified sample transportation, work shift periods and use of inappropriate CSF sample tubes as drivers of high TAT for CSF chemistry in our laboratory. The results of this audit will be used to change pre-analytical practices in our laboratory with the aim of improving TAT and customer satisfaction. PMID:27346964
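    The process-mapping approach used in this audit, timing each operational phase and locating the slowest one by its median, can be sketched as follows. The records are hypothetical (the transport and labelling medians mirror the magnitudes reported in the abstract; the analysis figures are invented for illustration):

    ```python
    from statistics import median

    # Hypothetical audit records: (workflow phase, duration in minutes)
    records = [
        ("transport", 150), ("transport", 170), ("transport", 190),
        ("labelling", 60), ("labelling", 71), ("labelling", 90),
        ("analysis", 25), ("analysis", 30), ("analysis", 40),
    ]

    # Group durations by phase, then take the median of each phase
    by_phase = {}
    for phase, minutes in records:
        by_phase.setdefault(phase, []).append(minutes)

    medians = {phase: median(v) for phase, v in by_phase.items()}
    bottleneck = max(medians, key=medians.get)
    print(medians, "-> bottleneck:", bottleneck)
    ```

    Medians rather than means are the natural choice here, since a handful of extreme delays would otherwise dominate the phase comparison.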

  11. Turnaround time of positive blood cultures after the introduction of matrix-assisted laser desorption-ionization time-of-flight mass spectrometry.

    PubMed

    Angeletti, Silvia; Dicuonzo, Giordano; D'Agostino, Alfio; Avola, Alessandra; Crea, Francesca; Palazzo, Carlo; Dedej, Etleva; De Florio, Lucia

    2015-07-01

    A comparative evaluation of the turnaround time (TAT) of positive blood cultures before and after the introduction of matrix-assisted laser desorption-ionization time-of-flight mass spectrometry (MALDI-TOF MS) into the laboratory routine was performed. A total of 643 positive blood cultures were collected, of which 310 before and 333 after the introduction of the MALDI-TOF technique. In the post-MALDI-TOF period, the median blood culture TAT decreased from 73.53 hours to 71.73 for Gram-positives, from 64.09 hours to 63.59 for Gram-negatives and from 115.7 hours to 47.62 for anaerobes. MALDI-TOF significantly decreased the TAT for anaerobes, for which antimicrobial susceptibility testing is not routinely performed. Furthermore, the major advantage of the MALDI-TOF introduction was the decrease in the time for pathogen identification (TID) independently of the species, with improvements of 93% for Gram-positives, 86% for Gram-negatives and 95% for anaerobes. In addition, higher species-level identification rates and cost savings compared with conventional methods were achieved after the MALDI-TOF introduction.

  12. Phase II of a Six sigma Initiative to Study DWPF SME Analytical Turnaround Times: SRNL's Evaluation of Carbonate-Based Dissolution Methods

    SciTech Connect

    Edwards, Thomas

    2005-09-01

    The Analytical Development Section (ADS) and the Statistical Consulting Section (SCS) of the Savannah River National Laboratory (SRNL) are participating in a Six Sigma initiative to improve the Defense Waste Processing Facility (DWPF) Laboratory. The Six Sigma initiative has focused on reducing the analytical turnaround time of samples from the Slurry Mix Evaporator (SME) by developing streamlined sampling and analytical methods [1]. The objective of Phase I was to evaluate the sub-sampling of a larger sample bottle and the performance of a cesium carbonate (Cs2CO3) digestion method. Successful implementation of the Cs2CO3 fusion method in the DWPF would have important time savings and convenience benefits because this single digestion would replace the dual digestion scheme now used. A single digestion scheme would result in more efficient operations in both the DWPF shielded cells and the inductively coupled plasma-atomic emission spectroscopy (ICP-AES) laboratory. By taking a small aliquot of SME slurry from a large sample bottle and dissolving the vitrified SME sample with carbonate fusion methods, an analytical turnaround time reduction from 27 hours to 9 hours could be realized in the DWPF. This analytical scheme has the potential for not only dramatically reducing turnaround times, but also streamlining operations to minimize wear and tear on critical shielded cell components that are prone to fail, including the Hydragard(TM) sampling valves and manipulators. Favorable results from the Phase I tests [2] led to the recommendation for a Phase II effort as outlined in the DWPF Technical Task Request (TTR) [3]. There were three major tasks outlined in the TTR, and SRNL issued a Task Technical and QA Plan [4] with a corresponding set of three major task activities: (1) Compare weight percent (wt%) total solids measurements of large volume samples versus peanut vial samples. (2) Evaluate Cs2CO3 and K2CO3

  13. Turnaround Momentum Still Fragile

    ERIC Educational Resources Information Center

    Klein, Alyson

    2012-01-01

    The federal program providing billions of dollars to help states and districts close or remake some of their worst-performing schools remains a work in progress after two years, with more than 1,200 turnaround efforts under way but still no definitive verdict on its effectiveness. The School Improvement Grant (SIG) program, supercharged by a…

  14. Random time averaged diffusivities for Lévy walks

    NASA Astrophysics Data System (ADS)

    Froemberg, D.; Barkai, E.

    2013-07-01

    We investigate a Lévy walk alternating between velocities ±v0 with opposite signs. The sojourn time probability distribution at large times is a power law lacking either its mean or its second moment. The first case corresponds to a ballistic regime where the ensemble averaged mean squared displacement (MSD) at large times is ⟨x2⟩ ∝ t2, the latter to enhanced diffusion with ⟨x2⟩ ∝ tν, 1 < ν < 2. The correlation function and the time averaged MSD are calculated. In the ballistic case, the deviations of the time averaged MSD from a purely ballistic behavior are shown to be distributed according to a Mittag-Leffler density function. In the enhanced diffusion regime, the fluctuations of the time averaged MSD vanish at large times, yet very slowly. In both cases we quantify the discrepancy between the time averaged and ensemble averaged MSDs.
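    The time-averaged MSD of a single trajectory, the central quantity in this abstract, is the sliding-window average of squared displacements. A minimal sketch of the standard estimator, checked on a purely ballistic trajectory rather than a simulated Lévy walk:

    ```python
    def time_averaged_msd(x, lag):
        """Time-averaged MSD of one trajectory x[0..N-1] at a given lag:
        (1/(N-lag)) * sum over t of (x[t+lag] - x[t])**2."""
        n = len(x) - lag
        return sum((x[t + lag] - x[t]) ** 2 for t in range(n)) / n

    # Sanity check on a purely ballistic trajectory x(t) = v0*t, for which
    # the time-averaged MSD equals (v0*lag)**2 exactly at every lag.
    v0 = 2.0
    traj = [v0 * t for t in range(100)]
    print(time_averaged_msd(traj, 5))  # (2.0 * 5)**2 = 100.0
    ```

    For a Lévy walk, the paper's point is precisely that this single-trajectory average can deviate from the ensemble average, with the deviations themselves being random (Mittag-Leffler distributed in the ballistic case).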

  15. Time average vibration fringe analysis using Hilbert transformation

    SciTech Connect

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-10-20

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.

  16. Acute Infections, Cost per Infection and Turnaround Time in Three United States Hospital Laboratories Using Fourth-Generation Antigen-Antibody Human Immunodeficiency Virus Immunoassays.

    PubMed

    Wesolowski, Laura G; Nasrullah, Muazzam; Coombs, Robert W; Rosenberg, Eric; Ethridge, Steven F; Hutchinson, Angela B; Dragavon, Joan; Rychert, Jennifer; Nolte, Frederick S; Madory, James E; Werner, Barbara G

    2016-01-01

    Background.  To improve clinical and public health outcomes through early human immunodeficiency virus (HIV) detection, fourth-generation antigen/antibody immunoassay (4IA) and supplemental testing results must be returned rapidly. Methods.  We examined HIV testing data at Harborview Medical Center (HMC), Massachusetts General Hospital (MGH), and the Medical University of South Carolina (MUSC), which used 4IA and supplemental antibody and nucleic acid tests (NATs). At MGH and MUSC, HIV-1 Western blot (WB) and HIV-2 testing were conducted at a reference laboratory. We compared time from specimen collection to laboratory result for established (positive WB) and acute infections (reactive 4IA, negative/indeterminate WB, detectable NAT), and we calculated testing cost per positive-test result. Results.  From 3731 (MUSC) to 19 774 (MGH) tests were conducted; 0.01% (MGH) to 0.05% (HMC) were acute infections. Each laboratory had reactive 4IA, WB-negative, or indeterminate specimens without NAT (ie, potential acute infections). Time to result was 1.5 (HMC) to 5.2 days (MGH) for acute and 1.0 (HMC) to 5.2 days (MGH) for established infections. Costs were $1054 (MGH) to $1521 (MUSC). Conclusions.  Conducting supplemental testing in-house lowered turnaround times, which may be further reduced with rapid HIV-1/HIV-2 differentiation tests. Hospitals may benefit from quantitative NATs not requiring physician orders, so all potential acute infections receive NAT. PMID:26798766

  18. Assessing School Turnaround: Evidence from Ohio

    ERIC Educational Resources Information Center

    Player, Daniel; Katz, Veronica

    2016-01-01

    Policy makers have struggled to find successful approaches to address concentrated, persistent low school achievement. While NCLB and the School Improvement Grant (SIG) program have devoted significant time and attention to turnaround, very little empirical evidence substantiates whether and how these efforts work. This study employs a comparative…

  19. Off the Clock: What More Time Can (and Can't) Do for School Turnarounds. Education Sector Reports

    ERIC Educational Resources Information Center

    Silva, Elena

    2012-01-01

    If less time in the classroom is a cause of poor student performance, can adding more time be the cure? This strategy underlies a major effort to fix the nation's worst public schools. Billions of federal stimulus dollars are being spent to expand learning time on behalf of disadvantaged children. And extended learning time (ELT) is being proposed…

  20. Improving the Average Response Time in Collective I/O

    SciTech Connect

    Jin, Chen; Sehrish, Saba; Liao, Wei-keng; Choudhary, Alok; Schuchardt, Karen L.

    2011-09-21

    In collective I/O, MPI processes exchange requests so that the rearranged requests can result in the shortest file system access time. Scheduling the exchange sequence determines the response time of the participating processes. Existing implementations that simply follow the increasing order of file offsets do not necessarily produce the best performance. To minimize the average response time, we propose three scheduling algorithms that consider the number of processes per file stripe and the number of accesses per process. Our experimental results demonstrate improvements of up to 50% in the average response time using two synthetic benchmarks and a high-resolution climate application.
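    Why offset order is not optimal for average response time can be shown with a toy model (a sketch only: the request data and single-server cost model are assumptions for illustration, not the paper's implementation). A process's response time is taken as the completion time of its last request:

    ```python
    def avg_response_time(requests, order):
        """Average per-process response time when requests are served
        back-to-back in the given order; a process's response time is the
        completion time of its last request."""
        t = 0.0
        done = {}
        for i in order:
            pid, cost = requests[i]
            t += cost
            done[pid] = t  # overwritten until pid's last request completes
        return sum(done.values()) / len(done)

    # Hypothetical requests: process A has one small request at the highest
    # file offset; process B has two large requests at lower offsets.
    reqs = [("A", 1.0), ("B", 4.0), ("B", 4.0)]
    print(avg_response_time(reqs, [1, 2, 0]))  # increasing-offset order: 8.5
    print(avg_response_time(reqs, [0, 1, 2]))  # finish process A first: 5.0
    ```

    Serving A's lone request first lets that process finish early at negligible cost to B, cutting the average response time, which is the intuition behind scheduling by accesses per process.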

  1. Human Capital in Turnaround Schools

    ERIC Educational Resources Information Center

    Ferris, Kristen

    2012-01-01

    Finding, keeping and supporting great educators presents the single biggest challenge to successful school turnarounds. Without teachers and administrators who bring the needed combination of skills and passion, nothing else will achieve the desired effect. The turnaround model supported by the U.S. Department of Education School Improvement Grant…

  2. Modeling an Application's Theoretical Minimum and Average Transactional Response Times

    SciTech Connect

    Paiz, Mary Rose

    2015-04-01

    The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that result from unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.

  3. Time-averaged photon-counting digital holography.

    PubMed

    Demoli, Nazif; Skenderović, Hrvoje; Stipčević, Mario

    2015-09-15

    Time-averaged holography has recorded holograms on photo-emulsions (in its early stages) and, later, on digital photo-sensitive arrays. We extend the recording possibilities by utilizing a photon-counting camera, and we further investigate the possibility of obtaining accurate hologram reconstructions under rather severe experimental conditions. To achieve this, we derived an expression for the fringe function comprising the main parameters affecting hologram recording. The influence of the main parameters, namely the exposure time and the number of averaged holograms, is analyzed through simulations and experiments. It is demonstrated that long exposure times can be avoided by averaging over many holograms with exposure times much shorter than the vibration cycle. Conditions under which the signal-to-noise ratio in reconstructed holograms can be substantially increased are provided. PMID:26371907

  4. Average waiting time in FDDI networks with local priorities

    NASA Technical Reports Server (NTRS)

    Gercek, Gokhan

    1994-01-01

    A method is introduced to compute the average queuing delay experienced by different priority-group messages in an FDDI node. It is assumed that no FDDI MAC-layer priorities are used. Instead, a priority structure is introduced for the messages locally at a higher protocol layer (e.g. the network layer). Such a method was planned for use in the Space Station Freedom FDDI network. Conservation of the average waiting time is used as the key concept in computing average queuing delays. It is shown that local priority assignments are feasible, especially when the traffic distribution in the FDDI network is asymmetric.
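    The conservation idea can be illustrated with a minimal sketch, assuming equal message service times: under any work-conserving, non-preemptive service order, the set of completion times is fixed, so prioritizing one class only redistributes waiting time without changing the overall average (the queue data here are invented for illustration):

    ```python
    def waits(jobs, order):
        """Waiting times under a work-conserving, non-preemptive schedule.
        jobs: list of (arrival, service); order: indices in service sequence."""
        t, w = 0.0, {}
        for i in order:
            arrival, service = jobs[i]
            t = max(t, arrival)   # server never idles while work is queued
            w[i] = t - arrival    # wait before service begins
            t += service
        return w

    # Three equal-length messages queued at once; serving "high priority"
    # message 0 first only shifts waiting time between messages.
    jobs = [(0.0, 2.0)] * 3
    fifo = waits(jobs, [0, 1, 2])
    prio = waits(jobs, [0, 2, 1])  # a different (priority-driven) order
    print(sum(fifo.values()) / 3, sum(prio.values()) / 3)  # averages match
    ```

    Individual waits differ between the two orders, but their sum (and hence the average) is conserved, which is what lets the per-priority delays be computed from the conserved total.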

  5. Instantaneous, phase-averaged, and time-averaged pressure from particle image velocimetry

    NASA Astrophysics Data System (ADS)

    de Kat, Roeland

    2015-11-01

    Recent work on pressure determination using velocity data from particle image velocimetry (PIV) resulted in approaches that allow for instantaneous and volumetric pressure determination. However, applying these approaches is not always feasible (e.g. due to resolution, access, or other constraints) or desired. In those cases pressure determination approaches using phase-averaged or time-averaged velocity provide an alternative. To assess the performance of these different pressure determination approaches against one another, they are applied to a single data set and their results are compared with each other and with surface pressure measurements. For this assessment, the data set of a flow around a square cylinder (de Kat & van Oudheusden, 2012, Exp. Fluids 52:1089-1106) is used. RdK is supported by a Leverhulme Trust Early Career Fellowship.

  6. Determining average path length and average trapping time on generalized dual dendrimer

    NASA Astrophysics Data System (ADS)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
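    The average path length the paper computes analytically can be checked numerically on small graphs by breadth-first search over all node pairs. A sketch on a 4-node star (a depth-1 tree used here only as a stand-in; it is not the paper's generalized dual construction):

    ```python
    from collections import deque
    from itertools import combinations

    def average_path_length(adj):
        """APL: mean shortest-path length over all node pairs, via BFS
        from each node of an unweighted, connected graph."""
        def bfs(src):
            dist = {src: 0}
            q = deque([src])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            return dist

        nodes = list(adj)
        total = sum(bfs(u)[v] for u, v in combinations(nodes, 2))
        return total / (len(nodes) * (len(nodes) - 1) // 2)

    # 4-node star: centre 0 joined to leaves 1-3
    star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
    print(average_path_length(star))  # (3*1 + 3*2) / 6 = 1.5
    ```

    Such brute-force checks are only feasible for small sizes, which is why closed-form APL results like those in the paper matter for large networks.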

  7. Time-average TV holography for vibration fringe analysis

    SciTech Connect

    Kumar, Upputuri Paul; Kalyani, Yanam; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2009-06-01

    Time-average TV holography is a widely used method for vibration measurement. The method generates speckle-correlation time-averaged J0 fringes that can be used for full-field qualitative visualization of mode shapes at resonant frequencies of an object under harmonic excitation. In order to map the amplitudes of vibration, quantitative evaluation of the time-averaged fringe pattern is desired. A quantitative evaluation procedure based on the phase-shifting technique used in two-beam interferometry has also been adopted for this application, with some modification. The existing procedure requires a large number of frames to be recorded for implementation. We propose a procedure that reduces the number of frames required for the analysis. The TV holographic system used and the experimental results obtained with it on an edge-clamped, sinusoidally excited square aluminium plate sample are discussed.

  8. An Advanced Time Averaging Modelling Technique for Power Electronic Circuits

    NASA Astrophysics Data System (ADS)

    Jankuloski, Goce

    For stable and efficient performance of power converters, a good mathematical model is needed. This thesis presents a new modelling technique for DC/DC and DC/AC Pulse Width Modulated (PWM) converters. The new model is more accurate than the existing modelling techniques such as State Space Averaging (SSA) and Discrete Time Modelling. Unlike the SSA model, the new modelling technique, the Advanced Time Averaging Model (ATAM) includes the averaging dynamics of the converter's output. In addition to offering enhanced model accuracy, application of linearization techniques to the ATAM enables the use of conventional linear control design tools. A controller design application demonstrates that a controller designed based on the ATAM outperforms one designed using the ubiquitous SSA model. Unlike the SSA model, ATAM for DC/AC augments the system's dynamics with the dynamics needed for subcycle fundamental contribution (SFC) calculation. This allows for controller design that is based on an exact model.

  9. H∞ control of switched delayed systems with average dwell time

    NASA Astrophysics Data System (ADS)

    Li, Zhicheng; Gao, Huijun; Agarwal, Ramesh; Kaynak, Okyay

    2013-12-01

    This paper considers the problems of stability analysis and H∞ controller design of time-delay switched systems with average dwell time. In order to obtain less conservative results than what is seen in the literature, a tighter bound for the state delay term is estimated. Based on the scaled small gain theorem and the model transformation method, an improved exponential stability criterion for time-delay switched systems with average dwell time is formulated in the form of convex matrix inequalities. The aim of the proposed approach is to reduce the minimal average dwell time of the systems, which is made possible by a new Lyapunov-Krasovskii functional combined with the scaled small gain theorem. It is shown that this approach is able to tolerate a smaller dwell time or a larger admissible delay bound for the given conditions than most of the approaches seen in the literature. Moreover, the exponential H∞ controller can be constructed by solving a set of conditions, which is developed on the basis of the exponential stability criterion. Simulation examples illustrate the effectiveness of the proposed method.
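    The average dwell time condition used in this literature is commonly stated as: the number of switches on any interval satisfies Nσ(t0, t) ≤ N0 + (t − t0)/τa, for a chatter bound N0 and average dwell time τa. A small sketch (illustrative values, not from the paper) that checks whether a given switching sequence satisfies the bound:

```python
def satisfies_adt(switch_times, tau_a, N0=1.0):
    """Check N(t0, t) <= N0 + (t - t0)/tau_a for all pairs of switching
    instants (checking at the switch times themselves suffices here)."""
    ts = sorted(switch_times)
    for i in range(len(ts)):
        for j in range(i, len(ts)):
            n_switches = j - i + 1   # switches counted on [ts[i], ts[j]]
            if n_switches > N0 + (ts[j] - ts[i]) / tau_a:
                return False
    return True

# Evenly spaced switches every 2 s satisfy an average dwell time of 2 s:
regular = [2.0 * k for k in range(1, 11)]
# A burst of rapid switches violates the same bound:
bursty = [0.1 * k for k in range(1, 11)]
```

Reducing the minimal admissible τa, as the paper does, enlarges the set of switching signals for which stability is guaranteed.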

  10. Reduced turn-around time for Mycobacterium tuberculosis drug susceptibility testing with a proportional agar microplate assay.

    PubMed

    Nguyen, V A T; Nguyen, H Q; Vu, T T; Nguyen, N A T; Duong, C M; Tran, T H T; Nguyen, H V; Dang, D A; Bañuls, A-L

    2015-12-01

    Multidrug-resistant tuberculosis is a major issue worldwide; however, accessibility to drug susceptibility testing (DST) is still limited in developing countries, owing to high costs and complexity. We developed a proportion method on 12-well microplates for DST. The assay reduced the time to results to <12 days and <10 days when bacterial growth was checked with the naked eye or a microscope, respectively. Comparison with the Canetti-Grosset method showed that the results of the two assays almost overlapped (kappa index 0.98 (95% CI 0.91-1.00) for isoniazid, rifampicin, and streptomycin; kappa index 0.92 (95% CI 0.85-0.99) for ethambutol). Sequencing of genes involved in drug resistance showed a similar level of phenotype-genotype agreement between techniques. Finally, measurement of the MICs of rifampicin and ethambutol suggests that the currently used critical ethambutol concentration should be revised, and that current molecular drug susceptibility tests for rifampicin need to be re-evaluated, as in vitro rifampicin-sensitive isolates could harbour drug resistance-associated mutation(s). PMID:26348263
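    The kappa indices quoted above measure chance-corrected agreement between the two DST methods: κ = (p_o − p_e)/(1 − p_e), where p_o is observed and p_e chance agreement. A minimal sketch with a made-up susceptible/resistant agreement table (counts are hypothetical, not the study's data):

```python
def cohen_kappa(table):
    """Cohen's kappa for a square agreement table.
    table[i][j] = count of isolates rated category i by method A, j by B."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_o = sum(table[i][i] for i in range(k)) / n               # observed agreement
    row = [sum(table[i][j] for j in range(k)) / n for i in range(k)]
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    p_e = sum(row[i] * col[i] for i in range(k))               # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical counts: 40 resistant/resistant, 55 susceptible/susceptible,
# 5 discordant isolates -> kappa just under 0.9.
table = [[40, 2], [3, 55]]
```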

  11. National survey on intra-laboratory turnaround time for some most common routine and stat laboratory analyses in 479 laboratories in China

    PubMed Central

    Fei, Yang; Zeng, Rong; Wang, Wei; He, Falin; Zhong, Kun

    2015-01-01

    Introduction To investigate the state of the art of intra-laboratory turnaround time (intra-TAT), to provide suggestions, and to find out whether laboratories accredited by the International Organization for Standardization (ISO) 15189 or the College of American Pathologists (CAP) show better performance on intra-TAT than non-accredited ones. Materials and methods 479 Chinese clinical laboratories participating in the external quality assessment programmes for chemistry, blood gas, and haematology tests organized by the National Centre for Clinical Laboratories in China were included in our study. General information and the median intra-TAT of routine and stat tests over the preceding week were collected via questionnaires. Results The response rates for clinical biochemistry, blood gas, and haematology testing were 36% (479 / 1307), 38% (228 / 598), and 36% (449 / 1250), respectively. More than 50% of laboratories indicated that they had set intra-TAT median goals, and almost 60% declared that they monitored intra-TAT for every analyte they performed. Among all analytes we investigated, the intra-TAT of haematology analytes was shorter than that of biochemistry analytes, while the intra-TAT of blood gas analytes was the shortest. There were significant differences between median intra-TAT on different days of the week for routine tests. However, there were no significant differences in median intra-TAT reported by accredited and non-accredited laboratories. Conclusions Many laboratories in China are aware of intra-TAT control and are making efforts to reach their targets, but there is still room for improvement. Accredited laboratories perform better on intra-TAT monitoring and target setting than non-accredited ones, but there are no significant differences in the median intra-TAT they report. PMID:26110033

  12. 40 CFR 62.15210 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur... 40 Protection of Environment 8 2010-07-01 2010-07-01 false How do I convert my 1-hour arithmetic... convert my 1-hour arithmetic averages into appropriate averaging times and units? (a) Use the equation...
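    The conversion this rule references (EPA Method 19 style) reduces the 24 hourly sulfur dioxide averages to a daily geometric average, i.e. the exponential of the mean of the natural logarithms. A hedged sketch of that arithmetic (illustrative only, not the regulatory text):

```python
import math

def daily_geometric_average(hourly_concentrations):
    """Daily geometric average of 1-hour average concentrations:
    exp((1/n) * sum(ln(c_i))). Requires strictly positive values."""
    logs = [math.log(c) for c in hourly_concentrations]
    return math.exp(sum(logs) / len(logs))

# Example: a day split between 20 ppm and 80 ppm hours.
# Geometric average = sqrt(20 * 80) = 40 ppm, below the arithmetic mean of 50.
day = [20.0] * 12 + [80.0] * 12
```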

  13. Forecasts of time averages with a numerical weather prediction model

    NASA Technical Reports Server (NTRS)

    Roads, J. O.

    1986-01-01

    Forecasts of time averages of 1-10 days in duration by an operational numerical weather prediction model are documented for the global 500 mb height field in spectral space. Error growth in very idealized models is described in order to anticipate various features of these forecasts and what the results might be if forecasts longer than 10 days were carried out by present-day numerical weather prediction models. The data set for this study is described, the equilibrium spectra and error spectra are documented, and then the total error is documented. It is shown how forecasts can immediately be improved by removing the systematic error, by using statistical filters, and by ignoring forecasts beyond about a week. Temporal variations in the error field are also documented.
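    Removing the systematic error mentioned above amounts to subtracting the mean forecast-minus-verification bias, estimated from past cases, from each new forecast. A minimal sketch with synthetic numbers (the values are made up for illustration):

```python
def remove_systematic_error(forecasts, analyses, new_forecast):
    """Bias-correct a forecast by subtracting the mean historical error.
    forecasts/analyses: past forecast values and verifying analyses."""
    bias = sum(f - a for f, a in zip(forecasts, analyses)) / len(forecasts)
    return new_forecast - bias

# Synthetic past cases in which the model runs ~3 units too high:
past_f = [503.0, 507.0, 511.0, 505.0]
past_a = [500.0, 504.0, 508.0, 502.0]
corrected = remove_systematic_error(past_f, past_a, 510.0)   # 507.0
```

In practice the bias would be estimated per grid point (or spectral coefficient) and per lead time, but the arithmetic is the same.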

  14. Making Sense of School Turnarounds

    ERIC Educational Resources Information Center

    Hess, Frederick M.

    2012-01-01

    Today, in a sector flooded with $3.5 billion in School Improvement Grant funds and the resulting improvement plans, there's great faith that "turnaround" strategies are a promising way to tackle stubborn problems with persistently low-performing schools. Unlike traditional reform efforts, with their emphasis on incremental improvement, turnarounds…

  15. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false How do I convert my 1-hour arithmetic averages into appropriate averaging times and units? 60.1755 Section 60.1755 Protection of Environment... or Before August 30, 1999 Model Rule-Continuous Emission Monitoring § 60.1755 How do I convert my...

  16. Mexico's population policy turnaround.

    PubMed

    Nagel, J S

    1978-12-01

    Until 1972 Mexico's officials seemed to believe that the annual population growth rate of 3.5% was not really a problem as long as the economic development rate could be kept ahead of it. The General Law of Population of 1947 actually promoted population growth. Only during the 1960s did elite opinion shift, and privately funded family planning programs became increasingly active. By 1972 the population had reached 54 million, up from 20 million in 1940. President Echeverria, until then a traditional pronatalist, announced in 1972 the creation of a national family planning program and, in 1974, the creation of a national population council. Since then the Mexican government has embarked on ambitious programs of mass communication and sex education to make the population receptive to its new attitudes. The plans have worked: by mid-1979 estimates put the population growth rate at 2.9%, within sight of the 2.5% target set for 1982 by new president Lopez Portillo. According to the National Fertility Survey, by late 1976 41.5% of Mexican women of childbearing age were practicing contraception; of these, 14.9% used the pill and 7.8% the IUD. New channels of information and contraceptive delivery are being explored to reach Mexico's widely scattered rural population, and to bring the average family size down to 2 children from the 1973-1975 average of 6.5 children per woman. The government's goal is a 1% population growth rate by the year 2000.

  17. Time Series ARIMA Models of Undergraduate Grade Point Average.

    ERIC Educational Resources Information Center

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
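    As a toy version of the Box-Jenkins estimation stage, the AR(1) coefficient of a zero-mean series can be estimated by least squares as φ̂ = Σ x[t−1]·x[t] / Σ x[t−1]². A pure-Python sketch on synthetic data (illustrative only; real ARIMA fitting would use a statistics package and the full identification/estimation/diagnosis cycle):

```python
import random

def fit_ar1(x):
    """Least-squares estimate of phi in x[t] = phi * x[t-1] + noise."""
    num = sum(x[t - 1] * x[t] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

def simulate_ar1(phi, n, seed=0):
    """Generate n samples of a zero-mean AR(1) process with unit-variance noise."""
    rng = random.Random(seed)
    x, val = [], 0.0
    for _ in range(n):
        val = phi * val + rng.gauss(0.0, 1.0)
        x.append(val)
    return x

series = simulate_ar1(phi=0.6, n=5000)
phi_hat = fit_ar1(series)   # close to the true 0.6
```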

  18. Sample Language of Modified Contract Elements from Existing CBAs, MOUs, or EWAs to Support Turnaround

    ERIC Educational Resources Information Center

    Mass Insight Education (NJ1), 2011

    2011-01-01

    Organized by the key conditions areas for turnaround, "People, Program, Time and Money," this tool offers sample language for each contract element to serve as a model for modifications from a traditional CBA that may support a district's turnaround efforts. Sample language is offered from existing provisions in district-wide collective bargaining…

  19. Combining Quick-Turnaround and Batch Workloads at Scale

    NASA Technical Reports Server (NTRS)

    Matthews, Gregory A.

    2012-01-01

    NAS uses PBS Professional to schedule and manage the workload on Pleiades, an 11,000+ node InfiniBand cluster. At this scale the user experience for quick-turnaround jobs can degrade, which initially led NAS to set up two separate PBS servers, each dedicated to a particular workload. Recently we have employed PBS hooks and scheduler modifications to merge these workloads under one PBS server, delivering sub-one-minute start times for the quick-turnaround workload and enabling dynamic management of the resources set aside for that workload.

  20. Series Overview. Sustaining School Turnaround at Scale. Brief 1

    ERIC Educational Resources Information Center

    Education Resource Strategies, 2012

    2012-01-01

    Members of the non-profit organization Education Resource Strategies (ERS) have worked for over a decade with leaders of urban school systems to help them organize talent, time and technology to support great schools at scale. One year into the Federal program they are noticing significant differences in district turnaround approaches, engagement…

  1. Turnaround Schools and the Leadership They Require

    ERIC Educational Resources Information Center

    Leithwood, Kenneth; Strauss, Tiiu

    2008-01-01

    "Turnaround Schools and the Leadership They Require" was commissioned by the Canadian Education Association (CEA) in 2006. The research, conducted by Kenneth Leithwood and Tiiu Strauss, was a two-year study of Ontario schools identified as in need of "turnaround" meaning that a significant improvement in school achievement was required. The study…

  2. Finite-frequency model reduction of continuous-time switched linear systems with average dwell time

    NASA Astrophysics Data System (ADS)

    Ding, Da-Wei; Du, Xin

    2016-11-01

    This paper deals with the model reduction problem of continuous-time switched linear systems with finite-frequency input signals. The objective of the paper is to propose a finite-frequency model reduction method for such systems. A finite-frequency H∞ performance index is first defined in the frequency domain, and a finite-frequency performance analysis condition is then derived via Parseval's theorem. Combined with the average dwell time approach, sufficient conditions for the existence of exponentially stable reduced-order models are derived. An algorithm is proposed to construct the desired reduced-order models. The effectiveness of the proposed method is illustrated by a numerical example.

  3. Rapid PCR amplification protocols decrease the turn-around time for detection of antibiotic resistance genes in Gram-negative pathogens.

    PubMed

    Geyer, Chelsie N; Hanson, Nancy D

    2013-10-01

    A previously designed end-point multiplex PCR assay and singleplex assays used to detect β-lactamase genes were evaluated using rapid PCR amplification methodology. Amplification times were 16-18 minutes with an overall detection time of 1.5 hours. Rapid PCR amplifications could decrease the time required to identify resistance mechanisms in Gram-negative organisms.

  4. Sustainment of Fine Particle Cloud by Means of Time-Averaged Particle Driving Force in Plasmas

    SciTech Connect

    Gohda, Takuma; Iizuka, Satoru

    2008-09-07

    We have succeeded in sustaining a fine particle cloud by using a time-averaged particle driving (TAPD) method in an RF discharge plasma. When the period of the pulses applied to the point-electrodes is shorter than the particle response time, the particles feel only the time-averaged force. The particles are transported to the midpoint between the two point-electrodes.

  5. Impulsive synchronization schemes of stochastic complex networks with switching topology: average time approach.

    PubMed

    Li, Chaojie; Yu, Wenwu; Huang, Tingwen

    2014-06-01

    In this paper, a novel impulsive control law is proposed for the synchronization of stochastic discrete complex networks with time delays and switching topologies, where average dwell time and average impulsive interval are taken into account. The side effect of time delays is estimated by the Lyapunov-Razumikhin technique, which quantitatively bounds the rate of increase of the Lyapunov function. By considering the compensation of the decreasing interval, an improved impulsive control law is recast in terms of average dwell time and average impulsive interval. Detailed results from a numerical illustrative example are presented and discussed. Finally, some relevant conclusions are drawn.

  6. The consequences of time averaging for measuring temporal species turnover in the fossil record

    NASA Astrophysics Data System (ADS)

    Tomašových, Adam; Kidwell, Susan

    2010-05-01

    Modeling time averaging effects with simple simulations allows us to evaluate the magnitude of change in temporal species turnover that is expected to occur in long (paleoecological) time series with fossil assemblages. Distinguishing different modes of metacommunity dynamics (such as neutral, density-dependent, or trade-off dynamics) with time-averaged fossil assemblages requires scaling up time-averaging effects, because the decrease in temporal resolution and the decrease in temporal inter-sample separation (i.e., the two main effects of time averaging) substantially increase community stability relative to assemblages with weak or no time averaging. Large changes in temporal scale that cover centuries to millennia can lead to unprecedented effects on the temporal rate of change in species composition. Temporal variation in species composition monotonically decreases with increasing duration of time averaging in simulated fossil assemblages. Time averaging is also associated with reduced species dominance, owing to temporal switching in the identity of dominant species. High degrees of time averaging can cause the community parameters of local fossil assemblages to converge on those of the metacommunity rather than those of individual, non-averaged local communities. We find that the low variation in species composition observed among mollusk and ostracod subfossil assemblages can be explained by time averaging alone; low temporal resolution and reduced temporal separation among assemblages in time series can thus explain a substantial part of the reduced variation in species composition relative to unscaled predictions of the neutral model (i.e., species do not differ in birth, death, and immigration rates on a per capita basis). The structure of time-averaged assemblages can thus provide important insights into processes that act over larger temporal scales, such as evolution of niches and dispersal, range-limit dynamics, taxon cycles, and
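    The core effect described above can be reproduced in a few lines: pooling ("time averaging") consecutive community censuses before computing compositional turnover damps the apparent temporal variation. A hedged sketch with a synthetic five-species community (illustrative only, not the authors' simulation):

```python
import random

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    num = sum(abs(x - y) for x, y in zip(a, b))
    return num / (sum(a) + sum(b))

def mean_turnover(samples, window):
    """Pool consecutive censuses into windows, then average the Bray-Curtis
    dissimilarity between consecutive pooled assemblages."""
    pooled = []
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        pooled.append([sum(s[k] for s in chunk) for k in range(len(chunk[0]))])
    ds = [bray_curtis(pooled[i], pooled[i + 1]) for i in range(len(pooled) - 1)]
    return sum(ds) / len(ds)

rng = random.Random(42)
# 200 censuses of a 5-species community, independent noise around fixed means
means = [100, 60, 30, 20, 10]
samples = [[max(0, int(rng.gauss(m, 3 * m ** 0.5))) for m in means]
           for _ in range(200)]
raw = mean_turnover(samples, 1)        # turnover between raw censuses
averaged = mean_turnover(samples, 10)  # pooled -> lower apparent turnover
```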

  7. The Impact of Turnaround Reform on Student Outcomes: Evidence and Insights from the Los Angeles Unified School District

    ERIC Educational Resources Information Center

    Strunk, Katharine O.; Marsh, Julie A.; Hashim, Ayesha K.; Bush-Mecenas, Susan; Weinstein, Tracey

    2016-01-01

    We examine the Los Angeles Unified School District's Public School Choice Initiative (PSCI), which sought to turn around the district's lowest-performing schools. We ask whether school turnaround impacted student outcomes, and what explains variations in outcomes across reform cohorts. We use a Comparative Interrupted Time Series approach using…

  8. Prospective evaluation of the VITEK MS for the routine identification of bacteria and yeast in the clinical microbiology laboratory: assessment of accuracy of identification and turnaround time.

    PubMed

    Charnot-Katsikas, Angella; Tesic, Vera; Boonlayangoor, Sue; Bethel, Cindy; Frank, Karen M

    2014-02-01

    This study assessed the accuracy of bacterial and yeast identification using the VITEK MS, and the time to reporting of isolates before and after its implementation in routine clinical practice. Three hundred and sixty-two isolates of bacteria and yeast, consisting of a variety of clinical isolates and American Type Culture Collection strains, were tested. Results were compared with reference identifications from the VITEK 2 system and with 16S rRNA sequence analysis. The VITEK MS provided an acceptable identification to species level for 283 (78 %) isolates. Considering organisms for which genus-level identification is acceptable for routine clinical care, 315 isolates (87 %) had an acceptable identification. Six isolates (2 %) were identified incorrectly, five of which were Shigella species. Finally, the time for reporting the identifications was decreased significantly after implementation of the VITEK MS for a total mean reduction in time of 10.52 h (P<0.0001). Overall, accuracy of the VITEK MS was comparable or superior to that from the VITEK 2. The findings were also comparable to other studies examining the accuracy of the VITEK MS, although differences exist, depending on the diversity of species represented as well as on the versions of the databases used. The VITEK MS can be incorporated effectively into routine use in a clinical microbiology laboratory and future expansion of the database should provide improved accuracy for the identification of micro-organisms.

  9. Unocal restarts Parachute Creek plant after turnaround

    SciTech Connect

    Not Available

    1986-12-01

    In late November, Unocal Corporation restarted its Parachute Creek shale oil project after a two-month turnaround, or overhaul operation. The plant is again operating at about 50% of its 10,000 barrel/day design capacity. This is the maximum rate which has been achieved for long periods of time. Unocal has been continuing to explore the possibility of adding a fluidized bed combustor to the retorting system. A decision on whether to proceed with building the FBC is expected to be made by July, 1987. The Colorado Mined Land Reclamation Board has allowed Unocal to make changes at Parachute Creek to better accommodate the handling of spent shale. These changes are described. Operating problems at the plant from start-up and modifications required are reviewed. Upgrading of the shale oil syncrude cannot be done at the Gary, Indiana refinery as planned. Current plans are to truck it to Utah, place it in a pipeline to Midland, Texas, then pipe it through Cushing, Oklahoma to the Unocal refinery in Lemont, Illinois.

  10. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... part, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in... potential sulfur dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use...

  11. 40 CFR 62.15210 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method... dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use EPA Reference...

  12. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... part, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in... potential sulfur dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use...

  13. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... part, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in... potential sulfur dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use...

  14. 40 CFR 62.15210 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method... dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use EPA Reference...

  15. 40 CFR 62.15210 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method... dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use EPA Reference...

  16. 40 CFR 60.1265 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? 60.1265 Section 60.1265 Protection of Environment... Continuous Emission Monitoring § 60.1265 How do I convert my 1-hour arithmetic averages into the...

  17. A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, Thomas L.

    2003-01-01

    A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics, designed to capture this property, predicts power-law scaling behavior for the second-moment statistics of area-averaged rain rate with the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.
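    The scale dependence referred to above is easy to illustrate: for a spatially uncorrelated field, the variance of the L×L area average falls off as L⁻², and the model's interest is precisely in the non-trivial exponents that real, correlated rain exhibits. A hedged sketch of the uncorrelated baseline (illustrative only, not the spectral model):

```python
import random

def area_average_variance(field, box):
    """Variance of box-averaged values over non-overlapping box x box tiles."""
    n = len(field)
    means = []
    for i in range(0, n - box + 1, box):
        for j in range(0, n - box + 1, box):
            tile = [field[a][b] for a in range(i, i + box)
                    for b in range(j, j + box)]
            means.append(sum(tile) / len(tile))
    mu = sum(means) / len(means)
    return sum((m - mu) ** 2 for m in means) / len(means)

rng = random.Random(1)
# Uncorrelated synthetic 'rain' field: variance of L x L averages ~ 1/L^2
field = [[rng.expovariate(1.0) for _ in range(64)] for _ in range(64)]
v1 = area_average_variance(field, 1)
v4 = area_average_variance(field, 4)   # roughly v1 / 16 here
```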

  18. Local volume-time averaged equations of motion for dispersed, turbulent, multiphase flows

    SciTech Connect

    Sha, W.T.; Slattery, J.C.

    1980-11-01

    In most flows of liquids and their vapors, the phases are dispersed randomly in both space and time. These dispersed flows can be described only statistically or in terms of averages. Local volume-time averaging is used here to derive a self-consistent set of equations governing momentum and energy transfer in dispersed, turbulent, multiphase flows. The empiricisms required for use with these equations are the subject of current research.

  19. Correction method for averaging slowly time-variant room impulse response measurements.

    PubMed

    Postma, Barteld N J; Katz, Brian F G

    2016-07-01

    Various methods exist for room acoustic measurements. To increase the signal-to-noise-ratio (SNR), averaging of repeated room impulse responses (RIR) can be performed under the assumption of time-invariant systems. Associated with a study of the Paris Notre Dame cathedral, time-variances due to minute temperature changes were observed which confounded results of averaged RIRs, producing significantly shorter reverberation times. A correction method for such time-variance of the acoustic system is proposed, based on time-stretching. Following correction, reverberation times of averaged RIRs were comparable to expected tolerance ranges for measurements, well within one just noticeable difference, while also exhibiting a mid-frequency SNR gain of 8 dB. PMID:27475209
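    A minimal illustration of the time-stretching idea (not the authors' implementation): given a stretch factor between two measurements of the same response, resample one onto the other's time axis by linear interpolation before averaging. With synthetic signals and an assumed 0.2% drift:

```python
import math

def resample_stretch(signal, factor):
    """Evaluate signal at positions t*factor via linear interpolation,
    i.e. undo a time stretch of 1/factor."""
    out = []
    for t in range(len(signal)):
        pos = t * factor
        i = int(pos)
        if i + 1 >= len(signal):
            out.append(signal[-1])          # clamp at the tail
        else:
            frac = pos - i
            out.append(signal[i] * (1 - frac) + signal[i + 1] * frac)
    return out

def correlation(a, b):
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

# Reference decaying 'response' and a copy time-stretched by 0.2%
# (standing in for a temperature-induced drift between repetitions):
n, stretch = 4000, 1.002
ref = [math.sin(0.2 * t) * math.exp(-t / 1500.0) for t in range(n)]
drifted = [math.sin(0.2 * t / stretch) * math.exp(-t / (1500.0 * stretch))
           for t in range(n)]
# drifted(t) = ref(t/stretch), so sampling drifted at t*stretch recovers ref:
corrected = resample_stretch(drifted, stretch)
```

After correction the repetitions align, so averaging them raises SNR instead of artificially shortening the decay.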


  1. School Turnaround: Cristo Rey Boston High School Case Study

    ERIC Educational Resources Information Center

    Thielman, Jeff

    2012-01-01

    The mandates of the federal No Child Left Behind Law, including the threat of closing a school for underperformance, have led to multiple public school turnaround attempts. Because turnaround is a relatively new area of focus in education, there is limited research on what does and does not work, and even the definition of turnaround is a work in…

  2. Stability analysis of switched cellular neural networks: A mode-dependent average dwell time approach.

    PubMed

    Huang, Chuangxia; Cao, Jie; Cao, Jinde

    2016-10-01

    This paper addresses the exponential stability of switched cellular neural networks by using the mode-dependent average dwell time (MDADT) approach. This method differs from the traditional average dwell time (ADT) method in permitting each subsystem to have its own average dwell time. Detailed investigations are carried out for two cases: one in which all subsystems are stable, and one in which stable subsystems coexist with unstable subsystems. By employing Lyapunov functionals, linear matrix inequalities (LMIs), a Jensen-type inequality, a Wirtinger-based inequality, and the reciprocally convex approach, we derive some novel and less conservative conditions for exponential stability of the networks. Compared with ADT, the proposed MDADT approach yields a smaller minimal dwell time for each subsystem and faster stabilization of the switched system. The obtained results extend and improve some existing ones. Moreover, the validity and effectiveness of these results are demonstrated through numerical simulations.

  3. Time domain averaging and correlation-based improved spectrum sensing method for cognitive radio

    NASA Astrophysics Data System (ADS)

    Li, Shenghong; Bi, Guoan

    2014-12-01

    We propose an effective time-domain averaging and correlation-based spectrum sensing (TDA-C-SS) method for very low signal-to-noise ratio (SNR) environments. Under the assumption that the received signals from the primary users are deterministic, the proposed TDA-C-SS method processes the received samples with a time-averaging operation to improve the SNR. A correlation operation is then performed with a correlation matrix to determine the existence of the primary signal in the received samples. The TDA-C-SS method does not need any prior information on the received samples or the associated noise power to achieve improved sensing performance. Simulation results are presented to show the effectiveness of the proposed TDA-C-SS method.
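    The time-averaging step can be illustrated directly: coherently averaging M repetitions of a deterministic waveform leaves the signal intact while the noise power drops by a factor of M, an SNR gain of 10·log10(M) dB. A minimal sketch (illustrative only, not the full TDA-C-SS algorithm, whose correlation stage is omitted):

```python
import math
import random

rng = random.Random(7)
n, frames = 256, 100
# Deterministic primary-user waveform (assumed known here only to measure noise)
signal = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]

def noisy_frame():
    """One received frame: signal plus heavy Gaussian noise (SNR well below 0 dB)."""
    return [s + rng.gauss(0.0, 3.0) for s in signal]

# Coherent time-domain average over repeated frames
acc = [0.0] * n
for _ in range(frames):
    acc = [a + x for a, x in zip(acc, noisy_frame())]
avg = [a / frames for a in acc]

def noise_power(frame):
    return sum((x - s) ** 2 for x, s in zip(frame, signal)) / n

# Residual noise power after averaging ~ 9/100, vs ~ 9 for a single frame.
```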

  4. Neural Networks Used to Compare Designed and Measured Time-Average Patterns

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.

    1999-01-01

    Electronic time-average holograms are convenient for comparing the measured vibration modes of fan blades with those calculated by finite-element models. At the NASA Lewis Research Center, neural networks recently were trained to perform what had been a simple visual comparison of the predictions of the design models with the measurements. Finite-element models were used to train neural networks to recognize damage and strain information encoded in subtle changes in the time-average patterns of cantilevers. But the design-grade finite element models were unable to train the neural networks to detect damage in complex blade shapes. The design-model-generated patterns simply did not agree well enough with the measured patterns. Instead, hybrid-training records, with measured time-average patterns as the input and model-generated strain information as the output, were used to effect successful training.

  5. The reversal of the migration turnaround: resumption of traditional trends?

    PubMed

    Champion, A G

    1988-01-01

    "Cochrane and Vining's study of recent trends in core-periphery migration, while suffering from some weaknesses in methodology and interpretation, confirms very clearly that the reversal of the metropolitan migration turnaround is a widespread phenomenon in the developed non-Communist world. Evidence from the United Kingdom also supports this observation. An examination of the possible factors responsible for the rise and fall of counterurbanization over the past two decades suggests that two major forces--population deconcentration and regional restructuring--are operating simultaneously but relatively independently and that they both fluctuate in their nature and strength over time in response to prevailing demographic and economic circumstances."

  6. 40 CFR 60.3042 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? 60.3042 Section 60.3042 Protection of Environment... Construction On or Before December 9, 2004 Model Rule-Monitoring § 60.3042 How do I convert my...

  7. 40 CFR 60.2943 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? 60.2943 Section 60.2943 Protection of Environment... SOURCES Operator Training and Qualification Monitoring § 60.2943 How do I convert my 1-hour...

  8. An upper bound to time-averaged space-charge limited diode currents

    NASA Astrophysics Data System (ADS)

    Griswold, M. E.; Fisch, N. J.; Wurtele, J. S.

    2010-11-01

    The Child-Langmuir law limits the steady-state current density across a one-dimensional planar diode. While it is known that the peak current density can surpass this limit when the boundary conditions vary in time, it remains an open question of whether the average current can violate the Child-Langmuir limit under time-dependent conditions. For the case where the applied voltage is constant but the electric field at the cathode is allowed to vary in time, one-dimensional particle-in-cell simulations suggest that such a violation is impossible. Although a formal proof is not given, an upper bound on the time-averaged current density is offered.
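
    The Child-Langmuir limit discussed in this abstract has a simple closed form for the planar diode, J = (4*eps0/9) * sqrt(2e/m) * V^(3/2) / d^2. A minimal numerical sketch (standard SI constants; the function name and example operating point are illustrative, not taken from the paper):

```python
import math

# Standard SI constants
EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19  # elementary charge, C
M_E = 9.1093837015e-31      # electron mass, kg

def child_langmuir_j(voltage_v, gap_m):
    """Steady-state space-charge-limited current density (A/m^2) for a
    one-dimensional planar vacuum diode:
        J = (4*eps0/9) * sqrt(2e/m) * V^(3/2) / d^2
    """
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
        * voltage_v ** 1.5 / gap_m ** 2

# Hypothetical operating point: 10 kV across a 1 cm gap
j_limit = child_langmuir_j(10e3, 1e-2)
```

    The V^(3/2) scaling means that doubling the gap voltage raises the steady-state limit by a factor of 2^(3/2), which is the baseline against which any time-averaged violation would be judged.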

  9. An upper bound to time-averaged space-charge limited diode currents

    SciTech Connect

    Griswold, M. E.; Fisch, N. J.; Wurtele, J. S.

    2010-11-15

    The Child-Langmuir law limits the steady-state current density across a one-dimensional planar diode. While it is known that the peak current density can surpass this limit when the boundary conditions vary in time, it remains an open question of whether the average current can violate the Child-Langmuir limit under time-dependent conditions. For the case where the applied voltage is constant but the electric field at the cathode is allowed to vary in time, one-dimensional particle-in-cell simulations suggest that such a violation is impossible. Although a formal proof is not given, an upper bound on the time-averaged current density is offered.

  10. Constrained Optimization of Average Arrival Time via a Probabilistic Approach to Transport Reliability

    PubMed Central

    Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam

    2015-01-01

    To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902
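
    The truncated-density transit-time model can be sketched numerically. The following is an illustrative stand-in, not the authors' formulation: transit time is modeled as a normal distribution truncated to a plausible operating range, and punctuality is the probability of arriving within a slack of the scheduled time. All function names and parameter values are hypothetical.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def truncnorm_cdf(x, mu, sigma, lo, hi):
    """CDF of a Normal(mu, sigma) distribution truncated to [lo, hi]."""
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    z = min(max((x - mu) / sigma, a), b)
    return (norm_cdf(z) - norm_cdf(a)) / (norm_cdf(b) - norm_cdf(a))

def punctuality(mu, sigma, lo, hi, scheduled, slack=5.0):
    """P(transit time <= scheduled + slack) under the truncated-normal model."""
    return truncnorm_cdf(scheduled + slack, mu, sigma, lo, hi)

# Hypothetical service: transit time ~ N(40, 8) minutes truncated to [25, 70],
# timetabled at 45 minutes with 5 minutes of acceptable lateness
p = punctuality(mu=40.0, sigma=8.0, lo=25.0, hi=70.0, scheduled=45.0)
```

    An operator could then trade this punctuality level off against the cost of scheduling a longer average transit time, which is the constrained optimization the paper formulates.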

  11. Testing ΛCDM cosmology at turnaround: where to look for violations of the bound?

    NASA Astrophysics Data System (ADS)

    Tanoglidis, D.; Pavlidou, V.; Tomaras, T. N.

    2015-12-01

    In ΛCDM cosmology, structure formation is halted shortly after dark energy dominates the mass/energy budget of the Universe. A manifestation of this effect is that in such a cosmology the turnaround radius—the non-expanding mass shell furthest away from the center of a structure—has an upper bound. Recently, a new, local test for the existence of dark energy in the form of a cosmological constant was proposed based on this turnaround bound. Before designing an experiment that, through high-precision determination of masses and—independently—turnaround radii, will challenge ΛCDM cosmology, we have to answer two important questions: first, when are turnaround-scale structures predicted to be close enough to their maximum size, so that a possible violation of the bound may be observable; second, which is the best mass scale to target for possible violations of the bound. These are the questions we address in the present work. Using the Press-Schechter formalism, we find that turnaround structures have in practice already stopped forming, and consequently, the turnaround radius of structures must be very close to the maximum value today. We also find that the mass scale of ~10^13 M_solar characterizes the turnaround structures that start to form in a statistically important number density today—and even at an infinite time in the future, since structure formation has almost stopped. This mass scale also separates turnaround structures with qualitatively different cosmological evolution: smaller structures are no longer readjusting their mass distribution inside the turnaround scale, they asymptotically approach their ultimate abundance from higher values, and they are common enough to have, at some epoch, experienced major mergers with structures of comparable mass; larger structures exhibit the opposite behavior. We call this mass scale the transitional mass scale and we argue that it is optimal for the purpose outlined above. As a corollary, we
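
    In ΛCDM the turnaround bound referred to above has the closed form R_max = (3GM/(Lambda c^2))^(1/3). A rough numerical sketch (the value of the cosmological constant is an assumed approximate observational figure, and all names are illustrative):

```python
G = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.99792458e8      # speed of light, m/s
LAMBDA = 1.1e-52      # cosmological constant, m^-2 (assumed approximate value)
M_SUN = 1.989e30      # solar mass, kg
MPC = 3.0857e22       # metres per megaparsec

def max_turnaround_radius_mpc(mass_msun):
    """Upper bound on the turnaround radius in LambdaCDM,
    R_max = (3 G M / (Lambda c^2))^(1/3), returned in megaparsecs."""
    r_m = (3.0 * G * mass_msun * M_SUN / (LAMBDA * C ** 2)) ** (1.0 / 3.0)
    return r_m / MPC

# The ~10^13 solar-mass "transitional" scale discussed in the abstract
r13 = max_turnaround_radius_mpc(1e13)
```

    The cube-root dependence on mass means an eight-fold heavier structure has only twice the bounding radius, which is why the choice of target mass scale matters for observability.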

  12. Inverse methods for estimating primary input signals from time-averaged isotope profiles

    NASA Astrophysics Data System (ADS)

    Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.

    2005-08-01

    Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
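
    The minimum-length solution of Am = d referred to above is the minimum-norm least-squares solution, obtainable with the Moore-Penrose pseudoinverse. A toy sketch in which a hypothetical moving-average matrix stands in for the amelogenesis-and-sampling matrix A:

```python
import numpy as np

def minimum_length_solution(A, d):
    """Minimum-norm ("minimum length") solution m of the underdetermined
    linear system A m = d, via the Moore-Penrose pseudoinverse."""
    return np.linalg.pinv(A) @ d

# Toy forward model: each measurement is a 4-point moving average of a
# finer-resolution input signal (a hypothetical stand-in for the temporal
# and spatial averaging matrix described in the abstract).
rng = np.random.default_rng(0)
m_true = rng.normal(size=12)          # "true" input signal
A = np.zeros((5, 12))
for i in range(5):
    A[i, 2 * i:2 * i + 4] = 0.25      # averaging window (rows sum to 1)
d = A @ m_true                        # time-averaged "measured" profile
m_est = minimum_length_solution(A, d)
```

    Among all input signals consistent with the averaged data, the pseudoinverse picks the one of smallest norm; the paper's accuracy caveats (measurement error, isotopic structure of the profile) apply to any such reconstruction.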

  13. Real-time fluid transport characterization through direct acquisition of the averaged propagator

    NASA Astrophysics Data System (ADS)

    Kittler, Wilfred; Hunter, Mark; Galvosas, Petrik

    2015-08-01

    Brownian motion (diffusion) and coherent flow are fundamental for many natural and artificial processes. Therefore, its accurate measurement and description is highly desirable in many areas of science, engineering, and technology. Currently available methods commonly involve multiple experiments and substantial processing of acquired data. This contribution proposes a theoretical and experimental framework that enables one to directly examine the dynamics of fluid matter subject to diffusion and flow through the acquisition of the so-called averaged propagator. This statistical function holds all information on particle mobility due to flow and diffusion averaged over the observed fluid. The proposed method is based on a single instantaneous nuclear magnetic resonance measurement event. It also removes the need for data postprocessing by capturing the averaged propagator directly as the acquired signal, which enables the monitoring of diffusion and flow in real time.

  14. Effects of time-averaging climate parameters on predicted multicompartmental fate of pesticides and POPs.

    PubMed

    Lammel, Gerhard

    2004-01-01

    With the aim of investigating the justification of time-averaging of climate parameters in multicompartment modelling, the effects of various climate parameters and different modes of entry on the predicted substances' total environmental burdens and the compartmental fractions were studied. A simple, non-steady state zero-dimensional (box) mass-balance model of intercompartmental mass exchange which comprises four compartments was used for this purpose. Two runs were performed in each case: one temporally unresolved (time-averaged conditions) and one time-resolved (hourly or finer) control run. In many cases significant discrepancies are predicted, depending on the substance and on the parameter. We find discrepancies exceeding 10% relative to the control run, and up to an order of magnitude for prediction of the total environmental burden, from neglecting seasonalities of the soil and ocean temperatures and the hydroxyl radical concentration in the atmosphere, and diurnalities of atmospheric mixing depth and the hydroxyl radical concentration in the atmosphere. Under some conditions it was indicated that substance sensitivity could be explained by the magnitude of the sink terms in the compartment(s) with varying parameters. In general, however, any key for understanding substance sensitivity seems not to be linked in any simple manner to the properties of the substance, to the fractions of its burden, or to the sink terms in either of the compartments with varying parameters. Averaging of diurnal variability was found to cause errors of total environmental residence time of different sign for different substances. The effects of time-averaging of several parameters are in general not additive, but synergistic as well as compensatory effects occur. An implication of these findings is that the ranking of substances according to persistence is sensitive to time resolution on the scale of hours to months. As a conclusion it is recommended to use high temporal resolution in multicompartment modelling.

  15. Measurement of fluid properties using rapid-double-exposure and time-average holographic interferometry

    NASA Astrophysics Data System (ADS)

    Decker, A. J.

    1984-06-01

    The holographic recording of the time history of a flow feature in three dimensions is discussed. The use of diffuse illumination holographic interferometry or the three dimensional visualization of flow features such as shock waves and turbulent eddies is described. The double-exposure and time-average methods are compared using the characteristic function and the results from a flow simulator. A time history requires a large hologram recording rate. Results of holographic cinematography of the shock waves in a flutter cascade are presented as an example. Future directions of this effort, including the availability and development of suitable lasers, are discussed.

  17. Exposing local symmetries in distorted driven lattices via time-averaged invariants

    NASA Astrophysics Data System (ADS)

    Wulf, T.; Morfonios, C. V.; Diakonos, F. K.; Schmelcher, P.

    2016-05-01

    Time-averaged two-point currents are derived and shown to be spatially invariant within domains of local translation or inversion symmetry for arbitrary time-periodic quantum systems in one dimension. These currents are shown to provide a valuable tool for detecting deformations of a spatial symmetry in static and driven lattices. In the static case the invariance of the two-point currents is related to the presence of time-reversal invariance and/or probability current conservation. The obtained insights into the wave functions are further exploited for a symmetry-based convergence check which is applicable for globally broken but locally retained potential symmetries.

  18. Measurement of fluid properties using rapid-double-exposure and time-average holographic interferometry

    NASA Technical Reports Server (NTRS)

    Decker, A. J.

    1984-01-01

    The holographic recording of the time history of a flow feature in three dimensions is discussed. The use of diffuse illumination holographic interferometry or the three-dimensional visualization of flow features such as shock waves and turbulent eddies is described. The double-exposure and time-average methods are compared using the characteristic function and the results from a flow simulator. A time history requires a large hologram recording rate. Results of holographic cinematography of the shock waves in a flutter cascade are presented as an example. Future directions of this effort, including the availability and development of suitable lasers, are discussed. Previously announced in STAR as N84-21849

  19. Bias Corrections for Regional Estimates of the Time-averaged Geomagnetic Field

    NASA Astrophysics Data System (ADS)

    Constable, C.; Johnson, C. L.

    2009-05-01

    We assess two sources of bias in the time-averaged geomagnetic field (TAF) and paleosecular variation (PSV): inadequate temporal sampling, and the use of unit vectors in deriving temporal averages of the regional geomagnetic field. For the first question (temporal sampling), we use statistical resampling of existing data sets to minimize and correct for bias arising from uneven temporal sampling in studies of the TAF and its PSV. The techniques are illustrated using data derived from Hawaiian lava flows for 0-5 Ma: directional observations are an updated version of a previously published compilation of paleomagnetic directional data centered on ±20° latitude by Lawrence et al. (2006); intensity data are drawn from Tauxe & Yamazaki (2007). We conclude that poor temporal sampling can produce biased estimates of TAF and PSV, and resampling to an appropriate statistical distribution of ages reduces this bias. We suggest that similar resampling should be attempted as a bias correction for all regional paleomagnetic data to be used in TAF and PSV modeling. The second potential source of bias is the use of directional data in place of full vector data to estimate the average field. This is investigated for the full vector subset of the updated Hawaiian data set. Lawrence, K.P., C.G. Constable, and C.L. Johnson, 2006, Geochem. Geophys. Geosyst., 7, Q07007, DOI 10.1029/2005GC001181. Tauxe, L., & Yamazaki, 2007, Treatise on Geophysics, 5, Geomagnetism, Elsevier, Amsterdam, Chapter 13, p. 509.

  20. Time-average-based Methods for Multi-angular Scale Analysis of Cosmic-Ray Data

    NASA Astrophysics Data System (ADS)

    Iuppa, R.; Di Sciascio, G.

    2013-04-01

    Over the past decade, a number of experiments dealt with the problem of measuring the arrival direction distribution of cosmic rays, looking for information on the propagation mechanisms and the identification of their sources. Any deviation from isotropy may be regarded as a signature of unforeseen or unknown phenomena, especially if well localized in the sky and occurring at low rigidity. This induced experimenters to search for excesses down to angular scales as narrow as 10°, raising the issue of properly filtering out contributions from wider structures. A solution commonly envisaged was based on time-average methods to determine the reference value of the cosmic-ray flux. Such techniques are nearly insensitive to signals wider than the time window in use, thus allowing the analysis to focus on medium- and small-scale signals. Nonetheless, the signal often cannot be excluded from the calculation of the reference value, which induces systematic errors. The use of time-average methods recently revealed important discoveries about the medium-scale cosmic-ray anisotropy, present in both the northern and southern hemispheres. It is known that the excess (or deficit) is observed as less intense than in reality, and that fake deficit zones are rendered around true excesses, because of the absolute lack of a priori knowledge of which signal is true and which is not. This work is an attempt to critically review the use of time-average-based methods for observing extended features in the cosmic-ray arrival distribution pattern.

  1. Leading a supply chain turnaround.

    PubMed

    Slone, Reuben E

    2004-10-01

    Just five years ago, salespeople at Whirlpool were in the habit of referring to their supply chain organization as the "sales disablers." Now the company excels at getting products to the right place at the right time--while managing to keep inventories low. How did that happen? In this first-person account, Reuben Slone, Whirlpool's vice president of Global Supply Chain, describes how he and his colleagues devised the right supply chain strategy, sold it internally, and implemented it. Slone insisted that the right focal point for the strategy was the satisfaction of consumers at the end of the supply chain. Most supply chain initiatives do the opposite: They start with the realities of a company's manufacturing base and proceed from there. Through a series of interviews with trade customers large and small, his team identified 27 different capabilities that drove industry perceptions of Whirlpool's performance. Knowing it was infeasible to aim for world-class performance across all of them, Slone weighed the costs of excelling at each and found the combination of initiatives that would provide overall competitive advantage. A highly disciplined project management office and broad training in project management were key to keeping work on budget and on benefit. Slone set an intense pace--three "releases" of new capabilities every month--that the group maintains to this day. Lest this seem like a technology story, however, Slone insists it is just as much a "talent renaissance." People are proud today to be part of Whirlpool's supply chain organization, and its new generation of talent will give the company a competitive advantage for years to come. PMID:15559580

  3. A Stochastic Model of Space-Time Variability of Tropical Rainfall: I. Statistics of Spatial Averages

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Global maps of rainfall are of great importance in connection with modeling of the earth's climate. Comparison of the maps of rainfall predicted by computer-generated climate models with observation provides a sensitive test for these models. To make such a comparison, one typically needs the total precipitation amount over a large area, which could be hundreds of kilometers in size, over extended periods of time of order days or months. This presents a difficult problem since rain varies greatly from place to place as well as in time. Remote sensing methods using ground radar or satellites detect rain over a large area by essentially taking a series of snapshots at infrequent intervals and indirectly deriving the average rain intensity within a collection of pixels, usually several kilometers in size. They measure the area average of rain at a particular instant. Rain gauges, on the other hand, record rain accumulation continuously in time but only over a very small area tens of centimeters across, say, the size of a dinner plate. They measure only a time average at a single location. In making use of either method one needs to fill in the gaps in the observation - either the gaps in the area covered or the gaps in time of observation. This involves using statistical models to obtain information about the rain that is missed from what is actually detected. This paper investigates such a statistical model and validates it with rain data collected over the tropical Western Pacific from shipborne radars during TOGA COARE (Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment). The model incorporates a number of commonly observed features of rain. While rain varies rapidly with location and time, the variability diminishes when averaged over larger areas or longer periods of time. Moreover, rain is patchy in nature - at any instant on the average only a certain fraction of the observed pixels contain rain. The fraction of area covered by

  4. Quantitative vibration analysis using a single fringe pattern in time-average speckle interferometry.

    PubMed

    Deepan, B; Quan, C; Tay, C J

    2016-08-01

    In this paper, a novel technique for quantitative vibration analysis using time-average electronic speckle pattern interferometry is proposed. An amplitude-varied time-average refreshing reference frame method is used to capture a fringe pattern with a better fringe contrast than the conventional reference frame technique. The recorded fringe patterns with improved contrast provide better mode shape visibility and are easier to process. A derivative-based regularized phase tracker model is used to retrieve vibration amplitudes from a single fringe pattern. The method does not require a phase shifter to obtain the mode shape or amplitude. The method provides unwrapped amplitude and amplitude derivatives maps directly, so a separate phase unwrapping process is not required. Experimental work is carried out using a circular aluminum plate test specimen and the results are compared with a finite element method modal analysis. Both experimental and numerical results show that the proposed method is robust and accurate. PMID:27505366

  5. Real Time Study Of Vibrations By Means Of An Instrument Recording Time-Average Holograms On A TV-Vidicon

    NASA Astrophysics Data System (ADS)

    Ek, Leif; Molin, Nils-Erik; Biedermann, Klaus

    1985-06-01

    A number of experiments in vibration analysis are discussed in which different methods such as hologram interferometry, impact testing, acoustical recording, and FEM calculations are compared to results obtained by means of an instrument based on image-plane, time-average hologram recording on a TV vidicon, as in ESPI (electronic speckle pattern interferometry). The instrument, "VibraVision", offers distinct advantages in vibration studies.

  6. Conditional sampling schemes based on the Variable Interval Time Averaging (VITA) algorithm

    NASA Astrophysics Data System (ADS)

    Morrison, J. F.; Tsai, H. M.; Bradshaw, P.

    1986-08-01

    The variable interval time averaging (VITA) algorithm was tested in a variety of boundary layers for its ability to detect motions principally involved in the production of shear stress. A VITA+LEVEL scheme (which uses both a variance and a level criterion) was devised and is shown to produce length-scale statistics that are independent of the conditioning criteria, whereas those from the VITA scheme are not.
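
    A minimal sketch of the basic VITA criterion (short-time variance over a sliding window exceeding k times the long-time variance of the record); the window length, threshold, and synthetic signal below are illustrative, and the additional LEVEL criterion of the VITA+LEVEL variant is omitted:

```python
import numpy as np

def vita_detect(u, window, k=1.0):
    """Flag samples where the variance over a `window`-point sliding interval
    exceeds k times the long-time variance of the whole record."""
    u = np.asarray(u, dtype=float)
    kernel = np.ones(window) / window
    mean_w = np.convolve(u, kernel, mode="same")         # short-time mean
    mean_sq_w = np.convolve(u * u, kernel, mode="same")  # short-time mean square
    local_var = mean_sq_w - mean_w ** 2
    return local_var > k * np.var(u)

# Synthetic record: weak noise with one strong burst in the middle
rng = np.random.default_rng(1)
signal = 0.1 * rng.normal(size=1000)
signal[500:520] += np.sin(np.linspace(0.0, 6.0 * np.pi, 20))
events = vita_detect(signal, window=25, k=1.0)
```

    The burst is flagged because its local variance far exceeds the record-wide variance; quiet stretches mostly are not.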

  7. Effects of Time Averaging on Optical Scintillation in a Ground-to-Satellite Atmospheric Propagation

    NASA Astrophysics Data System (ADS)

    Toyoshima, Morio; Araki, Kenichi

    2000-04-01

    Temporal natures for a variance of turbulence-induced log-intensity fluctuations are obtained. The variance of the optical fluctuation is reduced when the optical signals are integrated in a photodetector, and we express the index of reduction (called the time-averaging factor) by using an autocovariance function of the optical fluctuation. The optical fluctuations for a ground-to-satellite path are caused by both atmospheric turbulence and the beam-pointing jitter error of the optical transmitter. The turbulence-induced optical scintillation can be discriminated from the fluctuation that is due to the beam-pointing jitter error. The compared result from the probability density function of the optical signal reveals good agreement. The temporal autocovariance functions of optical scintillation are obtained and used to calculate the time-averaging factor. The analytically expected effects of time averaging are verified by the experimental results. The estimations contribute to the link budget design for the optical tracking channel through atmospheric turbulence.

  8. Effects of time averaging on optical scintillation in a ground-to-satellite atmospheric propagation.

    PubMed

    Toyoshima, M; Araki, K

    2000-04-20

    Temporal natures for a variance of turbulence-induced log-intensity fluctuations are obtained. The variance of the optical fluctuation is reduced when the optical signals are integrated in a photodetector, and we express the index of reduction (called the time-averaging factor) by using an autocovariance function of the optical fluctuation. The optical fluctuations for a ground-to-satellite path are caused by both atmospheric turbulence and the beam-pointing jitter error of the optical transmitter. The turbulence-induced optical scintillation can be discriminated from the fluctuation that is due to the beam-pointing jitter error. The compared result from the probability density function of the optical signal reveals good agreement. The temporal autocovariance functions of optical scintillation are obtained and used to calculate the time-averaging factor. The analytically expected effects of time averaging are verified by the experimental results. The estimations contribute to the link budget design for the optical tracking channel through atmospheric turbulence. PMID:18345087
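
    The time-averaging factor described in these abstracts can be computed from the autocovariance function numerically: for an integration window T, the variance of the averaged signal is F(T) times the instantaneous variance, with F(T) = (2/T^2) * int_0^T (T - tau) B(tau) dtau / B(0). A sketch under an assumed exponential autocovariance (all parameter values illustrative):

```python
import numpy as np

def time_averaging_factor(autocov, dt, T):
    """Time-averaging factor F(T) from autocovariance samples B(tau)
    taken every dt seconds:
        F(T) = (2 / T^2) * int_0^T (T - tau) B(tau) dtau / B(0),
    so that var(T-averaged signal) = F(T) * var(instantaneous signal)."""
    n = int(round(T / dt)) + 1
    tau = np.arange(n) * dt
    integrand = (T - tau) * autocov[:n]
    # trapezoidal rule
    integral = dt * (integrand[0] / 2.0 + integrand[1:-1].sum()
                     + integrand[-1] / 2.0)
    return 2.0 * integral / (T ** 2 * autocov[0])

# Assumed exponential autocovariance with a 1 ms correlation time
dt, tc = 1e-5, 1e-3
tau_grid = np.arange(0.0, 0.05, dt)
B = np.exp(-tau_grid / tc)
f_short = time_averaging_factor(B, dt, T=1e-4)  # window << tc: F close to 1
f_long = time_averaging_factor(B, dt, T=2e-2)   # window >> tc: strong reduction
```

    Detector integration windows much longer than the scintillation correlation time therefore suppress the measured variance, which is the effect the link-budget estimate has to account for.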

  9. Mixed semiclassical initial value representation time-averaging propagator for spectroscopic calculations.

    PubMed

    Buchholz, Max; Grossmann, Frank; Ceotto, Michele

    2016-03-01

    A mixed semiclassical initial value representation expression for spectroscopic calculations is derived. The formulation takes advantage of the time-averaging filtering and the hierarchical properties of different trajectory based propagation methods. A separable approximation is then introduced that greatly reduces (about an order of magnitude) the computational cost compared with a full Herman-Kluk time-averaging semiclassical calculation for the same systems. The expression is exact for the harmonic case and it is tested numerically for a Morse potential coupled to one or two additional harmonic degrees of freedom. Results are compared to full Herman-Kluk time-averaging calculations and exact quantum wavepacket propagations. We found the peak positions of the mixed semiclassical approximations to be always in very good agreement with full quantum calculations, while overtone peak intensities are lower with respect to the exact ones. Given the reduced computational effort required by this new mixed semiclassical approximation, we believe the present method to make spectroscopic calculations available for higher dimensional systems than accessible before. PMID:26957152

  10. Mixed semiclassical initial value representation time-averaging propagator for spectroscopic calculations

    NASA Astrophysics Data System (ADS)

    Buchholz, Max; Grossmann, Frank; Ceotto, Michele

    2016-03-01

    A mixed semiclassical initial value representation expression for spectroscopic calculations is derived. The formulation takes advantage of the time-averaging filtering and the hierarchical properties of different trajectory based propagation methods. A separable approximation is then introduced that greatly reduces (about an order of magnitude) the computational cost compared with a full Herman-Kluk time-averaging semiclassical calculation for the same systems. The expression is exact for the harmonic case and it is tested numerically for a Morse potential coupled to one or two additional harmonic degrees of freedom. Results are compared to full Herman-Kluk time-averaging calculations and exact quantum wavepacket propagations. We found the peak positions of the mixed semiclassical approximations to be always in very good agreement with full quantum calculations, while overtone peak intensities are lower with respect to the exact ones. Given the reduced computational effort required by this new mixed semiclassical approximation, we believe the present method to make spectroscopic calculations available for higher dimensional systems than accessible before.

  11. Manned Orbital Transfer Vehicle (MOTV). Volume 5: Turnaround analysis

    NASA Technical Reports Server (NTRS)

    Boyland, R. E.; Sherman, S. W.; Morfin, H. W.

    1979-01-01

    The development of a low-cost, reliable turnaround process to employ the MOTV in enhancing the utilization of the geosynchronous space region is analyzed. It is indicated that a routine, effective turnaround/maintenance plan must make maximum use of flight data for maintenance planning, a high degree of test automation, and MOTV maintainability features in order to minimize tests, facilitate repair, and reduce manpower requirements. An effective turnaround plan also pays back in reduced risk.

  12. Investigation of Average Prediction Time for Different Meteorological Variables By Using Chaotic Approach

    NASA Astrophysics Data System (ADS)

    Özgür, Evren; Koçak, Kasım

    2016-04-01

    According to the nonlinear dynamical systems approach, the time evolution of a system can be represented by its trajectories in phase space. This phase space is spanned by the state variables that are necessary to determine the time evolution of the system. Atmospheric processes cannot be represented by linear approaches because of their dependence on numerous independent variables. Since small changes in initial conditions lead to significant differences in prediction, long-term prediction of meteorological variables is not possible. This situation is described by the term "sensitive dependence on initial conditions". In this study, we attempted to determine the average prediction time for different atmospheric variables by applying a nonlinear approach. The first step of the method is to reconstruct the phase space. Phase-space reconstruction has two parameters: the time delay and the embedding dimension. The Mutual Information Function (MIF) can be used to determine the optimum time delay; the MIF considers both linear and nonlinear interdependencies in a given time series. To define the phase space, the embedding dimension must also be identified correctly. The embedding dimension is the number of state variables necessary to describe the dynamics of the system, and the algorithm used to determine it is False Nearest Neighbors (FNN). After constructing the phase space with the chosen time delay and embedding dimension, the maximum Lyapunov exponent was computed. The Lyapunov exponent quantifies the exponential divergence or convergence of nearby orbits in phase space, and a dynamical system with a positive maximum Lyapunov exponent is defined as chaotic. Because meteorological variables are governed by large numbers of independent variables, their time series may be produced by a chaotic dynamical system. Using the phase space and the maximum Lyapunov exponent, average prediction times of the different parameters were calculated.
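    The pipeline sketched in this abstract (delay embedding, then a maximum Lyapunov exponent whose inverse sets the average prediction horizon) can be illustrated as follows. This is a minimal Rosenstein-style sketch, not the authors' implementation; the embedding parameters, Theiler window, and the logistic-map test signal are all assumptions for the demonstration.

    ```python
    import numpy as np

    def delay_embed(x, m, tau):
        """Reconstruct the phase space by time-delay embedding."""
        n = len(x) - (m - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

    def max_lyapunov(x, m=2, tau=1, k=8, theiler=10):
        """Rosenstein-style estimate of the largest Lyapunov exponent."""
        Y = delay_embed(x, m, tau)
        n = len(Y)
        D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
        idx = np.arange(n)
        # exclude temporally close points (Theiler window), incl. self
        D[np.abs(idx[:, None] - idx[None, :]) <= theiler] = np.inf
        nn = np.argmin(D, axis=1)          # nearest neighbour of each point
        logdiv = []
        for i in range(k):                 # mean log-divergence after i steps
            valid = (idx + i < n) & (nn + i < n)
            d = np.linalg.norm(Y[idx[valid] + i] - Y[nn[valid] + i], axis=1)
            logdiv.append(np.mean(np.log(d[d > 0])))
        # slope of mean log-divergence vs. step number = Lyapunov exponent
        return np.polyfit(np.arange(k), logdiv, 1)[0]

    # chaotic logistic map as a test signal; its exact exponent is ln 2 ~ 0.69
    x = np.empty(2000)
    x[0] = 0.4
    for i in range(len(x) - 1):
        x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

    lam = max_lyapunov(x)
    print(f"lambda ~ {lam:.2f}; average prediction horizon ~ {1.0/lam:.1f} steps")
    ```

    The estimate is biased low by the eventual saturation of divergence on a bounded attractor, which is why a short fitting range k is used here.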

  13. School Turnaround Teachers: Selection Toolkit. Part of the School Turnaround Collection from Public Impact

    ERIC Educational Resources Information Center

    Public Impact, 2008

    2008-01-01

    This toolkit includes these separate sections: (1) Selection Preparation Guide; (2) Day-of-Interview Tools; (3) Candidate Rating Tools; and (4) Candidate Comparison and Decision Tools. Each of the sections is designed to be used at different stages of the selection process. The first section provides turnaround teacher competencies that are the…

  14. Asymmetric multifractal detrending moving average analysis in time series of PM2.5 concentration

    NASA Astrophysics Data System (ADS)

    Zhang, Chen; Ni, Zhiwei; Ni, Liping; Li, Jingming; Zhou, Longfei

    2016-09-01

    In this paper, we propose the asymmetric multifractal detrending moving average analysis (A-MFDMA) method to explore asymmetric correlation in non-stationary time series. The proposed method is applied to explore the asymmetric correlation of PM2.5 daily average concentration during uptrends and downtrends in China. In addition, shuffling and phase-randomization procedures are applied to detect the sources of multifractality. The results show the existence of asymmetric correlations, and that these correlations are multifractal. Further, the multifractal scaling behavior of the Chinese PM2.5 series is caused not only by long-range correlation but also by the fat-tailed distribution, with the fat-tailed distribution being the major source of multifractality.
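    The two surrogate procedures named here are standard: shuffling destroys all correlations but keeps the value distribution, while phase randomization keeps the amplitude spectrum (linear correlations) but destroys distributional structure such as fat tails. A minimal sketch, using a Student-t test series as a stand-in for the PM2.5 data (an assumption for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def shuffled_surrogate(x, rng):
        """Destroys all temporal correlations; keeps the value distribution."""
        return rng.permutation(x)

    def phase_randomized_surrogate(x, rng):
        """Keeps the amplitude spectrum; destroys nonlinear/fat-tail structure."""
        X = np.fft.rfft(x)
        phases = rng.uniform(0, 2 * np.pi, len(X))
        phases[0] = 0.0               # keep the DC bin real
        if len(x) % 2 == 0:
            phases[-1] = 0.0          # keep the Nyquist bin real
        return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

    x = rng.standard_t(df=3, size=1024)   # fat-tailed test series
    s1 = shuffled_surrogate(x, rng)
    s2 = phase_randomized_surrogate(x, rng)
    ```

    Comparing the multifractal spectrum of the original series against each surrogate then attributes multifractality to correlations (changed by shuffling) or to the fat-tailed distribution (changed by phase randomization).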

  15. On simulating flow with multiple time scales using a method of averages

    SciTech Connect

    Margolin, L.G.

    1997-12-31

    The author presents a new computational method based on averaging to efficiently simulate certain systems with multiple time scales. He first develops the method in a simple one-dimensional setting and employs linear stability analysis to demonstrate numerical stability. He then extends the method to multidimensional fluid flow. His method of averages does not depend on explicit splitting of the equations nor on modal decomposition. Rather he combines low order and high order algorithms in a generalized predictor-corrector framework. He illustrates the methodology in the context of a shallow fluid approximation to an ocean basin circulation. He finds that his new method reproduces the accuracy of a fully explicit second-order accurate scheme, while costing less than a first-order accurate scheme.

  16. TIME-AVERAGE-BASED METHODS FOR MULTI-ANGULAR SCALE ANALYSIS OF COSMIC-RAY DATA

    SciTech Connect

    Iuppa, R.; Di Sciascio, G. E-mail: giuseppe.disciascio@roma2.infn.it

    2013-04-01

    Over the past decade, a number of experiments have dealt with the problem of measuring the arrival direction distribution of cosmic rays, looking for information on the propagation mechanisms and the identification of their sources. Any deviation from isotropy may be regarded as a signature of unforeseen or unknown phenomena, especially if well localized in the sky and occurring at low rigidity. This has induced experimenters to search for excesses down to angular scales as narrow as 10°, raising the issue of properly filtering out contributions from wider structures. A solution commonly envisaged is based on time-average methods to determine the reference value of the cosmic-ray flux. Such techniques are nearly insensitive to signals wider than the time window in use, thus allowing the analysis to focus on medium- and small-scale signals. Nonetheless, the signal often cannot be excluded from the calculation of the reference value, which induces systematic errors. The use of time-average methods recently led to important discoveries about the medium-scale cosmic-ray anisotropy, present in both the northern and southern hemispheres. It is known that an excess (or deficit) is observed as less intense than it really is, and that spurious deficit zones are rendered around true excesses, because there is no a priori knowledge of which signal is true and which is not. This work is an attempt to critically review the use of time-average-based methods for observing extended features in the cosmic-ray arrival distribution pattern.

  17. Gearbox fault diagnosis based on time-frequency domain synchronous averaging and feature extraction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shengli; Tang, Jiong

    2016-04-01

    The gearbox is one of the most vulnerable subsystems in a wind turbine. Its health status significantly affects the efficiency and function of the entire system. Vibration-based fault diagnosis methods are widely applied nowadays. However, vibration signals are always contaminated by noise that comes from data acquisition errors, structural geometry errors, operation errors, etc. As a result, it is difficult to identify potential gear failures directly from vibration signals, especially for early-stage faults. This paper utilizes a synchronous averaging technique in the time-frequency domain to remove the non-synchronous noise and enhance the fault-related time-frequency features. The enhanced time-frequency information is further employed in gear fault classification and identification through feature extraction algorithms including Kernel Principal Component Analysis (KPCA), Multilinear Principal Component Analysis (MPCA), and Locally Linear Embedding (LLE). Results show that the LLE approach is the most effective at classifying and identifying different gear faults.

  18. Memory efficient and constant time 2D-recursive spatial averaging filter for embedded implementations

    NASA Astrophysics Data System (ADS)

    Gan, Qifeng; Seoud, Lama; Ben Tahar, Houssem; Langlois, J. M. Pierre

    2016-04-01

    Spatial Averaging Filters (SAF) are extensively used in image processing for image smoothing and denoising. Their latest implementations have already achieved constant-time computational complexity regardless of kernel size. However, all the existing O(1) algorithms require additional memory for temporary data storage. In order to minimize memory usage in embedded systems, we introduce a new two-dimensional recursive SAF. It uses previously computed pixel values along both rows and columns to calculate the current one, and it achieves constant-time computational complexity without any additional memory. Experimental comparisons with previous SAF implementations show that the proposed 2D-recursive SAF does not require any additional memory while offering a computational time similar to the most efficient existing SAF algorithm. These features make it especially suitable for embedded systems with limited memory capacity.
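    For contrast, the standard O(1)-per-pixel approach that this record improves on can be sketched as a separable running-sum box filter. Note that, unlike the proposed 2D-recursive SAF, this classical version does allocate prefix-sum buffers, which is exactly the memory cost the paper avoids; the kernel size and test image below are assumptions for illustration.

    ```python
    import numpy as np

    def box_filter_1d(x, k):
        """Running-sum mean filter: O(1) work per output sample ('valid' part)."""
        c = np.cumsum(np.insert(x, 0, 0.0))   # prefix sums (extra memory!)
        return (c[k:] - c[:-k]) / k

    def box_filter_2d(img, k):
        """Separable spatial averaging: filter rows, then columns."""
        tmp = np.apply_along_axis(box_filter_1d, 1, img, k)
        return np.apply_along_axis(box_filter_1d, 0, tmp, k)

    img = np.arange(36, dtype=float).reshape(6, 6)
    out = box_filter_2d(img, 3)

    # cross-check against a naive O(k^2)-per-pixel windowed mean
    naive = np.array([[img[i:i + 3, j:j + 3].mean() for j in range(4)]
                      for i in range(4)])
    print(np.allclose(out, naive))
    ```

    Both versions compute the same windowed mean; the difference the paper targets is purely the temporary storage of the running sums.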

  19. A modeling study of the time-averaged electric currents in the vicinity of isolated thunderstorms

    NASA Technical Reports Server (NTRS)

    Driscoll, Kevin T.; Blakeslee, Richard J.; Baginski, Michael E.

    1992-01-01

    A thorough examination of the results of a time-dependent computer model of a dipole thunderstorm revealed that there are numerous similarities between the time-averaged electrical properties and the steady-state properties of an active thunderstorm. Thus, the electrical behavior of the atmosphere in the vicinity of a thunderstorm can be determined with a formulation similar to what was first described by Holzer and Saxon (1952). From the Maxwell continuity equation of electric current, a simple analytical equation was derived that expresses a thunderstorm's average current contribution to the global electric circuit in terms of the generator current within the thundercloud, the intracloud lightning current, the cloud-to-ground lightning current, the altitudes of the charge centers, and the conductivity profile of the atmosphere. This equation was found to be nearly as accurate as the more computationally expensive numerical model, even when it is applied to a thunderstorm with a reduced conductivity thundercloud, a time-varying generator current, a varying flash rate, and a changing lightning mix.

  20. Time-averages for Plane Travelling Waves—The Effect of Attenuation: I, Adiabatic Approach

    NASA Astrophysics Data System (ADS)

    Makarov, S. N.

    1993-05-01

    The analysis of the effect of attenuation on the time-averages for a plane travelling wave is presented. A barotropic equation of state is considered; i.e., acoustic heating is assumed to be negligible. The problem consists of calculating means in a finite region bounded on one side by a transducer surface and on the other by a perfectly absorbing surface. Although the simple-wave approximation cannot be used throughout the field, it is still valid near the perfect absorber. The result for radiation pressure differs from the conclusions given previously by Beyer and by Livett, Emery and Leeman.

  1. Probe shapes that measure time-averaged streamwise momentum and cross-stream turbulence intensity

    NASA Technical Reports Server (NTRS)

    Rossow, Vernon J. (Inventor)

    1993-01-01

    A method and apparatus for directly measuring the time-averaged streamwise momentum in a turbulent stream use a probe whose total-head response varies as the cosine-squared of the angle of incidence. The probe has a nose with a slight indentation on its front face that provides the desired response. The method of making the probe incorporates unique design features. A second probe may be positioned side by side with the first to provide a direct measurement of the total pressure. The difference between the two pressures yields the sum of the squares of the cross-stream components of the turbulence level.

  2. Study of distribution and characteristics of the time average of pressure of a water cushion pool

    NASA Astrophysics Data System (ADS)

    Guo, Y. H.; Fu, J. F.

    2016-08-01

    When a dam discharges flood water, the plunging flow, with its greater kinetic energy, will scour the riverbed, resulting in erosion damage. In order to improve the anti-erosion capacity of a riverbed, the water cushion pool is created. This paper builds on turbulent jet theory to deduce a semi-empirical formula for the time average of pressure in the impinging portion of the cushion pool. Additionally, a MATLAB numerical simulation is conducted, based on the turbulent jet energy and the water cushion depth when flood water enters the pool, to determine the regularities of distribution and related characteristics.

  3. Time-averaged fluxes of lead and fallout radionuclides to sediments in Florida Bay

    USGS Publications Warehouse

    Robbins, J.A.; Holmes, C.; Halley, R.; Bothner, M.; Shinn, E.; Graney, J.; Keeler, G.; TenBrink, M.; Orlandini, K.A.; Rudnick, D.

    2000-01-01

    Recent, unmixed sediments from mud banks of central Florida Bay were dated using 210Pb/226Ra, and chronologies were verified by comparing sediment lead temporal records with Pb/Ca ratios in annual layers of coral (Montastrea annularis) located on the ocean side of the Florida Keys. Dates of sediment lead peaks (1978 ± 2) accord with prior observations of a 6 year lag between the occurrence of maximum atmospheric lead in 1972 and peak coral lead in 1978. Smaller lags of 1-2 years occur between the maximum atmospheric radionuclide fallout and peaks in sediment temporal records of 137Cs and Pu. Such lags are consequences of system time averaging (STA) in which atmospherically delivered particle-associated constituents accumulate and mix in a (sedimentary?) reservoir before transferring to permanent sediments and coral. STA model calculations, using time-dependent atmospheric inputs, produced optimized profiles in excellent accord with measured sediment 137Cs, Pu, lead, and coral lead distributions. Derived residence times of these particle tracers (16 ± 1, 15.7 ± 0.7, 19 ± 3, and 16 ± 2 years, respectively) are comparable despite differences in sampling locations, in accumulating media, and in element loading histories and geochemical properties. For a 16 year weighted mean residence time, STA generates the observed 6 year lead peak lag. Evidently, significant levels of nondegradable, particle-associated contaminants can persist in Florida Bay for many decades following elimination of external inputs. Present results, in combination with STA model analysis of previously reported radionuclide profiles, suggest that decade-scale time averaging may occur widely in recent coastal marine sedimentary environments. Copyright 2000 by the American Geophysical Union.

  4. Modal identification based on Gaussian continuous time autoregressive moving average model

    NASA Astrophysics Data System (ADS)

    Xiuli, Du; Fengquan, Wang

    2010-09-01

    A new time-domain modal identification method of the linear time-invariant system driven by the non-stationary Gaussian random force is presented in this paper. The proposed technique is based on the multivariate continuous time autoregressive moving average (CARMA) model. This method can identify physical parameters of a system from the response-only data. To do this, we first transform the structural dynamic equation into the CARMA model, and subsequently rewrite it in the state-space form. Second, we present the exact maximum likelihood estimators of parameters of the continuous time autoregressive (CAR) model by virtue of the Girsanov theorem, under the assumption that the uniformly modulated function is approximately equal to a constant matrix over a very short period of time. Then, based on the relation between the CAR model and the CARMA model, we present the exact maximum likelihood estimators of parameters of the CARMA model. Finally, the modal parameters are identified by the eigenvalue analysis method. Numerical results show that the method we introduced here not only has high precision and robustness, but also has very high computing efficiency. Therefore, it is suitable for real-time modal identification.
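    The final eigenvalue-analysis step of this record (poles of a fitted autoregressive model yield the modal parameters) can be illustrated with a much simpler discrete-time AR(2) least-squares fit on a noise-free single-mode response. The CARMA/maximum-likelihood machinery of the paper is not reproduced; the sampling rate and modal parameters below are assumptions for the demonstration.

    ```python
    import numpy as np

    # simulate the free response of a damped SDOF mode (fn = 2 Hz, zeta = 2%)
    fs, n = 100.0, 2000
    fn, zeta = 2.0, 0.02
    wn = 2 * np.pi * fn
    wd = wn * np.sqrt(1 - zeta**2)
    t = np.arange(n) / fs
    y = np.exp(-zeta * wn * t) * np.cos(wd * t)

    # fit AR(2):  y[k] = a1*y[k-1] + a2*y[k-2]   (exact for a sampled mode)
    A = np.column_stack([y[1:-1], y[:-2]])
    a1, a2 = np.linalg.lstsq(A, y[2:], rcond=None)[0]

    # discrete pole -> continuous-time pole -> modal parameters
    lam = np.roots([1.0, -a1, -a2])[0]     # one of the conjugate pair
    s = np.log(lam) * fs
    fn_est = np.abs(s) / (2 * np.pi)
    zeta_est = -s.real / np.abs(s)
    print(fn_est, zeta_est)
    ```

    Because a sampled exponentially damped cosine satisfies an exact AR(2) recurrence, the fit recovers the assumed frequency and damping ratio; with noisy, non-stationary excitation, the CARMA approach of the paper is needed instead.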

  5. Time-averaged molluscan death assemblages: Palimpsests of richness, snapshots of abundance

    NASA Astrophysics Data System (ADS)

    Kidwell, Susan M.

    2002-09-01

    Field tests that compare living communities to associated dead remains are the primary means of estimating the reliability of biological information in the fossil record; such tests also provide insights into the dynamics of skeletal accumulation. Contrary to expectations, molluscan death assemblages capture a strong signal of living species' rank-order abundances. This finding, combined with independent evidence for exponential postmortem destruction of dead cohorts, argues that, although the species richness of a death assemblage may be a time-averaged palimpsest of the habitat (molluscan death assemblages contain, on average, ˜25% more species than any single census of the local live community, after sample-size standardization), species' relative-abundance data from the same assemblage probably constitute a much higher acuity record dominated by the most recent dead cohorts (e.g., from the past few hundred years or so, rather than the several thousand years recorded by the total assemblage and usually taken as the acuity of species-richness information). The pervasive excess species richness of molluscan death assemblages requires further analysis and modeling to discriminate among possible sources. However, time averaging alone cannot be responsible unless rare species (species with low rates of dead-shell production) are collectively more durable (have longer taphonomic half-lives) than abundant species. Species richness and abundance data thus appear to present fundamentally different taphonomic qualities for paleobiological analysis. Relative- abundance information is more snapshot-like and thus taphonomically more straightforward than expected, especially compared to the complex origins of dead-species richness.

  6. Autocorrelation-based time synchronous averaging for condition monitoring of planetary gearboxes in wind turbines

    NASA Astrophysics Data System (ADS)

    Ha, Jong M.; Youn, Byeng D.; Oh, Hyunseok; Han, Bongtae; Jung, Yoongho; Park, Jungho

    2016-03-01

    We propose autocorrelation-based time synchronous averaging (ATSA) to cope with the challenges associated with the current practice of time synchronous averaging (TSA) for planet gears in planetary gearboxes of wind turbine (WT). An autocorrelation function that represents physical interactions between the ring, sun, and planet gears in the gearbox is utilized to define the optimal shape and range of the window function for TSA using actual kinetic responses. The proposed ATSA offers two distinctive features: (1) data-efficient TSA processing and (2) prevention of signal distortion during the TSA process. It is thus expected that an order analysis with the ATSA signals significantly improves the efficiency and accuracy in fault diagnostics of planet gears in planetary gearboxes. Two case studies are presented to demonstrate the effectiveness of the proposed method: an analytical signal from a simulation and a signal measured from a 2 kW WT testbed. It can be concluded from the results that the proposed method outperforms conventional TSA methods in condition monitoring of the planetary gearbox when the amount of available stationary data is limited.
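    The core of any TSA variant is averaging over complete rotations, so that shaft-synchronous components survive while asynchronous noise averages toward zero. A minimal sketch (without the autocorrelation-derived window of the proposed ATSA, and assuming ideal angular resampling at a fixed number of samples per revolution):

    ```python
    import numpy as np

    def tsa(signal, samples_per_rev):
        """Average the signal over complete revolutions; non-synchronous
        components cancel roughly as 1/sqrt(number of revolutions)."""
        n_rev = len(signal) // samples_per_rev
        segs = signal[: n_rev * samples_per_rev].reshape(n_rev, samples_per_rev)
        return segs.mean(axis=0)

    rng = np.random.default_rng(1)
    spr, n_rev = 200, 400
    t = np.arange(spr * n_rev)
    mesh = np.sin(2 * np.pi * 8 * t / spr)     # gear-mesh tone, 8 cycles/rev
    noise = rng.normal(0, 1.0, len(t))         # asynchronous noise
    avg = tsa(mesh + noise, spr)

    print(np.std(avg - mesh[:spr]))   # residual noise, roughly 1/sqrt(400)
    ```

    The ATSA contribution of the paper lies in choosing the window shape and range from the gearbox autocorrelation so that limited stationary data are used efficiently; the plain rectangular averaging above is the baseline it improves on.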

  7. Average weighted trapping time of the node- and edge- weighted fractal networks

    NASA Astrophysics Data System (ADS)

    Dai, Meifeng; Ye, Dandan; Hou, Jie; Xi, Lifeng; Su, Weiyi

    2016-10-01

    In this paper, we study the trapping problem in node- and edge-weighted fractal networks with the underlying geometries, focusing on the particular case of a perfect trap located at the central node. We derive exact analytic formulas for the average weighted trapping time (AWTT), the average of the node-to-trap mean weighted first-passage time over the whole network, in terms of the network size N_g, the number of copies s, the node-weight factor w and the edge-weight factor r. The result shows that in a large network the AWTT grows as a power-law function of the network size N_g with exponent θ(s, r, w) = log_s(srw^2) when srw^2 ≠ 1. When srw^2 = 1, the AWTT instead grows with the network size N_g as log N_g. This means that the efficiency of the trapping process depends on three main parameters: the number of copies s > 1, the node-weight factor 0 < w ≤ 1, and the edge-weight factor 0 < r ≤ 1. The smaller the value of srw^2, the more efficient the trapping process.

  8. Estimation of the path-averaged atmospheric refractive index structure constant from time-lapse imagery

    NASA Astrophysics Data System (ADS)

    Basu, Santasri; McCrae, Jack E.; Fiorino, Steven T.

    2015-05-01

    A time-lapse imaging experiment was conducted to monitor the effects of the atmosphere over a period of time. A tripod-mounted digital camera captured images of a distant building every minute. Correlation techniques were used to calculate the position shifts between the images. Two factors cause shifts between the images: atmospheric turbulence, which makes the images move randomly and quickly, and changes in the average refractive-index gradient along the path, which make the images move vertically, more slowly, and perhaps in noticeable correlation with solar heating and other weather conditions. A technique for estimating the path-averaged C_n^2 from the random component of the image motion is presented here. The technique uses a derived set of weighting functions that depend on the size of the imaging aperture and the patch size in the image whose motion is being tracked. Since this technique is phase-based, it can be applied to strong-turbulence paths where traditional irradiance-based techniques suffer from saturation effects.
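    The correlation step (estimating frame-to-frame image motion, whose random component feeds the turbulence estimate) can be sketched with FFT cross-correlation. The weighting functions that convert motion variance to path-averaged C_n^2 are specific to the paper and are not reproduced; the synthetic random image and integer-pixel shift below are assumptions for illustration.

    ```python
    import numpy as np

    def shift_estimate(ref, img):
        """Integer-pixel shift of img relative to ref via FFT cross-correlation."""
        corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # indices past the midpoint wrap around to negative shifts
        return [int(p) if p <= s // 2 else int(p) - s
                for p, s in zip(peak, corr.shape)]

    rng = np.random.default_rng(2)
    ref = rng.normal(size=(64, 64))
    moved = np.roll(ref, (3, -5), axis=(0, 1))   # known (row, col) shift
    print(shift_estimate(ref, moved))
    ```

    In practice sub-pixel interpolation around the correlation peak is used, and the variance of the recovered shift time series (after removing the slow, gradient-driven drift) is what the weighting functions convert to C_n^2.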

  9. Asynchronous H∞ filtering for linear switched systems with average dwell time

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Zhang, Hongbin; Wang, Gang; Dang, Chuangyin

    2016-09-01

    This paper is concerned with the H∞ filtering problem for a class of continuous-time linear switched systems with the asynchronous behaviours, where 'asynchronous' means that the switching of the filters to be designed has a lag to the switching of the system modes. By using the Lyapunov-like functions and the average dwell time technique, a sufficient condition is obtained to guarantee the asymptotic stability with a weighted H∞ performance index for the filtering error system. Moreover, the results are formulated in the form of linear matrix inequalities that are numerical feasible. As a result, the filter design problem is solved. Finally, an illustrative numerical example is presented to show the effectiveness of the results.

  10. The State Role in School Turnaround: Emerging Best Practices

    ERIC Educational Resources Information Center

    Rhim, Lauren Morando, Ed.; Redding, Sam, Ed.

    2014-01-01

    This publication explores the role of the state education agency (SEA) in school turnaround efforts. An emphasis is placed on practical application of research and best practices related to the SEA's critical leadership role in driving and supporting successful school turnaround efforts. The publication is organized around the four goals of…

  11. Using Competencies to Improve School Turnaround Principal Success

    ERIC Educational Resources Information Center

    Steiner, Lucy; Hassel, Emily Ayscue

    2011-01-01

    This paper aims first to shed light on one element of leadership: the characteristics--or "competencies"--of turnaround leaders who succeed in driving rapid, dramatic change. Second, it recounts the elements of support that districts must provide these leaders to enable and sustain a portfolio of successful school turnarounds. Fortunately, decades…

  12. "Turnaround" as Shock Therapy: Race, Neoliberalism, and School Reform

    ERIC Educational Resources Information Center

    Johnson, Amanda Walker

    2013-01-01

    "Turnaround" strategies of educational reform promise that school closure, reconstitution, privatizing, and reopening them will bring miraculous results. Questioning the implications, this article situates "turnaround" strategies locally, following the closure of a predominantly minority high school in 2008, in Austin, Texas. The neoliberal…

  13. The Effect of Locus of Control on School Turnaround

    ERIC Educational Resources Information Center

    Walston, Bradford

    2012-01-01

    This research focused on the school turnaround process in six turnaround elementary schools located in urban and rural areas of the state of North Carolina. The purpose of the study was to learn about the challenges facing the six schools, the process of improving student achievement, and, more specifically, the degree to which adaptive leadership…

  14. Turnaround as Reform: Opportunity for Meaningful Change or Neoliberal Posturing?

    ERIC Educational Resources Information Center

    Mette, Ian M.

    2013-01-01

    This study explores the neoliberal agenda of turnaround school reform efforts in America by examining the application and transformation of a Midwest State Turnaround Schools Project for the public school system. Perceptions of administrators and state-level policy actors are considered. Data were collected from 13 participants during the…

  15. Coherent averaging of the passive fathometer response using short correlation time.

    PubMed

    Traer, James; Gerstoft, Peter

    2011-12-01

    The passive fathometer algorithm was applied to data from two drifting array experiments in the Mediterranean, Boundary 2003 and 2004. The passive fathometer response was computed with correlation times from 0.34 to 90 s; for correlation times less than a few seconds, the observed signal-to-noise ratio (SNR) agrees with a 1D model of the SNR of the passive fathometer response in an ideal waveguide. In the 2004 experiment, the fathometer response showed that the array depth varied periodically with an amplitude of 1 m and a period of 7 s, consistent with wave-driven motion of the array. This introduced a destructive interference, which prevented the SNR from growing with increasing correlation time. A peak-tracking algorithm applied to the fathometer response of the experimental data was used to remove this motion, allowing the coherent passive fathometer response to be averaged over several minutes without destructive interference. Multirate adaptive beamforming, using a 90 s correlation time to form adaptive steer vectors that were applied to 0.34 s data snapshots, increases the SNR of the passive fathometer response. PMID:22225020

  16. Average time spent by Lévy flights and walks on an interval with absorbing boundaries.

    PubMed

    Buldyrev, S V; Havlin, S; Kazakov, A Y; da Luz, M G; Raposo, E P; Stanley, H E; Viswanathan, G M

    2001-10-01

    We consider a Lévy flyer of order α that starts from a point x(0) on an interval [0, L] with absorbing boundaries. We find a closed-form expression for the average number of flights the flyer takes and the total length of the flights it travels before it is absorbed. These two quantities are equivalent to the mean first passage times for Lévy flights and Lévy walks, respectively. Using fractional differential equations with a Riesz kernel, we find exact analytical expressions for both quantities in the continuous limit. We show that numerical solutions for the discrete Lévy processes converge to the continuous approximations in all cases except the case α → 2 and the cases x(0) → 0 and x(0) → L. For α > 2, when the second moment of the flight length distribution exists, our result is replaced by known results of classical diffusion. We show that if x(0) is placed in the vicinity of the absorbing boundaries, the average total length has a minimum at α = 1, corresponding to the Cauchy distribution. We discuss the relevance of this result to the problem of foraging, which has received recent attention in the statistical physics literature.
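    For the classical-diffusion limit that the authors compare against, the discrete mean first-passage time can be checked directly: a symmetric unit-step walk on {0, ..., L} with absorbing ends satisfies t(x) = 1 + [t(x-1) + t(x+1)]/2, whose closed form is t(x) = x(L - x). A minimal sketch (the interval length is an assumption for the check):

    ```python
    import numpy as np

    # Mean first-passage time t(x) of a symmetric unit-step walk on {0,...,L}
    # with absorbing ends:  t(0) = t(L) = 0,  t(x) = 1 + [t(x-1) + t(x+1)] / 2
    L = 50
    m = L - 1                          # interior sites 1..L-1
    A = np.eye(m)
    for i in range(m - 1):
        A[i, i + 1] = A[i + 1, i] = -0.5
    t = np.linalg.solve(A, np.ones(m))

    x = np.arange(1, L)
    print(np.allclose(t, x * (L - x)))   # closed form of the diffusive case
    ```

    The Lévy case of the paper replaces the nearest-neighbour step distribution with a heavy-tailed one, so jumps can overshoot the boundary and the simple tridiagonal system above no longer applies.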

  17. Windows of National Opportunity: An Excerpt from the Center on School Turnaround's Report on State Supports for Turnaround

    ERIC Educational Resources Information Center

    Scott, Caitlin; Lasley, Nora

    2014-01-01

    In 2014, state and national leaders found many aspects of turning around America's low-performing schools even more daunting than in the previous year. These views were revealed in the Center on School Turnaround's (CST's) 2014 February/March survey of school turnaround leaders in State Education Agencies (SEA) and directors of the nation's…

  18. Effect of spatial averaging on multifractal properties of meteorological time series

    NASA Astrophysics Data System (ADS)

    Hoffmann, Holger; Baranowski, Piotr; Krzyszczak, Jaromir; Zubik, Monika

    2016-04-01

    Introduction: Process-based models for large-scale simulations require input of agro-meteorological quantities that are often in the form of time series of coarse spatial resolution. Therefore, knowledge of their scaling properties is fundamental for transferring locally measured fluctuations to larger scales and vice versa. However, the scaling analysis of these quantities is complicated by the presence of localized trends and non-stationarities. Here we assess how spatially aggregating meteorological data to coarser resolutions affects the data's temporal scaling properties. While it is known that spatial aggregation may affect spatial data properties (Hoffmann et al., 2015), it is unknown how it affects temporal data properties. The objective of this study was therefore to characterize the aggregation effect (AE) with regard to both temporal and spatial input data properties, considering the scaling properties (i.e., statistical self-similarity) of the chosen agro-meteorological time series through multifractal detrended fluctuation analysis (MFDFA). Materials and Methods: Time series covering the years 1982-2011 were spatially averaged from 1 km to 10, 25, 50 and 100 km resolution to assess the impact of spatial aggregation. Daily minimum, mean and maximum air temperature (2 m), precipitation, global radiation, wind speed and relative humidity (Zhao et al., 2015) were used. To reveal the multifractal structure of the time series, we used the procedure described in Baranowski et al. (2015). The diversity of the studied multifractals was evaluated by the parameters of the time-series spectra. To analyse differences in multifractal properties relative to the 1 km resolution grids, data at coarser resolutions were disaggregated to 1 km.
    Results and Conclusions: Analysing the effect of spatial averaging on multifractal properties, we observed that the spatial patterns of the multifractal spectrum (MS) of all meteorological variables differed from the 1 km grids and the MS parameters were biased

  19. Shear banding in a lyotropic lamellar phase. I. Time-averaged velocity profiles

    NASA Astrophysics Data System (ADS)

    Salmon, Jean-Baptiste; Manneville, Sébastien; Colin, Annie

    2003-11-01

    Using velocity profile measurements based on dynamic light scattering, coupled to structural and rheological measurements in a Couette cell, we present evidence for a shear banding scenario in the shear flow of the onion texture of a lyotropic lamellar phase. Time-averaged measurements clearly show the presence of structural shear banding in the vicinity of a shear-induced transition, associated with the nucleation and growth of a highly sheared band in the flow. Our experiments also reveal the presence of slip at the walls of the Couette cell. Using a simple mechanical approach, we demonstrate that our data confirm the classical assumption of the shear banding picture, in which the interface between bands lies at a given stress σ*. We also outline the presence of large temporal fluctuations of the flow field, which are the subject of the second part of this paper [Salmon et al., Phys. Rev. E 68, 051504 (2003)].

  20. Non-linear adaptive sliding mode switching control with average dwell-time

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Zhang, Maoqing; Fei, Shumin

    2013-03-01

    In this article, an adaptive integral sliding mode control scheme is addressed for switched non-linear systems in the presence of model uncertainties and external disturbances. The control law includes two parts: a sliding mode controller for the reduced model of the plant and a compensation controller to deal with the non-linear systems with parameter uncertainties. The adaptive update laws are derived from the switched multiple Lyapunov function method, and an admissible switching signal is given by the average dwell-time technique. The simplicity of the proposed control scheme facilitates its implementation, and the overall control scheme guarantees global asymptotic stability in the Lyapunov sense such that the sliding surface of the control system is well reached. Simulation results are presented to demonstrate the effectiveness and feasibility of the proposed approach.

  1. ARECIBO MULTI-FREQUENCY TIME-ALIGNED PULSAR AVERAGE-PROFILE AND POLARIZATION DATABASE

    SciTech Connect

    Hankins, Timothy H.; Rankin, Joanna M. E-mail: Joanna.Rankin@uvm.edu

    2010-01-15

    We present Arecibo time-aligned, total intensity profiles for 46 pulsars over an unusually wide range of radio frequencies and multi-frequency, polarization-angle density diagrams, and/or polarization profiles for 57 pulsars at some or all of the frequencies 50, 111/130, 430, and 1400 MHz. The frequency-dependent dispersion delay has been removed in order to align the profiles for study of their spectral evolution, and wherever possible the profiles of each pulsar are displayed on the same longitude scale. Most of the pulsars within Arecibo's declination range that are sufficiently bright for such spectral or single pulse analysis are included in this survey. The calibrated single pulse sequences and average profiles are available by web download for further study.

  2. Applicability of time-averaged holography for micro-electro-mechanical system performing non-linear oscillations.

    PubMed

    Palevicius, Paulius; Ragulskis, Minvydas; Palevicius, Arvydas; Ostasevicius, Vytautas

    2014-01-21

    Optical investigation of movable microsystem components using time-averaged holography is presented in this paper. It is shown that even harmonic excitation of a non-linear microsystem may result in unpredictable chaotic motion. Analytical relationships between the parameters of the chaotic oscillations and the formation of time-averaged fringes provide a deeper insight into the computational and experimental interpretation of time-averaged MEMS holograms.

  3. Applicability of Time-Averaged Holography for Micro-Electro-Mechanical System Performing Non-Linear Oscillations

    PubMed Central

    Palevicius, Paulius; Ragulskis, Minvydas; Palevicius, Arvydas; Ostasevicius, Vytautas

    2014-01-01

    Optical investigation of movable microsystem components using time-averaged holography is presented in this paper. It is shown that even harmonic excitation of a non-linear microsystem may result in unpredictable chaotic motion. Analytical relationships between the parameters of the chaotic oscillations and the formation of time-averaged fringes provide a deeper insight into the computational and experimental interpretation of time-averaged MEMS holograms. PMID:24451467

  4. Time-dependent density functional theory with twist-averaged boundary conditions

    NASA Astrophysics Data System (ADS)

    Schuetrumpf, B.; Nazarewicz, W.; Reinhard, P.-G.

    2016-05-01

    Background: Time-dependent density functional theory is widely used to describe excitations of many-fermion systems. In many of its applications, a three-dimensional (3D) coordinate-space representation is used, and infinite-domain calculations are limited to a finite volume represented by a spatial box. For finite quantum systems (atoms, molecules, nuclei, hadrons), the commonly used periodic or reflecting boundary conditions introduce spurious quantization of the continuum states and artificial reflections from the boundary; hence, an incorrect treatment of evaporated particles. Purpose: The finite-volume artifacts for finite systems can be practically cured by invoking an absorbing potential in a boundary region sufficiently far from the described system. However, such absorption cannot be applied in calculations of infinite matter (crystal electrons, quantum fluids, neutron star crust), which suffer from unphysical effects stemming from the finite computational box used. Here, twist-averaged boundary conditions (TABC) have been used successfully to diminish the finite-volume effects. In this work, we extend TABC to time-dependent modes. Method: We use the 3D time-dependent density functional framework with the Skyrme energy density functional. The practical calculations are carried out for small- and large-amplitude electric dipole and quadrupole oscillations of 16O. We apply and compare three kinds of boundary conditions: periodic, absorbing, and twist-averaged. Results: Calculations employing absorbing boundary conditions (ABC) and TABC are superior to those based on periodic boundary conditions. For low-energy excitations, TABC and ABC variants yield very similar results. With only four twist phases per spatial direction in TABC, one obtains an excellent reduction of spurious fluctuations. In the nonlinear regime, one has to deal with evaporated particles. In TABC, the floating nucleon gas remains in the box; the amount of nucleons in the gas is found to be

  5. Fault detection and isolation for discrete-time switched linear systems based on average dwell-time method

    NASA Astrophysics Data System (ADS)

    Li, Jian; Yang, Guang-Hong

    2013-12-01

    This article is concerned with the problem of fault detection and isolation (FDI) for discrete-time switched linear systems based on the average dwell-time method. The proposed FDI framework consists of a bank of FDI filters, which are divided into N groups for N subsystems. The FDI filters belonging to one group correspond to the faults of a subsystem, and generate a residual signal to guarantee the fault sensitivity performance for that subsystem, the fault attenuation performance for the other subsystems and the disturbance attenuation performance for all subsystems. Rather than employing weighting matrices to restrict the frequency ranges of faults for each subsystem, the finite-frequency H− performance for switched systems is first defined. Sufficient conditions are established by linear matrix inequalities (LMIs), and the filter gains are characterised in terms of the solution of a convex optimisation problem. Two examples are used to demonstrate the effectiveness of the proposed design method.

  6. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    A formalism is presented to quantify the sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes the suboptimal estimate a viable practical alternative to the composite average method generally employed at present.
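The composite average described above (the simple mean of all observations falling within the averaging window) can be sketched as follows; the observation times, values and window length are illustrative assumptions:

```python
# Minimal sketch of the "composite average" from an irregularly spaced
# time series: the simple mean of all observations inside the window.

def composite_average(times, values, t0, t1):
    """Mean of observations with t0 <= t < t1; None if the window is empty."""
    inside = [v for t, v in zip(times, values) if t0 <= t < t1]
    return sum(inside) / len(inside) if inside else None

times = [0.5, 1.2, 3.7, 4.1, 9.8]     # irregular observation times (days)
values = [2.0, 4.0, 3.0, 5.0, 1.0]    # e.g. chlorophyll concentrations
avg = composite_average(times, values, 0.0, 5.0)   # 5-day composite: 3.5
```

The optimal and suboptimal estimates discussed in the abstract would instead weight each observation using the signal and measurement-error covariances.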

  7. Distributional behaviors of time-averaged observables in the Langevin equation with fluctuating diffusivity: Normal diffusion but anomalous fluctuations

    NASA Astrophysics Data System (ADS)

    Akimoto, Takuma; Yamamoto, Eiji

    2016-06-01

    We consider the Langevin equation with dichotomously fluctuating diffusivity, where the diffusion coefficient changes dichotomously over time, in order to study fluctuations of time-averaged observables in temporally heterogeneous diffusion processes. We find that the time-averaged mean-square displacement (TMSD) can be represented by the occupation time of a state in the asymptotic limit of the measurement time and hence occupation time statistics is a powerful tool for calculating the TMSD in the model. We show that the TMSD increases linearly with time (normal diffusion) but the time-averaged diffusion coefficients are intrinsically random when the mean sojourn time for one of the states diverges, i.e., intrinsic nonequilibrium processes. Thus, we find that temporally heterogeneous environments provide anomalous fluctuations of time-averaged diffusivity, which have relevance to large fluctuations of the diffusion coefficients obtained by single-particle-tracking trajectories in experiments.
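The time-averaged mean-square displacement (TMSD) studied above can be sketched for a single recorded trajectory as follows (the trajectory values and lag are illustrative, not the paper's model output):

```python
# Minimal sketch of the time-averaged mean-square displacement (TMSD):
# average the squared displacement over all start times for a fixed lag.

def tmsd(x, lag):
    """Time-averaged mean-square displacement of trajectory x at a given lag."""
    n = len(x) - lag
    return sum((x[i + lag] - x[i]) ** 2 for i in range(n)) / n

traj = [0.0, 1.0, 0.5, 1.5, 2.0, 1.0]   # illustrative 1D positions
msd1 = tmsd(traj, 1)
```

Repeating this over many trajectories of a heterogeneous-diffusivity process would reveal the trajectory-to-trajectory scatter of diffusion coefficients that the paper analyzes.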

  8. Modified box dimension and average weighted receiving time on the weighted fractal networks

    PubMed Central

    Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xi, Lifeng; Su, Weiyi

    2015-01-01

    In this paper, a family of weighted fractal networks, in which the edge weights are assigned different values with a certain scale, is studied. For these weighted fractal networks, the definition of a modified box dimension is introduced, and a rigorous proof of its existence is given. Then, the modified box dimension, which depends on the weight factor and the number of copies, is deduced. It is assumed that the walker, at each step, starting from its current node, moves uniformly to any of its nearest neighbors; the weighted time for two adjacent nodes is the weight connecting the two nodes. The average weighted receiving time (AWRT) is then defined correspondingly. The obtained result displays that in the large network, when the weight factor is larger than the number of copies, the AWRT grows as a power-law function of the network order, with the exponent being the reciprocal of the modified box dimension. This result shows that the efficiency of the trapping process depends on the modified box dimension: the larger the value of the modified box dimension, the more efficient the trapping process is. PMID:26666355

  9. Modified box dimension and average weighted receiving time on the weighted fractal networks

    NASA Astrophysics Data System (ADS)

    Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xi, Lifeng; Su, Weiyi

    2015-12-01

    In this paper, a family of weighted fractal networks, in which the edge weights are assigned different values with a certain scale, is studied. For these weighted fractal networks, the definition of a modified box dimension is introduced, and a rigorous proof of its existence is given. Then, the modified box dimension, which depends on the weight factor and the number of copies, is deduced. It is assumed that the walker, at each step, starting from its current node, moves uniformly to any of its nearest neighbors; the weighted time for two adjacent nodes is the weight connecting the two nodes. The average weighted receiving time (AWRT) is then defined correspondingly. The obtained result displays that in the large network, when the weight factor is larger than the number of copies, the AWRT grows as a power-law function of the network order, with the exponent being the reciprocal of the modified box dimension. This result shows that the efficiency of the trapping process depends on the modified box dimension: the larger the value of the modified box dimension, the more efficient the trapping process is.

  10. Scaling of Average Weighted Receiving Time on Double-Weighted Koch Networks

    NASA Astrophysics Data System (ADS)

    Dai, Meifeng; Ye, Dandan; Hou, Jie; Li, Xingyi

    2015-03-01

    In this paper, we introduce a model of double-weighted Koch networks based on actual road networks, depending on two weight factors w, r ∈ (0, 1]. The double weights represent the capacity-flowing weight and the cost-traveling weight, respectively. Denote by wFij the capacity-flowing weight connecting nodes i and j, and by wCij the cost-traveling weight connecting nodes i and j; wFij is related to the weight factor w, and wCij to the weight factor r. This paper assumes that the walker, at each step, starting from its current node, moves to any of its neighbors with probability proportional to the capacity-flowing weight of the edge linking them. The weighted time for two adjacent nodes is the cost-traveling weight connecting the two nodes. We define the average weighted receiving time (AWRT) on the double-weighted Koch networks. The obtained result displays that in the large network, the AWRT grows as a power-law function of the network order with the exponent θ(w,r) = ½ log2(1 + 3wr). We show that the AWRT exhibits a sublinear or linear dependence on network order. Thus, the double-weighted Koch networks are more efficient than classic Koch networks in receiving information.
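The scaling exponent quoted above, θ(w,r) = ½ log2(1 + 3wr), is a closed-form expression and can be evaluated directly (a minimal sketch; the sample weight factors are arbitrary illustrations):

```python
import math

def awrt_exponent(w, r):
    """theta(w, r) = (1/2) * log2(1 + 3*w*r), for weight factors w, r in (0, 1]."""
    return 0.5 * math.log2(1 + 3 * w * r)

# At w = r = 1 the exponent is exactly 1: the AWRT grows linearly with
# network order; for smaller w*r the growth is sublinear (0 < theta < 1).
theta_linear = awrt_exponent(1.0, 1.0)     # 1.0
theta_sub = awrt_exponent(0.5, 0.5)        # sublinear regime
```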

  11. Computation of instantaneous and time-averaged active acoustic intensity field around rotating source

    NASA Astrophysics Data System (ADS)

    Mao, Yijun; Xu, Chen; Qi, Datong

    2015-02-01

    A vector aeroacoustics method is developed to analyze the acoustic energy flow path from the rotating source. In this method, the instantaneous and time-averaged active acoustic intensity vectors are evaluated from the time-domain and frequency-domain acoustic pressure and acoustic velocity formulations, respectively. With the above method, the acoustic intensity vectors and the acoustic energy streamlines are visualized to investigate the propagation feature of the noise radiated from the monopole and dipole point sources and the rotor in subsonic rotation. The result reveals that a portion of the acoustic energy spirals many circles before moving towards the far field, and another portion of the acoustic energy firstly flows inward along the radial direction and then propagates along the axial direction. Further, an acoustic black hole exists in the plane of source rotation, from which the acoustic energy cannot escape once the acoustic energy flows into it. Moreover, by visualizing the acoustic intensity field around the rotating sources, the acoustic-absorption performance of the acoustic liner built in the casing and centerbody is discussed.

  12. Respiratory sinus arrhythmia: time domain characterization using autoregressive moving average analysis

    NASA Technical Reports Server (NTRS)

    Triedman, J. K.; Perrott, M. H.; Cohen, R. J.; Saul, J. P.

    1995-01-01

    Fourier-based techniques are mathematically noncausal and are therefore limited in their application to feedback-containing systems, such as the cardiovascular system. In this study, a mathematically causal time domain technique, autoregressive moving average (ARMA) analysis, was used to parameterize the relations of respiration and arterial blood pressure to heart rate in eight humans before and during total cardiac autonomic blockade. Impulse-response curves thus generated showed the relation of respiration to heart rate to be characterized by an immediate increase in heart rate of 9.1 +/- 1.8 beats.min-1.l-1, followed by a transient mild decrease in heart rate to -1.2 +/- 0.5 beats.min-1.l-1 below baseline. The relation of blood pressure to heart rate was characterized by a slower decrease in heart rate of -0.5 +/- 0.1 beats.min-1.mmHg-1, followed by a gradual return to baseline. Both of these relations nearly disappeared after autonomic blockade, indicating autonomic mediation. Maximum values obtained from the respiration to heart rate impulse responses were also well correlated with frequency domain measures of high-frequency "vagal" heart rate control (r = 0.88). ARMA analysis may be useful as a time domain representation of autonomic heart rate control for cardiovascular modeling.
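The impulse-response characterization described above can be sketched with a generic ARMA difference equation; the coefficients below are illustrative placeholders, not the fitted physiological values from the study:

```python
# Minimal sketch of an ARMA impulse response: the output of the difference
# equation y[t] = sum_i ar[i]*y[t-1-i] + sum_j ma[j]*u[t-j] for a unit
# impulse u[0] = 1. Coefficients here are illustrative assumptions.

def arma_impulse_response(ar, ma, n):
    """First n samples of the impulse response of an ARMA model."""
    h = []
    for t in range(n):
        y = ma[t] if t < len(ma) else 0.0
        for i, a in enumerate(ar):
            if t - 1 - i >= 0:
                y += a * h[t - 1 - i]
        h.append(y)
    return h

# Example: AR(1) with pole 0.5 and unit direct feedthrough
h = arma_impulse_response([0.5], [1.0], 5)   # geometric decay
```

In the study, curves like this (respiration-to-heart-rate, pressure-to-heart-rate) summarize the causal time-domain couplings that Fourier methods cannot separate.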

  13. Evaluation of time-average dispersion models for estimating pheromone concentration in a deciduous forest.

    PubMed

    Elkinton, J S; Cardé, R T; Mason, C J

    1984-07-01

    The Sutton and more recent Gaussian plume models of atmospheric dispersion were used to estimate downwind concentrations of pheromone in a deciduous forest. Wind measurements from two bivane anemometers were recorded every 12 sec, and the pheromone was emitted from a point source 1.6 m above ground level at known rates. The wing-fanning response of individually caged male gypsy moths (Lymantria dispar) at 15 sites situated 20 to 80 m downwind was used to monitor when pheromone levels were above threshold over a 15-min interval. Predicted concentrations from these Gaussian-type models at locations where wing fanning occurred were often several orders of magnitude below the known behavioral thresholds determined from wind tunnel tests. Probit analyses of dose-response relationships with these models showed no relationship between predicted dose and actual response. The disparity between the predictions of concentration from these models and the actual response patterns of the male gypsy moth in the field was not unexpected. These time-average models predict concentrations for a fixed position over 3-min or longer intervals, based upon the dispersion coefficients. Thus the models estimate pheromone concentrations for time intervals appreciably longer than required for behavioral response.
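A standard ground-reflecting Gaussian plume concentration formula of the kind evaluated above can be sketched as follows (emission rate, wind speed and dispersion coefficients are assumed values; the paper's Sutton-model parameterization of the dispersion coefficients is not reproduced):

```python
import math

def gaussian_plume(q, u, y, z, h_src, sig_y, sig_z):
    """Time-averaged Gaussian plume concentration at crosswind offset y and
    height z, for a point source of emission rate q at height h_src in mean
    wind speed u, with ground reflection. sig_y, sig_z are the lateral and
    vertical dispersion coefficients at the downwind distance of interest
    (assumed given; in practice parameterized from atmospheric stability)."""
    lateral = math.exp(-y ** 2 / (2 * sig_y ** 2))
    vertical = (math.exp(-(z - h_src) ** 2 / (2 * sig_z ** 2))
                + math.exp(-(z + h_src) ** 2 / (2 * sig_z ** 2)))
    return q / (2 * math.pi * u * sig_y * sig_z) * lateral * vertical

# Centerline concentration at source height (1.6 m), all values assumed
c = gaussian_plume(q=1.0, u=1.0, y=0.0, z=1.6, h_src=1.6,
                   sig_y=2.0, sig_z=1.5)
```

The paper's point is that such averages, taken over minutes, miss the brief above-threshold filaments that actually trigger the moths' response.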

  14. New device for time-averaged measurement of volatile organic compounds (VOCs).

    PubMed

    Santiago Sánchez, Noemí; Tejada Alarcón, Sergio; Tortajada Santonja, Rafael; Llorca-Pórcel, Julio

    2014-07-01

    Contamination by volatile organic compounds (VOCs) in the environment is an increasing concern since these compounds are harmful to ecosystems and even to human health. Indeed, many of them are considered toxic and/or carcinogenic. The main sources of pollution are very diffuse focal points such as industrial discharges, urban water and accidental spills, as these compounds may be present in many products and processes (i.e., paints, fuels, petroleum products, raw materials, solvents, etc.), making their control difficult. The presence of these compounds in groundwater, influenced by discharges, leachate or effluents of WWTPs, is especially problematic. In recent years, legislation has become increasingly restrictive with the emissions of these compounds. From an environmental point of view, the European Water Framework Directive (2000/60/EC) sets out some VOCs as priority substances. This binding directive sets guidelines to control compounds such as benzene, chloroform, and carbon tetrachloride at a very low level of concentration and with a very high frequency of analysis. The presence of VOCs in the various effluents is often highly variable and discontinuous, since it depends on the variability of the sources of contamination. Therefore, in order to have complete information on the presence of these contaminants and to take preventive measures effectively, continuous monitoring is important, requiring the development of new devices that obtain average concentrations over time. As of today, due to technical limitations, there are no devices on the market that allow continuous sampling of these compounds in an efficient way, facilitate detection limits sufficient to meet the legal requirements, and are capable of detecting very sporadic discharges of short duration. LABAQUA has developed a device which consists of a small peristaltic pump controlled by an electronic board that governs its operation by pre-programming. A constant flow passes


  16. Detecting macroeconomic phases in the Dow Jones Industrial Average time series

    NASA Astrophysics Data System (ADS)

    Wong, Jian Cheng; Lian, Heng; Cheong, Siew Ann

    2009-11-01

    In this paper, we perform statistical segmentation and clustering analysis of the Dow Jones Industrial Average (DJI) time series between January 1997 and August 2008. Modeling the index movements and log-index movements as stationary Gaussian processes, we find a total of 116 and 119 statistically stationary segments respectively. These can then be grouped into between five and seven clusters, each representing a different macroeconomic phase. The macroeconomic phases are distinguished primarily by their volatilities. We find that the US economy, as measured by the DJI, spends most of its time in a low-volatility phase and a high-volatility phase. The former can be roughly associated with economic expansion, while the latter contains the economic contraction phase in the standard economic cycle. Both phases are interrupted by a moderate-volatility market correction phase, but extremely-high-volatility market crashes are found mostly within the high-volatility phase. From the temporal distribution of various phases, we see a high-volatility phase from mid-1998 to mid-2003, and another starting mid-2007 (the current global financial crisis). Transitions from the low-volatility phase to the high-volatility phase are preceded by a series of precursor shocks, whereas the transition from the high-volatility phase to the low-volatility phase is preceded by a series of inverted shocks. The time scale for both types of transitions is about a year. We also identify the July 1997 Asian Financial Crisis to be the trigger for the mid-1998 transition, and an unnamed May 2006 market event related to corrections in the Chinese markets to be the trigger for the mid-2007 transition.

  17. Uncertainty-weighted time averaging of mercury vapour concentrations in ambient air: application to measurements in the United kingdom.

    PubMed

    Brown, Richard J C; Muhunthan, Dharsheni

    2011-02-03

    Uncertainty-weighted time averaging of total gaseous mercury concentrations in ambient air, with associated robust uncertainties, has been performed for concentrations measured by the U.K. Heavy Metals Monitoring Network between 2007 and 2009. The results have been compared with averages produced using standard time-averaging methods with a view to investigating the properties of the new method and whether it represents an improvement over current practice.
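The abstract does not give the paper's exact weighting scheme; a common form of uncertainty-weighted averaging is the inverse-variance weighted mean, sketched here with illustrative numbers:

```python
# Minimal sketch of inverse-variance weighted averaging (a standard scheme;
# the paper's exact weighting may differ). Values/uncertainties are assumed.

def weighted_average(values, uncertainties):
    """Inverse-variance weighted mean and its standard uncertainty."""
    weights = [1.0 / u ** 2 for u in uncertainties]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, (1.0 / wsum) ** 0.5

# Two equally precise measurements average evenly...
mean, unc = weighted_average([1.0, 2.0], [0.1, 0.1])
# ...whereas a much less certain measurement is strongly down-weighted:
mean_skewed, _ = weighted_average([1.0, 2.0], [0.1, 1.0])
```

Compared with a plain time average, this pulls the result toward the better-characterized concentration measurements.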

  18. Scaling of the Average Receiving Time on a Family of Weighted Hierarchical Networks

    NASA Astrophysics Data System (ADS)

    Sun, Yu; Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang

    2016-08-01

    In this paper, based on unweighted hierarchical networks, a family of weighted hierarchical networks is introduced; the weight factor is denoted by r. The weighted hierarchical networks depend on the numbers of nodes in the complete bipartite graph, denoted by n1, n2 and n = n1 + n2. It is assumed that the walker, at each step, starting from its current node, moves to any of its neighbors with probability proportional to the weight of the edge linking them. We deduce the analytical expression of the average receiving time (ART). The obtained results distinguish two cases. In the large network, when nr > n1n2, the ART grows as a power-law function of the network size |V (Gk)| with the exponent θ = log_n(nr/(n1n2)), 0 < θ < 1. This means that the smaller the value of θ, the more efficient the process of receiving information. When nr ≤ n1n2, the ART grows with increasing order |V (Gk)| as log_n|V (Gk)| or (log_n|V (Gk)|)².

  19. Static and dynamic micro deformable mirror characterization by phase-shifting and time-averaged interferometry

    NASA Astrophysics Data System (ADS)

    Liotard, Arnaud; Zamkotsian, Frederic

    2004-09-01

    Since micro deformable mirrors based on Micro-Opto-Electro-Mechanical Systems (MOEMS) technology will be essential in next-generation adaptive optics systems, we are designing, realizing and characterizing building blocks of this key component. An in-house designed tiltable mirror (170 × 100 μm²) has been processed by surface micromachining in the Cronos foundry, and a dedicated characterization bench has been developed for the complete analysis of building blocks as well as operational deformable mirrors. This modular Twyman-Green interferometer allows high in-plane resolution (4 μm) or a large field of view (40 mm). Out-of-plane measurements are performed with phase-shifting interferometry, showing highly repeatable results (standard deviation < 5 nm). Features such as optical quality or electro-mechanical behavior are extracted from these high-precision three-dimensional component maps. Range is increased without losing accuracy by using two-wavelength phase-shifting interferometry, authorizing large step measurements such as the 590 nm print-through steps caused by the Cronos process. Dynamic analysis such as vibration modes and cut-off frequency is realized with time-averaged interferometry. A rotation mode frequency of 31.3 kHz for the micro tiltable mirror, and a resonance with tuned damping at 1.1 kHz for the commercial OKO deformable mirror, are revealed.

  20. Statistical significance of trends and trend differences in layer-average atmospheric temperature time series

    NASA Astrophysics Data System (ADS)

    Santer, B. D.; Wigley, T. M. L.; Boyle, J. S.; Gaffen, D. J.; Hnilo, J. J.; Nychka, D.; Parker, D. E.; Taylor, K. E.

    2000-03-01

    This paper examines trend uncertainties in layer-average free atmosphere temperatures arising from the use of different trend estimation methods. It also considers statistical issues that arise in assessing the significance of individual trends and of trend differences between data sets. Possible causes of these trends are not addressed. We use data from satellite and radiosonde measurements and from two reanalysis projects. To facilitate intercomparison, we compute from reanalyses and radiosonde data temperatures equivalent to those from the satellite-based Microwave Sounding Unit (MSU). We compare linear trends based on minimization of absolute deviations (LA) and minimization of squared deviations (LS). Differences are generally less than 0.05°C/decade over 1959-1996. Over 1979-1993, they exceed 0.10°C/decade for lower tropospheric time series and 0.15°C/decade for the lower stratosphere. Trend fitting by the LA method can degrade the lower-tropospheric trend agreement of 0.03°C/decade (over 1979-1996) previously reported for the MSU and radiosonde data. In assessing trend significance we employ two methods to account for temporal autocorrelation effects. With our preferred method, virtually none of the individual 1979-1993 trends in deep-layer temperatures are significantly different from zero. To examine trend differences between data sets we compute 95% confidence intervals for individual trends and show that these overlap for almost all data sets considered. Confidence intervals for lower-tropospheric trends encompass both zero and the model-projected trends due to anthropogenic effects. We also test the significance of a trend in d(t), the time series of differences between a pair of data sets. Use of d(t) removes variability common to both time series and facilitates identification of small trend differences. This more discerning test reveals that roughly 30% of the data set comparisons have significant differences in lower-tropospheric trends

  1. Time-dependent wave packet averaged vibrational frequencies from femtosecond stimulated Raman spectra

    NASA Astrophysics Data System (ADS)

    Wu, Yue-Chao; Zhao, Bin; Lee, Soo-Y.

    2016-02-01

    Femtosecond stimulated Raman spectroscopy (FSRS) on the Stokes side arises from a third order polarization, P(3)(t), which is given by an overlap of a first order wave packet, |Ψ2(1)(pu, t)>, prepared by a narrow band (ps) Raman pump pulse, Epu(t), on the upper electronic e2 potential energy surface (PES), with a second order wave packet, <Ψ1(2)(pr*, pu, t)|, that is prepared on the lower electronic e1 PES by a broadband (fs) probe pulse, Epr(t), acting on the first-order wave packet. In off-resonant FSRS, |Ψ2(1)(pu, t)> resembles the zeroth order wave packet |Ψ1(0)(t)> on the lower PES spatially, but with a force on |Ψ2(1)(pu, t)> along the coordinates of the reporter modes due to displacements in the equilibrium position, so that <Ψ1(2)(pr*, pu, t)| will oscillate along those coordinates, thus giving rise to similar oscillations in P(3)(t) with the frequencies of the reporter modes. So, by recovering P(3)(t) from the FSRS spectrum, we are able to deduce information on the time-dependent quantum-mechanical wave packet averaged frequencies, ω̄j(t), of the reporter modes j along the trajectory of |Ψ1(0)(t)>. The observable FSRS Raman gain is related to the imaginary part of P(3)(ω). The imaginary and real parts of P(3)(ω) are related by the Kramers-Kronig relation. Hence, from the FSRS Raman gain, we can obtain the complex P(3)(ω), whose Fourier transform then gives us the complex P(3)(t) to analyze for ω̄j(t). We apply the theory, first, to a two-dimensional model system with one conformational mode of low frequency and one reporter vibrational mode of higher frequency with good results, and then we apply it to the time-resolved FSRS spectra of the cis-trans isomerization of retinal in rhodopsin [P. Kukura et al., Science 310, 1006 (2005)]. We obtain the vibrational

  2. Selection of Ka-Band Transponder Turnaround Frequency Ratio

    NASA Technical Reports Server (NTRS)

    Koukos, J.

    1993-01-01

    The Consultative Committee for Space Data Systems has issued recommendations specifying Transponder Turn-around Frequency Ratios for S-band and X-band coherent earth-to-space and space-to-earth links.

  3. Turnaround operations analysis for OTV. Volume 2: Detailed technical report

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The objectives and accomplishments were to adapt and apply the newly created database of Shuttle/Centaur ground operations. Previously defined turnaround operations analyses were updated for ground-based OTVs (GBOTVs) and space-based OTVs (SBOTVs), design requirements were identified for both OTV and Space Station accommodations hardware, turnaround operations costs were estimated, and a technology development plan was generated to develop the required capabilities. Technical and programmatic data were provided for NASA pertinent to OTV ground and space operations requirements, turnaround operations, task descriptions, timelines and manpower requirements, OTV modular design and booster and Space Station interface requirements, the SBOTV accommodations development schedule, cost and turnaround operations requirements, and a technology development plan for ground and space operations and space-based accommodations facilities and support equipment. Significant conclusions are discussed.

  4. 34. BOILER HOUSE, COAL CONVEYOR AND TURNAROUND TRACK FOR COAL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    34. BOILER HOUSE, COAL CONVEYOR AND TURN-AROUND TRACK FOR COAL CARS (NOTE: COAL CAR No. 6 IN FAR BACKGROUND) - Delaware County Electric Company, Chester Station, Delaware River at South end of Ward Street, Chester, Delaware County, PA

  5. Leadership and the psychology of turnarounds.

    PubMed

    Kanter, Rosabeth Moss

    2003-06-01

    Turnaround champions--those leaders who manage to bring distressed organizations back from the brink of failure--are often acclaimed for their canny financial and strategic decision making. But having studied their work closely, Harvard Business School's Rosabeth Moss Kanter emphasizes another aspect of their achievement. These leaders reverse the cycle of corporate decline through deliberate interventions that increase the level of communication, collaboration, and respect among their managers. Ailing companies descend into what Kanter calls a "death spiral," which typically works this way: After an initial blow to the company's fortunes, people begin pointing fingers and deriding colleagues in other parts of the business. Tensions rise and collaboration declines. Once they are no longer acting in concert, people find themselves less able to effect change. Eventually, many come to believe they are helpless. Passivity sets in. Finally, the ultimate pathology of troubled companies takes hold: denial. Rather than volunteer an opinion that no one else seems to share, people engage in collective pretense to ignore what they individually know. To counter these dynamics, Kanter says, and reverse the company's slide, the CEO needs to apply certain psychological interventions--specifically, replacing secrecy and denial with dialogue, blame and scorn with respect, avoidance and turf protection with collaboration, and passivity and helplessness with initiative. The author offers in-depth accounts of how the CEOs at Gillette, Invensys, and the BBC used these interventions to guide their employees out of corporate free fall and onto a more productive path.

  6. Paleosecular Variation and Time-Averaged Field Behavior: Global and Regional Signatures

    NASA Astrophysics Data System (ADS)

    Johnson, C. L.; Cromwell, G.; Tauxe, L.; Constable, C.

    2012-12-01

    We use an updated global dataset of directional and intensity data from lava flows to investigate time-averaged field (TAF) and paleosecular variation (PSV) signatures regionally and globally. The data set includes observations from the past 10 Ma, but we focus our investigations on the field structure over the past 5 Ma, in particular during the Brunhes and Matuyama. We restrict our analyses to sites with at least 5 samples (all of which have been stepwise demagnetized), and for which the estimate of the Fisher precision parameter, k, is at least 50. The data set comprises 1572 sites from the past 5 Ma that span latitudes 78°S to 71°N; of these ~40% are from the Brunhes chron and ~20% are from the Matuyama chron. Age control at the site level is variable because radiometric dates are available for only about one third of our sites. New TAF models for the Brunhes show longitudinal structure. In particular, high latitude flux lobes are observed, constrained by improved data sets from N. and S. America, Japan, and New Zealand. We use resampling techniques to examine possible biases in the TAF and PSV incurred by uneven temporal sampling, and the limited age information available for many sites. Results from Hawaii indicate that resampling of the paleodirectional data onto a uniform temporal distribution, incorporating site ages and age errors, leads to a TAF estimate for the Brunhes that is close to that reported for the actual data set, but a PSV estimate (virtual geomagnetic pole dispersion) that is increased relative to that obtained from the unevenly sampled data. The global distribution of sites in our dataset allows us to investigate possible hemispheric asymmetries in field structure, in particular differences between north and south high latitude field behavior and low latitude differences between the Pacific and Atlantic hemispheres.

  7. Redshift-space equal-time angular-averaged consistency relations of the gravitational dynamics

    NASA Astrophysics Data System (ADS)

    Nishimichi, Takahiro; Valageas, Patrick

    2015-12-01

    We present the redshift-space generalization of the equal-time angular-averaged consistency relations between (ℓ+n)- and n-point polyspectra (i.e., the Fourier counterparts of correlation functions) of the cosmological matter density field. Focusing on the case of the ℓ=1 large-scale mode and n small-scale modes, we use an approximate symmetry of the gravitational dynamics to derive explicit expressions that hold beyond the perturbative regime, including both the large-scale Kaiser effect and the small-scale fingers-of-god effects. We explicitly check these relations, both perturbatively, for the lowest-order version that applies to the bispectrum, and nonperturbatively, for all orders but for the one-dimensional dynamics. Using a large ensemble of N-body simulations, we find that our relation on the bispectrum in the squeezed limit (i.e., the limit where one wave number is much smaller than the other two) is valid to better than 20% up to 1 h Mpc^-1, for both the monopole and quadrupole at z = 0.35, in a ΛCDM cosmology. Additional simulations done for the Einstein-de Sitter background suggest that these discrepancies mainly come from the breakdown of the approximate symmetry of the gravitational dynamics. For practical applications, we introduce a simple ansatz to estimate the new derivative terms in the relation using only observables. Although the relation holds worse after using this ansatz, we can still recover it within 20% up to 1 h Mpc^-1, at z = 0.35 for the monopole. On larger scales, k = 0.2 h Mpc^-1, it still holds within the statistical accuracy of idealized simulations of volume ~8 h^-3 Gpc^3 without shot-noise error.

  8. Where the world stands still: turnaround as a strong test of ΛCDM cosmology

    SciTech Connect

    Pavlidou, V.; Tomaras, T.N. E-mail: tomaras@physics.uoc.gr

    2014-09-01

    Our intuitive understanding of cosmic structure formation works best in scales small enough so that isolated, bound, relaxed gravitating systems are no longer adjusting their radius; and large enough so that space and matter follow the average expansion of the Universe. Yet one of the most robust predictions of ΛCDM cosmology concerns the scale that separates these limits: the turnaround radius, which is the radius of the non-expanding shell furthest away from the center of a bound structure. We show that the maximum possible value of the turnaround radius within the framework of the ΛCDM model is, for a given mass M, equal to (3GM/Λc^2)^(1/3), with G Newton's constant and c the speed of light, independently of cosmic epoch, exact nature of dark matter, or baryonic effects. We discuss the possible use of this prediction as an observational test for ΛCDM cosmology. Current data appear to favor ΛCDM over alternatives with local inhomogeneities and no Λ. However there exist several local-universe structures that have, within errors, reached their limiting size. With improved determinations of their turnaround radii and the enclosed mass, these objects may challenge the limit and ΛCDM cosmology.
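    The maximum turnaround radius (3GM/Λc^2)^(1/3) is straightforward to evaluate. A quick sketch, with an illustrative cluster mass and a Planck-era value of Λ (both assumed here, not taken from the paper):

```python
import math

# Maximum turnaround radius R = (3 G M / (Lambda c^2))^(1/3).
# Inputs below are illustrative: a 1e15 solar-mass cluster and
# Lambda ~ 1.1e-52 m^-2 (an assumed, Planck-era value).
G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m/s
M_SUN = 1.989e30         # kg
MPC = 3.086e22           # m
LAMBDA = 1.1e-52         # m^-2, cosmological constant

M = 1e15 * M_SUN
r_max_m = (3.0 * G * M / (LAMBDA * c**2)) ** (1.0 / 3.0)
r_max_mpc = r_max_m / MPC        # roughly 11 Mpc for this mass
```

    Comparing this limit against measured turnaround radii of local structures is the observational test the authors propose.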

  9. Time-Averaged Velocity, Temperature and Density Surveys of Supersonic Free Jets

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta; Seasholtz, Richard G.; Elam, Kristie A.; Mielke, Amy F.

    2005-01-01

    A spectrally resolved molecular Rayleigh scattering technique was used to simultaneously measure the axial component of velocity U, static temperature T, and density p in unheated free jets at Mach numbers M = 0.6, 0.95, 1.4, and 1.8. The latter two conditions were achieved using contoured convergent-divergent nozzles. A narrow line-width continuous wave laser was passed through the jet plumes, and molecular scattered light from a small region on the beam was collected and analyzed using a Fabry-Perot interferometer. In addition to the optical spectrum analysis, air density at the probe volume was determined by monitoring the intensity variation of the scattered light using photo-multiplier tubes. The Fabry-Perot interferometer was operated in the imaging mode, whereby the fringe formed at the image plane was captured by a cooled CCD camera. Special attention was given to remove dust particles from the plume and to provide adequate vibration isolation to the optical components. The velocity profiles from various operating conditions were compared with those measured by a Pitot tube. An excellent comparison within 5 m/s demonstrated the maturity of the technique. Temperature was measured least accurately, within 10 K, while density was measured within 1% uncertainty. The survey data consisted of centerline variations and radial profiles of time-averaged U, T and p. The static temperature and density values were used to determine static pressure variations inside the jet. The data provided a comparative study of jet growth rates with increasing Mach number. The current work is part of a database development project for Computational Fluid Dynamics and Aeroacoustics codes that endeavor to predict noise characteristics of high speed jets. A limited amount of far field noise spectra from the same jets are also presented. Finally, a direct experimental validation was obtained for the Crocco-Busemann equation which is commonly used to predict temperature and density profiles from known velocity
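    The Crocco-Busemann relation mentioned at the end can be sketched in its simplest (adiabatic, unit-Prandtl-number) form, where constant total temperature gives the static temperature directly from the local velocity. The numbers below are illustrative assumptions, not the paper's measured values:

```python
import numpy as np

# Crocco-Busemann relation for an unheated (adiabatic) jet, unit-Prandtl form:
# constant total temperature implies  T(u) = T0 - u^2 / (2 cp).
# Ambient conditions and the velocity range are assumed for illustration.
CP = 1004.5          # J/(kg K), specific heat of air at constant pressure
T0 = 288.15          # K, ambient total temperature

def crocco_busemann_T(u):
    """Static temperature from axial velocity, assuming constant T0."""
    return T0 - np.asarray(u, dtype=float)**2 / (2.0 * CP)

u = np.linspace(0.0, 480.0, 5)   # m/s, representative of an unheated M ~ 1.8 jet
T = crocco_busemann_T(u)         # falls from 288 K toward ~173 K on centerline
```

    Density then follows from T and the (nearly ambient) static pressure via the ideal gas law, which is how velocity surveys yield temperature and density profiles.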

  10. Diagnostic quality of time-averaged ECG-gated CT data

    NASA Astrophysics Data System (ADS)

    Klein, Almar; Oostveen, Luuk J.; Greuter, Marcel J. W.; Hoogeveen, Yvonne; Schultze Kool, Leo J.; Slump, Cornelis H.; Renema, W. Klaas Jan

    2009-02-01

    Purpose: ECG-gated CTA allows visualization of the aneurysm and stentgraft during the different phases of the cardiac cycle, although with a lower SNR per cardiac phase than without ECG gating using the same dose. In our institution, abdominal aortic aneurysm (AAA) is evaluated using non-ECG-gated CTA. Some common CT scanners cannot reconstruct a non-gated volume from ECG-gated acquired data. In order to obtain the same diagnostic image quality, we propose offline temporal averaging of the ECG-gated data. This process, though straightforward, is fundamentally different from taking a non-gated scan, and its result will certainly differ as well. The purpose of this study is to quantitatively investigate how good off-line averaging approximates a non-gated scan. Method: Non-gated and ECG-gated CT scans have been performed on a phantom (Catphan 500). Afterwards the phases of the ECG-gated CTA data were averaged to create a third dataset. The three sets are compared with respect to noise properties (NPS) and frequency response (MTF). To study motion artifacts identical scans were acquired on a programmable dynamic phantom. Results and Conclusions: The experiments show that the spatial frequency content is not affected by the averaging process. The minor differences observed for the noise properties and motion artifacts are in favor of the averaged data. Therefore the averaged ECG-gated phases can be used for diagnosis. This enables the use of ECG-gating for research on stentgrafts in AAA, without impairing clinical patient care.
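    The proposed off-line temporal averaging is simply a mean over the reconstructed phase volumes. A synthetic sketch (array shapes, noise level, and phantom are assumed) showing the expected ~sqrt(n) noise reduction for a static object:

```python
import numpy as np

# Off-line temporal averaging of ECG-gated phases: average the reconstructed
# phase volumes to approximate a non-gated scan. Synthetic illustration with
# 10 phases of a static object plus independent noise; averaging reduces the
# noise roughly by sqrt(n_phases) while leaving the static object unchanged.
rng = np.random.default_rng(0)
n_phases, shape = 10, (32, 32, 32)
obj = np.zeros(shape)
obj[8:24, 8:24, 8:24] = 100.0                       # static "phantom" cube
phases = obj + rng.normal(0.0, 20.0, (n_phases,) + shape)

averaged = phases.mean(axis=0)                      # time-averaged volume

noise_single = np.std(phases[0] - obj)              # ~20
noise_avg = np.std(averaged - obj)                  # ~20 / sqrt(10)
```

    For a moving object the phases differ, which is why the averaged volume is fundamentally different from a non-gated acquisition and why the study compares NPS, MTF, and motion artifacts explicitly.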

  11. Short-term sandbar variability based on video imagery: Comparison between Time-Average and Time-Variance techniques

    USGS Publications Warehouse

    Guedes, R.M.C.; Calliari, L.J.; Holland, K.T.; Plant, N.G.; Pereira, P.S.; Alves, F.N.A.

    2011-01-01

    Time-exposure intensity (averaged) images are commonly used to locate the nearshore sandbar position (xb), based on the cross-shore locations of maximum pixel intensity (xi) of the bright bands in the images. It is not known, however, how the breaking patterns seen in Variance images (i.e. those created through standard deviation of pixel intensity over time) are related to the sandbar locations. We investigated the suitability of both Time-exposure and Variance images for sandbar detection within a multiple bar system on the southern coast of Brazil, and verified the relation between wave breaking patterns, observed as bands of high intensity in these images, and cross-shore profiles of modeled wave energy dissipation (xD). Not only is the Time-exposure maximum pixel intensity location (xi-Ti) well related to xb, but also to the maximum pixel intensity location of Variance images (xi-Va), although the latter was typically located 15 m offshore of the former. In addition, xi-Va was observed to be better associated with xD even though xi-Ti is commonly assumed as maximum wave energy dissipation. Significant wave height (Hs) and water level (η) were observed to affect the two types of images in a similar way, with an increase in both Hs and η resulting in xi shifting offshore. This η-induced xi variability has an opposite behavior to what is described in the literature, and is likely an indirect effect of higher waves breaking farther offshore during periods of storm surges. Multiple regression models performed on xi, Hs and η allowed the reduction of the residual errors between xb and xi, yielding accurate estimates with most residuals less than 10 m. Additionally, it was found that the sandbar position was best estimated using xi-Ti (xi-Va) when xb was located shoreward (seaward) of its mean position, for both the first and the second bar. Although it is unknown whether this is an indirect hydrodynamic effect or is indeed related to the morphology, we found that this

  12. Time-Average Measurement of Velocity, Density, Temperature, and Turbulence Using Molecular Rayleigh Scattering

    NASA Technical Reports Server (NTRS)

    Mielke, Amy F.; Seasholtz, Richard G.; Elam, Kristie A.; Panda, Jayanta

    2004-01-01

    Measurement of time-averaged velocity, density, temperature, and turbulence in gas flows using a nonintrusive, point-wise measurement technique based on molecular Rayleigh scattering is discussed. Subsonic and supersonic flows in a 25.4-mm diameter free jet facility were studied. The developed instrumentation utilizes a Fabry-Perot interferometer to spectrally resolve molecularly scattered light from a laser beam passed through a gas flow. The spectrum of the scattered light contains information about velocity, density, and temperature of the gas. The technique uses a slow scan, low noise 16-bit depth CCD camera to record images of the fringes formed by Rayleigh scattered light passing through the interferometer. A kinetic theory model of the Rayleigh scattered light is used in a nonlinear least squares fitting routine to estimate the unknown parameters from the fringe images. The ability to extract turbulence information from the fringe image data proved to be a challenge since the fringe is broadened by not only turbulence, but also thermal fluctuations and aperture effects from collecting light over a range of scattering angles. Figure 1 illustrates broadening of a Rayleigh spectrum typical of flow conditions observed in this work due to aperture effects and turbulence for a scattering angle, χ_s, of 90 degrees, f/3.67 collection optics, mean flow velocity, u_k, of 300 m/s, and turbulent velocity fluctuations, σ_uk, of 55 m/s. The greatest difficulty in processing the image data was decoupling the thermal and turbulence broadening in the spectrum. To aid in this endeavor, it was necessary to seed the ambient air with smoke and dust particulates; taking advantage of the turbulence broadening in the Mie scattering component of the spectrum of the collected light (not shown in the figure). The primary jet flow was not seeded due to the difficulty of the task. For measurement points lacking particles, velocity, density, and temperature
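    The nonlinear least squares step described above can be sketched with a toy stand-in for the kinetic-theory spectrum: a Gaussian line whose Doppler shift encodes bulk velocity and whose width encodes temperature. All model choices and numbers here are illustrative assumptions, not the paper's actual fit:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy stand-in for the Rayleigh-spectrum fit: a Gaussian whose shift encodes
# bulk velocity and whose width encodes temperature. The real analysis fits a
# kinetic-theory model to Fabry-Perot fringe images; this only illustrates
# recovering spectral parameters by nonlinear least squares.
def spectrum(f, amp, shift, width):
    return amp * np.exp(-0.5 * ((f - shift) / width) ** 2)

rng = np.random.default_rng(1)
f = np.linspace(-5.0, 5.0, 400)          # frequency axis (arbitrary units)
true = (1.0, 0.8, 1.2)                   # amp, shift (~velocity), width (~temp.)
data = spectrum(f, *true) + rng.normal(0.0, 0.01, f.size)

popt, pcov = curve_fit(spectrum, f, data, p0=(0.5, 0.0, 1.0))
amp_fit, shift_fit, width_fit = popt     # recovered near (1.0, 0.8, 1.2)
```

    Decoupling thermal from turbulence broadening, as the abstract notes, is harder precisely because both enter the width of such a line shape.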

  13. Area-averaged surface fluxes and their time-space variability over the FIFE experimental domain

    NASA Technical Reports Server (NTRS)

    Smith, E. A.; Hsu, A. Y.; Crosson, W. L.; Field, R. T.; Fritschen, L. J.; Gurney, R. J.; Kanemasu, E. T.; Kustas, W. P.; Nie, D.; Shuttleworth, W. J.

    1992-01-01

    The underlying mean and variance properties of surface net radiation, sensible-latent heat fluxes and soil heat flux are studied over the densely instrumented grassland region encompassing FIFE. Flux variability is discussed together with the problem of scaling up to area-averaged fluxes. Results are compared and contrasted for cloudy and clear situations and examined for the influence of surface-induced biophysical controls (burn and grazing treatments) and topographic controls (aspect ratios and slope factors).

  14. Decomposition-order effects of time integrator on ensemble averages for the Nosé-Hoover thermostat.

    PubMed

    Itoh, Satoru G; Morishita, Tetsuya; Okumura, Hisashi

    2013-08-14

    Decomposition-order dependence of the time-development integrator on ensemble averages for the Nosé-Hoover dynamics is discussed. Six integrators were employed for comparison, which were extensions of the velocity-Verlet or position-Verlet algorithm. Molecular dynamics simulations by these integrators were performed for liquid-argon systems with several different time steps and system sizes. The obtained ensemble averages of temperature and potential energy were shifted from correct values depending on the integrators. These shifts increased in proportion to the square of the time step. Furthermore, the shifts could not be removed by increasing the number of argon atoms. We show the origin of these ensemble-average shifts analytically. Our discussion can be applied not only to the liquid-argon system but also to all MD simulations with the Nosé-Hoover thermostat. Our recommended integrators among the six integrators are presented to obtain correct ensemble averages.
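    The reported dt² scaling is easy to reproduce in a simpler setting. This sketch uses plain velocity-Verlet on a harmonic oscillator (a stand-in, not the Nosé-Hoover thermostatted system): halving the time step should shrink the maximum energy deviation by roughly a factor of four.

```python
# Second-order (dt^2) error scaling of a Verlet-type integrator, illustrated
# on a unit-mass, unit-frequency harmonic oscillator. This is a stand-in for
# the Nose-Hoover integrators discussed above, chosen because the scaling is
# the same and the system is trivial to integrate.
def max_energy_error(dt, n_steps=2000):
    x, v = 1.0, 0.0
    e0 = 0.5 * v * v + 0.5 * x * x
    worst = 0.0
    for _ in range(n_steps):
        v += 0.5 * dt * (-x)          # half kick
        x += dt * v                   # drift
        v += 0.5 * dt * (-x)          # half kick
        e = 0.5 * v * v + 0.5 * x * x
        worst = max(worst, abs(e - e0))
    return worst

ratio = max_energy_error(0.1) / max_energy_error(0.05)   # ~4 for dt^2 scaling
```

    The abstract's point is subtler — the *ensemble-average* shift under the thermostat also scales as dt² and depends on the operator-splitting order — but the quadratic signature is the same.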

  15. An Integrated Gate Turnaround Management Concept Leveraging Big Data/Analytics for NAS Performance Improvements

    NASA Technical Reports Server (NTRS)

    Chung, William; Chachad, Girish; Hochstetler, Ronald

    2016-01-01

    The Integrated Gate Turnaround Management (IGTM) concept was developed to improve gate turnaround performance at the airport by leveraging relevant historical data to support optimization of airport gate operations, which include taxi to the gate, gate services, pushback, taxi to the runway, and takeoff, based on available resources, constraints, and uncertainties. By analyzing events of gate operations, the primary performance-dependent attributes of these events were identified for the historical data analysis, such that performance models could be developed under uncertainty to support descriptive, predictive, and prescriptive functions. A system architecture was developed to examine system requirements in support of such a concept. An IGTM prototype was developed to demonstrate the concept, using a distributed network and collaborative decision tools for stakeholders to meet on-time pushback performance under uncertainties.

  16. 40 CFR 60.1265 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in appendix A of this part, section 5.4, to determine the daily geometric average percent reduction of potential sulfur dioxide emissions. (c) If you operate...
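    The daily geometric average the rule refers to is exp(mean(log(c_i))) over the hourly values. A minimal sketch with hypothetical hourly SO2 concentrations (the values are illustrative, not from the regulation):

```python
import math

# Daily geometric average of hourly SO2 concentrations, per the rule's
# geometric-average requirement: exp of the mean of the logs.
# Concentrations below are hypothetical, in ppm.
hourly_so2 = [0.030, 0.025, 0.040, 0.035, 0.028, 0.032]
geo_avg = math.exp(sum(math.log(c) for c in hourly_so2) / len(hourly_so2))
```

    The geometric mean is always at or below the arithmetic mean of the same values, which matters when a limit is specified in geometric-average terms.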

  17. 40 CFR 60.1265 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in appendix A of this part, section 5.4, to determine the daily geometric average percent reduction of potential sulfur dioxide emissions. (c) If you operate...

  18. 40 CFR 60.1265 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in appendix A of this part, section 5.4, to determine the daily geometric average percent reduction of potential sulfur dioxide emissions. (c) If you operate...

  19. Analytical study of flow phenomena in SSME turnaround duct geometries

    NASA Astrophysics Data System (ADS)

    McLallin, K. L.

    1985-04-01

    The SSME fuel turbopump hot gas manifold was identified as a source of loss and flow distortion which significantly affects the performance and durability of both the drive turbine and the LOX injector area of the main combustion chamber. Two current SSME geometries were studied, the full power level (FPL) and the first manned orbital flight (FMOF) configuration. The effects of turnaround duct geometry on flow losses and distortions were examined by varying wall curvature and flow area in the 180 deg turnaround region. The effects of duct inlet flow phenomena, such as the radial distortion of the inlet flow and inlet swirl level, on turnaround duct performance were also investigated. It is shown that of the two current geometries, the FMOF configuration had lower pressure losses and generated less flow distortion, but had a small flow separation bubble at the 180 deg turnaround exit. It is found that by optimizing wall curvature and flow diffusion in the turnaround, improved duct performance can be achieved.

  20. Conditional-sampling schemes for turbulent flow, based on the variable-interval time averaging (VITA) algorithm

    NASA Astrophysics Data System (ADS)

    Morrison, J. F.; Tsai, H. M.; Bradshaw, P.

    1988-12-01

    The variable-interval time-averaging (“VITA”) algorithm has been tested in a variety of turbulent boundary layers for its ability to detect shear-stress-producing motions from hot-wire signals. A “VITA + LEVEL” scheme (which uses criteria for both short-time variance and short-time average, i.e.“level”) has been devised, and used in several different boundary layers. This scheme yields length-scale statistics that are acceptably independent of the conditioning criteria, which the VITA scheme does not.
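    The core of the VITA algorithm is a short-time variance compared against the long-time variance. A minimal sketch (uniform sampling, boxcar window, and the threshold factor k are assumptions; the VITA+LEVEL variant would additionally threshold the short-time average):

```python
import numpy as np

# Minimal VITA event detection: compute the short-time variance over a
# window and flag samples where it exceeds k times the long-time variance.
def vita_detect(u, window, k=1.0):
    kernel = np.ones(window) / window
    mean_t = np.convolve(u, kernel, mode="same")          # short-time average
    var_t = np.convolve(u**2, kernel, mode="same") - mean_t**2
    return var_t > k * np.var(u)                          # boolean event mask

rng = np.random.default_rng(2)
u = rng.normal(0.0, 1.0, 4000)                            # background turbulence
u[2000:2100] += 6.0 * np.sin(np.linspace(0, 6 * np.pi, 100))  # synthetic burst
events = vita_detect(u, window=50, k=2.0)                 # flags the burst region
```

    The paper's finding — that detected length-scale statistics depend on the conditioning criteria unless a level criterion is added — corresponds to the sensitivity of this mask to the choices of `window` and `k`.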

  2. Infinite-time average of local fields in an integrable quantum field theory after a quantum quench.

    PubMed

    Mussardo, G

    2013-09-01

    The infinite-time averages of the expectation values of local fields of any interacting quantum theory after a global quench process are key quantities for matching theoretical and experimental results. For quantum integrable field theories, we show that they can be obtained by an ensemble average that employs a particular limit of the form factors of local fields and quantities extracted by the generalized Bethe ansatz.

  3. Exploring Granger causality between global average observed time series of carbon dioxide and temperature

    SciTech Connect

    Kodra, Evan A; Chatterjee, Snigdhansu; Ganguly, Auroop R

    2010-01-01

    Detection and attribution methodologies have been developed over the years to delineate anthropogenic from natural drivers of climate change and impacts. A majority of prior attribution studies, which have used climate model simulations and observations or reanalysis datasets, have found evidence for human-induced climate change. This paper tests the hypothesis that Granger causality can be extracted from the bivariate series of globally averaged land surface temperature (GT) observations and observed CO2 in the atmosphere using a reverse cumulative Granger causality test. This proposed extension of the classic Granger causality test is better suited to handle the multisource nature of the data and provides further statistical rigor. The results from this modified test show evidence for Granger causality from a proxy of total radiative forcing (RC), which in this case is a transformation of atmospheric CO2, to GT. Prior literature failed to extract these results via the standard Granger causality test. A forecasting test shows that a holdout set of GT can be better predicted with the addition of lagged RC as a predictor, lending further credibility to the Granger test results. However, since second-order-differenced RC is neither normally distributed nor variance stationary, caution should be exercised in the interpretation of our results.
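    The standard Granger test the paper extends compares a restricted autoregression of y against one augmented with lagged x, via an F-test on the residual sums of squares. A self-contained sketch on synthetic data where x genuinely drives y (this is the classic test, not the paper's reverse-cumulative variant):

```python
import numpy as np
from scipy import stats

# Classic bivariate Granger causality: does adding lagged x improve a
# lagged-y autoregression of y? Synthetic data with a real causal link.
rng = np.random.default_rng(3)
n, lag = 500, 1
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

Y = y[lag:]
X_r = np.column_stack([np.ones(n - lag), y[:-lag]])            # restricted
X_u = np.column_stack([np.ones(n - lag), y[:-lag], x[:-lag]])  # + lagged x

def rss(X, Y):
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    r = Y - X @ beta
    return r @ r

rss_r, rss_u = rss(X_r, Y), rss(X_u, Y)
df2 = (n - lag) - X_u.shape[1]
F = (rss_r - rss_u) / (rss_u / df2)        # 1 restriction
p_value = stats.f.sf(F, 1, df2)            # tiny p: x Granger-causes y
```

    The paper's caveats about stationarity apply here too: the F-test's validity rests on the residuals being well behaved, which the authors note is questionable for second-order-differenced RC.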

  4. Empirical wind model for the middle and lower atmosphere. Part 1: Local time average

    NASA Technical Reports Server (NTRS)

    Hedin, A. E.; Fleming, E. L.; Manson, A. H.; Schmidlin, F. J.; Avery, S. K.; Franke, S. J.

    1993-01-01

    The HWM90 thermospheric wind model was revised in the lower thermosphere and extended into the mesosphere and lower atmosphere to provide a single analytic model for calculating zonal and meridional wind profiles representative of the climatological average for various geophysical conditions. Gradient winds from CIRA-86 plus rocket soundings, incoherent scatter radar, MF radar, and meteor radar provide the database and are supplemented by previous data-driven model summaries. Low-order spherical harmonics and Fourier series are used to describe the major variations throughout the atmosphere including latitude, annual, semiannual, and longitude (stationary wave 1). The model represents a smoothed compromise between the data sources. Although agreement between various data sources is generally good, some systematic differences are noted, particularly near the mesopause. Root mean square differences between data and model are on the order of 15 m/s in the mesosphere and 10 m/s in the stratosphere for zonal wind, and 10 m/s and 4 m/s, respectively, for meridional wind.
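    The low-order Fourier-series description of annual and semiannual variations amounts to a small linear least-squares fit in day-of-year. A sketch with synthetic winds and illustrative coefficients (all numbers assumed):

```python
import numpy as np

# Least-squares fit of mean + annual + semiannual harmonics in day-of-year,
# the kind of low-order Fourier series the wind model uses. Winds and
# coefficients below are synthetic, for illustration only.
rng = np.random.default_rng(4)
doy = np.arange(1.0, 366.0)
w = 2.0 * np.pi * doy / 365.0
truth = 10.0 + 8.0 * np.cos(w) + 3.0 * np.sin(2.0 * w)
wind = truth + rng.normal(0.0, 2.0, doy.size)        # noisy "observations"

# Design matrix: [1, cos w, sin w, cos 2w, sin 2w]
A = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w),
                     np.cos(2.0 * w), np.sin(2.0 * w)])
coef, *_ = np.linalg.lstsq(A, wind, rcond=None)      # recovers (10, 8, 0, 0, 3)
```

    The real model adds spherical harmonics in latitude and a stationary wave-1 longitude term, but each is fit in the same linear-least-squares fashion.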

  5. Necessary and sufficient condition for average consensus of networked multi-agent systems with heterogeneous time delays

    NASA Astrophysics Data System (ADS)

    Sakurama, Kazunori; Nakano, Kazushi

    2015-04-01

    This paper deals with an average-consensus problem of networked multi-agent systems with heterogeneous time delays. First, a necessary and sufficient condition is derived for multi-agent systems to achieve average consensus. This condition is represented in the form of an algebraic condition by using the delay-transition and the delay-weighted Laplacians. Second, a performance criterion is introduced to evaluate multi-agent systems that do not achieve average consensus. The criterion is represented in terms of the norms with respect to the above Laplacians. These results imply that the introduced Laplacians play important roles in analysing multi-agent systems with heterogeneous time delays. The validity of the analysis results is demonstrated by simulations with six-agent systems.
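    The underlying average-consensus protocol is easy to illustrate in the delay-free case (the paper's contribution concerns heterogeneous delays, which this sketch deliberately omits): agents iterate x ← x − εLx with a graph Laplacian L and converge to the average of their initial states.

```python
import numpy as np

# Delay-free average consensus on a 6-agent ring graph: repeated Laplacian
# updates x <- x - eps * L x preserve the mean and drive all agents to it.
n = 6
I = np.eye(n)
L = 2.0 * I - np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1)   # ring Laplacian

rng = np.random.default_rng(5)
x = rng.uniform(0.0, 10.0, n)          # initial agent states
target = x.mean()                      # the consensus value (preserved average)
eps = 0.2                              # step size < 2 / lambda_max(L) = 0.5
for _ in range(500):
    x = x - eps * (L @ x)
```

    With heterogeneous delays the plain average is generally *not* preserved, which is why the paper's delay-transition and delay-weighted Laplacians are needed to characterize when average consensus still holds.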

  6. Can Principals' Emotional Intelligence Matter to School Turnarounds?

    ERIC Educational Resources Information Center

    Cai, Qijie

    2011-01-01

    Through the literature review, the study aims to explore the relationship between the emotional intelligence (EI) of principals and the turnarounds of low-performing schools and generate some hypotheses regarding this issue. Rigorous, empirical studies published in the last 15 years are selected and reviewed around three relevant topics: the…

  7. The Turnaround Mindset: Aligning Leadership for Student Success

    ERIC Educational Resources Information Center

    Fairchild, Tierney Temple; DeMary, Jo Lynne

    2011-01-01

    This book provides a valuable balance between what one must know and what one must do to turn around low-performing schools. The 3-E framework simplifies this complex process by focusing resources on the environment, the executive, and the execution of the turnaround plan. Central to each of these components is a spotlight on the values supporting…

  8. Importing Leaders for School Turnarounds: Lessons and Opportunities

    ERIC Educational Resources Information Center

    Kowal, Julie; Hassel, Emily Ayscue

    2011-01-01

    One of the biggest challenges in education today is identifying talented candidates to successfully lead turnarounds of persistently low-achieving schools. Evidence suggests that the traditional principal pool is already stretched to capacity and cannot supply enough leaders to fix failing schools. But potentially thousands of leaders capable of…

  9. Policy Perspective: School Turnaround in England. Utilizing the Private Sector

    ERIC Educational Resources Information Center

    Corbett, Julie

    2014-01-01

    This paper, written by strategic partner of the Center on School Turnaround (CST), Julie Corbett, provides research and examples on England's approach to turning around its lowest performing schools. The English education system utilizes private vendors to support chronically low-performing schools and districts. The introduction is followed by…

  10. Democratic School Turnarounds: Pursuing Equity and Learning from Evidence

    ERIC Educational Resources Information Center

    Trujillo, Tina; Renée, Michelle

    2013-01-01

    The report "Democratic School Turnarounds" considers the democratic tensions inherent in the federal School Improvement Grant (SIG) policy's market-based school reforms and critiques the research base that many of these reforms are based on. It concludes with a set of recommendations that re-center the purposes of public education…

  11. California Turnaround Schools: An Analysis of School Improvement Grant Effectiveness

    ERIC Educational Resources Information Center

    Graham, Khalil N.

    2013-01-01

    The purpose of this study was to evaluate the effectiveness of School Improvement Grants (SIGs) in the state of California (CA) in increasing student achievement using the turnaround implementation model. The American Recovery and Reinvestment Act of 2009 (ARRA) included educational priorities focused on fixing America's lowest achieving schools.…

  12. Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround

    ERIC Educational Resources Information Center

    Peurach, Donald J.; Neumerski, Christine M.

    2015-01-01

    The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…

  13. Negotiation Areas for "Transformation" and "Turnaround" Intervention Models

    ERIC Educational Resources Information Center

    Mass Insight Education (NJ1), 2011

    2011-01-01

    To receive School Improvement Grant (SIG) funding, districts must submit an application to the state that outlines their strategic plan to implement one of four intervention models in their persistently lowest-achieving schools. The four intervention models include: (1) School Closure; (2) Restart; (3) Turnaround; and (4) Transformation. The…

  14. Portrait of a Turnaround Leader in a High Needs District

    ERIC Educational Resources Information Center

    Hewitt, Kimberly Kappler; Reitzug, Ulrich

    2015-01-01

    Using portraiture methodology involving interview, observation, and artifact data, this study portrays a turnaround leader, Dr. Susan Gray, in a high needs, rural district in the Southeast. In three years, Gray led Lincoln Elementary from nearly being reconstituted to being an award-winning school. Gray has subsequently been assigned other…

  15. Fluctuation analysis of time-averaged mean-square displacement for the Langevin equation with time-dependent and fluctuating diffusivity

    NASA Astrophysics Data System (ADS)

    Uneyama, Takashi; Miyaguchi, Tomoshige; Akimoto, Takuma

    2015-09-01

    The mean-square displacement (MSD) is widely utilized to study the dynamical properties of stochastic processes. The time-averaged MSD (TAMSD) provides some information on the dynamics which cannot be extracted from the ensemble-averaged MSD. In particular, the relative standard deviation (RSD) of the TAMSD can be utilized to study the long-time relaxation behavior. In this work, we consider a class of Langevin equations which are multiplicatively coupled to time-dependent and fluctuating diffusivities. Various interesting dynamical models, such as entangled polymers and supercooled liquids, can be interpreted as Langevin equations with time-dependent and fluctuating diffusivities. We derive a general formula for the RSD of the TAMSD for the Langevin equation with time-dependent and fluctuating diffusivity. We show that the RSD can be expressed in terms of the correlation function of the diffusivity. The RSD exhibits a crossover in the long-time region. The crossover time is related to a weighted average relaxation time for the diffusivity; thus, the crossover time gives information on the relaxation time of the fluctuating diffusivity which cannot be extracted from the ensemble-averaged MSD. We discuss the universality and possible applications of the formula via some simple examples.
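
    As a generic illustration of the quantities involved (not the authors' code; all parameters arbitrary), the TAMSD of simulated Brownian trajectories and the RSD of the TAMSD across the ensemble can be computed as:

```python
import numpy as np

rng = np.random.default_rng(0)

def tamsd(x, lag):
    """Time-averaged MSD of a single trajectory x at integer lag (in samples)."""
    disp = x[lag:] - x[:-lag]
    return np.mean(disp ** 2)

# Ensemble of 1D Brownian trajectories with diffusivity D (illustrative values).
n_traj, n_steps, dt, D = 500, 10_000, 1e-3, 1.0
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_traj, n_steps))
trajs = np.cumsum(steps, axis=1)

lag = 10
values = np.array([tamsd(x, lag) for x in trajs])
rsd = values.std() / values.mean()  # relative standard deviation of the TAMSD
print(f"mean TAMSD = {values.mean():.4e}, RSD = {rsd:.3f}")
```

    How this RSD behaves as the total measurement time grows is the diagnostic the abstract's crossover refers to.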

  16. Rebuilding Organizational Capacity in Turnaround Schools: Insights from the Corporate, Government, and Non-Profit Sectors

    ERIC Educational Resources Information Center

    Murphy, Joseph; Meyers, Coby V.

    2009-01-01

    In this article, we provide a grounded narrative of capacity building in the turnaround equation by exploring the turnaround literature outside of education and applying it to troubled schools. Our analysis is based upon reviews of: (1) 14 comprehensive, historical volumes that examine the turnaround phenomenon; (2) 16 book-length analyses of…

  17. Time-dependent local and average structural evolution of δ-phase 239Pu-Ga alloys

    DOE PAGES

    Smith, Alice I.; Page, Katharine L.; Siewenie, Joan E.; Losko, Adrian S.; Vogel, Sven C.; Gourdon, Olivier A.; Richmond, Scott; Saleh, Tarik A.; Ramos, Michael; Schwartz, Daniel S.

    2016-08-05

    Here, plutonium metal is a very unusual element, exhibiting six allotropes at ambient pressure between room temperature and its melting point, a complicated phase diagram, and a complex electronic structure. Many phases of plutonium metal are unstable with changes in temperature, pressure, chemical additions, or time. This strongly affects structure and properties, and becomes of high importance, particularly when considering effects on structural integrity over long periods of time [1]. This paper presents a time-dependent neutron total scattering study of the local and average structure of naturally aging δ-phase 239Pu-Ga alloys, together with preliminary results on neutron tomography characterization.

  18. Average-atom treatment of relaxation time in x-ray Thomson scattering from warm dense matter

    NASA Astrophysics Data System (ADS)

    Johnson, W. R.; Nilsen, J.

    2016-03-01

    The influence of finite relaxation times on Thomson scattering from warm dense plasmas is examined within the framework of the average-atom approximation. Presently most calculations use the collision-free Lindhard dielectric function to evaluate the free-electron contribution to the Thomson cross section. In this work, we use the Mermin dielectric function, which includes relaxation time explicitly. The relaxation time is evaluated by treating the average atom as an impurity in a uniform electron gas and depends critically on the transport cross section. The calculated relaxation rates agree well with values inferred from the Ziman formula for the static conductivity and also with rates inferred from a fit to the frequency-dependent conductivity. Transport cross sections determined by the phase-shift analysis in the average-atom potential are compared with those evaluated in the commonly used Born approximation. The Born approximation converges to the exact cross sections at high energies; however, differences that occur at low energies lead to corresponding differences in relaxation rates. The relative importance of including relaxation time when modeling x-ray Thomson scattering spectra is examined by comparing calculations of the free-electron dynamic structure function for Thomson scattering using Lindhard and Mermin dielectric functions. Applications are given to warm dense Be plasmas, with temperatures ranging from 2 to 32 eV and densities ranging from 2 to 64 g/cc.
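
    For reference, the Mermin dielectric function mentioned here has a commonly quoted closed form (notation assumed, not reproduced from the paper):

```latex
\varepsilon_M(q,\omega) = 1 +
\frac{\left(1 + \frac{i}{\omega\tau}\right)\left[\varepsilon_L(q,\omega + i/\tau) - 1\right]}
     {1 + \frac{i}{\omega\tau}\,\dfrac{\varepsilon_L(q,\omega + i/\tau) - 1}{\varepsilon_L(q,0) - 1}}
```

    where τ is the relaxation time; letting τ → ∞ recovers the collision-free Lindhard function ε_L.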

  19. Average-atom treatment of relaxation time in x-ray Thomson scattering from warm dense matter.

    PubMed

    Johnson, W R; Nilsen, J

    2016-03-01

    The influence of finite relaxation times on Thomson scattering from warm dense plasmas is examined within the framework of the average-atom approximation. Presently most calculations use the collision-free Lindhard dielectric function to evaluate the free-electron contribution to the Thomson cross section. In this work, we use the Mermin dielectric function, which includes relaxation time explicitly. The relaxation time is evaluated by treating the average atom as an impurity in a uniform electron gas and depends critically on the transport cross section. The calculated relaxation rates agree well with values inferred from the Ziman formula for the static conductivity and also with rates inferred from a fit to the frequency-dependent conductivity. Transport cross sections determined by the phase-shift analysis in the average-atom potential are compared with those evaluated in the commonly used Born approximation. The Born approximation converges to the exact cross sections at high energies; however, differences that occur at low energies lead to corresponding differences in relaxation rates. The relative importance of including relaxation time when modeling x-ray Thomson scattering spectra is examined by comparing calculations of the free-electron dynamic structure function for Thomson scattering using Lindhard and Mermin dielectric functions. Applications are given to warm dense Be plasmas, with temperatures ranging from 2 to 32 eV and densities ranging from 2 to 64 g/cc.

  20. Time-resolved and time-averaged stereo-PIV measurements of a unit-ratio cavity

    NASA Astrophysics Data System (ADS)

    Immer, Marc; Allegrini, Jonas; Carmeliet, Jan

    2016-06-01

    An experimental setup was developed to perform wind tunnel measurements on a unit-ratio, 2D open cavity under perpendicular incident flow. The open cavity is characterized by a mixing layer at the cavity top that divides the flow field into a boundary layer flow and a cavity flow. Instead of precisely replicating a specific type of inflow, such as a turbulent flat plate boundary layer or an atmospheric boundary layer, the setup is capable of simulating a wide range of inflow profiles. This is achieved by using triangular spires as upstream turbulence generators, which can modify the otherwise laminar inflow boundary layer to be moderately turbulent and stationary, or heavily turbulent and intermittent. Measurements were performed by means of time-resolved stereo PIV. The cavity shear layer is analyzed in detail using flow statistics, spectral analysis, and space-time plots. The ability of the setup to generate typical cavity flow cases is demonstrated for characteristic inflow boundary layers, laminar and turbulent. Each case is associated with a distinct shear-layer flow phenomenon: self-sustained oscillations for the former and Kelvin-Helmholtz instabilities for the latter. Additionally, large spires generate a highly turbulent wake flow, resulting in a significantly different cavity flow. Large turbulent sweep and ejection events in the wake flow suppress the typical shear layer, and sporadic near-wall sweep events generate coherent vortices at the upstream edge.

  1. Phase-resolved and time-averaged puff motions of an excited stack-issued transverse jet

    NASA Astrophysics Data System (ADS)

    Hsu, C. M.; Huang, R. F.

    2013-07-01

    The dynamics of puff motions in an excited stack-issued transverse jet were studied experimentally in a wind tunnel. The temporal and spatial evolution processes of the puffs induced by acoustic excitation were examined using the smoke flow visualization method and high-speed particle image velocimetry. The temporal and spatial evolutions of the puffs were examined using phase-resolved ensemble-averaged velocity fields and the velocity, length scales, and vorticity characteristics of the puffs were studied. The time-averaged velocity fields were calculated to analyze the velocity distributions and vorticity contours. The results show that a puff consists of a pair of counter-rotating vortex rings. An initial vortex ring was formed due to a concentration of vorticity at the lee side of the issuing jet at the instant of the mid-oscillation cycle. A vortex ring rotating in the opposite direction to that of the initial vortex ring was subsequently formed at the upwind side of the issuing jet. These two counter-rotating vortex rings formed a "mushroom" vortex pair, which was deflected by the crossflow and traveled downstream along a time-averaged trajectory of zero vorticity. The trajectory was situated far above the time-averaged streamline evolving from the leading edge of the tube. The velocity magnitudes of the vortex rings at the upwind and the lee side decreased with time evolution as the puffs traveled downstream due to momentum dissipation and entrainment effects. The puffs traveling along the trajectory of zero vorticity caused large velocities to appear above the leading-edge streamline.

  2. Long-time averaged dynamics of a Bose-Einstein condensate in a bichromatic optical lattice with external harmonic confinement

    NASA Astrophysics Data System (ADS)

    Sakhel, Asaad R.

    2016-07-01

    The dynamics of a Bose-Einstein condensate are examined numerically in the presence of a one-dimensional bichromatic optical lattice (BCOL) with external harmonic confinement in the strongly interacting regime. The condensate is excited by a focusing stirring red laser. Two realizations of the BCOL are considered, one with a rational and the other with an irrational ratio of the two constituent wavelengths. The system is simulated by the time-dependent Gross-Pitaevskii equation, which is solved using the Crank-Nicolson method in real time. It is found that for a weak BCOL, the long-time averaged physical observables of the condensate respond only very weakly (or not at all) to changes in the secondary OL depth V1, showing that under these conditions the harmonic trap plays a dominant role in governing the dynamics. However, for a much larger strength of the BCOL, the response is stronger as it begins to compete with the external harmonic trap, such that the frequency of Bloch oscillations of the bosons rises with V1, yielding higher time-averages. Qualitatively there is no difference between the dynamics of the condensate resulting from the use of a rational or irrational ratio of the wavelengths, since the external harmonic trap washes it out. It is further found that in the presence of an external harmonic trap, the BCOL acts in favor of superflow.

  3. The effect of selection of time steps and average assumptions on the continuous simulation of rainwater harvesting strategies.

    PubMed

    Coombes, P J; Barry, M E

    2007-01-01

    The use of domestic rainwater tanks with back up from mains water supplies in urban areas can produce considerable reductions in mains water demands and stormwater runoff. It is commonplace to analyse the performance of rainwater tanks using continuous simulation with daily time steps and average water use assumptions. This paper compares this simplistic analysis to more detailed analysis that employs 6 minute time steps and climate dependent water demand. The use of daily time steps produced considerable under-estimation of annual rainwater yields that were dependent on tank size, rain depth, seasonal distribution of rainfall, water demand and tank configuration. It is shown that analysis of the performance of rainwater tanks is critically dependent on detailed inputs.
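
    The behavioural (continuous-simulation) tank model underlying such analyses can be sketched as follows; this is a minimal yield-after-spillage implementation with made-up volumes, not the authors' model:

```python
def simulate_tank(rain_inflow, demand, capacity):
    """Behavioural continuous simulation of a rainwater tank.

    rain_inflow and demand are per-time-step volumes (same units as
    capacity). Uses yield-after-spillage ordering; returns the total
    rainwater yield and the total overflow to stormwater.
    """
    storage, total_yield, overflow = 0.0, 0.0, 0.0
    for inflow, d in zip(rain_inflow, demand):
        storage += inflow
        if storage > capacity:            # spill excess first
            overflow += storage - capacity
            storage = capacity
        supplied = min(d, storage)        # then draw demand from storage
        storage -= supplied
        total_yield += supplied
    return total_yield, overflow

# Four illustrative time steps of inflow against a constant demand of 3.
y, o = simulate_tank([5, 0, 12, 0], [3, 3, 3, 3], capacity=10)
print(y, o)  # -> 11.0 2.0
```

    Running the same water balance with daily totals instead of sub-daily steps is what the paper shows can substantially under-estimate yield.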

  4. Output regulation of switched linear multi-agent systems: an agent-dependent average dwell time method

    NASA Astrophysics Data System (ADS)

    Jia, Hongwei; Zhao, Jun

    2016-08-01

    The output regulation problem of switched linear multi-agent systems with stabilisable and unstabilisable subsystems is investigated in this paper. A sufficient condition for the solvability of the problem is given. Owing to the characteristics of switched multi-agent systems, even if each agent has its own dwell time, the multi-agent systems, if viewed as an overall switched system, may not have a dwell time. To overcome this difficulty, we present a new approach, called an agent-dependent average dwell time method. Due to the limited information exchange between agents, a distributed dynamic observer network for agents is provided. Further, a distributed dynamic controller based on observer is designed. Finally, simulation results show the effectiveness of the proposed solutions.

  5. Relativistic X-ray reverberation modelling of the combined time-averaged and lag-energy spectra in AGN

    NASA Astrophysics Data System (ADS)

    Chainakun, P.; Young, A. J.; Kara, E.

    2016-08-01

    General relativistic ray tracing simulations of the time-averaged spectrum and energy-dependent time delays in active galactic nuclei (AGN) are presented. We model the lamp-post geometry in which the accreting gas is illuminated by an X-ray source located on the rotation axis of the black hole. The spectroscopic features imprinted in the reflection component are modelled using REFLIONX. The associated time delays after the direct continuum, known as reverberation lags, are computed including the full effects of dilution and ionization gradients on the disc. We perform, for the first time, simultaneous fitting of the time-averaged and lag-energy spectra in three AGN: Mrk 335, IRAS 13224-3809 and Ark 564 observed with XMM-Newton. The best-fitting source height and central mass of each AGN partly agree with those previously reported. We find that including the ionization gradient in the model naturally explains lag-energy observations in which the 3 keV and 7-10 keV bands precede other bands. To obtain the clear 3 keV and 7-10 keV dips in the lag-energy profile, the model requires either a source height >5 rg, or a disc that is highly ionized at small radii and is colder further out. We also show that fitting the lag or the mean spectra alone can lead to different results and interpretations. It is therefore important to combine the spectral and timing data in order to find the plausible, self-consistent fits which are achievable with our model.
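
    Reverberation lags of this kind are conventionally estimated from the phase of the Fourier cross spectrum between two energy bands; a minimal sketch of that step (generic method, not the authors' fitting code):

```python
import numpy as np

def frequency_lags(lc_ref, lc_test, dt):
    """Time lag of lc_test relative to lc_ref at each Fourier frequency.

    Positive lag means lc_test is delayed. Uses the convention
    lag(f) = -phase(cross spectrum) / (2*pi*f).
    """
    ref_ft = np.fft.rfft(lc_ref)
    test_ft = np.fft.rfft(lc_test)
    f = np.fft.rfftfreq(len(lc_ref), dt)[1:]        # drop the zero frequency
    cross = np.conj(ref_ft[1:]) * test_ft[1:]
    return f, -np.angle(cross) / (2 * np.pi * f)

# Demo: a synthetic light curve and a copy delayed by 5 samples.
rng = np.random.default_rng(3)
x = rng.normal(size=1024)
y = np.roll(x, 5)                                   # circularly delayed copy
f, lags = frequency_lags(x, y, dt=1.0)
print(lags[:3])  # low-frequency lags recover the 5-sample delay
```

    In real data the lags are averaged over frequency bins, and the low-frequency portion is what enters a lag-energy spectrum.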

  6. Reduction of time-averaged irradiation speckle nonuniformity in laser-driven plasmas due to target ablation

    NASA Astrophysics Data System (ADS)

    Epstein, R.

    1997-09-01

    In inertial confinement fusion (ICF) experiments, irradiation uniformity is improved by passing laser beams through distributed phase plates (DPPs), which produce focused intensity profiles with well-controlled, reproducible envelopes modulated by fine random speckle. [C. B. Burckhardt, Appl. Opt. 9, 695 (1970); Y. Kato and K. Mima, Appl. Phys. B 29, 186 (1982); Y. Kato et al., Phys. Rev. Lett. 53, 1057 (1984); Laboratory for Laser Energetics LLE Review 33, NTIS Document No. DOE/DP/40200-65, 1987 (unpublished), p. 1; Laboratory for Laser Energetics LLE Review 63, NTIS Document No. DOE/SF/19460-91, 1995 (unpublished), p. 1.] A uniformly ablating plasma atmosphere acts to reduce the contribution of the speckle to the time-averaged irradiation nonuniformity by causing the intensity distribution to move relative to the absorption layer of the plasma. This occurs most directly as the absorption layer in the plasma moves with the ablation-driven flow, but it is shown that the effect of the accumulating ablated plasma on the phase of the laser light also makes a quantitatively significant contribution. Analytical results are obtained using the paraxial approximation applied to the beam propagation, and a simple statistical model is assumed for the properties of DPPs. The reduction in the time-averaged spatial spectrum of the speckle due to these effects is shown to be quantitatively significant within time intervals characteristic of atmospheric hydrodynamics under typical ICF irradiation intensities.

  7. Two Stage Helical Gearbox Fault Detection and Diagnosis based on Continuous Wavelet Transformation of Time Synchronous Averaged Vibration Signals

    NASA Astrophysics Data System (ADS)

    Elbarghathi, F.; Wang, T.; Zhen, D.; Gu, F.; Ball, A.

    2012-05-01

    Vibration signals from a gearbox are usually very noisy, which makes it difficult to find reliable symptoms of a fault in a multistage gearbox. This paper explores the use of time synchronous averaging (TSA) to suppress the noise and the Continuous Wavelet Transformation (CWT) to enhance the non-stationary nature of the fault signal for more accurate fault diagnosis. The results obtained in diagnosing an incipient gear breakage show that fault diagnosis results can be improved by using an appropriate wavelet. Moreover, a new scheme based on the level of wavelet coefficient amplitudes of baseline data alone, without faulty data samples, is suggested to select an optimal wavelet.
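
    The TSA step itself is straightforward to sketch; an illustration with a synthetic shaft-synchronous tone (made-up parameters, not the paper's data):

```python
import numpy as np

def time_synchronous_average(signal, samples_per_rev):
    """Average a vibration signal over complete shaft revolutions.

    Shaft-synchronous components reinforce; asynchronous noise averages
    toward zero as 1/sqrt(number of revolutions). Assumes the signal is
    already resampled to a fixed number of samples per revolution.
    """
    n_revs = len(signal) // samples_per_rev
    revs = np.reshape(signal[: n_revs * samples_per_rev],
                      (n_revs, samples_per_rev))
    return revs.mean(axis=0)

# Demo: a gear-mesh tone (8 cycles per revolution) buried in heavy noise.
rng = np.random.default_rng(1)
spr, n_revs = 256, 400
angle = np.linspace(0, 2 * np.pi, spr, endpoint=False)
tone = np.sin(8 * angle)
raw = np.tile(tone, n_revs) + rng.normal(0, 2.0, spr * n_revs)
tsa = time_synchronous_average(raw, spr)
print(np.abs(tsa - tone).max())  # residual noise ~ 2/sqrt(400)
```

    The denoised TSA signal is then what a CWT would be applied to in the paper's scheme.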

  8. Turnaround Aid Raising Hopes, Also Concerns

    ERIC Educational Resources Information Center

    Klein, Alyson

    2009-01-01

    As the U.S. Department of Education prepares to throw $3 billion in one-time money on the table to improve perennially foundering schools, a gulf is emerging between what federal officials would like to see done with the funds and what many districts say is their capacity--and inclination--to deliver. While some districts say the federal largess…

  9. Effect of spectral time-lag correlation coefficient and signal averaging on airborne CO2 DIAL measurements

    NASA Astrophysics Data System (ADS)

    Ben-David, Avishai; Vanderbeek, Richard G.; Gotoff, Steven W.; D'Amico, Francis M.

    1997-10-01

    The effects of flight geometry, signal averaging and time-lag correlation coefficient on airborne CO2 DIAL lidar measurements are shown in simulations and field measurements. These factors have implications for multi-vapor measurements and also for measuring a single vapor with a wide absorption spectrum, for which one would like to make DIAL measurements at many wavelengths across the absorption spectrum of the gas. Thus it is of interest to know how many wavelengths and how many groups of wavelengths can be used effectively in DIAL measurements. Our data indicate that for our lidar about 80 wavelengths can be used for DIAL measurements of a stationary vapor. The lidar signal is composed of fluctuations with three time scales: a very short time scale due to system noise which is faster than the data acquisition sampling rate of the receiver, a medium time scale due to atmospheric turbulence, and a long time scale due to slow atmospheric transmission drift from aerosol inhomogeneities. The decorrelation time scale of fluctuations for airborne lidar measurements depends on the flight geometry.
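
    For context, the standard two-wavelength DIAL retrieval behind such measurements can be sketched as follows (textbook form with hypothetical numbers, not the authors' processing code):

```python
import numpy as np

def dial_number_density(p_on_near, p_on_far, p_off_near, p_off_far,
                        delta_sigma, delta_range):
    """Path-averaged absorber number density over one range cell.

    delta_sigma: differential absorption cross section (cm^2)
    delta_range: range-cell length (cm); returns density in cm^-3.
    Taking the ratio of on-line to off-line returns cancels the common
    (aerosol/system) terms, leaving only the differential absorption.
    """
    ratio = (p_off_far * p_on_near) / (p_on_far * p_off_near)
    return np.log(ratio) / (2.0 * delta_sigma * delta_range)

# Synthetic check: build returns for a known density of 1e12 cm^-3.
n_true, dsig, dr = 1.0e12, 1.0e-18, 1.0e4           # 100 m range cell
p_off_near, p_off_far = 1.0, 0.9                    # common extinction only
p_on_near = 1.0
p_on_far = 0.9 * np.exp(-2.0 * n_true * dsig * dr)  # extra on-line absorption
n_est = dial_number_density(p_on_near, p_on_far, p_off_near, p_off_far, dsig, dr)
print(n_est)  # recovers ~1.0e12
```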

  10. Vibration measurement of the tympanic membrane of guinea pig temporal bones using time-averaged speckle pattern interferometry

    NASA Astrophysics Data System (ADS)

    Wada, Hiroshi; Ando, Masayoshi; Takeuchi, Masataka; Sugawara, Hironori; Koike, Takuji; Kobayashi, Toshimitsu; Hozawa, Koji; Gemma, Takashi; Nara, Makoto

    2002-05-01

    "Time-averaged holography" and "holographic interferometry" enable recording of the complete vibration pattern of a surface within several seconds. The results appear in the form of fringes. Vibration amplitudes smaller than 100 nm are not readily measurable by these techniques, because such small amplitudes produce variations in gray level, but not fringes. In practice, to obtain clear fringes in these measurements, stimulus sound pressures higher than 100 dB SPL must be used. The phase of motion is also not obtainable from such fringe techniques. In this study, a sinusoidal phase modulation technique is described, which allows detection of both small amplitudes of motion and their phase from time-averaged speckle pattern interferometry. In this technique, the laser injection current is modulated and digital image processing is used to analyze the measured patterns. When the sound-pressure level of stimuli is between 70 and 85 dB SPL, this system is applied to measure the vibratory response of the tympanic membrane (TM) of guinea pig temporal bones at frequencies up to 4 kHz where complicated vibration modes are observed. The effect of the bulla on TM displacements is also quantified. Results indicate that this system is capable of measuring the nanometer displacements of the TM, produced by stimuli of 70 dB SPL.

  11. Occupational dimethylformamide exposure. 1. Diffusive sampling of dimethylformamide vapor for determination of time-weighted average concentration in air.

    PubMed

    Yasugi, T; Kawai, T; Mizunuma, K; Horiguchi, S; Iguchi, H; Ikeda, M

    1992-01-01

    A diffusive sampling method with water as absorbent was examined in comparison with 3 conventional methods of diffusive sampling with carbon cloth as absorbent, pumping through National Institute of Occupational Safety and Health (NIOSH) charcoal tubes, and pumping through NIOSH silica gel tubes to measure time-weighted average concentration of dimethylformamide (DMF). DMF vapors of constant concentrations at 3-110 ppm were generated by bubbling air at constant velocities through liquid DMF followed by dilution with fresh air. Both types of diffusive samplers could either absorb or adsorb DMF in proportion to time (0.25-8 h) and concentration (3-58 ppm), except that the DMF adsorbed was below the measurable amount when carbon cloth samplers were exposed at 3 ppm for less than 1 h. When both diffusive samplers were loaded with DMF and kept in fresh air, the DMF in water samplers stayed unchanged for at least 12 h. The DMF in carbon cloth samplers showed a decay with a half-time of 14.3 h. When the carbon cloth was taken out immediately after termination of DMF exposure, wrapped in aluminum foil, and kept refrigerated, however, there was no measurable decrease in DMF for at least 3 weeks. When the air was drawn at 0.2 l/min, a breakthrough of the silica gel tube took place at about 4,000 ppm.min (as the lower 95% confidence limit), whereas charcoal tubes could tolerate even heavier exposures, suggesting that both tubes are suitable for measuring the 8-h time-weighted average of DMF at 10 ppm. PMID:1577523
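
    The 8-h time-weighted average targeted by these samplers is simply a duration-weighted mean; an illustrative calculation with a hypothetical exposure profile:

```python
def time_weighted_average(samples, shift_hours=8.0):
    """8-h TWA from (concentration_ppm, duration_h) pairs.

    Assumes zero exposure for any unaccounted time in the shift,
    per the usual occupational-hygiene convention.
    """
    total_dose = sum(c * t for c, t in samples)  # ppm-hours
    return total_dose / shift_hours

# Hypothetical shift: 4 h at 15 ppm, 2 h at 5 ppm, 2 h unexposed.
twa = time_weighted_average([(15.0, 4.0), (5.0, 2.0), (0.0, 2.0)])
print(twa)  # -> 8.75 ppm
```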

  12. Comparison of Techniques to Estimate Ammonia Emissions at Cattle Feedlots Using Time-Averaged and Instantaneous Concentration Measurements

    NASA Astrophysics Data System (ADS)

    Shonkwiler, K. B.; Ham, J. M.; Williams, C. M.

    2013-12-01

    Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances and deposit in sensitive regions, potentially causing harm to local ecosystems. However, quantifying the emissions of ammonia from CAFOs through direct measurement is very difficult and costly to perform. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured using inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are only deployed when wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data to modeled emissions from more sophisticated backward Lagrangian stochastic (bLs) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems were deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada) which continuously measured ammonia concentrations instantaneously over a 225-m path. This particular laser is utilized in agricultural settings, and in combination with a bLs model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. This study will first

  13. On the turn-around phenomenon in n-MOS transistors under NBTI conditions

    NASA Astrophysics Data System (ADS)

    Benabdelmoumene, A.; Djezzar, B.; Chenouf, A.; Tahi, H.; Zatout, B.; Kechouane, M.

    2016-07-01

    We have experimentally analyzed the negative bias temperature instability (NBTI) stress/recovery cycle on n-channel metal oxide semiconductor field effect transistors (n-MOSFETs). Data obtained by current-voltage (I-V) and charge pumping (CP) techniques have revealed a turn-around phenomenon in both the threshold voltage shift (ΔVth) and the maximum CP current shift (ΔICP-Max). This allows us to separate the evolution of interface traps (permanent) and near-interfacial oxide traps, or "border traps" (recoverable), as a function of the stress time. The ability of separation comes from the fact that interface and oxide traps induce opposite shifts in ΔVth. In contrast to NBTI on n-MOSFETs, NBTI on p-MOSFETs is unable to achieve trap separation because both trap types induce shifts in the same direction. Exploiting the turn-around effect, we have been able to follow the evolution of the degradation over the stress time. NBTI stress/relaxation cycle CP measurements on n-MOSFETs have shown a progressive creation of traps, starting from the interface traps to near-interfacial traps. This new and simple procedure will give a deeper insight into the dynamics of trap buildup under NBTI conditions.

  14. Bayesian model averaging method for evaluating associations between air pollution and respiratory mortality: a time-series study

    PubMed Central

    Fang, Xin; Li, Runkui; Kan, Haidong; Bottai, Matteo; Fang, Fang

    2016-01-01

    Objective To demonstrate an application of Bayesian model averaging (BMA) with generalised additive mixed models (GAMM) and provide a novel modelling technique to assess the association between inhalable coarse particles (PM10) and respiratory mortality in time-series studies. Design A time-series study using regional death registry between 2009 and 2010. Setting 8 districts in a large metropolitan area in Northern China. Participants 9559 permanent residents of the 8 districts who died of respiratory diseases between 2009 and 2010. Main outcome measures Per cent increase in daily respiratory mortality rate (MR) per interquartile range (IQR) increase of PM10 concentration and corresponding 95% confidence interval (CI) in single-pollutant and multipollutant (including NOx, CO) models. Results The Bayesian model averaged GAMM (GAMM+BMA) and the optimal GAMM of PM10, multipollutants and principal components (PCs) of multipollutants showed comparable results for the effect of PM10 on daily respiratory MR, that is, one IQR increase in PM10 concentration corresponded to 1.38% vs 1.39%, 1.81% vs 1.83% and 0.87% vs 0.88% increase, respectively, in daily respiratory MR. However, GAMM+BMA gave slightly but noticeably wider CIs for the single-pollutant model (−1.09 to 4.28 vs −1.08 to 3.93) and the PCs-based model (−2.23 to 4.07 vs −2.03 to 3.88). The CIs of the multiple-pollutant model from the two methods are similar, that is, −1.12 to 4.85 versus −1.11 to 4.83. Conclusions The BMA method may represent a useful tool for modelling uncertainty in time-series studies when evaluating the effect of air pollution on fatal health outcomes. PMID:27531727
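
    The model-averaging step can be illustrated generically: BMA weights computed from approximate log marginal likelihoods (e.g. −BIC/2) and the standard BMA combination of within- and between-model uncertainty. This is the textbook form, not the paper's GAMM machinery:

```python
import numpy as np

def bma_weights(log_evidence):
    """Posterior model probabilities from log marginal likelihoods
    (softmax with max-subtraction for numerical stability)."""
    z = np.asarray(log_evidence, dtype=float)
    w = np.exp(z - z.max())
    return w / w.sum()

def bma_combine(estimates, variances, weights):
    """BMA point estimate and its variance: weighted within-model
    variance plus the between-model spread of the estimates."""
    est = np.sum(weights * np.asarray(estimates))
    var = np.sum(weights * (np.asarray(variances)
                            + (np.asarray(estimates) - est) ** 2))
    return est, var

w = bma_weights([-10.0, -10.0, -12.0])         # two near-equal models dominate
est, var = bma_combine([1.0, 3.0], [0.2, 0.2], np.array([0.5, 0.5]))
print(w, est, var)
```

    The widened CIs the paper reports come from exactly this between-model term, which a single "optimal" model omits.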

  15. Detection of turbulent coherent motions in a forest canopy part II: Time-scales and conditional averages

    NASA Astrophysics Data System (ADS)

    Collineau, Serge; Brunet, Yves

    1993-10-01

    Turbulent exchanges between plant canopies and the atmosphere are known to be strongly affected by intermittent coherent motions, which appear on time traces of turbulent variables as periodic, large-amplitude excursions from the mean. Detecting these features requires objective and powerful signal analysis techniques. We investigate here the possibilities offered by the recently developed wavelet transform, presented in a companion paper. For this purpose, a set of data acquired in a 13.5 m high pine forest in southwestern France was used, which provided time series of wind velocities and air temperature recorded at two levels simultaneously, under moderately unstable conditions. Firstly, a duration scale of the active part of coherent motions was estimated from the wavelet variance. Then, we focused on the detection itself of large-scale features; several wavelet functions were tested, and the results compared with those obtained from more classical conditional sampling methods such as VITA and WAG. A mean time interval Δ = 1.8 h/u* (h being the canopy height and u* the friction velocity) between contiguous coherent motions was obtained. The features extracted from the various traces and ensemble-averaged over 30 min periods appeared very similar throughout the four hours of data studied. They provided a dynamic description of the ejection-sweep process, readily observable at both levels. An alternate Reynolds decomposition of the instantaneous turbulent fields, using the conditionally averaged signals, allowed the relative importance of large- and small-scale contributions to momentum and heat fluxes to be estimated. The results were found to be in good agreement with comparable studies.
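
    Once detection indices are in hand, the conditional (ensemble) averaging used here reduces to aligning and averaging fixed-length segments; a generic sketch with a synthetic signal (not the paper's data or detection scheme):

```python
import numpy as np

def conditional_average(signal, event_indices, half_window):
    """Ensemble-average fixed-length segments centred on detected events,
    discarding events too close to the record ends."""
    n = len(signal)
    segments = [signal[i - half_window : i + half_window]
                for i in event_indices
                if half_window <= i and i + half_window <= n]
    return np.mean(segments, axis=0)

# Demo: a ramp-like "ejection-sweep" pattern buried in noise at known times.
rng = np.random.default_rng(4)
pattern = np.concatenate([np.linspace(0, 1, 20), np.linspace(1, -1, 20)])
sig = rng.normal(0, 0.5, 10_000)
events = np.arange(200, 9800, 400)
for i in events:
    sig[i - 20 : i + 20] += pattern
avg = conditional_average(sig, events, half_window=20)
print(np.abs(avg - pattern).max())  # noise reduced by ~1/sqrt(len(events))
```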

  16. Fluid trajectory evaluation based on an ensemble-averaged cross-correlation in time-resolved PIV

    NASA Astrophysics Data System (ADS)

    Jeon, Young Jin; Chatellier, Ludovic; David, Laurent

    2014-07-01

    A novel multi-frame particle image velocimetry (PIV) method, able to evaluate a fluid trajectory by means of an ensemble-averaged cross-correlation, is introduced. The method integrates the advantages of state-of-the-art time-resolved PIV (TR-PIV) methods to further enhance both robustness and dynamic range. The fluid trajectory follows a polynomial model with a prescribed order. A set of polynomial coefficients, which maximizes the ensemble-averaged cross-correlation value across the frames, is regarded as the most appropriate solution. To achieve convergence of the trajectory in terms of polynomial coefficients, an ensemble-averaged cross-correlation map is constructed by sampling cross-correlation values near the predictor trajectory with respect to an imposed change of each polynomial coefficient. A relation between the given change and the corresponding cross-correlation maps, which can be calculated from the ordinary cross-correlation, is derived. A disagreement between the computational domain and the corresponding physical domain is compensated for by introducing a Jacobian matrix based on the image deformation scheme in accordance with the trajectory. The increased cost of the convergence calculation, associated with the nonlinearity of the fluid trajectory, is moderated by means of a V-cycle iteration. To validate the enhancements of the present method, quantitative comparisons with state-of-the-art TR-PIV methods, e.g., the adaptive temporal interval, the multi-frame pyramid correlation and the fluid trajectory correlation, were carried out using synthetically generated particle image sequences. The performances of the tested methods are discussed in algorithmic terms. A high-rate TR-PIV experiment of a flow over an airfoil demonstrates the effectiveness of the present method. It is shown that the present method is capable of reducing random errors in both velocity and material acceleration while suppressing spurious temporal fluctuations due to measurement noise.

  17. Plio-Pleistocene paleomagnetic secular variation and time-averaged field: Ruiz-Tolima volcanic chain, Colombia

    NASA Astrophysics Data System (ADS)

    Sánchez-Duque, A.; Mejia, V.; Opdyke, N. D.; Huang, K.; Rosales-Rivera, A.

    2016-02-01

    Paleomagnetic results obtained from 47 Plio-Pleistocene volcanic flows from the Ruiz-Tolima Volcanic Chain (Colombia) are presented. The mean direction of magnetization among these flows, which comprise normal (n = 43) and reversed (n = 4) polarities, is Dec = 1.8°, Inc = 3.2°, α95 = 5.0°, and κ = 18.4. This direction of magnetization coincides with GAD plus a small persistent axial quadrupolar component (around 5%) at the site-average latitude (4.93°). This agreement is robust after applying several selection criteria (α95 < 10°; α95 < 5.5°; polarities: normal, reversed, and tentatively transitional). The data are in agreement with Model G proposed by McElhinny and McFadden (1997), and the fit is improved when sites tentatively identified as transitional (two that otherwise have normal polarity) are excluded from the calculations. The compliance with the above-mentioned time-averaged field and paleosecular variation models is also observed in many recent similar studies from low latitudes, with the exception of results from the Galapagos Islands, which coincide with GAD and tend to be near-sided.
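Studies like this one summarize directional scatter as the angular dispersion of virtual geomagnetic poles (VGPs) about the spin axis; a minimal sketch of that statistic, assuming dispersion is taken about the geographic pole (function name and inputs are illustrative):

```python
import numpy as np

def vgp_angular_dispersion(vgp_latitudes_deg):
    # The angular distance of each VGP from the spin axis (geographic pole)
    # is its colatitude; the scatter S is the root mean square of those
    # angles with an N-1 divisor.
    theta = 90.0 - np.asarray(vgp_latitudes_deg, dtype=float)
    n = theta.size
    return float(np.sqrt(np.sum(theta**2) / (n - 1)))

# Four hypothetical VGP latitudes (degrees)
print(round(vgp_angular_dispersion([80.0, 85.0, 90.0, 75.0]), 2))  # -> 10.8
```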

  18. An Integrated Gate Turnaround Management Concept Leveraging Big Data Analytics for NAS Performance Improvements

    NASA Technical Reports Server (NTRS)

    Chung, William W.; Ingram, Carla D.; Ahlquist, Douglas Kurt; Chachad, Girish H.

    2016-01-01

    "Gate Turnaround" plays a key role in National Airspace System (NAS) gate-to-gate performance by receiving aircraft when they reach their destination airport and delivering aircraft into the NAS upon departure from the gate and subsequent takeoff. The time spent at the gate in meeting the planned departure time is influenced by many factors, often with considerable uncertainty. Uncertainties such as weather, early or late arrivals, disembarking and boarding passengers, unloading/reloading cargo, aircraft logistics/maintenance services and ground handling, traffic in ramp and movement areas for taxi-in and taxi-out, and departure queue management for takeoff are likely encountered on a daily basis. The Integrated Gate Turnaround Management (IGTM) concept leverages relevant historical data to support optimization of gate operations, which include arrival, at-the-gate, and departure activities subject to constraints (e.g., available gates on arrival, ground crew and equipment for the gate turnaround, and over-capacity demand upon departure), and collaborative decision making. The IGTM concept provides effective information services and decision tools to stakeholders, such as airline dispatchers, gate agents, airport operators, ramp controllers, and air traffic control (ATC) traffic managers and ground controllers, to mitigate uncertainties arising from both nominal and off-nominal airport gate operations. IGTM will provide NAS stakeholders customized decision-making tools through a User Interface (UI) by leveraging historical data (Big Data), net-enabled Air Traffic Management (ATM) live data, and analytics according to dependencies among NAS parameters, for the stakeholders to manage and optimize NAS performance in the gate turnaround domain. The application will give stakeholders predictable results based on past and current NAS performance according to selected decision trees through the UI. 
The predictable results are generated based on analysis of the

  19. Quantification of non-linearities as a function of time averaging in regional air quality modeling applications

    NASA Astrophysics Data System (ADS)

    Thunis, P.; Clappier, A.; Pisoni, E.; Degraeuwe, B.

    2015-02-01

    Air quality models, which are nowadays used for a wide range of purposes (i.e. assessment, forecast, planning), see their intrinsic complexity progressively increasing as better knowledge of atmospheric chemistry processes is gained. As a result of this increased complexity, potential non-linearities are implicitly and/or explicitly incorporated in the system. These non-linearities represent a key and challenging aspect of air quality modeling, especially when assessing the robustness of model responses. In this work the importance of non-linear effects in air quality modeling is quantified, especially as a function of time averaging. A methodology is proposed to decompose the concentration change resulting from an emission reduction over a given domain into its linear and non-linear contributions for each precursor, as well as into the contribution resulting from the interactions among precursors. Simulations with the LOTOS-EUROS model have been performed by TNO over three regional geographical areas in Europe for this analysis. In all three regions the non-linear effects for PM10 and PM2.5 are shown to be relatively minor for yearly and monthly averages, whereas they become significant for daily average values. For ozone, non-linearities already become important for monthly averages in some regions. An approach which explicitly deals with monthly variations therefore seems more appropriate for O3. In general, non-linearities are more important at locations where concentrations are lowest, i.e. at urban locations for O3 and at rural locations for PM10 and PM2.5. Finally, the impact of spatial resolution (tested by comparing coarse and fine resolution simulations) on the degree of non-linearity has been shown to be minor as well. The conclusions developed here are model dependent and runs should be repeated with the particular model of interest, but the proposed methodology allows, with a limited number of runs, to identify where efforts should be focused in order to
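The decomposition described above can be illustrated with a brute-force version: run the model with each precursor reduced alone and with both reduced together; whatever the joint response fails to explain as the sum of single-precursor responses is the non-linear interaction term. Function and variable names below are illustrative, not from the paper:

```python
def interaction_term(c_base, c_red_a, c_red_b, c_red_ab):
    # Concentration changes relative to the baseline run
    d_a = c_red_a - c_base      # reduce precursor A only
    d_b = c_red_b - c_base      # reduce precursor B only
    d_ab = c_red_ab - c_base    # reduce A and B together
    # Non-linear interaction: joint response minus sum of single responses
    return d_ab - (d_a + d_b)

# Hypothetical PM10 averages (ug/m3) from four model runs
print(interaction_term(20.0, 18.0, 19.0, 16.5))  # -> -0.5
```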

  20. On quality control procedures for solar radiation and meteorological measures, from subhourly to monthly average time periods

    NASA Astrophysics Data System (ADS)

    Espinar, B.; Blanc, P.; Wald, L.; Hoyer-Klick, C.; Schroedter-Homscheidt, M.; Wanderer, T.

    2012-04-01

    Meteorological data measured by ground stations are often a key element in the development and validation of methods exploiting satellite images. These data are considered as a reference against which satellite-derived estimates are compared. Long-term radiation and meteorological measurements are available from a large number of measuring stations. However, close examination of the data often reveals a lack of quality, often for extended periods of time. This lack of quality has, in many cases, been the reason for the rejection of large amounts of available data. Data quality must be checked before use in order to guarantee the inputs for the methods used in modelling, monitoring, forecasting, etc. To control their quality, data should be submitted to several conditions or tests. After this checking, data not flagged by any of the tests are released as plausible data. In this work, a bibliographical survey of quality control tests has been performed for the common meteorological variables (ambient temperature, relative humidity and wind speed) and for the usual solar radiometric variables (horizontal global and diffuse components of the solar radiation and the beam normal component). The different tests have been grouped according to the variable and the averaging time period (sub-hourly, hourly, daily and monthly averages). The quality tests may be classified as follows: • Range checks: tests that verify values are within a specific range. There are two types of range checks, those based on extrema and those based on rare observations. • Step checks: tests aimed at detecting unrealistic jumps or stagnation in the time series. • Consistency checks: tests that verify the relationship between two or more time series. The gathered quality tests are applicable at all latitudes, as they have not been optimized regionally or seasonally, with the aim of being generic. They have been applied to ground measurements in several geographic locations, what
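The three groups of tests listed above can be sketched as simple vectorized checks; the thresholds here are placeholders, not values from the cited literature:

```python
import numpy as np

def range_check(x, lo, hi):
    # Flag values outside physically plausible extrema
    x = np.asarray(x, dtype=float)
    return (x < lo) | (x > hi)

def step_check(x, max_step):
    # Flag unrealistic jumps between consecutive samples
    x = np.asarray(x, dtype=float)
    flags = np.zeros(x.size, dtype=bool)
    flags[1:] = np.abs(np.diff(x)) > max_step
    return flags

def consistency_check(global_h, diffuse_h):
    # Diffuse horizontal irradiance should not exceed global horizontal
    return np.asarray(diffuse_h, dtype=float) > np.asarray(global_h, dtype=float)

temps = [21.0, 21.5, 60.0, 22.0]            # ambient temperature, degrees C
print(range_check(temps, -40.0, 50.0))       # flags the 60.0 reading
```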

  1. A Simulation Based Approach for Contingency Planning for Aircraft Turnaround Operation System Activities in Airline Hubs

    NASA Technical Reports Server (NTRS)

    Adeleye, Sanya; Chung, Christopher

    2006-01-01

    Commercial aircraft undergo a significant number of maintenance and logistical activities during the turnaround operation at the departure gate. By analyzing the sequencing of these activities, more effective turnaround contingency plans may be developed for logistical and maintenance disruptions. Turnaround contingency plans are particularly important as any kind of delay in a hub based system may cascade into further delays with subsequent connections. The contingency sequencing of the maintenance and logistical turnaround activities were analyzed using a combined network and computer simulation modeling approach. Experimental analysis of both current and alternative policies provides a framework to aid in more effective tactical decision making.

  2. Meeting the challenge of a group practice turnaround.

    PubMed

    Porn, L M

    2001-03-01

    Many healthcare organizations that acquired group practices to enhance their market share have found that the practices have not met their financial goals. Turning around a financially troubled, hospital-owned group practice is challenging but not impossible for healthcare organizations that take certain basic actions. Direction, data, desire, dedication, and drive must be present to effect the financial turnaround of a group practice. The healthcare organization needs to evaluate the practice's strategy and operations and identify the issues that are hindering the practice's ability to optimize revenues. Efforts to achieve profitable operations have to be ongoing.

  3. SSME 3-D Turnaround Duct flow analysis - CFD predictions

    NASA Technical Reports Server (NTRS)

    Brankovic, Andreja; Stowers, Steven T.; Mcconnaughey, Paul

    1988-01-01

    CFD analysis is presently employed to obtain an improved flowfield for an individual flowpath in the case of the Space Shuttle Main Engine's High Pressure Fuel Turbopump Turn-Around Duct (TAD), which conducts the flow exiting from the gas turbines into the fuel bowl. It is demonstrated that the application of CFD to TAD flow analysis, giving attention to the duct's configuration and to the number, shape, and alignment of the diffuser struts, can enhance understanding of flow physics and result in improved duct design and performance.

  4. Characteristic length scales and time-averaged transport velocities of suspended sediment in the mid-Atlantic Region, USA

    NASA Astrophysics Data System (ADS)

    Pizzuto, James; Schenk, Edward R.; Hupp, Cliff R.; Gellis, Allen; Noe, Greg; Williamson, Elyse; Karwan, Diana L.; O'Neal, Michael; Marquard, Julia; Aalto, Rolf; Newbold, Denis

    2014-02-01

    Watershed Best Management Practices (BMPs) are often designed to reduce loading from particle-borne contaminants, but the temporal lag between BMP implementation and improvement in receiving water quality is difficult to assess because particles are only moved downstream episodically, resting for long periods in storage between transport events. A theory is developed that describes the downstream movement of suspended sediment particles accounting for the time particles spend in storage given sediment budget data (by grain size fraction) and information on particle transit times through storage reservoirs. The theory is used to define a suspended sediment transport length scale that describes how far particles are carried during transport events, and to estimate a downstream particle velocity that includes time spent in storage. At 5 upland watersheds of the mid-Atlantic region, transport length scales for silt-clay range from 4 to 60 km, while those for sand range from 0.4 to 113 km. Mean sediment velocities for silt-clay range from 0.0072 km/yr to 0.12 km/yr, while those for sand range from 0.0008 km/yr to 0.20 km/yr, 4-6 orders of magnitude slower than the velocity of water in the channel. These results suggest lag times of 100-1000 years between BMP implementation and effectiveness in receiving waters such as the Chesapeake Bay (where BMPs are located upstream of the characteristic transport length scale). Many particles likely travel much faster than these average values, so further research is needed to determine the complete distribution of suspended sediment velocities in real watersheds.
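The orders of magnitude above follow from a simple ratio: a particle that moves one transport length per episode and then rests for centuries in storage has a long-term velocity dominated by the storage time. A back-of-the-envelope sketch with illustrative numbers (not taken from the study's sediment budgets):

```python
def long_term_velocity(transport_length_km, storage_time_yr):
    # Transport events are brief relative to storage, so the mean downstream
    # velocity is approximately one transport length per storage residence time.
    return transport_length_km / storage_time_yr

# e.g. silt-clay carried 4 km per event, ~500 yr average rest in storage
print(long_term_velocity(4.0, 500.0))  # -> 0.008 km/yr, within the silt-clay range above
```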

  5. Characteristic length scales and time-averaged transport velocities of suspended sediment in the mid-Atlantic Region, USA

    USGS Publications Warehouse

    Pizzuto, James; Schenk, Edward R.; Hupp, Cliff R.; Gellis, Allen; Noe, Greg; Williamson, Elyse; Karwan, Diana L.; O'Neal, Michael; Marquard, Julia; Aalto, Rolf; Newbold, Denis

    2014-01-01

    Watershed Best Management Practices (BMPs) are often designed to reduce loading from particle-borne contaminants, but the temporal lag between BMP implementation and improvement in receiving water quality is difficult to assess because particles are only moved downstream episodically, resting for long periods in storage between transport events. A theory is developed that describes the downstream movement of suspended sediment particles accounting for the time particles spend in storage given sediment budget data (by grain size fraction) and information on particle transit times through storage reservoirs. The theory is used to define a suspended sediment transport length scale that describes how far particles are carried during transport events, and to estimate a downstream particle velocity that includes time spent in storage. At 5 upland watersheds of the mid-Atlantic region, transport length scales for silt-clay range from 4 to 60 km, while those for sand range from 0.4 to 113 km. Mean sediment velocities for silt-clay range from 0.0072 km/yr to 0.12 km/yr, while those for sand range from 0.0008 km/yr to 0.20 km/yr, 4–6 orders of magnitude slower than the velocity of water in the channel. These results suggest lag times of 100–1000 years between BMP implementation and effectiveness in receiving waters such as the Chesapeake Bay (where BMPs are located upstream of the characteristic transport length scale). Many particles likely travel much faster than these average values, so further research is needed to determine the complete distribution of suspended sediment velocities in real watersheds.

  6. An Exploration of Discontinuous Time Synchronous Averaging for Helicopter HUMS Using Cruise and Terminal Area Vibration Data

    NASA Technical Reports Server (NTRS)

    Huff, Edward M.; Mosher, Marianne; Barszcz, Eric

    2002-01-01

    Recent research using NASA Ames AH-1 and OH-58C helicopters, and NASA Glenn test rigs, has shown that in-flight vibration data are typically non-stationary [1-4]. The nature and extent of this non-stationarity is most likely produced by several factors operating simultaneously. The aerodynamic flight environment and pilot commands provide continuously changing inputs, with a complex dynamic response that includes automatic feedback control from the engine regulator. It would appear that the combined effects operate primarily through an induced torque profile, which causes concomitant stress modulation at the individual internal gear meshes in the transmission. This notion is supported by several analyses, which show that upwards of 93% of the vibration signal's variance can be explained by knowledge of torque alone. That this relationship is stronger in an AH-1 than an OH-58, where measured non-stationarity is greater, suggests that the overall mass of the vehicle is an important consideration. In the lighter aircraft, the unsteady aerodynamic influences transmit relatively greater unsteady dynamic forces to the mechanical components, quite possibly contributing to its greater non-stationarity. In a recent paper using OH-58C pinion data [5], the authors have shown that in computing a time synchronous average (TSA) for various single-value metric computations, an effective trade-off can be obtained between sample size and measured stationarity by using data from only a single mesh cycle. A mesh cycle, which is defined as the number of rotations required for the gear teeth to return to their original mating position, has the property of representing all of the discrete phase angles of the opposing gears exactly once in the average. Measured stationarity is probably maximized because a single mesh cycle of the pinion gear occurs over a very short span of time, during which time-dependent non-stationary effects are kept to a minimum. Clearly, the advantage of local
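Time synchronous averaging itself is straightforward to sketch: resample each shaft revolution (or mesh cycle) onto a common angular grid and average, so that shaft-synchronous components reinforce while asynchronous vibration and noise average out. The function name and the tachometer-index interface below are illustrative assumptions:

```python
import numpy as np

def time_synchronous_average(signal, rev_start_indices, samples_per_rev):
    # rev_start_indices: sample index at which each revolution begins,
    # e.g. derived from a once-per-rev tachometer pulse
    phase = np.linspace(0.0, 1.0, samples_per_rev, endpoint=False)
    cycles = []
    for start, end in zip(rev_start_indices[:-1], rev_start_indices[1:]):
        seg = np.asarray(signal[start:end], dtype=float)
        t = np.linspace(0.0, 1.0, seg.size, endpoint=False)
        cycles.append(np.interp(phase, t, seg))  # resample onto common grid
    return np.mean(cycles, axis=0)
```

For a signal that repeats exactly once per revolution, the TSA reproduces the revolution-periodic waveform while attenuating everything else.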

  7. BATSE Observations of Gamma-Ray Burst Spectra. Part 3; Low-Energy Behavior of Time-Averaged Spectra

    NASA Technical Reports Server (NTRS)

    Preece, R. D.; Briggs, M. S.; Pendleton, G. N.; Paciesas, W. S.; Matteson, J. L.; Band, D. L.; Skelton, R. T.; Meegan, C. A.

    1996-01-01

    We analyze time-averaged spectra from 86 bright gamma-ray bursts from the first 5 years of the Burst And Transient Source Experiment (BATSE) on board the Compton Gamma Ray Observatory to determine whether the lowest energy data are consistent with a standard spectral form fit to the data at all energies. The BATSE Spectroscopy Detectors have the capability to observe photons as low as 5 keV. Using the gamma-ray burst locations obtained with the BATSE Large Area Detectors, the Spectroscopy Detectors' low-energy response can be modeled accurately. This, together with a postlaunch calibration of the lowest energy Spectroscopy Detector discriminator channel, which can lie in the range 5-20 keV, allows spectral deconvolution over a broad energy range, approx. 5 keV to 2 MeV. The additional coverage allows us to search for evidence of excess emission, or for a deficit, below 20 keV. While no burst has a significant (greater than or equal to 3 sigma) deficit relative to a standard spectral model, we find that 12 bursts have excess low-energy emission, ranging between 1.2 and 5.8 times the model flux, that exceeds 5 sigma in significance. This is evidence for an additional low-energy spectral component in at least some bursts, or for deviations from the power-law spectral form typically used to model gamma-ray bursts at energies below 100 keV.

  8. Paleosecular variation and time-averaged field analysis over the last 10 Ma from a new global dataset (PSV10)

    NASA Astrophysics Data System (ADS)

    Cromwell, G.; Johnson, C. L.; Tauxe, L.; Constable, C.; Jarboe, N.

    2015-12-01

    Previous paleosecular variation (PSV) and time-averaged field (TAF) models draw on compilations of paleodirectional data that lack equatorial and high latitude sites and use latitudinal virtual geomagnetic pole (VGP) cutoffs designed to remove transitional field directions. We present a new selected global dataset (PSV10) of paleodirectional data spanning the last 10 Ma. We include all results calculated with modern laboratory methods, regardless of site VGP colatitude, that meet statistically derived selection criteria. We exclude studies that target transitional field states or identify significant tectonic effects, and correct for any bias from serial correlation by averaging directions from sequential lava flows. PSV10 has an improved global distribution compared with previous compilations, comprising 1519 sites from 71 studies. VGP dispersion in PSV10 varies with latitude, exhibiting substantially higher values in the southern hemisphere than at corresponding northern latitudes. Inclination anomaly estimates at many latitudes are within error of an expected GAD field, but significant negative anomalies are found at equatorial and mid-northern latitudes. Current PSV models, Model G and TK03, do not fit the observed PSV or TAF latitudinal behavior in PSV10, or in subsets of normal and reverse polarity data, particularly for southern hemisphere sites. Attempts to fit these observations with simple modifications to TK03 showed slight statistical improvements, but the misfits still exceed acceptable errors. The root-mean-square misfit of TK03 (and subsequent iterations) is substantially lower for the normal polarity subset of PSV10 compared to the reverse polarity data. Two-thirds of the data in PSV10 are normal polarity, most of which are from the last 5 Ma, so we develop a new TAF model using this subset of data. We use the resulting TAF model to explore whether new statistical PSV models can better describe our new global compilation.

  9. Reconstruction of a time-averaged midposition CT scan for radiotherapy planning of lung cancer patients using deformable registration

    SciTech Connect

    Wolthaus, J. W. H.; Sonke, J.-J.; Herk, M. van; Damen, E. M. F.

    2008-09-15

    for the clearly visible features (e.g., tumor and diaphragm). The shape of the tumor, with respect to that of the BH CT scan, was better represented by the MidP reconstructions than any of the 4D CT frames (including MidV; reduction of 'shape differences' was 66%). The MidP scans contained about one-third the noise of individual 4D CT scan frames. Conclusions: We implemented an accurate method to estimate the motion of structures in a 4D CT scan. Subsequently, a novel method to create a midposition CT scan (time-weighted average of the anatomy) for treatment planning with reduced noise and artifacts was introduced. Tumor shape and position in the MidP CT scan represents that of the BH CT scan better than MidV CT scan and, therefore, was found to be appropriate for treatment planning.

  10. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, other work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
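A common closed-form solution to this weighted averaging problem (consistent in spirit with the Note, though not necessarily its exact derivation) takes the average quaternion as the dominant eigenvector of the weighted outer-product matrix, which also resolves the q versus -q sign ambiguity:

```python
import numpy as np

def average_quaternion(quats, weights=None):
    # quats: (n, 4) array of unit quaternions; weights: (n,) scalar weights.
    # The average is the eigenvector of M = sum_i w_i q_i q_i^T with the
    # largest eigenvalue; since M is unchanged by q_i -> -q_i, the sign
    # ambiguity of quaternion attitude representations cannot bias the result.
    q = np.asarray(quats, dtype=float)
    w = np.ones(q.shape[0]) if weights is None else np.asarray(weights, dtype=float)
    m = (w[:, None] * q).T @ q
    vals, vecs = np.linalg.eigh(m)          # symmetric eigen-decomposition
    avg = vecs[:, np.argmax(vals)]
    return avg / np.linalg.norm(avg)
```

A naive component-wise mean of q and -q (the same attitude) would cancel to zero; this formulation returns the attitude itself.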

  11. Paleosecular variation and time-averaged field recorded in late Pliocene-Holocene lava flows from Mexico

    NASA Astrophysics Data System (ADS)

    Mejia, V.; BöHnel, H.; Opdyke, N. D.; Ortega-Rivera, M. A.; Lee, J. K. W.; Aranda-Gomez, J. J.

    2005-07-01

    This paper presents results from 13 paleomagnetic sites from an area west of Mexico City and 7 sites from an area of dispersed monogenetic volcanism in the state of San Luis Potosi, accompanied by seven 40Ar/39Ar radiometric dates. An analysis of secular variation and time-averaged paleomagnetic field in the Trans-Mexican Volcanic Belt (TMVB), using compiled data both newly obtained and from the literature, is presented. Interpretation can best be constrained after excluding from the data set sites that appear to be tectonically affected. The selected data include 187 sites of late Pliocene-Holocene age. The mean direction among these sites is Dec = 358.8°, Inc = 31.6°, α95 = 2.0°, k = 29. This direction does not overlap the expected geocentric axial dipole (GAD) but is consistent with a GAD plus a 5% quadrupole. The virtual geomagnetic pole scatter of this group of sites (12.7°, with lower and upper 95% confidence limits of 11.9° and 14.1°) is consistent with the value expected from Model G (13.6°).

  12. Time average neutralized migma: A colliding beam/plasma hybrid physical state as aneutronic energy source — A review

    NASA Astrophysics Data System (ADS)

    Maglich, Bogdan C.

    1988-08-01

    A D+ beam of kinetic energy Ti = 0.7 MeV was stored in a "simple mirror" magnetic field as self-colliding orbits, or migma, and neutralized by ambient, oscillating electrons whose bounce frequencies were externally controlled. The space charge density limit was exceeded by an order of magnitude without instabilities. Three nondestructive diagnostic methods allowed measurements of ion orbit distribution, ion storage times, ion energy distribution, nuclear reaction rate, and reaction product spectrum. Migma formed a disc 20 cm in diameter and 0.5 cm thick. Its ion density was sharply peaked in the center; the ion-to-electron temperature ratio was Ti/Te ≈ 10³; ion-electron temperature equilibrium was never reached. The volume-averaged and central D+ densities were n = 3.2 × 10⁹ cm⁻³ and nc = 3 × 10¹⁰ cm⁻³ respectively, compared to the space charge limit density nsc = 4 × 10⁸ cm⁻³. The energy confinement time was τc = 20-30 s, limited by charge exchange reactions with the residual gas in the vacuum (5 × 10⁻⁹ Torr). The ion energy loss rate was 1.4 keV/s. None of the instabilities that were observed in mirrors at several orders of magnitude lower density occurred. The proton energy spectrum for d + d → T + p + 4 MeV shows that the deuterons collided at an average crossing angle of 160°. Evidence for exponential density buildup has also been observed. Relative to Migma III results, and measured in terms of the product of ion energy E, density n, and confinement time τ, device performance was improved by a factor of 500. Using the central fast ion density, we obtained the triple product Tnτ ≅ 4 × 10¹⁴ keV s cm⁻³, which is greater than that of the best fusion devices. The luminosity (collision rate per unit cross section) was ≈ 10²⁹ cm⁻²s⁻¹, with 0.7 A ion current through the migma center. The stabilizing features of migma are: (1) large Larmor radius; (2) small canonical angular momentum; (3) short axial length z (disc shape); (4) nonadiabatic motions in r and z
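The quoted triple product can be checked directly from the figures in the abstract:

```python
# Triple product T*n*tau from the abstract's central-density numbers
T_keV = 700.0    # ion kinetic energy, 0.7 MeV
n_c = 3.0e10     # central D+ density, cm^-3
tau = 20.0       # energy confinement time, s (lower end of the 20-30 s range)
triple = T_keV * n_c * tau
print(f"{triple:.1e} keV s cm^-3")  # 4.2e+14, consistent with the ~4e14 quoted
```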

  13. Tunneling-injection-induced turnaround behavior of threshold voltage in thermally nitrided oxide n-channel metal-oxide-semiconductor field-effect transistors

    NASA Astrophysics Data System (ADS)

    Ma, Z. J.; Lai, P. T.; Liu, Z. H.; Fleischer, S.; Cheng, Y. C.

    1990-12-01

    The threshold voltage (VT) degradation of metal-oxide-semiconductor field-effect transistors (MOSFETs) with thermally nitrided oxide or pure oxide as gate dielectric was determined under Fowler-Nordheim (FN) stressing. A typical VT turnaround behavior was observed for both kinds of devices. The VT for nitrided oxide MOSFETs shifts more negatively than that for pure oxide MOSFETs during the initial period of FN stressing, whereas the opposite is true for the positive shift after the critical time at the turnaround point. The discovery that the shift of the substrate current peak exhibits similar turnaround behavior reinforces the above results. Meanwhile, the field-effect electron mobility and the maximum transconductance in the channel for nitrided oxide MOSFETs are only slightly degraded by stressing compared to those for pure oxide MOSFETs. The VT turnaround behavior can be explained as follows: net trapped charges in the oxide are initially positive (due to hole traps in the oxide) and result in the negative shift of VT. With increasing injection time, trapped electrons in the oxide as well as acceptor-type interface states increase. This results in the positive shift in VT. It is revealed that VT degradation in MOSFETs is dominated by the generation of acceptor-type interface states rather than by electron trapping in the oxide after the critical time.

  14. Low to Moderate Average Alcohol Consumption and Binge Drinking in Early Pregnancy: Effects on Choice Reaction Time and Information Processing Time in Five-Year-Old Children

    PubMed Central

    Kilburn, Tina R.; Eriksen, Hanne-Lise Falgreen; Underbjerg, Mette; Thorsen, Poul; Mortensen, Erik Lykke; Landrø, Nils Inge; Bakketeig, Leiv S.; Grove, Jakob; Sværke, Claus; Kesmodel, Ulrik Schiøler

    2015-01-01

    Background Deficits in information processing may be a core deficit after fetal alcohol exposure. This study was designed to investigate the possible effects of weekly low to moderate maternal alcohol consumption and binge drinking episodes in early pregnancy on choice reaction time (CRT) and information processing time (IPT) in young children. Method Participants were sampled based on maternal alcohol consumption during pregnancy. At the age of 60–64 months, 1,333 children were administered a modified version of the Sternberg paradigm to assess CRT and IPT. In addition, a test of general intelligence (WPPSI-R) was administered. Results Adjusted for a wide range of potential confounders, this study showed no significant effects of average weekly maternal alcohol consumption during pregnancy on CRT or IPT. There was, however, an indication of slower CRT associated with binge drinking episodes in gestational weeks 1–4. Conclusion This study observed no significant effects of average weekly maternal alcohol consumption during pregnancy on CRT or IPT as assessed by the Sternberg paradigm. However, there were some indications of CRT being associated with binge drinking during very early pregnancy. Further large-scale studies are needed to investigate effects of different patterns of maternal alcohol consumption on basic cognitive processes in offspring. PMID:26382068

  15. School Turnaround Fever: The Paradoxes of a Historical Practice Promoted as a New Reform

    ERIC Educational Resources Information Center

    Peck, Craig; Reitzug, Ulrich C.

    2014-01-01

    School "turnaround" has received significant attention recently in education literature and policy action, especially as a means to dramatically improve urban education. In current common education usage, "turnaround" refers to the rapid, significant improvement in the academic achievement of persistently low-achieving schools.…

  16. On the Edge: A Study of Small Private Colleges That Have Made a Successful Financial Turnaround

    ERIC Educational Resources Information Center

    Carey, Amy Bragg

    2013-01-01

    This dissertation was a qualitative research study regarding two small private universities and their process of transformation from an institution headed toward closure to a successful turnaround. The primary questions that guided the study included the factors and persons that contributed to the institutional turnaround, the issues and…

  17. On the Edge: A Study of Small Private Colleges That Have Made a Successful Financial Turnaround

    ERIC Educational Resources Information Center

    Carey, Amy Bragg

    2014-01-01

    This article describes a qualitative study that involved two small private universities, examining their process of transformation from institutions headed toward closure to institutions that underwent a successful turnaround. The primary questions that guided the study included the issues and circumstances that led to the need for a turnaround,…

  18. School Turnarounds: Resisting the Hype, Giving Them Hope. Education Outlook No. 2

    ERIC Educational Resources Information Center

    Hess, Frederick M.; Gift, Thomas

    2009-01-01

    Education reformers are abuzz over school "turnarounds," a simple idea that has undeniable appeal. School turnarounds offer the opportunity to take familiar educational institutions and improve them through coaching, mentoring, capacity building, best practices, and other existing tools. Unlike most reform efforts, which focus on incremental…

  19. How One Historically Underperforming Rural and Highly Diverse High School Achieved a Successful Turnaround

    ERIC Educational Resources Information Center

    Maxwell, Gerri M.; Huggins, Kristin S.; Scheurich, James J.

    2010-01-01

    "Central High School," a rural school composed largely of students of color from low income homes, experienced a quick, remarkable turnaround of student academic success, measured by State and Federal accountability standards. This turnaround began with an external restructuring grant initiative that required a state-approved external consultant.…

  20. Time averaging and stratigraphic disorder of molluscan assemblages in the Holocene sediments in the NE Adriatic (Piran)

    NASA Astrophysics Data System (ADS)

    Tomasovych, Adam; Gallmetzer, Ivo; Haselmair, Alexandra; Kaufman, Darrell S.; Zuschin, Martin

    2016-04-01

    Stratigraphic changes in the temporal resolution of fossil assemblages and the degree of their stratigraphic mixing in Holocene deposits are of high importance in paleoecology, conservation paleobiology and paleoclimatology. However, few studies have quantified downcore changes in time averaging and in stratigraphic disorder on the basis of dating of multiple shells occurring in individual stratigraphic layers. Here, we investigate downcore changes in the frequency distribution of postmortem ages of the infaunal bivalve Gouldia minima in two ~150 cm-thick piston cores (separated by more than 1 km) in the northern Adriatic Sea, close to the Slovenian city of Piran, at a depth of 24 m. We use radiocarbon-calibrated amino acid racemization to obtain postmortem ages of 564 shells, and quantify age-frequency distributions in 4-5 cm-thick stratigraphic intervals (with 20-30 specimens sampled per interval). The inter-quartile range for individual 4-5 cm-thick layers varies between 850 and 1,700 years, and the range encompassing 95% of the age data varies between 2,000 and 5,000 years in both cores. The uppermost sediments (20 cm) are age-homogenized and show that the median age of shells is ~700-800 years. The interval between 20 and 90 cm shows a gradual increase in median age from ~2,000 to ~5,000 years, with maximum ages ranging up to ~8,000 years. However, the lowermost parts of both cores show significant disorder, with a median age of 3,100-3,300 years. This temporal disorder implies that many shells were displaced vertically by ~1 m. The absolute and proportional abundance of the bivalve Gouldia minima strongly increases towards the tops of both cores. We hypothesize that such an increase in abundance, when coupled with depth-declining reworking, can explain the stratigraphic disorder because numerically abundant young shells from the top of the core were more likely buried to larger sediment depths than less frequent shells at intermediate sediment depths.
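
    The per-layer summary statistics used above (median age, inter-quartile range, and the range covering 95% of shell ages) can be sketched as follows; the ages here are synthetic stand-ins, not the measured amino acid racemization data.

```python
import numpy as np

# Synthetic shell ages per stratigraphic layer (stand-ins for the dated
# Gouldia minima shells; 25 "specimens" per layer, ages in years).
rng = np.random.default_rng(0)
layers = {
    "0-20 cm (age-homogenized top)": rng.gamma(2.0, 400.0, size=25),
    "20-90 cm": 2000.0 + rng.gamma(2.0, 900.0, size=25),
    "90-150 cm (disordered base)": 1500.0 + rng.gamma(2.0, 900.0, size=25),
}

# Per-layer time-averaging summaries, as in the study: median, inter-quartile
# range, and the range encompassing 95% of the age data.
for name, ages in layers.items():
    q25, q50, q75 = np.percentile(ages, [25, 50, 75])
    lo, hi = np.percentile(ages, [2.5, 97.5])
    print(f"{name}: median {q50:.0f} yr, IQR {q75 - q25:.0f} yr, 95% range {hi - lo:.0f} yr")
```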

  1. Time-averaged acoustic forces acting on a rigid sphere within a wide range of radii in an axisymmetric levitator

    NASA Astrophysics Data System (ADS)

    Foresti, Daniele; Nabavi, Majid; Poulikakos, Dimos

    2012-05-01

    Acoustic levitation is a physical phenomenon that arises when the acoustic radiation pressure is strong enough to overcome the gravitational force. It is a nonlinear phenomenon which can be predicted only if higher order terms are included in the acoustic field calculation. The study of acoustic levitation is usually conducted by solving the linear acoustic equation and bridging the gap with an analytical solution. Only recently has the scientific community shown interest in the full solution of the Navier-Stokes equations with the aim of investigating the acoustic radiation pressure in depth. We present herein a numerical model based on the Finite Volume Method (FVM) and Dynamic Mesh (DM) for the calculation of the acoustic radiation pressure acting on a rigid sphere inside an axisymmetric levitator, the most widely used and investigated type of levitator. In this work, we focus on the third resonance mode. The use of DM is new in the field of acoustic levitation and allows a more realistic simulation of the phenomenon, since no standing wave needs to be imposed as a boundary condition. The radiating plate is modeled as a rigid cylinder moving sinusoidally along the central axis. The time-averaged acoustic force exerted on the sphere is calculated for different radii Rs of the sphere (0.025 to 0.5 wavelengths). It is shown that the acoustic force increases in proportion to Rs^3 for small radii, then decreases when the standing wave condition is violated, and finally rises again in the travelling wave radiation pressure configuration. The numerical model is validated for the inviscid case against a Finite Element Method model of the linear acoustic field based on King's approximation.

  2. Hybrid averaging offers high-flow contrast by cost apportionment among imaging time, axial, and lateral resolution in optical coherence tomography angiography.

    PubMed

    Li, Peng; Cheng, Yuxuan; Li, Pei; Zhou, Liping; Ding, Zhihua; Ni, Yang; Pan, Cong

    2016-09-01

    The current temporal, wavelength, angular, and spatial averaging approaches trade imaging time and resolution for multiple independent measurements that improve the flow contrast in optical coherence tomography angiography (OCTA). We find that these averaging approaches are equivalent in principle, offering almost the same flow contrast enhancement as the number of averages increases. Based on this finding, we propose a hybrid averaging strategy for contrast enhancement by cost apportionment. We demonstrate that, compared with any individual approach, the hybrid averaging is able to offer a desired flow contrast without severe degradation of imaging time and resolution. Making use of the extended range of a VCSEL-based swept-source OCT, an angular averaging approach by path length encoding is also demonstrated for flow contrast enhancement. PMID:27607943
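
    The claimed equivalence of the averaging domains can be sketched with a simple independence model (my assumption, not the paper's derivation): if every average contributes an independent measurement, flow SNR grows roughly with the square root of the total number of averages, so a hybrid scheme reaches the same gain while spreading the cost across imaging time and resolution.

```python
import math

# Independence model (assumption): each averaging domain contributes
# independent measurements, so relative flow-contrast gain ~ sqrt(N_total)
# regardless of which domains the averages come from.
def contrast_gain(n_temporal, n_spectral, n_angular, n_spatial):
    return math.sqrt(n_temporal * n_spectral * n_angular * n_spatial)

# Pure temporal averaging: pays 8x in imaging time.
print(f"temporal only: {contrast_gain(8, 1, 1, 1):.2f}x")
# Hybrid apportionment: 2x time, 2x spectral (axial), 2x angular -- the same
# total gain, but no single cost grows by more than a factor of 2.
print(f"hybrid:        {contrast_gain(2, 2, 2, 1):.2f}x")
```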

  3. The Divisiveness of Diversity: President Bush's University of Michigan Commencement Speech as an Example of the Linguistic "Turnaround."

    ERIC Educational Resources Information Center

    Palczewski, Catherine Helen; Madsen, Arnie

    1993-01-01

    Uses President Bush's commencement speech to compare and contrast the "link and impact turnaround," an accepted form of argument in academic debate, with the "linguistic turnaround" as executed in political discourse. (NH)

  4. Time Averaging and Fitting of Nonlinear Metabolic Changes: The Issue of the Time Index Choice Applied to 31P MRS Investigation of Muscle Energetics

    NASA Astrophysics Data System (ADS)

    Simond, G.; Bendahan, D.; Cozzone, P. J.

    2001-03-01

    We present an exact analytical method dedicated to fitting time-dependent exponential-like changes in MR spectra. As an illustration, this method has been applied to fitting metabolic changes recorded by 31P MRS in human skeletal muscle occurring during a rest-exercise-recovery protocol. When recording metabolic changes with the accumulative method, the time averaging of the MR signals implies the choice of a time index for fitting any changes in the features of the associated MR spectra. A critical examination of the different ways (constant, linear, and exponential) of choosing the time index is reported. By numerical analysis, we have calculated the errors generated by the three methods and we have compared their sensitivity to noise. In the case of skeletal muscle, both constant and linear methods introduce large and uncontrolled errors for the whole set of metabolic parameters derived from [PCr] changes. In contrast, the exponential method affords a reliable estimation of critical parameters in muscle bioenergetics in both normal and pathological situations. This method is very easy to implement and provides an exact analytical solution to fitting changes in MR spectra recorded by the accumulative method.
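
    The core issue can be illustrated numerically (values assumed): when an exponential change s(t) = exp(-t/τ) is block-averaged over an acquisition window, only the "exponential" time index reproduces the averaged value exactly, while start-of-block ("constant") and midpoint ("linear") indices introduce systematic errors.

```python
import numpy as np

# Exponential change block-averaged over an acquisition window [t0, t0 + T]
# (tau, t0, T are assumed values). Which single time index should be assigned
# to the averaged spectrum?
tau, t0, T = 30.0, 0.0, 20.0    # time constant and averaging window, s

# Exact average of exp(-t/tau) over the window:
mean_s = (tau / T) * (np.exp(-t0 / tau) - np.exp(-(t0 + T) / tau))

for name, t_idx in [("constant (start)", t0),
                    ("linear (midpoint)", t0 + T / 2.0),
                    ("exponential", -tau * np.log(mean_s))]:
    err = np.exp(-t_idx / tau) - mean_s
    print(f"{name:18s} t = {t_idx:5.2f} s, error = {err:+.4f}")
```

    Only the exponential index makes the assigned sample agree with the block average; the error of the constant and midpoint indices grows with T/τ.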

  5. Time-averaged near-bed suspended sediment concentrations under waves and currents: Comparison of measured and model estimates

    NASA Astrophysics Data System (ADS)

    Cacchione, David A.; Thorne, Peter D.; Agrawal, Yogesh; Nidzieko, Nicholas J.

    2008-02-01

    Profiles of suspended sediment concentration and velocity were measured over a 15-day period at a near-shore site off Santa Cruz, CA in Monterey Bay. The concentration and velocity data were collected with an Acoustic Backscattering System (ABS) and Acoustic Current Profiler (ACP) that were mounted on a bottom tripod. High-resolution bottom scanning sonar was also attached to the tripod to provide images of bed features during the experiment. Hourly time-averaged near-bed concentrations of suspended sediment were calculated from three models and compared with the measurements. Surface waves and currents that were generated by a storm of moderate intensity caused bed stresses that exceeded threshold stress for D50=0.02 cm, the median size of the moderately well-sorted bottom sediment, over a period of about 7 days. Estimates of the concentration at 1 cm above the bottom, Ca1, were obtained using the ABS measurements. These observations have been compared with predictions for the concentration at 1 cm above the bottom, C1. Nielsen's models for reference concentration Co [Nielsen, P., 1986. Suspended sediment concentrations under waves. Coastal Engineering 10, 32-31; Nielsen, P., 1992. Coastal Bottom Boundary Layers and Sediment Transport, Advanced Series on Ocean Engineering. World Scientific, Hackensack, NJ.] are purely wave-based and do not include effects of bottom currents on bed stress and bedform scales. C1 calculated from this model compared well with measured Ca1 when currents were weak and small oscillatory ripples were observed in the sonar images. However, during the 3-day period of highest bottom stresses modeled C1 did not compare well to Ca1. The other two models for C1, Glenn and Grant [Glenn, S.M., Grant, W.D., 1987. A suspended sediment stratification correction for combined wave and current flows. Journal of Geophysical Research 92(C8), 8244-8264.] and van Rijn and Walstra [Van Rijn, L.C., Walstra, D.J.R., 2004. Description of TRANSPOR2004 and

  6. District Readiness to Support School Turnaround: A Users' Guide to Inform the Work of State Education Agencies and Districts

    ERIC Educational Resources Information Center

    Player, Daniel; Hambrick Hitt, Dallas; Robinson, William

    2014-01-01

    This guide provides state education agencies (SEAs) and districts (LEAs) with guidance about how to assess the district's readiness to support school turnaround initiatives. Often, school turnaround efforts focus only on the school's structure and leadership. Rarely do policymakers or practitioners think about school turnaround as a system-level…

  7. Study of modeling unsteady blade row interaction in a transonic compressor stage part 2: influence of deterministic correlations on time-averaged flow prediction

    NASA Astrophysics Data System (ADS)

    Liu, Yang-Wei; Liu, Bao-Jie; Lu, Li-Peng

    2012-04-01

    The average-passage equation system (APES) provides a rigorous mathematical framework for accounting for the unsteady blade row interaction through multistage compressors in a steady-state environment by introducing deterministic correlations (DC) that need to be modeled to close the equation system. The primary purpose of this study was to provide insight into the DC characteristics and the influence of DC on the time-averaged flow field of the APES. In Part 2 of this two-part paper, the influence of DC on the time-averaged flow field was systematically studied. Several time-averaging computations were conducted with various boundary conditions and DC for the downstream stator in a transonic compressor stage, employing the CFD solver developed in Part 1 of this two-part paper. These results were compared with the time-averaged unsteady flow field and the steady one. The study indicated that the circumferentially averaged DC can account for a major part of the unsteady effects on the spanwise redistribution of flow fields in compressors. Furthermore, it demonstrated that both deterministic stresses and deterministic enthalpy fluxes are necessary to reproduce the time-averaged flow field.

  8. Confidence intervals for time averages in the presence of long-range correlations, a case study on Earth surface temperature anomalies

    NASA Astrophysics Data System (ADS)

    Massah, M.; Kantz, H.

    2016-09-01

    Time averages, a standard tool in the analysis of environmental data, suffer severely from long-range correlations. The sample size needed to obtain a desired small confidence interval can be dramatically larger than for uncorrelated data. We present quantitative results for short- and long-range correlated Gaussian stochastic processes. Using these, we calculate confidence intervals for time averages of surface temperature measurements. Temperature time series are well known to be long-range correlated with Hurst exponents larger than 1/2. Multidecadal time averages are routinely used in the study of climate change. Our analysis shows that uncertainties of such averages are as large as for a single year of uncorrelated data.
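
    The scaling behind this result can be sketched as follows (prefactors set to 1 for illustration): for a long-range correlated process with Hurst exponent H, the variance of an N-point time average decays as N^(2H-2) rather than N^(-1), which collapses the effective number of independent samples.

```python
# Scaling sketch (prefactors assumed = 1): the variance of an N-point time
# average of a long-range correlated process decays as N^(2H-2), not N^(-1).
def ci_halfwidth(sigma, n, hurst):
    # approximate 95% CI half-width of the mean: 1.96 * sqrt(sigma^2 * N^(2H-2))
    return 1.96 * sigma * n ** (hurst - 1.0)

sigma, n = 1.0, 36500            # e.g. daily values over 100 years (assumed)
for h in (0.5, 0.65, 0.9):
    print(f"H = {h}: CI half-width ~ {ci_halfwidth(sigma, n, h):.4f}")

# Effective number of independent samples for H = 0.9:
n_equiv = n ** (2.0 * (1.0 - 0.9))   # N_eff = N^(2-2H)
print(f"effective sample size at H = 0.9: ~{n_equiv:.0f}")
```

    In this toy setting the 36,500 correlated samples at H = 0.9 carry the statistical weight of only a handful of independent ones, which is the qualitative point of the abstract.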

  9. Weaker axially dipolar time-averaged paleomagnetic field based on multidomain-corrected paleointensities from Galapagos lavas.

    PubMed

    Wang, Huapei; Kent, Dennis V; Rochette, Pierre

    2015-12-01

    The geomagnetic field is predominantly dipolar today, and high-fidelity paleomagnetic mean directions from all over the globe strongly support the geocentric axial dipole (GAD) hypothesis for the past few million years. However, the bulk of paleointensity data fails to coincide with the axial dipole prediction of a factor-of-2 equator-to-pole increase in mean field strength, leaving the core dynamo process an enigma. Here, we obtain a multidomain-corrected Pliocene-Pleistocene average paleointensity of 21.6 ± 11.0 µT recorded by 27 lava flows from the Galapagos Archipelago near the Equator. Our new result in conjunction with a published comprehensive study of single-domain-behaved paleointensities from Antarctica (33.4 ± 13.9 µT) that also correspond to GAD directions suggests that the overall average paleomagnetic field over the past few million years has indeed been dominantly dipolar in intensity yet only ∼60% of the present-day field strength, with a long-term average virtual axial dipole magnetic moment of the Earth of only 4.9 ± 2.4 × 10²² A⋅m². PMID:26598664
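
    The virtual axial dipole moment quoted above follows from the standard GAD intensity relation; a quick sketch (with the Earth radius and the Galapagos equatorial mean as inputs) reproduces the order of magnitude.

```python
import math

# GAD relation: B(lat) = (mu0 * m / (4*pi*r^3)) * sqrt(1 + 3*sin(lat)^2),
# inverted here for the (virtual axial) dipole moment m.
MU0 = 4.0e-7 * math.pi    # vacuum permeability, T*m/A
R_E = 6.371e6             # Earth radius, m

def vadm(b_tesla, lat_deg):
    s = math.sin(math.radians(lat_deg))
    return 4.0 * math.pi * R_E ** 3 * b_tesla / (MU0 * math.sqrt(1.0 + 3.0 * s * s))

# Galapagos mean of 21.6 uT near the Equator:
print(f"VADM ~ {vadm(21.6e-6, 0.0):.2e} A*m^2")   # ~5.6e22, same order as the joint 4.9e22 estimate
```

    The sqrt(1 + 3 sin²λ) factor encodes exactly the factor-of-2 equator-to-pole intensity increase the abstract refers to: the same moment inferred at the pole corresponds to twice the surface intensity.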

  10. Weaker axially dipolar time-averaged paleomagnetic field based on multidomain-corrected paleointensities from Galapagos lavas

    PubMed Central

    Wang, Huapei; Kent, Dennis V.; Rochette, Pierre

    2015-01-01

    The geomagnetic field is predominantly dipolar today, and high-fidelity paleomagnetic mean directions from all over the globe strongly support the geocentric axial dipole (GAD) hypothesis for the past few million years. However, the bulk of paleointensity data fails to coincide with the axial dipole prediction of a factor-of-2 equator-to-pole increase in mean field strength, leaving the core dynamo process an enigma. Here, we obtain a multidomain-corrected Pliocene–Pleistocene average paleointensity of 21.6 ± 11.0 µT recorded by 27 lava flows from the Galapagos Archipelago near the Equator. Our new result in conjunction with a published comprehensive study of single-domain–behaved paleointensities from Antarctica (33.4 ± 13.9 µT) that also correspond to GAD directions suggests that the overall average paleomagnetic field over the past few million years has indeed been dominantly dipolar in intensity yet only ∼60% of the present-day field strength, with a long-term average virtual axial dipole magnetic moment of the Earth of only 4.9 ± 2.4 × 10²² A⋅m². PMID:26598664

  11. The use of rapid turnaround heated headspace/gas chromatography to support regulatory soil cleanup standards

    SciTech Connect

    Atwell, J.; Evans, C.; Francoeur, T.L.; Guerra, R.

    1995-12-31

    This paper addresses the use of rapid turnaround, heated headspace/gas chromatography (GC) to support a soil remediation project in the state of New Jersey. In the past, rapid turnaround, heated headspace/GC procedures have been used primarily as a screening tool to delineate areas of volatile organic compound (VOC) contamination on-site. For this project, the heated headspace/GC method was the primary analytical tool used to support a large soil remediation project. This paper reviews the project goals, presents the analytical protocol and internal quality assurance/quality control (QA/QC) data, evaluates laboratory split data, and discusses the advantages and disadvantages of this rapid turnaround method.

  12. Teacher Pay-for-Performance in School Turnaround: How Bonuses and Differentiated Pay Scales Can Help Support School Turnaround. Meeting the Turnaround Challenge: Strategies, Resources & Tools to Transform a Framework into Practice

    ERIC Educational Resources Information Center

    Mass Insight Education (NJ1), 2009

    2009-01-01

    Given the importance of good teaching and leadership for school success, turnaround schools should think carefully about how to structure professional environments that reward and motivate excellence. A system of "Pay-for-Contribution" that includes tools such as hard-to-staff and skill shortage pay, performance pay, and/or retention pay, will…

  13. Silurian carbonate high-energy deposits of potential tsunami origin: Distinguishing lateral redeposition and time averaging using carbon isotope chemostratigraphy

    NASA Astrophysics Data System (ADS)

    Jarochowska, Emilia; Munnecke, Axel

    2015-01-01

    Stable carbon isotope curves are used as a precise stratigraphic tool in the Paleozoic, even though they are commonly based on the shallow-water carbonate record, which is characterized by low stratigraphic completeness. Identification of episodes of large-scale redeposition and erosion may improve δ13Ccarb-based correlations. Here, a series of at least three episodes of high-energy onshore redeposition is described from the Makarivka Member (new unit) of the Ustya Formation from the Homerian (middle Silurian) of Podolia, Ukraine. The Makarivka Member is emplaced within a tidal flat succession. Its most prominent part is divided into a lower polymictic conglomerate of sand- to boulder-sized clasts representing a range of subtidal facies, and an upper heterolithic unit composed of grainstone and mudstone laminae. The aim of the study is to identify the mechanism of deposition of the allochthonous conglomeratic material in this Member. Based on analogies with recent tsunami deposits, the conglomerate is interpreted to reflect the strongest landward-directed current in the tsunami run-up phase, and the heterolith to reflect alternating high-density landward currents, stagnant intervals allowing mud and land-derived debris to settle, and backwash flows. The tsunamite was deposited during an interval of decreasing isotopic values of the Mulde excursion, a global δ13C excursion reaching +5.2‰ in the studied sections. Clast redeposition in an interval characterized by rapidly changing δ13Ccarb offers the opportunity to evaluate the degree of temporal and spatial averaging caused by the tsunami. The clasts in the polymictic conglomerate show scattered δ13Ccarb values (−0.3‰ to +2.1‰) compared to homogeneous (1.3‰ to 1.6‰) values in the matrix. The presence of clasts characterized by low δ13Ccarb values is explained by the decrease of δ13Ccarb with bathymetry rather than by erosion of pre-excursion strata, whereas high values characterize material entrained from the sea-floor and strata

  14. Recursive Averaging

    ERIC Educational Resources Information Center

    Smith, Scott G.

    2015-01-01

    In this article, Scott Smith presents an innocent problem (Problem 12 of the May 2001 Calendar from "Mathematics Teacher" ("MT" May 2001, vol. 94, no. 5, p. 384) that was transformed by several timely "what if?" questions into a rewarding investigation of some interesting mathematics. These investigations led to two…

  15. Time-Averaged Indoor Radon Concentrations and Infiltration RatesSampled in Four U.S. Cities

    SciTech Connect

    Doyle, S.M.; Nazaroff, W.W.; Nero, A.V.

    1983-09-01

    Indoor radon concentrations, measured in 58 houses over a four- to five-month period during the winter and spring of 1981-1982, varied from 0.1 to 16 pCi l⁻¹ (4-590 Bq m⁻³). Average infiltration rates were determined for each house over the same period, based on a measurement of the effective leakage area and an infiltration model, and found to range from 0.2 to 2.2 air changes per hour (hr⁻¹). Indoor radon concentrations correlated poorly with infiltration rates for houses within each city as well as for the entire sample. Differences in radon entry rates among houses thus appear to be more important than differences in infiltration rates in determining whether a house has high indoor radon levels, consistent with previous indications from grab-sample measurements. Radon entry rates and indoor radon concentrations were generally higher in houses in Fargo, ND and Colorado Springs, CO than in houses in Portland, ME and Charleston, NC.
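
    The conclusion that entry rates dominate can be sketched with the implied steady-state, well-mixed mass balance (a textbook single-zone model with assumed values, not the study's data):

```python
# Steady-state, well-mixed single-zone mass balance (textbook model; values
# assumed): C = E / (V * (lambda_v + lambda_Rn)), with entry rate E, house
# volume V, air-change rate lambda_v, and radon decay constant lambda_Rn.
LAMBDA_RN = 0.00755    # Rn-222 decay constant, 1/h; negligible next to 0.2-2.2 1/h

def indoor_radon(entry_rate_bq_per_h, volume_m3, air_changes_per_h):
    # steady-state indoor concentration, Bq/m^3
    return entry_rate_bq_per_h / (volume_m3 * (air_changes_per_h + LAMBDA_RN))

# Same infiltration rate, 10x different entry rate -> ~10x different
# concentration, consistent with entry rate (not infiltration) driving the
# house-to-house spread.
print(f"{indoor_radon(5_000, 300, 0.5):.0f} Bq/m^3")
print(f"{indoor_radon(50_000, 300, 0.5):.0f} Bq/m^3")
```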

  16. A space variant maximum average correlation height (MACH) filter for object recognition in real time thermal images for security applications

    NASA Astrophysics Data System (ADS)

    Gardezi, Akber; Alkandri, Ahmed; Birch, Philip; Young, Rupert; Chatwin, Chris

    2010-10-01

    We propose a space variant Maximum Average Correlation Height (MACH) filter which can be locally modified depending upon its position in the input frame. This can be used to detect targets in an environment from varying ranges and in unpredictable weather conditions using thermal images. It enables adaptation of the filter dependent on background heat signature variances and also enables the normalization of the filter energy levels. The kernel can be normalized to remove a non-uniform brightness distribution if this occurs in different regions of the image. The main constraint in this implementation is the dependence on the computational ability of the system. This can be minimized with the recent advances in optical correlators using scanning holographic memory, as proposed by Birch et al. [1]. In this paper we describe the discrimination abilities of the MACH filter against background heat signature variances and its tolerance to changes in scale, and calculate the improvement in detection capabilities with the introduction of a nonlinearity. We propose a security detection system which exhibits a joint process where a human and an automated pattern recognition system contribute to the overall solution for the detection of pre-defined targets.

  17. Langmuir probe measurements in a time-fluctuating-highly ionized non-equilibrium cutting arc: Analysis of the electron retarding part of the time-averaged current-voltage characteristic of the probe

    SciTech Connect

    Prevosto, L.; Mancinelli, B.; Kelly, H.

    2013-12-15

    This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating-highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well-defined groups, allowing two quite different averaged electron temperature values to be defined. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time average over the fluctuations but also a spatial average along the probe collecting length. The fitting of the high-current region of the characteristic using this electron temperature value together with the corrections given by the fluctuation analysis showed a significant departure from local thermal equilibrium in the arc core.
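
    The retarding-region analysis rests on the standard exponential probe expression, under which the electron temperature (in eV) is the inverse slope of ln(I_e) versus bias voltage. A sketch with synthetic probe data (all values assumed, loosely matching the reported 0.98 eV):

```python
import numpy as np

# Synthetic retarding-region data: I_e = I_0 * exp((V - V_p) / Te[eV]), so
# ln(I_e) vs V is a straight line with slope 1/Te. All values are assumed.
rng = np.random.default_rng(1)
Te_eV = 0.98                        # "true" electron temperature, eV
V = np.linspace(-4.0, -1.0, 40)     # probe bias in the retarding region, V
I = 1e-3 * np.exp(V / Te_eV) * (1.0 + 0.02 * rng.standard_normal(V.size))

# Fit the semilog slope and invert it to recover Te.
slope, _ = np.polyfit(V, np.log(I), 1)
print(f"fitted Te = {1.0 / slope:.2f} eV")
```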

  18. Langmuir probe measurements in a time-fluctuating-highly ionized non-equilibrium cutting arc: analysis of the electron retarding part of the time-averaged current-voltage characteristic of the probe.

    PubMed

    Prevosto, L; Kelly, H; Mancinelli, B

    2013-12-01

    This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating-highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well-defined groups, allowing two quite different averaged electron temperature values to be defined. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time average over the fluctuations but also a spatial average along the probe collecting length. The fitting of the high-current region of the characteristic using this electron temperature value together with the corrections given by the fluctuation analysis showed a significant departure from local thermal equilibrium in the arc core.

  19. 4D CT amplitude binning for the generation of a time-averaged 3D mid-position CT scan.

    PubMed

    Kruis, Matthijs F; van de Kamer, Jeroen B; Belderbos, José S A; Sonke, Jan-Jakob; van Herk, Marcel

    2014-09-21

    The purpose of this study was to develop a method to use amplitude-binned 4D-CT (A-4D-CT) data for the construction of mid-position CT data and to compare the results with data created from phase-binned 4D-CT (P-4D-CT) data. For the latter purpose we have developed two measures which describe the regularity of the 4D data, and we have tried to correlate these measures with the regularity of the external respiration signal. 4D-CT data were acquired for 27 patients on a combined PET-CT scanner. The 4D data were reconstructed twice, using phase and amplitude binning. The 4D frames of each dataset were registered using a quadrature-based optical flow method. After registration the deformation vector field was repositioned to the mid-position. Since amplitude-binned 4D data do not provide temporal information, we corrected the mid-position for the occupancy of the bins. We quantified the differences between the two mid-position datasets in terms of tumour offset and amplitude differences. Furthermore, we measured the standard deviation of the image intensity over the respiration after registration (σ_registration) and the regularity of the deformation vector field (ΔJ) to quantify the quality of the 4D-CT data. These measures were correlated to the regularity of the external respiration signal (σ_signal). The two irregularity measures, ΔJ and σ_registration, were dependent on each other (p<0.0001, R²=0.80 for P-4D-CT, R²=0.74 for A-4D-CT). For all datasets amplitude binning resulted in lower ΔJ and σ_registration, and large decreases led to visible quality improvements in the mid-position data. The quantity of artefact decrease was correlated to the irregularity of the external respiratory signal. The average tumour offset between the phase and amplitude binned mid-position without occupancy correction was 0.42 mm in the caudal direction (10.6% of the amplitude). After correction this was reduced to 0.16 mm in the caudal direction (4.1% of the amplitude
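
    The occupancy correction can be sketched as follows (with an assumed asymmetric respiration trace, not patient data): samples are grouped by amplitude bin, and the mid-position average is weighted by the fraction of time the signal spends in each bin rather than treating all bins equally.

```python
import numpy as np

# Assumed asymmetric respiration surrogate in [0, 1] (not patient data).
t = np.linspace(0.0, 60.0, 6000)                        # 60 s trace
a = ((1.0 + np.sin(2.0 * np.pi * t / 4.0)) / 2.0) ** 2  # 4 s breathing period

# Amplitude binning: assign samples to equal-width amplitude bins and
# measure each bin's occupancy (fraction of time spent in it).
n_bins = 10
edges = np.linspace(0.0, 1.0, n_bins + 1)
idx = np.clip(np.digitize(a, edges) - 1, 0, n_bins - 1)
occupancy = np.bincount(idx, minlength=n_bins) / idx.size

# Mid-position estimate: equal bin weights (ignoring dwell time) vs the
# occupancy-corrected weighting needed for amplitude-binned data.
centers = 0.5 * (edges[:-1] + edges[1:])
mid_uncorrected = centers.mean()
mid_corrected = float(np.sum(occupancy * centers))
print(f"uncorrected {mid_uncorrected:.3f} vs occupancy-corrected {mid_corrected:.3f}")
```

    Because the assumed trace dwells longer near exhale, the occupancy-corrected mid-position sits noticeably below the naive equal-weight estimate, which is the effect the correction is meant to capture.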

  20. Interpreting Changes over Time in High School Average ACT® College Readiness Assessment Composite Scores and ACT College Readiness Benchmark Attainment Rates. ACT Research Report Series, 2013 (9)

    ERIC Educational Resources Information Center

    Sawyer, Richard

    2013-01-01

    Education officials and journalists frequently track changes over time in the average ACT® College Readiness Assessment Composite scores and ACT College Readiness Benchmark attainment rates of individual high schools. Using standard statistical methods, I examined how often changes in these statistics are unambiguously positive or negative, rather…

  1. Whole-brain, time-locked activation with simple tasks revealed using massive averaging and model-free analysis

    PubMed Central

    Gonzalez-Castillo, Javier; Saad, Ziad S.; Handwerker, Daniel A.; Inati, Souheil J.; Brenowitz, Noah; Bandettini, Peter A.

    2012-01-01

    The brain is the body's largest energy consumer, even in the absence of demanding tasks. Electrophysiologists report on-going neuronal firing during stimulation or task in regions beyond those of primary relationship to the perturbation. Although the biological origin of consciousness remains elusive, it is argued that it emerges from complex, continuous whole-brain neuronal collaboration. Despite converging evidence suggesting the whole brain is continuously working and adapting to anticipate and actuate in response to the environment, over the last 20 y, task-based functional MRI (fMRI) studies have emphasized a localizationist view of brain function, with fMRI showing only a handful of activated regions in response to task/stimulation. Here, we challenge that view with evidence that under optimal noise conditions, fMRI activations extend well beyond areas of primary relationship to the task; and blood-oxygen level-dependent signal changes correlated with task-timing appear in over 95% of the brain for a simple visual stimulation plus attention control task. Moreover, we show that response shape varies substantially across regions, and that whole-brain parcellations based on those differences produce distributed clusters that are anatomically and functionally meaningful, symmetrical across hemispheres, and reproducible across subjects. These findings highlight the exquisite detail lying in fMRI signals beyond what is normally examined, and emphasize both the pervasiveness of false negatives and how the sparseness of fMRI maps is not a result of localized brain function, but a consequence of high noise and overly strict predictive response models. PMID:22431587

  2. Temporal averaging of phase measurements in the presence of spurious phase drift - Application to phase-stepped real-time holographic interferometry

    NASA Technical Reports Server (NTRS)

    Ovryn, B.; Haacke, E. M.

    1993-01-01

    A technique that compensates for low-spatial-frequency spurious phase changes during an interference experiment is developed; it permits temporal averaging of multiple phase measurements made before and after object displacement. The method is tested with phase-stepped real-time holographic interferometry applied to cantilever bending of a piezoelectric bimorph ceramic. Results indicate that temporal averaging of the corrected data significantly reduces the white noise in a phase measurement without incurring systematic errors or sacrificing spatial resolution. White noise is reduced from 3 deg to less than 1 deg using these methods.

  3. A Simple Instant-Estimation Method for Time-Average Quantities of Single-Phase Power and Application to Single-Phase Power Grid Connection by Inverter

    NASA Astrophysics Data System (ADS)

    Shinnaka, Shinji

    This paper presents and analyzes a new, simple instant-estimation method for time-average quantities of single-phase power, such as rms values of voltage and current, active and reactive powers, and power factor, when the fundamental component has constant or nearly constant frequency, using measured instantaneous values of voltage and current. According to the analyses, the method can instantly estimate time-average values with accuracy tied to the fundamental frequency, and the estimation accuracy for power factor is about two times better than that for voltage, current, and powers. The instant-estimation method is simple and can easily be applied to single-phase power control systems that are expected to instantly and continuously control power factor on a single-phase grid through an inverter. Based on the proposed instant-estimation method, two methods for such power control systems are also proposed, and their usefulness is verified through simulations.
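The quantities the paper estimates have standard sample-based, time-average definitions. The following sketch (illustrative only; the paper's instant-estimation algorithm is not given in the abstract) computes them over exactly one cycle of sampled voltage and current:

```python
import numpy as np

def power_quantities(v, i):
    """Standard time-average definitions of the single-phase power
    quantities, computed over one full cycle of sampled waveforms."""
    vrms = np.sqrt(np.mean(v ** 2))
    irms = np.sqrt(np.mean(i ** 2))
    p = np.mean(v * i)                       # active power
    s = vrms * irms                          # apparent power
    q = np.sqrt(max(s ** 2 - p ** 2, 0.0))   # reactive power (magnitude)
    return vrms, irms, p, q, p / s           # power factor = P / S

# 50 Hz example: current lags voltage by 60 degrees, so PF = cos(60 deg) = 0.5
n_samp, f, T = 1000, 50.0, 0.02
t = np.arange(n_samp) * (T / n_samp)         # exactly one cycle of samples
v = np.sqrt(2) * 230 * np.sin(2 * np.pi * f * t)
i = np.sqrt(2) * 10 * np.sin(2 * np.pi * f * t - np.pi / 3)
vrms, irms, p, q, pf = power_quantities(v, i)
```

Because the window spans exactly one fundamental period, the discrete means reproduce the ideal values (Vrms = 230 V, P = 1150 W, PF = 0.5) without leakage error.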

  4. Measurement of Time-Averaged Particle-Wall Collision Properties Using Particle Tracking Velocimetry, CRADA PC93-006, Final Report

    SciTech Connect

    Shaffer, F.; Massah, H.; Sinclair, J.; Shahnam, M.

    1995-04-30

    This paper describes an experimental facility to measure time-averaged properties of particle-wall collisions. A particle tracking velocimetry (PTV) system measures particle rebound angles and velocities and impact angles and velocities for specific particles. From these measured properties, values of the normal and tangential coefficients of restitution are derived. The PTV system rapidly measures thousands of particle-wall collisions, enabling determination of time-averaged properties. In this initial work we use 191 µm glass particles colliding with a glass plate at a fixed angle of 17.5° and a velocity of 17.07 m/s. Observations are also made of aspherical particles colliding with gross sliding. In the future, this experimental system will be used with particles and wall materials of industrial significance.

  5. Taphonomic trade-offs in tropical marine death assemblages: Differential time averaging, shell loss, and probable bias in siliciclastic vs. carbonate facies

    NASA Astrophysics Data System (ADS)

    Kidwell, Susan M.; Best, Mairi M. R.; Kaufman, Darrell S.

    2005-09-01

    Radiocarbon-calibrated amino-acid racemization ages of individually dated bivalve mollusk shells from Caribbean reef, nonreefal carbonate, and siliciclastic sediments in Panama indicate that siliciclastic sands and muds contain significantly older shells (median 375 yr, range up to ~5400 yr) than nearby carbonate seafloors (median 72 yr, range up to ~2900 yr; maximum shell ages differ significantly at p < 0.02 using extreme-value statistics). The implied difference in shell loss rates is contrary to physicochemical expectations but is consistent with observed differences in shell condition (greater bioerosion and dissolution in carbonates). Higher rates of shell loss in carbonate sediments should lead to greater compositional bias in surviving skeletal material, resulting in taphonomic trade-offs: less time averaging but probably higher taxonomic bias in pure carbonate sediments, and lower bias but greater time averaging in siliciclastic sediments from humid-weathered accretionary arc terrains, which are a widespread setting of tropical sedimentation.

  6. Modeling the effect of unsteady flows on the time average flow field of a blade row embedded in an axial flow multistage turbomachine

    NASA Technical Reports Server (NTRS)

    Adamczyk, John J.

    1996-01-01

    The role of unsteady flow processes in establishing the performance of axial flow turbomachinery was investigated. The development of the flow model, as defined by the time-average flow equations associated with the passage of a blade row embedded in a multistage configuration, is presented. The mechanisms of unsteady momentum and energy transport are outlined, and the modeling of these transport processes is discussed. The procedure for simulating unsteady multistage turbomachinery flows is described.

  7. The average enzyme principle.

    PubMed

    Reznik, Ed; Chaudhary, Osman; Segrè, Daniel

    2013-09-01

    The Michaelis-Menten equation for an irreversible enzymatic reaction depends linearly on the enzyme concentration. Even if the enzyme concentration changes in time, this linearity implies that the amount of substrate depleted during a given time interval depends only on the average enzyme concentration. Here, we use a time re-scaling approach to generalize this result to a broad category of multi-reaction systems, whose constituent enzymes have the same dependence on time, e.g. they belong to the same regulon. This "average enzyme principle" provides a natural methodology for jointly studying metabolism and its regulation.
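The principle can be checked numerically: an enzyme level that oscillates in time and a constant level with the same time average should deplete the same amount of substrate. The sketch below uses illustrative parameter values (not taken from the paper):

```python
import numpy as np

def depleted(E_of_t, S0=10.0, kcat=1.0, Km=2.0, T=5.0, n=20000):
    """Forward-Euler integration of dS/dt = -kcat * E(t) * S / (Km + S);
    returns the amount of substrate depleted over [0, T]."""
    dt = T / n
    S = S0
    for k in range(n):
        S -= dt * kcat * E_of_t(k * dt) * S / (Km + S)
    return S0 - S

# Enzyme level oscillating over one full period (time average = 1.0)
osc = depleted(lambda t: 1.0 + 0.5 * np.sin(2 * np.pi * t / 5.0))
# Constant enzyme level equal to that average
const = depleted(lambda t: 1.0)
# The average enzyme principle predicts osc and const coincide
```

The agreement follows from the time rescaling the authors use: substituting dτ = E(t) dt removes E from the rate equation, so S(T) depends only on ∫E dt = Ē·T.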

  8. Combining site occupancy, breeding population sizes and reproductive success to calculate time-averaged reproductive output of different habitat types: an application to Tricolored Blackbirds.

    PubMed

    Holyoak, Marcel; Meese, Robert J; Graves, Emily E

    2014-01-01

    In metapopulations in which habitat patches vary in quality and occupancy, it can be complicated to calculate the net time-averaged contribution to reproduction of particular populations. Surprisingly, few indices have been proposed for this purpose. We combined occupancy, abundance, frequency of occurrence, and reproductive success to determine the net value of different sites through time and applied this method to a bird of conservation concern. The Tricolored Blackbird (Agelaius tricolor) has experienced large population declines, is the most colonial songbird in North America, is largely confined to California, and breeds itinerantly in multiple habitat types. It has had chronically low reproductive success in recent years. Although young produced per nest have previously been compared across habitats, no study has simultaneously considered site occupancy and reproductive success. Combining occupancy, abundance, frequency of occurrence, reproductive success, and nest failure rate, we found that large colonies in grain fields fail frequently because of nest destruction due to harvest prior to fledging. Consequently, net time-averaged reproductive output is low compared to colonies in non-native Himalayan blackberry or thistles, and native stinging nettles. Cattail marshes have intermediate reproductive output, but their reproductive output might be improved by active management. Harvest of grain-field colonies necessitates either promoting delay of harvest or creating alternative, more secure nesting habitats. Stinging nettle and marsh colonies offer the main potential sources for restoration or native habitat creation. From 2005 to 2011, breeding site occupancy declined three times faster than new breeding colonies were formed, indicating a rapid decline in occupancy. Total abundance showed a similar decline. Causes of variation in the value for reproduction of nesting substrates and factors behind continuing population declines merit urgent investigation.
The method we
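The abstract does not reproduce the index itself, but one hypothetical way to combine its ingredients is to weight per-nest success by occupancy rate, colony size, and the chance a colony survives to fledging. All numbers below are invented for illustration; this is not the authors' published method:

```python
# Hypothetical illustration only: invented numbers, plausible index form.
YEARS = 7  # survey window, e.g. a 2005-2011 style study period

habitats = {
    # habitat: (years occupied, mean colony size (nests),
    #           young fledged per successful nest,
    #           probability the colony fails outright before fledging)
    "grain field": (6, 40000, 1.1, 0.60),  # huge colonies, frequent harvest failure
    "blackberry":  (7, 15000, 1.4, 0.10),  # smaller colonies, rarely destroyed
}

def net_output(occupied, size, fledged, fail_prob):
    """Time-averaged annual reproductive output for one habitat type:
    occupancy rate x colony size x per-nest success x colony survival."""
    return (occupied / YEARS) * size * fledged * (1.0 - fail_prob)

ranked = sorted(habitats, key=lambda h: net_output(*habitats[h]), reverse=True)
# Despite far larger colonies, grain fields rank below blackberry once
# harvest-driven colony failure is folded in.
```

This mirrors the abstract's qualitative conclusion: frequent whole-colony failure can outweigh large colony sizes in the time-averaged accounting.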

  10. Effect of delay time and grid voltage changes on the average molecular mass of polydisperse polymers and polymeric blends determined by delayed extraction matrix-assisted laser desorption/ionization time-of-flight mass spectrometry.

    PubMed

    Mineo, Placido; Vitalini, Daniele; Scamporrino, Emilio; Bazzano, Sebastiano; Alicata, Rossana

    2005-01-01

    The dependence of the calculated average molecular mass of a polyethylene glycol with a large polydispersity on the instrumental parameters adopted in the acquisition of mass spectra using delayed extraction matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (DE MALDI-TOFMS) was investigated. It has been shown that a combined effect of delay times and potential gradients can act on the ion cloud in the source chamber, affecting both the mass resolution and the average molecular mass value of the analyzed polymeric sample. Also examined was a blend of two different polymers (a PEG and a PMMA commercial sample of similar average molecular mass), which presents an additional problem concerning discrimination among the different polymer species as a function of the experimental conditions. In this work, the best instrumental conditions to obtain both good resolution and a correct average molecular mass for the examined polydisperse sample are reported. PMID:16134231

  11. Sustaining Turnaround at the School and District Levels: The High Reliability Schools Project at Sandfields Secondary School

    ERIC Educational Resources Information Center

    Schaffer, Eugene; Reynolds, David; Stringfield, Sam

    2012-01-01

    Beginning from 1 high-poverty, historically low-achieving secondary school's successful turnaround work, this article provides data relative to a successful school turnaround, the importance of external and system-level supports, and the importance of building for sustainable institutionalization of improvements. The evidence suggests the…

  12. "I've Never Seen People Work So Hard!" Teachers' Working Conditions in the Early Stages of School Turnaround

    ERIC Educational Resources Information Center

    Cucchiara, Maia Bloomfield; Rooney, Erin; Robertson-Kraft, Claire

    2015-01-01

    School turnaround--a reform strategy that strives for quick and dramatic transformation of low-performing schools--has gained prominence in recent years. This study uses interviews and focus groups conducted with 86 teachers in 13 schools during the early stages of school turnaround in a large urban district to examine teachers' perceptions of the…

  13. Academic Turnarounds: Restoring Vitality to Challenged American Colleges and Universities. ACE/Praeger Series on Higher Education

    ERIC Educational Resources Information Center

    MacTaggart, Terrence, Ed.

    2007-01-01

    This book discusses the early indicators of a college or university's need for a turnaround. It outlines financial trends and other indicators of distress, as well as benchmarks for the various stages of an effective turnaround strategy. The book will help trustees, presidents, and faculty members diagnose whether they are in denial about the true…

  14. Comparison of measured and calculated scattering from surface aerosols with an average, a size-dependent, and a time-dependent refractive index

    NASA Astrophysics Data System (ADS)

    Cai, Yong; Montague, Derek C.; Deshler, Terry

    2011-01-01

    Midcontinental surface aerosols have been measured at a small, minimally polluted city in summer and winter and on a nearby remote mountain in summer. Aerosol scattering, absorption, size distribution, and composition were measured using a three-wavelength nephelometer, an aethalometer, a passive cavity aerosol spectrometer, a scanning mobility particle sizer, an Aerodyne quadrupole aerosol mass spectrometer, and conventional filter systems. Size-dependent, time-dependent, and averaged refractive indices are estimated from the aerosol composition measurements and then used to calculate time-dependent aerosol scattering. The calculated scattering values show differences that are generally less than 5% on average for all three refractive indices, suggesting that the average refractive index is adequate for scattering estimations from time- or size-dependent aerosol measurements. The calculated scattering (backscattering) at 550 nm ranges from 2% less to 23% greater (11-22% smaller) than that measured. These differences decrease at 450 nm and increase at 700 nm and significantly exceed these values if optical size distribution measurements are not corrected for an appropriate index of refraction. Optimal agreement between calculated and measured scattering is achieved on 4 of the 6 days investigated in detail, if the real refractive index of the aerosol organic species ranges from 1.45 ± 0.02 at 450 nm to 1.62 ± 0.05 at 700 nm. Single-scatter albedos are also calculated and found to be in good agreement with those derived from the experimental observations, ranging from 0.79 to 0.87 in the city and constant, near 0.95, on the mountain top.

  15. Beyond long memory in heart rate variability: An approach based on fractionally integrated autoregressive moving average time series models with conditional heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria

    2013-06-01

    Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and to estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long-term HRV series available at PhysioNet, leading to discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
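The long-memory part of an ARFIMA model is the fractional differencing operator (1 - B)^d. A small sketch (the standard coefficient recursion, not code from the paper) generates the filter weights that, convolved with an HRV series, remove its long memory before fitting the GARCH part:

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n coefficients of the fractional differencing filter (1 - B)^d,
    via the standard recursion w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

# d = 0.4 lies in the stationary long-memory range 0 < d < 0.5
w = frac_diff_weights(0.4, 5)
# Applying the (truncated) filter to a series x: np.convolve(x, w)[:len(x)]
```

The weights decay hyperbolically rather than geometrically, which is exactly what lets ARFIMA capture the slow autocorrelation decay of HRV.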

  16. Field evaluation of the error arising from inadequate time averaging in the standard use of depth-integrating suspended-sediment samplers

    USGS Publications Warehouse

    Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.

    2011-01-01

    Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. 
Averaging over time scales >1 minute is the likely minimum duration required

  17. Turnaround, Transformational, or Transactional Leadership: An Ethical Dilemma in School Reform

    ERIC Educational Resources Information Center

    Mette, Ian M.; Scribner, Jay P.

    2014-01-01

    This case was written for school leaders, specifically building-level principals and central office administrators attempting to implement school turnaround reform efforts. Often, leaders who embark on this type of organizational change work in intense environments that produce high levels of pressure to demonstrate improvement in student…

  18. Chronically Low-Performing Schools and Turnaround: Evidence from Three States

    ERIC Educational Resources Information Center

    Hansen, Michael; Choi, Kilchan

    2012-01-01

    The criteria for determining the student outcomes that define a school as having "turned around" are not well defined, and the definition of turnaround performance varies across studies. Although current policy initiatives offer guidelines for identifying chronically low-performing (CLP) schools, there is no standard definition or methodology in common usage. This paper…

  19. The Lay of the Land: State Practices and Needs for Supporting School Turnaround

    ERIC Educational Resources Information Center

    Scott, Caitlin; Lasley, Nora

    2013-01-01

    The goal of the Center on School Turnaround (CST) is to provide technical assistance on research-based practices and emerging promising practices that will increase the capacity of states to support their districts in turning around the lowest-performing schools. When the CST opened its doors in October 2012, it began its work by asking the…

  20. A Case Study of Change Strategies Implemented in a Turnaround Elementary School

    ERIC Educational Resources Information Center

    Colson, Jo Ann

    2012-01-01

    This case study examined the change strategies in a turnaround school at the elementary level to understand and describe how change occurred and was sustained at this campus. This study examined the factors which contributed to the change in academic success of students, examined beliefs about change that led to the change process, identified the…

  1. Achieving Exact and Constant Turnaround Ratio in a DDS-Based Coherent Transponder

    NASA Technical Reports Server (NTRS)

    D'Addario, Larry R.

    2011-01-01

    A report describes a non-standard direct digital synthesizer (DDS) implementation that can be used as part of a coherent transponder so as to allow any rational turnaround ratio to be exactly achieved and maintained while the received frequency varies. (A coherent transponder is a receiver-transmitter in which the transmitted carrier is locked to a pre-determined multiple of the received carrier's frequency and phase. That multiple is called the turnaround ratio.) The report also describes a general model for coherent transponders that are partly digital. A partially digital transponder is one in which analog signal processing is used to convert the signals between high frequencies at which they are radiated and relatively low frequencies at which they are converted to or from digital form, with most of the complex processing performed digitally. There is a variety of possible architectures for such a transponder, and different ones can be selected by choosing different parameter values in the general model. Such a transponder uses a DDS to create a low-frequency quasi-sinusoidal signal that tracks the received carrier's phase, and another DDS to generate an IF or near-baseband version of the transmitted carrier. With conventional DDS implementations, a given turnaround ratio can be achieved only approximately, and the error varies slightly as the received frequency changes. The non-conventional implementation employed here allows any rational turnaround ratio to be exactly maintained.
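The exact-ratio idea can be illustrated with a phase accumulator whose modulus is the denominator of the turnaround ratio, so the tuning word is never rounded. This is a sketch of the general technique, not the report's circuit; 880/749 is a commonly used deep-space transponder turnaround ratio:

```python
def dds_cycles(p, q, n_input_cycles):
    """Phase accumulator of modulus q, advanced by p once per input carrier
    cycle; counting accumulator wrap-arounds gives output cycles, so the
    average output/input frequency ratio is exactly p/q, with no rounding."""
    acc = 0
    out = 0
    for _ in range(n_input_cycles):
        acc += p
        out += acc // q        # wraps completed this step
        acc %= q
    return out

# With ratio 880/749, every 749 input cycles must yield exactly 880 output
# cycles; over 749,000 input cycles the count is exactly 880,000.
cycles = dds_cycles(880, 749, 749 * 1000)
```

A conventional binary DDS would instead load round(880/749 · 2^32) into a 2^32-modulus accumulator, realizing the ratio only to within about one part in 2^32, with a residual error that drifts as the received frequency changes.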

  2. How to Know when Dramatic Change Is on Track: Leading Indicators of School Turnarounds

    ERIC Educational Resources Information Center

    Kowal, Julie; Ableidinger, Joe

    2011-01-01

    In recent years, national policymakers have placed new emphasis on "school turnarounds" as a strategy for rapid, dramatic improvement in chronically failing schools, calling on education leaders to turn around performance in the 5,000 lowest-achieving schools nationwide. This goal may seem daunting, given the dismal success rates of school…

  3. CAD/CAM, Creativity, and Discipline Lead to Turnaround School Success

    ERIC Educational Resources Information Center

    Gorman, Lynn

    2012-01-01

    Miami Central High School technology teacher Frank Houghtaling thinks the connection between theory and application is one reason his students perform better on the Florida Comprehensive Assessment Test (FCAT). The impressive turnaround school drew local and national attention last spring when one of Houghtaling's students, Dagoberto Cruz, won…

  4. Turnaround radius in an accelerated universe with quasi-local mass

    SciTech Connect

    Faraoni, Valerio; Lapierre-Léonard, Marianne; Prain, Angus

    2015-10-01

    We apply the Hawking-Hayward quasi-local energy construct to obtain in a rigorous way the turnaround radius of cosmic structures in General Relativity. A splitting of this quasi-local mass into local and cosmological parts describes the interplay between local attraction and cosmological expansion.

  5. Turnaround High School Principals: Recruit, Prepare and Empower Leaders of Change. High Schools That Work

    ERIC Educational Resources Information Center

    Schmidt-Davis, Jon; Bottoms, Gene

    2012-01-01

    Recent studies make one reality clear: While multiple factors can cause a low-performing high school to be in a turnaround situation, every high school that makes dramatic academic improvement has strong, effective school leadership. Turning a school around is no work for novices. It takes a skilled, visionary and proactive principal to pull apart…

  6. Turnaround Principals: An Unmistakable Conclusion--Site-Level Success Stems from Superintendent Support

    ERIC Educational Resources Information Center

    Burbach, Harold J.; Butler, Alfred R., IV

    2005-01-01

    The authors of this article were recently privileged to spend five days with a group of 10 talented principals who have been charged with turning around academically low-performing schools in their respective districts. These principals were selected as participants in a Turnaround Specialist Program initiated by Gov. Mark R. Warner of Virginia…

  7. Investigating the Role of Human Resources in School Turnaround: Evidence from Two States

    ERIC Educational Resources Information Center

    Hansen, Michael

    2012-01-01

    Teachers are generally recognized as the schooling factor accounting for the highest proportion of student learning outcomes (Aaronson et al., 2007; Hanushek, 1986). This implies the quick and dramatic improvement in school performance observed in turnaround (TA) schools was associated with a major change in the performance of its teachers. This…

  8. State Capacity to Support School Turnaround. NCEE Evaluation Brief. NCEE 2015-4012

    ERIC Educational Resources Information Center

    Tanenbaum, Courtney; Boyle, Andrea; Graczewski, Cheryl; James-Burdumy, Susanne; Dragoset, Lisa; Hallgren, Kristin

    2015-01-01

    One objective of the U.S. Department of Education's (ED) School Improvement Grants (SIG) and Race to the Top (RTT) program is to help states enhance their capacity to support the turnaround of low-performing schools. This capacity may be important, given how difficult it is to produce substantial and sustained achievement gains in low-performing…

  9. Operational Authority, Support, and Monitoring of School Turnaround. NCEE Evaluation Brief. NCEE 2014-4008

    ERIC Educational Resources Information Center

    Herman, Rebecca; Graczewski, Cheryl; James-Burdumy, Susanne; Murray, Matthew; Perez-Johnson, Irma; Tanenbaum, Courtney

    2013-01-01

    The federal School Improvement Grants (SIG) program, to which $3 billion were allocated under the American Recovery and Reinvestment Act of 2009 (ARRA), supports schools attempting to turn around a history of low performance. School turnaround also is a focus of Race to the Top (RTT), another ARRA-supported initiative, which involved a roughly $4…

  10. Choosing a School Turnaround Provider. Lessons Learned. Volume 1, Issue 3

    ERIC Educational Resources Information Center

    Lockwood, Anne Turnbaugh; Fleischman, Steve

    2010-01-01

    Droves of school turnaround providers are chasing the massive federal infusion of funds flowing into failing schools. They arrive armed with glossy materials, impressive sounding claims, and, often, citing their prior relationships or experiences with one's school to support their promises of great service and impressive outcomes. But, are their…

  11. Participatory Democracy and Struggling Schools: Making Space for Youth in School Turnarounds

    ERIC Educational Resources Information Center

    Kirshner, Ben; Jefferson, Anton

    2015-01-01

    Background/Context:Federal policy, as codified in Race to the Top (RTT) funding guidelines, outlines four types of intervention: turnaround, restart, closure, and transformation. RTT has embraced a technocratic paradigm for school reform that frames choice less as the opportunity for the public to deliberate about what it wants from its schools…

  12. Tinkering and Turnarounds: Understanding the Contemporary Campaign to Improve Low-Performing Schools

    ERIC Educational Resources Information Center

    Duke, Daniel L.

    2012-01-01

    An unprecedented amount of attention in recent years has been focused on turning around low-performing schools. Drawing on insights from Tyack and Cuban's (1995) "Tinkering Toward Utopia," the article analyzes the forces behind the school turnaround phenomenon and how they have evolved since passage of the No Child Left Behind Act. The article…

  13. A New Method of Deriving Time-Averaged Tropospheric Column Ozone over the Tropics Using Total Ozone Mapping Spectrometer (TOMS) Radiances: Intercomparison and Analysis Using TRACE A Data

    NASA Technical Reports Server (NTRS)

    Kim, J. H.; Hudson, R. D.; Thompson, A. M.

    1996-01-01

    Error analysis of archived total O3 from the total ozone mapping spectrometer (TOMS) (version 6) is presented. Daily total O3 maps for the tropics, from the period October 6-21, 1992, are derived from TOMS radiances following correction for these errors. These daily maps, averaged together, show a wavelike feature, observed in all latitude bands, underlying sharp peaks which occur at different longitudes depending on the latitude. The wave pattern is used to derive both time-averaged stratospheric and tropospheric O3 fields. The nature of the wave pattern (stratospheric or tropospheric) cannot be determined with certainty due to missing data (no Pacific sondes, no lower-stratospheric Stratospheric Aerosol and Gas Experiment (SAGE) ozone for 18 months after the Mt. Pinatubo eruption) and significant uncertainties in the corroborative satellite record in the lower stratosphere (solar backscattered ultraviolet (SBUV), microwave limb sounder (MLS)). However, the time-averaged tropospheric ozone field, based on the assumption that the wave feature is stratospheric, agrees within 10% with ultraviolet differential absorption laser O3 measurements from the DC-8 during the Transport and Atmospheric Chemistry near the Equator-Atlantic (TRACE A) campaign, and with ozonesonde measurements over Brazzaville, Congo; Ascension Island; and Natal, Brazil, for the period October 6-21, 1992. The derived background (nonpolluted) Indian Ocean tropospheric ozone amount, 26 Dobson units (DU), agrees with the cleanest African ozonesonde profiles for September-October 1992. The assumption of a totally tropospheric wave (flat stratosphere) gives 38 DU above the western Indian Ocean and 15-40% disagreements with the sondes. Tropospheric column O3 is high from South America to Africa, owing to the interaction of dynamics with biomass burning emissions. Comparison with fire distributions from the advanced very high resolution radiometer (AVHRR) during October 1992 suggests that tropospheric O3 produced from biomass

  14. Sediment accumulation, stratigraphic order, and the extent of time-averaging in lagoonal sediments: a comparison of 210Pb and 14C/amino acid racemization chronologies

    NASA Astrophysics Data System (ADS)

    Kosnik, Matthew A.; Hua, Quan; Kaufman, Darrell S.; Zawadzki, Atun

    2015-03-01

    Carbon-14 calibrated amino acid racemization (14C/AAR) data and lead-210 (210Pb) data are used to examine sediment accumulation rates, stratigraphic order, and the extent of time-averaging in sediments collected from the One Tree Reef lagoon (southern Great Barrier Reef, Australia). The top meter of lagoonal sediment preserves a stratigraphically ordered deposit spanning the last 600 yrs. Despite different assumptions, the 210Pb and 14C/AAR chronologies are remarkably similar indicating consistency in sedimentary processes across sediment grain sizes spanning more than three orders of magnitude (0.1-10 mm). Estimates of long-term sediment accumulation rates range from 2.2 to 1.2 mm yr-1. Molluscan time-averaging in the taphonomically active zone is 19 yrs, whereas below the depth of final burial (~15 cm), it is ~110 yrs/5 cm layer. While not a high-resolution paleontological record, this reef lagoon sediment is suitable for paleoecological studies spanning the period of Western colonization and development. This sedimentary deposit, and others like it, should be useful, albeit not ideal, for quantifying anthropogenic impacts on coral reef systems.

  15. Verification and validation studies of the time-averaged velocity field in the very near-wake of a finite elliptical cylinder

    NASA Astrophysics Data System (ADS)

    Flynn, Michael R.; Eisner, Alfred D.

    2004-04-01

    This paper presents verification and validation results for the time-averaged, three-dimensional velocity field immediately downstream of a finite elliptic cylinder at a Reynolds number of 1.35 × 10^4. Numerical simulations were performed with the finite element package, Fidap, using the steady state, standard k-epsilon model. The ratio of the cylinder height to the major axis of the elliptical cross section is 5.0; the aspect ratio of the cross section is 0.5625. This particular geometry is selected as a crude surrogate for the human form in consideration of further applied occupational and environmental health studies. Predictions of the velocity and turbulence kinetic energy fields in the very near-wake are compared to measurements taken in a wind tunnel using laser Doppler anemometry. Results show that at all locations where a reliable grid convergence index can be calculated there is not a demonstrable difference between simulated and measured values. The overall topology of the time-averaged flow field is reasonably well predicted, although the simulated near-wake is narrower than the measured one.
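    The grid convergence index (GCI) mentioned above is computed from solutions on three systematically refined grids, following Roache's procedure. A sketch with illustrative values, not the paper's data:

```python
import math

# Grid Convergence Index (Roache) from a scalar solution on three
# systematically refined grids: f1 finest, f3 coarsest, refinement
# ratio r, safety factor Fs. The sample values are illustrative only.
def gci_fine(f1, f2, f3, r, Fs=1.25):
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)  # observed order
    eps = (f2 - f1) / f1                               # relative change
    return Fs * abs(eps) / (r**p - 1), p

gci, p = gci_fine(0.971, 0.962, 0.934, 2.0)
# gci is a relative error band on the fine-grid solution (~0.5% here)
```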

  16. Time-synchronous-averaging of gear-meshing-vibration transducer responses for elimination of harmonic contributions from the mating gear and the gear pair

    NASA Astrophysics Data System (ADS)

    Mark, William D.

    2015-10-01

    The transmission-error frequency spectrum of meshing gear pairs, operating at constant speed and constant loading, is decomposed into harmonics arising from the fundamental period of the gear pair, rotational harmonics of the individual gears of the pair, and tooth-meshing harmonics. In the case of hunting-tooth gear pairs, no rotational harmonics from the individual gears, other than the tooth-meshing harmonics, are shown to occur at the same frequencies. Time-synchronous averages utilizing a number of contiguous revolutions of the gear of interest equal to an integer multiple of the number of teeth on the mating gear are shown to eliminate non-tooth-meshing transmission-error rotational-harmonic contributions from the mating gear, and those from the gear pair, in the case of hunting-tooth gear pairs, and to minimize these contributions in the case of non-hunting-tooth gear pairs. An example computation is shown to illustrate the effectiveness of the suggested time-synchronous-averaging procedure.
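    The averaging step itself is simple to sketch: slice the signal into revolutions of the gear of interest and average, choosing the number of revolutions as an integer multiple of the mating gear's tooth count as the abstract suggests. The signal below is synthetic, not gear data:

```python
import math

# Time-synchronous averaging (TSA): average a vibration signal over an
# integer number of revolutions of the gear of interest. Choosing the
# revolution count as a multiple of the mating gear's tooth count is
# the abstract's recipe for cancelling the mating gear's non-mesh
# rotational harmonics (hunting-tooth case). Synthetic signal below.
def tsa(signal, samples_per_rev, n_revs):
    assert len(signal) >= samples_per_rev * n_revs
    avg = [0.0] * samples_per_rev
    for rev in range(n_revs):
        seg = signal[rev * samples_per_rev:(rev + 1) * samples_per_rev]
        for i, s in enumerate(seg):
            avg[i] += s / n_revs
    return avg

SPR = 64                   # samples per revolution of the gear of interest
MATING_TEETH = 31          # teeth on the mating gear (assumed)
N_REVS = 2 * MATING_TEETH  # integer multiple of the mating tooth count

# synthetic signal: a 3/rev harmonic of the gear of interest plus an
# incommensurate 0.37/rev tone standing in for the mating gear
sig = [math.sin(2 * math.pi * 3 * n / SPR)
       + 0.5 * math.sin(2 * math.pi * 0.37 * n / SPR)
       for n in range(SPR * N_REVS)]
avg = tsa(sig, SPR, N_REVS)
# the incommensurate tone is attenuated by ~1/N_REVS; the 3/rev
# harmonic survives essentially unchanged
```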

  17. Manipulation of a Bose-Einstein condensate by a time-averaged orbiting potential using phase jumps of the rotating field

    SciTech Connect

    Cleary, P. W.; Hijmans, T. W.; Walraven, J. T. M.

    2010-12-15

    We report on the manipulation of the center-of-mass motion ('sloshing') of a Bose-Einstein condensate in a time-averaged orbiting potential (TOP) trap. We start with a condensate at rest in the center of a static trapping potential. When suddenly replacing the static trap with a TOP trap centered about the same position, the condensate starts to slosh with an amplitude much larger than the TOP micromotion. We show, both theoretically and experimentally, that the direction of sloshing is related to the initial phase of the rotating magnetic field of the TOP. We show further that the sloshing can be quenched by applying a carefully timed and sized jump in the phase of the rotating field.

  18. The impact of time-averaged volcanic sulphur emissions on the global cloud condensation nuclei budget in the pre-industrial era

    NASA Astrophysics Data System (ADS)

    Schmidt, Anja; Carslaw, Kenneth; Mann, Graham; Merikanto, Joonas

    2010-05-01

    Volcanoes are a strong source of sulphur dioxide (SO2) with time-averaged emission inventories (e.g. Andres and Kasgnoc, 1998) indicating that volcanoes account for around 40% of the total annual SO2 flux in the pre-industrial atmosphere. We use a global aerosol microphysics model (GLOMAP-mode) to quantify the contribution of time-averaged volcanic sulphur emissions (from both continuous passive degassing and explosive volcanoes) to the global cloud condensation nuclei (CCN) budget. GLOMAP-mode is capable of simulating microphysical processes, such as binary homogeneous nucleation, hygroscopic growth, coagulation, condensation, cloud processing (oxidation of dissolved SO2 to SO4 in cloud droplets), as well as dry and wet deposition. For this study we use a sulphur chemistry scheme which includes 7 species (DMS, DMSO, MSA, SO2, H2SO4, COS, CS2). The runs were conducted using four internally mixed aerosol components, sulphate (SO4), sea salt, black carbon (BC) and organic carbon (OC). We simulated the impact of volcanic degassing in a pre-industrial setting (i.e. using 1750 BC and OC emissions in the absence of any anthropogenic emissions) using the volcanic emission inventory by Dentener et al. (2006). This volcanic inventory is based on datasets by Andres and Kasgnoc (1998) and Halmer et al. (2002) and accounts for an annual flux of ~13 Tg(S) of volcanic SO2. Our simulations suggest that volcanic degassing contributes on average ~50 CCN (>35 nm in radius) per cubic centimetre to the annual zonal mean CCN concentrations in the tropical boundary layer. The simulations also reveal complex changes in annual zonal mean total particle concentrations (CN). CN concentrations are more than double in large parts of the tropical boundary layer when comparing the unperturbed run (i.e. without volcanic degassing) to the run featuring time-averaged volcanic degassing. However, the simulations also reveal that the additional SO2 and its subsequent conversion to sulphate aerosol

  19. Reducing barriers to timely MR imaging scheduling.

    PubMed

    Wessman, Brooke V; Moriarity, Andrew K; Ametlli, Vanda; Kastan, David J

    2014-01-01

    Scheduling a magnetic resonance (MR) imaging study at the authors' large health system in 2011 required considerable preparation before an appointment time was given to a patient. Difficulties in promptly scheduling appointments resulted from the varying time required for examinations, depending on the requested protocol, availability of appropriate MR imaging equipment, examination timing, prior insurance authorization verification, and proper patient screening. These factors contributed to a backlog of patients to schedule that regularly exceeded 300. A multidisciplinary process-improvement team was assembled to improve the turnaround time for scheduling an outpatient MR imaging examination (the interval between the time when the order was received and the time when the patient was informed about the MR imaging appointment). Process improvements targeted by the team included protocol turnaround time, schedule standardization, schedule intervals, examination timing, service standards, and scheduling redesign. Using lean methods and multiple plan-do-check-act cycles, the time to schedule an outpatient MR imaging examination improved from 117 hours to 33 hours, a 72% reduction, during the 9-month study period in 2011-2012. The number of patients in the scheduling queue was reduced by 90%. Overall MR imaging examinations within the specific patient population studied increased from 773 patient studies during the first month of intervention to 1444 studies the following month and averaged over 1279 patient studies per month throughout the study.

  20. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
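    The quoted probabilities follow from the Poisson model, where the chance of at least one event in a decade is P(N >= 1) = 1 - exp(-lambda) for an expected count lambda. A minimal check, with per-decade rates chosen to roughly reproduce the abstract's figures (they are assumptions, not values taken from the paper):

```python
import math

# Poisson probability of at least one eruption in a decade:
# P(N >= 1) = 1 - exp(-lam), lam = expected events per decade.
# The rates below are illustrative choices that roughly reproduce
# the abstract's quoted probabilities.
def p_at_least_one(lam):
    return 1.0 - math.exp(-lam)

p_vei4 = p_at_least_one(7.0)    # ~7 VEI>=4 events expected -> >99%
p_vei5 = p_at_least_one(0.67)   # -> ~49%
p_vei6 = p_at_least_one(0.20)   # -> ~18%
```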

  1. Time-averaged aerodynamic loads on the vane sets of the 40- by 80-foot and 80- by 120-foot wind tunnel complex

    NASA Technical Reports Server (NTRS)

    Aoyagi, Kiyoshi; Olson, Lawrence E.; Peterson, Randall L.; Yamauchi, Gloria K.; Ross, James C.; Norman, Thomas R.

    1987-01-01

    Time-averaged aerodynamic loads are estimated for each of the vane sets in the National Full-Scale Aerodynamic Complex (NFAC). The methods used to compute global and local loads are presented. Experimental inputs used to calculate these loads are based primarily on data obtained from tests conducted in the NFAC 1/10-Scale Vane-Set Test Facility and from tests conducted in the NFAC 1/50-Scale Facility. For those vane sets located directly downstream of either the 40- by 80-ft test section or the 80- by 120-ft test section, aerodynamic loads caused by the impingement of model-generated wake vortices and model-generated jet and propeller wakes are also estimated.

  2. Time-Average Molecular Rayleigh Scattering Technique for Measurement of Velocity, Density, Temperature, and Turbulence Intensity in High Speed Nozzle Flows

    NASA Technical Reports Server (NTRS)

    Mielke, Amy F.; Seasholtz, Richard G.; Elam, Kristie A.; Panda, Jayanta

    2004-01-01

    A molecular Rayleigh scattering based flow diagnostic is developed to measure time average velocity, density, temperature, and turbulence intensity in a 25.4-mm diameter nozzle free jet facility. The spectrum of the Rayleigh scattered light is analyzed using a Fabry-Perot interferometer operated in the static imaging mode. The resulting fringe pattern containing spectral information of the scattered light is recorded using a low noise CCD camera. Nonlinear least squares analysis of the fringe pattern using a kinetic theory model of the Rayleigh scattered light provides estimates of density, velocity, temperature, and turbulence intensity of the gas flow. Resulting flow parameter estimates are presented for an axial scan of subsonic flow at Mach 0.95 for comparison with previously acquired pitot tube data, and axial scans of supersonic flow in an underexpanded screeching jet. The issues related to obtaining accurate turbulence intensity measurements using this technique are discussed.
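    The inversion from fitted spectral parameters to flow quantities can be sketched with a simplified Gaussian (collisionless) Rayleigh spectral model; this is not the kinetic-theory model the authors fit, and all numbers are illustrative:

```python
import math

# Invert the Doppler shift and spectral width of a simplified Gaussian
# Rayleigh spectrum for bulk velocity and temperature. Scattering
# wave-vector magnitude K = (4*pi/lambda)*sin(theta/2); the frequency
# shift is K*u/(2*pi) and the 1/e half-width in Hz is K*a/(2*pi) with
# a = sqrt(2*k*T/m) the most probable molecular speed. Illustrative
# numbers only, not the paper's measurements.
kB = 1.380649e-23     # Boltzmann constant, J/K
m_air = 4.81e-26      # mean mass of an air molecule, kg

def velocity_and_temperature(shift_hz, sigma_hz, wavelength, theta):
    K = (4 * math.pi / wavelength) * math.sin(theta / 2)
    u = 2 * math.pi * shift_hz / K        # bulk velocity along K, m/s
    a = 2 * math.pi * sigma_hz / K        # most probable speed, m/s
    T = m_air * a ** 2 / (2 * kB)         # temperature, K
    return u, T

u, T = velocity_and_temperature(shift_hz=7.98e8, sigma_hz=1.094e9,
                                wavelength=532e-9, theta=math.pi / 2)
# for these illustrative numbers: u ~ 300 m/s, T ~ 295 K
```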

  3. Distributed temperature measurement using a dual-core fiber with an integrated miniature turn-around

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoguang; Li, Jie; Hines, Michael J.

    2016-05-01

    Double-ended configuration is commonly deployed in Raman-based distributed temperature sensing (DTS) systems to achieve a high accuracy in temperature measurement. To show the feasibility of multicore optical fiber (MCF) in this application, we will demonstrate distributed temperature measurements using a sensor consisting of a dual-core MCF and an integrated, distal end turn-around in a double-ended configuration. The dual-core fiber and turn-around, both coated with polyimide, are suitable for high temperature use. Additionally, the device is ideal for long length, distributed temperature detection in confined spaces, with a finished outer diameter of less than 300 μm. The results show that wavelength dependent loss (WDL) is easily removed in the setup and an accurate temperature measurement can be achieved reliably over a wide temperature range.

  4. Quantification of benzene, toluene, ethylbenzene and o-xylene in internal combustion engine exhaust with time-weighted average solid phase microextraction and gas chromatography mass spectrometry.

    PubMed

    Baimatova, Nassiba; Koziel, Jacek A; Kenessov, Bulat

    2015-05-11

    A new and simple method for benzene, toluene, ethylbenzene and o-xylene (BTEX) quantification in vehicle exhaust was developed based on diffusion-controlled extraction onto a retracted solid-phase microextraction (SPME) fiber coating. The rationale was to develop a method based on existing and proven SPME technology that is feasible for field adaptation in developing countries. Passive sampling with SPME fiber retracted into the needle extracted nearly two orders of magnitude less mass (n) compared with exposed fiber (outside of needle) and sampling was in a time weighted-averaging (TWA) mode. Both the sampling time (t) and fiber retraction depth (Z) were adjusted to quantify a wider range of Cgas. Extraction and quantification is conducted in a non-equilibrium mode. Effects of Cgas, t, Z and T were tested. In addition, contribution of n extracted by metallic surfaces of needle assembly without SPME coating was studied. Effects of sample storage time on n loss was studied. Retracted TWA-SPME extractions followed the theoretical model. Extracted n of BTEX was proportional to Cgas, t, Dg, T and inversely proportional to Z. Method detection limits were 1.8, 2.7, 2.1 and 5.2 mg m(-3) (0.51, 0.83, 0.66 and 1.62 ppm) for BTEX, respectively. The contribution of extraction onto metallic surfaces was reproducible and influenced by Cgas and t and less so by T and by the Z. The new method was applied to measure BTEX in the exhaust gas of a Ford Crown Victoria 1995 and compared with a whole gas and direct injection method. PMID:25911428
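    The stated proportionalities (n proportional to Cgas, t, Dg and inversely proportional to Z) correspond to Fick's first law for diffusion along the needle; inverting it gives the TWA concentration from the extracted mass. A sketch with assumed, illustrative fiber/needle values, not the paper's calibration:

```python
# Retracted-fiber TWA-SPME: under Fick's first law the extracted mass
# is n = Dg * A * Cgas * t / Z, so the time-weighted-average gas
# concentration follows from the measured mass. All parameter values
# below are assumed for illustration (needle opening area, retraction
# depth, sampling time, and a literature-order-of-magnitude Dg).
def cgas_from_mass(n_ng, Dg_cm2_s, area_cm2, t_s, Z_cm):
    """Return TWA gas concentration (ng/cm^3) from extracted mass (ng)."""
    return n_ng * Z_cm / (Dg_cm2_s * area_cm2 * t_s)

C = cgas_from_mass(n_ng=0.5,          # mass on fiber, ng
                   Dg_cm2_s=0.09,     # gas-phase diffusivity, cm^2/s
                   area_cm2=7.85e-5,  # needle opening cross-section
                   t_s=3600,          # 1 h sampling
                   Z_cm=0.5)          # retraction depth
# ~9.8 ng/cm^3, i.e. ~9.8 mg/m^3, the same order as the quoted MDLs
```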

  6. Urban-rural migration in Australia: a process view of the turnaround.

    PubMed

    Hugo, G J; Smailes, P J

    1985-01-01

    "This paper summarizes the evidence for the existence of a trend toward population deconcentration in Australia at the macro and meso (national and state) levels. It seeks to relate processes of population dynamics identified at macro, meso and micro levels to several hypotheses which have been put forward to explain the turnaround. A preliminary explanatory model which links causal mechanisms operating at different levels of the spatial and urban hierarchy is outlined."

  7. The challenge and the future of health care turnaround plans: evidence from the Italian experience.

    PubMed

    Ferrè, Francesca; Cuccurullo, Corrado; Lega, Federico

    2012-06-01

    Over the last two decades, health policy and governance in Italy have undergone decentralisation at the regional level. The central government was expected to play a guiding role in defining minimum care standards and controlling health expenditures at the regional level in order to keep the entire Italian National Health System (INHS) on track. Although health performance trends have been consistent across regions, public health expenditures have been variable and contributed to a cumulative deficit of 38 billion Euros from 2001 to 2010. To address the deficit, the government called for a resolution introducing a partial bail-out plan and later institutionalised a process to facilitate a turnaround. The upturn started with the development of a formal regional turnaround plan that proposed strategic actions to address the structural determinants of costs. The effectiveness of this tool was widely questioned, and many critics suggested that it was focused more on methods to address short-term issues than on the long-term strategic reconfiguration that is required for regional health systems to ultimately address the structural causes of deficits. We propose an interpretative framework to understand the advantages and disadvantages of turnaround plans, and we apply the findings to the development of policy recommendations for the structure, methods, processes and contexts of the implementation of this tool.

  8. Instantaneous and time-averaged flow structures around a blunt double-cone with or without supersonic film cooling visualized via nano-tracer planar laser scattering

    NASA Astrophysics Data System (ADS)

    Zhu, Yang-Zhu; Yi, Shi-He; He, Lin; Tian, Li-Feng; Zhou, Yong-Wei

    2013-01-01

    In a Mach 3.8 wind tunnel, both instantaneous and time-averaged flow structures of different scales around a blunt double-cone with or without supersonic film cooling were visualized via nano-tracer planar laser scattering (NPLS), which has a high spatiotemporal resolution. Three experimental cases with different injection mass flux rates were carried out. Many typical flow structures were clearly shown, such as shock waves, expansion fans, shear layers, mixing layers, and turbulent boundary layers. The analysis of two NPLS images with an interval of 5 μs revealed the temporal evolution characteristics of flow structures. With matched pressures, the laminar length of the mixing layer was longer than that in the case with a larger mass flux rate, but the full covered region was shorter. Structures like K-H (Kelvin-Helmholtz) vortices were clearly seen in both flows. Without injection, the flow was similar to the supersonic flow over a backward-facing step, and the structures were relatively simpler, and there was a longer laminar region. Large scale structures such as hairpin vortices were visualized. In addition, the results were compared in part with the schlieren images captured by others under similar conditions.

  9. Evaluation of the inverse dispersion modelling method for estimating ammonia multi-source emissions using low-cost long time averaging sensor

    NASA Astrophysics Data System (ADS)

    Loubet, Benjamin; Carozzi, Marco

    2015-04-01

    Tropospheric ammonia (NH3) is a key player in atmospheric chemistry and its deposition is a threat for the environment (ecosystem eutrophication, soil acidification and reduction in species biodiversity). Most of the NH3 global emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emission sources from a homogeneous source of known geometry. When the emission derives from different sources inside of the measured footprint, the emission should be treated as a multi-source problem. This work aims at estimating whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments, composed of small fields (typically squares of 25 m side) located near to each other, using low-cost NH3 measurements (diffusion samplers). To do that, a numerical experiment was designed with a combination of 3 x 3 square field sources (625 m2), and a set of sensors placed at the centre of each field at several heights as well as at 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian Stochastic (WindTrax) and a Gaussian-like (FIDES) dispersion model. The concentrations were averaged over various integration times (3 hours to 28 days), to mimic the diffusion sampler behaviour with several sampling strategies. The sources were then inferred by inverse modelling using the averaged concentration and the same models in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of the NH3 emissions to surface temperature. A combination of emission patterns (constant, linear decreasing, exponential decreasing and Gaussian type) and strengths were used to evaluate the uncertainty of the inversion method. Each numerical experiment covered a period of 28
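    The multi-source inversion can be sketched as ordinary least squares: if D[i][j] is the modelled concentration at sensor i per unit emission of source j, the measured time-averaged concentrations C determine the source strengths S from C = D S. The dispersion matrix below is a made-up stand-in for a WindTrax/FIDES run:

```python
# Least-squares multi-source inversion: solve the normal equations
# (D^T D) S = D^T C for source strengths S. The dispersion matrix D
# and true strengths are illustrative, not model output.
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def infer_sources(D, C):
    n = len(D[0])
    DtD = [[sum(D[i][a] * D[i][b] for i in range(len(D))) for b in range(n)]
           for a in range(n)]
    DtC = [sum(D[i][a] * C[i] for i in range(len(D))) for a in range(n)]
    return solve(DtD, DtC)

D = [[0.80, 0.10, 0.05],   # sensor 1 response per unit emission of sources 1-3
     [0.10, 0.70, 0.10],
     [0.05, 0.20, 0.90],
     [0.20, 0.20, 0.20]]   # a distant sensor seeing all three sources
S_true = [2.0, 1.0, 3.0]
C = [sum(d * s for d, s in zip(row, S_true)) for row in D]  # noise-free
S_est = infer_sources(D, C)  # recovers S_true in this noise-free case
```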

  10. Using corporate finance to engineer an organizational turnaround.

    PubMed

    Sussman, Jason H; Dziesinski, Ray R

    2002-11-01

    Georgia's Southern Regional Medical Center used a proven corporate finance approach to dramatically improve its financial position and integrate its strategic and financial planning. Managers throughout the organization were educated about principles of corporate finance. Reliable cash-flow projections were used to create a multiyear glide path to financial stability. Initiatives were tied to specific time frames and quantifiable financial goals and underwent a standardized review process.

  12. Adaptive beamforming at very low frequencies in spatially coherent, cluttered noise environments with low signal-to-noise ratio and finite-averaging times

    PubMed

    Nuttall; Wilson

    2000-11-01

    Realistic simulations with spatially coherent noise have been run in order to compare the performance of adaptive beamforming (ABF), inverse beamforming (IBF), and conventional beamforming (CBF) for the case of finite-averaging times, where the actual spatial coherence of the acoustic field, or covariance matrix, is not known a priori, but must be estimated. These estimation errors cause large errors in the ABF estimate of the directionality of the acoustic field, partly because ABF is a highly nonlinear algorithm. In addition, it is shown that ABF is fundamentally limited in its suppression capability at very low frequency (VLF), based on the sidelobe level of the conventional beampattern in the direction of the noise interferer [G. L. Mohnkern, "Effects of Errors and Limitations on Interference Suppression," NOSC Technical Document 1478, Naval Ocean Systems Center (1989)]. The simulations include a low-level plane wave signal of interest, a stronger noise plane wave interferer, and spatially random background noise. Both IBF and ABF performed significantly better than CBF, and IBF's performance was slightly better than ABF's performance. The performances of IBF and the ABF algorithm, the minimum variance distortionless response (MVDR) [A. H. Nuttall and D. W. Hyde, "Unified Approach to Optimum and Suboptimum Processing for Arrays," USL Report Number 992, Naval Underwater Systems Center, New London, CT (22 April 1969)] were recently compared independently [J. S. D. Solomon, A. J. Knight, and M. V. Greening, "Sonar Array Signal Processing for Sparse Linear Arrays," Defense Science and Technology Organization (DSTO) Technical Report (June 1999)] using measured data, with the result that IBF outperformed MVDR. This result is significant because MVDR requires orders of magnitude more processing power than IBF or CBF.
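    The MVDR algorithm compared above computes weights w = R^-1 a / (a^H R^-1 a) for steering vector a and covariance R. A toy sketch using a closed-form inverse (white noise plus a single interferer, via Sherman-Morrison); it illustrates the formula only, not the cited comparisons:

```python
import cmath
import math

# MVDR (Capon) beamformer weights: w = R^-1 a / (a^H R^-1 a).
# For R = sigma2*I + p*v v^H (white noise plus one plane-wave
# interferer with steering vector v), the Sherman-Morrison identity
# gives R^-1 in closed form. Array and signal parameters are assumed
# for illustration.
N = 8  # elements, half-wavelength spacing

def steer(sin_theta):
    return [cmath.exp(1j * math.pi * n * sin_theta) for n in range(N)]

def mvdr_weights(a, v, sigma2, p):
    vha = sum(x.conjugate() * y for x, y in zip(v, a))   # v^H a
    scale = p / (sigma2 + p * N)                         # ||v||^2 = N
    Rinv_a = [(x - scale * vx * vha) / sigma2 for x, vx in zip(a, v)]
    denom = sum(x.conjugate() * y for x, y in zip(a, Rinv_a)).real
    return [x / denom for x in Rinv_a]

a = steer(0.0)   # look direction: broadside
v = steer(0.3)   # interferer direction
w = mvdr_weights(a, v, sigma2=1.0, p=100.0)
# w^H a = 1 (distortionless constraint); |w^H v| is far below the
# conventional beamformer's sidelobe response |v^H a| / N
```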

  13. Field demonstration of rapid turnaround, multilevel groundwater screening

    SciTech Connect

    Tingle, A.R.; Baker, L.; Long, D.D.; Miracle, M.

    1994-09-01

    A combined technology approach to rapidly characterizing source area and downgradient groundwater associated with a past fuel spill has been field tested. The purpose of this investigation was to determine the presence and extent of fuel-related compounds or indications of their biodegradation in groundwater. The distance from the source area to be investigated was established by calculating the potential extent of a plume based only on groundwater flow velocities. To accomplish this objective, commercially available technologies were combined and used to rapidly assess the source area and downgradient groundwater associated with the fuel discharge. The source of contamination that was investigated overlies glacial sand and gravel outwash deposits. Historical data suggest that from 1955 to 1970 as many as 1 to 6 million gal of aviation gasoline (AVGAS) were spilled at the study area. Although the remedial investigation (RI) for this study area indicated fuel-related groundwater contamination at the source area, fuel-related contamination was not detected in downgradient monitoring wells. Rapid horizontal groundwater velocities and the 24-year time span from the last reported spill further suggest that a plume of contaminated groundwater could extend several thousand feet downgradient. The lack of contamination downgradient from the source suggests two possibilities: (1) monitoring wells installed during the RI did not intersect the plume or (2) fuel-related compounds had naturally degraded.

  14. Fresh policies and procedures, transparency fuel ED turnaround.

    PubMed

    2016-05-01

    The ED at Banner Payson Medical Center in Payson, AZ, has charted dramatic improvements on key metrics through a range of staff and policy changes. In just a few months, the ED has halved wait times, patient satisfaction has improved, and daily volume is up. Administrators say the secret to the success of the effort is a move to be transparent by posting key metrics regarding patient flow, a tactic that has helped the team pull together and feel a sense of accomplishment when performance goals are achieved. Administrators adjusted staff schedules to better match patient volume patterns in the ED, and they added staff to help nurses during peak hours. A new provider group enlisted the assistance of scribes during peak hours to manage the documentation workload while also enabling physicians to focus more on patient interaction. Physicians hold end-of-shift huddles to review successes and challenges, and to improve physician/nurse communication. The tactic also helps develop the staff from an educational standpoint. PMID:27266001

  15. Turnaround Principals

    ERIC Educational Resources Information Center

    McLester, Susan

    2011-01-01

    The Obama administration has grand hopes for turning around the nation's lowest-performing schools, in part by allocating $3.5 billion for School Improvement Grants. Unfortunately, there simply aren't enough qualified principals to replace those mandated to be fired under two of the four school improvement models that the federal government says…

  16. Signal-to-noise ratio improvements in laser flow diagnostics using time-resolved image averaging and high dynamic range imaging

    NASA Astrophysics Data System (ADS)

    Giassi, Davide; Long, Marshall B.

    2016-08-01

    Two alternative image readout approaches are demonstrated to improve the signal-to-noise ratio (SNR) in temporally resolved laser-based imaging experiments of turbulent phenomena. The first method exploits the temporal decay characteristics of the phosphor screens of image intensifiers when coupled to an interline-transfer CCD camera operated in double-frame mode. Specifically, the light emitted by the phosphor screen, which has a finite decay constant, is equally distributed and recorded over the two sequential frames of the detector so that an averaged image can be reconstructed. The characterization of both detector and image intensifier showed that the technique preserves the correct quantitative information, and its applicability to reactive flows was verified using planar Rayleigh scattering and tested with the acquisition of images of both steady and turbulent partially premixed methane/air flames. The comparison between conventional Rayleigh results and the averaged ones showed that the SNR of the averaged image is higher than the conventional one; with the setup used in this work, the gain in SNR was seen to approach 30 %, for both the steady and turbulent cases. The second technique uses the two-frame readout of an interline-transfer CCD to increase the image SNR based on high dynamic range imaging, and it was tested in an unsteady non-reactive flow of Freon-12 injected in air. The result showed a 15 % increase in the SNR of the low-pixel-count regions of an image, when compared to the pixels of a conventionally averaged one.
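    The second technique rests on the standard high-dynamic-range merge idea: prefer the long-exposure pixel where it is unsaturated (better SNR), otherwise fall back to the short-exposure pixel scaled by the exposure ratio. A minimal sketch of that idea, not the authors' exact pipeline:

```python
# HDR merge of a two-frame readout: use the long-exposure pixel where
# it is below saturation, otherwise the scaled short-exposure pixel.
# Pixel values, exposure ratio, and full-well count are illustrative.
def hdr_merge(short, long_, ratio, full_well=4095):
    """short/long_: lists of pixel counts; ratio = t_long / t_short."""
    out = []
    for s, l in zip(short, long_):
        if l < 0.9 * full_well:        # long frame usable: better SNR
            out.append(l / ratio)      # normalize to short-exposure scale
        else:                          # long frame saturated
            out.append(float(s))
    return out

short = [10, 400, 3000]
long_ = [80, 3200, 4095]               # 8x exposure; last pixel saturated
merged = hdr_merge(short, long_, ratio=8.0)
```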

  17. Anatomy of a public health agency turnaround: the case of the general health district in Mahoning County.

    PubMed

    Honoré, Peggy A; Stefanak, Matthew; Dessens, Scott

    2012-01-01

    A turnaround describes an organization's ability to recover from successive periods of decline. Current and projected declines in US economic conditions continue to place local public health departments at risk of fiscal exigency. This examination focused on turnaround methodologies used by a local public health department to reverse successive periods of operational and financial declines. Illustrations are provided on the value added by implementing financial ratio and trend analysis in addition to using evidence-based private sector turnaround strategies of retrenchment, repositioning, and reorganization. Evidence has shown how the financial analysis and strategies aided in identifying operational weakness and set in motion corrective measures. The Public Health Uniform Data System is introduced along with a list of standards offered for mainstreaming these and other routine stewardship practices to diagnose, predict, and prevent agency declines.

  18. Neutron resonance averaging

    SciTech Connect

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.

  19. Paradoxes in Averages.

    ERIC Educational Resources Information Center

    Mitchem, John

    1989-01-01

    Examples used to illustrate Simpson's paradox for secondary students include probabilities, university admissions, batting averages, student-faculty ratios, and average and expected class sizes. Each result is explained. (DC)
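
    The batting-average example of Simpson's paradox can be demonstrated concretely. The records below are illustrative numbers for two hypothetical players: one out-hits the other in each half-season, yet trails over the full season.

```python
from fractions import Fraction

# (hits, at-bats) per half-season for two hypothetical players
a_first, a_second = (12, 48), (183, 582)
b_first, b_second = (104, 411), (45, 140)

def avg(record):
    hits, at_bats = record
    return Fraction(hits, at_bats)

def combined(first, second):
    # Season average pools hits and at-bats, it is NOT the mean of the
    # two half-season averages -- the source of the paradox.
    return Fraction(first[0] + second[0], first[1] + second[1])

# B out-hits A in each half-season...
assert avg(b_first) > avg(a_first) and avg(b_second) > avg(a_second)
# ...yet A has the better season average: Simpson's paradox.
assert combined(a_first, a_second) > combined(b_first, b_second)
print(round(float(combined(a_first, a_second)), 3),
      round(float(combined(b_first, b_second)), 3))
```

    The paradox arises because the half-season averages are weighted by very different at-bat counts when pooled.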

  20. On the use of time-averaging restraints when deriving biomolecular structure from [Formula: see text]-coupling values obtained from NMR experiments.

    PubMed

    Smith, Lorna J; van Gunsteren, Wilfred F; Hansen, Niels

    2016-09-01

    Deriving molecular structure from [Formula: see text]-couplings obtained from NMR experiments is a challenge due to (1) the uncertainty in the Karplus relation [Formula: see text] connecting a [Formula: see text]-coupling value to a torsional angle [Formula: see text], (2) the need to account for the averaging inherent to the measurement of [Formula: see text]-couplings, and (3) the sampling road blocks that may emerge due to the multiple-valuedness of the inverse function [Formula: see text] of the function [Formula: see text]. Ways to properly handle these issues in structure refinement of biomolecules are discussed and illustrated using the protein hen egg white lysozyme as example.
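
    The difficulties listed can be made concrete with a small sketch. The Karplus coefficients below are illustrative values only, not a specific published parametrization (uncertainty (1) above); the scan shows point (3), the multiple-valuedness of the inverse: several distinct torsion angles reproduce the same J-coupling.

```python
import math

# Illustrative Karplus coefficients: J(theta) = A*cos^2(theta) + B*cos(theta) + C
A, B, C = 6.4, -1.4, 1.9

def karplus(theta_deg):
    c = math.cos(math.radians(theta_deg))
    return A * c * c + B * c + C

# Scan integer torsion angles and collect those consistent with J = 5 Hz.
j_target = 5.0
matches = [t for t in range(-180, 181) if abs(karplus(t) - j_target) < 0.1]
print(len(matches) >= 4)  # the inverse theta(J) is multiple-valued
```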

  1. Sputum smear microscopy referral rates and turnaround time in the Tonga Islands.

    PubMed

    Fonua, L; Bissell, K; Vivili, P; Gounder, S; Hill, P C

    2014-06-21

    Setting: The national tuberculosis programme and the reference laboratory of Tonga, located on the main island, Tongatapu, and three district hospital laboratories on other islands. Objectives: To compare Tongatapu with the other islands in terms of sputum referral rates, number of specimens per patient, specimen quality, smear results, result turnaround time, and the interval between result and treatment initiation. Design: A retrospective study of the laboratory records and anti-tuberculosis treatment registers of Tonga's four hospitals between 2003 and 2012. Results: Of 3078 sputum specimens, 71.7% were of good quality. The sputum referral rate was almost twice as high in Tongatapu as in the outer islands (353 vs 180/100,000). Sputum turnaround times in Tongatapu and the outer islands were 4.02 and 4.11 days, respectively. Of 83 positive cases, 91.2% were treated the same day in Tongatapu, compared with 80% in the outer islands. Conclusion: There were differences between the main island and the outer islands in sputum examination rates, but not in turnaround time. The data on smear quality and completion dates have limitations that warrant an intervention with TB-specific guidelines and registers. Further research is needed to understand the differences in referral rates.

  2. The Race Is On To Shorten the Turnaround Time for Diagnosis of Multidrug-Resistant Tuberculosis

    PubMed Central

    Somoskovi, Akos

    2015-01-01

    To realize the most benefit from multidrug-resistant tuberculosis (MDR-TB) screening, all nucleic acid amplification test (NAAT)-positive respiratory specimens should be universally tested. Once an MDR-TB diagnosis is established, additional testing is warranted to provide details about the detected mutations. The lab-on-chip technology described by A. M. Cabibbe et al. (J Clin Microbiol 53:3876–3880, 2015, http://dx.doi.org/10.1128/JCM.01824-15) potentially provides this much needed information. PMID:26378276

  3. On the use of time-averaging restraints when deriving biomolecular structure from [Formula: see text]-coupling values obtained from NMR experiments.

    PubMed

    Smith, Lorna J; van Gunsteren, Wilfred F; Hansen, Niels

    2016-09-01

    Deriving molecular structure from [Formula: see text]-couplings obtained from NMR experiments is a challenge due to (1) the uncertainty in the Karplus relation [Formula: see text] connecting a [Formula: see text]-coupling value to a torsional angle [Formula: see text], (2) the need to account for the averaging inherent to the measurement of [Formula: see text]-couplings, and (3) the sampling road blocks that may emerge due to the multiple-valuedness of the inverse function [Formula: see text] of the function [Formula: see text]. Ways to properly handle these issues in structure refinement of biomolecules are discussed and illustrated using the protein hen egg white lysozyme as example. PMID:27627888

  4. The Turnaround Challenge: Why America's Best Opportunity to Dramatically Improve Student Achievement Lies in Our Worst-Performing Schools. Supplement to the Main Report

    ERIC Educational Resources Information Center

    Calkins, Andrew; Guenther, William; Belfiore, Grace; Lash, Dave

    2007-01-01

    The turnaround recommendations and framework in "The Turnaround Challenge" grew out of both new research and synthesis of extensive existing research, as carried out by Mass Insight Education & Research Institute and its partners since September 2005. If the main report is the tip of the proverbial iceberg, this supplement represents at least part…

  5. Portfolio District Reform Meets School Turnaround: Early Implementation Findings from the Los Angeles Public School Choice Initiative

    ERIC Educational Resources Information Center

    Marsh, Julie A.; Strunk, Katharine O.; Bush, Susan

    2013-01-01

    Purpose: Despite the popularity of school "turnaround" and "portfolio district" management as solutions to low performance, there has been limited research on these strategies. The purpose of this paper is to address this gap by exploring the strategic case of Los Angeles Unified School District's Public School Choice…

  6. School Improvement and Urban Renewal: The Impact of a Turnaround School's Performance on Real Property Values in Its Surrounding Community

    ERIC Educational Resources Information Center

    Jacobson, Stephen L.; Szczesek, Jill

    2013-01-01

    This study investigates the economic impact of a "turnaround" school on real property values in its surrounding community as related to the argument introduced by Tiebout in 1956 correlating local public goods, in this case school success, to housing-location decision making. Using single-family home sales found on the Multiple Listing System and…

  7. The Double Bind for Women: Exploring the Gendered Nature of Turnaround Leadership in a Principal Preparation Program

    ERIC Educational Resources Information Center

    Weiner, Jennie Miles; Burton, Laura J.

    2016-01-01

    In this study of nine participants in a turnaround principal preparation program, Jennie Miles Weiner and Laura J. Burton explore how gender role identity shaped participants' views of effective principal leadership and their place within it. The authors find that although female and male participants initially framed effective leadership…

  8. The Reading Turn-Around: A Five Part Framework for Differentiated Instruction. Practitioners Bookshelf, Language & Literacy Series

    ERIC Educational Resources Information Center

    Jones, Stephanie; Clarke, Lane; Enriquez, Grace

    2009-01-01

    This book demonstrates a five-part framework for teachers, reading specialists, and literacy coaches who want to help their least engaged students become powerful readers. Merging theory and practice, the guide offers successful strategies to reach your "struggling" learners. The authors show how you can "turn-around" your instructional practice,…

  9. Developing Arizona Turnaround Leaders to Build High-Capacity Schools in the Midst of Accountability Pressures and Changing Demographics

    ERIC Educational Resources Information Center

    Ylimaki, Rose M.; Brunderman, Lynnette; Bennett, Jeffrey V.; Dugan, Thad

    2014-01-01

    Today's accountability policies and changing demographics have created conditions in which leaders must rapidly build school capacity and improve outcomes in culturally diverse schools. This article presents findings from a mixed-methods evaluation of an Arizona Turnaround Leadership Development Project. The project drew on studies of…

  10. A Rural School/Community: A Case Study of a Dramatic Turnaround & Its Implications for School Improvement.

    ERIC Educational Resources Information Center

    Carlson, Robert V.

    This paper presents a case study of a rural community exhibiting a dramatic turnaround in community support for a new school bond issue. Demographic change was partly responsible for the change in community attitudes, with two waves of immigration altering the long-term conservative orientation of this community. After a series of failed…

  11. Orbit-on-demand vehicles - Mission possible, but one-day turnaround?

    NASA Technical Reports Server (NTRS)

    Talay, T. A.; Morris, W. D.; Lemessurier, R. W.

    1985-01-01

    The next generation launch-on-demand manned orbital vehicles will ideally be able to perform plane change maneuvers for rendezvous. Plane-changes of, e.g., 11 deg, will require a 3000 fps on-orbit maneuver velocity. Any plane-change capabilities beyond 800 fps rapidly increase the necessary vehicle weight. Sensitivity studies of the payload weight/vehicle weight, the presence or absence of air-breathing engines, high acceleration, etc., indicated that advances are required in the payload stowage pallet, thermal protection system, fault detection electronics, line-replaceable units, and accessibility to realize one-day turnaround. The spacecraft would also need autonomous systems checks and launch from minimum support sites reached by air ferry. A horizontal take-off, two-stage subsonic launch vehicle so far offers the lowest weight, maximum flexibility and minimum support combination of all configurations under consideration.

  12. Comparison of two different passive air samplers (PUF-PAS versus SIP-PAS) to determine time-integrated average air concentration of volatile hydrophobic organic pollutants

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Kyu; Park, Jong-Eun

    2014-06-01

    Despite remarkable achievements for some chemicals, field-measurement techniques remain underdeveloped for the volatile hydrophobic organic chemicals (HOCs) that are the subject of international concern. This study assesses the applicability of passive air sampling (PAS) by comparing PUF-PAS with its modified version, SIP-PAS, made by impregnating XAD-4 powder into PUF; it also reviews the principles of PAS, screens sensitive parameters, and determines the uncertainty range of PAS-derived concentrations. The PAS air sampling rate determined in this study, corrected against a co-deployed low-volume active air sampler (LAS) using neutral PFCs as model chemicals, was ~1.2 m3 day-1. Our assessment shows that the improved sorption capacity of a SIP lengthens the PAS deployment duration by expanding the linear uptake range, thereby enlarging the effective air sampling volume and the detection frequency of chemicals at trace levels. Consequently, volatile chemicals can be collected over sufficiently long periods without reaching equilibrium when using SIP, which is not possible with PUF. The parameter to which PAS-derived CA was most sensitive was the air-side mass transfer coefficient (kA), implying the necessity of spiking depuration chemicals (DCs), because this parameter is strongly related to meteorological conditions. Uncertainty in partition coefficients (KPSM-A or KOA) influences PAS-derived CA to a greater extent for lower-KPSM-A chemicals. Also, the PAS-derived CA has an uncertainty range from half to three times the calculated level. This work is expected to establish solid grounds for improving field-measurement techniques for HOCs.
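
    In the linear-uptake regime, the PAS-derived air concentration reduces to dividing the sequestered mass by the effective air sampling volume. A minimal sketch using the ~1.2 m3/day rate reported above; the deployment length and analyte mass are hypothetical.

```python
# C_A = m / (R * t): mass sequestered over the effective air volume R * t.
sampling_rate = 1.2     # m3/day, the LAS-corrected rate from the study
deployment_days = 56    # hypothetical deployment duration
mass_ng = 33.6          # hypothetical analyte mass recovered from the sorbent, ng

effective_volume = sampling_rate * deployment_days  # m3 of air sampled
c_air = mass_ng / effective_volume                  # ng/m3
print(round(effective_volume, 1), round(c_air, 3))  # 67.2 m3 -> 0.5 ng/m3
```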

  13. Determining GPS average performance metrics

    NASA Technical Reports Server (NTRS)

    Moore, G. V.

    1995-01-01

    Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego as over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.
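
    One source of the non-uniformity mentioned above can be sketched: a satellite in an inclined circular orbit spends disproportionate time near its extreme latitudes. This simplified model (not the paper's tool) assumes sin(phi) = sin(i)·sin(u) with the argument of latitude u uniform in time, ignoring Earth rotation and perturbations, and takes the nominal ~55 deg GPS inclination.

```python
import math

INCLINATION = math.radians(55.0)  # nominal GPS orbit inclination

def dwell_fraction(phi_lo_deg, phi_hi_deg):
    """Fraction of an orbit spent between two latitudes, counting both
    hemispheres and both ascending/descending passes."""
    u_lo = math.asin(math.sin(math.radians(phi_lo_deg)) / math.sin(INCLINATION))
    u_hi = math.asin(math.sin(math.radians(phi_hi_deg)) / math.sin(INCLINATION))
    return 2.0 * (u_hi - u_lo) / math.pi

equatorial = dwell_fraction(0, 10)   # 10-degree belt at the equator
extreme = dwell_fraction(45, 55)     # 10-degree belt at the turning latitude
print(round(equatorial, 3), round(extreme, 3))
```

    Dwell time piles up near the turning latitude (~55 deg, roughly Tierra del Fuego's latitude), consistent with the abstract's example.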

  14. Vibration analysis by time-average holography

    NASA Technical Reports Server (NTRS)

    Aprahamian, R.; Evensen, D. A.

    1971-01-01

    Method photographs vibration modes as high as 100 kHz. Technique also measures vibration decay and damping characteristics. Compensation techniques for structural size limitations and for background noise excitations are discussed.

  15. Average density in cosmology

    SciTech Connect

    Bonnor, W.B.

    1987-05-01

    The Einstein-Straus (1945) vacuole is here used to represent a bound cluster of galaxies embedded in a standard pressure-free cosmological model, and the average density of the cluster is compared with the density of the surrounding cosmic fluid. The two are nearly but not quite equal, and the more condensed the cluster, the greater the difference. A theoretical consequence of the discrepancy between the two densities is discussed. 25 references.

  16. Residual life assessment of the SSME/ATD HPOTP turnaround duct (TAD)

    NASA Technical Reports Server (NTRS)

    Gross, R. Steven

    1996-01-01

    This paper is concerned with the prediction of the low cycle thermal fatigue behavior of a component in a developmental (ATD) high pressure liquid oxygen turbopump (HPOTP) for the Space Shuttle Main Engine (SSME). This component is called the Turnaround Duct (TAD). The TAD is a complex single piece casting of MAR-M-247 material. Its function is to turn the hot turbine exhaust gas (1200 F hydrogen-rich gas stream) such that it can exhaust radially out of the turbopump. In very simple terms, the TAD consists of two rings connected axially by 22 hollow airfoil shaped struts with the turning vanes placed at the top, middle, and bottom of each strut. The TAD is attached to the other components of the pump via bolts passing through 14 of the 22 struts. Of the remaining 8 struts, four are equally spaced (at 90 deg intervals) and each contains a cooling tube through which liquid hydrogen passes on its way to cool the shaft bearing assemblies. The remaining 4 struts are empty. One of the pump units in the certification test series was destructively examined after 22 test firings. Substantial axial cracking was found in two of the struts which contain cooling tubes. None of the other 20 struts showed any sign of internal cracking. This unusual low cycle thermal fatigue behavior within the two cooling tube struts is the focus of this study.

  17. A Bound Violation on the Galaxy Group Scale: The Turn-around Radius of NGC 5353/4

    NASA Astrophysics Data System (ADS)

    Lee, Jounghun; Kim, Suk; Rey, Soo-Chang

    2015-12-01

    The first observational evidence for the violation of the maximum turn-around radius on the galaxy group scale is presented. The NGC 5353/4 group is chosen as an ideal target for our investigation of the bound-violation because of its proximity, low-density environment, optimal mass scale, and the existence of a nearby thin straight filament. Using the observational data on the line-of-sight velocities and three-dimensional distances of the filament galaxies located in the bound zone of the NGC 5353/4 group, we construct their radial velocity profile as a function of separation distance from the group center and then compare it to the analytic formula obtained empirically by Falco et al. to find the best-fit value of an adjustable parameter with the help of the maximum likelihood method. The turn-around radius of NGC 5353/4 is determined to be the separation distance where the adjusted analytic formula for the radial velocity profile yields zero. The estimated turn-around radius of NGC 5353/4 turned out to substantially exceed the upper limit predicted by the spherical model based on the ΛCDM cosmology. Even when the restrictive condition of spherical symmetry is released, the estimated value is found to be only marginally consistent with the ΛCDM expectation.
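
    Operationally, the turn-around radius estimate above is the zero crossing of a fitted radial velocity profile. The profile below is a hypothetical stand-in for the Falco et al. formula (an infall term plus Hubble flow, with made-up parameters), used only to illustrate the zero-finding step.

```python
def v_radial(r, v_turn=3.0, hubble=0.07):
    # Hypothetical profile: infall decaying with radius plus Hubble expansion.
    return -v_turn * (1.0 / r) ** 1.5 + hubble * r

def bisect_zero(f, lo, hi, tol=1e-10):
    """Bisection: shrink [lo, hi] while keeping the sign change inside."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# The turn-around radius is where the radial velocity profile yields zero.
r_turn = bisect_zero(v_radial, 0.5, 50.0)
print(round(r_turn, 3))
```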

  18. Americans' Average Radiation Exposure

    SciTech Connect

    NA

    2000-08-11

    We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  19. On radar time and the twin ``paradox''

    NASA Astrophysics Data System (ADS)

    Dolby, Carl E.; Gull, Stephen F.

    2001-12-01

    In this paper we apply the concept of radar time (popularized by Bondi in his work on k calculus) to the well-known relativistic twin "paradox." Radar time is used to define hypersurfaces of simultaneity for a class of traveling twins, from the "immediate turn-around" case, through the "gradual turn-around" case, to the "uniformly accelerating" case. We show that this definition of simultaneity is independent of choice of coordinates, and assigns a unique time to any event (with which the traveling twin can send and receive signals), resolving some common misconceptions.
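
    The radar-time construction is simple to state in code: an event is assigned the half-sum of the emission and reception times of a radar pulse reflected at the event, and its distance the half-difference. Units with c = 1; the numbers are hypothetical.

```python
def radar_time(t_send, t_receive):
    """Radar time of an event: mean of emission and reception proper times."""
    return 0.5 * (t_send + t_receive)

def radar_distance(t_send, t_receive):
    """Radar distance: half the round-trip light travel time (c = 1)."""
    return 0.5 * (t_receive - t_send)

# An event 3 light-units away, radar-simultaneous with the observer's tau = 5:
t_send, t_receive = 2.0, 8.0
print(radar_time(t_send, t_receive), radar_distance(t_send, t_receive))  # 5.0 3.0
```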

  20. DIGIT-PHYSICS: Digits Are Bosons Are Quanta Because (On Average) Quanta and Bosons Are and Always Were Digits!!! DIGITS?: For a Very Long Time Giving Us All The FINGER!!!

    NASA Astrophysics Data System (ADS)

    Siegel, Edward Carl-Ludwig; Newcomb, Simon; Strutt-Rayleigh, John William; Poincare, Henri; Weyl, Hermann; Benford, Frederick; Antonoff, Marvin

    2015-03-01

    DIGIT-PHYSICS: DIGITS?: For a Very Long Time Giving Us All The FINGER!!!: CONTRA Wigner, ``On the Unreasonable Effectiveness of Physics in Mathematics!'' A Surprise in Theoretical/Experimental Physics and/or Ostensibly Pure-Mathematics: PHYSICS: Quantum-Mechanics/Statistical-Mechanics DIGITS-LAW(S); DIGITS' ostensibly ``pure-mathematics'' 1:1-map onto the QUANTUM!!! [Google: ``http://www.benfordonline.net/list/chronological'']: Newcomb[Am.J.Math.4,39(1881)]-Poincare[Calcul des Probabilités(1912)]-Weyl[Math.Ann.,77,313(1916)]-Benford[J.Am.Phil.Soc.,78,115(1938)]-..-Antonoff/Siegel[AMS Joint-Mtg., San Diego(2002)-abs.#973-60-124] empirical inter-digit law (on ANY/ALL averages) = log[base=10](1 + 1/d) = log[base=10]([d+1]/d), which upon algebraic inversion is d = 1/[10^[ ] - 1] = 1/[2.303..e^[ ] - 1] = 1/[2.303..e^[<ω>] - 1] = 1/[2.303..e^[ω] - 1]: Digits Are Bosons Are Quanta Because (On Average) Quanta and Bosons Are and Always Were Digits!!! (Ex: atom energy-levels numbering: 0,...,9) ANY/ALL QUANTUM-physics[Planck(1901)-Einstein(1905)-Bose(1924)-Einstein(1925)-vs.Fermi(1927)-Dirac(1927)-...] is and always was Newcomb(1881) DIGIT-physics!!!
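
    The digit law invoked throughout this abstract is the Newcomb-Benford first-digit law, P(d) = log10(1 + 1/d). A quick check that the nine digit probabilities sum to one (the product inside the logs telescopes to 10) and decrease monotonically from d = 1:

```python
import math

def benford(d):
    """Newcomb-Benford probability that a leading digit equals d (1..9)."""
    return math.log10(1.0 + 1.0 / d)

probs = [benford(d) for d in range(1, 10)]
print(round(probs[0], 3))  # P(1) = log10(2) ~ 0.301, the most common digit
```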

  1. Determination of hydrologic properties needed to calculate average linear velocity and travel time of ground water in the principal aquifer underlying the southeastern part of Salt Lake Valley, Utah

    USGS Publications Warehouse

    Freethey, G.W.; Spangler, L.E.; Monheiser, W.J.

    1994-01-01

    A 48-square-mile area in the southeastern part of the Salt Lake Valley, Utah, was studied to determine if generalized information obtained from geologic maps, water-level maps, and drillers' logs could be used to estimate hydraulic conductivity, porosity, and slope of the potentiometric surface: the three properties needed to calculate average linear velocity of ground water. Estimated values of these properties could be used by water-management and regulatory agencies to compute values of average linear velocity, which could be further used to estimate travel time of ground water along selected flow lines, and thus to determine wellhead protection areas around public-supply wells. The methods used to estimate the three properties are based on assumptions about the drillers' descriptions, the depositional history of the sediments, and the boundary conditions of the hydrologic system. These assumptions were based on geologic and hydrologic information determined from previous investigations. The reliability of the estimated values for hydrologic properties and average linear velocity depends on the accuracy of these assumptions. Hydraulic conductivity of the principal aquifer was estimated by calculating the thickness-weighted average of values assigned to different drillers' descriptions of material penetrated during the construction of 98 wells. Using these 98 control points, the study area was divided into zones representing approximate hydraulic-conductivity values of 20, 60, 100, 140, 180, 220, and 250 feet per day. This range of values is about the same range of values used in developing a ground-water flow model of the principal aquifer in the early 1980s. Porosity of the principal aquifer was estimated by compiling the range of porosity values determined or estimated during previous investigations of basin-fill sediments, and then using five different values ranging from 15 to 35 percent to delineate zones in the study area that were assumed to
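
    The two computations described above can be sketched in a few lines: a thickness-weighted hydraulic conductivity mimicking how drillers'-log intervals are combined, and the average linear velocity v = K·i/n (Darcy flux divided by porosity), from which a travel time follows. The interval values, gradient, and flow-line length are hypothetical, though K and porosity fall within the ranges quoted in the abstract.

```python
def thickness_weighted_k(intervals):
    """intervals: list of (thickness_ft, K_ft_per_day) from a driller's log."""
    total = sum(t for t, _ in intervals)
    return sum(t * k for t, k in intervals) / total

intervals = [(20.0, 220.0), (30.0, 100.0), (50.0, 60.0)]  # hypothetical log
K = thickness_weighted_k(intervals)   # ft/day
gradient = 0.005                      # hypothetical potentiometric-surface slope
porosity = 0.25

v = K * gradient / porosity           # average linear velocity, ft/day
distance_ft = 5280.0                  # hypothetical one-mile flow line
travel_time_days = distance_ft / v
print(round(K, 1), round(v, 2), round(travel_time_days))  # 104.0 2.08 2538
```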

  2. Synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON): A statistical model based iterative image reconstruction method to eliminate limited-view artifacts and to mitigate the temporal-average artifacts in time-resolved CT

    PubMed Central

    Chen, Guang-Hong; Li, Yinsheng

    2015-01-01

    Purpose: In x-ray computed tomography (CT), a violation of the Tuy data sufficiency condition leads to limited-view artifacts. In some applications, it is desirable to use data corresponding to a narrow temporal window to reconstruct images with reduced temporal-average artifacts. However, the need to reduce temporal-average artifacts in practice may result in a violation of the Tuy condition and thus undesirable limited-view artifacts. In this paper, the authors present a new iterative reconstruction method, synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON), to eliminate limited-view artifacts using data acquired within an ultranarrow temporal window that severely violates the Tuy condition. Methods: In time-resolved contrast enhanced CT acquisitions, image contrast dynamically changes during data acquisition. Each image reconstructed from data acquired in a given temporal window represents one time frame and can be denoted as an image vector. Conventionally, each individual time frame is reconstructed independently. In this paper, all image frames are grouped into a spatial–temporal image matrix and are reconstructed together. Rather than the spatial and/or temporal smoothing regularizers commonly used in iterative image reconstruction, the nuclear norm of the spatial–temporal image matrix is used in SMART-RECON to regularize the reconstruction of all image time frames. This regularizer exploits the low-dimensional structure of the spatial–temporal image matrix to mitigate limited-view artifacts when an ultranarrow temporal window is desired in some applications to reduce temporal-average artifacts. Both numerical simulations in two dimensional image slices with known ground truth and in vivo human subject data acquired in a contrast enhanced cone beam CT exam have been used to validate the proposed SMART-RECON algorithm and to demonstrate the initial performance of the algorithm. Reconstruction errors and temporal fidelity
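
    The nuclear-norm regularizer at the heart of SMART-RECON is the sum of the singular values of the spatial-temporal image matrix. A toy numpy sketch (not the authors' code; the frames and sizes are invented) showing why it favors temporally low-dimensional solutions:

```python
import numpy as np

def nuclear_norm(frames):
    """frames: 2D array with one vectorized image time frame per column."""
    return np.linalg.svd(frames, compute_uv=False).sum()

rng = np.random.default_rng(1)
base = rng.normal(size=(64, 1))       # a static spatial structure
contrast = np.linspace(0.0, 1.0, 8)   # enhancement level over 8 time frames

low_rank = base @ contrast[None, :]   # rank-1 spatial-temporal matrix
noisy = low_rank + 0.1 * rng.normal(size=low_rank.shape)

def ratio(m):
    # Nuclear norm relative to Frobenius norm: 1 for rank-1, larger otherwise.
    return nuclear_norm(m) / np.linalg.norm(m)

print(round(ratio(low_rank), 3), round(ratio(noisy), 3))
```

    Penalizing the nuclear norm therefore pushes the reconstruction toward a low-rank spatial-temporal matrix, which is how the regularizer exploits the shared structure across time frames.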

  3. Dynamic Multiscale Averaging (DMA) of Turbulent Flow

    SciTech Connect

    Richard W. Johnson

    2012-09-01

    A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
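
    The two elementary operations DMA chains together, a running time average maintained during the DNS and a volume average onto a coarser grid, can be sketched directly (1D toy field with hypothetical sizes; the coupling correlations are not computed here):

```python
import numpy as np

def running_average(old_avg, new_sample, n):
    """Update a running time average with the n-th sample (n >= 1)."""
    return old_avg + (new_sample - old_avg) / n

def volume_average(fine, factor):
    """Average a fine-grid field onto a grid coarsened by `factor`."""
    return fine.reshape(-1, factor).mean(axis=1)

rng = np.random.default_rng(2)
avg = np.zeros(8)
for n in range(1, 101):  # 100 hypothetical DNS time steps
    sample = np.sin(np.arange(8)) + 0.1 * rng.normal(size=8)
    avg = running_average(avg, sample, n)

coarse = volume_average(avg, 2)  # 8 fine cells -> 4 coarse cells
print(coarse.shape)
```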

  4. Time-averaged discharge rate of subaerial lava at Kīlauea Volcano, Hawai‘i, measured from TanDEM-X interferometry: Implications for magma supply and storage during 2011-2013

    USGS Publications Warehouse

    Poland, Michael P.

    2014-01-01

    Differencing digital elevation models (DEMs) derived from TerraSAR add-on for Digital Elevation Measurements (TanDEM-X) synthetic aperture radar imagery provides a measurement of elevation change over time. On the East Rift Zone (ERZ) of Kīlauea Volcano, Hawai‘i, the effusion of lava causes changes in topography. When these elevation changes are summed over the area of an active lava flow, it is possible to quantify the volume of lava emplaced at the surface during the time spanned by the TanDEM-X data—a parameter that can be difficult to measure across the entirety of an ~100 km2 lava flow field using ground-based techniques or optical remote sensing data. Based on the differences between multiple TanDEM-X-derived DEMs collected days to weeks apart, the mean dense-rock equivalent time-averaged discharge rate of lava at Kīlauea between mid-2011 and mid-2013 was approximately 2 m3/s, which is about half the long-term average rate over the course of Kīlauea's 1983–present ERZ eruption. This result implies that there was an increase in the proportion of lava stored versus erupted, a decrease in the rate of magma supply to the volcano, or some combination of both during this time period. In addition to constraining the time-averaged discharge rate of lava and the rates of magma supply and storage, topographic change maps derived from space-based TanDEM-X data provide insights into the four-dimensional evolution of Kīlauea's ERZ lava flow field. TanDEM-X data are a valuable complement to other space-, air-, and ground-based observations of eruptive activity at Kīlauea and offer great promise at locations around the world for aiding with monitoring not just volcanic eruptions but any hazardous activity that results in surface change, including landslides, floods, earthquakes, and other natural and anthropogenic processes.
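
    The volume computation behind such a time-averaged discharge rate is a straightforward DEM difference: sum the elevation change over the flow area, multiply by the pixel area, and divide by the time between acquisitions. A hedged sketch with a hypothetical grid (the real analysis spans an ~100 km2 flow field):

```python
import numpy as np

dem_t0 = np.zeros((100, 100))              # earlier DEM, meters
dem_t1 = np.zeros((100, 100))
dem_t1[20:40, 30:60] = 3.0                 # hypothetical 3 m of new lava

pixel_area = 12.0 * 12.0                   # m2, hypothetical 12 m posting
dt_seconds = 11 * 24 * 3600                # hypothetical 11-day repeat interval

volume = (dem_t1 - dem_t0).sum() * pixel_area  # m3 of newly emplaced lava
discharge = volume / dt_seconds                # bulk (not DRE) rate, m3/s
print(round(discharge, 2))  # 0.27
```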

  5. Averaging Robertson-Walker cosmologies

    NASA Astrophysics Data System (ADS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-04-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.

  6. Ensemble averaging of acoustic data

    NASA Technical Reports Server (NTRS)

    Stefanski, P. K.

    1982-01-01

    A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions for the program. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.
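
    Ensemble averaging itself is worth a brief illustration: averaging repeated records of the same process sample-by-sample cancels uncorrelated noise while preserving the coherent signal. This toy sine-plus-noise sketch is hypothetical, not the PDP-11 program's logic.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 256, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t)  # the coherent 5 Hz component

# 64 repeated records, each with independent additive noise.
records = [signal + 0.5 * rng.normal(size=t.size) for _ in range(64)]
ensemble_avg = np.mean(records, axis=0)

noise_before = np.std(records[0] - signal)
noise_after = np.std(ensemble_avg - signal)
print(round(noise_before / noise_after, 1))  # drops roughly sqrt(64)-fold
```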

  7. Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average

    ERIC Educational Resources Information Center

    DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.

    2007-01-01

    Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…

  8. Computer-assisted time-averaged holograms of the motion of the surface of the mammalian tympanic membrane with sound stimuli of 0.4 to 25 kHz

    PubMed Central

    Rosowski, John J.; Cheng, Jeffrey Tao; Ravicz, Michael E.; Hulli, Nesim; Hernandez-Montes, Maria; Harrington, Ellery; Furlong, Cosme

    2009-01-01

    Time-averaged holograms describing the sound-induced motion of the tympanic membrane (TM) in cadaveric preparations from three mammalian species and one live ear were measured using opto-electronic holography. This technique allows rapid measurements of the magnitude of motion of the tympanic membrane surface at frequencies as high as 25 kHz. The holograms measured in response to low and middle-frequency sound stimuli are similar to previously reported time-averaged holograms. However, at higher frequencies (f > 4 kHz), our holograms reveal unique TM surface displacement patterns that consist of highly-ordered arrangements of multiple local displacement magnitude maxima, each of which is surrounded by nodal areas of low displacement magnitude. These patterns are similar to modal patterns (two-dimensional standing waves) produced by either the interaction of surface waves traveling in multiple directions or the uniform stimulation of modes of motion that are determined by the structural properties and boundary conditions of the TM. From the ratio of the displacement magnitude peaks to nodal valleys in these apparent surface waves, we estimate a Standing Wave Ratio of at least 4 that is consistent with energy reflection coefficients at the TM boundaries of at least 0.35. It is also consistent with small losses within the uniformly stimulated modal surface waves. We also estimate possible TM surface wave speeds that vary with frequency and species from 20 to 65 m/s, consistent with other estimates in the literature. The presence of standing wave or modal phenomena has previously been intuited from measurements of TM function, but is ignored in some models of tympanic membrane function. Whether these standing waves result either from the interactions of multiple surface waves that travel along the membrane, or by uniformly excited modal displacement patterns of the entire TM surface is still to be determined. PMID:19328841
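
    The link between the quoted Standing Wave Ratio and the energy reflection coefficient follows from the standard standing-wave relation, amplitude reflection r = (SWR - 1)/(SWR + 1) with energy reflection R = r^2. A quick check of the abstract's numbers:

```python
def energy_reflection(swr):
    """Energy reflection coefficient implied by a standing-wave ratio:
    amplitude reflection r = (SWR - 1)/(SWR + 1); energy reflection R = r**2."""
    r = (swr - 1.0) / (swr + 1.0)
    return r * r

# SWR >= 4, as estimated from the displacement-magnitude peaks and nodal valleys:
R = energy_reflection(4.0)  # 0.36, matching the quoted "at least 0.35"
```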

  9. Numerical computation of aerodynamics and heat transfer in a turbine cascade and a turn-around duct using advanced turbulence models

    NASA Astrophysics Data System (ADS)

    Lakshminarayana, B.; Luo, J.

    1993-07-01

    The objective of this research is to develop turbulence models to predict the flow and heat transfer fields dominated by the curvature effect such as those encountered in turbine cascades and turn-around ducts. A Navier-Stokes code has been developed using an explicit Runge-Kutta method with a two layer k-epsilon/ARSM (Algebraic Reynolds Stress Model), Chien's Low Reynolds Number (LRN) k-epsilon model and Coakley's LRN q-omega model. The near wall pressure strain correlation term was included in the ARSM. The formulation is applied to Favre-averaged N-S equations and no thin-layer approximations are made in either the mean flow or turbulence transport equations. Anisotropic scaling of artificial dissipation terms was used. Locally variable timestep was also used to improve convergence. Detailed comparisons were made between computations and data measured in a turbine cascade by Arts et al. at Von Karman Institute. The surface pressure distributions and wake profiles were predicted well by all the models. The blade heat transfer is predicted well by the k-epsilon/ARSM model, as well as the k-epsilon model. It is found that the onset of boundary layer transition on both surfaces is highly dependent upon the level of local freestream turbulence intensity, which is strongly influenced by the streamline curvature. Detailed computation of the flow in the turn-around duct has been carried out and validated against the data by Monson as well as Sandborn. The computed results at various streamwise locations both on the concave and convex sides are compared with flow and turbulence data including the separation zone on the inner wall. The k-epsilon/ARSM model yielded relatively better results than the two-equation turbulence models. A detailed assessment of the turbulence models has been made with regard to their applicability to curved flows.

  10. Numerical computation of aerodynamics and heat transfer in a turbine cascade and a turn-around duct using advanced turbulence models

    NASA Technical Reports Server (NTRS)

    Lakshminarayana, B.; Luo, J.

    1993-01-01

    The objective of this research is to develop turbulence models to predict the flow and heat transfer fields dominated by the curvature effect such as those encountered in turbine cascades and turn-around ducts. A Navier-Stokes code has been developed using an explicit Runge-Kutta method with a two layer k-epsilon/ARSM (Algebraic Reynolds Stress Model), Chien's Low Reynolds Number (LRN) k-epsilon model and Coakley's LRN q-omega model. The near wall pressure strain correlation term was included in the ARSM. The formulation is applied to Favre-averaged N-S equations and no thin-layer approximations are made in either the mean flow or turbulence transport equations. Anisotropic scaling of artificial dissipation terms was used. Locally variable timestep was also used to improve convergence. Detailed comparisons were made between computations and data measured in a turbine cascade by Arts et al. at Von Karman Institute. The surface pressure distributions and wake profiles were predicted well by all the models. The blade heat transfer is predicted well by the k-epsilon/ARSM model, as well as the k-epsilon model. It is found that the onset of boundary layer transition on both surfaces is highly dependent upon the level of local freestream turbulence intensity, which is strongly influenced by the streamline curvature. Detailed computation of the flow in the turn-around duct has been carried out and validated against the data by Monson as well as Sandborn. The computed results at various streamwise locations both on the concave and convex sides are compared with flow and turbulence data including the separation zone on the inner wall. The k-epsilon/ARSM model yielded relatively better results than the two-equation turbulence models. A detailed assessment of the turbulence models has been made with regard to their applicability to curved flows.

  11. Average-cost based robust structural control

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.

    1993-01-01

    A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.

  12. Analysis of the particle interactions in a two-dimensional-plasma dust crystal and the use of dust as a probe of the time-averaged presheath electric field.

    PubMed

    Hebner, G A; Riley, M E; Greenberg, K E

    2002-10-01

    A method for measuring the interactions of dust particles within a two-dimensional (2D) layer in a plasma is presented, along with the use of dust as a probe for determining plasma presheath electric fields. Particle interactions were directly determined from the lateral compression of two-dimensional plasma dust crystals confined in a parabolic potential well. The shape of the parabolic potential well was confirmed by observing trajectories of single particles falling within the well. Particle trajectories were in good agreement with values calculated using gas temperatures derived from laser-induced-fluorescence measurements of the argon metastable velocity distributions and assuming diffusive gas scattering. Measured particle separations combined with an equation of state for the crystal were used to derive values for the plasma screening length and the charge on the particles. Screening lengths and charges were measured for a range of plasma power, pressure, and particle diameter (mass). Analysis of the particle heights and charge were used to derive the time-averaged electric fields at the edge of the rf driven plasma sheath. Measured electric fields were between 5 and 22 V/cm. For smaller particle diameters, the ion wind force was comparable to the force of gravity. These measurements showed that the particles are confined to the bulk-plasma side of the classical Bohm point.

  13. Light propagation in the averaged universe

    SciTech Connect

    Bagheri, Samae; Schwarz, Dominik J. E-mail: dschwarz@physik.uni-bielefeld.de

    2014-10-01

    Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.

  14. District Support Systems for the Alignment of Curriculum, Instruction, and Assessment: Can We Predict Student Achievement in Reading and Writing for School Turnaround?

    ERIC Educational Resources Information Center

    Abbott, Laura Lynn Tanner

    2014-01-01

    The purpose of this quantitative non-experimental predictive study was to determine if CIA alignment factors and related district support systems are associated with student achievement to enable the turnaround of schools in crisis. This study aimed to utilize the District Snapshot Tool to determine if the district systems that support CIA…

  15. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  16. The Hubble rate in averaged cosmology

    SciTech Connect

    Umeh, Obinna; Larena, Julien; Clarkson, Chris E-mail: julien.larena@gmail.com

    2011-03-01

    The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H_0, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate and the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions.

  17. Reliable and sensitive detection of fragile X (expanded) alleles in clinical prenatal DNA samples with a fast turnaround time.

    PubMed

    Seneca, Sara; Lissens, Willy; Endels, Kristof; Caljon, Ben; Bonduelle, Maryse; Keymolen, Kathleen; De Rademaeker, Marjan; Ullmann, Urielle; Haentjens, Patrick; Van Berkel, Kim; Van Dooren, Sonia

    2012-11-01

    This study evaluated a large set of blinded, previously analyzed prenatal DNA samples with a novel, CGG triplet-repeat primed (TP)-PCR assay (Amplidex FMR1 PCR Kit; Asuragen, Austin, TX). This cohort of 67 fetal DNAs contained 18 full mutations (270 to 1100 repeats, including 1 mosaic), 12 premutations (59 to 150 repeats), 9 intermediate mutations (54 to 58 repeats), and 28 normal samples (17 to 50 repeats, including 3 homozygous female samples). TP-PCR accurately identified FMR1 genotypes, ranging from normal to full-mutation alleles, with a 100% specificity (95% CI, 85.0% to 100%) and a 97.4% sensitivity (95% CI, 84.9% to 99.9%) in comparison with Southern blot analysis results. Exact sizing was possible for a spectrum of normal, intermediate, and premutation (up to 150 repeats) alleles, but CGG repeat numbers >200 are only identified as full mutations. All homozygous alleles were correctly resolved. The assay is also able to reproducibly detect a 2.5% premutation and a 3% full-mutation mosaicism in a normal male background, but a large premutation in a male full-mutation background was masked when the amount of the latter was >5%. Implementation of this TP-PCR will significantly reduce reflex testing using Southern blot analyses. Additional testing with methylation-informative techniques might still be needed for a few cases with (large) premutations or full mutations.

  18. Average-passage flow model development

    NASA Technical Reports Server (NTRS)

    Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark

    1989-01-01

    A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average-passage flow model describes the time-averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations were executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average-passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.

  19. Averaging Internal Consistency Reliability Coefficients

    ERIC Educational Resources Information Center

    Feldt, Leonard S.; Charter, Richard A.

    2006-01-01

    Seven approaches to averaging reliability coefficients are presented. Each approach starts with a unique definition of the concept of "average," and no approach is more correct than the others. Six of the approaches are applicable to internal consistency coefficients. The seventh approach is specific to alternate-forms coefficients. Although the…
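
    The abstract does not list the seven definitions, but one widely used way to average correlation-type coefficients (offered here purely as illustration, not necessarily among the paper's seven) is to average through Fisher's z transformation rather than averaging the coefficients directly:

```python
import math

def fisher_z_average(rs):
    """Average correlation-type coefficients via Fisher's z transform:
    z = atanh(r), take the arithmetic mean of the z's, back-transform with tanh."""
    zs = [math.atanh(r) for r in rs]
    return math.tanh(sum(zs) / len(zs))

coeffs = [0.70, 0.80, 0.90]          # illustrative reliability coefficients
simple = sum(coeffs) / len(coeffs)   # plain arithmetic mean: 0.80
fz = fisher_z_average(coeffs)        # slightly larger, since the z scale
                                     # stretches coefficients near 1
```

    The two answers differ, which is exactly the article's point: each definition of "average" is defensible, and none is uniquely correct.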

  20. The Average of Rates and the Average Rate.

    ERIC Educational Resources Information Center

    Lindstrom, Peter

    1988-01-01

    Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
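
    The distinction the article draws can be made concrete: when equal distances are covered at each rate, the true average rate is the harmonic mean, not the arithmetic mean. A minimal sketch:

```python
def harmonic_mean(rates):
    """Harmonic mean: the correct average rate when equal distances
    are covered at each rate."""
    return len(rates) / sum(1.0 / r for r in rates)

def weighted_harmonic_mean(rates, distances):
    """Weighted harmonic mean: weights are the distances covered at each rate."""
    return sum(distances) / sum(d / r for d, r in zip(distances, rates))

# Drive 60 miles at 30 mph, then 60 miles at 60 mph:
# 120 miles in 2 h + 1 h = 3 h, so the true average rate is 40 mph --
# the harmonic mean, not the arithmetic mean of 45 mph.
avg_rate = harmonic_mean([30.0, 60.0])
```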

  1. Searching for the Beginning of the Ozone Turnaround Using a 22-Year Merged Satellite Data Set

    NASA Technical Reports Server (NTRS)

    Stolarski, Richard S.; Meeson, Blanche W. (Technical Monitor)

    2001-01-01

    We have used the data from six satellite instruments that measure the total column amount of ozone to construct a consistent merged data set extending from late 1978 into 2000. The keys to constructing a merged data set are to minimize potential drift of individual instruments and to accurately establish instrument-to-instrument offsets. We have used the short-wavelength D-pair measurements (306nm-313nm) of the SBUV and SBUV/2 instruments near the equator to establish a relatively drift-free record for these instruments. We have then used their overlap with the Nimbus 7 and EP TOMS instruments to establish the relative calibration of the various instruments. We have evaluated the drift uncertainty in our merged ozone data (MOD) set by examining both the individual instrument drift uncertainty and the uncertainty in establishing the instrument-to-instrument differences. We conclude that the instrumental drift uncertainty over the 22-year data record is 0.9 %/decade (2-sigma). We have compared our MOD record with 37 ground stations that have a continuous record over that time period. We have a mean drift with respect to the stations of +0.3 %/decade which is within 1-sigma of our uncertainty estimate. Using the satellite record as a transfer standard, we can estimate the capability of the ground instruments to establish satellite calibration. Adding the statistical variability of the station drifts with respect to the satellite to an estimate of the overall drift uncertainty of the world standard instrument, we conclude that the stations should be able to be used to establish the drift of the satellite data record to within an uncertainty of 0.6 %/decade (2-sigma). Adding to this an uncertainty due to the incomplete global coverage of the stations, we conclude that the station data should be able to establish the global trend with an uncertainty of about 0.7 %/decade, slightly better than for the satellite record.
We conclude that merging the two records together
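
    The way the quoted uncertainties combine can be sketched, assuming the independent error sources add in quadrature (a standard assumption; the abstract does not state the combination rule, and the 0.35 %/decade coverage term below is an illustrative value chosen to reproduce the quoted total):

```python
import math

def combine_in_quadrature(*uncertainties):
    """Root-sum-square combination of independent drift uncertainties
    (all in %/decade, 2-sigma)."""
    return math.sqrt(sum(u * u for u in uncertainties))

# Station-based drift uncertainty of 0.6 %/decade plus an assumed
# ~0.35 %/decade global-coverage term gives roughly the quoted 0.7 %/decade.
total = combine_in_quadrature(0.6, 0.35)
```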

  2. High average power pockels cell

    DOEpatents

    Daly, Thomas P.

    1991-01-01

    A high average power Pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The Pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.

  3. Large Eddy/Reynolds-Averaged Navier-Stokes Simulations of CUBRC Base Heating Experiments

    NASA Technical Reports Server (NTRS)

    Salazar, Giovanni; Edwards, Jack R.; Amar, Adam J.

    2012-01-01

    Even with great advances in computational techniques and computing power during recent decades, the modeling of unsteady separated flows, such as those encountered in the wake of a re-entry vehicle, continues to be one of the most challenging problems in CFD. Of most interest to the aerothermodynamics community is accurately predicting transient heating loads on the base of a blunt body, which would result in reduced uncertainties and safety margins when designing a re-entry vehicle. However, the prediction of heat transfer can vary widely depending on the turbulence model employed. Therefore, selecting a turbulence model which realistically captures as much of the flow physics as possible will result in improved results. Reynolds Averaged Navier Stokes (RANS) models have become increasingly popular due to their good performance with attached flows, and the relatively quick turnaround time to obtain results. However, RANS methods cannot accurately simulate unsteady separated wake flows, and running direct numerical simulation (DNS) on such complex flows is currently too computationally expensive. Large Eddy Simulation (LES) techniques allow for the computation of the large eddies, which contain most of the Reynolds stress, while modeling the smaller (subgrid) eddies. This results in models which are more computationally expensive than RANS methods, but not as prohibitive as DNS. By complementing an LES approach with a RANS model, a hybrid LES/RANS method resolves the larger turbulent scales away from surfaces with LES, and switches to a RANS model inside boundary layers. As pointed out by Bertin et al., this type of hybrid approach has shown a lot of promise for predicting turbulent flows, but work is needed to verify that these models work well in hypersonic flows. The very limited amounts of flight and experimental data available presents an additional challenge for researchers. 
Recently, a joint study by NASA and CUBRC has focused on collecting heat transfer data

  4. Vocal attractiveness increases by averaging.

    PubMed

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

    Vocal attractiveness has a profound influence on listeners, a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1], with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. PMID:20129047

  6. Vibrational averages along thermal lines

    NASA Astrophysics Data System (ADS)

    Monserrat, Bartomeu

    2016-01-01

    A method is proposed for the calculation of vibrational quantum and thermal expectation values of physical properties from first principles. Thermal lines are introduced: these are lines in configuration space parametrized by temperature, such that the value of any physical property along them is approximately equal to the vibrational average of that property. The number of sampling points needed to explore the vibrational phase space is reduced by up to an order of magnitude when the full vibrational density is replaced by thermal lines. Calculations of the vibrational averages of several properties and systems are reported, namely, the internal energy and the electronic band gap of diamond and silicon, and the chemical shielding tensor of L-alanine. Thermal lines pave the way for complex calculations of vibrational averages, including large systems and methods beyond semilocal density functional theory.

  7. Instrument to average 100 data sets

    NASA Technical Reports Server (NTRS)

    Tuma, G. B.; Birchenough, A. G.; Rice, W. J.

    1977-01-01

    An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points which are displayed on an oscilloscope screen to facilitate recording and is available in real time. Input can be any parameter which is expressed as a ±10-volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. This instrument has been used successfully on a 1975 Chevrolet V8 engine, and on a Continental 6-cylinder aircraft engine. While this instrument was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
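
    The curve-averaging operation itself is simple to state. This sketch (illustrative, not the instrument's firmware) assumes each cycle has already been resampled onto the same 2048 points and averages point by point across 100 cycles:

```python
import random

N_POINTS, N_CYCLES = 2048, 100

def average_curve(cycles):
    """Point-by-point average over repeated cycles of a cyclic waveform."""
    return [sum(c[i] for c in cycles) / len(cycles) for i in range(len(cycles[0]))]

# Synthetic stand-in for an engine parameter: a repeatable parabolic pulse
# plus cycle-to-cycle variation, within the instrument's +/-10 V input range.
random.seed(1)
shape = [8.0 * 4 * (i / N_POINTS) * (1 - i / N_POINTS) for i in range(N_POINTS)]
cycles = [[s + random.gauss(0.0, 0.5) for s in shape] for _ in range(N_CYCLES)]
avg = average_curve(cycles)
# `avg` has 2048 points; averaging 100 cycles shrinks the random
# cycle-to-cycle scatter by a factor of about sqrt(100) = 10.
```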

  8. Averaging of globally coupled oscillators

    NASA Astrophysics Data System (ADS)

    Swift, James W.; Strogatz, Steven H.; Wiesenfeld, Kurt

    1992-03-01

    We study a specific system of symmetrically coupled oscillators using the method of averaging. The equations describe a series array of Josephson junctions. We concentrate on the dynamics near the splay-phase state (also known as the antiphase state, ponies on a merry-go-round, or rotating wave). We calculate the Floquet exponents of the splay-phase periodic orbit in the weak-coupling limit, and find that all of the Floquet exponents are purely imaginary; in fact, all the Floquet exponents are zero except for a single complex conjugate pair. Thus, nested two-tori of doubly periodic solutions surround the splay-phase state in the linearized averaged equations. We numerically integrate the original system, and find startling agreement with the averaging results on two counts: The observed ratio of frequencies is very close to the prediction, and the solutions of the full equations appear to be either periodic or doubly periodic, as they are in the averaged equations. Such behavior is quite surprising from the point of view of generic dynamical systems theory: one expects higher-dimensional tori and chaotic solutions. We show that the functional form of the equations, and not just their symmetry, is responsible for this nongeneric behavior.

  9. Averaging inhomogeneous cosmologies - a dialogue.

    NASA Astrophysics Data System (ADS)

    Buchert, T.

    The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.

  11. Polyhedral Painting with Group Averaging

    ERIC Educational Resources Information Center

    Farris, Frank A.; Tsao, Ryan

    2016-01-01

    The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…

  12. Building Turnaround Capacity for Urban School Improvement: The Role of Adaptive Leadership and Defined Autonomy

    ERIC Educational Resources Information Center

    Conrad, Jill K.

    2013-01-01

    This dissertation examines the levels of and relationships between technical leadership, adaptive leadership, and defined autonomy among Denver school leaders along with their combined effects on school growth gains over time. Thirty principals provided complete responses to an online survey that included existing scales for technical leadership,…

  13. Simulation of two- and three-dimensional dense-fluid shear flows via nonequilibrium molecular dynamics: Comparison of time-and-space-averaged stresses from homogeneous Doll's and Sllod shear algorithms with those from boundary-driven shear

    NASA Astrophysics Data System (ADS)

    Hoover, Wm. G.; Hoover, Carol G.; Petravic, Janka

    2008-10-01

    Homogeneous shear flows (with constant strain rate dv_x/dy) are generated with the Doll's and Sllod algorithms and compared to corresponding inhomogeneous boundary-driven flows. We use one-, two-, and three-dimensional smooth-particle weight functions for computing instantaneous spatial averages. The nonlinear normal-stress differences are small, but significant, in both two and three space dimensions. In homogeneous systems the sign and magnitude of the shear-plane stress difference, P_xx - P_yy, depend on both the thermostat type and the chosen shear-flow algorithm. The Doll's and Sllod algorithms predict opposite signs for this normal-stress difference, with the Sllod approach definitely wrong, but somewhat closer to the (boundary-driven) truth. Neither of the homogeneous shear algorithms predicts the correct ordering of the kinetic temperatures: T_xx > T_zz > T_yy.

  14. Interlibrary Loan Time and Motion Study, Colorado Western Slope.

    ERIC Educational Resources Information Center

    Thomas, Sharon D.

    This report, which investigates turnaround time for interlibrary loans, presents a 1-month study of the interlibrary loan (ILL) process operating in the Western Slope areas of Colorado during 1980. It comprises introductory material presenting the importance, scope and limitations of the study, problem statement, hypothesis and term definitions; a…

  15. Kuss Middle School: Expanding Time to Accelerate School Improvement

    ERIC Educational Resources Information Center

    Massachusetts 2020, 2012

    2012-01-01

    In 2004, Kuss Middle School became the first school declared "Chronically Underperforming" by the state of Massachusetts. But by 2010, Kuss had transformed itself into a model for schools around the country seeking a comprehensive turnaround strategy. Kuss is using increased learning time as the primary catalyst to accelerate learning, deepen…

  16. Model averaging in linkage analysis.

    PubMed

    Matthysse, Steven

    2006-06-01

    Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. (c) 2006 Wiley-Liss, Inc. PMID:16652369
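The convergence workaround described above, drawing the chain's first point exactly from the target by rejection sampling so that detailed balance keeps every later point in equilibrium, can be illustrated generically. A minimal Python sketch with a toy one-dimensional target (the density and all names are illustrative, not the paper's linkage-likelihood code):

```python
import math
import random

def target(x):
    """Unnormalized target density: a standard normal truncated to [-5, 5]."""
    return math.exp(-0.5 * x * x) if -5.0 <= x <= 5.0 else 0.0

def rejection_sample(rng):
    """Draw one exact sample from the target (envelope: uniform, height 1)."""
    while True:
        x = rng.uniform(-5.0, 5.0)
        if rng.random() <= target(x):  # accept with probability target(x)/1
            return x

def metropolis_chain(n_steps, step=1.0, seed=0):
    """Start the chain already in equilibrium via rejection sampling; by
    detailed balance, every subsequent Metropolis point has the target law."""
    rng = random.Random(seed)
    x = rejection_sample(rng)
    chain = [x]
    for _ in range(n_steps):
        prop = x + rng.uniform(-step, step)
        if rng.random() < min(1.0, target(prop) / target(x)):
            x = prop
        chain.append(x)
    return chain
```

Because the first point is exact, there is no burn-in to discard, which is the point of the construction described in the abstract.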

  17. Average luminosity distance in inhomogeneous universes

    SciTech Connect

    Kostov, Valentin

    2010-04-01

    Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer not over all possible observers (cosmic), thus it is more directly applicable to our observations. In contrast to previous studies, the averaging is exact, non-perturbative, and includes all non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. Voids aligned along a certain direction give rise to a distance modulus correction which increases with redshift and is caused by cumulative gravitational lensing. That correction is present even for small voids and depends on their density contrast, not on their radius. Averaging over all directions destroys the cumulative lensing correction even in a non-randomized simple cubic lattice of voids. At low redshifts, the average distance modulus correction does not vanish due to the peculiar velocities, despite the photon flux conservation argument. A formula for the maximal possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (a) have approximately constant densities in their interior and walls; and (b) are not in a deep nonlinear regime. The average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximal one after a single void diameter. That is traced to cancellations between the corrections from the fronts and backs of different voids.
    The results obtained…
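For readers unfamiliar with the quantity being averaged: the distance modulus is a fixed function of luminosity distance, so the "correction" discussed above is a difference of two such moduli. A minimal helper using the standard definition (illustrative code, not from the paper):

```python
import math

def distance_modulus(d_l_mpc):
    """Distance modulus mu = 5 log10(d_L / 10 pc), with d_L in megaparsecs."""
    return 5.0 * math.log10(d_l_mpc * 1.0e6 / 10.0)

def modulus_correction(d_l_inhomogeneous_mpc, d_l_homogeneous_mpc):
    """mu(inhomogeneous) minus mu(homogeneous) at the same redshift."""
    return (distance_modulus(d_l_inhomogeneous_mpc)
            - distance_modulus(d_l_homogeneous_mpc))
```

A source at 10 pc has mu = 0 by definition; lensing that increases the apparent luminosity distance gives a positive correction.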

  18. Average luminosity distance in inhomogeneous universes

    NASA Astrophysics Data System (ADS)

    Kostov, Valentin Angelov

    Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer not over all possible observers (cosmic), thus it is more directly applicable to our observations. Unlike previous studies, the averaging is exact, non-perturbative, and includes all possible non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. For voids aligned in a certain direction, there is a cumulative gravitational lensing correction to the distance modulus that increases with redshift. That correction is present even for small voids and depends on the density contrast of the voids, not on their radius. Averaging over all directions destroys the cumulative correction even in a non-randomized simple cubic lattice of voids. Despite the well known argument for photon flux conservation, the average distance modulus correction at low redshifts is not zero due to the peculiar velocities. A formula for the maximum possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (1) have approximately constant densities in their interior and walls, (2) are not in a deep nonlinear regime. The actual average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximum. That is traced to cancellations between the corrections coming from the fronts and backs of different voids at the same redshift from the…

  19. Computation of flows in a turn-around duct and a turbine cascade using advanced turbulence models

    NASA Technical Reports Server (NTRS)

    Lakshminarayana, B.; Luo, J.

    1993-01-01

    Numerical investigation has been carried out to evaluate the capability of the Algebraic Reynolds Stress Model (ARSM) and the Nonlinear Stress Model (NLSM) to predict strongly curved turbulent flow in a turn-around duct (TAD). The ARSM includes the near-wall damping term of pressure-strain correlation phi_ij,w, which enables accurate prediction of individual Reynolds stress components in wall flows. The TAD mean flow quantities are reasonably well predicted by various turbulence models. The ARSM yields better predictions for both the mean flow and the turbulence quantities than the NLSM and the k-epsilon (k = turbulent kinetic energy, epsilon = dissipation rate of k) model. The NLSM also shows slight improvement over the k-epsilon model. However, all the models fail to capture the recovery of the flow from strong curvature effects. The formulation for phi_ij,w appears to be incorrect near the concave surface. The hybrid k-epsilon/ARSM, Chien's k-epsilon model, and Coakley's q-omega (q = the square root of k, omega = epsilon/k) model have also been employed to compute the aerodynamics and heat transfer of a transonic turbine cascade. The surface pressure distributions and the wake profiles are predicted well by all the models. The k-epsilon model and the k-epsilon/ARSM model provide better predictions of heat transfer than the q-omega model. The k-epsilon/ARSM solutions show significant differences in the predicted skin friction coefficients, heat transfer rates and the cascade performance parameters, as compared to the k-epsilon model. The k-epsilon/ARSM model appears to capture, qualitatively, the anisotropy associated with by-pass transition.

  20. Average entanglement for Markovian quantum trajectories

    SciTech Connect

    Vogelsberger, S.; Spehner, D.

    2010-11-15

    We study the evolution of the entanglement of noninteracting qubits coupled to reservoirs under monitoring of the reservoirs by means of continuous measurements. We calculate the average of the concurrence of the qubits' wave function over all quantum trajectories. For two qubits coupled to independent baths subjected to local measurements, this average decays exponentially with a rate depending on the measurement scheme only. This contrasts with the known disappearance of entanglement after a finite time for the density matrix in the absence of measurements. For two qubits coupled to a common bath, the mean concurrence can vanish at discrete times. Our analysis applies to arbitrary quantum jump or quantum state diffusion dynamics in the Markov limit. We discuss the best measurement schemes to protect entanglement in specific examples.
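For a pure two-qubit state a|00> + b|01> + c|10> + d|11>, the concurrence averaged above reduces to the standard closed form C = 2|ad - bc| (Wootters' formula for pure states). A one-line helper for experimenting with the quantity, offered as an illustration only, not the authors' trajectory code:

```python
def concurrence(a, b, c, d):
    """Concurrence C = 2|ad - bc| of a normalized pure two-qubit state
    a|00> + b|01> + c|10> + d|11> (complex amplitudes allowed).
    C = 0 for product states and C = 1 for maximally entangled states."""
    return 2.0 * abs(a * d - b * c)
```

For the Bell state (|00> + |11>)/sqrt(2) this gives C = 1; for the product state |00> it gives C = 0.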

  1. Reflight of the First Microgravity Science Laboratory: Quick Turnaround of a Space Shuttle Mission

    NASA Technical Reports Server (NTRS)

    Simms, Yvonne

    1998-01-01

    Due to the short flight of Space Shuttle Columbia, STS-83, in April 1997, NASA chose to refly the same crew, shuttle, and payload on STS-94 in July 1997. This was the first reflight of an entire mission complement. The reflight of the First Microgravity Science Laboratory (MSL-1) on STS-94 required an innovative approach to Space Shuttle payload ground processing. Ground processing time for the Spacelab Module, which served as the laboratory for MSL-1 experiments, was reduced by seventy-five percent. The Spacelab Module is a pressurized facility with avionics and thermal cooling and heating accommodations. Boeing-Huntsville, formerly McDonnell Douglas Aerospace, has been the Spacelab Integration Contractor since 1977. The first Spacelab Module flight was in 1983. An experienced team determined what was required to refurbish the Spacelab Module for reflight. Team members had diverse knowledge, skills, and background. An engineering assessment of subsystems, including mechanical, electrical power distribution, command and data management, and environmental control and life support, was performed. Recommendations for resolution of STS-83 Spacelab in-flight anomalies were provided. Inspections and tests that must be done on critical Spacelab components were identified. This assessment contributed to the successful reflight of MSL-1, the fifteenth Spacelab Module mission.

  2. The modulated average structure of mullite.

    PubMed

    Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X

    2015-06-01

    Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in the case of 3:2 mullite. In any of these the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3, a weak average modulation results with slightly varying average occupation factors for the tetrahedral units. As a result, the real…

  3. Average configuration of the induced Venus magnetotail

    SciTech Connect

    McComas, D.J.; Spence, H.E.; Russell, C.T.

    1985-01-01

    In this paper we discuss the interaction of the solar wind flow with Venus and describe the morphology of magnetic field line draping in the Venus magnetotail. In particular, we describe the importance of the interplanetary magnetic field (IMF) X-component in controlling the configuration of field draping in this induced magnetotail, and using the results of a recently developed technique, we examine the average magnetic configuration of this magnetotail. The derived J x B forces must balance the average, steady state acceleration of, and pressure gradients in, the tail plasma. From this relation the average tail plasma velocity, lobe and current sheet densities, and average ion temperature have been derived. In this study we extend these results by making a connection between the derived consistent plasma flow speed and density, and the observational energy/charge range and sensitivity of the Pioneer Venus Orbiter (PVO) plasma analyzer, and demonstrate that if the tail is principally composed of O+, the bulk of the plasma should not be observable much of the time that the PVO is within the tail. Finally, we examine the importance of solar wind slowing upstream of the obstacle and its implications for the temperature of pick-up planetary ions, compare the derived ion temperatures with their theoretical maximum values, and discuss the implications of this process for comets and AMPTE-type releases.

  4. Averaging on Earth-Crossing Orbits

    NASA Astrophysics Data System (ADS)

    Gronchi, G. F.; Milani, A.

    The orbits of planet-crossing asteroids (and comets) can undergo close approaches and collisions with some major planet. This introduces a singularity in the N-body Hamiltonian, and the averaging of the equations of motion, traditionally used to compute secular perturbations, is undefined. We show that it is possible to define in a rigorous way some generalised averaged equations of motion, in such a way that the generalised solutions are unique and piecewise smooth. This is obtained, both in the planar and in the three-dimensional case, by means of the method of extraction of the singularities by Kantorovich. The modified distance used to approximate the singularity is the one used by Wetherill in his method to compute probability of collision. Some examples of averaged dynamics have been computed; a systematic exploration of the averaged phase space to locate the secular resonances should be the next step. Alice sighed wearily. "I think you might do something better with the time," she said, "than waste it asking riddles with no answers" (Alice in Wonderland, L. Carroll)
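Kantorovich's extraction of singularities, used above for the averaged equations of motion, is easiest to see in one dimension: subtract a function with the same singularity but a known exact integral, then integrate the bounded remainder numerically. A toy sketch (my own example, unrelated to the orbital-dynamics specifics):

```python
import math

def midpoint_rule(f, a, b, n=20000):
    """Composite midpoint quadrature on [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def integrate_with_extraction():
    """Integrate exp(x)/sqrt(x) over (0, 1], which is singular at x = 0.
    Subtract g(x) = 1/sqrt(x), which carries the same singularity and
    integrates exactly to 2 on (0, 1]; then integrate the bounded
    remainder (exp(x) - 1)/sqrt(x) numerically."""
    remainder = lambda x: (math.exp(x) - 1.0) / math.sqrt(x)
    return midpoint_rule(remainder, 0.0, 1.0) + 2.0
```

Naive quadrature applied directly to exp(x)/sqrt(x) converges poorly near 0; after the extraction the remainder vanishes at the origin and the same rule converges quickly.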

  5. Making a Turnaround

    ERIC Educational Resources Information Center

    Olson, Cathy Applefeld

    2010-01-01

    It's true that each school and each classroom presents its own unique challenges. But Kevin Mixon--who in October traded his instrumental music instructor duties at Blodgett School on the west side of Syracuse, New York, for a job as fine arts coordinator for Syracuse City Schools--knows from experience that some schools are far more challenging…

  6. Engineering a Turnaround

    ERIC Educational Resources Information Center

    Hood, Lucy

    2006-01-01

    This article describes the Soddy-Daisy High School in southeastern Tennessee. It used to be that vocational training and a focus on academic studies were considered completely different means of education. But in Soddy-Daisy, Tennessee, the two go hand in hand. Eric Thomas and his brother Mark, teach side by side in adjacent rooms, where computer…

  7. Scaling crossover for the average avalanche shape

    NASA Astrophysics Data System (ADS)

    Papanikolaou, Stefanos; Bohn, Felipe; Sommer, Rubem L.; Durin, Gianfranco; Zapperi, Stefano; Sethna, James P.

    2010-03-01

    Universality and the renormalization group claim to predict all behavior on long length and time scales asymptotically close to critical points. In practice, large simulations and heroic experiments have been needed to unambiguously test and measure the critical exponents and scaling functions. We announce here the measurement and prediction of universal corrections to scaling, applied to the temporal average shape of Barkhausen noise avalanches. We bypass the confounding factors of time-retarded interactions (eddy currents) by measuring thin permalloy films, and bypass thresholding effects and amplifier distortions by applying Wiener deconvolution. We show experimental shapes that are approximately symmetric, and measure the leading corrections to scaling. We solve a mean-field theory for the magnetization dynamics and calculate the relevant demagnetizing-field correction to scaling, showing qualitative agreement with the experiment. In this way, we move toward a quantitative theory useful at smaller time and length scales and farther from the critical point.
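The "temporal average shape" measured above is commonly computed by rescaling each avalanche to unit duration, normalizing its area, and averaging. A minimal sketch of that generic procedure (an assumed pipeline for illustration, not the authors' Wiener-deconvolution code):

```python
def rescale(signal, n_points=100):
    """Linearly interpolate an avalanche signal onto n_points samples
    of rescaled time t/T, where T is the avalanche duration."""
    T = len(signal) - 1
    out = []
    for k in range(n_points):
        t = k * T / (n_points - 1)
        i = min(int(t), T - 1)
        frac = t - i
        out.append(signal[i] * (1 - frac) + signal[i + 1] * frac)
    return out

def average_shape(avalanches, n_points=100):
    """Average of duration-rescaled, area-normalized avalanche signals."""
    shapes = []
    for s in avalanches:
        r = rescale(s, n_points)
        area = sum(r) / n_points  # mean value on the unit interval
        shapes.append([v / area for v in r])
    return [sum(col) / len(shapes) for col in zip(*shapes)]
```

Symmetric avalanches of different durations collapse onto one curve under this rescaling; deviations from a perfect collapse are what the corrections to scaling describe.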

  8. Geomagnetic effects on the average surface temperature

    NASA Astrophysics Data System (ADS)

    Ballatore, P.

    Several results have previously shown that solar activity can be related to cloudiness and the surface solar radiation intensity (Svensmark and Friis-Christensen, J. Atmos. Sol. Terr. Phys., 59, 1225, 1997; Veretenenko and Pudovkin, J. Atmos. Sol. Terr. Phys., 61, 521, 1999). Here, the possible relationships between the averaged surface temperature and the solar wind parameters or geomagnetic activity indices are investigated. The temperature data used are the monthly SST maps (generated at RAL and available from the related ESRIN/ESA database) that represent the averaged surface temperature with a spatial resolution of 0.5° × 0.5° and cover the entire globe. The interplanetary data and the geomagnetic data are from the USA National Space Science Data Center. The time interval considered is 1995-2000. Specifically, possible associations and/or correlations of the average temperature with the interplanetary magnetic field Bz component and with the Kp index are considered and differentiated taking into account separate geographic and geomagnetic planetary regions.
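The correlations mentioned can be quantified with a plain Pearson coefficient between the monthly temperature series and the Kp (or Bz) series. A minimal sketch (synthetic inputs, not the SST/ESRIN data pipeline):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Computing this per geographic/geomagnetic region, as the abstract describes, is just a matter of restricting the two series to the grid cells of each region before correlating.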

  9. Disk-averaged synthetic spectra of Mars.

    PubMed

    Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-08-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.

  10. Disk-averaged synthetic spectra of Mars.

    PubMed

    Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-08-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin. PMID:16078866

  11. Auto-exploratory average reward reinforcement learning

    SciTech Connect

    Ok, DoKyeong; Tadepalli, P.

    1996-12-31

    We introduce a model-based average reward Reinforcement Learning method called H-learning and compare it with its discounted counterpart, Adaptive Real-Time Dynamic Programming, in a simulated robot scheduling task. We also introduce an extension to H-learning, which automatically explores the unexplored parts of the state space, while always choosing greedy actions with respect to the current value function. We show that this "Auto-exploratory H-learning" performs better than the original H-learning under previously studied exploration methods such as random, recency-based, or counter-based exploration.
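H-learning itself is model-based, but the average-reward idea it builds on can be sketched in the simpler model-free, R-learning style: learn action values against an explicit estimate rho of the average reward per step, and update rho only on greedy (non-exploring) steps. A generic tabular illustration, not the paper's algorithm:

```python
import random

def r_learning(transition, actions, n_steps=50000, alpha=0.1, beta=0.01,
               eps=0.1, seed=0):
    """Tabular average-reward TD control (R-learning style):
    Q(s,a) <- Q(s,a) + alpha * (r - rho + max_b Q(s',b) - Q(s,a)),
    with the average-reward estimate rho updated on greedy steps only."""
    rng = random.Random(seed)
    Q, rho, s = {}, 0.0, 0
    for _ in range(n_steps):
        greedy = rng.random() >= eps
        if greedy:
            a = max(actions, key=lambda b: Q.get((s, b), 0.0))
        else:
            a = rng.choice(actions)
        s2, r = transition(s, a, rng)
        best_next = max(Q.get((s2, b), 0.0) for b in actions)
        td = r - rho + best_next - Q.get((s, a), 0.0)
        Q[(s, a)] = Q.get((s, a), 0.0) + alpha * td
        if greedy:
            best_here = max(Q.get((s, b), 0.0) for b in actions)
            rho += beta * (r + best_next - best_here - rho)
        s = s2
    return Q, rho
```

On a trivial one-state task where one action always pays 1 and the other 0, rho converges toward 1 (the optimal average reward) and the paying action dominates the learned values.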

  12. Predictability of time averages: The influence of the boundary forcing

    NASA Technical Reports Server (NTRS)

    Shukla, J.

    1982-01-01

    The physical mechanisms through which changes in the boundary forcings of SST, soil moisture, albedo, sea ice, and snow influence the atmospheric circulation are discussed. Results of numerical experiments conducted with the GLAS climate model to determine the sensitivity of the model atmosphere to changes in boundary conditions of SST, soil moisture, and albedo over limited regions are discussed. It is found that changes in SST and soil moisture in the tropics produce large changes in the atmospheric circulation and rainfall over the tropics as well as over mid-latitudes.

  13. Space/time averaging of scattered coherence functions

    NASA Astrophysics Data System (ADS)

    Kelly, Damien P.

    2016-06-01

    A new optical technique for understanding, analyzing and developing optical systems is presented. This approach is statistical in nature, where information about an object under investigation is discovered by examining deviations from a known reference statistical distribution.

  14. Global atmospheric circulation statistics: Four year averages

    NASA Technical Reports Server (NTRS)

    Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.

    1987-01-01

    Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.
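The "zonally averaged" fields listed above are means over longitude at each latitude, and the four-year averages are element-wise means over the time record. A minimal sketch on a hypothetical lat-lon grid (not the NOAA/NMC processing code):

```python
def zonal_mean(field):
    """Zonal average: mean over longitude for each latitude row of a
    field stored as field[lat][lon]."""
    return [sum(row) / len(row) for row in field]

def time_mean(fields):
    """Element-wise average of a sequence of lat-lon fields over time."""
    n = len(fields)
    return [[sum(f[i][j] for f in fields) / n
             for j in range(len(fields[0][0]))]
            for i in range(len(fields[0]))]
```

Latitude-altitude cross sections like those in the report are then just zonal means computed at each pressure level.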

  15. Average deployments versus missile and defender parameters

    SciTech Connect

    Canavan, G.H.

    1991-03-01

    This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with two-fold less effectiveness. The next 10% could double constellation sizes. 5 refs., 7 figs.

  16. 40 CFR 91.1304 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Averaging. 91.1304 Section 91.1304... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive credit balance for a model year. Positive credits to be used in averaging may be obtained from...

  17. 40 CFR 91.1304 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Averaging. 91.1304 Section 91.1304... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive credit balance for a model year. Positive credits to be used in averaging may be obtained from...

  18. 40 CFR 91.1304 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Averaging. 91.1304 Section 91.1304... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive credit balance for a model year. Positive credits to be used in averaging may be obtained from...

  19. Time?

    NASA Astrophysics Data System (ADS)

    Amoroso, Richard L.

    2013-09-01

    The concept of time in the "clockwork" Newtonian world was irrelevant, and it has generally been ignored until recently by several generations of physicists since the implementation of quantum mechanics. We will set aside the utility of time as a property relating to physical calculations of events in a metric's line element, as an aspect of the transformation of a particle's motion/interaction in a coordinate system, or in relation to thermodynamics, etc.; i.e., we will discard all the usual uses of time as a concept used to circularly define physical parameters in terms of other physical parameters, concentrating instead on time as an aspect of the fundamental cosmic topology of our virtual reality, especially as it inseparably relates to the nature and role of the observer in natural science.

  20. Transforming Schools through Expanded Learning Time: Orchard Gardens K-8 Pilot School. Update 2013

    ERIC Educational Resources Information Center

    Chan, Roy

    2013-01-01

    For years, Orchard Gardens K-8 Pilot School was plagued by low student achievement and high staff turnover. Then, in 2010, with an expanded school schedule made possible through federal funding, Orchard Gardens began a remarkable turnaround. Today, the school is demonstrating how increased learning time, combined with other key turnaround…

  1. Delineating the Average Rate of Change in Longitudinal Models

    ERIC Educational Resources Information Center

    Kelley, Ken; Maxwell, Scott E.

    2008-01-01

    The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…
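The misinterpretation the article targets is concrete: the slope of an ordinary least-squares straight-line fit and the endpoint-based average rate of change, (y_T - y_1)/(t_T - t_1), coincide for exactly linear data but diverge once change is nonlinear. A small numeric illustration (my own, not from the article):

```python
def ols_slope(t, y):
    """Slope of the ordinary least-squares straight-line fit."""
    n = len(t)
    tbar, ybar = sum(t) / n, sum(y) / n
    num = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    den = sum((ti - tbar) ** 2 for ti in t)
    return num / den

def average_rate_of_change(t, y):
    """Change in y per unit time between the first and last observations."""
    return (y[-1] - y[0]) / (t[-1] - t[0])

t = [0, 1, 2, 3, 4]
linear = [2 * ti + 1 for ti in t]  # straight-line data: both measures give 2
cubic = [ti ** 3 for ti in t]      # curved data: ARC = 16, OLS slope = 15.4
```

For the linear series the two quantities agree exactly; for the cubic series the straight-line slope (15.4) underestimates the endpoint-based average rate of change (16), which is the kind of discrepancy the article formalizes.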

  2. Average observational quantities in the timescape cosmology

    SciTech Connect

    Wiltshire, David L.

    2009-12-15

    We examine the properties of a recently proposed observationally viable alternative to homogeneous cosmology with smooth dark energy, the timescape cosmology. In the timescape model cosmic acceleration is realized as an apparent effect related to the calibration of clocks and rods of observers in bound systems relative to volume-average observers in an inhomogeneous geometry in ordinary general relativity. The model is based on an exact solution to a Buchert average of the Einstein equations with backreaction. The present paper examines a number of observational tests which will enable the timescape model to be distinguished from homogeneous cosmologies with a cosmological constant or other smooth dark energy, in current and future generations of dark energy experiments. Predictions are presented for comoving distance measures; H(z); the equivalent of the dark energy equation of state, w(z); the Om(z) measure of Sahni, Shafieloo, and Starobinsky; the Alcock-Paczynski test; the baryon acoustic oscillation measure, D_V; the inhomogeneity test of Clarkson, Bassett, and Lu; and the time drift of cosmological redshifts. Where possible, the predictions are compared to recent independent studies of similar measures in homogeneous cosmologies with dark energy. Three separate tests with indications of results in possible tension with the ΛCDM model are found to be consistent with the expectations of the timescape cosmology.

  3. Below-Average, Average, and Above-Average Readers Engage Different and Similar Brain Regions while Reading

    ERIC Educational Resources Information Center

    Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri

    2006-01-01

    Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…

  4. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  5. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  6. Turnaround Insights from the Organizational Sciences: A Review of the Empirical Evidence and the Development of a Staged Model of Recovery with Potential Implications for the PK-12 Education Sector

    ERIC Educational Resources Information Center

    Murphy, Joseph

    2008-01-01

    In this article, we review research from the organizational sciences to develop lessons for educators and policy makers. The approach is an integrative review of the literature. We employ a comprehensive process to unpack and make sense of the turnaround literature from the organizational sciences. We rely on strategies appropriate for document…

  7. Average Lorentz self-force from electric field lines

    NASA Astrophysics Data System (ADS)

    Aashish, Sandeep; Haque, Asrarul

    2015-09-01

    We generalize the derivation of electromagnetic fields of a charged particle moving with a constant acceleration Singal (2011 Am. J. Phys. 79 1036) to a variable acceleration (piecewise constant) over a small finite time interval using Coulomb's law, relativistic transformations of electromagnetic fields and Thomson's construction Thomson (1904 Electricity and Matter (New York: Charles Scribners) ch 3). We derive the average Lorentz self-force for a charged particle in arbitrary non-relativistic motion by averaging the fields at retarded time.

  8. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  9. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  10. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  11. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  12. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  13. Spectral averaging techniques for Jacobi matrices

    SciTech Connect

    Rio, Rafael del; Martinez, Carmen; Schulz-Baldes, Hermann

    2008-02-15

    Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner-type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.

  14. Averaging and Adding in Children's Worth Judgements

    ERIC Educational Resources Information Center

    Schlottmann, Anne; Harman, Rachel M.; Paine, Julie

    2012-01-01

    Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…

  15. Average cross-responses in correlated financial markets

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Schäfer, Rudi; Guhr, Thomas

    2016-09-01

    There are non-vanishing price responses across different stocks in correlated financial markets, reflecting non-Markovian features. We further study this issue by performing different averages, which identify active and passive cross-responses. The two average cross-responses show different characteristic dependences on the time lag. The passive cross-response exhibits a shorter response period with sizeable volatilities, while the corresponding period for the active cross-response is longer. The average cross-responses for a given stock are evaluated either with respect to the whole market or to different sectors. Using the response strength, the influences of individual stocks are identified and discussed. Moreover, the various cross-responses as well as the average cross-responses are compared with the self-responses. In contrast to the short-memory trade sign cross-correlations for each pair of stocks, the sign cross-correlations averaged over different pairs of stocks show long memory.
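The price-response idea behind these averages can be illustrated with a simplified response function: the lag-tau log-price change of one stock conditioned on another stock's trade sign. The synthetic data and the exact functional form below are illustrative assumptions, not the study's specification:

```python
import random

random.seed(0)

def cross_response(log_prices_i, signs_j, tau):
    """Simplified cross-response R_ij(tau): average change of stock i's
    log price over lag tau, weighted by stock j's trade sign at time t."""
    terms = [s * (log_prices_i[t + tau] - log_prices_i[t])
             for t, s in enumerate(signs_j[:len(log_prices_i) - tau])]
    return sum(terms) / len(terms)

# Synthetic market: stock i's price is nudged by stock j's trade signs.
n = 5000
signs = [random.choice((-1, 1)) for _ in range(n)]
lp = [0.0]
for t in range(n - 1):
    lp.append(lp[-1] + 0.01 * signs[t] + random.gauss(0, 0.02))

r1 = cross_response(lp, signs, 1)   # positive: j's buys push i's price up
```

Averaging such responses over stock pairs, sectors, or the whole market gives the "average cross-responses" discussed in the abstract.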

  16. Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?

    SciTech Connect

    Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.

    2013-06-17

    Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.

  17. Spatial limitations in averaging social cues

    PubMed Central

    Florey, Joseph; Clifford, Colin W. G.; Dakin, Steven; Mareschal, Isabelle

    2016-01-01

    The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers’ ability to average social cues with their averaging of a non-social cue. Estimates of observers’ internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging is examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589
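The equivalent noise model invoked above has a standard closed form: the variance of the observer's estimate is the sum of internal and external noise variances divided by the effective number of samples pooled. A minimal sketch under that standard formulation (parameter names are illustrative, not the study's):

```python
import math

def predicted_threshold(sigma_ext, sigma_int, n_samples):
    """Standard equivalent-noise prediction: discrimination threshold
    (standard deviation of the pooled estimate) as a function of external
    stimulus noise, internal noise, and effective sample size."""
    return math.sqrt((sigma_int ** 2 + sigma_ext ** 2) / n_samples)

# Low external noise: threshold plateaus near sigma_int / sqrt(n).
low = predicted_threshold(0.0, 4.0, 4)     # 2.0
# High external noise: threshold rises roughly in proportion to sigma_ext.
high = predicted_threshold(32.0, 4.0, 4)
```

Fitting this function to thresholds measured at several external-noise levels yields the internal-noise and sample-size estimates described in the abstract.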

  18. Cosmological ensemble and directional averages of observables

    SciTech Connect

    Bonvin, Camille; Clarkson, Chris; Durrer, Ruth; Maartens, Roy; Umeh, Obinna E-mail: chris.clarkson@gmail.com E-mail: roy.maartens@gmail.com

    2015-07-01

    We show that at second order, ensemble averages of observables and directional averages do not commute due to gravitational lensing—observing the same thing in many directions over the sky is not the same as taking an ensemble average. In principle this non-commutativity is significant for a variety of quantities that we often use as observables and can lead to a bias in parameter estimation. We derive the relation between the ensemble average and the directional average of an observable, at second order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focusing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance in a given observed direction is increased by gravitational lensing, whereas the directional average of the distance is decreased. For a generic observable, there exists a particular function of the observable that is not affected by second-order lensing perturbations. We also show that standard areas have an advantage over standard rulers, and we discuss the subtleties involved in averaging in the case of supernova observations.

  19. Spectral and parametric averaging for integrable systems

    NASA Astrophysics Data System (ADS)

    Ma, Tao; Serota, R. A.

    2015-05-01

    We analyze two theoretical approaches to ensemble averaging for integrable systems in quantum chaos, spectral averaging (SA) and parametric averaging (PA). For SA, we introduce a new procedure, namely, rescaled spectral averaging (RSA). Unlike traditional SA, it can describe the correlation function of the spectral staircase (CFSS) and produce persistent oscillations of the interval level number variance (IV). PA, while not as accurate as RSA for the CFSS and IV, can also produce persistent oscillations of the global level number variance (GV) and better describes saturation level rigidity as a function of the running energy. Overall, it is the most reliable method for a wide range of statistics.

  20. Spatial limitations in averaging social cues.

    PubMed

    Florey, Joseph; Clifford, Colin W G; Dakin, Steven; Mareschal, Isabelle

    2016-01-01

    The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers' ability to average social cues with their averaging of a non-social cue. Estimates of observers' internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging is examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589

  1. Average oxidation state of carbon in proteins

    PubMed Central

    Dick, Jeffrey M.

    2014-01-01

    The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (Z_C) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation–reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between Z_C and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in Z_C in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower Z_C tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales. PMID:25165594
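The elemental-ratio calculation of the average oxidation state of carbon can be illustrated with the textbook formula for a molecule C_c H_h N_n O_o S_s carrying net charge z; this is a standard identity sketched for illustration, not the paper's own code:

```python
def z_carbon(c, h, n=0, o=0, s=0, z=0):
    """Average oxidation state of carbon for C_c H_h N_n O_o S_s with
    net charge z, assigning the usual states H +1, N -3, O and S -2."""
    return (z - h + 3 * n + 2 * o + 2 * s) / c

z_carbon(1, 4)         # methane CH4 -> -4.0 (fully reduced)
z_carbon(6, 12, o=6)   # glucose C6H12O6 -> 0.0
z_carbon(1, 0, o=2)    # carbon dioxide CO2 -> 4.0 (fully oxidized)
```

For a protein, c, h, n, o, and s are simply the element counts of its chemical formula, so the same ratio applies unchanged.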

  2. Average oxidation state of carbon in proteins.

    PubMed

    Dick, Jeffrey M

    2014-11-01

    The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (Z_C) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation-reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between Z_C and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in Z_C in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower Z_C tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales.

  3. Phase-compensated averaging for analyzing electroencephalography and magnetoencephalography epochs.

    PubMed

    Matani, Ayumu; Naruse, Yasushi; Terazono, Yasushi; Iwasaki, Taro; Fujimaki, Norio; Murata, Tsutomu

    2010-05-01

    Stimulus-locked averaging for electroencephalography and/or magnetoencephalography (EEG/MEG) epochs cancels out ongoing spontaneous activities by treating them as noise. However, such spontaneous activities are the object of interest for EEG/MEG researchers who study phase-related phenomena, e.g., long-distance synchronization, phase-reset, and event-related synchronization/desynchronization (ERD/ERS). We propose a complex-weighted averaging method, called phase-compensated averaging, to investigate phase-related phenomena. In this method, any EEG/MEG channel is used as a trigger for averaging by setting the instantaneous phases at the trigger timings to 0 so that cross-channel averages are obtained. First, we evaluated the fundamental characteristics of this method by performing simulations. The results showed that this method could selectively average ongoing spontaneous activity phase-locked in each channel; that is, it evaluates the directional phase-synchronizing relationship between channels. We then analyzed flash evoked potentials. This method clarified the directional phase-synchronizing relationship from the frontal to occipital channels and recovered another piece of information, perhaps regarding the sequence of experiments, which is lost when using only conventional averaging. This method can also be used to reconstruct EEG/MEG time series to visualize long-distance synchronization and phase-reset directly, and on the basis of the potentials, ERS/ERD can be explained as a side effect of phase-reset. PMID:20172813
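The core idea, complex weights that zero each epoch's phase at the trigger before averaging, can be sketched on synthetic analytic signals. In this toy the per-trial phases are known exactly; the method itself estimates instantaneous phase from the data, so treat this as an illustration of the principle only:

```python
import cmath
import random

random.seed(1)

def phase_compensated_average(epochs, trigger_phases):
    """Rotate each complex-valued epoch by the negative of the trigger
    channel's instantaneous phase at the trigger sample, then average.
    Phase-locked ongoing activity survives; random-phase activity cancels."""
    n = len(epochs)
    length = len(epochs[0])
    avg = [0j] * length
    for epoch, phi in zip(epochs, trigger_phases):
        w = cmath.exp(-1j * phi)            # compensating rotation
        for t in range(length):
            avg[t] += w * epoch[t] / n
    return avg

# Synthetic ongoing oscillation with a random phase on every trial:
trials, length, freq = 200, 64, 0.1
phases = [random.uniform(0, 2 * cmath.pi) for _ in range(trials)]
epochs = [[cmath.exp(1j * (2 * cmath.pi * freq * t + p)) for t in range(length)]
          for p in phases]

plain = sum(abs(sum(e[t] for e in epochs) / trials)
            for t in range(length)) / length
compensated = sum(abs(x) for x in phase_compensated_average(epochs, phases)) / length
# Conventional averaging cancels the oscillation; phase compensation recovers it.
```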

  4. Averaging underwater noise levels for environmental assessment of shipping.

    PubMed

    Merchant, Nathan D; Blondel, Philippe; Dakin, D Tom; Dorocicz, John

    2012-10-01

    Rising underwater noise levels from shipping have raised concerns regarding chronic impacts to marine fauna. However, there is a lack of consensus over how to average local shipping noise levels for environmental impact assessment. This paper addresses this issue using 110 days of continuous data recorded in the Strait of Georgia, Canada. Probability densities of ~10^7 1-s samples in selected 1/3 octave bands were approximately stationary across one-month subsamples. Median and mode levels varied with averaging time. Mean sound pressure levels averaged in linear space, though susceptible to strong bias from outliers, are most relevant to cumulative impact assessment metrics. PMID:23039575
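"Averaged in linear space" means converting each decibel sample to its mean-square pressure, taking the arithmetic mean, and converting back to decibels. A minimal sketch with illustrative values (not the study's data):

```python
import math

def mean_spl_db(levels_db):
    """Mean sound pressure level averaged in linear (power) space:
    convert dB samples to linear power, average, convert back to dB."""
    linear = [10 ** (l / 10) for l in levels_db]
    return 10 * math.log10(sum(linear) / len(linear))

samples = [90, 90, 90, 120]                 # one loud outlier, e.g. a close ship pass
arithmetic = sum(samples) / len(samples)    # 97.5 dB (naive average of dB values)
linear_mean = mean_spl_db(samples)          # ~114.0 dB, dominated by the outlier
```

The gap between the two numbers shows the outlier sensitivity the abstract mentions: a single loud event pulls the linear-space mean far above the naive decibel average.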

  5. 40 CFR 1037.710 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Averaging. 1037.710 Section 1037.710 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM NEW HEAVY-DUTY MOTOR VEHICLES Averaging, Banking, and Trading for Certification §...

  6. Average Transmission Probability of a Random Stack

    ERIC Educational Resources Information Center

    Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

    2010-01-01

    The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…

  7. Whatever Happened to the Average Student?

    ERIC Educational Resources Information Center

    Krause, Tom

    2005-01-01

    Mandated state testing, college entrance exams and their perceived need for higher and higher grade point averages have raised the anxiety levels felt by many of the average students. Too much focus is placed on state test scores and college entrance standards with not enough focus on the true level of the students. The author contends that…

  8. Determinants of College Grade Point Averages

    ERIC Educational Resources Information Center

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by…

  9. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...

  10. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...

  11. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...

  12. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...

  13. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...: Class Tier Model year FEL cap(g/km) HC+NOX Class I or II Tier 1 2006 and later 5.0 Class III Tier 1 2006... States. (c) To use the averaging program, do the following things: (1) Certify each vehicle to a family... to the nearest tenth of a g/km. Use consistent units throughout the calculation. The averaging...

  14. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  15. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  16. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  17. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...

  18. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...

  19. Averaged equations for distributed Josephson junction arrays

    NASA Astrophysics Data System (ADS)

    Bennett, Matthew; Wiesenfeld, Kurt

    2004-06-01

    We use an averaging method to study the dynamics of a transmission line studded by Josephson junctions. The averaged system is used as a springboard for studying experimental strategies which rely on spatial non-uniformity to achieve enhanced synchronization. A reduced model for the near resonant case elucidates in physical terms the key to achieving stable synchronized dynamics.

  20. New results on averaging theory and applications

    NASA Astrophysics Data System (ADS)

    Cândido, Murilo R.; Llibre, Jaume

    2016-08-01

    The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations to finding the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function at it is zero, the classical averaging theory provides no information about the periodic solution associated to that non-simple zero. Here we provide sufficient conditions under which the averaging theory can also be applied to non-simple zeros in order to study their associated periodic solutions. Additionally, we present two applications of this new result, studying the zero-Hopf bifurcation in the Lorenz system and in the FitzHugh-Nagumo system.
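For reference, the classical first-order averaging theorem that this result extends can be stated as follows (standard form; the paper's contribution is the non-simple-zero case):

```latex
\dot{x} = \varepsilon F(t, x) + \varepsilon^{2} R(t, x, \varepsilon),
\qquad F,\, R \ \text{$T$-periodic in } t,
\qquad
f(z) = \frac{1}{T} \int_{0}^{T} F(t, z)\, \mathrm{d}t .
```

If z_0 is a simple zero of the averaged function f, that is f(z_0) = 0 with det Df(z_0) nonzero, then for sufficiently small epsilon the system has a T-periodic solution x(t, epsilon) with x(0, epsilon) tending to z_0 as epsilon tends to 0. When det Df(z_0) = 0 this theorem is silent, which is the gap the abstract addresses.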

  1. Real-Time Patient Survey Data During Routine Clinical Activities for Rapid-Cycle Quality Improvement

    PubMed Central

    Jones, Robert E

    2015-01-01

    Background Surveying patients is increasingly important for evaluating and improving health care delivery, but practical survey strategies during routine care activities have not been available. Objective We examined the feasibility of conducting routine patient surveys in a primary care clinic using commercially available technology (Web-based survey creation, deployment on tablet computers, cloud-based management of survey data) to expedite and enhance several steps in data collection and management for rapid quality improvement cycles. Methods We used a Web-based data management tool (survey creation, deployment on tablet computers, real-time data accumulation and display of survey results) to conduct four patient surveys during routine clinic sessions over a one-month period. Each survey consisted of three questions and focused on a specific patient care domain (dental care, waiting room experience, care access/continuity, Internet connectivity). Results Of the 727 available patients during clinic survey days, 316 patients (43.4%) attempted the survey, and 293 (40.3%) completed it. Across the four 3-question surveys, the average time per survey was 40.4 seconds, with individual questions ranging from 5.4 to 20.3 seconds. Yes/No questions took less time than multiple-choice questions (average 9.6 seconds versus 14.0 seconds). Average response time showed no clear pattern by question order or proctor strategy, but increased monotonically with the number of words in the question (<20 words, 21-30 words, >30 words): 8.0, 11.8, and 16.8 seconds, respectively. Conclusions This technology-enabled data management system helped capture patient opinions and accelerate the turnaround of survey data, with minimal impact on a busy primary care clinic. This new model of patient survey data management is feasible and sustainable in a busy office setting, supports and engages clinicians in the quality improvement process, and harmonizes with the vision of a learning health

  2. Clarifying the Relationship between Average Excesses and Average Effects of Allele Substitutions.

    PubMed

    Alvarez-Castro, José M; Yang, Rong-Cai

    2012-01-01

    Fisher's concepts of average effects and average excesses are at the core of quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one-locus, two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance. PMID:22509178
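For context, the classical one-locus, two-allele quantities that such contrasts generalize can be written in their standard random-mating form (genotypic values a, d, -a for A1A1, A1A2, A2A2, allele frequencies p and q; these are textbook results, not the paper's generalized contrasts):

```latex
\alpha   = a + d\,(q - p)                              % average effect of allele substitution
\alpha_1 = q\,\alpha, \qquad \alpha_2 = -p\,\alpha     % average effects of A1 and A2
V_A      = 2\,p\,q\,\alpha^{2}                         % additive genetic variance
```

Under random mating the average excess of each allele coincides with its average effect; the orthogonal contrasts in the abstract extend this relationship beyond that special case using the effective gene content of alleles.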

  3. Averaging of Backscatter Intensities in Compounds

    PubMed Central

    Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.

    2002-01-01

    Low uncertainty measurements on pure element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass fraction averaging, based on the number of electrons or protons, termed “electron fraction,” which predicts backscatter yield better than mass fraction averaging. PMID:27446752
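One plausible reading of the proposal weights each element by the electrons it contributes per unit mass, i.e. its mass fraction scaled by Z/A. The weighting scheme below is an assumption made for illustration, not the paper's published definition:

```python
def electron_fractions(comp):
    """comp maps element -> (mass_fraction, Z, A).  Assumed 'electron
    fraction': each element weighted by the electrons it contributes
    per unit mass (mass fraction times Z/A), then normalized."""
    contrib = {el: w * z / a for el, (w, z, a) in comp.items()}
    total = sum(contrib.values())
    return {el: c / total for el, c in contrib.items()}

def averaged_z(comp, weights):
    """Weighted mean atomic number, a simple proxy for backscatter yield."""
    return sum(weights[el] * z for el, (w, z, a) in comp.items())

# Galena, PbS (molar mass ~239.27 g/mol), as a worked example:
pbs = {"Pb": (207.2 / 239.27, 82, 207.2), "S": (32.07 / 239.27, 16, 32.07)}
mass_w = {el: w for el, (w, z, a) in pbs.items()}
z_mass = averaged_z(pbs, mass_w)                   # ~73.2 (mass-fraction weighting)
z_elec = averaged_z(pbs, electron_fractions(pbs))  # ~71.2 (electron-fraction weighting)
```

The two weightings disagree most when light and heavy elements are mixed, which is exactly where the choice of averaging rule matters for predicted backscatter intensity.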

  4. Average shape of transport-limited aggregates.

    PubMed

    Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z

    2005-08-12

    We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry. PMID:16196793

  5. Average Shape of Transport-Limited Aggregates

    NASA Astrophysics Data System (ADS)

    Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z.

    2005-08-01

    We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry.

  6. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  7. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  8. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  9. Total pressure averaging in pulsating flows

    NASA Technical Reports Server (NTRS)

    Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.

    1972-01-01

    A number of total-pressure tubes were tested in a non-steady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered within the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure, and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles. The experiments were performed at a pressure level of 1 bar, for Mach number up to near 1, and frequencies up to 3 kHz.

  10. Spacetime Average Density (SAD) cosmological measures

    SciTech Connect

    Page, Don N.

    2014-11-01

    The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.

  11. Average Passenger Occupancy (APO) in Your Community.

    ERIC Educational Resources Information Center

    Stenstrup, Al

    1995-01-01

    Provides details of an activity in which students in grades 4-10 determine the Average Passenger Occupancy (APO) in their community and develop, administer, and analyze a survey to determine attitudes toward carpooling. (DDR)

  12. Rotational averaging of multiphoton absorption cross sections

    SciTech Connect

    Friese, Daniel H. Beerepoot, Maarten T. P.; Ruud, Kenneth

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  13. Averaging Sampled Sensor Outputs To Detect Failures

    NASA Technical Reports Server (NTRS)

    Panossian, Hagop V.

    1990-01-01

    Fluctuating signals smoothed by taking consecutive averages. Sampling-and-averaging technique processes noisy or otherwise erratic signals from number of sensors to obtain indications of failures in complicated system containing sensors. Used under both transient and steady-state conditions. Useful in monitoring automotive engines, chemical-processing plants, powerplants, and other systems in which outputs of sensors contain noise or other fluctuations in measured quantities.
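The consecutive-averaging idea summarized above can be sketched as follows. This is a minimal illustration of trailing-window smoothing, not the authors' monitoring system; the window length k is an assumption:

```python
# Smooth a noisy sensor stream by replacing each sample with the
# mean of the most recent k samples (a trailing consecutive average).
from collections import deque

def consecutive_average(samples, k=4):
    """Return trailing k-sample means, one per input sample."""
    window = deque(maxlen=k)   # holds at most the last k samples
    out = []
    for s in samples:
        window.append(s)
        out.append(sum(window) / len(window))
    return out

# A spike at index 4 is damped in the smoothed output.
noisy = [10.0, 10.4, 9.7, 10.1, 15.0, 10.2]
smoothed = consecutive_average(noisy, k=4)
```

Comparing the smoothed stream against expected bounds, rather than the raw samples, is one simple way such averaging can suppress transient noise before a failure test is applied.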

  14. Monthly average polar sea-ice concentration

    USGS Publications Warehouse

    Schweitzer, Peter N.

    1995-01-01

    The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.

  15. Demonstration of the Application of Composite Load Spectra (CLS) and Probabilistic Structural Analysis (PSAM) Codes to SSME Heat Exchanger Turnaround Vane

    NASA Technical Reports Server (NTRS)

    Rajagopal, Kadambi R.; DebChaudhury, Amitabha; Orient, George

    2000-01-01

    This report describes a probabilistic structural analysis performed to determine the probabilistic structural response under fluctuating random pressure loads for the Space Shuttle Main Engine (SSME) turnaround vane. It uses a newly developed frequency- and distance-dependent correlation model that has features to model the decay phenomena along the flow and across the flow, with the capability to introduce a phase delay. The analytical results are compared using two computer codes, SAFER (Spectral Analysis of Finite Element Responses) and NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), and with experimentally observed strain gage data. The computer code NESSUS, with an interface to a subset of the Composite Load Spectra (CLS) code, is used for the probabilistic analysis. A fatigue code was used to calculate fatigue damage due to the random pressure excitation. The random variables modeled include engine system primitive variables that influence the operating conditions, convection velocity coefficient, stress concentration factor, structural damping, and thickness of the inner and outer vanes. The need for an appropriate correlation model in addition to the magnitude of the PSD is emphasized. The study demonstrates that correlation characteristics, even under random pressure loads, are capable of causing resonance-like effects for some modes. The study identifies the important variables that contribute to the structural alternate stress response and drive the fatigue damage for the new design. Since the alternate stress for the new redesign is less than the endurance limit for the material, the damage due to high-cycle fatigue is negligible.

  16. Rapid identification of microorganisms from positive blood cultures by testing early growth on solid media using matrix-assisted laser desorption ionization-time of flight mass spectrometry.

    PubMed

    Gonzalez, Mark D; Weber, Carol J; Burnham, Carey-Ann D

    2016-06-01

    We performed a retrospective analysis of a simple modification to MALDI-TOF MS for microorganism identification to accurately improve the turnaround time (TAT) for identification of Enterobacteriaceae recovered in blood cultures. Relative to standard MALDI-TOF MS procedures, we reduced TAT from 28.3 h (n=90) to 21.2 h (n=107).

  17. Interpreting Sky-Averaged 21-cm Measurements

    NASA Astrophysics Data System (ADS)

    Mirocha, Jordan

    2015-01-01

    Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation

  18. Books average previous decade of economic misery.

    PubMed

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
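The trailing moving average used in the paper can be illustrated with a minimal sketch. The series below is made up for illustration; only the 11-year window comes from the abstract:

```python
# A trailing moving average: each output value is the mean of the
# current annual value and up to window-1 preceding years.
def trailing_moving_average(series, window=11):
    """Return trailing means over at most `window` values."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical annual misery-index values (inflation + unemployment).
misery = [6.0, 7.5, 9.0, 12.0, 13.5, 10.0, 8.0, 7.0, 6.5, 9.5, 11.0, 8.5]
decade_avg = trailing_moving_average(misery, window=11)
```

The paper correlates a literary misery index against exactly this kind of decade-scale trailing average, with the best fit at an 11-year window.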

  19. Books Average Previous Decade of Economic Misery

    PubMed Central

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  20. An improved moving average technical trading rule

    NASA Astrophysics Data System (ADS)

    Papailias, Fotis; Thomakos, Dimitrios D.

    2015-06-01

    This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
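A 'long only' strategy of this general shape can be sketched as follows. The fractional trailing-stop rule and the parameter values here are illustrative assumptions, not the authors' exact dynamic threshold:

```python
# Cross-over entry with a dynamic trailing stop (sketch): buy when
# price crosses above its simple moving average; while in a position,
# track the running maximum price and exit when price falls below a
# fixed fraction of it.
def crossover_with_trailing_stop(prices, window=3, stop_frac=0.95):
    position, entry_max, signals = False, 0.0, []
    for i, p in enumerate(prices):
        # SMA over the most recent `window` prices (shorter at the start)
        sma = sum(prices[max(0, i - window + 1): i + 1]) / min(i + 1, window)
        if not position and p > sma and i >= window:
            position, entry_max = True, p
            signals.append((i, "buy"))
        elif position:
            entry_max = max(entry_max, p)
            if p < stop_frac * entry_max:   # dynamic trailing stop hit
                position = False
                signals.append((i, "sell"))
    return signals
```

The trailing stop is what distinguishes this from a plain cross-over exit: it locks in part of the run-up, which is the mechanism the abstract credits for the smaller drawdowns.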

  1. Successive averages of firmly nonexpansive mappings

    SciTech Connect

    Flam, S.

    1994-12-31

    The problem considered here is to find common fixed points of (possibly infinitely) many firmly nonexpansive selfmappings in a Hilbert space. For this purpose we use averaged relaxations of the original mappings, the averages being Bochner integrals with respect to chosen measures. Judicious choices of such measures serve to enhance the convergence towards common fixed points. Since projection operators onto closed convex sets are firmly nonexpansive, the methods explored are applicable for solving convex feasibility problems. In particular, by varying the measures our analysis encompasses recent developments of so-called block-iterative algorithms. We demonstrate convergence theorems which cover and extend many known results.
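As a concrete special case of the convex feasibility setting mentioned above, here is a sketch of iterating the average of projections onto two intervals on the real line. The uniform average stands in for the paper's general measure-weighted (Bochner-integral) averages:

```python
# Projections onto closed intervals are firmly nonexpansive, so
# iterating their (uniform) average converges to a common fixed
# point, i.e. a point in the intersection of the sets.
def project(x, lo, hi):
    """Projection of x onto the closed interval [lo, hi]."""
    return min(max(x, lo), hi)

def averaged_projections(x, sets, iters=100):
    """Iterate the average of the projections onto each interval."""
    for _ in range(iters):
        x = sum(project(x, lo, hi) for lo, hi in sets) / len(sets)
    return x

# Intersection of [0, 2] and [1, 3] is [1, 2]; start outside it.
x_star = averaged_projections(5.0, [(0, 2), (1, 3)])
```

Starting from 5.0, the iterates halve their distance to the intersection each step and settle at its nearest point, 2.0.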

  2. Model averaging and muddled multimodel inferences

    USGS Publications Warehouse

    Cade, Brian S.

    2015-01-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the

  3. Model averaging and muddled multimodel inferences.

    PubMed

    Cade, Brian S

    2015-09-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t

  4. SOURCE TERMS FOR AVERAGE DOE SNF CANISTERS

    SciTech Connect

    K. L. Goluoglu

    2000-06-09

    The objective of this calculation is to generate source terms for each type of Department of Energy (DOE) spent nuclear fuel (SNF) canister that may be disposed of at the potential repository at Yucca Mountain. The scope of this calculation is limited to generating source terms for average DOE SNF canisters, and is not intended to be used for subsequent calculations requiring bounding source terms. This calculation is to be used in future Performance Assessment calculations, or other shielding or thermal calculations requiring average source terms.

  5. Analysis of Finger Pulse by Standard Deviation Using Moving Average

    NASA Astrophysics Data System (ADS)

    Asakawa, Takashi; Nishihara, Kazue; Yoshidome, Tadashi

    We propose a method of analyzing a finger pulse by standard deviation using a moving average, for measuring mental load. Frequency analysis, Lorentz plots, and Lyapunov exponents have previously been used for such measurements; however, the proposed technique can be analyzed in a shorter time than the existing techniques.
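One plausible reading of the method, a standard deviation computed over a moving window of pulse samples, can be sketched as follows. The window length and this exact formulation are assumptions, not the authors' published procedure:

```python
# Rolling standard deviation: the variability of pulse samples within
# each trailing window, a cheap short-time measure compared with
# frequency-domain or nonlinear (Lyapunov) analyses.
import statistics

def rolling_std(samples, window=5):
    """Population standard deviation over each full trailing window."""
    return [statistics.pstdev(samples[i - window + 1: i + 1])
            for i in range(window - 1, len(samples))]

pulse = [72, 74, 71, 73, 90, 72, 73]   # one irregular beat interval
variability = rolling_std(pulse, window=5)
```

Each rolling_std call touches only a short window, which is consistent with the abstract's claim of a shorter analysis time than the existing techniques.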

  6. Grade Point Average and Changes in (Great) Grade Expectations.

    ERIC Educational Resources Information Center

    Wendorf, Craig A.

    2002-01-01

    Examines student grade expectations throughout a semester in which students offered their expectations three times during the course: (1) within the first week; (2) midway through the semester; and (3) the week before the final examination. Finds that their expectations decreased and that their cumulative grade point average was related to the…

  7. State-Variable Representations For Moving-Average Sampling

    NASA Technical Reports Server (NTRS)

    Polites, Michael E.

    1991-01-01

    Two state-variable representations derived for continuous-time plant driven by control algorithm including zero-order hold and measurements sampled at multiple rates by multiple-input/multiple-output moving-average processes. New representations enhance observability and controllability of plant. Applications include mathematical modeling of navigation systems including star trackers, gyroscopes, and accelerometers.

  8. Average properties of compressible laminar boundary layer on flat plate with unsteady flight velocity

    NASA Technical Reports Server (NTRS)

    Moore, Franklin K; Ostrach, Simon

    1957-01-01

    The time-average characteristics of boundary layers over a flat plate in nearly quasi-steady flow are determined. The plate may be either insulated or isothermal. The time averages are found without specifying the plate velocity explicitly except that it is positive and has an average value.

  9. Simple Moving Average: A Method of Reporting Evolving Complication Rates.

    PubMed

    Harmsen, Samuel M; Chang, Yu-Hui H; Hattrup, Steven J

    2016-09-01

    Surgeons often cite published complication rates when discussing surgery with patients. However, these rates may not truly represent current results or an individual surgeon's experience with a given procedure. This study proposes a novel method to more accurately report current complication trends that may better represent the patient's potential experience: simple moving average. Reverse shoulder arthroplasty (RSA) is an increasingly popular and rapidly evolving procedure with highly variable reported complication rates. The authors used an RSA model to test and evaluate the usefulness of simple moving average. This study reviewed 297 consecutive RSA procedures performed by a single surgeon and noted complications in 50 patients (16.8%). Simple moving average for total complications as well as minor, major, acute, and chronic complications was then calculated using various lag intervals. These findings showed trends toward fewer total, major, and chronic complications over time, and these trends were represented best with a lag of 75 patients. Average follow-up within this lag was 26.2 months. Rates for total complications decreased from 17.3% to 8% at the most recent simple moving average. The authors' traditional complication rate with RSA (16.8%) is consistent with reported rates. However, the use of simple moving average shows that this complication rate decreased over time, with current trends (8%) markedly lower, giving the senior author a more accurate picture of his evolving complication trends with RSA. Compared with traditional methods, simple moving average can be used to better reflect current trends in complication rates associated with a surgical procedure and may better represent the patient's potential experience. [Orthopedics. 2016; 39(5):e869-e876.]
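The simple-moving-average reporting method described above can be sketched directly. The outcome data below are made up for illustration; only the lag of 75 cases comes from the abstract:

```python
# Simple moving average of a binary complication indicator: each
# reported rate reflects only the most recent `lag` consecutive
# cases, rather than the surgeon's entire career series.
def moving_complication_rate(outcomes, lag=75):
    """Trailing complication rate over each full window of `lag` cases."""
    rates = []
    for i in range(lag - 1, len(outcomes)):
        window = outcomes[i - lag + 1: i + 1]   # last `lag` cases
        rates.append(sum(window) / lag)
    return rates

# 100 hypothetical cases: complications cluster early in the series,
# so the moving rate falls as the surgeon's technique matures.
cases = [1] * 10 + [0] * 90
rates = moving_complication_rate(cases, lag=75)
```

Because early complications eventually fall out of the trailing window, the most recent windows report a lower rate than the overall series average, which is exactly the effect the abstract describes for the authors' RSA experience.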

  10. World average top-quark mass

    SciTech Connect

    Glenzinski, D.; /Fermilab

    2008-01-01

    This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.

  11. A Functional Measurement Study on Averaging Numerosity

    ERIC Educational Resources Information Center

    Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio

    2014-01-01

    In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…

  12. Cryo-Electron Tomography and Subtomogram Averaging.

    PubMed

    Wan, W; Briggs, J A G

    2016-01-01

    Cryo-electron tomography (cryo-ET) allows 3D volumes to be reconstructed from a set of 2D projection images of a tilted biological sample. It allows densities to be resolved in 3D that would otherwise overlap in 2D projection images. Cryo-ET can be applied to resolve structural features in complex native environments, such as within the cell. Analogous to single-particle reconstruction in cryo-electron microscopy, structures present in multiple copies within tomograms can be extracted, aligned, and averaged, thus increasing the signal-to-noise ratio and resolution. This reconstruction approach, termed subtomogram averaging, can be used to determine protein structures in situ. It can also be applied to facilitate more conventional 2D image analysis approaches. In this chapter, we provide an introduction to cryo-ET and subtomogram averaging. We describe the overall workflow, including tomographic data collection, preprocessing, tomogram reconstruction, subtomogram alignment and averaging, classification, and postprocessing. We consider theoretical issues and practical considerations for each step in the workflow, along with descriptions of recent methodological advances and remaining limitations. PMID:27572733

  13. Bayesian Model Averaging for Propensity Score Analysis

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  14. Cryo-Electron Tomography and Subtomogram Averaging.

    PubMed

    Wan, W; Briggs, J A G

    2016-01-01

    Cryo-electron tomography (cryo-ET) allows 3D volumes to be reconstructed from a set of 2D projection images of a tilted biological sample. It allows densities to be resolved in 3D that would otherwise overlap in 2D projection images. Cryo-ET can be applied to resolve structural features in complex native environments, such as within the cell. Analogous to single-particle reconstruction in cryo-electron microscopy, structures present in multiple copies within tomograms can be extracted, aligned, and averaged, thus increasing the signal-to-noise ratio and resolution. This reconstruction approach, termed subtomogram averaging, can be used to determine protein structures in situ. It can also be applied to facilitate more conventional 2D image analysis approaches. In this chapter, we provide an introduction to cryo-ET and subtomogram averaging. We describe the overall workflow, including tomographic data collection, preprocessing, tomogram reconstruction, subtomogram alignment and averaging, classification, and postprocessing. We consider theoretical issues and practical considerations for each step in the workflow, along with descriptions of recent methodological advances and remaining limitations.

  15. Initial Conditions in the Averaging Cognitive Model

    ERIC Educational Resources Information Center

    Noventa, S.; Massidda, D.; Vidotto, G.

    2010-01-01

    The initial state parameters s[subscript 0] and w[subscript 0] are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…

  16. Why Johnny Can Be Average Today.

    ERIC Educational Resources Information Center

    Sturrock, Alan

    1997-01-01

    During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…

  17. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.

    SciTech Connect

    BEN-ZVI, ILAN, DAYRAN, D.; LITVINENKO, V.

    2005-08-21

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  18. A Rapid Turn-around, Scalable Big Data Processing Capability for the JPL Airborne Snow Observatory (ASO) Mission

    NASA Astrophysics Data System (ADS)

    Mattmann, C. A.

    2014-12-01

    The JPL Airborne Snow Observatory (ASO) is an integrated LIDAR and Spectrometer measuring snow depth and rate of snow melt in the Sierra Nevadas, specifically, the Tuolumne River Basin, Sierra Nevada, California above the O'Shaughnessy Dam of the Hetch Hetchy reservoir, and the Uncompahgre Basin, Colorado, amongst other sites. The ASO data was delivered to water resource managers from the California Department of Water Resources in under 24 hours from the time that the Twin Otter aircraft landed in Mammoth Lakes, CA to the time disks were plugged into the ASO Mobile Compute System (MCS) deployed at the Sierra Nevada Aquatic Research Laboratory (SNARL) near the airport. ASO performed weekly flights and each flight took between 500GB to 1 Terabyte of raw data, which was then processed from level 0 data products all the way to full level 4 maps of Snow Water Equivalent, albedo mosaics, and snow depth from LIDAR. These data were produced by Interactive Data Language (IDL) algorithms which were then unobtrusively and automatically integrated into an Apache OODT and Apache Tika based Big Data processing system. Data movement was both electronic and physical including novel uses of LaCie 1 and 2 TeraByte (TB) data bricks and deployment in rugged terrain. The MCS was controlled remotely from the Jet Propulsion Laboratory, California Institute of Technology (JPL) in Pasadena, California on behalf of the National Aeronautics and Space Administration (NASA). Communication was aided through the use of novel Internet Relay Chat (IRC) command and control mechanisms and through the use of the Notifico open source communication tools. This talk will describe the high powered, and light-weight Big Data processing system that we developed for ASO and its implications more broadly for airborne missions at NASA and throughout the government. The lessons learned from ASO show the potential to have a large impact in the development of Big Data processing systems in the years

  19. Neutron average cross sections of {sup 237}Np

    SciTech Connect

    Noguere, G.

    2010-04-15

    This work reports {sup 237}Np neutron resonance parameters obtained from the simultaneous analysis of time-of-flight data measured at the GELINA, ORELA, KURRI, and LANSCE facilities. A statistical analysis of these resonances relying on average R-matrix and optical model calculations was used to establish consistent l-dependent average resonance parameters involved in the description of the unresolved resonance range of the {sup 237}Np neutron cross sections. For neutron orbital angular momentum l=0, we obtained an average radiation width ⟨Γ{sub γ}⟩ = 39.3 ± 1.0 meV, a neutron strength function 10{sup 4}S{sub 0} = 1.02 ± 0.14, a mean level spacing D{sub 0} = 0.60 ± 0.03 eV, and a potential scattering length R{sup '} = 9.8 ± 0.1 fm.
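    The quoted l=0 parameters are tied together by the usual definition of the s-wave neutron strength function (the textbook form, not stated in the abstract):

```latex
S_0 = \frac{\langle g\,\Gamma_n^{0}\rangle}{D_0}
```

    so the reported 10{sup 4}S{sub 0} = 1.02 combines the average reduced neutron width with the mean level spacing D{sub 0} = 0.60 eV.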

  20. New applications for high average power beams

    NASA Astrophysics Data System (ADS)

    Neau, E. L.; Turman, B. N.; Patterson, E. L.

    1993-06-01

    The technology base formed by the development of high peak power simulators, laser drivers, FEL's, and ICF drivers from the early 60's through the late 80's is being extended to high average power short-pulse machines with the capabilities of supporting new types of manufacturing processes and performing new roles in environmental cleanup applications. This paper discusses a process for identifying and developing possible commercial applications, specifically those requiring very high average power levels of hundreds of kilowatts to perhaps megawatts. The authors discuss specific technology requirements and give examples of application development efforts. The application development work is directed at areas that can possibly benefit from the high specific energies attainable with short pulse machines.

  1. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

    We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.
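    The "central" segmentation minimizing the average distance to all segmentations is, in discrete form, a medoid. A minimal sketch of that idea, with segmentations stood in by label tuples and the Hamming distance as an assumed toy metric (the paper's actual distance over segmentations is more involved):

```python
def central_item(items, dist):
    """Return the medoid: the item minimizing the average distance
    to all items -- the 'central segmentation' idea in miniature."""
    return min(items, key=lambda a: sum(dist(a, b) for b in items) / len(items))

def hamming(a, b):
    """Toy distance: number of positions where the labelings differ."""
    return sum(x != y for x, y in zip(a, b))
```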

  2. From cellular doses to average lung dose.

    PubMed

    Hofmann, W; Winkler-Heil, R

    2015-11-01

    Sensitive basal and secretory cells receive a wide range of doses in human bronchial and bronchiolar airways. Variations of cellular doses arise from the location of target cells in the bronchial epithelium of a given airway and the asymmetry and variability of airway dimensions of the lung among airways in a given airway generation and among bronchial and bronchiolar airway generations. To derive a single value for the average lung dose which can be related to epidemiologically observed lung cancer risk, appropriate weighting scenarios have to be applied. Potential biological weighting parameters are the relative frequency of target cells, the number of progenitor cells, the contribution of dose enhancement at airway bifurcations, the promotional effect of cigarette smoking and, finally, the application of appropriate regional apportionment factors. Depending on the choice of weighting parameters, detriment-weighted average lung doses can vary by a factor of up to 4 for given radon progeny exposure conditions.
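    Whatever weighting scenario is chosen, the final step reduces to a weighted average over per-cell or per-airway doses. A minimal sketch with made-up numbers; the weights (e.g. relative target-cell frequencies) are assumptions for illustration, not values from the paper:

```python
def weighted_average_dose(doses, weights):
    """Collapse per-cell or per-airway doses into a single average
    lung dose using normalized weights."""
    total_w = sum(weights)
    return sum(d * w for d, w in zip(doses, weights)) / total_w
```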

  3. High-average-power exciplex laser system

    NASA Astrophysics Data System (ADS)

    Sentis, M.

    The LUX high-average-power high-PRF exciplex laser (EL) system being developed at the Institut de Mecanique des Fluides de Marseille is characterized, and some preliminary results are presented. The fundamental principles and design criteria of ELs are reviewed, and the LUX components are described and illustrated, including a closed-circuit subsonic wind tunnel and a 100-kW-average power 1-kHz-PRF power pulser providing avalanche-discharge preionization by either an electron beam or an X-ray beam. Laser energy of 50 mJ has been obtained at wavelength 308 nm in the electron-beam mode (14.5 kV) using a 5300/190/10 mixture of Ne/Xe/HCl at pressure 1 bar.

  4. Apparent and average accelerations of the Universe

    SciTech Connect

    Bolejko, Krzysztof; Andersson, Lars E-mail: larsa@math.miami.edu

    2008-10-15

    In this paper we consider the relation between the volume deceleration parameter obtained within the Buchert averaging scheme and the deceleration parameter derived from supernova observation. This work was motivated by recent findings that there are models which, despite having {Lambda} = 0, have volume deceleration parameter q{sup vol}<0. This opens the possibility that back-reaction and averaging effects may be used as an interesting alternative explanation to the dark energy phenomenon. We have calculated q{sup vol} in some Lemaitre-Tolman models. For those models which are chosen to be realistic and which fit the supernova data, we find that q{sup vol}>0, while those models which we have been able to find which exhibit q{sup vol}<0 turn out to be unrealistic. This indicates that care must be exercised in relating the deceleration parameter to observations.
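    For reference, the two deceleration parameters being compared have the standard forms below (textbook definitions, not reproduced from the abstract), with the domain scale factor a{sub D} built from the volume of the averaging domain:

```latex
q = -\frac{\ddot a\,a}{\dot a^{2}}, \qquad
q^{\mathrm{vol}} = -\frac{\ddot a_{\mathcal D}\,a_{\mathcal D}}{\dot a_{\mathcal D}^{2}}, \qquad
a_{\mathcal D} \propto V_{\mathcal D}^{1/3}
```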

  5. Experimental measurements and analytical analysis related to gas turbine heat transfer. Part 1: Time-averaged heat-flux and surface-pressure measurements on the vanes and blades of the SSME fuel-side turbine and comparison with prediction. Part 2: Phase-resolved surface-pressure and heat-flux measurements on the first blade of the SSME fuel-side turbine

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Time-averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row, the first-stage blade row, and the second-stage vane row of the Rocketdyne Space Shuttle Main Engine two-stage fuel-side turbine. Unsteady pressure envelope measurements for the first blade are also reported. These measurements were made at 10 percent, 50 percent, and 90 percent span on both the pressure and suction surfaces of the first-stage components. Additional Stanton number measurements were made on the first-stage blade platform, blade tip, and shroud, and at 50 percent span on the second vane. A shock tube was used as a short duration source of heated and pressurized air to which the turbine was subjected. Platinum thin-film heat flux gages were used to obtain the heat flux measurements, while miniature silicon-diaphragm flush-mounted pressure transducers were used to obtain the pressure measurements. The first-stage vane Stanton number distributions are compared with predictions obtained using a version of STAN5 and a quasi-3D Navier-Stokes solution. This same quasi-3D N-S code was also used to obtain predictions for the first blade and the second vane.
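    The Stanton number quoted throughout is the standard dimensionless heat-transfer coefficient; the definition below is the common textbook form, and the numeric values in the usage comment are arbitrary illustrative inputs:

```python
def stanton_number(h, rho, u, cp):
    """Stanton number St = h / (rho * u * cp): convective heat-transfer
    coefficient h [W/m^2/K] normalized by the flow's thermal capacity
    flux (density rho [kg/m^3], velocity u [m/s], cp [J/kg/K])."""
    return h / (rho * u * cp)
```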

  6. Emissions averaging top option for HON compliance

    SciTech Connect

    Kapoor, S.

    1993-05-01

    In one of its first major rule-setting directives under the CAA Amendments, EPA recently proposed tough new emissions controls for nearly two-thirds of the commercial chemical substances produced by the synthetic organic chemical manufacturing industry (SOCMI). However, the Hazardous Organic National Emission Standards for Hazardous Air Pollutants (HON) also affects several non-SOCMI processes. The author discusses proposed compliance deadlines, emissions averaging, and basic operating and administrative requirements.

  7. Stochastic Games with Average Payoff Criterion

    SciTech Connect

    Ghosh, M. K.; Bagchi, A.

    1998-11-15

    We study two-person stochastic games with a Polish state space and compact action spaces, under the average payoff criterion and a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of Nash equilibrium in stationary strategies is established under certain separability conditions.

  8. Iterative methods based upon residual averaging

    NASA Technical Reports Server (NTRS)

    Neuberger, J. W.

    1980-01-01

    Iterative methods for solving boundary value problems for systems of nonlinear partial differential equations are discussed. The methods involve subtracting an average of residuals from one approximation in order to arrive at a subsequent approximation. Two abstract methods in Hilbert space are given and application of these methods to quasilinear systems to give numerical schemes for such problems is demonstrated. Potential theoretic matters related to the iteration schemes are discussed.
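    The core step, subtracting an average of residuals from the current approximation, can be illustrated on a toy linear system. This is a sketch under stated assumptions (a Richardson-type update with a 3-point residual average and an assumed relaxation factor), not the paper's Hilbert-space construction:

```python
def iterate_residual_averaging(apply_A, b, x0, omega=0.5, steps=200):
    """Repeatedly form the residual r = A x - b, replace it by a local
    3-point weighted average, and subtract omega times the result --
    a toy version of 'subtracting an average of residuals'."""
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        r = [a - bi for a, bi in zip(apply_A(x), b)]
        r_avg = [(r[max(i - 1, 0)] + 2 * r[i] + r[min(i + 1, n - 1)]) / 4
                 for i in range(n)]
        x = [xi - omega * ri for xi, ri in zip(x, r_avg)]
    return x
```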

  9. The Average Velocity in a Queue

    ERIC Educational Resources Information Center

    Frette, Vidar

    2009-01-01

    A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…
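    The setup can be simulated directly: with no overtaking, each car's realized speed is the minimum of its own maximum speed and those of all cars ahead of it. A minimal sketch, assuming the cars are listed front to back:

```python
def average_queue_velocity(max_speeds):
    """Cars listed front to back; each car drives at the minimum of
    its own max speed and every max speed ahead of it. Returns the
    mean realized speed of the queue."""
    realized, slowest = [], float("inf")
    for v in max_speeds:
        slowest = min(slowest, v)
        realized.append(slowest)
    return sum(realized) / len(realized)
```

    For example, max speeds (3, 1, 2) give realized speeds (3, 1, 1): the third car is held to 1 by the slow car ahead, so the queue average falls below the average of the maximum speeds.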

  10. Average Annual Rainfall over the Globe

    ERIC Educational Resources Information Center

    Agrawal, D. C.

    2013-01-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…
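    The intercepted power is the solar constant times the Earth's cross-sectional disc area. A quick check with standard textbook values (solar constant ≈ 1361 W/m², mean Earth radius ≈ 6371 km -- figures assumed here, not taken from the article) reproduces a total of about 1.74 × 10¹⁷ W:

```python
import math

def intercepted_solar_power(solar_constant=1361.0, earth_radius=6.371e6):
    """Total solar power intercepted by the Earth: the disc of area
    pi*R^2 facing the Sun times the solar constant (W/m^2)."""
    return solar_constant * math.pi * earth_radius**2
```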

  11. Exact Averaging of Stochastic Equations for Flow in Porous Media

    SciTech Connect

    Karasaki, Kenzi; Shvidler, Mark; Karasaki, Kenzi

    2008-03-15

    It is well known that, at present, exact averaging of the equations for flow and transport in random porous media has been achieved only for a limited class of special fields. Moreover, approximate averaging methods--for example, the convergence behavior and the accuracy of truncated perturbation series--are not well studied, and in addition, calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do exact and sufficiently general forms of averaged equations exist? Here, we present an approach for finding the general exactly averaged system of basic equations for steady flow with sources in unbounded stochastically homogeneous fields. We do this by using (1) the existence and some general properties of Green's functions for the appropriate stochastic problem, and (2) some information about the random field of conductivity. This approach enables us to find the form of the averaged equations without directly solving the stochastic equations or using the usual assumption regarding any small parameters. In the common case of a stochastically homogeneous conductivity field we present the exactly averaged new basic nonlocal equation with a unique kernel-vector. We show that in the case of some type of global symmetry (isotropy, transversal isotropy, or orthotropy), we can in the same way derive the exact averaged nonlocal equations with a unique kernel-tensor for three-dimensional and two-dimensional flow. When global symmetry does not exist, the nonlocal equation with a kernel-tensor involves complications and leads to an ill-posed problem.

  12. Annual average radon concentrations in California residences.

    PubMed

    Liu, K S; Hayward, S B; Girman, J R; Moed, B A; Huang, F Y

    1991-09-01

    A study was conducted to determine the annual average radon concentrations in California residences, to determine the approximate fraction of the California population regularly exposed to radon concentrations of 4 pCi/l or greater, and to the extent possible, to identify regions of differing risk for high radon concentrations within the state. Annual average indoor radon concentrations were measured with passive (alpha track) samplers sent by mail and deployed by home occupants, who also completed questionnaires on building and occupant characteristics. For the 310 residences surveyed, concentrations ranged from 0.10 to 16 pCi/l, with a geometric mean of whole-house (bedroom and living room) average concentrations of 0.85 pCi/l and a geometric standard deviation of 1.91. A total of 88,000 California residences (0.8 percent) were estimated to have radon concentrations exceeding 4 pCi/l. When the state was divided into six zones based on geology, significant differences in geometric mean radon concentrations were found between several of the zones. Zones with high geometric means were the Sierra Nevada mountains, the valleys east of the Sierra Nevada, the central valley (especially the southern portion), and Ventura and Santa Barbara Counties. Zones with low geometric means included most coastal counties and the portion of the state from Los Angeles and San Bernardino Counties south.
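    The summary statistics quoted for the survey, a geometric mean and geometric standard deviation, are computed from the log-transformed concentrations. A minimal sketch with illustrative data, not the survey's 310 measurements:

```python
import math

def geometric_stats(values):
    """Geometric mean and geometric standard deviation: exponentiate
    the mean and standard deviation of the log-transformed values."""
    logs = [math.log(v) for v in values]
    mu = sum(logs) / len(logs)
    var = sum((x - mu) ** 2 for x in logs) / len(logs)
    return math.exp(mu), math.exp(math.sqrt(var))
```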

  13. Model averaging, optimal inference, and habit formation

    PubMed Central

    FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.

    2014-01-01

    Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724
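    The averaging rule itself is compact: weight each model's prediction by its normalized evidence. A minimal numerical sketch assuming a flat prior over models (the quantities here are placeholders, not the paper's generative models):

```python
def model_average(predictions, evidences):
    """Bayesian model averaging: the averaged prediction is
    sum_m p(m|data) * prediction_m, with p(m|data) proportional
    to the model evidence p(data|m) under a flat model prior."""
    z = sum(evidences)
    weights = [e / z for e in evidences]
    return sum(w * p for w, p in zip(weights, predictions))
```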

  14. Fast Optimal Transport Averaging of Neuroimaging Data.

    PubMed

    Gramfort, A; Peyré, G; Cuturi, M

    2015-01-01

    Knowing how the human brain is anatomically and functionally organized at the level of a group of healthy individuals or patients is the primary goal of neuroimaging research. Yet computing an average of brain imaging data defined over a voxel grid or a triangulation remains a challenge. Data are large, the geometry of the brain is complex, and the between-subject variability leads to spatially or temporally non-overlapping effects of interest. To address the problem of variability, data are commonly smoothed before performing a linear group averaging. In this work we build on ideas originally introduced by Kantorovich to propose a new algorithm that can efficiently average non-normalized data defined over arbitrary discrete domains using transportation metrics. We show how Kantorovich means can be linked to Wasserstein barycenters in order to take advantage of entropic smoothing. This leads to a smooth convex optimization problem and an algorithm with strong convergence guarantees. We illustrate the versatility of this tool and its empirical behavior on functional neuroimaging data, namely functional MRI and magnetoencephalography (MEG) source estimates, defined on voxel grids and triangulations of the folded cortical surface. PMID:26221679

  15. Modern average global sea-surface temperature

    USGS Publications Warehouse

    Schweitzer, Peter N.

    1993-01-01

    The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
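    The averaging scheme, combining same-month images across years cell by cell while skipping missing data, can be sketched in a few lines. This is an illustrative stand-in (flat lists of cell values, `None` for missing), not the data set's actual source code:

```python
def monthly_climatology(yearly_images):
    """Average same-month images across years, cell by cell, skipping
    missing values (None). Averaging this way reduces the number of
    cells that lack valid data, as in the SST climatology.
    yearly_images: one equal-length list of cell values per year."""
    n_cells = len(yearly_images[0])
    out = []
    for i in range(n_cells):
        vals = [img[i] for img in yearly_images if img[i] is not None]
        out.append(sum(vals) / len(vals) if vals else None)
    return out
```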

  16. Digital Averaging Phasemeter for Heterodyne Interferometry

    NASA Technical Reports Server (NTRS)

    Johnson, Donald; Spero, Robert; Shaklan, Stuart; Halverson, Peter; Kuhnert, Andreas

    2004-01-01

    A digital averaging phasemeter has been built for measuring the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. This phasemeter performs well enough to enable interferometric measurements of distance with accuracy of the order of 100 pm and with the ability to track distance as it changes at a speed of as much as 50 cm/s. This phasemeter is unique in that it is a single, integral system capable of performing three major functions that, heretofore, have been performed by separate systems: (1) measurement of the fractional-cycle phase difference, (2) counting of multiple cycles of phase change, and (3) averaging of phase measurements over multiple cycles for improved resolution. This phasemeter also offers the advantage of making repeated measurements at a high rate: the phase is measured on every heterodyne cycle. Thus, for example, in measuring the relative phase of two signals having a heterodyne frequency of 10 kHz, the phasemeter would accumulate 10,000 measurements per second. At this high measurement rate, an accurate average phase determination can be made more quickly than is possible at a lower rate.
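    Averaging phase samples naively fails near the ±π wrap; a standard remedy (an assumed illustration, not the phasemeter's actual firmware) is to average unit phasors and take the angle of the sum:

```python
import cmath

def average_phase(phases):
    """Wrap-safe average of phase samples (radians): sum the unit
    phasors exp(i*phi) and return the angle of the sum. Averaging
    many per-cycle samples improves resolution, as described above."""
    s = sum(cmath.exp(1j * p) for p in phases)
    return cmath.phase(s)
```

    For two samples just either side of the wrap, e.g. 3.1 and -3.1 rad, the phasor average correctly returns a phase of magnitude π, where a naive arithmetic mean would give 0.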

  17. Fluctuations of wavefunctions about their classical average

    NASA Astrophysics Data System (ADS)

    Benet, L.; Flores, J.; Hernández-Saldaña, H.; Izrailev, F. M.; Leyvraz, F.; Seligman, T. H.

    2003-02-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.

  18. Collimation of average multiplicity in QCD jets

    NASA Astrophysics Data System (ADS)

    Arleo, François; Pérez Ramos, Redamy

    2009-11-01

    The collimation of average multiplicity inside quark and gluon jets is investigated in perturbative QCD in the modified leading logarithmic approximation (MLLA). The role of higher order corrections accounting for energy conservation and the running of the coupling constant leads to smaller multiplicity collimation as compared to leading logarithmic approximation (LLA) results. The collimation of jets produced in heavy-ion collisions has also been explored by using medium-modified splitting functions enhanced in the infrared sector. As compared to elementary collisions, the angular distribution of the jet multiplicity is found to broaden in QCD media at all energy scales.

  19. Average characteristics of partially coherent electromagnetic beams.

    PubMed

    Seshadri, S R

    2000-04-01

    Average characteristics of partially coherent electromagnetic beams are treated with the paraxial approximation. Azimuthally or radially polarized, azimuthally symmetric beams and linearly polarized dipolar beams are used as examples. The change in the mean squared width of the beam from its value at the location of the beam waist is found to be proportional to the square of the distance in the propagation direction. The proportionality constant is obtained in terms of the cross-spectral density as well as its spatial spectrum. The use of the cross-spectral density has advantages over the use of its spatial spectrum.

  20. A Green's function quantum average atom model

    SciTech Connect

    Starrett, Charles Edward

    2015-05-21

    A quantum average atom model is reformulated using Green's functions. This allows integrals along the real energy axis to be deformed into the complex plane. The advantage is that sharp features such as resonances and bound states are broadened by a Lorentzian with a half-width chosen for numerical convenience. An implementation of this method therefore avoids numerically challenging resonance tracking and the search for weakly bound states, without changing the physical content or results of the model. A straightforward implementation results in up to a factor of 5 speed-up relative to an optimized orbital-based code.
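    The Lorentzian broadening invoked here is the standard effect of giving the Green's function a finite imaginary part η in place of the infinitesimal 0⁺ (a textbook identity, not reproduced from the abstract):

```latex
\frac{1}{E - E_0 + \mathrm{i}\,0^{+}} \;\longrightarrow\; \frac{1}{E - E_0 + \mathrm{i}\eta},
\qquad
-\frac{1}{\pi}\,\operatorname{Im}\frac{1}{E - E_0 + \mathrm{i}\eta}
= \frac{\eta/\pi}{(E - E_0)^2 + \eta^2}
```

    so each sharp level at E{sub 0} appears as a Lorentzian of half-width η, which can be chosen for numerical convenience.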