Science.gov

Sample records for average turnaround time

  1. Improving theatre turnaround time.

    PubMed

    Fletcher, Daniel; Edwards, David; Tolchard, Stephen; Baker, Richard; Berstock, James

    2017-01-01

    The NHS Institute for Innovation and Improvement has determined that a £7 million saving can be achieved per trust by improving theatre efficiency. The aim of this quality improvement project was to improve orthopaedic theatre turnaround without compromising patient safety. We process mapped all the stages from application of the dressing at the end of one case to knife to skin on the next patient in order to identify potential areas for improvement. Several suggestions arose and were tested in multiple PDSA cycles in a single theatre. These changes were adopted, adapted or rejected on the basis of run chart data and theatre team feedback. Successful ideas that were adopted included: the operating department practitioner (ODP) seeing the next patient and completing check-in paperwork during the previous case rather than during turnaround; a 15-minute telephone warning to ensure the next patient was fully ready; a dedicated cleaning team mobilised during wound closure; and sending for the next patient as theatre cleaning begins. Run charts demonstrate that as a result of these interventions the mean turnaround time almost halved, from 66.5 minutes in July to 36.8 minutes over all PDSA cycles. This improvement has been sustained and rolled out into another theatre. As these improvements become more established, we hope that additional cases will be booked, improving theatre output. The PDSA cycle continues, as we believe that further gains may yet be made and our improvements may be rolled out across other surgical specialities.

  2. Improving theatre turnaround time

    PubMed Central

    Fletcher, Daniel; Edwards, David; Tolchard, Stephen; Baker, Richard; Berstock, James

    2017-01-01

    The NHS Institute for Innovation and Improvement has determined that a £7 million saving can be achieved per trust by improving theatre efficiency. The aim of this quality improvement project was to improve orthopaedic theatre turnaround without compromising patient safety. We process mapped all the stages from application of the dressing at the end of one case to knife to skin on the next patient in order to identify potential areas for improvement. Several suggestions arose and were tested in multiple PDSA cycles in a single theatre. These changes were adopted, adapted or rejected on the basis of run chart data and theatre team feedback. Successful ideas that were adopted included: the operating department practitioner (ODP) seeing the next patient and completing check-in paperwork during the previous case rather than during turnaround; a 15-minute telephone warning to ensure the next patient was fully ready; a dedicated cleaning team mobilised during wound closure; and sending for the next patient as theatre cleaning begins. Run charts demonstrate that as a result of these interventions the mean turnaround time almost halved, from 66.5 minutes in July to 36.8 minutes over all PDSA cycles. This improvement has been sustained and rolled out into another theatre. As these improvements become more established, we hope that additional cases will be booked, improving theatre output. The PDSA cycle continues, as we believe that further gains may yet be made and our improvements may be rolled out across other surgical specialities. PMID:28243441

  3. Laboratory Turnaround Time

    PubMed Central

    Hawkins, Robert C

    2007-01-01

    Turnaround time (TAT) is one of the most noticeable signs of laboratory service and is often used as a key performance indicator of laboratory performance. This review summarises the literature regarding laboratory TAT, focusing on the different definitions, measures, expectations, published data, associations with clinical outcomes and approaches to improve TAT. It aims to provide a consolidated source of benchmarking data useful to the laboratory in setting TAT goals and to encourage introduction of TAT monitoring for continuous quality improvement. A 90% completion time (sample registration to result reporting) of <60 minutes for common laboratory tests is suggested as an initial goal for acceptable TAT. PMID:18392122
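    The suggested indicator above (a 90% completion time under 60 minutes, measured from sample registration to result reporting) can be computed directly from timestamp pairs. A minimal Python sketch, using hypothetical timestamps and a simple nearest-rank percentile; none of the data below come from the review itself:

```python
import math
from datetime import datetime

# Hypothetical (registration, reporting) timestamp pairs for one test type.
records = [
    ("2024-03-01 08:00", "2024-03-01 08:42"),
    ("2024-03-01 08:05", "2024-03-01 09:31"),
    ("2024-03-01 08:10", "2024-03-01 08:55"),
    ("2024-03-01 08:20", "2024-03-01 09:02"),
]

FMT = "%Y-%m-%d %H:%M"

def tat_minutes(reg: str, rep: str) -> float:
    """Minutes elapsed from sample registration to result reporting."""
    return (datetime.strptime(rep, FMT) - datetime.strptime(reg, FMT)).total_seconds() / 60

def percentile_90(values):
    """Nearest-rank 90th percentile: smallest value with >= 90% of results at or below it."""
    s = sorted(values)
    return s[math.ceil(0.9 * len(s)) - 1]

tats = [tat_minutes(reg, rep) for reg, rep in records]
p90 = percentile_90(tats)   # the "90% completion time"
goal_met = p90 < 60         # suggested initial goal: < 60 minutes
```

    In practice the same calculation would run over a month of laboratory information system extracts per test type, with the 60-minute goal adjusted to each laboratory's agreed targets.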

  4. Computerized Monitoring and Analysis of Radiology Report Turnaround Times

    NASA Astrophysics Data System (ADS)

    Wang, Yen

    1989-05-01

    A computerized Radiology Management System was used to monitor the turnaround time of radiology reports in a large university hospital. The time from patient entry into the department until the printing and distribution of the final examination report was monitored periodically for two-week time intervals. Total turnaround time was divided into four separate components. Analysis of the data enabled us to assess individual and departmental performance and thereby improve important patient service functions.

  5. Turnaround Time and Market Capacity in Contract Cheating

    ERIC Educational Resources Information Center

    Wallace, Melisa J.; Newton, Philip M.

    2014-01-01

    Contract cheating is the process whereby students auction off the opportunity for others to complete assignments for them. It is an apparently widespread yet under-researched problem. One suggested strategy to prevent contract cheating is to shorten the turnaround time between the release of assignment details and the submission date, thus making…

  6. Interlibrary Loan Turnaround Times in Science and Engineering.

    ERIC Educational Resources Information Center

    Horton, Weldon, Jr.

    1989-01-01

    Describes the use of fixed point analysis procedures at King Fahd University of Petroleum and Minerals to determine as narrow a range as possible of interlibrary loan turnaround times in science and engineering subjects. The findings are discussed in terms of the complexity of interlibrary loan factors and items determined as relevant for further…

  7. Factors that impact turnaround time of surgical pathology specimens in an academic institution.

    PubMed

    Patel, Samip; Smith, Jennifer B; Kurbatova, Ekaterina; Guarner, Jeannette

    2012-09-01

    Turnaround time of laboratory results is important for customer satisfaction. The College of American Pathologists' checklist requires an analytic turnaround time of 2 days or less for most routine cases and lets every hospital define what a routine specimen is. The objective of this study was to analyze which factors impact the turnaround time of nonbiopsy surgical pathology specimens. We calculated the turnaround time from receipt to verification of results (adjusted for weekends and holidays) for all nonbiopsy surgical specimens during a 2-week period. Factors studied included tissue type, number of slides per case, decalcification, immunohistochemistry, consultations with other pathologists, and diagnosis. Univariate and multivariate analyses were performed. A total of 713 specimens were analyzed; 551 (77%) were verified within 2 days and 162 (23%) in 3 days or more. Lung, gastrointestinal, breast, and genitourinary specimens showed the highest percentage of cases signed out in 3 days or more. Diagnosis of malignancy (including staging of the neoplasia), consultation with other pathologists, having had a frozen section, and use of immunohistochemical stains were significantly associated with increased turnaround time in univariate analysis. Decalcification was not associated with increased turnaround time. In multivariate analysis, consultation with other pathologists, use of immunohistochemistry, diagnosis of malignancy, and the number of slides studied remained significantly associated with prolonged turnaround time. Our findings suggest that diagnosis of malignancy is central to prolonging the turnaround time for surgical pathology specimens; thus, institutions that serve cancer centers will have longer turnaround times than those that do not.

  8. Applying Lean Methodologies Reduces Emergency Department Laboratory Turnaround Times

    PubMed Central

    White, Benjamin A.; Baron, Jason M.; Dighe, Anand S.; Camargo, Carlos A.; Brown, David F.M.

    2015-01-01

    Background Increasing the value of healthcare delivery is a national priority, and providers face growing pressure to reduce cost while improving quality. Ample opportunity exists to increase efficiency and quality simultaneously through the application of systems engineering science. Objective We examined the hypothesis that Lean-based reorganization of laboratory process flow would improve laboratory turnaround times (TAT) and reduce waste in the system. Methods This study was a prospective, before-after analysis of laboratory process improvement in a teaching hospital Emergency Department (ED). The intervention included a reorganization of laboratory sample flow based on systems engineering science and Lean methodologies, with no additional resources. The primary outcome was the median TAT from sample collection to result for six tests previously performed in an ED kiosk. Results Following the intervention, median laboratory TAT decreased across most tests. The greatest decreases were found in “reflex tests” performed after an initial screening test: troponin T TAT was reduced by 33 minutes (86 to 53 min, 99% CI 30–35 min) and urine sedimentation TAT by 88 minutes (117 to 29 min, 99% CI 87–90 min). In addition, troponin I TAT was reduced by 12 minutes, urinalysis by 9 minutes, and urine HCG by 10 minutes. Microbiology rapid testing TAT, a ‘control’, did not change. Conclusions In this study, Lean-based reorganization of laboratory process flow significantly increased process efficiency. Broader application of systems engineering science might further improve healthcare quality and capacity, while reducing waste and cost. PMID:26145581

  9. Microwave and digital imaging technology reduce turnaround times for diagnostic electron microscopy.

    PubMed

    Giberson, Richard T; Austin, Ronald L; Charlesworth, Jon; Adamson, Grete; Herrera, Guillermo A

    2003-01-01

    The contributions of microwave methods and digital imaging techniques, taken together, can reduce routine specimen processing and evaluation for diagnostic electron microscopy to a time frame never thought possible. Significant improvements in both technologies over the last 5 years led the authors to evaluate their combined use as the most likely candidate to provide a realistic reduction in turnaround times for diagnostic electron microscopy. For diagnostic electron microscopy to compete favorably with immunohistochemistry and other ancillary diagnostic techniques, it must improve its turnaround time. To evaluate this hypothesis, the microwave-assisted processing results of over 2,000 diagnostic cases were evaluated, as was a digital image administration system used for the acquisition and dissemination of diagnostic results. The incorporation of both technologies reduced turnaround times to 4 h or less.

  10. Mapping Turnaround Times (TAT) to a Generic Timeline: A Systematic Review of TAT Definitions in Clinical Domains

    PubMed Central

    2011-01-01

    Background Assessing turnaround times can help to analyse workflows in hospital information systems. This paper presents a systematic review of the literature concerning different turnaround time definitions. Our objectives were to collect the relevant literature on these process times in hospitals and their respective domains, analyse the existing definitions, and summarise them in an appropriate format. Methods Our search strategy was based on Pubmed queries and manual reviews of the bibliographies of retrieved articles. Studies were included if precise definitions of turnaround times were available. A generic timeline was designed through a consensus process to provide an overview of these definitions. Results More than 1000 articles were screened, of which 122 papers were included. From these, 162 turnaround time definitions in different clinical domains were identified. Starting and end points vary between these domains. To illustrate these turnaround time definitions, a generic timeline was constructed using preferred terms derived from the identified definitions. The consensus process resulted in the following 15 terms: admission, order, biopsy/examination, receipt of specimen in laboratory, procedure completion, interpretation, dictation, transcription, verification, report available, delivery, physician views report, treatment, discharge and discharge letter sent. Based on this analysis, several standard terms for turnaround time definitions are proposed. Conclusion Using turnaround times to benchmark clinical workflows remains difficult, because even within the same clinical domain many different definitions exist. Mapping turnaround time definitions to a generic timeline is feasible. PMID:21609424

  11. 24 CFR 901.10 - Indicator #1, vacancy rate and unit turnaround time.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Indicator #1, vacancy rate and unit... HOUSING AND URBAN DEVELOPMENT PUBLIC HOUSING MANAGEMENT ASSESSMENT PROGRAM § 901.10 Indicator #1, vacancy rate and unit turnaround time. This indicator examines the vacancy rate, a PHA's progress in...

  12. Specimen origin, type and testing laboratory are linked to longer turnaround times for HIV viral load testing in Malawi

    PubMed Central

    Chipungu, Geoffrey; Kim, Andrea A.; Sarr, Abdoulaye; Ali, Hammad; Mwenda, Reuben; Nkengasong, John N.; Singer, Daniel

    2017-01-01

    Background Efforts to reach UNAIDS’ treatment and viral suppression targets have increased demand for viral load (VL) testing and strained existing laboratory networks, affecting turnaround time. Longer VL turnaround times delay both initiation of formal adherence counseling and switches to second-line therapy for persons failing treatment and contribute to poorer health outcomes. Methods We utilized descriptive statistics and logistic regression to analyze VL testing data collected in Malawi between January 2013 and March 2016. The primary outcomes assessed were greater-than-median pretest phase turnaround time (days elapsed from specimen collection to receipt at the laboratory) and greater-than-median test phase turnaround time (days from receipt to testing). Results The median number of days between specimen collection and testing increased 3-fold between 2013 (8 days, interquartile range (IQR) = 6–16) and 2015 (24, IQR = 13–39) (p<0.001). Multivariable analysis indicated that the odds of longer pretest phase turnaround time were significantly higher for specimen collection districts without laboratories capable of conducting viral load tests (adjusted odds ratio (aOR) = 5.16; 95% confidence interval (CI) = 5.04–5.27) as well as for Malawi’s Northern and Southern regions. Longer test phase turnaround time was significantly associated with use of dried blood spots instead of plasma (aOR = 2.30; 95% CI = 2.23–2.37) and for certain testing months and testing laboratories. Conclusion Increased turnaround time for VL testing appeared to be driven in part by categorical factors specific to the phase of turnaround time assessed. Given the implications of longer turnaround time and the global effort to scale up VL testing, addressing these factors via increasing efficiencies, improving quality management systems and generally strengthening the VL spectrum should be considered essential components of controlling the HIV epidemic. PMID:28235013
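    The adjusted odds ratios above come from multivariate logistic regression. As a simplified, hypothetical illustration of the underlying measure, an unadjusted odds ratio with a Woolf (log-scale) 95% confidence interval can be computed from a 2x2 table. The counts below are invented for illustration and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Woolf (log-scale) 95% CI.

    2x2 table layout:
        a: exposed, long TAT        b: exposed, short TAT
        c: unexposed, long TAT      d: unexposed, short TAT
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts (NOT the study's data): dried blood spots vs plasma,
# cross-classified by longer- vs shorter-than-median test-phase TAT.
or_, lo, hi = odds_ratio_ci(a=460, b=200, c=250, d=250)
```

    A multivariable model, as used in the study, would additionally adjust each odds ratio for the other predictors (specimen origin, type, and testing laboratory).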

  13. Timeliness “at a glance”: assessing the turnaround time through the six sigma metrics.

    PubMed

    Ialongo, Cristiano; Bernardini, Sergio

    2016-01-01

    Almost thirty years of systematic analysis have proven turnaround time to be a fundamental dimension for the clinical laboratory. Several indicators are available to assess and report quality with respect to timeliness, but they sometimes lack communicative immediacy and accuracy. Six sigma is a paradigm developed within the industrial domain for assessing quality and addressing goals and issues. The sigma level computed through the Z-score method is a simple and straightforward tool which expresses quality on a universal dimensionless scale and can handle non-normal data. Herein we report our preliminary experience in using the sigma level to assess the change in urgent (STAT) test turnaround time due to the implementation of total automation. We found that the Z-score method is a valuable and easy-to-use method for assessing and communicating the quality level of laboratory timeliness, showing good correspondence with the actual change in efficiency that was observed retrospectively.
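    A sketch of the kind of computation the abstract describes: converting the fraction of results that exceed a TAT goal into a sigma level via the inverse standard-normal CDF. The 1.5-sigma long-term shift is the conventional six sigma assumption, not something specified by the paper, and the counts are hypothetical:

```python
from statistics import NormalDist

def sigma_level(defects: int, total: int, shift: float = 1.5) -> float:
    """Sigma level from a defect proportion (results exceeding the TAT goal).

    The yield is mapped through the inverse standard-normal CDF; adding the
    conventional 1.5-sigma shift reports a long-term sigma value.
    """
    yield_ = 1 - defects / total
    return NormalDist().inv_cdf(yield_) + shift

# e.g. 80 of 10,000 STAT results exceeded the TAT goal:
level = sigma_level(80, 10_000)
```

    Comparing sigma levels before and after a change such as total automation gives a single dimensionless figure for the timeliness improvement.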

  14. Analysis of high-level radioactive slurries as a method to reduce DWPF turnaround times

    SciTech Connect

    Coleman, C.J.; Bibler, N.E.; Ferrara, D.M.; Hay, M.S.

    1996-06-01

    Analysis of Defense Waste Processing Facility (DWPF) samples as slurries rather than as dried or vitrified samples is an effective way to reduce sample turnaround times. Slurries can be dissolved with a mixture of concentrated acids to yield solutions for elemental analysis by inductively coupled plasma-atomic emission spectroscopy (ICP-AES). Slurry analyses can be performed in eight hours, whereas analyses of vitrified samples require up to 40 hours to complete. Analyses of melter feed samples consisting of the DWPF borosilicate frit and either simulated or actual DWPF radioactive sludge were typically within a range of 3--5% of the predicted value based on the relative amounts of sludge and frit added to the slurry. The results indicate that the slurry analysis approach yields analytical accuracy and precision competitive with those obtained from analyses of vitrified samples. Slurry analyses offer a viable alternative to analyses of solid samples as a simple way to reduce analytical turnaround times.

  15. Insertable system for fast turnaround time microwave experiments in a dilution refrigerator.

    PubMed

    Ong, Florian R; Orgiazzi, Jean-Luc; de Waard, Arlette; Frossati, Giorgio; Lupascu, Adrian

    2012-09-01

    Microwave experiments in dilution refrigerators are a central tool in the field of superconducting quantum circuits and other research areas. Such experiments have so far relied on attaching a device to the mixing chamber of a dilution refrigerator, with a minimum turnaround time of a few days because the entire refrigerator must be cooled down and warmed up. We developed a new approach in which a suitable sample holder is attached to a cold-insertable probe and brought into contact with transmission lines permanently mounted inside the cryostat. The total turnaround time is 8 h if the target temperature is 80 mK. The lowest attainable temperature is 30 mK. Our system can accommodate up to six transmission lines, with a measurement bandwidth tested from zero frequency to 12 GHz. This bandwidth is limited by low-pass components in the setup; we expect the intrinsic bandwidth to be at least 18 GHz. We present our setup, discuss the experimental procedure, and give examples of experiments enabled by this system. This new measurement method will have a major impact on systematic ultra-low temperature studies using microwave signals, including those requiring quantum coherence.

  16. Insertable system for fast turnaround time microwave experiments in a dilution refrigerator

    NASA Astrophysics Data System (ADS)

    Ong, Florian R.; Orgiazzi, Jean-Luc; de Waard, Arlette; Frossati, Giorgio; Lupascu, Adrian

    2012-09-01

    Microwave experiments in dilution refrigerators are a central tool in the field of superconducting quantum circuits and other research areas. Such experiments have so far relied on attaching a device to the mixing chamber of a dilution refrigerator, with a minimum turnaround time of a few days because the entire refrigerator must be cooled down and warmed up. We developed a new approach in which a suitable sample holder is attached to a cold-insertable probe and brought into contact with transmission lines permanently mounted inside the cryostat. The total turnaround time is 8 h if the target temperature is 80 mK. The lowest attainable temperature is 30 mK. Our system can accommodate up to six transmission lines, with a measurement bandwidth tested from zero frequency to 12 GHz. This bandwidth is limited by low-pass components in the setup; we expect the intrinsic bandwidth to be at least 18 GHz. We present our setup, discuss the experimental procedure, and give examples of experiments enabled by this system. This new measurement method will have a major impact on systematic ultra-low temperature studies using microwave signals, including those requiring quantum coherence.

  17. National turnaround time survey: professional consensus standards for optimal performance and thresholds considered to compromise efficient and effective clinical management.

    PubMed

    McKillop, Derek J; Auld, Peter

    2017-01-01

    Background Turnaround time can be defined as the time from receipt of a sample by the laboratory to the validation of the result. The Royal College of Pathologists recommends that a number of performance indicators for turnaround time should be agreed with stakeholders. The difficulty lies in arriving at a goal with some evidence base to support it, rather than one that is simply achievable with current technology. This survey sought to establish a professional consensus on the goals and meaning of targets for laboratory turnaround time. Methods A questionnaire was circulated by the National Audit Committee to 173 lead consultants for biochemistry in the UK. The survey asked each participant to state their current target turnaround time for core investigations in a broad group of clinical settings. Each participant was also asked to provide a professional opinion on what turnaround time would pose an unacceptable risk to patient safety for each departmental category. A supermajority (2/3) was selected as the threshold for consensus. Results The overall response rate was 58% (n = 100), with a range of 49-72% across the individual Association for Clinical Biochemistry and Laboratory Medicine regions. The consensus optimal turnaround time for the emergency department was <1 h, with >2 h considered unacceptable. The corresponding times for general practice and the outpatient department were <24 h and >48 h, and for wards <4 h and >12 h, respectively. Conclusions We consider that these figures provide a useful benchmark of current opinion, but clearly more empirical standards will have to develop alongside other aspects of healthcare delivery.

  18. Implementation and Operational Research: Expedited Results Delivery Systems Using GPRS Technology Significantly Reduce Early Infant Diagnosis Test Turnaround Times.

    PubMed

    Deo, Sarang; Crea, Lindy; Quevedo, Jorge; Lehe, Jonathan; Vojnov, Lara; Peter, Trevor; Jani, Ilesh

    2015-09-01

    The objective of this study was to quantify the impact of a new technology for communicating the results of an infant HIV diagnostic test on test turnaround time, and to quantify the association between late delivery of test results and patient loss to follow-up. We used data collected during a pilot implementation of General Packet Radio Service (GPRS) printers for communicating results in the early infant diagnosis program in Mozambique from 2008 through 2010. Our dataset comprised 1757 patient records, of which 767 were from before and 990 from after implementation of the expedited results delivery system. We used a multivariate logistic regression model to determine the association between late result delivery (more than 30 days between sample collection and result delivery to the health facility) and the probability of result collection by the infant's caregiver. We used a sample selection model to determine the association between late result delivery to the facility and further delay in collection of results by the caregiver. The mean test turnaround time was reduced from 68.13 to 41.05 days after implementation of the expedited results delivery system. Caregivers collected only 665 (37.8%) of the 1757 results. After controlling for confounders, late delivery of results was associated with a reduction of approximately 18% (0.44 vs. 0.36; P < 0.01) in the probability of results being collected by caregivers (odds ratio = 0.67, P < 0.05). Late delivery of results was also associated with a further average increase of 20.91 days of delay in collection of results (P < 0.01). Early infant diagnosis program managers should further evaluate the cost-effectiveness of operational interventions (eg, GPRS printers) that reduce delays.

  19. Flexible time domain averaging technique

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise was processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments were also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
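    The conventional TDA that FTDA improves upon can be sketched in a few lines: the signal is cut into whole periods and averaged point by point, which attenuates any component not synchronous with the chosen period. This illustrates only the baseline technique; the FTDA's CZT-based comb adjustment is not reproduced here, and the test signal is synthetic:

```python
import math
import random

def time_domain_average(signal, period: int):
    """Conventional TDA: split the signal into whole periods and average them
    point by point, suppressing components not synchronous with the period."""
    n_periods = len(signal) // period
    return [
        sum(signal[k * period + i] for k in range(n_periods)) / n_periods
        for i in range(period)
    ]

# Synthetic periodic component buried in Gaussian noise.
period = 50
random.seed(0)
clean = [math.sin(2 * math.pi * i / period) for i in range(period)]
noisy = [clean[i % period] + random.gauss(0, 1.0) for i in range(period * 200)]

avg = time_domain_average(noisy, period)
# Noise power shrinks roughly by the number of averaged periods (here 200).
err = max(abs(a - c) for a, c in zip(avg, clean))
```

    Note that this plain average assumes the period is an exact integer number of samples; the period cutting error the paper addresses arises precisely when it is not.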

  20. Additional technician tasks and turnaround time in the clinical Stat laboratory

    PubMed Central

    Salinas, Maria; López-Garrigós, Maite; Flores, Emilio; Leiva-Salinas, Maria; Lillo, Rosa; Leiva-Salinas, Carlos

    2016-01-01

    Introduction Many additional tasks in the Stat laboratory (SL) increase the workload. It is necessary to control them because they can affect the service provided by the laboratory. Our aim was to quantify these tasks, study their evolution over a 10-year period, and compare turnaround times (TAT) in the summer period with the rest of the year. Materials and methods Additional tasks were classified as “additional test request” and “additional sample”. We collected these occurrences from the laboratory information system (LIS) and calculated their evolution over time. We also calculated the monthly TAT for troponin for Emergency department (ED) patients, as the difference between the verification time and the LIS registration time. A median time of 30 minutes was our indicator target. TAT results and test workloads in summer were compared with the rest of the year. Results Over a 10-year period, the technologists in the SL performed 51,385 additional tasks, a median of 475 per month. The workload was significantly higher during the summer (45,496 tests) than during the rest of the year (44,555 tests) (P = 0.019). The troponin TAT did not show this variation between summer and the rest of the year, always complying with our 30-minute indicator target. Conclusion The technicians accomplished a significant number of additional tasks, and the workload kept increasing over the 10-year period. This did not affect the TAT results. PMID:27346970

  1. Efficiency of an Automated Reception and Turnaround Time Management System for the Phlebotomy Room

    PubMed Central

    Yun, Soon Gyu; Park, Eun Su; Bang, Hae In; Kang, Jung Gu

    2016-01-01

    Background Recent advances in laboratory information systems have largely been focused on automation. However, the phlebotomy services have not been completely automated. To address this issue, we introduced an automated reception and turnaround time (TAT) management system, for the first time in Korea, whereby the patient's information is transmitted directly to the actual phlebotomy site and the TAT for each phlebotomy step can be monitored at a glance. Methods The GNT5 system (Energium Co., Ltd., Korea) was installed in June 2013. The automated reception and TAT management system has been in operation since February 2014. Integration of the automated reception machine with the GNT5 allowed for direct transmission of laboratory order information to the GNT5 without involving any manual reception step. We used the mean TAT from reception to actual phlebotomy as the parameter for evaluating the efficiency of our system. Results Mean TAT decreased from 5:45 min to 2:42 min after operationalization of the system. The mean number of patients in queue decreased from 2.9 to 1.0. Further, the number of cases taking more than five minutes from reception to phlebotomy, defined as the defect rate, decreased from 20.1% to 9.7%. Conclusions The use of automated reception and TAT management system was associated with a decrease of overall TAT and an improved workflow at the phlebotomy room. PMID:26522759
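    The two headline metrics above, the mean reception-to-phlebotomy TAT and the defect rate (cases taking more than five minutes), are straightforward to compute from per-patient durations. A small sketch with hypothetical data, not the study's records:

```python
def phlebotomy_metrics(tat_seconds, threshold_s=300):
    """Mean reception-to-phlebotomy TAT as an mm:ss string, plus the defect
    rate: the fraction of cases exceeding the threshold (5 min by default)."""
    mean_s = sum(tat_seconds) / len(tat_seconds)
    defect_rate = sum(t > threshold_s for t in tat_seconds) / len(tat_seconds)
    return f"{int(mean_s // 60)}:{int(mean_s % 60):02d}", defect_rate

# Hypothetical reception-to-phlebotomy durations (seconds) for ten patients.
mean_tat, defect_rate = phlebotomy_metrics(
    [95, 120, 160, 150, 140, 180, 330, 90, 110, 245]
)
```

    With automated reception, both inputs (reception and phlebotomy timestamps) are captured without manual steps, which is what makes continuous monitoring of these two indicators practical.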

  2. Total Automation for the Core Laboratory: Improving the Turnaround Time Helps to Reduce the Volume of Ordered STAT Tests.

    PubMed

    Ialongo, Cristiano; Porzio, Ottavia; Giambini, Ilio; Bernardini, Sergio

    2016-06-01

    The transition to total automation represents the greatest leap for a clinical laboratory, characterized by a totally new philosophy of process management. We have investigated the impact of total automation on core laboratory efficiency and its effects on the clinical services related to STAT tests. For this purpose, a 47-month retrospective study based on the analysis of 44,212 records of STAT cardiac troponin I (CTNI) tests was performed. The core laboratory reached a new efficiency level 3 months after the implementation of total automation. Median turnaround time (TAT) was reduced by 14.9±1.5 min for the emergency department (p < 0.01), reaching 41.6±1.2 min. In non-emergency departments, median TAT was reduced by 19.8±2.2 min (p < 0.01), reaching 52±1.3 min. There was no change in the volume of ordered STAT CTNI tests by the emergency department (p = 0.811), whereas for non-emergency departments there was a reduction of 115.7±50 monthly requests on average (p = 0.026). The volume of ordered tests decreased only in time frames of the regular shift following the morning round. Thus, total automation significantly improves the core laboratory efficiency in terms of TAT. As a consequence, the volume of STAT tests ordered by hospital departments (except for the emergency department) decreased due to reduced duplicated requests.

  3. Diagnostic accuracy and turnaround time of the Xpert MTB/RIF assay in routine clinical practice.

    PubMed

    Kwak, Nakwon; Choi, Sun Mi; Lee, Jinwoo; Park, Young Sik; Lee, Chang-Hoon; Lee, Sang-Min; Yoo, Chul-Gyu; Kim, Young Whan; Han, Sung Koo; Yim, Jae-Joon

    2013-01-01

    The Xpert MTB/RIF assay was introduced for timely and accurate detection of tuberculosis (TB). The aim of this study was to determine the diagnostic accuracy and turnaround time (TAT) of the Xpert MTB/RIF assay in clinical practice in South Korea. We retrospectively reviewed the medical records of patients for whom the Xpert MTB/RIF assay on sputum was requested. The sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) for the diagnosis of pulmonary tuberculosis (PTB) and detection of rifampicin resistance were calculated. In addition, the TAT of the Xpert MTB/RIF assay was compared with those of other tests. A total of 681 patients for whom the Xpert MTB/RIF assay was requested were included in the analysis. The sensitivity, specificity, PPV and NPV of the Xpert MTB/RIF assay for the diagnosis of PTB were 79.5% (124/156), 100.0% (505/505), 100.0% (124/124) and 94.0% (505/537), respectively. Those for the detection of rifampicin resistance were 57.1% (8/14), 100.0% (113/113), 100.0% (8/8) and 94.9% (113/119), respectively. The median TAT of the Xpert MTB/RIF assay to the report of results and to results confirmed by physicians in outpatient settings was 0 (0-1) and 6 (3-7) days, respectively. Median time to treatment after initial evaluation was 7 (4-9) days in patients with the Xpert MTB/RIF assay, but 21 (7-33.5) days in patients without it. The Xpert MTB/RIF assay showed acceptable sensitivity and excellent specificity for the diagnosis of PTB and detection of rifampicin resistance in areas with an intermediate TB burden. Additionally, the assay decreased the time to initiation of anti-TB drugs through its shorter TAT.
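
    The accuracy figures above are simple ratios from a 2×2 contingency table. As a quick illustration, the short Python sketch below (counts taken from the abstract; the helper name is ours) reproduces the quoted values for pulmonary TB diagnosis:

```python
# Sketch: recomputing the diagnostic-accuracy figures quoted in the abstract
# from the underlying 2x2 counts (true/false positives and negatives).
def diagnostic_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV and NPV as percentages."""
    return {
        "sensitivity": 100 * tp / (tp + fn),
        "specificity": 100 * tn / (tn + fp),
        "ppv": 100 * tp / (tp + fp),
        "npv": 100 * tn / (tn + fn),
    }

# Counts for pulmonary TB diagnosis, read off the abstract:
# 124 true positives out of 156 PTB cases, 505 true negatives, no false positives.
ptb = diagnostic_metrics(tp=124, fp=0, fn=32, tn=505)
print({k: round(v, 1) for k, v in ptb.items()})
# → {'sensitivity': 79.5, 'specificity': 100.0, 'ppv': 100.0, 'npv': 94.0}
```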

  4. The Impact of a Health IT Changeover on Medical Imaging Department Work Processes and Turnaround Times

    PubMed Central

    Georgiou, A.; Lymer, S.; Hordern, A.; Ridley, L.; Westbrook, J.

    2015-01-01

    Summary Objectives To assess the impact of introducing a new Picture Archiving and Communication System (PACS) and Radiology Information System (RIS) on: (i) Medical Imaging work processes; and (ii) turnaround times (TATs) for x-ray and CT scan orders initiated in the Emergency Department (ED). Methods We employed a mixed method study design comprising: (i) semi-structured interviews with Medical Imaging Department staff; and (ii) retrospectively extracted ED data before (March/April 2010) and after (March/April 2011 and 2012) the introduction of a new PACS/RIS. TATs were calculated as: processing TAT (median time from image ordering to examination) and reporting TAT (median time from examination to final report). Results Reporting TAT for x-rays decreased significantly after introduction of the new PACS/RIS; from a median of 76 hours to 38 hours per order (p<.0001) for patients discharged from the ED, and from 84 hours to 35 hours (p<.0001) for patients admitted to hospital. Medical Imaging staff reported that the changeover to the new PACS/RIS led to gains in efficiency, particularly regarding the accessibility of images and patient-related information. Nevertheless, assimilation of the new PACS/RIS with existing Departmental work processes was considered inadequate and in some instances unsafe. Issues highlighted related to the synchronization of work tasks (e.g., porter arrangements) and the material set up of the work place (e.g., the number and location of computers). Conclusions The introduction of new health IT can be a “double-edged sword” providing improved efficiency but at the same time introducing potential hazards affecting the effectiveness of the Medical Imaging Department. PMID:26448790

  5. Preparing printed circuit boards for rapid turn-around time on a plotter

    SciTech Connect

    Hawtree, J.

    1998-01-01

    This document describes the use of the LPKF ProtoMat mill/drill circuit board plotter and the associated CAD/CAM software, BoardMaster and CircuitCAM. At present its primary use at Fermilab's Particle Physics Department is the rapid turnaround of prototype double-sided and single-sided copper-clad printed circuit boards (PCBs). (The plotter is also capable of producing gravure films and engraving aluminum or plastic, although we have not used it for this.) It can make traces 0.004 inch wide with 0.004 inch spacings, which is appropriate for high-density surface-mount circuits as well as other through-mounted discrete and integrated components. One of the primary benefits of the plotter is the capability to produce double-sided drilled boards from CAD files in a few hours. However, to achieve this rapid turn-around time, some care must be taken in preparing the files. This document describes how to optimize the process of PCB fabrication. With proper preparation, researchers can often have a completed circuit board in a day's time instead of the week or two wait of usual procedures. It is assumed that the software and hardware are properly installed and that the machinist is acquainted with the Win95 operating system and the basics of the associated software. This paper does not describe its use with pen plotters, lasers or rubouts. The process of creating a PCB begins with the CAD (computer-aided design) software, usually PCAD or VeriBest. These files are then moved to CAM (computer-aided machining) software, where they are edited and converted into the proper format for running on the ProtoMat plotter. The plotter then performs the actual machining of the board. This document concentrates on the LPKF programs CircuitCAM BASIS and BoardMaster for the CAM step; these programs run on a Windows 95 platform and drive an LPKF ProtoMat 93s plotter.

  6. A quality initiative of postoperative radiographic imaging performed on mastectomy specimens to reduce histology cost and pathology report turnaround time.

    PubMed

    Kallen, Michael E; Sim, Myung S; Radosavcev, Bryan L; Humphries, Romney M; Ward, Dawn C; Apple, Sophia K

    2015-10-01

    Breast pathology relies on gross dissection for accurate diagnostic work, but challenges can necessitate submission of high tissue volumes, resulting in excess labor, laboratory costs, and delays. To address these issues, a quality initiative was created through implementation of the Faxitron PathVision specimen radiography system as part of the breast gross dissection protocol; this report documents its impact on workflow and clinical care. Retrospective data from 459 patients who underwent simple or modified radical mastectomy at our institution between May 2012 and December 2014 were collected. Comparison was made between the mastectomy specimen control group before radiography use (233 patients, 340 breasts) and the Faxitron group that underwent postoperative radiography (226 patients, 338 breasts). We observed a statistically significant decrease in the mean number of blocks between the control and Faxitron groups (47.0 vs 39.7 blocks; P<.0001), for a calculated cost savings of US $146 per mastectomy. A statistically significant decrease in pathology report turnaround time was also observed (4.2 vs 3.8 days; P=.038). Postoperative mastectomy specimen radiography has increased workflow efficiency and decreased histology costs and pathology report turnaround time. These findings may underestimate the actual benefits and highlight the importance of quality improvement projects in anatomical pathology.

  7. Statistics of time averaged atmospheric scintillation

    SciTech Connect

    Stroud, P.

    1994-02-01

    A formulation has been constructed to recover the statistics of the moving average of the scintillation Strehl from a discrete set of measurements. A program of airborne atmospheric propagation measurements was analyzed to find the correlation function of the relative intensity over displaced propagation paths. The variance in continuous moving averages of the relative intensity was then found in terms of the correlation functions. An empirical formulation of the variance of the continuous moving average of the scintillation Strehl has been constructed. The resulting characterization of the variance of the finite time averaged Strehl ratios is being used to assess the performance of an airborne laser system.
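
    The central relation in such a formulation, for a stationary process, is that the variance of a continuous moving average over a window T is the double integral of the correlation function, Var[(1/T)∫₀ᵀ I(t) dt] = (1/T²)∫₀ᵀ∫₀ᵀ C(t−t′) dt dt′, which reduces to (2/T)∫₀ᵀ(1−τ/T)C(τ) dτ. A minimal numerical sketch, assuming an exponential correlation function (our illustrative choice, not the paper's model), checks this against the closed form:

```python
# Variance of a moving average over window T from the correlation function C(tau),
# via Var = (2/T) * integral_0^T (1 - tau/T) C(tau) dtau (stationary process).
# Assumed model: exponential correlation C(tau) = sigma^2 * exp(-|tau|/tau_c),
# whose closed form is 2 sigma^2 (tau_c/T)^2 (T/tau_c - 1 + exp(-T/tau_c)).
import math

def moving_average_variance(corr, T, n=10000):
    """Midpoint-rule estimate of (2/T) * integral_0^T (1 - tau/T) * corr(tau) dtau."""
    dt = T / n
    total = sum((1 - (i + 0.5) * dt / T) * corr((i + 0.5) * dt) for i in range(n))
    return 2 / T * total * dt

sigma2, tau_c, T = 1.0, 0.2, 1.0
corr = lambda tau: sigma2 * math.exp(-abs(tau) / tau_c)
numeric = moving_average_variance(corr, T)
exact = 2 * sigma2 * (tau_c / T) ** 2 * (T / tau_c - 1 + math.exp(-T / tau_c))
print(abs(numeric - exact) < 1e-4)  # → True
```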

  8. Measuring Time-Averaged Blood Pressure

    NASA Technical Reports Server (NTRS)

    Rothman, Neil S.

    1988-01-01

    Device measures time-averaged component of absolute blood pressure in artery. Includes compliant cuff around artery and external monitoring unit. Ceramic construction in monitoring unit suppresses ebb and flow of pressure-transmitting fluid in sensor chamber. Transducer measures only static component of blood pressure.

  9. Attractors and Time Averages for Random Maps

    NASA Astrophysics Data System (ADS)

    Araujo, Vitor

    2006-07-01

    Considering random noise in finite dimensional parameterized families of diffeomorphisms of a compact finite dimensional boundaryless manifold M, we show the existence of time averages for almost every orbit of each point of M, imposing mild conditions on the families. Moreover these averages are given by a finite number of physical absolutely continuous stationary probability measures. We use this result to deduce that situations with infinitely many sinks and Henon-like attractors are not stable under random perturbations, e.g., Newhouse's and Colli's phenomena in the generic unfolding of a quadratic homoclinic tangency by a one-parameter family of diffeomorphisms.

  10. Comprehensive time average digital holographic vibrometry

    NASA Astrophysics Data System (ADS)

    Psota, Pavel; Lédl, Vít; Doleček, Roman; Mokrý, Pavel; Vojtíšek, Petr; Václavík, Jan

    2016-12-01

    This paper presents a method that simultaneously deals with the drawbacks of time-average digital holography: limited measurement range, limited spatial resolution, and quantitative analysis of the measured Bessel fringe patterns. When the frequency of the reference wave is shifted by an integer multiple of the frequency at which the object oscillates, the measurement range of the method can be shifted either to smaller or to larger vibration amplitudes. In addition, phase modulation of the reference wave is used to obtain a sequence of phase-modulated fringe patterns. Such fringe patterns can be combined by means of phase-shifting algorithms, and the amplitudes of vibrations can be straightforwardly computed. This approach calculates the amplitude values independently in every single pixel. The frequency shift and phase modulation are realized by proper control of Bragg cells, and therefore no additional hardware is required.

  11. Does Elimination of a Laboratory Sample Clotting Stage Requirement Reduce Overall Turnaround Times for Emergency Department Stat Biochemical Testing?

    PubMed Central

    Compeau, Sarah; Howlett, Michael; Matchett, Stephanie; Shea, Jennifer; Fraser, Jacqueline; McCloskey, Rose

    2016-01-01

    Introduction: Laboratory turnaround times (TAT) influence length of stay for emergency department (ED) patients. We studied biochemistry TATs around the implementation of a plasma separating tube (PST) that omitted a 20-minute clotting step in processing when compared to the standard serum separating tubes (SST). Methods: We compared laboratory TATs using PST vs SST in a prospective before-and-after study with a washout period. TATs for creatinine, urea, electrolytes, troponin, and N-terminal pro b-type natriuretic peptide (NT-proBNP), as well as hemolysis rates, were collected for all ED patients. Results were excluded if the TAT was four minutes or less (a data entry error). We recorded the 90th percentile response times (TAT90; the time for 90% of the tests to be completed). Statistical analysis used survival analyses, Mann-Whitney U tests, and Chi-square tests of independence. Results: The SST and PST groups were matched for day of the week, critical values, and hemolysis. There was a statistically significant reduction in the median TAT and in the proportion completed by 60 minutes. However, the effect size was only two to four minutes in the In-Lab TAT90 with the PST tubes for all tests except B-type natriuretic peptide (BNP). Conclusions: Reducing the machine processing time for stat blood work with PST tubes did not produce a clinically meaningful reduction in TAT. Clinically important improvement in lab TAT requires process analysis and intervention that is inclusive of the entire system. A fractile response time at the 90th percentile with TAT within 60 minutes may be an accurate benchmark for analysis. PMID:27843737
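
    The TAT90 benchmark used above is simply a 90th-percentile (fractile) response time. A minimal sketch, using made-up TAT values and the nearest-rank percentile definition (our simplification):

```python
# Sketch: the fractile benchmark from the abstract -- the 90th-percentile
# turnaround time (TAT90), i.e. the time by which 90% of tests are complete.
# The TAT values below are hypothetical minutes, purely for illustration.
import math

def tat_percentile(tats, fraction=0.90):
    """Nearest-rank percentile: smallest TAT such that `fraction` of tests finish within it."""
    ordered = sorted(tats)
    rank = math.ceil(fraction * len(ordered))  # nearest-rank definition
    return ordered[rank - 1]

sample_tats = [22, 31, 35, 38, 40, 44, 47, 52, 58, 75]  # minutes, hypothetical
print(tat_percentile(sample_tats))  # → 58
```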

  12. An Audit of VDRL Testing from an STI Clinic in India: Analysing the Present Scenario with Focus on Estimating and Optimizing the Turnaround Time

    PubMed Central

    Mehra, Bhanu; Rawat, Deepti; Saxena, Shikhar

    2015-01-01

    Background Timeliness of reporting is of utmost importance to limit the spread of syphilis. The present analysis was undertaken to evaluate the turnaround time of syphilis testing (mainly the Venereal Disease Research Laboratory (VDRL) test) in a sexually transmitted infections (STI) clinic in India; to find out the possible reasons for delay; to describe the trends in clinical indications for syphilis testing in an STI clinic; to assess the frequency of positive syphilis serology among STI clinic attendees; and to analyse the follow-up rates of VDRL report collection. Materials and Methods Two hundred consecutive VDRL requests received at the serology laboratory of a tertiary care health facility from the STI clinic of the linked hospital were prospectively analysed to evaluate the above parameters. Results For the 200 requests audited, the mean absolute turnaround time of the VDRL test was 7.46±2.81 days. The mean durations of the pre-laboratory, laboratory and post-laboratory phases were 0, 4.69±2.13 and 2.77±2.51 days, respectively. The interval from specimen receipt to performance of tests (mean duration = 4.25±1.96 days) was the major contributor to the long VDRL turnaround time. The common indications for syphilis testing in STI clinic attendees were lower abdominal pain (33%), vaginal discharge (26.5%) and genital ulcer disease (9%); the follow-up rate for report collection was 71%. Conclusion Our study highlights the strong need to shift to alternative testing methods, mainly rapid point-of-care procedures for the serodiagnosis of syphilis, in order to circumvent the problems of long turnaround time and low patient follow-up rates. PMID:26435966

  13. MDR-TB in Puducherry, India: reduction in attrition and turnaround time in the diagnosis and treatment pathway

    PubMed Central

    Govindarajan, S.; Thekkur, P.; Palanivel, C.; Muthaiah, M.; Kumar, A. M. V.; Gupta, V.; Sharath, B. N.; Tripathy, J. P.; Vivekananda, K.; Roy, G.

    2016-01-01

    Setting: A mixed-methods operational research (OR) study was conducted to examine the diagnosis and treatment pathway of patients with presumptive multidrug-resistant tuberculosis (MDR-TB) during 2012–2013 under the national TB programme in Puducherry, India. High pre-diagnosis and pre-treatment attrition and the reasons for these were identified. The recommendations from this OR were implemented and we planned to assess systematically whether there were any improvements. Objectives: Among patients with presumptive MDR-TB (July–December 2014), 1) to determine pre-diagnosis and pre-treatment attrition, 2) to determine factors associated with pre-diagnosis attrition, 3) to determine the turnaround time (TAT) from eligibility to testing and from diagnosis to treatment initiation, and 4) to compare these findings with those of the previous study (2012–2013). Design: This was a retrospective cohort study based on record review. Results: Compared to the previous study, there was a decrease in pre-diagnosis attrition from 45% to 24% (P < 0.001), in pre-treatment attrition from 29% to 0% (P = 0.18), in the TAT from eligibility to testing from a median of 11 days to 10 days (P = 0.89) and in the TAT from diagnosis to treatment initiation from a median of 38 days to 19 days (P = 0.04). There is further scope for reducing pre-diagnosis attrition by addressing the high risk of patients with human immunodeficiency virus and TB co-infection or those with extra-pulmonary TB not undergoing drug susceptibility testing. Conclusion: The implementation of findings from OR resulted in improved programme outcomes. PMID:28123961

  14. MDR-TB in Puducherry, India: reduction in attrition and turnaround time in the diagnosis and treatment pathway.

    PubMed

    Shewade, H D; Govindarajan, S; Thekkur, P; Palanivel, C; Muthaiah, M; Kumar, A M V; Gupta, V; Sharath, B N; Tripathy, J P; Vivekananda, K; Roy, G

    2016-12-21

    Setting: A mixed-methods operational research (OR) study was conducted to examine the diagnosis and treatment pathway of patients with presumptive multidrug-resistant tuberculosis (MDR-TB) during 2012-2013 under the national TB programme in Puducherry, India. High pre-diagnosis and pre-treatment attrition and the reasons for these were identified. The recommendations from this OR were implemented and we planned to assess systematically whether there were any improvements. Objectives: Among patients with presumptive MDR-TB (July-December 2014), 1) to determine pre-diagnosis and pre-treatment attrition, 2) to determine factors associated with pre-diagnosis attrition, 3) to determine the turnaround time (TAT) from eligibility to testing and from diagnosis to treatment initiation, and 4) to compare these findings with those of the previous study (2012-2013). Design: This was a retrospective cohort study based on record review. Results: Compared to the previous study, there was a decrease in pre-diagnosis attrition from 45% to 24% (P < 0.001), in pre-treatment attrition from 29% to 0% (P = 0.18), in the TAT from eligibility to testing from a median of 11 days to 10 days (P = 0.89) and in the TAT from diagnosis to treatment initiation from a median of 38 days to 19 days (P = 0.04). There is further scope for reducing pre-diagnosis attrition by addressing the high risk of patients with human immunodeficiency virus and TB co-infection or those with extra-pulmonary TB not undergoing drug susceptibility testing. Conclusion: The implementation of findings from OR resulted in improved programme outcomes.

  15. Scaling School Turnaround

    ERIC Educational Resources Information Center

    Herman, Rebecca

    2012-01-01

    This article explores the research on turning around low performing schools to summarize what we know, what we don't know, and what this means for scaling school turnaround efforts. "School turnaround" is defined here as quick, dramatic gains in academic achievement for persistently low performing schools. The article first considers the…

  16. Laboratory-based clinical audit as a tool for continual improvement: an example from CSF chemistry turnaround time audit in a South-African teaching hospital

    PubMed Central

    Imoh, Lucius C; Mutale, Mubanga; Parker, Christopher T; Erasmus, Rajiv T; Zemlin, Annalise E

    2016-01-01

    Introduction Timeliness of laboratory results is crucial to patient care and outcome. Monitoring turnaround times (TAT), especially for emergency tests, is important to measure the effectiveness and efficiency of laboratory services. Laboratory-based clinical audits reveal opportunities for improving quality. Our aim was to identify the most critical steps causing a high TAT for cerebrospinal fluid (CSF) chemistry analysis in our laboratory. Materials and methods A 6-month retrospective audit was performed. The duration of each operational phase across the laboratory work flow was examined. A process-mapping audit trail of 60 randomly selected requests with a high TAT was conducted and reasons for high TAT were tested for significance. Results A total of 1505 CSF chemistry requests were analysed. Transport of samples to the laboratory was primarily responsible for the high average TAT (median TAT = 170 minutes). Labelling accounted for most delays within the laboratory (median TAT = 71 minutes) with most delays occurring after regular work hours (P < 0.05). CSF chemistry requests without the appropriate number of CSF sample tubes were significantly associated with delays in movement of samples from the labelling area to the technologist’s work station (caused by a preference for microbiological testing prior to CSF chemistry). Conclusion A laboratory-based clinical audit identified sample transportation, work shift periods and use of inappropriate CSF sample tubes as drivers of high TAT for CSF chemistry in our laboratory. The results of this audit will be used to change pre-analytical practices in our laboratory with the aim of improving TAT and customer satisfaction. PMID:27346964

  17. Turnaround time of positive blood cultures after the introduction of matrix-assisted laser desorption-ionization time-of-flight mass spectrometry.

    PubMed

    Angeletti, Silvia; Dicuonzo, Giordano; D'Agostino, Alfio; Avola, Alessandra; Crea, Francesca; Palazzo, Carlo; Dedej, Etleva; De Florio, Lucia

    2015-07-01

    A comparative evaluation of the turnaround time (TAT) of positive blood cultures before and after the introduction of matrix-assisted laser desorption-ionization time-of-flight mass spectrometry (MALDI-TOF MS) into the laboratory routine was performed. A total of 643 positive blood cultures were collected, 310 before and 333 after the introduction of the MALDI-TOF technique. In the post-MALDI-TOF period, the median blood culture TAT decreased from 73.53 hours to 71.73 for Gram-positives, from 64.09 hours to 63.59 for Gram-negatives, and from 115.7 hours to 47.62 for anaerobes. MALDI-TOF significantly decreased the TAT for anaerobes, for which antimicrobial susceptibility testing is not routinely performed. Furthermore, the major advantage of the MALDI-TOF introduction was the decrease in the time to pathogen identification (TID) regardless of species, with an improvement of 93% for Gram-positives, 86% for Gram-negatives and 95% for anaerobes. In addition, high species-level identification rates and cost savings compared with conventional methods were achieved after the MALDI-TOF introduction.

  18. Time invariant multi-electrode averaging for biomedical signals.

    PubMed

    Orellana, R Martinez; Erem, B; Brooks, D H

    2013-12-31

    One of the biggest challenges in averaging ECG or EEG signals is overcoming temporal misalignments and distortions due to uncertain timing or complex non-stationary dynamics. Standard methods average individual leads over a collection of epochs on a time-sample by time-sample basis, even when multi-electrode signals are available. Here we propose a method that averages multi-electrode recordings simultaneously by using spatial patterns, without relying on time or frequency.

  19. Phase II of a Six sigma Initiative to Study DWPF SME Analytical Turnaround Times: SRNL's Evaluation of Carbonate-Based Dissolution Methods

    SciTech Connect

    Edwards, Thomas

    2005-09-01

    The Analytical Development Section (ADS) and the Statistical Consulting Section (SCS) of the Savannah River National Laboratory (SRNL) are participating in a Six Sigma initiative to improve the Defense Waste Processing Facility (DWPF) Laboratory. The Six Sigma initiative has focused on reducing the analytical turnaround time of samples from the Slurry Mix Evaporator (SME) by developing streamlined sampling and analytical methods [1]. The objective of Phase I was to evaluate the sub-sampling of a larger sample bottle and the performance of a cesium carbonate (Cs2CO3) digestion method. Successful implementation of the Cs2CO3 fusion method in the DWPF would have important time savings and convenience benefits because this single digestion would replace the dual digestion scheme now used. A single digestion scheme would result in more efficient operations in both the DWPF shielded cells and the inductively coupled plasma-atomic emission spectroscopy (ICP-AES) laboratory. By taking a small aliquot of SME slurry from a large sample bottle and dissolving the vitrified SME sample with carbonate fusion methods, an analytical turnaround time reduction from 27 hours to 9 hours could be realized in the DWPF. This analytical scheme has the potential for not only dramatically reducing turnaround times, but also streamlining operations to minimize wear and tear on critical shielded cell components that are prone to fail, including the Hydragard(TM) sampling valves and manipulators. Favorable results from the Phase I tests [2] led to the recommendation for a Phase II effort as outlined in the DWPF Technical Task Request (TTR) [3]. There were three major tasks outlined in the TTR, and SRNL issued a Task Technical and QA Plan [4] with a corresponding set of three major task activities: (1) Compare weight percent (wt%) total solids measurements of large-volume samples versus peanut-vial samples. (2) Evaluate Cs2CO3 and K2CO3

  20. Turnaround Principal Competencies

    ERIC Educational Resources Information Center

    Steiner, Lucy; Barrett, Sharon Kebschull

    2012-01-01

    When the Minneapolis Public Schools first set out to hire turnaround school principals, administrators followed their usual process--which focused largely on reputation and anecdotal support and considered mainly internal candidates. Yet success at the complicated task of turning around the fortunes of a failing school depends on exceptionally…

  1. Identification of blood culture isolates directly from positive blood cultures by use of matrix-assisted laser desorption ionization-time of flight mass spectrometry and a commercial extraction system: analysis of performance, cost, and turnaround time.

    PubMed

    Lagacé-Wiens, Philippe R S; Adam, Heather J; Karlowsky, James A; Nichol, Kimberly A; Pang, Paulette F; Guenther, Jodi; Webb, Amanda A; Miller, Crystal; Alfa, Michelle J

    2012-10-01

    Matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry represents a revolution in the rapid identification of bacterial and fungal pathogens in the clinical microbiology laboratory. Recently, MALDI-TOF has been applied directly to positive blood culture bottles for the rapid identification of pathogens, leading to reductions in turnaround time and potentially beneficial patient impacts. The development of a commercially available extraction kit (Bruker Sepsityper) for use with the Bruker MALDI BioTyper has facilitated the processing required for identification of pathogens directly from positive blood cultures. We report the results of an evaluation of the accuracy, cost, and turnaround time of this method for 61 positive monomicrobial and 2 polymicrobial cultures representing 26 species. The Bruker MALDI BioTyper with the Sepsityper gave a valid (score >1.7) identification for 85.2% of positive blood cultures with no misidentifications. The mean reduction in turnaround time to identification was 34.3 h (P < 0.0001) in the ideal situation where MALDI-TOF was used for all blood cultures, and 26.5 h in a more practical setting where conventional identification or identification from subcultures was required for isolates that could not be directly identified by MALDI-TOF. Implementation of a MALDI-TOF-based identification system for direct identification of pathogens from blood cultures is expected to be associated with a marginal increase in operating costs for most laboratories. However, the use of MALDI-TOF for direct identification is accurate and should result in a reduced turnaround time to identification.

  2. Time domain averaging based on fractional delay filter

    NASA Astrophysics Data System (ADS)

    Wu, Wentao; Lin, Jing; Han, Shaobo; Ding, Xianghui

    2009-07-01

    For rotary machinery, periodic components of signals are often extracted to assess the condition of each rotating part. Time domain averaging is a traditional technique for extracting those periodic components. Originally, a phase reference signal is required to ensure that all averaged segments have the same initial phase. In some cases, however, there is no phase reference, and efficient algorithms are needed to synchronize the segments before averaging. Several existing algorithms perform time domain averaging without a phase reference signal, but they cannot eliminate the phase error completely. Against this background, a new time domain averaging algorithm that theoretically has no phase error is proposed. Its performance is improved by incorporating a fractional delay filter. The efficiency of the proposed algorithm is validated by simulations.
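
    Plain time domain averaging, the baseline this paper refines, can be sketched as follows: cut the signal into segments one period long and average them, so the periodic component is preserved while uncorrelated noise is attenuated by roughly √N. This sketch assumes an integer sample period and synthetic data; the paper's contribution, a fractional delay filter to align segments whose period is a non-integer number of samples, is not reproduced here.

```python
# Sketch of plain time-domain (synchronous) averaging: average consecutive
# period-length segments so the periodic component survives while zero-mean
# noise shrinks by ~1/sqrt(number of segments).
import math, random

def time_domain_average(signal, period):
    """Average consecutive integer-length segments of `signal` of length `period`."""
    n_segments = len(signal) // period
    return [
        sum(signal[k * period + i] for k in range(n_segments)) / n_segments
        for i in range(period)
    ]

# Hypothetical example: a sine of period 50 samples buried in unit-variance noise.
random.seed(0)
period = 50
sig = [math.sin(2 * math.pi * i / period) + random.gauss(0, 1.0) for i in range(50 * period)]
avg = time_domain_average(sig, period)

# The averaged segment is close to one clean sine cycle:
rms_err = math.sqrt(sum((avg[i] - math.sin(2 * math.pi * i / period)) ** 2 for i in range(period)) / period)
print(rms_err < 0.5)  # noise suppressed roughly by sqrt(50)
```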

  3. Time average vibration fringe analysis using Hilbert transformation

    SciTech Connect

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-10-20

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method to the quantitative evaluation of Bessel fringes obtained in time-average TV holography. The method requires only one fringe pattern for the extraction of the vibration amplitude and reduces the complexity of data quantification encountered in the time-average reference-bias-modulation method, which uses multiple fringe frames. The technique is demonstrated by measuring the out-of-plane vibration amplitude of a small-scale specimen using a time-average microscopic TV holography system.

  4. Average Weighted Receiving Time of Weighted Tetrahedron Koch Networks

    NASA Astrophysics Data System (ADS)

    Dai, Meifeng; Zhang, Danping; Ye, Dandan; Zhang, Cheng; Li, Lei

    2015-07-01

    We introduce weighted tetrahedron Koch networks with infinite weight factors, which generalize the finite case. The term weighted time is first defined in this work. The mean weighted first-passage time (MWFPT) and the average weighted receiving time (AWRT) are defined through weighted time accordingly. We study the AWRT for a weight-dependent walk. Results show that the AWRT for a nontrivial weight factor sequence grows sublinearly with the network order. To investigate the origin of this sublinearity, the average receiving time (ART) is discussed for four cases.

  5. Time-average based on scaling law in anomalous diffusions

    NASA Astrophysics Data System (ADS)

    Kim, Hyun-Joo

    2015-05-01

    To resolve the measurement ambiguity brought about by the weak ergodicity breaking that appears in anomalous diffusion, we have suggested the time-averaged mean squared displacement (MSD) δ²(τ), computed with an integration interval that depends linearly on the lag time τ. For the continuous time random walk describing subdiffusive behavior, we have found that this time average scales as τ^γ, like the ensemble-averaged MSD, which makes it possible to measure the proper exponent values through time averages in experiments such as single-molecule tracking. We have also found that this behavior originates from the scaling of the MSD at an aging time in anomalous diffusion, and confirmed it through numerical results for other microscopic non-Markovian models showing subdiffusion and superdiffusion with memory-enhancement origins.
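
    The time-averaged MSD underlying this analysis is, for a single trajectory x(t), the lag-τ squared displacement averaged along the trajectory. A minimal sketch on a synthetic simple random walk (the ordinary ergodic case, where γ = 1 and the time average grows linearly in τ):

```python
# Sketch: the time-averaged MSD along a single trajectory,
#   msd(tau) = < (x(t + tau) - x(t))^2 >  averaged over t.
# For an ordinary random walk this grows linearly in tau; in the subdiffusive
# models of the abstract it scales as tau^gamma with gamma < 1.
import random

def time_averaged_msd(traj, tau):
    """Average squared displacement at lag `tau` along a single trajectory."""
    n = len(traj) - tau
    return sum((traj[t + tau] - traj[t]) ** 2 for t in range(n)) / n

random.seed(1)
x, traj = 0.0, []
for _ in range(200_000):          # simple Brownian-like trajectory (unit steps)
    x += random.choice((-1.0, 1.0))
    traj.append(x)

# Linear scaling (gamma = 1) for a normal random walk: msd(2*tau) ≈ 2*msd(tau)
ratio = time_averaged_msd(traj, 200) / time_averaged_msd(traj, 100)
print(1.6 < ratio < 2.4)  # → True
```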

  6. Modeling an Application's Theoretical Minimum and Average Transactional Response Times

    SciTech Connect

    Paiz, Mary Rose

    2015-04-01

    The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are results of unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
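
    As a rough illustration of where the two thresholds sit, the sketch below replaces the report's models (a non-stationary GEV fit to daily minima, a seasonal ARIMA model for daily averages) with plain empirical quantiles of hypothetical data; it shows only the shape of the approach, not the actual estimation:

```python
# Purely illustrative stand-in for the report's approach: a lower alert
# threshold from the distribution of daily minimum response times, an upper
# alert threshold from daily averages. Empirical quantiles replace the GEV
# and seasonal-ARIMA fits; the data below are made up.
import random

def thresholds(daily_minima, daily_averages, q=0.05):
    """Lower threshold = q-quantile of minima; upper = (1-q)-quantile of averages."""
    lo = sorted(daily_minima)[int(q * len(daily_minima))]
    hi = sorted(daily_averages)[int((1 - q) * len(daily_averages)) - 1]
    return lo, hi

random.seed(2)
minima = [random.uniform(40, 60) for _ in range(100)]     # ms, hypothetical
averages = [random.uniform(90, 130) for _ in range(100)]  # ms, hypothetical
lo, hi = thresholds(minima, averages)
# Transactions finishing below `lo` or averaging above `hi` would be flagged.
print(lo < hi)  # → True
```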

  7. An averaging analysis of discrete-time indirect adaptive control

    NASA Technical Reports Server (NTRS)

    Phillips, Stephen M.; Kosut, Robert L.; Franklin, Gene F.

    1988-01-01

    An averaging analysis of indirect, discrete-time, adaptive control systems is presented. The analysis results in a signal-dependent stability condition and accounts for unmodeled plant dynamics as well as exogenous disturbances. This analysis is applied to two discrete-time adaptive algorithms: an unnormalized gradient algorithm and a recursive least-squares (RLS) algorithm with resetting. Since linearization and averaging are used for the gradient analysis, a local stability result valid for small adaptation gains is found. For RLS with resetting, the assumption is that there is a long time between resets. The results for the two algorithms are virtually identical, emphasizing their similarities in adaptive control.

  8. Assessing School Turnaround: Evidence from Ohio

    ERIC Educational Resources Information Center

    Player, Daniel; Katz, Veronica

    2016-01-01

    Policy makers have struggled to find successful approaches to address concentrated, persistent low school achievement. While NCLB and the School Improvement Grant (SIG) program have devoted significant time and attention to turnaround, very little empirical evidence substantiates whether and how these efforts work. This study employs a comparative…

  9. Average waiting time in FDDI networks with local priorities

    NASA Technical Reports Server (NTRS)

    Gercek, Gokhan

    1994-01-01

    A method is introduced to compute the average queuing delay experienced by messages of different priority groups in an FDDI node. It is assumed that no FDDI MAC-layer priorities are used; instead, a priority structure is imposed on the messages locally, at a higher protocol layer (e.g. the network layer). Such a method was planned for use in the Space Station Freedom FDDI network. Conservation of the average waiting time is used as the key concept in computing average queuing delays. It is shown that local priority assignments are feasible, especially when the traffic distribution in the FDDI network is asymmetric.
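The conservation concept can be made concrete with a textbook case (an illustration, not the paper's FDDI analysis): in a two-class non-preemptive priority M/M/1 queue, Kleinrock's conservation law says the utilization-weighted sum of the mean waits is invariant however the local priorities are assigned.

```python
def priority_waits(lam, mu):
    """Mean waits (W1, W2) in a non-preemptive two-class M/M/1 priority
    queue with common service rate mu; class 1 has the higher priority."""
    rho1, rho2 = lam[0] / mu, lam[1] / mu
    rho = rho1 + rho2
    W0 = sum(l * (2 / mu**2) for l in lam) / 2   # mean residual service time
    W1 = W0 / (1 - rho1)
    W2 = W0 / ((1 - rho1) * (1 - rho))
    return W1, W2, rho1, rho2, W0, rho

lam, mu = (0.3, 0.4), 1.0
W1, W2, rho1, rho2, W0, rho = priority_waits(lam, mu)
invariant = rho1 * W1 + rho2 * W2          # = rho * W0 / (1 - rho)

# Swap the priority assignment: the individual waits change, but the
# utilization-weighted sum stays the same (conservation of waiting time).
V1, V2, r1, r2, _, _ = priority_waits((lam[1], lam[0]), mu)
swapped = r1 * V1 + r2 * V2
```

This is the style of argument the abstract leverages: priorities redistribute delay among classes without changing the conserved total.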

  10. Instantaneous, phase-averaged, and time-averaged pressure from particle image velocimetry

    NASA Astrophysics Data System (ADS)

    de Kat, Roeland

    2015-11-01

    Recent work on pressure determination using velocity data from particle image velocimetry (PIV) resulted in approaches that allow for instantaneous and volumetric pressure determination. However, applying these approaches is not always feasible (e.g. due to resolution, access, or other constraints) or desired. In those cases pressure determination approaches using phase-averaged or time-averaged velocity provide an alternative. To assess the performance of these different pressure determination approaches against one another, they are applied to a single data set and their results are compared with each other and with surface pressure measurements. For this assessment, the data set of a flow around a square cylinder (de Kat & van Oudheusden, 2012, Exp. Fluids 52:1089-1106) is used. RdK is supported by a Leverhulme Trust Early Career Fellowship.

  11. Determining average path length and average trapping time on generalized dual dendrimer

    NASA Astrophysics Data System (ADS)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. We also discuss the influence of the coordination number on trapping efficiency.
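The APL itself is straightforward to compute exactly on small instances. The sketch below uses a generic BFS (not the paper's analytic derivation) on a toy cactus-like graph of two triangles sharing a vertex; the example graph is illustrative only.

```python
from collections import deque

def average_path_length(adj):
    """Average shortest-path length over all node pairs of an unweighted,
    connected graph given as an adjacency dict {node: [neighbors]}."""
    nodes = list(adj)
    total = pairs = 0
    for src in nodes:
        dist = {src: 0}
        q = deque([src])
        while q:                      # breadth-first search from src
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())   # distances from src to all others
        pairs += len(nodes) - 1
    return total / pairs

# Two triangles sharing node 2, a minimal Husimi-cactus-like example
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
apl = average_path_length(adj)   # -> 1.4
```

On the generalized dual dendrimer the paper derives closed forms instead, and logarithmic APL growth is what BFS results across increasing sizes would exhibit.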

  12. Oral biopsy turnaround time: 20-year experience of the Department of Oral Pathology, Oral Medicine and Periodontology, Faculty of Dentistry, University of Malaya.

    PubMed

    Siar, C H; Tan, B H

    2000-12-01

    The turnaround time (TAT) for oral biopsies received for histological examination by the Department of Oral Pathology, Oral Medicine and Periodontology, Faculty of Dentistry, University of Malaya, for the years 1978, 1988 and 1998 was evaluated. For the three years studied, TATs for 61, 233 and 463 specimens were retrospectively analysed. Testing intervals, from the dates the surgeons procured the specimens and the laboratories accessioned them until the pathologists signed off the diagnoses, were used to calculate TAT. The performance level of the respective pathologists, the growth of tissue diagnostic services and the possible variables that influence TAT were also evaluated. As prompt diagnosis means prompt treatment, which in turn has a bearing on prognosis, the TAT pertinent to oral malignant tumors was emphasized. The mean TAT, its mode and median fell significantly in 1998 compared with the previous two years; it was lower for soft tissue than for hard tissue specimens, and lower for malignant than for non-malignant specimens. The progression of tissue diagnostic services has reached a satisfactory level, as 88.89% of biopsies could be diagnosed within a fair period of time in 1998.

  13. Time-average TV holography for vibration fringe analysis

    SciTech Connect

    Kumar, Upputuri Paul; Kalyani, Yanam; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2009-06-01

    Time-average TV holography is a widely used method for vibration measurement. The method generates speckle-correlation time-averaged J0 fringes that can be used for full-field qualitative visualization of mode shapes at resonant frequencies of an object under harmonic excitation. In order to map the amplitudes of vibration, quantitative evaluation of the time-averaged fringe pattern is desired. A quantitative evaluation procedure based on the phase-shifting technique used in two-beam interferometry has been adopted for this application with some modification. The existing procedure requires a large number of frames to be recorded for implementation. We propose a procedure that reduces the number of frames required for the analysis. The TV holographic system used and the experimental results obtained with it on an edge-clamped, sinusoidally excited square aluminium plate sample are discussed.
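The J0 fringe function is easy to reproduce. The sketch below assumes an out-of-plane sensitivity factor of 4π/λ (the actual factor depends on the optical geometry) and shows why dark fringes map vibration amplitude: they fall at the zeros of the Bessel function J0.

```python
import numpy as np
from scipy.special import j0, jn_zeros

# Time-averaged fringe intensity for a harmonically vibrating surface:
#   I(a) ∝ J0^2(k_s * a),
# where a is the local vibration amplitude and k_s the sensitivity
# factor (4*pi/lambda assumed here for out-of-plane motion).
lam = 532e-9                          # illustrative laser wavelength (m)
k_s = 4 * np.pi / lam

first_zero = jn_zeros(0, 1)[0]        # first zero of J0, ~2.4048
a_first_dark = first_zero / k_s       # amplitude at the first dark fringe

fringe = lambda a: j0(k_s * a) ** 2   # normalized fringe-intensity profile
```

Counting dark fringes outward from a node therefore gives a coarse amplitude map; the phase-shifted procedures in the paper recover amplitude continuously between the zeros.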

  14. Human Capital in Turnaround Schools

    ERIC Educational Resources Information Center

    Ferris, Kristen

    2012-01-01

    Finding, keeping and supporting great educators presents the single biggest challenge to successful school turnarounds. Without teachers and administrators who bring the needed combination of skills and passion, nothing else will achieve the desired effect. The turnaround model supported by the U.S. Department of Education School Improvement Grant…

  15. Off the Clock: What More Time Can (and Can't) Do for School Turnarounds. Education Sector Reports

    ERIC Educational Resources Information Center

    Silva, Elena

    2012-01-01

    If less time in the classroom is a cause of poor student performance, can adding more time be the cure? This strategy underlies a major effort to fix the nation's worst public schools. Billions of federal stimulus dollars are being spent to expand learning time on behalf of disadvantaged children. And extended learning time (ELT) is being proposed…

  16. H∞ control of switched delayed systems with average dwell time

    NASA Astrophysics Data System (ADS)

    Li, Zhicheng; Gao, Huijun; Agarwal, Ramesh; Kaynak, Okyay

    2013-12-01

    This paper considers the problems of stability analysis and H∞ controller design of time-delay switched systems with average dwell time. In order to obtain less conservative results than what is seen in the literature, a tighter bound for the state delay term is estimated. Based on the scaled small gain theorem and the model transformation method, an improved exponential stability criterion for time-delay switched systems with average dwell time is formulated in the form of convex matrix inequalities. The aim of the proposed approach is to reduce the minimal average dwell time of the systems, which is made possible by a new Lyapunov-Krasovskii functional combined with the scaled small gain theorem. It is shown that this approach is able to tolerate a smaller dwell time or a larger admissible delay bound for the given conditions than most of the approaches seen in the literature. Moreover, the exponential H∞ controller can be constructed by solving a set of conditions, which is developed on the basis of the exponential stability criterion. Simulation examples illustrate the effectiveness of the proposed method.

  17. Transforming Turnaround Schools in China: A Review

    ERIC Educational Resources Information Center

    Liu, Peng

    2017-01-01

    This article reviews the literature on how Chinese turnaround schools are improved in practice. It starts by defining turnaround schools in the Chinese context, and then discusses the essential reasons why such schools exist. Approaches to improving turnaround schools, successful experiences of transforming turnaround schools, and the challenges…

  18. National survey on intra-laboratory turnaround time for some most common routine and stat laboratory analyses in 479 laboratories in China

    PubMed Central

    Fei, Yang; Zeng, Rong; Wang, Wei; He, Falin; Zhong, Kun

    2015-01-01

    Introduction: To investigate the state of the art of intra-laboratory turnaround time (intra-TAT), provide suggestions, and find out whether laboratories accredited by the International Organization for Standardization (ISO) 15189 or the College of American Pathologists (CAP) show better performance on intra-TAT than non-accredited ones. Materials and methods: 479 Chinese clinical laboratories participating in the external quality assessment programmes for chemistry, blood gas, and haematology tests organized by the National Centre for Clinical Laboratories in China were included in our study. General information and the median intra-TAT of routine and stat tests over the preceding week were collected through questionnaires. Results: The response rates for clinical biochemistry, blood gas, and haematology testing were 36% (479/1307), 38% (228/598), and 36% (449/1250), respectively. More than 50% of laboratories indicated that they had set intra-TAT median goals, and almost 60% declared that they monitored intra-TAT for every analyte they performed. Among all analytes we investigated, the intra-TAT of haematology analytes was shorter than that of biochemistry analytes, while the intra-TAT of blood gas analytes was the shortest. There were significant differences between median intra-TAT on different days of the week for routine tests. However, there were no significant differences in median intra-TAT reported by accredited and non-accredited laboratories. Conclusions: Many laboratories in China are aware of intra-TAT control and are making an effort to reach the target, but there is still room for improvement. Accredited laboratories are better at intra-TAT monitoring and target setting than non-accredited ones, but there are no significant differences in the median intra-TAT they report. PMID:26110033

  19. Average waiting time profiles of uniform DQDB model

    SciTech Connect

    Rao, N.S.V.; Maly, K.; Olariu, S.; Dharanikota, S.; Zhang, L.; Game, D.

    1993-09-07

    The Distributed Queue Dual Bus (DQDB) system consists of a linear arrangement of N nodes that communicate with each other using two contra-flowing buses; the nodes use an extremely simple protocol to send messages on these buses. This simple, but elegant, system has been found to be very challenging to analyze. We consider a simple and uniform abstraction of this model to highlight the fairness issues in terms of average waiting time. We introduce a new approximation method to analyze the performance of the DQDB system in terms of the average waiting time of a node expressed as a function of its position. Our approach abstracts the intimate relationship between the load of the system and its fairness characteristics, and explains all basic behavior profiles of DQDB observed in previous simulations. For the uniform DQDB with equal distances between adjacent nodes, we show that the system operates under three basic behavior profiles, and a finite number of their combinations, that depend on the load of the network. Consequently, the system is not fair at any load in terms of the average waiting times. In the vicinity of a critical load of 1 − 4/N, the uniform network runs into a state akin to chaos, where its behavior fluctuates from one extreme to the other with a load variation of 2/N. Our analysis is supported by simulation results. We also show that the main theme of the analysis carries over to the general (non-uniform) DQDB; by suitably choosing the inter-node distances, the DQDB can be made fair around some loads, but such a system becomes unfair as the load changes.

  20. Recent advances in phase shifted time averaging and stroboscopic interferometry

    NASA Astrophysics Data System (ADS)

    Styk, Adam; Józwik, Michał

    2016-08-01

    Classical time averaging and stroboscopic interferometry are widely used for MEMS/MOEMS dynamic behavior investigations. Unfortunately, both methods require extensive measurement and data-processing strategies in order to evaluate the maximum amplitude of the vibrating object at a given load. In this paper, modified data-processing strategies for both techniques are introduced. These modifications allow fast and reliable calculation of the sought value without additional complication of the measurement systems. Both approaches are discussed and experimentally verified.

  1. The average rate of change for continuous time models.

    PubMed

    Kelley, Ken

    2009-05-01

    The average rate of change (ARC) is a concept that has been misunderstood in the applied longitudinal data analysis literature, where the slope from the straight-line change model is often thought of as though it were the ARC. The present article clarifies the concept of ARC and shows unequivocally the mathematical definition and meaning of ARC when measurement is continuous across time. It is shown that the slope from the straight-line change model generally is not equal to the ARC. General equations are presented for two measures of discrepancy when the slope from the straight-line change model is used to estimate the ARC in the case of continuous time for any model linear in its parameters, and for three useful models nonlinear in their parameters.
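The discrepancy the article describes is easy to see numerically. For f(t) = t³ on [0, 1] (an illustrative function, not one of the article's models), the ARC is exactly (f(1) − f(0))/(1 − 0) = 1, while the continuous-time least-squares slope works out to 0.9; dense sampling approximates the continuous-time integrals.

```python
import numpy as np

# Dense grid approximating continuous time on [0, 1]
t = np.linspace(0.0, 1.0, 200_001)
f = t ** 3

# Average rate of change: total change divided by elapsed time
arc = (f[-1] - f[0]) / (t[-1] - t[0])    # = 1.0 exactly

# Straight-line-change-model slope: ordinary least squares on (t, f).
# In continuous time this is ∫(t - t̄)(f - f̄)dt / ∫(t - t̄)²dt = 0.9.
slope = np.polyfit(t, f, 1)[0]           # -> ~0.9, not the ARC
```

The gap (here 0.1) is exactly the kind of discrepancy the article's general equations quantify for models nonlinear in time.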

  2. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-hour arithmetic averages into appropriate averaging times and units? (a) Use the equation in § 60.1935.... If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in... Reference Method 19 in appendix A of this part, section 4.1, to calculate the daily arithmetic average...

  3. Transmitter-receiver system for time average fourier telescopy

    NASA Astrophysics Data System (ADS)

    Pava, Diego Fernando

    Time Average Fourier Telescopy (TAFT) has been proposed as a means for obtaining high-resolution, diffraction-limited images over large distances through ground-level horizontal-path atmospheric turbulence. Image data are collected in the spatial-frequency, or Fourier, domain by means of Fourier telescopy; an inverse two-dimensional Fourier transform yields the actual image. TAFT requires active illumination of the distant object by moving interference fringe patterns. Light reflected from the object is collected by a "light-bucket" detector, and the resulting electrical signal is digitized and subjected to a series of signal-processing operations, including an all-critical averaging of the amplitude and phase of a number of narrow-band signals. This dissertation reports on the formulation and analysis of a transmitter-receiver system appropriate for the illumination, signal detection, and signal processing required for successful application of the TAFT concept. The analysis assumes a Kolmogorov model for the atmospheric turbulence, that the object is rough on the scale of the optical wavelength of the illumination pattern, and that the object does not change with time during the image-formation interval. An important original contribution of this work is the development of design principles for spatio-temporal non-redundant arrays of active sources for object illumination. Spatial non-redundancy has received considerable attention in connection with the arrays of antennas used in radio astronomy. The work reported here explores different alternatives and suggests the use of two-dimensional cyclic difference sets, which favor low frequencies in the spatial-frequency domain. The temporal non-redundancy condition requires that all active sources oscillate at different optical frequencies and that the frequency difference between any two sources be unique. A novel algorithm for generating the array, based on optimized perfect cyclic difference sets, is described.
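The non-redundancy property of a cyclic difference set can be illustrated in one dimension (the dissertation uses two-dimensional sets; the 1-D planar set below is a simplified stand-in): in a perfect difference set, every spacing between source positions occurs exactly once, so each spatial frequency is probed by a unique source pair.

```python
def is_perfect_difference_set(s, n):
    """True if every nonzero residue mod n occurs exactly once as a
    difference of two distinct elements of s (a perfect cyclic
    difference set)."""
    diffs = sorted((a - b) % n for a in s for b in s if a != b)
    return diffs == list(range(1, n))

# The classic (7, 3, 1) planar difference set {0, 1, 3} mod 7:
# its six ordered differences cover 1..6 exactly once.
nonredundant = is_perfect_difference_set([0, 1, 3], 7)   # True

# A uniformly spaced array is redundant: spacing 1 occurs twice.
redundant = is_perfect_difference_set([0, 1, 2], 7)      # False
```

The temporal condition is the analogous statement for source frequencies: all pairwise frequency differences must be distinct so the narrow-band beat signals can be separated.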

  4. Making Sense of School Turnarounds

    ERIC Educational Resources Information Center

    Hess, Frederick M.

    2012-01-01

    Today, in a sector flooded with $3.5 billion in School Improvement Grant funds and the resulting improvement plans, there's great faith that "turnaround" strategies are a promising way to tackle stubborn problems with persistently low-performing schools. Unlike traditional reform efforts, with their emphasis on incremental improvement, turnarounds…

  5. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... part, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions... potential sulfur dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use EPA... SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste Combustion Units Constructed...

  6. Combining Quick-Turnaround and Batch Workloads at Scale

    NASA Technical Reports Server (NTRS)

    Matthews, Gregory A.

    2012-01-01

    NAS uses PBS Professional to schedule and manage the workload on Pleiades, an 11,000+ node InfiniBand cluster. At this scale the user experience for quick-turnaround jobs can degrade, which led NAS initially to set up two separate PBS servers, each dedicated to a particular workload. Recently we have employed PBS hooks and scheduler modifications to merge these workloads together under one PBS server, delivering sub-1-minute start times for the quick-turnaround workload, and enabling dynamic management of the resources set aside for that workload.

  7. Series Overview. Sustaining School Turnaround at Scale. Brief 1

    ERIC Educational Resources Information Center

    Education Resource Strategies, 2012

    2012-01-01

    Members of the non-profit organization Education Resource Strategies (ERS) have worked for over a decade with leaders of urban school systems to help them organize talent, time and technology to support great schools at scale. One year into the Federal program they are noticing significant differences in district turnaround approaches, engagement…

  8. Progress Report 2013. Turnaround Arts Initiative

    ERIC Educational Resources Information Center

    Stoelinga, Sara Ray; Joyce, Katie; Silk, Yael

    2013-01-01

    This interim progress report provides a look at Turnaround Arts schools in their first year, including: (1) a summary of the evaluation design and research questions; (2) a preliminary description of strategies used to introduce the arts in Turnaround Arts schools; and (3) a summary of school reform indicators and student achievement data at…

  9. Turnaround Arts Initiative: Summary of Key Findings

    ERIC Educational Resources Information Center

    Stoelinga, Sara Ray; Silk, Yael; Reddy, Prateek; Rahman, Nadiv

    2015-01-01

    Turnaround Arts is a public-private partnership that aims to test the hypothesis that strategically implementing high-quality and integrated arts education programming in high-poverty, chronically underperforming schools adds significant value to school-wide reform. In 2014, the Turnaround Arts initiative completed an evaluation report covering…

  10. School Turnarounds: The Essential Role of Districts

    ERIC Educational Resources Information Center

    Zavadsky, Heather

    2012-01-01

    The inspiration for this book was a crucial observation: that if the school turnaround movement is to have widespread and lasting consequences, it will need to incorporate meaningful district involvement in its efforts. The result is a volume that considers school turnaround efforts at the district level, examining the evidence thus far and…

  11. Time-Averaged and Time-Dependent Computations of Isothermal Flowfields in a Centerbody Combustor.

    DTIC Science & Technology

    1984-12-01

    7.00 cm. The mesh is constructed with variable step sizes, Δx and Δr, to provide a finer mesh at the near-wall regions of the centerbody.

  12. 40 CFR 60.2943 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.2975 to calculate emissions at 7 percent oxygen. (b) Use Equation 2 in § 60.2975 to calculate the 12-hour rolling averages...

  13. Sustainment of Fine Particle Cloud by Means of Time-Averaged Particle Driving Force in Plasmas

    SciTech Connect

    Gohda, Takuma; Iizuka, Satoru

    2008-09-07

    We have succeeded in sustaining a fine particle cloud by using a time-averaged particle driving (TAPD) method in an RF discharge plasma. The particles feel only the time-averaged force when the period of the pulses applied to the point-electrodes is shorter than the particle response time. The particles are transported to the midpoint between the two point-electrodes.

  14. The consequences of time averaging for measuring temporal species turnover in the fossil record

    NASA Astrophysics Data System (ADS)

    Tomašových, Adam; Kidwell, Susan

    2010-05-01

    Modeling time-averaging effects with simple simulations allows us to evaluate the magnitude of change in temporal species turnover that is expected to occur in long (paleoecological) time series of fossil assemblages. Distinguishing different modes of metacommunity dynamics (such as neutral, density-dependent, or trade-off dynamics) from time-averaged fossil assemblages requires scaling up time-averaging effects, because the decrease in temporal resolution and the decrease in temporal inter-sample separation (i.e., the two main effects of time averaging) substantially increase community stability relative to assemblages without, or with weak, time averaging. Large changes in temporal scale, covering centuries to millennia, can have unprecedented effects on the temporal rate of change in species composition. Temporal variation in species composition monotonically decreases with increasing duration of time averaging in simulated fossil assemblages. Time averaging is also associated with reduced species dominance, owing to temporal switching in the identity of the dominant species. High degrees of time averaging can cause the community parameters of local fossil assemblages to converge to those of the metacommunity rather than to those of individual local non-averaged communities. We find that the low variation in species composition observed among mollusk and ostracod subfossil assemblages can be explained by time averaging alone; low temporal resolution and reduced temporal separation among assemblages in time series can thus explain a substantial part of the reduced variation in species composition relative to the unscaled predictions of the neutral model (i.e., species do not differ in birth, death, and immigration rates on a per capita basis). The structure of time-averaged assemblages can thus provide important insights into processes that act over larger temporal scales, such as evolution of niches and dispersal, range-limit dynamics, taxon cycles, and
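The stabilizing effect of pooling can be demonstrated with a toy model (a stationary stochastic community, not the authors' metacommunity simulations): averaging w consecutive censuses sharply reduces the apparent turnover between successive assemblages.

```python
import numpy as np

rng = np.random.default_rng(2)

# 10,000 annual censuses of 20 species fluctuating around fixed means
# (illustrative parameters throughout)
n_census, n_species, w = 10_000, 20, 50
base = rng.uniform(1.0, 10.0, size=n_species)
census = base + rng.normal(size=(n_census, n_species))

# Turnover proxy: mean absolute compositional change between
# successive samples
raw_turnover = np.mean(np.abs(np.diff(census, axis=0)))

# Time-averaged "fossil assemblages": pool w consecutive censuses each
pooled = census.reshape(n_census // w, w, n_species).mean(axis=1)
pooled_turnover = np.mean(np.abs(np.diff(pooled, axis=0)))

# pooled_turnover << raw_turnover: time averaging damps apparent
# turnover, mimicking the increased community stability the abstract
# attributes to low temporal resolution.
```

Scaling a neutral-model prediction to a time-averaged record amounts to comparing it against `pooled`-style statistics rather than `census`-style ones.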

  15. 40 CFR 60.1265 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... averages into the appropriate averaging times and units? 60.1265 Section 60.1265 Protection of Environment... averaging times and units? (a) Use the equation in § 60.1460(a) to calculate emissions at 7 percent oxygen. (b) Use EPA Reference Method 19 in appendix A of this part, section 4.3, to calculate the...

  16. The Impact of Turnaround Reform on Student Outcomes: Evidence and Insights from the Los Angeles Unified School District

    ERIC Educational Resources Information Center

    Strunk, Katharine O.; Marsh, Julie A.; Hashim, Ayesha K.; Bush-Mecenas, Susan; Weinstein, Tracey

    2016-01-01

    We examine the Los Angeles Unified School District's Public School Choice Initiative (PSCI), which sought to turnaround the district's lowest-performing schools. We ask whether school turnaround impacted student outcomes, and what explains variations in outcomes across reform cohorts. We use a Comparative Interrupted Time Series approach using…

  17. 40 CFR 60.3042 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.3076 to calculate emissions at 7 percent oxygen. (b) Use Equation 2 in § 60.3076 to calculate the 12-hour...

  18. Testing the recession theory as an explanation for the migration turnaround.

    PubMed

    Kontuly, T; Bierens, H J

    1990-02-01

    "In this paper the so-called recession theory explanation for the decline of net migration to large metropolitan core areas of industrialized countries is tested with an econometric time-series model. In the explanation it is contended that the migration turnaround represents only a temporary fluctuation in the general trend of urban economic and demographic spatial concentration, caused by the business cycle downturns of the 1970s. Our results show that the migration turnaround cannot be attributed exclusively to these business cycle fluctuations. For many of the countries tested, the business cycle operated simultaneously with other factors suggested as explanations for the turnaround. We conclude that several explanations should be combined to build a theory of the migration turnaround."

  19. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... or Before August 30, 1999 Model Rule-Continuous Emission Monitoring § 60.1755 How do I convert my 1.... If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method 19 in appendix A of this part, section 5.4, to determine the daily geometric average percent reduction...

  20. Prospective evaluation of the VITEK MS for the routine identification of bacteria and yeast in the clinical microbiology laboratory: assessment of accuracy of identification and turnaround time.

    PubMed

    Charnot-Katsikas, Angella; Tesic, Vera; Boonlayangoor, Sue; Bethel, Cindy; Frank, Karen M

    2014-02-01

    This study assessed the accuracy of bacterial and yeast identification using the VITEK MS, and the time to reporting of isolates before and after its implementation in routine clinical practice. Three hundred and sixty-two isolates of bacteria and yeast, consisting of a variety of clinical isolates and American Type Culture Collection strains, were tested. Results were compared with reference identifications from the VITEK 2 system and with 16S rRNA sequence analysis. The VITEK MS provided an acceptable identification to species level for 283 (78 %) isolates. Considering organisms for which genus-level identification is acceptable for routine clinical care, 315 isolates (87 %) had an acceptable identification. Six isolates (2 %) were identified incorrectly, five of which were Shigella species. Finally, the time for reporting the identifications was decreased significantly after implementation of the VITEK MS for a total mean reduction in time of 10.52 h (P<0.0001). Overall, accuracy of the VITEK MS was comparable or superior to that from the VITEK 2. The findings were also comparable to other studies examining the accuracy of the VITEK MS, although differences exist, depending on the diversity of species represented as well as on the versions of the databases used. The VITEK MS can be incorporated effectively into routine use in a clinical microbiology laboratory and future expansion of the database should provide improved accuracy for the identification of micro-organisms.

  1. A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, Thomas L.

    2003-01-01

    A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics, designed to capture this property, predicts power-law scaling behavior for the second-moment statistics of area-averaged rain rate as the averaging length scale L → 0. In the present work a more efficient method of estimating the model parameters is presented and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.

  2. Appropriateness of selecting different averaging times for modelling chronic and acute exposure to environmental odours

    NASA Astrophysics Data System (ADS)

    Drew, G. H.; Smith, R.; Gerard, V.; Burge, C.; Lowe, M.; Kinnersley, R.; Sneath, R.; Longhurst, P. J.

    Odour emissions are episodic, characterised by periods of high emission rates interspersed with periods of low emissions. It is frequently the short-term, high-concentration peaks that result in annoyance in the surrounding population. Dispersion modelling is accepted as a useful tool for odour impact assessment, and two approaches can be adopted. The first approach, modelling the hourly average concentration, can underestimate odour concentration peaks, resulting in annoyance and complaints. The second approach involves the use of short averaging times. This study assesses the appropriateness of using different averaging times to model the dispersion of odour from a landfill site. We also examine the perception of odour in the community in conjunction with the modelled odour dispersal, using community monitors to record incidents of odour. The results show that with the shorter averaging times, the modelled pattern of dispersal reflects the pattern of observed odour incidents recorded in the community monitoring database, with the modelled odour dispersing further in a north-easterly direction. The current regulatory method of dispersion modelling, using hourly averaging times, is thus less successful at capturing peak concentrations, and does not capture the pattern of odour emission indicated by the community monitoring database. The use of short averaging times is therefore of greater value in predicting the likely nuisance impact of an odour source and in framing appropriate regulatory controls.

  3. Estimating the average time for inter-continental transport of air pollutants

    NASA Astrophysics Data System (ADS)

    Liu, Junfeng; Mauzerall, Denise L.

    2005-06-01

    We estimate the average time required for inter-continental transport of atmospheric tracers based on simulations with the global chemical tracer model MOZART-2 driven with NCEP meteorology. We represent the average transport time by a ratio of the concentration of two tracers with different lifetimes. We find that average transport times increase with tracer lifetimes. With tracers of 1- and 2-week lifetimes the average transport time from East Asia (EA) to the surface of western North America (NA) in April is 2-3 weeks, approximately a half week longer than transport from NA to western Europe (EU) and from EU to EA. We develop an 'equivalent circulation' method to estimate a timescale which has little dependence on tracer lifetimes and obtain similar results to those obtained with short-lived tracers. Our findings show that average inter-continental transport times, even for tracers with short lifetimes, are on average 1-2 weeks longer than rapid transport observed in plumes.
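The tracer-ratio idea can be sketched with a simple idealization. Assuming both tracers are emitted in a fixed ratio at the source and decay exponentially with lifetimes tau1 and tau2 (a toy stand-in for the MOZART-2 tracers, not the authors' code), the transport time follows from the decline of their concentration ratio:

```python
import math

def transport_time(ratio_source, ratio_receptor, tau1, tau2):
    """Infer an average transport time from the ratio of two exponentially
    decaying tracers with lifetimes tau1 and tau2 (same units as the result).

    Solves ratio_receptor = ratio_source * exp(-t/tau1 + t/tau2) for t.
    """
    return math.log(ratio_source / ratio_receptor) / (1.0 / tau1 - 1.0 / tau2)

# Consistency check: 1- and 2-week tracers transported for 18 days
tau1, tau2, t_true = 7.0, 14.0, 18.0
r = 1.0 * math.exp(-t_true / tau1) / math.exp(-t_true / tau2)
print(transport_time(1.0, r, tau1, tau2))  # recovers ~18.0
```

Because the ratio cancels the common emission history, this estimate is insensitive to source strength, which is what makes the two-tracer construction attractive.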

  4. Turnaround Arts Initiative: Final Evaluation Report

    ERIC Educational Resources Information Center

    Stoelinga, Sara Ray; Silk, Yael; Reddy, Prateek; Rahman, Nadiv

    2015-01-01

    The President's Committee on the Arts and the Humanities (PCAH) released the results of an independent study that shows substantial gains in student achievement at schools participating in its Turnaround Arts initiative. The eight schools in the pilot phase of the initiative showed increases in reading and math scores, as well as an increase in…

  5. Turnarounds to Transfer: Design beyond the Modes

    ERIC Educational Resources Information Center

    Eddy, Jennifer

    2014-01-01

    In "Turnarounds to Transfer," teachers design a collection of tasks toward the summative performance goal but go beyond the Communicative mode criteria: they must assess for transfer. Transfer design criteria must include a complexity or variation that makes learners engage critical thinking skills and call upon a repertoire of knowledge…

  6. Textiles, Tariffs, and Turnarounds: Profits Improved.

    ERIC Educational Resources Information Center

    Aronoff, Craig

    1986-01-01

    The U.S. textile industry may serve as a classic study on regeneration through market forces. The industry has recently made a turnaround in profits after having been recognized as an industry that was losing most of its profits to overseas producers. The reason for the emerging strength of the industry is that it began to innovate after a…

  7. School Turnaround: Cristo Rey Boston High School Case Study

    ERIC Educational Resources Information Center

    Thielman, Jeff

    2012-01-01

    The mandates of the federal No Child Left Behind Law, including the threat of closing a school for underperformance, have led to multiple public school turnaround attempts. Because turnaround is a relatively new area of focus in education, there is limited research on what does and does not work, and even the definition of turnaround is a work in…

  8. Neural Networks Used to Compare Designed and Measured Time-Average Patterns

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.

    1999-01-01

    Electronic time-average holograms are convenient for comparing the measured vibration modes of fan blades with those calculated by finite-element models. At the NASA Lewis Research Center, neural networks recently were trained to perform what had been a simple visual comparison of the predictions of the design models with the measurements. Finite-element models were used to train neural networks to recognize damage and strain information encoded in subtle changes in the time-average patterns of cantilevers. But the design-grade finite element models were unable to train the neural networks to detect damage in complex blade shapes. The design-model-generated patterns simply did not agree well enough with the measured patterns. Instead, hybrid-training records, with measured time-average patterns as the input and model-generated strain information as the output, were used to effect successful training.

  9. Time domain averaging and correlation-based improved spectrum sensing method for cognitive radio

    NASA Astrophysics Data System (ADS)

    Li, Shenghong; Bi, Guoan

    2014-12-01

    Based on the combination of time domain averaging and correlation, we propose an effective time domain averaging and correlation-based spectrum sensing (TDA-C-SS) method used in very low signal-to-noise ratio (SNR) environments. With the assumption that the received signals from the primary users are deterministic, the proposed TDA-C-SS method processes the received samples by a time averaging operation to improve the SNR. Correlation operation is then performed with a correlation matrix to determine the existence of the primary signal in the received samples. The TDA-C-SS method does not need any prior information on the received samples and the associated noise power to achieve improved sensing performance. Simulation results are presented to show the effectiveness of the proposed TDA-C-SS method.
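The SNR gain from the time-domain-averaging stage can be illustrated with a toy example (not the authors' TDA-C-SS algorithm): if the primary-user waveform is deterministic and repeats, averaging K received segments reduces the noise variance by a factor of K, after which a correlation statistic against a template becomes reliable even at very low per-sample SNR. The template, noise level, and segment count below are all invented for illustration.

```python
import math
import random

# Toy illustration: the primary-user signal is assumed deterministic
# and repeats every 64 samples.
rng = random.Random(0)
template = [math.sin(2 * math.pi * i / 16) for i in range(64)]
K = 100  # number of averaged segments
segments = [[s + rng.gauss(0.0, 2.0) for s in template] for _ in range(K)]

# Averaging K segments cuts the noise variance by a factor of K
averaged = [sum(seg[i] for seg in segments) / K for i in range(64)]

# Simple correlation statistic against the known template
num = sum(a * t for a, t in zip(averaged, template))
den = math.sqrt(sum(a * a for a in averaged) * sum(t * t for t in template))
rho = num / den
print(round(rho, 3))  # close to 1 despite a per-sample SNR of about -9 dB
```

With signal power 0.5 and noise variance 4, the raw per-sample SNR is about -9 dB, yet the averaged waveform correlates strongly with the template, which is the effect the TDA-C-SS method exploits.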

  10. Time averaged transmitter power and exposure to electromagnetic fields from mobile phone base stations.

    PubMed

    Bürgi, Alfred; Scanferla, Damiano; Lehmann, Hugo

    2014-08-07

    Models for exposure assessment of high frequency electromagnetic fields from mobile phone base stations need the technical data of the base stations as input. One of these parameters, the Equivalent Radiated Power (ERP), is a time-varying quantity, depending on communication traffic. In order to determine temporal averages of the exposure, corresponding averages of the ERP have to be available. These can be determined as duty factors, the ratios of the time-averaged power to the maximum output power according to the transmitter setting. We determine duty factors for UMTS from the data of 37 base stations in the Swisscom network. The UMTS base stations sample contains sites from different regions of Switzerland and also different site types (rural/suburban/urban/hotspot). Averaged over all regions and site types, a UMTS duty factor for the 24 h-average is obtained, i.e., the average output power corresponds to about a third of the maximum power. We also give duty factors for GSM based on simple approximations and a lower limit for LTE estimated from the base load on the signalling channels.

  11. Time Averaged Transmitter Power and Exposure to Electromagnetic Fields from Mobile Phone Base Stations

    PubMed Central

    Bürgi, Alfred; Scanferla, Damiano; Lehmann, Hugo

    2014-01-01

    Models for exposure assessment of high frequency electromagnetic fields from mobile phone base stations need the technical data of the base stations as input. One of these parameters, the Equivalent Radiated Power (ERP), is a time-varying quantity, depending on communication traffic. In order to determine temporal averages of the exposure, corresponding averages of the ERP have to be available. These can be determined as duty factors, the ratios of the time-averaged power to the maximum output power according to the transmitter setting. We determine duty factors for UMTS from the data of 37 base stations in the Swisscom network. The UMTS base stations sample contains sites from different regions of Switzerland and also different site types (rural/suburban/urban/hotspot). Averaged over all regions and site types, a UMTS duty factor F ≈ 0.32 ± 0.08 for the 24 h-average is obtained, i.e., the average output power corresponds to about a third of the maximum power. We also give duty factors for GSM based on simple approximations and a lower limit for LTE estimated from the base load on the signalling channels. PMID:25105551
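The duty-factor definition in the abstract reduces to a one-line computation: the time-averaged output power divided by the maximum transmitter power. The hourly ERP values below are invented for illustration, not Swisscom measurements.

```python
# Hypothetical hourly ERP samples (watts) over one day for a single cell;
# the duty factor is the 24 h mean power divided by the maximum output
# power according to the transmitter setting.
p_max = 20.0
hourly_erp = [2.0] * 6 + [8.0] * 12 + [4.0] * 6  # night / day / evening (made up)

duty_factor = sum(hourly_erp) / len(hourly_erp) / p_max
print(round(duty_factor, 3))  # -> 0.275
```

A duty factor near the paper's reported F ≈ 0.32 would mean the time-averaged exposure from such a station is roughly a third of what a worst-case (maximum-power) calculation assumes.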

  12. Accurate prediction of unsteady and time-averaged pressure loads using a hybrid Reynolds-Averaged/large-eddy simulation technique

    NASA Astrophysics Data System (ADS)

    Bozinoski, Radoslav

    Significant research has been performed over the last several years on understanding the unsteady aerodynamics of various fluid flows. Much of this work has focused on quantifying the unsteady, three-dimensional flow field effects which have proven vital to the accurate prediction of many fluid and aerodynamic problems. Until recently, engineers have predominantly relied on steady-state simulations to analyze the inherently three-dimensional flow structures that are prevalent in many of today's "real-world" problems. Increases in computational capacity and the development of efficient numerical methods can change this and allow for the solution of the unsteady Reynolds-Averaged Navier-Stokes (RANS) equations for practical three-dimensional aerodynamic applications. An integral part of this capability has been the performance and accuracy of the turbulence models coupled with advanced parallel computing techniques. This report begins with a brief literature survey of the role fully three-dimensional, unsteady Navier-Stokes solvers play in the current state of numerical analysis. Next, the process of creating a baseline three-dimensional Multi-Block FLOw procedure called MBFLO3 is presented. Solutions for an inviscid circular arc bump, laminar flat plate, laminar cylinder, and turbulent flat plate are then presented. Results show good agreement with available experimental, numerical, and theoretical data. Scalability data for the parallel version of MBFLO3 are presented and show efficiencies of 90% and higher for processes of no fewer than 100,000 computational grid points. Next, the description and implementation techniques used for several turbulence models are presented. Following the successful implementation of the URANS and DES procedures, validation data for separated, non-reattaching flows over a NACA 0012 airfoil, wall-mounted hump, and a wing-body junction geometry are presented. Results for the NACA 0012 showed significant improvement in flow predictions

  13. A second-order closure model for the effect of averaging time on turbulent plume dispersion

    SciTech Connect

    Sykes, R.I.; Gabruk, R.S.

    1996-12-31

    Turbulent dispersion in the atmosphere is a result of chaotic advection by a wide spectrum of eddy motions. In general, the larger scale motions behave like a time-dependent, spatially inhomogeneous mean wind and produce coherent meandering of a pollutant cloud or plume, while the smaller scale motions act to diffuse the pollutant and mix it with the ambient air. The distinction between the two types of motion is dependent on both the sampling procedure and the scale of the pollutant cloud. For the case of a continuous plume of material, the duration of the sampling time (the time-averaging period) determines the effective size of the plume. The objective is the development of a practical scheme for representing the effect of time-averaging on plume width. The model must describe relative dispersion in the limit of short-term averages, and give the absolute, or ensemble, dispersion rate for long-term sampling. The authors shall generalize the second-order closure ensemble dispersion model of Sykes et al. to include the effect of time-averaging, so they first briefly review the basic model.

  14. The Estimation of Theta in the Integrated Moving Average Time-Series Model.

    ERIC Educational Resources Information Center

    Martin, Gerald R.

    Through Monte Carlo procedures, three different techniques for estimating the parameter theta (proportion of the "shocks" remaining in the system) in the Integrated Moving Average (0,1,1) time-series model are compared in terms of (1) the accuracy of the estimates, (2) the independence of the estimates from the true value of theta, and…
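A conditional-sum-of-squares grid search is one simple way to estimate theta in an IMA(0,1,1) model. The sketch below (an illustration, not one of the three techniques compared in the record) simulates a series with a known theta and recovers it from the first differences; the sample size, seed, and grid step are arbitrary choices.

```python
import random

def simulate_ima011(n, theta, seed=7):
    """Simulate y_t with y_t - y_(t-1) = e_t - theta * e_(t-1), unit-variance shocks."""
    rng = random.Random(seed)
    y, e_prev, ys = 0.0, 0.0, []
    for _ in range(n):
        e = rng.gauss(0.0, 1.0)
        y += e - theta * e_prev
        ys.append(y)
        e_prev = e
    return ys

def estimate_theta(ys, step=0.01):
    """Grid search over the invertible region minimizing the conditional
    sum of squared innovations."""
    w = [b - a for a, b in zip(ys, ys[1:])]  # first differences
    best_theta, best_sse = 0.0, float("inf")
    theta = -0.99
    while theta < 1.0:
        e_prev, sse = 0.0, 0.0
        for wt in w:
            e = wt + theta * e_prev  # invert w_t = e_t - theta * e_(t-1)
            sse += e * e
            e_prev = e
        if sse < best_sse:
            best_theta, best_sse = theta, sse
        theta += step
    return best_theta

theta_hat = estimate_theta(simulate_ima011(3000, 0.6))
print(theta_hat)  # close to the true value 0.6
```

In a Monte Carlo comparison like the one the record describes, this whole recover-and-compare step would be repeated over many simulated series and several true values of theta.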

  15. Violation of Homogeneity of Variance Assumption in the Integrated Moving Averages Time Series Model.

    ERIC Educational Resources Information Center

    Gullickson, Arlen R.; And Others

    This study is an analysis of the robustness of the Box-Tiao integrated moving averages model for analysis of time series quasi-experiments. One of the assumptions underlying the Box-Tiao model is that all N values of α_t come from the same population, which has variance σ². The robustness was studied only in terms of…

  16. Spatial and Temporal scales of time-averaged 700 MB height anomalies

    NASA Technical Reports Server (NTRS)

    Gutzler, D.

    1981-01-01

    The monthly and seasonal forecasting technique is based to a large extent on the extrapolation of trends in the positions of the centers of time-averaged geopotential height anomalies. The complete forecasted height pattern is subsequently drawn around the forecasted anomaly centers. The efficacy of this technique was tested by examining time series of observed monthly mean and 5-day mean 700 mb geopotential heights. Autocorrelation statistics are generated to document the tendency for persistence of anomalies. These statistics are compared to a red noise hypothesis to check for evidence of possible preferred time scales of persistence. Space-time spectral analyses at middle latitudes are checked for evidence of periodicities which could be associated with predictable month-to-month trends. A local measure of the average spatial scale of anomalies is devised for guidance in the completion of the anomaly pattern around the forecasted centers.
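The red-noise comparison amounts to checking observed autocorrelations against the AR(1) prediction r(lag) = r(1)**lag. The surrogate series below is a hypothetical stand-in for the height-anomaly data, used only to show the mechanics of the check.

```python
import random

def autocorr(x, lag):
    """Sample autocorrelation at the given lag (biased normalization)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    return sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag)) / var

# AR(1) "red noise" surrogate: x_t = r1 * x_(t-1) + white noise
rng = random.Random(1)
r1 = 0.7
x = [0.0]
for _ in range(5000):
    x.append(r1 * x[-1] + rng.gauss(0.0, 1.0))

# Under the red-noise hypothesis, r(lag) is approximately r1**lag;
# persistent departures from that curve suggest preferred time scales.
print(round(autocorr(x, 1), 3), round(autocorr(x, 2), 3))
```

For real anomaly data the interesting case is the opposite one: lags at which the observed autocorrelation sits well above the fitted r1**lag curve.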

  17. An upper bound to time-averaged space-charge limited diode currents

    SciTech Connect

    Griswold, M. E.; Fisch, N. J.; Wurtele, J. S.

    2010-11-15

    The Child-Langmuir law limits the steady-state current density across a one-dimensional planar diode. While it is known that the peak current density can surpass this limit when the boundary conditions vary in time, it remains an open question whether the average current can violate the Child-Langmuir limit under time-dependent conditions. For the case where the applied voltage is constant but the electric field at the cathode is allowed to vary in time, one-dimensional particle-in-cell simulations suggest that such a violation is impossible. Although a formal proof is not given, an upper bound on the time-averaged current density is offered.
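For reference, the steady-state bound itself is the Child-Langmuir current density J = (4 ε0 / 9) sqrt(2e/m) V^(3/2) / d²; a quick evaluation for electrons (the voltage and gap values are arbitrary examples):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
Q_E = 1.602176634e-19    # elementary charge, C
M_E = 9.1093837015e-31   # electron mass, kg

def child_langmuir(voltage, gap):
    """Steady-state space-charge-limited current density (A/m^2) for a
    planar diode with applied voltage (V) and electrode spacing gap (m)."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * Q_E / M_E) * voltage ** 1.5 / gap ** 2

print(child_langmuir(1000.0, 0.01))  # roughly 7.4e2 A/m^2 for 1 kV across 1 cm
```

The question the record addresses is whether the time average of J can exceed this value when the cathode boundary condition is modulated; the simulations suggest it cannot.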

  18. Constrained Optimization of Average Arrival Time via a Probabilistic Approach to Transport Reliability

    PubMed Central

    Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam

    2015-01-01

    To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902

  19. Constrained Optimization of Average Arrival Time via a Probabilistic Approach to Transport Reliability.

    PubMed

    Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam

    2015-01-01

    To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator.

  20. Inverse methods for estimating primary input signals from time-averaged isotope profiles

    NASA Astrophysics Data System (ADS)

    Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.

    2005-08-01

    Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
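A minimal numerical sketch of the minimum-length solution of Am = d: for an underdetermined system with full row rank, m = Aᵀ(AAᵀ)⁻¹d is the shortest input vector reproducing the data. The boxcar moving-average kernel below is a hypothetical stand-in for the paper's amelogenesis-plus-sampling matrix, which is tooth-specific.

```python
import math

def solve(M, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    k = len(M)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, k):
            f = aug[r][col] / aug[col][col]
            for c in range(col, k + 1):
                aug[r][c] -= f * aug[col][c]
    x = [0.0] * k
    for i in range(k - 1, -1, -1):
        x[i] = (aug[i][k] - sum(aug[i][j] * x[j] for j in range(i + 1, k))) / aug[i][i]
    return x

n, window = 12, 4                 # input-signal length and averaging length
rows = n - window + 1
# Hypothetical averaging kernel: each datum is a boxcar average of the input
A = [[1.0 / window if i <= j < i + window else 0.0 for j in range(n)]
     for i in range(rows)]
m_true = [math.sin(2 * math.pi * k / n) for k in range(n)]   # seasonal input
d = [sum(A[i][j] * m_true[j] for j in range(n)) for i in range(rows)]

# Minimum-length solution m = A^T (A A^T)^-1 d of the underdetermined system
AAT = [[sum(A[i][k] * A[j][k] for k in range(n)) for j in range(rows)]
       for i in range(rows)]
z = solve(AAT, d)
m_est = [sum(A[i][j] * z[i] for i in range(rows)) for j in range(n)]
```

The recovered m_est reproduces the averaged profile d exactly while having the smallest norm among all consistent inputs, which is the regularization choice the abstract refers to as the "minimum length solution".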

  1. The uncertainty of averaging a time series of measurements and its use in environmental legislation

    NASA Astrophysics Data System (ADS)

    Pérez Ballesta, Pascual

    This paper assesses the problem of the calculation of an averaging uncertainty for measurements in the framework of the European Air Quality Directives. Current environmental legislation establishes maximum uncertainties associated with a defined period of measurements. The difficulties of an 'a priori' determination of uncertainty contributions associated with both the averaging of measurements and an incomplete time series are discussed. The definition of an overall uncertainty, which includes budget contributions from the aforementioned factors, is not helpful as a regulatory parameter for quality. Alternatively, it should be used as additional information to associate with the average measurement value. This should be taken into consideration in future revisions of the European ambient air quality legislation.

  2. Effects of time-averaging climate parameters on predicted multicompartmental fate of pesticides and POPs.

    PubMed

    Lammel, Gerhard

    2004-01-01

    To investigate the justification of time-averaging of climate parameters in multicompartment modelling, the effects of various climate parameters and different modes of entry on the predicted substances' total environmental burdens and the compartmental fractions were studied. A simple, non-steady-state, zero-dimensional (box) mass-balance model of intercompartmental mass exchange, which comprises four compartments, was used for this purpose. Two runs were performed in each case: one temporally unresolved (time-averaged conditions) and one time-resolved (hourly or finer) control run. In many cases significant discrepancies are predicted, depending on the substance and on the parameter. We find discrepancies exceeding 10% relative to the control run and up to an order of magnitude for prediction of the total environmental burden from neglecting seasonalities of the soil and ocean temperatures and the hydroxyl radical concentration in the atmosphere and diurnalities of atmospheric mixing depth and the hydroxyl radical concentration in the atmosphere. Under some conditions it was indicated that substance sensitivity could be explained by the magnitude of the sink terms in the compartment(s) with parameters varying. In general, however, any key for understanding substance sensitivity seems not to be linked in an easy manner to the properties of the substance, to the fractions of its burden or to the sink terms in either of the compartments with parameters varying. Averaging of diurnal variability was found to cause errors of total environmental residence time of different sign for different substances. The effects of time-averaging of several parameters are in general not additive, but synergistic as well as compensatory effects occur. An implication of these findings is that the ranking of substances according to persistence is sensitive to time resolution on the scale of hours to months. As a conclusion it is recommended to use high temporal resolution in multi

  3. Measurement of fluid properties using rapid-double-exposure and time-average holographic interferometry

    NASA Technical Reports Server (NTRS)

    Decker, A. J.

    1984-01-01

    The holographic recording of the time history of a flow feature in three dimensions is discussed. The use of diffuse illumination holographic interferometry for the three-dimensional visualization of flow features such as shock waves and turbulent eddies is described. The double-exposure and time-average methods are compared using the characteristic function and the results from a flow simulator. A time history requires a large hologram recording rate. Results of holographic cinematography of the shock waves in a flutter cascade are presented as an example. Future directions of this effort, including the availability and development of suitable lasers, are discussed. Previously announced in STAR as N84-21849

  4. Measurement of fluid properties using rapid-double-exposure and time-average holographic interferometry

    NASA Technical Reports Server (NTRS)

    Decker, A. J.

    1984-01-01

    The holographic recording of the time history of a flow feature in three dimensions is discussed. The use of diffuse illumination holographic interferometry for the three-dimensional visualization of flow features such as shock waves and turbulent eddies is described. The double-exposure and time-average methods are compared using the characteristic function and the results from a flow simulator. A time history requires a large hologram recording rate. Results of holographic cinematography of the shock waves in a flutter cascade are presented as an example. Future directions of this effort, including the availability and development of suitable lasers, are discussed.

  5. Exposing local symmetries in distorted driven lattices via time-averaged invariants

    NASA Astrophysics Data System (ADS)

    Wulf, T.; Morfonios, C. V.; Diakonos, F. K.; Schmelcher, P.

    2016-05-01

    Time-averaged two-point currents are derived and shown to be spatially invariant within domains of local translation or inversion symmetry for arbitrary time-periodic quantum systems in one dimension. These currents are shown to provide a valuable tool for detecting deformations of a spatial symmetry in static and driven lattices. In the static case the invariance of the two-point currents is related to the presence of time-reversal invariance and/or probability current conservation. The obtained insights into the wave functions are further exploited for a symmetry-based convergence check which is applicable for globally broken but locally retained potential symmetries.

  6. A Stochastic Model of Space-Time Variability of Tropical Rainfall: I. Statistics of Spatial Averages

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Global maps of rainfall are of great importance in connection with modeling of the earth's climate. Comparison between the maps of rainfall predicted by computer-generated climate models with observation provides a sensitive test for these models. To make such a comparison, one typically needs the total precipitation amount over a large area, which could be hundreds of kilometers in size, over extended periods of time of order days or months. This presents a difficult problem since rain varies greatly from place to place as well as in time. Remote sensing methods using ground radar or satellites detect rain over a large area by essentially taking a series of snapshots at infrequent intervals and indirectly deriving the average rain intensity within a collection of pixels, usually several kilometers in size. They measure area average of rain at a particular instant. Rain gauges, on the other hand, record rain accumulation continuously in time but only over a very small area tens of centimeters across, say, the size of a dinner plate. They measure only a time average at a single location. In making use of either method one needs to fill in the gaps in the observation - either the gaps in the area covered or the gaps in time of observation. This involves using statistical models to obtain information about the rain that is missed from what is actually detected. This paper investigates such a statistical model and validates it with rain data collected over the tropical Western Pacific from shipborne radars during TOGA COARE (Tropical Oceans Global Atmosphere Coupled Ocean-Atmosphere Response Experiment). The model incorporates a number of commonly observed features of rain. While rain varies rapidly with location and time, the variability diminishes when averaged over larger areas or longer periods of time. Moreover, rain is patchy in nature - at any instant on the average only a certain fraction of the observed pixels contain rain. The fraction of area covered by

  7. Adaptive Output-Feedback Neural Control of Switched Uncertain Nonlinear Systems With Average Dwell Time.

    PubMed

    Long, Lijun; Zhao, Jun

    2015-07-01

    This paper investigates the problem of adaptive neural tracking control via output-feedback for a class of switched uncertain nonlinear systems without the measurements of the system states. The unknown control signals are approximated directly by neural networks. A novel adaptive neural control technique for the problem studied is set up by exploiting the average dwell time method and backstepping. A switched filter and different update laws are designed to reduce the conservativeness caused by adoption of a common observer and a common update law for all subsystems. The proposed controllers of subsystems guarantee that all closed-loop signals remain bounded under a class of switching signals with average dwell time, while the output tracking error converges to a small neighborhood of the origin. As an application of the proposed design method, adaptive output feedback neural tracking controllers for a mass-spring-damper system are constructed.

  8. Testing ΛCDM cosmology at turnaround: where to look for violations of the bound?

    SciTech Connect

    Tanoglidis, D.; Pavlidou, V.; Tomaras, T.N. E-mail: pavlidou@physics.uoc.gr

    2015-12-01

    In ΛCDM cosmology, structure formation is halted shortly after dark energy dominates the mass/energy budget of the Universe. A manifestation of this effect is that in such a cosmology the turnaround radius—the non-expanding mass shell furthest away from the center of a structure—has an upper bound. Recently, a new, local test for the existence of dark energy in the form of a cosmological constant was proposed based on this turnaround bound. Before designing an experiment that, through high-precision determination of masses and—independently—turnaround radii, will challenge ΛCDM cosmology, we have to answer two important questions: first, when are turnaround-scale structures predicted to be close enough to their maximum size, so that a possible violation of the bound may be observable; second, which is the best mass scale to target for possible violations of the bound. These are the questions we address in the present work. Using the Press-Schechter formalism, we find that turnaround structures have in practice already stopped forming, and consequently the turnaround radius of structures must be very close to the maximum value today. We also find that the mass scale of ~10^13 M_⊙ characterizes the turnaround structures that start to form in a statistically important number density today—and even at an infinite time in the future, since structure formation has almost stopped. This mass scale also separates turnaround structures with qualitatively different cosmological evolution: smaller structures are no longer readjusting their mass distribution inside the turnaround scale, they asymptotically approach their ultimate abundance from higher values, and they are common enough to have, at some epoch, experienced major mergers with structures of comparable mass; larger structures exhibit the opposite behavior. We call this mass scale the transitional mass scale and we argue that it is the optimal one for the purpose outlined above. As a

  9. Time reversal seismic source imaging using peak average power ratio (PAPR) parameter

    NASA Astrophysics Data System (ADS)

    Franczyk, Anna; Leśniak, Andrzej; Gwiżdż, Damian

    2017-03-01

    The time reversal method has become a standard technique for the location of seismic sources. It has been used both for acoustic and elastic numerical modelling and for 2D and 3D propagation models. Although there are many studies concerning its application to point sources, little so far has been done to generalise the time reversal method to the study of sequences of seismic events. The need to describe such processes better motivates the analysis presented in this paper. The synthetic time reversal imaging experiments presented in this work were conducted for sources with the same origin time as well as for the sources with a slight delay in origin time. For efficient visualisation of the seismic wave propagation and interference, a new coefficient—peak average power ratio—was introduced. The paper also presents a comparison of visualisation based on the proposed coefficient against a commonly used visualisation based on a maximum value.
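The basic peak-average power ratio is a standard quantity; one plausible reading of the coefficient (a generic sketch, since the paper defines its own imaging-specific variant over the time-reversed wavefield) is the peak instantaneous power divided by the mean power:

```python
def papr(samples):
    """Peak instantaneous power divided by mean power of a sampled signal.

    High values indicate energy concentrated at a few samples (a focus),
    low values indicate energy spread evenly in time.
    """
    powers = [x * x for x in samples]
    return max(powers) / (sum(powers) / len(powers))

print(papr([0.0, 1.0, 0.0, -1.0]))  # sampled sine wave -> 2.0
```

In time reversal imaging, a grid point where the back-propagated wavefields interfere constructively shows a sharp power peak relative to its time-averaged level, so a high PAPR flags candidate source locations.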

  10. Hospital turnarounds--thinking strategically.

    PubMed

    Rosenfield, R H

    1990-01-01

    Healthcare executives throughout the country are losing confidence that their organizations can remain viable without significant organizational change. Now is the time to review principal trends in this area--conversions and partnerships--and to assess how well particular arrangements seem to be working.

  11. Investigation of Average Prediction Time for Different Meteorological Variables By Using Chaotic Approach

    NASA Astrophysics Data System (ADS)

    Özgür, Evren; Koçak, Kasım

    2016-04-01

    According to the nonlinear dynamical systems approach, the time evolution of a system can be represented by its trajectories in phase space. This phase space is spanned by the state variables that are necessary to determine the time evolution of the system. Atmospheric processes cannot be represented by linear approaches because of their dependency on numerous independent variables. Since small changes in initial conditions lead to significant differences in prediction, long-term prediction of meteorological variables is not possible. This situation is described by the term "sensitive dependence on initial conditions". In this study, we attempt to determine the average prediction time for different atmospheric variables by applying a nonlinear approach. The first step of the method is to reconstruct the phase space, which requires two parameters: the time delay and the embedding dimension. The Mutual Information Function (MIF) can be used to determine the optimum time delay; it accounts for both linear and nonlinear dependencies in a given time series. The embedding dimension—the number of state variables necessary to describe the dynamics of the system—must also be identified correctly; the algorithm used for this is False Nearest Neighbors (FNN). After constructing the phase space using the time delay and embedding dimension, the maximum Lyapunov exponent was computed. The Lyapunov exponent quantifies the exponential divergence or convergence of nearby orbits in phase space; a dynamical system with a positive Lyapunov exponent is defined as chaotic. Because meteorological variables are governed by large numbers of independent variables, their time series may be produced by a chaotic dynamical system. Using the phase space and the maximum Lyapunov exponent, average prediction times of different parameters were calculated.
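
    The prediction-horizon idea can be sketched numerically. As a hedged stand-in for the meteorological series of the study, the sketch below estimates the maximal Lyapunov exponent of the logistic map at r = 4 (whose exponent is known to be ln 2) and converts it into an average prediction time via the usual rule t ≈ (1/λ) ln(Δ/δ0), where δ0 is the initial error and Δ the tolerance; the map and the error levels are illustrative assumptions:

```python
import numpy as np

# Estimate the maximal Lyapunov exponent of a chaotic map and turn it into
# an average prediction horizon. The logistic map at r = 4 stands in for
# reconstructed atmospheric dynamics; its exponent is known to be ln 2.
def lyapunov_logistic(r=4.0, n=100_000, x0=0.3):
    x, total = x0, 0.0
    for _ in range(n):
        total += np.log(abs(r * (1.0 - 2.0 * x)))  # log |f'(x)| along the orbit
        x = r * x * (1.0 - x)
    return total / n

lam = lyapunov_logistic()
# Horizon over which an initial error of 1e-6 grows to an O(1) tolerance:
horizon = np.log(1.0 / 1e-6) / lam
print(lam, horizon)   # lam close to ln 2 ~ 0.693; horizon ~ 20 iterations
```

    The same conversion applies once λ has been estimated from an embedded time series: a larger positive exponent shrinks the usable prediction window logarithmically in the desired accuracy.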

  12. On simulating flow with multiple time scales using a method of averages

    SciTech Connect

    Margolin, L.G.

    1997-12-31

    The author presents a new computational method based on averaging to efficiently simulate certain systems with multiple time scales. He first develops the method in a simple one-dimensional setting and employs linear stability analysis to demonstrate numerical stability. He then extends the method to multidimensional fluid flow. His method of averages does not depend on explicit splitting of the equations nor on modal decomposition. Rather he combines low order and high order algorithms in a generalized predictor-corrector framework. He illustrates the methodology in the context of a shallow fluid approximation to an ocean basin circulation. He finds that his new method reproduces the accuracy of a fully explicit second-order accurate scheme, while costing less than a first-order accurate scheme.

  13. Time scales and variability of area-averaged tropical oceanic rainfall

    NASA Technical Reports Server (NTRS)

    Shin, Kyung-Sup; North, Gerald R.; Ahn, Yoo-Shin; Arkin, Phillip A.

    1990-01-01

    A statistical analysis of time series of area-averaged rainfall over the oceans has been conducted around the diurnal time scale. The results of this analysis can be applied directly to the problem of establishing the magnitude of the expected errors incurred in estimating monthly area-averaged rain rate from low-orbiting satellites. Statistics such as the mean, standard deviation, and integral time scale of background red noise, together with spectral analyses, were computed for time series of the GOES precipitation index taken at 3-hour intervals during the period spanning December 19, 1987 to March 31, 1988 over the central and eastern tropical Pacific. The analyses were conducted on 2.5 x 2.5 deg and 5 x 5 deg grid boxes separately. The study shows that rainfall measurements by a sun-synchronous satellite visiting a spot twice per day will include a bias, due to the existence of the semidiurnal cycle, ranging from 5 to 10 percentage points in the SPCZ. The bias in the ITCZ may be of the order of 5 percentage points.
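
    The twice-daily sampling bias can be illustrated with a toy rainfall series. The semidiurnal amplitude (20% of the mean), phase, and fixed local visit times below are assumed for illustration, not taken from the GOES data:

```python
import numpy as np

# A sun-synchronous satellite sampling at two fixed local times aliases the
# semidiurnal rainfall harmonic into a bias in the monthly mean.
hours = np.arange(0, 24 * 90, 0.5)                  # 90 days on a 30-min grid
rain = 1.0 + 0.2 * np.cos(2 * np.pi * (hours % 24) / 12 - 1.0)  # semidiurnal cycle
true_mean = rain.mean()                             # full-resolution truth

# Twice-daily visits at fixed local times (e.g. 06:00 and 18:00):
visits = rain[np.isin(hours % 24, (6.0, 18.0))]
bias_percent = 100 * (visits.mean() - true_mean) / true_mean
print(round(bias_percent, 1))   # -> -10.8 with these assumed amplitude and phase
```

    Both visit times land at the same phase of the 12-hour harmonic, so the error does not average out no matter how long the record; only the assumed amplitude sets its size.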

  14. TIME-AVERAGE-BASED METHODS FOR MULTI-ANGULAR SCALE ANALYSIS OF COSMIC-RAY DATA

    SciTech Connect

    Iuppa, R.; Di Sciascio, G. E-mail: giuseppe.disciascio@roma2.infn.it

    2013-04-01

    Over the past decade, a number of experiments have dealt with the problem of measuring the arrival direction distribution of cosmic rays, looking for information on propagation mechanisms and the identification of their sources. Any deviation from isotropy may be regarded as a signature of unforeseen or unknown phenomena, especially if well localized in the sky and occurring at low rigidity. This has led experimenters to search for excesses down to angular scales as narrow as 10°, raising the issue of properly filtering out contributions from wider structures. A commonly envisaged solution is based on time-average methods to determine the reference value of the cosmic-ray flux. Such techniques are nearly insensitive to signals wider than the time window in use, thus allowing the analysis to focus on medium- and small-scale signals. Nonetheless, the signal often cannot be excluded from the calculation of the reference value, which induces systematic errors. The use of time-average methods has recently led to important discoveries about the medium-scale cosmic-ray anisotropy, present in both the northern and southern hemispheres. It is known that an excess (or deficit) is observed as less intense than it is in reality, and that fake deficit zones are rendered around true excesses, because of the absolute lack of a priori knowledge of which signal is true and which is not. This work is an attempt to critically review the use of time-average-based methods for observing extended features in the cosmic-ray arrival distribution pattern.

  15. Memory efficient and constant time 2D-recursive spatial averaging filter for embedded implementations

    NASA Astrophysics Data System (ADS)

    Gan, Qifeng; Seoud, Lama; Ben Tahar, Houssem; Langlois, J. M. Pierre

    2016-04-01

    Spatial Averaging Filters (SAFs) are extensively used in image processing for image smoothing and denoising. Their latest implementations already achieve constant-time computational complexity regardless of kernel size. However, all existing O(1) algorithms require additional memory for temporary data storage. In order to minimize memory usage in embedded systems, we introduce a new two-dimensional recursive SAF. It uses previous resultant pixel values along both rows and columns to calculate the current one, achieving constant-time computational complexity without any additional memory. Experimental comparisons with previous SAF implementations show that the proposed 2D-recursive SAF does not require any additional memory while offering a computational time similar to the most efficient existing SAF algorithm. These features make it especially suitable for embedded systems with limited memory capacity.
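
    The authors' exact in-place scheme is not reproduced here, but the underlying recursive idea can be sketched: each output pixel is obtained from the previous output with a single add and a single subtract, so the cost per pixel is independent of kernel size. The sketch below uses a separable running-sum box filter with replicate padding; the border handling is an assumption:

```python
import numpy as np

# O(1)-per-pixel recursive averaging: the sliding-window sum is updated
# incrementally instead of being recomputed for every pixel.
def box_filter_1d(x, radius):
    n = len(x)
    out = np.empty_like(x, dtype=float)
    idx = lambda i: min(max(i, 0), n - 1)          # replicate-pad at borders
    window = sum(x[idx(i)] for i in range(-radius, radius + 1))
    out[0] = window
    for i in range(1, n):                          # one add, one subtract per pixel
        window += x[idx(i + radius)] - x[idx(i - radius - 1)]
        out[i] = window
    return out / (2 * radius + 1)

def box_filter_2d(img, radius):
    # Separable pass: filter rows, then columns of the row result.
    rows = np.apply_along_axis(box_filter_1d, 1, img.astype(float), radius)
    return np.apply_along_axis(box_filter_1d, 0, rows, radius)

img = np.arange(25.0).reshape(5, 5)
print(box_filter_2d(img, 1)[2, 2])   # interior mean of the 3x3 window -> 12.0
```

    The memory saving claimed in the paper comes from reusing already-computed output pixels instead of a separate running-sum buffer; this sketch keeps the standard buffer for clarity.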

  16. Gearbox fault diagnosis based on time-frequency domain synchronous averaging and feature extraction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shengli; Tang, Jiong

    2016-04-01

    The gearbox is one of the most vulnerable subsystems in a wind turbine. Its health status significantly affects the efficiency and function of the entire system. Vibration-based fault diagnosis methods are widely applied nowadays. However, vibration signals are always contaminated by noise arising from data acquisition errors, structural geometric errors, operational errors, etc. As a result, it is difficult to identify potential gear failures directly from vibration signals, especially for early-stage faults. This paper utilizes a synchronous averaging technique in the time-frequency domain to remove non-synchronous noise and enhance fault-related time-frequency features. The enhanced time-frequency information is further employed in gear fault classification and identification through feature extraction algorithms including Kernel Principal Component Analysis (KPCA), Multilinear Principal Component Analysis (MPCA), and Locally Linear Embedding (LLE). Results show that the LLE approach is the most effective at classifying and identifying different gear faults.
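
    The paper's time-frequency variant builds on conventional time synchronous averaging (TSA). As a hedged baseline sketch (signal, tone, and noise levels are illustrative): slice the vibration record into revolutions of equal length and average them, so shaft-synchronous gear-mesh components survive while non-synchronous noise averages out:

```python
import numpy as np

def tsa(signal, samples_per_rev):
    """Conventional time synchronous average: mean over whole revolutions."""
    n_rev = len(signal) // samples_per_rev
    segments = signal[: n_rev * samples_per_rev].reshape(n_rev, samples_per_rev)
    return segments.mean(axis=0)

rng = np.random.default_rng(1)
spr = 128                                    # samples per revolution
t = np.arange(spr * 400)                     # 400 revolutions of data
mesh = np.sin(2 * np.pi * 8 * t / spr)       # shaft-synchronous gear-mesh tone
noisy = mesh + rng.standard_normal(t.size)   # heavy non-synchronous noise
avg = tsa(noisy, spr)
# Residual noise power drops by ~1/400 after averaging 400 revolutions:
print(np.mean((avg - mesh[:spr]) ** 2))
```

    Real use requires resampling to a constant angle grid first (e.g. from a tachometer signal), which this sketch omits.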

  17. Experimental Characterization of the Time-Averaged and Oscillatory Behavior of a Hall Plasma Discharge

    NASA Astrophysics Data System (ADS)

    Young, Christopher; Lucca Fabris, Andrea; Gascon, Nicolas; Cappelli, Mark

    2014-10-01

    An extensive experimental campaign characterizes a 70 mm diameter stationary plasma thruster operating on xenon in the 200--500 W power range. This study resolves both time-averaged properties and oscillatory phenomena in the plasma discharge. Specifically, we explore the time variation of the plume ion velocity field referenced to periodic discharge current oscillations using time-synchronized laser induced fluorescence (LIF) measurements. This LIF scheme relies on a triggered signal acquisition gate locked at a given phase of the current oscillation period. The laser is modulated at a characteristic frequency and homodyne detection through a lock-in amplifier extracts the induced fluorescence signal out of the bright background emission. This work is sponsored by the U.S. Air Force Office of Scientific Research with Dr. Mitat Birkan as program manager. CVY acknowledges support from the DOE NNSA Stewardship Science Graduate Fellowship under Contract DE-FC52-08NA28752.

  18. A modeling study of the time-averaged electric currents in the vicinity of isolated thunderstorms

    NASA Technical Reports Server (NTRS)

    Driscoll, Kevin T.; Blakeslee, Richard J.; Baginski, Michael E.

    1992-01-01

    A thorough examination of the results of a time-dependent computer model of a dipole thunderstorm revealed that there are numerous similarities between the time-averaged electrical properties and the steady-state properties of an active thunderstorm. Thus, the electrical behavior of the atmosphere in the vicinity of a thunderstorm can be determined with a formulation similar to what was first described by Holzer and Saxon (1952). From the Maxwell continuity equation of electric current, a simple analytical equation was derived that expresses a thunderstorm's average current contribution to the global electric circuit in terms of the generator current within the thundercloud, the intracloud lightning current, the cloud-to-ground lightning current, the altitudes of the charge centers, and the conductivity profile of the atmosphere. This equation was found to be nearly as accurate as the more computationally expensive numerical model, even when it is applied to a thunderstorm with a reduced conductivity thundercloud, a time-varying generator current, a varying flash rate, and a changing lightning mix.

  19. Leading a supply chain turnaround.

    PubMed

    Slone, Reuben E

    2004-10-01

    Just five years ago, salespeople at Whirlpool were in the habit of referring to their supply chain organization as the "sales disablers." Now the company excels at getting products to the right place at the right time--while managing to keep inventories low. How did that happen? In this first-person account, Reuben Slone, Whirlpool's vice president of Global Supply Chain, describes how he and his colleagues devised the right supply chain strategy, sold it internally, and implemented it. Slone insisted that the right focal point for the strategy was the satisfaction of consumers at the end of the supply chain. Most supply chain initiatives do the opposite: They start with the realities of a company's manufacturing base and proceed from there. Through a series of interviews with trade customers large and small, his team identified 27 different capabilities that drove industry perceptions of Whirlpool's performance. Knowing it was infeasible to aim for world-class performance across all of them, Slone weighed the costs of excelling at each and found the combination of initiatives that would provide overall competitive advantage. A highly disciplined project management office and broad training in project management were key to keeping work on budget and on benefit. Slone set an intense pace--three "releases" of new capabilities every month--that the group maintains to this day. Lest this seem like a technology story, however, Slone insists it is just as much a "talent renaissance." People are proud today to be part of Whirlpool's supply chain organization, and its new generation of talent will give the company a competitive advantage for years to come.

  20. Probe shapes that measure time-averaged streamwise momentum and cross-stream turbulence intensity

    NASA Technical Reports Server (NTRS)

    Rossow, Vernon J. (Inventor)

    1993-01-01

    A method and apparatus for directly measuring the time-averaged streamwise momentum in a turbulent stream use a probe whose total-head response varies as the cosine squared of the angle of incidence. The probe has a nose with a slight indentation on its front face to provide the desired response, and its method of manufacture incorporates unique design features. Another probe may be positioned side by side with the first to provide a direct measurement of the total pressure. The difference between the two pressures yields the sum of the squares of the cross-stream components of the turbulence level.

  1. Time-averages for Plane Travelling Waves—The Effect of Attenuation: I, Adiabatic Approach

    NASA Astrophysics Data System (ADS)

    Makarov, S. N.

    1993-05-01

    The analysis of the effect of attenuation on the time-averages for a plane travelling wave is presented. The barotropic equation of state is considered; i.e., acoustic heating is assumed to be negligible. The problem statement consists of calculating means in a finite region bounded by a transducer surface and a perfectly absorbing surface. Although the simple wave approximation cannot be used throughout the field, it is still valid near the perfect absorber. The result for radiation pressure differs from the conclusions given previously by Beyer and Livett, Emery and Leeman.

  2. Study of distribution and characteristics of the time average of pressure of a water cushion pool

    NASA Astrophysics Data System (ADS)

    Guo, Y. H.; Fu, J. F.

    2016-08-01

    When a dam discharges flood water, the plunging flow, with its large kinetic energy, scours the riverbed, resulting in erosion damage. In order to improve the anti-erosion capacity of a riverbed, a water cushion pool is created. This paper draws on turbulent jet theory to deduce a semi-empirical formula for the time average of pressure in the impinging portion of the cushion pool. Additionally, MATLAB numerical simulation is used to analyse the turbulent jet energy and water cushion depth as flood water enters the pool, in order to determine the distribution and related characteristics of the time-averaged pressure.

  3. Fetal cardiac time intervals estimated on fetal magnetocardiograms: single cycle analysis versus average beat inspection.

    PubMed

    Comani, Silvia; Alleva, Giovanna

    2007-01-01

    Fetal cardiac time intervals (fCTI) are dependent on fetal growth and development, and may reveal useful information for fetuses affected by growth retardation, structural cardiac defects or long QT syndrome. Fetal cardiac signals with a signal-to-noise ratio (SNR) of at least 15 dB were retrieved from fetal magnetocardiography (fMCG) datasets with a system based on independent component analysis (ICA). An automatic method was used to detect the onset and offset of the cardiac waves on single cardiac cycles of each signal, and the fCTI were quantified for each heartbeat; long rhythm strips were used to calculate average fCTI and their variability for single fetal cardiac signals. The aim of this work was to compare the outcomes of this system with estimates of fCTI obtained with a classical method based on visual inspection of averaged beats. No fCTI variability can be measured from averaged beats. A total of 25 fMCG datasets (fetal age from 22 to 37 weeks) were evaluated, and 1768 cardiac cycles were used to compute fCTI. The real differences between the values obtained with single cycle analysis and visual inspection of averaged beats were very small for all fCTI: comparable with the signal resolution (±1 ms) for the QRS complex and QT interval, and always <5 ms for the PR interval, ST segment and T wave. The coefficients of determination between the fCTI estimated with the two methods ranged between 0.743 and 0.917. Conversely, inter-observer differences were larger, with coefficients of determination between 0.463 and 0.807, confirming the high performance of the automated single cycle analysis, which is also rapid and unaffected by observer-dependent bias.

  4. Time-averaged fluxes of lead and fallout radionuclides to sediments in Florida Bay

    USGS Publications Warehouse

    Robbins, J.A.; Holmes, C.; Halley, R.; Bothner, M.; Shinn, E.; Graney, J.; Keeler, G.; TenBrink, M.; Orlandini, K.A.; Rudnick, D.

    2000-01-01

    Recent, unmixed sediments from mud banks of central Florida Bay were dated using 210Pb/226Ra, and chronologies were verified by comparing sediment lead temporal records with Pb/Ca ratios in annual layers of coral (Montastrea annularis) located on the ocean side of the Florida Keys. Dates of sediment lead peaks (1978 ± 2) accord with prior observations of a 6 year lag between the occurrence of maximum atmospheric lead in 1972 and peak coral lead in 1978. Smaller lags of 1-2 years occur between the maximum atmospheric radionuclide fallout and peaks in sediment temporal records of 137Cs and Pu. Such lags are consequences of system time averaging (STA) in which atmospherically delivered particle-associated constituents accumulate and mix in a (sedimentary?) reservoir before transferring to permanent sediments and coral. STA model calculations, using time-dependent atmospheric inputs, produced optimized profiles in excellent accord with measured sediment 137Cs, Pu, lead, and coral lead distributions. Derived residence times of these particle tracers (16 ± 1, 15.7 ± 0.7, 19 ± 3, and 16 ± 2 years, respectively) are comparable despite differences in sampling locations, in accumulating media, and in element loading histories and geochemical properties. For a 16 year weighted mean residence time, STA generates the observed 6 year lead peak lag. Evidently, significant levels of nondegradable, particle-associated contaminants can persist in Florida Bay for many decades following elimination of external inputs. Present results, in combination with STA model analysis of previously reported radionuclide profiles, suggest that decade-scale time averaging may occur widely in recent coastal marine sedimentary environments. Copyright 2000 by the American Geophysical Union.
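
    The reservoir mechanism behind the lag can be sketched with a one-box model: constituents pass through a reservoir with a ~16 year mean residence time before reaching permanent sediments, which delays the recorded peak relative to the atmospheric one. The linear rise/fall atmospheric input history below is an assumption for illustration, not the loading history used in the paper:

```python
import numpy as np

# One-box system time averaging (STA) sketch: dR/dt = input - R/tau,
# with the flux to permanent sediments given by R/tau.
years = np.arange(1940, 2001)
atm = np.where(years <= 1972,
               (years - 1940) / 32.0,                          # rise to 1972 peak
               np.maximum(1.0 - (years - 1972) / 13.0, 0.05))  # post-1972 decline
tau = 16.0                                  # assumed mean residence time, years
reservoir = np.zeros_like(atm)
for i in range(1, len(years)):
    # forward Euler with 1 yr steps
    reservoir[i] = reservoir[i - 1] + atm[i - 1] - reservoir[i - 1] / tau
flux_to_sediment = reservoir / tau
lag = years[np.argmax(flux_to_sediment)] - 1972
print(lag)   # sediment peak lags the 1972 atmospheric peak by several years
```

    With this assumed input shape the simulated lag comes out close to the ~6 year lead-peak lag reported above, illustrating how a slow reservoir shifts and smooths the atmospheric signal.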

  5. Time-averaged molluscan death assemblages: Palimpsests of richness, snapshots of abundance

    NASA Astrophysics Data System (ADS)

    Kidwell, Susan M.

    2002-09-01

    Field tests that compare living communities to associated dead remains are the primary means of estimating the reliability of biological information in the fossil record; such tests also provide insights into the dynamics of skeletal accumulation. Contrary to expectations, molluscan death assemblages capture a strong signal of living species' rank-order abundances. This finding, combined with independent evidence for exponential postmortem destruction of dead cohorts, argues that, although the species richness of a death assemblage may be a time-averaged palimpsest of the habitat (molluscan death assemblages contain, on average, ∼25% more species than any single census of the local live community, after sample-size standardization), species' relative-abundance data from the same assemblage probably constitute a much higher acuity record dominated by the most recent dead cohorts (e.g., from the past few hundred years or so, rather than the several thousand years recorded by the total assemblage and usually taken as the acuity of species-richness information). The pervasive excess species richness of molluscan death assemblages requires further analysis and modeling to discriminate among possible sources. However, time averaging alone cannot be responsible unless rare species (species with low rates of dead-shell production) are collectively more durable (have longer taphonomic half-lives) than abundant species. Species richness and abundance data thus appear to present fundamentally different taphonomic qualities for paleobiological analysis. Relative-abundance information is more snapshot-like and thus taphonomically more straightforward than expected, especially compared to the complex origins of dead-species richness.

  6. School Turnaround Teachers: Selection Toolkit. Part of the School Turnaround Collection from Public Impact

    ERIC Educational Resources Information Center

    Public Impact, 2008

    2008-01-01

    This toolkit includes these separate sections: (1) Selection Preparation Guide; (2) Day-of-Interview Tools; (3) Candidate Rating Tools; and (4) Candidate Comparison and Decision Tools. Each of the sections is designed to be used at different stages of the selection process. The first section provides turnaround teacher competencies that are the…

  7. Autocorrelation-based time synchronous averaging for condition monitoring of planetary gearboxes in wind turbines

    NASA Astrophysics Data System (ADS)

    Ha, Jong M.; Youn, Byeng D.; Oh, Hyunseok; Han, Bongtae; Jung, Yoongho; Park, Jungho

    2016-03-01

    We propose autocorrelation-based time synchronous averaging (ATSA) to cope with the challenges associated with the current practice of time synchronous averaging (TSA) for planet gears in planetary gearboxes of wind turbine (WT). An autocorrelation function that represents physical interactions between the ring, sun, and planet gears in the gearbox is utilized to define the optimal shape and range of the window function for TSA using actual kinetic responses. The proposed ATSA offers two distinctive features: (1) data-efficient TSA processing and (2) prevention of signal distortion during the TSA process. It is thus expected that an order analysis with the ATSA signals significantly improves the efficiency and accuracy in fault diagnostics of planet gears in planetary gearboxes. Two case studies are presented to demonstrate the effectiveness of the proposed method: an analytical signal from a simulation and a signal measured from a 2 kW WT testbed. It can be concluded from the results that the proposed method outperforms conventional TSA methods in condition monitoring of the planetary gearbox when the amount of available stationary data is limited.

  8. Time-averaged properties of unstable periodic orbits and chaotic orbits in ordinary differential equation systems.

    PubMed

    Saiki, Yoshitaka; Yamada, Michio

    2009-01-01

    It has recently been found in some dynamical systems in fluid dynamics that only a few unstable periodic orbits (UPOs) with low periods can give good approximations to the mean properties of turbulent (chaotic) solutions. By employing three chaotic systems described by ordinary differential equations, we compare time-averaged properties of a set of UPOs and those of a set of segments of chaotic orbits. For every chaotic system we study, the distributions of a time average of a dynamical variable along UPOs with lower and higher periods are similar to each other and the variance of the distribution is small, in contrast with that along chaotic segments. The distribution seems to converge to some limiting distribution with nonzero variance as the period of the UPO increases, whereas that along chaotic orbits tends to converge to a delta-like distribution. These properties seem to underlie why only a few UPOs with low periods can give good mean statistical properties in dynamical systems in fluid dynamics.

  9. Average weighted trapping time of the node- and edge-weighted fractal networks

    NASA Astrophysics Data System (ADS)

    Dai, Meifeng; Ye, Dandan; Hou, Jie; Xi, Lifeng; Su, Weiyi

    2016-10-01

    In this paper, we study the trapping problem in node- and edge-weighted fractal networks with underlying geometries, focusing on a particular case with a perfect trap located at the central node. We derive exact analytic formulas for the average weighted trapping time (AWTT), the average of the node-to-trap mean weighted first-passage time over the whole network, in terms of the network size Ng, the number of copies s, the node-weight factor w and the edge-weight factor r. The obtained result shows that in a large network the AWTT grows as a power-law function of the network size Ng with exponent θ(s, r, w) = log_s(srw^2) when srw^2 ≠ 1. When srw^2 = 1, the AWTT grows with increasing network size Ng as log Ng. This means that the efficiency of the trapping process depends on three main parameters: the number of copies s > 1, the node-weight factor 0 < w ≤ 1, and the edge-weight factor 0 < r ≤ 1. The smaller the value of srw^2, the more efficient the trapping process.

  10. Calculations of the time-averaged local heat transfer coefficients in circulating fluidized bed

    SciTech Connect

    Dai, T.H.; Qian, R.Z.; Ai, Y.F.

    1999-04-01

    The great potential to burn a wide variety of fuels and the reduced emission of pollutant gases, mainly SO_x and NO_x, have inspired investigators around the world to conduct research on circulating fluidized bed (CFB) technology at a brisk pace. An accurate understanding of heat transfer to the bed walls is required for the proper design of CFB boilers. To develop an optimum economic design of the boiler, it is also necessary to know how the heat transfer coefficient depends on different design and operating parameters. It is impossible to perform experiments under all operating conditions, so mathematical model prediction is a valuable alternative. Based on the cluster renewal theory of heat transfer in circulating fluidized beds, a mathematical model for predicting the time-averaged local bed-to-wall heat transfer coefficients is developed. The effects of the axial distribution of the bed density on the time-averaged local heat transfer coefficients are taken into account by dividing the bed into a series of sections along its height. Assumptions are made about the formation of clusters and their falling process on the wall. The model predictions are in acceptable agreement with published data.

  11. Bose-Einstein condensation in large time-averaged optical ring potentials

    NASA Astrophysics Data System (ADS)

    Bell, Thomas A.; Glidden, Jake A. P.; Humbert, Leif; Bromley, Michael W. J.; Haine, Simon A.; Davis, Matthew J.; Neely, Tyler W.; Baker, Mark A.; Rubinsztein-Dunlop, Halina

    2016-03-01

    Interferometric measurements with matter waves are established techniques for sensitive gravimetry, rotation sensing, and measurement of surface interactions, but compact interferometers will require techniques based on trapped geometries. In a step towards the realisation of matter wave interferometers in toroidal geometries, we produce a large, smooth ring trap for Bose-Einstein condensates using rapidly scanned time-averaged dipole potentials. The trap potential is smoothed by using the atom distribution as input to an optical intensity correction algorithm. Smooth rings with a diameter up to 300 μm are demonstrated. We experimentally observe and simulate the dispersion of condensed atoms in the resulting potential, with good agreement serving as an indication of trap smoothness. Under time of flight expansion we observe low energy excitations in the ring, which serves to constrain the lower frequency limit of the scanned potential technique. The resulting ring potential will have applications as a waveguide for atom interferometry and studies of superfluidity.

  12. Asynchronous H∞ filtering for linear switched systems with average dwell time

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Zhang, Hongbin; Wang, Gang; Dang, Chuangyin

    2016-09-01

    This paper is concerned with the H∞ filtering problem for a class of continuous-time linear switched systems with asynchronous behaviour, where 'asynchronous' means that the switching of the filters to be designed lags the switching of the system modes. By using Lyapunov-like functions and the average dwell time technique, a sufficient condition is obtained that guarantees asymptotic stability with a weighted H∞ performance index for the filtering error system. Moreover, the results are formulated as linear matrix inequalities that are numerically feasible. As a result, the filter design problem is solved. Finally, an illustrative numerical example is presented to show the effectiveness of the results.
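
    The average dwell time condition used in such results states that a switching signal with switch instants t_1 < t_2 < ... has average dwell time τ_a if the number of switches on any interval satisfies N(s, t) ≤ N0 + (t − s)/τ_a for some chatter bound N0. A brute-force check of a given switching sequence against this definition, as a minimal sketch (the example sequences are assumptions):

```python
import numpy as np

def has_average_dwell_time(switch_times, tau_a, N0):
    """Check N(s, t) <= N0 + (t - s)/tau_a over all pairs of switch instants."""
    times = np.asarray(switch_times, dtype=float)
    for i in range(len(times)):
        for j in range(i, len(times)):
            n_switches = j - i + 1          # switches in the window [times[i], times[j]]
            if n_switches > N0 + (times[j] - times[i]) / tau_a:
                return False
    return True

# Switches spaced 1 s apart satisfy tau_a = 1 with chatter bound N0 = 1:
print(has_average_dwell_time([0.0, 1.0, 2.0, 3.0], tau_a=1.0, N0=1.0))  # True
# Four switches in 0.3 s violate the same bound:
print(has_average_dwell_time([0.0, 0.1, 0.2, 0.3], tau_a=1.0, N0=1.0))  # False
```

    In the stability results above, τ_a appears as a lower bound: slow-enough switching (large τ_a) lets the Lyapunov-like functions decay between switches despite the asynchronous filter lag.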

  13. Time-averaged flow over a hydrofoil at high Reynolds number

    NASA Astrophysics Data System (ADS)

    Bourgoyne, Dwayne A.; Hamel, Joshua M.; Ceccio, Steven L.; Dowling, David R.

    2003-12-01

    At high Reynolds number, the flow of an incompressible viscous fluid over a lifting surface is a rich blend of fluid dynamic phenomena. Here, boundary layers formed at the leading edge develop over both the suction and pressure sides of the lifting surface, transition to turbulence, separate near the foil's trailing edge, combine in the near wake, and eventually form a turbulent far-field wake. The individual elements of this process have been the subject of much prior work. However, controlled experimental investigations of these flow phenomena and their interaction on a lifting surface at Reynolds numbers typical of heavy-lift aircraft wings or full-size ship propellers (chord-based Reynolds numbers Re_C ~ 10^7-10^8) are largely unavailable. This paper presents results from an experimental effort to identify and measure the dominant features of the flow over a two-dimensional hydrofoil at nominal Re_C values from near one million to more than 50 million. The experiments were conducted in the US Navy's William B. Morgan Large Cavitation Channel with a solid-bronze hydrofoil (2.1 m chord, 3.0 m span, 17 cm maximum thickness) at flow speeds from 0.25 to 18.3 m/s. The foil section, a modified NACA 16 with a pressure side that is nearly flat and a suction side that terminates in a blunt trailing-edge bevel, approximates the cross-section of a generic naval propeller blade. Time-averaged flow-field measurements drawn from laser-Doppler velocimetry, particle-imaging velocimetry, and static pressure taps were made for two trailing-edge bevel angles (44° and 56°). These velocity and pressure measurements were concentrated in the trailing-edge and near-wake regions, but also include flow conditions upstream and far downstream of the foil, as well as static pressure distributions on the foil surface and test section walls. Observed Reynolds-number variations in the time-averaged flow over the foil are traced to changes in suction-side boundary

  14. ATS simultaneous and turnaround ranging experiments

    NASA Technical Reports Server (NTRS)

    Watson, J. S.; Putney, B. H.

    1971-01-01

    This report explains the data reduction and spacecraft position determination used in conjunction with two ATS experiments - Trilateration and Turnaround Ranging - and describes in detail a multilateration program that is used for part of the data reduction process. The process described is for the determination of the inertial position of the satellite, and for formatting input for related programs. In the trilateration procedure, a geometric determination of satellite position is made from near-simultaneous range measurements made by three different tracking stations. Turnaround ranging involves two stations: the master station transmits the signal to the satellite, which retransmits it to the slave station; the slave station turns the signal around and sends it back to the satellite, which in turn retransmits it to the master station. The results of the satellite position computations using the multilateration program are compared to the results of other position determination programs used at Goddard. All programs give nearly the same results, which indicates that, because of its simplicity and computational speed, the trilateration technique is useful in obtaining spacecraft positions for near-synchronous satellites.
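The trilateration step described above, a geometric position fix from simultaneous range measurements, can be sketched in two dimensions, where pairwise subtraction of the circle equations yields a linear system (a hypothetical illustration; the actual program works with three tracking stations in three dimensions):

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """2-D position fix from three range measurements.

    Subtracting the circle equations |p - pi|^2 = ri^2 pairwise
    eliminates the quadratic terms and leaves a 2x2 linear system.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical stations at known positions; ranges to a target at (3, 4):
target = (3.0, 4.0)
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [math.dist(s, target) for s in stations]
est = trilaterate(stations[0], ranges[0], stations[1], ranges[1],
                  stations[2], ranges[2])  # recovers (3.0, 4.0)
```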

  15. Average time spent by Lévy flights and walks on an interval with absorbing boundaries.

    PubMed

    Buldyrev, S V; Havlin, S; Kazakov, A Y; da Luz, M G; Raposo, E P; Stanley, H E; Viswanathan, G M

    2001-10-01

    We consider a Lévy flyer of order alpha that starts from a point x(0) on an interval [0,L] with absorbing boundaries. We find a closed-form expression for the average number of flights the flyer takes and the total length of the flights it travels before it is absorbed. These two quantities are equivalent to the mean first passage times for Lévy flights and Lévy walks, respectively. Using fractional differential equations with a Riesz kernel, we find exact analytical expressions for both quantities in the continuous limit. We show that numerical solutions for the discrete Lévy processes converge to the continuous approximations in all cases except the case of alpha --> 2 and the cases of x(0) --> 0 and x(0) --> L. For alpha > 2, when the second moment of the flight length distribution exists, our result is replaced by known results of classical diffusion. We show that if x(0) is placed in the vicinity of the absorbing boundaries, the average total length has a minimum at alpha = 1, corresponding to the Cauchy distribution. We discuss the relevance of this result to the problem of foraging, which has received recent attention in the statistical physics literature.
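The average number of flights before absorption can also be estimated by direct Monte Carlo simulation; the sketch below assumes power-law flight lengths with P(l > s) ~ s^(-alpha) and minimum length 1, and all parameter values are illustrative only:

```python
import random

def mean_flights(alpha, x0, L, trials=2000, seed=1):
    """Monte Carlo estimate of the average number of flights a Lévy
    flyer of order alpha takes before absorption on [0, L], starting
    from x0.  Flight lengths are Pareto(alpha) with minimum 1 via
    inverse-transform sampling; directions are symmetric."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x, n = x0, 0
        while 0 < x < L:
            step = rng.random() ** (-1.0 / alpha)   # Pareto(alpha) length
            x += step if rng.random() < 0.5 else -step
            n += 1
        total += n
    return total / trials

# Starting near a boundary should absorb in fewer flights than the center:
m_edge = mean_flights(1.0, 1.0, 100.0)
m_center = mean_flights(1.0, 50.0, 100.0)
```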

  16. Long-time average spectrograms of dysphonic voices before and after therapy.

    PubMed

    Kitzing, P; Akerlund, L

    1993-01-01

    Tape recordings before and after successful voice therapy from 174 subjects with non-organic voice disorders (functional dysphonia) were analysed by long-time averaged voice spectrograms (LTAS). In female as well as in male voices there was a statistically significant increase in level in the first formant region of the spectra. In the female voices there was also an increase in level in the region of the fundamental. The LTAS were compared to the results of a perceptual evaluation of the voice qualities by a small group of expert listeners. There was no significant change of the LTAS in voices with negligible amelioration after therapy. In the voices where the change after therapy was perceptually rated as considerable, the LTAS showed only an increase in intensity, but the general configuration of the spectral envelope remained unchanged. There was only a weakly positive correlation between the quality ratings and parameters of the spectra.

  17. Shear banding in a lyotropic lamellar phase. I. Time-averaged velocity profiles

    NASA Astrophysics Data System (ADS)

    Salmon, Jean-Baptiste; Manneville, Sébastien; Colin, Annie

    2003-11-01

    Using velocity profile measurements based on dynamic light scattering and coupled with structural and rheological measurements in a Couette cell, we present evidence for a shear banding scenario in the shear flow of the onion texture of a lyotropic lamellar phase. Time-averaged measurements clearly show the presence of structural shear banding in the vicinity of a shear-induced transition, associated with the nucleation and growth of a highly sheared band in the flow. Our experiments also reveal the presence of slip at the walls of the Couette cell. Using a simple mechanical approach, we demonstrate that our data confirm the classical assumption of the shear banding picture, in which the interface between bands lies at a given stress σ*. We also outline the presence of large temporal fluctuations of the flow field, which are the subject of the second part of this paper [Salmon et al., Phys. Rev. E 68, 051504 (2003)].

  18. Thermodynamic formula for the cumulant generating function of time-averaged current.

    PubMed

    Nemoto, Takahiro; Sasa, Shin-ichi

    2011-12-01

    The cumulant generating function of time-averaged current is studied from an operational viewpoint. Specifically, for interacting Brownian particles under nonequilibrium conditions, we show that the first derivative of the cumulant generating function is equal to the expectation value of the current in a modified system with an extra force added, where the modified system is characterized by a variational principle. The formula reminds us of Einstein's fluctuation theory in equilibrium statistical mechanics. Furthermore, since the formula leads to the fluctuation-dissipation relation when the linear response regime is focused on, it is regarded as an extension of the linear response theory to that valid beyond the linear response regime. The formula is also related to previously known theories such as the Donsker-Varadhan theory, the additivity principle, and the least dissipation principle, but it is not derived from them. Examples of its application are presented for a driven Brownian particle on a ring subject to a periodic potential.

  19. Applicability of Time-Averaged Holography for Micro-Electro-Mechanical System Performing Non-Linear Oscillations

    PubMed Central

    Palevicius, Paulius; Ragulskis, Minvydas; Palevicius, Arvydas; Ostasevicius, Vytautas

    2014-01-01

    Optical investigation of movable microsystem components using time-averaged holography is presented in this paper. It is shown that even a harmonic excitation of a non-linear microsystem may result in an unpredictable chaotic motion. Analytical relations between the parameters of the chaotic oscillations and the formation of time-averaged fringes provide a deeper insight into computational and experimental interpretation of time-averaged MEMS holograms. PMID:24451467

  20. Using Competencies to Improve School Turnaround Principal Success

    ERIC Educational Resources Information Center

    Steiner, Lucy; Hassel, Emily Ayscue

    2011-01-01

    This paper aims first to shed light on one element of leadership: the characteristics--or "competencies"--of turnaround leaders who succeed in driving rapid, dramatic change. Second, it recounts the elements of support that districts must provide these leaders to enable and sustain a portfolio of successful school turnarounds.…

  1. The State Role in School Turnaround: Emerging Best Practices

    ERIC Educational Resources Information Center

    Rhim, Lauren Morando, Ed.; Redding, Sam, Ed.

    2014-01-01

    This publication explores the role of the state education agency (SEA) in school turnaround efforts. An emphasis is placed on practical application of research and best practices related to the SEA's critical leadership role in driving and supporting successful school turnaround efforts. The publication is organized around the four goals of…

  2. "Turnaround" as Shock Therapy: Race, Neoliberalism, and School Reform

    ERIC Educational Resources Information Center

    Johnson, Amanda Walker

    2013-01-01

    "Turnaround" strategies of educational reform promise that closing, reconstituting, privatizing, and reopening schools will bring miraculous results. Questioning the implications, this article situates "turnaround" strategies locally, following the closure of a predominantly minority high school in 2008, in Austin, Texas.…

  3. Turnaround as Reform: Opportunity for Meaningful Change or Neoliberal Posturing?

    ERIC Educational Resources Information Center

    Mette, Ian M.

    2013-01-01

    This study explores the neoliberal agenda of turnaround school reform efforts in America by examining the application and transformation of a Midwest State Turnaround Schools Project for the public school system. Perceptions of administrators and state-level policy actors are considered. Data were collected from 13 participants during the…

  4. Fault detection and isolation for discrete-time switched linear systems based on average dwell-time method

    NASA Astrophysics Data System (ADS)

    Li, Jian; Yang, Guang-Hong

    2013-12-01

    This article is concerned with the problem of fault detection and isolation (FDI) for discrete-time switched linear systems based on the average dwell-time method. The proposed FDI framework consists of a bank of FDI filters, which are divided into N groups for N subsystems. The FDI filters belonging to one group correspond to the faults for a subsystem, and generate a residual signal to guarantee the fault sensitivity performance for that subsystem, the fault attenuation performance for other subsystems and the disturbance attenuation performance for all subsystems. Different from employing weighting matrices to restrict the frequency ranges of faults for each subsystem, the finite-frequency H_- performance for switched systems is first defined. Sufficient conditions are established by linear matrix inequalities (LMIs), and the filter gains are characterised in terms of the solution of a convex optimisation problem. Two examples are used to demonstrate the effectiveness of the proposed design method.

  5. Modified box dimension and average weighted receiving time on the weighted fractal networks

    PubMed Central

    Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xi, Lifeng; Su, Weiyi

    2015-01-01

    In this paper a family of weighted fractal networks, in which the weights of the edges have been assigned different values with a certain scale, is studied. For these weighted fractal networks the definition of a modified box dimension is introduced, and a rigorous proof of its existence is given. Then, the modified box dimension depending on the weight factor and the number of copies is deduced. We assume that the walker, at each step, starting from its current node, moves uniformly to any of its nearest neighbors; the weighted time between two adjacent nodes is the weight of the edge connecting them. The average weighted receiving time (AWRT) is then defined correspondingly. The obtained result is remarkable: in the large network, when the weight factor is larger than the number of copies, the AWRT grows as a power-law function of the network order, with the exponent being the reciprocal of the modified box dimension. This result shows that the efficiency of the trapping process depends on the modified box dimension: the larger the value of the modified box dimension, the more efficient the trapping process is. PMID:26666355
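A single step of the weighted walk described above, moving to a neighbor with probability proportional to the connecting edge weight, can be sketched as follows (the toy neighbor map is hypothetical):

```python
import random

def weighted_step(neighbors, rng):
    """One step of the walk: choose a neighbor with probability
    proportional to the weight of the connecting edge.
    `neighbors` maps neighbor node -> edge weight (toy data)."""
    total = sum(neighbors.values())
    u = rng.random() * total
    acc = 0.0
    for node, weight in neighbors.items():
        acc += weight
        if u <= acc:
            return node
    return node  # guard against floating-point round-off

rng = random.Random(0)
edges = {"a": 1.0, "b": 3.0}       # hypothetical neighborhood
picks = [weighted_step(edges, rng) for _ in range(10000)]
frac_b = picks.count("b") / len(picks)  # should be near 3/4
```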

  6. Respiratory sinus arrhythmia: time domain characterization using autoregressive moving average analysis

    NASA Technical Reports Server (NTRS)

    Triedman, J. K.; Perrott, M. H.; Cohen, R. J.; Saul, J. P.

    1995-01-01

    Fourier-based techniques are mathematically noncausal and are therefore limited in their application to feedback-containing systems, such as the cardiovascular system. In this study, a mathematically causal time domain technique, autoregressive moving average (ARMA) analysis, was used to parameterize the relations of respiration and arterial blood pressure to heart rate in eight humans before and during total cardiac autonomic blockade. Impulse-response curves thus generated showed the relation of respiration to heart rate to be characterized by an immediate increase in heart rate of 9.1 +/- 1.8 beats.min-1.l-1, followed by a transient mild decrease in heart rate to -1.2 +/- 0.5 beats.min-1.l-1 below baseline. The relation of blood pressure to heart rate was characterized by a slower decrease in heart rate of -0.5 +/- 0.1 beats.min-1.mmHg-1, followed by a gradual return to baseline. Both of these relations nearly disappeared after autonomic blockade, indicating autonomic mediation. Maximum values obtained from the respiration to heart rate impulse responses were also well correlated with frequency domain measures of high-frequency "vagal" heart rate control (r = 0.88). ARMA analysis may be useful as a time domain representation of autonomic heart rate control for cardiovascular modeling.
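The impulse-response curves described above follow from driving the fitted ARMA difference equation with a unit impulse; the sketch below uses hypothetical coefficients, not those estimated in the study:

```python
def arma_impulse_response(ar, ma, n=8):
    """Impulse response of an ARMA difference equation
        y[t] = sum_i ar[i] * y[t-1-i] + sum_j ma[j] * x[t-j],
    computed by driving it with a unit impulse x = (1, 0, 0, ...)."""
    y = [0.0] * n
    for t in range(n):
        acc = ma[t] if t < len(ma) else 0.0   # MA part: x[t-j] = 1 only when j == t
        for i, a in enumerate(ar):            # AR part: feedback from past outputs
            if t - 1 - i >= 0:
                acc += a * y[t - 1 - i]
        y[t] = acc
    return y

# Hypothetical first-order model: the response decays geometrically.
h = arma_impulse_response(ar=[0.5], ma=[1.0], n=5)  # 1, 0.5, 0.25, ...
```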

  7. An automated methodology for performing time synchronous averaging of a gearbox signal without speed sensor

    NASA Astrophysics Data System (ADS)

    Combet, F.; Gelman, L.

    2007-08-01

    In this paper we extend a sensorless algorithm proposed by Bonnardot et al. for angular resampling of the acceleration signal of a gearbox subject to limited speed fluctuation. The previous algorithm estimates the shaft angular position by narrow-band demodulation of one harmonic of the mesh frequency, with the harmonic chosen by trial and error. This paper proposes a solution to select automatically the mesh harmonic used for the shaft angular position estimation. To do so, it evaluates the local signal-to-noise ratio associated with the mesh harmonic and deduces the associated low-pass filtering effect on the time synchronous average (TSA) of the signal. Results are compared with the TSA obtained when using a tachometer on an industrial gearbox used for wastewater treatment. The proposed methodology requires only the knowledge of an approximate value of the running speed and the number of teeth of the gears. It forms an automated scheme which can prove useful for real-time diagnostic applications based on TSA where speed measurement is not possible or not advisable due to difficult environmental conditions.
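The time synchronous average itself is simple once the signal has been resampled to a fixed number of samples per shaft revolution: slice the record into whole revolutions and average them sample-by-sample, which attenuates components not synchronous with the shaft. A minimal sketch, assuming a known, constant samples-per-revolution (the paper's contribution is estimating the angular position without a tachometer):

```python
def time_synchronous_average(signal, samples_per_rev):
    """Time synchronous average (TSA): slice the signal into whole
    revolutions and average them sample-by-sample.  Shaft-synchronous
    components survive; asynchronous content averages toward zero."""
    n_revs = len(signal) // samples_per_rev
    return [
        sum(signal[r * samples_per_rev + i] for r in range(n_revs)) / n_revs
        for i in range(samples_per_rev)
    ]

# A perfectly shaft-synchronous pattern is recovered exactly:
tsa = time_synchronous_average([1, 2, 3, 4] * 5, 4)
```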

  8. Windows of National Opportunity: An Excerpt from the Center on School Turnaround's Report on State Supports for Turnaround

    ERIC Educational Resources Information Center

    Scott, Caitlin; Lasley, Nora

    2014-01-01

    In 2014, state and national leaders found many aspects of turning around America's low-performing schools even more daunting than in the previous year. These views were revealed in the Center on School Turnaround's (CST's) 2014 February/March survey of school turnaround leaders in State Education Agencies (SEA) and directors of the nation's…

  9. New device for time-averaged measurement of volatile organic compounds (VOCs).

    PubMed

    Santiago Sánchez, Noemí; Tejada Alarcón, Sergio; Tortajada Santonja, Rafael; Llorca-Pórcel, Julio

    2014-07-01

    Contamination by volatile organic compounds (VOCs) in the environment is an increasing concern, since these compounds are harmful to ecosystems and even to human health; indeed, many of them are considered toxic and/or carcinogenic. The main sources of pollution are very diffuse focal points such as industrial discharges, urban water and accidental spills, as these compounds may be present in many products and processes (i.e., paints, fuels, petroleum products, raw materials, solvents, etc.), making their control difficult. The presence of these compounds in groundwater, influenced by discharges, leachate or effluents of WWTPs, is especially problematic. In recent years, legislation has become increasingly restrictive with the emissions of these compounds. From an environmental point of view, the European Water Framework Directive (2000/60/EC) sets out some VOCs as priority substances. This binding directive sets guidelines to control compounds such as benzene, chloroform, and carbon tetrachloride at a very low level of concentration and with a very high frequency of analysis. The presence of VOCs in the various effluents is often highly variable and discontinuous, since it depends on the variability of the sources of contamination. Therefore, in order to have complete information on the presence of these contaminants and to take preventive measures effectively, it is important to monitor them continuously, which requires the development of new devices that obtain average concentrations over time. To date, due to technical limitations, there are no devices on the market that allow efficient continuous sampling of these compounds with detection limits low enough to meet the legal requirements and that are capable of detecting very sporadic, short-duration discharges. LABAQUA has developed a device which consists of a small peristaltic pump controlled by an electronic board that governs its operation by pre-programming. A constant flow passes

  10. Cochlear Modeling Using Time-Averaged Lagrangian Method: Comparison with VBM, PST, and ZC Measurements

    NASA Astrophysics Data System (ADS)

    Yoon, Y.; Kim, N.; Puria, S.; Steele, C. R.

    2009-02-01

    In this work, basilar membrane velocity (VBM), scala tympani intracochlear pressure (PST), and cochlear input impedances (Zc) for gerbil and chinchilla are computed with a three-dimensional hydrodynamic cochlear model using 1) a time-averaged Lagrangian, 2) a push-pull mechanism in the active case, and 3) the complex anatomy of the cochlear scalae obtained by micro computed tomography (μCT) scanning and 3-D reconstruction of gerbil and chinchilla temporal bones. The objective of this work is to compare the calculations of the present model with physiological measurements on gerbil and chinchilla cochleae, namely VBM (Ren and Nuttall [1]), PST (Olson [2]), and ZC (Decraemer et al. [3], Songer and Rosowski [4], Ruggero et al. [5]). A WKB asymptotic method combined with Fourier series expansions is used to provide an efficient simulation. The VBM and PST simulation results for the gerbil cochlea show good agreement with the physiological measurements in both magnitude and phase, without large phase excursion. The ZC simulations from the gerbil and chinchilla models show reasonably good agreement with measurement.

  11. Static and dynamic micro deformable mirror characterization by phase-shifting and time-averaged interferometry

    NASA Astrophysics Data System (ADS)

    Liotard, Arnaud; Zamkotsian, Frédéric

    2004-06-01

    The micro-opto-electro-mechanical systems (MOEMS), based on mature micro-electronics technologies, are essential in the design of future astronomical instruments. One of these key components is the micro deformable mirror for wave-front correction. Very challenging topics like the search for exo-planets could greatly benefit from this technology. Design, realization and characterization of micro deformable mirrors are under way at Laboratoire d'Astrophysique de Marseille (LAM) in collaboration with Laboratoire d'Analyse et d'Architecture des Systèmes (LAAS). In order to measure the surface shape and the deformation parameters during operation of these devices, a high-resolution Twyman-Green interferometer has been developed. Measurements have been done on a tiltable micro-mirror (170×100 μm2) designed by LAM-LAAS and realized by an American foundry, and also on an OKO deformable mirror (15 mm diameter). Static characterization is made by phase-shifting interferometry and dynamic measurements have been made by quantitative time-averaged interferometry. The OKO mirror has an actuator stroke of 370±10 nm for 150 V applied and a resonant frequency of 1170±50 Hz, and the tiltable mirror has a rotation cut-off frequency of 31±3 kHz.

  12. Static and dynamic microdeformable mirror characterization by phase-shifting and time-averaged interferometry

    NASA Astrophysics Data System (ADS)

    Liotard, Arnaud; Muratet, Sylvaine; Zamkotsian, Frédéric; Fourniols, Jean-Yves

    2005-01-01

    Since micro deformable mirrors based on Micro-Opto-Electro-Mechanical Systems (MOEMS) technology will be essential in next-generation adaptive optics systems, we are designing, realizing, characterizing and modeling this key component. Actuators and a continuous-membrane micro deformable mirror (3×3 actuators, 600×600 μm2) have been designed in-house and processed by surface micromachining in the Cronos foundry. A dedicated characterization bench has been developed for the complete analysis. This Twyman-Green interferometer allows high in-plane resolution (4 μm) or a large field of view (40 mm). Out-of-plane measurements are performed with phase-shifting interferometry, showing highly repeatable results (standard deviation < 5 nm). Features such as optical quality or electro-mechanical behavior are extracted from these high-precision three-dimensional component maps, and FEM models can be fitted. Dynamic analysis, such as vibration modes and cut-off frequency, is realized with time-averaged interferometry. The deformable mirror exhibits a 350 nm stroke for 35 volts on the central actuator. This limited stroke could be overcome by changing the component materials; promising actuators are made with polymers.

  13. Static and dynamic micro deformable mirror characterization by phase-shifting and time-averaged interferometry

    NASA Astrophysics Data System (ADS)

    Liotard, Arnaud; Zamkotsian, Frederic

    2004-09-01

    Since micro deformable mirrors based on Micro-Opto-Electro-Mechanical Systems (MOEMS) technology will be essential in next-generation adaptive optics systems, we are designing, realizing and characterizing building blocks of this key component. An in-house-designed tiltable mirror (170×100 μm2) has been processed by surface micromachining in the Cronos foundry, and a dedicated characterization bench has been developed for the complete analysis of building blocks as well as operational deformable mirrors. This modular Twyman-Green interferometer allows high in-plane resolution (4 μm) or a large field of view (40 mm). Out-of-plane measurements are performed with phase-shifting interferometry, showing highly repeatable results (standard deviation < 5 nm). Features such as optical quality or electro-mechanical behavior are extracted from these high-precision three-dimensional component maps. Range is increased without losing accuracy by using two-wavelength phase-shifting interferometry, permitting measurements of large steps such as the 590 nm print-through steps caused by the Cronos process. Dynamic analysis, such as vibration modes and cut-off frequency, is realized with time-averaged interferometry. A rotation-mode frequency of 31±3 kHz for the micro tiltable mirror, and a resonance with tuned damping at 1.1 kHz for the commercial OKO deformable mirror, are revealed.

  14. Static and dynamic microdeformable mirror characterization by phase-shifting and time-averaged interferometry

    NASA Astrophysics Data System (ADS)

    Liotard, Arnaud; Muratet, Sylvaine; Zamkotsian, Frédéric; Fourniols, Jean-Yves

    2004-12-01

    Since micro deformable mirrors based on Micro-Opto-Electro-Mechanical Systems (MOEMS) technology will be essential in next-generation adaptive optics systems, we are designing, realizing, characterizing and modeling this key component. Actuators and a continuous-membrane micro deformable mirror (3×3 actuators, 600×600 µm2) have been designed in-house and processed by surface micromachining in the Cronos foundry. A dedicated characterization bench has been developed for the complete analysis. This Twyman-Green interferometer allows high in-plane resolution (4 µm) or a large field of view (40 mm). Out-of-plane measurements are performed with phase-shifting interferometry, showing highly repeatable results (standard deviation < 5 nm). Features such as optical quality or electro-mechanical behavior are extracted from these high-precision three-dimensional component maps, and FEM models can be fitted. Dynamic analysis, such as vibration modes and cut-off frequency, is realized with time-averaged interferometry. The deformable mirror exhibits a 350 nm stroke for 35 volts on the central actuator. This limited stroke could be overcome by changing the component materials; promising actuators are made with polymers.

  15. Time-weighted average water sampling in Lake Ontario with solid-phase microextraction passive samplers.

    PubMed

    Ouyang, Gangfeng; Zhao, Wennan; Bragg, Leslie; Qin, Zhipei; Alaee, Mehran; Pawliszyn, Janusz

    2007-06-01

    In this study, three types of solid-phase microextraction (SPME) passive samplers, including a fiber-retracted device, a polydimethylsiloxane (PDMS) rod and a PDMS membrane, were evaluated to determine the time-weighted average (TWA) concentrations of polycyclic aromatic hydrocarbons (PAHs) in Hamilton Harbor (the western tip of Lake Ontario, ON, Canada). Field trials demonstrated that these types of SPME samplers are suitable for the long-term monitoring of organic pollutants in water. These samplers possess all of the advantages of SPME: they are solvent-free; sampling, extraction and concentration are combined into one step; and they can be directly injected into a gas chromatograph (GC) for analysis without further treatment. These samplers also address the additional needs of a passive sampling technique: they are economical, easy to deploy, and the TWA concentrations of target analytes can be obtained with one sampler. Moreover, the mass uptake of these samplers is independent of the face velocity, or the effect can be calibrated, which is desirable for long-term field sampling, especially when the convection conditions of the sampling environment are difficult to measure and calibrate. Among the three types of SPME samplers tested, the PDMS membrane possesses the highest surface-to-volume ratio, which results in the highest sensitivity and mass uptake and the lowest detection level.

  16. Holographic microscope for measuring displacements of vibrating microbeams using time-averaged, electro-optic holography

    NASA Astrophysics Data System (ADS)

    Brown, Gordon C.; Pryputniewicz, Ryszard J.

    1998-05-01

    An optical microscope, utilizing the principles of time-averaged hologram interferometry, is described for microelectromechanical systems (MEMS) applications. MEMS are devices fabricated via techniques such as microphotolithography to create miniature actuators and sensors. Many of these sensors are currently deployed in automotive applications which rely on the dynamic behavior of the sensor, e.g., airbag sensors, ride-monitoring suspension sensors, etc. Typical dimensions of current MEMS devices are measured in micrometers, a small fraction of the diameter of a human hair, and the current trend is to further decrease the size of MEMS devices to submicrometer dimensions. However, the smaller MEMS become, the more challenging it is to measure with accuracy the dynamic characteristics of these devices. An electro-optic holographic microscope (EOHM) for the purpose of studying the dynamic behavior of MEMS-type devices is described. Additionally, by performing phase measurements within an EOHM image, object displacements are determined, as illustrated by representative examples. With the EOHM, devices with surface sizes ranging from approximately 35 X 400 to 5 X 18 micrometers are studied while undergoing resonant vibrations at frequencies as high as 2 MHz.

  17. Time-averaged Turbulent Flow Characteristics over a Highly Spatially Heterogeneous Gravel-Bed

    NASA Astrophysics Data System (ADS)

    Sarkar, Sankar

    2016-10-01

    The present study focuses on the time-averaged turbulence characteristics over a highly spatially-heterogeneous gravel-bed. The time-averaged streamwise velocity, Reynolds shear and normal stresses, turbulent kinetic energy, higher-order moments of velocity fluctuations, length scales, and the turbulent bursting were measured over a gravel-bed with an array of larger gravels. It was observed that the turbulence characteristics do not vary significantly above the crest level of the array as compared to those below the array. The nondimensional streamwise velocity decreases considerably with a decrease in depth below the array. Below the array, the Reynolds shear stress (RSS) deviates from the gravity law of RSS distributions. Turbulence intensities reduce below the crest level of the gravel-bed. The third-order moments of velocity fluctuations increase below the crest level of the gravel-bed and give a clear indication of sweeps as the predominant event, which was further verified with the quadrant analysis plots. The turbulent length-scale values change significantly below the crest level of the gravel-bed.
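The Reynolds shear stress discussed above is the covariance of the streamwise and vertical velocity fluctuations, -ρ⟨u′w′⟩, computed from time series at a point. A minimal sketch (the density value and the sample series are assumptions):

```python
def reynolds_shear_stress(u, w, rho=1000.0):
    """Reynolds shear stress -rho * <u'w'> from velocity samples u, w.
    rho defaults to 1000 kg/m^3 for water (an assumed value)."""
    n = len(u)
    u_bar = sum(u) / n
    w_bar = sum(w) / n
    cov = sum((ui - u_bar) * (wi - w_bar) for ui, wi in zip(u, w)) / n
    return -rho * cov

# Anticorrelated fluctuations (typical of sweeps/ejections) give positive RSS:
rss = reynolds_shear_stress([1.0, 2.0, 3.0], [3.0, 2.0, 1.0])
```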

  18. Scaling of the Average Receiving Time on a Family of Weighted Hierarchical Networks

    NASA Astrophysics Data System (ADS)

    Sun, Yu; Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang

    2016-08-01

    In this paper, based on un-weighted hierarchical networks, a family of weighted hierarchical networks is introduced; the weight factor is denoted by r. The weighted hierarchical networks depend on the numbers of nodes in the complete bipartite graph, denoted by n1, n2 and n = n1 + n2. Assume that the walker, at each step, starting from its current node, moves to any of its neighbors with probability proportional to the weight of the edge linking them. We deduce the analytical expression of the average receiving time (ART). The obtained results display two regimes. In the large network, when nr > n1n2, the ART grows as a power-law function of the network size |V(G_k)| with exponent θ = log_n(nr/(n1n2)), 0 < θ < 1. This means that the smaller the value of θ, the more efficient the process of receiving information. When nr ≤ n1n2, the ART grows with increasing order |V(G_k)| as log_n|V(G_k)| or (log_n|V(G_k)|)^2.
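In the power-law regime, the stated exponent θ = log_n(nr/(n1·n2)) with n = n1 + n2 can be evaluated directly (the parameter values below are hypothetical):

```python
import math

def art_exponent(n1, n2, r):
    """ART power-law exponent theta = log_n(n*r / (n1*n2)), n = n1 + n2.
    Valid in the regime n*r > n1*n2 (parameter values are hypothetical)."""
    n = n1 + n2
    return math.log(n * r / (n1 * n2), n)

theta = art_exponent(2, 2, 2)  # log_4(8/4) = log_4(2) = 0.5
```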

  19. Time-dependent wave packet averaged vibrational frequencies from femtosecond stimulated Raman spectra

    NASA Astrophysics Data System (ADS)

    Wu, Yue-Chao; Zhao, Bin; Lee, Soo-Y.

    2016-02-01

    Femtosecond stimulated Raman spectroscopy (FSRS) on the Stokes side arises from a third order polarization, P(3)(t), which is given by an overlap of a first order wave packet, |Ψ2(1)(pu,t)>, prepared by a narrow band (ps) Raman pump pulse, Epu(t), on the upper electronic e2 potential energy surface (PES), with a second order wave packet, <Ψ1(2)(pr*,pu,t)|, that is prepared on the lower electronic e1 PES by a broadband (fs) probe pulse, Epr(t), acting on the first-order wave packet. In off-resonant FSRS, |Ψ2(1)(pu,t)> resembles the zeroth order wave packet |Ψ1(0)(t)> on the lower PES spatially, but with a force on |Ψ2(1)(pu,t)> along the coordinates of the reporter modes due to displacements in the equilibrium position, so that <Ψ1(2)(pr*,pu,t)| will oscillate along those coordinates, thus giving rise to similar oscillations in P(3)(t) with the frequencies of the reporter modes. So, by recovering P(3)(t) from the FSRS spectrum, we are able to deduce information on the time-dependent quantum-mechanical wave packet averaged frequencies, ω̄j(t), of the reporter modes j along the trajectory of |Ψ1(0)(t)>. The observable FSRS Raman gain is related to the imaginary part of P(3)(ω). The imaginary and real parts of P(3)(ω) are related by the Kramers-Kronig relation. Hence, from the FSRS Raman gain, we can obtain the complex P(3)(ω), whose Fourier transform then gives us the complex P(3)(t) to analyze for ω̄j(t). We apply the theory, first, to a two-dimensional model system with one conformational mode of low frequency and one reporter vibrational mode of higher frequency with good results, and then we apply it to the time-resolved FSRS spectra of the cis-trans isomerization of retinal in rhodopsin [P. Kukura et al., Science 310, 1006 (2005)]. We obtain the vibrational

  20. Time-dependent wave packet averaged vibrational frequencies from femtosecond stimulated Raman spectra.

    PubMed

    Wu, Yue-Chao; Zhao, Bin; Lee, Soo-Y

    2016-02-07

    Femtosecond stimulated Raman spectroscopy (FSRS) on the Stokes side arises from a third order polarization, P(3)(t), which is given by an overlap of a first order wave packet, |Ψ2(1)(pu,t)>, prepared by a narrow band (ps) Raman pump pulse, Epu(t), on the upper electronic e2 potential energy surface (PES), with a second order wave packet, <Ψ1(2)(pr(∗),pu,t)|, that is prepared on the lower electronic e1 PES by a broadband (fs) probe pulse, Epr(t), acting on the first-order wave packet. In off-resonant FSRS, |Ψ2(1)(pu,t)> resembles the zeroth order wave packet |Ψ1(0)(t)> on the lower PES spatially, but with a force on |Ψ2(1)(pu,t)> along the coordinates of the reporter modes due to displacements in the equilibrium position, so that <Ψ1(2)(pr(∗),pu,t)| will oscillate along those coordinates, thus giving rise to similar oscillations in P(3)(t) with the frequencies of the reporter modes. So, by recovering P(3)(t) from the FSRS spectrum, we are able to deduce information on the time-dependent quantum-mechanical wave packet averaged frequencies, ω̄j(t), of the reporter modes j along the trajectory of |Ψ1(0)(t)>. The observable FSRS Raman gain is related to the imaginary part of P(3)(ω). The imaginary and real parts of P(3)(ω) are related by the Kramers-Kronig relation. Hence, from the FSRS Raman gain, we can obtain the complex P(3)(ω), whose Fourier transform then gives us the complex P(3)(t) to analyze for ω̄j(t). We apply the theory, first, to a two-dimensional model system with one conformational mode of low frequency and one reporter vibrational mode of higher frequency with good results, and then we apply it to the time-resolved FSRS spectra of the cis-trans isomerization of retinal in rhodopsin [P. Kukura et al., Science 310, 1006 (2005)]. We obtain the vibrational frequency up-shift time constants for the C12-H wagging mode at 216 fs and for the C10-H wagging mode at 161 fs which are larger than for the C11-H wagging mode at 127 fs, i.e., the C11-H
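The Kramers-Kronig step described above can be sketched numerically: causality ties the real and imaginary parts of a response function together, so the full complex spectrum (and hence the time-domain signal) can be rebuilt from the imaginary part alone. The damped oscillation below is a made-up model signal standing in for P(3)(t), not the paper's data.

```python
import numpy as np

# Build a causal "polarization" p(t), keep only Im P(w) (the measurable
# Raman-gain part), then reconstruct p(t) from it via causality:
# p(t) = 2*theta(t)*p_odd(t), where p_odd is the inverse FFT of i*Im P(w).
N, dt = 4096, 0.5                     # illustrative fs grid
t = np.arange(N) * dt
p_true = np.exp(-t / 100.0) * np.sin(2 * np.pi * 0.03 * t)  # decays well before t[-1]

P_true = np.fft.fft(p_true)
im_only = P_true.imag                 # the "measured" imaginary part

p_odd = np.fft.ifft(1j * im_only)     # odd part of p(t)
theta = np.zeros(N)
theta[1:N // 2] = 1.0                 # step function selecting positive times
theta[0] = 0.5
p_rec = 2.0 * theta * p_odd.real

err = np.max(np.abs(p_rec - p_true)) / np.max(np.abs(p_true))
```

The relative reconstruction error `err` stays small because the signal is causal and has decayed by the end of the window, which is the same property the paper exploits to recover the complex P(3)(ω) from the Raman gain.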

  1. Time weighted average concentration monitoring based on thin film solid phase microextraction.

    PubMed

    Ahmadi, Fardin; Sparham, Chris; Boyaci, Ezel; Pawliszyn, Janusz

    2017-03-02

    Time weighted average (TWA) passive sampling with thin film solid phase microextraction (TF-SPME) and liquid chromatography tandem mass spectrometry (LC-MS/MS) was used for collection, identification, and quantification of benzophenone-3, benzophenone-4, 2-phenylbenzimidazole-5-sulphonic acid, octocrylene, and triclosan in the aquatic environment. Two types of TF-SPME passive samplers, including a retracted thin film device using a hydrophilic lipophilic balance (HLB) coating, and an open bed configuration with an octadecyl silica-based (C18) coating, were evaluated in an aqueous standard generation (ASG) system. Laboratory calibration results indicated that the retracted thin film device using the HLB coating is suitable for determining TWA concentrations of polar analytes in water, with an uptake that was linear up to 70 days. In the open bed form, a one-calibrant kinetic calibration technique was accomplished by loading benzophenone-3-d5 as calibrant on the C18 coating to quantify all non-polar compounds. The experimental results showed that the one-calibrant kinetic calibration technique can be used for determination of classes of compounds in cases where deuterated counterparts are either unavailable or expensive. The developed passive samplers were deployed in wastewater-dominated reaches of the Grand River (Kitchener, ON) to verify their feasibility for determining TWA concentrations in on-site applications. Field trial results indicated that these devices are suitable for long-term and short-term monitoring of compounds varying in polarity, such as UV blockers and biocide compounds in water, and the data were in good agreement with literature data.
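In the linear-uptake regime that underlies TWA passive sampling, the accumulated analyte mass is n = Rs · C_TWA · t, so the TWA concentration follows by division. The sampling rate and masses below are hypothetical placeholders, not calibration values from the paper.

```python
# Minimal sketch of the TWA relation for a passive sampler in the
# linear-uptake regime: C_TWA = n / (Rs * t).
def twa_concentration(mass_ng, sampling_rate_ml_per_day, days):
    """Time-weighted average concentration in ng/mL."""
    return mass_ng / (sampling_rate_ml_per_day * days)

# e.g. 140 ng accumulated over a 70-day deployment at 2 mL/day
c = twa_concentration(140.0, 2.0, 70.0)   # -> 1.0 ng/mL
```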

  2. Redshift-space equal-time angular-averaged consistency relations of the gravitational dynamics

    NASA Astrophysics Data System (ADS)

    Nishimichi, Takahiro; Valageas, Patrick

    2015-12-01

    We present the redshift-space generalization of the equal-time angular-averaged consistency relations between (ℓ+n)- and n-point polyspectra (i.e., the Fourier counterparts of correlation functions) of the cosmological matter density field. Focusing on the case of the ℓ=1 large-scale mode and n small-scale modes, we use an approximate symmetry of the gravitational dynamics to derive explicit expressions that hold beyond the perturbative regime, including both the large-scale Kaiser effect and the small-scale fingers-of-god effects. We explicitly check these relations, both perturbatively, for the lowest-order version that applies to the bispectrum, and nonperturbatively, for all orders but for the one-dimensional dynamics. Using a large ensemble of N-body simulations, we find that our relation on the bispectrum in the squeezed limit (i.e., the limit where one wave number is much smaller than the other two) is valid to better than 20% up to 1 h Mpc-1, for both the monopole and quadrupole at z = 0.35, in a ΛCDM cosmology. Additional simulations done for the Einstein-de Sitter background suggest that these discrepancies mainly come from the breakdown of the approximate symmetry of the gravitational dynamics. For practical applications, we introduce a simple ansatz to estimate the new derivative terms in the relation using only observables. Although the relation holds worse after using this ansatz, we can still recover it within 20% up to 1 h Mpc-1, at z = 0.35 for the monopole. On larger scales, k = 0.2 h Mpc-1, it still holds within the statistical accuracy of idealized simulations of volume ~8 h-3 Gpc3 without shot-noise error.

  3. Uncertainty and variability in historical time-weighted average exposure data.

    PubMed

    Davis, Adam J; Strom, Daniel J

    2008-02-01

    Beginning around 1940, private companies began processing uranium and thorium ore, compounds, and metals for the Manhattan Engineer District and later the U.S. Atomic Energy Commission (AEC). Personnel from the AEC's Health and Safety Laboratory (HASL) visited many of the plants to assess worker exposures to radiation and radioactive materials. They developed a time-and-task approach to estimating "daily weighted average" (DWA) concentrations of airborne uranium, thorium, radon, and radon decay products. While short-term exposures greater than 10(5) dpm m(-3) of uranium and greater than 10(5) pCi L(-1) of radon were observed, DWA concentrations were much lower. The HASL-reported DWA values may be used as inputs for dose reconstruction in support of compensation decisions, but they have no numerical uncertainties associated with them. In this work, Monte Carlo methods are used retrospectively to assess the uncertainty and variability in the DWA values for 63 job titles from five different facilities that processed U, U ore, Th, or 226Ra-222Rn between 1948 and 1955. Most groups of repeated air samples are well described by lognormal distributions. Combining samples associated with different tasks often results in a reduction of the geometric standard deviation (GSD) of the DWA to less than the GSD values typical of individual tasks. The results support the assumption of a GSD value of 5 when information on uncertainty in DWA exposures is unavailable. Blunders involving arithmetic, transposition, and transcription are found in many of the HASL reports. In 5 of the 63 cases, these mistakes result in overestimates of DWA values by a factor of 2 to 2.5, and in 2 cases DWA values are underestimated by factors of 3 to 10.
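The narrowing effect described above (combining lognormal task samples into a time-weighted daily average reduces the GSD) can be checked with a small Monte Carlo. The geometric mean, GSD, and task durations below are illustrative, not HASL data.

```python
import numpy as np

# Each task's air concentration is lognormal; the daily weighted average
# (DWA) is the time-weighted sum over tasks.  Averaging across independent
# tasks narrows the distribution, so the DWA's GSD drops below the
# per-task GSD.
rng = np.random.default_rng(0)
n_days = 200_000
gm, gsd = 1000.0, 5.0                       # per-task geometric mean / GSD
hours = np.array([2.0, 4.0, 2.0])           # time spent on three tasks
w = hours / hours.sum()

tasks = rng.lognormal(np.log(gm), np.log(gsd), size=(n_days, 3))
dwa = tasks @ w                             # one DWA per simulated day

gsd_dwa = np.exp(np.std(np.log(dwa)))       # GSD of the DWA distribution
```

With independent tasks the simulated `gsd_dwa` comes out well below the per-task GSD of 5, which is the reduction the abstract reports when samples from different tasks are combined.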

  4. Mercury's Time-Averaged and Induced Magnetic Fields from MESSENGER Observations

    NASA Astrophysics Data System (ADS)

    Johnson, C. L.; Winslow, R. M.; Anderson, B. J.; Purucker, M. E.; Korth, H.; Al Asad, M. M.; Slavin, J. A.; Baker, D. N.; Hauck, S. A.; Phillips, R. J.; Zuber, M. T.; Solomon, S. C.

    2012-12-01

    Observations from MESSENGER's Magnetometer (MAG) have allowed the construction of a baseline, time-averaged model for Mercury's magnetosphere. The model, constructed with the approximation that the magnetospheric shape can be represented as a paraboloid, includes two external (magnetopause and magnetotail) current systems and an internal (dipole) field. We take advantage of the geometry of the orbital MAG data to constrain all but one of the model parameters, and their ranges, directly from the observations. These parameters are then used as a priori constraints in the magnetospheric model, and the remaining parameter, the dipole moment, is estimated from a grid search. The model provides an excellent fit to the MAG observations, with a root-mean-square misfit of less than 20 nT globally. The mean distance from the planetary dipole origin to the magnetopause subsolar point, RSS, is 1.45 RM (where RM = 2440 km) and the mean planetary dipole moment is 190 nT-RM3. Temporal variations in the global-scale magnetic fields result from changes in solar wind ram pressure, Pram, at Mercury that arise from the planet's 88-day eccentric orbit around the Sun and from transient, rapid changes in solar wind conditions. For a constant planetary dipole moment, RSS varies as Pram-1/6. However, magnetopause crossings obtained from several Mercury years of MESSENGER observations indicate that RSS is proportional to Pram-1/a where a is greater than 6, suggesting induction in Mercury's highly conducting metallic interior. We obtain an effective dipole moment that varies by up to ~15% about its mean value. We further investigate the periodic 88-day induction signature and use the paraboloid model to describe the spatial structure in the inducing magnetopause field, together with estimates for the outer radius of Mercury's liquid core and possible overlying solid iron sulfide layer, to calculate induced core fields. The baseline magnetospheric model is adapted to include the 88-day
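The Pram^(-1/6) scaling quoted above follows from pressure balance between the solar wind ram pressure and the dipole field at the magnetopause. A minimal sketch, anchored to the paper's mean standoff of 1.45 RM but with a normalized pressure scale (an assumption, not MESSENGER data):

```python
# Subsolar magnetopause standoff distance for a constant dipole moment:
# Rss scales as Pram^(-1/6); a > 6 in the exponent signals induction.
RSS_MEAN = 1.45          # planetary radii RM, at the mean ram pressure
PRAM_MEAN = 1.0          # mean ram pressure, normalized

def standoff(pram, alpha=6.0):
    """Subsolar standoff distance in RM for normalized ram pressure pram."""
    return RSS_MEAN * (pram / PRAM_MEAN) ** (-1.0 / alpha)

# doubling the ram pressure compresses the magnetopause by 2^(-1/6), ~11%
r = standoff(2.0)
```

Raising `alpha` above 6 makes the standoff distance respond more stiffly to pressure changes, which is the observational signature the abstract attributes to induction in the conducting interior.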

  5. Comment on "Time-averaged properties of unstable periodic orbits and chaotic orbits in ordinary differential equation systems".

    PubMed

    Zaks, Michael A; Goldobin, Denis S

    2010-01-01

    A recent paper claims that mean characteristics of chaotic orbits differ from the corresponding values averaged over the set of unstable periodic orbits, embedded in the chaotic attractor. We demonstrate that the alleged discrepancy is an artifact of the improper averaging. Since the natural measure is nonuniformly distributed over the attractor, different periodic orbits make different contributions into the time averages. As soon as the corresponding weights are accounted for, the discrepancy disappears.
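The comment's point reduces to a weighting argument: averages over unstable periodic orbits must carry natural-measure weights, or they will disagree with time averages. A toy two-orbit illustration (values invented for the example):

```python
import numpy as np

# Two hypothetical "periodic orbits" with different natural-measure weights.
# The unweighted mean of an observable differs from the properly weighted
# one; only the weighted version matches the time average over the attractor.
obs = np.array([0.2, 0.8])       # observable value on each periodic orbit
weights = np.array([0.9, 0.1])   # natural-measure weights (sum to 1)

unweighted = obs.mean()          # improper averaging
weighted = float(obs @ weights)  # proper, measure-weighted averaging
```

Here the improper average is 0.5 while the measure-weighted average is 0.26; once the weights are accounted for, the "discrepancy" between periodic-orbit and chaotic-orbit averages disappears, as the comment argues.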

  6. Advanced time average holographic method for measurement in extensive vibration amplitude range with quantitative single-pixel analysis

    NASA Astrophysics Data System (ADS)

    Psota, Pavel; Lédl, Vít.; Vojtíšek, Petr; Václavík, Jan; Doleček, Roman; Mokrý, Pavel

    2015-05-01

    In this paper we propose a time-average digital holographic arrangement employing a frequency shift of the reference wave together with its phase modulation, resulting in the phase-modulated frequency-shifted time-average digital holography (PMFSTADH) method. This method extends currently used frequency-shifted time-average digital holography by enabling quantitative numerical analysis. It is primarily useful for measuring very large or very small vibration amplitudes. Moreover, we use acousto-optic modulators to realize both the frequency shift and the phase modulation, so no additional hardware is needed in the experimental setup.

  7. 40 CFR 60.1265 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of... daily geometric average percent reduction of potential sulfur dioxide emissions. (c) If you operate a... Continuous Emission Monitoring § 60.1265 How do I convert my 1-hour arithmetic averages into the...

  8. A Weight-Averaged Interpolation Method for Coupling Time-Accurate Rarefied and Continuum Flows

    NASA Astrophysics Data System (ADS)

    Diaz, Steven William

    A novel approach to coupling rarefied and continuum flow regimes as a single, hybrid model is introduced. The method borrows from techniques used in the simulation of spray flows to interpolate Lagrangian point-particles onto an Eulerian grid in a weight-averaged sense. A brief overview of traditional methods for modeling both rarefied and continuum domains is given, and a review of the literature regarding rarefied/continuum flow coupling is presented. Details of the theoretical development of the method of weighted interpolation are then described. The method evaluates macroscopic properties at the nodes of a CFD grid via the weighted interpolation of all simulated molecules in a set surrounding the node. The weight factor applied to each simulated molecule is the inverse of the linear distance between it and the given node. During development, the method was applied to several preliminary cases, including supersonic flow over an airfoil, subsonic flow over tandem airfoils, and supersonic flow over a backward facing step; all at low Knudsen numbers. The main thrust of the research centered on the time-accurate expansion of a rocket plume into a near-vacuum. The method proves flexible enough to be used with various flow solvers, demonstrated by the use of Fluent as the continuum solver for the preliminary cases and a NASA-developed Large Eddy Simulation research code, WRLES, for the full lunar model. The method is applicable to a wide range of Mach numbers and is completely grid independent, allowing the rarefied and continuum solvers to be optimized for their respective domains without consideration of the other. The work presented demonstrates the validity, and flexibility of the method of weighted interpolation as a novel concept in the field of hybrid flow coupling. The method marks a significant divergence from current practices in the coupling of rarefied and continuum flow domains and offers a kernel on which to base an ongoing field of research. 
It has the
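The interpolation step at the heart of the method above admits a compact sketch: each grid node takes a weighted average of nearby simulated molecules, with weight equal to the inverse of the linear distance to the node. The 1D setup and names below are illustrative, not the dissertation's code.

```python
import numpy as np

# Inverse-distance weighted average of particle values at one grid node.
def node_average(node_x, particle_x, particle_vals, radius=1.0, eps=1e-12):
    """Weight each particle within `radius` by 1/distance and average."""
    d = np.abs(particle_x - node_x)
    mask = d < radius
    w = 1.0 / (d[mask] + eps)          # eps guards a particle sitting on the node
    return np.sum(w * particle_vals[mask]) / np.sum(w)

px = np.array([0.1, 0.4, 2.0])          # particle positions
pv = np.array([10.0, 20.0, 99.0])       # e.g. a velocity component per particle
v_node = node_average(0.0, px, pv)      # third particle lies outside the radius
```

The nearest particle dominates (weight 1/0.1 vs 1/0.4 here), which is what makes the scheme grid-independent: the continuum solver only ever sees node values, however the particles are distributed.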

  9. Time-Averaged Velocity, Temperature and Density Surveys of Supersonic Free Jets

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta; Seasholtz, Richard G.; Elam, Kristie A.; Mielke, Amy F.

    2005-01-01

    A spectrally resolved molecular Rayleigh scattering technique was used to simultaneously measure the axial component of velocity U, static temperature T, and density ρ in unheated free jets at Mach numbers M = 0.6, 0.95, 1.4 and 1.8. The latter two conditions were achieved using contoured convergent-divergent nozzles. A narrow line-width continuous wave laser was passed through the jet plumes and molecular scattered light from a small region on the beam was collected and analyzed using a Fabry-Perot interferometer. In addition to the optical spectrum analysis, air density at the probe volume was determined by monitoring the intensity variation of the scattered light using photo-multiplier tubes. The Fabry-Perot interferometer was operated in the imaging mode, whereby the fringe formed at the image plane was captured by a cooled CCD camera. Special attention was given to removing dust particles from the plume and to providing adequate vibration isolation for the optical components. The velocity profiles from various operating conditions were compared with those measured by a Pitot tube. An excellent comparison within 5 m/s demonstrated the maturity of the technique. Temperature was measured least accurately, within 10 K, while density was measured within 1% uncertainty. The survey data consisted of centerline variations and radial profiles of time-averaged U, T and ρ. The static temperature and density values were used to determine static pressure variations inside the jet. The data provided a comparative study of jet growth rates with increasing Mach number. The current work is part of a database development project for Computational Fluid Dynamics and Aeroacoustics codes that endeavor to predict noise characteristics of high speed jets. A limited amount of far field noise spectra from the same jets are also presented.
Finally, a direct experimental validation was obtained for the Crocco-Busemann equation which is commonly used to predict temperature and density profiles from known velocity

  10. Diagnostic quality of time-averaged ECG-gated CT data

    NASA Astrophysics Data System (ADS)

    Klein, Almar; Oostveen, Luuk J.; Greuter, Marcel J. W.; Hoogeveen, Yvonne; Schultze Kool, Leo J.; Slump, Cornelis H.; Renema, W. Klaas Jan

    2009-02-01

    Purpose: ECG-gated CTA allows visualization of the aneurysm and stentgraft during the different phases of the cardiac cycle, although with a lower SNR per cardiac phase than without ECG gating using the same dose. In our institution, abdominal aortic aneurysm (AAA) is evaluated using non-ECG-gated CTA. Some common CT scanners cannot reconstruct a non-gated volume from ECG-gated acquired data. In order to obtain the same diagnostic image quality, we propose offline temporal averaging of the ECG-gated data. This process, though straightforward, is fundamentally different from taking a non-gated scan, and its result will certainly differ as well. The purpose of this study is to quantitatively investigate how well offline averaging approximates a non-gated scan. Method: Non-gated and ECG-gated CT scans were performed on a phantom (Catphan 500). Afterwards, the phases of the ECG-gated CTA data were averaged to create a third dataset. The three sets are compared with respect to noise properties (NPS) and frequency response (MTF). To study motion artifacts, identical scans were acquired on a programmable dynamic phantom. Results and Conclusions: The experiments show that the spatial frequency content is not affected by the averaging process. The minor differences observed for the noise properties and motion artifacts are in favor of the averaged data. Therefore the averaged ECG-gated phases can be used for diagnosis. This enables the use of ECG-gating for research on stentgrafts in AAA, without impairing clinical patient care.
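The noise argument behind offline averaging can be illustrated directly: a non-gated scan at dose D and N gated phases at dose D/N each share the same photon budget, so averaging the N phases should recover comparable noise. The synthetic uniform "phantom" below stands in for CT data; the noise level is arbitrary.

```python
import numpy as np

# Average N noisy cardiac-phase images of a uniform phantom slice and
# compare the per-pixel noise before and after temporal averaging.
rng = np.random.default_rng(1)
truth = np.full((64, 64), 100.0)           # uniform phantom slice (HU-like)
n_phases, sigma_phase = 10, 8.0            # per-phase noise (dose split N ways)

phases = truth + rng.normal(0.0, sigma_phase, size=(n_phases, 64, 64))
averaged = phases.mean(axis=0)             # offline temporal average

noise_phase = phases[0].std()              # ~ sigma_phase
noise_avg = averaged.std()                 # ~ sigma_phase / sqrt(n_phases)
```

The averaged image's noise drops by roughly sqrt(N), mirroring why the averaged ECG-gated phases in the study matched the diagnostic quality of a non-gated acquisition.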

  11. Short-term sandbar variability based on video imagery: Comparison between Time-Average and Time-Variance techniques

    USGS Publications Warehouse

    Guedes, R.M.C.; Calliari, L.J.; Holland, K.T.; Plant, N.G.; Pereira, P.S.; Alves, F.N.A.

    2011-01-01

    Time-exposure intensity (averaged) images are commonly used to locate the nearshore sandbar position (xb), based on the cross-shore locations of maximum pixel intensity (xi) of the bright bands in the images. It is not known, however, how the breaking patterns seen in Variance images (i.e. those created through standard deviation of pixel intensity over time) are related to the sandbar locations. We investigated the suitability of both Time-exposure and Variance images for sandbar detection within a multiple bar system on the southern coast of Brazil, and verified the relation between wave breaking patterns, observed as bands of high intensity in these images, and cross-shore profiles of modeled wave energy dissipation (xD). Not only is the Time-exposure maximum pixel intensity location (xi-Ti) well related to xb, but also to the maximum pixel intensity location of Variance images (xi-Va), although the latter was typically located 15 m offshore of the former. In addition, xi-Va was observed to be better associated with xD even though xi-Ti is commonly assumed as maximum wave energy dissipation. Significant wave height (Hs) and water level (η) were observed to affect the two types of images in a similar way, with an increase in both Hs and η resulting in xi shifting offshore. This η-induced xi variability has an opposite behavior to what is described in the literature, and is likely an indirect effect of higher waves breaking farther offshore during periods of storm surges. Multiple regression models performed on xi, Hs and η allowed the reduction of the residual errors between xb and xi, yielding accurate estimates with most residuals less than 10 m. Additionally, it was found that the sandbar position was best estimated using xi-Ti (xi-Va) when xb was located shoreward (seaward) of its mean position, for both the first and the second bar.
Although it is unknown whether this is an indirect hydrodynamic effect or is indeed related to the morphology, we found that this
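The multiple-regression step described above has a simple shape: model the sandbar position xb from the image-derived intensity maximum xi plus the hydrodynamic covariates Hs and water level. All numbers below are synthetic stand-ins, not the Brazilian video data.

```python
import numpy as np

# Least-squares fit of xb on (xi, Hs, eta) with an intercept, then check
# that the residuals shrink relative to the raw xb - xi offsets.
rng = np.random.default_rng(3)
n = 200
xi = rng.uniform(100.0, 200.0, n)        # intensity-maximum location (m)
hs = rng.uniform(0.5, 3.0, n)            # significant wave height (m)
eta = rng.uniform(-0.5, 0.5, n)          # water level (m)
# synthetic "true" sandbar position with 1 m observation noise
xb = 0.9 * xi - 8.0 * hs - 12.0 * eta + 5.0 + rng.normal(0.0, 1.0, n)

A = np.column_stack([xi, hs, eta, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, xb, rcond=None)
residuals = xb - A @ coef                # most should fall near the noise floor
```

The fitted residual spread approaches the injected 1 m noise, analogous to how the paper's regression on xi, Hs and water level brought most xb residuals under 10 m.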

  12. Turnaround operations analysis for OTV. Volume 2: Detailed technical report

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The objectives and accomplishments were to adapt and apply the newly created database of Shuttle/Centaur ground operations. Previously defined turnaround operations analyses were updated for ground-based OTVs (GBOTVs) and space-based OTVs (SBOTVs), design requirements were identified for both OTV and Space Station accommodations hardware, turnaround operations costs were estimated, and a technology development plan was generated to develop the required capabilities. Technical and programmatic data were provided to NASA pertinent to OTV ground and space operations requirements, turnaround operations, task descriptions, timelines and manpower requirements, OTV modular design and booster and Space Station interface requirements, the SBOTV accommodations development schedule, cost and turnaround operations requirements, and a technology development plan for ground and space operations and space-based accommodations facilities and support equipment. Significant conclusions are discussed.

  13. 34. BOILER HOUSE, COAL CONVEYOR AND TURNAROUND TRACK FOR COAL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    34. BOILER HOUSE, COAL CONVEYOR AND TURN-AROUND TRACK FOR COAL CARS (NOTE: COAL CAR No. 6 IN FAR BACK GROUND) - Delaware County Electric Company, Chester Station, Delaware River at South end of Ward Street, Chester, Delaware County, PA

  14. Leadership and the psychology of turnarounds.

    PubMed

    Kanter, Rosabeth Moss

    2003-06-01

    Turnaround champions--those leaders who manage to bring distressed organizations back from the brink of failure--are often acclaimed for their canny financial and strategic decision making. But having studied their work closely, Harvard Business School's Rosabeth Moss Kanter emphasizes another aspect of their achievement. These leaders reverse the cycle of corporate decline through deliberate interventions that increase the level of communication, collaboration, and respect among their managers. Ailing companies descend into what Kanter calls a "death spiral," which typically works this way: After an initial blow to the company's fortunes, people begin pointing fingers and deriding colleagues in other parts of the business. Tensions rise and collaboration declines. Once they are no longer acting in concert, people find themselves less able to effect change. Eventually, many come to believe they are helpless. Passivity sets in. Finally, the ultimate pathology of troubled companies takes hold: denial. Rather than volunteer an opinion that no one else seems to share, people engage in collective pretense to ignore what they individually know. To counter these dynamics, Kanter says, and reverse the company's slide, the CEO needs to apply certain psychological interventions--specifically, replacing secrecy and denial with dialogue, blame and scorn with respect, avoidance and turf protection with collaboration, and passivity and helplessness with initiative. The author offers in-depth accounts of how the CEOs at Gillette, Invensys, and the BBC used these interventions to guide their employees out of corporate free fall and onto a more productive path.

  15. Average KR schedule in learning of timing: influence of length for summary knowledge of results and task complexity.

    PubMed

    Ishikura, Tadao

    2005-12-01

    This experiment investigated the influence of the length of the average Knowledge of Results (KR) and task complexity on learning of timing in a barrier knock-down task. Participants (30 men and 30 women) attempted to press a goal button in 1200 msec. after pressing a start button. Each participant was assigned to one of six groups defined by two tasks (simple and complex) and three feedback conditions (100% KR, Average 3, Average 5). The simple and complex tasks required a participant to knock down one or three barriers, respectively, before pressing the goal button. After a pretest without KR, participants completed 60 trials of physical practice in one of the three following groups: one given the result of movement time after every trial (100% KR), a second given the average movement time after every third trial (Average 3), and a third given the average movement time after every fifth trial (Average 5). Participants then performed a posttest without KR and two retention tests, taken 10 min. and 24 hr. after the posttest, also without KR. Analysis gave several findings. (1) On the complex task, the absolute constant error (/CE/) and the variable error (VE) were less than those on the simple task. (2) The /CE/ and the VE of the 100% KR and the Average 3 groups were less than those of the Average 5 group in the practice phase, and the VE of the 100% KR and the Average 3 groups was less than that of the Average 5 group on the retention tests. (3) In the practice phase, the /CE/ and the VE on Blocks 1 and 2 were higher than on Blocks 5 and 6. (4) On the retention tests, the /CE/ of the posttest was less than on retention tests 1 and 2, and the VE of the 100% KR and the Average 3 groups was less than that of the Average 5 group. These results suggest that an average feedback length of three trials, like feedback given after every trial, is advantageous for learning timing on this barrier knock-down task.
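The error measures used above have standard definitions for a timing goal: the constant error (CE) is the signed mean deviation from the 1200-msec target, |CE| its magnitude, and the variable error (VE) the standard deviation of the deviations. The movement times below are invented for illustration.

```python
import numpy as np

GOAL_MS = 1200.0  # target movement time

def timing_errors(movement_times_ms):
    """Return (|CE|, VE) for a set of movement times against the goal."""
    dev = np.asarray(movement_times_ms) - GOAL_MS
    ce = dev.mean()          # constant error (signed bias)
    return abs(ce), dev.std()

# symmetric errors around the goal: no bias (|CE| = 0), but spread (VE > 0)
abs_ce, ve = timing_errors([1180.0, 1220.0, 1240.0, 1160.0])
```

This separation is why the two measures can move independently in the results: a group can be unbiased on average (small |CE|) while still inconsistent trial to trial (large VE).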

  16. Time-Average Measurement of Velocity, Density, Temperature, and Turbulence Using Molecular Rayleigh Scattering

    NASA Technical Reports Server (NTRS)

    Mielke, Amy F.; Seasholtz, Richard G.; Elam, Krisie A.; Panda, Jayanta

    2004-01-01

    Measurement of time-averaged velocity, density, temperature, and turbulence in gas flows using a nonintrusive, point-wise measurement technique based on molecular Rayleigh scattering is discussed. Subsonic and supersonic flows in a 25.4-mm diameter free jet facility were studied. The developed instrumentation utilizes a Fabry-Perot interferometer to spectrally resolve molecularly scattered light from a laser beam passed through a gas flow. The spectrum of the scattered light contains information about the velocity, density, and temperature of the gas. The technique uses a slow scan, low noise, 16-bit depth CCD camera to record images of the fringes formed by Rayleigh scattered light passing through the interferometer. A kinetic theory model of the Rayleigh scattered light is used in a nonlinear least squares fitting routine to estimate the unknown parameters from the fringe images. The ability to extract turbulence information from the fringe image data proved to be a challenge since the fringe is broadened not only by turbulence, but also by thermal fluctuations and aperture effects from collecting light over a range of scattering angles. Figure 1 illustrates broadening of a Rayleigh spectrum typical of flow conditions observed in this work due to aperture effects and turbulence for a scattering angle, χs, of 90 degrees, f/3.67 collection optics, mean flow velocity, uk, of 300 m/s, and turbulent velocity fluctuations, σuk, of 55 m/s. The greatest difficulty in processing the image data was decoupling the thermal and turbulence broadening in the spectrum. To aid in this endeavor, it was necessary to seed the ambient air with smoke and dust particulates, taking advantage of the turbulence broadening in the Mie scattering component of the spectrum of the collected light (not shown in the figure). The primary jet flow was not seeded due to the difficulty of the task. For measurement points lacking particles, velocity, density, and temperature

  17. Turn Around Time (TAT) as a Benchmark of Laboratory Performance

    PubMed Central

    Goswami, Binita; Chawla, Ranjna; Gupta, V. K.; Mallika, V.

    2010-01-01

    Laboratory analytical turnaround time is a reliable indicator of laboratory effectiveness. Our study aimed to evaluate the analytical turnaround time in our laboratory and appraise the contribution of the different phases of analysis towards it. The turnaround time (TAT) for all samples (both routine and emergency) for outpatients and hospitalized patients was evaluated for one year. TAT was calculated from sample reception to report dispatch. The average TAT for clinical biochemistry samples was 5.5 h for routine inpatient samples, while the TAT for outpatient samples was 24 h. The turnaround time for stat samples was 1 h. The pre- and post-analytical phases were found to contribute approximately 75% of the total TAT. The TAT demonstrates the need for improvement in the pre- and post-analytical periods. We need to tread the middle path to perform optimally according to clinician expectations. PMID:21966108
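The TAT bookkeeping above is simply the elapsed time from sample reception to report dispatch, averaged per sample class. A minimal sketch with made-up timestamps:

```python
from datetime import datetime

# Turnaround time (TAT) = report dispatch minus sample reception, in hours.
def tat_hours(received, dispatched):
    return (dispatched - received).total_seconds() / 3600.0

# hypothetical (received, dispatched) pairs for two samples
samples = [
    (datetime(2024, 1, 5, 8, 0), datetime(2024, 1, 5, 13, 30)),  # routine inpatient
    (datetime(2024, 1, 5, 9, 0), datetime(2024, 1, 5, 10, 0)),   # stat sample
]
tats = [tat_hours(r, d) for r, d in samples]
mean_tat = sum(tats) / len(tats)
```

Grouping such per-sample TATs by category (routine inpatient, outpatient, stat) and by phase (pre-analytical, analytical, post-analytical) yields exactly the breakdown the study reports.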

  18. Evaluation of Time-Averaged CERES TOA SW Product Using CAGEX Data

    NASA Technical Reports Server (NTRS)

    Carlson, Ann B.; Wong, Takmeng

    1998-01-01

    A major component in the analysis of the Earth's radiation budget is the recovery of daily and monthly averaged radiative parameters using noncontinuous spatial and temporal measurements from polar orbiting satellites. In this study, the accuracy of the top of atmosphere (TOA) shortwave (SW) temporal interpolation model for the Clouds and the Earth's Radiant Energy System (CERES) is investigated using temporally intensive half-hourly TOA fluxes from the CERES/ARM/GEWEX Experiment (CAGEX) over Oklahoma (Charlock et al., 1996).

  19. Where the world stands still: turnaround as a strong test of ΛCDM cosmology

    SciTech Connect

    Pavlidou, V.; Tomaras, T.N. E-mail: tomaras@physics.uoc.gr

    2014-09-01

    Our intuitive understanding of cosmic structure formation works best on scales small enough that isolated, bound, relaxed gravitating systems are no longer adjusting their radius, and large enough that space and matter follow the average expansion of the Universe. Yet one of the most robust predictions of ΛCDM cosmology concerns the scale that separates these limits: the turnaround radius, which is the non-expanding shell furthest away from the center of a bound structure. We show that the maximum possible value of the turnaround radius within the framework of the ΛCDM model is, for a given mass M, equal to (3GM/Λc²)^(1/3), with G Newton's constant and c the speed of light, independently of cosmic epoch, exact nature of dark matter, or baryonic effects. We discuss the possible use of this prediction as an observational test for ΛCDM cosmology. Current data appear to favor ΛCDM over alternatives with local inhomogeneities and no Λ. However, there exist several local-universe structures that have, within errors, reached their limiting size. With improved determinations of their turnaround radii and the enclosed mass, these objects may challenge the limit and ΛCDM cosmology.
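The bound quoted above is easy to evaluate numerically. The constants below are standard SI values, and the test mass (10^15 solar masses, a rich galaxy cluster) and the value of Λ are representative choices, not numbers from the paper.

```python
# Evaluate the maximum turnaround radius R_max = (3GM / (Lambda c^2))^(1/3).
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8              # speed of light, m/s
LAMBDA = 1.1e-52         # cosmological constant, m^-2 (observationally inferred)
M_SUN = 1.989e30         # solar mass, kg
M_PER_MPC = 3.086e22     # metres per megaparsec

def max_turnaround_radius_m(mass_kg):
    return (3.0 * G * mass_kg / (LAMBDA * C * C)) ** (1.0 / 3.0)

# for a 1e15 solar-mass cluster this comes out at roughly 10 Mpc
r_mpc = max_turnaround_radius_m(1e15 * M_SUN) / M_PER_MPC
```

The ~10 Mpc scale for a massive cluster shows why nearby superclusters are the natural place to look for structures that saturate, or challenge, the ΛCDM limit.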

  20. Decomposition-order effects of time integrator on ensemble averages for the Nosé-Hoover thermostat.

    PubMed

    Itoh, Satoru G; Morishita, Tetsuya; Okumura, Hisashi

    2013-08-14

    Decomposition-order dependence of the time-development integrator on ensemble averages for the Nosé-Hoover dynamics is discussed. Six integrators, extensions of the velocity-Verlet or position-Verlet algorithm, were employed for comparison. Molecular dynamics simulations with these integrators were performed for liquid-argon systems with several different time steps and system sizes. The obtained ensemble averages of temperature and potential energy were shifted from the correct values, depending on the integrator. These shifts increased in proportion to the square of the time step. Furthermore, the shifts could not be removed by increasing the number of argon atoms. We show the origin of these ensemble-average shifts analytically. Our discussion applies not only to the liquid-argon system but to all MD simulations with the Nosé-Hoover thermostat. Our recommended integrators among the six are presented to obtain correct ensemble averages.
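The dt² scaling of the ensemble-average shift can be reproduced in miniature with a plain velocity-Verlet integrator: a harmonic oscillator stands in here for the liquid-argon system (an illustrative analogue, not the paper's Nosé-Hoover setup), and the time-averaged kinetic energy is shifted from its exact value in proportion to dt².

```python
# Velocity-Verlet on a unit harmonic oscillator (m = k = 1); measure the
# time-averaged kinetic energy and its shift from the exact value 1/4.
def mean_kinetic_energy(dt, n_steps=400_000):
    x, v = 1.0, 0.0
    acc = 0.0
    for _ in range(n_steps):
        x_new = x + v * dt - 0.5 * x * dt * dt   # position update, a = -x
        v = v - 0.5 * (x + x_new) * dt           # velocity update (avg. force)
        x = x_new
        acc += 0.5 * v * v
    return acc / n_steps

exact = 0.25                               # <v^2/2> for x(0)=1, v(0)=0
shift_big = abs(mean_kinetic_energy(0.2) - exact)
shift_small = abs(mean_kinetic_energy(0.1) - exact)
ratio = shift_big / shift_small            # ~4 if the shift scales as dt^2
```

Halving the time step cuts the shift by a factor of about four, the same quadratic dependence the paper derives for the Nosé-Hoover ensemble averages.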

  1. Aging underdamped scaled Brownian motion: Ensemble- and time-averaged particle displacements, nonergodicity, and the failure of the overdamping approximation.

    PubMed

    Safdari, Hadiseh; Cherstvy, Andrey G; Chechkin, Aleksei V; Bodrova, Anna; Metzler, Ralf

    2017-01-01

    We investigate both analytically and by computer simulations the ensemble- and time-averaged, nonergodic, and aging properties of massive particles diffusing in a medium with a time-dependent diffusivity. We call this stochastic diffusion process the (aging) underdamped scaled Brownian motion (UDSBM). We demonstrate how the mean squared displacement (MSD) and the time-averaged MSD of UDSBM are affected by the inertial term in the Langevin equation at short, intermediate, and even long diffusion times. In particular, we quantify the ballistic regime for the MSD and the time-averaged MSD as well as the spread of individual time-averaged MSD trajectories. One of the main effects we observe is that, both for the MSD and the time-averaged MSD, the ballistic regime of superdiffusive UDSBM is much shorter than for ordinary Brownian motion. In contrast, for subdiffusive UDSBM, the ballistic region extends to much longer diffusion times. Therefore, particular care needs to be taken in assessing under what conditions the overdamped limit indeed provides a correct description, even in the long-time limit. We also analyze to what extent ergodicity in the Boltzmann-Khinchin sense is broken in this nonstationary system, both for subdiffusive and superdiffusive UDSBM. Finally, the limiting case of ultraslow UDSBM is considered, with a mixed logarithmic and power-law dependence of the ensemble- and time-averaged MSDs of the particles. In the limit of strong aging, remarkably, the ordinary UDSBM and the ultraslow UDSBM behave similarly in the short-time ballistic limit. The approaches developed here open ways for considering other stochastic processes under physically important conditions, when a finite particle mass and aging in the system cannot be neglected.
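    The time-averaged MSD this abstract revolves around is straightforward to compute from a single recorded trajectory. A minimal numpy sketch, using an ordinary Brownian trajectory as a stand-in (the UDSBM time-dependent diffusivity and inertial term are omitted for brevity):

```python
import numpy as np

def time_averaged_msd(x, lag):
    """Time-averaged MSD: sliding-window average of squared displacements
    over a single trajectory x at a given lag (in steps)."""
    disp = x[lag:] - x[:-lag]
    return np.mean(disp**2)

# Ordinary Brownian trajectory as a stand-in; for UDSBM one would instead
# draw steps with a time-dependent diffusivity D(t) ~ t^(alpha - 1).
rng = np.random.default_rng(42)
dt, D = 0.01, 1.0
steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=100_000)
x = np.cumsum(steps)

for lag in (1, 10, 100):
    print(lag, time_averaged_msd(x, lag))   # grows roughly as 2*D*dt*lag
```

    For an ergodic process such as this one, the time-averaged MSD reproduces the ensemble MSD; the abstract's point is precisely that for aging UDSBM the two differ.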

  2. Aging underdamped scaled Brownian motion: Ensemble- and time-averaged particle displacements, nonergodicity, and the failure of the overdamping approximation

    NASA Astrophysics Data System (ADS)

    Safdari, Hadiseh; Cherstvy, Andrey G.; Chechkin, Aleksei V.; Bodrova, Anna; Metzler, Ralf

    2017-01-01

    We investigate both analytically and by computer simulations the ensemble- and time-averaged, nonergodic, and aging properties of massive particles diffusing in a medium with a time-dependent diffusivity. We call this stochastic diffusion process the (aging) underdamped scaled Brownian motion (UDSBM). We demonstrate how the mean squared displacement (MSD) and the time-averaged MSD of UDSBM are affected by the inertial term in the Langevin equation at short, intermediate, and even long diffusion times. In particular, we quantify the ballistic regime for the MSD and the time-averaged MSD as well as the spread of individual time-averaged MSD trajectories. One of the main effects we observe is that, both for the MSD and the time-averaged MSD, the ballistic regime of superdiffusive UDSBM is much shorter than for ordinary Brownian motion. In contrast, for subdiffusive UDSBM, the ballistic region extends to much longer diffusion times. Therefore, particular care needs to be taken in assessing under what conditions the overdamped limit indeed provides a correct description, even in the long-time limit. We also analyze to what extent ergodicity in the Boltzmann-Khinchin sense is broken in this nonstationary system, both for subdiffusive and superdiffusive UDSBM. Finally, the limiting case of ultraslow UDSBM is considered, with a mixed logarithmic and power-law dependence of the ensemble- and time-averaged MSDs of the particles. In the limit of strong aging, remarkably, the ordinary UDSBM and the ultraslow UDSBM behave similarly in the short-time ballistic limit. The approaches developed here open ways for considering other stochastic processes under physically important conditions, when a finite particle mass and aging in the system cannot be neglected.

  3. Evidence of Discrete Scale Invariance in DLA and Time-to-Failure by Canonical Averaging

    NASA Astrophysics Data System (ADS)

    Johansen, A.; Sornette, D.

    Discrete scale invariance, which corresponds to a partial breaking of the scaling symmetry, is reflected in the existence of a hierarchy of characteristic scales l0, l0λ, l0λ², …, where λ is a preferred scaling ratio and l0 a microscopic cut-off. Signatures of discrete scale invariance have recently been found in a variety of systems ranging from rupture, earthquakes, Laplacian growth phenomena, and "animals" in percolation to financial market crashes. We believe it to be a quite general, albeit subtle, phenomenon. Indeed, the practical problem in uncovering an underlying discrete scale invariance is that standard ensemble averaging procedures destroy it as if it were pure noise. This is because, while λ depends only on the underlying physics, l0, on the contrary, is realization-dependent. Here, we adapt and implement a novel so-called "canonical" averaging scheme which resets the l0 of different realizations to approximately the same value. The method is based on the determination of a realization-dependent effective critical point obtained from, e.g., a maximum susceptibility criterion. We demonstrate the method on diffusion-limited aggregation and a model of rupture.

  4. Infinite-time average of local fields in an integrable quantum field theory after a quantum quench.

    PubMed

    Mussardo, G

    2013-09-06

    The infinite-time averages of the expectation values of local fields in any interacting quantum theory after a global quench are key quantities for matching theoretical and experimental results. For quantum integrable field theories, we show that they can be obtained by an ensemble average that employs a particular limit of the form factors of local fields, together with quantities extracted by the generalized Bethe ansatz.

  5. An Integrated Gate Turnaround Management Concept Leveraging Big Data/Analytics for NAS Performance Improvements

    NASA Technical Reports Server (NTRS)

    Chung, William; Chachad, Girish; Hochstetler, Ronald

    2016-01-01

    The Integrated Gate Turnaround Management (IGTM) concept was developed to improve gate turnaround performance at the airport by leveraging relevant historical data to support optimization of airport gate operations, which include taxi to the gate, gate services, push back, taxi to the runway, and takeoff, based on available resources, constraints, and uncertainties. By analyzing events of gate operations, the primary performance-dependent attributes of these events were identified for the historical data analysis, such that performance models can be developed under uncertainty to support descriptive, predictive, and prescriptive functions. A system architecture was developed to examine system requirements in support of such a concept. An IGTM prototype was developed to demonstrate the concept, using a distributed network and collaborative decision tools for stakeholders to meet on-time pushback performance under uncertainties.

  6. Exploring Granger causality between global average observed time series of carbon dioxide and temperature

    SciTech Connect

    Kodra, Evan A; Chatterjee, Snigdhansu; Ganguly, Auroop R

    2010-01-01

    Detection and attribution methodologies have been developed over the years to separate anthropogenic from natural drivers of climate change and impacts. A majority of prior attribution studies, which have used climate model simulations and observations or reanalysis datasets, have found evidence for human-induced climate change. This paper tests the hypothesis that Granger causality can be extracted from the bivariate series of globally averaged land surface temperature (GT) observations and observed CO2 in the atmosphere using a reverse cumulative Granger causality test. This proposed extension of the classic Granger causality test is better suited to handle the multisource nature of the data and provides further statistical rigor. The results from this modified test show evidence for Granger causality from a proxy of total radiative forcing (RC), which in this case is a transformation of atmospheric CO2, to GT. Prior literature failed to extract these results via the standard Granger causality test. A forecasting test shows that a holdout set of GT can be better predicted with the addition of lagged RC as a predictor, lending further credibility to the Granger test results. However, since second-order-differenced RC is neither normally distributed nor variance stationary, caution should be exercised in the interpretation of our results.
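    A standard bivariate Granger test of the kind this paper extends can be sketched with two least-squares fits. This is the classic test, not the paper's reverse cumulative variant, and the data-generating coefficients below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 2                       # sample size, lag order
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):               # y is driven by lagged x by construction
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def rss(design, target):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(design, target, rcond=None)
    resid = target - design @ beta
    return float(resid @ resid)

# Restricted model: y on its own lags; unrestricted: add lags of x.
Y = y[p:]
own_lags = np.column_stack([y[p - k: n - k] for k in range(1, p + 1)])
x_lags = np.column_stack([x[p - k: n - k] for k in range(1, p + 1)])
ones = np.ones((n - p, 1))
rss_r = rss(np.hstack([ones, own_lags]), Y)
rss_u = rss(np.hstack([ones, own_lags, x_lags]), Y)

# F-statistic for the null "lags of x add no predictive power for y"
dof = (n - p) - (2 * p + 1)
F = ((rss_r - rss_u) / p) / (rss_u / dof)
print(f"F = {F:.1f}")               # large F => x Granger-causes y
```

    The paper's contribution is a modification of exactly this restricted-versus-unrestricted comparison to cope with multisource observational data.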

  7. On a distinctive feature of problems of calculating time-average characteristics of nuclear reactor optimal control sets

    NASA Astrophysics Data System (ADS)

    Trifonenkov, A. V.; Trifonenkov, V. P.

    2017-01-01

    This article deals with a feature of problems of calculating time-average characteristics of nuclear reactor optimal control sets. The operation of a nuclear reactor during a threatened period is considered, and the optimal control search problem is analysed. Xenon poisoning constrains the possible statements of the problem of calculating time-average characteristics of a set of optimal reactor power-off controls: since the level of xenon poisoning is bounded, an appropriate segment of the time axis must be chosen to ensure that the optimal control problem is consistent. Two procedures for estimating the duration of this segment are considered, and the two estimates were plotted as functions of the xenon limit. The boundaries of the interval of averaging are thereby defined more precisely.

  8. Time-averaging approximation in the interaction picture: absorption line shapes for coupled chromophores with application to liquid water.

    PubMed

    Yang, Mino; Skinner, J L

    2011-10-21

    The time-averaging approximation (TAA), originally developed to calculate vibrational line shapes for coupled chromophores using mixed quantum/classical methods, is reformulated. In the original version of the theory, time averaging was performed for the full one-exciton Hamiltonian, while herein the time averaging is performed on the coupling (off-diagonal) Hamiltonian in the interaction picture. As a result, the influence of the dynamic fluctuations of the transition energies is more accurately described. We compare numerical results of the two versions of the TAA with numerically exact results for the vibrational absorption line shape of the OH stretching modes in neat water. It is shown that the TAA in the interaction picture yields theoretical line shapes that are in better agreement with exact results.

  9. Extending Hierarchical Reinforcement Learning to Continuous-Time, Average-Reward, and Multi-Agent Models

    DTIC Science & Technology

    2003-07-09

    Hierarchical reinforcement learning (HRL) is a general framework that studies how to exploit the structure of actions and tasks to accelerate policy...framework could suffice, we focus in this paper on the MAXQ framework. We describe three new hierarchical reinforcement learning algorithms: continuous-time... reinforcement learning to speed up the acquisition of cooperative multiagent tasks. We extend the MAXQ framework to the multiagent case, which we term

  10. The Effects of Part-Time Employment on High School Students' Grade Point Averages and Rate of School Attendance.

    ERIC Educational Resources Information Center

    Heffez, Jack

    To determine what effects employment will have on high school students' grade point averages and rate of school attendance, the author involved fifty-six students in an experiment. Twenty-eight students were employed part-time under the Youth Incentive Entitlement Project (YIEP). The twenty-eight students in the control group were eligible for…

  11. Strong averaging principle for two-time-scale non-autonomous stochastic FitzHugh-Nagumo system with jumps

    NASA Astrophysics Data System (ADS)

    Xu, Jie; Miao, Yu; Liu, Jicheng

    2016-09-01

    In this paper, we study an averaging principle for a stochastic FitzHugh-Nagumo system with different time scales driven by cylindrical Wiener processes and Poisson jumps, where the slow equation is non-autonomous and the fast equation is autonomous. Under suitable assumptions, we show that the slow component converges mean-square strongly to the solution of the corresponding averaged equation, and the rate of convergence is also obtained as a by-product. Finally, we give some open problems derived from this paper.

  12. Symplectic time-average propagators for the Schrödinger equation with a time-dependent Hamiltonian.

    PubMed

    Blanes, Sergio; Casas, Fernando; Murua, Ander

    2017-03-21

    Several symplectic splitting methods of orders four and six are presented for the step-by-step numerical time integration of the Schrödinger equation when the Hamiltonian is a general explicitly time-dependent real operator. They involve linear combinations of the Hamiltonian evaluated at some intermediate points. We provide the algorithm and the coefficients of the methods, as well as some numerical examples showing their superior performance with respect to other available schemes.
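    The fourth- and sixth-order coefficients are given in the paper itself; as a sketch of the underlying step-by-step idea, here is a second-order Strang splitting for H(t) = p²/2 + V(x, t), with the potential evaluated at the step midpoint. The driven-well potential is a hypothetical example, not one from the paper:

```python
import numpy as np

# Grid and initial Gaussian wavepacket (hbar = m = 1 throughout)
N, L = 256, 40.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
psi = np.exp(-x**2) * (1.0 + 0.0j)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / N))

def V(x, t):
    # Hypothetical explicitly time-dependent potential: a driven harmonic well
    return 0.5 * x**2 + 0.3 * x * np.cos(2.0 * t)

dt, nsteps = 0.005, 400
for n in range(nsteps):
    tm = (n + 0.5) * dt                       # midpoint time => second order
    psi *= np.exp(-0.5j * dt * V(x, tm))      # half potential kick
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))  # kinetic drift
    psi *= np.exp(-0.5j * dt * V(x, tm))      # half potential kick

norm = np.sum(np.abs(psi)**2) * (L / N)
print(f"norm after {nsteps} steps: {norm:.12f}")  # unitary steps preserve the norm
```

    Because every factor is unitary, the wavefunction norm is conserved to machine precision regardless of step size; the higher-order methods of the paper improve the accuracy of the phase, not this structural property.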

  13. Symplectic time-average propagators for the Schrödinger equation with a time-dependent Hamiltonian

    NASA Astrophysics Data System (ADS)

    Blanes, Sergio; Casas, Fernando; Murua, Ander

    2017-03-01

    Several symplectic splitting methods of orders four and six are presented for the step-by-step numerical time integration of the Schrödinger equation when the Hamiltonian is a general explicitly time-dependent real operator. They involve linear combinations of the Hamiltonian evaluated at some intermediate points. We provide the algorithm and the coefficients of the methods, as well as some numerical examples showing their superior performance with respect to other available schemes.

  14. Time-dependent local and average structural evolution of δ-phase 239Pu-Ga alloys

    SciTech Connect

    Smith, Alice I.; Page, Katharine L.; Siewenie, Joan E.; Losko, Adrian S.; Vogel, Sven C.; Gourdon, Olivier A.; Richmond, Scott; Saleh, Tarik A.; Ramos, Michael; Schwartz, Daniel S.

    2016-08-05

    Here, plutonium metal is a very unusual element, exhibiting six allotropes at ambient pressure, between room temperature and its melting point, a complicated phase diagram, and a complex electronic structure. Many phases of plutonium metal are unstable with changes in temperature, pressure, chemical additions, or time. This strongly affects structure and properties, and becomes of high importance, particularly when considering effects on structural integrity over long periods of time [1]. This paper presents a time-dependent neutron total scattering study of the local and average structure of naturally aging δ-phase 239Pu-Ga alloys, together with preliminary results on neutron tomography characterization.

  15. Time-dependent local and average structural evolution of δ-phase 239Pu-Ga alloys

    DOE PAGES

    Smith, Alice I.; Page, Katharine L.; Siewenie, Joan E.; ...

    2016-08-05

    Here, plutonium metal is a very unusual element, exhibiting six allotropes at ambient pressure, between room temperature and its melting point, a complicated phase diagram, and a complex electronic structure. Many phases of plutonium metal are unstable with changes in temperature, pressure, chemical additions, or time. This strongly affects structure and properties, and becomes of high importance, particularly when considering effects on structural integrity over long periods of time [1]. This paper presents a time-dependent neutron total scattering study of the local and average structure of naturally aging δ-phase 239Pu-Ga alloys, together with preliminary results on neutron tomography characterization.

  16. Average-atom treatment of relaxation time in x-ray Thomson scattering from warm dense matter

    NASA Astrophysics Data System (ADS)

    Johnson, W. R.; Nilsen, J.

    2016-03-01

    The influence of finite relaxation times on Thomson scattering from warm dense plasmas is examined within the framework of the average-atom approximation. Presently most calculations use the collision-free Lindhard dielectric function to evaluate the free-electron contribution to the Thomson cross section. In this work, we use the Mermin dielectric function, which includes relaxation time explicitly. The relaxation time is evaluated by treating the average atom as an impurity in a uniform electron gas and depends critically on the transport cross section. The calculated relaxation rates agree well with values inferred from the Ziman formula for the static conductivity and also with rates inferred from a fit to the frequency-dependent conductivity. Transport cross sections determined by the phase-shift analysis in the average-atom potential are compared with those evaluated in the commonly used Born approximation. The Born approximation converges to the exact cross sections at high energies; however, differences that occur at low energies lead to corresponding differences in relaxation rates. The relative importance of including relaxation time when modeling x-ray Thomson scattering spectra is examined by comparing calculations of the free-electron dynamic structure function for Thomson scattering using Lindhard and Mermin dielectric functions. Applications are given to warm dense Be plasmas, with temperatures ranging from 2 to 32 eV and densities ranging from 2 to 64 g/cc.
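    For reference, the Mermin dielectric function that replaces the Lindhard function above is commonly written as follows (ε_L is the Lindhard function and τ the relaxation time; this is the standard textbook form, quoted from memory, so consult Mermin's original paper for the authoritative expression):

```latex
% Mermin (relaxation-time) dielectric function, standard form:
\epsilon_M(q,\omega) = 1 +
\frac{\left(1 + \tfrac{i}{\omega\tau}\right)
      \left[\epsilon_L\!\left(q,\ \omega + \tfrac{i}{\tau}\right) - 1\right]}
     {1 + \tfrac{i}{\omega\tau}\,
      \dfrac{\epsilon_L\!\left(q,\ \omega + \tfrac{i}{\tau}\right) - 1}
            {\epsilon_L(q,0) - 1}}
```

    In the collision-free limit 1/τ → 0 this reduces to the Lindhard function, and the construction is designed to preserve local particle-number conservation, which a naive substitution ω → ω + i/τ would violate.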

  17. Time-resolved and time-averaged stereo-PIV measurements of a unit-ratio cavity

    NASA Astrophysics Data System (ADS)

    Immer, Marc; Allegrini, Jonas; Carmeliet, Jan

    2016-06-01

    An experimental setup was developed to perform wind tunnel measurements on a unit-ratio, 2D open cavity under perpendicular incident flow. The open cavity is characterized by a mixing layer at the cavity top that divides the flow field into a boundary layer flow and a cavity flow. Instead of precisely replicating a specific type of inflow, such as a turbulent flat-plate boundary layer or an atmospheric boundary layer, the setup is capable of simulating a wide range of inflow profiles. This is achieved by using triangular spires as upstream turbulence generators, which can modify the otherwise laminar inflow boundary layer to be moderately turbulent and stationary, or heavily turbulent and intermittent. Measurements were performed by means of time-resolved stereo PIV. The cavity shear layer is analyzed in detail using flow statistics, spectral analysis, and space-time plots. The ability of the setup to generate typical cavity flow cases is demonstrated for characteristic inflow boundary layers, laminar and turbulent. Each case is associated with a distinct shear-layer flow phenomenon: self-sustained oscillations for the former and Kelvin-Helmholtz instabilities for the latter. Additionally, large spires generate a highly turbulent wake flow, resulting in a significantly different cavity flow. Large turbulent sweep and ejection events in the wake flow suppress the typical shear layer, and sporadic near-wall sweep events generate coherent vortices at the upstream edge.

  18. WESSEL: Code for Numerical Simulation of Two-Dimensional Time-Dependent Width-Averaged Flows with Arbitrary Boundaries.

    DTIC Science & Technology

    1985-08-01

    This report should be cited as follows: Thompson, J. F., and Bernard, R. S. 1985. "WESSEL: Code for Numerical Simulation of Two-Dimensional Time...Bodies," Ph. D. Dissertation, Mississippi State University, Mississippi State, Miss. Thompson, J. F. 1983. "A Boundary-Fitted Coordinate Code for General...Vicksburg, Miss. Thompson, J. F., and Bernard, R. S. 1985. "Numerical Modeling of Two-Dimensional Width-Averaged Flows Using Boundary-Fitted Coordinate

  19. California Turnaround Schools: An Analysis of School Improvement Grant Effectiveness

    ERIC Educational Resources Information Center

    Graham, Khalil N.

    2013-01-01

    The purpose of this study was to evaluate the effectiveness of School Improvement Grants (SIGs) in the state of California (CA) in increasing student achievement using the turnaround implementation model. The American Recovery and Reinvestment Act of 2009 (ARRA) included educational priorities focused on fixing America's lowest achieving schools.…

  20. Turnaround: Leading Stressed Colleges and Universities to Excellence

    ERIC Educational Resources Information Center

    Martin, James; Samels, James E.

    2008-01-01

    Nearly one thousand colleges and universities in the United States face major challenges--from catastrophic hurricanes to loss of accreditation to sagging enrollment. What can leaders of such at-risk institutions do to improve their situation? "Turnaround" gives college and university leaders the tools they need to put their fragile institutions…

  1. The BBSome controls IFT assembly and turnaround in cilia.

    PubMed

    Wei, Qing; Zhang, Yuxia; Li, Yujie; Zhang, Qing; Ling, Kun; Hu, Jinghua

    2012-09-01

    The bidirectional movement of intraflagellar transport (IFT) particles, which are composed of motors, IFT-A and IFT-B subcomplexes, and cargoes, is required for the biogenesis and signalling of cilia(1,2). A successful IFT cycle depends on the proper assembly of the massive IFT particle at the ciliary base and its turnaround from anterograde to retrograde transport at the ciliary tip. However, how IFT assembly and turnaround are regulated in vivo remains elusive. From a whole-genome mutagenesis screen in Caenorhabditis elegans, we identified two hypomorphic mutations in dyf-2 and bbs-1 as the only mutants showing normal anterograde IFT transport but defective IFT turnaround at the ciliary tip. Further analyses revealed that the BBSome (refs 3, 4), a group of conserved proteins affected in human Bardet-Biedl syndrome(5) (BBS), assembles IFT complexes at the ciliary base, then binds to the anterograde IFT particle in a DYF-2- (an orthologue of human WDR19) and BBS-1-dependent manner, and lastly reaches the ciliary tip to regulate proper IFT recycling. Our results identify the BBSome as the key player regulating IFT assembly and turnaround in cilia.

  2. Transforming Turnaround Schools in China: Strategies, Achievements, and Challenges

    ERIC Educational Resources Information Center

    Liu, Peng

    2016-01-01

    The existence of turnaround schools has been a problem in the Chinese education system. There are diverse causes including the education system itself, the financial system, and other issues. However, there has been a lack of research to help us fully understand this phenomenon. This article provides a holistic perspective on the strategies the…

  3. Portrait of a Turnaround Leader in a High Needs District

    ERIC Educational Resources Information Center

    Hewitt, Kimberly Kappler; Reitzug, Ulrich

    2015-01-01

    Using portraiture methodology involving interview, observation, and artifact data, this study portrays a turnaround leader, Dr. Susan Gray, in a high needs, rural district in the Southeast. In three years, Gray led Lincoln Elementary from nearly being reconstituted to being an award-winning school. Gray has subsequently been assigned other…

  4. Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround

    ERIC Educational Resources Information Center

    Peurach, Donald J.; Neumerski, Christine M.

    2015-01-01

    The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…

  5. Importing Leaders for School Turnarounds: Lessons and Opportunities

    ERIC Educational Resources Information Center

    Kowal, Julie; Hassel, Emily Ayscue

    2011-01-01

    One of the biggest challenges in education today is identifying talented candidates to successfully lead turnarounds of persistently low-achieving schools. Evidence suggests that the traditional principal pool is already stretched to capacity and cannot supply enough leaders to fix failing schools. But potentially thousands of leaders capable of…

  6. Democratic School Turnarounds: Pursuing Equity and Learning from Evidence

    ERIC Educational Resources Information Center

    Trujillo, Tina; Renée, Michelle

    2013-01-01

    The report "Democratic School Turnarounds" considers the democratic tensions inherent in the federal School Improvement Grant (SIG) policy's market-based school reforms and critiques the research base on which many of these reforms rest. It concludes with a set of recommendations that re-center the purposes of public education…

  7. Policy Perspective: School Turnaround in England. Utilizing the Private Sector

    ERIC Educational Resources Information Center

    Corbett, Julie

    2014-01-01

    This paper, written by strategic partner of the Center on School Turnaround (CST), Julie Corbett, provides research and examples on England's approach to turning around its lowest performing schools. The English education system utilizes private vendors to support chronically low-performing schools and districts. The introduction is followed by…

  8. Rebuilding Organizational Capacity in Turnaround Schools: Insights from the Corporate, Government, and Non-Profit Sectors

    ERIC Educational Resources Information Center

    Murphy, Joseph; Meyers, Coby V.

    2009-01-01

    In this article, we provide a grounded narrative of capacity building in the turnaround equation by exploring the turnaround literature outside of education and applying it to troubled schools. Our analysis is based upon reviews of: (1) 14 comprehensive, historical volumes that examine the turnaround phenomenon; (2) 16 book-length analyses of…

  9. School Turnaround Principals: What Does Initial Research Literature Suggest They Are Doing to Be Successful?

    ERIC Educational Resources Information Center

    Meyers, Coby V.; Hambrick Hitt, Dallas

    2017-01-01

    As the research literature on principals leading school turnaround grows, determining whether or not real differences between good, even effective, principals and turnaround principals becomes increasingly important. Recent federal government policy and investment established turnaround models that emphasize the role of the school principal,…

  10. Identification of continuous-time nonlinear systems: The nonlinear difference equation with moving average noise (NDEMA) framework

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Billings, S. A.

    2015-08-01

    Although a vast number of techniques for the identification of nonlinear discrete-time systems have been introduced, the identification of continuous-time nonlinear systems is still extremely difficult. In this paper, the Nonlinear Difference Equation with Moving Average noise (NDEMA) model which is a general representation of nonlinear systems and contains, as special cases, both continuous-time and discrete-time models, is first proposed. Then based on this new representation, a systematic framework for the identification of nonlinear continuous-time models is developed. The new approach can not only detect the model structure and estimate the model parameters, but also work for noisy nonlinear systems. Both simulation and experimental examples are provided to illustrate how the new approach can be applied in practice.
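    As a toy illustration of difference-equation model identification (a simple discrete-time special case of the general setting, not the NDEMA algorithm itself), the following sketch simulates a nonlinear difference equation with hypothetical parameters and recovers them by least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
u = rng.uniform(-1.0, 1.0, size=n)          # input signal
y = np.zeros(n)
a_true, b_true = 0.7, 0.5                   # hypothetical model parameters
for k in range(1, n):
    # Nonlinear difference equation: linear in y, quadratic in u, white noise
    y[k] = a_true * y[k - 1] + b_true * u[k - 1]**2 + 0.01 * rng.normal()

# Regressor matrix built from the two candidate model terms
Phi = np.column_stack([y[:-1], u[:-1]**2])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)    # close to [0.7, 0.5]
```

    With white equation noise, as here, plain least squares is consistent; with moving-average (coloured) noise it becomes biased, which is precisely the situation a framework like NDEMA is designed to handle.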

  11. Average recovery time from a standardized intravenous sedation protocol and standardized discharge criteria in the general dental practice setting.

    PubMed Central

    Lepere, A. J.; Slack-Smith, L. M.

    2002-01-01

    Intravenous sedation has been used in dentistry for many years because of its perceived advantages over general anesthesia, including shorter recovery times. However, there is limited literature available on recovery from intravenous dental sedation, particularly in the private general practice setting. The aim of this study was to describe recovery times when sedation was conducted in private dental practice and to consider these in relation to age, weight, procedure type, and procedure time. The data were extracted from the intravenous sedation records of one general-anesthesia-trained dental practitioner who provides ambulatory sedation services to a number of private general dental practices in the Perth, Western Australia, metropolitan area. Standardized intravenous sedation techniques as well as clear standardized discharge criteria were utilized. The sedatives used were fentanyl, midazolam, and propofol. Results from 85 patients produced an average recovery time of 19 minutes. Recovery time was not associated with the type or length of the dental procedures performed. PMID:15384295

  12. Non-fragile reduced-order dynamic output feedback ? control for switched systems with average dwell-time switching

    NASA Astrophysics Data System (ADS)

    Zhou, Jianping; Park, Ju H.; Shen, Hao

    2016-02-01

    This paper is concerned with the problem of non-fragile reduced-order dynamic output feedback ? control for both continuous- and discrete-time switched systems with average dwell-time switching. First, Lyapunov conditions for the analysis of asymptotic stability and weighted ? performance are presented. Then, sufficient conditions for the solvability of the problem of controller design are developed in terms of linear matrix inequalities. The design methods are suitable for general switched systems, without the need to impose extra constraints on the system matrices. Finally, numerical examples are presented to illustrate the effectiveness of the proposed methods.

  13. Do Long-Lived Features Really Exist in the Solar Photosphere? II. Contrast of Time-Averaged Granulation Images

    NASA Astrophysics Data System (ADS)

    Brandt, P. N.; Getling, A. V.

    2008-06-01

    The decrease in the rms contrast of time-averaged images with the averaging time is compared between four data sets: (1) a series of solar granulation images recorded at La Palma in 1993, (2) a series of artificial granulation images obtained in numerical simulations by Rieutord et al. ( Nuovo Cimento 25, 523, 2002), (3) a similar series computed by Steffen and his colleagues (see Wedemeyer et al. in Astron. Astrophys. 44, 1121, 2004), (4) a random field with some parameters typical of the granulation, constructed by Rast ( Astron. Astrophys. 392, L13, 2002). In addition, (5) a sequence of images was obtained from real granulation images by using a temporal and spatial shuffling procedure, and the contrast of the average of n images from this sequence as a function of n is analysed. The series (1) of real granulation images exhibits a considerably slower contrast decrease than do both the series (3) of simulated granulation images and the series (4) of random fields. Starting from some relatively short averaging times t, the behaviour of the contrast in series (3) and (4) resembles the t^(-1/2) statistical law, whereas the shuffled series (5) obeys the n^(-1/2) law from n=2 on. Series (2) demonstrates a peculiarly slow decline of contrast, which could be attributed to particular properties of the boundary conditions used in the simulations. Comparisons between the analysed contrast-variation laws indicate quite definitely that the brightness field of solar granulation contains a long-lived component, which could be associated with locally persistent dark intergranular holes and/or with the presence of quasi-regular structures. The suggestion that the random field (4) successfully reproduces the contrast-variation law for the real granulation (Rast in Astron. Astrophys. 392, L13, 2002) can be dismissed.
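    The n^(-1/2) statistical baseline against which the granulation series is compared can be reproduced with synthetic random "images": the rms contrast of a pixelwise average of n independent fields falls as n^(-1/2), and it is the departure from this law that signals a long-lived component. A minimal sketch (image size and amplitude are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
shape, sigma = (128, 128), 1.0

def rms_contrast(img):
    """RMS contrast taken here simply as the standard deviation of the image."""
    return float(np.std(img))

def contrast_of_average(n):
    """RMS contrast of the pixelwise average of n independent random images."""
    stack = rng.normal(0.0, sigma, size=(n, *shape))
    return rms_contrast(stack.mean(axis=0))

for n in (1, 4, 16, 64):
    print(n, contrast_of_average(n))   # falls roughly as sigma / sqrt(n)
```

    A field with persistent structure would level off above this curve, which is exactly the diagnostic applied to the solar data.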

  14. Apollo/Saturn 5 space vehicle countdown. Volume 2: Turnaround from scrub

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The procedures required to prepare a space vehicle for subsequent launch attempt after cancelling lift-off activities are presented. The cancellation must occur after the start of cryogenic loading, but prior to initiation of ignition sequences. The sequence of operations necessary to return the space vehicle to a predetermined configuration at which time the launch count can be resumed or rescheduled for following launch opportunities is developed. The considerations and constraints that are the basis for the scrub/turnaround times are described.

  15. Time-Domain Frequency Correction Method for Averaging Low-Field NMR Signals Acquired in Urban Laboratory Environment

    NASA Astrophysics Data System (ADS)

Qiu, Long-Qing; Liu, Chao; Dong, Hui; Xu, Lu; Zhang, Yi; Krause, Hans-Joachim; Xie, Xiao-Ming; Offenhäusser, Andreas

    2012-10-01

    Using a second-order helium-cooled superconducting quantum interference device gradiometer as the detector, ultra-low-field nuclear magnetic resonance (ULF-NMR) signals of protons are recorded in an urban environment without magnetic shielding. The homogeneity and stability of the measurement field are investigated. NMR signals of protons are studied at night and during working hours. The Larmor frequency variation caused by the fluctuation of the external magnetic field during daytime reaches around 5 Hz when performing multiple measurements for about 10 min, which seriously affects the results of averaging. In order to improve the performance of the averaged data, we suggest the use of a data processor, i.e. the so-called time-domain frequency correction (TFC). For a 50-times averaged signal spectrum, the signal-to-noise ratio is enhanced from 30 to 120 when applying TFC while preserving the NMR spectrum linewidth. The TFC is also applied successfully to the measurement data of the hetero-nuclear J-coupling in 2,2,2-trifluoroethanol.
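A minimal sketch of the frequency-correction idea (estimate each shot's line frequency from its FFT peak, shift it back to the nominal frequency, then average); the simulated drift range, sampling parameters, and noise level are assumptions for illustration, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 1000.0, 4096                 # sampling rate (Hz), points per shot (assumed)
t = np.arange(n) / fs
f0 = 200.0                           # nominal "Larmor" frequency (Hz), hypothetical

# Simulated shots: the line drifts by a few Hz from shot to shot, mimicking
# the daytime field fluctuations the abstract reports.
shots = [np.exp(2j * np.pi * (f0 + rng.uniform(-2.5, 2.5)) * t)
         + 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
         for _ in range(50)]

freqs = np.fft.fftfreq(n, 1 / fs)

def tfc(shot):
    """Shift one shot back to f0 using its FFT peak (time-domain frequency correction)."""
    f_est = freqs[np.argmax(np.abs(np.fft.fft(shot)))]
    return shot * np.exp(-2j * np.pi * (f_est - f0) * t)

# Averaging without correction smears the drifting line; with correction the
# shots add coherently at f0.
naive_spec = np.abs(np.fft.fft(np.mean(shots, axis=0)))
tfc_spec = np.abs(np.fft.fft(np.mean([tfc(s) for s in shots], axis=0)))
```

The corrected average concentrates the signal in a narrow line at the nominal frequency, which is the mechanism behind the reported SNR gain.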

  16. Long-time averaged dynamics of a Bose-Einstein condensate in a bichromatic optical lattice with external harmonic confinement

    NASA Astrophysics Data System (ADS)

    Sakhel, Asaad R.

    2016-07-01

The dynamics of a Bose-Einstein condensate are examined numerically in the presence of a one-dimensional bichromatic optical lattice (BCOL) with external harmonic confinement in the strongly interacting regime. The condensate is excited by a focused, stirring red laser. Two realizations of the BCOL are considered, one with a rational and the other with an irrational ratio of the two constituting wavelengths. The system is simulated by the time-dependent Gross-Pitaevskii equation that is solved using the Crank-Nicolson method in real time. It is found that for a weak BCOL, the long-time averaged physical observables of the condensate respond only very weakly (or not at all) to changes in the secondary OL depth V1, showing that under these conditions the harmonic trap plays a dominant role in governing the dynamics. However, for a much larger strength of the BCOL, the response is stronger as it begins to compete with the external harmonic trap, such that the frequency of Bloch oscillations of the bosons rises with V1, yielding higher time-averages. Qualitatively there is no difference between the dynamics of the condensate resulting from the use of a rational or irrational ratio of the wavelengths since the external harmonic trap washes it out. It is further found that in the presence of an external harmonic trap, the BCOL acts in favor of superflow.
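A single Crank-Nicolson step for a 1D Gross-Pitaevskii-type equation can be sketched as follows; the grid, trap, lattice, and interaction strength are illustrative assumptions (with the nonlinearity frozen over the step, a common simplification), not the paper's setup:

```python
import numpy as np

# i d(psi)/dt = [-0.5 d2/dx2 + V(x) + g|psi|^2] psi  (dimensionless units)
nx, dx, dt, g = 256, 0.1, 0.002, 1.0
x = (np.arange(nx) - nx // 2) * dx
V = 0.5 * x**2 + 0.5 * np.cos(2 * np.pi * x)   # harmonic trap + one lattice color

psi = np.exp(-x**2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)    # normalize the initial state

# Hamiltonian matrix (Dirichlet boundaries) with the nonlinearity frozen.
lap = (np.diag(np.full(nx - 1, 1.0), -1) - 2 * np.eye(nx)
       + np.diag(np.full(nx - 1, 1.0), 1)) / dx**2
H = -0.5 * lap + np.diag(V + g * np.abs(psi)**2)

# Crank-Nicolson (Cayley form): exactly norm-preserving for Hermitian H.
A = np.eye(nx) + 0.5j * dt * H
B = np.eye(nx) - 0.5j * dt * H
psi = np.linalg.solve(A, B @ psi)

norm = np.sum(np.abs(psi)**2) * dx
```

The Cayley form is why Crank-Nicolson is popular for real-time GPE propagation: the step is unitary up to roundoff, so the particle number is conserved over long runs.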

  17. Reduction of time-averaged irradiation speckle nonuniformity in laser-driven plasmas due to target ablation

    NASA Astrophysics Data System (ADS)

    Epstein, R.

    1997-09-01

    In inertial confinement fusion (ICF) experiments, irradiation uniformity is improved by passing laser beams through distributed phase plates (DPPs), which produce focused intensity profiles with well-controlled, reproducible envelopes modulated by fine random speckle. [C. B. Burckhardt, Appl. Opt. 9, 695 (1970); Y. Kato and K. Mima, Appl. Phys. B 29, 186 (1982); Y. Kato et al., Phys. Rev. Lett. 53, 1057 (1984); Laboratory for Laser Energetics LLE Review 33, NTIS Document No. DOE/DP/40200-65, 1987 (unpublished), p. 1; Laboratory for Laser Energetics LLE Review 63, NTIS Document No. DOE/SF/19460-91, 1995 (unpublished), p. 1.] A uniformly ablating plasma atmosphere acts to reduce the contribution of the speckle to the time-averaged irradiation nonuniformity by causing the intensity distribution to move relative to the absorption layer of the plasma. This occurs most directly as the absorption layer in the plasma moves with the ablation-driven flow, but it is shown that the effect of the accumulating ablated plasma on the phase of the laser light also makes a quantitatively significant contribution. Analytical results are obtained using the paraxial approximation applied to the beam propagation, and a simple statistical model is assumed for the properties of DPPs. The reduction in the time-averaged spatial spectrum of the speckle due to these effects is shown to be quantitatively significant within time intervals characteristic of atmospheric hydrodynamics under typical ICF irradiation intensities.

  18. An analysis of error propagation in AERMOD lateral dispersion using Round Hill II and Uttenweiller experiments in reduced averaging times.

    PubMed

    Hoinaski, Leonardo; Franco, Davide; de Melo Lisboa, Henrique

    2017-03-01

Researchers have shown that most dispersion models, including the regulatory models recommended by the United States Environmental Protection Agency (AERMOD and CALPUFF), do not predict well under complex situations. This article presents a novel evaluation of the propagation of errors in the lateral dispersion coefficient of AERMOD, with emphasis on averaging times under 10 min. The sources of uncertainty evaluated were the parameterizations of lateral dispersion ([Formula: see text]), the standard deviation of lateral wind speed ([Formula: see text]) and the processing of obstacle effects. The model's performance was tested against two field tracer experiments: Round Hill II and Uttenweiller. The results show that error propagation from the estimate of [Formula: see text] directly affects the determination of [Formula: see text], especially under Round Hill II experimental conditions. As averaging times are reduced, errors arise in the parameterization of [Formula: see text], even after observational assimilation of [Formula: see text], exposing errors in the Lagrangian time scale parameterization. The assessment of the model in the presence of obstacles shows that implementing a plume rise model enhancement algorithm can improve the performance of the AERMOD model. However, these improvements are small when the obstacles have a complex geometry, as at Uttenweiller.

  19. Heat-flux measurements for the rotor of a full-stage turbine. I - Time-averaged results

    NASA Technical Reports Server (NTRS)

    Dunn, M. G.

    1986-01-01

Time-averaged heat-flux distributions are measured on the blade, with and without gas injection, for a full-stage rotating turbine. Results are presented along the blade in the flow direction at 10, 50, and 90 percent span locations for both the pressure and suction surfaces; enough measurements were obtained to present spanwise distributions as well. The results suggest that the suction-surface laminar flat-plate prediction is in reasonable agreement with the data from the stagnation point up to about 10 percent of the wetted distance. The influence of upstream nozzle guide vane injection is to significantly increase the local blade heat flux in the immediate vicinity of the leading edge.

  20. Two Stage Helical Gearbox Fault Detection and Diagnosis based on Continuous Wavelet Transformation of Time Synchronous Averaged Vibration Signals

    NASA Astrophysics Data System (ADS)

    Elbarghathi, F.; Wang, T.; Zhen, D.; Gu, F.; Ball, A.

    2012-05-01

Vibration signals from a gearbox are usually very noisy, which makes it difficult to find reliable symptoms of a fault in a multistage gearbox. This paper explores the use of time synchronous averaging (TSA) to suppress the noise and the Continuous Wavelet Transform (CWT) to enhance the non-stationary nature of the fault signal for more accurate fault diagnosis. The results obtained in diagnosing an incipient gear breakage show that fault diagnosis results can be improved by using an appropriate wavelet. Moreover, a new scheme based on the level of wavelet coefficient amplitudes of baseline data alone, without faulty data samples, is suggested to select an optimal wavelet.
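The TSA step can be sketched in a few lines, assuming samples already resampled to a fixed number per shaft revolution (in practice this uses a tachometer/once-per-rev reference); the synthetic signal and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
samples_per_rev, n_revs = 512, 40
t = np.arange(samples_per_rev * n_revs)

# Synthetic "gear-mesh" component locked to shaft rotation (20 cycles/rev),
# buried in broadband noise, as in a multistage gearbox measurement.
synchronous = np.sin(2 * np.pi * 20 * t / samples_per_rev)
signal = synchronous + 2.0 * rng.standard_normal(t.size)

# TSA: slice the record into whole revolutions and average them.
# Components synchronous with the shaft add coherently; asynchronous
# noise is attenuated by roughly sqrt(n_revs).
tsa = signal.reshape(n_revs, samples_per_rev).mean(axis=0)

noise_before = np.std(signal[:samples_per_rev] - synchronous[:samples_per_rev])
noise_after = np.std(tsa - synchronous[:samples_per_rev])
```

The denoised one-revolution waveform is what would then be fed to the CWT to localize a tooth fault in time-scale space.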

  1. On the time-dependent calculation of angular averaged vibronic absorption spectra with an application to molecular aggregates

    NASA Astrophysics Data System (ADS)

    Brüning, Christoph; Engel, Volker

    2017-01-01

We introduce an efficient method to determine angular averaged absorption spectra for cases where electronic transitions take place to a manifold of N coupled excited states. The approach rests on the calculation of time-dependent auto-correlation functions which, upon Fourier transformation, yield the spectrum. Assuming the Condon approximation, it is shown that three wave-packet propagations are sufficient to calculate the spectrum. This is in contrast to a direct approach where it is necessary to perform N propagations to arrive at the N^2 cross-correlation functions. The reduction in computation time is of importance for larger molecular aggregates where the number N is determined by the aggregate size. We provide an example by determining spectra for macrocyclic dyes in different dipole geometries.

  2. Comparison of Techniques to Estimate Ammonia Emissions at Cattle Feedlots Using Time-Averaged and Instantaneous Concentration Measurements

    NASA Astrophysics Data System (ADS)

    Shonkwiler, K. B.; Ham, J. M.; Williams, C. M.

    2013-12-01

    Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances where such aerosols can deposit in sensitive regions, potentially causing harm to local ecosystems. However, quantifying the emissions of ammonia from CAFOs through direct measurement is very difficult and costly to perform. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured using inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are only deployed when wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data to modeled emissions from more sophisticated backward Lagrangian stochastic (bLs) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems were deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada) which continuously measured ammonia concentrations instantaneously over a 225-m path. This particular laser is utilized in agricultural settings, and in combination with a bLs model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. This study will first

  3. Mapping axonal density and average diameter using non-monotonic time-dependent gradient-echo MRI

    NASA Astrophysics Data System (ADS)

    Nunes, Daniel; Cruz, Tomás L.; Jespersen, Sune N.; Shemesh, Noam

    2017-04-01

White Matter (WM) microstructures, such as axonal density and average diameter, are crucial to the normal function of the Central Nervous System (CNS) as they are closely related with axonal conduction velocities. Conversely, disruptions of these microstructural features may result in severe neurological deficits, suggesting that their noninvasive mapping could be an important step towards diagnosing and following pathophysiology. Whereas diffusion-based MRI methods have been proposed to map these features, they typically entail the application of powerful gradients, which are rarely available in the clinic, or extremely long acquisition schemes to extract information from parameter-intensive models. In this study, we suggest that simple and time-efficient multi-gradient-echo (MGE) MRI can be used to extract the axon density from susceptibility-driven non-monotonic decay in the time-dependent signal. We show, both theoretically and with simulations, that a non-monotonic signal decay will occur for multi-compartmental microstructures - such as axons and extra-axonal spaces, which were here used as a simple model for the microstructure - and that, for axons parallel to the main magnetic field, the axonal density can be extracted. We then experimentally demonstrate in ex-vivo rat spinal cords that its different tracts - characterized by different microstructures - can be clearly contrasted using the MGE-derived maps. When the quantitative results are compared against ground-truth histology, they reflect the axonal fraction (though with a bias, as evident from Bland-Altman analysis). The extra-axonal fraction can be estimated as well. The results suggest that our model is oversimplified, yet at the same time they evidence the potential and usefulness of the approach for mapping underlying microstructures with a simple and time-efficient MRI sequence.
We further show that a simple general-linear-model can predict the average axonal diameters from the four model parameters, and

  4. Bayesian model averaging method for evaluating associations between air pollution and respiratory mortality: a time-series study

    PubMed Central

    Fang, Xin; Li, Runkui; Kan, Haidong; Bottai, Matteo; Fang, Fang

    2016-01-01

Objective To demonstrate an application of Bayesian model averaging (BMA) with generalised additive mixed models (GAMM) and provide a novel modelling technique to assess the association between inhalable coarse particles (PM10) and respiratory mortality in time-series studies. Design A time-series study using a regional death registry between 2009 and 2010. Setting 8 districts in a large metropolitan area in Northern China. Participants 9559 permanent residents of the 8 districts who died of respiratory diseases between 2009 and 2010. Main outcome measures Per cent increase in daily respiratory mortality rate (MR) per interquartile range (IQR) increase of PM10 concentration and corresponding 95% confidence interval (CI) in single-pollutant and multipollutant (including NOx, CO) models. Results The Bayesian model averaged GAMM (GAMM+BMA) and the optimal GAMM of PM10, multipollutants and principal components (PCs) of multipollutants showed comparable results for the effect of PM10 on daily respiratory MR, that is, one IQR increase in PM10 concentration corresponded to 1.38% vs 1.39%, 1.81% vs 1.83% and 0.87% vs 0.88% increase, respectively, in daily respiratory MR. However, GAMM+BMA gave slightly but noticeably wider CIs for the single-pollutant model (−1.09 to 4.28 vs −1.08 to 3.93) and the PCs-based model (−2.23 to 4.07 vs −2.03 to 3.88). The CIs of the multiple-pollutant model from the two methods are similar, that is, −1.12 to 4.85 versus −1.11 to 4.83. Conclusions The BMA method may represent a useful tool for modelling uncertainty in time-series studies when evaluating the effect of air pollution on fatal health outcomes. PMID:27531727
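The model-averaging idea itself can be sketched with a toy BIC-based approximation to posterior model weights; the candidate "models", their BIC values, and effect estimates below are made up for illustration and have nothing to do with the paper's GAMM machinery:

```python
import math

def bma_weights(bics):
    """Approximate posterior model weights from BIC values:
    w_i proportional to exp(-0.5 * (BIC_i - BIC_best))."""
    best = min(bics)
    raw = [math.exp(-0.5 * (b - best)) for b in bics]
    total = sum(raw)
    return [w / total for w in raw]

# Two hypothetical candidate pollutant models with their effect estimates
# (% increase in mortality per IQR) and BICs.
estimates = [1.38, 1.81]
weights = bma_weights([100.0, 102.0])

# The BMA point estimate is the weight-averaged effect; the averaged CI
# would additionally widen to reflect between-model uncertainty.
averaged = sum(w * e for w, e in zip(weights, estimates))
```

The wider intervals the abstract reports are exactly this between-model uncertainty being carried into the final estimate rather than conditioning on a single "best" model.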

  5. Walking, running, and resting under time, distance, and average speed constraints: optimality of walk–run–rest mixtures

    PubMed Central

    Long, Leroy L.; Srinivasan, Manoj

    2013-01-01

On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available is large, humans walk the whole distance. If the time available is small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk–run mixture at intermediate speeds and a walk–rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients—a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk–run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill. PMID:23365192
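The non-convexity argument can be made concrete with a small numerical sketch: if the cost-per-time curve is non-convex in speed, a time-weighted mix of two speeds can use less energy than moving steadily at the required average speed. The cost function below is hypothetical, chosen only to be non-convex, and is not the paper's measured metabolic curve:

```python
def cost_rate(v):
    """Hypothetical metabolic power (arbitrary units) at speed v;
    non-convex on [0, 2] by construction."""
    return 2.0 + 1.5 * v + v**2 * (2.0 - v)

v_avg = 1.0                 # required average speed = distance / allowed time
steady = cost_rate(v_avg)   # power when moving steadily at v_avg

# Mixture: fraction p of the time at v1 ("rest"), the rest at v2 ("run"),
# constrained so the average speed is still v_avg.
v1, v2 = 0.0, 2.0
p = (v2 - v_avg) / (v2 - v1)
mixture = p * cost_rate(v1) + (1 - p) * cost_rate(v2)
```

Because the chord between (v1, cost) and (v2, cost) dips below the curve at v_avg, the mixture wins; on a convex curve it never would, which is why steady locomotion is optimal in the usual textbook setting.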

  6. Plio-Pleistocene paleomagnetic secular variation and time-averaged field: Ruiz-Tolima volcanic chain, Colombia

    NASA Astrophysics Data System (ADS)

    Sánchez-Duque, A.; Mejia, V.; Opdyke, N. D.; Huang, K.; Rosales-Rivera, A.

    2016-02-01

Paleomagnetic results obtained from 47 Plio-Pleistocene volcanic flows from the Ruiz-Tolima Volcanic Chain (Colombia) are presented. The mean direction of magnetization among these flows, which comprise normal (n = 43) and reversed (n = 4) polarities, is Dec = 1.8°, Inc = 3.2°, α95 = 5.0°, and κ = 18.4. This direction of magnetization coincides with GAD plus a small persistent axial quadrupolar component (around 5%) at the site-average latitude (4.93°). This agreement is robust after applying several selection criteria (α95 < 10°; α95 < 5.5°; polarities: normal, reversed, and tentatively transitional). The data are in agreement with Model G proposed by McElhinny and McFadden (1997), and the fit is improved when sites tentatively identified as transitional (two that otherwise have normal polarity) are excluded from the calculations. The compliance observed with the above-mentioned time-averaged field and paleosecular variation models is also seen in many recent similar studies from low latitudes, with the exception of results from the Galapagos Islands, which coincide with GAD and tend to be near-sided.

  7. Time-averaging approximation in the interaction picture: Anisotropy of vibrational pump-probe experiments for coupled chromophores with application to liquid water

    NASA Astrophysics Data System (ADS)

    Yang, Mino

    2012-10-01

    A time-averaging approximation method developed to efficiently calculate the short-time dynamics of coupled vibrational chromophores using mixed quantum/classical theories is extended in order to be applicable to the study of vibrational dynamics at longer time scales. A quantum mechanical time propagator for long times is decomposed into the product of short-time propagators, and a time-averaging approximation is then applied to each of the latter. Using the extended time-averaging approximation, we calculate the anisotropy decay of the data obtained from impulsive vibrational pump-probe experiments on the OH stretching modes of water, which is in excellent agreement with numerically exact results.

  8. Time-averaged distributions of solute and solvent motions: exploring proton wires of GFP and PfM2DH.

    PubMed

    Velez-Vega, Camilo; McKay, Daniel J J; Aravamuthan, Vibhas; Pearlstein, Robert; Duca, José S

    2014-12-22

Proton translocation pathways of selected variants of the green fluorescent protein (GFP) and Pseudomonas fluorescens mannitol 2-dehydrogenase (PfM2DH) were investigated via an explicit solvent molecular dynamics-based analysis protocol that allows for a direct quantitative relationship between a crystal structure and its time-averaged solute-solvent structure obtained from simulation. Our study of GFP is in good agreement with previous research suggesting that the proton released from the chromophore upon photoexcitation can diffuse through an extended internal hydrogen bonding network that allows for the proton to exit to bulk or be recaptured by the anionic chromophore. Conversely for PfM2DH, we identified the most probable ionization states of key residues along the proton escape channel from the catalytic site to bulk solvent, wherein the solute and high-density solvent crystal structures of binary and ternary complexes were properly reproduced. Furthermore, we proposed a plausible mechanism for this proton translocation process that is consistent with the state-dependent structural shifts observed in our analysis. The time-averaged structures generated from our analyses facilitate validation of MD simulation results and provide a comprehensive profile of the dynamic all-occupancy solvation network within and around a flexible solute, from which detailed hydrogen-bonding networks can be inferred. In this way, potential drawbacks arising from the elucidation of these networks by examination of static crystal structures or via alternate rigid-protein solvation analysis procedures can be overcome. Complementary studies aimed at the effective use of our methodology for alternate implementations (e.g., ligand design) are currently underway.

  9. Turnaround Aid Raising Hopes, Also Concerns

    ERIC Educational Resources Information Center

    Klein, Alyson

    2009-01-01

    As the U.S. Department of Education prepares to throw $3 billion in one-time money on the table to improve perennially foundering schools, a gulf is emerging between what federal officials would like to see done with the funds and what many districts say is their capacity--and inclination--to deliver. While some districts say the federal largess…

10. On quality control procedures for solar radiation and meteorological measures, from subhourly to monthly average time periods

    NASA Astrophysics Data System (ADS)

    Espinar, B.; Blanc, P.; Wald, L.; Hoyer-Klick, C.; Schroedter-Homscheidt, M.; Wanderer, T.

    2012-04-01

Meteorological data measured by ground stations are often a key element in the development and validation of methods exploiting satellite images. These data are considered a reference against which satellite-derived estimates are compared. Long-term radiation and meteorological measurements are available from a large number of measuring stations. However, close examination of the data often reveals a lack of quality, often for extended periods of time. This lack of quality has, in many cases, been the reason for rejecting large amounts of available data. Data quality must be checked before use in order to guarantee the inputs for the methods used in modelling, monitoring, forecasting, etc. To control their quality, data should be submitted to several conditions or tests. After this checking, data that are not flagged by any of the tests are released as plausible data. In this work, a bibliographical survey of quality control tests has been performed for the common meteorological variables (ambient temperature, relative humidity and wind speed) and for the usual solar radiometric variables (the horizontal global and diffuse components of solar radiation and the beam normal component). The different tests have been grouped according to the variable and the averaging time period (sub-hourly, hourly, daily and monthly averages). The quality tests may be classified as follows: • Range checks: tests that verify that values are within a specific range. There are two types of range checks, those based on extrema and those based on rare observations. • Step checks: tests aimed at detecting unrealistic jumps or stagnation in the time series. • Consistency checks: tests that verify the relationship between two or more time series. The gathered quality tests are applicable at all latitudes, as they have not been optimized regionally or seasonally, with the aim of being generic. They have been applied to ground measurements in several geographic locations, which
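The three test families named in the abstract can be illustrated with minimal flagging functions; the numeric thresholds and the closure tolerance below are placeholder assumptions, not the limits gathered in the survey:

```python
def range_check(values, lo, hi):
    """Flag values outside a physically plausible interval (extrema-based)."""
    return [not (lo <= v <= hi) for v in values]

def step_check(values, max_step):
    """Flag unrealistic jumps between consecutive samples."""
    return [False] + [abs(b - a) > max_step for a, b in zip(values, values[1:])]

def consistency_check(global_h, diffuse_h, direct_n, cos_zenith, tol=0.05):
    """Closure test between series: global ~= diffuse + direct * cos(zenith)."""
    return [abs(g - (d + b * mu)) > tol * max(g, 1e-9)
            for g, d, b, mu in zip(global_h, diffuse_h, direct_n, cos_zenith)]

# Example: an air-temperature series (degC) with one spurious spike;
# a sample passes only if no test flags it.
temps = [21.3, 21.5, 60.0, 21.6]
flags = [r or s for r, s in zip(range_check(temps, -40, 50),
                                step_check(temps, 10))]
```

Note the step check flags both sides of the spike (the jump up and the jump back down), which is why flagged samples are usually reviewed rather than silently deleted.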

  11. Characteristic length scales and time-averaged transport velocities of suspended sediment in the mid-Atlantic Region, USA

    USGS Publications Warehouse

    Pizzuto, James; Schenk, Edward R.; Hupp, Cliff R.; Gellis, Allen; Noe, Greg; Williamson, Elyse; Karwan, Diana L.; O'Neal, Michael; Marquard, Julia; Aalto, Rolf; Newbold, Denis

    2014-01-01

    Watershed Best Management Practices (BMPs) are often designed to reduce loading from particle-borne contaminants, but the temporal lag between BMP implementation and improvement in receiving water quality is difficult to assess because particles are only moved downstream episodically, resting for long periods in storage between transport events. A theory is developed that describes the downstream movement of suspended sediment particles accounting for the time particles spend in storage given sediment budget data (by grain size fraction) and information on particle transit times through storage reservoirs. The theory is used to define a suspended sediment transport length scale that describes how far particles are carried during transport events, and to estimate a downstream particle velocity that includes time spent in storage. At 5 upland watersheds of the mid-Atlantic region, transport length scales for silt-clay range from 4 to 60 km, while those for sand range from 0.4 to 113 km. Mean sediment velocities for silt-clay range from 0.0072 km/yr to 0.12 km/yr, while those for sand range from 0.0008 km/yr to 0.20 km/yr, 4–6 orders of magnitude slower than the velocity of water in the channel. These results suggest lag times of 100–1000 years between BMP implementation and effectiveness in receiving waters such as the Chesapeake Bay (where BMPs are located upstream of the characteristic transport length scale). Many particles likely travel much faster than these average values, so further research is needed to determine the complete distribution of suspended sediment velocities in real watersheds.
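The lag-time logic reduces to simple arithmetic once a transport length scale and a storage residence time are in hand; the numbers below are illustrative picks within the silt-clay ranges quoted in the abstract, not values from any specific watershed:

```python
# A particle alternates short transport hops of length L_km with long
# residence times in storage, so its effective downstream velocity is
# storage-dominated.
L_km = 10.0            # transport length scale per event (km), assumed
storage_yr = 100.0     # mean residence time in storage between events (yr), assumed
travel_yr = 0.01       # time actually spent in transport per hop (negligible)

v_eff = L_km / (storage_yr + travel_yr)     # effective velocity, km/yr

# Lag between a BMP and a receiving water a given distance downstream.
distance_km = 50.0
lag_yr = distance_km / v_eff
```

With these placeholder numbers the effective velocity is about 0.1 km/yr and the lag is centuries, matching the order of magnitude the abstract reports (velocities 4-6 orders slower than the water itself).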

  12. An Exploration of Discontinuous Time Synchronous Averaging for Helicopter HUMS Using Cruise and Terminal Area Vibration Data

    NASA Technical Reports Server (NTRS)

    Huff, Edward M.; Mosher, Marianne; Barszcz, Eric

    2002-01-01

Recent research using NASA Ames AH-1 and OH-58C helicopters, and NASA Glenn test rigs, has shown that in-flight vibration data are typically non-stationary [1-4]. The nature and extent of this non-stationarity is most likely produced by several factors operating simultaneously. The aerodynamic flight environment and pilot commands provide continuously changing inputs, with a complex dynamic response that includes automatic feedback control from the engine regulator. It would appear that the combined effects operate primarily through an induced torque profile, which causes concomitant stress modulation at the individual internal gear meshes in the transmission. This notion is supported by several analyses, which show that upwards of 93% of the vibration signal's variance can be explained by knowledge of torque alone. That this relationship is stronger in an AH-1 than an OH-58, where measured non-stationarity is greater, suggests that the overall mass of the vehicle is an important consideration. In the lighter aircraft, the unsteady aerodynamic influences transmit relatively greater unsteady dynamic forces to the mechanical components, quite possibly contributing to its greater non-stationarity. In a recent paper using OH-58C pinion data [5], the authors have shown that in computing a time synchronous average (TSA) for various single-value metric computations, an effective trade-off can be obtained between sample size and measured stationarity by using data from only a single mesh cycle. A mesh cycle, which is defined as the number of rotations required for the gear teeth to return to their original mating position, has the property of representing all of the discrete phase angles of the opposing gears exactly once in the average. Measured stationarity is probably maximized because a single mesh cycle of the pinion gear occurs over a very short span of time, during which time-dependent non-stationary effects are kept to a minimum. Clearly, the advantage of local
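The mesh-cycle length defined in the abstract follows from elementary number theory: for a pinion with n_p teeth meshing with a gear with n_g teeth, the same tooth pairs mate again after n_g / gcd(n_p, n_g) pinion revolutions. The tooth counts below are illustrative, not the OH-58C's:

```python
from math import gcd

def mesh_cycle_revs(n_pinion, n_gear):
    """Pinion revolutions needed for the gear teeth to return to their
    original mating position (one complete mesh cycle)."""
    return n_gear // gcd(n_pinion, n_gear)

# Hypothetical coprime tooth counts: every tooth of the pinion meets every
# tooth of the gear exactly once per cycle, so a TSA over this span samples
# all discrete phase angles of the opposing gears once.
revs = mesh_cycle_revs(19, 71)
```

Gear designers often choose coprime (hunting-tooth) counts precisely so that wear is distributed over every tooth pairing; the same property makes the single-mesh-cycle TSA statistically complete.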

  13. BATSE Observations of Gamma-Ray Burst Spectra. Part 3; Low-Energy Behavior of Time-Averaged Spectra

    NASA Technical Reports Server (NTRS)

    Preece, R. D.; Briggs, M. S.; Pendleton, G. N.; Paciesas, W. S.; Matteson, J. L.; Band, D. L.; Skelton, R. T.; Meegan, C. A.

    1996-01-01

We analyze time-averaged spectra from 86 bright gamma-ray bursts from the first 5 years of the Burst And Transient Source Experiment (BATSE) on board the Compton Gamma Ray Observatory to determine whether the lowest energy data are consistent with a standard spectral form fit to the data at all energies. The BATSE Spectroscopy Detectors have the capability to observe photons at energies as low as 5 keV. Using the gamma-ray burst locations obtained with the BATSE Large Area Detectors, the Spectroscopy Detectors' low-energy response can be modeled accurately. This, together with a postlaunch calibration of the lowest energy Spectroscopy Detector discriminator channel, which can lie in the range 5-20 keV, allows spectral deconvolution over a broad energy range, approx. 5 keV to 2 MeV. The additional coverage allows us to search for evidence of excess emission, or for a deficit, below 20 keV. While no burst has a significant (greater than or equal to 3 sigma) deficit relative to a standard spectral model, we find that 12 bursts have excess low-energy emission, ranging between 1.2 and 5.8 times the model flux, that exceeds 5 sigma in significance. This is evidence for an additional low-energy spectral component in at least some bursts, or for deviations from the power-law spectral form typically used to model gamma-ray bursts at energies below 100 keV.

  14. Estimation of temporal variations in path-averaged atmospheric refractive index gradient from time-lapse imagery

    NASA Astrophysics Data System (ADS)

    Basu, Santasri; McCrae, Jack E.; Fiorino, Steven; Przelomski, Jared

    2016-09-01

The sea level vertical refractive index gradient in the U.S. Standard Atmosphere model is -2.7×10^-8 m^-1 at 500 nm. At any particular location, the actual refractive index gradient varies due to turbulence and local weather conditions. An imaging experiment was conducted to measure the temporal variability of this gradient. A tripod mounted digital camera captured images of a distant building every minute. Atmospheric turbulence caused the images to wander quickly, randomly, and statistically isotropically, and changes in the average refractive index gradient along the path caused the images to move vertically and more slowly. The temporal variations of the refractive index gradient were estimated from the slow, vertical motion of the building over a period of several days. Comparisons with observational data showed the gradient variations derived from the time-lapse imagery correlated well with solar heating and other weather conditions. The time-lapse imaging approach has the potential to be used as a validation tool for numerical weather models. These validations will benefit directed energy simulation tools and applications.
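The inversion from image motion to gradient can be sketched with the simple uniform-gradient geometry, in which a ray bent by a constant vertical gradient makes a target at range L appear displaced by roughly (L/2)·(dn/dz); the camera scale, range, and measured shift below are all hypothetical:

```python
# Geometry sketch (assumed): ray curvature ~ dn/dz, so the apparent angular
# displacement of a target at range L is theta ~ (L/2) * dn/dz, and the
# gradient can be recovered from the slow vertical image motion.
L = 2000.0                 # path length to the building (m), hypothetical
pixel_ifov = 10e-6         # camera angular scale (rad/pixel), hypothetical
shift_px = 2.7             # measured slow vertical image shift (pixels), hypothetical

theta = shift_px * pixel_ifov      # apparent angular displacement (rad)
dn_dz = 2.0 * theta / L            # inferred refractive index gradient (1/m)
```

With these placeholder numbers the inferred gradient comes out at the 10^-8 m^-1 order of the Standard Atmosphere value, which is the regime the experiment tracks over time.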

  15. Paleosecular variation and time-averaged field analysis over the last 10 Ma from a new global dataset (PSV10)

    NASA Astrophysics Data System (ADS)

    Cromwell, G.; Johnson, C. L.; Tauxe, L.; Constable, C.; Jarboe, N.

    2015-12-01

    Previous paleosecular variation (PSV) and time-averaged field (TAF) models draw on compilations of paleodirectional data that lack equatorial and high latitude sites and use latitudinal virtual geomagnetic pole (VGP) cutoffs designed to remove transitional field directions. We present a new selected global dataset (PSV10) of paleodirectional data spanning the last 10 Ma. We include all results calculated with modern laboratory methods, regardless of site VGP colatitude, that meet statistically derived selection criteria. We exclude studies that target transitional field states or identify significant tectonic effects, and correct for any bias from serial correlation by averaging directions from sequential lava flows. PSV10 has an improved global distribution compared with previous compilations, comprising 1519 sites from 71 studies. VGP dispersion in PSV10 varies with latitude, exhibiting substantially higher values in the southern hemisphere than at corresponding northern latitudes. Inclination anomaly estimates at many latitudes are within error of an expected GAD field, but significant negative anomalies are found at equatorial and mid-northern latitudes. The current PSV models, Model G and TK03, do not fit observed PSV or TAF latitudinal behavior in PSV10, or subsets of normal and reverse polarity data, particularly for southern hemisphere sites. Attempts to fit these observations with simple modifications to TK03 yielded slight statistical improvements, but the misfits still exceed acceptable errors. The root-mean-square misfit of TK03 (and subsequent iterations) is substantially lower for the normal polarity subset of PSV10, compared to reverse polarity data. Two-thirds of data in PSV10 are normal polarity, most of which are from the last 5 Ma, so we develop a new TAF model using this subset of data. We use the resulting TAF model to explore whether new statistical PSV models can better describe our new global compilation.
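VGP dispersion in compilations like this one is conventionally computed as the angular standard deviation of the poles about the spin axis. A minimal sketch under that standard definition (not code from the PSV10 study):

```python
# Angular VGP dispersion S, with S^2 = (1/(N-1)) * sum(delta_i^2), where
# delta_i is the angle between VGP i and the geographic (spin-axis) pole.
# For a pole at latitude lat_i, that angle is simply 90 - lat_i degrees.
import numpy as np

def vgp_dispersion(vgp_latitudes_deg):
    deltas = 90.0 - np.asarray(vgp_latitudes_deg, dtype=float)
    n = len(deltas)
    return np.sqrt(np.sum(deltas**2) / (n - 1))

# Illustrative site VGP latitudes (degrees):
print(round(vgp_dispersion([80, 75, 85, 70, 78]), 2))  # → 14.95
```

Published studies usually also apply a latitudinal VGP cutoff before computing S; PSV10's selection deliberately avoids that step, as described above.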

  16. An Integrated Gate Turnaround Management Concept Leveraging Big Data Analytics for NAS Performance Improvements

    NASA Technical Reports Server (NTRS)

    Chung, William W.; Ingram, Carla D.; Ahlquist, Douglas Kurt; Chachad, Girish H.

    2016-01-01

    "Gate Turnaround" plays a key role in the National Airspace System (NAS) gate-to-gate performance by receiving aircraft when they reach their destination airport, and delivering aircraft into the NAS upon departing from the gate and subsequent takeoff. The time spent at the gate in meeting the planned departure time is influenced by many factors, often with considerable uncertainties. Uncertainties such as weather, early or late arrivals, disembarking and boarding passengers, unloading/reloading cargo, aircraft logistics/maintenance services and ground handling, traffic in ramp and movement areas for taxi-in and taxi-out, and departure queue management for takeoff are likely encountered on a daily basis. The Integrated Gate Turnaround Management (IGTM) concept leverages relevant historical data to support optimization of gate operations, which include arrival, at-gate, and departure activities subject to constraints (e.g., available gates at arrival, ground crew and equipment for the gate turnaround, and over-capacity demand upon departure), and collaborative decision making. The IGTM concept provides effective information services and decision tools to stakeholders, such as airline dispatchers, gate agents, airport operators, ramp controllers, and air traffic control (ATC) traffic managers and ground controllers, to mitigate uncertainties arising from both nominal and off-nominal airport gate operations. IGTM will provide NAS stakeholders customized decision-making tools through a User Interface (UI) by leveraging historical data (Big Data), net-enabled Air Traffic Management (ATM) live data, and analytics according to dependencies among NAS parameters, enabling stakeholders to manage and optimize NAS performance in the gate turnaround domain. The application will give stakeholders predictable results based on past and current NAS performance according to selected decision trees through the UI. 
The predictable results are generated based on analysis of the

  17. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, related work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
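For scalar weights, the optimal average reduces to an eigenvalue problem: the average quaternion is the unit eigenvector, corresponding to the largest eigenvalue, of the weighted sum of quaternion outer products. A minimal NumPy sketch (scalar-last convention assumed; this is an illustration, not code from the Note):

```python
import numpy as np

def average_quaternions(quats, weights=None):
    """Optimal scalar-weighted quaternion average: the unit eigenvector
    of M = sum_i w_i * q_i q_i^T associated with the largest eigenvalue."""
    quats = np.asarray(quats, dtype=float)
    if weights is None:
        weights = np.ones(len(quats))
    # Accumulate the symmetric 4x4 matrix M; the sign ambiguity between
    # q and -q drops out because both give the same outer product.
    M = sum(w * np.outer(q, q) for w, q in zip(weights, quats))
    eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    return eigvecs[:, -1]                  # eigenvector of the largest one

q = np.array([0.0, 0.0, 0.0, 1.0])         # identity rotation, scalar last
print(average_quaternions([q, -q]))        # same rotation, up to sign
```

Because q and -q represent the same rotation, simple component-wise averaging would cancel them to zero, while the eigenvector formulation recovers the common rotation; this is the practical reason naive arithmetic averaging of quaternions fails.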

  18. Analysis of trace contaminants in hot gas streams using time-weighted average solid-phase microextraction: proof of concept.

    PubMed

    Woolcock, Patrick J; Koziel, Jacek A; Cai, Lingshuang; Johnston, Patrick A; Brown, Robert C

    2013-03-15

    Time-weighted average (TWA) passive sampling using solid-phase microextraction (SPME) and gas chromatography was investigated as a new method of collecting, identifying and quantifying contaminants in process gas streams. Unlike previous TWA-SPME techniques using the retracted fiber configuration (fiber within needle) to monitor ambient conditions or relatively stagnant gases, this method was developed for fast-moving process gas streams at temperatures approaching 300 °C. The goal was to develop a consistent and reliable method of analyzing low concentrations of contaminants in hot gas streams without performing time-consuming exhaustive extraction with a slipstream. This work in particular aims to quantify trace tar compounds found in a syngas stream generated from biomass gasification. This paper evaluates the concept of retracted SPME at high temperatures by testing the three essential requirements for TWA passive sampling: (1) zero-sink assumption, (2) consistent and reliable response by the sampling device to changing concentrations, and (3) equal concentrations in the bulk gas stream relative to the face of the fiber syringe opening. Results indicated the method can accurately predict gas stream concentrations at elevated temperatures. Evidence was also discovered to validate the existence of a second boundary layer within the fiber during the adsorption/absorption process. This limits the technique to operating within reasonable mass loadings and loading rates, established by appropriate sampling depths and times for concentrations of interest. A limit of quantification for the benzene model tar system was estimated at 0.02 g m⁻³ (8 ppm) with a limit of detection of 0.5 mg m⁻³ (200 ppb). Using the appropriate conditions, the technique was applied to a pilot-scale fluidized-bed gasifier to verify its feasibility. Results from this test were in good agreement with literature and prior pilot plant operation, indicating the new method can measure low
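The basic TWA relation behind retracted-fiber passive sampling is that, under the zero-sink assumption, the average gas concentration follows from the mass collected over the exposure time at a known sampling rate, C = m/(SR·t). A hedged sketch of that arithmetic (names and values illustrative, not from the paper):

```python
# Time-weighted average concentration from a passive sampler:
# C = m / (SR * t), valid while the fiber acts as a zero sink.
def twa_concentration(mass_ng, sr_cm3_min, minutes):
    """Return the time-weighted average concentration in ng/cm^3."""
    return mass_ng / (sr_cm3_min * minutes)

# e.g. 1.2 ng collected over 60 min at SR = 0.015 cm^3/min:
print(round(twa_concentration(1.2, 0.015, 60), 3))  # → 1.333
```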

  19. Catalyst separation method reduces Platformer turnaround costs

    SciTech Connect

    Blashka, S.R.; Welch, J.G.; Nite, K.; Furfaro, A.P.

    1995-09-18

    A catalyst separation technology that segregates catalyst particles by density has proved successful in recovering CCR (continuous catalyst regeneration) Platforming catalyst that had been contaminated with heel catalyst (non-flowing catalyst). UOP's CCR Platforming process converts naphtha to high-octane gasoline components and aromatics for petrochemical use. The reforming reactions take place in a series of Platforming reactors loaded with platinum-containing reforming catalyst. CCR Platforming technology incorporates a moving catalyst bed in a system that permits addition and withdrawal of catalyst from the reactor while the unit is operating. As the catalyst circulates through the reactors, it builds up typical carbon levels of 5%. Over time, the heel catalyst will build up carbon levels as high as 50%. When the catalyst is unloaded, heel catalyst is released, contaminating the last fraction of catalyst removed from the reactor. The heel-contaminated catalyst should not be reused because only a small fraction of the carbon on the heel catalyst is removed in the regeneration section. If returned to inventory, the carbon would react rapidly, causing temperature excursions. If heel-contaminated catalyst is reused, there is a high potential for damage to the unit. Density grading was used, after ex situ regeneration, to recover the uncontaminated catalyst for reuse.

  20. A Simulation Based Approach for Contingency Planning for Aircraft Turnaround Operation System Activities in Airline Hubs

    NASA Technical Reports Server (NTRS)

    Adeleye, Sanya; Chung, Christopher

    2006-01-01

    Commercial aircraft undergo a significant number of maintenance and logistical activities during the turnaround operation at the departure gate. By analyzing the sequencing of these activities, more effective turnaround contingency plans may be developed for logistical and maintenance disruptions. Turnaround contingency plans are particularly important as any kind of delay in a hub based system may cascade into further delays with subsequent connections. The contingency sequencing of the maintenance and logistical turnaround activities were analyzed using a combined network and computer simulation modeling approach. Experimental analysis of both current and alternative policies provides a framework to aid in more effective tactical decision making.

  1. Paleosecular variation and time-averaged field recorded in late Pliocene-Holocene lava flows from Mexico

    NASA Astrophysics Data System (ADS)

    Mejia, V.; BöHnel, H.; Opdyke, N. D.; Ortega-Rivera, M. A.; Lee, J. K. W.; Aranda-Gomez, J. J.

    2005-07-01

    This paper presents results from 13 paleomagnetic sites from an area west of Mexico City and 7 sites from an area of dispersed monogenetic volcanism in the state of San Luis Potosi, accompanied by seven 40Ar/39Ar radiometric dates. An analysis of secular variation and time-averaged paleomagnetic field in the Trans-Mexican Volcanic Belt (TMVB), using compiled data both newly obtained and from the literature, is presented. Interpretation can best be constrained after excluding from the data set sites that appear to be tectonically affected. The selected data include 187 sites of late Pliocene-Holocene age. The mean direction among these sites is Dec = 358.8°, Inc = 31.6°, α95 = 2.0°, k = 29. This direction does not overlap the expected geocentric axial dipole (GAD) but is consistent with a GAD plus a 5% quadrupole. The virtual geomagnetic pole scatter of this group of sites (12.7°, with lower and upper 95% confidence limits of 11.9° and 14.1°) is consistent with the value expected from Model G (13.6°).

  2. A COCHLEAR MODEL USING THE TIME-AVERAGED LAGRANGIAN AND THE PUSH-PULL MECHANISM IN THE ORGAN OF CORTI

    PubMed Central

    YOON, YONGJIN; PURIA, SUNIL; STEELE, CHARLES R.

    2010-01-01

    In our previous work, the basilar membrane velocity VBM for a gerbil cochlea was calculated and compared with physiological measurements. The calculated VBM showed excessive phase excursion and, in the active case, a best-frequency place shift of approximately two fifths of an octave higher. Here we introduce a refined model that uses the time-averaged Lagrangian for the conservative system to resolve the phase excursion issues. To improve the overestimated best-frequency place found in the previous feed-forward active model, we implement in the new model a push-pull mechanism from the outer hair cells and phalangeal process. Using this new model, the VBM for the gerbil cochlea was calculated and compared with animal measurements. The results show excellent agreement for mapping the location of the maximum response to frequency, while the agreement for the response at a fixed point as a function of frequency is excellent for the amplitude and good for the phase.

  3. Time-averaged heat transfer and pressure measurements and comparison with prediction for a two-stage turbine

    NASA Astrophysics Data System (ADS)

    Dunn, M. G.; Kim, J.; Civinskas, K. C.; Boyle, R. J.

    1992-06-01

    Time-averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row and the first-stage blade row of the Rocketdyne Space Shuttle Main Engine two-stage fuel-side turbine. These measurements were made at 10 percent, 50 percent, and 90 percent span on both the pressure and suction surfaces of the component. Stanton-number distributions are also reported for the second-stage vane at 50 percent span. A shock tube is used as a short-duration source of heated and pressurized air to which the turbine is subjected. Platinum thin-film gages are used to obtain the heat-flux measurements and miniature silicone-diaphragm pressure transducers are used to obtain the surface pressure measurements. The first-stage vane Stanton number distributions are compared with predictions obtained using a quasi-3D Navier-Stokes solution and a version of STAN5. This same N-S technique was also used to obtain predictions for the first blade and the second vane.

  4. Project teams produce successful turnaround for Illinois hospital.

    PubMed

    2003-10-01

    When Jay Kreuzer was hired as president and CEO of West Suburban Health Care, it didn't take him long to realize the organization was headed in the wrong direction. The not-for-profit system, which includes a 258-bed medical center, was projected to end fiscal year 2001 with a loss of $19 million. Kreuzer put together a team that implemented an organization-wide performance improvement effort. In just two years the turnaround has been completed, as West Suburban ended fiscal year 2003 with a small surplus.

  5. Meeting the challenge of a group practice turnaround.

    PubMed

    Porn, L M

    2001-03-01

    Many healthcare organizations that acquired group practices to enhance their market share have found that the practices have not met their financial goals. Turning around a financially troubled, hospital-owned group practice is challenging but not impossible for healthcare organizations that take certain basic actions. Direction, data, desire, dedication, and drive must be present to effect the financial turnaround of a group practice. The healthcare organization needs to evaluate the practice's strategy and operations and identify the issues that are hindering the practice's ability to optimize revenues. Efforts to achieve profitable operations have to be ongoing.

  6. Time average neutralized migma: A colliding beam/plasma hybrid physical state as aneutronic energy source — A review

    NASA Astrophysics Data System (ADS)

    Maglich, Bogdan C.

    1988-08-01

    A D⁺ beam of kinetic energy Ti = 0.7 MeV was stored in a "simple mirror" magnetic field as self-colliding orbits or migma and neutralized by ambient, oscillating electrons whose bounce frequencies were externally controlled. Space charge density was exceeded by an order of magnitude without instabilities. Three nondestructive diagnostic methods allowed measurements of ion orbit distribution, ion storage times, ion energy distribution, nuclear reaction rate, and reaction product spectrum. Migma formed a disc 20 cm in diameter and 0.5 cm thick. Its ion density was sharply peaked in the center; the ion-to-electron temperature ratio was Ti/Te ≈ 10³; ion-electron temperature equilibrium was never reached. The volume-average and central D⁺ densities were n = 3.2 × 10⁹ cm⁻³ and nc = 3 × 10¹⁰ cm⁻³ respectively, compared to the space charge limit density nsc = 4 × 10⁸ cm⁻³. The energy confinement time was τc = 20-30 s, limited by charge exchange reactions with the residual gas in the vacuum (5 × 10⁻⁹ Torr). The ion energy loss rate was 1.4 keV/s. None of the instabilities that were observed in mirrors at several orders of magnitude lower density occurred. The proton energy spectrum for d + d → T + p + 4 MeV shows that the deuterons collided at an average crossing angle of 160°. Evidence for exponential density buildup has also been observed. Relative to Migma III results and measured in terms of the product of ion energy E, density n, and confinement time τ, device performance was improved by a factor of 500. Using the central fast ion density, we obtained the triple product Tnτ ≅ 4 × 10¹⁴ keV s cm⁻³, which is greater than that of the best fusion devices. The luminosity (collision rate per unit cross section) was ~10²⁹ cm⁻²s⁻¹, with 0.7 A ion current through the migma center. The stabilizing features of migma are: (1) large Larmor radius; (2) small canonical angular momentum; (3) short axial length z (disc shape); (4) nonadiabatic motions in r and z
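The quoted triple product can be checked directly from the figures in the abstract (using the central fast ion density, as the authors state):

```python
# Arithmetic check of the quoted triple product T*n*tau, with
# T = 0.7 MeV = 700 keV, n_c = 3e10 cm^-3, tau_c ~ 20 s.
T_keV = 0.7e3          # ion kinetic energy in keV
n_c = 3e10             # central D+ density, cm^-3
tau_s = 20.0           # energy confinement time, s
print(f"{T_keV * n_c * tau_s:.1e}")   # 4.2e+14 keV s cm^-3, ~4e14 as quoted
```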

  7. Can Granger causality delineate natural versus anthropogenic drivers of climate change from global-average multivariate time series?

    NASA Astrophysics Data System (ADS)

    Kodra, E. A.; Chatterjee, S.; Ganguly, A. R.

    2009-12-01

    The Fourth Assessment Report (AR4) of the Intergovernmental Panel on Climate Change (IPCC) notes with a high degree of certainty that global warming can be attributed to anthropogenic emissions. Detection and attribution studies, which attempt to delineate human influences on regional- and decadal-scale climate change or its impacts, use a variety of techniques, including Granger causality. Recently, Granger causality was used as a tool for detection and attribution in climate based on a spatio-temporal data mining approach. However, the degree to which Granger causality may be able to delineate natural versus anthropogenic drivers of change in these situations needs to be thoroughly investigated. As a first step, we use multivariate global-average time series of observations to test the performance of Granger causality. We apply the popular Granger F-tests to Radiative Forcing (RF), which is a transformation of carbon dioxide (CO2), and Global land surface Temperature anomalies (GT). Our preliminary results with observations appear to suggest that RF Granger-causes GT, which seem to become more apparent with more data. However, carefully designed simulations indicate that these results are not reliable and may, in fact, be misleading. On the other hand, the same observation- and simulation-driven methodologies, when applied to the El Niño Southern Oscillation (ENSO) index, clearly show reliable Granger-causality from ENSO to GT. We develop and test several hypotheses to explain why the Granger causality tests between RF and GT are not reliable. We conclude that the form of Granger causality used in this study, and in past studies reported in the literature, is sensitive to data availability, random variability, and especially whether the variables arise from a deterministic or stochastic process. Simulations indicate that Granger causality in this form performs poorly, even in simple linear effect cases, when applied to one deterministic and one stochastic time
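The Granger F-test referenced here compares a restricted autoregression of the target series against one augmented with lags of the candidate cause. A self-contained lag-1 sketch on synthetic data (illustrative only, not the study's code or data):

```python
# Lag-1 Granger F-test in plain NumPy: does past x help predict y
# beyond y's own past? Large F favors "x Granger-causes y".
import numpy as np

def granger_f(y, x):
    yt, y1, x1 = y[1:], y[:-1], x[:-1]
    def rss(X):
        beta = np.linalg.lstsq(X, yt, rcond=None)[0]
        r = yt - X @ beta
        return r @ r
    ones = np.ones_like(y1)
    rss_r = rss(np.column_stack([ones, y1]))       # restricted: own lag only
    rss_u = rss(np.column_stack([ones, y1, x1]))   # unrestricted: adds lag of x
    n, k = len(yt), 3
    return (rss_r - rss_u) / (rss_u / (n - k))     # F(1, n-k) statistic

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):                # y driven by lagged x plus noise
    y[t] = 0.6 * x[t - 1] + 0.3 * y[t - 1] + rng.standard_normal()

print(granger_f(y, x) > 10.0)          # prints True: strong evidence
```

The abstract's caution applies directly to such tests: the F-statistic is only meaningful under the stochastic, linear, stationary assumptions baked into the regression, which is why deterministic forcing like RF can produce misleading results.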

  8. Low to Moderate Average Alcohol Consumption and Binge Drinking in Early Pregnancy: Effects on Choice Reaction Time and Information Processing Time in Five-Year-Old Children

    PubMed Central

    Kilburn, Tina R.; Eriksen, Hanne-Lise Falgreen; Underbjerg, Mette; Thorsen, Poul; Mortensen, Erik Lykke; Landrø, Nils Inge; Bakketeig, Leiv S.; Grove, Jakob; Sværke, Claus; Kesmodel, Ulrik Schiøler

    2015-01-01

    Background Deficits in information processing may be a core deficit after fetal alcohol exposure. This study was designed to investigate the possible effects of weekly low to moderate maternal alcohol consumption and binge drinking episodes in early pregnancy on choice reaction time (CRT) and information processing time (IPT) in young children. Method Participants were sampled based on maternal alcohol consumption during pregnancy. At the age of 60–64 months, 1,333 children were administered a modified version of the Sternberg paradigm to assess CRT and IPT. In addition, a test of general intelligence (WPPSI-R) was administered. Results Adjusted for a wide range of potential confounders, this study showed no significant effects of average weekly maternal alcohol consumption during pregnancy on CRT or IPT. There was, however, an indication of slower CRT associated with binge drinking episodes in gestational weeks 1–4. Conclusion This study observed no significant effects of average weekly maternal alcohol consumption during pregnancy on CRT or IPT as assessed by the Sternberg paradigm. However, there were some indications of CRT being associated with binge drinking during very early pregnancy. Further large-scale studies are needed to investigate effects of different patterns of maternal alcohol consumption on basic cognitive processes in offspring. PMID:26382068

  9. Finite-time H∞ control for a class of discrete-time Markovian jump systems with partly unknown time-varying transition probabilities subject to average dwell time switching

    NASA Astrophysics Data System (ADS)

    Cheng, Jun; Zhu, Hong; Zhong, Shouming; Zhang, Yuping; Li, Yuanyuan

    2015-04-01

    An extension of a fixed transition probability (TP) Markovian switching model to combine time-varying TPs has offered another set of useful regime-switching models. This paper is concerned with the problem of finite-time H∞ control for a class of discrete-time Markovian jump systems with partly unknown time-varying TPs subject to average dwell time switching. The so-called time-varying TPs mean that the TPs are varying but invariant within an interval. The variation of the TPs considered here is subject to a class of slow switching signal. Based on selecting the appropriate Lyapunov-Krasovskii functional, sufficient conditions of finite-time boundedness of Markovian jump systems are derived and the system trajectory stays within a prescribed bound. Finally, an example is given to illustrate the efficiency of the proposed method.
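For context, the average dwell time constraint that defines this class of slow switching signals is standardly written as follows (textbook definition, not reproduced from the paper): letting $N_\sigma(T_1, T_2)$ denote the number of switches of the signal $\sigma$ on the interval $(T_1, T_2)$,

```latex
N_\sigma(T_1, T_2) \le N_0 + \frac{T_2 - T_1}{\tau_a}, \qquad \forall\, T_2 \ge T_1 \ge 0,
```

where $N_0 \ge 0$ is the chatter bound and $\tau_a > 0$ is the average dwell time; admitting only signals with sufficiently large $\tau_a$ yields the slow-switching class under which the finite-time boundedness conditions are derived.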

  10. Time averaging and stratigraphic disorder of molluscan assemblages in the Holocene sediments in the NE Adriatic (Piran)

    NASA Astrophysics Data System (ADS)

    Tomasovych, Adam; Gallmetzer, Ivo; Haselmair, Alexandra; Kaufman, Darrell S.; Zuschin, Martin

    2016-04-01

    Stratigraphic changes in temporal resolution of fossil assemblages and the degree of their stratigraphic mixing in the Holocene deposits are of high importance in paleoecology, conservation paleobiology and paleoclimatology. However, few studies quantified downcore changes in time averaging and in stratigraphic disorder on the basis of dating of multiple shells occurring in individual stratigraphic layers. Here, we investigate downcore changes in frequency distribution of postmortem ages of the infaunal bivalve Gouldia minima in two ~150 cm-thick piston cores (separated by more than 1 km) in the northern Adriatic Sea, close to the Slovenian city Piran at a depth of 24 m. We use radiocarbon-calibrated amino acid racemization to obtain postmortem ages of 564 shells, and quantify age-frequency distributions in 4-5 cm-thick stratigraphic intervals (with 20-30 specimens sampled per interval). Inter-quartile range for individual 4-5 cm-thick layers varies between 850 and 1,700 years, and the range encompassing 95% of age data varies between 2,000 and 5,000 years in both cores. The uppermost sediments (20 cm) are age-homogenized and show that median age of shells is ~700-800 years. The interval between 20 and 90 cm shows a gradual increase in median age from ~2,000 to ~5,000 years, with maximum age ranging to ~8,000 years. However, the lowermost parts of both cores show a significant disorder, with median age of 3,100-3,300 years. This temporal disorder implies that many shells were displaced vertically by ~1 m. Absolute and proportional abundance of the bivalve Gouldia minima strongly increases towards the top of both cores. We hypothesize that such an increase in abundance, when coupled with depth-declining reworking, can explain stratigraphic disorder because numerically abundant young shells from the top of the core were more likely buried to larger sediment depths than less frequent shells at intermediate sediment depths.

  11. Mapping the time-averaged distribution of combustion-derived air pollutants in the San Francisco Bay Area

    NASA Astrophysics Data System (ADS)

    Yu, C.; Zinniker, D. A.; Moldowan, J.

    2010-12-01

    Urban air pollution is an ongoing and complicated problem for both residents and policy makers. This study aims to provide a better understanding of the geographic source and fate of organic pollutants in a dynamic urban environment. Natural and artificial hydrophobic substrates were employed for the passive monitoring and mapping of ground-level organic pollutants in the San Francisco Bay area. We focused specifically on volatile and semi-volatile polycyclic aromatic hydrocarbons (PAHs). These compounds are proxies for a broad range of combustion related air pollutants derived from local, regional, and global combustion sources. PAHs include several well-studied carcinogens and can be measured easily and accurately across a broad range of concentrations. Estimates of time-integrated vapor phase and particle deposition were made from measuring accumulated PAHs in the leaves of several widely distributed tree species (including the Quercus agrifolia and Sequoia sempervirens) and an artificial wax film. Samples were designed to represent pollutant exposure over a period of one to several months. The selective sampling and analysis of hydrophobic substrates provides insight into the average geographic distribution of ground-level air pollutants in a simple and inexpensive way. However, accumulated organics do not directly correlate with human exposure, and the source signature of PAHs may be obscured by transport, deposition, and flux processes. We attempted to address some of these complications by studying 1) PAH accumulation rates within substrates in a controlled microcosm, 2) differences in PAH abundance in different substrate types at the same locality, and 3) samples near long-term high volume air sampling stations. We also set out to create a map of PAH concentrations based on our measurements. 
This map can be directly compared with interpolated data from high-volume sampling stations and used to address questions concerning atmospheric heterogeneity of these

  12. Time-weighted average sampling of airborne propylene glycol ethers by a solid-phase microextraction device.

    PubMed

    Shih, H C; Tsai, S W; Kuo, C H

    2012-01-01

    A solid-phase microextraction (SPME) device was used as a diffusive sampler for airborne propylene glycol ethers (PGEs), including propylene glycol monomethyl ether (PGME), propylene glycol monomethyl ether acetate (PGMEA), and dipropylene glycol monomethyl ether (DPGME). Carboxen-polydimethylsiloxane (CAR/PDMS) SPME fiber was selected for this study. A polytetrafluoroethylene (PTFE) tubing was used as the holder, and the SPME fiber assembly was inserted into the tubing as a diffusive sampler. The diffusion path length and area of the sampler were 0.3 cm and 0.00086 cm², respectively. The theoretical sampling constants at 30°C and 1 atm for PGME, PGMEA, and DPGME were 1.50 × 10⁻², 1.23 × 10⁻² and 1.14 × 10⁻² cm³ min⁻¹, respectively. For evaluations, known concentrations of PGEs around the threshold limit values/time-weighted average with specific relative humidities (10% and 80%) were generated both by the air bag method and the dynamic generation system, while 15, 30, 60, 120, and 240 min were selected as the time periods for vapor exposures. Comparisons of the SPME diffusive sampling method to Occupational Safety and Health Administration (OSHA) organic Method 99 were performed side-by-side in an exposure chamber at 30°C for PGME. A gas chromatography/flame ionization detector (GC/FID) was used for sample analysis. The experimental sampling constants of the sampler at 30°C were (6.93 ± 0.12) × 10⁻¹, (4.72 ± 0.03) × 10⁻¹, and (3.29 ± 0.20) × 10⁻¹ cm³ min⁻¹ for PGME, PGMEA, and DPGME, respectively. The adsorption of chemicals on the stainless steel needle of the SPME fiber was suspected to be one of the reasons why significant differences between theoretical and experimental sampling rates were observed. Correlations between the results for PGME from both SPME device and OSHA organic Method 99 were linear (r = 0.9984) and consistent (slope = 0.97 ± 0.03). Face velocity (0-0.18 m/s) also proved to have no effects on the sampler
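The theoretical sampling constants quoted above follow from Fick's first law for the retracted-fiber geometry, SR = D·A/L. A sketch using the sampler geometry given in the abstract; the diffusion coefficient value is an illustrative assumption, not taken from the paper:

```python
# Theoretical diffusive sampling constant for a retracted-fiber SPME
# sampler: SR = D*A/L, with A = 0.00086 cm^2 and L = 0.3 cm as quoted.
def sampling_constant(D_cm2_min, area_cm2=0.00086, path_cm=0.3):
    """Return SR in cm^3/min."""
    return D_cm2_min * area_cm2 / path_cm

# With an assumed D ~ 5.2 cm^2/min (plausible order of magnitude for a
# small ether vapor at 30 C), SR lands near the quoted ~1.5e-2 cm^3/min.
print(f"{sampling_constant(5.2):.2e}")  # → 1.49e-02
```

The gap between these theoretical constants and the much larger experimental ones is what the authors attribute to adsorption on the stainless steel needle.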

  13. Tunneling-injection-induced turnaround behavior of threshold voltage in thermally nitrided oxide n-channel metal-oxide-semiconductor field-effect transistors

    NASA Astrophysics Data System (ADS)

    Ma, Z. J.; Lai, P. T.; Liu, Z. H.; Fleischer, S.; Cheng, Y. C.

    1990-12-01

    The threshold voltage (VT) degradation of metal-oxide-semiconductor field-effect transistors (MOSFETs) with thermally nitrided oxide or pure oxide as gate dielectric was determined under Fowler-Nordheim (FN) stressing. A typical VT turnaround behavior was observed for both kinds of devices. The VT for nitrided oxide MOSFETs shifts more negatively than that for pure oxide MOSFETs during the initial period of FN stressing, whereas the opposite is true for the positive shift after the critical time at the turnaround point. The discovery that the shift of the substrate current peak exhibits similar turnaround behavior reinforces the above results. In the meantime, the field-effect electron mobility and the maximum transconductance in the channel for nitrided oxide MOSFETs are only slightly degraded by stressing as compared to that for pure oxide MOSFETs. The VT turnaround behavior can be explained as follows: Net trapped charges in the oxide are initially positive (due to hole traps in the oxide) and result in the negative shift of VT. With increasing injection time, trapped electrons in the oxide as well as acceptor-type interface states increase. This results in the positive shift in VT. It is revealed that VT degradation in MOSFETs is dominated by the generation of acceptor-type interface states rather than electron trapping in the oxide after the critical time.

  14. Time Averaging and Fitting of Nonlinear Metabolic Changes: The Issue of the Time Index Choice Applied to 31P MRS Investigation of Muscle Energetics

    NASA Astrophysics Data System (ADS)

    Simond, G.; Bendahan, D.; Cozzone, P. J.

    2001-03-01

    We present an exact analytical method dedicated to fitting time-dependent exponential-like changes in MR spectra. As an illustration, this method has been applied to fitting metabolic changes recorded by 31P MRS in human skeletal muscle occurring during a rest-exercise-recovery protocol. When recording metabolic changes with the accumulative method, the time averaging of the MR signals implies the choice of a time index for fitting any changes in the features of the associated MR spectra. A critical examination of the different ways (constant, linear, and exponential) of choosing the time index is reported. By numerical analysis, we have calculated the errors generated by the three methods and we have compared their sensitivity to noise. In the case of skeletal muscle, both constant and linear methods introduce large and uncontrolled errors for the whole set of metabolic parameters derived from [PCr] changes. In contrast, the exponential method affords a reliable estimation of critical parameters in muscle bioenergetics in both normal and pathological situations. This method is very easy to implement and provides an exact analytical solution to fitting changes in MR spectra recorded by the accumulative method.

  15. The first two years of the Gemini Fast Turnaround Proposal Program

    NASA Astrophysics Data System (ADS)

    Andersen, Morten; Mason, Rachel; Geballe, Thomas R.; Chiboucas, Kristin; Salinas, Ricardo; Lundquist, Michael J.; Scharwaechter, Julia; Schirmer, Mischa; Silva, Karleyene

    2017-01-01

    Since February 2015, Gemini Observatory has offered telescope time monthly through the Fast Turnaround observing route. A fast review process carried out by the proposers themselves, coupled with rapid scheduling, allows proposals to go from submission to the queue in a month; the observations are then active for 3 months, much faster than in the traditional semester-based proposal scheme. Both telescopes are included, and around 10% of the available telescope time is allocated each month. Here we present the early results and lessons learned from the program. We discuss the over-subscription, the review process, and the selection of proposals, as well as the scheduling. Completion rates are also discussed. Finally, we highlight some of the science results coming out of the program.

  16. School Turnaround Fever: The Paradoxes of a Historical Practice Promoted as a New Reform

    ERIC Educational Resources Information Center

    Peck, Craig; Reitzug, Ulrich C.

    2014-01-01

    School "turnaround" has received significant attention recently in education literature and policy action, especially as a means to dramatically improve urban education. In current common education usage, "turnaround" refers to the rapid, significant improvement in the academic achievement of persistently low-achieving schools.…

  17. Using Federal Education Formula Funds for School Turnaround Initiatives: Opportunities for State Education Agencies

    ERIC Educational Resources Information Center

    Junge, Melissa; Krvaric, Sheara

    2016-01-01

    Much has been written on the subject of school turnaround, but relatively little about how to "pay for" turnaround-related work. Turning around low-performing schools not only requires changing instructional and related practices, but changing spending patterns as well. Too often education dollars are spent on the same costs from…

  18. On the Edge: A Study of Small Private Colleges That Have Made a Successful Financial Turnaround

    ERIC Educational Resources Information Center

    Carey, Amy Bragg

    2013-01-01

    This dissertation was a qualitative research study regarding two small private universities and their process of transformation from an institution headed toward closure to a successful turnaround. The primary questions that guided the study included the factors and persons that contributed to the institutional turnaround, the issues and…

  19. Study of modeling unsteady blade row interaction in a transonic compressor stage part 2: influence of deterministic correlations on time-averaged flow prediction

    NASA Astrophysics Data System (ADS)

    Liu, Yang-Wei; Liu, Bao-Jie; Lu, Li-Peng

    2012-04-01

    The average-passage equation system (APES) provides a rigorous mathematical framework for accounting for the unsteady blade row interaction through multistage compressors in a steady-state environment by introducing deterministic correlations (DC) that need to be modeled to close the equation system. The primary purpose of this study was to provide insight into the DC characteristics and the influence of DC on the time-averaged flow field of the APES. In Part 2 of this two-part paper, the influence of DC on the time-averaged flow field was systematically studied. Several time-averaging computations were conducted with various boundary conditions and DC for the downstream stator in a transonic compressor stage, by employing the CFD solver developed in Part 1 of this two-part paper. These results were compared with the time-averaged unsteady flow field and the steady one. The study indicated that the circumferentially averaged DC can account for a major part of the unsteady effects on the spanwise redistribution of flow fields in compressors. Furthermore, it demonstrated that both deterministic stresses and deterministic enthalpy fluxes are necessary to reproduce the time-averaged flow field.

  20. Weaker axially dipolar time-averaged paleomagnetic field based on multidomain-corrected paleointensities from Galapagos lavas

    PubMed Central

    Wang, Huapei; Kent, Dennis V.; Rochette, Pierre

    2015-01-01

    The geomagnetic field is predominantly dipolar today, and high-fidelity paleomagnetic mean directions from all over the globe strongly support the geocentric axial dipole (GAD) hypothesis for the past few million years. However, the bulk of paleointensity data fails to coincide with the axial dipole prediction of a factor-of-2 equator-to-pole increase in mean field strength, leaving the core dynamo process an enigma. Here, we obtain a multidomain-corrected Pliocene–Pleistocene average paleointensity of 21.6 ± 11.0 µT recorded by 27 lava flows from the Galapagos Archipelago near the Equator. Our new result in conjunction with a published comprehensive study of single-domain–behaved paleointensities from Antarctica (33.4 ± 13.9 µT) that also correspond to GAD directions suggests that the overall average paleomagnetic field over the past few million years has indeed been dominantly dipolar in intensity yet only ∼60% of the present-day field strength, with a long-term average virtual axial dipole magnetic moment of the Earth of only 4.9 ± 2.4 × 10²² A⋅m². PMID:26598664

  1. Weaker axially dipolar time-averaged paleomagnetic field based on multidomain-corrected paleointensities from Galapagos lavas.

    PubMed

    Wang, Huapei; Kent, Dennis V; Rochette, Pierre

    2015-12-08

    The geomagnetic field is predominantly dipolar today, and high-fidelity paleomagnetic mean directions from all over the globe strongly support the geocentric axial dipole (GAD) hypothesis for the past few million years. However, the bulk of paleointensity data fails to coincide with the axial dipole prediction of a factor-of-2 equator-to-pole increase in mean field strength, leaving the core dynamo process an enigma. Here, we obtain a multidomain-corrected Pliocene-Pleistocene average paleointensity of 21.6 ± 11.0 µT recorded by 27 lava flows from the Galapagos Archipelago near the Equator. Our new result in conjunction with a published comprehensive study of single-domain-behaved paleointensities from Antarctica (33.4 ± 13.9 µT) that also correspond to GAD directions suggests that the overall average paleomagnetic field over the past few million years has indeed been dominantly dipolar in intensity yet only ∼60% of the present-day field strength, with a long-term average virtual axial dipole magnetic moment of the Earth of only 4.9 ± 2.4 × 10²² A⋅m².
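The virtual axial dipole moment quoted above follows from the standard dipole-field relation; a minimal sketch (a generic textbook formula, not code from the paper) converting a site-mean paleointensity and geomagnetic latitude into a VADM:

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, T*m/A
R_EARTH = 6.371e6      # mean Earth radius, m

def vadm(intensity_tesla, mag_latitude_deg):
    """Virtual axial dipole moment (A*m^2) implied by a field intensity
    at a given geomagnetic latitude, from the dipole relation
    B = (mu0 * m / (4 pi r^3)) * sqrt(1 + 3 sin^2(latitude))."""
    lam = math.radians(mag_latitude_deg)
    return (4 * math.pi * R_EARTH ** 3 / MU0) * intensity_tesla / math.sqrt(
        1.0 + 3.0 * math.sin(lam) ** 2)

# The 21.6 uT equatorial Galapagos mean implies a moment of roughly
# 5.6e22 A*m^2, in the ballpark of the paper's joint 4.9e22 estimate.
moment = vadm(21.6e-6, 0.0)
```

Note that the factor-of-2 equator-to-pole intensity increase mentioned in the abstract corresponds to sqrt(1 + 3 sin²λ) rising from 1 at the equator to 2 at the pole, so a doubled polar intensity maps to the same moment.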

  2. District Readiness to Support School Turnaround: A Users' Guide to Inform the Work of State Education Agencies and Districts

    ERIC Educational Resources Information Center

    Player, Daniel; Hambrick Hitt, Dallas; Robinson, William

    2014-01-01

    This guide provides state education agencies (SEAs) and districts (LEAs) with guidance about how to assess the district's readiness to support school turnaround initiatives. Often, school turnaround efforts focus only on the school's structure and leadership. Rarely do policymakers or practitioners think about school turnaround as a system-level…

  3. Autonomous Robotic Refueling System (ARRS) for rapid aircraft turnaround

    NASA Astrophysics Data System (ADS)

    Williams, O. R.; Jackson, E.; Rueb, K.; Thompson, B.; Powell, K.

    An autonomous robotic refuelling system is being developed to achieve rapid aircraft turnaround, notably during combat operations. The proposed system includes a gantry positioner with sufficient reach to position a robotic arm that performs the refuelling tasks; a six degree of freedom manipulator equipped with a remote center of compliance, torque sensor, and a gripper that can handle standard tools; a computer vision system to locate and guide the refuelling nozzle, inspect the nozzle, and avoid collisions; and an operator interface with video and graphics display. The control system software will include components designed for trajectory planning and generation, collision detection, sensor interfacing, sensory processing, and human interfacing. The robotic system will be designed so that upgrading to perform additional tasks will be relatively straightforward.

  4. Timing and magnitude of peak height velocity and peak tissue velocities for early, average, and late maturing boys and girls.

    PubMed

    Iuliano-Burns, S; Mirwald, R L; Bailey, D A

    2001-01-01

    Height, weight, and tissue accrual were determined in 60 male and 53 female adolescents measured annually over six years using standard anthropometry and dual-energy X-ray absorptiometry (DXA). Annual velocities were derived, and the ages and magnitudes of peak height and peak tissue velocities were determined using a cubic spline fit to individual data. Individuals were rank ordered on the basis of sex and age at peak height velocity (PHV) and then divided into quartiles: early (lowest quartile), average (middle two quartiles), and late (highest quartile) maturers. Sex- and maturity-related comparisons in ages and magnitudes of peak height and peak tissue velocities were made. Males reached peak velocities significantly later than females for all tissues and had significantly greater magnitudes at peak. The age at PHV was negatively correlated with the magnitude of PHV in both sexes. At a similar maturity point (age at PHV) there were no differences in weight or fat mass among maturity groups in both sexes. Late maturing males, however, accrued more bone mineral and lean mass and were taller at the age of PHV compared to early maturers. Thus, maturational status (early, average, or late maturity) as indicated by age at PHV is inversely related to the magnitude of PHV in both sexes. At a similar maturational point there are no differences between early and late maturers for weight and fat mass in boys and girls.

  5. Recursive Averaging

    ERIC Educational Resources Information Center

    Smith, Scott G.

    2015-01-01

    In this article, Scott Smith presents an innocent problem (Problem 12 of the May 2001 Calendar from "Mathematics Teacher" ["MT" May 2001, vol. 94, no. 5, p. 384]) that was transformed by several timely "what if?" questions into a rewarding investigation of some interesting mathematics. These investigations led to two…

  6. Accelerated multiple-pass moving average: a novel algorithm for baseline estimation in CE and its application to baseline correction on real-time bases.

    PubMed

    Solis, Alejandro; Rex, Mathew; Campiglia, Andres D; Sojo, Pedro

    2007-04-01

    We present a novel algorithm for baseline estimation in CE. The new algorithm, which we have named accelerated multiple-pass moving average (AMPMA), is combined with three preexisting low-pass filters (spike-removal, moving average, and multi-pass moving average) to achieve real-time baseline correction with commercial instrumentation. The successful performance of AMPMA is demonstrated with simulated and experimental data. A straightforward comparison of experimental data clearly shows the improvement AMPMA provides to the linear fitting, LOD, and accuracy (absolute error) of CE analysis.
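The abstract gives no algorithmic detail of AMPMA itself, but the family of filters it builds on is easy to illustrate. The sketch below is a generic multi-pass moving-average baseline estimator; the iterative peak-clipping step is our assumption for illustration, not the published algorithm:

```python
def moving_average(signal, window):
    """Centered moving average with edge padding (odd window)."""
    half = window // 2
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    return [sum(padded[i:i + window]) / window for i in range(len(signal))]

def multipass_baseline(signal, window=21, passes=3):
    """Estimate a slowly varying baseline by repeated smoothing with
    peak clipping: narrow peaks are progressively excluded, leaving
    the low-frequency background the analyte peaks sit on."""
    baseline = list(signal)
    for _ in range(passes):
        smoothed = moving_average(baseline, window)
        baseline = [min(b, s) for b, s in zip(baseline, smoothed)]
    return baseline

# Demo: flat background with a narrow spike; the estimated baseline
# tracks the background and suppresses the spike.
signal = [1.0] * 200
for i in (100, 101, 102):
    signal[i] = 11.0
baseline = multipass_baseline(signal)
```

Subtracting such a baseline from the raw electropherogram is what enables the improved linear fitting and detection limits the abstract reports.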

  7. A new approach for analyzing average time complexity of population-based evolutionary algorithms on unimodal problems.

    PubMed

    Chen, Tianshi; He, Jun; Sun, Guangzhong; Chen, Guoliang; Yao, Xin

    2009-10-01

    In the past decades, many theoretical results related to the time complexity of evolutionary algorithms (EAs) on different problems have been obtained. However, there is no general and easy-to-apply approach designed particularly for population-based EAs on unimodal problems. In this paper, we first generalize the concept of the takeover time to EAs with mutation, then we utilize the generalized takeover time to obtain the mean first hitting time of EAs and, thus, propose a general approach for analyzing EAs on unimodal problems. As examples, we consider the so-called (N + N) EAs and show that, on two well-known unimodal problems, LeadingOnes and OneMax, the EAs with bitwise mutation and two commonly used selection schemes need O(n ln n + n²/N) and O(n ln ln n + n ln n/N) generations, respectively, to find the global optimum. Beyond the new results above, our approach can also be applied directly to obtain results for some population-based EAs on other unimodal problems. Moreover, we discuss when the general approach is valid to provide tight bounds on the mean first hitting times and when it should be combined with problem-specific knowledge to get tight bounds. This is the first time a general idea for analyzing population-based EAs on unimodal problems has been discussed theoretically.
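As a toy counterpart to the paper's analysis (which concerns (N + N) population-based EAs; the single-individual (1+1) EA below is our simplification), a bitwise-mutation EA on OneMax typically reaches the all-ones optimum in O(n ln n) generations:

```python
import random

def one_plus_one_ea(n, max_gens=200000, seed=1):
    """(1+1) EA: flip each bit independently with probability 1/n,
    accept the offspring if its OneMax value (number of ones) is at
    least the parent's; return generations until the optimum."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    for gen in range(1, max_gens + 1):
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        if sum(child) >= sum(parent):
            parent = child
        if sum(parent) == n:
            return gen
    return None  # did not converge within max_gens

gens = one_plus_one_ea(30)
```

For n = 30 the expected hitting time is on the order of a few hundred generations, consistent with the n ln n scaling that the generalized takeover-time approach makes rigorous for population-based variants.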

  8. How Does the Supply Requisitioning Process Affect Average Customer Wait Time Onboard U.S. Navy Destroyers?

    DTIC Science & Technology

    2013-05-07

    …an opportunity exists to leverage current technologies and practices in order to reduce the manpower involved or eliminate redundant steps in the process.

  9. How Does the Supply Requisitioning Process Affect Average Customer Wait Time Onboard U.S. Navy Destroyers?

    DTIC Science & Technology

    2013-06-01

    …time for the end user. Furthermore, we evaluate whether the opportunity exists to leverage current technologies and practices in order to reduce the…

  10. Langmuir probe measurements in a time-fluctuating-highly ionized non-equilibrium cutting arc: Analysis of the electron retarding part of the time-averaged current-voltage characteristic of the probe

    SciTech Connect

    Prevosto, L.; Mancinelli, B.; Kelly, H.

    2013-12-15

    This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating, highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well-defined groups, allowing two quite different averaged electron temperature values to be defined. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the time fluctuations but also a spatial average along the probe collecting length. The fitting of the high-current region of the characteristic using this electron temperature value, together with the corrections given by the fluctuation analysis, showed a significant departure from local thermal equilibrium in the arc core.

  11. Langmuir probe measurements in a time-fluctuating-highly ionized non-equilibrium cutting arc: analysis of the electron retarding part of the time-averaged current-voltage characteristic of the probe.

    PubMed

    Prevosto, L; Kelly, H; Mancinelli, B

    2013-12-01

    This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating, highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well-defined groups, allowing two quite different averaged electron temperature values to be defined. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the time fluctuations but also a spatial average along the probe collecting length. The fitting of the high-current region of the characteristic using this electron temperature value, together with the corrections given by the fluctuation analysis, showed a significant departure from local thermal equilibrium in the arc core.

  12. Phase-adjusted echo time (PATE)-averaging 1H MRS: application for improved glutamine quantification at 2.89 T

    PubMed Central

    Prescot, Andrew P.; Richards, Todd; Dager, Stephen R.; Choi, Changho; Renshaw, Perry F.

    2015-01-01

    1H MRS investigations have reported altered glutamatergic neurotransmission in a variety of psychiatric disorders. The unraveling of glutamate from glutamine resonances is crucial for the interpretation of these observations, although this remains a challenge at clinical static magnetic field strengths. Glutamate resolution can be improved through an approach known as echo time (TE) averaging, which involves the acquisition and subsequent averaging of multiple TE steps. The process of TE averaging retains the central component of the glutamate methylene multiplet at 2.35 ppm, with the simultaneous attenuation of overlapping phase-modulated coupled resonances of glutamine and N-acetylaspartate. We have developed a novel post-processing approach, termed phase-adjusted echo time (PATE) averaging, for the retrieval of glutamine signals from a TE-averaged 1H MRS dataset. The method works by the application of an optimal TE-specific phase term, which is derived from spectral simulation, prior to averaging over TE space. The simulation procedures and preliminary in vivo spectra acquired from the human frontal lobe at 2.89 T are presented. Three metabolite normalization schemes were developed to evaluate the frontal lobe test–retest reliability for glutamine measurement in six subjects, and the resulting values were comparable with previous reports for within-subject (9–14%) and inter-subject (14–20%) measures. Using the acquisition parameters and TE range described, glutamine quantification is possible in approximately 10 min. The post-processing methods described can also be applied retrospectively to extract glutamine and glutamate levels from previously acquired TE-averaged 1H MRS datasets. PMID:22407923

  13. Constantly stirred sorbent and continuous flow integrative sampler: new integrative samplers for the time weighted average water monitoring.

    PubMed

    Llorca, Julio; Gutiérrez, Cristina; Capilla, Elisabeth; Tortajada, Rafael; Sanjuán, Lorena; Fuentes, Alicia; Valor, Ignacio

    2009-07-31

    Two innovative integrative samplers have been developed enabling high sampling rates unaffected by turbulence (thus avoiding the use of performance reference compounds) and with negligible lag times. The first, called the constantly stirred sorbent (CSS), consists of a rotator head that holds the sorbent. The rotation speed of the head generates a constant turbulence around the sorbent, making it independent of the external hydrodynamics. The second, called the continuous flow integrative sampler (CFIS), consists of a small peristaltic pump which produces a constant flow through a glass cell. The sorbent is located inside this cell. Although different sorbents can be used, poly(dimethylsiloxane) (PDMS) in the commercial Twister format (typically used for stir bar sorptive extraction) was evaluated for the sampling of six polycyclic aromatic hydrocarbons and three organochlorine pesticides. These new devices have many analogies with passive samplers but cannot truly be defined as such, since they need a small energy supply of around 0.5 W provided by a battery. Sampling rates from 181×10⁻³ to 791×10⁻³ L/day were obtained with the CSS and from 18×10⁻³ to 53×10⁻³ L/day with the CFIS. Limits of detection for these devices are in the range of 0.3 to 544 pg/L, with a precision below 20%. An in-field evaluation of both devices was carried out over a 5-day sampling period at the outlet of a wastewater treatment plant, with results comparable to those obtained with a classical sampling method.
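Once a sampler's rate Rs is calibrated, the time-weighted average concentration follows from the analyte mass accumulated on the sorbent. A minimal sketch of the standard integrative-sampling relation C_TWA = m / (Rs · t), with illustrative values rather than data from the paper:

```python
def twa_concentration(mass_ng, rate_l_per_day, days):
    """Time-weighted average water concentration (ng/L) from the
    analyte mass (ng) accumulated over an integrative deployment:
    C_TWA = m / (Rs * t), with Rs in L/day and t in days."""
    return mass_ng / (rate_l_per_day * days)

# e.g. 1.5 ng accumulated over 5 days at Rs = 0.5 L/day -> 0.6 ng/L
c = twa_concentration(1.5, 0.5, 5)
```

This linear (integrative) regime is what requires the lag time to be negligible and Rs to be independent of turbulence, which is precisely what the CSS and CFIS designs aim to guarantee.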

  14. Computing the 7Li NMR chemical shielding of hydrated Li+ using cluster calculations and time-averaged configurations from ab initio molecular dynamics simulations.

    PubMed

    Alam, Todd M; Hart, David; Rempe, Susan L B

    2011-08-14

    Ab initio molecular dynamics (AIMD) simulations have been used to predict the time-averaged 7Li NMR chemical shielding for a Li⁺ solution. These results are compared to NMR shielding calculations on smaller Li⁺(H₂O)ₙ clusters optimized in either the gas phase or with a polarizable continuum model (PCM) solvent. The trends introduced by the PCM solvent are described and compared to the time-averaged chemical shielding observed in the AIMD simulations, where large explicit water clusters hydrating the Li⁺ are employed. Different inner- and outer-coordination-sphere contributions to the Li NMR shielding are evaluated and discussed. It is demonstrated that an implicit PCM solvent is not sufficient to correctly model the Li shielding and that explicit inner-hydration-sphere waters are required during the NMR calculations. It is also shown that for hydrated Li⁺, the time-averaged chemical shielding cannot be simply described by the population-weighted average of coordination environments containing different numbers of waters.

  15. Taphonomic trade-offs in tropical marine death assemblages: Differential time averaging, shell loss, and probable bias in siliciclastic vs. carbonate facies

    NASA Astrophysics Data System (ADS)

    Kidwell, Susan M.; Best, Mairi M. R.; Kaufman, Darrell S.

    2005-09-01

    Radiocarbon-calibrated amino-acid racemization ages of individually dated bivalve mollusk shells from Caribbean reef, nonreefal carbonate, and siliciclastic sediments in Panama indicate that siliciclastic sands and muds contain significantly older shells (median 375 yr, range up to ˜5400 yr) than nearby carbonate seafloors (median 72 yr, range up to ˜2900 yr; maximum shell ages differ significantly at p < 0.02 using extreme-value statistics). The implied difference in shell loss rates is contrary to physicochemical expectations but is consistent with observed differences in shell condition (greater bioerosion and dissolution in carbonates). Higher rates of shell loss in carbonate sediments should lead to greater compositional bias in surviving skeletal material, resulting in taphonomic trade-offs: less time averaging but probably higher taxonomic bias in pure carbonate sediments, and lower bias but greater time averaging in siliciclastic sediments from humid-weathered accretionary arc terrains, which are a widespread setting of tropical sedimentation.

  16. The average enzyme principle

    PubMed Central

    Reznik, Ed; Chaudhary, Osman; Segrè, Daniel

    2013-01-01

    The Michaelis-Menten equation for an irreversible enzymatic reaction depends linearly on the enzyme concentration. Even if the enzyme concentration changes in time, this linearity implies that the amount of substrate depleted during a given time interval depends only on the average enzyme concentration. Here, we use a time re-scaling approach to generalize this result to a broad category of multi-reaction systems, whose constituent enzymes have the same dependence on time, e.g. they belong to the same regulon. This “average enzyme principle” provides a natural methodology for jointly studying metabolism and its regulation. PMID:23892076
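The principle can be checked numerically: for the irreversible Michaelis-Menten rate law, the substrate depleted over an interval depends only on the time integral of the enzyme concentration. A minimal sketch (generic parameter values of our choosing) comparing an oscillating E(t) against a constant enzyme with the same average:

```python
import math

def integrate_mm(enzyme, s0=10.0, kcat=1.0, km=1.0, t_end=10.0, dt=1e-3):
    """Forward-Euler integration of d[S]/dt = -kcat * E(t) * [S] / (Km + [S]);
    returns the final substrate concentration."""
    s, t = s0, 0.0
    while t < t_end:
        s -= dt * kcat * enzyme(t) * s / (km + s)
        t += dt
    return s

# Oscillating enzyme vs. a constant enzyme with the same time average:
s_osc = integrate_mm(lambda t: 1.0 + 0.5 * math.sin(2 * math.pi * t))
s_const = integrate_mm(lambda t: 1.0)
# the two final substrate levels agree up to integration error,
# illustrating the average enzyme principle for this single reaction
```

Separating variables shows why: Km·ln(S0/S) + (S0 − S) = kcat·∫E(t)dt, so the final S is fixed by the enzyme's time integral alone, which is the single-reaction case of the time re-scaling argument in the paper.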

  17. Estakhr's Proper-Time Averaged of Material-Geodesic Equations (an umberella term equation for Relativistic Astrophysics, Relativistic Jets, Gamma-Ray Burst, Big Bang Hydrodynamics, Supernova Hydrodynamics)

    NASA Astrophysics Data System (ADS)

    Estakhr, Ahmad Reza

    2016-10-01

    DJ̲^μ/Dτ = [ J̲^ν ∂_ν U̲^μ + ∂_ν T̲^{μν} + Γ^μ_{αβ} J̲^α U̲^β ] (steady component) + [ ∂_ν R^{μν} + Γ^μ_{αβ} R^{αβ} ] (perturbations). The EAMG equations are proper-time-averaged equations of relativistic motion for fluid flow, used to describe relativistic turbulent flows such as relativistic jets.

  18. On the contribution of G20 and G30 in the Time-Averaged Paleomagnetic Field: First results from a new Giant Gaussian Process inverse modeling approach

    NASA Astrophysics Data System (ADS)

    Khokhlov, A.; Hulot, G.; Johnson, C. L.

    2013-12-01

    It is well known that the geometry of the recent time-averaged paleomagnetic field (TAF) is very close to that of a geocentric axial dipole (GAD). However, many TAF models recovered from averaging lava flow paleomagnetic directional data (the most numerous and reliable of all data) suggest that significant additional terms, in particular quadrupolar (G20) and octupolar (G30) zonal terms, likely contribute. The traditional way in which most such TAF models are recovered uses an empirical estimate for paleosecular variation (PSV) that is subject to limitations imposed by the limited age information available for such data. In this presentation, we will report on a new way to recover the TAF, using an inverse modeling approach based on the so-called Giant Gaussian Process (GGP) description of the TAF and PSV, and various statistical tools we recently made available (see Khokhlov and Hulot, Geophysical Journal International, 2013, doi: 10.1093/gji/ggs118). First results based on high quality data published from the Time-Averaged Field Investigations project (see Johnson et al., G-cubed, 2008, doi:10.1029/2007GC001696) clearly show that both the G20 and G30 terms are very well constrained, and that optimum values fully consistent with the data can be found. These promising results lay the groundwork for use of the method with more extensive data sets, to search for possible additional non-zonal departures of the TAF from the GAD.

  19. A PD-Like Protocol With a Time Delay to Average Consensus Control for Multi-Agent Systems Under an Arbitrarily Fast Switching Topology.

    PubMed

    Wang, Dong; Zhang, Ning; Wang, Jianliang; Wang, Wei

    2016-03-08

    This paper is concerned with the problem of average consensus control for multi-agent systems with linear and Lipschitz nonlinear dynamics under a switching topology. First, a proportional and derivative-like consensus algorithm for linear cases with a time delay is designed to address such a problem. By a system transformation, such a problem is converted to the stability problem of a switched delay system. The stability analysis is performed based on a proposed Lyapunov-Krasovskii functional including a triple-integral term and sufficient conditions are obtained to guarantee the average consensus for multi-agent systems under arbitrary switching. Second, extensions to the Lipschitz nonlinear cases are further presented. Finally, numerical examples are given to illustrate the effectiveness of the results.
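The flavor of average consensus under an arbitrarily switching topology can be seen in a minimal discrete-time sketch (a plain first-order protocol without the delay or PD term of the paper; the topology sets and gain below are our assumptions):

```python
import random

def consensus_step(values, edges, eps=0.2):
    """One synchronous step of x_i <- x_i + eps * sum_j (x_j - x_i)
    over the current edge set; symmetric updates preserve the average."""
    new = list(values)
    for i, j in edges:
        new[i] += eps * (values[j] - values[i])
        new[j] += eps * (values[i] - values[j])
    return new

def run_consensus(values, topologies, steps=200, seed=0):
    """Iterate the protocol while switching randomly among edge sets
    whose union is connected; states converge to the initial average."""
    rng = random.Random(seed)
    for _ in range(steps):
        values = consensus_step(values, rng.choice(topologies))
    return values

# Two disjoint matchings on 4 agents whose union is a connected cycle:
vals = run_consensus([1.0, 2.0, 3.0, 4.0],
                     [[(0, 1), (2, 3)], [(1, 2), (0, 3)]])
```

Neither edge set alone is connected, yet the states still converge to the average 2.5 because the union of the switching graphs is connected, which is the intuition behind consensus under arbitrarily fast switching.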

  20. Estimating equilibrium ensemble averages using multiple time slices from driven nonequilibrium processes: theory and application to free energies, moments, and thermodynamic length in single-molecule pulling experiments.

    PubMed

    Minh, David D L; Chodera, John D

    2011-01-14

    Recently discovered identities in statistical mechanics have enabled the calculation of equilibrium ensemble averages from realizations of driven nonequilibrium processes, including single-molecule pulling experiments and analogous computer simulations. Challenges in collecting large data sets motivate the pursuit of efficient statistical estimators that maximize use of available information. Along these lines, Hummer and Szabo developed an estimator that combines data from multiple time slices along a driven nonequilibrium process to compute the potential of mean force. Here, we generalize their approach, pooling information from multiple time slices to estimate arbitrary equilibrium expectations. Our expression may be combined with estimators of path-ensemble averages, including existing optimal estimators that use data collected by unidirectional and bidirectional protocols. We demonstrate the estimator by calculating free energies, moments of the polymer extension, the thermodynamic metric tensor, and the thermodynamic length in a model single-molecule pulling experiment. Compared to estimators that only use individual time slices, our multiple time-slice estimators yield substantially smoother estimates and achieve lower variance for higher-order moments.

  1. Most suitable mother wavelet for the analysis of fractal properties of stride interval time series via the average wavelet coefficient method.

    PubMed

    Zhang, Zhenwei; VanSwearingen, Jessie; Brach, Jennifer S; Perera, Subashan; Sejdić, Ervin

    2017-01-01

    Human gait is a complex interaction of many nonlinear systems, and stride intervals exhibit self-similarity over long time scales that can be modeled as a fractal process. The scaling exponent represents the fractal degree and can be interpreted as a "biomarker" of relative diseases. A previous study showed that the average wavelet method provides the most accurate estimates of this scaling exponent when applied to stride interval time series. The purpose of this paper is to determine the most suitable mother wavelet for the average wavelet method. This paper presents a comparative numerical analysis of 16 mother wavelets using simulated and real fractal signals. Simulated fractal signals were generated under varying signal lengths and scaling exponents spanning a range of physiologically conceivable fractal signals. Five candidate wavelets were chosen due to their good performance on the mean-square-error test for both short and long signals. Next, we comparatively analyzed these five mother wavelets for physiologically relevant stride time series lengths. Our analysis showed that the symlet 2 mother wavelet provides a low mean square error and low variance for long time intervals and relatively low errors for short signal lengths. It can be considered the most suitable mother wavelet without the burden of considering the signal length.
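The average-wavelet-coefficient idea can be sketched with a Haar transform (the paper recommends symlet 2; Haar is our simplification to keep the example dependency-free): for fractional Brownian motion the mean |detail coefficient| at level j grows as 2^{j(H+1/2)}, so a log-log regression yields the scaling exponent.

```python
import math
import random

def awc_exponent(signal, max_level=6):
    """Average wavelet coefficient method with the Haar wavelet:
    regress log2(mean |detail| at level j) on j; for fractional
    Brownian motion the slope is H + 1/2."""
    approx = list(signal)
    xs, ys = [], []
    for j in range(1, max_level + 1):
        pairs = [(approx[2 * k], approx[2 * k + 1])
                 for k in range(len(approx) // 2)]
        detail = [(a - b) / math.sqrt(2) for a, b in pairs]
        approx = [(a + b) / math.sqrt(2) for a, b in pairs]
        xs.append(j)
        ys.append(math.log2(sum(abs(d) for d in detail) / len(detail)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope - 0.5  # Hurst estimate for an fBm input

# Sanity check on ordinary Brownian motion, whose Hurst exponent is 0.5:
rng = random.Random(7)
bm, total = [], 0.0
for _ in range(2 ** 14):
    total += rng.gauss(0.0, 1.0)
    bm.append(total)
h_est = awc_exponent(bm)
```

Comparing mother wavelets, as the paper does, amounts to swapping the two-tap Haar pair above for longer filter pairs (e.g. symlet 2) and examining the error and variance of the recovered exponent.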

  2. Combining site occupancy, breeding population sizes and reproductive success to calculate time-averaged reproductive output of different habitat types: an application to Tricolored Blackbirds.

    PubMed

    Holyoak, Marcel; Meese, Robert J; Graves, Emily E

    2014-01-01

    In metapopulations in which habitat patches vary in quality and occupancy, it can be complicated to calculate the net time-averaged contribution to reproduction of particular populations. Surprisingly, few indices have been proposed for this purpose. We combined occupancy, abundance, frequency of occurrence, and reproductive success to determine the net value of different sites through time and applied this method to a bird of conservation concern. The Tricolored Blackbird (Agelaius tricolor) has experienced large population declines, is the most colonial songbird in North America, is largely confined to California, and breeds itinerantly in multiple habitat types. It has had chronically low reproductive success in recent years. Although young produced per nest have previously been compared across habitats, no study has simultaneously considered site occupancy and reproductive success. Combining occupancy, abundance, frequency of occurrence, reproductive success, and nest failure rate, we found that large colonies in grain fields fail frequently because of nest destruction due to harvest prior to fledging. Consequently, net time-averaged reproductive output is low compared to colonies in non-native Himalayan blackberry or thistles, and native stinging nettles. Cattail marshes have intermediate reproductive output, but their reproductive output might be improved by active management. Harvest of grain-field colonies necessitates either promoting delay of harvest or creating alternative, more secure nesting habitats. Stinging nettle and marsh colonies offer the main potential sources for restoration or native habitat creation. From 2005 to 2011, breeding site occupancy declined three times faster than new breeding colonies were formed, indicating a rapid decline in occupancy. Total abundance showed a similar decline. Causes of variation in the reproductive value of nesting substrates and factors behind continuing population declines merit urgent investigation.
The method we
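
    The combination the abstract describes can be sketched as a simple weighted average: each survey year contributes occupancy times abundance times per-capita success, with complete failures (e.g. harvest before fledging) contributing zero. The function below is an illustrative sketch of this kind of index, not the authors' published method; all site names and numbers are hypothetical.

```python
# Illustrative sketch (not the paper's actual index): combine occupancy,
# abundance, and reproductive success into a net time-averaged
# reproductive value per site. All values below are hypothetical.

def net_reproductive_value(years):
    """Mean annual reproductive output over all survey years.

    Each year is (occupied, abundance, young_per_bird, failed).
    Unoccupied and failed years contribute zero, so sites that fail
    frequently score low even when single-year success is high.
    """
    total = 0.0
    for occupied, abundance, success, failed in years:
        if occupied and not failed:
            total += abundance * success
    return total / len(years)

# Grain-field colony: large, but harvested (failed) in 3 of 4 years
grain_field = [(True, 10000, 0.9, True), (True, 10000, 0.9, True),
               (True, 10000, 0.9, True), (True, 10000, 0.9, False)]
# Blackberry colony: smaller, but never fails
blackberry = [(True, 3000, 1.25, False)] * 4

print(net_reproductive_value(grain_field))  # 2250.0
print(net_reproductive_value(blackberry))   # 3750.0
```

As in the abstract, frequent total failure drags the large grain-field colony's time-averaged output below that of the smaller but reliable colony.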

  3. Field evaluation of the error arising from inadequate time averaging in the standard use of depth-integrating suspended-sediment samplers

    USGS Publications Warehouse

    Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.

    2011-01-01

    Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. 
Averaging over time scales >1 minute is the likely minimum duration required

  4. Beyond long memory in heart rate variability: An approach based on fractionally integrated autoregressive moving average time series models with conditional heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria

    2013-06-01

    Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long-term HRV series available at Physionet, leading to discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
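
    The long-memory core of an ARFIMA model is the fractional-differencing operator (1 - B)^d, whose binomial expansion yields an infinite moving-average filter. The sketch below generates those weights and applies the (truncated) filter; it is our own illustration, not the authors' code, and the parameter d = 0.3 is a typical stationary long-memory value, not one fitted to the Physionet series. A GARCH model (e.g. via the `arch` package) could then be fitted to the fractionally differenced residuals.

```python
import numpy as np

# Sketch of the fractional-differencing core of an ARFIMA model.
# (1 - B)^d expands into an infinite MA filter; the recursion below
# generates its weights: w[0] = 1, w[k] = w[k-1] * (k - 1 - d) / k.

def fracdiff_weights(d, n_lags):
    """Binomial-expansion weights of (1 - B)^d, truncated at n_lags."""
    w = np.empty(n_lags)
    w[0] = 1.0
    for k in range(1, n_lags):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def fracdiff(x, d):
    """Apply the truncated (1 - B)^d filter to series x."""
    w = fracdiff_weights(d, len(x))
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])

# d = 0.3: long memory typical of a stationary ARFIMA (0 < d < 0.5)
x = np.cumsum(np.random.default_rng(1).normal(size=500)) * 0.01
y = fracdiff(x, 0.3)
print(fracdiff_weights(0.3, 4))  # w = [1, -0.3, -0.105, -0.0595]
```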

  5. A New Method of Deriving Time-Averaged Tropospheric Column Ozone over the Tropics Using Total Ozone Mapping Spectrometer (TOMS) Radiances: Intercomparison and Analysis Using TRACE A Data

    NASA Technical Reports Server (NTRS)

    Kim, J. H.; Hudson, R. D.; Thompson, A. M.

    1996-01-01

    An error analysis of archived total O3 from the total ozone mapping spectrometer (TOMS) (version 6) is presented. Daily total O3 maps for the tropics, from the period October 6-21, 1992, are derived from TOMS radiances following correction for these errors. These daily maps, averaged together, show a wavelike feature, observed in all latitude bands, underlying sharp peaks which occur at different longitudes depending on the latitude. The wave pattern is used to derive both time-averaged stratospheric and tropospheric O3 fields. The nature of the wave pattern (stratospheric or tropospheric) cannot be determined with certainty due to missing data (no Pacific sondes, no lower-stratospheric Stratospheric Aerosol and Gas Experiment (SAGE) ozone for 18 months after the Mt. Pinatubo eruption) and significant uncertainties in the corroborative satellite record in the lower stratosphere (solar backscattered ultraviolet (SBUV), microwave limb sounder (MLS)). However, the time-averaged tropospheric ozone field, based on the assumption that the wave feature is stratospheric, agrees within 10% with ultraviolet differential absorption laser O3 measurements from the DC-8 during the Transport and Atmospheric Chemistry near the Equator-Atlantic (TRACE A) mission and with ozonesonde measurements over Brazzaville, Congo; Ascension Island; and Natal, Brazil, for the period October 6-21, 1992. The derived background (nonpolluted) Indian Ocean tropospheric ozone amount, 26 Dobson units (DU), agrees with the cleanest African ozonesonde profiles for September-October 1992. The assumption of a totally tropospheric wave (flat stratosphere) gives 38 DU above the western Indian Ocean and 15-40% disagreements with the sondes. Tropospheric column O3 is high from South America to Africa, owing to the interaction of dynamics with biomass burning emissions. Comparison with fire distributions from the advanced very high resolution radiometer (AVHRR) during October 1992 suggests that tropospheric O3 produced from biomass

  6. "I've Never Seen People Work So Hard!" Teachers' Working Conditions in the Early Stages of School Turnaround

    ERIC Educational Resources Information Center

    Cucchiara, Maia Bloomfield; Rooney, Erin; Robertson-Kraft, Claire

    2015-01-01

    School turnaround--a reform strategy that strives for quick and dramatic transformation of low-performing schools--has gained prominence in recent years. This study uses interviews and focus groups conducted with 86 teachers in 13 schools during the early stages of school turnaround in a large urban district to examine teachers' perceptions of the…

  7. Academic Turnarounds: Restoring Vitality to Challenged American Colleges and Universities. ACE/Praeger Series on Higher Education

    ERIC Educational Resources Information Center

    MacTaggart, Terrence, Ed.

    2007-01-01

    This book discusses the early indicators of a college or university's need for a turnaround. It outlines financial trends and other indicators of distress, as well as benchmarks for the various stages of an effective turnaround strategy. The book will help trustees, presidents, and faculty members diagnose whether they are in denial about the true…

  8. Sustaining Turnaround at the School and District Levels: The High Reliability Schools Project at Sandfields Secondary School

    ERIC Educational Resources Information Center

    Schaffer, Eugene; Reynolds, David; Stringfield, Sam

    2012-01-01

    Beginning from 1 high-poverty, historically low-achieving secondary school's successful turnaround work, this article provides data relative to a successful school turnaround, the importance of external and system-level supports, and the importance of building for sustainable institutionalization of improvements. The evidence suggests the…

  9. Differences in the Policies, Programs, and Practices (PPPs) and Combination of PPPs across Turnaround, Moderately Improving, and Not Improving Schools

    ERIC Educational Resources Information Center

    Herman, Rebecca; Huberman, Mette

    2012-01-01

    The TALPS study aims to build on the existing research base to develop promising methodologies to identify chronically low-performing and turnaround schools, as well as to identify promising strategies for turning around chronically low-performing schools. By looking specifically at schools identified as turnaround, in comparison to nonturnaround…

  10. Time-synchronous-averaging of gear-meshing-vibration transducer responses for elimination of harmonic contributions from the mating gear and the gear pair

    NASA Astrophysics Data System (ADS)

    Mark, William D.

    2015-10-01

    The transmission-error frequency spectrum of meshing gear pairs, operating at constant speed and constant loading, is decomposed into harmonics arising from the fundamental period of the gear pair, rotational harmonics of the individual gears of the pair, and tooth-meshing harmonics. In the case of hunting-tooth gear pairs, no rotational harmonics from the individual gears, other than the tooth-meshing harmonics, occur at the same frequencies. Time-synchronous averages utilizing a number of contiguous revolutions of the gear of interest equal to an integer multiple of the number of teeth on the mating gear are shown to eliminate non-tooth-meshing transmission-error rotational-harmonic contributions from the mating gear, and those from the gear pair, in the case of hunting-tooth gear pairs, and to minimize these contributions in the case of non-hunting-tooth gear pairs. An example computation illustrates the effectiveness of the suggested time-synchronous-averaging procedure.
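
    The averaging rule the abstract states can be demonstrated numerically: averaging a number of contiguous revolutions equal to an integer multiple of the mating gear's tooth count exactly cancels the mating gear's non-integer-order rotational harmonics. The tooth counts, harmonic orders, and sampling rate below are illustrative, not from the paper.

```python
import numpy as np

# Sketch of time-synchronous averaging (TSA) per the abstract: average
# contiguous revolutions of the gear of interest, with the revolution
# count an integer multiple of the mating gear's tooth count. Tooth
# counts and orders below are illustrative.

def tsa(signal, samples_per_rev, n_revs):
    """Average n_revs contiguous revolutions (one revolution per row)."""
    revs = signal[:samples_per_rev * n_revs].reshape(n_revs, samples_per_rev)
    return revs.mean(axis=0)

samples_per_rev = 1024     # order-domain resampled samples per revolution
teeth = 25                 # teeth on the gear of interest
mating_teeth = 37          # teeth on the mating gear (hunting-tooth pair)
n_revs = 2 * mating_teeth  # integer multiple of the mating tooth count

t = np.arange(samples_per_rev * n_revs) / samples_per_rev  # in revolutions
gear_harmonic = np.sin(2 * np.pi * teeth * t)      # tooth-meshing harmonic
# 3rd rotational harmonic of the mating gear: non-integer order 3*25/37
mating_harmonic = 0.5 * np.sin(2 * np.pi * 3 * teeth / mating_teeth * t)

avg = tsa(gear_harmonic + mating_harmonic, samples_per_rev, n_revs)
# avg retains the gear-of-interest harmonic; the mating contribution
# cancels to numerical precision because (3*25/37) * 74 is an integer.
```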

  11. Sediment accumulation, stratigraphic order, and the extent of time-averaging in lagoonal sediments: a comparison of 210Pb and 14C/amino acid racemization chronologies

    NASA Astrophysics Data System (ADS)

    Kosnik, Matthew A.; Hua, Quan; Kaufman, Darrell S.; Zawadzki, Atun

    2015-03-01

    Carbon-14 calibrated amino acid racemization (14C/AAR) data and lead-210 (210Pb) data are used to examine sediment accumulation rates, stratigraphic order, and the extent of time-averaging in sediments collected from the One Tree Reef lagoon (southern Great Barrier Reef, Australia). The top meter of lagoonal sediment preserves a stratigraphically ordered deposit spanning the last 600 yrs. Despite different assumptions, the 210Pb and 14C/AAR chronologies are remarkably similar indicating consistency in sedimentary processes across sediment grain sizes spanning more than three orders of magnitude (0.1-10 mm). Estimates of long-term sediment accumulation rates range from 2.2 to 1.2 mm yr-1. Molluscan time-averaging in the taphonomically active zone is 19 yrs, whereas below the depth of final burial (~15 cm), it is ~110 yrs/5 cm layer. While not a high-resolution paleontological record, this reef lagoon sediment is suitable for paleoecological studies spanning the period of Western colonization and development. This sedimentary deposit, and others like it, should be useful, albeit not ideal, for quantifying anthropogenic impacts on coral reef systems.

  12. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
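
    The Poisson step in the abstract is a one-line computation: with an expected rate of lambda events per decade, the probability of at least one event is 1 - exp(-lambda). The rates below are chosen to reproduce the abstract's quoted probabilities approximately and are our own back-calculation, not values stated in the paper.

```python
import math

# Sketch of the Poisson computation behind the abstract's probabilities.
# Rates (events expected per decade) are illustrative back-calculations.

def p_at_least_one(lam):
    """P(N >= 1) for a Poisson process with mean lam per decade."""
    return 1.0 - math.exp(-lam)

print(p_at_least_one(7.0))   # VEI>=4, ~7 expected per decade: >0.99
print(p_at_least_one(0.67))  # VEI>=5: ~0.49
print(p_at_least_one(0.20))  # VEI>=6: ~0.18
```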

  13. How to Know when Dramatic Change Is on Track: Leading Indicators of School Turnarounds

    ERIC Educational Resources Information Center

    Kowal, Julie; Ableidinger, Joe

    2011-01-01

    In recent years, national policymakers have placed new emphasis on "school turnarounds" as a strategy for rapid, dramatic improvement in chronically failing schools, calling on education leaders to turn around performance in the 5,000 lowest-achieving schools nationwide. This goal may seem daunting, given the dismal success rates of…

  14. Operational Authority, Support, and Monitoring of School Turnaround. NCEE Evaluation Brief. NCEE 2014-4008

    ERIC Educational Resources Information Center

    Herman, Rebecca; Graczewski, Cheryl; James-Burdumy, Susanne; Murray, Matthew; Perez-Johnson, Irma; Tanenbaum, Courtney

    2013-01-01

    The federal School Improvement Grants (SIG) program, to which $3 billion were allocated under the American Recovery and Reinvestment Act of 2009 (ARRA), supports schools attempting to turn around a history of low performance. School turnaround also is a focus of Race to the Top (RTT), another ARRA-supported initiative, which involved a roughly $4…

  15. Choosing a School Turnaround Provider. Lessons Learned. Volume 1, Issue 3

    ERIC Educational Resources Information Center

    Lockwood, Anne Turnbaugh; Fleischman, Steve

    2010-01-01

    Droves of school turnaround providers are chasing the massive federal infusion of funds flowing into failing schools. They arrive armed with glossy materials, impressive sounding claims, and, often, citing their prior relationships or experiences with one's school to support their promises of great service and impressive outcomes. But, are their…

  16. Tinkering and Turnarounds: Understanding the Contemporary Campaign to Improve Low-Performing Schools

    ERIC Educational Resources Information Center

    Duke, Daniel L.

    2012-01-01

    An unprecedented amount of attention in recent years has been focused on turning around low-performing schools. Drawing on insights from Tyack and Cuban's (1995) "Tinkering Toward Utopia," the article analyzes the forces behind the school turnaround phenomenon and how they have evolved since passage of the No Child Left Behind Act. The…

  17. Chronically Low-Performing Schools and Turnaround: Evidence from Three States

    ERIC Educational Resources Information Center

    Hansen, Michael; Choi, Kilchan

    2012-01-01

    The criteria for determining the student outcomes that define a school as having "turned around" are not well defined, and the definition of turnaround performance varies across studies. Although current policy initiatives offer guidelines for identifying CLP schools, there is no standard definition or methodology in common usage. This…

  18. A Case Study of Change Strategies Implemented in a Turnaround Elementary School

    ERIC Educational Resources Information Center

    Colson, Jo Ann

    2012-01-01

    This case study examined the change strategies in a turnaround school at the elementary level to understand and describe how change occurred and was sustained at this campus. This study examined the factors which contributed to the change in academic success of students, examined beliefs about change that led to the change process, identified the…

  19. Turnaround, Transformational, or Transactional Leadership: An Ethical Dilemma in School Reform

    ERIC Educational Resources Information Center

    Mette, Ian M.; Scribner, Jay P.

    2014-01-01

    This case was written for school leaders, specifically building-level principals and central office administrators attempting to implement school turnaround reform efforts. Often, leaders who embark on this type of organizational change work in intense environments that produce high levels of pressure to demonstrate improvement in student…

  20. Turnaround radius in an accelerated universe with quasi-local mass

    SciTech Connect

    Faraoni, Valerio; Lapierre-Léonard, Marianne; Prain, Angus

    2015-10-01

    We apply the Hawking-Hayward quasi-local energy construct to obtain in a rigorous way the turnaround radius of cosmic structures in General Relativity. A splitting of this quasi-local mass into local and cosmological parts describes the interplay between local attraction and cosmological expansion.

  1. Turnaround High School Principals: Recruit, Prepare and Empower Leaders of Change. High Schools That Work

    ERIC Educational Resources Information Center

    Schmidt-Davis, Jon; Bottoms, Gene

    2012-01-01

    Recent studies make one reality clear: While multiple factors can cause a low-performing high school to be in a turnaround situation, every high school that makes dramatic academic improvement has strong, effective school leadership. Turning a school around is no work for novices. It takes a skilled, visionary and proactive principal to pull apart…

  2. CAD/CAM, Creativity, and Discipline Lead to Turnaround School Success

    ERIC Educational Resources Information Center

    Gorman, Lynn

    2012-01-01

    Miami Central High School technology teacher Frank Houghtaling thinks the connection between theory and application is one reason his students perform better on the Florida Comprehensive Assessment Test (FCAT). The impressive turnaround school drew local and national attention last spring when one of Houghtaling's students, Dagoberto Cruz, won…

  3. Achieving Exact and Constant Turnaround Ratio in a DDS-Based Coherent Transponder

    NASA Technical Reports Server (NTRS)

    D'Addario, Larry R.

    2011-01-01

    A report describes a non-standard direct digital synthesizer (DDS) implementation that can be used as part of a coherent transponder so as to allow any rational turnaround ratio to be exactly achieved and maintained while the received frequency varies. (A coherent transponder is a receiver-transmitter in which the transmitted carrier is locked to a pre-determined multiple of the received carrier's frequency and phase. That multiple is called the turnaround ratio.) The report also describes a general model for coherent transponders that are partly digital. A partially digital transponder is one in which analog signal processing is used to convert the signals between high frequencies at which they are radiated and relatively low frequencies at which they are converted to or from digital form, with most of the complex processing performed digitally. There is a variety of possible architectures for such a transponder, and different ones can be selected by choosing different parameter values in the general model. Such a transponder uses a DDS to create a low-frequency quasi-sinusoidal signal that tracks the received carrier's phase, and another DDS to generate an IF or near-baseband version of the transmitted carrier. With conventional DDS implementations, a given turnaround ratio can be achieved only approximately, and the error varies slightly as the received frequency changes. The non-conventional implementation employed here allows any rational turnaround ratio to be exactly maintained.
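
    The quantization problem the report addresses can be shown in a few lines: a conventional DDS rounds its tuning word to an integer fraction of 2^32 of the clock, so a rational turnaround ratio (here the well-known deep-space X-band value 880/749) is only approximated, whereas exact rational bookkeeping preserves it identically. This is an illustrative sketch of the problem, not the report's implementation; the frequencies are representative round numbers.

```python
from fractions import Fraction

# Sketch (not the report's design): tuning-word quantization in a
# conventional DDS breaks an exact rational turnaround ratio; exact
# rational arithmetic preserves it. Frequencies are illustrative.

ratio = Fraction(880, 749)      # transmit/receive turnaround ratio
f_rx = Fraction(14_324_625, 2)  # received IF estimate: 7 162 312.5 Hz
f_clk = 100_000_000             # DDS clock, Hz

# Conventional DDS: the tuning word is an integer out of 2^32
word = round(float(ratio * f_rx) / f_clk * 2**32)
f_tx_dds = Fraction(word, 2**32) * f_clk

# Exact rational bookkeeping: the ratio holds identically
f_tx_exact = ratio * f_rx

print(f_tx_exact / f_rx == ratio)  # True: exact turnaround ratio
print(f_tx_dds / f_rx == ratio)    # False: small quantization residual
```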

  4. The Lay of the Land: State Practices and Needs for Supporting School Turnaround

    ERIC Educational Resources Information Center

    Scott, Caitlin; Lasley, Nora

    2013-01-01

    The goal of the Center on School Turnaround (CST) is to provide technical assistance on research-based practices and emerging promising practices that will increase the capacity of states to support their districts in turning around the lowest-performing schools. When the CST opened its doors in October 2012, it began its work by asking the…

  5. State Capacity to Support School Turnaround. NCEE Evaluation Brief. NCEE 2015-4012

    ERIC Educational Resources Information Center

    Tanenbaum, Courtney; Boyle, Andrea; Graczewski, Cheryl; James-Burdumy, Susanne; Dragoset, Lisa; Hallgren, Kristin

    2015-01-01

    One objective of the U.S. Department of Education's (ED) School Improvement Grants (SIG) and Race to the Top (RTT) program is to help states enhance their capacity to support the turnaround of low-performing schools. This capacity may be important, given how difficult it is to produce substantial and sustained achievement gains in low-performing…

  6. Participatory Democracy and Struggling Schools: Making Space for Youth in School Turnarounds

    ERIC Educational Resources Information Center

    Kirshner, Ben; Jefferson, Anton

    2015-01-01

    Background/Context:Federal policy, as codified in Race to the Top (RTT) funding guidelines, outlines four types of intervention: turnaround, restart, closure, and transformation. RTT has embraced a technocratic paradigm for school reform that frames choice less as the opportunity for the public to deliberate about what it wants from its schools…

  7. Time-averaged aerodynamic loads on the vane sets of the 40- by 80-foot and 80- by 120-foot wind tunnel complex

    NASA Technical Reports Server (NTRS)

    Aoyagi, Kiyoshi; Olson, Lawrence E.; Peterson, Randall L.; Yamauchi, Gloria K.; Ross, James C.; Norman, Thomas R.

    1987-01-01

    Time-averaged aerodynamic loads are estimated for each of the vane sets in the National Full-Scale Aerodynamic Complex (NFAC). The methods used to compute global and local loads are presented. Experimental inputs used to calculate these loads are based primarily on data obtained from tests conducted in the NFAC 1/10-Scale Vane-Set Test Facility and from tests conducted in the NFAC 1/50-Scale Facility. For those vane sets located directly downstream of either the 40- by 80-ft test section or the 80- by 120-ft test section, aerodynamic loads caused by the impingement of model-generated wake vortices and model-generated jet and propeller wakes are also estimated.

  8. Time-Average Molecular Rayleigh Scattering Technique for Measurement of Velocity, Density, Temperature, and Turbulence Intensity in High Speed Nozzle Flows

    NASA Technical Reports Server (NTRS)

    Mielke, Amy F.; Seasholtz, Richard G.; Elam, Kristie A.; Panda, Jayanta

    2004-01-01

    A molecular Rayleigh scattering based flow diagnostic is developed to measure time average velocity, density, temperature, and turbulence intensity in a 25.4-mm diameter nozzle free jet facility. The spectrum of the Rayleigh scattered light is analyzed using a Fabry-Perot interferometer operated in the static imaging mode. The resulting fringe pattern containing spectral information of the scattered light is recorded using a low noise CCD camera. Nonlinear least squares analysis of the fringe pattern using a kinetic theory model of the Rayleigh scattered light provides estimates of density, velocity, temperature, and turbulence intensity of the gas flow. Resulting flow parameter estimates are presented for an axial scan of subsonic flow at Mach 0.95 for comparison with previously acquired pitot tube data, and axial scans of supersonic flow in an underexpanded screeching jet. The issues related to obtaining accurate turbulence intensity measurements using this technique are discussed.

  9. Selective processing of auditory evoked responses with iterative-randomized stimulation and averaging: A strategy for evaluating the time-invariant assumption.

    PubMed

    Valderrama, Joaquin T; de la Torre, Angel; Medina, Carlos; Segura, Jose C; Thornton, A Roger D

    2016-03-01

    The recording of auditory evoked potentials (AEPs) at fast rates allows the study of neural adaptation, improves accuracy in estimating hearing threshold and may help in diagnosing certain pathologies. Stimulation sequences used to record AEPs at fast rates must be designed with a certain jitter, i.e., they cannot be strictly periodic. Some authors believe that stimuli from wide-jittered sequences may evoke auditory responses of different morphology, and therefore, that the time-invariant assumption would not hold. This paper describes a methodology that can be used to analyze the time-invariant assumption in jittered stimulation sequences. The proposed method [Split-IRSA] is based on an extended version of the iterative randomized stimulation and averaging (IRSA) technique, including selective processing of sweeps according to a predefined criterion. The fundamentals, the mathematical basis and relevant implementation guidelines of this technique are presented. The results of this study show that Split-IRSA performs adequately and that both fast and slow mechanisms of adaptation influence the evoked-response morphology; both mechanisms should therefore be considered when time-invariance is assumed. The significance of these findings is discussed.
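
    The idea of selective (split) averaging can be illustrated with a toy example: sort sweeps by a predefined criterion — here the preceding inter-stimulus interval (ISI) — sub-average each subset, and compare the resulting response morphologies; a difference indicates a violation of the time-invariant assumption. This is only a sketch of the idea with simulated data, not the paper's Split-IRSA deconvolution algorithm.

```python
import numpy as np

# Toy sketch of selective (split) averaging: sub-average sweeps by the
# preceding inter-stimulus interval and compare morphologies. This
# illustrates the concept only; it is not the Split-IRSA algorithm.

rng = np.random.default_rng(2)
n_sweeps, n_pts = 400, 100
isis = rng.uniform(40.0, 90.0, n_sweeps)  # jittered ISIs, ms
template = np.exp(-((np.arange(n_pts) - 30) / 8.0) ** 2)

# Simulated adaptation: shorter preceding ISI -> smaller response
sweeps = np.array([
    (0.5 + 0.5 * (isi - 40) / 50) * template + rng.normal(0, 0.5, n_pts)
    for isi in isis
])

short = sweeps[isis < 65].mean(axis=0)   # sub-average: short-ISI sweeps
long_ = sweeps[isis >= 65].mean(axis=0)  # sub-average: long-ISI sweeps
print(short.max() < long_.max())         # adaptation visible -> True
```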

  10. Quantification of benzene, toluene, ethylbenzene and o-xylene in internal combustion engine exhaust with time-weighted average solid phase microextraction and gas chromatography mass spectrometry.

    PubMed

    Baimatova, Nassiba; Koziel, Jacek A; Kenessov, Bulat

    2015-05-11

    A new and simple method for benzene, toluene, ethylbenzene and o-xylene (BTEX) quantification in vehicle exhaust was developed, based on diffusion-controlled extraction onto a retracted solid-phase microextraction (SPME) fiber coating. The rationale was to develop a method based on existing and proven SPME technology that is feasible for field adaptation in developing countries. Passive sampling with the SPME fiber retracted into the needle extracted nearly two orders of magnitude less mass (n) than an exposed fiber (outside the needle), and sampling was in a time-weighted averaging (TWA) mode. Both the sampling time (t) and fiber retraction depth (Z) were adjusted to quantify a wider range of Cgas. Extraction and quantification are conducted in a non-equilibrium mode. The effects of Cgas, t, Z and T were tested. In addition, the contribution of n extracted by the metallic surfaces of the needle assembly without SPME coating was studied, as were the effects of sample storage time on the loss of n. Retracted TWA-SPME extractions followed the theoretical model. The extracted n of BTEX was proportional to Cgas, t, Dg and T and inversely proportional to Z. Method detection limits were 1.8, 2.7, 2.1 and 5.2 mg m(-3) (0.51, 0.83, 0.66 and 1.62 ppm) for BTEX, respectively. The contribution of extraction onto metallic surfaces was reproducible and influenced by Cgas and t, and less so by T and Z. The new method was applied to measure BTEX in the exhaust gas of a 1995 Ford Crown Victoria and compared with a whole-gas, direct-injection method.
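
    The proportionalities the abstract reports (n proportional to Cgas, t and Dg, inversely proportional to Z) follow from Fick's first law of diffusion through the needle's dead volume, which can be inverted to recover the TWA concentration from the extracted mass. The sketch below is our own illustration of that model; the needle geometry and extracted mass are hypothetical, and only the benzene diffusion coefficient is an approximate literature-style value.

```python
# Sketch of the diffusion model underlying retracted-fiber TWA-SPME
# (Fick's first law through the needle's dead volume). Parameter values
# are illustrative, not the paper's calibration.

def twa_spme_conc(n_ng, D_cm2_s, area_cm2, t_s, Z_cm):
    """TWA gas concentration (ng/cm^3) from extracted mass n."""
    return n_ng * Z_cm / (D_cm2_s * area_cm2 * t_s)

D = 0.088     # benzene gas-phase diffusion coefficient, cm^2/s (approx.)
A = 0.00086   # needle-opening cross-section, cm^2 (hypothetical)
Z = 0.3       # fiber retraction depth, cm
t = 60.0      # sampling time, s
n = 0.02      # extracted mass measured by GC-MS, ng (hypothetical)

c = twa_spme_conc(n, D, A, t, Z)
print(f"{c:.2f} mg/m^3")  # 1 ng/cm^3 == 1 mg/m^3
```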

  11. Vibration mode shapes visualization in industrial environment by real-time time-averaged phase-stepped electronic speckle pattern interferometry at 10.6 μm and shearography at 532 nm

    NASA Astrophysics Data System (ADS)

    Languy, Fabian; Vandenrijt, Jean-François; Thizy, Cédric; Rochet, Jonathan; Loffet, Christophe; Simon, Daniel; Georges, Marc P.

    2016-12-01

    We present our investigations of two interferometric methods suitable for industrial conditions and dedicated to the visualization of vibration modes of aeronautic blades. First, we consider long-wave infrared (LWIR) electronic speckle pattern interferometry (ESPI). The use of a long wavelength allows measuring larger vibration amplitudes than can be achieved with visible light; it also lowers sensitivity to external perturbations. Second, shearography at 532 nm is used as an alternative to LWIR ESPI. Both methods are used in time-averaged mode with phase-stepping. This transforms the Bessel fringes typical of time averaging into phase values that provide higher contrast and improve the visualization of vibration mode shapes. Laboratory experiments allowed comparison of the two techniques and led to the selection of shearography. Finally, a vibration test on an electrodynamic shaker was performed in an industrial environment, and mode shapes of good quality were obtained by shearography.

  12. Fast prediction of pulsed nonlinear acoustic fields from clinically relevant sources using time-averaged wave envelope approach: comparison of numerical simulations and experimental results.

    PubMed

    Wójcik, J; Kujawska, T; Nowicki, A; Lewin, P A

    2008-12-01

    The primary goal of this work was to verify experimentally the applicability of the recently introduced time-averaged wave envelope (TAWE) method [J. Wójcik, A. Nowicki, P.A. Lewin, P.E. Bloomfield, T. Kujawska, L. Filipczyński, Wave envelopes method for description of nonlinear acoustic wave propagation, Ultrasonics 44 (2006) 310-329.] as a tool for fast prediction of four dimensional (4D) pulsed nonlinear pressure fields from arbitrarily shaped acoustic sources in attenuating media. The experiments were performed in water at the fundamental frequency of 2.8 MHz for spherically focused (focal length F=80 mm) square (20 x 20 mm) and rectangular (10 x 25 mm) sources similar to those used in the design of 1D linear arrays operating with ultrasonic imaging systems. The experimental results obtained with 10-cycle tone bursts at three different excitation levels corresponding to linear, moderately nonlinear and highly nonlinear propagation conditions (0.045, 0.225 and 0.45 MPa on-source pressure amplitude, respectively) were compared with those yielded using the TAWE approach [J. Wójcik, A. Nowicki, P.A. Lewin, P.E. Bloomfield, T. Kujawska, L. Filipczyński, Wave envelopes method for description of nonlinear acoustic wave propagation, Ultrasonics 44 (2006) 310-329.]. The comparison of the experimental results and numerical simulations has shown that the TAWE approach is well suited to predict (to within +/- 1 dB) both the spatial-temporal and spatial-spectral pressure variations in the pulsed nonlinear acoustic beams. The obtained results indicated that implementation of the TAWE approach enabled shortening of computation time in comparison with the time needed for prediction of the full 4D pulsed nonlinear acoustic fields using a conventional (Fourier-series) approach [P.T. Christopher, K.J. Parker, New approaches to nonlinear diffractive field propagation, J. Acoust. Soc. Am. 90 (1) (1991) 488-499.]. The reduction in computation time depends on several parameters

  13. New universal, portable and cryogenic sampler for time weighted average monitoring of H2S, NH3, benzene, toluene, ethylbenzene, xylenes and dimethylethylamine.

    PubMed

    Juarez-Galan, Juan M; Valor, Ignacio

    2009-04-10

    A new cryogenic integrative air sampler (patent application number 08/00669), able to overcome many of the limitations of current volatile organic compound and odour sampling methodologies, is presented. The sample is spontaneously collected in a universal way at 15 mL/min, selectively dried (reaching up to 95% moisture removal) and stored under cryogenic conditions. The sampler performance was tested under time weighted average (TWA) conditions, sampling 100 L of air over 5 days for determination of NH(3), H(2)S, and benzene, toluene, ethylbenzene and xylenes (BTEX) in the ppm(v) range. Recovery was 100% (statistically) for all compounds, with a concentration factor of 5.5. Furthermore, an in-field evaluation was done by monitoring the TWA immission levels of BTEX and dimethylethylamine (ppb(v) range) in an urban area with the developed technology and comparing the results with those from a commercial graphitised charcoal diffusive sampler. The results showed good statistical agreement between the two techniques.

  14. A Segmented Chirped-Pulse Fourier Transform Mm-Wave Spectrometer (260-295 Ghz) with Real-Time Signal Averaging Capability

    NASA Astrophysics Data System (ADS)

    Harris, Brent J.; Steber, Amanda L.; Pate, Brooks H.

    2013-06-01

    The design and performance of a 260-295 GHz segmented chirped-pulse Fourier transform mm-wave spectrometer is presented. The spectrometer uses an arbitrary waveform generator to create an excitation and detection waveform. The excitation waveform is a series of chirped pulses with 720 MHz bandwidth at mm-wave and about 200 ns pulse duration. The excitation pulses are produced using an x24 active multiplier chain with a peak power of 30 mW. Following a chirped pulse excitation, the molecular emission from all transitions in the excitation bandwidth is detected using heterodyne detection. The free induction decay (FID) is collected for about 1.5 microseconds and each segment measurement time period is 2 microseconds. The local oscillator for the detection in each segment is also created from the arbitrary waveform generator. The full excitation waveform contains 50 segments that scan the chirped pulse frequency and LO frequency across the 260-295 GHz frequency range in a total measurement time of 100 microseconds. The FID from each measurement segment is digitized at 4 GSamples/s, for a record length of 400 kpts. Signal averaging is performed by accumulating the FID signals from each sweep through the spectrum in a 32-bit FPGA. This allows the acquisition of 16 million sequential 260-295 GHz spectra in real time. The final spectrum is produced from fast Fourier transform of the FID in each measurement segment with the frequency calculated using the segment's LO frequency. The agility of the arbitrary waveform generator light source makes it possible to perform several coherent spectroscopic measurements to speed the analysis of the spectrum. In particular, high-sensitivity double-resonance measurements can be performed by applying a "pi-pulse" to a selected molecular transition and observing the changes to all other transitions in the 260-295 GHz frequency range of the spectrometer. 
In this mode of operation, up to 50 double-resonance frequencies can be used in each
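The segmented acquisition and real-time averaging scheme described above can be sketched as follows (synthetic FIDs and shortened record lengths; instrument parameters are simplified from the abstract, and the 100 MHz test line is invented):

```python
import numpy as np

fs = 4e9                    # 4 GS/s digitizer, as in the abstract
n_pts = 8192                # FID points per segment (shortened for illustration)
lo_freqs = np.arange(260e9, 295e9, 0.7e9)   # 50 segment LOs spanning 260-295 GHz
n_sweeps = 25               # the FPGA accumulates millions of sweeps in reality

rng = np.random.default_rng(0)

def measure_fid(segment):
    """Stand-in for one segment's heterodyned FID: a decaying line at a
    100 MHz intermediate frequency plus detector noise."""
    t = np.arange(n_pts) / fs
    return np.cos(2 * np.pi * 100e6 * t) * np.exp(-t / 0.5e-6) + rng.normal(0, 1, n_pts)

# Accumulate FIDs sweep by sweep (time-domain signal averaging), then FFT.
acc = np.zeros((len(lo_freqs), n_pts))
for _ in range(n_sweeps):
    for s in range(len(lo_freqs)):
        acc[s] += measure_fid(s)

spec = np.abs(np.fft.rfft(acc / n_sweeps, axis=1))
if_axis = np.fft.rfftfreq(n_pts, 1 / fs)

# Absolute line frequency = segment LO + intermediate frequency of the peak.
peak_if = if_axis[np.argmax(spec[0])]
print(f"segment 0 line at {(lo_freqs[0] + peak_if) / 1e9:.3f} GHz")
```

Averaging in the time domain before the FFT is what makes the FPGA accumulation cheap: only one running sum per segment must be stored, and the transform is applied once at the end.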

  15. Evaluation of the inverse dispersion modelling method for estimating ammonia multi-source emissions using low-cost long time averaging sensor

    NASA Astrophysics Data System (ADS)

    Loubet, Benjamin; Carozzi, Marco

    2015-04-01

    Tropospheric ammonia (NH3) is a key player in atmospheric chemistry and its deposition is a threat to the environment (ecosystem eutrophication, soil acidification and reduction in species biodiversity). Most global NH3 emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emissions from a homogeneous source of known geometry. When the emission derives from different sources inside the measured footprint, the emission should be treated as a multi-source problem. This work aims at estimating whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments, composed of small fields (typically 25 m × 25 m squares) located close to each other, using low-cost NH3 measurements (diffusion samplers). To do this, a numerical experiment was designed with a combination of 3 × 3 square field sources (625 m2 each) and a set of sensors placed at the centre of each field at several heights, as well as 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian stochastic dispersion model (WindTrax) and a Gaussian-like model (FIDES). The concentrations were averaged over various integration times (3 hours to 28 days) to mimic the diffusion sampler behaviour under several sampling strategies. The sources were then inferred by inverse modelling using the averaged concentrations and the same models in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of NH3 emissions to surface temperature. A combination of emission patterns (constant, linear decreasing, exponential decreasing and Gaussian) and strengths was used to evaluate the uncertainty of the inversion method. 
Each numerical experiment covered a period of 28
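In the linear case, the multi-source inversion described above reduces to solving c = D s for the source vector s, where D holds the modelled sensor response to a unit emission from each source. A minimal synthetic sketch (all numbers invented; the study built D with WindTrax/FIDES rather than random values):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_sources = 12, 9          # e.g. 9 fields (3 x 3) and 12 samplers

# Assumed dispersion matrix: D[i, j] = modelled concentration at sensor i
# per unit emission from source j (a real study computes this per period).
D = rng.uniform(0.01, 1.0, (n_sensors, n_sources))
s_true = rng.uniform(0.5, 2.0, n_sources)            # true source strengths

c = D @ s_true                                       # forward model
c_noisy = c * (1 + rng.normal(0, 0.02, n_sensors))   # 2% measurement noise

# Infer emissions by least squares from the (time-averaged) concentrations.
s_est, *_ = np.linalg.lstsq(D, c_noisy, rcond=None)
print(np.round(s_est / s_true, 2))    # near 1 when the problem is well posed
```

The conditioning of D, set by sensor placement relative to the sources, controls how strongly the 2% concentration noise is amplified into emission errors, which is the uncertainty the numerical experiment probes.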

  16. Computer aided fast turnaround laboratory for research in VLSI (Very Large Scale Integration)

    NASA Astrophysics Data System (ADS)

    Meindl, James D.; Shott, John

    1987-05-01

    The principal objectives of the Computer Aided/Automated Fast Turnaround Laboratory (CAFTAL) for VLSI are: application of cutting-edge computer science and software systems engineering to fast-turnaround fabrication in order to develop more productive and flexible new approaches; fast-turnaround fabrication of optimized VLSI systems, achieved through synergistic integration of system research and device research in aggressive applications such as superfast computers; and investigation of physical limits on submicron VLSI in order to define and explore the most promising technologies. To make a state-of-the-art integrated circuit process more manufacturable, we must be able to understand both the numerous individual process technologies used to fabricate the complete device and the important device, circuit and system limitations in sufficient detail to monitor and control the overall fabrication sequence. Specifically, we must understand the sensitivity of device, circuit and system performance to each important step in the fabrication sequence. Moreover, we should be able to predict the manufacturability of an integrated circuit before we actually manufacture it. The salient objective of this program is to enable accurate simulation and control of computer-integrated manufacturing of ultra-large-scale integrated (ULSI) systems, including millions of submicron transistors in a single silicon chip.

  17. The challenge and the future of health care turnaround plans: evidence from the Italian experience.

    PubMed

    Ferrè, Francesca; Cuccurullo, Corrado; Lega, Federico

    2012-06-01

    Over the last two decades, health policy and governance in Italy have undergone decentralisation at the regional level. The central government was expected to play a guiding role in defining minimum care standards and controlling health expenditures at the regional level in order to keep the entire Italian National Health System (INHS) on track. Although health performance trends have been consistent across regions, public health expenditures have been variable and contributed to a cumulative deficit of 38 billion Euros from 2001 to 2010. To address the deficit, the government called for a resolution introducing a partial bail-out plan and later institutionalised a process to facilitate a turnaround. The upturn started with the development of a formal regional turnaround plan that proposed strategic actions to address the structural determinants of costs. The effectiveness of this tool was widely questioned, and many critics suggested that it was focused more on methods to address short-term issues than on the long-term strategic reconfiguration that is required for regional health systems to ultimately address the structural causes of deficits.We propose an interpretative framework to understand the advantages and disadvantages of turnaround plans, and we apply the findings to the development of policy recommendations for the structure, methods, processes and contexts of the implementation of this tool.

  18. Symptoms in pediatric asthmatics and air pollution: differences in effects by symptom severity, anti-inflammatory medication use and particulate averaging time.

    PubMed Central

    Delfino, R J; Zeiger, R S; Seltzer, J M; Street, D H

    1998-01-01

    Experimental research in humans and animals points to the importance of adverse respiratory effects from short-term particle exposures and to the importance of proinflammatory effects of air pollutants, particularly O3. However, particle averaging time has not been subjected to direct scientific evaluation, and there is a lack of epidemiological research examining both this issue and whether modification of air pollutant effects occurs with differences in asthma severity and anti-inflammatory medication use. The present study examined the relationship of adverse asthma symptoms (bothersome or interfered with daily activities or sleep) to O3 and particles ≤10 µm (PM10) in a Southern California community in the air inversion zone (1200-2100 ft) with high O3 and low PM10 (R = 0.3). A panel of 25 asthmatics 9-17 years of age was followed daily, August through October 1995 (n = 1,759 person-days, excluding one subject without symptoms). Exposures included stationary outdoor hourly PM10 (highest 24-hr mean, 54 µg/m³, versus median of 1-hr maximums, 56 µg/m³) and O3 (mean of 1-hr maximums, 90 ppb; 5 days ≥120 ppb). Longitudinal regression analyses utilized the generalized estimating equations (GEE) model controlling for autocorrelation, day of week, outdoor fungi, and weather. Asthma symptoms were significantly associated with both outdoor O3 and PM10 in single-pollutant and co-regressions, with 1-hr and 8-hr maximum PM10 having larger effects than the 24-hr mean. Subgroup analyses showed effects of current-day PM10 maximums were strongest in 10 more frequently symptomatic (MS) children: the odds ratios (ORs) for adverse symptoms from 90th percentile increases were 2.24 [95% confidence interval (CI), 1.46-3.46] for 1-hr PM10 (47 µg/m³); 1.82 (CI, 1.18-2.81) for 8-hr PM10 (36 µg/m³); and 1.50 (CI, 0.80-2.80) for 24-hr PM10 (25 µg/m³). Subgroup analyses

  19. Preparing Turnaround Leaders for High Needs Urban Schools

    ERIC Educational Resources Information Center

    Lochmiller, Chad R.; Chesnut, Colleen E.

    2017-01-01

    Purpose: The purpose of this paper is to describe the program structure and design considerations of a 25-day, full-time apprenticeship in a university-based principal preparation program. Design/Methodology/ Approach: The study used a qualitative case study design that drew upon interviews and focus groups with program participants as well as…

  20. Using corporate finance to engineer an organizational turnaround.

    PubMed

    Sussman, Jason H; Dziesinski, Ray R

    2002-11-01

    Georgia's Southern Regional Medical Center used a proven corporate finance approach to dramatically improve its financial position and integrate its strategic and financial planning. Managers throughout the organization were educated about principles of corporate finance. Reliable cash-flow projections were used to create a multiyear glide path to financial stability. Initiatives were tied to specific time frames and quantifiable financial goals and underwent a standardized review process.

  1. Temperature averaging thermal probe

    NASA Technical Reports Server (NTRS)

    Kalil, L. F.; Reinhardt, V. (Inventor)

    1985-01-01

    A thermal probe to average temperature fluctuations over a prolonged period was formed with a temperature sensor embedded inside a solid object of a thermally conducting material. The solid object is held in a position equidistantly spaced apart from the interior surfaces of a closed housing by a mount made of a thermally insulating material. The housing is sealed to trap a vacuum or mass of air inside and thereby prevent transfer of heat directly between the environment outside of the housing and the solid object. Electrical leads couple the temperature sensor with a connector on the outside of the housing. Other solid objects of different sizes and materials may be substituted for the cylindrically-shaped object to vary the time constant of the probe.

  2. Temperature averaging thermal probe

    NASA Astrophysics Data System (ADS)

    Kalil, L. F.; Reinhardt, V.

    1985-12-01

    A thermal probe to average temperature fluctuations over a prolonged period was formed with a temperature sensor embedded inside a solid object of a thermally conducting material. The solid object is held in a position equidistantly spaced apart from the interior surfaces of a closed housing by a mount made of a thermally insulating material. The housing is sealed to trap a vacuum or mass of air inside and thereby prevent transfer of heat directly between the environment outside of the housing and the solid object. Electrical leads couple the temperature sensor with a connector on the outside of the housing. Other solid objects of different sizes and materials may be substituted for the cylindrically-shaped object to vary the time constant of the probe.

  3. Field demonstration of rapid turnaround, multilevel groundwater screening

    SciTech Connect

    Tingle, A.R.; Baker, L.; Long, D.D.; Miracle, M.

    1994-09-01

    A combined technology approach to rapidly characterizing source area and downgradient groundwater associated with a past fuel spill has been field tested. The purpose of this investigation was to determine the presence and extent of fuel-related compounds, or indications of their biodegradation, in groundwater. The distance from the source area to be investigated was established by calculating the potential extent of a plume based only on groundwater flow velocities. To accomplish this objective, commercially available technologies were combined and used to rapidly assess the source area and downgradient groundwater associated with the fuel discharge. The source of contamination that was investigated overlies glacial sand and gravel outwash deposits. Historical data suggest that from 1955 to 1970 as many as 1 to 6 million gal of aviation gasoline (AVGAS) were spilled at the study area. Although the remedial investigation (RI) for this study area indicated fuel-related groundwater contamination at the source area, fuel-related contamination was not detected in downgradient monitoring wells. Rapid horizontal groundwater velocities and the 24-year time span since the last reported spill further suggest that a plume of contaminated groundwater could extend several thousand feet downgradient. The lack of contamination downgradient from the source suggests two possibilities: (1) monitoring wells installed during the RI did not intersect the plume, or (2) fuel-related compounds had naturally degraded.

  4. Signal-to-noise ratio improvements in laser flow diagnostics using time-resolved image averaging and high dynamic range imaging

    NASA Astrophysics Data System (ADS)

    Giassi, Davide; Long, Marshall B.

    2016-08-01

    Two alternative image readout approaches are demonstrated to improve the signal-to-noise ratio (SNR) in temporally resolved laser-based imaging experiments of turbulent phenomena. The first method exploits the temporal decay characteristics of the phosphor screens of image intensifiers when coupled to an interline-transfer CCD camera operated in double-frame mode. Specifically, the light emitted by the phosphor screen, which has a finite decay constant, is equally distributed and recorded over the two sequential frames of the detector so that an averaged image can be reconstructed. The characterization of both detector and image intensifier showed that the technique preserves the correct quantitative information, and its applicability to reactive flows was verified using planar Rayleigh scattering and tested with the acquisition of images of both steady and turbulent partially premixed methane/air flames. The comparison between conventional Rayleigh results and the averaged ones showed that the SNR of the averaged image is higher than the conventional one; with the setup used in this work, the gain in SNR was seen to approach 30 %, for both the steady and turbulent cases. The second technique uses the two-frame readout of an interline-transfer CCD to increase the image SNR based on high dynamic range imaging, and it was tested in an unsteady non-reactive flow of Freon-12 injected in air. The result showed a 15 % increase in the SNR of the low-pixel-count regions of an image, when compared to the pixels of a conventionally averaged one.
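The SNR benefit of the first readout scheme can be sanity-checked with synthetic numbers: splitting the phosphor light evenly over two frames and summing them averages the independent read noise, for an ideal gain of √2 ≈ 41% (the paper measures a gain approaching 30% with real hardware). A toy sketch, with all values assumed:

```python
import numpy as np

rng = np.random.default_rng(2)
signal = 100.0          # total light per event (arbitrary units)
noise_sigma = 10.0      # read noise per frame
n_trials = 200_000

# The phosphor decay spreads the light evenly over the two sequential frames.
frame1 = signal / 2 + rng.normal(0, noise_sigma, n_trials)
frame2 = signal / 2 + rng.normal(0, noise_sigma, n_trials)

snr_single = signal / (2 * noise_sigma)      # one frame scaled x2: noise doubles too
snr_avg = signal / np.std(frame1 + frame2)   # summing both frames averages the noise
print(f"SNR gain from two-frame averaging: {snr_avg / snr_single:.2f}x")  # ~1.41
```

The measured 30% falling short of the ideal 41% is consistent with the two frames not receiving exactly equal shares of the phosphor decay and with correlated noise sources the toy model omits.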

  5. Neutron resonance averaging

    SciTech Connect

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.

  6. Areal Average Albedo (AREALAVEALB)

    DOE Data Explorer

    Riihimaki, Laura; Marinovici, Cristina; Kassianov, Evgueni

    2008-01-01

    The Areal Averaged Albedo VAP yields areal-averaged surface spectral albedo estimates from MFRSR measurements collected under fully overcast conditions via a simple one-line equation (Barnard et al., 2008) that links cloud optical depth, normalized cloud transmittance, the asymmetry parameter, and the areal-averaged surface albedo.

  7. On the use of time-averaging restraints when deriving biomolecular structure from 3J-coupling values obtained from NMR experiments.

    PubMed

    Smith, Lorna J; van Gunsteren, Wilfred F; Hansen, Niels

    2016-09-01

    Deriving molecular structure from 3J-couplings obtained from NMR experiments is a challenge due to (1) the uncertainty in the Karplus relation 3J(θ) connecting a 3J-coupling value to a torsional angle θ, (2) the need to account for the averaging inherent to the measurement of 3J-couplings, and (3) the sampling road blocks that may emerge due to the multiple-valuedness of the inverse function θ(3J) of the function 3J(θ). Ways to properly handle these issues in structure refinement of biomolecules are discussed and illustrated using the protein hen egg white lysozyme as an example.
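The Karplus relation and its multiple-valued inverse, mentioned in points (1) and (3), can be illustrated with the generic parametrisation 3J(θ) = A cos²θ + B cosθ + C; the coefficients below are illustrative HN-Hα-type values, not the ones used in the paper:

```python
import numpy as np

A, B, C = 6.4, -1.4, 1.9   # Hz; illustrative Karplus coefficients (assumed)

def karplus(theta_deg):
    """3J(theta) = A cos^2(theta) + B cos(theta) + C, theta in degrees."""
    c = np.cos(np.radians(theta_deg))
    return A * c**2 + B * c + C

# Scan torsion angles and collect every angle reproducing a target 3J value.
thetas = np.arange(-180.0, 180.0, 0.1)
target_j = 5.0
hits = thetas[np.abs(karplus(thetas) - target_j) < 0.02]
# Several disjoint angle regions map to the same 3J: this multiple-valued
# inverse is exactly the sampling road block the abstract refers to.
print(f"{len(hits)} grid angles give 3J ~ {target_j} Hz")
```

For these coefficients a mid-range J value is reached at four distinct torsion angles (±θ₁ and ±θ₂), so a restraint on J alone cannot select a single conformation, which motivates time-averaged rather than instantaneous restraints.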

  8. Turnaround Principals

    ERIC Educational Resources Information Center

    McLester, Susan

    2011-01-01

    The Obama administration has grand hopes for turning around the nation's lowest-performing schools, in part by allocating $3.5 billion for School Improvement Grants. Unfortunately, there simply aren't enough qualified principals to replace those mandated to be fired under two of the four school improvement models that the federal government says…

  9. States' Average College Tuition.

    ERIC Educational Resources Information Center

    Eglin, Joseph J., Jr.; And Others

    This report presents statistical data on trends in tuition costs from 1980-81 through 1995-96. The average tuition for in-state undergraduate students of 4-year public colleges and universities for academic year 1995-96 was approximately 8.9 percent of median household income. This figure was obtained by dividing the students' average annual…

  10. Sputum smear microscopy referral rates and turnaround time in the Tonga Islands.

    PubMed

    Fonua, L; Bissell, K; Vivili, P; Gounder, S; Hill, P C

    2014-06-21

    Setting: The national tuberculosis programme and reference laboratory of Tonga, located on the main island, Tongatapu, and three district hospital laboratories on other islands. Objectives: To compare Tongatapu with the other islands in terms of sputum referral rates, number of specimens per patient, specimen quality, examination results, result turnaround time, and time from result to treatment initiation. Design: Retrospective study based on a review of laboratory records and anti-tuberculosis treatment registers of the four hospitals of Tonga between 2003 and 2012. Results: Of 3078 sputum specimens, 71.7% were of good quality. The sputum referral rate was almost twice as high in Tongatapu as in the outer islands (353 vs 180 per 100,000). Sputum turnaround time in Tongatapu and the outer islands was 4.02 and 4.11 days, respectively. Of 83 positive cases, 91.2% were treated the same day in Tongatapu versus 80% in the outer islands. Conclusion: There were differences between the main island and the outer islands in sputum examination rates but not in turnaround time. Data on smear quality and examination dates have limitations that warrant intervention, with TB-specific guidelines and registers. Further research is required to understand the differences in referral rates.

  11. The Turnaround Challenge: Why America's Best Opportunity to Dramatically Improve Student Achievement Lies in Our Worst-Performing Schools. Supplement to the Main Report

    ERIC Educational Resources Information Center

    Calkins, Andrew; Guenther, William; Belfiore, Grace; Lash, Dave

    2007-01-01

    The turnaround recommendations and framework in "The Turnaround Challenge" grew out of both new research and synthesis of extensive existing research, as carried out by Mass Insight Education & Research Institute and its partners since September 2005. If the main report is the tip of the proverbial iceberg, this supplement represents…

  12. The First 90 Days of the New Middle School Principal in a Turnaround School: In-Depth Case Study of the Transition Period (First 90 Days)

    ERIC Educational Resources Information Center

    Baeza, Marco A.

    2010-01-01

    This study analyzed skills, strategies, and theories that new middle school principals used to be successful during their transition period (the first 90 days) in turnaround schools. Based on research on transitions, three research questions guided the study: 1. Do middle school principals in a turnaround school situation find the transition…

  13. Aggregation and Averaging.

    ERIC Educational Resources Information Center

    Siegel, Irving H.

    The arithmetic processes of aggregation and averaging are basic to quantitative investigations of employment, unemployment, and related concepts. In explaining these concepts, this report stresses need for accuracy and consistency in measurements, and describes tools for analyzing alternative measures. (BH)

  14. Threaded average temperature thermocouple

    NASA Technical Reports Server (NTRS)

    Ward, Stanley W. (Inventor)

    1990-01-01

    A threaded average temperature thermocouple 11 is provided to measure the average temperature of a test situs of a test material 30. A ceramic insulator rod 15 with two parallel holes 17 and 18 through the length thereof is securely fitted in a cylinder 16, which is bored along the longitudinal axis of symmetry of threaded bolt 12. Threaded bolt 12 is composed of material having thermal properties similar to those of test material 30. Leads of a thermocouple wire 20 leading from a remotely situated temperature sensing device 35 are each fed through one of the holes 17 or 18, secured at head end 13 of ceramic insulator rod 15, and exit at tip end 14. Each lead of thermocouple wire 20 is bent into and secured in an opposite radial groove 25 in tip end 14 of threaded bolt 12. Resulting threaded average temperature thermocouple 11 is ready to be inserted into cylindrical receptacle 32. The tip end 14 of the threaded average temperature thermocouple 11 is in intimate contact with receptacle 32. A jam nut 36 secures the threaded average temperature thermocouple 11 to test material 30.

  15. School Improvement and Urban Renewal: The Impact of a Turnaround School's Performance on Real Property Values in Its Surrounding Community

    ERIC Educational Resources Information Center

    Jacobson, Stephen L.; Szczesek, Jill

    2013-01-01

    This study investigates the economic impact of a "turnaround" school on real property values in its surrounding community as related to the argument introduced by Tiebout in 1956 correlating local public goods, in this case school success, to housing-location decision making. Using single-family home sales found on the Multiple Listing System and…

  16. Portfolio District Reform Meets School Turnaround: Early Implementation Findings from the Los Angeles Public School Choice Initiative

    ERIC Educational Resources Information Center

    Marsh, Julie A.; Strunk, Katharine O.; Bush, Susan

    2013-01-01

    Purpose: Despite the popularity of school "turnaround" and "portfolio district" management as solutions to low performance, there has been limited research on these strategies. The purpose of this paper is to address this gap by exploring the strategic case of Los Angeles Unified School District's Public School Choice…

  17. A Rural School/Community: A Case Study of a Dramatic Turnaround & Its Implications for School Improvement.

    ERIC Educational Resources Information Center

    Carlson, Robert V.

    This paper presents a case study of a rural community exhibiting a dramatic turnaround in community support for a new school bond issue. Demographic change was partly responsible for the change in community attitudes, with two waves of immigration altering the long-term conservative orientation of this community. After a series of failed…

  18. The Double Bind for Women: Exploring the Gendered Nature of Turnaround Leadership in a Principal Preparation Program

    ERIC Educational Resources Information Center

    Weiner, Jennie Miles; Burton, Laura J.

    2016-01-01

    In this study of nine participants in a turnaround principal preparation program, Jennie Miles Weiner and Laura J. Burton explore how gender role identity shaped participants' views of effective principal leadership and their place within it. The authors find that although female and male participants initially framed effective leadership…

  19. A Case Study of a Turnaround High School: An Examination of the Maryland State Department of Education Breakthrough Center Intervention

    ERIC Educational Resources Information Center

    Galindo, Claudia; Stein, Kathleen; Schaffer, Eugene

    2016-01-01

    This study examined the Maryland State Department of Education Breakthrough Center (BTC) engagement in a Baltimore City turnaround high school. Utilizing a case-study design and mixed-methods research, data were collected through interviews, informal observations, and review of administrative and achievement documents. Beginning in the 2011-2012…

  20. Developing Arizona Turnaround Leaders to Build High-Capacity Schools in the Midst of Accountability Pressures and Changing Demographics

    ERIC Educational Resources Information Center

    Ylimaki, Rose M.; Brunderman, Lynnette; Bennett, Jeffrey V.; Dugan, Thad

    2014-01-01

    Today's accountability policies and changing demographics have created conditions in which leaders must rapidly build school capacity and improve outcomes in culturally diverse schools. This article presents findings from a mixed-methods evaluation of an Arizona Turnaround Leadership Development Project. The project drew on studies of turnaround…

  1. The Project L.I.F.T. Story: Early Lessons from a Public-Private Education Turnaround Initiative

    ERIC Educational Resources Information Center

    Kim, Juli; Ellison, Shonaka

    2015-01-01

    Leading Charlotte foundations formed a funding collaborative to support a five-year district turnaround initiative to dramatically improve educational outcomes for students in the West Charlotte High School corridor, one of the city's lowest-performing feeder zones. The "Project L.I.F.T." initiative involves four areas of education…

  2. Comparison of two different passive air samplers (PUF-PAS versus SIP-PAS) to determine time-integrated average air concentration of volatile hydrophobic organic pollutants

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Kyu; Park, Jong-Eun

    2014-06-01

    Despite remarkable achievements for some chemicals, field-measurement techniques have not advanced for the volatile hydrophobic organic chemicals (HOCs) that are the subject of international concern. This study assesses the applicability of passive air sampling (PAS) by comparing PUF-PAS with a modified SIP-PAS, made by impregnating XAD-4 powder into PUF, reviewing the principles of PAS, screening sensitive parameters, and determining the uncertainty range of PAS-derived concentrations. The PAS air sampling rate determined in this study, corrected against a co-deployed low-volume active air sampler (LAS) using neutral PFCs as model chemicals, was ~1.2 m3/day. Our assessment shows that the improved sorption capacity of a SIP lengthens the PAS deployment duration by expanding the linear uptake range, thereby enlarging the effective air sampling volume and the detection frequency of chemicals at trace levels. Consequently, volatile chemicals can be collected over sufficiently long times without reaching equilibrium when using SIP, which is not possible with PUF. The parameter that most influenced the PAS-derived air concentration (C_A) was the air-side mass transfer coefficient (kA), implying the necessity of spiking depuration chemicals (DCs), because this parameter is strongly related to meteorological conditions. Uncertainty in the partition coefficients (KPSM-A or KOA) influences the PAS-derived C_A to a greater extent for lower-KPSM-A chemicals. The PAS-derived C_A has an uncertainty range spanning from half to three times the calculated level. This work is expected to establish solid grounds for improving field-measurement techniques for HOCs.
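Under the linear-uptake assumption discussed above, a PAS-derived concentration follows directly from the sampling rate R (~1.2 m3/day here) and the deployment time; the helper below is a hypothetical sketch, not the study's code:

```python
R_M3_PER_DAY = 1.2   # PAS sampling rate from the study, corrected against the LAS

def pas_concentration(mass_ng, days):
    """TWA air concentration (ng/m3); valid only in the linear-uptake regime,
    before the sorbent approaches equilibrium with the air."""
    effective_volume_m3 = R_M3_PER_DAY * days   # effective air volume sampled
    return mass_ng / effective_volume_m3

# e.g. 90 ng of analyte accumulated over a 30-day deployment:
print(f"{pas_concentration(90.0, 30.0):.2f} ng/m3")  # 2.50 ng/m3
```

This is why the SIP's larger sorption capacity matters: it keeps the sampler in the linear regime for longer deployments, so the effective volume (and hence sensitivity to trace levels) keeps growing with time.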

  3. Determining GPS average performance metrics

    NASA Technical Reports Server (NTRS)

    Moore, G. V.

    1995-01-01

    Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego than over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.
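The latitude non-uniformity claimed above (GPS satellites far more often over Tierra del Fuego than over Hawaii) follows from orbit geometry: for a circular orbit of inclination i, the sub-satellite latitude is asin(sin i · sin u), with the argument of latitude u roughly uniform in time. A quick Monte-Carlo check, illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)
inc = np.radians(55.0)                       # GPS orbital inclination
u = rng.uniform(0, 2 * np.pi, 1_000_000)     # argument of latitude, ~uniform in time
lat = np.degrees(np.arcsin(np.sin(inc) * np.sin(u)))

# Compare dwell time near the inclination limit (Tierra del Fuego, ~54 S)
# with a band at Hawaii's latitude (~20 N).
frac_tdf = np.mean((np.abs(lat) > 50) & (np.abs(lat) <= 55))
frac_haw = np.mean((np.abs(lat) > 15) & (np.abs(lat) <= 20))
print(f"dwell ratio, 50-55 deg vs 15-20 deg: {frac_tdf / frac_haw:.1f}")  # ~3.3
```

The dwell density diverges (integrably) as the latitude approaches the inclination, which is why ground tracks pile up near ±55° and why user performance varies with latitude as the record describes.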

  4. Averaging of TNTC counts.

    PubMed Central

    Haas, C N; Heller, B

    1988-01-01

    When plate count methods are used for microbial enumeration, if too-numerous-to-count results occur, they are commonly discarded. In this paper, a method for consideration of such results in computation of an average microbial density is developed, and its use is illustrated by example. PMID:3178211
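The paper's estimator is not reproduced in this record; as a generic illustration of the idea (an assumed Poisson model with right-censoring at the countable limit, not necessarily the authors' formulation), TNTC plates can enter a maximum-likelihood estimate of microbial density instead of being discarded:

```python
import numpy as np
from math import lgamma, log

def pois_logpmf(k, lam):
    """Log of the Poisson pmf at integer count k with mean lam."""
    return k * log(lam) - lam - lgamma(k + 1)

def log_sf(limit, lam):
    """Log of P(X >= limit) for X ~ Poisson(lam), via the complement."""
    cdf = sum(np.exp(pois_logpmf(k, lam)) for k in range(limit))
    return log(max(1.0 - cdf, 1e-300))

counts = [210, 250]        # plates with countable results (1 mL aliquots, assumed)
n_tntc, limit = 1, 300     # one TNTC plate; assumed countable limit of 300

# Grid-search the log-likelihood over candidate densities (CFU/mL): countable
# plates contribute their pmf, the TNTC plate contributes P(count >= limit).
lams = np.arange(200.0, 320.0, 0.5)
ll = [sum(pois_logpmf(c, lam) for c in counts) + n_tntc * log_sf(limit, lam)
      for lam in lams]
lam_hat = float(lams[int(np.argmax(ll))])
print(f"ML density: {lam_hat:.1f} CFU/mL")   # pulled above the naive mean of 230
```

Discarding the TNTC plate would bias the average low; treating it as censored information pulls the estimate above the mean of the countable plates, which is the qualitative behaviour any such method should show.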

  5. Designing Digital Control Systems With Averaged Measurements

    NASA Technical Reports Server (NTRS)

    Polites, Michael E.; Beale, Guy O.

    1990-01-01

    Rational criteria represent improvement over "cut-and-try" approach. Recent development in theory of control systems yields improvements in mathematical modeling and design of digital feedback controllers using time-averaged measurements. By using one of new formulations for systems with time-averaged measurements, designer takes averaging effect into account when modeling plant, eliminating need to iterate design and simulation phases.
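One standard way to take the averaging effect into account when modeling the plant (a sketch under assumed details, not necessarily the formulation the brief refers to) is to augment the discrete-time state with delayed outputs, so that an N-sample averaged measurement becomes a linear output of the augmented model:

```python
import numpy as np

# Original discrete plant: x[k+1] = A x[k] + B u[k], instantaneous output y = C x.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
N = 3                                  # sensor reports the average of the last 3 outputs
n = A.shape[0]

# Augmented state z[k] = [x[k], y[k-1], y[k-2]]: a delay line of past outputs.
Aa = np.zeros((n + N - 1, n + N - 1))
Aa[:n, :n] = A
Aa[n, :n] = C[0]                       # newest stored output is y[k] = C x[k]
for i in range(1, N - 1):
    Aa[n + i, n + i - 1] = 1.0         # shift older outputs down the delay line
Ba = np.vstack([B, np.zeros((N - 1, 1))])
Ca = np.hstack([C / N, np.full((1, N - 1), 1.0 / N)])   # averaged measurement

# Check: the augmented model reproduces the directly computed moving average.
rng = np.random.default_rng(4)
u = rng.normal(size=20)
x = np.zeros((n, 1)); z = np.zeros((n + N - 1, 1))
ys, ms = [], []
for k in range(20):
    ys.append((C @ x).item())
    m_direct = np.mean(([0.0] * N + ys)[-N:])   # average of last N outputs (zero history)
    ms.append((m_direct, (Ca @ z).item()))
    x = A @ x + B * u[k]
    z = Aa @ z + Ba * u[k]
print(max(abs(d - a) for d, a in ms))           # agreement to rounding error
```

Designing the controller against (Aa, Ba, Ca) bakes the sensor's averaging dynamics into the model from the start, which is the "eliminate design/simulation iteration" benefit the brief describes.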

  6. Americans' Average Radiation Exposure

    SciTech Connect

    NA

    2000-08-11

    We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  7. Residual life assessment of the SSME/ATD HPOTP turnaround duct (TAD)

    NASA Technical Reports Server (NTRS)

    Gross, R. Steven

    1996-01-01

    This paper is concerned with the prediction of the low cycle thermal fatigue behavior of a component in a developmental (ATD) high pressure liquid oxygen turbopump (HPOTP) for the Space Shuttle Main Engine (SSME). This component is called the Turnaround Duct (TAD). The TAD is a complex single piece casting of MAR-M-247 material. Its function is to turn the hot turbine exhaust gas (a 1200 F hydrogen-rich gas stream) such that it can exhaust radially out of the turbopump. In very simple terms, the TAD consists of two rings connected axially by 22 hollow airfoil shaped struts, with the turning vanes placed at the top, middle, and bottom of each strut. The TAD is attached to the other components of the pump via bolts passing through 14 of the 22 struts. Of the remaining 8 struts, four are equally spaced (90 deg interval) and contain a cooling tube through which liquid hydrogen passes on its way to cool the shaft bearing assemblies. The remaining 4 struts are empty. One of the pump units in the certification test series was destructively examined after 22 test firings. Substantial axial cracking was found in two of the struts which contain cooling tubes. None of the other 20 struts showed any sign of internal cracking. This unusual low cycle thermal fatigue behavior within the two cooling tube struts is the focus of this study.

  8. Implications of quantum ambiguities in k =1 loop quantum cosmology: Distinct quantum turnarounds and the super-Planckian regime

    NASA Astrophysics Data System (ADS)

    Dupuy, John L.; Singh, Parampreet

    2017-01-01

    The spatially closed Friedmann-Lemaître-Robertson-Walker model in loop quantum cosmology admits two inequivalent consistent quantizations: one based on expressing the field strength in terms of the holonomies over closed loops and another using a connection operator and open holonomies. Using the effective dynamics, we investigate the phenomenological differences between the two quantizations for the single-fluid and the two-fluid scenarios with various equations of state, including phantom matter. We show that a striking difference between the two quantizations is the existence of two distinct quantum turnarounds, either bounces or recollapses, in the connection quantization, in contrast to a single distinct quantum bounce or a recollapse in the holonomy quantization. These results generalize an earlier result on the existence of two distinct quantum bounces for stiff matter by Corichi and Karami. However, we find that in certain situations two distinct quantum turnarounds can become virtually indistinguishable. And depending on the initial conditions, a pure quantum cyclic universe can also exist undergoing a quantum bounce and a quantum recollapse. We show that for various equations of state, connection-based quantization leads to super-Planckian values of the energy density and the expansion scalar at quantum turnarounds. Interestingly, we find that very extreme energy densities can also occur for the holonomy quantization, breaching the maximum allowed density in the spatially flat loop quantized model. However, the expansion scalar in all these cases is bounded by a universal value.

  9. Dynamic Multiscale Averaging (DMA) of Turbulent Flow

    SciTech Connect

    Richard W. Johnson

    2012-09-01

    A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
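The "running time averaging" applied during the DNS stage reduces to a constant-memory, sample-by-sample update; a minimal illustrative sketch (not the DMA implementation itself):

```python
def running_average(avg, n, x):
    """One step of a running time average,
    avg_n = avg_{n-1} + (x_n - avg_{n-1}) / n,
    so the mean is updated per sample without storing the history."""
    return avg + (x - avg) / n

# Fold four samples; the result equals their arithmetic mean.
mean = 0.0
for n, x in enumerate([1.0, 2.0, 3.0, 4.0], start=1):
    mean = running_average(mean, n, x)
```

The same update applied independently to each flow-field variable gives the time-averaged fields that are then volume averaged onto the coarser grid.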

  10. DIGIT-PHYSICS: Digits Are Bosons Are Quanta Because (On Average) Quanta and Bosons Are and Always Were Digits!!! DIGITS?: For a Very Long Time Giving Us All The FINGER!!!

    NASA Astrophysics Data System (ADS)

    Siegel, Edward Carl-Ludwig; Newcomb, Simon; Strutt-Rayleigh, John William; Poincare, Henri; Weyl, Hermann; Benford, Frederick; Antonoff, Marvin

    2015-03-01

    DIGIT-PHYSICS: DIGITS?: For a Very Long Time Giving Us All The FINGER!!!: CONTRA Wigner,``On the Unreasonable Effectiveness of Physics in Mathematics!'' A Surprise in Theoretical/Experimental Physics and/or Ostensibly Pure-Mathematics: PHYSICS: Quantum-Mechanics/Statistical-Mechanics. DIGITS-LAW(S); DIGITS' ostensibly ``pure-mathematics'' 1:1-map onto the QUANTUM!!! [Google:``http://www.benfordonline.net/ list/ chronological'']: Newcomb[Am.J.Math.4,39(1881)]-Poincare[Calcul des Probabilité(1912)]-Weyl[Math.Ann., 77, 313(1916)]-Benford[J.Am.Phil Soc,78,115 (1938)]-..-Antonoff/Siegel[AMS Joint-Mtg.,San Diego(2002)-abs.# 973-60-124] empirical inter-digit law (on ANY/ALL averages) P(d) = log10(1 + 1/d) = log10([d+1]/d), which upon algebraic inversion is d = 1/[10^P - 1] = 1/[e^(2.303...P) - 1], a Bose-Einstein-like occupation 1/[e^⟨ω⟩ - 1]: Digits Are Bosons Are Quanta Because (On Average) Quanta and Bosons Are and Always Were Digits!!! (Ex: atom energy-levels numbering: 0,...,9) ANY/ALL QUANTUM-physics[Planck(1901)-Einstein(1905)-Bose(1924)-Einstein(1925)-vs.Fermi(1927)-Dirac(1927)-...] is and always was Newcomb(1881) DIGIT-physics!!!
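Stripped of the typography, the quoted Newcomb-Benford inter-digit law and its algebraic inversion are easy to verify numerically:

```python
import math

# Newcomb-Benford first-digit law: P(d) = log10(1 + 1/d) for digit d
probs = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def invert(p):
    """The algebraic inversion quoted in the abstract: d = 1/(10**P - 1)."""
    return 1 / (10 ** p - 1)
```

The nine probabilities telescope to log10(10) = 1, and inverting any P(d) recovers the digit d.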

  11. Dynamic Shapes Average

    DTIC Science & Technology

    2005-01-01

    might not be time-aligned (e.g., due to different growth rates in medical applications and different motion speeds in gait analysis ). The dynamic...rates across speakers. It has also been used by a number of authors for gait analysis , but limited to the 1D path obtained by the tracking of partic

  12. Determination of hydrologic properties needed to calculate average linear velocity and travel time of ground water in the principal aquifer underlying the southeastern part of Salt Lake Valley, Utah

    USGS Publications Warehouse

    Freethey, G.W.; Spangler, L.E.; Monheiser, W.J.

    1994-01-01

    A 48-square-mile area in the southeastern part of the Salt Lake Valley, Utah, was studied to determine if generalized information obtained from geologic maps, water-level maps, and drillers' logs could be used to estimate hydraulic conductivity, porosity, and slope of the potentiometric surface: the three properties needed to calculate average linear velocity of ground water. Estimated values of these properties could be used by water-management and regulatory agencies to compute values of average linear velocity, which could be further used to estimate travel time of ground water along selected flow lines, and thus to determine wellhead protection areas around public-supply wells. The methods used to estimate the three properties are based on assumptions about the drillers' descriptions, the depositional history of the sediments, and the boundary conditions of the hydrologic system. These assumptions were based on geologic and hydrologic information determined from previous investigations. The reliability of the estimated values for hydrologic properties and average linear velocity depends on the accuracy of these assumptions. Hydraulic conductivity of the principal aquifer was estimated by calculating the thickness-weighted average of values assigned to different drillers' descriptions of material penetrated during the construction of 98 wells. Using these 98 control points, the study area was divided into zones representing approximate hydraulic-conductivity values of 20, 60, 100, 140, 180, 220, and 250 feet per day. This range of values is about the same range used in developing a ground-water flow model of the principal aquifer in the early 1980s. Porosity of the principal aquifer was estimated by compiling the range of porosity values determined or estimated during previous investigations of basin-fill sediments, and then using five different values ranging from 15 to 35 percent to delineate zones in the study area that were assumed to
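Given the three properties the report estimates, average linear velocity and travel time follow from Darcy's law, v = K·i/n. In the sketch below, the 100 ft/day conductivity is one of the report's zone values, while the slope, porosity, and flow-line length are illustrative assumptions:

```python
def average_linear_velocity(K_ft_per_day, gradient, porosity):
    """Average linear (seepage) velocity of ground water, v = K*i/n:
    hydraulic conductivity K, potentiometric-surface slope i, porosity n."""
    return K_ft_per_day * gradient / porosity

def travel_time_days(distance_ft, K_ft_per_day, gradient, porosity):
    """Travel time along a flow line of the given length."""
    return distance_ft / average_linear_velocity(K_ft_per_day, gradient, porosity)

# 100 ft/day zone, assumed slope of 0.005 and porosity of 25%,
# over an assumed 1-mile flow line to a public-supply well:
v = average_linear_velocity(100, 0.005, 0.25)   # 2.0 ft/day
t = travel_time_days(5280, 100, 0.005, 0.25)    # 2640 days
```

A wellhead-protection analysis would repeat this along each flow line, using the conductivity and porosity zone that the line crosses.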

  13. Synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON): A statistical model based iterative image reconstruction method to eliminate limited-view artifacts and to mitigate the temporal-average artifacts in time-resolved CT

    PubMed Central

    Chen, Guang-Hong; Li, Yinsheng

    2015-01-01

    Purpose: In x-ray computed tomography (CT), a violation of the Tuy data sufficiency condition leads to limited-view artifacts. In some applications, it is desirable to use data corresponding to a narrow temporal window to reconstruct images with reduced temporal-average artifacts. However, the need to reduce temporal-average artifacts in practice may result in a violation of the Tuy condition and thus undesirable limited-view artifacts. In this paper, the authors present a new iterative reconstruction method, synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON), to eliminate limited-view artifacts using data acquired within an ultranarrow temporal window that severely violates the Tuy condition. Methods: In time-resolved contrast enhanced CT acquisitions, image contrast dynamically changes during data acquisition. Each image reconstructed from data acquired in a given temporal window represents one time frame and can be denoted as an image vector. Conventionally, each individual time frame is reconstructed independently. In this paper, all image frames are grouped into a spatial–temporal image matrix and are reconstructed together. Rather than the spatial and/or temporal smoothing regularizers commonly used in iterative image reconstruction, the nuclear norm of the spatial–temporal image matrix is used in SMART-RECON to regularize the reconstruction of all image time frames. This regularizer exploits the low-dimensional structure of the spatial–temporal image matrix to mitigate limited-view artifacts when an ultranarrow temporal window is desired in some applications to reduce temporal-average artifacts. Both numerical simulations in two dimensional image slices with known ground truth and in vivo human subject data acquired in a contrast enhanced cone beam CT exam have been used to validate the proposed SMART-RECON algorithm and to demonstrate the initial performance of the algorithm. Reconstruction errors and temporal fidelity
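The nuclear-norm regularizer used by SMART-RECON has a standard proximal operator, singular-value soft-thresholding; the sketch below shows only that generic building block (using numpy), not the authors' full reconstruction algorithm:

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: the proximal operator of the
    nuclear norm tau*||X||_*, shrinking every singular value by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# A rank-1 "spatial-temporal image matrix" (columns = time frames)
# plus small noise; thresholding suppresses the noise-only modes.
rng = np.random.default_rng(0)
frames = np.outer(np.arange(4.0), np.ones(3)) + 0.01 * rng.standard_normal((4, 3))
shrunk = svt(frames, tau=0.05)
```

Iterative reconstruction alternates a step like this with a data-consistency step against the measured projections, which is what exploits the low-dimensional structure of the frame matrix.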

  14. Time-averaged discharge rate of subaerial lava at Kīlauea Volcano, Hawai`i, measured from TanDEM-X interferometry: Implications for magma supply and storage during 2011-2013

    NASA Astrophysics Data System (ADS)

    Poland, Michael P.

    2014-07-01

    Differencing digital elevation models (DEMs) derived from TerraSAR add-on for Digital Elevation Measurements (TanDEM-X) synthetic aperture radar imagery provides a measurement of elevation change over time. On the East Rift Zone (ERZ) of Kīlauea Volcano, Hawai`i, the effusion of lava causes changes in topography. When these elevation changes are summed over the area of an active lava flow, it is possible to quantify the volume of lava emplaced at the surface during the time spanned by the TanDEM-X data—a parameter that can be difficult to measure across the entirety of an ~100 km2 lava flow field using ground-based techniques or optical remote sensing data. Based on the differences between multiple TanDEM-X-derived DEMs collected days to weeks apart, the mean dense-rock equivalent time-averaged discharge rate of lava at Kīlauea between mid-2011 and mid-2013 was approximately 2 m3/s, which is about half the long-term average rate over the course of Kīlauea's 1983-present ERZ eruption. This result implies that there was an increase in the proportion of lava stored versus erupted, a decrease in the rate of magma supply to the volcano, or some combination of both during this time period. In addition to constraining the time-averaged discharge rate of lava and the rates of magma supply and storage, topographic change maps derived from space-based TanDEM-X data provide insights into the four-dimensional evolution of Kīlauea's ERZ lava flow field. TanDEM-X data are a valuable complement to other space-, air-, and ground-based observations of eruptive activity at Kīlauea and offer great promise at locations around the world for aiding with monitoring not just volcanic eruptions but any hazardous activity that results in surface change, including landslides, floods, earthquakes, and other natural and anthropogenic processes.
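Once per-cell elevation changes are in hand, the volume-summing step described above is simple arithmetic; a toy sketch (the grid values, cell size, and elapsed time are invented here, and the dense-rock-equivalent vesicularity correction is omitted):

```python
def time_averaged_discharge(dem_old, dem_new, cell_area_m2, dt_s):
    """Time-averaged discharge rate (m^3/s) from two DEMs of the active
    flow: sum per-cell elevation change times cell area, over elapsed time."""
    volume_m3 = sum(
        (new - old) * cell_area_m2
        for row_old, row_new in zip(dem_old, dem_new)
        for old, new in zip(row_old, row_new)
    )
    return volume_m3 / dt_s

# Toy 2x2 grid: each 10 m x 10 m cell thickens by 1 m over 200 s.
q = time_averaged_discharge([[0, 0], [0, 0]], [[1, 1], [1, 1]], 100.0, 200.0)
```

In practice the summation would be masked to the active-flow area so that DEM noise outside the flow does not contaminate the volume.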

  15. Time-averaged discharge rate of subaerial lava at Kīlauea Volcano, Hawai‘i, measured from TanDEM-X interferometry: Implications for magma supply and storage during 2011-2013

    USGS Publications Warehouse

    Poland, Michael P.

    2014-01-01

    Differencing digital elevation models (DEMs) derived from TerraSAR add-on for Digital Elevation Measurements (TanDEM-X) synthetic aperture radar imagery provides a measurement of elevation change over time. On the East Rift Zone (ERZ) of Kīlauea Volcano, Hawai‘i, the effusion of lava causes changes in topography. When these elevation changes are summed over the area of an active lava flow, it is possible to quantify the volume of lava emplaced at the surface during the time spanned by the TanDEM-X data—a parameter that can be difficult to measure across the entirety of an ~100 km2 lava flow field using ground-based techniques or optical remote sensing data. Based on the differences between multiple TanDEM-X-derived DEMs collected days to weeks apart, the mean dense-rock equivalent time-averaged discharge rate of lava at Kīlauea between mid-2011 and mid-2013 was approximately 2 m3/s, which is about half the long-term average rate over the course of Kīlauea's 1983–present ERZ eruption. This result implies that there was an increase in the proportion of lava stored versus erupted, a decrease in the rate of magma supply to the volcano, or some combination of both during this time period. In addition to constraining the time-averaged discharge rate of lava and the rates of magma supply and storage, topographic change maps derived from space-based TanDEM-X data provide insights into the four-dimensional evolution of Kīlauea's ERZ lava flow field. TanDEM-X data are a valuable complement to other space-, air-, and ground-based observations of eruptive activity at Kīlauea and offer great promise at locations around the world for aiding with monitoring not just volcanic eruptions but any hazardous activity that results in surface change, including landslides, floods, earthquakes, and other natural and anthropogenic processes.

  16. Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average

    ERIC Educational Resources Information Center

    DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.

    2007-01-01

    Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…

  17. Averaging Robertson-Walker cosmologies

    SciTech Connect

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane E-mail: G.Robbers@thphys.uni-heidelberg.de

    2009-04-15

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.

  18. Average-cost based robust structural control

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.

    1993-01-01

    A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.

  19. Numerical computation of aerodynamics and heat transfer in a turbine cascade and a turn-around duct using advanced turbulence models

    NASA Technical Reports Server (NTRS)

    Lakshminarayana, B.; Luo, J.

    1993-01-01

    The objective of this research is to develop turbulence models to predict the flow and heat transfer fields dominated by the curvature effect, such as those encountered in turbine cascades and turn-around ducts. A Navier-Stokes code has been developed using an explicit Runge-Kutta method with a two-layer k-epsilon/ARSM (Algebraic Reynolds Stress Model), Chien's Low Reynolds Number (LRN) k-epsilon model and Coakley's LRN q-omega model. The near-wall pressure strain correlation term was included in the ARSM. The formulation is applied to Favre-averaged N-S equations and no thin-layer approximations are made in either the mean flow or turbulence transport equations. Anisotropic scaling of artificial dissipation terms was used. A locally variable timestep was also used to improve convergence. Detailed comparisons were made between computations and data measured in a turbine cascade by Arts et al. at the von Karman Institute. The surface pressure distributions and wake profiles were predicted well by all the models. The blade heat transfer is predicted well by the k-epsilon/ARSM model, as well as by the k-epsilon model. It is found that the onset of boundary layer transition on both surfaces is highly dependent upon the level of local freestream turbulence intensity, which is strongly influenced by the streamline curvature. A detailed computation of the flow in the turn-around duct has been carried out and validated against the data of Monson as well as Sandborn. The computed results at various streamwise locations both on the concave and convex sides are compared with flow and turbulence data, including the separation zone on the inner wall. The k-epsilon/ARSM model yielded relatively better results than the two-equation turbulence models. A detailed assessment of the turbulence models has been made with regard to their applicability to curved flows.

  20. Do time-averaged, whole-building, effective volatile organic compound (VOC) emissions depend on the air exchange rate? A statistical analysis of trends for 46 VOCs in U.S. offices.

    PubMed

    Rackes, A; Waring, M S

    2016-08-01

    We used existing data to develop distributions of time-averaged air exchange rates (AER), whole-building 'effective' emission rates of volatile organic compounds (VOC), and other variables for use in Monte Carlo analyses of U.S. offices. With these, we explored whether long-term VOC emission rates were related to the AER over the sector, as has been observed in the short term for some VOCs in single buildings. We fit and compared two statistical models to the data. In the independent emissions model (IEM), emissions were unaffected by other variables, while in the dependent emissions model (DEM), emissions responded to the AER via coupling through a conceptual boundary layer between the air and a lumped emission source. For 20 of 46 VOCs, the DEM was preferable to the IEM and emission rates, though variable, were higher in buildings with higher AERs. Most oxygenated VOCs and some alkanes were well fit by the DEM, while nearly all aromatics and halocarbons were independent. Trends by vapor pressure suggested multiple mechanisms could be involved. The factors of temperature, relative humidity, and building age were almost never associated with effective emission rates. Our findings suggest that effective emissions in real commercial buildings will be difficult to predict from deterministic experiments or models.

  1. Practical applications of time-averaged restrained molecular dynamics to ligand-receptor systems: FK506 bound to the Q50R,A95H,K98I triple mutant of FKBP-13.

    PubMed

    Lepre, C A; Pearlman, D A; Futer, O; Livingston, D J; Moore, J M

    1996-07-01

    The ability of time-averaged restrained molecular dynamics (TARMD) to escape local low-energy conformations and explore conformational space is compared with conventional simulated-annealing methods. Practical suggestions are offered for performing TARMD calculations with ligand-receptor systems, and are illustrated for the complex of the immunosuppressant FK506 bound to Q50R,A95H,K98I triple mutant FKBP-13. The structure of (13)C-labeled FK506 bound to triple-mutant FKBP-13 was determined using a set of 87 NOE distance restraints derived from HSQC-NOESY experiments. TARMD was found to be superior to conventional simulated-annealing methods, and produced structures that were conformationally similar to FK506 bound to wild-type FKBP-12. The individual and combined effects of varying the NOE restraint force constant, using an explicit model for the protein binding pocket, and starting the calculations from different ligand conformations were explored in detail.

  2. Light propagation in the averaged universe

    SciTech Connect

    Bagheri, Samae; Schwarz, Dominik J. E-mail: dschwarz@physik.uni-bielefeld.de

    2014-10-01

    Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.

  3. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  4. Average-passage flow model development

    NASA Technical Reports Server (NTRS)

    Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark

    1989-01-01

    A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average-passage flow model describes the time-averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations have been executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average-passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.

  5. The Average of Rates and the Average Rate.

    ERIC Educational Resources Information Center

    Lindstrom, Peter

    1988-01-01

    Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
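The article's central distinction, that the average rate over equal distances is the harmonic (not arithmetic) mean of the rates, can be checked in a few lines (a generic sketch of the standard speed example, not the article's own problems):

```python
def harmonic_mean(values):
    """Harmonic mean, n / sum(1/x): the correct 'average rate' when the
    same distance (not the same time) is covered at each rate."""
    return len(values) / sum(1 / v for v in values)

# Driving the same distance at 30 mph and then at 60 mph:
avg_speed = harmonic_mean([30, 60])  # 40 mph, not the arithmetic-mean 45
```

When the legs cover unequal distances, the weighted harmonic mean (weights proportional to distance) applies instead.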

  6. District Support Systems for the Alignment of Curriculum, Instruction, and Assessment: Can We Predict Student Achievement in Reading and Writing for School Turnaround?

    ERIC Educational Resources Information Center

    Abbott, Laura Lynn Tanner

    2014-01-01

    The purpose of this quantitative non-experimental predictive study was to determine if CIA alignment factors and related district support systems are associated with student achievement to enable the turnaround of schools in crisis. This study aimed to utilize the District Snapshot Tool to determine if the district systems that support CIA…

  7. High average power pockels cell

    DOEpatents

    Daly, Thomas P.

    1991-01-01

    A high average power pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.

  8. Average power meter for laser radiation

    NASA Astrophysics Data System (ADS)

    Shevnina, Elena I.; Maraev, Anton A.; Ishanin, Gennady G.

    2016-04-01

    Advanced metrology equipment, in particular an average power meter for laser radiation, is necessary for the effective use of laser technology. In this paper we propose a measurement scheme with periodic scanning of a laser beam. The scheme is implemented in a pass-through average power meter that can continuously monitor the laser during operation in pulsed or continuous-wave mode without interrupting that operation. The detector used in the device is based on the thermoelastic effect in crystalline quartz, chosen for its fast response, long-term stability of sensitivity, and nearly uniform sensitivity across wavelengths.

  9. Reliable and sensitive detection of fragile X (expanded) alleles in clinical prenatal DNA samples with a fast turnaround time.

    PubMed

    Seneca, Sara; Lissens, Willy; Endels, Kristof; Caljon, Ben; Bonduelle, Maryse; Keymolen, Kathleen; De Rademaeker, Marjan; Ullmann, Urielle; Haentjens, Patrick; Van Berkel, Kim; Van Dooren, Sonia

    2012-11-01

    This study evaluated a large set of blinded, previously analyzed prenatal DNA samples with a novel, CGG triplet-repeat primed (TP)-PCR assay (Amplidex FMR1 PCR Kit; Asuragen, Austin, TX). This cohort of 67 fetal DNAs contained 18 full mutations (270 to 1100 repeats, including 1 mosaic), 12 premutations (59 to 150 repeats), 9 intermediate mutations (54 to 58 repeats), and 28 normal samples (17 to 50 repeats, including 3 homozygous female samples). TP-PCR accurately identified FMR1 genotypes, ranging from normal to full-mutation alleles, with a 100% specificity (95% CI, 85.0% to 100%) and a 97.4% sensitivity (95% CI, 84.9% to 99.9%) in comparison with Southern blot analysis results. Exact sizing was possible for a spectrum of normal, intermediate, and premutation (up to 150 repeats) alleles, but CGG repeat numbers >200 are only identified as full mutations. All homozygous alleles were correctly resolved. The assay is also able to reproducibly detect a 2.5% premutation and a 3% full-mutation mosaicism in a normal male background, but a large premutation in a male full-mutation background was masked when the amount of the latter was >5%. Implementation of this TP-PCR will significantly reduce reflex testing using Southern blot analyses. Additional testing with methylation-informative techniques might still be needed for a few cases with (large) premutations or full mutations.

  10. Evaluations of average level spacings

    SciTech Connect

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of ¹⁶⁸Er data. 19 figures, 2 tables.

  11. Vibrational averages along thermal lines

    NASA Astrophysics Data System (ADS)

    Monserrat, Bartomeu

    2016-01-01

    A method is proposed for the calculation of vibrational quantum and thermal expectation values of physical properties from first principles. Thermal lines are introduced: these are lines in configuration space parametrized by temperature, such that the value of any physical property along them is approximately equal to the vibrational average of that property. The number of sampling points needed to explore the vibrational phase space is reduced by up to an order of magnitude when the full vibrational density is replaced by thermal lines. Calculations of the vibrational averages of several properties and systems are reported, namely, the internal energy and the electronic band gap of diamond and silicon, and the chemical shielding tensor of L-alanine. Thermal lines pave the way for complex calculations of vibrational averages, including large systems and methods beyond semilocal density functional theory.

  12. Large Eddy/Reynolds-Averaged Navier-Stokes Simulations of CUBRC Base Heating Experiments

    NASA Technical Reports Server (NTRS)

    Salazar, Giovanni; Edwards, Jack R.; Amar, Adam J.

    2012-01-01

    Even with great advances in computational techniques and computing power during recent decades, the modeling of unsteady separated flows, such as those encountered in the wake of a re-entry vehicle, continues to be one of the most challenging problems in CFD. Of most interest to the aerothermodynamics community is accurately predicting transient heating loads on the base of a blunt body, which would result in reduced uncertainties and safety margins when designing a re-entry vehicle. However, the prediction of heat transfer can vary widely depending on the turbulence model employed. Therefore, selecting a turbulence model which realistically captures as much of the flow physics as possible will result in improved results. Reynolds-Averaged Navier-Stokes (RANS) models have become increasingly popular due to their good performance with attached flows, and the relatively quick turnaround time to obtain results. However, RANS methods cannot accurately simulate unsteady separated wake flows, and running direct numerical simulation (DNS) on such complex flows is currently too computationally expensive. Large Eddy Simulation (LES) techniques allow for the computation of the large eddies, which contain most of the Reynolds stress, while modeling the smaller (subgrid) eddies. This results in models which are more computationally expensive than RANS methods, but not as prohibitive as DNS. By complementing an LES approach with a RANS model, a hybrid LES/RANS method resolves the larger turbulent scales away from surfaces with LES, and switches to a RANS model inside boundary layers. As pointed out by Bertin et al., this type of hybrid approach has shown a lot of promise for predicting turbulent flows, but work is needed to verify that these models work well in hypersonic flows. The very limited amounts of flight and experimental data available present an additional challenge for researchers.
Recently, a joint study by NASA and CUBRC has focused on collecting heat transfer data

  13. Searching for the Beginning of the Ozone Turnaround Using a 22-Year Merged Satellite Data Set

    NASA Technical Reports Server (NTRS)

    Stolarski, Richard S.; Meeson, Blanche W. (Technical Monitor)

    2001-01-01

    We have used the data from six satellite instruments that measure the total column amount of ozone to construct a consistent merged data set extending from late 1978 into 2000. The keys to constructing a merged data set are to minimize potential drift of individual instruments and to accurately establish instrument-to-instrument offsets. We have used the short-wavelength D-pair measurements (306nm-313nm) of the SBUV and SBUV/2 instruments near the equator to establish a relatively drift-free record for these instruments. We have then used their overlap with the Nimbus 7 and EP TOMS instruments to establish the relative calibration of the various instruments. We have evaluated the drift uncertainty in our merged ozone data (MOD) set by examining both the individual instrument drift uncertainty and the uncertainty in establishing the instrument-to-instrument differences. We conclude that the instrumental drift uncertainty over the 22-year data record is 0.9 %/decade (2-sigma). We have compared our MOD record with 37 ground stations that have a continuous record over that time period. We have a mean drift with respect to the stations of +0.3 %/decade, which is within 1-sigma of our uncertainty estimate. Using the satellite record as a transfer standard, we can estimate the capability of the ground instruments to establish satellite calibration. Adding the statistical variability of the station drifts with respect to the satellite to an estimate of the overall drift uncertainty of the world standard instrument, we conclude that the stations should be able to be used to establish the drift of the satellite data record to within an uncertainty of 0.6 %/decade (2-sigma). Adding to this an uncertainty due to the incomplete global coverage of the stations, we conclude that the station data should be able to establish the global trend with an uncertainty of about 0.7 %/decade, slightly better than for the satellite record.
We conclude that merging the two records together
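The overlap-based cross-calibration step described above can be illustrated with a toy sketch (a generic illustration, not the authors' actual MOD procedure): estimate the instrument-to-instrument offset from the common period, remove it, then combine the two records.

```python
import numpy as np

def merge_records(t1, y1, t2, y2):
    # Align instrument 2 to instrument 1 using their overlap period:
    # subtract the mean instrument-to-instrument offset, then average
    # where both report and concatenate where only one does.
    overlap = np.intersect1d(t1, t2)
    off = np.mean(y2[np.isin(t2, overlap)] - y1[np.isin(t1, overlap)])
    y2_adj = y2 - off
    t = np.union1d(t1, t2)
    merged = []
    for ti in t:
        vals = []
        if ti in t1:
            vals.append(y1[t1 == ti][0])
        if ti in t2:
            vals.append(y2_adj[t2 == ti][0])
        merged.append(np.mean(vals))
    return t, np.array(merged)

# Two synthetic records with a 5-unit calibration offset and a 3-point overlap.
t1 = np.arange(0, 10); y1 = np.full(10, 300.0)
t2 = np.arange(7, 17); y2 = np.full(10, 305.0)
t, y = merge_records(t1, y1, t2, y2)
print(y[-1])  # 300.0 after offset removal
```

The real analysis additionally has to budget for drift in each instrument over time, which is why the abstract quotes a per-decade drift uncertainty rather than a single offset error.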

  14. Instrument to average 100 data sets

    NASA Technical Reports Server (NTRS)

    Tuma, G. B.; Birchenough, A. G.; Rice, W. J.

    1977-01-01

    An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points which are displayed on an oscilloscope screen to facilitate recording and is available in real time. Input can be any parameter which is expressed as a + or - 10-volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. This instrument has been used successfully on a 1975 Chevrolet V8 engine, and on a Continental 6-cylinder aircraft engine. While this instrument was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
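The core operation, averaging a cyclic waveform over many cycles onto a fixed 2048-point grid, can be sketched as a software analogue of the instrument (a hedged illustration with a synthetic signal, not the original hardware design):

```python
import numpy as np

def cycle_average(signal, cycles, points=2048):
    # Split the record into equal-length cycles, resample each onto a
    # common grid of `points` samples, and average point by point.
    per_cycle = len(signal) // cycles
    grid = np.linspace(0, per_cycle - 1, points)
    resampled = [
        np.interp(grid, np.arange(per_cycle),
                  signal[i * per_cycle:(i + 1) * per_cycle])
        for i in range(cycles)
    ]
    return np.mean(resampled, axis=0)

# 100 noisy cycles of a sine wave; averaging suppresses the cycle-to-cycle noise.
rng = np.random.default_rng(0)
one_cycle = np.sin(np.linspace(0, 2 * np.pi, 500))
record = np.tile(one_cycle, 100) + rng.normal(0, 0.3, 500 * 100)
avg = cycle_average(record, 100)
print(avg.shape)  # (2048,)
```

Averaging N independent cycles reduces the noise standard deviation by a factor of sqrt(N), which is the same benefit the hardware averager provides.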

  15. Polyhedral Painting with Group Averaging

    ERIC Educational Resources Information Center

    Farris, Frank A.; Tsao, Ryan

    2016-01-01

    The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…

  16. Averaged controllability of parameter dependent conservative semigroups

    NASA Astrophysics Data System (ADS)

    Lohéac, Jérôme; Zuazua, Enrique

    2017-02-01

    We consider the problem of averaged controllability for parameter depending (either in a discrete or continuous fashion) control systems, the aim being to find a control, independent of the unknown parameters, so that the average of the states is controlled. We do it in the context of conservative models, both in an abstract setting and also analysing the specific examples of the wave and Schrödinger equations. Our first result is of perturbative nature. Assuming the averaging probability measure to be a small parameter-dependent perturbation (in a sense that we make precise) of an atomic measure given by a Dirac mass corresponding to a specific realisation of the system, we show that the averaged controllability property is achieved whenever the system corresponding to the support of the Dirac is controllable. Similar tools can be employed to obtain averaged versions of the so-called Ingham inequalities. Particular attention is devoted to the 1d wave equation in which the time-periodicity of solutions can be exploited to obtain more precise results, provided the parameters involved satisfy Diophantine conditions ensuring the lack of resonances.
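In symbols (our paraphrase of the setting, with μ the averaging probability measure over the parameter set 𝒜), the goal is a single parameter-independent control u steering the average of the states to a prescribed target:

```latex
\exists\, u \in L^{2}(0,T;U) \quad \text{such that} \quad
\int_{\mathcal{A}} x(T;\nu)\, \mathrm{d}\mu(\nu) \;=\; x_{T},
```

where x(t; ν) solves the ν-dependent system driven by the same control u for every realisation of the parameter.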

  17. IgG/anti-IgG immunoassay based on a turn-around point long period grating

    NASA Astrophysics Data System (ADS)

    Chiavaioli, F.; Biswas, P.; Trono, C.; Giannetti, A.; Tombelli, S.; Bandyopadhyay, S.; Basumallick, N.; Dasgupta, K.; Baldini, F.

    2014-02-01

    Long period fiber gratings (LPFGs) have been proposed as label-free optical biosensors for a few years. Refractive index changes, which modify the fiber transmission spectrum, are used for evaluating a biochemical interaction that occurs along the grating region. A turn-around point (TAP) LPFG was manufactured for enhancing the refractive index sensitivity of these devices. Given its simplicity and speed compared with the silanization procedure, the functionalization of the fiber was carried out with Eudragit L100 copolymer. An IgG/anti-IgG immunoassay was implemented for studying the antigen/antibody interaction. A limit of detection lower than 100 μg L-1 was achieved. Based on the same model assay, we compared the resonance wavelength shifts during the injection of 10 mg L-1 anti-IgG antigen between the TAP LPFG and a standard non-TAP one, in which the coupling occurs with a lower order cladding mode, as a performance improvement of the LPFG-based biosensors.

  18. Interlibrary Loan Time and Motion Study, Colorado Western Slope.

    ERIC Educational Resources Information Center

    Thomas, Sharon D.

    This report, which investigates turnaround time for interlibrary loans, presents a 1-month study of the interlibrary loan (ILL) process operating in the Western Slope areas of Colorado during 1980. It comprises introductory material presenting the importance, scope and limitations of the study, problem statement, hypothesis and term definitions; a…

  19. Kuss Middle School: Expanding Time to Accelerate School Improvement

    ERIC Educational Resources Information Center

    Massachusetts 2020, 2012

    2012-01-01

    In 2004, Kuss Middle School became the first school declared "Chronically Underperforming" by the state of Massachusetts. But by 2010, Kuss had transformed itself into a model for schools around the country seeking a comprehensive turnaround strategy. Kuss is using increased learning time as the primary catalyst to accelerate learning,…

  20. Achronal averaged null energy condition

    SciTech Connect

    Graham, Noah; Olum, Ken D.

    2007-09-15

    The averaged null energy condition (ANEC) requires that the integral over a complete null geodesic of the stress-energy tensor projected onto the geodesic tangent vector is never negative. This condition is sufficient to prove many important theorems in general relativity, but it is violated by quantum fields in curved spacetime. However there is a weaker condition, which is free of known violations, requiring only that there is no self-consistent spacetime in semiclassical gravity in which ANEC is violated on a complete, achronal null geodesic. We indicate why such a condition might be expected to hold and show that it is sufficient to rule out closed timelike curves and wormholes connecting different asymptotically flat regions.
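The condition under review can be stated compactly: along a complete null geodesic with affine parameter λ and tangent vector k^μ,

```latex
\int_{-\infty}^{+\infty} T_{\mu\nu}\, k^{\mu} k^{\nu}\; \mathrm{d}\lambda \;\geq\; 0 .
```

The achronal variant discussed in the abstract demands this only on achronal null geodesics (those containing no two timelike-separated points), which is the class for which no quantum-field violations are known.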

  1. Wavelet analysis of paleomagnetic data: 1. Characteristic average times (5-10 kyr) of variations in the geomagnetic field during and immediately before and after the Early Jaramillo reversal (Western Turkmenistan)

    NASA Astrophysics Data System (ADS)

    Gurarii, G. Z.; Aleksyutin, M. V.; Ataev, N.

    2007-10-01

    Joint wavelet analysis of complete and downsampled series of paleomagnetic and petromagnetic characteristics of rocks in the Matuyama-Jaramillo transitional zone in the Adzhidere section is used to extract paleomagnetic data whose variations are associated with the geomagnetic field alone and data correlating with variations in petromagnetic parameters. It is supposed that this correlation can be caused by an external factor affecting weak variations in the magnetic field and climatic changes reflected in the composition and amount of the ferromagnetic fraction in rocks. Preliminary data are obtained for the characteristic times of field variations at the time of accumulation of rocks in the transitional zone.

  2. Averaging on Earth-Crossing Orbits

    NASA Astrophysics Data System (ADS)

    Gronchi, G. F.; Milani, A.

    The orbits of planet-crossing asteroids (and comets) can undergo close approaches and collisions with some major planet. This introduces a singularity in the N-body Hamiltonian, and the averaging of the equations of motion, traditionally used to compute secular perturbations, is undefined. We show that it is possible to define in a rigorous way some generalised averaged equations of motion, in such a way that the generalised solutions are unique and piecewise smooth. This is obtained, both in the planar and in the three-dimensional case, by means of the method of extraction of the singularities by Kantorovich. The modified distance used to approximate the singularity is the one used by Wetherill in his method to compute probability of collision. Some examples of averaged dynamics have been computed; a systematic exploration of the averaged phase space to locate the secular resonances should be the next step. `Alice sighed wearily. ``I think you might do something better with the time'' she said, ``than waste it asking riddles with no answers'' (Alice in Wonderland, L. Carroll)

  3. Building Turnaround Capacity for Urban School Improvement: The Role of Adaptive Leadership and Defined Autonomy

    ERIC Educational Resources Information Center

    Conrad, Jill K.

    2013-01-01

    This dissertation examines the levels of and relationships between technical leadership, adaptive leadership, and defined autonomy among Denver school leaders along with their combined effects on school growth gains over time. Thirty principals provided complete responses to an online survey that included existing scales for technical leadership,…

  4. Local average height distribution of fluctuating interfaces

    NASA Astrophysics Data System (ADS)

    Smith, Naftali R.; Meerson, Baruch; Sasorov, Pavel V.

    2017-01-01

    Height fluctuations of growing surfaces can be characterized by the probability distribution of height in a spatial point at a finite time. Recently there has been spectacular progress in the studies of this quantity for the Kardar-Parisi-Zhang (KPZ) equation in 1 +1 dimensions. Here we notice that, at or above a critical dimension, the finite-time one-point height distribution is ill defined in a broad class of linear surface growth models unless the model is regularized at small scales. The regularization via a system-dependent small-scale cutoff leads to a partial loss of universality. As a possible alternative, we introduce a local average height. For the linear models, the probability density of this quantity is well defined in any dimension. The weak-noise theory for these models yields the "optimal path" of the interface conditioned on a nonequilibrium fluctuation of the local average height. As an illustration, we consider the conserved Edwards-Wilkinson (EW) equation, where, without regularization, the finite-time one-point height distribution is ill defined in all physical dimensions. We also determine the optimal path of the interface in a closely related problem of the finite-time height-difference distribution for the nonconserved EW equation in 1 +1 dimension. Finally, we discuss a UV catastrophe in the finite-time one-point distribution of height in the (nonregularized) KPZ equation in 2 +1 dimensions.

  5. Local average height distribution of fluctuating interfaces.

    PubMed

    Smith, Naftali R; Meerson, Baruch; Sasorov, Pavel V

    2017-01-01

    Height fluctuations of growing surfaces can be characterized by the probability distribution of height in a spatial point at a finite time. Recently there has been spectacular progress in the studies of this quantity for the Kardar-Parisi-Zhang (KPZ) equation in 1+1 dimensions. Here we notice that, at or above a critical dimension, the finite-time one-point height distribution is ill defined in a broad class of linear surface growth models unless the model is regularized at small scales. The regularization via a system-dependent small-scale cutoff leads to a partial loss of universality. As a possible alternative, we introduce a local average height. For the linear models, the probability density of this quantity is well defined in any dimension. The weak-noise theory for these models yields the "optimal path" of the interface conditioned on a nonequilibrium fluctuation of the local average height. As an illustration, we consider the conserved Edwards-Wilkinson (EW) equation, where, without regularization, the finite-time one-point height distribution is ill defined in all physical dimensions. We also determine the optimal path of the interface in a closely related problem of the finite-time height-difference distribution for the nonconserved EW equation in 1+1 dimension. Finally, we discuss a UV catastrophe in the finite-time one-point distribution of height in the (nonregularized) KPZ equation in 2+1 dimensions.

  6. Short turn-around intercontinental clock synchronization using very-long-baseline interferometry

    NASA Technical Reports Server (NTRS)

    Madrid, G. A.; Yunck, T. P.; Henderson, R. B.

    1981-01-01

    During the past year work was accomplished to bring into regular operation a VLBI system for making intercontinental clock comparisons with a turnaround of a few days from the time of data taking. Earlier VLBI systems required several weeks to produce results. The present system, which is not yet complete, incorporates a number of refinements not available in earlier systems, such as dual-frequency ionospheric delay cancellation and wider synthesized bandwidths with instrumental phase calibration.
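Dual-frequency cancellation works because the dispersive ionospheric delay scales as 1/f², so observing at two frequencies lets the geometric (ionosphere-free) delay be solved for directly. A hedged sketch, with assumed S/X-band frequencies and an illustrative dispersive constant:

```python
def iono_free_delay(tau1, tau2, f1, f2):
    # Model: tau_i = tau_geo + K / f_i**2  (i = 1, 2), so the linear
    # combination below cancels the dispersive term K exactly.
    return (f1**2 * tau1 - f2**2 * tau2) / (f1**2 - f2**2)

# Assumed X/S band observing frequencies (8.4 and 2.3 GHz) and a
# 10 ns geometric delay plus a hypothetical dispersive term K.
f1, f2 = 8.4e9, 2.3e9
K = 40.3 * 1e18 / 3e8      # illustrative dispersive constant times TEC
tau_geo = 10e-9
tau1 = tau_geo + K / f1**2
tau2 = tau_geo + K / f2**2
print(iono_free_delay(tau1, tau2, f1, f2))  # recovers tau_geo ~ 1e-8 s
```

Substituting the model into the combination shows the K terms cancel algebraically, leaving tau_geo regardless of the ionospheric electron content.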

  7. WHOI Hawaii Ocean Timeseries Station (WHOTS): WHOTS-4 2007 Mooring Turnaround Cruise Report

    DTIC Science & Technology

    2008-01-01

    pitch and roll information from the ADCP (Fig. 23) provide useful information about the overall behavior of the mooring during its deployment. [Figure 23: heading, pitch, and roll variations (raw data) measured by the WH 300 kHz ADCP at 125 m depth on the WHOTS-3 mooring, June 2006 through June 2007. Figure 24: time series.]

  8. Big Data, Miniregistries: A Rapid-Turnaround Solution to Get Quality Improvement Data into the Hands of Medical Specialists

    PubMed Central

    Herrinton, Lisa J; Liu, Liyan; Altschuler, Andrea; Dell, Richard; Rabrenovich, Violeta; Compton-Phillips, Amy L

    2015-01-01

    Context: Disease registries enable priority setting and batching of clinical tasks, such as reaching out to patients who have missed a routine laboratory test. Building disease registries requires collaboration among professionals in medicine, population science, and information technology. Specialty care addresses many complex, uncommon conditions, and these conditions are diverse. The cost to build and maintain traditional registries for many diverse, complex, low-frequency conditions is prohibitive. Objective: To develop and to test the Specialty Miniregistries platform, a collaborative interface designed to streamline the medical specialist’s contributions to the science and management of population health. Design: We used accessible technology to develop a platform that would generate miniregistries (small, routinely updated datasets) for surveillance, to identify patients who were missing expected utilization, and to influence clinicians and others to change practices to improve care. The platform was composed of staff, technology, and structured collaborations, organized into a workflow. The platform was tested in five medical specialty departments. Main Outcome Measure: Proof of concept. Results: The platform enabled medical specialists to rapidly and effectively communicate clinical questions, knowledge of disease, clinical workflows, and improvement opportunities. Their knowledge was used to build and to deploy the miniregistries. Each miniregistry required 1 to 2 hours of collaboration by a medical specialist. Turnaround was 1 to 14 days. Conclusions: The Specialty Miniregistries platform is useful for low-volume questions that often occur in specialty care, and it requires low levels of investment. The efficient organization of information workers to support accountable care is an emerging question. PMID:25785640

  9. Reflight of the First Microgravity Science Laboratory: Quick Turnaround of a Space Shuttle Mission

    NASA Technical Reports Server (NTRS)

    Simms, Yvonne

    1998-01-01

    Due to the short flight of Space Shuttle Columbia, STS-83, in April 1997, NASA chose to refly the same crew, shuttle, and payload on STS-94 in July 1997. This was the first reflight of an entire mission complement. The reflight of the First Microgravity Science Laboratory (MSL-1) on STS-94 required an innovative approach to Space Shuttle payload ground processing. Ground processing time for the Spacelab Module, which served as the laboratory for MSL-1 experiments, was reduced by seventy-five percent. The Spacelab Module is a pressurized facility with avionics and thermal cooling and heating accommodations. Boeing-Huntsville, formerly McDonnell Douglas Aerospace, has been the Spacelab Integration Contractor since 1977. The first Spacelab Module flight was in 1983. An experienced team determined what was required to refurbish the Spacelab Module for reflight. Team members had diverse knowledge, skills, and background. An engineering assessment of subsystems, including mechanical, electrical power distribution, command and data management, and environmental control and life support, was performed. Recommendations for resolution of STS-83 Spacelab in-flight anomalies were provided. Inspections and tests that must be done on critical Spacelab components were identified. This assessment contributed to the successful reflight of MSL-1, the fifteenth Spacelab Module mission.

  10. Disk-Averaged Synthetic Spectra of Mars

    NASA Astrophysics Data System (ADS)

    Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-08-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.

  11. Disk-averaged synthetic spectra of Mars

    NASA Technical Reports Server (NTRS)

    Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.

  12. Disk-averaged synthetic spectra of Mars.

    PubMed

    Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-08-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.

  13. A simple algorithm for averaging spike trains.

    PubMed

    Julienne, Hannah; Houghton, Conor

    2013-02-25

    Although spike trains are the principal channel of communication between neurons, a single stimulus will elicit different spike trains from trial to trial. This variability, in both spike timings and spike number, can obscure the temporal structure of spike trains and often means that computations need to be run on numerous spike trains in order to extract features common across all the responses to a particular stimulus. This can increase the computational burden and obscure analytical results. As a consequence, it is useful to consider how to calculate a central spike train that summarizes a set of trials. Indeed, averaging responses over trials is routine for other signal types. Here, a simple method for finding a central spike train is described. The spike trains are first mapped to functions, these functions are averaged, and a greedy algorithm is then used to map the average function back to a spike train. The central spike trains are tested on a large data set. Their performance on a classification-based test is considerably better than the performance of the medoid spike trains.
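The three-step pipeline just described can be sketched as a toy implementation: smooth each train with a Gaussian kernel, average the smoothed functions, then greedily place spikes to match the average. The kernel width, grid, and stopping rule here are illustrative choices, not the paper's.

```python
import numpy as np

def smooth(train, t, tau=0.05):
    # Map a spike train to a function: a sum of Gaussian bumps at spike times.
    return sum(np.exp(-((t - s) ** 2) / (2 * tau ** 2)) for s in train)

def central_spike_train(trains, t_max=1.0, dt=0.001, tau=0.05):
    t = np.arange(0.0, t_max, dt)
    # Step 1-2: smooth every trial and average the resulting functions.
    target = np.mean([smooth(tr, t, tau) for tr in trains], axis=0)
    # Step 3: greedily add spikes while they reduce the squared error
    # between the reconstructed function and the average function.
    spikes, current = [], np.zeros_like(t)
    err = np.sum((target - current) ** 2)
    while True:
        best_err, best_s = err, None
        for s in t[::10]:  # coarse candidate grid keeps the search cheap
            trial = current + np.exp(-((t - s) ** 2) / (2 * tau ** 2))
            e = np.sum((target - trial) ** 2)
            if e < best_err:
                best_err, best_s = e, s
        if best_s is None:          # no candidate improves the fit: stop
            return sorted(spikes)
        spikes.append(best_s)
        current = current + np.exp(-((t - best_s) ** 2) / (2 * tau ** 2))
        err = best_err

# Three jittered trials of the same three-spike response.
trials = [[0.2, 0.5, 0.8], [0.21, 0.52, 0.79], [0.19, 0.5, 0.81]]
print(central_spike_train(trials))  # three spikes near 0.2, 0.5, 0.8
```

The greedy step terminates naturally: once the average function is matched, any extra unit bump only adds energy and increases the error.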

  14. Global atmospheric circulation statistics: Four year averages

    NASA Technical Reports Server (NTRS)

    Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.

    1987-01-01

    Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.
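The geostrophic construction the authors mention follows the standard relations u_g = -(g/f) dZ/dy and v_g = (g/f) dZ/dx on a pressure surface. A generic numpy sketch (an illustration of those formulae, not the NOAA/NMC processing code):

```python
import numpy as np

g = 9.81          # gravitational acceleration, m/s^2
omega = 7.292e-5  # Earth's rotation rate, rad/s

def geostrophic_wind(Z, lat_deg, dx, dy):
    # Geostrophic wind from geopotential height Z on a regular grid:
    #   u_g = -(g / f) dZ/dy,  v_g = (g / f) dZ/dx
    f = 2 * omega * np.sin(np.radians(lat_deg))  # Coriolis parameter
    dZdy, dZdx = np.gradient(Z, dy, dx)          # axis 0 = y, axis 1 = x
    return -(g / f) * dZdy, (g / f) * dZdx

# A uniform north-south height gradient at 45N gives a purely zonal wind.
dy = dx = 100e3                                  # 100 km grid spacing
y = np.arange(10) * dy
Z = 5500.0 - 1e-4 * y[:, None] * np.ones((10, 10))  # height falls northward
u, v = geostrophic_wind(Z, 45.0, dx, dy)
print(u[5, 5])  # eastward wind of roughly 9.5 m/s
```

Height decreasing toward the pole yields a westerly (eastward) geostrophic wind, consistent with the mid-latitude jets visible in zonally averaged climatologies like this one.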

  15. Average deployments versus missile and defender parameters

    SciTech Connect

    Canavan, G.H.

    1991-03-01

    This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with two-fold less effectiveness. The next 10% could double constellation sizes. 5 refs., 7 figs.

  16. The Real Turnaround

    ERIC Educational Resources Information Center

    Purinton, Ted; Azcoitia, Carlos

    2011-01-01

    Chilean educator and poet Gabriela Mistral warned that children's needs are immediate and comprise more than just academic concerns. Implementing comprehensive community schools is an increasingly successful approach to taking her warning to heart, particularly in neighborhoods with large immigrant populations. The reason is simple: education does…

  17. Engineering a Turnaround

    ERIC Educational Resources Information Center

    Hood, Lucy

    2006-01-01

    This article describes Soddy-Daisy High School in southeastern Tennessee. It used to be that vocational training and a focus on academic studies were considered completely different means of education. But in Soddy-Daisy, Tennessee, the two go hand in hand. Eric Thomas and his brother Mark teach side by side in adjacent rooms, where computer…

  18. Effect of wind averaging time on wind erosivity estimation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Wind Erosion Prediction System (WEPS) and Revised Wind Erosion Equation (RWEQ) are widely used for estimating the wind-induced soil erosion at a field scale. Wind is the principal erosion driver in the two models. The wind erosivity, which describes the capacity of wind to cause soil erosion, is ...

  19. Predictability of time averages: The influence of the boundary forcing

    NASA Technical Reports Server (NTRS)

    Shukla, J.

    1982-01-01

    The physical mechanisms through which changes in the boundary forcings of SST, soil moisture, albedo, sea ice, and snow influence the atmospheric circulation are discussed. Results of numerical experiments conducted with the GLAS climate model to determine the sensitivity of the model atmosphere to changes in boundary conditions of SST, soil moisture, and albedo over limited regions are also discussed. It is found that changes in SST and soil moisture in the tropics produce large changes in the atmospheric circulation and rainfall over the tropics as well as over mid-latitudes.

  20. Delineating the Average Rate of Change in Longitudinal Models

    ERIC Educational Resources Information Center

    Kelley, Ken; Maxwell, Scott E.

    2008-01-01

    The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…
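    The distinction drawn above between the average rate of change and a fitted straight-line slope can be illustrated numerically; the quadratic trajectory and unequally spaced measurement occasions below are hypothetical:

    ```python
    import numpy as np

    # Hypothetical quadratic growth trajectory observed at unequally spaced occasions
    t = np.array([0.0, 1.0, 2.0, 3.0, 10.0])
    y = t ** 2

    # Average rate of change: total change divided by elapsed time
    arc = (y[-1] - y[0]) / (t[-1] - t[0])   # = 10.0

    # Slope of the straight-line (OLS) fit -- not the same quantity in general
    slope = np.polyfit(t, y, 1)[0]          # ~10.69 here
    ```

    The two quantities coincide only under straight-line change (or special symmetric designs); with nonlinear change and unequal spacing, interpreting the fitted slope as the average rate of change is exactly the misreading the abstract warns against.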

  1. Below-Average, Average, and Above-Average Readers Engage Different and Similar Brain Regions while Reading

    ERIC Educational Resources Information Center

    Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri

    2006-01-01

    Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…

  2. Conditionally-averaged structures in wall-bounded turbulent flows

    NASA Technical Reports Server (NTRS)

    Guezennec, Yann G.; Piomelli, Ugo; Kim, John

    1987-01-01

    The quadrant-splitting and the wall-shear detection techniques were used to obtain ensemble-averaged wall layer structures. The two techniques give similar results for Q4 events, but the wall-shear method leads to smearing of Q2 events. Events were found to maintain their identity for very long times. The ensemble-averaged structures scale with outer variables. Turbulence-producing events were associated with one dominant vortical structure rather than a pair of counter-rotating structures. An asymmetry-preserving averaging scheme was devised that yields an averaged structure more closely resembling the instantaneous one.
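    The quadrant-splitting idea referred to above is simple to state in code: fluctuating velocity samples (u', v') are classified by the sign pair into quadrants, with Q2 "ejections" and Q4 "sweeps" being the turbulence-producing events. This is a generic sketch of the technique, with a hyperbolic "hole" threshold as an assumed option; it is not the paper's implementation:

    ```python
    import numpy as np

    def quadrant_events(u, v, hole=0.0):
        """Classify velocity-fluctuation samples into quadrants Q1-Q4.
        Q2 (u' < 0, v' > 0) are ejections; Q4 (u' > 0, v' < 0) are sweeps.
        Samples with |u'v'| below hole * mean(|u'v'|) are marked 0 (excluded)."""
        up, vp = u - u.mean(), v - v.mean()        # fluctuations about the mean
        strong = np.abs(up * vp) > hole * np.mean(np.abs(up * vp))
        q = np.where(up > 0, np.where(vp > 0, 1, 4), np.where(vp > 0, 2, 3))
        return np.where(strong, q, 0)
    ```

    Conditional (ensemble) averages are then formed over the samples flagged in a chosen quadrant.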

  3. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  4. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  5. Transforming Schools through Expanded Learning Time: Orchard Gardens K-8 Pilot School. Update 2013

    ERIC Educational Resources Information Center

    Chan, Roy

    2013-01-01

    For years, Orchard Gardens K-8 Pilot School was plagued by low student achievement and high staff turnover. Then, in 2010, with an expanded school schedule made possible through federal funding, Orchard Gardens began a remarkable turnaround. Today, the school is demonstrating how increased learning time, combined with other key turnaround…

  6. Averaging and Adding in Children's Worth Judgements

    ERIC Educational Resources Information Center

    Schlottmann, Anne; Harman, Rachel M.; Paine, Julie

    2012-01-01

    Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…

  7. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  8. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  9. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  10. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  11. 40 CFR 89.204 - Averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...

  12. Bayesian Model Averaging for Propensity Score Analysis.

    PubMed

    Kaplan, David; Chen, Jianshen

    2014-01-01

    This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA, an approach that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam's window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.

  13. Turnaround Insights from the Organizational Sciences: A Review of the Empirical Evidence and the Development of a Staged Model of Recovery with Potential Implications for the PK-12 Education Sector

    ERIC Educational Resources Information Center

    Murphy, Joseph

    2008-01-01

    In this article, we review research from the organizational sciences to develop lessons for educators and policy makers. The approach is an integrative review of the literature. We employ a comprehensive process to unpack and make sense of the turnaround literature from the organizational sciences. We rely on strategies appropriate for document…

  14. Cosmological ensemble and directional averages of observables

    SciTech Connect

    Bonvin, Camille; Clarkson, Chris; Durrer, Ruth; Maartens, Roy; Umeh, Obinna E-mail: chris.clarkson@gmail.com E-mail: roy.maartens@gmail.com

    2015-07-01

    We show that at second order, ensemble averages of observables and directional averages do not commute due to gravitational lensing—observing the same thing in many directions over the sky is not the same as taking an ensemble average. In principle this non-commutativity is significant for a variety of quantities that we often use as observables and can lead to a bias in parameter estimation. We derive the relation between the ensemble average and the directional average of an observable, at second order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focusing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance in a given observed direction is increased by gravitational lensing, whereas the directional average of the distance is decreased. For a generic observable, there exists a particular function of the observable that is not affected by second-order lensing perturbations. We also show that standard areas have an advantage over standard rulers, and we discuss the subtleties involved in averaging in the case of supernova observations.

  15. Spatial limitations in averaging social cues

    PubMed Central

    Florey, Joseph; Clifford, Colin W. G.; Dakin, Steven; Mareschal, Isabelle

    2016-01-01

    The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers’ ability to average social cues with their averaging of a non-social cue. Estimates of observers’ internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging is examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589
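    The equivalent-noise fit mentioned above follows a standard form: squared threshold equals (internal noise² + external noise²) divided by the effective sample size. The sketch below fits that form to synthetic data; the parameter values, noise level, and observer are invented for illustration, not taken from the study:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def equivalent_noise(sigma_ext, sigma_int, n_samp):
        """Equivalent-noise function: discrimination threshold vs external noise,
        threshold^2 = (sigma_int^2 + sigma_ext^2) / n_samp."""
        return np.sqrt((sigma_int ** 2 + sigma_ext ** 2) / n_samp)

    # Synthetic observer: internal noise 4 units, ~3 effectively pooled samples
    sigma_ext = np.array([0.0, 2.0, 4.0, 8.0, 16.0, 32.0])
    rng = np.random.default_rng(0)
    thresholds = equivalent_noise(sigma_ext, 4.0, 3.0) * rng.normal(1.0, 0.05, sigma_ext.size)

    # Fit recovers estimates of internal noise and effective sample size
    (sigma_int_hat, n_samp_hat), _ = curve_fit(equivalent_noise, sigma_ext, thresholds, p0=[1.0, 1.0])
    ```

    The low-external-noise asymptote constrains internal noise, while the high-noise slope constrains the effective sample size.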

  16. Cell averaging Chebyshev methods for hyperbolic problems

    NASA Technical Reports Server (NTRS)

    Wei, Cai; Gottlieb, David; Harten, Ami

    1990-01-01

    A cell averaging method for the Chebyshev approximations of first order hyperbolic equations in conservation form is described. Formulas are presented for transforming between pointwise data at the collocation points and cell averaged quantities, and vice versa. This step, trivial for the finite difference and Fourier methods, is nontrivial for the global polynomials used in spectral methods. The cell averaging methods presented are proven stable for linear scalar hyperbolic equations, and numerical simulations of shock-density wave interaction using the new cell averaging Chebyshev methods are presented.

  17. Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?

    SciTech Connect

    Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.

    2013-06-17

    Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.

  18. Average cross-responses in correlated financial markets

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Schäfer, Rudi; Guhr, Thomas

    2016-09-01

    There are non-vanishing price responses across different stocks in correlated financial markets, reflecting non-Markovian features. We further study this issue by performing different averages, which identify active and passive cross-responses. The two average cross-responses show different characteristic dependences on the time lag. The passive cross-response exhibits a shorter response period with sizeable volatilities, while the corresponding period for the active cross-response is longer. The average cross-responses for a given stock are evaluated either with respect to the whole market or to different sectors. Using the response strength, the influences of individual stocks are identified and discussed. Moreover, the various cross-responses as well as the average cross-responses are compared with the self-responses. In contrast to the short-memory trade sign cross-correlations for each pair of stocks, the sign cross-correlations averaged over different pairs of stocks show long memory.
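    The cross-response studied above is conventionally defined as the average product of one stock's trade sign with another stock's subsequent log-price change over a time lag. A generic sketch of that estimator (function name and data layout are assumptions of this sketch):

    ```python
    import numpy as np

    def cross_response(signs_j, prices_i, max_lag):
        """Average cross-response of stock i to the trade signs of stock j:
        R_ij(tau) = < sign_j(t) * (log p_i(t + tau) - log p_i(t)) >,
        estimated for tau = 1 .. max_lag from same-length time series."""
        logp = np.log(np.asarray(prices_i, dtype=float))
        signs = np.asarray(signs_j, dtype=float)
        return np.array([np.mean(signs[:-lag] * (logp[lag:] - logp[:-lag]))
                         for lag in range(1, max_lag + 1)])
    ```

    Setting `signs_j = signs_i` gives the self-response; averaging `R_ij` over stock pairs within a sector gives the sector-level average cross-response discussed in the abstract.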

  19. Average oxidation state of carbon in proteins.

    PubMed

    Dick, Jeffrey M

    2014-11-06

    The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (Z(C)) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation-reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between Z(C) and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in Z(C) in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower Z(C) tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales.
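    The elemental-ratio calculation of Z(C) described above can be written directly from a chemical formula. This sketch assumes the standard nominal oxidation states (H = +1, N = -3, O = -2, S = -2); the function name and signature are illustrative:

    ```python
    def carbon_oxidation_state(c, h, n=0, o=0, s=0, charge=0):
        """Average oxidation state of carbon, Z_C, for a molecule C_c H_h N_n O_o S_s
        with the given net charge, using nominal oxidation states
        H = +1, N = -3, O = -2, S = -2."""
        return (-h + 3 * n + 2 * o + 2 * s + charge) / c
    ```

    For example, methane (CH4) gives -4, carbon dioxide (CO2) gives +4, and glycine (C2H5NO2) gives +1; for a protein the same ratio is computed from its overall amino acid composition.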

  20. Average oxidation state of carbon in proteins

    PubMed Central

    Dick, Jeffrey M.

    2014-01-01

    The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (ZC) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation–reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between ZC and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in ZC in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower ZC tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales. PMID:25165594

  1. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  2. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  3. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...

  4. Whatever Happened to the Average Student?

    ERIC Educational Resources Information Center

    Krause, Tom

    2005-01-01

    Mandated state testing, college entrance exams and their perceived need for higher and higher grade point averages have raised the anxiety levels felt by many of the average students. Too much focus is placed on state test scores and college entrance standards with not enough focus on the true level of the students. The author contends that…

  5. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... class or subclass: Credit = (Average Standard − Emission Level) × (Total Annual Production) × (Useful Life) Deficit = (Emission Level − Average Standard) × (Total Annual Production) × (Useful Life) (l....000 Where: FELi = The FEL to which the engine family is certified. ULi = The useful life of the...

  6. Determinants of College Grade Point Averages

    ERIC Educational Resources Information Center

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  7. Average Transmission Probability of a Random Stack

    ERIC Educational Resources Information Center

    Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

    2010-01-01

    The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…

  8. Correction for spatial averaging in laser speckle contrast analysis

    PubMed Central

    Thompson, Oliver; Andrews, Michael; Hirst, Evan

    2011-01-01

    Practical laser speckle contrast analysis systems face a problem of spatial averaging of speckles, due to the pixel size in the cameras used. Existing practice is to use a system factor in speckle contrast analysis to account for spatial averaging. The linearity of the system factor correction has not previously been confirmed. The problem of spatial averaging is illustrated using computer simulation of time-integrated dynamic speckle, and the linearity of the correction is confirmed using both computer simulation and experimental results. The valid linear correction allows various useful compromises in the system design. PMID:21483623
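    The effect described above can be demonstrated with a generic speckle-contrast estimator (K = standard deviation / mean over local windows). The simulation below is a simplified assumption of this sketch: fully developed speckle is modelled as spatially independent exponential intensity, and pixel-area averaging is mimicked by 2x2 binning, which lowers the measured contrast:

    ```python
    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def speckle_contrast(intensity, window=7):
        """Local speckle contrast K = std/mean over sliding windows."""
        w = sliding_window_view(intensity, (window, window))
        return w.std(axis=(-2, -1)) / w.mean(axis=(-2, -1))

    # Fully developed speckle: exponentially distributed intensity, K ~ 1
    rng = np.random.default_rng(0)
    speckle = rng.exponential(1.0, (128, 128))
    K_full = speckle_contrast(speckle).mean()

    # Spatial averaging over the pixel area (here, 2x2 binning) reduces K
    binned = speckle.reshape(64, 2, 64, 2).mean(axis=(1, 3))
    K_binned = speckle_contrast(binned).mean()
    ```

    A system-factor correction then rescales the measured contrast back toward the unaveraged value; the paper's contribution is confirming that this correction is linear.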

  9. Analogue Divider by Averaging a Triangular Wave

    NASA Astrophysics Data System (ADS)

    Selvam, Krishnagiri Chinnathambi

    2017-03-01

    A new analogue divider circuit that works by averaging a triangular wave using operational amplifiers is explained in this paper. The reference triangular waveform is shifted from the zero-voltage level up towards the positive power supply voltage level. Its positive portion is obtained by a positive rectifier and its average value is obtained by a low pass filter. The same triangular waveform is shifted from the zero-voltage level down towards the negative power supply voltage level. Its negative portion is obtained by a negative rectifier and its average value is obtained by another low pass filter. Both averaged voltages are combined in a summing amplifier, and the summed voltage is given to an op-amp as the negative input. This op-amp is configured to operate in a closed negative-feedback loop. The op-amp output is the divider output.

  10. Modelling and designing digital control systems with averaged measurements

    NASA Technical Reports Server (NTRS)

    Polites, Michael E.; Beale, Guy O.

    1988-01-01

    An account is given of the control systems engineering methods applicable to the design of digital feedback controllers for aerospace deterministic systems in which the output, rather than being an instantaneous measure of the system at the sampling instants, instead represents an average measure of the system over the time interval between samples. The averaging effect can be included during the modeling of the plant, thereby obviating the iteration of design/simulation phases.

  11. Average shape of transport-limited aggregates.

    PubMed

    Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z

    2005-08-12

    We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry.

  12. Cosmic inhomogeneities and averaged cosmological dynamics.

    PubMed

    Paranjape, Aseem; Singh, T P

    2008-10-31

    If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.

  13. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  14. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  15. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...

  16. Spacetime Average Density (SAD) cosmological measures

    SciTech Connect

    Page, Don N.

    2014-11-01

    The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.

  17. Bimetal sensor averages temperature of nonuniform profile

    NASA Technical Reports Server (NTRS)

    Dittrich, R. T.

    1968-01-01

    Instrument that measures an average temperature across a nonuniform temperature profile under steady-state conditions has been developed. The principle of operation is an application of the expansion of a solid material caused by a change in temperature.

  18. Rotational averaging of multiphoton absorption cross sections

    NASA Astrophysics Data System (ADS)

    Friese, Daniel H.; Beerepoot, Maarten T. P.; Ruud, Kenneth

    2014-11-01

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  19. Rotational averaging of multiphoton absorption cross sections.

    PubMed

    Friese, Daniel H; Beerepoot, Maarten T P; Ruud, Kenneth

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  20. Monthly average polar sea-ice concentration

    USGS Publications Warehouse

    Schweitzer, Peter N.

    1995-01-01

    The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.

  1. Radial averages of astigmatic TEM images.

    PubMed

    Fernando, K Vince

    2008-10-01

    The Contrast Transfer Function (CTF) of an image, which modulates images taken from a Transmission Electron Microscope (TEM), is usually determined from the radial average of the power spectrum of the image (Frank, J., Three-dimensional Electron Microscopy of Macromolecular Assemblies, Oxford University Press, Oxford, 2006). The CTF is primarily defined by the defocus. If the defocus estimate is accurate enough then it is possible to demodulate the image, which is popularly known as the CTF correction. However, it is known that the radial average is somewhat attenuated if the image is astigmatic (see Fernando, K.V., Fuller, S.D., 2007. Determination of astigmatism in TEM images. Journal of Structural Biology 157, 189-200) but this distortion due to astigmatism has not been fully studied or understood up to now. We have discovered the exact mathematical relationship between the radial averages of TEM images with and without astigmatism. This relationship is determined by a zeroth order Bessel function of the first kind and hence we can exactly quantify this distortion in the radial averages of signal and power spectra of astigmatic images. The argument to this Bessel function is similar to an aberration function (without the spherical aberration term) except that the defocus parameter is replaced by the differences of the defoci in the major and minor axes of astigmatism. The ill effects due to this Bessel function are twofold. Since the zeroth order Bessel function is a decaying oscillatory function, it introduces additional zeros to the radial average and it also attenuates the CTF signal in the radial averages. Using our analysis, it is possible to simulate the effects of astigmatism in radial averages by imposing Bessel functions on idealized radial averages of images which are not astigmatic. We validate our theory using astigmatic TEM images.
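    The Bessel-function attenuation described in this abstract can be sketched numerically. The following is an illustrative toy model, not the paper's exact derivation: the wavelength, defocus values, and the simplified phase term (no spherical aberration) are all assumed for the example.

    ```python
    import numpy as np
    from scipy.special import j0  # zeroth-order Bessel function of the first kind

    # Hypothetical imaging parameters, chosen only for illustration.
    wavelength = 0.0197      # electron wavelength in angstroms (~300 kV)
    defocus_mean = 15000.0   # mean defocus in angstroms
    defocus_diff = 2000.0    # difference of defoci along the astigmatism axes

    s = np.linspace(0.0, 0.05, 500)                    # spatial frequency (1/angstrom)
    gamma = np.pi * wavelength * defocus_mean * s**2   # simplified phase, no Cs term
    ctf_ideal = -np.sin(gamma)                         # idealized astigmatism-free CTF

    # Bessel modulation: the defocus parameter is replaced by (half) the
    # defocus difference between the major and minor axes of astigmatism.
    bessel_arg = 0.5 * np.pi * wavelength * defocus_diff * s**2
    ctf_astig_avg = ctf_ideal * j0(bessel_arg)
    ```

    Plotting `ctf_astig_avg` against `ctf_ideal` shows the two effects the abstract names: extra zeros from the oscillations of J0 and attenuation of the CTF signal at higher spatial frequency, since |J0(x)| ≤ 1.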

  2. Real-Time Patient Survey Data During Routine Clinical Activities for Rapid-Cycle Quality Improvement

    PubMed Central

    Jones, Robert E

    2015-01-01

    Background Surveying patients is increasingly important for evaluating and improving health care delivery, but practical survey strategies during routine care activities have not been available. Objective We examined the feasibility of conducting routine patient surveys in a primary care clinic using commercially available technology (Web-based survey creation, deployment on tablet computers, cloud-based management of survey data) to expedite and enhance several steps in data collection and management for rapid quality improvement cycles. Methods We used a Web-based data management tool (survey creation, deployment on tablet computers, real-time data accumulation and display of survey results) to conduct four patient surveys during routine clinic sessions over a one-month period. Each survey consisted of three questions and focused on a specific patient care domain (dental care, waiting room experience, care access/continuity, Internet connectivity). Results Of the 727 available patients during clinic survey days, 316 patients (43.4%) attempted the survey, and 293 (40.3%) completed the survey. For the four 3-question surveys, the overall average time per survey was 40.4 seconds, with a range of 5.4 to 20.3 seconds for individual questions. Yes/No questions took less time than multiple choice questions (an average of 9.6 seconds versus 14.0 seconds). Average response time showed no clear pattern by order of questions or by proctor strategy, but increased monotonically with the number of words in the question (<20 words, 21-30 words, >30 words): 8.0, 11.8, and 16.8 seconds, respectively. Conclusions This technology-enabled data management system helped capture patient opinions and accelerate turnaround of survey data, with minimal impact on a busy primary care clinic. This new model of patient survey data management is feasible and sustainable in a busy office setting, supports and engages clinicians in the quality improvement process, and harmonizes with the vision of a learning health system.

  3. Probing turbulence intermittency via autoregressive moving-average models

    NASA Astrophysics Data System (ADS)

    Faranda, Davide; Dubrulle, Bérengère; Daviaud, François; Pons, Flavio Maria Emanuele

    2014-12-01

    We suggest an approach to probing intermittency corrections to the Kolmogorov law in turbulent flows based on the autoregressive moving-average modeling of turbulent time series. We introduce an index Υ that measures the distance from a Kolmogorov-Obukhov model in the autoregressive moving-average model space. Applying our analysis to particle image velocimetry and laser Doppler velocimetry measurements in a von Kármán swirling flow, we show that Υ is proportional to traditional intermittency corrections computed from structure functions. Therefore, it provides the same information, using much shorter time series. We conclude that Υ is a suitable index to reconstruct intermittency in experimental turbulent fields.
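    As a minimal sketch of the ARMA-based idea, one can fit a low-order autoregressive model to a time series and treat the distance of the fitted coefficient from a reference model as an index. The coefficient values, the reference value, and the index definition below are illustrative stand-ins, not the paper's actual Υ.

    ```python
    import numpy as np

    # Simulate an AR(1) series x[t] = phi * x[t-1] + noise.
    rng = np.random.default_rng(0)
    n = 5000
    phi_true = 0.7
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi_true * x[t - 1] + rng.normal()

    # Least-squares estimate of the AR(1) coefficient from the series itself.
    phi_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

    # A Kolmogorov-Obukhov-like reference model would correspond to some
    # reference coefficient; the distance from it plays the role of an
    # intermittency index (hypothetical stand-in, not the paper's formula).
    phi_ref = 1.0
    upsilon = abs(phi_hat - phi_ref)
    ```

    The appeal noted in the abstract is that such model-space distances can be estimated from much shorter series than the structure functions used for traditional intermittency corrections.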

  4. The generic modeling fallacy: Average biomechanical models often produce non-average results!

    PubMed

    Cook, Douglas D; Robertson, Daniel J

    2016-11-07

    Computational biomechanics models constructed using nominal or average input parameters are often assumed to produce average results that are representative of a target population of interest. To investigate this assumption a stochastic Monte Carlo analysis of two common biomechanical models was conducted. Consistent discrepancies were found between the behavior of average models and the average behavior of the population from which the average models׳ input parameters were derived. More interestingly, broadly distributed sets of non-average input parameters were found to produce average or near average model behaviors. In other words, average models did not produce average results, and models that did produce average results possessed non-average input parameters. These findings have implications on the prevalent practice of employing average input parameters in computational models. To facilitate further discussion on the topic, the authors have termed this phenomenon the "Generic Modeling Fallacy". The mathematical explanation of the Generic Modeling Fallacy is presented and suggestions for avoiding it are provided. Analytical and empirical examples of the Generic Modeling Fallacy are also given.
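    The core phenomenon is easy to reproduce: for a nonlinear model, the output of the average-input model need not equal the average output of the population. A toy Monte Carlo sketch, with a quadratic model as an assumed stand-in for a biomechanical model:

    ```python
    import numpy as np

    def model(x):
        return x ** 2  # any nonlinear input-output relationship

    rng = np.random.default_rng(42)
    inputs = rng.normal(loc=0.0, scale=1.0, size=100_000)  # population of input parameters

    average_input_output = model(inputs.mean())   # behavior of the "average model"
    average_output = model(inputs).mean()         # average behavior of the population

    # For x ~ N(0, 1): model(mean) is near 0 while mean(model) is near 1,
    # so the average model does not produce the average result.
    ```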

  5. Books Average Previous Decade of Economic Misery

    PubMed Central

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
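    The construction used in the abstract can be sketched as follows, with synthetic series standing in for the actual inflation, unemployment, and literary data:

    ```python
    import numpy as np

    # Synthetic annual series, for illustration only.
    rng = np.random.default_rng(1)
    years = np.arange(1930, 2001)
    inflation = rng.uniform(0, 10, size=years.size)
    unemployment = rng.uniform(3, 12, size=years.size)

    # Economic misery index: inflation rate plus unemployment rate.
    misery = inflation + unemployment

    # Trailing moving average over the previous decade; the paper reports
    # a goodness-of-fit peak at an 11-year window.
    window = 11
    kernel = np.ones(window) / window
    misery_ma = np.convolve(misery, kernel, mode="valid")
    ```

    A literary misery index would then be correlated against `misery_ma` at varying window lengths to locate the goodness-of-fit peak.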

  6. Books average previous decade of economic misery.

    PubMed

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20(th) century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  7. Interpreting Sky-Averaged 21-cm Measurements

    NASA Astrophysics Data System (ADS)

    Mirocha, Jordan

    2015-01-01

    Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation

  8. Polarized electron beams at milliampere average current

    SciTech Connect

    Poelker, Matthew

    2013-11-01

    This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~ 200 uA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltage > 350 kV.

  9. An improved moving average technical trading rule

    NASA Astrophysics Data System (ADS)

    Papailias, Fotis; Thomakos, Dimitrios D.

    2015-06-01

    This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
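    A hedged sketch of such a rule follows; the window length, trailing-stop factor, and entry/exit logic are illustrative choices, not the paper's exact specification.

    ```python
    import numpy as np

    def sma_crossover_signals(prices, window=20, trail=0.95):
        """Return a 0/1 'long only' position series: enter when price crosses
        above its simple moving average, exit when price falls below a
        dynamic trailing stop at `trail` times the running peak since entry."""
        prices = np.asarray(prices, dtype=float)
        position = np.zeros(prices.size, dtype=int)
        sma = np.full(prices.size, np.nan)
        for t in range(window - 1, prices.size):
            sma[t] = prices[t - window + 1 : t + 1].mean()
        peak = 0.0
        for t in range(window, prices.size):
            if position[t - 1] == 0 and prices[t] > sma[t]:
                position[t] = 1          # cross-over 'buy' signal
                peak = prices[t]
            elif position[t - 1] == 1:
                peak = max(peak, prices[t])
                # dynamic trailing stop acting as the exit threshold
                position[t] = 1 if prices[t] > trail * peak else 0
        return position
    ```

    On a steadily rising price series this rule enters once and holds; on a steadily falling series it never enters, which is the qualitative behavior a trailing-stop cross-over strategy should show.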

  10. Model averaging and muddled multimodel inferences

    USGS Publications Warehouse

    Cade, Brian S.

    2015-01-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the
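    The multicollinearity problem described here can be seen in a small simulation: the coefficient on the same predictor changes scale depending on which correlated predictors share the model, so averaging coefficients across models mixes incommensurate quantities. The variable names and values below are assumed for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000
    x1 = rng.normal(size=n)
    x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)   # strongly correlated with x1
    y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)

    # Model A: y ~ x1 alone
    X_a = np.column_stack([np.ones(n), x1])
    beta_a = np.linalg.lstsq(X_a, y, rcond=None)[0]

    # Model B: y ~ x1 + x2
    X_b = np.column_stack([np.ones(n), x1, x2])
    beta_b = np.linalg.lstsq(X_b, y, rcond=None)[0]

    # beta_a[1] absorbs x2's effect (near 1.9); beta_b[1] is the partial
    # effect (near 1.0). Averaging the two mixes estimates of different
    # quantities on different scales.
    ```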

  11. Average: the juxtaposition of procedure and context

    NASA Astrophysics Data System (ADS)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  12. Average length of stay in hospitals.

    PubMed

    Egawa, H

    1984-03-01

    The average length of stay is essentially an important and appropriate index for hospital bed administration. However, on the grounds that it is not necessarily an appropriate index in Japan, the differences between the health care facility systems of the United States and Japan are analyzed. Concerning the length of stay in Japanese hospitals, the median appeared to better represent the situation. It is emphasized that in order for the average length of stay to become an appropriate index, there is a need to promote regional health, especially facility planning.

  13. 34 CFR 668.196 - Average rates appeals.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... EDUCATION, DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668... determine that you qualify, we notify you of that determination at the same time that we notify you of your... determine that you meet the requirements for an average rates appeal. (Approved by the Office of...

  14. Maximum Likelihood Estimation of Multivariate Autoregressive-Moving Average Models.

    DTIC Science & Technology

    1977-02-01

    maximizing the same have been proposed i) in time domain by Box and Jenkins [4], Astrom [3], Wilson [23], and Phadke [16], and ii) in frequency domain by...moving average residuals and other covariance matrices with linear structure", Annals of Statistics, 3. 3. Astrom, K. J. (1970), Introduction to

  15. Simple Moving Average: A Method of Reporting Evolving Complication Rates.

    PubMed

    Harmsen, Samuel M; Chang, Yu-Hui H; Hattrup, Steven J

    2016-09-01

    Surgeons often cite published complication rates when discussing surgery with patients. However, these rates may not truly represent current results or an individual surgeon's experience with a given procedure. This study proposes a novel method to more accurately report current complication trends that may better represent the patient's potential experience: simple moving average. Reverse shoulder arthroplasty (RSA) is an increasingly popular and rapidly evolving procedure with highly variable reported complication rates. The authors used an RSA model to test and evaluate the usefulness of simple moving average. This study reviewed 297 consecutive RSA procedures performed by a single surgeon and noted complications in 50 patients (16.8%). Simple moving average for total complications as well as minor, major, acute, and chronic complications was then calculated using various lag intervals. These findings showed trends toward fewer total, major, and chronic complications over time, and these trends were represented best with a lag of 75 patients. Average follow-up within this lag was 26.2 months. Rates for total complications decreased from 17.3% to 8% at the most recent simple moving average. The authors' traditional complication rate with RSA (16.8%) is consistent with reported rates. However, the use of simple moving average shows that this complication rate decreased over time, with current trends (8%) markedly lower, giving the senior author a more accurate picture of his evolving complication trends with RSA. Compared with traditional methods, simple moving average can be used to better reflect current trends in complication rates associated with a surgical procedure and may better represent the patient's potential experience. [Orthopedics. 2016; 39(5):e869-e876.].
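    A simple moving average of complication rates, in the spirit of this article, is just a trailing proportion over the last `lag` patients of a consecutive series; the outcome data below are synthetic, for illustration only.

    ```python
    import numpy as np

    def moving_complication_rate(outcomes, lag=75):
        """outcomes: 0/1 per consecutive patient; returns the complication
        proportion over each trailing window of `lag` patients."""
        outcomes = np.asarray(outcomes, dtype=float)
        kernel = np.ones(lag) / lag
        return np.convolve(outcomes, kernel, mode="valid")

    # Synthetic series: complication risk declining over 297 patients,
    # mimicking a surgeon's improving results with an evolving procedure.
    rng = np.random.default_rng(7)
    risk = np.linspace(0.25, 0.08, 297)
    outcomes = rng.random(297) < risk
    rates = moving_complication_rate(outcomes, lag=75)
    ```

    The last entries of `rates` track recent experience, while a traditional cumulative rate would still be dominated by the early, higher-risk cases.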

  16. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... certification: (1) A statement that, to the best of your belief, you will not have a negative credit balance for... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...

  17. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... certification: (1) A statement that, to the best of your belief, you will not have a negative credit balance for... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...

  18. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... certification: (1) A statement that, to the best of your belief, you will not have a negative credit balance for... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...

  19. Why Johnny Can Be Average Today.

    ERIC Educational Resources Information Center

    Sturrock, Alan

    1997-01-01

    During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…

  20. Bayesian Model Averaging for Propensity Score Analysis

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  1. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 11 2012-07-01 2012-07-01 false Emission averaging. 63.846 Section 63...) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Primary Aluminum Reduction Plants § 63.846...

  2. 40 CFR 63.846 - Emission averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 11 2014-07-01 2014-07-01 false Emission averaging. 63.846 Section 63...) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Primary Aluminum Reduction Plants § 63.846...

  3. Initial Conditions in the Averaging Cognitive Model

    ERIC Educational Resources Information Center

    Noventa, S.; Massidda, D.; Vidotto, G.

    2010-01-01

    The initial state parameters s[subscript 0] and w[subscript 0] are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…

  4. Average thermal characteristics of solar wind electrons

    NASA Technical Reports Server (NTRS)

    Montgomery, M. D.

    1972-01-01

    Average solar wind electron properties based on a 1 year Vela 4 data sample-from May 1967 to May 1968 are presented. Frequency distributions of electron-to-ion temperature ratio, electron thermal anisotropy, and thermal energy flux are presented. The resulting evidence concerning heat transport in the solar wind is discussed.

  5. World average top-quark mass

    SciTech Connect

    Glenzinski, D.; /Fermilab

    2008-01-01

    This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.

  6. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.

    SciTech Connect

    BEN-ZVI, ILAN; DAYRAN, D.; LITVINENKO, V.

    2005-08-21

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  7. How Young Is Standard Average European?

    ERIC Educational Resources Information Center

    Haspelmath, Martin

    1998-01-01

    An analysis of Standard Average European, a European linguistic area, looks at 11 of its features (definite, indefinite articles, have-perfect, participial passive, antiaccusative prominence, nominative experiencers, dative external possessors, negation/negative pronouns, particle comparatives, A-and-B conjunction, relative clauses, verb fronting…

  8. A Functional Measurement Study on Averaging Numerosity

    ERIC Educational Resources Information Center

    Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio

    2014-01-01

    In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…

  9. Rapid identification of microorganisms from positive blood cultures by testing early growth on solid media using matrix-assisted laser desorption ionization-time of flight mass spectrometry.

    PubMed

    Gonzalez, Mark D; Weber, Carol J; Burnham, Carey-Ann D

    2016-06-01

    We performed a retrospective analysis of a simple modification to MALDI-TOF MS for microorganism identification to accurately improve the turnaround time (TAT) for identification of Enterobacteriaceae recovered in blood cultures. Relative to standard MALDI-TOF MS procedures, we reduced TAT from 28.3 h (n=90) to 21.2 h (n=107).

  10. Comparison of mouse brain DTI maps using K-space average, image-space average, or no average approach.

    PubMed

    Sun, Shu-Wei; Mei, Jennifer; Tuel, Keelan

    2013-11-01

    Diffusion tensor imaging (DTI) is achieved by collecting a series of diffusion-weighted images (DWIs). Signal averaging of multiple repetitions can be performed in the k-space (k-avg) or in the image space (m-avg) to improve the image quality. Alternatively, one can treat each acquisition as an independent image and use all of the data to reconstruct the DTI without doing any signal averaging (no-avg). To compare these three approaches, in this study, in vivo DTI data were collected from five normal mice. Noisy data with signal-to-noise ratios (SNR) that varied between five and 30 (before averaging) were then simulated. The DTI indices, including relative anisotropy (RA), trace of diffusion tensor (TR), axial diffusivity (λ║), and radial diffusivity (λ⊥), derived from the k-avg, m-avg, and no-avg, were then compared in the corpus callosum white matter, cortex gray matter, and the ventricles. We found that k-avg and m-avg enhanced the SNR of DWI with no significant differences. However, k-avg produced lower RA in the white matter and higher RA in the gray matter, compared to the m-avg and no-avg, regardless of SNR. The latter two produced similar DTI quantifications. We concluded that k-avg is less preferred for DTI brain imaging.
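    One source of the k-avg/m-avg difference can be illustrated with a toy simulation: averaging complex data before taking magnitudes (k-avg-like) versus averaging magnitudes of each repetition (m-avg-like) in a noise-only region. This is a simplified sketch, not the paper's reconstruction pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_rep, n_vox = 8, 10_000
    true_signal = 0.0  # a signal-free (background) region

    # Complex Gaussian noise in each of n_rep repeated acquisitions.
    data = (true_signal
            + rng.normal(size=(n_rep, n_vox))
            + 1j * rng.normal(size=(n_rep, n_vox)))

    k_avg = np.abs(data.mean(axis=0))   # average complex data, then take magnitude
    m_avg = np.abs(data).mean(axis=0)   # take magnitude per repetition, then average

    # By the triangle inequality, |mean(z)| <= mean(|z|), so magnitude
    # averaging can only raise the apparent level relative to complex
    # averaging; in noise-only regions this bias is largest.
    ```

    Such noise-floor differences propagate into the derived diffusivities and anisotropy indices, which is one reason the averaging strategy matters for DTI quantification.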

  11. Demonstration of the Application of Composite Load Spectra (CLS) and Probabilistic Structural Analysis (PSAM) Codes to SSME Heat Exchanger Turnaround Vane

    NASA Technical Reports Server (NTRS)

    Rajagopal, Kadambi R.; DebChaudhury, Amitabha; Orient, George

    2000-01-01

    This report describes a probabilistic structural analysis performed to determine the probabilistic structural response under fluctuating random pressure loads for the Space Shuttle Main Engine (SSME) turnaround vane. It uses a newly developed frequency and distance dependent correlation model that has features to model the decay phenomena along the flow and across the flow with the capability to introduce a phase delay. The analytical results are compared using two computer codes SAFER (Spectral Analysis of Finite Element Responses) and NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) and with experimentally observed strain gage data. The computer code NESSUS with an interface to a sub set of Composite Load Spectra (CLS) code is used for the probabilistic analysis. A fatigue code was used to calculate fatigue damage due to the random pressure excitation. The random variables modeled include engine system primitive variables that influence the operating conditions, convection velocity coefficient, stress concentration factor, structural damping, and thickness of the inner and outer vanes. The need for an appropriate correlation model in addition to magnitude of the PSD is emphasized. The study demonstrates that correlation characteristics even under random pressure loads are capable of causing resonance-like effects for some modes. The study identifies the important variables that contribute to structural alternate stress response and drive the fatigue damage for the new design. Since the alternate stress for the new redesign is less than the endurance limit for the material, the damage due to high cycle fatigue is negligible.

  12. FLA8/KIF3B phosphorylation regulates kinesin-II interaction with IFT-B to control IFT entry and turnaround.

    PubMed

    Liang, Yinwen; Pang, Yunong; Wu, Qiong; Hu, Zhangfeng; Han, Xue; Xu, Yisheng; Deng, Haiteng; Pan, Junmin

    2014-09-08

    The assembly and maintenance of cilia depends on intraflagellar transport (IFT). Activated IFT motor kinesin-II enters the cilium with loaded IFT particles comprising IFT-A and IFT-B complexes. At the ciliary tip, kinesin-II becomes inactivated, and IFT particles are released. Moreover, the rate of IFT entry is dynamically regulated during cilium assembly. However, the regulatory mechanism of IFT entry and loading/unloading of IFT particles remains elusive. We show that the kinesin-II motor subunit FLA8, a homolog of KIF3B, is phosphorylated on the conserved S663 by a calcium-dependent kinase in Chlamydomonas. This phosphorylation disrupts the interaction between kinesin-II and IFT-B, inactivates kinesin-II and inhibits IFT entry, and is also required for IFT-B unloading at the ciliary tip. Furthermore, our data suggest that the IFT entry rate is controlled by regulation of the cellular level of phosphorylated FLA8. Therefore, FLA8 phosphorylation acts as a molecular switch to control IFT entry and turnaround.

  13. Polarized electron beams at milliampere average current

    SciTech Connect

    Poelker, M.

    2013-11-07

    This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today’s CEBAF polarized source operating at ∼ 200 uA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltage > 350 kV.

  14. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

    We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.

  15. Average Annual Rainfall over the Globe

    ERIC Educational Resources Information Center

    Agrawal, D. C.

    2013-01-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…

  16. Stochastic Games with Average Payoff Criterion

    SciTech Connect

    Ghosh, M. K.; Bagchi, A.

    1998-11-15

    We study two-person stochastic games on a Polish state and compact action spaces and with average payoff criterion under a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of Nash equilibrium in stationary strategies is established under certain separability conditions.

  17. The Average Velocity in a Queue

    ERIC Educational Resources Information Center

    Frette, Vidar

    2009-01-01

    A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…

  18. Digital Averaging Phasemeter for Heterodyne Interferometry

    NASA Technical Reports Server (NTRS)

    Johnson, Donald; Spero, Robert; Shaklan, Stuart; Halverson, Peter; Kuhnert, Andreas

    2004-01-01

    A digital averaging phasemeter has been built for measuring the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. This phasemeter performs well enough to enable interferometric measurements of distance with accuracy of the order of 100 pm and with the ability to track distance as it changes at a speed of as much as 50 cm/s. This phasemeter is unique in that it is a single, integral system capable of performing three major functions that, heretofore, have been performed by separate systems: (1) measurement of the fractional-cycle phase difference, (2) counting of multiple cycles of phase change, and (3) averaging of phase measurements over multiple cycles for improved resolution. This phasemeter also offers the advantage of making repeated measurements at a high rate: the phase is measured on every heterodyne cycle. Thus, for example, in measuring the relative phase of two signals having a heterodyne frequency of 10 kHz, the phasemeter would accumulate 10,000 measurements per second. At this high measurement rate, an accurate average phase determination can be made more quickly than is possible at a lower rate.

  19. On the ensemble averaging of PIC simulations

    NASA Astrophysics Data System (ADS)

    Codur, R. J. B.; Tsung, F. S.; Mori, W. B.

    2016-10-01

    Particle-in-cell simulations are used ubiquitously in plasma physics to study a variety of phenomena. They can be an efficient tool for modeling the Vlasov or Vlasov Fokker Planck equations in multi-dimensions. However, the PIC method actually models the Klimontovich equation for finite size particles. The Vlasov Fokker Planck equation can be derived as the ensemble average of the Klimontovich equation. We present results of studying Landau damping and Stimulated Raman Scattering using PIC simulations where we use identical ``drivers'' but change the random number generator seeds. We show that even for cases where a plasma wave is excited below the noise in a single simulation that the plasma wave can clearly be seen and studied if an ensemble average over O(10) simulations is made. Comparison between the results from an ensemble average and the subtraction technique are also presented. In the subtraction technique two simulations, one with the other without the ``driver'' are conducted with the same random number generator seed and the results are subtracted. This work is supported by DOE, NSF, and ENSC (France).
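
    The two noise-suppression strategies described above can be illustrated with a toy 1D stand-in for the PIC output (a hedged sketch; the seeds, amplitudes, and array sizes are invented for illustration and are not from the study):

    ```python
    import numpy as np

    # Toy stand-in for a driven plasma wave buried in PIC shot noise.
    t = np.linspace(0.0, 10.0, 1000)
    signal = 0.1 * np.sin(2.0 * np.pi * t)   # "driver" response, below the noise

    def one_run(seed, driver=True):
        """One simulated 'run': identical driver, seed-dependent noise."""
        noise = np.random.default_rng(seed).standard_normal(t.size)
        return (signal if driver else 0.0) + noise

    # Ensemble average over O(10) runs with different seeds: noise drops ~1/sqrt(N)
    ensemble = np.mean([one_run(seed) for seed in range(16)], axis=0)

    # Subtraction technique: same seed with and without the "driver"
    residual = one_run(42, driver=True) - one_run(42, driver=False)

    print(np.allclose(residual, signal))  # True: identical noise cancels (to rounding)
    ```

    The ensemble mean suppresses the seed-dependent noise by roughly 1/√N, while the subtraction technique cancels it outright because both runs share the same realization.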

  20. Modern average global sea-surface temperature

    USGS Publications Warehouse

    Schweitzer, Peter N.

    1993-01-01

    The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
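
    A minimal NumPy sketch of the climatological averaging described above (toy arrays standing in for the AVHRR grids; this is not the published source code):

    ```python
    import numpy as np

    # Several years of "January" SST grids, with some cells flagged invalid (NaN).
    # Averaging the same calendar month across years fills a cell whenever at
    # least one year has valid data, and suppresses interannual variability.
    rng = np.random.default_rng(0)
    years = 9
    january = 20.0 + rng.standard_normal((years, 4, 4))  # degrees C, toy values
    january[0, 1, 1] = np.nan   # missing data in two of the years
    january[3, 1, 1] = np.nan

    january_mean = np.nanmean(january, axis=0)

    print(np.isnan(january_mean).any())  # False: gaps filled by the other years
    ```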

  1. Noise reduction in elastograms using temporal stretching with multicompression averaging.

    PubMed

    Varghese, T; Ophir, J; Céspedes, I

    1996-01-01

    Elastography uses estimates of the time delay (obtained by cross-correlation) to compute strain estimates in tissue due to quasistatic compression. Because the time delay estimates do not generally occur at the sampling intervals, the location of the cross-correlation peak does not give an accurate estimate of the time delay. Sampling errors in the time-delay estimate are reduced using signal interpolation techniques to obtain subsample time-delay estimates. Distortions of the echo signals due to tissue compression introduce correlation artifacts in the elastogram. These artifacts are reduced by a combination of small compressions and temporal stretching of the postcompression signal. Random noise effects in the resulting elastograms are reduced by averaging several elastograms, obtained from successive small compressions (assuming that the errors are uncorrelated). Multicompression averaging with temporal stretching is shown to increase the signal-to-noise ratio in the elastogram by an order of magnitude, without sacrificing sensitivity, resolution or dynamic range. The strain filter concept is extended in this article to theoretically characterize the performance of multicompression averaging with temporal stretching.
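
    A toy numerical check of the averaging claim (assumed noise levels; not the authors' elastography pipeline): averaging N strain images with uncorrelated noise should raise the elastographic SNR by roughly √N.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true_strain = 0.01 * np.ones(256)        # uniform 1% strain profile
    n = 16                                   # successive small compressions
    estimates = true_strain + 0.005 * rng.standard_normal((n, 256))

    def snr(image):
        return np.mean(image) / np.std(image)

    single = np.mean([snr(e) for e in estimates])   # typical single-image SNR
    averaged = snr(estimates.mean(axis=0))          # SNR after averaging

    print(averaged / single)  # close to sqrt(16) = 4
    ```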

  2. Averaged initial Cartesian coordinates for long lifetime satellite studies

    NASA Technical Reports Server (NTRS)

    Pines, S.

    1975-01-01

    A set of initial Cartesian coordinates, which are free of ambiguities and resonance singularities, is developed to study satellite mission requirements and dispersions over long lifetimes. The method outlined herein possesses two distinct advantages over most other averaging procedures. First, the averaging is carried out numerically using Gaussian quadratures, thus avoiding tedious expansions and the resulting resonances for critical inclinations, etc. Secondly, by using the initial rectangular Cartesian coordinates, conventional, existing acceleration perturbation routines can be absorbed into the program without further modifications, thus making the method easily adaptable to the addition of new perturbation effects. The averaged nonlinear differential equations are integrated by means of a Runge Kutta method. A typical step size of several orbits permits rapid integration of long lifetime orbits in a short computing time.

  3. Stochastic averaging and sensitivity analysis for two scale reaction networks

    NASA Astrophysics Data System (ADS)

    Hashemi, Araz; Núñez, Marcel; Plecháč, Petr; Vlachos, Dionisios G.

    2016-02-01

    In the presence of multiscale dynamics in a reaction network, direct simulation methods become inefficient as they can only advance the system on the smallest scale. This work presents stochastic averaging techniques to accelerate computations for obtaining estimates of expected values and sensitivities with respect to the steady state distribution. A two-time-scale formulation is used to establish bounds on the bias induced by the averaging method. Further, this formulation provides a framework to create an accelerated "averaged" version of most single-scale sensitivity estimation methods. In particular, we propose the use of a centered ergodic likelihood ratio method for steady state estimation and show how one can adapt it to accelerated simulations of multiscale systems. Finally, we develop an adaptive "batch-means" stopping rule for determining when to terminate the micro-equilibration process.

  4. Evolution of the average avalanche shape with the universality class.

    PubMed

    Laurson, Lasse; Illa, Xavier; Santucci, Stéphane; Tore Tallakstad, Ken; Måløy, Knut Jørgen; Alava, Mikko J

    2013-01-01

    A multitude of systems ranging from the Barkhausen effect in ferromagnetic materials to plastic deformation and earthquakes respond to slow external driving by exhibiting intermittent, scale-free avalanche dynamics or crackling noise. The avalanches are power-law distributed in size, and have a typical average shape: these are the two most important signatures of avalanching systems. Here we show how the average avalanche shape evolves with the universality class of the avalanche dynamics by employing a combination of scaling theory, extensive numerical simulations and data from crack propagation experiments. It follows a simple scaling form parameterized by two numbers, the scaling exponent relating the average avalanche size to its duration and a parameter characterizing the temporal asymmetry of the avalanches. The latter reflects a broken time-reversal symmetry in the avalanche dynamics, emerging from the local nature of the interaction kernel mediating the avalanche dynamics.

  5. Evolution of the average avalanche shape with the universality class

    PubMed Central

    Laurson, Lasse; Illa, Xavier; Santucci, Stéphane; Tore Tallakstad, Ken; Måløy, Knut Jørgen; Alava, Mikko J

    2013-01-01

    A multitude of systems ranging from the Barkhausen effect in ferromagnetic materials to plastic deformation and earthquakes respond to slow external driving by exhibiting intermittent, scale-free avalanche dynamics or crackling noise. The avalanches are power-law distributed in size, and have a typical average shape: these are the two most important signatures of avalanching systems. Here we show how the average avalanche shape evolves with the universality class of the avalanche dynamics by employing a combination of scaling theory, extensive numerical simulations and data from crack propagation experiments. It follows a simple scaling form parameterized by two numbers, the scaling exponent relating the average avalanche size to its duration and a parameter characterizing the temporal asymmetry of the avalanches. The latter reflects a broken time-reversal symmetry in the avalanche dynamics, emerging from the local nature of the interaction kernel mediating the avalanche dynamics. PMID:24352571

  6. Exact Averaging of Stochastic Equations for Flow in Porous Media

    SciTech Connect

    Karasaki, Kenzi; Shvidler, Mark; Karasaki, Kenzi

    2008-03-15

It is well known that, at present, exact averaging of the equations for flow and transport in random porous media has been proposed only for limited special fields. Moreover, approximate averaging methods (for example, the convergence behavior and the accuracy of truncated perturbation series) are not well studied, and calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do exact and sufficiently general forms of averaged equations exist? Here, we present an approach for finding the general exactly averaged system of basic equations for steady flow with sources in unbounded stochastically homogeneous fields. We do this by using (1) the existence and some general properties of Green's functions for the appropriate stochastic problem, and (2) some information about the random field of conductivity. This approach enables us to find the form of the averaged equations without directly solving the stochastic equations or using the usual assumption regarding any small parameters. In the common case of a stochastically homogeneous conductivity field we present the exactly averaged new basic nonlocal equation with a unique kernel-vector. We show that in the case of some type of global symmetry (isotropy, transversal isotropy, or orthotropy), we can derive in the same way, for three-dimensional and two-dimensional flow, the exact averaged nonlocal equations with a unique kernel-tensor. When global symmetry does not exist, the nonlocal equation with a kernel-tensor involves complications and leads to an ill-posed problem.

  7. A Spectral Estimate of Average Slip in Earthquakes

    NASA Astrophysics Data System (ADS)

    Boatwright, J.; Hanks, T. C.

    2014-12-01

We demonstrate that the high-frequency acceleration spectral level a0 of an ω-square source spectrum is directly proportional to the average slip of the earthquake ∆u divided by the travel time to the station r/β and multiplied by the radiation pattern Fs: a0 = 1.37 Fs (β/r) ∆u. This simple relation is robust but depends implicitly on the assumed relation between the corner frequency and source radius, which we take from the Brune (1970, JGR) model. We use this relation to estimate average slip by fitting spectral ratios with smaller earthquakes as empirical Green's functions. For a pair of Mw = 1.8 and 1.2 earthquakes in Parkfield, we fit the spectral ratios published by Nadeau et al. (1994, BSSA) to obtain 0.39 and 0.10 cm. For the Mw = 3.9 earthquake that occurred on Oct 29, 2012, at the Pinnacles, we fit spectral ratios formed with respect to an Md = 2.4 aftershock to obtain 4.4 cm. Using the Sato and Hirasawa (1973, JPE) model instead of the Brune model increases the estimates of average slip by 75%. These estimates of average slip are factors of 5-40 (or 3-23) times less than the average slips of 3.89 cm and 23.3 cm estimated by Nadeau and Johnson (1998, BSSA) from the slip rates, average seismic moments and recurrence intervals for the two sequences to which they associate these earthquakes. The most reasonable explanation for this discrepancy is that the stress release and rupture processes of these earthquakes are strongly heterogeneous. However, the fits to the spectral ratios do not indicate that the spectral shapes are distorted in the first two octaves above the corner frequency.
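
    Rearranged, the stated proportionality a0 = 1.37 Fs (β/r) ∆u gives the average slip directly from the spectral level; a sketch with illustrative, assumed numbers (not values from the paper):

    ```python
    def average_slip(a_o, r, beta, F_s):
        """Invert a_o = 1.37 * F_s * (beta / r) * delta_u for the average slip."""
        return a_o * r / (1.37 * F_s * beta)

    # Round-trip check with assumed numbers: 10 km range, 3.5 km/s shear speed,
    # radiation-pattern coefficient 0.6, and 5 cm of slip.
    r, beta, F_s = 10_000.0, 3500.0, 0.6
    delta_u = 0.05
    a_o = 1.37 * F_s * (beta / r) * delta_u   # forward relation

    print(average_slip(a_o, r, beta, F_s))  # ~0.05, recovering the slip in metres
    ```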

  8. Experimental measurements and analytical analysis related to gas turbine heat transfer. Part 1: Time-averaged heat-flux and surface-pressure measurements on the vanes and blades of the SSME fuel-side turbine and comparison with prediction. Part 2: Phase-resolved surface-pressure and heat-flux measurements on the first blade of the SSME fuel-side turbine

    NASA Astrophysics Data System (ADS)

    1994-05-01

    Time averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row, the first stage blade row, and the second stage vane row of the Rocketdyne Space Shuttle Main Engine two-stage fuel-side turbine. Unsteady pressure envelope measurements for the first blade are also reported. These measurements were made at 10 percent, 50 percent, and 90 percent span on both the pressure and suction surfaces of the first stage components. Additional Stanton number measurements were made on the first stage blade platform blade tip, and shroud, and at 50 percent span on the second vane. A shock tube was used as a short duration source of heated and pressurized air to which the turbine was subjected. Platinum thin-film heat flux gages were used to obtain the heat flux measurements, while miniature silicon-diaphragm flush-mounted pressure transducers were used to obtain the pressure measurements. The first stage vane Stanton number distributions are compared with predictions obtained using a version of STAN5 and a quasi-3D Navier-Stokes solution. This same quasi-3D N-S code was also used to obtain predictions for the first blade and the second vane.

  9. Experimental measurements and analytical analysis related to gas turbine heat transfer. Part 1: Time-averaged heat-flux and surface-pressure measurements on the vanes and blades of the SSME fuel-side turbine and comparison with prediction. Part 2: Phase-resolved surface-pressure and heat-flux measurements on the first blade of the SSME fuel-side turbine

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Time averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row, the first stage blade row, and the second stage vane row of the Rocketdyne Space Shuttle Main Engine two-stage fuel-side turbine. Unsteady pressure envelope measurements for the first blade are also reported. These measurements were made at 10 percent, 50 percent, and 90 percent span on both the pressure and suction surfaces of the first stage components. Additional Stanton number measurements were made on the first stage blade platform blade tip, and shroud, and at 50 percent span on the second vane. A shock tube was used as a short duration source of heated and pressurized air to which the turbine was subjected. Platinum thin-film heat flux gages were used to obtain the heat flux measurements, while miniature silicon-diaphragm flush-mounted pressure transducers were used to obtain the pressure measurements. The first stage vane Stanton number distributions are compared with predictions obtained using a version of STAN5 and a quasi-3D Navier-Stokes solution. This same quasi-3D N-S code was also used to obtain predictions for the first blade and the second vane.

  10. A Green's function quantum average atom model

    DOE PAGES

    Starrett, Charles Edward

    2015-05-21

A quantum average atom model is reformulated using Green's functions. This allows integrals along the real energy axis to be deformed into the complex plane. The advantage is that sharp features such as resonances and bound states are broadened by a Lorentzian with a half-width chosen for numerical convenience. An implementation of this method therefore avoids numerically challenging resonance tracking and the search for weakly bound states, without changing the physical content or results of the model. A straightforward implementation results in up to a factor of 5 speed-up relative to an optimized orbital-based code.

  11. 40 CFR 62.15210 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method... dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use EPA Reference Method... Combustion Units Constructed on or Before August 30, 1999 Continuous Emission Monitoring § 62.15210 How do...

  12. 40 CFR 62.15210 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... dioxide emissions. If you are monitoring the percent reduction of sulfur dioxide, use EPA Reference Method... dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use EPA Reference Method... Combustion Units Constructed on or Before August 30, 1999 Continuous Emission Monitoring § 62.15210 How do...

  13. Average neutronic properties of prompt fission products

    SciTech Connect

    Foster, D.G. Jr.; Arthur, E.D.

    1982-02-01

Calculations of the average neutronic properties of the ensemble of fission products produced by fast-neutron fission of ²³⁵U and ²³⁹Pu, where the properties are determined before the first beta decay of any of the fragments, are described. For each case we approximate the ensemble by a weighted average over 10 selected nuclides, whose properties we calculate using nuclear-model parameters deduced from the systematic properties of other isotopes of the same elements as the fission fragments. The calculations were performed primarily with the COMNUC and GNASH statistical-model codes. The results, available in ENDF/B format, include cross sections, angular distributions of neutrons, and spectra of neutrons and photons, for incident-neutron energies between 10⁻⁵ eV and 20 MeV. Over most of this energy range, we find that the capture cross section of ²³⁹Pu fission fragments is systematically a factor of two to five greater than for ²³⁵U fission fragments.

  14. Removing Cardiac Artefacts in Magnetoencephalography with Resampled Moving Average Subtraction

    PubMed Central

    Ahlfors, Seppo P.; Hinrichs, Hermann

    2016-01-01

Magnetoencephalography (MEG) signals are commonly contaminated by cardiac artefacts (CAs). Principal component analysis and independent component analysis have been widely used for removing CAs, but they typically require a complex procedure for the identification of CA-related components. We propose a simple and efficient method, resampled moving average subtraction (RMAS), to remove CAs from MEG data. Based on an electrocardiogram (ECG) channel, a template for each cardiac cycle was estimated by a weighted average of epochs of MEG data over consecutive cardiac cycles, combined with a resampling technique for accurate alignment of the time waveforms. The template was subtracted from the corresponding epoch of the MEG data. The resampling reduced distortions due to asynchrony between the cardiac cycle and the MEG sampling times. The RMAS method successfully suppressed CAs while preserving both event-related responses and high-frequency (>45 Hz) components in the MEG data. PMID:27503196
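
    The epoch-average-and-subtract core of the method can be sketched as follows (a simplified illustration assuming perfectly aligned cardiac cycles; the actual RMAS additionally uses ECG-based epoching, resampling for alignment, and a weighted moving average over consecutive cycles):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    cycle_len, n_cycles = 100, 20
    artefact = np.sin(2.0 * np.pi * np.arange(cycle_len) / cycle_len)  # toy CA shape
    meg = (np.tile(artefact, n_cycles)
           + 0.1 * rng.standard_normal(cycle_len * n_cycles))          # MEG + CA

    epochs = meg.reshape(n_cycles, cycle_len)   # one epoch per cardiac cycle
    template = epochs.mean(axis=0)              # averaged cardiac template
    cleaned = (epochs - template).ravel()       # subtract template from each epoch

    print(np.std(cleaned) < np.std(meg))  # True: most artefact power is removed
    ```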

  15. Self-averaging in complex brain neuron signals

    NASA Astrophysics Data System (ADS)

    Bershadskii, A.; Dremencov, E.; Fukayama, D.; Yadid, G.

    2002-12-01

Nonlinear statistical properties of the ventral tegmental area (VTA) of the limbic brain are studied in vivo. The VTA plays a key role in the generation of pleasure and in the development of psychological drug addiction. It is shown that spiking time series of the VTA dopaminergic neurons exhibit long-range correlations with self-averaging behavior. This specific VTA phenomenon has no relation to the VTA's rewarding function. This last result reveals the complex role of the VTA in the limbic brain.

  16. Unpredictable visual changes cause temporal memory averaging.

    PubMed

    Ohyama, Junji; Watanabe, Katsumi

    2007-09-01

Various factors influence the perceived timing of visual events. Yet, little is known about the ways in which transient visual stimuli affect the estimation of the timing of other visual events. In the present study, we examined how a sudden color change of an object would influence the remembered timing of another transient event. In each trial, subjects saw a green or red disk travel in circular motion. A visual flash (white frame) occurred at random times during the motion sequence. The color of the disk changed either at random times (unpredictable condition), at a fixed time relative to the motion sequence (predictable condition), or it did not change (no-change condition). The subjects' temporal memory of the visual flash in the predictable condition was as veridical as that in the no-change condition. In the unpredictable condition, however, the flash was reported to occur closer to the timing of the color change than it actually did. Thus, an unpredictable visual change distorts the temporal memory of another visual event such that the remembered moment of the event is closer to the timing of the unpredictable visual change.

  17. Lagrangian averaging, nonlinear waves, and shock regularization

    NASA Astrophysics Data System (ADS)

    Bhat, Harish S.

In this thesis, we explore various models for the flow of a compressible fluid as well as model equations for shock formation, one of the main features of compressible fluid flows. We begin by reviewing the variational structure of compressible fluid mechanics. We derive the barotropic compressible Euler equations from a variational principle in both material and spatial frames. Writing the resulting equations of motion requires certain Lie-algebraic calculations that we carry out in detail for expository purposes. Next, we extend the derivation of the Lagrangian averaged Euler (LAE-α) equations to the case of barotropic compressible flows. The derivation in this thesis involves averaging over a tube of trajectories η^ε centered around a given Lagrangian flow η. With this tube framework, the LAE-α equations are derived by following a simple procedure: start with a given action, expand via Taylor series in terms of small-scale fluid fluctuations ξ, truncate, average, and then model those terms that are nonlinear functions of ξ. We then analyze a one-dimensional subcase of the general models derived above. We prove the existence of a large family of traveling wave solutions. Computing the dispersion relation for this model, we find it is nonlinear, implying that the equation is dispersive. We carry out numerical experiments that show that the model possesses smooth, bounded solutions that display interesting pattern formation. Finally, we examine a Hamiltonian partial differential equation (PDE) that regularizes the inviscid Burgers equation without the addition of standard viscosity. Here α is a small parameter that controls a nonlinear smoothing term that we have added to the inviscid Burgers equation. We show the existence of a large family of traveling front solutions. We analyze the initial-value problem and prove well-posedness for a certain class of initial data. We prove that in the zero-α limit, without any standard viscosity

  18. High average power diode pumped solid state lasers for CALIOPE

    SciTech Connect

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers.

  19. Asymmetric network connectivity using weighted harmonic averages

    NASA Astrophysics Data System (ADS)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.

  20. Quetelet, the average man and medical knowledge.

    PubMed

    Caponi, Sandra

    2013-01-01

Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du système social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.